What I do when I get into a spot like this is go back to the basics.
Are the new 10Gb NICs on the same subnet as the old 1Gb NICs?
If not, has this new subnet been added to the Nimble?
Are the IQNs for the VMware hosts in a Nimble Initiator Group?
Can the VMware and Nimble NICs talk to each other? i.e. SSH into both sides and make sure they can ping each other.
VMware iSCSI initiator properties: Is the discovery IP address still valid? Are there existing Static Discovery entries, and if so, are they still valid?
I think you get the idea . . .
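If it helps, those discovery entries can also be checked from the ESXi command line. A quick sketch, assuming vmhba64 is your software iSCSI adapter (substitute your own adapter name):
# dynamic discovery (send target) addresses
esxcli iscsi adapter discovery sendtarget list -A vmhba64
# any static discovery entries
esxcli iscsi adapter discovery statictarget list -A vmhba64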
My first thought is Initiator Groups and access; I would double check that the Initiator is a member of the appropriate Initiator Group and the Initiator Group has read/write access to the Volume.
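To grab the host IQN for that comparison, something like this on the ESXi host should work (vmhba64 is just a placeholder for your software iSCSI adapter):
# find the software iSCSI adapter name
esxcli iscsi adapter list
# show the adapter details; the "Name" field is the host IQN to match against the Initiator Group
esxcli iscsi adapter get -A vmhba64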
After that, I would check the switch port configurations -- are you using the same switch ports as before? Do you need to apply a VLAN ID to the VMkernel interface? Do the MTU settings match?
Then SSH into the ESXi host and run vmkping to validate connectivity.
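While you're in that SSH session, these should confirm the MTU and VLAN ID without clicking through the UI (standard vSwitch assumed; on a distributed switch the VLAN is set in vCenter instead):
# MTU per VMkernel interface
esxcli network ip interface list
# VLAN ID per port group on a standard vSwitch
esxcli network vswitch standard portgroup list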
vmkping is a great first step after verifying cabling and switch config. You can specify the vmk interface as well, so I would use the following to ensure the host and Nimble interfaces are on the same network (using your screenshot as a reference):
vmkping -I vmk4 <nimble iscsi interface IP>
vmkping -I vmk5 <nimble iscsi interface IP>
Make sure your VMkernel interfaces are able to ping each of the Nimble array iSCSI addresses. Once you get that far, you can increase the vmkping packet size to make sure jumbo frames are enabled from end to end.
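A sketch of that end-to-end jumbo frame check, assuming a 9000-byte MTU (8972 leaves room for the IP/ICMP headers, and -d disallows fragmentation):
vmkping -I vmk4 -d -s 8972 <nimble iscsi interface IP>
vmkping -I vmk5 -d -s 8972 <nimble iscsi interface IP>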
Let us know what you figure out!
A few things I'll add to the list:
- verify your cables are rated for 10 Gb and your GBICs are compatible (see the link-speed check sketched after this list)
- when pinging, note latency
- VLAN and switch config
- MTU/frame size (if not 1500, more validation needed)
- subnets and default router/gateway
- dynamic discovery addresses and initiator groups; verify both settings match on the Nimble and in vSphere
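For the cable/GBIC item, one way to confirm the uplinks actually negotiated 10Gb from the host side (vmnic4 is a placeholder for one of your iSCSI uplinks):
# the Speed column should show 10000 for the 10Gb uplinks
esxcli network nic list
# more detail on a single uplink, including link status and negotiated speed
esxcli network nic get -n vmnic4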
Check with the Nimble support folks too... they're quite good at troubleshooting these initial connectivity issues.
Hi all, just following up for everybody who may be facing a similar issue. We tested just about everything, from switch configuration to physical cables, connectors, etc. What we found is that after making re-configurations to the networking in our vSphere environment, things often would not work properly until the host itself was rebooted. We even brought in external expertise to verify, and we could not locate a problem with our setup or determine why it was failing, other than that a reboot of the host fixed everything. Still scratching my head at this one, but I'd recommend bouncing a host after messing with VMware networking (even when it's in maintenance mode!). Thanks for the input.