This document is a VMware KB article that walks step by step through setting up the software iSCSI initiator and iSCSI port binding for MPIO.
I so wish this would be on the KB site.... would have saved us a ton of time and heartache.
Thanks for your note. I've forwarded that to the guy on the Support team who's responsible for the KB articles.
I recently ran across the following VMware KB article which instructs VMware customers not to configure iSCSI Port Binding when using multiple iSCSI subnets. Apparently when using Port Binding with multiple subnets, ESXi can have issues with rescans, multipathing, and accessing storage. I know that Nimble supports both "single subnet" and "multiple subnet" deployment scenarios but I just thought I'd bring this to everyone's attention, as it was news to me:
Considerations for using software iSCSI port binding in ESX/ESXi (2038869)
To add to this, the reason ESXi can have the issues you describe when using iSCSI port binding with multiple subnets is that the software iSCSI initiator tries to log into every target from every bound VMkernel port. In other words, VMkernel port A on iSCSI subnet A will try to log into the Nimble iSCSI target IP on iSCSI subnet B, and vice versa. If your iSCSI subnets are "closed" and non-routable, as they should be, those cross-subnet login attempts will time out and fail; unless you have inter-VLAN routing configured between your iSCSI VLANs, the initiator simply cannot reach targets in the other VLAN.
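You can see this behavior for yourself from the ESXi shell. A minimal sketch, assuming a software iSCSI adapter named vmhba33, a vmk1 on subnet A (192.168.10.0/24), and target portals at 192.168.10.50 and 192.168.20.50 (all of these names and addresses are placeholders; substitute your own):

```shell
# List the VMkernel ports currently bound to the software iSCSI adapter.
# Find your adapter name first with: esxcli iscsi adapter list
esxcli iscsi networkportal list --adapter=vmhba33

# Test reachability of each target portal from a specific bound VMkernel
# port. vmkping -I forces the outgoing interface, so this mimics what the
# initiator does during login.
vmkping -I vmk1 192.168.10.50   # vmk1 -> target on its own subnet: replies
vmkping -I vmk1 192.168.20.50   # vmk1 -> target on the other, non-routable subnet: times out
```

The second vmkping failing (while the first succeeds) is exactly the cross-subnet login attempt that stalls rescans when port binding is combined with multiple subnets.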
There is a lot of contradictory information on the interwebs between the big 3 vendors in this situation: Nimble, VMware, and Cisco.
Let's put it simply: if you're using multiple subnets for iSCSI traffic, don't use iSCSI port binding.
If you're using a single subnet for iSCSI traffic, definitely use iSCSI port binding.
Don't use multiple subnets for iSCSI traffic in vSphere environments. iSCSI port binding gives you the proper load balancing and failover you need.