Rob, you mentioned a trunk for all VMware network traffic - I presume you have tagged VLANs for VM/vMotion/iSCSI traffic? If so, make sure you specify the VLAN ID tag on the vmkernel port. For the ports on the Nexus 5k that are connected to the Nimble, just configure them as access switchports and assign the iSCSI VLAN as the access VLAN on those ports - that should do the trick.
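On the Nexus 5k side, the port config would look roughly like this (Ethernet1/10 and VLAN 100 are just placeholders - substitute your actual port and iSCSI VLAN ID):

! access port facing a Nimble array data interface
interface Ethernet1/10
  switchport
  switchport mode access
  switchport access vlan 100
  no shutdown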
Easiest way to see if this works is the following:
-login to ESXi host
-on the CLI, run the following command:
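For instance, a vmkping sourced from the iSCSI vmkernel port to the Nimble discovery IP is a quick sanity check (vmk1 and 192.168.100.50 below are placeholders for your own iSCSI vmkernel interface and discovery IP):

# ping the discovery IP out of the iSCSI vmkernel port
vmkping -I vmk1 192.168.100.50
# if jumbo frames are in use (MTU 9000), also verify with a large, non-fragmented ping
vmkping -I vmk1 -s 8972 -d 192.168.100.50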
Feel free to open a support case with us and we'll look into your setup further. Otherwise, you could post screenshots of your setup and I'll take a closer look.
A few conditions do differ for 10Gbps, such as cabling:
- 1Gbps is almost always the old tried-and-true CAT-5/5e/7 copper cabling with the traditional RJ-45 connector (looks like an 8-wire phone plug).
- 10Gbps is often fiber optic with GBICs, in which case the appropriate GBIC must be used on each end, and they must match in terms of general capabilities (you can't use a long-distance optic on one end and a short-distance optic on the other, for example).
- If not using fiber optic cables, it will usually use a "twinax" cable, which is copper with a GBIC-like connector stamped onto each end. When using twinax, you must be careful about whether both devices support:
-- the particular twinax cable;
-- the length of the twinax cable (lengths usually range from 0.5m to 10m), which must be supported on both endpoint devices;
-- the type of twinax, active or passive. Cables over 5m are usually active, and support for active twinax is not universal. An unsupported active twinax cable can introduce signalling errors that cause very unreliable behavior, if it works at all.
Generally speaking, however, recommendations and designs for 10Gbps are no different from recommendations for 1Gbps. All of the same requirements exist, including proper IP connectivity. In general, MPIO offers greater flexibility and predictability than link aggregation (some vendors call it trunking, some call it EtherChannel, some call it a LAG, etc.).
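If you go the MPIO route on ESXi, the general shape is one vmkernel port per physical uplink, each bound to the software iSCSI adapter - roughly as follows (vmhba64, vmk1 and vmk2 are placeholders for your own adapter and vmkernel ports):

# bind each iSCSI vmkernel port to the software iSCSI adapter for port binding/MPIO
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
# confirm the bindings took effect
esxcli iscsi networkportal list --adapter=vmhba64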
In this particular case, it was discovered that the problem arose when trying to transition from a temporary 1Gbps infrastructure to a 10Gbps infrastructure. During this transition, the VMware host had IP connectivity to the data ports but not to the iSCSI discovery interface on the Nimble array. This caused VMware to be unable to discover the Nimble array's iSCSI configuration.
Thanks for all the help. As Paul mentioned, when checking on the CLI of one of the units using ip --list, it was noted that the discovery address was being hosted on interface eth2, and there was no connection across the computer rooms. Also used the document https://connect.nimblestorage.com/docs/DOC-1242. Problem solved.