3 Replies    Latest reply: Jun 20, 2013 12:03 AM by Rob Allen

    VMWare Host with CNA connecting to CS240-G

    Newbie


      Hi all and thanks for taking the time to read this post.

       

      I would like some guidance on setting up 10Gb iSCSI on our Nimbles.  We have recently installed some Nexus 5532 switches into our environment.  The Nimbles are connected to the switches via Twinax direct-attach cables, and the VMware physical hosts are connected to the Nexus via QLogic 8242 CNAs, also using Twinax.  I have been given a VLAN and IP information for the iSCSI traffic.  Each physical host has two connections to the Nexus, with a trunk carrying all of the VMware network traffic.  Do you have any general steps on the config needed to get the hosts and the Nimbles talking?  I've done a simple config test by assigning the IP information to the storage adapters in vCenter and on the Nimbles, but the VMware host does not see the iSCSI target.  My networking knowledge is lacking, but I'm assuming there may be some config required on the switches?  Any advice would be gratefully received.

        • Re: VMWare Host with CNA connecting to CS240-G
          Scout

          Rob, you mentioned a trunk for all VMware network traffic - I presume you have tagged VLANs for VM/vMotion/iSCSI traffic?  If so, make sure you specify the VLAN ID tag on the iSCSI vmkernel port group.  For the ports on the Nexus 5k that connect to the Nimble, configure them as access switchports and assign the iSCSI VLAN as the access VLAN on those ports - that should do the trick.
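          As a rough sketch of that switch-side config - assuming a hypothetical iSCSI VLAN 100 and the Nimble data ports landing on Ethernet1/10-11, so substitute your own VLAN ID and port numbers - it would look something like this on the Nexus 5k:

          ! define the iSCSI VLAN (100 is just a placeholder)
          vlan 100
            name iSCSI

          ! ports facing the Nimble 10GbE data ports - plain access ports in the iSCSI VLAN
          interface Ethernet1/10-11
            description Nimble CS240-G iSCSI
            switchport mode access
            switchport access vlan 100
            spanning-tree port type edge
            no shutdown

          On the ESXi side, since the host uplinks are trunks, the iSCSI vmkernel port group needs that same VLAN ID tag, e.g. (port group name is just an example):

          esxcli network vswitch standard portgroup set -p iSCSI-1 -v 100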

           

          Easiest way to see if this works is the following:

          - log in to the ESXi host

          - on the CLI, run the following command:

          # vmkping <nimble_interface_IP>
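
          If the host has more than one vmkernel port, it can also help to confirm which vmk carries the iSCSI network and to ping from that specific interface.  Something along these lines (vmk1/vmhba33 and the IP are just examples, and the -I option needs a reasonably recent ESXi build):

          # list vmkernel interfaces and their IPs
          esxcli network ip interface ipv4 get

          # ping the Nimble discovery IP from the iSCSI vmkernel port
          vmkping -I vmk1 192.168.100.50

          # once the ping works, add the Nimble discovery IP as a dynamic (send targets) target
          esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.100.50:3260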

           

          Feel free to open a support case with us and we'll look into your setup further.  Otherwise, you could post screenshots of your setup and I'll take a closer look.

          • Re: VMWare Host with CNA connecting to CS240-G
            Paul Munford Wayfarer

            A few conditions do differ for 10Gbps, such as cabling.  1Gbps almost always runs over the old tried-and-true CAT-5/5e/7 copper cabling with the traditional RJ-45 connector (looks like an 8-wire phone plug).

                  - 10Gbps is often fiber optic with GBICs, in which case the appropriate GBIC must be used on each end, and they must match in terms of general capabilities (you can't use a long-distance optic on one end and a short-distance optic on the other, for example).

                  - If not using fiber optic cables, it will usually be a "Twinax" cable, which is copper with a GBIC-like connector stamped onto the end.  When using Twinax, you must be careful about whether both devices support:

                            -- the particular Twinax cable;

                            -- the length of the Twinax cable (lengths usually range from 0.5m to 10m), which must be supported by both end-point devices;

                            -- active cables: there are two types of Twinax, active and passive, and cables over 5m are usually active.  Support for active Twinax is not universal, and an unsupported active cable can introduce signalling errors that cause very unreliable behavior, if it works at all.

             

            Generally speaking, however, recommendations and designs for 10Gbps are no different from those for 1Gbps.  All of the same requirements exist, including proper IP connectivity.  In general, MPIO offers greater flexibility and predictability than link aggregation (some vendors call it trunking, some call it EtherChannel, some call it LAG, etc.).
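
            As a rough illustration of the MPIO approach on the ESXi side - adapter/vmk names and the device ID below are placeholders, and with a CNA the iSCSI adapter typically shows up as a dependent hardware iSCSI vmhba rather than the software initiator:

            # bind each iSCSI vmkernel port to its iSCSI adapter (one uplink per vmk)
            esxcli iscsi networkportal add -A vmhba33 -n vmk1
            esxcli iscsi networkportal add -A vmhba34 -n vmk2

            # use round-robin path selection on the Nimble volumes
            esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

            This way each path stands on its own and failover behavior is predictable, instead of relying on the switch to hash traffic across an aggregated link.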

             

            In this particular case, it turned out that the problem arose while transitioning from a temporary 1Gbps infrastructure to a 10Gbps infrastructure.  During the transition, the VMware host had IP connectivity to the data ports but not to the iSCSI discovery interface on the Nimble array, so VMware was unable to discover the Nimble array's iSCSI configuration.
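
            A quick way to spot that kind of split is to check both ends - something like the following, where the IPs are placeholders and ip --list is run from the Nimble CLI:

            - on the Nimble array:  ip --list            (shows which interface is hosting the discovery IP)

            - on the ESXi host:     vmkping <nimble_discovery_IP>
                                    vmkping <nimble_data_IP>

            If the data IPs answer but the discovery IP does not, dynamic discovery will fail exactly as described above, even though the data path itself looks fine.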

            • Re: VMWare Host with CNA connecting to CS240-G
              Newbie

              Thanks for all the help.  As Paul mentioned, when checking on the CLI of one of the units using ip --list, it was noted that the discovery address was being hosted on interface eth2, and there was no connection across the computer rooms.  We also used the document https://connect.nimblestorage.com/docs/DOC-1242.  Problem solved.