2 Replies Latest reply: Mar 12, 2014 8:06 PM by Ben Loveday

    Quad port MPIO performance with vSphere

    Ben Loveday Wayfarer

      Hi all,

       

      I couldn't seem to find much around this so hopefully someone out there can help me.

       

      I am trying to find out whether I will get a higher aggregated throughput between my ESX hosts and Nimble storage if I use four 1GbE nic ports on each and configure all four ports in ESX for MPIO (instead of two).

       

      I understand that with iSCSI in vSphere a single iSCSI connection can't use more than one path at a time, but with round robin path selection across multiple connections, has anyone seen higher overall performance in this configuration?

       

      It doesn't seem to be a very common configuration, as a lot of people just move to 2 x 10GbE, but for this particular use case I would struggle to justify the extra cost.
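For reference, a four-port software-iSCSI MPIO setup on ESXi is usually a matter of binding each VMkernel port to the software iSCSI adapter and switching the device to Round Robin. A rough sketch below; the adapter name (vmhba33), VMkernel ports (vmk1-vmk4), device ID (naa.xxxxxxxx) and the IOPS value are placeholders or commonly used settings, not Nimble's official recommendation - check the vendor's best-practice guide for your version.

```shell
# Bind each iSCSI VMkernel port (one per physical 1GbE NIC) to the
# software iSCSI adapter -- names here are placeholders.
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk4

# Set Round Robin path selection on the Nimble volume
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR

# Optionally switch paths more often than the default 1000 IOs per path
# (--iops 1 is a commonly used value; confirm against the vendor guide)
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxxxxxx --type iops --iops 1
```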

       

      Any help would be much appreciated!

       

      Cheers,

      Ben

        • Re: Quad port MPIO performance with vSphere
          Eddie Tang Adventurer

          Hi Ben,

           

          You will see higher aggregated throughput between your ESX hosts and Nimble if you dedicate additional NICs to iSCSI, AND you are throughput-bound on your existing 2 x 1GbE connections, AND the Nimble array has at least 4 x 1Gbps iSCSI ports.  You can check in vCenter whether the vmnics used for iSCSI are saturated.

           

          Throughput = IOPS x block size.  E.g. 10,000 IOPS x 8KB block = ~80MB/s.

          Unless the VMs on the host have either a high-IOPS or large sequential workload, you may not be saturating the 2 x 1GbE links, in which case adding additional host NIC ports for iSCSI will provide no additional performance.
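To put numbers on the math above: a quick back-of-envelope check, assuming roughly 115 MB/s of usable payload per 1GbE link (the workload figures are illustrative, not measured):

```shell
# Throughput = IOPS x block size, compared against the per-link ceilings.
IOPS=10000
BLOCK_KB=8
WORKLOAD_MBS=$((IOPS * BLOCK_KB / 1024))   # 10,000 IOPS x 8KB ~= 78 MB/s
CEILING_2_LINKS=$((2 * 115))               # ~230 MB/s across 2 x 1GbE
CEILING_4_LINKS=$((4 * 115))               # ~460 MB/s across 4 x 1GbE
echo "workload ${WORKLOAD_MBS} MB/s, 2-link ceiling ${CEILING_2_LINKS} MB/s, 4-link ceiling ${CEILING_4_LINKS} MB/s"
```

With a workload like that, two links are nowhere near saturated, which is exactly the case where two extra ports buy you nothing.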

           

          Hope this helps.

           

          -Eddie