
CS210 NICs and VMware Config

Question asked by Andrew Rolt on Jun 3, 2015
Latest reply on Jun 4, 2015 by Andrew Rolt

I've been using a CS210 for a few months shy of a year now and love it. It was my first SAN (and our company's first), and I can't imagine using anything else now - it's fast and simple. It took me a few months to get our (smallish) environment entirely onto the Nimble, as I was learning and testing along the way.

 

Anyway, over the past few months I've been playing with different NIC configurations, trying to determine the "best" setup for performance.

I started out using the recommended layout of Eth1 and 2 for Management and Eth3 and 4 for iSCSI. With that configuration, on the VMware side each host had two vSwitches, each with one VMkernel port and one vmnic, bound to the software iSCSI adapter. On the Nimble side, while monitoring NIC performance, both NICs showed identical traffic patterns (good).
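In case it helps for comparison, this is roughly how that binding looks from the ESXi shell - vmk1/vmk2 and vmhba33 are just the names on my hosts, so substitute your own:

    # bind one VMkernel port per vSwitch/vmnic to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    # confirm both vmknics show up as bound
    esxcli iscsi networkportal list --adapter vmhba33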

 

I then tried adding a third Nimble NIC to the iSCSI network, and on the VMware side I added a third iSCSI vSwitch with a dedicated vmnic. The NIC traffic patterns on the Nimble were no longer consistent. I would have expected all three NICs to show the same traffic as before, but that wasn't happening: traffic seemed to match across two of the NICs, with the third showing something different or barely being used. I also didn't notice any improved throughput, but I didn't run any tests to confirm that; the assumption is based solely on what I was seeing in the Nimble GUI.
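For anyone who wants to check the same thing on their own hosts, this is roughly what I was using to see where the iSCSI connections actually land (the adapter name is just an example from my hosts):

    # list every iSCSI connection with its local (vmk) and remote (Nimble) addresses
    esxcli iscsi session connection list --adapter vmhba33
    # the local address on each connection shows which vmknic it is using,
    # so you can see whether connections are actually spread across all the NICs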

 

Since that last setup didn't get me the results I was after, my current config is back to two vSwitches with one vmnic each, though I've left the three NICs on the Nimble dedicated to iSCSI. There didn't seem to be any difference between running two or three vSwitches back to the Nimble.

 

I know there's a choice in the VMware iSCSI setup to assign one vmnic per vSwitch, or multiple vmnics on a single vSwitch (the Nimble docs mention either method). I haven't done any testing to see whether the other method would work better than the 1-to-1 setup I have now.
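For anyone curious, the alternative I mean (which, again, I haven't tested) is one vSwitch carrying all the iSCSI vmnics, with one port group per VMkernel port and each port group overridden to a single active uplink - roughly like this, where the port group, vmnic, and vmhba names are just examples:

    # pin each iSCSI port group to a single active uplink on the shared vSwitch
    esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-1 --active-uplinks vmnic2
    esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-2 --active-uplinks vmnic3
    # any uplink not listed should end up unused, which is what port binding needs -
    # worth double-checking in the vSphere client
    # then bind the matching vmknics to the software iSCSI adapter as before
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2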

 

I guess my questions are: am I missing something in this setup that would get better (or more balanced) NIC performance out of the Nimble, or is this a design "issue" somewhere? Will the Nimble ever balance throughput across three NICs back to VMware, or is this a fruitless experiment?

 

For reference, here are some other settings along the way:

Nimble OS 2.2.6

3x ESXi 5.5 hosts

NCM plugin on hosts

2x HP 2920 switches dedicated to iSCSI, stacked, with flow control and jumbo frames enabled

Jumbo frames are enabled on the vSwitches, vmnics, and the Nimble.
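And in case anyone wants to sanity-check the jumbo config end to end, a quick test from the ESXi shell would be something like this (the vmk name and IP are just placeholders):

    # 8972-byte payload = 9000 MTU minus IP/ICMP headers; -d sets don't-fragment
    vmkping -d -s 8972 -I vmk1 <nimble-data-ip>
    # and confirm MTU 9000 is showing on the vSwitches and VMkernel interfaces
    esxcli network vswitch standard list
    esxcli network ip interface list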

 

I'd be happy to provide other settings if anyone thinks it's necessary.
