I too would use MPIO rather than MCS. You absolutely must make sure that you are multipathing correctly (using all paths). Your switching setup will determine the number of paths.
The Nimble monitoring pages (Monitor > Interfaces in particular, if you can distinguish between normal activity and normal-plus-SQL activity) can help you from the array side, and good ol' Task Manager can of course give you a steer.
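From the Windows side, a quick way to confirm that all paths are actually present and claimed by MPIO is the built-in mpclaim and iscsicli tools (a sketch from memory for 2008 R2-era initiators; verify the syntax against your own environment):

```shell
:: List all MPIO-claimed disks and their current load-balance policies
mpclaim -s -d

:: Show the individual paths, and their states, for disk 0
mpclaim -s -d 0

:: List active iSCSI sessions to confirm one session per NIC/portal IP
iscsicli SessionList
```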
This is for our SQL servers specifically, but it may help you with your file servers if bandwidth and performance are critical. We have three 10Gb NIC ports (spread across different NIC cards) on each of our SQL cluster nodes, going to Arista switches. The connections themselves are split over two Arista switches for additional redundancy, and one 10Gb port on each host goes to a different Arista for general network connectivity.
Each Nimble volume we present to our two-node SQL cluster has one path per portal/NIC IP, so three paths per volume per host, and six paths per volume across both nodes.
Once you set up MPIO ... at least in our setup, MC/S is also set up for each discovery portal session/NIC IP (see screenshot).
Here is what I configured: under MPIO, make sure you have multiple paths and switch to Least Queue Depth as your load-balance policy, and under MC/S I would switch from Round Robin to Least Queue Depth as well.
Here are some screenshots: three paths/sessions per volume. The second shot is the Devices tab; you should see three entries, one for each session. The third shot is the Devices tab / MPIO settings. Least Queue Depth should yield the best "load balanced" policy. I also changed the MCS portal sessions to Least Queue Depth (under the MCS settings on the Sessions tab, first screenshot).
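If you'd rather script the MPIO policy change than click through the GUI, mpclaim can set Least Queue Depth (policy number 4). This is a sketch, and as far as I know the MC/S session policy still has to be changed in the initiator GUI:

```shell
:: Set Least Queue Depth (policy 4) as the default for all MPIO-claimed LUNs
mpclaim -l -m 4

:: Or set it on a single disk (disk 0 here)
mpclaim -l -d 0 4

:: Verify the policy took effect
mpclaim -s -d
```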
Hope this rambling helps in some way. If you have any questions let me know.
FYI, for SQL on Nimble we are seeing anywhere between 24,000 and 50,000 IOPS for sequential writes, compared to our old HP EVA SAN, which yielded about 2,000 - 9,000 IOPS. This setup may be a little overkill for most, but on our OLTP system (AX) redundancy and performance are absolutely critical.
I have just worked my way through this process, moving from a single iSCSI connection to an MPIO environment. To be helpful, I wrote the process up on my blog (including references to the Nimble support articles) here: http://www.talking-it.com/2013/storage/nimble/windows-2008-r2-enhancing-the-nimble-iscsi-connection/
Hope this helps someone.
This MPIO vs. MCS issue is really confusing me. What I *thought* I understood is that MPIO and MCS both perform basically the same function, just in different ways. Then my thinking morphed into viewing MCS as equivalent to NIC teaming/port bonding (aggregation to increase throughput), while MPIO is intended for path redundancy, but without the port/bandwidth aggregation for improved throughput. To me, it seems logical to want the benefits of both: maximized throughput via MCS and redundancy via MPIO. But the more I read, the less clear I am, because different sources give inconsistent information.
Some resources imply that MPIO and MCS are mutually exclusive and can't/don't need to be used simultaneously, while others imply they can be used together. Examples:
I also opened a case with Nimble support (00147105) and they too verified that I don't need to make changes to MCS because it's not even used/supported by Nimble. Since the TechNet article above stated "in order to use MCS, your storage device must support its protocol", I'd assumed MCS was not needed.
Then a different Nimble tech confused things when he told me "our best practices recommend using MPIO over MCS...it appears that we do not support MCS at the moment", which seems even MORE confusing because it both says to configure MPIO "over MCS", and then says they don't even support MCS. So why is it a 'best practice' to configure something that is not even supported by the vendor?
However, John posted above to make setting changes to *both* MPIO and MCS. If only MPIO is used, and MPIO is what Nimble recommends, then what is the point of wasting time making changes to MCS settings? Someone marked his post as helpful so that must mean that he is on the right track by configuring *both* MPIO and MCS, right?
Also, page 49 of the Nimble WIT setup doc says to set LQD for MPIO, but it says nothing about MCS, which also defaults to RR. Should that be changed to LQD too, as John indicated above? Because I cannot find any Nimble documentation on MCS, and their own support techs are giving me inconsistent answers, I'm hoping that someone here may know. It seems to me that MCS can be safely ignored, but this thread has only added to my lack of clarity on the issue.

OTOH, if I have paid for multiple 10Gb links, it'd be really great if I could make *full* use of them by 'bonding' them into an aggregate 20Gb link, while also having failover back to 10Gb if one were to fail. That's how NIC port teaming works, and I'd assumed the same was possible with iSCSI, but so far I have yet to find any clear explanation on this matter.
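For what it's worth, my current (unverified) understanding is that MPIO multipaths at the session level, while MCS multiplies connections inside a single session. If that's right, the initiator's own session listing should show which mechanism is actually in play:

```shell
:: iscsicli lists each session with its connections: several sessions with one
:: connection each suggests MPIO-style multipathing, while a single session
:: carrying multiple connections would be MC/S.
iscsicli SessionList
```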
Thank you in advance to anyone who can clear this up.