Julez - I'd tend to look at the adapters in Performance Monitor and see that the I/O going down each channel is similar and not unbalanced. If you're just deploying on a greenfield site you can do that on the array in the Interfaces tab, or if you're adding to an existing workload then you're probably best to verify at the host.
Right, but I'm looking at it more from a configuration standpoint: how can I verify that both paths on the OS are configured properly, via PowerShell or whatever?
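For anyone landing on this thread later, a rough starting point on Windows Server 2012+ would be the built-in iSCSI and MPIO tooling. This is a hedged sketch, not a Nimble-specific check - the target names and path counts you expect depend on your own layout:

```powershell
# Count active iSCSI sessions per target - you'd expect one per
# configured path (e.g. 2 host NICs x 2 array interfaces = 4).
Get-IscsiSession | Group-Object TargetNodeAddress | Select-Object Name, Count

# Show which initiator/target address pair each connection is using,
# to confirm traffic is actually spread across both host NICs.
Get-IscsiConnection | Select-Object InitiatorAddress, TargetAddress

# Legacy MPIO CLI: 'mpclaim -s -d' lists all MPIO-claimed disks and
# their load-balance policy; 'mpclaim -s -d <n>' shows the individual
# paths and their state for one disk.
mpclaim -s -d
```

None of this proves the policy is *correct* for your array, only that the expected number of paths exist and a sane policy (e.g. Round Robin or Least Queue Depth) is applied per disk.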
The problem we're seeing actually occurs when a snapshot verification is running. We'll have a single 1Gb NIC 100% saturated while the other is pushing maybe 20-40Mbps (which isn't exactly idle).
So I was looking for a method to actually see proper configuration rather than just balanced charts.
I actually have a ticket open with Nimble on this as well, but I sometimes like to engage the community too.
I see - thanks for clarifying. I'm not aware of a way beyond trying to script a check with PowerShell.
It would be really nice for NCM for Windows to report on the path and policy settings so that they can be verified without getting into the guts of the iSCSI initiator and checking each device separately for the correct number of paths and the policy type.
While I don't know of a way to verify that MPIO specifically has been configured correctly on the hosts, I would always look at the connections page under Monitor -> Connections, hover the mouse over the number of connections, and check that the correct paths have been established from host NICs to array interfaces as expected. If someone does know how to verify the correct MPIO connectivity, there is a good chance they will post it on here.
What I would also look at in your specific case is the workload that is running on the hosts at the time you see this type of behaviour. It may be that the running process is single-threaded and so will only use one path by default. In that case, even with MPIO configured correctly, it will never use all available paths.
A good test of MPIO I use in the field is to run a simple SQLIO throughput test; if both adapters are saturated equally, we know MPIO is configured correctly.
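A run along these lines could look like the following. This is a hedged sketch - the drive letter and file name are placeholders for a test file on the Nimble volume, and SQLIO has since been deprecated in favour of DISKSPD, so check `sqlio.exe /?` for the exact flags on your copy:

```powershell
# 64KB random reads, 4 threads, 8 outstanding I/Os, 30 seconds,
# unbuffered (-BN) so the OS cache doesn't mask the network paths.
# E:\testfile.dat is an assumed path on the iSCSI volume under test.
.\sqlio.exe -kR -t4 -s30 -o8 -b64 -frandom -BN E:\testfile.dat
```

While it runs, watch both NICs in Performance Monitor: with a working round-robin MPIO policy the throughput should split roughly evenly rather than pinning one adapter.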
Let us know how you get on!
The strange thing, though, is that without actually ticking the iSCSI support checkbox under the MPIO configuration, it still shows the correct number of paths. So my question is: does that actually need to be checked, or does the Nimble Windows Toolkit do all of that for you after the MPIO feature is added?
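For what it's worth, that GUI checkbox ("Add support for iSCSI devices") corresponds to the MSDSM automatic-claim setting, which can be read and set from PowerShell. A hedged sketch, assuming the in-box Microsoft DSM (a vendor toolkit like Nimble's may manage claiming itself, which would explain paths appearing without the box ticked):

```powershell
# Shows whether the Microsoft DSM automatically claims iSCSI (and SAS)
# devices; returns a hashtable like @{SAS=False; iSCSI=True}.
Get-MSDSMAutomaticClaimSettings

# The PowerShell equivalent of ticking the checkbox (requires a reboot
# to take effect on already-presented disks):
# Enable-MSDSMAutomaticClaim -BusType iSCSI
```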
Also, a side note as to why I brought all this up: after speaking with Nimble support, it turns out the snapshot verification process does not leverage MPIO. So if you have a 1Gb array it'll only use 1 of those 4 NICs on the array, regardless of the number of MPIO connections on the Microsoft guest OS. That's another reason why I'm not a huge fan of using the graphs to confirm correct configuration, and why I'd rather have some PowerShell or MPIO command to verify my settings are correct on the Windows guest. Waiting to see if they have a solution to this that I could use myself.
So the end response was that I could run NimbleDiag without sending the data to support, then look in the C:\Users\%username%\AppData\Local\Temp\#\ location at the iSCSICLI log files.
I could also SSH to the array and use
vol --info <volume_name>
The output includes a "Number of connections:" section; you can use these fields to verify the number of connections to the volume as well as the host and data IP addresses connected.
So the two takeaways from this whole thread, at least for me, are:
1. NimbleDiag and its logs show a lot of good info.
2. Nimble snapshot VSS verifications can't leverage MPIO.