Justin - do you have two VMkernel ports defined for iSCSI, and have you bound them together in the software iSCSI configuration? Additionally, are all volumes configured with the PSP_RR (round robin) policy? Please do contact our support team so we can take a look at the environment holistically and get to the bottom of this. Check out the following post as well for a quick checklist of VMware + Nimble best practices (the only minor correction in that post is that you only need to set iops=0 for the PSP_RR policy, not both iops and bytes).
Ajay, it's a clone, so no, it should be going as quickly as possible.
The VMkernel is part of a distributed vSwitch (dvSwitch).
There are two port groups, ISCSI-A and ISCSI-B, that point to two different NICs (vmnic2 and vmnic3).
Each of those has a VMkernel port (vmk1 and vmk2, respectively).
Those ports are bound to an iSCSI adapter (vmhba32 in this case); the path status is Active for both, and Static Discovery shows that there are 4 paths.
When I look at the datastore and do Manage Paths, I see that the path selection is set to RR and that it has the 4 paths listed with a status of Active (I/O) for each.
To throw one more wrench into the mix, we are using an HP C7000 chassis, which has Virtual Connect, so vmnic2 and vmnic3 are actually defined at 5Gbps. The Virtual Connect has 4 physical 10Gbps connections, two on each module, making vmnic2 a path onto the 2x10Gbps connections but allowed to take only 5Gbps.
Given all of that, I still feel that 160 MB/s seems slow. I was working with the network guy, who is going to try to set up some more monitoring, but he said he was only seeing traffic from one port at 600 Mbps.
Still need to map this out a bit more to see what's going on.
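One thing that trips people up with these numbers: the switch-side monitoring reports megabits per second (Mbps) while the storage side reports megabytes per second (MB/s). A quick sanity-check sketch in Python (rough math only; real iSCSI throughput lands below these ceilings because of TCP/IP and iSCSI protocol overhead):

```python
# Rough unit math for the numbers in this thread. Ceilings ignore
# protocol overhead, so treat them as upper bounds, not targets.

def mbps_to_MBps(megabits_per_sec):
    """Convert megabits/s (network tools) to megabytes/s (storage tools)."""
    return megabits_per_sec / 8.0

def link_max_MBps(gigabits_per_sec):
    """Theoretical ceiling of a link, in MB/s, before overhead."""
    return gigabits_per_sec * 1000 / 8.0

# One port showing 600 Mbps on the switch side:
print(mbps_to_MBps(600))   # 75.0 MB/s

# Ceiling of one 5 Gbps Virtual Connect allocation:
print(link_max_MBps(5))    # 625.0 MB/s
```

So traffic on only one port at 600 Mbps is roughly 75 MB/s, well under the ~625 MB/s raw ceiling of a single 5Gbps allocation, which is consistent with the feeling that one path was doing most of the work.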
This shows how it's set up, with the assumption of iSCSI presented to the VMs (not currently in use). The actual switch side of it is a tad more complex, in the sense that there are two 4500s in a VSS config with cables cross-connecting...
So I worked with Bryce LeBlanc this morning and we found that the VM paths were not fully configured for round robin (I had checked this once upon a time, so I guess I overlooked something). We sorted that out and found that we were then able to utilize both paths. At this point I'm still getting only 160 MB/s each way (320 MB/s total), but based on some test results provided to me by Ben Hass, we found that my dual 5Gb links (we have an HP C7000 with Virtual Connect) are being reasonably utilized.
Here are the performance numbers provided:
100% Sequential Write – 256KB block size, Queue Depth 16: 630 MB/s
100% Sequential Read – 256KB block size, Queue Depth 16: 1,150 MB/s
Doing some research, I've found that the majority of folks are seeing around 400 MB/s over 10Gb, so the 320 MB/s I'm seeing is pretty decent.