I think ultimately the success or failure of this requirement will come down to the decision on what to use for the host-side SMB/NFS shares, i.e. Linux shares or something like Windows 2012 R2. The latter is pretty darn good as a file server these days, which I never thought I'd say!
The ultimate answer here is to test it out; I'm sure your local Nimble SE would be only too happy to help!
Actually, I was testing Nimble against another storage vendor (a three-letter name...) for comparison.
Say I run a script that creates several thousand files, 1 KB each.
Nimble took 5 minutes to complete, while the other took only around 3 minutes.
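For reference, the small-file test described above can be sketched roughly like this. The file count, directory name, and write pattern are assumptions; the original script may differ:

```python
# Rough sketch of the small-file benchmark: create thousands of 1 KB
# files and time the run. FILE_COUNT and TARGET_DIR are assumed values.
import os
import time

TARGET_DIR = "bench_files"   # hypothetical test directory on the share
FILE_COUNT = 5000            # "several thousand" -- assumed value
FILE_SIZE = 1024             # 1 KB per file

os.makedirs(TARGET_DIR, exist_ok=True)
payload = b"\0" * FILE_SIZE

start = time.time()
for i in range(FILE_COUNT):
    with open(os.path.join(TARGET_DIR, f"file_{i:05d}.bin"), "wb") as f:
        f.write(payload)
elapsed = time.time() - start
print(f"Wrote {FILE_COUNT} files of {FILE_SIZE} B in {elapsed:.1f} s")
```

A run dominated by many tiny metadata-heavy writes like this stresses small-block latency rather than raw throughput, which matters for the comparison below.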
I tried configuring Nimble with different settings but couldn't get an equal or better result.
The other storage is older, so I expected it to be the one left behind in this testing.
By the way, both arrays were connected to an ESX host with a Windows 2008 R2 guest VM.
So I am really wondering what is going wrong.
It's a CS220, version 1.4.11. Throughput was averaging 1.4 MB/s random write and 0.04 MB/s sequential write; IOPS were 285 random write and 0.26 sequential write. I'm only giving the write figures because Nimble seems very good on the read side.
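As a quick sanity check, the random-write figures above are internally consistent: 1.4 MB/s at 285 IOPS implies an average I/O of roughly 5 KB, which matches a small-block workload like the 1 KB file-creation test:

```python
# Sanity check: do the reported random-write numbers line up?
# 1.4 MB/s at 285 IOPS implies roughly a 5 KB average I/O size.
throughput_mb_s = 1.4
iops = 285
avg_io_kb = throughput_mb_s * 1024 / iops
print(f"Implied average I/O size: {avg_io_kb:.1f} KB")  # ~5.0 KB
```

So the throughput and IOPS numbers describe the same bottleneck: small I/Os, not bandwidth.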
For the setup: I believe NCM is only available from Nimble OS version 2, so there's no NCM here. The Nimble-to-ESX connection is 10Gb. By the way, I forgot to mention that we use test clients to simulate an environment similar to our company's. Currently, we don't have AV installed yet.
Clients<--100Mb Hub--1Gb Switch-->VM server(Windows File Share)<--10Gb-->Nimble Storage.
This layout is the same as the other storage brand's setup, except that it connects over 4Gb FC.
We also performed benchmarking on the server side, where the Nimble volume is directly presented.
Results showed that Nimble is way better at large block sizes, but not at small ones (4K).
I tried using Nimble volumes as an ESX datastore, as an in-guest iSCSI target, and via RDM.
For each approach, I applied different performance policies to figure out which combination would give results comparable to the other storage.
I have been seeing a lot of impressive reviews of Nimble, and I believe them.
However, it's my bosses I need to convince that Nimble is better than our current storage.
This is why I need to figure out where the problem is.
(Quite long; thanks for your patience reading this.)
Thanks for the detailed environment analysis. If you have VMware Enterprise licensing or above, it's worthwhile installing our Nimble Connection Manager (a VMware PSP), which will enhance multipathing and path management; it can be found on InfoSight. Also, if this is a fresh, new install, it's worth looking at NOS 2.1, as it provides enhancements over 1.4.11.
Are you presenting the storage volumes to the file share via in-guest iSCSI, or via VMware's VMFS?
Do you have a support case open with Nimble to track this issue? If so, please post the case number. If not, please do so!
Another test would be to load IOMeter and emulate the same test in that tool to see if you can recreate the problem, which may give us some more clues as to where to look next.
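If IOMeter isn't immediately to hand, a crude stand-in for a 4K random-write pattern can be scripted. This is only a rough sketch (the file path, working-set size, and write count are assumptions), and a proper tool like IOMeter should still be used for the real comparison:

```python
# Crude 4K random-write test as a rough stand-in for an IOMeter run.
# TEST_FILE, FILE_SIZE, and WRITES are assumed values; fsync after each
# write forces the I/O to storage so caching doesn't hide latency.
import os
import random
import time

TEST_FILE = "iotest.bin"      # hypothetical file on the tested volume
FILE_SIZE = 64 * 1024 * 1024  # 64 MB working set (assumed)
BLOCK = 4096                  # 4 KB blocks, matching the small-block case
WRITES = 1000

with open(TEST_FILE, "wb") as f:
    f.truncate(FILE_SIZE)     # pre-size the file

block = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "r+b") as f:
    for _ in range(WRITES):
        f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
        f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force each write through to the array
elapsed = time.time() - start
print(f"{WRITES} x 4K random writes: {WRITES / elapsed:.0f} IOPS")
```

Running this both against the Nimble volume and the other array's volume, from the same guest, would show whether the gap reproduces outside the file-copy workload.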