You'll find that when you initially create thick-provisioned disks, they compress quite well.
So yes, you'll see a bit of a mismatch in used space when comparing the Datastore to the Volume. I think this will come up with any storage array that has some sort of compression or deduplication feature. Obviously you'll want to keep an eye on Datastore usage, since vSphere will alert/react once you start hitting certain thresholds.
Are you planning to keep tiering a part of the storage strategy or is the Nimble going to eliminate that for you?
I'm hoping that the flash magic will eliminate all the tiering stuff.
One new area of concern is Storage DRS and how thin-provisioned volumes don't reclaim used space. It is looking like I need to eliminate datastore clusters and SDRS unless I can automate SCSI UNMAP against all my LUNs on a regular basis.
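For what it's worth, on ESXi 5.5 and later the reclaim can be scripted with `esxcli storage vmfs unmap`. A minimal sketch of that kind of automation, shown as a dry run (the datastore names are placeholders, and the `echo` prints the commands rather than executing them; drop it on a real host):

```shell
#!/bin/sh
# Sketch: generate SCSI UNMAP commands for a list of VMFS datastores.
# Assumes ESXi 5.5+ where "esxcli storage vmfs unmap" exists.
# Datastore names below are placeholder assumptions.
DATASTORES="nimble-ds01 nimble-ds02"

for ds in $DATASTORES; do
    # -l = datastore label, -n = VMFS blocks reclaimed per pass
    # (200 is the documented default)
    echo esxcli storage vmfs unmap -l "$ds" -n 200
done
```

Something like this could be dropped into a cron job on a management host (via PowerCLI or SSH to the ESXi hosts) to keep thin volumes in check between Storage vMotions.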
Do you think there will be quite a bit of storage migration happening in the environment once everything settles into its nice new Nimble datastores? If not, then you probably won't have to worry about this too much. I was really hoping to see this addressed with the announcement of vSphere 6, but I haven't found any information relating to space reclamation. Maybe with VVols there will be better communication between vSphere and supported storage arrays?
In my typical datastore clusters, vCenter moves VMDKs around based on thresholds I set, which in turn keeps the LUNs in the cluster fairly evenly balanced in terms of consumption. So SDRS does kick in on space thresholds when datastores start to get full.
I think with Nimble I need to retreat from datastore clustering, at least until some form of automated UNMAP is part of the stack. I'm regretting spending the extra money on vSphere Ent+ again (buggy vFlash was my first regret).
First of all, welcome to the Nimble family, and thank you for your support. I am confident you will be pleased with the migration off of your legacy arrays!
The suggestion in our VMware Best Practice guide (which can be found on Infosight in the Downloads section, under Best Practices) is to match your volume provisioning in VMware and on the Nimble Array:
VMDK Format         | Space Dedicated | Zeroed Out Blocks | Nimble Provisioning
--------------------|-----------------|-------------------|-----------------------
Thin                | As Needed       | As Needed         | Default (Thin)
Zeroed Thick        | At Creation     | As Needed         | Use Volume Reservation
Eager Zeroed Thick  | At Creation     | At Creation       | Use Volume Reservation
So, if you want to use anything other than Thin, use the Volume Properties to set a Volume Reservation to match the volume size.
The BPG also adds this tip: For best performance, use eager zeroed thick VMDK format as the zero blocks are compressed on the array side, and do not take up additional disk space.
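If you're creating those eager zeroed thick disks from the ESXi shell rather than the vSphere client, `vmkfstools` can do it. A small sketch, again as a dry run (the size and the datastore/VM path are placeholder assumptions; remove the `echo` to actually run it on a host):

```shell
#!/bin/sh
# Sketch: create an eager zeroed thick VMDK with vmkfstools.
# SIZE and VMDK_PATH are placeholder assumptions.
SIZE="100G"
VMDK_PATH="/vmfs/volumes/nimble-ds01/sql01/sql01_data.vmdk"

# -c = virtual disk size, -d = disk format; eagerzeroedthick zeroes
# every block up front, and per the BPG those zeroes compress away
# on the Nimble side.
echo vmkfstools -c "$SIZE" -d eagerzeroedthick "$VMDK_PATH"
```

Pair this with a matching Volume Reservation on the Nimble volume, per the table above.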
However, I also need to ask: what are these volumes being used for? For application data volumes (such as SQL or Exchange), you might want to consider using direct iSCSI guest-attached volumes, where this becomes irrelevant. There are other considerations here, of course, but this allows you to better match the Nimble performance policy with the application data volume (for example, SQL or Exchange).
Thanks for the reply. We have a mixed workload typical of virtualized environments. We have 4 to 5 TB of SQL storage as VMDKs (separated by OS, data, and log; mostly thick provisioned) spread across 20+ SQL servers. We have Exchange 2010, which I'm told is friendly with lower-speed spindles. We've had decent low-latency performance (1-2 ms outside backup windows) on the VNX with FAST Cache fronting RAID 10 10K SAS for SQL. We've been trying to rid ourselves of direct-attached iSCSI and really don't want to go back to that scenario, especially with these smoking fast arrays.
I understand where you're coming from with the direct-attached iSCSI; however, it does mean you'll lose the additional Exchange/SQL integrated snapshot awareness if you use VMFS. You could use VMware's RDM method of storage presentation, but even reps within VMware will tell you this is not the best way to go. If you are set on using VMFS, ensure that you create dedicated VMFS volumes for your Exchange databases, Exchange logs, SQL databases, etc. That way you can attach the Exchange/SQL performance policy with the associated block size rather than the generic "VMware ESX 5". This will ensure a bit more optimisation in the SSD caching functionality and block sizes.
...of course, all of this discussion disappears with VMware's Virtual Volumes implementation, which drops this year with vSphere 6.