So what size datastores do you guys use with VMware, and what version of VMware?
Just curious what other folks are doing. I'm running 5.1 and I tend to stay around 2 TB, depending on the use.
Hi Justin, we create VM O/S datastores for the operating system and VM Data datastores for data. We try to keep each datastore limited to 15 VMs, so the VM O/S datastores are set to 500 GB and the VM Data datastores are set to 1 TB. Currently our Nimble array is not truly live yet, and most of our VMs are still on the NetApp storage until we get our CommVault Simpana suite in. Windows file servers, SQL, and Exchange will use in-guest iSCSI-attached volumes for the log and DB volumes. All thin provisioned.
I get this question a lot when working with new customers. I counsel them to focus on grouping applications into datastores that match their snapshot protection and replication schedules. The quantity and size of the VMs for each application then define the datastore size required. This is a better plan than just putting as many VMs in a datastore as will fit.
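To make that concrete, here's a back-of-the-envelope sketch in Python; the per-VM sizes and headroom factor are made up for illustration. The point is just that the application group drives the datastore size, not the other way around:

```python
# Size the datastore from the application's VMs, not from how many VMs happen to fit.
# The per-VM sizes and headroom factor below are hypothetical.
vm_sizes_gb = [60, 60, 80, 100]   # provisioned size of each VM in one application group
headroom = 0.25                   # assumed overhead for snapshots, swap, and growth

datastore_gb = sum(vm_sizes_gb) * (1 + headroom)
print(f"{len(vm_sizes_gb)} VMs -> ~{datastore_gb:.0f} GB datastore for this app group")
```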
I actually ran into a problem doing that: snapshots were failing because VMware was unable to quiesce in a timely manner. There are about 21 VMs on a 2 TB volume, which make up a development environment for us. I haven't had time to get back to working on that, but since we are a SaaS-based company it's harder to look at things per application/usage. I think that makes more sense for things like Exchange or other corporate stuff. In production we'd tend to protect them all the same.
Currently I'm running 2 TB datastores and trying to balance the number of VMs with the expected I/O requirements of the systems.
Another piece of advice is to only quiesce volumes that are running SQL or Exchange workloads, and to isolate those applications onto their own datastores. They are really the only applications that benefit from application consistency. All other applications typically recover without any issue from crash-consistent snapshots.
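If you script your snapshots, the difference comes down to a single flag. Here's a minimal pyVmomi sketch (the vCenter hostname, credentials, and VM name are placeholders) that sets quiesce only for the guests that need application consistency:

```python
# Snapshot one VM; quiesce (VSS) only when the guest runs SQL/Exchange.
# vcenter.example.com, the credentials, and "sql01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name via a container view
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")
view.Destroy()

needs_app_consistency = True  # True for SQL/Exchange guests, False for everything else
vm.CreateSnapshot_Task(name="nightly",
                       description="scripted snapshot",
                       memory=False,
                       quiesce=needs_app_consistency)  # VSS quiesce only where it pays off
Disconnect(si)
```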
That's a pretty good idea; it will definitely help with the timeout issues. I'll give that a try whenever I get time to focus on that project again.
Just set up my Nimble last week and have been Storage vMotioning like crazy. I initially used the Nimble-recommended "5-10 VMs per datastore" for VM snapshotting. Everyone else I talked to said this is a crazy low recommendation. I usually create 500 GB datastores and put my VM O/Ss on them; if a VM requires a larger data drive, I usually create its own datastore for that. Not sure if this is best practice or not. Yesterday I created a 2 TB datastore and started moving some of the smaller VMs to it; well, now I'm up to 15 or so VMs and just saw your comment that you only need to quiesce the VMs if they're SQL or Exchange. Interesting. So for non-SQL/Exchange datastores, you recommend just setting synchronization to "none" instead of "VMware vCenter"?
Learning as I go here but really having fun with the Nimble so far.
That's correct, you can just set the volume collection synchronization to none. The snapshots can still be used for clones and to recover data easily.
My lab is VMware 5.1 on Nimble, and I run a mix of 1 and 2 TB volumes.
500 GB or 2 TB is what most customers use. We work on no more than 10-15 VMs per datastore if you are doing snapshots using VSS.
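If you want to audit that guideline in an existing environment, a quick pyVmomi sketch like this counts VMs per datastore (the vCenter hostname and credentials are placeholders):

```python
# Count VMs per datastore with pyVmomi (pip install pyvmomi).
# vcenter.example.com and the credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in sorted(view.view, key=lambda d: d.name):
    print(f"{ds.name}: {len(ds.vm)} VMs")  # flag anything over the 10-15 guideline
view.Destroy()
Disconnect(si)
```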
What I'm doing now is placing VMs in datastores according to their function and IOPS.
With Nimble, that technically shouldn't matter.
We have moved away from a few big datastores to single-VM datastores. We did this to maximize flexibility in replication schedules and in restoring a particular VM or VMDK. We size each datastore appropriately for the particular VM and VMDK combination.
This is what we do as well, based on http://www.nimblestorage.com/docs/downloads/Nimble-Storage-Architecting_Storage_in_Virtualized_Environments.pdf. It may or may not be manageable depending on your environment (number of VMs, etc.).
Well, that can get scary if you are using VMware, since there is a limit of 256 volumes.
I had actually considered doing this, creating a single VM and doing a zero-copy clone to generate new ones, but my VM environment has over 256 servers, so that wouldn't work out well for me.
I'm actually surprised how some of you are handling this, since the I/O issue isn't really an issue with Nimble. Because you don't manage the actual disks, performance is (or should be) the same whether you use one volume or 100 volumes. The only real reason to split is the performance policy, but per another conversation you'd stick with the VMware policy unless you do a raw device mapping.
The only reason I stick with 2 TB is that, from what I've heard, anything larger can make SRM angry. I don't currently use it, but I'd like to stick to a configuration that could easily accommodate SRM.
Thanks for all the input, guys!
One way to get around the 256-volume limit is to group a number of alike VMs into a datastore, then do the zero-copy clone, so it's more than a 1-to-1 mapping. If you only need a subset of the VMs from the source datastore, you could simply not register them, or remove them from your script, followed by a VMFS UNMAP to reclaim the space.
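As a rough sketch of the "remove the ones you don't need" step, something like this pyVmomi snippet works (the datastore name and keep list are made up); you'd still run the VMFS UNMAP from an ESXi host afterward to reclaim the space:

```python
# After mounting a zero-copy clone, unregister the VMs you don't need.
# The datastore name and keep list below are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
clone_ds = next(d for d in view.view if d.name == "dev-clone-01")
view.Destroy()

keep = {"web01", "app01"}  # the subset of cloned VMs you actually want
for vm in clone_ds.vm:
    if vm.name not in keep:
        vm.UnregisterVM()  # removes from inventory; files remain until deleted/UNMAPped
Disconnect(si)
```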
One comment on separating volumes based on I/O characteristics: one consideration is DB vs. transaction log volumes. It is indeed a best practice to separate those, as the DB volume benefits from cache turned on, whereas the transaction log volume causes unnecessary cache churn. It is best to separate the two, leaving the DB volume with cache enabled and the transaction log volume with cache disabled.
Last but not least, an interesting comment on SRM getting mad at volumes > 2 TB. I personally have not seen issues in this regard. I do hear, however, that SRM hates RDMs. I am checking with our QA team to see if they can comment on SRM and >2 TB volumes.
Like most responses: it depends. If I weren't using vCenter to snapshot, then I'd have no issue having 50-100 plus VMs on an ESXi 5.5 datastore. Things change when you're using VM Tools/vCenter to snapshot, and I wouldn't go much higher than 20-25. Even that depends, since some VMs might snap quickly and blow that guideline. There are many "considerations", not least the 256-volume limit per array. If you replicate both ways, every volume exists on each array, so divide by 2, leaving 128 live volumes per site. Obviously the volume limitation in our case might actually prevent us from doing a lot of in-guest attached iSCSI, and the possibility of using Zerto or SRM makes these site recovery options more complicated, with scripting required. I'm coming around to the idea of creating datastores for SQL DB, temp DB, logs, and temp logs, with no replication of temp DB or logs.
For VMFS datastores, 2 TB volumes here, apart from the volumes we use for test and lab purposes.
We decided to stay at a max of 10 TB for iSCSI Windows guest-initiated volumes (most of this is archived video and such anyway).