For Nimble volumes presented as VMFS datastores, I generally recommend selecting VMware ESX5 as the performance policy. VMFS volumes very often contain a variety of different VMDKs, so specifying SQL Server Logs as the performance policy would optimize for SQL Server logs and nothing else. The SQL Server Logs performance policy is most often used for in-guest iSCSI-connected volumes.
Hope this helps
We had a similar question/issue with Exchange, and I reached out to support for this.
If you put the SQL log as a single VMDK in that VMFS datastore, then you should select the SQL Logs performance policy. If there are any other VMDKs on that datastore (i.e. not containing SQL logs), set it to the ESX5 performance policy. So you could, for example, dedicate one datastore to SQL data files and one to SQL log files (and store multiple VMDKs from different SQL Servers in each datastore, all containing either data or log files) and attach the matching SQL performance policies to them.
Just did this myself (running v1.47) and it's working well, with the exception of the Nimble snapshots not working consistently. A few key lessons learned to consider:
- Read at least these Nimble BPGs and KBs:
- VMware vSphere 5 Best Practices
- SQL Server Best Practices
- Microsoft SQL Server Integration (customerPortal_KB-000036_SQL_Integration.pdf) - Nimble InfoSight
- Configure MPIO in Windows 2008 (kb_customerPortal_KB-000057-MPIO_Windows_2008.pdf) - Nimble InfoSight
- KB-000157 Configuring ESX for VM's with direct attached LUNs
- Always connect your SQL data volumes via iSCSI from within the guest OS, rather than indirectly through the VMware ESX storage stack; only the VM itself (its VMDK datastore) should connect to the Nimble through ESX. This way the guest OS has direct access to the data on the Nimble, and you can define different perf policies and vol collections for the VMDK stores and the SQL (or Exchange, Oracle, etc.) stores, which read/write differently.
- Create discrete volumes for the VMDK (guest VM), SQL data, SQL logs, SQL backups and SQL TempDB. Separating TempDB onto its own volume now means you'll be prepared if/when you upgrade to SQL 2008/2012. Ideally your SQL server will have at least 5 discrete volumes (OS, data, logs, tempdb, backups)
- Use the correct Nimble perf policies for the VMDK, SQL data and SQL logs.
- Create an iSCSI initiator group in the Nimble so that only your SQL server can get to the volume
- When you create your SQL vol collection, put the SQL data and logs in the same one. The TempDB and backup volumes aren't as critical to include in a volume collection, but that's a personal choice.
- Use MS VSS for the SQL vol collection
- The MS VSS SQL vol collection sync does NOT truncate the SQL log files, even though it shows a 'verify backups' option. It only truncates the logs in Exchange (per Nimble InfoSight), and Nimble reports that this is a bug in their software: the option shouldn't be presented for SQL vol collections.
- Install the MPIO feature on the Win2K8 server, then implement multiple paths in the MS iSCSI configuration. Ideally you will have 4 paths to each volume (2 adapters in each guest VM, routed through ESX, connected to at least 2 data ports on your Nimble). So if you apply the four dedicated in-guest volumes suggested above, you'd have 16 Favorite Targets showing up once completed.
- Use the 64K allocation unit size when creating your SQL volumes in Windows - this is the ideal size for SQL data/logs.
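To sanity-check the numbers in the list above, here's a small arithmetic sketch. The volume names are just the ones suggested in this thread (the OS volume stays on a VMDK, so it isn't an in-guest iSCSI target), and the 2-adapter/2-port counts are the minimums mentioned above:

```python
# Path and target math for the MPIO setup described above.
guest_adapters = 2        # iSCSI adapters in the guest VM
nimble_data_ports = 2     # data ports on the Nimble array
paths_per_volume = guest_adapters * nimble_data_ports

# In-guest volumes from this thread (OS stays on a VMDK via ESX).
in_guest_volumes = ["sql-data", "sql-logs", "sql-tempdb", "sql-backups"]
favorite_targets = paths_per_volume * len(in_guest_volumes)
print(paths_per_volume, favorite_targets)  # 4 16

# Why 64K allocation units: SQL Server does data I/O in 64 KB
# extents (8 pages of 8 KB each), so a 64 KB NTFS cluster maps
# one extent to exactly one cluster.
sql_page_kb = 8
pages_per_extent = 8
extent_kb = sql_page_kb * pages_per_extent
print(extent_kb)  # 64
```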
I would always use separate Nimble volumes attached via the iSCSI initiator in the VM guest rather than using VMDKs. This way you can properly take advantage of:
- Optimum performance (removing the VMWare layer altogether)
- Performance policies for each volume
- Consistent snapshots of databases
- Space and performance monitoring of the individual disk (I have found this very useful)
I wrote an article about this on my blog: SQL Server on Nimble Storage – Storage Layout.
SQL transaction log write sizes can vary quite a lot, so Nimble have chosen 4KB as a sweet spot for the block size on the performance policy. Compression is always on to improve performance and save space, and caching is always disabled to prevent cache poisoning.
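As a rough illustration of why the block size matters for variable-size log writes, here is a toy calculation. The sample write sizes are made up, not measured from any real workload:

```python
import math

# Toy model: each log write occupies a whole number of volume
# blocks, so writes are rounded up to the block size.
BLOCK = 4096  # 4KB block size from the SQL log performance policy

log_writes = [512, 1536, 4096, 9000, 60_000]  # hypothetical sizes (bytes)
blocks_used = [math.ceil(w / BLOCK) for w in log_writes]
bytes_written = sum(b * BLOCK for b in blocks_used)
payload = sum(log_writes)

print(blocks_used)               # [1, 1, 1, 3, 15]
print(bytes_written / payload)   # overall rounding overhead factor
```

A smaller block size wastes less space on tiny writes but costs more per large write; 4KB is the middle ground the policy settles on.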
David has highlighted some good use cases for in-guest attached volumes... One thing to keep in mind: for VMware Site Recovery Manager, you have to write scripts to mount the in-guest attached volumes during test recovery or failover. The reason is that ESX does not know about such storage, since it all appears as network traffic (bypassing the ESX storage stack).
One component of tempdb that is not often talked about is the version store. Since tempdb is used for sorts, the version store, temp and system tables, and other types of workloads, I would be curious to know what folks are using as a performance policy for tempdb.
Here is a link to a great description of tempdb which explains the version store in detail:
We really should break this out as a new question, as this is a very good topic.
I had this question posed to me by a couple of customers recently as they were unsure whether to allow SSD caching for TempDB.
According to information passed to me by our APAC team, one of our current SQL customers discovered that it was possible to get blistering performance for TempDB by creating a new performance policy with caching turned OFF. Because TempDB data is read very soon after being written by the database, CASL was still delivering a high cache-hit rate for these volumes, serving the majority of the data from NVRAM or DRAM, while never copying any of the blocks to SSD. That eliminates a lot of churn for short-lived data!
Wondering if it was me that passed this info on
I spent quite a long time testing various SQL Server storage layouts, with the goal of using minimal volumes while still achieving the highest performance and still being able to snap and replicate. TempDB was always separated out because snapshots and replication are not desired for it. TempDB was first tested with a cached performance profile, but because I wanted to keep volume counts to a minimum, I did not want to poison the cache with the TempDB tlog data. I then tested a workload running in TempDB with the cache disabled and still found a significant number of cache hits. I could only assume the hits were coming out of memory, so I asked John Whyte from Nimble, and he confirmed that the cache hits viewed in volume performance do indeed comprise both memory and SSD cache. Yes, my words exactly were "very cool!"
It might not be the optimal configuration for every workload, as some databases run procedures which take a long time and use tempdb intensively. In those cases the data will age out of memory too quickly and we won't see many cache hits. In general though, I would definitely recommend a single volume for tempdb with cache disabled.
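The write-then-immediate-read pattern described above can be modelled with a toy LRU memory cache. This is purely illustrative: the capacity and workload are made-up numbers, not anything measured on a Nimble array.

```python
from collections import OrderedDict

# Toy LRU "memory cache": recently written blocks remain readable
# from DRAM/NVRAM even when SSD caching is disabled for the volume.
class MemoryCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def write(self, block):
        self.blocks[block] = True
        self.blocks.move_to_end(block)
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the oldest block

    def read(self, block):
        hit = block in self.blocks
        if hit:
            self.blocks.move_to_end(block)
        return hit

cache = MemoryCache(capacity=100)
hits = 0
# tempdb-style workload: each block is read back right after writing,
# so every read hits memory despite the small capacity.
for block in range(1000):
    cache.write(block)
    hits += cache.read(block)
print(hits / 1000)  # 1.0
```

If the same workload instead re-read blocks long after writing them (the long-running procedure case above), the oldest blocks would have been evicted and the hit rate would drop.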
Ideally I'd love to do the in-guest iSCSI method, but with 50+ SQL Server instances and other systems we'd run out of volumes. I posted something relating to Oracle volumes but forgot to mention that they're running on OVM hosts. We are considering having 4 VMware datastores for SQL logs, 4 for SQL DBs, 4 for TempDB and finally 4 for backups. These datastores will have a max of 15 VMs per datastore and the respective performance policies applied. Naturally there will also be 4 datastores for the operating system. It's a compromise, I know, but will it perform reasonably well? Our array is a 460G. We will be using CommVault intelligent data agents to back up the DBs and logs, but will do at least one Nimble-only snap each day to replicate to our DR Nimble. Will VMware Tools provide app consistency for the Nimble-based snap? Currently our vCenter has its DB on a separate SQL Server, and that uses the in-guest iSCSI-connected DB/log method with Microsoft VSS sync. Works flawlessly. Pity it's impractical to do this across the spectrum.
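A quick sanity check of the layout proposed above, using only the numbers from the post (4 datastores per role, 15 VMs max per datastore, 50+ SQL instances):

```python
# Datastore capacity arithmetic for the proposed layout.
datastores_per_role = 4
max_vms_per_datastore = 15
roles = ["os", "sql-logs", "sql-db", "tempdb", "backups"]

total_datastores = datastores_per_role * len(roles)
vm_capacity_per_role = datastores_per_role * max_vms_per_datastore

print(total_datastores)       # 20 datastores total
print(vm_capacity_per_role)   # 60 VM slots per role

sql_instances = 50
print(sql_instances <= vm_capacity_per_role)  # True - fits, with headroom
```

So the scheme tops out at 60 SQL VMs before another datastore per role is needed, which leaves some room beyond the current 50+.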