Ideally you would put your tempdb data on the sqlserver performance policy and your tempdb log on the sqlserver-log performance policy. That would follow best practice, and it may be why Nimble did not include a specific policy for tempdb. Doing so, however, would consume two valuable volumes instead of one.
Like you, I have created a custom tempdb policy. I have found that an 8 KB block size with caching off is a good fit for most cases. Since tempdb data is typically used short term and then discarded, it's best not to persist it in cache. You will often still see cache hits on these volumes: although the data doesn't get cached to SSD, it will often still exist in DRAM, which also counts as a cache hit. So you still get the benefit of some caching, which is really nice.
Thanks for the information. I often wonder whether certain customers (like myself) are overprotective of the SSD cache space in our arrays.
That's why I wasn't sure what to do with the TempDB data location for the larger data warehouse VMs.
If TempDB is not needed for caching or snapshots (in most situations), I'm basically using the SQL Log performance policy at that point. (8K, Comp On, Cache Off)
Where did you read that putting TempDB data/log volumes next to your SQLData data/logs within the main volumes was a best practice?
Hi Alex, sorry, I did not mean to co-locate tempdb with the other DB files, since it needs no snapshots and that would waste space. Ideally it would have its own separate data and log volumes, but that consumes two of the 255 valuable volumes, so a single separate volume is a good compromise. Yes, the sqlserver-log performance profile is a good match. I created one that matches sqlserver-log but is called sqlservertempdb so it can be distinguished separately. It's good to be somewhat protective of cache, since it's the essence of Nimble's fast performance :)
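For anyone setting this up, the SQL Server side of giving tempdb its own volume(s) is done with ALTER DATABASE. A minimal sketch follows; the drive letters (T:\ for tempdb data, L:\ for its log) and paths are hypothetical, and the logical names tempdev/templog are the SQL Server defaults, so check yours first:

```sql
-- Confirm the current logical and physical file names for tempdb:
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- Point tempdb at the dedicated volumes (hypothetical paths).
-- The change takes effect on the next SQL Server service restart,
-- when tempdb is recreated at the new locations; the old files
-- can then be deleted manually.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\TempLog\templog.ldf');
```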
That is what I do, sort of. For our "Platinum Databases" with one-hour Recovery Point Objectives, we create the DB and LOG volumes as in-guest iSCSI, but we create separate VM datastores for the tempdb data and tempdb log and have approximately 10 SQL VMs sharing those datastores. The performance policy for both temp volumes/datastores has caching disabled. Like you, I treat the temp volumes much like a page/swap file and therefore don't back them up, but I do take an un-quiesced snap to the downstream array.

Volume limits should not be an issue for too much longer with the advent of VVols and the requirement for SAN vendors to raise volume limits above 256. I'm sure Nimble Storage will react and adapt quite quickly to this growing requirement.

The main reason for using in-guest iSCSI is not performance but the ability to use backup agents to snap and replicate these DB/log volumes in their own right; the backup agents can't do this if the volume pairs sit on shared VM datastores. If you cache the tempdb data/log pairs, I feel you are correct that they will eat into cache usage significantly. My limited experience with performance-monitoring utilities suggests that even tempdb is mostly writing, not reading.
Using shared datastores for tempdb is a great idea to reduce the number of volumes consumed. Unfortunately, it's something we can't implement, as our use of the arrays is purely for iSCSI-connected disks and not VM datastores. I hope Nimble will increase the volume limit soon, as that tends to be the limit we hit first... even before capacity. As I understand it, it's not an easy thing for them to increase. Hopefully it's on their agenda to address soon :)