As you correctly point out, the data in tempdb is transient. Since it is short-lived, this data often does not receive any significant benefit from being placed on SSD. In fact, depending upon how tempdb is used, it can consume a significant amount of the cache available on the array. In these scenarios, the cache churn generated by tempdb can adversely impact the cache hit rate of other volumes while receiving no significant benefit itself. Generally speaking, it is best to disable caching on tempdb volumes (both data and log files).
If disabling caching on the tempdb volume does adversely impact your application performance, you can quickly revert to a cached policy.
If I were a betting man, I would not cache it. That said, one nice thing about Nimble is the ability to test it both ways in a non-disruptive manner. Configure a custom performance policy with caching enabled and run some testing (the longer the better). Then edit the policy to disable caching and repeat your testing. If you don't mind sharing your results, please do.
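To make the two runs comparable, it helps to capture a repeatable latency number for each policy rather than eyeballing it. Here is a minimal sketch of such a harness; `sample_workload` is a placeholder of my own invention, not anything Nimble- or SQL-specific, and you would point it at your real tempdb-heavy job:

```python
import statistics
import time

def time_workload(workload, runs=5):
    """Run a workload several times and return the median wall-clock
    latency, so the same test can be repeated under each performance
    policy (caching enabled vs disabled) and the medians compared."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def sample_workload():
    # Placeholder stand-in for a real tempdb-heavy job (e.g. a big sort).
    sorted(range(200_000, 0, -1))

# Run once with the cached policy, edit the policy, then run again.
cached_median = time_workload(sample_workload)
uncached_median = time_workload(sample_workload)
print(f"cached {cached_median:.4f}s vs uncached {uncached_median:.4f}s")
```

Using the median rather than the mean keeps one slow outlier run from skewing the comparison.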
+1 on what Todd said.
In a POC that was run down under in ANZ, it was found that tempdb showed quite high cache hit rates (>80%) even though caching was disabled; the reason being that any read to tempdb typically caused the array to hairpin and serve the I/O from NVRAM or DRAM, as that was where the data lived at the time.
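That effect is easy to see in a toy model: if recently written blocks still sit in a memory-side write buffer, reads of short-lived data hit memory even with SSD caching off. This is a simplified sketch under my own assumptions (buffer size, access pattern), not a description of Nimble internals:

```python
from collections import OrderedDict

class ArrayModel:
    """Toy array model: recent writes land in an NVRAM/DRAM buffer before
    being flushed. SSD caching is 'disabled', yet reads of recently
    written blocks are still served from memory."""

    def __init__(self, buffer_blocks=1024):
        self.buffer = OrderedDict()          # block -> present, oldest first
        self.buffer_blocks = buffer_blocks
        self.hits = 0
        self.reads = 0

    def write(self, block):
        self.buffer[block] = True
        self.buffer.move_to_end(block)
        if len(self.buffer) > self.buffer_blocks:
            self.buffer.popitem(last=False)  # oldest block flushed to disk

    def read(self, block):
        self.reads += 1
        if block in self.buffer:             # hairpin: served from memory
            self.hits += 1

    def hit_rate(self):
        return self.hits / self.reads if self.reads else 0.0

# tempdb-like pattern: write a batch of blocks, re-read them shortly after
array = ArrayModel(buffer_blocks=1024)
for batch in range(100):
    blocks = range(batch * 100, batch * 100 + 100)
    for b in blocks:
        array.write(b)
    for b in blocks:                         # short-lived data re-read quickly
        array.read(b)

print(f"cache hit rate: {array.hit_rate():.0%}")  # → cache hit rate: 100%
```

Because each batch is re-read before it ages out of the buffer, the model reports a high hit rate despite having no SSD cache at all, which matches the POC observation.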
I think it was detailed somewhere on NimbleConnect; I'll see if I can find it.
Sorry for getting to the party so late, but I have something hopefully useful to say regarding tempdb.
I/O to tempdb SHOULD be mostly sequential for things like sorts (ORDER BY, GROUP BY, etc.), as well as for what is called the version store. The version store involves copying the before image of a data page involved in an update transaction to another page in tempdb in order to maintain transactional read consistency. Think of it as a COW snapshot, only at the database level rather than the storage level. Because the version store tends to string these before-image pages together contiguously, it should result in a sequential workload for tempdb. Then we have temp tables, which are also created in tempdb. Depending on how the application creates and then accesses these temp tables, access could be random or sequential in nature. Since most code I have seen tends to iterate through, and often sort, the entire result set of the temp table, both the creation of and selection from a temp table should be mostly sequential.
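The version-store point can be sketched in a few lines: updates may touch random pages in the user database, but the before images are appended to the next free slot in tempdb, so the writes tempdb sees come out contiguous. This is a conceptual illustration only, not SQL Server's actual on-disk format:

```python
# Toy sketch of a version store: before an update, the page's prior image
# is appended to the next free slot in tempdb, COW-style at the db level.

data_pages = {n: f"row-v1-{n}" for n in range(8)}   # "user db" pages
version_store = []                                   # tempdb, append-only

def update_page(page_no, new_value):
    # Copy the before image into the version store, then apply the update.
    version_store.append((page_no, data_pages[page_no]))
    data_pages[page_no] = new_value
    return len(version_store) - 1                    # slot the image landed in

# Updates hit pages in random order, yet the slots come out consecutive.
slots = [update_page(p, f"row-v2-{p}") for p in (5, 2, 7, 0)]
print(slots)  # → [0, 1, 2, 3]: a sequential write pattern in tempdb
```

Random reads against the user data still produce strictly sequential writes into tempdb, which is the reason a sequential-friendly (log-style) policy can make sense for it.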
The short story is that most of what is detailed above should result in sequential I/O. However, I have experimented with this a bit and seen varying results when specifying a log versus a data perf policy. As was said earlier, if a SQL Server data policy is chosen but access to the volume is sequential, the array will handle that well due to the way the caching algorithms work. Still, for direct-attached iSCSI volumes, as a rule of thumb I would lean towards choosing a log perf policy for log and tempdb volumes, and a data perf policy for data and index files, regardless of the db platform.
One more thing worth mentioning is that lots of customers use VMDKs or CSVs for both data and log files, and we rarely hear of performance issues, so the array just handles it.
Hopefully this was helpful in some way.