A quick point on the above - snapshot space is accounted for separately from volume space, so the size of your snapshots will never impact how much space you have left in your volume. Your example of 5TB + 1.5TB of snapshots = 3.5TB left for volumes is therefore not correct. If you've provisioned 5TB, the volume can use the full 5TB (IF you've left the volume quota at the default of 100%).
In my personal view, SDRS is slightly redundant in the world of Nimble, as moving VMs from volume to volume to maximise IO/throughput doesn't ring true for us as it would for NTAP/EMC etc, because everything is virtualised and coalesced through the CASL file system.
What I can predict in this environment is that SDRS will constantly move VMs around to better balance the capacity of the volumes. However, there is no automatic "shrink" of said volumes, which means on the underlying Nimble system each volume will still show 'x' percent used until you perform a manual SCSI UNMAP command - and those freed blocks will then reside as part of your snapshot space.
Take 2 volumes, both 5TB in size: one with 4TB used, one with 3TB used, and snapshots being taken every hour on both volumes.
If SDRS is involved, it will use Storage vMotion to balance the VMs off the 4TB volume so that both volumes sit at 3.5TB used. However, the Nimble array will still show 4TB used on the first volume, while the other now shows 3.5TB used. This is because, to a block storage device, those blocks still look 'touched' to NimbleOS even though the data was Storage vMotioned elsewhere (think of it like fragmentation in Windows). This is a problem native to virtualisation, NOT specific to Nimble.
You would now run a manual SCSI UNMAP command from VMware's CLI, which frees up the 500GB from the 4TB volume so that both volumes show 3.5TB used (this is not an automated process). However, because you are snapshotting the data every hour, the 500GB that was freed will now reside as part of your snapshot collection for that volume, and will only be removed once your volume collection policy purges those blocks as part of your schedule. So in essence, what's really happened is that you've lost an extra 500GB of space in your solution in the effort to use a VMware feature to keep things tidy at the virtualisation layer.
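For reference, the manual reclaim step would look something like this from the ESXi host shell (the datastore name below is a placeholder - substitute your own; the unmap namespace has been part of esxcli since ESXi 5.5):

```shell
# Run on the ESXi host. "NimbleDS01" is a hypothetical datastore name.
# Reclaims dead blocks on the VMFS datastore back to the array,
# working through the free space in 200-block chunks.
esxcli storage vmfs unmap --volume-label=NimbleDS01 --reclaim-unit=200
```

Note this walks the whole datastore's free space and can take a while on large volumes, so it's usually run out of hours.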
I hope the above makes sense. My mantra in life is KISS, and for that reason I wouldn't bother using SDRS in a Nimble solution.
Final point on all this - VMware Virtual Volumes (VVOLs) removes all the overheads and headaches I've just discussed, and makes SDRS redundant too. Happy days
VMware has a default SDRS setting that migrates VMDKs off datastores at the 80% usage mark when they are part of a Storage DRS cluster.
What I'm finding is that volumes in a collection policy are using over 20% of the space associated with the volume due to snapshot chains.
i.e. a 5TB volume is created on the Nimble and added to VMware as a 5TB datastore, with snapshot collections at 1.5TB. VMware believes it has 5TB of usable storage when in reality it can only use 3.5TB. SDRS only kicks in at the 4TB mark, so it won't move any VMDKs, and the guest can fail as the datastore becomes exhausted even though it shows 1.5TB free at the VMware layer.
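To put numbers on that, here's a quick sketch of the arithmetic in the scenario above (the 80% figure is VMware's default SDRS space threshold):

```shell
# Figures from the scenario above, in GB so shell integer maths works.
provisioned=5000       # 5TB datastore as presented to VMware
snapshots=1500         # space consumed by snapshot chains on the array
sdrs_threshold_pct=80  # default SDRS space utilisation threshold

array_usable=$((provisioned - snapshots))                 # what can really be used
sdrs_trigger=$((provisioned * sdrs_threshold_pct / 100))  # where SDRS starts moving VMDKs

echo "Usable before exhaustion: ${array_usable}GB"   # 3500GB
echo "SDRS migration trigger:   ${sdrs_trigger}GB"   # 4000GB
# SDRS would only act at 4000GB - 500GB after the datastore has already run dry.
```

The gap between those two numbers is exactly why the guest can fail before SDRS ever migrates anything.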
I can either:
- set SDRS to do placement only
- reduce the snapshot retention period
- lower the % at which SDRS is triggered
It would be great if the Nimble plugin allowed SDRS calculations to take snapshot size into account, but it looks like VMware is unaware of it.
Just wanted to get some feedback on how other people are managing this.