Hello Robert, welcome to the forums.
I can confirm that the lower-end Nimble platforms will most likely not support dedupe. Those systems typically have lower CPU core counts and less memory - and those are the two resources that dedupe taxes (any vendor that tells you dedupe has no overhead is not shooting straight). We actually don't know yet which systems will support dedupe when the feature ships, so it's mostly speculation.
You're correct that some of your files won't see much benefit from compression; however, if you have other datasets such as databases, VMs, email etc., they will typically compress by 30-80%. There is no performance overhead for compression, which is why we released that feature on all arrays back in 2010 when we came to market. If you're engaged with a Nimble sales team, they can provide you with a Windows tool called the Nimble Space Savings Estimator, which will analyse your dataset and tell you what the estimated compression and dedupe rates would be on the selected volume.
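If you want a rough feel for this before running the estimator, you can sketch it yourself. The snippet below is just an illustration using Python's zlib (not Nimble's actual compression algorithm, and the sample data is made up): structured, repetitive data like database rows compresses very well, while already-compressed or random data (media files, zips) barely compresses at all.

```python
import os
import zlib

def compression_savings(data: bytes) -> float:
    """Return estimated space savings as a percentage (0-100)."""
    compressed = zlib.compress(data, level=6)
    return max(0.0, (1 - len(compressed) / len(data)) * 100)

# Repetitive, structured data (stand-in for database/log content)
structured = b"INSERT INTO orders VALUES (1, 'widget', 9.99);\n" * 2000

# Incompressible data (stand-in for JPEGs, zips, other media)
random_blob = os.urandom(100_000)

print(f"structured: {compression_savings(structured):.0f}% savings")
print(f"random:     {compression_savings(random_blob):.0f}% savings")
```

The exact percentages depend on the algorithm and dataset, which is why the estimator tool runs against your real data.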
I'm surprised that you've switched dedupe off on your Windows file server - we have hundreds of customers using this successfully. Can you explain a bit further what the issue was, please?
Finally, whilst the CS215 may be dedupe-less, you can attach capacity expansion shelves to the system, which are very reasonably priced. It may make more sense to get pricing on expansion shelves versus looking at a larger system with more CPU/memory just to get dedupe, as the $/GB may be similar?
Thank you for your answer.
I will ask my local sales rep. for the Nimble Space Savings Estimator tool.
I turned off de-dupe on my Windows 2012 server because Windows 2012 has a bug where the Windows Search Service cannot index files that have been de-duplicated. Therefore client computers cannot search volumes residing on that server - including ExtremeZ-IP Mac clients, which rely on the Windows Search database.
I don't know if this is fixed in 2012 R2, but I suspect it's still an issue.