Thanks for creating this thread...
1.) What does the array actually do differently when setting the blocksize to 4K instead of 16K?
The goal is to better optimise for the application. Everything internal to Nimble is variable block, so it doesn’t change much operationally on the array other than some fine internal optimisations.
The tagging of the volume with an appropriate application policy is also very useful for understanding different workloads that are deployed within the install base. For instance, if we see an obscure application we can use Infosight to understand the attributes of the dataset/application and use that for guidance when sizing and planning for future/other projects.
2.) Does blocksize have any impact on compression efficiency? Does a proper blocksize for the application in use yield a better compression rate?
3.) Does blocksize have any impact on dedupe efficiency on AFA models? Does a proper blocksize for the application in use yield a better dedupe rate?
4.) Does blocksize affect cache? In what way?
I'll answer all three questions above in one: in short, no. The blocksize’s effect on compression, dedupe, cache, etc. would be almost unmeasurable.
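To give a feel for why the effect is so small, here is a rough, hypothetical sketch (this is generic zlib in Python, not Nimble's actual compression engine) that compresses the same data at 4 KiB and 16 KiB block granularity. The overall compression ratio stays in the same ballpark either way; only the per-block overhead shifts slightly:

```python
import zlib

# Illustrative only: synthetic, moderately repetitive "application data".
# This is NOT Nimble's engine; it just shows that block granularity has a
# second-order effect on the overall compression ratio.
data = b"some moderately repetitive application data block " * 4096

def compressed_size(payload: bytes, blocksize: int) -> int:
    # Compress each fixed-size block independently, as a block-based
    # engine would, and sum the compressed sizes.
    total = 0
    for i in range(0, len(payload), blocksize):
        total += len(zlib.compress(payload[i:i + blocksize]))
    return total

size_4k = compressed_size(data, 4 * 1024)
size_16k = compressed_size(data, 16 * 1024)

print(f"original:      {len(data)} bytes")
print(f"4 KiB blocks:  {size_4k} bytes compressed")
print(f"16 KiB blocks: {size_16k} bytes compressed")
```

Running this, both granularities compress the sample data heavily; the gap between them is small relative to the total savings, which is the point of the answer above.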
If you'd like a deeper understanding, please feel free to reach out to your local Dutch SE team.