When you perform sequential writes to a Nimble array (in this case, a Storage vMotion), the array automatically bypasses the write path into cache, since it has no way of knowing whether the incoming data is hot. Once the data is on the system, the array works out which blocks are needed in cache and pre-fetches them as and when required.
This is why you're seeing lower cache hit rates: the data migrated to Nimble arrives cold. However, you should see the cache warm back up within a few minutes or so.
Dmitry Marevutsky wrote:
This could be solved with performance policy tuning:
perfpolicy --edit name --cache_policy aggressive
perfpolicy --edit name --cache_policy normal
Please refer to the documentation first (there is a good Nimble + VMware best-practices document).
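For context, a minimal sketch of what that change and its revert might look like from the array CLI, based on the commands quoted above. The policy name `esx-datastore` and the `--info` flag are assumptions for illustration; verify both against your NimbleOS CLI reference before running anything.

```shell
# Inspect the performance policy before changing anything.
# (perfpolicy --info is assumed from the --edit pattern above; confirm on your NimbleOS version.)
perfpolicy --info esx-datastore

# Enable aggressive caching ONLY under guidance from Nimble Support/SEs,
# for the specific workload that requires it.
perfpolicy --edit esx-datastore --cache_policy aggressive

# ...run the workload in question...

# Revert to the default caching behaviour afterwards.
perfpolicy --edit esx-datastore --cache_policy normal
```

Note that this is array-side CLI, not something to script routinely; as discussed below, the setting should stay at its default for ordinary operations.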
I'd be very careful enabling Aggressive Caching on any array. In fact, it should only be done under the guidance and/or instruction of Nimble Storage Support or Systems Engineers, for specific workloads or requirements.
It's hidden in the CLI for a reason: it's not a tool for everyday use, and it can be detrimental to your array's caching behaviour if not used wisely.
In this case (standard Storage vMotions), Aggressive Caching should NOT be enabled.