What cache size is recommended for a CS500 with 24TB raw? 10% of the actual data size? For me, that would mean if I'm using 12TB of data, the cache size should be 1.2TB.
I have a very rough sizing rule for cache, and it usually comes down to "buy the biggest you can afford".
Technically I always want usable cache capacity at 10% of your uncompressed data set, so you've got your sizing correct. However, as you add more workloads to the system over time, you may see the cache become over-utilised depending on what those workloads are. For me, this is where InfoSight is brilliant, as you (and we) can visualise this and plan accordingly.
If the majority of your workload is something like SQL Server, then you'll absolutely need more than a 10% cache allocation; in my experience, databases (especially batch jobs) require a much larger working set than your average application. Here I would recommend a minimum of 2.4TB, and maybe even as high as 3.2TB.
If it's mostly standard VM workloads, then 1.2TB is perfectly fine.
We Systems Engineers do have access to workload-specific cache sizing tools based on InfoSight data analytics, so don't hesitate to chat to your SE for some more exact sizing figures.
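If it helps, here's a very rough back-of-the-envelope sketch of those rules of thumb in Python (just my illustration using the 10% and roughly 20-27% ratios above, not one of our official sizing tools):

```python
def recommended_cache_tb(uncompressed_data_tb, db_heavy=False):
    """Rule-of-thumb flash cache sizing for a hybrid array.

    Rough illustration only; real sizing should come from the
    workload-specific InfoSight tools mentioned above.
    """
    if db_heavy:
        # Database-heavy mixes: roughly 20-27% of uncompressed data,
        # matching the 2.4-3.2TB suggested for a 12TB data set.
        return 0.20 * uncompressed_data_tb, 0.27 * uncompressed_data_tb
    # Standard VM workloads: roughly 10% of uncompressed data.
    estimate = 0.10 * uncompressed_data_tb
    return estimate, estimate

print(recommended_cache_tb(12))        # ~(1.2, 1.2): 1.2TB is fine for VMs
print(recommended_cache_tb(12, True))  # ~(2.4, 3.2): 2.4-3.2TB for SQL-heavy
```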
I'm pondering whether to recommend an all-flash shelf because of the rule of thumb. When all of our migration work is complete, we will have approximately 65TB UNCOMPRESSED and a varied workload with 70-plus DB servers. Sure, we will use performance policies to prevent log volumes from poisoning the cache, but realistically 2.4TB of cache on the controller is going to fall short.
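Just to put numbers on that (back-of-the-envelope only, applying the rules of thumb from earlier in the thread):

```python
uncompressed_tb = 65
print(f"{0.10 * uncompressed_tb:.1f}TB")  # 6.5TB at the plain 10% rule
print(f"{0.20 * uncompressed_tb:.1f}TB")  # 13.0TB at the ~20% database-heavy rule
# Either way, far beyond the 2.4TB of controller cache.
```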
How is InfoSight displaying your cache and CPU usage today? And what capacity are you up to, and how much of that is databases?
CPU is around 7%. The actual disk space used by DBs is around 15-20% of the total uncompressed space; that total is around 23TB right now. Cache is topping out at 100%.
It's a constant 100%
Certainly sounds like food for further investigation by Rich Fenton. We can even get support to pull cache usage and misalignment reports to check this out too.
I will raise a call tomorrow.
Mark, no need to raise a call with support. I will call you.
Use my mobile. Working from home.
The results are a little misleading if you use, say, average latency. The average is raised by heavy sequential read ops during backups in the silent hours.
Correct. It's always good to drill down to the 1-day and 5-day views to see what latencies look like during the production day, rather than just looking at the overall trend. Running traditional backups in the evening will raise the average latency.
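As a toy illustration of why the overall average misleads (the latency figures and the 08:00-18:00 production window below are made up):

```python
# Ten production-hour samples at 1.5 ms plus three overnight backup
# samples at 12 ms, mimicking heavy sequential reads in the silent hours.
samples = [(h, 1.5) for h in range(8, 18)] + [(h, 12.0) for h in (1, 2, 3)]

overall = sum(ms for _, ms in samples) / len(samples)
prod = [ms for h, ms in samples if 8 <= h < 18]
production = sum(prod) / len(prod)

print(f"overall average:        {overall:.1f} ms")     # ~3.9 ms, skewed by backups
print(f"production-day average: {production:.1f} ms")  # 1.5 ms
```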
As we discussed earlier, the cache on a CS460 can be upgraded non-disruptively to X2. In addition, the CS400 can also take an ES1-AFS (cache expansion shelf), allowing you to incrementally increase cache beyond what's in the controller, in batches of four drives at a time. The AFS does not count as one of your three available expansion shelves.
The cache only needs to be expanded as InfoSight warns you and as you start to see your production-day latencies increase.
Good morning Roland
I agree with Nick, and I also use a performance analysis tool (Cloudoscope) to help capture and clarify an end-user's workloads (i.e., IO, reads/writes, etc.). I also believe in adding a proportional amount of growth overhead to support newfound or unexpected growth. In most cases, applications such as mail and databases will demand more cache than flat-file consumption. I hope this helps. Paul
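For what it's worth, a minimal sketch of that growth-overhead idea (the 20% annual growth rate and two-year horizon are purely hypothetical):

```python
data_tb = 12.0         # current uncompressed data set
growth_rate = 0.20     # hypothetical annual growth
years = 2              # hypothetical planning horizon

projected_tb = data_tb * (1 + growth_rate) ** years
cache_tb = 0.10 * projected_tb  # 10% rule against the projected data set
print(f"projected data: {projected_tb:.1f}TB -> cache: {cache_tb:.2f}TB")
```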