This is a great question. It is possible to combine these workloads, but you need to consider whether you have enough storage resources to cover all the bases. Depending on your findings, it may be beneficial to conduct your project in two phases.
Phase 1 - Migrate and Observe
Phase 2 - Size VDI and Deploy
You noted that you have 100 virtual servers; is that a total of 1000-1500 IOPS, i.e. an average of 10-15 IOPS per server? While these workloads might be relatively low IOPS today, if you were storage constrained before, it is possible for that load to increase once you get access to the faster, lower-latency storage of your Nimble array.
For this reason, I would recommend that you complete the migration of these workloads first, then leverage InfoSight to observe CPU and Cache utilisation.
At this point you will need to look at the sizing for your VDI pilot, which should include parameters such as IOPS per desktop, vRAM, used data and provisioned storage. These figures can be used to calculate the additional load on the Nimble array from an IOPS, cache and storage-capacity perspective. Finally, you can decide whether to scale back the pilot so you don't run out of storage resources, or to deploy on a new array.
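As a back-of-envelope illustration of that sizing step, here is a minimal sketch; the parameter values are hypothetical placeholders, not figures from this thread:

```python
# Rough estimate of the extra load a VDI pilot adds to an array.
# All input values below are illustrative assumptions only.

def vdi_pilot_load(desktops, iops_per_desktop, vram_gb, used_gb, provisioned_gb):
    """Aggregate the per-desktop sizing parameters across the pilot."""
    return {
        "peak_iops": desktops * iops_per_desktop,
        "vram_total_gb": desktops * vram_gb,          # host RAM, not array capacity
        "used_capacity_gb": desktops * used_gb,       # data actually written
        "provisioned_gb": desktops * provisioned_gb,  # thin-provisioned ceiling
    }

load = vdi_pilot_load(desktops=50, iops_per_desktop=15,
                      vram_gb=2, used_gb=10, provisioned_gb=40)
print(load["peak_iops"])  # 750 extra IOPS at peak for a 50-desktop pilot
```

Comparing totals like these against the headroom InfoSight reports after the server migration tells you whether the pilot fits on the existing array or needs to be scaled back.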
Best of luck with your projects.
You didn't mention what stage your VDI POC is at. Have you done any assessment work? That will take a lot of the guesswork out of sizing for your POC. You also need to consider the impact that tasks such as a recompose or provisioning out desktops may have on your storage, as these tasks can generate a large number of IOPS. I have seen people POC VDI taking into account only the steady-state and peak IO usage of the desktops, ignoring the tasks mentioned above, and then wonder why it is performing poorly; this is exacerbated by placing your POC on the same storage platform as your existing server compute.
Hope this helps.
We have a number of different customers using Nimble for both virtual server and virtual desktop environments. Their usage is, of course, varied with some sites having a more prevalent virtual server estate.
Almost all of our sites use linked clones, which helps considerably with the amount of storage space required. You could then host user home directories / general file shares from a Windows server (to get the latest and greatest features) which has an iSCSI-connected volume from .... the Nimble :-)
You should be careful that when you move your 100 virtual servers their IO demand doesn't go through the roof; John's comment about the existing storage being a limiting factor is quite plausible, and something we see an awful lot.
Once deployed, and the virtual server estate is migrated and monitored (I'd look at both the host side, either from the individual VMs, which gives the most detail, or from the hypervisor, and the storage side), you can always look at upgrading the cache or the controllers if a VDI environment will demand more read or write performance, respectively. Of course, I can't say you'll be absolutely fine to deploy VDI onto the same platform, but going down the pilot route will give you the best visibility of how your environment copes (servers, switches and storage). The Nimble arrays are so good at providing huge performance at such low latencies that I think you'll be pleasantly surprised at what you can run on the same array when you consolidate workloads!
Good luck, and please do post your results.
I've got a CS220x2 in production for my VMware View 5.2 environment. I'm currently using View in the education space to replace labs. I have all my pools refresh immediately, so there is a relatively high IO load based on the number of VMs I'm running (averaging 1200 IOPS, 800 read/400 write, over the last 24 hours for about 100 desktops and 10 servers).
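A quick back-of-envelope check of those figures, using only the numbers quoted in this post:

```python
# Sanity-check the quoted workload: ~1200 IOPS (800 read / 400 write)
# spread across roughly 100 desktops and 10 server VMs.
total_iops = 1200
read_iops, write_iops = 800, 400
vm_count = 100 + 10

avg_iops_per_vm = total_iops / vm_count
read_ratio = read_iops / total_iops

print(f"{avg_iops_per_vm:.1f} IOPS per VM, {read_ratio:.0%} reads")
# 10.9 IOPS per VM, 67% reads
```

That per-VM average lands in the same 10-15 IOPS range discussed earlier in the thread, even with immediate pool refreshes in the mix.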
As far as IO goes, I personally have all of the VMware View supporting infrastructure on the same CS220 as the VDI workload and have noticed no performance issues. There are connection brokers, security servers, vShield Endpoint infrastructure, ThinApp servers, and PCoIP management servers: all told, about 10 server VMs supporting the View environment.
One thing you can do is logically break your LUNs into groups; I have one LUN for each large computer lab. I also have one resource pool for each lab, which allows me to tailor performance for each pool as required.
Like others have said, just watch and monitor your environment as there is no one-size fits all solution.
Currently I'm getting 35% compression in my VMware View environment, and depending on how you deploy your servers and what type of data you're storing, you may exceed the storage capacity of the 240 or even the 260 model.
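To show how a savings figure like that translates into effective capacity, here is a minimal sketch; the usable-capacity number is a hypothetical placeholder, not a published spec for any model:

```python
# Convert a space-savings fraction into effective (logical) capacity.
# usable_tb is an assumed figure for illustration only.

def effective_capacity(usable_tb, savings):
    """Logical data that fits when the array saves `savings` fraction of space."""
    return usable_tb / (1 - savings)

print(round(effective_capacity(usable_tb=8, savings=0.35), 1))  # 12.3
```

In other words, 35% compression stretches 8 TB usable into roughly 12.3 TB of logical data; run the same arithmetic against your own model's usable capacity when judging whether you'll outgrow it.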
There's always the option of adding storage trays if raw storage capacity becomes an issue, and you can upgrade the controllers live, but you can't upgrade the disks in the array after the fact (from 1TB to 2TB, for example).
Cache performance is where you'll notice the difference; at the pricing I got, it was less expensive to buy the 220x2 up front than to get the x2 upgrade later on.
Plan it out well and you'll be able to address any issues that you do have - always, always, always have a roll-back plan in the event something doesn't work.
Thank you for your replies and sorry for the delay in getting back to this thread (we just completed our network upgrade to 10G, which was needed for the migration and VDI projects).
I realize my question had a lot of open-ended variables, and the common theme of the responses has been to migrate our virtual servers first and then see how we perform on our new 220Gx2. This is the direction we are going, and we have also scaled back our VDI PoC/assessment to only 50 virtual desktops this FY.
The migration of our virtual servers to Nimble starts next week, with the VDI PoC hopefully in late December/early 2014. As we progress I will post updates and findings.