Hi All,

 

I've put out some blogs before about properly sizing your desktops and how important it is to get actual numbers from your customers, BUT you need to be able to trust those numbers.  So what the heck am I talking about?  One reason I've seen many VDI implementations fail is an unwillingness to change how the desktops operate.  Let me explain....  Show of hands: how many of you still have a physical desktop, and the day a virus scan runs you either:

 

A. Pause the virus scan indefinitely.

or

B. Go do something else that doesn't involve your computer for 2-3 hours.

 

Sure, you can try to slog your way through the pain and be productive during this time, but it's painful, and it feels like the more you fight it, the longer the scan takes.  Plus, now that hard drives are larger than ever before, virus scans take that much more time.

 

This is what I mean by changing how desktops operate.  These outlying events REALLY tweak your sizing numbers, and sizing your VDI environment to absorb these events would have you buying an enormous amount of hardware that would cost you a fortune!

 

One of the key reasons VDI hasn't taken off is cost.  There are multiple reasons for that which I won't go into right now, but I do want to discuss the effects of these outlying events.  They will skew your numbers and push your implementation to a whole new level of cost to absorb events that might happen 2-3% of the time: virus scans and OS patches, just to name a couple.

 

So am I saying ignore those outlying events and size for a smaller environment?  No, what I'm saying is that for VDI to be cost effective AND functionally effective, we need to change the way we think.  Virus scanning and OS patches are administrative functions that can't be ignored, but this is the perfect time to get creative and look for alternative methods to solve the problem.

 

A friend of mine once told me there's always a bottleneck, and during an assessment you want to make sure your product is not the bottleneck.  In VDI implementations these events are certainly the bottlenecks, and we want to shift them so the users don't have to suffer through them AND to limit the expense of the project.  Luckily there are some great products out there to do this, and I'll cover some of them later, but I wanted to write this first article to get you thinking that you need to question the numbers!  If they don't look right, heck, even if they look right, question your customers and find out what's happening in their environment.  They'll thank you for it!  :-)

 

Here are a couple of examples for you.

 

These are Performance Monitor outputs from my computer.  The first is me just doing regular work: I've got email open, a ton of internet browsers open, I've got YouTube running, and you can see that the writes are higher than the reads.  Even the writes are really low, so I'm only running about 10 IOPS total.
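If you want to sanity-check numbers like these yourself, the conversion from raw disk counters to IOPS is just a delta over a sampling interval.  Here's a minimal sketch in Python (the helper name and the sample counter values are my own illustration, not from my graphs; the raw counters would come from whatever monitoring tool you use):

```python
def iops(reads_start, reads_end, writes_start, writes_end, interval_s):
    """Convert two snapshots of cumulative disk counters (total read and
    write operations) into read, write, and total IOPS over the
    sampling interval."""
    read_iops = (reads_end - reads_start) / interval_s
    write_iops = (writes_end - writes_start) / interval_s
    return read_iops, write_iops, read_iops + write_iops

# Illustrative numbers: over a 60-second sample the disk completed
# 240 reads and 360 writes -- writes higher than reads, ~10 IOPS total.
r, w, total = iops(100_000, 100_240, 50_000, 50_360, 60)
print(f"{r:.0f} read IOPS, {w:.0f} write IOPS, {total:.0f} total")
# -> 4 read IOPS, 6 write IOPS, 10 total
```

The same arithmetic is what Performance Monitor's per-second counters are doing under the hood, just at a finer sampling interval.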

 

[Performance Monitor graph: normal desktop workload, roughly 10 total IOPS]

This next graph is still my computer, but with a virus scanner scanning my hard drive.  You can see the reads are just going nuts!  I'm still doing my other work like before, but my desktop is getting hammered by 176 read IOPS and 15 write IOPS for a total of 191 IOPS!

[Performance Monitor graph: virus scan in progress, 191 total IOPS]

Yes, I know this is something that probably only runs once a week, and yes, I know using a mean calculation would flatten out this peak over time, BUT my point is that these types of events will severely impact your results.  Even if the peak flattens out to 76 IOPS per desktop, that's going to be a VERY expensive desktop when 99% of the time the IOPS are only running around 10.
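To put some rough numbers on that "flattening" effect: the blended average is just a time-weighted mean, and how much the scan peak inflates it depends entirely on what fraction of the measurement window the scan occupies.  A quick sketch (the peak fractions below are illustrative assumptions, not measurements from my graphs):

```python
def blended_iops(baseline, peak, peak_fraction):
    """Time-weighted mean IOPS for a desktop that spends `peak_fraction`
    of the measurement window at `peak` IOPS and the rest at `baseline`."""
    return peak_fraction * peak + (1 - peak_fraction) * baseline

# A scan running 2-3% of the time barely moves the mean...
print(round(blended_iops(10, 191, 0.03), 1))   # ~15.4 IOPS
# ...but measure over a narrow window the scan dominates and the
# "average" balloons toward that expensive 76-IOPS-per-desktop figure:
print(round(blended_iops(10, 191, 0.365), 1))  # ~76 IOPS
```

That's exactly why the measurement window matters: the same desktop can "average" 15 IOPS or 76 IOPS depending on when and how long you sample, and the hardware bill follows whichever number you believe.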

 

If you've assessed the environment and have performance numbers, I'm proud of you!  But we have to go to the next step to make sure we know what's going on in the environment.  This way we don't charge the customer for a sports car when all they really need is a nice 4-door sedan.  :-)

 

-Neil