Working in the data storage business, I get asked about the accuracy of performance figures almost daily. Unfortunately, there can be a huge gap between what most storage vendors say their product can do and reality. This certainly isn't a complete testing tutorial, but it's enough to get you productive quickly without wasting time on meaningless tests or getting gamed by a vendor.
Some vendor claims I've seen are outright lies; some, merely lies of omission. At Nimble, we've promised to be straight with our customers even if it means telling them we're not the fastest solution on the market. That's because we never set out to build the fastest storage solution. What we strive for is the best balance of performance and capacity. That's been our goal since we started. We're trying to meet the needs of most customer applications, not the 1% of systems where speed is the only priority.
In short, this picture sums up why you need to consider Nimble Storage. The diagram came from a customer that replaced two racks of storage for their Exchange email implementation with two Nimble CS460G-X2 arrays that cost about what the legacy gear cost for power and cooling alone, so essentially they got their Nimble arrays for free! ;-)
When choosing a storage solution, it's important to determine just what your performance needs are based on the demands your applications will place upon it. A good way to collect this information is to use OS/hypervisor tools and look for two pairs of metrics: reads/writes and bytes read/bytes written. The first pair tells you how many IOPS you'll need, while the second pair tells you the total throughput required. It's also good to chart these metrics over time to look for anomalous spikes in activity.
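Turning those raw counters into IOPS and throughput is simple delta arithmetic over a sampling interval. Here's a minimal Python sketch; the counter names and sample values are hypothetical, but the math is the same whether the numbers come from `/proc/diskstats`, esxtop, or Windows perfmon:

```python
# Derive IOPS and throughput from two samples of cumulative OS disk
# counters taken `interval_s` seconds apart. Sample values are made up.

def io_rates(sample1, sample2, interval_s):
    """Each sample: dict of cumulative reads, writes, bytes_read, bytes_written."""
    return {
        "read_iops":  (sample2["reads"] - sample1["reads"]) / interval_s,
        "write_iops": (sample2["writes"] - sample1["writes"]) / interval_s,
        "read_mbps":  (sample2["bytes_read"] - sample1["bytes_read"]) / interval_s / 1e6,
        "write_mbps": (sample2["bytes_written"] - sample1["bytes_written"]) / interval_s / 1e6,
    }

t0 = {"reads": 1_000_000, "writes": 400_000,
      "bytes_read": 50_000_000_000, "bytes_written": 20_000_000_000}
t1 = {"reads": 1_300_000, "writes": 520_000,
      "bytes_read": 59_600_000_000, "bytes_written": 23_840_000_000}

rates = io_rates(t0, t1, interval_s=60)
print(rates)  # 5000 read IOPS, 2000 write IOPS, 160 MB/s read, 64 MB/s write
```

Chart these rates at one-minute granularity over a full business cycle (including backups and month-end jobs) and size for the peaks, not the averages.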
Before you begin testing you need to understand what constitutes a good test and choose your testing tools accordingly.
The following are my guidelines for testing a storage solution's performance. I recommend IOmeter 2010 for testing. It offers the most realistic results by using a random test pattern rather than the repeating pattern of previous versions, which is unrealistic and easily dupable (pun intended). The download location is here: http://sourceforge.net/projects/iometer/files/iometer-devel/1.1.0-rc1/
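To see just how "dupable" a repeating pattern is, compare how each pattern compresses; any dedupe or compression engine in the data path treats them very differently. This is a minimal Python sketch, not part of IOmeter:

```python
import os
import zlib

# A repeating fill (old-IOmeter style) collapses to almost nothing,
# while random data (IOmeter 2010 "Random" pattern) barely shrinks.
BLOCK = 4096

repeating = b"\xAA" * BLOCK     # repeating test pattern
random_data = os.urandom(BLOCK) # random test pattern

rep_ratio = len(zlib.compress(repeating)) / BLOCK
rnd_ratio = len(zlib.compress(random_data)) / BLOCK

print(f"repeating pattern compresses to {rep_ratio:.1%} of original size")
print(f"random pattern compresses to {rnd_ratio:.1%} of original size")
```

A storage array that dedupes or compresses inline can "absorb" the repeating pattern almost for free, producing benchmark numbers you will never see with real data.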
- Reality check: Tests should be application-specific. Avoid creating an environment that bears little resemblance to reality. In other words, test storage performance using a storage-performance testing tool. That's why the SQL Server team created SQLIO and not a TSQL script. However, be aware that SQLIO uses a repeating pattern too and will be heavily skewed by dedupe algorithms. If you're testing primary dedupe storage, use IOmeter 2010 with the Random test pattern to get more realistic results.
- Take your time: Any test shorter than 60 minutes is pretty much a complete waste of time, as you can see in the graph in my previous blog post of a hybrid storage competitor's performance degrading over time.
- Size matters: Use a large enough data set for testing to ensure that you're testing the storage and not just the cache. 100 GB seems to be a good starting point, but larger is better.
- One at a time: It's also important to run the tests in order so that data can be cached as it would be in production. Perform random writes first; this warms up the caches and avoids unrealistically high cache misses during random read testing.
- Block Size: 4 KB is about the smallest general block size worth measuring. However, know your application and adjust accordingly, running different tests. For example, SQL Server writes to database files in 8 KB blocks and reads from them in 64 KB blocks; however, it writes to its transaction logs in variable-size blocks and almost never reads from them.
- Read/Write Weighting: The percentage of reads to writes can have a huge effect on performance in the world of flash SSD storage. It's universally accepted that flash is fast for reading; however, you don't want to ignore write performance. Your performance stats will help you understand the mix that best fits your environment. We at Nimble Storage prefer a 50/50 mix, which demonstrates our impressive read and write performance.
- Queue Depth: This is a measure of how many outstanding I/O requests are in flight at once and how well your storage handles them concurrently. While DAS performs well in a single-threaded, single-queue test, it struggles as queue depth increases, causing performance to plateau.
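The queue-depth effect follows Little's Law: outstanding I/Os = IOPS x latency. A quick back-of-the-envelope sketch in Python, using illustrative latencies rather than measured figures:

```python
# Little's Law: the most IOPS a device can deliver is bounded by
# how many I/Os you keep outstanding divided by per-I/O latency.

def max_iops(queue_depth, latency_s):
    return queue_depth / latency_s

# One outstanding I/O against a ~5 ms spinning disk (the DAS single-queue case):
print(max_iops(1, 0.005))    # 200 IOPS
# 32 outstanding I/Os against the same disk, if it can service them:
print(max_iops(32, 0.005))   # 6400 IOPS
# 32 outstanding I/Os at ~0.5 ms (flash or a hybrid cache hit):
print(max_iops(32, 0.0005))  # 64000 IOPS
```

This is why the tests below use a queue depth of 32: a single-queue test measures latency, not what the array can actually sustain under a realistic concurrent load.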
| Test Description | Block Size | Alignment | Queue Depth | Test Runtime |
|---|---|---|---|---|
| 100% Random, 100% Write Test | 4 KB | 4 KB | 32 | 90 minutes |
| 100% Random, 100% Read Test | 4 KB | 4 KB | 32 | 60 minutes |
| 100% Random, 50/50% Read/Write Test | 4 KB | 4 KB | 32 | 90 minutes |
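For context, the first test in the table can be roughly sketched in a few lines of Python against a scratch file. This is only a toy illustration with the size and runtime scaled way down, and it hits the OS page cache rather than raw storage; use IOmeter for real numbers:

```python
import os
import random
import time

# Toy sketch of a 100% random 4 KB-aligned write test against a scratch
# file. Real tests need a 100+ GB data set and 60-90 minute runtimes.
BLOCK = 4096
FILE_SIZE = 4 * 1024 * 1024   # tiny on purpose, just for the demo
RUNTIME_S = 0.5               # likewise

path = "scratch.bin"
buf = os.urandom(BLOCK)       # random pattern, not a dedupable repeating fill
fd = os.open(path, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, FILE_SIZE)

blocks = FILE_SIZE // BLOCK
ios = 0
deadline = time.monotonic() + RUNTIME_S
while time.monotonic() < deadline:
    # Seek to a random 4 KB-aligned offset, then write one block.
    os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
    ios += 1

os.close(fd)
os.remove(path)
print(f"{ios / RUNTIME_S:.0f} IOPS (page cache, not real storage)")
```

A real tool adds what this sketch omits: direct/unbuffered I/O to bypass the page cache, multiple workers to sustain the target queue depth, and latency percentiles alongside the IOPS figure.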
Here's a sample result from a Nimble Storage CS460G-X2, which has 12 hard drives and 4 flash SSDs in a 3U array. Note that both read AND write performance are equally high during this 100% random 50/50 read/write test. This simply isn't possible with any other hybrid storage platform, and even the flash-only solutions that can deliver the performance suffer dramatic capacity loss. This array is also the same model that regularly replaces an entire rack of competitor SAN storage on both performance and capacity.
I am also regularly asked about primary storage dedupe and its impact on performance and capacity. It simply isn't the panacea that those vendors claim it to be. There are certainly specific instances where duplication is rampant, but it's rarely genuinely problematic. VDI deployments tend to suffer more than most from duplication, but both VMware and Citrix perform deduplication very efficiently at the software/hypervisor level, so there's little value in also performing dedupe in storage. You also have to ask a primary dedupe vendor why there's an option to disable dedupe if it's so earth-shakingly great. Ask a few of their customers and you'll also find that they don't recommend the feature for performance-intensive applications.
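If you want to estimate what dedupe would actually buy you on your own data, the usual approach is to hash fixed-size blocks and count unique hashes. A minimal Python sketch, using a synthetic, hypothetical data set standing in for a VDI-like workload where some blocks repeat across images:

```python
import hashlib
import os

# Estimate a block-level dedupe ratio: total blocks / unique blocks.
BLOCK = 4096

common = [os.urandom(BLOCK) for _ in range(8)]   # blocks shared across "VMs"
unique = [os.urandom(BLOCK) for _ in range(64)]  # blocks appearing only once
dataset = common * 8 + unique                    # 64 duplicated + 64 unique blocks

unique_hashes = {hashlib.sha256(b).hexdigest() for b in dataset}
ratio = len(dataset) / len(unique_hashes)
print(f"dedupe ratio: {ratio:.2f}:1")  # 128 blocks / 72 unique = 1.78:1
```

Run the same hashing exercise over a sample of your real volumes before believing any vendor's dedupe-ratio slide; if the ratio on your data is modest, the feature's overhead isn't buying you much.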
Our engineering team here at Nimble Storage has extensive experience with deduplication, having helped create the technology at Data Domain. Thus, we would be the first to implement it if there were a significant bang for the buck, but it just isn't there yet. Microsoft also recognized this when it dropped primary dedupe in Exchange 2010, opting for compression only. Compression is a much more efficient way to save space, as both companies attest based on real-world usage stats. If that ever changes, you'll see us implement primary dedupe, but there's a long list of features ahead of it based on customer demand.
One final note on the "other" hybrid storage vendors: they're just slapping flash onto legacy SAN technology. Look behind their marketing curtain and you'll see that their performance still depends on adding more disk drives. Nimble Storage is the only storage vendor whose performance scales with CPU rather than the number of disk drives. Give us a call or check out a webinar to learn more about how we've changed the storage game.