
Use your Nimble for CommVault Storage Pools

Blog Post created by rugby01 on Oct 8, 2015

I’ve been doing some digging the last couple of weeks after being asked by more than one customer, “Can I use my Nimble array as a backup target?”

 

Why wouldn't you want amazingly fast storage, compression, and the ability to restore files?  The thing is, if you can use free, no-impact snaps, use them – but not everyone can move everything to their Nimble array. Some customers bought five years of support on their existing arrays and can’t just throw that gear out. So the answer is a resounding – yes!

 

For this blog I looked at the existing storage partners for CommVault (CV) and found the solutions to be a 20-year-old architecture with one twist – dedicated flash drives.  The basic configuration for de-duplication storage is lots of SATA disk for the main pool and a few SSD drives to hold the de-dup metadata. That solution has two problems: it needs dedicated SSD drives just to speed up the de-dup lookups, and the SATA pool still has high latency and can be extremely slow under heavy load – like a full recovery. If you have ever used SATA drives for backup storage pools, you know these “work” but are not the best solution.
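
To see why that metadata matters so much, here is a toy sketch of block-level de-duplication – nothing like CV’s actual engine, just an illustration of the hash-table lookup that happens on every single block (this is the metadata those dedicated SSDs exist to serve):

import hashlib

def dedup_store(stream, block_size=128 * 1024):
    # Toy block-level dedup: hash each block, keep only unique ones.
    # The `index` dict plays the role of the de-dup metadata that gets
    # parked on dedicated SSDs -- every block triggers a small random
    # lookup here, which is exactly the I/O pattern SATA is worst at.
    index = {}           # hash -> location of the unique block
    unique_blocks = []   # the actual data pool (the SATA tier)
    written = 0
    for offset in range(0, len(stream), block_size):
        block = stream[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in index:          # metadata lookup on every block
            index[digest] = len(unique_blocks)
            unique_blocks.append(block)
            written += len(block)
    return len(stream) / max(written, 1)

sample = b"A" * (1024 * 1024) + b"B" * (1024 * 1024)  # highly redundant data
print(f"dedup ratio: {dedup_store(sample):.1f}:1")    # -> 8.0:1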

 

I talked with the local CV team and we ran their benchmark in my lab on my CS210.  The results were amazing, and we think this is going to be a huge winning configuration for both teams. The smallest array in our quiver delivered 3.3 TB per hour write and 8.3 TB per hour read performance. Now, the CV test is not a long test, so the read result looks a little questionable and could be a result of all blocks coming from SSD – but that could just as well be the case in production. We ran the test over and over and the results were pretty consistent.

 

What you have to remember when talking about de-duplicated backups is that restores bring EVERYTHING back.  Incremental-forever backups are great, and de-duplicated full backups do save time, but if you have to restore 2 TB of data to an empty target, you need to transfer 2 TB of data. Those blocks are going to be scattered all over the array, and searching for blocks on SATA disk is slow. Most restores from backup software take 2x–6x+ the backup time depending on the number of files, their size, and the network.
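
Some quick back-of-envelope math makes the point. The three throughput figures below are illustrative assumptions, not measured numbers:

# Back-of-envelope restore math for the 2 TB example above.
# The three throughput figures are illustrative assumptions only.
RESTORE_TB = 2.0
MB_PER_TB = 1_000_000

for label, mb_per_s in [("busy SATA pool", 100),
                        ("idle SATA pool", 300),
                        ("hybrid flash array", 900)]:
    hours = RESTORE_TB * MB_PER_TB / mb_per_s / 3600
    print(f"{label:>18}: ~{hours:.1f} h to move {RESTORE_TB:.0f} TB")
# -> ~5.6 h, ~1.9 h, ~0.6 h respectively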

 

The array one customer was looking to purchase was a 24-SATA-drive plus 4-SSD configuration – I’m assuming RAID-5 for the SATA and RAID-10 for the SSDs.  The SSDs are dedicated to the CV metadata LUN, so they don’t count toward backup performance and would only deliver around 5,000 IOPS for metadata. The 24 SATA drives would be the biggest bottleneck, able to provide only ~1,600 IOPS. The example performance numbers from CV show the pool writing at 550 MB/s and reading at 608 MB/s. Moving this solution to a Nimble CS2XX series array would increase performance for both the metadata LUNs and the pool LUNs.
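
That ~1,600 IOPS figure is easy to sanity-check with the classic spindle-count estimate. The per-drive IOPS and RAID-5 write penalty below are textbook rules of thumb, not measured specs:

# Spindle-count sanity check for the 24-drive SATA pool.
DRIVES = 24
IOPS_PER_SPINDLE = 70        # rule of thumb for 7.2K RPM SATA, random I/O
RAID5_WRITE_PENALTY = 4      # read data + parity, write data + parity

read_ceiling = DRIVES * IOPS_PER_SPINDLE
write_ceiling = read_ceiling / RAID5_WRITE_PENALTY

print(f"random read ceiling : ~{read_ceiling} IOPS")        # ~1,680 -- near the ~1,600 above
print(f"random write ceiling: ~{write_ceiling:.0f} IOPS")   # ~420 once parity is paid for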


The competitor’s configuration we reviewed for the CV solution would have a hard time keeping up in a busy environment.  The CS2XX series delivers 15,000 sustained IOPS without managing tiers of storage (or should I say tears?).  We have an excellent partnership with CV and are on their hardware integration list for IntelliSnap.


We would suggest replacing the proposed storage array for the following reasons.  We would deliver roughly 9x the IOPS of the SATA pool (15,000 vs. ~1,600) and, per our test below, roughly 1.7x their quoted write throughput and nearly 4x their read throughput.  The other solution requires CV to do the compression, which adds a serious load to the CV servers; we would let them turn off compression on the server and use the array for compression instead, and we estimate a 33% reduction in server CPU requirements with compression disabled. Our solution is smaller and takes less power. You get our all-in license model, which includes replication, snaps (restore points for metadata and backups), clones, compression, encryption, and enterprise monitoring. And of course the most compelling reason – according to the customer, we are cheaper.

 

From what I can tell, the current storage partners for CV’s storage pools are just trying to make money, not really helping customers solve problems.

Here are some details on what I would propose. 

Nimble CS235

  • 15,000 IOPS
  • 2 x 10GbE iSCSI
  • RAID-6
  • ~26 TB effective capacity (see the quick math after this list)
  • Ability to pin ~330 GB to flash (to hold the CommVault de-dup hash table)
  • All-in software licensing, including snaps, encryption, replication, and monitoring, with licenses usable on other systems
  • Ability to expand to ~200 TB capacity
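
Combining that ~26 TB effective capacity with the throughput from the DiskPerf run below gives a feel for the worst-case “restore everything” window – this is pure arithmetic on the numbers already in this post:

# Worst-case full-pool read/write window for the proposed CS235,
# using only the capacity and throughput figures quoted in this post.
EFFECTIVE_TB = 26.0
READ_TB_PER_HOUR = 8.337     # 8337.62 GB/H from the DiskPerf run below
WRITE_TB_PER_HOUR = 3.353    # 3353.56 GB/H from the DiskPerf run below

print(f"full-pool read : ~{EFFECTIVE_TB / READ_TB_PER_HOUR:.1f} hours")   # ~3.1 h
print(f"full-pool write: ~{EFFECTIVE_TB / WRITE_TB_PER_HOUR:.1f} hours")  # ~7.8 h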

 

 

Nimble/CommVault disk performance results. 

 

DiskPerf Version        : 1.3
Path Used               : F:\DISK01
Read-Write type         : SEQUENCE
Block Size              : 524288
Block Count             : 4096
File Count              : 6
Total Bytes Written     : 12884901888
Time Taken to Write(S)  : 12.88
Throughput Write(GB/H)  : 3353.56
Total Bytes Read        : 12884901888
Time Taken to Read(S)   : 5.18
Throughput Read(GB/H)   : 8337.62
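
As a sanity check, the GB/H figures follow directly from the byte counts and timings above. DiskPerf’s “GB” appears to be the binary 2^30 bytes, and the tiny differences come from the rounded seconds it prints:

GIB = 2 ** 30
total_bytes = 524288 * 4096 * 6    # block size x block count x file count = 12 GiB

for label, seconds in [("write", 12.88), ("read", 5.18)]:
    gb_per_hour = total_bytes / GIB / seconds * 3600
    print(f"{label}: {gb_per_hour:.2f} GB/H")
# -> write: 3354.04 GB/H, read: 8339.77 GB/H (matches DiskPerf within rounding)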
