For IT architects, what are the key considerations when deciding whether to replicate data at the application level or at the storage level?

 

Many business applications today have some form of replication built in, which varies by application and often by product version. For instance, consider some of the widely used Microsoft applications available today:

 

  • Exchange Server – Exchange 2010 and 2013 both include Database Availability Group (DAG) functionality to replicate active database copies to additional replica copies. This is included in both the Standard and Enterprise editions of Exchange; however, Standard limits you to five databases per server (active or replica), whereas Enterprise allows up to 100 per server (Cumulative Update 2 is required for this limit on Exchange 2013).

    http://technet.microsoft.com/en-us/library/bb232170(v=exchg.150).aspx

  • SQL Server – SQL Server 2012 and 2014 both include AlwaysOn Availability Group functionality to replicate one or more primary databases to secondary copies. Both versions require Enterprise edition to support this. Of course, SQL Server offers other replication mechanisms as well. See the links below for the SQL Server 2012/2014 edition feature lists.

    http://msdn.microsoft.com/en-us/library/cc645993.aspx#High_availability

    http://msdn.microsoft.com/en-us/library/cc645993(v=sql.120).aspx#High_availability

  • SharePoint Server – SharePoint data is typically stored in SQL Server, so the same HA/DR replication options for SQL Server apply to SharePoint as well. Multiple web and application servers can be deployed and load balanced, and they can also be replicated at the hypervisor level if virtualized.

 

With so much replication functionality available at the application level, why would you need SAN-level replication at all?

First, this isn’t an all-or-nothing decision. In many customer environments it makes sense to leverage application replication for some apps and SAN replication for others. I’ve only discussed Microsoft applications here, but customers typically run a mix of Microsoft and non-Microsoft applications, hypervisors, and databases. Often the answer is to consider data on an application-by-application basis and choose the best replication tool for each.
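That per-application approach can be sketched as a simple rule of thumb. The helper below is my own illustration (the application names, attributes, and rules are hypothetical, not an official sizing tool):

```python
# Hypothetical decision helper illustrating the per-application approach
# described above. The attribute names and rules are my own invention.

def choose_replication(app):
    """Return a replication approach for one application profile (a dict)."""
    if app.get("needs_zero_rpo") and app.get("supports_sync_replication"):
        return "application (synchronous)"  # e.g. a stretched availability group
    if app.get("has_builtin_replication") and app.get("tight_rto"):
        return "application"                # built-in failover is usually faster
    return "storage (SAN)"                  # one consistent tool for everything else

# Hypothetical application portfolio.
portfolio = {
    "payroll-db": {"needs_zero_rpo": True, "supports_sync_replication": True},
    "mail":       {"has_builtin_replication": True, "tight_rto": True},
    "legacy-app": {},                       # no built-in replication at all
}

decisions = {name: choose_replication(app) for name, app in portfolio.items()}
```

The point of the sketch is only that the decision is made per application, not once for the whole environment.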


The great news is that Nimble Storage replication isn’t an additional licensed add-on; it’s included at no extra cost with each array. The same is true of every other software feature Nimble offers: snapshots, zero-copy clones, and application and hypervisor integration are all included.


So choosing to leverage it for all or a subset of applications isn’t a cost consideration, as it might be with some vendors. Better still, if you do choose to use Nimble Storage replication for some data, you can leverage the efficiency it brings: only changed data blocks are sent across the wire, at a granularity as fine as 4KB, and they are compressed before transmission. You can choose which snapshots to replicate, the number to retain on the source and destination volumes (which can differ), and set a schedule for when replication runs and how much bandwidth it may use. With Nimble Storage you can replicate as frequently as every five minutes, and for many applications a five-minute recovery point objective (RPO) is more than sufficient. You can fail over storage with the click of a button and reverse the replication direction automatically.
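A quick back-of-envelope calculation shows why change-only, compressed replication matters. The 4KB block granularity comes from the text above; the change rate and 2:1 compression ratio below are illustrative assumptions, not measured figures:

```python
# Back-of-envelope estimate of per-interval replication traffic for a
# change-only, compressed scheme. Change rate and compression ratio are
# illustrative assumptions.

BLOCK_SIZE = 4 * 1024  # bytes; the replication granularity

def bytes_sent_per_interval(changed_blocks, compression_ratio=2.0):
    """Only changed blocks cross the wire, and they are compressed."""
    return changed_blocks * BLOCK_SIZE / compression_ratio

volume_bytes = 1024 ** 4                  # a 1 TiB volume
volume_blocks = volume_bytes // BLOCK_SIZE
changed = int(volume_blocks * 0.001)      # assume 0.1% of blocks change per interval
sent = bytes_sent_per_interval(changed)   # roughly 0.5 GiB instead of 1 TiB
```

Under these assumptions each five-minute interval ships around half a gigabyte rather than the full terabyte a naive copy would move.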

And don’t forget that you can always create zero-copy clones from both your production and replicated storage. I particularly like the ability to clone out datasets from a DR copy of the data: the investment in DR can be leveraged for multiple test/dev environments without requiring a complete copy of the data for each, since clones consume no space at creation time and grow only as block-level changes are written.
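The reason a clone is free at creation is copy-on-write. Here is a simplified model (my sketch, not the array’s actual on-disk implementation) showing that a clone shares its parent snapshot’s blocks and allocates space only for blocks written afterwards:

```python
# Simplified copy-on-write model of zero-copy cloning (an illustration,
# not the actual array implementation).

class Snapshot:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block index -> data; shared, read-only

class Clone:
    def __init__(self, snapshot):
        self.base = snapshot         # shares the snapshot's blocks at creation
        self.delta = {}              # only blocks written after cloning

    def read(self, idx):
        return self.delta.get(idx, self.base.blocks.get(idx))

    def write(self, idx, data):
        self.delta[idx] = data       # only now is new space allocated

    def space_consumed(self, block_size=4096):
        return len(self.delta) * block_size

snap = Snapshot({0: b"prod-data", 1: b"more-data"})
clone = Clone(snap)                  # consumes no space yet
clone.write(1, b"test-change")       # now consumes one 4 KB block
```

Ten test/dev clones of the same DR snapshot would each start at zero consumed space and diverge only by what they write.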


Lastly, if you leverage VMware Site Recovery Manager (SRM), Nimble provides a Storage Replication Adapter (SRA) that plugs into SRM. This leverages Nimble SAN replication to move data from the production site to the DR site and provides an automated failover mechanism between sites.

 

So where might I choose Microsoft application-level replication over Nimble Storage SAN-level replication?


If a customer has two sites, it’s often possible to stretch an Exchange DAG between them (provided the round-trip latency is less than 500ms). This architecture reduces the recovery time objective (RTO), because activating a passive database copy at the DR site is likely quicker than promoting DR storage and connecting it to DR servers.
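Before committing to a stretched DAG, you’d want to verify the inter-site round-trip latency against that 500ms requirement. A minimal sketch, using TCP connect time as a rough proxy for RTT (any host/port you pass would be a reachable endpoint at the DR site; none is assumed here):

```python
# Rough inter-site latency check against the 500 ms DAG stretch requirement.
# TCP connect time is used as a simple proxy for round-trip time.
import socket
import time

def measure_rtt_ms(host, port, samples=5):
    """Average TCP connect round-trip time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += (time.perf_counter() - start) * 1000.0
    return total / samples

def dag_stretch_ok(rtt_ms, threshold_ms=500.0):
    """A stretched DAG is supported when round-trip latency is under 500 ms."""
    return rtt_ms < threshold_ms
```

A dedicated WAN measurement tool would give a better answer, but this kind of sanity check catches links that are nowhere near the threshold.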

Likewise, a SQL Server AlwaysOn Availability Group can be stretched across sites and can provide synchronous replication from the primary database(s) to one or more secondary copies. For SQL database applications where data loss in DR must be zero, this is a great solution.
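The zero-data-loss property of synchronous replication comes down to when the write is acknowledged. A toy model (my simplification for illustration, not SQL Server internals) makes the contrast explicit:

```python
# Toy model contrasting synchronous and asynchronous replication.
# A simplification for illustration, not SQL Server internals.

def write_sync(primary, secondary, record):
    """Acknowledge only after the secondary has hardened the record: RPO = 0."""
    primary.append(record)
    secondary.append(record)   # on the secondary before the client sees an ack
    return "ack"

def write_async(primary, ship_queue, record):
    """Acknowledge immediately; the record ships later and can be lost."""
    primary.append(record)
    ship_queue.append(record)  # still in flight if the primary fails now
    return "ack"

# With synchronous writes, every acknowledged record already exists on the
# secondary, so a primary failure loses no acknowledged data.
primary, secondary = [], []
for record in range(3):
    write_sync(primary, secondary, record)
```

The trade-off, of course, is that synchronous commits add the inter-site round trip to every write, which is why it suits metro distances rather than long-haul links.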

Moving file services between sites is simple when using a combination of DFS Namespaces and DFS Replication. The alternative would be to replicate both the file server and its storage, so that in DR the users’ file paths are the same as in production.

In Hyper-V environments with Cluster Shared Volumes (CSV), you typically have multiple virtual machines co-located on the same Nimble Storage volume. Invoking DR for all of those virtual machines is easily achieved by replicating the volume at the storage level. However, what if you want the ability to choose which site an individual virtual machine runs from? With Hyper-V Replica you can fail over on a per-VM basis.

 

Replicate data at the application level or at the storage level? The answer depends on the environment, the applications, the RTO/RPO required, and what you’re trying to achieve. Both application and SAN level replication coupled with Nimble Storage can help you meet your service level needs.

Please post comments below, or if you’d prefer a longer conversation speak to your local Nimble SE.