Continuing the 'Introduction to Nimble OS 2.0' series, today I will be covering “Replication Changes in 2.0”.


In NOS 1.x, replication traffic used the Management network by default.  You could engage Nimble Storage support to alter this, but it was never user configurable, and subsequent NOS upgrades meant further intervention from support to preserve any manual changes they had made.


The good news is that with 2.0, we have built in the ability for the user to choose which network is used for replication.


Note: If you have used support to change your replication network, you should contact them before upgrading to 2.x so that they can advise you how to proceed.


Let’s look at the changes:


Here is my array setup: a Production group of two arrays in a scale-out configuration.

This group is configured to replicate to a DR array; looking at the Replication Partner shows us how this is configured:

If we look at this Replication setup (I click on the link for the DR Array):

This shows us that the array is configured to use the Management network for replication traffic. This is the default, and it is what you inherit automatically if you upgrade to NOS 2.x having used the previous defaults.
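
To make that default concrete, here is a minimal sketch in Python of how the partner setting maps to the subnet that carries replication traffic. The class and attribute names are mine, purely for illustration; this is not Nimble's API or CLI.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical model of a group's subnets and a replication partner setting.
@dataclass
class Subnet:
    name: str
    network: str          # e.g. "10.206.9.0/24"
    is_management: bool

@dataclass
class ReplicationPartner:
    name: str
    use_data_ips: bool = False        # NOS 2.0 default: replication rides the management subnet
    data_subnet: Optional[str] = None

def replication_subnet(partner: ReplicationPartner, subnets: List[Subnet]) -> Subnet:
    """Return the subnet that would carry replication traffic to this partner."""
    if partner.use_data_ips and partner.data_subnet:
        return next(s for s in subnets if s.name == partner.data_subnet)
    # The default, and what a 1.x group using the old defaults inherits after upgrading to 2.x.
    return next(s for s in subnets if s.is_management)

subnets = [
    Subnet("Management", "10.206.9.0/24", True),
    Subnet("iSCSI", "10.206.10.0/24", False),
]
print(replication_subnet(ReplicationPartner("DR-Array"), subnets).name)  # -> Management
```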


I have no Replication QoS Policy set, but QoS is unaffected by any changes you make to the replication network.


Important Note: You should always use the management subnet for replication when any of the following conditions apply (a small sketch of this decision follows the list):


  • Your data IPs are not routable across the network
  • You want to separate replication traffic from iSCSI traffic
  • You are replicating between Nimble arrays running NOS 1.x and arrays running NOS 2.x
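
As a quick illustration of that decision, here is a small, purely hypothetical helper that chooses the management subnet whenever any of the conditions above apply:

```python
def must_use_management_subnet(data_ips_routable: bool,
                               separate_replication_from_iscsi: bool,
                               replicating_between_1x_and_2x: bool) -> bool:
    """Return True if replication should stay on the management subnet.

    Mirrors the three conditions listed above; purely illustrative.
    """
    return (not data_ips_routable
            or separate_replication_from_iscsi
            or replicating_between_1x_and_2x)

# Example: data IPs are routable, we do not need to split traffic,
# and both groups run NOS 2.x, so the data subnet is an option.
print(must_use_management_subnet(True, False, False))  # -> False
```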


So let’s look at how we can change which network is used for replication:


If we look at my Active Network configuration for the Production group, we can see I have the 10.206.9.x (Management) and 10.206.10.x (iSCSI) subnets configured.

For this environment we are using the same subnets on both the Production and DR arrays, and I want to use the data network for replication instead of the Management network.
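
Before switching, it is worth confirming that the data subnet you plan to use actually exists on both groups. Here is a minimal sketch of that sanity check; the subnet lists would come from your own records or tooling, and the helper below is illustrative rather than a Nimble command:

```python
def common_data_subnets(prod_subnets, dr_subnets, management):
    """Data subnets configured on both groups (management subnet excluded)."""
    return (set(prod_subnets) & set(dr_subnets)) - {management}

# In my lab both groups carry the same two subnets:
prod = {"10.206.9.0/24", "10.206.10.0/24"}
dr = {"10.206.9.0/24", "10.206.10.0/24"}
print(common_data_subnets(prod, dr, management="10.206.9.0/24"))
# -> {'10.206.10.0/24'}
```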


Recommendation: Pause replication before changing any replication parameters.  Remember to do this on both ends of the replication link.

Now edit your Replication Partner; here you can select “Use data IPs for replication traffic”.

Note: You need to change this on both ends of the link.

If you have multiple data networks configured, you can choose which of those data subnets to use for replication traffic from the drop-down menu.
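
Putting the change itself together, the order of operations is symmetric: pause, then edit the partner, on each end. The sketch below uses placeholder functions (pause_replication and set_replication_subnet are stand-ins for the GUI steps described above, not Nimble commands):

```python
def pause_replication(group, partner):
    # Placeholder: pause the replication partnership on this end.
    print(f"[{group}] pausing replication to {partner}")

def set_replication_subnet(group, partner, subnet):
    # Placeholder: tick "Use data IPs for replication traffic", pick the subnet, save.
    print(f"[{group}] replication to {partner} now uses data subnet {subnet}")

def switch_to_data_subnet(ends, subnet):
    """Apply the change in the same order on BOTH ends of the link."""
    for group, partner in ends:
        pause_replication(group, partner)
    for group, partner in ends:
        set_replication_subnet(group, partner, subnet)

# The Production group replicates to DR-Array and vice versa.
switch_to_data_subnet([("Production", "DR-Array"), ("DR-Array", "Production")],
                      subnet="10.206.10.0/24")
```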


Spoiler Alert: Watch out for VLAN support in NOS 2.1, where this becomes even more configurable, allowing a separate VLAN to be used for replication traffic.


Once you have chosen the appropriate data subnet on both sides, remember to save your changes on both, then resume replication and test the link on both ends.
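
Continuing the sketch above, the remaining steps are symmetric too; resume_replication and test_partner are again hypothetical placeholders for the corresponding GUI actions:

```python
def resume_replication(group, partner):
    # Placeholder: resume the paused partnership on this end.
    print(f"[{group}] resuming replication to {partner}")

def test_partner(group, partner):
    # Placeholder: the "Test" action; returns True if the link checks out.
    print(f"[{group}] testing replication link to {partner}")
    return True

def finish_change(ends):
    for group, partner in ends:
        resume_replication(group, partner)
    for group, partner in ends:
        if not test_partner(group, partner):
            raise RuntimeError(f"Replication test failed on {group} -> {partner}")

finish_change([("Production", "DR-Array"), ("DR-Array", "Production")])
```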


If you are replicating from or to a scale-out pool, you may also see the following status message when you resume and test your replication settings. This is not a cause for concern; it reflects an internal synchronization of the changes made within the scale-out group, and the status should change to “synchronized” after a few minutes (if it does not, call support).


Note: When I tested this on my lab setup, which is a 2-node scale-out Production group replicating to a single DR array, I only saw the “Not Synchronized” status message on the Production group; as the DR array was not in a scale-out pool, I saw no such message there.

When the arrays have completed the internal synchronization, this is recorded in the Event Log and the status changes to “synchronized”.
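
If you script your checks, the waiting step boils down to polling the partner status for a few minutes. Here is a hedged sketch, where get_partner_status stands in for however you read that status in your environment:

```python
import time

def get_partner_status(group, partner):
    # Placeholder: return the current partner status string for this end.
    return "synchronized"

def wait_for_sync(group, partner, timeout_s=300, interval_s=15):
    """Poll until the status reads "synchronized" or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_partner_status(group, partner).lower() == "synchronized":
            return True
        time.sleep(interval_s)
    return False   # still "Not Synchronized" after a few minutes: time to call support

print(wait_for_sync("Production", "DR-Array"))  # -> True with this stub
```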

This post has focused on the changes we have made to the replication network.  It is the penultimate post in the current series; the next one is about IP Address Zones and will be posted shortly. Please leave some feedback if you’d like to see more!