Recently one of my customers asked me to configure their array(s) to use a dedicated interface for replication. My immediate thought was to use the VLAN tagging capability in 2.1 (you'll learn about this in much more detail in the upcoming 2.1 blog series from my colleague Bill Borsari). However, this wasn't possible: the customer, in this instance, wished to dedicate a physical port and switch infrastructure to replication. This was further complicated by the fact that we had a 10GbE G array with two data networks (as this was a SmartStack configuration). Our network design was therefore to implement the following:
eth1: Dedicated Management
eth2: Dedicated Replication
tg1: iSCSI Data A
tg2: iSCSI Data B
Note: My preference would be to use VLANs, as a cable or switch failure on eth1 or eth2 will cause the array to pass control to the standby controller. In a VLAN configuration, in the event of a port or switch failure, the traffic will simply fail over between eth1 and eth2. However, our remit here was to use a dedicated interface, as the customer was using a pair of dedicated switches for replication.
Nimble OS 2.0 provided the user with the option to configure replication to use the management or data networks. This was also possible in Nimble OS 1.4, but it required Support to configure it remotely. Nimble OS 2.1 extends the 2.0 capability by letting you define which networks are carried over the available ports. This is how we configured it for this particular customer (please note: I've had to obfuscate some of the screenshots as this was a customer system):
Firstly, we needed to unconfigure the eth2 port. This can be done by creating a draft configuration or by editing the active running network configuration in the Network Configuration section. In the following example I was commissioning the array, so I performed it on the Active configuration (the array was also running 2.1.4):
Select the interface tab, select eth2 and then press Unconfigure to remove it from the Management network.
Next select the Subnets tab. We are now going to create a dedicated subnet for replication and associate that subnet to the eth2 interface.
Clicking on the Subnets tab shows you the currently configured subnets (Management, iSCSI A and iSCSI B in my example). Click the Add button to create a new subnet and define its properties...
This may seem a little odd, as we are effectively creating a 'third' data network. As such we will need to supply a Discovery IP address and Target IP addresses; these will not be used for volume access, as we are simply using the IP addresses for replication. Set up your IP addresses for the replication network, label the subnet appropriately, and then select the interface to carry this network (eth2 in my instance).
Note: Clearly you have to do this on both partner arrays consistently and your IP addresses need to be unique.
Click Done and Apply your changes.
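With the changes applied on both arrays, it's worth confirming that each array's new replication IPs can actually reach their counterparts across the dedicated switches before moving on. A minimal sketch from a host on the replication subnet (the IP addresses below are hypothetical placeholders, not values from the original configuration):

```shell
#!/usr/bin/env bash
# Hypothetical replication-subnet addresses on the partner array;
# substitute the Discovery/Target IPs you assigned to eth2.
PARTNER_REPL_IPS="192.0.2.20 192.0.2.21"

# Simple ICMP reachability check for one address.
repl_ping() {
    if ping -c1 -W2 "$1" >/dev/null 2>&1; then
        echo "$1 reachable"
    else
        echo "$1 unreachable"
    fi
}

for ip in $PARTNER_REPL_IPS; do
    repl_ping "$ip"
done
```

A failure here usually points at cabling or the dedicated replication switches rather than the array configuration itself.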
Once the replication subnet is defined and linked to the eth2 interface, you simply need to configure your replication partners to use the associated IP addresses and subnet (again, this needs to be done on each partner).
Click on Manage > Protection > Replication Partners... then create or edit the partner appropriately:
From the above you can see I am providing the IP address of the remote array on the replication subnet and then selecting the option to use data IPs for the replication traffic (the three data networks will be listed: iSCSI A, iSCSI B and the newly created replication network; pick that one). Click Save and then test that the two arrays can successfully communicate with one another.
Gotcha: Replication cannot be completely divorced between management and data. The steps above allow the replication traffic to be sent over the data network; however, replication control traffic will still route over the management network. There is therefore still a requirement for the management network between the two arrays to be routable in order for replication to work correctly. Consequently, the replication partner's hostname has to be configured as the management IP address, or as a DNS name resolving to the management IP address of the partner (not a data subnet IP). I would also point out that the required TCP ports need to be opened as per the Firewall Ports section in the administration guide.
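To confirm that the management path really is open between the partners, you can probe the relevant TCP ports from a host on the management network. A hedged sketch using bash's built-in /dev/tcp redirection (the IP address and port list below are placeholder examples; take the authoritative port list from the Firewall Ports section of the administration guide for your Nimble OS version):

```shell
#!/usr/bin/env bash
# Hypothetical management IP of the partner array and example ports;
# consult the administration guide's Firewall Ports section for the
# definitive list for your Nimble OS release.
PARTNER_MGMT_IP="192.0.2.10"
PORTS="4213 4214"

# Report whether a TCP port accepts connections, with a 2-second timeout.
port_check() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed"
    fi
}

for p in $PORTS; do
    port_check "$PARTNER_MGMT_IP" "$p"
done
```

If a port shows as closed, check any firewalls between the management networks before suspecting the array configuration.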
Hopefully I will follow this blog up in the coming weeks with the VLAN tagging use case... until then, thanks for reading and as ever post any comments or questions below!