This article will discuss Nimble's new VLAN functionality.  It will cover why VLANs are useful, how to implement them on a Nimble array, considerations for hosts accessing the array, and replication to another Nimble.

 

First, what is a VLAN? In the simplest terms, a VLAN is a Virtual Local Area Network.  While VLANs add more complexity to the networking environment, they provide many benefits.  Foremost is network isolation, or segmentation, which provides security and access control.  VLANs allow several different virtual networks to co-exist on the same physical network connection, and this segmentation lets those networks operate without any ability to communicate between them.  For example, one department on VLAN 1 would be fully segregated from another department on, say, VLAN 2.  By using different VLANs there is no interference or access between departments.  This is especially useful for technology like SMB, which relies heavily on network broadcasts.  Another benefit of VLANs is network management.  With VLANs it becomes easy to provision new network segments for each department.  Before VLANs it would have been necessary to create a physical interface for each network segment; with VLANs the segment becomes a virtual adapter on the host, or can be handled per port on the switch.  By including VLAN support, a Nimble array can exist on several different networks in a secure multi-tenant way, such that each user or department is isolated for improved security and management.

 

To enable VLAN support on the Nimble array, 2.1.X or higher must be installed and the "iSCSI Host Connection Method" must be set to "Automatic."  The key Nimble concept related to VLANs is the Subnet.  In the Nimble array, the Subnet is the entire definition of the network.  In previous software versions all network operations were tied to the physical interfaces.  With the 2.1 release that physical connection is moved to a virtual one by way of 802.1Q VLAN tagging.  The Subnet definition is what defines all the parameters related to the virtual ports.  These parameters cover all aspects of the network segment, including discovery IP, traffic type (Management, iSCSI Data, or Group), MTU, and VLAN tag [Figure 1].  The Nimble also provides a way to label the Subnet for easy identification.

 

figure1.jpg

Figure 1
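To make the Subnet concept a little more concrete, here is a rough Python sketch of the kind of information a Subnet definition holds: a label, the network itself, a discovery IP, a traffic type, an MTU, and an 802.1Q VLAN tag.  This is only a conceptual model, not Nimble's API, and the field names and example values (the network, addresses, and VLAN 120) are made up for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class TrafficType(Enum):
    DATA_ONLY = "Data Only"
    MGMT_ONLY = "Management Only"
    MGMT_AND_DATA = "Management + Data"

@dataclass
class Subnet:
    """Conceptual model of a Nimble Subnet definition (not the actual array API)."""
    label: str                   # friendly name for easy identification
    network: str                 # network in CIDR form, e.g. "10.18.120.0/24"
    discovery_ip: Optional[str]  # iSCSI discovery IP for data subnets
    traffic_type: TrafficType    # Management, Data, or both
    mtu: int = 1500              # standard or jumbo frames
    vlan_id: int = 0             # 802.1Q tag; 0 means untagged/native

# Example: a tagged iSCSI data subnet using jumbo frames (all values hypothetical)
esx_iscsi = Subnet(
    label="ESX-iscsi",
    network="10.18.120.0/24",
    discovery_ip="10.18.120.50",
    traffic_type=TrafficType.DATA_ONLY,
    mtu=9000,
    vlan_id=120,
)
```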

 

There are also two new subnet features added to the Nimble array: Traffic Types and IP Address Zones.  First, the Traffic Type controls what type of data is allowed to flow across that network.  The types are Data Only, Management Only, and Management + Data.  An important note: only one management subnet can be configured for the Group.  If there is an existing Management Only or Management + Data subnet, a second subnet with that Traffic Type cannot be added.  When a Data traffic type is selected, the "Traffic Assignment" setting becomes active.  This tells the array how to use that network.  The options are iSCSI, Group, and iSCSI + Group; a conceptual sketch of the one-management-subnet rule follows the list below:

 

     iSCSI Data: iSCSI client traffic

     Group: Nimble scale-out traffic between the arrays in the Group. 
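
Building on the conceptual Subnet sketch above, the snippet below illustrates the one-management-subnet-per-group rule.  It is only an illustration of the rule, not how the array enforces it; the function name and behavior are assumptions.

```python
def validate_new_subnet(existing: list[Subnet], new: Subnet) -> None:
    """Illustrative check: a Group may only have one management-capable subnet."""
    mgmt_types = {TrafficType.MGMT_ONLY, TrafficType.MGMT_AND_DATA}
    if new.traffic_type in mgmt_types and any(s.traffic_type in mgmt_types for s in existing):
        raise ValueError(
            "A Management or Management + Data subnet already exists for this Group; "
            "only one is allowed."
        )
```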

    

The other new feature is "IP Address Zones."  This setting controls how Nimble manages IP assignments when using the "Automatic" mode.  To recap, the 2.X version of the Nimble array added scale-out capabilities: the ability for more than one array to exist in a group and also service a volume.  That ability for one volume to live on two different Nimble arrays opens a host of new features and capabilities, as well as allowing for greater performance and capacity in a non-disruptive way.  To make this new magic work, Nimble created the NCM host plug-in.  This plug-in allows the Nimble Group to communicate important details to the hosts, so when two arrays are merged together the host can update its network connections to the Nimble to include both devices.  To ensure optimal performance, the Nimble array can make certain that when hosts connect they always connect to the correct ports on the array.  This is called "IP Address Zones."  The zones ensure that if there are two switches in the environment, iSCSI traffic flows across the right switches.  Put another way, when the network has two switches and those switches are linked, some packets might travel down the Inter-Switch Link (ISL); some vendors, such as Cisco with its Nexus line, recommend not using those links for any client traffic.  With "IP Address Zones" Nimble is able to ensure that hosts always connect to the right link and switch, keeping traffic flow optimal.  The best description of how to set this up comes from the little "i" next to the header [Figure 2]; hovering over it looks like this:

 

Figure2.jpg

Figure 2
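One way to picture what an address zone does is to split the subnet's address range into two halves and keep each half on its own switch, so an iSCSI session never has to cross the ISL.  The sketch below only illustrates that idea using the standard Python ipaddress module; it is not Nimble's actual zone logic, and the subnet and host addresses are made-up examples.

```python
import ipaddress

def bisect_subnet(cidr: str):
    """Split a subnet into two address zones (illustrative only)."""
    net = ipaddress.ip_network(cidr)
    zone_a, zone_b = net.subnets(prefixlen_diff=1)  # two equal halves
    return zone_a, zone_b

def zone_of(ip: str, cidr: str) -> str:
    """Report which half (and therefore which switch) an address belongs to."""
    zone_a, _ = bisect_subnet(cidr)
    return "Zone A / switch A" if ipaddress.ip_address(ip) in zone_a else "Zone B / switch B"

# Hypothetical hosts on a hypothetical iSCSI subnet
print(zone_of("10.18.120.21", "10.18.120.0/24"))   # -> Zone A / switch A
print(zone_of("10.18.120.200", "10.18.120.0/24"))  # -> Zone B / switch B
```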

 

 

Once the Subnet is defined, physical ports can be assigned to create the virtual interfaces.  This assignment allows individual physical interfaces to support network traffic from many different VLANs.  When a Subnet is assigned to an interface, the final step is to configure the data IPs the Nimble should make available to hosts on that network [Figure 3].  This capability opens a host of new options for Nimble, so configurations can be even more flexible.

 

figure3.jpg

Figure 3
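Continuing the rough model from earlier, that final step can be pictured as binding the Subnet to one or more physical ports, each with its own data IP for hosts to log in to.  The port names and addresses below are hypothetical, and this is only a sketch of the concept, not the array's configuration interface.

```python
from dataclasses import dataclass, field

@dataclass
class InterfaceAssignment:
    """Conceptual binding of a Subnet to physical ports (not the Nimble API)."""
    subnet: Subnet                                            # the Subnet sketched earlier
    data_ips: dict[str, str] = field(default_factory=dict)   # physical port -> data IP

# Hypothetical example: the tagged ESX-iscsi subnet carried on two 10 GbE ports
iscsi_assignment = InterfaceAssignment(
    subnet=esx_iscsi,
    data_ips={"eth5": "10.18.120.51", "eth6": "10.18.120.52"},
)
```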

 

Where this new feature really stands out is in the ability to provide a multi-tenant environment.  When several different applications or users are accessing the same resource, it can be important to create segmentation so that those applications or users have the correct access.  With the iSCSI protocol that segmentation can happen via initiator groups or via CHAP accounts.  With VLAN support, network segmentation becomes another way of managing access to the resources.  Implementing this feature provides a great deal of flexibility. Here is an example [Figure 4] of using the VLAN feature to create a number of different networks to support different use cases.  While it is possible to create a number of different networks, performance will always be governed by the physical layer.

 

figure4.jpg

Figure 4

 

At this point it is important to remember that each Subnet on the Nimble has its own Discovery IP.  Another feature of network isolation is that initiator groups can be configured to only allow volume discovery on the configured subnets.  Below [Figure 5] the initiator group has been configured with two Subnets, "ESX-iscsi" and "Management."  This means that when a host with the assigned IQN issues a volume discovery, Nimble will respond on either the Management IP or the discovery address configured on the ESX-iscsi subnet. 

 

figure5.jpg

Figure 5
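For a Linux host using open-iscsi, a discovery against one subnet's discovery address looks roughly like the sketch below; the IP is a made-up example, and other initiators (ESXi, Windows) have their own equivalents.  The point is that the array will only answer on the discovery IPs of subnets the initiator group allows.

```python
import subprocess

def discover_targets(discovery_ip: str) -> str:
    """Run a sendtargets discovery against a subnet's discovery IP (Linux open-iscsi)."""
    result = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", discovery_ip],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical discovery IP on the ESX-iscsi subnet
print(discover_targets("10.18.120.50"))
```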

 

From the perspective of the host, the only changes for data access happen on the networking side.  The requirement is that the interfaces used to access the Nimble must be configured to send traffic down the correct VLAN.  This can be done at the switch port level or at the host network interface level.  The benefit of having the host apply the VLAN tag is that the server administrator can control which network subnet is accessed.  When the VLAN tag is removed at the switch, the host receives untagged or native packets, so no additional configuration is needed.  For hypervisors, the configuration can be to pass the VLAN tag to the guest VM or to have the virtual switch remove the tag and pass only native traffic.  Both options are completely valid; the decision will be based on the deployment's requirements.  For specific VLAN configuration on your preferred device, switch, hypervisor, or host, please consult the vendor documentation or Google.
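
As one example of tagging on the host side, the sketch below creates an 802.1Q tagged interface on a Linux host with the standard iproute2 tools; the parent interface, VLAN ID, and address are hypothetical, and hosts that rely on switch-side or virtual-switch tagging need none of this.

```python
import subprocess

def create_tagged_interface(parent: str, vlan_id: int, cidr: str) -> None:
    """Create and bring up an 802.1Q tagged interface on Linux (example values only)."""
    vlan_if = f"{parent}.{vlan_id}"  # e.g. eth0.120
    commands = [
        ["ip", "link", "add", "link", parent, "name", vlan_if, "type", "vlan", "id", str(vlan_id)],
        ["ip", "addr", "add", cidr, "dev", vlan_if],   # host IP on the iSCSI subnet
        ["ip", "link", "set", vlan_if, "up"],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)

# Hypothetical: put eth0 on VLAN 120 with an address in the ESX-iscsi subnet
create_tagged_interface("eth0", 120, "10.18.120.21/24")
```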

 

The last consideration when VLANs are configured is deciding over which Subnet replication traffic will flow.  With previous versions of the Nimble array, all replication traffic had to be sent down the management interface.  While this could be adjusted by working with Nimble Support, the new subnet model removes this restriction.  With Nimble 2.1 and beyond it is possible to specify which configured subnet to use when establishing replication [Figure 6].  Not only can the network segment be controlled, but by virtue of VLAN support, so can the physical interface.  When not using the management subnet for replication, be careful, as you might need to add a route so that each Nimble can connect to the other.  Whereas before replication might have been limited to a 1 Gbit network, it can now be set to a specific VLAN on a 10 Gbit port.

 

 

figure6.jpg

Figure 6
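A quick way to reason about whether that route is needed: if the partner array's replication address does not fall inside the local replication subnet, each array needs a route (via a gateway on that subnet) to reach the other.  The sketch below illustrates the check with made-up addresses; it is not an array command.

```python
import ipaddress

def replication_route_needed(local_cidr: str, partner_ip: str) -> bool:
    """True if the replication partner is outside the local replication subnet."""
    return ipaddress.ip_address(partner_ip) not in ipaddress.ip_network(local_cidr, strict=False)

# Hypothetical local replication subnet and partner replication IP
if replication_route_needed("10.18.120.0/24", "10.20.30.40"):
    print("Partner is not on the local replication subnet: add a static route on both arrays.")
```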

 

With the addition of VLAN tagging support, Nimble takes its amazing performance and data protection abilities and extends that functionality to support larger, more complex networking environments.  With VLAN support Nimble can provide greater multi-tenancy capabilities, segregate network traffic, and ensure flexibility for replication.

 

Tomorrow Dmitriy Sandler will be walking us through the new Role-Based Access Control feature!

 

Thank you for reading.

Bill Borsari