Welcome everyone to the first in a series of Nimble OS 2.0 "what's new" blog posts. Today we're going to focus on Manual vs Automatic Networking, and what that means for you.

 

In Nimble OS releases before 2.0, provisioning and connecting to Nimble volumes was very much a manual process. Each host discovered Nimble volumes via the iSCSI Discovery IP address, which is a virtual IP that roams across the array's data ports. We would then create a Nimble volume, present it to an iSCSI initiator on the host, and then bind each host NIC to each data IP address on the array for multipathing. This was done manually by the end user, or semi-automated by using scripts (two examples are here for VMware, and here for Windows).
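To give a feel for what that manual full mesh looks like, here's a tiny Python sketch. It's purely illustrative - the NIC names, IP addresses and volume names are made up, and the real work would of course be done by the host's own iSCSI tooling - but it shows how the session count multiplies in manual mode.

```python
from itertools import product

# Made-up example values - substitute your own host NICs, array data IPs and volumes.
host_nics = ["NIC A", "NIC B"]                         # 2 host NICs
data_ips = ["192.168.50.101", "192.168.50.102",
            "192.168.50.103", "192.168.50.104"]        # 4 array data ports
volumes = ["datastore01", "datastore02", "datastore03"]

# Manual mode: one iSCSI session per (volume, host NIC, array data IP) combination.
sessions = list(product(volumes, host_nics, data_ips))

per_volume = len(host_nics) * len(data_ips)
print(f"{per_volume} sessions per volume, {len(sessions)} sessions in total")
```

With just three volumes, two NICs and four data ports that's already 24 sessions to create and maintain by hand.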

 

Here's an example of a VMware host connecting to a Nimble array with a single IP subnet. Notice that 8 data connections are created per volume (4 for each NIC), and that these connections span both switches.

Manual Mode.jpg

The downsides to mapping data connections and configuring servers this way are:

 

  • Lots of iSCSI sessions are created for each volume (as the sketch above shows, the count multiplies quickly), which can become a problem for VMware datastores as volumes are added.
  • Confusion reigns as to how many connections we should expect to see per volume/host/array, especially with hypervisors such as vSphere and Hyper-V.
  • Manual MPIO configuration is not the easiest task.
  • Scripting the MPIO configuration helps, but on some platforms the configuration does not persist across reboots (VMware again).
  • The host has no intelligence about which port it should use to access the data, so it may pick one that is already overwhelmed with traffic, suffering latency, or building up disk queues.
  • The host may also end up routing data over an ISL from one switch to another to reach a specific data IP address, as it has no knowledge of how the array's IP layout maps onto the switches. This can saturate the ISLs between switches and cause latency and IO problems.

 

A big change in Nimble OS 2.0 is therefore the introduction of an "Automatic Mode" of networking, which solves a lot of the above.

 

Automatic mode works in conjunction with a couple of other features: Nimble Connection Manager, or NCM for short (a host-based tool for connecting to and managing volumes/multipathing within VMware and/or Windows), and Virtual Target IP addresses (or VTIPs for short).

 

EDIT: In Nimble OS 2.1 we merged the VTIP functionality into the Discovery IP and deprecated the VTIP, as the two were very often the same address and more people are familiar with the Discovery IP than the VTIP.

 

A VTIP is a Virtual Target IP address which, like the Discovery IP, roams across the data ports on the Nimble array. In a single-subnet configuration it often takes the same address as your Discovery IP to keep things simple (in a dual-subnet configuration you should create a VTIP for each subnet). The VTIP effectively becomes a single point of management for Nimble connections and multipathing: whenever a volume is created, that is the only IP address the server needs to know to establish its connection to the array.
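As a rough before/after comparison (again with made-up addresses), the amount of addressing the host has to care about collapses to a single entry per subnet:

```python
# Made-up addresses, purely to show the difference in what the host must track.

manual_mode_targets = [
    # Every array data IP has to be bound to every host NIC by hand or by script.
    "192.168.50.101", "192.168.50.102", "192.168.50.103", "192.168.50.104",
]

automatic_mode_targets = [
    # One VTIP per subnet is all the host (via NCM) needs to be pointed at;
    # in a single-subnet setup this is often the same address as the Discovery IP.
    "192.168.50.100",
]
```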

 

The Nimble Connection Manager then works in conjunction with the new networking intelligence built into Nimble OS 2.0 to create, manage, rebalance, or disconnect iSCSI connections between the host NICs and the data NICs on the Nimble array on the fly.

Automatic Mode.jpg

Take the example above. We are now using a VTIP of 192.168.50.100, and the Nimble Connection Manager has automatically created two iSCSI sessions for me: NIC A going to 192.168.50.101 and NIC B going to 192.168.50.102. Also notice that these sessions are created on their own local switches, rather than reaching across the stack to bind to ports on the other switch. This is what IP Address Zones provide: a neat way to separate the switches from each other, either as a Bisect (i.e. 50.1 - 50.127 on switch A, 50.128 - 50.255 on switch B) or as Even/Odd (switch A only ever has even IP addresses, and switch B only has odd IP addresses).
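If you like to think of it in code, here's a minimal Python sketch of those two zoning schemes. The switch names, cabling and IP ranges are assumptions for illustration, not pulled from a real array configuration:

```python
import ipaddress

def zone_of(ip: str, scheme: str = "bisect") -> str:
    """Return which switch 'zone' an address falls into (assumes a /24 data subnet)."""
    last_octet = int(ipaddress.ip_address(ip)) & 0xFF
    if scheme == "bisect":
        # Lower half of the subnet lives on switch A, upper half on switch B.
        return "switch-A" if last_octet <= 127 else "switch-B"
    if scheme == "even-odd":
        # Even host addresses on switch A, odd host addresses on switch B.
        return "switch-A" if last_octet % 2 == 0 else "switch-B"
    raise ValueError(f"unknown scheme: {scheme}")

# Hypothetical cabling: NIC A hangs off switch A, NIC B off switch B.
nic_switch = {"NIC A": "switch-A", "NIC B": "switch-B"}
data_ips = ["192.168.50.101", "192.168.50.201"]   # made-up array data IPs

for nic, switch in nic_switch.items():
    local = [ip for ip in data_ips if zone_of(ip, "bisect") == switch]
    print(f"{nic} connects to {local} (no hop over the ISL)")
```

The point of the zoning is simply that each host NIC only ever gets paired with data IPs that live on its own switch, so traffic never has to cross the ISL to reach the array.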


Here's how it looks from the new Nimble OS 2.0 Networking tab. Notice that my VTIP is the same as my Discovery IP address to keep configuration easy, and that Automatic mode is enabled along with rebalancing.

Screen Shot 2014-03-28 at 17.44.58.png
Screen Shot 2014-03-28 at 17.45.14.png


Note: An array newly upgraded from 1.4 -> 2.0 will always be in Manual mode by default to ensure legacy connections are not invalidated. Only switch to Automatic mode once NCM is installed & ready to use.

 

Automatic mode really comes into its own when Scale-Out is implemented, as data may now be distributed across two or more systems. NCM understands this and allows direct connection and rebalancing across the two/three/four systems on the fly, without having to rely on iSCSI redirects, which would introduce latency for reads and writes. This works because the VTIP now spans all the arrays, becoming the only IP address the host needs to know regardless of the number of systems in the group.

 

Automatic Mode-scale out.jpg

 

See above: we are still presenting over 192.168.50.100, yet Nimble Connection Manager has now created iSCSI sessions across both systems for both NIC A and NIC B on the fly to access that volume, again understanding how the switch stack is mapped out, without any manual configuration or direction of how the server should access its data.
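To make that connection map concrete, here's a small hypothetical sketch of the sort of result NCM arrives at in a two-array group. The array names, switch layout and data IPs are invented for the example; the single VTIP is the one from the diagrams:

```python
# Hypothetical two-array group sitting behind a single VTIP of 192.168.50.100.
group = {
    "array-1": {"switch-A": "192.168.50.101", "switch-B": "192.168.50.201"},
    "array-2": {"switch-A": "192.168.50.102", "switch-B": "192.168.50.202"},
}
nic_switch = {"NIC A": "switch-A", "NIC B": "switch-B"}   # assumed cabling

# NCM-style outcome: each host NIC gets a direct session to its local-switch
# data port on every array in the group, so no iSCSI redirects sit in the data path.
sessions = [(nic, array, ports[switch])
            for nic, switch in nic_switch.items()
            for array, ports in group.items()]

for nic, array, ip in sessions:
    print(f"{nic} -> {array} via {ip}")
```

Four direct sessions, two per NIC, one to each array - all discovered through that single VTIP.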

 

I hope you found this blog post useful - if you have any questions please ask them below. Also please consult the Nimble OS 1.4.x -> 2.0 upgrade guide for more information & guidance before upgrading.