Nimble OS 2.0 provides many powerful new capabilities. The first blog in this series, Nimble OS 2.0 Part 1: Manual vs Automatic Networking, described how Automatic Connections are enabled with Nimble OS 2.0 for Windows and VMware hosts, and the second, Nimble OS 2.0 Part 2: Nimble Connection Manager for Windows, covered how to configure Nimble Connection Manager (NCM) for Windows.


This time we’ll look at how to install and leverage Nimble Connection Manager (NCM) in vSphere environments. The software comprises two “vSphere Installation Bundles” (VIBs), which between them manage all of your ESX iSCSI connections and MPIO settings:

  • Nimble Connection Service (NCS) – automatically calculates and maintains the optimal number of iSCSI sessions between the ESX host and Nimble Storage;
  • Nimble Path Selection Plugin (PSP) – a plugin for the VMware Pluggable Storage Architecture that automatically directs I/O down the most favourable path.


NCM is supported with Nimble OS 2.0 and later and ESX 5.0 and later. Note: installing NCM requires the vSphere Enterprise or Enterprise Plus edition, as these are the editions that support the “Storage APIs for Array Integration, Multipathing”. For future ESX and Nimble OS version support you can always check the Support Matrix available on InfoSight.
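Before you begin, it’s easy to confirm exactly what release a host is running from the ESX shell – this is a standard esxcli command with no Nimble components required:

    # Show the ESXi version and build number
    esxcli system version get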


There are three possible ways to install NCM:

  1. Online NCM bundle (download and install with a single esxcli command – see the sketch after this list);
  2. Offline NCM bundle (download bundle and manually install with esxcli commands);
  3. Offline NCM bundle via vSphere Update Manager.
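For reference, option 1 boils down to a single command pointed at the online bundle; the URL below is a placeholder, so use the exact depot URL given on the InfoSight download page:

    # Install NCM straight from the online depot (URL is a placeholder)
    esxcli software vib install -d <online-NCM-bundle-URL> --no-sig-check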


I’m going to use option 2, as this is typically the process I use with customers. First, download the NCM software from InfoSight (https://infosight.nimblestorage.com), which provides step-by-step instructions for all installation methods and configuration.


If this is a new install you should first add an iSCSI adapter to your ESX host, create your vSwitch(es), add your vmkernel ports for iSCSI, and then bind them to the iSCSI adapter in ESX (a command-line sketch of these steps follows the screenshots below). If you need assistance with these steps, see the VMware Integration Guide. My environment has two hosts, each with two standard vSwitches, and a single subnet used for iSCSI:

ESX environment - 1.png
vSwitches - 2.png
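If you prefer to script this networking setup, the sequence below is a minimal sketch using the standard esxcli network and iscsi namespaces; all the names and addresses (vSwitch1, vmnic1, iSCSI1, vmk1, vmhba37, 10.10.10.11) are assumptions for illustration, so substitute your own:

    # Enable the software iSCSI adapter
    esxcli iscsi software set --enabled=true

    # Create a vSwitch with an uplink and an iSCSI portgroup (names are illustrative)
    esxcli network vswitch standard add -v vSwitch1
    esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
    esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI1

    # Add a vmkernel port on that portgroup and give it a static IP (address is illustrative)
    esxcli network ip interface add -i vmk1 -p iSCSI1
    esxcli network ip interface ipv4 set -i vmk1 -I 10.10.10.11 -N 255.255.255.0 -t static

    # Bind the vmkernel port to the software iSCSI adapter (adapter name is illustrative)
    esxcli iscsi networkportal add -A vmhba37 -n vmk1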

From a storage point of view I have a single Nimble Group with two arrays, and the disks from each array are pooled together in a single Storage Pool (called Default). In this configuration I’m pooling all of my capacity and performance together.

Architecture - 3.png

So with the environment described, let’s look at the installation steps for NCM: 

  1. Copy the NCM package to the ESX hosts; typically I use WinSCP and place the zip file in /tmp (a scripted alternative to steps 1, 2 and 5 is sketched after this list)
  2. Place the ESX host(s) into maintenance mode
  3. Connect to the ESX hosts as root and run the following command:


    esxcli software vib install -d /tmp/ncm-2-0-7-0-nimble-ncm-2.0.7-500005.zip --no-sig-check

  4. Next, verify the install with the following command:


    esxcli software vib list | grep nimble

  5. The final step is to take the ESX host back out of maintenance mode, and you’re done.
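For those who would rather script steps 1, 2 and 5 than use WinSCP and the vSphere Client, here’s a minimal sketch; the hostname esx01 is an assumption, and note that entering maintenance mode will wait for any running VMs to be evacuated or powered off:

    # Step 1 – copy the bundle from a management workstation (hostname is illustrative)
    scp ncm-2-0-7-0-nimble-ncm-2.0.7-500005.zip root@esx01:/tmp/

    # Step 2 – enter maintenance mode from the ESX shell
    esxcli system maintenanceMode set -e true

    # Step 5 – exit maintenance mode once the install is verified
    esxcli system maintenanceMode set -e false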


Note – each ESX host does need to be in maintenance mode for the install, so schedule it for a time when this is possible. Also, if you upgrade NCM in the future to match a later Nimble OS firmware release, this will require an ESX host reboot.

If you haven’t already, configure the ESX iSCSI adapter with the Discovery Address for the group, which in my case is the same as my Virtual Target IP (VTIP). Note that in the array Active Configuration illustration below I’ve also set the array iSCSI Host Connection Method to “Automatic” and ticked “Enable rebalancing” – these settings, along with NCM, mean that all my I/O requests and data access paths will be optimised regardless of which array in the group the data resides on.

Network config - 4.png
iSCSI Discovery - 5.png
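The same discovery configuration can also be applied from the ESX shell. A minimal sketch, assuming the software iSCSI adapter is vmhba37 and the group Discovery IP is 10.10.10.10 (both placeholders – substitute your own values):

    # Add the group Discovery Address as a Send Target (adapter and IP are illustrative)
    esxcli iscsi adapter discovery sendtarget add -A vmhba37 -a 10.10.10.10:3260

    # Rescan the adapter to log in to the discovered targets
    esxcli storage core adapter rescan -A vmhba37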

And now you’re ready to provision some storage. You can do this either through the vSphere plugin or through the array GUI.

Once you’ve provisioned the storage you can see from ESX the paths to a given datastore, with the Nimble PSP managing Active (I/O) status (MPIO) across them.

MPIO - 6.png
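You can pull the same path information from the ESX shell with standard esxcli commands; the device identifier below is a placeholder:

    # List devices with the PSP claiming them – Nimble volumes should show the Nimble PSP once NCM is installed
    esxcli storage nmp device list

    # Show the individual paths for one device (substitute your own naa identifier)
    esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx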


And from the Nimble array GUI you can see the initiators connected (one per ESX node) and the iSCSI connections (four from each host to the Nimble Array Group):

  iSCSI Connections - 7.png

One final word: if you only have the Standard edition of vSphere, which doesn’t support a third-party PSP, you can still configure MPIO in the traditional way. This is described in the VMware Integration Guide referenced earlier (also see this NimbleConnect thread: Importance of Path Change Settings in VMware).
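As a rough illustration of that traditional approach, the commands below make Round Robin the path selection policy and lower the path-change IOPS threshold. Treat this purely as a sketch: the assumption that Nimble volumes are claimed by VMW_SATP_ALUA, the placeholder device identifier and the IOPS value should all be checked against the Integration Guide and the thread above for your release:

    # Make Round Robin the default PSP for the SATP claiming Nimble volumes (SATP name is an assumption)
    esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR

    # Change paths every I/O on an existing device (device ID is a placeholder)
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx -t iops -I 1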

I hope you found this blog useful. The next in this series looks at how to upgrade from Nimble OS 1.4.x to 2.0.x. If you have any questions please post them below or contact your local Nimble SE.