
Nimble Fibre Channel: Provisioning Nimble FC Volume to ESX Cluster

Blog Post created by rfenton on Nov 20, 2014

The last blog, Nimble Fibre Channel: Provisioning Nimble FC Volume to Windows, detailed how to connect a basic Windows volume to a server. This blog will look at the ESX side of things.  The tendency is to think of FC as the more 'complex' protocol but, as we've seen from the blogs so far, many of the tasks are actually simpler, and that holds true with ESX as well.  If you've ever read the Nimble VMware Integration Guide, you'll know that with IP protocols there are setup best practices for VMkernel ports: mapping them to the vNICs used for the iSCSI connections and configuring them correctly for failover. With FC none of that is required, as the HBAs are dedicated to storage/data traffic.

 

Of course, we could have used the manual method from the Windows blog: creating a volume, mapping it to our ESX initiator groups and creating a VMFS datastore.  However, as described in Nimble OS 2.1 Part 3: Introduction to the NEW VMware Plugin, the plugin provides a much more effective mechanism for managing volumes with VMware, as it automates many of the day-to-day operational tasks, including provisioning and decommissioning.

 

Of course, the VMware plugin is supported with Fibre Channel as well, so the process is the same for FC volumes as it is for iSCSI, with the FC information displayed where appropriate.  First, register the VMware plugin with your vCenter server by clicking Administration > vCenter Plugin:

 

FC-VMW1.png

 

You will then be asked for your vCenter credentials in order to register the plugin (note: one change in this release is that you can specify the port):

 

FC-VMW2.png
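
Incidentally, registration just adds a standard extension to vCenter, so once Register has been clicked (next step) you can confirm it from the vSphere API side too. Below is a minimal pyVmomi sketch, purely illustrative: the vCenter address and credentials are placeholders.

```python
# Minimal pyVmomi sketch: connect to vCenter and list the registered
# extensions to confirm the plugin shows up. Placeholder address/credentials.
import ssl
from pyVim.connect import SmartConnect

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host='vcenter.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

for ext in content.extensionManager.extensionList:
    label = ext.description.label if ext.description else ''
    print(ext.key, '-', label)
```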

 

Once Register is clicked, the plugin will be registered within vCenter.   Navigating to the Datastores and Datastore Clusters view in vCenter and selecting the Datacenter object will show the Nimble Group tab.  Clicking this will show the datastores that have been provisioned from the Nimble array:

 

FC-VMW3.png

 

Note: My ESX servers above are booting from SAN, so their boot drives show up as datastores within the Nimble plugin.

 

To create a new datastore, simply click the + icon.  The wizard will ask for the datastore name and which hosts it should be mounted to:

 

FC-VMW4.png

 

Clicking Next, the wizard will ask for the size of the datastore, whether the volume should be thin provisioned, and how much space should be reserved:

 

FC-VMW5.png

 

and then whether the volume should be protected:

FC-VMW6.png

 

This time I've decided to integrate the backups with ESX (by providing the vCenter credentials) and then schedule snapshots (here I'm taking hourly snapshots Monday to Sunday between 9AM and 5PM, plus one a day at midnight):

 

FC-VMW7.png

 

Finally, clicking Next, the wizard will confirm my options:

FC-VMW8.png

 

Once you are happy, click Finish and the volume will be provisioned. As before, the plugin is doing a lot more than just creating a volume and a datastore.  It is in fact interrogating all the hosts that I have mounted the volume to and querying the HBA information for the WWPNs (note: we didn't record these at the start as we did with the Windows walk-through).  The plugin will then communicate with the array to create the initiator group and populate it with the relevant WWPNs for each node of the cluster.  It will then create the volume and map it to the initiator group. For each host, it will rescan the HBA adapters, mount the volume, create a VMFS filesystem and set up multipathing.
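
This isn't the plugin's actual code, but a hedged pyVmomi sketch of the same kind of discovery: read the FC HBA port WWPNs from every host, then rescan the adapters so a newly mapped volume is found (reusing the content handle from the registration sketch above):

```python
# Sketch of the discovery the plugin automates: gather FC HBA WWPNs per host,
# then rescan so a newly mapped volume is picked up. Illustrative only.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            # portWorldWideName is a 64-bit integer; show it as a hex WWPN
            print(f"{host.name} {hba.device}: WWPN "
                  f"{hba.portWorldWideName:016x}")
    host.configManager.storageSystem.RescanAllHba()  # discover the new LUN
```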

 

All of this activity is logged in the Recent Tasks menu:

FC-VMW9.png

 

The end result is a newly provisioned datastore over Fibre Channel:

FC-VMW10.png

 

Clicking on the datastore allows us to Edit, Delete, Clone or Resize the datastore, or take a Manual Snapshot.  We can also see all the information related to that volume, including size/space utilisation and performance (bandwidth, IOPS and latency):

 

FC-VMW11.png

 

Clicking on the Connections tab will show each ESX initiator and its connections to the array's target ports, which assists with troubleshooting. As you can see below, I have two ESX servers in my cluster and each server has two HBA initiators. Each initiator has two connections to the storage array, which is precisely what we would expect:

 

FC-VMW12.png
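
The same connection detail can be pulled programmatically. Here's another hedged sketch that enumerates the FC paths per device, showing which initiator adapter each path uses and the target port WWPN it lands on; it assumes 'host' is one of the vim.HostSystem objects from the earlier sketch:

```python
# Sketch: list FC paths per device with initiator adapter, target WWPN and
# path state. Assumes 'host' is a vim.HostSystem from the earlier sketch.
from pyVmomi import vim

for mp_lun in host.config.storageDevice.multipathInfo.lun:
    for path in mp_lun.path:
        transport = path.transport
        if isinstance(transport, vim.host.FibreChannelTargetTransport):
            # path.adapter is a key like 'key-vim.host.FibreChannelHba-vmhba2'
            adapter = path.adapter.split('-')[-1]
            print(f"{adapter} -> target WWPN "
                  f"{transport.portWorldWideName:016x} [{path.pathState}]")
```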

 

Clicking on the Storage Adapters view of one of the ESX servers will show the volumes/devices that have been mounted:

FC-VMW13.png

 

Selecting one of the datastores from the Configuration tab will highlight that the correct NIMBLE_PSP policy has been selected for managing MPIO (installing Nimble Connection Manager for VMware is identical for Fibre Channel and iSCSI; full details on how to install NCM for VMware are in Nimble OS 2.0 Part 3: Nimble Connection Manager for VMware):

FC-VMW14.png
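
If you'd rather verify the policy from a script, the same multipath information is exposed through the API. A hedged sketch, assuming 'host' as before and that Nimble devices report 'Nimble' as the SCSI vendor string:

```python
# Sketch: report the path-selection policy in force for each Nimble device.
storage = host.config.storageDevice
luns = {lun.key: lun for lun in storage.scsiLun}  # LUN key -> ScsiLun object
for mp_lun in storage.multipathInfo.lun:
    lun = luns.get(mp_lun.lun)
    if lun is not None and lun.vendor.strip() == 'Nimble':
        # Expect NIMBLE_PSP_DIRECTED once NCM for VMware is installed
        print(f"{lun.canonicalName}: PSP = {mp_lun.policy.policy}")
```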

 

Selecting the datastore and then the Details tab will show the device information and also allow you to view the paths to the storage (by clicking Manage Paths):

 

FC-VMW15.png

 

The 8 paths (2 HBAs x 4 target ports) will be shown, along with the Path Selection policy (which should be set to NIMBLE_PSP_DIRECTED).

You will notice 4 paths are Active (to the active controller) and 4 paths are Standby (to the standby controller).  Any failure in the array, switches, HBAs or cables will show a Failed path, allowing for easy troubleshooting:

 

FC-VMW16.png
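
That active/standby split can also be tallied from a script, which is handy for automated health checks; once more, a sketch assuming 'host' from the earlier snippets:

```python
# Sketch: count path states per device. A healthy dual-HBA host zoned to four
# target ports should show 4 'active' and 4 'standby', and no 'dead' paths.
from collections import Counter

storage = host.config.storageDevice
names = {lun.key: lun.canonicalName for lun in storage.scsiLun}
for mp_lun in storage.multipathInfo.lun:
    tally = Counter(path.pathState for path in mp_lun.path)
    print(f"{names.get(mp_lun.lun)}: {dict(tally)}")
```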

 

Finally, as with Windows, the array will show which initiators are actively using each volume. Clicking on the volume will show the IO on the volume and also the connected initiators:

 

FC-VMW17.png

 

Looking at the Initiator Groups will show the group that was automatically created by the plugin:

FC-VMW18.png

 

Note: I'd expect the 0000 suffix on volume and initiator group names to no longer be appended when 2.2.3 ships.

 

Finally, as before, the Manage > Connections tab shows each initiator connection with the relevant targets and Active/Standby state.

FC-VMW19.png

 

So, as you can see, provisioning via the VMware plugin is consistent between iSCSI and FC.  Once again, there is a video demo if you'd like to see this in action:


When will Nimble Fibre Channel be supported with VMware?

It already is! You can check out the updated compatibility matrix here.

 

In the next blog we will look at provisioning a volume into a Linux environment.   Please feel free to ask questions or comment in the section below!
