Nimble Fibre Channel: Provisioning Nimble FC Volume to Linux

Blog Post created by rfenton Employee on Nov 21, 2014

In the last blog in the series, Nimble Fibre Channel: Provisioning Nimble FC Volume to ESX Cluster, we detailed how to utilise the Nimble VMware Plugin to manage Nimble within a VMware environment. This blog looks at Linux provisioning. Over iSCSI there are a number of steps required to set up and connect volumes; these are detailed in MPIO Settings for Linux and the Linux Best Practice Guide, and they have been automated with several scripts in the Nimble:Connect community (search for "Linux"). Connecting Linux over Nimble Fibre Channel, by contrast, is effortlessly simple!


Note: my example below uses CentOS Linux, which doesn't feature on the compatibility matrix (as it is an open-source platform), but the process for other Linux distributions should be identical.

 

To create zones on the switch and the proper Initiator Group on the array, you first need to determine the WWPNs of the host. To do this, run the following command:

[root@rhel6-fc ~]# cat /sys/class/fc_host/host*/port_name

0x2100000e1e1919f0
0x2100000e1e1919f1


The output above shows the World Wide Port Names of the dual-port FC initiator in this host. When viewed on the switch or on the array they will look like this:

21:00:00:0e:1e:19:19:f0
21:00:00:0e:1e:19:19:f1
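
If you need the colon-delimited form for switch zoning, the sysfs output can be converted in one line. A small sketch using sed (not from the original post):

# Print each WWPN in the colon-delimited form used by switch zoning tools
cat /sys/class/fc_host/host*/port_name | sed 's/^0x//; s/../&:/g; s/:$//'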

 

 

Next, go into Manage > Volumes in the Nimble GUI and create your volume. Below I am creating a volume for Oracle data. My Linux server is also booting from the SAN (something we will look at in more detail in the next blog in the series). In this instance I tried to override the LUN ID to 0 (LUN ID 1 had been automatically selected by the wizard). The wizard won't let me continue: it alerts me that LUN 0 is already in use and points to my boot volume as the current owner!

 

FC-Linux1.png

 

After correcting my LUN ID back to 1, I am asked for the size, space management and data protection settings for this volume:

 

FC-Linux2.png

FC-Linux3.png

 

Once the volume is provisioned, looking at the Node1 Initiator Group shows that there are now two volumes allocated to my Linux server:

LUN0 - which is my boot LUN

LUN1 - which is my newly provisioned Application volume.

 

FC-Linux4.png

Logging onto the host, I can verify what is currently connected by running multipath -ll (as root).

This shows my boot volume (mapped to mpatha), with 8 valid paths to the device (4 active and 4 standby/ghost).

 

FC-Linux5.png
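
The screenshot above shows the actual output. For readers without the image, multipath -ll output for a Nimble FC volume generally takes the shape sketched below; the WWID, sd device names and size are placeholders I have made up for illustration, but the layout (one active/optimised path group and one standby/ghost group, four paths each) matches what is described above:

mpatha (250a8a1b2c3d4e5f66c9ce900e1a2b3c4) dm-0 Nimble,Server
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:0 sdb 8:16  active ready running
| |- 7:0:1:0 sdc 8:32  active ready running
| |- 8:0:0:0 sdd 8:48  active ready running
| `- 8:0:1:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 7:0:2:0 sdf 8:80  active ghost running
  |- 7:0:3:0 sdg 8:96  active ghost running
  |- 8:0:2:0 sdh 8:112 active ghost running
  `- 8:0:3:0 sdi 8:128 active ghost running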

 

To discover my new volume there is nothing to install or connect; I simply run rescan-scsi-bus.sh (again as root).

This command rescans each of the HBAs in turn and adds any new devices. You will see the new devices at the bottom of the output.

 

In this case, 8 new or changed devices were found.

 

FC-Linux6.png
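
rescan-scsi-bus.sh ships with the sg3_utils package. If it isn't present on your distribution, the same rescan can be triggered manually through sysfs; a minimal sketch (not from the original post), which asks every FC HBA to scan all channels, targets and LUNs:

# Install the helper script if it is missing (CentOS/RHEL)
yum install -y sg3_utils

# Or rescan manually: "- - -" means all channels, all targets, all LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# Check that multipath has picked up the new paths
multipath -ll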

 

Now running multipath -ll (as root) shows both the existing boot device (mpatha) and the new LUN (mpathh):

 

FC-Linux7.png

 

In order for multipath/ALUA to configure the devices correctly, it is essential to make changes to /etc/multipath.conf.

 

Below are some of the Nimble-specific settings in multipath.conf.

Note: please don't use or copy my settings below. You will find the correct settings in the Fibre Channel Best Practice Guide in the Nimble documentation, which should be your authoritative guide:

FC-Linux8.png
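
Once /etc/multipath.conf has been updated with the values from the Best Practice Guide, multipathd needs to re-read it. A minimal sketch of applying the change without a reboot (exact service commands vary between distributions and multipath-tools versions):

# Ask the running daemon to re-read /etc/multipath.conf
multipathd -k"reconfigure"        # or: service multipathd reload

# Rebuild the device maps and confirm the Nimble devices look correct
multipath -r
multipath -ll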

 

As with Windows and ESX, the Linux devices and connections will show up on the array in the Manage > Connections tab.

This view shows which initiators are accessing which volumes and targets, which aids troubleshooting.

 

FC-Linux9.png
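
The same view can be cross-checked from the host side: the fc_remote_ports entries in sysfs list the array target ports each HBA is logged into. A quick sketch (not from the original post):

# List the target (array) WWPNs the host is currently logged into
cat /sys/class/fc_remote_ports/rport-*/port_name

# The directory names encode which HBA owns each remote port: rport-<host>:<channel>-<id>
ls /sys/class/fc_remote_ports/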

The video below provides a visual demonstration:

Our next blog in the series will focus on booting from SAN with both Emulex and QLogic adaptors.

 

Please feel free to ask any questions or leave comments below.
