
Adventures in Nimble iSCSI: moving ESX Boot LUNs from NetApp

Blog Post created by Gary Martin on May 22, 2015

I've started a process to move our NetApp-hosted boot LUNs for ESX over to Nimble boot.  This is on UCS blade servers.  There is one slightly annoying point in these steps: you can't use a shared boot policy.  This is because Nimble creates a unique iSCSI target per LUN rather than using a single target address with different LUN IDs.  I'm not sure why this is the case; maybe it's to get beyond the 256-LUNs-per-target limit.  Maybe a Nimble guru will know.

 

Anyway, here are the steps.  I upgraded the hosts to the latest version as the first step, just so that when I rebuild them and restore the backup I know I am matching versions.  You can get around this by adding -Force to the recovery steps using VMware PowerCLI, but it just didn't seem clean enough.

 

You will need PowerCLI from VMware for the configuration backup/restore steps and a little knowledge to get it connected.

 

Please note that while most of the steps are universal, they are certainly not complete.  Guides like Configuring Cisco UCS iSCSI boot from Nimble Storage will certainly help.

 

 

ESX to iSCSI steps

 

1.  Put host into maintenance mode.

 

2.  Upgrade to latest NCM and patches (scan for updates and remediate) - this makes it easier to match the backed-up and restored config versions

 

3.  Backup configuration using PowerCLI (http://www.vladan.fr/backup-restore-esxi-configuration-powercli/)

 

                - Get-VMHost HOSTNAME | Get-VMHostFirmware -BackupConfiguration -DestinationPath "C:\Download"

 

4.  Record the VMware initiator name, host IP address and iSCSI IPs (A and B) - You only need to do this if you already have Nimble-attached datastores; you might not order your migration like this.  If you don't have Nimble-attached datastores you can generate your own initiator name.  Reusing the existing VMware initiator name means I don't have to create new initiator groups for the new names.
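If you want to pull these details with PowerCLI rather than from the UI, something like the following works.  This is a sketch: the vCenter name is hypothetical, and HOSTNAME stands in for your ESXi host.

```powershell
# Hypothetical vCenter name - substitute your own
Connect-VIServer vcenter.example.com

# Record the software iSCSI adapter's initiator name (IQN)
Get-VMHost HOSTNAME |
    Get-VMHostHba -Type iScsi |
    Select-Object Device, IScsiName

# Record the management and iSCSI VMkernel IPs and subnet masks
Get-VMHost HOSTNAME |
    Get-VMHostNetworkAdapter -VMKernel |
    Select-Object Name, IP, SubnetMask
```

Keep the IQN and the A/B iSCSI addresses somewhere safe - you will need them when setting the UCS boot parameters in step 6.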

 

5.  Create Nimble volume (I used size 10GB) - HOSTNAME-boot, initiator group HOSTNAME-boot, using the initiator name from VMware or your own (see above).  Use the Nimble console to run vol --info VOLNAME to get the iSCSI target name.  You will need the target name for the configuration of iSCSI boot.
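On the Nimble console this step looks roughly like the commands below.  Treat the flag names as assumptions - they are from memory and may vary by NimbleOS version, so check vol --help and initiatorgrp --help on your array; the IQN shown is a placeholder.

```
# Run on the Nimble array console (flag syntax may vary by NimbleOS version)
vol --create HOSTNAME-boot --size 10240        # size is in MB
initiatorgrp --create HOSTNAME-boot
initiatorgrp --add_initiators HOSTNAME-boot --initiator_name iqn.1998-01.com.vmware:HOSTNAME-xxxxxxxx
vol --addacl HOSTNAME-boot --initiatorgrp HOSTNAME-boot
vol --info HOSTNAME-boot                       # note the iSCSI target name in the output
```

The iSCSI target name reported by vol --info is what goes into the UCS static target configuration in step 6.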

 

6.  Reconfigure for iSCSI boot (there are plenty of pre-requisites here - I already have NICs for iSCSI and overlay iSCSI adapters; there is plenty of information on both Connect and Google which will help if you aren't familiar)

 

                - Unbind from template (if you are using them)

 

                - iSCSI vNICs - Change Initiator Name - use the VMware-generated initiator name (or your own generated one)

 

                - Boot Order - Modify Boot Policy - Specific Boot Policy

 

                - Add Remote CD/DVD, Remote Virtual Drive and vNIC_A_iSCSIBoot/vNIC_B_iSCSIBoot

 

                - Set Boot Parameters for vNIC_X_iSCSIBoot, where X is A and B - Set only - (these are my iSCSI overlay NIC names; there are a few guides out there on how to create these.  You can have a different initiator per connection, but I don't do that - I use the iSCSI initiator name associated with the host)

 

                                - Initiator IP Address Policy - Static (or DHCP if that is how you roll)

 

                                - IPv4 Address - use iSCSI IP (A or B depending on vNIC_X) and subnet mask

 

                                - iSCSI Static Target Interface - Add

 

                                                - iSCSI Target Name = Target Name for LUN from Nimble

 

                                                - IPv4 Address = Nimble Discovery Address

 

7.  Reset machines to run profile changes (apply pending changes)

 

8.  Use UCS KVM to mount DVD ESX install image during boot

 

9.  Install ESXi

 

10. Login to ESXi setup screen and add IP address to Management (HOST IP address)

 

11. Connect host in vCenter (using username and password) - it will have a new certificate so it won't reappear automatically.

 

12. Scan and Remediate Host (NCM first then patches)

 

13. After reboot, restore configuration using PowerCLI

 

                - Set-VMHostFirmware -VMHost HOSTNAMEFQDN -Restore -SourcePath PATHTOBACKUPFILE
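In context, a restore session looks roughly like this.  The vCenter name and backup path are placeholders - the backup from step 3 lands in the destination folder as a configBundle tarball, so adjust the path to match your actual file; the host must be in maintenance mode and the restore reboots it.

```powershell
# Hypothetical vCenter name and backup path - substitute your own
Connect-VIServer vcenter.example.com

# Host must be in maintenance mode; this reboots the host to apply the config.
# If the build numbers don't match you would need -Force (see the note at the top).
Set-VMHostFirmware -VMHost HOSTNAMEFQDN -Restore `
    -SourcePath "C:\Download\configBundle-HOSTNAMEFQDN.tgz" `
    -HostUser root -HostPassword 'ROOTPASSWORD'
```

The -HostUser/-HostPassword parameters let the cmdlet authenticate directly to the freshly installed host for the restore; omit them and PowerCLI will prompt.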

 

14. After full reboot connect host again

 

15. Exit Maintenance mode

 

 

I'm sharing this in case it helps anyone doing the same thing and they want to validate their process or pick up some tips.  Again, this isn't exhaustive.


Comments, suggestions etc all welcome. 
