More and more customers are inquiring about how to do boot-from-SAN with Nimble Storage (and save a few bucks on buying local SAS/SSD disks) – fortunately, we have both in our playground.  Here it is, a step-by-step checklist/instructions…

*NOTE* This setup attaches the Nimble directly to the fabric interconnects, with two subnets defined (one for each 10G interface of the controller).  If you attach the Nimble to a pair of access-layer switches such as Nexus 5k with vPC, then dual subnets are NOT needed.  Remember, even though the FIs are configured as a cluster pair, the cluster interconnect interfaces between the FIs DO NOT carry data traffic – hence the need for dual subnets so that both FI connections to the Nimble array are active.
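Because traffic cannot cross the FI cluster links, each iSCSI path has to stay within its own subnet end-to-end (vNIC on fabric A to the array interface on subnet A, and likewise for B).  Here's a minimal sketch of that sanity check using Python's ipaddress module – the subnets and addresses below are hypothetical placeholders, not values from any real deployment:

```python
import ipaddress

# Hypothetical dual-subnet layout for direct-attach to the fabric interconnects:
# one subnet per Nimble 10G data interface (adjust to your own addressing).
SUBNET_A = ipaddress.ip_network("10.10.1.0/24")  # path via FI-A
SUBNET_B = ipaddress.ip_network("10.10.2.0/24")  # path via FI-B

def same_subnet(initiator_ip: str, target_ip: str,
                subnet: ipaddress.IPv4Network) -> bool:
    """An iSCSI vNIC and its target interface must share a subnet,
    since data traffic cannot cross the FI cluster interconnect."""
    return (ipaddress.ip_address(initiator_ip) in subnet
            and ipaddress.ip_address(target_ip) in subnet)

# vNIC on fabric A pairs with the array interface on subnet A, and so on.
assert same_subnet("10.10.1.51", "10.10.1.10", SUBNET_A)
assert same_subnet("10.10.2.51", "10.10.2.10", SUBNET_B)
```

With vPC-attached access switches, both array interfaces can sit on one subnet and this check becomes unnecessary.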


*You must have a service profile template configured for iSCSI boot – here is what yours should look like for the “iSCSI vNICs” and “Boot Order”


tabs.  The “Boot Order” tab should be pretty self-explanatory: first boot from CD-ROM, then the iSCSI vNICs.

Next up is configuring the IP and target parameters for each iSCSI vNIC (this is how each iSCSI vNIC knows where to find the boot LUN).  Remember to configure BOTH vNICs, otherwise you’d have a single point of failure!

1) Ensure you have an IQN pool to derive the IQN name for the iSCSI vNIC initiator (so an initiator group can be created on the array side to ensure only this blade can access the boot volume).  *Take note of the initiator name so you can add it to the initiator group on the array side.
2) Set a static IP for the iSCSI vNIC (there’s a nice little feature here to determine if the address is already used by another blade within UCSM).
3) Add the iSCSI array target information.  At this point, you’ll switch to the Nimble interface to obtain two pieces of required info:

  • Create a boot volume for the blade and obtain its UUID
  • Obtain the target discovery IP
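Before plugging these values into UCSM, it can help to sanity-check them.  Here's an illustrative Python sketch that validates IQN formatting and lays out the boot target parameters for both iSCSI vNICs – every name, IQN, and IP below is a hypothetical placeholder, not a value taken from a real UCSM or array:

```python
import re

def valid_iqn(name: str) -> bool:
    """Loose check of the iqn.yyyy-mm.reverse-domain[:identifier]
    naming format used for initiator and target IQNs."""
    return re.fullmatch(r"iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:[^\s]+)?",
                        name) is not None

# Hypothetical values -- substitute the IQNs/IPs from your own UCSM and array.
initiators = ["iqn.2005-03.com.example:blade1-a",
              "iqn.2005-03.com.example:blade1-b"]
target_iqn = "iqn.2007-11.com.nimblestorage:esx1-boot"  # placeholder target name

# One parameter set per iSCSI vNIC -- BOTH must be filled in,
# each pointing at the discovery IP reachable from its own subnet.
boot_params = [
    {"vnic": vnic,
     "initiator_iqn": iqn,
     "target_iqn": target_iqn,
     "target_ip": ip,   # discovery IP on this vNIC's subnet
     "port": 3260,      # default iSCSI portal port
     "lun": 0}          # boot LUN ID
    for vnic, iqn, ip in zip(["vNIC1", "vNIC2"],
                             initiators,
                             ["10.10.1.10", "10.10.2.10"])
]

assert all(valid_iqn(p["initiator_iqn"]) and valid_iqn(p["target_iqn"])
           for p in boot_params)
```

The two initiator IQNs here are also exactly what you'll add to the initiator group on the array side in the next step.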

Here’s how to get this stuff from the Nimble side.  First, let’s get the iSCSI discovery IP.

Next, obtain the IQN for each iSCSI vNIC and add them to an initiator group (you certainly want to do this because you don’t want the volume presented to any and every host that sees it!).  The IQN can be found under the server’s service profile -> “Boot Order” -> vNIC1 or vNIC2 -> “Set iSCSI Boot Parameters”.  Remember, you want the IQN for both iSCSI vNICs, and then add them both to an initiator group on the Nimble side.

Once the initiator group has been created, you are ready to create a volume to serve as the boot LUN for the ESXi host.  Notice I have all the boot LUNs on the ESX server configured to be part of a volume collection – it’s a safety measure in case something goes wrong with the ESXi install.  It’s a lot quicker to restore from a snapshot than to reinstall ESXi all over again (unless you are fortunate enough to have stateless ESXi Auto Deploy configured).  If you have a DR site, it certainly wouldn’t hurt to configure replication for the ESXi boot LUN volume collection as well!

After the volume has been created, obtain its UUID so it can be entered in the blade server’s iSCSI boot target parameters.  Here we go again – same screen as before, but this time with the required info for both 1 & 2 below.

Now you are ready to boot/install ESXi on a Nimble volume!  Power on the blade and watch for the following screens to make sure the volume is discovered correctly.

On the Nimble side, the iSCSI connection is made as the server boots if things are configured properly.  If the “Connected Initiators” count remains ‘0’ even when you see the ESXi install prompt, go back to the iSCSI boot parameters and make sure 1) the array target IP is entered correctly, and 2) the boot volume UUID is entered correctly for EACH of the iSCSI vNICs.

Want to see this in action?  Check out Mike McLaughlin’s demo video on Nimble Connect.
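Those troubleshooting checks can be sketched as a small pre-flight routine.  This is purely illustrative Python (field names and values are hypothetical, not a UCSM or Nimble API) that catches the usual reasons “Connected Initiators” stays at 0:

```python
def check_boot_params(vnic_params):
    """Flag the common misconfigurations to re-verify on EACH iSCSI vNIC
    when the array shows no connected initiators at boot time."""
    problems = []
    if len(vnic_params) < 2:
        problems.append("only one iSCSI vNIC configured -- single point of failure")
    # Both vNICs must point at the same boot volume target.
    target_iqns = {p.get("target_iqn") for p in vnic_params}
    if len(target_iqns) != 1 or None in target_iqns:
        problems.append("boot volume target IQN missing or inconsistent across vNICs")
    # Each vNIC needs its array target IP filled in.
    for p in vnic_params:
        if not p.get("target_ip"):
            problems.append(f"{p.get('vnic', '?')}: array target IP not set")
    return problems

# Hypothetical, correctly filled-in parameters -> nothing flagged.
ok = [
    {"vnic": "vNIC1", "target_iqn": "iqn.2007-11.com.nimblestorage:esx1-boot",
     "target_ip": "10.10.1.10"},
    {"vnic": "vNIC2", "target_iqn": "iqn.2007-11.com.nimblestorage:esx1-boot",
     "target_ip": "10.10.2.10"},
]
assert check_boot_params(ok) == []

# A missing target IP on the second vNIC is flagged.
bad = [dict(ok[0]),
       {"vnic": "vNIC2", "target_iqn": ok[1]["target_iqn"], "target_ip": ""}]
assert check_boot_params(bad) == ["vNIC2: array target IP not set"]
```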