In yesterday's blog, Introduction to Nimble OS 2.1: Introduction to the new VMware Plugin, we looked at a high level at the new functionality that has been incorporated into the Nimble VMware plugin within Nimble OS 2.1. In today's blog we will focus on the new workflows in much more detail, including some screenshots and a number of video recordings of the new workflows.
As introduced yesterday, there are five key workflows in 2.1:
- Create New Datastores
- Expand existing Datastores
- Clone Datastores
- Delete Datastores
- Manage Datastore Backups
Let's look at each in much more detail:
Create New Datastore
The create datastore workflow sounds rather simple, but in fact it's doing a lot more than just creating a volume on the array and a datastore within ESX. As with the prior plugin, you access the workflow by selecting the Datacentre from within the Inventory > Datastores and Datastore Clusters menu item. Once registered, you will see the Nimble Group listed as a tab (note: if there are multiple arrays in the scale-out group you will still only see a single plugin tab):
Clicking on the tab will then take you to the plugin's Datacentre Overview view, listing all the datastores hosted on Nimble along with some high-level statistics and usage information:
To create a new datastore, click the + icon. The wizard will then ask you for a name, a description and which host(s) this datastore should be mounted to on creation:
In my modest demo lab I only have a single ESX host, but in most environments you will want the datastore to be presented and mounted to multiple nodes within the ESX cluster. This is where the plugin does some really nice work under the covers. Not only is the volume created and mounted to each node selected, but the plugin also checks that iSCSI is at least minimally configured on each node so that the volume can be provisioned. Some of the things we perform and check at this stage are:
- Enable iSCSI service (if it's not already running)
- Take care of ESX firewall rules
- Set the discovery IP address of the Group in the Dynamic Discovery
- Automatically determine the VMFS version (defaulting to the highest version based on what the participating nodes will support and their version of ESX)
- Create an appropriate Initiator Group on the array if the compute nodes are not currently identified on the array.
- Protect the volume using the Initiator Group and CHAP (if configured). This addresses one of the annoying quirks of the previous plugin, which left the volume accessible to all hosts. That is fine for a dedicated environment but risky for shared infrastructure, so it was always best practice to go back into the Nimble GUI and add an Access Control List to the newly provisioned volume, to prevent any other application or OS claiming the drive.
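As a rough illustration only, the per-host checks above can be sketched in Python. These function and field names are hypothetical, invented for this sketch; they are not the plugin's actual API:

```python
# Illustrative sketch (not the plugin's real API) of the per-host checks
# performed before a new datastore is provisioned.

def ensure_host_ready(host, discovery_ip):
    """Bring a host's iSCSI configuration up to the minimum needed."""
    if not host.get("iscsi_enabled"):
        host["iscsi_enabled"] = True                       # enable the iSCSI service
    host.setdefault("firewall_allow", set()).add("iSCSI")  # take care of firewall rules
    host.setdefault("dynamic_discovery", set()).add(discovery_ip)  # Group discovery IP
    return host

def pick_vmfs_version(hosts):
    """Default to the highest VMFS version every participating node supports."""
    return min(h["max_vmfs"] for h in hosts)

hosts = [
    {"name": "esx1", "iscsi_enabled": False, "max_vmfs": 5},
    {"name": "esx2", "iscsi_enabled": True,  "max_vmfs": 3},
]
for h in hosts:
    ensure_host_ready(h, "10.0.0.50")   # hypothetical Group discovery IP

print(pick_vmfs_version(hosts))   # -> 3 (the older node caps the VMFS version)
```

The key point is the `min()` in `pick_vmfs_version`: the chosen VMFS version is limited by the least-capable participating node, exactly as described above.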
Next, the plugin will ask for the size of the volume, the space reservations and the alerting thresholds:
Again, this addresses one of the annoying quirks of the previous plugin, where setting the wrong VMFS block size for the type of VMFS volume would cause the workflow to fail. The volume is also "smart sized": the final size of the volume takes into account the VMFS metadata overhead, so the usable size of the datastore is exactly what was requested (and not a fraction less).
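The "smart sizing" idea can be shown with a little arithmetic. The overhead figure below is made up purely for illustration; real VMFS metadata overhead depends on the VMFS version and volume size:

```python
# Hypothetical illustration of "smart sizing": the array volume is grown by the
# VMFS metadata overhead so the *usable* datastore size matches the request.
# The overhead formula here is invented for illustration.

GiB = 1024 ** 3

def vmfs_metadata_overhead(requested_bytes):
    # assume ~0.1% of capacity plus a small fixed floor (illustrative numbers)
    return int(requested_bytes * 0.001) + int(1.2 * GiB)

def smart_size(requested_bytes):
    # provision the array volume larger than requested by the metadata overhead
    return requested_bytes + vmfs_metadata_overhead(requested_bytes)

requested = 500 * GiB
volume = smart_size(requested)
usable = volume - vmfs_metadata_overhead(requested)
print(usable == requested)  # -> True: the user gets exactly what they asked for
```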
The plugin will then ask for the Protection Schedule. The options are None (no backup), Create a new Volume Collection, Join an existing Volume Collection (which promotes consistency) or Protect as a Standalone Volume. Here I am selecting an existing Volume Collection. Each existing Volume Collection is listed, and the green circle denotes that a particular collection has VMware-synchronised backups configured:
If you choose to protect this as a standalone volume, you are then asked to schedule when the backups should be taken (and mirrored). I think the GUI designers have done a really good job with this. First we are presented with a blank schedule:
Click + Add a schedule to create your first schedule. Here you can name the schedule and specify when the backups should be run, synchronised and mirrored, and their retention period:
You can add as many schedules as you like. Here I have added an hourly backup Mon-Fri (7am - 6pm) and a daily backup at 8pm every day. This is shown graphically:
Top Tip: If you want to change the schedules above, you can just drag them around the calendar with your mouse rather than re-editing them!
Clicking the List View shows the same info in tabular format:
I really hope our Engineering team add this same dialogue format to the native Nimble OS GUI!
Once finished, the plugin will confirm what you asked for prior to provisioning. Click Edit to change your options at this stage, or click Finish to provision the datastore.
The plugin now goes through various steps of:
- Creating the volume on the Nimble Group
- Setting the correct Performance Policy, ACLs/Initiator Groups and Protection/Replication
- Rescanning a single VMware host's iSCSI adaptor(s)
- Creating the VMFS Datastore with the new volume
- Rescanning the remaining host's iSCSI adaptors and mounting the datastore
- Ensuring the correct multipath path selection policy is set for the datastore - NIMBLE_PSP_DIRECTED (if NCM is installed) or Round Robin (if NCM is not installed)
Each step is fully detailed in the Recent Tasks dialogue in vCenter:
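The path-selection-policy choice in the final step above is a simple either/or, which can be sketched as follows (VMW_PSP_RR is VMware's identifier for the Round Robin policy; the function name is mine, not the plugin's):

```python
# Sketch of the PSP choice described above: the plugin prefers Nimble's own
# path selection policy when the Nimble Connection Manager (NCM) is installed,
# and falls back to VMware's native Round Robin otherwise.

def select_psp(ncm_installed):
    return "NIMBLE_PSP_DIRECTED" if ncm_installed else "VMW_PSP_RR"

print(select_psp(True))   # -> NIMBLE_PSP_DIRECTED
print(select_psp(False))  # -> VMW_PSP_RR
```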
The end result is a newly provisioned datastore and volume:
Clicking on the datastore, shows us the detailed information:
Note: if you need to change anything post provisioning you can do so by clicking the Edit button (the pencil icon in the top left corner).
Below is a video of this workflow so you can see what it looks like 'live':
Video: Datastore Provisioning (click the TV in the bottom right to go full screen)
Expand an Existing Datastore
In the previous plugin, the resize workflow changed the size of the Nimble volume, but the user was then left to grow the VMFS volume manually in VMware. Expanding the size of a datastore is very straightforward in the new plugin. Click on the square box with an arrow in the centre of it (hovering over it will confirm that it's Expand):
Enter the new size of the datastore, and hit Grow Datastore:
This will grow the volume on the array, expand VMFS and then rescan VMFS on each host, logging to the task list as it goes:
Gotcha: We will only grow the datastore if it is backed by a single Nimble volume. If multiple volumes have been concatenated together using extents, the plugin will not be able to grow them.
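That gotcha amounts to a one-line safety check before any grow is attempted. A hypothetical sketch (the data structure is invented for illustration):

```python
# Sketch of the grow-time safety check: the plugin only grows a datastore
# backed by exactly one Nimble volume; concatenated (multi-extent) VMFS
# datastores are refused. Structure and names are illustrative.

def can_grow(datastore):
    return len(datastore["extents"]) == 1

single  = {"name": "ds1", "extents": ["vol-ds1"]}           # one backing volume
spanned = {"name": "ds2", "extents": ["vol-a", "vol-b"]}    # concatenated extents

print(can_grow(single), can_grow(spanned))  # -> True False
```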
Below is a video of the resize so you can see what it looks like 'live':
Video: Growing a Datastore (click the TV in the bottom right to go full screen)
Top Tip: You can navigate to the Grow, Remove, Edit and Snapshot workflows from the Datacentre Datastore Overview screen by selecting the datastore and right-clicking!
Clone Datastores
Cloning a datastore allows you to copy a datastore one or more times. Remember that a clone in Nimble terminology isn't a full copy, but merely a set of pointers to the same blocks referenced by a snapshot. A clone therefore:
- Is logically independent from the source (if you change the clone, the source remains untouched)
- Takes zero space - space is only consumed as you write new blocks to the clone
- Takes next to zero time to create - as it's just a copy of pointers
- Requires no license on the array - like all Nimble functionality, it's included
- Has no performance impact on the source volume - because of the way CASL works, as long as the array has enough resources the performance of the source will be unaffected by activity on the clone
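The pointer-copy idea above can be demonstrated with a toy copy-on-write model in Python. This is purely conceptual; it bears no relation to how CASL is actually implemented:

```python
# A toy copy-on-write model of why a clone takes no space and no time:
# the clone starts as pointers to the snapshot's blocks, and only newly
# written blocks consume space.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block index -> shared data object

    def snapshot(self):
        return dict(self.blocks)        # a pointer copy, not a data copy

    @classmethod
    def clone(cls, snapshot):
        v = cls({})
        v.blocks = dict(snapshot)       # again, pointers only - instant
        return v

    def write(self, index, data):
        self.blocks[index] = data       # copy-on-write: only now is space used

src = Volume({0: "alpha", 1: "beta"})
snap = src.snapshot()
cln = Volume.clone(snap)

cln.write(1, "patched")                 # change the clone...
print(src.blocks[1])                    # -> beta (the source is untouched)

# count blocks unique to the clone, i.e. its actual space consumption
unique = sum(1 for i in cln.blocks if cln.blocks[i] is not src.blocks.get(i))
print(unique)                           # -> 1 (only the new write consumes space)
```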
This capability is incredibly useful for a number of use cases:
- Disaster Recovery testing (where we don't wish to disrupt the Primary or Mirror)
- Gold Image - where a datastore contains a gold image build of an environment from which you wish to efficiently create rapid clones (what I term 'non-dupe')
- Application Testing - QA, Test patches
- Source Code - provide developers with their own discrete test/lab environments
- VDI - persistent desktops
Again select the volume you wish to Clone, and hit the cloning icon (two squares):
The plugin will then ask what you want to call the cloned datastores, how many copies are required, and whether a new snapshot should be taken or an existing snapshot used:
The plugin will now perform the following operations (once again logging in the Task viewer as we go):
- Create the relevant snapshot (if necessary)
- Clone the desired volume the required number of times
- Associate the new volume with the appropriate initiator groups for access control
- On-line the volumes
- Rescan VMFS for each of the hosts
- Rename and Resignature the volumes
- Ensure the PSP is set correctly (as per volume provisioning above)
The end result is three new cloned datastores in a few moments, consuming zero space!
Seeing is believing, so this video is shot in real time with no time-lapse. You'll notice that most of the time is spent waiting for datastores to be rescanned and resignatured rather than copying data (which is instant):
Video: Cloning a Live Datastore three times in a few minutes (click the TV in the bottom right to go full screen)
Delete Datastores
The delete workflow is very straightforward, but I think there is a huge underlying factor that shouldn't go unnoticed. Without integrated management, as an administrator you are fundamentally reliant on mapping the correct volume, naming it consistently within VMware and making sure the logical and physical presentation match up when operationally managing volumes. This can be painful when provisioning, but can lead to disaster when deleting or removing volumes. Fortunately the plugin manages this process for you: when a volume is deleted in the plugin, the correct datastore is unmounted and the correct volume is deleted. There is no getting it wrong or mismatching names/IDs, which hugely de-risks destructive operations.
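To illustrate the point, the plugin effectively keys destructive operations off a stored datastore-to-volume mapping rather than trusting a human to match names by eye. A hypothetical sketch (identifiers invented for illustration):

```python
# Sketch of why integrated management de-risks deletion: the operation is
# driven by a recorded datastore -> array-volume mapping, so the wrong
# volume can never be picked. All identifiers here are illustrative.

mapping = {
    "datastore-prod-01": "nimble-vol-0x1a2b",
    "datastore-test-01": "nimble-vol-0x3c4d",
}

def delete_datastore(name):
    volume = mapping[name]   # a KeyError here beats guessing at a name match
    # ... unmount datastore `name`, then delete `volume` on the array ...
    return (name, volume)

print(delete_datastore("datastore-test-01"))
```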
To delete a datastore, first select it from the GUI and then press the Waste Basket icon:
Next the GUI will give you the obligatory 'Are you really sure - this is destructive!' warning:
Once you hit Delete Datastore, the datastore will be unmounted, SIOC will be disabled, the SCSI volume will be detached and the underlying volume deleted on the array.
In addition, if the Initiator Groups (on the array) are no longer required (for other datastores), they are cleaned up automatically too.
Gotcha: It shouldn't need pointing out, but I will: deleting a volume is a one-way street, with no return!
Video: Datastore Removal (click the TV in the bottom right to go full screen)
Manage Datastore Backups
Finally, you can manage your datastore backups. Datastores are backed up according to the Protection Schedule you chose when you created them (this can be edited post-provisioning). As time passes and those backups run, they will be listed in the snapshot tab of the volume:
This shows each backup, when it was taken, its current size and how much compression we are seeing.
If you wish to take an ad-hoc backup (rather than wait for the next scheduled backup), you can do so by clicking on the camera icon in the top left-hand corner. You will then be asked to name the backup, and it will be completed!
It will then be listed in the backups list:
Selecting a backup allows you to delete it should you wish to (by clicking the wastebasket icon), or to create a clone from it for restore purposes (by clicking the two-squares icon), similar to the clone steps above.
Finally, clicking on the Replication tab shows you where the volume is being replicated to and when the last replication took place.
Video: Datastore Backup Management (click the TV in the bottom right to go full screen)
That's it. Apologies for the length of this blog, but I wanted to get all the detail and demonstrations in one place. I hope you find it useful.
In the next blog in the series, Nick Dyer will be covering how Nimble are enhancing their data protection and reliability within 2.1.4 with Triple Parity RAID.
If in the meantime you have any questions then please feel free to post them below.