So far in the 'Introduction to Nimble OS 2.0' series we have introduced a few new concepts: Automatic and Manual Networking, Connection Manager for Windows and VMware, and the 2.0 software upgrade process. These concepts add a little more complexity to Nimble OS compared with 1.4 and earlier releases, but they form the foundational changes which allow us to introduce the concept of scaleout.
In Nimble OS 1.4 (and earlier) a user can scale a Nimble array non-disruptively in three separate dimensions:
- Scale Performance - If a CS200 user requires more IOPS, they can non-disruptively upgrade their CS200 to a CS400, which provides triple the IOPS without any additional infrastructure.
- Scale Cache - If the working set size increases, scaling cache (to X2, X4 or X8) allows the Nimble OS cache to be non-disruptively upgraded, in turn reducing read latency for the working set.
- Scale Capacity - If a user simply requires more usable capacity, expansion shelves can be added to the existing array (independent of system performance).
But what happens when a single array's limitations are reached? For example, I have a CS400 pegged on IOPS, with the maximum amount of cache installed and the maximum three expansion shelves attached (1 x ES1 on the CS210). How do I expand beyond the limitations of a single array? The answer is 2.0 and scaleout!
Scaleout allows a user to physically group up to four arrays together but logically manage them as a single entity. The consolidated group delivers the aggregated performance and aggregated capacity of the four individual systems.
Today's post will show you how to merge two separately running arrays into a single group.
Once you have upgraded to Nimble OS 2.0, you are effectively already using groups! The array is now running as a single-node group; you can see this by clicking the Manage > Array icon within the GUI. Below is an example from my lab environment, which has an array (Cobain) in the group Nirvana:
Note: by default, when you upgrade from 1.4 to 2.0, the group name will be the same as the array name.
For the purposes of the test, I also have a second array (Grohl) running in a second independent group (FooFighters):
To merge the two arrays, select the Add Array to Group icon on the group you wish to merge into (in this instance I will merge array Grohl into the Nirvana group).
Next, Nimble OS will discover any arrays that are candidates to be merged:
You can then select the desired array and click Add, after which you are asked to validate the operation by providing the administrative password of the group to be merged:
Nimble OS will validate the decision to merge the two arrays; if any of the preset criteria cannot be met, the merge will be rejected. The criteria are:
- Both groups are in the same management subnet.
- Both groups use the same data subnet.
- Both arrays must be running Nimble OS 2.0.
Note: the arrays do not have to be the same architecture (so you could have a mixed group with a CS200 and a CS400, with differing numbers of expansion shelves).
Prior to the merge, if any volumes are still online, or if there are any naming conflicts (with volumes and volume collections) between the two groups, Nimble OS will warn you of the merge conflicts and allow you to resolve them:
Firstly, the remaining online volumes are listed; the wizard will also allow you to offline them:
Gotcha: As the iSCSI discovery address of the array (Grohl) will effectively be discarded in favour of the settings of the Nirvana group, you will need to reconfigure the new discovery IP address (that of the Nirvana group) on each of the hosts that access Grohl, within either the iSCSI software initiator or Nimble Connection Manager. It's important to offline the volumes on the array that is joining the group (Grohl in this instance), as their discovery and management IP addresses will be updated to those of the group they are joining (hence the requirement to offline the volumes). Note: the Nirvana group's volumes are unaffected by the merge.
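For Linux hosts using the open-iscsi software initiator, re-pointing discovery might look like the following sketch. The IP addresses are placeholders for illustration only; substitute the old and new discovery addresses from your own environment:

```shell
# Remove the stale discovery record that points at Grohl's old
# discovery IP (placeholder address)
iscsiadm -m discoverydb -t sendtargets -p 192.168.1.50:3260 -o delete

# Run discovery against the Nirvana group's discovery IP
# (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to the newly discovered targets
iscsiadm -m node --login
```

Windows hosts make the equivalent change in the iSCSI initiator control panel (or via Nimble Connection Manager) by replacing the discovery portal entry.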
Next, naming conflicts are listed and you are prompted to add a prefix/suffix so the volumes can be renamed and distinguished once the merge has completed. Here I will prefix the volumes with FooFighters- :
Note: The above step solves volume collection naming conflicts as well as volume name conflicts.
Once validation is successful, and prior to the final merge, Nimble OS will warn you about the implications of the merge:
Once you click Finish, the groups will merge; this should take a few minutes to complete.
On completion, you will be notified and the wizard will allow you to bring all volumes that were taken offline prior to the merge back online:
Now, when you refresh the browser on the Nirvana group, you will see two arrays within the group (Cobain and Grohl):
We now have a single management entity (Nirvana) which contains two individual arrays (Cobain and Grohl).
You will also see the volumes and volume collections successfully renamed:
In a scaleout group there are some interesting concepts which we will explore in a little more detail:
A storage pool is the storage capacity assigned to an individual array (you can think of it as all the available capacity in a single array prior to 2.0). In a single-node group the storage pool is still there; you just don't explicitly see it in the GUI, as it is the default storage pool. Because we merged two arrays in the above example, we would have ended up with two default storage pools (which would have clashed), so as part of the merge operation FooFighters' default pool was automatically renamed to default-FooFighters so we can identify it post-merge.
The concept of a pool allows for affinity. Suppose I had a two-node group and wanted to allocate one of the nodes to VDI workloads (perhaps that array has more cache) and the second node to general-purpose compute. I could name the pools appropriately, and when provisioning a volume I can elect to provision from the correct type of pool:
Alternatively, I may want a single pool that spans both arrays within the group. This is also possible: just navigate to the individual pool and click Merge Pools:
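Pool operations are also exposed through the array's SSH CLI via the `pool` command family. The option spellings below are my assumptions rather than confirmed syntax, so verify them with `pool --help` or the CLI reference for your Nimble OS release before use:

```shell
# List the storage pools in the group (Nimble OS 2.x 'pool' command;
# output format varies by release)
pool --list

# Merge default-FooFighters into the default pool -- these option
# names are illustrative assumptions, check 'pool --help' first
pool --merge default-FooFighters --target default
```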
In 2.1, volumes can be moved non-disruptively between pools. This flexibility to merge pools and move volumes non-disruptively between different arrays in the group is why the networking needed to be virtualised, and why Nimble Connection Manager is required to give the host knowledge of where volumes are allocated or moving to.
Watch out for a future blog in this series where we will discuss the use cases of storage pools in much more detail!
If you now navigate to Administration > Networking, you will notice this largely looks the same. The group's (Nirvana) management settings will be identical to those pre-merge:
Clicking on node Cobain, the details remain the same as pre-merge:
Clicking on node Grohl, the data details also remain the same:
Note: as per the merge warning, FooFighters' group settings have been discarded in favour of Nirvana's. This is why we recommend updating the hosts with the new discovery address; until this is completed, the hosts will be unable to discover new volumes or paths.
Finally, this isn't a one-way street. If required, it is possible to remove an array from a group. Click down into the array (from the array details page) and you'll find the option to remove it from the group (of course, prior to doing this you will need to remove the volumes from the associated storage pool).
This post has focussed on how to merge two independent Nimble groups. The next post, by Justin Rohan, will focus on scaling an existing group with a brand new array. As ever, feel free to ask questions below, but also consult the 2.0 documents on InfoSight, specifically Chapter 2 of the User Guide, where there are more details on the process to merge two independent groups.
Hopefully it won't be long until you're making your own 'Storage Nirvana'!