Nimble OS 2.0 has many new features, especially around scale-out, and this series of blog posts aims to provide our customers with useful information introducing these new capabilities. In the first post, Manual vs Automatic Networking in Nimble OS 2.0, my colleague Nick Dyer introduced the notion of scale-out, Virtual Target IP addresses (VTIPs) and the Nimble Connection Manager (NCM) within Nimble OS.


This time we'll focus on Nimble Connection Manager for Windows. Nimble Connection Manager really solves three storage management problems:


Providing Logical to Physical Mapping

Have you ever looked at Disk Management, seen several drives all of the same size, and not had the faintest idea which drive maps to which volume on the SAN? It's possible to work it out by looking at the drive properties, but you'll need a pen, a pad, time (lots of it), patience and probably a belly full of caffeine to map all of the relationships. One of the first benefits of Nimble Connection Manager for Windows is that it provides an easy-to-read mapping between the volumes on the Nimble array and the logical volumes they map to within the guest OS. This saves you time and, more importantly, reduces the risk of management activities when it's time to work with those volumes.
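Conceptually, the mapping NCM presents can be derived by correlating the serial number each disk reports to the OS with the serial number of the corresponding volume on the array. Here is a minimal Python sketch of that correlation; the disk names, serials and volume names are made-up placeholders, not values from any real system:

```python
# Illustrative only: correlate host disks to array volumes by serial number.
# In practice a tool like NCM gathers these from the OS and the array; the
# values below are invented placeholders.

host_disks = {
    "Disk 1": "2f1a6c3e9b0d4a71",   # serial number reported by the OS
    "Disk 2": "8c4b2d5f7e1a9c30",
}

array_volumes = {
    "2f1a6c3e9b0d4a71": "SQL-Data",  # serial -> volume name on the array
    "8c4b2d5f7e1a9c30": "SQL-Logs",
}

def map_disks_to_volumes(disks, volumes):
    """Return {host disk: array volume} for serials present on both sides."""
    return {disk: volumes[serial]
            for disk, serial in disks.items()
            if serial in volumes}

print(map_disks_to_volumes(host_disks, array_volumes))
```

The point of the sketch is simply that the serial number is the common key on both sides; NCM does this legwork for you instead of you transcribing serials by hand.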





Automating Connection Management

No pun intended, but connection management can be a bind! A great place to start is to check how many connections each of your Windows drives has today (if you have followed best practice, there should be a connection from each available initiator to each available target). I've seen many environments where a volume is connected to only a single target, which not only creates a potential performance bottleneck but, more importantly, can affect resilience in the event of a failure. If you were a glutton for punishment and closely followed the Windows Best Practice Guide, you would have manually connected each initiator to each target, resulting in multiple connections to the storage. Many people used the excellent scripts provided by Adam Herbert to automate this process.
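The best-practice arithmetic above is simple: the expected connection count for a volume is the number of available initiators multiplied by the number of available targets, and anything below that is worth investigating. A small Python sketch of that check, with illustrative volume names and counts (not from a real environment):

```python
# Sketch of the best-practice check described above: each volume should have
# one iSCSI connection per (initiator, target) pair. All data is illustrative.

def expected_connections(num_initiators, num_targets):
    """Best practice: one connection per initiator/target pair."""
    return num_initiators * num_targets

def underconnected(volumes, num_initiators, num_targets):
    """Return names of volumes whose live connection count is below best practice."""
    want = expected_connections(num_initiators, num_targets)
    return [name for name, have in volumes.items() if have < want]

# Example: 2 host NICs (initiators) and 2 array data interfaces (targets),
# so each volume should have 4 connections.
volumes = {"Exchange-DB": 4, "FileShare": 1}   # live connections per volume
print(underconnected(volumes, num_initiators=2, num_targets=2))  # ['FileShare']
```

A volume sitting at one connection when four are expected is exactly the single-target situation described above.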


NCM for Windows removes the need to perform these steps manually or even run the scripts: it automatically manages the connections from the host to volumes on Nimble systems. When configuring multiple connections and MPIO, the Nimble OS requires that only one IP address be advertised at discovery time (the iSCSI discovery IP address) instead of the full set of the array's data iSCSI network interfaces. This means you do not need to manually make specific connections to the appropriate interfaces, or worry about how many connections there are to the nodes hosting a particular volume. Because connections are made to a consistent address (the group target portal), they are redirected to the appropriate distribution of actual iSCSI network interfaces. NCM not only makes it much easier to provision a volume; it also ensures the volume is configured to best practices with regard to the number of connections, the persistence of connections across reboots and the MPIO path selection policy.
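The redirection idea can be modelled in a few lines: every login goes to the single discovery address, and the group hands it to whichever data interface is currently least loaded. This is a toy Python model of that behaviour, not Nimble's actual placement logic, and the interface addresses are invented:

```python
# Toy model of iSCSI login redirection: all logins target one discovery
# (group target portal) address, which redirects each to the data interface
# with the fewest current connections. Addresses are made up for illustration.

data_interfaces = {"10.0.0.11": 0, "10.0.0.12": 0, "10.0.0.13": 0}

def login_via_discovery(interfaces):
    """Redirect a new connection to the least-loaded data interface."""
    target = min(interfaces, key=interfaces.get)
    interfaces[target] += 1
    return target

# Six logins through the one discovery address...
assigned = [login_via_discovery(data_interfaces) for _ in range(6)]
print(data_interfaces)   # ...end up spread evenly across the interfaces
```

The host only ever needs to know the one discovery address; the even spread across interfaces falls out of the redirection, which is why topology details stay out of the host's configuration.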



Enlightening the Host about Topology Changes

One of the key concepts of scale-out in Nimble OS 2.0 is that your back-end storage topology may change over time. For instance, you may grow a Nimble Storage Group from one array to two, three or even four arrays, scaling out incrementally as and when required. Despite there being multiple physical controllers, logically they are viewed as one group, and a volume may reside on a single array or span all four arrays (for aggregated performance).


You can also imagine how this evolves: controllers can be evacuated and technology refreshed non-disruptively, and volumes can be moved between controllers non-disruptively for balancing and migration purposes. This, after all, is the flexibility the Virtual Target IP address provides, allowing a consistent presentation to the host. To manage this process, the host needs to be redirected to the most efficient place to access its volume. This is another role of NCM: facilitating that host <> storage array communication to ensure an optimal configuration. I like analogies, so you can think of NCM as the switchboard operator, making sure you get the most effective connection across the many potential connections that could be made.



You can obtain the Nimble Connection Manager for Windows software from InfoSight: just click on Downloads and select the Windows Toolkit tab. (Note: be sure to select the 2.0.x version, and of course your array needs to be on 2.0.x as well.)




Gotcha: Be sure to read the release notes; there are a number of Microsoft hotfixes that need to be applied to Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2 hosts. Unfortunately the installer doesn't check for their presence, so I'd highly recommend installing them prior to installing NCM for Windows. Also note: NCM for Windows is not supported in Windows Server 2008 (non-R2) environments.


Once installed (it's a double-click, next, next, next type of thing) and started, you simply define which interfaces you wish NCM to manage or ignore, enter your array's iSCSI discovery address, and NCM will automate setting up the connections for you. To demonstrate connecting the volumes (and removing them), I have posted a video demonstration here rather than screenshots of each of the steps.


Finally, you may be wondering about the license cost, as this type of functionality is quite often a chargeable, per-host item with other vendors. Well, just like everything else with Nimble, there isn't a license fee: just download it, install it and go!


Keep an eye out... Shortly Jason Monger will be bringing you part 3 of the series, where he'll talk about Nimble Connection Manager for VMware. Until then, please feel free to ask questions below; there is also a wealth of info in the Integration Guide on the Windows Toolkit Download page.