Can I ask you what you find more complex about 2.0 vs 1.4? The only thing that has really changed is the ability to scale out and group multiple Nimble systems under a single management banner. All functionality you'd expect from 1.4 stays the same - although we've moved the Networking tab to its own dedicated page for simplicity.
Bear in mind that 2.0 introduces Nimble Connection Manager for Windows and VMware - two very handy tools that get you away from manually creating multipathing connections (or relying on non-persistent scripts), which makes storage presentation a lot simpler.
Also, 2.0 has an awful lot of code redesign under the hood, which allows us to integrate new features and functionality into the system that would have been impossible in 1.4. None of this is visible in the GUI, nor does it have any adverse impact on system performance - so a good engineering feat, one may say.
Hi Nick, for example, in the Network Configuration it now recommends Automatic vs. Manual. And it talks about layer 2 inter-switch links, IP zones, etc., which I'm really not familiar with - and the Target IP becomes optional now? It also asks if you want a network to carry cluster traffic, data traffic, or both. I'm not sure if "cluster" means management? I haven't tried the Connection Manager yet, so I'm not sure if it's as simple as Microsoft's iSCSI initiator.
It just feels like too much change all of a sudden - kind of like going from Windows 7 to Windows 8.
Thanks for the feedback Jason. Here's some information addressing your points:
- Manual networking is what we have always done in version 1.x of the Nimble OS code: we create manual MPIO connections from each physical NIC in the host to each array data IP address. This isn't really a scalable solution and forces the end user to manually create (and manage) those connections. Automatic instead presents data via something called a Virtual Target IP address (or VTIP for short). When this is enabled, the host creates iSCSI connections to the VTIP rather than to individual data IP addresses, and the array handles the rest for you. This requires you to use Nimble Connection Manager for VMware and/or Windows.
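To make the difference concrete, here's a rough sketch of what the two approaches look like from an ESXi host's software iSCSI adapter. The adapter name, IP addresses, and IQN below are placeholders for illustration, not values from this thread:

```shell
# Manual (1.x-style): add a static target for each array data IP,
# repeated for every data IP and every volume - this is what you
# end up creating and managing by hand.
esxcli iscsi adapter discovery statictarget add \
    --adapter=vmhba33 --address=10.0.10.21:3260 \
    --name=iqn.2007-11.com.nimblestorage:example-vol
esxcli iscsi adapter discovery statictarget add \
    --adapter=vmhba33 --address=10.0.10.22:3260 \
    --name=iqn.2007-11.com.nimblestorage:example-vol

# Automatic (2.0-style): point dynamic discovery at the single
# VTIP/discovery IP and let Nimble Connection Manager and the
# array manage the individual paths behind it.
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=10.0.10.20:3260
```

The practical win is that the host-side configuration no longer grows with the number of data IPs or volumes - one discovery address covers everything.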
- A Virtual Target IP address is, by default, the same as your Discovery IP address - unless you have two iSCSI data subnets, in which case you will have two VTIPs.
- IP zoning is a feature built in response to a common support problem: data being routed across a switch ISL, which saturates the link and causes latency. By creating separate IP zones, you give the array the intelligence to route data paths away from the ISL, meaning data is never sent across the link to reach a specific host or array data port. However, this is set to "single" by default, and you should only really look to implement it if a) Support have recommended it or b) your SE has recommended it.
- We also now allow you to implement your own routes should you wish to. And you can create different networks to which cluster traffic is dedicated (if you're scaling out). Cluster traffic is the packets the arrays send from one to another in a scale-out configuration; by default it traverses your data network, as you've seen.
I implore you (and every other user on NimbleConnect reading this) to read the "Upgrading from 1.4 to 2.0" guide available on InfoSight, as all of the above is detailed and explained a lot better than I just have... but comparing this to Windows 7 -> Windows 8 is completely unjustified IMO.
Thanks Nick, I'll do more reading on it. Like I said, I probably just got used to how easy and simple it is in OS 1.4, and we only have 2 arrays, so I don't need to worry about the painful manual setup you'd face with a lot of Nimble arrays. I guess when you have pretty much all your servers on Nimble and it's been working very well, you're kind of hesitant to make changes you're not familiar with, worrying that something might break. At least that's how I feel.
Understood loud and clear, Jason. For me now the 2.0 way of doing things is second nature, but it did take some time to fully get to grips with some of the new features - which is why it's always good to upgrade the DR system first!
I think you've made a good point that there could be more awareness of this, so thank you very much for that. I'm in the process right now of writing a blog post describing some of the above, which hopefully will rectify that...
Edit: it's here Nimble OS 2.0 Part 1: Manual vs Automatic Networking
Hmm... when we did our upgrade to 2.0 we ended up dropping all connections to our VMware hosts and everything came crashing down. This wasn't Nimble's fault, or really our fault, or anyone's fault for that matter - it was simply an unknown issue with the upgrade process and how the array controllers failed over during it. It turned out, after about 18 hours of downtime on a Sunday, that it was a little advanced config option in VMware that caused it to happen: the iSCSI login timeout. We changed it to 60 and then everything began coming back online.
Not sure if they're going to release a white paper on that or make it a prerequisite for doing the upgrade, but we ran into it and can tell you firsthand that if you don't make that simple config change on your hosts before you start the upgrade, you're going to be scratching your head wondering "whaduhhell happened?" when you see all those VMs go offline and drop datastores.
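For anyone wanting to make that change ahead of an upgrade, here's a sketch of what it looks like from the ESXi command line. This assumes ESXi 5.1 or later (where LoginTimeout became configurable per adapter), and the adapter name is a placeholder - check yours first:

```shell
# List iSCSI adapters to find the software iSCSI vmhba on this host
esxcli iscsi adapter list

# Show current parameters for the adapter (vmhba33 is a placeholder),
# including the LoginTimeout value
esxcli iscsi adapter param get --adapter=vmhba33

# Raise the iSCSI login timeout to 60 seconds so sessions survive
# the longer controller failover window during the array upgrade
esxcli iscsi adapter param set --adapter=vmhba33 --key=LoginTimeout --value=60
```

The same setting is also reachable in the vSphere Client under the iSCSI adapter's Advanced Options; either way, set it on every host before kicking off the controller upgrade.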
Very happy we ended up waiting for a maintenance window on Sunday to apply this; I don't foresee it being a problem down the road. Still LOVE Nimble, it's awesome.