1) Yes, this is now the default behaviour for the vSphere plugin. The reason is that a single iGroup containing multiple initiators becomes problematic when managing the Access Control Lists of different datastores mapped to one or many ESX servers - the plugin would create a separate "ESXHosts" Initiator Group for each variant of ACL, which was annoying, and it became even harder to mount different datastores to hosts after the fact. This way every host has its own Initiator Group, and the plugin now adds or removes iGroups on the volume as required. It also introduces a new feature where you can tell the plugin to mount a datastore to other hosts - something that was lacking before.
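To illustrate why the per-host approach helps, here is a minimal conceptual sketch (not Nimble's actual implementation - the class and group names are made up) showing that with one Initiator Group per host, granting or revoking a single host's access to a datastore is just one ACL change on the volume, rather than rebuilding a combined "ESXHosts" group:

```python
class Volume:
    """Toy model of an array volume with an ACL of initiator-group names."""
    def __init__(self, name):
        self.name = name
        self.acl = set()

    def grant(self, igroup):
        self.acl.add(igroup)

    def revoke(self, igroup):
        self.acl.discard(igroup)


# Hypothetical per-host initiator groups, one per ESX server
hosts = ["esx01", "esx02", "esx03"]
igroups = {h: f"ig-{h}" for h in hosts}

ds = Volume("datastore1")
for h in hosts:
    ds.grant(igroups[h])

# Unmounting the datastore from one host later is a single ACL change;
# the other hosts' groups are untouched.
ds.revoke(igroups["esx03"])
print(sorted(ds.acl))  # ['ig-esx01', 'ig-esx02']
```

With a single shared group, the same change would instead require creating a new group for the remaining two-host combination, which is exactly the "one group per ACL variant" sprawl described above.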
2) Conversely, with VVols a single Initiator Group is created containing multiple hosts. This is because an Initiator Group is bound to a Protocol Endpoint, which is the SCSI communication layer between the ESX servers and the array.
Finally - your comment about the restrictive volume limit on Nimble is fair, and the good news is that it will be resolved in the near future. 1024 volumes is still a chunky number, but when one considers that each VM will have a minimum of 3 VVols assigned to it (and more if there are dedicated data drives), the limit certainly needs to go higher. Having said that, I would probably recommend a middle ground of VMFS datastores and VVols for the time being, until things such as VMware SRM become available.
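A quick back-of-the-envelope check makes the concern concrete (assuming, as above, that every VVol counts against the 1024-volume limit and each VM consumes at least 3 VVols):

```python
# Rough capacity estimate under the stated assumptions - not an
# official sizing figure, just the arithmetic from the post above.
VOLUME_LIMIT = 1024      # per-array volume limit mentioned above
MIN_VVOLS_PER_VM = 3     # minimum VVols per VM (more with data drives)

max_vms = VOLUME_LIMIT // MIN_VVOLS_PER_VM
print(max_vms)  # 341
```

So an all-VVol design tops out at roughly 340 VMs per array even before snapshots or extra data drives are counted, which is why mixing in VMFS datastores is a sensible middle ground for now.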
So I figured I would add this in case anyone else is seeing strange vvol-igroup creation, or vvol creation issues on UCS in general. The issue ended up being that the 1340 VIC card in the B200M4 does not support SLLID (Second Level LUN ID) on Fibre Channel. There is no available fix; a feature enhancement request has been open for 9 months now. I expect an update from the Cisco BU in the middle of this month; their target as it stands is 1H2017.
So I am helping a customer out with a new AFA running 3.4.1. When we deployed the system, we manually created a single Initiator Group, put all the host HBA WWPNs in it, provisioned some volumes, and we were off to the races. Later on we deployed the vSphere plugin(s) and VASA provider. When we provisioned an additional volume via the plugin, it created individual Initiator Groups per host, unbeknownst to us. I don't recall that being the case in previous iterations of the plugin. Fast forward to yesterday: we started to test and deploy vVols, but ran into issues when actually moving things to the vVol datastore. During my investigation I looked at the Initiator Groups, and now there is a third set of initiator groups that have been created just for vVols. I'm still working through the vVol creation failure now, but I am concerned that it may be because of the multiple copies of initiator groups. My questions are as follows:
1. Is the creation of multiple Initiator Groups, instead of a single combined Initiator Group, intended behavior? It took almost 45 minutes via the plugin the first time we provisioned a volume to a system with 20 hosts, likely because of all the behind-the-scenes orchestration with the additional Initiator Group creation. I am sure there are reasons for this, but it has all the hallmarks of an administrative overhead nightmare.
2. What's the deal with the second Initiator Group with vVols? vVols are not currently working in our implementation: volume creation appears to fail when using either Storage vMotion or cloning a new VM from template to the datastore. A volume is not created or placed in the vVol initiator group on the array side. VMware can write to the array via vVol, as it has created a .vSphereHA folder in the root, and it shows up under the listed volumes in the array. I will likely call support today about it, but I probably need to change the Initiator Group configuration to match what the plugin is trying to do first, just to rule that out as the issue, if that is the way forward from here on out.