Per the ever-knowledgeable Huy Duong:
Yes, it is absolutely supported. You will need to configure the FI ports as “Appliance Ports” — some Cisco documentation also refers to them as “NAS Ports”; the two terms are used interchangeably.
Here is a list of gotchas you need to know about:
- If you are going to do boot from SAN, make sure you create an IQN pool whose names contain the word “microsoft” for a bare-metal Windows install, or VSS will fail.
- Under the “LAN” tab, either create a “Network Control Policy” or edit the default one so that “Action on Uplink Fail” is set to Warning, or the appliance ports will not come up.
- You will need two subnets, no exceptions, if you want to plug directly into the FIs. The FIs operate as two separate switches and do not learn each other’s MAC addresses, so you need to create two VLANs, one per FI.
- Create two NICs in your service profile, each pinned to a different FI. Disable fabric failover on each NIC (since its VLAN does not exist on the other FI); MPIO will take care of path failover.
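The dual-fabric layout in these gotchas can be sketched as a small data model. This is purely illustrative; the VLAN IDs, vNIC names, and helper functions below are assumptions, not UCS Manager objects:

```python
from dataclasses import dataclass

@dataclass
class Vnic:
    name: str
    fabric: str                    # "A" or "B" -- each vNIC is pinned to one FI
    vlan: int
    fabric_failover: bool = False  # must stay False; MPIO handles pathing instead

# One iSCSI VLAN per Fabric Interconnect (IDs are illustrative assumptions).
ISCSI_VLANS = {"A": 101, "B": 102}

def build_iscsi_vnics() -> list[Vnic]:
    """One vNIC per fabric, each on its own VLAN/subnet."""
    return [Vnic(f"iscsi-{f.lower()}", f, vlan) for f, vlan in ISCSI_VLANS.items()]

def validate(vnics: list[Vnic]) -> None:
    # Each fabric gets exactly one iSCSI vNIC, failover stays disabled
    # (the VLAN does not exist on the other FI), and VLANs are distinct.
    assert {v.fabric for v in vnics} == {"A", "B"}
    assert all(not v.fabric_failover for v in vnics)
    assert len({v.vlan for v in vnics}) == 2

def iqn_ok_for_windows(iqn: str) -> bool:
    # Boot-from-SAN gotcha: the IQN must contain "microsoft" or VSS fails.
    return "microsoft" in iqn.lower()

vnics = build_iscsi_vnics()
validate(vnics)
```

The model just encodes the invariants from the list above, so a reviewer can check a planned service profile against them before touching UCS Manager.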
In a dual-subnet model for ESXi connectivity to Cisco UCS, we are actually talking about dual data subnets, so there may be three networks in your environment (management, data-A, and data-B), where the A and B data subnets map onto UCS Fabric Interconnects A and B respectively. On the Nimble array, there is only one officially defined iSCSI discovery address.
You have a couple of options, each with considerations:
- perform iSCSI discovery on only one of the data subnets - this means new discovery may be unavailable should that data subnet go down; the data path won't be disrupted, just new provisioning
- perform iSCSI discovery on the management subnet - you will need to make sure the host has connectivity to all three subnets, which may have undesired security implications for access control
- allow iSCSI discovery on the data network IP addresses directly - this currently works through an undocumented capability and is subject to change in future releases
For our current Cisco/VMware/Nimble reference architecture, we use method #1 above, as shown in this screenshot:
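The three-network split described above can be checked with the standard-library `ipaddress` module. The subnet ranges and discovery address here are illustrative assumptions, chosen to show method #1 (discovery address placed on one data subnet):

```python
import ipaddress

# Illustrative subnets (assumptions) for the three-network model:
NETWORKS = {
    "management": ipaddress.ip_network("10.0.0.0/24"),
    "data-A": ipaddress.ip_network("10.1.1.0/24"),  # maps to Fabric Interconnect A
    "data-B": ipaddress.ip_network("10.1.2.0/24"),  # maps to Fabric Interconnect B
}

def classify(ip: str) -> str:
    """Return which of the three networks an address belongs to."""
    addr = ipaddress.ip_address(ip)
    for name, net in NETWORKS.items():
        if addr in net:
            return name
    raise ValueError(f"{ip} is not on any defined subnet")

# Method #1: the single Nimble iSCSI discovery address lives on one
# data subnet; if data-A goes down, only new discovery is affected.
DISCOVERY_IP = "10.1.1.100"
assert classify(DISCOVERY_IP) == "data-A"
```

Running `classify()` against planned host and array addresses is a quick way to confirm every interface lands on the subnet you intended before cabling anything.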
FYI -- Nicholas has also covered the networking stack for the Cisco UCS C-Series rack-mount servers connecting to Nimble Storage in his guide for the Microsoft Fast Track for Private Cloud (aka the Nimble Storage SmartStack for Windows Server).
In addition to the redundant iSCSI fabrics and management networks, the Nimble Storage SmartStack for Windows Server reference architecture calls for Hyper-V migration, cluster, and VM public networks -- facilitated through additional vNICs.
BTW, for those who are interested: the UCS C-Series rack-mount servers also leverage a Fabric Extender (FEX), which in turn connects to the Fabric Interconnect. (In the B-Series, the FEX plugs directly into the blade server chassis.)
This is a good read from the Cisco UCS support forum (a quick list of best practices and common issues):
We just implemented a new UCS system along with two Nimble arrays last week. We used the appliance ports exactly as described. We did run into one gotcha during the process: we decided to use jumbo frames and found that we had to implement a QoS policy on the UCS with an MTU of 9216 in order for the connections to work properly.
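The jumbo-frame gotcha comes down to one invariant: both endpoints must agree on the frame size, and every hop in between (including the UCS QoS system class) must carry at least that size. A minimal sketch of that check, using the MTU values from this thread (the helper name is hypothetical):

```python
# End-to-end jumbo frame sanity check for the UCS/Nimble path.
UCS_QOS_CLASS_MTU = 9216   # QoS system class MTU on the Fabric Interconnects
VNIC_MTU = 9000            # MTU on the host vNIC / vSwitch / vmkernel port
ARRAY_MTU = 9000           # MTU on the Nimble data interfaces

def jumbo_path_ok(endpoint_mtus: list[int], fabric_mtu: int) -> bool:
    # Endpoints must match each other, and the fabric must be at least
    # as large, or oversized frames get dropped silently mid-path.
    return len(set(endpoint_mtus)) == 1 and fabric_mtu >= max(endpoint_mtus)

assert jumbo_path_ok([VNIC_MTU, ARRAY_MTU], UCS_QOS_CLASS_MTU)
assert not jumbo_path_ok([9000, 1500], UCS_QOS_CLASS_MTU)  # mismatched endpoints
```

The 9216 fabric value leaves headroom above the 9000-byte endpoint MTU for Ethernet and VLAN header overhead, which is why the QoS policy was required even though the hosts and array were set to 9000.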
I would classify the iSCSI traffic in the Gold class and add a dedicated NIC for iSCSI traffic. Build a vNIC policy and apply it to the iSCSI NIC.
With the configuration shown from UCS Manager, client and iSCSI traffic are using the same class - less than desirable from a performance standpoint.
UCS will not be able to differentiate between user traffic and iSCSI as configured above.
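The separation being suggested can be modeled as a simple vNIC-to-class mapping. The vNIC names and the specific class assignment are assumptions for illustration; Gold and Best Effort are standard UCS system class names:

```python
# Sketch of the QoS separation suggested above: give iSCSI its own vNIC
# and map it to a different UCS system class than user traffic.
QOS_POLICY = {
    "vnic-user": "Best Effort",   # general client/user traffic
    "vnic-iscsi": "Gold",         # dedicated iSCSI vNIC in the Gold class
}

def classes_separated(policy: dict[str, str]) -> bool:
    # UCS can only prioritize iSCSI over user traffic if the two vNICs
    # land in different system classes.
    return policy["vnic-iscsi"] != policy["vnic-user"]

assert classes_separated(QOS_POLICY)
```

If both vNICs map to the same class (as in the configuration being criticized above), the check fails, which is exactly the problem: the fabric has no basis on which to treat storage traffic differently.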
Sorry to dig up an old thread, but I thought my question related to the networking setup of Nimble and Cisco UCS.
I am currently implementing a CS700 (with 4 x 10Gb SFP+ on each controller) for two sites and plan on replicating between them.
Currently the Nimble OS only allows replication over one subnet. My plan was to create a third VLAN purely for replication and tag it on all four 10Gb interfaces on each controller. This VLAN would be a global VLAN on both fabrics.
Can anyone see if this is a workable solution?
Thanks in advance
It should technically work, as VLAN tagging allows you to share physical ports among multiple subnets. That said, I don't know that I would necessarily tag all the physical ports with the replication VLAN: under heavy replication load, this could cause latency for iSCSI traffic, and vice versa. I would recommend picking one or two ports for replication and leaving at least two dedicated to iSCSI only.
That's my opinion, and others may have different ones.
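The port layout being recommended can be sketched as a VLAN-tagging table per controller port. The port names (`tg1`–`tg4`) and VLAN IDs are illustrative assumptions, not Nimble interface names from this thread:

```python
ISCSI_VLANS = {101, 102}   # data-A / data-B iSCSI VLANs (assumed IDs)
REPL_VLAN = 103            # replication VLAN, global on both fabrics

# VLAN tagging per controller port: replication shares only two ports,
# leaving two ports dedicated to iSCSI, per the recommendation above.
PORT_VLANS = {
    "tg1": {101, REPL_VLAN},
    "tg2": {102, REPL_VLAN},
    "tg3": {101},          # iSCSI-only, isolated from replication load
    "tg4": {102},
}

def dedicated_iscsi_ports(port_vlans: dict[str, set[int]]) -> list[str]:
    """Ports that carry an iSCSI VLAN but not the replication VLAN."""
    return [p for p, vlans in port_vlans.items()
            if vlans & ISCSI_VLANS and REPL_VLAN not in vlans]

# At least two ports stay dedicated to iSCSI.
assert len(dedicated_iscsi_ports(PORT_VLANS)) >= 2
```

Tagging the replication VLAN everywhere (the original plan) would make `dedicated_iscsi_ports()` return an empty list, which is the condition the reply is warning against.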
This would work - we are using something similar. We replicate over a separate routed network to other datacenters and have tagged the replication VLAN on the array's NIC ports.
We are using all of the ports as well, but I can see Brandon's concern; setting a bandwidth limit might help in case you run into latency issues on your iSCSI traffic.
For us, replication has never caused any latency issues.