Article covering the basic concepts and use cases for LACP, and how to use it in a VMware environment with iSCSI storage.
Great article, Wen, thanks!
What about Hyper-V? It looks like most of the information is about VMware; I'd like to see some more about Windows Hyper-V.
Heard you loud and clear, Jason - we are working to develop more content for Hyper-V as well. Stay tuned. Just curious: are you looking to deploy Server 2012 into your environment? If so, what workloads are you planning to run?
I already have Windows Server 2012 Datacenter in my company. There are 3 Hyper-V servers in a cluster.
In that document, it mentions the "best of both worlds" concept with both LACP and iSCSI MPIO. Would this also work with Cisco EtherChannel as opposed to LACP? For example: only two 10GbE ports on the ESX host, with the vSwitch configured for IP hash and a port channel on the switch side.
Hi Ryan, sorry I missed your question. The short answer is 'yes': EtherChannel is Cisco's proprietary implementation of link aggregation, and LACP (IEEE 802.3ad) is the standards-based equivalent.
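For the host side of Ryan's scenario, here's a minimal PowerCLI sketch (the vCenter, host, and vSwitch names are placeholders): with a static channel ("mode on") on the Cisco side, the standard vSwitch just needs its teaming policy set to route based on IP hash.

```powershell
# PowerCLI sketch - server, host, and vSwitch names are placeholders
Connect-VIServer -Server "vcenter.example.com"

# A static EtherChannel on the switch pairs with IP-hash teaming on the vSwitch
Get-VirtualSwitch -VMHost "esx01.example.com" -Name "vSwitch0" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP
```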
I can't speak to the ESXi question, but on Hyper-V the answer is yes - and I'll give you more.
It's a tricky question... MPIO and LACP would seem to be trying to accomplish the same thing. But LACP is down at layer 2, while MPIO sits much higher up the stack, in the storage/initiator layer above TCP/IP.
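To make the layering concrete: on Windows, MPIO is enabled and tuned entirely in the storage stack, with no reference to NIC teaming at all. A minimal sketch using the standard Server 2012 cmdlets:

```powershell
# MPIO is configured in the storage stack, not the network stack
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM automatically claim iSCSI-attached disks
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Round-robin I/O across all active paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```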
Here's what I've done in production and in a recent bakeoff/test of the Nimble array:
- Hyper-V 2012, 6 nodes
- 12x1GbE Broadcom NICs on each node
- Cisco 6509 with two WS-X6748-GE-TX blades
- A certain other storage vendor that does LACP on its storage array NICs
Hyper-V hosts are set up with a converged fabric architecture. Essentially I take 8x1GbE ports per host, set them up for LACP on the host, and combine them into six separate EtherChannels (one per host) on the 6509.
From that "converged" virtual switch on each host, I build virtual Ethernet adapters for the host, VLAN them out (CSV, Live Migration, Data, DMZ), and apportion a percentage of the aggregated bandwidth to each.
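For anyone who wants to reproduce this, the Server 2012 PowerShell looks roughly like the sketch below. It's a sketch only - the NIC names, VLAN IDs, and weight percentages are placeholders, not the exact values from my environment:

```powershell
# Team 8 x 1GbE ports with LACP (NIC1..NIC8 are placeholder adapter names)
New-NetLbfoTeam -Name "ConvergedTeam" `
    -TeamMembers NIC1,NIC2,NIC3,NIC4,NIC5,NIC6,NIC7,NIC8 `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

# External virtual switch on top of the team, with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# One host vNIC per traffic class, each tagged and weighted (placeholder values)
foreach ($nic in @(
        @{Name="CSV";           Vlan=10; Weight=20},
        @{Name="LiveMigration"; Vlan=20; Weight=30},
        @{Name="Data";          Vlan=30; Weight=40},
        @{Name="DMZ";           Vlan=40; Weight=10})) {
    Add-VMNetworkAdapter -ManagementOS -Name $nic.Name -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $nic.Name `
        -Access -VlanId $nic.Vlan
    Set-VMNetworkAdapter -ManagementOS -Name $nic.Name `
        -MinimumBandwidthWeight $nic.Weight
}
```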
With the remaining four physical NICs: 2x1GbE go to iSCSI. These are built as virtual switches on the host. I then provision virtual Ethernet NICs to the host, and the VMs plug into the same switches. No LACP channels anywhere. MC/S to the array (MPIO is relatively new to me).
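The iSCSI side is plain storage-stack configuration with no teaming involved. Something like this sketch, where the portal address, initiator IPs, and the target IQN are all placeholders:

```powershell
# Register the array's discovery portal (placeholder address)
New-IscsiTargetPortal -TargetPortalAddress "192.168.50.10"

# One session per iSCSI NIC gives MPIO multiple paths to balance
foreach ($ip in "192.168.50.21","192.168.50.22") {
    Connect-IscsiTarget -NodeAddress "iqn.2010-01.com.example:target0" `
        -InitiatorPortalAddress $ip -IsPersistent $true -IsMultipathEnabled $true
}
```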
In January we were evaluating a CS260 with 1GbE. I did what I'd always wanted to do from a host perspective: I combined all the 1GbE interfaces into a single LACP team, then dangled the requisite vNICs off that team/v-switch, including iSCSI. I tagged each with the appropriate VLAN, and into my old 2960s they went... the Nimble was on access ports for iSCSI.
Worked phenomenally. No issues whatsoever. I may implement it.
Great MS blog post on it with some good diagrams by the way.
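The only host-side change from the converged sketch above is that the iSCSI vNICs hang off the same LACP-backed switch instead of dedicated NICs - roughly this (adapter names and VLAN ID are again placeholders):

```powershell
# iSCSI vNICs on the same LACP-backed converged switch
foreach ($name in "iSCSI-A","iSCSI-B") {
    Add-VMNetworkAdapter -ManagementOS -Name $name -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $name -Access -VlanId 50
}
```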
Hey Wen! Thanks, great PDF. I was wondering: will this work with a standard vSwitch? The Distributed Switch is a feature exclusive to Enterprise Plus licensing. (Edit: removed sad face thanks to Wen's reply)
Hi Rocky, thanks for the feedback. With a standard vSwitch, you'd have to configure the link aggregation in static mode (in other words, turn off LACP negotiation and configure a plain static channel, paired with IP-hash teaming on the vSwitch, as in the PowerCLI sketch above). Only the vDS supports dynamic LACP, i.e. LACP in dynamic mode. Hope this helps!
Please note that the article was tested using a single-switch configuration; the switch was not modular and was therefore a single point of failure.
To get redundancy, you need switches with stacking support and a properly configured stack, so that LACP or trunk/EtherChannel port groups can span the fixed-port switches.
Yes, good point, mr_vaughn - you should absolutely configure the LACP/EtherChannel/link aggregation across ports on two separate network switches if you want to avoid a single point of failure. Thanks for pointing that out!