
ISCSI switchport aggregation with MPIO?

Question asked by rhcjmo on Jul 6, 2016
Latest reply on Jul 21, 2016 by Chris Aylott

I have a question regarding an architecture set up by a vendor that I am taking over supporting.  The current environment is VMware 5.5, with two dedicated NICs per server carrying iSCSI traffic to a switch stack that connects to a CS300.


Currently, they have set up the VMware iSCSI adapters per multipath recommendations (bound VMkernel ports with Round-Robin), but I am a bit confused about the potential impact of the network configuration.  Everything follows iSCSI best practice (dedicated switches, non-routed iSCSI VLAN, jumbo frames, etc.); however, the pair of stacked switches is aggregating the links to the NICs on the server.
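For anyone following along, the multipath setup described above can be sketched in a few lines. This is a toy illustration, not VMware code: each VMkernel/NIC pair is an independent iSCSI session (path), and the Round-Robin path-selection policy rotates I/O across those paths. The class and path names are purely illustrative.

```python
# Toy sketch (NOT VMware internals) of Round-Robin MPIO:
# each vmk->NIC pair is its own iSCSI session, and the
# path-selection policy cycles through them per I/O.
class RoundRobinPSP:
    def __init__(self, paths):
        self.paths = list(paths)  # e.g. ["vmk1->nic1", "vmk2->nic2"]
        self.i = 0

    def next_path(self):
        # Rotate to the next active path for the next I/O.
        path = self.paths[self.i % len(self.paths)]
        self.i += 1
        return path

psp = RoundRobinPSP(["vmk1->nic1", "vmk2->nic2"])
print([psp.next_path() for _ in range(4)])
# Both NICs carry traffic because each I/O is steered at the
# storage layer, with no switch-side link bonding required.
```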


Host egress traffic, from my understanding, shouldn't be affected, but return traffic from the Nimble would have to be processed by both NICs at least to Layer 2, and then one of them would have to drop the traffic because it isn't the intended destination MAC for the incoming frames, correct?  Since we are using software iSCSI via VMware (see Nimble's VMware guide, page 5), LACP or other link-bonding protocols seem to me to add unnecessary overhead, in both traffic and processing, for ingress traffic on the host's CPU and NICs.  Am I correct or mistaken?
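One more way to see why LACP adds little for iSCSI: a LAG distributes frames per *flow* (conversation) using a hash, so any single iSCSI TCP session always lands on the same physical link. The sketch below uses a simplified, hypothetical hash (real switches hash some mix of MAC/IP/port fields per 802.1AX); it only demonstrates the per-flow pinning behavior.

```python
# Simplified, hypothetical LAG hash: a real switch hashes a vendor-
# specific combination of L2/L3/L4 fields, but the key property is
# the same -- one flow always maps to one member link.
def lag_link(src_mac: int, dst_mac: int, num_links: int) -> int:
    return (src_mac ^ dst_mac) % num_links

# A single iSCSI session (fixed src/dst) is pinned to one link:
print(lag_link(0xAABB, 0xCCDD, 2))  # same answer every time
print(lag_link(0xAABB, 0xCCDE, 2))  # a different flow may pick the other link
```

So with one session per NIC, MPIO already places each session on its own link deliberately; the LAG hash can only reproduce (or worsen) that placement, never improve it.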

 

Reference:

 

iscsi.jpg
