Hi there, I have a 1Gb standard network configuration with eth1 and eth2 for management only, and eth3 to eth6 for data only.
Each management port and data port on each controller is plugged into a separate switch, as documented in the Nimble networking best practices.
I was wondering what triggers a failover of the controller?
There are a few scenarios I'm considering:
- If S1 fails, then eth1, eth3 and eth5 lose their connections; however, we still have connections on eth2, eth4 and eth6, so I'm guessing no failover occurs since the controller still has connectivity, just in a degraded state? And the same if S2 fails, but with eth2, eth4 and eth6?
- If an individual port failed, either on the switch or on the controller, what would happen?
- In the case of the management network, does failover rely on the availability of the group management IP address, which 'floats' between the two management ports? If that IP becomes unavailable, does failover occur?
- In the case of the data network, what happens if one or more ports on a controller become unavailable?
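To make my assumption concrete, here is a minimal sketch (purely illustrative, not Nimble's actual logic; the function name and port states are hypothetical) of the failover rule I have in mind: a controller gives up only when it loses all data-path connectivity or the floating management IP becomes unreachable, not when a single switch or port drops.

```python
def should_failover(data_ports_up: dict, mgmt_ip_reachable: bool) -> bool:
    """Hypothetical failover rule.

    data_ports_up maps a data port name to its link state,
    e.g. {'eth3': True, 'eth4': True, ...}.
    """
    # Failover only if every data path is down, or the floating
    # group management IP is no longer reachable on this controller.
    all_data_paths_down = not any(data_ports_up.values())
    return all_data_paths_down or not mgmt_ip_reachable

# Scenario: switch S1 fails, taking down eth3 and eth5
# but leaving eth4 and eth6 connected via S2.
after_s1_failure = {'eth3': False, 'eth4': True, 'eth5': False, 'eth6': True}
print(should_failover(after_s1_failure, mgmt_ip_reachable=True))   # False: degraded, no failover
print(should_failover({p: False for p in after_s1_failure}, True)) # True: all data paths lost
```

Under this model a single switch failure only degrades the controller, which matches my guess above; I'd like to know whether the real behaviour works this way.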
I'm not in a position to test failover scenarios at present, but I'd like to understand the theory behind failover.