Well, if you really wanted to take full advantage of it, you could do mgmt+data. I mean, really, how much mgmt traffic is going through there? Can't be much...
You might want to call support; that will probably be the only way to get a solid answer on this because, as you pointed out, it really depends on how the hardware is set up.
But to that point, if you are using all 1Gb connections, even if you used all four ports (2 data, 2 data+mgmt) I can't imagine you'd oversaturate the bus.
I'd play around and test, but I have the 10Gb add-in cards so I wouldn't be able to produce any useful results.
Typically eth1 & eth2 are allocated for management traffic only, with eth3 & eth4 for data traffic, and as you say, eth3 & eth4 are on the same PCI card. I wouldn't worry too much about the resiliency of the card itself. A PCI failure would cause an instant handover of the controllers to keep data traffic flowing, so when it came to replacing the PCI card/controller through RMA it would be easy to do, as that controller would already be in the secondary role.
I guess you've answered my question really! I wasn't sure whether a PCI card failure would trigger the controller handover, which is perfect.
One other question I have in relation to this: can one change the NIC assignment on the iSCSI side after volumes have been provisioned? I'm not bothered about having to reconfigure IQN names on the host side, but I just wanted to make sure it wouldn't change any of the volume config. I wouldn't have thought so, but wanted to ask an expert!
Is your question relating to when a controller takeover occurs and the volumes are presented from the other controller's ports? There is no reconfiguration to do here at all (IP addresses, IQN, MPIO, etc.) as the controllers are a mirror image of each other in that respect.
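From the host side, this is why a properly configured multipath setup rides through a takeover: the paths via the surviving controller simply become the active ones, and the host keeps addressing the volume by its multipath device rather than by any single port. As a minimal sketch (assuming a Linux host with dm-multipath; the vendor/product strings below are placeholders, not from this thread — check your array's documentation for the recommended values):

```
# /etc/multipath.conf -- minimal sketch; vendor/product strings are
# hypothetical placeholders, match them to your array
devices {
    device {
        vendor               "ExampleVendor"  # placeholder; use your array's value
        product              "ExampleArray"   # placeholder
        path_grouping_policy group_by_prio    # group paths by (e.g. ALUA) priority
        path_checker         tur              # TEST UNIT READY to detect dead paths
        failback             immediate        # fail back once preferred paths return
        no_path_retry        30               # queue I/O briefly during a takeover
    }
}
```

Because the multipath device is keyed off the volume's WWID rather than a specific target port, a controller takeover just changes which paths are active; the IQN and IP presentation the initiator logged in to stay the same, which matches the "mirror image" behaviour described above.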
The only time you would have to do reconfiguration of some sort would be if you replicated the volume from one array to another and then cloned/promoted that volume.
Hope the above helps (if I understood it correctly!).