May be worth having a read of the following article (non-Nimble) about dvSwitches first:
Some discussion on pros/cons.
Can't say I've seen anything specifically about dvSwitches with Nimble.
Given some of the comments in the post, I'd probably avoid using one dvSwitch with 8 uplinks and overriding failover order for iSCSI, and instead keep some separation (even if only logical) between iSCSI traffic and other traffic.
Have you read the Nimble VMware integration guide on InfoSight:
This gives 2 options with standard vSwitches: using one vSwitch for all iSCSI with multiple vmnics (and overriding failover order), OR using multiple vSwitches (one vmnic per switch, no failover order override required).
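As a rough sketch, the second option (one vmnic per standard vSwitch) would look something like this with esxcli; the vSwitch, portgroup, vmk, vmnic, IP, and vmhba names here are all assumptions for illustration, so substitute your own:

```shell
# Sketch: one standard vSwitch per iSCSI vmnic (all names/IPs are examples).
# Path A: vSwitch1 / portgroup iSCSI-A on vmnic2
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-A
esxcli network ip interface add -i vmk1 -p iSCSI-A
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.0.10.11 -N 255.255.255.0

# Path B: vSwitch2 / portgroup iSCSI-B on vmnic3
esxcli network vswitch standard add -v vSwitch2
esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic3
esxcli network vswitch standard portgroup add -v vSwitch2 -p iSCSI-B
esxcli network ip interface add -i vmk2 -p iSCSI-B
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.0.10.12 -N 255.255.255.0

# Bind both vmkernel ports to the software iSCSI adapter
# (your adapter may not be vmhba33 -- check "esxcli iscsi adapter list"):
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```

With one active vmnic per vSwitch there's nothing to override, which is why that option is the simpler of the two; the single-vSwitch option needs each iSCSI portgroup's failover order set so only one uplink is active per vmk.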
If you are using dvSwitches then I assume you are running the Enterprise edition of ESX, so also have a look at the Nimble Connection Manager (NCM), which lets you use a Nimble PSP; full details are in the document linked above.
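Once NCM is installed you can sanity-check that the Nimble PSP and claim rules have taken effect from the host. The exact PSP name is from memory, so verify it against the integration guide:

```shell
# List the path selection policies registered on the host -- the Nimble
# one should appear here after NCM is installed (name from memory, verify):
esxcli storage nmp psp list

# Check the SATP claim rules NCM added for Nimble devices:
esxcli storage nmp satp rule list | grep -i nimble

# Confirm which PSP your Nimble devices are actually using:
esxcli storage nmp device list | grep -i -A 2 nimble
```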
While there are a number of design criteria to consider for your question, I'll mention just a couple for now. Be aware that VMware recommends putting standard 1500-byte frames on a separate vSwitch from jumbo frames (if you're using jumbo frames). I can't find my reference for this at the moment, but I'm fairly sure it applies to versions of ESXi below 5.5. It will work with both frame sizes on one switch; it's just that it's possible to have problems.

Also be aware that the 3750G ASIC, which does the hardware-level switching before the IOS software gets to examine the traffic, has a surprisingly low throughput capability, and you may have microburst problems that cause the ASIC to drop packets (as observed in commands such as "sh platform port-asic stats drop gigabitEthernet x/x/x") before QoS ever gets to see them.
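If you do go with jumbo frames, the MTU has to match end to end: vSwitch, vmkernel port, physical switch ports/VLAN, and the array. A quick sketch on the ESXi side (vSwitch1, vmk1, and the target IP are assumptions):

```shell
# Enable jumbo frames on the iSCSI vSwitch and vmkernel port
# (names are examples -- use your own):
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Verify the whole path passes jumbo frames without fragmentation.
# -d sets don't-fragment; 8972 = 9000 minus 28 bytes of IP/ICMP headers.
# Target IP is an example -- ping a discovery/data IP on the array:
vmkping -d -s 8972 -I vmk1 10.0.10.50
```

If the vmkping fails at 8972 but works at a smaller size, something in the middle (often the physical switch) isn't configured for jumbo frames, and that mismatch is exactly the kind of thing that shows up as intermittent iSCSI pain later.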