I'm in the process of setting up ESX. What's the best configuration for four NICs? I've been told that putting the SC, VMotion, and iSCSI on the same private LAN is the best way to go. Any suggestions?
I'd suggest putting all your adapters in the same vSwitch. Then use VLANs and, if you run into bandwidth problems, tell the vSwitch to dedicate one link to VMotion, another to iSCSI, and the remaining two to your production networks (you can set this in the vSwitch properties).
I'd also suggest linking two of the Ethernet cards to one physical switch and the other two to a second switch if you can.
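From the service console that looks roughly like this. This is only a sketch for ESX 3.x; the vmnic numbers, port group names, VLAN IDs, and IP addresses are all placeholders for your own values:

```
# vSwitch0 normally already exists from the install with the Service Console
# and vmnic0; link the remaining uplinks to it (vmnic names are examples)
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0

# Port groups separated by VLAN (the IDs 10/20/30 are placeholders)
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 10 -p "VMotion" vSwitch0
esxcfg-vswitch -A "iSCSI" vSwitch0
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch0
esxcfg-vswitch -A "Production" vSwitch0
esxcfg-vswitch -v 30 -p "Production" vSwitch0

# VMkernel interfaces for VMotion and iSCSI (addresses are examples)
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI"
```

The per-port-group NIC order (one link dedicated to VMotion, one to iSCSI, two to production) is then set in the VI Client under the vSwitch properties, as mentioned above.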
Hope this helps.
Are you using VLANs? Are you using HA? I would say it depends on your network config and how many VMs you're running.
I prefer 1 COS, 1 VMotion, 2 bonded for VM traffic.
Team two NICs for SC/VMotion on one virtual switch, and team the other two NICs on another virtual switch for VM traffic.
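If you do it from the service console, that layout looks roughly like this on ESX 3.x (the vmnic numbers, port group names, and IP are just example values):

```
# vSwitch0: Service Console + VMotion, teamed across two uplinks
# (vmnic0 is normally linked at install time; vmnic1 is an example)
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vmknic -a -i 192.168.1.11 -n 255.255.255.0 "VMotion"

# vSwitch1: VM traffic, teamed across the other two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
```

Both pairs are active/active by default, so each virtual switch gets uplink redundancy without any extra teaming settings.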
We have hundreds of ESX hosts using 1 COS, 1 VMotion, and 2 bonded for VMs (cross-switch). However, we use an active/passive redundant fiber fabric for shared storage.
We have not rolled out iSCSI yet; we're waiting for consistent and reasonable 10GbE pricing and for full support, such as VCB officially supporting iSCSI. When we do, we are considering 1 COS, 1 VMotion/iSCSI, and 2 bonded for VMs for the labs, and also 1 COS, 1 VMotion, 2 for iSCSI (cross-switch), and 2 for VMs (cross-switch).
I know that second iSCSI model is more than 4 NICs, but I think you can see why we are taking the position we are with iSCSI: 4 versus 6 NIC ports is not a significant cost for us.
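For reference, the software-initiator side should just be another VMkernel port plus enabling the initiator. A rough ESX 3.x sketch (the vSwitch, vmnic numbers, port group name, and address are all placeholders):

```
# Dedicated vSwitch for iSCSI, teamed across two uplinks (example names)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "iSCSI" vSwitch2
esxcfg-vmknic -a -i 192.168.30.11 -n 255.255.255.0 "iSCSI"

# Enable the software iSCSI initiator, then rescan for targets
esxcfg-swiscsi -e
esxcfg-swiscsi -s
```

Target addresses and CHAP are set through the VI Client under the host's Storage Adapters, and, if I remember right, on ESX 3.x the Service Console also needs a connection to the iSCSI network for the software initiator to log in.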