When I first implemented this, it was with ESX 4.0 on four hosts with 12 NICs each, backed by an EqualLogic array using Round Robin path selection. The design was:
1 NIC – Backup Network (non-routable) - Distributed switch
2 NICs – Service console - Distributed switch
2 NICs – vMotion (non-routable) - Distributed switch
3 NICs – VM Network - Distributed switch
4 NICs – iSCSI (non-routable) – Standard virtual switch
With the introduction of EqualLogic firmware v5.0.2, Dell MEM, and ESX 4.1, I was able to move the iSCSI network to a distributed switch. Using Dell MEM rather than Round Robin, I get true load balancing across 2 NICs and greater throughput. By default, Dell's MEM only uses 2 NICs and leaves all the others in a failover state (I am always in favor of defaults, too). So I thought about reallocating those 2 NICs away from the iSCSI network, but since this whole infrastructure relies on the iSCSI network, I opted to keep the two failover NICs as an insurance policy.
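For anyone checking the same thing on their hosts, this is roughly how I verified which path selection plugin was actually claiming the EqualLogic volumes. This is a sketch using 4.1-era esxcli syntax; the `naa.xxxxxxxx` device ID is a placeholder for your own volume's identifier:

```shell
# List the Path Selection Plugins registered on the host.
# After installing Dell MEM, its routed PSP (DELL_PSP_EQL_ROUTED)
# should appear alongside the built-ins like VMW_PSP_RR.
esxcli nmp psp list

# Show each device and which PSP is currently managing its paths.
esxcli nmp device list

# MEM normally claims EqualLogic volumes automatically; to flip a
# single device back to Round Robin for a before/after comparison
# (naa.xxxxxxxx is a placeholder):
esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR
```

These commands only query or change pathing policy on the host, so they are safe to run for a quick check before deciding how many iSCSI NICs to keep.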
Then I started to think about how ESXi does not have a service console but rather a management network. As a best practice, does the management network need 2 NICs the way the service console did? The 2-NIC design was meant to prevent HA from triggering when a single service console NIC failure caused network isolation on that particular host.
Should I combine the vMotion and old service console networks, or should I buy another 4-port NIC?
Thank you