Any suggestions on networking: management network, VM network, and vMotion?
New deployment:
3 hosts with 4 x 10Gb NICs and approximately 30 to 40 guest VMs.
Two NIC teams of 2 x 10Gb each:
vSwitch0 | Mgmt and VM Ntwk (Team 1)
vSwitch1 | vMotion VLAN100, MTU 9000 (Team 2)
LB Policy - Route based on IP hash
Physical switch ports - LACP
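The vSwitch1 side of the layout above could be built from the ESXi shell roughly like this. This is a hedged sketch: the uplink names (vmnic2/vmnic3) and the portgroup name "vMotion" are assumptions, not taken from the original post.

```shell
# Sketch: create vSwitch1 for vMotion (Team 2), jumbo frames, VLAN 100.
# vmnic2/vmnic3 and the portgroup name are assumed; adjust to your host.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
# MTU 9000 on the vSwitch (the physical switch ports must also allow jumbo frames)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=100
```

Note that MTU 9000 must be configured end to end (vSwitch, VMkernel port, and physical switch ports), or vMotion traffic will suffer fragmentation or drops.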
Use a single VDS (with all 4 x 10Gb uplinks) and create the associated dvPortgroups for management, vMotion, and each VM network.
If you have high network utilization on your virtual networking, use LACP (on the physical side) with IP Hash (on the virtual side); otherwise, I don't think that complexity is necessary, because troubleshooting that kind of setup is harder anyway.
I would suggest the following (without knowing your specific requirements):
vSwitch 0: 2 x 10 GbE interfaces, Management and vMotion. vmnic0 active for Management with vmnic1 standby; the opposite for vMotion.
vSphere Distributed Switch: 2 x 10 GbE interfaces, no LACP, Route Based on Physical NIC Load. All VM networks.
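The active/standby split on vSwitch 0 described above can be set per portgroup from the ESXi shell. A minimal sketch, assuming the default portgroup names "Management Network" and "vMotion" (the VDS part is configured in vCenter and is not shown here):

```shell
# Management: vmnic0 active, vmnic1 standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1
# vMotion: the opposite, so each traffic type normally has a dedicated 10 GbE link
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

This way each traffic type gets a full 10 GbE uplink in normal operation, and either one survives a single NIC or switch failure.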
Hi,
If you use Route Based on IP Hash, you also have to configure the physical switches (a port channel), and when a problem occurs, troubleshooting becomes more difficult.
Your physical uplink bandwidth is already very high; unless you are doing something like streaming, I recommend staying with the default Route Based on Originating Virtual Port policy.
If it were me:
Management and vMotion = vSwitch 0
Virtual machine network = vSwitch 1
I will need to set this up with standard vSwitches (I have Essentials Plus, so no distributed switch) and configure the physical switch port channel with channel-group mode ON (a static port channel, since standard vSwitches do not support LACP).
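On the ESXi side, the matching piece of that setup is switching the vSwitch load-balancing policy to IP hash. A hedged sketch, assuming vSwitch0 carries the teamed uplinks:

```shell
# Set Route Based on IP Hash on a standard vSwitch (vSwitch0 assumed).
# The physical switch side must be a STATIC port channel
# ("channel-group <n> mode on" on Cisco); LACP ("mode active") will not
# negotiate with a standard vSwitch and the ports will go down.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=iphash
```

Set the policy at the vSwitch level so all portgroups inherit it; a portgroup that overrides teaming with a different policy on an IP-hash uplink pair is a common cause of intermittent connectivity.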