I will set up a new cluster with 6 physical servers. Each server has 4 x 10 Gb and 2 x 1 Gb network cards. I will use standard switches. What are the best practices? What happens if I don't use the 2 x 1 Gb Ethernet cards?
Is it okay if I use 2 x 10 Gb uplinks for management and vMotion, and 2 x 10 Gb for VM networks? Also, should I configure one or two standard switches?
How do you attach the shared storage (which I assume you have) to the hosts?
André
There is a SAN switch, and it is connected with fiber cable.
In case you want to go with 4 x 10 Gbps, I'd probably create a single vSwitch with all 4 uplinks, and configure vMotion with one of the vmnics as active and the other ones as standby. For all other port groups (including the Management port group), set the vMotion vmnic as standby and the other 3 vmnics as active.
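The suggested layout can be sketched as a small helper that returns the active/standby split per port group. The vmnic names are assumptions for illustration, not taken from the thread; adjust them to your host:

```python
# Sketch of the suggested teaming layout for a single vSwitch with
# 4 x 10 Gbps uplinks. vMotion gets one dedicated active uplink; every
# other port group treats that uplink as standby, so vMotion traffic
# normally stays off the uplinks carrying Management and VM traffic.

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]
VMOTION_NIC = "vmnic3"  # assumption: any one of the four works

def teaming_policy(port_group: str) -> dict:
    """Return the active/standby uplink order for a port group."""
    if port_group == "vMotion":
        active = [VMOTION_NIC]
        standby = [n for n in UPLINKS if n != VMOTION_NIC]
    else:  # Management, VM networks, etc.
        active = [n for n in UPLINKS if n != VMOTION_NIC]
        standby = [VMOTION_NIC]
    return {"active": active, "standby": standby}

if __name__ == "__main__":
    for pg in ("vMotion", "Management", "VM Network"):
        print(pg, teaming_policy(pg))
```

On an actual host this maps to the per-port-group failover order you set in the vSphere Client (override switch failover order), or via `esxcli network vswitch standard portgroup policy failover set`.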
André
Why don't we create 2 switches? Is there a special reason? We could add 2 NICs to every virtual switch. Is this a wrong design? There are 2 x 1 Gb NICs on each server and a 1 Gb physical switch for the ESXi management ports in the environment. Do you mind if I use 1 Gb for the management port? Can you share your experience here?
You can of course use the 1 Gb vmnics for Management, but from your initial post I understood that you didn't want to use them!?
Anyway, try to keep vMotion on its own vmnic, as vMotion can completely saturate a vmnic, which may cause latency for VMs using the same vmnic.
The reason for my suggestion was to have as many vmnics as possible available for VM and Management traffic.
André