VMware Cloud Community
jenniferg
Contributor

1GB Network Connections

I plan on having two ESX servers, each with 8 physical NICs, configured with separate HP switches for iSCSI/SAN traffic. We will be using vMotion. All 7 VMs will be on the primary ESX server. The secondary ESX server is just there in case the first server fails and we have to fail over.

This is the configuration I was planning to use for each of the physical ESX servers (a rough scripted sketch of this layout follows the list):

vSwitch0: 2 pNICs for the service console, connected to our LAN switch at 100 Mb

vSwitch1: 2 pNICs for vMotion, connected to our LAN switch at 1 Gb

vSwitch2: 2 pNICs for the virtual machines, connected to our LAN switch at 100 Mb

vSwitch3: 2 pNICs for iSCSI traffic, connected to our HP switches for iSCSI/SAN
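
Not part of the original post, but here is a minimal sketch of what building that four-vSwitch layout could look like with pyVmomi (the vSphere Python SDK), assuming a direct connection to the host; the hostname, credentials, and vmnic numbering are placeholders:

```python
# Hypothetical sketch only: builds the four-vSwitch layout above with pyVmomi.
# Hostname, credentials, and vmnic numbers are placeholders, not from the post.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx01.example.local", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
# Connected directly to the ESX host, so the inventory holds exactly one host.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# vSwitch name -> its pair of physical uplinks (assumed vmnic numbering)
layout = {
    "vSwitch0": ["vmnic0", "vmnic1"],   # service console, 100 Mb LAN ports
    "vSwitch1": ["vmnic2", "vmnic3"],   # vMotion, 1 Gb LAN ports
    "vSwitch2": ["vmnic4", "vmnic5"],   # virtual machines, 100 Mb LAN ports
    "vSwitch3": ["vmnic6", "vmnic7"],   # iSCSI, 1 Gb ports on the HP switches
}

for name, uplinks in layout.items():
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks))
    try:
        netsys.AddVirtualSwitch(vswitchName=name, spec=spec)
    except vim.fault.AlreadyExists:
        # vSwitch0 usually exists on a fresh install; update its uplinks instead.
        netsys.UpdateVirtualSwitch(vswitchName=name, spec=spec)

Disconnect(si)
```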

I discovered that our LAN switch only has 4 available 1 Gb ports, which means I would only have two 1 Gb ports per ESX server to use for the physical network connections back to the LAN switch. The HP switches for the iSCSI/SAN do have enough 1 Gb ports available for the iSCSI traffic.

I wasn't sure if I can get away with the VMs themselves only having 100 Mb connections if there are 2 physical NICs. Do I really need 2 physical NICs for vMotion traffic on both ESX servers, or could I have one physical NIC per server instead?

Looking for some suggestions since I'm limited on the physical 1 Gb ports available on our LAN switch.

3 Replies
amvmware
Expert

It depends on what the 7 VMs are doing - are they network-intensive applications? If not, then 100 Mb NICs should be OK - configure them for full rather than half duplex. You could also look at adding more 100 Mb NICs to vSwitch2 and balancing the VM traffic across more of them.
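
For reference, a small sketch (not part of the original reply) of how that duplex setting could be forced through pyVmomi rather than left to auto-negotiation; it reuses the netsys handle from the earlier sketch, and the vmnic names are placeholders:

```python
# Hypothetical: force the 100 Mb VM-traffic uplinks to full duplex.
# "netsys" is the HostNetworkSystem handle from the earlier sketch;
# vmnic names are placeholders.
from pyVmomi import vim

for nic in ("vmnic4", "vmnic5"):
    netsys.UpdatePhysicalNicLinkSpeed(
        device=nic,
        linkSpeed=vim.host.PhysicalNic.LinkSpeedDuplex(speedMb=100, duplex=True))
```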

I would use the 2 x 1 Gb NICs per ESX host to handle both the service console and vMotion traffic. Make NIC 1 the primary for service console traffic and NIC 2 its redundant partner, and reverse the NICs for vMotion. You don't need vSwitch1 or dedicated NICs for vMotion in a solution using 2 hosts. You could also create a separate port group on vSwitch0, make that your VM traffic port group, and assign the 100 Mb NICs to that port group.
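
A minimal sketch of that suggestion (again pyVmomi, not from the original reply): one vSwitch on the two 1 Gb uplinks, with the service console and vMotion port groups using reversed active/standby order so each NIC backs up the other. It reuses the netsys handle from the first sketch, and the vmnic and port group names are assumptions:

```python
# Hypothetical sketch: combined service console / vMotion vSwitch with reversed
# active-standby NIC order per port group. Assumes "netsys" from the first sketch.
from pyVmomi import vim

gb_uplinks = ["vmnic0", "vmnic1"]   # the two 1 Gb ports on the LAN switch

# One vSwitch bonded to both 1 Gb uplinks (on a fresh install vSwitch0 already
# exists, so UpdateVirtualSwitch may be needed instead of AddVirtualSwitch).
netsys.AddVirtualSwitch(
    vswitchName="vSwitch0",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=gb_uplinks)))

def teamed_portgroup(name, active, standby):
    """Port group whose NIC teaming order overrides the vSwitch default."""
    return vim.host.PortGroup.Specification(
        name=name, vlanId=0, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=[active], standbyNic=[standby]))))

# Service console active on vmnic0 / standby on vmnic1; vMotion the reverse.
netsys.AddPortGroup(portgrp=teamed_portgroup("Service Console", "vmnic0", "vmnic1"))
netsys.AddPortGroup(portgrp=teamed_portgroup("VMotion", "vmnic1", "vmnic0"))
```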

s1xth
VMware Employee

I agree. With a two-host cluster you don't need two dedicated pNICs for vMotion. Combine them on your service console vSwitch. From what you are saying, it doesn't seem like you are going to use HA; you'll just use the second host for manual failover.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
hstagner
VMware Employee

Hello Jenniferg,

Given the lack of a choice, I would do exactly what amvmware suggested and put vMotion and the service console on the same NIC team as failover for each other.

However, if you are concerned with redundancy, then your switch is still a single point of failure if you only have one for each type of traffic (one for iSCSI and one for LAN).

Also, if you are planning on scaling this solution to more than two hosts in the future, I would invest in some Gb switches. I believe the HP ProCurve 2510G-24 (24 x 10/100/1000 ports) can be bought for around $600 - $700 each.

You are already paying for the advanced functionality of vMotion (and HA, I am presuming, because you should have it as well). It would be a shame to cripple all of that redundancy by not spending an extra $1,200 - $1,400 on SMB switches to ensure that your environment is as redundant and scalable as possible.

Just my 2 cents. I hope this helps.

Don't forget to use the buttons on the side to award points if you found this useful (you'll get points too).

Regards,

Harley Stagner

-----------------------------------------
Don't forget to mark this answer "correct" or "helpful" if you found it useful (you'll get points too).
Regards,
Harley Stagner
VCP3/4, VCAP-DCD4/5, VCDX3/4/5
Website: http://www.harleystagner.com
Twitter: hstagner