VMware Cloud Community
tmcd35
Enthusiast

NICs

My god, I can't believe I haven't been flamed yet for the amount of questions I'm asking!

Anyhow, I have another point I'd like someone with experience to clarify for me please.

I've been reading up about NICs and want to be sure I haven't misread, as it sounds like we'd need quite a few in each server!

I've looked at the following previous posts which do seem to hint at the following:

http://www.vmware.com/community/thread.jspa?messageID=601855&#601855

http://www.vmware.com/community/thread.jspa?messageID=627292&#627292

Okay, on a per server basis - 2 servers so we'll need to double these figures:

2x 10/100 NICs for COS to support HA

2x 1Gb NICs bonded connecting vSwitch to pSwitch for VMs

1x 1Gb NIC for vMotion

1x 1Gb NIC connecting to SAN VLAN/Switch

That's 6 NICs total per server? 12 NICs for around 7/8 (initially) VMs? Seems excessive!

Also, does the COS need its own VLAN? We are looking at having a separate desktop PC set up as a DNS server and Management Console server. How does this fit in with the COS on the hosts? I'm sure it'd need to be on the same VLAN as the COS NICs? And then there's vMotion - is that on the main LAN, the same LAN/VLAN as the COS, or the SAN's VLAN/switch?

1800 users, 600+ PCs/laptops, and serving databases, apps, files, etc. - we are 'concerned' about network workload, probably more than about the speed of the SAN, so getting this part right is paramount for us!

Thanks again

Terry.

13 Replies
mreferre
Champion

Terry,

there are tons of discussions on the number of NICs, and the reason there are so many... is because it depends. Technically you can run the whole thing with only one NIC (well, 2 for redundancy) per server.

In reality it boils down to how your own network is laid out and the constraints you have. My opinion is that, in most situations, bandwidth is not a limiting factor, but I know many people on this board disagree with this statement.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
juchestyle
Commander

Hey man,

No worries, you don't need that many NICs.

In fact, you don't want to put too many NICs in an ESX host. It takes about 1 MHz of CPU for every meg of networking bandwidth your NICs transmit.

Some blade servers only have 4 NICs. In our quad servers we have 8 NICs, with one just hanging out for redundancy - it's not even plugged in. We are running 30 VMs per host, give or take. The reason we have so many NICs in our hosts is that we use one pNIC for each VLAN we have; our networking department doesn't want to use other methods.

As a minimum you would probably want at least 2 NICs: 1 for the SC and 1 for VMs.

Even with 30 VMs on one of our hosts, we don't come anywhere near the capacity of even just 1 gig NIC.
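To put that rule of thumb into numbers, here is a back-of-envelope sketch; every input (the aggregate traffic figure especially) is an illustrative assumption, not a measurement from any real host:

```shell
# Rough check of the "~1 MHz of CPU per 1 Mbps of NIC traffic" rule of thumb.
BW_MBPS=200          # assumed aggregate network traffic across all VMs, in Mbps
CPU_TOTAL_MHZ=6400   # e.g. 2x 3.2 GHz cores
OVERHEAD_MHZ=$BW_MBPS                          # ~1 MHz per Mbps transmitted
PCT=$(( OVERHEAD_MHZ * 100 / CPU_TOTAL_MHZ ))  # integer percentage of the host
echo "~${OVERHEAD_MHZ} MHz of CPU (~${PCT}% of the host) spent on networking"
```

The point of the sketch: the cost scales with traffic actually moved, not with how many NICs are installed.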

Respectfully,

Matthew

Kaizen!
Dave_Mishchenko
Immortal

It's all about isolating traffic to ensure that one component does not affect the performance of another. For example, you could combine the VM vSwitch with vMotion, but there would be the possibility of vMotion impacting the VMs. Likewise, from the comments below about performance, you wouldn't want vMotion to impact your iSCSI traffic.

You could combine the SC and vMotion, but you would really need 1 Gbps NICs for that. If you just went with four 1 Gbps NICs, you could have 2 for VMs, 1 for iSCSI, and the last for SC and HA/vMotion.

Also, does the COS need its own VLAN? We are looking at having a separate desktop PC set up as a DNS server and Management Console server. How does this fit in with the COS on the hosts? I'm sure it'd need to be on the same VLAN as the COS NICs? And then there's vMotion - is that on the main LAN, the same LAN/VLAN as the COS, or the SAN's VLAN/switch?

It doesn't need its own VLAN - that's just a security option - but if you did go for it, then it would have to be the same VLAN as the PC running VirtualCenter. vMotion likewise doesn't need a dedicated VLAN, and it could be on either switch.

1800 users, 600+ PCs/laptops, and serving databases, apps, files, etc. - we are 'concerned' about network workload, probably more than about the speed of the SAN, so getting this part right is paramount for us!

2x 10/100 NICs for COS to support HA

2x 1Gb NICs bonded connecting vSwitch to pSwitch for VMs

1x 1Gb NIC for vMotion

1x 1Gb NIC connecting to SAN VLAN/switch

That's 6 NICs total per server? 12 NICs for around 7/8 (initially) VMs? Seems excessive!

esiebert7625
Immortal

FYI... here are some good NIC reads:

VMware ESX Server 3 802.1Q VLAN Solutions - http://www.vmware.com/pdf/esx3_vlan_wp.pdf

Networking Virtual Machines - http://download3.vmware.com/vmworld/2006/TAC9689-A.pdf

Networking Scenarios & Troubleshooting - http://download3.vmware.com/vmworld/2006/tac9689-b.pdf

ESX3 Networking Internals - http://www.vmware-tsx.com/download.php?asset_id=41

High Performance ESX Networking - http://www.vmware-tsx.com/download.php?asset_id=43

Network Throughput in a Virtual Infrastructure - http://www.vmware.com/pdf/esx_network_planning.pdf

Multi-NIC Performance in ESX 3.0.1 and XenEnterprise 3.2.0 - http://www.vmware.com/pdf/Multi-NIC_Performance.

oreeh
Immortal

Even with 30 vm's on one of our hosts, we don't come anywhere near the capacity of even just 1 gig nic.

And I think I know the reason for this :-)

tmcd35
Enthusiast

I forgot about that - I had heard the 1GHz per 1Gb figure before. So with 2x 3.2GHz cores (6.4GHz total), I don't want to potentially waste 4GHz running the NICs! So I need to work out the best combination of NIC sharing, speed, and redundancy.

We want to use both HA and vMotion, as these two technologies particularly sold us on the idea of VMware as an IT department (we can't use them as a selling point on the bean counters).

I'd prefer 2Gb connecting the vSwitch to the pSwitch. We have onboard 10/100 cards to use for the COS. We are seriously sold on iSCSI.

Now all I need to do is read through all these links and pull it all together.

Thank you.

Ken_Cline
Champion

I like to use four pNICs as a good starting point - though as Massimo said, you can run the whole she-bang off of one, if you need to.

Four pNICs with two vSwitches:

  [pNIC]--+             +--[ COS      ]
          +--[vSwitch0]-+
  [pNIC]--+             +--[ vMotion  ]

  [pNIC]--+             +--[ Virtual  ]
          +--[vSwitch1]-+
  [pNIC]--+             +--[ Machines ]
That way, you've isolated "management" functions (COS / VMotion) from production (Virtual Machines) as well as provided redundancy for everything.

To cut the pNICs in half, just roll everything into one vSwitch...
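For reference, a layout like the one above could be built from the ESX 3.x service console roughly as follows; the vSwitch and vmnic names are illustrative assumptions, and you'd still add the Service Console and vMotion VMkernel interfaces on top of this:

```shell
# Sketch of the four-pNIC / two-vSwitch layout (names are illustrative).
esxcfg-vswitch -a vSwitch0                # "management" vSwitch
esxcfg-vswitch -L vmnic0 vSwitch0         # COS/vMotion uplink 1
esxcfg-vswitch -L vmnic1 vSwitch0         # COS/vMotion uplink 2
esxcfg-vswitch -A "VMotion" vSwitch0      # port group for vMotion
esxcfg-vswitch -a vSwitch1                # production vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1         # VM uplink 1
esxcfg-vswitch -L vmnic3 vSwitch1         # VM uplink 2
esxcfg-vswitch -A "VM Network" vSwitch1   # port group for the VMs
```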

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
juchestyle
Commander

I forgot about that - I had heard the 1GHz per 1Gb figure before. So with 2x 3.2GHz cores (6.4GHz total), I don't want to potentially waste 4GHz running the NICs!

Oreeh - ROFL

TMCD - You are right, sort of. You should put gig NICs in your hosts, but I don't want you to think that putting gig NICs in there will automatically lose you 4 GHz of CPU. It won't. You could have 20 gig NICs in there (if that were possible), and if you are only sending and receiving 1 meg of bandwidth then you would only use 1 MHz of CPU.

I think you know this, but the way you wrote that I just wanted to make sure.

Respectfully,

Matthew

Kaizen!
tmcd35
Enthusiast

LOL - re-reading what I wrote, you're right. What I meant, of course, was the potential to lose that much if there were that demand on bandwidth. That, and I'm still getting my head around all of the figures. I'll get there!

tmcd35
Enthusiast

Right, I think I've got it.

I like the look of Ken's diagram (cheers).

2x 1Gb NICs for COS/vMotion/HA - no bandwidth 90% of the time, as they should only be busy if there's a problem.

2x 1Gb for the VLAN - probably handling small requests 60-70% of the time, but I do foresee occasions when the full 2Gb bandwidth will be used.

1x 1Gb to the SAN.

Hmm, having written that, I now wonder if 2x NICs to the VLAN are needed. As the data/apps/files are on the SAN, we have a 1Gb bottleneck. The link to the SAN needs to be the same bandwidth as the link between the VMs' vSwitch and the pSwitch.

Now, the reason for wanting 2Gb from the pSwitch to the VMs is that I'm sure one of the biggest problems of our current file server is bandwidth - I'm not sure its 1Gb NIC is enough.

Basically, at the start of every lesson a good 300 students log on to the network, plus 150 staff on top of that. That's an average; we can peak at around 6-700 users on the network at once.

Now, typical usage of the file server may include 2-300 students opening/saving their PowerPoint presentations containing 20+ pages of pointless bloated graphics and pictures, a couple of teachers playing back a video they downloaded for a lesson, maybe some teachers preparing for a lesson by downloading video into their home drives, not to forget 10-20 art students opening RAW 6-10MP images, plus another 50-100 users opening/closing smaller, more sensible files.

Now this would typically occur at the start/end of every lesson. The joys of being a large school.

So we need the bandwidth to cope with the peaks without throttling the processors, so the VMs themselves don't start grinding to a halt.

(anyone reading this, beware - I'm thinking and typing at the same time...)

But we also need to factor in that we are load balancing across two host servers, so splitting file services into smaller VMs and spreading the load across the two servers means that less bandwidth is needed on each individual server...

So we are back to 6 NICs (only in a different configuration): 2 for management - largely unused - 2 for the VMs, and 2 for the SAN. But at peak load - which could happen a couple of times a day - we'd have 2Gbps coming from the SAN and out across the VMs.

I'll stop here, because I'm running scenarios through my head that seem ever more outlandish. To cut to the chase: the SAN will have at most two 1Gb Ethernet channels, which will be shared by two hosts, so more than 1Gb from host to SAN is wasted or redundant. So only 1Gb is needed from vSwitch to pSwitch, as it'd never get more than that from the SAN. A second NIC from vSwitch to pSwitch is only needed for redundancy or load balancing (incoming requests on one, outgoing on the other?).

But will 1Gb be man enough if we are moving 7 overused servers onto 2 physical servers?

Man, I thought finding the right SAN was a challenge...

Ken_Cline
Champion

Hmm...couple things:

\- I would strongly recommend two pNICs for iSCSI access. Since your VMs are stored there, if one link goes away, you're dead in the water - I consider two a necessity. NOTE: configuring two pNICs for iSCSI will not provide you with 2Gbps access to your iSCSI storage. The load-balancing algorithms available for ESX do not support true link aggregation, so you're effectively limited to 1Gbps for any one conversation.

\- 1 Gbps is more than man enough to handle 20-30 "typical" workloads. You'll find that most systems don't push more than 3-5Mbps of sustained bandwidth, and peak at maybe 50-60Mbps. There are exceptions, but they're rare - and they don't often peak at exactly the same time...
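Plugging those figures into a quick sizing check (the VM count and per-VM rates below are assumptions picked from the ranges just quoted):

```shell
# Sustained vs. peak demand against a single 1 Gbps uplink.
VMS=25               # assumed consolidation, in the 20-30 range above
SUSTAINED_MBPS=4     # assumed sustained load per typical VM (3-5 Mbps)
PEAK_MBPS=60         # assumed occasional per-VM peak (50-60 Mbps)
LINK_MBPS=1000       # one gigabit uplink
TOTAL_SUSTAINED=$(( VMS * SUSTAINED_MBPS ))
MAX_PEAKING=$(( LINK_MBPS / PEAK_MBPS ))   # VMs that can peak at once
echo "Sustained total: ${TOTAL_SUSTAINED} Mbps of ${LINK_MBPS} Mbps"
echo "VMs that can peak simultaneously: ${MAX_PEAKING}"
```

On these assumptions, sustained load uses only a tenth of the link, and well over a dozen VMs would have to peak at exactly the same moment to saturate it - which is the point of the "they don't often peak at the same time" remark.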

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
tmcd35
Enthusiast

I think my head is there on this one :-)

Might seem excessive, but...

6x 1Gb links and 1x 100Mb (it's on the mobos, so might as well use it!)

1x 100Mb for COS

2x 1Gb for vMotion, HA, COS (redundant)

2x 1Gb vSwitch to pSwitch (VMs)

2x 1Gb iSCSI

The 3 management links can be virtually ignored - nothing would really be transmitting there unless there's a problem somewhere.

As for the rest, 2 NICs each give both the redundancy and the bandwidth required. If, heaven forbid, all 4 NICs maxed out (I can't think of why they would), then quite frankly they can use all the processor MHz they need.

As for whether 1Gbps is man enough - like I say, it's a school environment, and we're a large school. Once an hour the bandwidth of the file server(s) gets truly hammered! We have 600+ terminals, an average of 300 online at any given moment, 120 active in ICT lessons, 5-10 in photography, and of 75 classrooms, 40-50 teachers will be running PowerPoint or streaming video.

We used to have problems - slowdowns - until we split our file service across two servers. My only worry is that, moving 7 machines onto 2, those problems would return. The bonus here is that the 2 file server VMs don't need to be running on the same host, thus load balancing file requests.
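As a sanity check on the lesson-changeover storm, here is a crude estimate; every figure below is an assumption about the school's workload (users, data per user, and the window the storm spreads over), not a measurement:

```shell
# Crude estimate of file-server demand during a lesson changeover.
USERS=300      # assumed simultaneous active users at changeover
MB_EACH=5      # assumed average MB pulled per user (docs, images, profiles)
WINDOW_S=60    # assumed window (seconds) the logon storm spreads over
AVG_MBPS=$(( USERS * MB_EACH * 8 / WINDOW_S ))   # MB -> Mbit, spread over window
echo "Average demand over the window: ${AVG_MBPS} Mbps"
```

Under these assumptions the average demand fits inside one gigabit link, but halving the window or doubling the per-user data would push it toward saturation - which is exactly the case for the second, load-balanced file server VM.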

Ken_Cline
Champion

Sounds like a plan... I'd mark the 100Mbps pNIC as a standby for the COS - no reason to constrain it unless you have to :-)

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/