VMware Cloud Community
NullDevice23
Contributor

Virtual NICs: Only 1 Gbit, or more?

Hi!

We have virtualized most of our physical servers this year.

All in all we have around 40 virtual servers now.

Our clients are all connected with simple 1 Gbit NICs.

Before, most of our servers were connected with several aggregated NICs.

But now we have just one 1 Gbit virtual NIC per server, which I don't really like, considering that a single client can congest a server with network traffic.

Our IT consultant told us that it would be bad to exchange those virtual 1 Gbit NICs for 10 Gbit ones, because that might put too heavy a load on the storage.

So what about aggregating several 1 Gbit NICs, like 2 or 3?

Is that basically a good idea?

Our storage consists of G8 DataCore servers with mixed 10k and 15k SAS disks. Our physical server equipment is an HP C3000 BladeSystem, connected to our LAN with 2x 10 Gbit and to the DataCore servers via FC.

Any suggestions?

Thx in advance,

ND.

4 Replies
weinstein5
Immortal

Are you asking about the physical NICs on your ESXi hosts or the virtual NICs on the VMs?

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
MKguy
Virtuoso

Our IT consultant told us that it would be bad to exchange those virtual 1 Gbit NICs for 10 Gbit ones, because that might put too heavy a load on the storage.

So what about aggregating several 1 Gbit NICs, like 2 or 3?

Is that basically a good idea?

Your consultant shares the common misconception that the virtual link speed a Guest detects on its virtual NIC limits the actual throughput like it would in the physical world. It does not. Even if the Guest runs with a virtual 1 Gbps vNIC link, it can achieve throughput well beyond that on the same host, or between hosts as well if you have 10 Gbps physical NICs.

That's because the physical signaling limitations of real hardware do not apply in a virtualized environment. Operating systems don't artificially limit traffic to match the agreed-on line speed; that limit only exists where it is physically unavoidable.

You're able to get more than 1 Gbps with a purely emulated 1 GbE vNIC like the e1000, and more than 10 Gbps with vmxnet3 in Linux, for example, for which iperf is optimized (I got 25 Gbps between 2 Linux VMs with vmxnet3 on the same host/network).

(See here for an example: https://communities.vmware.com/message/2280223 )
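
For reference, this is roughly how such a test can be run (iperf 2 syntax is assumed, and the server IP is just a placeholder):

On the first VM (server side):
iperf -s

On the second VM (client side), with a few parallel streams to saturate the path:
iperf -c <server-ip> -P 4 -t 30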

I have no idea what your consultant means in relation to storage load. Why would the outside network connectivity of your VMs have a large impact on your backend storage? Do you mean you run VM and storage traffic over the same physical links and fear storage traffic could suffer from too much network traffic if you enabled the 10 Gbps virtual vmxnet3 NIC in your Guests?

In any case, that is a moot point, as explained above, because that possibility basically exists already.

If you want fine-grained control over traffic allocation, you need something like Network I/O Control (NIOC), which comes with the distributed vSwitch.

-- http://alpacapowered.wordpress.com
NullDevice23
Contributor

Hi,

I measured it with iperf now, in exactly the same way as the guy in the link you posted.

I get just a little more than 1 Gbit, but not significantly more.

Both clients are on the same vSwitch and the same IP subnet (so the traffic never passes an external link, which might have limited it to 1 Gbit):

C:\Users\admin\Desktop\iperf-2.0.5-2-win32>iperf.exe -w 1500k -l 512M -t 30 -c 10.21.1.4

------------------------------------------------------------

Client connecting to 10.21.1.4, TCP port 5001

TCP window size: 1.46 MByte

------------------------------------------------------------

[  3] local 10.21.1.27 port 56508 connected with 10.21.1.4 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-34.8 sec  5.00 GBytes  1.23 Gbits/sec

C:\Users\admin\Desktop\iperf-2.0.5-2-win32>ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:

   Connection-specific DNS Suffix  . :

   IPv4 Address. . . . . . . . . . . : 10.21.1.27

   Subnet Mask . . . . . . . . . . . : 255.255.128.0

   Default Gateway . . . . . . . . . : 10.21.127.254

...


But you are right. It's more than a Gbit. Possibly with the same vNIC under Linux it's even more; I didn't test that yet.

So, to come back to my original question:

Do you think it's basically a good or bad idea to aggregate several vNICs into, let's say, 2 or 3 Gbit interfaces?

Thx in advance

ND.

NullDevice23
Contributor

I just measured it with 2 Linux VMs; one of them has an E1000 vNIC, the other one a VMXNET3 vNIC.

Both on the same subnet and the same vSwitch.

See screenshot:

iperf_2_linux_vms.JPG

Here the bandwidth is even lower, which means I can't really confirm what was written in that other thread.

Maybe they changed that in vSphere 5/ESXi 5, which I'm running...
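
I will probably retest with VMXNET3 vNICs on both VMs and a few parallel streams, roughly like this (iperf 2 syntax, the server IP is just a placeholder):

iperf -s                          (on the receiving VM)
iperf -c <server-ip> -P 4 -t 30   (on the sending VM)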
