VMware Cloud Community
byte007
Contributor

Network Speeds

Hi all,

I have a new install of ESXi 5.1 on an HP ML350p server with SAS drives, and I was wondering what network speeds you guys are getting. I'm using the VMXNET3 network adapter on my CentOS box, and copying a 6 GB file from a Windows 7 client I get around 80-96 Mbps. This is on a gigabit network connection.

I'm curious what you all get in terms of speeds, as I would like to make sure I am squeezing every drop out. I'm also wondering whether to configure network passthrough for one of the network cards.

Thanks in advance for your views!

Linjo
Leadership

Be careful when using copy operations to measure network speed; it could be the storage that is the bottleneck.

Use a tool like iperf instead to do the tests.
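
As a rough sketch (the hostname below is just a placeholder for your own VM):

# on the receiving machine, e.g. the CentOS VM
iperf -s

# on the sending machine, e.g. the Windows 7 client
iperf -c centos-vm -t 10 -i 1

# add -r to repeat the test in the reverse direction afterwards
iperf -c centos-vm -r

That takes the disks out of the picture and measures only the network path.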

// Linjo

byte007
Contributor

Thanks for your reply. Using iperf shows my VM receiving...

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec    376 MBytes    315 Mbits/sec

That is with the standard iperf -s and iperf -c hostname commands. Using the -r option gives me this result...

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.08 GBytes    925 Mbits/sec

:)

rickardnobel
Champion

byte007 wrote:

[ ID] Interval       Transfer     Bandwidth

[  4]  0.0-10.0 sec  1.08 GBytes    925 Mbits/sec

:)

If one of the machines is a physical host outside of the ESXi host, then this is about as good as you will get with 1 Gbit/s Ethernet. For fun you might try the same tool between two VMs located on the same vSwitch and see what result you get when no physical network devices are involved.

My VMware blog: www.rickardnobel.se
Josh26
Virtuoso

Storage is almost certainly the bottleneck.

Did you buy a battery-backed cache on that server?
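
A rough way to test that on the CentOS guest (assuming a few spare GB in /tmp) is to time a large direct-I/O write with dd and compare the reported rate with the copy speed you see over the network:

# write 4 GB while bypassing the guest page cache; the MB/s figure is a rough ceiling for disk-bound copies
dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 oflag=direct
rm /tmp/ddtest

If that number is well below wire speed, the copy is storage-bound rather than network-bound.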

byte007
Contributor

Rickard Nobel wrote:

If one of the machines is a physical host outside of the ESXi host, then this is about as good as you will get with 1 Gbit/s Ethernet. For fun you might try the same tool between two VMs located on the same vSwitch and see what result you get when no physical network devices are involved.

Result between two VMs on the same vSwitch using the VMXNET3 network card:

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  23.2 GBytes  19.9 Gbits/sec

This is with the standard iperf -s and iperf -c hostname commands.

rickardnobel
Champion

byte007 wrote:

Result between two VMs on the same vSwitch using the VMXNET3 network card:

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  23.2 GBytes  19.9 Gbits/sec

Great numbers!

My VMware blog: www.rickardnobel.se
byte007
Contributor

Josh26 wrote:

Storage is almost certainly the bottleneck.

Did you buy a battery-backed cache on that server?

Yep, I got the 1GB FBWC with the P420i adapter. The drives are HP 2TB 6G SAS 7200rpm Midline. I have another server without the FBWC and, to be honest, I notice no speed increase, even though another forum member mentioned I would get 10x better performance with the FBWC. Maybe I haven't got the configuration quite right.
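
One thing worth checking is how the P420i cache is actually configured. A rough sketch using HP's array CLI (the binary name and install path vary by bundle version; newer HP ESXi bundles ship hpssacli, older ones hpacucli):

# dump controller, cache and logical drive details
/opt/hp/hpssacli/bin/hpssacli ctrl all show config detail

# in the output, check "Cache Board Present", "Total Cache Size", the "Cache Ratio"
# (read/write split) and whether the array accelerator is enabled on the logical drive

If the write portion of the cache is disabled or the ratio is heavily skewed toward reads, the FBWC will not help sequential writes much.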
