VMware Cloud Community
rmuzer
Contributor

poor performance using private network between VMs

We have created a virtual switch with no uplink to a physical network adapter to use as a private network between VMs. When performing rsync copies over it, the speeds are much lower than expected, averaging around 85 MB per second. Any suggestions on improving these speeds would be appreciated. Thank you.
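
For reference, the copies are run roughly like this (the paths and target address here are placeholders; --info=progress2 needs rsync 3.1 or later):

    # copy a test directory across the private network and report overall throughput
    rsync -a --info=progress2 /data/testdir/ user@192.168.100.2:/data/testdir/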

4 Replies
D_G_Tal
Enthusiast

Please check the copy speed with a Linux VM.

a_p_
Leadership

I assume that you are using ESXi, so I moved your post to that community. If you are using another product, please let me know.

  • What type/model of virtual network adapters (e.g. vmxnet3) is in use?
  • Did you also run a test that only measures network speed, i.e. rules out possible storage performance issues? (A sketch follows below.)
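
For example, a quick network-only check with iperf3 (assuming it is installed in both guests; the address is a placeholder):

    # on the receiving VM
    iperf3 -s

    # on the sending VM: 30-second test with 4 parallel streams
    iperf3 -c 192.168.100.2 -t 30 -P 4

If iperf3 reports several Gbit/s while rsync stays at ~85 MB/s, the bottleneck is more likely storage or rsync/ssh overhead than the virtual network itself.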

André

grimsrue
Enthusiast

We need more information than you have provided.

Are the VMs on a single physical host or multiple physical hosts in a cluster?

What type of storage are the VMs sitting on? Is it external Tier 1 or 2 storage or internal SSD or NVMe disk?

What is the MTU of the NIC/s in the OS and/or physical switch interfaces?
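
A quick way to check both ends (interface and vSwitch names are placeholders; adjust to your setup):

    # inside a Linux guest
    ip link show ens192                   # the mtu value is on the first line

    # on the ESXi host
    esxcli network vswitch standard list  # lists each standard vSwitch with its MTU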

What vNIC driver are you using? vmxnet3 or e1000?
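
From inside a Linux guest you can confirm the driver (interface name is a placeholder):

    ethtool -i ens192    # prints 'driver: vmxnet3' or 'driver: e1000'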

What are your TCP/IP settings? LRO, TSO, LSO, ring buffer sizes, etc. You could be dropping receive/transmit packets in or out of the OS. Your ring buffers probably need to be increased to 4096 if you are trying to move a large amount of data.
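
On a Linux guest with vmxnet3, these can be inspected and adjusted with ethtool (interface name is a placeholder; settings changed this way do not persist across reboots):

    ethtool -g ens192                   # current and maximum ring sizes
    ethtool -G ens192 rx 4096 tx 4096   # raise RX/TX rings to 4096
    ethtool -k ens192                   # offload status (LRO, TSO, etc.)
    ethtool -K ens192 lro on tso on     # enable LRO and TSO if disabled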

rmuzer
Contributor

Yes, we are using ESXi, so thank you for moving this to the proper community. As for the issue, we are using vmxnet3 as the virtual adapter. When copying to/from this VM to other physical hosts on our network over a 10Gb adapter, the speeds are much higher, so I'm fairly sure disk performance is not the issue. Any other thoughts would be appreciated.
