MikeBem
Contributor

Network connectivity between the VMs and the LAN drops

Hello everyone,

I am experiencing a rather interesting issue with an ESXi 5 install I'm running in my home lab. The machine is actually built on an old Gigabyte desktop board with a quad-core Intel CPU and a Dell PERC 5/i (don't ask :) ) hooked up to some SAS drives and a handful of high-capacity SATA drives.

All is great and works like a charm until... well, exactly: there's no rule that could be considered a pattern. One thing is for sure: every now and then, the VMs (there are a number of them, and they are all affected) lose connectivity to the outside LAN (no ping, no DHCP, nothing) and vice versa. At the same time, however, the ESXi host can ping and otherwise communicate with the physical hosts on the LAN and with the VMs inside without any issues, so it looks like whatever is responsible for the NIC bridging gives up (possibly due to the large volumes of data I am moving around between the machines maxing out the poor Marvell Yukon 8085 NIC).
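In case it's useful, this is roughly what I check from the ESXi shell when it happens (vmnic0 and the address below are just placeholders for my setup):

    # List physical NICs with their link state and driver
    esxcli network nic list

    # Driver and link details for the suspect uplink
    esxcli network nic get -n vmnic0

    # Ping a physical LAN host through the VMkernel interface
    vmkping 192.168.1.1

The host-side checks all come back clean even while the VMs are cut off.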

Has anyone ever seen this? Is it a known issue? I did my best to find a similar report, but everything I came across seems quite different, hence me opening this discussion.

Thanks in advance!

Regards,

Mike

MikeBem
Contributor

Hmm, has no one seen this before?

I updated ESXi to the latest patch and updated all the drivers for the hardware present, just to be on the safe side. No difference, though...

Ah, one detail: the NIC chipset is actually an 8053...
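For reference, this is how I confirmed which driver the NIC is using (vmnic0 is a placeholder for the actual uplink name):

    # Show the driver name and link details for the uplink
    esxcli network nic get -n vmnic0

    # List the installed network driver VIBs
    esxcli software vib list | grep -i net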

0 Kudos
AlexWhit
Contributor

I have had a similar issue with just one machine running Windows 2008 R2. Try changing the NIC type to VMXNET3. I could be telling you complete rubbish, though.
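If it helps, one way to change the adapter type is to edit the VM's .vmx file while the VM is powered off; roughly like this (ethernet0 and the datastore path are just examples, and the guest needs VMware Tools installed for the vmxnet3 driver):

    # /vmfs/volumes/datastore1/MyVM/MyVM.vmx (edit with the VM powered off)
    ethernet0.virtualDev = "vmxnet3"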

http://communities.vmware.com/message/1889068#1889068

golddiggie
Champion

Are you just running on the onboard NICs? How about adding a few ports via Intel gigabit NICs, getting the VMs off the onboard one, and seeing how that works?

Whenever I set up a host, I put the VM traffic on its own network ports (usually at least a pair), with the management network and any other network traffic on other pNICs. IME, it's better to separate the traffic. Even though you could (technically) have it all on the same vSwitch (using the same pNICs), I've found it best not to; see the sketch below.
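Something along these lines from the ESXi shell (the vSwitch name, port group name, and vmnic numbers are just examples for illustration):

    # Create a dedicated vSwitch for VM traffic
    esxcli network vswitch standard add -v vSwitch1

    # Attach a pair of physical uplinks to it
    esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1
    esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch1

    # Add a port group for the VMs to connect to
    esxcli network vswitch standard portgroup add -p "VM Network 2" -v vSwitch1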

This is standard practice in production environments, but I do it on my lab host as well.

I also default to the e1000 adapter type on my VMs. Most of the time, when I see VM network issues, it's because the VM is on a different adapter type, and changing it to e1000 resolves many of them. If you actually NEED one of the other types, then you don't have much choice there; I would just try to keep that to a minimum.
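A quick way to see which type a VM is currently using from the ESXi shell (the datastore path is just an example; if the line is absent, the VM uses the default adapter for its guest OS type):

    # Shows lines like: ethernet0.virtualDev = "e1000"
    grep -i virtualDev /vmfs/volumes/datastore1/MyVM/MyVM.vmx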
