Hi there
I've just set up a number of ESX 3.0.2 hosts, each with 2 onboard NICs and 4 NICs on an Intel Pro 1000VT quad-port adapter. I have 2 virtual switches: one for the Service Console and VMkernel, and a second for virtual machines. The VM vSwitch has 4 uplinks: 1 from an onboard NIC and 3 from the PCI NIC.
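For reference, a layout like this can be built from the service console with esxcfg-vswitch. This is only a sketch of my setup; the vSwitch, vmnic and port group names below are illustrative, not taken from my actual hosts.

```shell
# Sketch of the VM vSwitch described above (names are hypothetical).
esxcfg-vswitch -a vSwitch1                          # create the VM vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1                   # uplink 1: onboard NIC
esxcfg-vswitch -L vmnic2 vSwitch1                   # uplink 2: quad-port NIC
esxcfg-vswitch -L vmnic3 vSwitch1                   # uplink 3: quad-port NIC
esxcfg-vswitch -L vmnic4 vSwitch1                   # uplink 4: quad-port NIC
esxcfg-vswitch -A "VM Network 100" vSwitch1         # add a VM port group
esxcfg-vswitch -v 100 -p "VM Network 100" vSwitch1  # tag it with VLAN 100 (VST)
esxcfg-vswitch -l                                   # list and verify the layout
```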
I've noticed that some guest VMs on the host are not contactable; pings to them time out. Investigation of the physical and virtual network shows that only the onboard NIC seems to be learning the guest VM MAC addresses and providing connectivity. With only the onboard NIC as an uplink everything works fine, but once I introduce any of the PCI uplinks, I lose connectivity to some of the VMs. If I remove the onboard NIC, I lose connectivity to all VMs on the host.
The physical switch port configuration is exactly the same for all ports: they are set up as trunk ports, as I am using port groups on the vSwitch to do 802.1q VLAN tagging (VST).
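For context, the physical ports are configured roughly like this. I'm showing Cisco IOS syntax as an example only (the actual switch vendor isn't stated here), and the interface and VLAN IDs are illustrative:

```
interface GigabitEthernet0/1
 description ESX host uplink (802.1q trunk for VST)
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100,200
 spanning-tree portfast trunk
 no shutdown
```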
Has anyone seen this issue before? Any help appreciated.
Thanks
First check if the PCI NIC is faulty.
Also try plugging it into a different slot; sometimes that works wonders.
Hi
I've installed 8 of these adapters into 8 hosts, and the issue is identical on all 8. The NIC also autonegotiates its network settings correctly (1Gb full duplex, by the way) and shows physical connectivity, so the NICs look to be fine.
I've used this same configuration previously with no problems. The only difference this time is ESX 3.0.2 (previously 3.0.1), but I wouldn't have thought that would cause the issue.
Thanks
Thanks for that, update 1003515 did the trick.