I have a very weird issue and need help from VM gurus:
I have a Windows Server 2008 R2 SP1 media server running the latest patches. A PHYSICAL Windows 7 client on the same network can connect to it and receive the multicast stream WITHOUT ANY PACKET LOSS.
I also have a Windows 7 VM on the same network. It sees a lot of multicast packet loss when connecting to the same Windows Server 2008 R2 SP1 media server.
Then I came across this: http://blogs.vmware.com/performance/2011/08/multicast-performance-on-vsphere-50.html
It states the following:
In releases of vSphere prior to 5.0, the packet replication for multicast is done using a single context. When there is a high VM density per host, at high packet rates the replication context may become a bottleneck and cause packet loss. VMware added a new feature in ESXi 5.0 to split the cost of replication across various physical CPUs. This makes vSphere 5.0 a highly scalable and efficient platform for multicast receivers.

This feature is called splitRxMode, and it can be enabled with a VMXNET3 virtual NIC. Fanning out processing to multiple contexts causes a slight increase in CPU consumption and is generally not needed for most systems. Hence, the feature is disabled by default. VMware recommends enabling splitRxMode in situations where multiple VMs share a single physical NIC and receive a lot of multicast/broadcast packets.
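For reference, the blog post enables splitRxMode per virtual NIC through a .vmx parameter. A minimal sketch, assuming the VM's VMXNET3 adapter is ethernet0 (the adapter index is my assumption; check the .vmx for the actual one):

    ethernet0.emuRxMode = "1"

Setting it to "0" (or removing the line) disables it again. The line can be added by editing the .vmx while the VM is powered off, or through the vSphere Client under Edit Settings > Options > Advanced > General > Configuration Parameters.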
The issue is that I only have a single Windows 7 guest VM on this ESX host, so the high-VM-density bottleneck the post describes shouldn't apply. The hardware is a Dell PowerEdge 2950 III with 12GB of RAM and two dual-core processors. I assigned 6GB of RAM and 4 vCPUs to the Windows 7 VM, but I still have a LOT of packet loss.
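To narrow down whether the host itself is dropping the receive traffic (my suggestion, not something from the blog post), the drop counters in esxtop are one place to look:

    # On the ESX host console or over SSH:
    esxtop            # press 'n' to switch to the network view
    # The %DRPRX column shows the percentage of receive packets
    # dropped on each port; a non-zero value on the Windows 7 VM's
    # port means the drops happen on the host, not upstream.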
Anyone know why?