VMware Cloud Community
Ron4x4
Contributor

Processor Usage in ESXi 4

Hi All,

I have a Dell PowerEdge 1950 with 2 Dual Core 3.0 procs, 32 gb RAM, and 700GB Local Storage.

My coworkers and I are having a discussion about how to build machines in relation to host and VM processors. We had a guy from LANDesk come out here, and he said we should not be adding any virtual machines with 2 or more virtual processors to this host, even though the server has 4 cores. (Self-proclaimed VMware expert.) My question is: is there a reason why, with 4 cores, we should only be giving every virtual machine 1 processor? Please elaborate if you have an answer.

Thanks,

Ron

2 Replies
J1mbo
Virtuoso

Hello and welcome!

The self-proclaimed expert is absolutely right, in my opinion; the usual guidance is that no single VM should be allocated more vCPUs than half the overall number of cores in the machine.

There are two factors at work here:

1. ESX needs some CPU resource for itself - storage processing, vSwitch, background tasks like TPS, etc. (many of these are also bound to core 0);

2. A goal of virtualisation is to increase the overall real processing achieved on the box. The best way to achieve this is for ESX to schedule time to VMs that have something to do, instead of to VMs running the idle thread. Unless a vSMP guest is busy on all of its allocated vCPUs, some of the CPU time it is given is wasted running the OS's idle thread on one or more of those vCPUs.

Another issue is concurrency across your guests. Taking my second point further: in your case, if you allocate a 4-vCPU guest, nothing else can run at all while that guest has CPU time. Hence scheduling delays are introduced (search on that for deeper explanations).
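The effect can be sketched with a toy simulation. This is a deliberate simplification (real ESX uses a more relaxed co-scheduling model than the strict one below, and the VM names and workloads are made up), but it shows why a wide guest starves its neighbours:

```python
def schedule(vms, cores, ticks):
    """Count the ticks each VM actually runs under strict co-scheduling:
    a VM may start only when enough idle cores exist for ALL its vCPUs."""
    run_ticks = {name: 0 for name, _ in vms}
    for t in range(ticks):
        free = cores
        # rotate the offer order so every VM gets a fair shot over time
        start = t % len(vms)
        for name, vcpus in vms[start:] + vms[:start]:
            if vcpus <= free:          # all vCPUs must be placed together
                free -= vcpus
                run_ticks[name] += 1
    return run_ticks

# A 4-core host running one 4-vCPU guest and two 1-vCPU guests.
print(schedule([("big", 4), ("a", 1), ("b", 1)], cores=4, ticks=300))
# -> {'big': 100, 'a': 200, 'b': 200}
# Whenever "big" runs, nothing else can run at all during that tick.
```

Note the 4-vCPU guest monopolises the host every time it is scheduled, and the 1-vCPU guests can only run in the ticks it sits out.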

HTH

Please award points to any useful answer.

RParker
Immortal

Is there a reason why with 4 cores we should only be making every virtual machine with 1 processor?

It is doubtful that adding more vCPUs adds performance. Watching activity on 1 or 2 CPUs doesn't constitute additional performance, and there have been a lot of new users recently proclaiming the same thing. They complain that adding more CPUs doesn't improve build/query times, yet a VM with only 1 CPU seems to perform the same. Although multi-CPU VMs SEEM to have better response time and less overall lag, that only affects the end-user experience, NOT the goal, which is to make things better overall.

There is also a physical limit to the TOTAL number of vCPUs that can be on a given host; it's doubtful you would realistically reach it, even if you gave ALL the VMs 4 vCPUs (or more) each. It's something like 768 vCPUs (not sure of the exact figure), but that would mean almost 200 VMs on a single host. I haven't seen any machine with THAT much resource available, and forget about the performance degradation you would hit WAY before that.
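The arithmetic behind that "almost 200" estimate, taking the poster's own (explicitly unverified) 768 figure at face value:

```python
# Using the post's unverified figure of ~768 vCPUs per host.
host_vcpu_limit = 768        # "not sure of the exact figure" per the post
vcpus_per_vm = 4
max_vms = host_vcpu_limit // vcpus_per_vm
print(max_vms)  # -> 192, i.e. "almost 200 VMs on a single host"
```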

It still follows logic: START small and grow over time. IF you can prove through testing that you NEED more CPU, you can add it easily, so wasting resource is seldom the answer to anything. The same can be said for users on physical machines: they see 4 GB of RAM and their machine is slow, so they ASSUME RAM is the problem and want more RAM. They don't spend the time to actually isolate the cause, let alone close unnecessary programs consuming RAM. Same thing with VMs: people ASSUME CPU is the most important aspect, but it's not.

In order of importance, DISK is the single most important element for VMs, THEN memory (efficient use of memory and caching reduces the need for CPU); CPU is the lowest priority. Databases, high-IO programs, everything uses disk first. So the reason VMs are slow is disk IO: waiting, seek times, and lack of disk optimization. That's where you should focus, not CPU.
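A rough sanity check of that ordering, using textbook order-of-magnitude latency figures (ballpark numbers for illustration, not measurements from any particular host):

```python
# Ballpark access latencies, in nanoseconds (order-of-magnitude only).
latency_ns = {
    "cpu cycle (3 GHz)":    0.33,
    "main memory access":   100.0,
    "disk seek (7200 rpm)": 10_000_000.0,
}

base = latency_ns["cpu cycle (3 GHz)"]
for name, ns in latency_ns.items():
    print(f"{name}: {ns:>14,.2f} ns  (~{ns / base:,.0f}x a CPU cycle)")
```

A single disk seek costs on the order of tens of millions of CPU cycles, which is why a disk-bound VM stays slow no matter how many vCPUs it is given.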
