Wow this is great info. Thanks.
The memory transfer jumped out at me. I don't think anyone has really investigated this, but it does make sense: we're using a software emulation and translation layer to move the memory, so it would be significantly slower than on a physical machine. But this is a very well done graph. I am keeping this for my performance benchmarks.
What kind of CPUs? My guess is Intel quad-core based on the memory. It could be interesting to see how the performance looks on AMD's quad-core, which should move the memory-virtualization overhead into the CPU itself.
I must say that these benchmarks are not compelling enough evidence to change my position on the benefits of virtualizing. As for the disk readings, I don't know of many companies that are going to rely on a couple of locally attached spindles sharing a controller with the service console. Most companies using ESX 3.5/3i are going to invest in either an iSCSI or Fibre Channel SAN, where disk I/O will actually perform better in most cases than locally attached storage, especially as companies P2V their out-of-date/out-of-warranty hardware into the virtual infrastructure.
Yep, Intel quad-core Xeons, 45nm process, 2GHz, 2 physical CPUs.
Where are you seeing that they are bad numbers? He states up front that he could only use half as many CPUs in the virtual machine as in the physical one, so we should double those numbers, and everywhere else (memory and disks) the numbers are dead on with the expected 8-10% penalty of adding a virtual layer.
The problem with moving your hypothesis to a SAN of some type is that if you move the physical machine to the SAN as well, you get the same results. Apples to apples, you will see that 8-10% (sometimes less) hit.
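To make the doubling argument above concrete, here's a rough sketch of the arithmetic (the function name and the scores are mine for illustration, not from the original benchmark): scale the virtual result up as if it had run on the same number of cores as the physical box, then compare.

```python
# Rough sketch (hypothetical numbers, not the author's data): normalize a
# throughput score when the VM ran on half the cores of the physical box.
def normalized_overhead(physical_score, virtual_score,
                        physical_cores, virtual_cores):
    """Return the fractional penalty after scaling for core count."""
    # Scale the virtual result up as if it had the same core count.
    scaled_virtual = virtual_score * (physical_cores / virtual_cores)
    return 1 - scaled_virtual / physical_score

# Hypothetical scores: 1000 on 8 physical cores, 460 on 4 virtual cores.
# Doubling the virtual score gives 920, i.e. roughly the expected 8% hit,
# even though a naive reading of the raw numbers says "54% slower".
print(round(normalized_overhead(1000, 460, 8, 4), 2))  # 0.08
```

This is exactly why the raw graph can mislead a first-glance reader: without normalizing for core count, the virtual numbers look about 50% worse than they really are.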
I will say that I wish it had been a more apples-to-apples test and used the same number of cores for both, since some nimrod will take the graph and say virtualization is 50% slower!
Apparently the point was lost in my statement, but our conclusions were the same... I just ended up taking the long route!
But yes, I would agree that some will take a first glance (which is all most people do) and try to draw the conclusion that virtualization is 50% slower. Perhaps you could rerun the physical tests using only one quad-core CPU (giving you the 4 cores), or by disabling cores in the BIOS so each socket presents as 1 logical CPU (giving you 2 CPUs; this can be done in some BIOSes). Then, for the VM, configure it to match the number of cores used in the physical tests, and if possible present an RDM disk to the VM and install the guest OS on that RDM disk to help alleviate block-alignment errors.
One thing to keep in mind: there is a performance hit to virtualization. A VM running under ESX 3.5 will not perform as well as the same OS running directly on the physical hardware that ESX is running on. That is why, if you have a business-critical application that needs as much performance as possible, you will want to keep it physical. The reasons we virtualize are better utilization of hardware, improved availability, ease of moving/copying machines, higher reliability, and, most importantly, saving money. So no, his results do not surprise me, and I would expect similar results with XenSource and Microsoft's hypervisor.
Nope, no time for running more tests. That server went out the door long ago. I realized my mistake while I was doing the tests, but I was out of time to go back and do some clean installs.
That's why I call it rough and unscientific. It's just a general idea to give you a ballpark. No one else had anything like this when I was considering virtualization, so I had to do it myself. Still solid enough to share with others.
I'd love to have the time to analyze it properly, but I have a very busy job. Free performance tests: you get what you pay for. 😄
I don't know... I'd take the 8% hit (sometimes less) just to have the ability to move it to another piece of hardware without having to rebuild. For the failover and ease of migration alone, it's worth the hit. If you need that 8-10% badly, it should be on a bigger box to begin with, because you're already running higher utilization than you should (personally, if it's over 85% constant, it's time to upgrade).
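The 85% rule of thumb above is easy to turn into a quick check. A minimal sketch (the function name, samples, and the exact averaging policy are my assumptions, not anything from the thread):

```python
# Toy sketch of the rule of thumb in the comment above: flag a host whose
# average CPU utilization stays over the threshold (85% is the commenter's
# personal cutoff, not an official figure).
def needs_upgrade(utilization_samples, threshold=0.85):
    """True if average utilization (0.0-1.0) exceeds the threshold."""
    return sum(utilization_samples) / len(utilization_samples) > threshold

print(needs_upgrade([0.90, 0.88, 0.86]))  # True
print(needs_upgrade([0.60, 0.70, 0.65]))  # False
```

In practice you'd feed this from whatever monitoring you already have; the point is just that a constant-high average, not a momentary spike, is what signals the box is too small.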
That might be something to try: a new "unofficial storage performance" thread, but with system specs included...