benschmi
Contributor

Iometer results: VM higher read transfer rates than physical host?

Hi,

I'm using Iometer to benchmark my VMs, specifically to compare a physical machine with a VM running on that same physical machine. The write transfer rates seem plausible: for a block size of 32 kB, about 40 MBps for the physical server and about 25 MBps for the VM. The read transfer rates are confusing, though; I'm actually getting a higher transfer rate for the VM than for the physical machine (see the attached graph; I ran two tests for each). Can somebody please explain?

Details of physical machine:

  • Ubuntu 8.04 64-bit Desktop
  • 4GB memory
  • SATA hard disk
  • VMware Server 1.0.7

Details of VM:

  • Ubuntu 8.04 32-bit Server
  • 256 MB memory

7 Replies
Dave_Mishchenko
Immortal

Did you also record the IO statistics that the ESX host reports for the VMs? If time gets skewed inside the VM, the results can be thrown off.

benschmi
Contributor

It's not an ESX host but a VMware Server 1.0.7 host, so I believe there are no statistics available on the host side. Measuring the IOPS with Iometer gives these results:

But I guess they are just as wrong as the transfer rates in MBps.

Is there a way to solve the timing issue?

drummonds
Hot Shot

The host OS caches IOs from VMware Server VMs, but Iometer runs in direct IO mode. So an Iometer run on the host tests the IO subsystem, while an Iometer run inside a hosted product's guest ends up testing the host's caching system.

ESX does not cache IOs, BTW.
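To make the distinction concrete, here is a minimal sketch in C (illustrative only, not how Iometer is implemented; the file name is just a placeholder) of a buffered read versus a direct read on Linux. Inside the guest, direct IO bypasses the guest's own page cache, but on a hosted product the virtual disk is just a file to the host OS, so those reads can still be served from the host's cache; that is how the VM can report higher read throughput than the physical machine.

```c
/* Minimal sketch: buffered vs. direct read on Linux.
 * Illustrative only -- not Iometer's actual code.
 * Usage: ./directread <file>
 */
#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK (32 * 1024)    /* 32 kB, the same block size as the test above */

static ssize_t read_once(const char *path, int extra_flags, void *buf)
{
    int fd = open(path, O_RDONLY | extra_flags);
    if (fd < 0) { perror("open"); return -1; }
    ssize_t n = read(fd, buf, BLOCK);
    close(fd);
    return n;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    /* O_DIRECT requires an aligned buffer and an aligned transfer size. */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK) != 0) { perror("posix_memalign"); return 1; }

    ssize_t cached = read_once(argv[1], 0, buf);        /* may be served from the page cache */
    ssize_t direct = read_once(argv[1], O_DIRECT, buf); /* bypasses the page cache, waits for the disk */

    printf("buffered read: %zd bytes, direct read: %zd bytes\n", cached, direct);
    free(buf);
    return 0;
}
```

Compiled with something like gcc -O2, the buffered read of a recently touched file typically completes from memory, while the direct read has to go to the disk; a guest's "direct" reads on VMware Server still take the buffered path as far as the host is concerned.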

Scott

More information on my blog and on Twitter: http://vpivot.com http://twitter.com/drummonds
benschmi
Contributor

Thanks, that makes sense.

Does ESXi cache IOs?

Ben

drummonds
Hot Shot

No, our hypervisor products don't do any IO caching. This has been a subject of discussion for some time. There are cases where dramatic performance gains can be derived from IO caching. But there are two big reasons not to build caching into the hypervisor, as some other products do:

  1. There are already caches in the guest OSes and at the storage, be it array or DAS. The opportunities for improvement from adding a third cache in the hypervisor are therefore limited.

  2. Hypervisor caching is dangerous: a power failure can result in lost data. This is why we don't cache and instead force all IOs through to the array, so that if data protection is important, a battery-backed cache at the array can guarantee it.

Scott

More information on my blog and on Twitter: http://vpivot.com http://twitter.com/drummonds
benschmi
Contributor

So the read results returned by Iometer are wrong. What about the write results? Are they affected by the caching? I'm doing a 0% read, 0% random workload with Iometer.

Ken_Cline
Champion

So the read results returned by Iometer are wrong. What about the write results? Are they affected by the caching? I'm doing a 0% read, 0% random workload with Iometer.

It really depends on the host OS. VMware Server is simply "another application" as far as the host OS is concerned...so if it caches writes for any application, it will cache them for VMware Server, too.
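The same effect can be shown for writes with a minimal sketch in C (illustrative only, not an Iometer replacement; the test file names are placeholders). A plain write() on Linux normally completes as soon as the data is in the page cache, so a sequential-write test that never flushes largely measures memory copies; only a flush such as fdatasync() waits for the disk. Since the host OS treats a VMware Server VM's virtual disk file like any other file, the guest's write numbers can benefit from the same cache.

```c
/* Minimal sketch: cached writes vs. writes flushed to disk on Linux.
 * Illustrative only -- shows why an unflushed write benchmark can report
 * rates well above what the disk actually sustains.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK  (32 * 1024)   /* 32 kB blocks, as in the Iometer workload above */
#define BLOCKS 2048          /* 64 MB total */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static double write_mbps(const char *path, int flush_to_disk)
{
    char *buf = malloc(BLOCK);
    memset(buf, 0xAB, BLOCK);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    double t0 = now_sec();
    for (int i = 0; i < BLOCKS; i++)
        if (write(fd, buf, BLOCK) != BLOCK) { perror("write"); exit(1); }

    if (flush_to_disk)
        fdatasync(fd);       /* wait until the data has actually reached the disk */
    double elapsed = now_sec() - t0;

    close(fd);
    free(buf);
    return (double)BLOCK * BLOCKS / elapsed / (1024.0 * 1024.0);
}

int main(void)
{
    /* Without the flush, the "throughput" mostly measures copies into the
     * host's page cache; with the flush, it is bounded by the disk. */
    printf("cached writes : %.1f MB/s\n", write_mbps("iotest_cached", 0));
    printf("flushed writes: %.1f MB/s\n", write_mbps("iotest_flushed", 1));
    return 0;
}
```

So a 0% read, 0% random (sequential write) run in the guest can also be inflated by the host's write-back caching, depending on how aggressively the host OS buffers writes.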

Ken Cline

Technical Director, Virtualization

Wells Landers, TVAR Solutions, A Wells Landers Group Company

VMware vExpert 2009 | VMware Communities User Moderator

Blogging at: http://KensVirtualReality.wordpress.com/