vPivot

Scott Drummonds on Virtualization

KVM Performance

A few days ago someone forwarded me a blog article with an interesting claim about KVM performance:

Testing results from internal and customers showed SAP workloads: 85-95, Oracle OLTP: 80-92% bare metal. LAMP stack showed better than bare metal performance. Whitepapers will be published in how this was achieved. Java achieved up to 94% bare metal.

Frankly, I was surprised to hear this.  KVM is a hosted virtualization platform, equivalent to the free VMware Server, which runs on top of a host operating system.  VMware Server is fine for a virtual machine or two, but you would not want it hosting your critical business applications.  The above KVM claim suggests that KVM possesses hypervisor-like performance.  So we ran a test with a few virtual machines to see what we could learn.  These tests confirmed my suspicions: KVM is a very long way from enterprise-class virtualization performance.

The thing to remember about virtualization benchmarking is that any vendor can provide virtualization software (hosted or hypervisor) that can virtualize a single application at better than 80% of native performance.  VMware has been doing this for a decade.  But it is extraordinarily difficult to build a hypervisor that can scale with many virtual machines.  Maybe this is one reason why you have never seen Microsoft or Citrix post results from a consolidated workload.  But I digress.

We decided that the easiest way to test this environment with a light-to-moderate enterprise workload was to use two or three VMs running SQL Server, driven by DVD Store 2 (DS2).  We tried four configurations of these VMs:

  • Case A: Two 4-way virtual machines.
  • Case B: Two 3-way VMs and one 2-way.
  • Case C: Three 3-way VMs.
  • Case D: Three 4-way VMs.

The virtual machines ran on an HP DL380 G5, and each was given 4 GB of RAM.

Finding the right number of threads per virtual machine took some time.  The thread count on the DS2 client determines the volume of transactions generated against SQL Server.  We wanted the highest throughput at a reasonable latency, which we capped at 33 ms.  Here are the best numbers I could produce for vSphere and KVM (a sketch of this tuning sweep follows the results).

Case   vSphere OPM   KVM OPM   vSphere Avg. Response (ms)   KVM Avg. Response (ms)
A      58095         removed   33                           removed
B      59741         removed   33                           removed
C      52899         removed   33                           removed
D      50996         removed   34                           removed

The very best performance that KVM could muster was only removed% of vSphere’s performance on the same configuration.  Notice that at 50% CPU over-commitment (1.5 vCPUs for each CPU), KVM’s performance removed.  Its throughput fell to removed% of vSphere’s and its response time removed.  Increasing threads in this configuration actually made throughput and latency worse.
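
For anyone who wants to repeat this tuning, here is a minimal sketch of the sweep in Python. The run_ds2() wrapper is hypothetical, a stand-in for however you drive the DS2 client and parse its orders-per-minute and response-time output; the VM name and the 64-thread ceiling are placeholders, while the 33 ms cap is the target described above.

    # Hypothetical sketch of the DS2 thread-count sweep described above.
    LATENCY_CAP_MS = 33  # response-time ceiling used in these tests

    def run_ds2(vm_host, threads):
        """Placeholder: drive the DS2 client against `vm_host` with `threads`
        driver threads and parse its output.  Returns (opm, avg_response_ms)."""
        raise NotImplementedError("wire this up to your DS2 client invocation")

    def tune_threads(vm_host, max_threads=64):
        """Sweep the client thread count and keep the highest throughput whose
        average response time stays under the latency cap."""
        best = None  # (threads, opm, avg_response_ms)
        for threads in range(1, max_threads + 1):
            opm, latency_ms = run_ds2(vm_host, threads)
            if latency_ms > LATENCY_CAP_MS:
                break  # past the knee: more threads only add queuing delay
            if best is None or opm > best[1]:
                best = (threads, opm, latency_ms)
        return best

Once run_ds2() is wired up, tune_threads("sqlvm-01") returns the best (threads, OPM, latency) triple for one VM; summing the OPM across the VMs in a case gives totals like those in the table above.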

I had suspected that KVM would show hosted-platform performance, as it relies on a host operating system.  It appears my suspicions were correct.  It will be tough for Red Hat to sell this as an enterprise-class product.  To do so, they will likely publish results based on single virtual machines in environments where the CPUs are under-committed.

Lastly, this is the only workload that we have attempted.  I would expect KVM to do much worse when more virtual machines are part of the test or if network or storage throughput becomes significant.  But we have no plans to spend time on KVM benchmarking.  As I mentioned in my performance debate at Catalyst 2009, I think that each vendor should do its own benchmarking to best represent its products.  I challenge Red Hat to post a KVM number using TPC, SPECweb, VMmark, vConsolidate, or any enterprise-class workload.  Customers should expect nothing less of their virtualization vendor.

10/2/09 Update

I decided to remove the KVM results to allow Red Hat or a KVM enthusiast to show their own best results on a consolidated workload.  I recommend VMmark or vConsolidate.

9 Responses

Are you going to publish any real details? I just see “KVM”; that’s like saying “Linux”.

What distribution, what version, what guests, what tuning? Did you install PV drivers in the KVM guests? Did you install VMware Tools in the VMware guests?
What management tool did you use to configure and set this up? You mention Red Hat in the article, but did you actually use their management tool?

There’s so little detail in here that it’s obviously a very biased piece.

  • Could you post details of the setups?

  • Aren’t you demonstrating exactly the behaviour you condemn in vendor benchmarks?
    VMware prohibits people from publishing competitive benchmarks, with the justification that they need to be conducted in a fair manner and audited, which makes sense.

    Here you are publishing exactly the kind of FUD pseudo-benchmark that you complain about when others do it.

  • David’s comment (number three) is fair. I am going to remove the results and let Red Hat (or any KVM evangelist) speak for their own product with a known benchmark.

    Would anyone care to guess when that might happen?

  • An advocate for any technology can cite use cases that tout how their solution is better. Since Scott works for VMware, it is not surprising that the results are slanted in VMware’s favor. What is disappointing is the lack of details on the test setup, versions, etc. for others to reproduce the results.

    I am an advocate of the technology in general, and a user of both ESX and KVM. There are a ton of valid and relevant use cases for both. As for performance, there are a lot of examples where KVM is far superior to ESX. One example is Linux guests based on the 2.4 kernel (e.g., RHEL 3) where the guest has 2 GB or 4 GB of RAM. The performance degradation of ESX on a DL380 G5 is staggering (2.4 kernels are very active with page scanning). With the out-of-sync shadow page table implementation that was added to KVM in August/September 2008, KVM runs these older guests just fine; in fact, for my specific test the performance loss can be kept under 10%. I have others, but this one is easy for anyone to reproduce. Just install CentOS 3 in a guest with 2 or 4 CPUs and 2-4 GB of RAM and then run whatever benchmark suits your fancy 🙂 (see the guest-definition sketch at the end of this comment).

    Of course, one can argue that 2.4 kernels are old (set aside for now the argument that virtualization allows the continued deployment of older guests), but then so is a DL380 G5. As I recall, those came out sometime in 2006. ESX and its binary translation are certainly optimized for such older setups and, given that, will outperform a hardware-based design like KVM in a number of use cases.

    How about a comparison on newer hardware, say a DL380 G6 with two quad-core E5540 processors and 24 GB of RAM (3 x 4 GB = 12 GB per processor)? Now you have a more relevant use case involving modern hardware. ESX 4.0 will leverage the EPT in the processors. Oh, and be sure to publish all the relevant details on the test setup. Test results are meaningless if others cannot reproduce them.
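
    A minimal sketch of defining such a guest through the libvirt Python bindings might look like the following; the domain name, disk image path, and sizes are placeholders, and the disk is exposed as plain IDE because CentOS 3’s 2.4 kernel predates virtio.

        import libvirt

        # Minimal sketch: define and boot a 2-vCPU, 4 GB CentOS 3 guest under
        # KVM via libvirt.  Guest name and disk image path are placeholders.
        DOMAIN_XML = """
        <domain type='kvm'>
          <name>centos3-test</name>
          <memory>4194304</memory>              <!-- KiB, i.e. 4 GB -->
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64'>hvm</type>
            <boot dev='hd'/>
          </os>
          <devices>
            <disk type='file' device='disk'>
              <source file='/var/lib/libvirt/images/centos3.img'/>
              <target dev='hda' bus='ide'/>     <!-- 2.4 kernels predate virtio -->
            </disk>
            <interface type='network'>
              <source network='default'/>
            </interface>
          </devices>
        </domain>
        """

        conn = libvirt.open('qemu:///system')   # local KVM hypervisor
        dom = conn.defineXML(DOMAIN_XML)        # register the guest persistently
        dom.create()                            # and boot it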

  • Following a study posted on principledtechnologies, Red Hat’s KVM works pretty well for me with 1.5x CPU overcommit and 1.5x memory overcommit (KSM). (A sketch of those two knobs follows this exchange.)

    • I am glad to see that they published a consolidated workload. SPECjbb is a likely choice, as it does not do any I/O at all; ESX runs it at near-native speeds. But, believe me, showing even this workload is better than nothing!

      You say that KVM runs “pretty well” for you. Have you quantified this? I would love to hear what you have done.
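
    For reference, here is a minimal sketch of those two knobs on a KVM host: KSM is toggled through the standard /sys/kernel/mm/ksm sysfs files, and the CPU overcommit ratio is simply total vCPUs divided by physical cores. The guest vCPU counts and core count below are placeholder values.

        # Minimal sketch of the two knobs mentioned above, run on the KVM host.
        KSM = "/sys/kernel/mm/ksm"
        PAGE = 4096  # bytes per page on x86

        def enable_ksm():
            """Turn on kernel samepage merging (requires root)."""
            with open(KSM + "/run", "w") as f:
                f.write("1")

        def ksm_saved_mb():
            """pages_sharing counts guest pages backed by a shared copy,
            a rough proxy for the memory KSM is saving."""
            with open(KSM + "/pages_sharing") as f:
                return int(f.read()) * PAGE // (1024 * 1024)

        def cpu_overcommit(vcpus_per_guest, host_cores):
            """E.g. three 4-vCPU guests on 8 cores -> 1.5x."""
            return sum(vcpus_per_guest) / float(host_cores)

        # enable_ksm()  # uncomment on a host where you can write to sysfs (root)
        print("CPU overcommit: %.2fx" % cpu_overcommit([4, 4, 4], 8))
        print("KSM currently saving roughly %d MB" % ksm_saved_mb())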

  • KVM is no doubt the best virtualization solution. Period.

    Enough nonsense with these fake VMware propaganda artists.

    We run KVM on a 32-processor NUMA SMP machine, and VMware is nowhere in the picture in terms of performance, flexibility & cost.

    VMware fools, get a life and open your eyes. Your loser VCP certification is no good when it comes to kernel programmers & enthusiasts.

  • Yep… KVM all the way, baby.
    Most people do not know that you do not need RHEV to run KVM in HA mode… You can use SSH, virsh, or even the crude virt-manager GUI (see the migration sketch at the end of this comment).

    So KVM brings FREEness to highly available virtualization.

    The best recipe out there:

    CentOS 5.5
    RHCS (Clustered LVM) or even GFS2
    KVM bits

    or you can even go the route of Proxmox or Enomaly.

    But virt-manager is uber sweet already, however “bare” it is. I have used it with up to 64 guests and 4 nodes with no issues at all. And best of all, failover between Nehalem and AMD systems is supported.

    Sweet.
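
    As a rough illustration of the “no RHEV needed” point, live migration between two compatible KVM hosts comes down to a couple of libvirt calls, much like what virsh migrate --live does. This is only a sketch: the node URIs and guest name are placeholders, and it assumes shared storage between the nodes (the clustered LVM or GFS2 from the recipe above).

        import libvirt

        # Minimal sketch of KVM live migration with plain libvirt, no RHEV.
        # Host URIs and the guest name are placeholders; shared storage
        # between the two nodes is assumed.
        SRC_URI = "qemu+ssh://node1/system"
        DST_URI = "qemu+ssh://node2/system"
        GUEST = "guest01"

        src = libvirt.open(SRC_URI)
        dst = libvirt.open(DST_URI)

        dom = src.lookupByName(GUEST)
        # Roughly what `virsh migrate --live guest01 qemu+ssh://node2/system` does.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

        print("%s is now running on %s" % (GUEST, DST_URI))

    For cold failover rather than live migration, you would instead define the guest on both nodes and simply create() it on the surviving node.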