vPivot

Scott Drummonds on Virtualization

SPECvirt Released

SPEC has been diligently working on an industry-standard version of VMmark since something like 2006. The first version of their product is complete and was released during my recent holiday. I have been talking with colleagues and customers about SPECvirt for years and would like to talk about what SPECvirt is and what it is not.

VMmark is clearly the reigning king of consolidation benchmarks, and anything that enters its arena must stand against its standard. VMmark pioneered a new method of benchmarking that resonates with virtualization experts: it tests system performance by adding fixed-load virtual machines instead of scaling up a single application to saturation. Traditional benchmarks turn up the load against a single application instance, but VMmark piles on virtual machines until the system can do no more work.
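To make that methodology concrete, here is a minimal sketch of the scale-out loop in Python. It is my own illustration, not anything shipped with VMmark or SPECvirt; the run_tiles callable stands in for the real harness that deploys and drives a set of fixed-load workload VMs (a "tile").

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TileScore:
    throughput: float   # work completed by one tile during the run
    meets_qos: bool     # True if every workload in the tile stayed within its latency targets

def find_max_tiles(run_tiles: Callable[[int], List[TileScore]],
                   max_tiles: int = 64) -> Tuple[int, float]:
    """Keep adding identical fixed-load tiles until the host can no longer meet QoS."""
    best = (0, 0.0)
    for n in range(1, max_tiles + 1):
        scores = run_tiles(n)                 # stand-in: deploy n tiles and drive them at fixed load
        if not all(s.meets_qos for s in scores):
            break                             # the system is saturated; stop adding tiles
        best = (n, sum(s.throughput for s in scores))
    return best                               # (tile count, aggregate throughput) at the last passing run
```

The key difference from a traditional benchmark is the loop variable: the harness turns up the number of identical tiles rather than the load on a single application instance.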

VMmark is one of VMware’s many industry-leading initiatives and was started when VMware worked closely with server vendors that wanted to benchmark their servers’ ability to run virtual machines. VMmark was conceived many years ago, well before VMware had competition. It is because of this fact that I scratch my head at claims that VMmark is biased towards VMware. There was no commercial implementation of Xen when VMmark was specified and Microsoft was only dreaming of entering the market.

But even in an environment devoid of competition, customers want certainty that their benchmarks are not hiding flaws in a product. SPEC has for years been developing honest benchmarks that survive the crucible of debate among its large member community. SPECvirt, or more properly SPECvirt_sc2010, is the result of this vigorous debate. You can read up on SPECvirt in the FAQ released coincident with the product’s launch. But I will add a few comments and comparisons here.

  1. SPECvirt costs $3,000 to purchase; VMmark is free.  But VMmark requires commercial software and versions of SPEC benchmarks that are not free.  Depending on your licensing model, you may find either one cheaper, but the prices are essentially comparable.
  2. VMmark uses the most common applications in the data center (like Apache and Microsoft Exchange).  SPECvirt does not mandate application choice for the system under test.
    • This is a Good Thing, because you may now choose a configuration that models your environment by running the exact applications you run.
    • This is a Bad Thing, because five different testers may choose five different application sets in their tests, resulting in incomparable results.
  3. SPECvirt cannot be run against a cluster of hosts.  But VMmark cannot, either.  We will have to wait for an update to one of these benchmarks before we can properly test DRS clusters and their competitive equivalents.
  4. There is only one published SPECvirt result, courtesy of IBM running KVM.  There are a boatload of VMmark results, as one would expect of a more mature product.  It will be interesting to watch the rate of submissions of these two benchmarks over the coming year or two.
  5. SPECvirt runs three workloads and an idle virtual machine in its tile.  One of those workloads, tested by SPECweb, is implemented with three virtual machines.  The end product is a six-VM tile that looks very much like VMmark’s six-VM tile.
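To picture the tile shape described in item 5, here is a rough illustration. The VM and workload names below are only my placeholders; the counts come from the description above: one workload spread across three VMs, two single-VM workloads, and one idle VM, for six VMs per tile.

```python
# Illustrative only: VM and workload names are my placeholders, not SPECvirt's actual naming.
SPECVIRT_TILE = {
    "web (SPECweb-driven)": ["web-vm-1", "web-vm-2", "web-vm-3"],  # the one workload split across three VMs
    "workload-2":           ["workload2-vm"],
    "workload-3":           ["workload3-vm"],
    "idle":                 ["idle-vm"],
}

# Three workloads plus an idle VM add up to a six-VM tile, much like VMmark's.
assert sum(len(vms) for vms in SPECVIRT_TILE.values()) == 6
```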

For years we have seen online and in-person griping about VMware’s misunderstood benchmark restriction in its EULA.  Both VMmark and SPECvirt can be run on any supported hypervisor.  So now it’s time for all the hypervisor vendors to put up or shut up.  Run one of these benchmarks on your product and compare the results against existing published results.  Then the world will know where your product stands.

3 Responses

You’ve got to wonder why there aren’t any other submissions, especially given VMware’s close involvement in the SpecVirt committee.
I wonder if they’ve not done the benchmarks, or not done one that would win?

    • An answer requires speculation on my part, but I will hazard a guess. I think the member companies will have trouble finding a common application set to generate comparable numbers. You’ll notice that IBM used all Red Hat and chose two of their applications, WebSphere and DB2. An Oracle submission would likely choose Sun’s JVM and their own database on top of OEL. Microsoft would use Windows and all Microsoft applications. Because it is difficult to find a common ground to generate comparable numbers, the incentive for publishing results is diminished.

      Incidentally, VMware has never published a VMmark number. I am not sure that their role in SPECvirt would be any different than their role with VMmark–purely supportive.

  • I got a copy of SPECvirt 2010 and the setup experience is just plain awful. Their code is old and depends on stuff that dates back to 2004. Their applications run on a JVM, and the JVM may have optimizations of its own, so you don’t know whether the number reflects the VM or the JVM. The same logic applies to the other application dependencies. Why people root for SPECvirt is beyond me.