Back in early December the folks at VKernel posted the results of a survey of their customers’ virtualization environments. They named this work and its summary paper the Virtualization Management Index (VMI) [registration required]. I talked with VKernel CMO Bryan Semple when the first version was released, and he recently sent me an update. Some of VKernel’s observations are pretty interesting, and based on VMware’s data in this space I come to a similar conclusion: the market for datacenter optimization is very big and growing by the day.
There are two graphs from the most recent VMI that are quite interesting. The first, reproduced here, shows the degree to which VKernel’s customers are overcommitting memory and CPU:
This figure is a histogram counting customers that achieve certain commitment ratios. For memory, the bulk of customers, which I visually estimate at more than 80%, maintain a ratio under 1.0. This represents an under-commitment of physical memory. Nearly all customers are over-committing CPU, but with an average ratio I estimate at around 2.5. This means VKernel’s customers are running only 2-3 vCPUs per physical core. That is light for desktops and the average workload, but about right for heavier workloads like databases and mail servers.
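To make the metric concrete, here is a minimal sketch of how a commitment ratio is computed. The inventory numbers below are hypothetical examples, not figures from the VMI; real capacity tools would pull them from the hypervisor’s API.

```python
# Sketch: the commitment ratio that VKernel's histogram buckets.
# A ratio above 1.0 means over-commitment (more resources promised
# to VMs than physically exist); below 1.0 means under-commitment.

def commitment_ratio(allocated: float, physical: float) -> float:
    """Resources configured across all VMs divided by physical capacity."""
    return allocated / physical

# Hypothetical host: 128 GB RAM and 16 cores, running VMs that are
# collectively configured with 96 GB of memory and 40 vCPUs.
mem_ratio = commitment_ratio(allocated=96, physical=128)  # 0.75 -> under-committed
cpu_ratio = commitment_ratio(allocated=40, physical=16)   # 2.5  -> over-committed

print(f"memory: {mem_ratio:.2f}, cpu: {cpu_ratio:.2f}")
```

With these example numbers the host lands exactly where the VMI says most customers do: memory under 1.0 and CPU around 2.5.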
One year ago I saw a similar study produced quarterly by VMware’s Technical Account Manager (TAM) team. The so-called TAM Dashboard summarized a wide variety of metrics from customers including host core count, ESX version, application virtualization, VM sizes, and much, much more. While there is a public version of the TAM Dashboard, I think that my choice to pursue employment with another company deprives me of the right to publish details of that study. However, I will share a few high level observations.
While the TAM Dashboard did not report vCPUs per core directly, it reported a few other metrics from which I can estimate a number comparable to VKernel’s. And while VMware is seeing higher densities than VKernel, not by much. The TAM Dashboard also surprised me by showing VMware’s customers averaging around eight cores per host, which suggests that roughly half of the servers in those environments have fewer than eight. I consider such systems unfit for tier-1 virtualization.
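The back-of-the-envelope estimate works roughly like this: average VMs per host times average vCPUs per VM, divided by cores per host. The input values below are placeholders for illustration, not the real TAM Dashboard figures, which I am not publishing.

```python
# Sketch of the estimate: derive vCPUs per core from metrics a
# dashboard does report. All three inputs here are hypothetical.
vms_per_host = 10    # placeholder: average VM count per host
vcpus_per_vm = 2     # placeholder: average vCPUs configured per VM
cores_per_host = 8   # placeholder: average physical cores per host

vcpus_per_core = vms_per_host * vcpus_per_vm / cores_per_host
print(vcpus_per_core)  # 2.5 with these placeholder numbers
```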
VKernel’s VMI also produced a VM density graph that shows VM count by host count. That graph has been reproduced here.
VKernel concludes in the VMI text that its (and possibly VMware’s) larger customers are deriving less value from virtualization. It is probably safe to say that larger customers are packing fewer VMs into each server, but there are many characteristics of very large customers that can explain this:
- Larger customers also tend to keep older systems in service longer. I have seen this in versions of the TAM Dashboard going back years.
- Larger customers upgrade vSphere less frequently, and newer versions have improved features for consolidated environments.
- Larger customers are more conservative by nature, which explains both of the above as well as suggesting the presence of larger guardbands or “buffers” of unused space to protect their applications.
Also, VKernel’s data may suffer from self-selection bias. Their customers likely share characteristics that the industry as a whole does not, so the results will be skewed toward that sample rather than the whole pool of VMware’s customers.
In any case, I agree with VKernel’s observation that large environments are much more in need of intelligent capacity assessment and optimization than small ones. In fact, I will have a lot more to say on cross-cluster datacenter optimization in an upcoming blog.