vPivot

Scott Drummonds on Virtualization

Four Things You Should Know About ESX 4's Scheduler


[This is the last re-post of old community content.  But this content is important enough to be worth a re-post.]

I spend a great deal of time answering customers’ questions about the scheduler. Never have so many questions been asked about such an abstruse component for which so little user influence is possible. But CPU scheduling is central to system performance, so VMware strives to provide as much information on the subject as possible. In this blog entry, I want to point out a few nuggets of information on the CPU scheduler. These four bullets answer 95% of the questions I get asked.

Item 1: ESX 4’s Scheduler Better Uses Caches Across Sockets

On UMA systems at low load levels, virtual machine performance improves when each virtual CPU (vCPU) is placed on its own socket. This is because providing each vCPU its own socket also gives it the entire cache on that CPU. On page 18 of a recent paper on the scheduler written by Seongbeom Kim, a graph highlights the case where vCPU spreading improves performance.

[Figure: SPECjbb throughput gains of the ESX 4.0 scheduler across different VM and vCPU configurations, from Seongbeom Kim's paper]

The X-axis represents different combinations of VM and vCPU counts. SPECjbb is memory intensive and shows great gains with increases in CPU cache. The few cases that show dramatic benefit due to the ESX 4.0 scheduler are benefiting from the distribution of vCPUs across sockets. Very large gains are possible in this somewhat uncommon case.
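To make the cache argument concrete, here is a toy calculation (my own sketch, not VMware data or code) of the last-level cache available to each vCPU of a two-vCPU VM when it is packed onto one socket versus spread across two. The 8 MB cache size is an assumed figure chosen purely for illustration.

```python
# Toy illustration (not VMware code): why spreading vCPUs across sockets
# can help a cache-hungry workload like SPECjbb at low load.
# The cache size below is an assumption, not a measured value.

SOCKETS = 2
LLC_PER_SOCKET_MB = 8  # assumed last-level cache per socket

def cache_per_vcpu(vcpus: int, sockets_used: int) -> float:
    """Last-level cache each vCPU can use, assuming the VM's vCPUs
    split the LLC of every socket they are placed on."""
    total_cache = sockets_used * LLC_PER_SOCKET_MB
    return total_cache / vcpus

# A 2-vCPU VM packed onto one socket shares a single 8 MB LLC ...
print("packed: %.1f MB per vCPU" % cache_per_vcpu(2, sockets_used=1))   # 4.0
# ... while spreading it across both sockets gives each vCPU a full LLC.
print("spread: %.1f MB per vCPU" % cache_per_vcpu(2, sockets_used=2))   # 8.0
```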

Item 2: Overuse of SMP Only Slows Consolidated Environments At Saturation

For years customers have asked me how many vCPUs they should give to their VMs. The best guidance, “as few as possible”, seems too vague to satisfy. It remains the only correct answer, unfortunately. But a recent experiment performed by Bruce Herndon’s team sheds some light on this VM sizing question.

In this experiment we ran VMmark against VMs that were configured outside of VMmark specifications. In one case some of the virtual machines were given too few vCPUs and in another they were given too many. Because VMmark’s workload is fixed, increasing the VMs’ sizes does not increase the work performed by the VMs. In other words, the system’s score does not depend on the VMs’ vCPU count. Until CPU saturation, that is.

[Figure: VMmark scores for undersized, right-sized, and over-sized VMs as the tile count increases]

Notice that the scores are similar between the undersized, right-sized, and over-sized VMs. Up until tile 10 (60 VMs) they are nearly identical. There is a slight difference in processor utilization that begins to impact throughput (score) as the system runs out of CPU. At that point the additional vCPUs waste cycles, which degrades system performance. Two points I will call out from this work:

  • Sloppy VI admins who provide too many vCPUs need not worry about performance when their servers are under low load. But performance will suffer when CPU utilization spikes.
  • The penalty of over-sizing VMs gets worse as VMs get larger. Using a 2-way VM is not that bad, but unneeded use of 4-way VMs when one or two processors suffice can cost up to 15% of your system throughput. I presume that unnecessarily assigning eight vCPUs would be criminal. (See the sketch after this list.)
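To see why the penalty only appears at saturation, here is a back-of-the-envelope sketch (my own toy model, not VMmark data) in which each configured-but-idle vCPU skims a small, assumed amount of capacity off the host. The host capacity and per-idle-vCPU overhead numbers are illustrative assumptions.

```python
# A toy model (not VMmark results) of why over-sized VMs only hurt once the
# host runs out of CPU. The per-idle-vCPU overhead is an assumed figure.

HOST_CAPACITY = 16.0          # total physical CPU capacity, in "cores" of work
IDLE_VCPU_OVERHEAD = 0.05     # assumed cost of scheduling one idle vCPU

def achieved_throughput(demand: float, idle_vcpus: int) -> float:
    """Work completed: demand is satisfied until the host saturates;
    idle vCPUs skim a little capacity off the top."""
    usable = HOST_CAPACITY - idle_vcpus * IDLE_VCPU_OVERHEAD
    return min(demand, usable)

for demand in (8.0, 15.0, 16.0):          # light load ... saturation
    right_sized = achieved_throughput(demand, idle_vcpus=0)
    over_sized = achieved_throughput(demand, idle_vcpus=30)  # 30 unneeded vCPUs
    print(f"demand={demand:5.1f}  right-sized={right_sized:5.2f}  "
          f"over-sized={over_sized:5.2f}")
# Below saturation the two columns match; at saturation the over-sized
# configuration loses the capacity spent servicing vCPUs that do no work.
```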

Item 3: ESX Has Not Strictly Co-scheduled Since ESX 2.5

I have documented ESX’s relaxation of co-scheduling previously (Co-scheduling SMP VMs in VMware ESX Server). But this statement cannot be repeated too frequently: ESX has not strictly co-scheduled virtual machines since version 2.5. This means that ESX can place vCPUs from SMP VMs individually. It is not necessary to wait for physical cores to be available for every vCPU before starting the VM. However, as Item 2 pointed out, this does not give you free license to over-size your VMs. Be frugal with your SMP VMs and assign vCPUs only when you need them.
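The difference is easiest to see in a simplified sketch (my own illustration, not the actual ESX scheduler code): strict co-scheduling runs an SMP VM's vCPUs all-or-nothing, while relaxed co-scheduling lets whatever vCPUs can run make progress now.

```python
# Simplified sketch (not the actual ESX algorithm) contrasting strict and
# relaxed co-scheduling. Strict co-scheduling waits for a free physical CPU
# for every vCPU; relaxed co-scheduling can place vCPUs individually.

def strict_coschedule(vm_vcpus: int, free_pcpus: int) -> int:
    """All-or-nothing: run every vCPU of the VM or none of them."""
    return vm_vcpus if free_pcpus >= vm_vcpus else 0

def relaxed_coschedule(vm_vcpus: int, free_pcpus: int) -> int:
    """Place as many vCPUs as there are free physical CPUs right now."""
    return min(vm_vcpus, free_pcpus)

# A 4-vCPU VM on a host with only two physical CPUs free at this instant:
print(strict_coschedule(4, free_pcpus=2))   # 0 -> the whole VM waits
print(relaxed_coschedule(4, free_pcpus=2))  # 2 -> two vCPUs make progress now
```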

Item 4: The Cell Construct Has Been Eliminated in ESX 4.0

In the performance best practices deck that I give at conferences I talk about the benefits of creating small virtual machines over large ones. In versions of ESX up to ESX 3.5, the scheduler used a construct called a cell that would contain and lock CPU cores. The vCPUs from a single VM could never span a cell. With ESX 3.x’s cell size of four, this meant that VMs never spanned multiple four-core sockets. Consider this figure:

http://communities.vmware.com/servlet/JiveServlet/downloadImage/38-4886-6688/Picture+1.png

What this figure shows is that a 4-way VM on ESX 3.5 can only be placed in two locations on this hypothetical two-socket configuration. There are 12 combinations for a 2-way VM and eight for a uniprocessor VM. The scheduler has more opportunities to optimize VM placement when you provide it with smaller VMs.
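The placement counts in the figure fall out of simple combinatorics. Here is a short sketch (my own, assuming an eight-core host built from two four-core sockets and a cell size of four) that reproduces them.

```python
# Reproducing the placement counts: two four-core sockets, with ESX 3.x's
# cell size of four locking a VM's vCPUs inside a single cell (socket).
from math import comb

CELLS = 2           # two sockets, one scheduler cell each
CORES_PER_CELL = 4

def placements(vcpus: int) -> int:
    """Ways to place a VM's vCPUs on cores without spanning a cell."""
    return CELLS * comb(CORES_PER_CELL, vcpus)

print(placements(4))  # 2  -> a 4-way VM fits in only two positions
print(placements(2))  # 12 -> a 2-way VM has twelve
print(placements(1))  # 8  -> a uniprocessor VM can land on any core
```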

In ESX 4 we have eliminated the cell lock so VMs can span multiple sockets, as Item 1 states. Continue to think of this placement problem as a challenge to the scheduler that you can alleviate. By choosing multiple, smaller VMs you free the scheduler to pursue opportunities to optimize performance in consolidated environments.
