vPivot

Scott Drummonds on Virtualization

Misunderstanding Memory Management


Twice in 2009 someone showed me competitive literature from Microsoft or Citrix claiming that VMware recommends against memory over-commitment.  Given the wide variety of literature we have published in support of this feature, our customers recognize the absurdity of our competitors’ claims.  VMware and its customers love memory over-commitment.  So where does this misinformed guidance come from?

I believe we know which text is being misrepresented.  It comes from our performance best practices document and could be misunderstood by someone unfamiliar with the terms “working set” and “active memory.”  Here is the passage, quoted from the oldest available version of our best practices document:

Swapping is used to forcibly reclaim memory from a virtual machine when both page sharing and ballooning fail to reclaim sufficient memory from an overcommitted system. If the working set (active memory) of the virtual machine resides in physical memory, using the swapping mechanism and having inactive pages swapped out does not affect performance. However, if the working set is so large that active pages are continuously being swapped in and out (that is, the swap I/O rate is high), then performance may degrade significantly.

This excerpt describes the condition under which a host will swap, best summarized as “the sum of the working sets of all virtual machines exceeds the amount of physical memory on the host.”  That summary presupposes that the reader knows what a working set is, and I suspect some readers do not.  For this discussion I will simplify the definition of a working set to “recently active memory” and refer readers to their favorite search engine for a more complete description.

When a system’s working set exceeds available memory, the system will swap.  This is not unique to virtualized, consolidated workloads: for as long as operating systems have implemented virtual memory, there has been the possibility that a working set could exceed available physical memory.  The only thing that changes in a virtual environment is that the working set is calculated by summing the working sets of multiple virtual machines rather than taking that of a single application or operating system instance.
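To make that condition concrete, here is a minimal Python sketch.  Every number in it is invented for illustration; nothing is measured from a real ESX host.

```python
# Hypothetical host and virtual machines (illustrative numbers only).
# A host swaps when the combined working sets (recently active memory)
# of its VMs exceed physical memory -- not when their allocations do.

HOST_MEMORY_GB = 64

# (allocated_gb, working_set_gb) for each virtual machine
vms = [
    (16, 4),   # sized at 16 GB, but only 4 GB recently active
    (32, 10),
    (24, 6),
    (16, 5),
]

allocated = sum(a for a, _ in vms)   # 88 GB: the host is over-committed
active = sum(w for _, w in vms)      # 25 GB: the combined working set

print(f"Allocated {allocated} GB, active {active} GB, host has {HOST_MEMORY_GB} GB")
if active > HOST_MEMORY_GB:
    print("Combined working sets exceed physical memory: expect host swapping.")
else:
    print("Working sets fit in physical memory: over-commitment is harmless.")
```

In this made-up example the host has 88 GB allocated against 64 GB of physical memory, yet only 25 GB is active, so no swapping occurs.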

But the key here, and the reason memory over-commitment remains so powerful, is that allocated memory (the virtual machine’s size) exceeds active memory (the working set) nearly all of the time.  Memory management in consolidated environments is about pushing the active fraction of a host’s memory as close to 100% as possible.  That was never possible in physical environments, and it cannot be done in virtual environments without over-commitment.
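The sketch below, again with hypothetical sizes, shows why: if VM placement stops once allocations reach physical memory, most of the host sits idle, while over-commitment lets the working sets fill it.

```python
# Hypothetical packing comparison (all sizes invented for illustration).
# Each VM is allocated 16 GB but keeps only a 4 GB working set active.

HOST_MEMORY_GB = 64
VM_ALLOCATED_GB = 16
VM_ACTIVE_GB = 4

# Without over-commitment: stop placing VMs when allocations fill the host.
vms_no_oc = HOST_MEMORY_GB // VM_ALLOCATED_GB               # 4 VMs
active_no_oc = vms_no_oc * VM_ACTIVE_GB / HOST_MEMORY_GB    # 25% of memory active

# With over-commitment: place VMs until the working sets fill the host.
vms_oc = HOST_MEMORY_GB // VM_ACTIVE_GB                     # 16 VMs
active_oc = vms_oc * VM_ACTIVE_GB / HOST_MEMORY_GB          # 100% of memory active
overcommit = vms_oc * VM_ALLOCATED_GB / HOST_MEMORY_GB      # 4.0x allocated

print(f"No over-commit: {vms_no_oc} VMs, {active_no_oc:.0%} of host memory active")
print(f"Over-commit:    {vms_oc} VMs, {active_oc:.0%} active at {overcommit:.1f}x allocation")
```

Under these assumptions the same host runs four times as many virtual machines, and its memory is fully active only in the over-committed case.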

To sum this up: not only does VMware recommend some over-commitment, we know it is impossible to fully use your available memory without the flexibility that over-commitment provides.

2 Responses

Scott,

Please don’t harm M$ & Crapper-V anymore.  I saw your comments on the TechNet Blog; keep the facts coming.

  • […] ESX memory management and the performance specter of host swapping. My last article attempts to correct the misconception that VMware recommends against over-committing memory.  In that article I suggested that memory over-commitment is a requirement in optimizing memory […]
