vPivot

Scott Drummonds on Virtualization

Solid State Disks and Host Swapping


Recently I have been thinking, talking, and writing about ESX host memory swapping a lot.  ESX swaps memory under the same conditions that traditional operating systems do: the applications are using more memory than is physically available.  Host swapping is an unavoidable consequence of this condition, whether virtualization is present or not.

But a recent article by my engineering colleague Chethan Kumar shows an avenue that allows VI admins to aggressively over-commit memory and avoid the catastrophic performance penalty of swapping: use solid state disks to host ESX swap files.

The fundamental problem with host swapping comes from the high latency of traditional disks compared to memory.  Data can be retrieved from memory in nanoseconds but takes milliseconds to fetch from a hard drive.  That means a single 4K memory page takes 100,000 times longer to retrieve if the operating system swapped it out.
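
As a rough back-of-the-envelope illustration of that ratio (the latency figures below are assumed, representative values, not measurements from Chethan’s study):

```python
# Back-of-the-envelope comparison of a 4K page fetch from memory vs. spinning disk.
# The latency figures are assumed, representative values, not measured data.
memory_latency_s = 100e-9   # ~100 nanoseconds for a memory access
disk_latency_s   = 10e-3    # ~10 milliseconds for a random read on a spinning disk

slowdown = disk_latency_s / memory_latency_s
print(f"Fetching a swapped-out page is ~{slowdown:,.0f}x slower")  # ~100,000x
```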

What solid state disks offer this problem is exceptionally low latency compared to traditional drives.  The SSD that Chethan used showed microsecond latencies, about 1,000 times lower than physical disks.  This means the time spent waiting for swap activity* drops to roughly 0.1% of the time spent swapping to physical disks.
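
Extending the same rough sketch with an assumed SSD latency (again, illustrative numbers rather than figures from the article):

```python
# Extend the comparison with an assumed SSD latency (illustrative, not measured).
disk_latency_s = 10e-3   # ~10 ms per random read on a spinning disk
ssd_latency_s  = 10e-6   # ~10 microseconds per read on an SSD, ~1,000x lower

fraction = ssd_latency_s / disk_latency_s
print(f"Swap wait on SSD is ~{fraction:.1%} of the wait on spinning disk")  # ~0.1%
```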

The importance of fast swap files is that they let administrators over-commit memory more aggressively.  Today our admins rightfully fear the point at which the VMs’ aggregate active memory exceeds the available physical memory, because that is when swapping begins.  With SSD technology in shared storage, such as EMC’s new CLARiiONs, admins can cleverly place swap files and drive memory utilization up to previously unheard-of levels.  This may enable standard memory over-commitment of 200% or more, with extreme over-commitment going much higher.
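
To make the over-commitment figure concrete, here is a small illustrative calculation; the host and VM sizes are invented for the example:

```python
# Illustrative memory over-commitment calculation; host and VM sizes are made up.
host_physical_gb = 64
vm_configured_gb = [8, 8, 16, 16, 32, 48]   # memory configured across all VMs

total_vm_gb = sum(vm_configured_gb)
overcommit = total_vm_gb / host_physical_gb
print(f"Configured VM memory: {total_vm_gb} GB on a {host_physical_gb} GB host")
print(f"Over-commitment: {overcommit:.0%}")  # 200% = twice the physical memory
```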

In future versions of ESX we want to automate the use of SSDs for swap so that available memory is used to its fullest.  But that’s a roadmap discussion that I will leave for another day.

(*) This swap wait time has conveniently been added to ESX 4’s version of esxtop under the counter %SWPWT.  See Interpreting esxtop Statistics for more information.
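
As an aside, if you capture esxtop data in batch mode, something like the sketch below could pull out the swap-wait columns.  The "Swap Wait" column label and the file name are assumptions on my part, so verify them against a real capture before relying on this:

```python
# Sketch: scan an esxtop batch-mode CSV for swap-wait columns.
# The "Swap Wait" column label is an assumption; verify against a real capture.
import csv

with open("esxtop-batch.csv", newline="") as f:   # e.g. from: esxtop -b -n 60 > esxtop-batch.csv
    reader = csv.reader(f)
    header = next(reader)
    swap_cols = [i for i, name in enumerate(header) if "Swap Wait" in name]
    for row in reader:
        for i in swap_cols:
            if row[i] and float(row[i]) > 5.0:    # arbitrary threshold for "noticeable" swap wait
                print(f"{header[i]}: {row[i]}%")
```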

6 Responses

[…] of the tools and techniques that I will be describing when in this future paper.  First, place host swap files on solid state disk (SSD) stores to improve their performance.  With the right SSD device it may be possible to eliminate swap […]

    • Hello,
      Very good post! I have one question: what will happen during DRS or HA? Are the delays acceptable?

      Thank you,
      Dan

      • Storage performance is usually not the key factor in VMotion, which means DRS should be unaffected by the presence of solid state disks. But the higher throughput provided by SSDs should decrease the downtime during an HA fail-over. Hmm, I’d like to see one of VMware’s partners prove me right. Takers?

  • […] wrote about using Solid State Disks in your SAN to be used as swap space for your ESX host (http://vpivot.com/2009/12/24/solid-state-disks-and-host-swapping/), which would make less of an issue but still there is a performance […]

  • […] than memory, when your host swaps your applications’ performance suffers catastrophically.  Solid state drives (SSD) can mitigate the performance cost by reducing swap latency by a couple orders of magnitude.  But SSDs still have delays tens of […]