A couple of weeks ago I joined a discussion between engineers and customer-facing technologists within EMC and VMware. There was some confusion around a claim by EMC with respect to Transparent Page Sharing (TPS). There exists an EMC paper that hints at disabling TPS. The astute Michael Webster thought this contradicted best practices I provided when leading VMware’s performance technical marketing team. Michael was correct, so I decided to jump in and see what I could learn.
Scott Drummonds on Virtualization
I just returned from a one week vacation to a warm sunny beach on a small island not too far from Singapore. Even on my vacations my conversations often migrate to technology and my travel mate is an old friend and current employee at VMware, Dave Korsunsky. Sitting by a pool with a cocktail in hand at a fantastic hotel I asked my friend, “what is the right number of hosts per DRS/HA cluster?” Great conversation for a vacation, right?
VMware’s Jeff Buell has been looking into High Performance Computing (HPC) in support of a new addition to the Office of the CTO. Jeff just posted an article on VROOM! showing outstanding memory bandwidth in vSphere virtual machines. No one should be surprised by this–virtual machine memory bandwidth has rarely been a problem. But Jeff did discuss an advanced configuration parameter that should pique everyone’s curiosity: NUMA.preferHT.
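For those curious how an advanced host setting like this is toggled, here is a rough sketch using the esxcli advanced-settings namespace. The exact option path and CLI syntax vary by vSphere release, so treat this as illustrative and check the documentation for your build before applying it:

```shell
# Illustrative only: inspect and set the NUMA.preferHT advanced host option.
# Option path and namespace may differ across vSphere versions.
esxcli system settings advanced list --option=/Numa/PreferHT

# A value of 1 tells the NUMA scheduler to prefer hyper-threads on the
# local node over free cores on a remote node.
esxcli system settings advanced set --option=/Numa/PreferHT --int-value=1
```

As always with advanced settings, test the change against your own workload: preferring local hyper-threads trades core throughput for memory locality, which only pays off for memory-bandwidth-sensitive applications.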
Consolidation amplifies the uncertainty of application performance. Still, VI administrators need a means of guaranteeing performance SLAs to their applications’ users. But the best VMware has been able to offer is resource controls, which are at best an indirect mechanism for sustaining application performance. With the acquisition of B-hive, now AppSpeed, VMware moved a step closer to allowing VI administrators to guarantee a performance SLA. As an application-aware latency measurement tool, AppSpeed may eventually provide feedback to vCenter to guarantee throughput levels. But it does not today. So how are VI administrators to guarantee application performance?
Last week I took my first vacation in a year and a half. I had not missed a single day of work in 18 months. So last week, when I was gallivanting through Spain and running terrified, screaming, and covered in sangria through the streets of Pamplona, VMware made its biggest announcement in over a year: the launch of vSphere 4.1. My old team put out what looks to be a wonderful “What’s New in Performance” paper, so I want to take a few minutes to add my thoughts to some of the great work VMware has done.
I find it interesting that one day after I wrote about memory over-commitment in vSphere, Greg Shields wrote about the lack of memory over-commitment in Hyper-V. In today’s short blog entry, I want to provide one paragraph that Greg’s article currently lacks:
While memory over-subscription is a critical feature for production environments, balancing the demands of heterogeneous applications with varying demands in a resource-starved environment is difficult. Without guidance from administrators on the relative importance of the virtual machines running these applications, a hypervisor will be forced to make arbitrary decisions in assigning limited resources. Effective use of over-commitment requires a sound resource control system. The only product on the market that does this well is VMware vSphere.
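To make the point concrete, here is a minimal sketch of proportional-share allocation, the general idea behind shares-based resource controls. This is not VMware’s actual algorithm, just an illustration of how administrator-assigned shares resolve contention deterministically instead of arbitrarily:

```python
# Sketch of proportional-share allocation (illustrative, not VMware's code).
# Each VM is described by (shares, demand_mb). When total demand exceeds
# host capacity, capacity is divided in proportion to shares.
def allocate(capacity_mb, vms):
    total_demand = sum(demand for _, demand in vms.values())
    if total_demand <= capacity_mb:
        # No contention: every VM gets exactly what it demands.
        return {name: demand for name, (_, demand) in vms.items()}
    # Under contention, grants follow the share ratios set by the admin.
    total_shares = sum(shares for shares, _ in vms.values())
    return {name: capacity_mb * shares / total_shares
            for name, (shares, _) in vms.items()}

# Two VMs each demanding 6 GB on an 8 GB host; "web" holds twice the shares.
grants = allocate(8192, {"web": (2000, 6144), "batch": (1000, 6144)})
```

With equal shares both VMs would be squeezed equally; here the 2:1 share ratio ensures the important workload keeps roughly two thirds of the host. A real scheduler also redistributes grants a VM cannot use, which this sketch omits.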
Greg’s article and mine talked only of memory over-commitment, but the same rules apply to CPU over-commitment, too. Microsoft will realize how important resource controls are somewhere between years two and five of their product’s life. I can only imagine where vSphere will be by then.
Many of VMware’s customers use memory reservations during troubleshooting only in a final attempt to fix performance problems. It is true that memory reservations can limit ballooning and host swapping. But if you are only using reservations to anticipate and avoid memory bottlenecks, you are missing one of the great uses of the feature: memory reservations can drive over-commitment.
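A hypothetical back-of-the-envelope example shows what “reservations drive over-commitment” means in practice: if each VM’s reservation covers only its critical working set, the configured memory across all VMs can safely exceed physical memory by a wide margin, because admission control only needs the reservations to fit. The numbers below are made up for illustration:

```python
# Hypothetical host: 64 GB of physical memory.
host_mb = 64 * 1024

# Twenty VMs, each configured with 8 GB but reserving only the 2 GB
# working set that must never be ballooned or swapped.
vms = [{"configured_mb": 8192, "reservation_mb": 2048} for _ in range(20)]

total_configured = sum(vm["configured_mb"] for vm in vms)  # 160 GB configured
total_reserved = sum(vm["reservation_mb"] for vm in vms)   # 40 GB guaranteed

# Admission control only requires reservations to fit in physical memory.
fits = total_reserved <= host_mb
overcommit_ratio = total_configured / host_mb              # 2.5x over-commit
```

The host guarantees every VM its reserved working set while letting idle memory above the reservations be shared, ballooned, or reclaimed, which is precisely how reservations enable aggressive consolidation rather than merely preventing it.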
Steve Herrod’s keynote at Partner Exchange 2010 included a tantalizing slide on an upcoming memory maximization technology: memory compression. A few of you have already seen the overview of this technology that Kit Colbert and Fei Guo previewed at VMworld 2009. Today I want to tell you how this upcoming feature will help you pack even more virtual machines onto your existing servers.
My recent series of blog articles has discussed ESX memory management and the performance specter of host swapping. My last article attempted to correct the misconception that VMware recommends against memory over-commitment. In that article I suggested that memory over-commitment is a requirement for optimizing memory utilization. Today I want to provide a specific example to show why this is true. I have also included tips for identifying host swapping in your environments.
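As a quick pointer for anyone hunting for host swapping today, the usual place to look is esxtop’s memory screen. The counter names below are from the esxtop versions I have used; confirm them against the documentation for your release:

```shell
# In interactive esxtop, press 'm' to switch to the memory screen.
# Per-VM counters worth watching for host swapping:
#   SWCUR  - memory currently swapped to disk (MB)
#   SWR/s  - swap-in rate (MB/s); sustained nonzero values mean the guest
#            is taking page faults serviced from disk, a severe latency hit
#   SWW/s  - swap-out rate (MB/s); the host is actively reclaiming memory

# Batch mode lets you capture samples for offline analysis
# (5-second intervals, 60 iterations):
esxtop -b -d 5 -n 60 > esxtop_capture.csv
```

A nonzero SWCUR alone may just be leftover from a past memory crunch; it is sustained SWR/s that signals an active performance problem.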
Twice in 2009 someone showed me competitive literature from Microsoft or Citrix claiming that VMware recommends against memory over-commitment. Given the wide variety of literature we have provided in support of this feature, all of our customers recognize the absurdity of our competitors’ claims. VMware and its customers love memory over-commitment. So where is the source of this misinformed guidance?