Part of the performance best practices talk I co-presented at VMworld in San Francisco and Copenhagen focused on answering the question, “How many virtual machines can be placed on a single VMFS volume?” There are a lot of theories as to a best answer. It will not surprise you to learn that no single consolidation ratio works in every environment. Your workloads will influence the maximum consolidation. But we know enough about how ESX virtualizes storage to provide guidance as to the right storage consolidation ratios.
Scott Drummonds on Virtualization
I continue to receive many questions from our customers on the expected performance gains of the new version of Hyper-Threading in Intel's Core i7 processors. The answer requires a little discussion of Hyper-Threading, a little discussion of ESX, and some performance data. If you are still interested, read on.
A VMware customer and attendee of a talk I gave at a performance roundtable asked me for a preview of unreleased features*. When I talked about the amazing improvements to VMotion that would enable as many as eight concurrent VMotions, the customer said, and I am paraphrasing here, "Yawn. I can already do that." Really? I had no idea customers could do this. As it turns out, many of us at VMware did not know that customers knew how to do this.
I heard a myth today that VMware did not support running vmxnet3 and PVSCSI in the same virtual machine. I have talked with a dozen engineers on the subject since it came up this morning, and all swear the drivers run great together. The two drivers work on very different and unrelated stacks in the VMkernel. There are no interdependencies of any sort between PVSCSI and vmxnet3.
I think this rumor sprang from our somewhat limited support of paravirtualized drivers in FT-protected virtual machines, which will be improved in a subsequent release. And while most of you probably know that PVSCSI and vmxnet3 run together, I thought it worth a brief comment on this blog. Myths are like cockroaches. For every one you see, there are hundreds hiding behind the walls.
Today at VMware Partner Exchange I had a lunchtime discussion with a partner of ours that makes a Windows file system (NTFS) defragmentation tool. He related anecdotes of incredible performance acceleration credited to defragmentation and quoted a few numbers based on his test environment. When he asked me what VMware’s recommendations were on the subject I remained uncharacteristically silent. Do we have best practices on this?
Every couple of months I receive a request for an explanation as to why performance counters in a virtual machine cannot be trusted. While it would be unfairly cynical to say that in-guest counters are never right, accurate capacity management and troubleshooting should rely on the counters provided by vSphere in either vCenter or esxtop. The explanation is too short to merit a white paper, but I hope a blog article will serve as the authoritative comment on the subject.
Scott Sauer recently asked me a tough question on Twitter. My roaming best practices talk includes the phrase “do not use PVSCSI for low-IO workloads”. When Scott saw a VMware KB echoing my recommendation, he asked the obvious question: “Why?” It took me a couple of days to get a sufficient answer.
Recently I have been thinking, talking, and writing about ESX host memory swapping a lot. ESX swaps memory under the same condition that traditional operating systems do: the applications are using more memory than is available on the physical hardware. Host swapping is an unavoidable consequence of this condition, whether virtualization is present or not.
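That condition can be reduced to a toy calculation. This is only an illustrative sketch of the overcommit arithmetic; the function name, the flat overhead figure, and all the numbers are my own assumptions, not ESX internals or measurements:

```python
# Illustrative sketch: a host must reclaim memory (e.g., by swapping)
# once total VM memory demand exceeds what the hardware can supply.
# All names and figures here are assumptions for illustration only.

def host_swap_pressure(vm_demands_mb, physical_mb, overhead_mb):
    """Return how many MB the host must reclaim when the combined
    VM demand exceeds physical memory minus hypervisor overhead."""
    demand = sum(vm_demands_mb)              # what the VMs want
    available = physical_mb - overhead_mb    # what the hardware can give
    return max(0, demand - available)

# Three VMs demanding 8 GB each on a 16 GB host, with a hypothetical
# 2 GB of hypervisor overhead, leave 10 GB that must come from swap.
print(host_swap_pressure([8192, 8192, 8192], 16384, 2048))  # 10240

# The same VMs on a 32 GB host fit comfortably: no swapping needed.
print(host_swap_pressure([8192, 8192, 8192], 32768, 2048))  # 0
```

The point of the sketch is simply that the trigger is arithmetic, not virtualization: any operating system, virtualized or not, swaps when demand outruns physical memory.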
Three times in the past week I have engaged in challenging discussions on host memory swapping and its impact on performance. If you read my article on host swapping and the white paper it summarized, you know the deleterious effect on performance caused by host swapping. When reading the paper, one of our most astute customers saw a sentence that gave him pause:
[This is the last re-post of old community content. But this content is important enough to be worth a re-post.]
I spend a great deal of time answering customers' questions about the scheduler. Never have so many questions been asked about such an abstruse component over which users have so little influence. But CPU scheduling is central to system performance, so VMware strives to provide as much information on the subject as possible. In this blog entry, I want to point out a few nuggets of information on the CPU scheduler. These four bullets answer 95% of the questions I get asked.