Scott Drummonds on Virtualization

Optimizing Network Performance on vSphere


You should probably not be reading this article.

What follows is my tale of performance optimizations so arcane and of such limited relevance that this guidance will be useful to perhaps one customer in 100,000.  The parameters I reference here should rarely, if ever, be touched by a VI admin.  I am documenting them here for that one mischievous soul who is allowed to do as much damage as good in the pursuit of performance excellence. The rest of you should not even consider these changes for your production environment.

Seriously.  Don’t do it.  Don’t do it.

About two weeks ago I got a call that a large customer was in need of immediate performance assistance.  Ten hours after that I was on a plane to Boston.  Nine hours later I was sitting in the customer’s lab looking at the problem.  Five hours later the problem was fixed.  Three hours more and I was reporting out to the customer’s management.  Two hours after that I was at Fenway with a beer in one hand and chowder in the other.  What follows is a very brief synopsis of the problem and its solutions.

The customer was using VMware on a very strange (non-HCL) server.  Their stringent requirements stated that one virtual machine, configured as a network bridge using RHEL 5.3’s bridge-utils, must drop zero 64-byte UDP packets at rates up to 80,000 frames per second.  The Ixia device the customer was using for network tests reported that the ESXi-based virtual machine was maxing out at 20,000 FPS while meeting the 0% packet loss requirement.

I tried about a dozen configuration changes in my half-day of experimentation and learned a few interesting things.

Occasionally the e1000 Will Outperform vmxnet3

This was a surprise to me at first but our network team later explained it to me clearly.  The e1000 device will never make a monitor-to-kernel call and instead relies entirely on VMkernel polling to pass information from the guest to the host.  vmxnet does make VMM->VMk calls after queuing up packets to limit the overhead of packet processing.  This buffering behavior benefits network throughput in about 99% of cases but the additional VMM->VMk calls were not helpful with this rare workload.
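The choice between the emulated and paravirtualized adapter is made per vNIC in the virtual machine’s configuration file.  As a hypothetical sketch (the adapter index `ethernet0` and the exact value strings depend on your VM’s virtual hardware version, so verify against your own .vmx file):

```
# .vmx fragment: select the virtual NIC model for the first adapter
ethernet0.virtualDev = "e1000"     # emulated Intel e1000; relies on VMkernel polling
# ethernet0.virtualDev = "vmxnet3" # paravirtualized; batches packets before VMM->VMk calls
```

The guest sees a different physical NIC after the change, so expect to power-cycle the VM and possibly reconfigure the guest’s network driver.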

The vmxnet Throughput/Latency Tradeoff Is Configurable

ESX and ESXi expose a large number of advanced network settings that can be used to configure vmxnet’s predilection for throughput or latency.  Increasing vmxnetThroughputWeight, for instance, tells vmxnet to buffer packets longer to reduce the number of VMM->VMk calls, thus improving efficiency.  CoalesceTxTimeout will similarly affect buffering by setting a maximum coalesce time for both vmxnet and e1000.

VMware has demonstrated a successful modification of the vmxnetThroughputWeight default on at least one workload.  See our SPECweb work, which peaked at 16 Gb/s of throughput, for instance.  My customer could not benefit from this parameter, as it only changes vmxnet behavior, but we did see gains by increasing CoalesceTxTimeout.
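For the curious, these knobs live in the host’s advanced configuration options.  Here is a hedged sketch using `esxcfg-advcfg` from the service console; the option paths and the values shown are assumptions for illustration only, so verify them against your own build (and heed the warning at the top of this post) before touching anything:

```shell
# Read the current TX coalescing timeout (path assumed to live under /Net)
esxcfg-advcfg -g /Net/CoalesceTxTimeout

# Lengthen the coalescing window; affects both vmxnet and e1000.
# The value 8000 is illustrative, not a recommendation.
esxcfg-advcfg -s 8000 /Net/CoalesceTxTimeout

# Bias vmxnet toward throughput (vmxnet only); again, value is illustrative
esxcfg-advcfg -s 128 /Net/vmxnetThroughputWeight
```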

Guest Drivers Require Optimization for Small Packets

The size of the ring queues used by device drivers for physical and virtual devices is configurable.  Those buffers default to 256 entries but can be increased to 4096 for both vmxnet and e1000 when using vSphere.  For very small packets arriving at very high rates, a larger ring queue is needed to avoid unnecessary packet drops.  When I used ethtool to increase the ring queue size, the packet drop rate decreased dramatically.
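Inside the RHEL guest, the rings are inspected and grown with ethtool.  A minimal sketch, assuming the bridged interface is `eth0` (your device names will differ):

```shell
# Show the current and maximum RX/TX ring sizes for the device
ethtool -g eth0

# Grow both rings toward the 4096-entry maximum the vSphere devices support
ethtool -G eth0 rx 4096 tx 4096
```

Note that the change does not survive a reboot; on RHEL 5 it is typically reapplied from the interface’s network scripts.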


If you have an environment that consistently sees large volumes of tiny UDP packets (less than 128 B), some of these changes may improve your environment. But this configuration is uncommon enough in the virtualized enterprise that you probably do not have such a workload. So do not start playing with vSphere’s defaults unless you know you are running an application that is a clear outlier.
