vPivot

Scott Drummonds on Virtualization

VMDirectPath


Chad Sakac was recently touring APJ and delivering his high-energy VMware+EMC message with demos, demos, and more demos. His encyclopedia-sized deck of technical gems included a short discussion of VMDirectPath that merits comment. I will offer my thoughts here.

Chad wrote parenthetically about VMDirectPath in a recent blog post:

Normally, as people who know me can attest to, I’m not a general fan of VMdirectpath. Most people think of it as a “performance thing” when the reality is that with VMXNET3 and pvSCSI adapters in vSphere 4 you can get to within low percentage points of the same performance as you can with “hypervisor bypass” (met 3 customers in the last 2 weeks who were adamant they needed VMDirectPath for “performance” – but didn’t know what their perf goals were, and were using VI3 as the baseline – argh), and I know that this is called out in Cisco’s training on UCS. For some reason people think for “high performance IO” you can’t do it without bypass, but that is not explicitly true.

I have been touring the world telling people that VMDirectPath is not a performance feature ever since it was released.  Unfortunately, my pithy marketing statements are only a partial truth.  In March of this year VMware released an update to its ongoing SPECweb work that leveraged VMDirectPath.  It is unfortunate that this article encourages customers to implement VMDirectPath.  I want to convince you to ignore it.

Before it was released, VMware internally called VMDirectPath “pass through”.  The idea was to give the guest operating system and its applications direct access to the underlying hardware.  The monumental negative impact of this is that the guest operating system becomes married to the physical hardware.  This means the virtual machine cannot be moved (vMotion), load balanced (DRS), or protected through fault tolerance (FT), and it loses many other awesome vSphere features.  Enabling VMDirectPath for a virtual machine effectively moves your virtualization technology back to 2006.
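
If you want to see where that lock-in lives in your own environment, here is a minimal sketch of how you might find it.  This is my own illustration, not something from VMware's papers: it assumes the pyVmomi Python bindings and a reachable vCenter, and the host name and credentials are placeholders.  It simply lists the virtual machines that already have a passthrough device attached, since those are the ones that have given up vMotion, DRS, and FT:

```python
# Minimal sketch, assuming pyVmomi and placeholder vCenter credentials:
# list the VMs that have a VMDirectPath (PCI passthrough) device attached.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def vms_with_passthrough(si):
    """Return names of VMs carrying a VirtualPCIPassthrough device."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    flagged = []
    for vm in view.view:
        devices = vm.config.hardware.device if vm.config else []
        if any(isinstance(d, vim.vm.device.VirtualPCIPassthrough) for d in devices):
            flagged.append(vm.name)
    view.DestroyView()
    return flagged

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()          # lab use only
    si = SmartConnect(host="vcenter.example.com",   # placeholder host and credentials
                      user="administrator", pwd="secret", sslContext=ctx)
    try:
        for name in vms_with_passthrough(si):
            print(name + ": VMDirectPath device attached -> no vMotion, DRS, or FT")
    finally:
        Disconnect(si)
```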

The only reason anyone considers VMDirectPath for production deployments is the possibility of increased performance.  But the only workload for which VMware has ever claimed substantial gains from this feature is the SPECweb work I quoted above.  That workload sustained 30 Gb/s of network traffic.  I doubt any of VMware’s customers are using even a fraction of this network throughput on a single server in their production environments.

Even in this extremely intense network environment, the VMDirectPath contribution to application throughput was not quantified.  I discussed its impact with the engineer who performed the test, and he estimated the performance improvement at less than 10%.

Device passthrough is not needed in vSphere because of the incredible efficiency gains VMware has made with vmxnet and its virtual storage stack.  As Chad mentioned above, customers carry with them bad memories of VMware’s IO capabilities in VI3.  That was a different era, my friends.  I hope I can convince you to trust VMware’s core technologies and not go looking for features that circumvent them.
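
To make that concrete, here is a hedged sketch, again using pyVmomi as an illustration of my own, of what trusting the paravirtualized path looks like: instead of passing a physical NIC or HBA through to the guest, you build a reconfigure spec that adds a VMXNET3 NIC and a PVSCSI controller.  The network and vm objects are assumed to have been looked up already; this shows the device types, it is not a drop-in script.

```python
# Hedged sketch, assuming pyVmomi; "network" and "vm" are looked up elsewhere.
# Add a paravirtualized VMXNET3 NIC and a PVSCSI controller instead of using
# VMDirectPath, keeping vMotion, DRS, and FT on the table.
from pyVmomi import vim

def paravirtual_device_spec(network):
    """Build a reconfigure spec adding a VMXNET3 NIC and a PVSCSI controller."""
    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        network=network, deviceName=network.name)
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True)

    scsi = vim.vm.device.VirtualDeviceSpec()
    scsi.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    scsi.device = vim.vm.device.ParaVirtualSCSIController(
        busNumber=1,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing)

    return vim.vm.ConfigSpec(deviceChange=[nic, scsi])

# Usage, once vm and network have been found:
#   task = vm.ReconfigVM_Task(spec=paravirtual_device_spec(network))
```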

So, let me summarize:

  • If you enable VMDirectPath, you lose most of the features you love about vSphere.
  • VMDirectPath has never been shown to have a meaningful impact on tier-1 applications, even those with high network needs (10 Gb/s).
  • The largest performance gain VMDirectPath has been shown to deliver to an application is about 10%, and that was at an extreme network load of 30 Gb/s.

Tread carefully with this feature, everyone.  Its usage should be a last resort, not a first.

10 Responses

I agree with many points in the article, but there are some very important capabilities of DirectPath that overcome some architectural limitations of ESX.

One limitation I have validated is the performance of PCI Express-connected devices. In my testing we saw that devices marshalled through ESX instead of DirectPath top out at 20,000 IOPS (4 KB small reads/writes to a device connected via PCI Express). By using DirectPath we have seen that we can get past this limitation by a factor of 3X-5X.

    • Chethan,

      I could not disagree with you more. In May of 2009 VMware showed a single host driving 365,000 IOPS (http://blogs.vmware.com/performance/2009/05/350000-io-operations-per-second-one-vsphere-host-with-30-efds.html). That configuration used three virtual machines that each drove 120,000 IOPS using the paravirtualized SCSI driver. But that is storage, where there are no supported VMDirectPath configurations.

      In the network space, where VMDirectPath is considered by some customers, VMware has shown full saturation of the PCI-e bus (27 Gb/s) using vmxnet3. There is no more throughput available on the system that can be harnessed by any software technology due to the bus’s limitations.

      Scott

  • I agree with your comments about using VMDirectPath for NIC pass through.

    But it is useful for HBA pass-through on virtualized backup servers attached to SAN libraries.

  • Does this performance discussion change if the I/O requirements are not throughput limited, but rather latency limited? Tens of gigabits and hundreds of thousands of IOPS are one thing, but various applications desire “high performance” in terms of minimal latency rather than maximal throughput… memcached fetches, synchronous writes, etc., come to mind, where microseconds are much preferred over milliseconds.

  • There are three main situations where I can envision using DirectPath: licensing that needs a non-VMware NIC for MAC authentication (VMware could help here by allowing arbitrary MAC addresses on VMNICs), licensing that needs a USB dongle (networked USB hubs are available, but their reliability in a given environment is hit and miss), and running a backup server that needs access to an FC or SAS HBA to talk to a tape drive.

  • Hi Scott,
    Solid state storage from both FusionIO and LSI has challenges with PCI Express through VMware. I suspected initially that they had badly written drivers, but extensive discussions with their engineering teams have dissuaded me of that. Through VMDirectPath both devices do 5X more IOPS than through ESX. This might be a corner case, but DirectPath is the only way I have found to get these devices to perform at peak. I’d be happy to show you this hands-on if you’re interested. It might very well turn out to be a non-VMware problem, and I certainly hope so.
    Best
    Chetan

