Chad Sakac was recently touring APJ and delivering his high-energy VMware+EMC message with demos, demos, and more demos. His encyclopedia-sized deck of technical gems included a short discussion of VMDirectPath that merits comment, so I will offer my thoughts here.
Chad parenthetically wrote on VMDirectPath in a recent blog:
Normally, as people who know me can attest to, I’m not a general fan of VMdirectpath. Most people think of it as a “performance thing” when the reality is that with VMXNET3 and pvSCSI adapters in vSphere 4 you can get to within low percentage points of the same performance as you can with “hypervisor bypass” (met 3 customers in the last 2 weeks who were adamant they needed VMDirectPath for “performance” – but didn’t know what their perf goals were, and were using VI3 as the baseline – argh), and I know that this is called out in Cisco’s training on UCS. For some reason people think for “high performance IO” you can’t do it without bypass, but that is not explicitly true.
I have been touring the world telling people that VMDirectPath is not a performance feature ever since it was released. Unfortunately, my pithy marketing statement is only a partial truth. In March of this year VMware released an update to its ongoing SPECweb work that leveraged VMDirectPath. It is unfortunate that this article encourages customers to implement VMDirectPath; I want to convince you to ignore it.
Before it was released, VMware internally called VMDirectPath "pass-through". The idea was to give the guest operating system and its applications direct access to the underlying hardware. The monumental downside is that the guest operating system becomes married to the physical hardware: the virtual machine can no longer be moved (vMotion), load balanced (DRS), protected through fault tolerance (FT), or use many other awesome vSphere features. Enabling VMDirectPath for a virtual machine effectively moves your virtualization technology back to 2006.
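To make that trade-off concrete, here is a minimal Python sketch of the rule a placement engine like DRS has to enforce: a VM with a passthrough device is pinned to its host. This is purely illustrative; the class, function, and device names are hypothetical and not the vSphere API.

```python
# Illustrative sketch only: models the constraint that a passthrough
# device pins a VM to one physical host. Names are hypothetical, not
# the real vSphere API.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    devices: list = field(default_factory=list)  # e.g. "vmxnet3", "pvscsi"

def is_mobile(vm: VirtualMachine) -> bool:
    """A VM is eligible for vMotion, DRS, and FT only if no device
    bypasses the hypervisor to touch physical hardware directly."""
    return "pci-passthrough" not in vm.devices

web = VirtualMachine("web01", ["vmxnet3", "pvscsi"])
db = VirtualMachine("db01", ["pci-passthrough", "pvscsi"])

print(is_mobile(web))  # True  -> can vMotion, join DRS, use FT
print(is_mobile(db))   # False -> married to its physical host
```

The point of the sketch is that mobility is a binary property: one passthrough device is enough to take the VM out of every placement and protection feature at once.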
The only reason anyone considers VMDirectPath for production deployments is the possibility of increased performance. But the only workload for which VMware has ever claimed substantial gains from this feature is the SPECweb work I cited above. That workload sustained 30 Gb/s of network traffic. I doubt any of VMware's customers drive even a fraction of that network throughput through a single server in production.
Even in this extremely intense network environment, VMDirectPath's contribution to application throughput was not quantified. I discussed its impact with the engineer who performed the test, and he estimated the performance improvement at less than 10%.
Device passthrough is rarely useful in vSphere because of the incredible efficiency gains VMware has made with vmxnet and its virtual storage stack. As Chad mentioned above, customers carry bad memories of VMware's IO capabilities in VI3. That was a different era, my friends. I hope I can convince you to trust VMware's core technologies rather than look for features that circumvent them.
So, let me summarize:
- If you enable VMDirectPath you lose most of the features you love about vSphere.
- VMDirectPath has never been shown to measurably improve tier-1 applications, even those with high network demands (10 Gb/s).
- The largest performance gain VMDirectPath has ever been shown to add to an application is about 10%, and that was under an extreme network load of 30 Gb/s.
Tread carefully with this feature, everyone. It should be a last resort, not a first choice.