To VT-d or Not to VT-d? A Guide on Whether to Use Direct Device Attach in Your Virtualized System

Technically, VT-d is the marketing term for a collection of Intel chipset technologies that, individually or collectively, enhance the performance of the I/O subsystem. The most commonly used of these features goes by several names, the most descriptive of which is Direct Device Attach (DDA)1. This feature is also commonly referred to as "pass-through," and VMware uses the term "VM DirectPath" for it.

DDA works by wholly allocating a specific PCIe (v2.0) device to a specific virtual machine. At the chipset level, this is done by setting up a DMA translation scheme in hardware that enables the I/O hub to copy data directly into the memory space of that virtual machine. Without DDA, the hypervisor would have to perform these address translations in software.
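As a concrete (and hypothetical) illustration: on a Linux/KVM host managed by libvirt, dedicating a PCIe device to a guest looks roughly like the following domain-XML fragment. The PCI address 0000:03:00.0 is a placeholder, not a real device from this article; the host IOMMU (VT-d) then performs the DMA remapping described above.

```xml
<!-- Hypothetical libvirt <hostdev> entry: passes PCI device
     0000:03:00.0 through to the guest whole, as DDA requires. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```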

There are three main benefits to the designated VM from using DDA:

  • Reduced CPU cost per unit of I/O
  • Improved I/O QoS
  • Lowered latency for individual I/O packets

The reduction in CPU cost is probably DDA's most significant contribution: it was measured to range from 1.5x up to 2.5x at very high network I/O throughput rates2. In addition, the host as a whole benefits from reduced CPU utilization, owing to the savings on I/O processing for the DDA-configured devices.

The latency benefit tends to be less visible, since the magnitude of the improvement is on the order of less than a millisecond; it is most likely to be noticeable in strictly LAN-bound applications.

Total bandwidth is not directly affected by DDA; however, if a system is running close to saturation, the CPU utilization freed up by DDA provides "headroom" to support additional bandwidth.
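The headroom argument is simple arithmetic, sketched below. The utilization figures are hypothetical examples, not measurements from this article; only the 1.5x-2.5x cost-reduction range comes from the cited measurements.

```python
def cpu_after_dda(total_util, io_util, cost_reduction):
    """Estimate host CPU utilization after moving I/O to DDA.

    total_util:     current host CPU utilization (fraction of capacity)
    io_util:        the portion of total_util spent on I/O handling
    cost_reduction: DDA's CPU-cost reduction factor (measurements
                    cited above range from 1.5x to 2.5x)
    """
    # Non-I/O work is unchanged; I/O work shrinks by the reduction factor.
    return total_util - io_util + io_util / cost_reduction

# Hypothetical host: 60% busy, half of that on I/O, 2x cost reduction.
print(round(cpu_after_dda(0.60, 0.30, 2.0), 2))  # 0.45
```

In this example, DDA frees 15 percentage points of CPU, which is the headroom available for pushing additional bandwidth.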

At the same time, there are two drawbacks to using DDA (given today's state of software support):

  • Unavailability of the DDA device for use by other VMs
  • Limited migration support for VMs with DDA

Fortunately, both of these limitations are likely to be addressed in the near future. The first will be remedied by the availability of SR-IOV (Single Root I/O Virtualization) facilities in PCI devices and controllers, and the subsequent software support for it3. The second is expected to be mitigated by the upcoming VMware NPA (Network Plug-in Architecture)4.

In the meantime, the following is a brief questionnaire to help you decide whether or not to utilize DDA for a given VM:

  • Is your VM generating a substantial amount of I/O (e.g., > 20 MBps)?
  • Is your VM dependent on I/O, with strict QoS requirements?
  • Is your host heavily utilized (e.g., > 50%)?
  • Is your VM latency sensitive (on the order of 1-2 microseconds)?

Answer YES to two or more of these questions

  • Will your VM never need to migrate?
  • Will your NIC be dedicated to this VM (not used by other VMs)?

Answer YES to both of these questions

If you answered YES to two or more of the questions in the first group AND YES to both questions in the second group, then DDA is right for you and will likely improve your system's performance.
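The decision rule above can be sketched as a small boolean function; the parameter names are my own labels for the questionnaire items. Note that the second group encodes DDA's two drawbacks: the VM must not need to migrate, and the NIC must be dedicated to it.

```python
def dda_recommended(high_io, io_qos_dependent, busy_host, latency_sensitive,
                    never_migrates, nic_dedicated):
    """Apply the questionnaire: DDA is recommended when two or more of
    the first group of questions are answered YES AND both questions
    in the second group are answered YES."""
    first_group_yes = sum([high_io, io_qos_dependent,
                           busy_host, latency_sensitive])
    return first_group_yes >= 2 and never_migrates and nic_dedicated

# An I/O-heavy, QoS-dependent VM with a dedicated NIC that never migrates:
print(dda_recommended(True, True, False, False, True, True))  # True
```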

Useful Links

Introduction to Intel VT-d
VT-D usage tutorial
Full VT-D specification

1 Other VT-d features, available as of the Tylersburg and Boxboro chipsets, include Interrupt Remapping and ATS support (see the full VT-d specification).
2 This improvement is limited to the CPU cost of I/O handling alone, as measured by a micro-benchmark.
3 Experimental support for SR-IOV is already available in the latest Xen releases and pending in both Hyper-V and ESX.
4 Announced at IDF 2008. Technology co-developed by Intel and VMware, and implemented by VMware.


1 comment

Tommy F.:

Hello Hussam,
I would like to ask why VT-d technology is used just for HVM and not for paravirtualized VMs.
In both cases, the PCI device needs to write to and read from VM memory.

