2.5. PCI Passthrough

When running on Linux hosts with a kernel version later than 2.6.31, experimental host PCI device passthrough is available.

Note

The PCI passthrough module is shipped as an Oracle VM VirtualBox extension package, which must be installed separately. See Installing Oracle VM VirtualBox and Extension Packs.

This feature enables a guest to directly use physical PCI devices on the host, even if the host does not have drivers for that particular device. Both regular PCI and some PCI Express cards are supported. AGP and certain PCI Express cards are not currently supported if they rely on Graphics Address Remapping Table (GART) unit programming for texture management, as this performs rather non-trivial page-remapping operations that interfere with the IOMMU. This limitation may be lifted in future releases.

To be fully functional, PCI passthrough support in Oracle VM VirtualBox depends upon an IOMMU hardware unit. If the device uses bus mastering, for example it performs DMA to the OS memory on its own, then an IOMMU is required. Otherwise such DMA transactions may write to the wrong physical memory address, as the device's DMA engine is programmed using a device-specific protocol to perform memory transactions. The IOMMU functions as a translation unit, mapping physical memory access requests from the device to host physical addresses using knowledge of the guest-physical to host-physical address translation rules.

Intel's solution for an IOMMU is called Intel Virtualization Technology for Directed I/O (VT-d), and AMD's solution is called AMD-Vi. Check your motherboard datasheet for the appropriate technology. Even if your hardware does not have an IOMMU, certain PCI cards may work, such as serial PCI adapters, but the guest will show a warning on boot and the VM execution will terminate if the guest driver attempts to enable bus mastering on the card.

It is very common for the BIOS or the host OS to disable the IOMMU by default. So before any attempt to use it, make sure that the following conditions apply:

  • Your motherboard has an IOMMU unit.

  • Your CPU supports the IOMMU.

  • The IOMMU is enabled in the BIOS.

  • The VM must run with VT-x/AMD-V and nested paging enabled.

  • Your Linux kernel was compiled with IOMMU support, including DMA remapping. See the CONFIG_DMAR kernel compilation option. The PCI stub driver (CONFIG_PCI_STUB) is required as well.

  • Your Linux kernel recognizes and uses the IOMMU unit. The intel_iommu=on boot option may be needed. Search for DMAR and PCI-DMA in the kernel boot log, as shown in the example after this list.
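
For example, the following commands can help verify the last two points. This is only a sketch: the exact log messages vary between kernel versions, and /proc/config.gz is only present if the kernel was built with CONFIG_IKCONFIG_PROC, in which case you can check /boot/config-$(uname -r) instead:

$ dmesg | grep -e DMAR -e PCI-DMA
$ zgrep -e CONFIG_DMAR -e CONFIG_PCI_STUB /proc/config.gz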

Once you have made sure that the host kernel supports the IOMMU, the next step is to select the PCI card and attach it to the guest. To view the list of available PCI devices, use the lspci command. The output will look similar to the following:

01:00.0 VGA compatible controller: ATI Technologies Inc Cedar PRO [Radeon HD 5450]
01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit
        Ethernet controller (rev 03)
03:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
03:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
06:00.0 VGA compatible controller: nVidia Corporation G86 [GeForce 8500 GT] (rev a1)
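
Depending on the host configuration, the card may first need to be detached from its host driver and bound to the pci-stub driver mentioned above so that the VM can claim it. The following is a minimal sketch, run as root, for the Realtek network card at 02:00.0 from the listing above; the numeric vendor and device IDs used here, 10ec 8168, can be read with lspci -n and will differ for other cards:

$ echo "10ec 8168" > /sys/bus/pci/drivers/pci-stub/new_id
$ echo 0000:02:00.0 > /sys/bus/pci/devices/0000:02:00.0/driver/unbind
$ echo 0000:02:00.0 > /sys/bus/pci/drivers/pci-stub/bind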

The first column is a PCI address, in the format bus:device.function. This address can be used to identify the device for further operations. For example, to attach the PCI network controller on the system listed above to the second PCI bus in the guest, as device 5, function 0, use the following command:

$ VBoxManage modifyvm VM-name --pciattach 02:00.0@01:05.0
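
The part before the @ is the host PCI address of the card and the part after it is the desired guest PCI address. The guest address can reportedly be omitted, in which case Oracle VM VirtualBox picks a slot in the guest itself; specifying it explicitly, as here, makes the placement predictable.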

To detach the same device, use:

$ VBoxManage modifyvm VM-name --pcidetach 02:00.0

Note that both the host and the guest can freely assign a different PCI address to the attached card at runtime, so these addresses only refer to the card's address at the moment of attachment on the host, and at the time of BIOS PCI initialization in the guest.
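
Assuming a Linux guest, you can check which address the card actually received by listing the PCI devices inside the guest as well. In the example above the card was requested at guest address 01:05.0, but the guest BIOS may have placed it elsewhere:

$ lspci | grep -i ethernet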

If the virtual machine has a PCI device attached, certain limitations apply:

  • Only PCI cards with non-shared interrupts, such as those using MSI on the host, are supported at the moment.

  • No guest state can be reliably saved or restored. The internal state of the PCI card cannot be retrieved.

  • Teleportation, also called live migration, does not work. The internal state of the PCI card cannot be retrieved.

  • No lazy physical memory allocation. The host will preallocate the whole RAM required for the VM on startup, as physical hardware accesses to memory cannot be intercepted.