KVM Configuration

Complete the following steps to configure KVM for your RTP Proxies.

BIOS/UEFI Settings

Depending on your hardware, some of these settings may not be available.

  1. In the system's BIOS, set the CPU profile to "Performance".
  2. Disable C3/C6 power states.
  3. Enable Turbo.

Tune the Kernel

The values for each setting must be customized for your CPU.

  1. Disable IRQ balance.
    sudo systemctl disable --now irqbalance
  2. Add the following CPU parameters to grub.

    Table 3-1 Kernel Arguments

    Parameter              Purpose
    isolcpus               Marks these CPUs as isolated from the scheduler's general load balancing
    nohz_full              Puts these CPUs into full tickless mode to minimize scheduler timer interrupts
    rcu_nocbs              Offloads RCU callbacks from these CPUs to housekeeping CPUs
    intel_iommu/amd_iommu  Enables the Intel VT-d or AMD-Vi IOMMU for DMA isolation and for vfio-pci use
    iommu                  Enables pass-through mode for the IOMMU on non-vfio devices to reduce overhead
    tuned.non_isolcpus     Declares which CPUs are housekeeping (NOT isolated)
    intel_pstate           Enables or disables the Intel P-state driver

    The following command isolates Intel CPUs 2 through 47 and 50 through 95.

    sudo grubby --update-kernel=ALL \
      --args="isolcpus=2-47,50-95 nohz_full=2-47,50-95 rcu_nocbs=2-47,50-95 intel_iommu=on iommu=pt tuned.non_isolcpus=00030000,00000003 intel_pstate=disable"

    The following command isolates AMD CPUs 2 through 47 and 50 through 95.

    sudo grubby --update-kernel=ALL \
      --args="isolcpus=2-47,50-95 nohz_full=2-47,50-95 rcu_nocbs=2-47,50-95 amd_iommu=on iommu=pt tuned.non_isolcpus=00030000,00000003"
  3. Once the appropriate kernel parameters have been set for your hardware, reboot the system.
    Use this command to verify your kernel arguments:
    cat /proc/cmdline
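The tuned.non_isolcpus value in the commands above is a comma-separated list of 32-bit hex words, highest word first, with one bit set per housekeeping CPU; 00030000,00000003 therefore marks CPUs 0, 1, 48, and 49 as housekeeping. A minimal sketch of computing that mask from a CPU list (the function name is illustrative, not a standard tool):

```shell
# cpulist_to_mask: print a tuned.non_isolcpus-style hex mask
# (comma-separated 32-bit words, highest word first) for the
# CPU numbers passed as arguments.
cpulist_to_mask() {
  local -a words=()
  local c w b max=0 out=""
  for c in "$@"; do
    w=$(( c / 32 )); b=$(( c % 32 ))
    words[w]=$(( ${words[w]:-0} | (1 << b) ))
    (( w > max )) && max=$w
  done
  for (( w = max; w >= 0; w-- )); do
    out+=$(printf '%08x' "$(( ${words[w]:-0} ))")
    (( w > 0 )) && out+=','
  done
  printf '%s\n' "$out"
}

cpulist_to_mask 0 1 48 49   # prints 00030000,00000003
```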

Install KVM

  1. Verify that your CPU supports virtualization. Non-empty output (vmx for Intel, svm for AMD) indicates support.
    grep -Ei 'vmx|svm' /proc/cpuinfo
  2. Install virtualization software.
    sudo dnf groupinstall "Virtualization Host"
  3. Start the libvirtd service.
    sudo systemctl enable --now libvirtd
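The check in step 1 can be wrapped in a small helper that works on any cpuinfo text; a sketch, where the sample flags line is fabricated for illustration:

```shell
# has_virt: succeed if the given /proc/cpuinfo text advertises hardware
# virtualization (vmx for Intel VT-x, svm for AMD-V).
has_virt() {
  grep -Eqi 'vmx|svm' <<< "$1"
}

# Illustrative sample only; on a real host run: has_virt "$(cat /proc/cpuinfo)"
sample='flags : fpu vme de pse tsc msr vmx sse2'
has_virt "$sample" && echo "virtualization supported"
```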

Manage Virtual Networks

  1. Block the kernel from automatically loading the i40evf and iavf drivers.
    echo "blacklist i40evf" | sudo tee -a /etc/modprobe.d/blacklist.conf
    echo "blacklist iavf" | sudo tee -a /etc/modprobe.d/blacklist.conf 

    This allows the built-in mlx5_core driver to be used instead.

  2. Customize the following script as appropriate to your network interfaces and requirements.
    sudo bash -c 'cat > /etc/rc.d/rc.local << "EOF"
    #!/usr/bin/env bash
    # Create VFs
    echo 2 > /sys/bus/pci/devices/0000:af:00.0/sriov_numvfs
    echo 2 > /sys/bus/pci/devices/0000:af:00.1/sriov_numvfs
    # Assign MACs
    ip link set enp175s0f0 vf 0 mac ee:07:94:7a:a7:81
    ip link set enp175s0f0 vf 1 mac ee:07:94:7a:a7:82
    ip link set enp175s0f1 vf 0 mac 5a:b0:f8:ce:04:51
    ip link set enp175s0f1 vf 1 mac 5a:b0:f8:ce:04:52
    # Set trust and link state
    ip link set enp175s0f0 vf 0 trust on
    ip link set enp175s0f0 vf 1 trust on
    ip link set enp175s0f1 vf 0 trust on
    ip link set enp175s0f1 vf 1 trust on
    ip link set enp175s0f0 vf 0 state auto
    ip link set enp175s0f0 vf 1 state auto
    ip link set enp175s0f1 vf 0 state auto
    ip link set enp175s0f1 vf 1 state auto
    # Optional: Disable spoof check for multi-mac
    ip link set enp175s0f0 vf 0 spoofchk off
    ip link set enp175s0f0 vf 1 spoofchk off
    ip link set enp175s0f1 vf 0 spoofchk off
    ip link set enp175s0f1 vf 1 spoofchk off
    EOF
    chmod +x /etc/rc.d/rc.local'
  3. Reboot to apply the changes.
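The repetitive per-VF ip link block above can be generated from a single helper, which keeps the MAC, trust, link-state, and spoof-check settings for each VF in one place. A sketch that only prints the commands rather than running them (emit_vf_cmds is a hypothetical name; the MACs are the sample values from the script above):

```shell
# emit_vf_cmds: print the "ip link set" commands for one VF.
emit_vf_cmds() {
  local pf=$1 vf=$2 mac=$3
  printf 'ip link set %s vf %s mac %s\n'       "$pf" "$vf" "$mac"
  printf 'ip link set %s vf %s trust on\n'     "$pf" "$vf"
  printf 'ip link set %s vf %s state auto\n'   "$pf" "$vf"
  printf 'ip link set %s vf %s spoofchk off\n' "$pf" "$vf"
}

# Sample values from the rc.local script above.
emit_vf_cmds enp175s0f0 0 ee:07:94:7a:a7:81
emit_vf_cmds enp175s0f0 1 ee:07:94:7a:a7:82
emit_vf_cmds enp175s0f1 0 5a:b0:f8:ce:04:51
emit_vf_cmds enp175s0f1 1 5a:b0:f8:ce:04:52
```

Redirect the output into the rc.local file, or pipe it to a shell, once the values match your hardware.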

Edit the Guest's Domain XML

The edits shown below are samples only. Do not reuse them without careful evaluation.

  1. Use the command lspci -D to discover the Domain:Bus:Device.Function (BDF) addresses for your server.
  2. Attach the VFs with VFIO.

    Edit the actual BDF values to match those of your server.

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0xaf' slot='0x02' function='0x0'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0xaf' slot='0x06' function='0x0'/>
      </source>
    </hostdev>
  3. Add a management interface.
    <interface type='bridge'>
      <mac address='52:54:00:22:9a:59'/>
      <source bridge='br-mgmt0'/>
      <model type='virtio'/>
    </interface>
  4. Define and pin the virtual CPUs to isolated NUMA-local CPUs of the host.
    <vcpu placement='static'>16</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
      ... (Repeat for all vCPUs) ...
      <emulatorpin cpuset='2-17'/>
    </cputune>
  5. Enable NUMA memory pinning.
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>
  6. Enable hugepages-backed memory.
    <memoryBacking>
      <hugepages/>
    </memoryBacking>
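Taken together, the fragments above slot into the guest's <domain> element roughly as follows. This is an illustrative skeleton only, not a complete domain definition: the name, memory size, vCPU pinning, and BDF values are sample placeholders that must match your environment.

```xml
<domain type='kvm'>
  <name>rtp-proxy-01</name>                <!-- hypothetical name -->
  <memory unit='GiB'>16</memory>           <!-- hypothetical size -->
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <!-- ... one vcpupin per vCPU ... -->
    <emulatorpin cpuset='2-17'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <devices>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0xaf' slot='0x02' function='0x0'/>
      </source>
    </hostdev>
    <interface type='bridge'>
      <mac address='52:54:00:22:9a:59'/>
      <source bridge='br-mgmt0'/>
      <model type='virtio'/>
    </interface>
    <!-- disks, console, and remaining devices omitted -->
  </devices>
</domain>
```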

Start the Virtual Machine

  1. Define the virtual machine.
    virsh define vm.xml
  2. Start the virtual machine.
    virsh start <name>
  3. On the guest VM, enable console access from the KVM host.
    sudo systemctl enable --now serial-getty@ttyS0
  4. From the KVM host, connect to the virtual machine.
    virsh console <name>

Prepare the Guest OS

These steps should be taken after you have connected to the guest VM.

  1. Enable IOMMU. On AMD CPUs, use amd_iommu=on in place of intel_iommu=on.
    sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
    sudo reboot
  2. Select the appropriate driver for your NIC.

    Table 3-2 List of Supported NICs and Their Drivers

    Intel E810 (800 series)
      PF driver: ice (keep the PF on the kernel driver)
      VF driver: iavf; binding the VFs to vfio-pci is recommended
      Notes: Enable IOMMU for vfio-pci. Bind the VFs to vfio-pci for the DPDK iavf PMD.

    Intel X710/XL710 (700 series)
      PF driver: i40e (keep the PF on the kernel driver)
      VF driver: iavf; binding the VFs to vfio-pci is recommended
      Notes: Avoid the deprecated i40evf driver. Use iavf plus vfio-pci for the VFs.

    NVIDIA/Mellanox ConnectX-5 / ConnectX-6
      PF driver: mlx5_core (eth: mlx5e); do NOT bind to vfio-pci
      VF driver: mlx5_core (eth: mlx5e); do NOT bind to vfio-pci
      Notes: The mlx5 PMD works with the kernel mlx5_core driver plus rdma-core/DevX. Binding to vfio-pci breaks the mlx5 PMD.
  3. Set up hugepages.

    The example below reserves 2048 hugepages of the default 2 MB size (4 GB total) and makes the reservation persist across reboots.

    echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
    echo "vm.nr_hugepages = 2048" | sudo tee /etc/sysctl.d/99-hugepages.conf
    sudo sysctl --system
    
    sudo mkdir -p /dev/hugepages
    echo "nodev /dev/hugepages hugetlbfs defaults 0 0" | sudo tee -a /etc/fstab
    sudo mount -a
  4. Apply the network-latency tuned profile.
    sudo dnf install -y tuned-profiles-cpu-partitioning
    sudo tuned-adm profile network-latency
  5. Disable irqbalance.
    sudo systemctl disable --now irqbalance
  6. Install the following dependencies.
    sudo dnf install libibverbs rdma-core librdmacm pciutils
  7. Install JDK 25.
    sudo dnf install jdk-25-headless
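The hugepage reservation from step 3 can be sanity-checked after reboot with grep Huge /proc/meminfo; with 2048 pages at the default 2 MB page size, the total is 4 GB. A sketch of that arithmetic against sample output (the meminfo text below is illustrative, not captured from a real host):

```shell
# Sample /proc/meminfo lines; on a real host use: grep Huge /proc/meminfo
meminfo='HugePages_Total:    2048
HugePages_Free:     2048
Hugepagesize:       2048 kB'

total=$(awk '/HugePages_Total/ {print $2}' <<< "$meminfo")
size_kb=$(awk '/Hugepagesize/ {print $2}' <<< "$meminfo")
echo "$(( total * size_kb / 1024 / 1024 )) GB reserved"   # prints: 4 GB reserved
```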