Host Hypervisor CPU Affinity (Pinning)

Many hardware platforms have built-in optimizations related to VM placement. For example, some CPU sockets may have faster local access to Peripheral Component Interconnect (PCI) resources than other CPU sockets. Ensure that VMs requiring high media throughput are optimally placed by the hypervisor, so that traversal of cross-domain bridges, such as QuickPath Interconnect (QPI), is avoided or minimized.
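For example, on a Linux host you can check which NUMA node a NIC's PCI device is attached to by reading its numa_node attribute in sysfs. The following is a minimal sketch that assumes standard Linux sysfs paths; interface names and node numbering vary by platform, and a reported value of -1 means the firmware did not expose NUMA affinity for that device.

```python
#!/usr/bin/env python3
"""Report the NUMA node that each network interface's PCI device is attached to."""
from pathlib import Path

def nic_numa_nodes():
    """Return a dict mapping interface name -> NUMA node (or None if not reported)."""
    result = {}
    for iface in Path("/sys/class/net").iterdir():
        numa_file = iface / "device" / "numa_node"
        if not numa_file.exists():
            # Purely virtual interfaces (lo, bridges, ...) have no backing PCI device.
            continue
        node = int(numa_file.read_text().strip())
        result[iface.name] = node if node >= 0 else None
    return result

if __name__ == "__main__":
    for name, node in sorted(nic_numa_nodes().items()):
        print(f"{name}: NUMA node {node if node is not None else 'not reported'}")
```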

Some hypervisors implement Non-Uniform Memory Access (NUMA) topology rules to automatically enforce such placements. All hypervisors provide manual methods to perform CPU pinning, achieving the same result.
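As an illustration, the sketch below shows one way to pin a guest's vCPUs manually on a KVM/libvirt host using the libvirt Python bindings. The domain name ("vm-b") and the set of core IDs belonging to Socket 1 are placeholders; substitute values that match your host topology, and note that this call applies to a running guest.

```python
#!/usr/bin/env python3
"""Pin every vCPU of a running guest to the physical cores of one CPU socket via libvirt."""
import libvirt

DOMAIN_NAME = "vm-b"                              # hypothetical guest name
SOCKET1_CORES = {8, 9, 10, 11, 12, 13, 14, 15}    # hypothetical core IDs on Socket 1

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName(DOMAIN_NAME)
    host_cpus = conn.getInfo()[2]                 # number of logical host CPUs

    # Build a boolean CPU map: True for each physical CPU the vCPU may run on.
    cpumap = tuple(cpu in SOCKET1_CORES for cpu in range(host_cpus))

    vcpu_count = dom.maxVcpus()
    for vcpu in range(vcpu_count):
        dom.pinVcpu(vcpu, cpumap)                 # apply the affinity mask to each vCPU

    print(f"Pinned {vcpu_count} vCPUs of {DOMAIN_NAME} to cores {sorted(SOCKET1_CORES)}")
finally:
    conn.close()
```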

The diagram below displays two paths between the system's NICs and VM-B. Without pinning configured, VM-B runs on Socket 0 and must traverse the QPI to access Socket 1's NICs. In the preferred configuration, VM-B is pinned to Socket 1, giving it direct access to the local NICs and avoiding the QPI.

Figure 5-1 Contrast of Data Paths with and without CPU Pinning

Note:

The SBC relies on multiple queues on virtual NICs to scale session capacity, because multiple queues allow it to distribute traffic across multiple forwarding cores. This configuration is platform dependent and involves the physical NIC, the hypervisor, the virtual NIC, and the vSwitch.
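As one platform-specific illustration, the following sketch for a KVM/libvirt host parses a guest's domain XML and reports the queues setting on each virtual interface; a value greater than one indicates that multiqueue is configured. The domain name is a placeholder.

```python
#!/usr/bin/env python3
"""Report the multiqueue configuration of a guest's virtual NICs from its libvirt domain XML."""
import xml.etree.ElementTree as ET
import libvirt

DOMAIN_NAME = "vm-b"   # hypothetical guest name

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName(DOMAIN_NAME)
    root = ET.fromstring(dom.XMLDesc(0))
    for iface in root.findall("./devices/interface"):
        model = iface.find("model")
        driver = iface.find("driver")
        target = iface.find("target")
        name = target.get("dev") if target is not None else "unknown"
        queues = driver.get("queues") if driver is not None else None
        print(f"interface {name}: "
              f"model={model.get('type') if model is not None else '?'}, "
              f"queues={queues or '1 (default)'}")
finally:
    conn.close()
```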

Refer to your hypervisor documentation for instructions on how to pin CPUs.
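For example, on a KVM/libvirt host the current vCPU placement and affinity can be verified with the libvirt Python bindings, as in the sketch below; the domain name is a placeholder.

```python
#!/usr/bin/env python3
"""Display the current vCPU placement and affinity of a running guest via libvirt."""
import libvirt

DOMAIN_NAME = "vm-b"   # hypothetical guest name

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName(DOMAIN_NAME)
    vcpu_info, cpumaps = dom.vcpus()   # per-vCPU state and affinity masks
    for (number, state, cpu_time, cpu), cpumap in zip(vcpu_info, cpumaps):
        allowed = [i for i, ok in enumerate(cpumap) if ok]
        print(f"vCPU {number}: running on pCPU {cpu}, allowed pCPUs {allowed}")
finally:
    conn.close()
```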