Host Hypervisor CPU Affinity (Pinning)

Many hardware platforms have built-in optimizations related to VM placement. For example, some CPU sockets have faster local access to Peripheral Component Interconnect (PCI) resources than other CPU sockets. Users should ensure that VMs requiring high media throughput are optimally placed by the hypervisor, so that traversal of cross-socket interconnects, such as the QuickPath Interconnect (QPI), is avoided or minimized.

Some hypervisors implement Non-Uniform Memory Access (NUMA) topology rules to automatically enforce such placements. All hypervisors provide manual methods to perform CPU pinning, achieving the same result.
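Before pinning, it helps to confirm which NUMA node (CPU socket) a given NIC is local to. The following sketch, which assumes a Linux host and uses the placeholder interface name eth0, reads the NUMA node that the kernel reports for the NIC's PCI device through sysfs; a value of -1 means the platform does not report NUMA locality. It is an illustration only, not part of the SBC or hypervisor tooling.

    #!/usr/bin/env python3
    """Report the NUMA node that hosts a given NIC (Linux sysfs sketch)."""
    from pathlib import Path

    def nic_numa_node(interface: str) -> int:
        """Return the NUMA node of the PCI device behind `interface`.

        A value of -1 means the platform did not report NUMA locality.
        """
        node_file = Path("/sys/class/net") / interface / "device" / "numa_node"
        return int(node_file.read_text().strip())

    if __name__ == "__main__":
        iface = "eth0"  # placeholder; use the physical NIC carrying media traffic
        print(f"{iface} is local to NUMA node {nic_numa_node(iface)}")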

The diagram below shows two paths between the system's NICs and VM-B. Without pinning configured, VM-B runs on Socket 0 and must traverse the QPI to reach the NICs attached to Socket 1. The preferred path pins VM-B to Socket 1, giving it direct access to the local NICs and avoiding the QPI.

[Figure: NIC access paths for VM-B, showing QPI traversal from Socket 0 versus direct local access when pinned to Socket 1.]

Oracle recommends that you configure CPU affinities on the hypervisor so that each physical CPU core is mapped to no more than one virtual CPU. Refer to your hypervisor documentation for instructions on configuring CPU affinity and pinning CPUs.
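As one concrete example of such a one-to-one mapping, the sketch below pins each vCPU of a guest to its own physical core on a KVM/libvirt host by invoking the virsh vcpupin command. The domain name vm-b and the core list are placeholders, and the assumption is that cores 8 through 11 sit on the socket local to the media NICs; other hypervisors expose equivalent affinity controls through their own management interfaces.

    #!/usr/bin/env python3
    """Pin each vCPU of a guest to a distinct physical core (KVM/libvirt sketch)."""
    import subprocess

    DOMAIN = "vm-b"                  # placeholder libvirt domain name
    PHYSICAL_CORES = [8, 9, 10, 11]  # placeholder cores on the NIC-local socket

    def pin_one_to_one(domain: str, cores: list[int]) -> None:
        """Map vCPU i to cores[i] so no physical core backs more than one vCPU."""
        for vcpu, core in enumerate(cores):
            subprocess.run(
                ["virsh", "vcpupin", domain, str(vcpu), str(core)],
                check=True,
            )

    if __name__ == "__main__":
        pin_one_to_one(DOMAIN, PHYSICAL_CORES)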

Note:

The SBC relies on multiple queues on its virtual NICs to scale session capacity. Multiple queues enable the SBC to scale by spreading packet processing across multiple forwarding cores. This configuration is platform dependent and involves the physical NIC, the hypervisor, the virtual NIC, and the vSwitch.
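On a Linux-based hypervisor host, one way to check how many queues (channels) a physical NIC exposes is to query it with the ethtool utility, as in the sketch below. This is an illustration only: it assumes ethtool is installed and uses the placeholder interface name eth0, and the queue count ultimately presented to the guest also depends on the hypervisor, virtual NIC, and vSwitch configuration described in your platform documentation.

    #!/usr/bin/env python3
    """Inspect a NIC's queue (channel) counts on a Linux host (sketch)."""
    import subprocess

    def show_channels(interface: str) -> str:
        """Return ethtool's channel (queue) report for the interface."""
        result = subprocess.run(
            ["ethtool", "-l", interface],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(show_channels("eth0"))  # placeholder interface name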