
Deploy SR-IOV Enabled Network Interfaces for Container Apps on OKE Using the Multus CNI Plugin

Introduction

In this tutorial, we will explore how to deploy containerized applications on virtual instance worker nodes within Oracle Cloud Infrastructure Kubernetes Engine (OKE), leveraging advanced networking capabilities. Specifically, we will enable Single Root I/O Virtualization (SR-IOV) for container network interfaces and configure the Multus CNI plugin to enable multi-homed networking for your containers.

By combining SR-IOV with Multus, you can achieve high-performance, low-latency networking for specialized workloads such as AI, Machine Learning, and real-time data processing. This tutorial will provide step-by-step instructions to configure your OKE environment, deploy worker nodes with SR-IOV enabled interfaces, and use Multus CNI to manage multiple network interfaces in your pods. Whether you are aiming for high-speed packet processing or need to fine-tune your Kubernetes networking, this tutorial will equip you with the tools and knowledge to get started.


Objectives

Enable SR-IOV (hardware-assisted) networking on the OKE worker nodes, attach dedicated SR-IOV enabled VNICs, install the Multus CNI meta-plugin, and attach the SR-IOV enabled interfaces to pods as secondary network interfaces.

Task 1: Deploy OKE with a Bastion, Operator, Three VM Worker Nodes and the Flannel CNI Plugin

Ensure that OKE is deployed with a bastion host, an operator host, virtual machine worker nodes, and the Flannel CNI plugin.

This setup is detailed in the tutorial here: Deploy a Kubernetes Cluster with Terraform using Oracle Cloud Infrastructure Kubernetes Engine.

The following image shows a visual overview of the components we will work with throughout this tutorial.

[Image: Overview of the OKE components used in this tutorial]

Task 2: Enable SR-IOV (Hardware-Assisted) Networking on Each Worker Node

Note: The following steps need to be performed on all the worker nodes that are part of the OKE cluster.

The following image shows a visual overview of our worker nodes inside the OKE cluster that we will work with throughout this tutorial.

[Image: Worker nodes inside the OKE cluster]

Enable SR-IOV on the Instance
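In the OCI Console this is done by editing the instance and setting the NIC attachment type launch option to hardware-assisted (SR-IOV) networking. As a minimal sketch, assuming the OCI CLI is configured and that your CLI version exposes launch options on instance update (treat the parameter below as an assumption and check oci compute instance update --help), the same change could be made from the command line; the instance needs a reboot for the new NIC attachment type to take effect:

    # Placeholder OCID; replace with the OCID of each worker node instance.
    INSTANCE_ID="ocid1.instance.oc1..exampleuniqueID"

    # Switch the NIC attachment type to VFIO (hardware-assisted SR-IOV networking).
    oci compute instance update \
      --instance-id "$INSTANCE_ID" \
      --launch-options '{"networkType": "VFIO"}'

    # After rebooting the instance, confirm from inside the worker node that the
    # NIC is no longer paravirtualized (virtio); with SR-IOV you would typically
    # see a virtual function driver such as mlx5_core. The primary interface
    # name (ens3 here) may differ in your environment.
    ssh opc@<worker-node-ip> "ethtool -i ens3 | grep ^driver"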

Task 3: Create a New Subnet for the SR-IOV Enabled VNICs

We will create a dedicated subnet that our SR-IOV enabled interfaces will use.

Task 3.1: Create a Security List

We are already using security lists for the other subnets, but we also need a dedicated security list for the newly created SR-IOV subnet.
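The security list itself is typically created in the OCI Console. Purely as a hedged sketch, assuming the OCI CLI and a VCN CIDR of 10.0.0.0/16 (the OCIDs and the CIDR are placeholders to adapt to your environment), the equivalent call could look like this:

    # Placeholder OCIDs; replace with the values from your tenancy.
    COMPARTMENT_ID="ocid1.compartment.oc1..exampleuniqueID"
    VCN_ID="ocid1.vcn.oc1..exampleuniqueID"

    # Dedicated security list for the SR-IOV subnet, allowing all traffic inside the VCN.
    oci network security-list create \
      --compartment-id "$COMPARTMENT_ID" \
      --vcn-id "$VCN_ID" \
      --display-name "sriov-security-list" \
      --ingress-security-rules '[{"protocol": "all", "source": "10.0.0.0/16"}]' \
      --egress-security-rules '[{"protocol": "all", "destination": "0.0.0.0/0"}]'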

Task 3.2: Create a Subnet
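A hedged CLI sketch for the subnet, reusing the placeholder variables from Task 3.1 and a CIDR of 10.0.3.0/27, which matches the ens5 addresses used later in this tutorial (adjust the CIDR, display name, and security list OCID to your environment):

    # Create the dedicated SR-IOV subnet and attach the security list from Task 3.1.
    oci network subnet create \
      --compartment-id "$COMPARTMENT_ID" \
      --vcn-id "$VCN_ID" \
      --cidr-block "10.0.3.0/27" \
      --display-name "sriov-subnet" \
      --prohibit-public-ip-on-vnic true \
      --security-list-ids '["ocid1.securitylist.oc1..exampleuniqueID"]'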

Task 4: Add a Second VNIC Attachment

The following image shows a visual overview of the worker nodes, each with a single VNIC connected to the worker node subnet, before we add a second VNIC.

[Image: Worker nodes with a single VNIC attached to the worker node subnet]

Before we add a second VNIC attachment to the worker nodes, create a Network Security Group.

Task 4.1: Create a Network Security Group (NSG)

We are already using NSGs for the other VNICs, but we also need a dedicated NSG for the new VNIC that we will attach to each existing virtual instance that serves as a Kubernetes worker node in the OKE cluster. This VNIC will have SR-IOV enabled.
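A hedged CLI sketch for the NSG, reusing the placeholder variables from Task 3.1 (the display name is illustrative; the security rules themselves can then be added in the OCI Console, or with the oci network nsg rules commands, according to the traffic you want to allow between the SR-IOV VNICs):

    # Create a dedicated NSG for the SR-IOV enabled VNICs.
    oci network nsg create \
      --compartment-id "$COMPARTMENT_ID" \
      --vcn-id "$VCN_ID" \
      --display-name "sriov-nsg"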

Task 4.2: Add the VNIC
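A hedged sketch of attaching the second VNIC from the CLI, with placeholder OCIDs for the SR-IOV subnet and NSG created earlier; repeat this for every worker node in the cluster:

    # Attach a second VNIC in the SR-IOV subnet, protected by the new NSG.
    oci compute instance attach-vnic \
      --instance-id "$INSTANCE_ID" \
      --subnet-id "ocid1.subnet.oc1..exampleuniqueID" \
      --nsg-ids '["ocid1.networksecuritygroup.oc1..exampleuniqueID"]' \
      --assign-public-ip false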

Task 5: Assign an IP Address to the New Second VNIC with a Default Gateway

Now that the second VNIC has been created and attached in Task 4, we need to assign an IP address to it. When you add a second interface to an instance, you can assign it to the same subnet as the first interface, or you can pick a new subnet.

DHCP is not enabled for the second interface, so the IP address needs to be assigned manually.

There are different methods of assigning the IP address to the second interface.

For all worker nodes, we assigned an IP address to the secondary VNIC (ens5) using Method 3. For more information about assigning an IP address to the second VNIC, see Assign an IP Address to a Second Interface on an Oracle Linux Instance.
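As a simple, non-persistent sketch (not necessarily identical to Method 3 in the referenced tutorial), the address can be added to ens5 directly on each worker node; the addresses used below come from the mapping table in Task 7.2, and a persistent configuration with a default gateway is described in the linked tutorial:

    # On the worker node 10.0.112.134: assign the address to ens5 and bring it up.
    # Repeat on the other worker nodes with their own addresses
    # (10.0.3.15/27, 10.0.3.14/27, and 10.0.3.16/27).
    sudo ip addr add 10.0.3.30/27 dev ens5
    sudo ip link set ens5 up

    # Confirm the address is present.
    ip -brief addr show ens5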

Once the IP address has been assigned, we need to verify that the IP addresses on the second VNICs are configured correctly. We can also verify that SR-IOV is enabled on all node pool worker nodes.

Our OKE cluster consists of:

Node Pool    Worker Nodes
NP1          1 x Worker Node
NP2          3 x Worker Nodes

We will verify all worker nodes in all node pools.
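A minimal sketch of this verification, run from the operator host and assuming SSH access to the worker nodes as the opc user (the worker node IP addresses come from the mapping table in Task 7.2):

    # Check the ens5 address and the NIC driver on every worker node.
    # With hardware-assisted (SR-IOV) networking you would typically see a
    # virtual function driver such as mlx5_core instead of virtio_net.
    for NODE in 10.0.112.134 10.0.66.97 10.0.73.242 10.0.89.50; do
      echo "--- $NODE ---"
      ssh -o StrictHostKeyChecking=no opc@"$NODE" \
        "ip -brief addr show ens5; ethtool -i ens5 | grep ^driver"
    done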

Task 5.1: Verify all Nodes in Node Pool 1 (np1)

Task 5.2: Verify all Nodes in Node Pool 2 (np2)

Task 6: Install a Meta-Plugin CNI (Multus CNI) on the Worker Node

Multus CNI is a Kubernetes Container Network Interface (CNI) plugin that allows you to attach multiple network interfaces to a pod.

How Multus CNI Works

Why We Need Multus CNI

Task 6.1: Install Multus CNI using the Thin Install Method
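A sketch of the thin install, run from the operator host; the manifest path follows the upstream multus-cni repository layout and may change between releases:

    # Clone the upstream Multus CNI repository and apply the thin plugin daemon set.
    git clone https://github.com/k8snetworkplumbingwg/multus-cni.git
    cd multus-cni
    kubectl apply -f deployments/multus-daemonset.yml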

What the Multus DaemonSet Does

Task 6.2: Validate the Multus Installation
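A few hedged checks to confirm the installation (the daemon set runs one kube-multus pod per node in the kube-system namespace, and it writes a Multus CNI configuration into /etc/cni/net.d on every worker node):

    # One Multus pod should be running per node.
    kubectl get pods -n kube-system -o wide | grep -i multus

    # The generated Multus CNI configuration should exist on each worker node.
    ssh opc@<worker-node-ip> "ls -l /etc/cni/net.d/"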

Task 7: Attach Network Interfaces to Pods

In this task, we will map or attach a container interface to the SR-IOV enabled VNIC (ens5) that we attached to each worker node.

To attach additional interfaces to pods, we need a configuration for the interface to be attached.

There are several CNI plugins that can be used alongside Multus to accomplish this. For more information, see Plugins Overview.

In Task 7.1, we will create NetworkAttachmentDefinition objects that configure the secondary ens5 interface that was added to the nodes.

Task 7.1: Create Network Attachment Definition

The NetworkAttachmentDefinition is used to set up the network attachment, that is, the secondary interface for the pod.

There are two ways to configure the NetworkAttachmentDefinition:

Note: In this tutorial, we are going to use the CNI config file method.

We have 4 x worker nodes and each worker node has a second VNIC that we will map to an interface on a container (pod).
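The following is a hedged sketch of such a definition for the first worker node, using the host-device CNI plugin with static IPAM and the address 10.0.3.30/27 from the mapping table in Task 7.2. The plugin choice and the object name sriov-vnic-1 are assumptions; also note that, to keep the example self-contained, the CNI configuration is embedded in the object here, whereas the CNI config file method keeps the same JSON in a file on the worker node. Create one definition per worker node, adjusting the name and address:

    cat <<EOF | kubectl apply -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: sriov-vnic-1
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "type": "host-device",
          "device": "ens5",
          "ipam": {
            "type": "static",
            "addresses": [
              { "address": "10.0.3.30/27" }
            ]
          }
        }'
    EOF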

Task 7.2: Create Pods with the NetworkAttachmentDefinition Attached

In this task, we will tie the NetworkAttachmentDefinitions to an actual container or pod.

In the following table, we have created a mapping of which pod we want to host on which worker node.

Worker Node (Primary) IP    ens5 IP Address    NetworkAttachmentDefinition Name    Pod Name    Finished
10.0.112.134                10.0.3.30/27       sriov-vnic-1                        testpod1    YES
10.0.66.97                  10.0.3.15/27       sriov-vnic-2                        testpod2    YES
10.0.73.242                 10.0.3.14/27       sriov-vnic-3                        testpod3    YES
10.0.89.50                  10.0.3.16/27       sriov-vnic-4                        testpod4    YES
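A hedged sketch of the first pod, requesting the sriov-vnic-1 attachment through the Multus annotation. The container image and command are illustrative placeholders, and the scheduling aspect is handled in Task 7.3:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: testpod1
      annotations:
        k8s.v1.cni.cncf.io/networks: sriov-vnic-1
    spec:
      containers:
      - name: testpod1
        image: busybox
        command: ["sleep", "864000"]
    EOF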

Task 7.3: Create Pods with Node Affinity

By default, Kubernetes decides on which worker node a pod is placed. In this example that is not acceptable, because a NetworkAttachmentDefinition is bound to an IP address, the IP address is bound to a VNIC, and the VNIC is bound to a specific worker node. So when we attach a NetworkAttachmentDefinition to a pod, we need to make sure the pod ends up on the matching worker node.

If we do not, a pod may end up on a different worker node than the one where its IP address is available, and the pod will not be able to communicate using the SR-IOV enabled interface.
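A hedged sketch of testpod1 pinned to the worker node 10.0.112.134 with node affinity. The kubernetes.io/hostname label value is an assumption (on OKE the node name is typically the private IP address); confirm it with kubectl get nodes --show-labels:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: testpod1
      annotations:
        k8s.v1.cni.cncf.io/networks: sriov-vnic-1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - "10.0.112.134"
      containers:
      - name: testpod1
        image: busybox
        command: ["sleep", "864000"]
    EOF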

Task 7.4: Verify the IP Address on the Test Pods
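The additional Multus-managed interface shows up inside each pod as net1; a quick check from the operator host:

    # Verify the net1 interface and its IP address inside each test pod.
    for POD in testpod1 testpod2 testpod3 testpod4; do
      echo "--- $POD ---"
      kubectl exec "$POD" -- ip addr show net1
    done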

Task 7.5: Verify the IP Address on the Worker Nodes
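On the worker nodes themselves, the state of ens5 can be checked over SSH. Note that with a host-device style attachment, as sketched in Task 7.1, the interface is moved into the pod's network namespace while the pod is running, so it may no longer be listed on the host; this behavior depends on the CNI plugin that is used:

    # Check whether ens5 is still visible in the host network namespace.
    ssh opc@10.0.112.134 "ip addr show ens5 || echo 'ens5 is not in the host namespace (in use by a pod)'"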

Task 8: Perform Ping Tests Between Multiple Pods

Now that all pods have an IP address from the OCI subnet to which the SR-IOV enabled VNICs are attached, we can run some ping tests to verify that network connectivity is working properly.

Note: In this example, we are using testpod1 to ping the net1 IP addresses of all the other test pods.
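A hedged sketch of the ping tests, run from the operator host and assuming the container image in the test pods provides the ping utility; the target addresses are the net1 addresses of testpod2, testpod3, and testpod4 from the mapping table in Task 7.2:

    # Ping the net1 addresses of the other test pods from testpod1.
    for IP in 10.0.3.15 10.0.3.14 10.0.3.16; do
      kubectl exec testpod1 -- ping -c 3 "$IP"
    done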

Task 9: (Optional) Deploy Pods with Multiple Interfaces

So far, we have prepared only one VNIC (which happens to support SR-IOV) and moved it into a pod. We have done this for four different test pods.

Now what if we want to add or move more VNICs into a particular pod? You have to repeat the steps from the previous tasks for each additional interface.

In this task, you will find an example in which we create an additional subnet and VNIC, assign an IP address, create a NetworkAttachmentDefinition, and add it to the pod creation YAML file for testpod1.
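Assuming a second NetworkAttachmentDefinition has been created for the additional VNIC (the name sriov-vnic-1b below is purely hypothetical), the extra interface is requested by listing both definitions in the Multus annotation; each additional attachment appears in the pod as the next netX interface (net1, net2, and so on):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: testpod1
      annotations:
        # Comma-separated list: one entry per additional interface.
        k8s.v1.cni.cncf.io/networks: sriov-vnic-1, sriov-vnic-1b
    spec:
      containers:
      - name: testpod1
        image: busybox
        command: ["sleep", "864000"]
    EOF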

Task 10: Remove All Pod Deployments and NetworkAttachmentDefinitions

If you want to start over or want to clean up the containers with the NetworkAttachmentDefinitions, follow these steps:
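A sketch of the cleanup, run from the operator host (net-attach-def is the short name of the NetworkAttachmentDefinition custom resource):

    # Delete the test pods.
    kubectl delete pod testpod1 testpod2 testpod3 testpod4

    # Delete the NetworkAttachmentDefinitions.
    kubectl delete net-attach-def sriov-vnic-1 sriov-vnic-2 sriov-vnic-3 sriov-vnic-4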

Acknowledgments

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.