Note:
- This tutorial requires access to Oracle Cloud. To sign up for a free account, see Get started with Oracle Cloud Infrastructure Free Tier.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Deploy Container Apps with SR-IOV Enabled Network Interfaces on OKE Using the Multus CNI Plugin
Introduction
In this tutorial, we will explore how to deploy containerized applications on virtual instance worker nodes within Oracle Cloud Infrastructure Kubernetes Engine (OKE), leveraging advanced networking capabilities. Specifically, we will enable Single Root I/O Virtualization (SR-IOV) for container network interfaces and configure the Multus CNI plugin to enable multi-homed networking for your containers.
By combining SR-IOV with Multus, you can achieve high-performance, low-latency networking for specialized workloads such as AI, Machine Learning, and real-time data processing. This tutorial will provide step-by-step instructions to configure your OKE environment, deploy worker nodes with SR-IOV enabled interfaces, and use Multus CNI to manage multiple network interfaces in your pods. Whether you are aiming for high-speed packet processing or need to fine-tune your Kubernetes networking, this tutorial will equip you with the tools and knowledge to get started.
Note:
- At the time of publishing this tutorial, the SR-IOV CNI plugin cannot be used together with the Multus CNI plugin by pods or containers on a virtual instance that is part of an OKE cluster.
- In this tutorial, we will show you how to use an SR-IOV enabled interface inside a pod running on a virtual instance that is part of an OKE cluster. We do this by moving the Virtual Network Interface Card (VNIC) that is on the virtual instance into the pod with the help of the Multus CNI plugin (the SR-IOV CNI plugin is not used at all).
- The SR-IOV CNI plugin is supported on a Bare Metal Instance that is part of an OKE cluster together with the Multus CNI plugin. This is out of scope for this tutorial.
Objectives
- Deploy container apps on virtual instance worker nodes within OKE with SR-IOV enabled network interfaces using Multus CNI plugin.
Task 1: Deploy OKE with a Bastion, Operator, Three VM Worker Nodes and the Flannel CNI Plugin
Ensure that OKE is deployed with the following setup:
- Bastion
- Operator
- 3 VM Worker Nodes
- Flannel CNI Plugin
This setup is detailed in the tutorial here: Deploy a Kubernetes Cluster with Terraform using Oracle Cloud Infrastructure Kubernetes Engine.
The following image shows a visual overview of the components we will work with throughout this tutorial.
Task 2: Enable SR-IOV (Hardware-Assisted) Networking on Each Worker Node
Note: The following steps need to be performed on all the worker nodes that are part of the OKE cluster.
The following image shows a visual overview of our worker nodes inside the OKE cluster that we will work with throughout this tutorial.
Enable SR-IOV on the Instance
- Log in with SSH to the instance or worker node.
  - Run the `lspci` command to verify which network driver is currently used on all the VNICs.
  - Note that the Virtio SCSI network driver is used.
- Go to the Instance Details page in the OCI Console.
- Scroll down.
- Note that the NIC attachment type is now PARAVIRTUALIZED.
- Go to the Instance Details page in the OCI Console.
- Click More Actions.
- Click Edit.
- Click Show advanced options.
- Click Launch options and select Hardware-assisted (SR-IOV) networking as Networking type.
- Click Save changes.
- Click Reboot instance to confirm the instance reboot.
- Note that the instance status has changed to STOPPING.
- Note that the instance status has changed to STARTING.
- Note that the instance status has changed to RUNNING.
- Scroll down.
- Note that the NIC attachment type is now VFIO.
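If you want to double-check the change from inside the operating system as well, you can compare the `lspci` output before and after the reboot. This is only a quick sanity check; the exact device names depend on the shape and image:

```
# Before enabling SR-IOV networking, the VNICs are listed as Virtio network devices.
lspci | grep -i ethernet

# After the reboot with Hardware-assisted (SR-IOV) networking, the same command
# should list Mellanox ConnectX Virtual Function devices instead.
lspci | grep -i ethernet
```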
The following image shows a visual overview of what we have configured so far.
Task 3: Create a New Subnet for the SR-IOV Enabled VNICs
We will create a dedicated subnet that our SR-IOV enabled interfaces will use.
Task 3.1: Create a Security List
We are already using security lists for the other subnets, but we also need a dedicated security list for the newly created SR-IOV subnet.
- Go to the OCI Console.
- Navigate to Virtual Cloud Networks.
- Click the existing VCN.
- Click Security Lists.
- Click Create Security List.
- For the Ingress Rule 1, enter the following information.
  - Enter a Name.
  - Select CIDR as Source Type.
  - Enter `0.0.0.0/0` as Source CIDR.
  - Select All Protocols as IP Protocol.
- Scroll down.
- For the Egress Rule 1, enter the following information.
  - Select CIDR as Destination Type.
  - Enter `0.0.0.0/0` as Destination CIDR.
  - Select All Protocols as IP Protocol.
- Click Create Security List.
- Note that the new security list is created.
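If you prefer to script this step, the same security list can also be created with the OCI CLI. The following is only a sketch; the display name is arbitrary and the compartment and VCN OCIDs are placeholders that you must replace with your own values:

```
# Placeholders: replace with the OCIDs of your compartment and VCN.
COMP_OCID="ocid1.compartment.oc1..xxxxx"
VCN_OCID="ocid1.vcn.oc1..xxxxx"

oci network security-list create \
  --compartment-id "$COMP_OCID" \
  --vcn-id "$VCN_OCID" \
  --display-name "sriov-security-list" \
  --ingress-security-rules '[{"protocol": "all", "source": "0.0.0.0/0"}]' \
  --egress-security-rules '[{"protocol": "all", "destination": "0.0.0.0/0"}]'
```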
Task 3.2: Create a Subnet
- Go to the Virtual Cloud Network Details page.
- Click Subnets.
- Note the existing subnets that are already created for the OKE environment.
- Click Create Subnet.
- Enter a Name.
- Enter an IPv4 CIDR Block.
- Scroll down.
- Select Private Subnet.
- Scroll down.
- Select Default DHCP Options for DHCP Options.
- Select the Security List created in Task 3.1.
- Click Create Subnet.
- Note that the new subnet is created.
Note:
- The subnet itself does not have any SR-IOV enabled technical components.
- In this tutorial, we are using a standard OCI subnet to allow the transport of traffic using the SR-IOV technology.
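If you want to create this subnet with the OCI CLI instead of the OCI Console, a sketch along the following lines should work. The CIDR matches the SR-IOV subnet used later in this tutorial; the OCIDs are placeholders for your own compartment, VCN, security list (from Task 3.1), and DHCP options:

```
# Placeholders: replace the OCIDs with the values from your environment.
oci network subnet create \
  --compartment-id "$COMP_OCID" \
  --vcn-id "$VCN_OCID" \
  --display-name "sriov-subnet" \
  --cidr-block "10.0.3.0/27" \
  --prohibit-public-ip-on-vnic true \
  --security-list-ids '["ocid1.securitylist.oc1..xxxxx"]' \
  --dhcp-options-id "ocid1.dhcpoptions.oc1..xxxxx"
```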
The following image shows a visual overview of what we have configured so far.
Task 4: Add a Second VNIC Attachment
The following image shows a visual overview of how the worker nodes have a single VNIC that is connected to the worker nodes subnet before we add a second VNIC.
Before we add a second VNIC attachment to the worker nodes, create a Network Security Group.
Task 4.1: Create a Network Security Group (NSG)
We are already using NSGs for the other VNICs, but we also need a dedicated NSG for the new VNIC that we will add to each existing virtual instance that is part of the OKE cluster and acts as a Kubernetes worker node. This new VNIC is the interface on which SR-IOV is enabled.
- Go to the Virtual Cloud Network Details page.
- Navigate to Network Security Groups.
- Click Create Network Security Group.
- Add the following rules.
  - Ingress:
    - Source Type: Select CIDR.
    - Source: Enter `0.0.0.0/0`.
    - Destination: Leave the destination blank.
    - Protocol: Allow all protocols.
  - Egress:
    - Destination Type: Select CIDR.
    - Source: Leave the source blank.
    - Destination: Enter `0.0.0.0/0`.
    - Protocol: Allow all protocols.
- Note that the NSG is created. We will apply it to the new (secondary) VNIC that we will create (on each worker node in the OKE cluster).
Task 4.2: Add the VNIC
- Navigate to each virtual worker node instance and add a second VNIC to each worker node.
- Navigate to each virtual worker node instance and click Attached VNICs.
- Note that there is already an existing VNIC.
- Click Create VNIC to add a second VNIC.
- Enter a Name.
- Select the VCN.
- Select the Subnet created in Task 3.2.
- Select Use network security groups to control traffic.
- Select the NSG created in Task 4.1.
- Scroll down.
- Select Automatically assign private IPv4 address.
- Click Save changes.
- Note that the second VNIC is created and attached to the virtual worker node instance and also attached to our subnet.
- Log in with SSH to the instance or worker node.
  - Run the `lspci` command to verify which network driver is currently used on all the VNICs.
  - Note that the Mellanox Technologies ConnectX Family mlx5Gen Virtual Function network driver is used.

  The Mellanox Technologies ConnectX Family mlx5Gen Virtual Function network driver is the Virtual Function (VF) driver that is used by SR-IOV. So the VNIC is enabled for SR-IOV with a VF.

The following image shows a visual overview of what we have configured so far.
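If you prefer to attach the secondary VNIC from the command line rather than the OCI Console, an OCI CLI sketch like the one below can be run once per worker node (the OCIDs are placeholders for the worker node instance, the subnet from Task 3.2, and the NSG from Task 4.1):

```
# Placeholders: run once per worker node with its own instance OCID.
oci compute instance attach-vnic \
  --instance-id "ocid1.instance.oc1..xxxxx" \
  --subnet-id "ocid1.subnet.oc1..xxxxx" \
  --nsg-ids '["ocid1.networksecuritygroup.oc1..xxxxx"]'
```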
Task 5: Assign an IP Address to the New Second VNIC with a Default Gateway
Now that the second VNIC has been created in Task 4 and attached, we need to assign an IP address to it. When you add a second interface to an instance you can assign it to the same subnet as the first interface, or you can pick a new subnet.
DHCP is not enabled for the second interface so the IP address needs to be assigned manually.
There are different methods of assigning the IP address for the second interface.
- Method 1: Use Oracle Cloud Infrastructure Command Line Interface (OCI CLI) (`oci-utils` package) to assign an IP address to the second interface of an OCI Compute instance using the `oci-network-config` command.
- Method 2: Use OCI CLI (`oci-utils` package) to assign an IP address to the second interface of an OCI Compute instance using the `ocid` daemon.
- Method 3: Use the OCI_Multi_VNIC_Setup script.
- Method 4: Create the interface configuration file manually for the new VNIC in the `/etc/sysconfig/network-scripts/` folder.

For all worker nodes, we have assigned an IP address to the secondary VNIC (`ens5`) using Method 3. For more information about assigning an IP address to the second VNIC, see Assign an IP Address to a Second Interface on an Oracle Linux Instance.
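For reference, if you choose Method 4 instead, a minimal interface configuration file for `ens5` could look like the sketch below. The address shown is the one used on the first worker node in this tutorial; adjust it per node, and note that the exact steps to activate the interface depend on the Oracle Linux image and its NetworkManager configuration:

```
# Method 4 sketch (not used in this tutorial): create the config file for ens5.
sudo tee /etc/sysconfig/network-scripts/ifcfg-ens5 <<'EOF'
DEVICE=ens5
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.3.30
PREFIX=27
DEFROUTE=no
EOF

# Let NetworkManager pick up the new profile and bring the interface up.
sudo nmcli connection reload
sudo nmcli device connect ens5
```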
Once the IP address has been assigned to a VNIC, we need to verify that the IP addresses on the second VNICs are configured correctly. We can also verify that SR-IOV is enabled on all node pool worker nodes.
Our OKE cluster consists of:
Node Pool | Worker Nodes |
---|---|
NP1 | 1 x Worker Node |
NP2 | 3 x Worker Nodes |
We will verify all worker nodes in all node pools.
Task 5.1: Verify all Nodes in Node Pool 1 (`np1`)
- In the OKE cluster, click Nodes.
- Click the first node pool (`np1`).
- Click the worker node that is part of this node pool.
  - Note that the NIC attachment type is VFIO (this means that SR-IOV is enabled for this virtual instance worker node).
  - Note that the second VNIC is created and attached for this worker node.
Task 5.2: Verify all Nodes in Node Pool 2 (`np2`)
- Click the nodes one by one and start the verification.
  - Note that the NIC attachment type is VFIO (this means that SR-IOV is enabled for this virtual instance worker node).
  - Note that the second VNIC is created and attached for this worker node.
- Go to the node pool 2 (`np2`) summary page and click the second worker node in the node pool.
  - Note that the NIC attachment type is VFIO (this means that SR-IOV is enabled for this virtual instance worker node).
  - Note that the second VNIC is created and attached for this worker node.
- Go to the node pool 2 (`np2`) summary page and click the third worker node in the node pool.
  - Note that the NIC attachment type is VFIO (this means that SR-IOV is enabled for this virtual instance worker node).
  - Note that the second VNIC is created and attached for this worker node.
- Log in using SSH to the Kubernetes Operator.

  Run the `kubectl get nodes`
command to retrieve a list and IP addresses of all the worker nodes.[opc@o-sqrtga ~]$ kubectl get nodes NAME STATUS ROLES AGE VERSION 10.0.112.134 Ready node 68d v1.30.1 10.0.66.97 Ready node 68d v1.30.1 10.0.73.242 Ready node 68d v1.30.1 10.0.89.50 Ready node 68d v1.30.1 [opc@o-sqrtga ~]$
- To make it easy to SSH into all the worker nodes, we have created the following table.

  Worker Node Name | ens3 IP | SSH command |
  ---|---|---|
  cwe6rhf2leq-n7nwqge3zba-slsqe2jfnpa-0 | 10.0.112.134 | `ssh -i id_rsa opc@10.0.112.134` |
  cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-1 | 10.0.66.97 | `ssh -i id_rsa opc@10.0.66.97` |
  cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-0 | 10.0.73.242 | `ssh -i id_rsa opc@10.0.73.242` |
  cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-2 | 10.0.89.50 | `ssh -i id_rsa opc@10.0.89.50` |

  - Before you can SSH to all the virtual worker nodes, make sure you have the correct private key available.
  - Run the `ssh -i <private key> opc@<ip-address>` command to SSH into all the worker nodes.
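Instead of logging in to each worker node one by one, you can also run a small loop from the operator host to check the `ens5` interface on all four worker nodes in one go (a convenience sketch that reuses the IP addresses and key file from the table above):

```
# Check the secondary VNIC (ens5) on every worker node from the operator host.
for ip in 10.0.112.134 10.0.66.97 10.0.73.242 10.0.89.50; do
  echo "=== worker node $ip ==="
  ssh -i id_rsa opc@"$ip" "ip -brief addr show ens5"
done
```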
- Run the `ip a` command on the `cwe6rhf2leq-n7nwqge3zba-slsqe2jfnpa-0` worker node.

  Note that when the IP address has been successfully configured, the `ens5`
(second VNIC) has an IP address in the range of the subnet created in Task 3.2 for the SR-IOV interfaces.[opc@oke-cwe6rhf2leq-n7nwqge3zba-slsqe2jfnpa-0 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:59:58 brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.112.134/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 85530sec preferred_lft 85530sec inet6 fe80::17ff:fe00:5958/64 scope link valid_lft forever preferred_lft forever 3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:d4:a2 brd ff:ff:ff:ff:ff:ff altname enp0s5 inet 10.0.3.30/27 brd 10.0.3.31 scope global noprefixroute ens5 valid_lft forever preferred_lft forever inet6 fe80::8106:c09e:61fa:1d2a/64 scope link noprefixroute valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether 3a:b7:fb:e6:2e:cf brd ff:ff:ff:ff:ff:ff inet 10.244.1.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::38b7:fbff:fee6:2ecf/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether de:35:f5:51:85:5d brd ff:ff:ff:ff:ff:ff inet 10.244.1.1/25 brd 10.244.1.127 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::dc35:f5ff:fe51:855d/64 scope link valid_lft forever preferred_lft forever 6: veth1cdaac17@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 76:e2:92:ad:37:40 brd ff:ff:ff:ff:ff:ff link-netns 1935ba66-34cc-4468-8abb-f66add46d08b inet6 fe80::74e2:92ff:fead:3740/64 scope link valid_lft forever preferred_lft forever 7: vethbcd391ab@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 9a:9a:0f:d6:48:17 brd ff:ff:ff:ff:ff:ff link-netns 3f02d5fd-596e-4b9f-8a35-35f2f946901b inet6 fe80::989a:fff:fed6:4817/64 scope link valid_lft forever preferred_lft forever 8: vethc15fa705@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 3a:d2:c8:66:d1:0b brd ff:ff:ff:ff:ff:ff link-netns f581b7f2-cfa0-46eb-b0aa-37001a11116d inet6 fe80::38d2:c8ff:fe66:d10b/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-n7nwqge3zba-slsqe2jfnpa-0 ~]$
- Run the `ip a` command on the `cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-1` worker node.

  Note that when the IP address has been successfully configured, the `ens5`
(second VNIC) has an IP address in the range of the subnet created in Task 3.2 for the SR-IOV interfaces.[opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-1 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:16:ca brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.66.97/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 85859sec preferred_lft 85859sec inet6 fe80::17ff:fe00:16ca/64 scope link valid_lft forever preferred_lft forever 3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:7b:4f brd ff:ff:ff:ff:ff:ff altname enp0s5 inet 10.0.3.15/27 brd 10.0.3.31 scope global noprefixroute ens5 valid_lft forever preferred_lft forever inet6 fe80::87eb:4195:cacf:a6da/64 scope link noprefixroute valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether 02:92:e7:f5:8e:29 brd ff:ff:ff:ff:ff:ff inet 10.244.1.128/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::92:e7ff:fef5:8e29/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether f6:08:06:e2:bc:9d brd ff:ff:ff:ff:ff:ff inet 10.244.1.129/25 brd 10.244.1.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::f408:6ff:fee2:bc9d/64 scope link valid_lft forever preferred_lft forever 6: veth5db97290@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether c2:e0:b5:7e:ce:ed brd ff:ff:ff:ff:ff:ff link-netns 3682b5cd-9039-4931-aecc-b50d46dabaf1 inet6 fe80::c0e0:b5ff:fe7e:ceed/64 scope link valid_lft forever preferred_lft forever 7: veth6fd818a5@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 3e:a8:7d:84:d3:b9 brd ff:ff:ff:ff:ff:ff link-netns 08141d6b-5ec0-4f3f-a312-a00b30f82ade inet6 fe80::3ca8:7dff:fe84:d3b9/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-1 ~]$
- Run the `ip a` command on the `cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-0` worker node.

  Note that when the IP address has been successfully configured, the `ens5`
(second VNIC) has an IP address in the range of the subnet created in Task 3.2 for the SR-IOV interfaces.[opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-0 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:49:9c brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.73.242/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 86085sec preferred_lft 86085sec inet6 fe80::17ff:fe00:499c/64 scope link valid_lft forever preferred_lft forever 3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:b7:51 brd ff:ff:ff:ff:ff:ff altname enp0s5 inet 10.0.3.14/27 brd 10.0.3.31 scope global noprefixroute ens5 valid_lft forever preferred_lft forever inet6 fe80::bc31:aa09:4e05:9ab7/64 scope link noprefixroute valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether 9a:c7:1b:30:e8:9a brd ff:ff:ff:ff:ff:ff inet 10.244.0.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::98c7:1bff:fe30:e89a/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether 2a:2b:cb:fb:15:82 brd ff:ff:ff:ff:ff:ff inet 10.244.0.1/25 brd 10.244.0.127 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::282b:cbff:fefb:1582/64 scope link valid_lft forever preferred_lft forever 6: veth06343057@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether ca:70:83:13:dc:ed brd ff:ff:ff:ff:ff:ff link-netns fb0f181f-7c3a-4fb6-8bf0-5a65d39486c1 inet6 fe80::c870:83ff:fe13:dced/64 scope link valid_lft forever preferred_lft forever 7: veth8af17165@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether c6:a0:be:75:9b:d9 brd ff:ff:ff:ff:ff:ff link-netns c07346e6-33f5-4e80-ba5e-74f7487b5daa inet6 fe80::c4a0:beff:fe75:9bd9/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-0 ~]$
- Run the `ip a` command on the `cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-2` worker node.

  Note that when the IP address has been successfully configured, the `ens5`
(second VNIC) has an IP address in the range of the subnet created in Task 3.2 for the SR-IOV interfaces.[opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-2 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:ac:7c brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.89.50/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 86327sec preferred_lft 86327sec inet6 fe80::17ff:fe00:ac7c/64 scope link valid_lft forever preferred_lft forever 3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:4c:0d brd ff:ff:ff:ff:ff:ff altname enp0s5 inet 10.0.3.16/27 brd 10.0.3.31 scope global noprefixroute ens5 valid_lft forever preferred_lft forever inet6 fe80::91eb:344e:829e:35de/64 scope link noprefixroute valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether aa:31:9f:d0:b3:3c brd ff:ff:ff:ff:ff:ff inet 10.244.0.128/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::a831:9fff:fed0:b33c/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether b2:0d:c0:de:02:61 brd ff:ff:ff:ff:ff:ff inet 10.244.0.129/25 brd 10.244.0.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::b00d:c0ff:fede:261/64 scope link valid_lft forever preferred_lft forever 6: vethb37e8987@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 7a:93:1d:2a:33:8c brd ff:ff:ff:ff:ff:ff link-netns ab3262ca-4a80-4b02-a39f-4209d003f148 inet6 fe80::7893:1dff:fe2a:338c/64 scope link valid_lft forever preferred_lft forever 7: veth73a651ce@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether ae:e4:97:89:ba:6e brd ff:ff:ff:ff:ff:ff link-netns 9307bfbd-8165-46bf-916c-e1180b6cbd83 inet6 fe80::ace4:97ff:fe89:ba6e/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-2 ~]$
- Now that we have verified that all the IP addresses are configured on the second VNIC (`ens5`), we can create the following table with the information.

  ens3 IP | ens3 GW | ens5 IP | ens5 GW |
  ---|---|---|---|
  10.0.112.134 | 10.0.64.1 | 10.0.3.30/27 | 10.0.3.1 |
  10.0.66.97 | 10.0.64.1 | 10.0.3.15/27 | 10.0.3.1 |
  10.0.73.242 | 10.0.64.1 | 10.0.3.14/27 | 10.0.3.1 |
  10.0.89.50 | 10.0.64.1 | 10.0.3.16/27 | 10.0.3.1 |

- The following image shows a visual overview of what we have configured so far.
Task 6: Install a Meta-Plugin CNI (Multus CNI) on the Worker Node
Multus CNI is a Kubernetes Container Network Interface (CNI) plugin that allows you to attach multiple network interfaces to a pod.
How Multus CNI Works
-
Acts as a meta-plugin: Multus does not provide networking itself but instead calls other CNI plugins.
-
Creates multiple network interfaces: Each pod can have more than one network interface by combining multiple CNI plugins (for example, Flannel, Calico, SR-IOV).
-
Uses a configuration file: The primary network is set up using the default CNI, and additional networks are attached based on a Custom Resource Definition (CRD).
Why We Need Multus CNI
-
Multiple Network Isolation: Useful for workloads that require separate management and data planes.
-
High-Performance Networking: Enables direct hardware access using SR-IOV or DPDK.
-
Multi-Homing for Pods: Supports Network Function Virtualization (NFV) and Telco use cases where multiple network interfaces are essential.
Task 6.1: Install Multus CNI using the Thin Install Method
- SSH into the Kubernetes Operator.
- Run the following command to clone the Multus Git repository.
git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni
- Run the following command to install the Multus daemon set using the thin install method.
kubectl apply -f deployments/multus-daemonset.yml && cd ..
- What the Multus Daemon Set Does:
  - Starts a Multus daemon set; this runs a pod on each node, which places a Multus binary on each node in `/opt/cni/bin`.
  - Reads the lexicographically (alphabetically) first configuration file in `/etc/cni/net.d` and creates a new configuration file for Multus on each node as `/etc/cni/net.d/00-multus.conf`. This configuration is auto-generated and is based on the default network configuration (which is assumed to be the alphabetically first configuration).
  - Creates a directory named `/etc/cni/net.d/multus.d` on each node with authentication information for Multus to access the Kubernetes API.
Task 6.2: Validate the Multus Installation
- Run the following command (on the Kubernetes Operator) to validate if the Multus daemon set is installed on all worker nodes.
kubectl get pods --all-namespaces | grep -i multus
- You can also verify if the Multus daemon set is installed on the worker nodes themselves.
  - Run the `ssh -i id_rsa opc@10.0.112.134` command to SSH into the worker node.
  - Run the `cd /etc/cni/net.d/` command to change the directory.
  - Run the `ls -l` command to list the directory contents.
  - Note that the `00-multus.conf` file and the `multus.d` directory are listed.
  - Run the `sudo more 00-multus.conf` command to view the content of the `00-multus.conf` file.
  - Note the contents of the `00-multus.conf` file.
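As an additional check on any worker node, you can confirm that both the Multus binary and its generated configuration are in place, for example:

```
# The Multus binary placed on the node by the daemon set.
ls -l /opt/cni/bin/ | grep -i multus

# The auto-generated 00-multus.conf and the multus.d directory.
ls -l /etc/cni/net.d/
```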
Task 7: Attach Network Interfaces to Pods
In this task, we will map or attach a container interface to this VNIC.
To attach additional interfaces to pods, we need a configuration for the interface to be attached.
- This is encapsulated in a custom resource of kind `NetworkAttachmentDefinition`.
- This configuration is essentially a CNI configuration packaged as a custom resource.
There are several CNI plugins that can be used alongside Multus to accomplish this. For more information, see Plugins Overview.
- In the approach described here, the goal is to provide an SR-IOV Virtual Function (VF) exclusively for a single pod, so that the pod can take advantage of the capabilities without interference or any layers in between.
- To grant a pod exclusive access to the VF, we can leverage the host-device plugin that enables you to move the interface into the pod’s namespace so that it has exclusive access to it. For more information, see host-device.
The following example shows `NetworkAttachmentDefinition` objects that configure the secondary `ens5` interface that was added to the nodes.
- The `ipam` plugin configuration determines how IP addresses are managed for these interfaces.
- In this example, as we want to use the same IP addresses that were assigned to the secondary interfaces by OCI, we use the static `ipam` configuration with the appropriate routes.
- The `ipam` configuration also supports other methods like `host-local` or `dhcp` for more flexible configurations.
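For comparison, the sketch below shows roughly what the same attachment would look like with the `host-local` ipam plugin instead of static addressing. This variant is not used in this tutorial, the name and address range are assumptions, and addresses picked by `host-local` are not known to OCI, so treat it purely as an illustration of the alternative `ipam` methods:

```
# Illustrative only: host-device attachment with host-local ipam (assumed values).
cat <<'EOF' | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-vnic-hostlocal-example
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "ens5",
    "ipam": {
      "type": "host-local",
      "subnet": "10.0.3.0/27",
      "rangeStart": "10.0.3.20",
      "rangeEnd": "10.0.3.25",
      "routes": [ { "dst": "10.0.3.0/27" } ]
    }
  }'
EOF
```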
Task 7.1: Create Network Attachment Definition
The `NetworkAttachmentDefinition` is used to set up the network attachment, for example, a secondary interface for the pod.
There are two ways to configure the `NetworkAttachmentDefinition`:
- `NetworkAttachmentDefinition` with JSON CNI config.
- `NetworkAttachmentDefinition` with CNI config file.
Note: In this tutorial, we are going to use the method using the CNI config file.
We have 4 x worker nodes and each worker node has a second VNIC that we will map to an interface on a container (pod).
- Run the following commands to create the CNI config files for all worker nodes and corresponding VNICs.

  ens3 | ens5 | name | network | nano command |
  ---|---|---|---|---|
  10.0.112.134 | 10.0.3.30/27 | sriov-vnic-1 | 10.0.3.0/27 | `sudo nano sriov-vnic-1.yaml` |
  10.0.66.97 | 10.0.3.15/27 | sriov-vnic-2 | 10.0.3.0/27 | `sudo nano sriov-vnic-2.yaml` |
  10.0.73.242 | 10.0.3.14/27 | sriov-vnic-3 | 10.0.3.0/27 | `sudo nano sriov-vnic-3.yaml` |
  10.0.89.50 | 10.0.3.16/27 | sriov-vnic-4 | 10.0.3.0/27 | `sudo nano sriov-vnic-4.yaml` |
- Perform the following steps on the Kubernetes Operator. Create a new YAML file for the first worker node using the `sudo nano sriov-vnic-1.yaml` command.
  - Make sure the name is unique and descriptive. In this example, we are using `sriov-vnic-1`.
  - Use the IP address of the second adapter you just added (`ens5`).
  - Make sure the `dst` network is also correct; this is the same as the subnet created in Task 3.2.
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: sriov-vnic-1 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.30/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Create a new YAML file for the second worker node using the `sudo nano sriov-vnic-2.yaml` command.

  sriov-vnic-2.yaml
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: sriov-vnic-2 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.15/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Create a new YAML file for the third worker node using the `sudo nano sriov-vnic-3.yaml` command.

  sriov-vnic-3.yaml
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: sriov-vnic-3 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.14/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Create a new YAML file for the fourth worker node using the `sudo nano sriov-vnic-4.yaml` command.

  sriov-vnic-4.yaml
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: sriov-vnic-4 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.16/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Apply the `NetworkAttachmentDefinition` to the worker nodes.
  - Run the `kubectl apply -f sriov-vnic-1.yaml` command for the first node.
  - Run the `kubectl apply -f sriov-vnic-2.yaml` command for the second node.
  - Run the `kubectl apply -f sriov-vnic-3.yaml` command for the third node.
  - Run the `kubectl apply -f sriov-vnic-4.yaml` command for the fourth node.
If the
NetworkAttachmentDefinition
is correctly applied you will see something similar to the output.[opc@o-sqrtga ~]$ kubectl apply -f sriov-vnic-1.yaml networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-1 created [opc@o-sqrtga ~]$ kubectl apply -f sriov-vnic-2.yaml networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-2 created [opc@o-sqrtga ~]$ kubectl apply -f sriov-vnic-3.yaml networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-3 created [opc@o-sqrtga ~]$ kubectl apply -f sriov-vnic-4.yaml networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-4 created [opc@o-sqrtga ~]$
- Run the `kubectl get network-attachment-definitions.k8s.cni.cncf.io` command to verify if the `NetworkAttachmentDefinitions`
are applied correctly.[opc@o-sqrtga ~]$ kubectl get network-attachment-definitions.k8s.cni.cncf.io NAME AGE sriov-vnic-1 96s sriov-vnic-2 72s sriov-vnic-3 60s sriov-vnic-4 48s [opc@o-sqrtga ~]$
- Run the `kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-1 -o yaml` command to get the applied `NetworkAttachmentDefinition`
for the first worker node.[opc@o-sqrtga ~]$ kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-1 -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"sriov-vnic-1","namespace":"default"},"spec":{"config":"{ \"cniVersion\": \"0.3.1\", \"type\": \"host-device\", \"device\": \"ens5\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"10.0.3.30/27\", \"gateway\": \"0.0.0.0\" } ], \"routes\": [ { \"dst\": \"10.0.3.0/27\", \"gw\": \"0.0.0.0\" } ] } }"}} creationTimestamp: "2024-12-18T09:03:55Z" generation: 1 name: sriov-vnic-1 namespace: default resourceVersion: "22915413" uid: 2d529130-2147-4f49-9d78-4e5aa12aea62 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.30/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Run the `kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-2 -o yaml` command to get the applied `NetworkAttachmentDefinition`
for the second worker node.[opc@o-sqrtga ~]$ kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-2 -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"sriov-vnic-2","namespace":"default"},"spec":{"config":"{ \"cniVersion\": \"0.3.1\", \"type\": \"host-device\", \"device\": \"ens5\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"10.0.3.15/27\", \"gateway\": \"0.0.0.0\" } ], \"routes\": [ { \"dst\": \"10.0.3.0/27\", \"gw\": \"0.0.0.0\" } ] } }"}} creationTimestamp: "2024-12-18T09:04:19Z" generation: 1 name: sriov-vnic-2 namespace: default resourceVersion: "22915508" uid: aec5740c-a093-43d3-bd6a-2209ee9e5c96 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.15/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Run the `kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-3 -o yaml` command to get the applied `NetworkAttachmentDefinition`
for the third worker node.[opc@o-sqrtga ~]$ kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-3 -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"sriov-vnic-3","namespace":"default"},"spec":{"config":"{ \"cniVersion\": \"0.3.1\", \"type\": \"host-device\", \"device\": \"ens5\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"10.0.3.14/27\", \"gateway\": \"0.0.0.0\" } ], \"routes\": [ { \"dst\": \"10.0.3.0/27\", \"gw\": \"0.0.0.0\" } ] } }"}} creationTimestamp: "2024-12-18T09:04:31Z" generation: 1 name: sriov-vnic-3 namespace: default resourceVersion: "22915558" uid: 91b970ff-328f-4b6b-a0d8-7cdd07d7bca3 spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.14/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }'
- Run the `kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-4 -o yaml` command to get the applied `NetworkAttachmentDefinition`
for the fourth worker node.[opc@o-sqrtga ~]$ kubectl get networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-4 -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"sriov-vnic-4","namespace":"default"},"spec":{"config":"{ \"cniVersion\": \"0.3.1\", \"type\": \"host-device\", \"device\": \"ens5\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"10.0.3.16/27\", \"gateway\": \"0.0.0.0\" } ], \"routes\": [ { \"dst\": \"10.0.3.0/27\", \"gw\": \"0.0.0.0\" } ] } }"}} creationTimestamp: "2024-12-18T09:04:43Z" generation: 1 name: sriov-vnic-4 namespace: default resourceVersion: "22915607" uid: 383fd3f0-7e5e-46ec-9997-29cbc9a2dcea spec: config: '{ "cniVersion": "0.3.1", "type": "host-device", "device": "ens5", "ipam": { "type": "static", "addresses": [ { "address": "10.0.3.16/27", "gateway": "0.0.0.0" } ], "routes": [ { "dst": "10.0.3.0/27", "gw": "0.0.0.0" } ] } }' [opc@o-sqrtga ~]$
- The following image shows a visual overview of what we have configured so far.
Task 7.2: Create Pods with the `NetworkAttachmentDefinition` Attached
In this task, we will tie the `NetworkAttachmentDefinitions` to an actual container or pod.
In the following table, we have created a mapping on what pod we want to host on what worker node.
Worker (Primary) Node IP | ens5 | name | pod name | finished |
---|---|---|---|---|
10.0.112.134 | 10.0.3.30/27 | sriov-vnic-1 | testpod1 | YES |
10.0.66.97 | 10.0.3.15/27 | sriov-vnic-2 | testpod2 | YES |
10.0.73.242 | 10.0.3.14/27 | sriov-vnic-3 | testpod3 | YES |
10.0.89.50 | 10.0.3.16/27 | sriov-vnic-4 | testpod4 | YES |
Task 7.3: Create Pods with Node Affinity
By default, Kubernetes decides on which worker node a pod is placed. In this example, that is not acceptable because a `NetworkAttachmentDefinition` is bound to an IP address, this IP address is bound to a VNIC, and this VNIC is bound to a specific worker node. So we need to make sure that the pods we create end up on the worker node we want, and this is required when we attach the `NetworkAttachmentDefinition` to a pod.
If we do not do this, a pod may end up on a worker node other than the one where the IP address is available for the pod. As a result, the pod will not be able to communicate using the SR-IOV enabled interface.
- Get all the available nodes using the `kubectl get nodes`
command.[opc@o-sqrtga ~]$ kubectl get nodes NAME STATUS ROLES AGE VERSION 10.0.112.134 Ready node 68d v1.30.1 10.0.66.97 Ready node 68d v1.30.1 10.0.73.242 Ready node 68d v1.30.1 10.0.89.50 Ready node 68d v1.30.1 [opc@o-sqrtga ~]$
- Assign a label to worker node 1 using the `kubectl label node 10.0.112.134 node_type=testpod1` command.
- Assign a label to worker node 2 using the `kubectl label node 10.0.66.97 node_type=testpod2` command.
- Assign a label to worker node 3 using the `kubectl label node 10.0.73.242 node_type=testpod3` command.
- Assign a label to worker node 4 using the `kubectl label node 10.0.89.50 node_type=testpod4` command.
[opc@o-sqrtga ~]$ kubectl label node 10.0.112.134 node_type=testpod1 node/10.0.112.134 labeled [opc@o-sqrtga ~]$ kubectl label node 10.0.73.242 node_type=testpod3 node/10.0.73.242 labeled [opc@o-sqrtga ~]$ kubectl label node 10.0.66.97 node_type=testpod2 node/10.0.66.97 labeled [opc@o-sqrtga ~]$ kubectl label node 10.0.89.50 node_type=testpod4 node/10.0.89.50 labeled [opc@o-sqrtga ~]$
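To confirm that each label ended up on the intended worker node, you can list the nodes together with the new label column, for example:

```
# Show the node_type label that was applied to each worker node.
kubectl get nodes -L node_type
```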
- The following image shows a visual overview of what we have configured so far.
-
Create a YAML file for
testpod1
using thesudo nano testpod1-v2.yaml
command.-
Note the
annotations
section where we bind theNetworkAttachmentDefinition
that we created earlier (sriov-vnic-1
) to this test pod. -
Note the
spec:affinity:nodeAffinity
section where we bind the test pod to a specific worker node with the labeltestpod1
.
sudo nano testpod1-v2.yaml apiVersion: v1 kind: Pod metadata: name: testpod1 annotations: k8s.v1.cni.cncf.io/networks: sriov-vnic-1 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node_type operator: In values: - testpod1 containers: - name: appcntr1 image: centos/tools imagePullPolicy: IfNotPresent command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 300000; done;" ]
-
-
Create a YAML file for
testpod2
using thesudo nano testpod2-v2.yaml
command.-
Note the
annotations
section where we bind theNetworkAttachmentDefinition
that we created earlier (sriov-vnic-2
) to this test pod. -
Note the
spec:affinity:nodeAffinity
section where we bind the test pod to a specific worker node with the labeltestpod2
.
sudo nano testpod2-v2.yaml apiVersion: v1 kind: Pod metadata: name: testpod2 annotations: k8s.v1.cni.cncf.io/networks: sriov-vnic-2 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node_type operator: In values: - testpod2 containers: - name: appcntr1 image: centos/tools imagePullPolicy: IfNotPresent command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 300000; done;" ]
-
-
Create a YAML file for
testpod3
using thesudo nano testpod3-v2.yaml
command.-
Note the
annotations
section where we bind theNetworkAttachmentDefinition
that we created earlier (sriov-vnic-3
) to this test pod. -
Note the
spec:affinity:nodeAffinity
section where we bind the test pod to a specific worker node with the labeltestpod3
.
sudo nano testpod3-v2.yaml apiVersion: v1 kind: Pod metadata: name: testpod3 annotations: k8s.v1.cni.cncf.io/networks: sriov-vnic-3 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node_type operator: In values: - testpod3 containers: - name: appcntr1 image: centos/tools imagePullPolicy: IfNotPresent command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 300000; done;" ]
-
-
Create a YAML file for the
testpod4
using thesudo nano testpod4-v2.yaml
command.-
Note the
annotations
section where we bind theNetworkAttachmentDefinition
that we created earlier (sriov-vnic-4
) to this test pod. -
Note the
spec:affinity:nodeAffinity
section where we bind the test pod to a specific worker node with the labeltestpod4
sudo nano testpod4-v2.yaml apiVersion: v1 kind: Pod metadata: name: testpod4 annotations: k8s.v1.cni.cncf.io/networks: sriov-vnic-4 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node_type operator: In values: - testpod4 containers: - name: appcntr1 image: centos/tools imagePullPolicy: IfNotPresent command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 300000; done;" ]
- Create the
testpod1
by applying the YAML file using thekubectl apply -f testpod1-v2.yaml
command. - Create the
testpod2
by applying the YAML file using thekubectl apply -f testpod2-v2.yaml
command. - Create the
testpod3
by applying the YAML file using thekubectl apply -f testpod3-v2.yaml
command. - Create the
testpod4
by applying the YAML file using thekubectl apply -f testpod4-v2.yaml
command.
[opc@o-sqrtga ~]$ kubectl apply -f testpod1-v2.yaml pod/testpod1 created [opc@o-sqrtga ~]$ kubectl apply -f testpod2-v2.yaml pod/testpod2 created [opc@o-sqrtga ~]$ kubectl apply -f testpod3-v2.yaml pod/testpod3 created [opc@o-sqrtga ~]$ kubectl apply -f testpod4-v2.yaml pod/testpod4 created [opc@o-sqrtga ~]$
-
-
Verify if the test pods are created using the
kubectl get pod
command. Note that all the test pods are created and have theRunning
STATUS.[opc@o-sqrtga ~]$ kubectl get pod NAME READY STATUS RESTARTS AGE my-nginx-576c6b7b6-6fn6f 1/1 Running 3 40d my-nginx-576c6b7b6-k9wwc 1/1 Running 3 40d my-nginx-576c6b7b6-z8xkd 1/1 Running 6 40d mysql-6d7f5d5944-dlm78 1/1 Running 12 35d testpod1 1/1 Running 0 2m29s testpod2 1/1 Running 0 2m17s testpod3 1/1 Running 0 2m5s testpod4 1/1 Running 0 111s [opc@o-sqrtga ~]$
-
Verify if
testpod1
is running on worker node10.0.112.134
with the labeltestpod1
using thekubectl get pod testpod1 -o wide
command.[opc@o-sqrtga ~]$ kubectl get pod testpod1 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES testpod1 1/1 Running 0 3m41s 10.244.1.6 10.0.112.134 <none> <none>
-
Verify if
testpod2
is running on worker node10.0.66.97
with the labeltestpod2
using thekubectl get pod testpod2 -o wide
command.[opc@o-sqrtga ~]$ kubectl get pod testpod2 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES testpod2 1/1 Running 0 3m33s 10.244.1.133 10.0.66.97 <none> <none>
-
Verify if
testpod3
is running on worker node10.0.73.242
with the labeltestpod3
using thekubectl get pod testpod3 -o wide
command.[opc@o-sqrtga ~]$ kubectl get pod testpod3 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES testpod3 1/1 Running 0 3m25s 10.244.0.5 10.0.73.242 <none> <none>
-
Verify if
testpod4
is running on worker node10.0.89.50
with the labeltestpod4
using thekubectl get pod testpod4 -o wide
command.[opc@o-sqrtga ~]$ kubectl get pod testpod4 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES testpod4 1/1 Running 0 3m22s 10.244.0.133 10.0.89.50 <none> <none>
-
The following image shows a visual overview of what we have configured so far.
Task 7.4: Verify the IP Address on the Test Pods
-
Verify the IP address of
testpod1
for thenet1
pod interface using thekubectl exec -it testpod1 -- ip addr show
command.Note that the IP address of the
net1
interface is10.0.3.30/27
.[opc@o-sqrtga ~]$ kubectl exec -it testpod1 -- ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default link/ether ca:28:e4:5f:66:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.244.0.132/25 brd 10.244.0.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::c828:e4ff:fe5f:66c4/64 scope link valid_lft forever preferred_lft forever 3: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:4c:0d brd ff:ff:ff:ff:ff:ff inet 10.0.3.30/27 brd 10.0.3.31 scope global net1 valid_lft forever preferred_lft forever inet6 fe80::17ff:fe00:4c0d/64 scope link valid_lft forever preferred_lft forever
-
Verify the IP address of
testpod2
for thenet1
pod interface using thekubectl exec -it testpod2 -- ip addr show
command.Note that the IP address of the
net1
interface is10.0.3.15/27
.[opc@o-sqrtga ~]$ kubectl exec -it testpod2 -- ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default link/ether da:ce:84:22:fc:29 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.244.1.132/25 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::d8ce:84ff:fe22:fc29/64 scope link valid_lft forever preferred_lft forever 3: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:7b:4f brd ff:ff:ff:ff:ff:ff inet 10.0.3.15/27 brd 10.0.3.31 scope global net1 valid_lft forever preferred_lft forever inet6 fe80::17ff:fe00:7b4f/64 scope link valid_lft forever preferred_lft forever
-
Verify the IP address of
testpod3
for thenet1
pod interface using thekubectl exec -it testpod3 -- ip addr show
command.Note that the IP address of the
net1
interface is10.0.3.14/27
.[opc@o-sqrtga ~]$ kubectl exec -it testpod3 -- ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default link/ether de:f2:81:10:04:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.244.0.4/25 brd 10.244.0.127 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::dcf2:81ff:fe10:4b2/64 scope link valid_lft forever preferred_lft forever 3: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:b7:51 brd ff:ff:ff:ff:ff:ff inet 10.0.3.14/27 brd 10.0.3.31 scope global net1 valid_lft forever preferred_lft forever inet6 fe80::17ff:fe00:b751/64 scope link valid_lft forever preferred_lft forever
-
Verify the IP address of
testpod4
for thenet1
pod interface using thekubectl exec -it testpod4 -- ip addr show
command.Note that the IP address of the
net1
interface is10.0.3.16/27
.[opc@o-sqrtga ~]$ kubectl exec -it testpod4 -- ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default link/ether ea:63:eb:57:9c:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.244.1.5/25 brd 10.244.1.127 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::e863:ebff:fe57:9c99/64 scope link valid_lft forever preferred_lft forever 3: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:d4:a2 brd ff:ff:ff:ff:ff:ff inet 10.0.3.16/27 brd 10.0.3.31 scope global net1 valid_lft forever preferred_lft forever inet6 fe80::17ff:fe00:d4a2/64 scope link valid_lft forever preferred_lft forever [opc@o-sqrtga ~]$
-
The following table shows an overview of all the IP addresses of the
net1
interfaces for all the test pods.pod name net1 IP testpod1 10.0.3.30/27 testpod2 10.0.3.15/27 testpod3 10.0.3.14/27 testpod4 10.0.3.16/27 Note: These IP addresses are in the same range as the OCI subnet that was created in Task 3 to place our SR-IOV enabled VNICs.
Task 7.5: Verify the IP Address on the Worker Nodes
-
Now that the test pods
net1
interfaces have an IP address, note that this IP address used to be the IP address of theens5
interface on the worker nodes. This IP address is now moved from theens5
worker node interface to thenet1
test pod interface. -
SSH into the first worker node using the
ssh -i id_rsa opc@10.0.112.134
command.-
Get the IP addresses of all the interfaces using the
ip a
command. -
Note that the
ens5
interface has been removed from the worker node.
[opc@o-sqrtga ~]$ ssh -i id_rsa opc@10.0.112.134 Activate the web console with: systemctl enable --now cockpit.socket Last login: Wed Dec 18 20:42:19 2024 from 10.0.0.11 [opc@oke-cwe6rhf2leq-n7nwqge3zba-slsqe2jfnpa-0 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:59:58 brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.112.134/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 82180sec preferred_lft 82180sec inet6 fe80::17ff:fe00:5958/64 scope link valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether 3a:b7:fb:e6:2e:cf brd ff:ff:ff:ff:ff:ff inet 10.244.1.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::38b7:fbff:fee6:2ecf/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether de:35:f5:51:85:5d brd ff:ff:ff:ff:ff:ff inet 10.244.1.1/25 brd 10.244.1.127 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::dc35:f5ff:fe51:855d/64 scope link valid_lft forever preferred_lft forever 6: veth1cdaac17@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 76:e2:92:ad:37:40 brd ff:ff:ff:ff:ff:ff link-netns 1935ba66-34cc-4468-8abb-f66add46d08b inet6 fe80::74e2:92ff:fead:3740/64 scope link valid_lft forever preferred_lft forever 7: vethbcd391ab@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 9a:9a:0f:d6:48:17 brd ff:ff:ff:ff:ff:ff link-netns 3f02d5fd-596e-4b9f-8a35-35f2f946901b inet6 fe80::989a:fff:fed6:4817/64 scope link valid_lft forever preferred_lft forever 8: vethc15fa705@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 3a:d2:c8:66:d1:0b brd ff:ff:ff:ff:ff:ff link-netns f581b7f2-cfa0-46eb-b0aa-37001a11116d inet6 fe80::38d2:c8ff:fe66:d10b/64 scope link valid_lft forever preferred_lft forever 9: vethc663e496@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 7e:0b:bb:5d:49:8c brd ff:ff:ff:ff:ff:ff link-netns d3993135-0f2f-4b06-b16d-31d659f8230d inet6 fe80::7c0b:bbff:fe5d:498c/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-n7nwqge3zba-slsqe2jfnpa-0 ~]$ exit logout Connection to 10.0.112.134 closed.
-
-
SSH into the second worker node using the
ssh -i id_rsa opc@10.0.66.97
command.-
Get the IP addresses of all the interfaces using the
ip a
command. -
Note that the
ens5
interface has been removed from the worker node.
[opc@o-sqrtga ~]$ ssh -i id_rsa opc@10.0.66.97 Activate the web console with: systemctl enable --now cockpit.socket Last login: Wed Dec 18 19:47:55 2024 from 10.0.0.11 [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-1 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:16:ca brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.66.97/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 82502sec preferred_lft 82502sec inet6 fe80::17ff:fe00:16ca/64 scope link valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether 02:92:e7:f5:8e:29 brd ff:ff:ff:ff:ff:ff inet 10.244.1.128/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::92:e7ff:fef5:8e29/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether f6:08:06:e2:bc:9d brd ff:ff:ff:ff:ff:ff inet 10.244.1.129/25 brd 10.244.1.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::f408:6ff:fee2:bc9d/64 scope link valid_lft forever preferred_lft forever 6: veth5db97290@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether c2:e0:b5:7e:ce:ed brd ff:ff:ff:ff:ff:ff link-netns 3682b5cd-9039-4931-aecc-b50d46dabaf1 inet6 fe80::c0e0:b5ff:fe7e:ceed/64 scope link valid_lft forever preferred_lft forever 7: veth6fd818a5@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 3e:a8:7d:84:d3:b9 brd ff:ff:ff:ff:ff:ff link-netns 08141d6b-5ec0-4f3f-a312-a00b30f82ade inet6 fe80::3ca8:7dff:fe84:d3b9/64 scope link valid_lft forever preferred_lft forever 8: veth26f6b686@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether ae:bf:36:ca:52:cf brd ff:ff:ff:ff:ff:ff link-netns f533714a-69be-4b20-be30-30ba71494f7a inet6 fe80::acbf:36ff:feca:52cf/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-1 ~]$ exit logout Connection to 10.0.66.97 closed.
-
-
SSH into the third worker node using the
ssh -i id_rsa opc@10.0.73.242
command.-
Get the IP addresses of all the interfaces using the
ip a
command. -
Note that the
ens5
interface has been removed from the worker node.
[opc@o-sqrtga ~]$ ssh -i id_rsa opc@10.0.73.242 Activate the web console with: systemctl enable --now cockpit.socket Last login: Wed Dec 18 20:08:31 2024 from 10.0.0.11 [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-0 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:49:9c brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.73.242/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 82733sec preferred_lft 82733sec inet6 fe80::17ff:fe00:499c/64 scope link valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether 9a:c7:1b:30:e8:9a brd ff:ff:ff:ff:ff:ff inet 10.244.0.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::98c7:1bff:fe30:e89a/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether 2a:2b:cb:fb:15:82 brd ff:ff:ff:ff:ff:ff inet 10.244.0.1/25 brd 10.244.0.127 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::282b:cbff:fefb:1582/64 scope link valid_lft forever preferred_lft forever 6: veth06343057@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether ca:70:83:13:dc:ed brd ff:ff:ff:ff:ff:ff link-netns fb0f181f-7c3a-4fb6-8bf0-5a65d39486c1 inet6 fe80::c870:83ff:fe13:dced/64 scope link valid_lft forever preferred_lft forever 7: veth8af17165@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether c6:a0:be:75:9b:d9 brd ff:ff:ff:ff:ff:ff link-netns c07346e6-33f5-4e80-ba5e-74f7487b5daa inet6 fe80::c4a0:beff:fe75:9bd9/64 scope link valid_lft forever preferred_lft forever 8: veth170b8774@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether e6:c9:42:60:8f:e7 brd ff:ff:ff:ff:ff:ff link-netns edef0c81-0477-43fa-b260-6b81626e7d87 inet6 fe80::e4c9:42ff:fe60:8fe7/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-0 ~]$ exit logout Connection to 10.0.73.242 closed.
- SSH into the fourth worker node using the `ssh -i id_rsa opc@10.0.89.50` command.
- Get the IP addresses of all the interfaces using the `ip a` command.
- Note that the `ens5` interface has been removed from the worker node.
[opc@o-sqrtga ~]$ ssh -i id_rsa opc@10.0.89.50 Activate the web console with: systemctl enable --now cockpit.socket Last login: Wed Dec 18 19:49:27 2024 from 10.0.0.11 [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-2 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 02:00:17:00:ac:7c brd ff:ff:ff:ff:ff:ff altname enp0s3 inet 10.0.89.50/18 brd 10.0.127.255 scope global dynamic ens3 valid_lft 82976sec preferred_lft 82976sec inet6 fe80::17ff:fe00:ac7c/64 scope link valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UNKNOWN group default link/ether aa:31:9f:d0:b3:3c brd ff:ff:ff:ff:ff:ff inet 10.244.0.128/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::a831:9fff:fed0:b33c/64 scope link valid_lft forever preferred_lft forever 5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000 link/ether b2:0d:c0:de:02:61 brd ff:ff:ff:ff:ff:ff inet 10.244.0.129/25 brd 10.244.0.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::b00d:c0ff:fede:261/64 scope link valid_lft forever preferred_lft forever 6: vethb37e8987@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether 7a:93:1d:2a:33:8c brd ff:ff:ff:ff:ff:ff link-netns ab3262ca-4a80-4b02-a39f-4209d003f148 inet6 fe80::7893:1dff:fe2a:338c/64 scope link valid_lft forever preferred_lft forever 7: veth73a651ce@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether ae:e4:97:89:ba:6e brd ff:ff:ff:ff:ff:ff link-netns 9307bfbd-8165-46bf-916c-e1180b6cbd83 inet6 fe80::ace4:97ff:fe89:ba6e/64 scope link valid_lft forever preferred_lft forever 8: veth42c3a604@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master cni0 state UP group default link/ether f2:e6:ba:72:8f:b2 brd ff:ff:ff:ff:ff:ff link-netns a7eb561c-8182-49b2-9e43-7c52845620a7 inet6 fe80::f0e6:baff:fe72:8fb2/64 scope link valid_lft forever preferred_lft forever [opc@oke-cwe6rhf2leq-ng556bw23ra-slsqe2jfnpa-2 ~]$ exit logout Connection to 10.0.89.50 closed. [opc@o-sqrtga ~]$
The following image shows a visual overview of what we have configured so far.
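Before moving on to the ping tests, you can optionally confirm in one pass that the SR-IOV VNIC is no longer visible on any worker node. This is a minimal sketch only, run from the Kubernetes Operator, using the worker node IP addresses from this tutorial:

```bash
# Check that ens5 is gone from each worker node now that its VNIC
# lives inside a pod (worker node IPs as used in this tutorial).
for node in 10.0.112.134 10.0.66.97 10.0.73.242 10.0.89.50; do
  echo "--- ${node} ---"
  ssh -i id_rsa opc@"${node}" "ip link show ens5 || echo 'ens5 not present on this node'"
done
```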
Task 8: Perform Ping Tests Between Multiple Pods
Now that all pods have an IP address from the OCI subnet to which the SR-IOV enabled VNICs are attached, we can run some ping tests to verify that network connectivity is working properly.
- The following table provides the commands to connect to the test pods from the Kubernetes Operator. We need this to `exec` into each pod to either perform a ping test or to look at the route table. A quick way to cross-check the `net1` addresses in this table is sketched after it.

  | Worker node IP (ens3) | Pod SR-IOV IP (net1) | NetworkAttachmentDefinition | Pod name | Command |
  |---|---|---|---|---|
  | 10.0.112.134 | 10.0.3.30/27 | sriov-vnic-1 | testpod1 | `kubectl exec -it testpod1 -- /bin/bash` |
  | 10.0.66.97 | 10.0.3.15/27 | sriov-vnic-2 | testpod2 | `kubectl exec -it testpod2 -- /bin/bash` |
  | 10.0.73.242 | 10.0.3.14/27 | sriov-vnic-3 | testpod3 | `kubectl exec -it testpod3 -- /bin/bash` |
  | 10.0.89.50 | 10.0.3.16/27 | sriov-vnic-4 | testpod4 | `kubectl exec -it testpod4 -- /bin/bash` |
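The following is a minimal sketch, assuming the four test pods from the earlier tasks are named `testpod1` through `testpod4`, that prints each pod's `net1` address so you can compare it with the table above:

```bash
# Print the net1 address of every test pod for comparison with the table above.
for pod in testpod1 testpod2 testpod3 testpod4; do
  addr=$(kubectl exec "${pod}" -- ip -4 -o addr show dev net1 | awk '{print $4}')
  echo "${pod}: ${addr}"
done
```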
- Run the `kubectl exec -it testpod1 -- route -n` command to look at the routing table directly from the Kubernetes Operator terminal for `testpod1`.

  Note that the routing table of `testpod1` has its default gateway on `eth0` and routes for the `10.0.3.0/27` subnet on `net1`, which is our SR-IOV enabled interface.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod1 -- route -n
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
  0.0.0.0         10.244.0.129    0.0.0.0         UG    0      0        0 eth0
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.244.0.0      10.244.0.129    255.255.0.0     UG    0      0        0 eth0
  10.244.0.128    0.0.0.0         255.255.255.128 U     0      0        0 eth0
- Run the `kubectl exec -it testpod2 -- route -n` command to look at the routing table directly from the Kubernetes Operator terminal for `testpod2`.

  Note that the routing table of `testpod2` has its default gateway on `eth0` and routes for the `10.0.3.0/27` subnet on `net1`, which is our SR-IOV enabled interface.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod2 -- route -n
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
  0.0.0.0         10.244.1.129    0.0.0.0         UG    0      0        0 eth0
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.244.0.0      10.244.1.129    255.255.0.0     UG    0      0        0 eth0
  10.244.1.128    0.0.0.0         255.255.255.128 U     0      0        0 eth0
- Run the `kubectl exec -it testpod3 -- route -n` command to look at the routing table directly from the Kubernetes Operator terminal for `testpod3`.

  Note that the routing table of `testpod3` has its default gateway on `eth0` and routes for the `10.0.3.0/27` subnet on `net1`, which is our SR-IOV enabled interface.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod3 -- route -n
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
  0.0.0.0         10.244.0.1      0.0.0.0         UG    0      0        0 eth0
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.244.0.0      0.0.0.0         255.255.255.128 U     0      0        0 eth0
  10.244.0.0      10.244.0.1      255.255.0.0     UG    0      0        0 eth0
- Run the `kubectl exec -it testpod4 -- route -n` command to look at the routing table directly from the Kubernetes Operator terminal for `testpod4`.

  Note that the routing table of `testpod4` has its default gateway on `eth0` and routes for the `10.0.3.0/27` subnet on `net1`, which is our SR-IOV enabled interface. A quick way to confirm which interface a pod selects for a given destination is sketched after this step.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod4 -- route -n
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
  0.0.0.0         10.244.1.1      0.0.0.0         UG    0      0        0 eth0
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.0.3.0        0.0.0.0         255.255.255.224 U     0      0        0 net1
  10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
  10.244.1.0      0.0.0.0         255.255.255.128 U     0      0        0 eth0
  [opc@o-sqrtga ~]$
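If you want to confirm which interface a pod will use to reach another pod's SR-IOV address, `ip route get` shows the selected egress interface. A minimal sketch, using the `testpod2` address from the table above:

```bash
# Ask the kernel inside testpod1 which interface it would use to reach
# testpod2's net1 address (10.0.3.15); the output should include "dev net1".
kubectl exec -it testpod1 -- ip route get 10.0.3.15
```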
- To perform the ping tests from the Kubernetes Operator directly inside the test pods, we use the `ping` command. The following table provides the ping commands for all test pods. Each set of commands pings from a particular test pod to the `net1` IP address of every test pod, including its own.

  | Source pod name | Command |
  |---|---|
  | testpod1 | `kubectl exec -it testpod1 -- ping -I net1 10.0.3.30 -c 4` |
  |  | `kubectl exec -it testpod1 -- ping -I net1 10.0.3.15 -c 4` |
  |  | `kubectl exec -it testpod1 -- ping -I net1 10.0.3.14 -c 4` |
  |  | `kubectl exec -it testpod1 -- ping -I net1 10.0.3.16 -c 4` |
  | testpod2 | `kubectl exec -it testpod2 -- ping -I net1 10.0.3.15 -c 4` |
  |  | `kubectl exec -it testpod2 -- ping -I net1 10.0.3.30 -c 4` |
  |  | `kubectl exec -it testpod2 -- ping -I net1 10.0.3.14 -c 4` |
  |  | `kubectl exec -it testpod2 -- ping -I net1 10.0.3.16 -c 4` |
  | testpod3 | `kubectl exec -it testpod3 -- ping -I net1 10.0.3.14 -c 4` |
  |  | `kubectl exec -it testpod3 -- ping -I net1 10.0.3.30 -c 4` |
  |  | `kubectl exec -it testpod3 -- ping -I net1 10.0.3.15 -c 4` |
  |  | `kubectl exec -it testpod3 -- ping -I net1 10.0.3.16 -c 4` |
  | testpod4 | `kubectl exec -it testpod4 -- ping -I net1 10.0.3.16 -c 4` |
  |  | `kubectl exec -it testpod4 -- ping -I net1 10.0.3.30 -c 4` |
  |  | `kubectl exec -it testpod4 -- ping -I net1 10.0.3.15 -c 4` |
  |  | `kubectl exec -it testpod4 -- ping -I net1 10.0.3.14 -c 4` |

  Note: In this example, we are using `testpod1` to ping the `net1` IP addresses of all the other test pods. A scripted version of the full mesh test follows this table.
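Instead of running each command from the table by hand, you can loop over every source pod and destination address. A minimal sketch, using the pod names and `net1` addresses listed above:

```bash
# Full-mesh ping test over the SR-IOV (net1) interfaces.
# Pod names and net1 addresses are the ones used in this tutorial.
pods=(testpod1 testpod2 testpod3 testpod4)
ips=(10.0.3.30 10.0.3.15 10.0.3.14 10.0.3.16)
for pod in "${pods[@]}"; do
  for ip in "${ips[@]}"; do
    echo "=== ${pod} -> ${ip} ==="
    kubectl exec "${pod}" -- ping -I net1 -c 4 "${ip}"
  done
done
```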
- Run the `kubectl exec -it testpod1 -- ping -I net1 10.0.3.30 -c 4` command to ping from `testpod1` to `testpod1` (its own `net1` address).

  Note that the ping shows `4 packets transmitted, 4 received, 0% packet loss`, so the ping is successful.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod1 -- ping -I net1 10.0.3.30 -c 4
  PING 10.0.3.30 (10.0.3.30) from 10.0.3.30 net1: 56(84) bytes of data.
  64 bytes from 10.0.3.30: icmp_seq=1 ttl=64 time=0.043 ms
  64 bytes from 10.0.3.30: icmp_seq=2 ttl=64 time=0.024 ms
  64 bytes from 10.0.3.30: icmp_seq=3 ttl=64 time=0.037 ms
  64 bytes from 10.0.3.30: icmp_seq=4 ttl=64 time=0.026 ms

  --- 10.0.3.30 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3087ms
  rtt min/avg/max/mdev = 0.024/0.032/0.043/0.009 ms
- Run the `kubectl exec -it testpod1 -- ping -I net1 10.0.3.15 -c 4` command to ping from `testpod1` to `testpod2`.

  Note that the ping shows `4 packets transmitted, 4 received, 0% packet loss`, so the ping is successful.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod1 -- ping -I net1 10.0.3.15 -c 4
  PING 10.0.3.15 (10.0.3.15) from 10.0.3.30 net1: 56(84) bytes of data.
  64 bytes from 10.0.3.15: icmp_seq=1 ttl=64 time=0.383 ms
  64 bytes from 10.0.3.15: icmp_seq=2 ttl=64 time=0.113 ms
  64 bytes from 10.0.3.15: icmp_seq=3 ttl=64 time=0.114 ms
  64 bytes from 10.0.3.15: icmp_seq=4 ttl=64 time=0.101 ms

  --- 10.0.3.15 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3109ms
  rtt min/avg/max/mdev = 0.101/0.177/0.383/0.119 ms
- Run the `kubectl exec -it testpod1 -- ping -I net1 10.0.3.14 -c 4` command to ping from `testpod1` to `testpod3`.

  Note that the ping shows `4 packets transmitted, 4 received, 0% packet loss`, so the ping is successful.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod1 -- ping -I net1 10.0.3.14 -c 4
  PING 10.0.3.14 (10.0.3.14) from 10.0.3.30 net1: 56(84) bytes of data.
  64 bytes from 10.0.3.14: icmp_seq=1 ttl=64 time=0.399 ms
  64 bytes from 10.0.3.14: icmp_seq=2 ttl=64 time=0.100 ms
  64 bytes from 10.0.3.14: icmp_seq=3 ttl=64 time=0.130 ms
  64 bytes from 10.0.3.14: icmp_seq=4 ttl=64 time=0.124 ms

  --- 10.0.3.14 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3057ms
  rtt min/avg/max/mdev = 0.100/0.188/0.399/0.122 ms
- Run the `kubectl exec -it testpod1 -- ping -I net1 10.0.3.16 -c 4` command to ping from `testpod1` to `testpod4`.

  Note that the ping shows `4 packets transmitted, 4 received, 0% packet loss`, so the ping is successful.

  [opc@o-sqrtga ~]$ kubectl exec -it testpod1 -- ping -I net1 10.0.3.16 -c 4
  PING 10.0.3.16 (10.0.3.16) from 10.0.3.30 net1: 56(84) bytes of data.
  64 bytes from 10.0.3.16: icmp_seq=1 ttl=64 time=0.369 ms
  64 bytes from 10.0.3.16: icmp_seq=2 ttl=64 time=0.154 ms
  64 bytes from 10.0.3.16: icmp_seq=3 ttl=64 time=0.155 ms
  64 bytes from 10.0.3.16: icmp_seq=4 ttl=64 time=0.163 ms

  --- 10.0.3.16 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3110ms
  rtt min/avg/max/mdev = 0.154/0.210/0.369/0.092 ms
  [opc@o-sqrtga ~]$
Note: We have not included the ping outputs for the remaining test pods; they follow the same pattern.
The following image shows a visual overview of what we have configured so far.
Task 9: (Optional) Deploy Pods with Multiple Interfaces
So far, we have prepared only one VNIC (that happens to support SR-IOV) per worker node and moved that VNIC into a pod. We have done this for four different test pods.
If you want to add or move more VNICs into a particular pod, you have to repeat these steps:
- Create a new OCI subnet.
- Create a new VNIC and assign the IP address.
- Create a new `NetworkAttachmentDefinition`.
- Update the test pod YAML file so that its annotations reference the new `NetworkAttachmentDefinition`.
In this task, you will find an example where we create an additional subnet and VNIC, assign the IP address, create the `NetworkAttachmentDefinition`, and add it to the pod creation YAML file for `testpod1`. A CLI sketch of the subnet and VNIC creation follows this paragraph.
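The additional subnet and VNIC can be created in the OCI Console exactly as in the earlier tasks, or from the OCI CLI. The following is a minimal sketch only: all OCIDs are placeholders for your own environment, and the parameter names should be verified against `oci ... --help` for your CLI version before use.

```bash
# Create the additional subnet for the second SR-IOV VNIC (placeholders only).
oci network subnet create \
  --compartment-id ocid1.compartment.oc1..example \
  --vcn-id ocid1.vcn.oc1..example \
  --cidr-block 10.0.4.0/27

# Attach an additional VNIC (in the new subnet) to the worker node that
# hosts testpod1, using the address referenced later in the new
# NetworkAttachmentDefinition.
oci compute instance attach-vnic \
  --instance-id ocid1.instance.oc1..exampleworkernode \
  --subnet-id ocid1.subnet.oc1..examplenewsubnet \
  --private-ip 10.0.4.29
```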
- This is the `NetworkAttachmentDefinition` for a new interface `ens6` with the IP address `10.0.4.29/27` on the network `10.0.4.0/27`.

  Note that this is a different `NetworkAttachmentDefinition` from the one we used before, which was for the interface `ens5` with the IP address `10.0.3.30/27` on the network `10.0.3.0/27`. The command to apply this definition is sketched after this step.

  `sriov-vnic-2-new.yaml`:

  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: sriov-vnic-2-new
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "type": "host-device",
      "device": "ens6",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "10.0.4.29/27", "gateway": "0.0.0.0" }
        ],
        "routes": [
          { "dst": "10.0.4.0/27", "gw": "0.0.0.0" }
        ]
      }
    }'
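The new definition is applied the same way as the earlier ones. A minimal sketch, assuming the file is saved as `sriov-vnic-2-new.yaml` on the operator:

```bash
# Register the new NetworkAttachmentDefinition and confirm it exists.
kubectl apply -f sriov-vnic-2-new.yaml
kubectl get network-attachment-definitions.k8s.cni.cncf.io sriov-vnic-2-new
```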
- This is the (updated) YAML file for `testpod1`. Note the updated `annotations` section, where the new `NetworkAttachmentDefinition` `sriov-vnic-2-new` is referenced in addition to `sriov-vnic-1`. Because annotation keys must be unique, both networks are listed in a single, comma-separated `k8s.v1.cni.cncf.io/networks` annotation. Commands to recreate the pod and verify its interfaces follow this step.

  sudo nano testpod1-v3.yaml

  apiVersion: v1
  kind: Pod
  metadata:
    name: testpod1
    annotations:
      k8s.v1.cni.cncf.io/networks: sriov-vnic-1, sriov-vnic-2-new
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node_type
              operator: In
              values:
              - testpod1
    containers:
    - name: appcntr1
      image: centos/tools
      imagePullPolicy: IfNotPresent
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 300000; done;" ]
Task 10: Remove All Pod Deployments and NetworkAttachmentDefinitions
If you want to start over or clean up the test pods together with their `NetworkAttachmentDefinitions`, follow these steps:
- Get all the pods using the `kubectl get pod` command.

  [opc@o-sqrtga ~]$ kubectl get pod
  NAME                       READY   STATUS    RESTARTS   AGE
  my-nginx-576c6b7b6-6fn6f   1/1     Running   3          105d
  my-nginx-576c6b7b6-k9wwc   1/1     Running   3          105d
  my-nginx-576c6b7b6-z8xkd   1/1     Running   6          105d
  mysql-6d7f5d5944-dlm78     1/1     Running   12         100d
  testpod1                   1/1     Running   0          64d
  testpod2                   1/1     Running   0          64d
  testpod3                   1/1     Running   0          64d
  testpod4                   1/1     Running   0          64d
  [opc@o-sqrtga ~]$
- Delete the test pods using the following commands.

  kubectl delete -f testpod1-v2.yaml
  kubectl delete -f testpod2-v2.yaml
  kubectl delete -f testpod3-v2.yaml
  kubectl delete -f testpod4-v2.yaml
- Get all the `NetworkAttachmentDefinitions` using the `kubectl get network-attachment-definitions.k8s.cni.cncf.io` command.

  [opc@o-sqrtga ~]$ kubectl get network-attachment-definitions.k8s.cni.cncf.io
  NAME           AGE
  sriov-vnic-1   64d
  sriov-vnic-2   64d
  sriov-vnic-3   64d
  sriov-vnic-4   64d
  [opc@o-sqrtga ~]$
- Delete the `NetworkAttachmentDefinitions` with the following commands. A quick verification of the cleanup is sketched below.

  kubectl delete networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-1
  kubectl delete networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-2
  kubectl delete networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-3
  kubectl delete networkattachmentdefinition.k8s.cni.cncf.io/sriov-vnic-4
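To confirm the cleanup, you can list the remaining pods and `NetworkAttachmentDefinitions` once more; a minimal sketch:

```bash
# Neither command should list the test pods or the sriov-vnic-* definitions anymore.
kubectl get pod
kubectl get network-attachment-definitions.k8s.cni.cncf.io
```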
Related Links
Acknowledgments
- Author - Iwan Hoogendoorn (OCI Network Specialist)
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Deploy SR-IOV Enabled Network Interfaces Container Apps on OKE Using Multus CNI Plugin
G27785-01
March 2025
Copyright ©2025, Oracle and/or its affiliates.