A Appendix
This section contains additional topics that are referred to while performing some of the procedures in the document.
Artifact Acquisition and Hosting
Introduction
The CNE deployment containers require access to several resources that are downloaded from the internet. For cases where the target system is isolated from the internet, you can use the locally available repositories. These repositories require provisioning with the proper files and versions, and some of the cluster configurations need to be updated to allow the installation containers to locate these local repositories.
- Configuring YUM Repository is needed to hold a mirror of several OLX (for example, OL9) repositories, as well as the version of docker-ce required for the CNE's Kubernetes deployment.
- Configuring HTTP Repository is required to hold Kubernetes binaries and Helm charts.
- Configuring PIP Repository is required to allow CNE packages to be retrieved by the Bastion Hosts.
- Configuring Container Image Registry is required to host the container images used by CNE.
- A copy of the Oracle Linux ISO. See Downloading Oracle Linux for OS installation.
Downloading Oracle Linux
Note:
The 'X' in Oracle Linux X or OLX in this procedure indicates the latest version of Oracle Linux supported by CNE.
Download Oracle Linux X VM Image for OpenStack
Run this procedure to download an OLX VM image or template (QCOW2 format). Use this image to instantiate VMs with OLX as the guest OS.
- Open the page https://yum.oracle.com/oracle-linux-templates.html
- Under the section Downloads, click on template *.qcow for release X.X to download the image.
- Once the download is complete, verify that the sha256sum of the downloaded image matches the SHA256 checksum provided on the page.
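For example, the comparison can be done in one step (the checksum and image filename here are placeholders; use the values from the download page):
# sha256sum -c expects "<checksum><two spaces><filename>" on stdin and prints OK on a match
echo "<expected-sha256-from-page>  <downloaded-image>.qcow2" | sha256sum -c -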
Download Oracle Linux X for BareMetal or VMware
Perform the following procedure to download an OLX ISO. The ISO can then be used to install OLX as the host OS for BareMetal servers.
- Open the page https://yum.oracle.com/oracle-linux-isos.html
- Under the section Oracle Linux x86_64 ISOs, click Full ISO version of the image for release X.X to download the ISO image.
- Once the download is complete, verify the downloaded image by following the Verify Oracle Linux Downloads procedure.
Setting Up a Central Repository
- [Optional]: Installing certificates signed by any certificate authority
- Configuring HTTP repository
- Configuring YUM repository
- Configuring image repository
- Populating the configured repositories
Note:
This procedure is a suggested method to set up and configure the required central repositories (HTTP, YUM, Podman) for CNE. However, this is not the only way to get the required repositories running.
[Optional]: Installing Certificates Signed by Any Certificate Authority
Note:
You can skip this section if you are not using a certificate authority.
- Place the CA certificate into the trust list as root on the central repository and on any existing clients that need to access the HTTP and container registry services:
sudo cp ca_public.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
- Copy the server's private key and certificate to the following directories only on the central repository:
Note:
Modify the file names as appropriate.
cp ${HOSTNAME}_private.key /etc/pki/tls/private/
cp ${HOSTNAME}_signed_public.crt /etc/pki/tls/certs/
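To confirm that the CA was added to the system trust store, a quick check such as the following can be used (it verifies that the signed server certificate now chains to a trusted CA):
# prints 'OK' once the CA is present in the default trust store
openssl verify /etc/pki/tls/certs/${HOSTNAME}_signed_public.crt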
Configuring HTTP Repository
- Run the following command to install the HTTP server:
dnf install -y httpd
- Copy the required charts, binaries, scripts, and files into the /var/www/html/ path:
  - Copy the Helm charts to /var/www/html/occne/charts.
  - Copy the binaries (containerd and the binaries needed by Kubernetes) to /var/www/html/occne/binaries.
  - The CNE dependency retrieval scripts and manifests described in the Artifact Acquisition and Hosting section are used to populate the directories with the proper files obtained from the public internet.
  - Other required files for cluster installation (docker-registry certificate, MySQL install archives, OL9 ISO installation media) are copied under /var/www/html in known locations to make the automated testing of some of the installation steps easier.
- If you are using a certificate authority to create a signed certificate for HTTPS/TLS support, then configure httpd to use the key and certificate installed in the previous sub-section, as sketched below.
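A minimal sketch of such an httpd TLS configuration, assuming the mod_ssl package and the key and certificate file names used in the previous sub-section (the drop-in configuration file name is arbitrary):
dnf install -y mod_ssl
# write a TLS vhost; ${HOSTNAME} expands when the file is written
cat > /etc/httpd/conf.d/occne-repo-ssl.conf <<EOF
<VirtualHost *:443>
    ServerName ${HOSTNAME}
    DocumentRoot /var/www/html
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/${HOSTNAME}_signed_public.crt
    SSLCertificateKeyFile /etc/pki/tls/private/${HOSTNAME}_private.key
</VirtualHost>
EOF
systemctl enable --now httpd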
Configuring YUM Repository
A local mirror of the following OL9 channels is required:
- ol9_x86_64_developer_EPEL
- ol9_x86_64_developer
- ol9_x86_64_addons
- ol9_x86_64_UEKR8
- ol9_x86_64_appstream
- ol9_x86_64_baseos_latest
These channels are referenced by the .repo file that is placed on the Bastion Host.
Configuring Image Repository
CNE requires a central repository that is secured with encryption and a valid certificate.
- Run the following commands to install Podman:
dnf config-manager --enable ol9_appstream
dnf install -y podman
- If you need a proxy to pull images from the internet, set it in the environment using the http_proxy, https_proxy, and no_proxy standard environment variables for Podman usage. It is suggested to set these variables in the /etc/profile.d/proxy.sh file so that they apply to all user sessions.
- If you do not use a certificate authority to create a signed certificate, you can set up a self-signed certificate for the container registry.
The command line prompts for the details of the certificate. These details can also be passed as command-line arguments. Ensure that the common-name and subjectAltName match the hostname that other machines use to access the registry. For example, the sample parameters for the host winterfell are as follows:
  - Country=US
  - State=UT
  - Location=SLC
  - Organization=Oracle
  - Organizational-Unit=CGBU
  - Common-Name=winterfell
  - email=<blank>
The Common-Name and the subjectAltName (winterfell, in this example) must match the name that other systems use to reach this host.
mkdir -p /etc/opt/registry-certs/
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout /etc/opt/registry-certs/occne-reg.key \
  -x509 -days 3650 -out /etc/opt/registry-certs/occne-reg.crt \
  -subj "/C=US/ST=Utah/L=SLC/O=Oracle/OU=CGBU/CN=${HOSTNAME}" \
  -reqexts SAN -extensions SAN \
  -config <(cat /etc/pki/tls/openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:${HOSTNAME}"))
The following example provides the commands to renew a certificate that was already created (reusing the old key):
openssl req \
  -key /etc/opt/registry-certs/occne-reg.key \
  -x509 -days 3650 -out /etc/opt/registry-certs/occne-reg.crt \
  -subj "/C=US/ST=Utah/L=SLC/O=Oracle/OU=CGBU/CN=${HOSTNAME}" \
  -reqexts SAN -extensions SAN \
  -config <(cat /etc/pki/tls/openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:${HOSTNAME}"))
Place the renewed certificates on the target at the /var/occne/certificates/ and /etc/containers/certs.d/$SERVER:$PORT/ca.crt paths. Terminate and restart the container registry to load the new certificate.
Note:
In October 2020, the OL7 Oracle YUM repository began distributing a version of docker-engine that requires the Subject-Alternate-Name (SAN) extension to be utilized in any certificate that is used to verify container images. To utilize this extension to the x509 standard, modify the last two lines of the given commands to use the SAN extension and to provide a value for the new field. This extension is also required for Podman in OL8 and OL9.
- If you are using a self-signed certificate, then put the certificate in a location that lets local Podman (on the targets) know about the registry. The directory name must be in the registry-hostname:port format:
mkdir -p /etc/containers/certs.d/winterfell:5000
cp /etc/opt/registry-certs/occne-reg.crt /etc/containers/certs.d/winterfell\:5000/ca.crt
- Create a directory for the registry data. (This directory is auto-created; however, the SELinux permissions on the auto-created directory were found to be incorrect.)
mkdir /var/lib/registry
# change selinux mode for registry directory
# chcon -Rt svirt_sandbox_file_t /var/lib/registry
# or, our choice, turn off selinux entirely
setenforce 0
- Turn on IPv4 forwarding and set it to turn on again after reboot (otherwise, the container registry cannot be reached from outside this node):
sysctl net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
- Start the registry as a container using Podman. The parameters point the registry to the storage directory created in the previous step and to the private key and CA or self-signed certificate. This enables encryption and validation of the registry by clients.
Note:
Modify the following paths for the TLS certificate and key as appropriate.
# --network=host is not needed if net.ipv4.ip_forward is on
podman run -d \
  --restart=unless-stopped \
  --name registry \
  --network=host \
  -v /etc/opt/registry-certs:/certs \
  -v /var/lib/registry:/var/lib/registry \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/occne-reg.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/occne-reg.key \
  -e REGISTRY_STORAGE_DELETE_ENABLED=true \
  -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true \
  -p 5000:5000 \
  registry:2
- Validate that the newly created registry is up and running:
podman ps
Sample output:
# CONTAINER ID  IMAGE       COMMAND              CREATED             STATUS             PORTS                   NAMES
# 62s8e3b51eac  registry:2  /etc/docker/regis... About a minute ago  Up About a minute  0.0.0.0:5000->5000/tcp  registry
- If you are using a self-signed certificate, copy /etc/opt/registry-certs/occne-reg.crt to the /etc/containers/certs.d/<registry-hostname>:5000/ca.crt path on all the client machines:
mkdir -p /etc/containers/certs.d/winterfell:5000
scp root@winterfell:/etc/opt/registry-certs/occne-reg.crt /etc/containers/certs.d/winterfell:5000/ca.crt
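From a client machine, registry reachability and certificate trust can then be verified with a catalog query (winterfell and port 5000 follow the example above):
# query the registry catalog using the CA certificate copied in the previous step
curl --cacert /etc/containers/certs.d/winterfell:5000/ca.crt https://winterfell:5000/v2/_catalog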
Populating the Configured Repositories
Follow the instructions provided in the Artifact Acquisition and Hosting section to retrieve the dependency lists and retrieval scripts from a given version of CNE and apply them to the central repository, such that it serves any number of cluster instances.
Configuring Container Image Registry
Introduction
The container images used to run the Common services are loaded onto a central server registry to avoid exposing CNE instances to the Public Internet. The container images are downloaded to the Bastion Host in each CNE instance during installation. To allow the Bastion Host to retrieve the container images, create a Container Registry in the Central Server, provisioned with the necessary files.
Prerequisites
- Ensure that the CNE delivery package archive contains the CNE container images (delivered as the file named occne_images_${OCCNE_VERSION}.tgz).
- Ensure that a signed 'Central' container registry is running and is able to accept container pushes from the executing system.
- Ensure that the Podman (or Docker) container engine is installed on the executing system and that Podman (or Docker) commands run successfully.
- Ensure that the executing system's container engine can reach the internet docker.io registry and perform pulls without being rate-limited. This requires a Docker Hub account, obtained at hub.docker.com and signed into with the container tool's login command before running the container retrieval script, as shown below.
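For example, authentication against Docker Hub can be performed as follows (the account name is a placeholder):
# log in so that image pulls are not anonymously rate-limited
podman login docker.io --username <docker-hub-account>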
Procedure
- Provision the registry with the necessary images:
On a system that is connected to the Central Repository registry, run the following steps to populate the Central Repository registry with the required container images.
Set the environment variables to ensure that all commands work with the same registry and CNE version consistently (if targeting BareMetal, do not set OCCNE_vCNE):
$ CENTRAL_REPO=<central-repo-name>
$ CENTRAL_REPO_REGISTRY_PORT=<central-repo-registry-port>
$ OCCNE_VERSION=<OCCNE version>
$ OCCNE_CLUSTER=<cluster-name>
$ OCCNE_vCNE=<openstack, oci, vmware, or do not define if Bare-Metal>
$ if [ -x "$(command -v podman)" ]; then
    OCCNE_CONTAINER_ENGINE='podman'
  else
    OCCNE_CONTAINER_ENGINE='docker'
  fi
Example:
$ CENTRAL_REPO=rainbow-reg
$ CENTRAL_REPO_REGISTRY_PORT=5000
$ OCCNE_VERSION=25.2.100
$ OCCNE_CLUSTER=rainbow
$ OCCNE_vCNE=openstack
$ if [ -x "$(command -v podman)" ]; then
    OCCNE_CONTAINER_ENGINE='podman'
  else
    OCCNE_CONTAINER_ENGINE='docker'
  fi
- Once the environment is set up, extract the CNE image .tar file from the delivery archive and load the images into the local container engine:
$ tar -zxvf occne_images_${OCCNE_VERSION}.tgz
$ ${OCCNE_CONTAINER_ENGINE} load -i images_${OCCNE_VERSION}.tar
- Run the following commands to push the CNE images to the Central Repository registry and remove them from temporary local storage:
for IMAGE in $(cat images.txt); do
  ${OCCNE_CONTAINER_ENGINE} image tag ${IMAGE} ${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/${IMAGE}
  ${OCCNE_CONTAINER_ENGINE} image push ${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/${IMAGE}
  ${OCCNE_CONTAINER_ENGINE} image rm ${IMAGE} ${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/${IMAGE}
done
- Run the following commands to retrieve the lists of required container images, binaries, and Helm charts from each CNE container:
$ mkdir -p /var/occne/cluster/${OCCNE_CLUSTER}
$ for CONTAINER in provision k8s_install configure; do
    ${OCCNE_CONTAINER_ENGINE} run --rm --privileged -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e "${OCCNE_vCNE:+OCCNEARGS=--extra-vars=occne_vcne=${OCCNE_vCNE}}" ${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/occne/${CONTAINER}:${OCCNE_VERSION} /getdeps/getdeps
  done
- Add the /var/occne/cluster/${OCCNE_CLUSTER}/artifacts directory to $PATH:
$ if [[ ":$PATH:" != *":/var/occne/cluster/${OCCNE_CLUSTER}/artifacts:"* ]]; then PATH=${PATH}:/var/occne/cluster/${OCCNE_CLUSTER}/artifacts; fi
- Navigate to the /var/occne/cluster/${OCCNE_CLUSTER}/artifacts directory, verify that it contains a retrieve_container_images.sh script and a few *_container_images.txt files, and run the retrieval script for each list:
$ cd /var/occne/cluster/${OCCNE_CLUSTER}/artifacts
$ for f in *_container_images.txt; do
    retrieve_container_images.sh '' ${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT} < $f
  done
- If errors are reported due to Docker Hub rate-limiting, create a Docker Hub account and run 'podman login' (or 'docker login', as appropriate). Before re-running the above steps with that new account on this system, run the following command to see the list of required container images:
$ cd /var/occne/cluster/${OCCNE_CLUSTER}/artifacts
$ for f in *_container_images.txt; do
    cat $f
  done
- Verify the list of repositories in the container registry as follows:
Access the <registry address>:<port>/v2/_catalog endpoint using a browser, or from any Linux server using the following curl command:
$ curl -k https://${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/v2/_catalog
Example result:
{"repositories":["23.4.0/kubespray","anchore/anchore-engine","anoxis/registry-cli","aquasec/kube-bench","atmoz/sftp","bats/bats","bd_api","benigno/cne_scan","busybox","cap4c/cap4c-model-executor","cap4c-model-controller-mesh","cap4c-stream-analytics","ceph/ceph","cnc-nfdata-collector","cncc/apigw-common-config-hook","cncc/apigw-configurationinit","cncc/apigw-configurationupdate","cncc/cncc-apigateway","cncc/cncc-cmservice","cncc/cncc-core/validationhook","cncc/cncc-iam","cncc/cncc-iam/healthcheck","cncc/cncc-iam/hook","cncc/debug_tools","cncc/nf_test","cncdb/cndbtier-mysqlndb-client","cncdb/db_backup_executor_svc","cncdb/db_backup_manager_svc","cncdb/db_monitor_svc","cncdb/db_replication_svc","cncdb/docker","cncdb/gitlab/gitlab-runner","cncdb/gradle_image","cncdb/mysql-cluster","cndb2210/cndbtier-mysqlndb-client","cndb2210/db_backup_executor_svc","cndb2210/db_backup_manager_svc","cndb2210/db_monitor_svc","cndb2210/db_replication_svc","cndb2210/mysql-cluster","cndbtier/cicd/sdaas/dind","cndbtier-mysqlndb-client","cndbtier-sftp","cnsbc-ansible-precedence-testing/kubespray","cnsbc-ansible-precedence-testing2/kubespray","cnsbc-occne-8748/kubespray","cnsbc-occne-hugetlb/kubespray","coala/base","curlimages/curl","db_backup_executor_svc","db_backup_manager_svc","db_monitor_svc","db_replication_svc","devansh-kubespray/kubespray","devansh-vsphere-uplift/kubespray","diamcli","docker-remote.dockerhub-iad.oci.oraclecorp.com/jenkins/jenkins","docker-remote.dockerhub-iad.oci.oraclecorp.com/registry","docker.io/aquasec/kube-bench","docker.io/bats/bats","docker.io/bitnami/kubectl","docker.io/busybox","docker.io/calico/cni","docker.io/calico/kube-controllers","docker.io/calico/node","docker.io/ceph/ceph","docker.io/coredns/coredns","docker.io/curlimages/curl","docker.io/giantswarm/promxy","docker.io/governmentpaas/curl-ssl","docker.io/grafana/grafana","docker.io/istio/pilot","docker.io/istio/proxyv2","docker.io/jaegertracing/all-in-one","docker.io/jaegertracing/example-hotrod","docker.io/jaegertracing/jaeger-agent","docker.io/jaegertracing/jaeger-collector","docker.io/jaegertracing/jaeger-query","docker.io/jenkins/jenkins","docker.io/jenkinsci/blueocean","docker.io/jettech/kube-webhook-certgen","docker.io/jimmidyson/configmap-reload","docker.io/k8s.gcr.io/ingress-nginx/kube-webhook-certgen","docker.io/k8scloudprovider/cinder-csi-plugin","docker.io/k8scloudprovider/openstack-cloud-controller-manager","docker.io/kennethreitz/httpbin","docker.io/lachlanevenson/k8s-helm","docker.io/library/busybox","docker.io/library/nginx","docker.io/library/registry"]}
Configuring HTTP Repository
Introduction
To avoid exposing CNE instances to the Public Internet, load the binaries used for Kubespray (Kubernetes installation) and the Helm charts used during Common Services installation onto a Central Server. After loading these binaries and Helm charts, you can download them to the Bastion Host in each CNE instance during installation. To allow the retrieval of binaries and charts by the Bastion Hosts, create an HTTP repository in the Central Server, provisioned with the necessary files.
Prerequisites
- Ensure that an HTTP server is deployed and running on the Central Repository server.
- Ensure that the steps to configure the container image registry are run to obtain the list of dependencies required for each CNE container.
Procedure
- Retrieve Kubernetes binaries:
Kubespray requires access to an HTTP server from which it can download the correct version of a set of binary files. To provision an internal HTTP repository, obtain these files from the internet and place them at a known location on the internal HTTP server.
Run the following command to view the list of required binaries:
cd /var/occne/cluster/${OCCNE_CLUSTER}/artifacts
for f in *_binary_urls.txt; do
  cat $f | grep http
done
- Run the following command to retrieve the required binaries and place them in the binaries directory under the command-line specified directory:
for f in *_binary_urls.txt; do
  retrieve_bin.sh /var/www/html/occne/binaries < $f
done
- Run the following command to place the CNE in-house binaries in the binaries directory. The tarball contains a binaries directory with all the binaries:
tar -xzvf occne_binaries_${OCCNE_VERSION}.tgz
for b in $(grep -Ev "^#|^$" delivery_binaries*.txt); do
  mkdir -p /var/www/html/occne/binaries/$(dirname $b) 2>/dev/null || echo "Directory $(dirname $b) already exists"
  cp $(basename $b) /var/www/html/occne/binaries/$(dirname $b)
done
- Retrieve Helm charts:
The provision container requires access to an HTTP server from which it can download the correct version of a set of Helm charts for the required services. To provision the Central Repo HTTP repository, obtain these charts from the internet and place them at a known location on the Central HTTP server using the following steps:
- (Optional) Run the following commands to install Helm from the binaries. If Helm 3 is already installed, skip this step:
  - Identify the URL from which Helm was downloaded:
HELMURL=$(cat /var/occne/cluster/${OCCNE_CLUSTER}/artifacts/PROV_binary_urls.txt | grep -o '\S*helm.sh\S*')
  - Determine the archive file name from the URL:
HELMZIP=/var/www/html/occne/binaries/${HELMURL##*/}
  - Install Helm from the archive into /usr/bin:
sudo tar -xvf ${HELMZIP} linux-amd64/helm -C /usr/bin --strip-components 1
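A quick sanity check that the extracted binary is usable:
# print the installed Helm client version
helm version --short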
- Run the following command to view the list of required Helm charts:
for f in *_helm_charts.txt; do
  cat $f
done
- Run the following commands to retrieve the Helm charts from the internet:
for f in *_helm_charts.txt; do
  retrieve_helm.sh /var/www/html/occne/charts < $f
done
Configuring Access to Central Repository
Once the central repository has been populated with the artifacts necessary for CNE cluster deployment and maintenance, CNE Bootstrap Hosts and Bastion Hosts can be configured to use the repository.
These directions apply equally to a Bootstrap Host, where a new CNE cluster is to be deployed, and to a Bastion Host where one central repository is being replaced by another. In the following directions the target is referred to as the 'bootstrap' machine; however, the same directions and directories are applicable to a Bastion Host that is adopting a different central repository than the one it was initially bound to.
The Central Repository files and certificates must be copied to the directories listed (through SCP, USB stick, or another mechanism). In these directions, values in < > symbols are to be replaced with the specific values for the system being installed. The rest of the text in the code blocks should be used verbatim.
Procedure
- Set the environment variables for consistent access to the central repository:
$ echo 'export CENTRAL_REPO=<central repo hostname>' | sudo tee -a /etc/profile.d/occne.sh
$ echo 'export CENTRAL_REPO_IP=<central repo IPv4 address>' | sudo tee -a /etc/profile.d/occne.sh
$ echo 'export CENTRAL_REPO_REGISTRY_PORT=<central repo registry port>' | sudo tee -a /etc/profile.d/occne.sh
$ source /etc/profile.d/occne.sh
- If the central repository serves content over TLS/HTTPS and uses a certificate-authority signed certificate, then the protocol must be specified in the environment as well:
$ echo 'export CENTRAL_REPO_PROTOCOL=https' | sudo tee -a /etc/profile.d/occne.sh
$ source /etc/profile.d/occne.sh
- If the central repository hostname cannot be resolved by DNS, update the /etc/hosts file with the central repository IP/hostname association. Run the following command to add the central repository to /etc/hosts for hostname-to-IP resolution:
$ echo ${CENTRAL_REPO_IP} ${CENTRAL_REPO} | sudo tee -a /etc/hosts
- Create the following empty directories to hold the central repository files. Create the YUM local repo directory and the certificates directory for distribution to Bastions:
$ mkdir -p -m 0750 /var/occne/yum.repos.d
$ mkdir -p -m 0750 /var/occne/certificates
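For example, the files can then be staged from the central repository over SCP (the remote user and the .repo file name are placeholders; use the files created on the central repository):
# copy the YUM .repo file and the registry CA certificate from the central repo
$ scp <user>@${CENTRAL_REPO}:/etc/yum.repos.d/<mirror>.repo /var/occne/yum.repos.d/
$ scp <user>@${CENTRAL_REPO}:/etc/opt/registry-certs/occne-reg.crt /var/occne/certificates/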
Configuring PIP Repository
Introduction
To avoid exposing CNE instances to the public Internet, packages used during CNE installation are loaded to a central server. These packages are then downloaded to the Bastion Host in each CNE instance during the installation. To allow these packages to be retrieved by the Bastion Hosts, a PIP repository must be created in the central server and provisioned with the necessary files.
Note:
- CNE 23.4.0 runs on top of Oracle Linux 9; therefore, the default Python version is Python 3.9. The central repository is not required to run OL9. However, to download the appropriate Python packages, run the following procedure using Python 3.9.
- A given central repository can contain the packages required by both OL8 (Python 3.6) and OL9 (Python 3.9) at the same time:
  - Python 3.6 packages are stored in /var/www/html/occne/python. This path is used by clusters running CNE 23.3.x and lower, which run on top of OL8.
  - Python 3.9 packages are stored in /var/www/html/occne/ol9_python. This path is used by clusters running CNE 23.4.0 and above, which run on top of OL9.
Prerequisites
- Ensure that an HTTP server is deployed and running on the central repository server.
- Ensure that the steps to configure the container image registry are run to obtain the list of dependencies required for each CNE container.
- Ensure that Python 3.9 is available at the central repository server.
- Ensure that there is a CNE delivery package archive file containing the CNE Python libraries (delivered as a file named occne_python_binaries_${OCCNE_VERSION}.tgz).
Procedure
- Ensure that Python 3.9 is available:
Log in to the central repository server and run the following command to validate the Python version:
$ python3 --version
Sample output:
Python 3.9.X
If the central repository runs a different Python version (an older version like 3.6 or a newer version like 3.11), then install Python 3.9, for example as sketched below, and then proceed. If installing Python 3.9 is not an option, then perform the alternate procedure (mentioned in the next step) to utilize the CNE provision container to run the necessary steps in a proper Python environment.
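On an OL8-based central repository, Python 3.9 can typically be installed from the appstream repository; a minimal sketch (the package name can vary by OS version):
# install Python 3.9 alongside the default interpreter (OL8 example)
$ sudo dnf install -y python39
$ python3.9 --version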
- Retrieve Python libraries from external sources:
Note:
Run this step only if Python 3.9 is available on the central repository.
- Run the following commands to retrieve the required Python 3.9 libraries and place them in the /var/www/html/occne/ol9_python directory:
$ cd /var/occne/cluster/${OCCNE_CLUSTER}/artifacts
$ retrieve_python.sh /var/www/html/occne/ol9_python < PROV_python_deploy_libs_external.txt
- Alternatively, you can perform the following steps to retrieve the Python libraries using the provision container:
  - Set the environment variables to ensure that all commands work with the same registry and CNE version consistently:
Note:
If you are installing CNE on a BareMetal deployment, then do not set the OCCNE_vCNE variable.
$ CENTRAL_REPO=<central-repo-name>
$ CENTRAL_REPO_REGISTRY_PORT=<central-repo-registry-port>
$ OCCNE_VERSION=<OCCNE version>
$ OCCNE_CLUSTER=<cluster-name>
$ OCCNE_vCNE=<openstack, oci, vmware, or do not define if Bare-Metal>
$ if [ -x "$(command -v podman)" ]; then
    OCCNE_CONTAINER_ENGINE='podman'
  else
    OCCNE_CONTAINER_ENGINE='docker'
  fi
For example:
$ CENTRAL_REPO=rainbow-reg
$ CENTRAL_REPO_REGISTRY_PORT=5000
$ OCCNE_VERSION=23.4.0
$ OCCNE_CLUSTER=rainbow
$ OCCNE_vCNE=openstack
$ if [ -x "$(command -v podman)" ]; then
    OCCNE_CONTAINER_ENGINE='podman'
  else
    OCCNE_CONTAINER_ENGINE='docker'
  fi
  - Run the following command to retrieve the required Python libraries and place them in the /var/www/html/occne/ol9_python directory using the CNE provision container:
$ podman run --rm ${https_proxy:+-e https_proxy=${https_proxy}} ${no_proxy:+-e no_proxy=${no_proxy}} -v /var/www/html/occne:/var/www/html/occne ${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/occne/provision:${OCCNE_VERSION} /bin/sh -c "rm /etc/pip.conf && /getdeps/getdeps && sed -i -e s'/sudo/#sudo/'g -e 's|^\([[:space:]]*\)pypi-mirror create -d |\1/usr/local/bin/pypi-mirror create -d |' /host/artifacts/retrieve_python.sh && python3.9 -m ensurepip && python3 -m pip install -U pip python-pypi-mirror && /host/artifacts/retrieve_python.sh /var/www/html/occne/ol9_python '' < /host/artifacts/PROV_python_deploy_libs_external.txt"
- Load Python libraries packaged on CNE delivery artifacts:
- Remove the old PIP mirror directory content, to update the mirror index by replacing the old Python libraries with the new ones:
$ rm -rf /var/www/html/occne/ol9_python/*
- Run the following command to unpack the new Python libraries from the occne_python_binaries_${OCCNE_VERSION}.tgz file and place them in the /var/www/html/occne/ol9_python_libs Python libraries directory:
$ tar -xvf occne_python_binaries_${OCCNE_VERSION}.tgz -C /var/www/html/occne/ol9_python_libs
- Run the following command to create the mirror in the /var/www/html/occne/ol9_python directory using the new libraries, among the ones that already existed in the libraries directory:
$ pypi-mirror create -d /var/www/html/occne/ol9_python_libs -m /var/www/html/occne/ol9_python
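On the consuming side, the mirror created by pypi-mirror is served as a static 'simple' index over HTTP; a client could point pip at it with a configuration such as the following (the host name and the exact index path, with or without the /simple suffix, depend on the mirror layout and are assumptions here):
# /etc/pip.conf on the client (sketch)
[global]
index-url = http://<central-repo-name>/occne/ol9_python/simple/
trusted-host = <central-repo-name>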
Configuring YUM Repository
Introduction
The packages used during the OS installation and configuration are loaded onto a central server to avoid exposing CNE instances to the public Internet. These packages are downloaded to the Bastion Host in each CNE instance during the installation. To allow Bastion Hosts to retrieve these packages, you must create a YUM repository in the central server and provision all the necessary files.
You must create a repository file to reference the local YUM repository and place it in the required systems (the systems that run the CNE installation Docker instances).
Note:
The letter 'X' in the Oracle Linux version in this section indicates the latest version of Oracle Linux supported by CNE.
Prerequisites
- Use one of the following approaches to create a local YUM mirror repository for the OLX baseos_latest, addons, developer, developer_EPEL, appstream, and UEKR8 repositories:
  - Follow the instructions given in the Managing Software in Oracle Linux document to subscribe to automatic synching and updates through the Unbreakable Linux Network (ULN).
Note:
Recently (in August 2023), the appstream repository provided by ULN was found to be incomplete and led to installation issues with CNE.
  - Mirror the necessary YUM channels explicitly using the reposync and createrepo Oracle Linux tools. The following example provides a sample bash script (for OL9) and the guidelines to create and sync such a YUM mirror:
    - Ensure that yum.oracle.com is reachable.
    - Create an alternative 'yum.sync.conf' file to configure the settings other than the machine's defaults. This file can be an altered copy of /etc/yum.conf.
version of the central repository. However, there can be
differences in parameters or arguments. This specific version is
tested on an OL9 machine.
Sample Bash script:
#!/bin/bash
# script to run reposync to get needed YUM packages for the central repo
set -x
set -e
DIR=/var/www/html/yum/OracleLinux/OL9
umask 027
for i in "ol9_baseos_latest 1" "ol9_addons 1" "ol9_developer 1" "ol9_developer_EPEL 1" "ol9_appstream" "ol9_UEKR8 1"; do
  set -- $i # convert tuple into params $1 $2 etc
  REPO=$1
  NEWESTONLY=$2 # per Oracle Linux Support: appstream does not properly support 'newest-only'
  mkdir -p ${DIR}
  # ignore errors as sometimes packages and index do not fully match, just re-run to ensure everything is gathered
  # use alternate yum.conf file that may point to repodir and settings not used for managing THIS machine's packages
  reposync --config=/etc/yum.sync.conf --repo=${REPO} -p ${DIR} ${NEWESTONLY:+--newest-only} --delete || true
  createrepo ${DIR}/${REPO} || true
done
- Subscribe (in case of ULN) or download (in case of Oracle YUM) the following channels while creating the YUM mirror:
  - Oracle Linux X baseOS Latest. For example: [ol9_x86_64_baseos_latest]
  - Oracle Linux X addons. For example: [ol9_x86_64_addons]
  - Packages for test and development. For example: [ol9_x86_64_developer]
  - EPEL packages for OLX. For example: [ol9_x86_64_developer_EPEL]
  - Oracle Linux X appstream. For example: [ol9_x86_64_appstream]
  - Unbreakable Enterprise Kernel Release 8 for Oracle Linux X x86_64. For example: [ol9_x86_64_UEKR8]
Procedure
Configuring the OLX repository mirror repo file for CNE:
Once the YUM repository mirror is set up and functional, create a .repo file to allow the CNE installation logic to reach and pull files from it to create the cluster-local mirrors hosted on the Bastion nodes.
The following is a sample repository file providing the details of a mirror with the necessary repositories. This repository file is placed on the CNE Bootstrap machine, which sets up the CNE Bastion Host. The directions on the locations are provided in the installation procedure.
Note:
The repository names and the sample repository file provided are explicitly for OL9.
- ol9_baseos_latest
- ol9_addons
- ol9_developer
- ol9_appstream
- ol9_developer_EPEL
- ol9_UEKR8
Note:
The host used in the .repo file must be resolvable by the target nodes. Either register it in the configured name server or specify the baseurl fields by IP address.
[ol9_baseos_latest]
name=Oracle Linux 9 Latest (x86_64)
baseurl=http://winterfell/yum/OracleLinux/OL9/ol9_baseos_latest
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
module_hotfixes=1
proxy=_none_
[ol9_addons]
name=Oracle Linux 9 Addons (x86_64)
baseurl=http://winterfell/yum/OracleLinux/OL9/ol9_addons
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
module_hotfixes=1
proxy=_none_
[ol9_developer]
name=Packages for creating test and development environments for Oracle Linux 9 (x86_64)
baseurl=http://winterfell/yum/OracleLinux/OL9/ol9_developer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
module_hotfixes=1
proxy=_none_
[ol9_developer_EPEL]
name=EPEL Packages for creating test and development environments for Oracle Linux 9 (x86_64)
baseurl=http://winterfell/yum/OracleLinux/OL9/ol9_developer_EPEL
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
module_hotfixes=1
proxy=_none_
[ol9_appstream]
name=Application packages released for Oracle Linux 9 (x86_64)
baseurl=http://winterfell/yum/OracleLinux/OL9/ol9_appstream
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
module_hotfixes=1
proxy=_none_
[ol9_UEKR8]
name=Unbreakable Enterprise Kernel Release 8 for Oracle Linux 9 (x86_64)
baseurl=http://winterfell/yum/OracleLinux/OL9/ol9_UEKR8
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
module_hotfixes=1
proxy=_none_
Central Repository Security Recommendations
It is a best practice to perform security scans on the central repository before performing an installation or other procedures such as upgrade, OS update, or adding a worker node. For more information about repository management recommendations and performing the security scans, see the "Repository Management Recommendations" section in Oracle Communications Cloud Native Core Security Guide.
Installation Reference Procedures
Inventory File Preparation
CNE installation automation uses information within a CNE Inventory file to provision servers and virtual machines, install cloud native components, and configure all the components within the cluster so that they constitute a cluster compatible with the CNE platform specifications.
The boilerplate inventory file requires the input of site-specific information. This section describes the procedure to retrieve the CNE inventory boilerplate, the hosts_sample.ini file. If the iLO network is controlled by another network beyond the ToR switches, another inventory file, hosts_sample_remoteilo.ini, is provided.
Prerequisites
Before configuring the inventory file, copy the file to a system where it can be edited and saved for future use. The hosts.ini file must be transferred to the CNE Bootstrap server later.
Reference
For more information about building an inventory file, see Ansible documentation.
CNE Inventory Sample hosts.ini File
Download the hosts_sample.ini or hosts_sample_remoteilo.ini file from My Oracle Support (MOS). Depending on the type of Load Balancer, you can access the sample files from the following files on MOS:
- For MetalLB deployments, the sample files are provided in the occne_config_<release_number>.tgz file. The file structure of the occne_config_<release_number>.tgz file is as follows:
.
├── config_files
│   ├── Readme.txt
│   ├── ca-config.ini
│   ├── deploy.sh
│   ├── secrets.ini.template        # <--secrets.ini.template
│   ├── hosts_sample.ini            # <--hosts_sample.ini
│   ├── hosts_sample_remoteilo.ini  # <--hosts_sample_remoteilo.ini
│   ├── mb_resources.yaml
│   ├── snmp_mibs
│   │   ├── CNE-TC.mib
│   │   ├── ORACLECNE-MIB.mib
│   │   └── TEKELEC-TOPLEVEL-REG.mib
│   └── switch_install
│       ├── 93180_switchA.cfg
│       └── 93180_switchB.cfg
└── occne_custom_configtemplates_24.3.0-beta.2-111-g6ff4917.zip
- For CNLB deployments, the sample files are provided in the occne_cnlb_config-<release_number>.tgz file. The file structure of the occne_cnlb_config-<release_number>.tgz file is as follows:
.
├── config_files
│   ├── Readme.txt
│   ├── ca-config.ini
│   ├── cnlb.ini.bond0.template
│   ├── cnlb.ini.vlan.template
│   ├── deploy.sh
│   ├── secrets.ini.template        # <--secrets.ini.template
│   ├── hosts_sample.ini            # <--hosts_sample.ini
│   ├── hosts_sample_remoteilo.ini  # <--hosts_sample_remoteilo.ini
│   ├── snmp_mibs
│   │   ├── CNE-TC.mib
│   │   ├── ORACLECNE-MIB.mib
│   │   └── TEKELEC-TOPLEVEL-REG.mib
│   ├── switch_install
│   │   ├── 93180_switchA_cnlb_bond0.cfg
│   │   ├── 93180_switchA_cnlb_vlan.cfg
│   │   ├── 93180_switchB_cnlb_bond0.cfg
│   │   └── 93180_switchB_cnlb_vlan.cfg
│   └── validateCnlbIniBm.py
└── occne_custom_configtemplates_24.3.0-beta.2-111-g6ff4917.zip
Perform the following steps to extract the sample files from the tgz files:
- Untar the .tgz file:
Example for MetalLB:
$ tar -xvzf occne_config_<release_number>.tgz
Example for CNLB:
$ tar -xvzf occne_cnlb_config-<release_number>.tgz
- Copy the hosts_sample.ini or hosts_sample_remoteilo.ini file to hosts.ini:
Example for hosts_sample.ini:
$ cp hosts_sample.ini hosts.ini
Example for hosts_sample_remoteilo.ini:
$ cp hosts_sample_remoteilo.ini hosts.ini
- Copy the secrets.ini.template file to secrets.ini:
Example:
$ cp secrets.ini.template secrets.ini
A standard CNE cluster consists of the following nodes:
- 2 Bastion nodes: These nodes provide management access to the cluster and a repository for container images and Helm charts used to install Kubernetes applications into the cluster. During installation and upgrade, the installer runs on one of these nodes.
- 3 Kubernetes master or etcd nodes: These serve as the management of the Kubernetes cluster and run-time storage of configuration data.
- Kubernetes worker nodes: These nodes run the applications for the services that the cluster provides.
These nodes are placed on the physical machines as follows:
- 3 Master Host machines: Each master host machine hosts one Kubernetes master or etcd virtual machine, and two of them also host one Bastion virtual machine each. All of the host machines need access to the cluster network, and the two with Bastions also need network accessibility to the iLO and Management networks.
- Worker machines: Each worker machine hosts the Kubernetes worker node logic locally (not in a virtual machine) for the best performance. All of the worker machines need access to the cluster network.
Inventory File Overview
The inventory file is an Initialization (INI) formatted file named hosts.ini. The elements of an inventory file are hosts, properties, and groups.
- A host is defined as a Fully Qualified Domain Name (FQDN). Properties are defined as key=value pairs.
- A property applies to a specific host when it appears on the same line as the host.
- Square brackets define group names. For example, host_hp_gen_10 defines the group of physical HP Gen10 machines. There is no explicit "end of group" delimiter; rather, group definitions end at the next group declaration or at the end of the file. Groups cannot be nested.
- A property applies to an entire group when it is defined under a group heading not on the same line as a host.
- Groups of groups are formed using the children keyword. For example, the occne:children creates an occne group comprised of several other groups.
- Inline comments are not allowed.
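A short illustrative fragment tying these rules together (the hostnames and the per-host property shown are hypothetical):
# a group of physical hosts; a property on the same line applies to that host only
[host_hp_gen_10]
k8s-host-1.rainbow.lab.us.oracle.com ansible_host=172.16.3.4

# a property under a group heading applies to the whole group
[occne:vars]
occne_cluster_name=rainbow.lab.us.oracle.com

# a group of groups, formed with the children keyword
[occne:children]
host_hp_gen_10
host_kvm_guest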
Table A-1 Base Groups
| Group Name | Description |
|---|---|
| host_hp_gen_10, host_netra_x8_2, host_netra_x9_2 | Contains the list of all physical machines in the CNE cluster. Each host must be listed in the group matching its hardware type. Each entry starts with the fully qualified name of the machine as its inventory hostname, and each host in this group must have several per-host properties defined. The default configuration of a node in this group is for a Gen 10 RMS with modules providing boot interfaces at Linux interface identifiers 'eno5' and 'eno6'. For Gen 10 blades, the boot interfaces are usually 'eno1' and 'eno2' and must be specified by adding the corresponding boot-interface properties. |
| host_kvm_guest | Contains the list of all virtual machines in the CNE cluster. Each host in this group must have several per-host properties defined. |
| occne:children | Do not modify the children of the occne group. |
| occne:vars | This is a list of variables representing configurable site-specific data. While some variables are optional, define the ones listed in the boilerplate with valid values. If a given site does not have applicable data to fill in for a variable, consult the CNE installation or engineering team. For a description of Individual variable values, see the subsequent sections. |
| kube-master | The list of Master Node hosts where Kubernetes master components run. For example, k8s-master-1.rainbow.lab.us.oracle.com |
| etcd | The list of hosts that compose the etcd server. It must always be an odd number. This set is the same list of nodes as the kube-master group. |
| kube-node | The list of Worker Nodes. Worker Nodes are where Kubernetes pods run, and they must consist of the bladed hosts. For example, k8s-node-1.rainbow.lab.us.oracle.com |
| k8s-cluster:children | Do not modify the children of k8s-cluster. |
| occne_bastion | The list of Bastion Hosts names. For example, bastion-1.rainbow.lab.us.oracle.com |
Procedure
- Create the CNE cluster name:
To provide each CNE host with a unique FQDN, the first step in composing the CNE Inventory is to create a CNE Cluster domain suffix. The CNE Cluster domain suffix starts with a Top-level Domain (TLD). Various government and commercial authorities maintain the structure of a TLD.
The domain name must begin with an "adhoc" identifier, followed by additional levels that provide more context. These levels must include at least one "geographic" and one "organizational" identifier. Geographic and organizational identifiers can be multiple levels deep. This structure helps identify the cluster and conveys meaningful context within the domain name.
An example of a CNE cluster name using these identifiers is as follows:
- Adhoc Identifier: rainbow
- Organizational Identifier: lab
- Geographical Identifier (Country of United States): us
- TLD: oracle.com
The resulting CNE cluster name is: rainbow.lab.us.oracle.com
- Create the host_hp_gen_10/host_netra_x8_2 and host_kvm_guest group lists:
Using the CNE Cluster domain suffix created as per the above example, fill out the inventory boilerplate with the list of hosts in the host_hp_gen_10 and host_kvm_guest groups. The recommended hostname prefix for Kubernetes nodes is k8s-[host|master|node]-x, where x is a number from 1 to N. The k8s-host-x machines run the k8s-master-x and bastion-x virtual machines.
- Edit occne:vars:
  - The following table provides the occne:vars for a standard MetalLB or CNLB deployment. Edit the values in the occne:vars group to reflect site-specific data. Values in the occne:vars group are defined as follows:
Table A-2 occne:vars

| Variable Name | Description/Comment |
|---|---|
| occne_cluster_name | Set to the CNE Cluster Name as shown in the CNE Cluster Name section. |
| subnet_ipv4 | Set to the subnet of the network used to assign IPs for CNE hosts. |
| subnet_cidr | Set to the CIDR notation for the subnet with a leading /. For example: /24 |
| netmask | Set appropriately for the network used to assign IPs for CNE hosts. |
| broadcast_address | Set appropriately for the network used to assign IPs for CNE hosts. |
| default_route | Set to the IP of the ToR switch. |
| name_server | Set to a comma-separated list of external nameservers. |
| ntp_server | (Optional) Set to a comma-separated list of NTP servers to provide time to the cluster. This can be the ToR switch if it is appropriately configured with NTP. If unspecified, the central_repo_host is used. |
| occne_repo_host_address | Set to the Bootstrap Host internal IPv4 address. |
| calico_mtu | The default value for calico_mtu is 1480 from Kubernetes. If this value needs to be modified, use only a number and not a string. |
| central_repo_host | Set to the hostname of the central repository (for YUM, Docker, and HTTP resources). |
| central_repo_host_address | Set to the IPv4 address of the central_repo_host. |
| pxe_install_lights_out_usr | Set to the user name configured for iLO admins on each host in the CNE Frame. |
| pxe_install_lights_out_passwd | Set to the password configured for iLO admins on each host in the CNE Frame. |
| ilo_vlan_id | Set to the VLAN ID of the iLO network. For example: 2. This variable is required only when the iLO network is local to the ToR switches and ilo_host is needed on Bastion Host servers. Skip this variable if the iLO network is beyond the ToR switches. |
| ilo_subnet_ipv4 | Set to the subnet of the iLO network used to assign IPs for Bastion Hosts. This variable is required only when the iLO network is local to the ToR switches and ilo_host is needed on Bastion Host servers. Skip this variable if the iLO network is beyond the ToR switches. |
| ilo_subnet_cidr | Set to the CIDR notation for the subnet. For example: 24. This variable is required only when the iLO network is local to the ToR switches and ilo_host is needed on Bastion Host servers. Skip this variable if the iLO network is beyond the ToR switches. |
| ilo_netmask | Set appropriately for the network used to assign iLO IPs for Bastion Hosts. This variable is required only when the iLO network is local to the ToR switches and ilo_host is needed on Bastion Host servers. Skip this variable if the iLO network is beyond the ToR switches. |
| ilo_broadcast_address | Set appropriately for the network used to assign iLO IPs for Bastion Hosts. This variable is required only when the iLO network is local to the ToR switches and ilo_host is needed on Bastion Host servers. Skip this variable if the iLO network is beyond the ToR switches. |
| ilo_default_route | Set to the iLO VIP of the ToR switch. This variable is required only when the iLO network is local to the ToR switches and ilo_host is needed on Bastion Host servers. Skip this variable if the iLO network is beyond the ToR switches. |
| mgmt_vlan_id | Set to the VLAN ID of the Management network. For example: 4 |
| mgmt_subnet_ipv4 | Set to the subnet of the Management network used to assign IPs for Bastion Hosts. |
| mgmt_subnet_cidr | Set to the CIDR notation for the Management subnet. For example: 29 |
| mgmt_netmask | Set appropriately for the network used to assign Management IPs for Bastion Hosts. |
| mgmt_broadcast_address | Set appropriately for the network used to assign Management IPs for Bastion Hosts. |
| mgmt_default_route | Set to the Management VIP of the ToR switch. |
| occne_snmp_notifier_destination | Set to the address of the SNMP trap receiver. For example: "127.0.0.1:162" |
| cncc_enabled | Set to False for LoadBalancer type service. Set to True for ClusterIP type service. The default value is False. |
| occne_grub_password | Set to the password configured for GRUB on each host in the CNE cluster. For more information about configuring the GRUB password, see Configuring GRUB Password. |
occne:varsthat are specific to CNLB deployment:Note:
These values must be included for CNLB deployments in addition to the values in the previous table.Table A-3 CNLB occne:vars
Var Name Description/Comment occne_prom_cnlb IP address or Prometheus service occne_alert_cnlb IP address for Alert Manager service occne_graf_cnlb IP address for Grafana service occne_nginx_cnlb IP address for Nginx service occne_jaeger_cnlb IP address for Jaeger service occne_opensearch_cnlb IP address for Opensearch service -
For all environments, a
The following table provides thesecrets.inifile must be included considering the following values depending on the deployment type.secrets.inivariables:Table A-4 secrets.ini Variables
Var Name Description/Comment pxe_install_lights_out_usr Set to the user name configured for iLO admins on each host in the CNE Frame. pxe_install_lights_out_passwd Set to the password configured for iLO admins on each host in the CNE Frame. occne_grub_password Set to the password configured for GRUB on each host in the CNE cluster.
Installation Preflight Checklist
Introduction
This procedure identifies the pre-conditions necessary to begin the installation of a CNE frame. The field-install personnel can use this procedure as a reference to ensure that the frame is correctly assembled and that the inventory of required artifacts is available before attempting the installation activities.
Prerequisites
The primary function of this procedure is to identify the prerequisites necessary for the installation to begin.
Confirm that the hardware components are installed in the frame and connected as per the following tables:
Figure A-1 Rackmount ordering

The CNE frame installation must be complete before running any software installation. This section provides a reference to verify if the frame is installed as expected by software installation tools.
This section also contains the point-to-point connections for the switches. The switches in the solution must follow the naming scheme of Switch<series number>, like Switch1, Switch2, and so on, where Switch1 is the first switch in the solution and Switch2 is the second. These two switches form a redundant pair. To find the switch datasheet, see https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-736651.html.
Table A-5 ToR Switch Connections
| Switch Port Name/ID (From) | From Switch 1 to Destination | From Switch 2 to Destination | Cable Type | Module Required |
|---|---|---|---|---|
| 1 | RMS 1, FLOM NIC 1 | RMS 1, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 2 | RMS 1, iLO | RMS 2, iLO | CAT 5e or 6A | 1GE Cu SFP |
| 3 | RMS 2, FLOM NIC 1 | RMS 2, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 4 | RMS 3, FLOM NIC 1 | RMS 3, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 5 | RMS 3, iLO | RMS 4, iLO | CAT 5e or 6A | 1GE Cu SFP |
| 6 | RMS 4, FLOM NIC 1 | RMS 4, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 7 | RMS 5, FLOM NIC 1 | RMS 5, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 8 | RMS 5, iLO | RMS 6, iLO | CAT 5e or 6A | 1GE Cu SFP |
| 9 | RMS 6, FLOM NIC 1 | RMS 6, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 10 | RMS 7, FLOM NIC 1 | RMS 7, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 11 | RMS 7, iLO | RMS 8, iLO | CAT 5e or 6A | 1GE Cu SFP |
| 12 | RMS 8, FLOM NIC 1 | RMS 8, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 13 | RMS 9, FLOM NIC 1 | RMS 9, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 14 | RMS 9, iLO | RMS 10, iLO | CAT 5e or 6A | 1GE Cu SFP |
| 15 | RMS 10, FLOM NIC 1 | RMS 10, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 16 | RMS 11, FLOM NIC 1 | RMS 11, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 17 | RMS 11, iLO | RMS 12, iLO | CAT 5e or 6A | 1GE Cu SFP |
| 18 | RMS 12, FLOM NIC 1 | RMS 12, FLOM NIC 2 | Cisco 10GE DAC | Integrated in DAC |
| 19 - 48 | Unused (add for more RMS when needed) | Unused (add for more RMS when needed) | NA | NA |
| 49 | Mate Switch, Port 49 | Mate Switch, Port 49 | Cisco 40GE DAC | Integrated in DAC |
| 50 | Mate Switch, Port 50 | Mate Switch, Port 50 | Cisco 40GE DAC | Integrated in DAC |
| 51 | OAM Uplink to Customer | OAM Uplink to Customer | 40GE (MM or SM) Fiber | 40GE QSFP |
| 52 | Signaling Uplink to Customer | Signaling Uplink to Customer | 40GE (MM or SM) Fiber | 40GE QSFP |
| 53 | Unused | Unused | N/A | N/A |
| 54 | Unused | Unused | N/A | N/A |
| Management (Ethernet) | RMS 1, NIC 2 (1GE) | RMS 1, NIC 3 (1GE) | CAT5e or CAT 6A | None (RJ45 port) |
| Management (Serial) | Unused | Unused | None | None |
For information about the Server quick specifications, see HPE ProLiant DL380 Gen10 Server QuickSpecs.
- iLO: The integrated Lights Out management interface (iLO) contains an ethernet out of band management interface for the server. This connection is 1GE RJ45.
- 4x1GE LOM: For most servers in the solution, their 4x1GE LOM ports will be unused. The exception is the first server in the first frame. This server will serve as the management server for the ToR switches. In this case, the server will use 2 of the LOM ports to connect to ToR switches' respective out of band ethernet management ports. These connections will be 1GE RJ45 (CAT 5e or CAT 6).
- 2x10GE FLOM: Every server will be equipped with a 2x10GE Flex LOM card (or FLOM). These will be for in-band or application and solution management traffic. These connections are 10GE fiber (or DAC) and will terminate towards the ToR switches' respective SFP+ ports.
All RMS in the frame use only the 10GE FLOM connections, except for the "management server": the first server in the frame has some special connections, listed as follows:
Table A-6 Bootstrap Server Connections
| Server Interface | Destination | Cable Type | Module Required | Notes |
|---|---|---|---|---|
| Base NIC1 (1GE) | Unused | None | None | N/A |
| Base NIC2 (1GE) | Switch1A Ethernet Mngt | CAT5e or 6a | None | Switch Initialization |
| Base NIC3 (1GE) | Switch1B Ethernet Mngt | CAT5e or 6a | None | Switch Initialization |
| Base NIC4 (1GE) | Unused | None | None | N/A |
| FLOM NIC1 | Switch1A Port 1 | Cisco 10GE DAC | Integrated in DAC | OAM, Signaling, Cluster |
| FLOM NIC2 | Switch1B Port 1 | Cisco 10GE DAC | Integrated in DAC | OAM, Signaling, Cluster |
| USB Port1 | USB Flash Drive | None | None | Bootstrap Host Initialization Only (temporary) |
| USB Port2 | Keyboard | USB | None | Bootstrap Host Initialization Only (temporary) |
| USB Port3 | Mouse | USB | None | Bootstrap Host Initialization Only (temporary) |
| Monitor Port | Video Monitor | DB15 | None | Bootstrap Host Initialization Only (temporary) |
Ensure that the artifacts listed in the Artifact Acquisition and Hosting section are available in repositories accessible from the CNE Frame.
The beginning stage of installation requires a local KVM (keyboard, video, and mouse) for installing the bootstrap environment.
Procedure
<user_input> in the table indicates that you
must determine the appropriate value as per your requirement.
Table A-7 Complete Site Survey Subnet Table
| Sl No. | Network Description | Subnet Allocation | Bitmask | VLAN ID | Gateway Address |
|---|---|---|---|---|---|
| 1 | iLO/OA Network | 192.168.20.0 | 24 | 2 | N/A |
| 2 | Platform Network | 172.16.3.0 | 24 | 3 | 172.16.3.1 |
| 3 | Switch Configuration Network | 192.168.2.0 | 24 | N/A | N/A |
| 4 | Management Network - Bastion Hosts | <user_input> | 28 | 4 | <user_input> |
| 5 | Signaling Network - MySQL Replication | <user_input> | 29 | 5 | <user_input> |
| 6 | OAM Pool - service_ip_address in cnlb.ini for common services | <user_input> | 28 | N/A | CNLB_OAM_EXT_VIP |
| 7 | Signaling Pool - service_ip_address in cnlb.ini for 5G NFs | <user_input> | <user_input> | N/A | CNLB_SIG_EXT_VIP |
| 8 | Other CNLB pools (Optional) | <user_input> | <user_input> | N/A | <user_input> |
| 9 | Other CNLB pools (Optional) | <user_input> | <user_input> | N/A | <user_input> |
| 10 | Other CNLB pools (Optional) | <user_input> | <user_input> | N/A | One for each subnet |
| 11 | ToR Switch A OAM Uplink Subnet | <user_input> | 30 | N/A | <user_input> |
| 12 | ToR Switch B OAM Uplink Subnet | <user_input> | 30 | N/A | <user_input> |
| 13 | ToR Switch A Signaling Uplink Subnet | <user_input> | 30 | N/A | <user_input> |
| 14 | ToR Switch B Signaling Uplink Subnet | <user_input> | 30 | N/A | <user_input> |
| 15 | ToR Switch A/B Crosslink Subnet (OSPF link) | 172.16.100.0 | 30 | 100 | <user_input> |
<user_input> in the table indicates that you
must determine the appropriate value as per your requirement.
Note:
The "iLO VLAN IP Address (VLAN 2)" column is not required if "Device iLO IP Address" is accessed from management IP interface.Table A-8 Complete Site Survey Host IP Table
| Sl No. | Component/Resource | Platform VLAN IP Address (VLAN 3) | iLO VLAN IP Address (VLAN 2) | CNE Management IP Address (VLAN 4) | Device iLO IP Address | MAC of Primary NIC |
|---|---|---|---|---|---|---|
| 1 | RMS 1 Host IP | 172.16.3.4 | 192.168.20.11 | <user_input> | 192.168.20.121 | Eno5: |
| 2 | RMS 2 Host IP | 172.16.3.5 | 192.168.20.12 | <user_input> | 192.168.20.122 | Eno5: |
| 3 | RMS 3 Host IP | 172.16.3.6 | N/A | N/A | 192.168.20.123 | Eno5: |
| 4 | RMS 4 Host IP | 172.16.3.7 | N/A | N/A | 192.168.20.124 | Eno5: |
| 5 | RMS 5 Host IP | 172.16.3.8 | N/A | N/A | 192.168.20.125 | Eno5: |
<user_input> in the table indicates that you
must determine the appropriate value as per your requirement.
Table A-9 Complete VM IP Table
| Sl No. | Component/Resource | Platform VLAN IP Address (VLAN 3) | iLO VLAN IP Address (VLAN 2) | CNE Management IP Address (VLAN 4) | SQL Replication IP Address(VLAN 5) |
|---|---|---|---|---|---|
| 1 | Bastion Host 1 | 172.16.3.100 | 192.168.20.100 | <user_input> | NA |
| 2 | Bastion Host 2 | 172.16.3.101 | 192.168.20.101 | <user_input> | NA |
<user_input> in the table indicates that you
must determine the appropriate value as per your requirement.
Table A-10 Complete Switch IP Table
| Sl No. | Procedure Reference Variable Name | Description | IP Address | VLAN ID | Notes |
|---|---|---|---|---|---|
| 1 | ToRswitchA_Platform_IP | Host Platform Network | 172.16.3.2 | 3 | |
| 2 | ToRswitchB_Platform_IP | Host Platform Network | 172.16.3.3 | 3 | |
| 3 | ToRswitch_Platform_VIP | Host Platform Network Default Gateway | 172.16.3.1 | 3 | This address is also used as the source NTP address for all servers. |
| 4 | ToRswitchA_CNEManagementNet_IP | Bastion Host Network | <user_input> | 4 | Address needs to be without prefix length. For example: 10.25.100.2 |
| 5 | ToRswitchB_CNEManagementNet_IP | Bastion Host Network | <user_input> | 4 | Address needs to be without prefix length. For example: 10.25.100.3 |
| 6 | ToRswitch_CNEManagementNet_VIP | Bastion Host Network Default Gateway | <user_input> | 4 | No prefix length, address only for VIP |
| 7 | CNEManagementNet_Prefix | Bastion Host Network Prefix Length | <user_input> | 4 | number only such as 29 |
| 8 | CNLB_OAM_EXT_SwA_Address | CNLB_OAM Network | <user_input> | 3 | Address must be without prefix length. For example: 10.25.200.2 |
| 9 | CNLB_OAM_EXT_SwB_Address | CNLB_OAM Network | <user_input> | 3 | Address must be without prefix length. For example: 10.25.200.3 |
| 10 | CNLB_OAM_EXT_VIP | CNLB_OAM Network Default Gateway | <user_input> | 3 | No prefix length, address only for VIP. For example: 10.25.200.1 |
| 11 | CNLB_OAM_EXT_GROUP_ID | CNLB_OAM VRRPv3 Group ID | <user_input> | 3 | Number only <1-255> . VRRP Group ID, unique in topology |
| 12 | CNLB_OAM_EXT_Prefix | CNLB_OAM Network Prefix Length | <user_input> | 3 | Number only. For example: 28 |
| 13 | CNLB_SIG_EXT_SwA_Address | CNLB_SIG Network | <user_input> | 3 | Address must be without prefix length. For example: 10.25.200.18 |
| 14 | CNLB_SIG_EXT_SwB_Address | CNLB_SIG Network | <user_input> | 3 | Address must be without prefix length. For example: 10.25.200.19 |
| 15 | CNLB_SIG_EXT_VIP | CNLB_SIG Network Default Gateway | <user_input> | 3 | No prefix length. Address only for VIP. For example: 10.25.200.17 |
| 16 | CNLB_SIG_EXT_GROUP_ID | CNLB_SIG VRRPv3 Group ID | <user_input> | 3 | Number only <1-255>. VRRP Group ID, unique in topology |
| 17 | CNLB_SIG_EXT_Prefix | CNLB_SIG Network Prefix Length | <user_input> | 3 | Number only. For example: 28 |
| 18 | ToRswitchA_oam_uplink_customer_IP | ToR Switch A OAM uplink route path to customer network | <user_input> | N/A | No prefix length in address, static to be /30 |
| 19 | ToRswitchA_oam_uplink_IP | ToR Switch A OAM uplink IP | <user_input> | N/A | No prefix length in address, static to be /30 |
| 20 | ToRswitchB_oam_uplink_customer_IP | ToR Switch B OAM uplink route path to customer network | <user_input> | N/A | No prefix length in address, static to be /30 |
| 21 | ToRswitchB_oam_uplink_IP | ToR Switch B OAM uplink IP | <user_input> | N/A | No prefix length in address, static to be /30 |
| 22 | ToRswitchA_signaling_uplink_customer_IP | ToR Switch A Signaling uplink route path to customer network | <user_input> | N/A | No prefix length in address, static to be /30 |
| 23 | ToRswitchA_signaling_uplink_IP | ToR Switch A Signaling uplink IP | <user_input> | N/A | No prefix length in address, static to be /30 |
| 24 | ToRswitchB_signaling_uplink_customer_IP | ToR Switch B Signaling uplink route path to customer network | <user_input> | N/A | No prefix length in address, static to be /30 |
| 25 | ToRswitchB_signaling_uplink_IP | ToR Switch B Signaling uplink IP | <user_input> | N/A | No prefix length in address, static to be /30 |
| 26 | ToRswitchA_mngt_IP | ToR Switch A Out of Band Management IP | 192.168.2.1 | N/A | |
| 27 | ToRswitchB_mngt_IP | ToR Switch B Out of Band Management IP | 192.168.2.2 | N/A | |
| 28 | Allow_Access_Server | IP address of external management server to access ToR switches | <user_input> | <user_input> | access-list Restrict_Access_ToR denies all direct external access to ToR switch vlan interfaces. If you want direct access from outside for trouble shooting or management need, then allow specific server to access the ToR switches. If this variable is not required, then delete this line from the configuration file of the switch. If you need more than one server access, then add more similar lines. |
| 29 | SNMP_Trap_Receiver_Address | IP address of the SNMP trap receiver | <user_input> | <user_input> | NA |
| 30 | SNMP_Community_String | SNMP v2c community string | <user_input> | <user_input> | For simplicity, use the same community string for snmpget and SNMP traps |
Table A-11 ToR Switch Variables Table (Switch Specific)
| Key/Variable Name | ToR_SwitchA Value | ToR_SwitchB Value | Notes |
|---|---|---|---|
| switch_name | <user_input> | <user_input> | Customer defined switch name for each switch. |
| admin_password | <user_input> | <user_input> | Password for the admin user. Strong password requirements: the length must be at least 8 characters, and the password must contain characters from at least three of the following classes: lowercase letters, uppercase letters, digits, and special characters. Do not use '?' as a special character because it does not work on the switches. Do not use '/' as a special character because of the procedures. |
| user_name | <user_input> | <user_input> | Customer defined user. |
| user_password | <user_input> | <user_input> | Password for <user_name>. Strong password requirements: the length must be at least 8 characters, and the password must contain characters from at least three of the following classes: lowercase letters, uppercase letters, digits, and special characters. Do not use '?' as a special character because it does not work on the switches. Do not use '/' as a special character because of the procedures. |
| ospf_md5_key | <user_input> | <user_input> | The key must be the same on all OSPF interfaces on the ToR switches and the connected customer switches |
| ospf_area_id | <user_input> | <user_input> | The number used as the OSPF area ID. |
| nxos_version | <user_input> | <user_input> | The version nxos.9.2.3.bin is used by default and hard-coded in the configuration template files. If the installed ToR switches use a different version, record the version here. The installation procedures will reference this variable and value to update a configuration template file. |
| NTP_server_1 | <user_input> | <user_input> | NA |
| NTP_server_2 | <user_input> | <user_input> | NA |
| NTP_server_3 | <user_input> | <user_input> | NA |
| NTP_server_4 | <user_input> | <user_input> | NA |
| NTP_server_5 | <user_input> | <user_input> | NA |
Table A-12 Complete Site Survey Repository Location Table
| Repository | Location Override Value |
|---|---|
| Yum Repository | <user_input> |
| Docker Registry | <user_input> |
| Helm Repository | <user_input> |
Run the Inventory File Preparation Procedure to populate the inventory file.
Since the bootstrap environment is not connected to the network until the ToR switches are configured, you must provide the environment with the required software via USB flash drives to begin the installation process.
Use one flash drive to install an OS on the Installer Bootstrap Host. The details on how to set up the USB for OS installation are provided in a separate procedure. Ensure that this flash drive has approximately 6 GB of capacity.
Once the OS installation is complete, use another flash drive (the Utility USB) to transfer the required configuration files to the Installer Bootstrap Host. Ensure that this flash drive also has approximately 6 GB of capacity.
Note:
- The instructions listed here, including the mount instructions, are for a Linux host. You can obtain equivalent instructions for other operating systems from the Web if needed.
- When creating these files on a USB from Windows (using Notepad or another Windows editor), the files can contain control characters that are not recognized in a Linux environment, usually a ^M at the end of each line. You can remove the control characters by running the dos2unix command in Linux on each file: dos2unix <filename>.
- When copying the files to this USB, make sure that the USB is formatted as FAT32.
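The following is a minimal sketch of formatting a flash drive as FAT32 and mounting it on a Linux host. The device name /dev/sdb1 and the mount point /mnt/usb are assumptions; verify the actual device name with lsblk first, because formatting the wrong device destroys its data.
# Identify the USB device (assumption: it enumerates as /dev/sdb)
lsblk
# Format the first partition as FAT32 (destructive; verify the device name first)
sudo mkfs.vfat -F 32 /dev/sdb1
# Mount the USB at an assumed mount point
sudo mkdir -p /mnt/usb
sudo mount /dev/sdb1 /mnt/usb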
- Copy the hosts.ini file from Set up the Host Inventory File (hosts.ini) onto the Utility USB.
- Copy the Oracle Linux repository file (for example, OL9) from the customer's OL YUM mirror instance onto the Utility USB. For more details, see the YUM Repository Configuration section.
- Copy the following switch configuration template
files from OHC to the Utility USB:
- Standard Metallb Deployments:
- 93180_switchA.cfg
- 93180_switchB.cfg
- CNLB Deployments (select the set that matches the configuration
used for deployment of CNLB):
- 93180_switchA._cnlb_bond0.cfg
- 93180_switchB._cnlb_bond0.cfg
- 93180_switchA._cnlb_vlan.cfg
- 93180_switchB._cnlb_vlan.cfg
- Standard Metallb Deployments:
dhcpd.conf file that is needed for Configuring Top of Rack 93180YC-EX Switches (created in the steps that follow).
- Mount the Utility USB.
Note:
For instructions on mounting a USB in Linux, see Downloading Oracle Linux section. - Change drive (cd) to the mounted USB directory.
- Download the
poap.pyfile to the USB. You can obtain the file using the following command on any Linux server or laptop:$ wget https://raw.githubusercontent.com/datacenter/nexus9000/master/nx-os/poap/poap.py - Rename the
poap.pyscript topoap_nexus_script.py.$ mv poap.py poap_nexus_script.py - The switches' firmware version is handled before
the installation procedure, so no need to handle it from the poap.py
script. Comment out the lines to handle the firmware at lines
1931-1944.
$ vi poap_nexus_script.py # copy_system() # if single_image is False: # copy_kickstart() # signal.signal(signal.SIGTERM, sig_handler_no_exit) # # install images # if single_image is False: # install_images() # else: # install_images_7_x() # # Cleanup midway images if any # cleanup_temp_images()
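Alternatively, if you prefer a non-interactive edit, a sed one-liner such as the following comments out the same block. This is a sketch that assumes the firmware-handling code still sits at lines 1931-1944 of the downloaded script; verify the line numbers in your copy before running it.
# Prefix lines 1931-1944 with '# ' (verify these line numbers in your copy first)
$ sed -i '1931,1944 s/^/# /' poap_nexus_script.py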
Create the dhcpd.conf file that is needed for Configuring Top of Rack 93180YC-EX Switches:
- Edit file: dhcpd.conf. The first subnet, 192.168.2.0, is the subnet for the mgmtBridge on the bootstrap host; the second subnet, 192.168.20.0, is the ilo_subnet_ipv4 in the hosts.ini file. Modify the subnets according to the actual values for the cluster.
- Copy the following contents to that file and save it on the USB.
# # DHCP Server Configuration file. # see /usr/share/doc/dhcp-server/dhcpd.conf.example # see dhcpd.conf(5) man page # # Set DNS name and DNS server's IP address or hostname option domain-name "example.com"; option domain-name-servers ns1.example.com; # Declare DHCP Server authoritative; # The default DHCP lease time default-lease-time 10800; # Set the maximum lease time max-lease-time 43200; # Set Network address, subnet mask and gateway subnet 192.168.2.0 netmask 255.255.255.0 { # Range of IP addresses to allocate range dynamic-bootp 192.168.2.101 192.168.2.102; # Provide broadcast address option broadcast-address 192.168.2.255; # Set default gateway option routers 192.168.2.1; } subnet 192.168.20.0 netmask 255.255.255.0 { # Range of IP addresses to allocate range dynamic-bootp 192.168.20.4 192.168.20.254; # Provide broadcast address option broadcast-address 192.168.20.255; # Set default gateway option routers 192.168.20.1; }
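Optionally, on any Linux host with the ISC DHCP server package installed, you can verify the edited file before copying it to the USB; the -t flag parses the configuration without starting the daemon. The file path is an assumption based on where you saved the file.
# Parse the configuration file without starting the DHCP service
$ sudo dhcpd -t -cf ./dhcpd.conf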
- Edit file: md5Poap.sh
- Copy the following contents to
that file and save it on the USB.
#!/bin/bash
# Compute the md5sum of poap_nexus_script.py with its #md5sum line removed,
# then embed the result back into the script's #md5sum= line.
f=poap_nexus_script.py
sed '/^#md5sum/d' "$f" > "$f.md5"
sed -i "s/^#md5sum=.*/#md5sum=\"$(md5sum "$f.md5" | sed 's/ .*//')\"/" "$f"
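As a usage sketch (assuming md5Poap.sh and poap_nexus_script.py are both in the current directory), run the script and then confirm that the embedded checksum was updated:
$ chmod +x md5Poap.sh
$ ./md5Poap.sh
# The #md5sum line should now hold the checksum of the script with that line removed
$ grep '^#md5sum' poap_nexus_script.py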
Common Installation Configuration
This section details the configurations that are common to both Baremetal and virtualized versions of the CNE installation.
Common Services Configuration
Update the file /var/occne/cluster/${OCCNE_CLUSTER}/hosts.ini or occne.ini to define the required Ansible variables for the deployment. The following table describes the list of possible /var/occne/cluster/${OCCNE_CLUSTER}/occne.ini variables that can be combined with the deploy.sh/pipeline.sh command to further define the deployment. A starting point for this file is provided as /var/occne/cluster/${OCCNE_CLUSTER}/occne.ini.template in virtual environments and as hosts_sample.ini in baremetal environments.
Set all these variables in the [occne:vars] section of the
.ini file.
Prerequisites
Gather the log_trace_active_storage, log_trace_inactive_storage, and total_metrics_storage values from the Preinstallation Tasks.
Integrating CNC Console (CNCC) with CNE significantly strengthens the overall security posture. While CNE delivers monitoring capabilities, CNCC ensures secure, role-based access to the Common Services, particularly the Observability Services.
Key benefits of CNE and CNC Console integration include:
- Authentication using CNCC IAM
- CNCC GUI configured based on authorization roles
CNE Common Services can be set to enable HTTPS communication.
It is recommended to install CNC Console to make the features listed above available within OCCNE.
Table A-13 Configuration for CNCC Authenticated Environment
| occne.ini Variable | Definition | Default Value |
|---|---|---|
| cncc_enabled |
Can be set to either 'True' or 'False'. Set to 'True' for Cloud Native Control Console (CNCC) authenticated environment. Set to 'False' or do not define for non-CNCC authenticated environment. This is not a mandatory variable. |
False |
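For example, to enable a CNCC authenticated environment, the variable is set in the [occne:vars] section of the .ini file as follows (a minimal sketch; the rest of the file is omitted):
[occne:vars]
cncc_enabled = 'True'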
Configuration for Common Services
Update the file
/var/occne/cluster/${OCCNE_CLUSTER}/hosts.ini or
occne.ini to define the required Ansible variables for the
deployment. The following table describes the list of possible variables that can be
combined with the deploy.sh/pipeline.sh command to further define
the deployment. A starting point for this file is provided as
/var/occne/cluster/${OCCNE_CLUSTER}/occne.ini.template in
virtual environments and as hosts_sample.ini in baremetal
environments.
Set all the variables in the [occne:vars] section of the
.ini file.
Note:
- The default values in the table do not reflect the actual values that a component in a particular environment requires. The values must be updated as per your environment and requirement. The performance of a component is proportional to the amount of resources allocated and the change in usage metrics. Therefore, ensure that you set the variable values diligently. Providing values that do not fit the requirements can lead to poor performance and unstable environment issues.
- The variables with Y under the Required (Y/N) column are necessary; however, you can use the defaults if they meet the deployment requirements.
Table A-14 CNE Variables
| occne.ini Variable | Definition | Default |
|---|---|---|
| occne_opensearch_data_size | Defines the log_trace_active_storage value.
This is not a mandatory variable. |
10Gi |
| occne_opensearch_master_size | Defines the log_trace_inactive_storage
value.
This is not a mandatory variable. |
1Gi |
| occne_opensearch_data_retention_period | Defines the log_trace_retention_period in
days.
This is not a mandatory variable. |
7 |
| occne_opensearch_http_max_content_length | Defines
http.max_content_length.
This is not a mandatory variable. |
2000mb |
| opensearch_client_cpu_request | Defines the CPU request value for OpenSearch
client.
This is not a mandatory variable. |
1000m |
| opensearch_client_cpu_limit | Defines the CPU limit value for OpenSearch client.
This is not a mandatory variable. |
1000m |
| opensearch_client_memory_request | Defines the memory request value for OpenSearch
client.
This is not a mandatory variable. |
2048Mi |
| opensearch_client_memory_limit | Defines the memory limit value for OpenSearch
client.
This is not a mandatory variable. |
2048Mi |
| opensearch_data_replicas_count | Defines the OpenSearch resource data pods replica
count.
This is not a mandatory variable. |
5 |
| opensearch_master_jvm_memory | Defines the OpenSearch master JVM memory in GB.
This is not a mandatory variable. |
1 |
| opensearch_client_jvm_memory | Defines the OpenSearch client JVM memory in GB.
This is not a mandatory variable. |
1 |
| opensearch_data_jvm_memory | Defines the OpenSearch data JVM memory in GB.
This is not a mandatory variable. |
8 |
| opensearch_data_cpu_request | Defines the CPU request value for OpenSearch data.
This is not a mandatory variable. |
200m |
| opensearch_data_cpu_limit | Defines the CPU limit value for OpenSearch data.
This is not a mandatory variable. |
200m |
| opensearch_data_memory_request | Defines the memory request value for OpenSearch
data.
This is not a mandatory variable. |
10Gi |
| opensearch_data_memory_limit | Defines the memory limit value for OpenSearch
data.
This is not a mandatory variable. |
10Gi |
| opensearch_master_cpu_request | Defines the CPU request value for OpenSearch
master.
This is not a mandatory variable. |
1000m |
| opensearch_master_cpu_limit | Defines the CPU limit value for OpenSearch master.
This is not a mandatory variable. |
1000m |
| opensearch_master_memory_request | Defines the memory request value for OpenSearch
master.
This is not a mandatory variable. |
2048Mi |
| opensearch_master_memory_limit | Defines the memory limit value for OpenSearch
master.
This is not a mandatory variable. |
2048Mi |
| occne_prom_kube_state_metrics_cpu_request | Defines the CPU usage request for kube state
metrics.
This is not a mandatory variable. |
20m |
| occne_prom_kube_state_metrics_cpu_limit | Defines the CPU usage limit for kube state
metrics.
This is not a mandatory variable. |
20m |
| occne_prom_kube_state_metrics_memory_limit | Defines the memory usage limit for kube state
metrics.
This is not a mandatory variable. |
100Mi |
| occne_prom_kube_state_metrics_memory_request | Defines the memory usage request for kube state
metrics.
This is not a mandatory variable. |
32Mi |
| occne_prom_operator_cpu_request | Defines the CPU usage request for Prometheus
operator.
This is not a mandatory variable. |
100m |
| occne_prom_operator_cpu_limit | Defines the CPU usage limit for Prometheus
operator.
This is not a mandatory variable. |
200m |
| occne_prom_operator_memory_request | Defines the memory usage request for Prometheus
operator.
This is not a mandatory variable. |
100Mi |
| occne_prom_operator_memory_limit | Defines the memory usage limit for Prometheus
operator.
This is not a mandatory variable. |
200Mi |
| occne_prom_server_size | Defines total_metrics_storage value.
This is not a mandatory variable. |
8Gi |
| occne_prom_cpu_request | Defines the Prometheus CPU usage request.
This is not a mandatory variable. |
2000m |
| occne_prom_cpu_limit | Defines the Prometheus CPU usage limit.
This is not a mandatory variable. |
2000m |
| occne_prom_memory_request | Defines the Prometheus memory usage request.
This is not a mandatory variable. |
4Gi |
| occne_prom_memory_limit | Defines the Prometheus memory usage limit.
This is not a mandatory variable. |
4Gi |
| occne_prom_grafana_cpu_request | Defines the Grafana CPU usage request.
This is not a mandatory variable. |
100m |
| occne_prom_grafana_cpu_limit | Defines the Grafana CPU usage limit.
This is not a mandatory variable. |
200m |
| occne_prom_grafana_memory_request | Defines the Grafana memory usage request.
This is not a mandatory variable. |
128Mi |
| occne_prom_grafana_memory_limit | Defines the Grafana memory usage limit.
This is not a mandatory variable. |
256Mi |
| occne_metallb_cpu_request | Defines the CPU usage request for Metallb
Controller.
This is not a mandatory variable. |
100m |
| occne_metallb_memory_request | Defines the memory usage request for Metallb
Controller.
This is not a mandatory variable. |
100Mi |
| occne_metallb_cpu_limit | Defines the CPU usage limit for Metallb Controller.
This is not a mandatory variable. |
100m |
| occne_metallb_memory_limit | Defines the memory usage limit for Metallb
Controller.
This is not a mandatory variable. |
100Mi |
| occne_metallbspeaker_cpu_request | Defines the CPU usage request for Metallb speaker.
This is not a mandatory variable. |
100m |
| occne_metallbspeaker_cpu_limit | Defines the CPU usage limit for Metallb speaker.
This is not a mandatory variable. |
100m |
| occne_metallbspeaker_memory_request | Defines the memory usage request for Metallb
speaker.
This is not a mandatory variable. |
100Mi |
| occne_metallbspeaker_memory_limit | Defines the memory usage limit for Metallb speaker.
This is not a mandatory variable. |
100Mi |
| occne_snmp_notifier_destination | Defines the SNMP trap receiver address.
This is not a mandatory variable. |
127.0.0.1:162 |
| occne_fluentd_opensearch_cpu_request | Defines the CPU usage request for Fluentd.
This is not a mandatory variable. |
100m |
| occne_fluentd_opensearch_cpu_limit | Defines the CPU usage limit for Fluentd.
This is not a mandatory variable. |
200m |
| occne_fluentd_opensearch_memory_request | Defines the memory usage request for Fluentd.
This is not a mandatory variable. |
1Gi |
| occne_fluentd_opensearch_memory_limit | Defines the memory usage limit for Fluentd.
This is not a mandatory variable. |
1536Mi |
| occne_jaeger_collector_cpu_request | Defines the CPU usage request for Jaeger collector.
This is not a mandatory variable. |
500m |
| occne_jaeger_collector_cpu_limit | Defines the CPU usage limit for Jaeger collector.
This is not a mandatory variable. |
1250m |
| occne_jaeger_collector_memory_request | Defines the memory usage request for Jaeger
collector.
This is not a mandatory variable. |
512Mi |
| occne_jaeger_collector_memory_limit | Defines the memory usage limit for Jaeger
collector.
This is not a mandatory variable. |
1Gi |
| occne_jaeger_query_cpu_request | Defines the CPU usage request for Jaeger query.
This is not a mandatory variable. |
256m |
| occne_jaeger_query_cpu_limit | Defines the CPU usage limit for Jaeger query.
This is not a mandatory variable. |
500m |
| occne_jaeger_query_memory_request | Defines the memory usage request for Jaeger query.
This is not a mandatory variable. |
128Mi |
| occne_jaeger_query_memory_limit | Defines the memory usage limit for Jaeger query.
This is not a mandatory variable. |
512Mi |
| occne_metrics_server_cpu_request | Defines the CPU usage request for Metrics server.
This is not a mandatory variable. |
100m |
| occne_metrics_server_cpu_limit | Defines the CPU usage limit for Metrics server.
This is not a mandatory variable. |
100m |
| occne_metrics_server_memory_request | Defines the memory usage request for Metrics
server.
This is not a mandatory variable. |
200Mi |
| occne_metrics_server_memory_limit | Defines the memory usage limit for Metrics server.
This is not a mandatory variable. |
200Mi |
| occne_lb_controller_data_size | Defines the data size of the LB controller.
This is not a mandatory variable. |
1Gi |
| occne_bastion_controller_cpu_request | Defines the CPU usage request for Bastion
controller.
This is not a mandatory variable. |
10m |
| occne_bastion_controller_cpu_limit | Defines the CPU usage limit for Bastion controller.
This is not a mandatory variable. |
200m |
| occne_bastion_controller_memory_request | Defines the memory usage request for Bastion
controller.
This is not a mandatory variable. |
128Mi |
| occne_bastion_controller_memory_limit | Defines the memory usage limit for Bastion
controller.
This is not a mandatory variable. |
256Mi |
| occne_prom_kube_alertmanager_cpu_request | Defines the CPU usage request for Alertmanager.
This is not a mandatory variable. |
20m |
| occne_prom_kube_alertmanager_cpu_limit | Defines the CPU usage limit for Alertmanager.
This is not a mandatory variable. |
20m |
| occne_prom_kube_alertmanager_memory_request | Defines the memory usage request for Alertmanager.
This is not a mandatory variable. |
64Mi |
| occne_prom_kube_alertmanager_memory_limit | Defines the memory usage limit for Alertmanager.
This is not a mandatory variable. |
64Mi |
| occne_promxy_cpu_request | Defines the CPU usage request for Promxy.
This is not a mandatory variable. |
100m |
| occne_promxy_cpu_limit | Defines the CPU usage limit for Promxy.
This is not a mandatory variable. |
100m |
| occne_promxy_memory_request | Defines the memory usage request for Promxy.
This is not a mandatory variable. |
512Mi |
| occne_promxy_memory_limit | Defines the memory usage limit for Promxy.
This is not a mandatory variable. |
512Mi |
| occne_kyverno_cpu_request | Defines the CPU usage request for Kyverno.
This is not a mandatory variable. |
100m |
| occne_kyverno_cpu_limit | Defines the CPU usage limit for Kyverno.
This is not a mandatory variable. |
200m |
| occne_kyverno_memory_request | Defines the memory usage request for Kyverno.
This is not a mandatory variable. |
256Mi |
| occne_kyverno_memory_limit | Defines the memory usage limit for Kyverno.
This is not a mandatory variable. |
512Mi |
| occne_kyverno_init_resource_cpu_request | Defines the CPU usage request for Kyverno init.
This is not a mandatory variable. |
10m |
| occne_kyverno_init_resource_cpu_limit | Defines the CPU usage limit for Kyverno init.
This is not a mandatory variable. |
100m |
| occne_kyverno_init_resource_memory_request | Defines the memory usage request for Kyverno init.
This is not a mandatory variable. |
64Mi |
| occne_kyverno_init_resource_memory_limit | Defines the memory usage limit for Kyverno init.
This is not a mandatory variable. |
256Mi |
| occne_snmp_notifier_cpu_request | Defines the CPU usage request for SNMP Notifier.
This is not a mandatory variable. |
100m |
| occne_snmp_notifier_cpu_limit | Defines the CPU usage limit for SNMP Notifier.
This is not a mandatory variable. |
100m |
| occne_snmp_notifier_memory_request | Defines the memory usage request for SNMP
Notifier.
This is not a mandatory variable. |
128Mi |
| occne_snmp_notifier_memory_limit | Defines the memory usage limit for SNMP Notifier.
This is not a mandatory variable. |
128Mi |
| occne_rook_operator_cpu_request | Defines the CPU usage request for Rook Operator.
This is not a mandatory variable. |
200m |
| occne_rook_operator_cpu_limit | Defines the CPU usage limit for Rook Operator.
This is not a mandatory variable. |
1500m |
| occne_rook_operator_memory_request | Defines the memory usage request for Rook
Operator.
This is not a mandatory variable. |
128Mi |
| occne_rook_operator_memory_limit | Defines the memory usage limit for Rook Operator.
This is not a mandatory variable. |
512Mi |
| occne_rook_cluster_mgr_cpu_request | Defines the CPU usage request for Rook Cluster
Manager.
This is not a mandatory variable. |
500m |
| occne_rook_cluster_mgr_cpu_limit | Defines the CPU usage limit for Rook Cluster
Manager.
This is not a mandatory variable. |
500m |
| occne_rook_cluster_mgr_memory_request | Defines the memory usage request for Rook Cluster
Manager.
This is not a mandatory variable. |
1024Mi |
| occne_rook_cluster_mgr_memory_limit | Defines the memory usage limit for Rook Cluster
Manager.
This is not a mandatory variable. |
1024Mi |
| occne_rook_cluster_mon_cpu_request | Defines the CPU usage request for Rook Cluster
Monitor.
This is not a mandatory variable. |
500m |
| occne_rook_cluster_mon_cpu_limit | Defines the CPU usage limit for Rook Cluster
Monitor.
This is not a mandatory variable. |
500m |
| occne_rook_cluster_mon_memory_request | Defines the memory usage request for Rook Cluster
Monitor.
This is not a mandatory variable. |
1024Mi |
| occne_rook_cluster_mon_memory_limit | Defines the memory usage limit for Rook Cluster
Monitor.
This is not a mandatory variable. |
1024Mi |
| occne_rook_cluster_osd_cpu_request | Defines the CPU usage request for Rook Cluster
Object Storage Daemon (OSD).
This is not a mandatory variable. |
500m |
| occne_rook_cluster_osd_cpu_limit | Defines the CPU usage limit for Rook Cluster OSD.
This is not a mandatory variable. |
1000m |
| occne_rook_cluster_osd_memory_request | Defines the memory usage request for Rook Cluster
OSD.
This is not a mandatory variable. |
4Gi |
| occne_rook_cluster_osd_memory_limit | Defines the memory usage limit for Rook Cluster
OSD.
This is not a mandatory variable. |
8Gi |
| occne_rook_cluster_prepareosd_cpu_request | Defines the CPU usage request for Rook Cluster
Prepareosd.
This is not a mandatory variable. |
500m |
| occne_rook_cluster_prepareosd_cpu_limit | Defines the CPU usage limit for Rook Cluster
Prepareosd.
This is not a mandatory variable. |
1000m |
| occne_rook_cluster_prepareosd_memory_request | Defines the memory usage request for Rook Cluster
Prepareosd.
This is not a mandatory variable. |
1024Mi |
| occne_rook_cluster_prepareosd_memory_limit | Defines the memory usage limit for Rook Cluster
Prepareosd.
This is not a mandatory variable. |
2Gi |
| occne_cert_exporter_cpu_request | Defines the CPU usage request for cert-exporter.
This is not a mandatory variable. |
100m |
| occne_cert_exporter_cpu_limit | Defines the CPU usage limit for cert-exporter.
This is not a mandatory variable. |
200m |
| occne_cert_exporter_memory_request | Defines the memory usage request for cert-exporter.
This is not a mandatory variable. |
128Mi |
| occne_cert_exporter_memory_limit | Defines the memory usage limit for cert-exporter.
This is not a mandatory variable. |
256Mi |
| multus_thick_cpu_request | Defines the CPU usage request for Multus Thick.
This is not a mandatory variable. |
250m |
| multus_thick_cpu_limit | Defines the CPU usage limit for Multus Thick.
This is not a mandatory variable. |
500m |
| multus_thick_memory_request | Defines the memory usage request for Multus Thick.
This is not a mandatory variable. |
256Mi |
| multus_thick_memory_limit | Defines the memory usage limit for Multus Thick.
This is not a mandatory variable. |
512Mi |
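As a minimal illustrative sketch, a few of the Table A-14 variables might be set in the [occne:vars] section of the .ini file as follows. The values shown are the table defaults and must be tuned to your environment:
[occne:vars]
# Storage sizing gathered during the Preinstallation Tasks
occne_opensearch_data_size = 10Gi
occne_prom_server_size = 8Gi
# Log retention period in days
occne_opensearch_data_retention_period = 7
# SNMP trap receiver address
occne_snmp_notifier_destination = 127.0.0.1:162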
Configuration for Service Mesh PKI Integration
Update the file /var/occne/cluster/${OCCNE_CLUSTER}/ca-config.ini to define the required Ansible variables for the deployment. Table A-15 describes the list of variables defined in the ca-config.ini file. A starting point for this file is provided as /var/occne/cluster/${OCCNE_CLUSTER}/ca-config.ini.template.
Table A-15 CA Config Variables
| ca-config.ini Variable | Definition | Default |
|---|---|---|
| occne_ca_issuer_type |
Defines the CA issuer type. Allowed values are internal and intermediate. If you set occne_ca_issuer_type to internal, then the Istio CA is used. If you set occne_ca_issuer_type to intermediate, then the CA certificate and key provided in occne_ca_certificate and occne_ca_key are used.
|
internal |
| occne_ca_certificate | Defines the base64 encoded CA certificate value
and required only when occne_ca_issuer_type is
intermediate.
|
"" |
| occne_ca_key | Defines the base64 encoded CA key value and
required only when occne_ca_issuer_type is
intermediate.
|
"" |
| occne_ca_client_max_duration | Maximum validity duration that can be requested for a client certificate, in XhYmZs format. If occne_ca_client_max_duration is not set, the certificate validity duration defaults to one year.
|
"" |
Optimizing Linux System Auditing
Table A-16 Linux System Auditing Optimization Variable
| Variable | Description | Default Value |
|---|---|---|
| audit_backlog_limit | This configuration setting in Linux systems
governs the maximum number of audit events the kernel can retain
in the audit buffer, before it starts to discard the older
events. Tuning this limit is essential to ensure that the
critical audit events are not lost during the peak activity
periods. Increasing or appropriately adjusting this limit is
crucial for systems with high volumes of audit events to ensure
that vital information is available for analysis and system
security.
This is not a mandatory variable. |
8192 |
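For example, the limit can be raised in the [occne:vars] section, and the value in effect on a running host can be checked with auditctl, which reports the backlog_limit and lost-event counters. The 16384 value is an illustrative assumption:
# In the [occne:vars] section of the .ini file
audit_backlog_limit = 16384
# On a running host, inspect the in-effect limit and the lost-event counter
$ sudo auditctl -s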
Enabling or Disabling Kubernetes Network Policies
Table A-17 Network Policy Creation Variable
| Variable | Description | Default Value |
|---|---|---|
| occne_infra_network_policy_creation | Creates the Kubernetes network policies on the
namespace where the CNE infrastructure services are created.
This is not a mandatory variable. |
True |
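For example, to skip creating the network policies on the CNE infrastructure namespace, set the variable in the [occne:vars] section (a minimal sketch):
[occne:vars]
occne_infra_network_policy_creation = False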
Sizing vCNE VMs
For a virtualized CNE, the VMs can be sized to host each node in the CNE so that the resources used by each node closely match the expected workload. This section provides recommendations on VM sizes for each node type. Note that these are sizing guidelines. Customers do not have to use these exact sizes, although creating smaller VMs than the minimum recommended sizes can result in a CNE that performs poorly.
Bootstrap VM
Table A-18 Bootstrap VM
| VM name | vCPUs | RAM | DISK | Comments |
|---|---|---|---|---|
| Bootstrap host | 2 | 8 GB | 40 GB | Delete the Bootstrap Host VM after the CNE installation is complete. |
Table A-19 CNE Loadbalancer VMs
| VM name | vCPUs | RAM | DISK |
|---|---|---|---|
| <cluster_name>-<peer_pool_name>-lbvm | 4 | 4 GB | 40 GB |
Kubernetes VMs
Master nodes
Note:
3 master nodes are required.
GCE and AWS have established consistent sizing guidelines for master node VMs. CNE follows these generally accepted guidelines.
Table A-20 Kubernetes Master Node
| VM name | vCPUs | RAM | DISK | Comments |
|---|---|---|---|---|
| K8s Master - large | 4 | 15 GB | 40 GB | For K8s clusters with 1-100 worker nodes. |
Worker nodes
A minimum of 6 worker nodes is required. You can add more worker nodes if you expect a high 5G traffic volume or if you want to install multiple NFs in the cluster.
Both GCE and AWS offer several machine types. Follow the general-purpose VM sizing, which is typically around 4 GB of RAM per vCPU.
Table A-21 Kubernetes Worker Node
| VM Name | vCPUs | RAM | DISK |
|---|---|---|---|
| K8s worker - medium | 8 | 30 GB | 40 GB |
| K8s worker - large | 16 | 60 GB | 40 GB |
| K8s worker - extra large | 32 | 120 GB | 40 GB |
Note:
The values mentioned above are suggested worker node sizes. The actual size is determined only after testing the environment.
Bastion host VMs
The Bastion hosts will have light, occasional workloads with a few persistent processes.
Table A-22 Bastion Host VMs
| VM name | vCPUs | RAM | DISK |
|---|---|---|---|
| Bastion host | 1 | 8 GB | 100 GB |
Updating cluster.tfvars for CNE Installation on OpenStack
This section provides information about updating
cluster.tfvars depending on the type of Load Balancer used
for CNE Installation on OpenStack.
Updating cluster.tfvars for MetalLB
This section provides information about updating
cluster.tfvars when you want to use MetalLB for network
management.
Common Configuration
- Run the
following command to create a copy of the
occne_exampledirectory and its content. The command creates a cluster name specific directory and thecluster.tfvarsfile that is used for configuring terraform. The configuration changes indicated in this procedure are made in the new copy of thecluster.tfvarsfile.$ cp -R /var/occne/cluster/${OCCNE_CLUSTER}/occne_example /var/occne/cluster/${OCCNE_CLUSTER}/${OCCNE_CLUSTER}Sample output:-rw-r--r--. 1 <user-name> <user-name> 6390 Aug 24 23:18 /var/occne/cluster/occne1-rainbow/occne1-rainbow/cluster.tfvars drwxr-xr-x. 2 <user-name> <user-name> 28 Aug 24 23:18 /var/occne/cluster/occne1-rainbow/occne1-rainbow - Access the
OpenStack Dashboard to retrieve the information
needed to configure the
cluster.tfvarsfile:- Set the different flavor settings according to the recommendations from the Sizing vCNE VMs section in this document. An admin user of the customer specific OpenStack Environment must add the flavors and provide the names of the flavors for configuration into the cluster.tfvars file. The name of each specific flavor that is used must be added to the value field of the key/value fields in the cluster.tfvars file.
- Once
the flavors are added to the OpenStack
environment, you can use the OpenStack Dashboard
to retrieve the flavor name:
Note:
The options may vary depending on the environment that you are using.- On the OpenStack Dashboard, navigate to Compute→Instances→Launch Instance to open a new dialog.
- Select Flavor from the left side panel.
- Scroll the list of available options or use the Available filter to verify if the flavor is present.
- Click Cancel to close the dialog.
-
Perform the following steps to retrieve the
external_net UUID:
- On the OpenStack Dashboard, navigate to Network→Networks and search for the appropriate external network.
- Click the network name. For
example,
ext-net. - Get the ID from the
Overview tab.
For example:
Name ext-net ID 2ddd3534-8875-4357-9d67-d8229dae81ff - On the
Subnets tab, click the
appropriate subnet to retrieve the ID.
For example:
Name ext-net-subnet ID a3d433fe-d931-4dca-bf6b-123dde4a94de
- [Optional]: If
<user-name> is set to a value other than "cloud-user", uncomment the ssh_user field and set it to the desired <user-name>. Note:
The <user-name> value must be the same as the one used in the previous sections of this procedure. That is, it must be the same as $OCCNE_USER and $USER. ssh_user = "<user-name>"
imagefield to the same value that was used when uploading the image to Openstack:For example:
image = "ol9u5"Note:
This image is the boot image of all VMs in the cluster. - The following fields define the default
number of each node type. For a standard deployment,
use the default values. You can update these values
if the deployment requires additional nodes.
Note:
The following fields are integer values and don't require double quotation marks. number_of_bastions number_of_k8s_nodes number_of_k8s_ctrls_no_floating_ip WARNING: Set the number of control nodes to an odd number. The recommended value for number_of_k8s_ctrls_no_floating_ip is 3.
- Set the corresponding flavors for each node
type to a unique flavor name. Flavors are provided
from the OpenStack Provider administrator.
flavor_bastionflavor_k8s_ctrlflavor_k8s_node
There are four settings for server group affinity, one for each group type (Bastion Hosts, k8s Nodes, k8s Control Nodes, and LBVMs). Currently, all of these settings default to anti-affinity. Using this default forces terraform/openstack to create the VMs for each server group on different hosts within the OpenStack environment. If there are not enough hosts to perform this task, terraform fails, which causes the deploy.sh command to fail. Soft-anti-affinity can also be used to spread the VMs across the hosts as much as possible without failing. The only other option available is affinity. Note:
Before trying to use soft-anti-affinity, ensure that your OpenStack release supports the use of that value. Also, ensure that anti-affinity works for your deployment (that is, the number of hosts is greater than or equal to the number of Kubernetes worker nodes or LBVMs, whichever is greater). The OpenStack instance where CNE will be installed must have at least as many physical servers as the larger of the number of Kubernetes nodes or the number of LB VMs (the number of LB VMs is equal to the number of configured external networks times 2).
- k8s_ctrl_server_group_policies
- k8s_node_server_group_policies
- bastion_server_group_policies
- lbvm_server_group_policies
- Set the cluster_name
Note:
Kubespray doesn't allow uppercase characters in node hostnames. So, don't use any uppercase characters when defining the cluster_name field. # Kubernetes short cluster name cluster_name = "<cluster-short-name>" # networking network_name = "<cluster-short-name>"
- If the
deployment requires a specific Availability Zone other than the default
availability zone called nova, make the
following changes in the
cluster.tfvarsfile. This value can be added after thecluster_namefield. If you want to usenova, then skip this step.az_list = ["<availability_zone_name>"]For example:# Authorizing_Zone az_list = ["foobar"] - If the deployment requires the delivery of
metadata and user data through a configuration drive for each instance, add the
following changes in the
cluster.tfvarsfile. This is an optional value. The default value is occne_config_drive = "false" that indicates using a metadata server instead of a config drive for OpenStack.Note:
Ensure that the OpenStack administrator did not set the force_config_drive=true in the /etc/nova/nova.conf file, otherwise it uses the config drive in either case. The sample configuration format is as follows:occne_config_drive = "<true/false>"Example:occne_config_drive = "true" - If the deployment requires external connections to nodes and controllers, set the
occne_k8s_floating_ipvariable to true in thecluster.tfvarsfile. This is an optional value. The default value is set to false, which indicates that the nodes and controllers doen't have an assigned floating IP. For more information about floating IPs, see Enabling or Disabling Floating IP in OpenStack. - Perform the following steps to retrieve the
floatingip_poolandexternal_netID name from Openstack Dashboard and set the variables on thecluster.tfvarsfile:- Obtain the value of
floatingip_poolandexternal_netfrom Openstack Dashboard:- On the OpenStack Dashboard, navigate to Network → Networks.
- From the list, select an existing external network and click Overview.
- Obtain the value of
floatingip_poolfrom theNamefield andexternal_netfrom theIDfield.For example, the following codeblock displays the data displayed by OpenStack Dashboards. This data may vary depending on the OpenStack environment:ext-net2 Overview / Subnets / Ports Name ext-net2 ID 4ebb3784-0192-7482-9d67-a1389c3a8a93 Project ID 3bf3937f03414845ac09d41e6cb9a8b2 Status Active Admin State UP Shared Yes External Network Yes MTU 9050
- Assign the
floatingip_poolfield in thecluster.tfvarsfile.floatingip_pool = "<floating_ip_pool_name>"where,
<floating_ip_pool_name>is the name of the external network obtained in step a.For example:floatingip_pool = "ext-net2" - Assign the
external_netfield in thecluster.tfvarsfile.external_net = "<network UUID>"where,
<network UUID>is the ID of the external network obtained in step a.For example:external_net = "4ebb3784-0192-7482-9d67-a1389c3a8a93"
- Obtain the value of
Configuring Custom Configurable Volumes (CCV)
Note:
- This procedure is optional. Perform this procedure only if you choose to use CCV over the standard configuration.
- CCV and the standard hard disk configuration are mutually exclusive. You can choose one of the two.
- If you select CCV, all resources use the custom boot disk configuration. It can't be used on a per VM resource basis.
- The size of the volume created must be equal to the size of the disk as defined in the flavor that is used for the VM.
- Set use_configurable_boot_volume to
true. - Set the volume size variables to the size
of the hard disk as defined in the flavor's
Diskfield assigned to the given resource type. Flavors can be listed using the OpenStack Dashboard.Note:
The volume size settings must match the disk size defined by the flavor setting for each resource type. The defaults listed in this example are derived from the current flavor settings. Update the setting correctly.For example:occne_lbvm_boot_volume_size = 40 occne_bh_boot_volume_size = 100 occne_k8s_ctrl_boot_volume_size = 40 occne_k8s_node_boot_volume_size = 60 - Set
occne_boot_volume_image_idto the ID of the image from which the VM is booted (Example: ol9u5). Perform the following steps to obtain the image ID from OpenStack Dashboard:- Select Compute →Images.
- From the list, select the appropriate OL
Ximage. - Obtain the value of
occne_boot_volume_image_idfrom theIDfield.For example:→ test Image Active Public No QCOW2 641.13 MB Launch ↓ ol9u5 Image Active Shared No QCOW2 708.81 MB Launch ------------------------------------------------------------------------------------ Name Visibility Min. Disk ol9u5 Shared 0 ID Protected Min. RAM 2c673fc4-9d72-4a60-bb14-bc44f636ee94 No 0 ------------------------------------------------------------------------------------ → test-imag Image Active Public No QCOW2 988.75 MB Launch
- By default, occne_delete_on_termination is set to true, which means that the volumes are deleted along with the VMs (recommended behavior). You can set this field to false if you want to preserve the volumes while deleting VMs. A consolidated example is shown below.
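Putting the CCV settings together, a cluster.tfvars fragment might look like the following sketch. The sizes repeat the flavor-derived examples above, and the image ID repeats the ol9u5 ID from the earlier dashboard example; all values must match your own flavors and uploaded image:
use_configurable_boot_volume = true
occne_lbvm_boot_volume_size = 40
occne_bh_boot_volume_size = 100
occne_k8s_ctrl_boot_volume_size = 40
occne_k8s_node_boot_volume_size = 60
occne_boot_volume_image_id = "2c673fc4-9d72-4a60-bb14-bc44f636ee94"
# Default is true; set to false to preserve the volumes when VMs are deleted
occne_delete_on_termination = true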
Configuring MetalLB in tfvars file
MetalLB replaces Octavia (which is no longer supported) for vCNE deployments on OpenStack and is the default mechanism for load balancing in vCNE.
The vCNE deployment uses the following configuration to pre-reserve the ports used for external access to the services created during the configure stage of the deployment. It generates the mb_configmap.yaml file from the pre-reserved ports. The mb_configmap.yaml file is used to configure MetalLB during the configure stage of the deployment.
The pre-reserved ports are visible on the OpenStack Dashboard (navigate to Network → Networks → <Network Name> → Ports). You can sort the ports using the filter option. The port names use the format <cluster_name>-mb-<pool_name>-port-n, where n runs from 0 to the sum of num_ports for all pool names included in the tfvars file. For example, for pool_name oam, the assigned port names of the oam pool are in the format <cluster_name>-mb-oam-port-n, such as mycne-cluster-mb-oam-port-1.
Note:
Ensure that the pool_name is not a substring of the cluster_name. For example, if the cluster_name is misignal1, then the pool_name must not be sig.
- MetalLB allows Terraform to automatically create the
necessary Load Balancing VMs (LBVMs) to support service load
balancing for CNE (and the NFs as they are configured). An
LBVM is created per Peer Address Pool name.
Note:
The oam pool is required, while other network pools are NF application specific. The VMs are named according to the following format in OpenStack:<cluster_name>-<peer_address_pool>-lbvm-1 or 2and are visible from the OpenStack Dashboard. - Update the occne_metallb_flavor field.
Ensure that the field is set as per the Oracle sizing charts
Sizing vCNE VMs in Reference 1. Obtain the flavor
detail from the OpenStack Dashboard or the OpenStack
administrator.
occne_metallb_flavor="<lbvm_flavor_name>"Example:occne_metallb_flavor="OCCNE-lbvm-host" - Update the occne_metallb_peer_addr_pool_names
list variable. It can take any value as address pool name,
for example, "oam", "signaling", "random1", and so on. Set
this field depending on what you want to configure, for
example,
["oam"]or["oam", "signaling"]or["oam", "signaling", "random1"].Note:
Don't use any special characters or capital letters in the pool name. The name must contain only characters from the [a-z,1-9] set. There is no limitation on the number of characters used in a pool name. However, it's recommended to keep the name as short as possible, as the name is used to build the LBVM hostnames. For example, for LBVM1 with pool name "oam" and cluster name "my-cluster", the LBVM hostname built is "my-cluster-oam-lbvm1". Linux limits hostnames to 253 ASCII characters, allows [a-z,0-9] characters including the dots, and each section of the hostname can contain only 63 characters. occne_metallb_peer_addr_pool_names=["oam","poolname"] Examples: occne_metallb_peer_addr_pool_names=["oam"] occne_metallb_peer_addr_pool_names=["oam","signaling"] occne_metallb_peer_addr_pool_names=["oam","signaling","nfpoolname1"]
address pools. This part of the MetalLB configuration is used to pre-reserve the ports for
MetalLB to use in the generated
mb_configmap.yamlfile.There can be more than one pool added to the
tfvarsfile and each can be configured with a different IP list field as given in the following table.When configuring the port objects, use only one of the three input field types to define the IPs for each peer address pool:ip_list,ip_range, orip_cidr. Delete the other two unused input fields for that peer address pool. Ensure that you follow the above process for all port objects. If you fail to configure any of the fields correctly, then Terraform will fail at the deployment stage before it gets to the configure stage.Note:
If you configure more than one IP field per peer address pool, the first one that is found is selected.In
occne_metallb_list, the fieldnum_poolsis the number of pools mentioned inoccne_metallb_peer_addr_pool_names.For example, for
occne_metallb_peer_addr_pool_names=["oam","signaling","xyz","abc1"], the value ofnum_poolsis 4.Each object includes the following common key or value pairs except the type of field it represents:num_ports: specifies the number of ports to reserve for the given peer address pool (use this key without quotation marks).subnet_id: specifies the OpenStack UUID (or ID) for the subnet from which the ports are configured within.network_id: specifies the OpenStack network UUID (or ID) which the subnet is configured within.pool_name: specifies one of the peer address pool names given inoccne_metallb_peer_addr_pool_names.egress_ip_addr: specifies the IP address used for LBVM egress interface. Each network pool has only one address.- Configuring IP input field: use only one of the three IP address input fields for each peer address pool. Delete the other two unused input fields.
Table A-23 MetalLB Fields
Field Name Notes ip_list Represents a random list of IPs from the same subnet or network for the peer address pool. Define the ip_list within square brackets [], with each value surrounded by double quotation marks and separated by a comma. The ports (IPs) defined within this list are available within the configured OpenStack network or subnet.
Note: The number of IPs provided in the list must be equal to or greater than the setting of
num_ports. If the number of IPs in the list is greater than num_ports, only the first num_ports IPs are used to pre-reserve the ports for MetalLB. All the IPs must be unique. An IP used in one address pool can't be used in any other address pool. You can define multiple pools at a time by adding each pool object to the comma-separated
pool_objectlist. The following codeblock provides a sample template for adding a singlepool object. Copy and paste the same template for eachpool objectthat you want to add.{ pool_name = "<pool_name>" num_ports = <no_of_ips_needed_for_this_addrs_pool_object> ip_list = ["<ip_0>","<ip_(num_ports-1)>"] subnet_id = "<subnet UUID for the given network>" network_id = "<network UUID>" egress_ip_addr = "<IP address for egress port>" }Example:
num_poolsis 1 andoccne_metallb_peer_addr_pool_names=["oam"]occne_metallb_list = { num_pools = 1 pool_object = [ { pool_name = "oam" num_ports = 6 ip_list = ["10.10.235.1","10.10.235.3","10.10.235.19","10.10.235.6","10.10.235.21","10.10.235.45"] subnet_id = "c3a5381a-3a17-4775-8e42-a1c816414e12" network_id = "cd132f1f-1a31-1a1f-b69a-ade2f7a283f4" egress_ip_addr = "10.10.235.56" } ] }ip_range Represents a range of IPs from the same subnet or network for the peer address pool. Define the ip_range as a single string within double quotation marks and separate the min and max IPs using a dash '-".
Note:
You can extend the range of IPs beyond the setting of num_ports, but ensure that the range is not less than num_ports. If num_ports is less than the range, only the first num_ports IPs are used to pre-reserve the ports for MetalLB. All the IPs must be unique. An IP used in one address pool can't be used in any other address pool. You can define multiple pools at a time by adding each pool object to the comma-separated
pool_objectlist. The following codeblock provides a sample template for adding a singlepool object. Copy and paste the same template for eachpool objectthat you want to add.{ pool_name = "<pool_name>" num_ports = <no_of_ips_needed_for_this_addrs_pool_object> ip_range = "<ip_n> - <ip_(n + num_ports - 1)>" subnet_id = "<subnet UUID for the given network>" network_id = "network UUID" }Example: num_pools is 1 and occne_metallb_peer_addr_pool_names=["oam"]
occne_metallb_list = { num_pools = 1 pool_object = [ { pool_name = "oam" num_ports = 6 ip_range = "10.10.235.10 - 10.10.235.15" subnet_id = "c3a5381a-3a17-4775-8e42-a1c816414e12" network_id = "cd132f1f-1a31-1a1f-b69a-ade2f7a283f4" egress_ip_addr = "10.10.235.17" } ] }cidr Represents a range of IPs from the same subnet or network for the peer address pool using the cidr slash notation. This must be a single string enclosed within double quotation marks with a starting IP and the netmask as designated by the forward slash "/".
Note: You can extend the range of IPs determined by the cidr beyond the setting of num_ports but can't be less than num_ports. If num_ports is less than the range provided by the cidr slash notation, only the first num_ports IPs are used to pre-reserve the ports for MetalLB.
You can define multiple pools at a time by adding each pool object to the comma-separated
pool_objectlist. The following codeblock provides a sample template for adding a singlepool object. Copy and paste the same template for eachpool objectthat you want to add.{ pool_name = "<pool_name>" num_ports = <no_of_ips_needed_for_this_addrs_pool_object> cidr = "<0.0.0.0/29>" subnet_id = "<subnet_id or network UUID for cidr>" network_id = "network_id" egress_ip_addr = "<IP address for egress port>" }Example: num_pools is 1 and occne_metallb_peer_addr_pool_names=["oam"]
occne_metallb_list = { num_pools = 1 pool_object = [ { pool_name = "oam" num_ports = 6 cidr = "10.10.235.8/29" subnet_id = "c3a5381a-3a17-4775-8e42-a1c816414e12" network_id = "cd132f1f-1a31-1a1f-b69a-ade2f7a283f4" egress_ip_addr = "10.10.235.50" } ] } - If you want to use more than one pool, and for each
pool you want to input IPs in a different way, then follow
the example given below:
num_poolsis 4 andoccne_metallb_peer_addr_pool_names=["oam","signaling","userpool1","userpool2"], here:- ip_list, is used for "oam" and assigned with 6 IPs to it
- ip_range, is used for "userpool1"
- cidr, is used for "userpool2"
Note:- Ensure that there are no extra lines.
- Each pool in the
pool_objectlist must contain only five input fields. oam is a required pool and other network pools are application-specific. - Ensure that you use only one IP input field per pool. Delete the other two unused input fields.
- Ensure that the IP is used only once in a pool. Don't use this IP in another pool.
- Different pools can have various network and subnet IDs.
- The
num_portsfield must contain an integer input (without quotation marks).
Example for more than one address pool:occne_metallb_list = { num_pools = 4 pool_object = [ { pool_name = "oam" num_ports = 6 ip_list = ["10.10.235.1","10.10.235.3","10.10.235.19","10.10.235.6","10.10.235.21","10.10.235.45"] subnet_id = "a1f0c182-af23-1a12-a7a3-118223fd9c10" network_id = "f1251f4f-81e3-4a23-adc2-def1294490c2" egress_ip_addr = "10.10.235.2" }, { pool_name = "userpool1" num_ports = 4 ip_range = "10.10.235.34 - 10.10.235.37" subnet_id = "a1f0c182-af23-1a12-a7a3-118223fd9c10" network_id = "f1251f4f-81e3-4a23-adc2-def1294490c2" egress_ip_addr = "10.10.235.38" }, { pool_name = "userpool2" num_ports = 2 cidr = "10.10.231.0/29" subnet_id = "c3a5381a-3a17-4775-8e42-a1c816414e12" network_id = "cd132f1f-1a31-1a1f-b69a-ade2f7a283f4" egress_ip_addr = "10.10.231.12" } ] }
Updating cluster.tfvars for CNLB
This section provides information about updating cluster.tfvars when you want to use CNLB for network management.
Common Configuration
- Run the
following command to create a copy of the
occne_exampledirectory and its content. The command creates a cluster name specific directory and thecluster.tfvarsfile that is used for configuring OpenTofu. The configuration changes indicated in this procedure are made in the new copy of thecluster.tfvarsfile.$ cp -R /var/occne/cluster/${OCCNE_CLUSTER}/occne_example /var/occne/cluster/${OCCNE_CLUSTER}/${OCCNE_CLUSTER}Sample output:-rw-r--r--. 1 <user-name> <user-name> 6390 Aug 24 23:18 /var/occne/cluster/occne1-rainbow/occne1-rainbow/cluster.tfvars drwxr-xr-x. 2 <user-name> <user-name> 28 Aug 24 23:18 /var/occne/cluster/occne1-rainbow/occne1-rainbow - Access the
OpenStack Dashboard to retrieve the information
needed to configure the
cluster.tfvarsfile:- Set the different flavor settings according to the recommendations from the Sizing vCNE VMs section in this document. An admin user of the customer specific OpenStack Environment must add the flavors and provide the names of the flavors for configuration into the cluster.tfvars file. The name of each specific flavor that is used must be added to the value field of the key/value fields in the cluster.tfvars file.
- Once
the flavors are added to the OpenStack
environment, you can use the OpenStack Dashboard
to retrieve the flavor name:
Note:
The options may vary depending on the environment that you are using.- On the OpenStack Dashboard, navigate to Compute→Instances→Launch Instance to open a new dialog.
- Select Flavor from the left side panel.
- Scroll the list of available options or use the Available filter to verify if the flavor is present.
- Click Cancel to close the dialog.
-
Perform the following steps to retrieve the
external_net UUID:
- On the OpenStack Dashboard, navigate to Network→Networks and search for the appropriate external network.
- Click the network name. For
example,
ext-net. - Get the ID from the
Overview tab.
For example:
Name ext-net ID 2ddd3534-8875-4357-9d67-d8229dae81ff - On the
Subnets tab, click the
appropriate subnet to retrieve the ID.
For example:
Name ext-net-subnet ID a3d433fe-d931-4dca-bf6b-123dde4a94de
- [Optional]: If
<user-name> is set to a value other than "cloud-user", uncomment the ssh_user field and set it to the desired <user-name>.
Note:
The <user-name> value must be the same as the one used in the previous sections of this procedure. That is, it must be the same as $OCCNE_USER and $USER.
ssh_user = "<user-name>"
- Set the
image field to the same value that was used when uploading the image to OpenStack.
For example:
image = "ol9u5"
Note:
This image is the boot image for all VMs in the cluster.
- The following fields define the default
number of each node type. For a standard deployment,
use the default values. You can update these values
if the deployment requires additional nodes.
occne_bastion_namesoccne_node_namesoccne_control_names
Note:
- These fields are defined as lists of string
values. The values cannot repeat and can contain a
maximum of 4 characters.
For example:
occne_bastion_names = ["1", "2"]occne_control_names = ["1", "2", "3"]occne_node_names = ["1", "2", "3", "4"]
- The number of control nodes must be set to an
odd number. The recommended value for
occne_control_namesis["1", "2", "3"].
- Set the corresponding flavor for each node type to a unique flavor name. Flavors are provided by the OpenStack Provider administrator. The fields are listed here; see the example after this list.
flavor_bastion
flavor_k8s_ctrl
flavor_k8s_node
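For example, the resulting cluster.tfvars entries could look like the following sketch; the flavor names shown here are illustrative placeholders, and you must use the exact names provided by your OpenStack administrator:
flavor_bastion = "occne.bastion"
flavor_k8s_ctrl = "occne.ctrl"
flavor_k8s_node = "occne.node"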
There are four settings for server group affinity, one for each group type (Bastion Hosts, k8s Nodes, k8s Control Nodes, and LBVMs). Currently, all of these settings default to anti-affinity. This default forces OpenTofu/OpenStack to create the VMs of each server group on different hosts within the OpenStack environment. If there are not enough hosts to do this, OpenTofu fails, causing the deploy.sh command to fail. soft-anti-affinity can also be used to spread the VMs across the hosts as much as possible without failing. The only other available option is affinity.
Note:
Before using soft-anti-affinity, ensure that your OpenStack release supports that value. Also, ensure that anti-affinity works for your deployment (that is, the number of hosts is greater than or equal to the number of Kubernetes nodes). The OpenStack instance where CNE is installed must have at least as many physical servers as Kubernetes nodes.
- bastion_server_group_policies
- k8s_ctrl_server_group_policies
- k8s_node_server_group_policies
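For illustration, the following cluster.tfvars sketch keeps the default anti-affinity for Bastion Hosts and control nodes and relaxes the worker node policy; the single-element list format is an assumption based on the OpenTofu server group configuration, and the right values depend on the host capacity of your OpenStack environment:
bastion_server_group_policies = ["anti-affinity"]
k8s_ctrl_server_group_policies = ["anti-affinity"]
k8s_node_server_group_policies = ["soft-anti-affinity"]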
- Set the cluster_name field.
Note:
Kubespray does not allow uppercase characters in node hostnames. Therefore, do not use any uppercase characters when defining the cluster_name field.
# Kubernetes short cluster name
cluster_name = "<cluster-short-name>"
- The field bastion_allowed_remote_ips defines the configuration for the bastion networking security group. This field must remain in the default value.
- Set the
ntp_server field. If the OpenStack
environment has a NTP service, then use the cloud IP of the OpenStack URL. If not, use a
customer-specific NTP server. You can use the
pingcommand to get the IP of the OpenStack environment:$ ping <openstack_provider_url>For example:$ ping openstack-cloud.us.oracle.comSample output:PING srv-10-10-10-10.us.oracle.com (10.10.10.10) 56(84) bytes of data. 64 bytes from srv-10-10-10-10.us.oracle.com (10.10.10.10): icmp_seq=1 ttl=63 time=0.283 ms - If the
deployment requires a specific Availability Zone other than the default
availability zone called nova, make the
following changes in the
cluster.tfvarsfile. This value can be added after thecluster_namefield. If you want to usenova, then skip this step.az_list = ["<availability_zone_name>"]For example:# Authorizing_Zone az_list = ["foobar"] - If the deployment requires the delivery of
metadata and user data through a configuration drive for each instance, add the
following changes in the
cluster.tfvarsfile. This is an optional value. The default value is occne_config_drive = "false" which indicates using a metadata server instead of a configuration drive for OpenStack.Note:
Ensure that the OpenStack administrator did not set force_config_drive=true in the /etc/nova/nova.conf file; otherwise, the configuration drive is used regardless of this setting. The sample configuration format is as follows:
occne_config_drive = "<true/false>"
Example:
occne_config_drive = "true"
set the
occne_k8s_floating_ipandoccne_k8s_floating_ip_assocvariables to true in thecluster.tfvarsfile. However, these values are optional. The default value is set to false, which indicates that the nodes and controllers doesn’t have an assigned floating IP. For more information about floating IPs, see Enabling or Disabling Floating IP in OpenStack.Note:
Ensure that bothoccne_k8s_floating_ipandoccne_k8s_floating_ip_assochave the same value.The following example provides thecluster.tfvarsconfiguration when floating IP is enabled:occne_k8s_floating_ip = true occne_k8s_floating_ip_assoc = trueThe following example provides thecluster.tfvarsconfiguration when floating IP is disabled:occne_k8s_floating_ip = false occne_k8s_floating_ip_assoc = false - Perform the following steps to retrieve the
floatingip_poolandexternal_netID name from an existing external network created by the Openstack provider and set the variables on thecluster.tfvarsfile:- Obtain the value of
floatingip_poolandexternal_netfrom Openstack Dashboard:- On the OpenStack Dashboard, navigate to Network → Networks.
- From the list, select an existing external network and click Overview.
- Obtain the value of
floatingip_pool from the Name field and external_net from the ID field.
For example, the following code block shows sample data displayed by the OpenStack Dashboard. This data may vary depending on the OpenStack environment:
ext-net2
Overview / Subnets / Ports
Name ext-net2
ID 4ebb3784-0192-7482-9d67-a1389c3a8a93
Project ID 3bf3937f03414845ac09d41e6cb9a8b2
Status Active
Admin State UP
Shared Yes
External Network Yes
MTU 9050
- Assign the
floatingip_poolfield in thecluster.tfvarsfile.floatingip_pool = "<floating_ip_pool_name>"where,
<floating_ip_pool_name>is the name of the external network obtained in step a.For example:floatingip_pool = "ext-net2" - Assign the
external_netfield in thecluster.tfvarsfile.external_net = "<network UUID>"where,
<network UUID>is the ID of the external network obtained in step a.For example:external_net = "4ebb3784-0192-7482-9d67-a1389c3a8a93"
- Obtain the value of
Configuring Custom Configurable Volumes (CCV)
Note:
- This procedure is optional. Perform this procedure only if you choose to use CCV over the standard configuration.
- CCV and the standard hard disk configuration are mutually exclusive. You can choose one of the two.
- If you select CCV, all resources use the custom boot disk configuration. CCV cannot be applied on a per-VM basis.
- The size of the volume created must be equal to the size of the disk as defined in the flavor that is used for the VM.
- Set use_configurable_boot_volume to
true.
- Set the volume size variables to the size of the hard disk as defined in the Disk field of the flavor assigned to the given resource type. Flavors can be listed using the OpenStack Dashboard.
Note:
The volume size settings must match the disk size defined by the flavor setting for each resource type. The values listed in this example are derived from the current flavor settings. Update these settings to match your flavors.
For example:
occne_bh_boot_volume_size = 100
occne_k8s_ctrl_boot_volume_size = 40
occne_k8s_node_boot_volume_size = 60
occne_boot_volume_image_id to the ID of the image from which the VM is booted (for example, ol9u5). Perform the following steps to obtain the image ID from the OpenStack Dashboard:
- From the list, select the appropriate OL
Ximage (For example, ol9u5). - Obtain the value of
occne_boot_volume_image_id from the ID field.
For example:
→ test Image Active Public No QCOW2 641.13 MB Launch
↓ ol9u5 Image Active Shared No QCOW2 708.81 MB Launch
------------------------------------------------------------------------------------
Name Visibility Min. Disk
ol9u5 Shared 0
ID Protected Min. RAM
2c673fc4-9d72-4a60-bb14-bc44f636ee94 No 0
------------------------------------------------------------------------------------
→ test-imag Image Active Public No QCOW2 988.75 MB Launch
- By default,
occne_delete_on_termination is set to true, which means that the volumes are deleted along with the VMs (recommended behavior). You can set this field to false if you want to preserve the volumes when deleting VMs.
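Putting the CCV settings together, a cluster.tfvars fragment for a custom boot volume configuration could look like the following sketch; the sizes and image ID shown are illustrative and must match your own flavors and image:
use_configurable_boot_volume = true
occne_bh_boot_volume_size = 100
occne_k8s_ctrl_boot_volume_size = 40
occne_k8s_node_boot_volume_size = 60
occne_boot_volume_image_id = "2c673fc4-9d72-4a60-bb14-bc44f636ee94"
occne_delete_on_termination = true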
Configuring CNLB
Enable and configure Cloud Native Load Balancer (CNLB) by following the procedure in the Configuring Cloud Native Load Balancer (CNLB) section.
Environment Variables
The following table describes the environment variables that can be combined with the deploy.sh command to further control the deployment.
Note:
The variables marked Y under the Required (Y/N) column are necessary, but you can use the defaults if they meet the deployment requirements.
Deployment Environment Variables
Table A-24 Environment Variables
| Environment Variable | Definition | Default Value | Required (Y/N) |
|---|---|---|---|
| OCCNE_VERSION | Defines the version of the container images used during deployment. | Defaults to current release | Y |
| OCCNE_TFVARS_DIR | Provides the path to the cluster.tfvars file relative to the current directory. | ${OCCNE_CLUSTER} | Y |
| OCCNE_VALIDATE_TFVARS | Instructs the deployment to validate the cluster.tfvars file before use. | 1 | N |
| CENTRAL_REPO | Central repository Hostname | ${CENTRAL_REPO:=winterfell} | Y |
| CENTRAL_REPO_IP | Central repository IPv4 address | ${CENTRAL_REPO_IP:=10.75.216.10} | Y |
| CENTRAL_REPO_REGISTRY_PORT | Central repository container registry port | ${CENTRAL_REPO_REGISTRY_PORT:=5000} | N |
| CENTRAL_REPO_PROTOCOL | Central repository container registry protocol | ${CENTRAL_REPO_PROTOCOL:=http} | N |
| OCCNE_PIPELINE_ARGS | Additional parameters to the installation process. | | N |
| OCCNE_PREFIX | Development time prefix for the OCCNE image names. | | N |
| OS_USERNAME | OpenStack username account for deployment (must be set by the OpenStack RC file). | (Set by .rc file) | Y |
| OS_PASSWORD | OpenStack password for the account for deployment (must be set by the OpenStack RC file). | (Set by .rc file) | Y |
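As a rough illustration, a deployment run that overrides the central repository defaults could export the variables before invoking deploy.sh; the host name, IP address, and the way deploy.sh is invoked below are placeholders for your environment:
$ export CENTRAL_REPO=central-repo-host
$ export CENTRAL_REPO_IP=10.75.216.10
$ export CENTRAL_REPO_REGISTRY_PORT=5000
$ export OCCNE_VERSION=<release_version>
$ deploy.sh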
occne.ini Variables
The following tables describe the /var/occne/cluster/${OCCNE_CLUSTER}/occne.ini variables (for LBVM and CNLB) that can be combined with the deploy.sh command to further control the deployment.
Note:
The variables marked Y under the Required (Y/N) column are necessary, but you can use the defaults if they meet the deployment requirements.
Table A-25 occne.ini Variables for LBVM
| occne.ini Variables | Definition | Default setting | Required (Y/N) |
|---|---|---|---|
| central_repo_host | Central Repository Hostname. | NA | Y |
| central_repo_host_address | Central Repository IPv4 Address. | NA | Y |
| central_repo_registry_port | Central Repository Container Registry Port. | 5000 | N |
| name_server | External DNS nameserver to resolve out-of-cluster DNS queries (must be a comma-separated list of IPv4 addresses). It is always required to set name_server to the OpenStack environment nameserver(s). Optionally, you can add external nameservers to this list. The OpenStack environment nameserver(s) can be obtained by running the following command on the Bootstrap host: cat /etc/resolv.conf \| grep nameserver \| awk '{ print $2 }' \| paste -d, -s | NA | Y |
| ntp_server | Auto-filled by deploy.sh from values in cluster.tfvars. Refrain from configuring this variable. | NA | N |
| occne_cluster_network | Auto-filled by deploy.sh from values in cluster.tfvars. Refrain from configuring this variable. | NA | N |
| kube_network_node_prefix | If you want to modify the default value, change the variable name to kube_network_node_prefix_value and use a number, not a string. | 25 | N |
| occne_cluster_name | Name of cluster to deploy. | NA | Y |
| external_openstack_auth_url | OpenStack authorization URL (must be set by the OpenStack RC file in the environment as OS_AUTH_URL). | NA | Y |
| external_openstack_region | OpenStack region name (must be set by the OpenStack RC file in the environment as OS_REGION_NAME). | NA | Y |
| external_openstack_tenant_id | OpenStack project ID (must be set by the OpenStack RC file in the environment as OS_PROJECT_ID). | NA | Y |
| external_openstack_domain_name | OpenStack domain name (must be set by the OpenStack RC file in the environment as OS_USER_DOMAIN_NAME). | NA | Y |
| external_openstack_tenant_domain_id | OpenStack tenant domain id (must be set by the OpenStack RC file in the environment as OS_PROJECT_DOMAIN_ID). | NA | Y |
| cinder_auth_url | Cinder authorization URL (must be set by the OpenStack RC file in the environment as OS_AUTH_URL). | NA | Y |
| cinder_region | Cinder region name (must be set by the OpenStack RC file in the environment as OS_REGION_NAME). | NA | Y |
| cinder_tenant_id | Cinder project ID (must be set by the OpenStack RC file in the environment as OS_PROJECT_ID). | NA | Y |
| cinder_tenant_domain_id | Cinder tenant domain ID (must be set by the OpenStack RC file in the environment as OS_PROJECT_DOMAIN_ID). | NA | Y |
| cinder_domain_name | Cinder domain name (must be set by the OpenStack RC file in the environment as OS_USER_DOMAIN_NAME). | NA | Y |
| openstack_cinder_availability_zone | OpenStack Cinder storage volume availability zone (must be set on bootstrap host if OpenStack Cinder availability zone needs to be different from default zone 'nova'). | nova | N |
| flannel_interface | Interface to use for flannel networking if using Fixed IP deployment. Do not define this variable. | NA | N |
| calico_mtu | The default value of calico_mtu is 8980 from Kubernetes. If this value needs to be modified, set the value as a number only, not a string. | 8980 | N |
| openstack_parallel_max_limit | Specifies the maximum number of parallel requests that can be handled by the OpenStack controller. | 0 | N |
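For illustration only, the user-settable LBVM variables above would appear in occne.ini as key = value entries; the following sketch assumes an [occne:vars] section (the section used for occne.ini settings elsewhere in this document) and illustrative values:
[occne:vars]
central_repo_host = central-repo-host
central_repo_host_address = 10.75.216.10
name_server = 10.75.124.10,10.75.124.11
occne_cluster_name = occne1-rainbow
external_openstack_auth_url = https://<openstack_provider_url>:5000/v3
external_openstack_region = RegionOne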
Table A-26 occne.ini Variables for CNLB
| occne.ini Variables | Definition | Default setting | Required (Y/N) |
|---|---|---|---|
| central_repo_host | Central Repository Hostname | NA | Y |
| central_repo_host_address | Central Repository IPv4 Address | NA | Y |
| central_repo_registry_port | Central Repository Container Registry Port | 5000 | N |
| name_server | External DNS nameserver to resolve out-of-cluster DNS queries (must be a comma-separated list of IPv4 addresses). It is always required to set name_server to the OpenStack environment nameserver(s). Optionally, you can add external nameservers to this list. The OpenStack environment nameserver(s) can be obtained by running the following command on the Bootstrap host: cat /etc/resolv.conf \| grep nameserver \| awk '{ print $2 }' \| paste -d, -s | NA | Y |
| kube_network_node_prefix | If you want to modify the default value, change the variable name to kube_network_node_prefix_value and use a number, not a string. | 25 | N |
| occne_cluster_name | Name of cluster to deploy. | NA | Y |
| external_openstack_auth_url | OpenStack authorization URL (must be set by the OpenStack RC file in the environment as OS_AUTH_URL). | NA | Y |
| external_openstack_region | OpenStack region name (must be set by the OpenStack RC file in the environment as OS_REGION_NAME). | NA | Y |
| external_openstack_tenant_id | OpenStack project ID (must be set by the OpenStack RC file in the environment as OS_PROJECT_ID). | NA | Y |
| external_openstack_domain_name | OpenStack domain name (must be set by the OpenStack RC file in the environment as OS_USER_DOMAIN_NAME). | NA | Y |
| external_openstack_tenant_domain_id | OpenStack tenant domain id (must be set by the OpenStack RC file in the environment as OS_PROJECT_DOMAIN_ID). | NA | Y |
| cinder_auth_url | Cinder authorization URL (must be set by the OpenStack RC file in the environment as OS_AUTH_URL). | NA | Y |
| cinder_region | Cinder region name (must be set by the OpenStack RC file in the environment as OS_REGION_NAME). | NA | Y |
| cinder_tenant_id | Cinder project ID (must be set by the OpenStack RC file in the environment as OS_PROJECT_ID). | NA | Y |
| cinder_tenant_domain_id | Cinder tenant domain ID (must be set by the OpenStack RC file in the environment as OS_PROJECT_DOMAIN_ID). | NA | Y |
| cinder_domain_name | Cinder domain name (must be set by the OpenStack RC file in the environment as OS_USER_DOMAIN_NAME). | NA | Y |
| openstack_cinder_availability_zone | OpenStack Cinder storage volume availability zone (must be set on the Bootstrap host if the OpenStack Cinder availability zone needs to be different from the default zone 'nova'). | nova | N |
| occne_grub_password | GRUB password to prevent unauthorized access. All VMs use the same password. | NA | Y |
| external_openstack_username | OpenStack username (must be set by the OpenStack RC file in the environment as OS_USERNAME). | NA | Y |
| external_openstack_password | OpenStack password (must be set by the OpenStack RC file in the environment as OS_PASSWORD). | NA | Y |
| cinder_username | Cinder username (must be set by the OpenStack RC file in the environment as OS_USERNAME). | NA | Y |
| cinder_password | Cinder password (must be set by the OpenStack RC file in the environment as OS_PASSWORD). | NA | Y |
| openstack_cacert | Path to the OpenStack certificate in the installation container. Do not define this variable if no certificate is needed. Define it as /host/openstack-cacert.pem if you need the certificate. | NA | N |
| flannel_interface | Interface to use for flannel networking if using Fixed IP deployment. Do not define this variable. | NA | N |
| calico_mtu | The default value of calico_mtu is 8980 from Kubernetes. If this value needs to be modified, set the value as a number only, not a string. | 8980 | N |
secrets.ini Variables
The following table describes the /var/occne/cluster/${OCCNE_CLUSTER}/secrets.ini variables that are required by the deploy.sh command to install the cluster for an LBVM-based deployment.
Note:
- The secrets.ini file is applicable only to CNE deployed using LBVM.
- The variables marked Y under the Required (Y/N) column are necessary.
- You must place all the secrets used on the cluster in the secrets.ini file.
Table A-27 secrets.ini Variables
| secrets.ini Variables | Definition | Required (Y/N) |
|---|---|---|
| occne_grub_password | GRUB password to prevent unauthorized access. All VMs use the same password. | Y |
| external_openstack_username | OpenStack username. This variable must be set by the OpenStack RC file in the environment as OS_USERNAME. | Y |
| external_openstack_password | OpenStack password. This variable must be set by the OpenStack RC file in the environment as OS_PASSWORD. | Y |
| cinder_username | Cinder username. This variable must be set by the OpenStack RC file in the environment as OS_USERNAME. | Y |
| cinder_password | Cinder password. This variable must be set by the OpenStack RC file in the environment as OS_PASSWORD. | Y |
| occne_registry_user | Registry username to access the cluster's Bastion container registries. | N |
| occne_registry_pass | Registry password to access the cluster's Bastion container registries. | N |
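For illustration only, a populated secrets.ini could look like the following sketch; the bare key = value layout and the placeholder values are assumptions, and the OpenStack credentials must match the values exported by your OpenStack RC file:
occne_grub_password = <grub_password>
external_openstack_username = <os_username>
external_openstack_password = <os_password>
cinder_username = <os_username>
cinder_password = <os_password>
occne_registry_user = <registry_user>
occne_registry_pass = <registry_password>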
Configuring Cloud Native Load Balancer (CNLB)
This section provides the procedure to enable and configure Cloud Native Load Balancer (CNLB) for BareMetal and vCNE. Perform this procedure before deploying a cluster using a completely configured Bootstrap host.
Prerequisites
- Each [network:vars] section under cnlb.ini must define a relationship between an internal and an external network. CNLB generates the internal networks automatically. You must preconfigure the external networks.
- Each external network must have enough available IP addresses to host worker node interfaces, egress interfaces, and services.
- For VMware, external networks must be able to assign dynamic IP addresses using Dynamic Host Configuration Protocol (DHCP). Dynamic addresses are used by worker node network interfaces, and static IP addresses are used by services and egress. It is recommended to reduce the DHCP pool to cover just half of the subnet range to avoid IP address overlapping.
- For VMware, ensure that a MAC Discovery Profile with MAC learning enabled is created in NSX-T beforehand. This profile must be applied to CNLB isolated networks using NSX-T credentials.
- Install the CNE cluster with at least (number of CNLB app replicas + 2) worker nodes to allow High Availability and upgrades to function properly in a CNLB environment. For example, with the default of four CNLB app replicas, the cluster needs at least six worker nodes.
Note:
Perform only the steps that are applicable to your deployment type (vCNE or BareMetal).
Procedure
- For vCNE deployments (OpenStack and VMware), perform the following
steps to configure the
occne.inifile:- Open the
occne.ini file located in the /var/occne/cluster/$OCCNE_CLUSTER folder. A sample template is available at /var/occne/cluster/$OCCNE_CLUSTER/occne.ini.template.
- Under the
[occne:vars] section, set the common services variables for the oam network as shown in the following example. These variables are the IP addresses for the Prometheus, Grafana, Nginx, Alert Manager, Jaeger, and Opensearch services.
occne_prom_cnlb = 10.75.200.17
occne_alert_cnlb = 10.75.200.18
occne_graf_cnlb = 10.75.200.19
occne_nginx_cnlb = 10.75.200.20
occne_jaeger_cnlb = 10.75.200.21
occne_opensearch_cnlb = 10.75.200.22
- Open the
- For BareMetal deployments, perform the following steps to configure
the
hosts.inifile:- Create the
hosts.ini file under the /var/occne/cluster/$OCCNE_CLUSTER folder. Sample templates (hosts_sample.ini and hosts_sample_remoteilo.ini) are available at /var/occne/cluster/$OCCNE_CLUSTER/. Use the sample template to configure the hosts.ini file.
- Under the
[occne:vars] section, set the common services variables for the oam network as shown in the following example. These variables provide the IP addresses for the Prometheus, Grafana, Nginx, Alert Manager, Jaeger, and Opensearch services:
occne_prom_cnlb = 10.75.200.17
occne_alert_cnlb = 10.75.200.18
occne_graf_cnlb = 10.75.200.19
occne_nginx_cnlb = 10.75.200.20
occne_jaeger_cnlb = 10.75.200.21
occne_opensearch_cnlb = 10.75.200.22
- Add new
[cnlb:children] section to define the CNLB nodes:
[cnlb:children]
k8s-cluster
occne_bastion
- Create the
- Perform the following steps to configure the
cnlb.ini file:
- The
cnlb.ini file contains most of the CNLB configuration. Create the cnlb.ini file under the /var/occne/cluster/$OCCNE_CLUSTER folder. A sample template is available at /var/occne/cluster/$OCCNE_CLUSTER/cnlb.ini.template. The following code blocks provide samples to configure the cnlb.ini file for vCNE and BareMetal deployments:
Example for vCNE (OpenStack and VMware):
Note:
All string fields that are marked with quotes must use double quotes only.[cnlb] [cnlb:vars] cnlb_replica_cnt = ext_net_attach_def_plugin = [cnlb:children] # Network names to be used for pod and external # networking, these networks and ports # will be created and attached to hosts # in cnlb host group oam [oam:vars] service_network_name = subnet_id = network_id = external_network_range = external_default_gw = egress_ip_addr = internal_network_range = service_ip_addr = # Optional variable. Uncomment and specify external destination subnets to communicate with. # One egress NAD is needed for oam network, for other networks it is optional # egress_dest =Example for BareMetal:[cnlb] [cnlb:vars] cnlb_replica_cnt = ext_net_attach_def_plugin = net_attach_def_master = [cnlb:children] # Network names to be used for pod and external # networking, these networks and ports # will be created and attached to hosts # in cnlb host group oam [oam:vars] service_network_name = external_network_range = external_default_gw = egress_ip_addr = internal_network_range = service_ip_addr = bm_bond_interface = # Uncomment external_vlanid and internal_vlanid variables if net_attach_def_master is equal to "vlan". # external_vlanid = # internal_vlanid = # Optional variable. Uncomment and specify external destination subnets to communicate with. # One egress NAD is needed for oam network, for other networks it is optional # egress_dest = - Define the
cnlb_replica_cntvariable with an even integer. Thecnlb_replica_cntsets the number of replicas for CNLB client where half of the counts are considered for Active pods and the other half for Standby pods. The default number of replicas is set to "4". - Define the
ext_net_attach_def_pluginvariable as follows:- Use
"macvlan"for OpenStack and BareMetal. - Use
"ipvlan"for VMware.
- Use
- Define the
net_attach_def_mastervariable to configure a BareMetal cluster. Use the following values as per your requirement:- Use
bond0for a cluster without traffic segregation. - Use
"vlan"for traffic segregation using VLAN IDs (external_vlanidandinternal_vlanidvariables).Note:
Skip this step for VMware and OpenStack.
- Use
- Define the network names under the
[cnlb:children]section. Thecnlb.ini.templatefile suggestsoamas the default network name for common services. You can add as many network names as required. Terraform uses each network name to generate cloud resources, therefore use short names that are less than six characters in length. The following code block provides a sample[cnlb:children]section with three networks:[cnlb:children] oam sig dia - Define the network variables in the
[oam:vars]section to host CNE common services such as Prometheus, Grafana, Nginx, AlertManager, Jaeger, and Opensearch.The following code blocks provide the sample configurations for OpenStack, VMware, and BareMetal deployments:- Sample configuration for OpenStack with
cnlb_replica_cntset to 4:[oam:vars] service_network_name = "oam" subnet_id = "4c3573ee-d931-4dca-bf6b-129247108ee3" network_id = "7edd3784-8875-4357-9d67-b9089c3a9f13" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "172.16.0.0/21" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] egress_dest = ["10.10.10.0/24"] - Sample configuration for VMware with
cnlb_replica_cntset to 4:[oam:vars] service_network_name = "oam" subnet_id = "vcd-ext-net" network_id = "vcd-ext-net" external_network_range = "10.144.10.0/24" external_default_gw = "10.144.10.1" egress_ip_addr = ["10.144.10.30","10.144.10.31"] internal_network_range = "171.16.0.0/24" service_ip_addr = ["10.144.10.32","10.144.10.33","10.144.10.34","10.144.10.35","10.144.10.36","10.144.10.37"] egress_dest = ["10.10.10.0/24"] - Sample configurations for BareMetal with
cnlb_replica_cntset to 4 andnet_attach_def_masterset to "bond0":[oam:vars] service_network_name = "oam" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "172.16.0.0/21" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] bm_bond_interface = "bond0" - Sample configurations for BareMetal with
cnlb_replica_cntset to 4 andnet_attach_def_masterset to "vlan":[oam:vars] service_network_name = "oam" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "172.16.0.0/21" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] bm_bond_interface = "bond0" external_vlanid = 50 internal_vlanid = 110
service_network_name: Indicates the service network name. Set this parameter to a string value of maximum six characters.
subnet_id: Indicates the subnet ID for OpenStack and the network name for VMware. For OpenStack, add a subnet ID. For VMware, add a network name.
network_id: Indicates the network ID for OpenStack and the network name for VMware. For OpenStack, add a network ID. For VMware, add a network name.
external_network_range: Indicates the external network address in CIDR notation.
external_default_gw: Indicates the external network's default gateway address.
egress_ip_addr: Indicates the list of egress IP addresses used for each of the CNLB client pods. The number of list elements depends on the number of replicas used (that is, the total number of replicas divided by 2).
internal_network_range: Indicates the internal network address in CIDR notation. Subnets with more than 128 addresses (/25) are recommended.
service_ip_addr: Indicates the list of service IP addresses hosted by this network. For oam, there must be a list of 6 addresses.
bm_bond_interface: Indicates the string value that defines the master bond interface. Current BareMetal deployments use "bond0". This configuration is applicable to BareMetal deployments only.
external_vlanid: Indicates the integer value for the external VLAN ID. This ID must already be configured within the physical ToR switches. This configuration is applicable to BareMetal deployments only.
internal_vlanid: Indicates the integer value for the internal VLAN ID. This ID must already be configured within the physical ToR switches. This configuration is applicable to BareMetal deployments only.
egress_dest: Indicates the destination addresses that allow pods to originate traffic and route it through CNLB applications. This configuration is required only for the oam network SNMP notifier deployment and is optional for other networks. For more information, see Step 4.
- Sample configuration for OpenStack with
- To add more networks, configure the corresponding sections.
Create all external networks in advance and ensure that the addresses
are available for all resources involved. The following examples provide
fully configured
cnlb.ini files for OpenStack and BareMetal using three networks.
Example for OpenStack:
Note:
External networks can be shared between CNLB networks as long as the IP addresses don't overlap.[cnlb] [cnlb:vars] cnlb_replica_cnt = 4 ext_net_attach_def_plugin = "ipvlan" [cnlb:children] # Network names to be used for pod and external # networking, these networks and ports # will be created and attached to hosts # in cnlb host group oam sig dia [oam:vars] service_network_name = "oam" subnet_id = "4c3573ee-d931-4dca-bf6b-129247108ee3" network_id = "7edd3784-8875-4357-9d67-b9089c3a9f13" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "171.16.0.0/24" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] [sig:vars] service_network_name = "sig" subnet_id = "c3926ab2-a739-4476-946b-2b8d9f07e3bc" network_id = "bc681e0c-7a44-48f9-9ba6-319df1efdec4" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.11","10.75.201.12"] internal_network_range = "172.16.0.0/24" service_ip_addr = ["10.75.201.13"] [dia:vars] service_network_name = "dia" subnet_id = "c3926ab2-a739-4476-946b-2b8d9f07e3bc" network_id = "bc681e0c-7a44-48f9-9ba6-319df1efdec4" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.14","10.75.201.15"] internal_network_range = "173.16.0.0/24" service_ip_addr = ["10.75.201.16","10.75.201.17"]Example for BareMetal:[cnlb] [cnlb:vars] cnlb_replica_cnt = 4 net_attach_def_master = "vlan" ext_net_attach_def_plugin = "macvlan" [cnlb:children] # Network names to be used for pod and external # networking, these networks and ports # will be created and attached to hosts # in cnlb host group oam sig dia [oam:vars] service_network_name = "oam" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "171.16.110.0/24" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] external_vlanid = 50 internal_vlanid = 110 [sig:vars] service_network_name = "sig" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.11","10.75.201.12"] internal_network_range = "172.16.120.0/24" service_ip_addr = ["10.75.201.13"] external_vlanid = 60 internal_vlanid = 120 [dia:vars] service_network_name = "dia" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.14","10.75.201.15"] internal_network_range = "173.16.130.0/24" service_ip_addr = ["10.75.201.16","10.75.201.17"] external_vlanid = 60 internal_vlanid = 130 - Refer to the following example to pass resource allotment
configurations to the CNLB client (
cnlb-app) and manager (cnlb-man). The example includes the default values for each variable.[cnlb] [cnlb:vars] cnlb_replica_cnt = 4 ext_net_attach_def_plugin = "ipvlan" cnlb_app_cpu_limit = 4 cnlb_app_mem_limit = 1Gi cnlb_app_cpu_req = 500m cnlb_app_mem_req = 1Gi cnlb_man_cpu_limit = 4 cnlb_man_mem_limit = 1Gi cnlb_man_cpu_req = 500m cnlb_man_mem_req = 1Gi [cnlb:children] # Network names to be used for pod and external # networking, these networks and ports # will be created and attached to hosts # in cnlb host group oam sig dia [oam:vars] service_network_name = "oam" subnet_id = "4c3573ee-d931-4dca-bf6b-129247108ee3" network_id = "7edd3784-8875-4357-9d67-b9089c3a9f13" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "171.16.0.0/24" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] [sig:vars] service_network_name = "sig" subnet_id = "c3926ab2-a739-4476-946b-2b8d9f07e3bc" network_id = "bc681e0c-7a44-48f9-9ba6-319df1efdec4" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.11","10.75.201.12"] internal_network_range = "172.16.0.0/24" service_ip_addr = ["10.75.201.13"] [dia:vars] service_network_name = "dia" subnet_id = "c3926ab2-a739-4476-946b-2b8d9f07e3bc" network_id = "bc681e0c-7a44-48f9-9ba6-319df1efdec4" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.14","10.75.201.15"] internal_network_range = "173.16.0.0/24" service_ip_addr = ["10.75.201.16","10.75.201.17"]
- The
- Configure the egress network. The egress network allows pods to originate traffic and route it through CNLB applications. This is done by creating network attachment definitions with destination routes, allowing pods to use a CNLB application as a gateway. Packets going out through a CNLB application have their internal IP address translated to an egress external IP address.
To generate egress network attachment definitions, specify the external destination subnets to communicate with. These subnets are defined as a list under the egress_dest variable within the cnlb.ini file and are used to configure routes in pod routing tables.
The following example shows the egress network attachment definitions created for the sig network, which allow pods to originate traffic destined to the 10.10.10.0/24 and 11.11.11.0/24 subnets. Packets routed through this feature are translated to the source IP 10.75.201.11 or 10.75.201.12, which are defined under the egress_ip_addr variable:
[sig:vars]
service_network_name = "sig"
subnet_id = "c3926ab2-a739-4476-946b-2b8d9f07e3bc"
network_id = "bc681e0c-7a44-48f9-9ba6-319df1efdec4"
external_network_range = "10.75.201.0/25"
external_default_gw = "10.75.201.1"
egress_ip_addr = ["10.75.201.11","10.75.201.12"]
internal_network_range = "172.16.0.0/24"
service_ip_addr = ["10.75.201.13"]
egress_dest = ["10.10.10.0/24","11.11.11.0/24"]
If egress network attachment definitions are not required for a network, do not add egress_dest in that network's section.
one internal network with more networks. Instead of creating internal ports and
attaching them to worker nodes for all internal networks, the resources can be
shared between them to use less platform resources. Networks using SIN
functionality do not segregate traffic based on interface sharing. In vCNE,
shared interfaces are taken from the oam network. To enable SIN under vCNE use
the
shared_internalvariable within the desired network with the exception of oam.The following code block provides a samplecnlb.iniSIN configuration for vCNE:[cnlb] [cnlb:vars] cnlb_replica_cnt = 4 ext_net_attach_def_plugin = "ipvlan" [cnlb:children] oam sig prov [oam:vars] service_network_name = "oam" subnet_id = "ded16d80-4210-4cf7-9dc7-6174971273ea" network_id = "5304c83c-09ca-41a1-8d7e-266a24398926" external_network_range = "10.75.212.0/23" egress_ip_addr = ["10.75.213.248", "10.75.213.111"] internal_network_range = "142.16.0.0/24" service_ip_addr = ["10.75.213.37", "10.75.213.148", "10.75.213.115", "10.75.213.246", "10.75.212.41", "10.75.212.184"] external_default_gw = "10.75.212.1" egress_dest = ["10.75.212.0/23"] [sig:vars] service_network_name = "sig" subnet_id = "ded16d80-4210-4cf7-9dc7-6174971273ea" network_id = "5304c83c-09ca-41a1-8d7e-266a24398926" external_network_range = "10.75.212.0/23" egress_ip_addr = ["10.75.212.109", "10.75.213.49"] internal_network_range = "143.16.0.0/24" service_ip_addr = ["10.75.213.250", "10.75.212.28", "10.75.212.168", "10.75.213.227", "10.75.213.233", "10.75.213.112"] external_default_gw = "10.75.212.1" egress_dest = ["10.75.212.0/23"] shared_internal = True # Add this line to enable SIN on sig network [prov:vars] service_network_name = "prov" subnet_id = "ded16d80-4210-4cf7-9dc7-6174971273ea" network_id = "5304c83c-09ca-41a1-8d7e-266a24398926" external_network_range = "10.75.212.0/23" egress_ip_addr = ["10.75.213.74", "10.75.212.4"] internal_network_range = "144.16.0.0/24" service_ip_addr = ["10.75.212.162", "10.75.212.167", "10.75.213.47", "10.75.213.67", "10.75.212.93", "10.75.212.250"] external_default_gw = "10.75.212.1" egress_dest = ["10.75.212.0/23"] shared_internal = False # Add this line to disable SIN on prov networkThe
shared_internalvalue is case-insensitive. If the variable is present in thecnlb.inifile, the value must be set to either True (when SIN is enabled) or False (when SIN is not enabled). The value can be optionally enclosed in quotes. This variable (shared_internal = True) can be applied to any network except the oam network. You can configure network combinations with and without SIN. Networks withshared_internal = Falsegenerates their respective interfaces to segregate traffic.To enable SIN in a BareMetal deployment, match the values ofinternal_vlanidbetween any set of networks. This allows to share the internal interfaces between them. In a BareMetal deployment, you can enable SIN for any network including the oam network. There can be multiple network groups where theinternal_vlanidis the same and SIN is supported between these networks.Note:
CNLB SIN is not supported in acnlb.iniwhere thenet_attach_def_mastervariable is set to bond0 (net_attach_def_master = "bond0").The following sample shows how theoamandsignetworks share internal interfaces for VLAN 110:[cnlb] [cnlb:vars] cnlb_replica_cnt = 4 net_attach_def_master = "vlan" ext_net_attach_def_plugin = "macvlan" [cnlb:children] # Network names to be used for pod and external # networking, these networks and ports # will be created and attached to hosts # in cnlb host group oam sig [oam:vars] service_network_name = "oam" external_network_range = "10.75.200.0/25" external_default_gw = "10.75.200.1" egress_ip_addr = ["10.75.200.15","10.75.200.16"] internal_network_range = "171.16.110.0/24" service_ip_addr = ["10.75.200.17","10.75.200.18","10.75.200.19","10.75.200.20","10.75.200.21","10.75.200.22"] external_vlanid = 50 internal_vlanid = 110 [sig:vars] service_network_name = "sig" external_network_range = "10.75.201.0/25" external_default_gw = "10.75.201.1" egress_ip_addr = ["10.75.201.11","10.75.201.12"] internal_network_range = "172.16.120.0/24" service_ip_addr = ["10.75.201.13"] external_vlanid = 60 internal_vlanid = 110 # VLAN ID value matches oam and interfaces are shared. - For OpenStack and VMware deployments, follow the OpenStack and VMware deployments procedures to deploy the cluster using
the
deploy.shscript. - For VMware deployement, perform the following steps to enable MAC
learning on internal isolated networks:
Note:
Perform this step duringdeploy.shinstallation immediately after the Terraform resource creation.- Run the following command using NSX-T credentials to get the
segment ID for each internal network created from the
cnlb.inifile:$ curl -k -u <user>:<password> https://<domain_name>/policy/api/v1/infra/segments/ | egrep -B1 <cluster-name>Sample output:"id" : "9d9f4599-2423-439f-8f77-b56bda1ec5db", "display_name" : "cluster-name-sig-int-net-893b497f-2be0-4032-b162-fd4227ccfab6", "id" : "fbd4efd9-8afb-4b23-92a9-0e889cb03868", "display_name" : "cluster-name-dia-int-net-d4563ad7-9ca3-4435-a345-c5e611026471", "id" : "a56f7fec-36bf-4dfc-b437-d68a82552043", "display_name" : "cluster-name-oam-int-net-1f1e58dd-400c-4487-8f3c-535a7eb8ec3a", - Run the following command to get the Media Access Control
(MAC) discovery profile ID. The profile must be created beforehand using
NSX-T
credentials:
$ curl -k -u <user>:<password> https://<domain_name>/policy/api/v1/infra/mac-discovery-profilesSample output:{ "results" : [ { "mac_change_enabled" : false, "mac_learning_enabled" : true, "mac_learning_aging_time" : 0, "unknown_unicast_flooding_enabled" : true, "mac_limit" : 4096, "mac_limit_policy" : "ALLOW", "remote_overlay_mac_limit" : 2048, "resource_type" : "MacDiscoveryProfile", "id" : "MacLearningEnabled", <--------------------------------------------- "display_name" : "MacLearningEnabled", "path" : "/infra/mac-discovery-profiles/MacLearningEnabled", "relative_path" : "MacLearningEnabled", "parent_path" : "/infra", "unique_id" : "21c954e5-6ac3-4288-a498-0aa5a2b5ffc0", "realization_id" : "21c954e5-6ac3-4288-a498-0aa5a2b5ffc0", "marked_for_delete" : false, "overridden" : false, "_create_time" : 1695389858236, "_create_user" : "admin", "_last_modified_time" : 1695389858238, "_last_modified_user" : "admin", "_system_owned" : false, "_protection" : "NOT_PROTECTED", "_revision" : 0 - Run the following command to apply MAC discovery profile to
segment:
$ curl -k -u <user>:<password> -X PATCH https://<domain_name>/policy/api/v1/infra/segments/<segment_id>/segment-discovery-profile-binding-maps/<mac_discovery_profile_id> -H "Content-Type: application/json" -d '{"resource_type":"SegmentDiscoveryProfileBindingMap", "display_name": "", "description":"", "mac_discovery_profile_path":"/infra/mac-discovery-profiles/<mac_discovery_profile_id>", "ip_discovery_profile_path":"/infra/ip-discovery-profiles/default-ip-discovery-profile", "_revision": 1}'
segment:
$ curl -k -u <user>:<password> https://<domain_name>/policy/api/v1/infra/segments/<segment_id>/segment-discovery-profile-binding-maps/Sample output:{ "results" : [ { "mac_discovery_profile_path" : "/infra/mac-discovery-profiles/MacLearningEnabled", "ip_discovery_profile_path" : "/infra/ip-discovery-profiles/default-ip-discovery-profile", "resource_type" : "SegmentDiscoveryProfileBindingMap", "id" : "MacLearningEnabled", "display_name" : "MacLearningEnabled", "path" : "/infra/segments/fbd4efd9-8afb-4b23-92a9-0e889cb03868/segment-discovery-profile-binding-maps/MacLearningEnabled", "relative_path" : "MacLearningEnabled", "parent_path" : "/infra/segments/fbd4efd9-8afb-4b23-92a9-0e889cb03868", "unique_id" : "99a3af93-06b1-43ff-99b3-043408fedce2", "realization_id" : "99a3af93-06b1-43ff-99b3-043408fedce2", "marked_for_delete" : false, "overridden" : false, "_create_time" : 1709158001412, "_create_user" : "user", "_last_modified_time" : 1709158001412, "_last_modified_user" : "user", "_system_owned" : false, "_protection" : "NOT_PROTECTED", "_revision" : 0 } ], "result_count" : 1, "sort_by" : "display_name", "sort_ascending" : true
- Run the following command using NSX-T credentials to get the
segment ID for each internal network created from the
Scaling CNLB App Pod
Scaling CNLB pods can have some disadvantages. Note the following points before scaling CNLB pods:
- Risk of Service Disruption:
- Scaling CNLB pods disrupts active traffic sessions during pod startup and readiness probe failures.
- There can be a brief outage window where external traffic becomes unavailable until the new pod becomes ready and re-establishes routes.
- Handling the traffic:
- CNLB pods are not bottlenecked under normal circumstances due to their ability to handle high volumes using event-based networking.
- Adding more pods doesn't improve routing logic, performance, or throughput unless a specific architectural limitation is hit.
Note:
- CNLB app pod pairs are designed to handle mixed network traffic, unlike the traditional LBVM solution where each LBVM pair was limited to handling either ingress or egress for a single network.
- A single active CNLB app pod is capable of providing both ingress and egress
functionality for multiple networks, as defined in the
cnlb.iniconfiguration file. Therefore, horizontal scaling of CNLB app pods is not required to match the number of networks your applications will use. - This scaling procedure should only be followed under very specific and exceptional circumstances. For example, where two pairs of CNLB app pods are insufficient for a special use case.
- The following are the preferred and recommended approaches in case of resource limitations:
- Vertically scale the CNLB app pods (that is, increase CPU and memory resources) rather than horizontally scaling the deployment.
- Scaling horizontally (scaling out) should be considered a last resort, and only after thorough analysis and approval.
- Run the following command to change the directories to the
cluster
directory:
$ cd /var/occne/cluster/${OCCNE_CLUSTER} - Run the following command to edit the
cnlb.iniconfiguration file:$ vi cnlb.ini - Run the following command to modify the
cnlb_replica_cnt parameter under cnlb:vars:
[cnlb]
[cnlb:vars]
cnlb_replica_cnt = 6
ext_net_attach_def_plugin = "ipvlan"
[cnlb:children]
oam
[oam:vars]
service_network_name = "oam"
subnet_id = "3b0be628-2b1d-4821-8a64-2a17e936f5ae"
network_id = "d7fc56eb-e6e6-42a7-9da8-bdc4cdc12c6d"
external_network_range = "10.123.154.0/24"
egress_ip_addr = ["10.123.154.219","10.123.154.108","10.123.154.71"]
internal_network_range = "152.16.0.0/24"
service_ip_addr = ["10.123.154.253", "10.123.154.98", "10.123.154.209", "10.123.154.175", "10.123.154.104", "10.123.154.184"]
external_default_gw = "10.123.154.1"
egress_dest = ["10.123.154.0/24"]
Note:
The number of egress_ip_addr entries depends on the number of replicas divided by 2.
For example:
- For 4 replicas, 2 egress_ip_addr entries must be present.
- For 6 replicas, 3 egress_ip_addr entries must be present.
- After editing the
cnlb.ini file, run the validation script to check for any discrepancies.
$ ./installer/validateCnlbIni.py
Successfully validated the vCNE cnlb.ini file - Run the following command to source the openrc
file:
$ source openrc.shSample output:
Please enter your OpenStack Username for project Team-CNE: Please enter your OpenStack Password for project Team-CNE as user prince.p.pranav@oracle.com: Please enter your OpenStack Domain for project Team-CNE: - Run the following command to run the
updNetwork.py script:
$ ./installer/updNetwork.py
Updating CNLB network as indicated in the cnlb.ini file - Validating the cnlb.ini file - Validation for vCNE cnlb.ini file succeeded. - Generating new cnlb.auto.tfvars file - Successfully created cnlb.auto.tfvars file - Initializing and running tofu apply - Tofu initialized - Running tofu apply... (may take several minutes) - Apply complete! Resources: 1 added, 0 changed, 0 destroyed - Successful run of tofu apply - check /var/occne/cluster/occne5-prince-p-pranav/updNetwork-05282025_114515.log for details - Running installCnlb.py - Successfully ran installCnlb.py - Restarting cnlb-manager and cnlb-app deployments - Deployment: cnlb-manager was restarted. Please wait for the pods status to be 'running' - Deployment: cnlb-app was restarted. Please wait for the pods status to be 'running' - Network update successfully completed - Run the following command to validate the scale-out
operation:
$ kco get po | grep cnlb-appSample output:
cnlb-app-7bc4df4ffd-4v8hj 1/1 Running 0 14s cnlb-app-7bc4df4ffd-gtqs8 1/1 Running 0 17s cnlb-app-7bc4df4ffd-hxxrb 1/1 Running 0 14s cnlb-app-7bc4df4ffd-jth27 1/1 Running 0 17s cnlb-app-7bc4df4ffd-rvx9n 1/1 Running 0 15s cnlb-app-7bc4df4ffd-x8vdf 1/1 Running 0 17s
- Running Applications: Applications running on older replicas are not automatically redistributed to the new replicas. These applications continue running on their current replicas unless they are reassigned manually. You must reassign applications to utilize the new replicas.
- New Applications: Any new applications installed are automatically scheduled to utilize the new replicas.
Assigning Egress IPs to Dedicated Serviceset
This step enables you to assign Egress IP addresses from available networks to dedicated CNLB application pods or servicesets. This ensures that outbound traffic from specific services uses predefined network IPs for consistent routing and control.
Run the following command to display the current egress configuration:
$ kubectl -n occne-infra exec -it $(kubectl -n occne-infra get po --no-headers -l app=cnlb-manager -o custom-columns=:.metadata.name) -- curl http://localhost:5001/net-info | python -m json.tool | jq
Sample output:
{
"10.233.124.125": [
{
"egressIpExt": "10.75.200.45",
"gatewayIp": "152.16.0.194",
"networkName": "oam"
}
],
"10.233.73.252": [
{
"egressIpExt": "10.75.200.14",
"gatewayIp": "152.16.0.193",
"networkName": "oam"
}
]
}Following are the field descriptions of the sample output:
Table A-28 Field Descriptions
| Field Name | Description |
|---|---|
| egressIpExt | External egress IP address to which outbound traffic from the pod is translated. |
| gatewayIp | Internal gateway IP address on the CNLB application that the pod uses as its egress gateway. |
| networkName | Name of the CNLB network that the egress IP belongs to. |

The egress IP to serviceset assignments are stored in the cnlb-manager network ConfigMap. The following example shows a ConfigMap where the egress IPs 10.75.200.79 and 10.75.200.54 are assigned to the dedicated servicesets serviceIpSet0 and serviceIpSet1, respectively:
apiVersion: v1
data:
cnlb_manager_net_cm.json: |-
{
"networks": [
{
"external-network": [
{
"range": "10.75.200.0/25",
"gatewayIp": "10.75.200.1"
}
],
"internal-network": [
{
"range": "152.16.0.0/24",
"egressIp": [
[
"10.75.200.79",
"152.16.0.193",
"serviceIpSet0"
],
[
"10.75.200.120",
"152.16.0.194"
]
]
}
],
"networkName": "oam"
},
{
"external-network": [
{
"range": "10.75.200.0/25",
"gatewayIp": "10.75.200.1"
}
],
"internal-network": [
{
"range": "153.16.0.0/24",
"egressIp": [
[
"10.75.200.54",
"153.16.0.194",
"serviceIpSet1"
]
]
}
],
"networkName": "sig"
}
]
}
kind: ConfigMapCNLB Application and Manager Container Environment Variables
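As a minimal sketch, assuming the assignment is applied by editing this ConfigMap and restarting the cnlb-manager deployment, you could use commands like the following; the ConfigMap name is a placeholder that you must first resolve:
$ kubectl -n occne-infra get configmap | grep cnlb
$ kubectl -n occne-infra edit configmap <cnlb-manager-network-configmap>
$ kubectl -n occne-infra rollout restart deployment cnlb-manager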
This section lists the environment variables used by the CNLB Application and Manager containers.
The tables provided in this section describe all the supported environment variables
for the cnlb-app and cnlb-manager deployments.
These variables control logging, monitoring, replica scaling, and feature enablement
for CNLB components.
Environment variables can be set or modified at the time of deployment or later using
the kubectl set env command.
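For example, to raise the log verbosity of the CNLB application pods on a running cluster, you could set the LOG_LEVEL variable described in the tables below:
$ kubectl -n occne-infra set env deployment/cnlb-app LOG_LEVEL=DEBUG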
Note:
- Boolean values are case-insensitive.
- Only true (or its equivalent) enables the relevant features.
Table A-29 cnlb-app Environment Variables
| Variable Name | Default Value | Supported Values | Description |
|---|---|---|---|
| MY_NODE_NAME | (Set by Kubernetes) | Node name string (auto-set) | Auto-set to Kubernetes node name and is used for pod identification and health endpoints. |
| LOG_LEVEL | INFO | INFO, DEBUG | Controls log verbosity. |
| INTERFACE_MONITORING | False | true, false (case-insensitive) | Enables network interface link-state monitoring when set to True or true. |
Table A-30 cnlb-manager Environment Variables
| Variable Name | Default Value | Supported Values | Description |
|---|---|---|---|
| CNLB_APP_REP_CNT | 4 | Integer (recommended even ≥2) | Indicates the expected number of cnlb-app replicas for distribution and failover logic. |
| CNLB_APP_MON_TIMEOUT | 1 | Integer (seconds) | Indicates the time interval (in seconds) at which cnlb-manager checks the health endpoint of monitored cnlb-app pods. |
| LOG_LEVEL | INFO | INFO, DEBUG | Controls the log verbosity required at startup. |
| CNLB_POD_MAX_RETRY_CNT | 1 | Integer (count) | Indicates the number of times the cnlb-app pod health endpoint is retried before performing a failover. |
| CNLB_CONNTRACK_ENABLED | FALSE | true, false (case-insensitive) | Enables conntrack for tracking and failover; only true (any case) enables the feature. Note: This feature is not supported for VMware in the current CNE version. |
NF Configurations for Compatibility with CNLB
Many applications deployed on Kubernetes clusters require efficient load balancing and egress traffic management. Kubernetes provides basic load balancing capabilities; however, advanced features such as CNLB are required to handle complex scenarios involving dynamic traffic routing and scaling. When CNLB is enabled, you must configure pod annotations and network attachments correctly to leverage the functionalities of CNLB effectively. This section outlines the necessary steps to configure pod annotations and network attachments for Network Function (NF) applications to fully utilize CNLB.
Network Attachment Definitions
A Network Attachment Definition (NAD) is a resource that is used to set up a network attachment, in this case, a secondary network interface for a pod. This section provides information about the different types of Network Attachment Definitions that CNLB supports.
- Ingress Network Attachment Definitions: These definitions are used to handle inbound traffic only. This traffic enters the CNLB application through an external interface service IP address and gets routed internally using internal interfaces within CNLB networks.
Ingress Network Attachment Definitions use the following naming convention: nf-<service_network_name>-int.
nf-<service_network_name>-int. - Ingress and Egress Network Attachment Definitions (Also
referred to as Ingress/Egress Network Attachment Definitions): These definitions
are used to enable inbound and outbound traffic. An NF pod can initiate traffic
and route it through a CNLB application, translating source IP address to an
external egress IP address (defined in the
egress_addrvariable of thecnlb.inifile). An Ingress/Egress network attachment definition contains network information to create interfaces for NF pods and routes to external subnets. Apart from enabling outbound traffic, the Ingress/Egress network attachment definition also handles inbound traffic. Therefore, if you need both inbound and outbound traffic handling, you must use an Ingress/Egress Network Attachment Definitions.The following conditions must be satisfied to use Ingress/Egress functionality:- Ingress Network Attachment Definitions must be already created for the desired internal networks.
- Source (ingress) and destination (egress) subnet addresses must be known beforehand and defined in the cnlb.ini file against the egress_dest variable to generate Network Attachment Definitions.
Ingress/Egress Network Attachment Definitions uses the following naming convention:
nf-<service_network_name>-ie.The following diagram depicts a sample Ingress/Egress network attachment setup:Figure A-2 Ingress Egress Network Attachment Definition

In the sample setup, nf-sig-ie1 uses 192.168.1.1/24 as the gateway. A gateway address is set when a service is configured under the CNLB manager, which allocates services between active CNLB applications. This gateway address is set on one of the many active CNLB applications for the same network. Since each active CNLB application has its own unique egress IP address, egress traffic is translated to its corresponding external address. In this case, 192.168.1.1/24 is configured on Active CNLB App 1, which means that egress traffic is translated to 10.75.180.11/24. If the CNLB Manager had allocated 192.168.1.1/24 to Active CNLB App 2, the traffic would be translated to 10.75.180.12/24.
used to enable outbound traffic only. An NF pod can initiate traffic and route
it through a CNLB application, translating source IP address to an external
egress IP address. An Egress network attachment definition contains network
information to create interfaces for NF pods and routes to external subnets.
The following conditions must be satisfied to use Egress functionality:
- Ingress Network Attachment Definitions must be already created for the desired internal networks.
- Destination (egress) subnet addresses must be known
beforehand and defined in the
cnlb.inifile against theegress_destvariable to generate Network Attachment Definitions. - The use of an Egress network attachment definition on a deployment can be used in combination with Ingress or Ingress/Egress Network Attachment Definitions to route traffic through specific CNLB applications.
Egress Network Attachment Definitions uses the following naming convention:
nf-<service_network_name>-egr.The following diagram depicts a sample Egress network attachment setup:Figure A-3 Egress Network Attachment Definition

In the sample setup, nf-sig-egr1 uses 192.168.1.193/24 as the gateway and nf-sig-egr2 uses 192.168.1.194/24 as the gateway. Gateway addresses used by Egress NADs are unique for each Active CNLB application and do not depend on services; that is, an egress gateway address is always available for use. Each Egress NAD points to a separate gateway address per network, allowing traffic to be routed through the desired Active CNLB application. In this case, there are two Active CNLB applications with their corresponding egress IP addresses. NF Pod 1 uses the nf-sig-egr1 NAD to use Active CNLB App 1 as its router and translates to 10.75.180.11/24. NF Pod 2 uses the nf-sig-egr2 NAD to use Active CNLB App 2 as its router and translates to 10.75.180.12/24.
Annotations for Network Attachment and CNLB
When CNLB is enabled, applications do not use Kubernetes service configurations, MetalLB annotations, or egress annotations for assigning external load balancing IPs or performing egress operations. Instead, each application must specify the pod annotations with network attachments ("k8s.v1.cni.cncf.io/networks") and the CNLB annotation ("oracle.com.cnc/cnlb") during Helm installation to enable load balancing and add egress capabilities for application pods. This section provides details about the network attachment and CNLB annotations.
- Run the following command to locate Network Attachment Definitions (NADs)
associated with the specified network name. This displays all Ingress and Egress
NADs if configured:
Note:
Replace NETWORK_NAME in the command with the name of the network you want to configure.
$ kubectl get net-attach-def | grep NETWORK_NAME
For example:
$ kubectl get net-attach-def | grep sig
Sample output:
nf-sig-int1   11h
nf-sig-int2   11h
nf-sig-int3   11h
nf-sig-int4   11h
nf-sig-int5   11h
nf-sig-ie1    11h
nf-sig-ie2    11h
nf-sig-ie3    11h
nf-sig-ie4    11h
nf-sig-ie5    11h
nf-sig-egr1   11h
nf-sig-egr2   11h
- Use one of the NADs obtained in the previous step to configure the annotations. However, ensure that you do not reuse a network attachment that is already in use.
- Run the following command to check the list of network attachments already in
use by
applications:
$ kubectl get deploy,sts,ds -A -o json | jq -r '.items[] | select(.spec.template.metadata.annotations."k8s.v1.cni.cncf.io/networks" != null) | {kind, namespace: .metadata.namespace, name: .metadata.name, podAnnotations: .spec.template.metadata.annotations."k8s.v1.cni.cncf.io/networks"}'
Sample output:
{
  "kind": "Deployment",
  "namespace": "occne-infra",
  "name": "cnlb-app",
  "podAnnotations": "lb-dia-ext@lb-dia-ext,lb-dia-int@lb-dia-int,lb-oam-ext@lb-oam-ext,lb-oam-int@lb-oam-int,lb-sig-ext@lb-sig-ext,lb-sig-int@lb-sig-int"
}
{
  "kind": "StatefulSet",
  "namespace": "occne-infra",
  "name": "alertmanager-occne-kube-prom-stack-kube-alertmanager",
  "podAnnotations": "default/nf-oam-int2@nf-oam-int2"
}
{
  "kind": "StatefulSet",
  "namespace": "occne-infra",
  "name": "prometheus-occne-kube-prom-stack-kube-prometheus",
  "podAnnotations": "default/nf-oam-int1@nf-oam-int1"
}
{
  "kind": "DaemonSet",
  "namespace": "ingress-nginx",
  "name": "ingress-nginx-controller",
  "podAnnotations": "default/nf-dia-int1@nf-dia-int1"
}
.... OUTPUT TRUNCATED ...
Annotations for Ingress, Egress, and Ingress/Egress Communications
This section provides details about the annotations for configuring ingress, egress, and ingress/egress communications.
When you install a deployment, StatefulSet (STS), or DaemonSet that requires load
balancing, ensure that you include network attachment and CNLB annotations in the
pod.spec.template.metadata.annotations section of the Helm chart or
application deployment.
Note:
- For consistency and ease of maintenance, it is recommended to add the annotations through Helm chart values before installation.
- StatefulSets cannot be annotated after installation. Deployments, ReplicaSets, and DaemonSets can be annotated after installation; however, this is not recommended, as stated in the previous note.
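For example, a chart that exposes pod annotations through its values file (the podAnnotations key shown here is a common but chart-specific convention, not a CNE requirement) could be supplied with:
# values.yaml (illustrative; the key name depends on the chart)
podAnnotations:
  k8s.v1.cni.cncf.io/networks: default/nf-sig-int3@nf-sig-int3
  oracle.com.cnc/cnlb: '[{"backendPortName": "grafana", "cnlbIp": "10.75.181.55", "cnlbPort": "80"}]'
The chart template then renders these values under spec.template.metadata.annotations.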
Annotations for Single Network Ingress or Ingress/Egress Communications
- The annotation key for network attachment is
k8s.v1.cni.cncf.io/networks - The annotation key for CNLB is
oracle.com.cnc/cnlb
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig-int3@nf-sig-int3
        oracle.com.cnc/cnlb: '[{"backendPortName": "grafana", "cnlbIp": "10.75.181.55", "cnlbPort": "80"}]'
The following example shows the annotations for Ingress/Egress communication, where the external destination subnets are defined in the egress_dest variable of cnlb.ini:
# Annotation key for network attachment: k8s.v1.cni.cncf.io/networks
# Annotation key for cnlb annotation: oracle.com.cnc/cnlb
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig-ie3@nf-sig-ie3
        oracle.com.cnc/cnlb: '[{"backendPortName": "grafana", "cnlbIp": "10.75.181.55", "cnlbPort": "80"}]'
The oracle.com.cnc/cnlb annotation is used to define the service IP and port configurations that the Deployment, StatefulSet (STS), or DaemonSet (DS) uses for ingress load balancing. The oracle.com.cnc/cnlb annotation contains the following attributes:
- backendPortName: The backend port name of the container that needs load balancing. This name can be retrieved from the deployment or pod specification of the application. If an application does not include the port specification during Helm installation, you can add the container port specification after installation by editing the deployment. However, it is preferable to provide the container port specification as part of the Helm chart.
- cnlbIp: The frontend IP utilized by the application.
- cnlbPort: The frontend port used in conjunction with the CNLB IP for load balancing.
Note:
- The Grafana deployment in the examples uses a network attachment named nf-sig-int3/nf-sig-ie3 in the default namespace, which is identified by the name of the application interface, nf-sig-int3/nf-sig-ie3. Ensure that the names before and after the @ character match.
- The IP address configured in these examples (10.75.181.55) must fall within the specified range of IP addresses provided in the cnlb.ini file and must be available when you configure the annotations. You can check the availability of the service IP address by requesting the CNLB manager to display the services assigned to all sets:
$ kubectl -n occne-infra exec -it $(kubectl -n occne-infra get po --no-headers -l app=cnlb-manager -o custom-columns=:.metadata.name) -- curl http://localhost:5001/service-info | python -m json.tool | jq | grep "frontEndIp"
Sample output:
"frontEndIp": "10.123.155.48",
"frontEndIp": "10.123.155.5",
"frontEndIp": "10.123.155.8",
"frontEndIp": "10.123.155.16",
"frontEndIp": "10.123.155.26",
"frontEndIp": "10.123.155.27",
"frontEndIp": "10.123.155.49",
"frontEndIp": "10.123.155.7",
"frontEndIp": "10.123.155.9",
"frontEndIp": "10.123.155.9",
"frontEndIp": "10.123.155.17",
The sample output does not display the IP address configured in the sample annotation (10.75.181.55) in the "frontEndIp" values. This means that the IP address is not in use and is available.
In these examples, the service IP 10.75.181.55 is used for load balancing ingress traffic on port 80 with the backend container named grafana. The backend port name aligns with the container port name specified in the deployment specification:
ports:
- containerPort: 3000
  name: grafana
  protocol: TCP
Annotations for Single Network Egress Communications
Egress communication allows pods to originate traffic and route it through CNLB applications. This is done by creating Network Attachment Definitions with destination routes, which allows the pods to use a CNLB application as a gateway. For packets going out through a CNLB application, the internal IP address is translated to an egress external IP address.
To generate an egress NAD, you must specify the external destination subnets with which the system communicates. These subnets are defined as a list under the egress_dest variable in the cnlb.ini file and are used to configure the routes in the pod routing tables.
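As an illustration, the relevant cnlb.ini entries might look like the following; the exact file syntax is deployment-specific, and the address and subnet values here are placeholders:
# cnlb.ini (illustrative excerpt; values are placeholders)
egress_addr=10.75.180.11,10.75.180.12      # external egress IPs, one per Active CNLB application
egress_dest=10.10.10.0/24,10.10.20.0/24    # external destination subnets routed through CNLB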
Egress NADs enable a deployment to generate outbound traffic without setting an ingress IP address.
The following example outlines the annotation for a single egress NAD, which uses only the network attachment annotation key k8s.v1.cni.cncf.io/networks and no CNLB service IP annotation:
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Annotations for Multiple Port Ingress Communication with Same External IP
- backendPortName: The backend port name of the container that needs load balancing.
- cnlbIp: The frontend IP utilized by the application. In this case, the same external IP can be used for each list item.
- cnlbPort: The frontend port used in conjunction with the CNLB IP for load balancing.
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-oam-int5@nf-oam-int5
        oracle.com.cnc/cnlb: '[{"backendPortName": "query", "cnlbIp": "10.75.180.128", "cnlbPort": "80"}, {"backendPortName": "admin", "cnlbIp": "10.75.180.128", "cnlbPort": "16687"}]'
In the above example, each item in the list refers to a different backend port name with the same CNLB IP; however, the frontend ports are distinct.
cnlbIp represents the external IP of the service, and cnlbPort is the external-facing port:
ports:
- containerPort: 16686
  name: query
  protocol: TCP
- containerPort: 16687
  name: admin
  protocol: TCP
Annotations for StatefulSets
This section provides details about the annotations for StatefulSets that are used for single external IP to StatefulSet pod mapping.
The annotations in this section are used by SQL StatefulSet pods to create a one-to-one mapping between an external service IP and a StatefulSet pod backend. If any other StatefulSet requires the same mapping, apply this same configuration.
- oracle.com.cnc/InstanceSvcIpSupport: Set the
"
oracle.com.cnc/InstanceSvcIpSupport" variable to "true". - oracle.com.cnc/cnlb: Use the
cnlbIpvariable in theoracle.com.cnc/cnlbannotation key to define the CNLB annotation for each external IP in the following format: "net-attach-name/externalip".
net-attach-name" is the network attachment name without
'default/' for a network attachment backend that an external IP
will use.
- The following example provides the configuration for SQL StatefulSet pods with one network attachment (network) and two external IPs in the same network for the "default/sig1" network attachment:
k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie1@nf-sig1-ie1
oracle.com.cnc/InstanceSvcIpSupport: "true"
oracle.com.cnc/cnlb: '[{"backendPortName": "mysql", "cnlbIp": "nf-sig1-ie1/10.75.210.5,nf-sig1-ie1/10.75.210.6","cnlbPort":"3306"}]'
- The following example provides the configuration for SQL StatefulSet pods with two network attachments in two different networks and two external IPs in different networks:
k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie1@nf-sig1-ie1,default/nf-sig2-ie1@nf-sig2-ie1
oracle.com.cnc/InstanceSvcIpSupport: "true"
oracle.com.cnc/cnlb: '[{"backendPortName": "mysql", "cnlbIp": "nf-sig1-ie1/10.75.210.5,nf-sig2-ie1/10.76.210.6","cnlbPort":"3306"}]'
In this example, sig1/10.75.210.5 indicates that the sql-0 pod uses the 10.75.210.5 external IP as its ingress external IP and uses the sig1 network attachment assigned IP as the backend. Similarly, the sql-1 pod uses the 10.76.210.6 external IP as its ingress external IP with the sig2 network attachment assigned IP as the backend. Any combination can be applied with any number of network attachments or external IPs by following the correct syntax.
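Putting this together, the annotations sit under the StatefulSet pod template. The following sketch reuses the values from the first example; the StatefulSet name and replica count are illustrative:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sql            # hypothetical StatefulSet name
spec:
  replicas: 2          # one external IP per pod (sql-0, sql-1)
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie1@nf-sig1-ie1
        oracle.com.cnc/InstanceSvcIpSupport: "true"
        oracle.com.cnc/cnlb: '[{"backendPortName": "mysql", "cnlbIp": "nf-sig1-ie1/10.75.210.5,nf-sig1-ie1/10.75.210.6","cnlbPort":"3306"}]'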
Annotations for Multi-Network Ingress and Egress Communications
This section describes the annotation configuration for pods that require ingress and egress (Ingress/Egress) communication over multiple networks.
- oracle.com.cnc/ingressMultiNetwork: Set the
"
oracle.com.cnc/ingressMultiNetwork" variable to "true". - k8s.v1.cni.cncf.io/networks: Use
k8s.v1.cni.cncf.io/networksto define all network attachments information that the pod requires. The parameter must define all the networks that the pod uses for network segregation. - oracle.com.cnc/cnlb: Use the
cnlbIpvariable in theoracle.com.cnc/cnlbannotation to list the external IPs that are required for load balancing. Each service IP must have network attachment prefix in the following format: "nf-oam-ie1/10.75.180.55". Where, network attachment and external service IP are for same network. IPs fromnf-oam-ie1network attachment are used as backend for external load balancing IP10.75.180.55. Similarly,nf-sig-egr1network attachment is used as the backend for external IP10.75.181.154.Note:
- Any number of external IPs can be added to the cnlbIp variable in the annotation.
- When oracle.com.cnc/ingressMultiNetwork is set to "true", ensure that you provide each cnlbIp in the following format: "<network-attachment-name>/<service-external-ip>".
The following example shows the annotations for multi-network Ingress/Egress communication:
k8s.v1.cni.cncf.io/networks: default/nf-oam-ie1@nf-oam-ie1,default/nf-sig-ie1@nf-sig-ie1
oracle.com.cnc/ingressMultiNetwork: "true"
oracle.com.cnc/cnlb: '[{"backendPortName": "http", "cnlbIp": "nf-oam-egr1/10.75.180.55,nf-sig-egr1/10.75.181.154",
"cnlbPort": "80"}]'Annotations for Multi-Network Egress Communications
This section provides the annotation configuration for pods that require egress communication from multiple networks.
Use the k8s.v1.cni.cncf.io/networks annotation to define all the network attachment information that the pod requires for network segregation.
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
Configuring Application Pods on Fixed CNLB Application Pods
This section provides information about configuring application pods on a particular CNLB application pod.
By default, the CNLB manager distributes services across service sets using an internal algorithm (the number of service sets equals the number of CNLB application replicas // 2). Perform the following steps to override the algorithm and specify a set name or CNLB application pod:
- Run the following command to find all service set
names:
kubectl -n occne-infra exec -it $(kubectl -n occne-infra get po --no-headers -l app=cnlb-manager -o custom-columns=:.metadata.name) -- curl http://localhost:5001/service-info | python -m json.tool | jq | grep serviceIpSet
Sample output:
"serviceIpSet0"
"serviceIpSet1"
"serviceIpSet2"
- Choose one of the service set names from the output where the application pods are to be configured.
For example, if you have a cnDBTier multi-site installation within a single cluster, choose the serviceIpSet0 service set name for site 1 and use the following annotation on the StatefulSet or deployment pod specification that needs load balancing:
Note:
Service set names are case sensitive. Ensure that you use the correct service set names.
oracle.com.cnc/cnlbSetName: serviceIpSet0
Similarly, use the "serviceIpSet1" service set name for site 2, and so on. This setting ensures that the pods on each site are confined to a particular CNLB application for proper internal testing.
API to Fetch Service or Application Load Balancing Details
This section provides information about the service-info API which
is used to fetch load balancing details of a service or application.
The CNLB solution does not use Kubernetes service configurations to implement load balancing. The system uses the IP addresses provided by Multus as backend IPs for CNLB-enabled applications. Therefore, you cannot find the load balancing IP using the "kubectl get svc" command. To overcome this, the system provides the service-info API to fetch the load balancing details of an application.
To use the service-info API, run the following API request from the Bastion Host:
$ kubectl -n occne-infra exec -it $(kubectl -n occne-infra get po --no-headers -l app=cnlb-manager -o custom-columns=:.metadata.name) -- curl http://localhost:5001/service-info | python -m json.tool | jq
Sample output:
{
"serviceIpSet0": [
{
"appName": "occne-promxy-apigw-nginx",
"backendIpList": [
"132.16.0.136",
"132.16.0.135"
],
"backendPort": 80,
"backendPortName": "nginx",
"frontEndIp": "10.75.180.212",
"frontEndPort": "80",
"gatewayIp": "132.16.0.4",
"networkName": "oam"
}
],
"serviceIpSet1": [
{
"appName": "opensearch-dashboards",
"backendIpList": [
"132.16.0.129"
],
"backendPort": 5601,
"backendPortName": "http",
"frontEndIp": "10.75.180.51",
"frontEndPort": "80",
"gatewayIp": "132.16.0.6",
"networkName": "oam"
}
]
}
The API endpoint provides information such as the application name, backend IP list (pod Multus IP addresses), frontend IP, and frontend port for applications (such as Deployments and StatefulSets) that are annotated with CNLB annotations.
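To narrow the output to a single application, you can filter the same JSON with jq; the application name below is taken from the sample output and is illustrative:
$ kubectl -n occne-infra exec -it $(kubectl -n occne-infra get po --no-headers -l app=cnlb-manager -o custom-columns=:.metadata.name) -- curl -s http://localhost:5001/service-info | jq '.[] | .[] | select(.appName == "opensearch-dashboards")'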
CNLB Annotation Generation Script
The CNLB installation provides the cnlbGenAnnotations.py script on the Bastion Host to generate annotations. You can use this interactive script to copy and update Helm charts before installation. The script finds the available network attachments and external IPs within the cluster and generates the annotations, which you can update or edit as per your application requirements.
Run the following command to start the cnlbGenAnnotations.py script:
$ python3 /var/occne/cluster/$OCCNE_CLUSTER/artifacts/cnlbGenAnnotations.py
When the script starts, it prompts you with questions related to your application. After you provide the answers, the script lists the available external IPs and network attachments per network, and generates the annotations that the application can use.
# Once the script is run, it asks some questions. The answers below are for an application
# that will use the sig1 network for ingress/egress requests.
1. Network Names that application will use? Valid Choice example - oam OR comma seperated values - oam,sig,sig1 = sig1
2. Application pod external communication ? Valid choice - IE(Ingress/Egress) , EGR(Egress Only) = ie
3. Pod ingress network type ? Valid choice - SI(Single Instance -DB tier only) , MN(Multi network ingress / egress) , SN(Single Network) = si
4. Provide container ingress port name for backend from pod spec , external ingress load balancing port? Valid choice - (http,80) = sql,3306
# After the questions are answered, the script lists the available external IPs and network attachments per network, and prints the annotations that the application can use:
-------------------------------------------------
Available service ip for network oam , service ip list ['10.123.155.5', '10.123.155.48', '10.123.155.7', '10.123.155.8', '10.123.155.9', '10.123.155.49']
-------------------------------------------------
-------------------------------------------------
Available net attachment names
-------------------------------------------------
['default/nf-sig1-ie1@nf-sig1-ie1', 'default/nf-sig1-ie10@nf-sig1-ie10', 'default/nf-sig1-ie11@nf-sig1-ie11', 'default/nf-sig1-ie12@nf-sig1-ie12', 'default/nf-sig1-ie13@nf-sig1-ie13', 'default/nf-sig1-ie14@nf-sig1-ie14', 'default/nf-sig1-ie15@nf-sig1-ie15', 'default/nf-sig1-ie2@nf-sig1-ie2', 'default/nf-sig1-ie3@nf-sig1-ie3', 'default/nf-sig1-ie4@nf-sig1-ie4']
-------------------------------------------------
-------------------------------------------------
Pod network attachment annotation, add to application pod spec # (OUTPUT TO USE FOR ANNOTATING APPLICATIONS)
-------------------------------------------------
k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie1@nf-sig1-ie1
oracle.com.cnc/InstanceSvcIpSupport : "true"
oracle.com.cnc/cnlb: '[{"backendPortName": "sql", "cnlbIp": "nf-sig1-ie1/10.123.155.13","cnlbPort": "3306"}]'
-------------------------------------------------
Updating Annotations to Change NAD or CNLB IPs for Applications
This section provides the procedure to update the annotations to change NAD or CNLB IPs for a running application.
- Scale down the application deployment to 0 before changing the annotation. This also applies to StatefulSets:
kubectl -n <namespace> scale deploy <deployment> --replicas 0
kubectl -n <namespace> scale deploy <deployment> --replicas 0 - Edit the deployment and update the required
annotations:
$ kubectl -n <namespace> edit deploy <deployment> - Scale up the application deployment to required
replicas:
$ kubectl -n <namespace> scale deploy <deployment> --replicas <replica>
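For Deployments, an alternative to the interactive edit in step 2 is a one-shot merge patch applied between the scale operations; the annotation value below is a placeholder:
$ kubectl -n <namespace> patch deploy <deployment> --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"k8s.v1.cni.cncf.io/networks":"default/nf-sig-ie2@nf-sig-ie2"}}}}}'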
Additional Considerations and Options
CNLB provides the following additional considerations and options:
- Advanced Configuration: The
pluginConfigsection provides extensive configuration depending on the delegate CNI plugin that you choose. Explore the documentation for specific plugins like "whereabouts" for available options. - Network Policy Integration: You can configure network policies to control traffic flow between pods attached to different networks.
For more information on the additional options and configurations, see Multus CNI documentation.
Sample ToR Switch Configurations
This section provides sample ToR switch configuration templates for MetalLB, CNLB VLAN version, and CNLB bond0 version.
Sample ToR Switch Configuration Templates for MetalLB
- Sample Template for Switch
A:
hostname {switchname} vdc {switchname} id 1 limit-resource vlan minimum 16 maximum 4094 limit-resource vrf minimum 2 maximum 4096 limit-resource port-channel minimum 0 maximum 511 limit-resource u4route-mem minimum 248 maximum 248 limit-resource u6route-mem minimum 96 maximum 96 limit-resource m4route-mem minimum 58 maximum 58 limit-resource m6route-mem minimum 8 maximum 8 feature scp-server feature sftp-server cfs eth distribute feature ospf feature bgp feature interface-vlan feature lacp feature vpc feature bfd feature vrrpv3 username admin password {admin_password} role network-admin username {user_name} password {user_password} role network-admin username {user_name} passphrase lifetime 99999 warntime 7 gracetime 3 no ip domain-lookup system jumbomtu 9000 ip access-list ALLOW_5G_XSI_LIST 10 permit ip {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} any class-map type qos match-any MATCH_SIGNALING_SUBNETS description MATCH SIGNALING,TCP,REPLICATION PACKETS match access-group name ALLOW_5G_XSI_LIST policy-map type qos SET_DSCP_SIGNALING_SUBNETS description MARK SIGNALING,TCP,REPLICATION PACKETS AS DSCP CS3 class MATCH_SIGNALING_SUBNETS ip prefix-list signal-metalLB-pool description signal-metalLB-pool ip prefix-list signal-metalLB-pool seq 10 permit {SIG1_POOL_WITH_PREFIX_LEN} le 32 ip prefix-list signal-metalLB-pool seq 15 permit {SIG2_POOL_WITH_PREFIX_LEN} le 32 route-map signal-metalLB-app permit 10 match ip address prefix-list signal-metalLB-pool track 1 interface Ethernet1/51 line-protocol track 2 interface Ethernet1/52 line-protocol copp profile strict rmon event 1 description FATAL(1) owner PMON@FATAL rmon event 2 description CRITICAL(2) owner PMON@CRITICAL rmon event 3 description ERROR(3) owner PMON@ERROR rmon event 4 description WARNING(4) owner PMON@WARNING rmon event 5 description INFORMATION(5) owner PMON@INFO ntp server {NTPSERVER1} ntp server {NTPSERVER2} ntp server {NTPSERVER3} ntp server {NTPSERVER4} ntp server {NTPSERVER5} ntp master ip route 0.0.0.0/0 {OAM_UPLINK_CUSTOMER_ADDRESS} name oam-uplink-customer ip route 0.0.0.0/0 172.16.100.2 name backup-oam-to-customer 50 vlan 1-4,100 vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 100 name OSPF_VPC_Management ip prefix-list ALLOW_5G_XSI description ADVERTISE XSI SUBNETS to ASK ip prefix-list ALLOW_5G_XSI seq 5 permit {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} le 32 route-map 5G_APP permit 10 match ip address prefix-list ALLOW_5G_XSI vrf context management vrf context vpc-only hardware access-list tcam region egr-racl 1280 hardware access-list tcam region egr-l3-vlan-qos 512 vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.2 source 192.168.2.1 vrf vpc-only delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize interface Vlan1 no ip redirects no ipv6 redirects interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.2/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.2/24 no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address {CNE_Management_SwA_Address}/{CNE_Management_Prefix} no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address {CNE_Management_VIP} primary interface Vlan100 
description OSPF_VPC_Management no shutdown mtu 9000 ip address 172.16.100.1/30 ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel10 description PortChannel to RMS10 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 10 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-5,100 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 channel-group 1 force mode active no shutdown interface Ethernet1/2 description "Reserved for RMS1 iLO when needed" switchport switchport access vlan 2 no shutdown interface 
Ethernet1/3 description RMS2 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS3 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS5 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS7 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS9 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS11 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/19 interface Ethernet1/20 interface Ethernet1/21 interface Ethernet1/22 interface Ethernet1/23 interface Ethernet1/24 interface Ethernet1/25 interface Ethernet1/26 interface Ethernet1/27 interface Ethernet1/28 interface Ethernet1/29 interface Ethernet1/30 interface Ethernet1/31 interface Ethernet1/32 interface Ethernet1/33 interface Ethernet1/34 interface Ethernet1/35 interface Ethernet1/36 interface Ethernet1/37 interface Ethernet1/38 interface Ethernet1/39 interface Ethernet1/40 interface Ethernet1/41 interface Ethernet1/42 interface 
Ethernet1/43 interface Ethernet1/44 interface Ethernet1/45 interface Ethernet1/46 interface Ethernet1/47 interface Ethernet1/48 interface Ethernet1/49 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-5,100 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-5,100 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address {OAM_UPLINK_SwA_ADDRESS}/30 no shutdown interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 service-policy type qos output SET_DSCP_SIGNALING_SUBNETS ip address {SIGNAL_UPLINK_SwA_ADDRESS}/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface Ethernet1/53 interface Ethernet1/54 interface mgmt0 description RMS1 eno2 vrf member management ip address 192.168.2.1/24 interface loopback0 description OSPF_BGP_router_id ip address 192.168.0.1/32 ip ospf network point-to-point ip ospf advertise-subnet line console line vty router ospf 1 router-id 192.168.0.1 network {SIGNAL_UPLINK_SUBNET}/30 area {OSPF_AREA_ID} network 172.16.100.0/30 area {OSPF_AREA_ID} redistribute bgp 64501 route-map 5G_APP summary-address {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} router bgp 64501 router-id 192.168.0.1 log-neighbor-changes address-family ipv4 unicast maximum-paths 64 neighbor 172.16.3.0/24 remote-as 64512 maximum-peers 256 address-family ipv4 unicast maximum-prefix 256 - Sample Template for Switch
B:
hostname {switchname} vdc {switchname} id 1 limit-resource vlan minimum 16 maximum 4094 limit-resource vrf minimum 2 maximum 4096 limit-resource port-channel minimum 0 maximum 511 limit-resource u4route-mem minimum 248 maximum 248 limit-resource u6route-mem minimum 96 maximum 96 limit-resource m4route-mem minimum 58 maximum 58 limit-resource m6route-mem minimum 8 maximum 8 feature scp-server feature sftp-server cfs eth distribute feature ospf feature bgp feature interface-vlan feature lacp feature vpc feature bfd feature vrrpv3 username admin password {admin_password} role network-admin username {user_name} password {user_password} role network-admin username {user_name} passphrase lifetime 99999 warntime 7 gracetime 3 no ip domain-lookup system jumbomtu 9000 ip access-list ALLOW_5G_XSI_LIST 10 permit ip {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} any class-map type qos match-any MATCH_SIGNALING_SUBNETS description MATCH SIGNALING,TCP,REPLICATION PACKETS match access-group name ALLOW_5G_XSI_LIST policy-map type qos SET_DSCP_SIGNALING_SUBNETS description MARK SIGNALING,TCP,REPLICATION PACKETS AS DSCP CS3 class MATCH_SIGNALING_SUBNETS track 1 interface Ethernet1/51 line-protocol track 2 interface Ethernet1/52 line-protocol copp profile strict rmon event 1 description FATAL(1) owner PMON@FATAL rmon event 2 description CRITICAL(2) owner PMON@CRITICAL rmon event 3 description ERROR(3) owner PMON@ERROR rmon event 4 description WARNING(4) owner PMON@WARNING rmon event 5 description INFORMATION(5) owner PMON@INFO ntp server {NTPSERVER1} ntp server {NTPSERVER2} ntp server {NTPSERVER3} ntp server {NTPSERVER4} ntp server {NTPSERVER5} ntp master ip route 0.0.0.0/0 {OAM_UPLINK_CUSTOMER_ADDRESS} name oam-uplink-customer ip route 0.0.0.0/0 172.16.100.1 name backup-oam-to-customer 50 vlan 1-5,100 vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 100 name OSPF_VPC_Management ip prefix-list ALLOW_5G_XSI description ADVERTISE XSI SUBNETS to ASK ip prefix-list ALLOW_5G_XSI seq 5 permit {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} le 32 route-map 5G_APP permit 10 match ip address prefix-list ALLOW_5G_XSI vrf context management vrf context vpc-only hardware access-list tcam region egr-racl 1280 hardware access-list tcam region egr-l3-vlan-qos 512 vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.2 source 192.168.2.1 vrf vpc-only delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize interface Vlan1 no ip redirects no ipv6 redirects interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.3/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.3/24 no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address {CNE_Management_SwB_Address}/{CNE_Management_Prefix} no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address {CNE_Management_VIP} primary interface Vlan100 description OSPF_VPC_Management no shutdown mtu 9000 ip address 172.16.100.2/30 ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface port-channel1 description PortChannel to RMS1 switchport 
switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel10 description PortChannel to RMS10 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 10 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 1 force mode active no shutdown interface Ethernet1/2 description RMS2 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/3 description RMS2 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed 
vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS4 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS6 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS8 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS10 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS12 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/19 interface Ethernet1/20 interface Ethernet1/21 interface Ethernet1/22 interface Ethernet1/23 interface Ethernet1/24 interface Ethernet1/25 interface Ethernet1/26 interface Ethernet1/27 interface Ethernet1/28 interface Ethernet1/29 interface Ethernet1/30 interface Ethernet1/31 interface Ethernet1/32 interface Ethernet1/33 interface Ethernet1/34 interface Ethernet1/35 interface Ethernet1/36 interface Ethernet1/37 interface Ethernet1/38 interface Ethernet1/39 interface Ethernet1/40 interface Ethernet1/41 interface Ethernet1/42 interface Ethernet1/43 interface Ethernet1/44 interface Ethernet1/45 interface Ethernet1/46 interface Ethernet1/47 interface Ethernet1/48 interface Ethernet1/49 description ISL-Mate-93180A switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180A 
switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address {OAM_UPLINK_SwB_ADDRESS}/30 no shutdown interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 service-policy type qos output SET_DSCP_SIGNALING_SUBNETS ip address {SIGNAL_UPLINK_SwB_ADDRESS}/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface Ethernet1/53 interface Ethernet1/54 interface mgmt0 description RMS1 1G-NIC3 vrf member management ip address 192.168.2.2/24 interface loopback0 description OSPF_BGP_router_id ip address 192.168.0.2/32 ip ospf network point-to-point ip ospf advertise-subnet line console line vty router ospf 1 router-id 192.168.0.2 network {SIGNAL_UPLINK_SUBNET}/30 area {OSPF_AREA_ID} network 172.16.100.0/30 area {OSPF_AREA_ID} redistribute bgp 64501 route-map 5G_APP summary-address {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} router bgp 64501 router-id 192.168.0.2 log-neighbor-changes address-family ipv4 unicast maximum-paths 64 neighbor 172.16.3.0/24 remote-as 64512 maximum-peers 256 address-family ipv4 unicast maximum-prefix 256
Sample ToR Switch Configuration Templates for CNLB VLAN Version
- Sample Template for Switch
A:
hostname {switchname} vdc {switchname} id 1 limit-resource vlan minimum 16 maximum 4094 limit-resource vrf minimum 2 maximum 4096 limit-resource port-channel minimum 0 maximum 511 limit-resource u4route-mem minimum 248 maximum 248 limit-resource u6route-mem minimum 96 maximum 96 limit-resource m4route-mem minimum 58 maximum 58 limit-resource m6route-mem minimum 8 maximum 8 feature scp-server feature sftp-server cfs eth distribute feature ospf feature interface-vlan feature lacp feature vpc feature bfd feature vrrpv3 username admin password {admin_password} role network-admin username {user_name} password {user_password} role network-admin username {user_name} passphrase lifetime 99999 warntime 7 gracetime 3 no ip domain-lookup system jumbomtu 9000 ip access-list ALLOW_5G_XSI_LIST 10 permit ip {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} any class-map type qos match-any MATCH_SIGNALING_SUBNETS description MATCH SIGNALING,TCP,REPLICATION PACKETS match access-group name ALLOW_5G_XSI_LIST policy-map type qos SET_DSCP_SIGNALING_SUBNETS description MARK SIGNALING,TCP,REPLICATION PACKETS AS DSCP CS3 class MATCH_SIGNALING_SUBNETS track 1 interface Ethernet1/51 line-protocol track 2 interface Ethernet1/52 line-protocol copp profile strict rmon event 1 description FATAL(1) owner PMON@FATAL rmon event 2 description CRITICAL(2) owner PMON@CRITICAL rmon event 3 description ERROR(3) owner PMON@ERROR rmon event 4 description WARNING(4) owner PMON@WARNING rmon event 5 description INFORMATION(5) owner PMON@INFO ntp server {NTPSERVER1} ntp server {NTPSERVER2} ntp server {NTPSERVER3} ntp server {NTPSERVER4} ntp server {NTPSERVER5} ntp master ip route 0.0.0.0/0 {OAM_UPLINK_CUSTOMER_ADDRESS} name oam-uplink-customer ip route 0.0.0.0/0 172.16.100.2 name backup-oam-to-customer 50 vlan 1-5,100 vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 50 name CNLB-OAM-EXT-VLAN vlan 60 name CNLB-SIG-EXT-VLAN vlan 100 name OSPF_VPC_Management vlan 110 name CNLB-OAM-INT-VLAN vlan 120 name CNLB-SIG-INT-VLAN vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.2 source 192.168.2.1 vrf vpc-only delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.2/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.2/24 no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address {CNE_Management_SwA_Address}/{CNE_Management_Prefix} no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address {CNE_Management_VIP} primary interface Vlan50 description CNLB_OAM_EXT no shutdown ip address {CNLB_OAM_EXT_SwA_Address}/{CNLB_OAM_EXT_Prefix} ipv6 address 2050::/64 eui64 vrrpv3 50 address-family ipv4 object-track 1 address {CNLB_OAM_EXT_VIP} primary interface Vlan60 description CNLB_SIG_EXT no shutdown ip address {CNLB_SIG_EXT_SwA_Address}/{CNLB_SIG_EXT_Prefix} ipv6 address 2060::/64 eui64 vrrpv3 60 address-family ipv4 object-track 1 address {CNLB_SIG_EXT_VIP} primary interface Vlan100 description OSPF_VPC_Management no shutdown mtu 9000 ip address 172.16.100.1/30 ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 10 ip ospf dead-interval 10 ip ospf 
hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface Vlan110 description CNLB_OAM_INT no shutdown ip address 172.16.110.102/24 ipv6 address 2110::/64 eui64 interface Vlan120 description CNLB_SIG_INT no shutdown ip address 172.16.120.102/24 ipv6 address 2120::/64 eui64 interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel10 description PortChannel to RMS10 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 10 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-4,50,60,100,110,120 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk 
mtu 9000 channel-group 1 force mode active no shutdown interface Ethernet1/2 description "Reserved for RMS1 iLO when needed" switchport switchport access vlan 2 no shutdown interface Ethernet1/3 description RMS2 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS3 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS5 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS7 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS9 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS11 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/19 interface Ethernet1/20 interface Ethernet1/21 interface Ethernet1/22 interface Ethernet1/23 interface Ethernet1/24 interface Ethernet1/25 interface 
Ethernet1/26 interface Ethernet1/27 interface Ethernet1/28 interface Ethernet1/29 interface Ethernet1/30 interface Ethernet1/31 interface Ethernet1/32 interface Ethernet1/33 interface Ethernet1/34 interface Ethernet1/35 interface Ethernet1/36 interface Ethernet1/37 interface Ethernet1/38 interface Ethernet1/39 interface Ethernet1/40 interface Ethernet1/41 interface Ethernet1/42 interface Ethernet1/43 interface Ethernet1/44 interface Ethernet1/45 interface Ethernet1/46 interface Ethernet1/47 interface Ethernet1/48 interface Ethernet1/49 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-4,50,60,110,120,100 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-4,50,60,110,120,100 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address {OAM_UPLINK_SwA_ADDRESS}/30 no shutdown interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 service-policy type qos output SET_DSCP_SIGNALING_SUBNETS ip address {SIGNAL_UPLINK_SwA_ADDRESS}/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface mgmt0 description RMS1 eno2 vrf member management ip address 192.168.2.1/24 interface loopback0 description OSPF_router_id ip address 192.168.0.1/32 ip ospf network point-to-point ip ospf advertise-subnet router ospf 1 router-id 192.168.0.1 network {SIGNAL_UPLINK_SUBNET}/30 area {OSPF_AREA_ID} network 172.16.100.0/30 area {OSPF_AREA_ID} network {CNLB_SIG_EXT_SUBNET}/{CNLB_SIG_EXT_Prefix} - Sample Template for Switch
B:
hostname {switchname} vdc {switchname} id 1 limit-resource vlan minimum 16 maximum 4094 limit-resource vrf minimum 2 maximum 4096 limit-resource port-channel minimum 0 maximum 511 limit-resource u4route-mem minimum 248 maximum 248 limit-resource u6route-mem minimum 96 maximum 96 limit-resource m4route-mem minimum 58 maximum 58 limit-resource m6route-mem minimum 8 maximum 8 feature scp-server feature sftp-server cfs eth distribute feature ospf feature interface-vlan feature lacp feature vpc feature bfd feature vrrpv3 username admin password {admin_password} role network-admin username {user_name} password {user_password} role network-admin username {user_name} passphrase lifetime 99999 warntime 7 gracetime 3 no ip domain-lookup system jumbomtu 9000 ip access-list ALLOW_5G_XSI_LIST 10 permit ip {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} any class-map type qos match-any MATCH_SIGNALING_SUBNETS description MATCH SIGNALING,TCP,REPLICATION PACKETS match access-group name ALLOW_5G_XSI_LIST policy-map type qos SET_DSCP_SIGNALING_SUBNETS description MARK SIGNALING,TCP,REPLICATION PACKETS AS DSCP CS3 class MATCH_SIGNALING_SUBNETS track 1 interface Ethernet1/51 line-protocol track 2 interface Ethernet1/52 line-protocol copp profile strict rmon event 1 description FATAL(1) owner PMON@FATAL rmon event 2 description CRITICAL(2) owner PMON@CRITICAL rmon event 3 description ERROR(3) owner PMON@ERROR rmon event 4 description WARNING(4) owner PMON@WARNING rmon event 5 description INFORMATION(5) owner PMON@INFO ntp server {NTPSERVER1} ntp server {NTPSERVER2} ntp server {NTPSERVER3} ntp server {NTPSERVER4} ntp server {NTPSERVER5} ntp master ip route 0.0.0.0/0 {OAM_UPLINK_CUSTOMER_ADDRESS} name oam-uplink-customer ip route 0.0.0.0/0 172.16.100.1 name backup-oam-to-customer 50 vlan 1-5,100 vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 50 name CNLB-OAM-EXT-VLAN vlan 60 name CNLB-SIG-EXT-VLAN vlan 100 name OSPF_VPC_Management vlan 110 name CNLB-OAM-INT-VLAN vlan 120 name CNLB-SIG-INT-VLAN vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.1 source 192.168.2.2 vrf vpc-only delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.3/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.3/24 no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address {CNE_Management_SwB_Address}/{CNE_Management_Prefix} no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address {CNE_Management_VIP} primary interface Vlan50 description CNLB_OAM_EXT no shutdown ip address {CNLB_OAM_EXT_SwB_Address}/{CNLB_OAM_EXT_Prefix} ipv6 address 2050::/64 eui64 vrrpv3 50 address-family ipv4 object-track 1 address {CNLB_OAM_EXT_VIP} primary interface Vlan60 description CNLB_SIG_EXT no shutdown ip address {CNLB_SIG_EXT_SwB_Address}/{CNLB_SIG_EXT_Prefix} ipv6 address 2060::/64 eui64 vrrpv3 60 address-family ipv4 object-track 1 address {CNLB_SIG_EXT_VIP} primary interface Vlan100 description OSPF_VPC_Management no shutdown ip address 172.16.100.2/30 ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval
3 ip ospf network point-to-point no ip ospf passive-interface interface Vlan110 description CNLB_OAM_INT no shutdown ip address 172.16.110.103/24 ipv6 address 2110::/64 eui64 interface Vlan120 description CNLB_SIG_INT no shutdown ip address 172.16.120.103/24 ipv6 address 2120::/64 eui64 interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel10 description PortChannel to RMS10 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 10 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 1 force 
mode active no shutdown interface Ethernet1/2 description RMS2 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/3 description RMS2 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS4 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS6 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS8 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS10 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS12 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/19 interface Ethernet1/20 interface Ethernet1/21 interface Ethernet1/22 interface Ethernet1/23 interface Ethernet1/24 interface Ethernet1/25 interface Ethernet1/26 interface Ethernet1/27 interface Ethernet1/28 interface Ethernet1/29 interface Ethernet1/30 interface Ethernet1/31 
interface Ethernet1/32 interface Ethernet1/33 interface Ethernet1/34 interface Ethernet1/35 interface Ethernet1/36 interface Ethernet1/37 interface Ethernet1/38 interface Ethernet1/39 interface Ethernet1/40 interface Ethernet1/41 interface Ethernet1/42 interface Ethernet1/43 interface Ethernet1/44 interface Ethernet1/45 interface Ethernet1/46 interface Ethernet1/47 interface Ethernet1/48 interface Ethernet1/49 description ISL-Mate-93180A switchport switchport mode trunk switchport trunk allowed vlan 2-4,50,60,100,110,120 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180A switchport switchport mode trunk switchport trunk allowed vlan 2-4,50,60,100,110,120 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address {OAM_UPLINK_SwB_ADDRESS}/30 no shutdown interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 service-policy type qos output SET_DSCP_SIGNALING_SUBNETS ip address {SIGNAL_UPLINK_SwB_ADDRESS}/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface Ethernet1/53 interface Ethernet1/54 interface mgmt0 description RMS1 1G-NIC3 vrf member management ip address 192.168.2.2/24 interface loopback0 description OSPF_router_id ip address 192.168.0.2/32 ip ospf network point-to-point ip ospf advertise-subnet line console line vty router ospf 1 router-id 192.168.0.2 network {SIGNAL_UPLINK_SUBNET}/30 area {OSPF_AREA_ID} network 172.16.100.0/30 area {OSPF_AREA_ID} network {CNLB_SIG_EXT_SUBNET}/{CNLB_SIG_EXT_Prefix}
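After applying the Switch A and Switch B templates, it is worth confirming that the vPC peer link, the port-channels, the VRRPv3 groups, and the OSPF adjacencies come up as expected. The following is a minimal verification sketch run from a host that can reach the switch management addresses; the admin user is illustrative, the mgmt0 addresses (192.168.2.1 for Switch A, 192.168.2.2 for Switch B) follow the templates above, and the show commands are standard NX-OS (adjust the syntax if your NX-OS release differs):
$ ssh admin@192.168.2.1 'show vpc brief'
$ ssh admin@192.168.2.1 'show port-channel summary'
$ ssh admin@192.168.2.1 'show vrrpv3'
$ ssh admin@192.168.2.1 'show ip ospf neighbors'
Repeat the same commands against 192.168.2.2 to verify Switch B.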
Sample ToR Switch Configuration Templates for CNLB bond0 Version
- Sample Template for Switch
A:
hostname {switchname} vdc {switchname} id 1 limit-resource vlan minimum 16 maximum 4094 limit-resource vrf minimum 2 maximum 4096 limit-resource port-channel minimum 0 maximum 511 limit-resource u4route-mem minimum 248 maximum 248 limit-resource u6route-mem minimum 96 maximum 96 limit-resource m4route-mem minimum 58 maximum 58 limit-resource m6route-mem minimum 8 maximum 8 feature scp-server feature sftp-server cfs eth distribute feature ospf feature interface-vlan feature lacp feature vpc feature bfd feature vrrpv3 username admin password {admin_password} role network-admin username {user_name} password {user_password} role network-admin username {user_name} passphrase lifetime 99999 warntime 7 gracetime 3 no ip domain-lookup system jumbomtu 9000 ip access-list ALLOW_5G_XSI_LIST 10 permit ip {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} any class-map type qos match-any MATCH_SIGNALING_SUBNETS description MATCH SIGNALING,TCP,REPLICATION PACKETS match access-group name ALLOW_5G_XSI_LIST policy-map type qos SET_DSCP_SIGNALING_SUBNETS description MARK SIGNALING,TCP,REPLICATION PACKETS AS DSCP CS3 class MATCH_SIGNALING_SUBNETS track 1 interface Ethernet1/51 line-protocol track 2 interface Ethernet1/52 line-protocol copp profile strict rmon event 1 description FATAL(1) owner PMON@FATAL rmon event 2 description CRITICAL(2) owner PMON@CRITICAL rmon event 3 description ERROR(3) owner PMON@ERROR rmon event 4 description WARNING(4) owner PMON@WARNING rmon event 5 description INFORMATION(5) owner PMON@INFO ntp server {NTPSERVER1} ntp server {NTPSERVER2} ntp server {NTPSERVER3} ntp server {NTPSERVER4} ntp server {NTPSERVER5} ntp master ip route 0.0.0.0/0 {OAM_UPLINK_CUSTOMER_ADDRESS} name oam-uplink-customer ip route 0.0.0.0/0 172.16.100.2 name backup-oam-to-customer 50 vlan 1-5,100 vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 100 name OSPF_VPC_Management ip prefix-list ALLOW_5G_XSI description ADVERTISE XSI SUBNETS to ASK ip prefix-list ALLOW_5G_XSI seq 5 permit {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} le 32 route-map 5G_APP permit 10 match ip address prefix-list ALLOW_5G_XSI vrf context management vrf context vpc-only hardware access-list tcam region egr-racl 1280 hardware access-list tcam region egr-l3-vlan-qos 512 vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.2 source 192.168.2.1 vrf vpc-only delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize interface Vlan1 no ip redirects no ipv6 redirects interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.2/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.2/24 ip address {CNLB_OAM_EXT_SwA_Address}/{CNLB_OAM_EXT_Prefix} secondary ip address {CNLB_SIG_EXT_SwA_Address}/{CNLB_SIG_EXT_Prefix} secondary no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary vrrpv3 {CNLB_OAM_EXT_GROUP_ID} address-family ipv4 object-track 1 address {CNLB_OAM_EXT_VIP} primary vrrpv3 {CNLB_SIG_EXT_GROUP_ID} address-family ipv4 object-track 1 address {CNLB_SIG_EXT_VIP} primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address {CNE_Management_SwA_Address}/{CNE_Management_Prefix} no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address {CNE_Management_VIP} primary interface Vlan100 description
OSPF_VPC_Management no shutdown mtu 9000 ip address 172.16.100.1/30 ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel10 description PortChannel to RMS10 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 10 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 1 force mode active no shutdown interface Ethernet1/2 description "Reserved for RMS1 iLO when needed" switchport switchport access vlan 2 no shutdown interface Ethernet1/3 
description RMS2 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS3 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS5 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS7 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS9 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS11 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/19 interface Ethernet1/20 interface Ethernet1/21 interface Ethernet1/22 interface Ethernet1/23 interface Ethernet1/24 interface Ethernet1/25 interface Ethernet1/26 interface Ethernet1/27 interface Ethernet1/28 interface Ethernet1/29 interface Ethernet1/30 interface Ethernet1/31 interface Ethernet1/32 interface Ethernet1/33 interface Ethernet1/34 interface Ethernet1/35 interface Ethernet1/36 interface Ethernet1/37 interface Ethernet1/38 interface Ethernet1/39 interface Ethernet1/40 interface Ethernet1/41 interface Ethernet1/42 interface 
Ethernet1/43 interface Ethernet1/44 interface Ethernet1/45 interface Ethernet1/46 interface Ethernet1/47 interface Ethernet1/48 interface Ethernet1/49 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address {OAM_UPLINK_SwA_ADDRESS}/30 no shutdown interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 service-policy type qos output SET_DSCP_SIGNALING_SUBNETS ip address {SIGNAL_UPLINK_SwA_ADDRESS}/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface mgmt0 description RMS1 eno2 vrf member management ip address 192.168.2.1/24 interface loopback0 description OSPF_router_id ip address 192.168.0.1/32 ip ospf network point-to-point ip ospf advertise-subnet router ospf 1 router-id 192.168.0.1 network {SIGNAL_UPLINK_SUBNET}/30 area {OSPF_AREA_ID} network 172.16.100.0/30 area {OSPF_AREA_ID} network {CNLB_SIG_EXT_SUBNET}/{CNLB_SIG_EXT_Prefix} - Sample Template for Switch
B:
hostname {switchname} vdc {switchname} id 1 limit-resource vlan minimum 16 maximum 4094 limit-resource vrf minimum 2 maximum 4096 limit-resource port-channel minimum 0 maximum 511 limit-resource u4route-mem minimum 248 maximum 248 limit-resource u6route-mem minimum 96 maximum 96 limit-resource m4route-mem minimum 58 maximum 58 limit-resource m6route-mem minimum 8 maximum 8 feature scp-server feature sftp-server cfs eth distribute feature ospf feature interface-vlan feature lacp feature vpc feature bfd feature vrrpv3 username admin password {admin_password} role network-admin username {user_name} password {user_password} role network-admin username {user_name} passphrase lifetime 99999 warntime 7 gracetime 3 no ip domain-lookup system jumbomtu 9000 ip access-list ALLOW_5G_XSI_LIST 10 permit ip {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} any class-map type qos match-any MATCH_SIGNALING_SUBNETS description MATCH SIGNALING,TCP,REPLICATION PACKETS match access-group name ALLOW_5G_XSI_LIST policy-map type qos SET_DSCP_SIGNALING_SUBNETS description MARK SIGNALING,TCP,REPLICATION PACKETS AS DSCP CS3 class MATCH_SIGNALING_SUBNETS track 1 interface Ethernet1/51 line-protocol track 2 interface Ethernet1/52 line-protocol copp profile strict rmon event 1 description FATAL(1) owner PMON@FATAL rmon event 2 description CRITICAL(2) owner PMON@CRITICAL rmon event 3 description ERROR(3) owner PMON@ERROR rmon event 4 description WARNING(4) owner PMON@WARNING rmon event 5 description INFORMATION(5) owner PMON@INFO ntp server {NTPSERVER1} ntp server {NTPSERVER2} ntp server {NTPSERVER3} ntp server {NTPSERVER4} ntp server {NTPSERVER5} ntp master ip route 0.0.0.0/0 {OAM_UPLINK_CUSTOMER_ADDRESS} name oam-uplink-customer ip route 0.0.0.0/0 172.16.100.1 name backup-oam-to-customer 50 vlan 1-5,100 vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 100 name OSPF_VPC_Management ip prefix-list ALLOW_5G_XSI description ADVERTISE XSI SUBNETS to ASK ip prefix-list ALLOW_5G_XSI seq 5 permit {ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN} le 32 route-map 5G_APP permit 10 match ip address prefix-list ALLOW_5G_XSI vrf context management vrf context vpc-only hardware access-list tcam region egr-racl 1280 hardware access-list tcam region egr-l3-vlan-qos 512 vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.1 source 192.168.2.2 vrf vpc-only delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize interface Vlan1 no ip redirects no ipv6 redirects interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.3/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.3/24 ip address {CNLB_OAM_EXT_SwB_Address}/{CNLB_OAM_EXT_Prefix} secondary ip address {CNLB_SIG_EXT_SwB_Address}/{CNLB_SIG_EXT_Prefix} secondary no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary vrrpv3 {CNLB_OAM_EXT_GROUP_ID} address-family ipv4 object-track 1 address {CNLB_OAM_EXT_VIP} primary vrrpv3 {CNLB_SIG_EXT_GROUP_ID} address-family ipv4 object-track 1 address {CNLB_SIG_EXT_VIP} primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address {CNE_Management_SwB_Address}/{CNE_Management_Prefix} no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address {CNE_Management_VIP} primary interface Vlan100 description
OSPF_VPC_Management no shutdown ip address 172.16.100.2/30 ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel10 description PortChannel to RMS10 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 10 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 1 force mode active no shutdown interface Ethernet1/2 description RMS2 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/3 description RMS2 NIC2 switchport switchport 
mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-4 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS4 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS6 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS8 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS10 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS12 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/19 interface Ethernet1/20 interface Ethernet1/21 interface Ethernet1/22 interface Ethernet1/23 interface Ethernet1/24 interface Ethernet1/25 interface Ethernet1/26 interface Ethernet1/27 interface Ethernet1/28 interface Ethernet1/29 interface Ethernet1/30 interface Ethernet1/31 interface Ethernet1/32 interface Ethernet1/33 interface Ethernet1/34 interface Ethernet1/35 interface Ethernet1/36 interface Ethernet1/37 interface Ethernet1/38 interface Ethernet1/39 interface Ethernet1/40 interface Ethernet1/41 interface Ethernet1/42 interface Ethernet1/43 interface Ethernet1/44 interface 
Ethernet1/45 interface Ethernet1/46 interface Ethernet1/47 interface Ethernet1/48 interface Ethernet1/49 description ISL-Mate-93180A switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180A switchport switchport mode trunk switchport trunk allowed vlan 2-4,100 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address {OAM_UPLINK_SwB_ADDRESS}/30 no shutdown interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 service-policy type qos output SET_DSCP_SIGNALING_SUBNETS ip address {SIGNAL_UPLINK_SwB_ADDRESS}/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 {ospf_md5_key} ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface mgmt0 description RMS1 1G-NIC3 vrf member management ip address 192.168.2.2/24 interface loopback0 description OSPF_router_id ip address 192.168.0.2/32 ip ospf network point-to-point ip ospf advertise-subnet router ospf 1 router-id 192.168.0.2 network {SIGNAL_UPLINK_SUBNET}/30 area {OSPF_AREA_ID} network 172.16.100.0/30 area {OSPF_AREA_ID} network {CNLB_SIG_EXT_SUBNET}/{CNLB_SIG_EXT_Prefix}
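Both template variants use brace-delimited placeholders such as {switchname} and {ospf_md5_key}. One way to produce a deployable configuration is a simple substitution pass, sketched below under stated assumptions: the template is saved as switchA.cfg.tmpl, and values.env holds one placeholder=value pair per line (both file names are illustrative and not part of any CNE tooling):
#!/bin/bash
# Fill {PLACEHOLDER} tokens in a switch template from a values file.
TEMPLATE=switchA.cfg.tmpl
OUTPUT=switchA.cfg
cp "$TEMPLATE" "$OUTPUT"
while IFS='=' read -r key value; do
  # Skip blank lines, then replace every occurrence of {key} with its value.
  [ -z "$key" ] && continue
  sed -i "s|{${key}}|${value}|g" "$OUTPUT"
done < values.env
# List any unresolved placeholders; these indicate missing values.
grep -o '{[A-Za-z_][A-Za-z0-9_]*}' "$OUTPUT" | sort -u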
Sample ToR Switch Configuration Templates for MetalLB, CNLB VLAN Version, and CNLB bond0 Version
- Sample Template for Switch
A:
feature interface-vlan feature lacp feature vpc feature vrrpv3 username admin password tklc role network-admin username admin password tklc role network-admin username admin passphrase lifetime 99999 warntime 7 gracetime 3 system jumbomtu 9000 track 1 interface Ethernet1/51 line-protocol ip prefix-list ALLOW_5G_XSI description ADVERTISE XSI SUBNETS to ASK ip prefix-list ALLOW_5G_XSI seq 5 permit 10.75.182.0/28 le 32 ip prefix-list ALLOW_5G_XSI seq 10 permit 10.75.182.16/28 le 32 route-map 5G_APP permit 10 match ip address prefix-list ALLOW_5G_XSI vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.2 source 192.168.2.1 delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 50 name CNLB-OAM-EXT-VLAN vlan 60 name CNLB-SIG-EXT-VLAN vlan 110 name CNLB-OAM-INT-VLAN vlan 120 name CNLB-SIG-INT-VLAN interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.2/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.2/24 ip address 10.75.182.98/28 secondary ip address 10.75.182.114/28 secondary no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary vrrpv3 211 address-family ipv4 object-track 1 address 10.75.182.97 primary vrrpv3 221 address-family ipv4 object-track 1 address 10.75.182.113 primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address 10.65.106.114/28 no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address 10.65.106.113 primary interface Vlan50 description CNLB_OAM_EXT no shutdown ip address 10.75.125.2/28 ipv6 address 2050::/64 eui64 vrrpv3 50 address-family ipv4 object-track 1 address 10.75.125.1 primary interface Vlan60 description CNLB_SIG_EXT no shutdown ip address 10.75.125.18/28 ipv6 address 2060::/64 eui64 vrrpv3 60 address-family ipv4 object-track 1 address 10.75.125.17 primary interface Vlan100 description OSPF_VPC_Management no shutdown mtu 9000 ip address 172.16.100.1/30 ip ospf message-digest-key 1 md5 3 f6ac6e6279683dda ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface Vlan110 description CNLB_OAM_INT no shutdown ip address 172.16.110.102/24 ipv6 address 2110::/64 eui64 interface Vlan120 description CNLB_SIG_INT no shutdown ip address 172.16.120.102/24 ipv6 address 2120::/64 eui64 interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3 interface port-channel4 description PortChannel to RMS4 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type
edge trunk mtu 9000 no lacp suspend-individual vpc 4 interface port-channel5 description PortChannel to RMS5 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 5 interface port-channel6 description PortChannel to RMS6 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 6 interface port-channel7 description PortChannel to RMS7 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 7 interface port-channel8 description PortChannel to RMS8 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 8 interface port-channel9 description PortChannel to RMS9 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 9 interface port-channel11 description PortChannel to RMS11 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 11 interface port-channel12 description PortChannel to RMS12 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 12 interface port-channel50 description "vpc peer-link" switchport switchport mode trunk switchport trunk allowed vlan 2-5,50,60,100,110,120 spanning-tree port type network vpc peer-link interface Ethernet1/1 description RMS1 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 channel-group 1 force mode active no shutdown interface Ethernet1/2 description "Reserved for RMS1 iLO when needed" switchport switchport access vlan 2 no shutdown interface Ethernet1/3 description RMS2 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 channel-group 2 force mode active no shutdown interface Ethernet1/4 description RMS3 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 3 force mode active no shutdown interface Ethernet1/5 description RMS3 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/6 description RMS4 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 4 force mode active no shutdown interface Ethernet1/7 description RMS5 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 5 force mode active no shutdown interface Ethernet1/8 description RMS5 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/9 description RMS6 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 
switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 6 force mode active no shutdown interface Ethernet1/10 description RMS7 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 7 force mode active no shutdown interface Ethernet1/11 description RMS7 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/12 description RMS8 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 8 force mode active no shutdown interface Ethernet1/13 description RMS9 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 9 force mode active no shutdown interface Ethernet1/14 description RMS9 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/15 description RMS10 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 10 force mode active no shutdown interface Ethernet1/16 description RMS11 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 11 force mode active no shutdown interface Ethernet1/17 description RMS11 iLO switchport switchport access vlan 2 no shutdown interface Ethernet1/18 description RMS12 NIC1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 3,50,60,110,120 spanning-tree port type edge trunk mtu 9000 channel-group 12 force mode active no shutdown interface Ethernet1/49 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-5,50,60,100,110,120 channel-group 50 force mode active no shutdown interface Ethernet1/50 description ISL-Mate-93180B switchport switchport mode trunk switchport trunk allowed vlan 2-5,50,60,100,110,120 channel-group 50 force mode active no shutdown interface Ethernet1/51 description OAM_uplink_customer mtu 9000 ip address 192.168.1.2/30 interface Ethernet1/52 description Uplink_to_Customer_Signaling mtu 9000 ip address 172.16.100.2/30 ip ospf authentication message-digest ip ospf message-digest-key 1 md5 3 0d1ecdfeda51d123 ip ospf cost 100 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface ip ospf bfd no shutdown interface loopback0 description OSPF_BGP_router_id ip address 192.168.0.1/32 ip ospf network point-to-point ip ospf advertise-subnet interface mgmt0 vrf member management ip address 192.168.2.1/24 router bgp 64501 router-id 192.168.0.1 log-neighbor-changes address-family ipv4 unicast maximum-paths 64 neighbor 172.16.3.0/24 remote-as 64512 maximum-peers 256 address-family ipv4 unicast maximum-prefix 256 router ospf 1 router-id 192.168.0.1 network 10.75.216.40/30 area 1.1.1.1 network 172.16.100.0/30 area 1.1.1.1 redistribute bgp 64501 route-map 5G_APP log-adjacency-changes detail summary-address 10.75.182.0/27 area 1.1.1.1 authentication message-digest timers throttle spf 10 100 5000 timers lsa-arrival 50 timers throttle lsa 5 100 5000 passive-interface default - Sample Template for Switch
B:
feature interface-vlan feature lacp feature vpc feature vrrpv3 username admin password tklc role network-admin username admin password tklc role network-admin username admin passphrase lifetime 99999 warntime 7 gracetime 3 system jumbomtu 9000 track 1 interface Ethernet1/51 line-protocol ip prefix-list ALLOW_5G_XSI description ADVERTISE XSI SUBNETS to ASK ip prefix-list ALLOW_5G_XSI seq 5 permit 10.75.182.0/28 le 32 ip prefix-list ALLOW_5G_XSI seq 10 permit 10.75.182.16/28 le 32 route-map 5G_APP permit 10 match ip address prefix-list ALLOW_5G_XSI vpc domain 1 role priority 1 peer-keepalive destination 192.168.2.1 source 192.168.2.2 delay restore 150 peer-gateway layer3 peer-router auto-recovery reload-delay 60 ip arp synchronize vlan 2 name Plat-OAiLO-Management vlan 3 name Host_Net vlan 4 name CNE_Management vlan 50 name CNLB-OAM-EXT-VLAN vlan 60 name CNLB-SIG-EXT-VLAN vlan 110 name CNLB-OAM-INT-VLAN vlan 120 name CNLB-SIG-INT-VLAN interface Vlan2 description "Plat-OAiLO-Management Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.2.3/24 no ipv6 redirects vrrpv3 2 address-family ipv4 object-track 1 address 172.16.2.1 primary interface Vlan3 description "Host Addresses" no shutdown mtu 9000 no ip redirects ip address 172.16.3.3/24 ip address 10.75.182.99/28 secondary ip address 10.75.182.115/28 secondary no ipv6 redirects vrrpv3 3 address-family ipv4 object-track 1 address 172.16.3.1 primary vrrpv3 211 address-family ipv4 object-track 1 address 10.75.182.97 primary vrrpv3 221 address-family ipv4 object-track 1 address 10.75.182.113 primary interface Vlan4 description CNE_Management for outside accessible no shutdown mtu 9000 no ip redirects ip address 10.65.106.115/28 no ipv6 redirects vrrpv3 4 address-family ipv4 object-track 1 address 10.65.106.113 primary interface Vlan50 description CNLB_OAM_EXT no shutdown ip address 10.75.125.3/28 ipv6 address 2050::/64 eui64 vrrpv3 50 address-family ipv4 object-track 1 address 10.75.125.1 primary interface Vlan60 description CNLB_SIG_EXT no shutdown ip address 10.75.125.19/28 ipv6 address 2060::/64 eui64 vrrpv3 60 address-family ipv4 object-track 1 address 10.75.125.17 primary interface Vlan100 description OSPF_VPC_Management no shutdown mtu 9000 ip address 172.16.100.2/30 ip ospf message-digest-key 1 md5 3 f6ac6e6279683dda ip ospf cost 10 ip ospf dead-interval 10 ip ospf hello-interval 3 ip ospf network point-to-point no ip ospf passive-interface interface Vlan110 description CNLB_OAM_INT no shutdown ip address 172.16.110.103/24 ipv6 address 2110::/64 eui64 interface Vlan120 description CNLB_SIG_INT no shutdown ip address 172.16.120.103/24 ipv6 address 2120::/64 eui64 interface port-channel1 description PortChannel to RMS1 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 1 interface port-channel2 description PortChannel to RMS2 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 2 interface port-channel3 description PortChannel to RMS3 switchport switchport mode trunk switchport trunk native vlan 3 switchport trunk allowed vlan 2-5 spanning-tree port type edge trunk mtu 9000 no lacp suspend-individual vpc 3
interface port-channel4
  description PortChannel to RMS4
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 4
interface port-channel5
  description PortChannel to RMS5
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 5
interface port-channel6
  description PortChannel to RMS6
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 6
interface port-channel7
  description PortChannel to RMS7
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 7
interface port-channel8
  description PortChannel to RMS8
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 8
interface port-channel9
  description PortChannel to RMS9
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 9
interface port-channel11
  description PortChannel to RMS11
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 11
interface port-channel12
  description PortChannel to RMS12
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  no lacp suspend-individual
  vpc 12
interface port-channel50
  description "vpc peer-link"
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 2-5,50,60,100,110,120
  spanning-tree port type network
  vpc peer-link
interface Ethernet1/1
  description RMS1 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 2-5
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 1 force mode active
  no shutdown
interface Ethernet1/2
  description RMS2 iLO
  switchport
  switchport access vlan 2
  no shutdown
interface Ethernet1/3
  description RMS2 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 2-5
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 2 force mode active
  no shutdown
interface Ethernet1/4
  description RMS3 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 3 force mode active
  no shutdown
interface Ethernet1/5
  description RMS4 iLO
  switchport
  switchport access vlan 2
  no shutdown
interface Ethernet1/6
  description RMS4 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 4 force mode active
  no shutdown
interface Ethernet1/7
  description RMS5 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 5 force mode active
  no shutdown
interface Ethernet1/8
  description RMS6 iLO
  switchport
  switchport access vlan 2
  no shutdown
interface Ethernet1/9
  description RMS6 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 6 force mode active
  no shutdown
interface Ethernet1/10
  description RMS7 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 7 force mode active
  no shutdown
interface Ethernet1/11
  description RMS8 iLO
  switchport
  switchport access vlan 2
  no shutdown
interface Ethernet1/12
  description RMS8 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 8 force mode active
  no shutdown
interface Ethernet1/13
  description RMS9 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 9 force mode active
  no shutdown
interface Ethernet1/14
  description RMS10 iLO
  switchport
  switchport access vlan 2
  no shutdown
interface Ethernet1/15
  description RMS10 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 10 force mode active
  no shutdown
interface Ethernet1/16
  description RMS11 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 11 force mode active
  no shutdown
interface Ethernet1/17
  description RMS12 iLO
  switchport
  switchport access vlan 2
  no shutdown
interface Ethernet1/18
  description RMS12 NIC2
  switchport
  switchport mode trunk
  switchport trunk native vlan 3
  switchport trunk allowed vlan 3,50,60,110,120
  spanning-tree port type edge trunk
  mtu 9000
  channel-group 12 force mode active
  no shutdown
interface Ethernet1/49
  description ISL-Mate-93180A
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 2-5,50,60,100,110,120
  channel-group 50 force mode active
  no shutdown
interface Ethernet1/50
  description ISL-Mate-93180A
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 2-5,50,60,100,110,120
  channel-group 50 force mode active
  no shutdown
interface Ethernet1/51
  description OAM_uplink_customer
  mtu 9000
  ip address 192.168.1.4/30
interface Ethernet1/52
  description Uplink_to_Customer_Signaling
  mtu 9000
  ip address 172.16.100.4/30
  ip ospf authentication message-digest
  ip ospf message-digest-key 1 md5 3 0d1ecdfeda51d123
  ip ospf cost 100
  ip ospf dead-interval 10
  ip ospf hello-interval 3
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip ospf bfd
  no shutdown
interface mgmt0
  vrf member management
  ip address 192.168.2.2/24
interface loopback0
  description OSPF_BGP_router_id
  ip address 192.168.0.2/32
  ip ospf network point-to-point
  ip ospf advertise-subnet
router bgp 64501
  router-id 192.168.0.2
  log-neighbor-changes
  address-family ipv4 unicast
    maximum-paths 64
  neighbor 172.16.3.0/24
    remote-as 64512
    maximum-peers 256
    address-family ipv4 unicast
      maximum-prefix 256
router ospf 1
  router-id 192.168.0.2
  network 10.75.216.44/30 area 1.1.1.1
  network 172.16.100.0/24 area 1.1.1.1
  redistribute bgp 64501 route-map 5G_APP
  log-adjacency-changes detail
  summary-address 10.75.182.0/27 area 1.1.1.1
  authentication message-digest
  timers throttle spf 10 100 5000
  timers lsa-arrival 50
  timers throttle lsa 5 100 5000
  passive-interface default
Postinstallation and Postupgrade Reference Procedures
This appendix lists the procedures that are referred to in postinstallation and postupgrade procedures.
Upgrading Grafana
This section details the procedure to upgrade Grafana to a custom version.
Note:
- This procedure is optional and can be run if you want to upgrade Grafana to a custom version.
- This procedure is applicable to both BareMetal and vCNE deployments.
Limitations
- This procedure is only used to upgrade from Grafana release 9.5.3 to 11.2.x.
- Grafana version 11.2.x is not tested with CNE. If you are upgrading to Grafana 11.2.x, ensure that you manage, adapt, and maintain that version yourself.
- Plugin installation and Helm chart adaptation are not in the purview of this procedure.
- Some versions of the Grafana image may try to pull certificates from the internet.
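If you suspect the image is reaching out for certificates, you can check the pod logs once it is running. This is an illustrative check only, using standard kubectl and grep:
$ kubectl -n occne-infra logs <YOUR_GRAFANA_POD> | grep -i -E 'certificate|x509'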
Prerequisites
Before running the procedure, ensure that you meet the following prerequisites:
- The cluster must run a stable Grafana version. Most CNE clusters run with version 9.5.3, which is acceptable.
- This procedure must be run in the active Bastion Host.
- The target version of Grafana must be available in the cluster. This can be achieved by pulling the required version from the desired repository.
- Podman must be installed and you must be able to run the Podman commands.
- Upgrade Helm to the minimum supported Helm version (3.15.2 or later).
- kubectl must be installed.
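You can optionally confirm the tool versions from the active Bastion Host before starting. The following is a minimal preflight sketch using standard commands; the expected versions are the ones listed in the prerequisites above:
$ helm version --short        # expect v3.15.2 or later
$ kubectl version --client    # confirms kubectl is installed
$ podman --version            # confirms Podman is available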
Procedure
- Log in to the active Bastion Host and run the following command to ensure that you are logged in to the active Bastion:
$ is_active_bastion
Sample output:
IS active-bastion
- Ensure that the desired Grafana image is present in the Podman registry:
$ podman image ls
Sample output:
REPOSITORY TAG IMAGE ID CREATED SIZE
winterfell:5000/occne/provision 25.2.0-alpha.0-11-g647fa73e6 04e905388051 3 days ago 2.48 GB
localhost/grafana/grafana 11.2.5 37c12d738603 6 weeks ago 469 MB
- Tag and push the image to follow the CNE image naming convention. This is done to
ensure that the repository has the correct naming convention after pulling the
desired image
version.
For example:
$ podman tag <CURRENT_GRAFANA_IMAGE_NAME>:<CURRENT_TAG> occne-repo-host:5000/occne/<DESIRED_GRAFANA_NAME>:<CURRENT_TAG>
$ podman push occne-repo-host:5000/occne/<DESIRED_GRAFANA_NAME>:<CURRENT_TAG>
$ podman tag localhost/grafana/grafana:11.2.5 occne-repo-host:5000/occne/grafana:11.2.5
$ podman push occne-repo-host:5000/occne/grafana:11.2.5
- Review all the deployments on the cluster and search for the Grafana
deployment:
$ kubectl -n occne-infra get deploy
Sample output:
NAME READY UP-TO-DATE AVAILABLE AGE
cnlb-app 4/4 4 4 6h59m
cnlb-manager 1/1 1 1 6h59m
occne-alertmanager-snmp-notifier 1/1 1 1 6h54m
occne-bastion-controller 1/1 1 1 6h54m
occne-kube-prom-stack-grafana 1/1 1 1 6h55m # HERE IS THE GRAFANA DEPLOYMENT
occne-kube-prom-stack-kube-operator 1/1 1 1 6h55m
occne-kube-prom-stack-kube-state-metrics 1/1 1 1 6h55m
occne-metrics-server 1/1 1 1 6h54m
occne-opensearch-dashboards 1/1 1 1 6h55m
occne-promxy 1/1 1 1 6h54m
occne-promxy-apigw-nginx 2/2 2 2 6h54m
occne-tracer-jaeger-collector 1/1 1 1 6h54m
occne-tracer-jaeger-query 1/1 1 1 6h54m
- Edit the occne-kube-prom-stack-grafana deployment. This opens an editable YAML file where you can locate the previous Grafana image.
$ kubectl -n occne-infra edit deploy occne-kube-prom-stack-grafana
Sample output:
...
- name: GF_PATHS_DATA
  value: /var/lib/grafana/
- name: GF_PATHS_LOGS
  value: /var/log/grafana
- name: GF_PATHS_PLUGINS
  value: /var/lib/grafana/plugins
- name: GF_PATHS_PROVISIONING
  value: /etc/grafana/provisioning
image: occne-repo-host:5000/docker.io/grafana/grafana:9.5.3 # HERE IS THE IMAGE
imagePullPolicy: IfNotPresent
livenessProbe:
  failureThreshold: 10
...
- Replace the old image with the recently pushed image:
For example:
...
- name: GF_PATHS_DATA
  value: /var/lib/grafana/
- name: GF_PATHS_LOGS
  value: /var/log/grafana
- name: GF_PATHS_PLUGINS
  value: /var/lib/grafana/plugins
- name: GF_PATHS_PROVISIONING
  value: /etc/grafana/provisioning
image: occne-repo-host:5000/occne/grafana:11.2.5 # HERE IS THE IMAGE
imagePullPolicy: IfNotPresent
livenessProbe:
  failureThreshold: 10
...
- Run the following command to verify the pods' health. Ensure that all pods are in the healthy Running state with no resets.
$ kco get pods | grep grafana
Sample output:
occne-kube-prom-stack-grafana-7ccf687579-ns94w 3/3 Running 0 7h18m
- Run the following command to verify the pod's internal logs. Use the pod name obtained from the previous step.
For example:
$ kubectl -n occne-infra logs <YOUR_GRAFANA_POD>
$ kubectl -n occne-infra logs occne-kube-prom-stack-grafana-7ccf687579-ns94w
- Depending on the type of Load Balancer used, use one of the following steps to
retrieve Grafana external IP:
- If you are using LBVM, run the following command to extract the external IP:
[cloud-user@occne1-<user-name>-bastion-1 ~]$ kubectl -n occne-infra get service -o wide | grep grafana
Sample output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
occne-kube-prom-stack-grafana LoadBalancer 10.233.42.123 10.75.200.32 80:30553/TCP 4d21h
- If you are using CNLB, use the occne.ini file to extract the external IP:
$ cat /var/occne/cluster/$OCCNE_CLUSTER/occne.ini | grep occne_graf_cnlb
Sample output:
occne_graf_cnlb = 10.75.200.32
#In both of these examples, the external IP is 10.75.200.32
- Ensure that the Grafana dashboard is accessible by either pinging the Grafana
external IP or accessing the dashboard in a browser.
The following code block provides the command to ping the external IP:
For example:
$ ping <YOUR-GRAFANA-EXTERNAL-IP>
Sample output:
$ ping 10.75.200.32
PING 10.75.200.32 (10.75.200.32) 56(84) bytes of data.
64 bytes from 10.75.200.32: icmp_seq=1 ttl=62 time=3.04 ms
64 bytes from 10.75.200.32: icmp_seq=2 ttl=62 time=1.63 ms
64 bytes from 10.75.200.32: icmp_seq=3 ttl=62 time=1.24 ms
The following code block provides the curl command to access the Grafana dashboard using the external IP:
For example:
$ curl <YOUR-GRAFANA-EXTERNAL-IP>
Sample output:
$ curl 10.75.225.166
<a href="/occne1-<user-name>/grafana/login">Found</a>.
Checking Preupgrade Config Files
Check manual updates on the pod resources: Ensure that any manual updates made to the Kubernetes cluster configuration (such as deployments and daemonsets) after the initial deployment are captured in the proper occne.ini (vCNE) or hosts.ini (Bare Metal) file. For more information, see Preinstallation Tasks.
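One way to identify such manual changes is to compare the live Kubernetes objects against a copy saved at deployment time. The following is an illustrative sketch only; the file names are assumptions:
$ kubectl -n occne-infra get deploy,daemonset -o yaml > resources-now.yaml
$ diff resources-at-install.yaml resources-now.yaml   # review any drift and reflect it in occne.ini or hosts.ini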
Activating Velero
This section describes the procedure to activate Velero to enable the functionality to take on-demand backups and restores of CNE. Perform this procedure after installing CNE 23.1.x or upgrading to CNE 23.1.x to address Velero requirements. Velero is installed as a common service in CNE. CNE configures Velero to perform all backup and restore operations on an on-demand basis. That is, Velero doesn't perform any scheduled or triggered backups or restores.
Prerequisites
- Perform this procedure from the active Bastion only.
- CNE cluster must have connectivity to the s3 object storage location.
- The following table provides the list of
required dependencies that the system must meet
before activating Velero. If these dependencies are not met, the Velero activation can fail.
Table A-31 Required Dependencies
| Dependency | Description | CNE Variable Name |
|---|---|---|
| backupStorageLocation.name | Name of the backup storage location where backups must be stored. | occne_velero_bucket_provider_name |
| backupStorageLocation.provider | Name for the backup storage location provider. | occne_velero_bucket_provider |
| backupStorageLocation.bucket | Name of the bucket in which the backups are stored. It's recommended to use your cluster name as the bucket name, unless you use a single bucket for several clusters. | occne_velero_bucket_name |
| backupStorageLocation.config.region | Region where the bucket cloud is located. For example, if the region is minio, then set the dependency to "minio". | occne_velero_bucket_region |
| backupStorageLocation.config.s3Url | URL of the bucket s3 API. | occne_velero_bucket_s3url |
| backupStorageLocation.config.publicUrl | Public URL of the bucket s3 API. In some cases, the public URL is the same as the URL. | occne_velero_bucket_public_s3url |
| volumeSnapshotLocation.name | Name of the volume snapshot location where snapshots are stored. | occne_velero_volume_snapshot_location_name |
| credentials.name | Name given to the created secret that contains the bucket credentials. | occne_velero_bucket_credentials_name |
| credentials.secretContents.access_key_id | Key ID that is used for authentication. | occne_velero_bucket_credentials_access_key_id |
| credentials.secretContents.secret_access_key | Key secret that is passed for authentication purposes. | occne_velero_bucket_credentials_secret_access_key |
Sample Dependency Values
Refer to the following sample /var/occne/cluster/${OCCNE_CLUSTER}/artifacts/backup/velero.ini file and replace the values in the sample with your s3-compatible storage values:
[occne:vars]
occne_velero_bucket_provider_name=minio
occne_velero_bucket_provider=aws
occne_velero_bucket_name=default-bucket
occne_velero_bucket_region=minio
occne_velero_bucket_s3url=http://10.75.216.10:9000
occne_velero_bucket_public_s3url=http://10.75.216.10:9000
occne_velero_volume_snapshot_location_name=aws-minio
occne_velero_bucket_credentials_name=bucket-credentials
occne_velero_bucket_credentials_access_key_id=default-user
occne_velero_bucket_credentials_secret_access_key=default-password
Note:
- Once the
velero.ini file is filled with the actual values, it must be appended to hosts.ini for Bare Metal clusters or to occne.ini for VMWare or OpenStack clusters. The occne.ini or hosts.ini files are located in the /var/occne/cluster/${OCCNE_CLUSTER}/ directory.
- Velero takes backup restore of CNE components only.
- Once the
Procedure
- Perform the following steps to fill up the values
in
velero.ini:- Navigate to the
/var/occne/cluster/${OCCNE_CLUSTER}directory:cd /var/occne/cluster/${OCCNE_CLUSTER} - Create a copy of the
velero.ini.template.- For CNLB deployment, run the following
command:
cp scripts/backup/velero.ini.template /var/occne/cluster/${OCCNE_CLUSTER}/velero.ini - For LBVM deployment, run the following
command:
$ cp artifacts/backup/velero.ini /var/occne/cluster/${OCCNE_CLUSTER}/velero.ini
- For CNLB deployment, run the following
command:
- Fill up the values for the Velero
variables in
velero.ini. For sample Velero variables, see Sample Dependency Values.Note:
To store the backup of all the critical data for a CNE instance, a bucket whose name is the same as the value of the occne_velero_bucket_name variable must exist.
- Navigate to the
- Append the
velero.inifile tohosts.inioroccne.inidepending on your deployment. Theoccne.iniorhosts.inifile is located in the/var/occne/cluster/ ${OCCNE_CLUSTER}/directory:Note:
If Velero is being activated after an upgrade, it is not necessary to run any of the following commands.- For OpenStack or VMware clusters,
run the following command to append the
velero.inifile tooccne.ini:cat /var/occne/cluster/${OCCNE_CLUSTER}/velero.ini >> /var/occne/cluster/${OCCNE_CLUSTER}/occne.ini - For Bare Metal clusters, run the
following command to append the
velero.inifile tohosts.ini:cat /var/occne/cluster/${OCCNE_CLUSTER}/velero.ini >> /var/occne/cluster/${OCCNE_CLUSTER}/hosts.ini
- For OpenStack or VMware clusters,
run the following command to append the
- CNE requires Boto3 python library to upload Bastion
backup into S3 object storage. Not installing the library
results in a cluster backup failure. Run the following
command to install boto3 python
library:
$ pip3 install boto3
If you encounter any network unreachable warning, perform the following steps to add your proxy for downloading the library and unset the setting after installing the library:
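The exact proxy steps depend on your network; the following is a minimal sketch with an assumed, hypothetical proxy address:
$ export http_proxy=http://proxy.example.com:8080    # hypothetical proxy address
$ export https_proxy=http://proxy.example.com:8080   # hypothetical proxy address
$ pip3 install boto3
$ unset http_proxy https_proxy                       # unset the setting after installing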
- Run the following script to enable Velero:
- For LBVM deployment, navigate to
/var/occne/cluster/${OCCNE_CLUSTER}/artifacts/backupand run the script:$ cd /var/occne/cluster/${OCCNE_CLUSTER}/artifacts/backup $ ./install_velero.shSample output:
... DEPLOY Deployment Finished |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| +POST Post Processing Started Process Skipping: CFG POST Skipping: CFG REMOVE -POST Post Processing Finished |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| Wed Mar 19 06:08:52 PM UTC 2025 /var/occne/cluster/occne-example/artifacts/pipeline.sh completed successfully |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| - For CNLB deployment, navigate to
/var/occne/cluster/${OCCNE_CLUSTER}/scripts/backupand run the script:$ cd /var/occne/cluster/${OCCNE_CLUSTER}/scripts/backup $ ./install_velero.shSample output:
... DEPLOY Deployment Finished |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| +POST Post Processing Started Process Skipping: CFG POST Skipping: CFG REMOVE -POST Post Processing Finished |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| Wed Mar 19 06:08:52 PM UTC 2025 /var/occne/cluster/occne-example/artifacts/pipeline.sh completed successfully ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
- For LBVM deployment, navigate to
Activating Local DNS
This section provides information about activating Local DNS post installation.
The Local DNS feature is a reconfiguration of core DNS (CoreDNS) to support external hostname resolution. When Local DNS is enabled, CNE routes the connection to external hosts through core DNS rather than the nameservers on the Bastion Hosts. For information about activating this feature post installation, see the "Activating Local DNS" section in Oracle Communications Cloud Native Core, Cloud Native Environment User Guide.
To stop DNS forwarding to Bastion DNS, you must define the DNS details through 'A' records and SRV records. A records and SRV records are added to CNE cluster using Local DNS API calls. For more information about adding and deleting DNS records, see the "Adding and Removing DNS Records" section in Oracle Communications Cloud Native Core, Cloud Native Environment User Guide.
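Once records are added, you can confirm from inside the cluster that an external hostname resolves through CoreDNS. This is an illustrative sketch only; the pod name, image, and hostname are assumptions:
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup myhost.example.com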
Enabling or Disabling Floating IP in OpenStack
This section provides information about enabling or disabling the floating IP feature in an OpenStack deployment. This feature can be enabled or disabled after installation or upgrade.
Note:
- CNE supports the floating IP feature for LBVM and CNLB.
- Enabling floating IPs exposes nodes and controllers to external connections.
Prerequisites
- The cluster must be on vCNE Openstack.
- The CNE version must be 25.1.2xx or above.
- Openstack must have enough floating IPs to support the number of nodes and controllers.
- Ensure that OpenTofu or Terraform plan is not trying to create or destroy any resources before performing the enabling or disabling steps.
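To gauge whether enough floating IPs are available before starting, you can query OpenStack with the standard CLI after sourcing openrc.sh. This is an illustrative check, not part of the official procedure:
$ openstack quota show | grep -i floating        # configured floating IP quota
$ openstack floating ip list -f value | wc -l    # floating IPs already allocated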
Note:
Floating IP specific security groups are only for CNLB environments.
Prevalidating a Cluster
- Use SSH to log in to the active Bastion Host.
- Confirm that the Bastion Host is an active Bastion
Host:
$ is_active_bastion
Expected sample output:
IS active-bastion
above:
$ echo $OCCNE_VERSION
Sample output:
25.1.200
/var/occne/cluster/${OCCNE_CLUSTER}/directory:$ cd /var/occne/cluster/${OCCNE_CLUSTER}/ - Run the following command and enter your Openstack credentials when
prompted:
$ source openrc.sh - Check the Terraform or OpenTofu plan and confirm that it is not trying to create
or destroy any resources. If the plan indicates that there are going to be
changes, then do not proceed with enabling or disabling floating IP.
- If you are using LBVM, run the following command to check the Terraform
plan:
$ terraform plan --var-file ${OCCNE_CLUSTER}/cluster.tfvarsSample output:... No changes. Your infrastructure matches the configuration. Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed. - If you are using CNLB, run the following command to check the OpenTofu
plan:
$ tofu plan --var-file ${OCCNE_CLUSTER}/cluster.tfvarsSample output:... No changes. Your infrastructure matches the configuration. OpenTofu has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
- If you are using LBVM, run the following command to check the Terraform
plan:
Enabling Floating IP
- Run the following command to open the
cluster.tfvarsfile in edit mode:$ vi ${OCCNE_CLUSTER}/cluster.tfvars - Set the value of the
occne_k8s_floating_ipvariable to true and save the file:Note:
If the cluster was upgraded from a previous version, then consider the following points while editing thecluster.tfvarsfile:- The file may not have the
occne_k8s_floating_ipvariable. In such a case, add the variable to the file. - The file has the
use_floating_ipvariable which is deprecated. Delete this variable from the file.
occne_k8s_floating_ip = true - The file may not have the
- Depending on the type of Load Balancer, run one of the following commands to
apply the changes:
Note:
While running the command, if the system indicates that the action will destroy or create any resource other than the floating IPs, stop the process immediately.- If you are using LBVM, run the Terraform to apply all the
necessary changes. When prompted, type yes and press
Enter.
$ terraform apply --var-file ${OCCNE_CLUSTER}/cluster.tfvarsSample output:... Plan: 14 to add, 0 to change, 0 to destroy. Changes to Outputs: ~ occne_flip_info = { ~ k8s_ctrl_flips = [ + (known after apply), + (known after apply), + (known after apply), ] ~ k8s_node_flips = [ + (known after apply), + (known after apply), + (known after apply), + (known after apply), ] # (2 unchanged attributes hidden) } Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes ... Apply complete! Resources: 14 added, 0 changed, 0 destroyed. - If you are using CNLB, perform the following
steps:
- Run the following command to apply the changes on OpenTofu:
When prompted, type yes and press Enter.
Note:
CNLB has some limitations regarding floating IP association. To ensure a proper procedure, the apply process is divided into several steps.
$ tofu apply --var-file ${OCCNE_CLUSTER}/cluster.tfvarsSample output:... Plan: 14 to add, 0 to change, 0 to destroy. Changes to Outputs: ~ floating_ips = { ~ k8s_control_nodes = null -> { + occne5-user-k8s-ctrl-1 = (known after apply) + occne5-user-k8s-ctrl-2 = (known after apply) + occne5-user-k8s-ctrl-3 = (known after apply) } ~ k8s_nodes = null -> { + occne5-user-k8s-node-1 = (known after apply) + occne5-user-k8s-node-2 = (known after apply) + occne5-user-k8s-node-3 = (known after apply) + occne5-user-k8s-node-4 = (known after apply) } # (1 unchanged attribute hidden) } Do you want to perform these actions? OpenTofu will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes ... Apply complete! Resources: 14 added, 0 changed, 0 destroyed. - Run the following command to open the
cluster.tfvarsfile in edit mode:$ vi ${OCCNE_CLUSTER}/cluster.tfvars - Set the value of the
occne_k8s_floating_ip_assocvariable to true and save the file:occne_k8s_floating_ip_assoc = true - Run the following command again to apply the changes on
OpenTofu:
When prompted, type yes and press Enter.
tofu apply -target=module.floatingip_assoc --var-file ${OCCNE_CLUSTER}/cluster.tfvarsPlan: 9 to add, 0 to change, 0 to destroy. Do you want to perform these actions? OpenTofu will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes ... Apply complete! Resources: 9 added, 0 changed, 0 destroyed. ...
- Run the following command to apply the changes on OpenTofu:
- If you are using LBVM, run the Terraform to apply all the
necessary changes. When prompted, type yes and press
Enter.
- Perform the steps mentioned in the Validating if Floating IP is Enabled section to validate if the floating IPs are assigned correctly.
Disabling Floating IP
- Run the following command to open the
cluster.tfvarsfile in edit mode:$ vi ${OCCNE_CLUSTER}/cluster.tfvars - Set the value of the
occne_k8s_floating_ipvariable to false and save the file:occne_k8s_floating_ip = false- If you are
using CNLB, set the value of the
occne_k8s_floating_ip_assocvariable to false and save the file.
- If you are
using CNLB, set the value of the
- Depending on the type of Load Balancer, run one of the following commands to
apply the changes:
Note:
While running the command, if the system indicates that the action will destroy or create any resource other than the floating IPs, stop the process immediately.- If you are using LBVM, run the following command to apply the changes on
Terraform. When prompted, type yes and press
Enter.
$ terraform apply --var-file ${OCCNE_CLUSTER}/cluster.tfvarsSample output:... Plan: 0 to add, 0 to change, 14 to destroy. Changes to Outputs: ~ occne_flip_info = { ~ k8s_ctrl_flips = [ - "10.14.26.49", - "10.14.26.30", - "10.14.27.235", ] ~ k8s_node_flips = [ - "10.14.26.207", - "10.14.26.32", - "10.14.26.235", - "10.14.26.188", ] # (2 unchanged attributes hidden) } Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes ... Apply complete! Resources: 0 added, 0 changed, 14 destroyed. - If you are using CNLB, run the following command to apply the changes on
OpenTofu. When prompted, type yes and press
Enter.
$ tofu apply --var-file ${OCCNE_CLUSTER}/cluster.tfvarsSample output:... Plan: 0 to add, 0 to change, 14 to destroy. Changes to Outputs: ~ floating_ips = { ~ k8s_control_nodes = { - occne5-user-ctrl-1 = "10.14.26.178" - occne5-user-k8s-ctrl-2 = "10.14.26.75" - occne5-user-k8s-ctrl-3 = "10.14.26.130" } -> null ~ k8s_nodes = { - occne5-user-k8s-node-1 = "10.14.26.124" - occne5-user-k8s-node-2 = "10.14.26.219" - occne5-user-k8s-node-3 = "10.14.26.24" - occne5-user-k8s-node-4 = "10.14.26.238" } -> null # (1 unchanged attribute hidden) } Do you want to perform these actions? OpenTofu will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes ... Apply complete! Resources: 0 added, 0 changed, 14 destroyed.
- If you are using LBVM, run the following command to apply the changes on
Terraform. When prompted, type yes and press
Enter.
- Perform the steps mentioned in the Validating if Floating IP is Disabled section to validate if the floating IPs are disabled.
Postvalidating a Cluster
Validating if Floating IP is Enabled
- Depending on the type of Load Balancer, run one of the following
commands to validate the floating IPs:
Note:
Ensure that the fields in the output are not empty.- If you are using LBVM, run the following command to validate
the floating
IPs:
$ terraform output occne_flip_infoSample output:{ "bastion_flips" = [ "10.14.26.186", "10.14.26.28", ] "floating_network_id" = "06effdde-013g-87cx-k98p-27eaab11e46a" "k8s_ctrl_flips" = [ "10.14.26.49", "10.14.26.30", "10.14.27.235", ] "k8s_node_flips" = [ "10.14.26.207", "10.14.26.32", "10.14.26.235", "10.14.26.188", ] } - If you are using CNLB, run the following command to
validate the floating
IPs:
$ tofu output floating_ipsSample output:{ "bastions" = { "occne5-user-bastion-1" = "10.14.26.142" "occne5-user-bastion-2" = "10.14.26.176" } "k8s_control_nodes" = { "occne5-user-k8s-ctrl-1" = "10.14.26.178" "occne5-user-k8s-ctrl-2" = "10.14.26.75" "occne5-user-k8s-ctrl-3" = "10.14.26.130" } "k8s_nodes" = { "occne5-user-k8s-node-1" = "10.14.26.124" "occne5-user-k8s-node-2" = "10.14.26.219" "occne5-user-k8s-node-3" = "10.14.26.24" "occne5-user-k8s-node-4" = "10.14.26.238" } }
- If you are using LBVM, run the following command to validate
the floating
IPs:
- Open OpenStack Dashboard and confirm that the floating IPs have been assigned correctly. This can be reviewed from Compute → Instances.
- Use SSH to connect to any node of the cluster using its newly
assigned floating IP and an
id_rsa. Do not use the Bastion for this step; use an external computer outside the cluster.
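For example, from an external workstation (the key path is illustrative; the cloud-user name matches the prompts shown elsewhere in this document):
$ ssh -i ~/.ssh/id_rsa cloud-user@<node floating IP>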
Validating if Floating IP is Disabled
- Depending on the type of Load Balancer, run one of the following
commands to validate the floating IPs:
Note:
Ensure that the control and k8s node floating IPs are displayed as empty or null.
- If you are using LBVM, run the following command to
validate the floating
IPs:
$ terraform output occne_flip_infoSample output:{ "bastion_flips" = [ "10.14.26.186", "10.14.26.28", ] "floating_network_id" = "06effdde-013g-87cx-k98p-27eaab11e46a" "k8s_ctrl_flips" = [] "k8s_node_flips" = [] } - If you are using CNLB, run the following command to
validate the floating
IPs:
$ tofu output floating_ips
Sample output:
{
  "bastions" = {
    "occne5-user-bastion-1" = "10.14.26.142"
    "occne5-user-bastion-2" = "10.14.26.176"
  }
  "k8s_control_nodes" = null /* object */
  "k8s_nodes" = null /* object */
}
- Open OpenStack Dashboard and confirm that the floating IPs have been removed. This can be reviewed from Compute → Instances.
- If you are using LBVM, run the following command to
validate the floating
IPs:
Troubleshooting Floating IP
Refer to the following troubleshooting guidelines if you encounter errors while enabling or disabling the floating IP feature:
- Review the prerequisites and prevalidations to ensure that the cluster meets the requirements.
- Check if the Openstack credentials are valid, the
account is not locked out, and the environment variable has been set correctly.
If not, you might get an authentication error.
Following is an example of an authentication error detected by OpenTofu or Terraform:
# Authentication error example: Planning failed. OpenTofu encountered an error while generating this plan. ╷ │ Error: One of 'auth_url' or 'cloud' must be specified # OpenStack Only: # Load OpenStack environment variables $ source openrc.sh # To list the current credentials $ env | grep "OS_" # VMware Only: # To list the current credentials $ cdcl $ grep vcd $OCCNE_CLUSTER/cluster.tfvars # Log in via the OpenStack dashboard or VCD, using the credentials present on the cluster to ensure the credentials are valid and the account is not locked out - Enabling or disabling floating IP fails, if the OpenTofu or
Terraform commands are not run in the
/var/occne/cluster/${OCCNE_CLUSTER}directory. In such cases, use the following command to navigate to the correct directory and retry running the OpenTofu or Terraform commands:$ cd /var/occne/cluster/${OCCNE_CLUSTER} - Confirm that the OpenTofu or Terraform providers are not corrupted.
The
/var/occne/cluster/${OCCNE_CLUSTER}directory must contain a folder named.terraform. The.terraformfolder must contain another folder namedproviders. Run the following command to confirm that the providers exists. If any of these folders doesn't exist, it indicates that the providers are corrupted. In such cases, contact My Oracle Support.$ ls -la /var/occne/cluster/${OCCNE_CLUSTER}/.terraformSample output:drwxr-xr-x. 2 cloud-user cloud-user 26 Aug 1 00:13 modules drwxr-xr-x. 3 cloud-user cloud-user 35 Aug 1 00:13 providers # Validate providers using --version $ tofu/terraform --version # Example: LBVM Only $ cdcl $ terraform --version Terraform v1.5.7 on linux_amd64 + provider registry.terraform.io/hashicorp/null v3.2.1 + provider registry.terraform.io/hashicorp/template v2.2.0 + provider registry.terraform.io/terraform-provider-openstack/openstack v1.42.0 # Example: CNLB Only $ cdcl $ tofu --version OpenTofu v1.6.0-dev on linux_amd64 + provider registry.opentofu.org/hashicorp/null v3.2.1 + provider registry.opentofu.org/hashicorp/template v2.2.0 + provider registry.opentofu.org/terraform-provider-openstack/openstack v1.42.0 - If the OpenTofu or Terraform plan tries to create or destroy any resource other than the floating IPs, stop the process immediately. For more information about cluster recovery procedures, see the Fault Recovery section.
Dedicating CNLB Pods to Specific Worker Nodes
This section explains how to configure Kubernetes so that the CNLB application pods are scheduled only on specific worker nodes. This procedure ensures that the CNLB workloads run on dedicated hardware, isolating them from other applications and improving network performance, stability, and predictability. This setup is ideal for cases where CNLB requires dedicated CPU and memory, typically for high-performance needs.
Note:
This procedure must be run only when there are enough nodes in the cluster to accommodate application pods on worker nodes other than the ones tainted for CNLB application pods. If the number of worker nodes is equal to the number of CNLB application replicas, this procedure must not be run, as it causes disruption in application pods and results in only CNLB application pods being scheduled on these worker nodes. You are responsible for ensuring that the cluster has enough resources to accommodate dedicated CNLB application pods.
Application pods already running on the nodes selected for dedication are not evicted automatically. However, once those application pods are rescheduled, they are no longer scheduled on the nodes used by this procedure.
Procedure to schedule CNLB pods to a specific worker node
- For releases earlier than 25.2.100: Copy and run the script on the Bastion Host to add taints, labels, or tolerations after CNE installation:
- Copy the contents of the script taintTolerate.py on the Bastion Host from the code block below:
import sys
import subprocess
import json
import argparse

def runCmd(cmd):
    print(f"[INFO] Running command: {' '.join(cmd)}")
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    except subprocess.CalledProcessError as e:
        print(f"[ERROR] Command failed: {e.cmd}")
        print(f"[ERROR] Error Output: {e.stderr.strip()}")
        sys.exit(1)
    output = result.stdout.strip()
    if output:
        print(f"[DEBUG] Command output: {output}")
    return output

def getNodesWithLabel(labelKey='cnlb_sched'):
    cmd = ["kubectl", "get", "nodes", "-l", f"{labelKey}=true", "-o", "json"]
    try:
        nodesJson = json.loads(runCmd(cmd))
        nodes = [item["metadata"]["name"] for item in nodesJson.get("items", [])]
        print(f"[INFO] Nodes with label '{labelKey}=true': {nodes}")
        return nodes
    except Exception as e:
        print(f"[ERROR] Failed to get nodes with label {labelKey}: {e}")
        sys.exit(1)

def labelNode(nodeName):
    print(f"[INFO] Labeling node '{nodeName}' with 'cnlb_sched=true'")
    cmd = ["kubectl", "label", "nodes", nodeName, "cnlb_sched=true", "--overwrite"]
    runCmd(cmd)

def taintNode(nodeName, taint='cnlb_sched=true:NoSchedule'):
    print(f"[INFO] Adding taint '{taint}' to node '{nodeName}'")
    cmd = ["kubectl", "taint", "nodes", nodeName, taint, "--overwrite"]
    runCmd(cmd)

def patchDeployment(toleration, deployName='cnlb-app', namespace='occne-infra'):
    cmd = ["kubectl", "get", "deployment", deployName, "-n", namespace, "-o", "json"]
    try:
        currDeploy = json.loads(runCmd(cmd))
        spec = currDeploy["spec"]["template"]["spec"]
        changed = False
        # Handle tolerations
        tolerations = spec.get("tolerations", [])
        if not any(
            all(toleration.get(k) == tol.get(k) for k in toleration)
            for tol in tolerations
        ):
            tolerations.append(toleration)
            changed = True
        nodeSelector = {"cnlb_sched": "true"}
        if spec.get("nodeSelector") != nodeSelector:
            changed = True
        if changed:
            patchObj = {
                "spec": {
                    "template": {
                        "spec": {
                            "tolerations": tolerations,
                            "nodeSelector": nodeSelector
                        }
                    }
                }
            }
            patchJson = json.dumps(patchObj)
            patchCmd = [
                "kubectl", "patch", "deployment", deployName,
                "-n", namespace, "--type=merge", "-p", patchJson
            ]
            print(f"[INFO] Patching deployment '{deployName}' in '{namespace}'.")
            runCmd(patchCmd)
        else:
            print(f"[INFO] No changes needed for deployment: {deployName}")
    except Exception as e:
        print(f"[ERROR] Failed to patch deployment: {e}")
        sys.exit(1)

def getWorkerNodesCount():
    cmd = [
        "kubectl", "get", "nodes",
        "--selector=!node-role.kubernetes.io/master,!node-role.kubernetes.io/control-plane",
        "--no-headers"
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # Each line represents a node
    lines = result.stdout.strip().splitlines()
    return len(lines)

def getDeploymentReplicas(deployName, namespace):
    cmd = [
        "kubectl", "get", "deployment", deployName, "-n", namespace,
        "-o", "jsonpath={.spec.replicas}"
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    replicas = result.stdout.strip()
    return int(replicas) if replicas.isdigit() else 0

def main():
    nodeCount = getWorkerNodesCount()
    cnlbAppReps = getDeploymentReplicas('cnlb-app', 'occne-infra')
    if nodeCount < (cnlbAppReps + 2):
        print("⚠️ WARNING: Not enough worker nodes available. Required at least {}, but found {}.".format(cnlbAppReps + 2, nodeCount))
        sys.exit(1)
    userInputPrompt = input("⚠️ WARNING: This is a service disruptive procedure, not to be run in service. Still want to proceed? Options - 'yes' or 'no' ")
    if userInputPrompt.lower() == 'no' or userInputPrompt.lower() != 'yes':
        print('⚠️ Exiting script from user action {}'.format(userInputPrompt))
        sys.exit(1)
    parser = argparse.ArgumentParser(
        description="Label/taint nodes and restrict cnlb-app deployment via kubectl."
    )
    parser.add_argument('--nodes', nargs='*', help='List of node names. If omitted, will auto-select nodes labeled cnlb_sched=true.')
    args = parser.parse_args()
    if not args.nodes:
        print("[INFO] No node list provided. Searching for nodes labeled 'cnlb_sched=true'...")
        nodes = getNodesWithLabel("cnlb_sched")
        if not nodes:
            print("[ERROR] No nodes found with the label 'cnlb_sched=true'. Please specify nodes with --nodes node1 node2.")
            sys.exit(1)
    else:
        nodes = args.nodes
        print(f"[INFO] Using provided nodes list: {', '.join(nodes)}")
    # Label and taint nodes
    for node in nodes:
        labelNode(node)
        taintNode(node)
    # Add toleration & nodeSelector to cnlb-app deployment
    toleration = {
        "key": "cnlb_sched",
        "value": "true",
        "effect": "NoSchedule",
        "operator": "Equal"
    }
    patchDeployment(toleration)

if __name__ == "__main__":
    main()
Note:
If nodes have been installed with the parameter cnlb_node_label set as true in the cnlb.ini file, then the --nodes parameter does not need to be specified. The script fetches the list of labelled nodes and taints them automatically.
cnlb_node_labelparameter is not defined in thecnlb.inifile.$ python3 taintTolerate.py --nodes node1 node2 node3Here, the --nodes argument takes in the list of nodes that user wants to be dedicated to be used by cnlb app pods only, once this script is run tolerations are added to cnlb app pod and taints/ labels on worker nodes.
If cnlb_node_label is set as true in thecnlb.inifile and no additional nodes have to be tainted/ labelled, then run the command after copying the following script:$ python3 taintTolerate.py - The script accepts a list of nodes through the --nodes argument. Upon running the script, the specified nodes will be labeled and/or tainted. The cnlb application deployment will be updated with the tolerations to allow its pods to be scheduled only on those specified nodes. This can be verified by running and checking nodes that new cnlb app pods are scheduled on.
- Copy the contents of the script
- For releases starting from 25.2.100: Run the script on the Bastion Host to add taints, labels, or tolerations after CNE installation:
- The Bastion Host includes a script named
cnlbTaintTol.py, which is located in the /var/occne/cluster/$OCCNE_CLUSTER/artifacts/ directory. It is installed as part of the CNE deployment.
Note:
If nodes have been installed with cnlb_node_label set as true in the cnlb.ini file, the --nodes argument does not need to be specified. The script fetches the list of labelled nodes and taints them automatically.
cnlb_node_labelis not defined in the/var/occne/cluster/$OCCNE_CLUSTER/cnlb.inifile, then run thecnlbTaintTol.pyscript using the following commands:
The --nodes argument accepts a list of node names to dedicated for cnlb‑app pods. Once the script runs, tolerations are added to cnlb app pod and taints/ labels on this list of nodes.$ python3 /var/occne/cluster/$OCCNE_CLUSTER/artifacts/cnlbTaintTol.py --nodes node1 node2 node3 - If
cnlb_node_labelis set astrueincnlb.inifile and no additional nodes have to be tainted or labelled, then run the following command after copying the script:$ python3 /var/occne/cluster/$OCCNE_CLUSTER/artifacts/cnlbTaintTol.py
- The Bastion Host includes a script named
Performing Manual Switchover of LBVMs During Upgrade
Note:
It is highly recommended to perform the switchover on a single PAP at a time if there is more than one (oam) LBVM pair on the system. After each switchover, verify that all services associated with that pair are functioning correctly before moving to the next pair.
- For VMware or Openstack: Initiate a switchover manually from the OpenStack or VMware Desktop by selecting the ACTIVE LBVMs (the LBVMs that include the service IPs attached) and performing a hard reboot on each LBVM.
- For Openstack: Use the
lbvmSwitchOver.pyscript included in the/var/occne/cluster/${OCCNE_CLUSTER}/scriptsdirectory.Note:
This option is applicable to OpenStack deployments only. VMware deployments must use the option described in Step 1.The
lbvmSwitchOver.pyscript allows the user to perform the switchover on a single PAP (such as "oam") or on all PAPs where it performs the reboot on each LBVM pair. It also includes a warning while issuing the command which can be overridden using the--forceoption. Invalid PAPs return an error response.Use the current Bastion Host to run the switchover script and use the following command for help and information about running the script:$ ./scripts/lbvmSwitchOver.py --helpSample output:Command called to perform a LBVM switchover for Openstack only. --all : Required parameter: run on all peer address pools. --pap : Required parameter: run on a single peer address pool. --force: Optional parameter: run without prompt. --help : Print usage lbvmSwitchOver --all [optional: --force] lbvmSwitchOver --pap <peer address pool name> [optional: --force] Examples: ./scripts/lbvmSwitchOver.py --pap oam ./scripts/lbvmSwitchOver.py --pap oam --force ./scripts/lbvmSwitchOver.py --all ./scripts/lbvmSwitchOver.py --all --forceThe following code block provides a sample command to use the
--forceoption to ignore the warnings that are encountered while running the script:$ /var/occne/cluster/${OCCNE_CLUSTER}/scripts/lbvmSwitchOver.py --pap oam --forceSample output:Performing LBVM switchover on LBVM pairs: [{'id': 0, 'poolName': 'oam', 'name': 'my-cluster-name-oam-lbvm-1', 'ipaddr': '192.168.0.1', 'role': 'ACTIVE', 'status': 'UP'}, {'id': 1, 'poolName': 'oam', 'name': 'my-cluster-name-oam-lbvm-2', 'ipaddr': '192.168.0.2', 'role': 'STANDBY', 'status': 'UP'}]. - Validating LBVM states and communication... - Calling monitor for LBVM: my-cluster-name-oam-lbvm-1 - Calling monitor for LBVM: my-cluster-name-oam-lbvm-2 - Requesting ACTIVE LBVM reboot to force switchover... - Sending reboot request to ACTIVE LBVM: my-cluster-name-oam-lbvm-1 - Waiting for LBVM communication to be re-established to LBVM(s)... - Calling monitor for LBVM: my-cluster-name-oam-lbvm-1 - Calling monitor for LBVM: my-cluster-name-oam-lbvm-2 LBVM switchover successful on LBVMs. Service ports require additional time to switchover. Please wait for all ports to switchover and verify service operation.
Configuring Central Repository Access on Bootstrap
This section provides information about configuring the central repository access on Bootstrap.
When the central repository is populated with the artifacts necessary for CNE cluster deployment and maintenance, you can configure CNE Bootstraps and Bastions to use the repository.
This procedure is also applicable for a Bootstrap where a new CNE cluster is to be deployed, or for a Bastion where a central repository is replaced by a new one.
Note:
- The central repository files and certificates must be copied to the directories listed in the following procedure (through SCP, USB stick, or other mechanism).
- Replace the values inside
<>(angular brackets) with the specific values for the system that is being installed. Run the remaining text/syntax in the code blocks as-is.
- Set environment variables for consistent access to the
central repository:
- Set the hostname, repository IP, and registry port:
$ echo 'export CENTRAL_REPO=<central repo hostname>' | sudo tee -a /etc/profile.d/occne.sh $ echo 'export CENTRAL_REPO_IP=<central repo IPv4 address>' | sudo tee -a /etc/profile.d/occne.sh $ echo 'export CENTRAL_REPO_REGISTRY_PORT=<central repo registry port>' | sudo tee -a /etc/profile.d/occne.sh $ source /etc/profile.d/occne.sh - Set proper protocol for the central
repository. Set http if you are not using a CA
certificate and set https if you are using a CA
certificate.
$ echo 'export CENTRAL_REPO_PROTOCOL=<http | https>' | sudo tee -a /etc/profile.d/occne.sh - Run the following command to source
occne.sh:$ source /etc/profile.d/occne.sh
- Set the hotname, repository IP, and
registory
port:
- If the central repository hostname cannot be resolved by
DNS, update the
/etc/hostsfile with the central repository IP/hostname association:$ echo ${CENTRAL_REPO_IP} ${CENTRAL_REPO} | sudo tee -a /etc/hosts - Create the empty directories (YUM local repo directory and
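You can verify the mapping before continuing; an illustrative check using standard tools:
$ getent hosts ${CENTRAL_REPO}   # should print the IP/hostname pair just added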
certificates directory) for distribution to bastions. These
directories hold the central repository
files.
$ mkdir -p -m 0750 /var/occne/yum.repos.d $ mkdir -p -m 0750 /var/occne/certificates - Perform the following steps to add or copy the required files into the
directories created in the previous step:
- Add the central repository YUM .repo file to
/var/occne/yum.repos.d/. The name of this file must be in the following format:<central_repo_hostname>-ol9.repo( For example,winterfell-ol9.repo). - If the central repository HTTP or container registry
servers use certificates signed by a
certificate-authority, then copy the
certificate-authority certificate to the
/var/occne/certificates/directory. The name of this file must be in the following format:ca_<authority>.crt(For example,ca_oracle_cgbu.crt). - If the central repository registry uses a
self-signed certificate, then copy the certificate
to the
/var/occne/certificates/directory. The name of this file must be in the following format:<central_repo_hostname>:<central_repo_port>.crt(For example,test-repository:5000.crt). Skip this step if the registry uses a certificate signed by a certificate authority as stated in step b.
- Add the central repository YUM .repo file to
- Ensure that proper permission (0644) is granted to all the
files
copied:
$ chmod 0644 /var/occne/certificates/* $ chmod 0644 /var/occne/yum.repos.d/* - If you are using a certificate-authority certificate, copy
the certificate-authority certificate file to the /etc/pki/ca-trust/source/anchors/ directory and update the ca-trust list of the operating system:
$ sudo cp /var/occne/certificates/ca_*.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
create the following directory and copy the registry certificate to
it on the bootstrap for local
use:
$ sudo mkdir -p /etc/containers/certs.d/${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/ $ sudo cp /var/occne/certificates/${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}.crt /etc/containers/certs.d/${CENTRAL_REPO}:${CENTRAL_REPO_REGISTRY_PORT}/ca.crt - Copy the yum repository directory to the cluster-specific
directory to be used in the targeted cluster
nodes:
$ cp -r /var/occne/yum.repos.d/ /var/occne/cluster/${OCCNE_CLUSTER}/ - Configure
dnf.confto point to the central repository:$ echo reposdir=/var/occne/yum.repos.d | sudo tee -a /etc/dnf/dnf.conf - Verify the repository access by running the following
command:
$ dnf repolistSample output:repo id repo name OCCNE1_Terraform missing packages ol9_UEKR8 Unbreakable Enterprise Kernel Release 8 for Oracle Linux 9 (x86_64) ol9_addons Oracle Linux 9 Addons (x86_64) ol9_appstream Application packages released for Oracle Linux 9 (x86_64) ol9_baseos_latest Oracle Linux 9 Latest (x86_64) ol9_developer Packages for creating test and development environments for Oracle Linux 9 (x86_64) ol9_developer_EPEL EPEL Packages for creating test and development environments for Oracle Linux 9 (x86_64)
Removing a Kubernetes Controller Node
This section describes the procedure to remove a controller node from the CNE Kubernetes cluster in a vCNE deployment.
Note:
- A controller node must be removed from the cluster only if you are replacing the controller node. CNE doesn't support removing a controller node if the controller node is not replaced immediately after removal.
- This procedure is applicable for vCNE (OpenStack and VMWare) deployments only.
- This procedure is applicable for removing a single controller node only.
- A minimum of three control nodes (control plane and etcd hosts) are required to maintain the cluster's high availability and responsiveness. However, the cluster can still operate with an even number of control nodes, though it is not recommended for a long period of time.
- Some maintenance procedures, such as CNE standard upgrade and cluster update procedures, are not supported after removing a controller node from a cluster with an even number of controller nodes. In such cases, you must add a new node before performing these procedures.
Removing a Controller Node in OpenStack Deployment
This section describes the procedure to remove a single controller node from the CNE Kubernetes cluster in an OpenStack deployment.
- Locate the controller node internal IP address by running the
following command from the Bastion
Host:
$ kubectl get nodes -o wide | egrep control | awk '{ print $1, $2, $6}'For example:$ [cloud-user@occne7-test-bastion-1 ~]$ kubectl get node -o wide | egrep control | awk '{ print $1, $2, $6}'Sample output:occne7-test-k8s-ctrl-1 NotReady 192.168.201.158 occne7-test-k8s-ctrl-2 Ready 192.168.203.194 occne7-test-k8s-ctrl-3 Ready 192.168.200.115Note that the status of controller node 1 is
NotReady in the sample output.
- Run the following commands to back up the
terraform.tfstatefile:$ cd /var/occne/cluster/${OCCNE_CLUSTER} $ cp terraform.tfstate ${OCCNE_CLUSTER}/terraform.tfstate.backup - From the Bastion Host, use SSH to log in to a working controller
node and run the following commands to list the etcd
members:
$ ssh <working control node hostname> # sudo su # source /etc/etcd.env # /usr/local/bin/etcdctl --endpoints https://<working control node IP address>:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member listFor example:$ ssh occne7-test-k8s-ctrl-2 [cloud-user@occne7-test-k8s-ctrl-2]$ sudo su [root@occne7-test-k8s-ctrl-2 cloud-user]# source /etc/etcd.env [root@occne7-test-k8s-ctrl-2 cloud-user]# /usr/local/bin/etcdctl --endpoints https://192.168.203.194:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member list 52513ddd2aa49770, started, etcd1, https://192.168.201.158:2380, https://192.168.201.158:2379, false 80845fb2b5120458, started, etcd3, https://192.168.200.115:2380, https://192.168.200.115:2379, false f1200d9975868073, started, etcd2, https://192.168.203.194:2380, https://192.168.203.194:2379, false- From the output, identify the etcd (etcd1, etcd2, or etcd3) to which the failed controller node belongs.
- Copy the controller node ID that is displayed in the first column of the output to be used later in the procedure.
- If the failed controller node is reachable, use SSH to log in to
the failed controller node from the Bastion Host and stop etcd service by
running the following
commands:
$ ssh <failed control node hostname> $ sudo systemctl stop etcdExample:$ ssh occne7-test-k8s-ctrl-1 $ sudo systemctl stop etcd
- From the Bastion Host, use SSH to log in to a working controller
node and remove the failed controller node from the etcd member
list:
$ ssh <working control node hostname> $ sudo su $ source /etc/etcd.env $ /usr/local/bin/etcdctl --endpoints https://<working control node IP address>:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member remove <failed control node ID>Example:[root@occne7-test-k8s-ctrl-2 cloud-user]# /usr/local/bin/etcdctl --endpoints https://192.168.203.194:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member remove 52513ddd2aa49770Sample output:Member 52513ddd2aa49770 removed from cluster f347ab69786ba4f7
- Validate if the failed node is removed from the etcd member
list:
$ /usr/local/bin/etcdctl --endpoints https://<working control node IP address>:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member listFor example:[root@occne7-test-k8s-ctrl-2 cloud-user]# /usr/local/bin/etcdctl --endpoints https://192.168.203.194:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member list 80845fb2b5120458, started, etcd3, https://192.168.200.115:2380, https://192.168.200.115:2379, false f1200d9975868073, started, etcd2, https://192.168.203.194:2380, https://192.168.203.194:2379, false - From the Bastion Host, switch the controller nodes in
terraform.tfstateby running the following commands:Note:
Perform this step only if the failed controller node is an etcd1 member.
$ cd /var/occne/cluster/$OCCNE_CLUSTER
$ cp terraform.tfstate terraform.tfstate.original
$ python3 scripts/switchTfstate.py
For example:
[cloud-user@occne7-test-bastion-1]$ python3 scripts/switchTfstate.py
Sample output:Beginning tfstate switch order k8s control nodes terraform.tfstate.lastversion created as backup Controller Nodes order before rotation: occne7-test-k8s-ctrl-1 occne7-test-k8s-ctrl-2 occne7-test-k8s-ctrl-3 Controller Nodes order after rotation: occne7-test-k8s-ctrl-2 occne7-test-k8s-ctrl-3 occne7-test-k8s-ctrl-1 Success: terraform.tfstate rotated for cluster occne7-test - Remove the failed controller node from the cluster by performing
one the following steps in the Bastion Host depending on whether the failed
controller node is reachable or not:
- If the failed controller node is reachable, run the
following commands to remove the controller node from the
cluster:
$ kubectl cordon <failed control node hostname> $ kubectl drain <failed control node hostname> --force --ignore-daemonsets --delete-emptydir-data $ kubectl delete node <failed control node hostname>Example:$ [cloud-user@occne7-test-bastion-1]$ kubectl cordon occne7-test-k8s-ctrl-1 $ [cloud-user@occne7-test-bastion-1]$ kubectl drain occne7-test-k8s-ctrl-1 --force --ignore-daemonsets --delete-emptydir-data $ [cloud-user@occne7-test-bastion-1]$ kubectl delete node occne7-test-k8s-ctrl-1 - If the failed controller node is not reachable, run the
following commands to remove the controller node from the
cluster:
$ kubectl cordon <failed control node hostname> $ kubectl delete node <failed control node hostname>Example:$ [cloud-user@occne7-test-bastion-1]$ kubectl cordon occne7-test-k8s-ctrl-1 $ [cloud-user@occne7-test-bastion-1]$ kubectl delete node occne7-test-k8s-ctrl-1
- If the failed controller node is reachable, run the
following commands to remove the controller node from the
cluster:
- Verify if the failed controller node is deleted from
cluster.
$ kubectl get nodeSample output:[cloud-user@occne7-test-bastion-1]$ kubectl get node NAME STATUS ROLES AGE VERSION occne7-test-k8s-ctrl-2 Ready control-plane,master 82m v1.23.7 occne7-test-k8s-ctrl-3 Ready control-plane,master 82m v1.23.7 occne7-test-k8s-node-1 Ready <none> 81m v1.23.7 occne7-test-k8s-node-2 Ready <none> 81m v1.23.7 occne7-test-k8s-node-3 Ready <none> 81m v1.23.7 occne7-test-k8s-node-4 Ready <none> 81m v1.23.7Note:
If you are not able to runkubectlcommands from the Bastion Host, update the/var/occne/cluster/$OCCNE_CLUSTER/artifacts/admin.conffile with the new working node IP address:vi /var/occne/cluster/occne7-test/artifacts/admin.conf server: https://192.168.203.194:6443 - Delete the failed controller node's instance using the Openstack
GUI:
- Log in to OpenStack cloud using your credentials.
- From the Compute menu, select Instances, and
locate the failed controller node's instance that you want to delete, as
shown in the following image:

- On the instance record, click the drop-down option in the
Actions column, select Delete Instance to delete the
failed controller node's instance, as shown in the following image:

- If you are unable to run
kubectlcommands from Bastion Host, update the/var/occne/cluster/$OCCNE_CLUSTER/artifacts/admin.conffile with the new working node IP address:Note:
Ifetcd1is being replaced, update the IP with previousetcd2IP.
Sample output:vi /var/occne/cluster/$OCCNE_CLUSTER/artifacts/admin.confserver: https://192.168.203.1:6443
Removing a Controller Node in VMware Deployment
This section describes the procedure to remove a single controller node from the CNE Kubernetes cluster in a VMware deployment.
- Locate the controller node internal IP address by running the
following command from the Bastion
Host:
$ kubectl get node -o wide | egrep ctrl | awk '{ print $1, $2, $6}'Sample output:occne7-test-k8s-ctrl-1 192.168.201.158 occne7-test-k8s-ctrl-2 192.168.203.194 occne7-test-k8s-ctrl-3 192.168.200.115For example:$ [cloud-user@occne7-test-bastion-1 ~]$ kubectl get node -o wide | egrep control | awk '{ print $1, $2, $6}'Sample output:occne7-test-k8s-ctrl-1 NotReady 192.168.201.158 occne7-test-k8s-ctrl-2 Ready 192.168.203.194 occne7-test-k8s-ctrl-3 Ready 192.168.200.115Note that the status of control node 1 is
NotReadyin the sample output. - Backup the terraform.tfstate file by running the following
commands:
$ cd /var/occne/cluster/${OCCNE_CLUSTER} $ cp terraform.tfstate ${OCCNE_CLUSTER}/terraform.tfstate.backup - On the Bastion Host, use SSH to log in to a working controller node
and run the following commands to list the etcd
members:
$ ssh <working control node hostname> # sudo su # source /etc/etcd.env # /usr/local/bin/etcdctl --endpoints https://<working control node IP address>:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member listFor example:$ ssh occne7-test-k8s-ctrl-2 [cloud-user@occne7-test-k8s-ctrl-2]$ sudo su [root@occne7-test-k8s-ctrl-2 cloud-user]# source /etc/etcd.env [root@occne7-test-k8s-ctrl-2 cloud-user]# /usr/local/bin/etcdctl --endpoints https://192.168.203.194:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member listSample output:52513ddd2aa49770, started, etcd1, https://192.168.201.158:2380, https://192.168.201.158:2379, false 80845fb2b5120458, started, etcd3, https://192.168.200.115:2380, https://192.168.200.115:2379, false f1200d9975868073, started, etcd2, https://192.168.203.194:2380, https://192.168.203.194:2379, false
- From the output, identify the etcd (etcd1, etcd2, or etcd3) to which the failed controller node belongs.
- Copy the controller node ID that is displayed in the first column of the output to be used later in the procedure.
- If the failed controller node is reachable, use SSH to log in to the
failed controller node from the Bastion Host and stop etcd service by running
the following
commands:
$ ssh <failed control node hostname> $ sudo systemctl stop etcdFor example:$ ssh occne7-test-k8s-ctrl-1 $ sudo systemctl stop etcd
- From the Bastion Host, use SSH to log in to a working controller
node and remove the failed controller node from the etcd member
list:
$ ssh <working control node hostname> $ sudo su $ source /etc/etcd.env $ /usr/local/bin/etcdctl --endpoints https://<working control node IP address>:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member remove <failed control node ID>For example:[root@occne7-test-k8s-ctrl-2 cloud-user]# /usr/local/bin/etcdctl --endpoints https://192.168.203.194:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member remove 52513ddd2aa49770Sample output:Member 52513ddd2aa49770 removed from cluster f347ab69786ba4f7
- Validate if the failed node is removed from the etcd member
list:
$ /usr/local/bin/etcdctl --endpoints https://<working control node IP address>:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member listFor example:[root@occne7-test-k8s-ctrl-2 cloud-user]# /usr/local/bin/etcdctl --endpoints https://192.168.203.194:2379 --cacert=${ETCD_PEER_TRUSTED_CA_FILE} --cert=${ETCD_CERT_FILE} --key=${ETCD_KEY_FILE} member listSample output:80845fb2b5120458, started, etcd3, https://192.168.200.115:2380, https://192.168.200.115:2379, false f1200d9975868073, started, etcd2, https://192.168.203.194:2380, https://192.168.203.194:2379, false
- From the Bastion Host, switch the controller nodes in
terraform.tfstate by running the following commands:
Note:
Perform this step only if the failed controller node is an etcd1 member.
$ cd /var/occne/cluster/${OCCNE_CLUSTER}
$ cp terraform.tfstate terraform.tfstate.original
$ python3 scripts/switchTfstate.py
For example:
[cloud-user@occne7-test-bastion-1]$ python3 scripts/switchTfstate.py
Sample output:
Beginning tfstate switch order k8s control nodes
terraform.tfstate.lastversion created as backup
Controller Nodes order before rotation:
occne7-test-k8s-ctrl-1
occne7-test-k8s-ctrl-2
occne7-test-k8s-ctrl-3
Controller Nodes order after rotation:
occne7-test-k8s-ctrl-2
occne7-test-k8s-ctrl-3
occne7-test-k8s-ctrl-1
Success: terraform.tfstate rotated for cluster occne7-test
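As an optional sanity check (not part of the original procedure), you can confirm that the rotation actually modified the state file by comparing it against the copy taken before the switch; the expected output is shown below the command:
$ diff -q terraform.tfstate terraform.tfstate.original
Files terraform.tfstate and terraform.tfstate.original differ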
- Remove the failed controller node from the cluster by performing
one of the following steps on the Bastion Host, depending on whether the failed
controller node is reachable or not:
- If the failed controller node is reachable, run the
following commands to remove the controller node from the
cluster:
$ kubectl cordon <failed control node hostname>
$ kubectl drain <failed control node hostname> --force --ignore-daemonsets --delete-emptydir-data
$ kubectl delete node <failed control node hostname>
For example:
[cloud-user@occne7-test-bastion-1]$ kubectl cordon occne7-test-k8s-ctrl-1
[cloud-user@occne7-test-bastion-1]$ kubectl drain occne7-test-k8s-ctrl-1 --force --ignore-daemonsets --delete-emptydir-data
[cloud-user@occne7-test-bastion-1]$ kubectl delete node occne7-test-k8s-ctrl-1
- If the failed controller node is not reachable, run the
following commands to remove the controller node from the
cluster:
$ kubectl cordon <failed control node hostname>
$ kubectl delete node <failed control node hostname>
For example:
[cloud-user@occne7-test-bastion-1]$ kubectl cordon occne7-test-k8s-ctrl-1
[cloud-user@occne7-test-bastion-1]$ kubectl delete node occne7-test-k8s-ctrl-1
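In either case, you can optionally confirm that no pods are still reported on the deleted node (a suggested check, not part of the original procedure; the output should be empty once eviction completes):
$ kubectl get pods -A -o wide | grep occne7-test-k8s-ctrl-1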
- Verify that the failed controller node is deleted from the
cluster:
$ kubectl get node
Sample output:
NAME STATUS ROLES AGE VERSION
occne7-test-k8s-ctrl-2 Ready control-plane,master 82m v1.23.7
occne7-test-k8s-ctrl-3 Ready control-plane,master 82m v1.23.7
occne7-test-k8s-node-1 Ready <none> 81m v1.23.7
occne7-test-k8s-node-2 Ready <none> 81m v1.23.7
occne7-test-k8s-node-3 Ready <none> 81m v1.23.7
occne7-test-k8s-node-4 Ready <none> 81m v1.23.7
Note:
If you are not able to run kubectl commands from the Bastion Host, update the /var/occne/cluster/$OCCNE_CLUSTER/artifacts/admin.conf file with the new working node IP address:
vi /var/occne/cluster/occne7-test/artifacts/admin.conf
server: https://192.168.203.194:6443
- Delete the failed controller node's VM using the VMware GUI:
- Log in to VMware cloud using your credentials.
- From the Compute menu, select Virtual Machines and locate the failed controller node's VM to delete.
- From the Actions menu, select Delete to delete the failed controller node's VM.
- If you are unable to run kubectl commands from the Bastion Host, update the /var/occne/cluster/$OCCNE_CLUSTER/artifacts/admin.conf file with the new working node IP address:
Note:
If etcd1 is being replaced, update the IP with the previous etcd2 IP.
For example:
vi /var/occne/cluster/$OCCNE_CLUSTER/artifacts/admin.conf
server: https://192.168.203.194:6443
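After updating admin.conf, a quick connectivity check (suggested, not part of the original procedure) confirms that kubectl can reach the API server through the new endpoint:
$ kubectl cluster-info
$ kubectl get node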
rook_toolbox
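The following Deployment manifest creates the Rook Ceph toolbox pod (rook-ceph-tools), which provides an interactive shell with the Ceph client tools for inspecting and debugging the storage cluster.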
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-tools
namespace: rook-ceph # namespace:cluster
labels:
app: rook-ceph-tools
spec:
replicas: 1
selector:
matchLabels:
app: rook-ceph-tools
template:
metadata:
labels:
app: rook-ceph-tools
spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
image: occne-repo-host:5000/docker.io/rook/ceph:v1.10.2
command: ["/bin/bash"]
args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
tty: true
securityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
env:
- name: ROOK_CEPH_USERNAME
valueFrom:
secretKeyRef:
name: rook-ceph-mon
key: ceph-username
- name: ROOK_CEPH_SECRET
valueFrom:
secretKeyRef:
name: rook-ceph-mon
key: ceph-secret
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: mon-endpoint-volume
mountPath: /etc/rook
volumes:
- name: mon-endpoint-volume
configMap:
name: rook-ceph-mon-endpoints
items:
- key: data
path: mon-endpoints
- name: ceph-config
emptyDir: {}
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 5
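To use the toolbox, apply the manifest and run a Ceph command in the resulting pod. The file name rook_toolbox.yaml below is illustrative; save the manifest under any name you prefer:
$ kubectl apply -f rook_toolbox.yaml
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status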