Upgrade the Nodes
Upgrade Kubernetes nodes from Release 1 to Release 2.
For each node in the cluster, perform the following steps to upgrade from Release 1 to Release 2.
Important:
The control plane nodes must be upgraded first, then the worker nodes.
Preparing Nodes
Remove the Kubernetes nodes from the Release 1 cluster, and add them back using the Release 2 Ignition configuration.
- Set the location of the kubeconfig file.
Set the location of the Release 1 kubeconfig file as the KUBECONFIG environment variable. For example:
export KUBECONFIG=~/.kube/kubeconfig.mycluster
Important:
The Kubernetes configuration file must be saved as the name of the Release 1 cluster (the Kubernetes Module name).
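The naming rule above can be sketched as a small shell check. This is a hedged sketch, not part of the product tooling: "mycluster" is the example Kubernetes Module name used in this guide, and the path pattern follows the example above.

```shell
# Hedged sketch: derive the Release 1 kubeconfig path from the Kubernetes
# Module name, so the file name stays in sync with the cluster name.
# "mycluster" is the example name used in this guide.
CLUSTER_NAME=mycluster
KUBECONFIG_PATH="$HOME/.kube/kubeconfig.$CLUSTER_NAME"

# Export it only when the file actually exists, to avoid pointing kubectl
# at a missing configuration.
if [ -f "$KUBECONFIG_PATH" ]; then
    export KUBECONFIG="$KUBECONFIG_PATH"
fi
echo "$KUBECONFIG_PATH"
```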
- Find the node name.
Find the name of the node to upgrade in the cluster.
kubectl get nodes
- Set the node name.
Set the node name as an environment variable.
export TARGET_NODE=nodename
- Drain the node.
Use the kubectl drain command to drain the node. For control plane nodes, use:
kubectl drain $TARGET_NODE --ignore-daemonsets
For worker nodes, use:
kubectl drain $TARGET_NODE --ignore-daemonsets --delete-emptydir-data
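The role-dependent drain flags can be captured in a small helper. This is a sketch, not part of the ocne tooling: drain_cmd is a hypothetical function that only prints the command, so the flag logic can be inspected before running anything against a live cluster.

```shell
# Hedged sketch: build the kubectl drain command for a node based on its role.
# Worker nodes additionally need --delete-emptydir-data, as described above.
drain_cmd() {
    node="$1"
    role="$2"
    if [ "$role" = "worker" ]; then
        echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data"
    else
        echo "kubectl drain $node --ignore-daemonsets"
    fi
}

drain_cmd ocne-control-plane-1 control-plane
drain_cmd ocne-worker-1 worker
```

Removing the echo (or piping the printed command to sh) would run the drain for real.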
- Reset the node.
Use the ocne cluster console command to reset the node. The syntax to use is:
ocne cluster console [{-d|--direct}] {-N|--node} nodename [{-t|--toolbox}] [-- command]
For more information on the syntax options, see Oracle Cloud Native Environment: CLI.
For example:
ocne cluster console --node $TARGET_NODE --direct -- kubeadm reset -f
- Add the node back into the cluster.
- If the node is a control plane node:
When adding a control plane node, two things must be created: an encrypted certificate bundle and a join token.
Run the following command to connect to a control plane node's OS console and create the certificate bundle:
ocne cluster console --node control_plane_name --direct -- kubeadm init phase upload-certs --certificate-key certificate-key --upload-certs
Replace control_plane_name with a control plane node that's running in the Release 1 cluster.
Important:
This isn't the target node, but a separate control plane node that's used to run the kubeadm init command.
Replace certificate-key with the output displayed when the Ignition information for control plane nodes was generated using the ocne cluster join command.
Run the following command to create a join token:
ocne cluster console --node control_plane_name --direct -- kubeadm token create token
Replace control_plane_name with a control plane node that's running in the Release 1 cluster.
Replace token with the output displayed when the Ignition information for control plane nodes was generated using the ocne cluster join command.
Important:
If the token was generated more than 24 hours prior, it has likely expired, and you must regenerate the Ignition files, which also generates a fresh token.
Use the kubectl get nodes command to confirm the control plane node is added to the cluster. This might take a few moments.
kubectl get nodes
- If the node is a worker node:
When adding a worker node, a join token must be created. Run the following command to connect to a control plane node's OS console to perform this step:
ocne cluster console --node control_plane_name --direct -- kubeadm token create token
Replace control_plane_name with a control plane node that's running in the Release 1 cluster.
Replace token with the output displayed when the Ignition information for worker nodes was generated using the ocne cluster join command.
Important:
If the token was generated more than 24 hours prior, it has likely expired, and you must regenerate the Ignition files, which also generates a fresh token.
Use the kubectl get nodes command to confirm the worker node is added to the cluster. This might take a few moments.
kubectl get nodes
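The 24-hour expiry rule for join tokens can be sketched as a simple age check. The epoch timestamps below are made-up examples; in practice the creation time would come from your own record of when ocne cluster join generated the Ignition files, and token_state is a hypothetical helper, not an ocne or kubeadm command.

```shell
# Hedged sketch: decide whether a join token has passed the 24-hour window.
# Arguments are epoch seconds: creation time, then current time.
token_state() {
    created="$1"
    now="$2"
    age=$(( now - created ))
    if [ "$age" -ge 86400 ]; then
        echo expired
    else
        echo valid
    fi
}

token_state 1700000000 1700003600   # about 1 hour old
token_state 1700000000 1700090000   # about 25 hours old
```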
Installing the Release 2 OS
Reboot the host and reinstall the OS using the Kickstart file to install the Oracle CNE Release 2 OS image.
- Shut down the system.
- Boot the system.
- Interrupt the installation.
When the OS installation screen is displayed, select the Install Oracle Linux release option and press e. The boot options are displayed.
- Set the Kickstart file location.
Add the location of the Kickstart file, using the inst.ks option, to the boot options. For example:
inst.ks=http://myhost.example.com/ock/control-plane.cfg
Important:
Two Kickstart files must be available, one for control plane nodes, and one for worker nodes, as they include different Ignition information. Ensure you set this to the appropriate Kickstart file.
- Boot the server.
Press Ctrl+X to boot the server using the Kickstart file. The server boots to the Release 2 image and rejoins the Kubernetes cluster.
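Because the two Kickstart files carry different Ignition data, selecting the wrong one is an easy mistake. This sketch maps a node role to an inst.ks boot option: the control plane URL matches the example above, while the worker file name (worker.cfg) is an assumed example you would adjust to your web server layout.

```shell
# Hedged sketch: select the inst.ks boot option for a node role.
# The host and file names are illustrative, not mandated by the product.
ks_boot_option() {
    case "$1" in
        control-plane) echo "inst.ks=http://myhost.example.com/ock/control-plane.cfg" ;;
        worker)        echo "inst.ks=http://myhost.example.com/ock/worker.cfg" ;;
        *)             echo "unknown role: $1" >&2; return 1 ;;
    esac
}

ks_boot_option control-plane
```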
Uncordon Nodes
Uncordon the Kubernetes nodes to enable cluster workloads to run.
- Find the node name.
Find the name of the node to uncordon.
kubectl get nodes
- Uncordon the node.
Use the kubectl uncordon command to uncordon the node.
kubectl uncordon node_name
For example:
kubectl uncordon ocne-control-plane-1
- Verify the node is available.
Use the kubectl get nodes command to confirm the STATUS column is set to Ready.
kubectl get nodes
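Checking the STATUS column can also be scripted by parsing the kubectl get nodes output. The sample text below is illustrative, not captured from a real cluster, and node_status is a hypothetical helper.

```shell
# Hedged sketch: extract the STATUS column for one node from
# "kubectl get nodes"-style output.
node_status() {
    node="$1"
    output="$2"
    printf '%s\n' "$output" | awk -v n="$node" '$1 == n { print $2 }'
}

# Illustrative sample; on a live cluster capture this with: kubectl get nodes
sample='NAME                   STATUS   ROLES           AGE   VERSION
ocne-control-plane-1   Ready    control-plane   10d   v1.29.3
ocne-worker-1          Ready    <none>          10d   v1.29.3'

node_status ocne-control-plane-1 "$sample"
```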
Validating the Node Upgrade
Validate a node is running the Release 2 OS.
- List the nodes in the cluster.
List the nodes in the Kubernetes cluster to ensure all expected nodes are listed.
kubectl get nodes
- Show information about the node.
Use the ocne cluster info command to display information about the node. The syntax is:
ocne cluster info [{-N|--nodes}] nodename, ... [{-s|--skip-nodes}]
For more information on the syntax options, see Oracle Cloud Native Environment: CLI.
For example:
ocne cluster info --nodes ocne-control-plane-1
- Validate the node information.
The node is running the Release 2 image if the output of the ocne cluster info command looks similar to:
Node: ocne-control-plane-1
  Registry and tag for ostree patch images:
    registry: container-registry.oracle.com/olcne/ock-ostree
    tag: 1.29
    transport: ostree-unverified-registry
  Ostree deployments:
    ock 5d6e86d05fa0b9390c748a0a19288ca32bwer1eac42fef1c048050ce03ffb5ff9.1 (staged)
  * ock 5d6e86d05fa0b9390c748a0a19288ca32bwer1eac42fef1c048050ce03ffb5ff9.0
The OSTree based image information is displayed in the output.
The node isn't running the Release 2 image if the output is missing this information, and looks similar to:
Node: ocne-control-plane-2
  Registry and tag for ostree patch images:
  Ostree deployments:
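The pass/fail distinction above can be automated by checking whether any OSTree deployment line appears in the ocne cluster info output. This is a hedged sketch: has_ostree_deployment is a hypothetical helper, and the sample text is abbreviated from the outputs shown above.

```shell
# Hedged sketch: report whether "ocne cluster info" output lists an OSTree
# deployment (a line like "ock <hash>.0"), indicating the Release 2 image.
has_ostree_deployment() {
    printf '%s\n' "$1" | grep -q '^[ *]*ock [0-9a-f]'
}

# Abbreviated output for a node that has NOT been upgraded.
not_upgraded='Node: ocne-control-plane-2
  Registry and tag for ostree patch images:
  Ostree deployments:'

if has_ostree_deployment "$not_upgraded"; then
    echo "Release 2 image detected"
else
    echo "Release 2 image NOT detected"
fi
```

Feeding it the full output shown earlier for ocne-control-plane-1, with its ock deployment lines, would take the other branch.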