The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Chapter 3 Upgrading to Release 1.1
This section describes how to upgrade Oracle Cloud Native Environment from Release 1.0 to Release 1.1.
When the Oracle Cloud Native Environment packages and the Kubernetes cluster are upgraded to Release 1.1, you can install the new Istio module and use other new features in this release such as scaling a Kubernetes cluster.
Perform the steps in this chapter, in order, to upgrade your environment from Release 1.0 to Release 1.1.
3.1 Updating the Network Configuration
On each Kubernetes node, update the firewall rules to disable masquerading and to add the cni0 interface to the trusted zone:
sudo firewall-cmd --remove-masquerade --permanent
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
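Rules added with --permanent are written to the firewalld configuration but do not apply to the running firewall until firewalld is reloaded or restarted. As a sketch, assuming firewalld is running:

```shell
# Restart firewalld so the permanent rules above take effect
# on the running firewall
sudo systemctl restart firewalld.service
```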
3.2 Changing the Software Packages Source
Disable the ULN channel or Oracle Linux yum server repository for Release 1.0 and enable the one for Release 1.1.
If the systems are registered to use ULN, use the ULN web interface to subscribe each system to the ol7_x86_64_olcne11 channel, and make sure you unsubscribe each system from the following channels:
- ol7_x86_64_olcne
- ol7_x86_64_olcne12
- ol7_x86_64_developer
If you are using the Oracle Linux yum server for system updates, on each node, update the oracle-olcne-release-el7 release package, disable the ol7_olcne and ol7_developer repositories, and enable the ol7_olcne11 repository. On each node, run:
sudo yum update oracle-olcne-release-el7
sudo yum-config-manager --disable ol7_olcne ol7_developer
sudo yum-config-manager --enable ol7_olcne11
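Before proceeding, you can optionally confirm the repository change took effect. One possible check (using the repository names configured above) is:

```shell
# List enabled repositories and confirm ol7_olcne11 appears,
# while ol7_olcne is no longer listed
sudo yum repolist enabled | grep olcne
```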
3.3 Upgrading the Operator Node
Upgrade the operator node with the new Oracle Cloud Native Environment software packages.
- On the operator node, stop the olcne-api-server service:
sudo systemctl stop olcne-api-server.service
- Update the Platform CLI, Platform API Server, and utilities packages:
sudo yum update olcnectl olcne-api-server olcne-utils
- Start the olcne-api-server service:
sudo systemctl start olcne-api-server.service
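Before moving on, you can optionally verify that the Platform API Server came back up, for example:

```shell
# Reports "active" if the Platform API Server restarted successfully
systemctl is-active olcne-api-server.service
```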
3.4 Upgrading the Kubernetes Nodes
Upgrade the Kubernetes nodes with the new Oracle Cloud Native Environment software packages.
- On the node to update, stop the olcne-agent service:
sudo systemctl stop olcne-agent.service
- Update the Platform Agent and utilities packages:
sudo yum update olcne-agent olcne-utils
- Start the olcne-agent service:
sudo systemctl start olcne-agent.service
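As with the operator node, you can optionally confirm the Platform Agent restarted cleanly on each node before continuing, for example:

```shell
# Shows the current state and recent log lines for the Platform Agent
systemctl status olcne-agent.service
```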
3.5 Upgrading the Kubernetes Cluster
Upgrade the cluster to Kubernetes Release 1.17.
On the operator node, use the olcnectl module update command to upgrade to the latest Kubernetes release available for Oracle Cloud Native Environment Release 1.1. This example upgrades a Kubernetes module named mycluster in the myenvironment environment to Kubernetes Release 1.17:
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--kube-version 1.17.9
The --kube-version option specifies the release to which you want to upgrade. This example uses release number 1.17.9.
Make sure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release for Oracle Cloud Native Environment Release 1.1, see Release Notes.
If you are using the NGINX load balancer deployed by the Platform CLI, you should also upgrade NGINX on the control plane nodes. Use the --nginx-image option to specify the location from which to pull the NGINX container image used for the upgrade. For example, include this additional line in the olcnectl module update command to upgrade NGINX from the Oracle Container Registry:
--nginx-image container-registry.oracle.com/olcne/nginx:1.17.7
Make sure you upgrade to the latest NGINX release. To get the version number of the latest NGINX container image for Oracle Cloud Native Environment Release 1.1, see Release Notes.
When you upgrade from Kubernetes Release 1.14 to 1.17, the update iterates through each Kubernetes release up to Release 1.17. That is, the nodes are upgraded to Kubernetes Release 1.15, then 1.16, and finally to 1.17.
Kubernetes Releases 1.15 and 1.16 should not be used other than as intermediate steps when upgrading to Release 1.17.
When each node in the cluster is upgraded to the next Kubernetes release, the cluster's health is validated. If the cluster is healthy, the cycle of back up, upgrade to the next release, and cluster validation starts again, until all nodes are upgraded to the latest release.
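The stepwise path described above can be sketched as a simple loop. This is illustrative only; the Platform API Server drives the actual per-node backup, upgrade, and health validation:

```shell
# Illustrative only: the upgrade walks through each intermediate
# Kubernetes release rather than jumping straight to 1.17
for release in 1.15 1.16 1.17; do
  echo "Back up, upgrade all nodes to Kubernetes ${release}, validate cluster health"
done
```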