Upgrading Clusters to Newer Kubernetes Versions
When a new version of Kubernetes has been released and is supported by Container Engine for Kubernetes, you can upgrade the Kubernetes version running on the control plane nodes and worker nodes in a cluster.
The control plane nodes and worker nodes that comprise the cluster can run different versions of Kubernetes, provided you follow the Kubernetes version skew support policy described in the Kubernetes documentation.
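For example, assuming kubectl is installed and configured with access to the cluster, you can compare the versions in use before planning an upgrade:

# Show the version of Kubernetes running on the control plane (the Server Version line).
kubectl version

# Show the kubelet version of each worker node (the VERSION column).
kubectl get nodes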
You upgrade control plane nodes and worker nodes differently:
- You upgrade control plane nodes by upgrading the cluster and specifying a more recent Kubernetes version for the cluster. Control plane nodes running older versions of Kubernetes are upgraded. Because Container Engine for Kubernetes distributes the Kubernetes Control Plane across multiple Oracle-managed control plane nodes to ensure high availability (spread across different availability domains in regions that support them), you can upgrade the Kubernetes version running on the control plane nodes with zero downtime. (A CLI sketch follows this list.)
Having upgraded the control plane nodes to a new version of Kubernetes, you can subsequently create new node pools with worker nodes running the newer version. Alternatively, you can continue to create new node pools with worker nodes running older versions of Kubernetes (provided those older versions are compatible with the Kubernetes version running on the control plane nodes).
For more information about control plane node upgrade, see Upgrading the Kubernetes Version on Control Plane Nodes in a Cluster.
- You upgrade worker nodes in one of the following ways:
- By performing an 'in-place' upgrade of a node pool in the cluster, specifying a more recent Kubernetes version for the existing node pool, and then cycling the nodes to automatically replace all existing worker nodes.
- By performing an 'in-place' upgrade of a node pool in the cluster, specifying a more recent Kubernetes version for the existing node pool, and then manually replacing each existing worker node with a new worker node.
- By performing an 'out-of-place' upgrade of a node pool in the cluster, replacing the original node pool with a new node pool for which you've specified a more recent Kubernetes version.
For more information about worker node upgrade, see Upgrading the Kubernetes Version on Worker Nodes in a Cluster.
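For example, the following is a minimal sketch of a control plane upgrade using the OCI CLI, assuming the CLI is installed and configured; the cluster OCID and target version are placeholders:

# List the Kubernetes versions that Container Engine for Kubernetes currently supports.
oci ce cluster-options get --cluster-option-id all

# Upgrade the control plane nodes by specifying a more recent version for the cluster.
oci ce cluster update --cluster-id ocid1.cluster.oc1..<placeholder> --kubernetes-version <target-version>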
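Similarly, a sketch of an 'in-place' worker node upgrade, again with placeholder values. Updating the node pool only changes the version used for new worker nodes, so you then cycle the nodes or replace them manually; when replacing a node manually, draining it first is the usual approach:

# Specify a more recent Kubernetes version for the existing node pool.
oci ce node-pool update --node-pool-id ocid1.nodepool.oc1..<placeholder> --kubernetes-version <target-version>

# Before manually terminating an existing worker node, drain it so workloads reschedule.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data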
To find out more about the Kubernetes versions currently and previously supported by Container Engine for Kubernetes, see Supported Versions of Kubernetes.
Notes about Upgrading Clusters
Note the following when upgrading clusters:
- Container Engine for Kubernetes only upgrades the Kubernetes version running on control plane nodes when you explicitly initiate the upgrade operation.
- After upgrading control plane nodes to a newer version of Kubernetes, you cannot downgrade the control plane nodes to an earlier Kubernetes version.
- Before you upgrade the version of Kubernetes running on the control plane nodes, it is your responsibility to test that applications deployed on the cluster are compatible with the new Kubernetes version. For example, before upgrading the existing cluster, you might create a new separate cluster with the new Kubernetes version to test your applications.
- The versions of Kubernetes running on the control plane nodes and the worker nodes must be compatible (that is, the Kubernetes version on the control plane nodes must be no more than two minor versions ahead of the Kubernetes version on the worker nodes). See the Kubernetes version skew support policy described in the Kubernetes documentation.
- If the version of Kubernetes currently running on the control plane nodes is more than one version behind the most recent supported version, you are given a choice of versions to upgrade to. If you want to upgrade to a version of Kubernetes that is more than one version ahead of the version currently running on the control plane nodes, you must upgrade to each intermediate version in sequence without skipping versions (as described in the Kubernetes documentation).
- To successfully upgrade the control plane nodes in a cluster, the Kubernetes Dashboard service must be of type ClusterIP. If the Kubernetes Dashboard service is of a different type (for example, NodePort), the upgrade fails. In this case, change the type of the Kubernetes Dashboard service back to ClusterIP, for example by entering

kubectl -n kube-system edit service kubernetes-dashboard

and changing the type. (A quick way to check the service type is shown after these notes.)
- Prior to Kubernetes version 1.14, Container Engine for Kubernetes created clusters with kube-dns as the DNS server. From Kubernetes version 1.14 onwards, Container Engine for Kubernetes creates clusters with CoreDNS as the DNS server. When you upgrade a cluster created by Container Engine for Kubernetes from an earlier version to Kubernetes 1.14 or later, the cluster's kube-dns server is automatically replaced with the CoreDNS server. Note that if you customized kube-dns behavior using the original kube-dns ConfigMap, those customizations are not carried forward to the CoreDNS ConfigMap. You must create and apply a new ConfigMap containing the customizations to override settings in the CoreDNS Corefile (a hypothetical sketch follows these notes). For more information about upgrading to CoreDNS, see Configuring DNS Servers for Kubernetes Clusters.
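In connection with the Kubernetes Dashboard note above, you can confirm the service type before initiating an upgrade. A minimal check, assuming the dashboard is deployed in the kube-system namespace under its default service name:

# Print the type of the Kubernetes Dashboard service; the upgrade requires ClusterIP.
kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.type}'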
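In connection with the CoreDNS note above, the following is a hypothetical sketch of porting a kube-dns stub-domain customization to a CoreDNS server block. The ConfigMap name coredns-custom, the data key, the example.internal domain, and the 10.0.0.10 resolver are all placeholder assumptions; the ConfigMap name and keys that Container Engine for Kubernetes actually expects are described in Configuring DNS Servers for Kubernetes Clusters.

# Hypothetical sketch only: the ConfigMap name, key, domain, and IP are placeholders.
kubectl -n kube-system apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  stub.server: |
    # Forward queries for a private zone (formerly a kube-dns stubDomain) to its resolver.
    example.internal:53 {
        errors
        cache 30
        forward . 10.0.0.10
    }
EOF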