2.6.1 Updating the Master Node

You must update the master node in your cluster before you update any worker nodes. The kubeadm-setup.sh upgrade command, run on the master node, completes the steps that are necessary to prepare and update the cluster. The following steps describe how to update the master node.

Important

Before you perform any update operations, create a backup file for your cluster at its current version. After you update the kubeadm package, any backup files that you make are not backward compatible, and if you revert to an earlier version of Oracle Linux Container Services for use with Kubernetes, the restore operation may fail to load your backup file. See Section 4.3, “Cluster Backup and Restore” for more information.

Steps to update the master node

  1. On the master node, update the kubeadm package first:

    # yum update kubeadm
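
    To confirm which kubeadm package version is now installed, you can query the RPM database, for example:

    # rpm -q kubeadm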
  2. If you are using the Oracle Container Registry to obtain images, log in.

    Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the update. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information. If you have configured a local registry, you may need to set the KUBE_REPO_PREFIX environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information.
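
    For example, assuming you authenticate to the Oracle Container Registry with docker login, and that your optional local registry is reachable at registry.example.com:5000 (a hypothetical hostname), the login and environment variable setup might look similar to the following:

    # docker login container-registry.oracle.com
    # export KUBE_REPO_PREFIX=registry.example.com:5000/kubernetes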

  3. Ensure that you open any new firewall ports in Section 2.2.7, “Firewall and iptables Requirements”.
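
    For example, on a system that uses firewalld, opening an additional TCP port might look similar to the following, where 6443/tcp (the Kubernetes API server port) is used only as an illustration; consult Section 2.2.7 for the ports that apply to your configuration:

    # firewall-cmd --add-port=6443/tcp --permanent
    # firewall-cmd --reload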

  4. Create a pre-update backup file. If the update does not complete successfully, the backup can be used to revert your cluster to its configuration prior to the update.

    # kubeadm-setup.sh stop
    Stopping kubelet now ...
    Stopping containers now ...
    
    # kubeadm-setup.sh backup /backup
    Creating backup at directory /backup ...
    Using 3.2.24
    Checking if container-registry.oracle.com/kubernetes/etcd:3.2.24 is available
    d05a0ef2bea8cd05e1311fcb5391d8878a5437f8384887ae31694689bc6d57f5 
    /var/run/kubeadm/backup/etcd-backup-1543581013.tar
    9aa26d015a4d2cf7a73438b04b2fe2e61be71ee56e54c08fd7047555eb1e0e6f 
    /var/run/kubeadm/backup/k8s-master-0-1543581013.tar
    Backup is successfully stored at /backup/master-backup-v1.12.5-2-1543581013.tar ...
    You can restart your cluster now by doing: 
    # kubeadm-setup.sh restart
    
    # kubeadm-setup.sh restart
    Restarting containers now ...
    Detected node is master ...
    Checking if env is ready ...
    Checking whether docker can pull busybox image ...
    Checking access to container-registry.oracle.com/kubernetes ...
    Trying to pull repository container-registry.oracle.com/kubernetes/pause ... 
    3.1: Pulling from container-registry.oracle.com/kubernetes/pause
    Digest: sha256:802ef89b9eb7e874a76e1cfd79ed990b63b0b84a05cfa09f0293379ac0261b49
    Status: Image is up to date for container-registry.oracle.com/kubernetes/pause:3.1
    Checking firewalld settings ...
    Checking iptables default rule ...
    Checking br_netfilter module ...
    Checking sysctl variables ...
    Restarting kubelet ...
    Waiting for node to restart ...
    .......+..............
    Master node restarted. Complete synchronization between nodes may take a few minutes.
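
    You can confirm that the backup file was written before you continue, for example:

    # ls -l /backup/master-backup-v1.12.5-2-1543581013.tar

    If the update does not complete successfully, this file can be loaded again by using the restore procedure that is described in Section 4.3, “Cluster Backup and Restore”.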
  5. Run the kubeadm-setup.sh upgrade command as root on the master node. The script prompts you to continue with the update and warns you to make a backup file before you continue. Enter 1 to continue.

    # kubeadm-setup.sh upgrade
    Checking whether api-server is using image lower than 1.12
    [WARNING] Please make sure that you have performed backup of the cluster before upgrading
              Please select 1 (continue) or 2 (abort) :
    1) continue
    2) abort
    #? 1
    Checking whether https works (export https_proxy if behind firewall)
    v1.12.5-2: Pulling from kubernetes/kube-proxy-amd64
    Digest: sha256:d3b87a1cb0eb64d702921169e442c6758a09c94ee91a0080e801ec41355077cd
    Status: Image is up to date for 
           container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.12.5-2
    Checking cluster health ...
     
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg
    [upgrade/version] You have chosen to change the cluster version to "v1.12.5-2"
    [upgrade/versions] Cluster version: v1.12.7+1.1.2.el7
    [upgrade/versions] kubeadm version: v1.12.7+1.1.2.el7
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager 
                                                          kube-scheduler]
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.5-2"...
    [upgrade/staticpods] Writing new Static Pod manifests to 
        "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to 
        "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to 
        "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to 
        "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399/kube-scheduler.yaml"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" 
        and backed up old manifest to 
        "/etc/kubernetes/tmp/kubeadm-backup-manifests555128538/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" 
        and backed up old manifest to 
        "/etc/kubernetes/tmp/kubeadm-backup-manifests555128538/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" 
        and backed up old manifest to 
        "/etc/kubernetes/tmp/kubeadm-backup-manifests555128538/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the 
       "kube-system" Namespace
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in 
        order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically 
        approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client 
        certificates in the cluster
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
     
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.5-2". Enjoy!
     
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading 
        your kubelets in turn.
    Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
     
    [MASTER UPGRADE COMPLETED SUCCESSFULLY]
     Cluster may take a few minutes to get backup!
     Please proceed to upgrade your $WORKER node *in turn* by running the following command:
      # kubectl drain $WORKER --ignore-daemonsets (run following command with proper KUBECONFIG)
      Login to the $WORKER node
      # yum update kubeadm
      # kubeadm-setup.sh upgrade
      # kubectl uncordon $WORKER (run the following command with proper KUBECONFIG)
      upgrade the next $WORKER node

    The upgrade command performs a health check on the cluster, validates the existing configuration, and then pulls the images that are required to update the cluster. All of the control plane components for the cluster are updated, and certificates and tokens are configured so that all cluster components on all nodes can continue to function after the update.

    After these components have been updated, the kubelet and kubectl packages are updated automatically.
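
    You can optionally confirm that the control plane reports the new version before you continue, for example:

    # kubectl get nodes
    # kubectl version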

  6. If the following message is displayed, you must update the flannel component manually:

    [INFO] Flannel is not upgraded yet. Run 'kubeadm-setup.sh upgrade --flannel' to upgrade flannel

    Re-run kubeadm-setup.sh upgrade with the --flannel option to ensure that your cluster is fully upgraded:

    # kubeadm-setup.sh upgrade --flannel

After you have completed the master node upgrade, you can upgrade the packages for Oracle Linux Container Services for use with Kubernetes on each worker node.
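
For reference, the per-worker sequence printed at the end of the upgrade output above corresponds to commands similar to the following, where worker1.example.com is a placeholder for the name of a worker node and the KUBECONFIG path shown is the typical kubeadm administrator configuration:

  # export KUBECONFIG=/etc/kubernetes/admin.conf
  # kubectl drain worker1.example.com --ignore-daemonsets

  (on the worker node)
  # yum update kubeadm
  # kubeadm-setup.sh upgrade

  (on the master node again)
  # kubectl uncordon worker1.example.com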