2.6.2 Updating Worker Nodes

Only update worker nodes after the master node has completed the update process, as described in Section 2.6.1, “Updating the Master Node”.

Important

You must perform several manual steps to complete the update of a worker node. These steps involve draining the node before the update to prevent the cluster from scheduling or starting any pods on the node while it is being updated. The drain process deletes any running pods from the node. If local storage is configured on the node, the drain process fails with an error so that you have the opportunity to determine whether you need to back up local data first.

When the update is complete, you can uncordon the worker node so that pods can be scheduled on it again.
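
Before you drain a node, one way to see which pods the drain will affect is to list the pods that are currently scheduled on that node so that you can inspect them before proceeding. The following command is an illustrative example; worker1.example.com is an example hostname:

    $ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=worker1.example.com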

To update a worker node, perform the following steps:

  1. Drain the worker node by running the following command from the master node:

    $ kubectl drain worker1.example.com --ignore-daemonsets

    where worker1.example.com is the hostname of the worker node that you wish to update.

    If local storage is configured for the node, the drain process might generate an error. The following example output shows a node, using local storage, that fails to drain:

    node/worker1.example.com cordoned
    error: unable to drain node "worker1.example.com", aborting command...
     
    There are pending nodes to be drained:
     worker1.example.com
    error: pods with local storage (use --delete-local-data to override): carts-74f4558cb8-c8p8x, 
        carts-db-7fcddfbc79-c5pkx, orders-787bf5b89f-nt9zj, orders-db-775655b675-rhlp7, 
        shipping-5bd69fb4cc-twvtf, user-db-5f9d89bbbb-7t85k

    If a node fails to drain, determine whether you need to back up the local data so that you can restore it later, or whether it is safe to delete it directly. After making any required backups, rerun the command with the --delete-local-data switch to force the removal of the data and drain the node, for example:

    $ kubectl drain worker1.example.com --ignore-daemonsets --delete-local-data
    node/worker1.example.com already cordoned
    WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-xrszk, kube-proxy-7g9px; 
    Deleting pods with local storage: carts-74f4558cb8-g2fdw, orders-db-775655b675-gfggs, 
                                      user-db-5f9d89bbbb-k78sk
    pod "user-db-5f9d89bbbb-k78sk" evicted
    pod "rabbitmq-96d887875-lxm5f" evicted
    pod "orders-db-775655b675-gfggs" evicted
    pod "catalogue-676d4b9f7c-lvwfb" evicted
    pod "payment-75f75b467f-skrbq" evicted
    pod "carts-74f4558cb8-g2fdw" evicted
    node "kubernetes-worker1" drained
  2. Check that the worker node is unable to accept any further scheduling by running the following command on the master node:

    $ kubectl get nodes

    A node that has been drained shows SchedulingDisabled as part of its STATUS, for example Ready,SchedulingDisabled.
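
    On a cluster with many nodes, you can filter the output to confirm that scheduling is disabled only on the node that you have drained, for example:

    $ kubectl get nodes | grep SchedulingDisabled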

  3. Update the packages on the worker node by using a standard yum update command. To update only the packages required for Oracle Linux Container Services for use with Kubernetes, run the following command as root on the worker node:

    # yum update kubeadm
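
    If you want to see which kubeadm package versions are available before you apply the update, you can list them first, for example:

    # yum --showduplicates list kubeadm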
  4. If you are using the Oracle Container Registry to obtain images, log in.

    Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the update. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information. If you have configured a local registry, you may need to set the KUBE_REPO_PREFIX environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information.
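
    For example, the general pattern is to log in to the registry with docker and, if you are using a local registry, to set the KUBE_REPO_PREFIX environment variable before you run the upgrade. The local registry host name, port, and path below are placeholder values; refer to the sections listed above for the exact steps for your environment:

    # docker login container-registry.oracle.com
    # export KUBE_REPO_PREFIX=local-registry.example.com:5000/kubernetes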

  5. When the yum update process completes, run the kubeadm-setup.sh upgrade command as root on the worker node. You are warned that the update affects the node's availability temporarily. Confirm that you wish to continue to complete the update:

    #  kubeadm-setup.sh upgrade
    [WARNING] Upgrade will affect this node's application(s) availability temporarily
              Please select 1 (continue) or 2 (abort) :
    1) continue
    2) abort
    #? 1
    Checking access to container-registry.oracle.com/kubernetes for update
    v1.12.5-2: Pulling from kubernetes/kube-proxy-amd64
    Digest: sha256:f525d06eebf7f21c55550b1da8cee4720e36b9ffee8976db357f49eddd04c6d0
    Status: Image is up to date for 
    container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.12.5-2
    Restarting containers ...
    [NODE UPGRADED SUCCESSFULLY]

    The kubelet service and all of the running containers are restarted automatically after the update.
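
    If you want to confirm that the kubelet service came back up after the upgrade, you can check its status on the worker node, for example:

    # systemctl is-active kubelet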

  6. Uncordon the worker node so that it is able to schedule new pods, as required, by running the following command on the master node:

    $ kubectl uncordon worker1.example.com
    node/worker1.example.com uncordoned

    where worker1.example.com is the hostname of the worker node that you have just updated.

  7. After the update process has completed, run the following command on the master node to check that all of the nodes are running the expected version:

    $ kubectl get nodes
    NAME                  STATUS    ROLES   AGE       VERSION
    master.example.com    Ready     master  1h        v1.12.7+1.1.2.el7
    worker1.example.com   Ready     <none>  1h        v1.12.7+1.1.2.el7
    worker2.example.com   Ready     <none>  1h        v1.12.7+1.1.2.el7
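
    Optionally, you can also confirm that pods are being scheduled on the updated worker node again by checking the NODE column of the pod listing, for example:

    $ kubectl get pods --all-namespaces -o wide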