2.5.2 Upgrading Worker Nodes from 1.1.9 to 1.1.12

Only upgrade worker nodes after the master node has completed the upgrade process, as described in Section 2.5.1, “Upgrading the Master Node from 1.1.9 to 1.1.12”.

Important

You must perform several manual steps to complete the upgrade of a worker node. These steps involve draining the node before the upgrade to prevent the cluster from scheduling or starting any pods on the node while it is being upgraded. The drain process deletes any running pods from the node. If local storage is configured on the node, the drain process errors out so that you have the opportunity to determine whether you need to back up any local data first.

When the upgrade is complete, you can uncordon the worker node so that pods can be scheduled and run on the node again.
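
In outline, each worker node is upgraded by draining it from the master node, running the upgrade script on the worker node itself, and then uncordoning it again from the master node. The following sketch summarizes the sequence; each command is described in detail in the steps below, and worker1.example.com is the example worker node hostname used throughout this section:

    $ kubectl drain worker1.example.com --ignore-daemonsets    # on the master node
    # kubeadm-upgrade.sh upgrade                                # on the worker node, as root
    $ kubectl uncordon worker1.example.com                      # on the master node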

To upgrade a worker node, perform the following steps:

  1. Drain the worker node by running the following command from the master node:

    $ kubectl drain worker1.example.com --ignore-daemonsets

    where worker1.example.com is the hostname of the worker node that you wish to upgrade.

    If local storage is configured for the node, the drain process may generate an error. The following example output shows a node that uses local storage and fails to drain:

    node/worker1.example.com cordoned
    error: unable to drain node "worker1.example.com", aborting command...
     
    There are pending nodes to be drained:
     worker1.example.com
    error: pods with local storage (use --delete-local-data to override): carts-74f4558cb8-c8p8x, 
        carts-db-7fcddfbc79-c5pkx, orders-787bf5b89f-nt9zj, orders-db-775655b675-rhlp7, 
        shipping-5bd69fb4cc-twvtf, user-db-5f9d89bbbb-7t85k
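
    The pods listed in the error are the pods that use local storage on the node. If you need to check what a listed pod actually stores before deciding whether to back anything up, you can inspect its volume definitions from the master node. This is an optional check and is not part of the upgrade procedure; carts-74f4558cb8-c8p8x is one of the example pod names from the output above, and you may need to add the --namespace option if the pod does not run in the default namespace:

    $ kubectl get pod carts-74f4558cb8-c8p8x -o jsonpath='{.spec.volumes}'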

    If a node fails to drain, determine whether you need to back up the local data so that you can restore it later, or whether you can proceed and delete it directly. After any backups have been made, rerun the command with the --delete-local-data switch to force the removal of the data and drain the node. For example, on the master node, run:

    $ kubectl drain worker1.example.com --ignore-daemonsets --delete-local-data
    node/worker1.example.com already cordoned
    WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-xrszk, kube-proxy-7g9px; 
    Deleting pods with local storage: carts-74f4558cb8-g2fdw, orders-db-775655b675-gfggs, 
                                      user-db-5f9d89bbbb-k78sk
    pod "user-db-5f9d89bbbb-k78sk" evicted
    pod "rabbitmq-96d887875-lxm5f" evicted
    pod "orders-db-775655b675-gfggs" evicted
    pod "catalogue-676d4b9f7c-lvwfb" evicted
    pod "payment-75f75b467f-skrbq" evicted
    pod "carts-74f4558cb8-g2fdw" evicted
    node/worker1.example.com drained
  2. Check that the worker node is unable to accept any further scheduling by running the following command on the master node:

    $ kubectl get nodes

    Note that a node that has been drained should show SchedulingDisabled in its STATUS column.
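
    You can also confirm the scheduling state of a single node directly. The following is a minimal optional check run from the master node; the spec.unschedulable field is set to true on a node that has been cordoned or drained:

    $ kubectl get node worker1.example.com -o jsonpath='{.spec.unschedulable}'
    true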

  3. If you are using the Oracle Container Registry to obtain images, log in.

    Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the upgrade. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information. If you have configured a local registry, you may need to set the KUBE_REPO_PREFIX environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information.
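
    For example, to log in to the Oracle Container Registry from the worker node, you would typically run docker login as described in Section 2.2.5; if you use a local registry, you can point the upgrade tooling at it through the KUBE_REPO_PREFIX environment variable. The following is an illustrative sketch only; local-registry.example.com is a placeholder that you must replace with your own registry hostname and repository path:

    # docker login container-registry.oracle.com
    # export KUBE_REPO_PREFIX=local-registry.example.com/kubernetes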

  4. Run the kubeadm-upgrade.sh upgrade command as root on the worker node:

    # kubeadm-upgrade.sh upgrade
    -- Running upgrade script---
    Number of cpu present in this system 2
    Total memory on this system: 7710MB
    Space available on the mount point /var/lib/docker: 44GB
    Space available on the mount point /var/lib/kubelet: 44GB
    kubeadm version 1.9
    kubectl version 1.9
    kubelet version 1.9
    ol7_addons repo enabled
    [WARNING] This action will upgrade this node to latest version
    [WARNING] The cluster will be upgraded through intermediate versions which are unsupported
    [WARNING] You must take backup before upgrading the cluster as upgrade may fail
      Please select 1 (continue) or 2 (abort) :
    1) continue
    2) abort
    #? 1
    Upgrading worker node
    Updating kubeadm package
    Checking access to container-registry.oracle.com/kubernetes for update
    Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy ...
    v1.12.5: Pulling from container-registry.oracle.com/kubernetes/kube-proxy
    Digest: sha256:9eba681b56e15078cb499a3360f138cc16987cf5aea06593f77d0881af6badbe
    Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy:v1.12.5
    Upgrading kubeadm to latest version
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubeadm.x86_64 0:1.9.11-2.1.1.el7 will be updated
    ---> Package kubeadm.x86_64 0:1.12.7-1.1.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ===============================================================
     Package    Arch        Version          Repository      Size
    ===============================================================
    Updating:
     kubeadm   x86_64     1.12.7-1.1.2.el7   ol7_addons      7.3 M
    
    Transaction Summary
    ===============================================================
    Upgrade  1 Package
    
    Total download size: 7.3 M   
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
    Upgrading kubeadm forcefully from version earlier that 1.11
      Updating   : kubeadm-1.12.7-1.1.2.el7.x86_64                              1/2
      Cleanup    : kubeadm-1.9.11-2.1.1.el7.x86_64                              2/2
      Verifying  : kubeadm-1.12.7-1.1.2.el7.x86_64                              1/2
      Verifying  : kubeadm-1.9.11-2.1.1.el7.x86_64                              2/2
    
    Updated:
      kubeadm.x86_64 0:1.12.7-1.1.2.el7
    
    Complete!
    Upgrading kubelet and kubectl now ...
    Checking kubelet and kubectl RPM ...
    [INFO] yum install -y kubelet-1.12.7-1.1.2.el7.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubelet.x86_64 0:1.9.11-2.1.1.el7 will be updated
    ---> Package kubelet.x86_64 0:1.12.7-1.1.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ==========================================================================================
     Package    Arch           Version                       Repository             Size
    ==========================================================================================
    Updating:
     kubelet    x86_64        1.12.7-1.1.2.el7              ol7_addons              19 M
    
    Transaction Summary
    ==========================================================================================
    Upgrade  1 Package
    
    Total download size: 19 M
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubelet-1.12.7-1.1.2.el7.x86_64.rpm                               |  19 MB  00:00:01
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubelet-1.12.7-1.1.2.el7.x86_64                      1/2
      Cleanup    : kubelet-1.9.11-2.1.1.el7.x86_64                      2/2
      Verifying  : kubelet-1.12.7-1.1.2.el7.x86_64                      1/2
      Verifying  : kubelet-1.9.11-2.1.1.el7.x86_64                      2/2
    
    Updated:
      kubelet.x86_64 0:1.12.7-1.1.2.el7
    
    Complete!
    [INFO] yum install -y kubectl-1.12.7-1.1.2.el7.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubectl.x86_64 0:1.9.11-2.1.1.el7 will be updated
    ---> Package kubectl.x86_64 0:1.12.7-1.1.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ==========================================================================================
     Package       Arch             Version                       Repository           Size
    ==========================================================================================
    Updating:
     kubectl       x86_64           1.12.7-1.1.2.el7              ol7_addons         7.7 M
    
    Transaction Summary
    ==========================================================================================
    Upgrade  1 Package
    
    Total download size: 7.7 M   
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubectl-1.12.7-1.1.2.el7.x86_64.rpm                            | 7.7 MB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubectl-1.12.7-1.1.2.el7.x86_64                 1/2
      Cleanup    : kubectl-1.9.11-2.1.1.el7.x86_64                 2/2
      Verifying  : kubectl-1.12.7-1.1.2.el7.x86_64                 1/2
      Verifying  : kubectl-1.9.11-2.1.1.el7.x86_64                 2/2
    
    Updated:
      kubectl.x86_64 0:1.12.7-1.1.2.el7
    
    Complete!
    [kubelet] Downloading configuration for the kubelet from
     the "kubelet-config-1.12" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package 
    using your package manager.
    [WORKER NODE UPGRADED SUCCESSFULLY]

    Note that you are warned that the upgrade temporarily affects the node's availability. You must confirm that you wish to continue for the upgrade to complete.

    The kubelet service and all running containers are restarted automatically after upgrade.
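
    If you want to confirm that the kubelet service restarted and that the node is now running the new kubelet binary, you can check directly on the worker node. This is an optional check and is not part of the upgrade script:

    # systemctl is-active kubelet
    active
    # kubelet --version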

  5. Uncordon the worker node so that it is able to schedule new pods, as required. On the master node, run:

    $ kubectl uncordon worker1.example.com
    node/worker1.example.com uncordoned

    where worker1.example.com is the hostname of the worker node that you have just upgraded.
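
    To confirm that pods are being scheduled on the node again, you can list the pods that are currently running on it from the master node. This is an optional check; if your version of kubectl does not support the --field-selector option, kubectl describe node worker1.example.com also lists the pods running on the node:

    $ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=worker1.example.com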

  6. When you have finished the upgrade process, check that the nodes are all running the expected version as follows:

    $ kubectl get nodes
    NAME                  STATUS    ROLES   AGE       VERSION
    master.example.com    Ready     master  1h        v1.12.7+1.1.2.el7
    worker1.example.com   Ready     <none>  1h        v1.12.7+1.1.2.el7
    worker2.example.com   Ready     <none>  1h        v1.12.7+1.1.2.el7
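
    You can also verify the packages that are now installed on each node. The following is an optional check run on a worker node; the exact package release numbers depend on the updates that are available in the ol7_addons repository when you upgrade:

    # rpm -qa | grep -E 'kube(adm|let|ctl)'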