2.5.1 Upgrading the Master Node from 1.1.9 to 1.1.12

You must upgrade the master node in your cluster before upgrading the worker nodes. Use the kubeadm-upgrade.sh upgrade command on the master node to create the required backup files and to complete the steps that prepare and upgrade the cluster.

Important

Before you perform any update operations, make a backup file of your cluster at its current version. After you update the kubeadm package, any backup files that you make are not backward compatible: if you revert to an earlier version of Oracle Linux Container Services for use with Kubernetes, the restore operation might fail to load your backup file. See Section 4.3, “Cluster Backup and Restore” for more information.

Do not use backups that are generated by kubeadm-setup.sh to restore from a failed 1.1.9 to 1.1.12 upgrade. The kubeadm-upgrade.sh tool provides its own separate backup and restore mechanism, as described later in this section.

Upgrade the master node to 1.1.12

  1. Unlike errata upgrades, you do not need to manually update the kubeadm package; however, you must install the kubeadm-upgrade package:

    # yum install kubeadm-upgrade
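
    You can verify that the package is installed before you continue, for example:

    # rpm -q kubeadm-upgrade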
  2. If you are using the Oracle Container Registry to obtain images, log in.

    Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the upgrade. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information.

    If you configured a local registry, you may need to set the KUBE_REPO_PREFIX environment variable to point to the appropriate registry. You might also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information.
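
    For example, if your local registry mirror were available at registry.example.com (a placeholder hostname), you might set the variable before running the upgrade:

    # export KUBE_REPO_PREFIX=registry.example.com/kubernetes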

  3. Ensure that you open any new firewall ports, as described in Section 2.2.7, “Firewall and iptables Requirements”.
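
    For example, you can open a port by using firewall-cmd; the port shown here is only an illustration, and the full list of required ports is in Section 2.2.7:

    # firewall-cmd --add-port=6443/tcp --permanent
    # firewall-cmd --reload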

  4. Create a pre-upgrade backup file. Unlike the errata release upgrade procedure, the backup file is generated by using kubeadm-upgrade.sh backup. If the upgrade does not complete successfully, the backup can be used to revert your cluster to its pre-upgrade configuration.

    # kubeadm-setup.sh stop
    Stopping kubelet now ...
    Stopping containers now ...
    
    # kubeadm-upgrade.sh backup /backups
    -- Running upgrade script---
    Backing up cluster
    Creating backup at directory /backups ...
    Using 3.1.11
    Checking if container-registry.oracle.com/kubernetes/etcd-amd64:3.1.11 is available
    dc9ed9408e82dbd9d925c4d660206f9c60dce98c150cb32517284a6ef764f59d  
    /var/run/kubeadm/backup/etcd-backup-1546953894.tar
    aa2dad1ba2c2ec486d30fe0a15b29566b257474429d79889472fd79128489ae0  
    /var/run/kubeadm/backup/k8s-master-0-1546953894.tar
    Backup is successfully stored at /backups/master-backup-v1.9.11-0-1546953894.tar ...
    You can restart your cluster now by doing: 
    # kubeadm-setup.sh restart
    Storing meta-data to backup file master-backup-v1.9.11-0-1546953894.tar
    .version-info
    Backup creation successful :)
    
    # kubeadm-setup.sh restart
    Restarting containers now ...
    Detected node is master ...
    Checking if env is ready ...
    Checking whether docker can pull busybox image ...
    Checking access to container-registry.oracle.com/kubernetes ...
    Trying to pull repository container-registry.oracle.com/kubernetes/pause ... 
    3.1: Pulling from container-registry.oracle.com/kubernetes/pause
    Digest: sha256:802ef89b9eb7e874a76e1cfd79ed990b63b0b84a05cfa09f0293379ac0261b49
    Status: Image is up to date for container-registry.oracle.com/kubernetes/pause:3.1
    Checking firewalld settings ...
    Checking iptables default rule ...
    Checking br_netfilter module ...
    Checking sysctl variables ...
    Restarting kubelet ...
    Waiting for node to restart ...
    .......+..............
    Master node restarted. Complete synchronization between nodes may take a few minutes.
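
    Before you continue, you can confirm that the backup archive was written to the target directory, for example:

    # ls -l /backups/master-backup-*.tar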

  5. Run the kubeadm-upgrade.sh upgrade command as root on the master node.

    # kubeadm-upgrade.sh upgrade
    -- Running upgrade script---
    Number of cpu present in this system 2
    Total memory on this system: 7710MB
    Space available on the mount point /var/lib/docker: 44GB
    Space available on the mount point /var/lib/kubelet: 44GB
    kubeadm version 1.9
    kubectl version 1.9
    kubelet version 1.9
    ol7_addons repo enabled
    [WARNING] This action will upgrade this node to latest version
    [WARNING] The cluster will be upgraded through intermediate 
    versions which are unsupported
    [WARNING] You must take backup before upgrading the cluster as upgrade may fail
      Please select 1 (continue) or 2 (abort) :
    1) continue
    2) abort
    #? 1
    
    Upgrading master node
    Checking access to container-registry.oracle.com/kubernetes for update
    Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy-amd64
    v1.10.5: Pulling from container-registry.oracle.com/kubernetes/kube-proxy-amd64
    Digest: sha256:4739e1154818a95786bc94d44e1cb4f493083d1983e98087c8a8279e616582f1
    Status: Image is up to date for 
    container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.10.5
    Checking access to container-registry.oracle.com/kubernetes for update
    Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy-amd64
    v1.11.3: Pulling from container-registry.oracle.com/kubernetes/kube-proxy-amd64
    Digest: sha256:2783b4d4689da3210d2a915a8ee60905bf53841be4d52ffbf56cc811c61d5728
    Status: Image is up to date for 
    container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.11.3
    Checking access to container-registry.oracle.com/kubernetes for update
    Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy ...
    v1.12.7: Pulling from container-registry.oracle.com/kubernetes/kube-proxy
    Digest: sha256:f4f9e7b70a65f4f7d751da9b97c7536b21a7ac2b301155b0685778fc83d5510f
    Status: Image is up to date for 
    container-registry.oracle.com/kubernetes/kube-proxy:v1.12.7
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubeadm.x86_64 0:1.9.11-2.1.1.el7 will be updated
    ---> Package kubeadm.x86_64 0:1.10.5-2.0.2.el7 will be an update
    ---> Package kubectl.x86_64 0:1.9.11-2.1.1.el7 will be updated
    ---> Package kubectl.x86_64 0:1.10.5-2.0.2.el7 will be an update
    ---> Package kubelet.x86_64 0:1.9.11-2.1.1.el7 will be updated
    ---> Package kubelet.x86_64 0:1.10.5-2.0.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================
     Package   Arch          Version          Repository       Size
    ================================================================
    Updating:
     kubeadm   x86_64     1.10.5-2.0.2.el7    ol7_addons       17 M
     kubectl   x86_64     1.10.5-2.0.2.el7    ol7_addons       7.6 M
     kubelet   x86_64     1.10.5-2.0.2.el7    ol7_addons       17 M
    
    Transaction Summary
    =================================================================
    Upgrade  3 Packages
    
    Total download size: 42 M
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    --------------------------------------------------------------------------------
    Total                                               49 MB/s |  42 MB  00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubelet-1.10.5-2.0.2.el7.x86_64                              1/6
      Updating   : kubectl-1.10.5-2.0.2.el7.x86_64                              2/6
      Updating   : kubeadm-1.10.5-2.0.2.el7.x86_64                              3/6
      Cleanup    : kubeadm-1.9.11-2.1.1.el7.x86_64                              4/6
      Cleanup    : kubectl-1.9.11-2.1.1.el7.x86_64                              5/6
      Cleanup    : kubelet-1.9.11-2.1.1.el7.x86_64                              6/6
      Verifying  : kubectl-1.10.5-2.0.2.el7.x86_64                              1/6
      Verifying  : kubelet-1.10.5-2.0.2.el7.x86_64                              2/6
      Verifying  : kubeadm-1.10.5-2.0.2.el7.x86_64                              3/6
      Verifying  : kubectl-1.9.11-2.1.1.el7.x86_64                              4/6
      Verifying  : kubeadm-1.9.11-2.1.1.el7.x86_64                              5/6
      Verifying  : kubelet-1.9.11-2.1.1.el7.x86_64                              6/6
    
    Updated:
      kubeadm.x86_64 0:1.10.5-2.0.2.el7      kubectl.x86_64 0:1.10.5-2.0.2.el7
      kubelet.x86_64 0:1.10.5-2.0.2.el7
    
    Complete!
    Upgrading pre-requisite
    Checking whether api-server is using image lower than 1.9
    Upgrading pre-requisite done 
    Checking cluster health ...  
    ....
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration options from a file: 
    /var/run/kubeadm/kubeadm-cfg
    [upgrade/version] You have chosen to change the cluster version to "v1.10.5"
    [upgrade/versions] Cluster version: v1.9.11+2.1.1.el7
    [upgrade/versions] kubeadm version: v1.10.5+2.0.2.el7
    [upgrade/prepull] Will prepull images for 
    components [kube-apiserver kube-controller-manager kube-scheduler]
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.5"...
    Static pod: kube-apiserver-master.example.com hash: 
    3b6cc643053ae0164a687e53fbcf4eb7
    Static pod: kube-controller-manager-master.example.com hash: 
    78b0313a30bbf65cf169686001a2c093
    Static pod: kube-scheduler-master.example.com hash: 
    8fa7d39f0a3246bb39baf3712702214a
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-master.example.com hash: 196164156fbbd2ef7daaf8c6a0ec6379
    [etcd] Wrote Static Pod manifest for a local etcd instance 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/etcd.yaml"
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [localhost] 
    and IPs [127.0.0.1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS 
    names [master.example.com] and IPs [19.0.2.10]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/etcd.yaml"
    [upgrade/staticpods] Not waiting for pod-hash change for component "etcd"
    [upgrade/etcd] Waiting for etcd to become available
    [util/etcd] Waiting 30s for initial delay
    [util/etcd] Attempting to see if all cluster endpoints are available 1/10
    [util/etcd] Attempt failed with error: dial tcp [::1]:2379: 
    getsockopt: connection refused
    [util/etcd] Waiting 15s until next retry
    [util/etcd] Attempting to see if all cluster endpoints are available 2/10
    [util/etcd] Attempt failed with error: dial tcp [::1]:2379: 
    getsockopt: connection refused
    [util/etcd] Waiting 15s until next retry
    [util/etcd] Attempting to see if all cluster endpoints are available 3/10
    [upgrade/staticpods] Writing new Static Pod manifests 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/kube-scheduler.yaml"
    [upgrade/staticpods] The etcd manifest will be restored if 
    component "kube-apiserver" fails to upgrade
    [certificates] Using the existing etcd/ca certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-apiserver-master.example.com hash: 3b6cc643053ae0164a687e53fbcf4eb7
    Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest 
    to "/etc/kubernetes/manifests/kube-controller-manager.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-controller-manager-master.example.com hash: 
    78b0313a30bbf65cf169686001a2c093
    Static pod: kube-controller-manager-master.example.com hash: 
    3fffc11595801c3777e45ff96ce75444
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-scheduler-master.example.com hash: 8fa7d39f0a3246bb39baf3712702214a
    Static pod: kube-scheduler-master.example.com hash: c191e26d0faa00981a2f0d6f1f0d7e5f
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" 
    in the "kube-system" Namespace
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs 
    in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller 
    automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation 
    for all node client certificates in the cluster
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.10.5". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, 
    please proceed with upgrading your kubelets in turn.
    Upgrading kubeadm to 1.11.3 version
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubeadm.x86_64 0:1.10.5-2.0.2.el7 will be updated
    ---> Package kubeadm.x86_64 0:1.11.3-2.0.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    
    ================================================================
     Package   Arch          Version          Repository       Size
    ================================================================
    Updating:
     kubeadm   x86_64    1.11.3-2.0.2.el7     ol7_addons      7.6 M
    
    Transaction Summary
    ================================================================
    Upgrade  1 Package
    
    Total download size: 7.6 M   
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubeadm-1.11.3-2.0.2.el7.x86_64                              1/2
      Cleanup    : kubeadm-1.10.5-2.0.2.el7.x86_64                              2/2
      Verifying  : kubeadm-1.11.3-2.0.2.el7.x86_64                              1/2
      Verifying  : kubeadm-1.10.5-2.0.2.el7.x86_64                              2/2
    
    Updated:
      kubeadm.x86_64 0:1.11.3-2.0.2.el7
    
    Complete!
    Upgrading pre-requisite
    Checking whether api-server is using image lower than 1.9
    Upgrading pre-requisite done 
    Checking cluster health ...  
    ....................................................................................
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg
    [upgrade/apply] Respecting the --cri-socket flag that 
    is set with higher priority than the config file.
    [upgrade/version] You have chosen to change the cluster version to "v1.11.3"
    [upgrade/versions] Cluster version: v1.10.5+2.0.2.el7
    [upgrade/versions] kubeadm version: v1.11.3+2.0.2.el7
    [upgrade/version] Found 1 potential version compatibility errors 
    but skipping since the --force flag is set:
    
            - There are kubelets in this cluster that are too old 
              that have these versions [v1.9.11+2.1.1.el7]
    [upgrade/prepull] Will prepull images 
    for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.3"...
    Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad
    Static pod: kube-controller-manager-master.example.com hash: 
    3fffc11595801c3777e45ff96ce75444
    Static pod: kube-scheduler-master.example.com hash: c191e26d0faa00981a2f0d6f1f0d7e5f
    Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5
    [etcd] Wrote Static Pod manifest for a local etcd instance 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/etcd.yaml"
    [certificates] Using the existing etcd/ca certificate and key.
    [certificates] Using the existing etcd/server certificate and key.
    [certificates] Using the existing etcd/peer certificate and key.
    [certificates] Using the existing etcd/healthcheck-client certificate and key.
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5
    Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5
    Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5
    Static pod: etcd-master.example.com hash: 560672e3081cf0ff6a30ac1f943240eb
    [apiclient] Found 1 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [util/etcd] Waiting 0s for initial delay
    [util/etcd] Attempting to see if all cluster endpoints are available 1/10
    [upgrade/staticpods] Writing new Static Pod manifests 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537"
    [controlplane] wrote Static Pod manifest for component kube-apiserver 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/kube-scheduler.yaml"
    [certificates] Using the existing etcd/ca certificate and key.
    [certificates] Using the existing apiserver-etcd-client certificate and key.
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad
    Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad
    Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad
    Static pod: kube-apiserver-master.example.com hash: 9eefcb38114108702fad91f927799c04
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest 
    to "/etc/kubernetes/manifests/kube-controller-manager.yaml" 
    and backed up old manifest to 
    "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/
    kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-controller-manager-master.example.com hash: 
    3fffc11595801c3777e45ff96ce75444
    Static pod: kube-controller-manager-master.example.com hash: 
    32b0f7233137a5c4879bda1067f36f8a
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-scheduler-master.example.com hash: c191e26d0faa00981a2f0d6f1f0d7e5f
    Static pod: kube-scheduler-master.example.com hash: b589c7f85a86056631f252695c20358b
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] storing the configuration used in 
    ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system 
    with the configuration for the kubelets in the cluster
    [kubelet] Downloading configuration for the kubelet from 
    the "kubelet-config-1.11" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet] Writing kubelet environment file with flags 
    to file "/var/lib/kubelet/kubeadm-flags.env"
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" 
    to the Node API object "master.example.com" as an annotation
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens 
    to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller 
    automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules 
    to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.3". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with 
    upgrading your kubelets if you haven't already done so.
    Upgrading kubelet and kubectl now ...
    Checking kubelet and kubectl RPM ...
    [INFO] yum install -y kubelet-1.11.3-2.0.2.el7.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubelet.x86_64 0:1.10.5-2.0.2.el7 will be updated
    ---> Package kubelet.x86_64 0:1.11.3-2.0.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    ===================================================================================
     Package        Arch          Version                   Repository           Size
    ===================================================================================
    Updating:
     kubelet        x86_64      1.11.3-2.0.2.el7             ol7_addons           18 M
    
    Transaction Summary
    ===================================================================================
    Upgrade  1 Package
    
    Total download size: 18 M
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubelet-1.11.3-2.0.2.el7.x86_64.rpm                           |  18 MB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubelet-1.11.3-2.0.2.el7.x86_64                1/2
      Cleanup    : kubelet-1.10.5-2.0.2.el7.x86_64                2/2
      Verifying  : kubelet-1.11.3-2.0.2.el7.x86_64                1/2
      Verifying  : kubelet-1.10.5-2.0.2.el7.x86_64                2/2
    
    Updated:
      kubelet.x86_64 0:1.11.3-2.0.2.el7
    
    Complete!
    [INFO] yum install -y kubectl-1.11.3-2.0.2.el7.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubectl.x86_64 0:1.10.5-2.0.2.el7 will be updated
    ---> Package kubectl.x86_64 0:1.11.3-2.0.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ===================================================================================
     Package        Arch          Version                   Repository           Size
    ===================================================================================
    Updating:
     kubectl       x86_64     1.11.3-2.0.2.el7               ol7_addons          7.6 M
    
    Transaction Summary
    ===================================================================================
    Upgrade  1 Package
    
    Total download size: 7.6 M   
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubectl-1.11.3-2.0.2.el7.x86_64.rpm                          | 7.6 MB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubectl-1.11.3-2.0.2.el7.x86_64                1/2
      Cleanup    : kubectl-1.10.5-2.0.2.el7.x86_64                2/2
      Verifying  : kubectl-1.11.3-2.0.2.el7.x86_64                1/2
      Verifying  : kubectl-1.10.5-2.0.2.el7.x86_64                2/2
    
    Updated:
      kubectl.x86_64 0:1.11.3-2.0.2.el7
    
    Complete!
    Upgrading kubelet and kubectl to 1.11.3 version
    Loaded plugins: langpacks, ulninfo
    Package kubelet-1.11.3-2.0.2.el7.x86_64 already installed and latest version
    Package kubectl-1.11.3-2.0.2.el7.x86_64 already installed and latest version
    Nothing to do
    Upgrading kubeadm to 1.12.7-1.1.2.el7 version
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubeadm.x86_64 0:1.11.3-2.0.2.el7 will be updated
    ---> Package kubeadm.x86_64 0:1.12.7-1.1.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    
    ===================================================================================
     Package        Arch          Version                   Repository           Size
    ===================================================================================
    Updating:
     kubeadm        x86_64      1.12.7-1.1.2.el7            ol7_addons          7.3 M
    
    Transaction Summary
    ===================================================================================
    Upgrade  1 Package
    
    Total download size: 7.3 M   
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubeadm-1.12.7-1.1.2.el7.x86_64.rpm                               | 7.3 MB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubeadm-1.12.7-1.1.2.el7.x86_64                    1/2
      Cleanup    : kubeadm-1.11.3-2.0.2.el7.x86_64                    2/2
      Verifying  : kubeadm-1.12.7-1.1.2.el7.x86_64                    1/2
      Verifying  : kubeadm-1.11.3-2.0.2.el7.x86_64                    2/2
    
    Updated:
      kubeadm.x86_64 0:1.12.7-1.1.2.el7
    
    Complete!
    Upgrading pre-requisite
    Checking whether api-server is using image lower than 1.9
    Upgrading pre-requisite done 
    Checking cluster health ...  
    ...........................................................................
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg
    [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority 
    than the config file.
    [upgrade/version] You have chosen to change the cluster version to "v1.12.5"
    [upgrade/versions] Cluster version: v1.11.3+2.0.2.el7
    [upgrade/versions] kubeadm version: v1.12.7+1.1.2.el7
    [upgrade/version] Found 1 potential version compatibility errors 
    but skipping since the --force flag is set:
    
            - There are kubelets in this cluster that are too old that have 
              these versions [v1.9.11+2.1.1.el7]
    [upgrade/prepull] Will prepull images for 
    components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.5"...
    Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7
    Static pod: kube-controller-manager-master.example.com hash: 
    0221796c266be3d6f237a7256da5fa36
    Static pod: kube-scheduler-master.example.com hash: e0549b9041665ae07cfacdaf337ab1e0
    Static pod: etcd-master.example.com hash: 7a68f8a24bf031e2027cc6d528ce6efe
    [etcd] Wrote Static Pod manifest for a local etcd instance 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/etcd.yaml"
    [upgrade/staticpods] Moved new manifest 
    to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-34-07/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending 
    on the component/version gap (timeout 5m0s)
    Static pod: etcd-master.example.com hash: 7a68f8a24bf031e2027cc6d528ce6efe
    Static pod: etcd-master.example.com hash: 7a68f8a24bf031e2027cc6d528ce6efe
    Static pod: etcd-master.example.com hash: 7eab06d7296bf87cff84cb56f26d13e6
    [apiclient] Found 1 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [util/etcd] Waiting 0s for initial delay
    [util/etcd] Attempting to see if all cluster endpoints are available 1/10
    [upgrade/staticpods] Writing new Static Pod manifests 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710"
    [controlplane] wrote Static Pod manifest for component kube-apiserver 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler 
    to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/kube-scheduler.yaml"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" 
    and backed up old manifest 
    to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-34-07/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on 
    the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7
    Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7
    Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7
    Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7
    Static pod: kube-apiserver-master.example.com hash: 5c6ceef93d0a8c04d331d6ea6da4b6a7
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] storing the configuration used in ConfigMap 
    "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in 
    namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet] Downloading configuration for the kubelet from 
    the "kubelet-config-1.12" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to 
    the Node API object "master.example.com" as an annotation
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to 
    post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller 
    automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for 
    all node client certificates in the cluster
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.5". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, 
    please proceed with upgrading your kubelets if you haven't already done so.
    Upgrading kubelet and kubectl now ...
    Checking kubelet and kubectl RPM ...
    [INFO] yum install -y kubelet-1.12.7-1.1.2.el7.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubelet.x86_64 0:1.11.3-2.0.2.el7 will be updated
    ---> Package kubelet.x86_64 0:1.12.7-1.1.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    
    ===================================================================================
     Package        Arch          Version                   Repository           Size
    ===================================================================================
    Updating:
     kubelet       x86_64      1.12.7-1.1.2.el7             ol7_addons           19 M
    
    Transaction Summary
    ===================================================================================
    Upgrade  1 Package
    
    Total download size: 19 M
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubelet-1.12.7-1.1.2.el7.x86_64.rpm                                        
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubelet-1.12.7-1.1.2.el7.x86_64                        1/2
      Cleanup    : kubelet-1.11.3-2.0.2.el7.x86_64                        2/2
      Verifying  : kubelet-1.12.7-1.1.2.el7.x86_64                        1/2
      Verifying  : kubelet-1.11.3-2.0.2.el7.x86_64                        2/2
    
    Updated:
      kubelet.x86_64 0:1.12.7-1.1.2.el7
    
    Complete!
    [INFO] yum install -y kubectl-1.12.7-1.1.2.el7.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubectl.x86_64 0:1.11.3-2.0.2.el7 will be updated
    ---> Package kubectl.x86_64 0:1.12.7-1.1.2.el7 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    
    ===================================================================================
     Package        Arch          Version                   Repository           Size
    ===================================================================================
    Updating:
     kubectl        x86_64      1.12.7-1.1.2.el7            ol7_addons           7.7 M
    
    Transaction Summary
    ===================================================================================
    Upgrade  1 Package
    
    Total download size: 7.7 M   
    Downloading packages:
    Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
    kubectl-1.12.7-1.1.2.el7.x86_64.rpm                        | 7.7 MB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded   
    Running transaction
      Updating   : kubectl-1.12.7-1.1.2.el7.x86_64              1/2
      Cleanup    : kubectl-1.11.3-2.0.2.el7.x86_64              2/2
      Verifying  : kubectl-1.12.7-1.1.2.el7.x86_64              1/2
      Verifying  : kubectl-1.11.3-2.0.2.el7.x86_64              2/2
    
    Updated:
      kubectl.x86_64 0:1.12.7-1.1.2.el7
    
    Complete!
    [INSTALLING DASHBOARD NOW]   
    
    Installing kubernetes-dashboard ...
    
    Kubernetes version: v1.12.7 and dashboard yaml file: 
    /usr/local/share/kubeadm/kubernetes-dashboard-self-certs.yaml
    The connection to the server 10.147.25.195:6443 was refused - 
    did you specify the right host or port?
    Restarting kubectl-proxy.service ...
    [INFO] Upgrading master node done successfully
    [INFO] Flannel is not upgraded yet. Please run 
    'kubeadm-upgrade.sh upgrade --flannel' to upgrade flannel
    [INFO] Dashboard is not upgraded yet. Please run 
    'kubeadm-upgrade.sh upgrade --dashboard' to upgrade dashboard
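
    When the script completes, you can confirm that the master node reports the new Kubernetes version, for example:

    # kubectl get nodes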
    
  6. The flannel service that Oracle Linux Container Services for use with Kubernetes 1.1.12 depends on is not upgraded automatically by the upgrade script, so you must upgrade it separately, for example:

    # kubeadm-upgrade.sh upgrade --flannel
    Trying to pull repository container-registry.oracle.com/kubernetes/flannel ... 
    v0.10.0: Pulling from container-registry.oracle.com/kubernetes/flannel
    Digest: sha256:da1f7af813d6b6123c9a240b3e7f9b58bc7b50d9939148aa08c7ba8253e0c312
    Status: Image is up to date for container-registry.oracle.com/kubernetes/flannel:v0.10.0
    kube-flannel-ds-85clc kube-flannel-ds-x9grm
    clusterrole.rbac.authorization.k8s.io "flannel" deleted
    clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
    serviceaccount "flannel" deleted
    configmap "kube-flannel-cfg" deleted
    daemonset.extensions "kube-flannel-ds" deleted
    pod "kube-flannel-ds-85clc" deleted
    pod "kube-flannel-ds-x9grm" deleted
    NAME                                        READY   STATUS    RESTARTS   AGE
    etcd-master.example.com                      1/1     Running   0          11m
    kube-apiserver-master.example.com            1/1     Running   0          11m
    kube-controller-manager-master.example.com   1/1     Running   0          11m
    kube-dns-554d547449-hhl6p                    3/3     Running   0          12m
    kube-proxy-bc7ht                             1/1     Running   0          12m
    kube-proxy-jd8gh                             1/1     Running   0          12m
    kube-scheduler-master.example.com            1/1     Running   0          11m
    kubernetes-dashboard-64c8c8b9dd-c9wfl        1/1     Running   1          41m
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds created
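
    You can confirm that the flannel pods were recreated, for example (this assumes the DaemonSet runs in the kube-system namespace):

    # kubectl get pods -n kube-system | grep kube-flannel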

  7. The Oracle Linux Container Services for use with Kubernetes dashboard service must also be upgraded separately to 1.1.12:

    # kubeadm-upgrade.sh upgrade --dashboard
    Upgrading dashboard
    secret "kubernetes-dashboard-certs" deleted
    serviceaccount "kubernetes-dashboard" deleted
    role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
    rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
    deployment.apps "kubernetes-dashboard" deleted
    service "kubernetes-dashboard" deleted
    
    Installing kubernetes-dashboard ...
    
    Kubernetes version: v1.12.7 and dashboard yaml file: 
    /usr/local/share/kubeadm/kubernetes-dashboard-self-certs.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    Restarting kubectl-proxy.service ...
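
    You can confirm that the new dashboard pod is running, for example (this assumes the standard k8s-app=kubernetes-dashboard label):

    # kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
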
  8. If the master node upgrade fails, roll back as follows:

    # kubeadm-upgrade.sh restore /backups/master-backup-v1.9.11-0-1546953894.tar
    -- Running upgrade script---
    Restoring the cluster
    Loaded plugins: langpacks, ulninfo
    Nothing to do
    Checking sha256sum of the backup files ...
    /var/run/kubeadm/backup/etcd-backup-1546953894.tar: OK
    /var/run/kubeadm/backup/k8s-master-0-1546953894.tar: OK
    Restoring backup from /backups/master-backup-v1.9.11-0-1546953894.tar ...
    Using 3.1.11
    etcd cluster is healthy ...
    Cleaning up etcd container ...
    ab9e7a31a721c2b9690047ac3445beeb2c518dd60da81da2a396f250f089e82e
    ab9e7a31a721c2b9690047ac3445beeb2c518dd60da81da2a396f250f089e82e
    Restore successful ...
    You can restart your cluster now by doing: 
    # kubeadm-setup.sh restart
    Restore successful :)
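
    As the restore output indicates, restart the cluster once the restore completes:

    # kubeadm-setup.sh restart
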
  9. If the upgrade script completes successfully, create a fresh backup on your new Oracle Linux Container Services for use with Kubernetes 1.1.12 master node by using kubeadm-setup.sh backup, as shown below.

    See Section 4.3, “Cluster Backup and Restore”.
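
    For example, following the same pattern as the pre-upgrade backup earlier in this procedure:

    # kubeadm-setup.sh stop
    # kubeadm-setup.sh backup /backups
    # kubeadm-setup.sh restart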

You can read the full upgrade log in /var/log/kubeadm-upgrade. After completing the master node upgrade, you can upgrade the packages for Oracle Linux Container Services for use with Kubernetes on each worker node.
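
For example, to review the captured log files:

    # ls -l /var/log/kubeadm-upgrade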