The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Chapter 2 Installing Oracle Linux Container Services for use with Kubernetes
- 2.1 Overview
- 2.2 Requirements
- 2.2.1 Yum or ULN Channel Subscription
- 2.2.2 Setting up UEK R5
- 2.2.3 Resource Requirements
- 2.2.4 Docker Engine Requirements
- 2.2.5 Oracle Container Registry Requirements
- 2.2.6 Network Time Service Requirements
- 2.2.7 Firewall and iptables Requirements
- 2.2.8 Network Requirements
- 2.2.9 SELinux Requirements
- 2.2.10 Requirements to Use Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure
- 2.3 Setting Up the Master Node
- 2.4 Setting Up a Worker Node
- 2.5 Upgrading 1.1.9 to 1.1.12
- 2.6 Updating to Errata Releases
This chapter describes the steps required to install Kubernetes on an Oracle Linux 7 host, and to build a Kubernetes cluster.
2.1 Overview
Kubernetes can be deployed in a variety of ways depending on requirements and on the tools that you have at hand. The kubeadm package provides the kubeadm utility, a tool designed to make the deployment of a Kubernetes cluster simple. Many users may find that using this tool directly, along with the upstream documentation, provides the maximum configuration flexibility.

Oracle provides the kubeadm-setup.sh script in the kubeadm package to help new users install and configure a base deployment of Kubernetes with greater ease, regardless of whether it is hosted on bare metal, on a virtual machine, or in the cloud. The script checks that basic package requirements are in place, sets proxy and firewall requirements, configures networking, and initializes a cluster configuration for the Kubernetes environment. The script uses the kubeadm utility, but handles many additional configuration steps that help new users get running with minimal effort.
The kubeadm utility automatically taints the master node so that no other workloads or containers can run on this node. This helps to ensure that the master node is never placed under any unnecessary load and that backing up and restoring the master node for the cluster is simplified.
The instructions provided here assume that you are new to Kubernetes and are using the provided kubeadm-setup.sh script to deploy your cluster. This script is developed and tested at Oracle, and deployment using this script is fully supported. Alternate configurations and deployment mechanisms are untested by Oracle.
2.2 Requirements
Kubernetes is a clustered environment that generally functions with more than one node in the cluster. It is possible to run a single node cluster, but this defeats the point of having a cluster in the first place. Therefore, your environment should consist of two or more systems where Kubernetes is installed.
The following sections describe various other requirements that must be met to install and configure Kubernetes on an Oracle Linux 7 system.
Oracle Linux Container Services for use with Kubernetes 1.1.12 requires that you configure the system to use the Unbreakable Enterprise Kernel Release 5 (UEK R5) or later and boot the system with this kernel.
If you are still using the Unbreakable Enterprise Kernel Release 4 (UEK R4) and have a pre-existing cluster based on Oracle Linux Container Services for use with Kubernetes 1.1.9, this is the last supported release available for your environment. It is strongly recommended that you upgrade your system to use the Unbreakable Enterprise Kernel Release 5 (UEK R5) and boot the system with this kernel.
2.2.1 Yum or ULN Channel Subscription
To install all of the required packages to use Kubernetes, you must ensure that you are subscribed to the correct yum repositories or Unbreakable Linux Network (ULN) channels.
If your systems are registered with ULN, enable the ol7_x86_64_addons channel.

If you use the Oracle Linux yum server, enable the ol7_addons repository on each system in your deployment. You can do this easily using yum-config-manager:

# yum-config-manager --enable ol7_addons

For more information on the ol7_x86_64_addons channel, please see Oracle® Linux: Unbreakable Linux Network User's Guide for Oracle Linux 6 and Oracle Linux 7.
Oracle does not support Kubernetes on systems where the ol7_preview, ol7_developer or ol7_developer_EPEL repositories are enabled, or where software from these repositories is currently installed on the systems where Kubernetes runs. Even if you follow the instructions in this document, you may render your platform unsupported if these repositories or channels are enabled or software from these channels or repositories is installed on your system.
2.2.2 Setting up UEK R5
Oracle Linux Container Services for use with Kubernetes 1.1.12 and later versions require that you configure the system to use the Unbreakable Enterprise Kernel Release 5 (UEK R5) and boot the system with this kernel. If you are using either UEK R4 or the Red Hat Compatible Kernel (RHCK), you must configure Yum to allow you to install UEK R5.
- If your system is registered with the Unbreakable Linux Network (ULN), disable access to the ol7_x86_64_UEKR4 channel and enable access to the ol7_x86_64_UEKR5 channel.

  If you use the Oracle Linux yum server, disable the ol7_UEKR4 repository and enable the ol7_UEKR5 repository. You can do this easily using yum-config-manager, if you have the yum-utils package installed:

  # yum-config-manager --disable ol7_UEKR4
  # yum-config-manager --enable ol7_UEKR5
- Run the following command to upgrade the system to UEK R5:

  # yum update

  For information on how to make UEK R5 the default boot kernel, see Oracle® Linux 7: Administrator's Guide.
- Reboot the system, selecting the UEK R5 kernel if this is not the default boot kernel:

  # systemctl reboot
2.2.3 Resource Requirements
Each node in your cluster requires at least 2 GB of RAM and 2 or more CPUs to facilitate the use of kubeadm and any further applications that are provisioned using kubectl.

Also ensure that each node has a unique hostname, MAC address and product UUID, as Kubernetes uses this information to identify and track each node in the cluster. You can verify the product UUID on each host with:

# dmidecode -s system-uuid
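The identifiers that must be unique can be gathered in one pass and compared across nodes. The following is a minimal sketch, assuming a Linux host with sysfs available; reading the product UUID usually requires root, so the sketch falls back to a notice rather than failing:

```shell
# Gather the identifiers Kubernetes uses to distinguish nodes; run this on
# each node and compare the output across the cluster.
uname -n                                             # hostname
cat /sys/class/net/*/address 2>/dev/null | sort -u   # MAC addresses
cat /sys/class/dmi/id/product_uuid 2>/dev/null \
    || dmidecode -s system-uuid 2>/dev/null \
    || echo "product UUID unavailable (requires root)"
```

If any two nodes report the same value, correct this before joining them to the cluster.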
A storage volume with at least 5 GB free space must be mounted at /var/lib/kubelet on each node. For the underlying Docker engine, an additional volume with at least 5 GB free space must be mounted on each node at /var/lib/docker.
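These resource requirements can be checked up front. The following is a rough pre-flight sketch, assuming a Linux host; the thresholds come from the text above, and df reports the free space of whichever filesystem currently backs each path (or "unknown" if the path does not exist yet):

```shell
#!/bin/sh
# Report CPU count, total RAM, and free space for the two volumes named above.
# Thresholds (2 CPUs, 2 GB RAM, 5 GB per volume) are taken from the text.
cpus=$(nproc)
mem_mb=$(( $(awk '/^MemTotal:/ {print $2}' /proc/meminfo) / 1024 ))
echo "CPUs: $cpus (need >= 2)"
echo "RAM:  ${mem_mb} MB (need >= 2048)"
for dir in /var/lib/kubelet /var/lib/docker; do
    # df -P prints the available space in 1 KB blocks on the second line.
    avail_kb=$(df -P "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
    echo "$dir: ${avail_kb:-unknown} KB free (need >= 5242880)"
done
```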
2.2.4 Docker Engine Requirements
Kubernetes is used to manage containers running on a containerization platform deployed on several systems. On Oracle Linux, Kubernetes is currently only supported when used in conjunction with the Docker containerization platform. Therefore, each system in the deployment must have the Docker engine installed and ready to run. Support of Oracle Linux Container Services for use with Kubernetes is limited to usage with the latest Oracle Container Runtime for Docker version available in the ol7_addons repository on the Oracle Linux yum server and in the ol7_x86_64_addons channel on ULN.
Please note that if you enable the ol7_preview repository, you may install a preview version of Oracle Container Runtime for Docker and your installation can no longer be supported by Oracle. If you have already installed a version of Docker from the ol7_preview repository, you should disable the repository and uninstall this version before proceeding with the installation.
Install the Docker engine on all nodes in the cluster:
# yum install docker-engine
Enable the Docker service in systemd so that it starts on subsequent reboots, and start the service before running the kubeadm-setup.sh script:

# systemctl enable docker
# systemctl start docker
See Oracle® Linux: Oracle Container Runtime for Docker User's Guide for more information on installing and running the Docker engine.
2.2.5 Oracle Container Registry Requirements
The images that are deployed by the kubeadm-setup.sh script are hosted on the Oracle Container Registry. For the script to be able to install the required components, you must perform the following steps:
- Log in to the Oracle Container Registry website at https://container-registry.oracle.com using your Single Sign-On credentials.

- Use the web interface to navigate to the Container Services business area and accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. You are able to accept a global agreement that applies to all of the existing repositories within this business area. If newer repositories are added to this business area in the future, you may need to accept these terms again before performing upgrades.

- Ensure that each of the systems that are used as nodes within the cluster is able to access https://container-registry.oracle.com, and use the docker login command to authenticate against the Oracle Container Registry using the same credentials that you used to log into the web interface:

  # docker login container-registry.oracle.com

  The command prompts you for your user name and password.
Detailed information about the Oracle Container Registry is available in Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
2.2.5.1 Using an Oracle Container Registry Mirror
It is also possible to use any of the Oracle Container Registry mirror servers to obtain the correct images to set up Oracle Linux Container Services for use with Kubernetes. The Oracle Container Registry mirror servers are located within the same data centers used for Oracle Cloud Infrastructure. More information about the Oracle Container Registry mirror servers is available in Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
To use an alternate Oracle Container Registry mirror server:

- You must still log in to the Oracle Container Registry website at https://container-registry.oracle.com using your Single Sign-On credentials and use the web interface to accept the Oracle Standard Terms and Restrictions.

- On each node, use the docker login command to authenticate against the Oracle Container Registry mirror server using the same credentials that you used to log into the web interface:

  # docker login container-registry-phx.oracle.com

  The command prompts you for your user name and password.

- After you have logged in, set the environment variable to use the correct registry mirror when you deploy Kubernetes:

  # export KUBE_REPO_PREFIX=container-registry-phx.oracle.com/kubernetes
  # echo 'export KUBE_REPO_PREFIX=container-registry-phx.oracle.com/kubernetes' >> ~/.bashrc

  If you are using Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure, the kubeadm-setup.sh script automatically detects the most appropriate mirror server to use and sets this environment variable for you, so that you do not have to perform this step. If you manually set the KUBE_REPO_PREFIX environment variable on the command line, the kubeadm-setup.sh script honors the variable and does not attempt to detect which mirror server you should be using.
2.2.5.2 Setting Up an Optional Local Registry
If the systems that you are using for your Kubernetes cluster nodes do not have direct access to the Internet and are unable to connect directly to the Oracle Container Registry, you can set up a local Docker registry to perform this function. The kubeadm-setup.sh script provides an option to change the registry that you use to obtain these images. Instructions to set up a local Docker registry are provided in Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
When you have set up a local Docker registry, you must pull the images required to run Oracle Linux Container Services for use with Kubernetes, tag these images, and then push them to your local registry. The images must be tagged identically to the way that they are tagged in the Oracle Container Registry. The kubeadm-setup.sh script matches version numbers during the setup process and cannot successfully complete many operations if it cannot find particular versions of images. To assist with this process, Oracle Linux Container Services for use with Kubernetes provides the kubeadm-registry.sh script in the kubeadm package.
To use the kubeadm-registry.sh script to automatically pull images from the Oracle Container Registry, tag them appropriately and push them to your local registry:
- If you are using the Oracle Container Registry to obtain images, log in following the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information.

- Run the kubeadm-registry.sh script with the required options:

  # kubeadm-registry.sh --to host.example.com:5000

  Substitute host.example.com:5000 with the resolvable domain name and port by which your local Docker registry is available.

  You may optionally use the --from option to specify an alternate registry to pull the images from. You may also use the --version option to specify the version of the Kubernetes images that you intend to host. For example:

  # kubeadm-registry.sh --to host.example.com:5000 \
      --from container-registry-phx.oracle.com/kubernetes \
      --version 1.12.5
If you are upgrading your environment and you intend to use a local registry, you must make sure that you have the most recent version of the images required to run Oracle Linux Container Services for use with Kubernetes. You can use the kubeadm-registry.sh script to pull the correct images and to update your local registry before running the upgrade on the master node.
After your local Docker registry is installed and configured and the required images have been imported, you must set the environment variable that controls which registry server the kubeadm-setup.sh script uses. On each of the systems where you intend to run the kubeadm-setup.sh tool, run the following commands:

# export KUBE_REPO_PREFIX="local-registry.example.com:5000/kubernetes"
# echo 'export KUBE_REPO_PREFIX="local-registry.example.com:5000/kubernetes"' >> ~/.bashrc

Substitute local-registry.example.com with the IP address or resolvable domain name of the host on which your local Docker registry is configured.
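Because the export must survive new logins, one defensive way to persist it is to append to ~/.bashrc only when the line is not already present. This is a sketch; the registry host is a placeholder for your own:

```shell
#!/bin/sh
# Append the KUBE_REPO_PREFIX export to ~/.bashrc only once, so repeated runs
# do not accumulate duplicate lines. The registry host is illustrative.
line='export KUBE_REPO_PREFIX="local-registry.example.com:5000/kubernetes"'
rc="$HOME/.bashrc"
touch "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
# Show the resulting setting
grep 'KUBE_REPO_PREFIX' "$rc"
```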
2.2.6 Network Time Service Requirements
As a clustering environment, Kubernetes requires that system time is synchronized across each node within the cluster. Typically, this can be achieved by installing and configuring an NTP daemon on each node. You can do this in the following way:
- Install the ntp package, if it is not already installed:

  # yum install ntp

- Edit the NTP configuration in /etc/ntp.conf. Your requirements may vary. If you are using DHCP to configure the networking for each node, it is possible to configure NTP servers automatically. If you do not have a locally configured NTP service that your systems can sync to, and your systems have Internet access, you can configure them to use the public pool.ntp.org service. See https://www.ntppool.org/en/.

- Ensure that NTP is enabled to restart at boot and that it is started before you proceed with your Kubernetes installation. For example:

  # systemctl start ntpd
  # systemctl enable ntpd
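If you need a starting point for the configuration step, a minimal /etc/ntp.conf using the public pool servers might look like the following. This is a sketch that assumes Internet access; substitute your local NTP infrastructure where available:

```
# /etc/ntp.conf — minimal example using the public pool.ntp.org servers
driftfile /var/lib/ntp/drift
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# Restrict remote management access by default
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
```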
Note that systems running on Oracle Cloud Infrastructure are configured to use the chronyd time service by default, so there is no requirement to add or configure NTP if you are installing into an Oracle Cloud Infrastructure environment.
For information on configuring a Network Time Service, see Oracle® Linux 7: Administrator's Guide.
2.2.7 Firewall and iptables Requirements
Kubernetes uses iptables to handle many networking and port forwarding rules. Therefore, you must ensure that you do not have any rules set that may interfere with the functioning of Kubernetes. The kubeadm-setup.sh script requires an iptables rule to accept forwarding traffic. If this rule is not set, the script exits and notifies you that you may need to add this iptables rule. A standard Docker installation may create a firewall rule that prevents forwarding, therefore you may need to run:
# iptables -P FORWARD ACCEPT
The kubeadm-setup.sh script checks iptables rules and, where there is a match, instructions are provided on how to modify your iptables configuration to meet any requirements. See Section 4.1, “Kubernetes and iptables Rules” for more information.
If you have a requirement to run a firewall directly on the systems where Kubernetes is deployed, you must ensure that all ports required by Kubernetes are available. For instance, the TCP port 6443 must be accessible on the master node to allow other nodes to access the API Server. All nodes must be able to accept connections from the master node on the TCP port 10250 and traffic should be allowed on the UDP port 8472. All nodes must be able to receive traffic from all other nodes on every port on the network fabric that is used for the Kubernetes pods. The firewall must support masquerading.
Oracle Linux 7 installs and enables firewalld, by default. If you are running firewalld, the kubeadm-setup.sh script notifies you of any rules that you may need to add. In summary, run the following commands on all nodes:
# firewall-cmd --add-masquerade --permanent
# firewall-cmd --add-port=10250/tcp --permanent
# firewall-cmd --add-port=8472/udp --permanent
Additionally, run the following command on the master node:
# firewall-cmd --add-port=6443/tcp --permanent
Use the --permanent option to make these firewall rules persistent across reboots.
Remember to restart the firewall for these rules to take effect:
# systemctl restart firewalld
2.2.8 Network Requirements
The kubeadm-setup.sh script requires that it is able to access the Oracle Container Registry, and possibly other internet resources, to be able to pull any container images that you require. Therefore, unless you intend to set up a local mirror for all of your container image requirements, the systems where you intend to install Kubernetes must either have direct internet access, or must be configured to use a proxy. See Section 4.2, “Using Kubernetes With a Proxy Server” for more information.
The kubeadm-setup.sh script checks whether the br_netfilter module is loaded, and exits if it is not available. This module is required to enable transparent masquerading and to facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster. If you need to check whether it is loaded, run:

# lsmod | grep br_netfilter
Kernel modules are usually loaded as they are needed, and it is unlikely that you would need to load this module manually. However, if necessary, you can load the module manually by running:
# modprobe br_netfilter
# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
Kubernetes requires that packets traversing a network bridge are processed by iptables for filtering and for port forwarding. To achieve this, tunable parameters in the kernel bridge module are automatically set when the kubeadm package is installed, and a sysctl file is created at /etc/sysctl.d/k8s.conf that contains the following lines:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
If you modify this file, or create anything similar yourself, you must run the following command to load the bridge tunable parameters:
# /sbin/sysctl -p /etc/sysctl.d/k8s.conf
The kubeadm-setup.sh script configures a flannel network as the network fabric that is used for communications between Kubernetes pods. This overlay network uses VxLANs to facilitate network connectivity: https://github.com/coreos/flannel
By default, the kubeadm-setup.sh script creates a network in the 10.244.0.0/16 range to host this network. The kubeadm-setup.sh script provides an option to set the network range to an alternate range, if required, during installation. Systems in the Kubernetes deployment must not have any network devices configured for this reserved IP range.
2.2.9 SELinux Requirements
The kubeadm-setup.sh script checks whether SELinux is set to enforcing mode. If enforcing mode is enabled, the script exits with an error requesting that you set SELinux to permissive mode. Setting SELinux to permissive mode allows containers to access the host file system, which is required by pod networks. This requirement exists until SELinux support in the kubelet tool for Kubernetes is improved.
To disable SELinux temporarily, do the following:
# /usr/sbin/setenforce 0
To disable SELinux enforcing mode for subsequent reboots so that Kubernetes continues to run correctly, modify /etc/selinux/config and set the SELINUX variable:

SELINUX=Permissive
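The edit can also be scripted. The sketch below demonstrates the substitution on a temporary copy of the file so that it is safe to run anywhere; on an actual node you would point sed at /etc/selinux/config and also run setenforce 0 for the current boot:

```shell
#!/bin/sh
# Demonstrate the SELINUX= edit on a scratch copy of the config file.
# On a real node, target /etc/selinux/config instead of the temp file.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints SELINUX=permissive
rm -f "$cfg"
```

Note that the pattern ^SELINUX= leaves the SELINUXTYPE line untouched, since that line does not begin with "SELINUX=".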
2.2.10 Requirements to Use Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure
Oracle Linux Container Services for use with Kubernetes is engineered to work on Oracle Cloud Infrastructure. You can use all of the instructions that are provided in this document to install and configure Kubernetes across a group of compute instances. Additional information about configuration steps and usage of Oracle Cloud Infrastructure can be found at https://docs.cloud.oracle.com/iaas/Content/home.htm.
The most important requirement for Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure is that your Virtual Cloud Network (VCN) allows the compute nodes that are used in your Kubernetes deployment to communicate through the required ports. By default, compute nodes are unable to access each other across the VCN until you have configured the Security List with the appropriate ingress rules.
Ingress rules should match the rules that are required in any firewall configuration, as described in Section 2.2.7, “Firewall and iptables Requirements”. Typically, the configuration involves adding the following ingress rules to the default security list for your VCN:
- Allow 6443/TCP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: TCP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 6443

- Allow 10250/TCP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: TCP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 10250

- Allow 8472/UDP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: UDP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 8472
Substitute 10.0.0.0/16 with the range used for the subnet that you created within the VCN for the compute nodes that will participate in the Kubernetes cluster. You may wish to limit the specific IP address range to the range that is used specifically by the cluster components, or you may expand this range, depending on your particular security requirements.
The ingress rules that are described here are the core rules that you need to set up to allow the cluster to function. For each service that you define or intend to use, you might need to define additional rules in the Security List.
When creating compute instances to host Oracle Linux Container Services for use with Kubernetes, all shape types are supported. The environment requires that you use Oracle Linux 7 Update 5 or later, with Unbreakable Enterprise Kernel Release 5 (UEK R5).
A future version of Oracle Linux Container Services for use with Kubernetes will migrate existing single master clusters from KubeDNS to CoreDNS. CoreDNS requires an Oracle Linux 7 Update 5 image or later with the Unbreakable Enterprise Kernel Release 5 (UEK R5).
Existing Oracle Linux Container Services for use with Kubernetes 1.1.9 installations may already run on an Oracle Linux 7 Update 3 image, with Unbreakable Enterprise Kernel Release 4 (UEK R4), but you must upgrade your environment to permit future product upgrades.
2.3 Setting Up the Master Node
Before you begin, ensure you have satisfied the requirements in Section 2.2.5, “Oracle Container Registry Requirements”. Then, on the host that you are configuring as the master node, install the kubeadm package and its dependencies:

# yum install kubeadm kubelet kubectl

As root, run kubeadm-setup.sh up to add the host as a master node:
# kubeadm-setup.sh up
Checking kubelet and kubectl RPM ...
Starting to initialize master node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes...
Trying to pull repository container-registry.oracle.com/kube-proxy ...
v1.12.5: Pulling from container-registry.oracle.com/kube-proxy
Digest: sha256:9f57fd95dc9c5918591930b2316474d10aca262b5c89bba588f45c1b96ba6f8b
Status: Image is up to date for container-registry.oracle.com/kube-proxy:v1.12.5
Checking whether docker can run container ...
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service
to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'up' command ...
Waiting for kubeadm to setup master cluster...
Please wait ...
\ - 80% completed
Waiting for the control plane to become ready ...
...............
100% completed
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
Installing kubernetes-dashboard ...
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Enabling kubectl-proxy.service ...
Starting kubectl-proxy.service ...
[===> PLEASE DO THE FOLLOWING STEPS BELOW: <===]
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node
as root:
export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes && kubeadm-setup.sh join 192.0.2.10:6443 \
--token 8tipwo.tst0nvf7wcaqjcj0 --discovery-token-ca-cert-hash \
sha256:f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346
If you do not specify a network range, the script uses the default network range of 10.244.0.0/16 to configure the internal network used for pod interaction within the cluster. To specify an alternative network range, run the script with the --pod-network-cidr option. For example, you would set the network to use the 10.100.0.0/16 range as follows:
# kubeadm-setup.sh up --pod-network-cidr 10.100.0.0/16
The kubeadm-setup.sh script checks whether the host meets all of the requirements before it sets up the master node. If a requirement is not met, an error message is displayed, along with the recommended fix. You should fix the errors before running the script again.
The systemd service for the kubelet is automatically enabled on the host so that the master node always starts at system boot.
The output of the kubeadm-setup.sh script provides the command for adding worker nodes to the cluster. Take note of this command for later use. The token that is shown in the command is only valid for 24 hours. See Section 2.4, “Setting Up a Worker Node” for more details about tokens.
Preparing to Use Kubernetes as a Regular User
To use the Kubernetes cluster as a regular user, perform the following steps on the master node:
- Create the .kube subdirectory in your home directory:

  $ mkdir -p $HOME/.kube

- Create a copy of the Kubernetes admin.conf file in the .kube directory:

  $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- Change the ownership of the file to match your regular user profile:

  $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

- Export the path to the file for the KUBECONFIG environment variable:

  $ export KUBECONFIG=$HOME/.kube/config

  Note: You cannot use the kubectl command if the path to this file is not set for this environment variable. Remember to export the KUBECONFIG variable for each subsequent login so that the kubectl and kubeadm commands use the correct admin.conf file; otherwise, you might find that these commands do not behave as expected after a reboot or a new login. For example, append the export line to your .bashrc:

  $ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
- Verify that you can use the kubectl command.

  Kubernetes runs many of its services to manage the cluster configuration as Docker containers running as a Kubernetes pod, which can be viewed by running the following command on the master node:

  $ kubectl get pods -n kube-system
  NAME                                         READY   STATUS    RESTARTS   AGE
  coredns-6c77847dcf-77grm                     1/1     Running   0          5m16s
  coredns-6c77847dcf-vtk8k                     1/1     Running   0          5m16s
  etcd-master.example.com                      1/1     Running   0          4m26s
  kube-apiserver-master.example.com            1/1     Running   0          4m46s
  kube-controller-manager-master.example.com   1/1     Running   0          4m31s
  kube-flannel-ds-glwgx                        1/1     Running   0          5m13s
  kube-proxy-tv2mj                             1/1     Running   0          5m16s
  kube-scheduler-master.example.com            1/1     Running   0          4m32s
  kubernetes-dashboard-64458f66b6-q8dzh        1/1     Running   0          5m13s
2.4 Setting Up a Worker Node
Repeat these steps on each host that you want to add to the cluster as a worker node.
Install the kubeadm package and its dependencies:

# yum install kubeadm kubelet kubectl

As root, run the kubeadm-setup.sh join command to add the host as a worker node:
# kubeadm-setup.sh join 192.0.2.10:6443 --token 8tipwo.tst0nvf7wcaqjcj0 \
    --discovery-token-ca-cert-hash \
    sha256:f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346
Checking kubelet and kubectl RPM ...
Starting to initialize worker node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes...
Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy ...
v1.12.5: Pulling from container-registry.oracle.com/kubernetes/kube-proxy
Digest: sha256:9f57fd95dc9c5918591930b2316474d10aca262b5c89bba588f45c1b96ba6f8b
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy:v1.12.5
Checking whether docker can run container ...
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service
to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'join' command ...
[validation] WARNING: kubeadm doesn't fully support multiple API Servers yet
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.0.2.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.0.2.10:6443"
[discovery] Trying to connect to API Server "192.0.2.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.0.2.10:6443"
[discovery] Requesting info from "https://192.0.2.10:6443" again
to validate TLS against the pinned public key
[discovery] Requesting info from "https://192.0.2.10:6443" again
to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid
and TLS certificate validates against pinned roots, will use API Server "192.0.2.10:6443"
[discovery] Successfully established connection with API Server "192.0.2.10:6443"
[discovery] Cluster info signature and contents are valid
and TLS certificate validates against pinned roots, will use API Server "192.0.2.10:6443"
[discovery] Successfully established connection with API Server "192.0.2.10:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap
in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags
to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock"
to the Node API object "worker1.example.com" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Replace the IP address and port, 192.0.2.10:6443, with the IP address and port that are used by the API Server (the master node). Note that the default port is 6443.
Replace the --token value, 8tipwo.tst0nvf7wcaqjcj0, with a valid token for the master node. If you do not have this information, run the following command on the master node to obtain it:
# kubeadm token list
TOKEN                     TTL  EXPIRES                    USAGES                   DESCRIPTION  EXTRA GROUPS
8tipwo.tst0nvf7wcaqjcj0   22h  2018-12-11T03:32:44-08:00  authentication,signing   <none>       system:bootstrappers:kubeadm:default-node-token
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:
# kubeadm token create
e05e12.3c1096c88cc11720
You can explicitly set the expiry period for a token when you create it by using the --ttl option. This option sets the expiration time of the token, relative to the current time. The value is generally set in seconds, but other units can be specified as well. For example, you can set the token to expire 15m (15 minutes) from the current time, or 1h (1 hour) from the current time. A value of 0 means the token never expires, but this value is not recommended.
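Bootstrap tokens use the fixed format of six lowercase alphanumeric characters, a dot, then sixteen more. As a quick sanity check before passing a token to the join command, you can validate its shape with a small shell test. This is only an illustrative sketch; the token value below is the example token from this section.

```shell
#!/bin/sh
# Validate that a bootstrap token matches the expected
# <6 chars>.<16 chars> lowercase alphanumeric format before
# using it in a join command.
TOKEN="8tipwo.tst0nvf7wcaqjcj0"

if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
    echo "token format OK"
else
    echo "token format invalid" >&2
    exit 1
fi
```

A token that fails this check was most likely truncated or mistyped when copied from the master node.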
Replace the --discovery-token-ca-cert-hash value, f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346, with the correct SHA256 CA certificate hash that is used to sign the token certificate for the master node. If you do not have this information, run the following command chain on the master node to obtain it:
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346
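If you want to see how this command chain behaves without access to a master node, you can run the same pipeline against a throwaway self-signed certificate. This is a sketch for illustration only, assuming openssl is installed; on a real master node you would point at /etc/kubernetes/pki/ca.crt as shown above.

```shell
#!/bin/sh
# Demonstrate the discovery-hash pipeline on a throwaway CA
# certificate generated on the fly (hypothetical subject name).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" \
    -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Same pipeline as on the master node: extract the public key,
# DER-encode it, hash it, and strip the openssl output prefix.
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

# A valid discovery hash is always 64 hexadecimal characters.
echo "sha256:$hash"
rm -rf "$tmpdir"
```

The value you pass to --discovery-token-ca-cert-hash is this digest prefixed with sha256:, so checking that the hash is exactly 64 hex characters is a useful sanity check before joining.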
The kubeadm-setup.sh script checks whether the host meets all the requirements before it sets up a worker node. If a requirement is not met, an error message is displayed together with the recommended fix. You should fix the errors before running the script again.
The kubelet systemd service is automatically enabled on the host so that the worker node always starts at boot.
After the kubeadm-setup.sh join command completes, check that the worker node has joined the cluster on the master node:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 1h v1.12.7+1.1.2.el7
worker1.example.com Ready <none> 1h v1.12.7+1.1.2.el7
The output for this command displays a listing of all of the nodes in the cluster and their status.
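If you script this verification, you can filter the STATUS column rather than reading the listing by eye. The following sketch hardcodes the sample output shown above for illustration; on a real master node you would pipe the output of kubectl get nodes --no-headers into the same awk filter.

```shell
#!/bin/sh
# Sketch: confirm that every node in a 'kubectl get nodes' listing
# reports Ready. The sample output below mirrors the example above.
sample_output='master.example.com    Ready   master   1h   v1.12.7+1.1.2.el7
worker1.example.com   Ready   <none>   1h   v1.12.7+1.1.2.el7'

# Print the name of any node whose STATUS column is not "Ready".
not_ready=$(printf '%s\n' "$sample_output" | awk '$2 != "Ready" {print $1}')

if [ -z "$not_ready" ]; then
    echo "all nodes Ready"
else
    echo "not ready: $not_ready" >&2
fi
```

A newly joined node can briefly report NotReady while its networking pods start, so re-run the check after a short wait before treating it as a failure.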
2.5 Upgrading 1.1.9 to 1.1.12
The following instructions are specifically for a major package upgrade from Oracle Linux Container Services for use with Kubernetes 1.1.9 to version 1.1.12.
The upgrade process that is described here only applies for the stated upgrade path on existing hosts.
Oracle does not support upgrading existing clusters between smaller errata releases by using the kubeadm-upgrade.sh script. Instead, you must use the kubeadm-setup.sh script that is described in Section 2.6, “Updating to Errata Releases”.
These instructions work on hosts that are booting from UEK R4; however, it is recommended that hosts currently running UEK R4 are upgraded to UEK R5 to facilitate future upgrades, in which KubeDNS is deprecated.
The upgrade process requires you to first upgrade the master node in your cluster, and then to upgrade each of the worker nodes. The upgrade of the master node is scripted so that the prerequisite checks, validation, and reconfiguration are automated. It is good practice to make a backup file for your cluster before upgrading. This process is described in Section 2.5.1, “Upgrading the Master Node from 1.1.9 to 1.1.12”.
After the master node is upgraded, you can upgrade each worker node, as described in Section 2.5.2, “Upgrading Worker Nodes from 1.1.9 to 1.1.12”.
When the upgrade of the cluster has completed, you must restart or redeploy any applications that were running in the cluster.
Oracle does not support an upgrade from a preview release to a stable and supported release.
Oracle also does not support upgrading existing single master node clusters that are built with the kubeadm-setup.sh script to High Availability clusters. You must build and manage High Availability clusters by using the kubeadm-ha-setup utility.
2.5.1 Upgrading the Master Node from 1.1.9 to 1.1.12
You must upgrade the master node in your cluster before upgrading the worker nodes. Use the kubeadm-upgrade.sh upgrade command on the master node to create the necessary backup files and complete the necessary steps to prepare and upgrade the cluster.
Before you perform any update operations, make a backup file of your cluster at its current version. After you update the kubeadm package, any backup files that you make are not backward compatible: if you revert to an earlier version of Oracle Linux Container Services for use with Kubernetes, the restore operation might fail to successfully load your backup file. See Section 4.3, “Cluster Backup and Restore” for more information.
Do not use backups that are generated by kubeadm-setup to restore from a failed 1.1.9 to 1.1.12 upgrade. The kubeadm-upgrade tool provides its own separate backup and restore mechanism, as described later in this section.
-
Unlike errata upgrades, you do not need to manually update the kubeadm package, but you do need to install the kubeadm-upgrade package:
# yum install kubeadm-upgrade
-
If you are using the Oracle Container Registry to obtain images, log in.
Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the upgrade. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information.
If you configured a local registry, you may need to set the KUBE_REPO_PREFIX environment variable to point to the appropriate registry. You might also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information.
-
Ensure that you open any new firewall ports, as described in Section 2.2.7, “Firewall and iptables Requirements”.
-
Create a pre-upgrade backup file. Unlike the errata release upgrade procedure, the backup file is generated by using kubeadm-upgrade.sh backup. In the event that the upgrade does not complete successfully, the backup can be used to revert your cluster to its configuration prior to the upgrade.
# kubeadm-setup.sh stop
Stopping kubelet now ...
Stopping containers now ...
# kubeadm-upgrade.sh backup /backups
-- Running upgrade script---
Backing up cluster
Creating backup at directory /backups ...
Using 3.1.11
Checking if container-registry.oracle.com/kubernetes/etcd-amd64:3.1.11 is available
dc9ed9408e82dbd9d925c4d660206f9c60dce98c150cb32517284a6ef764f59d /var/run/kubeadm/backup/etcd-backup-1546953894.tar
aa2dad1ba2c2ec486d30fe0a15b29566b257474429d79889472fd79128489ae0 /var/run/kubeadm/backup/k8s-master-0-1546953894.tar
Backup is successfully stored at /backups/master-backup-v1.9.11-0-1546953894.tar ...
You can restart your cluster now by doing:
# kubeadm-setup.sh restart
Storing meta-data to backup file master-backup-v1.9.11-0-1546953894.tar .version-info
Backup creation successful :)
# kubeadm-setup.sh restart
Restarting containers now ...
Detected node is master ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes ...
Trying to pull repository container-registry.oracle.com/kubernetes/pause ...
3.1: Pulling from container-registry.oracle.com/kubernetes/pause
Digest: sha256:802ef89b9eb7e874a76e1cfd79ed990b63b0b84a05cfa09f0293379ac0261b49
Status: Image is up to date for container-registry.oracle.com/kubernetes/pause:3.1
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Restarting kubelet ...
Waiting for node to restart ...
.......+..............
Master node restarted. Complete synchronization between nodes may take a few minutes.
-
Run the kubeadm-upgrade.sh upgrade command as root on the master node.
# kubeadm-upgrade.sh upgrade
-- Running upgrade script---
Number of cpu present in this system 2
Total memory on this system: 7710MB
Space available on the mount point /var/lib/docker: 44GB
Space available on the mount point /var/lib/kubelet: 44GB
kubeadm version 1.9
kubectl version 1.9
kubelet version 1.9
ol7_addons repo enabled
[WARNING] This action will upgrade this node to latest version
[WARNING] The cluster will be upgraded through intermediate versions which are unsupported
[WARNING] You must take backup before upgrading the cluster as upgrade may fail
Please select 1 (continue) or 2 (abort) :
1) continue
2) abort
#? 1
Upgrading master node
Checking access to container-registry.oracle.com/kubernetes for update
Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy-amd64
v1.10.5: Pulling from container-registry.oracle.com/kubernetes/kube-proxy-amd64
Digest: sha256:4739e1154818a95786bc94d44e1cb4f493083d1983e98087c8a8279e616582f1
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.10.5
Checking access to container-registry.oracle.com/kubernetes for update
Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy-amd64
v1.11.3: Pulling from container-registry.oracle.com/kubernetes/kube-proxy-amd64
Digest: sha256:2783b4d4689da3210d2a915a8ee60905bf53841be4d52ffbf56cc811c61d5728
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.11.3
Checking access to container-registry.oracle.com/kubernetes for update
Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy ...
v1.12.7: Pulling from container-registry.oracle.com/kubernetes/kube-proxy
Digest: sha256:f4f9e7b70a65f4f7d751da9b97c7536b21a7ac2b301155b0685778fc83d5510f
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy:v1.12.7
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.9.11-2.1.1.el7 will be updated
---> Package kubeadm.x86_64 0:1.10.5-2.0.2.el7 will be an update
---> Package kubectl.x86_64 0:1.9.11-2.1.1.el7 will be updated
---> Package kubectl.x86_64 0:1.10.5-2.0.2.el7 will be an update
---> Package kubelet.x86_64 0:1.9.11-2.1.1.el7 will be updated
---> Package kubelet.x86_64 0:1.10.5-2.0.2.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================
 Package   Arch    Version           Repository   Size
================================================================
Updating:
 kubeadm   x86_64  1.10.5-2.0.2.el7  ol7_addons   17 M
 kubectl   x86_64  1.10.5-2.0.2.el7  ol7_addons   7.6 M
 kubelet   x86_64  1.10.5-2.0.2.el7  ol7_addons   17 M

Transaction Summary
=================================================================
Upgrade  3 Packages

Total download size: 42 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
--------------------------------------------------------------------------------
Total                                              49 MB/s |  42 MB  00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.10.5-2.0.2.el7.x86_64   1/6
  Updating   : kubectl-1.10.5-2.0.2.el7.x86_64   2/6
  Updating   : kubeadm-1.10.5-2.0.2.el7.x86_64   3/6
  Cleanup    : kubeadm-1.9.11-2.1.1.el7.x86_64   4/6
  Cleanup    : kubectl-1.9.11-2.1.1.el7.x86_64   5/6
  Cleanup    : kubelet-1.9.11-2.1.1.el7.x86_64   6/6
  Verifying  : kubectl-1.10.5-2.0.2.el7.x86_64   1/6
  Verifying  : kubelet-1.10.5-2.0.2.el7.x86_64   2/6
  Verifying  : kubeadm-1.10.5-2.0.2.el7.x86_64   3/6
  Verifying  : kubectl-1.9.11-2.1.1.el7.x86_64   4/6
  Verifying  : kubeadm-1.9.11-2.1.1.el7.x86_64   5/6
  Verifying  : kubelet-1.9.11-2.1.1.el7.x86_64   6/6

Updated:
  kubeadm.x86_64 0:1.10.5-2.0.2.el7  kubectl.x86_64 0:1.10.5-2.0.2.el7  kubelet.x86_64 0:1.10.5-2.0.2.el7

Complete!
Upgrading pre-requisite
Checking whether api-server is using image lower than 1.9
Upgrading pre-requisite done
Checking cluster health ...
....
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg
[upgrade/version] You have chosen to change the cluster version to "v1.10.5"
[upgrade/versions] Cluster version: v1.9.11+2.1.1.el7
[upgrade/versions] kubeadm version: v1.10.5+2.0.2.el7
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.5"...
Static pod: kube-apiserver-master.example.com hash: 3b6cc643053ae0164a687e53fbcf4eb7
Static pod: kube-controller-manager-master.example.com hash: 78b0313a30bbf65cf169686001a2c093
Static pod: kube-scheduler-master.example.com hash: 8fa7d39f0a3246bb39baf3712702214a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master.example.com hash: 196164156fbbd2ef7daaf8c6a0ec6379
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.example.com] and IPs [192.0.2.10]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/etcd.yaml" [upgrade/staticpods] Not waiting for pod-hash change for component "etcd" [upgrade/etcd] Waiting for etcd to become available [util/etcd] Waiting 30s for initial delay [util/etcd] Attempting to see if all cluster endpoints are available 1/10 [util/etcd] Attempt failed with error: dial tcp [::1]:2379: getsockopt: connection refused [util/etcd] Waiting 15s until next retry [util/etcd] Attempting to see if all cluster endpoints are available 2/10 [util/etcd] Attempt failed with error: dial tcp [::1]:2379: getsockopt: connection refused [util/etcd] Waiting 15s until next retry [util/etcd] Attempting to see if all cluster endpoints are available 3/10 [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139181353/kube-scheduler.yaml" [upgrade/staticpods] The etcd manifest will be restored if component "kube-apiserver" fails to upgrade [certificates] Using the existing etcd/ca certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. 
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-apiserver-master.example.com hash: 3b6cc643053ae0164a687e53fbcf4eb7 Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-controller-manager-master.example.com hash: 78b0313a30bbf65cf169686001a2c093 Static pod: kube-controller-manager-master.example.com hash: 3fffc11595801c3777e45ff96ce75444 [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests154060916/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-scheduler-master.example.com hash: 8fa7d39f0a3246bb39baf3712702214a Static pod: kube-scheduler-master.example.com hash: c191e26d0faa00981a2f0d6f1f0d7e5f [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.10.5". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn. Upgrading kubeadm to 1.11.3 version Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubeadm.x86_64 0:1.10.5-2.0.2.el7 will be updated ---> Package kubeadm.x86_64 0:1.11.3-2.0.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved ================================================================ Package Arch Version Repository Size ================================================================ Updating: kubeadm x86_64 1.11.3-2.0.2.el7 ol7_addons 7.6 M Transaction Summary ================================================================ Upgrade 1 Package Total download size: 7.6 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubeadm-1.11.3-2.0.2.el7.x86_64 1/2 Cleanup : kubeadm-1.10.5-2.0.2.el7.x86_64 2/2 Verifying : kubeadm-1.11.3-2.0.2.el7.x86_64 1/2 Verifying : kubeadm-1.10.5-2.0.2.el7.x86_64 2/2 Updated: kubeadm.x86_64 0:1.11.3-2.0.2.el7 Complete! 
Upgrading pre-requisite Checking whether api-server is using image lower than 1.9 Upgrading pre-requisite done Checking cluster health ... .................................................................................... [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file. [upgrade/version] You have chosen to change the cluster version to "v1.11.3" [upgrade/versions] Cluster version: v1.10.5+2.0.2.el7 [upgrade/versions] kubeadm version: v1.11.3+2.0.2.el7 [upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set: - There are kubelets in this cluster that are too old that have these versions [v1.9.11+2.1.1.el7] [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.3"... Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad Static pod: kube-controller-manager-master.example.com hash: 3fffc11595801c3777e45ff96ce75444 Static pod: kube-scheduler-master.example.com hash: c191e26d0faa00981a2f0d6f1f0d7e5f Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5 [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/etcd.yaml" [certificates] Using the existing etcd/ca certificate and key. [certificates] Using the existing etcd/server certificate and key. [certificates] Using the existing etcd/peer certificate and key. [certificates] Using the existing etcd/healthcheck-client certificate and key. 
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/etcd.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5 Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5 Static pod: etcd-master.example.com hash: 6ecccbc01b0cd9daa0705a1396ef38e5 Static pod: etcd-master.example.com hash: 560672e3081cf0ff6a30ac1f943240eb [apiclient] Found 1 Pods for label selector component=etcd [upgrade/staticpods] Component "etcd" upgraded successfully! [upgrade/etcd] Waiting for etcd to become available [util/etcd] Waiting 0s for initial delay [util/etcd] Attempting to see if all cluster endpoints are available 1/10 [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests842182537/kube-scheduler.yaml" [certificates] Using the existing etcd/ca certificate and key. [certificates] Using the existing apiserver-etcd-client certificate and key. 
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad Static pod: kube-apiserver-master.example.com hash: f7c7c2a1693f48bc6146119961c47cad Static pod: kube-apiserver-master.example.com hash: 9eefcb38114108702fad91f927799c04 [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/ kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-controller-manager-master.example.com hash: 3fffc11595801c3777e45ff96ce75444 Static pod: kube-controller-manager-master.example.com hash: 32b0f7233137a5c4879bda1067f36f8a [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-25-48/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-scheduler-master.example.com hash: c191e26d0faa00981a2f0d6f1f0d7e5f Static pod: kube-scheduler-master.example.com hash: b589c7f85a86056631f252695c20358b [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.example.com" as an annotation [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.3". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. Upgrading kubelet and kubectl now ... Checking kubelet and kubectl RPM ... 
[INFO] yum install -y kubelet-1.11.3-2.0.2.el7.x86_64 Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubelet.x86_64 0:1.10.5-2.0.2.el7 will be updated ---> Package kubelet.x86_64 0:1.11.3-2.0.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved =================================================================================== Package Arch Version Repository Size =================================================================================== Updating: kubelet x86_64 1.11.3-2.0.2.el7 ol7_addons 18 M Transaction Summary =================================================================================== Upgrade 1 Package Total download size: 18 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubelet-1.11.3-2.0.2.el7.x86_64.rpm | 18 MB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubelet-1.11.3-2.0.2.el7.x86_64 1/2 Cleanup : kubelet-1.10.5-2.0.2.el7.x86_64 2/2 Verifying : kubelet-1.11.3-2.0.2.el7.x86_64 1/2 Verifying : kubelet-1.10.5-2.0.2.el7.x86_64 2/2 Updated: kubelet.x86_64 0:1.11.3-2.0.2.el7 Complete! 
[INFO] yum install -y kubectl-1.11.3-2.0.2.el7.x86_64 Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubectl.x86_64 0:1.10.5-2.0.2.el7 will be updated ---> Package kubectl.x86_64 0:1.11.3-2.0.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved =================================================================================== Package Arch Version Repository Size =================================================================================== Updating: kubectl x86_64 1.11.3-2.0.2.el7 ol7_addons 7.6 M Transaction Summary =================================================================================== Upgrade 1 Package Total download size: 7.6 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubectl-1.11.3-2.0.2.el7.x86_64.rpm | 7.6 MB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubectl-1.11.3-2.0.2.el7.x86_64 1/2 Cleanup : kubectl-1.10.5-2.0.2.el7.x86_64 2/2 Verifying : kubectl-1.11.3-2.0.2.el7.x86_64 1/2 Verifying : kubectl-1.10.5-2.0.2.el7.x86_64 2/2 Updated: kubectl.x86_64 0:1.11.3-2.0.2.el7 Complete! 
Upgrading kubelet and kubectl to 1.11.3 version Loaded plugins: langpacks, ulninfo Package kubelet-1.11.3-2.0.2.el7.x86_64 already installed and latest version Package kubectl-1.11.3-2.0.2.el7.x86_64 already installed and latest version Nothing to do Upgrading kubeadm to 1.12.7-1.1.2.el7 version Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubeadm.x86_64 0:1.11.3-2.0.2.el7 will be updated ---> Package kubeadm.x86_64 0:1.12.7-1.1.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved =================================================================================== Package Arch Version Repository Size =================================================================================== kubeadm x86_64 1.12.7-1.1.2.el7 ol7_addons 7.3 M Transaction Summary =================================================================================== Upgrade 1 Package Total download size: 7.3 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubeadm-1.12.7-1.1.2.el7.x86_64.rpm | 7.3 MB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubeadm-1.12.7-1.1.2.el7.x86_64 1/2 Cleanup : kubeadm-1.11.3-2.0.2.el7.x86_64 2/2 Verifying : kubeadm-1.12.7-1.1.2.el7.x86_64 1/2 Verifying : kubeadm-1.11.3-2.0.2.el7.x86_64 2/2 Updated: kubeadm.x86_64 0:1.12.7-1.1.2.el7 Complete! Upgrading pre-requisite Checking whether api-server is using image lower than 1.9 Upgrading pre-requisite done Checking cluster health ... ........................................................................... [preflight] Running pre-flight checks. 
[upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file. [upgrade/version] You have chosen to change the cluster version to "v1.12.5" [upgrade/versions] Cluster version: v1.11.3+2.0.2.el7 [upgrade/versions] kubeadm version: v1.12.7+1.1.2.el7 [upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set: - There are kubelets in this cluster that are too old that have these versions [v1.9.11+2.1.1.el7] [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] [upgrade/prepull] Prepulling image for component etcd. [upgrade/prepull] Prepulling image for component kube-apiserver. [upgrade/prepull] Prepulling image for component kube-controller-manager. [upgrade/prepull] Prepulling image for component kube-scheduler. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd [upgrade/prepull] Prepulled image for component kube-apiserver. [upgrade/prepull] Prepulled image for component kube-controller-manager. [upgrade/prepull] Prepulled image for component etcd. [upgrade/prepull] Prepulled image for component kube-scheduler. [upgrade/prepull] Successfully prepulled the images for all the control plane components [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.5"... 
Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7 Static pod: kube-controller-manager-master.example.com hash: 0221796c266be3d6f237a7256da5fa36 Static pod: kube-scheduler-master.example.com hash: e0549b9041665ae07cfacdaf337ab1e0 Static pod: etcd-master.example.com hash: 7a68f8a24bf031e2027cc6d528ce6efe [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/etcd.yaml" [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-34-07/etcd.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s Static pod: etcd-master.example.com hash: 7a68f8a24bf031e2027cc6d528ce6efe Static pod: etcd-master.example.com hash: 7a68f8a24bf031e2027cc6d528ce6efe Static pod: etcd-master.example.com hash: 7eab06d7296bf87cff84cb56f26d13e6 [apiclient] Found 1 Pods for label selector component=etcd [upgrade/staticpods] Component "etcd" upgraded successfully! 
[upgrade/etcd] Waiting for etcd to become available [util/etcd] Waiting 0s for initial delay [util/etcd] Attempting to see if all cluster endpoints are available 1/10 [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665746710/kube-scheduler.yaml" [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-01-09-07-34-07/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7 Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7 Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7 Static pod: kube-apiserver-master.example.com hash: 7c19bbee52e8a857c9e75551139951b7 Static pod: kube-apiserver-master.example.com hash: 5c6ceef93d0a8c04d331d6ea6da4b6a7 [apiclient] Found 1 Pods for label selector component=kube-apiserver [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-m1.us.oracle.com" as an annotation [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.5". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. Upgrading kubelet and kubectl now ... Checking kubelet and kubectl RPM ... 
[INFO] yum install -y kubelet-1.12.7-1.1.2.el7.x86_64 Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubelet.x86_64 0:1.11.3-2.0.2.el7 will be updated ---> Package kubelet.x86_64 0:1.12.7-1.1.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved =================================================================================== Package Arch Version Repository Size =================================================================================== Updating: kubelet x86_64 1.12.7-1.1.2.el7 ol7_addons 19 M Transaction Summary ==================================================================================== Upgrade 1 Package Total download size: 19 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubelet-1.12.7-1.1.2.el7.x86_64.rpm Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubelet-1.12.7-1.1.2.el7.x86_64 1/2 Cleanup : kubelet-1.11.3-2.0.2.el7.x86_64 2/2 Verifying : kubelet-1.12.7-1.1.2.el7.x86_64 1/2 Verifying : kubelet-1.11.3-2.0.2.el7.x86_64 2/2 Updated: kubelet.x86_64 0:1.12.7-1.1.2.el7 Complete! 
[INFO] yum install -y kubectl-1.12.7-1.1.2.el7.x86_64 Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubectl.x86_64 0:1.11.3-2.0.2.el7 will be updated ---> Package kubectl.x86_64 0:1.12.7-1.1.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved =================================================================================== Package Arch Version Repository Size =================================================================================== Updating: kubectl x86_64 1.12.7-1.1.2.el7 ol7_addons 7.7 M Transaction Summary =================================================================================== Upgrade 1 Package Total download size: 7.7 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubectl-1.12.7-1.1.2.el7.x86_64.rpm | 7.7 MB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubectl-1.12.7-1.1.2.el7.x86_64 1/2 Cleanup : kubectl-1.11.3-2.0.2.el7.x86_64 2/2 Verifying : kubectl-1.12.7-1.1.2.el7.x86_64 1/2 Verifying : kubectl-1.11.3-2.0.2.el7.x86_64 2/2 Updated: kubectl.x86_64 0:1.12.7-1.1.2.el7 Complete! [INSTALLING DASHBOARD NOW] Installing kubernetes-dashboard ... Kubernetes version: v1.12.7 and dashboard yaml file: /usr/local/share/kubeadm/kubernetes-dashboard-self-certs.yaml The connection to the server 10.147.25.195:6443 was refused - did you specify the right host or port? Restarting kubectl-proxy.service ... [INFO] Upgrading master node done successfully [INFO] Flannel is not upgraded yet. Please run 'kubeadm-upgrade.sh upgrade --flannel' to upgrade flannel [INFO] Dashboard is not upgraded yet. Please run 'kubeadm-upgrade.sh upgrade --dashboard' to upgrade dashboard -
Because the flannel service that Oracle Linux Container Services for use with Kubernetes 1.1.12 depends on is not upgraded automatically by the specialized upgrade script, ensure that you upgrade it separately, for example:
# kubeadm-upgrade.sh upgrade --flannel
Trying to pull repository container-registry.oracle.com/kubernetes/flannel ... v0.10.0: Pulling from container-registry.oracle.com/kubernetes/flannel Digest: sha256:da1f7af813d6b6123c9a240b3e7f9b58bc7b50d9939148aa08c7ba8253e0c312 Status: Image is up to date for container-registry.oracle.com/kubernetes/flannel:v0.10.0 kube-flannel-ds-85clc kube-flannel-ds-x9grm clusterrole.rbac.authorization.k8s.io "flannel" deleted clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted serviceaccount "flannel" deleted configmap "kube-flannel-cfg" deleted daemonset.extensions "kube-flannel-ds" deleted pod "kube-flannel-ds-85clc" deleted pod "kube-flannel-ds-x9grm" deleted NAME READY STATUS RESTARTS AGE etcd-master.example.com 1/1 Running 0 11m kube-apiserver-master.example.com 1/1 Running 0 11m kube-controller-manager-master.example.com 1/1 Running 0 11m kube-dns-554d547449-hhl6p 3/3 Running 0 12m kube-proxy-bc7ht 1/1 Running 0 12m kube-proxy-jd8gh 1/1 Running 0 12m kube-scheduler-master.example.com 1/1 Running 0 11m kubernetes-dashboard-64c8c8b9dd-c9wfl 1/1 Running 1 41m clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created -
The Oracle Linux Container Services for use with Kubernetes dashboard service also needs to be upgraded separately to 1.1.12:
# kubeadm-upgrade.sh upgrade --dashboard
Upgrading dashboard secret "kubernetes-dashboard-certs" deleted serviceaccount "kubernetes-dashboard" deleted role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted deployment.apps "kubernetes-dashboard" deleted service "kubernetes-dashboard" deleted Installing kubernetes-dashboard ... Kubernetes version: v1.12.7 and dashboard yaml file: /usr/local/share/kubeadm/kubernetes-dashboard-self-certs.yaml secret/kubernetes-dashboard-certs created serviceaccount/kubernetes-dashboard created role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created deployment.apps/kubernetes-dashboard created service/kubernetes-dashboard created Restarting kubectl-proxy.service ... -
If the master node upgrade fails, roll back as follows:
# kubeadm-upgrade.sh restore /backups/master-backup-v1.9.11-0-1546953894.tar
-- Running upgrade script---
Restoring the cluster
Loaded plugins: langpacks, ulninfo
Nothing to do
Checking sha256sum of the backup files ...
/var/run/kubeadm/backup/etcd-backup-1546953894.tar: OK
/var/run/kubeadm/backup/k8s-master-0-1546953894.tar: OK
Restoring backup from /backups/master-backup-v1.9.11-0-1546953894.tar ...
Using 3.1.11
etcd cluster is healthy ...
Cleaning up etcd container ...
ab9e7a31a721c2b9690047ac3445beeb2c518dd60da81da2a396f250f089e82e
ab9e7a31a721c2b9690047ac3445beeb2c518dd60da81da2a396f250f089e82e
Restore successful ...
You can restart your cluster now by doing:
# kubeadm-setup.sh restart
Restore successful :)
-
If the script completes successfully, create a fresh backup on your new Oracle Linux Container Services for use with Kubernetes 1.1.12 master node by using kubeadm-setup.sh backup.
You can read the full upgrade log in
/var/log/kubeadm-upgrade
. After completing
the master node upgrade, you can upgrade the packages for
Oracle Linux Container Services for use with Kubernetes on each worker node.
2.5.2 Upgrading Worker Nodes from 1.1.9 to 1.1.12
Only upgrade worker nodes after the master node has completed the upgrade process, as described in Section 2.5.1, “Upgrading the Master Node from 1.1.9 to 1.1.12”.
You must perform several manual steps to complete the upgrade of a worker node. These steps involve draining the node prior to upgrade to prevent the cluster from scheduling or starting any pods on the node while it is being upgraded. The drain process deletes any running pods from the node. If there is local storage configured, the drain process errors out so that you have the opportunity to determine whether or not you need to back up local data.
When the upgrade is complete, you can uncordon the worker node so that pods are able to resume on this node.
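At a high level, the drain, upgrade, and uncordon cycle can be outlined as a small script. The following sketch is illustrative only: the node name is an example, and the run helper prints each command rather than executing it, so the outline is safe to run anywhere.

```shell
#!/bin/sh
# Dry-run outline of the per-worker upgrade cycle described above.
# The node name is an example; `run` prints each command instead of
# executing it. Replace `run` with direct execution when ready.
node="worker1.example.com"
run() { printf '+ %s\n' "$*"; }

run kubectl drain "$node" --ignore-daemonsets   # from the master node
run kubeadm-upgrade.sh upgrade                  # as root on the worker node
run kubectl uncordon "$node"                    # from the master node again
```

Note that the drain and uncordon steps run on the master node, while the upgrade step runs on the worker node itself.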
To upgrade a worker node, perform the following steps:
-
Drain the worker node by running the following command from the master node:
$ kubectl drain worker1.example.com --ignore-daemonsets
where worker1.example.com is the hostname of the worker node that you wish to upgrade.
If local storage is configured for the node, the drain process may generate an error. The following example output shows a node, using local storage, that fails to drain:
node/worker1.example.com cordoned error: unable to drain node "worker1.example.com", aborting command... There are pending nodes to be drained: worker1.example.com error: pods with local storage (use --delete-local-data to override): carts-74f4558cb8-c8p8x, carts-db-7fcddfbc79-c5pkx, orders-787bf5b89f-nt9zj, orders-db-775655b675-rhlp7, shipping-5bd69fb4cc-twvtf, user-db-5f9d89bbbb-7t85k
In the case where a node fails to drain, determine whether to follow any procedure to back up local data and restore it later or whether you can proceed and delete the local data directly. After any backup files have been made, you can rerun the command with the
--delete-local-data
switch to force the removal of the data and drain the node. For example, on the master node, run:
$ kubectl drain worker1.example.com --ignore-daemonsets --delete-local-data
node/worker1.example.com already cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-xrszk, kube-proxy-7g9px; Deleting pods with local storage: carts-74f4558cb8-g2fdw, orders-db-775655b675-gfggs, user-db-5f9d89bbbb-k78sk
pod "user-db-5f9d89bbbb-k78sk" evicted
pod "rabbitmq-96d887875-lxm5f" evicted
pod "orders-db-775655b675-gfggs" evicted
pod "catalogue-676d4b9f7c-lvwfb" evicted
pod "payment-75f75b467f-skrbq" evicted
pod "carts-74f4558cb8-g2fdw" evicted
node "kubernetes-worker1" drained
-
Check that the worker node is unable to accept any further scheduling by running the following command on the master node:
$ kubectl get nodes
Note that a node that has been drained should have its status set to SchedulingDisabled.
-
If you are using the Oracle Container Registry to obtain images, log in.
Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the upgrade. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information. If you have configured a local registry, you may need to set the
KUBE_REPO_PREFIX
environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information. -
Run the kubeadm-upgrade.sh upgrade command as root on the worker node:
# kubeadm-upgrade.sh upgrade
-- Running upgrade script--- Number of cpu present in this system 2 Total memory on this system: 7710MB Space available on the mount point /var/lib/docker: 44GB Space available on the mount point /var/lib/kubelet: 44GB kubeadm version 1.9 kubectl version 1.9 kubelet version 1.9 ol7_addons repo enabled [WARNING] This action will upgrade this node to latest version [WARNING] The cluster will be upgraded through intermediate versions which are unsupported [WARNING] You must take backup before upgrading the cluster as upgrade may fail Please select 1 (continue) or 2 (abort) : 1) continue 2) abort #?1
Upgrading worker node Updating kubeadm package Checking access to container-registry.oracle.com/kubernetes for update Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy ... v1.12.5: Pulling from container-registry.oracle.com/kubernetes/kube-proxy Digest: sha256:9eba681b56e15078cb499a3360f138cc16987cf5aea06593f77d0881af6badbe Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy:v1.12.5 Upgrading kubeadm to latest version Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubeadm.x86_64 0:1.9.11-2.1.1.el7 will be updated ---> Package kubeadm.x86_64 0:1.12.7-1.1.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved =============================================================== Package Arch Version Repository Size =============================================================== Updating: kubeadm x86_64 1.12.7-1.1.2.el7 ol7_addons 7.3 M Transaction Summary =============================================================== Upgrade 1 Package Total download size: 7.3 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. Running transaction check Running transaction test Transaction test succeeded Running transaction Upgrading kubeadm forcefully from version earlier that 1.11 Updating : kubeadm-1.12.7-1.1.2.el7.x86_64 1/2 Cleanup : kubeadm-1.9.11-2.1.1.el7.x86_64 2/2 Verifying : kubeadm-1.12.7-1.1.2.el7.x86_64 1/2 Verifying : kubeadm-1.9.11-2.1.1.el7.x86_64 2/2 Updated: kubeadm.x86_64 0:1.12.7-1.1.2.el7 Complete! Upgrading kubelet and kubectl now ... Checking kubelet and kubectl RPM ... 
[INFO] yum install -y kubelet-1.12.7-1.1.2.el7.x86_64 Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubelet.x86_64 0:1.9.11-2.1.1.el7 will be updated ---> Package kubelet.x86_64 0:1.12.7-1.1.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved ========================================================================================== Package Arch Version Repository Size ========================================================================================== Updating: kubelet x86_64 1.12.7-1.1.2.el7 ol7_addons 19 M Transaction Summary ========================================================================================== Upgrade 1 Package Total download size: 19 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubelet-1.12.7-1.1.2.el7.x86_64.rpm | 19 MB 00:00:01 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubelet-1.12.7-1.1.2.el7.x86_64 1/2 Cleanup : kubelet-1.9.11-2.1.1.el7.x86_64 2/2 Verifying : kubelet-1.12.7-1.1.2.el7.x86_64 1/2 Verifying : kubelet-1.9.11-2.1.1.el7.x86_64 2/2 Updated: kubelet.x86_64 0:1.12.7-1.1.2.el7 Complete! 
[INFO] yum install -y kubectl-1.12.7-1.1.2.el7.x86_64 Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package kubectl.x86_64 0:1.9.11-2.1.1.el7 will be updated ---> Package kubectl.x86_64 0:1.12.7-1.1.2.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved ========================================================================================== Package Arch Version Repository Size ========================================================================================== Updating: kubectl x86_64 1.12.7-1.1.2.el7 ol7_addons 7.7 M Transaction Summary ========================================================================================== Upgrade 1 Package Total download size: 7.7 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubectl-1.12.7-1.1.2.el7.x86_64.rpm | 7.7 MB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : kubectl-1.12.7-1.1.2.el7.x86_64 1/2 Cleanup : kubectl-1.9.11-2.1.1.el7.x86_64 2/2 Verifying : kubectl-1.12.7-1.1.2.el7.x86_64 1/2 Verifying : kubectl-1.9.11-2.1.1.el7.x86_64 2/2 Updated: kubectl.x86_64 0:1.12.7-1.1.2.el7 Complete! [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [upgrade] The configuration for this node was successfully updated! [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager. [WORKER NODE UPGRADED SUCCESSFULLY]Note that you are warned that the upgrade affects the node's availability temporarily. You must confirm that you wish to continue to complete the upgrade.
The
kubelet
service and all running containers are restarted automatically after upgrade. -
Uncordon the worker node so that it is able to schedule new pods, as required. On the master node, run:
$ kubectl uncordon worker1.example.com
node/worker1.example.com uncordoned
where worker1.example.com is the hostname of the worker node that you have just upgraded.
-
When you have finished the upgrade process, check that the nodes are all running the expected version as follows:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION master.example.com Ready master 1h v1.12.7+1.1.2.el7 worker1.example.com Ready <none> 1h v1.12.7+1.1.2.el7 worker2.example.com Ready <none> 1h v1.12.7+1.1.2.el7
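This final version check can also be scripted. The following sketch is an assumption-laden example: the here-document stands in for real kubectl get nodes output, and the expected version string matches the release used in the examples above; in practice you would pipe the live command output in instead.

```shell
#!/bin/sh
# Sketch: verify that every node reports the expected VERSION after upgrade.
# The here-document is sample output standing in for `kubectl get nodes`.
expected="v1.12.7+1.1.2.el7"
nodes_output=$(cat <<'EOF'
NAME                  STATUS   ROLES    AGE   VERSION
master.example.com    Ready    master   1h    v1.12.7+1.1.2.el7
worker1.example.com   Ready    <none>   1h    v1.12.7+1.1.2.el7
worker2.example.com   Ready    <none>   1h    v1.12.7+1.1.2.el7
EOF
)
# List any node whose VERSION column (the last field) differs from expected.
stale=$(printf '%s\n' "$nodes_output" | awk -v v="$expected" 'NR>1 && $NF != v {print $1}')
if [ -n "$stale" ]; then
    echo "Nodes not yet upgraded: $stale"
else
    echo "All nodes are at $expected"
fi
```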
2.6 Updating to Errata Releases
Updates for Oracle Linux Container Services for use with Kubernetes are released on the Oracle Linux yum server and on ULN.
The update process that is described here only applies for updates to errata releases that provide minor updates and security patches for existing installations.
Oracle does not support upgrading existing clusters that are created by using Oracle Linux Container Services for use with Kubernetes 1.1.9 to 1.1.12 with the kubeadm-setup.sh script. You must use the kubeadm-upgrade.sh script, as described in Section 2.5, “Upgrading 1.1.9 to 1.1.12”.
These instructions work on hosts that are booting from UEK R4; however, it is recommended that hosts currently running UEK R4 are upgraded to UEK R5 to facilitate future upgrades, in which KubeDNS is deprecated.
The update process requires that you first update the master node in your cluster, and then update each of the worker nodes. The master node update is scripted so that the prerequisite checks, validation, and reconfiguration are automated. It is good practice to make a backup file for your cluster before updating. See Section 2.6.1, “Updating the Master Node”.
After the master node is updated, you can update each worker node, as described in Section 2.6.2, “Updating Worker Nodes”.
Oracle does not support any upgrade from a preview release to a stable and supported release.
Oracle also does not support upgrading existing single master node clusters built with the kubeadm-setup.sh script to High Availability clusters. You must build and manage High Availability clusters by using the kubeadm-ha-setup utility.
2.6.1 Updating the Master Node
You must update the master node in your cluster before you update worker nodes. The kubeadm-setup.sh upgrade command is used on the master node to complete the necessary steps to prepare and update the cluster. The following steps describe how to update the master node.
Before you perform any update operations, make a backup file
for your cluster at its current version. After you update the
kubeadm
package, any backup files that you
make are not backward compatible, and if you revert to an
earlier version of Oracle Linux Container Services for use with Kubernetes, the restore operation may fail
to successfully load your backup file. See
Section 4.3, “Cluster Backup and Restore” for more
information.
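Because backups made after the kubeadm package update are not backward compatible, it can help to confirm which release a backup file belongs to before restoring it. The following sketch is illustrative: it assumes the master-backup-<version>-<timestamp>.tar file-name pattern shown in the examples in this chapter, and the path is an example.

```shell
#!/bin/sh
# Sketch: extract the Kubernetes version embedded in a backup file name,
# so you can confirm a backup matches the release you intend to restore.
# Assumes the master-backup-<version>-<timestamp>.tar naming used in
# this chapter's examples; the path below is illustrative.
backup="/backups/master-backup-v1.12.5-2-1543581013.tar"
version=$(basename "$backup" | sed -n 's/^master-backup-\(v[0-9.]*-[0-9]*\)-[0-9]*\.tar$/\1/p')
echo "Backup was taken at: $version"
```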
-
On the master node, update the kubeadm package first:
# yum update kubeadm
-
If you are using the Oracle Container Registry to obtain images, log in.
Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the update. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information. If you have configured a local registry, you may need to set the
KUBE_REPO_PREFIX
environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information. -
Ensure that you open any new firewall ports that are described in Section 2.2.7, “Firewall and iptables Requirements”.
-
Create a pre-update backup file. In the event that the update does not complete successfully, the backup can be used to revert to the configuration of your cluster prior to the update.
# kubeadm-setup.sh stop
Stopping kubelet now ...
Stopping containers now ...
# kubeadm-setup.sh backup /backups
Creating backup at directory /backup ...
Using 3.2.24
Checking if container-registry.oracle.com/kubernetes/etcd:3.2.24 is available
d05a0ef2bea8cd05e1311fcb5391d8878a5437f8384887ae31694689bc6d57f5 /var/run/kubeadm/backup/etcd-backup-1543581013.tar
9aa26d015a4d2cf7a73438b04b2fe2e61be71ee56e54c08fd7047555eb1e0e6f /var/run/kubeadm/backup/k8s-master-0-1543581013.tar
Backup is successfully stored at /backup/master-backup-v1.12.5-2-1543581013.tar ...
You can restart your cluster now by doing:
# kubeadm-setup.sh restart
# kubeadm-setup.sh restart
Restarting containers now ...
Detected node is master ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes ...
Trying to pull repository container-registry.oracle.com/kubernetes/pause ...
3.1: Pulling from container-registry.oracle.com/kubernetes/pause
Digest: sha256:802ef89b9eb7e874a76e1cfd79ed990b63b0b84a05cfa09f0293379ac0261b49
Status: Image is up to date for container-registry.oracle.com/kubernetes/pause:3.1
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Restarting kubelet ...
Waiting for node to restart ...
.......+..............
Master node restarted. Complete synchronization between nodes may take a few minutes.
-
Run the kubeadm-setup.sh upgrade command as root on the master node. The script prompts you to continue with the update and warns you to make a backup file before you continue. Enter 1 to continue.
# kubeadm-setup.sh upgrade
Checking whether api-server is using image lower than 1.12 [WARNING] Please make sure that you have performed backup of the cluster before upgrading Please select 1 (continue) or 2 (abort) : 1) continue 2) abort #?1
Checking whether https works (export https_proxy if behind firewall) v1.12.5-2: Pulling from kubernetes/kube-proxy-amd64 Digest: sha256:d3b87a1cb0eb64d702921169e442c6758a09c94ee91a0080e801ec41355077cd Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.12.5-2 Checking cluster health ... [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration options from a file: /var/run/kubeadm/kubeadm-cfg [upgrade/version] You have chosen to change the cluster version to "v1.12.5-2" [upgrade/versions] Cluster version: v1.12.7+1.1.2.el7 [upgrade/versions] kubeadm version: v1.12.7+1.1.2.el7 [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler] [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.5-2"... [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests120255399/kube-scheduler.yaml" [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests555128538/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests555128538/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests555128538/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.5-2". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn. Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units. [MASTER UPGRADE COMPLETED SUCCESSFULLY] Cluster may take a few minutes to get backup! 
Please proceed to upgrade your $WORKER node *in turn* by running the following command:
# kubectl drain $WORKER --ignore-daemonsets (run following command with proper KUBECONFIG)
Login to the $WORKER node
# yum update kubeadm
# kubeadm-setup.sh upgrade
# kubectl uncordon $WORKER (run the following command with proper KUBECONFIG)
upgrade the next $WORKER node
The upgrade command performs a health check on the cluster, validates the existing configuration, and then pulls the necessary images that are required to update the cluster. All of the controlplane components for the cluster are updated, and certificates and tokens are configured to ensure that all cluster components on all nodes are able to continue to function after the update.
After these components have been updated, the kubelet and kubectl packages are updated automatically.
If you are prompted by the following message, it is an indication that you need to update the flannel component manually:
[INFO] Flannel is not upgraded yet. Run 'kubeadm-setup.sh upgrade --flannel' to upgrade flannel
Re-run the kubeadm-setup.sh upgrade command with the --flannel flag to ensure that you have fully upgraded your cluster:
# kubeadm-setup.sh upgrade --flannel
After you have completed the master node update, you can update the packages for Oracle Linux Container Services for use with Kubernetes on each worker node.
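Because the flannel follow-up is easy to miss in a long update log, one option is to scan a captured log for the prompt. The following sketch is a hypothetical helper, not part of the product scripts: the sample log lines in the here-document are illustrative stand-ins for real kubeadm-setup.sh output.

```shell
#!/bin/sh
# Sketch: scan a captured update log for the flannel prompt so that the
# follow-up upgrade step is not missed. The sample lines are illustrative.
log=$(cat <<'EOF'
[MASTER UPGRADE COMPLETED SUCCESSFULLY]
[INFO] Flannel is not upgraded yet. Run 'kubeadm-setup.sh upgrade --flannel' to upgrade flannel
EOF
)
if printf '%s\n' "$log" | grep -q 'Flannel is not upgraded yet'; then
    echo "Follow-up required: kubeadm-setup.sh upgrade --flannel"
fi
```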
2.6.2 Updating Worker Nodes
Only update worker nodes after the master node has completed the update process, as described in Section 2.6.1, “Updating the Master Node”.
You must perform several manual steps to complete the update of a worker node. These steps involve draining the node prior to update to prevent the cluster from scheduling or starting any pods on the node while it is being updated. The drain process deletes any running pods from the node. If there is local storage configured, the drain process errors out so that you have the opportunity to determine whether or not you need to back up local data.
When the update is complete you can uncordon the worker node so that pods are able to resume on this node.
To update a worker node, perform the following steps:
-
Drain the worker node by running the following command from the master node:
$ kubectl drain worker1.example.com --ignore-daemonsets
where worker1.example.com is the hostname of the worker node that you wish to update.
If local storage is configured for the node, the drain process might generate an error. The following example output shows a node, using local storage, that fails to drain:
node/worker1.example.com cordoned
error: unable to drain node "worker1.example.com", aborting command...

There are pending nodes to be drained:
 worker1.example.com
error: pods with local storage (use --delete-local-data to override): carts-74f4558cb8-c8p8x, carts-db-7fcddfbc79-c5pkx, orders-787bf5b89f-nt9zj, orders-db-775655b675-rhlp7, shipping-5bd69fb4cc-twvtf, user-db-5f9d89bbbb-7t85k
In the case where a node fails to drain, determine whether you need to back up any local data so that you can restore it later, or whether you can delete the local data directly. After any backup files have been made, you can rerun the command with the --delete-local-data switch to force the removal of the data and drain the node, for example:
$ kubectl drain worker1.example.com --ignore-daemonsets --delete-local-data
node/worker1.example.com already cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-xrszk, kube-proxy-7g9px; Deleting pods with local storage: carts-74f4558cb8-g2fdw, orders-db-775655b675-gfggs, user-db-5f9d89bbbb-k78sk
pod "user-db-5f9d89bbbb-k78sk" evicted
pod "rabbitmq-96d887875-lxm5f" evicted
pod "orders-db-775655b675-gfggs" evicted
pod "catalogue-676d4b9f7c-lvwfb" evicted
pod "payment-75f75b467f-skrbq" evicted
pod "carts-74f4558cb8-g2fdw" evicted
node "kubernetes-worker1" drained
-
Check that the worker node is unable to accept any further scheduling by running the following command on the master node:
$ kubectl get nodes
A node that has been drained should have its status set to SchedulingDisabled.
-
Update the packages on the worker node by using a standard yum update command. To update only those packages required for Oracle Linux Container Services for use with Kubernetes, run the following command as root on the worker node:
# yum update kubeadm
-
If you are using the Oracle Container Registry to obtain images, log in.
Follow the instructions in Section 2.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the update. If you are using one of the Oracle Container Registry mirrors, see Section 2.2.5.1, “Using an Oracle Container Registry Mirror” for more information.
If you have configured a local registry, you may need to set the KUBE_REPO_PREFIX environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 2.2.5.2, “Setting Up an Optional Local Registry” for more information.
-
When the yum update process completes, run the kubeadm-setup.sh upgrade command as root on the worker node. You are warned that the update affects the node's availability temporarily. Confirm that you wish to continue to complete the update:
# kubeadm-setup.sh upgrade
[WARNING] Upgrade will affect this node's application(s) availability temporarily
Please select 1 (continue) or 2 (abort) :
1) continue
2) abort
#? 1
Checking access to container-registry.oracle.com/kubernetes for update
v1.12.5-2: Pulling from kubernetes/kube-proxy-amd64
Digest: sha256:f525d06eebf7f21c55550b1da8cee4720e36b9ffee8976db357f49eddd04c6d0
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.12.5-2
Restarting containers ...
[NODE UPGRADED SUCCESSFULLY]
The kubelet service and all of the running containers are restarted automatically after the update.
-
Uncordon the worker node so that it is able to schedule new pods, as required, by running the following command on the master node:
$ kubectl uncordon worker1.example.com
node/worker1.example.com uncordoned
where worker1.example.com is the hostname of the worker node that you have just updated.
-
After the update process has completed, run the following command on the master node to check that all of the nodes are running the expected version:
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
master.example.com    Ready   master  1h   v1.12.7+1.1.2.el7
worker1.example.com   Ready   <none>  1h   v1.12.7+1.1.2.el7
worker2.example.com   Ready   <none>  1h   v1.12.7+1.1.2.el7
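Because the same drain, update, and uncordon sequence is repeated for every worker, it can be convenient to wrap the steps above in a small script that is run from the master node. The sketch below is not part of the Oracle Linux tooling: it assumes a configured kubectl context and root SSH access to each worker, the hostnames are placeholders, and with DRY_RUN=1 (the default here) it only prints the commands it would run so that you can review them first.

```shell
#!/bin/sh
# Sketch: loop the documented worker-update steps over a list of workers.
# Assumptions (not from the Oracle documentation): kubectl is configured
# on this host and passwordless root SSH to each worker is available.
DRY_RUN="${DRY_RUN:-1}"
WORKERS="worker1.example.com worker2.example.com"   # placeholder hostnames

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"    # review mode: print instead of executing
  else
    "$@"
  fi
}

for node in $WORKERS; do
  # Drain the node; add --delete-local-data only after backing up local data.
  run kubectl drain "$node" --ignore-daemonsets
  # Update the Kubernetes packages and run the upgrade on the worker itself.
  run ssh "root@$node" "yum update -y kubeadm && kubeadm-setup.sh upgrade"
  # Allow the node to accept scheduling again.
  run kubectl uncordon "$node"
done
```

Running the script as-is prints the planned commands for each worker; setting DRY_RUN=0 would execute them, one node at a time, so that only a single worker is unavailable at any point.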