The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Chapter 3 Installing High Availability Oracle Linux Container Services for use with Kubernetes
- 3.1 Overview
- 3.2 Requirements
- 3.2.1 Yum or ULN Channel Subscription
- 3.2.2 Requirement for Upgrading the Unbreakable Enterprise Kernel
- 3.2.3 Resource Requirements
- 3.2.4 Docker Engine Requirements
- 3.2.5 Oracle Container Registry Requirements
- 3.2.6 Network Time Service Requirements
- 3.2.7 Firewall and iptables Requirements
- 3.2.8 Network Requirements
- 3.2.9 SELinux Requirements
- 3.2.10 Requirements to Use Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure
- 3.3 Setting Up the Master Cluster
- 3.4 Setting Up a Worker Node
- 3.5 Upgrading
This chapter describes the steps required to install Kubernetes clusters using master nodes configured for high availability on Oracle Linux 7 hosts.
3.1 Overview
Kubernetes can be deployed with more than one replica of the required master node, and automated failover to those replicas, for the purpose of providing a more scalable and resilient service.
The kubeadm
package provides the
kubeadm utility, a tool designed to make the
deployment of a Kubernetes cluster simple. Many users may find that
using this tool directly, along with the upstream documentation,
provides the maximum configuration flexibility.
Oracle provides the kubeadm-ha-setup tool in an
additional kubeadm-ha-setup
package to help new
users install and configure a high availability deployment of
Kubernetes with greater ease, regardless of whether it is hosted on
bare metal, on a virtual machine, or in the cloud. The tool
handles checking that basic package requirements are in place,
setting proxy and firewall requirements, configuring networking,
and initializing a high availability master cluster configuration
for the Kubernetes environment. The tool uses the
kubeadm utility, but handles many additional
configuration steps that can help new users get running with
minimal effort.
The instructions provided here assume that you are new to Kubernetes and are using the provided kubeadm-ha-setup tool to deploy your cluster. This tool is developed and tested at Oracle and deployment using this tool is fully supported. Alternate configurations and deployment mechanisms are untested by Oracle.
High availability clusters have resilience for one master node failure. If more than one master node fails, you must restore your master cluster from a backup file to avoid data loss.
3.2 Requirements
Kubernetes configured for high availability requires three nodes in the master cluster and at least one worker node.
Creating three master nodes ensures replication of configuration data between them through the distributed key store, etcd, so that your high availability cluster is resilient to a single node failing without any loss of data or uptime.
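The resilience property follows from etcd's quorum arithmetic, which a quick sketch makes concrete (illustrative only, not part of the installation):

```shell
# etcd needs a majority (quorum) of members to stay writable.
# For an n-member cluster: quorum = n/2 + 1, tolerated failures = n - quorum.
n=3
quorum=$(( n / 2 + 1 ))
tolerated=$(( n - quorum ))
echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
```

With three masters this prints `members=3 quorum=2 tolerated_failures=1`, which is why the cluster survives exactly one master failure but no more.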
Placing each master node in a different Kubernetes zone can safeguard cluster availability in the event of a zone failure within the master cluster.
The following sections describe various requirements that must be met to install and configure Kubernetes clusters with high availability on Oracle Linux 7 hosts.
3.2.1 Yum or ULN Channel Subscription
To install all of the required packages to use Kubernetes, you must ensure that you are subscribed to the correct yum repositories or Unbreakable Linux Network (ULN) channels.
If your systems are registered with ULN, enable the
ol7_x86_64_addons
channel.
If you use the Oracle Linux yum server, enable the
ol7_addons
repository on each system in your
deployment. You can do this easily using
yum-config-manager:
# yum-config-manager --enable ol7_addons
For more information on the ol7_x86_64_addons
channel, please see Oracle® Linux: Unbreakable Linux Network User's Guide for Oracle Linux 6 and Oracle Linux 7.
Oracle does not support Kubernetes on systems where the ol7_preview, ol7_developer, or ol7_developer_EPEL repositories are enabled, or where software from these repositories is currently installed on the systems where Kubernetes runs. Even if you follow the instructions in this document, you may render your platform unsupported if these repositories or channels are enabled or software from these channels or repositories is installed on your system.
3.2.2 Requirement for Upgrading the Unbreakable Enterprise Kernel
Oracle Linux Container Services for use with Kubernetes 1.1.12 and later versions require that you configure the system to use the Unbreakable Enterprise Kernel Release 5 (UEK R5) and boot the system with this kernel. If you are using either UEK R4 or the Red Hat Compatible Kernel (RHCK), you must configure Yum to allow you to install UEK R5.
-
If your system is registered with the Unbreakable Linux Network (ULN), disable access to the ol7_x86_64_UEKR4 channel and enable access to the ol7_x86_64_UEKR5 channel.
If you use the Oracle Linux yum server, disable the ol7_UEKR4 repository and enable the ol7_UEKR5 repository. You can do this easily using yum-config-manager, if you have the yum-utils package installed:
# yum-config-manager --disable ol7_UEKR4
# yum-config-manager --enable ol7_UEKR5
-
Run the following command to upgrade the system to UEK R5:
# yum update
For information on how to make UEK R5 the default boot kernel, see Oracle® Linux 7: Administrator's Guide.
-
Reboot the system, selecting the UEK R5 kernel if this is not the default boot kernel.
# systemctl reboot
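After the reboot, you can confirm which kernel is actually running. UEK R5 is based on the 4.14 kernel series, so a check along these lines (a sketch, not an official tool) is a reasonable sanity test:

```shell
# Print the running kernel and flag whether it looks like UEK R5
# (a 4.14.x kernel carrying the el7uek suffix).
kver=$(uname -r)
case $kver in
  4.14.*el7uek*) echo "running UEK R5: $kver" ;;
  *)             echo "not running UEK R5: $kver" ;;
esac
```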
3.2.3 Resource Requirements
Each node in your cluster requires at least 2 GB of RAM and 2 or more CPUs to facilitate the use of kubeadm and any further applications that are provisioned using kubectl.
Also ensure that each node has a unique hostname, MAC address and product UUID as Kubernetes uses this information to identify and track each node in the cluster. You can verify the product UUID on each host with:
# dmidecode -s system-uuid
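To compare the remaining identifiers across hosts, something like the following can be run on each node (a sketch; dmidecode requires root, so only the hostname and MAC addresses are gathered here):

```shell
# Print the node's hostname and the MAC address of every network
# interface; collect this output from each node and check that no
# values repeat across the cluster.
uname -n
cat /sys/class/net/*/address
```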
At least 5 GB free space must be available in the
/var/lib/kubelet
directory or volume on each
node. The underlying Docker engine requires an additional 5 GB
free space available in the /var/lib/docker
directory or volume on each node.
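These space requirements can be checked up front with a small script (a sketch only; it walks up to the nearest existing parent directory so that it also works before the directories have been created):

```shell
# Report whether the filesystem backing each directory has at least
# 5 GB available, as required by kubelet and the Docker engine.
check_free_gb() {
  dir=$1; need_gb=$2
  # Fall back to the nearest existing parent if the path is absent.
  while [ ! -d "$dir" ]; do dir=$(dirname "$dir"); done
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge $(( need_gb * 1024 * 1024 )) ]
}
for dir in /var/lib/kubelet /var/lib/docker; do
  if check_free_gb "$dir" 5; then
    echo "OK: $dir"
  else
    echo "WARNING: less than 5G free for $dir"
  fi
done
```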
3.2.4 Docker Engine Requirements
Kubernetes is used to manage containers running on a containerization
platform deployed on several systems. On Oracle Linux, Kubernetes is
currently only supported when used in conjunction with the
Docker containerization platform. Therefore, each system in
the deployment must have the Docker engine installed and ready
to run. Support of Oracle Linux Container Services for use with Kubernetes is limited to usage with the
latest Oracle Container Runtime for Docker version available in the
ol7_addons
repository on the Oracle Linux yum server and in the
ol7_x86_64_addons
channel on ULN.
Please note that if you enable the
ol7_preview
repository, you may install a
preview version of Oracle Container Runtime for Docker and your installation can no
longer be supported by Oracle. If you have already installed a
version of Docker from the ol7_preview
repository, you should disable the repository and uninstall this
version before proceeding with the installation.
Install the Docker engine on all nodes in the cluster:
# yum install docker-engine
Enable the Docker service in systemd so that it starts on subsequent reboots, and start the service before running the kubeadm-ha-setup tool:
# systemctl enable docker
# systemctl start docker
See Oracle® Linux: Oracle Container Runtime for Docker User's Guide for more information on installing and running the Docker engine.
3.2.5 Oracle Container Registry Requirements
The images that are deployed by the kubeadm-ha-setup tool are hosted on the Oracle Container Registry. For the tool to be able to install the required components, you must perform the following steps:
-
Log in to the Oracle Container Registry website at https://container-registry.oracle.com using your Single Sign-On credentials.
-
Use the web interface to navigate to the Container Services business area and accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. You can accept a global agreement that applies to all of the existing repositories within this business area. If newer repositories are added to this business area in the future, you may need to accept these terms again before performing upgrades.
-
Ensure that each of the systems that are used as nodes within the cluster is able to access https://container-registry.oracle.com, and use the docker login command to authenticate against the Oracle Container Registry using the same credentials that you used to log in to the web interface:
# docker login container-registry.oracle.com
The command prompts you for your user name and password.
Detailed information about the Oracle Container Registry is available in Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
3.2.5.1 Using an Oracle Container Registry Mirror
It is also possible to use any of the Oracle Container Registry mirror servers to obtain the correct images to set up Oracle Linux Container Services for use with Kubernetes. The Oracle Container Registry mirror servers are located within the same data centers used for Oracle Cloud Infrastructure. More information about the Oracle Container Registry mirror servers is available in Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
To use an alternate Oracle Container Registry mirror server:
-
You must still log in to the Oracle Container Registry website at https://container-registry.oracle.com using your Single Sign-On credentials and use the web interface to accept the Oracle Standard Terms and Restrictions.
-
On each node, use the docker login command to authenticate against the Oracle Container Registry mirror server using the same credentials that you used to log into the web interface:
# docker login container-registry-phx.oracle.com
The command prompts you for your user name and password.
-
After you have logged in, set the environment variable to use the correct registry mirror when you deploy Kubernetes:
# export KUBE_REPO_PREFIX=container-registry-phx.oracle.com/kubernetes
# echo 'export KUBE_REPO_PREFIX=container-registry-phx.oracle.com/kubernetes' >> ~/.bashrc
If you are using Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure, the kubeadm-ha-setup tool automatically detects the most appropriate mirror server to use and sets this environment variable for you, so that you do not have to perform this step. If you manually set the KUBE_REPO_PREFIX environment variable on the command line, kubeadm-ha-setup honors the variable and does not attempt to detect which mirror server you should be using.
3.2.5.2 Setting Up an Optional Local Registry
If the systems that you are using for your Kubernetes cluster nodes do not have direct access to the Internet and are unable to connect directly to the Oracle Container Registry, you can set up a local Docker registry to perform this function. The kubeadm-ha-setup tool provides an option to change the registry that you use to obtain these images. Instructions to set up a local Docker registry are provided in Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
When you have set up a local Docker registry, you must pull
the images required to run Oracle Linux Container Services for use with Kubernetes, tag these images and
then push them to your local registry. The images must be
tagged identically to the way that they are tagged in the
Oracle Container Registry. The
kubeadm-ha-setup matches version numbers
during the setup process and cannot successfully complete many
operations if it cannot find particular versions of images. To
assist with this process, Oracle Linux Container Services for use with Kubernetes provides the
kubeadm-registry.sh tool in the
kubeadm
package.
To use the kubeadm-registry.sh tool to automatically pull images from the Oracle Container Registry, tag them appropriately, and push them to your local registry:
-
If you are using the Oracle Container Registry to obtain images, log in following the instructions in Section 3.2.5, “Oracle Container Registry Requirements”. If you are using one of the Oracle Container Registry mirrors, see Section 3.2.5.1, “Using an Oracle Container Registry Mirror” for more information.
-
Run the kubeadm-registry.sh tool with the required options:
# kubeadm-registry.sh --to host.example.com:5000
Substitute host.example.com:5000 with the resolvable domain name and port by which your local Docker registry is available.
You may optionally use the --from option to specify an alternate registry to pull the images from. You may also use the --version option to specify the version of Kubernetes images that you intend to host. For example:
# kubeadm-registry.sh --to host.example.com:5000 \
    --from container-registry-phx.oracle.com/kubernetes --version 1.12.0
If you are upgrading your environment and you intend to use a local registry, you must make sure that you have the most recent version of the images required to run Oracle Linux Container Services for use with Kubernetes. You can use the kubeadm-registry.sh tool to pull the correct images and to update your local registry before running the upgrade on the master node.
After your local Docker registry is installed and configured and the required images have been imported, you must set the environment variable that controls which registry server the kubeadm-ha-setup tool uses. On each of the systems where you intend to run the kubeadm-ha-setup tool, run the following commands:
# export KUBE_REPO_PREFIX="local-registry.example.com:5000/kubernetes"
# echo 'export KUBE_REPO_PREFIX="local-registry.example.com:5000/kubernetes"' >> ~/.bashrc
Substitute
local-registry.example.com
with the
IP address or resolvable domain name of the host on which your
local Docker registry is configured.
3.2.6 Network Time Service Requirements
As a clustering environment, Kubernetes requires that system time is synchronized across each node within the cluster. Typically, this can be achieved by installing and configuring an NTP daemon on each node. You can do this in the following way:
-
Install the ntp package, if it is not already installed:
# yum install ntp
-
Edit the NTP configuration in /etc/ntp.conf. Your requirements may vary. If you are using DHCP to configure the networking for each node, it is possible to configure NTP servers automatically. If you do not have a locally configured NTP service that your systems can synchronize with, and your systems have Internet access, you can configure them to use the public pool.ntp.org service. See https://www.ntppool.org/en/.
-
Ensure that NTP is enabled to restart at boot and that it is started before you proceed with your Kubernetes installation. For example:
# systemctl start ntpd
# systemctl enable ntpd
Note that systems running on Oracle Cloud Infrastructure are configured to use the
chronyd
time service by default, so there is
no requirement to add or configure NTP if you are installing
into an Oracle Cloud Infrastructure environment.
For information on configuring a Network Time Service, see Oracle® Linux 7: Administrator's Guide.
3.2.7 Firewall and iptables Requirements
Kubernetes uses iptables to handle many networking
and port forwarding rules. Therefore, you must ensure that you
do not have any rules set that may interfere with the
functioning of Kubernetes. The kubeadm-ha-setup
tool requires an iptables rule to accept forwarding traffic. If
this rule is not set, the tool exits and notifies you that you
may need to add this iptables
rule. A
standard Docker installation may create a firewall rule that
prevents forwarding, therefore you may need to run:
# iptables -P FORWARD ACCEPT
The kubeadm-ha-setup tool checks iptables rules and, where there is a match, instructions are provided on how to modify your iptables configuration to meet any requirements. See Section 4.1, “Kubernetes and iptables Rules” for more information.
If you have a requirement to run a firewall directly on the systems where Kubernetes is deployed, you must ensure that all ports required by Kubernetes are available. For instance, the TCP port 6443 must be accessible on the master node to allow other nodes to access the API Server. All nodes must be able to accept connections from the master node on the TCP ports 10250-10252 and 10255, and traffic should be allowed on the UDP port 8472. All nodes must be able to receive traffic from all other nodes on every port on the network fabric that is used for the Kubernetes pods. The firewall must support masquerading.
Oracle Linux 7 installs and enables firewalld by default. If you are running firewalld, the kubeadm-ha-setup tool notifies you of any rules that you may need to add. In summary, run the following commands on all nodes:
# firewall-cmd --add-masquerade --permanent
# firewall-cmd --add-port=2379-2380/tcp --permanent
# firewall-cmd --add-port=10250/tcp --permanent
# firewall-cmd --add-port=10251/tcp --permanent
# firewall-cmd --add-port=10252/tcp --permanent
# firewall-cmd --add-port=10255/tcp --permanent
# firewall-cmd --add-port=8472/udp --permanent
Additionally, run the following command on each node in the master cluster to enable API access:
# firewall-cmd --add-port=6443/tcp --permanent
The --permanent option ensures these firewall rules persist across reboots. Remember to restart the firewall for these rules to take effect:
# systemctl restart firewalld
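After restarting firewalld, the configuration can be verified with firewall-cmd's query options. The loop below is an illustrative convenience, not an official check; note that 6443/tcp only applies to nodes in the master cluster:

```shell
# Query each documented port; firewall-cmd returns success when the
# port is open in the runtime configuration.
missing=0
for p in 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 8472/udp 6443/tcp; do
  if firewall-cmd --query-port="$p" >/dev/null 2>&1; then
    echo "open: $p"
  else
    echo "not open (or firewalld unavailable): $p"
    missing=$(( missing + 1 ))
  fi
done
echo "ports not confirmed open: $missing"
```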
3.2.8 Network Requirements
The kubeadm-ha-setup tool requires access to the Oracle Container Registry, and possibly other internet resources, to pull any container images that you require. Therefore, unless you intend to set up a local mirror for all of your container image requirements, the systems where you intend to install Kubernetes must either have direct internet access, or must be configured to use a proxy. See Section 4.2, “Using Kubernetes With a Proxy Server” for more information.
The kubeadm-ha-setup tool checks whether the
br_netfilter
module is loaded and exits if it
is not available. This module is required to enable transparent
masquerading and to facilitate Virtual Extensible LAN (VxLAN)
traffic for communication between Kubernetes pods across the cluster.
If you need to check whether it is loaded, run:
# lsmod | grep br_netfilter
Kernel modules are usually loaded as they are needed, and it is unlikely that you would need to load this module manually. However, if necessary, you can load the module manually by running:
# modprobe br_netfilter
# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
Kubernetes requires that packets traversing a network bridge are
processed by iptables
for filtering and for
port forwarding. To achieve this, tunable parameters in the
kernel bridge module are automatically set when the kubeadm
package is installed and a sysctl
file is
created at /etc/sysctl.d/k8s.conf
that
contains the following lines:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
If you modify this file, or create anything similar yourself, you must run the following command to load the bridge tunable parameters:
# /sbin/sysctl -p /etc/sysctl.d/k8s.conf
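You can confirm that the two tunables took effect by reading them back through /proc (a sketch; if the files are absent, the br_netfilter module has not been loaded yet):

```shell
# Read the bridge netfilter tunables directly; both should report 1
# once br_netfilter is loaded and the sysctl file has been applied.
for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  if [ -r "$f" ]; then
    echo "$f = $(cat "$f")"
  else
    echo "$f missing (br_netfilter not loaded)"
  fi
done
```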
The kubeadm-ha-setup tool configures a flannel network as the network fabric that is used for communications between Kubernetes pods. This overlay network uses VxLANs to facilitate network connectivity: https://github.com/coreos/flannel
By default, the kubeadm-ha-setup tool creates
a network in the 10.244.0.0/16
range to host
this network. The kubeadm-ha-setup tool
provides an option to set the network range to an alternate
range, if required, during installation. Systems in the Kubernetes
deployment must not have any network devices configured for this
reserved IP range.
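As a quick way to reason about the reservation: an IPv4 address falls inside 10.244.0.0/16 exactly when its first two octets are 10.244, so a shell sketch like this (illustrative only) can flag addresses that would collide with the default pod network:

```shell
# Return success if the given IPv4 address lies in 10.244.0.0/16.
# Matching the first two octets is exact for a /16 prefix.
in_flannel_range() {
  case $1 in
    10.244.*) return 0 ;;
    *)        return 1 ;;
  esac
}
in_flannel_range 10.244.3.7 && echo "10.244.3.7 overlaps the default pod network"
in_flannel_range 192.0.2.10 || echo "192.0.2.10 does not overlap"
```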
3.2.9 SELinux Requirements
The kubeadm-ha-setup tool checks whether
SELinux is set to enforcing mode. If enforcing mode is enabled,
the tool exits with an error requesting that you set SELinux to
permissive mode. Setting SELinux to permissive mode allows
containers to access the host file system, which is required by
pod networks. This is a requirement until SELinux support is
improved in the kubelet
tool for Kubernetes.
To set SELinux to permissive mode temporarily, run:
# /usr/sbin/setenforce 0
To ensure SELinux remains in permissive mode across subsequent reboots so that Kubernetes continues to run correctly, modify /etc/selinux/config and set the SELINUX variable:
SELINUX=Permissive
3.2.10 Requirements to Use Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure
Oracle Linux Container Services for use with Kubernetes is engineered to work on Oracle Cloud Infrastructure. All of the instructions provided in this document can be used to install and configure Kubernetes across a group of compute instances. If you require additional information on configuration steps and usage of Oracle Cloud Infrastructure, please see:
https://docs.cloud.oracle.com/iaas/Content/home.htm
The most important requirement for Oracle Linux Container Services for use with Kubernetes on Oracle Cloud Infrastructure is that your Virtual Cloud Network (VCN) allows the compute nodes used in your Kubernetes deployment to communicate on the required ports. By default, compute nodes are unable to access each other across the Virtual Cloud Network until you have configured the Security List with the appropriate ingress rules.
Ingress rules should match the rules required in any firewall configuration, as described in Section 3.2.7, “Firewall and iptables Requirements”. Typically this involves adding the following ingress rules to the default security list for your VCN:
- Allow 2379-2380/TCP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: TCP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 2379-2380
- Allow 6443/TCP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: TCP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 6443
- Allow 10250-10252/TCP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: TCP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 10250-10252
- Allow 10255/TCP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: TCP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 10255
- Allow 8472/UDP.
  - STATELESS: Unchecked
  - SOURCE CIDR: 10.0.0.0/16
  - IP PROTOCOL: UDP
  - SOURCE PORT RANGE: All
  - DESTINATION PORT RANGE: 8472
Substitute 10.0.0.0/16 with the range used for the subnet that you created within the VCN for the compute nodes that will participate in the Kubernetes cluster. You may wish to limit this to the specific IP address range used by the cluster components, or you may set it wider depending on your own security requirements.
The ingress rules described here are the core rules that you need to set up to allow the cluster to function. For each service that you define or that you intend to use, you may need to define additional rules in the security list.
When creating compute instances to host Oracle Linux Container Services for use with Kubernetes, all shape types are supported. The environment requires that for high availability clusters you use an Oracle Linux 7 Update 5 image or later with the Unbreakable Enterprise Kernel Release 5 (UEK R5).
If you intend to configure load balancers for your master cluster on Oracle Cloud Infrastructure, as described in Configure Load Balancing, see:
https://docs.cloud.oracle.com/en-us/iaas/Content/Balance/Concepts/balanceoverview.htm
3.3 Setting Up the Master Cluster
Before you begin, ensure you have satisfied the requirements in
Section 3.2.5, “Oracle Container Registry Requirements”. Then on all
the hosts that you are configuring as master nodes, install the
kubeadm
and kubeadm-ha-setup
packages and their dependencies:
# yum install kubeadm kubelet kubectl kubeadm-ha-setup
Define the nodes in your high availability master cluster before proceeding further. To generate a template configuration file, copy the one provided on any node in the master cluster at /usr/local/share/kubeadm/kubeadm-ha/ha.yaml:
# cp /usr/local/share/kubeadm/kubeadm-ha/ha.yaml ~/ha.yaml
The first step is to specify the server IP addresses for each node
used in the master
cluster. There must be three
nodes defined in this cluster, and they must each have unique
hostnames:
clusters:
  - name: master
    vip: 192.0.2.13
    nodes:
      - 192.0.2.10
      - 192.0.2.11
      - 192.0.2.12
Your cluster's vip address is the IP address of the server running the keepalived service for your cluster. This service is included by default with Oracle Linux 7, and you can find out more information about this service in Oracle® Linux 7: Administrator's Guide.
All master nodes in your cluster must have shell access to the other master nodes with password-less, key-based authentication whenever you use kubeadm-ha-setup. You can configure SSH keys for this by following the instructions in Oracle® Linux 7: Administrator's Guide.
You must define the SSH private key in the
private_key
variable, and the remote user in
the user
variable:
private_key: /root/.ssh/id_rsa
user: root
You can optionally define a pod_cidr
for your
pod network. This is set by default to a reserved local IP range:
pod_cidr: 10.244.0.0/16
Set the image
variable to point at the Oracle
Container Registry or an Oracle Container Registry mirror so that
you are able to fetch the container images for the current
release. See Section 3.2.5.1, “Using an Oracle Container Registry Mirror” for
more information on using a mirror:
image: container-registry.oracle.com/kubernetes
k8sversion: v1.12.5
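Collecting the fragments above, a completed ~/ha.yaml might look like the following. This is only a sketch: the authoritative key layout is the shipped template at /usr/local/share/kubeadm/kubeadm-ha/ha.yaml, and the addresses, key path, and version here are the illustrative values used throughout this section:

```yaml
clusters:
  - name: master
    vip: 192.0.2.13
    nodes:
      - 192.0.2.10
      - 192.0.2.11
      - 192.0.2.12
private_key: /root/.ssh/id_rsa
user: root
pod_cidr: 10.244.0.0/16
image: container-registry.oracle.com/kubernetes
k8sversion: v1.12.5
```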
Run kubeadm-ha-setup up instead of kubeadm-setup.sh up on just one Kubernetes node in the master cluster to apply these settings and automatically provision the other master nodes.
As root, run kubeadm-ha-setup up to add the host as a master node:
# kubeadm-ha-setup up ~/ha.yaml
Cleaning up ...
Reading configuration file /usr/local/share/kubeadm/kubeadm-ha/ha.yaml ...
CreateSSH /root/.ssh/id_rsa root
Checking 192.0.2.10
status 0
Checking 192.0.2.11
status 0
Checking 192.0.2.12
status 0
Configuring keepalived for HA ...
success
success
Setting up first master ... (maximum wait time 185 seconds)
[init] using Kubernetes version: v1.12.5
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two,
depending on the speed of your internet connection
[preflight/images] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names
[master1.example.com localhost] and IPs [127.0.0.1 ::1 192.0.2.10]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names
[master1.example.com localhost] and IPs [192.0.2.10 127.0.0.1 ::1 192.0.2.10]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names
[master1.example.com kubernetes kubernetes.default
kubernetes.default.svc kubernetes.default.svc.cluster.local] and
IPs [10.96.0.1 192.0.2.10 192.0.2.10 192.0.2.10 192.0.2.11 192.0.2.12 192.0.2.10]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver
to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager
to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods
from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 27.004111 seconds
[uploadconfig] storing the configuration used in
ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12"
in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master1.example.com as master
by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master1.example.com as master
by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock"
to the Node API object "master1.example.com" as an annotation
[bootstraptoken] using token: ixxbh9.zrtxo7jwo1uz2ssp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens
to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller
automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation
for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm-ha-setup join container-registry.oracle.com/kubernetes:v1.12.5 192.0.2.10:6443 \
--token ixxbh9.zrtxo7jwo1uz2ssp \
--discovery-token-ca-cert-hash \
sha256:6459031d2993f672f5a47f1373f009a3ce220ceddd6118f14168734afc0a43ad
Attempting to send file to: 192.0.2.11:22
Attempting to send file to: 192.0.2.12:22
Setting up master on 192.0.2.11
[INFO] 192.0.2.11 added
Setting up master on 192.0.2.12
[INFO] 192.0.2.12 added
Installing flannel and dashboard ...
[SUCCESS] Complete synchronization between nodes may take a few minutes.
You should back up the ~/ha.yaml file on shared or external storage in case you need to recreate the cluster at a later date.
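The backup can be as simple as a dated copy of the file. The following sketch assumes shared storage is mounted somewhere; here temporary paths stand in for ~/ha.yaml and the storage location so the commands can be tried anywhere:

```shell
# Keep a timestamped copy of the cluster configuration on separate storage.
# BACKUP_DIR is a hypothetical mount point; on a real master node you might
# use BACKUP_DIR=/mnt/shared/k8s-backups and HA_YAML=$HOME/ha.yaml instead.
set -e
BACKUP_DIR=$(mktemp -d)
HA_YAML=$(mktemp)
echo 'loadbalancer: 192.0.2.15' > "$HA_YAML"   # stand-in content

cp "$HA_YAML" "$BACKUP_DIR/ha-$(date +%Y%m%d%H%M%S).yaml"
ls -1 "$BACKUP_DIR"
```

Keeping the timestamp in the file name lets you retain several generations of the configuration side by side.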
Configure Load Balancing
To support a load balancer as part of your high availability master cluster configuration, set its IP address as the loadbalancer value in your ~/ha.yaml file:
loadbalancer: 192.0.2.15
The loadbalancer value is applied as part of the setup process with the following command:
# kubeadm-ha-setup up ~/ha.yaml --lb
This configuration step is optional, but if it is included, ensure that port 6443 is open to all of your master nodes. See Section 3.2.7, “Firewall and iptables Requirements”.
Preparing to Use Kubernetes as a Regular User
To use the Kubernetes cluster as a regular user, perform the following steps on each of the nodes in the master cluster:
-
Create the .kube subdirectory in your home directory:
$ mkdir -p $HOME/.kube
-
Create a copy of the Kubernetes admin.conf file in the .kube directory:
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-
Change the ownership of the file to match your regular user profile:
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
Export the path to the file in the KUBECONFIG environment variable:
$ export KUBECONFIG=$HOME/.kube/config
You cannot use the kubectl command if the path to this file is not set in this environment variable. Remember to export the KUBECONFIG variable for each subsequent login so that the kubectl and kubeadm commands use the correct admin.conf file; otherwise, these commands may not behave as expected after a reboot or a new login. For instance, append the export line to your .bashrc:
$ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
-
Verify that you can use the kubectl command.
Kubernetes runs many of the services that manage the cluster configuration as Docker containers running as Kubernetes pods. These can be viewed by running the following command on the master node:
$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6c77847dcf-mxjqt                1/1     Running   0          12m
coredns-6c77847dcf-s6pgz                1/1     Running   0          12m
etcd-master1.example.com                1/1     Running   0          11m
etcd-master2.example.com                1/1     Running   0          11m
etcd-master3.example.com                1/1     Running   0          11m
kube-apiserver-master1.example.com      1/1     Running   0          11m
kube-apiserver-master2.example.com      1/1     Running   0          11m
kube-apiserver-master3.example.com      1/1     Running   0          11m
kube-controller-master1.example.com     1/1     Running   0          11m
kube-controller-master2.example.com     1/1     Running   0          11m
kube-controller-master3.example.com     1/1     Running   0          11m
kube-flannel-ds-z77w9                   1/1     Running   0          12m
kube-flannel-ds-n8t99                   1/1     Running   0          12m
kube-flannel-ds-pkw2l                   1/1     Running   0          12m
kube-proxy-zntpv                        1/1     Running   0          12m
kube-proxy-p5kfv                        1/1     Running   0          12m
kube-proxy-x7rfh                        1/1     Running   0          12m
kube-scheduler-master1.example.com      1/1     Running   0          11m
kube-scheduler-master2.example.com      1/1     Running   0          11m
kube-scheduler-master3.example.com      1/1     Running   0          11m
kubernetes-dashboard-64458f66b6-2l5n6   1/1     Running   0          12m
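Steps 1 through 4 above can be collected into a single script. This is only a sketch: DEMO_HOME and the stand-in admin.conf are fabricated here so the commands can be tried safely anywhere; on a master node you would operate on $HOME and /etc/kubernetes/admin.conf (with sudo for the copy and chown):

```shell
set -e
DEMO_HOME=$(mktemp -d)            # stands in for $HOME
SRC="$DEMO_HOME/admin.conf"       # stands in for /etc/kubernetes/admin.conf
echo 'apiVersion: v1' > "$SRC"    # fabricated content for the sketch

# Steps 1-3: create .kube, copy admin.conf, take ownership of the copy
mkdir -p "$DEMO_HOME/.kube"
cp -i "$SRC" "$DEMO_HOME/.kube/config"
chown "$(id -u):$(id -g)" "$DEMO_HOME/.kube/config"

# Step 4: export KUBECONFIG now, and persist it for future logins
export KUBECONFIG="$DEMO_HOME/.kube/config"
grep -qs 'KUBECONFIG' "$DEMO_HOME/.bashrc" || \
  echo 'export KUBECONFIG=$HOME/.kube/config' >> "$DEMO_HOME/.bashrc"
echo "KUBECONFIG=$KUBECONFIG"
```

The grep guard makes the .bashrc append idempotent, so rerunning the script does not accumulate duplicate export lines.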
3.4 Setting Up a Worker Node
Repeat these steps on each host that you want to add to the cluster as a worker node.
Install the kubeadm package and its dependencies:
# yum install kubeadm kubelet kubectl kubeadm-ha-setup
As root, run the kubeadm-ha-setup join command to add the host as a worker node:
# kubeadm-ha-setup join container-registry.oracle.com/kubernetes:v1.12.5 192.0.2.13:6443 \
  --token ixxbh9.zrtxo7jwo1uz2ssp \
  --discovery-token-ca-cert-hash \
  sha256:6459031d2993f672f5a47f1373f009a3ce220ceddd6118f14168734afc0a43ad
Trying to pull image kube-proxy v1.12.5 from container-registry.oracle.com/kubernetes
Cleaning up ...
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.0.2.13:6443"
[discovery] Created cluster-info discovery client,
requesting info from "https://192.0.2.13:6443"
[discovery] Requesting info from "https://192.0.2.13:6443" again
to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates
against pinned roots, will use API Server "192.0.2.13:6443"
[discovery] Successfully established connection with API Server "192.0.2.13:6443"
[kubelet] Downloading configuration for the kubelet from
the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock"
to the Node API object "worker1.example.com" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Replace the IP address and port, 192.0.2.13:6443, with the IP address and port set for the vip or loadbalancer value used by the master cluster. Note that the default port is 6443, and you can check the IP address you need to use with kubectl cluster-info.
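If you no longer have the join command handy, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate using the openssl pipeline documented for kubeadm token-based discovery. The sketch below generates a throwaway self-signed certificate so it runs anywhere; on a master node, point CRT at /etc/kubernetes/pki/ca.crt instead:

```shell
set -e
DIR=$(mktemp -d)
CRT="$DIR/ca.crt"        # stands in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$DIR/ca.key" -out "$CRT" 2>/dev/null

# SHA-256 digest of the DER-encoded CA public key, the format kubeadm expects
HASH=$(openssl x509 -pubkey -in "$CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$HASH"
```

The printed sha256:… string is what you pass to the join command's --discovery-token-ca-cert-hash option.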
To verify that the worker has been successfully added to the high availability cluster, run kubectl get nodes on any node in the master cluster:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1.example.com Ready master 10m v1.12.5+2.1.1.el7
master2.example.com Ready master 9m56s v1.12.5+2.1.1.el7
master3.example.com Ready master 9m16s v1.12.5+2.1.1.el7
worker1.example.com Ready <none> 2m26s v1.12.5+2.1.1.el7
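The same check can be done mechanically by confirming that no node is stuck outside the Ready state. The here-document below is a captured sample standing in for live output; on the cluster, pipe kubectl get nodes into the same awk filter:

```shell
# Print any node whose STATUS column is not "Ready".
not_ready=$(awk 'NR>1 && $2 != "Ready" {print $1}' <<'EOF'
NAME                 STATUS  ROLES   AGE    VERSION
master1.example.com  Ready   master  10m    v1.12.5+2.1.1.el7
master2.example.com  Ready   master  9m56s  v1.12.5+2.1.1.el7
master3.example.com  Ready   master  9m16s  v1.12.5+2.1.1.el7
worker1.example.com  Ready   <none>  2m26s  v1.12.5+2.1.1.el7
EOF
)
if [ -z "$not_ready" ]; then
  echo "all nodes Ready"
else
  echo "not ready: $not_ready"
fi
```

A newly joined worker may briefly report NotReady while the pod network starts on it, so rerun the check after a minute before investigating further.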
3.5 Upgrading
Oracle Linux Container Services for use with Kubernetes 1.1.12 is the first release to support high availability clusters.
Oracle does not support upgrading existing single master node clusters built with the kubeadm-setup.sh script to high availability clusters. You must build and manage high availability clusters using the kubeadm-ha-setup utility.
Similarly, upgrading master nodes in a high availability cluster with the kubeadm-setup.sh script is not supported. All maintenance and management operations within high availability clusters must be performed with the kubeadm-ha-setup utility.
3.5.1 Updating the High Availability Cluster
The kubeadm-ha-setup update command is only supported for errata release updates on existing High Availability clusters. A kubeadm-ha-setup upgrade command for larger upgrades will be provided in a future release. Major release upgrades are not supported at this time.
-
Create a backup of your High Availability cluster before proceeding by following the instructions in Section 4.3, “Cluster Backup and Restore”.
-
On each master node in the cluster, update the kubeadm-ha-setup package:
# yum update kubeadm-ha-setup
-
On the master node from which you intend to run the cluster update, update the required prerequisite packages:
# yum update kubeadm
-
If you are using the Oracle Container Registry to obtain images, log in.
Follow the instructions in Section 3.2.5, “Oracle Container Registry Requirements”. Note that if images are updated on the Oracle Container Registry, you may be required to accept the Oracle Standard Terms and Restrictions again before you are able to perform the update. If you are using one of the Oracle Container Registry mirrors, see Section 3.2.5.1, “Using an Oracle Container Registry Mirror” for more information. If you have configured a local registry, you may need to set the
KUBE_REPO_PREFIX
environment variable to point to the appropriate registry. You may also need to update your local registry with the most current images for the version that you are upgrading to. See Section 3.2.5.2, “Setting Up an Optional Local Registry” for more information. -
Verify that the currently reported node versions match those of the existing packages:
# kubectl get nodes
NAME                 STATUS  ROLES   AGE    VERSION
master1.example.com  Ready   master  4m8s   v1.12.5+2.1.1.el7
master2.example.com  Ready   master  2m25s  v1.12.5+2.1.1.el7
master3.example.com  Ready   master  2m12s  v1.12.5+2.1.1.el7
worker1.example.com  Ready   <none>  25s    v1.12.5+2.1.1.el7
-
Start the scripted update process by using the kubeadm-ha-setup tool:
# kubeadm-ha-setup update
[WARNING] This action will update this cluster to the latest version(1.12.7).
[WARNING] You must take a backup before updating the cluster, as the update may fail.
[PROMPT] Do you want to continue updating your cluster?
Please type Yes/y to confirm or No/n to abort(Case insensitive): Y
Kubernetes Cluster Version: v1.12.5
Kubeadm version: 1.12.7-1.1.2, Kubelet version 1.12.5-2.1.1
Kubeadm version: 1.12.5-2.1.1
Kubelet version: 1.12.7-1.1.2
Reading configuration file /usr/local/share/kubeadm/run/kubeadm/ha.yaml ...
Checking repo access
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.7"
[upgrade/versions] Cluster version: v1.12.5+2.1.1.el7
[upgrade/versions] kubeadm version: v1.12.7+1.1.2.el7
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.7"...
Static pod: kube-apiserver-master1.example.com hash: f9004e982ed918c6303596943cef5493
Static pod: kube-controller-manager-master1.example.com hash: 9590101be574fc0a237ca3f029f03ea2
Static pod: kube-scheduler-master1.example.com hash: 22961405d099beb7721c7598daaa73d6
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests867609756"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests867609756/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests867609756/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests867609756/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-04-08-14-28-11/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master1.example.com hash: f9004e982ed918c6303596943cef5493
Static pod: kube-apiserver-master1.example.com hash: f9004e982ed918c6303596943cef5493
Static pod: kube-apiserver-master1.example.com hash: f9004e982ed918c6303596943cef5493
Static pod: kube-apiserver-master1.example.com hash: a692b9726292a4c2a89e2cdcd8301035
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-04-08-14-28-11/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master1.example.com hash: 9590101be574fc0a237ca3f029f03ea2
Static pod: kube-controller-manager-master1.example.com hash: 7dbb816a4ac17a9522e761017dcd444c
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-04-08-14-28-11/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master1.example.com hash: 22961405d099beb7721c7598daaa73d6
Static pod: kube-scheduler-master1.example.com hash: 980091350a77a7fbcff570589689adc2
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master1.example.com" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.7". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package kubelet.x86_64 0:1.12.5-2.1.1.el7 will be updated
---> Package kubelet.x86_64 0:1.12.7-1.1.2.el7 will be an update
--> Processing Dependency: conntrack for package: kubelet-1.12.7-1.1.2.el7.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                 Arch    Version           Repository  Size
================================================================================
Updating:
 kubelet                 x86_64  1.12.7-1.1.2.el7  ol7_addons   19 M
Installing for dependencies:
 conntrack-tools         x86_64  1.4.4-4.el7       ol7_latest  186 k
 libnetfilter_cthelper   x86_64  1.0.0-9.el7       ol7_latest   17 k
 libnetfilter_cttimeout  x86_64  1.0.0-6.el7       ol7_latest   17 k
 libnetfilter_queue      x86_64  1.0.2-2.el7_2     ol7_latest   22 k

Transaction Summary
================================================================================
Install  ( 4 Dependent packages)
Upgrade  1 Package

Total download size: 19 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
--------------------------------------------------------------------------------
Total                                              5.2 MB/s |  19 MB  00:03
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64    1/6
  Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64   2/6
  Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64     3/6
  Installing : conntrack-tools-1.4.4-4.el7.x86_64          4/6
  Updating   : kubelet-1.12.7-1.1.2.el7.x86_64             5/6
  Cleanup    : kubelet-1.12.5-2.1.1.el7.x86_64             6/6
  Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64     1/6
  Verifying  : libnetfilter_cttimeout-1.0.0-6.el7.x86_64   2/6
  Verifying  : kubelet-1.12.7-1.1.2.el7.x86_64             3/6
  Verifying  : libnetfilter_cthelper-1.0.0-9.el7.x86_64    4/6
  Verifying  : conntrack-tools-1.4.4-4.el7.x86_64          5/6
  Verifying  : kubelet-1.12.5-2.1.1.el7.x86_64             6/6

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7
  libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2

Updated:
  kubelet.x86_64 0:1.12.7-1.1.2.el7

Complete!
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.12.5-2.1.1.el7 will be updated
---> Package kubectl.x86_64 0:1.12.7-1.1.2.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package   Arch    Version           Repository  Size
================================================================================
Updating:
 kubectl   x86_64  1.12.7-1.1.2.el7  ol7_addons  7.7 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 7.7 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubectl-1.12.7-1.1.2.el7.x86_64   1/2
  Cleanup    : kubectl-1.12.5-2.1.1.el7.x86_64   2/2
  Verifying  : kubectl-1.12.7-1.1.2.el7.x86_64   1/2
  Verifying  : kubectl-1.12.5-2.1.1.el7.x86_64   2/2

Updated:
  kubectl.x86_64 0:1.12.7-1.1.2.el7

Complete!
Waiting for the cluster to become healthy .
Updating remote master nodes
CreateSSH /root/.ssh/id_rsa root
Updating the master node: master2.example.com
Successfully updated the master node: master2.example.com
Updating the master node: master3.example.com
Successfully updated the master node: master3.example.com
The cluster has been updated successfully
Please update the worker nodes in your cluster and do the following:
1. On Master:  kubectl drain worker1.example.com --ignore-daemonsets
2. On Worker1: yum install -y \
   kubeadm-1.12.7-1.1.2.el7 kubelet-1.12.7-1.1.2.el7 \
   kubectl-1.12.7-1.1.2.el7 kubeadm-ha-setup-0.0.2-1.0.21.el7
3. On Worker1: systemctl restart kubelet
4. On Master:  kubectl uncordon worker1.example.com
5.
Verify the update on master node: kubectl get nodes
Optionally, you can override the default container registry choice during the errata release update by specifying the --registry option:
# kubeadm-ha-setup update --registry container-registry-phx.oracle.com
-
Verify that your master nodes have been updated correctly before proceeding to update the worker nodes:
# kubectl get nodes
NAME                 STATUS  ROLES   AGE  VERSION
master1.example.com  Ready   master  17m  v1.12.7+1.1.2.el7
master2.example.com  Ready   master  15m  v1.12.7+1.1.2.el7
master3.example.com  Ready   master  15m  v1.12.7+1.1.2.el7
worker1.example.com  Ready   <none>  13m  v1.12.5+2.1.1.el7
-
Use the
kubectl
tool to drain each of your worker nodes from the cluster:#
kubectl drain
node/worker1.example.com cordonedworker1.example.com
--ignore-daemonsetsCheck that the worker nodes are unable to accept any further scheduling or new pods:
#
kubectl get nodes
Note that a node that has been drained should have its status set to
SchedulingDisabled
. -
On each of the worker nodes, upgrade the required packages to the latest versions and restart the kubelet service:
# yum update kubeadm kubelet kubectl kubeadm-ha-setup
# systemctl restart kubelet
-
Now that the upgrades are complete for each worker node, uncordon them using the kubectl tool from the master cluster:
# kubectl uncordon worker1.example.com
node/worker1.example.com uncordoned
Check that the worker nodes are now able to accept new schedules and pods:
# kubectl get nodes
NAME                 STATUS  ROLES   AGE  VERSION
master1.example.com  Ready   master  17m  v1.12.7+1.1.2.el7
master2.example.com  Ready   master  15m  v1.12.7+1.1.2.el7
master3.example.com  Ready   master  15m  v1.12.7+1.1.2.el7
worker1.example.com  Ready   <none>  13m  v1.12.7+1.1.2.el7
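After uncordoning, every node should report the new version. One way to check this mechanically is to count the distinct entries in the VERSION column; the here-document below is a captured sample standing in for live output, and on the cluster you would pipe kubectl get nodes into the same filter:

```shell
distinct=$(awk 'NR>1 {print $NF}' <<'EOF' | sort -u | wc -l
NAME                 STATUS  ROLES   AGE  VERSION
master1.example.com  Ready   master  17m  v1.12.7+1.1.2.el7
master2.example.com  Ready   master  15m  v1.12.7+1.1.2.el7
master3.example.com  Ready   master  15m  v1.12.7+1.1.2.el7
worker1.example.com  Ready   <none>  13m  v1.12.7+1.1.2.el7
EOF
)
if [ "$distinct" -eq 1 ]; then
  echo "cluster is at a single version"
else
  echo "found $distinct distinct versions"
fi
```

A count greater than one means at least one node is still running the previous errata release and needs the worker update steps repeated.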
Recover from Errata Release Update Failures
If the update fails to complete successfully, you will need to do a full cluster restore from backup. Note that the cluster will not be responsive to new commands until the restore process is complete.
-
Check which of the required packages were updated on each node:
# yum list installed kubeadm kubelet kubectl
-
Downgrade each of the individual packages that has already been updated to the previous errata version. For example, to downgrade the kubeadm package:
# yum downgrade kubeadm
Note
Do not downgrade the kubeadm-ha-setup package on your master nodes, as the latest version is always designed to support errata release update recovery.
-
Follow the restore steps in Section 4.3, “Cluster Backup and Restore”, but add the --force flag to override any version checks:
# kubeadm-ha-setup restore /backups/master-backup-v1.12.5-2-1544442719.tar --force
-
When recovery is complete, you may re-attempt the High Availability cluster update.