3.3 Setting Up the Master Cluster

Before you begin, ensure you have satisfied the requirements in Section 3.2.5, “Oracle Container Registry Requirements”. Then on all the hosts that you are configuring as master nodes, install the kubeadm and kubeadm-ha-setup packages and their dependencies:

# yum install kubeadm kubelet kubectl kubeadm-ha-setup

Define the nodes in your high availability master cluster before proceeding further. A template configuration file is provided at /usr/local/share/kubeadm/kubeadm-ha/ha.yaml on each node in the master cluster; copy it to your home directory on the node where you intend to run the setup:

# cp /usr/local/share/kubeadm/kubeadm-ha/ha.yaml ~/ha.yaml    

The first step is to specify the server IP address of each node in the master cluster. Exactly three nodes must be defined in this cluster, and each must have a unique hostname:

clusters:
- name: master
  vip: 192.0.2.13
  nodes:
  - 192.0.2.10
  - 192.0.2.11
  - 192.0.2.12

Your cluster's vip is the virtual IP address used by the keepalived service for your cluster. This service is included by default with Oracle Linux 7, and you can find out more about it in Oracle® Linux 7: Administrator's Guide.
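
If you need to confirm which master node currently holds the virtual IP, one quick check (a sketch using the example vip from this section) is to look for the address on each node:

# ip -4 addr show | grep 192.0.2.13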

Whenever you use kubeadm-ha-setup, all master nodes in your cluster must have password-less, key-based SSH access to the other master nodes. You can configure SSH keys for this by following the instructions in Oracle® Linux 7: Administrator's Guide.
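
For example, a minimal way to set this up, assuming you run kubeadm-ha-setup as root and use the default key path shown below, is to generate a key pair on the node where you intend to run the tool and copy the public key to each of the other master nodes. Leave the passphrase empty, or use ssh-agent, so that the tool can connect without prompting:

# ssh-keygen -t rsa -f /root/.ssh/id_rsa
# ssh-copy-id root@192.0.2.11
# ssh-copy-id root@192.0.2.12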

You must define the SSH private key in the private_key variable and the remote user in the user variable:

  private_key: /root/.ssh/id_rsa
  user: root

You can optionally define a pod_cidr for your pod network. By default, this is set to a private IP address range reserved for pod networking:

  pod_cidr: 10.244.0.0/16

Set the image variable to point at the Oracle Container Registry or an Oracle Container Registry mirror so that the setup tool can pull the container images for the current release. See Section 3.2.5.1, “Using an Oracle Container Registry Mirror” for more information on using a mirror:

  image: container-registry.oracle.com/kubernetes
  k8sversion: v1.12.5
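
Pulling these settings together, a complete ~/ha.yaml for this example might look like the following. This layout is a sketch assembled from the fragments shown above; the exact nesting is assumed, so treat the template you copied as the authoritative structure:

clusters:
- name: master
  vip: 192.0.2.13
  nodes:
  - 192.0.2.10
  - 192.0.2.11
  - 192.0.2.12
  private_key: /root/.ssh/id_rsa
  user: root
  pod_cidr: 10.244.0.0/16
  image: container-registry.oracle.com/kubernetes
  k8sversion: v1.12.5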

Unlike kubeadm-setup.sh up, you run kubeadm-ha-setup up on only one Kubernetes node in the master cluster; it applies these settings and automatically provisions the other master nodes.

As root, run kubeadm-ha-setup up with your configuration file to provision the master cluster:

# kubeadm-ha-setup up ~/ha.yaml
Cleaning up ...
Reading configuration file /usr/local/share/kubeadm/kubeadm-ha/ha.yaml ...
CreateSSH /root/.ssh/id_rsa root

Checking 192.0.2.10
status 0

Checking 192.0.2.11
status 0

Checking 192.0.2.12
status 0

Configuring keepalived for HA ...
success
success
Setting up first master ... (maximum wait time 185 seconds)
[init] using Kubernetes version: v1.12.5
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, 
depending on the speed of your internet connection
[preflight/images] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names 
[master1.example.com localhost] and IPs [127.0.0.1 ::1 192.0.2.10]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names 
[master1.example.com localhost] and IPs [192.0.2.10 127.0.0.1 ::1 192.0.2.10]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names 
[master1.example.com kubernetes kubernetes.default 
kubernetes.default.svc kubernetes.default.svc.cluster.local] and
 IPs [10.96.0.1 192.0.2.10 192.0.2.10 192.0.2.10 192.0.2.11 192.0.2.12 192.0.2.10]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver 
to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager 
to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods 
from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 27.004111 seconds
[uploadconfig] storing the configuration used in 
ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" 
in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master1.example.com as master 
by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master1.example.com as master 
by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" 
to the Node API object "master1.example.com" as an annotation
[bootstraptoken] using token: ixxbh9.zrtxo7jwo1uz2ssp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens 
to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller 
automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation 
for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm-ha-setup join container-registry.oracle.com/kubernetes:v1.12.5 192.0.2.10:6443 \
 --token ixxbh9.zrtxo7jwo1uz2ssp \
--discovery-token-ca-cert-hash \
sha256:6459031d2993f672f5a47f1373f009a3ce220ceddd6118f14168734afc0a43ad


Attempting to send file to:  192.0.2.11:22
Attempting to send file to:  192.0.2.12:22
Setting up master on 192.0.2.11
[INFO] 192.0.2.11 added   
Setting up master on 192.0.2.12
[INFO] 192.0.2.12 added   
Installing flannel and dashboard ...
[SUCCESS] Complete synchronization between nodes may take a few minutes.

Note

You should back up the ~/ha.yaml file to shared or external storage in case you need to recreate the cluster at a later date.
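
For example, you might copy the file to another host over SSH; the host name and destination path below are placeholders:

# scp ~/ha.yaml backup.example.com:/backup/ha.yaml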

Configure Load Balancing

To support a load balancer as part of your high availability master cluster configuration, set its IP address as the loadbalancer value in your ~/ha.yaml file:

 loadbalancer: 192.0.2.15

The loadbalancer value is applied as part of the setup process when you include the --lb option:

# kubeadm-ha-setup up ~/ha.yaml --lb

This configuration step is optional, but if you include it, ensure that port 6443 is open on all of your master nodes. See Section 3.2.7, “Firewall and iptables Requirements”.
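
If your master nodes use the default firewalld service, one way to open this port on each of them is shown below; adapt this if you manage iptables rules directly:

# firewall-cmd --add-port=6443/tcp --permanent
# firewall-cmd --reload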

Preparing to Use Kubernetes as a Regular User

To use the Kubernetes cluster as a regular user, perform the following steps on each of the nodes in the master cluster:

  1. Create the .kube subdirectory in your home directory:

    $ mkdir -p $HOME/.kube
  2. Create a copy of the Kubernetes admin.conf file in the .kube directory:

    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  3. Change the ownership of the file to match your regular user profile:

    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Export the path to the file for the KUBECONFIG environment variable:

    $ export KUBECONFIG=$HOME/.kube/config

     You cannot use the kubectl command unless the path to this file is set in this environment variable. Remember to export the KUBECONFIG variable for each subsequent login so that the kubectl and kubeadm commands use the correct admin.conf file; otherwise, these commands might not behave as expected after a reboot or a new login. For instance, append the export line to your .bashrc:

    $ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
  5. Verify that you can use the kubectl command.

     Kubernetes runs many of the services that manage cluster configuration as Docker containers, deployed as Kubernetes pods in the kube-system namespace. You can view these pods by running the following command on the master node:

    $ kubectl get pods -n kube-system
    NAME                                   READY   STATUS             RESTARTS   AGE
    coredns-6c77847dcf-mxjqt               1/1     Running            0          12m
    coredns-6c77847dcf-s6pgz               1/1     Running            0          12m
    etcd-master1.example.com               1/1     Running            0          11m
    etcd-master2.example.com               1/1     Running            0          11m
    etcd-master3.example.com               1/1     Running            0          11m
    kube-apiserver-master1.example.com     1/1     Running            0          11m
    kube-apiserver-master2.example.com     1/1     Running            0          11m
    kube-apiserver-master3.example.com     1/1     Running            0          11m
    kube-controller-master1.example.com    1/1     Running            0          11m
    kube-controller-master2.example.com    1/1     Running            0          11m
    kube-controller-master3.example.com    1/1     Running            0          11m
    kube-flannel-ds-z77w9                  1/1     Running            0          12m
    kube-flannel-ds-n8t99                  1/1     Running            0          12m
    kube-flannel-ds-pkw2l                  1/1     Running            0          12m
    kube-proxy-zntpv                       1/1     Running            0          12m
    kube-proxy-p5kfv                       1/1     Running            0          12m
    kube-proxy-x7rfh                       1/1     Running            0          12m
    kube-scheduler-master1.example.com     1/1     Running            0          11m
    kube-scheduler-master2.example.com     1/1     Running            0          11m
    kube-scheduler-master3.example.com     1/1     Running            0          11m
    kubernetes-dashboard-64458f66b6-2l5n6  1/1     Running            0          12m
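
     As a further check, you can verify that all three master nodes have registered with the cluster; each of the nodes defined in ~/ha.yaml should be listed with a Ready status once synchronization between the nodes completes:

     $ kubectl get nodes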