2.4 Setting Up a Worker Node

Repeat these steps on each host that you want to add to the cluster as a worker node.

Install the kubeadm package and its dependencies:

# yum install kubeadm kubelet kubectl
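Before continuing, you can optionally confirm that the packages installed correctly and check the installed version. For example:

```shell
# yum list installed kubeadm kubelet kubectl
# kubeadm version
```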

As root, run the kubeadm-setup.sh join command to add the host as a worker node:

# kubeadm-setup.sh join 192.0.2.10:6443 --token 8tipwo.tst0nvf7wcaqjcj0 \
      --discovery-token-ca-cert-hash \
      sha256:f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346
Checking kubelet and kubectl RPM ...
Starting to initialize worker node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes...
Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy ... 
v1.12.5: Pulling from container-registry.oracle.com/kubernetes/kube-proxy
Digest: sha256:9f57fd95dc9c5918591930b2316474d10aca262b5c89bba588f45c1b96ba6f8b
Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy:v1.12.5
Checking whether docker can run container ...
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service 
to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'join' command ...
[validation] WARNING: kubeadm doesn't fully support multiple API Servers yet
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.0.2.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.0.2.10:6443"
[discovery] Trying to connect to API Server "192.0.2.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.0.2.10:6443"
[discovery] Requesting info from "https://192.0.2.10:6443" again 
to validate TLS against the pinned public key
[discovery] Requesting info from "https://192.0.2.10:6443" again 
to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid 
and TLS certificate validates against pinned roots, will use API Server "192.0.2.10:6443"
[discovery] Successfully established connection with API Server "192.0.2.10:6443"
[discovery] Cluster info signature and contents are valid 
and TLS certificate validates against pinned roots, will use API Server "192.0.2.10:6443"
[discovery] Successfully established connection with API Server "192.0.2.10:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap 
in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags 
to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" 
to the Node API object "worker1.example.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Replace the IP address and port, 192.0.2.10:6443, with the IP address and port of the API Server on the master node. The default port is 6443.
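If you are unsure of the address and port in use, one way to confirm them is to run the following command on the master node:

```shell
$ kubectl cluster-info
```

The first line of the output reports the API Server URL, in a form similar to Kubernetes master is running at https://192.0.2.10:6443.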

Replace the --token value, 8tipwo.tst0nvf7wcaqjcj0, with a valid token for the master node. If you do not have this information, run the following command on the master node to obtain it:

# kubeadm token list
TOKEN                    TTL  EXPIRES                    USAGES                  DESCRIPTION  EXTRA GROUPS
8tipwo.tst0nvf7wcaqjcj0  22h  2018-12-11T03:32:44-08:00  authentication,signing  <none>       system:bootstrappers:kubeadm:default-node-token

By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:

# kubeadm token create
e05e12.3c1096c88cc11720

You can explicitly set the expiry period for a token when you create it by using the --ttl option. This option sets the expiration time of the token relative to the current time, expressed as a duration such as 15m (15 minutes) or 1h (1 hour). A value of 0 means the token never expires; this value is not recommended.
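For example, to create a token on the master node that expires 15 minutes from the current time:

```shell
# kubeadm token create --ttl 15m
```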

Replace the --discovery-token-ca-cert-hash value, f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346, with the SHA256 hash of the public key of the CA certificate on the master node, which is used to validate the token. If you do not have this information, run the following command chain on the master node to obtain it:

#  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
f2a5b22b658683c3634459c8e7617c9d6c080c72dd149f3eb903445efe9d8346
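Alternatively, recent kubeadm releases can generate a fresh token and print the matching discovery hash in a single step with the --print-join-command option:

```shell
# kubeadm token create --print-join-command
```

Note that the printed command is a standard kubeadm join invocation; with this setup script, pass the same address, --token, and --discovery-token-ca-cert-hash values to kubeadm-setup.sh join instead.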

The kubeadm-setup.sh script checks whether the host meets all the requirements before it sets up a worker node. If a requirement is not met, an error message is displayed together with the recommended fix. You should fix the errors before running the script again.

The kubelet systemd service is automatically enabled on the host so that the worker node always starts at boot.
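You can verify this on the worker node with systemctl, which reports enabled for the service:

```shell
# systemctl is-enabled kubelet
```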

After the kubeadm-setup.sh join command completes, run the following command on the master node to check that the worker node has joined the cluster:

$ kubectl get nodes
NAME                  STATUS    ROLES   AGE       VERSION
master.example.com    Ready     master  1h        v1.12.7+1.1.2.el7
worker1.example.com   Ready     <none>  1h        v1.12.7+1.1.2.el7

The output lists all of the nodes in the cluster along with their status.
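Worker nodes initially report <none> in the ROLES column. If you want a role to appear in this listing, you can optionally apply the conventional role label on the master node; the worker name used here is a labeling convention, not a requirement:

```shell
$ kubectl label node worker1.example.com node-role.kubernetes.io/worker=
```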