The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

Chapter 4 Kubernetes Administration and Configuration

This chapter describes how to configure and administer your Kubernetes deployment.

4.1 Kubernetes and iptables Rules

Kubernetes uses iptables to handle many networking and port forwarding rules, so be careful about running services that might create conflicting iptables rules. You can check the current rules by running iptables-save, which dumps the rule set to STDOUT.
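
For example, to check just the default policy on the FORWARD chain rather than reading the full dump, you can filter the output (a quick check using standard shell tools):

# iptables-save | grep '^:FORWARD'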

If you intend to expose application services externally, by using either the NodePort or LoadBalancer service types, traffic forwarding must be enabled in your iptables rule set. If you find that you are unable to access a service from outside of the network used by the pod where your application is running, check that your iptables rule set does not contain a rule similar to the following:

:FORWARD DROP [0:0]

If you have a rule to drop all forwarding traffic, you may need to run:

# iptables -P FORWARD ACCEPT

If you are running iptables as a service instead of firewalld, you can save the current iptables configuration so that it persists across reboots. To do this, run:

# iptables-save > /etc/sysconfig/iptables

Note that you must have the iptables-services package installed for this to work. Oracle recommends using the default firewalld service as this provides a more consistent experience and allows you to make changes to the firewall configuration without flushing existing rules and reloading the firewall.
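
To check which firewall service is currently active on a node (a quick check using systemctl; each unit reports active or inactive):

# systemctl is-active firewalld iptables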

Nodes running applications that need to communicate directly between pods and that are IP aware may require additional custom iptables configuration to bypass the default firewalld masquerading rules. For example, setting these two iptables rules on the nodes running a server application on IP address 192.0.2.15 and a client application on IP address 192.0.2.16 enables direct communication between them:

# iptables -t nat -I POST_public_allow -s 192.0.2.15/32 -d 192.0.2.16/32 -j RETURN
# iptables -t nat -I POST_public_allow -s 192.0.2.16/32 -d 192.0.2.15/32 -j RETURN
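
You can confirm that the rules were inserted at the top of the chain by listing it with rule numbers (this assumes the default firewalld POST_public_allow chain used in the example above):

# iptables -t nat -L POST_public_allow -n --line-numbers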

4.2 Using Kubernetes With a Proxy Server

In environments where a proxy server is configured for access to internet services, such as Docker Hub or the Oracle Container Registry, you may need to perform several configuration steps to get Kubernetes to install and run correctly.

  1. Ensure that the Docker engine startup configuration on each node in the cluster is configured to use the proxy server. For instance, create a systemd service drop-in file at /etc/systemd/system/docker.service.d/http-proxy.conf with the following contents:

    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:80/"
    Environment="HTTPS_PROXY=https://proxy.example.com:443/"

    Replace http://proxy.example.com:80/ with the URL of your HTTP proxy service. If you also use an HTTPS proxy and have specified it as well, replace https://proxy.example.com:443/ with the URL and port of that service. If you have made a change to your Docker systemd service configuration, run the following commands (a quick way to verify that the new settings took effect is shown after this list):

    # systemctl daemon-reload; systemctl restart docker
  2. You may need to set the http_proxy or https_proxy environment variables to be able to run other commands on any of the nodes in your cluster. For example:

    # export http_proxy="http://proxy.example.com:80/"
    # export https_proxy="https://proxy.example.com:443/"
  3. Disable the proxy configuration for the local host and any node IPs in the cluster:

    # export no_proxy="127.0.0.1,192.0.2.10,192.0.2.11,192.0.2.12"
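
After completing these steps, one quick way to verify that the Docker engine picked up the proxy settings from the drop-in file is to query the unit's effective environment; the output should include the HTTP_PROXY and HTTPS_PROXY values that you configured:

# systemctl show --property=Environment docker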

These steps should be sufficient for the deployment to function normally. Using a transparent proxy that does not require configuration on the host and that ignores internal network requests can reduce the complexity of the configuration and may help to avoid unexpected behavior.

4.3 Cluster Backup and Restore

4.3.1 Single Master Cluster

The kubeadm-setup.sh script enables cluster backup and restore functionality so that you can easily protect your Kubernetes deployment from a failure of the master node in the cluster. Cluster status and configuration data is stored in the Cluster State Store, also referred to as etcd.

For the backup and restore processes to work properly, there are some basic requirements:

  • The hostname and IP address of the master node being restored must match the hostname and IP address of the master node that was backed up. The usual use case for restore is after system failure, so the restore process expects a matching system for the master node with a fresh installation of the Docker engine and the Kubernetes packages.

  • The master node must be tainted so that it is unable to run any workloads or containers other than those that the master node requires. This is the default configuration if you used the kubeadm-setup.sh script to set up your environment. The backup process does not back up any containers running on the master node other than the containers specific to managing the Kubernetes cluster.

  • The backup command must be run on the master node.

  • Any Docker engine configuration applied to the master node prior to the backup process must be manually replicated on the node on which you intend to run the restore operation. You may need to manually configure your Docker storage driver and proxy settings before running a restore operation.

  • The backup command checks for a minimum of 100 MB of free disk space at the specified backup location. If the space is not available, the backup command exits with an error. You can confirm the available space beforehand, as shown in the example after this list.

  • A restore can only function correctly using the backup file for a Kubernetes cluster running the same version of Kubernetes. You cannot restore a backup file for a Kubernetes 1.7.4 cluster using the Kubernetes 1.8.4 tools.
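
Before you run a backup, you can quickly confirm the version and free disk space requirements from the master node. This is a minimal check; substitute /backups with your own backup location:

# kubectl version --short
# df -h /backups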

The backup command requires that you stop the cluster during the backup process. Containers running on the worker nodes are unaffected during the backup process. The following steps describe how to create a backup file for the master node.

Back up the cluster configuration and state
  1. Stop the cluster.

    To back up the cluster configuration and state, the cluster must be stopped so that no changes can occur in state or configuration during the backup process. While the cluster is stopped, the worker nodes continue to run independently of the cluster, allowing the containers hosted on each of these nodes to continue to function. To stop the cluster, on the master node, run:

    # kubeadm-setup.sh stop
    Stopping kubelet now ...
    Stopping containers now ...
  2. Run kubeadm-setup.sh backup and specify the directory where the backup file should be stored.

    # kubeadm-setup.sh backup /backups
    Using container-registry.oracle.com/etcd:3.2.24
    Checking if container-registry.oracle.com/etcd:3.2.24 is available
    376ebb3701caa1e3733ef043d0105569de138f3e5f6faf74c354fa61cd04e02a 
    /var/run/kubeadm/backup/etcd-backup-1544442719.tar
    e8e528be930f2859a0d6c7b953cec4fab2465278376a59f8415a430e032b1e73 
    /var/run/kubeadm/backup/k8s-master-0-1544442719.tar
    Backup is successfully stored at /backups/master-backup-v1.12.5-2-1544442719.tar ...
    You can restart your cluster now by doing: 
    # kubeadm-setup.sh restart

    Substitute /backups with the path to a directory where you wish to store the backed up data for your cluster.

    Each run of the backup command creates a tar file that is timestamped so that you can easily restore the most recent backup file. The backup file also contains a sha256 checksum that is used to verify the validity of the backup file during restore. The backup command instructs you to restart the cluster when you have finished backing up.

  3. Restart the cluster.

    # kubeadm-setup.sh restart
    Restarting containers now ...
    Detected node is master ...
    Checking if env is ready ...
    Checking whether docker can pull busybox image ...
    Checking access to container-registry.oracle.com ...
    Trying to pull repository container-registry.oracle.com/pause ... 
    3.1: Pulling from container-registry.oracle.com/pause
    Digest: sha256:802ef89b9eb7e874a76e1cfd79ed990b63b0b84a05cfa09f0293379ac0261b49
    Status: Image is up to date for container-registry.oracle.com/pause:3.1
    Checking firewalld settings ...
    Checking iptables default rule ...
    Checking br_netfilter module ...
    Checking sysctl variables ...
    Restarting kubelet ...
    Waiting for node to restart ...
    ....
    Master node restarted. Complete synchronization between nodes may take a few minutes.

    Checks similar to those performed during cluster setup are run when the cluster is restarted, to ensure that no environment changes have occurred that could prevent the cluster from functioning correctly. Once the cluster has started, it can take a few minutes for the nodes within the cluster to report status and for the cluster to settle back to normal operation.

A restore operation is typically performed on a freshly installed host, but it can be run on an existing setup as long as any pre-existing configuration is removed first. The restore process assumes that the Docker engine is configured in the same way as the original master node. The Docker engine must be configured to use the same storage driver, and if proxy configuration is required, you must set this up manually before restoring, as described in the following steps.
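
For example, before running the restore you can compare the storage driver and any Docker drop-in configuration on the replacement host with the values used on the original master node (standard Docker and systemd file locations; adjust the paths if you keep your configuration elsewhere):

# docker info --format '{{.Driver}}'
# ls /etc/systemd/system/docker.service.d/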

Restore the cluster configuration and state
  1. On the master host, ensure that the latest Docker and Kubernetes versions are installed and that the master node IP address and hostname match the IP address and hostname used before failure. The kubeadm package pulls in all of the required dependencies, including the correct version of the Docker engine.

    # yum install kubeadm kubectl kubelet
  2. Run the kubeadm-setup.sh restore command.

    # kubeadm-setup.sh restore /backups/master-backup-v1.12.5-2-1544442719.tar
    Checking sha256sum of the backup files ...
    /var/run/kubeadm/backup/etcd-backup-1544442719.tar: OK
    /var/run/kubeadm/backup/k8s-master-0-1544442719.tar: OK
    Restoring backup from /backups/master-backup-v1.12.5-2-1544442719.tar ...
    Using 3.2.24
    etcd cluster is healthy ...
    Cleaning up etcd container ...
    27148ae6765a546bf45d527d627e5344130fb453c4a532aa2f47c54946f2e665
    27148ae6765a546bf45d527d627e5344130fb453c4a532aa2f47c54946f2e665
    Restore successful ...
    You can restart your cluster now by doing: 
    # kubeadm-setup.sh restart
    

    Substitute /backups/master-backup-v1.12.5-2-1544442719.tar with the full path to the backup file that you wish to restore.

  3. Restart the cluster.

    # kubeadm-setup.sh restart
    Restarting containers now ...
    Detected node is master ...
    Checking if env is ready ...
    Checking whether docker can pull busybox image ...
    Checking access to container-registry.oracle.com ...
    Trying to pull repository container-registry.oracle.com/pause ... 
    3.1: Pulling from container-registry.oracle.com/pause
    Digest: sha256:802ef89b9eb7e874a76e1cfd79ed990b63b0b84a05cfa09f0293379ac0261b49
    Status: Image is up to date for container-registry.oracle.com/pause:3.1
    Checking firewalld settings ...
    Checking iptables default rule ...
    Checking br_netfilter module ...
    Checking sysctl variables ...
    Enabling kubelet ...
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service 
    to /etc/systemd/system/kubelet.service.
    Restarting kubelet ...
    Waiting for node to restart ...
    ....+++++
    Restarting pod kube-flannel-ds-glwgx
    pod "kube-flannel-ds-glwgx" deleted
    Restarting pod kube-flannel-ds-jz8sf
    pod "kube-flannel-ds-jz8sf" deleted
    Master node restarted. Complete synchronization between nodes may take a few minutes.
    
  4. Copy the Kubernetes admin.conf file to your home directory:

    $ sudo cp /etc/kubernetes/admin.conf $HOME/ 

    Change the ownership of the file to match your regular user profile:

    $ sudo chown $(id -u):$(id -g) $HOME/admin.conf

    Export the path to the file for the KUBECONFIG environment variable:

    $ export KUBECONFIG=$HOME/admin.conf

    You cannot use the kubectl command unless the path to this file is set in this environment variable. Remember to export the KUBECONFIG variable for each subsequent login so that the kubectl and kubeadm commands use the correct admin.conf file; otherwise, you might find that these commands do not behave as expected after a reboot or a new login. For instance, append the export line to your .bashrc:

    $ echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
  5. Check that your cluster has been properly restored. Use kubectl to check on the status of the nodes within the cluster and to check any existing configuration. For example:

    $ kubectl get nodes
    NAME                  STATUS    ROLES   AGE       VERSION
    master.example.com    Ready     master  1h        v1.12.5+2.1.1.el7
    worker1.example.com   Ready     <none>  1h        v1.12.5+2.1.1.el7
    worker2.example.com   Ready     <none>  1h        v1.12.5+2.1.1.el7
    
    $ kubectl get pods
    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-4234284026-g8g95   1/1       Running   0          10m
    nginx-deployment-4234284026-k1h8w   1/1       Running   0          10m
    nginx-deployment-4234284026-sbkqr   1/1       Running   0          10m
    

4.3.2 High Availability Cluster

The kubeadm-ha-setup tool enables cluster backup and restore functionality so that you can easily protect your Kubernetes deployment from a failure of the master node in the cluster. Cluster status, configuration data and snapshots are stored in the Cluster State Store, also referred to as etcd.

For the backup and restore processes to work properly, there are some basic requirements:

  • The hostname and IP address of the master node being restored must match the hostname and IP address of the master node that was backed up. The usual use case for restore is after system failure, so the restore process expects a matching system for each master node with a fresh installation of the Docker engine and the Kubernetes packages.

  • A restore can only function correctly using a backup of a Kubernetes high availability cluster running the same version of Kubernetes. The Docker engine versions must also match.

  • There must be a dedicated shared storage directory that is accessible to all nodes in the master cluster during the backup and restore phases.

  • All nodes in the master cluster must have password-less, key-based SSH authentication configured for root access to all other nodes in the master cluster whenever kubeadm-ha-setup is used, for example by distributing SSH keys as shown in the sketch after this list.
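
A minimal sketch of distributing keys from one master node to the others, assuming the hostnames master2.example.com and master3.example.com (repeat the ssh-copy-id step from each node in the master cluster to every other node):

# ssh-keygen -t rsa
# ssh-copy-id root@master2.example.com
# ssh-copy-id root@master3.example.com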

A full restore is only required if a period of downtime included more than one node in the master cluster. Note that a full restore disrupts master node availability throughout the duration of the restore process.

Back up the cluster configuration and state

  1. Run kubeadm-ha-setup backup and specify the directory where the backup file should be stored.

    # kubeadm-ha-setup backup /backups
    Disaster Recovery
    Reading configuration file /usr/local/share/kubeadm/run/kubeadm/ha.yaml ...
    CreateSSH /root/.ssh/id_rsa root
    Backup  /backup
    Checking overall clusters health ...
    Performing backup on 192.0.2.10
    Performing backup on 192.0.2.11
    Performing backup on 192.0.2.13
    {"level":"info","msg":"created temporary db file","path":"/var/lib/etcd/etcd-snap.db.part"}
    {"level":"info","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
    {"level":"info","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","took":"110.033606ms"}
    {"level":"info","msg":"saved","path":"/var/lib/etcd/etcd-snap.db"}
    [Backup is stored at /backup/fulldir-1544115826/fullbackup-1544115827.tar]

    Substitute /backups with the path to the network share directory where you wish to store the backup data for your master cluster.

    Each run of the backup command creates a tar file that is timestamped so that you can easily restore the most recent backup file. The backup file also contains a sha256 checksum that is used to verify the validity of the backup file during restore. The backup command instructs you to restart the cluster when you have finished backing up.

A restore operation is typically performed on a freshly installed host, but can be run on an existing setup, as long as any pre-existing setup configuration is removed.

The restore process assumes that the IP address configuration for each node in the master cluster matches the configuration in the backed up data. If you are restoring on one or more freshly installed hosts, make sure that the IP addressing matches the address assigned to the host or hosts that you are replacing.

The restore process assumes that the Docker engine is configured in the same way as the original master node. The Docker engine must be configured to use the same storage driver, and if proxy configuration is required, you must set this up manually before restoring, as described in the following steps.

Note

A full restore of the high availability master cluster disrupts service availability for the duration of the restore operation.

Restore the cluster configuration and state

  1. On the master host, ensure that the latest Docker and Kubernetes versions are installed and that the master node IP address and hostname match the IP address and hostname used before failure. The kubeadm package pulls in all of the required dependencies, including the correct version of the Docker engine.

    # yum install kubeadm kubectl kubelet kubeadm-ha-setup
  2. Run the kubeadm-ha-setup restore command.

    # kubeadm-ha-setup restore /backups/fulldir-1544115826/fullbackup-1544115827.tar
    Disaster Recovery
    Reading configuration file /usr/local/share/kubeadm/run/kubeadm/ha.yaml ...
    CreateSSH /root/.ssh/id_rsa root
    Restore  /share/fulldir-1544115826/fullbackup-1544115827.tar 
    with binary /usr/bin/kubeadm-ha-setup
    Checking etcd clusters health (this will take a few mins) ...
    Cleaning up node 10.147.25.195
    Cleaning up node 10.147.25.196
    Cleaning up node 10.147.25.197
    file to be restored from:  /share/fulldir-1544115826/backup-10.147.25.195-1544115826.tar
    Configuring keepalived for HA ...
    success
    success
    file to be restored from:  /share/fulldir-1544115826/backup-10.147.25.196-1544115826.tar
    [INFO]  /usr/local/share/kubeadm/kubeadm-ha/etcd-extract.sh 
    /share/fulldir-1544115826/fullbackup-1544115827.tar 10.147.25.196:22  retrying ...
    file to be restored from:  /share/fulldir-1544115826/backup-10.147.25.197-1544115827.tar
    [INFO]  /usr/bin/kubeadm-ha-setup etcd 
    fullrestore 10.147.25.197 10.147.25.197:22  retrying ...
    [COMPLETED] Restore completed, cluster(s) may take a few minutes to get backup!
    

    Substitute /backups/fulldir-1544115826/fullbackup-1544115827.tar with the full path to the backup file that you wish to restore. Note that the backup directory and file must be accessible to all master nodes in the cluster during the restore process.

    If the script detects that all three master nodes are currently healthy, you need to confirm that you wish to proceed:

    [WARNING] All nodes are healthy !!! This will perform a FULL CLUSTER RESTORE
    pressing [y] will restore cluster to the state stored 
    in /share/fulldir-1544115826/fullbackup-1544115827.tar

    Alternatively, if the script detects that more than one master node is unavailable, it prompts you before proceeding with a full cluster restore.

  3. Copy the Kubernetes admin.conf file to your home directory:

    $ sudo cp /etc/kubernetes/admin.conf $HOME/ 

    Change the ownership of the file to match your regular user profile:

    $ sudo chown $(id -u):$(id -g) $HOME/admin.conf

    Export the path to the file for the KUBECONFIG environment variable:

    $ export KUBECONFIG=$HOME/admin.conf

    You cannot use the kubectl command unless the path to this file is set in this environment variable. Remember to export the KUBECONFIG variable for each subsequent login so that the kubectl and kubeadm commands use the correct admin.conf file; otherwise, you might find that these commands do not behave as expected after a reboot or a new login. For instance, append the export line to your .bashrc:

    $ echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
  4. Check that your cluster has been properly restored. Use kubectl to check on the status of the nodes within the cluster and to check any existing configuration. For example:

    $ kubectl get nodes
    NAME                  STATUS    ROLES   AGE      VERSION
    master1.example.com   Ready     master  1h       v1.12.5+2.1.1.el7
    master2.example.com   Ready     master  1h       v1.12.5+2.1.1.el7
    master3.example.com   Ready     master  1h       v1.12.5+2.1.1.el7
    worker2.example.com   Ready     <none>  1h       v1.12.5+2.1.1.el7
    worker3.example.com   Ready     <none>  1h       v1.12.5+2.1.1.el7

4.4 Kubernetes Dashboard

When the kubeadm-setup.sh script or kubeadm-ha-setup utility is used to install master nodes in the Kubernetes cluster, the Kubernetes Dashboard container is created as part of the kube-system namespace. This provides an intuitive graphical user interface to Kubernetes that can be accessed using a standard web browser.

The Kubernetes Dashboard is described in the Kubernetes documentation at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/.

To access the Dashboard, you can run a proxy service that allows traffic on the node where it is running to reach the internal pod where the Dashboard application is running. This is achieved by running the kubectl proxy service:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

The Dashboard is available on the node where the proxy is running for as long as the proxy runs. To exit the proxy, use Ctrl+C. You can run this as a systemd service and enable it so that it is always available after subsequent reboots:

# systemctl start kubectl-proxy
# systemctl enable kubectl-proxy

This systemd service requires the /etc/kubernetes/admin.conf file to be present in order to run. If you want to change the port used by the proxy service, or to add other proxy configuration parameters, edit the systemd drop-in file at /etc/systemd/system/kubectl-proxy.service.d/10-kubectl-proxy.conf; an example override is sketched after the command below. You can get more information about the configuration options available for the kubectl proxy service by running:

$ kubectl proxy --help
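
For example, a drop-in that overrides the service's command line so that the proxy listens on port 8002 might look like the following. This is a sketch only; the packaged unit may already define its own variables or options, so inspect it first with systemctl cat kubectl-proxy and adjust the ExecStart line accordingly:

[Service]
ExecStart=
ExecStart=/usr/bin/kubectl proxy --port=8002

After editing the drop-in file, run systemctl daemon-reload and systemctl restart kubectl-proxy for the change to take effect.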

To access the Dashboard, open a web browser on the node where the proxy is running and navigate to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login.

To log in, you must authenticate using a token. See https://github.com/kubernetes/dashboard/tree/master/docs/user/access-control for more information. If you have not set up specific tokens for this purpose, you can use a token allocated to a service account, such as the namespace-controller. Run the following command to obtain the token value for the namespace-controller:

$ kubectl -n kube-system describe $(kubectl -n kube-system \
   get secret -n kube-system -o name | grep namespace) | grep token:
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2Nvd\
            W50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSI\
            sImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29ud\
            HJvbGxlci10b2tlbi1zeHB3ayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1h\
            Y2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFj\
            Y291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM4OTk1MWIyLWJlNDYtMTFlNy04ZGY2LTA4MDAyNzY\
            wOTVkNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY2\
            9udHJvbGxlciJ9.aL-9sRGic_b7XW2eOsDfxn9QCCobBSU41J1hMbT5D-Z86iahl1mQnV60zEKOg-45\
            5pLO4aW_RSETxxCp8zwaNkbwoUF1rbi17FMR_zfhj9sfNKzHYO1tjYf0lN452k7_oCkJ7HR2mzCHmw-\
            AygILeO0NlIgjxH_2423Dfe8In9_nRLB_PzKvlEV5Lpmzg4IowEFhawRGib3R1o74mgIb3SPeMLEAAA

Copy and paste the entire value of the token into the token field on the login page to authenticate.

If you need to access the Dashboard remotely, Oracle recommends using SSH tunneling to do port forwarding from your localhost to the proxy node, as described in the following section.

SSH Tunneling

The easiest option is to use SSH tunneling to forward a port on your local system to the port configured for the proxy service running on the node that you wish to access. This method retains some security as the HTTP connection is encrypted by virtue of the SSH tunnel and authentication is handled by your SSH configuration. For example, on your local system run:

$ ssh -L 8001:127.0.0.1:8001 192.0.2.10

Substitute 192.0.2.10 with the IP address of the host where you are running kubectl proxy. Once the SSH connection is established, you can open a browser on your localhost and navigate to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login to access the Dashboard hosted in the remote Kubernetes cluster. Use the same token information to authenticate as if you were connecting to the Dashboard locally.
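
If you do not need an interactive shell on the proxy node, you can keep the tunnel open without one by adding the standard OpenSSH -N option:

$ ssh -N -L 8001:127.0.0.1:8001 192.0.2.10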

4.5 Removing Worker Nodes from the Cluster

4.5.1 Single Master Cluster

At any point, you can remove a worker node from the cluster. Use the kubeadm-setup.sh down command to completely remove all of the Kubernetes components installed and running on the system. Since this operation is destructive, the script warns you when you attempt to do this on a worker node and requires confirmation to continue with the action. The script also reminds you that you need to remove the node from the cluster configuration:

# kubeadm-setup.sh down
[WARNING] This action will RESET this node !!!!
          Since this is a worker node, please also run the following on the master (if not already done)
          # kubectl delete node worker1.example.com
          Please select 1 (continue) or 2 (abort) :
1) continue
2) abort
#? 1
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The cluster must be updated so that it no longer looks for a node that you have decommissioned. Remove the node from the cluster using the kubectl delete node command:

$ kubectl delete node worker1.example.com
node "test2.example.com" deleted

Substitute worker1.example.com with the name of the worker node that you wish to remove from the cluster.
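
If the worker node is still reachable and running workloads when you decommission it, you may prefer to drain it before running kubeadm-setup.sh down, so that its pods are rescheduled gracefully. This optional step uses the standard kubectl drain command and is not part of the scripted procedure:

$ kubectl drain worker1.example.com --ignore-daemonsets --delete-local-data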

If you run the kubeadm-setup.sh down command on the master node, you effectively destroy the entire cluster, and the only way to recover it is to restore from a backup file. The script warns you that this is a destructive action and that you are performing it on the master node. You must confirm the action before you are able to continue:

# kubeadm-setup.sh down
[WARNING] This action will RESET this node !!!!
          Since this is a master node, all of the clusters information will be lost !!!!
          Please select 1 (continue) or 2 (abort) :
1) continue
2) abort
#? 1
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d \
        /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
deleting flannel.1 ip link ...
deleting cni0 ip link ...
removing /var/lib/cni directory ...
removing /var/lib/etcd directory ...
removing /etc/kubernetes directory ...

4.5.2 High Availability Cluster

Removing a worker node from a high availability cluster follows a similar process, but uses the kubeadm-ha-setup down command instead. Since this operation is destructive, the utility warns you when you attempt to do this on a worker node and requires confirmation to continue with the action. The script also reminds you that you need to remove the node from the cluster configuration:

# kubeadm-ha-setup down
[WARNING] This operation will clean up all kubernetes installations on this node
press [y] to continue ...
y
[INFO] Removing interface flannel.1

The cluster must be updated so that it no longer looks for a node that you have decommissioned. Remove the node from the cluster using the kubectl delete node command on any of your master nodes:

$ kubectl delete node worker1.example.com
node "worker1.example.com" deleted

Substitute worker1.example.com with the name of the node that you wish to remove from the cluster.

If you run the kubeadm-ha-setup down command on any of your master nodes, the only way to recover the cluster is to restore from a backup file.