4.5.1 Single Master Cluster

At any point, you can remove a worker node from the cluster. Use the kubeadm-setup.sh down command to completely remove all of the Kubernetes components installed and running on the system. Because this operation is destructive, the script warns you when you attempt it on a worker node and requires confirmation before continuing. The script also reminds you to remove the node from the cluster configuration:

# kubeadm-setup.sh down
[WARNING] This action will RESET this node !!!!
          Since this is a worker node, please also run the following on the master (if not already done)
          # kubectl delete node worker1.example.com
          Please select 1 (continue) or 2 (abort) :
1) continue
2) abort
#? 1
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The cluster must be updated so that it no longer looks for a node that you have decommissioned. Remove the node from the cluster using the kubectl delete node command:

$ kubectl delete node worker1.example.com
node "worker1.example.com" deleted

Substitute worker1.example.com with the name of the worker node that you wish to remove from the cluster.
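The name passed to kubectl delete node must match the name under which the node is registered exactly, so it can be useful to check the node list first. A minimal sketch of such a check, assuming a configured kubectl; the node_registered helper and the node names are illustrative, not part of the tooling:

```shell
#!/bin/sh
# Check whether a node name appears in `kubectl get nodes -o name` output.
# Reads the node list on stdin so the helper can also be exercised
# without a live cluster.
node_registered() {
    grep -q -x "node/$1"
}

# Canned example output; against a live cluster you would instead run:
#   kubectl get nodes -o name | node_registered worker1.example.com
printf 'node/master.example.com\nnode/worker1.example.com\n' \
    | node_registered worker1.example.com && echo "still registered"
```

Once kubectl delete node has run, the same check should fail, confirming that the cluster no longer tracks the decommissioned worker.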

Running the kubeadm-setup.sh down command on the master node effectively destroys the entire cluster; the only way to recover it is to restore from a backup file. The script warns you that this is a destructive action and that you are performing it on the master node, and you must confirm the action before it continues:

# kubeadm-setup.sh down
[WARNING] This action will RESET this node !!!!
          Since this is a master node, all of the clusters information will be lost !!!!
          Please select 1 (continue) or 2 (abort) :
1) continue
2) abort
#? 1
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d \
        /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
deleting flannel.1 ip link ...
deleting cni0 ip link ...
removing /var/lib/cni directory ...
removing /var/lib/etcd directory ...
removing /etc/kubernetes directory ...
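After the reset completes on the master, you can confirm that the state directories listed in the output are really gone before rebuilding the node or restoring from backup. A minimal sketch; the leftover_paths helper is illustrative and not part of kubeadm-setup.sh:

```shell
#!/bin/sh
# Report any of the given paths that still exist after a reset.
# The paths below are the ones deleted in the reset output above.
leftover_paths() {
    for p in "$@"; do
        [ -e "$p" ] && echo "$p"
    done
    return 0
}

# On a freshly reset master this should print nothing.
leftover_paths /etc/kubernetes /var/lib/etcd /var/lib/cni
```

Any path the final command prints was not cleaned up and should be investigated before you rebuild the master or restore the cluster from backup.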