4.7.2 Scaling Down a Kubernetes Cluster

This procedure shows you how to remove nodes from a Kubernetes cluster.

Warning

Be careful when scaling down the master nodes of your cluster. If you have two master nodes and you scale down to one, the remaining master node becomes a single point of failure: losing it makes the cluster control plane unavailable.
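
The arithmetic behind this warning is quorum-based fault tolerance. Assuming each master node runs an etcd member (typical for this topology), a cluster of N members needs floor(N/2)+1 members for quorum, so it tolerates N minus that many failures. A minimal sketch:

```shell
# Quorum fault tolerance for N masters: quorum = floor(N/2) + 1,
# tolerated failures = N - quorum. (Assumes one etcd member per master.)
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "masters=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

Note that two masters tolerate no more failures than one, which is why scaling down to an even or single-node control plane reduces resilience.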

To scale down a Kubernetes cluster:

  1. From a master node of the Kubernetes cluster, use the kubectl get nodes command to see the master and worker nodes of the cluster.

    $ kubectl get nodes
    NAME                   STATUS   ROLES    AGE     VERSION
    master1.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
    master2.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
    master3.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
    master4.example.com    Ready    master   2m38s   v1.17.9+1.0.1.el7
    worker1.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
    worker2.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
    worker3.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
    worker4.example.com    Ready    <none>   2m38s   v1.17.9+1.0.1.el7
    worker5.example.com    Ready    <none>   2m38s   v1.17.9+1.0.1.el7

    For this example, there are four master nodes in the Kubernetes cluster:

    • master1.example.com

    • master2.example.com

    • master3.example.com

    • master4.example.com

    There are also five worker nodes in the cluster:

    • worker1.example.com

    • worker2.example.com

    • worker3.example.com

    • worker4.example.com

    • worker5.example.com
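
If you want to tally the node roles without counting by hand, the listing can be filtered with standard tools. A minimal sketch, using a copy of the sample output above in place of a live `kubectl get nodes --no-headers` call:

```shell
# Sample output (copied from above); on a live cluster, pipe
# `kubectl get nodes --no-headers` instead of using this variable.
kubectl_output='master1.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
master2.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
master3.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
master4.example.com    Ready    master   2m38s   v1.17.9+1.0.1.el7
worker1.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
worker2.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
worker3.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
worker4.example.com    Ready    <none>   2m38s   v1.17.9+1.0.1.el7
worker5.example.com    Ready    <none>   2m38s   v1.17.9+1.0.1.el7'

# The third column holds the role: "master" for masters, "<none>" for workers.
masters=$(printf '%s\n' "$kubectl_output" | awk '$3 == "master"' | wc -l)
workers=$(printf '%s\n' "$kubectl_output" | awk '$3 == "<none>"' | wc -l)
echo "masters=$masters workers=$workers"
```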

  2. To update a Kubernetes cluster by scaling it down, use the olcnectl module update command.

    For example, to remove the master4.example.com master node, worker4.example.com worker node, and worker5.example.com worker node from the kubernetes module named mycluster, access the operator node of the Kubernetes cluster and execute the following command:

    $ olcnectl --api-server 127.0.0.1:8091 module update --environment-name myenvironment \
      --name mycluster \
      --master-nodes master1.example.com:8090,master2.example.com:8090,master3.example.com:8090 \
      --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090

    In this example, you are scaling down the Kubernetes cluster so that it has three master nodes and three worker nodes.

    The master4.example.com, worker4.example.com, and worker5.example.com nodes are not specified in the command. The Platform API Server keeps track of the nodes of the cluster that it is managing, and knows to scale down the cluster so that any managed nodes omitted from the updated configuration are removed.

    Because three master nodes and three worker nodes are specified in the --master-nodes and --worker-nodes options of the olcnectl module update command, the Platform API Server does not remove these nodes from the cluster.
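
The removal rule can be thought of as a set difference: the Platform API Server compares the nodes it manages against the nodes named in the update, and anything missing from the new list is scaled out. A minimal illustration of that rule, using the worker hostnames from this example (this is the effective behavior, not how the Platform API Server is implemented):

```shell
# Worker nodes the Platform API Server currently manages (from step 1).
current="worker1.example.com worker2.example.com worker3.example.com \
worker4.example.com worker5.example.com"
# Worker nodes named in the --worker-nodes option of the update.
requested="worker1.example.com worker2.example.com worker3.example.com"

removed=""
for node in $current; do
  case " $requested " in
    *" $node "*) ;;                  # still listed: node is kept
    *) removed="$removed $node"      # omitted: node is removed
       echo "remove: $node" ;;
  esac
done
removed="${removed# }"
```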

  3. Return to a master node of the Kubernetes cluster, and then use the kubectl get nodes command to verify that the cluster is scaled down.

    $ kubectl get nodes
    NAME                   STATUS   ROLES    AGE     VERSION
    master1.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
    master2.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
    master3.example.com    Ready    master   26h     v1.17.9+1.0.1.el7
    worker1.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
    worker2.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
    worker3.example.com    Ready    <none>   26h     v1.17.9+1.0.1.el7
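
As an additional sanity check, you can confirm that none of the removed hostnames still appears in the node list. A minimal sketch using the hostnames from this example (on a live cluster, you could populate the list with `kubectl get nodes --no-headers | awk '{print $1}'` instead):

```shell
# Node names after the scale-down (matching the step 3 output above).
nodes="master1.example.com
master2.example.com
master3.example.com
worker1.example.com
worker2.example.com
worker3.example.com"

status=ok
for gone in master4.example.com worker4.example.com worker5.example.com; do
  # grep -qx matches a whole line, so partial hostname matches don't count.
  if printf '%s\n' "$nodes" | grep -qx "$gone"; then
    status="still-present:$gone"
  fi
done
echo "$status"
```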