
4.7 Scaling a Kubernetes Cluster

A Kubernetes cluster may consist of one or more master nodes and one or more worker nodes. The more applications that you run in a cluster, the more resources (nodes) you need. So, what do you do if you need additional resources to handle a high amount of workload or traffic, or if you want to deploy more services to the cluster? You add nodes to the cluster. And what happens if there are faulty nodes in your cluster? You remove them.

Scaling a Kubernetes cluster means updating the cluster by adding nodes to it or removing nodes from it. When you add nodes to a Kubernetes cluster, you are scaling up the cluster; when you remove nodes from the cluster, you are scaling down the cluster.

You may want to replace one master or worker node in a Kubernetes cluster with another. In this case, first scale up to bring in the new node, and then scale down to remove the outdated node, as shown in the example that follows the note below.

Note

Oracle recommends that you do not scale the cluster up and down in a single operation. Scale up first, and then scale down, using two separate commands.
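
For example, to replace the worker node worker3.example.com with a new node named worker4.example.com, you would first scale up to add worker4.example.com, and then scale down to remove worker3.example.com, using two separate olcnectl module update commands. The following is only a sketch that reuses the environment, module, and node names from the examples later in this section; as in those examples, the --master-nodes and --worker-nodes options list every node that you want the cluster to contain after the update:

    $ olcnectl --api-server 127.0.0.1:8091 module update --environment-name myenvironment \
      --name mycluster \
      --master-nodes master1.example.com:8090,master2.example.com:8090,master3.example.com:8090 \
      --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,\
    worker4.example.com:8090

    $ olcnectl --api-server 127.0.0.1:8091 module update --environment-name myenvironment \
      --name mycluster \
      --master-nodes master1.example.com:8090,master2.example.com:8090,master3.example.com:8090 \
      --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker4.example.com:8090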

If you used the --apiserver-advertise-address option when you created a Kubernetes module, then you cannot scale up from a single-master cluster to a multi-master cluster. However, if you used the --virtual-ip or the --load-balancer options, then you can scale up, even if you have only a single-master cluster. For more information about the --apiserver-advertise-address, --virtual-ip, or --load-balancer options, see Section 4.6, “Creating a Multi-Master (HA) Kubernetes Cluster”.
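
For reference, whether you can scale up from a single master node depends on how the Kubernetes module was originally created. A minimal sketch of creating a module with a virtual IP address, assuming the olcnectl module create syntax described in Section 4.6 and a placeholder address of 192.0.2.100, might look similar to the following; see Section 4.6 for the exact options required in your release:

    $ olcnectl --api-server 127.0.0.1:8091 module create --environment-name myenvironment \
      --module kubernetes --name mycluster \
      --virtual-ip 192.0.2.100 \
      --master-nodes master1.example.com:8090 \
      --worker-nodes worker1.example.com:8090,worker2.example.com:8090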

When you scale a Kubernetes cluster, the following actions are completed:

  1. A backup is taken of the cluster. If something goes wrong while the cluster is being scaled up or down, you can revert to the previous state and restore the cluster. For more information about backing up and restoring a Kubernetes cluster, see Container Orchestration. A short backup example follows this list.

  2. Any nodes that you want to add to the cluster are validated. If the nodes have any validation issues, such as firewall problems, the update to the cluster cannot proceed and the nodes cannot be added to the cluster. You are prompted with the steps required to resolve the validation issues so that the nodes can be added to the cluster.

  3. The master and worker nodes are added to or removed from the cluster.

  4. The cluster is checked to ensure that all of the new and existing nodes are healthy. When validation of the cluster is complete, the scaling operation is finished and you can access the cluster.
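
The backup in the first step is taken automatically as part of the scaling operation. If you also want to take your own backup of the master nodes before you scale the cluster, you can use the olcnectl module backup command. The following sketch assumes the environment and module names used later in this section; see Container Orchestration for the full backup and restore procedure:

    $ olcnectl --api-server 127.0.0.1:8091 module backup --environment-name myenvironment \
      --name mycluster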

In the following procedures, you learn how to scale up and scale down a Kubernetes cluster.

4.7.1 Scaling Up a Kubernetes Cluster

Before you scale up a Kubernetes cluster, set up the new nodes so they can be added to the cluster.

To prepare a node:

  1. Set up the node so it can be added to an Oracle Linux Cloud Native Environment. See Section 3.4.2, “Setting up Kubernetes Nodes”.

  2. Set up an X.509 certificate for the node so that it can communicate securely with the other nodes of the Kubernetes cluster. See Section 3.5, “Setting up X.509 Certificates”.

  3. Start the Platform Agent service. See Section 3.7, “Starting the Platform API Server and Platform Agent Services”. A short sketch of the commands involved follows this list.
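
For example, on each new node, starting the Platform Agent typically involves commands similar to the following. This is only a sketch; the olcne-agent.service name is taken from the Platform Agent setup described in Section 3.7, which remains the authoritative procedure:

    $ sudo systemctl enable olcne-agent.service
    $ sudo systemctl start olcne-agent.service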

After completing these actions, use the instructions in this procedure to add nodes to a Kubernetes cluster.

To scale up a Kubernetes cluster:

  1. From a master node of the Kubernetes cluster, use the kubectl get nodes command to see the master and worker nodes of the cluster.

    $ kubectl get nodes
    NAME                   STATUS   ROLES    AGE     VERSION
    master1.example.com    Ready    master   26h     v1.17.x+x.x.x.el7  
    master2.example.com    Ready    master   26h     v1.17.x+x.x.x.el7
    master3.example.com    Ready    master   26h     v1.17.x+x.x.x.el7
    worker1.example.com    Ready    <none>   26h     v1.17.x+x.x.x.el7
    worker2.example.com    Ready    <none>   26h     v1.17.x+x.x.x.el7
    worker3.example.com    Ready    <none>   26h     v1.17.x+x.x.x.el7

    For this example, there are three master nodes in the Kubernetes cluster:

    • master1.example.com

    • master2.example.com

    • master3.example.com

    There are also three worker nodes in the cluster:

    • worker1.example.com

    • worker2.example.com

    • worker3.example.com

  2. To update a Kubernetes cluster by scaling it up, use the olcnectl module update command.

    For example, to add the master4.example.com master node, worker4.example.com worker node, and worker5.example.com worker node to the kubernetes module named mycluster, access the operator node of the Kubernetes cluster and execute the following command:

    $ olcnectl --api-server 127.0.0.1:8091 module update --environment-name myenvironment \
      --name mycluster \
      --master-nodes master1.example.com:8090,master2.example.com:8090,master3.example.com:8090,\
    master4.example.com:8090 \
      --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,\
    worker4.example.com:8090,worker5.example.com:8090

    In this example, you are scaling up a Kubernetes cluster so that it has four master nodes and five worker nodes. If you are scaling up from a single-master to a multi-master cluster, make sure that you have specified a load balancer for the cluster. If you have not specified a load balancer, you cannot scale up the number of master nodes.

    There are other options that you may find useful when scaling a Kubernetes cluster. The following example shows a more complex method of scaling the cluster:

    $ olcnectl --api-server 127.0.0.1:8091 module update --environment-name myenvironment \
      --name mycluster \
      --master-nodes master1.example.com:8090,master2.example.com:8090,master3.example.com:8090,\
    master4.example.com:8090 \
      --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,\
    worker4.example.com:8090,worker5.example.com:8090 \
      --generate-scripts \
      --force

    The --generate-scripts option generates scripts you can run for each node in the event of any validation failures encountered during scaling. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

    The --force option suppresses the prompt displayed to confirm you want to continue with scaling the cluster.
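
    For example, if validation fails for the worker4.example.com node, a script named worker4.example.com:8090.sh is written to the current directory. The following is a hypothetical example, assuming that the script is meant to be run on the node it is named after and that you connect as a user named oracle (a placeholder): copy the script to the node, run it there, and then retry the olcnectl module update command.

    $ scp ./worker4.example.com:8090.sh oracle@worker4.example.com:/tmp/
    $ ssh oracle@worker4.example.com 'bash /tmp/worker4.example.com:8090.sh'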

  3. Return to a master node of the Kubernetes cluster, and then use the kubectl get nodes command to verify that the cluster is scaled up to include the new master and worker nodes.

    $ kubectl get nodes
    NAME                   STATUS   ROLES    AGE     VERSION
    master1.example.com    Ready    master   26h     v1.17.x+x.x.x.el7  
    master2.example.com    Ready    master   26h     v1.17.x+x.x.x.el7
    master3.example.com    Ready    master   26h     v1.17.x+x.x.x.el7
    master4.example.com    Ready    master   2m38s   v1.17.x+x.x.x.el7
    worker1.example.com    Ready    <none>   26h     v1.17.x+x.x.x.el7
    worker2.example.com    Ready    <none>   26h     v1.17.x+x.x.x.el7
    worker3.example.com    Ready    <none>   26h     v1.17.x+x.x.x.el7
    worker4.example.com    Ready    <none>   2m38s   v1.17.x+x.x.x.el7
    worker5.example.com    Ready    <none>   2m38s   v1.17.x+x.x.x.el7
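
    As an additional, generic Kubernetes check (not specific to Oracle Linux Cloud Native Environment), you can also confirm that the system pods scheduled on the new nodes are running:

    $ kubectl get pods --all-namespaces -o wide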