
3.4.3 Setting up a Load Balancer

There are two methods of setting up a load balancer to enable high availability of a multi-master Kubernetes cluster:

  • Using your own external load balancer instance.

  • Using the load balancer that can be deployed by the Platform CLI on the master nodes.

If you want to use your own load balancer implementation, it should be set up and ready to use before you perform a multi-master deployment. The load balancer hostname and port are entered as an option when you create the Kubernetes module. For more information on setting up your own load balancer, see the Oracle® Linux 7: Administrator's Guide or Oracle® Linux 8: Setting Up Load Balancing.
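
The following is a minimal sketch of how the external load balancer address might be supplied when creating the Kubernetes module. The environment name, module name, and load balancer hostname are hypothetical, and option names can vary between releases, so confirm them against the Platform CLI documentation for your version:

    $ olcnectl module create \
      --environment-name myenvironment \
      --module kubernetes \
      --name mycluster \
      --load-balancer lb.example.com:6443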

If you want to use the in-built load balancer that can be deployed by the Platform CLI, you need to prepare the master nodes first. Perform the following steps on each master node.

To prepare master nodes for the load balancer deployed by the Platform CLI:

  1. Set up the master nodes as described in Section 3.4.2, “Setting up Kubernetes Nodes”.

  2. Nominate a virtual IP address that can be used for the primary master node. This IP address should not be in use on any node; it is dynamically assigned by the load balancer to the master node that holds the primary role. If the primary node fails, the load balancer reassigns the virtual IP address to another master node, which then becomes the primary node. The virtual IP address used in the examples in this documentation is 192.0.2.100.
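
    Before settling on an address, you can make a quick check that nothing on the network is already responding at it, for example with ping. The address shown is the documentation example; replace it with your chosen virtual IP address:

    $ ping -c 3 192.0.2.100

    If the address is free, the ping receives no replies.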

  3. Open port 6444. When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443, receives incoming requests, and passes them to the Kubernetes API server on port 6444.

    $ sudo firewall-cmd --add-port=6444/tcp
    $ sudo firewall-cmd --add-port=6444/tcp --permanent
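
    You can confirm that the port has been added to the runtime firewall configuration, for example:

    $ sudo firewall-cmd --list-ports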
  4. Enable the Virtual Router Redundancy Protocol (VRRP):

    $ sudo firewall-cmd --add-protocol=vrrp
    $ sudo firewall-cmd --add-protocol=vrrp --permanent
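
    Similarly, you can confirm that the protocol has been added, for example:

    $ sudo firewall-cmd --list-protocols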
  5. If you use a proxy server, configure NGINX to use it. On each Kubernetes master node, create an NGINX systemd configuration directory:

    $ sudo mkdir /etc/systemd/system/olcne-nginx.service.d

    Create a file named proxy.conf in the directory, and add the proxy server information. For example:

    [Service]
    Environment="HTTP_PROXY=proxy.example.com:3128"
    Environment="HTTPS_PROXY=proxy.example.com:3128"
    Environment="NO_PROXY=mydomain.example.com"