3 Using the MetalLB Load Balancer
Important:
The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.
We recommend that you upgrade the software described by this documentation as soon as possible.
This chapter discusses how to install and use the MetalLB module to set up a network load balancer for Kubernetes applications in Oracle Cloud Native Environment on bare metal hosts.
Prerequisites
This section contains the prerequisite information you need to set up the MetalLB module.
Setting up the Health Check Endpoint Network Ports
When using a Kubernetes LoadBalancer service with the ServiceInternalTrafficPolicy set to Cluster (the default), a health check endpoint is expected to be available on TCP port 10256. kube-proxy creates a listener on this port, which enables access to the LoadBalancer service to verify that kube-proxy is healthy on the nodes. The LoadBalancer service determines which nodes can have traffic routed to them using this policy. To allow traffic on this port, you must open TCP port 10256 on all Kubernetes nodes. On each Kubernetes node, run:
sudo firewall-cmd --zone=public --add-port=10256/tcp
sudo firewall-cmd --zone=public --add-port=10256/tcp --permanent
sudo systemctl restart firewalld.service
For more information on the ServiceInternalTrafficPolicy, see the upstream documentation at:
https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/
Make sure traffic is allowed for TCP port 10256 in the network security list.
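After opening the port, you can confirm the endpoint answers. The following is a minimal sketch, assuming curl is installed and the command is run on, or pointed at, a Kubernetes node; check_kube_proxy_health is a hypothetical helper, not part of the product:

```shell
# Hypothetical helper: succeeds when kube-proxy's health check endpoint
# answers on TCP port 10256 of the given node (default: the local host).
check_kube_proxy_health() {
  local node="${1:-127.0.0.1}"
  if curl -s --max-time 5 "http://${node}:10256/healthz" > /dev/null; then
    echo "kube-proxy healthy on ${node}"
  else
    echo "kube-proxy not reachable on ${node}" >&2
    return 1
  fi
}
```

For example, running check_kube_proxy_health 10.0.0.5 from another host verifies both that kube-proxy is listening and that the firewall and network security list allow the traffic.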
Setting up the Network Ports
You must open the following ports on Kubernetes worker nodes. On each worker node, run:
sudo firewall-cmd --zone=public --add-port=7946/tcp --permanent
sudo firewall-cmd --zone=public --add-port=7946/udp --permanent
sudo systemctl restart firewalld.service
Creating a MetalLB Configuration File
You must provide a MetalLB configuration file on the operator node. The configuration file contains the required information to configure MetalLB. This file is where you list configuration information such as the IP address ranges to use when provisioning load balancer IPs to Kubernetes applications, and the protocol to use.
The configuration file is a snippet, or cut down version, of the upstream MetalLB ConfigMap file. The snippet file should only contain the options available to be set under the config section shown in the upstream ConfigMap files, that is, any combination of address-pools, peers, bgp-communities, bfd-profiles, and so on. For example:
peers:
- peer-address: 10.0.0.1
  peer-asn: 64501
  my-asn: 64500
address-pools:
- name: default
  protocol: bgp
  addresses:
  - 192.168.10.0/24
The Platform API Server uses the information contained in the configuration file when creating the MetalLB module.
Important:
Oracle Cloud Native Environment installs MetalLB Release 0.12.1. This release uses a ConfigMap to configure the MetalLB cluster. MetalLB Release 0.13 onwards uses a CustomResource to perform this configuration. You should use the upstream examples for MetalLB Release 0.12.1 to create a snippet of a ConfigMap to configure the version of MetalLB installed with Oracle Cloud Native Environment.
For information on the options available to use in the configuration file, see the upstream documentation for the MetalLB ConfigMap file, at:
https://github.com/metallb/metallb/blob/v0.12.1/website/content/configuration/_index.md
Important:
Do not include a full ConfigMap file in the configuration file, only the options available under the config section.
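To illustrate the distinction, in the upstream MetalLB Release 0.12 examples these options appear nested under the data.config key of a full ConfigMap, similar to the following sketch of the upstream layout:

```yaml
# Full upstream ConfigMap layout -- do NOT supply this whole file
# to the Platform API Server.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```

Only the options nested under the config key (from address-pools downwards, with the extra indentation removed) belong in the snippet file.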
The following example configuration file uses a MetalLB Layer 2 configuration and provides the IP address range from 192.168.1.240 to 192.168.1.250 to MetalLB to create load balancer IPs for Kubernetes applications. This example file is named metallb-config.yaml and contains:
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.1.240-192.168.1.250
Deploying the MetalLB Module
You can deploy all the modules required to set up MetalLB for a Kubernetes cluster using a single olcnectl module create command. This method might be useful if you want to deploy the MetalLB module at the same time as deploying a Kubernetes cluster.
If you have an existing deployment of the Kubernetes module, you can specify that instance when deploying the MetalLB module.
This section guides you through installing each component required to deploy the MetalLB module.
For the full list of the Platform CLI command options available when creating modules, see the olcnectl module create command in Platform Command-Line Interface.
To deploy the MetalLB module:
- If you do not already have an environment set up, create one into which the modules can be deployed. For information on setting up an environment, see Getting Started. The name of the environment in this example is myenvironment.

- If you do not already have a Kubernetes module set up or deployed, set one up. For information on adding a Kubernetes module to an environment, see Container Orchestration. The name of the Kubernetes module in this example is mycluster.

- If you do not already have a Helm module created and installed, create one. The Helm module in this example is named myhelm and is associated with the Kubernetes module named mycluster using the --helm-kubernetes-module option.

  olcnectl module create \
  --environment-name myenvironment \
  --module helm \
  --name myhelm \
  --helm-kubernetes-module mycluster
- If you are deploying a new Helm module, use the olcnectl module validate command to validate the Helm module can be deployed to the nodes. For example:

  olcnectl module validate \
  --environment-name myenvironment \
  --name myhelm
- If you are deploying a new Helm module, use the olcnectl module install command to install the Helm module. For example:

  olcnectl module install \
  --environment-name myenvironment \
  --name myhelm

  The Helm software packages are installed on the control plane nodes, and the Helm module is deployed into the Kubernetes cluster.
- Create a MetalLB module and associate it with the Helm module named myhelm using the --metallb-helm-module option. In this example, the MetalLB module is named mymetallb.

  olcnectl module create \
  --environment-name myenvironment \
  --module metallb \
  --name mymetallb \
  --metallb-helm-module myhelm \
  --metallb-config /home/opc/metallb-config.yaml

  The --module option sets the module type to create, which is metallb. You define the name of the MetalLB module using the --name option, which in this case is mymetallb.

  The --metallb-helm-module option sets the name of the Helm module. If there is an existing Helm module with the same name, the Platform API Server uses that instance of Helm.

  The --metallb-config option sets the location for the MetalLB configuration file. This file must be available on the operator node under the provided path. For information on creating this configuration file, see Creating a MetalLB Configuration File.

  If you do not include all the required options when adding the modules, you are prompted to provide them.
- Use the olcnectl module validate command to validate the MetalLB module can be deployed to the nodes. For example:

  olcnectl module validate \
  --environment-name myenvironment \
  --name mymetallb
- Use the olcnectl module install command to install the MetalLB module. For example:

  olcnectl module install \
  --environment-name myenvironment \
  --name mymetallb
The MetalLB module is deployed into the Kubernetes cluster.
Verifying the MetalLB Module Deployment
You can verify the MetalLB module is deployed using the olcnectl module instances command on the operator node. For example:

olcnectl module instances \
--environment-name myenvironment

INSTANCE              MODULE      STATE
mymetallb             metallb     installed
mycluster             kubernetes  installed
myhelm                helm        installed
control1.example.com  node        installed
...
Note the entry for metallb in the MODULE column is in the installed state.
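If you want to script this verification, you can filter the listing for the metallb row. The following is a minimal sketch; the listing is stubbed here with sample output for illustration, and in practice you would capture it from olcnectl module instances as shown in the comment:

```shell
# Stubbed listing for illustration; in practice, replace with:
#   listing=$(olcnectl module instances --environment-name myenvironment)
listing='INSTANCE    MODULE      STATE
mymetallb   metallb     installed
mycluster   kubernetes  installed
myhelm      helm        installed'

# Succeeds only when the metallb module reports the installed state.
if echo "$listing" | awk '$2 == "metallb" && $3 == "installed"' | grep -q .; then
  echo "MetalLB module installed"
fi
```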
In addition, use the olcnectl module report command to review information about the module. For example, use the following command to review the MetalLB module named mymetallb in myenvironment:

olcnectl module report \
--environment-name myenvironment \
--name mymetallb \
--children

For more information on the syntax for the olcnectl module report command, see Platform Command-Line Interface.
Creating an Application Using MetalLB
This section contains a basic test to verify you can create a Kubernetes application that uses MetalLB to provide external IP addresses.
To create a test application to use MetalLB:
- Create a Kubernetes application that uses a LoadBalancer service. The deployment in this example creates an NGINX application with a replica count of 2, and an associated LoadBalancer service.

  On a control plane node, create a file named nginx-metallb.yaml and copy the following into the file.

  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      app: nginx
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: container-registry.oracle.com/olcne/nginx:1.17.7
          ports:
          - containerPort: 80
  ---
  kind: Service
  apiVersion: v1
  metadata:
    name: nginx-service
  spec:
    selector:
      app: nginx
    type: LoadBalancer
    ports:
    - name: http
      port: 80
      targetPort: 80
- Start the NGINX deployment and LoadBalancer service:

  kubectl apply -f nginx-metallb.yaml
  deployment.apps/nginx-deployment created
  service/nginx-service created
- You can see the nginx-deployment application is running using the kubectl get deployment command:

  kubectl get deployments.apps
  NAME               READY   UP-TO-DATE   AVAILABLE   AGE
  nginx-deployment   2/2     2            2           31s
- You can see the nginx-service service is running using the kubectl get svc command:

  kubectl get svc
  NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
  kubernetes      ClusterIP      10.96.0.1      <none>          443/TCP        25h
  nginx-service   LoadBalancer   10.99.253.99   192.168.1.240   80:31875/TCP   70s
  You can see the EXTERNAL-IP for the nginx-service LoadBalancer has an IP address of 192.168.1.240. This IP address is provided by MetalLB and is the external IP address that you can use to connect to the application.

- Use curl to connect to the NGINX application's IP address and add the port for the application (192.168.1.240:80 in this example) to show the NGINX default page.

  curl 192.168.1.240:80
  <!DOCTYPE html>
  <html>
  <head>
  <title>Welcome to nginx!</title>
  <style>
      body {
          width: 35em;
          margin: 0 auto;
          font-family: Tahoma, Verdana, Arial, sans-serif;
      }
  </style>
  </head>
  <body>
  <h1>Welcome to nginx!</h1>
  <p>If you see this page, the nginx web server is successfully installed and
  working. Further configuration is required.</p>

  <p>For online documentation and support please refer to
  <a href="http://nginx.org/">nginx.org</a>.<br/>
  Commercial support is available at
  <a href="http://nginx.com/">nginx.com</a>.</p>

  <p><em>Thank you for using nginx.</em></p>
  </body>
  </html>
- You can delete the nginx-service LoadBalancer service using:

  kubectl delete svc nginx-service
  service "nginx-service" deleted
- You can delete the nginx-deployment application using:

  kubectl delete deployments.apps nginx-deployment
  deployment.apps "nginx-deployment" deleted
Removing the MetalLB Module
You can remove a deployment of the MetalLB module and leave the Kubernetes cluster in place. To do this, you remove the MetalLB module from the environment.
Use the olcnectl module uninstall command to remove the MetalLB module. For example, to uninstall the MetalLB module named mymetallb in the environment named myenvironment:

olcnectl module uninstall \
--environment-name myenvironment \
--name mymetallb
The MetalLB module is removed from the environment.