Chapter 1 About Load Balancing in Oracle Linux

This chapter provides an overview of the load balancer technologies that are used in Oracle Linux. Installation and configuration information is also provided in this chapter.

1.1 About Load Balancing

The term load balancing refers to the efficient distribution of incoming network traffic across a group of back-end servers. The use of load balancing helps ensure that your infrastructure is highly available and reliable and that performance does not degrade. Load balancers can typically handle traffic for the HTTP, HTTPS, TCP, and UDP protocols.

Load balancers manage network traffic by routing client requests across all of the servers that can fulfill those requests. This routing maximizes speed and capacity utilization so that no one particular server becomes overloaded, thereby improving overall performance. In situations where a server may become unavailable or goes down, the load balancer redirects any incoming traffic to other servers that are online. In this way, server downtime is minimized. When a new server is added to the server group, the load balancer automatically redistributes the workload and starts to send requests to that new server.

In Oracle Linux, load balancing of network traffic is primarily handled by two integrated software components: HAProxy and Keepalived. The HAProxy feature provides load-balancing and high-availability services for TCP- and HTTP-based applications, while Keepalived performs load balancing and failover tasks on both active and passive routers. The NGINX feature can also be used in Oracle Linux for load balancing.

1.2 About HAProxy

HAProxy, or High Availability Proxy, is an application layer (Layer 7) load balancer and high-availability solution that you can use to implement a reverse proxy for HTTP and TCP-based Internet services. An application layer load balancer often includes many features, because it is able to inspect the content of the traffic that it is routing and can either modify content within each packet, as required, or can make decisions about how to handle each packet based on its content. This makes it simple to implement session persistence, TLS, ACLs, and HTTP rewrites and redirection.

The configuration file for the haproxy daemon is /etc/haproxy/haproxy.cfg. This file must be present on each server on which you configure HAProxy for load balancing or high availability.
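As an illustration, a minimal haproxy.cfg might define a front end that accepts HTTP traffic and distributes it across two back-end servers. The addresses, server names, and timeout values shown here are examples only; adjust them for your environment.

```
global
    log /dev/log local0
    maxconn 2000

defaults
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    # Example listening address; adjust for your environment.
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    # Example back-end addresses (RFC 5737 documentation range).
    server web1 192.0.2.10:80 check
    server web2 192.0.2.11:80 check
```

The check keyword enables periodic health checks, so that HAProxy stops routing requests to a back-end server that becomes unavailable.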

For more information, see the /usr/share/doc/haproxy-version documentation and the haproxy(1) manual page.

1.3 About Keepalived

Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing by redirecting requests for network-based services to individual members of a server cluster. IPVS monitors the status of each server and uses the Virtual Router Redundancy Protocol (VRRP) to implement high availability. A load balancer that functions at the transport layer is less aware of the content of the packets that it re-routes, which has the advantage of being able to perform this task significantly faster than a reverse proxy system functioning at the application layer.

The configuration file for the keepalived daemon is /etc/keepalived/keepalived.conf. This file must be present on each server on which you configure Keepalived for load balancing or high availability.
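As a sketch, a keepalived.conf file might use an IPVS virtual_server block to distribute TCP traffic across two real servers. The addresses and timing values below are illustrative only.

```
# Hypothetical IPVS configuration; addresses are examples only.
virtual_server 192.0.2.1 80 {
    delay_loop 10       # seconds between health checks
    lb_algo rr          # round-robin scheduling
    lb_kind NAT
    protocol TCP

    real_server 192.0.2.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
        }
    }
    real_server 192.0.2.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
        }
    }
}
```

The TCP_CHECK health check removes a real server from the pool if a connection to it cannot be established within the timeout.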

For more information, see the /usr/share/doc/keepalived-version documentation and the keepalived(8) and keepalived.conf(5) manual pages.

Using Keepalived With VRRP

VRRP is a networking protocol that automatically assigns available routers to handle inbound traffic. The protocol is defined in RFC 5798.

Keepalived uses VRRP to ascertain the current state of all of the routers on the network. The protocol enables routing to switch between primary and backup routers automatically. The backup routers detect when the primary router becomes unavailable and then send multicast packets to each other until one of the routers is "elected" as the new primary router. A floating virtual IP address can be used to always direct traffic to the primary router. When the original primary router is back online, it detects the new routing state and returns to the network as a backup router.

The benefit of using VRRP is that you can rely on multiple routers to provide high availability and redundancy without requiring a separate software service or hardware device to manage this process. On each router, Keepalived configures the VRRP settings and ensures that the network routing continues to function correctly.
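The VRRP settings that Keepalived manages are expressed as a vrrp_instance block in keepalived.conf. The following sketch shows a hypothetical primary router configuration; the interface name, router ID, priority, and virtual IP address are examples that you would change for your network.

```
vrrp_instance VI_1 {
    state MASTER          # use BACKUP on the standby router
    interface eth0        # example interface name
    virtual_router_id 51  # must match on all routers in the group
    priority 100          # use a lower value (for example, 90) on backups
    advert_int 1          # seconds between VRRP advertisements
    virtual_ipaddress {
        192.0.2.1/24      # floating virtual IP address
    }
}
```

The router with the highest priority holds the virtual IP address; if it stops sending advertisements, the backup with the next highest priority takes over the address.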


1.4 About Combining Keepalived With HAProxy for High-Availability Load Balancing

You can combine the Keepalived and HAProxy load balancer features to achieve a high-availability, load-balancing environment. HAProxy provides scalability, application-aware functionality, and ease of configuration when configuring load balancing services. Keepalived provides failover services for backup routers, as well as the ability to distribute loads across servers for increased availability.

This complex configuration scenario illustrates how you can use different load balancing applications with each other to achieve better redundancy and take advantage of features at different layers of the stack. While this example shows how Keepalived can be used to provide redundancy for HAProxy, you can also achieve similar results by using Keepalived with alternate application layer proxy systems, like NGINX.
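A common way to combine the two, sketched below with hypothetical values, is to run HAProxy on each router and have Keepalived track the haproxy process with a vrrp_script block, so that the virtual IP address fails over to the backup router if HAProxy stops running on the primary.

```
vrrp_script check_haproxy {
    # Fail over if the haproxy process is no longer running.
    script "/usr/bin/killall -0 haproxy"
    interval 2            # seconds between checks
}

vrrp_instance VI_1 {
    state MASTER          # use BACKUP on the standby router
    interface eth0        # example interface name
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.1/24      # address that clients connect to
    }
    track_script {
        check_haproxy
    }
}
```

Clients always connect to the virtual IP address, so a failover between routers is transparent to them.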

For more details, see Section 3.4, “Setting Up Load Balancing by Using Keepalived With HAProxy”.

1.5 About NGINX

NGINX is a well-known HTTP server that provides modular functionality for reverse proxying, traffic routing, and application-layer load balancing for HTTP, HTTPS, or TCP and UDP connections. You can use NGINX load balancing and proxy services to distribute traffic for improved performance, scalability, and reliability of your applications.

NGINX provides capability for the following load balancing methods:

  • Round Robin. This method is one of the simplest for implementing load balancing and is the default method that is used by NGINX. Round Robin distributes requests to application servers by going down the list of the servers that are within the group, then forwarding client requests to each server, in turn. After reaching the end of the list, the load balancer repeats this same sequence.

  • Least Connected. This method works by assigning the next request to the server that has the fewest active connections. With the least-connected method, the load balancer compares the number of currently active connections to each server, then sends the request to the server with the fewest connections. You set the configuration by using the least_conn directive.

  • IP Hash. This method uses a hash function, based on the client's IP address, to determine which server to select for the next request. You set the configuration by using the ip_hash directive.
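The methods above are selected in an upstream block in the NGINX configuration. The following sketch uses example server addresses; with no method directive, NGINX defaults to round robin.

```
http {
    upstream backend {
        # Round robin is the default; uncomment one of the
        # following directives to change the method.
        # least_conn;
        # ip_hash;
        server 192.0.2.10;
        server 192.0.2.11;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```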

For more information, see Chapter 4, Setting Up Load Balancing by Using NGINX.
