Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide

How the HTTP Load Balancer Works

The load balancer attempts to evenly distribute the workload among multiple Application Server instances (either stand-alone or clustered), thereby increasing the overall throughput of the system.

Using a load balancer also enables requests to fail over from one server instance to another. For HTTP session information to persist, you must use Enterprise Edition, install and set up HADB, and configure HTTP session persistence. For more information, see Chapter 9, Configuring High Availability Session Persistence and Failover.

Note –

The load balancer does not handle URIs or URLs that are longer than 8 KB.

Use the asadmin utility, not the Admin Console, to configure HTTP load balancing.

This section contains the following topics:

- Assigned Requests and Unassigned Requests
- HTTP Load Balancing Algorithm
- Sample Applications

Assigned Requests and Unassigned Requests

When a request first comes in from an HTTP client to the load balancer, it is a request for a new session. A request for a new session is called an unassigned request. The load balancer routes this request to an application server instance in the cluster according to a round-robin algorithm.

Once a session is created on an application server instance, the load balancer routes all subsequent requests for this session only to that particular instance. A request for an existing session is called an assigned or a sticky request.
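The distinction between unassigned and assigned requests can be sketched in Java. This is a hypothetical illustration, not the plug-in's actual code; the class and method names are invented for the example. New sessions are assigned round-robin, and later requests for the same session stay pinned to their instance.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of sticky round-robin routing (illustrative only).
public class StickyRouter {
    private final List<String> instances;                 // cluster instance names
    private final AtomicInteger next = new AtomicInteger();
    private final Map<String, String> sessionRoutes = new ConcurrentHashMap<>();

    public StickyRouter(List<String> instances) {
        this.instances = instances;
    }

    // A null sessionId models an unassigned request (no session exists yet).
    public String route(String sessionId) {
        if (sessionId != null && sessionRoutes.containsKey(sessionId)) {
            return sessionRoutes.get(sessionId);          // assigned (sticky) request
        }
        // Unassigned request: pick the next instance in round-robin order.
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        String chosen = instances.get(i);
        if (sessionId != null) {
            sessionRoutes.put(sessionId, chosen);         // remember the assignment
        }
        return chosen;
    }
}
```

Once a session is recorded in the map, every subsequent call with that session ID returns the same instance, mirroring the sticky behavior described above.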

HTTP Load Balancing Algorithm

The Sun Java System Application Server load balancer uses a sticky round robin algorithm to load balance incoming HTTP and HTTPS requests. All requests for a given session are sent to the same application server instance. With a sticky load balancer, the session data is cached on a single application server rather than being distributed to all instances in a cluster.

Therefore, the sticky round robin scheme provides significant performance benefits that normally outweigh the more evenly distributed load obtained with a pure round robin scheme.

When a new HTTP request is sent to the load balancer plug-in, it is forwarded to an application server instance based on a simple round robin scheme. Subsequent requests for that session are “stuck” to the same application server instance, either by using cookies or by explicit URL rewriting. The load balancer determines the method of stickiness automatically.

The load balancer plug-in uses the following methods to determine session stickiness:

- Cookie-based method. The load balancer plug-in uses a separate cookie to record the route information. The HTTP client must support cookies for this method to work.

- Explicit URL rewriting. The sticky information is appended to the URL. This method works even if the HTTP client does not support cookies.

From the sticky information, the load balancer plug-in first determines the instance to which the request was previously forwarded. If that instance is found to be healthy, the load balancer plug-in forwards the request to that specific application server instance. Therefore, all requests for a given session are sent to the same application server instance.
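The forwarding decision above, including the health check and the fallback to round robin when the sticky instance is down, can be sketched as follows. This is a hypothetical illustration under assumed names (the `FailoverChooser` class and the health-check predicate are invented for the example), not the plug-in's actual implementation.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the forwarding decision: prefer the instance named
// in the sticky information; if it is unhealthy, fail over to the next
// healthy instance in round-robin order.
public class FailoverChooser {
    private final List<String> instances;
    private final Predicate<String> isHealthy;  // health check, assumed given
    private int next = 0;

    public FailoverChooser(List<String> instances, Predicate<String> isHealthy) {
        this.instances = instances;
        this.isHealthy = isHealthy;
    }

    // stickyInstance is the instance decoded from the cookie or rewritten URL,
    // or null for an unassigned request.
    public String choose(String stickyInstance) {
        if (stickyInstance != null && isHealthy.test(stickyInstance)) {
            return stickyInstance;              // keep the session on its instance
        }
        // Sticky instance missing or unhealthy: round-robin over healthy ones.
        for (int i = 0; i < instances.size(); i++) {
            String candidate = instances.get((next + i) % instances.size());
            if (isHealthy.test(candidate)) {
                next = (next + i + 1) % instances.size();
                return candidate;
            }
        }
        throw new IllegalStateException("no healthy instances available");
    }
}
```

Note that after a failover the session is served by a different instance, which is why session data must be persisted to HADB (as described in Chapter 9) for the new instance to recover it.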

Sample Applications

The following directories contain sample applications that demonstrate load balancing and failover:


The ee-samples directory also contains information for setting up your environment to run the samples.