5 Load Balancing in a Cluster

Oracle WebLogic Server clusters provide load balancing support for different types of objects. Learn the related planning and configuration considerations for architects and administrators.

For information about load balancing with the ReadyApp framework, see Using the ReadyApp Framework in Deploying Applications to Oracle WebLogic Server.


Load Balancing for Servlets and JSPs

You can accomplish load balancing of servlets and JSPs with the built-in load balancing capabilities of a WebLogic proxy plug-in, with separate load balancing hardware, or with NGINX.

Note:

In addition to distributing HTTP traffic, external load balancers can distribute initial context requests that come from Java clients over t3 and the default channel. For a discussion of object-level load balancing in WebLogic Server, see Load Balancing for EJBs and RMI Objects.

Load Balancing with a Proxy Plug-in

The WebLogic proxy plug-in maintains a list of WebLogic Server instances that host a clustered servlet or JSP, and forwards HTTP requests to those instances on a round-robin basis. For more information about this load balancing method, see Round-Robin Load Balancing.

The plug-in also provides the logic necessary to locate the replica of a client's HTTP session state if a WebLogic Server instance should fail.

WebLogic Server supports the following Web servers and associated proxy plug-ins:

  • WebLogic Server with the HttpClusterServlet

  • Netscape Enterprise Server with the Netscape (proxy) plug-in

  • Apache with the Apache Server (proxy) plug-in

  • Microsoft Internet Information Server with the Microsoft-IIS (proxy) plug-in

For instructions on setting up proxy plug-ins, see Configure Proxy Plug-Ins.

How Session Connection and Failover Work with a Proxy Plug-in

For a description of connection and failover for HTTP sessions in a cluster with proxy plug-ins, see Accessing Clustered Servlets and JSPs Using a Proxy.

Load Balancing HTTP Sessions with an External Load Balancer

Clusters that employ a hardware load balancing solution can use any load balancing algorithm supported by the hardware. These can include advanced load-based balancing strategies that monitor the utilization of individual machines.

Load Balancer Configuration Requirements

If you choose to use load balancing hardware instead of a proxy plug-in, it must support a compatible passive or active cookie persistence mechanism, and SSL persistence.

  • Passive Cookie Persistence

    Passive cookie persistence enables WebLogic Server to write a cookie containing session parameter information through the load balancer to the client. For information about the session cookie and how a load balancer uses session parameter data to maintain the relationship between the client and the primary WebLogic Server hosting an HTTP session state, see Load Balancers and the WebLogic Session Cookie.

  • Active Cookie Persistence

    You can use certain active cookie persistence mechanisms with WebLogic Server clusters, provided the load balancer does not modify the WebLogic Server cookie. WebLogic Server clusters do not support active cookie persistence mechanisms that overwrite or modify the WebLogic HTTP session cookie. If the load balancer's active cookie persistence mechanism works by adding its own cookie to the client session, no additional configuration is required to use the load balancer with a WebLogic Server cluster.

  • SSL Persistence

    When SSL persistence is used, the load balancer performs all encryption and decryption of data between clients and the WebLogic Server cluster. The load balancer then uses the plain text cookie that WebLogic Server inserts on the client to maintain an association between the client and a particular server in the cluster.

Load Balancers and the WebLogic Session Cookie

A load balancer that uses passive cookie persistence can use a string in the WebLogic session cookie to associate a client with the server hosting its primary HTTP session state. The string uniquely identifies a server instance in the cluster. You must configure the load balancer with the offset and length of the string constant. The correct values for the offset and length depend on the format of the session cookie.

The format of a session cookie is:

sessionid!primary_server_id!secondary_server_id

where:

  • sessionid is a randomly generated identifier of the HTTP session. The length of the value is configured by the IDLength parameter in the <session-descriptor> element in the weblogic.xml file for an application. By default, the sessionid length is 52 bytes.

  • primary_server_id and secondary_server_id are 10-character identifiers of the primary and secondary hosts for the session.

    Note:

    For sessions using non-replicated memory, cookie, or file-based session persistence, the secondary_server_id is not present. For sessions that use in-memory replication, if the secondary session does not exist, the secondary_server_id is "NONE".
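
For example, with the default settings above, the primary server ID begins at offset 53 (the 52-byte session ID plus one character for the "!" delimiter) and is 10 characters long. The following minimal Java sketch is for illustration only (the class and method names are not part of any WebLogic Server API); it simply shows how those offset and length values map onto a cookie value:

public class CookieOffsets {
    // Assumes the default IDLength of 52 and 10-character server IDs;
    // adjust the offset if you change IDLength in weblogic.xml.
    static String primaryServerId(String cookieValue) {
        int offset = 52 + 1;    // 52-byte session ID plus the "!" delimiter
        int length = 10;        // server IDs are 10 characters long
        return cookieValue.substring(offset, offset + length);
    }
}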

For general instructions on configuring load balancers, see Configuring Load Balancers that Support Passive Cookie Persistence. For instructions on configuring BIG-IP, see Configuring BIG-IP Hardware with Clusters.

Related Programming Considerations

For programming constraints and recommendations for clustered servlets and JSPs, see Programming Considerations for Clustered Servlets and JSPs.

How Session Connection and Failover Work with a Load Balancer

For a description of connection and failover for HTTP sessions in a cluster with load balancing hardware, see Accessing Clustered Servlets and JSPs with Load Balancing Hardware.

Load Balancing WebLogic Cluster Servers with NGINX Open Source

This section describes how to load balance WebLogic Cluster Managed Servers using NGINX Open Source.

NGINX is an HTTP server, a reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. It can be installed on various operating systems, such as Unix, Linux, and Windows.

NGINX has two versions: NGINX Open Source and NGINX Plus.

  • NGINX Open Source is an open-source version, and is available for free use under the FreeBSD License.

  • NGINX Plus is a commercial version with community support and other enhanced features.

NGINX acts as a load balancer for incoming HTTP or HTTPS requests to the WebLogic Managed Servers that host the user applications. It is an efficient HTTP load balancer that distributes traffic among several application servers, enhancing the performance, scalability, and reliability of web applications. For more information, see Using nginx as HTTP load balancer.

For more information about NGINX, see NGINX.

Figure 5-1 Load Balancing of the WebLogic Managed Servers Using NGINX


Load balancing of WebLogic Managed Servers using NGINX.

The WebLogic cluster domain, either configured or dynamic, can be set up on a single VM or multiple VMs. It has Managed Servers listening on HTTP and HTTPS ports. For more information, see Setting Up WebLogic Clusters.

The WebLogic cluster domain can also be configured with Secure Mode enabled. See Secure your production domain in Oracle WebLogic Server Administration Console Online Help.

The NGINX load balancer can be installed on the VM hosting the WebLogic Admin Server or on a separate VM and configured to load balance both HTTP and HTTPS traffic. For more information on NGINX installation, see Installing NGINX Open Source.

The following NGINX load balancing algorithms are used to load balance the incoming traffic to the WebLogic Managed Servers:
  • Round-Robin Load Balancing Algorithm: Requests are distributed evenly across the servers, with server weights taken into consideration. When no load balancing method is explicitly configured, NGINX defaults to the round-robin algorithm. See Configuring Basic Load Balancing.

  • IPHash (Session Stickiness) Load Balancing Algorithm: The client's IP address is used as a hash key to determine which server should receive the client's requests, so requests from the same client are always directed to the same server unless that server becomes unavailable, thus establishing session persistence. See Configuring Basic Session Persistence.

For more information about load balancing algorithms, see Choosing a Load-Balancing Method.
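
For illustration only, the following minimal nginx.conf fragment shows an upstream group for two Managed Servers; the host names and ports are hypothetical, and your configuration will differ. Round robin is used by default; uncommenting the ip_hash directive switches to IPHash session stickiness, and adding a weight parameter to a server entry yields weighted round robin:

http {
    upstream wls_cluster {
        # ip_hash;                              # uncomment for IPHash session stickiness
        server host1.example.com:7003;          # Managed Server 1 (hypothetical)
        server host2.example.com:7003;          # Managed Server 2 (hypothetical)
        # server host2.example.com:7004 weight=2;   # example of a weighted server entry
    }

    server {
        listen 80;
        location / {
            proxy_pass http://wls_cluster;      # round robin by default
        }
    }
}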

To secure the traffic between the client application or browser and the NGINX server, you can enable SSL termination on NGINX. For more information, see NGINX SSL Termination.

To support high-volume traffic, NGINX might require performance tuning. For more information, see Tuning NGINX for Performance.

Load Balancing for EJBs and RMI Objects

Learn the WebLogic Server load balancing algorithms for EJBs and RMI objects. The load balancing algorithm for an object is maintained in the replica-aware stub obtained for a clustered object.

By default, WebLogic Server clusters use round-robin load balancing, described in Round-Robin Load Balancing. You can configure a different default load balancing method for a cluster by using the WebLogic Server Administration Console to set weblogic.cluster.defaultLoadAlgorithm. For instructions, see Configure Load Balancing Method for EJBs and RMIs. You can also specify the load balancing algorithm for a specific RMI object using the -loadAlgorithm option in rmic, or with the home-load-algorithm or stateless-bean-load-algorithm elements in an EJB's deployment descriptor. A load balancing algorithm that you configure for an object overrides the default load balancing algorithm for the cluster.

In addition to the standard load balancing algorithms, WebLogic Server supports custom parameter-based routing. See Parameter-Based Routing for Clustered Objects.

Also, external load balancers can distribute initial context requests that come from Java clients over t3 and the default channel. However, because WebLogic Server load balancing for EJBs and RMI objects is controlled using replica-aware stubs, including situations where server affinity is employed, you should not route client requests that follow the initial context request through the load balancers. When using the t3 protocol with external load balancers, you must ensure that only the initial context request is routed through the load balancers, and that subsequent requests are routed and controlled using WebLogic Server load balancing.

Oracle advises against using the t3s protocol with external load balancers. In cases where the use of t3 and SSL with external load balancers is required, Oracle recommends using t3 tunneling through HTTPS. In cases where server affinity is required, you must use HTTP session IDs for routing requests, terminate SSL at the load balancer, and perform session-based routing so that requests are routed appropriately based on their session IDs.

Note:

Oracle does not recommend enabling tunneling on channels that are available external to the firewall.

Round-Robin Load Balancing

WebLogic Server uses the round-robin algorithm as the default load balancing strategy for clustered object stubs when no algorithm is specified. This algorithm is supported for RMI objects and EJBs. It is also the method used by WebLogic proxy plug-ins.

The round-robin algorithm cycles through a list of WebLogic Server instances in order. For clustered objects, the server list consists of WebLogic Server instances that host the clustered object. For proxy plug-ins, the list consists of all WebLogic Server instances that host the clustered servlet or JSP.

The advantages of the round-robin algorithm are that it is simple, cheap and very predictable. The primary disadvantage is that there is some chance of convoying. Convoying occurs when one server is significantly slower than the others. Because replica-aware stubs or proxy plug-ins access the servers in the same order, a slow server can cause requests to "synchronize" on the server, then follow other servers in order for future requests.
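
Conceptually, round robin is just a cursor that cycles over the ordered server list, as in the following Java sketch (illustrative only; this is not WebLogic Server internal code, and the class and method names are invented):

import java.util.List;

class RoundRobinSelector {
    private final List<String> servers;    // server instances hosting the clustered object
    private int next = 0;                  // cursor into the list

    RoundRobinSelector(List<String> servers) { this.servers = servers; }

    synchronized String pick() {
        String server = servers.get(next);
        next = (next + 1) % servers.size();    // cycle through the list in order
        return server;
    }
}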

Note:

WebLogic Server does not always load balance an object's method calls. See Optimization for Collocated Objects.

Weight-Based Load Balancing

This algorithm applies only to EJB and RMI object clustering.

Weight-based load balancing improves on the round-robin algorithm by taking into account a pre-assigned weight for each server. In the WebLogic Server Administration Console, you can use the Server > Configuration > Cluster page to assign each server in the cluster a numerical weight between 1 and 100 in the Cluster Weight field. This value determines the proportion of the load that the server bears relative to other servers. If all servers have the same weight, each bears an equal proportion of the load. If one server has weight 50 and all other servers have weight 100, the server with weight 50 bears half as much load as any other server. This algorithm makes it possible to apply the advantages of the round-robin algorithm to clusters that are not homogeneous.
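
For example, in a cluster with weights 100, 100, and 50, the expected shares of the load are 40%, 40%, and 20%, because each share is the server's weight divided by the sum of all weights. The following Java sketch (illustrative only, with invented server names) computes those proportions:

import java.util.Map;

class WeightShares {
    // Expected fraction of requests per server, proportional to its cluster weight.
    static void printShares(Map<String, Integer> weights) {
        int total = weights.values().stream().mapToInt(Integer::intValue).sum();
        weights.forEach((server, w) ->
            System.out.printf("%s: %.0f%%%n", server, 100.0 * w / total));
    }

    public static void main(String[] args) {
        // Hypothetical servers and weights: 100, 100, 50 -> 40%, 40%, 20%
        printShares(Map.of("MS1", 100, "MS2", 100, "MS3", 50));
    }
}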

If you use the weight-based algorithm, carefully determine the relative weights to assign to each server instance. Factors to consider include:

  • The processing capacity of the server's hardware in relationship to other servers (for example, the number and performance of CPUs dedicated to WebLogic Server).

  • The number of non-clustered ("pinned") objects each server hosts.

If you change the specified weight of a server and reboot it, the new weighting information is propagated throughout the cluster via the replica-aware stubs. For related information see Cluster-Wide JNDI Naming Service.

Note:

WebLogic Server does not always load balance an object's method calls. See Optimization for Collocated Objects.

In this version of WebLogic Server, weight-based load balancing is not supported for objects that communicate using the RMI/IIOP protocol.

Random Load Balancing

The random method of load balancing applies only to EJB and RMI object clustering.

In random load balancing, requests are routed to servers at random. Random load balancing is recommended only for homogeneous cluster deployments, where each server instance runs on a similarly configured machine. A random allocation of requests does not allow for differences in processing power among the machines upon which server instances run. If a machine hosting servers in a cluster has significantly less processing power than other machines in the cluster, random load balancing will give the less powerful machine as many requests as it gives more powerful machines.

Random load balancing distributes requests evenly across server instances in the cluster, increasingly so as the cumulative number of requests increases. Over a small number of requests the load may not be balanced exactly evenly.

Disadvantages of random load balancing include the slight processing overhead incurred by generating a random number for each request, and the possibility that the load may not be evenly balanced over a small number of requests.
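
The selection step itself is trivial, as the following Java sketch shows (illustrative only; not WebLogic Server internal code):

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

class RandomSelector {
    // Each request picks a server uniformly at random; the cost is one
    // random-number generation per request.
    static String pick(List<String> servers) {
        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }
}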

Note:

WebLogic Server does not always load balance an object's method calls. See Optimization for Collocated Objects.

Server Affinity Load Balancing Algorithms

WebLogic Server provides three load balancing algorithms for RMI objects that provide server affinity. Server affinity turns off load balancing for external client connections; instead, the client considers its existing connections to WebLogic Server instances when choosing the server instance on which to access an object. If an object is configured for server affinity, the client-side stub attempts to choose a server instance to which it is already connected, and continues to use the same server instance for method calls. All stubs on that client attempt to use that server instance. If the server instance becomes unavailable, the stubs fail over, if possible, to a server instance to which the client is already connected.

The purpose of server affinity is to minimize the number of IP sockets opened between external Java clients and server instances in a cluster. WebLogic Server accomplishes this by causing method calls on objects to "stick" to an existing connection, instead of being load balanced among the available server instances. With server affinity algorithms, the less costly server-to-server connections are still load-balanced according to the configured load balancing algorithm.

Note:

Load balancing is disabled only for external client connections.

Server affinity is used in combination with one of the standard load balancing methods: Round-Robin, Weight-Based, or Random.

  • Round-Robin-affinity: Server affinity governs connections between external Java clients and server instances; round-robin load balancing is used for connections between server instances.

  • Weight-Based-affinity: Server affinity governs connections between external Java clients and server instances; weight-based load balancing is used for connections between server instances.

  • Random-affinity: Server affinity governs connections between external Java clients and server instances; random load balancing is used for connections between server instances.

For more information about load balancing algorithms that provide server affinity, see Round-Robin Affinity, Weight-Based Affinity, and Random-Affinity.

Server Affinity and Initial Context

A client can request an initial context from a particular server instance in the cluster, or from the cluster by specifying the cluster address in the URL. The connection process varies depending on how the context is obtained.

  • If the initial context is requested from a specific Managed Server, the context is obtained using a new connection to the specified server instance.

  • If the initial context is requested from the cluster, by default context requests are load balanced on a round-robin basis among the clustered server instances. To reuse an existing connection between a particular JVM and the cluster, set ENABLE_SERVER_AFFINITY to true in the hash table of weblogic.jndi.WLContext properties you specify when obtaining the context. (If a connection is not available, a new connection is created.) ENABLE_SERVER_AFFINITY is supported only when the context is requested from the cluster address.
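
The following Java sketch shows a client requesting an initial context from the cluster with server affinity enabled; the cluster address is hypothetical:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import weblogic.jndi.WLContext;

public class ClusterContext {
    static Context clusterContext() throws NamingException {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://mycluster.example.com:7001");  // hypothetical cluster address
        env.put(WLContext.ENABLE_SERVER_AFFINITY, "true");   // reuse an existing connection to the cluster, if any
        return new InitialContext(env);
    }
}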

Server Affinity and IIOP Client Authentication Using CSIv2

If you use WebLogic Server Common Secure Interoperability (CSIv2) functionality to support stateful interactions with the WebLogic Server Java EE Application Client (thin client), you must use an affinity-based load balancing algorithm to ensure that method calls stick to a server instance. Otherwise, all remote calls will be authenticated. To prevent redundant authentication of stateful CSIv2 clients, use one of the load balancing algorithms described in Round-Robin Affinity, Weight-Based Affinity, and Random-Affinity.

Round-Robin Affinity, Weight-Based Affinity, and Random-Affinity

WebLogic Server has the following three load balancing algorithms that provide server affinity:

  • round-robin-affinity

  • weight-based-affinity

  • random-affinity

Server affinity is supported for all types of RMI objects including JMS objects, all EJB home interfaces, and stateless EJB remote interfaces.

The server affinity algorithms consider existing connections between an external Java client and server instances in balancing the client load among WebLogic Server instances. Server affinity:

  • Turns off load balancing between external Java clients and server instances.

  • Causes method calls from an external Java client to stick to a server instance to which the client has an open connection, assuming that the connection supports the necessary protocol and QOS.

  • In the case of failure, causes the client to failover to a server instance to which it has an open connection, assuming that the connection supports the necessary protocol and QOS.

  • Does not affect the load balancing performed for server-to-server connections.

Server Affinity Examples

The following examples illustrate the effect of server affinity under a variety of circumstances. In each example, the objects deployed are configured for round-robin-affinity.

Example 1: Context From Cluster

In the example shown in Figure 5-2, the client obtains context from the cluster. Lookups on the context and object calls stick to a single connection. Requests for new initial context are load balanced on a round-robin basis.

Figure 5-2 Client Obtains Context From the Cluster

  1. Client requests a new initial context from the cluster (Provider_URL=clusteraddress) and obtains the context from MS1.

  2. Client does a lookup on the context for Object A. The lookup goes to MS1.

  3. Client issues a call to Object A. The call goes to MS1, to which the client is already connected. Additional method calls to Object A stick to MS1.

  4. Client requests a new initial context from the cluster (Provider_URL=clusteraddress) and obtains the context from MS2.

  5. Client does a lookup on the context for Object B. The call goes to MS2, to which the client is already connected. Additional method calls to Object B stick to MS2.

Example 2: Server Affinity and Failover

The example shown in Figure 5-3 illustrates the effect that server affinity has on object failover. When a Managed Server goes down, the client fails over to another Managed Server to which it has a connection.

Figure 5-3 Server Affinity and Failover

  1. Client requests a new initial context from MS1.

  2. Client does a lookup on the context for Object A. The lookup goes to MS1.

  3. Client makes a call to Object A. The call goes to MS1, to which the client is already connected. Additional calls to Object A stick to MS1.

  4. The client obtains a stub for Object C, which is pinned to MS3. The client opens a connection to MS3.

  5. MS1 fails.

  6. Client makes a call to Object A. The client no longer has a connection to MS1. Because the client is connected to MS3, it fails over to a replica of Object A on MS3.

Example 3: Server Affinity and Server-to-Server Connections

The example shown in Figure 5-4 illustrates the fact that server affinity does not affect the connections between server instances.

Figure 5-4 Server Affinity and Server-to-Server Connections

  1. A JSP on MS4 obtains a stub for Object B.

  2. The JSP selects a replica on MS1. For each method call, the JSP cycles through the Managed Servers upon which Object B is available, on a round-robin basis.

Parameter-Based Routing for Clustered Objects

Parameter-based routing allows you to control load balancing behavior at a lower level. Any clustered object can be assigned a CallRouter. This is a class that is called before each invocation with the parameters of the call. The CallRouter is free to examine the parameters and return the name of the server to which the call should be routed. For information about creating custom CallRouter classes, see Parameter-Based Routing for Clustered Objects in Developing RMI Applications for Oracle WebLogic Server.
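
The following Java sketch suggests what a parameter-based router might look like, assuming the weblogic.rmi.cluster.CallRouter interface with a getServerList callback as described in that guide; the server names and routing rule are purely illustrative, so check the guide for the exact contract:

import java.lang.reflect.Method;
import weblogic.rmi.cluster.CallRouter;   // assumed package; see the RMI guide

public class RegionRouter implements CallRouter {
    private static final String[] EAST = { "ServerEast1", "ServerEast2" };  // illustrative names
    private static final String[] WEST = { "ServerWest1", "ServerWest2" };

    // Called before each invocation with the method and its parameters;
    // returning null falls back to the configured load balancing algorithm.
    public String[] getServerList(Method m, Object[] params) {
        if (params != null && params.length > 0 && params[0] instanceof String) {
            String region = (String) params[0];
            return region.startsWith("EAST") ? EAST : WEST;
        }
        return null;
    }
}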

Optimization for Collocated Objects

WebLogic Server does not always load balance an object's method calls. In most cases, it is more efficient to use a replica that is collocated with the stub itself, rather than using a replica that resides on a remote server. Figure 5-5 illustrates this.

Figure 5-5 Collocation Optimization Overrides Load Balancer Logic for Method Call


In this example, a client connects to a servlet hosted by the first WebLogic Server instance in the cluster. In response to client activity, the servlet obtains a replica-aware stub for Object A. Because a replica of Object A is also available on the same server instance, the object is said to be collocated with the client's stub.

WebLogic Server always uses the local, collocated copy of Object A, rather than distributing the client's calls to other replicas of Object A in the cluster. It is more efficient to use the local copy, because doing so avoids the network overhead of establishing peer connections to other servers in the cluster.

This optimization is often overlooked when planning WebLogic Server clusters. The collocation optimization is also frequently confusing for administrators or developers who expect or require load balancing on each method call. If your Web application is deployed to a single cluster, the collocation optimization overrides any load balancing logic inherent in the replica-aware stub.

If you require load balancing on each method call to a clustered object, see Recommended Multitier Architecture for information about how to plan your WebLogic Server cluster accordingly.

Transactional Collocation

As an extension to the basic collocation strategy, WebLogic Server attempts to use collocated clustered objects that are enlisted as part of the same transaction. When a client creates a UserTransaction object, WebLogic Server attempts to use object replicas that are collocated with the transaction. This optimization is depicted in the example shown in Figure 5-6.

Figure 5-6 Collocation Optimization Extends to Other Objects in Transaction


In this example, a client attaches to the first WebLogic Server instance in the cluster and obtains a UserTransaction object. After beginning a new transaction, the client looks up Objects A and B to do the work of the transaction. In this situation WebLogic Server always attempts to use replicas of A and B that reside on the same server as the UserTransaction object, regardless of the load balancing strategies in the stubs for A and B.

This transactional collocation strategy is even more important than the basic optimization described in Optimization for Collocated Objects. If remote replicas of A and B were used, added network overhead would be incurred for the duration of the transaction, because the peer connections for A and B would be locked until the transaction committed. By using collocated clustered objects during a transaction, WebLogic Server reduces the network load for accessing the individual objects.
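
For illustration, a client-side transaction might look like the following Java sketch; the JNDI names for Objects A and B are hypothetical, while javax.transaction.UserTransaction is the standard JNDI name that WebLogic Server clients use to look up the UserTransaction object:

import javax.naming.Context;
import javax.transaction.UserTransaction;

public class TxCollocation {
    // ctx is an initial context obtained from the cluster.
    static void doTransactionalWork(Context ctx) throws Exception {
        UserTransaction tx =
            (UserTransaction) ctx.lookup("javax.transaction.UserTransaction");
        tx.begin();
        try {
            // WebLogic Server favors replicas of A and B that are collocated
            // with the transaction, regardless of the stubs' load balancing algorithm.
            Object a = ctx.lookup("ObjectA");   // hypothetical JNDI names
            Object b = ctx.lookup("ObjectB");
            // ... invoke methods on a and b as part of the transaction ...
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}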

XA Transaction Cluster Affinity

XA transaction cluster affinity allows server instances that are participating in a global transaction to service related requests rather than load balancing these requests to other member servers. When Enable Transaction Affinity=true, cluster throughput is increased by:

  • Reducing inter-server transaction coordination traffic.

  • Improving resource utilization, such as reducing JDBC connections.

  • Simplifying asynchronous processing of transactions.

If the cluster does not have a member participating in the transaction, the request is load balanced to an available cluster member. If the selected cluster member fails, the JTA Transaction Recovery Service can be migrated. See Roadmap for Configuring Automatic Migration of the JTA Transaction Recovery Service.

For more information about configuring the cluster, see Configure clusters in Oracle WebLogic Server Administration Console Online Help. You can also enable XA transaction affinity on the command line using -Dweblogic.cluster.TxnAffinityEnabled=true.

Load Balancing for JMS

WebLogic Server JMS supports server affinity for distributed JMS destinations and client connections.

By default, a WebLogic Server cluster uses the round-robin method to load balance objects. To use a load balancing algorithm that provides server affinity for JMS objects, you must configure the desired method for the cluster as a whole. You can configure the load balancing algorithm by using the WebLogic Server Administration Console to set weblogic.cluster.defaultLoadAlgorithm. For instructions, see Configure Load Balancing Method for EJBs and RMIs.

Server Affinity for Distributed JMS Destinations

Server affinity is supported for JMS applications that use the distributed destination feature; this feature is not supported for standalone destinations. If you configure server affinity for JMS connection factories, a server instance that is load balancing consumers or producers across multiple members of a distributed destination will first attempt to load balance across any destination members that are also running on the same server instance.

For detailed information on how the JMS connection factory's Server Affinity Enabled option affects the load balancing preferences for distributed destination members, see How Distributed Destination Load Balancing Is Affected When Using the Server Affinity Enabled Attribute in Administering JMS Resources for Oracle WebLogic Server.

Initial Context Affinity and Server Affinity for Client Connections

A system administrator can establish load balancing of JMS destinations across multiple servers in a cluster by configuring multiple JMS servers and using targets to assign them to the defined WebLogic Servers. Each JMS server is deployed on exactly one WebLogic Server and handles requests for a set of destinations. During the configuration phase, the system administrator enables load balancing by specifying targets for JMS servers. For instructions on setting up targets, see Configure Migratable Targets for Pinned Services. For instructions on deploying a JMS server to a migratable target, see Deploying, Activating, and Migrating Migratable Services.

A system administrator can establish cluster-wide, transparent access to destinations from any server in the cluster by configuring multiple connection factories and using targets to assign them to WebLogic Servers. Each connection factory can be deployed on multiple WebLogic Servers. For more information about connection factories, see ConnectionFactory in Developing JMS Applications for Oracle WebLogic Server.

The application uses the JNDI to look up a connection factory and create a connection to establish communication with a JMS server. Each JMS server handles requests for a set of destinations. Requests for destinations not handled by a JMS server are forwarded to the appropriate server.

WebLogic Server provides server affinity for client connections. If an application has a connection to a given server instance, JMS will attempt to establish new JMS connections to the same server instance.

When creating a connection, JMS will try first to achieve initial context affinity. It will attempt to connect to the same server or servers to which a client connected for its initial context, assuming that the server instance is configured for that connection factory. For example, if the connection factory is configured for servers A and B, but the client has an InitialContext on server C, then the connection factory cannot achieve initial context affinity and will choose between servers A and B.

If a connection factory cannot achieve initial context affinity, it will try to provide affinity to a server to which the client is already connected. For instance, assume the client has an initial context on server A and some other type of connection to server B. If the client uses a connection factory configured for servers B and C, it will not achieve initial context affinity. The connection factory will instead attempt to achieve server affinity by trying to create a connection to server B, to which it already has a connection, rather than server C.

If a connection factory cannot provide either initial context affinity or server affinity, then the connection factory is free to make a connection wherever possible. For instance, assume a client has an initial context on server A, no other connections and a connection factory configured for servers B and C. The connection factory is unable to provide any affinity and is free to attempt new connections to either server B or C.

Note:

In the last case, if the client attempts to make a second connection using the same connection factory, it will go to the same server as it did on the first attempt. That is, if it chose server B for the first connection, when the second connection is made, the client will have a connection to server B and the server affinity rule will be enforced.
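
To relate these rules to client code, the affinity decisions described above are made when the connection factory creates the connection, as in the following Java sketch (the connection factory JNDI name is hypothetical):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;

public class JmsClient {
    static Connection connect(Context ctx) throws Exception {
        // The initial context was obtained from some server instance (the affinity anchor).
        ConnectionFactory cf =
            (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");  // hypothetical JNDI name
        // Initial context affinity, then server affinity, is applied here,
        // subject to the servers the connection factory is targeted to.
        return cf.createConnection();
    }
}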