Sun GlassFish Enterprise Server 2.1 Quick Start Guide

Chapter 4 Working with Load Balancers

This section provides instructions for setting up the Web Server software to act as a load balancer for the cluster of Application Servers. It also provides steps for configuring a load balancer and exporting its configuration to the Web Server. The load balancer feature is available only if you are running a domain with the enterprise or cluster profile.

A load balancer is deployed with a cluster. It distributes incoming requests among the cluster's instances, which lets you scale an application horizontally while presenting users with a single URL, and it fails requests over to the remaining instances when one becomes unavailable.

Enterprise Server includes load balancing plug-ins for popular web servers such as Sun Java™ System Web Server, Apache, and Microsoft Windows IIS.

To complete this section, you must have sufficient memory to run a Web Server on your system in addition to the Domain Administration Server and the two instances you have created so far in this guide. A system with 512 Mbytes to 1024 Mbytes of memory is recommended.

This topic presents the following steps:

  • Setting up Load Balancing

  • Starting Load Balancing

  • Verifying Load Balancing

  • High Availability and Failover Using the In-memory Replication Feature

Setting up Load Balancing

Before you set up load balancing, you need to install the load balancer plug-in. These procedures assume you are running a domain with the cluster or enterprise profile.

To Set Up Load Balancing

  1. Create a load balancer using the Admin Console. Alternatively, you can use the asadmin create-http-lb(1) command, as shown in the example after this procedure.

    1. Click the HTTP Load Balancers node in the Admin Console.

    2. Click New.

    3. Type lb1 as the name of the load balancer, the host on which Web Server is installed, and the Web Server instance port. In this sample scenario, the Web Server host is localhost and the port is 38000.

    4. Select the Apply Changes Automatically check box. If you choose this option, you do not have to export the load balancer configuration. All changes you make to the load balancer configuration are propagated automatically.

    5. Select cluster1 as the target.

      Creating a Cluster explains how to create a sample cluster (cluster1).

    6. Click Save.

  2. Enable cluster1 for load balancing:

    asadmin enable-http-lb-server cluster1

  3. Enable the clusterjsp application for load balancing:

    asadmin enable-http-lb-application clusterjsp
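If you prefer to perform all of step 1 from the command line, the create-http-lb subcommand can create the load balancer, set its target, and enable automatic apply in a single invocation. The following sketch reuses the host, port, and names from this example; confirm the exact option syntax with asadmin create-http-lb --help on your installation:

    asadmin create-http-lb --devicehost localhost --deviceport 38000 --target cluster1 --autoapplyenabled=true lb1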

See Also

For information on advanced topics, such as changing the load balancer configuration or creating health checkers, see Chapter 4, Configuring HTTP Load Balancing, in the Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.

Starting Load Balancing

Start load balancing by starting or restarting the Web Server.
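For example, if you are using Sun Java System Web Server 6.1, each Web Server instance directory contains stop and start scripts. The paths below assume the default instance directory layout; adjust web_server_install_dir and hostname for your installation:

    web_server_install_dir/https-hostname/stop
    web_server_install_dir/https-hostname/start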

Verifying Load Balancing

Once the application is deployed and the load balancer is running, verify that load balancing is working.

To Verify Load Balancing

  1. To display the first page of the clusterjsp application, type this URL in your browser:

    http://localhost:web_server_port/clusterjsp

    Replace the localhost variable with the name of the system that the Web Server is running on.

    Replace the web_server_port variable with the value of the port attribute of the LS element in web_server_install_dir/https-hostname/config/server.xml. For this example, port 38000 is used.
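    For reference, the LS entry in server.xml resembles the following abridged, illustrative example; attribute names and values vary by installation:

      <LS id="ls1" port="38000" servername="localhost"/>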

    A page similar to the one you saw in To Verify Application Deployment appears.

  2. Examine the Session and Host information displayed. For example:

    • Executed From Server: localhost

    • Server Port Number: 38000

    • Executed Server IP Address: 192.18.145.133

    • Session Created: Day Mon 05 14:55:34 PDT 2005

  3. The Server Port Number is 38000, the Web Server’s port. The load balancer has forwarded the request to one of the two instances in the cluster.

  4. Using different browser software, or a browser on a different machine, create a new session. Requests from the same browser are “sticky” and go to the same instance.

    These sessions should be distributed between the two instances in the cluster. You can verify this by looking at the server access log files, which are located as follows (a sample log entry appears after this list):

    • Solaris Java Enterprise System installation:

      /var/opt/SUNWappserver/nodeagents/nodeagent_name/instance1/logs/access/server_access_log

      /var/opt/SUNWappserver/nodeagents/nodeagent_name/instance2/logs/access/server_access_log

    • Linux Java Enterprise System installation:

      /var/opt/sun/appserver/nodeagents/nodeagent_name/instance1/logs/access/server_access_log

      /var/opt/sun/appserver/nodeagents/nodeagent_name/instance2/logs/access/server_access_log

    • Windows Java Enterprise System installation:

      as-install\nodeagents\nodeagent_name\instance1\logs\access\server_access_log

      as-install\nodeagents\nodeagent_name\instance2\logs\access\server_access_log

    • Stand-alone Enterprise Server installations:

      as-install/nodeagents/nodeagent_name/instance1/logs/access/server_access_log

      as-install/nodeagents/nodeagent_name/instance2/logs/access/server_access_log
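    In each log, a forwarded request appears as an entry similar to the following hypothetical line; the exact fields depend on the access-log format configured for the instance:

      127.0.0.1 - - [05/Apr/2005:14:55:34 -0700] "GET /clusterjsp HTTP/1.1" 200 2150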

  5. Add a name and value pair (Name=Name, Value=Duke) to store in the HttpSession.

  6. Click the Add to Session Data button.

  7. Verify that the session data was added.

High Availability and Failover Using the In-memory Replication Feature

GlassFish v2 does not offer HADB. For high availability and failover, GlassFish offers the in-memory replication feature. The following procedure illustrates this feature:

  1. Restart the Web Server that has the load balancer plug-in installed before deploying an application. This ensures that requests are served by the instances in the order set in the loadbalancer.xml file. If you use the loadbalancer.xml file provided in this chapter, instance1 serves the first request.
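    For orientation, the loadbalancer.xml for this example might look roughly like the following sketch. The elements shown are abridged and illustrative (the instance listener URLs are placeholders), so treat the file your installation generates as authoritative:

      <loadbalancer>
        <cluster name="cluster1">
          <instance name="instance1" enabled="true" listeners="http://localhost:instance1_port"/>
          <instance name="instance2" enabled="true" listeners="http://localhost:instance2_port"/>
          <web-module context-root="clusterjsp" enabled="true"/>
          <health-checker url="/" interval-in-seconds="10" timeout-in-seconds="30"/>
        </cluster>
      </loadbalancer>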

  2. You have already deployed the clusterjsp web application, which stores session data. You should be able to see that successive requests are served by the same instance that served the first request, and that the session data is maintained across requests.

  3. Send a few requests, note which instance served them, and then shut down that instance. Use this command to stop the instance:

    asadmin stop-instance --user adminuser --password adminpassword instance1

  4. Send the next request and verify that the new data is stored and that the previously added data is still present in the session. If the server that was serving requests becomes unavailable, another server in the same cluster takes over the request with all earlier session data and completes it.
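    When you have verified failover, you can return the stopped instance to the cluster with the matching start-instance command, using the same credentials as in the stop example above:

    asadmin start-instance --user adminuser --password adminpassword instance1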