This section explains how to set up the Web Server software to act as a load balancer for the cluster of Application Servers. In addition, it provides steps for configuring a load balancer and exporting its configuration to the Web Server. The load balancer feature is available only if you are running a domain with the enterprise or cluster profile.
A load balancer is deployed with a cluster. A load balancer provides the following features:
Allows an application or service to be scaled horizontally across multiple physical (or logical) hosts while still presenting the user with a single URL
Insulates the user from host failures or server crashes when used with session persistence
Enhances security by hiding the internal network from the user
Enterprise Server includes load balancing plug-ins for popular web servers such as Sun Java System Web Server, Apache, and Microsoft Windows IIS.
To complete this section, you must have sufficient memory to run a Web Server on your system in addition to the Domain Administration Server and the two instances you have created so far in this guide. A system with 512 Mbytes to 1024 Mbytes of memory is recommended.
This topic presents the following steps:
Before you set up load balancing, you need to install the load balancer plug-in. These procedures assume you are running a domain with cluster or enterprise profile.
Create a load balancer using the Admin Console. Alternatively, you can use the asadmin create-http-lb(1) command.
Click the HTTP Load Balancers node in the Admin Console.
Click New.
Type lb1 as the name of the load balancer, the host on which the Web Server is installed, and the Web Server instance port. In this sample scenario, the Web Server host is localhost and the port is 38000.
Select the Apply Changes Automatically check box. If you choose this option, you do not have to export the load balancer configuration. All changes you make to the load balancer configuration are propagated automatically.
Select cluster1 as target.
Creating a Cluster explains how to create a sample cluster (cluster1).
Click Save.
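The Admin Console steps above have a command-line equivalent using asadmin create-http-lb(1). The sketch below assumes the option names --devicehost, --deviceport, --target, and --autoapplyenabled; verify them against the create-http-lb reference for your installation before running:

```shell
# Create load balancer lb1 pointing at the Web Server on localhost:38000,
# targeting cluster1, with automatic change propagation enabled
# (option names assumed from the create-http-lb(1) reference)
asadmin create-http-lb --devicehost localhost --deviceport 38000 \
    --target cluster1 --autoapplyenabled=true lb1
```

With --autoapplyenabled=true, changes to the load balancer configuration are propagated automatically, matching the Apply Changes Automatically check box in the Admin Console.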
Enable cluster1 for load balancing:
asadmin enable-http-lb-server cluster1
Enable the clusterjsp application for load balancing:
asadmin enable-http-lb-application clusterjsp
For information on advanced topics, such as changing the load balancer configuration or creating health checkers, see the Chapter 4, Configuring HTTP Load Balancing, in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Start load balancing by starting or restarting the Web Server.
If the Web Server instance serving as load balancer is not already running, start the Web Server.
If you are using Web Server 7.0, use the wadm start-instance command.
For Web Server 6.1, run the start script in the <websvr-instance-dir> directory.
If the Web Server instance serving as load balancer is already running, stop the Web Server and restart.
For Web Server 6.1, use the stop program in web_server_install_dir/https-hostname and restart the server by running the start program.
For Web Server 7.0, use the wadm stop-instance followed by the wadm start-instance command.
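For Web Server 6.1, the stop and start programs mentioned above live in the instance directory, so a restart amounts to the following (paths are the placeholders used in this section, not literal paths):

```shell
# Restart the Web Server 6.1 instance acting as load balancer;
# replace web_server_install_dir and hostname with your actual values
web_server_install_dir/https-hostname/stop
web_server_install_dir/https-hostname/start
```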
Once the application is deployed and the load balancer is running, verify that the load balancing is working.
To display the first page of the clusterjsp application, type this URL in your browser:
http://localhost:web_server_port/clusterjsp
Replace localhost with the name of the system on which the Web Server is running.
Replace the web_server_port variable with the value of the port attribute of the LS element in web_server_install_dir/https-hostname/config/server.xml. For this example, port 38000 is used.
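One way to find that port is to grep for the LS element's port attribute in server.xml. The fragment below writes an invented minimal server.xml so the grep is self-contained; against a real installation, point the same grep at web_server_install_dir/https-hostname/config/server.xml:

```shell
# Toy stand-in for web_server_install_dir/https-hostname/config/server.xml
# (the LS element and its port attribute mirror this example's setup)
mkdir -p /tmp/ws-demo/config
cat > /tmp/ws-demo/config/server.xml <<'EOF'
<SERVER>
  <LS id="ls1" ip="any" port="38000" servername="localhost"/>
</SERVER>
EOF

# Extract the port attribute of the LS element
grep -o 'port="[0-9]*"' /tmp/ws-demo/config/server.xml   # prints port="38000"
```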
A page similar to the one you saw in To Verify Application Deployment appears.
Examine the Session and Host information displayed. For example:
Executed From Server: localhost
Server Port Number: 38000
Executed Server IP Address: 192.18.145.133
Session Created: Day Mon 05 14:55:34 PDT 2005
The Server Port Number is 38000, the Web Server’s port. The load balancer has forwarded the request to one of the two instances in the cluster.
Using different browser software, or a browser on a different machine, create a new session. Requests from the same browser are “sticky” and go to the same instance.
These sessions should be distributed to the two instances in the cluster. You can verify this by looking at the server access log files located here:
Solaris Java Enterprise System installation:
/var/opt/SUNWappserver/nodeagents/nodeagent_name/instance1/logs/access/server_access_log
/var/opt/SUNWappserver/nodeagents/nodeagent_name/instance2/logs/access/server_access_log
Linux Java Enterprise System installation:
/var/opt/sun/appserver/nodeagents/nodeagent_name/instance1/logs/access/server_access_log
/var/opt/sun/appserver/nodeagents/nodeagent_name/instance2/logs/access/server_access_log
Windows Java Enterprise System installation:
as-install\nodeagents\nodeagent_name\instance1\logs\access\server_access_log
as-install\nodeagents\nodeagent_name\instance2\logs\access\server_access_log
Stand-alone Enterprise Server installations:
as-install/nodeagents/nodeagent_name/instance1/logs/access/server_access_log
as-install/nodeagents/nodeagent_name/instance2/logs/access/server_access_log
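To check the distribution, you can count the clusterjsp entries in each instance's access log. The snippet below uses a toy log with invented sample lines so it can run anywhere; against a real installation, point the same grep at the instance1 and instance2 paths listed above and compare the counts:

```shell
# Toy access log standing in for one instance's server_access_log
# (the sample entries are invented for illustration)
mkdir -p /tmp/lb-demo
cat > /tmp/lb-demo/server_access_log <<'EOF'
127.0.0.1 - - [05/Jun/2005:14:55:34 -0700] "GET /clusterjsp/HaJsp.jsp HTTP/1.1" 200 1542
127.0.0.1 - - [05/Jun/2005:14:56:02 -0700] "GET /clusterjsp/HaJsp.jsp HTTP/1.1" 200 1542
EOF

# Number of clusterjsp requests this instance served
grep -c "clusterjsp" /tmp/lb-demo/server_access_log   # prints 2
```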
Add a name and value pair (Name=Name Value=Duke) for storing in HttpSession.
Click the Add to Session Data button.
Verify that the session data was added.
GlassFish v2 does not offer HADB. For high availability and failover, GlassFish offers the in-memory replication feature. The following procedure illustrates this feature:
Restart the web server that has the load balancer plugin installed before deploying an application. This ensures that requests are served by instances in the order set in the loadbalancer.xml file. If you use the loadbalancer.xml file provided in this chapter, instance1 serves the first request.
You have already deployed the clusterjsp web application, which stores session data. You should be able to see that successive requests are served by the same instance that served the first request and the session data is maintained across the requests.
Send a few requests, note which instance served them, and then shut down that instance. Use this command to stop the instance: asadmin stop-instance --user adminuser --password adminpassword instance1
Send the next request and verify that the new data is stored and that the previously added data is still present in the session. If one of the servers serving requests becomes unavailable, another server in the same cluster takes over the request with all the earlier session data and completes it.