Sun GlassFish Enterprise Server 2.1 Quick Start Guide

Chapter 5 Setting Up High Availability Failover

With the configuration used in the previous chapter, if a server instance goes down, users lose session state. This section, the second of two advanced topics, provides the steps for installing the high-availability database (HADB), creating a highly available cluster, and testing HTTP session persistence.

Enterprise Server supports both HTTP session persistence and persistence for Stateful Session Beans. The procedures in this chapter cover high availability using in-memory replication or HADB.

These steps assume that you have already performed the steps in the previous sections of this Quick Start. The steps are presented in the order in which you should complete them. To use the HADB feature, you must be running a domain with the enterprise profile.


Note –

Completing the procedures in this section may require additional hardware resources.


This topic contains the following sections:

  • High-availability Clusters and HADB

  • HADB Preinstallation Steps

  • Installing HADB

  • Starting HADB

  • Configuring a Cluster and Application for High Availability

  • Restarting the Cluster

  • Verifying HTTP Session Failover

High-availability Clusters and HADB

A high-availability cluster in Sun GlassFish Enterprise Server integrates a state replication service with the clusters and load balancer created earlier, enabling failover of HTTP sessions.

HttpSession objects and Stateful Session Bean state are stored in HADB, a high-availability database for storing session state. This horizontally scalable state management service can be managed independently of the application server tier. It was designed to support up to 99.999% service and data availability with load balancing, failover, and state recovery capabilities.

Keeping state management responsibilities separate from Enterprise Server has significant benefits. Server instances spend their cycles performing as scalable, high-performance Java Platform, Enterprise Edition 5 (Java EE 5 platform) containers, delegating state replication to an external high-availability state service. Because of this loosely coupled architecture, instances can easily be added to or removed from a cluster. The HADB state replication service can be independently scaled for optimum availability and performance. When an application server instance also performs replication, the performance of Java EE applications can suffer, and the applications can be subject to longer garbage collection pauses.

Because each HADB node requires 512 Mbytes of memory, you need 1 Gbyte of memory to run two HADB nodes on the same machine. If you have less memory, set up each node on a different machine. Running a two-node database on only one host is not recommended for deployment since it is not fault tolerant.
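Before deciding how many HADB nodes to place on a host, it can help to confirm how much physical memory the host has. The commands below are only a quick sketch; output formats vary by operating system release:

  # On Solaris, report the installed physical memory:
  prtconf | grep "Memory size"

  # On Linux, report total, used, and free memory in megabytes:
  free -m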

HADB Preinstallation Steps

This procedure covers the most common preinstallation tasks. For information on other preinstallation topics, including prerequisites for installing HADB, configuring network redundancy, and file system support, see Chapter 10, Installing and Setting Up High Availability Database, in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.

The recommended system configuration values in this section are sufficient for running up to six HADB nodes and do not take into consideration other applications on the system that also use shared memory.

To Configure Your System for HADB

  1. Get root access.

  2. Define variables related to shared memory and semaphores.

    • On Solaris:

      1. Add these lines to the /etc/system file (or if these lines are in the file as comments, uncomment them and make sure that the values match these):

        set shmsys:shminfo_shmmax=0x80000000

        set shmsys:shminfo_shmseg=36

        set semsys:seminfo_semmnu=600

        Set shminfo_shmmax to the total memory in your system, expressed in hexadecimal notation (the value 0x80000000 shown corresponds to 2 gigabytes of memory).

        If the seminfo_* variables are already defined, increment them by the amounts shown. The default values for seminfo_semmni and seminfo_semmns do not need to be changed. The variable shminfo_shmseg is obsolete after Solaris 8.

      2. Reboot, using this command:

        sync; sync; reboot

    • On Linux:

      1. Add these lines to the /etc/sysctl.conf file (or, if they are present in the file as comments, uncomment them). Set the value to the amount of physical memory on the machine, specified as a decimal number of bytes. For example, for a machine with 2 GB of physical memory:

        kernel.shmmax = 2147483648

        kernel.shmall = 2147483648

      2. Reboot, using this command:

        sync; sync; reboot

    • On Windows: No special system settings are needed.

  3. If you used existing JDK software when you installed a standalone Enterprise Server, check the JDK version.

    HADB requires Sun JDK 1.4.1_03 or higher (for the latest information on JDK versions, see the Sun GlassFish Enterprise Server 2.1 Release Notes). Check the installed version and, if you have not already done so, set the JAVA_HOME environment variable to the directory where the JDK is installed. A short example follows this procedure.

  4. If necessary after the reboot, restart the domain, Web Server, and node agent.

    To restart the domain, use the command asadmin start-domain domain1.

    To restart the Web Server, execute the start program in web_server_install_dir/https-hostname.

    To restart the node agent, use the command asadmin start-node-agent hostname. Replace the variable hostname with the name of the host where the Enterprise Server is running.
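The following is a minimal sketch of step 3 for Solaris or Linux: checking the JDK version and setting JAVA_HOME. The installation directory shown is only an example; substitute the path where your JDK is actually installed:

  # Report the JDK version in use:
  java -version

  # Point JAVA_HOME at the JDK installation directory (example path only):
  JAVA_HOME=/usr/jdk/jdk1.5.0
  export JAVA_HOME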

Installing HADB

This section provides the steps for installing the high-availability database (HADB).


Note –

If you plan to run the high-availability database on the Enterprise Server machine, and if you installed HADB when you installed Enterprise Server, skip to Starting HADB.


You can install the HADB component on the same machine as your Enterprise Server system if that machine has 2 Gbytes of memory and 1-2 CPUs. If it does not, use additional hardware for HADB.

To Install HADB

  1. Run the installer.

  2. Choose the option to install HADB.

  3. Complete the installation on your hosts.

Starting HADB

This section describes how to start the HADB management agent by running the ma-initd script, which is sufficient in most cases. For a production deployment, start the management agent as a service instead, to ensure its availability. For more information, see Starting the HADB Management Agent in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.

If you are starting a database with HADB nodes on several hosts, start the management agent on each host.

To Start HADB in a Java Enterprise System Installation on Solaris or Linux

  1. Change to the /etc/init.d directory:

    cd /etc/init.d

  2. Run the command to start the agent:

    ./ma-initd start
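After running ma-initd, you can confirm that the management agent process is running. This is only a quick sketch; the process name and installation path vary between releases, but the HADB installation directory typically appears in the agent's command line:

  # List processes whose command line mentions the HADB installation:
  ps -ef | grep hadb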

To Start HADB in a Java Enterprise System Installation on Windows

HADB is started by default when Sun Java Enterprise System is configured and running. However, if you need to start it manually, follow these steps:

  1. Go to Start⇒Settings⇒Control Panel, and double click Administrative Tools.

  2. Double click the Services shortcut.

  3. Select HADBMgmtAgent Service from the Services list.

  4. From the Action menu, select Start.

To Start HADB in a Stand-Alone Installation on Solaris or Linux

  1. Change to the HADB bin directory in the Enterprise Server installation: as-install/hadb/4/bin

  2. Run the command to start the agent:

    ./ma-initd start

To Start HADB in a Stand-Alone Installation on Windows

  1. In a terminal window, change to the HADB bin directory in the Enterprise Server installation: as-install\hadb\4.x\bin

    The x represents the release number of HADB.

  2. Run the command to start the agent:

    ma -i ma.cfg

Configuring a Cluster and Application for High Availability

The FirstCluster cluster must be configured to use HADB, and high availability must be enabled for the clusterjsp application, before you can verify HTTP session persistence. Use the asadmin configure-ha-cluster command to configure an existing cluster for high availability. For more information on how to use this command, type configure-ha-cluster --help at the asadmin command prompt or see the configure-ha-cluster(1) man page.
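For example, commands along the following lines configure FirstCluster to use an HADB spanning two hosts and then redeploy the sample application with availability enabled. This is only a sketch: host1, host2, the device size, and the location of clusterjsp.ear are placeholders to adjust for your installation, and the complete option lists are in the man pages:

  asadmin configure-ha-cluster --user admin --hosts host1,host2 --devicesize 256 FirstCluster

  asadmin deploy --user admin --target FirstCluster --availabilityenabled=true --force=true clusterjsp.ear

Alternatively, you can enable availability for the application in the Admin Console when you deploy or redeploy it.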

Restarting the Cluster

Before the changes made in the previous section take effect, the cluster's instances must be restarted.

To Restart the Cluster

  1. In the Admin Console, expand the Clusters node.

  2. Click FirstCluster.

  3. In the right pane, click Stop Instances.

  4. Once the instances are stopped, click Start Instances.
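If you prefer the command line, you can restart the cluster with asadmin instead. This sketch assumes the node agent is running and that you connect to the default administration host and port:

  asadmin stop-cluster --user admin FirstCluster

  asadmin start-cluster --user admin FirstCluster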

Verifying HTTP Session Failover

The steps for testing session data failover are similar to those for testing load balancing, described in the topic Verifying Load Balancing. The difference is that this time the session data is preserved after a failure. Failover is transparent to the user because the sample application is configured for automatic retry after failure.

To Verify HTTP Session Failover

  1. To display the first page of the clusterjsp application, type this URL in your browser:

    http://localhost:web_server_port/clusterjsp

    Replace the localhost variable with the name of the system that the Web Server is running on.

    Replace the web_server_port variable with the value of the port attribute of the LS element in web_server_install_dir/https-hostname/config/server.xml. For this example, port 38000 is used.

    A page similar to what you saw in To Verify Application Deployment appears.

  2. Examine the Session and Host information displayed. For example:

    • Executed From Server: localhost

    • Server Port Number: 38000

    • Executed Server IP Address: 192.18.145.133

    • Session ID: 41880f618e4593e14fb5d0ac434b1

    • Session Created: Wed Feb 23 15:23:18 PST 2005

  3. View the server access log files to determine which instance is serving the application. (A command-line sketch for this step and the next appears after this procedure.) The log files are located here:

    • Solaris Java Enterprise System installation:

      /var/opt/SUNWappserver/nodeagents/nodeagent_name/i1/logs/access/server_access_log

      /var/opt/SUNWappserver/nodeagents/nodeagent_name/i2/logs/access/server_access_log

    • Linux Java Enterprise System installation:

      /var/opt/sun/appserver/nodeagents/nodeagent_name/i1/logs/access/server_access_log

      /var/opt/sun/appserver/nodeagents/nodeagent_name/i2/logs/access/server_access_log

    • Windows Java Enterprise System installation:

      as-install\nodeagents\nodeagent_name\i1\logs\access\server_access_log

      as-install\nodeagents\nodeagent_name\i2\logs\access\server_access_log

    • Standalone Enterprise Server installations:

      as-install/nodeagents/nodeagent_name/i1/logs/access/server_access_log

      as-install/nodeagents/nodeagent_name/i2/logs/access/server_access_log

  4. Stop the instance that is serving the page.

    1. In the Admin Console, in the left pane, expand Clusters.

    2. Click FirstCluster.

    3. In the right pane, click the Instances tab.

    4. Click the checkbox next to the server instance that served the request and click the Stop button.

  5. Reload the clusterjsp sample application page.

    The session ID and session attribute data are retained.

  6. Check the access log of the other instance, and notice that it is now servicing the request.

    The state failover features work because the HTTP session state is stored persistently in HADB. In addition to HTTP session state, Enterprise Server can also store the state of stateful session beans (EJB components) in HADB.
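As a command-line alternative to steps 3 and 4 of this procedure, you can follow an instance's access log and then stop that instance with asadmin. This sketch assumes a standalone installation on Solaris or Linux; the node agent name, instance name, and paths are placeholders for your installation:

  # Watch the access log of instance i1 to see whether it serves the requests:
  tail -f as-install/nodeagents/nodeagent_name/i1/logs/access/server_access_log

  # Stop the instance that served the request:
  asadmin stop-instance --user admin i1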