Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide

Chapter 1 High Availability in Enterprise Server

This chapter describes the high availability features in the Sun GlassFish Enterprise Server that are available with the cluster profile and the enterprise profile.

Note –

The HADB software is supplied with the Enterprise Server standalone distribution of Sun GlassFish Enterprise Server. For information about available distributions of Sun GlassFish Enterprise Server, see Distribution Types and Their Components in Sun GlassFish Enterprise Server 2.1 Installation Guide. HADB features are available only in the enterprise profile. For information about profiles, see Usage Profiles in Sun GlassFish Enterprise Server 2.1 Administration Guide.

This chapter contains the following topics.

Overview of High Availability

High availability applications and services provide their functionality continuously, regardless of hardware and software failures. Such applications are sometimes referred to as providing five nines of reliability, because they are intended to be available 99.999% of the time.

Enterprise Server provides the following high availability features:

HTTP Load Balancer Plug-in

The HTTP load balancer plug-in accepts HTTP/HTTPS requests and forwards them to application server instances in a cluster. If an instance fails, becomes unavailable (due to network faults), or becomes unresponsive, the load balancer redirects requests to existing, available instances. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly. Enterprise Server provides the load balancer plug-in for the Sun Java System Web Server, the Apache Web Server, and Microsoft Internet Information Server.

By distributing workload among multiple physical machines, the load balancer increases overall system throughput. It also provides higher availability through failover of HTTP requests. For HTTP session information to persist, you must configure HTTP session persistence.

For simple, stateless applications, a load-balanced cluster may be sufficient. However, for mission-critical applications with session state, use load-balanced clusters with replicated session persistence.

Server instances and clusters participating in load balancing have a homogeneous environment. Usually that means that the server instances reference the same server configuration, can access the same physical resources, and have the same applications deployed to them. Homogeneity ensures that before and after failures, the load balancer always distributes load evenly across the active instances in the cluster.

For information on configuring load balancing and failover for the HTTP load balancer plug-in, see Chapter 5, Configuring HTTP Load Balancing.
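As a minimal sketch of the configuration flow, a load balancer configuration can be created with asadmin, referenced to an existing cluster, and exported for the web server plug-in. All names and paths below are examples; Chapter 5 describes the options in full.

```shell
# Hedged sketch: create a load balancer configuration, reference the cluster,
# enable its instances for load balancing, and export loadbalancer.xml for
# the web server plug-in. The cluster name mycluster is an assumption.
asadmin create-http-lb-config --user admin mycluster-lb-config
asadmin create-http-lb-ref --user admin --config mycluster-lb-config mycluster
asadmin enable-http-lb-server --user admin mycluster
asadmin export-http-lb-config --user admin --config mycluster-lb-config /tmp/loadbalancer.xml
```

The exported loadbalancer.xml file is then copied into the web server's plug-in configuration directory.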

High Availability Java Message Service

The Java Message Service (JMS) API is a messaging standard that allows Java EE applications and components to create, send, receive, and read messages. It enables distributed communication that is loosely coupled, reliable, and asynchronous. The Sun Java System Message Queue (MQ), which implements JMS, is tightly integrated with Enterprise Server, enabling you to create components that rely on JMS, such as message-driven beans (MDBs).

JMS is made highly available through connection pooling and failover and MQ clustering. For more information, see Chapter 10, Java Message Service Load Balancing and Failover.

Connection Pooling and Failover

Enterprise Server supports JMS connection pooling and failover. The Enterprise Server pools JMS connections automatically. By default, Enterprise Server selects its primary MQ broker randomly from the specified host list. When failover occurs, MQ transparently transfers the load to another broker and maintains JMS semantics.

For more information about JMS connection pooling and failover, see Connection Pooling and Failover.
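For illustration, additional MQ brokers can be registered in the JMS host list so that failover has more than one broker to choose from. This is a hedged sketch; the broker host name and port are examples.

```shell
# Register a second broker as a JMS host, list the configured hosts, and
# ping the default JMS service to verify connectivity. Names are examples.
asadmin create-jms-host --user admin --mqhost broker2.example.com --mqport 7676 broker2
asadmin list-jms-hosts --user admin
asadmin jms-ping --user admin
```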

MQ Clustering

MQ Enterprise Edition supports multiple interconnected broker instances known as a broker cluster. With broker clusters, client connections are distributed across all the brokers in the cluster. Clustering provides horizontal scalability and improves availability.

For more information about MQ clustering, see Using MQ Clusters with Enterprise Server.

RMI-IIOP Load Balancing and Failover

With RMI-IIOP load balancing, IIOP client requests are distributed to different server instances or name servers, which spreads the load evenly across the cluster, providing scalability. IIOP load balancing combined with EJB clustering and availability also provides EJB failover.

When a client performs a JNDI lookup for an object, the Naming Service essentially binds the request to a particular server instance. From then on, all lookup requests made from that client are sent to the same server instance, and thus all EJBHome objects will be hosted on the same target server. Any bean references obtained henceforth are also created on the same target host. This effectively provides load balancing, since all clients randomize the list of target servers when performing JNDI lookups. If the target server instance goes down, the lookup or EJB method invocation will failover to another server instance.

IIOP load balancing and failover happen transparently. No special steps are needed during application deployment. If the Enterprise Server instance on which the application client is deployed participates in a cluster, the Enterprise Server finds all currently active IIOP endpoints in the cluster automatically. However, a client should have at least two endpoints specified for bootstrapping purposes, in case one of the endpoints has failed.

For more information on RMI-IIOP load balancing and failover, see Chapter 11, RMI-IIOP Load Balancing and Failover.

More Information

For information about planning a high-availability deployment, including assessing hardware requirements, planning network configuration, and selecting a topology, see the Sun GlassFish Enterprise Server 2.1 Deployment Planning Guide. This manual also provides a high-level introduction to concepts such as:

For more information about developing applications that take advantage of high availability features, see Sun GlassFish Enterprise Server 2.1 Developer’s Guide.

Tuning High Availability Servers and Applications

For information on how to configure and tune applications and Enterprise Server for best performance with high availability, see the Performance Tuning Guide, which discusses topics such as:

How Enterprise Server Provides High Availability

Enterprise Server provides high availability through the following subcomponents and features:

Storage for Session State Data

Storing session state data enables the session state to be recovered after the failover of a server instance in a cluster. Recovering the session state enables the session to continue without loss of information. Enterprise Server provides the following types of high availability storage for HTTP session and stateful session bean data:

In-Memory Replication on Other Servers in the Cluster

In-memory replication on other servers provides lightweight storage of session state data without the need to obtain a separate database, such as HADB. This type of replication uses memory on other servers for high availability storage of HTTP session and stateful session bean data. Clustered server instances replicate session state in a ring topology. Each backup instance stores the replicated data in memory. Replication of session state data in memory on other servers enables sessions to be distributed.

The use of in-memory replication requires the Group Management Service (GMS) to be enabled. For more information about GMS, see Group Management Service.
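For illustration, in-memory replication for a cluster might be enabled along the following lines. The dotted attribute names, cluster name, and application name are assumptions to be checked against your configuration; the point is that availability is switched on at the cluster configuration and again at deployment time.

```shell
# Hedged sketch: enable the availability service for the cluster's
# configuration, select replicated persistence for the web container,
# then deploy with availability enabled. All names are illustrative.
asadmin set --user admin mycluster-config.availability-service.availability-enabled=true
asadmin set --user admin mycluster-config.availability-service.web-container-availability.persistence-type=replicated
asadmin deploy --user admin --target mycluster --availabilityenabled=true myapp.ear
```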

If server instances in a cluster are located on different machines, ensure that the following prerequisites are met:

High Availability Database

Note –

The HADB software is supplied with the Enterprise Server standalone distribution of Sun GlassFish Enterprise Server. For information about available distributions of Sun GlassFish Enterprise Server, see Distribution Types and Their Components in Sun GlassFish Enterprise Server 2.1 Installation Guide. HADB features are available only in the enterprise profile. For information about profiles, see Usage Profiles in Sun GlassFish Enterprise Server 2.1 Administration Guide.

Enterprise Server provides the High Availability Database (HADB) for high availability storage of HTTP session and stateful session bean data. HADB is designed to support up to 99.999% service and data availability with load balancing, failover, and state recovery. Generally, you must configure and manage HADB independently of Enterprise Server.

Keeping state management responsibilities separated from Enterprise Server has significant benefits. Enterprise Server instances spend their cycles performing as scalable, high-performance application containers, delegating state replication to an external high availability state service. Because of this loosely coupled architecture, Enterprise Server instances can easily be added to or removed from a cluster. The HADB state replication service can be independently scaled for optimum availability and performance. When an Enterprise Server instance also performs replication, the performance of Java EE applications can suffer and can be subject to longer garbage collection pauses.

For information on planning and setting up your installation for high availability with HADB, including determining hardware configuration, sizing, and topology, see the Sun GlassFish Enterprise Server 2.1 Deployment Planning Guide.

Highly Available Clusters

A cluster is a collection of instances that work together as one logical entity. A cluster provides a runtime environment for one or more Java EE applications. A highly available cluster integrates a state replication service with clusters and a load balancer.

Using clusters provides the following advantages:

All instances in a cluster:

Every cluster in the domain has a unique name; furthermore, this name must be unique across all node agent names, server instance names, cluster names, and configuration names. The name must not be domain. You perform the same operations on a cluster (for example, deploying applications and creating resources) that you perform on an unclustered server instance.
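The cluster operations described above can be sketched with asadmin. The cluster, node agent, and instance names below are examples; the node agent is assumed to exist already.

```shell
# Hedged sketch: create a cluster, add two instances under an existing
# node agent, and start the cluster. All names are illustrative.
asadmin create-cluster --user admin mycluster
asadmin create-instance --user admin --cluster mycluster --nodeagent agent1 instance1
asadmin create-instance --user admin --cluster mycluster --nodeagent agent1 instance2
asadmin start-cluster --user admin mycluster
```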

Clusters and Configurations

A cluster's settings are derived from a named configuration, which can potentially be shared with other clusters. A cluster whose configuration is not shared by other server instances or clusters is said to have a stand-alone configuration. By default, the name of this configuration is cluster_name-config, where cluster_name is the name of the cluster.

A cluster that shares its configuration with other clusters or instances is said to have a shared configuration.

Clusters, Instances, Sessions, and Load Balancing

Clusters, server instances, load balancers, and sessions are related as follows:

The cluster thus acts as a safe boundary for session failover for the server instances within the cluster. You can use the load balancer and upgrade components within the Enterprise Server without loss of service.

Recovering from Failures

Using Sun Cluster

Sun Cluster provides automatic failover of the domain administration server, node agents, Enterprise Server instances, and Message Queue. For more information, see the Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS.

Use standard Ethernet interconnect and a subset of Sun Cluster products. This capability is included in Java ES.

Manual Recovery

You can use various techniques to manually recover individual subcomponents:

Recovering the Domain Administration Server

Loss of the Domain Administration Server (DAS) affects only administration. Enterprise Server clusters and applications will continue to run as before, even if the DAS is not reachable.

Use any of the following methods to recover the DAS:

Recovering Node Agents and Server Instances

There are two methods for recovering node agents and server instances.

Keep a backup zip file. There are no explicit commands to back up the node agent and server instances. Simply create a zip file with the contents of the node agents directory. After failure, unzip the saved backup on a new machine with the same host name and IP address. Use the same install directory location, OS, and so on. A file-based install, package-based install, or restored backup image must be present on the machine.
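The zip-file approach can be sketched as a short shell script. Here tar is used in place of zip, and a temporary directory tree simulates the real install layout (the das.properties file and its contents are illustrative), so the back-up and restore steps can be shown end to end.

```shell
#!/bin/sh
# Sketch of backing up and restoring a node agents directory. The layout
# below simulates as-install/nodeagents; substitute your real paths.
AS_INSTALL=$(mktemp -d)                      # stand-in for the install dir
mkdir -p "$AS_INSTALL/nodeagents/agent1/agent/config"
echo "agent.das.host=machine1" > "$AS_INSTALL/nodeagents/agent1/agent/config/das.properties"

# Back up: archive the entire nodeagents directory.
BACKUP="$AS_INSTALL/nodeagents-backup.tar.gz"
tar -czf "$BACKUP" -C "$AS_INSTALL" nodeagents

# Restore: unpack on a machine with the same host name, IP address, and
# install path (a second temporary directory stands in for that machine).
RESTORE=$(mktemp -d)
tar -xzf "$BACKUP" -C "$RESTORE"
ls "$RESTORE/nodeagents/agent1/agent/config"
```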

Manual recovery. You must use a new host with the same IP address.

  1. Install the Enterprise Server with node agent on the machine.

  2. Recreate the node agents. You do not need to create any server instances.

  3. Synchronization will copy and update the configuration and data from the DAS.

Recovering the HTTP Load Balancer and Web Server

There are no explicit commands to back up only a web server configuration. Simply zip the web server installation directory. After failure, unzip the saved backup on a new machine with the same network identity. If the new machine has a different IP address, update the DNS server or the routers.

Note –

This assumes that the web server is either reinstalled or restored from an image first.

The load balancer plugin (plugins directory) and configurations are in the web server installation directory, typically /opt/SUNWwbsvr. The web-install/web-instance/config directory contains the loadbalancer.xml file.

Recovering Message Queue

Message Queue (MQ) configurations and resources are stored in the DAS and can be synchronized to the instances. Any other data and configuration information is in the MQ directories, typically under /var/imq, so back up and restore these directories as required. The new machine must already contain the MQ installation. Be sure to start the MQ brokers as before when you restore a machine.

Recovering HADB

Note –

The HADB software is supplied with the Enterprise Server standalone distribution of Sun GlassFish Enterprise Server. For information about available distributions of Sun GlassFish Enterprise Server, see Distribution Types and Their Components in Sun GlassFish Enterprise Server 2.1 Installation Guide. HADB features are available only in the enterprise profile. For information about profiles, see Usage Profiles in Sun GlassFish Enterprise Server 2.1 Administration Guide.

If you have two active HADB nodes, you can configure two spare nodes (on separate machines) that can take over in case of failure. This is a cleaner method because backing up and restoring HADB may result in stale sessions being restored.

For information on creating a database with spare nodes, see Creating a Database. For information on adding spare nodes to a database, see Adding Nodes. If recovery and self-repair fail, then the spare node takes over automatically.

Using NetBackup

Note –

This procedure has not been tested by Sun QA.

Use Veritas NetBackup to save an image of each machine. In the case of BPIP, back up the four machines with web servers and Application Servers.

For each restored machine use the same configuration as the original, for example the same host name, IP address, and so on.

For file-based products such as Enterprise Server, back up and restore just the relevant directories. However, for package-based installs such as the web server image, you must back up and restore the entire machine. Packages are installed into the Solaris package database, so if you back up only the directories and subsequently restore them onto a new system, the result will be a "deployed" web server with no record in the package database. This may cause problems with future patching or upgrading.

Do not manually copy and restore the Solaris package database. The alternative is to back up an image of the machine after the components, for example the web server, are installed. Call this the baseline tar file. When you make changes to the web server, back up the modified directories, for example, under /opt/SUNWwbsvr. To restore, start with the baseline tar file and then copy over the web server directories that have been modified. You can use the same procedure for MQ (a package-based install for BPIP). If you upgrade or patch the original machine, be sure to create a new baseline tar file.

If the machine with the DAS goes down, the DAS is unavailable until you restore it.

The DAS is the central repository. When you restore server instances and restart them, they are synchronized with information from the DAS only. Hence, all changes must be performed through the asadmin command or the Admin Console.

A daily backup image of HADB may not work, since the image may contain old application session state.

Recreating the Domain Administration Server

If the machine hosting the domain administration server (DAS) fails, you can recreate the DAS if you have previously backed up the DAS. To recreate a working copy of the DAS, you must have:

Note –

You must maintain a backup of the DAS from the first machine. Use asadmin backup-domain to backup the current domain.
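For example, a backup taken on the first machine might look like the following; the domains directory shown is an assumption to be replaced with your actual install path.

```shell
# Back up domain1 into its backups directory so it can later be restored
# with asadmin restore-domain. Directory and domain name are examples.
asadmin backup-domain --domaindir /opt/SUNWappserver/domains domain1
```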

To migrate the DAS

The following steps are required to migrate the Domain Administration Server from the first machine (machine1) to the third machine (machine3).

  1. Install the Enterprise Server on the third machine just as it is installed on the first machine.

    This is required so that the DAS can be properly restored on the third machine and there are no path conflicts.

    1. Install the Enterprise Server administration package using the command-line (interactive) mode.

      To activate the interactive command-line mode, invoke the installation program using the console option:

      ./bundle-filename -console

      You must have root permission to install using the command-line interface.

    2. Deselect the option to install default domain.

      Restoration of backed-up domains is supported only on two machines with the same architecture and exactly the same installation paths (use the same as-install and domain-root-dir on both machines).

  2. Copy the backup ZIP file from the first machine into the domain-root-dir on the third machine.

    You can also FTP the file.

  3. Restore the ZIP file onto the third machine.

    asadmin restore-domain --filename domain-root-dir/ --clienthostname machine3 domain1

    Note –

    By specifying the --clienthostname option, you avoid the need to modify the jmx-connector element's client-hostname property in the domain.xml file.

    You can back up any domain. However, when recreating the domain, the domain name must be the same as the original.

  4. Change domain-root-dir/domain1/generated/tmp directory permissions on the third machine to match the permissions of the same directory on first machine.

    The default permissions of this directory are: drwx------ (or 700).

    For example:

    chmod 700 domain-root-dir/domain1/generated/tmp

    The example above assumes you are backing up domain1. If you are backing up a domain by another name, you should replace domain1 above with the name of the domain being backed up.

  5. In the domain-root-dir/domain1/config/domain.xml file on the third machine, update the value of the jms-service element's host attribute.

    The original setting of this attribute is as follows:

    <jms-service ... host="machine1" ... />

    Modify the setting of this attribute as follows:

    <jms-service ... host="machine3" ... />
  6. Start the restored domain on machine3:

    asadmin start-domain --user admin-user --password admin-password domain1

    The DAS contacts all running node agents and provides the node agents with information for contacting the DAS. The node agents use this information to communicate with the DAS.

  7. For any node agents that are not running when the DAS is restarted, change the property value in as-install/nodeagents/nodeagent/agent/config/ on machine2.

    This step is not required for node agents that are running when the DAS is restarted.

  8. Restart the node agent on machine2.

    Note –

    Start the cluster instances using the asadmin start-instance command to allow them to synchronize with the restored domain.