Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide

Chapter 1 High Availability in Application Server

This chapter describes the high availability features in the Sun Java System Application Server Enterprise Edition, with the following topics:

  - Overview of High Availability
  - How Application Server Provides High Availability
  - Recovering from Failures
  - Recreating the Domain Administration Server

Overview of High Availability

High availability applications and services provide their functionality continuously, regardless of hardware and software failures. Such applications are sometimes referred to as providing five nines of reliability, because they are meant to be available 99.999% of the time.

Application Server provides the following high availability features:

  - High availability session persistence
  - High availability Java Message Service
  - RMI-IIOP load balancing and failover

High Availability Session Persistence

Application Server provides high availability of HTTP requests and session data (both HTTP session data and stateful session bean data).

J2EE applications typically have significant amounts of session state data. A web shopping cart is the classic example of session state. An application can also cache frequently needed data in the session object. In fact, almost all applications with significant user interactions need to maintain session state. Both HTTP sessions and stateful session beans (SFSBs) have session state data.

Preserving session state across server failures can be important to end users. For high availability, Application Server provides the capability to persist session state in the HADB. If the Application Server instance hosting the user session fails, the session state can be recovered, and the session can continue without loss of information.
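For example, once HADB is configured, you can enable availability at deployment time. The following is a minimal sketch; the cluster name mycluster and the application sessionapp.war are placeholders:

    # Enable the availability service for the cluster's configuration
    asadmin set mycluster-config.availability-service.availability-enabled=true

    # Deploy the application with availability enabled
    asadmin deploy --target mycluster --availabilityenabled=true sessionapp.war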

For a detailed description of how to set up high availability session persistence, see Chapter 9, Configuring High Availability Session Persistence and Failover.

High Availability Java Message Service

The Java Message Service (JMS) API is a messaging standard that allows J2EE applications and components to create, send, receive, and read messages. It enables distributed communication that is loosely coupled, reliable, and asynchronous. The Sun Java System Message Queue (MQ), which implements JMS, is tightly integrated with Application Server, enabling you to create components that rely on JMS, such as message-driven beans (MDBs).

JMS is made highly available through connection pooling and failover and MQ clustering. For more information, see Chapter 10, Java Message Service Load Balancing and Failover.

Connection Pooling and Failover

Application Server supports JMS connection pooling and failover. The Application Server pools JMS connections automatically. By default, Application Server selects its primary MQ broker randomly from the specified host list. When failover occurs, MQ transparently transfers the load to another broker and maintains JMS semantics.
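As an illustration, the broker selection and reconnection behavior can be tuned through attributes of the jms-service element in domain.xml; the values below are illustrative, and the server target shown is the default instance (other targets follow the same pattern):

    # Choose the primary broker randomly from the address list
    asadmin set server.jms-service.addresslist-behavior=random

    # Reconnect to another broker automatically if the current one fails
    asadmin set server.jms-service.reconnect-enabled=true
    asadmin set server.jms-service.reconnect-attempts=3
    asadmin set server.jms-service.reconnect-interval-in-seconds=5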

For more information about JMS connection pooling and failover, see Connection Pooling and Failover.

MQ Clustering

MQ Enterprise Edition supports multiple interconnected broker instances known as a broker cluster. With broker clusters, client connections are distributed across all the brokers in the cluster. Clustering provides horizontal scalability and improves availability.

For more information about MQ clustering, see Using MQ Clusters with Application Server.

RMI-IIOP Load Balancing and Failover

With RMI-IIOP load balancing, IIOP client requests are distributed to different server instances or name servers, which spreads the load evenly across the cluster, providing scalability. IIOP load balancing combined with EJB clustering and availability also provides EJB failover.

When a client performs a JNDI lookup for an object, the Naming Service essentially binds the request to a particular server instance. From then on, all lookup requests made from that client are sent to the same server instance, and thus all EJBHome objects are hosted on the same target server. Any bean references obtained henceforth are also created on the same target host. This effectively provides load balancing, since all clients randomize the list of target servers when performing JNDI lookups. If the target server instance goes down, the lookup or EJB method invocation fails over to another server instance.

IIOP load balancing and failover happen transparently. No special steps are needed during application deployment. However, adding instances to or deleting instances from the cluster does not update an existing client's view of the cluster. To update it, you must manually update the endpoints list on the client.
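For example, an RMI-IIOP client can name the cluster's IIOP endpoints explicitly through the com.sun.appserv.iiop.endpoints system property; the host names, ports, and client class below are placeholders:

    java -Dcom.sun.appserv.iiop.endpoints=host1:3700,host2:3700 MyEJBClient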

For more information on RMI-IIOP load balancing and failover, see Chapter 11, RMI-IIOP Load Balancing and Failover.

More Information

For information about planning a high-availability deployment, including assessing hardware requirements, planning network configuration, and selecting a topology, see Sun Java System Application Server Enterprise Edition 8.2 Deployment Planning Guide. That manual also provides a high-level introduction to the underlying concepts.

For more information about developing applications that take advantage of high availability features, see Sun Java System Application Server Enterprise Edition 8.2 Developer’s Guide.

For information on how to configure and tune applications and Application Server for best performance with high availability, see Sun Java System Application Server Enterprise Edition 8.2 Performance Tuning Guide.

How Application Server Provides High Availability

Application Server provides high availability through the following subcomponents and features:

  - Load balancer plug-in
  - High availability database (HADB)
  - Highly available clusters

Load Balancer Plug-in

The load balancer plug-in accepts HTTP and HTTPS requests and forwards them to application server instances in a cluster. If an instance fails, becomes unavailable (due to network faults), or becomes unresponsive, the load balancer redirects requests to the remaining available machines. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly. Application Server Enterprise Edition and Standard Edition provide the load balancer plug-in for the Sun Java System Web Server, the Apache Web Server, and Microsoft Internet Information Server.

By distributing workload among multiple physical machines, the load balancer increases overall system throughput. It also provides higher availability through failover of HTTP requests. For HTTP session information to persist, you must configure HTTP session persistence.

For simple, stateless applications, a load-balanced cluster may be sufficient. However, for mission-critical applications with session state, use load-balanced clusters with HADB.

Server instances and clusters participating in load balancing must have a homogeneous environment. Usually that means that the server instances reference the same server configuration, can access the same physical resources, and have the same applications deployed to them. Homogeneity ensures that, before and after failures, the load balancer always distributes load evenly across the active instances in the cluster.
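As a sketch of the workflow (the cluster and configuration names are placeholders, and Chapter 5 has the authoritative procedure), the load balancer configuration is created in the DAS and then exported for the web server:

    # Create a load balancer configuration that targets the cluster
    asadmin create-http-lb-config --target mycluster mylb-config

    # Export it as loadbalancer.xml for the web server's config directory
    asadmin export-http-lb-config --config mylb-config loadbalancer.xml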

For information on configuring load balancing and failover, see Chapter 5, Configuring HTTP Load Balancing.

High Availability Database

Application Server Enterprise Edition provides the High Availability Database (HADB) for high availability storage of HTTP session and stateful session bean data. HADB is designed to support up to 99.999% service and data availability with load balancing, failover, and state recovery. Generally, you must configure and manage HADB independently of Application Server.

Keeping state management responsibilities separate from Application Server has significant benefits. Application Server instances spend their cycles performing as scalable, high-performance application containers, delegating state replication to an external high availability state service. Because of this loosely coupled architecture, Application Server instances can easily be added to or removed from a cluster, and the HADB state replication service can be scaled independently for optimum availability and performance. When an Application Server instance also performs replication, the performance of J2EE applications can suffer, and they can be subject to longer garbage collection pauses.
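For example, a small HADB database might be created with the hadbm management utility; the host names, device size, password, and database name below are placeholders, and Creating a Database describes the full set of options:

    hadbm create --hosts host1,host2 --devicesize 1024 --dbpassword password123 hadb1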

For information on planning and setting up your application server installation for high availability with HADB, including determining hardware configuration, sizing, and topology, see Planning for Availability in Sun Java System Application Server Enterprise Edition 8.2 Deployment Planning Guide and Chapter 3, Selecting a Topology, in Sun Java System Application Server Enterprise Edition 8.2 Deployment Planning Guide.

Highly Available Clusters

A cluster is a collection of Application Server instances that work together as one logical entity. A cluster provides a runtime environment for one or more J2EE applications. A highly available cluster integrates a state replication service with clusters and the load balancer.

Using clusters provides the following advantages:

  - Load balancing, because requests are distributed among the instances in the cluster
  - Higher availability, because the load of a failed instance fails over to the remaining instances
  - Simpler administration, because the cluster is managed as one logical entity

All instances in a cluster:

  - Reference the same server configuration
  - Have the same set of deployed applications
  - Have access to the same resources

Every cluster in the domain has a unique name; furthermore, this name must be unique across all node agent names, server instance names, cluster names, and configuration names. The name must not be domain. You perform the same operations on a cluster (for example, deploying applications and creating resources) that you perform on an unclustered server instance.
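For example, deploying an application or creating a resource on a cluster uses the same subcommands as for an unclustered instance, with the cluster as the target; the names below are placeholders:

    asadmin deploy --target mycluster myapp.ear
    asadmin create-jdbc-resource --target mycluster --connectionpoolid mypool jdbc/mydatasource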

Clusters and Configurations

A cluster's settings are derived from a named configuration, which can potentially be shared with other clusters. A cluster whose configuration is not shared by other server instances or clusters is said to have a stand-alone configuration. By default, the name of this configuration is cluster_name-config, where cluster_name is the name of the cluster.

A cluster that shares its configuration with other clusters or instances is said to have a shared configuration.
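A sketch of both cases with asadmin (the names are placeholders):

    # Stand-alone configuration: cluster1-config is created automatically
    asadmin create-cluster cluster1

    # Shared configuration: cluster2 reuses an existing named configuration
    asadmin create-cluster --config shared-config cluster2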

Clusters, Instances, Sessions, and Load Balancing

Clusters, server instances, load balancers, and sessions are related as follows:

  - A server instance is not required to be part of a cluster. However, an instance that is not part of a cluster cannot take advantage of high availability features.
  - The server instances within a cluster can be hosted on one or more machines.
  - A load balancer can forward requests to server instances in multiple clusters, and a single cluster can receive requests from multiple load balancers.
  - Each session is tied to a particular cluster; therefore, although you can deploy an application on multiple clusters, session failover occurs only within a single cluster.

The cluster thus acts as a safe boundary for session failover for the server instances within the cluster. You can use the load balancer and cluster to upgrade components within the Application Server without loss of service.

Recovering from Failures

Using Sun Cluster

Sun Cluster provides automatic failover of the domain administration server, node agents, Application Server instances, Message Queue, and HADB. For more information, see Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS.

This capability uses standard Ethernet interconnect and a subset of Sun Cluster products, and is included in Java ES.

Manual Recovery

You can use various techniques to manually recover individual subcomponents:

  - Domain Administration Server
  - Node agents and server instances
  - Load balancer and web server
  - Message Queue
  - HADB

Recovering the Domain Administration Server

Loss of the Domain Administration Server (DAS) affects only administration. Application Server clusters and applications continue to run as before, even if the DAS is not reachable.

To recover the DAS, restore it from a backup of the domain onto a machine with the same configuration, as described in Recreating the Domain Administration Server.

Recovering Node Agents and Server Instances

There are two methods for recovering node agents and server instances.

Keep a backup zip file. There are no explicit commands to back up the node agent and server instances. Instead, create a zip file with the contents of the node agents directory. After a failure, unzip the saved backup on a new machine with the same host name and IP address, using the same installation directory location, operating system, and so on. A file-based install, package-based install, or restored backup image must already be present on the machine.
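A minimal sketch of this backup, assuming a default installation layout (install-dir is a placeholder):

    # On the original machine
    cd install-dir
    zip -r nodeagents-backup.zip nodeagents

    # On the replacement machine (same host name, IP address, and install path)
    cd install-dir
    unzip nodeagents-backup.zip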

Manual recovery. You must use a new host with the same IP address.

  1. Install the Application Server node agent bits on the machine.

  2. See the instructions for the AS 8.1 UR2 patch 4 installation.

  3. Recreate the node agents, as sketched after this list. You do not need to create any server instances.

  4. Synchronization will copy and update the configuration and data from the DAS.
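As an illustration of step 3, recreating a node agent that registers with the DAS might look like this; the agent name, DAS host, and admin port are placeholders:

    asadmin create-node-agent --host das-host --port 4849 nodeagent1
    asadmin start-node-agent nodeagent1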

Recovering Load Balancer and Web Server

There are no explicit commands to back up only a web server configuration. Simply zip the web server installation directory. After failure, unzip the saved backup on a new machine with the same network identity. If the new machine has a different IP address, update the DNS server or the routers.


Note –

This assumes that the web server is either reinstalled or restored from an image first.


The load balancer plug-in (in the plugins directory) and its configuration are in the web server installation directory, typically /opt/SUNWwbsvr. The web-install/web-instance/config directory contains the loadbalancer.xml file.
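A minimal sketch of this backup and restore, assuming the default installation directory:

    # On the original machine
    zip -r webserver-backup.zip /opt/SUNWwbsvr

    # On the replacement machine (same network identity)
    unzip webserver-backup.zip -d /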

Recovering Message Queue

Message Queue (MQ) configurations and resources are stored in the DAS and can be synchronized to the instances. Any other data and configuration information is in the MQ directories, typically under /var/imq, so back up and restore these directories as required. The new machine must already contain the MQ installation. Be sure to start the MQ brokers as before when you restore a machine.
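A minimal sketch, assuming the default /var/imq location and a broker named mybroker (a placeholder):

    # Back up the MQ data and configuration directories
    tar cf imq-backup.tar /var/imq

    # On the restored machine (MQ already installed), restore and restart the broker
    tar xf imq-backup.tar -C /
    imqbrokerd -name mybroker &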

Recovering HADB

If you have two active HADB nodes, you can configure two spare nodes (on separate machines) that can take over in case of failure. This is cleaner than backing up and restoring HADB, which may restore stale sessions.

For information on creating a database with spare nodes, see Creating a Database. For information on adding spare nodes to a database, see Adding Nodes. If a node fails and recovery and self-repair also fail, a spare node takes over automatically.
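A sketch of creating a database with spares and checking node roles (the database name, host list, and option values are illustrative; see the referenced sections for exact usage):

    # Create a database with active nodes plus two spares
    hadbm create --hosts host1,host2,host3,host4 --spares 2 hadb1

    # Check which nodes are active and which are spares
    hadbm status --nodes hadb1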

Using NetBackup


Note –

This procedure has not been tested by Sun QA.


Use Veritas NetBackup to save an image of each machine. In the case of BPIP, back up the four machines with web servers and application servers.

For each restored machine, use the same configuration as the original, for example, the same host name, IP address, and so on.

For file-based products such as Application Server, back up and restore just the relevant directories. However, for package-based installs such as the web server image, you must back up and restore the entire machine, because packages are registered in the Solaris package database. If you back up only the directories and subsequently restore them onto a new system, the result is a "deployed" web server that the package database knows nothing about, which may cause problems with future patching or upgrading.

Do not manually copy and restore the Solaris package database. An alternative is to back up an image of the machine after the components (for example, the web server) are installed. Call this the baseline tar file. When you make changes to the web server, back up the changed directories, for example, under /opt/SUNWwbsvr. To restore, start with the baseline tar file and then copy over the web server directories that have been modified, as sketched below. You can use the same procedure for MQ (a package-based install in BPIP). If you upgrade or patch the original machine, be sure to create a new baseline tar file.
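A sketch of this baseline approach for the web server (the instance directory name is a placeholder):

    # Baseline image, taken immediately after installation
    tar cf baseline-webserver.tar /opt/SUNWwbsvr

    # After configuration changes, back up only the modified directories
    tar cf webserver-changes.tar /opt/SUNWwbsvr/web-instance/config

    # To restore: extract the baseline, then overlay the changed directories
    tar xf baseline-webserver.tar -C /
    tar xf webserver-changes.tar -C /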

If the machine with the DAS goes down, administration is unavailable until you restore it.

The DAS is the central repository. When you restore server instances and restart them, they are synchronized with information from the DAS only. Hence, make all changes through asadmin or the Admin Console.

A daily backup image of HADB may not work, because the image may contain stale application session state.

Recreating the Domain Administration Server

If you have backed up the Domain Administration Server (DAS), you can recreate it if the host machine fails. To recreate a working copy of the DAS, you must have:

  - The machine that hosted the original DAS (machine1 in the procedure below)
  - A machine hosting a cluster with server instances and a node agent (machine2)
  - A backup machine on which the DAS is to be recreated (machine3)
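For example, the backup might be taken as follows; the domain name and directory are placeholders:

    asadmin backup-domain --domaindir install-dir/domains domain1

The resulting zip file (sjsas_backup_v00001.zip in the procedure below) is what you copy to the replacement machine.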


Note –

You must maintain a backup of the DAS from the first machine. Use asadmin backup-domain to back up the current domain.


To Migrate the Domain Administration Server

To migrate the DAS from the first machine (machine1) to the third machine (machine3), follow these steps:

  1. Install the application server on the third machine just as it is installed on the first machine.

    This is required so that the DAS can be properly restored on the third machine and there are no path conflicts.

    1. Install the application server administration package using the command-line (interactive) mode.

      To activate the interactive command-line mode, invoke the installation program using the console option:


      ./bundle-filename -console

      You must have root permission to install using the command-line interface.

    2. Deselect the option to install the default domain.

      Restoration of backed-up domains is only supported on two machines with the same architecture and exactly the same installation paths (use the same install-dir and domain-root-dir on both machines).

  2. Copy the backup ZIP file from the first machine into the domain-root-dir on the third machine.

    You can also FTP the file.

  3. Execute the asadmin restore-domain command to restore the zip file onto the third machine:


    asadmin restore-domain --filename domain-root-dir/sjsas_backup_v00001.zip domain1

    You can back up any domain. However, when recreating the domain, the domain name must be the same as the original.

  4. Change domain-root-dir/domain1/generated/tmp directory permissions on the third machine to match the permissions of the same directory on the first machine.

    The default permissions of this directory are: drwx------ (or 700).

    For example:

    chmod 700 domain-root-dir/domain1/generated/tmp

    The example above assumes the domain is named domain1. If your domain has another name, replace domain1 above with that name.

  5. Change the host values for the properties in the domain.xml file for the third machine:

  6. Update the domain-root-dir/domain1/config/domain.xml on the third machine.

    For example, search for machine1 and replace it with machine3. That is, change:

    <jmx-connector><property name="client-hostname" value="machine1"/>...

    to:

    <jmx-connector><property name="client-hostname" value="machine3"/>...
  7. Change:

    <jms-service ... host="machine1" .../>

    to:

    <jms-service ... host="machine3" .../>
  8. Start the restored domain on machine3:


    asadmin start-domain --user admin-user --password admin-password domain1
  9. Change the DAS host values for properties under node agent on machine2.

  10. Change the agent.das.host property value in install-dir/nodeagents/nodeagent/agent/config/das.properties on machine2.

  11. Restart the node agent on machine2.
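    A sketch, assuming the node agent is named nodeagent1 (a placeholder):

    asadmin stop-node-agent nodeagent1
    asadmin start-node-agent --user admin-user nodeagent1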


    Note –

    Start the cluster instances using the asadmin start-instance command to allow them to synchronize with the restored domain.