This chapter describes the high availability features in the Sun Java System Application Server Enterprise Edition, with the following topics:
High availability applications and services provide their functionality continuously, regardless of hardware and software failures. Such applications are sometimes referred to as providing five nines of reliability, because they are meant to be available 99.999% of the time.
Application Server provides the following high availability features:
High Availability Session Persistence
High Availability Java Message Service
RMI-IIOP Load Balancing and Failover
Application Server provides high availability of HTTP requests and session data (both HTTP session data and stateful session bean data).
J2EE applications typically have significant amounts of session state data. A web shopping cart is the classic example of a session state. Also, an application can cache frequently-needed data in the session object. In fact, almost all applications with significant user interactions need to maintain session state. Both HTTP sessions and stateful session beans (SFSBs) have session state data.
Preserving session state across server failures can be important to end users. For high availability, Application Server provides the capability to persist session state in the HADB. If the Application Server instance hosting the user session experiences a failure, the session state can be recovered, and the session can continue without loss of information.
For a detailed description of how to set up high availability session persistence, see Chapter 9, Configuring High Availability Session Persistence and Failover.
The Java Message Service (JMS) API is a messaging standard that allows J2EE applications and components to create, send, receive, and read messages. It enables distributed communication that is loosely coupled, reliable, and asynchronous. The Sun Java System Message Queue (MQ), which implements JMS, is tightly integrated with Application Server, enabling you to create components that rely on JMS, such as message-driven beans (MDBs).
JMS is made highly available through connection pooling and failover and MQ clustering. For more information, see Chapter 10, Java Message Service Load Balancing and Failover.
Application Server supports JMS connection pooling and failover. The Application Server pools JMS connections automatically. By default, Application Server selects its primary MQ broker randomly from the specified host list. When failover occurs, MQ transparently transfers the load to another broker and maintains JMS semantics.
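As an illustration, additional MQ brokers can be added to the host list with the asadmin create-jms-host subcommand; the broker host name, port, and target cluster below are placeholders, not values from this guide:

```shell
# Add a second MQ broker to the JMS host list of a cluster
# (host name, port, and cluster name are examples only).
asadmin create-jms-host --user admin --mqhost broker2.example.com \
    --mqport 7676 --target cluster1 broker2

# Verify the resulting JMS host list for the cluster.
asadmin list-jms-hosts --user admin --target cluster1
```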
For more information about JMS connection pooling and failover, see Connection Pooling and Failover.
MQ Enterprise Edition supports multiple interconnected broker instances known as a broker cluster. With broker clusters, client connections are distributed across all the brokers in the cluster. Clustering provides horizontal scalability and improves availability.
For more information about MQ clustering, see Using MQ Clusters with Application Server.
With RMI-IIOP load balancing, IIOP client requests are distributed to different server instances or name servers, which spreads the load evenly across the cluster, providing scalability. IIOP load balancing combined with EJB clustering and availability also provides EJB failover.
When a client performs a JNDI lookup for an object, the Naming Service essentially binds the request to a particular server instance. From then on, all lookup requests made from that client are sent to the same server instance, and thus all EJBHome objects will be hosted on the same target server. Any bean references obtained henceforth are also created on the same target host. This effectively provides load balancing, since all clients randomize the list of target servers when performing JNDI lookups. If the target server instance goes down, the lookup or EJB method invocation will failover to another server instance.
IIOP load balancing and failover happen transparently. No special steps are needed during application deployment. However, adding instances to or deleting instances from the cluster does not update an existing client's view of the cluster. To update it, you must manually update the endpoints list on the client.
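For example, a stand-alone RMI-IIOP client typically supplies the cluster's IIOP endpoints through the com.sun.appserv.iiop.endpoints system property; the host names, ports, and client class below are illustrative assumptions:

```shell
# Run a stand-alone client with the IIOP endpoints of two cluster
# instances (host:port pairs and the client class are placeholders).
java -Dcom.sun.appserv.iiop.endpoints=host1.example.com:3700,host2.example.com:3700 \
     -cp appserv-rt.jar:myclient.jar com.example.ejb.MyAppClient
```

If an instance is added to or removed from the cluster, this property value must be edited by hand to reflect the new endpoint list.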
For more information on RMI-IIOP load balancing and failover, see Chapter 11, RMI-IIOP Load Balancing and Failover.
For information about planning a high-availability deployment, including assessing hardware requirements, planning network configuration, and selecting a topology, see Sun Java System Application Server Enterprise Edition 8.2 Deployment Planning Guide. This manual also provides a high-level introduction to concepts such as:
Application server components such as node agents, domains, and clusters
IIOP load balancing in a cluster
Message queue failover
For more information about developing applications that take advantage of high availability features, see Sun Java System Application Server Enterprise Edition 8.2 Developer’s Guide.
For information on how to configure and tune applications and Application Server for best performance with high availability, see Sun Java System Application Server Enterprise Edition 8.2 Performance Tuning Guide, which discusses topics such as:
Tuning persistence frequency and persistence scope
Checkpointing stateful session beans
Configuring the JDBC connection pool
Tuning HADB disk use, memory allocation, performance, and operating system configuration
Configuring load balancer for best performance
Application Server provides high availability through the following sub-components and features:
The load balancer plug-in accepts HTTP/HTTPS requests and forwards them to application server instances in a cluster. If an instance fails, becomes unavailable (due to network faults), or becomes unresponsive, the load balancer redirects requests to the remaining available instances. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly. Application Server Enterprise Edition and Standard Edition provide the load balancer plug-in for the Sun Java System Web Server, the Apache Web Server, and Microsoft Internet Information Server.
By distributing workload among multiple physical machines, the load balancer increases overall system throughput. It also provides higher availability through failover of HTTP requests. For HTTP session information to persist, you must configure HTTP session persistence.
For simple, stateless applications, a load-balanced cluster may be sufficient. However, for mission-critical applications with session state, use load-balanced clusters with HADB.
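As a sketch, session state persistence to HADB is typically enabled per cluster and per application; the cluster name, HADB host names, and application archive below are assumptions for illustration:

```shell
# Associate a cluster with HADB for session persistence
# (cluster name and HADB host names are examples only).
asadmin configure-ha-cluster --user admin \
    --hosts hadb1.example.com,hadb2.example.com cluster1

# Deploy the application with availability enabled so its HTTP
# session and SFSB state is persisted (archive name is a placeholder).
asadmin deploy --user admin --target cluster1 \
    --availabilityenabled=true sessionapp.ear
```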
Server instances and clusters participating in load balancing have a homogeneous environment. Usually that means that the server instances reference the same server configuration, can access the same physical resources, and have the same applications deployed to them. Homogeneity ensures that before and after failures, the load balancer always distributes load evenly across the active instances in the cluster.
For information on configuring load balancing and failover, see Chapter 5, Configuring HTTP Load Balancing.
Application Server Enterprise Edition provides the High Availability Database (HADB) for high availability storage of HTTP session and stateful session bean data. HADB is designed to support up to 99.999% service and data availability with load balancing, failover, and state recovery. Generally, you must configure and manage HADB independently of Application Server.
Keeping state management responsibilities separated from Application Server has significant benefits. Application Server instances spend their cycles performing as scalable, high-performance application containers, delegating state replication to an external high-availability state service. Because of this loosely coupled architecture, Application Server instances can easily be added to or deleted from a cluster. The HADB state replication service can be independently scaled for optimum availability and performance. When an Application Server instance also performs replication, the performance of J2EE applications can suffer, and instances can be subject to longer garbage collection pauses.
For information on planning and setting up your application server installation for high availability with HADB, including determining hardware configuration, sizing, and topology, see Planning for Availability in Sun Java System Application Server Enterprise Edition 8.2 Deployment Planning Guide and Chapter 3, Selecting a Topology, in Sun Java System Application Server Enterprise Edition 8.2 Deployment Planning Guide.
A cluster is a collection of Application Server instances that work together as one logical entity. A cluster provides a runtime environment for one or more J2EE applications. A highly available cluster integrates a state replication service with clusters and a load balancer.
Using clusters provides the following advantages:
High availability, by allowing for failover protection for the server instances in a cluster. If one server instance goes down, other server instances take over the requests that the unavailable server instance was serving.
Scalability, by allowing for the addition of server instances to a cluster, thus increasing the capacity of the system. The load balancer plug-in distributes requests to the available server instances within the cluster. No disruption in service is required as an administrator adds more server instances to a cluster.
All instances in a cluster:
Reference the same configuration.
Have the same set of deployed applications (for example, a J2EE application EAR file, a web module WAR file, or an EJB JAR file).
Have the same set of resources, resulting in the same JNDI namespace.
Every cluster in the domain has a unique name; furthermore, this name must be unique across all node agent names, server instance names, cluster names, and configuration names. The name must not be domain. You perform the same operations on a cluster (for example, deploying applications and creating resources) that you perform on an unclustered server instance.
A cluster's settings are derived from a named configuration, which can potentially be shared with other clusters. A cluster whose configuration is not shared by other server instances or clusters is said to have a stand-alone configuration. By default, the name of this configuration is cluster_name-config, where cluster_name is the name of the cluster.
A cluster that shares its configuration with other clusters or instances is said to have a shared configuration.
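For instance, a cluster and its member instances can be created with the asadmin create-cluster and create-instance subcommands; the cluster, node agent, and instance names below are placeholders:

```shell
# Create a cluster; a stand-alone configuration named
# cluster1-config is created for it by default.
asadmin create-cluster --user admin cluster1

# Create two instances in the cluster on a registered node agent
# (node agent and instance names are examples only).
asadmin create-instance --user admin --nodeagent agent1 \
    --cluster cluster1 instance1
asadmin create-instance --user admin --nodeagent agent1 \
    --cluster cluster1 instance2
```

Both instances reference cluster1-config, so applications and resources created on the cluster are available to each of them.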
Clusters, server instances, load balancers, and sessions are related as follows:
A server instance is not required to be part of a cluster. However, an instance that is not part of a cluster cannot take advantage of high availability through transfer of session state from one instance to other instances.
The server instances within a cluster can be hosted on one or multiple machines. You can group server instances across different machines into a cluster.
A particular load balancer can forward requests to server instances on multiple clusters. You can use this ability of the load balancer to perform an online upgrade without loss of service. For more information, see To Upgrade Components Without Loss of Service.
A single cluster can receive requests from multiple load balancers. If a cluster is served by more than one load balancer, you must configure the cluster in exactly the same way on each load balancer.
Each session is tied to a particular cluster. Therefore, although you can deploy an application on multiple clusters, session failover will occur only within a single cluster.
The cluster thus acts as a safe boundary for session failover for the server instances within the cluster. You can use the load balancer and upgrade components within the Application Server without loss of service.
Sun Cluster provides automatic failover of the domain administration server, node agents, Application Server instances, Message Queue, and HADB. For more information, see Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS.
This solution uses a standard Ethernet interconnect and a subset of Sun Cluster products. This capability is included in Java ES.
You can use various techniques to manually recover individual subcomponents:
Loss of the Domain Administration Server (DAS) affects only administration. Application Server clusters and applications continue to run as before, even if the DAS is not reachable.
Use any of the following methods to recover the DAS:
Run asadmin backup-domain periodically, so that you have recent snapshots. After a hardware failure, install Application Server on a new machine with the same network identity, and run asadmin restore-domain from the backup created earlier. For more information, see Recreating the Domain Administration Server.
Put the domain installation and configuration on a shared and robust file system (NFS, for example). If the primary DAS machine fails, a second machine is brought up with the same IP address and takes over with manual intervention or user-supplied automation. Sun Cluster uses a similar approach for making the DAS fault-tolerant.
Zip the Application Server installation and domain root directory. Restore it on the new machine, assigning it the same network identity. This may be the simplest approach if you are using a file-based installation.
Restore from the DAS backup. See the AS8.1 UR2 patch 4 instructions.
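The periodic-snapshot approach can be sketched as follows; the domains directory, domain name, and backup file name are assumptions for illustration:

```shell
# Take a snapshot of the DAS configuration (run periodically).
asadmin backup-domain --domaindir /opt/SUNWappserver/domains domain1

# After reinstalling Application Server on a replacement machine
# with the same network identity and paths, restore the snapshot.
asadmin restore-domain --domaindir /opt/SUNWappserver/domains \
    --filename sjsas_backup_v00001.zip domain1
```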
There are two methods for recovering node agents and server instances.
Keep a backup zip file. There are no explicit commands to back up the node agent and server instances. Simply create a zip file with the contents of the node agents directory. After failure, unzip the saved backup on a new machine with the same host name and IP address. Use the same install directory location, operating system, and so on. A file-based install, package-based install, or restored backup image must be present on the machine.
Manual recovery. You must use a new host with the same IP address.
Install the Application Server node agent bits on the machine.
See the instructions for AS8.1 UR2 patch 4 installation.
Recreate the node agents. You do not need to create any server instances.
Synchronization will copy and update the configuration and data from the DAS.
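The backup-file method above can be sketched with ordinary archive commands; the installation path and node agent name are placeholders:

```shell
# Archive the node agent directory; there is no dedicated
# asadmin backup command for node agents.
cd /opt/SUNWappserver/nodeagents
tar cf /backups/nodeagent1.tar nodeagent1

# On a replacement machine with the same host name, IP address,
# and installation layout, restore the archive in place.
cd /opt/SUNWappserver/nodeagents
tar xf /backups/nodeagent1.tar
```

After restarting the node agent, synchronization with the DAS brings the configuration and application data up to date.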
There are no explicit commands to back up only a web server configuration. Simply zip the web server installation directory. After failure, unzip the saved backup on a new machine with the same network identity. If the new machine has a different IP address, update the DNS server or the routers.
This assumes that the web server is either reinstalled or restored from an image first.
The load balancer plugin (plugins directory) and configurations are in the web server installation directory, typically /opt/SUNWwbsvr. The web-install/web-instance/config directory contains the loadbalancer.xml file.
Message Queue (MQ) configurations and resources are stored in the DAS and can be synchronized to the instances. Any other data and configuration information is in the MQ directories, typically under /var/imq, so back up and restore these directories as required. The new machine must already contain the MQ installation. Be sure to start the MQ brokers as before when you restore a machine.
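A minimal sketch of backing up the MQ instance data, assuming the default Solaris location /var/imq and a broker on the default port:

```shell
# Stop the broker before copying its data
# (host, port, and credentials are examples only).
imqcmd shutdown bkr -b localhost:7676 -u admin

# Archive the MQ instance data and configuration.
tar cf /backups/imq-data.tar /var/imq

# On the restored machine (MQ already installed), unpack the
# archive and restart the broker as before.
tar xf /backups/imq-data.tar -C /
imqbrokerd &
```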
If you have two active HADB nodes, you can configure two spare nodes (on separate machines) that can take over in case of failure. This is a cleaner method because backup and restore of HADB may result in stale sessions being restored.
For information on creating a database with spare nodes, see Creating a Database. For information on adding spare nodes to a database, see Adding Nodes. If recovery and self-repair fail, then the spare node takes over automatically.
This procedure has not been tested by Sun QA.
Use Veritas NetBackup to save an image of each machine. In the case of BPIP, back up the four machines with web servers and application servers.
For each restored machine, use the same configuration as the original, for example the same host name, IP address, and so on.
For file-based products such as Application Server, back up and restore just the relevant directories. However, for package-based installs such as the web server image, you must back up and restore the entire machine. Packages are installed into the Solaris package database, so if you only back up the directories and subsequently restore onto a new system, the result will be a "deployed" web server with no record of it in the package database. This may cause problems with future patching or upgrading.
Do not manually copy and restore the Solaris package database. The alternative is to back up an image of the machine after the components, for example the web server, are installed. Call this the baseline tar file. When you make changes to the web server, back up the modified directories, for example, under /opt/SUNWwbsvr. To restore, start with the baseline tar file and then copy over the web server directories that have been modified. You can use the same procedure for MQ (a package-based install in BPIP). If you upgrade or patch the original machine, be sure to create a new baseline tar file.
If the machine with the DAS goes down, administration is unavailable until you restore it.
The DAS is the central repository. When you restore server instances and restart them, they are synchronized with information from the DAS only. Hence, all changes must be performed through asadmin or the Admin Console.
A daily backup image of HADB may not work, since the image may contain stale application session state.
If you have backed up the Domain Administration Server (DAS), you can recreate it if the host machine fails. To recreate a working copy of the DAS, you must have:
One machine (machine1) that contains the original DAS.
A second machine (machine2) that contains a cluster with server instances running applications and catering to clients. The cluster is configured using the DAS on the first machine.
A third backup machine (machine3) where the DAS needs to be recreated in case the first machine crashes.
You must maintain a backup of the DAS from the first machine. Use asadmin backup-domain to back up the current domain.
To migrate the DAS from the first machine (machine1) to the third machine (machine3), follow these steps:
Install the application server on the third machine just as it is installed on the first machine.
This is required so that the DAS can be properly restored on the third machine and there are no path conflicts.
Install the application server administration package using the command-line (interactive) mode.
To activate the interactive command-line mode, invoke the installation program using the console option:
You must have root permission to install using the command-line interface.
Deselect the option to install the default domain.
Restoration of backed-up domains is only supported on two machines with the same architecture and exactly the same installation paths (use the same install-dir and domain-root-dir on both machines).
Copy the backup ZIP file from the first machine into the domain-root-dir on the third machine.
You can also FTP the file.
Execute the asadmin restore-domain command to restore the zip file onto the third machine:
asadmin restore-domain --filename domain-root-dir/sjsas_backup_v00001.zip domain1
You can back up any domain. However, while recreating the domain, the domain name must be the same as the original.
Change the permissions of the domain-root-dir/domain1/generated/tmp directory on the third machine to match the permissions of the same directory on the first machine.
The default permissions of this directory are: drwx------ (or 700).
chmod 700 domain-root-dir/domain1/generated/tmp
The example above assumes you are backing up domain1. If you are backing up a domain by another name, you should replace domain1 above with the name of the domain being backed up.
Change the host values for the properties in the domain.xml file for the third machine:
Update the domain-root-dir/domain1/config/domain.xml on the third machine.
For example, search for machine1 and replace it with machine3, changing:
<jmx-connector><property name="client-hostname" value="machine1"/>...
to:
<jmx-connector><property name="client-hostname" value="machine3"/>...
Start the restored domain on machine3:
asadmin start-domain --user admin-user --password admin-password domain1
Change the DAS host values for properties under node agent on machine2.
Change agent.das.host property value in install-dir/nodeagents/nodeagent/agent/config/das.properties on machine2.
Restart the node agent on machine2.
Start the cluster instances using the asadmin start-instance command to allow them to synchronize with the restored domain.
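The last three steps above can be sketched as follows; the installation path, node agent name, instance name, and host names are placeholders:

```shell
# On machine2, point the node agent at the restored DAS.
# das.properties typically contains a line such as:
#   agent.das.host=machine1
sed 's/^agent\.das\.host=machine1/agent.das.host=machine3/' \
    /opt/SUNWappserver/nodeagents/nodeagent1/agent/config/das.properties \
    > /tmp/das.properties
cp /tmp/das.properties \
    /opt/SUNWappserver/nodeagents/nodeagent1/agent/config/das.properties

# Restart the node agent, then the cluster instances, so that
# they synchronize with the restored domain.
asadmin stop-node-agent nodeagent1
asadmin start-node-agent --user admin nodeagent1
asadmin start-instance --user admin instance1
```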