Sun Java System Application Server Enterprise Edition 8.1 2005Q1 High Availability Administration Guide 

Chapter 1
Application Server High Availability Features

This chapter describes the high availability features in the Sun Java™ System Application Server Enterprise Edition, with the following topics:


Overview of High Availability

High availability applications and services provide their functionality continuously, regardless of hardware and software failures. Application Server provides high availability through the following sub-components and features:

High Availability Database

Application Server Enterprise Edition provides the High Availability Database (HADB) for high availability storage of HTTP session and stateful session bean data. Generally, you must install, configure, and manage HADB independently of Application Server. HADB is designed to support up to 99.999% service and data availability with load balancing, failover, and state recovery.

Keeping state management responsibilities separated from Application Server has significant benefits. Application Server instances spend their cycles performing as scalable, high-performance Java™ 2 Platform, Enterprise Edition (J2EE™ platform) containers, delegating state replication to an external high availability state service. Because of this loosely coupled architecture, application server instances can easily be added to or removed from a cluster. The HADB state replication service can be scaled independently for optimal availability and performance. When an application server instance also performs replication, the performance of J2EE applications can suffer and can be subject to longer garbage collection pauses.

For information on planning and setting up your application server installation for high availability with HADB, including determining hardware configuration, sizing, and topology, see the "Planning for Availability" chapter and the "Selecting a Topology" chapter in the Sun Java System Application Server Enterprise Edition Deployment Guide.

Load Balancer Plug-in

The load balancer plug-in accepts HTTP and HTTPS requests and forwards them to application server instances in a cluster. If an instance fails, becomes unavailable (due to network faults), or becomes unresponsive, the load balancer redirects requests to existing, available machines. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly. The Application Server Enterprise Edition includes the load balancer plug-in for the Sun Java System Web Server and the Apache Web Server.

By distributing workload among multiple physical machines, the load balancer increases overall system throughput. It also provides higher availability through failover of HTTP requests. For HTTP session information to persist, you must configure HTTP session persistence.

For simple, stateless applications, a load-balanced cluster may be sufficient. However, for mission-critical applications with session state, use load-balanced clusters with HADB.

Server instances and clusters participating in load balancing have a homogeneous environment. Usually that means that the server instances reference the same server configuration, can access the same physical resources, and have the same applications deployed to them. Homogeneity ensures that, before and after failures, the load balancer always distributes load evenly across the active instances in the cluster.

For information on configuring load balancing and failover, see HTTP Load Balancing and Failover.

Highly Available Clusters

A highly available cluster in the Sun Java System Application Server Enterprise Edition integrates a state replication service with clusters and load balancer. A cluster is a collection of server instances that work together as one logical entity. A cluster provides a runtime environment for one or more J2EE applications.

Using Application Server clusters provides the following advantages:

All instances in a cluster have a homogeneous environment: they reference the same server configuration, can access the same physical resources, and have the same applications deployed to them.

Every cluster in the domain has a unique name; furthermore, this name must be unique across all node agent names, server instance names, cluster names, and configuration names. The name must not be domain. You perform the same operations on a cluster (for example, deploying applications and creating resources) that you perform on an unclustered server instance.

Clusters, Instances, Sessions, and Load Balancing

Clusters, server instances, load balancers, and sessions are related as follows:

The cluster thus acts as a safe boundary for session failover for the server instances within the cluster. You can use the load balancer and upgrade components within the Application Server without loss of service.

More Information

For more information, see the following sections:

Planning a High Availability Deployment

For information about planning a high-availability deployment, including assessing hardware requirements, planning network configuration, and selecting a topology, see the Sun Java System Application Server Deployment Planning Guide. This manual also provides a high-level introduction to concepts such as:

Tuning High Availability Servers and Applications

For information on how to configure and tune applications and Application Server for best performance with high availability, see the chapter "Tuning for High-Availability" in the Sun Java System Application Server Performance Tuning Guide. This manual discusses topics such as:


HTTP Load Balancing and Failover

This section describes the HTTP load balancer plug-in. It includes the following topics:

For more information about the HTTP load balancer plug-in, including monitoring the load balancer, see the chapter "Configuring Load Balancing and Failover" in the Sun Java System Application Server Administration Guide.

How the Load Balancer Works

Assigned Requests and Unassigned Requests

When a request first comes in from an HTTP client to the load balancer, it is a request for a new session. A request for a new session is called an unassigned request. The load balancer routes this request to an application server instance in the cluster according to a round-robin algorithm.

Once a session is created on an application server instance, the load balancer routes all subsequent requests for this session only to that particular instance. A request for an existing session is called an assigned or a sticky request.

HTTP Load Balancing Algorithm

The Sun Java System Application Server load balancer uses a sticky round robin algorithm to load balance incoming HTTP and HTTPS requests. All requests for a given session are sent to the same application server instance. With a sticky load balancer, the session data is cached on a single application server rather than being distributed to all instances in a cluster.

Therefore, the sticky round robin scheme provides significant performance benefits that normally override the benefits of a more evenly distributed load obtained with pure round robin.

When a new HTTP request is sent to the load balancer plug-in, it is forwarded to an application server instance based on a simple round robin scheme. Subsequently, this request is "stuck" to this particular application server instance, either by using cookies or by explicit URL rewriting.

From the sticky information, the load balancer plug-in first determines the instance to which the request was previously forwarded. If that instance is found to be healthy, the load balancer plug-in forwards the request to that specific application server instance. Therefore, all requests for a given session are sent to the same application server instance.

The load balancer plug-in uses the following methods to determine session stickiness:

Sample Applications

The following directories contain sample applications that demonstrate load balancing and failover:

install_dir/samples/ee-samples/highavailability
install_dir/samples/ee-samples/failover

The ee-samples directory also contains information for setting up your environment to run the samples.

Setting Up HTTP Load Balancing

This section describes how to set up the Load Balancer plug-in.

Prerequisites

To set up a system with load balancing, in addition to the Application Server, you must install a web server and the load balancer plug-in. Then you must:

Server instances and clusters participating in load balancing must have a homogeneous environment. Usually that means that the server instances reference the same server configuration and have the same applications deployed to them.

Procedure to Set Up Load Balancing

Use the asadmin tool to configure load balancing in your environment. Follow these steps (a consolidated command sequence is sketched after this procedure):

  1. Create a load balancer configuration using the asadmin command create-http-lb-config.
  2. Add references to the clusters and stand-alone server instances for the load balancer to manage using asadmin create-http-lb-ref. If you created the load balancer configuration with a target, and that target is the only cluster or stand-alone server instance the load balancer references, skip this step.
  3. Enable the clusters or stand-alone server instances referenced by the load balancer using asadmin enable-http-lb-server.
  4. Enable applications for load balancing using asadmin enable-http-lb-application. These applications must already be deployed and enabled for use on the clusters or stand-alone instances that the load balancer references. Enabling applications for load balancing is a separate step from enabling them for use.
  5. Create a health checker using asadmin create-http-health-checker. The health checker monitors unhealthy server instances so that when they become healthy again, the load balancer can send new requests to them.
  6. Generate the load balancer configuration file using asadmin export-http-lb-config. This command generates a configuration file to use with the load balancer plug-in shipped with the Sun Java System Application Server.
  7. Copy the load balancer configuration file to your web server config directory where the load balancer plug-in configuration files are stored.
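
The following sketch consolidates these commands for a hypothetical cluster named cluster1, an application named myapp, and a load balancer configuration named mylb; the option spellings are assumptions based on the command descriptions in this chapter, so check the reference pages for the exact syntax.

    # Sketch only: names (cluster1, cluster2, myapp, mylb) and paths are illustrative assumptions
    asadmin create-http-lb-config --target cluster1 mylb
    asadmin create-http-lb-ref --config mylb cluster2        # only needed for additional targets
    asadmin enable-http-lb-server cluster1
    asadmin enable-http-lb-application --name myapp cluster1
    asadmin create-http-health-checker --config mylb --url "/" cluster1
    asadmin export-http-lb-config --config mylb /tmp/loadbalancer.xml
    cp /tmp/loadbalancer.xml web_server_config_dir/loadbalancer.xml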

Configuring Web Servers for HTTP Load Balancing

The load balancer plug-in installation program makes a few modifications to the web server's configuration files. The changes made depend upon the web server.


Note

The load balancer plug-in can be installed either along with Sun Java System Application Server Enterprise Edition, or separately, on a machine running the supported web server.

For complete details on the installation procedure, see Sun Java System Application Server Installation Guide.


Modifications to Sun Java System Web Server

The installation program makes the following changes to the Sun Java System Web Server's configuration files:

  1. Adds the following load balancer plug-in specific entries to the web server instance's magnus.conf file:

     ##EE lb-plugin
     Init fn="load-modules" shlib="web_server_install_dir/plugins/lbplugin/bin/libpassthrough.so" funcs="init-passthrough,service-passthrough,name-trans-passthrough" Thread="no"
     Init fn="init-passthrough"
     ##end addition for EE lb-plugin

  2. Adds the following entries specific to the load balancer plug-in to the web server instance's obj.conf file:

     <Object name=default>
     NameTrans fn="name-trans-passthrough" name="lbplugin" config-file="web_server_install_dir/web_server_instance/config/loadbalancer.xml"

     <Object name="lbplugin">
     ObjectType fn="force-type" type="magnus-internal/lbplugin"
     PathCheck fn="deny-existence" path="*/WEB-INF/*"
     Service type="magnus-internal/lbplugin" fn="service-passthrough"
     Error reason="Bad Gateway" fn="send-error" uri="$docroot/badgateway.html"
     </Object>

lbplugin is a name that uniquely identifies the Object, and web_server_install_dir/web_server_instance/config/loadbalancer.xml is the location of the XML configuration file for the virtual server on which the load balancer is configured to run.

After installing, configure the load balancer as described in Setting Up HTTP Load Balancing.

Modifications to Apache Web Server

For the Apache Web Server, your installation must meet the minimum requirements, and you must perform certain configuration steps before installing the Sun Java System Application Server load balancer plug-in.

Additional modifications to the Apache Web Server are made during the load balancer plug-in installation. After the plug-in is installed, additional configuration is required.

Configuration before Installing the Load Balancer Plug-in

Before installing the load balancer plug-in for Apache, install the Apache Web Server. The Apache source must be compiled and built to run with SSL. This section describes the minimum requirements and high-level steps needed to successfully compile Apache Web Server to run the load balancer plug-in.

Minimum Requirements for Apache 1.3:

In addition, before compiling Apache:

Minimum Requirements for Apache 2:

In addition, before compiling Apache:

Installing SSL-aware Apache:

You must have already downloaded and uncompressed the Apache software before starting.

  1. Compile and build OpenSSL. For more information about OpenSSL, see:

     http://www.openssl.org/

     This step is not required on the Linux platform if the version of OpenSSL installed with Linux is 0.9.7e.

     Download and unpack the OpenSSL source, then run:

     1. cd openssl-0.9.7e
     2. make
     3. make install

  2. For Apache 1.3, configure Apache with mod_ssl. You do not need to complete this step for Apache 2. For more information about mod_ssl, see:

     http://www.modssl.org/

     Unpack the mod_ssl source and follow these steps:

     1. cd mod_ssl-2.8.14-1.3.x
     2. Run ./configure --with-apache=../apache_1.3.x --with-ssl=../openssl-0.9.7e --prefix=install_path --enable-module=ssl --enable-shared=ssl --enable-rule=SHARED_CORE --enable-module=so

     The directory specified in the above command example is a variable. The prefix argument indicates where to install Apache. The x in the version number represents your actual version.

  3. For Apache 2.0, configure the source tree:

     1. cd to the http-2.0_x directory.
     2. Run ./configure --with-ssl=open_ssl_install_path --prefix=install_path --enable-ssl --enable-so

     The directory specified in the above command example is a variable. The prefix argument indicates where to install Apache. The x in the version number represents your actual version.

  4. For Apache on Linux 2.1, before compiling:

     1. Open src/Makefile and find the end of the automatically generated section.
     2. Add the following lines after the first four lines after the automatically generated section:

        LIBS+= -licuuc -licui18n -lnspr4 -lpthread -lxerces-c -lsupport -lnsprwrap -lns-httpd40
        LDFLAGS+= -L/appserver_installdir/lib -L/opt/sun/private/lib

        Note that -L/opt/sun/private/lib is only required if you installed Application Server as part of a Java Enterprise System installation.

        For example:

        ## (End of automatically generated section)
        ##

        CFLAGS=$(OPTIM) $(CFLAGS1) $(EXTRA_CFLAGS)
        LIBS=$(EXTRA_LIBS) $(LIBS1)
        INCLUDES=$(INCLUDES1) $(INCLUDES0) $(EXTRA_INCLUDES)
        LDFLAGS=$(LDFLAGS1) $(EXTRA_LDFLAGS)

        LIBS+= -licuuc -licui18n -lnspr4 -lpthread -lxerces-c -lsupport -lnsprwrap -lns-httpd40
        LDFLAGS+= -L/appserver_installdir/lib -L/opt/sun/private/lib

     3. Create an environment variable LD_LIBRARY_PATH equal to appserver_install_dir/lib (for all installations) or appserver_install_dir/lib:/opt/sun/private/lib (for Application Server installed as part of a Java Enterprise System installation).

  5. Compile Apache as described in the installation instructions for the version you are using. Full documentation is at:

     http://httpd.apache.org/

     In general the steps are:

     1. make
     2. make certificate (Apache 1.3 only)
     3. make install

     The command make certificate asks for a secure password. Remember this password because it is required for starting secure Apache.

  6. Configure Apache for your environment.
Modifications Made by the Application Server Installer

The installation program installing the load balancer plug-in extracts the necessary files to the libexec (Apache 1.3) or modules (Apache 2.0) folder under the web server's root directory. It adds the following entries specific to the load balancer plug-in to the web server instance's httpd.conf file:

<VirtualHost machine_name:443>

##Addition for EE lb-plugin

LoadFile /usr/lib/libCstd.so.1

LoadModule apachelbplugin_module libexec/mod_loadbalancer.so
#AddModule mod_apachelbplugin.cpp
<IfModule mod_apachelbplugin.cpp>
   config-file webserver_instance/conf/loadbalancer.xml
   locale en
</IfModule>

<VirtualHost machine_ip_address>
   DocumentRoot "webserver_instance/htdocs"
   ServerName server_name
</VirtualHost>

##END EE LB Plugin Parameters


Note

  • On Apache 1.3, when more than one Apache child process runs, each process has its own load balancing round robin sequence.

    For example, if there are two Apache child processes running, and the load balancer plug-in load balances on to two application server instances, the first request is sent to instance #1 and the second request is also sent to instance #1. The third request is sent to instance #2 and the fourth request is sent to instance #2 again. This pattern is repeated (instance1, instance1, instance2, instance2, etc.)

    This behavior is different from what you might expect, that is, instance1, instance2, instance1, instance2, etc. In Sun Java System Application Server, the load balancer plug-in for Apache instantiates a load balancer instance for each Apache process, creating an independent load balancing sequence.
  • Apache 2.0 has multithreaded behavior if compiled with the --with-mpm=worker option.

Modifications After Installation

Apache Web Server must have the correct security files to work well with the load balancer plug-in.

  1. Create a directory called sec_db_files under apache_install_dir.
  2. Copy application_server_domain_dir/config/*.db to apache_install_dir/sec_db_files.
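
The two steps above amount to a directory creation and a file copy; a minimal sketch, using the placeholder paths from this chapter:

    mkdir apache_install_dir/sec_db_files
    cp application_server_domain_dir/config/*.db apache_install_dir/sec_db_files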
Additional Modifications on the Solaris Platform

On the Solaris platform, add the path /usr/lib/mps/secv1 to LD_LIBRARY_PATH in the apache_install_dir/bin/apachectl script. The path must be added before /usr/lib/mps.

Additional Modifications on the Linux Platform

On the Linux platform, add the path /opt/sun/private/lib to LD_LIBRARY_PATH in the apache_install_dir/bin/apachectl script. The path must be added before /usr/lib.
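
As a sketch, the apachectl changes on both platforms amount to prepending the required directory to LD_LIBRARY_PATH; the exact placement within the script depends on your Apache layout, so treat the following lines as illustrative:

    # Solaris: /usr/lib/mps/secv1 must come before /usr/lib/mps
    LD_LIBRARY_PATH=/usr/lib/mps/secv1:/usr/lib/mps:$LD_LIBRARY_PATH
    # Linux: /opt/sun/private/lib must come before /usr/lib
    LD_LIBRARY_PATH=/opt/sun/private/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH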

Configuring Multiple Web Server Instances

The Sun Java System Application Server installer does not allow the installation of multiple load balancer plug-ins on a single machine. To have multiple web servers with the load balancer plug-in on a single machine, in either a single cluster or multiple clusters, a few manual steps are required to configure the load balancer plug-in.

  1. Configure the new web server instance to use the load balancer plug-in, as described in Modifications to Sun Java System Web Server or Modifications to Apache Web Server.
  2. Copy the sun-loadbalancer_1_1.dtd file from the existing web server instance's config directory to the new instance's config directory.
  3. To use the same load balancer configuration, copy the loadbalancer.xml file from the existing web server instance's config directory to the new instance's config directory.
  4. To use a different load balancer configuration:
    1. Create a new load balancer configuration using asadmin create-http-lb-config.
    2. Export the new configuration to a loadbalancer.xml file using asadmin export-http-lb-config.
    3. Copy that loadbalancer.xml file to the new web server's config directory.
    4. For information on creating a load balancer configuration and exporting it to a loadbalancer.xml file, see Configuring the Load Balancer.

Configuring the Load Balancer

A load balancer configuration is a named configuration in the domain.xml file that defines a load balancer. Load balancer configuration is extremely flexible:

This section describes how to create, modify, and use a load balancer configuration, including the following topics:

Creating an HTTP Load Balancer Configuration

Create a load balancer configuration using the asadmin command create-http-lb-config. Table 1-1 describes the parameters. For more information see the documentation for create-http-lb-config, delete-http-lb-config, and list-http-lb-configs.

Table 1-1 Load Balancer Configuration Parameters

response timeout: Time in seconds within which a server instance must return a response. If no response is received within the time period, the server is considered unhealthy. The default is 60.

HTTPS routing: Whether HTTPS requests to the load balancer result in HTTPS or HTTP requests to the server instance. For more information, see Configuring HTTP and HTTPS Session Failover.

reload interval: Interval between checks for changes to the load balancer configuration file loadbalancer.xml. When the check detects changes, the configuration file is reloaded. A value of 0 disables reloading. For more information, see Enabling Dynamic Reconfiguration.

monitor: Whether monitoring is enabled for the load balancer. For more information, see the Sun Java System Application Server Administration Guide.

routecookie: Name of the cookie the load balancer plug-in uses to record the route information. The HTTP client must support cookies. If your browser is set to ask before storing a cookie, the name of the cookie is JROUTE.

target: Target for the load balancer configuration. If you specify a target, it is the same as adding a reference to it. Targets can be clusters or stand-alone instances.

Creating an HTTP Load Balancer Reference

When you create a reference in the load balancer to a stand-alone server or cluster, the server or cluster is added to the list of target servers and clusters the load balancer controls. The referenced server or cluster still needs to be enabled (using enable-http-lb-server) before requests to it are load balanced. If you created the load balancer configuration with a target, that target is already added as a reference.

Create a reference using create-http-lb-ref. You must supply the load balancer configuration name and the target server instance or cluster.

To delete a reference, use delete-http-lb-ref. Before you can delete a reference, the referenced server or cluster must be disabled using disable-http-lb-server.

For more information, see the documentation for create-http-lb-ref and delete-http-lb-ref.
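
A short sketch of the reference commands, assuming a load balancer configuration named mylb and a cluster named cluster2 (both hypothetical):

    asadmin create-http-lb-ref --config mylb cluster2
    # a reference can only be deleted after its target is disabled
    asadmin disable-http-lb-server cluster2
    asadmin delete-http-lb-ref --config mylb cluster2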

Enabling Server Instances for Load Balancing

After creating a reference to the server instance or cluster, enable the server instance or cluster using enable-http-lb-server. If you used a server instance or cluster as the target when you created the load balancer configuration, you must enable it.

For more information, see the documentation for enable-http-lb-server.

Enabling Applications for Load Balancing

All servers managed by a load balancer must have homogeneous configurations, including the same set of applications deployed to them. Once an application is deployed and enabled for access (this happens during or after the deployment step), you must enable it for load balancing. If an application is not enabled for load balancing, requests to it are not load balanced or failed over, even if requests to the servers the application is deployed to are load balanced and failed over.

When enabling the application, specify the application name and target. If the load balancer manages multiple targets (for example, two clusters), enable the application on all targets.

For more information, see the online help for enable-http-lb-application.

If you deploy a new application, you must also enable it for load balancing and export the load balancer configuration again.
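
For example, if the load balancer references two hypothetical clusters, the same application is enabled on each target; the command form shown is a sketch, so confirm the option names in the enable-http-lb-application help.

    asadmin enable-http-lb-application --name myapp cluster1
    asadmin enable-http-lb-application --name myapp cluster2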

Creating the HTTP Health Checker

The load balancer's health checker periodically checks all the configured Application Server instances that are marked as unhealthy. A health checker is not required, but if no health checker exists, or if the health checker is disabled, the periodic health check of unhealthy instances is not performed.

The load balancer's health check mechanism communicates with the application server instance using HTTP. The health checker sends an HTTP request to the URL specified and waits for a response. A status code in the HTTP response header between 100 and 500 means the instance is healthy.

Creating a Health Checker

To create the health checker, use the asadmin create-http-health-checker command. Specify the following parameters:

If an application server instance is marked as unhealthy, the health checker polls the unhealthy instances to determine if the instance has become healthy. The health checker uses the specified URL to check all unhealthy application server instances to determine if they have returned to the healthy state.

If the health checker finds that an unhealthy instance has become healthy, that instance is added to the list of healthy instances.

For more information see the documentation for create-http-health-checker and delete-http-health-checker.
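
A minimal sketch of creating a health checker for a hypothetical configuration mylb and cluster cluster1; the --url, --interval, and --timeout options are assumptions, so check the create-http-health-checker reference page for the exact names and defaults.

    asadmin create-http-health-checker --config mylb --url "/" --interval 30 --timeout 10 cluster1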

Additional Health Check Properties for Healthy Instances

The health checker created by create-http-health-checker only checks unhealthy instances. To periodically check healthy instances, set some additional properties in your exported loadbalancer.xml file.


Note

These properties can only be set by manually editing loadbalancer.xml after you've exported it. There is no equivalent asadmin command to use.


To check healthy instances, set the following properties:

Table 1-2 Health-checker properties

active-healthcheck-enabled: True/false flag indicating whether to ping healthy server instances to determine whether they are healthy. To ping server instances, set the flag to true.

number-healthcheck-retries: Specifies how many times the load balancer's health checker pings an unresponsive server instance before marking it unhealthy. Valid range is between 1 and 1000. The default value to set is 3.

Set the properties by editing the loadbalancer.xml file. For example:

<property name="active-healthcheck-enabled" value="true"/>

<property name="number-healthcheck-retries" value="3"/>

If you add these properties, then subsequently edit and export the loadbalancer.xml file again, you must add these properties to the file again, since the newly exported configuration won't contain them.

Exporting the Load Balancer Configuration File

The load balancer plug-in shipped with Sun Java System Application Server uses a configuration file called loadbalancer.xml. Use the asadmin tool to create a load balancer configuration in the domain.xml file. After configuring the load balancing environment, export it to a file:

  1. Export a loadbalancer.xml file for a particular load balancer configuration using the asadmin command export-http-lb-config. You can specify a path and a different file name. If you don't specify a file name, the file is named loadbalancer.xml.load_balancer_config_name. If you don't specify a path, the file is created in the application_server_install_dir/domains/domain_name/generated directory.

  2. Copy the exported load balancer configuration file to the web server's configuration directory. For example, for the Sun Java System Web Server, that location might be web_server_root/config.

     The load balancer configuration file in the web server's configuration directory must be named loadbalancer.xml. If your file has a different name, such as loadbalancer.xml.load_balancer_config_name, you must rename it. A short example follows.
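
A sketch of the export-and-copy sequence for a hypothetical configuration named mylb (file names and web server paths are illustrative):

    asadmin export-http-lb-config --config mylb /tmp/loadbalancer.xml.mylb
    cp /tmp/loadbalancer.xml.mylb web_server_root/config/loadbalancer.xml     # rename on copy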

Changing the HTTP Load Balancer Configuration

If you change an HTTP load balancer configuration by creating or deleting references to servers, deploying new applications, enabling or disabling servers or applications, and so on, export the load balancer configuration file again and copy it to the web server's config directory. For more information, see Exporting the Load Balancer Configuration File.

The load balancer plug-in checks for an updated configuration periodically based on the reload interval specified in the load balancer configuration. After the specified amount of time, if the load balancer discovers a new configuration file, it starts using that configuration.

Enabling Dynamic Reconfiguration

When dynamic reconfiguration is enabled, the load balancer plug-in periodically checks for an updated configuration. To enable dynamic reconfiguration:

After changing these settings, export the load balancer configuration file again and copy it to the web server's config directory.

If you enable dynamic reconfiguration after it has previously been disabled, you also must restart the web server.


Note

  • If the load balancer encounters a hard disk read error while attempting to reconfigure itself, then it uses the configuration that is currently in memory. The load balancer also ensures that the modified configuration data is compliant with the DTD before overwriting the existing configuration.

    If a disk read error is encountered, a warning message is logged to the web server's error log file.

    The error log for the Sun Java System Web Server is at: web_server_install_dir/webserver_instance/logs/.

Disabling (Quiescing) a Server Instance or Cluster

Before stopping an application server for any reason, you want the instance to complete serving requests. The process of gracefully disabling a server instance or cluster is called quiescing.

The load balancer uses the following policy for quiescing application server instances:

To disable a server instance or cluster:

  1. Run asadmin disable-http-lb-server, setting the timeout (in minutes).
  2. Export the load balancer configuration file using asadmin export-http-lb-config.
  3. Copy the exported configuration to the web server config directory.
  4. Stop the server instance or instances.
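
A condensed sketch of the quiescing sequence for a hypothetical instance instance1 and load balancer configuration mylb; the --timeout option spelling is an assumption, so verify it in the disable-http-lb-server reference page.

    asadmin disable-http-lb-server --timeout 10 instance1     # timeout in minutes
    asadmin export-http-lb-config --config mylb /tmp/loadbalancer.xml
    cp /tmp/loadbalancer.xml web_server_config_dir/loadbalancer.xml
    asadmin stop-instance instance1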

Disabling (Quiescing) an Application

Before you undeploy a web application, you want the application to complete serving requests. The process of gracefully disabling an application is called quiescing.

The load balancer uses the following policy for quiescing applications:

When you disable an application from every server instance or cluster the load balancer references, the users of the disabled application experience loss of service until the application is enabled again.

If you disable the application from one server instance or cluster while keeping it enabled in another server instance or cluster, users can still access the application.

To disable an application:

  1. Run asadmin disable-http-lb-application, specifying the timeout (in minutes), the name of the application to disable, and the target cluster or instance on which to disable it.
  2. Export the load balancer configuration file using asadmin export-http-lb-config.
  3. Copy the exported configuration to the web server config directory.
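
A condensed sketch of disabling an application, using hypothetical names (myapp, cluster1, mylb); option spellings are assumptions, so check the disable-http-lb-application reference page.

    asadmin disable-http-lb-application --name myapp --timeout 10 cluster1     # timeout in minutes
    asadmin export-http-lb-config --config mylb /tmp/loadbalancer.xml
    cp /tmp/loadbalancer.xml web_server_config_dir/loadbalancer.xml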

Configuring HTTP and HTTPS Session Failover

The load balancer plug-in fails over HTTP/HTTPS sessions to another application server instance if the original application server instance to which the session was connected becomes unavailable. This section describes how to configure the load balancer plug-in to enable HTTP/HTTPS routing and session failover.

This section discusses the following topics:

HTTPS Routing

All incoming requests, whether HTTP or HTTPS, are routed by the load balancer plug-in to application server instances. However, if HTTPS routing is enabled, an HTTPS request is forwarded by the load balancer plug-in to the application server using an HTTPS port only. Note that HTTPS routing is performed on both new and sticky requests.

If an HTTPS request is received and no session is in progress, then the load balancer plug-in selects an available application server instance with a configured HTTPS port, and forwards the request to that instance.

In an ongoing HTTP session, if a new HTTPS request for the same session is received, then the session and sticky information saved during the HTTP session is used to route the HTTPS request. The new HTTPS request is routed to the same server where the last HTTP request was served, but on the HTTPS port.

Configuring HTTPS Routing

The httpsrouting option of the create-http-lb-config command controls whether HTTPS routing is turned on or off for all the application servers that are participating in load balancing. If this option is set to false, all HTTP and HTTPS requests are forwarded as HTTP. Set it to true when creating a new load balancer configuration, or change it later using the asadmin set command.
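
For example, HTTPS routing can be turned on when the configuration is first created; the command below is a sketch with hypothetical names, and changing the value later is done with the asadmin set command as noted above.

    asadmin create-http-lb-config --target cluster1 --httpsrouting=true mylb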


Note

  • For HTTPS routing to work, one or more HTTPS listeners must be configured.
  • If httpsrouting is set to true, and a new or a sticky request comes in when there are no healthy HTTPS listeners in the cluster, then that request generates an error.

Known Issues

The following list discusses the limitations of the load balancer with respect to HTTP/HTTPS request processing.

Configuring Idempotent URLs

To enhance the availability of deployed applications, configure the environment to retry failed idempotent HTTP requests on all the application server instances serviced by a load balancer. This option is used for read-only requests, for example, to retry a search request.

An idempotent request is one that does not cause any change or inconsistency in an application when retried. In HTTP, some methods (such as GET) are idempotent, while other methods (such as POST) are not. Retrying an idempotent URL must not cause values to change on the server or in the database. The only difference is a change in the response received by the user.

Examples of idempotent requests include search engine queries and database queries. The underlying principle is that the retry does not cause an update or modification of data.

Configure idempotent URLs in the sun-web.xml file. When you export the load balancer configuration, idempotent URL information is automatically added to the loadbalancer.xml file.

For more information on configuring idempotent URLs, see the Sun Java System Application Server Developer's Guide.
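
A hedged sketch of what such an entry in sun-web.xml might look like for a hypothetical search URL; the idempotent-url-pattern element and its url-pattern and no-of-retries attributes are assumptions here, so confirm the exact syntax in the Developer's Guide.

    <sun-web-app>
       ...
       <idempotent-url-pattern url-pattern="/search/*" no-of-retries="10"/>
       ...
    </sun-web-app>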

Upgrading Applications Without Loss of Availability

Upgrading an application to a new version without loss of availability to users is called a rolling upgrade. Carefully managing the two versions of the application across the upgrade ensures that current users of the application complete their tasks without interruption, while new users transparently get the new version of the application. With a rolling upgrade, users are unaware that the upgrade occurs.

Application Compatibility

Rolling upgrades pose varying degrees of difficulty depending on the magnitude of changes between the two application versions.

If the changes are superficial, for example, changes to static text and images, the two versions of the application are compatible and can both run at once in the same cluster. Compatible applications must:

You can perform a rolling upgrade of a compatible application in either a single cluster or multiple clusters. For more information, see Upgrading In a Single Cluster and Upgrading in Multiple Clusters.

If the two versions of an application do not meet all the above criteria, then the applications are considered incompatible. Executing incompatible versions of an application in one cluster can corrupt application data and cause session failover to not function correctly. The problems depend on the type and extent of the incompatibility. It is good practice to upgrade an incompatible application by creating a "shadow cluster" to which to deploy the new version and slowly quiesce the old cluster and application. For more information, see Upgrading Incompatible Applications.

The application developer and administrator are the best people to determine whether application versions are compatible. If in doubt, assume that the versions are incompatible, since this is the safest approach.

Upgrading In a Single Cluster

You can perform a rolling upgrade of an application deployed to a single cluster, providing the cluster's configuration is not shared with any other cluster. To upgrade an application in a single cluster:

  1. Save an old version of the application or back up the domain. To back up the domain, use the asadmin backup-domain command.
  2. Turn off dynamic reconfiguration (if enabled) for the cluster. To do this with the Admin Console:

     1. Expand the Configurations node.
     2. Click the name of the cluster's configuration.
     3. On the Configuration System Properties page, uncheck the Dynamic Reconfiguration Enabled box.
     4. Click Save.

     Alternatively, use this asadmin command:

     asadmin set --user user --passwordfile password_file cluster_name-config.dynamic-reconfiguration-enabled=false

  3. Redeploy the upgraded application to the target domain. If you redeploy using the Admin Console, the domain is automatically the target. If you use asadmin, specify the target domain. Because dynamic reconfiguration is disabled, the old application continues to run on the cluster.
  4. Enable the redeployed application for the instances using asadmin enable-http-lb-application.
  5. Quiesce one server instance in the cluster from the load balancer:

     1. Disable the server instance using asadmin disable-http-lb-server.
     2. Export the load balancer configuration file using asadmin export-http-lb-config.
     3. Copy the exported configuration file to the web server instance's configuration directory. For example, for the Sun Java System Web Server, the location is web_server_install_dir/https-host-name/config/loadbalancer.xml. To ensure that the load balancer loads the new configuration file, make sure that dynamic reconfiguration is enabled by setting the reloadinterval in the load balancer configuration.
     4. Wait until the timeout has expired. Monitor the load balancer's log file to make sure the instance is offline. If users see a retry URL, skip the quiescing period and restart the server immediately.

  6. Restart the disabled server instance while the other instances in the cluster are still running. Restarting causes the server to synchronize with the domain and update the application.
  7. Test the application on the restarted server to make sure it runs correctly.
  8. Re-enable the server instance in the load balancer:

     1. Enable the server instance using asadmin enable-http-lb-server.
     2. Export the load balancer configuration file using asadmin export-http-lb-config.
     3. Copy the configuration file to the web server's configuration directory as described in Step 5.

  9. Repeat Step 5 through Step 8 for each instance in the cluster.
  10. When all server instances have the new application and are running, enable dynamic reconfiguration for the cluster again.

Upgrading in Multiple Clusters

Follow these steps to upgrade a compatible application deployed in two or more clusters.

  1. Save an old version of the application or back up the domain. To back up the domain, use the asadmin backup-domain command.
  2. Turn off dynamic reconfiguration (if enabled) for all clusters. To do this with the Admin Console:

     1. Expand the Configurations node.
     2. Click the name of one cluster's configuration.
     3. On the Configuration System Properties page, uncheck the Dynamic Reconfiguration Enabled box.
     4. Click Save.
     5. Repeat for the other clusters.

     Alternatively, use this asadmin command:

     asadmin set --user user --passwordfile password_file cluster_name-config.dynamic-reconfiguration-enabled=false

  3. Redeploy the upgraded application to the target domain. If you redeploy using the Admin Console, the domain is automatically the target. If you use asadmin, specify the target domain. Because dynamic reconfiguration is disabled, the old application continues to run on the clusters.
  4. Enable the redeployed application for the clusters using asadmin enable-http-lb-application.
  5. Quiesce one cluster from the load balancer:

     1. Disable the cluster using asadmin disable-http-lb-server.
     2. Export the load balancer configuration file using asadmin export-http-lb-config.
     3. Copy the exported configuration file to the web server instance's configuration directory. For example, for the Sun Java System Web Server, the location is web_server_install_dir/https-host-name/config/loadbalancer.xml. Dynamic reconfiguration must be enabled for the load balancer (by setting the reloadinterval in the load balancer configuration), so that the new load balancer configuration file is loaded automatically.
     4. Wait until the timeout has expired. Monitor the load balancer's log file to make sure the cluster's instances are offline. If users see a retry URL, skip the quiescing period and restart the server immediately.

  6. Restart the disabled cluster while the other clusters are still running. Restarting causes the cluster to synchronize with the domain and update the application.
  7. Test the application on the restarted cluster to make sure it runs correctly.
  8. Re-enable the cluster in the load balancer:

     1. Enable the cluster using asadmin enable-http-lb-server.
     2. Export the load balancer configuration file using asadmin export-http-lb-config.
     3. Copy the configuration file to the web server's configuration directory.

  9. Repeat Step 5 through Step 8 for the other clusters.
  10. When all server instances have the new application and are running, enable dynamic reconfiguration for all clusters again.

Upgrading Incompatible Applications

For information on what makes applications compatible, see Application Compatibility. You must use a different rolling upgrade procedure if the new version of the application is incompatible with the old. Also, you must upgrade an incompatible application in two or more clusters. If you have only one cluster, create a "shadow cluster" for the upgrade, as described below.

When upgrading an incompatible application:

To upgrade an incompatible application by creating a second cluster:

  1. Save an old version of the application or back up the domain. To back up the domain, use the asadmin backup-domain command.
  2. Create a "shadow cluster" on the same or a different set of machines as the existing cluster (a condensed command sketch appears after this procedure):

     1. Use the Admin Console to create the new cluster and reference the existing cluster's named configuration. Customize the ports for the new instances on each machine to avoid conflicts with existing active ports.
     2. For all resources associated with the cluster, add a resource reference to the newly created cluster using asadmin create-resource-ref.
     3. Create a reference to all other applications deployed to the cluster (except the application being upgraded) from the newly created cluster using asadmin create-application-ref.
     4. Configure the cluster to be highly available using asadmin configure-ha-cluster.
     5. Create a reference to the newly created cluster in the load balancer configuration using asadmin create-http-lb-ref.

  3. Give the new version of the application a different name from the old version.
  4. Deploy the new application with the new cluster as the target. Use a different context root or roots.
  5. Enable the newly deployed application for the clusters using asadmin enable-http-lb-application.
  6. Start the new cluster while the other cluster is still running. Starting the cluster causes it to synchronize with the domain and be updated with the new application.
  7. Test the application on the new cluster to make sure it runs correctly.
  8. Disable the old cluster from the load balancer using asadmin disable-http-lb-server.
  9. Set a timeout for how long lingering sessions survive.
  10. Enable the new cluster from the load balancer using asadmin enable-http-lb-server.
  11. Export the load balancer configuration file using asadmin export-http-lb-config.
  12. Copy the exported configuration file to the web server instance's configuration directory. For example, for the Sun Java System Web Server, the location is web_server_install_dir/https-host-name/config/loadbalancer.xml. Dynamic reconfiguration must be enabled for the load balancer (by setting the reloadinterval in the load balancer configuration), so that the new load balancer configuration file is loaded automatically.
  13. After the timeout period expires or after all users of the old application have exited, stop the old cluster and delete the old application.
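
The following is a condensed sketch of the shadow-cluster commands named above, using hypothetical names (existing cluster cluster1, new cluster cluster2, resource jdbc/mydb, application otherapp, load balancer configuration mylb); option spellings are assumptions, so check each command's reference page.

    asadmin create-cluster --config cluster1-config cluster2
    asadmin create-resource-ref --target cluster2 jdbc/mydb
    asadmin create-application-ref --target cluster2 otherapp
    asadmin configure-ha-cluster --hosts hadb_host_list cluster2
    asadmin create-http-lb-ref --config mylb cluster2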


High Availability Session Persistence

This chapter explains how to enable and configure high availability session persistence.

Overview of Session Failover

Application Server provides high availability session persistence through failover of HTTP session data and stateful session bean (SFSB) session data. Failover means that in the event of a server instance or hardware failure, another server instance takes over a distributed session.

Requirements

A distributed session can run in multiple Sun Java System Application Server instances, if:

Restrictions

When a session fails over, any references to open files or network connections are lost. Applications must be coded with this restriction in mind.

You can only bind certain objects to distributed sessions that support failover. Contrary to the Servlet 2.4 specification, Sun Java System Application Server does not throw an IllegalArgumentException if an object type not supported for failover is bound into a distributed session.

You can bind the following objects into a distributed session that supports failover:

You cannot bind the following object types into sessions that support failover:

In general, failover does not work for these objects. However, failover might work in some cases, for example if the object is serializable.

Sample Applications

The following directories contain sample applications that demonstrate session persistence:

install_dir/samples/ee-samples/highavailability
install_dir/samples/ee-samples/failover

The following sample application demonstrates SFSB session persistence:

install_dir/samples/ee-samples/failover/apps/sfsbfailover

Setting Up High Availability Session Persistence

To enable high availability session persistence, follow these steps:

  1. Create an Application Server cluster. For more information, see the chapter "Configuring Clusters" in the Sun Java System Application Server Administration Guide.
  2. Create an HADB database for the cluster. See the description of the configure-ha-cluster command in the Reference Manual.
  3. Set up HTTP load balancing for the cluster. See HTTP Load Balancing and Failover.
  4. Enable availability for the application server instances and web or EJB containers that you want to support high availability session persistence, and configure the session persistence settings. Choose one of these approaches:
  5. Restart each server instance in the cluster.
  6. If the instance is currently serving requests, quiesce the instance before restarting it so that the instance gets enough time to serve the requests it is handling. For more information, see Disabling (Quiescing) a Server Instance or Cluster.

  7. Enable availability for any specific SFSB that requires it, and select methods for which checkpointing the session state is necessary. See Configuring Availability for an Individual Bean and Specifying Methods to Be Checkpointed.
  8. Make each web module distributable if you want it to be highly available. For more information, see the Sun Java System Application Server Developer's Guide.
  9. Enable availability for individual applications, web modules, or EJB modules during deployment. See Configuring Availability for an Individual Application or EJB Module, Configuring Availability for Individual Web Applications.
  10. In the Administration Console, check the Availability Enabled box, or use the deploy command with the --availabilityenabled option set to true.


    Note

    High availability session persistence is incompatible with dynamic deployment, dynamic reloading, and auto-deployment. These features are for development, not production environments. For information about how to disable these features, see the Sun Java System Application Server Administration Guide.
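
Creating the HADB for the cluster and enabling availability when deploying an application can be sketched as follows for a hypothetical cluster cluster1 and application archive myapp.ear; the --hosts option spelling for configure-ha-cluster is an assumption, so see the Reference Manual.

    asadmin configure-ha-cluster --hosts hadb_host1,hadb_host2 cluster1
    asadmin deploy --target cluster1 --availabilityenabled=true myapp.ear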


Enabling Session Availability

You can enable session availability at five different scopes (from highest to lowest):

  1. Server instance, enabled by default. For instructions, see next section, Enabling Availability for a Server Instance.
  2. Container (web or EJB), enabled by default. For information on enabling availability at the container level, see:
  3. Application, disabled by default
  4. Stand-alone web or EJB module, disabled by default
  5. Individual SFSB, disabled by default

To enable availability at a given scope, you must enable it at all higher levels as well. For example, to enable availability at the application level, you must also enable it at the server instance and container levels.

The default for a given level is the setting at the next level up. For example, if availability is enabled at the container level, it is enabled by default at the application level.

When availability is disabled at the server instance level, enabling it at any other level has no effect. When availability is enabled at the server instance level, it is enabled at all levels unless explicitly disabled.

Enabling Availability for a Server Instance

To enable availability for a server instance using the Administration Console:

  1. In the tree component, expand the Configurations node.
  2. Expand the node for the configuration you want to edit.
  3. In the Availability Service page, enable instance level availability by checking the Availability Service box. To disable it, uncheck the box.
  4. Additionally, you can change the Store Pool Name if you changed the JDBC resource used for connections to the HADB for session persistence. For details, see the description of the configure-ha-cluster command in the Reference Manual.

  5. Click on the Save button.
  6. Stop and restart the server instance.
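
As an alternative to the Admin Console steps above, the Availability Service check box corresponds to the availability-enabled attribute of the availability-service element in the instance's configuration; the dotted name below is an assumption based on that element, so verify it with asadmin get before setting it.

    asadmin get cluster1-config.availability-service.availability-enabled
    asadmin set cluster1-config.availability-service.availability-enabled=true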

HTTP Session Failover

J2EE applications typically have significant amounts of session state data. A web shopping cart is the classic example of session state. Also, an application can cache frequently-needed data in the session object. In fact, almost all applications with significant user interactions need to maintain session state.

Configuring Availability for the Web Container

To enable and configure web container availability using the Administration Console:

  1. In the tree component, select the desired configuration.
  2. Click on Availability Service.
  3. Select the Web Container Availability tab.
  4. Check the Availability Service box to enable availability. To disable it, uncheck the box.

  5. Change other settings, as described in the following section, Availability Settings. Click on the Save button.
  6. Restart the server instance.

To enable and configure web container availability using asadmin, see the configure-ha-persistence command in the Reference Manual.

Availability Settings

The Web Container Availability tab of the Availability Service enables you to change these availability settings:

Persistence Type: Specifies the session persistence mechanism for web applications that have availability enabled. Allowed values are memory (no persistence), file (the file system), and ha (the HADB).

HADB must be configured and enabled before you can use ha session persistence. For configuration details, see the description of the configure-ha-cluster command in the Reference Manual.

If web container availability is enabled, the default is ha. Otherwise, the default is memory. For production environments that require session persistence, use ha. The first two types, memory and file persistence, do not provide high availability session persistence; for more information on them, see the Sun Java System Application Server Developer's Guide.

Persistence Frequency: Specifies how often the session state is stored. Applicable only if the Persistence Type is ha. Allowed values are as follows:

Persistence Scope: Specifies how much of the session object and how often session state is stored. Applicable only if the Persistence Type is ha. Allowed values are as follows:

Single Sign-On State: Check this box to enable persistence of the single sign-on state. To disable it, uncheck the box. For more information, see Using Single Sign-on with Session Failover.

HTTP Session Store: You can change the HTTP Session Store if you changed the JDBC resource used for connections to the HADB for session persistence. For details, see the description of the configure-ha-cluster command in the Reference Manual.

Configuring Availability for Individual Web Applications

You can enable availability and configure availability settings for an individual web application, in its sun-web.xml deployment descriptor file. The settings in an application's deployment descriptor override the web container's availability settings.

The session-manager element's persistence-type attribute determines the type of session persistence an application uses. It must be set to ha to enable high availability session persistence.

For more information about the sun-web.xml file, see the Sun Java System Application Server Developer's Guide.

Example

<sun-web-app>
   ...
   <session-config>
      <session-manager persistence-type="ha">
         <manager-properties>
            <property name="persistenceFrequency" value="web-method"/>
         </manager-properties>
         <store-properties>
            <property name="persistenceScope" value="session"/>
         </store-properties>
      </session-manager>
      ...
   </session-config>
   ...
</sun-web-app>

Using Single Sign-on with Session Failover

In a single application server instance, once a user is authenticated by an application, the user is not required to reauthenticate individually to other applications running on the same instance. This is called single sign-on. For more information on single sign-on, see the chapter "Configuring Security" in the Sun Java System Application Server Developer's Guide.

For this feature to continue to work even when an HTTP session fails over to another instance in a cluster, single sign-on information must be persisted to the HADB. First enable availability for the server instance and the web container, then enable single-sign-on state persistence. See Enabling Availability for a Server Instance.

Applications that can be accessed through a single name and password combination constitute a single sign-on group.

For HTTP sessions corresponding to applications that are part of a single sign-on group, if one of the sessions times out, the other sessions are not invalidated and continue to be available. This is because the timeout of one session should not affect the availability of the other sessions.

As a corollary of this behavior, if a session times out and you try to access the corresponding application from the same browser window that was running the session, you are not required to authenticate again. However, a new session is created.

Take the example of a shopping cart application that is part of a single sign-on group with two other applications. Assume that the session timeout value for the other two applications is higher than the session timeout value for the shopping cart application. If your session for the shopping cart application times out and you try to run the shopping cart application from the same browser window that was running the session, you are not required to authenticate again. However, the previous shopping cart is lost, and you have to create a new shopping cart. The other two applications continue to run as usual even though the session running the shopping cart application has timed out.

Similarly, suppose a session corresponding to any of the other two applications times out. You are not required to authenticate again while connecting to the application from the same browser window in which you were running the session.


Note

This behavior applies only to cases where the session times out. If single sign-on is enabled and you invalidate one of the sessions using HttpSession.invalidate(), the sessions for all applications belonging to the single sign-on group are invalidated. If you try to access any application belonging to the single sign-on group, you are required to authenticate again, and a new session is created for the client accessing the application.


Stateful Session Bean Failover

Stateful session beans (SFSBs) contain client-specific state; there is a one-to-one relationship between a client and an SFSB. At creation, the EJB container gives each SFSB a unique session ID that binds it to a client.

An SFSB's state can be saved in a persistent store in case a server instance fails. The state of an SFSB is saved to the persistent store at predefined points in its life cycle. This is called checkpointing. If enabled, checkpointing generally occurs after the bean completes any transaction, even if the transaction rolls back.

However, if an SFSB participates in a bean-managed transaction, the transaction might be committed in the middle of the execution of a bean method. Since the bean's state might be undergoing transition as a result of the method invocation, this is not an appropriate time to checkpoint the bean's state. In this case, the EJB container checkpoints the bean's state at the end of the corresponding method, provided the bean is not in the scope of another transaction when that method ends. If a bean-managed transaction spans across multiple methods, checkpointing is delayed until there is no active transaction at the end of a subsequent method.

The state of an SFSB is not necessarily transactional and might be significantly modified as a result of non-transactional business methods. If this is the case for an SFSB, you can specify a list of checkpointed methods, as described in Specifying Methods to Be Checkpointed.

If a distributable web application references an SFSB, and the web application's session fails over, the EJB reference is also failed over.

If an SFSB that uses session persistence is undeployed while the Sun Java System Application Server instance is stopped, the session data in the persistence store might not be cleared. To prevent this, undeploy the SFSB while the Sun Java System Application Server instance is running.

Configuring Availability for the EJB Container

To enable availability for the EJB container using the Administration Console:

  1. Select the EJB Container Availability tab, then check the Availability Service box. To disable it, uncheck the box.
  2. Change other settings, as described in the following section, Availability Settings.
  3. Click the Save button.
  4. Restart the server instance.

Availability Settings

The EJB Container Availability tab of the Availability Service enables you to change these settings:

HA Persistence Type: Specifies the session persistence and passivation mechanism for SFSBs that have availability enabled. Allowed values are file (the file system) and ha (the HADB). For production environments that require session persistence, use ha, the default.

SFSB Persistence Type: Specifies the passivation mechanism for SFSBs that do not have availability enabled. Allowed values are file (the default) and ha.

If either Persistence Type is set to file, the EJB container specifies the file system location where the passivated session bean state is stored. Checkpointing to the file system is useful for testing but is not suitable for production environments.

HA persistence enables a cluster of server instances to recover the SFSB state if any server instance fails. The HADB is also used as the passivation and activation store. Use this option in a production environment that requires SFSB state persistence. For information about how to set up and configure this database, see the description of the configure-ha-cluster command in the Reference Manual.

SFSB Store Pool Name: You can change the SFSB Store Pool Name if you changed the JDBC resource used for connections to the HADB for session persistence. For details, see the description of the configure-ha-cluster command in the Reference Manual.

Configuring the SFSB Session Store When Availability Is Disabled

If availability is disabled, the local file system is used for SFSB state passivation, but not persistence. To change where the SFSB state is stored, change the Session Store Location setting in the EJB container. For more information, see the chapter "J2EE Containers" in the Sun Java System Application Server Administration Guide.

Configuring Availability for an Individual Application or EJB Module

You can enable SFSB availability for an individual application or EJB module during deployment, either by enabling availability in the Admin Console deployment page or by setting the --availabilityenabled option of the asadmin deploy command to true.

Configuring Availability for an Individual Bean

To enable availability and select methods to be checkpointed for an individual SFSB, use the sun-ejb-jar.xml deployment descriptor file. For details, see the Sun Java System Application Server Developer's Guide.

To enable high availability session persistence, set availability-enabled="true" in the ejb element. To control the size and behavior of the SFSB cache, use the following elements:

The max-cache-size element specifies the maximum number of session beans that are held in the cache. If the cache overflows (that is, the number of beans exceeds max-cache-size), the container passivates some beans, writing out each bean's serialized state to a file. The directory in which the file is created is obtained from the EJB container using the configuration APIs.

For more information about sun-ejb-jar.xml, see the appendix "Deployment Descriptor Files" in the Sun Java System Application Server Developer's Guide.

Example

<sun-ejb-jar>
   ...
   <enterprise-beans>
      ...
      <ejb availability-enabled="true">
         <ejb-name>MySFSB</ejb-name>
      </ejb>
      ...
   </enterprise-beans>
</sun-ejb-jar>

Specifying Methods to Be Checkpointed

If enabled, checkpointing generally occurs after the bean completes any transaction, even if the transaction rolls back. To specify additional optional checkpointing of SFSBs at the end of non-transactional business methods that cause important modifications to the bean's state, use the checkpoint-at-end-of-method element in the ejb element of the sun-ejb-jar.xml deployment descriptor file.

The non-transactional methods in the checkpoint-at-end-of-method element can be:

Any other methods mentioned in this list are ignored. At the end of invocation of each of these methods, the EJB container saves the state of the SFSB to persistent store.


Note

If an SFSB does not participate in any transaction, and if none of its methods are explicitly specified in the checkpoint-at-end-of-method element, the bean's state is not checkpointed at all even if availability-enabled="true" for this bean.

For better performance, specify a small subset of methods. The methods should accomplish a significant amount of work or result in important modification to the bean's state.


For example:

<sun-ejb-jar>
   ...
   <enterprise-beans>
      ...
      <ejb availability-enabled="true">
         <ejb-name>ShoppingCartEJB</ejb-name>
         <checkpoint-at-end-of-method>
            <method>
               <method-name>addToCart</method-name>
            </method>
         </checkpoint-at-end-of-method>
      </ejb>
      ...
   </enterprise-beans>
</sun-ejb-jar>
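
For reference, the ShoppingCartEJB in this example might look roughly like the following sketch (the field and method bodies are hypothetical). The addToCart method is a non-transactional business method that significantly modifies the bean's state, which is why it is worth listing in checkpoint-at-end-of-method.

import java.util.ArrayList;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class ShoppingCartEJB implements SessionBean {
    // Conversational state saved to the persistent store at each checkpoint;
    // it must be serializable.
    private ArrayList items = new ArrayList();

    public void ejbCreate() { }

    // Non-transactional business method that modifies the bean's state.
    // Because it is listed in checkpoint-at-end-of-method, the container
    // checkpoints the bean's state when this method returns.
    public void addToCart(String itemId) {
        items.add(itemId);
    }

    public ArrayList getItems() {
        return items;
    }

    public void ejbActivate() { }
    public void ejbPassivate() { }
    public void ejbRemove() { }
    public void setSessionContext(SessionContext ctx) { }
}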


RMI-IIOP Load Balancing and Failover

This section describes using Sun Java System Application Server's high-availability features for remote EJB references and JNDI objects over RMI-IIOP.

Overview

With RMI-IIOP load balancing, IIOP client requests are distributed to different server instances or name servers. The goal is to spread the load evenly across the cluster, thus providing scalability. IIOP load balancing combined with EJB clustering and availability also provides EJB failover.

When a client performs a JNDI lookup for an object, the Naming Service creates an InitialContext (IC) object associated with a particular server instance. From then on, all lookup requests made using that IC object are sent to the same server instance. All EJBHome objects looked up with that InitialContext are hosted on the same target server, and any bean references obtained from them are also created on the same target host. This effectively provides load balancing, since all clients randomize the list of live target servers when creating InitialContext objects. If the target server instance goes down, the lookup or EJB method invocation fails over to another server instance.

IIOP load balancing and failover happen transparently; no special steps are needed during application deployment. However, adding instances to or deleting instances from the cluster does not update an existing client's view of the cluster. To update it, you must manually update the endpoint list on the client side.
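
As an illustration of this behavior, here is a minimal stand-alone client sketch; the endpoint values and the JNDI name ejb/MySessionBean are hypothetical:

import javax.ejb.EJBHome;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

public class LookupClient {
    public static void main(String[] args) throws NamingException {
        // The endpoint list can also be passed on the command line with
        // -Dcom.sun.appserv.iiop.endpoints=host1:3700,host2:3700
        System.setProperty("com.sun.appserv.iiop.endpoints",
                "host1.sun.com:3700,host2.sun.com:3700");

        // Each new InitialContext is bound to one endpoint from the client's
        // randomized list; all lookups through it go to that server instance.
        Context ctx = new InitialContext();
        Object ref = ctx.lookup("ejb/MySessionBean"); // hypothetical JNDI name

        // Narrow to the bean's home interface. Beans created from this home
        // live on the same instance, and their references carry the other
        // IIOP endpoints in the cluster as failover alternates.
        EJBHome home = (EJBHome) PortableRemoteObject.narrow(ref, EJBHome.class);
    }
}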

Requirements

Sun Java System Application Server provides high availability of remote EJB references and NameService objects over RMI-IIOP. Before using these features, your environment must meet the following requirements:

Algorithm

Sun Java System Application Server employs a randomization and round-robin algorithm for RMI-IIOP load balancing and failover.

When an RMI-IIOP client first creates a new InitialContext object, the list of available Application Server IIOP endpoints is randomized for that client. For that InitialContext object, the load balancer directs lookup requests and other InitialContext operations to the first endpoint on the randomized list. If the first endpoint is not available then the second endpoint in the list is used, and so on.

Each time the client subsequently creates a new InitialContext object, the endpoint list is rotated so that a different IIOP endpoint is used for InitialContext operations.

When you obtain or create beans from references obtained by an InitialContext object, those beans are created on the Application Server instance serving the IIOP endpoint assigned to the InitialContext object. The references to those beans contain the IIOP endpoint addresses of all Application Server instances in the cluster.

The primary endpoint is the bean endpoint corresponding to the InitialContext endpoint used to look up or create the bean. The other IIOP endpoints in the cluster are designated as alternate endpoints. If the bean's primary endpoint becomes unavailable, further requests on that bean fail over to one of the alternate endpoints.

You can configure RMI-IIOP load balancing and failover to work with applications running in the ACC and with standalone Java clients.

Sample Application

The following directory contains a sample application that demonstrates using RMI-IIOP failover with and without ACC:

install_dir/samples/ee-samples/sfsbfailover

See the index.html file accompanying this sample for instructions on running the application with and without ACC. The ee-samples directory also contains information for setting up your environment to run the samples.

Procedure for Application Client Container

This procedure gives an overview of the steps necessary to use RMI-IIOP load balancing and failover with the application client container (ACC). For additional information on the ACC, see the Sun Java System Application Server Developer's Guide.

  1. Go to the install_dir/bin directory.
  2. Run package-appclient.

     This utility produces an appclient.jar file. For more information on package-appclient, see the Sun Java System Application Server Reference Manual.

  3. Copy the appclient.jar file to the machine where you want your client and extract it.
  4. Edit the asenv.conf or asenv.bat path variables to point to the correct directory values on that machine.

     The file is at appclient_install_dir/config/.

     For a list of the path variables to update, see the package-appclient documentation in the Sun Java System Application Server Reference Manual.

  5. If required, make the appclient script executable. For example, on UNIX use chmod 700.
  6. Find the IIOP listener port numbers for the instances in the cluster.

     You specify the IIOP listeners as endpoints to determine which IIOP listener receives the requests. The IIOP listeners are displayed in the Admin Console:

     1. In the Admin Console's tree component, expand the Clusters node.
     2. Expand the cluster.
     3. Select an instance in the cluster.
     4. In the right pane, click the Properties tab.
     5. Make note of the IIOP listener port for the specific instance.
     6. Repeat the process for every instance.
  7. Edit sun-acc.xml for the endpoint values.

     Using the IIOP listeners from the previous step, create endpoint values in the form:

     machine1:iiopport_of_instance1,machine2:iiopport_of_instance2

     For example:

     <property name="com.sun.appserv.iiop.endpoints" value="host1.sun.com:3335,host2.sun.com:3333,host3.sun.com:3334"/>

  8. Deploy your client application with the --retrieve option to get the client jar file. Keep the client jar file on the client machine. For example:

     asadmin deploy --user admin --passwordfile pw.txt --retrieve /my_dir myapp

  9. Run your application client as follows:

     appclient -client clientjar -name AppName

To test failover, stop one instance in the cluster and see that the application functions normally. You can also have breakpoints (or sleeps) in your client application.

To test load balancing, use multiple clients and see how the load gets distributed among all endpoints.

Procedure for Stand-Alone Client

To use RMI-IIOP load balancing and failover in a stand-alone client:

  1. Deploy the application with the --retrieve option to get the client jar file. Keep the client jar file on the client machine. For example:

     asadmin deploy --user admin --passwordfile pw.txt --retrieve /my_dir myapp

  2. Copy the client jar file and the required jar files to the client machine, then run the client, specifying the endpoints as -D values. For example:

     java -Dcom.sun.appserv.iiop.endpoints=host1.sun.com:33700,host2.sun.com:33700,host3.sun.com:33700 samples.rmiiiopclient.client.Standalone_Client

To test failover, stop one instance in the cluster and confirm that the application functions normally. You can also have breakpoints (or sleeps) in your client application.

To test load balancing, use multiple clients and see how the load gets distributed among all endpoints.


Java Message Service Load Balancing and Failover

This section contains the following topics:

Overview of Java Message Service

The Java Message Service (JMS) API is a messaging standard that allows J2EE applications and components to create, send, receive, and read messages. It enables distributed communication that is loosely coupled, reliable, and asynchronous. The Sun Java System Message Queue 3 2005Q1 (MQ), which implements JMS, is tightly integrated with Application Server, enabling you to create components such as message-driven beans (MDBs).

MQ is integrated with Application Server using a connector module, also known as a resource adapter, defined by the J2EE Connector Architecture Specification 1.5. J2EE components deployed to the Application Server exchange JMS messages using the JMS provider integrated via the connector module. Creating a JMS resource in Application Server creates a connector resource in the background. So, each JMS operation invokes the connector runtime and uses the MQ resource adapter in the background.
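
As an illustration, a component might send a message through such JMS resources as in the following sketch; the JNDI names jms/MyQcf and jms/MyQueue are examples (in a deployed component they would typically be mapped through java:comp/env resource references), and the actual delivery is handled by the MQ broker behind the resource adapter:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderSender {
    public void send(String text) throws Exception {
        InitialContext ic = new InitialContext();
        // Look up the JMS resources created in the Application Server.
        ConnectionFactory cf = (ConnectionFactory) ic.lookup("jms/MyQcf");
        Queue queue = (Queue) ic.lookup("jms/MyQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage(text);
            producer.send(msg);
        } finally {
            conn.close();
        }
    }
}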

You can manage the Java Message Service through the Admin Console or the asadmin command-line utility.

Sample Application

The mqfailover sample application demonstrates MQ failover with a message-driven bean receiving incoming messages from a JMS topic. The sample contains an MDB and an application client. The Application Server makes the MDB highly available: if one broker goes down, the conversational state (the messages received by the MDB) is migrated transparently to another available broker instance in the cluster.

The sample is installed to

install_dir/samples/ee-samples/failover/apps/mqfailover

Further Information

For more information on JMS, see the chapter "Using the Java Message Service" in the Sun Java System Application Server Developer's Guide. For more information on connectors (resource adapters), see the chapter "Developing Connectors" in the Sun Java System Application Server Developer's Guide.

For more information about the Sun Java System Message Queue, refer to the following documentation:

http://docs.sun.com/app/docs/coll/MessageQueue_05q1

For general information about the JMS API, see the JMS web page at:

http://java.sun.com/products/jms/index.html

Configuring the Java Message Service

The Java Message Service configuration is available to all inbound and outbound connections to the Sun Java System Application Server cluster or instance. You can configure the Java Message Service with the Admin Console (Configurations > config-name > Java Message Service) or with the asadmin command-line utility.

You can override the Java Message Service configuration using JMS connection factory settings. For details, see the Sun Java System Application Server Administration Guide.


Note

You must restart the Application Server instance after changing the configuration of the Java Message Service.


For more information about JMS administration, see the Sun Java System Application Server Administration Guide.

Java Message Service Integration

MQ can be integrated with Application Server in two ways: LOCAL and REMOTE, represented in the Admin Console by the Java Message Service Type attribute.

LOCAL Java Message Service

When the Type attribute is LOCAL (the default for a stand-alone Application Server instance), the Application Server starts and stops the MQ broker specified as the Default JMS host. The LOCAL type is most suitable for stand-alone Application Server instances.

To create a one-to-one relationship between Application Server instances and Message Queue brokers, set the type to LOCAL and give each Application Server instance a different default JMS host. You can do this regardless of whether clusters are defined in the Application Server or MQ.

With LOCAL type, use the Start Arguments attribute to specify MQ broker startup parameters.

REMOTE Java Message Service

When the Type attribute is REMOTE, the MQ broker must be started separately. This is the default if clusters are defined in the Application Server. For information about starting the broker, see the Sun Java System Message Queue Administration Guide.

In this case, Application Server will use an externally configured broker or broker cluster. Also, you must start and stop MQ brokers separately from Application Server, and use MQ tools to configure and tune the broker or broker cluster. REMOTE type is most suitable for Application Server clusters.

With REMOTE type, you must specify MQ broker startup parameters using MQ tools. The Start Arguments attribute is ignored.

JMS Hosts List

A JMS host represents an MQ broker. The Java Message Service contains a JMS Hosts list (also called AddressList) that contains all the JMS hosts that Application Server uses.

The JMS Hosts list is populated with the hosts and ports of the specified MQ brokers and is updated whenever a JMS host configuration changes. When you create JMS resources or deploy MDBs, they inherit the JMS Hosts list.


Note

In the Sun Java System Message Queue software, the AddressList property is called imqAddressList.
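
Within the Application Server, this property is managed for you through the JMS Hosts list. A stand-alone MQ client, by contrast, can set imqAddressList directly on the MQ connection factory, as in the following minimal sketch (host names are examples; the MQ client library must be on the classpath):

import javax.jms.JMSException;
import com.sun.messaging.ConnectionConfiguration;
import com.sun.messaging.ConnectionFactory;

public class AddressListExample {
    public static ConnectionFactory createFactory() throws JMSException {
        ConnectionFactory cf = new ConnectionFactory();
        // The MQ name for the AddressList property is imqAddressList.
        cf.setProperty(ConnectionConfiguration.imqAddressList,
                "mq://myhost1:7676,mq://myhost2:7676");
        return cf;
    }
}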


Default JMS Host

One of the hosts in the JMS Hosts list is designated the default JMS host. The default host is named Default_JMS_host. The Application Server instance starts the default JMS host when the Java Message Service type is configured as LOCAL.

If you have created a multi-broker cluster in the Sun Java System Message Queue software, delete the default JMS host, then add the Message Queue cluster's brokers as JMS hosts. In this case, the default JMS host becomes the first one in the JMS Hosts list.

When the Application Server uses a Message Queue cluster, it executes Message Queue specific commands on the default JMS host. For example, when a physical destination is created for a Message Queue cluster of three brokers, the command to create the physical destination is executed on the default JMS host, but the physical destination is used by all three brokers in the cluster.

Creating JMS Hosts

You can create additional JMS hosts with the Admin Console or with the asadmin create-jms-host command.

The JMS Hosts list is updated whenever a JMS host configuration changes.

Connection Pooling and Failover

Application Server supports JMS connection pooling and failover. The Sun Java System Application Server pools JMS connections automatically. When the Address List Behavior attribute is random (the default), Application Server selects its primary broker randomly from the JMS host list. When failover occurs, MQ transparently transfers the load to another broker and maintains JMS semantics.

To specify whether the Application Server tries to reconnect to the primary broker when the connection is lost, select the Reconnect checkbox. If enabled and the primary broker goes down, Application Server tries to reconnect to another broker in the JMS Hosts list.

When Reconnect is enabled, also specify the following attributes:

You can override these settings using JMS connection factory settings. For details, see the Sun Java System Application Server Administration Guide.

Load-Balanced Message Inflow

Application Server delivers messages randomly to MDBs having the same ClientID. The ClientID is required for durable subscribers.

For non-durable subscribers in which the ClientID is not configured, all instances of a specific MDB that subscribe to the same topic are considered equal. When an MDB is deployed to multiple instances of the Application Server, only one of the MDBs receives the message. If multiple distinct MDBs subscribe to the same topic, one instance of each MDB receives a copy of the message.

To support multiple consumers using the same queue, set the maxNumActiveConsumers property of the physical destination to a large value. If this property is set, MQ allows up to that number of MDBs to consume messages from the same queue. The message is delivered randomly to the MDBs. If maxNumActiveConsumers is set to -1, there is no limit to the number of consumers.
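
To illustrate, the following minimal MDB sketch (the class name and message handling are illustrative) is the kind of bean this section describes; when it is deployed to every instance in a cluster and consumes the same queue, each message is delivered to only one of those instances:

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderMDB implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext context;

    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.context = ctx;
    }

    public void ejbCreate() { }

    public void ejbRemove() { }

    // The container invokes onMessage once per delivered message; with
    // multiple active consumers on the same queue, delivery is spread
    // randomly across the MDB instances.
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}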

Using MQ Clusters with Application Server

MQ Enterprise Edition supports using multiple interconnected broker instances known as a broker cluster. With broker clusters, client connections are distributed across all the brokers in the cluster. Clustering provides horizontal scalability and improves availability.

This section describes how to configure Application Server to use highly available Sun Java System Message Queue clusters. It explains how to start and configure Message Queue clusters.

For more information about the topology of Application Server and MQ deployments, see the Sun Java System Application Server Deployment Planning Guide.

Enabling MQ Clusters

To use MQ clusters with Application Server clusters, follow this procedure:

  1. Create an Application Server cluster, if one does not already exist. For information on creating clusters, see the chapter "Configuring Clusters" in the Sun Java System Application Server Administration Guide.
  2. Create an MQ broker cluster. First, delete the default JMS host that refers to the broker started by the Domain Administration Server, and then create three external brokers (JMS hosts) that will be in the MQ broker cluster.
  3. Start the master MQ broker and the other MQ brokers. In addition to the three external brokers started on JMS host machines, start one master broker on any machine. This master broker need not be part of a broker cluster. For example:

     /usr/bin/imqbrokerd -tty -name brokerm -port 6772
     -cluster myhost1:6769,myhost2:6770,myhost2:6772,myhost3:6771 -D"imq.cluster.masterbroker=myhost2:6772"

  4. Start the Application Server instances in the cluster.
  5. Create JMS resources on the cluster:
     1. Create JMS physical destinations. For example, using asadmin:

           asadmin create-jmsdest --desttype queue --target cluster1 MyQueue

           asadmin create-jmsdest --desttype queue --target cluster1 MyQueue1

        To use the Admin Console, navigate to the Physical Destinations page (Configurations > config-name > Java Message Service > Physical Destinations). Then:

        • Click New to create each JMS physical destination.
        • For each destination, enter its name and type (queue).
     2. Create JMS connection factories. For example, using asadmin:

           asadmin create-jms-resource --target cluster1
           --restype javax.jms.QueueConnectionFactory jms/MyQcf

           asadmin create-jms-resource --target cluster1
           --restype javax.jms.QueueConnectionFactory jms/MyQcf1

        To use the Admin Console, navigate to the JMS Connection Factories page (Resources > JMS Resources > Connection Factories). Then, to create each connection factory:

        • Click New. The Create JMS Connection Factory page opens.
        • For each connection factory, enter
          JNDI Name: for example "jms/MyQcf" and "jms/MyQcf1"
          Type: javax.jms.QueueConnectionFactory
        • Select the cluster from the list of available targets at the bottom of the page, then click "Add."
        • Click OK to create the connection factory.
     3. Create JMS destination resources. For example, using asadmin:

           asadmin create-jms-resource --target cluster1
           --restype javax.jms.Queue
           --property imqDestinationName=MyQueue jms/MyQueue

           asadmin create-jms-resource --target cluster1
           --restype javax.jms.Queue
           --property imqDestinationName=MyQueue1 jms/MyQueue1

        To use the Admin Console, navigate to the JMS Destination Resources page (Resources > JMS Resources > Destination Resources). Then, to create each destination resource:

        • Click New. The Create JMS Destination Resource page opens.
        • For each destination resource, enter
          JNDI Name: for example "jms/MyQueue" and "jms/MyQueue1"
          Type: javax.jms.Queue
        • Select the cluster from the list of available targets at the bottom of the page, then click "Add."
        • Click OK to create the destination resource.
  6. Deploy the applications with the --retrieve option for application clients. For example:

     asadmin deploy --target cluster1 --retrieve /opt /work/MQapp/mdb-simple3.ear

  7. Access the application and test it to ensure it is behaving as expected.
  8. If you want to return the Application Server to its default JMS configuration, delete the JMS hosts you created and recreate the default. For example:

     asadmin delete-jms-host --target cluster1 broker1

     asadmin delete-jms-host --target cluster1 broker2

     asadmin delete-jms-host --target cluster1 broker3

     asadmin create-jms-host --target cluster1
     --mqhost myhost1 --mqport 7676
     --mquser admin --mqpassword admin
     default_JMS_host

Troubleshooting

If you encounter problems, consider the following:


