The administration tools, such as the browser-based Admin Console, communicate with the domain administration server (DAS), which in turn communicates with the node agents and server instances.
A server instance is an Application Server running in a single Java Virtual Machine (JVM) process. The Application Server is certified with Java 2 Standard Edition (J2SE) 5.0 and 1.4. The recommended J2SE distribution is included with the Application Server installation.
It is usually sufficient to create a single server instance on a machine, since the Application Server and accompanying JVM are both designed to scale to multiple processors. However, it can be beneficial to create multiple instances on one machine for application isolation and rolling upgrades. In some cases, a large server with multiple instances can be used in more than one administrative domain. The administration tools make it easy to create, delete, and manage server instances across multiple machines.
An administrative domain (or simply domain) is a group of server instances that are administered together. A server instance belongs to a single administrative domain. The instances in a domain can run on different physical hosts.
You can create multiple domains from one installation of the Application Server. By grouping server instances into domains, different organizations and administrators can share a single Application Server installation. Each domain has its own configuration, log files, and application deployment areas that are independent of other domains. Changing the configuration of one domain does not affect the configurations of other domains. Likewise, deploying an application in one domain does not deploy it or make it visible to any other domain. At any given time, an administrator can be authenticated to only one domain, and thus can only perform administration on that domain.
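For example, a second, independent domain can be created from the same installation with the asadmin tool. The following is a sketch; the domain name, ports, and admin user shown here are illustrative:

```shell
# Create a second domain from the same installation; its ports must not
# clash with those of existing domains on the machine.
asadmin create-domain --adminport 5848 --instanceport 8081 \
    --user admin domain2

# Each domain starts and stops independently of the others.
asadmin start-domain domain2
asadmin stop-domain domain2
```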
A domain has one Domain Administration Server (DAS), a specially-designated application server instance that hosts the administrative applications. The DAS authenticates the administrator, accepts requests from administration tools, and communicates with server instances in the domain to carry out the requests.
The administration tools are the asadmin command-line tool and the browser-based Admin Console. The Application Server also provides a JMX-based API for server administration. The administrator can view and manage a single domain at a time, thus enforcing secure separation.
Since the DAS is an application server instance, it can also host Java EE applications for testing purposes. However, do not use it to host production applications. You might want to deploy applications to the DAS, for example, if the clusters and instances that will host the production application have not yet been created.
The DAS keeps a repository containing the configuration of its domain and all the deployed applications. If the DAS is inactive or down, there is no impact on the performance or availability of active server instances; however, administrative changes cannot be made. In certain cases, for security purposes, it may be useful to intentionally stop the DAS process, for example, to freeze a production configuration.
Administrative commands are provided to back up and restore the domain configuration and applications. With the standard backup and restore procedures, you can quickly restore working configurations. If the DAS host fails, you must create a new DAS installation to restore the previous domain configuration. For instructions, see Recreating the Domain Administration Server in Sun GlassFish Enterprise Server 2.1 Administration Guide.
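A typical backup and restore cycle can be sketched as follows; the domain name and backup directory are illustrative:

```shell
# Back up the domain's configuration and deployed applications.
asadmin backup-domain --backupdir /export/backups domain1

# To restore, the domain must first be stopped.
asadmin stop-domain domain1
asadmin restore-domain --backupdir /export/backups domain1
asadmin start-domain domain1
```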
Sun Cluster Data Services provides high availability of the DAS through failover of the DAS host IP address and use of the Global File System. This solution provides nearly continuous availability for DAS and the repository against many types of failures. Sun Cluster Data Services are available with the Sun Java Enterprise System or purchased separately with Sun Cluster. For more information, see the documentation for Sun Cluster Data Services.
A cluster is a named collection of server instances that share the same applications, resources, and configuration information. You can group server instances on different machines into one logical cluster and administer them as one unit. You can easily control the lifecycle of a multi-machine cluster with the DAS.
Clusters enable horizontal scalability, load balancing, and failover protection. By definition, all the instances in a cluster have the same resource and application configuration. When a server instance or a machine in a cluster fails, the load balancer detects the failure, redirects traffic from the failed instance to other instances in the cluster, and recovers the user session state. Since the same applications and resources are on all instances in the cluster, an instance can failover to any other instance in the cluster.
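Creating and controlling a multi-machine cluster from the DAS can be sketched with asadmin as follows. The cluster, instance, and node agent names are illustrative, and the sketch assumes node agents are already running on the target machines:

```shell
# Create a cluster; its instances will share one named configuration.
asadmin create-cluster cluster1

# Add instances on two machines, each managed by that machine's node agent.
asadmin create-instance --nodeagent host1-agent --cluster cluster1 instance1
asadmin create-instance --nodeagent host2-agent --cluster cluster1 instance2

# Start and stop the whole multi-machine cluster as one unit.
asadmin start-cluster cluster1
asadmin stop-cluster cluster1
```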
Clusters, domains, and instances are related as follows:
An administrative domain can have zero or more clusters.
A cluster has one or more server instances.
A cluster belongs to a single domain.
A node agent is a lightweight process that runs on every machine that hosts server instances. The node agent performs the following functions:
Starts and stops server instances as instructed by the DAS.
Restarts failed server instances.
Provides a view of the log files of failed servers and assists in remote diagnosis.
Synchronizes each server instance’s local configuration repository with the DAS’s central repository, as it starts up the server instances under its watch.
When an instance is initially created, creates directories the instance needs and synchronizes the instance’s configuration with the central repository.
Performs appropriate cleanup when a server instance is deleted.
Each physical host must have at least one node agent for each domain to which the host belongs. If a physical host has instances from more than one domain, then it needs a node agent for each domain. There is no advantage to having more than one node agent per domain on a host, though it is allowed.
Because a node agent starts and stops server instances, it must always be running. Therefore, it is started when the operating system boots up. On Solaris and other Unix platforms, the node agent can be started by the inetd process. On Windows, the node agent can be made a Windows service.
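Creating and starting a node agent on a host can be sketched as follows; the agent name and the DAS host and port are illustrative:

```shell
# On each machine that will host server instances, create a node agent
# that registers itself with the DAS.
asadmin create-node-agent --host das-host --port 4848 host1-agent

# Start the agent; it then starts and monitors its server instances.
asadmin start-node-agent host1-agent
```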
For more information on node agents, see Chapter 3, Configuring Node Agents, in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
A named configuration is an abstraction that encapsulates Application Server property settings. Clusters and stand-alone server instances reference a named configuration to get their property settings. With named configurations, Java EE containers’ configurations are independent of the physical machine on which they reside, except for particulars such as IP address, port number, and amount of heap memory. Using named configurations provides power and flexibility to Application Server administration.
To apply configuration changes, you simply change the property settings of the named configuration, and all the clusters and stand-alone instances that reference it pick up the changes. You can only delete a named configuration when all references to it have been removed. A domain can contain multiple named configurations.
The Application Server comes with a default configuration, called default-config. The default configuration is optimized for developer productivity in the Application Server Platform Edition and for security and high availability in the Application Server Enterprise Edition.
You can create your own named configuration based on the default configuration and customize it for your own purposes. Use the Admin Console or the asadmin command-line utility to create and manage named configurations.
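For example, a custom configuration can be copied from default-config and then referenced by a cluster. This is a sketch; the configuration and cluster names are illustrative:

```shell
# Copy the default configuration into a new named configuration.
asadmin copy-config default-config secure-config

# Create a cluster that references the new configuration; all of the
# cluster's instances pick up its property settings.
asadmin create-cluster --config secure-config cluster1
```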
The load balancer distributes the workload among multiple physical machines, thereby increasing the overall throughput of the system. The Enterprise Server includes the load balancer plug-in for the Sun Java System Web Server, the Apache Web Server, and Microsoft Internet Information Server.
The load balancer plug-in accepts HTTP and HTTPS requests and forwards them to one of the application server instances in the cluster. Should an instance fail, become unavailable (due to network faults), or become unresponsive, requests are redirected to existing, available machines. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly.
For simple stateless applications, a load-balanced cluster may be sufficient. However, for mission-critical applications with session state, use load balanced clusters with HADB.
To set up a system with load balancing, in addition to the Application Server, you must install a web server and the load balancer plug-in. Then you must:
Create Enterprise Server clusters that you want to participate in load balancing.
Deploy applications to these load-balanced clusters.
Server instances and clusters participating in load balancing must have a homogeneous environment. Usually, this means that the server instances reference the same server configuration, can access the same physical resources, and have the same applications deployed to them. Homogeneity ensures that, before and after failures, the load balancer always distributes load evenly across the active instances in the cluster.
Use the asadmin command-line tool to create a load balancer configuration, add references to clusters and server instances to it, enable the clusters for reference by the load balancer, enable applications for load balancing, optionally create a health checker, generate the load balancer configuration file, and finally copy the load balancer configuration file to your web server config directory. An administrator can create a script to automate this entire process.
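The sequence described above can be sketched as follows; the configuration name, cluster, application name, and file paths are illustrative:

```shell
# Create a load balancer configuration and reference a cluster from it.
asadmin create-http-lb-config mylb-config
asadmin create-http-lb-ref --config mylb-config cluster1

# Enable the cluster's instances and the application for load balancing.
asadmin enable-http-lb-server cluster1
asadmin enable-http-lb-application --name myapp cluster1

# Optionally create a health checker, then export the configuration file.
asadmin create-http-health-checker --config mylb-config cluster1
asadmin export-http-lb-config --config mylb-config /tmp/loadbalancer.xml

# Finally, copy loadbalancer.xml to the web server's config directory.
```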
For more details and complete configuration instructions, see Chapter 4, Configuring HTTP Load Balancing, in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Java EE applications typically have significant amounts of session state data. A web shopping cart is the classic example of session state. Also, an application can cache frequently needed data in the session object. In fact, almost all applications with significant user interactions need to maintain session state. Both HTTP sessions and stateful session beans (SFSBs) have session state data.
While the session state is not as important as the transactional state stored in a database, preserving the session state across server failures can be important to end users. The Application Server provides the capability to save, or persist, this session state in a repository. If the application server instance that is hosting the user session experiences a failure, the session state can be recovered. The session can continue without loss of information.
The Application Server supports the following types of session persistence stores:
Memory
High availability (HA)
File
With memory persistence, the state is always kept in memory and does not survive failure. With HA persistence, the Application Server uses HADB as the persistence store for both HTTP and SFSB sessions. With file persistence, the Application Server serializes session objects and stores them to the file system location specified by session manager properties. For SFSBs, if HA is not specified, the Application Server stores state information in the session-store sub-directory of this location.
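The persistence type for a named configuration can be switched with the asadmin set command, as in the following sketch. The configuration name is illustrative, and the dotted attribute names are assumed to follow the availability-service element of the configuration:

```shell
# Use HADB-based persistence for HTTP sessions in cluster1-config.
asadmin set cluster1-config.availability-service.web-container-availability.persistence-type=ha

# Or use file-based persistence, storing sessions on the file system.
asadmin set cluster1-config.availability-service.web-container-availability.persistence-type=file
```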
Checking an SFSB’s state for changes that need to be saved is called checkpointing. When enabled, checkpointing generally occurs after any transaction involving the SFSB is completed, even if the transaction rolls back. For more information on developing stateful session beans, see Using Session Beans in Sun GlassFish Enterprise Server 2.1 Developer’s Guide. For more information on enabling SFSB failover, see Stateful Session Bean Failover in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
In addition to the number of requests being served by the Application Server, the session persistence configuration settings affect the number of requests the HADB receives per minute and the amount of session information in each request.
For more information on configuring session persistence, see Chapter 7, Configuring High Availability Session Persistence and Failover, in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
With IIOP load balancing, IIOP client requests are distributed to different server instances or name servers. The goal is to spread the load evenly across the cluster, thus providing scalability. IIOP load balancing combined with EJB clustering and availability features in the Sun Java System Application Server provides not only load balancing but also EJB failover.
When a client performs a JNDI lookup for an object, the Naming Service creates an InitialContext (IC) object associated with a particular server instance. From then on, all lookup requests made using that IC object are sent to the same server instance. All EJBHome objects looked up with that InitialContext are hosted on the same target server. Any bean references obtained henceforth are also created on the same target host. This effectively provides load balancing, since all clients randomize the list of live target servers when creating InitialContext objects. If the target server instance goes down, the lookup or EJB method invocation will fail over to another server instance.
For example, as illustrated in this figure, ic1, ic2, and ic3 are three different InitialContext instances created in Client2’s code. They are distributed to the three server instances in the cluster. Enterprise JavaBeans created by this client are thus spread over the three instances. Client1 created only one InitialContext object and the bean references from this client are only on Server Instance 1. If Server Instance 2 goes down, the lookup request on ic2 will failover to another server instance (not necessarily Server Instance 3). Any bean method invocations to beans previously hosted on Server Instance 2 will also be automatically redirected, if it is safe to do so, to another instance. While lookup failover is automatic, Enterprise JavaBeans modules will retry method calls only when it is safe to do so.
IIOP load balancing and failover happen transparently. No special steps are needed during application deployment. However, adding instances to or deleting instances from the cluster does not update an existing client's view of the cluster; you must manually update the endpoints list on the client side.
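A client's endpoints list is typically supplied as a JVM system property when the client starts, as in this sketch; the host names, the default IIOP port 3700, and the client JAR name are illustrative:

```shell
# List the IIOP listeners of the cluster's instances; the client runtime
# randomizes this list when creating InitialContext objects.
java -Dcom.sun.appserv.iiop.endpoints=host1:3700,host2:3700,host3:3700 \
    -jar myclient.jar
```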
The Sun Java System Message Queue (MQ) provides reliable, asynchronous messaging for distributed applications. MQ is an enterprise messaging system that implements the Java Message Service (JMS) standard. MQ provides messaging for Java EE application components such as message-driven beans (MDBs).
The Application Server implements the Java Message Service (JMS) API by integrating the Sun Java System Message Queue into the Application Server. Enterprise Server includes the Enterprise version of MQ, which has failover, clustering, and load balancing features.
For basic JMS administration tasks, use the Application Server Admin Console and asadmin command-line utility.
For advanced tasks, including administering a Message Queue cluster, use the tools provided in the install_dir/imq/bin directory. For details about administering Message Queue, see the Sun Java System Message Queue Administration Guide.
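For instance, a broker's status can be queried with the imqcmd utility from that directory, as in this sketch; the host, port, and admin user are illustrative:

```shell
# Query a Message Queue broker directly, bypassing the Application Server.
install_dir/imq/bin/imqcmd list bkr -b mqhost:7676 -u admin
```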
For information on deploying JMS applications and MQ clustering for message failover, see Planning Message Queue Broker Deployment.