The administration tools, such as the browser-based Administration Console, communicate with the domain administration server (DAS), which in turn communicates with the server instances.
It is usually sufficient to create a single server instance on a machine, since GlassFish Server and the accompanying JVM are both designed to scale to multiple processors. However, it can be beneficial to create multiple instances on one machine for application isolation and rolling upgrades. In some cases, a large server with multiple instances can be used in more than one administrative domain. The administration tools make it easy to create, delete, and manage server instances across multiple machines.
An administrative domain (or simply domain) is a group of server instances that are administered together. A server instance belongs to a single administrative domain. The instances in a domain can run on different physical hosts.
You can create multiple domains from one installation of GlassFish Server. By grouping server instances into domains, different organizations and administrators can share a single GlassFish Server installation. Each domain has its own configuration, log files, and application deployment areas that are independent of other domains. Changing the configuration of one domain does not affect the configurations of other domains. Likewise, deploying an application on one domain does not deploy it or make it visible to any other domain.
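As a sketch of how this looks in practice, a second domain can be created and started from the same installation with the asadmin utility. The domain name and port values below are illustrative:

```shell
# Create a second domain with its own admin port and port range.
asadmin create-domain --adminport 5048 --portbase 9000 domain2

# Start the new domain; its configuration, log files, and deployed
# applications are independent of every other domain.
asadmin start-domain domain2
```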
Note - All hosts in a domain on which the DAS and GlassFish Server instances are running must have the same operating system.
A domain has one Domain Administration Server (DAS), a specially designated GlassFish Server instance that hosts the administrative applications. The DAS authenticates the administrator, accepts requests from administration tools, and communicates with server instances in the domain to carry out the requests.
The administration tools are the asadmin command-line tool and the browser-based Administration Console. GlassFish Server also provides a RESTful API for server administration. The administrator can view and manage a single domain at a time, thus enforcing secure separation.
Since the DAS is a GlassFish Server instance, it can also host Java EE applications for testing purposes. However, do not use it to host production applications. You might want to deploy applications to the DAS, for example, if the clusters and instances that will host the production application have not yet been created.
The DAS keeps a repository containing the configuration of its domain and all the deployed applications. If the DAS is inactive or down, there is no impact on the performance or availability of active server instances; however, administrative changes cannot be made. In certain cases, for security purposes, it can be useful to stop the DAS process intentionally, for example, to reboot the host operating system for a kernel patch or a hardware upgrade.
Administrative commands are provided to back up and restore the domain configuration and applications. With the standard backup and restore procedures, you can quickly restore working configurations. If the DAS host fails, you must create a new DAS installation to restore the previous domain configuration. For instructions, see Chapter 3, Administering Domains, in Oracle GlassFish Server 3.1 Administration Guide.
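A minimal sketch of the backup and restore commands follows; the domain name is illustrative, and in GlassFish Server 3.1 both subcommands expect the domain to be stopped:

```shell
# Stop the domain, then back up its configuration and applications;
# the backup archive is written under the domain's backups directory.
asadmin stop-domain domain1
asadmin backup-domain domain1

# After reinstalling GlassFish Server on a replacement host,
# restore the saved domain configuration from the backup archive.
asadmin restore-domain domain1
```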
A cluster is a named collection of server instances that share the same applications, resources, and configuration information. You can group server instances on different machines into one logical cluster and administer them as one unit. You can easily control the lifecycle of a multi-machine cluster with the DAS.
Clusters enable horizontal scalability, load balancing, and failover protection. By definition, all the instances in a cluster have the same resource and application configuration. When a server instance or a machine in a cluster fails, the load balancer detects the failure, redirects traffic from the failed instance to other instances in the cluster, and recovers the user session state. Since the same applications and resources are on all instances in the cluster, an instance can fail over to any other instance in the cluster.
Note - All hosts in a cluster on which the DAS and GlassFish Server instances are running must have the same operating system.
Clusters, domains, and instances are related as follows:
An administrative domain can have zero or more clusters.
A cluster has one or more server instances.
A cluster belongs to a single domain.
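As a sketch, a multi-machine cluster can be created and controlled from the DAS with asadmin; the cluster, node, and instance names are illustrative, and the remote hosts are assumed to have been set up as nodes beforehand:

```shell
# Create a named cluster in the current domain.
asadmin create-cluster cluster1

# Add two clustered instances on hosts represented by the nodes
# node1 and node2; both instances share the cluster's configuration.
asadmin create-instance --cluster cluster1 --node node1 instance1
asadmin create-instance --cluster cluster1 --node node2 instance2

# Control the lifecycle of the whole cluster as one unit.
asadmin start-cluster cluster1
```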
A named configuration is an abstraction that encapsulates GlassFish Server property settings. Clusters and stand-alone server instances reference a named configuration to get their property settings. With named configurations, Java EE containers’ configurations are independent of the physical machine on which they reside, except for particulars such as IP address, port number, and amount of heap memory. Using named configurations provides power and flexibility to GlassFish Server administration.
To apply configuration changes, you simply change the property settings of the named configuration, and all the clusters and stand-alone instances that reference it pick up the changes. You can only delete a named configuration when all references to it have been removed. A domain can contain multiple named configurations.
You can create your own named configuration based on the default configuration and customize it for your own purposes. Use the Administration Console or the asadmin command-line utility to create and manage named configurations.
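For example, a custom named configuration can be created from the default configuration and shared by a cluster; the configuration and cluster names below are illustrative:

```shell
# Copy the default configuration to a new named configuration.
asadmin copy-config default-config my-config

# Create a cluster that references the new configuration; every
# instance in the cluster inherits its property settings.
asadmin create-cluster --config my-config cluster2

# A change to the named configuration, such as a JVM option, is
# picked up by everything that references it.
asadmin create-jvm-options --target my-config "-Xmx1024m"
```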
The load balancer distributes the workload among multiple physical machines, thereby increasing the overall throughput of the system. GlassFish Server includes load balancer plug-ins for Oracle iPlanet Web Server, Oracle HTTP Server, Apache Web Server, and Microsoft Internet Information Server.
The load balancer plug-in accepts HTTP and HTTPS requests and forwards them to one of the GlassFish Server instances in the cluster. Should an instance fail, become unavailable (due to network faults), or become unresponsive, requests are redirected to the remaining available instances. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly.
For simple stateless applications, a load-balanced cluster may be sufficient. However, for mission-critical applications with session state, use load-balanced clusters with replicated session persistence.
To set up a system with load balancing, in addition to GlassFish Server, you must install a web server and the load balancer plug-in. Then you must:
Create GlassFish Server clusters that you want to participate in load balancing.
Deploy applications to these load-balanced clusters.
Server instances and clusters participating in load balancing have a homogeneous environment. Usually this means that the server instances reference the same server configuration, can access the same physical resources, and have the same applications deployed to them. Homogeneity ensures configuration consistency and improves the ability to support a production deployment.
Use the asadmin command-line tool to create a load balancer configuration, add references to clusters and server instances to it, enable the clusters for reference by the load balancer, enable applications for load balancing, optionally create a health checker, generate the load balancer configuration file, and finally copy the load balancer configuration file to your web server config directory. An administrator can create a script to automate this entire process.
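The sequence above might look like the following sketch; the cluster, configuration, and application names are illustrative, and the destination of the exported file depends on your web server:

```shell
# Create a load balancer configuration that references the cluster.
asadmin create-http-lb-config --target cluster1 mylbconfig

# Enable the cluster and its application for load balancing.
asadmin enable-http-lb-server cluster1
asadmin enable-http-lb-application --name myapp cluster1

# Optionally create a health checker that polls the instances.
asadmin create-http-health-checker --interval 10 --config mylbconfig cluster1

# Generate the load balancer configuration file, then copy it
# to the web server's config directory.
asadmin export-http-lb-config --config mylbconfig /tmp/loadbalancer.xml
```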
For more details and complete configuration instructions, see Chapter 8, Configuring HTTP Load Balancing, in Oracle GlassFish Server 3.1-3.1.1 High Availability Administration Guide.
Java EE applications typically have significant amounts of session state data. A web shopping cart is the classic example of session state. An application can also cache frequently needed data in the session object. In fact, almost all applications with significant user interaction need to maintain session state. Both HTTP sessions and stateful session beans (SFSBs) have session state data.
While the session state is not as important as the transactional state stored in a database, preserving the session state across server failures can be important to end users. GlassFish Server provides the capability to save, or persist, this session state in a repository. If the GlassFish Server instance that is hosting the user session experiences a failure, the session state can be recovered. The session can continue without loss of information.
GlassFish Server supports the following session persistence types:
memory - The state is kept in memory only and does not survive failure.
replicated - GlassFish Server uses other server instances in the cluster as the persistence store for both HTTP and SFSB sessions.
file - GlassFish Server serializes session objects and stores them in the file system location specified by session manager properties. For SFSBs, if replicated persistence is not specified, GlassFish Server stores state information in the session-store subdirectory of this location.
coherence-web - For more information, see Using Coherence*Web with GlassFish Server.
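As a sketch, replicated session persistence can be enabled for a cluster with asadmin; the cluster and application names below are illustrative, and the exact dotted names are documented in the High Availability Administration Guide:

```shell
# Enable availability for the web container of the cluster's
# configuration and select the replicated persistence type.
asadmin set cluster1-config.availability-service.availability-enabled=true
asadmin set cluster1-config.availability-service.web-container-availability.persistence-type=replicated

# Deploy the application with availability enabled so that its
# HTTP and SFSB session state is persisted.
asadmin deploy --target cluster1 --availabilityenabled=true myapp.war
```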
Checking an SFSB’s state for changes that need to be saved is called checkpointing. When enabled, checkpointing generally occurs after any transaction involving the SFSB is completed, even if the transaction rolls back. For more information on developing stateful session beans, see Using Session Beans in Oracle GlassFish Server 3.1 Application Development Guide. For more information on enabling SFSB failover, see Stateful Session Bean Failover in Oracle GlassFish Server 3.1-3.1.1 High Availability Administration Guide.
In addition to the number of requests that GlassFish Server serves, the session persistence configuration settings also affect how much session information is stored with each request.
For more information on configuring session persistence, see Chapter 10, Configuring High Availability Session Persistence and Failover, in Oracle GlassFish Server 3.1-3.1.1 High Availability Administration Guide.
With IIOP load balancing, IIOP client requests are distributed to different server instances or name servers. The goal is to spread the load evenly across the cluster, thus providing scalability. IIOP load balancing combined with EJB clustering and availability features in GlassFish Server provides not only load balancing but also EJB failover.
There are two steps to IIOP failover and load balancing. The first step, bootstrapping, is the process by which the client sets up the initial naming context with one ORB in the cluster. The client attempts to connect to one of the IIOP endpoints. When launching an application client using the appclient script, you specify these endpoints using the -targetserver option on the command line or target-server elements in the sun-acc.xml configuration file. The client randomly chooses one of these endpoints and tries to connect to it, trying other endpoints if needed until one works.
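For example, the bootstrapping endpoints can be supplied when the application client is launched; the host names below are illustrative, and 3700 is the default IIOP port:

```shell
# Launch the application client with a list of IIOP endpoints.
# The client randomly chooses one endpoint for bootstrapping and,
# once connected, learns the current cluster membership from it.
appclient -client myclient.jar -targetserver host1:3700,host2:3700
```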
The second step concerns sending messages to a specific EJB. By default, all naming look-ups, and therefore all EJB accesses, use the cluster instance chosen during bootstrapping. The client exchanges messages with an EJB through the client ORB and server ORB. As this happens, the server ORB updates the client ORB as servers enter and leave the cluster. Later, if the client loses its connection to the server from step 1, the client fails over to some other server using its list of currently active members. In particular, this cluster member might have joined the cluster after the client made the initial connection.
When a client performs a JNDI lookup for an object, the naming service creates an InitialContext (IC) object associated with a particular server instance. From then on, all lookup requests made with that IC object are sent to the same server instance. All EJBHome objects looked up with that InitialContext are hosted on the same target server, and any bean references obtained from them are also created on the same target host. This effectively provides load balancing, since all clients randomize the list of live target servers when creating InitialContext objects. If the target server instance goes down, the lookup or EJB method invocation fails over to another server instance.
Adding instances to or removing instances from the cluster does not update an existing client's view of the cluster. You must manually update the endpoints list on the client side.
The GlassFish Server Message Queue (Message Queue) provides reliable, asynchronous messaging for distributed applications. Message Queue is an enterprise messaging system that implements the Java Message Service (JMS) standard. Message Queue provides messaging for Java EE application components such as message-driven beans (MDBs).
GlassFish Server implements the Java Message Service (JMS) API by integrating Message Queue into GlassFish Server. GlassFish Server includes the Enterprise version of Message Queue, which provides failover, clustering, and load balancing features.
For basic JMS administration tasks, use the GlassFish Server Administration Console and asadmin command-line utility.
For advanced tasks, including administering a Message Queue cluster, use the tools provided in the as-install/mq/bin directory. For details about administering Message Queue, see the Oracle GlassFish Server Message Queue 4.5 Administration Guide.
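A sketch of both kinds of task follows; the JNDI resource names are illustrative:

```shell
# Basic JMS administration with asadmin: create a connection
# factory and a queue, then list the configured JMS resources.
asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/MyConnectionFactory
asadmin create-jms-resource --restype javax.jms.Queue jms/MyQueue
asadmin list-jms-resources

# Advanced Message Queue administration uses the imqcmd tool in
# the as-install/mq/bin directory, for example to list brokers:
# imqcmd list bkr
```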
For information on deploying JMS applications and Message Queue clustering for message failover, see Planning Message Queue Broker Deployment.