Sun Java System Application Server 8.1 2004Q4 Deployment Planning Guide 

Chapter 1
Product Concepts

The Sun Java System Application Server Enterprise Edition 8 provides a robust J2EE platform for the development, deployment, and management of enterprise applications. Key features include transaction management, performance, scalability, security, and integration. The Application Server supports services from Web publishing to enterprise-scale transaction processing.

The Application Server is available in the Platform and Enterprise editions. The Platform edition is free and is intended for software development and department-level production environments. Designed for mission-critical services and large-scale production environments, the Enterprise edition supports horizontal scalability and service continuity via a load balancer plug-in and cluster management. The Enterprise edition also supports session continuity via the Highly Available Database (HADB).


J2EE Platform Overview

The Sun Java System Application Server Enterprise Edition 8 implements the Java 2 Enterprise Edition (J2EE) platform. The J2EE platform is a set of standard specifications that describe application components, APIs, and the runtime containers and services of an application server.

J2EE Applications

J2EE applications are made up of components such as JavaServer Pages (JSP), Java servlets, and Enterprise JavaBeans (EJB) modules. These components enable software developers to build large-scale, distributed applications. Developers package J2EE applications in Java Archive (JAR) files (similar to zip files), which can be distributed to production sites. Administrators install J2EE applications onto the Application Server by deploying J2EE JAR files onto one or more server instances (or clusters of instances).

Figure 1-1 illustrates the components of the J2EE platform, which are discussed in the following sections.

Containers

Each server instance includes two containers: Web and EJB. A container is a runtime environment that provides services such as security and transaction management to J2EE components. Web components, such as JSP pages and servlets, run within the Web container. EJBs run within the EJB container.

J2EE Services

The J2EE platform is designed so that the containers provide services for applications. These services are described in the following sections.

Web Services

On the J2EE 1.4 platform, the administrator can deploy a Web application that provides a Web service implemented with the Java API for XML-Based RPC (JAX-RPC). A J2EE application or component can also act as a client to other Web services. Applications access XML registries through the Java API for XML Registries (JAXR).

Client Access

At runtime, browser clients access Web applications by communicating with the Web server via HTTP, the protocol used throughout the internet. The HTTPS protocol is for applications that require secure communication. Rich end-user clients and EJB clients communicate with the Object Request Broker (ORB) through the IIOP or IIOP/SSL (secure) protocols. The Application Server has separate listeners for the HTTP, HTTPS, IIOP, and IIOP/SSL protocols. Each listener has exclusive use of a specific port number.

External Systems and Resources

The J2EE platform enables applications to access systems that are outside of the Application Server through standard APIs and components such as JDBC, the Java Message Service (JMS), JavaMail, and the J2EE Connector Architecture.

On the J2EE platform, an external system is called a resource. For example, a database management system is called a JDBC resource. In the Application Server, each resource is identified by its JNDI name.
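Resolving a resource by its JNDI name can be pictured as a simple name-to-object registry. The following plain-Java sketch is a toy model of that idea only, not the real javax.naming API; the name jdbc/MyDatabase and the string standing in for a connection pool are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of JNDI-style resource lookup. In the product, the real API
// lives in javax.naming and the bindings are managed by the server.
public class ResourceRegistry {
    private final Map<String, Object> bindings = new HashMap<String, Object>();

    // The administrator binds a resource under a JNDI name.
    public void bind(String jndiName, Object resource) {
        bindings.put(jndiName, resource);
    }

    // The application looks the resource up by that same name.
    public Object lookup(String jndiName) {
        if (!bindings.containsKey(jndiName)) {
            throw new IllegalArgumentException("Name not bound: " + jndiName);
        }
        return bindings.get(jndiName);
    }

    public static void main(String[] args) {
        ResourceRegistry registry = new ResourceRegistry();
        // "jdbc/MyDatabase" is a hypothetical JDBC resource name.
        registry.bind("jdbc/MyDatabase", "connection-pool-for-MyDatabase");
        System.out.println(registry.lookup("jdbc/MyDatabase"));
    }
}
```

The point of the indirection is that the application refers only to the JNDI name; the administrator can rebind that name to a different backend without changing application code.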

Figure 1-1  J2EE Platform Architecture


Application Server Components

This section describes the components that are specific to the Application Server implementation.

Administrative Domains

Administrative domains provide a basic security structure whereby different administrators administer specific groups (domains) of Application Server instances. By grouping server instances into separate domains, different organizations and administrators can share a single Application Server installation. Each domain has its own configuration, log files, and application deployment areas that are independent of other domains. If the configuration for one domain is changed, the configurations of other domains are not affected. When an application on a single domain is deployed (installed), it is not visible or available to other domains.

For administration, the Application Server includes the asadmin utility, a command-line tool, and the Admin Console, a browser-based GUI. In a given tool session, the administrator views and manages a single domain, reinforcing the separation of administrative domains. Behind the scenes, a domain administration server (DAS) accepts requests from the tools and communicates with server instances. The tools enable administrators to easily manage server instances running on multiple physical hosts.

Instances

A server instance is a single Java Virtual Machine (JVM) that runs the Application Server on one physical machine. With some operating systems, a large physical machine can be partitioned into multiple hosts, each of which runs an individual JVM. The JVM is part of the Java 2 Standard Edition (J2SE) software, which is included with the Application Server bundle.

A server instance belongs to a single domain. An administrative domain can have zero or more server instances. (It can have zero server instances because a domain always has an administration server.) The instances in a domain can run on different physical hosts.

Clusters

A cluster is a named collection of server instances that share the same applications, resources, and configuration information. The server instances within a cluster can run on different physical machines or on the same machine. That is, the administrator can group server instances across different machines into one logical cluster. Typically, the instances in a cluster run on multiple physical machines. Clusters enable the administrator to take advantage of horizontal scalability, load balancing, and failover protection.

Clusters, domains, and instances are related as follows: An administrative domain can have zero or more clusters. A cluster belongs to a single domain. A cluster has one or more server instances. A server instance is not required to belong to a cluster, but usually multiple instances are grouped into clusters.
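The containment rules above can be sketched as a small object model. This is an illustrative sketch only, not the product's administration API; the names domain1, cluster1, and the instance names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the relationships described above: a domain owns
// zero or more clusters, each cluster belongs to exactly one domain, and
// a cluster groups one or more server instances (possibly on different hosts).
public class AdminModel {
    static class Domain {
        final String name;
        final List<Cluster> clusters = new ArrayList<Cluster>();
        Domain(String name) { this.name = name; }
        Cluster createCluster(String clusterName) {
            Cluster c = new Cluster(clusterName, this);
            clusters.add(c);
            return c;
        }
    }

    static class Cluster {
        final String name;
        final Domain domain; // a cluster belongs to a single domain
        final List<String> instances = new ArrayList<String>();
        Cluster(String name, Domain domain) { this.name = name; this.domain = domain; }
        void addInstance(String instanceName) { instances.add(instanceName); }
    }

    public static void main(String[] args) {
        Domain domain1 = new Domain("domain1");
        Cluster cluster1 = domain1.createCluster("cluster1");
        cluster1.addInstance("instance1"); // instances may run on different hosts
        cluster1.addInstance("instance2");
        System.out.println(domain1.clusters.size() + " cluster, "
                + cluster1.instances.size() + " instances");
    }
}
```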

Node Agents

The domain administration server instructs a node agent to create, start, and stop server instances on a particular physical host. A node agent also attempts to restart failed server instances. Each physical host that runs a domain’s server instances must have a node agent. If a physical host has instances from more than one domain, a node agent for each domain is required.

Because a node agent must always be running, it must be started by the operating system’s bootstrap operation. On Solaris and UNIX, the node agent is started by the inetd process. On Windows, it is started as a Windows service.

Figure 1-2 illustrates some of the Application Server components. The administration tools, such as the browser-based Admin Console, communicate with the domain administration server, which in turn communicates with the node agents and server instances. This figure is a simplified view of the Application Server architecture and represents only one possible topology.

Figure 1-2  Application Server Components

Named Configurations

A server instance or cluster references a single named configuration, which specifies elements such as HTTP listeners, security, and monitoring. A named configuration defines all configurable elements except for applications and resources. A named configuration belongs to a single domain, and a domain can have multiple configurations.
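In the domain’s domain.xml file, a named configuration appears as a config element that clusters and instances reference by name. The fragment below is a hand-written sketch assuming typical 8.x element and attribute names; the configuration, listener, and cluster names are hypothetical, and the actual domain.xml schema is authoritative:

```xml
<!-- Excerpt 1: a named configuration defining an HTTP listener. -->
<configs>
  <config name="cluster1-config">
    <http-service>
      <http-listener id="http-listener-1" address="0.0.0.0" port="8080"
                     default-virtual-server="server" enabled="true"/>
    </http-service>
    <!-- security, monitoring, and other configurable elements omitted -->
  </config>
</configs>

<!-- Excerpt 2: a cluster referencing that configuration by name. -->
<clusters>
  <cluster name="cluster1" config-ref="cluster1-config"/>
</clusters>
```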

Load Balancer Plug-in

The goal of a load balancer is to evenly distribute the workload among multiple physical machines, thereby increasing the overall throughput of the system. Many third-party hardware and software load balancers are available. The Application Server bundle includes the load balancer plug-in, a software module that can be installed onto the Sun Java System Web Server or the Apache HTTP Server.

The load balancer plug-in accepts HTTP and HTTPS requests and forwards them to one of the application server instances in the cluster. Should an instance fail, become unavailable (due to network faults), or become unresponsive, requests are redirected only to existing, available machines. The load balancer can also recognize when a failed instance has recovered and redistribute the load accordingly.
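The forwarding behavior just described can be modeled as round-robin selection that skips unavailable instances and lets recovered instances rejoin the rotation. The sketch below models the plug-in’s behavior only; it is not the plug-in’s actual implementation, and the instance names are hypothetical:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Simplified round-robin balancer with failover: requests rotate across
// healthy instances, failed instances are skipped, and a recovered
// instance rejoins the rotation.
public class RoundRobinBalancer {
    private final String[] instances;
    private final Set<String> down = new LinkedHashSet<String>();
    private int next = 0;

    public RoundRobinBalancer(String... instances) {
        this.instances = instances;
    }

    public void markDown(String instance) { down.add(instance); }
    public void markRecovered(String instance) { down.remove(instance); }

    // Pick the next available instance, skipping failed ones.
    public String route() {
        for (int i = 0; i < instances.length; i++) {
            String candidate = instances[(next + i) % instances.length];
            if (!down.contains(candidate)) {
                next = (next + i + 1) % instances.length;
                return candidate;
            }
        }
        throw new IllegalStateException("No available instances");
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb =
                new RoundRobinBalancer("instance1", "instance2", "instance3");
        System.out.println(lb.route()); // instance1
        lb.markDown("instance2");
        System.out.println(lb.route()); // instance3 (instance2 is skipped)
        lb.markRecovered("instance2");
        System.out.println(lb.route()); // instance1
    }
}
```

Note that for session-based applications the real plug-in also honors session affinity, which this stateless sketch omits.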

For stateless applications or applications that only involve low-value, simple user transactions, a simple load balanced cluster is often all that is required. However, for mission-critical applications that retain session state, it is best to use load balanced clusters with the Application Server’s HADB.


High Availability Database (HADB)

This section contains the following topics:

The Application Server supports persistence of HTTP sessions, stateful session beans, and remote references of EJB look-ups on the RMI/IIOP path. The HADB, bundled with the Enterprise Edition of the Application Server, provides a highly available persistence store.

An HADB system comprises multiple nodes. Each HADB node stores and updates session data.

There are two types of HADB nodes, active and spare, which are described in the following sections.

HADB nodes are organized into two Data Redundancy Units (DRUs) that mirror each other. Each DRU consists of half of the active nodes and half of the spare nodes. One DRU contains one complete copy of the data.

To ensure fault tolerance, the servers that support one DRU must have independent power (through uninterruptible power supplies), processing units, and storage. If a power failure occurs in one DRU, the nodes in the other DRU continue servicing requests until the power returns.


Note

Machines that host HADB nodes must be added in pairs, with one machine in each DRU.


Active Nodes

Each active node must have a mirror node; that is, active nodes must be configured in pairs. In addition, to maximize HADB availability, it is best to include two spare nodes for each pair so that if an active node fails, a spare node can take over while the failed node is repaired.

Spare Nodes

A spare node is an additional HADB node connected to a DRU. A spare node initially does not contain data, but constantly monitors for failure of active nodes in the DRU. If an active node fails, the spare node takes over the functions of the failed node while the failed node is being repaired.

Though the administrator can configure an HADB system without spare nodes, it is not recommended. If the machine running an active node fails, the other nodes (including the mirror node) become overloaded, which drastically reduces performance. Depending on the impact of losing one machine, this can make your system effectively unavailable because machines running the other nodes also become overloaded.

Moreover, the system will be running without fault tolerance until the failed machine is repaired, as there is no mirror node to replicate the data. For high availability, minimize the time during which the system functions with only a single node.

Spare nodes are not mandatory, but they allow a single machine to fail while maintaining the overall level of service. Allocate one machine for each DRU to act as a spare machine, so that if one of the machines fails, the HADB system continues without adversely affecting performance. A spare node also makes it easy for the administrator to perform planned maintenance on the machines that host the active nodes.
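The takeover behavior described above can be sketched as follows. This plain-Java model illustrates the idea within one DRU and is not HADB’s actual implementation; the slot and node names are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of spare-node takeover within one DRU: each active
// node serves a data "slot"; when an active node fails, a spare assumes
// its slot so the DRU keeps serving a complete copy of the data.
public class DruFailover {
    private final Map<String, String> slotToNode = new HashMap<String, String>();
    private final Deque<String> spares = new ArrayDeque<String>();

    public void addActive(String slot, String node) { slotToNode.put(slot, node); }
    public void addSpare(String node) { spares.add(node); }

    // Returns the spare that takes over, or null if no spare is available
    // (in which case the mirror node in the other DRU carries the full load).
    public String fail(String node) {
        for (Map.Entry<String, String> entry : slotToNode.entrySet()) {
            if (entry.getValue().equals(node)) {
                String spare = spares.poll();
                if (spare == null) {
                    return null;
                }
                entry.setValue(spare);
                return spare;
            }
        }
        return null; // unknown node
    }

    public static void main(String[] args) {
        DruFailover dru = new DruFailover();
        dru.addActive("slot-0", "node0");
        dru.addActive("slot-2", "node2");
        dru.addSpare("node4");
        System.out.println(dru.fail("node0")); // node4 takes over slot-0
    }
}
```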


Note

As a general rule, have a spare machine with enough Application Server instances and HADB nodes to replace any machine that becomes unavailable.


Sample Spare Node Configuration 1

If you have a co-located deployment with four Sun Fire™ V480 servers, where each server has one Application Server instance and two HADB data nodes, allocate two more servers as spare machines (one machine per DRU). Each spare machine runs one application server instance and two spare HADB nodes.

Sample Spare Node Configuration 2

Suppose you have a separate-tier deployment where the HADB tier has two Sun Fire™ 280R servers, each running two HADB data nodes. To maintain this system at full capacity, even if one machine becomes unavailable, configure one spare machine for the Application Server instances tier and one spare machine for the HADB tier.

The spare machine for the Application Server instances tier must have as many instances as the other machines in the Application Server instances tier. Similarly, the spare machine for the HADB tier must have as many HADB nodes as the other machines in the HADB tier.

For more information about the co-located and the separate tier deployment topologies, see Chapter 3, "Selecting a Topology."

Sample HADB Architecture

Figure 1-3 shows the architecture of a database with four active nodes and two spare nodes. Nodes 0 and 1 are a mirror node pair, as are nodes 2 and 3.

Figure 1-3  Sample HADB Architecture

Session Persistence

As an application session proceeds, there is often data that is part of the session that is not stored in a traditional database. An example of such data is the content of a Web shopping cart. The Application Server provides the capability to save, or persist, this session state in a repository, so that if an application server instance experiences a failure, the session state can be recovered and the session can continue without loss of information.
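Persisting session state requires that the state be serializable, so that it can be written to the repository and read back by another instance after a failure. The following self-contained sketch uses standard Java serialization with a byte stream standing in for the persistence store; the ShoppingCart class is hypothetical, not part of the product:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Sketch of session-state persistence: the cart is written to a byte
// stream (standing in for the persistence store) and read back, as would
// happen when another instance recovers the session after a failure.
public class SessionPersistenceDemo {
    static class ShoppingCart implements Serializable {
        private static final long serialVersionUID = 1L;
        final List<String> items = new ArrayList<String>();
    }

    static byte[] save(ShoppingCart cart) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(cart);
        out.close();
        return bytes.toByteArray();
    }

    static ShoppingCart restore(byte[] saved) throws Exception {
        ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(saved));
        return (ShoppingCart) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        ShoppingCart cart = new ShoppingCart();
        cart.items.add("book");
        byte[] saved = save(cart);               // the instance fails here...
        ShoppingCart recovered = restore(saved); // ...another instance recovers the session
        System.out.println(recovered.items);     // [book]
    }
}
```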

The Application Server supports the following types of persistence for HTTP sessions and stateful session bean (SFSB) sessions: memory (session state is held in memory only and is not persisted), file (session state is persisted to the file system), and ha (session state is persisted to the HADB).

With ha persistence, the Application Server uses the HADB as the persistence store for both HTTP and SFSB sessions. With the memory and file settings, the server persists SFSB sessions to the session store defined for passivated SFSBs; the memory and file settings affect only HTTP sessions.

Checking an SFSB’s state for changes that need to be saved is called checkpointing. When enabled, checkpointing generally occurs after any transaction involving the SFSB completes, even if the transaction rolls back. For more information on enabling SFSB checkpointing, see the Sun Java System Application Server Developer’s Guide.

In addition to the request load on the Application Server, the session persistence configuration settings determine the number of requests per minute that the HADB receives and the amount of session information in each request.

The administrator can also define persistence settings for each application server instance; however, all application server instances in a single cluster must have the same persistence configuration. See the Sun Java System Application Server Administration Guide for instructions. If more than one Application Server cluster has been deployed, the clusters need not share the same persistence configuration settings.





Copyright 2004 Sun Microsystems, Inc. All rights reserved.