Oracle Application Server 10g High Availability Guide, 10g (9.0.4), Part Number B10495-01
This chapter focuses on the high availability aspects of the Oracle Application Server Infrastructure. It discusses the features and architectural solutions for high availability of the Infrastructure and is divided into the following sections:
Oracle Application Server provides a completely integrated infrastructure and framework for development and deployment of enterprise applications. An Oracle Application Server Infrastructure installation type provides centralized product metadata, security and management services, and configuration information and data repositories for the Oracle Application Server middle tier. By integrating the Infrastructure services required by the middle tier, time and effort required to develop enterprise applications are reduced. In turn, the total cost of developing and deploying these applications is reduced, and the deployed applications are more reliable.
The Oracle Application Server Infrastructure provides the following overall services:
Oracle Application Server Infrastructure stores all application server metadata required by Oracle Application Server middle tier instances. This data is stored in an Oracle9i database, thereby leveraging the robustness of the database to provide a reliable, scalable, and easy-to-manage metadata repository.
The security service provides a consistent security model and identity management for all applications deployed on Oracle Application Server. The service enables centralized authentication using single sign-on, Web-based administration through the Oracle Delegated Administration Services, and centralized storage of user authentication credentials. The Oracle Internet Directory is used as the underlying repository for this service.
This service is used by Distributed Configuration Management to manage and administer Oracle Application Server middle tier instances and the Oracle Application Server Infrastructure. It is also used to administer clustering services for the middle tier. Application Server Console reduces the total administrative cost by centralizing the management of deployed J2EE applications.
The Oracle Application Server Infrastructure consists of several components that contribute to its role and function. These components work with each other to provide the Infrastructure's product metadata, security, and management services. This section describes these Infrastructure components, which are:
Oracle Application Server Metadata Repository is an Oracle9i Enterprise Edition database server and stores component-specific information that is accessed by the Oracle Application Server middle tier or Infrastructure components as part of their application deployment. The end user or the client application does not access this data directly. For example, a Portal application on the middle tier accesses the Portal metadata as part of the Portal page assembly aggregation. Metadata also includes demo data for many Oracle Application Server components, such as data used by the Order Management Demo for BC4J.
Oracle Application Server metadata and customer or application data can co-exist in the Oracle Application Server Metadata Repository; the difference is in which applications are allowed to access them.
The Oracle Application Server Metadata Repository stores three main types of metadata corresponding to the three main Infrastructure services described in the section "Oracle Application Server Infrastructure Overview". These types of metadata are:
Table 3-1 shows the Oracle Application Server components that store and use these types of metadata during application deployment.
Oracle Application Server Metadata Repository (OracleAS Metadata Repository) is needed for all application deployments except for those using the J2EE and Web Cache installation type. Oracle Application Server provides three middle tier installation options:
Integration components, such as Oracle Application Server ProcessConnect, Oracle Application Server InterConnect, and Oracle Workflow are installed on top of any of these middle tier install options.
The Distributed Configuration Management (DCM) component enables middle tier management, and stores its metadata in the OracleAS Metadata Repository for both the Portal and Wireless, and the Business Intelligence and Forms install options. For the J2EE and Web Cache installation type, by default, DCM uses a file-based repository. If you choose to associate the J2EE and Web Cache installation type with an Infrastructure, the file-based repository is moved into the OracleAS Metadata Repository.
The Oracle Identity Management framework in the Infrastructure includes the following components:
Oracle Internet Directory is Oracle's implementation of a directory service using the Lightweight Directory Access Protocol (LDAP) version 3. It runs as an application on the Oracle9i database and utilizes the database's high performance, scalability, and high availability.
Oracle Internet Directory provides a centralized repository for creating and managing users for the rest of the Oracle Application Server components such as OC4J, Oracle Application Server Portal, or Oracle Application Server Wireless. Central management of user authorization and authentication enables users to be defined centrally in Oracle Internet Directory and shared across all Oracle Application Server components.
Oracle Internet Directory is provided with a Java-based management tool (Oracle Directory Manager), a Web-based administration tool (Oracle Delegated Administration Services) for trusted proxy-based administration, and several command-line tools. Oracle Delegated Administration Services provide a means of provisioning end users in the Oracle Application Server environment by delegated administrators who are not the Oracle Internet Directory administrator. It also allows end users to modify their own attributes.
Oracle Internet Directory also enables Oracle Application Server components to synchronize data about users and group events, so that those components can update any user information stored in their local application instances.
See Also:
Oracle Internet Directory Administrator's Guide for more information.
OracleAS Single Sign-On is a multi-part environment, made up of both middle tier and database functions, that allows a single user authentication across partner applications. An application can become a partner application either by using the SSO SDK or via the Apache mod_osso module, which allows Apache (and consequently the URLs it serves) to be made into partner applications.
OracleAS Single Sign-On is fully integrated with Oracle Internet Directory, which stores user information. It supports LDAP-based user and password management through Oracle Internet Directory.
OracleAS Single Sign-On supports Public Key Infrastructure (PKI) client authentication, which enables PKI authentication to a wide range of Web applications. Additionally, it supports the use of X.509 digital client certificates and Kerberos Security Tickets for user authentication.
By means of an API, OracleAS Single Sign-On can integrate with third-party authentication mechanisms such as Netegrity SiteMinder.
See Also:
Oracle Application Server Single Sign-On Administrator's Guide. (This guide also includes Identity Management replication instructions.)
The Infrastructure installation type installs Oracle HTTP Server for the Infrastructure. This is used to service requests from other distributed components of the Infrastructure and middle tier instances. In the Infrastructure, Oracle HTTP Server services requests for OracleAS Single Sign-On and Oracle Delegated Administration Services. The latter is implemented as a servlet in an OC4J process in the Infrastructure.
OC4J is installed in the Infrastructure to run Oracle Delegated Administration Services and OracleAS Single Sign-On. The former runs as a servlet in OC4J.
Oracle Delegated Administration Services provide a self-service console (for end users and application administrators) that can be customized to support third-party applications. In addition, it provides a number of services for building customized administration interfaces that manipulate Oracle Internet Directory data. Oracle Delegated Administration Services are a component of Oracle Identity Management.
See Also:
Oracle Internet Directory Administrator's Guide for more information about Oracle Delegated Administration Services.
Oracle Enterprise Manager - Application Server Console (Application Server Console) provides a Web-based interface for managing Oracle Application Server components and applications. Using the Oracle Application Server Console, you can do the following:
For more information on Oracle Enterprise Manager and its two frameworks, see Oracle Enterprise Manager Concepts.
See Also:
Oracle Application Server Administrator's Guide - provides a description of Application Server Console and instructions on how to use it.
As described earlier, the Oracle Application Server Infrastructure provides the following services:
From an availability standpoint, these services are provided by the following components, which must all be available to guarantee availability of the Infrastructure:
For the Infrastructure to provide all essential services, all of the above components must be available. On UNIX platforms, this means that the processes associated with these components must be up and active. Any high availability solution must be able to detect and recover from any software failures of any of the processes associated with the Infrastructure components. It must also be able to detect and recover from any hardware failures on the hosts that are running the Infrastructure.
In Oracle Application Server, all of the Infrastructure processes, except the database, its listener, and Application Server Console, are started, managed, and restarted by the Oracle Process Management and Notification (OPMN) framework. This means any failure of an OPMN-managed process is handled internally by OPMN. OPMN is automatically installed and configured at install time. However, any database process failure or database listener failure is not handled by OPMN. Also, failure of any OPMN processes leaves the Infrastructure in a non-resilient mode if the failure is not detected and appropriate recovery steps taken.
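OPMN's restart behavior can be pictured with a minimal supervisor loop. The following is a toy model for illustration only, not OPMN's actual implementation; the `sleep` command simply stands in for a failing managed process:

```python
import subprocess
import time

def supervise(cmd, max_restarts, poll_interval=0.1):
    """Keep `cmd` running, restarting it when it exits -- a toy model
    of what OPMN does for the Infrastructure processes it manages."""
    restarts = 0
    proc = subprocess.Popen(cmd)
    while restarts < max_restarts:
        time.sleep(poll_interval)
        if proc.poll() is not None:       # managed process has died
            restarts += 1
            proc = subprocess.Popen(cmd)  # restart it immediately
    proc.terminate()                      # stop the demo supervisor
    proc.wait()
    return restarts

# A short-lived command stands in for a failing managed process.
print(supervise(["sleep", "0.05"], max_restarts=2))  # prints 2
```

Note that, as described above, the database, its listener, and Application Server Console fall outside this supervision, which is why a separate high availability solution must watch them.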
OracleAS provides two solutions for intrasite high availability of the Infrastructure. These are:
Note:
In this release of Oracle Application Server 10g (9.0.4), Oracle Application Server Active Failover Cluster is a limited release feature. Check OracleMetaLink (http://metalink.oracle.com) for the most current certification status of this feature or consult your Oracle sales representative before deploying this feature in a production environment.
These intrasite high availability solutions provide protection from local hardware and software failures that cannot be detected and recovered by OPMN. Examples of such failures are a system panic or node crash. These solutions, however, cannot protect the Infrastructure from site failures or media failures, which result in damage to or loss of data.
Oracle Application Server provides a disaster recovery solution to protect against disasters and site failures. This solution is described in Chapter 6, "Oracle Application Server Disaster Recovery".
A site failure or disaster will most likely affect all the systems including middle tiers, Infrastructure, and backend databases. Hence, the disaster recovery solution also provides mechanisms to protect the middle tier and the Infrastructure database.
In short, the intrasite high availability solutions, OracleAS Cold Failover Cluster and OracleAS Active Failover Cluster, provide resilience for only the OracleAS Infrastructure from local hardware and software failures. The middle tier can continue to function with a resilient Infrastructure. The disaster recovery solution, on the other hand, deals with a complete site failure, which requires failing over not only the Infrastructure but also the middle tier. The intrasite high availability solutions for the Infrastructure are discussed in the following sections.
The Oracle Application Server Cold Failover Cluster (OracleAS Cold Failover Cluster) solution for the Infrastructure uses a two node hardware cluster as depicted in Figure 3-1, "Normal operation of OracleAS Cold Failover Cluster solution" below.
For the purpose of describing the solution, it is important to clarify the following terminology within the context of the OracleAS Cold Failover Cluster solution.
A cluster, in its generic definition, is a collection of loosely coupled computers (called nodes) that provides a single view of network services (for example, an IP address) or application services (for example, databases, Web servers) to clients of these services. Each node in a cluster is a standalone server that runs its own processes. These processes can communicate with one another to form what looks like a single system that cooperatively provides applications, system resources, and data to users. This type of clustering offers several advantages over traditional single server systems for highly available and scalable applications.
Hardware clusters are clusters that achieve high availability and scalability through the use of additional hardware (cluster interconnect, shared storage) and software (health monitors, resource monitors). (The cluster interconnect is a private link used by the hardware cluster for heartbeat information to detect node death.) Due to the need for additional hardware and software, hardware clusters are commonly provided by hardware vendors such as SUN, HP, IBM, and Dell. While the number of nodes that can be configured in a hardware cluster is vendor dependent, for the purpose of Oracle Application Server Infrastructure High Availability using the Oracle Application Server Cold Failover Cluster solution, only two nodes are required. Hence, this document assumes a two-node hardware cluster for that solution.
Failover is the process by which the hardware cluster automatically relocates the execution of an application from a failed node to a designated standby node. When a failover occurs, clients may see a brief interruption in service and may need to reconnect after the failover operation has completed. However, clients are not aware of the physical server from which they are provided the application and data. The hardware cluster's software provides the APIs to automatically start, stop, monitor, and failover applications between the two nodes of the hardware cluster.
The primary node is the node that is actively executing one or more Infrastructure installations at any given time. If this node fails, the hardware cluster automatically fails the Infrastructure over to the secondary node. Since the primary node runs the active Infrastructure installation(s), it is considered the "hot" node.
The secondary node is the node that takes over the execution of the Infrastructure if the primary node fails. Since the secondary node does not originally run the Infrastructure, it is considered the "cold" node. And, because the application fails over from a hot node (primary) to a cold node (secondary), this type of failover is called cold failover.
To present a single system view of the cluster to network clients, hardware clusters use what is called a logical or virtual IP address. This is a dynamic IP address that is presented to the outside world as the entry point into the cluster. The hardware cluster's software manages the movement of this IP address between the two physical nodes of the cluster while the external clients connect to this IP address without the need to know which physical node this IP address is currently active on. In a typical two-node cluster configuration, each physical node has its own physical IP address and hostname, while there could be several logical IP addresses, which float or migrate between the two nodes. For a given OracleAS Infrastructure installation, the logical IP/virtual name associated with that installation is the IP/name that is used by the clients to connect to the Infrastructure. Refer to the Oracle Application Server 10g Installation Guide for more information on the installation process.
The virtual hostname is the hostname associated with the logical or virtual IP. This is the name that is chosen to give the OracleAS middle tier a single system view of the hardware cluster. This name-IP entry must be added to the DNS that the site uses, so that the middle tier nodes can associate with the Infrastructure without having to add this entry into their local /etc/hosts (or equivalent) file. For example, if the two physical hostnames of the hardware cluster are node1.mycompany.com and node2.mycompany.com, the single view of this cluster can be provided by the name selfservice.mycompany.com. In the DNS, selfservice maps to the logical IP address of the Infrastructure, which floats between node1 and node2 without the middle tier knowing which physical node is active and servicing the requests.
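A toy model of this name indirection, reusing the example names above (none of these entries are real), makes the failover transparency concrete:

```python
# Toy model of the virtual hostname. Clients always resolve the same
# virtual name; only the cluster software re-points the logical IP
# from the failed node to the standby node.
dns = {"selfservice.mycompany.com": "144.25.245.1"}   # virtual name -> logical IP
holder = {"144.25.245.1": "node1.mycompany.com"}      # logical IP -> current node

def connect(name):
    ip = dns[name]        # the middle tier resolves the virtual name
    return holder[ip]     # ...and reaches whichever node holds the IP

before = connect("selfservice.mycompany.com")

# Node 1 fails: the cluster software migrates the logical IP to node 2.
holder["144.25.245.1"] = "node2.mycompany.com"

after = connect("selfservice.mycompany.com")
print(before, after)  # node1.mycompany.com node2.mycompany.com
```

The middle tier's view (the `dns` entry) never changes across the failover, which is the whole point of the virtual hostname.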
Even though each hardware cluster node is a standalone server that runs its own set of processes, the storage subsystem required for any cluster-aware purpose is usually shared. Shared storage refers to the ability of the cluster to be able to access the same storage, usually disks, from both the nodes. While the nodes have equal access to the storage, only one node, the primary node, has active access to the storage at any given time. The hardware cluster's software grants the secondary node access to this storage if the primary node fails. For the OracleAS Infrastructure, its ORACLE_HOME is on such a shared storage file system. This file system is mounted by the primary node; if that node fails, the secondary node takes over and mounts the file system. In some cases, the primary node may relinquish control of the shared storage, such as when the hardware cluster's software deems the Infrastructure as unusable from the primary node and decides to move it to the secondary.
Figure 3-1 shows the layout of the two-node cluster for the OracleAS Cold Failover Cluster high availability solution. The two nodes are attached to shared storage. For illustration purposes, a virtual/logical IP address of 144.25.245.1 is active on physical Node 1. Hence, Node 1 is the primary or active node. The virtual name selfservice.mycompany.com is mapped to this logical IP address, and the middle tier associates the Infrastructure with selfservice.mycompany.com.
In normal operating mode, the hardware cluster's software enables the logical IP 144.25.245.1 on physical Node 1 and starts all Infrastructure processes (database, database listener, Oracle Enterprise Manager process, and OPMN) on that node. OPMN then starts, monitors, and restarts, if necessary, any of the following failed Infrastructure processes: Oracle Internet Directory, OC4J instances, and Oracle HTTP Server.
If the primary node fails, the logical IP address 144.25.245.1 is manually enabled on the secondary node (Figure 3-2). All the Infrastructure processes are then started on the secondary node. The middle tier processes accessing the Infrastructure will see a temporary loss of service as the logical IP and the shared storage are moved over and the database, database listener, and all other Infrastructure processes are started. Once the processes are up, middle tier processes that were retrying during this time are reconnected. New connections are not aware that a failover has occurred.
While the hardware cluster framework can start, monitor, detect, restart, or failover Infrastructure processes, these actions are not automatic and involve some scripting or simple programming. Required scripts are described in Chapter 5, "Managing Infrastructure High Availability".
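The failover sequence above can be sketched as a short Python model. The step strings are illustrative placeholders, not the actual scripts documented in Chapter 5:

```python
# Hedged sketch of the cold-failover sequence: enable the logical IP,
# mount shared storage, then bring up the Infrastructure processes.
FAILOVER_STEPS = [
    "enable logical IP 144.25.245.1 on the secondary node",
    "mount the shared ORACLE_HOME file system",
    "start the database and database listener",
    "start Application Server Console",
    "start OPMN (which starts OID, OC4J, and Oracle HTTP Server)",
]

def fail_over(run_step):
    """Execute each step in order, stopping at the first failure so
    the cluster framework can alert an operator or retry."""
    completed = []
    for step in FAILOVER_STEPS:
        if not run_step(step):
            return completed, step        # what succeeded, what failed
        completed.append(step)
    return completed, None                # full failover succeeded

done, failed = fail_over(lambda step: True)
print(len(done), failed)  # 5 None
```

The ordering matters: the logical IP and shared storage must be in place before the database and the OPMN-managed processes can start on the secondary node.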
For information on setting up and operating the OracleAS Cold Failover Cluster solution for the Infrastructure, see Chapter 5, "Managing Infrastructure High Availability". This chapter also covers the pre-installation and installation tasks.
OracleAS middle tier can also be installed on the same node(s) as the OracleAS Cold Failover Cluster solution (see Figure 3-3). If the OracleAS middle tier is installed on both nodes of the OracleAS Cold Failover Cluster, both middle tier installations are concurrently active and servicing requests while the Infrastructure is active only on one of the nodes. Figure 3-3 provides a graphical depiction of this discussion.
This set up has the following characteristics:
staticports.ini file. See Oracle Application Server 10g Installation Guide.
Note: In this release of Oracle Application Server 10g (9.0.4), Oracle Application Server Active Failover Cluster is a limited release feature. Check OracleMetaLink (http://metalink.oracle.com) for the most current certification status of this feature or consult your Oracle sales representative before deploying this feature in a production environment.
Oracle Application Server Active Failover Cluster (OracleAS Active Failover Cluster) provides a robust cluster architecture for the Infrastructure. It provides a more transparent high availability solution than the OracleAS Cold Failover Cluster solution. Because the nodes in the OracleAS Active Failover Cluster solution are all active, failover from one node to another is quick and requires no manual intervention. The active-active set up also provides scalability to the Infrastructure deployed on it. Figure 3-4 depicts the overall architecture of the solution.
In this solution, the Infrastructure software is installed identically on each node of a hardware cluster that is running OracleAS Active Failover Cluster technology. Each node has a local copy of the Infrastructure software (including Oracle Identity Management software) and an instance of the database. The database files are installed in shared storage accessible by all nodes. The database instances open the database concurrently for read/write operations. The Infrastructure configuration files that are not in the database but in the file system are local to each node. These files contain node-specific configuration information.
The cluster is front-ended by a load balancer appliance. Oracle recommends that this load balancer be deployed in a fault-tolerant mode to maintain availability in case of load balancer failure. The load balancer appliance is used to direct non-Oracle Net traffic from the middle tier to the Infrastructure. This traffic includes HTTP, HTTPS, and LDAP requests. The configuration of the load balancer is set to direct requests from the middle tier to any of the active Infrastructure nodes.
Note: Check http://metalink.oracle.com for information on supported external load balancers.
Oracle Net traffic from the middle tier does not go through the load balancer. This traffic is directed to the Infrastructure nodes via connect descriptors with multiple addresses in the address list. The address list is used to load balance certain Oracle Net traffic across the Infrastructure nodes. Oracle Net traffic includes connections initiated by:
The OracleAS Active Failover Cluster high availability solution enables failover for failure of a whole node as well as failure of individual components of the node such as the database instance and Oracle Internet Directory.
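The multiple-address connect descriptors mentioned above might look like the following tnsnames.ora fragment. The net service name, hostnames, port, and service name here are illustrative placeholders, not values prescribed by this guide:

```
INFRA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = infra-node1.mycompany.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = infra-node2.mycompany.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = infra.mycompany.com)
    )
  )
```

With LOAD_BALANCE set to on, Oracle Net picks addresses from the list at random to spread connections across the nodes; with FAILOVER set to on, it tries the next address if the chosen one is unreachable.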
The following considerations apply to this solution:
The ORACLE_HOME path and structure are the same on all nodes of the cluster.
The ORACLE_SID has to be unique for each database instance on each node.
For information on setting up and operating the OracleAS Active Failover Cluster high availability solution for the Infrastructure, see Chapter 5, "Managing Infrastructure High Availability". The pre-installation and installation tasks for this high availability solution are provided in detail in Oracle Application Server 10g Installation Guide.
In order for an OracleAS Active Failover Cluster to service Oracle Internet Directory LDAP and HTTP (for OracleAS Single Sign-On and Oracle Delegated Administration Services) requests, a load balancer is required for the OracleAS Active Failover Cluster configuration. The hostname of the load balancer virtual server is exposed as the hostname of the Infrastructure for these requests. This section describes the configuration requirements for the load balancer for the default installation of OracleAS Active Failover Cluster.
For high availability, the following is recommended:
Two load balancer parameters are of primary importance for the OracleAS Active Failover Cluster configuration:
The recommended settings for the above two parameters are provided below in Table 3-2. Load balancers come in many flavors and each may have its own configuration mechanism. Consult your load balancer's documentation for the specific instructions to achieve these configurations.
The persistence mechanism used should provide session level stickiness. By default, HTTP and Oracle Internet Directory requests both use the same virtual host address configured for the load balancer. Hence, the persistence mechanism used is available for both kinds of requests.
If the load balancer allows for the configuration of different persistence mechanisms for different server ports (LDAP and HTTP) for the same virtual server, then this is the recommended strategy. In this case, a cookie-based persistence with session-level timeout is more suitable for the HTTP traffic. No persistence setting is required for the LDAP traffic.
If the load balancer does not allow specification of different persistence mechanisms for LDAP and HTTP, then the timeout value for session level stickiness should be configured based on the requirements of the deployed application. The timeout value should not be too high, as that increases the chances of traffic from a given middle tier instance always being directed to the same node of the OracleAS Active Failover Cluster. Alternatively, if the timeout is too low, the chances of a session timeout occurring for longer running operations that access the Infrastructure are higher.
The recommended default stickiness timeout is 60 seconds. This should be adjusted based on the nature of the deployment and the load balancing achieved across the OracleAS Active Failover Cluster nodes. It should be increased if session timeouts are experienced by Delegated Administration Services users. It should be decreased if load is not being balanced evenly across the nodes.
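The stickiness-timeout trade-off can be sketched with a small balancer model. The class and its timeout semantics are hypothetical, for illustration only, and do not reflect any real load balancer's API:

```python
import itertools

class StickyBalancer:
    """Toy session-stickiness model: a client keeps hitting the same
    node until its sticky entry expires, then is rebalanced."""
    def __init__(self, nodes, timeout):
        self.ring = itertools.cycle(nodes)
        self.timeout = timeout
        self.sessions = {}            # client -> (node, expires_at)

    def route(self, client, now):
        node, expires = self.sessions.get(client, (None, 0))
        if node is None or now >= expires:
            node = next(self.ring)    # sticky entry expired: rebalance
        self.sessions[client] = (node, now + self.timeout)
        return node

lb = StickyBalancer(["infra-node1", "infra-node2"], timeout=60)
a = lb.route("midtier-1", now=0)     # first request picks a node
b = lb.route("midtier-1", now=30)    # within the timeout: same node
c = lb.route("midtier-1", now=120)   # entry expired: rebalanced
print(a == b, a == c)  # True False
```

A larger timeout keeps each middle tier instance pinned to one node longer (less even balancing); a smaller one rebalances sooner but risks breaking long-running sticky sessions, which is exactly the trade-off described above.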
Both LDAP and HTTP traffic should be tested after configuration of the load balancer. This should be done from any machine outside the OracleAS Active Failover Cluster. The tests should have the following coverage:
ldapsearch commands for LDAP requests.
The request types above should be directed to different nodes of the OracleAS Active Failover Cluster. The desired operations should complete successfully for the tests to be successful.
Copyright © 2003 Oracle Corporation. All Rights Reserved.