Oracle® Application Server Installation Guide
10g Release 2 (10.1.2) for hp HP-UX PA-RISC (64-bit), and Linux x86 Part No. B14141-02
This chapter provides an overview of the high availability configurations supported by Oracle Application Server. Subsequent chapters provide the details. This chapter also lists the common requirements.
Contents of this chapter:
Section 10.1, "Overview of High Availability Configurations"
Section 10.2, "Installation Order for High Availability Configurations"
Section 10.3, "Requirements for High Availability Configurations"
This chapter provides only a brief overview of the high availability configurations in Oracle Application Server. For a complete description of the configurations, see the Oracle Application Server High Availability Guide.
Oracle Application Server supports the following types of high availability configurations:
OracleAS Cold Failover Cluster
OracleAS Cluster (Identity Management)
OracleAS Disaster Recovery
For a quick summary of the high availability configurations, see Section 10.1.4, "Summary of Differences".
OracleAS Cold Failover Cluster configurations have the following characteristics:
Active and passive nodes. The active node handles all the requests. If the active node fails, a failover event occurs and requests are routed to the passive node, which becomes the new active node.
Shared disk. Typically, you install Oracle Application Server on the shared disk. The active and passive nodes have access to the shared disk, but only one node (the active node) mounts the shared disk at any given time.
Virtual IP and hostname. You need to set up a virtual IP address and hostname for the active and passive nodes. During installation, you provide the virtual hostname. Clients use the virtual hostname to access Oracle Application Server in an OracleAS Cold Failover Cluster configuration (for example, the virtual hostname appears in URLs). The virtual IP and hostname point to the active node. If the active node fails, they switch to point to the new active node.
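Before installing, it can help to confirm that the virtual hostname resolves. A minimal sketch, assuming a placeholder hostname vhost.example.com (substitute the virtual hostname configured for your cluster):

```shell
# Placeholder virtual hostname; replace with your configured value.
vhost=vhost.example.com
# getent consults the OS resolver order (DNS, /etc/hosts) on Linux;
# on HP-UX, nslookup can be used for a similar check.
if getent hosts "$vhost" >/dev/null 2>&1; then
  echo "$vhost resolves"
else
  echo "$vhost does not resolve; check DNS or /etc/hosts" >&2
fi
```

A hostname that fails to resolve here will also fail for clients at runtime, so fix the DNS or /etc/hosts entry before starting the installer.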
You can install OracleAS Infrastructure and the middle tier in OracleAS Cold Failover Cluster configurations. See Chapter 11, "Installing in High Availability Environments: OracleAS Cold Failover Cluster" for details.
OracleAS Cluster (Identity Management) configurations have the following characteristics:
Active nodes. All the nodes in an OracleAS Cluster (Identity Management) configuration are active. This means that all the nodes can handle requests. If a node fails, the remaining nodes handle all the requests.
Load balancer. You need a load balancer to load-balance the requests to all the active nodes. During installation, you enter the virtual server name configured on your load balancer. During runtime, clients use the virtual server name to access the OracleAS Cluster (Identity Management) configuration. The load balancer then directs the request to the appropriate node.
OracleAS Cluster (Identity Management) is used for installing Identity Management components in a high availability configuration. It is not used for middle tiers. For details on OracleAS Cluster (Identity Management), see Chapter 12, "Installing in High Availability Environments: OracleAS Cluster (Identity Management)".
OracleAS Disaster Recovery configurations have the following characteristics:
A production site and a standby site that mirrors the production site. Typically, these sites are located some distance from each other to guard against site failures such as floods, fires, or earthquakes. During normal operation, the production site handles all the requests. If the production site goes down, the standby site takes over and handles all the requests.
Each site contains all the hardware and software needed to run independently: nodes for running OracleAS Infrastructure and the middle tiers, load balancers, and DNS servers.
OracleAS Disaster Recovery includes OracleAS Infrastructure and middle tiers. For details, see Chapter 13, "Installing in High Availability Environments: OracleAS Disaster Recovery".
Table 10-1 summarizes the differences among the high availability configurations:
Table 10-1 Differences Among the High Availability Configurations
| OracleAS Cold Failover Cluster | OracleAS Cluster (Identity Management) | OracleAS Disaster Recovery |
---|---|---|---|
Node configuration | Active-Passive | Active-Active | Active-Passive |
Hardware cluster | Yes | No | Optional (hardware cluster required only if you installed the OracleAS Infrastructure in an OracleAS Cold Failover Cluster configuration) |
Virtual hostname | Yes | No | Yes |
Load balancer | No | Yes | No |
Shared storage | Yes | No | No |
For all high availability configurations, you install the components in the following order:
1. OracleAS Metadata Repository
2. Identity Management components
   If you are distributing the Identity Management components, install them in the following order:
   a. Oracle Internet Directory and Oracle Directory Integration and Provisioning
   b. OracleAS Single Sign-On and Oracle Delegated Administration Services
3. Middle tiers
This section describes the requirements common to all high availability configurations. In addition to these common requirements, each configuration has its own specific requirements. See the individual chapters for details.
Note: You still need to meet the requirements listed in Chapter 4, "Requirements", plus requirements specific to the high availability configuration that you plan to use.
The common requirements are:
Section 10.3.2, "Check That Groups Are Defined Identically on All Nodes"
Section 10.3.4, "Check for Previous Oracle Installations on All Nodes"
You need at least two nodes in a high availability configuration. If a node fails for any reason, the second node takes over.
Check that the /etc/group file on all nodes in the cluster contains the operating system groups that you plan to use. You should have one group for the oraInventory directory, and one or two groups for database administration. The group names and the group IDs must be the same on all nodes.
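One way to compare group definitions across nodes is to extract the name:GID pairs from each node's /etc/group file and diff them. The sketch below uses sample data written to temporary files (in practice you would copy each node's /etc/group, for example with scp); the group names oinstall and dba and the GIDs are illustrative. Process substitution requires bash:

```shell
# Sample /etc/group contents from two hypothetical nodes.
cat > /tmp/groups_node1 <<'EOF'
oinstall:x:501:oracle
dba:x:502:oracle
EOF
cat > /tmp/groups_node2 <<'EOF'
oinstall:x:501:oracle
dba:x:503:oracle
EOF
# Compare name:GID pairs; any output indicates a mismatch between nodes.
diff <(cut -d: -f1,3 /tmp/groups_node1) <(cut -d: -f1,3 /tmp/groups_node2) || true
```

In this sample the dba group has GID 502 on one node and 503 on the other, so the diff reports a mismatch that must be fixed before installing.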
See Section 4.5, "Operating System Groups" for details.
Check that the oracle operating system user, which you log in as to install Oracle Application Server, has the following properties:
Belongs to the oinstall group and to the osdba group. The oinstall group is for the oraInventory directory, and the osdba group is a database administration group. See Section 4.5, "Operating System Groups" for details.
Has write privileges on remote directories.
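The group-membership part of this check can be scripted. A sketch; the helper name in_group is hypothetical, the sample `id -Gn` output is illustrative, and oinstall/dba stand in for the groups chosen per Section 4.5:

```shell
# in_group GROUPS_LINE GROUP: succeed if GROUP appears in a
# space-separated group list, e.g. the output of `id -Gn oracle`.
in_group() {
  echo "$1" | tr ' ' '\n' | grep -qx "$2"
}

# Illustrative output of `id -Gn oracle` captured on one node:
groups_line="oinstall dba users"
in_group "$groups_line" oinstall && echo "user is in oinstall"
in_group "$groups_line" dba && echo "user is in dba"
```

Running `id -Gn oracle` on each node and feeding the output to such a check quickly confirms the memberships are consistent cluster-wide.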
Check that none of the nodes on which you plan to install the high availability configuration has an existing oraInventory directory.
Details of all Oracle software installations are recorded in the Oracle Installer Inventory directory. Typically, this directory is unique to a node and named oraInventory. The directory path of the Oracle Installer Inventory directory is stored in the oraInst.loc file.
The existence of this file on a node confirms that the node contains some Oracle software installation. Because high availability configurations require installations on multiple nodes, and the Oracle Installer Inventory directories may reside on file systems that are not accessible from other nodes, the installation instructions in this chapter and subsequent chapters assume that no Oracle software has previously been installed on any of the nodes used for the high availability configuration. Neither the oraInst.loc file nor the Oracle Installer Inventory directory should exist on any of these nodes before you begin these installations.
To check if a node contains an oraInventory directory that could be detected by the installer:
On each node, check for the existence of the oraInst.loc file. This file is stored in the /etc directory on Linux and in the /var/opt/oracle directory on HP-UX.
If a node does not contain this file, then it does not have an oraInventory directory that will be used by the installer. You can check the next node.
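This per-node check can be sketched as a small loop over the two platform-specific locations mentioned above (run it on each node; the messages are illustrative):

```shell
# oraInst.loc lives in /etc on Linux and /var/opt/oracle on HP-UX.
found=no
for dir in /etc /var/opt/oracle; do
  if [ -f "$dir/oraInst.loc" ]; then
    found=yes
    echo "Found $dir/oraInst.loc:"
    cat "$dir/oraInst.loc"
  fi
done
if [ "$found" = no ]; then
  echo "No oraInst.loc on this node; nothing to rename."
fi
```

If the loop prints a path, proceed to the rename step below for that node; otherwise move on to the next node.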
For nodes that contain the oraInst.loc file, rename the file and the oraInventory directory. The installer then prompts you to enter a location for a new oraInventory directory.
For example, enter the following commands as root on HP-UX:
# cat /var/opt/oracle/oraInst.loc
inventory_loc=/localfs/app/oracle/oraInventory
inst_group=dba
# mv /var/opt/oracle/oraInst.loc /var/opt/oracle/oraInst.loc.orig
# mv /localfs/app/oracle/oraInventory /localfs/app/oracle/oraInventory.orig
Because the oraInst.loc file and the Oracle Installer Inventory directory are required only during the installation of Oracle software, and not at runtime, renaming them and restoring them later does not affect the behavior of any installed Oracle software on any node. Make sure that the appropriate oraInst.loc file and Oracle Installer Inventory directory are in place before starting Oracle Universal Installer.
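Restoring after installation simply reverses the renames. The sketch below demonstrates this in a scratch directory rather than touching the real system paths; on a real node the paths would be /var/opt/oracle/oraInst.loc on HP-UX (or /etc/oraInst.loc on Linux) and your actual oraInventory location:

```shell
# Demonstrate the restore in a temporary directory (illustrative paths).
base=$(mktemp -d)
echo 'inventory_loc=/localfs/app/oracle/oraInventory' > "$base/oraInst.loc.orig"
mkdir -p "$base/oraInventory.orig"

# Reverse the renames made before installation:
mv "$base/oraInst.loc.orig" "$base/oraInst.loc"
mv "$base/oraInventory.orig" "$base/oraInventory"
ls "$base"
```

After the restore, previously installed Oracle software on that node can again locate its inventory.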
Note: For an OracleAS Disaster Recovery configuration, the correct oraInst.loc file and associated oraInventory directory are required during normal operation, not just during installation.