Oracle® Application Server Installation Guide 10g Release 3 (10.1.3.1.0) for Linux x86 B31013-01
This chapter provides an overview of the high availability configurations supported by Oracle Application Server and instructions for installation.
Contents of this chapter:
Section 6.2, "Requirements for High Availability Configurations"
Section 6.5, "Creating an OracleAS Disaster Recovery Configuration"
This chapter provides only a brief overview of the high availability configurations in Oracle Application Server. For a complete description of the configurations, see the Oracle Application Server High Availability Guide.
Oracle Application Server supports the following types of high availability configurations at installation time. Note that there are multiple variants of each type.
Section 6.1.1, "Active-Active Topologies: OracleAS Clusters"
Section 6.1.2, "Active-Passive Topologies: OracleAS Cold Failover Clusters"
For a quick summary of the high availability configurations, see Section 6.1.4, "Summary of Differences".
Oracle Application Server provides an active-active redundant model for all its components with OracleAS Clusters. In an OracleAS Clusters configuration, two or more Oracle Application Server instances are configured to serve the same workload. These instances can run on the same machine or on different machines. The instances are front-ended by an external load balancer, which directs requests to any of the active instances. Instead of an external load balancer, you can also run a software load balancer to distribute the requests. In a production environment, however, a hardware load balancer is recommended.
Common properties of an OracleAS Clusters configuration include:
Similar instance configuration
The instances need to serve the same workload or applications. Some configuration properties should have similar values across instances so that the instances can deliver the same reply to the same request. Other configuration properties may be instance-specific, such as local host name information.
If you make a configuration change to one instance, you should also make the same change to the other instances in the active-active topology. The "Configuring and Managing Clusters" chapter in the Oracle Containers for J2EE Configuration and Administration Guide lists the files that contain properties that should be replicated.
Independent operation
If one Oracle Application Server instance in an active-active topology fails, the other instances in the cluster continue to serve requests. The load balancer directs requests only to instances that are alive.
Advantages of an OracleAS Clusters configuration include:
Increased availability
An active-active topology is a redundant configuration. Loss of one instance can be tolerated because other instances can continue to serve the same requests.
Increased scalability and performance
Multiple identically-configured instances provide the capability to share a workload among different machines and processes. You can scale the topology by adding new instances as the number of requests increases.
For instructions on creating the OracleAS Clusters configuration, see Section 6.3, "Creating the Active-Active Topology".
Oracle Application Server provides an active-passive model for all its components in OracleAS Cold Failover Clusters. In an OracleAS Cold Failover Cluster topology, two Oracle Application Server instances are configured to serve the same application workload but only one is active at any particular time. The passive instance runs (that is, becomes active) only when the active instance fails. These instances run on nodes that are in a hardware cluster.
Common properties of an OracleAS Cold Failover Cluster topology include:
Hardware cluster
In an OracleAS Cold Failover Cluster topology, you run Oracle Application Server on machines that are in a hardware cluster, with vendor clusterware running on the machines.
You install the Oracle home for the Oracle Application Server instance on storage shared by the machines in the hardware cluster.
The active node in the OracleAS Cold Failover Cluster topology mounts the shared storage so that it has access to the Oracle home. If it fails, the passive instance mounts the shared storage and accesses the same Oracle home.
Virtual hostname
The virtual hostname gives clients a single system view of the Oracle Application Server middle tier. Clients use the virtual hostname to access the Oracle Application Server middle tier.
The virtual hostname is associated with a virtual IP. This name-IP entry must be added to the DNS that the site uses. For example, if the two physical hostnames of the hardware cluster are node1.mycompany.com and node2.mycompany.com, the single view of this cluster can be provided by the virtual hostname apps.mycompany.com. In the DNS, apps maps to a virtual IP address that floats between node1 and node2 via the hardware cluster. Clients access Oracle Application Server using apps.mycompany.com; they do not know which physical node is active and actually servicing a particular request.
You can specify the virtual hostname during installation. See Section 6.4, "Creating the Active-Passive Topology".
Failover procedure
An active-passive configuration also includes a set of scripts and procedures to detect failure of the active instance and fail over to the passive instance while minimizing downtime.
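The failover scripts themselves are supplied by the vendor clusterware and vary by vendor, but the sequence they perform follows a common pattern. The sketch below is only an illustration of that pattern: the device name, mount point, virtual IP, network interface, and Oracle home path are all hypothetical placeholders, and DRY_RUN defaults to on so the script prints its steps instead of executing them.

```shell
#!/bin/sh
# Hypothetical failover sketch for an OracleAS Cold Failover Cluster.
# All device names, paths, and addresses are placeholders; in a real
# deployment the vendor clusterware drives these steps.
DRY_RUN=${DRY_RUN:-1}                      # default: print steps only
SHARED_DEV=${SHARED_DEV:-/dev/sdb1}        # shared storage device
MOUNT_POINT=${MOUNT_POINT:-/oracle/shared} # where the Oracle home lives
VIRTUAL_IP=${VIRTUAL_IP:-192.168.1.100}    # floats to the active node
ORACLE_HOME=$MOUNT_POINT/oraclehome

run() {
  if [ -n "$DRY_RUN" ]; then
    echo "$*"            # print the step instead of executing it
  else
    "$@"
  fi
}

# 1. Mount the shared storage that holds the Oracle home.
run mount "$SHARED_DEV" "$MOUNT_POINT"
# 2. Bring up the virtual IP on this (now active) node.
run ip addr add "$VIRTUAL_IP/24" dev eth0
# 3. Start the Oracle Application Server processes.
run "$ORACLE_HOME/opmn/bin/opmnctl" startall
```

Unset DRY_RUN only when adapting the placeholders to a real cluster; until then the script is a dry-run checklist of the failover order: storage first, then the virtual IP, then the server processes.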
Advantages of an OracleAS Cold Failover Cluster topology include:
Increased availability
If the active instance fails for any reason or must be taken offline, an identically configured passive instance is prepared to take over at any time.
Reduced operating costs
In an active-passive topology, only one set of processes is up and serving requests. Managing the active instance is generally easier than managing an array of active instances.
Application independence
Some applications may not be suited to an active-active topology. This may include applications that rely heavily on application state or on information stored locally. An active-passive topology has only one instance serving requests at any particular time.
For instructions on creating the OracleAS Cold Failover Cluster configuration, see Section 6.4, "Creating the Active-Passive Topology".
OracleAS Disaster Recovery configurations have the following characteristics:
A production site and a standby site that mirrors the production site. Typically, these sites are located some distance from each other to guard against site failures such as floods, fires, or earthquakes. During normal operation, the production site handles all the requests. If the production site goes down, the standby site takes over and handles all the requests.
Each site has all the hardware and software needed to run on its own: nodes for running Oracle Application Server instances, load balancers, and DNS servers.
For installation details, see Section 6.5, "Creating an OracleAS Disaster Recovery Configuration".
Table 6-1 summarizes the differences among the high availability configurations.
Table 6-1 Differences Among the High Availability Configurations

| | OracleAS Cold Failover Cluster | OracleAS Clusters | OracleAS Disaster Recovery |
|---|---|---|---|
| Node configuration | Active-Passive | Active-Active | Active-Passive |
| Hardware cluster | Yes | No | Optional (hardware cluster required only if you installed the OracleAS Infrastructure in an OracleAS Cold Failover Cluster configuration) |
| Virtual hostname | Yes | No | Yes |
| Load balancer | No | Yes | No |
| Shared storage | Yes | No | No |
This section describes the requirements common to all high availability configurations. In addition to these common requirements, each configuration has its own specific requirements. See the individual chapters for details.
Note: You still need to meet the requirements listed in Chapter 2, "Requirements", plus requirements specific to the high availability configuration that you plan to use.
The common requirements are:
Section 6.2.2, "Check That Groups Are Defined Identically on All Nodes"
Section 6.2.4, "Check for Previous Oracle Installations on All Nodes"
You need at least two nodes in a high availability configuration. If a node fails for any reason, the second node takes over.
Check that the /etc/group file on all nodes in the cluster contains the operating system groups that you plan to use. You should have one group for the oraInventory directory, and one or two groups for database administration. The group names and the group IDs must be the same for all nodes.
See Section 2.6, "Operating System Groups" for details.
Check that the oracle operating system user, which you log in as to install Oracle Application Server, has the following properties:

Belongs to the oinstall group and to the osdba group. The oinstall group is for the oraInventory directory, and the osdba group is a database administration group. See Section 2.6, "Operating System Groups" for details.
Has write privileges on remote directories.
Check that all the nodes where you want to install in a high availability configuration do not have existing oraInventory directories.
Details of all Oracle software installations are recorded in the Oracle Installer Inventory directory. Typically, this directory is unique to a node and named oraInventory. The directory path of the Oracle Installer Inventory directory is stored in the oraInst.loc file.

The existence of this file on a node confirms that the node contains some Oracle software installation. Because the high availability configurations require installations on multiple nodes, with Oracle Installer Inventory directories on file systems that may not be accessible from other nodes, the installation instructions in this chapter and subsequent chapters for high availability configurations assume that no Oracle software has previously been installed on any of the nodes used for the high availability configuration. The oraInst.loc file and the Oracle Installer Inventory directory should not exist on any of these nodes prior to these high availability installations.
To check if a node contains an oraInventory directory that could be detected by the installer:
On each node, check for the existence of the oraInst.loc file. This file is stored in the /etc directory.

If a node does not contain this file, then it does not have an oraInventory directory that will be used by the installer. You can check the next node.

For nodes that contain the oraInst.loc file, rename the file and the oraInventory directory. The installer then prompts you to enter a location for a new oraInventory directory.
For example, enter the following commands as root:

# cat /etc/oraInst.loc
inventory_loc=/localfs/app/oracle/oraInventory
inst_group=dba
# mv /etc/oraInst.loc /etc/oraInst.loc.orig
# mv /localfs/app/oracle/oraInventory /localfs/app/oracle/oraInventory.orig
Since the oraInst.loc file and the Oracle Installer Inventory directory are required only during the installation of Oracle software, and not at runtime, renaming them and restoring them later does not affect the behavior of any installed Oracle software on any node. Make sure that the appropriate oraInst.loc file and Oracle Installer Inventory directory are in place before starting the Oracle Universal Installer.
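The check-and-rename procedure above can be scripted. The sketch below is a minimal illustration: it reads the inventory path from the inventory_loc= line and renames both the file and the directory. The ORAINST_LOC variable is parameterized so the logic can be exercised safely against a test path; on a real node it defaults to /etc/oraInst.loc and the function must be run as root.

```shell
# Hedged sketch: rename oraInst.loc and the oraInventory directory it
# points to, so the installer treats the node as a fresh installation.
# ORAINST_LOC is parameterized for safe testing; on a real node it is
# /etc/oraInst.loc and this must run as root.
ORAINST_LOC=${ORAINST_LOC:-/etc/oraInst.loc}

hide_inventory() {
  if [ ! -f "$ORAINST_LOC" ]; then
    echo "no oraInst.loc: nothing to do"
    return 0
  fi
  # Extract the inventory path from the inventory_loc= line.
  inv_dir=$(sed -n 's/^inventory_loc=//p' "$ORAINST_LOC")
  mv "$ORAINST_LOC" "$ORAINST_LOC.orig"
  [ -d "$inv_dir" ] && mv "$inv_dir" "$inv_dir.orig"
  echo "renamed $ORAINST_LOC and $inv_dir"
}
```

After the installation, reverse the two mv commands to restore the original inventory.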
Note: For an OracleAS Disaster Recovery configuration, the correct oraInst.loc file and associated oraInventory directory are required during normal operation, not just during installation.
This section describes how to install Oracle Application Server in an active-active topology with OracleAS Clusters. OracleAS Clusters is one of the high availability environments supported by Oracle Application Server.
Contents of this section:
Section 6.3.2, "OracleAS Clusters in Active-Active Topologies"
Section 6.3.3, "Properties of Oracle Application Server Instances in Active-Active Topologies"
Section 6.3.4, "Installation Steps for Active-Active Topologies"
Section 6.3.5, "Supporting Procedures for Creating the Active-Active Topology"
An active-active topology consists of redundant middle-tier instances that deliver greater scalability and availability than a single instance. Active-active topologies remove the single point of failure that a single instance poses. While a single Oracle Application Server instance leverages the resources of a single host, a cluster of middle-tier instances spans multiple hosts, distributing application execution over a greater number of CPUs. A single Oracle Application Server instance is vulnerable to the failure of its host and operating system, but an active-active topology continues to function despite the loss of an operating system or a host, hiding any such failure from clients.
In active-active topologies, all the instances are active at the same time. This is different from active-passive topologies, where only one instance is active at any time.
The nodes in the active-active topologies are not in a hardware cluster.
Load Balancer Requirements
Active-active topologies use a load balancer to direct requests to one of the Oracle Application Server instances in the topology. In other words, the Oracle Application Server instances are fronted by the load balancer.
You configure the load balancer with virtual server names for HTTP and HTTPS traffic. Clients use the virtual server names in their requests. The load balancer directs requests to an available Oracle Application Server instance.
See the Oracle Application Server High Availability Guide for a list of features that your load balancer should have.
Figures of Active-Active Topologies
The following figures show two active-active topologies. The difference in the topologies is whether you install Oracle HTTP Server and OC4J in the same Oracle home or in separate Oracle homes.
Figure 6-1 shows an active-active topology with Oracle HTTP Server and OC4J in the same Oracle home. Figure 6-2 shows an active-active topology with Oracle HTTP Server and OC4J in separate Oracle homes.
Figure 6-1 Active-Active Topology with Oracle HTTP Server and OC4J in the Same Oracle Home
Figure 6-2 Active-Active Topology with Oracle HTTP Server and OC4J in Separate Oracle Homes
All the Oracle Application Server instances in an active-active topology belong to the same cluster. Oracle HTTP Server forwards application requests only to OC4J instances that belong to the same cluster.
You can cluster instances with OPMN in one of the following ways:
All the instances use the same multicast IP address and port.
All the instances are chained to the same discovery server.
Each instance specifies all other instances in the opmn.xml configuration file.
If the instances run on nodes that are on different subnets, you have to designate a node to be the gateway server, which bridges the instances on the different subnets.
Clustering with OPMN also enables you to use the @cluster parameter in some opmnctl commands. Commands that use the @cluster parameter apply to all instances in the cluster. For example, you can use the @cluster parameter to start all components in all instances in the cluster.
OC4J instances in a cluster have the following features:
OC4J instances have cluster-wide properties as well as instance-specific properties. Cluster-wide properties are properties whose values are identical for all OC4J instances in the cluster. Instance-specific properties are properties that have different values for each OC4J instance. For a list of cluster-wide properties, see the "Configuring and Managing Clusters" chapter in the Oracle Containers for J2EE Configuration and Administration Guide.
If you modify a cluster-wide property in one OC4J instance, make sure that you propagate the change to all other OC4J instances in the cluster.
When you deploy an application to an OC4J instance, you also need to deploy it on all other OC4J instances in the cluster.
The number of OC4J processes is an instance-specific property: it can be different for each OC4J instance, and it must be configured for each Oracle Application Server instance in the cluster. This flexibility lets you tune the process count to the specific hardware capabilities of each host. By default, each OC4J instance is instantiated with a single OC4J process.
For details, see the "Configuring and Managing Clusters" chapter in the Oracle Containers for J2EE Configuration and Administration Guide.
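As an illustration of the instance-specific process count, OPMN controls the number of OC4J processes through the numprocs attribute in each instance's opmn.xml. The fragment below is a sketch only; the component name "home" and the value 3 are illustrative, and the surrounding elements in your opmn.xml may differ:

```xml
<!-- In ORACLE_HOME/opmn/conf/opmn.xml of one instance; each instance
     in the cluster sets its own process count. Values illustrative. -->
<process-type id="home" module-id="OC4J" status="enabled">
  <process-set id="default_group" numprocs="3"/>
</process-type>
```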
Because the load balancer can send a request to any Oracle Application Server instance in the topology, you need to ensure that the instances are configured in the same manner so that clients get the same response regardless of which instance handles the request. This includes the following:
Deploy the same applications on each OC4J instance in the topology.
Ensure that you replicate state and stateful session bean information across OC4J instances so that in the event that an OC4J instance fails, another OC4J instance contains the state information and can continue the session.
Ensure that configuration properties for all the OC4J instances in the topology are identical. These configuration properties are listed in chapter 8, "Configuring and Managing Clusters", in section "Replicating Changes Across a Cluster", in the Oracle Containers for J2EE Configuration and Administration Guide.
To create the topology shown in Figure 6-1 or Figure 6-2, you perform the following steps:
Step 1: Configure the Load Balancer with Virtual Server Names
Step 2: Install Oracle HTTP Server and OC4J and Cluster the Instances using OPMN
Step 3: Cluster the OC4J Components to Create an Application Cluster
The following sections describe the steps in detail.
Step 1 Configure the Load Balancer with Virtual Server Names
Refer to your load balancer documentation for configuration steps. On your load balancer, you need to configure a virtual server name and port for HTTP traffic, and another virtual server name and port for HTTPS traffic. The port numbers for the virtual server names should match the port numbers at which Oracle HTTP Server is listening. Clients will use the virtual server names and ports to access Oracle Application Server instances.
Step 2 Install Oracle HTTP Server and OC4J and Cluster the Instances using OPMN
You can install Oracle HTTP Server and OC4J in the same Oracle home (see Figure 6-1), or in different Oracle homes (see Figure 6-2).
For Oracle Application Server instances that you want to group in the same active-active topology, you need to place them in the same cluster. This enables communication between the Oracle HTTP Server and OC4J instances, and simplifies the management of Oracle Application Server instances. OracleAS Clusters enable you to use the @cluster parameter for the opmnctl command to manage all the instances in the cluster.
You can create clusters using one of the following methods:
Dynamic Discovery Method
In this method, each ONS node within the same subnet announces its presence with a multicast message. The cluster topology map for each node is automatically updated as nodes are added or removed, enabling the cluster to be self-managing.
If you use this method, you should specify the multicast address and port on the Cluster Topology Configuration screen in the installer.
Discovery Server Method
In this method, specific nodes within a cluster are configured to serve as "discovery servers", which maintain the topology map for the cluster; the remaining nodes then connect with one another via this server.
If you use this method, you can define a cluster for OPMN by specifying the names of the Oracle Application Server instances explicitly in the opmn.xml file of each instance. Follow the steps in Section 6.3.5.1, "Setting up Clusters with the Discovery Server Method" after installation.
Gateway Method
This configuration is used to connect topologies separated by firewalls or on different subnets using specified "gateway" nodes.
If you use this method, see the section "Configuring Cross-Topology Gateways" in the Oracle Containers for J2EE Configuration and Administration Guide for configuration details.
You can perform either an integrated installation or a distributed installation.
For Integrated Installations (Oracle HTTP Server and OC4J in the Same Oracle Home)
You install Oracle Application Server on the local storage of each node in the active-active topology.
Perform an advanced installation by following the steps in Section 5.2.3, "Installing J2EE Server and Web Server" so that both Oracle HTTP Server and OC4J will run from the same Oracle home.
During the installation procedure, follow the prompts, ensuring you perform the following:
On the Administration Instance Settings screen:
If you want this node to administer the cluster using Application Server Control, select Configure this as an Administration OC4J instance. In a cluster topology, only one instance should be configured as an Administration OC4J instance. Note that the Administration OC4J instance for the cluster does not have to be the first installed node.
If you do not want this node to administer the cluster, deselect Configure this as an Administration OC4J instance.
If you are using the dynamic discovery method to cluster the Oracle Application Server instances for OPMN, perform the following:
On the Cluster Topology Configuration screen, select Configure this instance to be part of an Oracle Application Server cluster topology. Specify the IP Address and Port for the multicast address shared by all the nodes in the cluster.
Note that the multicast address must be between 224.0.0.1 and 239.255.255.255. If you are installing on the first node in the cluster, you may choose any IP address and port, as long as it falls in the multicast address range.
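If you want to sanity-check a candidate address before entering it in the installer, the multicast range can be recognized from the first octet (224 through 239). The shell sketch below checks only the first octet, which is sufficient for all addresses except the single excluded endpoint 224.0.0.0:

```shell
# Check whether an IPv4 address falls in the multicast range the
# installer accepts (224.0.0.1 - 239.255.255.255). Only the first
# octet is inspected, so the edge case 224.0.0.0 is not rejected.
is_multicast() {
  first_octet=${1%%.*}           # text before the first dot
  case "$first_octet" in
    22[4-9]|23[0-9]) return 0 ;; # 224-239: multicast
    *)               return 1 ;;
  esac
}
```

For example, is_multicast 225.0.0.37 succeeds, while is_multicast 192.168.1.1 fails.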
Note the following:
Set the Oracle home to be on the local storage of each node.
Ensure that the same component uses the same port number in each Oracle Application Server instance in the cluster. For example, ensure that Oracle HTTP Server is listening at the same port number for all instances in the cluster.
To simplify administering the instances, use the same Oracle home path and the same instance name for each node.
If you are using the discovery server method to cluster the Oracle Application Server instances for OPMN, be sure to perform the steps in Section 6.3.5.1, "Setting up Clusters with the Discovery Server Method" after installation.
If you are using the gateway method to cluster the Oracle Application Server instances for OPMN, see the section "Configuring Cross-Topology Gateways" in the Oracle Containers for J2EE Configuration and Administration Guide for configuration details.
For Distributed Installations (Oracle HTTP Server and OC4J in Different Oracle Homes)
You install Oracle Application Server on the local storage of each node in the active-active topology.
For the nodes where you want to run Oracle HTTP Server, follow the steps in Section 5.2.5, "Installing Web Server". For the nodes where you want to run OC4J, follow the steps in Section 5.2.4, "Installing J2EE Server".
During installation, select the following options:
On the Administration Instance Settings screen:
If you want this node to administer the cluster using Application Server Control, select Configure this as an Administration OC4J instance. In a cluster topology, only one instance should be configured as an Administration OC4J instance. Note that the Administration OC4J instance for the cluster does not have to be the first installed node.
If you do not want this node to administer the cluster, deselect Configure this as an Administration OC4J instance.
If you are using the dynamic discovery method to cluster the Oracle Application Server instances for OPMN, perform the following:
If you are installing Oracle HTTP Server, select Configure this HTTP Server instance to be part of an Oracle Application Server cluster on the "Cluster Topology Configuration" screen. Specify the IP Address and Port for the multicast address shared by all the nodes in the cluster.
If you are installing OC4J, select Configure this OC4J instance to be part of an Oracle Application Server cluster topology on the "Cluster Topology Configuration" screen. Specify the IP Address and Port for the multicast address shared by all the nodes in the cluster and select Access this OC4J Instance from a separate Oracle HTTP Server.
Note that the multicast address must be between 224.0.0.1 and 239.255.255.255. If you are installing on the first node in the cluster, you may choose any IP address and port, as long as it falls in the multicast address range.
Note the following:
Set the Oracle home to be on the local storage of each node.
Ensure that the same component uses the same port number in each Oracle Application Server instance in the cluster. For example, ensure that Oracle HTTP Server is listening at the same port number for all instances in the cluster.
To simplify administering the instances, use the same Oracle home path and the same instance name for each node.
If you are using the discovery server method to cluster the Oracle Application Server instances for OPMN, be sure to perform the steps in Section 6.3.5.1, "Setting up Clusters with the Discovery Server Method" after installation.
If you are using the gateway method to cluster the Oracle Application Server instances for OPMN, see the section "Configuring Cross-Topology Gateways" in the Oracle Containers for J2EE Configuration and Administration Guide for configuration details.
Step 3 Cluster the OC4J Components to Create an Application Cluster
You can also cluster the OC4J components within the Oracle Application Server instances. This type of cluster is called an Application Cluster.

Application Clusters provide the following features:
Replication of objects and data contained in an HTTP session or a stateful session Enterprise JavaBean
In-memory replication using multicast or peer-to-peer communication, or persistence of state to a database
Load-balancing of incoming requests across OC4J instances
Transparent failover across applications within the cluster
Application Clusters Defined at the Global Level or Application Level
You can define properties of an application cluster at the global level or at the application level. Properties defined at the global level apply to all applications, but you can override specific properties by defining them at the application level.
To define properties at the global level, you define them in the ORACLE_HOME/j2ee/home/config/application.xml file, which is the configuration file for the global default application.

To define properties at the application level, you define them in the application's orion-application.xml file. When you deploy the application, the file is located in the ORACLE_HOME/j2ee/home/application-deployments/<app-name>/ directory.
Procedure
To create an application cluster at either the global or application level, you perform these steps:
Add an empty <distributable/> tag to the web.xml file for all Web modules that are part of an application configured for clustering.
Specify the mechanism for replicating state and session information between Oracle Application Server instances. You choose one of the following replication mechanisms:
Table 6-2 Application Cluster Replication Mechanisms
| Replication Mechanism | Description |
|---|---|
| Multicast | OC4J instances use a multicast address and port to replicate information between themselves. See Section 6.3.5.2, "Setting up Multicast Replication" for details. |
| Peer-to-peer | Oracle Application Server supports two types of peer-to-peer replication: dynamic and static. See Section 6.3.5.3, "Setting up Peer-to-Peer Replication" for details. |
| Replication to database | State and session information are saved to the database that you specify; the database must be defined in the instance's data source configuration file. See Section 6.3.5.4, "Setting up Replication to a Database" for details. |
Specify how often and which data are replicated. See Section 6.3.5.5, "Setting the Replication Policy" for details.
Specify the number of nodes to replicate the data to. See Section 6.3.5.6, "Specifying the Number of Nodes to Replicate To" for details.
For details, see the "Application Clustering in OC4J" chapter in the Oracle Containers for J2EE Configuration and Administration Guide.
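The <distributable/> tag mentioned in the procedure is an empty element placed at the top level of the Web module's deployment descriptor. A minimal fragment (the other descriptor entries are elided):

```xml
<!-- web.xml: marking a Web module as distributable so its HTTP
     session state participates in cluster replication. -->
<web-app>
  <distributable/>
  <!-- servlet, servlet-mapping, and other entries as usual -->
</web-app>
```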
This section describes some common procedures that you may need to perform to maintain the active-active topology:
Section 6.3.5.1, "Setting up Clusters with the Discovery Server Method"
Section 6.3.5.6, "Specifying the Number of Nodes to Replicate To"
If you do not want to use the multicast method, you can define a cluster by specifying the names of the nodes running the Oracle Application Server instances in the opmn.xml file of each instance.
Example: if you want to cluster four instances (inst1.node1.mycompany.com, inst2.node2.mycompany.com, inst3.node3.mycompany.com, inst4.node4.mycompany.com), you would perform these steps:
Designate at least one of the instances to serve as the "discovery server". The discovery server maintains the topology for the cluster.
This example assumes that inst1.node1.mycompany.com and inst2.node2.mycompany.com will be the discovery servers for the cluster.
In distributed installations (Oracle HTTP Server and OC4J on different Oracle homes), any instance, whether running Oracle HTTP Server or OC4J, can serve as the discovery server.
In the opmn.xml file for all instances in the cluster, specify the nodes that are running the discovery servers (node1.mycompany.com and node2.mycompany.com in the example).

In the example, the opmn.xml file is changed to include the following lines:
<notification-server>
   <topology>
      <discover list="node1.mycompany.com:6201,node2.mycompany.com:6201"/>
   </topology>
   ...
</notification-server>
The 6201 specifies the port number at which the notification server is listening. You can find this value in the opmn.xml file of that instance.
If you have more than one discovery server, you separate them with the comma character.
On all the instances, run "opmnctl reload" to force OPMN to read the updated opmn.xml file:
> ORACLE_HOME/opmn/bin/opmnctl reload
Multicast replication is the default replication type. To set up an application to use multicast replication, you can just add the empty <cluster/> tag to the application's orion-application.xml file or to the global ORACLE_HOME/j2ee/home/config/application.xml file. For example:
<orion-application ... >
   ...
   <cluster/>
</orion-application>
You need to add the <cluster/> tag on all nodes where the application is deployed.
By default, multicast replication uses multicast address 230.230.0.1 and port 45566. If you want to change these values, you specify the desired values in the ip and port attributes of the multicast element. For example, the following snippet shows the ip and port attributes set to customized values:
<orion-application ... >
   ...
   <cluster allow-colocation="false">
      <replication-policy trigger="onShutdown" scope="allAttributes"/>
      <protocol>
         <multicast ip="225.130.0.0" port="45577" bind-addr="226.83.24.10"/>
      </protocol>
   </cluster>
</orion-application>
The multicast address must be between 224.0.1.0 and 239.255.255.255.
Description of other tags and attributes used in the snippet above:

allow-colocation: specifies whether or not application state is replicated to other Oracle Application Server instances running on the same host. The default is true.

trigger and scope: see Section 6.3.5.5, "Setting the Replication Policy".

bind-addr: specifies the IP address of the network interface card (NIC) to bind to. This is useful if the host machine has multiple NICs, each with its own IP address.
Oracle Application Server supports two types of peer-to-peer replication: dynamic and static.
In dynamic peer-to-peer replication, OC4J discovers other OC4J instances through OPMN. You do not have to list the names of the instances in a configuration file.
In static peer-to-peer replication, you list the names of the instances that you want to be involved in the replication.
Dynamic Peer-to-Peer Replication
To specify dynamic peer-to-peer replication, include an empty <opmn-discovery/> tag in the application's orion-application.xml file or in the global ORACLE_HOME/j2ee/home/config/application.xml file. For example:
<orion-application ... >
...
<cluster allow-colocation="false">
<replication-policy trigger="onShutdown" scope="allAttributes"/>
<protocol>
<peer>
<opmn-discovery/>
</peer>
</protocol>
</cluster>
</orion-application>
You defined how OPMN discovers instances in a cluster in step 2, "Install Oracle HTTP Server and OC4J and Cluster the Instances using OPMN".
Static Peer-to-Peer Replication
To specify static peer-to-peer replication, you list the names of the hosts in the <node> element in the application's orion-application.xml file or in the global ORACLE_HOME/j2ee/home/config/application.xml file. For each node, you specify another node in the active-active topology such that all the nodes in the topology are connected in a chain. For example, if you have three Oracle Application Server instances in your topology, node 1 can specify node 2, node 2 can specify node 3, and node 3 can specify node 1.
Example:
On node 1, the <node> tag specifies node 2:
<orion-application ... >
    ...
    <cluster allow-colocation="false">
        <replication-policy trigger="onShutdown" scope="allAttributes"/>
        <protocol>
            <peer start-port="7900" range="10" timeout="6000">
                <node host="node2.mycompany.com" port="7900"/>
            </peer>
        </protocol>
    </cluster>
</orion-application>
On node 2, the <node> tag specifies node 3:
<orion-application ... >
    ...
    <cluster allow-colocation="false">
        <replication-policy trigger="onShutdown" scope="allAttributes"/>
        <protocol>
            <peer start-port="7900" range="10" timeout="6000">
                <node host="node3.mycompany.com" port="7900"/>
            </peer>
        </protocol>
    </cluster>
</orion-application>
On node 3, the <node> tag specifies node 1:
<orion-application ... >
    ...
    <cluster allow-colocation="false">
        <replication-policy trigger="onShutdown" scope="allAttributes"/>
        <protocol>
            <peer start-port="7900" range="10" timeout="6000">
                <node host="node1.mycompany.com" port="7900"/>
            </peer>
        </protocol>
    </cluster>
</orion-application>
Another way of doing this is to have all the nodes specify the same node. In a three-node example, nodes 1 and 2 could both specify node 3, and node 3 could specify either node 1 or node 2.
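As a sketch of this alternative, using the same three example hosts, nodes 1 and 2 would carry identical <node> entries pointing at node 3 (the full <orion-application> wrapper and replication-policy settings are unchanged from the examples above):

```xml
<!-- On node 1 AND on node 2: both specify node 3 -->
<cluster allow-colocation="false">
    <replication-policy trigger="onShutdown" scope="allAttributes"/>
    <protocol>
        <peer start-port="7900" range="10" timeout="6000">
            <node host="node3.mycompany.com" port="7900"/>
        </peer>
    </protocol>
</cluster>

<!-- On node 3: specify either node 1 or node 2, for example node 1 -->
<cluster allow-colocation="false">
    <replication-policy trigger="onShutdown" scope="allAttributes"/>
    <protocol>
        <peer start-port="7900" range="10" timeout="6000">
            <node host="node1.mycompany.com" port="7900"/>
        </peer>
    </protocol>
</cluster>
```

Either arrangement works as long as every node in the topology is reachable through the resulting graph of <node> entries.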
Description of the tags and attributes used in the example above:
start-port: specifies the first port on the local node that Oracle Application Server tries to bind to for peer communication. If this port is already in use, Oracle Application Server increments the port number until it finds an available port. The default is 7800.
timeout: specifies the length of time, in milliseconds, to wait for a response from the specified peer node. The default is 3000 milliseconds.
host: specifies the name of the peer node.
port: specifies the port to use on the specified host (in the host attribute) for peer communication. The default is 7800.
range: specifies the number of times to increment the port specified in the port (not start-port) attribute. The default is 5.
Note the following:
In static peer-to-peer replication, the application's orion-application.xml file is different for each instance. When you deploy your application, make sure that you update each instance's orion-application.xml file accordingly.
Database Replication
In this replication mechanism, the replicated data is saved to a database. You specify the database in the <database> tag in the application's orion-application.xml file or in the global ORACLE_HOME/j2ee/home/config/application.xml file. For example:
<orion-application ... >
    ...
    <cluster allow-colocation="false">
        <replication-policy trigger="onShutdown" scope="allAttributes"/>
        <protocol>
            <database data-source="jdbc/MyOracleDS"/>
        </protocol>
    </cluster>
</orion-application>
The value for the data-source attribute must match the data source's jndi-name as specified in the data-sources.xml file. See the Oracle Containers for J2EE Services Guide for details on creating and using data sources.
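As an illustration of that matching requirement, a corresponding definition in data-sources.xml might look like the following sketch. The pool name, connection URL, and credentials are assumptions for illustration only; only the jndi-name must match the data-source attribute in the <database> tag:

```xml
<!-- Sketch of an OC4J 10.1.3 data-sources.xml entry; URL, user, and
     pool name are illustrative assumptions. -->
<data-sources xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <connection-pool name="MyConnectionPool">
        <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
            user="scott" password="tiger"
            url="jdbc:oracle:thin:@//dbhost.mycompany.com:1521/orcl"/>
    </connection-pool>
    <!-- jndi-name must match data-source="jdbc/MyOracleDS" above -->
    <managed-data-source name="MyOracleDS"
        connection-pool-name="MyConnectionPool"
        jndi-name="jdbc/MyOracleDS"/>
</data-sources>
```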
Attributes in the <replication-policy> tag enable you to specify which data is to be replicated and how frequently the data is replicated.
The trigger attribute
The trigger attribute specifies when replication occurs. Table 6-3 describes supported values for this attribute:
Table 6-3 Values for the trigger Attribute
Value | HttpSession | Stateful Session Bean |
---|---|---|
onSetAttribute | Replicate each change made to an HTTP session attribute at the time the value is modified. From a programmatic standpoint, replication occurs each time setAttribute() is called on the session object. This option can be resource intensive in cases where the session is being extensively modified. | Not applicable. |
onRequestEnd | Queue all changes made to HTTP session attributes, then replicate all changes just before the HTTP response is sent. | Replicate the current state of the bean after each EJB method call. The state is replicated frequently, but offers higher reliability. |
onShutdown | Replicate the current state of the HTTP session whenever the JVM is terminated gracefully, such as with Control-C. State is not replicated if the host is terminated unexpectedly, as in the case of a system crash. Because session state was not previously replicated, all session data is sent across the network at once upon JVM termination, which can impact network performance. This option can also significantly increase the amount of time needed for the JVM to shut down. | Replicate the current state of the bean whenever the JVM is terminated gracefully. State is not replicated if the host is terminated unexpectedly, as in the case of a system crash. Because bean state was not previously replicated, all state data is sent across the network at once upon JVM termination, which can impact network performance. This option may also significantly increase the amount of time needed for the JVM to shut down. |
The scope attribute
The scope attribute specifies which data is replicated. Table 6-4 describes supported values for the attribute:
Table 6-4 Values for the scope Attribute
Value | HttpSession | Stateful Session Bean |
---|---|---|
modifiedAttributes | Replicate only the modified HTTP session attributes. This is the default replication setting for HttpSession. | Not applicable. |
allAttributes | Replicate all attribute values set on the HTTP session. | Replicate all member variable values set on the stateful session bean. This is the default replication setting for stateful session beans. |
To specify the number of nodes to replicate to, use the write-quota attribute of the <cluster> tag. For example, the following snippet specifies that the data is replicated to two other nodes:
<orion-application ... >
    ...
    <cluster allow-colocation="false" write-quota="2">
        <replication-policy trigger="onShutdown" scope="allAttributes"/>
        <protocol>
            <peer>
                <opmn-discovery/>
            </peer>
        </protocol>
    </cluster>
</orion-application>
The default is 1.
Recommendations:
For a two-node active-active topology, set write-quota to 1, so that the data is replicated to the other node.
For topologies with three or more nodes, set write-quota to at least 2 to ensure that the data is replicated to at least two other nodes.
To replicate data to all nodes in the topology, set write-quota to the total number of nodes in the topology. It is possible to write back to the same node if there is another instance running on that node.
The write-quota attribute is not used if you are replicating to a database.
This section describes how to install Oracle Application Server in an active-passive topology with OracleAS Cold Failover Cluster. OracleAS Cold Failover Cluster is one of the high availability environments supported by Oracle Application Server.
Contents of this section:
Section 6.4.2, "Overview of Installation Steps for OracleAS Cold Failover Cluster"
Section 6.4.3, "Preinstallation Steps for OracleAS Cold Failover Cluster"
Section 6.4.4, "OracleAS Cold Failover Cluster: Details of Installation Steps"
An active-passive topology consists of the following:
Two nodes in a hardware cluster
A virtual hostname and IP address
Shared storage, accessible from both nodes
You install the Oracle home on the shared storage. During runtime in an active-passive topology, only one node is active. The other node is passive. The active node mounts the shared storage so that it can access the files and runs all the processes and handles all the requests. Clients access the active node through the virtual hostname. Clients do not need to know the physical hostnames of the nodes in the topology.
If the active node fails for any reason, a failover event occurs and the passive node takes over and becomes the active node. It mounts the shared storage and runs all the processes and handles all the requests. The virtual hostname and IP now point to the passive node. Clients, because they access the nodes using the virtual hostname, do not know that it is the passive node that is servicing their requests.
The nodes need to be in a hardware cluster to enable failover.
Note: Installing the Oracle home on the local storage of each node in the OracleAS Cold Failover Cluster topology is not supported. You have to install it on the shared storage. |
Vendor Clusterware
The two nodes in an active-passive topology are in a hardware cluster, which typically includes some vendor clusterware. For a list of certified clusterware, visit the Oracle Technology Network website (http://www.oracle.com/technology).
These products must be installed on both nodes (active and passive) in the topology.
Figures of Active-Passive Topologies
Figure 6-3 shows a diagram of an active-passive topology with the Oracle Application Server Oracle home installed on the shared storage. The Oracle home contains both Oracle HTTP Server and OC4J. Figure 6-4 shows a distributed active-passive topology, where Oracle HTTP Server and OC4J are installed in separate Oracle homes.
Figure 6-3 Active-Passive Topology with Oracle HTTP Server and OC4J in the Same Oracle Home
Figure 6-4 Active-Passive Topology with Oracle HTTP Server and OC4J in Separate Oracle Homes
Follow the steps in Table 6-5 to create the OracleAS Cold Failover Cluster configuration. If you are installing Oracle HTTP Server and OC4J in the same Oracle Home (Figure 6-3), perform the steps on the hardware cluster. If you are installing Oracle HTTP Server and OC4J in separate Oracle Homes (Figure 6-4), perform each step on both hardware clusters.
Table 6-5 Overview of Installation Steps for OracleAS Cold Failover Cluster
Step | Task | Description |
---|---|---|
1. | Perform Preinstallation Steps | Preinstallation tasks, described in Section 6.4.3, include mapping the virtual hostname and virtual IP address, and setting up a file system that can be mounted from both nodes. |
2. | Set VIRTUAL_HOST_NAME Environment Variable | Set the VIRTUAL_HOST_NAME variable to the virtual hostname. |
3. | Install Oracle Application Server on the Shared Disk | In this step, you run the installer from either node of the hardware cluster to install Oracle HTTP Server and OPMN on the shared disk. |
4. | (optional) Configure the Oracle Application Server Instance for SSL | If you want the Oracle Application Server instance to use SSL, enable SSL in the Oracle Application Server installation. |
5. | (optional) Create a File System on the Shared Disk for OracleAS JMS File-Based Persistence | If you are using OracleAS JMS, create a file system on the shared disk. |
Before installing Oracle Application Server in an OracleAS Cold Failover Cluster, perform these procedures:
Section 6.4.3.1, "Map the Virtual Hostname and Virtual IP Address"
Section 6.4.3.2, "Set Up a File System That Can Be Mounted from Both Nodes"
Note: In addition to the requirements listed in this chapter, ensure that you meet the requirements described in Section 6.2, "Requirements for High Availability Configurations". |
Each node in an OracleAS Cold Failover Cluster configuration is associated with its own physical IP address. In addition, the active node in the cluster is associated with a virtual hostname and virtual IP address. This allows clients to access the OracleAS Cold Failover Cluster using the virtual hostname.
The virtual hostname and virtual IP address can be any valid hostname and IP address in the context of the subnet containing the hardware cluster.
Note: Map the virtual hostname and virtual IP address only to the active node. Do not map the virtual hostname and IP address to both the active and passive nodes at the same time. Only when a failover occurs should you map the virtual hostname and IP address to the passive node, which then becomes the active node. |
Note: Before attempting to complete this procedure, ask the system or network administrator to review all the steps required. The procedure will reconfigure the network settings on the cluster nodes and may vary with differing network implementations. |
The following example configures a virtual hostname called vhost.mydomain.com, with a virtual IP address of 138.1.12.191:
Register the virtual hostname and IP address with DNS for the network.
For example, register the vhost.mydomain.com/138.1.12.191 pair with DNS.
Determine the primary public network interface.
The primary public network interface for Ethernet encapsulation is typically eth0. Use the following command and search for a network interface that has an inet addr value equal to the physical IP address of the node:
/sbin/ifconfig
Find an available index number for the primary public network interface.
For example, if the following is the output of the /sbin/ifconfig command, and eth0 is determined to be the primary public interface in step 2, then eth0:1 is available for an additional IP address:
eth0      Link encap:Ethernet  HWaddr 00:B0:D0:68:B4:3D
          inet addr:130.35.137.46  Bcast:130.35.139.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:608598569 errors:8 dropped:0 overruns:0 frame:8
          TX packets:578257570 errors:111 dropped:0 overruns:0 carrier:111
          collisions:0 txqueuelen:100
          RX bytes:2407934851 (2296.3 Mb)  TX bytes:3386476912 (3229.5 Mb)
          Interrupt:26 Base address:0xe0c0 Memory:fbefc000-fbefc038

eth1      Link encap:Ethernet  HWaddr 00:02:B3:28:80:8C
          inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:781415 errors:0 dropped:0 overruns:0 frame:0
          TX packets:725511 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:280473135 (267.4 Mb)  TX bytes:254651952 (242.8 Mb)
          Interrupt:23 Base address:0xccc0 Memory:fabff000-fabff038

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:114185902 errors:0 dropped:0 overruns:0 frame:0
          TX packets:114185902 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2307872746 (2200.9 Mb)  TX bytes:2307872746 (2200.9 Mb)
Add the virtual IP address to the primary public network interface by running the following command, as the root user, using the available index number from step 3:
/sbin/ifconfig <primary_public_interface>:<available_index> <virtual_ip_address> netmask <netmask_value> up
For example, enter the following command if eth0:1 is available:
/sbin/ifconfig eth0:1 138.1.12.191 up
Check that the virtual IP address is configured correctly:
Use the instructions listed in step 2 to confirm the new primary_public_interface:available_index entry created in step 4.
Try to connect to the node using the virtual hostname and virtual IP address from another node. For example, entering both of the following commands from a different node should provide a login to the node you configured in this procedure:
telnet hostname.domain
telnet ip_address
For example, enter:
telnet vhost.mydomain.com
telnet 138.1.12.191
If the active node fails, then the passive node takes over. If you do not have a clusterware agent to map the virtual IP from the failed node to the passive node, then you have to do it manually. You have to remove the virtual IP mapping from the failed node, and map it to the passive node.
On the failed node, remove the virtual IP address by running the following command as the root user:
/sbin/ifconfig configured_interface down
For example, enter the following command if eth0:1 is configured with the virtual IP address:
/sbin/ifconfig eth0:1 down
Note: Use the commands in step 2 of the previous procedure to confirm that the virtual IP address has been removed. |
On the passive node, add the virtual IP address: follow steps 2 to 5 of the previous procedure to add and confirm the virtual IP address.
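The manual failover sequence above can be sketched as a pair of commands, both run as root. The interface name, IP address, and netmask are the example values used earlier in this section; substitute your own:

```
# On the failed (formerly active) node: release the virtual IP.
/sbin/ifconfig eth0:1 down

# On the passive (now active) node: claim the virtual IP.
/sbin/ifconfig eth0:1 138.1.12.191 netmask 255.255.252.0 up

# From a third machine, confirm that the virtual hostname now
# reaches the new active node, for example:
#   telnet vhost.mydomain.com
```

A clusterware agent would normally automate exactly this pair of operations.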
Although the hardware cluster has shared storage, you need to create a file system on this shared storage such that both nodes of the OracleAS Cold Failover Cluster can mount this file system. You will use this file system for the following directories:
Oracle home directory for the Oracle Application Server instance
The oraInventory directory
For disk space requirements, see Section 2.2, "System Requirements".
If you are running a volume manager on the cluster to manage the shared storage, refer to the volume manager documentation for steps to create a volume. Once a volume is created, you can create the file system on that volume.
If you do not have a volume manager, you can create a file system on the shared disk directly. Ensure that the hardware vendor supports this, that the file system can be mounted from either node of the OracleAS Cold Failover Cluster, and that the file system is repairable from either node if a node fails.
To check that the file system can be mounted from either node, do the following steps:
Set up and mount the file system from node 1.
Unmount the file system from node 1.
Mount the file system from node 2 using the same mount point that you used in step 1.
Unmount it from node 2, and mount it on node 1, because you will be running the installer from node 1.
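The mount check above can be sketched as the following command sequence, run as root. The device name and mount point are assumptions; substitute the values for your shared storage:

```
# On node 1: mount the shared file system and verify access.
mount /dev/sdd1 /mnt/app_shared
# ... verify read/write access, then unmount:
umount /mnt/app_shared

# On node 2: mount the same device at the same mount point, then unmount.
mount /dev/sdd1 /mnt/app_shared
umount /mnt/app_shared

# Back on node 1: mount again, because the installer runs from node 1.
mount /dev/sdd1 /mnt/app_shared
```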
This section lists the steps for installing OracleAS Cold Failover Cluster.
If you are installing Oracle HTTP Server and OC4J in separate Oracle Homes, you need to perform each of these steps on both clusters.
Step 1 Perform Preinstallation Steps
Perform the preinstallation steps listed in Section 6.4.3, "Preinstallation Steps for OracleAS Cold Failover Cluster".
Step 2 Set VIRTUAL_HOST_NAME Environment Variable
Set the VIRTUAL_HOST_NAME environment variable to the virtual hostname on either node of the hardware cluster. You will perform the install from this node onto the shared disk in the next step. To find out more about how to set environment variables, see Section 2.10, "Environment Variables".
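For example, in a Bourne or bash shell the variable can be set as follows. The hostname vhost.mydomain.com is the example virtual hostname from Section 6.4.3.1; substitute your own:

```shell
# Set the variable in the shell from which you will run the installer.
VIRTUAL_HOST_NAME=vhost.mydomain.com
export VIRTUAL_HOST_NAME

# C shell users would instead run:
#   setenv VIRTUAL_HOST_NAME vhost.mydomain.com

# Verify the value before starting the installer:
echo "$VIRTUAL_HOST_NAME"
```

The variable must be set in the same shell session that launches the installer in the next step.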
Step 3 Install Oracle Application Server on the Shared Disk
Install Oracle Application Server on the shared disk of the hardware cluster from the node where you set the VIRTUAL_HOST_NAME environment variable.
For OracleAS Cold Failover Cluster with Oracle HTTP Server and OC4J in the Same Oracle Home
Follow the steps in Section 5.2.3, "Installing J2EE Server and Web Server". During installation, perform the following actions:
On the "Administration Instance Settings" screen, select Configure this as an Administration OC4J instance if you want to configure Application Server Control for administering the OC4J instance. Otherwise, deselect this option.
For OracleAS Cold Failover Cluster with Oracle HTTP Server and OC4J in Separate Oracle Homes
If you are installing on the hardware cluster where you want to run Oracle HTTP Server, follow the steps in Section 5.2.5, "Installing Web Server". During installation, perform the following actions:
If you want to route all requests to OC4J through the Oracle HTTP Server, select Configure this HTTP Server instance to be part of an Oracle Application Server cluster on the "Cluster Topology Configuration" screen. Specify the IP Address and Port for the multicast address shared by all the nodes in the cluster.
If you do not want to route all requests to OC4J through the Oracle HTTP Server, deselect Configure this HTTP Server instance to be part of an Oracle Application Server cluster on the "Cluster Topology Configuration" screen.
If you are installing on the hardware cluster where you want to run OC4J, follow the steps in Section 5.2.4, "Installing J2EE Server". During installation, perform the following actions:
On the "Administration Instance Settings" screen, select Configure this as an Administration OC4J instance if you want to configure Application Server Control for administering the OC4J instance. Otherwise, deselect this option.
If you want to route all requests to OC4J through the Oracle HTTP Server, select Configure this OC4J instance to be part of an Oracle Application Server cluster topology on the "Cluster Topology Configuration" screen. Specify the IP Address and Port for the multicast address shared by all the nodes in the cluster. Select Access this OC4J Instance from a separate Oracle HTTP Server.
If you do not want to route all requests to OC4J through the Oracle HTTP Server, deselect Configure this OC4J instance to be part of an Oracle Application Server cluster topology on the "Cluster Topology Configuration" screen.
Step 4 (optional) Configure the Oracle Application Server Instance for SSL
If you want the Oracle Application Server instance to use SSL, follow the steps in the Oracle Application Server Administrator's Guide.
Step 5 (optional) Create a File System on the Shared Disk for OracleAS JMS File-Based Persistence
If you are using OracleAS JMS with file-based persistence, create a file system on the shared disk for the OracleAS JMS queues, and mount this file system from node 1.
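For example, a queue configured for file-based persistence in ORACLE_HOME/j2ee/home/config/jms.xml could point its persistence file at the shared-disk mount point. The queue name, mount point, and port below are illustrative assumptions:

```xml
<!-- Sketch of an OracleAS JMS queue with file-based persistence;
     /mnt/jms_shared is an assumed shared-disk mount point. -->
<jms-server port="9127">
    <queue name="demoQueue" location="jms/demoQueue"
           persistence-file="/mnt/jms_shared/demoQueue.persistence"/>
</jms-server>
```

Keeping the persistence file on the shared disk is what allows the passive node to recover queued messages after a failover.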
This section describes how to install Oracle Application Server in OracleAS Disaster Recovery configurations. OracleAS Disaster Recovery is one of the high availability environments supported by Oracle Application Server.
Contents of this section:
Section 6.5.2, "Setting up the OracleAS Disaster Recovery Environment"
Section 6.5.3, "Installing Oracle Application Server in an OracleAS Disaster Recovery Environment"
Section 6.5.5, "Patching OracleAS Guard Release 10.1.2.n.n with Release 10.1.3.1.0"
Use the OracleAS Disaster Recovery environment when you want to have two physically separate sites in your environment. One site is the production site, and the other site is the standby site. The production site is active, while the standby site is passive; the standby site becomes active when the production site goes down.
OracleAS Disaster Recovery supports a number of basic topologies for the configuration of the Infrastructure and middle tier on production and standby sites. OracleAS Disaster Recovery supports these basic topologies:
Symmetrical topologies -- strict mirror of the production site with collocated Oracle Identity Management and OracleAS Metadata Repository Infrastructure
Asymmetrical topologies -- simple asymmetric standby topology with collocated Oracle Identity Management and OracleAS Metadata Repository Infrastructure
Separate OracleAS Metadata Repository for OracleAS Portal with collocated Oracle Identity Management and OracleAS Metadata Repository Infrastructure (the Departmental Topology)
Distributed Application OracleAS Metadata Repositories with non-collocated Oracle Identity Management and OracleAS Metadata Repository Infrastructure
Redundant Multiple OracleAS 10.1.3 Home J2EE Topology
Redundant Single OracleAS 10.1.3 Home J2EE Topology Integrated with an Existing Oracle Identity Management 10.1.2.0.2 Topology
In a symmetric topology, each node in the standby site corresponds to a node in the production site. This includes the nodes running both OracleAS Infrastructure and middle tiers. In an asymmetric topology, the number of instances required on the standby site is fewer than the number on the production site; the standby site must have the minimum set of instances required to run your site in the event of a switchover or failover operation. The last two supported topologies are particularly important in OracleAS Release 10.1.3.1.0. See the Oracle Application Server High Availability Guide for a detailed description of these topologies.
As a small variation to this environment, you can set up the OracleAS Infrastructure on the production site in an OracleAS Cold Failover Cluster environment. See Section 6.5.2.4, "If You Want to Use OracleAS Cold Failover Cluster on the Production Site (OracleAS 10.1.2.n.n only)" for details.
For these supported topologies, OracleAS Guard will be installed in every Oracle home on every system that is part of your production and standby topology configured for the OracleAS Disaster Recovery solution.
OracleAS Guard can be installed as a standalone install kit located on OracleAS Companion CD #2. See Section 6.5.4, "Installing the OracleAS 10g (10.1.3.1.0) Standalone Install of OracleAS Guard into Oracle Homes" for more information about when this standalone kit should be installed.
Figure 6-5 shows an example symmetric OracleAS Disaster Recovery environment. Each site has two nodes running middle tiers and a node running OracleAS Infrastructure.
For OracleAS Disaster Recovery to work, data between the production and standby sites must be synchronized so that failover can happen very quickly. Configuration changes done at the production site must be synchronized with the standby site.
You need to synchronize two types of data. The synchronization method depends on the type of data:
Use Oracle Data Guard to synchronize data in the OracleAS Metadata Repository databases on the production and standby sites. You can configure Oracle Data Guard to perform the synchronization.
Use the backup and recovery scripts to synchronize data outside of the database (such as data stored in configuration files).
See the Oracle Application Server High Availability Guide for details on how to use Oracle Data Guard and the backup and recovery scripts.
Figure 6-5 OracleAS Disaster Recovery Environment
Before you can install Oracle Application Server in an OracleAS Disaster Recovery environment, you have to perform these steps:
Section 6.5.2.1, "Ensure Nodes Are Identical at the Operating System Level"
Section 6.5.2.3, "Set Up Identical Hostnames on Both Production and Standby Sites"
Ensure that the nodes are identical with respect to the following items:
The nodes are running the same version of the operating system.
The nodes have the same operating system patches and packages.
You can install Oracle Application Server in the same directory path on all nodes.
The same component must use the same port number on the production and standby sites. For example, if Oracle HTTP Server is using port 80 on the production site, it must also use port 80 on the standby site. To ensure this is the case, create a staticports.ini file for use during installation. This file enables you to specify port numbers for each component. See Section 2.5.3, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.
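A minimal staticports.ini used identically on both sites might look like the following sketch. The key names follow the staticports.ini convention for this release, and the port values are assumptions; see Section 2.5.3 for the full list of supported entries:

```
# staticports.ini (used on both production and standby installs)
Oracle HTTP Server port = 80
Oracle HTTP Server SSL port = 443
```

Pass the same file to the installer on both sites so that every component binds to identical ports.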
The names of the corresponding nodes on the production and standby sites must be identical, so that when you synchronize data between the sites, you do not have to edit the data to fix the hostnames.
For the Infrastructure Nodes
For the node running the infrastructure, set up a virtual name. To do this, specify an alias for the node in the /etc/hosts file.
For example, on the infrastructure node on the production site, the following line in the hosts file sets the alias to asinfra:
138.1.2.111 prodinfra asinfra
On the standby site, the following line sets the node's alias to asinfra:
213.2.2.110 standbyinfra asinfra
When you install OracleAS Infrastructure on the production and standby sites, you specify this alias (asinfra) in the Specify Virtual Hostname screen. The configuration data will then contain this alias for the infrastructure nodes.
For the Middle-Tier Nodes
For the nodes running the middle tiers, you cannot set up aliases as you did for the infrastructure nodes, because the installer does not display the Specify Virtual Hostname screen for middle-tier installations. When installing middle tiers, the installer determines the hostname automatically by calling the gethostname() function. Ensure that, for each middle-tier node on the production site, the corresponding node on the standby site returns the same hostname.
To do this, set up a local, or internal, hostname, which could be different from the public, or external, hostname. You can change the names of the nodes on the standby site to match the names of the corresponding nodes on the production site, or you can change the names of the nodes on both production and standby sites to be the same. This depends on other applications that you might be running on the nodes, and whether changing the node name will affect those applications.
On the nodes whose local names you want to change, reconfigure the node so that the hostname command returns the new local hostname.
Note: The procedure to change the hostname of a system differs between different operating systems. Contact the system administrator of your system to perform this step. Note also that changing the hostname of a system will affect installed software that has a dependency on the previous hostname. Consider the impact of this before changing the hostname. |
Enable the other nodes in the OracleAS Disaster Recovery environment to be able to resolve the node using the new local hostname. You can do this in one of two ways:
Method 1: Set up separate internal DNS servers for the production and standby sites. This configuration allows nodes on each site (production or standby) to resolve hostnames within the site. Above the internal DNS servers are the corporate, or external, DNS servers. The internal DNS servers forward non-authoritative requests to the external DNS servers. The external DNS servers do not know about the existence of the internal DNS servers. See Figure 6-6.
Method 1 Details
Make sure the external DNS names are defined in the external DNS zone. Example:
prodmid1.us.oracle.com     IN A 138.1.2.333
prodmid2.us.oracle.com     IN A 138.1.2.444
prodinf.us.oracle.com      IN A 138.1.2.111
standbymid1.us.oracle.com  IN A 213.2.2.330
standbymid2.us.oracle.com  IN A 213.2.2.331
standbyinf.us.oracle.com   IN A 213.2.2.110
At the production site, create a new zone using a domain name different from your external domain name. To do this, populate the zone data files with entries for each node in the OracleAS Disaster Recovery environment.
For the infrastructure node, use the virtual name or alias.
For the middle-tier nodes, use the node name (the value in /etc/nodename).
The following example uses "asha" as the domain name for the new zone.
asmid1.asha   IN A 138.1.2.333
asmid2.asha   IN A 138.1.2.444
asinfra.asha  IN A 138.1.2.111
Do the same for the standby site. Use the same domain name that you used for the production site.
asmid1.asha   IN A 213.2.2.330
asmid2.asha   IN A 213.2.2.331
asinfra.asha  IN A 213.2.2.110
Configure the DNS resolver to point to the internal DNS servers instead of the external DNS server.
In the /etc/resolv.conf file for each node on the production site, replace the existing name server IP address with the IP address of the internal DNS server for the production site.
Do the same for the nodes on the standby site, but use the IP address of the internal DNS server for the standby site.
Create a separate entry for Oracle Data Guard in the internal DNS servers. This entry is used by Oracle Data Guard to ship redo data to the database on the standby site.
In the next example, the "remote_infra" entry points to the infrastructure node on the standby site. This name is used by the TNS entries on both the production and standby sites so that if a switchover occurs, the entry does not have to be changed.
Figure 6-7 Entry for Oracle Data Guard in the Internal DNS Servers
On the production site, the DNS entries look like this:
asmid1.asha        IN A 138.1.2.333
asmid2.asha        IN A 138.1.2.444
asinfra.asha       IN A 138.1.2.111
remote_infra.asha  IN A 213.2.2.110
On the standby site, the DNS entries look like this:
asmid1.asha       IN A 213.2.2.330
asmid2.asha       IN A 213.2.2.331
asinfra.asha      IN A 213.2.2.110
remote_infra.asha IN A 138.1.2.111
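The remote_infra.asha name would typically be referenced from the Oracle Net configuration used by Oracle Data Guard. The fragment below is only a sketch: the net service name STANDBY_INFRA, the service name asdb, and port 1521 are assumptions for illustration; only the host name comes from the DNS entries above.

```
# tnsnames.ora fragment (sketch; STANDBY_INFRA, asdb, and 1521 are assumed)
STANDBY_INFRA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = remote_infra.asha)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = asdb))
  )
```

Because remote_infra.asha resolves to the opposite site's infrastructure node on each side, the same TNS entry can be used on both sites and does not need to change after a switchover.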
Method 2: Edit the /etc/hosts file on each node on both sites. This method does not involve configuring DNS servers, but you have to maintain the hosts file on each node in the OracleAS Disaster Recovery environment. For example, if an IP address changes, you have to update the files on all the nodes and restart the nodes.
Method 2 Details
On each node on the production site, include these lines in the /etc/hosts file. The IP addresses resolve to nodes on the production site.
Note: In the hosts file, be sure that the line that identifies the current node comes immediately after the localhost definition (the line with the 127.0.0.1 address).
127.0.0.1   localhost
138.1.2.333 asmid1.oracle.com asmid1
138.1.2.444 asmid2.oracle.com asmid2
138.1.2.111 asinfra.oracle.com asinfra
On each node on the standby site, include these lines in the hosts file. The IP addresses resolve to nodes on the standby site.
Note: In the hosts file, be sure that the line that identifies the current node comes immediately after the localhost definition (the line with the 127.0.0.1 address).
127.0.0.1   localhost
213.2.2.330 asmid1.oracle.com asmid1
213.2.2.331 asmid2.oracle.com asmid2
213.2.2.110 asinfra.oracle.com asinfra
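The ordering requirement in the note above can be checked mechanically. The sketch below builds a sample copy of the standby-site hosts file and verifies that the localhost line comes first and the current node's line comes immediately after it; on a real node, point the script at /etc/hosts and set NODE to that machine's short internal hostname.

```shell
# Sketch: verify hosts-file line ordering (sample data, not /etc/hosts)
NODE=asmid1                      # short internal hostname being checked
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
213.2.2.330 asmid1.oracle.com asmid1
213.2.2.331 asmid2.oracle.com asmid2
213.2.2.110 asinfra.oracle.com asinfra
EOF
first=$(awk 'NR==1 {print $2}' /tmp/hosts.sample)    # name on line 1
second=$(awk 'NR==2 {print $3}' /tmp/hosts.sample)   # short name on line 2
if [ "$first" = "localhost" ] && [ "$second" = "$NODE" ]; then
  result=ok
else
  result=bad-order
fi
echo "$result"
```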
Ensure that the "hosts:" line in the /etc/nsswitch.conf file has "files" as the first item:
hosts: files nis dns
The entry specifies the ordering of the name resolution methods. If another method is listed first, the node uses that method to resolve the hostname.
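The same check can be scripted. This sketch inspects a sample copy of the file; on a real node, point CONF at /etc/nsswitch.conf instead.

```shell
# Sketch: confirm "files" is the first method on the hosts: line
CONF=/tmp/nsswitch.sample                 # use /etc/nsswitch.conf on a real node
printf 'hosts: files nis dns\n' > "$CONF"
firstmethod=$(awk '$1 == "hosts:" {print $2}' "$CONF")
echo "$firstmethod"
```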
Note: Restart the nodes after editing these files.
Verifying that the Nodes Resolve the Hostnames Correctly
After making the changes and restarting the nodes, check that the nodes resolve the hostnames properly by running the following commands:
On the middle-tier nodes on both sites, you must set the internal hostname. For example, for the prodmid1 middle tier, set the internal hostname to asmid1 as follows:
prompt> hostname asmid1
On the middle-tier nodes on both sites, run the hostname command. This should return the internal hostname. For example, the command should return "asmid1" if you run it on prodmid1 or standbymid1.
prompt> hostname
asmid1
On each node, ping the other nodes in the environment using the internal hostname as well as the external hostname. The commands should succeed. For example, from the first middle-tier node, prodmid1, run the following commands:
prompt> ping prodinfra        (ping the production infrastructure node)
PING prodinfra: 56 data bytes
64 bytes from prodinfra.oracle.com (138.1.2.111): icmp_seq=0. time=0. ms
^C
prompt> ping asinfra          (ping the production infrastructure node)
PING asinfra: 56 data bytes
64 bytes from asinfra.oracle.com (138.1.2.111): icmp_seq=0. time=0. ms
^C
prompt> ping asmid2           (ping the second production middle-tier node)
PING asmid2: 56 data bytes
64 bytes from asmid2.oracle.com (138.1.2.444): icmp_seq=0. time=0. ms
^C
prompt> ping prodmid2         (ping the second production middle-tier node)
PING prodmid2: 56 data bytes
64 bytes from prodmid2.oracle.com (138.1.2.444): icmp_seq=0. time=0. ms
^C
prompt> ping standbymid1      (ping the first standby middle-tier node)
PING standbymid1: 56 data bytes
64 bytes from standbymid1.oracle.com (213.2.2.330): icmp_seq=0. time=0. ms
^C
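The per-host checks above can be wrapped in a loop that reports one OK or FAIL line per name. This is only a sketch: the host list matches the example topology and should be replaced with your own internal and external names, and a FAIL result means the name did not resolve or did not respond from the node where the script runs.

```shell
# Sketch: ping each hostname once and record OK/FAIL per host
hosts="prodinfra asinfra asmid2 prodmid2 standbymid1"   # example names
report=""
for h in $hosts; do
  if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
    report="$report OK:$h"
  else
    report="$report FAIL:$h"
  fi
done
echo "$report"
```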
Note: You must perform this installation in an OracleAS Release 10.1.2.n.n environment, where n.n represents 0.0 or higher. This information is presented here for informational purposes only.
On the production site of an OracleAS Disaster Recovery system, you can set up the OracleAS Infrastructure to run in an OracleAS Cold Failover Cluster configuration. In this case, you have two nodes in a hardware cluster, and you install the OracleAS Infrastructure on a shared disk. See Chapter 11, "Installing in High Availability Environments: OracleAS Cold Failover Cluster" in the Oracle Application Server Installation Guide 10g Release 2 (10.1.2) Documentation set for details.
Figure 6-8 Infrastructure in an OracleAS Cold Failover Cluster Configuration
To set up OracleAS Cold Failover Cluster in this environment, use the virtual IP address (instead of the physical IP address) for asinfra.asha on the production site. The following example assumes 138.1.2.120 is the virtual IP address.
asmid1.asha IN A 138.1.2.333
asmid2.asha IN A 138.1.2.444
asinfra.asha IN A 138.1.2.120 ; this is a virtual IP address
remote_infra.asha IN A 213.2.2.110
On the standby site, you still use the physical IP address for asinfra.asha, but the remote_infra.asha uses the virtual IP address.
asmid1.asha       IN A 213.2.2.330
asmid2.asha       IN A 213.2.2.331
asinfra.asha      IN A 213.2.2.110 ; physical IP address
remote_infra.asha IN A 138.1.2.120 ; virtual IP address
For OracleAS Release 10.1.3.1.0, you can install only middle tiers on the production and standby sites.
Install Oracle Application Server as follows:
Note: For all of the installations, be sure to use staticports.ini to specify port numbers for the components. See Section 6.5.2.2, "Set Up staticports.ini File".
Install Middle Tiers (OracleAS Release 10.1.3.1.0 only)
Install middle tiers on the production site.
Install middle tiers on the standby site.
Install OracleAS Infrastructure and Middle Tiers (Release 10.1.2.n.n only)
Note: You must perform this installation in an OracleAS Release 10.1.2.n.n environment, where n.n represents 0.0 or higher. This information is presented here for informational purposes only.
Install OracleAS Infrastructure on the production site.
Install OracleAS Infrastructure on the standby site.
Start the OracleAS Infrastructure in each site before installing the middle tiers for that site.
Install middle tiers on the production site.
Install middle tiers on the standby site.
Note: You must perform this installation in an OracleAS Release 10.1.2.n.n environment, where n.n represents 0.0 or higher. This information is presented here for informational purposes only.
In an OracleAS Release 10.1.2.0.0 environment, you must install the Oracle Identity Management and the OracleAS Metadata Repository components of OracleAS Infrastructure on the same node. You cannot distribute the components over multiple nodes. In an OracleAS Release 10.1.2.0.2 environment, you can distribute the components over multiple nodes.
The installation steps are similar to those for OracleAS Cold Failover Cluster. See Section 11.3, "Installing an OracleAS Cold Failover Cluster (Infrastructure) Configuration" in the Oracle Application Server Installation Guide 10g Release 2 (10.1.2) Documentation set for the screen sequence.
Note the following points:
Select Configuration Options screen: be sure to select High Availability and Replication. See Table 11–5, step 2.
Specify Virtual Hostname screen: enter an alias as the virtual address (for example, asinfra.oracle.com). See Table 11–5, step 6.
Depending on your configuration, you can install OracleAS 10.1.3.1.0 middle tiers or OracleAS 10.1.2.n.n middle tiers, where n.n represents 0.0 or higher.
OracleAS Release 10.1.3.1.0
On OracleAS Release 10.1.3.1.0, you can install any type of middle tier that you like:
For installing J2EE Server, see Section 5.2.4, "Installing J2EE Server".
For installing Web Server, see Section 5.2.5, "Installing Web Server".
For installing J2EE Server and Web Server, see Section 5.2.3, "Installing J2EE Server and Web Server".
For installing J2EE Server, Web Server and SOA Suite, see Section 5.2.2, "Installing J2EE Server, Web Server and SOA Suite".
OracleAS Release 10.1.2.n.n
Note: You must perform this installation in an OracleAS Release 10.1.2.n.n environment, where n.n represents 0.0 or higher. This information is presented here for informational purposes only.
On OracleAS Release 10.1.2.n.n, you can install any type of middle tier that you like:
For installing J2EE and Web Cache, see Section 7.9 "Installing J2EE and Web Cache in a Database-Based Farm Repository and with Oracle Identity Management Access" in the Oracle Application Server Installation Guide for 10g Release 2 (10.1.2).
For installing Portal and Wireless or Business Intelligence and Forms, see Section 7.13, "Installing Portal and Wireless or Business Intelligence and Forms".
Note the following points on OracleAS 10.1.2.n.n:
When the installer prompts you to register with Oracle Internet Directory, and asks you for the Oracle Internet Directory hostname, enter the alias of the node running OracleAS Infrastructure (for example, asinfra.oracle.com).
The OracleAS 10g (10.1.3.1.0) standalone install of OracleAS Guard is located on Companion CD Disk 2. This standalone install of OracleAS Guard can be installed in the following environments:
In its own Oracle home, when you are cloning an instance or topology to a new standby system (see the section on standby site cloning in Oracle Application Server High Availability Guide for more information).
In the Oracle database server home for an OracleAS Metadata Repository configuration created using OracleAS Metadata Repository Creation Assistant.
In an OracleAS Disaster Recovery full site upgrade from OracleAS 10g (9.0.4) to OracleAS 10g (10.1.3.1.0) (see the chapter on the OracleAS Disaster Recovery site upgrade procedure in Oracle Application Server High Availability Guide for more information).
In an OracleAS Guard patch upgrade from OracleAS 10g (10.1.2.0.0) to OracleAS 10g (10.1.2.0.2) (see Section 6.5.5, "Patching OracleAS Guard Release 10.1.2.n.n with Release 10.1.3.1.0" for more information).
If this is an upgrade installation of OracleAS Guard, make a copy of your dsa.conf configuration file to save the current settings for your OracleAS Guard environment. After running the OracleAS 10g (10.1.3.1.0) standalone install kit of OracleAS Guard, you can restore your saved dsa.conf configuration file to continue using the same settings in the upgraded OracleAS Guard environment.
To run the OracleAS 10g (10.1.3.1.0) standalone install kit of OracleAS Guard, invoke the installer at the following path:
On UNIX systems:
/Disk2/asg/install/runInstaller
Choose the type of install that you want. Choose Typical for most installations. Choose Custom or Reinstall for upgrading from an older release of OracleAS Guard to the current release.
Enter the oc4jadmin account password to continue the installation.
If you already have an OracleAS Disaster Recovery environment set up using OracleAS Guard Release 10.1.2.n.n (where n.n represents 0.0 or higher), you can patch OracleAS Guard in your environment to take advantage of new features and support for the topologies described in Section 6.5.1, "OracleAS Disaster Recovery: Introduction". To patch your OracleAS Disaster Recovery environment, follow these basic steps:
Stop the OracleAS Guard server in all OracleAS 10.1.2.n.n Oracle homes on both production and standby sites using the following opmnctl command:
On UNIX systems:
<ORACLE_HOME>/opmn/bin/opmnctl stopall
Install the OracleAS 10g (10.1.3.1.0) standalone install of OracleAS Guard into each Oracle home on the production and standby sites.
If multiple Oracle homes exist on the same system, ensure that a different port is configured for each OracleAS Guard server in its dsa.conf configuration file.
Because this is an upgrade installation of OracleAS Guard, make a copy of your dsa.conf configuration file to save the current settings for your OracleAS Guard environment. After running the OracleAS 10g (10.1.3.1.0) standalone install kit of OracleAS Guard, you can restore your saved dsa.conf configuration file to continue using the same settings in the upgraded OracleAS Guard environment.
On UNIX systems:
<ORACLE_HOME>/dsa/dsa.conf
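When several Oracle homes share a machine, a quick script can flag port collisions across their dsa.conf files before the OracleAS Guard servers are started. This is a sketch run against sample files; the "port =" directive form is an assumption for illustration, so check your own dsa.conf for the actual setting name, and point the awk command at the real <ORACLE_HOME>/dsa/dsa.conf paths.

```shell
# Sketch: detect duplicate OracleAS Guard ports across dsa.conf files
# ("port =" is an assumed directive name; sample files stand in for
# the real <ORACLE_HOME>/dsa/dsa.conf paths)
mkdir -p /tmp/home1/dsa /tmp/home2/dsa
printf 'port = 7890\n' > /tmp/home1/dsa/dsa.conf
printf 'port = 7890\n' > /tmp/home2/dsa/dsa.conf
dups=$(awk -F= '$1 ~ /port/ {gsub(/ /,"",$2); print $2}' \
        /tmp/home1/dsa/dsa.conf /tmp/home2/dsa/dsa.conf \
      | sort | uniq -d)
if [ -n "$dups" ]; then echo "conflict on port(s): $dups"; fi
```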
Start the OracleAS Guard server in all OracleAS 10.1.3.1.0 Oracle homes on both production and standby sites using the following opmnctl command:
On UNIX systems:
<ORACLE_HOME>/opmn/bin/opmnctl startall <ORACLE_HOME>/opmn/bin/opmnctl startproc ias-component=ASG
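With several Oracle homes on each site, the two start commands above have to be repeated per home. The loop below is a sketch: the paths in HOMES are assumptions for illustration, and with DRYRUN=1 (the default here) it only prints the commands it would run rather than executing opmnctl.

```shell
# Sketch: run the opmnctl start sequence in every 10.1.3.1.0 Oracle home
HOMES="/u01/app/oracle/mid1 /u01/app/oracle/mid2"   # assumed example paths
DRYRUN=1                                            # unset on a real system
out=""
for oh in $HOMES; do
  for cmd in "startall" "startproc ias-component=ASG"; do
    if [ "$DRYRUN" = 1 ]; then
      out="$out$oh/opmn/bin/opmnctl $cmd\n"         # record, do not execute
    else
      "$oh/opmn/bin/opmnctl" $cmd
    fi
  done
done
printf "%b" "$out"
```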
For information on how to manage your OracleAS Disaster Recovery environment, such as setting up Oracle Data Guard and configuring the OracleAS Metadata Repository database, see the Oracle Application Server High Availability Guide.