Oracle® Application Server 10g High Availability Guide
10g (10.1.2)
Part No. B14003-01

4 Managing and Operating Middle-tier High Availability

This chapter describes how to perform configuration changes and ongoing maintenance for the Oracle Application Server middle-tier.

This chapter covers the following topics:

  • Section 4.1, "Middle-tier High Availability Configuration Overview"

  • Section 4.2, "Using DCM-Managed OracleAS Clusters"

  • Section 4.3, "Availability Considerations for the DCM Configuration Repository"

  • Section 4.4, "Using Oracle Application Server Clusters (OC4J)"

  • Section 4.5, "Using Oracle Application Server Clusters (Portal)"

  • Section 4.6, "Using Oracle Application Server Clusters (Web Cache)"

  • Section 4.7, "Managing OracleAS Cold Failover Cluster (Middle-Tier)"

4.1 Middle-tier High Availability Configuration Overview

Oracle Application Server provides different configuration options to support high availability for the Oracle Application Server middle-tier.

This section covers the following topics:

  • Section 4.1.1, "DCM-Managed Oracle Application Server Clusters"

  • Section 4.1.2, "Manually Managed Oracle Application Server Clusters"

4.1.1 DCM-Managed Oracle Application Server Clusters

When administering a DCM-Managed OracleAS Cluster, an administrator uses either Application Server Control Console or dcmctl commands to manage and configure common configuration information on one Oracle Application Server instance. DCM then propagates and replicates the common configuration information across all Oracle Application Server instances within the DCM-Managed OracleAS Cluster. The common configuration information for the cluster is called the cluster-wide configuration.


Note:

Some configuration information can be configured individually for each Oracle Application Server instance within a cluster; these configuration options are also called instance-specific parameters.

Each application server instance in a DCM-Managed OracleAS Cluster has the same base configuration. The base configuration contains the cluster-wide configuration and excludes instance-specific parameters.

4.1.2 Manually Managed Oracle Application Server Clusters

In a Manually Managed OracleAS Cluster, it is the administrator's responsibility to synchronize the configuration of the Oracle Application Server instances within the OracleAS Cluster.

4.2 Using DCM-Managed OracleAS Clusters

This section describes how to create and use a DCM-Managed OracleAS Cluster and covers the following topics:

  • Section 4.2.1, "Creating DCM-Managed OracleAS Clusters"

  • Section 4.2.2, "Adding Instances To DCM-Managed OracleAS Clusters"

  • Section 4.2.3, "Removing Instances from DCM-Managed OracleAS Clusters"

  • Section 4.2.4, "Starting, Stopping, and Deleting DCM-Managed OracleAS Clusters"

  • Section 4.2.5, "Configuring Oracle HTTP Server Options for DCM-Managed OracleAS Clusters"

  • Section 4.2.6, "Understanding DCM-Managed OracleAS Cluster Membership"


See Also:

Distributed Configuration Management Administrator's Guide for information on dcmctl commands

4.2.1 Creating DCM-Managed OracleAS Clusters

An OracleAS Farm contains a collection of Oracle Application Server instances. In an OracleAS Farm, you can view a list of all application server instances when you start Application Server Control Console. The application server instances shown in the Standalone Instances area on the Application Server Control Console Farm Home Page are available to be added to DCM-Managed OracleAS Clusters.

Each OracleAS Farm uses either a File-Based Repository or a Database-Based Repository. The steps for associating an application server instance with an OracleAS Farm differ depending on the type of repository.

This section covers the following:

  • Section 4.2.1.1, "Associating An Instance With An OracleAS Database-based Farm"

  • Section 4.2.1.2, "Associating An Instance With An OracleAS File-based Farm"

  • Section 4.2.1.3, "Using the Application Server Control Console Create Cluster Page"


Note:

This section covers procedures for clusterable middle-tier instances that are part of an OracleAS Farm. For purposes of this section, a clusterable instance is a middle-tier instance for which the dcmctl isClusterable command returns the value true.
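
For example, from the instance's Oracle home you can check clusterability as follows; as described above, the output is true for a clusterable instance:

    % dcmctl isclusterable
    true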

4.2.1.1 Associating An Instance With An OracleAS Database-based Farm

If you have not already done so during the Oracle Application Server installation process, you can associate an application server instance with an OracleAS Database-based Farm as follows:

  1. Navigate to the Application Server Control Console Instance Home Page.

  2. In the Home area, select the Infrastructure link and follow the instructions for associating an application server instance with an Oracle Application Server Infrastructure.

4.2.1.2 Associating An Instance With An OracleAS File-based Farm

This section covers the following topics:

4.2.1.2.1 Creating An OracleAS File-based Farm Repository Host

You can instruct the Oracle Application Server installer to create an OracleAS File-based Farm when you install Oracle Application Server. If you did not create an OracleAS File-based Farm during installation, then you can create the OracleAS File-based Farm with the following steps.

  1. Using the Application Server Control Console for the instance that you want to use as the repository host, select the Infrastructure link to navigate to the Infrastructure page. If a repository is not configured, then the Farm Repository field shows "Not Configured", as shown in Figure 4-1.

    Figure 4-1 Application Server Control Console Farm Repository Management


  2. On the Infrastructure page, in the OracleAS Farm Repository Management area, select the Configure button to start the Configure OracleAS Farm Repository wizard. The appropriate host name appears under Configure Oracle Farm Repository Source. Select the New file-based repository button and select Next, as shown in Figure 4-2.

    Figure 4-2 Application Server Control Console Create Repository Wizard Step 1


  3. The wizard skips to Step 4 of 4, Validation, as shown in Figure 4-3.

    Figure 4-3 Application Server Control Console Create Repository Wizard Step 4


  4. Select Finish, and Oracle Application Server creates the OracleAS File-based Farm.

  5. When the wizard completes, note the Repository ID shown in the OracleAS Farm Repository Management area on the Infrastructure page. You need to use the Repository ID to add instances to the OracleAS File-based Farm.

When you go to the Application Server Control Console Home page, notice that the OC4J instance and the Oracle HTTP Server are shown as stopped, and that the page now includes a Farm link in the General area.

4.2.1.2.2 Adding Instances To An OracleAS File-based Farm

To add standalone application server instances to an OracleAS File-based Farm, perform the following steps:

  1. Obtain the Repository ID for the OracleAS File-based Farm that you want to join. To find the Repository ID, on any Oracle Application Server instance that uses the OracleAS File-based Farm, select the Infrastructure link, and check the value of the File-based Repository ID field in the OracleAS Farm Repository Management area.

  2. Switch to the Application Server Control Console for the standalone instance that you want to add to the OracleAS File-based Farm and select the Infrastructure link. If a repository is not configured, then the Farm Repository field shows "Not Configured", as shown in Figure 4-1.

  3. Select the Configure button to start the Configure OracleAS Farm Repository wizard. The repository creation wizard appears, as shown in Figure 4-2. The appropriate host name appears in the OracleAS Instance field under the Configure Oracle Farm Repository Source area.

  4. Select the Existing file-based repository button and select Next. The repository creation wizard then brings up the Location page, Step 3 of 4, as shown in Figure 4-4.

    Figure 4-4 Application Server Control Console Add Instance to Farm


  5. Enter the repository ID for the Repository Host and select Next.

  6. The wizard displays the Step 4 of 4 page, Configure OracleAS Farm Repository Validation. Select Finish. When the wizard completes, the standalone instance joins the OracleAS File-based Farm.

  7. After the wizard completes, you return to the Application Server Control Console Infrastructure page.

4.2.1.3 Using the Application Server Control Console Create Cluster Page

Using the Application Server Control Console Farm Home Page, you can create a new DCM-Managed OracleAS Cluster.

From the Farm Home page, create a new DCM-Managed OracleAS Cluster as follows:

  1. Select the Farm link to navigate to the Farm Home Page.


    Note:

    Application Server Control Console shows the Farm Home Page when an Oracle Application Server instance is part of a farm.

  2. Select the Create Cluster button. Application Server Control Console displays the Create Cluster page as shown in Figure 4-5.

    Figure 4-5 Create Cluster Page


  3. Enter a name for the new cluster and click Create. Each new cluster name within the farm must be unique.

    A confirmation page appears.

  4. Click OK to return to the Farm Home Page.

After you create a cluster, the Farm Home page shows it in the Clusters area. A newly created cluster is empty and does not include any application server instances; use the Join Cluster button on the Farm Home page to add application server instances to the cluster.

4.2.2 Adding Instances To DCM-Managed OracleAS Clusters

To add application server instances to a DCM-Managed OracleAS Cluster, do the following:

  1. Navigate to the Farm Home Page. To navigate to the Farm Home page from an Oracle Application Server instance Home page, select the link next to the Farm field in the General area on the Home page.


    Note:

    If the Farm field is not shown, then the instance is not part of a Farm and you will need to associate the standalone instance with a Farm.

  2. From the Standalone Instances section, select the radio button for the application server instance that you want to add to a cluster.

  3. Click Join Cluster.

    Figure 4-6 shows the Join Cluster page.

    Figure 4-6 Join Cluster Page


  4. Select the radio button of the cluster that you want the application server instance to join.

  5. Click Join. OracleAS adds the application server instance to the selected cluster and then displays a confirmation page.

  6. Click OK to return to the Farm Home Page.

Repeat these steps for each additional standalone application server instance you want to join the cluster.

Note the following when adding application server instances to a DCM-Managed OracleAS Cluster:

  1. When adding application server instances to a DCM-Managed OracleAS Cluster, the order that you add instances is significant. The first application server instance that joins the DCM-Managed OracleAS Cluster is used as the base configuration for all additional application server instances that join the cluster. The base configuration includes all cluster-wide configuration information. It does not include instance-specific parameters.

  2. After the first application server instance joins the DCM-Managed OracleAS Cluster, the base configuration overwrites existing cluster-wide configuration information for subsequent application server instances that join the cluster. Each additional application server instance, after the first, that joins the cluster inherits the base configuration specified for the first application server instance that joins the cluster.

  3. Before an application server instance joins a DCM-Managed OracleAS Cluster, Application Server Control Console stops the instance. You can restart the application server instance by selecting the cluster link, selecting the appropriate instance from within the cluster, and then selecting the Start button.

  4. An application server instance is removed from the Standalone Instances area when the instance joins a DCM-Managed OracleAS Cluster.

  5. To add multiple standalone application server instances to a DCM-Managed OracleAS Cluster in a single operation, use the dcmctl joinCluster command (a sketch follows the note below).

  6. When an application server instance contains certain Oracle Application Server components, it is not clusterable. Use the dcmctl isClusterable command to test if an application server instance is clusterable. If the application server instance is not clusterable, then Application Server Control Console returns an error when you attempt to add the instance to a DCM-Managed OracleAS Cluster.

  7. To be clusterable, all application server instances that are to be members of a DCM-Managed OracleAS Cluster must be installed on the same operating system flavor. For example, different UNIX variants can be clustered together, but they cannot be clustered with Windows systems.


Note:

When adding instances to an OracleAS File-based Farm, where the instances will be added to a DCM-Managed OracleAS Cluster, there is no known fixed upper limit on the number of instances; a DCM-Managed OracleAS Cluster of 12 instances has been tested successfully.
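
For illustration, joining an instance from the command line might look like the following sketch. Running dcmctl from the Oracle home of the instance to be added, and supplying the cluster name (cluster1 here is hypothetical) with the -cl option used elsewhere in this chapter, are assumptions; see the Distributed Configuration Management Administrator's Guide for the exact syntax:

    % cd $ORACLE_HOME/dcm/bin
    % dcmctl joincluster -cl cluster1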

4.2.3 Removing Instances from DCM-Managed OracleAS Clusters

To remove an application server instance from a cluster, do the following:

  1. Select the cluster in which you are interested on the Farm Home Page. This brings you to the cluster page.

  2. Select the radio button of the application server instance to remove from the cluster and click Remove.

To remove multiple application server instances from a cluster, repeat these steps for each instance.

Note the following when removing application server instances from a DCM-Managed OracleAS Cluster:

  • Before an application server instance leaves a cluster, Application Server Control Console stops the instance. After the operation completes, you can restart the application server instance from the Standalone Instances area of the Farm Home Page.

  • The dcmctl leaveCluster command removes one application server instance from the cluster at each invocation.

  • When the last application server instance leaves a cluster, cluster-wide configuration information associated with the cluster is removed. The cluster is now empty and the base configuration is not set. Subsequently, Oracle Application Server uses the first application server instance that joins the cluster as the base configuration for all additional application server instances that join the cluster.

  • You can remove an application server instance from the cluster at any time. The first instance to join a cluster does not have special properties. The base configuration is created from the first instance to join the cluster, but this instance can be removed from the cluster in the same manner as the other instances.

4.2.4 Starting, Stopping, and Deleting DCM-Managed OracleAS Clusters

Figure 4-7 shows the Application Server Control Console Farm Home Page, including two clusters, cluster1 and cluster2.

Figure 4-7 Oracle Application Server 10g Farm Page


Table 4-1 lists the cluster control options available on the Farm Home Page.

Table 4-1  Oracle Application Server Farm Page Options

  • To start all application server instances in a DCM-Managed OracleAS Cluster, select the radio button next to the cluster and click Start.

  • To restart all application server instances in a DCM-Managed OracleAS Cluster, select the radio button next to the cluster and click Restart.

  • To stop all application server instances in a DCM-Managed OracleAS Cluster, select the radio button next to the cluster and click Stop.

  • To delete a DCM-Managed OracleAS Cluster, including any application server instances still included in the cluster (the instances are removed from the cluster and become standalone instances in the farm), select the radio button next to the cluster and click Delete.
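
As a command-line sketch of the same control operations, OPMN's cluster scope (used later in this chapter) can address every instance in a cluster at once; the cluster name is hypothetical, and applying the scope to startproc is an assumption:

    % opmnctl @cluster:cluster1 startproc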

4.2.5 Configuring Oracle HTTP Server Options for DCM-Managed OracleAS Clusters

This section describes Oracle HTTP Server options for DCM-Managed Oracle Application Server Clusters.

This section covers the following:

  • Section 4.2.5.1, "Using and Configuring mod_oc4j Load Balancing"

  • Section 4.2.5.2, "Configuring Oracle HTTP Server Instance-Specific Parameters"

  • Section 4.2.5.3, "Configuring mod_plsql With Real Application Clusters"

4.2.5.1 Using and Configuring mod_oc4j Load Balancing

With DCM-Managed OracleAS Clusters, the Oracle HTTP Server module mod_oc4j load balances requests to OC4J processes. Oracle HTTP Server, through mod_oc4j configuration options, supports different load balancing policies. By specifying load balancing policies, DCM-Managed OracleAS Clusters provide performance benefits along with failover and high availability, depending on the network topology and host machine capabilities.

By default, mod_oc4j uses weights to select the node to forward a request to. Each node has a default weight of 1. A node's weight is taken as a ratio relative to the weights of the other available nodes, and defines the number of requests the node should service compared to the other nodes in the DCM-Managed OracleAS Cluster. Once a node is selected to service a particular request, mod_oc4j by default uses the roundrobin policy to select among the OC4J processes on that node. If an incoming request belongs to an established session, the request is forwarded to the same node and the same OC4J process that started the session.

The mod_oc4j load balancing policies do not take into account the number of OC4J processes running on a node when calculating which node to send a request to. Node selection is based on the configured weight for the node, and its availability.

To modify the mod_oc4j load balancing policy, administrators use the Oc4jSelectMethod and Oc4jRoutingWeight configuration directives in the mod_oc4j.conf file.

Using Application Server Control Console, configure the mod_oc4j.conf file as follows:

  1. Select the HTTP_Server component from the System Components area of an instance home page.

  2. Select the Administration link on the HTTP_Server page.

  3. Select the Advanced Server Properties link on the HTTP_Server page Administration page.

  4. On the Advanced Server Properties page, select the mod_oc4j.conf link from the Configuration Files area.

  5. On the Edit mod_oc4j.conf page, within the <IfModule mod_oc4j.c> section, specify the Oc4jSelectMethod and Oc4jRoutingWeight directives to select the desired load balancing option (see the example following the note below).


Note:

If you do not use Application Server Control Console, you can edit mod_oc4j.conf and use the dcmctl updateConfig command to propagate changes to other mod_oc4j.conf files across a DCM-Managed OracleAS Cluster as follows:
% dcmctl updateconfig -ct ohs
% opmnctl @cluster:<cluster_name> restartproc ias-component=HTTP_Server process-type=HTTP_Server

Where cluster_name is the name of the cluster.

The opmnctl restartproc command is required for the changes to take effect across all the instances in the cluster.
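
For illustration, the following mod_oc4j.conf fragment selects nodes using weighted round-robin; the node names and weights are hypothetical:

    <IfModule mod_oc4j.c>
        Oc4jSelectMethod roundrobin:weighted
        Oc4jRoutingWeight node1 3
        Oc4jRoutingWeight node2 1
    </IfModule>

With these weights, node1 services roughly three requests for every one serviced by node2, subject to node availability.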


4.2.5.2 Configuring Oracle HTTP Server Instance-Specific Parameters

You can modify the Oracle HTTP Server ports and listening addresses on the Server Properties Page, which can be accessed from the Oracle HTTP Server Home Page. You can modify the virtual host information by selecting a virtual host from the Virtual Hosts section on the Oracle HTTP Server Home Page.

Table 4-3 shows the Oracle HTTP Server instance-specific parameters.

4.2.5.3 Configuring mod_plsql With Real Application Clusters

This section covers the following:

  • Section 4.2.5.3.1, "Configuring Detection and Cleanup of Dead Connections"

  • Section 4.2.5.3.2, "Using Oracle Directory for Lookups"

4.2.5.3.1 Configuring Detection and Cleanup of Dead Connections

When using Oracle HTTP Server with the mod_plsql module, if a database becomes unavailable, the connections to that database need to be detected and cleaned up. This section explains how to configure mod_plsql to detect and clean up dead connections.

The mod_plsql module maintains a pool of connections to the database and reuses established connections for subsequent requests. If there is no response from a database connection, mod_plsql detects this case, discards the dead connection, and creates a new database connection for subsequent requests.

By default, when a Real Application Clusters node or a database instance goes down and mod_plsql has previously pooled connections to that node or instance, the first mod_plsql request that uses a dead connection from its pool results in an HTTP 503 failure response sent to the end user. The mod_plsql module processes this failure and uses it to trigger detection and removal of all dead connections in the connection pool: it pings all pooled connections that were created before the failure response. The ping is performed when the next request that uses a pooled connection is processed. If the ping fails, the database connection is discarded, and a new connection is created and used.

Setting the PlsqlConnectionValidation parameter to Automatic causes the mod_plsql module to test all pooled database connections that were created before a failed request. This is the default configuration.

Setting the PlsqlConnectionValidation parameter to AlwaysValidate causes mod_plsql to test all pooled database connections before issuing any request. Although the AlwaysValidate configuration option ensures greater availability, it also introduces additional performance overhead.

You can specify the timeout period for mod_plsql to test a bad database connection in a connection pool: the PlsqlConnectionTimeout parameter specifies the maximum time mod_plsql waits for the test request to complete before it assumes that a connection is not usable.
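
As a sketch, both parameters are set per DAD in the mod_plsql DAD configuration file (dads.conf in a typical installation); the timeout value shown is illustrative:

    PlsqlConnectionValidation  Automatic
    PlsqlConnectionTimeout     10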


See Also:

Oracle Application Server mod_plsql User's Guide

4.2.5.3.2 Using Oracle Directory for Lookups

Oracle Net clients can use a Directory Server to look up connect descriptors. At the beginning of a request, the client sends a connect identifier to the Directory Server, where it is resolved into a connect descriptor.

The advantage of using a Directory Server is that connection information for a server can be centralized. If the connection information changes, because of either a port change or a host change, the new connection information needs to be updated only once, in the Directory Server, and all Oracle Net clients using this connection method can then connect to the new host.


See Also:

Oracle Database Net Services Administrator's Guide for instructions on configuring Directory Naming.
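
As a minimal client-side sketch of directory naming, assuming Oracle Internet Directory as the directory server and hypothetical host and context values:

    # sqlnet.ora: resolve names through the directory first, then tnsnames.ora
    NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES)

    # ldap.ora: directory server location and default administrative context
    DIRECTORY_SERVERS = (oid.mycompany.com:389:636)
    DEFAULT_ADMIN_CONTEXT = "dc=mycompany,dc=com"
    DIRECTORY_SERVER_TYPE = OID

With this in place, a client can connect using a simple connect identifier, which the directory resolves to the full connect descriptor.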

4.2.6 Understanding DCM-Managed OracleAS Cluster Membership

After a DCM-Managed OracleAS Cluster is created, you can add Oracle Application Server Instances to it. This section describes DCM-Managed OracleAS Cluster configuration and the characteristics of clusterable Oracle Application Server Instances.

This section covers the following topics:

  • Section 4.2.6.1, "How the Common Configuration is Established"

  • Section 4.2.6.2, "Parameters Excluded from the Common Configuration: Instance-Specific Parameters"

4.2.6.1 How the Common Configuration is Established

The order in which the Oracle Application Server Instances are added to the DCM-Managed OracleAS Cluster is significant. The common configuration that will be replicated across the DCM-Managed OracleAS Cluster is established by the first Oracle Application Server Instance added to the cluster. The configuration of the first Oracle Application Server Instance added is inherited by all Oracle Application Server Instances that subsequently join the DCM-Managed OracleAS Cluster.

The common configuration includes all cluster-wide configuration information, namely DCM-Managed OracleAS Cluster and Oracle Application Server Instance attributes, such as the components configured. For example, if the first Oracle Application Server Instance to join the cluster has four OC4J instances, then the common configuration includes those four OC4J instances and the applications deployed on them. Instances that subsequently join the DCM-Managed OracleAS Cluster replicate the OC4J instances and their deployed applications. (In addition, when an Oracle Application Server Instance joins the DCM-Managed OracleAS Cluster, DCM removes any OC4J components that do not match the common configuration.) Furthermore, changes to one Oracle Application Server Instance in the DCM-Managed OracleAS Cluster, such as adding or removing OC4J instances, are replicated across the DCM-Managed OracleAS Cluster; the components configured are part of the replicated, cluster-wide common configuration.

When the last Oracle Application Server Instance leaves a DCM-Managed OracleAS Cluster, the DCM-Managed OracleAS Cluster becomes empty, and the next Oracle Application Server Instance to join the DCM-Managed OracleAS Cluster provides the new common configuration for the cluster.

4.2.6.2 Parameters Excluded from the Common Configuration: Instance-Specific Parameters

Some parameters only apply to a given Oracle Application Server Instance or computer; these parameters are instance-specific parameters. DCM does not propagate instance-specific parameters to the Oracle Application Server Instances in a DCM-Managed OracleAS Cluster. When you change an instance-specific parameter, if you want the change to apply across the DCM-Managed OracleAS Cluster, you must apply the change individually to each appropriate Oracle Application Server Instance.

Table 4-2 OC4J Instance-Specific Parameters

  • island definitions: Specific to an Oracle Application Server Instance. Stateful OC4J applications that need to replicate state require that all of the islands in each OC4J instance across the DCM-Managed OracleAS Cluster have the same name.

  • number of processes: Specific to a computer. You may want to tune this parameter according to the computer's capabilities.

  • command-line options: Specific to a computer.

  • port numbers for RMI, JMS, and AJP communication: Specific to a computer.

Table 4-3 Oracle HTTP Server Instance-Specific Parameters

  • ApacheVirtualHost: Specific to a computer.

  • Listen: Specific to a computer. This directive binds the server to specific addresses or ports.

  • OpmnHostPort: Specific to a computer.

  • Port: Specific to a computer. This directive specifies the port to which the standalone server listens.

  • User: Specific to a computer.

  • Group: Specific to a computer.

  • NameVirtualHost: Specific to a computer. This directive specifies the IP address on which the server receives requests for a name-based virtual host. This directive can also specify a port.

  • ServerName: Specific to a computer. This directive specifies the host name that the server should return when creating redirection URLs. This directive is used if gethostbyname does not work on the local host. You can also use it if you want the server to return a DNS alias as a host name (for example, www.abccompany.com).

  • PerlBlob: Specific to a computer.

Table 4-4 OPMN Instance-Specific Parameters

  • All configuration for the notification server (opmn/notification-server): Specific to a computer.

  • The process-manager elements log-file and process-module: Specific to an Oracle Application Server Instance.

  • The ias_instance attributes id, ORACLE_HOME, and ORACLE_CONFIG_HOME: Specific to an Oracle Application Server Instance.

  • The following elements and attributes of opmn/process-manager/ias-instance/ias-component/process-type: port.range, start, stop, ping, restart, and process-set. Specific to an Oracle Application Server Instance. Although instance-specific, these elements and attributes have default configurations (the configurations are not propagated, but retrieved from the repository). The MissingLocalValuePolicy flag indicates that an element or attribute has a default value:

    MissingLocalValuePolicy="UseRepositoryValue"

  • The following opmn/process-manager/ias-instance attributes and elements:

    module-data/category/data[id='config-file']
    ias-component/module-data/category/data[id='config-file']
    ias-component/process-type/module-data/category/data[id='config-file']
    ias-component/process-type/process-set/module-data/category/data[id='config-file']
    environment
    ias-component/environment
    ias-component/process-type/environment
    ias-component/process-type/process-set/environment

  • All other components whose id is not HTTP_Server or OC4J in opmn/process-manager/ias-instance/ias-component:

    [id='dcm-daemon']
    [id='WebCache']
    [id='OID']
    [id='IASPT']
    [id='wireless']
    [id='Discoverer']
    [id='LogLoader']
    [id='Custom']

Most of the HTTP_Server and OC4J parameters are cluster-wide; only the parameters shown here are instance-specific.

In general:

  • Any data element whose id is config-file is instance-specific.

  • Any environment element is instance-specific.

    See Also: Appendix B, "Troubleshooting DCM", in the Distributed Configuration Management Administrator's Guide for a complete opmn.xml file that shows the hierarchy of these elements and attributes.


4.3 Availability Considerations for the DCM Configuration Repository

This section covers availability considerations for the DCM configuration repository. It includes the following topics:

  • Section 4.3.1, "Availability Considerations for DCM-Managed OracleAS Cluster (Database)"

  • Section 4.3.2, "Availability Considerations for DCM-Managed OracleAS Cluster (File-based)"


Note:

The availability of the configuration repository only affects the Oracle Application Server configuration and administration services. It does not affect the availability of the system for handling requests, or availability of the applications running in a DCM-Managed OracleAS Cluster.

4.3.1 Availability Considerations for DCM-Managed OracleAS Cluster (Database)

This section covers availability considerations for the DCM configuration repository when using DCM-Managed OracleAS Clusters with an OracleAS Database-based Farm.

Using an OracleAS Database-based Farm with a database that runs Oracle Real Application Clusters (RAC), or another database-level high availability solution, protects the system by providing high availability, scalability, and redundancy during failures of the DCM configuration repository database.


See Also:

The Oracle Database High Availability Architecture and Best Practices guide for a description of Oracle Database high availability solutions.

4.3.2 Availability Considerations for DCM-Managed OracleAS Cluster (File-based)

Using an OracleAS File-based Farm, the DCM configuration repository resides on one Oracle Application Server instance at any time. A failure of the host that contains the DCM configuration repository requires a manual failover, performed by relocating the repository to another host.

This section covers availability considerations for the DCM configuration repository when using DCM-Managed OracleAS Clusters with an OracleAS File-based Farm.


Note:

The information in this section does not apply to a DCM-Managed Oracle Application Server Cluster that uses an OracleAS Database-based Farm (with the repository type database).

4.3.2.1 Selecting the Instance to Use for an OracleAS File-based Farm Repository Host

An important consideration when using DCM-Managed OracleAS Clusters with an OracleAS File-based Farm is determining which Oracle Application Server instance is the repository host.

Consider the following when selecting the repository host for an OracleAS File-based Farm:

  • When the repository host instance is temporarily unavailable, a DCM-Managed OracleAS Cluster that uses an OracleAS File-based Farm still runs normally, but no configuration information can be updated.

  • Because the repository host instance stores and manages the DCM-Managed OracleAS Cluster configuration information in its file system, the repository host instance should use mirrored or RAID disks; disk mirroring improves the availability of the DCM-Managed OracleAS Cluster.

  • When the repository host instance is not available, read-only configuration operations are not affected on any Oracle Application Server instances that are running (the OracleAS Farm cluster-wide configuration information is distributed and managed through local Java Object Cache).

  • When the repository host instance is not available, operations that attempt to change configuration information in the file-based repository will generate an error. These operations must be delayed until the repository host instance is available, or until the repository host instance is relocated to another application server instance within the OracleAS File-based Farm.

4.3.2.2 Protecting Against The Loss of a Repository Host

In an OracleAS File-based Farm, one instance in the farm is designated as the repository host. The repository host holds configuration information for all instances in the OracleAS File-based Farm. Access to the repository host is required for all configuration changes (write operations) for instances in the OracleAS File-based Farm. However, instances have local configuration caches for performing read operations while the configuration is not changing.

In the event of the loss of the repository host, any other instance in the OracleAS File-based Farm can take over as the new repository host, provided an exported copy of the old repository is available. We recommend that you make regular backups of the repository host and save the backups on a separate system.

4.3.2.3 Impact of Repository Host Unavailability

When the repository host is unavailable, only read-only operations are allowed; no configuration changes can be made. If an operation that requires updates to the repository is attempted, such as the updateConfig command, dcmctl reports a message such as the following:

ADMN-100205
Base Exception:
The DCM repository is not currently available. The OracleAS 10g instance, "myserver.mycompany.com", is using a cached copy of the repository information. This operation will update the repository, therefore the repository must be available.

If the repository host is permanently down, or unavailable for the long term, then the repository should be relocated to another host. If the restored repository is not recent, local instance archives can be applied to bring each instance up to a newer state.

4.3.2.4 Impact of Non-Repository Host Unavailability

When instances in a DCM-Managed OracleAS Cluster other than the repository host instance are down, all other instances can function properly. If an instance experiences a short-term outage, it automatically updates its configuration information when it becomes available again.

If an instance is permanently lost, this has no effect on the other instances in the OracleAS File-based Farm. However, to maintain consistency, you must delete all records pertaining to the lost instance.

To delete configuration information for a lost instance, use the following command:

dcmctl destroyInstance

4.3.2.5 Updating and Checking the State of Local Configuration

It is important that all configuration changes complete successfully and that all instances in a cluster are "In Sync": the local configuration information must match the information stored in the repository. DCM does not detect manual changes to configuration files, and such changes can leave the instances in a cluster with an In Sync status of false.

Use the following dcmctl command to return a list of all managed components with their In Sync status:

dcmctl getState -cl cluster_name

An In Sync status of true means that the local configuration information for a component is the same as the information stored in the repository.

If you need to update the file-based repository with changed local information, use the dcmctl updateConfig command and then verify the result with getState, as follows:

dcmctl updateconfig
dcmctl getstate

Use the command resyncInstance to update local information with information from the repository. For example,

dcmctl resyncinstance

By default, this command updates configuration information only for components whose In Sync status is false. Use the -force option to update all components, regardless of their In Sync status.
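
For example, to refresh every component from the repository regardless of its In Sync status, using the option described above:

    dcmctl resyncinstance -force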

4.3.2.6 Performing Administration on a DCM-Managed OracleAS Cluster

With a DCM-Managed OracleAS Cluster that uses an OracleAS File-based Farm and runs on multiple hosts with sufficient resources, you can perform administrative tasks during planned downtimes while the cluster continues to handle requests. This section describes how to relocate the repository host in a DCM-Managed OracleAS Cluster while continuing to handle requests.

These procedures are useful for performing administrative tasks on a DCM-Managed OracleAS Cluster, such as the following:

  • Relocating the repository in order to decommission the repository host node.

  • Applying required patches to the DCM-Managed OracleAS Cluster.

  • Applying system upgrades, changes, or patches that require a system restart for a host in the DCM-Managed OracleAS Cluster.


Note:

Using the procedures outlined in this section, only administration capabilities are lost during a planned downtime.

Use the following steps to relocate the repository host in a DCM-Managed OracleAS Cluster.

  1. Issue the following DCM command, on UNIX systems:

    cd $ORACLE_HOME/dcm/bin
    dcmctl exportRepository -f file
    
    

    On Windows systems:

    cd %ORACLE_HOME%\dcm\bin
    dcmctl exportRepository -f file
    
    

    Note:

    After this step, do not perform configuration or administration commands that would change the configuration. Otherwise those changes will not be copied when the repository file is imported to the new repository host.

  2. Stop the administrative system, including Enterprise Manager and the DCM daemon, on each instance of the OracleAS File-based Farm except the instance that is going to be the new repository host.

    On UNIX systems use the following commands on each instance in the cluster:

    $ORACLE_HOME/bin/emctl stop iasconsole
    $ORACLE_HOME/opmn/bin/opmnctl stopproc ias-component=dcm-daemon
    
    

    On Windows systems use the following commands on each instance in the cluster:

    %ORACLE_HOME%\bin\emctl stop iasconsole
    %ORACLE_HOME%\opmn\bin\opmnctl stopproc ias-component=dcm-daemon
    

    At this point, the DCM-Managed OracleAS Cluster can still handle requests.

  3. Import the saved repository on the host that is to be the repository host instance.

    On UNIX systems, use the following commands:

    cd $ORACLE_HOME/dcm/bin/
    dcmctl importRepository -file file_name
    
    

    On Windows systems, use the following commands:

    cd %ORACLE_HOME%\dcm\bin\
    dcmctl importRepository -file file_name
    
    

    where file_name is the name of the file you specified in the exportRepository command.

    While importRepository is active, the DCM-Managed OracleAS Cluster can still handle requests.


    Note:

    The importRepository command issues a prompt specifying that the system currently hosting the repository must be shut down. However, only the dcm-daemon on the system currently hosting the repository must be shut down, not the entire system.

  4. Use the following command to start all components on the new repository host. Do not perform administrative functions at this time.

    On UNIX systems:

    $ORACLE_HOME/opmn/bin/opmnctl startall
    
    

    On Windows systems:

    %ORACLE_HOME%\opmn\bin\opmnctl startall
    
    
  5. On the system that was the repository host, indicate that the instance no longer hosts the repository by issuing the following command:

    dcmctl repositoryRelocated
    
    
  6. Start Application Server Control Console on the new repository host instance. The repository has now been relocated, and the new repository instance now handles requests.

    On UNIX systems:

    $ORACLE_HOME/bin/emctl start iasconsole
    
    

    On Windows systems:

    %ORACLE_HOME%\bin\emctl start iasconsole
    
    
  7. Shut down the Oracle Application Server instance associated with the old repository host, using the following commands:

    On UNIX systems:

    $ORACLE_HOME/opmn/bin/opmnctl stopall
    
    

    On Windows systems:

    %ORACLE_HOME%\opmn\bin\opmnctl stopall
    
    

You can now perform the required administrative tasks on the old repository host system, such as the following:

  • Applying required patches to the repository host system in the DCM-Managed OracleAS Cluster.

  • Decommissioning the node.

  • Applying system upgrades, changes, or patches that require a system restart for the DCM-Managed OracleAS Cluster.

After completing the administrative tasks on the system that was the repository host, if you want to switch the repository back to that host, perform these steps again.

4.3.2.7 Best Practices for Repository Backups

When you export repository files and archives, keep the files in known locations and back up the exports and archives regularly. It is also recommended that exported repositories be available to non-repository instances, not only as a backup but also for availability: if the repository instance becomes unavailable, a new instance can become the repository host only if an exported repository file is available.

Oracle Application Server does not provide an automated repository backup procedure, so to ensure that you can recover from a loss of configuration data, you need to put a repository backup plan in place. Perform repository backups frequently and on a regular basis, and always after any configuration change or any topology change that adds or removes instances.

Send repository backups to different nodes that are available to other nodes in the OracleAS File-based Farm.

4.3.2.8 Best Practices for Managing Instances In OracleAS File-based Farms

When an instance joins or leaves an OracleAS File-based Farm, all managed processes on that instance are shut down. If you want the instance to be available, restart it after performing the leave or join operation.

It is recommended that you make a backup of the local configuration before either leaving or joining an OracleAS File-based Farm. For example, use the following commands to create and export an archive:

dcmctl createarchive -arch myarchive -comment "Archive before leaving xyz farm"
dcmctl exportarchive -arch myarchive -f /archives/myarchive

Archives are portable across OracleAS File-based Farms. When an instance joins a new farm, it can apply archives created on a previous farm.
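
As a sketch of restoring such an archive after joining the new farm, the importarchive and applyarchiveto command names below are assumptions modeled on the createarchive/exportarchive pattern above; verify the exact syntax in the Distributed Configuration Management Administrator's Guide:

    dcmctl importarchive -arch myarchive -f /archives/myarchive
    dcmctl applyarchiveto -arch myarchive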

4.4 Using Oracle Application Server Clusters (OC4J)

This section describes Oracle Application Server Cluster (OC4J) configuration and the use of Oracle Application Server Cluster (OC4J) with DCM-Managed OracleAS Clusters.

Using Oracle Application Server Cluster (OC4J) allows Web applications to replicate state and provides high availability and failover for applications that run under OC4J. You can configure this feature without using a DCM-Managed OracleAS Cluster; however, using the two features together simplifies and improves manageability and high availability. This section assumes that you are using both OracleAS Cluster (OC4J) and a DCM-Managed OracleAS Cluster.

This section covers the following:

  • Section 4.4.1, "Overview of OracleAS Cluster (OC4J) Configuration"

  • Section 4.4.2, "Cluster-Wide Configuration Changes and Modifying OC4J Instances"

  • Section 4.4.3, "Configuring OC4J Instance-Specific Parameters"

4.4.1 Overview of OracleAS Cluster (OC4J) Configuration

Configuring Oracle Application Server Cluster (OC4J) allows Web applications to replicate state, and provides for high availability and failover for applications that run on OC4J. After application server instances join a DCM-Managed OracleAS Cluster, the application server instances, and the OC4J instances have the following properties:

  • Each application server instance has the same cluster-wide configuration. When you use Application Server Control Console or dcmctl to modify any cluster-wide OC4J parameters, the modifications are propagated to all application server instances in the cluster. To make a cluster-wide OC4J configuration change, you change the configuration parameters on a single application server instance; Oracle Application Server then propagates the modification to all other application server instances within the cluster.

  • When you modify any instance-specific parameters on an OC4J instance that is part of a DCM-Managed OracleAS Cluster, the change is not propagated across the DCM-Managed OracleAS Cluster. Changes to instance-specific parameters apply only to the application server instance where the change is made. Because different hosts running application server instances can have different capabilities, such as total system memory, it may be appropriate for the OC4J processes within an OC4J instance to run with different configuration options.

Table 4-5 provides a summary of the OC4J instance-specific parameters. Other OC4J parameters are cluster-wide parameters and are replicated across DCM-Managed OracleAS Clusters.

Table 4-5  OC4J Instance-Specific Parameters Summary for DCM-Managed OracleAS Cluster

  • islands definitions: While you want to keep the names of islands consistent across the application server instances, the definition of the islands and the number of OC4J processes associated with each island is configured on each instance; the Oracle Application Server configuration management system does not replicate this configuration across the DCM-Managed OracleAS Cluster.

    Note: State is replicated in OC4J islands with the same name across application boundaries and across the cluster. To assure high availability with stateful applications, the OC4J island names must be the same in each OC4J instance across the cluster.

  • number of OC4J processes: Like island definitions, this is configured on each instance, and DCM does not replicate the configuration across the DCM-Managed OracleAS Cluster. On different hosts you can tune the number of OC4J processes specified to run per island to match the host's capabilities.

  • port numbers: The RMI, JMS, and AJP port numbers can be different for each host.

  • command line options: The command line options you use can be different for each host.

4.4.2 Cluster-Wide Configuration Changes and Modifying OC4J Instances

This section covers the following topics:

  • Section 4.4.2.1, "Creating or Deleting OC4J Instances In An OracleAS Cluster (OC4J)"

  • Section 4.4.2.2, "Deploying Applications On An OracleAS Cluster (OC4J)"

  • Section 4.4.2.3, "Configuring Web Application State Replication With OracleAS Cluster (OC4J)"

  • Section 4.4.2.4, "Configuring EJB Application State Replication With OracleAS Cluster (OC4J-EJB)"

  • Section 4.4.2.5, "Configuring Stateful Session Bean Replication for OracleAS Cluster (OC4J-EJB)s"

4.4.2.1 Creating or Deleting OC4J Instances In An OracleAS Cluster (OC4J)

You can create a new OC4J instance on any application server instance within a DCM-Managed OracleAS Cluster, and the OC4J instance is propagated to all application server instances across the cluster.

To create an OC4J instance, do the following:

  1. Navigate to any application server instance within the DCM-Managed Oracle Application Server Cluster.

  2. Select Create OC4J Instance under the System Components area. This brings up the Create OC4J instance page.

  3. Enter a name in the OC4J Instance name field.

  4. Select Create.

  5. Oracle Application Server creates the instance, and DCM then propagates the new OC4J instance across the DCM-Managed OracleAS Cluster.

A new OC4J instance is created with the name you provided. This OC4J instance shows up on each application server instance across the cluster, in the System Components section.

To delete an OC4J instance, select the checkbox next to the OC4J instance you wish to delete, then select Delete OC4J Instance. The Oracle Application Server DCM system propagates the OC4J removal across the cluster.

4.4.2.2 Deploying Applications On An OracleAS Cluster (OC4J)

In a DCM-Managed OracleAS Cluster, when you deploy an application to one application server instance, the application is propagated to all application server instances across the cluster.

To deploy an application across a cluster, do the following:

  1. Select the cluster you want to deploy the application to.

  2. Select any application server instance from within the cluster.

  3. Select an OC4J instance on the application server instance where you want to deploy the application.

  4. Deploy the application to the OC4J instance using either Application Server Control Console or dcmctl commands.

  5. The Distributed Configuration Management system then propagates the application across the DCM-Managed Oracle Application Server Cluster.


    See Also:

    Oracle Application Server Containers for J2EE User's Guide for complete information on deploying applications to an OC4J instance.

4.4.2.3 Configuring Web Application State Replication With OracleAS Cluster (OC4J)

To ensure that Oracle Application Server maintains the state of stateful Web applications across a DCM-Managed OracleAS Cluster, you need to configure state replication for those Web applications.

To configure state replication for stateful Web applications, do the following:

  1. Select the Administration link on the OC4J Home Page.

  2. Select the Replication Properties link in the Instance Properties area.

  3. Scroll down to the Web Applications section. Figure 4-8 shows this section.

    Figure 4-8 Web State Replication Configuration


  4. Select the Replicate session state checkbox.

    Optionally, you can provide the multicast host IP address and port number. If you do not provide them, the address defaults to host IP address 230.0.0.1 and port number 9127. The host IP address must be in the range 224.0.0.2 through 239.255.255.255. Do not use the same multicast address for both HTTP and EJB replication.


    Note:

    When choosing a multicast address, ensure that the address does not collide with the addresses listed in

    http://www.iana.org/assignments/multicast-addresses

    Also, if the low-order 23 bits of an address match the low-order 23 bits of an address in the local network control block, 224.0.0.0 through 224.0.0.255, a collision may occur at the hardware level, because IPv4 multicast addresses are mapped to Ethernet MAC addresses using only the low-order 23 bits. For example, 225.0.0.1 and 224.0.0.1 map to the same multicast MAC address (01:00:5E:00:00:01). To avoid this problem, choose an address whose low-order 23 bits do not match any address in this range.


  5. Add the <distributable/> tag to all web.xml files in all Web applications. If the Web application is serializable, you must add this tag to the web.xml file.

    The following shows an example of this tag added to web.xml:

    <web-app>
      <distributable/>
      <servlet>
      ...
      </servlet>
    </web-app>
    

Note:

In order for sessions to be replicated to a just-started instance that joins a running cluster (for example, one where sessions are already being replicated between instances), the Web module in the application maintaining the session must be configured with the load-on-startup flag set to true. This is a cluster-wide configuration parameter. See Figure 4-9 for details on setting this flag.

Figure 4-9 Application Server Control Console Website Properties Page For Setting Load On Startup


4.4.2.4 Configuring EJB Application State Replication With OracleAS Cluster (OC4J-EJB)

To create an EJB cluster, also known as an OracleAS Cluster (OC4J-EJB), you specify the OC4J instances that are to be involved in the cluster, configure each of them with the same multicast address, username, and password, and deploy the EJB that is to be clustered to each of the nodes in the cluster.

EJBs involved in an OracleAS Cluster (OC4J-EJB) cannot be sub-grouped into islands; instead, all EJBs within the cluster form one group. Also, only session beans are clustered.

The state of all beans is replicated at the end of every method call to all nodes in the cluster using a multicast topic. Each node included in the OracleAS Cluster (OC4J-EJB) is configured to use the same multicast address.

The concepts for understanding how EJB object state is replicated within a cluster are described in the Oracle Application Server Containers for J2EE Enterprise JavaBeans Developer's Guide.

To configure EJB replication, do the following:

  1. Select the Administration link on the OC4J Home Page.

  2. Select the Replication Properties link in the Instance Properties area.

  3. In the EJB Applications section, select the Replicate State checkbox.

    Figure 4-10 shows this section.

    Figure 4-10 EJB State Replication Configuration


  4. Provide the username and password that the node uses to authenticate itself to other hosts in the OracleAS Cluster (OC4J-EJB). If the username and password are different on other hosts in the cluster, the hosts fail to communicate. You can have multiple username and password combinations within a multicast address; the nodes that share a username/password combination are considered a distinct cluster.

    Optionally, you can provide the multicast host IP address and port number. If you do not provide them, the address defaults to host IP address 230.0.0.1 and port number 9127. The host IP address must be in the range 224.0.0.2 through 239.255.255.255. Do not use the same multicast address for both Web application and EJB replication.


    Note:

    When choosing a multicast address, ensure that the address does not collide with the addresses listed in

    http://www.iana.org/assignments/multicast-addresses

    Also, if the low-order 23 bits of an address match the low-order 23 bits of an address in the local network control block, 224.0.0.0 through 224.0.0.255, a collision may occur. To avoid this problem, choose an address whose low-order 23 bits do not match any address in this range.


  5. Configure the type of EJB replication within the orion-ejb-jar.xml file inside the JAR file. See "Configuring Stateful Session Bean Replication for OracleAS Cluster (OC4J-EJB)s" for full details. You can configure this in the orion-ejb-jar.xml file before deployment, or add it through the Application Server Control Console screens after deployment. To add it after deployment, drill down to the JAR file from the application page.

4.4.2.5 Configuring Stateful Session Bean Replication for OracleAS Cluster (OC4J-EJB)s

For stateful session beans, you may have to modify the orion-ejb-jar.xml file to add the state replication configuration. Because you configure the replication type of a stateful session bean within the bean deployment descriptor, each bean can use a different type of replication.

Stateful session beans require state to be replicated among nodes. In fact, stateful session beans must send all of their state between the nodes, which can have a noticeable effect on performance. The following replication modes let you decide how to manage the performance cost of replication:

4.4.2.5.1 End of Call Replication

The state of the stateful session bean is replicated to all nodes in the cluster, with the same multicast address, at the end of each EJB method call. If a node loses power, then the state has already been replicated.

To use end of call replication, set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "EndOfCall".

For example,

<session-deployment replication="EndOfCall" .../>
4.4.2.5.2 JVM Termination Replication

The state of the stateful session bean is replicated to only one other node in the cluster, with the same multicast address, when the JVM is terminating. This is the most performant option, because the state is replicated only once. However, it is not very reliable for the following reasons:

  • The state is not replicated if the power is shut off unexpectedly. The JVM termination replication mode does not guarantee state replication in the case of lost power.

  • The state of the bean exists only on a single node at any time; the depth of failure is equal to one node.

To use JVM termination replication, set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "VMTermination".

For example,

<session-deployment replication="VMTermination" .../>

4.4.3 Configuring OC4J Instance-Specific Parameters

This section covers the instance-specific parameters that are not replicated across DCM-Managed OracleAS Clusters.

This section covers the following:

  • Section 4.4.3.1, "Configuring OC4J Islands and OC4J Processes"

  • Section 4.4.3.2, "Configuring Port Numbers and Command Line Options"

4.4.3.1 Configuring OC4J Islands and OC4J Processes

To provide a redundant environment and to support high availability using DCM-Managed OracleAS Clusters, you need to configure multiple OC4J processes within each OC4J instance.

In a DCM-Managed OracleAS Cluster, state is replicated in OC4J islands with the same name, within OC4J instances and across instances in the cluster. To assure high availability with stateful applications, OC4J island names within an OC4J instance must be the same in the corresponding OC4J instances across the DCM-Managed OracleAS Cluster. It is the administrator's responsibility to make sure that island names match wherever session state replication is needed in a DCM-Managed OracleAS Cluster.

The number of OC4J processes on an OC4J instance within a DCM-Managed OracleAS Cluster is an instance-specific parameter since different hosts running application server instances in the DCM-Managed OracleAS Cluster could each have different capabilities, such as total system memory. Thus, it could be appropriate for a DCM-Managed OracleAS Cluster to contain application server instances that each run different numbers of OC4J processes within an OC4J instance.

To modify OC4J islands and the number of processes each OC4J island contains, do the following:

  1. Select the Administration link on the OC4J Home Page of the application server instance of interest in the DCM-Managed OracleAS Cluster.

  2. Select Server Properties in the Instance Properties area.

  3. Scroll down to the Multiple VM Configuration section. This section defines the islands and the number of OC4J processes that should be started on this application server instance in each island.

    Figure 4-11 displays the Multiple VM Configuration Islands section.

    Figure 4-11 OC4J instance Island and Number of Processes Configuration


  4. Create any islands for this OC4J instance within the cluster by clicking Add Another Row. Supply a name for each island in the Island ID field, and specify how many OC4J processes should be started within each island in the Number of Processes field. (A configuration sketch showing where these settings are stored follows this procedure.)
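Behind the Console, the island layout for an OC4J instance is recorded in that instance's opmn.xml file. The following is a hedged sketch, assuming a hypothetical OC4J instance named home with two islands; exact attributes can vary, so prefer the Console procedure above for making changes:

<process-type id="home" module-id="OC4J" status="enabled">
  <!-- Each process-set is one island; numprocs is the Number of Processes value -->
  <process-set id="default_island" numprocs="2"/>
  <process-set id="sales_island" numprocs="1"/>
</process-type>

For session state replication, the island names (the process-set ids here) must match in the corresponding OC4J instances across the cluster.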

4.4.3.2 Configuring Port Numbers and Command Line Options

OC4J port numbers and command line options are also instance-specific parameters. Figure 4-12 shows the section of the Server Properties page where you can modify these ports and set command line options.

To modify OC4J ports or the command line options, do the following:

  1. Select the Administration link on the OC4J Home Page of the application server instance of interest in the cluster.

  2. Select Server Properties in the Instance Properties area.

  3. Scroll down to the Multiple VM Configuration section. This section defines the ports and the command line options for OC4J and for the JVM that runs OC4J processes.

Figure 4-12 shows the Ports and Command line options areas on the Server Properties page.

Figure 4-12 OC4J Ports and Command Line Options Configuration

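Like island definitions, these settings are stored per instance in opmn.xml. The following hedged sketch assumes the same hypothetical OC4J instance named home; the port ids, ranges, and JVM options shown are illustrative only:

<process-type id="home" module-id="OC4J" status="enabled">
  <!-- Instance-specific listener port ranges -->
  <port id="ajp" range="3301-3400"/>
  <port id="rmi" range="3201-3300"/>
  <port id="jms" range="3701-3800"/>
  <module-data>
    <category id="start-parameters">
      <!-- Command line options passed to the JVM that runs OC4J -->
      <data id="java-options" value="-server -Xmx512m"/>
    </category>
  </module-data>
</process-type>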

4.5 Using Oracle Application Server Clusters (Portal)

Configuring OracleAS Portal for high availability involves a number of Oracle Application Server components, including the following:

4.6 Using Oracle Application Server Clusters (Web Cache)

You can configure multiple instances of Oracle Application Server Web Cache to run as independent caches, with no interaction with one another. However, to increase the availability and scalability of your Web cache, you can configure multiple instances of OracleAS Web Cache to run as members of a cache cluster, also called OracleAS Cluster (Web Cache). A cache cluster is a loosely coupled collection of cooperating OracleAS Web Cache instances working together to provide a single logical cache.

4.7 Managing OracleAS Cold Failover Cluster (Middle-Tier)

This section provides instructions for managing Oracle Application Server Cold Failover Cluster (Middle-Tier). Using OracleAS Cold Failover Cluster (Middle-Tier) reduces the cost of a highly available system, as compared to an active-active middle-tier system. In addition, some applications may not function properly in an active-active OracleAS Cluster environment (for example, an application that relies on queuing or other synchronous methods). In this case, an OracleAS Cold Failover Cluster (Middle-Tier) provides high availability using the existing application without modifications.

This section covers the following topics:

4.7.1 Managing Configuration and Deployment for OracleAS Cold Failover Cluster (Middle-Tier)

Any application deployment or configuration change needs to be applied to both nodes of the OracleAS Cold Failover Cluster (Middle-Tier). This is the responsibility of the administrator managing the OracleAS Cold Failover Cluster (Middle-Tier) environment.

This section covers the following:

4.7.1.1 Configuration and Deployment Changes for OracleAS Cold Failover Cluster (Middle-Tier)

Any applications deployed and any configuration changes made to the middle-tier installation must be made on both nodes of the cold failover cluster. The administrator managing the environment needs to ensure this.

In an OracleAS Cold Failover Cluster (Middle-Tier), application deployment works as in any other middle-tier environment. For the J2EE installation of OC4J and Oracle HTTP Server, application deployment is like any other multiple middle-tier environment: the passive node can be brought up during the deployment phase and the application deployed on it, and applications are deployed on the active node in the same way. (A hedged deployment sketch follows.)
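For illustration, one way to perform the deployment on each node is with dcmctl. The application name, archive path, and OC4J instance name below are hypothetical, and the exact option syntax should be verified against the Distributed Configuration Management Administrator's Guide:

# Run on the active node; repeat on the passive node after bringing it up
dcmctl deployapplication -f /tmp/myapp.ear -a myapp -co home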


Note:

Using OracleAS Cold Failover Cluster (Middle-Tier) with OracleBI Discoverer, the active instance of OracleBI Discoverer is also the preference server. User preferences that users create while logged on to the active instance need to be copied to the passive instance so that they are available when the passive instance becomes the active instance during a failover. Thus, you need to periodically copy the following files to the passive instance so that they are current at failover time.
$ORACLE_HOME/discoverer/.reg_key.dc
$ORACLE_HOME/discoverer/.reg_key.dc.bak
$ORACLE_HOME/discoverer/util/pref.txt
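For example, a minimal copy step for these files, assuming ssh connectivity, the same ORACLE_HOME path on both nodes, and a hypothetical passive host named standby-node:

scp $ORACLE_HOME/discoverer/.reg_key.dc \
    $ORACLE_HOME/discoverer/.reg_key.dc.bak \
    standby-node:$ORACLE_HOME/discoverer/
scp $ORACLE_HOME/discoverer/util/pref.txt \
    standby-node:$ORACLE_HOME/discoverer/util/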

4.7.1.2 Backup and Recovery for OracleAS Cold Failover Cluster (Middle-Tier)

Both nodes of the OracleAS Cold Failover Cluster (Middle-Tier) should be backed up. The procedure is the same as for any other middle-tier installation and is documented in the Oracle Application Server Administrator's Guide. Each installation needs to be backed up. During restoration, a backup can only be restored to the host it was taken from; it should not be restored to the other node.

4.7.1.3 Using Application Server Control Console for OracleAS Cold Failover Cluster (Middle-Tier)

To monitor or manage a node using Application Server Control Console, log in to the Console using the physical hostname of the current active node. The Application Server Control Console processes can be up and running on both nodes of the cluster simultaneously. When you make changes to the environment, including configuration changes or application deployments, perform the changes on both nodes of the OracleAS Cold Failover Cluster (Middle-Tier).
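For example, assuming a hypothetical active node named node1.mydomain.com and the default Application Server Control port (often 1810; check your installation's setup summary), the Console URL looks like:

http://node1.mydomain.com:1810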

4.7.2 Managing Failover for OracleAS Cold Failover Cluster (Middle-Tier)

In an OracleAS Cold Failover Cluster (Middle-Tier), a failure of the active node, or a decision to stop the active node, requires that you make the formerly passive node active, that is, perform a failover operation.

The failover management itself can be performed using either of the following failover processes:

  • Automated, using a cluster manager facility. The cluster manager uses packages to monitor the state of a service. If the service or the node is found to be down, the cluster manager automatically fails over the service from one node to the other.

  • Manual failover. In this case, perform the manual failover steps as outlined in this section. Because both the detection of the failure and the failover itself are performed manually, the system may be unavailable for a longer period with manual failover.

This section covers the following topics:

4.7.2.1 Manual Failover for OracleAS Cold Failover Cluster (Middle-Tier)

The failover process to make the formerly passive node the new active node includes the following steps:

  1. Stop all middle-tier services on the current active node (if the node is still available).

  2. Fail over the virtual IP to the new active node.

  3. Fail over the components to the new active node.

  4. Start the middle-tier services on the new active node.


Note:

The failover process requires that you previously performed the post-installation steps that set up and configured the OracleAS Cold Failover Cluster (Middle-Tier), as outlined in the Oracle Application Server Installation Guide for your platform.

4.7.2.2 Manual Failover for the Virtual IP in OracleAS Cold Failover Cluster (Middle-Tier)

Perform the following steps to fail over the virtual IP in an OracleAS Cold Failover Cluster (Middle-Tier):

  1. Stop all Oracle Application Server processes on the failed node, if possible, using the following command on UNIX systems:

    $ORACLE_HOME/opmn/bin/opmnctl stopall
    
    

    On Windows systems:

    %ORACLE_HOME%\opmn\bin\opmnctl stopall
    
    
  2. Stop Oracle Application Server Administration processes on the failed node, if possible, using the following commands on UNIX systems:

    $ORACLE_HOME/bin/emctl stop iasconsole
    $ORACLE_HOME/bin/emctl stop agent
    
    

    On Windows systems:

    %ORACLE_HOME%\bin\emctl stop iasconsole
    %ORACLE_HOME%\bin\emctl stop agent
    
    
  3. Perform a failover of the virtual IP from the failed node to the new active node. (A worked example with sample values follows these platform-specific steps.)

    On Sun SPARC Solaris systems:

    1. If the failed node is usable, log in as root on the failed node and execute the following command:

      > ifconfig <interface_name> removeif <virtual_IP>
      
      
    2. Log in as root on the new active node and execute the command:

      > ifconfig <interface_name> addif <virtual_IP> up
      
      
      

    On Linux systems:

    1. If the failed node is usable, log in as root on the failed node and execute the following command:

      > /sbin/ifconfig <interface_name> down
      
      
    2. Log in as root on the new active node and execute the command:

      > /sbin/ifconfig <interface_name> <virtual_IP> netmask <netmask> up
      
      

    On Windows systems:

    Move the group created by Oracle Fail Safe to the new active node as follows:

    1. Start up Oracle Fail Safe Manager.

    2. Right-click the group that was created during the OracleAS middle-tier installation and select "Move to different node".


    Note:

    If OracleAS JMS is using file-persistence, fail over the shared disk as well.
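For illustration, the Linux steps with hypothetical sample values (alias interface eth0:1, virtual IP 192.168.1.100, netmask 255.255.255.0):

# On the failed node, if it is usable: release the virtual IP
/sbin/ifconfig eth0:1 down

# On the new active node: bring the virtual IP up
/sbin/ifconfig eth0:1 192.168.1.100 netmask 255.255.255.0 up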

4.7.2.3 Manual Failover of Components for OracleAS Cold Failover Cluster (Middle-Tier)

After performing the failover steps for the virtual IP, perform the following steps to fail over the components on the OracleAS Cold Failover Cluster (Middle-Tier) system by stopping and starting Oracle Application Server processes:

  1. Stop Oracle Application Server processes on the new active node and start OPMN only.

    Execute the following commands on UNIX systems:

    > $ORACLE_HOME/opmn/bin/opmnctl stopall
    > $ORACLE_HOME/opmn/bin/opmnctl start
    
    

    Execute the following commands on Windows systems:

    > %ORACLE_HOME%\opmn\bin\opmnctl stopall
    > %ORACLE_HOME%\opmn\bin\opmnctl start
    
    
  2. Stop Oracle Application Server Administration processes on the new active node, using the following commands.

    On UNIX systems:

    $ORACLE_HOME/bin/emctl stop iasconsole
    $ORACLE_HOME/bin/emctl stop agent
    
    

    On Windows systems:

    %ORACLE_HOME%\bin\emctl stop iasconsole
    %ORACLE_HOME%\bin\emctl stop agent
    
    
  3. On the current active node, execute the following commands.

    On UNIX systems:

    > $ORACLE_HOME/opmn/bin/opmnctl stopall
    > $ORACLE_HOME/opmn/bin/opmnctl startall
    
    

    On Windows systems:

    > %ORACLE_HOME%\opmn\bin\opmnctl stopall
    > %ORACLE_HOME%\opmn\bin\opmnctl startall
    
    
  4. If you use Application Server Control Console, start Oracle Application Server Administration processes on the current active node using the following commands.

    On UNIX systems:

    $ORACLE_HOME/bin/emctl start agent
    $ORACLE_HOME/bin/emctl start iasconsole
    
    

    On Windows systems:

    %ORACLE_HOME%\bin\emctl start agent
    %ORACLE_HOME%\bin\emctl start iasconsole
    
    

4.7.2.4 Manual Failover of OracleAS Cluster (OC4J-JMS)

If you are using OracleAS Cluster (OC4J-JMS) and the system fails abnormally, you may need to perform additional failover steps, such as removing lock files for OracleAS JMS file-based persistence.


See Also:

"Abnormal Termination" in the "Oracle Application Server JMS" chapter of the Oracle Application Server Containers for J2EE Services Guide.

4.8 Managing Custom Processes With OPMN

Oracle Process Manager and Notification Server (OPMN) supports management of custom processes, that is, processes that you create and configure as part of an Oracle Application Server installation. The features that OPMN provides for custom process management include the following:


Note:

If you add a custom process to OPMN that itself monitors and manages other processes, then OPMN determines the status only of the process that it directly manages.

For example, when OPMN is managing and monitoring Oracle Internet Directory processes, OPMN only manages the OIDMON process. The OIDMON process is in turn responsible for monitoring other Oracle Internet Directory processes. Thus when OPMN reports that Oracle Internet Directory is UP, OIDMON is running but, depending on the Oracle Internet Directory configuration, other processes such as OIDLDAPD may still be in the process of starting or may even have failed in their initialization.
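To make the configuration concrete, the following is a minimal opmn.xml sketch for a custom process. The component name and script path are hypothetical; verify the exact element and attribute names against the Oracle Process Manager and Notification Server Administrator's Guide:

<ias-component id="Custom">
  <process-type id="MyProgram" module-id="CUSTOM">
    <process-set id="my_program" numprocs="1">
      <module-data>
        <category id="start-parameters">
          <!-- Executable that OPMN launches to start the custom process -->
          <data id="start-executable" value="/home/oracle/bin/myprogram.sh"/>
        </category>
      </module-data>
    </process-set>
  </process-type>
</ias-component>

Once configured, the custom process can be controlled like any managed component, for example with opmnctl startproc ias-component=Custom.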


4.9 Managing Oracle Application Server Middle-tier Upgrades

When performing upgrades of systems in a high availability environment, the ultimate goal should be to upgrade all Oracle Application Server instances to the same version—in this case, Oracle Application Server 10g (10.1.2). Running all of the Oracle Application Server instances at the same version level is not mandatory; however, doing so makes it easier to manage, troubleshoot, and maintain J2EE applications and the Oracle Application Server components.

If you choose to maintain previous versions of the Oracle Application Server, you must consider which combinations of versions are supported.

This section covers the following topics:

4.9.1 Upgrading Oracle Application Server Instances

When you upgrade an OracleAS Cluster, Oracle Application Server instances need to be upgraded in a specific order to avoid unsupported or unstable configurations.

The general procedure for upgrading Oracle Application Server instances is:

  1. Upgrade each of the middle-tier instances first.

    For example, both Oracle Application Server 10g (9.0.4) and Oracle Application Server 10g (10.1.2) instances function using an Oracle Application Server 10g (9.0.4) Infrastructure.

  2. Upgrade the Metadata Repository.

  3. Upgrade the Identity Management system.


See Also:

  • For information on upgrading Windows systems, Oracle Application Server 10g Upgrade and Compatibility Guide 10g Release 2 (10.1.2) for Windows

  • For information on upgrading UNIX systems, Oracle Application Server 10g Upgrade and Compatibility Guide 10g Release 2 (10.1.2) for UNIX


4.9.2 Upgrading DCM-Managed OracleAS Clusters

When using DCM-Managed OracleAS Clusters, each instance joined to the cluster must use the same Oracle Application Server version. Thus, before you upgrade instances in a DCM-Managed OracleAS Cluster, you need to do the following:

  1. Leave the DCM-Managed Oracle Application Server Cluster using either Application Server Control Console or the DCM leavecluster command.

  2. Ensure that any old instance archives that need to be retained are exported to the file system using the DCM exportarchive command. The upgrade procedure does not upgrade archives. Exported archives can be re-imported after the upgrade process.

  3. After completing the upgrades for all the Oracle Application Server instances that were part of the DCM-Managed OracleAS Cluster, join the instances into a new DCM-Managed OracleAS Cluster. (A command-line sketch of this sequence follows these steps.)
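For orientation, the sequence might look like the following from the command line. The cluster and archive names are hypothetical, and the flag spellings are recalled rather than confirmed, so verify them against the Distributed Configuration Management Administrator's Guide:

# 1. On each instance in the cluster, leave the cluster
dcmctl leavecluster

# 2. Export any archives to retain (hypothetical flag syntax; verify with dcmctl help)
dcmctl exportarchive -arch pre_upgrade_config -f /backup/pre_upgrade_config

# ... upgrade each Oracle Application Server instance ...

# 3. After all upgrades, join each instance to a new cluster (hypothetical flag syntax)
dcmctl joincluster -cl upgraded_cluster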


See Also:

Section 4.3.2.6, "Performing Administration on a DCM-Managed OracleAS Cluster" for information on how to minimize downtime while upgrading instances in a DCM-Managed OracleAS Cluster that uses an OracleAS File-based Farm.

4.9.3 Upgrading Stateful OC4J Applications

Beginning with Oracle Application Server 10g (10.1.2), OC4J applications running in an Oracle Application Server Cluster (OC4J) that use HTTPSession to store state can be upgraded with no session loss.


See Also:

  • For information on upgrading Windows systems, Oracle Application Server 10g Upgrade and Compatibility Guide 10g Release 2 (10.1.2) for Windows

  • For information on upgrading UNIX systems, Oracle Application Server 10g Upgrade and Compatibility Guide 10g Release 2 (10.1.2) for UNIX


4.10 Using OracleAS Single Sign-On With OracleAS Cluster (Middle-Tier)

To enable Oracle Application Server Single Sign-On with OracleAS Cluster, the OracleAS Single Sign-On server needs to be aware of the entry point into the OracleAS Cluster, which is commonly the load balancing mechanism in front of the Oracle HTTP Servers. Usually, this is either Oracle Application Server Web Cache, a network load balancer appliance, or an Oracle HTTP Server installation.

To register an OracleAS Cluster's entry point with the OracleAS Single Sign-On server, use the ssoreg.sh script.

To use OracleAS Single Sign-On functionality, all Oracle HTTP Server instances in an OracleAS Cluster must have an identical OracleAS Single Sign-On registration.

If you do not use a network load balancer, then the OracleAS Single Sign-On configuration must originate with whatever you use as the incoming load balancer, whether Oracle Application Server Web Cache, Oracle HTTP Server, or another entry point.

To configure a DCM-Managed OracleAS Cluster for single sign-on, execute the ssoreg.sh script against one of the application server instances in the DCM-Managed OracleAS Cluster. This tool registers the OracleAS Single Sign-On server and the redirect URLs with all Oracle HTTP Servers in the OracleAS Cluster:

  1. On one of the application server instances, define the configuration with the ssoreg.sh script. Run ssoreg.sh with the options specified as follows, substituting your information for the placeholder portions of the parameter values. See Table 4-6 for a full description of these values.

    $ORACLE_HOME/sso/bin/ssoreg.sh
    -oracle_home_path <orcl_home_path>
    -site_name <site_name>
    -config_mod_osso TRUE
    -mod_osso_url <URL>
    -u <userid>
    [-virtualhost <virtual_host_name>]
    [-update_mode CREATE | DELETE | MODIFY]
    [-config_file <config_file_path>]
    [-admin_info <admin_info>]
    [-admin_id <adminid>]
    
    
    • Specify the host, port, and SID of the database used by the Single Sign-On server.

    • Specify the host and port of the front-end load balancer in the mod_osso_url parameter. This should be an HTTP or HTTPS URL, depending on the site security policy regarding SSL access to OracleAS Single Sign-On protected resources.

    • Specify, in the -u option, the user that starts the Oracle HTTP Server on the host where you are executing this tool (usually root on UNIX systems).

  2. DCM propagates the configuration to all other Oracle HTTP Servers in the DCM-Managed OracleAS Cluster.

Table 4-6 ssoreg.sh Parameter Values

-oracle_home_path <orcl_home_path>
Absolute path to the Oracle home of the application server instance where you are invoking this tool.

-site_name <site_name>
Name of the site, typically the effective host name and port of the partner application. For example, application.mydomain.com.

-config_mod_osso TRUE
If set to TRUE, this parameter indicates that the application being registered is mod_osso. You must include config_mod_osso for osso.conf to be generated.

-mod_osso_url <URL>
The effective URL of the partner application. This is the URL that is used to access the partner application. The value should be specified in this URL format:

http://oracle_http_host.domain:port

-u <userid>
The user name that will start the Oracle HTTP Server. On UNIX systems, this name is usually "root". On Windows systems it is SYSTEM. The -u parameter is mandatory.

-virtualhost <virtual_host_name>
Optional. Use this parameter only if registering an Oracle HTTP Server virtual host with the OracleAS Single Sign-On server.

If you create a virtual host, be sure to fill in the following directive in the httpd.conf file for each protected URL:

<VirtualHost host_name>
OssoConfigFile $ORACLE_HOME/Apache/Apache/conf/osso/host_name/osso.conf
OssoIpCheck off
#<Location /your_protected_url>
# AuthType basic
# Require valid-user
#</Location>
#Other configuration information for the virtual host
</VirtualHost>

The commented lines must be uncommented before the application is deployed.

-update_mode CREATE | DELETE | MODIFY
Optional. Creates, deletes, or modifies the partner registration record. CREATE, the default, generates a new record. DELETE removes the existing record. MODIFY deletes the existing record and then creates a new one.

-config_file <config_file_path>
Optional. Location of the osso.conf file for the virtual host if one is being configured. It may, for example, be $ORACLE_HOME/Apache/Apache/conf/osso/virtual_host_name/osso.conf.

Note that the osso.conf file for the non-virtual host is located at $ORACLE_HOME/Apache/Apache/conf/osso.

-admin_id <name>
Optional. User name of the mod_osso administrator. This shows up in the OracleAS Single Sign-On tool as contact information.

-admin_info <text>
Optional. Additional information about the mod_osso administrator, such as an e-mail address. This shows up in the OracleAS Single Sign-On tool as contact information.

The ssoreg.sh script establishes all information necessary to facilitate secure communication between the Oracle HTTP Servers in the OracleAS Cluster and the Single Sign-On server.
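For illustration, a sample invocation with hypothetical values, assuming a front-end load balancer at lb.mydomain.com:7777 and a middle-tier Oracle home of /u01/app/oracle/midtier:

$ORACLE_HOME/sso/bin/ssoreg.sh
-oracle_home_path /u01/app/oracle/midtier
-site_name myapp.mydomain.com
-config_mod_osso TRUE
-mod_osso_url http://lb.mydomain.com:7777
-u root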


Note:

When using OracleAS Single Sign-On with the Oracle HTTP Servers in the OracleAS Cluster, set the KeepAlive directive to Off. When the Oracle HTTP Servers are behind a network load balancer and the KeepAlive directive is set to On, the network load balancer maintains state with the Oracle HTTP Server for the same connection, which can result in an HTTP 503 error. Modify the KeepAlive directive in the Oracle HTTP Server configuration; this directive is located in the httpd.conf file of the Oracle HTTP Server.
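For example, in the httpd.conf file:

# $ORACLE_HOME/Apache/Apache/conf/httpd.conf
KeepAlive Off

In a DCM-Managed OracleAS Cluster, propagate the edited configuration to the other Oracle HTTP Servers afterward, for example with dcmctl updateconfig -ct ohs (verify the syntax in the Distributed Configuration Management Administrator's Guide).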