13 Configuring High Availability for Other Components

This section describes information unique to certain component products.

For this release, this section includes the following topics:

13.1 Deploying Oracle Data Integrator

Review the information in this section, which describes considerations for configuring Oracle Data Integrator repository connections to Oracle Real Application Clusters:

13.1.1 Oracle RAC Retry Connectivity for Source and Target Connections

When you configure Oracle Data Integrator (ODI) connections to Oracle Real Application Clusters (Oracle RAC), Oracle RAC retry is supported for the ODI master and work repositories.

ODI uses transactional connections to source and target data servers while running ODI scenarios. For these source and target connections, ODI does not support Oracle RAC retry connectivity; you cannot migrate these transactions to another Oracle RAC node.

13.1.2 Configuring ODI Repository Connections to Oracle RAC

When you create an ODI repository using Repository Creation Utility (RCU), you specify the work repository connection JDBC URL. RCU stores the URL in the master repository contents. If a work repository JDBC URL is a single node URL, Oracle recommends that you modify the URL to include the Oracle Real Application Clusters (Oracle RAC) failover address.

  • If Oracle RAC is not configured with Single Client Access Name (SCAN), you can provide details of the Oracle RAC instances. In the work repository JDBC URL field, enter the Oracle RAC connectivity address in the format host:port. See the following example.

  • If Oracle RAC is configured with SCAN, provide Oracle RAC instance details with the SCAN address.

The following example shows the JDBC URL format to connect to an Oracle RAC with two hosts when it doesn't use SCAN:

jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS =(PROTOCOL =tcp)(HOST =host1)(PORT =port1))
(ADDRESS =(PROTOCOL =tcp)(HOST =host2)(PORT =port2))
(CONNECT_DATA =(SERVER =dedicated)(SERVICE_NAME =service_name)))

See Creating a Work Repository in Oracle Fusion Middleware Administering Oracle Data Integrator for more information.
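As an illustration of the connect descriptor format above, the following sketch assembles a failover JDBC URL from a list of host and port pairs. This is illustrative only; the helper name and placeholder values are hypothetical and are not part of ODI or RCU.

```python
# Illustrative sketch: build an Oracle RAC failover JDBC URL of the form
# shown above from a list of (host, port) pairs. The function name and
# placeholder values are hypothetical.
def rac_failover_jdbc_url(hosts, service_name):
    addresses = "".join(
        f"(ADDRESS=(PROTOCOL=tcp)(HOST={host})(PORT={port}))"
        for host, port in hosts
    )
    return (
        "jdbc:oracle:thin:@(DESCRIPTION="
        f"{addresses}"
        f"(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME={service_name})))"
    )

url = rac_failover_jdbc_url([("host1", 1521), ("host2", 1521)], "odi_svc")
```

With SCAN configured, the URL can instead reference the single SCAN address, for example jdbc:oracle:thin:@//scan-address:port/service_name (placeholder values).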

13.1.3 About Oracle Data Integrator Scheduler Node Failure

If a WebLogic Server failover occurs, the other WebLogic Server instance becomes the scheduler. A Coherence cache handles the scheduler lifecycle: locking guarantees that only one scheduler is active, and event notification triggers scheduler migration.

When an agent restarts and computes its schedule, it takes into account schedules in progress, which automatically continue their execution cycle beyond the server startup time. New sessions trigger as if the scheduler never stopped. Stale sessions are moved to an error state and remain in that state after the restart.

In an Oracle Data Integrator Agent cluster, if the Agent node that is the scheduler node fails, another node in the cluster takes over as the scheduler node. The new scheduler node reinitializes and runs all schedules from that point forward.

If a scheduled scenario with a repeatable execution cycle is running when the node crashes, the scenario doesn't continue its iterations on the new scheduler node from the point at which the scheduler node failed. For example, if a scheduled scenario is configured to repeat the execution 10 times after an interval of two minutes and the scheduler node fails during the third execution, the new scheduler node doesn't continue running the scenario for the next eight executions.

13.2 Deploying Forms

Keep the following in mind as you deploy Forms.

13.2.1 About Recovering from Forms HTTP Session Failover

If a Forms HTTP session fails, you must reconnect and restart your session with the Forms application.

13.3 Deploying Reports

Reports has certain considerations that you need to know about for a high availability setup.

13.3.1 About Scaling Up in Reports

If you scale up Reports components, Oracle recommends that you bring down all nodes and then restart them when configuration is complete.

See Starting and Stopping Oracle Reports Server in Oracle Fusion Middleware Publishing Reports to the Web with Oracle Reports Services.

13.3.2 About Reports Multicast Communication

Reports cluster members and individual clients use multicast to discover other nodes. There is no workaround; multicast is required.

13.3.3 About Reports Shared-File Based Cache

Reports uses a shared-file-based cache, which is a singleton. If the cache fails, high availability fails as well.

There is no workaround. If the shared-file-based cache fails, you must restart the Reports servers.

13.3.4 About Reports Database Service Failure

Reports components can tolerate database failure. Reports retries the database connection three times. After the database is up, you must run Reports again.
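The retry behavior described above can be pictured with a generic sketch like the following. This is illustrative only and is not Reports code; the function names and the simulated failure are assumptions.

```python
# Illustrative only: a generic "retry three times" pattern like the one
# Reports applies to its database connection. Names are hypothetical.
def connect_with_retries(connect, attempts=3):
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as exc:
            last_error = exc  # remember the failure and try again
    raise last_error  # all attempts failed; a manual rerun is required

# Simulated connection that fails twice, then succeeds on the third try.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("database unavailable")
    return "connected"

result = connect_with_retries(flaky_connect)
```

If all attempts fail, the last error is raised, which mirrors the documented behavior: after the database is back up, you must run Reports again.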

13.3.5 About Reports OID/Shared Cache File System Failure

Reports doesn't have retries for OID/Shared cache file system failures. There is no workaround. After the external system is up, you must run Reports again.

13.4 Deploying Oracle Business Process Management

Keep the following in mind when you deploy Oracle Business Process Management in a high availability environment.

13.4.1 About BPM Composer and High Availability

In a high availability environment, you lose session state in BPM Composer if a failover occurs, so any in-progress edits are lost. BPM Composer remains accessible on the secondary server, but you must log in again and start a fresh session.