Review the information in this section, which describes considerations for configuring Oracle Data Integrator repository connections to Oracle Real Application Clusters.
When you configure Oracle Data Integrator (ODI) connections to Oracle Real Application Clusters (Oracle RAC), RAC connection retry is supported for the ODI master and work repositories.
ODI uses transactional connections to source and target data servers while running ODI scenarios. For these source and target connections, ODI does not support Oracle RAC retry; these transactions cannot migrate to another node in Oracle RAC.
When you create an ODI repository using Repository Creation Utility (RCU), you specify the work repository connection JDBC URL. RCU stores the URL in the master repository contents. If a work repository JDBC URL is a single node URL, Oracle recommends that you modify the URL to include the Oracle Real Application Clusters (Oracle RAC) failover address.
If Oracle RAC is not configured with Single Client Access Name (SCAN), you can provide details of the Oracle RAC instances. In the work repository JDBC URL field, enter the Oracle RAC connectivity address in the format host:port. See the following example.
If Oracle RAC is configured with SCAN, provide the SCAN address instead of the individual Oracle RAC instance addresses.
The following example shows the JDBC URL format for connecting to an Oracle RAC database with two hosts when SCAN is not used:
jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=port1))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=port2))(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME=service_name)))
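When Oracle RAC is configured with SCAN, the single SCAN address typically replaces the list of individual host entries, so the URL is much shorter. A sketch, assuming a hypothetical SCAN listener `rac-scan.example.com` listening on port 1521 and a service named `service_name`:

```
jdbc:oracle:thin:@//rac-scan.example.com:1521/service_name
```

The SCAN listener then routes each connection to an available Oracle RAC instance, so the URL does not need to change when nodes are added or removed.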
See Creating a Work Repository in Oracle Fusion Middleware Administering Oracle Data Integrator for more information.
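Because the failover descriptor is easy to mistype by hand, it can help to assemble it programmatically before pasting it into the work repository JDBC URL field. A minimal Java sketch; `RacUrlBuilder` and `racUrl` are illustrative names and not part of ODI or the Oracle JDBC driver, and the host names are placeholders:

```java
public class RacUrlBuilder {

    // Builds an Oracle thin JDBC URL with a RAC failover descriptor
    // from host/port pairs and a service name. Illustrative only; the
    // resulting string follows the descriptor format shown above.
    static String racUrl(String[][] addresses, String serviceName) {
        StringBuilder sb =
            new StringBuilder("jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=ON)");
        for (String[] hostPort : addresses) {
            sb.append("(ADDRESS=(PROTOCOL=TCP)(HOST=").append(hostPort[0])
              .append(")(PORT=").append(hostPort[1]).append("))");
        }
        sb.append("(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME=")
          .append(serviceName).append(")))");
        return sb.toString();
    }

    public static void main(String[] args) {
        // Placeholder hosts and service name; substitute your own values.
        System.out.println(racUrl(
            new String[][] { { "host1", "1521" }, { "host2", "1521" } },
            "service_name"));
    }
}
```

The generated URL can be pasted directly into the work repository connection settings, replacing a single-node URL with one that includes the failover address list.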
If a WebLogic Server failover occurs, the other WebLogic Server instance becomes the scheduler. A Coherence cache handles the scheduler lifecycle: locking guarantees scheduler uniqueness, and event notification drives scheduler migration.
When an agent restarts and computes its schedule, it takes into account schedules already in progress, which automatically continue their execution cycle beyond the server startup time. New sessions trigger as if the scheduler had never stopped. Stale sessions move to an error state and remain in that state after the restart.
In an Oracle Data Integrator Agent cluster, if the Agent node that is the scheduler node fails, another node in the cluster takes over as the scheduler node. The new scheduler node reinitializes and runs all schedules from that point forward.
If a scheduled scenario with a repeatable execution cycle is running when the node crashes, the scenario doesn't continue its iterations on the new scheduler node from the point at which the scheduler node failed. For example, if a scheduled scenario is configured to repeat the execution 10 times after an interval of two minutes and the scheduler node fails during the third execution, the new scheduler node doesn't continue running the scenario for the next eight executions.
Oracle Reports has certain considerations that you need to know about for a high availability setup.
If you scale up Reports components, Oracle recommends that you bring down all nodes and then restart them when configuration is complete.
See Starting and Stopping Oracle Reports Server in Oracle Fusion Middleware Publishing Reports to the Web with Oracle Reports Services.
Reports cluster members or individual clients use multicast to discover other nodes. There is no workaround to using multicast.
Reports uses a shared file-based cache, which is a singleton; if the cache fails, high availability also fails.
There is no workaround. If the shared file-based cache fails, you must restart the Reports servers.
Reports components can tolerate database failure. Reports retries the database connection three times. After the database is back up, you must run the report again.
Keep the following in mind when you deploy Oracle Business Process Management in a high availability environment.
In a high availability environment, BPM Composer session state is lost if failover occurs, so any in-progress edits are lost. BPM Composer remains accessible on the secondary server, but you must log in again and start a fresh session.