This section describes information unique to certain component products.
For this release, this section includes the following topics:
This topic describes considerations for configuring Oracle Data Integrator repository connections to Oracle Real Application Clusters:
Section 11.1.1, "Oracle RAC Retry Connectivity for Source and Target Connections"
Section 11.1.2, "Configuring ODI Repository Connections to Oracle RAC"
Section 11.1.3, "About Oracle Data Integrator Scheduler Node Failure"
When you configure Oracle Data Integrator (ODI) connections to Oracle Real Application Clusters (Oracle RAC), RAC retry is supported for the ODI master and work repositories. While running ODI scenarios, ODI uses transactional connections to the source and target data servers. For these source and target connections, ODI does not support RAC retry connectivity; you cannot migrate these transactions to another Oracle RAC node.
When you create an ODI repository using Repository Creation Utility (RCU), you specify the work repository connection JDBC URL. RCU stores the URL in the master repository contents. If the work repository JDBC URL is a single node URL, you should modify the URL to include the Oracle Real Application Clusters (Oracle RAC) failover address.
If Oracle RAC is not configured with Single Client Access Name (SCAN), provide the details of the individual Oracle RAC instances: in the work repository JDBC URL field, enter the connectivity address of each instance in host:port format, as shown in the following example.
If Oracle RAC is configured with SCAN, use the SCAN address in the work repository JDBC URL instead of individual instance addresses.
The following example shows the JDBC URL format to connect to an Oracle RAC with two hosts when it does not use SCAN:
jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=port1))(ADDRESS=(PROTOCOL=tcp)(HOST=host2)(PORT=port2))(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME=service_name)))
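By contrast, when Oracle RAC is configured with SCAN, a single SCAN listener address can replace the per-instance address list. The following is a sketch of that form; scan-hostname, 1521, and service_name are placeholders for your environment:

```
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=scan-hostname)(PORT=1521))(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME=service_name)))
```

Because the SCAN listener routes connections to available instances, the URL does not change when you add or remove Oracle RAC nodes.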
See "Creating a Work Repository" in Administering Oracle Data Integrator for more information.
If a WebLogic Server failover occurs, the other WebLogic Server instance becomes the scheduler. A Coherence cache handles the scheduler lifecycle: locking guarantees scheduler uniqueness, and event notification provides scheduler migration. When an agent restarts and computes its schedule, it takes into account schedules in progress, which automatically continue their execution cycle beyond the server startup time. New sessions trigger as if the scheduler never stopped. Stale sessions are set to an error state and remain in that state after the restart.
In an Oracle Data Integrator Agent cluster, if the Agent node that is the scheduler node fails, another node in the cluster takes over as the scheduler node. The new scheduler node reinitializes and runs all schedules from that point forward.
If a scheduled scenario with a repeatable execution cycle is running when the node crashes, the scenario does not continue its iterations on the new scheduler node from the point at which the scheduler node failed. For example, if a scheduled scenario is configured to repeat the execution 10 times after an interval of two minutes and the scheduler node fails during the third execution, the new scheduler node does not run the remaining eight executions.
This section includes the following topic:
See Also:
For more information about Oracle Application Development Framework (ADF), see:
"Oracle ADF Key Concepts" in Understanding the Oracle Application Development Framework
Oracle Fusion Middleware Administering Oracle ADF Applications
When you use Oracle JRF Asynchronous Web Services, the asynchronous web service is pinned to a service and does not fail over. When you use a reliability protocol such as WS-RM, the higher-level protocol reconnects to a new server after a failure.
For more on Oracle JRF Asynchronous Web Services, see the Domain Template Reference.
Topics in this section include the following:
Section 11.3.4, "About Specifying Ports for Multiple Node Managers"
Section 11.3.5, "About RAC Database Post Installation Configuration"
If a BI managed server or host crashes, users may need to log in again, depending on which application they were using at the time of the crash and whether single sign-on (SSO) is in use.
Essbase does not support a high availability configuration. If a server fails, there is no loss of state; you can recover from a failure by redeploying the Essbase cube.
Studio does not support a high availability configuration. Oracle recommends performing XML import/export on a regular basis; this is the best practice for recovering Studio from a catalog failure.
If you run more than one Node Manager per machine, ensure that you specify a distinct port for each Node Manager. For more information, see the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Business Intelligence.
Oracle Business Intelligence requires additional configuration steps for whole server migration after installation. See "Using Whole Server Migration and Service Migration in an Enterprise Deployment" in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Business Intelligence for these steps.
Oracle Business Intelligence requires additional steps after you follow the scale-out steps in Chapter 6, "Scaling Out a Topology (Machine Scale Out)." Oracle BI requires you to update the configuration with the new singleton data directory (SDD) setting.
To complete Oracle BI scale out:
Move the SDD from local storage (for example, DOMAIN_HOME/bidata) to shared storage so that all hosts can access it using the same path (for example, /net/yoursystem/scratch/sdd).
Open DOMAIN_HOME/config/fmwconfig/bienv/core/bi-environment.xml (element bi:singleton-data-directory).
Change the xdo.server.config.dir path to refer to the new SDD path that you created.
Restart the server.
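As a sketch of step 2, the singleton data directory element in bi-environment.xml might look like the following after the edit; the shared-storage path is a placeholder taken from the example above, and surrounding elements are omitted:

```
<bi:singleton-data-directory>/net/yoursystem/scratch/sdd</bi:singleton-data-directory>
```

Because every host resolves this path identically on shared storage, each BI managed server reads the same SDD after the restart.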
Topics in this section include the following:
If a Forms HTTP session fails, you must reconnect and restart your session with the Forms application.
Reports has the following considerations in a high availability setup:
If you scale up Reports components, Oracle recommends that you bring down all nodes and then restart them when configuration is complete.
See "Starting and Stopping Oracle Reports Server" in Oracle Fusion Middleware Publishing Reports to the Web with Oracle Reports Services.
Reports cluster members and individual clients use multicast to discover other nodes. There is no alternative to multicast.
Reports uses a shared file-based cache, which is a singleton; if the cache fails, high availability fails with it. There is no workaround: if the shared file-based cache fails, you must restart the Reports servers.
Reports components can tolerate a database failure. Reports retries the database connection three times. After the database is back up, you must run the report again.
Reports does not retry after Oracle Internet Directory (OID) or shared-cache file system failures. There is no workaround. After the external system is back up, you must run the report again.