A High Availability for Oracle Enterprise Scheduler
This appendix describes how you can configure and manage a highly available Oracle Enterprise Scheduler environment.
This appendix includes the following sections:
A.1 Introduction to High Availability for Oracle Enterprise Scheduler
A highly available cluster of Oracle Enterprise Scheduler servers is recommended for optimal job performance. This is especially useful for running asynchronous jobs remotely, which may require returning a status message upon completion.
For example, suppose an asynchronous Oracle ADF Business Components job runs remotely. Oracle Enterprise Scheduler expects the job to send a status upon completion using a web services callback. If Oracle Enterprise Scheduler runs on only one node and that node is down, the callback message does not arrive and the status of the job is unknown. The job then requires manual intervention to mark its status as complete.
A two node cluster, however, allows callbacks to be processed and to arrive at their destination even if one server is down. A clustered Oracle Enterprise Scheduler environment allows callbacks to be delivered as required, and jobs to complete with the correct status automatically assigned by the system.
The main steps required for configuring a highly available Oracle Enterprise Scheduler environment are as follows:
Use the Configuration Wizard to set up a domain and configure a cluster.
Add nodes to the cluster as required in order to enhance scalability, allowing more processing power for jobs.
When a cluster node is added, the new node's processor configuration might have to be adjusted to assign appropriate work assignments.
For more information, see the Oracle WebLogic Server documentation.
Configure the load balancer. For more information, see the Oracle HTTP Server documentation.
For information about troubleshooting an Oracle Enterprise Scheduler cluster, see Troubleshooting Oracle Enterprise Scheduler.
A.2 Oracle Enterprise Scheduler Concepts
To configure an Oracle Enterprise Scheduler environment, it helps to understand the Oracle Enterprise Scheduler architecture, its components, and its life cycle.
This section includes the following topics:
A.2.1 Oracle Enterprise Scheduler Architecture
Oracle Enterprise Scheduler is installed to an Oracle WebLogic Server instance, on which it runs. The Oracle Enterprise Scheduler service component sits on top of Oracle JRF and is secured by Oracle Web Services Manager. Oracle Enterprise Scheduler manages scheduled job submissions and job definitions.
Figure A-1 shows the Oracle Enterprise Scheduler runtime architecture in the context of Oracle Fusion Middleware components.
Figure A-1 Oracle Enterprise Scheduler Runtime Architecture
Description of "Figure A-1 Oracle Enterprise Scheduler Runtime Architecture"
The components of the Oracle Enterprise Scheduler runtime architecture are as follows:
Oracle Enterprise Scheduler client applications: Various applications can request the execution of a scheduled job. Applications include Oracle Fusion applications, Web service clients such as SOA or Oracle ADF Business Components and PL/SQL applications.
Oracle Enterprise Scheduler: Fusion Middleware Control enables you to manage Oracle Enterprise Scheduler clusters, services and jobs. Oracle Enterprise Scheduler accesses metadata using MDS. Scheduled job output is saved to Oracle WebCenter Content. Oracle Enterprise Scheduler includes interfaces and APIs that enable interaction with applications and external components. For example, a PL/SQL client uses the Oracle Enterprise Scheduler PL/SQL API to request a scheduled job.
Components accessed by Oracle Enterprise Scheduler: Oracle Enterprise Scheduler supports creating Java jobs that access SOA components, Oracle ADF Business Components services, and Oracle Business Intelligence Publisher (there is no direct support for jobs specific to these components).
Client applications accessing EJBs connect to Oracle Enterprise Scheduler over RMI, whereas client applications using Oracle Enterprise Scheduler web services use HTTP. Connections from client applications to the server are persistent, short-lived asynchronous interactions that sometimes use callback functions.
A.2.2 Oracle Enterprise Scheduler Components
Oracle Enterprise Scheduler components are as follows:
Oracle ADF Server (Oracle Enterprise Scheduler client): The MetadataServiceEJB libraries are deployed as shared libraries. These libraries are imported into the ADF client applications (EAR files).
Oracle Enterprise Scheduler Server (Oracle Enterprise Scheduler Runtime): The Oracle Enterprise Scheduler service component manages all scheduled jobs.
Core runtime: This is an Oracle Enterprise Scheduler application EAR file which contains a JCA resource adapter, multiple EJB components and JRF web service modules (WAR files).
Hosting applications: A hosting application is an EAR file that imports the MetadataServiceEJB shared libraries. An Oracle Enterprise Scheduler hosting application submits job requests using the Oracle Enterprise Scheduler libraries or an integrated job request submission interface.
Oracle Database Scheduler: The standard Oracle Database Scheduler is used to execute Oracle Enterprise Scheduler PL/SQL jobs.
Oracle Enterprise Scheduler uses Java process APIs to spawn native binary jobs.
Oracle Enterprise Scheduler relies on the following data sources:
Oracle Enterprise Scheduler runtime (XA emulation)
Oracle Enterprise Scheduler runtime (non-XA)
Oracle Enterprise Scheduler Metadata Store (non-XA)
An XA transaction, in the most general terms, is a global transaction that may span multiple resources. A non-XA transaction always involves just one resource, and generally cannot participate in a global transaction.
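The distinction between XA and non-XA resources can be sketched in code. The following is an illustrative sketch (not Oracle code): a hypothetical `Resource` class models the two-phase commit protocol that XA coordinators use, while a non-XA resource can only commit on its own.

```python
# Illustrative sketch (not Oracle code): why a global transaction needs XA.
# The Resource class and both commit helpers are hypothetical.

class Resource:
    def __init__(self, name):
        self.name = name
        self.prepared = False
        self.committed = False

    def prepare(self):          # phase 1: promise that commit will succeed
        self.prepared = True
        return True

    def commit(self):           # phase 2: make the changes durable
        if not self.prepared:
            raise RuntimeError("commit before prepare")
        self.committed = True

    def rollback(self):
        self.prepared = False
        self.committed = False

def global_commit(resources):
    """Two-phase commit across multiple resources, as an XA coordinator would."""
    if all(r.prepare() for r in resources):
        for r in resources:
            r.commit()
        return True
    for r in resources:
        r.rollback()
    return False

def local_commit(resource):
    """A non-XA resource commits alone; there is no prepare phase to coordinate."""
    resource.prepared = True
    resource.commit()
    return True
```

Either all resources in the global transaction commit, or none do, which is why a data source that must participate in such a transaction needs XA (or XA emulation) support.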
External dependencies include a run-time database, an MDS repository, as well as Oracle SOA Suite, Oracle ADF Business Components, Oracle BI Presentation Services, Oracle WebCenter Content and so on, depending on the components involved in implementing a given job.
A.2.3 Oracle Enterprise Scheduler Life Cycle
The Oracle Enterprise Scheduler engine starts up as part of the standard J2EE application initialization by Oracle WebLogic Server. The Oracle Enterprise Scheduler JCA adapter connects to the run-time schema and polls for scheduled work items.
The following is the sequence of the execution of a client request in Oracle Enterprise Scheduler.
- A client application submits a job request.
- Oracle Enterprise Scheduler reads the metadata for the job request.
- Oracle Enterprise Scheduler places the job request and the job metadata in a queue in the Oracle Enterprise Scheduler run-time data store.
- Based on schedule and request processor availability, Oracle Enterprise Scheduler sends a message to the hosted application, which includes all the job request parameters and metadata captured at the time of submission.
- The hosting application executes the job and returns a status. Job output and logs are written to Oracle WebCenter Content.
- Oracle Enterprise Scheduler updates the history with the job request status.
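The request sequence above can be sketched as a simple state machine. The state names and queue structure below are assumptions for illustration, not the actual Oracle Enterprise Scheduler implementation.

```python
# Illustrative sketch of the request life cycle described above.
# State names and transitions are assumptions, not Oracle's actual states.

from collections import deque

VALID_TRANSITIONS = {
    "SUBMITTED": {"WAIT"},                # request queued in the run-time store
    "WAIT": {"RUNNING"},                  # picked up by a request processor
    "RUNNING": {"SUCCEEDED", "ERROR"},    # hosting application reports a status
}

class JobRequest:
    def __init__(self, request_id, metadata):
        self.request_id = request_id
        self.metadata = metadata          # captured at submission time
        self.state = "SUBMITTED"
        self.history = ["SUBMITTED"]

    def transition(self, new_state):
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

def run_scheduler(queue):
    """Drain the queue, executing each waiting request and recording history."""
    while queue:
        request = queue.popleft()
        request.transition("RUNNING")
        request.transition("SUCCEEDED")   # assume the hosting app reports success
```

The history list recorded per request corresponds to the state changes shown in Figure A-2.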
Figure A-2 shows the changes in job state during the life cycle of an executed job request.
Figure A-2 Job Request Changes in State During Runtime
Description of "Figure A-2 Job Request Changes in State During Runtime"
Figure A-3 displays the changes in state for an executable job request for which the executing user has canceled the request.
Figure A-3 Changes in Job State Following Cancellation
Description of "Figure A-3 Changes in Job State Following Cancellation"
Figure A-4 displays state transitions for a job request submitted with a schedule.
Figure A-4 State Transitions for a Job Request Submitted with a Schedule
Description of "Figure A-4 State Transitions for a Job Request Submitted with a Schedule"
A.2.4 Oracle Enterprise Scheduler Life Cycle Tools
As Oracle Enterprise Scheduler runs on an Oracle WebLogic Server instance, you can manage Oracle Enterprise Scheduler using Oracle Fusion Middleware Node Manager for SOA.
Oracle Enterprise Scheduler jobs can be hosted on the same Oracle WebLogic Server instance or a remote Oracle WebLogic Server instance, in the database, or as binary processes. Oracle Enterprise Scheduler controls the life cycle of Oracle Enterprise Scheduler jobs. Use Oracle Enterprise Manager to monitor and manage Oracle Enterprise Scheduler jobs. For more information about managing Oracle Enterprise Scheduler jobs, see Managing Oracle Enterprise Scheduler Requests.
For more information about Oracle Fusion Middleware Node Manager, see the chapter “Using Node Manager" in Oracle Fusion Middleware Node Manager Administrator's Guide for Oracle WebLogic Server.
A.3 Configuring High Availability for Oracle Enterprise Scheduler
To enable a highly available environment, it is recommended that you run Oracle Enterprise Scheduler in a cluster of at least two nodes.
This section includes the following topics:
A.3.1 Oracle Enterprise Scheduler Configuration and Deployment Artifacts
Configuration files are as follows:
ess.xml: This file is part of the Oracle Enterprise Scheduler EAR file deployed to the Oracle Enterprise Scheduler cluster.
MDS repository: The Oracle Metadata repository stores Oracle Enterprise Scheduler job metadata. Oracle Enterprise Scheduler supports both database and file-based MDS repositories.
Deployment artifacts are as follows:
J2EE application for core run-time and hosting applications.
Job metadata within Oracle MDS for core runtime and jobs loaded at startup.
The Oracle WebLogic Server deployment is non-staged.
A.3.2 Oracle Enterprise Scheduler Logging
Use standard Oracle WebLogic Server logging for an Oracle Enterprise Scheduler cluster. Use logs in Oracle WebCenter Content to examine Oracle Enterprise Scheduler behavior. Oracle Enterprise Scheduler logging is configured by default in Oracle WebLogic Server.
The default location for log files for Oracle Enterprise Scheduler process jobs on UNIX servers is /tmp/ess/requestFileDirectory. Oracle Enterprise Scheduler operational log files can be found in the server's -diagnostic.log file.
A.3.3 Oracle Enterprise Scheduler Cluster Architecture
Figure A-5 is an architectural diagram of a two node Oracle Enterprise Scheduler cluster.
Figure A-5 Oracle Enterprise Scheduler Two Node Cluster
Description of "Figure A-5 Oracle Enterprise Scheduler Two Node Cluster"
This configuration includes the following components:
Hardware load balancer.
A cluster of Oracle WebLogic Servers running on two servers.
Two Oracle Fusion Middleware homes running on two servers.
A two node cluster of Oracle Enterprise Scheduler instances, each running on a different instance of Oracle Fusion Middleware.
A cluster of Oracle RAC databases with a multi data source (multi-DS) configuration is required for availability of the run-time and MDS schemas. Multi-DS and Oracle RAC provide failover in the event of database failure.
Shared persistence storage
An HTTP load balancer is required for web service interactions. The HTTP load balancer must be appropriately configured to balance requests to Oracle Enterprise Scheduler web services. Oracle Enterprise Scheduler includes the following web services at the specified URLs:
For more information regarding high availability architectures, see “Configuring High Availability for Oracle Fusion Middleware SOA Suite" in the Oracle Fusion Middleware High Availability Guide.
A.3.4 Configuring the Oracle Enterprise Scheduler Front End Host and Port
For enterprise deployment, Oracle Fusion Applications have removed frontend host.port settings and added a checklist to all middleware components so that they do not rely on using the frontend host.port. To enable this EDG requirement, ESSAPP supports explicit configuration of the frontend host and port for its web services.
A.3.4.1 Configuring the ESSAPP Frontend Host and Port
ESSAPP supports the following two new configuration properties:
The value of this property is a string with the following format:
It is used at runtime to determine the ESSWebService end point address and is published as part of the WSDL.
The value of this property is a string with the following format:
It is used at runtime to determine the end point address of Oracle Enterprise Scheduler callback web services (including EssWsJobAsyncCallbackService). These end point addresses are published as part of the respective WSDLs.
If these ESSAPP properties are not configured, then ESSAPP looks up the frontend host and port configuration at runtime by querying the Oracle WebLogic Server cluster. If the cluster's frontend host and port is also not configured, ESSAPP uses the local ess_server host name and port.
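This preference order can be sketched as a simple lookup. The function name and arguments below are illustrative assumptions, not ESSAPP internals.

```python
# Illustrative sketch of the frontend address preference order described above.
# The function and parameter names are assumptions for illustration.

def frontend_address(essapp_property, cluster_frontend, local_server):
    """Return the host:port ESSAPP uses to publish web service endpoints.

    Preference order:
      1. the explicit ESSAPP property, if configured
      2. the WebLogic Server cluster's frontend host and port
      3. the local ess_server host name and port
    """
    if essapp_property:
        return essapp_property
    if cluster_frontend:
        return cluster_frontend
    return local_server
```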
ESSAPP advertises the WSDL URLs for its web services (computed using the above preference order) under its home page at:
It is recommended that Oracle Enterprise Scheduler HA/Cluster administrators configure the CallbackServerURL properties in ESSAPP as explained in this section. Configuring the cluster frontend host and port, as described in Configuring the WebLogic Server Cluster Frontend Host and Port, can be done as a second preference with lower priority.
It is recommended that end users of Oracle Enterprise Scheduler web services look up the advertised WSDL URLs for individual Oracle Enterprise Scheduler web services and use them to access Oracle Enterprise Scheduler web services.
A.3.4.2 Configuring the WebLogic Server Cluster Frontend Host and Port
For an Oracle Enterprise Scheduler cluster setup involving two or more Oracle Enterprise Scheduler instances, you must configure the cluster front end host and port in the WebLogic Server Administration Console, as follows:
- Log in to the WebLogic Server Administration Console, click on the Clusters link and select the Oracle Enterprise Scheduler cluster (for example, ess_cluster) on the Summary of Clusters page.
- On the Settings for ess_cluster page, click the Configuration tab. On the General sub-tab, fill in the Cluster Address field with the host and port addresses of the Oracle Enterprise Scheduler instances (for example, ess_server2_host:port, and so on).
- Click on the HTTP sub-tab and configure the Frontend Host, Frontend HTTP Port and Frontend HTTPS Port with the Oracle HTTP Server details.
- Click on the Servers sub-tab and verify that the Server Name Prefix and Oracle Enterprise Scheduler instances are configured.
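The console steps above can also be scripted with WLST. The following is an untested sketch: the cluster name ess_cluster comes from the example above, but the connection details, host names, and port numbers are illustrative assumptions for your own environment.

```python
# WLST sketch (run with wlst.sh); credentials, URLs, and hosts are assumptions.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Clusters/ess_cluster')
# Equivalent of the General sub-tab Cluster Address field
cmo.setClusterAddress('ess_server1_host:8001,ess_server2_host:8001')
# Equivalent of the HTTP sub-tab frontend settings (Oracle HTTP Server details)
cmo.setFrontendHost('ohs.example.com')
cmo.setFrontendHTTPPort(7777)
cmo.setFrontendHTTPSPort(4443)
save()
activate()
disconnect()
```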
A.3.5 Failover Requirements
An HTTP load balancer provides load balancing so as to re-route requests in the event of node failure. There are no time out requirements for the load balancer or firewalls, as long as components use persistent connections. Likewise, session state replication and failover are not required.
Load balancing is used for actions such as submitting a job and querying its status using the Oracle Enterprise Scheduler web service interface. This load balancing occurs independently of where the job is scheduled to execute.
As Oracle Enterprise Scheduler does not use JMS, no JMS failover is required. For a remote EJB invocation to an Oracle Enterprise Scheduler cluster, server affinity must be enabled: the property weblogic.jndi.enableServerAffinity must be set to true in the JNDI context. If oracle.as.scheduler.request.RemoteConnector is used, server affinity is set automatically.
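A remote EJB client sets this property in its JNDI environment. The sketch below only builds the environment map; the provider URL and helper function are illustrative assumptions, while weblogic.jndi.enableServerAffinity and weblogic.jndi.WLInitialContextFactory are the standard WebLogic names.

```python
# Illustrative sketch of the JNDI environment a remote EJB client would pass
# to its initial context. The provider URL is an assumption.

def jndi_environment(provider_url):
    return {
        "java.naming.factory.initial": "weblogic.jndi.WLInitialContextFactory",
        "java.naming.provider.url": provider_url,
        "weblogic.jndi.enableServerAffinity": "true",
    }
```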
This section contains the following topics:
A.3.5.1 Request Processor Failover
Oracle Enterprise Scheduler includes a request processor component, which represents a single Managed Server in the Oracle Enterprise Scheduler cluster. Request processors process job requests, such that job execution is connected to one or more request processors.
If jobs are targeted at any of a number of request processors, they are not dependent on a particular request processor. If a job is targeted at a particular request processor, any jobs tied to that request processor execute only when the Managed Server is available and an active workshift exists for the job.
A.3.5.2 External Component Failover
Oracle Enterprise Scheduler interacts with Oracle Fusion Middleware and other components such as Oracle SOA Suite, Oracle ADF Business Components, and so on. If one of these external components fails, running jobs may fail.
You can prevent external component failure from affecting jobs using proper configuration. Table A-1 lists the external components that may fail, along with the steps to take to prevent failed Oracle Enterprise Scheduler jobs.
Table A-1 Oracle Enterprise Scheduler External Component Failover
External Component | Steps to Prevent Failure
Oracle WebCenter Portal | Integrate with a cluster of Oracle WebCenter Portal services through a load balancer.
Oracle RAC Database | Use multi-DS for Oracle RAC database integration.
Oracle SOA Suite, Oracle ADF, and so on | Configure retries for jobs that depend on these components.
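The retry behavior in the last row of Table A-1 can be sketched as follows. The function, the retry count, and the flaky job are illustrative assumptions; in practice the limit comes from the retries configured on the job definition.

```python
# Illustrative sketch of job retries: a job that depends on a temporarily
# unavailable component fails, then succeeds on a later attempt.

def run_with_retries(job, max_retries):
    """Run a job, retrying on failure up to max_retries additional times."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return job(), attempts
        except RuntimeError:
            if attempts > max_retries:
                raise
```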
A.3.6 Scalability
Horizontal scalability (adding Managed Servers on different machines) tends to enable better performance than vertical scalability (adding Managed Servers on the same machine).
Use standard Oracle WebLogic Server cluster scaling methodologies for horizontal scaling. For more information about Oracle WebLogic Server clustering, see the “High Availability for WebLogic Server" chapter in the Oracle Fusion Middleware High Availability Guide. You can increase the concurrent processing of jobs within a work assignment by increasing the thread allocation of the request processor (by editing the workshift for the request processor) or by binding the work assignment to more than one request processor. For more information, see Creating or Editing a Workshift and Creating or Editing a Work Assignment.
A.3.7 Backup and Recovery
Following are the backup and recovery guidelines for various components:
Components stored on the file system: Product binaries, deployed application EAR files and standard Oracle WebLogic Server files in the domain root.
Changes to the file system: The file system artifacts change when new EAR files are deployed or when the product is patched.
Data stored in the database: The database stores all metadata and run-time data.
Changes to database artifacts: Metadata changes when metadata is created and deployed from Oracle JDeveloper or Fusion Middleware Control. Runtime data changes when jobs are submitted, undergo state changes, and so on.
There is no consistency requirement between the artifacts stored on the file system and those in the database. The file system stores EAR files and temporarily stores scheduled job output and log files.
A.4 Managing an Oracle Enterprise Scheduler Cluster
Managing an Oracle Enterprise Scheduler cluster involves starting the cluster, propagating configuration changes throughout the cluster, deploying applications and handling unexpected behavior.
This section contains the following topics:
A.4.1 Starting and Stopping the Cluster
Oracle Enterprise Scheduler uses standard J2EE components. As such, Oracle WebLogic Server determines the startup sequence. Oracle Enterprise Scheduler also allows implementing throttling to prevent surges in load.
When stopping a cluster, Oracle Enterprise Scheduler and all local Java jobs terminate. Oracle Enterprise Scheduler also attempts to stop all local binary jobs. However, asynchronous jobs such as SOA or PL/SQL jobs continue. The asynchronous callback from SOA or Oracle ADF Business Component services cannot be delivered if the entire Oracle Enterprise Scheduler cluster is down.
In the case of an abrupt shutdown, the server attempts to recover any on-going transactions.
A.4.2 Propagating Configuration Changes to the Cluster
There are two types of configuration: container or server, and job metadata configuration. Job metadata is stored in Oracle Metadata Repository. The process of configuring the server is the same as that of configuring a standard Oracle WebLogic Server, as part of platform configuration maintained by the Oracle WebLogic Server Configuration Framework. Job metadata configuration changes are deployed from Oracle JDeveloper.
At the cluster level, any configuration changes are propagated by deploying EAR files or metadata. Configuration data is stored either in the database or an EAR file. You can modify the configuration files in the EAR file, such as ess-config.xml, using the MBean browser in Fusion Middleware Control.
Cluster members are independent, sharing only the database. There is no communication among members of a cluster.
A.4.3 Deploying Applications to the Cluster
Applications are deployed to the cluster as standard EAR files, using the standard J2EE deployment mechanisms defined by Oracle WebLogic Server. EAR files can be deployed without restarting the server.
An application deployment includes an EAR file, which contains a JAZN and MAR file. The JAZN file, which contains access privileges for the scheduled jobs, is stored in LDAP under the control of Oracle Authorization Policy Manager. The MAR file contains metadata, and is stored in MDS. Oracle WebLogic Server deploys the application EAR file to all Managed Servers in the cluster.
A.4.4 Failures and Expected Behavior
In the event of failure, the main way to ensure the continuation of job processing is to configure a cluster of Oracle Enterprise Scheduler servers. If a server fails, another node in the cluster transitions all jobs running on the failed server to the relevant state. Synchronous jobs, for example, end in an error state and may be retried, depending upon whether retries are configured.
In order to enable high availability for the data tier, use Oracle RAC.
This section contains the following topics:
You can configure retries for jobs and job time outs for asynchronous jobs. For more information about configuring retries and time-outs for a job, see Creating a Job Request.
A.4.4.2 Death Detection and Restart
In order to enable death detection and recovery, each Oracle Enterprise Scheduler cluster node updates its record in the database every minute. This is called a heartbeat. Other nodes monitor the heartbeat, and when the record does not change for a period of time, the server is assumed to be dead. When a server death is detected, each job running on that server is handled. Synchronous jobs are marked as completed with errors, whereas asynchronous jobs continue to run remotely. If retry has been configured, the job marked as completed with errors restarts. Death detection tends to take about ten minutes.
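The heartbeat mechanism can be sketched as follows. The one-minute heartbeat interval and roughly ten-minute detection time come from the text above; the class, its methods, and the exact threshold are illustrative assumptions, not the Oracle Enterprise Scheduler implementation.

```python
# Illustrative sketch of heartbeat-based death detection (not Oracle code).

HEARTBEAT_INTERVAL = 60      # seconds; each node updates its record every minute
DEAD_AFTER = 10 * 60         # assumed threshold: no update for about ten minutes

class HeartbeatTable:
    """In-memory stand-in for the database records that cluster nodes update."""

    def __init__(self):
        self.last_seen = {}

    def beat(self, node, now):
        """Record a heartbeat for a node at time `now` (seconds)."""
        self.last_seen[node] = now

    def dead_nodes(self, now):
        """Nodes whose record has not changed for longer than the threshold."""
        return sorted(n for n, t in self.last_seen.items()
                      if now - t > DEAD_AFTER)
```

A monitoring node would call dead_nodes periodically and hand off the jobs of any node reported dead.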
On death detection, the output and log files of a node must be accessible from another node. As such, the file directory containing the job output and log files must be located on a shared file system. This directory is listed in the file