Oracle Enterprise Scheduler provides the ability to run different job types, including Java, PL/SQL, and spawned jobs. Jobs can run on demand, or be scheduled to run in the future.
Oracle Enterprise Scheduler provides scheduling services for the following purposes:
Distributing job request processing across a grid of application servers.
Running Java, PL/SQL, and process or spawned jobs.
Processing multiple jobs concurrently.
Running the same job in different languages.
Using Oracle JDeveloper, application developers can create and implement jobs; Oracle Enterprise Scheduler then runs those jobs. APIs provide an interface between Oracle Enterprise Scheduler and jobs executed within applications developed in JDeveloper.
The Oracle JDeveloper extensions to Oracle Enterprise Scheduler enable the following:
Running scheduled Oracle Business Intelligence Publisher (Oracle BI Publisher), spawned, Java, PL/SQL, Perl, SQL*Plus, SQL*Loader, and C jobs.
Running the same job in multiple locales, time zones, currencies, and so on.
Creating log and output files for jobs, as well as acting upon those files, such as enabling notifications.
Creating Oracle Application Development Framework (Oracle ADF) task flows to schedule jobs and job sets, as well as monitor job requests.
Before you begin:
Install Oracle Enterprise Scheduler to Oracle WebLogic Server. For more information, see Chapter 2, "Setting Up Your Development Environment."
The following standards and guidelines apply to working with extensions to Oracle Enterprise Scheduler:
Always use the preconfigured job types provided when defining metadata for job definitions.
Submitting job requests from an Oracle Fusion application requires developing the following components:
A job definition, created in JDeveloper
The Java, PL/SQL, SQL*Loader, SQL*Plus, Perl, C, or host scripts job implementation
A user interface enabling end users to submit job requests and specify additional properties for the job
A wizard enables defining a new job within the context of an Oracle Fusion application. The job can be any one of the following types: Java, PL/SQL, SQL*Loader, SQL*Plus, Perl, C, or host scripts.
Creating and implementing a scheduled job in JDeveloper involves creating a package or class from which to call the job, as well as defining a job definition. The job must then be deployed and tested, and a job request submission interface defined.
To create and implement a scheduled job in JDeveloper, do the following:
Once a job has been defined, it is implemented in Java, PL/SQL, SQL*Loader, SQL*Plus, Perl, C, or a host script. The Oracle Fusion application uses the provided APIs to request that Oracle Enterprise Scheduler run the implemented jobs.
An Oracle ADF interface is provided to enable application end users to submit job requests from an Oracle Fusion application. The Oracle ADF interface is integrated into an Oracle Fusion application. As soon as a job request is submitted through the interface, Oracle Enterprise Scheduler runs the job as scheduled.
To submit a job request, you must first create a job definition.
Related Links
The following documents provide additional information related to subjects discussed in this section:
For more information about defining an Oracle BI Publisher job, see the Oracle Fusion Middleware Report Designer's Guide for Oracle Business Intelligence Publisher, Oracle Fusion Middleware Administrator's Guide for Oracle Business Intelligence Publisher, and the Oracle Fusion Developer's Guide for Oracle Business Intelligence Publisher.
For more information about using entity objects, see the section Creating a Business Domain Layer Using Entity Objects in the Developing Fusion Web Applications with Oracle Application Development Framework guide.
For more information about log levels, see the Developing Applications for Oracle Enterprise Scheduler guide.
A job definition and job type are required to submit a job request.
Job Definition: This is the basic unit of work that defines a job request in Oracle Enterprise Scheduler.
Job Type: This specifies an execution type and defines a common set of properties for a job request.
The extensions to Oracle Enterprise Scheduler provide the following execution types:
JavaType: for job definitions that are implemented in Java and run in the container.
SQLType: for job definitions that run as PL/SQL stored procedures in a database server.
CJobType: for job definitions that are implemented in C and run in the container.
PerlJobType: for job definitions that are implemented in Perl and run in the container.
SqlLdrJobType: for job definitions that are implemented in SQL*Loader and run in the container.
SqlPlusJobType: for job definitions that are implemented in SQL*Plus and run in the container.
BIPJobType: for job definitions that are executed as Oracle BI Publisher reports. Oracle BI Publisher jobs require configuring the reportID parameter.
HostJobType: for job definitions that run as host scripts executed from the command line.
If your job definition requires additional properties to be filled in by end users at submission time, you must create a view object that defines these properties and associate it with the job definition. The view object is later associated with the user interface you create to allow end users to submit job requests along with the property values at submission time.
For more information about defining properties to be filled in at runtime by end users, see About Creating an User Interface for Submitting Job Requests.
To create a new job definition in JDeveloper, do the following:
In Oracle JDeveloper, create an Oracle Fusion web application by clicking the Application Menu icon on the Application Navigator, selecting New Project > Projects > Generic Project and clicking OK.
Right-click the project and select Properties. In the Resources tab, add the directory $MW_HOME/jdeveloper/integration/ess/extJobTypes.
If your job includes any properties to be filled in by end users using an Oracle ADF user interface at runtime, create an ADF Business Components view object with validation and the parameters to be filled in by end users.
Right-click the Model project and select Properties. In the Resource Bundle section, configure one bundle per file and select resource bundle type Xliff Resource Bundle.
Define attributes for the view objects sequentially (ATTRIBUTE1, ATTRIBUTE2, and so on), with an attribute for each required parameter. Use ADF Business Components attribute control hints to specify the required prompt, validation, and formatting for each parameter.
Add the property parametersVO to your job definition and specify the fully qualified path of the view object as its value. For example, set parametersVO to oracle.my.package.TestVO. A maximum of 100 attributes can be used for parametersVO, and the attributes should be named incrementally: ATTRIBUTE1, ATTRIBUTE2, and so on.
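The naming rule above (sequential ATTRIBUTE1, ATTRIBUTE2, … with at most 100 attributes) can be checked mechanically. The following is an illustrative Python sketch, not part of Oracle Enterprise Scheduler:

```python
# Validate that view object attribute names follow the convention required
# for parametersVO: ATTRIBUTE1..ATTRIBUTEn, sequential, at most 100.
# Illustrative helper only; the name validate_attributes is hypothetical.
def validate_attributes(names):
    if len(names) > 100:          # parametersVO allows at most 100 attributes
        return False
    # Names must be exactly ATTRIBUTE1..ATTRIBUTEn in order.
    return names == [f"ATTRIBUTE{i}" for i in range(1, len(names) + 1)]

print(validate_attributes(["ATTRIBUTE1", "ATTRIBUTE2", "ATTRIBUTE3"]))  # True
print(validate_attributes(["ATTRIBUTE1", "ATTRIBUTE3"]))                # False: gap
```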
Define the following required properties:
jobDefinitionName: The short name of the job.
jobDefinitionApplication: The short name of the application running the job.
jobPackageName: The name of the package running the job.
Additional properties can be defined as shown in Table 65-1.
Table 65-1 Additional Job Definition Properties
Property | Description |
---|---|
 | An optional string value that can be used to communicate details of the final state of the job. This property value is displayed in the details section of the job request in the UI used to monitor job request submissions. It can be useful for displaying a short explanation as to why a request ended in an error or warning state. |
CustomDatacontrol | The name of the data control for the application to which the parameter task flow is bound, for example: <parameter name="CustomDatacontrol" data-type="string">ExtParameterAM</parameter>. Use this property when adding a custom task flow to an Oracle ADF user interface used to submit job requests at runtime. For more information, see How to Add a Custom Task Flow to an User Interface for Submitting Job Requests. |
 | The suffix of the output file. |
 | A Boolean parameter that enables or disables the accumulation of time statistics (Y or N). |
 | A numerical value that indicates the level of tracing control for the job. |
 | Stores the preferred language in which the job request should run. |
 | The numeric characters used in the preferred language in which the job runs. |
 | The territory of the preferred language in which the job runs. |
 | Specifies the name of the web module for the Oracle Enterprise Scheduler UI application to use as a portlet when submitting a job request. The Oracle Enterprise Scheduler central UI looks up the producer from the topology based on the registered producer application name. |
incrementProc | Enables a PL/SQL procedure, evaluated at runtime, that calculates the next set of date parameter values for a recurring request. Enter the name of the PL/SQL procedure. The procedure expects one argument: a number signifying the change in milliseconds between the start dates of the first and current requests. See the sample incrementProc procedure following this table. |
incrementProcArgs | A list of comma-separated date arguments to be incremented. In the sample incrementProc procedure following this table, this list identifies the arguments to increment. |
 | The level at which events are logged (between 0 and 4). Each job type has a default log level. |
 | This flag enables setting the database optimizer mode for the job. Optimizer mode is useful for fine-tuning performance. |
parametersVO | The ADF Business Components view object you define for additional properties to be entered at runtime by end users using an Oracle ADF user interface. |
ParameterTaskflow | Enter the name of the task flow as a parameter, for example: <parameter name="ParameterTaskflow" data-type="string">/WEB-INF/oracle/apps/prod/project/ParamTestTaskFlow.xml#ParamTestTaskFlow</parameter>. Use this property when adding a custom task flow to an Oracle ADF user interface used to submit job requests at runtime. For more information, see How to Add a Custom Task Flow to an User Interface for Submitting Job Requests. |
reportID | The Oracle BI Publisher report value specified in the Oracle BI Publisher repository. Required parameter for Oracle BI Publisher jobs only. |
 | Enables setting a database rollback segment for the job, which is used until the first commit. |
 | A Boolean parameter (Y or N) that controls whether the job displays in the job request submission user interface (see About Creating an User Interface for Submitting Job Requests). |
 | Enables elevating access privileges for completing a scheduled job. For more information about elevating access privileges for the completion of a particular job, see About Elevating Access Privileges for a Scheduled Job. |

The sample incrementProc procedure referenced in Table 65-1:

-- incr_test - Sample PL/SQL incrementProc procedure
-- This procedure gets the list of arguments to be incremented using the
-- incrementProcArgs property and increments each argument by the delta
-- provided. This behavior is identical to the default behavior if no
-- incrementProc is set for the job.
procedure incr_test( delta IN number )
is
  request_id   number;
  incrProcArgs varchar2(200);
  curr_arg_n   varchar2(100);
  curr_arg_v   varchar2(2000);
  del_pos      number := 0;
  prev_pos     number := 1;
  old_date     date;
  new_date     date;
  delta_days   number;
begin
  request_id := FND_JOB.REQUEST_ID;
  delta_days := delta / (1000*60*60*24);
  -- incrProcArgs must be defined for this procedure to be called.
  incrProcArgs := ESS_RUNTIME.GET_REQPROP_VARCHAR(request_id,
                      FND_JOB.INCR_PROC_ARGS_P) || ',';
  LOOP
    del_pos := INSTR(incrProcArgs, ',', prev_pos);
    EXIT WHEN del_pos = 0;
    curr_arg_n := FND_JOB.SUBMIT_ARG_PREF_P ||
                  SUBSTR(incrProcArgs, prev_pos, del_pos - prev_pos);
    curr_arg_v := ESS_RUNTIME.GET_REQPROP_VARCHAR(request_id, curr_arg_n);
    old_date := FND_DATE.CANONICAL_TO_DATE(curr_arg_v);
    new_date := old_date + delta_days;
    ESS_RUNTIME.UPDATE_REQPROP_VARCHAR(request_id, curr_arg_n,
                      FND_DATE.DATE_TO_CANONICAL(new_date));
    prev_pos := del_pos + 1;
  END LOOP;
end incr_test;
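The incrementProc sample above walks a comma-separated incrementProcArgs list and shifts each date argument by the delta (milliseconds between the start dates of the first and current requests). The same bookkeeping, sketched in Python for illustration only (the function name and data shapes are hypothetical, not an Oracle API):

```python
from datetime import datetime, timedelta

def increment_args(args, values, delta_ms):
    """args: comma-separated names, as in incrementProcArgs;
    values: name -> datetime; returns the shifted values."""
    # Convert the millisecond delta to days, as the PL/SQL sample does.
    delta_days = delta_ms / (1000 * 60 * 60 * 24)
    return {name: values[name] + timedelta(days=delta_days)
            for name in args.split(",") if name}

vals = {"START_DATE": datetime(2024, 1, 1)}
out = increment_args("START_DATE", vals, 86_400_000)   # one day in milliseconds
print(out["START_DATE"])   # 2024-01-02 00:00:00
```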
Create a new job. From the New Gallery, select Business Tier > Enterprise Scheduler Metadata and click Job Definition.
In the Job Definition Name & Location page in the Job Definition Creation wizard, do the following:
Name: Enter a name for the job.
JobType: Select the job type from the drop-down list.
Click Finish. The new job definition displays.
Edit the following properties in the job definition as required for the selected job type:
JavaJobType: Uncheck the read-only checkbox next to className and set its value to the business logic class.
PlsqlJobType: Uncheck the read-only checkbox next to procedureName and set its value to the name of the procedure (such as myprocedure.proc). Create a new parameter named numberOfArgs and set it to the number of job submission arguments, excluding errbuf and retcode.
CJobType: Add the parameter executableName and set its value to the name of the C job to be executed. The executable file identified by the executableName parameter must exist in the directory $APPLICATIONS_BASE/$APPLBIN.
PerlJobType: Add the parameter executableName and set its value to the name of the Perl script.
SqlLdrJobType: Add the parameter executableName and set its value to the name of the control file to be executed (located under PRODUCT_TOP/$APPLBIN). Add SQL*Loader options (such as direct=yes) as a sqlldr.directoption parameter in the job definition.
SqlPlusJobType: Add the parameter executableName and set its value to the name of the SQL*Plus job script to be executed (located under PRODUCT_TOP/$APPLSQL).
HostJobType: Add the parameter executableName and set its value to the name of the host script job to be executed. The executable file identified by the executableName parameter must exist in the directory PRODUCT_TOP/$APPLBIN.
Note:
Configure the $APPLBIN and $APPLSQL variables in the environment.properties file. These variables point to the location of executable files under PRODUCT_TOP and enable the extensions to Oracle Enterprise Scheduler to locate the jobs to be run. Typically, these variables are set in a preexisting environment properties file in the system.
A file group is a collection of output files such as text files, XML files, and so on. File groups enable categorizing files together for a specific purpose, such as file groups for human resources or financial reports.
File groups are used for postprocessing jobs such as Business Intelligence Publisher jobs. Using postprocessing actions, the results of a job can be saved as an HTML file, for example, or printed. File groups specify the type of postprocessing action to be taken for a given job.
There are two types of file groups: output and layout. Postprocessing layout actions create additional output files using the job request output files. For example, an XML job output file can be processed as an HTML or PDF file.
Postprocessing output actions act upon job request output files by printing, faxing, or emailing the files, for example. Output postprocessing actions can be taken on job request output files, as well as files created by layout postprocessing actions. For example, a job request output XML file can be converted to a PDF file using layout postprocessing actions, and then emailed using output postprocessing actions.
To define file group properties, do the following:
In the job definition for which you want to define postprocessing, define a file group.
Name the property Program.FMG.
For the value of the property, enter a list of comma-separated File Management Groups, where each file group is prefixed by an L or O to indicate a layout or output file group, respectively. A sample file group property is shown in Example 65-1.
Three file groups are listed in this example.
In the job definition, create a property containing a regular expression used to filter the files in the output work directory of the job request. Any output files that match the filter will be part of the relevant file group.
Example regular expressions are shown in Example 65-2, Example 65-3, and Example 65-4.
An example of file group properties in a job definition is shown in Example 65-5.
These properties specify the use of the Business Intelligence Publisher postprocessing action on the MYXML
file group, followed by the print postprocessing action on either ALL
or PDF
file groups.
Optionally, rename the file group and store it in the Oracle Metadata Service repository so that it displays in a more user-friendly way in the scheduled job request submission UI.
Example 65-1 File Group Property Sample Value
Program.FMG = L.MYXML, O.ALL, O.PDF
Example 65-2 File Group Regular Expression Filtering for All Files with the Suffix XML
MYXML = '.*\.xml$'
Example 65-3 File Group Regular Expression Filtering for All Files
ALL = '.*$'
Example 65-4 File Group Regular Expression Filtering for All Files with the Suffix PDF
PDF = '.*\.pdf$'
Example 65-5 File Group Properties with File Group Regular Expression Filtering
Program.FMG = L.MYXML, O.ALL, O.PDF
MYXML = '.*\.xml$'
ALL = '.*$'
PDF = '.*\.pdf$'
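The file-group mechanics described above — parse Program.FMG into layout (L) and output (O) groups, then filter the output work directory through each group's regular expression — can be sketched as follows. This is an illustrative Python model (Python regex syntax; the function and property names are stand-ins, not Oracle Enterprise Scheduler APIs):

```python
import re

# Hypothetical job definition properties, mirroring Example 65-5.
props = {
    "Program.FMG": "L.MYXML, O.ALL, O.PDF",
    "MYXML": r".*\.xml$",
    "ALL": r".*$",
    "PDF": r".*\.pdf$",
}

# Files in the job request's output work directory.
output_files = ["report.xml", "report.pdf", "trace.log"]

def file_groups(props, files):
    """Map each file group name to (kind, matching files)."""
    groups = {}
    for entry in props["Program.FMG"].split(","):
        kind, name = entry.strip().split(".", 1)   # 'L' = layout, 'O' = output
        pattern = re.compile(props[name])
        groups[name] = (kind, [f for f in files if pattern.match(f)])
    return groups

for name, (kind, files) in file_groups(props, output_files).items():
    print(name, kind, files)
```

Here the MYXML layout group would pick up report.xml for postprocessing, while the ALL and PDF output groups select the files to be printed.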
The job definition is written to an XML file called <job name>.xml.
Configuring a spawned job involves creating an environment file and configuring an Oracle wallet.
Spawned jobs require an environment.properties file to provide the correct environment for execution. The environment.properties file should be located in the config/fmwconfig directory under the domain.
Additional environment variables may be added in a similar file in the same directory called env.custom.properties. Variables defined in this file take precedence over those in the environment.properties file.
Similarly, server-specific environment variables may be set in the server config directory in files called environment.properties and env.custom.properties.
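The layering described above means values in env.custom.properties win over those in environment.properties. A minimal sketch of that precedence, using illustrative variable values (the merge logic below is a model, not Oracle's implementation):

```python
# Merge the two property files: custom values take precedence.
def merge_env(environment_props, custom_props):
    merged = dict(environment_props)   # start from environment.properties
    merged.update(custom_props)        # env.custom.properties overrides
    return merged

# Hypothetical contents of the two files.
base   = {"AFPERL": "/usr/bin/perl", "ATGPF_TOP": "/u01/atgpf"}
custom = {"AFPERL": "/opt/perl/bin/perl"}

merged = merge_env(base, custom)
print(merged["AFPERL"])   # the env.custom.properties value wins
```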
The following variables are used to identify the correct interpreters for various spawned job types:
AFSQLPLUS: The executable for SQL*Plus scripts.
AFSQLLDR: The executable for SQL*Loader uploads.
AFPERL: The Perl interpreter.
ATGPF_TOP: The TOP directory for ATGPF files, needed to locate key files for SQL*Plus and Perl jobs.
The following environment properties are available to all spawned jobs:
REQUESTID: The request ID of the current job request.
WORK_DIR_ROOT: The directory on the local file system where the request can perform file operations.
OUTPUT_WORK_DIR: The directory to which the job writes all output files.
LOG_WORK_DIR: The directory to which the job writes all log files.
INPUT_WORK_DIR: The directory to which input files are saved before the job is spawned.
OUTFILE_NAME: The default name for the job output file.
LOGFILE_NAME: The name of the log file for the job.
USER_NAME: The name of the user submitting the job. The job runs in the context of this user.
REQUEST_HANDLE: The Oracle Enterprise Scheduler request handle for the current request.
The environment variables must point to the client ORACLE_HOME and environment so that spawned jobs can connect to the database.
Note:
Ensure the variables you define in the environment.properties file do not include any trailing spaces. Follow the guidelines required by java.util.Properties.
Restart the server after editing the environment.properties file.
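Trailing spaces survive properties-file parsing and silently corrupt values such as paths, which is why the note above warns against them. A quick sanity check over an environment.properties file might look like this (illustrative Python; the checker is not an Oracle tool):

```python
# Flag property lines whose value ends in trailing whitespace.
def find_trailing_spaces(lines):
    bad = []
    for n, line in enumerate(lines, start=1):
        if "=" in line and not line.startswith("#"):
            value = line.split("=", 1)[1].rstrip("\n")
            if value != value.rstrip():      # trailing space or tab in value
                bad.append(n)
    return bad

# Hypothetical file contents: line 1 has a trailing space after the path.
sample = [
    "ORACLE_HOME=/u01/oracle \n",
    "TNS_ADMIN=/u01/oracle/network/admin\n",
]
print(find_trailing_spaces(sample))   # [1]
```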
To create an environment file, do the following:
Use the TNS_ADMIN and ORACLE_HOME variables specified in the environment.properties file created in About Creating an Environment File for Spawned Jobs.
A configured Oracle wallet enables spawned jobs to connect to the database at the command line. A provisioned Oracle Fusion Applications environment will have this wallet preconfigured.
To configure an Oracle wallet for the spawned job, do the following:
Example 65-6 Creating a Wallet
If you are using a Linux operating system, use these commands:
cd $TNS_ADMIN
mkdir wallet
mkstore -wrl ./wallet -create
If you are using a Windows operating system, use these commands:
cd %TNS_ADMIN%
mkdir wallet
mkstore -wrl wallet -create
Example 65-7 Creating Wallet Credentials
If you are using a Linux operating system, use this command:
mkstore -wrl ./wallet -createCredential <$TWO_TASK> fusion_runtime <fusion_runtime_password password>
If you are using a Windows operating system, use this command:
mkstore -wrl wallet -createCredential <%TWO_TASK%> fusion_runtime <fusion_runtime_password password>
Example 65-8 Create a File Called sqlnet.ora
If you are using a Linux operating system, use these commands:
SQLNET.WALLET_OVERRIDE = TRUE
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <$TNS_ADMIN>/wallet)
    )
  )
If you are using a Windows operating system, use these commands:
SQLNET.WALLET_OVERRIDE = TRUE
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <%TNS_ADMIN%>\wallet)
    )
  )
Example 65-9 Create a File Called tnsnames.ora
dbname =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = host.example.com)
      (PORT = 1521)
    )
    (CONNECT_DATA = (SID = sidname))
  )
Example 65-10 Set Directory and File Permissions
chmod 755 wallet
chmod 744 wallet/cwallet.sso
Example 65-11 Connect to the Wallet
If you are using a Linux operating system, use this command:
sqlplus /@<$TWO_TASK>
If you are using a Windows operating system, use this command:
sqlplus /@<%TWO_TASK%>
Oracle Enterprise Scheduler provides a plug-in to migrate spawned job environment properties from the testing environment to the production environment. The migration plug-in moves the data-sources, LDAP, JMS, and other configurations that are part of the domain of the testing environment to the production environment.
The plug-in aims to incorporate properties created for spawned job processing. As such, the test to production plug-in parses the environment.properties file created in the testing environment and migrates it to the production environment. The file resides in the location specified in the testing environment, as defined in the system property ess.config.dir.
As the properties to be moved reside in a flat file, it is unnecessary to annotate any MBeans with the property MovableProperty. Changes in the Oracle Enterprise Scheduler connections.xml file are handled by the plug-in.
The migration plug-in includes the following components: CopyConfig, MovePlans, and PasteConfig.
CopyConfig: This script accomplishes the following tasks.
Reads the EssConfigDir location property from the system.
Reads and parses the environment.properties file.
Creates a MovableComponent with the name ESS-EXT and another MovableComponent with the name ENV-PROPS, which it then adds to the ESS-EXT movable component.
Loops through every property in the environment.properties file and adds it to the ESS-EXT MovableComponent as a ConfigProperty.
Moves all the properties available in the properties file of the testing environment to the same file of the production environment.
Analyzes properties to determine whether they are changeable in the production environment. If they are, those properties are defined as READ_WRITE. If they are not changeable, the properties are defined as READ_ONLY.
Returns the list of MovableComponent objects.
In post-processing, sets the EssConfigDir property to componentProperties so that the PasteConfig script can fetch the correct value.
Generates the MovePlan.xml file, whose values can be modified as required for the production environment.
MovePlans: The environment.properties file is located outside the domain home or Oracle Fusion Middleware home. As such, a test to production plug-in must extract these properties and make them movable through the test to production framework. The plug-in extracts the key and value of each property defined in the environment.properties file and makes them available in MovePlan.xml so that the values may be modified to suit the needs of the production environment.
An example of the hierarchy of a MovePlan.xml file is shown in Example 65-12.
PasteConfig: This script writes all the required values to the environment.properties file of the production environment. It accomplishes the following tasks.
Gets the MovableComponent with type ESS-EXT from FMWT2PPasteBean, then gets the internal MovableComponent with the name ENV-PROPS from ESS-EXT.
Creates a new environment.properties file in the UserFileDir location of the production environment.
Extracts the ConfigProperty values added to this MovableComponent and constructs an output stream.
Writes all the values to the production environment.properties file.
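The CopyConfig steps above — wrap the environment.properties entries in an ESS-EXT/ENV-PROPS component hierarchy and mark each property READ_WRITE or READ_ONLY — can be modeled schematically. The component names come from the text; the data shapes and function below are illustrative, not the real test to production framework API:

```python
# Build a movable-component structure from environment.properties entries.
def copy_config(env_props, writable_keys):
    component = {"name": "ESS-EXT",
                 "children": [{"name": "ENV-PROPS", "properties": []}]}
    for key, value in env_props.items():
        component["children"][0]["properties"].append({
            "name": key,
            "value": value,
            # Changeable in production -> READ_WRITE, otherwise READ_ONLY.
            "scope": "READ_WRITE" if key in writable_keys else "READ_ONLY",
        })
    return component

plan = copy_config({"APPL_TOP": "/machine/user2/Test/"}, {"APPL_TOP"})
print(plan["children"][0]["properties"][0]["scope"])   # READ_WRITE
```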
Example 65-12 MovePlan.xml File
movePlan
|_movableComponent (componentType:J2EEDomain, componentName:base_domain)
//Default root movable component
|_movableComponent (componentType:ESS-EXT, componentName:'Oracle Enterprise Scheduler
Extension components')
|_moveDescriptor
|_configGroup (type: ENVIRONMENT_PROPERTIES)
|_configProperty(id="/machine/user/instance/ess/config")
// Location of the environment.properties
|_<configProperty>
<name>APPL_TOP</name>
<value>/machine/user2/Test/</value>
<itemMetadata>
<dataType>STRING</dataType>
<scope>READ_WRITE</scope>
</itemMetadata>
</configProperty>
|_configProperty-2
|_configProperty-3
|_configProperty-4
Implementing a PL/SQL scheduled job requires creating a job definition and creating a PL/SQL package.
Related Links
The following documents provide additional information related to subjects discussed in this section:
For more information about implementing a PL/SQL stored procedure scheduled job, see the chapter "Creating and Using PL/SQL Jobs" in Developing Applications for Oracle Enterprise Scheduler.
Run subrequests through Oracle Enterprise Scheduler using the Oracle Enterprise Scheduler APIs to access Oracle Enterprise Scheduler.
A PL/SQL stored procedure scheduled job should have a signature whose first two arguments are errbuf and retcode. The remaining arguments are used as required for defining job parameters. All arguments have the data type varchar2.
Create a job definition as described in About Creating a Job Definition.
PL/SQL jobs require setting an additional property, numberOfArgs, in the job definition. This property identifies the number of job submission arguments (not including the required arguments errbuf and retcode).
Oracle Enterprise Scheduler provides runtime PL/SQL APIs for implementing PL/SQL jobs and running the jobs using Oracle Enterprise Scheduler. A view object is defined and associated with the job definition for the job.
When creating a PL/SQL job, use the fusion database user. For information about granting access privileges to database users in the context of Oracle Fusion Applications, see Implementing Oracle Fusion Data Security.
To implement a PL/SQL scheduled job, do the following:
The sample PL/SQL job shown in Example 65-13 provides the signature of a PL/SQL procedure run as a job. The first two arguments to the PL/SQL procedure, errbuf and retcode, are required. The remaining arguments are properties filled in by end users and passed to Oracle Enterprise Scheduler when the job is submitted.
The example shown in Example 65-13 illustrates a sample PL/SQL job that uses the PL/SQL API.
The sample shown in Example 65-14 illustrates a PL/SQL job with a subrequest submission. The no_requests argument identifies the number of subrequests that must be submitted.
Example 65-13 Running a Job Using the PL/SQL API
procedure fusion_plsql_sample(
  -- The first two arguments are required: errbuf and retcode. The errbuf is
  -- logged when a job request ends in a warning or error state, to provide a
  -- quick indication as to why the request ended in that state.
  errbuf  out NOCOPY varchar2,
  retcode out NOCOPY varchar2,
  -- Job submission arguments, as collected from the view object associated
  -- with the job as configured in the job definition. The view object is used
  -- to present a user interface to end users, allowing them to enter the
  -- following properties. These values are submitted by the end user.
  run_mode  in varchar2 default 'BASIC',
  duration  in varchar2 default '0',
  p_num     in varchar2 default NULL,
  p_date    in varchar2 default NULL,
  p_varchar in varchar2 default NULL)
is
begin
  -- Write log file content using the FND_FILE API.
  FND_FILE.PUT_LINE(FND_FILE.LOG, 'About to run the sample program');

  -- Implement the business logic of the job here.

  FND_FILE.PUT_LINE(FND_FILE.OUT, 'RUN MODE: ' || run_mode);
  FND_FILE.PUT_LINE(FND_FILE.OUT, 'DURATION: ' || duration);
  FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_NUM: ' || p_num);
  FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_DATE: ' || p_date);
  FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_VARCHAR: ' || p_varchar);

  -- Retrieve the job completion status, which is returned to Oracle
  -- Enterprise Scheduler.
  errbuf := fnd_message.get('FND', 'COMPLETED NORMAL');
  retcode := 0;
end;
Example 65-14 Submitting a Subrequest Using the PL/SQL Runtime API
procedure fusion_plsql_subreq_sample(
  errbuf      out NOCOPY varchar2,
  retcode     out NOCOPY varchar2,
  no_requests in varchar2 default '5'
) is
  sub_reqid          number;
  submitted_requests varchar2(100);
  jobProp            ess_runtime.request_prop_table_t;
begin
  -- Write log file content using the FND_FILE API.
  FND_FILE.PUT_LINE(FND_FILE.LOG, 'About to run the sample program with subrequest functionality');
  -- Requesting the PAUSED_STATE property set by the job identifies the request
  -- as having started for the first time or as restarting after being paused.
  if ess_runtime.get_reqprop_varchar(fnd_job.request_id, 'PAUSED_STATE') is null
  then
    -- First-time start. Implement the business logic of the job here.
    FND_FILE.PUT_LINE(FND_FILE.OUT, 'About to submit subrequests: ' || no_requests);
    -- Loop through all the subrequests.
    for req_cnt in 1..no_requests loop
      -- Retrieve the request handle and submit the subrequest.
      sub_reqid := ess_runtime.submit_subrequest(request_handle => fnd_job.request_handle,
                                                 definition_name => 'sampleJob',
                                                 definition_package => 'samplePkg',
                                                 props => jobProp);
      submitted_requests := submitted_requests || sub_reqid || ',';
    end loop;
    -- Pause the parent request.
    ess_runtime.update_reqprop_varchar(fnd_job.request_id, 'STATE', ess_job.PAUSED_STATE);
    -- Update the parent request with the state of the subrequests, enabling
    -- the job to retrieve the status during restart.
    ess_runtime.update_reqprop_varchar(fnd_job.request_id, 'PAUSED_STATE', submitted_requests);
  else
    -- Restarting: retrieve the job completion status and return it to
    -- Oracle Enterprise Scheduler.
    errbuf := fnd_message.get('FND', 'COMPLETED NORMAL');
    retcode := 0;
  end if;
end;
Oracle Enterprise Scheduler calls routines to initialize the context of the PL/SQL job, including PL/SQL global values, local values (such as language and territory), and request-specific values such as request ID and request handle.
The view object associated with the job definition displays a user interface so that end users may fill in values for each property. The Oracle Fusion web application calls Oracle Enterprise Scheduler using the provided APIs and submits the job request. Oracle Enterprise Scheduler runs the job, which calls the context routines and then runs the job logic. The job ends with a retcode value of 0, 1, 2, or 3, representing SUCCESS, WARNING, FAILURE, or BUSINESS ERROR, respectively. The Oracle Fusion web application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.
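The retcode-to-state mapping described above can be written as a simple lookup table. The state names come from the text; the table and function below are illustrative, not an Oracle Enterprise Scheduler API:

```python
# Map the retcode returned by a PL/SQL job to its completion state.
RETCODE_STATES = {0: "SUCCESS", 1: "WARNING", 2: "FAILURE", 3: "BUSINESS ERROR"}

def state_for(retcode):
    # PL/SQL job arguments are varchar2, so accept strings as well as numbers.
    return RETCODE_STATES.get(int(retcode), "UNKNOWN")

print(state_for("0"))   # SUCCESS
print(state_for(2))     # FAILURE
```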
Implementing a SQL*Plus scheduled job involves writing a SQL*Plus script and configuring an environment file for the job.
Run subrequests through Oracle Enterprise Scheduler using the Oracle Enterprise Scheduler APIs to access Oracle Enterprise Scheduler.
Implementing a SQL*Plus stored procedures job involves writing the SQL*Plus script, storing the script and configuring a spawned job environment.
To implement a SQL*Plus job, do the following:
Oracle Enterprise Scheduler provides runtime SQL*Plus APIs for implementing SQL*Plus jobs and running the jobs using Oracle Enterprise Scheduler.
This sample SQL*Plus job provides a signature of a SQL*Plus procedure run as a job. Any necessary arguments are properties filled in by end users and passed to Oracle Enterprise Scheduler when the job is submitted. A view object is defined and associated with the job definition for the job. The view object is then used to display a user interface so that end users may fill in values for each property. Finally, the sample prints to an output file.
Example 65-15 shows a sample SQL*Plus scheduled job, which is executed by a wrapper script.
Example 65-15 Implementing a SQL*Plus Scheduled Job
SET VERIFY OFF
SET linesize 132
WHENEVER SQLERROR EXIT FAILURE ROLLBACK;
WHENEVER OSERROR EXIT FAILURE ROLLBACK;
REM dbdrv: none
/* ----------------------------------------------------------------------*/
DECLARE
   errbuf    varchar2(240) := NULL;
   retval    boolean;
   run_mode  varchar2(200) := '&1';
BEGIN
   DBMS_OUTPUT.PUT_LINE(run_mode);
   update dual set dummy = 'Q';
   FND_FILE.PUT_LINE(FND_FILE.LOG, 'Parameter 1 = ' || nvl(run_mode,'NULL'));
   /* Print a test message to the log file and the output file */
   /* by making direct calls to FND_FILE.PUT_LINE from the SQL script. */
   FND_FILE.PUT_LINE(FND_FILE.LOG, ' ');
   FND_FILE.PUT_LINE(FND_FILE.LOG, '----------------------------------------------------------------');
   FND_FILE.PUT_LINE(FND_FILE.LOG, 'Printing a message to the LOG FILE ');
   FND_FILE.PUT_LINE(FND_FILE.LOG, '----------------------------------------------------------------');
   FND_FILE.PUT_LINE(FND_FILE.LOG, 'SUCCESS! ');
   FND_FILE.PUT_LINE(FND_FILE.LOG, ' ');
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, '----------------------------------------------------------------');
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, 'Printing a message to the OUTPUT FILE ');
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, '----------------------------------------------------------------');
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, 'SUCCESS! ');
   FND_FILE.PUT_LINE(FND_FILE.OUTPUT, ' ');
   retval := FND_JOB.SET_SQLPLUS_STATUS(FND_JOB.SUCCESS_V);
END;
/
COMMIT;
-- EXIT;  (Oracle Fusion Applications SQL*Plus jobs must not exit.)
Oracle Enterprise Scheduler calls routines in a wrapper script to initialize the context of the SQL*Plus job, including global values, local values (such as language and territory), and request-specific values such as request ID and request handle. The wrapper script introduces the prologue of commands shown in Example 65-16.
The Oracle Fusion application calls Oracle Enterprise Scheduler using the provided APIs. Oracle Enterprise Scheduler runs the job, and the final job status (SUCCESS, WARNING, BUSINESS ERROR, or FAILURE) is communicated to Oracle Enterprise Scheduler. The Oracle Fusion web application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.
Example 65-16 SQL*Plus wrapper script
SET TERM OFF
SET PAUSE OFF
SET HEADING OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET ESCAPE ON
WHENEVER SQLERROR EXIT FAILURE
Implementing a SQL*Loader scheduled job involves creating a SQL*Loader control file and configuring a spawned job environment.
Like all executable jobs (C, SQL*Plus, host, and Perl scripts), SQL*Loader jobs require an executable file that is located in the read-only APPLTOP folder. For SQL*Loader jobs, this is the control file. The control file determines which database tables are to be affected by the SQL*Loader command.
It is possible to use a dynamic control file for SQL*Loader subrequest jobs. A SQL*Loader job submitted as a subrequest using a dynamic control file must access the control file from the working directory of the parent job request, rather than the APPLTOP folder.
Keep in mind that the control file and data file must conform to the following SQL*Loader standards:
Place control files in the $APPLBIN directory under the product TOP. (Subrequests using dynamic control files must instead access the working directory of the parent job request.)
The control file's name must be the same as the executableName parameter in the job definition.
Ensure that the full path of the data file's location is the first submit argument to the job.
Add SQL*Loader options such as direct=yes, if needed, as the sqlldr.directoption parameter in the job definition.
Set the job log file as the SQL*Loader LOG parameter so it will automatically contain all SQL*Loader log messages.
Set the job output file as the SQL*Loader BAD parameter so it will automatically receive any output directed there. Alternatively, you can create two output files for a SQL*Loader job request:
<requestid>_bad.txt: This is the output of the BAD parameter.
<requestid>_discard.txt: This is the output of the DISCARD parameter; however, a discard file is not always generated.
To implement a SQL*Loader scheduled job, do the following:
Create a SQL*Loader control file (.ctl).
In the parent job request, make sure you have set the system property workDirectoryRoot to the working directory of the parent job request.
Alternatively, in the case of SQL*Loader subrequests only, configure the job to use a dynamically created control file.
In the job definition for the subrequest, configure the property execPRWD. Set this property to Y to enable the dynamic control file.
In the parent job request, configure the name of the control file using the executableName system property.
Enter the full path of the data file as the first submit argument to the job.
Store the control file under PRODUCT_TOP/$APPLBIN. Skip this step if you are implementing a SQL*Loader subrequest.
Configure the spawned job environment as described in About Configuring a Spawned Job Environment.
Test the file.
A sample SQL*Loader scheduled job is shown in Example 65-17.
What Happens When You Implement a SQL*Loader Scheduled Job Subrequest
When the SQL*Loader subrequest completes, the parent request can discover the status of the completed SQL*Loader job request as with any other subrequest. The log and output files are written to the content repository.
What Happens When You Implement a SQL*Loader Scheduled Job Subrequest Using a Dynamic Control File (execPRWD)
When the SQL*Loader subrequest runs, the command line creates the path to the control file by looking for the file named in the executableName system property under the directory indicated by the workDirectoryRoot system property in the parent job request.
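The path resolution described above can be sketched as follows. This is an assumption-laden illustration: the resolve method, the "/" joining, and the .ctl suffix are invented for the sketch; the actual command line may construct the path differently:

```java
// Illustrative sketch of the dynamic control file lookup described above:
// the control file path is formed from the parent request's workDirectoryRoot
// system property and the subrequest's executableName. The ".ctl" suffix and
// separator handling are assumptions made for this example.
public class ControlFilePath {
    public static String resolve(String workDirectoryRoot, String executableName) {
        String sep = workDirectoryRoot.endsWith("/") ? "" : "/";
        return workDirectoryRoot + sep + executableName + ".ctl";
    }
}
```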
Example 65-17 Sample SQL*Loader scheduled job
This sample control file uploads data from the data file into the fnd_applcp_test table, into the columns listed in the control file (id1, id2, id3, func, time, action, and mesg). See the SQL*Loader documentation for more information about writing control files.
OPTIONS (silent=(header,feedback,discards))
LOAD DATA
INFILE *
INTO TABLE fnd_applcp_test
APPEND
FIELDS TERMINATED BY ','
(id1, id2, id3, func CHAR(30), time SYSDATE, action CHAR(30), mesg CHAR(240))
Implementing a Perl scheduled job involves creating a job definition, enabling the Perl job to connect to a database, and configuring a spawned job environment.
Related Links
The following documents provide additional information related to subjects discussed in this section:
For more information about creating a Perl scheduled job see the chapter "Creating and Using Process Jobs" in Developing Applications for Oracle Enterprise Scheduler.
Example 65-18 shows a sample scheduled Perl job which does the following:
Checks for basic or full mode.
Prints arguments.
Gets the context object of the scheduled job request.
Retrieves contextual information about the scheduled job request, which is stored in the context object.
Writes the request to the log file.
Prints information as required.
Example 65-18 Perl Scheduled Job
# dbdrv: none
use strict;

(my $VERSION) = q$Revision: 120.1 $ =~ /(\d+(\.\d+)*)/;

print_header("Begin Perl testing script (version $VERSION)");

# Check the first argument for BASIC or FULL mode.
# If not FULL mode, exit successfully without doing anything.
if (! $ARGV[0] || uc($ARGV[0]) ne "FULL") {
    exit(0);
}

# -- If argument #1 was passed, use it as a sleep time
if ($ARGV[1]) {
    if ($ARGV[1] =~ /\D/) {
        print "** Argument #1 is not a valid number, unable to sleep!\n\n";
    } else {
        printf("Sleeping for %d seconds...\n", $ARGV[1]);
        sleep($ARGV[1]);
    }
}

# -- Arguments
print_header("Arguments");
my $i = 1;
foreach (@ARGV) {
    print "Argument #", $i++, ": $_\n";
}

# -- Get the request context object
my $context = get_context();

# -- Use this object to retrieve context information about this request
print_header("Context Information");
printf "Request id \t= %d\n", $context->reqid();
printf "User name \t= %s\n", $context->username();
printf "Logfile \t= %s\n", $context->logfile();
printf "Outfile \t= %s\n", $context->outfile();

# -- Writing to the request log file
print_header("Writing to log file");
# -- Retrieve a Logfile object from the context
my $log = $context->log();
$log->writeln("This message should appear in the request logfile");
$log->timestamp("This is a timestamped message to the request logfile");
print "Wrote two messages to the request logfile\n";

# -- Print out some useful information
print_header("Environment");
foreach (sort keys %ENV) {
    print "$_=$ENV{$_}\n";
}

print_header("Perl Information");
print "PROCESS ID   = $$\n";
print "REAL USER ID = $<\n";
print "EFF USER ID  = $>\n";
print "SCRIPT NAME  = $0\n";
print "PERL VERSION = $]\n";
print "OS NAME      = $^O\n";
print "EXE NAME     = $^X\n";
print "WARNINGS ON  = $^W\n";

print "\n\@INC path:\n";
foreach (@INC) {
    print "$_\n";
}

print "\nAll loaded perl modules:\n";
foreach (sort keys %INC) {
    print "$_ => $INC{$_}\n";
}

# -- Exiting the script
# -- The exit status of the script will be used as the request exit status.
# -- A zero exit status is reported as a success state.
# -- An exit status of 2 is reported as a warning state.
# -- An exit status of 3 is reported as a business error state.
# -- Any other exit status is reported as an error state.
print_header("Exiting script with status 0. (Normal completion)");
exit(0);

sub print_header {
    my $msg = shift;
    print "\n\n", "-" x 40, "\n", $msg, "\n", "-" x 40, "\n";
}
The main steps required to implement a C scheduled job are as follows:
Creating a job definition
Configuring a spawned job environment
Implementing and testing a C scheduled job
Create a job definition as described in About Creating a Job Definition.
Several C functions are available for use in developing Oracle Fusion applications, while several others are not. Table 65-2 and Table 65-3 list the available and unavailable functions.
Table 65-2 C Functions Available for Developing Oracle Fusion Applications
Function | Description |
---|---|
afprcp | Run C program. The recommended API for writing a C program. The main OC file should call this function to run the program logic. It initializes the context and calls the program. int afprcp (uword argc, text **argv, afsqlopt *options, afpfcn *function); |
afpend | End C program. All programs must call this to signal the completion of the program. The program should pass the completion status and a message if necessary. Indicate the completion status with the constants FDP_SUCCESS, FDP_WARNING, FDP_ERROR, or FDP_BIZERR. boolean afpend (text *outcome, dvoid *handle, text *compmesg); |
fdpfrs | Find request status. For a given request, retrieve the status. afreqstate fdpfrs (text *request_id, text *errbuf); |
fdpgret | Get the error type of a specific job request ID. afreqstate fdpgret (text *request_id, text *status, text *errbuf); |
fdpgrs | Get request status. For a given request, retrieve the current status and completion text. afreqstate fdpgrs (text *request_id, text *status, text *errbuf); |
 | Lock table. Locks the desired table with the specified lock mode. |
fdpscp | Legacy API for concurrent programs. All new concurrent programs should use afprcp. boolean fdpscp (sword *argc, text **argv[], text args_type, text *errbuf); |
fdpwrt | Routines for creating log/output files and writing to files. These are the routines concurrent programs should use for writing to all log and output files. |
Table 65-3 C Functions Not Available for Developing Oracle Fusion Applications
Function | Description |
---|---|
 | Get Oracle data group. |
 | Get program name. |
 | Get request count. |
 | Run the import utility. |
 | Run SQL*Loader. |
 | Run Perl concurrent program. |
 | Run report. |
 | Run SQL*Rpt program. |
 | Submit concurrent program. |
 | Get resource security group. |
 | Run SQL*Plus concurrent program. |
 | Run stored procedure. |
When developing a C job, it is possible to test the job by running it from a command line interface.
Running a C job from the command line involves the following main steps:
Invoking the job
Obtaining a database connection and setting the runtime context by passing special arguments.
Passing any program-specific parameters at the command line.
To run a C job from the command line for testing purposes, use the syntax shown in Example 65-19,
where:
<heavyweight user connection string> is the username/password@TWO_TASK pair used to connect to the database.
<lightweight user name> is the name of the lightweight user submitting the job. This value is used to set the user context in the database connection.
<flag> must be set to 'L' for a lightweight user.
An example illustrating running a C job from the command line is shown in Example 65-20.
Example 65-19 Syntax for Running a C Job from the Command Line
%program <heavyweight user connection string> <lightweight username> <flag> <job parameters> ...
Example 65-20 Running a C Job from the Command Line for Testing Purposes
program username/password@my_db MYUSER L <parameter1> <parameter2> ....
The sample C job shown in Example 65-21 uses afprcp to initialize and obtain a database connection. It uses both Pro*C and afupi.
Example 65-21 Using the C Runtime API
#ifndef AFSTD
#include <afstd.h>
#endif
#ifndef AFSTR
#include <afstr.h>
#endif
#ifndef AFCP
#include <afcp.h>
#endif
#ifndef SQLCA
#include <sqlca.h>
#endif
#ifndef AFUPI
#include <afupi.h>
#endif
#ifndef FDS
#include <fds.h>
#endif
boolean testupi()
{
text *sqltext;
text buffer[ERRLEN];
text os_user[31];
text session_user[31];
text db_name[31];
aucursor *use_curs;
word errcode;
os_user[0] = session_user[0] = db_name[0] = (text)'\0';
sqltext = (text*) "SELECT sys_context('USERENV','DB_NAME',30), sys_context('USERENV','SESSION_USER',30), sys_context('USERENV','OS_USER',30) from dual";
use_curs = NULLCURSOR;
use_curs = afuopen (NULLHOST, NULLCURSOR, (dvoid *)
sqltext,
UPISTRING);
if (use_curs == NULLCURSOR) {goto upierror;}
afudefine(use_curs, 1, AFUSTRING, (dvoid *)db_name, 31);
afudefine(use_curs, 2, AFUSTRING, (dvoid *)session_user, 31);
afudefine(use_curs, 3, AFUSTRING, (dvoid *)os_user, 31);
if (!afuexec (use_curs, (uword)1, (uword)1, CSTATHOLD|CSTATEXACT) ||
(errcode = afuerror (NULLHOST, (text *) NULL, 0)) != ORA_NORMAL) {
goto upierror;
}
DISCARD afurelease (use_curs);
DISCARD sprintf((char *)buffer, "%s as %s@%s", os_user,
session_user, db_name);
DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, buffer);
return TRUE;
upierror:
if (use_curs != NULLCURSOR)
DISCARD afurelease (use_curs);
DISCARD fdpwrt(AFWRT_LOG | AFWRT_NEWLINE, "Error in testupi");
return FALSE;
}
void testrpc()
{
text buffer[256];
EXEC SQL BEGIN DECLARE SECTION;
VARCHAR os_user[31];
VARCHAR session_user[31];
VARCHAR db_name[31];
EXEC SQL END DECLARE SECTION;
buffer[0] = os_user.arr[0] = session_user.arr[0] = db_name.arr[0] = '\0';
EXEC SQL SELECT sys_context('USERENV','DB_NAME',30),
sys_context('USERENV','SESSION_USER',30),
sys_context('USERENV','OS_USER',30)
INTO :db_name, :session_user, :os_user
from dual;
nullterm(os_user);
nullterm(session_user);
nullterm(db_name);
DISCARD sprintf((char *)buffer, "%s as %s@%s", os_user.arr,
session_user.arr, db_name.arr);
DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, buffer);
}
sword cptest(argc, argv, reqinfo)
/* ARGSUSED */
sword argc;
text *argv[];
dvoid *reqinfo;
{
ub2 i;
text errbuf[ERRLEN+1];
/* Write to the log file */
DISCARD fdpwrt(AFWRT_LOG | AFWRT_NEWLINE, (text *)"Test Success");
/* Write to the out file */
DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, (text *)"Test Args:");
/* Loop through argv and write to the out file. */
for ( i=0; i<argc; i++)
DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, argv[i]);
/* Call the Oracle Fusion Applications function afpoget to return the value */
/* of a profile option called SITENAME and write the results to the error */
/* buffer. */
DISCARD afpoget((text *)"SITENAME", errbuf);
/* Write the value to the output file. */
DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, errbuf);
/* Connect to the database and run a SELECT against the database. Creates a */
/* string and writes the returned data to the output file. Uses prc APIs. */
testrpc();
/* Open a cursor for the SELECT statement, defines variables to collect data */
/* upon running statement, and executes SELECT. Creates a string which it */
/* writes to the output file. Uses afupi APIs. */
testupi();
/* Writes the string "Test Completed." to the output file. */
DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, (text *)"Test Completed.");
/* Call afpend to identify the exit status, which in this case is successful. */
/* Other possible values are FDP_WARNING, FDP_ERROR and FDP_BIZERR. The       */
/* reqinfo originally passed to cptest is passed here. Optionally, additional */
/* text can be passed here, for example explaining the outcome of the exit */
/* status. */
return((sword)afpend(FDP_SUCCESS, reqinfo, (text *)NULL));
}
int main(/*_ int argc, text *argv[] _*/);
int main(argc, argv)
int argc;
text *argv[];
{
/* Run cptest and return an exit value to Oracle ESS. */
return(afprcp((uword)argc, (text **)argv,
(afsqlopt *)NULL, (afpfcn *)cptest));
}
When Oracle Enterprise Scheduler runs a C job, afprcp() runs first to initialize the context and obtain the database connection. The function afprcp() then calls the function containing the program logic. Oracle Enterprise Scheduler runs the job, and the result of the job is returned to Oracle Enterprise Scheduler. The Oracle Fusion application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.
Note:
Wallet configuration is required for the client ORACLE_HOME to obtain the database connection. The operating system environment in which the job runs (including the location of the client ORACLE_HOME, which is also required) is set in the environment.properties file. The environment.properties file must be configured and placed in the config/fmwconfig directory under the domain.
You can add your own environment variables by creating an env.custom.properties file in the same directory. Variables you define in this file take precedence over those in the environment.properties file.
Similarly, you can set server-specific environment variables with environment.properties and env.custom.properties files in the server config directory.
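The override behavior described above can be illustrated with plain java.util.Properties. This is a sketch of the precedence rule only; it is not Oracle's actual loader, and the property names used here are sample data:

```java
import java.util.Properties;

// Illustrative sketch: values from env.custom.properties override those
// from environment.properties. Plain Properties defaulting stands in for
// Oracle's actual environment-file loading.
public class EnvMerge {
    public static Properties merge(Properties environment, Properties custom) {
        Properties merged = new Properties();
        merged.putAll(environment);  // base values from environment.properties
        merged.putAll(custom);       // env.custom.properties wins on conflicts
        return merged;
    }
}
```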
Arguments submitted for a host script job request are passed to the script at the command line. Host scripts may access the standard environment variables to get REQUESTID, LOG_WORK_DIRECTORY, OUTPUT_WORK_DIRECTORY, and so on. Script output is redirected to the request log file by default.
Use the following steps when implementing a host script job:
Complete the steps for configuring a spawned job as described in About Configuring a Spawned Job Environment.
Create one script file each for UNIX and Windows platforms. Name each script file the same as the executableName parameter in the job definition. For example, if your executableName is "myscript", the script files would be called myscript.sh (on UNIX platforms) and myscript.cmd (on Windows).
Put host scripts in the $APPLBIN directory under the product TOP.
The script should exit with one of the following exit codes (anything else is considered a SYSTEM ERROR):
0 for SUCCESS
2 for WARNING
3 for BUSINESS ERROR
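The exit-code contract above can be sketched as a simple mapping. Illustrative only; the HostExitCodes class is invented for this example and is not an Oracle API:

```java
// Illustrative only: maps host-script exit codes to the states listed above.
// Any exit code other than 0, 2, or 3 is treated as a SYSTEM ERROR.
public class HostExitCodes {
    public static String stateFor(int exitCode) {
        switch (exitCode) {
            case 0:  return "SUCCESS";
            case 2:  return "WARNING";
            case 3:  return "BUSINESS ERROR";
            default: return "SYSTEM ERROR";
        }
    }
}
```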
For more information about implementing Java scheduled jobs, see the chapter "Using Oracle JDeveloper to Generate an Oracle Enterprise Scheduler Application" in Developing Applications for Oracle Enterprise Scheduler.
Create a job definition as described in About Creating a Job Definition.
For information about the Java runtime API, see the Oracle Fusion Applications Java API Reference for Oracle Enterprise Scheduler Service.
You can access the Oracle Fusion Middleware Extensions for Applications Message and Profile objects directly, using those APIs, which handle service access themselves.
You can cancel a scheduled Java job by implementing the Cancellable interface.
The Cancellable implementation in Example 65-22 checks, as logic progresses, whether the job has been canceled. If it has, the code cleans up after itself before exiting.
Example 65-22 Handling a Job Cancellation Request
import oracle.as.scheduler.Cancellable;
import oracle.as.scheduler.Executable;
import oracle.as.scheduler.ExecutionCancelledException;
import oracle.as.scheduler.ExecutionErrorException;
import oracle.as.scheduler.ExecutionPausedException;
import oracle.as.scheduler.ExecutionWarningException;
import oracle.as.scheduler.RequestExecutionContext;
import oracle.as.scheduler.RequestParameters;

public class MyExecutable implements Executable, Cancellable {
    private volatile boolean m_cancel = false;

    public void execute(RequestExecutionContext reqCtx, RequestParameters reqParams)
        throws ExecutionErrorException, ExecutionWarningException,
               ExecutionPausedException, ExecutionCancelledException {
        // Do some work and check if this request has been canceled.
        // ... work ...
        checkCancel(reqCtx);
        // Do more work and check if this request has been canceled.
        // ... work ...
        checkCancel(reqCtx);
        // Finish work.
        // ... work ...
    }

    // Set flag that the app logic should check periodically to
    // determine if this request has been canceled.
    public void cancel() {
        m_cancel = true;
    }

    // Check if request has been canceled. If not, do nothing.
    // Otherwise, do any cleanup work that may be needed for
    // this request and end by throwing an ExecutionCancelledException.
    private void checkCancel(RequestExecutionContext reqCtx)
        throws ExecutionCancelledException {
        if (m_cancel) {
            // Do any cleanup work that may be needed
            // before ending this executable.
            // ... cleanup work ...
            String msg = "Request " + reqCtx.getRequestId() + " was cancelled.";
            throw new ExecutionCancelledException(msg);
        }
    }
}
Oracle Enterprise Scheduler initializes the context of the job. The Oracle Fusion application calls Oracle Enterprise Scheduler using the provided APIs. Oracle Enterprise Scheduler runs the job, and a result of success or failure is returned to Oracle Enterprise Scheduler. The Oracle Fusion application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.
Oracle Enterprise Scheduler executes jobs in the user context of the job submitter at the scheduled time. Some scheduled jobs require access privileges that are different from those of the submitting user. However, information regarding the submitter of the scheduled job must be retrievable for auditing purposes.
In Oracle Enterprise Scheduler, running a job with the runAs property set to a user other than the submitting user is prohibited; doing so would be considered a security breach. Using an application identity instead enables running a job with access privileges different from those allotted to the submitting user.
Application identity is a SOA and Java Platform Security (JPS) concept that addresses the requirement for escalated privileges in completing an action. The application installer creates an application identity in Oracle Identity Management Repository.
For more information, see the following chapters:
The Oracle Enterprise Scheduler job system property SYS_runasApplicationID enables elevating access privileges for completing a scheduled job.
To elevate access privileges, do the following:
Example 65-23 Retrieving the Executing User with getRunAsUser()
requestDetail.getRunAsUser()
Example 65-24 Retrieving the Executing User with getRequestParameter()
String sysPropUserName = (String) runtime.getRequestParameter(h, reqid, SystemProperty.USER_NAME);
Given a request ID, you can retrieve the submitting and executing users of a job request.
Example 65-25 shows a code sample for retrieving the submitting and executing users of a job request using the Oracle Enterprise Scheduler RuntimeService Enterprise JavaBeans object.
Example 65-25 Retrieving the Submitting and Executing Users of a Job Request Using the RuntimeService Enterprise JavaBeans Object
// Look up the RuntimeService.
RequestDetail requestDetail = runtimeService.getRequestDetail(h, reqid);
String runAsUser = requestDetail.getRunAsUser();
String submitter = requestDetail.getSubmitter();
Example 65-26 shows a code sample for retrieving the submitting and executing users of a job request from within an Oracle Fusion application.
Example 65-26 Retrieving the Submitting and Executing Users of a Job Request from an Oracle Fusion application
import oracle.apps.fnd.applcore.common.ApplSessionUtil;

// The elevated privilege user name.
ApplSessionUtil.getUserName()
// The submitting user.
ApplSessionUtil.getHistoryOverrideUserName()
When a job request is submitted, Oracle Enterprise Scheduler validates the user's execution privileges on the job metadata. If the user has the required privileges, the user context is captured and stored in the Oracle Enterprise Scheduler database as the submitting user, and the request is placed in the queue.
Multiple language jobs enable a submitted request to run repeatedly in different languages, so that the data is processed in each language.
Enabling multiple language support for jobs involves the following steps:
Identify a set of languages in which the job runs.
Develop a multiple-language support function (PL/SQL, Java, or ADF Business Components) and register it with concurrent processing.
Associate the multiple language support function with one or more programs.
The following multiple language support functions can be defined:
Java multiple language support functions can be defined in the client interface.
The client interface for the Java multi-language function should be a fully qualified class name that implements the interface shown in Example 65-27. The ExecutionLanguageList interface is available in the oracle.apps.fnd.applcp.mls.server package.
This class implements a default constructor. Parameters to the program are passed through the parameters array: for each parameter n, element n[0] contains the parameter token and element n[1] contains the parameter value, so both positional and named parameters are supported for clients. The returned string array is a valid list of languages. The connection object should be valid and secured to the relevant schema.
A sample class is shown in Example 65-28.
Example 65-27 The ExecutionLanguageList Interface
public interface ExecutionLanguageList {
    public String[] getLanguageList(Connection conn, String[][] parameters);
}
Example 65-28 Defining Concurrent Processing in the Client Interface Using Java
public class MyLanguageList implements ExecutionLanguageList {
    public String[] getLanguageList(Connection conn, String[][] parameters) {
        String[] myStrArray;
        ...
        return myStrArray;
    }
}
PL/SQL multiple language support functions can be defined in the client interface or using the PL/SQL support API.
The client interface for a PL/SQL multi-language support function should be a PL/SQL function name, optionally prefixed with a package name and separated by a period. The PL/SQL function does not take any input parameters and returns a VARCHAR2 value. The returned VARCHAR2 value is a comma-separated list of acceptable languages (such as those listed in the FND_LANGUAGES property). If a returned language is not listed in the FND_LANGUAGES property, or the language is not installed, the caller of the client interface can raise an exception or cause the request to fail, either at submission time or at runtime.
A sample PL/SQL multiple language support function is shown in Example 65-29, and a sample return value for this function is shown in Example 65-30. In this example, the concurrent processing program's multi-language support function property is PROD_MLS_PKG.GET_LANGUAGES.
Example 65-29 A Sample PL/SQL Multiple Language Support Function
FUNCTION PROD_MLS_PKG.GET_LANGUAGES RETURN VARCHAR2;
Example 65-30 Sample Return Value
US,F,AR,KO
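The comma-separated return value can be parsed and validated as sketched below. The LanguageListCheck helper and the installed-language set are invented for illustration; the real check is performed by the caller of the client interface:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the contract described above: the PL/SQL function
// returns a comma-separated VARCHAR2 list, and callers may reject languages
// that are not installed. The installed set here is sample data.
public class LanguageListCheck {
    public static List<String> parse(String returned) {
        return Arrays.asList(returned.split(","));
    }

    public static boolean allInstalled(String returned, Set<String> installed) {
        return installed.containsAll(parse(returned));
    }
}
```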
The API package FND_REQUEST_INFO is provided to enable interrogating the current context so a program can act accordingly.
The following functions and procedures are provided:
GET_PROGRAM: Returns the name of the program being executed and the application to which it belongs. This is useful in cases where the same multi-language support function is used for multiple programs.
GET_PARAMETER: Given a parameter number from the ordered list of parameters, returns the parameter name for the current request.
Example 65-31 GET_PROGRAM Syntax
procedure GET_PROGRAM (prog_name out varchar2, prog_app_name out varchar2);
Example 65-32 GET_PARAMETER Syntax
function GET_PARAMETER (param_num in number) return varchar2;
A list of supported languages can also be retrieved from an ADF Business Components view object.
To define ADF Business Components multiple language support functions, do the following:
Example 65-33 Value for the ADF Business Components Multiple Language Support Function
oracle.apps.fnd.cp.MySampleAM.MyLanguageVO1.
When a program executes this multi-language support function, the following occurs:
The Application Module specified in the function is located.
The method findViewObject() is called on the module to get a handle for the view object.
The method initQuery(String[][]) is called on the view object.
The output generated from each request may need to be formatted into readable output formats. For example, XML output may need to be formatted to be readable to end users. Output can be formatted using templates during postprocessing. Usually, a particular formatting template is associated with a program for a given language. If a program is not associated with a template, end users can specify a formatting template for one or more languages when submitting a job. If no template is specified for a specific language, a designated default template should be applied for that language.
The generated output can be sent to zero or more printers based on the language being run. In some cases, printers may not have the fonts required to print documents in a particular language. Alternatively, all print output can be sent to the same printer. The printing interfaces to concurrent processing handle sending the output to the correct printer based on the parameters specified by the user.
Users who submit requests can also choose to specify completion notifications to be automatically sent to zero or more recipients. This notification may contain a link to the output, for example. In the case of multiple language requests, users can specify zero or more recipients of notifications for all languages or for individual languages. As in the case of printing, the notification interfaces handle the delivery of notifications according to the sender's instructions.
The list of languages that can be selected by the user is driven by the languages supported by the installed software. Any function written for multiple language support returns a valid subset of languages from this list.
The multiple language support function returns a list of supported languages in a format supported by concurrent processing. The programs associated with the multiple language support function execute multiple times for each request, once for each returned language.
If a program has no associated multiple language support function but is aware of the language context, users can explicitly choose one or more languages in which the program runs when submitting a request.
When a job is submitted for a program with an attached multiple language support function, the submission interface evaluates the function to retrieve the list of languages. This language list is used to drive postprocessing options such as publishing and printing.
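The once-per-language execution described above can be sketched as a simple loop. Illustrative only; PerLanguageRunner is not an Oracle API, and the program callback stands in for the actual job logic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

// Illustrative sketch: a program associated with a multiple language support
// function executes once per returned language. The callback stands in for
// the actual program; the returned list records the languages executed.
public class PerLanguageRunner {
    public static List<String> runForLanguages(List<String> languages,
                                               BiConsumer<String, String> program,
                                               String requestId) {
        List<String> executed = new ArrayList<>();
        for (String lang : languages) {   // one execution per language
            program.accept(requestId, lang);
            executed.add(lang);
        }
        return executed;
    }
}
```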
Multiple language support functions are not evaluated at runtime. Convenience messages may be shown to users who attempt to submit a program that is already scheduled for multiple language support execution when the language sets of the two submissions overlap.
The following standards and guidelines apply to using multi-language support functions:
Runtime evaluation of multi-language support functions is no longer supported.
Multiple language support functions can be registered in the program definition user interface.
Multiple language support functions are evaluated at the time of request submission.
When implemented as part of an Oracle Fusion application, the Oracle ADF user interface enables end users to submit job requests.
Related Links
The following documents provide additional information related to subjects discussed in this section:
For more information about configuring security certificates, see the chapter "Managing Keystores, Wallets, and Certificates" in Administering Oracle Fusion Middleware.
For more information about creating task flows and binding them to an Oracle ADF user interface, see the following chapters in Developing Fusion Web Applications with Oracle Application Development Framework:
For more information about creating an ADF Business Components view object, see the chapters "Defining SQL Queries Using View Objects" and "Advanced View Object Techniques" in Developing Fusion Web Applications with Oracle Application Development Framework.
For more information about passing parameters to the Oracle ADF task flow, see the chapter "Using Parameters in Task Flows" in Developing Fusion Web Applications with Oracle Application Development Framework.
For more information about deploying Oracle Enterprise Scheduler applications, see the section "Deploying and Running the EssDemoApp Application" in Developing Applications for Oracle Enterprise Scheduler.
The Oracle ADF UI enables end users to submit job requests. End users can enter complex data types as the arguments of descriptive and key flexfields. The Parameters tab in the Oracle ADF UI allows end users to enter parameters to be used when submitting the job request.
Flexfields display in a separate task flow region. This region is a child task flow of the parent task flow displayed in the Parameters tab.
Note:
Define customization layers and authorize runtime customizations to the adf-config.xml file as described in Creating Customizable Applications.
To create a user interface for submitting job requests, do the following:
Create a new Oracle Fusion web application by clicking New Application in the Application Navigator and selecting Fusion Web Application (ADF) from the Application Templates drop-down list.
Model and ViewController projects are created within the application.
Right-click the Model project and select Project Properties > Libraries and Classpath > Add Library.
From the list, select the following libraries, as shown in Figure 65-2.
Applications Core
Applications Concurrent Processing
Enterprise Scheduler Extensions
Figure 65-2 Adding the Libraries to the Model Project
Click OK to close the window and add the libraries.
Right-click the View Controller project and select Project Properties > Libraries and Classpath > Add Library.
Add the library Applications Core (ViewController), as shown in Figure 65-3.
Figure 65-3 Adding the Library to the View Controller Project
In the Project Properties dialog, in the left-hand pane, click Business Components.
The Initialize Business Components Project window displays. Click the Edit icon to create a database connection for the project.
Fill in the database connection details as follows:
Connection Exists in: Application Resources
Connection Type: Oracle (JDBC)
User name/Password: Fill in the relevant user name and password for the database.
Driver: thin
Host Name: Enter the host name of the database server.
JDBC port: Enter the port number of the database.
SID: The unique Oracle system ID for the database.
Click OK.
In the file weblogic.xml, import the oracle.applcp.view library.
In the file weblogic-application.xml, import the following libraries:
oracle.applcore.attachments (for ESS-UCM)
oracle.applcp.model
oracle.applcp.runtime
oracle.ess
oracle.sdp.client (for notification)
oracle.ucm.ridc.app-lib (for ESS-UCM)
oracle.webcenter.framework (for ESS-UCM)
oracle.xdo.runtime
oracle.xdo.service.client
oracle.xdo.webapp
The libraries oracle.applcp.model and oracle.applcp.view are deployed as part of the installation while running the config.sh wizard.
Create a new Java Server Pages XML (JSPX) page for the ViewController project by right-clicking ViewController and selecting New > Web Tier > JSF > JSF JSP Page.
Create a new File System connection. In the Resource Palette, right-click File System, select New File System Connection, and do the following:
Provide a connection name and directory path for the Oracle ADF Library files (<jdev_install>/jdev/oaext/adflib).
Click Test Connection and click OK after the connection succeeds.
Expand the contents of the SRS-View.jar file to display the list of available task flows that can be used in the application, as shown in Figure 65-4.
Figure 65-4 Displaying the List of Available Task Flows
To include the job request submission page in the application, select the ScheduleRequest-taskflow item from the Resource Palette and drop it onto the Java Server Faces (JSF) page in the area where you want to create a call to the task flow. Create the task flow call as a link or button.
For example, to invoke the job request submission page from within a dialog in the application, do the following:
From the Component Palette, drag and drop a Link onto the form in the JSPX page.
In the Property Inspector, set the behavior of the link to the value showpopup.
From the Component Palette, drag and drop a Popup component with a dialog component onto the form.
To enable submitting a job request, drag and drop the ScheduleRequest-taskflow item onto the dialog component as a dynamic region.
To enable submitting a job set request, drag and drop the ScheduleJobset-taskflow item onto the dialog component.
Figure 65-5 displays the task flows in the Resource Palette.
Figure 65-5 Including the Job Request Submission Page in the Application
From the context menu, select Create a Dynamic Region.
When prompted, add the required library to the ViewController project by clicking Add Library. Save the JSF page.
Edit the task flow binding. Define the following parameters for the task flow, as shown in Figure 65-6.
jobdefinitionname: Enter the name of the job definition to be submitted. This is not the name that displays. This is the job definition defined in About Creating a Job Definition. Required.
jobdefinitionpackagename: Enter the package name under which the job definition metadata is stored. This should be the namespace path appended to the package name, for example /oracle/ess/Scheduler. The namespace path typically begins with a forward slash ("/"), but should have no forward slash at the end. Required.
centralui: When this parameter is set to true, the task flow UI does not display the header section containing the name, description, and basic Oracle BI Publisher actions (such as email, print, and notify). This parameter must be a Boolean value. Optional.
pageTitle: When passed, the task flow renders this String value as the page title. The pageTitle value is currently configured to be truncated at 30 characters. Optional.
requireRootOutcome: If true is passed as the value, the task flow generates a value of root-outcome when the user clicks the Submit or Cancel button. By default, the task flow generates a value of parent-outcome. Optional.
requestparametersmap: Enter the name of the map object variable that contains the parameters required for the job request submission. If this parameter is filled in, the Parameters tab in the request submission page does not prompt end users to enter parameters for executing the request. The map can be passed to the task flow as a parameter. Typically, this parameter takes the data type java.util.Map, in which keys are parameter names and values are parameter values. For example, if you will be using a paramsMap object in the pageFlowScope context, you might enter a requestparametersmap value of #{pageFlowScope.paramsMap}. Optional.
In the page that holds the task flow region in the job request submission page, set the following property for the popup window that opens the job request submission page window: contentDelivery = immediate.
In the page definition file of the page that contains the task flow region, set the following property for the task flow: Pagedef > executables > taskflow > Refresh=IfNeeded.
If you are using a map to pass parameters to the task flow (as shown in Figure 65-6, where the map is called requestparametersmap), create a new task flow parameter, such as the paramsMap object in the pageFlowScope element of a page flow.
Figure 65-6 Defining Parameters for the Task Flow
These values can be accessed in the job executable, for example from the RequestParameters object in the case of a Java job. Example 65-34 illustrates passing the values stored in the RequestParameters object to a Java job. This code is used in the class that implements the oracle.as.scheduler.Executable interface.
Note:
When using a requestparametersmap object, set the following properties for the popup window within which the task flow is started.
Set Content Delivery to Immediate.
In the page definition XML file for the page that contains the region, select PageDef > Executables > taskflow > set Refresh = ifNeeded.
If the job is defined with properties that must be filled in by end users, the user interface allows end users to fill in these properties before submitting the job request. For example, if the job requires a start and end time, end users can fill in the desired start and end times in the space provided by the user interface.
The properties that are filled in by end users are associated with a view object, which in turn is associated with the job definition itself. When the job runs, Oracle Enterprise Scheduler accesses the view object to retrieve the values of the properties.
If using a view object to pass parameters to the job definition, do the following:
Create a view object called TestVO using a query such as the one shown in Example 65-35.
Specify control UI hints, for example set the display label for Attribute1 to Run Mode and for Attribute2 to Duration.
The parameters tab in the job request submission UI renders with the input fields Run Mode and Duration.
To render the Parameters tab in the job request submission UI, add the DynamicComponents 1.0 library as follows. Right-click ViewController and select Project Properties > JSP Tag Libraries > Add. In the Choose Tag Libraries window, select the library DynamicComponents 1.0 and click OK. Figure 65-7 displays the Choose Tag Libraries window.
Figure 65-7 Adding the Library DynamicComponents 1.0
In the JSF application you created, create another project called Scheduler. Select File > New, and choose General > Empty Project. This project will be used to create Oracle Enterprise Scheduler metadata and job implementations.
In the Scheduler project, add the Oracle Enterprise Scheduler Extensions library to the class path. Right-click the Scheduler project and select Project Properties > Libraries and Classpath > Add Library > Oracle Enterprise Scheduler Extensions.
Deploy the libraries oracle.xdo.runtime and oracle.xdo.webapp to the Oracle Enterprise Scheduler UI managed server. These libraries are located in the directory $MW_HOME/jdeveloper/xdo, where MW_HOME is the Oracle Fusion Middleware home directory.
Deploy the application.
Note:
When testing the UI in a web browser, you may need to add a security exception to your browser so that the UI renders correctly. Follow the directions in the online help for your web browser.
Example 65-34 Passing Values in a Map Object to a Java Job
public void execute(RequestExecutionContext ctx, RequestParameters props)
    throws ExecutionErrorException, ExecutionWarningException,
           ExecutionCancelledException, ExecutionPausedException
{
    String pageTitle = (String) props.getValue("pageTitle");
    // Retrieve other parameters.
    // ...
}
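On the submission side, the map consumed through the requestparametersmap task flow parameter is a plain java.util.Map of parameter names to values. A minimal sketch of assembling one follows; the bean and parameter names are hypothetical, not part of the Oracle Enterprise Scheduler API.

```java
import java.util.HashMap;
import java.util.Map;

public class JobParamsBean {
    /**
     * Builds the map the job request submission task flow reads through the
     * requestparametersmap parameter; keys are parameter names, values are
     * parameter values. All names here are illustrative.
     */
    public Map<String, Object> buildParamsMap() {
        Map<String, Object> paramsMap = new HashMap<>();
        paramsMap.put("pageTitle", "Quarterly Report");
        paramsMap.put("runMode", "FULL");
        // A managed bean would expose this map (for example in pageFlowScope)
        // so the task flow binding #{pageFlowScope.paramsMap} can resolve it.
        return paramsMap;
    }
}
```

In a real application, the bean would be registered with the page flow and the map placed in pageFlowScope before the submission task flow is launched.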
Example 65-35 Creating a View Object Using a Query
select null as Attribute1, null as Attribute2 from dual
You can add a custom task flow to an Oracle ADF user interface used to submit job requests at runtime.
To add a custom task flow, do the following:
The schedule request submission and jobset UI taskflows include Submit and Cancel buttons. Typically, clicking the Submit button submits the job request, closes the transaction, and returns the user to the page that launched the schedule request submission or jobset UI taskflow (the container taskflow). Clicking the Cancel button resets the internal data structures in the schedule request submission or jobset UI and returns the user to the page that launched the schedule request submission or jobset UI taskflow.
Clicking the Submit or Cancel buttons notifies the containing bounded or unbounded parent taskflow of the result of the Submit or Cancel event, and the container taskflow decides what to do next. The containing taskflow may be a popup or an inline page. The container taskflow handles any navigation required after clicking either button.
The schedule request submission and jobset UI taskflows define two root outcomes and two parent outcomes, which the parent taskflow can use to handle navigational requirements.
The following sample shows the schedule request submission UI parent outcomes.
The following sample shows the jobset UI parent outcomes.
As shown in the preceding samples, the containing taskflow defines the root/parent outcomes that occur when the user clicks the Submit or Cancel button, respectively. These outcomes are onSRSSubmitted and onSRSCanceled in the case of the schedule request submission UI taskflow, and onJobsetRequestSubmitted and onJobsetRequestCanceled in the case of the jobset UI taskflow.
The consuming or parent taskflow uses these root/parent outcomes in their view definition files (*taskflow.xml or adfc-config.xml) and defines control flow rules accordingly.
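For illustration, a consuming taskflow's adfc-config.xml might define control flow rules for these outcomes as shown below; this is a hedged sketch, and the activity IDs are hypothetical.

```xml
<control-flow-rule id="__cfr1">
  <from-activity-id>srsRegionPage</from-activity-id>
  <control-flow-case id="__cfc1">
    <from-outcome>onSRSSubmitted</from-outcome>
    <to-activity-id>confirmationPage</to-activity-id>
  </control-flow-case>
  <control-flow-case id="__cfc2">
    <from-outcome>onSRSCanceled</from-outcome>
    <to-activity-id>srsRegionPage</to-activity-id>
  </control-flow-case>
</control-flow-rule>
```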
Note:
The taskflows must be dropped as a region on a page for the parent actions to work. Make sure to drop the schedule request submission or jobset UI taskflow as a region on a page.
Example 65-36 Schedule request submission UI parent outcomes
<parent-action id="rootSubmitActionId">
  <description id="__1">Parent action when the submit button is clicked for root parent</description>
  <root-outcome id="submitOutcome">onSRSSubmitted</root-outcome>
</parent-action>
<parent-action id="rootCancelActionId">
  <description id="__2">Parent action when the cancel button is clicked for root parent</description>
  <root-outcome id="cancelOutcome">onSRSCanceled</root-outcome>
</parent-action>
<parent-action id="parentSubmitActionId">
  <description id="__3">Parent action when the submit button is clicked for immediate parent</description>
  <parent-outcome id="parentSubmitOutcome">onSRSSubmitted</parent-outcome>
</parent-action>
<parent-action id="parentCancelActionId">
  <description id="__4">Parent action when the cancel button is clicked for immediate parent</description>
  <parent-outcome id="parentCancelOutcome">onSRSCanceled</parent-outcome>
</parent-action>
Example 65-37 Jobset UI parent outcomes
<parent-action id="rootSubmitActionId">
  <description id="__1">Parent action when the submit button is clicked for the root</description>
  <root-outcome id="submitOutcomeForRoot">onJobsetRequestSubmitted</root-outcome>
</parent-action>
<parent-action id="rootCancelActionId">
  <description id="__2">Parent action when the cancel button is clicked for the root</description>
  <root-outcome id="cancelOutcomeForRoot">onJobsetRequestCanceled</root-outcome>
</parent-action>
<parent-action id="parentSubmitActionId">
  <description id="__3">Parent action when the submit button is clicked for immediate parent</description>
  <parent-outcome id="parentSubmitOutcome">onJobsetRequestSubmitted</parent-outcome>
</parent-action>
<parent-action id="parentCancelActionId">
  <description id="__4">Parent action when the cancel button is clicked for immediate parent</description>
  <parent-outcome id="parentCancelOutcome">onJobsetRequestCanceled</parent-outcome>
</parent-action>
In some cases, you may want to place the taskflow in a region component within an explicitly defined popup window. In such cases, the taskflow must be refreshed after the popup window closes.
Following is an example of how the parent taskflow can use these parent/root outcomes to define the navigational rules for closing or navigating away from the schedule request submission UI taskflow. The steps are the same for jobset UI taskflows, but the outcomes have different names.
In this example, the schedule request submission UI taskflow is placed in an ADF popup window. When the user clicks the Submit or Cancel button, the popup window closes, returning the user to the page that launched the popup.
Note:
This is only an example of how to use the root/parent outcomes. The actual implementation may vary according to the use case.
You can use either the parent or the root outcome at any given time, depending on the use case. By default, the schedule request submission and jobset UI taskflows always pass a parent outcome. If the consuming application needs a root outcome, you must pass the requireRootOutcome parameter with a value of true to the schedule request submission or jobset UI taskflow.
To handle Submit and Cancel buttons, do the following:
If you are launching the schedule request submission taskflow in a UIShell popup window, use the closePopup() method described in Implementing OK and Cancel Buttons in a Popup.
If you are launching the schedule request submission UI taskflow in the main task area of the UIShell, then you need to follow the navigation options described in How to Implement End User Preferences.
Note:
The UIShell APIs closeMainTask() and openMainTask() can only be invoked from within a bounded taskflow. You must wrap the schedule request submission UI taskflow in a dummy container taskflow, and define the control flow rules to consume the parent actions in the view definition file of the container taskflow.
After integrating your application with the Oracle ADF UI for submitting job requests, enable context-sensitive parameter support in the UI.
The request submission UI renders the context-sensitive parameters first so that the end user can specify their values. Based on these values, the context is set in the database. After the context is set, the UI renders the remaining parameters according to the context established at the database layer. When the job runs, the business logic executes after the context is set in the database from the context-sensitive parameter values.
To enable context-sensitive parameter support in the UI, do the following:
Example 65-38 contextParametersVO
<parameter name="contextParametersVO" data-type="string">oracle.apps.mypkg.TestCtxVO</parameter>
Example 65-39 setContextAPI
<parameter name="setContextAPI" data-type="string">myPkg1.mySetCtx</parameter>
Saving and scheduling a job request using an Oracle ADF UI involves using the Oracle Enterprise Scheduler Extensions library with a JSF application that includes a task flow in which a job is scheduled and saved.
To schedule a job request, do the following:
Submitting a saved job request schedule using an Oracle ADF UI involves using the Oracle Enterprise Scheduler Extensions library with a JSF application that includes a task flow in which a saved job schedule can be submitted.
To submit a job, do the following:
The Oracle ADF user interface for submitting job requests provides the ability to notify users of the status of submitted jobs (via the Notification tab of the user interface). For example, users can request a notification to be sent to the originator of the job request.
A notification includes two components: the user to whom the notification is to be delivered, and the completion status of the job that triggers the notification. For example, notifications can be sent upon the successful completion of a job, or when a job completes in an error or warning state.
The Oracle ADF interface is integrated with the Oracle Fusion application, and the application is tested and deployed. End users access the Oracle ADF user interface, fill in optional job properties, and click a button to submit the job request.
The application receives the submitted job request and calls Oracle Enterprise Scheduler to run the job. The Oracle Fusion application accesses the values of the properties entered by end users through the view object in which these properties were defined at design time. The job returns a result of success or failure, and the result passes from the Oracle Fusion application to Oracle Enterprise Scheduler.
Custom Task Flow
A job whose properties are to be filled in by end users through an Oracle ADF user interface at runtime includes ADF Business Components view objects with validation and the parameters to be completed by end users. These parameters are submitted at runtime in the order in which they are defined, meaning the first custom parameter defined is submitted first. The custom parameters must be named as follows:
ParameterVO1.ATTRIBUTE1, ParameterVO1.ATTRIBUTE2, ParameterVO2.ATTRIBUTE1, ParameterVO3.ATTRIBUTE1, and so on.
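As an illustration only (not an Oracle API), the naming convention can be generated mechanically from the order in which the view objects and their attributes are defined:

```java
import java.util.ArrayList;
import java.util.List;

public class ParamVoNames {
    /**
     * attributeCounts[i] is the number of attributes on ParameterVO(i+1).
     * Returns the parameter keys in the order they are submitted at runtime.
     */
    public static List<String> names(int[] attributeCounts) {
        List<String> keys = new ArrayList<>();
        for (int vo = 0; vo < attributeCounts.length; vo++) {
            for (int attr = 1; attr <= attributeCounts[vo]; attr++) {
                keys.add("ParameterVO" + (vo + 1) + ".ATTRIBUTE" + attr);
            }
        }
        return keys;
    }
}
```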
If the job definition includes the properties ContextParametersVO, ParameterTaskflow, and parametersVO, these properties render in that order at runtime.
Context-Sensitive Parameters
When starting the job request submission page UI to submit a job or job set request with context-sensitive parameters, the contextParametersVO parameter initially renders in the Parameters tab of the Oracle ADF user interface. The end user can then enter values for the context-sensitive parameters. Clicking Next invokes an API called setContextAPI, passing the context parameters. The context is set at the database level and the remaining parametersVO job parameters are rendered.
When the context-sensitive parameters are modified, end users must click Next to set the context with the new values.
Notifications
When the final status of the job is determined, Oracle Enterprise Scheduler delivers the notifications to the relevant users using the User Messaging Service. Users receive notifications based on their messages preferences.
The notification view object defined at design time populates the input box in the submission request user interface at runtime.
TBD
* Applying Language Templates
* Specifying a Printer for Language Output
* Configuring Automatic Notifications Upon Completion of Language-Specific Requests
You can submit, cancel and otherwise manage job requests using the request submission API.
For information about using the request submission API, see the chapter "Using the Runtime Service" in Developing Applications for Oracle Enterprise Scheduler.
Use the Oracle Enterprise Scheduler Java API to request job submission in a Java application. For more information about the API, see the Oracle Enterprise Scheduler Service javadoc documentation.
Sample code: TBD
The package ESS_RUNTIME_SERVICE enables submitting job requests using PL/SQL.
The following samples illustrate the use of the PL/SQL request submission API.
In the first example, a request is submitted to run a job defined in the metadata of a J2EE application called app1. The procedure takes two arguments set to myarg1 and myarg2. The job is scheduled to run every two hours beginning at the time of job submission.
In the second example, a subrequest is submitted. A subrequest is a request submitted from a running job, where the running job passes its own execution context in the form of a parameter called request_handle. Passing the execution context ties the request being submitted to the currently running request. This routine is designed to be called from a running PL/SQL request.
The request is submitted to a J2EE application called app1, and the procedure takes two arguments whose values are myarg1 and myarg2. The job is scheduled to run every two hours from the time of submission.
Example 65-40 Submitting a job request using the PL/SQL API
myprops ess_runtime.request_prop_table_t;

ess_runtime.set_submit_args(myprops, 'myarg1', 'myarg2');

ess_runtime.submit_request_adhoc_sched(
  application     => 'app1',
  definition_type => 'JOB',
  definition_name => 'jobA',
  interval        => 2,
  frequency       => 'HOURLY',
  props           => myprops);
Example 65-41 Submitting a Subrequest Using the PL/SQL API
procedure myjob (request_handle IN varchar2,
                 arg1           IN varchar2,
                 arg2           IN varchar2)
as
  reqid   number;
  myprops ess_runtime.request_prop_table_t;
  ...
begin
  ...
  reqid := ess_runtime.get_request_id(request_handle);
  ess_runtime.set_submit_args(myprops, 'subarg1', 'subarg2');
  ess_runtime.submit_subrequest(
    request_handle     => request_handle,
    definition_name    => 'jobA',
    definition_package => 'Test_Package',
    props              => myprops);
  ...
end;
Oracle Business Intelligence Publisher enables generating reports from a variety of data sources, such as Oracle Database, web services, RSS feeds, files, and so on. BI Publisher provides several delivery options for generated reports, including print, fax, and email.
To create an Oracle BI Publisher report, an Oracle BI Publisher report definition is required. Oracle BI Publisher report definitions consist of a data model that specifies the type of data source (database, web service, and so on) and a template for output formatting.
With report definitions in place, options for reporting are available to end users in the Output tab of the Oracle ADF user interface. The Output tab provides options through which an end user can define templates for reports. They can specify layout templates, document formats (such as PDF, RTF, and more), report destinations (email addresses, fax numbers, or printer addresses), and so on. When the user submits a request, this information is stored in the Oracle Enterprise Scheduler schema. The preprocessor then invokes the Oracle BI Publisher service and passes the saved data to it.
Extensions to Oracle Enterprise Scheduler provide the ability to run Oracle BI Publisher reports as batch jobs. The Oracle Enterprise Scheduler postprocessing infrastructure enables applying Oracle BI Publisher formatting templates to XML data and delivering the formatted reports by printing, faxing, and so on.
Related Links
The following documents provide additional information related to subjects discussed in this section:
For more information about defining postprocessing actions for scheduled jobs, see "Creating a Business Domain Layer Using Entity Objects" in the Developing Fusion Web Applications with Oracle Application Development Framework.
For more information on the web service, see the chapter "Using the Oracle Enterprise Scheduler Web Service" in Developing Applications for Oracle Enterprise Scheduler.
For more information about configuring security certificates, see the chapter "Managing Keystores, Wallets, and Certificates" in Administering Oracle Fusion Middleware.
Before you start defining Oracle BI Publisher postprocessing for a scheduled job, do the following:
Example 65-42 Location of the File for Setting Up Oracle BI Publisher Reporting and Seeding the Database
$BEAHOME/jdeveloper/jdev/oaext/adflib/PPActions.jar
Defining postprocessing for a scheduled job involves the following:
Define the postprocessing action.
Create a Java class for the postprocessing action. The Java class uses the parameters collected by the Oracle Enterprise Scheduler UI and calls Oracle BI Publisher APIs as required.
Create a native ADF Business Components view object to save parameters for postprocessing, such as template name, output format, locale, and so on.
To create an Oracle BI Publisher postprocessing action, do the following:
Example 65-43 A Java Class that Defines a Postprocessing Action
package oracle.apps.shh.Obfuscate;

import java.util.ArrayList;

import oracle.apps.fnd.applcp.request.postprocess.PostProcess;
import oracle.apps.fnd.applcp.util.ESSContext;
import oracle.apps.fnd.applcp.util.PostProcessState;
import oracle.as.scheduler.*;

public class PPobfuscate implements PostProcess
{
    ArrayList myOutputFiles = new ArrayList();

    public ArrayList getOutputFileList()
    {
        return myOutputFiles;
    }

    public PostProcessState invokePostProcess(long requestID, String ppArguments[],
                                              ArrayList files)
    {
        RuntimeService rService = null;
        RuntimeServiceHandle rHandle = null;
        try
        {
            // Access the runtime details for the given requestID.
            RequestDetail rDetail = null;
            RequestParameters rParam = null;
            String obfuscationSeed = ppArguments[0];
            String codedFileName = ppArguments[1];
            String myNewFile;
            String outDir = null;

            rService = ESSContext.getRuntimeService();
            if (rService != null)
                rHandle = rService.open();
            if (rHandle != null)
                rDetail = rService.getRequestDetail(rHandle, requestID);
            if (rDetail != null)
                rParam = rDetail.getParameters();
            if (rParam != null)
                outDir = (String) rParam.getValue("outputWorkDirectory");

            if (outDir == null)
            {
                // Details not received; usually an exception would have been
                // thrown by now. Handle this case to be robust.
                // Log the ERROR to Oracle Diagnostic Logging.
                return PostProcessState.ERROR;
            }

            // Check the files.
            if (files == null || files.isEmpty())
            {
                // No files. Postprocessing should never call us in this state;
                // in case it does, log an error to Oracle Diagnostic Logging.
                return PostProcessState.ERROR;
            }

            // This example expects a single file.
            myNewFile = outDir + System.getProperty("file.separator") + codedFileName;
            Obfuscate.performObfuscation((String) files.get(0), obfuscationSeed, myNewFile);
            myOutputFiles.add(myNewFile);

            // In case multiple files are used, append a counter to the
            // file name to keep it unique.
            for (int i = 1; i < files.size(); i++)
            {
                myNewFile = outDir + System.getProperty("file.separator") + codedFileName + i;
                Obfuscate.performObfuscation((String) files.get(i), obfuscationSeed, myNewFile);
                myOutputFiles.add(myNewFile);
            }
            return PostProcessState.SUCCESS;
        }
        catch (RuntimeServiceException rse)
        {
            // Log RuntimeServiceException to Oracle Diagnostic Logging.
            return PostProcessState.ERROR;
        }
        catch (Exception e)
        {
            // Log Exception to Oracle Diagnostic Logging.
            return PostProcessState.ERROR;
        }
        finally
        {
            if (rHandle != null)
                rService.close(rHandle);
        }
    }
} // end class
Example 65-44 shows a PL/SQL job that includes Oracle BI Publisher postprocessing actions. The PL/SQL job calls the method ess_runtime.add_pp_action to generate a layout for the data from the postprocessing action. This example formats the XML generated by the job as a PDF file.
Example 65-44 Defining a Scheduled PL/SQL Job with Oracle BI Publisher Postprocessing Actions
declare
  l_reqid number;
  l_props ess_runtime.request_prop_table_t;
begin
  ...
  ess_runtime.add_pp_action (
    props           => l_props,          -- IN OUT request_prop_table_t
    action_order    => 1,                -- Order in which this postprocessing action executes
    action_name     => 'BIPDocGen',      -- Action for document generation (layout)
    on_success      => 'Y',              -- Call this action on success
    on_warning      => 'N',              -- Call this action on warning
    on_error        => 'N',              -- Call this action on error
    file_mgmt_group => 'XML',            -- File types this action processes; must be defined in the job definition
    step_path       => NULL,             -- IN varchar2 default NULL
    argument1       => 'XLABIPTEST_RTF', -- Template name needed for the document generation action
    argument2       => 'pdf'             -- Type of layout file generated by the document generation action
  );
  ...
  l_reqid := ess_runtime.submit_request_adhoc_sched (
    application        => 'SSEssWls',    -- Application
    definition_type    => 'JOB',
    definition_name    => 'BIPTestJob',  -- Job definition
    definition_package => '/mypackage',  -- Job definition package
    props              => l_props);
  commit;
  dbms_output.put_line('request_id = :'||l_reqid);
end;
You can invoke postprocessing actions programmatically from a client using a Java or web service API. Both APIs require the same set of parameter values, described in Table 65-7.
For Java clients, call the addPPAction method of oracle.as.scheduler.cp.SubmissionUtil. The method takes the values needed to invoke the action and throws an IllegalArgumentException if the number of arguments exceeds 10. Example 65-45 shows the declaration of the method.
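The 10-argument limit can be sketched as a standalone check. This is only an illustration of the documented rule, not the SubmissionUtil implementation; the class and method names (PPActionArgCheck, checkArgumentCount) are hypothetical.

```java
// Hypothetical sketch of the documented argument-count rule: a postprocessing
// action accepts at most 10 arguments, and more than that is rejected with
// an IllegalArgumentException.
public class PPActionArgCheck {
    static final int MAX_PP_ARGUMENTS = 10;

    static void checkArgumentCount(String[] arguments) {
        if (arguments != null && arguments.length > MAX_PP_ARGUMENTS) {
            throw new IllegalArgumentException(
                "A postprocessing action accepts at most " + MAX_PP_ARGUMENTS
                + " arguments; got " + arguments.length);
        }
    }

    public static void main(String[] args) {
        // Two arguments, as in the BIPDocGen examples: accepted.
        checkArgumentCount(new String[] {"XLABIPTEST_RTF", "pdf"});
        boolean rejected = false;
        try {
            checkArgumentCount(new String[11]); // 11 arguments: rejected
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        assert rejected : "11 arguments should be rejected";
    }
}
```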
For web service clients, you invoke the method using a proxy, as in Example 65-46.
Table 65-7 Parameters for Adding a Postprocessing Action
Parameter | Description |
---|---|
params | A RequestParameters object for the request. |
actionOrder | The ordinal location of this action in the sequence of actions to be performed within the action domain. Oracle BI Publisher processes requests starting with action order index 1. |
actionName | The name of the action to perform; for example, BIPDocGen for the Oracle BI Publisher document generation action. |
description | Description of this postprocessing action. |
onSuccess | Determines whether this action should be performed on successful completion of the job. |
onWarning | Determines whether this action should be performed when the job or step has completed with a warning. |
onError | Determines whether this action should be performed when the job or step has completed with an error. |
fileMgmtGroup | The name of the File Management Group. When using an Oracle BI Publisher template, the value of this parameter is XML. |
arguments | A list of arguments for the postprocessing action. |
Example 65-45 Sample declaration of the addPPAction method
public static void addPPAction(RequestParameters params,
                               int actionOrder,
                               String actionName,
                               String description,
                               boolean onSuccess,
                               boolean onWarning,
                               boolean onError,
                               String fileMgmtGroup,
                               String[] arguments)
    throws IllegalArgumentException
Example 65-46 Adding Postprocessing Actions for a Request
ESSWebService proxy = createProxy("addPPActions");

PostProcessAction ppAction = new PostProcessAction();
ppAction.setActionOrder(1);
ppAction.setActionName("BIPDocGen");
ppAction.setOnSuccess(true);
ppAction.setOnWarning(false);
ppAction.setOnError(false);
ppAction.getArguments().add("argument1");
ppAction.getArguments().add("argument2");

List<PostProcessAction> ppActionList = new ArrayList<PostProcessAction>();
ppActionList.add(ppAction);

RequestParameters reqParams = new RequestParameters();
reqParams = proxy.addPPActions(reqParams, ppActionList);
Depending on the FMG property set for the job definition, the relevant postprocessing action is selected for the job.
The ppArguments array stores the values collected from the view object attributes. The array is passed to the invokePostProcess method, which executes in the Java class that defines the postprocessing action.
At runtime, the user interface uses the view object to collect the arguments for executing the postprocessing action as defined in the table APPLCP_PP_ACTIONS. These arguments also instruct the user interface how to invoke the action logic.
The postprocessing action accesses the XML output file from the job request, and passes the XML output to Oracle BI Publisher. The postprocessing action creates a report request containing the XML data.
The postprocessing action displays in the submission Oracle ADF UI. The UI enables adding a postprocessing action for the scheduled job, selecting arguments for the action using the view object, and selecting output options for the action. The user interface also displays the name of the File Management Group with which the output files are associated.
Note:
When testing the UI in a web browser, you may need to add a security exception to your browser so that the UI renders correctly. Follow the directions in the online help for your web browser.
It is possible to view previously submitted jobs by integrating the Monitoring Processes task flow into an application.
For information about debugging and enabling tracing for jobs, see the related links that follow.
Related Links
The following document provides additional information related to subjects discussed in this section:
For more information about tracing Oracle Enterprise Scheduler jobs, see the section "Tracing Oracle Enterprise Scheduler Jobs" in the chapter "Managing Oracle Enterprise Scheduler Service and Jobs" in the Administrator's Guide.
The main steps involved in monitoring scheduled job requests using an Oracle ADF UI are as follows:
Configure Oracle Enterprise Scheduler in JDeveloper
Create and initialize an Oracle Fusion web application
Create a UI Shell page and drop the Monitor Processes task flow onto it
Note:
Fields such as submission date, ready time, scheduled date, process start, name, type, definition, and so on, are not set unless the job request or subrequest is successfully validated.
To monitor scheduled job requests, do the following:
You can embed a table of job request search results as a region on a page. Task flow parameters can be used to further specify the job requests returned by the search.
To embed a search-results table, do the following:
You can enable Oracle Diagnostic Logging in an Oracle ADF UI used to monitor scheduled job requests. When enabling logging, the UI displays a View Log button.
The View Log functionality in the monitoring UI applies only to scheduled requests whose persistenceMode property is set to a value of file. Hence, the View Log button in the scheduled request submission monitoring UI displays only when viewing such requests.
The only other valid value for the persistenceMode property is content. The View Log button is hidden for all requests with a persistenceMode property value of content. If the persistenceMode property is not specified for a given request, then the monitoring UI defaults to a persistenceMode value of file, and displays the View Log button when viewing relevant requests.
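The visibility rule for the View Log button reduces to a small decision. The following sketch uses a hypothetical helper (showViewLogButton); only the persistenceMode values file and content, and the default of file, come from the text above.

```java
// Hypothetical sketch of the View Log visibility rule: the button is shown
// only when persistenceMode resolves to "file".
public class ViewLogVisibility {
    static boolean showViewLogButton(String persistenceMode) {
        // An unspecified persistenceMode defaults to "file".
        String mode = (persistenceMode == null) ? "file" : persistenceMode;
        return "file".equals(mode);
    }

    public static void main(String[] args) {
        assert showViewLogButton("file");      // button displayed
        assert !showViewLogButton("content");  // button hidden
        assert showViewLogButton(null);        // default is file, button displayed
    }
}
```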
To log scheduled job requests, do the following:
Example 65-47 Enabling Logging in the logging.xml File
<logger name='oracle.apps.fnd.applcp.srs' level='FINEST'
        useParentHandlers='false'>
  <handler name='odl-handler'/>
</logger>
The following tips are useful for troubleshooting the Oracle ADF UI used to monitor scheduled job requests.
Displaying a readable name. When defining metadata, use the display-name attribute to configure the name displayed in the Oracle ADF UI. The monitoring UI displays the value defined for the display-name attribute. If this attribute is not defined, the UI displays the value of the metadata-name attribute assigned to the metadata.
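The fallback just described can be sketched as follows; resolveDisplayName is a hypothetical helper used only to illustrate the rule, not part of the Oracle Enterprise Scheduler API.

```java
// Hypothetical sketch of the display-name fallback: the display-name
// attribute wins when defined, otherwise the metadata name is shown.
public class NameResolution {
    static String resolveDisplayName(String displayName, String metadataName) {
        return (displayName != null && !displayName.isEmpty())
                ? displayName
                : metadataName;
    }

    public static void main(String[] args) {
        assert "Payroll Run".equals(resolveDisplayName("Payroll Run", "PayrollJob"));
        assert "PayrollJob".equals(resolveDisplayName(null, "PayrollJob"));
    }
}
```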
Displaying multiple links in the task flow UI that each display a popup window with a different job definition. The recommended approach is to create a single page fragment that contains the scheduled request submission task flow within an Oracle ADF region. This page is reused by each link to display a different job definition in the scheduled request submission UI. For each link, pass the relevant parameters such as the job definition name, package name, and so on. This approach ensures that the UI session creates and uses a single instance of the task flow.
Displaying the correct name given the metadata name and display name attributes. By default, the display name takes precedence and displays in the UI. If the display name is not defined, then the UI displays the job or job set name.
Resolving name conflicts between a job metadata parameter name and a request parameter with the same name. Oracle Enterprise Scheduler uses the following rules to resolve parameter name conflicts.
The last definition takes precedence. When the same parameter is defined repeatedly with the read-only flag set to false in all cases, the last parameter definition takes precedence. For example, a property specified at the job request level takes precedence over the same property specified at the job definition level.
The first read-only definition takes precedence. When the same parameter is defined repeatedly and at least one definition is read-only (that is, the ParameterInfo read-only flag is set to true), the first read-only definition takes precedence. For example, a read-only parameter specified at the job type definition level takes precedence over a property with the same name specified at the job definition level, regardless of whether the latter is read-only.
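The two precedence rules above can be sketched as a single pass over the definitions in the order they are applied (job type, then job definition, then request). The Param class and resolve method below are illustrative only, not the real Oracle Enterprise Scheduler API.

```java
// Hypothetical sketch of the documented conflict-resolution rules:
// definitions are applied in order; the first read-only definition freezes
// the value, otherwise the last definition wins.
public class ParamResolution {
    static final class Param {
        final String value;
        final boolean readOnly;
        Param(String value, boolean readOnly) {
            this.value = value;
            this.readOnly = readOnly;
        }
    }

    static String resolve(Param... definitionsInOrder) {
        String value = null;
        for (Param p : definitionsInOrder) {
            value = p.value;
            if (p.readOnly) {
                return value; // first read-only definition takes precedence
            }
        }
        return value; // otherwise the last definition takes precedence
    }

    public static void main(String[] args) {
        // All writable: the request-level (last) definition overrides the rest.
        assert "request".equals(resolve(
                new Param("jobdef", false), new Param("request", false)));
        // Read-only at the job type level wins over later definitions.
        assert "jobtype".equals(resolve(
                new Param("jobtype", true), new Param("jobdef", false)));
    }
}
```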
Resolving name conflicts between the job or job set metadata name and display name attributes. By default, the display name takes precedence over the metadata name. If the display name is not defined, then the UI defaults to displaying the job or job set name.
Understanding the state of a job request. There are 20 possible states for a job request, each with a corresponding number value. These are shown in Table 65-8.
Table 65-8 Job Request States
Job State Number | Job Request State | Description |
---|---|---|
-1 | UNKNOWN | The state of the job request is unknown. |
1 | WAIT | The job request is awaiting dispatch. |
2 | READY | The job request has been dispatched and is awaiting processing. |
3 | RUNNING | The job request is being processed. |
4 | COMPLETED | The job request has completed and postprocessing has commenced. |
5 | BLOCKED | The job request is blocked by one or more incompatible job requests. |
6 | HOLD | The job request has been explicitly held. |
7 | CANCELLING | The job request has been canceled and is awaiting acknowledgement. |
8 | EXPIRED | The job request expired before it could be processed. |
9 | CANCELLED | The job request was canceled. |
10 | ERROR | The job request has run and resulted in an error. |
11 | WARNING | The job request has run and resulted in a warning. |
12 | SUCCEEDED | The job request has run and completed successfully. |
13 | PAUSED | The job request paused for subrequest completion. |
14 | PENDING_VALIDATION | The job request has been submitted but has not been validated. |
15 | VALIDATION_FAILED | The job request has been submitted, but validation has failed. |
16 | SCHEDULE_ENDED | The schedule for the job request has ended, or the job request expiration time specified at submission has been reached. |
17 | FINISHED | The job request, and all child job requests, have finished. |
18 | ERROR_AUTO_RETRY | The job request has run, resulted in an error, and is eligible for automatic retry. |
19 | ERROR_MANUAL_RECOVERY | The job request requires manual intervention to be retried or transitioned to a terminal state. |
Fixing an Oracle BI Publisher report that does not generate, even though the Oracle Enterprise Scheduler schema REQUEST_PROPERTY table contains all the relevant postprocessing parameters. Verify that the postprocessing parameters begin with an index value of 1. If a set of parameters begins with an index value of 0 (such as the parameter pp.0.action), the Oracle BI Publisher report does not generate, because Oracle BI Publisher expects parameters to begin with an index value of 1. In the case of a job set with multiple Oracle BI Publisher jobs, verify that the postprocessing actions of each individual step begin with an index value of 1.
Fixing a scheduled request submission UI that does not display, and throws a partial page rendering error in the browser indicating that the drTaskflowId
property is invalid. This error may occur due to any of the following.
The object oracle.as.scheduler.JobDefinition may be unavailable to the scheduled request submission UI, which attempts to query the object using the MetadataService API.
The job definition name or the job definition package name is incorrect when passed as task flow parameters. Ensure that the package name does not end with a trailing forward slash.
The metadata permissions are not properly configured for the user who is currently logged in. The JobDefinition
object, being stored in Oracle Metadata Services Repository, requires adequate metadata permissions to read and modify the JobDefinition
metadata. Ensure that the Oracle Metadata Services Repository to which you are referring contains the job definition name in the proper package hierarchy.
Every Oracle application registers task flows with a product called Oracle Fusion Applications Setup Manager. Applications Setup Manager provides a single, unified user interface that allows customers and implementers to configure all Oracle applications by defining custom configuration templates or tasks based on their business needs.
The Applications Setup Manager UI enables customers and implementers to select the business processes or products that they want to implement.
Oracle Function Security controls your privileges to a specific task flow, and users who do not have the required privilege cannot view the task flow. For more information about how to implement function security privileges and roles, see Implementing Function Security.
Table 65-9 lists the task flows and their parameters.
Table 65-9 Oracle Enterprise Scheduler Task Flows and Parameters
Task Flow Name | Task Flow XML | Parameters Passed | Behavior | Comments |
---|---|---|---|---|
Schedule Requests | /WEB-INF/ScheduleRequest-taskflow.xml | None. | Schedules a job request submission. | None. |
The Oracle ADF UI used to submit scheduled requests supports basic and advanced modes. Switching between modes requires page navigation between two view activities.
In some cases, you may want to use a custom parameter task flow for the UI in the context of an Oracle Fusion web application. One such use case is when you require a method call activity as the default activity of a custom bounded task flow so as to initialize the parameters view object and Flexfield filters defined in that task flow.
When using page navigation between two view activities and custom bounded task flows with a default method call activity, switching between basic and advanced modes might reinitialize the related view objects and entity objects. If this happens, any data entered in basic mode is lost when changing to advanced mode.
The task flow template enables switching between basic and advanced modes in the scheduled request submission Oracle ADF UI without losing data.
Related Links
The following document provides additional information related to subjects discussed in this section:
For more information about creating task flows, see the part "Creating Oracle ADF Task Flows" in Developing Fusion Web Applications with Oracle Application Development Framework. Alternatively, add the lines of code shown in Example 65-49 to the task flow XML file.
A bundled task flow template is provided, containing the components required to enable switching between basic and advanced modes in the Oracle ADF UI. The task flow template adds a router activity and an input parameter to the custom bounded task flow. Configure the router activity as the default activity.
You need only extend the task flow template as needed and implement the activity IDs defined in the task flow template.
Example 65-48 shows a sample implementation of the task flow template.
The task flow template defines the following:
A default-activity,
An input parameter of Boolean type,
A router activity,
A control-flow-rule containing two cases.
Example 65-48 Task Flow Template
<?xml version="1.0" encoding="UTF-8" ?>
<adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
  <task-flow-template id="srs-custom-task-flow-template">
    <default-activity id="defActivity">defaultRouter</default-activity>
    <input-parameter-definition id="param1">
      <description id="paramDescription">Parameter to decide on initialization.</description>
      <name id="paramName">shouldInitialize</name>
      <value id="paramID">#{pageFlowScope.shouldInitialize}</value>
      <class id="paramType">boolean</class>
      <required/>
    </input-parameter-definition>
    <router id="defaultRouter">
      <case id="routerCaseID">
        <expression id="routerExprID">#{pageFlowScope.shouldInitialize}</expression>
        <outcome id="outcomeID">initializeTaskflow</outcome>
      </case>
      <default-outcome id="defOutcomeID">skip</default-outcome>
    </router>
    <control-flow-rule id="ctrlFlwRulID">
      <from-activity-id id="FrmAc1">defaultRouter</from-activity-id>
      <control-flow-case id="CtrlCase1">
        <from-outcome id="FrmAct3">initializeTaskflow</from-outcome>
        <to-activity-id id="ToAct1">initActivity</to-activity-id>
      </control-flow-case>
      <control-flow-case id="CtrlCase2">
        <from-outcome id="FrmAct2">skip</from-outcome>
        <to-activity-id id="ToAct2">defaultView</to-activity-id>
      </control-flow-case>
    </control-flow-rule>
    <use-page-fragments/>
  </task-flow-template>
</adfc-config>
If you need to create your own custom bounded task flow UI for the parameters section of the scheduled request submission UI, you will need to extend this template.
To extend the task flow template, do the following:
Example 65-49 Extending a Task Flow
<template-reference>
  <document id="doc1">/WEB-INF/srs-custom-task-flow-template.xml</document>
  <id id="temid">srs-custom-task-flow-template</id>
</template-reference>
Example 65-50 Implementing a Control Flow Rule
<control-flow-rule>
  <from-activity-id>initActivity</from-activity-id>
  <control-flow-case>
    <from-outcome>outcome_of_init_activity</from-outcome>
    <to-activity-id>defaultView</to-activity-id>
  </control-flow-case>
</control-flow-rule>
Based on the value of the input parameter, the router invokes the method call activity or skips it, and invokes the view activity directly. The Oracle ADF UI must pass the correct parameter values to the task flow while switching modes.
When loading the initial page in basic mode, the method call activity is invoked. When loading the page in advanced mode, the custom bounded task flow directly invokes the view activity. This ensures that user-entered data persists in the view objects across modes.
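The routing decision made by the template in Example 65-48 can be sketched as follows. The ModeRouter class is a hypothetical stand-in for the router activity; only the shouldInitialize parameter and the initializeTaskflow and skip outcomes come from the template.

```java
// Hypothetical sketch of the router decision in the task flow template:
// the shouldInitialize input parameter selects either the initialization
// outcome or the outcome that skips straight to the default view.
public class ModeRouter {
    static String route(boolean shouldInitialize) {
        return shouldInitialize ? "initializeTaskflow" : "skip";
    }

    public static void main(String[] args) {
        // Basic mode on first load: run the method call activity.
        assert "initializeTaskflow".equals(route(true));
        // Advanced mode: skip initialization so entered data survives.
        assert "skip".equals(route(false));
    }
}
```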
If the custom task flow UI does not render correctly, check whether transactional properties, such as the requires-transaction property, have been set in the custom task flow.
Remove transactional properties from the task flow definition and set the data control scope to shared.
Because the parent scheduled request submission UI task flow already has a transaction, Oracle ADF commits all called task flow transactions if the data controls are shared.
Note:
When using the UI to schedule a job to run for a year, for example, a maximum of 300 occurrences display when clicking Customize Times.
When creating Oracle ADF UIs for scheduled jobs, you can secure the individual task flows involved using a security policy.
The task flows you can secure are the following:
Scheduling Job Requests UI
/WEB-INF/ScheduleRequest-taskflow.xml
/WEB-INF/srs-test-task-flow.xml#srs-test-task-flow
/WEB-INF/LayoutRN-taskflow.xml#LayoutRN-taskflow
/WEB-INF/NotifyRN-taskflow.xml#NotifyRN-taskflow
/WEB-INF/ScheduleRN_taskflow.xml#ScheduleRN_taskflow
Monitoring Job Requests UI
/WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/MonitorProcessesMainAreaFlow.xml#MonitorProcessesMainAreaFlow
/WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/EmptyFlow.xml
Oracle Enterprise Scheduler is fully integrated with Oracle Fusion Applications logging. The logger captures Oracle Enterprise Scheduler-specific attributes when invoking logging from within the context of a running job request. You can set the values to these Oracle Enterprise Scheduler attributes within the context of defining a job.
Jobs can generate a log file on the file system that can be viewed with the Monitoring UI.
In a typically configured Oracle Enterprise Scheduler hosting application, log and output files are stored in an Oracle WebCenter Content repository rather than on the file system. These files are available to end users through a page you provide for monitoring scheduled job requests. For more information about request monitoring, see About Monitoring Scheduled Job Requests Using an Oracle ADF UI.
Log messages written using the request log file APIs are written to the request log file and to Oracle Fusion Applications logging at a severity level of FINE (only if logging is enabled at a level of FINE or lower).
For more information about managing log files, see the chapter "Managing Log Files and Diagnostic Data" in Administering Oracle Fusion Middleware.
Note:
Do not use the request log for debugging and internal error reporting. For Oracle Enterprise Scheduler jobs, the request log is equivalent to the end-user UI for online applications, so when writing job code you should ideally log only translatable, end user-oriented messages to the request log. The audience for debug messages and detailed internal error messages is typically system administrators and Oracle Support, not end users. Therefore, debug and detailed internal error messages should be logged to the log called FND_LOG only.
For Oracle Enterprise Scheduler jobs, the request log is equivalent to the end user interface for web applications. When developing an Oracle Enterprise Scheduler job, log to the request log only translatable end-user oriented messages.
For example, if an end user enters a bad parameter to the Oracle Enterprise Scheduler job, a translated error message logged to the request log is displayed to the end user. The end user can then take the relevant corrective action.
Example 65-51 shows how to set log messages using the request log.
If the Oracle Enterprise Scheduler job fails due to an internal software error, log the detailed failure message to the log called FND_LOG
for the system administrator or support. You can also log a high-level generic message to the request log so as to inform end users of the error. An example of a generic error message intended for end users: "Your request could not be completed due to an internal error."
Example 65-51 Setting Log Messages Using the Request Log
-- Seeded message to be displayed to the end user.
FND_MESSAGE.SET_NAME('FND', 'INVALID_PARAMETER');
-- Runtime parameter information
FND_MESSAGE.SET_TOKEN('PARAM_NAME', pName);
FND_MESSAGE.SET_TOKEN('PARAM_VALUE', pValue);
-- The following is useful for auto-logging errors.
FND_MESSAGE.SET_MODULE('fnd.plsql.mypackage.myfunctionA');
fnd_file.put_line( FND_FILE.LOG, FND_MESSAGE.GET );
Note:
Do not use the output file for debugging and internal error reporting.
The output file is a formally formatted file generated by an Oracle Enterprise Scheduler job. An output file can be sent to a printer or viewed in a UI window. Example 65-52 shows an invoice sent to an output file.
Example 65-52 Invoice Output File
fnd_file.put_line( FND_FILE.OUTPUT, '******** XYZ Invoice ********' );
Debug and error logging should be done using the Oracle Diagnostic Logging APIs only. The Oracle Enterprise Scheduler request log should not be used for system administrator or Oracle support-oriented debug and error logging purposes. The request log is for the end users and it should only contain messages that are clear and concise. When an error occurs in an Oracle Enterprise Scheduler job, use an appropriate high-level (and, ideally, translated) message to report the error to the end user through the request log. The details of the error and any debug messages should be logged with Oracle Diagnostic Logging APIs.
Common PL/SQL, Java, or C code that could be invoked by both Oracle Enterprise Scheduler jobs and interactive application code should only use Oracle Diagnostic Logging APIs. If needed, the wrapper Oracle Enterprise Scheduler job should perform appropriate batching and logging to the request log for progress reporting purposes.
Using Logging in a Java Application
In Java jobs, use the log called AppsLog for debugging and error logging. You can retrieve an AppsLog instance from the CpContext object by calling the method getLog().
Example 65-53 shows the use of logging in a Java application.
Note:
Example 65-53 uses an active WebAppsContext
object. Do not attempt to log messages using an inactive or freed WebAppsContext
object, as this can cause connection leaks.
Using Logging in a PL/SQL Application
PL/SQL APIs are part of the FND_LOG package. These APIs require invoking the relevant application user session initialization APIs, such as the method FND_GLOBAL.INITIALIZE(), to set up user session properties in the database session.
These application user session properties, including UserId, RespId, AppId, and SessionId, are needed for the log APIs. Typically, Applications Core invokes these session initialization APIs.
Log plain text messages with the method FND_LOG.STRING(). Log translatable message dictionary messages with the method FND_LOG.MESSAGE(). FND_LOG.MESSAGE() logs messages in encoded, but not translated, format, allowing the Log Viewer UI to handle translating messages based on the language preferences of the system administrator viewing the messages.
For details regarding the FND_LOG API, run the script $fnd/patch/115/sql/AFUTLOGB.pls at the prompt. Example 65-54 shows the PL/SQL logging syntax.
Example 65-55 shows how to log a message in PL/SQL after the AOL session has been initialized.
The global variable FND_LOG.G_CURRENT_RUNTIME_LEVEL allows callers to avoid a function call for messages at a lower level than the currently configured level. If logging is disabled, the current runtime level is set to a large number, such as 9999, so that only messages with levels greater than or equal to this number would be logged. This global variable is automatically populated by the FND_LOG_REPOSITORY package during session and context initialization.
Example 65-56 shows sample code that illustrates the use of the global variable FND_LOG.G_CURRENT_RUNTIME_LEVEL.
Note:
For PL/SQL in a Forms client, use the same APIs. Use the method FND_LOG.TEST() to check whether logging is enabled.
Example 65-57 shows logging message dictionary messages.
Using Logging in C
Example 65-58 illustrates the use of logging in a C application.
Example 65-53 Logging in Java Using AppsLog
public boolean authenticate(AppsContext ctx, String user, String passwd)
    throws SQLException, NoSuchUserException
{
  AppsLog alog = (AppsLog) ctx.getLog();
  if (alog.isEnabled(Log.PROCEDURE)) /* To avoid String concat if not enabled */
    alog.write("fnd.security.LoginManager.authenticate.begin",
               "User=" + user, Log.PROCEDURE);
  /* Never log plain-text security-sensitive parameters like passwd! */
  boolean validUser = false;
  try {
    validUser = checkinDB(user, passwd);
  } catch (NoSuchUserException nsue) {
    if (alog.isEnabled(Log.EXCEPTION))
      alog.write("fnd.security.LoginManager.authenticate", nsue, Log.EXCEPTION);
    throw nsue; // Allow the caller to handle it appropriately
  } catch (SQLException sqle) {
    if (alog.isEnabled(Log.UNEXPECTED)) {
      alog.write("fnd.security.LoginManager.authenticate", sqle, Log.UNEXPECTED);
      Message Msg = new Message("FND", "LOGIN_ERROR"); /* System alert */
      Msg.setToken("ERRNO", sqle.getErrorCode(), false);
      Msg.setToken("REASON", sqle.getMessage(), false);
      /* Message Dictionary messages should be logged using write(..Message..),
       * and never using write(..String..) */
      alog.write("fnd.security.LoginManager.authenticate", Msg, Log.UNEXPECTED);
    }
    throw sqle; // Allow the caller to handle it appropriately
  } // End of catch (SQLException sqle)
  if (alog.isEnabled(Log.PROCEDURE)) /* To avoid String concat if not enabled */
    alog.write("fnd.security.LoginManager.authenticate.end",
               "validUser=" + validUser, Log.PROCEDURE);
  return validUser;
}
Example 65-54 PL/SQL Logging Syntax
PACKAGE FND_LOG IS
  LEVEL_UNEXPECTED CONSTANT NUMBER := 6;
  LEVEL_ERROR      CONSTANT NUMBER := 5;
  LEVEL_EXCEPTION  CONSTANT NUMBER := 4;
  LEVEL_EVENT      CONSTANT NUMBER := 3;
  LEVEL_PROCEDURE  CONSTANT NUMBER := 2;
  LEVEL_STATEMENT  CONSTANT NUMBER := 1;

  /*
  ** Writes the message to the log file for the specified level and module
  ** if logging is enabled for this level and module.
  */
  PROCEDURE STRING(LOG_LEVEL IN NUMBER, MODULE IN VARCHAR2,
                   MESSAGE   IN VARCHAR2);

  /*
  ** Writes a message to the log file if this level and module are enabled.
  ** The message gets set previously with FND_MESSAGE.SET_NAME, SET_TOKEN, etc.
  ** The message is displayed from the message dictionary stack, if
  ** POP_MESSAGE is TRUE. Pass FALSE for POP_MESSAGE if the message will
  ** also be displayed to the user later.
  ** Example usage:
  **   FND_MESSAGE.SET_NAME(...);   -- Set message
  **   FND_MESSAGE.SET_TOKEN(...);  -- Set token in message
  **   FND_LOG.MESSAGE(..., FALSE); -- Log message
  **   FND_MESSAGE.RAISE_ERROR;     -- Display message
  */
  PROCEDURE MESSAGE(LOG_LEVEL IN NUMBER, MODULE IN VARCHAR2,
                    POP_MESSAGE IN BOOLEAN DEFAULT NULL);

  /*
  ** Tests whether logging is enabled for this level and module, to avoid
  ** the performance penalty of building long debug message strings
  ** unnecessarily.
  */
  FUNCTION TEST(LOG_LEVEL IN NUMBER, MODULE IN VARCHAR2) RETURN BOOLEAN;
Example 65-55 Logging a Message in PL/SQL After the AOL Session Has Been Initialized
begin
  /* Call a routine that logs messages. */
  /* For performance purposes, check whether logging is enabled. */
  if ( FND_LOG.LEVEL_PROCEDURE >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
    FND_LOG.STRING(FND_LOG.LEVEL_PROCEDURE,
                   'fnd.plsql.MYSTUFF.FUNCTIONA.begin',
                   'Hello, world!' );
  end if;
end;
/
Example 65-56 Logging a Message in PL/SQL Using FND_LOG.G_CURRENT_RUNTIME_LEVEL
if ( FND_LOG.LEVEL_STATEMENT >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
  dbg_msg := create_lengthy_debug_message(...);
  FND_LOG.STRING(FND_LOG.LEVEL_STATEMENT,
                 'fnd.form.ABCDEFGH.PACKAGEA.FUNCTIONB.firstlabel',
                 dbg_msg);
end if;
Example 65-57 Logging Message Dictionary Messages
if ( FND_LOG.LEVEL_UNEXPECTED >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
  FND_MESSAGE.SET_NAME('FND', 'LOGIN_ERROR'); -- Seeded message
  -- Runtime information
  FND_MESSAGE.SET_TOKEN('ERRNO', sqlcode);
  FND_MESSAGE.SET_TOKEN('REASON', sqlerrm);
  FND_LOG.MESSAGE(FND_LOG.LEVEL_UNEXPECTED, 'fnd.plsql.Login.validate', TRUE);
end if;
Example 65-58 Logging in C
#define AFLOG_UNEXPECTED 6
#define AFLOG_ERROR      5
#define AFLOG_EXCEPTION  4
#define AFLOG_EVENT      3
#define AFLOG_PROCEDURE  2
#define AFLOG_STATEMENT  1

/*
** Writes a message to the log file if this level and module are enabled.
*/
void aflogstr(/*_ sb4 level, text *module, text *message _*/);

/*
** Writes a message to the log file if this level and module are enabled.
** If pop_message=TRUE, the message is popped off the Message Dictionary
** stack where it was set with afdstring(), afdtoken(), etc. The stack is
** not cleared (so messages below will still be there in any case).
*/
void aflogmsg(/*_ sb4 level, text *module, boolean pop_message _*/);

/*
** Tests whether logging is enabled for this level and module, to avoid
** the performance penalty of building long debug message strings.
*/
boolean aflogtest(/*_ sb4 level, text *module _*/);

/*
** Internal.
** This routine initializes the logging system from the profiles.
** It also sets up the current session and user name in its state.
*/
void afloginit();