Oracle® Fusion Applications Developer's Guide for Oracle Enterprise Scheduler
11g Release 1 (11.1.2)

Part Number E10142-02

9 Working with Extensions to Oracle Enterprise Scheduler

This chapter explains how to use extensions to Oracle Enterprise Scheduler to manage job request submissions.

9.1 Introduction to Oracle Enterprise Scheduler Extensions

Oracle Enterprise Scheduler provides the ability to run different job types, including Java, PL/SQL, and spawned jobs. Jobs can run on demand or be scheduled to run in the future.

Oracle Enterprise Scheduler provides scheduling services for the following purposes:

Using Oracle JDeveloper, application developers can easily create and implement jobs. Jobs are implemented in JDeveloper, and Oracle Enterprise Scheduler runs them. A number of APIs are provided to interface between jobs executed within applications developed in JDeveloper and Oracle Enterprise Scheduler Service.

The Oracle JDeveloper extensions to Oracle Enterprise Scheduler enable the following:

Before you begin:

Install Oracle Enterprise Scheduler Service on the Oracle WebLogic Server. For more information, see the chapter "Setting Up Your Development Environment" in the Oracle Fusion Applications Developer's Guide.

9.2 Standards and Guidelines

The following standards and guidelines apply to working with extensions to Oracle Enterprise Scheduler Service:

9.3 Creating and Implementing a Scheduled Job in JDeveloper

Submitting job requests from an Oracle Fusion application requires developing the following components:

A wizard enables easily defining a new job within the context of an Oracle Fusion application. The job can be any of the following types: Java, PL/SQL, SQL*Loader, SQL*Plus, Perl, C, or host script.

9.3.1 How to Create and Implement a Scheduled Job in JDeveloper

Creating and implementing a scheduled job in JDeveloper involves creating a package or class from which to call the job, as well as defining a job definition. The job must then be deployed and tested, and a job request submission interface defined.

To create and implement a scheduled job in JDeveloper:

  1. Create a package, class, or job, and include the minimum required methods or functions.

    • Define the job request

    • Define any sub-requests, if required.

  2. If a job requires parameters to be filled in by end users using an Oracle ADF user interface, define a standard ADF Business Components view object with validation.

    For example, if a job requires information regarding duration, date, and time, create an ADF Business Components view object with the properties duration, date, and time.

  3. Create a job definition in JDeveloper using the wizard.

    If using an ADF Business Components view object to collect additional values at runtime from end users, specify the name of the view object as a property of the job definition.

  4. Deploy the job.

  5. Test the job.

  6. Create the end user job request submission interface.

    For more information about creating the end user job request submission interface, see Section 9.14, "Creating an Oracle ADF User Interface for Submitting Job Requests".

9.3.2 What Happens at Runtime: How a Scheduled Job Is Created and Implemented in JDeveloper

An Oracle ADF interface is provided to enable application end-users to submit job requests from an Oracle Fusion application. The Oracle ADF interface is easily integrated into an Oracle Fusion application. Once a job request is submitted through the interface, Oracle Enterprise Scheduler Service runs the job as scheduled.

9.4 Creating a Job Definition

In order to submit a job request, you must first create a job definition.

9.4.1 How to Create a Job Definition

A job definition and job type are required to submit a job request.

  • Job Definition: This is the basic unit of work that defines a job request in Oracle Enterprise Scheduler.

  • Job Type: This specifies an execution type and defines a common set of properties for a job request.

The extensions to Oracle Enterprise Scheduler Service provide the following execution types:

  • JavaType: for job definitions that are implemented in Java and run in the container.

  • SQLType: for job definitions that run as PL/SQL stored procedures in a database server.

  • CJobType: for job definitions that are implemented in C and run in the container.

  • PerlJobType: for job definitions that are implemented in Perl and run in the container.

  • SqlLdrJobType: for job definitions that are implemented in SQL*Loader and run in the container.

  • SqlPlusJobType: for job definitions that are implemented in SQL*Plus and run in the container.

  • BIPJobType: for job definitions that are executed as Oracle BI Publisher (BIP) reports. Oracle BI Publisher jobs require configuring the parameter reportID.

    For more information about defining a Business Intelligence Publisher job, see the Business Intelligence Publisher Administrator's and Developer's Guide and the Business Intelligence Publisher Report Designer's Guide.

  • HostJobType: for job definitions that run as host scripts executed from the command line.

Before you begin:

If your job definition requires additional properties to be filled in by end users at submission time, you must create a view object that defines these properties and associate it with the job definition you create. The view object is later associated with the user interface that end users use to submit job requests along with the property values.

For more information about defining properties to be filled in at runtime by end users, see Section 9.14, "Creating an Oracle ADF User Interface for Submitting Job Requests."

To create a new job definition in Oracle JDeveloper:

  1. In Oracle JDeveloper, create an Oracle Fusion web application by clicking the Application Menu icon on the Application Navigator, selecting New Project > Projects > Generic Project and clicking OK.

  2. Right-click the project and select Properties. In the Resources tab, add the directory $MW_HOME/jdeveloper/integration/ess/extJobTypes.

  3. If your job includes any properties to be filled in by end users using an Oracle ADF user interface at runtime, create an ADF Business Components view object with validation and the parameters to be filled in by end users.

    1. Right-click the Model project and select Properties. In the Resource Bundle section, configure one bundle per file and select resource bundle type Xliff Resource Bundle.

    2. Define attributes for the view objects sequentially, ATTRIBUTE1, ATTRIBUTE2, and so on, with an attribute for each required parameter. Use ADF Business Components attribute control hints to specify required prompt, validation, and formatting for each parameter. For more information, see the chapter "Creating a Business Domain Layer Using Entity Objects" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

    3. Add the property parametersVO to your job definition and specify the fully qualified path of the view object as the value of parametersVO. For example, set parametersVO to oracle.my.package.TestVO. A maximum of 100 attributes can be used for parametersVO. The attributes should be named incrementally, for example ATTRIBUTE1, ATTRIBUTE2, and so on.

    4. Define the following required properties:

    • jobDefinitionName: The short name of the job.

    • jobDefinitionApplication: The short name of the application running the job.

    • jobPackageName: The name of the package running the job.

    Additional properties can be defined as shown in Table 9-1.

    Table 9-1 Additional Job Definition Properties

    Property Description

    completionText

    An optional string value that can be used to communicate details of the final state of the job.

    This property value is displayed in the UI used to monitor job request submissions in the details section of the job request. It can be useful for displaying a short explanation as to why a request ended in an error or warning state.

    CustomDatacontrol

    The name of the data control for the application to which the parameter task flow is bound. Following is an example.

    <parameter name="CustomDatacontrol"  data-type="string">ExtParameterAM</parameter>
    

    Use this property when adding a custom task flow to an Oracle ADF user interface used to submit job requests at run time. For more information, see Section 9.14.2, "How to Add a Custom Task Flow to an Oracle ADF User Interface for Submitting Job Requests."

    defaultOutputExtension

    The suffix of the output file. Possible values are txt, xml, pdf, html.

    enableTimeStatistics

    A boolean parameter that enables or disables the accumulation of time statistics (Y or N).

    enableTrace

    A numerical value that indicates the level of tracing control for the job. Possible values are as follows:

    • 1: Database trace

    • 5: Database trace with bind

    • 9: Database trace with wait

    • 13: Database trace with bind and wait

    • 16: PL/SQL profile

    • 17: Database trace and PL/SQL profile

    • 21: Database trace with bind and PL/SQL profile

    • 25: Database trace with wait and PL/SQL profile

    • 29: Database trace with bind, wait and PL/SQL profile

    executionLanguage

    Stores the preferred language in which the job request should run.

    executionNumchar

    The numeric characters used in the preferred language in which the job runs, as defined by executionLanguage.

    executionTerritory

    The territory of the preferred language in which the job runs, as defined by executionLanguage.

    EXT_PortletContainerWebModule

    Specifies the name of the web module for the Oracle Enterprise Scheduler UI application to use as a portlet when submitting a job request. The Oracle Enterprise Scheduler central UI looks up the producer from the topology based on the registered producer application name derived from EXT_PortletContainerWebModule.

    incrementProc

    Enables a PL/SQL procedure, evaluated at runtime, that calculates the next set of date parameter values for a recurring request. Enter the name of the PL/SQL procedure. The procedure expects one argument: a number signifying the change in milliseconds between the start dates of the first and current requests.

    incrementProcArgs

    A list of comma-separated date arguments to be incremented. The incrementProc property is used to increment these values. Alternatively, a default value is used if the property incrementProc is not defined. Enter a list of argument numbers to identify which job arguments are to be incremented (for example, "1, 2, 5").

    In the example shown here, an incrementProc procedure calculates the next set of date parameter values for a recurring request. The procedure expects one argument: a number signifying the change in milliseconds between the start dates of the first and current requests.

    -- incr_test - Sample PL/SQL incrementProc procedure
        -- This procedure gets the list of arguments to be incremented
        -- using the incrementProcArgs property and increments each
        -- argument by the delta provided. This behavior is identical
        -- to the default behavior if no incrementProc is set for the
        -- job.
    procedure incr_test(   delta IN number ) is
       request_id number;
       incrProcArgs varchar2(200);
       curr_arg_n varchar2(100);
       curr_arg_v varchar2(2000);
       del_pos number := 0;
       prev_pos number := 1;
       old_date date;
       new_date date;
       delta_days number;
       begin
          request_id := FND_JOB.REQUEST_ID;
          delta_days := delta / (1000*60*60*24);
         
          -- incrProcArgs must be defined for this procedure to be
          -- called.
          incrProcArgs := ESS_RUNTIME.GET_REQPROP_VARCHAR(request_id,
                          FND_JOB.INCR_PROC_ARGS_P) || ',';
     
          LOOP
         del_pos := INSTR(incrProcArgs, ',', prev_pos);
         EXIT WHEN del_pos = 0;
         
         curr_arg_n := FND_JOB.SUBMIT_ARG_PREF_P || SUBSTR(incrProcArgs,
                       prev_pos, del_pos-prev_pos);
     
         curr_arg_v := ESS_RUNTIME.GET_REQPROP_VARCHAR(request_id, 
                                                       curr_arg_n);
     
         old_date := FND_DATE.CANONICAL_TO_DATE(curr_arg_v);
         new_date := old_date + delta_days;
         
         ESS_RUNTIME.UPDATE_REQPROP_VARCHAR(request_id, curr_arg_n,
                       FND_DATE.DATE_TO_CANONICAL(new_date));
     
         prev_pos := del_pos+1;
          END LOOP;
       end incr_test;
    

    logLevel

    The level at which events are logged (between 0 and 4). Each job type has a logLevel of 1 by default. This optional value is used to override the job type logLevel in the job definition. For more information about log levels, see the Enterprise Scheduler Service Developer's Guide.

    optimizerMode

    This flag enables setting the database optimizer mode for the job. Optimizer mode is useful for fine-tuning performance.

    parametersVO

    The ADF Business Components view object you define for additional properties to be entered at runtime by end users using an Oracle ADF user interface.

    ParameterTaskflow

    Enter the name of the task flow as a parameter. The name of the taskflow.xml file must be the same as the taskflowId. Following is an example.

    <parameter name="ParameterTaskflow"  data-type="string">/WEB-INF/oracle/apps/prod/project/ParamTestTaskFlow.xml#ParamTestTaskFlow</parameter>
    

    Use this property when adding a custom task flow to an Oracle ADF user interface used to submit job requests at run time. For more information, see Section 9.14.2, "How to Add a Custom Task Flow to an Oracle ADF User Interface for Submitting Job Requests."

    reportID

    The BIP report value specified in the Oracle BI Publisher repository. Required parameter for Oracle BI Publisher jobs only.

    rollbackSegment

    Enables setting a database rollback segment for the job, which is used until the first commit. When implementing the rollback segment, use FND_JOB.AF_COMMIT and FND_JOB.AF_ROLLBACK to commit and roll back.

    srsFlag

    A boolean parameter (Y or N) that controls whether the job displays in the job request submission user interface (see Section 9.14, "Creating an Oracle ADF User Interface for Submitting Job Requests").

    SYS_runasApplicationID

    Enables elevating access privileges for completing a scheduled job. For more information about elevating access privileges for the completion of a particular job, see Section 9.13, "Elevating Access Privileges for a Scheduled Job."
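Read together, the enableTrace values in Table 9-1 behave as additive bit flags: 1 enables the database trace, adding 4 includes binds, adding 8 includes waits, and 16 enables the PL/SQL profile. As an illustrative sketch only (not part of any Oracle API), the decoding can be expressed in Python:

```python
# Decode an enableTrace level into its component flags.
# Bit values inferred from Table 9-1: 1 = database trace,
# +4 = with bind, +8 = with wait, +16 = PL/SQL profile.
def decode_trace_level(level: int) -> set:
    flags = set()
    if level & 1:
        flags.add("database trace")
    if level & 4:
        flags.add("bind")
    if level & 8:
        flags.add("wait")
    if level & 16:
        flags.add("PL/SQL profile")
    return flags
```

For example, level 29 decodes to database trace with bind, wait, and PL/SQL profile, matching the last entry in the list.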


  4. Create a new job. From the New Gallery, select SOA Tier > Enterprise Scheduler Metadata and click Job Definition.

  5. In the Job Definition Name & Location page in the Job Definition Creation Wizard, do the following:

    • Name: Enter a name for the job.

    • JobType: Select the job type from the drop-down list.

    Click Finish. The new job definition displays.

  6. Edit the following properties in the job definition as required for the selected job type:

    • JavaJobType: Uncheck the read-only checkbox next to className and set its value to the value of the business logic class.

    • PlsqlJobType: Uncheck the read-only checkbox next to procedureName and set its value to the name of the procedure (such as myprocedure.proc). Create a new parameter named numberOfArgs. Set numberOfArgs to the number of job submission arguments, excluding errbuf and retcode.

    • CJobType: Add the parameter executableName and set its value to the name of the C job to be executed. The executable file identified by the executableName parameter must exist in the directory $APPLICATIONS_BASE/$APPLBIN.

    • PerlJobType: Add the parameter executableName and set its value to the name of the Perl script.

    • SqlLdrJobType: Add the parameter executableName and set its value to the name of the control file to be executed (located under PRODUCT_TOP/$APPLBIN). Add SQL*Loader options (such as direct=yes) as a sqlldr.directoption parameter in the job definition.

    • SqlPlusJobType: Add the parameter executableName and set its value to the name of the SQL*Plus job script to be executed (located under PRODUCT_TOP/$APPLSQL).

    • HostJobType: Add the parameter executableName and set its value to the name of the host script job to be executed. The executable file identified by the executableName parameter must exist in the directory PRODUCT_TOP/$APPLBIN.

    Note:

    Make sure the $APPLBIN and $APPLSQL variables are configured in the environment.properties file. The $APPLBIN and $APPLSQL variables point to the location of executable files under PRODUCT_TOP. These variables enable the extensions to Oracle Enterprise Scheduler Service to locate the jobs to be run. Typically, these variables are set in a pre-existing environment properties file in the system.

9.4.2 How to Define File Groups for a Job

A file group is a collection of output files such as text files, XML files, and so on. File groups enable categorizing files together for a specific purpose, such as file groups for human resources or financial reports.

File groups are used for post-processing jobs such as Business Intelligence Publisher jobs. Using post-processing actions, the results of a job can be saved as an HTML file, for example, or printed. File groups specify the type of post-processing action to be taken for a given job.

There are two types of file groups: output and layout. Post-processing layout actions create additional output files using the job request output files. For example, an XML job output file can be processed as an HTML or PDF file.

Post-processing output actions act upon job request output files by printing, faxing, or e-mailing the files, for example. Output post-processing actions can be taken on job request output files, as well as files created by layout post-processing actions. For example, a job request output XML file can be converted to a PDF file using layout post-processing actions, and then e-mailed using output post-processing actions.

For more information about defining a Business Intelligence Publisher job, see the Oracle Fusion Middleware Developer's Guide for Oracle Business Intelligence Publisher (Oracle Fusion Applications Edition), Oracle Fusion Middleware Report Designer's Guide for Oracle Business Intelligence Publisher and Oracle Fusion Middleware Administrator's Guide for Oracle Business Intelligence Publisher (Oracle Fusion Applications Edition).

To define file group properties:

  1. In the job definition for which you want to define post-processing, define a file group.

    1. Name the property Program.FMG.

    2. For the value of the property, enter a list of comma-separated file management groups, where each file group is prefixed by an L or O to indicate a layout or output file group, respectively. A sample file group property is shown in Example 9-1:

      Example 9-1 File Group Property Sample Value

      Program.FMG = L.MYXML, O.ALL, O.PDF
      

      Three file groups are listed in this example.

  2. In the job definition, create a property containing a regular expression used to filter the files in the output work directory of the job request. Any output files that match the filter will be part of the relevant file group.

    Example regular expressions are shown in Example 9-2, Example 9-3 and Example 9-4.

    Example 9-2 File Group Regular Expression Filtering for All Files with the Suffix XML

    MYXML = '.*\.xml$'
    

    Example 9-3 File Group Regular Expression Filtering for All Files

    ALL = '.*$'
    

    Example 9-4 File Group Regular Expression Filtering for All Files with the Suffix PDF

    PDF = '.*\.pdf$'
    

    An example of file group properties in a job definition is shown in Example 9-5.

    Example 9-5 File Group Properties with File Group Regular Expression Filtering

    Program.FMG = L.MYXML, O.ALL, O.PDF
    MYXML = '.*\.xml$'
    ALL = '.*$'
    PDF = '.*\.pdf$'
    

    These properties specify the use of the Business Intelligence Publisher post-processing action on the MYXML file group, followed by the print post-processing action on either ALL or PDF file groups.

  3. Optionally, rename the file group and store it in Oracle Metadata Store so that it displays in a more user-friendly way in the scheduled job request submission UI.
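The file group filters above are ordinary regular expressions matched against the file names in the request's output work directory. The following Python sketch illustrates the matching behavior using patterns equivalent to those in Example 9-5 (written as standard regular expressions); the file names are hypothetical, and this is not the scheduler's actual implementation:

```python
import re

# File groups mapped to their filter patterns, mirroring Example 9-5.
FILE_GROUPS = {
    "MYXML": r".*\.xml$",
    "ALL":   r".*$",
    "PDF":   r".*\.pdf$",
}

def files_in_group(group: str, files):
    """Return the output files that belong to the given file group."""
    pattern = re.compile(FILE_GROUPS[group])
    return [f for f in files if pattern.match(f)]
```

A file can belong to several groups at once; for instance, an XML output file matches both MYXML and ALL, so both post-processing actions apply to it.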

9.4.3 What Happens When You Create a Job Definition

The job definition is written to an XML file called <job name>.xml.

9.4.4 What Happens at Runtime: How Job Definitions Are Created

The Fusion application passes the job definition file to Oracle Enterprise Scheduler Service, which runs the job defined in the file.

9.5 Configuring a Spawned Job Environment

Configuring a spawned job involves creating an environment file and configuring an Oracle wallet.

9.5.1 How to Create an Environment File for Spawned Jobs

Spawned jobs require an environment.properties file to provide the correct environment for execution. The environment.properties file should be located in the config/fmwconfig directory under the domain.

Additional environment variables may be added to the same directory in a similar file called env.custom.properties. Variables defined in this file take precedence over those in the environment.properties file.

Similarly, server-specific environment variables may be set in the server config directory in files called environment.properties and env.custom.properties.
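The precedence rules described above amount to an ordered merge: properties from higher-precedence files overwrite those from lower-precedence ones. A minimal Python sketch of that merge order (illustrative only; this is not the scheduler's actual property loader):

```python
def merge_env(*property_maps):
    """Merge property dictionaries; later maps take precedence.

    Pass maps from lowest to highest precedence, for example:
    domain environment.properties, domain env.custom.properties,
    server environment.properties, server env.custom.properties.
    """
    merged = {}
    for props in property_maps:
        merged.update(props)
    return merged
```

So a PATH set in env.custom.properties replaces the PATH from environment.properties, while properties defined only in the base file are retained.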

Before you begin:

The following variables are used to identify the correct interpreters for various spawned job types:

  • AFSQLPLUS: The executable for SQL*Plus scripts.

  • AFSQLLDR: The executable for SQL*Loader uploads.

  • AFPERL: The Perl interpreter.

  • ATGPF_TOP: The TOP directory for ATGPF files, needed to locate key files for SQL*Plus and Perl jobs.

The following environment properties are available to all spawned jobs:

  • REQUESTID: The request ID of the current job request.

  • WORK_DIR_ROOT: The directory on the local file system where the request can perform file operations.

  • OUTPUT_WORK_DIR: The directory to which the job writes all output files.

  • LOG_WORK_DIR: The directory to which the job writes all log files.

  • INPUT_WORK_DIR: The directory to which input files are saved before the job is spawned.

  • OUTFILE_NAME: The default name for the job output file.

  • LOGFILE_NAME: The name of the log file for the job.

  • USER_NAME: The name of the user submitting the job. The job runs in the context of this user.

  • REQUEST_HANDLE: The Oracle Enterprise Scheduler request handle for the current request.

The environment variables must point to the client ORACLE_HOME and environment so that spawned jobs can connect to the database.
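A spawned job sees these properties as ordinary environment variables. The following Python sketch (with hypothetical values; not Oracle-supplied code) shows how a job script might resolve its output file from them:

```python
import os

def output_file_path(environ=os.environ):
    """Resolve the job's output file from the spawned-job environment.

    OUTPUT_WORK_DIR and OUTFILE_NAME are the variables documented
    above; any concrete values are placeholders.
    """
    return os.path.join(environ["OUTPUT_WORK_DIR"], environ["OUTFILE_NAME"])
```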

Note:

Make sure the variables you define in the environment.properties file do not include any trailing spaces. Follow the guidelines required by java.util.Properties.

Make sure to restart the server after editing the environment.properties file.

To create an environment file for spawned jobs:

  1. Use a text editor to create an environment.properties file for the spawned job.

  2. Set the following environment variables in the environment.properties file:

    • LD_LIBRARY_PATH

    • ORACLE_HOME

    • PATH: The full path of the spawned job. In Windows environments, the PATH must include all directories that are normally part of LD_LIBRARY_PATH.

    • TNS_ADMIN: The directory which stores files related to the database connection (such as tnsnames.ora, sqlnet.ora).

    • TWO_TASK: The TNS name identifying the database to which spawned jobs should connect. In Windows environments, the environment variable is LOCAL.

  3. Configure the following variables, which are required to locate spawned jobs:

    • APPLBIN: C executables and SQL*Loader control files must reside in the $APPLBIN directory under the product TOP.

    • APPL_TOP: Set this property to the top level directory where the bin directory of C executables resides.

    • APPLSQL: SQL*Plus scripts must reside in the $APPLSQL directory under the product TOP. This means that the product TOP should be accessible to the environment.

    • ATGPF_TOP: This variable is required for SQL*Plus jobs. This should point to where the wrapper script is available.

  4. Save the environment.properties file and restart the server.
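Assuming a Linux environment with an Oracle client installed under /u01, a minimal environment.properties might look like the following; every path and value here is a placeholder that must be adapted to the actual installation:

```
# Illustrative environment.properties (all values are placeholders)
ORACLE_HOME=/u01/app/oracle/client
LD_LIBRARY_PATH=/u01/app/oracle/client/lib
PATH=/u01/app/oracle/client/bin:/usr/bin:/bin
TNS_ADMIN=/u01/app/oracle/network/admin
TWO_TASK=fusiondb
APPL_TOP=/u01/app/applications
APPLBIN=bin
APPLSQL=sql
ATGPF_TOP=/u01/app/atgpf
```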

9.5.2 How to Configure an Oracle Wallet for Spawned Jobs

Use the TNS_ADMIN and ORACLE_HOME variables specified in the environment.properties file created in Section 9.5.1.

A configured Oracle wallet enables spawned jobs to connect to the database at the command line. A provisioned Fusion applications environment will have this wallet pre-configured.

To configure an Oracle wallet for the spawned job:

  1. At the prompt, enter the following commands as shown in Example 9-6.

    Example 9-6 Creating a Wallet

    cd $TNS_ADMIN
    mkdir wallet
    mkstore -wrl ./wallet -create 
    
  2. When prompted, choose a password for the wallet.

  3. At the prompt, enter the following command as shown in Example 9-7.

    Example 9-7 Creating Wallet Credentials

    mkstore -wrl ./wallet -createCredential <$TWO_TASK> fusion_runtime <fusion_runtime_password>
    

    where TWO_TASK is the variable defined in the environment.properties file and <fusion_runtime_password> is the password for the fusion_runtime user.

    This command creates permissions for accessing the wallet.

  4. When prompted, enter the wallet password created earlier.

  5. In a text editor, create a file called sqlnet.ora that includes the lines shown in Example 9-8.

    Example 9-8 Create a File Called sqlnet.ora

    SQLNET.WALLET_OVERRIDE = TRUE
    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = <$TNS_ADMIN>/wallet)
        )
      )
    
  6. In a text editor, create a file called tnsnames.ora that includes the lines shown in Example 9-9.

    Example 9-9 Create a File Called tnsnames.ora

    dbname =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = host.us.oracle.com)
          (PORT = 1521)
        )
        (CONNECT_DATA = (SID = sidname))
      )
    
  7. Execute the following commands as shown in Example 9-10.

    Example 9-10 Set Directory and File Permissions

    chmod 755 wallet
    chmod 744 wallet/cwallet.sso
    

    The first command lets anyone read and traverse the directory, while reserving write access to the directory owner.

    The second command lets the file owner read, write, and execute the file, while everyone else can only read it.
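The two modes are octal digit triples (owner, group, other), where read=4, write=2, and execute=1. A small Python sketch (illustrative only) that renders a mode the way ls displays it:

```python
def mode_to_rwx(mode: int) -> str:
    """Render a numeric chmod mode (e.g. 0o755) as an rwx string."""
    bits = "rwx"
    out = []
    for shift in (6, 3, 0):          # owner, group, other digits
        digit = (mode >> shift) & 0o7
        out.append("".join(b if digit & (4 >> i) else "-"
                           for i, b in enumerate(bits)))
    return "".join(out)
```

Mode 755 therefore renders as rwxr-xr-x and mode 744 as rwxr--r--, matching the descriptions above.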

  8. Test the wallet by connecting to it. Execute the following command as shown in Example 9-11.

    Example 9-11 Connect to the Wallet

    sqlplus /@<$TWO_TASK>
    

9.5.3 What Happens When You Configure a Spawned Job Environment

A configured Oracle wallet enables spawned jobs to connect to the database at the command line.

9.6 Implementing a PL/SQL Scheduled Job

Implementing a PL/SQL scheduled job requires creating a job definition and creating a PL/SQL package.

9.6.1 Standards and Guidelines for Implementing a PL/SQL Scheduled Job

Be sure to run sub-requests through Oracle Enterprise Scheduler Service using the Oracle Enterprise Scheduler APIs.

A PL/SQL stored procedure scheduler job should have a signature with the first two arguments being errbuf and retcode. The remaining arguments are used as required for defining job parameters. All arguments have a data type of varchar2.

9.6.2 How to Define Metadata for a PL/SQL Scheduled Job

Create a job definition as described in Section 9.4, "Creating a Job Definition."

PL/SQL jobs require setting an additional property, numberOfArgs, in the job definition. This property identifies the number of job submission arguments (not including the required arguments errbuf and retcode).

9.6.3 How to Implement a PL/SQL Scheduled Job

Oracle Enterprise Scheduler Service provides runtime PL/SQL APIs for implementing PL/SQL jobs and running the jobs using Oracle Enterprise Scheduler. A view object is defined and associated with the job definition for the job.

When creating a PL/SQL job, use the fusion database user. For information about granting access privileges to database users in the context of Oracle Fusion Applications, see the "Security" section in the Oracle Fusion Applications Developer's Guide.

Before you begin:

For more information about implementing a PL/SQL stored procedure scheduled job see Chapter 6, "Creating and Using PL/SQL Jobs."

To implement a PL/SQL scheduled job:

  1. Create a PL/SQL package, including at minimum the required errbuf and retcode arguments.

  2. Deploy the package to a database.

  3. Test the package.

9.6.4 What Happens When You Implement a PL/SQL Job

The sample PL/SQL job shown in Example 9-12 provides a signature of a PL/SQL procedure run as a job. The first two arguments to the PL/SQL procedure, errbuf and retcode, are required. The remaining arguments are properties filled in by end users and passed to Oracle Enterprise Scheduler when the job is submitted.

The example shown in Example 9-12 illustrates a sample PL/SQL job that uses the PL/SQL API.

Example 9-12 Running a Job Using the PL/SQL API

procedure fusion_plsql_sample(
-- The first two arguments are required: errbuf and retcode
-- 
                                  errbuf    out NOCOPY varchar2,
                                  retcode   out NOCOPY varchar2,

-- The errbuf value is logged when a job request ends in a warning or error
-- state, to provide a quick indication as to why the request ended in that
-- state.
-- The remaining arguments are job submission arguments, as collected from
-- the view object associated with the job in the job definition. The view
-- object is used to present a user interface to end users, allowing them
-- to enter these values, which are submitted with the job request.
-- 
-- 
                                  run_mode  in  varchar2 default 'BASIC',
                                  duration  in  varchar2 default '0',
                                  p_num     in  varchar2 default NULL,
                                  p_date    in  varchar2 default NULL,
                                  p_varchar in  varchar2 default NULL) is
 
  begin
       -- Write log file content using the FND_FILE API.
       FND_FILE.PUT_LINE(FND_FILE.LOG, 'About to run the sample program');
 
       -- Implement the business logic of the job here.
       -- 
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'RUN MODE: ' || run_mode);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'DURATION: ' || duration);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_NUM: ' || p_num);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_DATE: ' || p_date);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_VARCHAR: ' || p_varchar);
 
       -- Set the job completion status, which is returned to Oracle
       -- Enterprise Scheduler.
       errbuf := fnd_message.get('FND', 'COMPLETED NORMAL');
       retcode := 0;
 end;

The sample shown in Example 9-13 illustrates a PL/SQL job with a sub-request submission. The no_requests argument identifies the number of sub-requests that must be submitted.

Example 9-13 Submitting a Sub-request Using the PL/SQL Runtime API

procedure fusion_plsql_subreq_sample(
                                  errbuf    out NOCOPY varchar2,
                                  retcode   out NOCOPY varchar2,
                                  no_requests  in  varchar2 default '5') is
       req_cnt number := 0;
       sub_reqid number;
       submitted_requests varchar2(100);
       jobProp ess_runtime.request_prop_table_t;
  begin
       -- Write log file content using FND_FILE API
       FND_FILE.PUT_LINE(FND_FILE.LOG, 'About to run the sample program with sub-request functionality');
 
       -- The PAUSED_STATE request property, set by the job, identifies whether
       -- the request is starting for the first time or restarting after being
       -- paused.
       if ( ess_runtime.get_reqprop_varchar(fnd_job.job_request_id, 'PAUSED_STATE') is null )  -- first time start
       then
          -- Implement the business logic of the job here.
           FND_FILE.PUT_LINE(FND_FILE.OUT, 'About to submit sub-requests: ' || no_requests);
 
          -- Loop through all the sub-requests.
           for req_cnt in 1..no_requests loop
            -- Retrieve the request handle and submit the subrequest.
            sub_reqid := ess_runtime.submit_subrequest(request_handle => fnd_job.request_handle,
                                        definition_name => 'sampleJob',
                                        definition_package => 'samplePkg',
                                        props => jobProp);
             submitted_requests := submitted_requests || sub_reqid || ',';
          end loop;
 
          -- Pause the parent request.
          ess_runtime.update_reqprop_varchar(fnd_job.request_id, 'STATE', ess_job.PAUSED_STATE);
 
        -- Update the parent request with the list of submitted sub-requests,
        -- enabling the job to retrieve their status during restart.
        ess_runtime.update_reqprop_varchar(fnd_job.request_id, 'PAUSED_STATE', submitted_requests);
 
       else
          -- Restart the request, retrieve job completion status and return the
          -- status to Oracle Enterprise Scheduler Service.
           errbuf := fnd_message.get('FND', 'COMPLETED NORMAL');
          retcode := 0;
       end if;
 end;

9.6.5 What Happens at Runtime: How a PL/SQL Job is Implemented

Oracle Enterprise Scheduler Service calls routines to initialize the context of the PL/SQL job, including PL/SQL global values, local values (such as language and territory), and request-specific values such as request ID and request handle.

The view object associated with the job definition displays a user interface so that end users may fill in values for each property. The Oracle Fusion web application calls Oracle Enterprise Scheduler using the provided APIs and submits the job request. Oracle Enterprise Scheduler runs the job, which calls the context routines and then runs the job logic. The job ends with a retcode value of 0, 1, 2 or 3, representing SUCCESS, WARNING, FAILURE or BUSINESS ERROR, respectively. The Oracle Fusion web application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

9.7 Implementing a SQL*Plus Scheduled Job

Implementing a SQL*Plus scheduled job involves writing a SQL*Plus script and configuring an environment file for the job.

9.7.1 Standards and Guidelines for Implementing a SQL*Plus Scheduled Job

Be sure to run sub-requests through Oracle Enterprise Scheduler using the Oracle Enterprise Scheduler APIs.

9.7.2 How to Implement a SQL*Plus Job

Implementing a SQL*Plus job involves writing the SQL*Plus script, storing the script, and configuring a spawned job environment.

To implement a SQL*Plus job:

  1. Write the SQL*Plus job as a SQL*Plus script. Include the FND_JOB.set_sqlplus_status call so as to report the final job status.

    Include the following in the SQL*Plus scheduled job:

    • FND_JOB.set_sqlplus_status: Call to report the final job status. Statuses include:

      • FND_JOB.SUCCESS_V: Success.

      • FND_JOB.WARNING_V: Warning.

      • FND_JOB.FAILURE_V: Failure.

      • FND_JOB.BIZERR_V: Business Error.

    • FND_FILE routines: Can be used for producing log data and output files.

    • FND_JOB API for request values: API calls are initialized for SQL*Plus jobs.

    Note:

    SQL*Plus jobs must not exit.
  2. Store the script under PRODUCT_TOP/$APPLSQL.

  3. Configure the spawned job environment as described in Section 9.5, "Configuring a Spawned Job Environment". Be sure to configure the ATGPF_TOP value in the environment.properties file for spawned jobs.

  4. Run and test the job.

9.7.3 How to Use the SQL*Plus Runtime API

Oracle Enterprise Scheduler Service provides runtime SQL*Plus APIs for implementing SQL*Plus jobs and running the jobs using Oracle Enterprise Scheduler.

This sample SQL*Plus job provides a signature of a SQL*Plus procedure run as a job. Any necessary arguments are properties filled in by end users and passed to Oracle Enterprise Scheduler when the job is submitted. A view object is defined and associated with the job definition for the job. The view object is then used to display a user interface so that end users may fill in values for each property. Finally, the sample prints to an output file.

9.7.4 What Happens When You Implement a SQL*Plus Job

Example 9-14 shows a sample SQL*Plus scheduled job, which is executed by a wrapper script.

Example 9-14 Implementing a SQL*Plus Scheduled Job

SET VERIFY OFF
SET linesize 132
 
WHENEVER SQLERROR EXIT FAILURE ROLLBACK;
WHENEVER OSERROR EXIT FAILURE ROLLBACK;
REM dbdrv: none
 
/* ----------------------------------------------------------------------*/
 
DECLARE
errbuf        varchar2(240) := NULL;
retval        boolean;
run_mode      varchar2(200)  := '&1';
 
BEGIN
        DBMS_OUTPUT.PUT_LINE(run_mode);
 
        update dual set dummy = 'Q';
 
    FND_FILE.PUT_LINE(FND_FILE.LOG, 'Parameter 1 = ' || nvl(run_mode,'NULL'));
 
/*  print out test message to log file and output file  */
/*  by making direct call to FND_FILE.PUT_LINE          */
/*  from sql script.                                    */
 
    FND_FILE.PUT_LINE(FND_FILE.LOG,   ' ');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   '----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   'Printing a message to the LOG FILE');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   '----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   'SUCCESS!');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   ' ');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'Printing a message to the OUTPUT FILE');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'SUCCESS!');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,' ');
 
retval :=  FND_JOB.SET_SQLPLUS_STATUS(FND_JOB.SUCCESS_V);
 
END;
/
COMMIT;
-- EXIT; Fusion Applications SQL*Plus jobs must not exit.

9.7.5 What Happens at Runtime: How a SQL*Plus Job Is Implemented

Oracle Enterprise Scheduler Service calls routines in a wrapper script to initialize the context of the SQL*Plus job, including global values, local values (such as language and territory), and request-specific values such as request ID and request handle. The wrapper script introduces the prologue of commands shown in Example 9-15.

Example 9-15 SQL*Plus wrapper script

SET TERM OFF
SET PAUSE OFF
SET HEADING OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET ESCAPE ON
 
WHENEVER SQLERROR EXIT FAILURE

The Fusion application calls Oracle Enterprise Scheduler using the provided APIs. Oracle Enterprise Scheduler runs the job, and the final job status—SUCCESS, WARNING, BUSINESS ERROR or FAILURE—is communicated to Oracle Enterprise Scheduler. The Oracle Fusion web application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

9.8 Implementing a SQL*Loader Scheduled Job

Implementing a SQL*Loader scheduled job involves creating a SQL*Loader control file and configuring a spawned job environment.

9.8.1 How to Implement a SQL*Loader Scheduled Job

Before you begin:

Keep in mind that the control file and data file must conform to the following SQL*Loader standards:

  • Place control files in the $APPLBIN directory under the product TOP.

  • Make sure that the control file's name is the same as the executableName parameter in the job definition.

  • Ensure that the data file's location is the first submit argument to the job.

  • Add SQL*Loader options such as direct=yes, if needed, as the sqlldr.directoption parameter in the job definition.

To implement a SQL*Loader scheduled job:

  1. Create a SQL*Loader control file (.ctl).

  2. Enter the full path of the data file as the first submit argument to the job.

  3. Store the control file under PRODUCT_TOP/$APPLBIN.

  4. Configure the spawned job environment as described in Section 9.5, "Configuring a Spawned Job Environment."

  5. Test the file.

9.8.2 What Happens When You Implement a SQL*Loader Scheduled Job

A sample SQL*Loader scheduled job is shown in Example 9-16.

Example 9-16 Sample SQL*Loader scheduled job

This sample control file will upload data from the data file into the fnd_applcp_test table, into the columns listed here (id1, id2, ..., mesg). See the SQL*Loader documentation for more information on writing control files.

OPTIONS (silent=(header,feedback,discards))
LOAD DATA
INFILE *
INTO TABLE fnd_applcp_test
APPEND
FIELDS TERMINATED BY ','
(id1,
 id2,
 id3,
 func CHAR(30),
 time SYSDATE,
 action CHAR(30),
 mesg CHAR(240))

9.9 Implementing a Perl Scheduled Job

Implementing a Perl scheduled job involves creating a job definition, enabling the Perl job to connect to a database and configuring a spawned job environment.

9.9.1 How to Implement a Perl Scheduled Job

Before you begin:

For more information about creating a Perl scheduled job see Chapter 6, "Creating and Using PL/SQL Jobs."

To implement a Perl scheduled job:

  1. Place the Perl job under the directory PRODUCT_TOP/$APPLBIN.

  2. Create a job definition for the Perl job, setting the executableName parameter to the name of the Perl script. The following functions can be used in the Perl script:

    • writeln(): Write a message to the log file.

    • timestamp(): Write a timestamped message.

  3. To enable the Perl job to connect to a database, use /@$TWO_TASK as a connection string without specifying a username or password.

  4. Configure the spawned job environment as described in Section 9.5, "Configuring a Spawned Job Environment". The context provides values for the following:

    • reqid: The request ID.

    • outfile: The full path to the output file.

    • logfile: The full path to the log file.

    • username: The name of the user submitting the job request.

    • log: The log object.

  5. Implement an exit code for the job, with values of 0, 2 or 3 representing the following states: success, warning and business error. All other values represent an errored state.

  6. Test the job.

9.9.2 What Happens When You Implement a Perl Scheduled Job

Example 9-17 shows a sample scheduled Perl job which does the following:

  1. Checks for basic or full mode.

  2. Prints arguments.

  3. Gets the scheduled job request context object.

  4. Retrieves contextual information about the scheduled job request, which is stored in the context object.

  5. Writes the request to the log file.

  6. Prints information as required.

Example 9-17 Perl Scheduled Job

# dbdrv: none
 
use strict;
 
(my $VERSION) = q$Revision: 120.1 $ =~ /(\d+(\.\d+)*)/;
 
print_header("Begin Perl testing script (version $VERSION)");
 
# check first argument for BASIC or FULL mode
# if not FULL mode, exit successfully without doing anything
if (! $ARGV[0] || uc($ARGV[0]) ne "FULL") {
    exit(0);
}
 
# -- If argument #2 was passed, use it as a sleep time
if ($ARGV[1]) {
 
    if ($ARGV[1] =~ /\D/) {
      print "** Argument #2 is not a valid number, unable to sleep!\n\n";
        } else {
      printf("Sleeping for %d seconds...\n", $ARGV[1]);
      sleep($ARGV[1]);
        }
}
 
# -- Arguments
print_header("Arguments");
my $i = 1;
foreach (@ARGV) {
  print "Argument #", $i++, ": $_\n";
}
 
# -- Get the request context object
my $context = get_context();
 
# -- Use this object to retrieve context information about this request
 
print_header("Context Information");
printf "Request id \t= %d\n", $context->reqid();
printf "User name \t= %s\n", $context->username();
printf "Logfile \t= %s\n", $context->logfile();
printf "Outfile \t= %s\n", $context->outfile();
 
# -- Writing to the request log file
print_header("Writing to log file");
 
# -- retrieve a Logfile object from the context
my $log = $context->log();
$log->writeln("This message should appear in the request logfile");
$log->timestamp("This is a timestamped message to the request logfile");
 
print "Wrote two messages to the request logfile\n";
 
# -- Print out some useful information
 
print_header("Environment");
foreach (sort keys %ENV) {
    print "$_=$ENV{$_}\n";
}
 
print_header("Perl Information");
print "PROCESS ID = $$\n";
print "REAL USER ID = $<\n";
print "EFF USER ID = $>\n";
print "SCRIPT NAME = $0\n";
print "PERL VERSION = $]\n";
print "OS NAME = $^O\n";
print "EXE NAME = $^X\n";
print "WARNINGS ON = $^W\n";
 
print "\n\@INC path:\n";
foreach  (@INC) {
    print "$_\n";
}
 
print "\nAll loaded perl modules:\n";
foreach (sort keys %INC) {
    print "$_ => $INC{$_}\n";
}
 
# -- Exiting the script
# -- The exit status of the script will be used as the request exit status.
# -- A zero exit status is reported as state of success.
# -- An exit status of 2 is reported as a warning state.
# -- An exit status of 3 is reported as a business error state.
# -- Any other exit status is reported as an error state.
 
print_header("Exiting script with status 0. (Normal completion)");
exit(0);
 
sub print_header {
 
  my $msg = shift;
  print "\n\n", "-" x 40, "\n", $msg, "\n", "-" x 40, "\n";
 
}

9.10 Implementing a C Scheduled Job

The main steps required to implement a C scheduled job are as follows:

9.10.1 How to Define Metadata for a C Scheduled Job

Create a job definition as described in Section 9.4, "Creating a Job Definition".

9.10.2 How to Implement a C Scheduled Job

To implement a C scheduled job:

  1. Implement your required business logic in a separate function or file, rather than in main.

    Include the following header files:

    • afcp.h: This is the header file for Oracle Enterprise Scheduler.

    • afstd.h and afstr.h: These are Fusion Application header files.

  2. Call afpend in the business logic function.

  3. In the main function, call afprcp, passing to it a pointer to the business logic function.

    The business logic function is called by afprcp, taking the arguments argc, argv, and reqinfo.

  4. Save the executable job file to the $APPLICATIONS_BASE/$APPLBIN directory.

  5. Configure the spawned job environment, as described in Section 9.5, "Configuring a Spawned Job Environment".

    Be sure to set both the TOP and APPLBIN variables for your application in the environment.properties file.

9.10.3 Scheduled C Job API

Several C functions are available for use in developing Fusion applications, while several others are not. Table 9-2 and Table 9-3 list the available and unavailable functions.

Table 9-2 C Functions Available for Developing Fusion Applications

Function Description

afprcp

Run C program. The recommended API for writing a C program. The main .oc file should call this function to run the program logic. It initializes the context and calls the program.

int afprcp (uword argc, text **argv, afsqlopt *options, afpfcn *function); 

afpend

End C program. All programs must call this to signal the completion of the program. The program should pass the completion status and, if necessary, a message.

Indicate completion status with the following constants:

  • FDP_SUCCESS: Success

  • FDP_WARNING: Warning

  • FDP_ERROR: System Error

  • FDP_BIZERR: Business Error

boolean afpend (text *outcome, dvoid *handle, text *compmesg);

fdpfrs

Find request status. For a given request, retrieve the status. The following are possible request states:

  • ESS_WAIT_STATE

  • ESS_READY_STATE

  • ESS_RUNNING_STATE

  • ESS_COMPLETED_STATE

  • ESS_BLOCKED_STATE

  • ESS_HOLD_STATE

  • ESS_CANCELLING_STATE

  • ESS_EXPIRED_STATE

  • ESS_CANCELLED_STATE

  • ESS_ERROR_STATE

  • ESS_WARNING_STATE

  • ESS_SUCCEEDED_STATE

  • ESS_PAUSED_STATE

  • ESS_PENDING_VALID_STATE

  • ESS_VALID_FAILED_STATE

  • ESS_SCHEDULE_ENDED_STATE

  • ESS_FINISHED_STATE

  • ESS_ERROR_AUTO_RETRY_STATE

  • ESS_MANUAL_RECOVERY_STATE

afreqstate fdpfrs (text *request_id, text *errbuf);

fdpgret

Get the error type of a specific job request ID. The following are possible error types:

  • ESS_UNDEFINED_ERROR_TYPE

  • ESS_SYSTEM_ERROR_TYPE

  • ESS_BUSINESS_ERROR_TYPE

  • ESS_TIMEOUT_ERROR_TYPE

  • ESS_MIXED_NON_BUSINESS_ERROR_TYPE

  • ESS_MIXED_BUSINESS_ERROR_TYPE

afreqstate fdpgret (text *request_id, text *status, text *errbuf);

fdpgrs

Get request status. For a given request, retrieve the current status and completion text.

afreqstate fdpgrs (text *request_id, text *status, text *errbuf);

fdplck

Lock table. Locks the desired table with the specified lock mode and NOWAIT.

fdpscp

Legacy API for concurrent programs. All new concurrent programs should use afprcp.

boolean fdpscp (sword *argc, text **argv[], text args_type, text *errbuf); 

fdpwrt

Routines for creating log/output files and writing to files. These are routines concurrent programs should use for writing to all log and output files.


Table 9-3 C Functions Not Available for Developing Fusion Applications

Function Description

fdpgoi

Get Oracle data group.

fdpgpn

Get program name.

fdpgrc

Get request count.

fdpimp

Run the import utility.

fdpldr

Run SQL*Loader.

fdpperl

Run Perl concurrent program.

fdprep

Run report.

fdprpt

Run SQL*Rpt program.

fdprsg

Submit concurrent program. Use the afpsub routines instead.

fdpscr

Get resource security group.

fdpsql

Run SQL*Plus concurrent program.

fdpstp

Run stored procedure.


9.10.4 How to Test a C Scheduled Job

When developing a C job, it is possible to test the job by running it from a command line interface.

Running a C job from the command line involves the following main steps:

  • Invoking the job.

  • Obtaining a database connection and setting the runtime context by passing special arguments.

  • Passing any program-specific parameters at the command line.

To run a C job from the command line:

  • Use the syntax shown in Example 9-18 to run a C job from the command line for testing purposes.

    Example 9-18 Syntax for Running a C Job from the Command Line

    %program <heavyweight user connection string> <lightweight username> <flag> <job parameters> ...
    

    where

    <heavyweight user connection string> is the username/password@TWO_TASK pair used to connect to the database

    <lightweight username> is the name of the lightweight user submitting the job. This value is used to set the user context in the database connection.

    <flag> must be set to 'L' for lightweight user.

An example illustrating running a C job from the command line is shown in Example 9-19.

Example 9-19 Running a C Job from the Command Line for Testing Purposes

program username/password@my_db MYUSER L <parameter1> <parameter2> .... 

9.10.5 What Happens When You Implement a C Scheduled Job

The sample C job shown in Example 9-20 uses afprcp to initialize and obtain a database connection. It uses both Pro*C and afupi.

Example 9-20 Using the C Runtime API

#ifndef AFSTD
#include <afstd.h>
#endif
 
#ifndef AFSTR
#include <afstr.h>
#endif
 
#ifndef AFCP
#include <afcp.h>
#endif
 
#ifndef SQLCA
#include <sqlca.h>
#endif
 
#ifndef AFUPI
#include <afupi.h>
#endif
 
#ifndef FDS
#include <fds.h>
#endif
 
boolean testupi()
{
  text *sqltext;
  text buffer[ERRLEN];
  text os_user[31];
  text session_user[31];
  text db_name[31];
 
  aucursor  *use_curs;
  word      errcode;
 
  os_user[0] = session_user[0] = db_name[0] = (text)'\0';
 
  sqltext = (text*) "SELECT sys_context('USERENV','DB_NAME',30), sys_context('USERENV','SESSION_USER',30), sys_context('USERENV','OS_USER',30) from dual";
 
  use_curs = NULLCURSOR;
  use_curs = afuopen (NULLHOST, NULLCURSOR, (dvoid *)
                      sqltext,
                      UPISTRING);
  if (use_curs == NULLCURSOR) {goto upierror;}
 
  afudefine(use_curs, 1, AFUSTRING, (dvoid *)db_name, 31);
  afudefine(use_curs, 2, AFUSTRING, (dvoid *)session_user, 31);
  afudefine(use_curs, 3, AFUSTRING, (dvoid *)os_user, 31);
 
  if (!afuexec (use_curs, (uword)1, (uword)1, CSTATHOLD|CSTATEXACT) ||
      (errcode = afuerror (NULLHOST, (text *) NULL, 0)) != ORA_NORMAL) {
    goto upierror;
  }
 
  DISCARD afurelease (use_curs);
 
  DISCARD sprintf((char *)buffer, "%s as %s@%s", os_user,
                  session_user, db_name);
 
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, buffer);
 
 
  return TRUE;
 
 upierror:
  if (use_curs != NULLCURSOR)
    DISCARD afurelease (use_curs);
  DISCARD fdpwrt(AFWRT_LOG | AFWRT_NEWLINE, "Error in testupi");
  return FALSE;
}
 
void testrpc()
{
  text buffer[256];
 
 
  EXEC SQL BEGIN DECLARE SECTION;
 
  VARCHAR os_user[31];
  VARCHAR session_user[31];
  VARCHAR db_name[31];
 
  EXEC SQL END DECLARE SECTION;
 
  buffer[0] = os_user.arr[0] = session_user.arr[0] = db_name.arr[0] = '\0';
 
  EXEC SQL SELECT sys_context('USERENV','DB_NAME',30),
    sys_context('USERENV','SESSION_USER',30),
    sys_context('USERENV','OS_USER',30)
    INTO :db_name, :session_user, :os_user
    from dual;
 
  nullterm(os_user);
  nullterm(session_user);
  nullterm(db_name);
 
  DISCARD sprintf((char *)buffer, "%s as %s@%s", os_user.arr,
                  session_user.arr, db_name.arr);
 
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, buffer);
}
 
sword cptest(argc, argv, reqinfo)
/* ARGSUSED */
sword argc;
text *argv[];
dvoid *reqinfo;
{
  ub2 i;
  text errbuf[ERRLEN+1];
 
 /* Write to the log file */
  DISCARD fdpwrt(AFWRT_LOG | AFWRT_NEWLINE, (text *)"Test Success");
 /* Write to the out file */
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, (text *)"Test Args:");
 /* Loop through argv and write to the out file. */
  for ( i=0; i<argc; i++)
    DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, argv[i]);
  /* Call the Fusion Applications function afpoget to return the value of a */
  /* profile option called SITENAME and write the results to the error buffer. */
  DISCARD afpoget((text *)"SITENAME", errbuf);
  /* Write the value to the output file. */
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, errbuf);
  /* Connect to the database and run a SELECT against the database. Creates a */
  /* string and writes the returned data to the output file. Uses Pro*C APIs. */
  testrpc();
  /* Open a cursor for the SELECT statement, define variables to collect data */
  /* when the statement runs, and execute the SELECT. Creates a string which it */
  /* writes to the output file. Uses afupi APIs. */
  testupi();
  /* Writes the string "Test Completed." to the output file. */
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, (text *)"Test Completed.");
  /* Call afpend to identify the exit status, which in this case is successful. */
  /* Other possible values are FDP_WARNING, FDP_ERROR and FDP_BIZERR. The */
  /* reqinfo originally passed to cptest is passed here. Optionally, additional */
  /* text can be passed here, for example explaining the outcome of the exit */
  /* status. */
  return((sword)afpend(FDP_SUCCESS, reqinfo, (text *)NULL));
};
 
 
int main(/*_ int argc, text *argv[] _*/);
int main(argc, argv)
  int argc;
  text *argv[];
{

    /* Run cptest and return an exit value to Oracle ESS. */
    return(afprcp((uword)argc, (text **)argv,
    (afsqlopt *)NULL, (afpfcn *)cptest));
}

9.10.6 What Happens at Runtime: How a C Scheduled Job Is Implemented

When Oracle Enterprise Scheduler Service runs a C job, afprcp() runs first to initialize the context and obtain the database connection. The function afprcp() then calls the function containing the program logic. Oracle Enterprise Scheduler runs the job, and the result of the job is returned to Oracle Enterprise Scheduler. The Fusion Application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

Note:

Wallet configuration is required for the client ORACLE_HOME to obtain the database connection. The operating system environment in which the job runs (including the location of the client ORACLE_HOME, which is also required) is set in the environment.properties file. The environment.properties file must be configured and placed in the config/fmwconfig directory under the domain.

You can add your own environment variables by creating an env.custom.properties file in the same directory. Variables you define in this file take precedence over those in the environment.properties file.

Similarly, you can set server-specific environment variables with environment.properties and env.custom.properties files in the server config directory.
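
As a minimal sketch, an env.custom.properties file simply lists variable assignments; the variable name and path shown here are hypothetical:

```properties
# env.custom.properties -- entries here take precedence over the
# corresponding entries in environment.properties for spawned jobs.
# MY_CUSTOM_TOP is an illustrative variable name, not a required one.
MY_CUSTOM_TOP=/u01/app/custom
```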

9.11 Implementing a Host Script Scheduled Job

Arguments submitted for a host script job request are passed to the script at the command line. Host scripts may access the standard environment variables to get REQUESTID, LOG_WORK_DIRECTORY, OUTPUT_WORK_DIRECTORY, and so on. Script output is redirected to the request log file by default.
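
A host script job along these lines is sketched below; the run_mode parameter and the job logic are illustrative, while the environment variables are those listed above:

```shell
#!/bin/sh
# Illustrative host script scheduled job (a sketch, not a definitive
# implementation). Standard output is redirected to the request log.

# Standard environment variables made available to host script jobs.
echo "Request ID:       ${REQUESTID}"
echo "Log directory:    ${LOG_WORK_DIRECTORY}"
echo "Output directory: ${OUTPUT_WORK_DIRECTORY}"

# Job submission arguments arrive as ordinary command-line arguments;
# run_mode is a hypothetical first parameter with a default value.
run_mode=${1:-BASIC}
echo "Run mode: ${run_mode}"

# ... job-specific logic goes here; the script's exit status is
# reported back to Oracle Enterprise Scheduler as the request outcome.
```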

Use the following steps when implementing a host script job:

9.12 Implementing a Java Scheduled Job

For more information about implementing Java Scheduler jobs, see Chapter 3, "Use Case Oracle Enterprise Scheduler Sample Application."

9.12.1 How to Define Metadata for a Scheduled Java Job

Create a job definition as described in Section 9.4, "Creating a Job Definition".

9.12.2 How to Use the Java Runtime API

For information about the Java runtime API, see the Oracle Fusion Applications Java API Reference for Oracle Enterprise Scheduler Service.

You can access the Oracle Fusion Middleware Extensions for Applications Message and Profile objects directly, using their APIs, which handle accessing the service themselves.

9.12.3 How to Cancel a Scheduled Java Job

You can cancel a scheduled Java job by implementing the Cancellable interface.

The Cancellable implementation in Example 9-21 checks as logic progresses to see if the job has been canceled. If it has, the code cleans up after itself before exiting.

Example 9-21 Handling a Job Cancellation Request

import oracle.as.scheduler.Cancellable;
import oracle.as.scheduler.Executable;
import oracle.as.scheduler.ExecutionCancelledException;
import oracle.as.scheduler.ExecutionErrorException;
import oracle.as.scheduler.ExecutionPausedException;
import oracle.as.scheduler.ExecutionWarningException;
import oracle.as.scheduler.RequestExecutionContext;
import oracle.as.scheduler.RequestParameters;

public class MyExecutable
    implements Executable, Cancellable
{
    private volatile boolean m_cancel = false;

    public void execute( RequestExecutionContext reqCtx,
                         RequestParameters reqParams ) 
        throws ExecutionErrorException, ExecutionWarningException,
               ExecutionPausedException, ExecutionCancelledException
    {
        // Do some work and check if this request has been canceled.
        // ... work ...
        checkCancel(reqCtx);

        // Do more work and check if this request has been canceled.
        // ... work ...
        checkCancel(reqCtx);
        // Finish work.
        // ... work ...
    }

    // Set flag that the app logic should check periodically to
    // determine if this request has been canceled.
    public void cancel()
    {
        m_cancel = true;
    }

    // Check if request has been canceled. If not, do nothing.
    // Otherwise, do any clean up work that may be needed for
    // this request and end by throwing an ExecutionCancelledException.
    private void checkCancel(RequestExecutionContext reqCtx )
        throws ExecutionCancelledException
    {
        if (m_cancel)
        {
            // Do work any clean up work that may be needed
            // prior to ending this executable.
            // ... clean up work ...
            String msg = "Request " + reqCtx.getRequestId() +
                         " was cancelled.";
            throw new ExecutionCancelledException(msg);
        } 
    }
}

9.12.4 What Happens at Runtime: How a Java Scheduled Job Is Implemented

Oracle Enterprise Scheduler Service initializes the context of the job. The Fusion application calls Oracle Enterprise Scheduler Service using the provided APIs. Oracle Enterprise Scheduler runs the job, and a result of success or failure is returned to Oracle Enterprise Scheduler. The Fusion Application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

9.13 Elevating Access Privileges for a Scheduled Job

Oracle Enterprise Scheduler executes jobs in the user context of the job submitter at the scheduled time. Some scheduled jobs require access privileges that are different from those of the submitting user. However, information regarding the submitter of the scheduled job must be retrievable for auditing purposes.

In Oracle Enterprise Scheduler, using runAs to run a job in the context of a user other than the submitting user is prohibited, as doing so would be considered a security breach. Instead, using an application identity enables running a job with access privileges different from those allotted to the submitting user.

Application identity is a SOA and JPS concept that addresses the requirement for escalated privileges in completing an action. The application installer creates an application identity in Oracle Identity Management Repository.

For more information, see the following chapters in the Oracle Fusion Applications Developer's Guide:

9.13.1 How to Elevate Access Privileges for a Scheduled Job

The Oracle Enterprise Scheduler job system property SYS_runasApplicationID enables elevating access privileges for completing a scheduled job.

To elevate access privileges for a scheduled job:

  1. Create a job definition, as described in Section 9.4, "Creating a Job Definition."

  2. Under the Parameters section, add a parameter called SYS_runasApplicationID.

  3. In the text field for the SYS_runasApplicationID, enter the application ID under which you want to run the job, as shown in Figure 9-1.

    Make sure the input string is a valid ApplicationID that exists when the job executes.

    Figure 9-1 Defining the runAs User for the Job


    You can retrieve the executing user by running either of the methods shown in Example 9-22 and Example 9-23.

    Example 9-22 Retrieving the Executing User with getRunAsUser()

    requestDetail.getRunAsUser()
    

    Example 9-23 Retrieving the Executing User with getRequestParameter()

    String sysPropUserName =
             (String) runtime.getRequestParameter(h, reqid, SystemProperty.USER_NAME);
    

Given a request ID, you can retrieve the submitting and executing users of a job request.

To retrieve the submitting and executing users of a job request in Oracle Enterprise Scheduler RuntimeService EJB:

  • Example 9-24 shows a code snippet for retrieving the submitting and executing users of a job request using the Oracle Enterprise Scheduler RuntimeService EJB.

    Example 9-24 Retrieving the Submitting and Executing Users of a Job Request Using the RuntimeService EJB

    // Lookup runtimeService
    
    RequestDetail requestDetail = runtimeService.getRequestDetail(h, reqid);  
    String runAsUser = requestDetail.getRunAsUser();    
    String submitter = requestDetail.getSubmitter();
    

To retrieve the submitting and executing users of a job request from within an Oracle Fusion application:

  • Example 9-25 shows a code snippet for retrieving the submitting and executing users of a job request from within an Oracle Fusion application.

    Example 9-25 Retrieving the Submitting and Executing Users of a Job Request from an Oracle Fusion Application

    import oracle.apps.fnd.applcore.common.ApplSessionUtil;
    // The elevated privilege user name.
    ApplSessionUtil.getUserName()
    // The submitting user.
    ApplSessionUtil.getHistoryOverrideUserName()
    

9.13.2 How Access Privileges Are Elevated for a Scheduled Job

When a scheduled job request executes, Oracle Enterprise Scheduler:

  1. Validates the submitter's execution privileges on the job metadata.

  2. Retrieves the application identity information from the job metadata. If the job metadata does not specify an application identity for the job, Oracle Enterprise Scheduler executes the job in the context of the job submitter.

    • Java job: An FND session is established as the user with elevated privileges.

      The executing user is taken from the current subject as viewed from the job logic.

      Note:

      Oracle Enterprise Scheduler does not directly support invoking a web service or composite. If your job logic invokes a web service or composite, you must write the client code logic in your job, establish a connection and propagate the job submitter information as a payload for auditing purposes. For an asynchronous web service call, the job must wait for a response.
    • Spawned C job: An application user session is established as the executing user. The submitter information is an attribute of the application user session.

      The spawned job executes as the operating system user who starts Oracle WebLogic Server.

  • PL/SQL job: An FND session is established as the executing user. The submitter information is an attribute of the FND session.

      The job runs in the context of the FND session in the RDBMS job scheduler.

  3. Executes the job logic.
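The user-resolution logic in the steps above can be sketched in plain Java. This is an illustrative model only, not the actual Oracle Enterprise Scheduler implementation; the class name, method name, and use of a plain map for job metadata are assumptions made for the sketch.

```java
import java.util.Map;

// Illustrative sketch: models how the executing user is chosen, per the
// steps above. Not an Oracle Enterprise Scheduler API.
public class RunAsResolver {
    static final String RUN_AS_PROP = "SYS_runasApplicationID";

    // If the job metadata names an application identity, the job runs as
    // that identity; otherwise it runs in the context of the submitter.
    public static String resolveExecutingUser(Map<String, String> jobMetadata,
                                              String submitter) {
        String runAs = jobMetadata.get(RUN_AS_PROP);
        return (runAs == null || runAs.isEmpty()) ? submitter : runAs;
    }

    public static void main(String[] args) {
        // With SYS_runasApplicationID set, the job runs as the application identity.
        System.out.println(resolveExecutingUser(Map.of(RUN_AS_PROP, "FUSION_APPS"), "jdoe"));
        // Without it, the job runs as the submitter.
        System.out.println(resolveExecutingUser(Map.of(), "jdoe"));
    }
}
```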

9.13.3 What Happens When Access Privileges Are Elevated for a Scheduled Job

Oracle Enterprise Scheduler validates the user's execution privileges on the job metadata. If validation succeeds, the user context is captured and stored in the Oracle Enterprise Scheduler database as the submitting user, and the request is placed in the queue.

9.14 Creating an Oracle ADF User Interface for Submitting Job Requests

When implemented as part of an Oracle Fusion application, the Oracle ADF user interface enables end users to submit job requests.

9.14.1 How to Create an Oracle ADF User Interface for Submitting Job Requests

The Oracle ADF UI enables end users to submit job requests. End users can enter complex data types as arguments for descriptive and key flexfields. The Parameters tab in the Oracle ADF UI allows end users to enter parameters to be used when submitting the job request.

Flexfields display in a separate task flow region. This region is a child task flow of the parent task flow displayed in the Parameters tab.

Note:

Make sure to define customization layers and authorize runtime customizations to the adf-config.xml file as described in the chapter "Creating Customizable Applications" in Oracle Fusion Applications Developer's Guide.

To create a user interface for submitting job requests:

  1. Create a new Oracle Fusion web application by clicking New Application in the Application Navigator and selecting Fusion Web Application (ADF) from the Application Templates drop-down list.

    Model and ViewController projects are created within the application.

  2. Right-click the Model project and select Project Properties > Libraries and Classpath > Add Library.

  3. From the list, select the following libraries, as shown in Figure 9-2:

    • Applications Core

    • Applications Concurrent Processing

    • Enterprise Scheduler Extensions

    Figure 9-2 Adding the Libraries to the Model Project


    Click OK to close the window and add the libraries.

  4. Right-click the View Controller project and select Project Properties > Libraries and Classpath > Add Library.

    Add the library Applications Core (ViewController), as shown in Figure 9-3.

    Figure 9-3 Adding the Library to the View Controller Project

  5. In the Project Properties dialog, in the left pane, click Business Components.

  6. The Initialize Business Components Project window displays. Click the Edit icon to create a database connection for the project.

    Fill in the database connection details as follows:

    • Connection Exists in: Application Resources

    • Connection Type: Oracle (JDBC)

    • Username/Password: Fill in the relevant username and password for the database.

    • Driver: thin

    • Host Name: Enter the host name of the database server.

    • JDBC port: Enter the port number of the database.

    • SID: The unique Oracle system ID for the database.

    Click OK.

  7. In the file weblogic.xml, import oracle.applcp.view.

  8. In the file weblogic-application.xml, import the following libraries:

    • oracle.applcore.attachments (for ESS-UCM)

    • oracle.applcp.model

    • oracle.applcp.runtime

    • oracle.ess

    • oracle.sdp.client (for notification)

    • oracle.ucm.ridc.app-lib (for ESS-UCM)

    • oracle.webcenter.framework (for ESS-UCM)

    • oracle.xdo.runtime

    • oracle.xdo.service.client

    • oracle.xdo.webapp

    The libraries oracle.applcp.model and oracle.applcp.view are deployed as part of the installation while running the config.sh wizard.
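    The library imports in steps 7 and 8 take the form of library-ref entries in the deployment descriptors. The following fragment is a sketch showing two of the libraries listed above; adapt the list to the libraries your application actually needs.

    ```xml
    <!-- weblogic-application.xml (fragment): each shared library the
         application needs is declared as a library-ref. -->
    <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
        <library-ref>
            <library-name>oracle.applcp.model</library-name>
        </library-ref>
        <library-ref>
            <library-name>oracle.ess</library-name>
        </library-ref>
    </weblogic-application>
    ```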

  9. Create a new JSPX page for the ViewController project by right-clicking ViewController and selecting New > Web Tier > JSF > JSF JSP Page.

  10. Create a new File System connection. In the Resource Palette, right-click File System, select New File System Connection, and do the following:

    1. Provide a connection name and directory path for the Oracle ADF Library files (<jdev_install>/jdev/oaext/adflib).

    2. Click Test Connection and click OK once the connection is successful.

  11. Expand the contents of the SRS-View.jar file to display the list of available task flows that can be used in the application, as shown in Figure 9-4.

    Figure 9-4 Displaying the List of Available Task Flows

  12. To include the job request submission page in the application, select the ScheduleRequest-taskflow from the Resource Palette and drop it onto the JSF page in the area where you want to create a call to the taskflow. Create the taskflow call as a link or button.

    For example, to invoke the job request submission page from within a dialog box in the application, do the following:

    1. From the Component Palette, drag and drop a Link onto the form in the JSPX page.

    2. In the Property Inspector, configure the link's behavior to show the popup.

    3. From the Component Palette, drag and drop a Popup component with a dialog component onto the form.

    4. To enable submitting a job request, drag and drop ScheduleRequest-taskflow onto the dialog component as a dynamic region.

      To enable submitting a job set request, drag and drop ScheduleJobset-taskflow onto the dialog component.

      Figure 9-5 displays the task flows in the Resource Palette.

      Figure 9-5 Including the Job Request Submission Page in the Application

    5. From the context menu, select Create a Dynamic Region.

  13. When prompted, add the required library to the ViewController project by clicking Add Library. Save the JSF page.

  14. Edit the task flow binding. Define the following parameters for the task flow, as shown in Figure 9-6.

    1. jobdefinitionname: Enter the name of the job definition to be submitted. This is not the name that displays. This is the job definition defined in Section 9.4, "Creating a Job Definition". Required.

    2. jobdefinitionpackagename: Enter the package name under which the job definition metadata is stored. This should be the namespace path appended to the package name, for example /oracle/ess/Scheduler. The namespace path typically begins with a forward slash ("/"), but should have no forward slash at the end. Required.

    3. centralui: When this parameter is set to true, the task flow UI does not display the header section containing the name, description, and basic Oracle BI Publisher actions (such as e-mail, print, and notify). This parameter must be a boolean value. Optional.

    4. pageTitle: When passed, the task flow will render this passed String value as the page title. The pageTitle value is currently configured to be truncated at 30 characters. Optional.

    5. requireRootOutcome: If true is passed as the value, then the task flow will generate root-outcome when the user clicks the Submit or Cancel buttons. By default, the task flow generates parent-outcome. Optional.

    6. requestparametersmap: Enter the name of the map object variable that contains the parameters required for the job request submission. If this parameter is filled in, the Parameters tab in the request scheduling submission page will not prompt end users to enter parameters for executing the request. The map can be passed to the task flow as a parameter. Typically, this parameter takes the data type java.util.Map in which keys are parameter names and values are parameter values. For example, if you will be using a paramsMap object in the pageFlowScope, you might enter a requestparametersmap value of #{pageFlowScope.paramsMap}. Optional.

      In the page that holds the SRS task flow region, set the following property for the popup that launches the SRS window: contentDelivery = immediate.

      In the page definition file of the page that contains the task flow region, set the following property for the task flow: Pagedef > executables > taskflow > Refresh=IfNeeded.

    Figure 9-6 Defining Parameters for the Task Flow

  15. If you are using a map to pass parameters to the taskflow (requestparametersmap), create a new taskflow parameter, such as the paramsMap object in the pageFlowScope of a pageflow.

    These values can be accessed in the job executable, for example from the RequestParameters object in the case of a Java job. Example 9-26 illustrates passing the values stored in the RequestParameters object to a Java job. This code is used in the class that implements the oracle.as.scheduler.Executable interface.

    Example 9-26 Passing Values in a Map Object to a Java Job

    public void execute(RequestExecutionContext ctx, RequestParameters props)
        throws ExecutionErrorException, ExecutionWarningException,
            ExecutionCancelledException, ExecutionPausedException
    { 
        String pageTitle = (String) props.getValue("pageTitle");
        // Retrieve other parameters.
        // ... 
    }
    

    Note:

    When using a requestparametersmap, make sure to set the following properties for the popup within which the task flow is launched.
    • Set Content Delivery to Immediate.

    • In the page definition XML file for the page that contains the region, select PageDef > Executables > taskflow > set Refresh = ifNeeded.
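    On the submitting side, the map passed to the task flow as requestparametersmap might be assembled as follows. This is a sketch; the key names ("pageTitle", "runMode") and the class are illustrative assumptions, not a prescribed API.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: building the java.util.Map handed to the task flow
    // via the requestparametersmap parameter. Keys are parameter names and
    // values are parameter values.
    public class RequestParamsBuilder {
        public static Map<String, Object> buildParamsMap() {
            Map<String, Object> paramsMap = new HashMap<>();
            paramsMap.put("pageTitle", "Quarterly Close"); // read back via props.getValue("pageTitle")
            paramsMap.put("runMode", "FULL");              // hypothetical job parameter
            return paramsMap;
        }

        public static void main(String[] args) {
            // In an Oracle Fusion application, this map would be placed in
            // pageFlowScope (referenced as #{pageFlowScope.paramsMap}) before
            // the SRS popup is launched.
            System.out.println(buildParamsMap().get("pageTitle"));
        }
    }
    ```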

  16. If the job is defined with properties that must be filled in by end users, the user interface allows end users to fill in these properties prior to submitting the job request. For example, if the job requires a start and end time, end users can fill in the desired start and end times in the space provided by the user interface.

    The properties that are filled in by end users are associated with a view object, which in turn is associated with the job definition itself. When the job runs, Oracle Enterprise Scheduler Service accesses the view object to retrieve the values of the properties.

    If using a view object to pass parameters to the job definition, do the following:

    1. Create a view object called TestVO using a query such as the one shown in Example 9-27.

      Example 9-27 Creating a View Object Using a Query

      select null as Attribute1, null as Attribute2 from dual
      
    2. Specify control UI hints, for example set the display label for Attribute1 to Run Mode and for Attribute2 to Duration.

      As a result, the parameters tab in the job request submission UI renders with the input fields Run Mode and Duration.

    3. In order to render the Parameters tab in the job request submission UI, add the DynamicComponents 1.0 library as follows. Right-click ViewController and select Project Properties > JSP Tag Libraries > Add. In the Choose Tag Libraries window, select the library DynamicComponents 1.0 and click OK. Figure 9-7 displays the Choose Tag Libraries window.

      Figure 9-7 Adding the Library DynamicComponents 1.0

  17. In the JSF application you created, create another project called Scheduler. Select File > New, and choose General > Empty Project. This project will be used to create Enterprise Scheduler Service metadata and job implementations.

  18. In the Scheduler project, add the Enterprise Scheduler Extensions library to the classpath. Right-click the Scheduler project and select Project Properties > Libraries and Classpath > Add Library > Enterprise Scheduler Extensions.

  19. Deploy the libraries oracle.xdo.runtime and oracle.xdo.webapp to the Oracle Enterprise Scheduler UI managed server. These libraries are located in the directory $MW_HOME/jdeveloper/xdo, where MW_HOME is the Oracle Fusion Middleware home directory.

  20. Deploy the application.

9.14.2 How to Add a Custom Task Flow to an Oracle ADF User Interface for Submitting Job Requests

You can add a custom task flow to an Oracle ADF user interface used to submit job requests at run time.

To add a custom task flow to an Oracle ADF user interface for submitting job requests:

  1. Create a task flow and bind it to your Oracle ADF user interface for submitting a job request created in Section 9.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests."

    For more information about creating task flows and binding them to an Oracle ADF user interface, see the following chapters in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework:

  2. Create an ADF Business Components view object for each UI field. Name the view objects that are bound to UI fields ParameterVO1, ParameterVO2, and so on.

    Name the attributes of the view objects as follows: ATTRIBUTE1, ATTRIBUTE2, and so on.

    For more information about creating an ADF Business Components view object, see the chapters "Defining SQL Queries Using View Objects" and "Advanced View Object Techniques" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

  3. Include the view objects in the relevant application module. Even if their names are different, make sure the view object instance names are ParameterVO1, ParameterVO2, ParameterVO3, and so on.

  4. In the job definition, make sure to define the properties CustomDataControl and ParameterTaskflow. For more information, see Section 9.4.1, "How to Create a Job Definition."

    For more information about passing parameters to the Oracle ADF task flow, see the chapter "Using Parameters in Task Flows" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

  5. Optionally, include the method preSubmit() in the application module. Oracle Enterprise Scheduler invokes this method before retrieving the parameter values for the submission request.

    Your implementation of the preSubmit() method (which returns a boolean value) can include validation code in the custom task flow. If the validation fails, your code can throw an exception with a proper internationalized error message. If this validation fails while submitting the request, the error message is displayed to the user and the submission does not go through.

9.14.3 How to Enable Support for Context-Sensitive Parameters in an Oracle ADF User Interface for Submitting Job Requests

After integrating your application with the Oracle ADF UI for submitting job requests, enable context-sensitive parameter support in the UI.

The request submission UI renders the context-sensitive parameters first so that the end user can specify their values. The context is set in the database based on these values, and the remaining parameters are then rendered according to the context set at the database layer. When the job runs, the actual business logic executes only after the context has been set in the database from the context-sensitive parameter values.
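This two-phase flow can be modeled in plain Java as a sketch. It is illustrative only: the class, the "US"/"StateCode"/"RunMode" values, and the in-memory map standing in for the database context are all assumptions, not the ESS/ADF implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model: context-sensitive parameters are collected first, the
// context is "set", and only then are the remaining parameters rendered.
public class ContextSensitiveFlow {
    private final Map<String, String> databaseContext = new HashMap<>();

    // Phase 1: stands in for the setContextAPI PL/SQL call that sets the
    // context at the database layer.
    public void setContext(Map<String, String> contextParams) {
        databaseContext.putAll(contextParams);
    }

    // Phase 2: which remaining parameters are rendered depends on the context.
    public List<String> remainingParameters() {
        List<String> params = new ArrayList<>();
        if ("US".equals(databaseContext.get("CTXATTRIBUTE1"))) {
            params.add("StateCode"); // hypothetical context-dependent field
        }
        params.add("RunMode"); // hypothetical context-independent field
        return params;
    }

    public static void main(String[] args) {
        ContextSensitiveFlow flow = new ContextSensitiveFlow();
        flow.setContext(Map.of("CTXATTRIBUTE1", "US"));
        System.out.println(flow.remainingParameters());
    }
}
```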

Follow this procedure to enable context-sensitive parameter support in the UI.

To enable support for context sensitive parameters in an Oracle ADF user interface for submitting job requests:

  1. Follow the instructions described in Section 9.14.1.

  2. Create a native ADF Business Components view object with attributes CTXATTRIBUTE1, CTXATTRIBUTE2, and so on, with a maximum of 100 attributes.

    For example, create a view object with the query select null as CTXATTRIBUTE1, null as CTXATTRIBUTE2, null as CTXATTRIBUTE3 from dual. Include required UI hints such as display label, tool tip, and so on.

  3. Create a PL/SQL procedure or function in order to set the context.

  4. Specify the parameters shown in Example 9-28 and Example 9-29 in the job definition metadata.

    • contextParametersVO: Enter the fully qualified name of the view object that holds the context sensitive parameters.

      Example 9-28 contextParametersVO

      <parameter name="contextParametersVO" data-type="string">oracle.apps.mypkg.TestCtxVO</parameter>
      
    • setContextAPI: PL/SQL API to set the context, along with the package name. The myPkg1.mySetCtx procedure receives arguments based on attributes in the contextParametersVO.

      Example 9-29 setContextAPI

      <parameter name="setContextAPI"  data-type="string">myPkg1.mySetCtx</parameter>
      

9.14.4 How to Save and Schedule a Job Request Using an Oracle ADF UI

Saving and scheduling a job request using an Oracle ADF UI involves using the Enterprise Scheduler Extensions library in conjunction with a JSF application that includes a task flow in which a job is scheduled and saved.

To schedule a job request using an Oracle ADF UI:

  1. Follow the instructions in Section 9.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests" up to step 9.

    Note:

    If the custom parameters task flow has no transactions of its own, it must set the data-control-scope to "isolated". This ensures that multiple ParameterVOs using the same application module get their independent application module instance.
  2. Drag and drop SaveSchedule-taskflow onto the dialog. No input parameters are required.

  3. When prompted, add the required library to the ViewController project by clicking Add Library. Save the JSF page.

  4. In the JSF application you created, create another project called Scheduler. Select File > New, and choose General > Empty Project. This project will be used to create Enterprise Scheduler Service metadata and job implementations.

  5. In the Scheduler project, add the Enterprise Scheduler Extensions library to the classpath. Right-click the Scheduler project and select Project Properties > Libraries and Classpath > Add Library > Enterprise Scheduler Extensions.

  6. Deploy the application as described in the Oracle Enterprise Scheduler Developer's Guide.

  7. Launch the application using the following URL:

    http://<machine>:<http-port>/<context-root>/faces/<page>
    
  8. Enter a schedule name, description and package name with the namespace appended, as shown in Figure 9-8.

    Figure 9-8 Saving a Job Submission Schedule

  9. Save the schedule.

    A message displays indicating the metadata object ID of the saved schedule. This ID can be used for further job or job set request submissions.

9.14.5 How to Submit a Job Using a Saved Schedule in an Oracle ADF UI

Submitting a saved job request schedule using an Oracle ADF UI involves using the Enterprise Scheduler Extensions library in conjunction with a JSF application that includes a task flow in which a saved job schedule can be submitted.

To submit a job using a saved schedule in an Oracle ADF UI:

  1. Follow the instructions in Section 9.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests".

  2. Deploy the application. Launch the page using the following URL:

    http://<machine>:<http-port>/<context-root>/faces/<page>
    
  3. Click the Schedule tab. In the Run option field, select the Use a Schedule radio button.

  4. From the Frequency drop-down list, select Use a Saved Schedule.

  5. Enter the namespace and package names for the schedule along with the name of the schedule.

  6. To view the list of scheduled jobs, click Get Details. Click Submit to submit the saved job request.

9.14.6 How to Notify Users or Groups of the Status of Executed Jobs

The Oracle ADF user interface for submitting job requests provides the ability to notify users of the status of submitted jobs (via the Notification tab of the user interface). For example, users can request a notification to be sent to the originator of the job request.

A notification includes two components: the user or group to whom the notification is to be delivered, and the completion status of the job that triggers the notification. For example, notifications can be sent upon the successful completion of a job, or when a job completes in an error or warning state.
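The trigger logic implied by these two components can be sketched as follows. This is an illustrative model of the rule described above, not an Oracle Enterprise Scheduler API; the class, enum, and method names are assumptions.

```java
// Illustrative sketch: a notification fires only when the job's final
// status matches a status the recipient opted into (On Success,
// On Warning, On Error).
public class NotificationRule {
    public enum Status { SUCCEEDED, WARNING, ERROR }

    public static boolean shouldNotify(Status finalStatus, boolean onSuccess,
                                       boolean onWarning, boolean onError) {
        switch (finalStatus) {
            case SUCCEEDED: return onSuccess;
            case WARNING:   return onWarning;
            case ERROR:     return onError;
            default:        return false;
        }
    }

    public static void main(String[] args) {
        // Recipient asked to be notified only on error.
        System.out.println(shouldNotify(Status.ERROR, false, false, true));
    }
}
```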

To notify users or groups of the status of executed jobs:

  1. Configure Oracle User Messaging Service. For more information, see the chapter "Configuring Oracle User Messaging Service" in Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

  2. Deploy the drivers required for Oracle User Messaging Service. You can do so using Oracle WebLogic Server Scripting Tool. For more information, see the chapter "Managing Oracle User Messaging Service" in Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

  3. In the Oracle Enterprise Scheduler connections.xml file, specify the URL of the notification service. An example is shown in Example 9-30. While you cannot edit this file, you can browse Oracle ADF connection information using MBeans. For more information on configuring application properties, see the chapter "Monitoring and Configuring ADF Applications" in Oracle Fusion Middleware Administrator's Guide for Oracle Application Development Framework.

    Example 9-30 Specify the URL of the Notification Service

    <References>
        <Reference name="EssConnection1"
            className="oracle.as.scheduler.config.ca.EssConnection">
            <Factory className="oracle.as.scheduler.config.ca.EssConnectionFactory"/>
            <RefAddresses>
                <StringRefAddr addrType="NotificationServiceURL">
                    <Contents>http://localhost:8001</Contents>
                </StringRefAddr>
                <StringRefAddr addrType="RequestFileDirectory">
                    <Contents>/tmp/ess/requestFileDirectory</Contents>
                </StringRefAddr>
                <StringRefAddr addrType="SAMLTokenPolicyURI">
                    <Contents/>
                </StringRefAddr>
                <StringRefAddr addrType="FilePersistenceMode">
                    <Contents>file</Contents>
                </StringRefAddr>
            </RefAddresses>
        </Reference>
    </References>
    
    
  4. Follow the instructions described in Section 9.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests."

  5. Create a native ADF Business Components view object with attributes representing the following properties:

    • Recipient Type: Specify whether the notification recipient is a user or a group of users. This should be defined as a radio button. Values are User or Group.

    • Recipient ID: Specify the User- or GroupID, depending on the recipient type. Create an LOV that provides a list of users or groups for the current submitting user. This LOV is dependent on the selected recipient type.

    • On Success: Notify the recipient upon successful completion of the job.

    • On Warning: Notify the recipient in the event of a job that ends with a warning.

    • On Error: Notify the recipient in the event that a job completes in an error state.

    Note:

    If using the post-processing action infrastructure to display the notification view object, it is not necessary to define status options in the view object (On Success, On Warning, On Error). Status data collection is built into the post-processing action infrastructure.
  6. Launch the application using the following URL:

    http://<machine>:<http-port>/<context-root>/faces/<page>
    

9.14.7 What Happens When You Create an Oracle ADF User Interface for Submitting Job Requests

The Oracle ADF interface is integrated with the Fusion application, and the application is tested and deployed. End users access the Oracle ADF user interface, fill in optional job properties, and click a button to submit the job request.

9.14.8 What Happens at Runtime: How an Oracle ADF User Interface for Submitting Job Requests Is Created

The application receives the submitted job request and calls Oracle Enterprise Scheduler Service to run the job. The Fusion application accesses the values of the properties entered by end users through the view object in which these properties were defined at design time. The job returns a result of success or failure, and the result passes from the Fusion application to Oracle Enterprise Scheduler.

Custom Task Flow

A job that includes properties to be filled in by end users through an Oracle ADF user interface at runtime includes ADF Business Components view objects with validation and the parameters to be filled in by end users. These parameters are submitted at runtime in the order in which they have been defined, meaning the first custom parameter to be defined is submitted first. The custom parameters must be named as follows:

ParameterVO1.ATTRIBUTE1, ParameterVO1.ATTRIBUTE2, ParameterVO2.ATTRIBUTE1, ParameterVO3.ATTRIBUTE1, and so on.
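The naming and submission order described above can be generated mechanically, as in this sketch. The class and method names are illustrative assumptions; only the ParameterVOn.ATTRIBUTEm naming convention comes from the text.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: produces the ParameterVOn.ATTRIBUTEm names in
// submission order (all attributes of ParameterVO1 first, then
// ParameterVO2, and so on).
public class CustomParamNames {
    // attributeCounts[i] is the number of attributes in ParameterVO(i+1).
    public static List<String> orderedNames(int[] attributeCounts) {
        List<String> names = new ArrayList<>();
        for (int vo = 1; vo <= attributeCounts.length; vo++) {
            for (int attr = 1; attr <= attributeCounts[vo - 1]; attr++) {
                names.add("ParameterVO" + vo + ".ATTRIBUTE" + attr);
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // Two attributes on ParameterVO1 and one on ParameterVO2.
        System.out.println(orderedNames(new int[] {2, 1}));
    }
}
```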

If the job definition includes ContextParametersVO, ParameterTaskflow and parametersVO, these properties render in that order at run time.

Context-Sensitive Parameters

When launching the SRS UI to submit a job or job set request with context-sensitive parameters, contextParametersVO initially renders in the Parameters tab of the Oracle ADF user interface.

The end user can then enter values for the context-sensitive parameters. Clicking Next invokes the setContextAPI procedure, passing the context parameters. The context is set at the database level and the remaining parametersVO job parameters are rendered.

When the context-sensitive parameters are modified, end users must click Next again in order to set the context with the new values.

Notifications

When the final status of the job is determined, Oracle Enterprise Scheduler delivers the notifications to the relevant users or groups using the User Messaging Service. Groups receive notifications via e-mail, whereas users receive notifications based on their messaging preferences.

The notification view object defined at design time populates the input box in the submission request user interface at run time.

9.15 Submitting Job Requests Using the Request Submission API

You can submit, cancel and otherwise manage job requests using the request submission API.

For information about using the request submission API, see Section 13, "Using the Runtime Service."

9.16 Defining Oracle Business Intelligence Publisher Post-Processing Actions for a Scheduled Job

Oracle Business Intelligence Publisher enables generating reports from a variety of data sources, such as Oracle Database, web services, RSS feeds, files, and so on. BI Publisher provides a number of delivery options for generated reports, including print, fax, and e-mail.

In order to create an Oracle BI Publisher report, an Oracle BI Publisher report definition is required. Oracle BI Publisher report definitions consist of a data model that specifies the type of data source (database, web service, and so on) and a template for output formatting.

With report definitions in place, options for reporting are available to end users in the Output tab of the Oracle ADF user interface. The Output tab provides options through which an end user can define templates for reports. They can specify layout templates, document formats (such as PDF, RTF, and more), report destinations (email addresses, fax numbers, or printer addresses), and so on. When the user submits a request, this information is stored in the Oracle Enterprise Scheduler schema. The post-processor then invokes the Oracle BI Publisher service and passes the saved data to it.

Extensions to Oracle Enterprise Scheduler provide the ability to run Oracle BI Publisher reports as batch jobs. The Oracle Enterprise Scheduler post-processing infrastructure enables applying Oracle BI Publisher formatting templates to XML data and delivering the formatted reports by printing, faxing, and so on.

For more information about defining post-processing actions for scheduled jobs, see "Creating a Business Domain Layer Using Entity Objects" in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

9.16.1 How to Define Oracle BI Publisher Post-Processing for a Scheduled Job

Defining post-processing for a scheduled job involves the following:

  • Define the post-processing action.

  • Create a Java class for the post-processing action. The Java class uses the parameters collected by the Oracle Enterprise Scheduler UI and calls Oracle BI Publisher APIs as required.

  • Create a native ADF Business Components view object to save parameters for post-processing, such as template name, output format, locale, and so on.

Before you begin:

  1. Follow the instructions for setting up Oracle BI Publisher reporting as described in the Oracle BI Publisher documentation.

    Use the following file to set up reporting and seed your database with the relevant Oracle BI Publisher data:

    Example 9-31 Location of the File for Setting Up Oracle BI Publisher Reporting and Seeding the Database

    $BEAHOME/jdeveloper/jdev/oaext/adflib/PPActions.jar 
    
  2. Create an Oracle BI Publisher job definition, following the instructions in the Oracle BI Publisher documentation.

  3. Define File Management Group (FMG) properties for the Oracle BI Publisher job definition as described in Section 9.4.2, "How to Define File Groups for a Job."

To create an Oracle BI Publisher post-processing action:

  1. In the table APPLCP_PP_ACTIONS, define the post-processing action to be executed for the job.

    The columns to be seeded in the APPLCP_PP_ACTIONS table are as follows:

    • Action_SN: Define a short name for the action, used when post-processing actions are submitted programmatically. For example, OBFUSC8.

    • Action Name: Enter a name for the action to be displayed in the user interface. This name is stored separately for translation purposes.

    • Class: Enter the name of the Java class that defines the logic for the post-processing action. For example, oracle.apps.shh.obfuscate.PPobfuscate.

    • VO_Def_Name: Enter the name of the view object used to collect the arguments for the post-processing action. For example, oracle.apps.shh.obfuscate.PPobfuscateVO.

    • Type: Enter the category of the post-processing action to be taken. Enter one of the following categories of post-processing actions:

      • L: Indicates a Layout post-processing action. Layout actions change the output of the job, and produce new output.

      • O: Indicates an Output post-processing action. Output actions act on the output created by the job and its layout actions, performing delivery, publishing, printing, and so on.

      • F: Indicates a Final post-processing action. Final Actions take no input. Final post-processing actions execute using the final status of the job after all Layout and Output actions have executed.

    • On_Success: Indicate whether the post-processing action runs following a successful job. Enter Y or N.

    • On_Warning: Indicate whether the post-processing action runs following a job that ends in a warning. Enter Y or N.

    • On_Failure: Indicate whether the post-processing action runs following a failed job. Enter Y or N.

    • SEQ_NUM: Enter a number to sequentially order the post-processing actions. Only registered post-processing actions of the same type can be sequentially ordered. This value determines both the order in which the tabs corresponding to the actions appear in the user interface, and the order in which the actions run.

    Each action can also specify request parameters used by the post-processing action view object. These parameters must be set in the job definition for any job using this action. The parameter names are stored in the APPLCP_PP_ACTION_PARAMS table. The values of these parameters are accessible from the parameter view object at the time of job request submission. Post-processing actions can access all request parameters at runtime using the request ID.
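
    As an illustrative sketch only, seeding the example action described in this list might look like the following SQL. The table and column names mirror the descriptions above, but the exact definition of APPLCP_PP_ACTIONS may differ in your environment, so verify before use:

```sql
-- Hypothetical seed row for the obfuscation example action; verify the
-- column names and required values against your APPLCP_PP_ACTIONS table.
INSERT INTO applcp_pp_actions
  (action_sn, action_name, class, vo_def_name,
   type, on_success, on_warning, on_failure, seq_num)
VALUES
  ('OBFUSC8',
   'Obfuscate Output',
   'oracle.apps.shh.obfuscate.PPobfuscate',
   'oracle.apps.shh.obfuscate.PPobfuscateVO',
   'L', 'Y', 'N', 'N', 1);
COMMIT;
```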

  2. Define a Java class for the post-processing action, implementing the interface oracle.apps.fnd.applcp.request.postprocess.PostProcess. Use the methods required by the interface as described in Table 9-4.

    Table 9-4 Methods Required When Implementing the Interface oracle.apps.fnd.applcp.request.postprocess.PostProcess

    Method Description

    PostProcessState invokePostProcess(long requestID, String ppArguments[], ArrayList files);

    Receives the requestID, the ppArguments[] array of arguments collected from the view object (or submitted programmatically), and the files array list which identifies the files on which the action is to be taken.

    It is possible to specify the location of the output file.

    ArrayList getOutputFileList();

    Returns an array of the output files created by the post-processing action.


    Additional methods used by the invokePostProcess method are shown in Table 9-5.

    Table 9-5 Oracle BI Publisher Client API oracle.xdo.service.client.ReportService Used by the invokePostProcess method

    Method Description

    runReport()

    Enables the post-processing action to pass the job's XML output, along with the template ID and format (all collected during job request submission), to Oracle BI Publisher.


    Additional methods used by the ReportRequest object are shown in Table 9-6.

    Table 9-6 Oracle BI Publisher Client API oracle.xdo.service.client.types.ReportRequest Used by the ReportRequest Object

    Method Description

    setAttributeFormat()

    Set the format for the Oracle BI Publisher report request.

    setAttributeLocale()

    Set the locale data for the Oracle BI Publisher report request.

    setAttributeTemplate()

    Set the template for the Oracle BI Publisher report request.

    setXMLData()

    Set the XML data for the Oracle BI Publisher report request.
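
    Taken together, a layout post-processing action might invoke Oracle BI Publisher as in the following sketch. The setter and runReport method names come from Tables 9-5 and 9-6; how the ReportService instance is obtained, the setXMLData parameter type, and the exact runReport signature are assumptions to be checked against the Oracle BI Publisher client documentation.

```java
import oracle.xdo.service.client.ReportService;
import oracle.xdo.service.client.types.ReportRequest;

public class BIPRenderSketch {
  // Sketch only: the caller supplies a ReportService obtained per the
  // BI Publisher client documentation; parameter types are assumptions.
  void renderJobOutput(ReportService svc, String jobXmlOutput,
                       String templateName, String outputFormat,
                       String locale) throws Exception {
    ReportRequest req = new ReportRequest();
    req.setAttributeTemplate(templateName); // template ID collected at submission
    req.setAttributeFormat(outputFormat);   // for example, "pdf"
    req.setAttributeLocale(locale);
    req.setXMLData(jobXmlOutput);           // XML output produced by the job
    svc.runReport(req);                     // exact signature may differ
  }
}
```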


    An example of a Java class that defines a post-processing action is shown in Example 9-32:

    Example 9-32 A Java Class that Defines a Post-Processing Action

    package oracle.apps.shh.obfuscate;
    
    import java.util.ArrayList;
    
    import oracle.apps.fnd.applcp.request.postprocess.PostProcess;
    import oracle.apps.fnd.applcp.util.ESSContext;
    import oracle.apps.fnd.applcp.util.PostProcessState;
    import oracle.as.scheduler.*;
    
    public class PPobfuscate implements PostProcess {
    
      ArrayList myOutputFiles = new ArrayList();
    
      public ArrayList getOutputFileList()
      {
        return myOutputFiles;
      }
    
      public PostProcessState invokePostProcess(long requestID, String ppArguments[],
        ArrayList files)
      {
        RuntimeService rService = null;
        RuntimeServiceHandle rHandle = null;
        try {
          // Access the runtime details for the given requestID
          RequestDetail rDetail = null;
          RequestParameters rParam = null;
          String obfuscationSeed = ppArguments[0];
          String codedFileName = ppArguments[1];
          String myNewFile;
          String outDir = null;
    
          rService = ESSContext.getRuntimeService();
          if (rService != null) rHandle = rService.open();
          if (rHandle != null)  rDetail = rService.getRequestDetails(rHandle, requestID);
          if (rDetail != null)  rParam  = rDetail.getParameters();
          if (rParam != null)   outDir  = (String) rParam.getValue("outputWorkDirectory");
          if (outDir == null)
          {
            // We did not get the request details; usually an exception
            // would have been thrown by now. Handle this case to be robust.
            // Log the error to ODL.
            return PostProcessState.ERROR;
          }
          // Check files
          if (files == null || files.isEmpty())
          {
            // No files - post-processing should never call us in this state.
            // In case it does, log the error to ODL.
            return PostProcessState.ERROR;
          }
          // This example expects a single file
          myNewFile = outDir + System.getProperty("file.separator") +
            codedFileName;
          Obfuscate.performObfuscation((String) files.get(0), obfuscationSeed, myNewFile);
          myOutputFiles.add(myNewFile);
    
          // In case we're called on multiple files
          for (int i = 1; i < files.size(); i++)
          {
            // Append a counter to the file name to make it unique
            myNewFile = outDir + System.getProperty("file.separator") +
              codedFileName + i;
            Obfuscate.performObfuscation((String) files.get(i), obfuscationSeed, myNewFile);
            myOutputFiles.add(myNewFile);
          }
    
          // Return our success
          return PostProcessState.SUCCESS;
    
        } catch (RuntimeServiceException rse)
        {
          // Log the RuntimeServiceException to ODL
          return PostProcessState.ERROR;
        } catch (Exception e)
        {
          // Log the Exception to ODL
          return PostProcessState.ERROR;
        } finally {
          if (rHandle != null)
          {
            try {
              rService.close(rHandle);
            } catch (Exception e) {
              // Log the Exception to ODL
            }
          }
        }
      }
    } // end class
    
  3. Create a native ADF Business Components view object to collect the parameters to be used in the post-processing action. Follow the procedure described in Section 9.4, "Creating a Job Definition." Define any view object attributes sequentially.

    If the view object requires access to action-specific values from the job definition, specify the required job definition parameters in the action definition. The submission UI automatically retrieves the values from the job definition metadata and sets them as Applications Core Session attributes that may be retrieved using the ApplSession standard API.

9.16.2 How to Define Oracle BI Publisher Post-Processing Actions for a Scheduled PL/SQL Job

Example 9-33 shows a PL/SQL job that includes Oracle BI Publisher post-processing actions. The PL/SQL job calls ess_runtime.add_pp_action so that the post-processing action generates a layout for the job's data. This example formats the XML generated by the job as a PDF file.

Example 9-33 Defining a Scheduled PL/SQL Job with Oracle BI Publisher Post-Processing Actions

declare
  l_reqid   number;
  l_props   ess_runtime.request_prop_table_t;
begin
.
  ess_runtime.add_pp_action (
    props           => l_props,          -- IN OUT request_prop_table_t
    action_order    => 1,                -- order in which this post-processing action executes
    action_name     => 'BIPDocGen',      -- action for document generation (layout)
    on_success      => 'Y',              -- call this action on success
    on_warning      => 'N',              -- call this action on warning
    on_error        => 'N',              -- call this action on error
    file_mgmt_group => 'XML',            -- file types this action processes; must be defined in the job definition
    step_path       => NULL,             -- IN varchar2 default NULL
    argument1       => 'XLABIPTEST_RTF', -- template name needed for the document generation action
    argument2       => 'pdf'             -- type of layout file generated by the document generation action
  );
.
  l_reqid :=
    ess_runtime.submit_request_adhoc_sched
      (application        => 'SSEssWls',     -- application
       definition_type    => 'JOB',
       definition_name    => 'BIPTestJob',   -- job definition
       definition_package => '/mypackage',   -- job definition package
       props              => l_props);
  commit;
  dbms_output.put_line('request_id = :'||l_reqid);
end;

9.16.3 What Happens When You Define Oracle BI Publisher Post-Processing Actions for a Scheduled Job

Depending on the FMG property set for the job definition, the relevant post-processing action is selected for the job.

The ppArguments array stores the values collected from the view object attributes. The array is passed to the invokePostProcess method which executes in the Java class that defines the post-processing action.

9.16.4 What Happens at Runtime: How Oracle BI Publisher Post-Processing Actions are Defined for a Scheduled Job

At runtime, the user interface uses the view object to collect the arguments for executing the post-processing action as defined in the table APPLCP_PP_ACTIONS. These arguments also instruct the user interface as to how to invoke the action logic.

The post-processing action accesses the XML output file from the job request, and passes the XML output to Oracle BI Publisher. The post-processing action creates a report request containing the XML data.

The post-processing action displays in the submission Oracle ADF UI. The UI enables adding a post-processing action for the scheduled job, selecting arguments for the action using the view object and selecting output options for the action. The user interface also displays the name of the file management group with which the output files are associated.

9.16.5 Invoking Post-Processing Actions Programmatically

You can invoke post-processing actions programmatically from a client using a Java or web service API. Both APIs require the same set of parameter values, described in Table 9-7.

For Java clients, call the addPPAction method of oracle.as.scheduler.cp.SubmissionUtil. The method takes the values needed to invoke the action and throws an IllegalArgumentException if the number of arguments exceeds 10. Example 9-34 shows the declaration of the method.

Example 9-34 Sample declaration of the addPPAction method

public static void addPPAction (RequestParameters params,
        int actionOrder,
        String actionName,
        String description,
        boolean onSuccess,
        boolean onWarning,
        boolean onError,
        String fileMgmtGroup,
        String[] arguments)
    throws IllegalArgumentException 
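
For example, a Java client might add the BIPDocGen layout action described in Table 9-7 as follows. The argument values mirror Example 9-33, the signature is the one declared in Example 9-34, and the description string is hypothetical; the resulting RequestParameters object is then passed at job request submission.

```java
import oracle.as.scheduler.RequestParameters;
import oracle.as.scheduler.cp.SubmissionUtil;

public class AddPPActionSketch {
  // Sketch: builds request parameters carrying a BIPDocGen layout action.
  static RequestParameters buildParams() {
    RequestParameters params = new RequestParameters();
    SubmissionUtil.addPPAction(params,
        1,               // actionOrder: first layout action
        "BIPDocGen",     // actionName: BI Publisher document generation
        "Generate PDF",  // description (hypothetical)
        true,            // onSuccess
        false,           // onWarning
        false,           // onError
        "XML",           // fileMgmtGroup, as defined in the job definition
        new String[] { "XLABIPTEST_RTF",  // argument1: template name
                       "pdf" });          // argument2: output format
    return params;
  }
}
```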

For web service clients, you invoke the method using a proxy, as in Example 9-35. For more information about the web service, see Chapter 10, "Using the Oracle Enterprise Scheduler Web Service."

Example 9-35 Adding Post-Processing Actions for a Request

ESSWebService proxy = createProxy("addPPActions");

PostProcessAction ppAction = new PostProcessAction();
ppAction.setActionOrder(1);
ppAction.setActionName("BIPDocGen");
ppAction.setOnSuccess(true);
ppAction.setOnWarning(false);
ppAction.setOnError(false);
ppAction.getArguments().add("argument1");
ppAction.getArguments().add("argument2");

List<PostProcessAction> ppActionList =   new ArrayList<PostProcessAction>();
ppActionList.add(ppAction);

RequestParameters reqParams = new RequestParameters();
reqParams = proxy.addPPActions(reqParams, ppActionList);

Table 9-7 Parameters for Adding a Post-Processing Action

Parameter Description

params

A RequestParameters object into which this method adds parameters.

actionOrder

The ordinal location of this action in the sequence of actions to be performed within the action domain. Oracle BI Publisher processes requests starting with action order index 1.

actionName

The name of the action to perform. The following lists acceptable values for this parameter, along with the acceptable values you can use in the arguments parameter of this method.

  • BIPDocGen: for applying Oracle Business Intelligence Publisher templates. Acceptable argument parameter values are:

    • argument1: maps to report parameter TEMPLATE, the template name.

    • argument2: maps to report parameter OUTPUT_FORMAT, the output format for BIP document generation, for example, "pdf" or "html".

    • argument3: maps to report parameter LOCALE, the locale to be used while generating output.

  • BIPPrintService: for specifying the print action. Acceptable argument parameter values are:

    • argument1: maps to printerName

    • argument2: maps to numberOfCopies

    • argument3: maps to side

    • argument4: maps to tray

    • argument5: maps to pagesRange

    • argument6: maps to orientation

  • BIPDeliveryEmail: for specifying the email action. Acceptable argument parameter values are:

    • argument1: maps to emailServerName

    • argument2: maps to from

    • argument3: maps to to

    • argument4: maps to cc

    • argument5: maps to bcc

    • argument6: maps to replyTo

    • argument7: maps to subject

    • argument8: maps to messageBody

  • BIPDeliveryFax: for specifying the fax action. Acceptable argument parameter values are:

    • argument1: maps to faxServerName

    • argument2: maps to faxNumber

description

Description of this post-processing action.

onSuccess

Determines whether this action should be performed on successful completion of the job.

onWarning

Determines whether this action should be performed when the job or step has completed with a warning.

onError

Determines whether this action should be performed when the job or step has completed with an error.

fileMgmtGroup

Name of the File Management Group. For applying an Oracle BI Publisher template, this is 'XML', as defined in the job definition Program.FMG property with the value 'L.XML'.

arguments

A list of arguments for the post-processing action. See the actionName parameter for the values you can use for the arguments parameter.


9.17 Monitoring Scheduled Job Requests Using an Oracle ADF UI

You can view previously submitted jobs by integrating the Monitor Processes task flow into an application.

For information about enabling tracing for jobs, see "Developing Diagnostic Tests" in Oracle Fusion Applications Developer's Guide. For more information about tracing Oracle Enterprise Scheduler jobs, see the section "Tracing Oracle Enterprise Scheduler Jobs" in the chapter "Managing Oracle Enterprise Scheduler Service and Jobs" in the Oracle Fusion Applications Administrator's Guide.

9.17.1 How to Monitor Scheduled Job Requests

The main steps involved in monitoring scheduled job requests using an Oracle ADF UI are as follows:

  • Configure Oracle Enterprise Scheduler in JDeveloper

  • Create and initialize an Oracle Fusion web application

  • Create a UI Shell page and drop the Monitor Processes task flow onto it

Note:

Fields such as submission date, ready time, scheduled date, process start, name, type, definition, and so on, are not set unless the job request or sub-request is successfully validated.

To monitor scheduled job requests using an Oracle ADF UI:

  1. Follow the instructions in Section 9.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests" up to and including step 5.

  2. Under the ViewController project, right-click Web Content and create a new JSF page called Consumer.jspx. Be sure to select the following options:

    • UIShell (template)

    • Create as XML Document

  3. Create a new JSF page fragment. This page initializes the project.

  4. Open adfc-config.xml and drag Consumer.jspx onto adfc-config.xml.

  5. Right-click adfc-config.xml and select Create ADF Menu.

    The Create ADF Menu Model window displays.

  6. Rename the default file root_menu.xml to something else.

  7. Open the XML file created in the previous step. Look for an itemNode element as follows:

    <itemNode id="itemNode_JSF/JSPX page name">
    

    For example, the Consumer.jspx page has the following itemNode value:

    <itemNode id="itemNode_Consumer">
    
  8. In the Structure window, right-click the root itemNode and select Insert inside itemNode-itemNode_JSF/JSPX page name > itemNode.

  9. In Common Properties, enter the following values:

    • id: MonitorNode

    • focusViewId: /Consumer

  10. In Advanced Properties, enter Monitor Processes in the label field.

  11. Right-click the itemNode you just added and select Go to Properties.

  12. In the Property Inspector, select Advanced and do the following:

    • Select the dynamicMain task type.

    • In the taskFlowId field, enter the following:

      /WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/MonitorProcessesMainAreaFlow.xml#MonitorProcessesMainAreaFlow
      
    • Enter a string for the pageTitle parameter, which will become the title for the monitoring page. If this parameter is not specified, then the page title will be shown as "Manage Scheduled Processes".
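
    The itemNode that results from steps 8 through 12 might serialize to something like the following sketch. The attribute names shown (taskType and the rest) are an assumption about how the ADF menu model stores these properties, so verify against the menu XML file that JDeveloper actually generates:

```xml
<!-- Hypothetical serialized form of the MonitorNode itemNode -->
<itemNode id="MonitorNode"
          focusViewId="/Consumer"
          label="Monitor Processes"
          taskType="dynamicMain"
          taskFlowId="/WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/MonitorProcessesMainAreaFlow.xml#MonitorProcessesMainAreaFlow"/>
```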

  13. Repeat steps 8-12 to create a second itemNode element with the following properties:

    • id: __Launcher_itemNode__FndTaskList

    • focusViewId: /Launcher

    • label: #{applcoreBundle.TASKS}

    • Task Type: defaultRegional

    • taskFlowId: /WEB-INF/oracle/apps/fnd/applcore/patterns/uishell/ui/publicFlow/TasksList.xml#TasksList

  14. Right-click adfc-config.xml and select Link ADF Menu to Navigator.

  15. Configure Oracle JDeveloper Integrated Oracle WebLogic Server for development with Oracle Enterprise Scheduler extensions.

  16. Deploy and test the application.

9.17.2 How to Embed a Table of Search Results as a Region on a Page

You can embed a table of job request search results as a region on a page. A number of task flow parameters can be used to further specify the job requests returned by the search.

To embed a search results table as a region:

  1. Add the Applications Concurrent Processing (View Controller) library to the ViewController project.

    For more information about adding this library to the project, see Section 9.3.1.

  2. In the Resource Palette, select File System > Applications Core > MonitorProcesses-View.jar > ADF Task Flows.

  3. Drag and drop onto the page as a region the SearchResultsFlow task flow.

    The task flow accepts the following parameters:

    • processId: The request ID number uniquely identifying the process.

    • processName: The name of the process, which corresponds to the name of the job definition.

    • processNameList: Fetches the job requests of multiple process names using a list which contains the relevant job names.

      When the processName task flow parameter is specified, it takes precedence over the processNameList parameter; the requests returned are for the single process name specified by processName only.

    • scheduledDays: Queries requests for the last n days. If this parameter is not specified in a work area task flow, job requests from the last three days are displayed. If the value is greater than three, it is treated as three, and only the last three days of job requests are displayed.

    • status: The status of the request. This filter narrows down the result set to display only the requests with the selected status in the filter.

      If the status input parameter is not specified, then the results table shows all requests with all statuses (by default, All is selected in the status filter list).

      If the status input parameter is specified, then the results table shows only the requests with the given status, and that status is selected as the default in the status filter list.

    • isEmbedResults: A boolean value that indicates whether search results are embedded in the task flow. True or false.

      Set to true in order to embed table results.

    • Time Range Filter: This filter is used to narrow down the result set to show only the requests for last n hours. This filter lists the following values in a combobox: (1) Last 1 Hour, (2) Last 12 Hours, (3) Last 24 Hours, (4) Last 48 Hours and (5) Last 72 Hours.

      The default selected item displays based on the value assigned or given to the task flow parameter scheduledDays.

      A scheduledDays value of 1 means the time range filter list displays only the first three items.

      A scheduledDays value of 2 means the time range filter list displays only the first four items.

      If the value of scheduledDays is 1, then by default, the time range combobox displays Last 24 Hours.

      If the value of scheduledDays is 3 or more, then by default, the time range combobox displays Last 72 Hours.

    • pageTitle: Optional. When passed, the task flow renders this String value as the page title.

    • requireRootOutcome: If true is passed as the value, then the task flow will generate root-outcome when the user clicks on the Submit or Cancel buttons. By default the task flow generates parent-outcome.

    Specifying more than one of these parameters causes the search to run using the AND conjunction.
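
    When the region is added declaratively, these parameters are typically passed through the task flow binding in the page definition. The following sketch shows the general shape of such a binding; the taskFlowId path for SearchResultsFlow is an assumption based on the library location named above, so confirm it against the binding JDeveloper generates when you drop the task flow:

```xml
<!-- Hypothetical page definition binding for the embedded results region -->
<taskFlow id="searchResultsFlow1"
          taskFlowId="/WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/SearchResultsFlow.xml#SearchResultsFlow"
          activation="deferred"
          xmlns="http://xmlns.oracle.com/adf/controller/binding">
  <parameters>
    <parameter id="processName" value="BIPTestJob"/>
    <parameter id="scheduledDays" value="3"/>
    <parameter id="isEmbedResults" value="true"/>
  </parameters>
</taskFlow>
```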

9.17.3 How to Log Scheduled Job Requests in an Oracle ADF UI

You can enable Oracle Diagnostic Logging in an Oracle ADF UI used to monitor scheduled job requests. When enabling logging, the UI displays a View Log button.

The View Log functionality in the monitoring UI applies only to scheduled requests with a persistenceMode property set to file. Hence, the View Log button in the scheduled request submission monitoring UI displays only when viewing requests with persistenceMode property set to file.

The only other valid value for persistenceMode is content. The View Log button is hidden for all requests with a persistenceMode property value of content. If the persistenceMode property is not specified for a given request, the monitoring UI defaults to a persistenceMode value of file and displays the View Log button when viewing relevant requests.

To log scheduled job requests:

  1. Open the server's logging.xml file.

  2. In the logging.xml file, enter the required logging level for oracle.apps.fnd.applcp.srs, for example: INFO, FINE, FINER or FINEST.

    Example 9-36 shows a snippet of a logging.xml file with Oracle Diagnostic Logging configured.

    Example 9-36 Enabling Logging in the logging.xml File

    <logger name='oracle.apps.fnd.applcp.srs' level='FINEST'
        useParentHandlers='false'>
       <handler name='odl-handler'/>
    </logger>
    
  3. Save the logging.xml file and restart the server.

9.17.4 How to Troubleshoot an Oracle ADF UI Used to Monitor Scheduled Job Requests

The following tips are useful for troubleshooting the Oracle ADF UI used to monitor scheduled job requests.

  • Displaying a readable name. When defining metadata, use the display-name attribute to configure the name to be displayed in the Oracle ADF UI. The monitoring UI will display the value defined for the display-name attribute. If this attribute is not defined, the UI displays the value of the metadata-name attribute assigned to the metadata.

  • Displaying multiple links in the task flow UI that each display a pop-up window with a different job definition. The recommended approach is to create a single page fragment that contains the scheduled request submission task flow within an Oracle ADF region. This page is re-used by each link to display a different job definition in the scheduled request submission UI. For each link, be sure to pass the relevant parameters such as the job definition name, package name, and so on. This approach ensures that the UI session creates and uses a single instance of the task flow.

  • Displaying the correct name given the metadata name and display name attributes. By default, the display name takes precedence and displays in the UI. If the display name is not defined, then the UI displays the job or job set name.

  • Resolving name conflicts between a job metadata parameter name and a request parameter with the same name. Oracle Enterprise Scheduler uses the following rules to resolve parameter name conflicts.

    • The last definition takes precedence. When the same parameter is defined repeatedly with the read-only flag set to false in all cases, the last parameter definition takes precedence. For example, a property specified at the job request level takes precedence over the same property specified at the job definition level.

    • The first read-only definition takes precedence. When the same parameter is defined repeatedly and at least one definition is read-only (that is, the ParameterInfo read-only flag is set to true), the first read-only definition takes precedence. For example, a read-only parameter specified at the job type definition level takes precedence over a property with the same name specified at the job definition level, regardless of whether or not that property is read-only.

  • Resolving name conflicts between the job or job set metadata name and display name attributes. By default, the display name takes precedence over the metadata name. If the display name is not defined, then the UI defaults to displaying the job or job set name.

  • Understanding the state of a job request. There are 20 possible states for a job request, each with a corresponding number value. These are shown in Table 9-8.

    Table 9-8 Job Request States

    Job State Number Job Request State Description

    -1

    UNKNOWN

    The state of the job request is unknown.

    1

    WAIT

    The job request is awaiting dispatch.

    2

    READY

    The job request has been dispatched and is awaiting processing.

    3

    RUNNING

    The job request is being processed.

    4

    COMPLETED

    The job request has completed and post-processing has commenced.

    5

    BLOCKED

    The job request is blocked by one or more incompatible job requests.

    6

    HOLD

    The job request has been explicitly held.

    7

    CANCELLING

    The job request has been cancelled and is awaiting acknowledgement.

    8

    EXPIRED

    The job request expired before it could be processed.

    9

    CANCELLED

    The job request was cancelled.

    10

    ERROR

    The job request has run and resulted in an error.

    11

    WARNING

    The job request has run and resulted in a warning.

    12

    SUCCEEDED

    The job request has run and completed successfully.

    13

    PAUSED

    The job request paused for sub-request completion.

    14

    PENDING_VALIDATION

    The job request has been submitted but has not been validated.

    15

    VALIDATION_FAILED

    The job request has been submitted, but validation has failed.

    16

    SCHEDULE_ENDED

    The schedule for the job request has ended, or the job request expiration time specified at submission has been reached.

    17

    FINISHED

    The job request, and all child job requests, have finished.

    18

    ERROR_AUTO_RETRY

    The job request has run, resulted in an error, and is eligible for automatic retry.

    19

    ERROR_MANUAL_RECOVERY

    The job request requires manual intervention in order to be retried or transition to a terminal state.


  • Fixing an Oracle BI Publisher report that does not generate, even though the Oracle Enterprise Scheduler schema REQUEST_PROPERTY table contains all the relevant post-processing parameters. Verify that the post-processing parameters begin with index value of 1. If a set of parameters begins with an index value of 0 (such as pp.0.action), then the Oracle BI Publisher report will not generate. Oracle BI Publisher expects parameters to begin with an index value of 1. In the case of a job set with multiple Oracle BI Publisher jobs, verify that all the individual step post-processing actions begin with an index value of 1.

  • Fixing a scheduled request submission UI that does not display, and throws a partial page rendering error in the browser indicating that the drTaskflowId is invalid. This error may occur as a result of any of the following.

    • The object oracle.as.scheduler.JobDefinition may be unavailable to the scheduled request submission UI, which attempts to query the object using the MetadataService API.

    • The job definition name or the job definition package name is incorrect when passed as task flow parameters. Ensure that the package name does not end with a trailing forward slash.

    • The metadata permissions are not properly configured for the user who is currently logged in. The JobDefinition object, being stored in Oracle Metadata Repository, requires adequate metadata permissions in order to read and modify the JobDefinition metadata. Ensure that the Oracle Metadata Repository to which you are referring contains the job definition name in the proper package hierarchy.

9.18 Using a Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

The Oracle ADF UI used to submit scheduled requests supports basic and advanced modes. Switching between modes requires page navigation between two view activities.

In some cases, you may want to use a custom parameter task flow for the UI in the context of an Oracle Fusion web application. One such use case is when you require a method call activity as the default activity of a custom bounded task flow so as to initialize the parameters view object and flex filters defined in that task flow.

When using page navigation between two view activities and custom bounded task flows with a default method call activity, switching between basic and advanced modes might re-initialize the related view objects and entity objects. If this happens, any data entered in basic mode is lost when changing to advanced mode.

The task flow template enables switching between basic and advanced modes in the scheduled request submission Oracle ADF UI without losing data.

9.18.1 How to Use a Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

A bundled task flow template is provided, containing the components required to enable switching between basic and advanced modes in the Oracle ADF UI. The task flow template adds a router activity and an input parameter to the custom bounded task flow. Configure the router activity as the default activity.

You need only extend the task flow template as needed and implement the activity IDs defined in the task flow template.

Example 9-37 shows a sample implementation of the task flow template.

Example 9-37 Task Flow Template

<?xml version="1.0" encoding="UTF-8" ?>
<adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
  <task-flow-template id="srs-custom-task-flow-template">
    <default-activity id="defActivity">defaultRouter</default-activity>
    <input-parameter-definition id="param1">
      <description id="paramDescription">Parameter to decide on initialization.</description>
      <name id="paramName">shouldInitialize</name>
      <value id="paramID">#{pageFlowScope.shouldInitialize}</value>
      <class id="paramType">boolean</class>
      <required/>
    </input-parameter-definition>
 
    <router id="defaultRouter">
      <case id="routerCaseID">
        <expression id="routerExprID">#{pageFlowScope.shouldInitialize}</expression>
        <outcome id="outcomeID">initializeTaskflow</outcome>
      </case>
      <default-outcome id="defOutcomeID">skip</default-outcome>   
    </router>
 
    <control-flow-rule id="ctrlFlwRulID">
      <from-activity-id id="FrmAc1">defaultRouter</from-activity-id>
      <control-flow-case id="CtrlCase1">
        <from-outcome id="FrmAct3">initializeTaskflow</from-outcome>
        <to-activity-id id="ToAct1">initActivity</to-activity-id>
      </control-flow-case>
      <control-flow-case id="CtrlCase2">
        <from-outcome id="FrmAct2">skip</from-outcome>
        <to-activity-id id="ToAct2">defaultView</to-activity-id>  
        </control-flow-case>
     </control-flow-rule>
    <use-page-fragments/>
  </task-flow-template>
</adfc-config>

The task flow template defines the following:

  • A default-activity

  • An input parameter of boolean type

  • A router activity

  • A control-flow-rule containing two cases

9.18.2 How to Extend the Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

If you need to create your own custom bounded task flow UI for the parameters section of the scheduled request submission UI, you will need to extend this template.

To extend the task flow template for the Oracle ADF UI used to submit scheduled requests:

  1. When creating a new task flow, extend the task flow by selecting Use a template. (For more information, see the chapter "Creating ADF Task Flows" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.) Alternatively, add the lines of code shown in Example 9-38 to the task flow XML file.

    Example 9-38 Extending a Task Flow

    <template-reference>
         <document id="doc1">/WEB-INF/srs-custom-task-flow-template.xml</document>
         <id id="temid">srs-custom-task-flow-template</id>
    </template-reference>
    

    Note:

    Make sure that your bounded task flow does not define any default activity.
  2. Implement the activity IDs defined in the template, which are invoked by the router activity in the template.

    • initActivity: The ID of the method call activity.

    • defaultView: The ID of the default view activity.

    To do this, drag and drop the createInsert method from the view object used in defaultView onto the task flow. This creates a page definition file and adds the binding details to DataBindings.cpx.

  3. Define a control flow rule to navigate from initActivity to defaultView. This navigation depends on the outcome of initActivity, as well as individual use cases.

    Example 9-39 shows a sample implementation of a control flow rule.

    Example 9-39 Implementing a Control Flow Rule

    <control-flow-rule>
         <from-activity-id>initActivity</from-activity-id>
         <control-flow-case>
              <from-outcome>outcome_of_init_activity</from-outcome>
              <to-activity-id>defaultView</to-activity-id>
         </control-flow-case>
    </control-flow-rule>
    

9.18.3 What Happens When You Use a Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

Based on the value of the input parameter, the router invokes the method call activity or skips it, and invokes the view activity directly. The Oracle ADF UI must pass the correct parameter values to the task flow while switching modes.
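
For example, the parameter can be passed to the custom bounded task flow through a task flow call activity in the calling flow. The following is a sketch only; the document path, task flow ID, and activity ID are hypothetical placeholders, so substitute the values used by your application:

```xml
<!-- Sketch: passing the initialization flag to the custom bounded task flow.
     The document path, task flow id, and activity id are hypothetical. -->
<task-flow-call id="customParamsTaskFlow">
  <task-flow-reference>
    <document>/WEB-INF/custom-params-task-flow.xml</document>
    <id>custom-params-task-flow</id>
  </task-flow-reference>
  <input-parameter>
    <name>shouldInitialize</name>
    <value>#{pageFlowScope.shouldInitialize}</value>
  </input-parameter>
</task-flow-call>
```

In this sketch, passing true causes the router to take the initializeTaskflow outcome and invoke the method call activity, while passing false takes the skip outcome so the view activity is invoked without re-initialization.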

9.18.4 What Happens at Runtime: How a Task Flow Template Is Used to Submit Scheduled Requests through an Oracle ADF UI

When loading the initial page in basic mode, the method call activity is invoked. While loading the page in the advanced mode, the custom bounded task flow directly invokes the view activity. This ensures that the user entered data persists in the view objects across modes.

If the custom task flow UI does not render correctly, check whether transactional properties have been set in the custom task flow, such as requires-transaction, and so on.

Remove transactional properties from the task flow definition and set the data control scope to shared.

As the parent scheduled request submission UI task flow already has a transaction, Oracle ADF will commit all called task flow transactions as long as the data controls are shared.
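
As a sketch, a custom bounded task flow definition with no transaction elements and a shared data control scope might look like the following; the task flow ID is a hypothetical placeholder:

```xml
<!-- Sketch: a custom bounded task flow with no transaction settings
     (no requires-transaction or similar elements) and a shared
     data control scope. The task flow id is hypothetical. -->
<task-flow-definition id="custom-params-task-flow">
  <data-control-scope>
    <shared/>
  </data-control-scope>
  <use-page-fragments/>
</task-flow-definition>
```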

Note:

When using the UI to schedule a job to run for a year, for example, a maximum of 300 occurrences are displayed when you click Customize Times.

9.19 Securing Oracle ADF UIs

When creating Oracle ADF UIs for scheduled jobs, you can secure the individual task flows involved using a security policy.

The task flows you can secure are as follows:

  • Scheduling Job Requests UI

  • Monitoring Job Requests UI
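
For illustration only, a grant in the application's jazn-data.xml securing a task flow might resemble the following sketch. The role name and task flow document path are hypothetical placeholders, not the actual names of these task flows; follow your application's security standards:

```xml
<!-- Sketch: granting view access on a task flow to an application role.
     The role name and task flow path are hypothetical. -->
<grant>
  <grantee>
    <principals>
      <principal>
        <class>oracle.security.jps.service.policystore.ApplicationRole</class>
        <name>SchedulingUser</name>
      </principal>
    </principals>
  </grantee>
  <permissions>
    <permission>
      <class>oracle.adf.controller.security.TaskFlowPermission</class>
      <name>/WEB-INF/schedule-requests-task-flow.xml#schedule-requests-task-flow</name>
      <actions>view</actions>
    </permission>
  </permissions>
</grant>
```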

9.20 Integrating Scheduled Job Logging with Fusion Applications

Oracle Enterprise Scheduler is fully integrated with Oracle Fusion Applications logging. The logger captures Oracle Enterprise Scheduler-specific attributes when invoking logging from within the context of a running job request. You can set the values of these Oracle Enterprise Scheduler attributes within the context of defining a job.

Jobs can generate a log file on the file system that can be viewed with the Monitoring UI.

In a typically configured Oracle Enterprise Scheduler hosting application, log and output files are stored in an Oracle Universal Content Management content repository rather than on the file system. These files are available to end users through a page you provide for monitoring scheduled job requests. For more on request monitoring, see Section 9.17, "Monitoring Scheduled Job Requests Using an Oracle ADF UI."

9.21 Logging Scheduled Jobs

Log messages written using the request log file APIs are written to the request log file and Oracle Fusion Applications logging at a severity level of FINE (only if logging is enabled at a level of FINE or lower).

9.21.1 Using the Request Log

Note:

Do not use the request log for debugging or internal error reporting. The audience for debug messages and detailed internal error messages is typically system administrators and Oracle Support, not the end user. Log such messages to FND_LOG only.

For Oracle Enterprise Scheduler jobs, the request log is equivalent to the end user interface for web applications. When developing an Oracle Enterprise Scheduler job, make sure to log to the request log only translatable end-user oriented messages.

For example, if an end user inputs a bad parameter to the Oracle Enterprise Scheduler job, a translated error message logged to the request log is displayed to the end user. The end user can then take the relevant corrective action.

Example 9-40 shows how to set log messages using the request log.

Example 9-40 Setting Log Messages Using the Request Log

-- Seeded message to be displayed to the end user.
FND_MESSAGE.SET_NAME('FND', 'INVALID_PARAMETER'); 
-- Runtime parameter information
FND_MESSAGE.SET_TOKEN('PARAM_NAME', pName); 
FND_MESSAGE.SET_TOKEN('PARAM_VALUE', pValue); 
-- The following is useful for auto-logging errors.
FND_MESSAGE.SET_MODULE('fnd.plsql.mypackage.myfunctionA');
fnd_file.put_line( FND_FILE.LOG, FND_MESSAGE.GET );

If the Oracle Enterprise Scheduler job fails due to an internal software error, log the detailed failure message to FND_LOG for the system administrator or support. You can also log a high-level generic message to the request log so as to inform end users of the error. An example of a generic error message intended for end users: "Your request could not be completed due to an internal error."

9.21.2 Using the Output File

Note:

Do not use the output file for debugging and internal error reporting.

The output file is a formatted file generated by an Oracle Enterprise Scheduler job. An output file can be sent to a printer or viewed in a UI window. Example 9-41 shows an invoice sent to an output file.

Example 9-41 Invoice Output File

fnd_file.put_line( FND_FILE.OUTPUT, '******** XYZ Invoice ********' );

9.21.3 Debugging and Error Logging

Perform debug and error logging using the Diagnostic Logging APIs only; do not use the Oracle Enterprise Scheduler request log for debug and error messages oriented to system administrators or Oracle Support. The request log is for end users and should contain only messages that they can readily understand. When an error occurs in an Oracle Enterprise Scheduler job, report it to the end user through the request log with an appropriate high-level (and, ideally, translated) message, and log the error details and any debug messages with the Diagnostic Logging APIs.

Common PL/SQL, Java, or C code that could be invoked by both Oracle Enterprise Scheduler jobs and interactive application code should only use Diagnostic Logging APIs. If needed, the wrapper Oracle Enterprise Scheduler job should perform appropriate batching and logging to the Request Log for progress reporting purposes.

For more information, see the chapter "Managing Log Files and Diagnostic Data" in Oracle Fusion Middleware Administrator's Guide.

Using Logging in a Java Application

In Java jobs, use AppsLog for debugging and error logging. You can retrieve an AppsLog instance from the CpContext object by calling getLog().

Example 9-42 shows the use of logging in a Java application.

Example 9-42 Logging in Java Using AppsLog

public boolean authenticate(AppsContext ctx, String user, String passwd) 
      throws SQLException, NoSuchUserException {
    AppsLog alog = (AppsLog) ctx.getLog();
    boolean validUser = false;
    if (alog.isEnabled(Log.PROCEDURE))  /* To avoid String concat if not enabled */
      alog.write("fnd.security.LoginManager.authenticate.begin", 
                 "User=" + user, Log.PROCEDURE);
    /* Never log plain-text security-sensitive parameters like passwd! */
    try {                
      validUser = checkinDB(user, passwd);
    } catch (NoSuchUserException nsue) {
      if (alog.isEnabled(Log.EXCEPTION))
        alog.write("fnd.security.LoginManager.authenticate", nsue, Log.EXCEPTION);
      throw nsue; // Allow the caller to handle it appropriately
    } catch (SQLException sqle) {
      if (alog.isEnabled(Log.UNEXPECTED)) { 
        alog.write("fnd.security.LoginManager.authenticate", sqle, 
                   Log.UNEXPECTED);
        Message msg = new Message("FND", "LOGIN_ERROR"); /* System alert */
        msg.setToken("ERRNO", sqle.getErrorCode(), false);
        msg.setToken("REASON", sqle.getMessage(), false);
        /* Message Dictionary messages should be logged using write(..Message..), 
         * and never using write(..String..) */
        alog.write("fnd.security.LoginManager.authenticate", msg, Log.UNEXPECTED);
      }
      throw sqle; // Allow the caller to handle it appropriately
    }
    if (alog.isEnabled(Log.PROCEDURE))  /* To avoid String concat if not enabled */
      alog.write("fnd.security.LoginManager.authenticate.end", 
                 "validUser=" + validUser, Log.PROCEDURE);
    return validUser;
  }

Note:

Example 9-42 uses an active WebAppsContext. Do not attempt to log messages using an inactive or freed WebAppsContext, as this can cause connection leaks.

Using Logging in a PL/SQL Application

PL/SQL logging APIs are part of the FND_LOG package. These APIs require that the relevant application user session initialization APIs, such as FND_GLOBAL.INITIALIZE(), be invoked first in order to set up user session properties in the database session.

These application user session properties, including UserId, RespId, AppId, and SessionId, are needed by the log APIs. Typically, Applications Core invokes these session initialization APIs.

Log plain text messages with FND_LOG.STRING(). Log translatable message dictionary messages with FND_LOG.MESSAGE(). FND_LOG.MESSAGE() logs messages in encoded, but not translated, format, and allows the Log Viewer UI to handle translating messages based on the language preferences of the system administrator viewing the messages.

For details regarding the FND_LOG API, run $fnd/patch/115/sql/AFUTLOGB.pls at the prompt.

Example 9-43 PL/SQL Logging Syntax

PACKAGE FND_LOG IS
   LEVEL_UNEXPECTED CONSTANT NUMBER  := 6;
   LEVEL_ERROR      CONSTANT NUMBER  := 5;
   LEVEL_EXCEPTION  CONSTANT NUMBER  := 4;
   LEVEL_EVENT      CONSTANT NUMBER  := 3;
   LEVEL_PROCEDURE  CONSTANT NUMBER  := 2;
   LEVEL_STATEMENT  CONSTANT NUMBER  := 1;
 
  /*
   **  Writes the message to the log file for the specified 
   **  level and module
   **  if logging is enabled for this level and module 
   */
   PROCEDURE STRING(LOG_LEVEL IN NUMBER,
                    MODULE    IN VARCHAR2,
                    MESSAGE   IN VARCHAR2);
 
   /*
   **  Writes a message to the log file if this level and module 
   **  are enabled.
   **  The message gets set previously with FND_MESSAGE.SET_NAME, 
   **  SET_TOKEN, etc. 
   **  The message is popped off the message dictionary stack, 
   **  if POP_MESSAGE is TRUE.  
   **  Pass FALSE for POP_MESSAGE if the message will also be 
   **  displayed to the user later.
   **  Example usage:
   **  FND_MESSAGE.SET_NAME(...);    -- Set message
   **  FND_MESSAGE.SET_TOKEN(...);   -- Set token in message
   **  FND_LOG.MESSAGE(..., FALSE);  -- Log message
   **  FND_MESSAGE.RAISE_ERROR;      -- Display message
   */
   PROCEDURE MESSAGE(LOG_LEVEL   IN NUMBER,
                     MODULE      IN VARCHAR2, 
                     POP_MESSAGE IN BOOLEAN DEFAULT NULL);
 
   /*
   ** Tests whether logging is enabled for this level and module, 
   ** to avoid the performance penalty of building long debug 
   ** message strings unnecessarily.
   */
   FUNCTION TEST(LOG_LEVEL IN NUMBER, MODULE IN VARCHAR2) 
      RETURN BOOLEAN;
END FND_LOG;

Example 9-44 shows how to log a message in PL/SQL after the AOL session has been initialized.

Example 9-44 Logging a Message in PL/SQL After the AOL Session Has Been Initialized

begin
  
  /* Call a routine that logs messages. */
  /* For performance purposes, check whether logging is enabled. */
  if( FND_LOG.LEVEL_PROCEDURE >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
    FND_LOG.STRING(FND_LOG.LEVEL_PROCEDURE, 
        'fnd.plsql.MYSTUFF.FUNCTIONA.begin', 'Hello, world!' );
  end if;
end;
/

The global variable FND_LOG.G_CURRENT_RUNTIME_LEVEL allows callers to avoid a function call for messages at a lower level than the currently configured level. If logging is disabled, the current runtime level is set to a large number, such as 9999, so that no messages pass the level check. This global variable is automatically populated by the FND_LOG_REPOSITORY package during session and context initialization.

Example 9-45 shows sample code that illustrates the use of the global variable FND_LOG.G_CURRENT_RUNTIME_LEVEL.

Example 9-45 Logging a Message in PL/SQL Using FND_LOG.G_CURRENT_RUNTIME_LEVEL

if( FND_LOG.LEVEL_STATEMENT >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
      dbg_msg := create_lengthy_debug_message(...);
      FND_LOG.STRING(FND_LOG.LEVEL_STATEMENT, 
           'fnd.form.ABCDEFGH.PACKAGEA.FUNCTIONB.firstlabel', dbg_msg);
end if;

Note:

For PL/SQL in a forms client, use the same APIs. Use FND_LOG.TEST() to check whether logging is enabled.

Example 9-46 shows logging message dictionary messages.

Example 9-46 Logging Message Dictionary Messages

if( FND_LOG.LEVEL_UNEXPECTED >=
            FND_LOG.G_CURRENT_RUNTIME_LEVEL) then
        FND_MESSAGE.SET_NAME('FND', 'LOGIN_ERROR'); -- Seeded Message
        -- Runtime Information
        FND_MESSAGE.SET_TOKEN('ERRNO', sqlcode); 
        FND_MESSAGE.SET_TOKEN('REASON', sqlerrm); 
        FND_LOG.MESSAGE(FND_LOG.LEVEL_UNEXPECTED, 
                        'fnd.plsql.Login.validate', TRUE); 
end if;

Using Logging in C

Example 9-47 illustrates the use of logging in a C application.

Example 9-47 Logging in C

#define  AFLOG_UNEXPECTED  6
#define  AFLOG_ERROR       5
#define  AFLOG_EXCEPTION   4
#define  AFLOG_EVENT       3
#define  AFLOG_PROCEDURE   2
#define  AFLOG_STATEMENT   1
 
/* 
** Writes a message to the log file if this level and module is 
** enabled 
*/
void aflogstr(/*_ sb4 level, text *module, text* message _*/);
 
/* 
** Writes a message to the log file if this level and module is 
** enabled. 
** If pop_message=TRUE, the message is popped off the message 
** Dictionary stack where it was set with afdstring() afdtoken(), 
** etc. The stack is not cleared (so messages below will still be 
** there in any case). 
*/
void aflogmsg(/*_ sb4 level, text *module, boolean pop_message _*/);
 
/* 
** Tests whether logging is enabled for this level and module, to
** avoid the performance penalty of building long debug message 
** strings 
*/
boolean aflogtest(/*_ sb4 level, text *module _*/);
 
/* 
** Internal
** This routine initializes the logging system from the profiles.
** It will also set up the current session and username in its state */
void afloginit();