65 Working with Extensions to Oracle Enterprise Scheduler

This chapter explains how to use extensions to Oracle Enterprise Scheduler to manage job request submissions in the context of Oracle Fusion Applications. This chapter includes the following sections:

65.1 Introduction to Oracle Enterprise Scheduler Extensions

Oracle Enterprise Scheduler provides the ability to run different job types, including Java, PL/SQL, and spawned jobs. Jobs can run on demand, or be scheduled to run in the future.

Oracle Enterprise Scheduler provides scheduling services for the following purposes:

  • Distributing job request processing across a grid of application servers.

  • Running Java, PL/SQL, and process or spawned jobs.

  • Processing multiple jobs concurrently.

  • Running the same job in different languages.

Using Oracle JDeveloper, application developers can create and implement jobs; Oracle Enterprise Scheduler then runs those jobs. APIs provide an interface between jobs executed within applications developed in JDeveloper and Oracle Enterprise Scheduler.

The Oracle JDeveloper extensions to Oracle Enterprise Scheduler enable the following:

  • Running scheduled Oracle Business Intelligence Publisher (Oracle BI Publisher), spawned, Java, PL/SQL, Perl, SQL*Plus, SQL*Loader, and C jobs.

  • Running the same job in multiple locales, time zones, currencies, and so on.

  • Creating log and output files for jobs, as well as acting upon those files, such as enabling notifications.

  • Creating Oracle Application Development Framework (Oracle ADF) task flows to schedule jobs and job sets, as well as monitor job requests.

Before you begin:

Install Oracle Enterprise Scheduler to Oracle WebLogic Server. For more information, see Chapter 2, "Setting Up Your Development Environment."

65.2 Standards and Guidelines

The following standards and guidelines apply to working with extensions to Oracle Enterprise Scheduler:

  • Always use the preconfigured job types provided when defining metadata for job definitions.

65.3 Creating and Implementing a Scheduled Job in Oracle JDeveloper

Submitting job requests from an Oracle Fusion application requires developing the following components:

  • A job definition, created in JDeveloper

  • The Java, PL/SQL, SQL*Loader, SQL*Plus, Perl, C, or host scripts job implementation

  • A user interface enabling end users to submit job requests and provide any additional properties for the job

A wizard enables defining a new job within the context of an Oracle Fusion application. The job can be any one of the following types: Java, PL/SQL, SQL*Loader, SQL*Plus, Perl, C, or host scripts.

65.3.1 How to Create and Implement a Scheduled Job in JDeveloper

Creating and implementing a scheduled job in JDeveloper involves creating a package or class from which to call the job, as well as defining a job definition. The job must then be deployed and tested, and a job request submission interface defined.

To create and implement a scheduled job in JDeveloper:

  1. Create a package or class from which to call the job, and include the minimum required methods or functions.

    • Define the job request.

    • Define any subrequests, if required.

  2. If a job requires parameters to be filled in by end users using an Oracle ADF user interface, define a standard ADF Business Components view object with validation.

    For example, if a job requires information regarding duration, date, and time, create an ADF Business Components view object with the properties duration, date, and time.

  3. Create a job definition in JDeveloper using the wizard.

    If using an ADF Business Components view object to collect additional values at runtime from end users, specify the name of the view object as a property of the job definition.

  4. Deploy the job.

  5. Test the job.

  6. Create the end user job request submission interface.

    For more information about creating the end user job request submission interface, see Section 65.14, "Creating an Oracle ADF User Interface for Submitting Job Requests."

65.3.2 What Happens at Runtime: How a Scheduled Job Is Created and Implemented in JDeveloper

An Oracle ADF interface is provided to enable application end users to submit job requests from an Oracle Fusion application. The Oracle ADF interface is integrated into an Oracle Fusion application. As soon as a job request is submitted through the interface, Oracle Enterprise Scheduler runs the job as scheduled.

65.4 Creating a Job Definition

To submit a job request, you must first create a job definition.

65.4.1 How to Create a Job Definition

A job definition and job type are required to submit a job request.

  • Job Definition: This is the basic unit of work that defines a job request in Oracle Enterprise Scheduler.

  • Job Type: This specifies an execution type and defines a common set of properties for a job request.

The extensions to Oracle Enterprise Scheduler provide the following execution types:

  • JavaJobType: for job definitions that are implemented in Java and run in the container.

  • PlsqlJobType: for job definitions that run as PL/SQL stored procedures in a database server.

  • CJobType: for job definitions that are implemented in C and run in the container.

  • PerlJobType: for job definitions that are implemented in Perl and run in the container.

  • SqlLdrJobType: for job definitions that are implemented in SQL*Loader and run in the container.

  • SqlPlusJobType: for job definitions that are implemented in SQL*Plus and run in the container.

  • BIPJobType: for job definitions that are executed as Oracle BI Publisher reports. Oracle BI Publisher jobs require configuring the parameter reportID.

  • HostJobType: for job definitions that run as host scripts executed from the command line.

Before you begin:

If your job definition requires additional properties to be filled in by end users at submission time, you must create a view object that defines these properties. The view object must be associated with the job definition you create, and is later associated with the user interface you create to allow end users to submit job requests along with the properties at submission time.

For more information about defining properties to be filled in at runtime by end users, see Section 65.14, "Creating an Oracle ADF User Interface for Submitting Job Requests."

To create a new job definition in Oracle JDeveloper:

  1. In Oracle JDeveloper, create an Oracle Fusion web application by clicking the Application Menu icon on the Application Navigator, selecting New Project > Projects > Generic Project and clicking OK.

  2. Right-click the project and select Properties. In the Resources tab, add the directory $MW_HOME/jdeveloper/integration/ess/extJobTypes.

  3. If your job includes any properties to be filled in by end users using an Oracle ADF user interface at runtime, create an ADF Business Components view object with validation and the parameters to be filled in by end users.

    1. Right-click the Model project and select Properties. In the Resource Bundle section, configure one bundle per file and select resource bundle type Xliff Resource Bundle.

    2. Define attributes for the view object sequentially, ATTRIBUTE1, ATTRIBUTE2, and so on, with an attribute for each required parameter. Use ADF Business Components attribute control hints to specify the required prompt, validation, and formatting for each parameter.

    3. Add the property parametersVO to your job definition and specify the fully qualified path of the view object as the value of parametersVO. For example, set parametersVO to oracle.my.package.TestVO. A maximum of 100 attributes can be used for parametersVO. The attributes should be named incrementally, for example ATTRIBUTE1, ATTRIBUTE2, and so on.

    4. Define the following required properties:

    • jobDefinitionName: The short name of the job.

    • jobDefinitionApplication: The short name of the application running the job.

    • jobPackageName: The name of the package running the job.

    Additional properties can be defined as shown in Table 65-1.

    Table 65-1 Additional Job Definition Properties

    Property Description

    completionText

    An optional string value that can be used to communicate details of the final state of the job.

    This property value is displayed in the UI used to monitor job request submissions in the details section of the job request. It can be useful for displaying a short explanation as to why a request ended in an error or warning state.

    CustomDatacontrol

    The name of the data control for the application to which the parameter task flow is bound. Following is an example.

    <parameter name="CustomDatacontrol"  data-type="string">ExtParameterAM</parameter>
    

    Use this property when adding a custom task flow to an Oracle ADF user interface used to submit job requests at run time. For more information, see Section 65.14.2, "How to Add a Custom Task Flow to an Oracle ADF User Interface for Submitting Job Requests."

    defaultOutputExtension

    The suffix of the output file. Possible values are txt, xml, pdf, html.

    enableTimeStatistics

    A Boolean parameter that enables or disables the accumulation of time statistics (Y or N).

    enableTrace

    A numerical value that indicates the level of tracing control for the job. Possible values are as follows:

    • 1: Database trace

    • 5: Database trace with bind

    • 9: Database trace with wait

    • 13: Database trace with bind and wait

    • 16: PL/SQL profile

    • 17: Database trace and PL/SQL profile

    • 21: Database trace with bind and PL/SQL profile

    • 25: Database trace with wait and PL/SQL profile

    • 29: Database trace with bind, wait and PL/SQL profile

    executionLanguage

    Stores the preferred language in which the job request should run.

    executionNumchar

    The numeric characters used in the preferred language in which the job runs, as defined by executionLanguage.

    executionTerritory

    The territory of the preferred language in which the job runs, as defined by executionLanguage.

    EXT_PortletContainerWebModule

    Specifies the name of the web module for the Oracle Enterprise Scheduler UI application to use as a portlet when submitting a job request. The Oracle Enterprise Scheduler central UI looks up the producer from the topology based on the registered producer application name derived from EXT_PortletContainerWebModule.

    incrementProc

    Specifies a PL/SQL procedure, evaluated at runtime, that calculates the next set of date parameter values for a recurring request. Enter the name of the PL/SQL procedure. The procedure expects one argument: a number signifying the change in milliseconds between the start dates of the first and current requests.

    -- incr_test - Sample PL/SQL incrementProc procedure
    -- This procedure gets the list of arguments to be incremented
    -- using the incrementProcArgs property and increments each
    -- argument by the delta provided. This behavior is identical
    -- to the default behavior if no incrementProc is set for the
    -- job.
    procedure incr_test( delta IN number ) is
       request_id   number;
       incrProcArgs varchar2(200);
       curr_arg_n   varchar2(100);
       curr_arg_v   varchar2(2000);
       del_pos      number := 0;
       prev_pos     number := 1;
       old_date     date;
       new_date     date;
       delta_days   number;
    begin
       request_id := FND_JOB.REQUEST_ID;
       delta_days := delta / (1000*60*60*24);

       -- incrementProcArgs must be defined for this procedure to be
       -- called.
       incrProcArgs := ESS_RUNTIME.GET_REQPROP_VARCHAR(request_id,
                          FND_JOB.INCR_PROC_ARGS_P) || ',';

       LOOP
          del_pos := INSTR(incrProcArgs, ',', prev_pos);
          EXIT WHEN del_pos = 0;

          curr_arg_n := FND_JOB.SUBMIT_ARG_PREF_P ||
                        SUBSTR(incrProcArgs, prev_pos, del_pos - prev_pos);

          curr_arg_v := ESS_RUNTIME.GET_REQPROP_VARCHAR(request_id,
                                                        curr_arg_n);

          old_date := FND_DATE.CANONICAL_TO_DATE(curr_arg_v);
          new_date := old_date + delta_days;

          ESS_RUNTIME.UPDATE_REQPROP_VARCHAR(request_id, curr_arg_n,
                          FND_DATE.DATE_TO_CANONICAL(new_date));

          prev_pos := del_pos + 1;
       END LOOP;
    end incr_test;
    

    incrementProcArgs

    A list of comma-separated date arguments to be incremented. The incrementProc property is used to increment these values. Alternatively, a default calculation is used if the property incrementProc is not defined. Enter a list of argument numbers to identify which job arguments are to be incremented (for example, "1, 2, 5").

    In the incrementProc example shown above, an incrementProc procedure calculates the next set of date parameter values for a recurring request. The procedure expects one argument: a number signifying the change in milliseconds between the start dates of the first and current requests.

    logLevel

    The level at which events are logged (between 0 and 4). Each job type has a logLevel of 1 by default. This optional value is used to override the job type logLevel in the job definition.

    optimizerMode

    This flag enables setting the database optimizer mode for the job. Optimizer mode is useful for fine-tuning performance.

    parametersVO

    The ADF Business Components view object you define for additional properties to be entered at runtime by end users using an Oracle ADF user interface.

    ParameterTaskflow

    Enter the name of the task flow as a parameter. The name of the taskflow.xml file must be the same as the taskflowId. Following is an example.

    <parameter name="ParameterTaskflow"  data-type="string">/WEB-INF/oracle/apps/prod/project/ParamTestTaskFlow.xml#ParamTestTaskFlow</parameter>
    

    Use this property when adding a custom task flow to an Oracle ADF user interface used to submit job requests at run time. For more information, see Section 65.14.2, "How to Add a Custom Task Flow to an Oracle ADF User Interface for Submitting Job Requests."

    reportID

    The Oracle BI Publisher report value specified in the Oracle BI Publisher repository. Required parameter for Oracle BI Publisher jobs only.

    rollbackSegment

    Enables setting a database rollback segment for the job, which is used until the first commit. When implementing the rollback segment, use FND_JOB.AF_COMMIT and FND_JOB.AF_ROLLBACK to commit and roll back.

    srsFlag

    A Boolean parameter (Y or N) that controls whether the job displays in the job request submission user interface (see Section 65.14, "Creating an Oracle ADF User Interface for Submitting Job Requests").

    SYS_runasApplicationID

    Enables elevating access privileges for completing a scheduled job. For more information about elevating access privileges for the completion of a particular job, see Section 65.13, "Elevating Access Privileges for a Scheduled Job."


  4. Create a new job. From the New Gallery, select Business Tier > Enterprise Scheduler Metadata and click Job Definition.

  5. In the Job Definition Name & Location page in the Job Definition Creation wizard, do the following:

    • Name: Enter a name for the job.

    • JobType: Select the job type from the drop-down list.

    Click Finish. The new job definition displays.

  6. Edit the following properties in the job definition as required for the selected job type:

    • JavaJobType: Uncheck the read-only checkbox next to className and set its value to the name of the business logic class.

    • PlsqlJobType: Uncheck the read-only checkbox next to procedureName and set its value to the name of the procedure (such as myprocedure.proc). Create a new parameter named numberOfArgs. Set numberOfArgs to the number of job submission arguments, excluding errbuf and retcode.

    • CJobType: Add the parameter executableName and set its value to the name of the C job to be executed. The executable file identified by the executableName parameter must exist in the directory $APPLICATIONS_BASE/$APPLBIN.

    • PerlJobType: Add the parameter executableName and set its value to the name of the Perl script.

    • SqlLdrJobType: Add the parameter executableName and set its value to the name of the control file to be executed (located under PRODUCT_TOP/$APPLBIN). Add SQL*Loader options (such as direct=yes) as a sqlldr.directoption parameter in the job definition.

    • SqlPlusJobType: Add the parameter executableName and set its value to the name of the SQL*Plus job script to be executed (located under PRODUCT_TOP/$APPLSQL).

    • HostJobType: Add the parameter executableName and set its value to the name of the host script job to be executed. The executable file identified by the executableName parameter must exist in the directory PRODUCT_TOP/$APPLBIN.

    Note:

    Configure the $APPLBIN and $APPLSQL variables in the environment.properties file. The $APPLBIN and $APPLSQL variables point to the location of executable files under PRODUCT_TOP. These variables enable the extensions to Oracle Enterprise Scheduler to locate the jobs to be run. Typically, these variables are set in a preexisting environment properties file in the system.
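The date arithmetic performed by the sample incrementProc procedure above (a delta in milliseconds converted to whole days, then added to each date argument) can be sketched outside PL/SQL. The Java below is illustrative only; it uses no Oracle Enterprise Scheduler APIs, and all values are hypothetical.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class IncrementDemo {
    // Mirrors the PL/SQL sample: convert a delta in milliseconds to
    // whole days, then advance each date argument by that many days.
    static List<LocalDate> increment(List<LocalDate> args, long deltaMillis) {
        long deltaDays = deltaMillis / (1000L * 60 * 60 * 24);
        List<LocalDate> out = new ArrayList<>();
        for (LocalDate d : args) {
            out.add(d.plusDays(deltaDays));
        }
        return out;
    }

    public static void main(String[] argv) {
        List<LocalDate> dates = List.of(
            LocalDate.of(2012, 1, 1), LocalDate.of(2012, 1, 15));
        // One week between the first and current request start dates.
        long delta = 7L * 24 * 60 * 60 * 1000;
        System.out.println(increment(dates, delta));
        // Prints [2012-01-08, 2012-01-22]
    }
}
```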

65.4.2 How to Define File Groups for a Job

A file group is a collection of output files such as text files, XML files, and so on. File groups enable categorizing files together for a specific purpose, such as file groups for human resources or financial reports.

File groups are used for postprocessing jobs such as Business Intelligence Publisher jobs. Using postprocessing actions, the results of a job can be saved as an HTML file, for example, or printed. File groups specify the type of postprocessing action to be taken for a given job.

There are two types of file groups: output and layout. Postprocessing layout actions create additional output files using the job request output files. For example, an XML job output file can be processed as an HTML or PDF file.

Postprocessing output actions act upon job request output files by printing, faxing, or emailing the files, for example. Output postprocessing actions can be taken on job request output files, as well as files created by layout postprocessing actions. For example, a job request output XML file can be converted to a PDF file using layout postprocessing actions, and then emailed using output postprocessing actions.

To define file group properties:

  1. In the job definition for which you want to define postprocessing, define a file group.

    1. Name the property Program.FMG.

    2. For the value of the property, enter a list of comma-separated File Management Groups, where each file group is prefixed by an L or O to indicate a layout or output file group, respectively. A sample file group property is shown in Example 65-1.

      Example 65-1 File Group Property Sample Value

      Program.FMG = L.MYXML, O.ALL, O.PDF
      

      Three file groups are listed in this example.

  2. In the job definition, create a property containing a regular expression used to filter the files in the output work directory of the job request. Any output files that match the filter will be part of the relevant file group.

    Example regular expressions are shown in Example 65-2, Example 65-3, and Example 65-4.

    Example 65-2 File Group Regular Expression Filtering for All Files with the Suffix XML

    MYXML = '.*\.xml$'
    

    Example 65-3 File Group Regular Expression Filtering for All Files

    ALL = '.*$'
    

    Example 65-4 File Group Regular Expression Filtering for All Files with the Suffix PDF

    PDF = '.*\.pdf$'
    

    An example of file group properties in a job definition is shown in Example 65-5.

    Example 65-5 File Group Properties with File Group Regular Expression Filtering

    Program.FMG = L.MYXML, O.ALL, O.PDF
    MYXML = '.*\.xml$'
    ALL = '.*$'
    PDF = '.*\.pdf$'
    

    These properties specify the use of the Business Intelligence Publisher postprocessing action on the MYXML file group, followed by the print postprocessing action on either ALL or PDF file groups.

  3. Optionally, rename the file group and store it in the Oracle Metadata Service repository so that it displays in a more user-friendly way in the scheduled job request submission UI.
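Taken together, steps 1 and 2 mean that each output file of the request is bucketed into a file group when its name matches the group's regular expression. The Java sketch below illustrates that bucketing with the sample values from Example 65-5; the file names are hypothetical and the code is not part of any Oracle Enterprise Scheduler API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class FileGroupDemo {
    public static void main(String[] args) {
        // Program.FMG = L.MYXML, O.ALL, O.PDF  (L = layout, O = output)
        String programFmg = "L.MYXML, O.ALL, O.PDF";
        for (String entry : programFmg.split(",")) {
            String[] parts = entry.trim().split("\\.", 2);
            String kind = parts[0].equals("L") ? "layout" : "output";
            System.out.println(parts[1] + ": " + kind + " group");
        }

        // Apply the MYXML filter to the request's output files.
        Pattern myxml = Pattern.compile(".*\\.xml$");
        List<String> matched = new ArrayList<>();
        for (String f : List.of("report.xml", "report.pdf", "trace.log")) {
            if (myxml.matcher(f).matches()) {
                matched.add(f);
            }
        }
        System.out.println("MYXML group: " + matched);
        // Prints MYXML group: [report.xml]
    }
}
```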

65.4.3 What Happens When You Create a Job Definition

The job definition is written to an XML file called <job name>.xml.

65.4.4 What Happens at Runtime: How Job Definitions Are Created

The Oracle Fusion application passes the job definition file to Oracle Enterprise Scheduler, which runs the job defined in the file.

65.4.5 Related Links

The following documents provide additional information related to subjects discussed in this section:

65.5 Configuring a Spawned Job Environment

Configuring a spawned job involves creating an environment file and configuring an Oracle wallet.

65.5.1 How to Create an Environment File for Spawned Jobs

Spawned jobs require an environment.properties file to provide the correct environment for execution. The environment.properties file should be located in the config/fmwconfig directory under the domain.

Additional environment variables may be added to the same directory in a similar file called env.custom.properties. Variables defined in this file take precedence over those in the environment.properties file.

Similarly, server-specific environment variables may be set in the server config directory in files called environment.properties and env.custom.properties.
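The override behavior described above can be sketched with java.util.Properties: load the base file first, then the custom file into the same object so that its values win for matching keys. The file contents here are placeholders.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class OverrideDemo {
    public static void main(String[] args) throws IOException {
        Properties env = new Properties();
        // environment.properties (base values; paths are placeholders)
        env.load(new StringReader("APPL_TOP=/u01/base\nAFPERL=/usr/bin/perl"));
        // env.custom.properties, loaded second, overrides matching keys.
        env.load(new StringReader("APPL_TOP=/u01/custom"));
        System.out.println(env.getProperty("APPL_TOP")); // /u01/custom
        System.out.println(env.getProperty("AFPERL"));   // /usr/bin/perl
    }
}
```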

Before you begin:

The following variables are used to identify the correct interpreters for various spawned job types:

  • AFSQLPLUS: The executable for SQL*Plus scripts.

  • AFSQLLDR: The executable for SQL*Loader uploads.

  • AFPERL: The Perl interpreter.

  • ATGPF_TOP: The TOP directory for ATGPF files, needed to locate key files for SQL*Plus and Perl jobs.

The following environment properties are available to all spawned jobs:

  • REQUESTID: The request ID of the current job request.

  • WORK_DIR_ROOT: The directory on the local file system where the request can perform file operations.

  • OUTPUT_WORK_DIR: The directory to which the job writes all output files.

  • LOG_WORK_DIR: The directory to which the job writes all log files.

  • INPUT_WORK_DIR: The directory to which input files are saved before the job is spawned.

  • OUTFILE_NAME: The default name for the job output file.

  • LOGFILE_NAME: The name of the log file for the job.

  • USER_NAME: The name of the user submitting the job. The job runs in the context of this user.

  • REQUEST_HANDLE: The Oracle Enterprise Scheduler request handle for the current request.

The environment variables must point to the client ORACLE_HOME and environment so that spawned jobs can connect to the database.

Note:

Ensure the variables you define in the environment.properties file do not include any trailing spaces. Follow the guidelines required by java.util.Properties.

Restart the server after editing the environment.properties file.
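The trailing-space warning matters because java.util.Properties strips leading whitespace from a value but preserves trailing whitespace, so a stray space ends up in the environment variable. A quick illustration:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class TrailingSpaceDemo {
    public static void main(String[] args) throws IOException {
        Properties p = new Properties();
        // Note the trailing space after the (placeholder) value.
        p.load(new StringReader("ORACLE_HOME=/u01/app/oracle \n"));
        String v = p.getProperty("ORACLE_HOME");
        System.out.println("[" + v + "]"); // [/u01/app/oracle ]
        System.out.println(v.endsWith(" ")); // true
    }
}
```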

To create an environment file for spawned jobs:

  1. Use a text editor to create an environment.properties file for the spawned job.

  2. Set the following environment variables in the environment.properties file:

    • LD_LIBRARY_PATH

    • ORACLE_HOME

    • PATH: The full path of the spawned job. In Windows environments, the PATH must include all directories that are typically part of LD_LIBRARY_PATH.

    • TNS_ADMIN: The directory which stores files related to the database connection (such as tnsnames.ora, sqlnet.ora).

    • TWO_TASK: The TNS name identifying the database to which spawned jobs should connect. In Windows environments, the environment variable is LOCAL.

  3. Configure the following variables, which are required to locate spawned jobs:

    • APPLBIN: C executables and SQL*Loader control files must reside in the $APPLBIN directory under the product TOP.

    • APPL_TOP: Set this property to the top level directory where the bin directory of C executables resides.

    • APPLSQL: SQL*Plus scripts must reside in the $APPLSQL directory under the product TOP. This means that the product TOP should be accessible to the environment.

    • ATGPF_TOP: This variable is required for SQL*Plus jobs. This should point to where the wrapper script is available.

  4. Save the environment.properties file and restart the server.
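For orientation, the fragment below shows the shape of a minimal environment.properties file combining the variables from steps 2 and 3. Every path and TNS name is a placeholder; a provisioned Oracle Fusion Applications environment supplies its own values.

```properties
# Illustrative environment.properties; all values are placeholders.
ORACLE_HOME=/u01/app/oracle/client
LD_LIBRARY_PATH=/u01/app/oracle/client/lib
PATH=/u01/app/oracle/client/bin:/usr/bin:/bin
TNS_ADMIN=/u01/app/oracle/client/network/admin
TWO_TASK=fusiondb
APPL_TOP=/u01/apps/fusionapps
APPLBIN=bin
APPLSQL=sql
ATGPF_TOP=/u01/apps/atgpf
```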

65.5.2 How to Configure an Oracle Wallet for Spawned Jobs

Use the TNS_ADMIN and ORACLE_HOME variables specified in the environment.properties file created in Section 65.5.1.

A configured Oracle wallet enables spawned jobs to connect to the database at the command line. A provisioned Oracle Fusion Applications environment will have this wallet preconfigured.

To configure an Oracle wallet for the spawned job:

  1. At the prompt, enter the following commands as shown in Example 65-6.

    Example 65-6 Creating a Wallet

    If you are using a Linux operating system, use these commands:

    cd $TNS_ADMIN
    mkdir wallet
    mkstore -wrl ./wallet -create 
    

    If you are using a Windows operating system, use these commands:

    cd %TNS_ADMIN%
    mkdir wallet
    mkstore -wrl wallet -create
    
  2. When prompted, choose a password for the wallet.

  3. At the prompt, enter the following command as shown in Example 65-7.

    Example 65-7 Creating Wallet Credentials

    If you are using a Linux operating system, use this command:

    mkstore -wrl ./wallet -createCredential <$TWO_TASK> fusion_runtime <fusion_runtime_password>
    

    If you are using a Windows operating system, use this command:

    mkstore -wrl wallet -createCredential <%TWO_TASK%> fusion_runtime <fusion_runtime_password>
    

    where TWO_TASK is the variable defined in the environment.properties file and <fusion_runtime_password> is the password for the fusion_runtime user name.

    This command creates permissions for accessing the wallet.

  4. When prompted, enter the wallet password created earlier.

  5. In a text editor, create a file called sqlnet.ora that includes the lines shown in Example 65-8.

    Example 65-8 Create a File Called sqlnet.ora

    If you are using a Linux operating system, include these lines:

          SQLNET.WALLET_OVERRIDE = TRUE
          WALLET_LOCATION =
            (SOURCE =
              (METHOD = FILE)
              (METHOD_DATA =
                (DIRECTORY = <$TNS_ADMIN>/wallet)
              )
            )
    

    If you are using a Windows operating system, include these lines:

          SQLNET.WALLET_OVERRIDE = TRUE
          WALLET_LOCATION =
            (SOURCE =
              (METHOD = FILE)
              (METHOD_DATA =
                (DIRECTORY = <%TNS_ADMIN%>\wallet)
              )
            )
    
  6. In a text editor, create a file called tnsnames.ora that includes the lines shown in Example 65-9.

    Example 65-9 Create a File Called tnsnames.ora

          dbname =
            (DESCRIPTION =
              (ADDRESS =
                 (PROTOCOL = TCP)
                 (HOST = host.example.com)
                 (PORT = 1521)
               )
          (CONNECT_DATA = (SID = sidname))
            )
    
  7. [Only for Unix OS] Execute the following commands as shown in Example 65-10.

    Example 65-10 Set Directory and File Permissions

    chmod 755 wallet
    chmod 744 wallet/cwallet.sso
    

    The first command enables anyone to read and execute files in the directory, while reserving write access to the directory creator.

    The second command enables only the file owner to read, write and execute the file, while anyone can read the file.

  8. Test the wallet by connecting to it. Execute the following command as shown in Example 65-11.

    Example 65-11 Connect to the Wallet

    If you are using a Linux operating system, use this command:

    sqlplus /@<$TWO_TASK>
    

    If you are using a Windows operating system, use this command:

    sqlplus /@<%TWO_TASK%>
    

65.5.3 Migrating Spawned Job Environment Properties to the Production Environment

Oracle Enterprise Scheduler provides a plug-in to migrate spawned job environment properties from the testing environment to the production environment. The migration plug-in moves the data sources, LDAP, JMS, and other configurations that are part of the domain of the testing environment to the production environment.

The plug-in aims to incorporate properties created for spawned job processing. As such, the test to production plug-in parses the environment.properties file created in the testing environment and migrates it to the production environment. The file resides in the location specified in the testing environment as defined in the system property ess.config.dir.

As the properties to be moved reside in a flat file, it is unnecessary to annotate any MBeans with the property MovableProperty. Changes in the Oracle Enterprise Scheduler connections.xml are handled by the plug-in.

The migration plug-in includes the following components: CopyConfig, MovePlans, PasteConfig.

  • CopyConfig: This script accomplishes the following tasks.

    • Reads the EssConfigDir location property from the system.

    • Reads and parses the environment.properties file.

    • Creates a MovableComponent with the name ESS-EXT and creates another MovableComponent with the name ENV-PROPS which it then adds to the ESS-EXT movable property.

    • Loops through every property in the environment.properties file and adds it to the ESS-EXT MovableComponent as a ConfigProperty.

    • Moves all the properties available in the properties file of the testing environment to the same file of the production environment.

    • Analyzes properties to determine whether they are changeable in the production environment. If they are, then those properties are defined as READ_WRITE. If they are not changeable, the properties are defined as READ_ONLY.

    • Returns the list of MovableComponent objects.

    • In post-processing, sets the EssConfigDir property to componentProperties so that the PasteConfig script can fetch the correct value.

    • Generates the MovePlan.xml file, whose values can be modified as required for the production environment.

  • MovePlans: The environment.properties file is located outside the domain home or Oracle Fusion Middleware home. As such, a test to production plug-in must extract these properties and make them movable through the test to production framework. The plug-in extracts the key and value of the properties defined in the environment.properties file and makes them available in MovePlan.xml so that the values can be modified to suit the needs of the production environment.

    An example of the hierarchy of a MovePlan.xml file is shown in Example 65-12.

    Example 65-12 MovePlan.xml File

     movePlan
      |_movableComponent (componentType:J2EEDomain, componentName:base_domain) 
         //Default root movable component
         |_movableComponent (componentType:ESS-EXT, componentName:'Oracle Enterprise Scheduler
            Extension components')
            |_moveDescriptor
               |_configGroup (type: ENVIRONMENT_PROPERTIES)
                  |_configProperty(id="/machine/user/instance/ess/config") 
                     // Location of the environment.properties
                     |_<configProperty>                
                         <name>APPL_TOP</name>
                         <value>/machine/user2/Test/</value>
                         <itemMetadata>
                           <dataType>STRING</dataType>
                           <scope>READ_WRITE</scope>
                         </itemMetadata>
                       </configProperty>
                     |_configProperty-2
                     |_configProperty-3                
                     |_configProperty-4
    
  • PasteConfig: This script writes all the required values to the environment.properties file of the production environment.

    • Gets the MovableComponent from FMWT2PPasteBean with type ESS-EXT. Gets the internal MovableComponent with name ENV-PROPS from ESS-EXT.

    • Creates a new environment.properties file in the UserFileDir location of the production environment.

    • Extracts the ConfigProperty values added to this MovableComponent and constructs an output stream.

    • Writes all the values to the production environment.properties file.
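
    The environment.properties file that CopyConfig reads and PasteConfig writes is a plain key=value properties file. The following entries are illustrative only; apart from APPL_TOP and ATGPF_TOP, which appear elsewhere in this chapter, the names and paths are assumptions rather than values from a real environment:

    ```properties
    # Illustrative contents of environment.properties (values are examples)
    APPL_TOP=/machine/user2/Test/
    ATGPF_TOP=/machine/user2/Test/atgpf
    ```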

To migrate spawned job environment properties to the production environment:

  1. Make sure the file cloningclient.jar is located at <MW_HOME>/oracle_common/jlib.

  2. Make sure the Oracle Enterprise Scheduler plug-in JAR and the registration XML file are located under <MW_HOME>/Oracle_atgpf/clone/provision.

  3. Pass the location of the environment.properties file of the original testing environment to the migration plug-in by entering the following at the prompt:

    For a Linux operating system, use this command:

    setenv T2P_JAVA_OPTIONS "-Dess.config.dir=<LocationOfPropsFile>"
    

    For a Windows operating system, use these commands:

    set "T2P_JAVA_OPTIONS=-Dess.config.dir=<LocationOfPropsFile>"
    cd <MW_HOME>\oracle_common\bin
    
  4. At the command line, execute the copyConfig script after entering the names of all the servers in the domain or cluster, as shown.

    For a Linux operating system, use this command:

    copyConfig.sh -javaHome /usr/local/packages/jdk16/ -archiveLocation /machine/user/11gWLS/DIST/a.jar
     -sourceDomainLoc /machine/user/11gWLS/user_projects/domains/mydomain -sourceMWHomeLoc /machine/user/11gWLS
     -domainHostName hostname.host.com -domainPortNum 8001 -domainAdminUserName weblogic
     -domainAdminPassword /machine/user/11gWLS/wlsPassword.txt -silent true
    

    For a Windows operating system, use this command:

    copyConfig.cmd -javaHome "C:\Program Files\Java\jdk6" -archiveLocation C:\user\11gWLS\DIST\a.jar -sourceDomainLoc
    C:\user\11gWLS\user_projects\domains\mydomain -sourceMWHomeLoc C:\user\11gWLS -domainHostName hostname.host.com -domainPortNum 8001 -domainAdminUserName weblogic -domainAdminPassword C:\user\11gWLS\wlsPassword.txt -silent true
    

    The script creates an environment archive.

  5. At the command line, execute the extractMovePlan script as shown in the following example.

    For a Linux operating system, use this command:

    extractMovePlan.sh -javaHome /usr/local/packages/jdk16 -archiveLocation /machine/user/11gWLS/DIST/a.jar 
     -planDirLocation /machine/user/11gWLS/EXTRACT -logDirLoc /tmp
    

    For a Windows operating system, use this command:

    extractMovePlan.cmd -javaHome "C:\Program Files\Java\jdk6"
     -archiveLocation C:\user\11gWLS\DIST\a.jar -planDirLocation
    C:\user\11gWLS\EXTRACT -logDirLoc C:\tmp
    

    This script extracts the MovePlan.xml file from the archive to the planDirLocation. The extracted XML document contains environment.properties entries within the movableComponent of type ESS-EXT.

  6. Edit the MovePlan.xml file under planDirLocation as required by the production environment.

  7. At the command line, execute the pasteConfig script to recreate the target environment. This step configures the data sources, LDAP, and so on, and starts the servers. Following is an example.

    For a Linux operating system, use this command:

    pasteConfig.sh -javaHome /usr/local/packages/jdk16/ -archiveLocation /machine/user/11gWLS/DIST/a.jar
     -targetDomainLoc /machine/user/11gWLS/user_projects/domains/domain9 -targetMiddlewareHomeLoc /machine/user/11gWLS
     -movePlanLocation /machine/user/11gWLS/EXTRACT/moveplan.xml -logDirLoc /tmp -silent true
    

    For a Windows operating system, use this command:

    pasteConfig.cmd -javaHome "C:\Program Files\Java\jdk6"
     -archiveLocation C:\user\11gWLS\DIST\a.jar  -targetDomainLoc
    C:\user\11gWLS\user_projects\domains\domain9 -targetMiddlewareHomeLoc
    C:\user\11gWLS  -movePlanLocation C:\user\11gWLS\EXTRACT\moveplan.xml
     -logDirLoc C:\tmp -silent true
    

65.5.4 What Happens When You Configure a Spawned Job Environment

A configured Oracle wallet enables spawned jobs to connect to the database at the command line.

65.6 Implementing a PL/SQL Scheduled Job

Implementing a PL/SQL scheduled job requires creating a job definition and creating a PL/SQL package.

65.6.1 Standards and Guidelines for Implementing a PL/SQL Scheduled Job

Run subrequests through Oracle Enterprise Scheduler using the Oracle Enterprise Scheduler APIs.

A PL/SQL stored procedure scheduler job should have a signature with the first two arguments being errbuf and retcode. The remaining arguments are used as required for defining job parameters. All arguments have a data type of varchar2.

65.6.2 How to Define Metadata for a PL/SQL Scheduled Job

Create a job definition as described in Section 65.4, "Creating a Job Definition."

PL/SQL jobs require setting an additional property, numberOfArgs, in the job definition. This property identifies the number of job submission arguments, not including the required arguments errbuf and retcode.
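
For example, the fusion_plsql_sample procedure shown in Example 65-13 takes five job submission arguments after errbuf and retcode (run_mode, duration, p_num, p_date, and p_varchar), so its job definition would set numberOfArgs to 5. The fragment below only sketches the idea; the actual metadata element names are generated by JDeveloper and may differ:

```xml
<!-- Sketch only: the five submission arguments of fusion_plsql_sample,
     with errbuf and retcode excluded from the count -->
<property name="numberOfArgs" value="5"/>
```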

65.6.3 How to Implement a PL/SQL Scheduled Job

Oracle Enterprise Scheduler provides runtime PL/SQL APIs for implementing PL/SQL jobs and running the jobs using Oracle Enterprise Scheduler. A view object is defined and associated with the job definition for the job.

When creating a PL/SQL job, use the fusion database user. For information about granting access privileges to database users in the context of Oracle Fusion Applications, see Chapter 51, "Implementing Oracle Fusion Data Security."

To implement a PL/SQL scheduled job:

  1. Create a PL/SQL package.

  2. Deploy the package to a database.

  3. Test the package.

65.6.4 What Happens When You Implement a PL/SQL Job

The sample PL/SQL job shown in Example 65-13 provides a signature of a PL/SQL procedure run as a job. The first two arguments to the PL/SQL procedure, errbuf and retcode, are required. The remaining arguments are properties filled in by end users and passed to Oracle Enterprise Scheduler when the job is submitted.

The example shown in Example 65-13 illustrates a sample PL/SQL job that uses the PL/SQL API.

Example 65-13 Running a Job Using the PL/SQL API

procedure fusion_plsql_sample(
-- The first two arguments are required: errbuf and retcode.
-- The errbuf value is logged when a job request ends in a warning or error
-- state, to provide a quick indication as to why the request ended that way.
-- 
                                  errbuf    out NOCOPY varchar2,
                                  retcode   out NOCOPY varchar2,

-- The remaining arguments are job submission arguments, as collected from the
-- view object associated with the job in the job definition. The view object
-- is used to present a user interface to end users, allowing them to enter
-- values for the properties listed in the following lines of code. These
-- values are submitted by the end user.
-- 
                                  run_mode  in  varchar2 default 'BASIC',
                                  duration  in  varchar2 default '0',
                                  p_num     in  varchar2 default NULL,
                                  p_date    in  varchar2 default NULL,
                                  p_varchar in  varchar2 default NULL) is
 
  begin
       -- Write log file content using the FND_FILE API.
       FND_FILE.PUT_LINE(FND_FILE.LOG, 'About to run the sample program');
 
       -- Implement the business logic of the job here.
       -- 
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'RUN MODE: ' || run_mode);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'DURATION: ' || duration);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_NUM: ' || p_num);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_DATE: ' || p_date);
       FND_FILE.PUT_LINE(FND_FILE.OUT, 'P_VARCHAR: ' || p_varchar);
 
       -- Set the job completion status, which is returned to Oracle
       -- Enterprise Scheduler.
       errbuf := fnd_message.get('FND', 'COMPLETED NORMAL');
       retcode := '0';
 end;

The sample shown in Example 65-14 illustrates a PL/SQL job with a subrequest submission. The no_requests argument identifies the number of subrequests that must be submitted.

Example 65-14 Submitting a Subrequest Using the PL/SQL Runtime API

procedure fusion_plsql_subreq_sample(
                                  errbuf       out NOCOPY varchar2,
                                  retcode      out NOCOPY varchar2,
                                  no_requests  in  varchar2 default '5') is
       sub_reqid number;
       submitted_requests varchar2(100);
       jobProp ess_runtime.request_prop_table_t;
  begin
       -- Write log file content using the FND_FILE API.
       FND_FILE.PUT_LINE(FND_FILE.LOG, 'About to run the sample program with subrequest functionality');
 
       -- The PAUSED_STATE request property, set by the job, identifies whether
       -- the request is starting for the first time or restarting after being
       -- paused.
       if ess_runtime.get_reqprop_varchar(fnd_job.request_id, 'PAUSED_STATE') is null  -- first time start
       then
          -- Implement the business logic of the job here.
          FND_FILE.PUT_LINE(FND_FILE.OUT, 'About to submit subrequests: ' || no_requests);
 
          -- Loop through all the subrequests.
          for req_cnt in 1..to_number(no_requests) loop
            -- Retrieve the request handle and submit the subrequest.
            sub_reqid := ess_runtime.submit_subrequest(request_handle => fnd_job.request_handle,
                                        definition_name => 'sampleJob',
                                        definition_package => 'samplePkg',
                                        props => jobProp);
            submitted_requests := submitted_requests || sub_reqid || ',';
          end loop;
 
          -- Pause the parent request.
          ess_runtime.update_reqprop_varchar(fnd_job.request_id, 'STATE', ess_job.PAUSED_STATE);
 
          -- Save the list of submitted subrequest IDs on the parent request,
          -- enabling the job to retrieve the status during restart. 
          ess_runtime.update_reqprop_varchar(fnd_job.request_id, 'PAUSED_STATE', submitted_requests);
 
       else
          -- Restarting after the subrequests have completed: set the job
          -- completion status and return it to Oracle Enterprise Scheduler.
          errbuf := fnd_message.get('FND', 'COMPLETED NORMAL');
          retcode := '0';
       end if;
 end;

65.6.5 What Happens at Runtime: How a PL/SQL Job is Implemented

Oracle Enterprise Scheduler calls routines to initialize the context of the PL/SQL job, including PL/SQL global values, local values (such as language and territory), and request-specific values such as request ID and request handle.

The view object associated with the job definition displays a user interface so that end users may fill in values for each property. The Oracle Fusion web application calls Oracle Enterprise Scheduler using the provided APIs and submits the job request. Oracle Enterprise Scheduler runs the job, which calls the context routines and then runs the job logic. The job ends with a retcode value of 0, 1, 2 or 3, representing SUCCESS, WARNING, FAILURE or BUSINESS ERROR, respectively. The Oracle Fusion web application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

65.6.6 Related Links

The following documents provide additional information related to subjects discussed in this section:

For more information about implementing a PL/SQL stored procedure scheduled job, see the chapter "Creating and Using PL/SQL Jobs" in the Oracle Fusion Middleware Developer's Guide for Oracle Enterprise Scheduler.

65.7 Implementing a SQL*Plus Scheduled Job

Implementing a SQL*Plus scheduled job involves writing a SQL*Plus script and configuring an environment file for the job.

65.7.1 Standards and Guidelines for Implementing a SQL*Plus Scheduled Job

Run subrequests through Oracle Enterprise Scheduler using the Oracle Enterprise Scheduler APIs.

65.7.2 How to Implement a SQL*Plus Job

Implementing a SQL*Plus job involves writing the SQL*Plus script, storing the script, and configuring a spawned job environment.

To implement a SQL*Plus job:

  1. Write the SQL*Plus job as a SQL*Plus script. Include the FND_JOB.set_sqlplus_status call so as to report the final job status.

    Include the following in the SQL*Plus scheduled job:

    • FND_JOB.set_sqlplus_status: Call to report the final job status. Statuses include:

      • FND_JOB.SUCCESS_V: Success.

      • FND_JOB.WARNING_V: Warning.

      • FND_JOB.FAILURE_V: Failure.

      • FND_JOB.BIZERR_V: Business Error.

    • FND_FILE routines: Can be used for producing log data and output files.

    • FND_JOB API for request values: API calls are initialized for SQL*Plus jobs.

    Note:

    SQL*Plus jobs must not exit.

  2. Store the script under PRODUCT_TOP/$APPLSQL.

  3. Configure the spawned job environment as described in Section 65.5, "Configuring a Spawned Job Environment." Configure the ATGPF_TOP value in the environment.properties file for spawned jobs.

  4. Run and test the job.

65.7.3 How to Use the SQL*Plus Runtime API

Oracle Enterprise Scheduler provides runtime SQL*Plus APIs for implementing SQL*Plus jobs and running the jobs using Oracle Enterprise Scheduler.

The sample SQL*Plus job shown in Example 65-15 provides a signature of a SQL*Plus procedure run as a job. Any necessary arguments are properties filled in by end users and passed to Oracle Enterprise Scheduler when the job is submitted. A view object is defined and associated with the job definition for the job. The view object is then used to display a user interface so that end users may fill in values for each property. Finally, the sample prints to an output file.

65.7.4 What Happens When You Implement a SQL*Plus Job

Example 65-15 shows a sample SQL*Plus scheduled job, which is executed by a wrapper script.

Example 65-15 Implementing a SQL*Plus Scheduled Job

SET VERIFY OFF
SET linesize 132
 
WHENEVER SQLERROR EXIT FAILURE ROLLBACK;
WHENEVER OSERROR EXIT FAILURE ROLLBACK;
REM dbdrv: none
 
/* ----------------------------------------------------------------------*/
 
DECLARE
errbuf        varchar2(240) := NULL;
retval        boolean;
run_mode      varchar2(200)  := '&1';
 
BEGIN
        DBMS_OUTPUT.PUT_LINE(run_mode);
 
        update dual set dummy = 'Q';
 
    FND_FILE.PUT_LINE(FND_FILE.LOG, 'Parameter 1 = ' || nvl(run_mode,'NULL'));
 
/*  print out test message to log file and output file  */
/*  by making direct call to FND_FILE.PUT_LINE          */
/*  from sql script.                                    */
 
    FND_FILE.PUT_LINE(FND_FILE.LOG,   ' ');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   '----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   'Printing a message to the LOG FILE');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   '----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   'SUCCESS!');
    FND_FILE.PUT_LINE(FND_FILE.LOG,   ' ');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'Printing a message to the OUTPUT FILE');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'----------------------------------------------------------------');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'SUCCESS!');
    FND_FILE.PUT_LINE(FND_FILE.OUTPUT,' ');
 
retval :=  FND_JOB.SET_SQLPLUS_STATUS(FND_JOB.SUCCESS_V);
 
END;
/
COMMIT;
-- EXIT; Oracle Fusion Applications  SQL*Plus Jobs must not exit.

65.7.5 What Happens at Runtime: How a SQL*Plus Job Is Implemented

Oracle Enterprise Scheduler calls routines in a wrapper script to initialize the context of the SQL*Plus job, including global values, local values (such as language and territory), and request-specific values such as request ID and request handle. The wrapper script introduces the prologue of commands shown in Example 65-16.

Example 65-16 SQL*Plus wrapper script

SET TERM OFF
SET PAUSE OFF
SET HEADING OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET ESCAPE ON
 
WHENEVER SQLERROR EXIT FAILURE

The Oracle Fusion application calls Oracle Enterprise Scheduler using the provided APIs. Oracle Enterprise Scheduler runs the job, and the final job status—SUCCESS, WARNING, BUSINESS ERROR, or FAILURE—is communicated to Oracle Enterprise Scheduler. The Oracle Fusion web application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

65.8 Implementing a SQL*Loader Scheduled Job

Implementing a SQL*Loader scheduled job involves creating a SQL*Loader control file and configuring a spawned job environment.

65.8.1 How to Implement a SQL*Loader Scheduled Job

Like all executable jobs (C, SQL*Plus, host, and Perl scripts), SQL*Loader jobs require an executable file that is located in the read-only APPLTOP folder. For SQL*Loader jobs, this is the control file. The control file determines which database tables are to be affected by the SQL*Loader command.

It is possible to use a dynamic control file for SQL*Loader subrequest jobs. A SQL*Loader job submitted as a subrequest using a dynamic control file must access the control file from the working directory of the parent job request, rather than the APPLTOP folder.

Before you begin:

Keep in mind that the control file and data file must conform to the following SQL*Loader standards:

  • Place control files in the $APPLBIN directory under the product TOP. (Subrequests using dynamic control files must instead access the working directory of the parent job request.)

  • The control file's name must be the same as the executableName parameter in the job definition.

  • Ensure that the full path of the data file's location is the first submit argument to the job.

  • Add SQL*Loader options such as direct=yes, if needed, as the sqlldr.directoption parameter in the job definition.

  • Set the job log file as the SQL*Loader LOG parameter so it will automatically contain all SQL*Loader log messages.

  • Set the job output file as the SQL*Loader BAD parameter so it will automatically receive any output directed there. Alternatively, you can create two output files for a SQL*Loader job request.

    • <requestid>_bad.txt: This is the output of the bad parameter.

    • <requestid>_discard.txt: This is the output of the discard parameter, however a discard file is not always generated.
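
Putting these conventions together, the command line assembled for a SQL*Loader request can be pictured roughly as in the sketch below. Every variable value here is an illustrative assumption; the framework derives the real values from the job definition and request context:

```shell
# Sketch only: how the SQL*Loader parameters described above fit together.
# All values below are illustrative assumptions.
REQUEST_ID=12345
PRODUCT_TOP=/apps/fnd                                  # product TOP (illustrative)
CONTROL_FILE="$PRODUCT_TOP/bin/fnd_applcp_test.ctl"    # matches executableName
DATA_FILE=/data/in/fnd_applcp_test.dat                 # first submit argument
LOG_FILE="${REQUEST_ID}.log"                           # job log file -> LOG parameter
BAD_FILE="${REQUEST_ID}_bad.txt"                       # job output file -> BAD parameter
DISCARD_FILE="${REQUEST_ID}_discard.txt"               # DISCARD parameter (optional)

# Print, rather than run, the resulting command line.
echo sqlldr control="$CONTROL_FILE" data="$DATA_FILE" \
     log="$LOG_FILE" bad="$BAD_FILE" discard="$DISCARD_FILE" direct=yes
```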

To implement a SQL*Loader scheduled job:

  1. Create a SQL*Loader control file (.ctl).

  2. In the parent job request, make sure you have set the system property workDirectoryRoot to the working directory of the parent job request.

  3. Alternatively, in the case of SQL*Loader subrequests only, configure the job to use a dynamically created control file.

    1. In the job definition for the subrequest, configure the property execPRWD. Set this property to Y to enable the dynamic control file.

    2. In the parent job request, configure the name of the control file using the executableName system property.

  4. Enter the full path of the data file as the first submit argument to the job.

  5. Store the control file under PRODUCT_TOP/$APPLBIN. Skip this step if you are implementing a SQL*Loader subrequest.

  6. Configure the spawned job environment as described in Section 65.5, "Configuring a Spawned Job Environment."

  7. Test the file.

65.8.2 What Happens When You Implement a SQL*Loader Scheduled Job

A sample SQL*Loader scheduled job is shown in Example 65-17.

Example 65-17 Sample SQL*Loader scheduled job

This sample control file uploads data from the data file into the fnd_applcp_test table, into the columns listed here (id1, id2, id3, func, time, action, mesg). See the SQL*Loader documentation for more information about writing control files.

OPTIONS (silent=(header,feedback,discards))
LOAD DATA
INFILE *
INTO TABLE fnd_applcp_test
APPEND
FIELDS TERMINATED BY ','
(id1,
 id2,
 id3,
 func CHAR(30),
 time SYSDATE,
 action CHAR(30),
 mesg CHAR(240))

What Happens When You Implement a SQL*Loader Scheduled Job Subrequest

When the SQL*Loader subrequest completes, the parent request can discover the status of the completed SQL*Loader job request as with any other subrequest. The log and output files are written to the content repository.

What Happens When You Implement a SQL*Loader Scheduled Job Subrequest Using a Dynamic Control File (execPRWD)

When the SQL*Loader subrequest runs, the command line creates the path to the control file by looking for the file named in the system property executableName under the directory indicated in the parent job request as the system property workDirectoryRoot.
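
That resolution can be pictured as the sketch below. The variable names mirror the system properties described above; the exact layout of the parent request's working directory is an assumption:

```shell
# Sketch only: resolving a dynamic control file for an execPRWD subrequest.
# Values are illustrative assumptions.
workDirectoryRoot=/ess/work/parent_request   # from the parent job request
executableName=dynamic_load.ctl              # named in the parent job request

# The subrequest looks for the control file under the parent request's
# working directory rather than under APPLTOP.
CONTROL_FILE="$workDirectoryRoot/$executableName"
echo "$CONTROL_FILE"
```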

65.9 Implementing a Perl Scheduled Job

Implementing a Perl scheduled job involves creating a job definition, enabling the Perl job to connect to a database and configuring a spawned job environment.

65.9.1 How to Implement a Perl Scheduled Job

To implement a Perl scheduled job:

  1. Place the Perl job under the directory PRODUCT_TOP/$APPLBIN.

  2. Create a job definition for the Perl job, setting the executableName parameter to the name of the Perl script. The following functions can be used in the Perl script:

    • writeln(): Write a message to the log file.

    • timestamp(): Write a timestamped message.

  3. To enable the Perl job to connect to a database, use /@$TWO_TASK as a connection string without specifying a user name or password.

  4. Configure the spawned job environment as described in Section 65.5, "Configuring a Spawned Job Environment." The context provides values for the following:

    • reqid: The request ID.

    • outfile: The full path to the output file.

    • logfile: The full path to the log file.

    • username: The name of the user submitting the job request.

    • log: The log object.

  5. Implement an exit code for the job, with a value of 0, 2, or 3 representing success, warning, or business error, respectively. All other values represent an error state.

  6. Test the job.

65.9.2 What Happens When You Implement a Perl Scheduled Job

Example 65-18 shows a sample scheduled Perl job which does the following:

  1. Checks for basic or full mode.

  2. Prints arguments.

  3. Gets the context object of the scheduled job request.

  4. Retrieves contextual information about the scheduled job request, which is stored in the context object.

  5. Writes the request to the log file.

  6. Prints information as required.

Example 65-18 Perl Scheduled Job

# dbdrv: none
 
use strict;
 
(my $VERSION) = q$Revision: 120.1 $ =~ /(\d+(\.\d+)*)/;
 
print_header("Begin Perl testing script (version $VERSION)");
 
# check first argument for BASIC or FULL mode
# if not FULL mode, exit successfully without doing anything
if (! $ARGV[0] || uc($ARGV[0]) ne "FULL") {
    exit(0);
}
 
# -- If argument #2 was passed, use it as a sleep time
if ($ARGV[1]) {
 
    if ($ARGV[1] =~ /\D/) {
      print "** Argument #2 is not a valid number, unable to sleep!\n\n";
        } else {
      printf("Sleeping for %d seconds...\n", $ARGV[1]);
      sleep($ARGV[1]);
        }
}
 
# -- Arguments
print_header("Arguments");
my $i = 1;
foreach (@ARGV) {
  print "Argument #", $i++, ": $_\n";
}
 
# -- Get the request context object
my $context = get_context();
 
# -- Use this object to retrieve context information about this request
 
print_header("Context Information");
printf "Request id \t= %d\n", $context->reqid();
printf "User name \t= %s\n", $context->username();
printf "Logfile \t= %s\n", $context->logfile();
printf "Outfile \t= %s\n", $context->outfile();
 
# -- Writing to the request log file
print_header("Writing to log file");
 
# -- retrieve a Logfile object from the context
my $log = $context->log();
$log->writeln("This message should appear in the request logfile");
$log->timestamp("This is a timestamped message to the request logfile");
 
print "Wrote two messages to the request logfile\n";
 
# -- Print out some useful information
 
print_header("Environment");
foreach (sort keys %ENV) {
    print "$_=$ENV{$_}\n";
}
 
print_header("Perl Information");
print "PROCESS ID = $$\n";
print "REAL USER ID = $<\n";
print "EFF USER ID = $>\n";
print "SCRIPT NAME = $0\n";
print "PERL VERSION = $]\n";
print "OS NAME = $^O\n";
print "EXE NAME = $^X\n";
print "WARNINGS ON = $^W\n";
 
print "\n\@INC path:\n";
foreach  (@INC) {
    print "$_\n";
}
 
print "\nAll loaded perl modules:\n";
foreach (sort keys %INC) {
    print "$_ => $INC{$_}\n";
}
 
# -- Exiting the script
# -- The exit status of the script will be used as the request exit status.
# -- A zero exit status is reported as state of success.
# -- An exit status of 2 is reported as a warning state.
# -- An exit status of 3 is reported as a business error state.
# -- Any other exit status is reported as an error state.
 
print_header("Exiting script with status 0. (Normal completion)");
exit(0);
 
sub print_header {
 
  my $msg = shift;
  print "\n\n", "-" x 40, "\n", $msg, "\n", "-" x 40, "\n";
 
}

65.9.3 Related Links

The following documents provide additional information related to subjects discussed in this section:

For more information about creating a Perl scheduled job, see the chapter "Creating and Using Process Jobs" in Oracle Fusion Middleware Developer's Guide for Oracle Enterprise Scheduler.

65.10 Implementing a C Scheduled Job

The main steps required to implement a C scheduled job are as follows:

  • Creating a job definition

  • Configuring a spawned job environment

  • Implementing and testing a C scheduled job

65.10.1 How to Define Metadata for a C Scheduled Job

Create a job definition as described in Section 65.4, "Creating a Job Definition."

65.10.2 How to Implement a C Scheduled Job

To implement a C scheduled job:

  1. In a separate function or file rather than in main, implement your required business logic.

    Include the following header files:

    • afcp.h: This is the header file for Oracle Enterprise Scheduler.

    • afstd.h and afstr.h: These are Oracle Fusion application header files.

  2. Call afpend in the business logic function.

  3. In the main function, call afprcp, passing to it a pointer to the business logic function.

    The business logic function is called by afprcp, taking the arguments argc, argv, and reqinfo.

  4. Save the executable job file to the $APPLICATIONS_BASE/$APPLBIN directory.

  5. Configure the spawned job environment, as described in Section 65.5, "Configuring a Spawned Job Environment."

    Set both the TOP and APPLBIN variables for your application in the environment.properties file.

65.10.3 Scheduled C Job API

Several C functions are available for use in developing Oracle Fusion applications, while several others are not. Table 65-2 and Table 65-3 list the available and unavailable functions.

Table 65-2 C Functions Available for Developing Oracle Fusion Applications

Function Description

afprcp

Run C program. The recommended API for writing a C program. The main C file should call this function to run the program logic. It initializes the context and calls the program.

int afprcp (uword argc, text **argv, afsqlopt *options, afpfcn *function); 

afpend

End C program. All programs must call this to signal the completion of the program. The program should pass completion status and message if necessary.

Indicate completion status with the following constants:

  • FDP_SUCCESS: Success

  • FDP_WARNING: Warning

  • FDP_ERROR: System Error

  • FDP_BIZERR: Business Error

boolean afpend (text *outcome, dvoid *handle, text *compmesg);

fdpfrs

Find request status. For a given request, retrieve the status. The following are possible request states:

  • ESS_WAIT_STATE

  • ESS_READY_STATE

  • ESS_RUNNING_STATE

  • ESS_COMPLETED_STATE

  • ESS_BLOCKED_STATE

  • ESS_HOLD_STATE

  • ESS_CANCELLING_STATE

  • ESS_EXPIRED_STATE

  • ESS_CANCELLED_STATE

  • ESS_ERROR_STATE

  • ESS_WARNING_STATE

  • ESS_SUCCEEDED_STATE

  • ESS_PAUSED_STATE

  • ESS_PENDING_VALID_STATE

  • ESS_VALID_FAILED_STATE

  • ESS_SCHEDULE_ENDED_STATE

  • ESS_FINISHED_STATE

  • ESS_ERROR_AUTO_RETRY_STATE

  • ESS_MANUAL_RECOVERY_STATE

afreqstate fdpfrs (text *request_id, text *errbuf);

fdpgret

Get the error type of a specific job request ID. The following are possible error types:

  • ESS_UNDEFINED_ERROR_TYPE

  • ESS_SYSTEM_ERROR_TYPE

  • ESS_BUSINESS_ERROR_TYPE

  • ESS_TIMEOUT_ERROR_TYPE

  • ESS_MIXED_NON_BUSINESS_ERROR_TYPE

  • ESS_MIXED_BUSINESS_ERROR_TYPE

afreqstate fdpgret (text *request_id, text *status, text *errbuf);

fdpgrs

Get request status. For a given request, retrieve the current status and completion text.

afreqstate fdpgrs (text *request_id, text *status, text *errbuf);

fdplck

Lock table. Locks the desired table with the specified lock mode and NOWAIT.

fdpscp

Legacy API for concurrent programs. All new concurrent programs should use afprcp.

boolean fdpscp (sword *argc, text **argv[], text args_type, text *errbuf); 

fdpwrt

Routines for creating log/output files and writing to files. These are routines concurrent programs should use for writing to all log and output files.


Table 65-3 C Functions Not Available for Developing Oracle Fusion Applications

Function Description

fdpgoi

Get Oracle data group.

fdpgpn

Get program name.

fdpgrc

Get request count.

fdpimp

Run the import utility.

fdpldr

Run SQL*Loader.

fdpperl

Run Perl concurrent program.

fdprep

Run report.

fdprpt

Run SQL*Rpt program.

fdprsg

Submit concurrent program. Use the afpsub routines instead.

fdpscr

Get resource security group.

fdpsql

Run SQL*Plus concurrent program.

fdpstp

Run stored procedure.


65.10.4 How to Test a C Scheduled Job

When developing a C job, it is possible to test the job by running it from a command line interface.

Running a C job from the command line involves the following main steps:

  • Invoking the job

  • Obtaining a database connection and setting the runtime context by passing special arguments.

  • Passing any program-specific parameters at the command line.

To run a C job from the command line:

  • Use the syntax shown in Example 65-19 to run a C job from the command line for testing purposes.

    Example 65-19 Syntax for Running a C Job from the Command Line

    %program <heavyweight user connection string> <lightweight username> <flag> <job parameters> ...
    

    where

    <heavyweight user connection string> is the username/password@TWO_TASK pair used to connect to the database

    <lightweight username> is the name of the lightweight user submitting the job. This value is used to set the user context in the database connection.

    <flag> must be set to 'L' for lightweight user.

An example illustrating running a C job from the command line is shown in Example 65-20.

Example 65-20 Running a C Job from the Command Line for Testing Purposes

program username/password@my_db MYUSER L <parameter1> <parameter2> .... 

65.10.5 What Happens When You Implement a C Scheduled Job

The sample C job shown in Example 65-21 uses afprcp to initialize and obtain a database connection. It uses both Pro*C and afupi.

Example 65-21 Using the C Runtime API

#ifndef AFSTD
#include <afstd.h>
#endif
 
#ifndef AFSTR
#include <afstr.h>
#endif
 
#ifndef AFCP
#include <afcp.h>
#endif
 
#ifndef SQLCA
#include <sqlca.h>
#endif
 
#ifndef AFUPI
#include <afupi.h>
#endif
 
#ifndef FDS
#include <fds.h>
#endif
 
boolean testupi()
{
  text *sqltext;
  text buffer[ERRLEN];
  text os_user[31];
  text session_user[31];
  text db_name[31];
 
  aucursor  *use_curs;
  word      errcode;
 
  os_user[0] = session_user[0] = db_name[0] = (text)'\0';
 
  sqltext = (text*) "SELECT sys_context('USERENV','DB_NAME',30), sys_context('USERENV','SESSION_USER',30), sys_context('USERENV','OS_USER',30) from dual";
 
  use_curs = NULLCURSOR;
  use_curs = afuopen (NULLHOST, NULLCURSOR, (dvoid *)
                      sqltext,
                      UPISTRING);
  if (use_curs == NULLCURSOR) {goto upierror;}
 
  afudefine(use_curs, 1, AFUSTRING, (dvoid *)db_name, 31);
  afudefine(use_curs, 2, AFUSTRING, (dvoid *)session_user, 31);
  afudefine(use_curs, 3, AFUSTRING, (dvoid *)os_user, 31);
 
  if (!afuexec (use_curs, (uword)1, (uword)1, CSTATHOLD|CSTATEXACT) ||
      (errcode = afuerror (NULLHOST, (text *) NULL, 0)) != ORA_NORMAL) {
    goto upierror;
  }
 
  DISCARD afurelease (use_curs);
 
  DISCARD sprintf((char *)buffer, "%s as %s@%s", os_user,
                  session_user, db_name);
 
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, buffer);
 
 
  return TRUE;
 
 upierror:
  if (use_curs != NULLCURSOR)
    DISCARD afurelease (use_curs);
  DISCARD fdpwrt(AFWRT_LOG | AFWRT_NEWLINE, "Error in testupi");
  return FALSE;
}
 
void testrpc()
{
  text buffer[256];
 
 
  EXEC SQL BEGIN DECLARE SECTION;
 
  VARCHAR os_user[31];
  VARCHAR session_user[31];
  VARCHAR db_name[31];
 
  EXEC SQL END DECLARE SECTION;
 
  buffer[0] = os_user.arr[0] = session_user.arr[0] = db_name.arr[0] = '\0';
 
  EXEC SQL SELECT sys_context('USERENV','DB_NAME',30),
    sys_context('USERENV','SESSION_USER',30),
    sys_context('USERENV','OS_USER',30)
    INTO :db_name, :session_user, :os_user
    from dual;
 
  nullterm(os_user);
  nullterm(session_user);
  nullterm(db_name);
 
  DISCARD sprintf((char *)buffer, "%s as %s@%s", os_user.arr,
                  session_user.arr, db_name.arr);
 
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, buffer);
}
 
sword cptest(argc, argv, reqinfo)
/* ARGSUSED */
sword argc;
text *argv[];
dvoid *reqinfo;
{
  ub2 i;
  text errbuf[ERRLEN+1];
 
 /* Write to the log file */
  DISCARD fdpwrt(AFWRT_LOG | AFWRT_NEWLINE, (text *)"Test Success");
 /* Write to the out file */
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, (text *)"Test Args:");
 /* Loop through argv and write to the out file. */
  for ( i=0; i<argc; i++)
    DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, argv[i]);
  /* Call the Oracle Fusion Applications function afpoget to return the value */
  /* of a profile option called SITENAME and write the results to the error */
  /* buffer. */
  DISCARD afpoget((text *)"SITENAME", errbuf);
  /* Write the value to the output file. */
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, errbuf);
  /* Connect to the database and run a SELECT against the database. Creates a */
  /* string and writes the returned data to the output file. Uses prc APIs. */
  testrpc();
  /* Open a cursor for the SELECT statement, defines variables to collect data */  
  /* upon running statement, and executes SELECT. Creates a string which it */ 
  /* writes to the output file. Uses afupi APIs. */
  testupi();
  /* Writes the string "Test Completed." to the output file. */
  DISCARD fdpwrt(AFWRT_OUT | AFWRT_NEWLINE, (text *)"Test Completed.");
  /* Call afpend to identify the exit status, which in this case is successful. */
  /* Other possible values are FDP_WARNING, FDP_ERROR and FDP_BIZERR. The */
  /* reqinfo originally passed to cptest is passed here. Optionally, additional */
  /* text can be passed here, for example explaining the outcome of the exit */
  /* status. */
  return((sword)afpend(FDP_SUCCESS, reqinfo, (text *)NULL));
}
 
 
int main(/*_ int argc, text *argv[] _*/);
int main(argc, argv)
  int argc;
  text *argv[];
{

    /* Run cptest and return an exit value to Oracle ESS. */
    return(afprcp((uword)argc, (text **)argv,
    (afsqlopt *)NULL, (afpfcn *)cptest));
}

65.10.6 What Happens at Runtime: How a C Scheduled Job Is Implemented

When Oracle Enterprise Scheduler runs a C job, afprcp() runs first to initialize the context and obtain the database connection. The function afprcp() then calls the function containing the program logic. Oracle Enterprise Scheduler runs the job, and the result of the job is returned to Oracle Enterprise Scheduler. The Oracle Fusion application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

Note:

Wallet configuration is required for the client ORACLE_HOME to obtain the database connection. The operating system environment in which the job runs, including the location of the required client ORACLE_HOME, is set in the environment.properties file. The environment.properties file must be configured and placed in the config/fmwconfig directory under the domain.

You can add your own environment variables by creating an env.custom.properties file in the same directory. Variables you define in this file take precedence over those in the environment.properties file.

Similarly, you can set server-specific environment variables with environment.properties and env.custom.properties files in the server config directory.
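To illustrate the override mechanism, a minimal env.custom.properties file might look like the following sketch. The paths and the custom key shown here are hypothetical examples; only ORACLE_HOME is named in this chapter, and any entry you define in this file takes precedence over the identically named entry in environment.properties.

```properties
# env.custom.properties (hypothetical example values).
# Entries here override identically named entries in environment.properties.
ORACLE_HOME=/u01/app/oracle/client
TNS_ADMIN=/u01/app/oracle/client/network/admin
MY_CUSTOM_VAR=custom_value
```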

65.11 Implementing a Host Script Scheduled Job

Arguments submitted for a host script job request are passed to the script at the command line. Host scripts may access the standard environment variables to get REQUESTID, LOG_WORK_DIRECTORY, OUTPUT_WORK_DIRECTORY, and so on. Script output is redirected to the request log file by default.

Use the following steps when implementing a host script job:

  • Complete the steps for configuring a spawned job as described in Section 65.5, "Configuring a Spawned Job Environment."

  • Create one script file each for Unix and Windows platforms. Name each script file the same as the executableName parameter in the job definition. For example, if your executableName is "myscript", the script files would be called myscript.sh (on Unix platforms) and myscript.cmd (on Windows).

  • Put host scripts in the $APPLBIN directory under the product TOP.

  • The script should exit with one of the following exit codes (anything else is considered a SYSTEM ERROR):

    • 0 for SUCCESS

    • 2 for WARNING

    • 3 for BUSINESS ERROR
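The steps above can be sketched as a minimal host script. This is a hypothetical myscript.sh, not a complete implementation: the validation is only a placeholder, and REQUESTID and OUTPUT_WORK_DIRECTORY are the standard environment variables described earlier.

```shell
#!/bin/sh
# Hypothetical host script body (myscript.sh). Arguments submitted for the
# job request arrive as command-line arguments; echoed text is redirected
# to the request log file by default.
run_job() {
    echo "Processing request ${REQUESTID:-unknown}"
    echo "Output directory: ${OUTPUT_WORK_DIRECTORY:-.}"
    if [ "$#" -eq 0 ]; then
        # No job arguments supplied: report a BUSINESS ERROR.
        return 3
    fi
    # Placeholder for the real work on "$@".
    return 0    # SUCCESS (2 would indicate WARNING; anything else is a SYSTEM ERROR)
}

# In the real script this would be: run_job "$@"; exit $?
run_job "demo-argument"
```

A Windows myscript.cmd would follow the same exit-code convention.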

65.12 Implementing a Java Scheduled Job

For more information about implementing Java scheduled jobs, see the chapter "Using Oracle JDeveloper to Generate an Oracle Enterprise Scheduler Application" in Oracle Fusion Middleware Developer's Guide for Oracle Enterprise Scheduler.

65.12.1 How to Define Metadata for a Scheduled Java Job

Create a job definition as described in Section 65.4, "Creating a Job Definition."

65.12.2 How to Use the Java Runtime API

For information about the Java runtime API, see the Oracle Fusion Applications Java API Reference for Oracle Enterprise Scheduler Service.

You can access the Oracle Fusion Middleware Extensions for Applications Message and Profile objects directly using their APIs, which handle service access themselves.

65.12.3 How to Cancel a Scheduled Java Job

You can cancel a scheduled Java job by implementing the Cancellable interface.

The Cancellable implementation in Example 65-22 checks as logic progresses to see if the job has been canceled. If it has, the code cleans up after itself before exiting.

Example 65-22 Handling a Job Cancellation Request

import oracle.as.scheduler.Cancellable;
import oracle.as.scheduler.Executable;
import oracle.as.scheduler.ExecutionCancelledException;
import oracle.as.scheduler.ExecutionErrorException;
import oracle.as.scheduler.ExecutionPausedException;
import oracle.as.scheduler.ExecutionWarningException;
import oracle.as.scheduler.RequestExecutionContext;
import oracle.as.scheduler.RequestParameters;

public class MyExecutable
    implements Executable, Cancellable
{
    private volatile boolean m_cancel = false;

    public void execute( RequestExecutionContext reqCtx,
                         RequestParameters reqParams ) 
        throws ExecutionErrorException, ExecutionWarningException,
               ExecutionPausedException, ExecutionCancelledException
    {
        // Do some work and check if this request has been canceled.
        // ... work ...
        checkCancel(reqCtx);

        // Do more work and check if this request has been canceled.
        // ... work ...
        checkCancel(reqCtx);
        // Finish work.
        // ... work ...
    }

    // Set flag that the app logic should check periodically to
    // determine if this request has been canceled.
    public void cancel()
    {
        m_cancel = true;
    }

    // Check if request has been canceled. If not, do nothing.
    // Otherwise, do any cleanup work that may be needed for
    // this request and end by throwing an ExecutionCancelledException.
    private void checkCancel(RequestExecutionContext reqCtx )
        throws ExecutionCancelledException
    {
        if (m_cancel)
        {
            // Do any cleanup work that may be needed
            // before ending this executable.
            // ... cleanup work ...
            String msg = "Request " + reqCtx.getRequestId() +
                         " was cancelled.";
            throw new ExecutionCancelledException(msg);
        } 
    }
}

65.12.4 What Happens at Runtime: How a Java Scheduled Job Is Implemented

Oracle Enterprise Scheduler initializes the context of the job. The Oracle Fusion application calls Oracle Enterprise Scheduler using the provided APIs. Oracle Enterprise Scheduler runs the job, and a result of success or failure is returned to Oracle Enterprise Scheduler. The Oracle Fusion application can retrieve the result from Oracle Enterprise Scheduler and display it in the user interface.

65.13 Elevating Access Privileges for a Scheduled Job

Oracle Enterprise Scheduler executes jobs in the user context of the job submitter at the scheduled time. Some scheduled jobs require access privileges that are different from those of the submitting user. However, information regarding the submitter of the scheduled job must be retrievable for auditing purposes.

Oracle Enterprise Scheduler prohibits using the runAs property to run a job in the context of a user other than the submitting user; doing so is considered a security breach. Instead, using an application identity enables running a job with access privileges different from those allotted to the submitting user.

Application identity is a SOA and Java Platform Security (JPS) concept that addresses the requirement for escalated privileges in completing an action. The application installer creates an application identity in Oracle Identity Management Repository.

For more information, see the following chapters:

65.13.1 How to Elevate Access Privileges for a Scheduled Job

The Oracle Enterprise Scheduler job system property SYS_runasApplicationID enables elevating access privileges for completing a scheduled job.

To elevate access privileges for a scheduled job:

  1. Create a job definition, as described in Section 65.4, "Creating a Job Definition."

  2. Under the Parameters section, add a parameter called SYS_runasApplicationID.

  3. In the text field for the SYS_runasApplicationID property, enter the application ID under which you want to run the job, as shown in Figure 65-1.

    The input string must be a valid ApplicationID value that exists when the job executes.

    Figure 65-1 Defining the runAs User for the Job


    You can retrieve the executing user by running either of the methods shown in Example 65-23 and Example 65-24.

    Example 65-23 Retrieving the Executing User with getRunAsUser()

    requestDetail.getRunAsUser()
    

    Example 65-24 Retrieving the Executing User with getRequestParameter()

    String sysPropUserName =
             (String) runtime.getRequestParameter(h, reqid, SystemProperty.USER_NAME);
    

Given a request ID, you can retrieve the submitting and executing users of a job request.

To retrieve the submitting and executing users of a job request in Oracle Enterprise Scheduler RuntimeService Enterprise JavaBeans object:

  • Example 65-25 shows a code sample for retrieving the submitting and executing users of a job request using the Oracle Enterprise Scheduler RuntimeService Enterprise JavaBeans object.

    Example 65-25 Retrieving the Submitting and Executing Users of a Job Request Using the RuntimeService Enterprise JavaBeans Object

    // Lookup runtimeService
    
    RequestDetail requestDetail = runtimeService.getRequestDetail(h, reqid);  
    String runAsUser = requestDetail.getRunAsUser();    
    String submitter = requestDetail.getSubmitter();
    

To retrieve the submitting and executing users of a job request from within an Oracle Fusion application:

  • Example 65-26 shows a code sample for retrieving the submitting and executing users of a job request from within an Oracle Fusion application.

    Example 65-26 Retrieving the Submitting and Executing Users of a Job Request from an Oracle Fusion application

    import oracle.apps.fnd.applcore.common.ApplSessionUtil;
    // The elevated privilege user name.
    ApplSessionUtil.getUserName()
    // The submitting user.
    ApplSessionUtil.getHistoryOverrideUserName()
    

65.13.2 How Access Privileges Are Elevated for a Scheduled Job

When a job request schedule executes, Oracle Enterprise Scheduler:

  1. Validates the submitter's execution privileges on the job metadata.

  2. Retrieves the application identity information from the job metadata. If the job metadata does not specify an application identity for the job, Oracle Enterprise Scheduler executes the job in the context of the job submitter.

    • Java job: An FND session is established as the user with elevated privileges.

      The executing user is taken from the current subject as viewed from the job logic.

      Note:

      Oracle Enterprise Scheduler does not directly support invoking a web service or composite. If your job logic invokes a web service or composite, you must write the client code logic in your job, establish a connection and propagate the job submitter information as data for auditing purposes. For an asynchronous web service call, the job must wait for a response.

    • Spawned C job: An application user session is established as the executing user. The submitter information is an attribute of the application user session.

      The spawned job executes as the operating system user who starts Oracle WebLogic Server.

    • PL/SQL job: An FND session is established as the executing user. The submitter information is an attribute of the FND session.

      The job runs in the context of the FND session in the RDBMS job scheduler.

  3. Executes the job logic.

65.13.3 What Happens When Access Privileges Are Elevated for a Scheduled Job

Oracle Enterprise Scheduler validates the user's execution privileges on the job metadata. If validation succeeds, the user context is captured and stored in the Oracle Enterprise Scheduler database as the submitting user, and the request is placed in the queue.

65.14 Creating an Oracle ADF User Interface for Submitting Job Requests

When implemented as part of an Oracle Fusion application, the Oracle ADF user interface enables end users to submit job requests.

65.14.1 How to Create an Oracle ADF User Interface for Submitting Job Requests

The Oracle ADF UI enables end users to submit job requests. End users can enter complex data types as arguments for descriptive and key flexfields. The Parameters tab in the Oracle ADF UI allows end users to enter parameters to be used when submitting the job request.

Flexfields display in a separate task flow region. This region is a child task flow of the parent task flow displayed in the Parameters tab.

Note:

Define customization layers and authorize runtime customizations to the adf-config.xml file as described in Chapter 64, "Creating Customizable Applications."

To create a user interface for submitting job requests:

  1. Create a new Oracle Fusion web application by clicking New Application in the Application Navigator and selecting Fusion Web Application (ADF) from the Application Templates drop-down list.

    Model and ViewController projects are created within the application.

  2. Right-click the Model project and select Project Properties > Libraries and Classpath > Add Library.

  3. From the list, select the following libraries, as shown in Figure 65-2.

    • Applications Core

    • Applications Concurrent Processing

    • Enterprise Scheduler Extensions

    Figure 65-2 Adding the Libraries to the Model Project


    Click OK to close the window and add the libraries.

  4. Right-click the View Controller project and select Project Properties > Libraries and Classpath > Add Library.

    Add the library Applications Core (ViewController), as shown in Figure 65-3.

    Figure 65-3 Adding the Library to the View Controller Project

  5. In the Project Properties dialog, in the left-hand pane, click Business Components.

  6. The Initialize Business Components Project window displays. Click the Edit icon to create a database connection for the project.

    Fill in the database connection details as follows:

    • Connection Exists in: Application Resources

    • Connection Type: Oracle (JDBC)

    • User name/Password: Fill in the relevant user name and password for the database.

    • Driver: thin

    • Host Name: Enter the host name of the database server.

    • JDBC port: Enter the port number of the database.

    • SID: The unique Oracle system ID for the database.

    Click OK.

  7. In the file weblogic.xml, import the oracle.applcp.view library.

  8. In the file weblogic-application.xml, import the following libraries:

    • oracle.applcore.attachments (for ESS-UCM)

    • oracle.applcp.model

    • oracle.applcp.runtime

    • oracle.ess

    • oracle.sdp.client (for notification)

    • oracle.ucm.ridc.app-lib (for ESS-UCM)

    • oracle.webcenter.framework (for ESS-UCM)

    • oracle.xdo.runtime

    • oracle.xdo.service.client

    • oracle.xdo.webapp

    The libraries oracle.applcp.model and oracle.applcp.view are deployed as part of the installation while running the config.sh wizard.

  9. Create a new Java Server Pages XML (JSPX) page for the ViewController project by right-clicking ViewController and selecting New > Web Tier > JSF > JSF JSP Page.

  10. Create a new File System connection. In the Resource Palette, right-click File System, select New File System Connection, and do the following:

    1. Provide a connection name and directory path for the Oracle ADF Library files (<jdev_install>/jdev/oaext/adflib).

    2. Click Test Connection and click OK after the connection succeeds.

  11. Expand the contents of the SRS-View.jar file to display the list of available task flows that can be used in the application, as shown in Figure 65-4.

    Figure 65-4 Displaying the List of Available Task Flows

  12. To include the job request submission page in the application, select the ScheduleRequest-taskflow item from the Resource Palette and drop it onto the Java Server Faces (JSF) page in the area where you want to create a call to the task flow. Create the task flow call as a link or button.

    For example, to invoke the job request submission page from within a dialog in the application, do the following:

    1. From the Component Palette, drag and drop a Link onto the form in the JSPX page.

    2. In the Property Inspector, configure the behavior of the link to the value showpopup.

    3. From the Component Palette, drag and drop a Popup component with a dialog component onto the form.

    4. To enable submitting a job request, drag and drop the ScheduleRequest-taskflow item onto the dialog component as a dynamic region.

      To enable submitting a job set request, drag and drop the ScheduleJobset-taskflow item onto the dialog component.

      Figure 65-5 displays the task flows in the Resource Palette.

      Figure 65-5 Including the Job Request Submission Page in the Application

    5. From the context menu, select Create a Dynamic Region.

  13. When prompted, add the required library to the ViewController project by clicking Add Library. Save the JSF page.

  14. Edit the task flow binding. Define the following parameters for the task flow, as shown in Figure 65-6.

    1. jobdefinitionname: Enter the name of the job definition to be submitted. This is not the name that displays. This is the job definition defined in Section 65.4, "Creating a Job Definition." Required.

    2. jobdefinitionpackagename: Enter the package name under which the job definition metadata is stored. This should be the namespace path appended to the package name, for example /oracle/ess/Scheduler. The namespace path typically begins with a forward slash ("/"), but should have no forward slash at the end. Required.

    3. centralui: When this parameter is set to true, the task flow UI does not display the header section containing the name, description, and basic Oracle BI Publisher actions (such as email, print, and notify). This parameter must be a Boolean value. Optional.

    4. pageTitle: When passed, the task flow renders this String value as the page title. The pageTitle value is currently truncated at 30 characters. Optional.

    5. requireRootOutcome: If true is passed as the value, then the task flow will generate a value of root-outcome when the user clicks the Submit or Cancel buttons. By default, the task flow generates a value of parent-outcome. Optional.

    6. requestparametersmap: Enter the name of the map object variable that contains the parameters required for the job request submission. If this parameter is filled in, the Parameters tab in the request scheduling submission page will not prompt end users to enter parameters for executing the request. The map can be passed to the task flow as a parameter. Typically, this parameter takes the data type java.util.Map in which keys are parameter names and values are parameter values. For example, if you will be using a paramsMap object in the pageFlowScope context, you might enter a requestparametersmap value of #{pageFlowScope.paramsMap}. Optional.

      In the page that holds the task flow region in the job request submission page, set the following property for the popup window that opens the job request submission page window: contentDelivery = immediate.

      In the page definition file of the page that contains the task flow region, set the following property for the task flow: Pagedef > executables > taskflow > Refresh=IfNeeded.

  15. If you are using a map to pass parameters to the task flow (as shown in Figure 65-6, the map is called requestparametersmap), create a new task flow parameter, such as the paramsMap object in the pageFlowScope element of a page flow.

    Figure 65-6 Defining Parameters for the Task Flow


    These values can be accessed in the job executable, for example from the RequestParameters object in the case of a Java job. Example 65-27 illustrates passing the values stored in the RequestParameters object to a Java job. This code is used in the class that implements the oracle.as.scheduler.Executable interface.

    Example 65-27 Passing Values in a Map Object to a Java Job

    public void execute(RequestExecutionContext ctx,RequestParameters props) 
        throws ExecutionErrorException, ExecutionWarningException, 
            ExecutionCancelledException,ExecutionPausedException
    { 
        String pageTitle = (String) props.getValue("pageTitle");
        // Retrieve other parameters.
        // ... 
    }
    

    Note:

    When using a requestparametersmap object, set the following properties for the popup window within which the task flow is started.

    • Set Content Delivery to Immediate.

    • In the page definition XML file for the page that contains the region, select PageDef > Executables > taskflow > set Refresh = ifNeeded.

  16. If the job is defined with properties that must be filled in by end users, the user interface allows end users to fill in these properties before submitting the job request. For example, if the job requires a start and end time, end users can fill in the desired start and end times in the space provided by the user interface.

    The properties that are filled in by end users are associated with a view object, which in turn is associated with the job definition itself. When the job runs, Oracle Enterprise Scheduler accesses the view object to retrieve the values of the properties.

    If using a view object to pass parameters to the job definition, do the following:

    1. Create a view object called TestVO using a query such as the one shown in Example 65-28.

      Example 65-28 Creating a View Object Using a Query

      select null as Attribute1, null as Attribute2 from dual
      
    2. Specify control UI hints, for example set the display label for Attribute1 to Run Mode and for Attribute2 to Duration.

      The parameters tab in the job request submission UI renders with the input fields Run Mode and Duration.

    3. To render the Parameters tab in the job request submission UI, add the DynamicComponents 1.0 library as follows. Right-click ViewController and select Project Properties > JSP Tag Libraries > Add. In the Choose Tag Libraries window, select the library DynamicComponents 1.0 and click OK. Figure 65-7 displays the Choose Tag Libraries window.

      Figure 65-7 Adding the Library DynamicComponents 1.0

  17. In the JSF application you created, create another project called Scheduler. Select File > New, and choose General > Empty Project. This project will be used to create Oracle Enterprise Scheduler metadata and job implementations.

  18. In the Scheduler project, add the Oracle Enterprise Scheduler Extensions library to the class path. Right-click the Scheduler project and select Project Properties > Libraries and Classpath > Add Library > Oracle Enterprise Scheduler Extensions.

  19. Deploy the libraries oracle.xdo.runtime and oracle.xdo.webapp to the Oracle Enterprise Scheduler UI managed server. These libraries are located in the directory $MW_HOME/jdeveloper/xdo, where MW_HOME is the Oracle Fusion Middleware home directory.

  20. Deploy the application.

    Note:

    When testing the UI in a web browser, you may need to add a security exception to your browser so that the UI renders correctly. Follow the directions in the online help for your web browser.

65.14.2 How to Add a Custom Task Flow to an Oracle ADF User Interface for Submitting Job Requests

You can add a custom task flow to an Oracle ADF user interface used to submit job requests at run time.

To add a custom task flow to an Oracle ADF user interface for submitting job requests:

  1. Create a task flow and bind it to your Oracle ADF user interface for submitting a job request created in Section 65.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests."

  2. Create an ADF Business Components view object for each UI field. Name the view objects that are bound to UI fields ParameterVO1, ParameterVO2, and so on.

    Name the attributes of the view objects as follows: ATTRIBUTE1, ATTRIBUTE2, and so on.

  3. Include the view objects in the relevant application module. Even if the view object names differ, the view object instance names must be ParameterVO1, ParameterVO2, ParameterVO3, and so on.

  4. In the job definition, define the properties CustomDataControl and ParameterTaskflow. For more information, see Section 65.4.1, "How to Create a Job Definition."

  5. Optionally, include the method preSubmit() in the application module. Oracle Enterprise Scheduler invokes this method before retrieving the parameter values for the submission request.

    Your implementation of the preSubmit() method (which returns a Boolean value) can include validation code for the custom task flow. If this validation fails while submitting the request, your code can throw an exception with a proper internationalized error message; the message is displayed to the user and the submission does not go through.
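The shape of such a preSubmit() validation can be sketched in plain Java. The class name, field, and exception type below are hypothetical stand-ins; a real implementation lives in your application module class and throws the appropriate ADF exception carrying an internationalized message.

```java
// Hypothetical stand-in for validation logic that, in a real project,
// belongs in your application module implementation class.
class SchedulerParamsValidator {

    // Simulated UI parameter captured by a parameter view object (hypothetical).
    private final String runMode;

    SchedulerParamsValidator(String runMode) {
        this.runMode = runMode;
    }

    // Oracle Enterprise Scheduler invokes preSubmit() before retrieving the
    // parameter values for the submission request. Returning true allows the
    // submission to proceed.
    boolean preSubmit() {
        if (runMode == null || runMode.trim().isEmpty()) {
            // In the real application module, throw an exception with an
            // internationalized message; the UI displays it and the
            // submission does not go through.
            throw new IllegalArgumentException("Run Mode must be provided");
        }
        return true;
    }
}
```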

65.14.3 How to Handle Schedule Request Submission and Jobset UI Submit and Cancel Button Events

The schedule request submission and jobset UI taskflows include Submit and Cancel buttons. Typically, clicking the Submit button submits the job request, closes the transaction, and returns the user to the page that launched the schedule request submission or jobset UI taskflow (the container taskflow). Clicking the Cancel button resets the internal data structures in the schedule request submission or jobset UI and returns the user to the page that launched the schedule request submission or jobset UI taskflow.

Clicking the Submit or Cancel buttons notifies the containing bounded or unbounded parent taskflow of the result of the Submit or Cancel event, and the container taskflow decides what to do next. The containing taskflow may be a popup or an inline page. The container taskflow handles any navigation required after clicking either button.

The schedule request submission and jobset UI taskflows define two root outcomes and two parent outcomes, which the parent taskflow can use to handle navigational requirements.

The following sample shows the schedule request submission UI parent outcomes.

Example 65-29 Schedule request submission UI parent outcomes

    <parent-action id="rootSubmitActionId">
      <description id="__1">Parent action when the submit button is clicked for root
                            parent</description>
      <root-outcome id="submitOutcome">onSRSSubmitted</root-outcome>
    </parent-action>
 
    <parent-action id="rootCancelActionId">
      <description id="__2">Parent action when the cancel button is clicked for root
                            parent</description>
      <root-outcome id="cancelOutcome">onSRSCanceled</root-outcome>
    </parent-action>
 
    <parent-action id="parentSubmitActionId">
      <description id="__3">Parent action when the submit button is clicked for immediate
                            parent</description>
      <parent-outcome id="parentSubmitOutcome">onSRSSubmitted</parent-outcome>
    </parent-action>
 
    <parent-action id="parentCancelActionId">
      <description id="__4">Parent action when the cancel button is clicked for immediate
                            parent</description>
      <parent-outcome id="parentCancelOutcome">onSRSCanceled</parent-outcome>
    </parent-action>

The following sample shows the jobset UI parent outcomes.

Example 65-30 Jobset UI parent outcomes

    <parent-action id="rootSubmitActionId">
      <description id="__1">Parent action when the submit button is clicked for the
                            root</description>
      <root-outcome id="submitOutcomeForRoot">onJobsetRequestSubmitted</root-outcome>
    </parent-action>
 
    <parent-action id="rootCancelActionId">
      <description id="__2">Parent action when the cancel button is clicked for the
                            root</description>
      <root-outcome id="cancelOutcomeForRoot">onJobsetRequestCanceled</root-outcome>
    </parent-action>
 
    <parent-action id="parentSubmitActionId">
      <description id="__3">Parent action when the submit button is clicked for immediate
                            parent</description>
      <parent-outcome id="parentSubmitOutcome">onJobsetRequestSubmitted</parent-outcome>
    </parent-action>
 
    <parent-action id="parentCancelActionId">
      <description id="__4">Parent action when the cancel button is clicked for immediate
                            parent</description>
      <parent-outcome id="parentCancelOutcome">onJobsetRequestCanceled</parent-outcome>
    </parent-action>

As shown in the preceding samples, the containing taskflow defines the root/parent outcomes that occur when the user clicks the Submit or Cancel button, respectively. These outcomes are onSRSSubmitted and onSRSCanceled, in the case of the schedule request submission UI taskflow, and onJobsetRequestSubmitted and onJobsetRequestCanceled in the case of the jobset UI taskflow.

The consuming (parent) taskflow uses these root/parent outcomes in its view definition file (*taskflow.xml or adfc-config.xml) and defines control flow rules accordingly.

Note:

For the parent actions to work, the schedule request submission or jobset UI taskflow must be dropped as a region on a page.

65.14.3.1 What You Should Know About Handling Schedule Request Submission and Jobset UI Submit and Cancel Button Events

In some cases, you may want to place the taskflow in a region component within an explicitly defined popup window. In such cases, the taskflow must be refreshed after the popup window closes.

To display the popup window:

  1. Drop the taskflow onto the popup window as a dynamic region. Dynamic regions have the advantage of not being tied to a single taskflow ID.

  2. In your page definition file, map the taskflow's taskflowId attribute to a method in your managed bean. This bean should typically be in a view scope.

    The managed bean method should return the taskflow ID as a string. By default, the ID should be an empty string, but not null. You can define a string member variable to store the current taskflow ID and return it from the method.

  3. Define a popupFetchListener in your JSFF file, and map it to a method in the same managed bean. In this popupFetch method, you can swap the taskflow ID with the schedule request submission taskflow ID, so that the method returning the taskflow ID now returns the schedule request submission taskflow ID.

    When the popup window opens, it displays the schedule request submission UI taskflow.

To close the popup window:

  1. In the view definition file of your taskflow, define the control flow rules to handle the parent actions to be returned by the schedule request submission UI taskflow upon clicking the Submit and Cancel buttons.

  2. Define a method call activity in the same taskflow definition file, and call that method from the control flow rule. This method call activity can call a method in the same view scope managed bean.

  3. The managed bean method should swap the schedule request submission UI taskflow ID with an empty taskflow ID, an empty string. The taskflow ID in the page definition file (and the dynamic region) will now reference an empty taskflow.

    The taskflow is now finalized and eligible for garbage collection, and the schedule request submission taskflow is re-initialized. The next time the schedule request submission UI taskflow is called, it executes a new instance of the taskflow.
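The ID-swapping pattern described in the steps above can be sketched as a plain Java class. This is an illustrative sketch only: the class and constant names (DynamicRegionBean, SRS_TASKFLOW_ID) and the taskflow document path are assumptions, and a real bean would live in ADF view scope and be bound to the dynamic region in the page definition.

```java
// Sketch (hypothetical names) of the view-scope bean pattern described above.
public class DynamicRegionBean {

    // Assumed document path of the schedule request submission taskflow.
    private static final String SRS_TASKFLOW_ID =
        "/WEB-INF/oracle/apps/ess/srs/ScheduleRequestTaskflow.xml#ScheduleRequestTaskflow";

    // Empty string by default -- never null -- so the region renders nothing.
    private String taskflowId = "";

    // Bound to the dynamic region's taskflowId attribute in the page definition.
    public String getTaskflowId() {
        return taskflowId;
    }

    // Called from the popupFetchListener: swap in the SRS taskflow ID.
    public void onPopupFetch() {
        taskflowId = SRS_TASKFLOW_ID;
    }

    // Called from the method call activity after onSRSSubmitted/onSRSCanceled:
    // swap back to the empty ID so the taskflow instance is finalized.
    public void closeSRSPopup() {
        taskflowId = "";
    }
}
```

The key design point is that the region always has a non-null ID; an empty string renders nothing, and swapping back to it forces a fresh taskflow instance on the next fetch.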

65.14.3.2 Handling Schedule Request Submission and Jobset UI Submit and Cancel Buttons in an ADF Popup Window

Following is an example of how the parent taskflow can use these parent/root outcomes to define the navigational rules for closing or navigating away from the schedule request submission UI taskflow. The steps are the same for jobset UI taskflows, but the outcomes have different names.

In this example, the schedule request submission UI taskflow is placed in an ADF popup window. When the user clicks the Submit or Cancel button, the popup window closes, returning the user to the page that launched the popup.

Note:

This is only an example of how to use the root/parent outcomes. The actual implementation may vary according to your use case.

You can use either the parent or the root outcome at any given time, depending on the use case. By default, the schedule request submission or jobset UI taskflows always pass a parent outcome. If the consuming application needs a root outcome, then you must pass the requireRootOutcome parameter with a value of true to the schedule request submission or jobset UI taskflow.

To handle Submit and Cancel buttons:

  1. Define the control flow rule in your view definition file.

    <control-flow-rule>
      <from-activity-id>*</from-activity-id>
      <control-flow-case id="__101">
        <from-outcome id="__121">onSRSCanceled</from-outcome>
        <to-activity-id id="__111">cancelSRS</to-activity-id>
      </control-flow-case>
      <control-flow-case id="__102">
        <from-outcome id="__122">onSRSSubmitted</from-outcome>
        <to-activity-id id="__112">cancelSRS</to-activity-id>
      </control-flow-case>
    </control-flow-rule>
    
  2. Define the taskflow activity to handle the navigation. This activity is invoked by the control flow rule. A method call activity has been defined in this example.

    <method-call id="cancelSRS">
      <method>#{myManagedBean.closeSRSPopup}</method>
      <outcome>
        <fixed-outcome>srsPopupClosed</fixed-outcome>
      </outcome>
    </method-call>
    
  3. Define the corresponding method in the managed bean of the parent taskflow.

    public void closeSRSPopup() {
      RichPopup srspopup = getPopup();
      if (srspopup != null) {
        srspopup.cancel(); // closes the popup
      }
    }
    
  4. To use a root outcome, set the requireRootOutcome taskflow parameter to true.

  5. Deploy the parent taskflow and launch the page to test out the navigation.

65.14.3.3 Schedule Request Submission and Jobset UI Submit and Cancel Buttons in a UIShell Popup Window

If you are launching the schedule request submission taskflow in a UIShell popup window, then use the closePopup() method described in Section 18.4.1.3, "Implementing OK and Cancel Buttons in a Popup."

If you are launching the schedule request submission UI taskflow in the main task area of the UIShell, then you need to follow the navigation options described in Section 14.6.2, "How to Implement End User Preferences."

Note:

The UIShell APIs closeMainTask() and openMainTask() can only be invoked from within a bounded taskflow. You must wrap the schedule request submission UI taskflow in a dummy container taskflow, and define the control flow rules to consume the parent actions in the view definition file of the container taskflow.

65.14.4 How to Enable Support for Context-Sensitive Parameters in an Oracle ADF User Interface for Submitting Job Requests

After integrating your application with the Oracle ADF UI for submitting job requests, enable context-sensitive parameter support in the UI.

The request submission UI renders the context-sensitive parameters first so that the end user can specify their values. The context is then set in the database based on these values, and the remaining parameters are rendered according to the context set at the database layer. When the job runs, the actual business logic executes after the context has been set inside the database from the context-sensitive parameter values.
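This two-phase flow can be sketched in plain Java. The sketch is illustrative only: the class name is an assumption, and in the real UI the phases are driven through the contextParametersVO view object and the PL/SQL procedure named by setContextAPI.

```java
// Illustrative two-phase model of context-sensitive request submission:
// context parameters are collected first, the context is set, and only
// then are the remaining parameters rendered.
public class ContextSensitiveSubmission {

    private boolean contextSet = false;

    // Phase 1: called with the values of the contextParametersVO attributes.
    // In the real flow this invokes the PL/SQL procedure named by setContextAPI.
    public void setContext(String... contextValues) {
        contextSet = true;
    }

    // Phase 2: the remaining parametersVO parameters render only after
    // the context has been set at the database layer.
    public boolean canRenderRemainingParameters() {
        return contextSet;
    }
}
```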

Follow this procedure to enable context-sensitive parameter support in the UI.

To enable support for context sensitive parameters in an Oracle ADF user interface for submitting job requests:

  1. Follow the instructions described in Section 65.14.1.

  2. Create a native ADF Business Components view object with attributes CTXATTRIBUTE1, CTXATTRIBUTE2, and so on, with a maximum of 100 attributes.

    For example, create a view object with the query Select null as CTXATTRIBUTE1, null as CTXATTRIBUTE2, null as CTXATTRIBUTE3 from dual. Include required UI hints such as display label, tooltip, and so on.

  3. Create a PL/SQL procedure or function to set the context.

  4. Specify the parameters shown in Example 65-31 and Example 65-32 in the job definition metadata.

    • contextParametersVO: Enter the fully qualified name of the view object that holds the context sensitive parameters.

      Example 65-31 contextParametersVO

      <parameter name="contextParametersVO" data-type="string">oracle.apps.mypkg.TestCtxVO</parameter>
      
    • setContextAPI: PL/SQL API to set the context, along with the package name. The myPkg1.mySetCtx procedure receives arguments based on the attributes in the contextParametersVO.

      Example 65-32 setContextAPI

      <parameter name="setContextAPI" data-type="string">myPkg1.mySetCtx</parameter>
      

65.14.5 How to Save and Schedule a Job Request Using an Oracle ADF UI

Saving and scheduling a job request using an Oracle ADF UI involves using the Oracle Enterprise Scheduler Extensions library with a JSF application that includes a task flow in which a job is scheduled and saved.

To schedule a job request using an Oracle ADF UI:

  1. Follow the instructions in Section 65.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests" up to step 9.

    Note:

    If the custom parameters task flow has no transactions of its own, it must set the data-control-scope to isolated. This ensures that multiple parametersVO properties using the same application module get their own independent application module instances.

  2. Drag and drop the SaveSchedule-taskflow object onto the dialog. No input parameters are required.

  3. When prompted, add the required library to the ViewController project by clicking Add Library. Save the JSF page.

  4. In the JSF application you created, create another project called Scheduler. Select File > New, and choose General > Empty Project. This project will be used to create Oracle Enterprise Scheduler metadata and job implementations.

  5. In the Scheduler project, add the Oracle Enterprise Scheduler Extensions library to the class path. Right-click the Scheduler project and select Project Properties > Libraries and Classpath > Add Library > Oracle Enterprise Scheduler Extensions.

  6. Deploy the application.

  7. Start the application using the following URL:

    http://<machine>:<http-port>/<context-root>/faces/<page>
    
  8. Enter a schedule name, description and package name with the namespace appended, as shown in Figure 65-8.

    Figure 65-8 Saving a Job Submission Schedule

    The image is described in the surrounding text.
  9. Save the schedule.

    A message displays indicating the metadata object ID of the saved schedule. This ID can be used for further job or job set request submissions.

65.14.6 How to Submit a Job Using a Saved Schedule in an Oracle ADF UI

Submitting a saved job request schedule using an Oracle ADF UI involves using the Oracle Enterprise Scheduler Extensions library with a JSF application that includes a task flow in which a saved job schedule can be submitted.

To submit a job using a saved schedule in an Oracle ADF UI:

  1. Follow the instructions in Section 65.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests".

  2. Deploy the application. Open the page using the following URL:

    http://<machine>:<http-port>/<context-root>/faces/<page>
    
  3. Click the Schedule tab. In the Run option field, select the Use a Schedule radio button.

  4. From the Frequency drop-down list, select Use a Saved Schedule.

  5. Enter the namespace and package names for the schedule along with the name of the schedule.

  6. To view the list of scheduled jobs, click Get Details. Click Submit to submit the saved job request.

65.14.7 How to Notify Users of the Status of Executed Jobs

The Oracle ADF user interface for submitting job requests provides the ability to notify users of the status of submitted jobs (via the Notification tab of the user interface). For example, users can request a notification to be sent to the originator of the job request.

A notification includes two components: the user to whom the notification is to be delivered, and the completion status of the job that triggers the notification. For example, notifications can be sent upon the successful completion of a job, or when a job completes in an error or warning state.
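As a sketch, a notification rule combining these two components can be modeled in plain Java. The class and enum names are illustrative assumptions, not the Oracle Enterprise Scheduler API.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative model of a notification rule: a recipient plus the set of
// completion statuses that trigger delivery.
public class NotificationRule {

    public enum CompletionStatus { SUCCESS, WARNING, ERROR }

    private final String recipient;
    private final Set<CompletionStatus> triggers;

    public NotificationRule(String recipient, Set<CompletionStatus> triggers) {
        this.recipient = recipient;
        this.triggers = EnumSet.copyOf(triggers);
    }

    // A notification is sent only when the job's final status is a trigger.
    public boolean shouldNotify(CompletionStatus finalStatus) {
        return triggers.contains(finalStatus);
    }

    public String getRecipient() {
        return recipient;
    }
}
```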

65.14.8 What Happens When You Create an Oracle ADF User Interface for Submitting Job Requests

The Oracle ADF interface is integrated with the Oracle Fusion application, and the application is tested and deployed. End users access the Oracle ADF user interface, fill in optional job properties, and click a button to submit the job request.

65.14.9 What Happens at Runtime: How an Oracle ADF User Interface for Submitting Job Requests Is Created

The application receives the submitted job request and calls Oracle Enterprise Scheduler to run the job. The Oracle Fusion application accesses the values of the properties entered by end users through the view object in which these properties were defined at design time. The job returns a result of success or failure, and the result passes from the Oracle Fusion application to Oracle Enterprise Scheduler.

Custom Task Flow

A job that includes properties to be filled in by end users through an Oracle ADF user interface at runtime includes ADF Business Components view objects with validation and the parameters to be filled in by end users. These parameters are submitted at runtime in the order in which they have been defined, meaning the first custom parameter to be defined is submitted first. The custom parameters must be named as follows:

ParameterVO1.ATTRIBUTE1, ParameterVO1.ATTRIBUTE2, ParameterVO2.ATTRIBUTE1, ParameterVO3.ATTRIBUTE1, and so on.
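The naming convention above can be sketched with a small helper that generates the expected parameter names for a set of custom view objects. The helper itself is illustrative (not part of the Oracle Enterprise Scheduler API); only the ParameterVO<n>.ATTRIBUTE<m> pattern comes from the text.

```java
import java.util.ArrayList;
import java.util.List;

// Generates parameter names following the ParameterVO<n>.ATTRIBUTE<m>
// convention described above. attributeCounts[k] is the number of
// attributes in the (k+1)-th custom view object, in definition order.
public class CustomParameterNames {

    public static List<String> names(int... attributeCounts) {
        List<String> result = new ArrayList<>();
        for (int vo = 0; vo < attributeCounts.length; vo++) {
            for (int attr = 1; attr <= attributeCounts[vo]; attr++) {
                result.add("ParameterVO" + (vo + 1) + ".ATTRIBUTE" + attr);
            }
        }
        return result;
    }
}
```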

If the job definition includes the properties ContextParametersVO, ParameterTaskflow and parametersVO, these properties render in that order at run time.

Context-Sensitive Parameters

When starting the job request submission page UI to submit a job or job set request with context-sensitive parameters, the contextParametersVO parameter initially renders in the Parameters tab of the Oracle ADF user interface.

The end user can then enter values for the context-sensitive parameters. Clicking Next invokes the API specified by setContextAPI, passing the context parameters. The context is set at the database level and the remaining parametersVO job parameters are rendered.

When the context-sensitive parameters are modified, end users must click Next to set the context with the new values.

Notifications

When the final status of the job is determined, Oracle Enterprise Scheduler delivers the notifications to the relevant users using the User Messaging Service. Users receive notifications based on their messaging preferences.

The notification view object defined at design time populates the input box in the submission request user interface at run time.

65.14.10 Related Links

The following documents provide additional information related to subjects discussed in this section:

65.15 Submitting Job Requests Using the Request Submission API

You can submit, cancel and otherwise manage job requests using the request submission API.

For information about using the request submission API, see the chapter "Using the Runtime Service" in Oracle Fusion Middleware Developer's Guide for Oracle Enterprise Scheduler.

65.16 Defining Oracle Business Intelligence Publisher Postprocessing Actions for a Scheduled Job

Oracle Business Intelligence Publisher enables generating reports from a variety of data sources, such as Oracle Database, web services, RSS feeds, files, and so on. BI Publisher provides several delivery options for generated reports, including print, fax, and email.

To create an Oracle BI Publisher report, an Oracle BI Publisher report definition is required. Oracle BI Publisher report definitions consist of a data model that specifies the type of data source (database, web service, and so on) and a template for output formatting.

With report definitions in place, options for reporting are available to end users in the Output tab of the Oracle ADF user interface. The Output tab provides options through which an end user can define templates for reports. They can specify layout templates, document formats (such as PDF, RTF, and more), report destinations (email addresses, fax numbers, or printer addresses), and so on. When the user submits a request, this information is stored in the Oracle Enterprise Scheduler schema. The preprocessor then invokes the Oracle BI Publisher service and passes the saved data to it.

Extensions to Oracle Enterprise Scheduler provide the ability to run Oracle BI Publisher reports as batch jobs. The Oracle Enterprise Scheduler postprocessing infrastructure enables applying Oracle BI Publisher formatting templates to XML data and delivering the formatted reports by printing, faxing, and so on.

65.16.1 How to Define Oracle BI Publisher Postprocessing for a Scheduled Job

Defining postprocessing for a scheduled job involves the following:

  • Define the postprocessing action.

  • Create a Java class for the postprocessing action. The Java class uses the parameters collected by the Oracle Enterprise Scheduler UI and calls Oracle BI Publisher APIs as required.

  • Create a native ADF Business Components view object to save parameters for postprocessing, such as template name, output format, locale, and so on.

Before you begin:

  1. Follow the instructions for setting up Oracle BI Publisher reporting as described in the "Creating and Editing Reports" chapter in Oracle Fusion Middleware Report Designer's Guide for Oracle Business Intelligence Publisher.

    Use the following file to set up reporting and seed your database with the relevant Oracle BI Publisher data:

    Example 65-33 Location of the File for Setting Up Oracle BI Publisher Reporting and Seeding the Database

    $BEAHOME/jdeveloper/jdev/oaext/adflib/PPActions.jar 
    
  2. Create an Oracle BI Publisher job definition, following the instructions in the Oracle BI Publisher documentation.

  3. Define File Management Group (FMG) properties for the Oracle BI Publisher job definition as described in Section 65.4.2, "How to Define File Groups for a Job."

To create an Oracle BI Publisher postprocessing action:

  1. In the table called APPLCP_PP_ACTIONS, define the postprocessing action to be executed for the job.

    The columns to be seeded in the APPLCP_PP_ACTIONS table are as follows:

    • Action_SN: Define a short name for the action, used when postprocessing actions are submitted programmatically. For example, OBFUSC8.

    • Action Name: Enter a name for the action to be displayed in the user interface. This name is stored separately for translation purposes.

    • Class: Enter the name of the Java class that defines the logic for the postprocessing action. For example, oracle.apps.shh.obfuscate.PPobfuscate.

    • VO_Def_Name: Enter the name of the view object used to collect the arguments for the postprocessing action. For example, oracle.apps.shh.obfuscate.PPobfuscateVO.

    • Type: Enter the category of the postprocessing action to be taken. Enter one of the following categories of postprocessing actions:

      • L: Indicates a Layout postprocessing action. Layout actions change the output of the job, and produce new output.

      • O: Indicates an Output postprocessing action. Output actions act on the output created by the job and its layout actions, performing delivery, publishing, printing, and so on.

      • F: Indicates a Final postprocessing action. Final Actions take no input. Final postprocessing actions execute using the final status of the job after all Layout and Output actions have executed.

    • On_Success: Indicate whether the postprocessing action runs following a successful job. Enter Y or N.

    • On_Warning: Indicate whether the postprocessing action runs following a job that ends in a warning. Enter Y or N.

    • On_Failure: Indicate whether the postprocessing action runs following a failed job. Enter Y or N.

    • SEQ_NUM: Enter a number to sequentially order the postprocessing actions. Only registered postprocessing actions of the same type can be sequentially ordered. This value determines both the order in which the tabs corresponding to the actions appear in the user interface, and the order in which the actions run.

    Each action can also specify request parameters used by the postprocessing action view object. These parameters must be set in the job definition for any job using this action. The parameter names are stored in the APPLCP_PP_ACTION_PARAMS table. The values of these parameters are accessible from the parameter view object at the time of job request submission. Postprocessing actions can access all request parameters at runtime using the request ID.

  2. Define a Java class for the postprocessing action, implementing the interface oracle.apps.fnd.applcp.request.postprocess.PostProcess. Use the methods required by the interface as described in Table 65-4.

    Table 65-4 Methods Required When Implementing the Interface oracle.apps.fnd.applcp.request.postprocess.PostProcess

    Method Description

    PostProcessState invokePostProcess(long requestID, String ppArguments[], ArrayList files);

    Receives the requestID parameter, the ppArguments[] array of arguments collected from the view object (or submitted programmatically), and the files array list which identifies the files on which the action is to be taken.

    It is possible to specify the location of the output file.

    ArrayList getOutputFileList();

    Returns an array of the output files created by the postprocessing action.


    Additional methods used by the invokePostProcess method are shown in Table 65-5.

    Table 65-5 Oracle BI Publisher Client API oracle.xdo.service.client.ReportService Used by the invokePostProcess method

    Method Description

    runReport()

    Enables the postprocessing action to pass the job's XML output, along with the template ID and format (all collected during job request submission), to Oracle BI Publisher.


    Additional methods used by the ReportRequest object are shown in Table 65-6.

    Table 65-6 Oracle BI Publisher Client API oracle.xdo.service.client.types.ReportRequest Used by the ReportRequest Object

    Method Description

    setAttributeFormat()

    Set the format for the Oracle BI Publisher report request.

    setAttributeLocale()

    Set the locale data for the Oracle BI Publisher report request.

    setAttributeTemplate()

    Set the template for the Oracle BI Publisher report request.

    setXMLData()

    Set the XML data for the Oracle BI Publisher report request.


    An example of a Java class that defines a postprocessing action is shown in Example 65-34:

    Example 65-34 A Java Class that Defines a Postprocessing Action

    package oracle.apps.shh.obfuscate;
    
    import java.util.ArrayList;
    
    import oracle.apps.fnd.applcp.request.postprocess.PostProcess;
    import oracle.apps.fnd.applcp.util.ESSContext;
    import oracle.apps.fnd.applcp.util.PostProcessState;
    import oracle.as.scheduler.*;
    
    public class PPobfuscate implements PostProcess {
    
      ArrayList myOutputFiles = new ArrayList();
    
      public ArrayList getOutputFileList()
      {
        return myOutputFiles;
      }
    
      public PostProcessState invokePostProcess(long requestID, String ppArguments[],
        ArrayList files)
      {
        RuntimeService rService = null;
        RuntimeServiceHandle rHandle = null;
        try {
          // Access the runtime details for the given requestID
          RequestDetail rDetail = null;
          RequestParameters rParam = null;
          String obfuscationSeed = ppArguments[0];
          String codedFileName = ppArguments[1];
          String myNewFile;
          String outDir = null;
     
          rService = ESSContext.getRuntimeService();
          if (rService != null) rHandle = rService.open();
          if (rHandle != null)  rDetail = rService.getRequestDetail(rHandle, requestID);
          if (rDetail != null)  rParam  = rDetail.getParameters();
          if (rParam != null)   outDir  = (String) rParam.getValue("outputWorkDirectory");
          if (outDir == null)
          {
            // Details not received; usually an exception would have been thrown
            // by now. We handle this case to be robust.
            // Log the error to Oracle Diagnostic Logging.
            return PostProcessState.ERROR;
          }
          // Check files
          if (files == null || files.isEmpty())
          {
            // No files - postprocessing should never call us in this state.
            // In case it does, log the error to Oracle Diagnostic Logging.
            return PostProcessState.ERROR;
          }
          // Process each input file. A counter is appended to the file name
          // for all but the first file, to keep the output names unique.
          for (int i = 0; i < files.size(); i++)
          {
            myNewFile = outDir + System.getProperty("file.separator") +
              codedFileName + (i == 0 ? "" : String.valueOf(i));
            Obfuscate.performObfuscation((String) files.get(i), obfuscationSeed, myNewFile);
            myOutputFiles.add(myNewFile);
          }
     
          return PostProcessState.SUCCESS;
     
        } catch (RuntimeServiceException rse)
        {
          // Log RuntimeServiceException to Oracle Diagnostic Logging.
          return PostProcessState.ERROR;
        } catch (Exception e)
        {
          // Log Exception to Oracle Diagnostic Logging.
          return PostProcessState.ERROR;
        } finally {
          if (rHandle != null)
            rService.close(rHandle);
        }
      }
    } // end class
    
  3. Create a native ADF Business Components view object to collect the parameters to be used in the postprocessing action. Follow the procedure described in Section 65.4, "Creating a Job Definition." Define any view object attributes sequentially.

    If the view object requires access to action-specific values from the job definition, specify the required job definition parameters in the action definition. The submission UI automatically retrieves the values from the job definition metadata and sets them as Oracle Fusion Middleware Extensions for Applications (Applications Core) Session attributes that may be retrieved using the ApplSession standard API.

65.16.2 How to Define Oracle BI Publisher Postprocessing Actions for a Scheduled PL/SQL Job

Example 65-35 shows a PL/SQL job that includes Oracle BI Publisher postprocessing actions. The PL/SQL job calls the method ess_runtime.add_pp_action to generate a layout for the data from the postprocessing action. This example formats the XML generated by the job as a PDF file.

Example 65-35 Defining a Scheduled PL/SQL Job with Oracle BI Publisher Postprocessing Actions

declare
  l_reqid   number;
  l_props   ess_runtime.request_prop_table_t;
begin
.
  ess_runtime.add_pp_action (
    props           => l_props,          -- IN OUT request_prop_table_t
    action_order    => 1,                -- order in which this postprocessing action executes
    action_name     => 'BIPDocGen',      -- action for document generation (layout)
    on_success      => 'Y',              -- call this action on success
    on_warning      => 'N',              -- call this action on warning
    on_error        => 'N',              -- call this action on error
    file_mgmt_group => 'XML',            -- file types this action processes; must be defined in the job definition
    step_path       => NULL,             -- IN varchar2 default NULL
    argument1       => 'XLABIPTEST_RTF', -- template name needed for the document generation action
    argument2       => 'pdf'             -- type of layout file generated by the document generation action
  );
.
  l_reqid := ess_runtime.submit_request_adhoc_sched (
    application        => 'SSEssWls',    -- application
    definition_type    => 'JOB',
    definition_name    => 'BIPTestJob',  -- job definition
    definition_package => '/mypackage',  -- job definition package
    props              => l_props);
  commit;
  dbms_output.put_line('request_id = :'||l_reqid);
end;

65.16.3 Invoking Postprocessing Actions Programmatically

You can invoke postprocessing actions programmatically from a client using a Java or web service API. Both APIs require the same set of parameter values described in Table 65-7.

For Java clients, call the addPPAction method of oracle.as.scheduler.cp.SubmissionUtil. The method takes the values needed to invoke the action and throws IllegalArgumentException if the number of arguments exceeds 10. Example 65-36 shows the declaration of the method.

Example 65-36 Sample declaration of the addPPAction method

public static void addPPAction (RequestParameters params,
        int actionOrder,
        String actionName,
        String description,
        boolean onSuccess,
        boolean onWarning,
        boolean onError,
        String fileMgmtGroup,
        String[] arguments)
    throws IllegalArgumentException 
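The ten-argument limit can be illustrated with a minimal stand-in for the validation. This sketch is not the Oracle class (oracle.as.scheduler.cp.SubmissionUtil performs the real work, which also records the action in the RequestParameters object); it only mirrors the documented argument-count check.

```java
// Minimal stand-in (hypothetical class name) for the argument validation
// described above: at most 10 arguments per postprocessing action.
public class PPActionValidator {

    public static void checkArguments(String[] arguments) {
        if (arguments != null && arguments.length > 10) {
            throw new IllegalArgumentException(
                "A postprocessing action accepts at most 10 arguments, got "
                + arguments.length);
        }
    }
}
```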

For web service clients, you invoke the method using a proxy, as in Example 65-37.

Example 65-37 Adding Postprocessing Actions for a Request

ESSWebService proxy = createProxy("addPPActions");

PostProcessAction ppAction = new PostProcessAction();
ppAction.setActionOrder(1);
ppAction.setActionName("BIPDocGen");
ppAction.setOnSuccess(true);
ppAction.setOnWarning(false);
ppAction.setOnError(false);
ppAction.getArguments().add("argument1");
ppAction.getArguments().add("argument2");

List<PostProcessAction> ppActionList =   new ArrayList<PostProcessAction>();
ppActionList.add(ppAction);

RequestParameters reqParams = new RequestParameters();
reqParams = proxy.addPPActions(reqParams, ppActionList);

Table 65-7 Parameters for Adding a Postprocessing Action

Parameter Description

params

A RequestParameters object into which this method adds parameters.

actionOrder

The ordinal location of this action in the sequence of actions to be performed within the action domain. Oracle BI Publisher processes requests starting with action order index 1.

actionName

The name of the action to perform. The following lists acceptable values for this parameter, along with the acceptable values you can use in the arguments parameter of this method.

  • BIPDocGen: for applying Oracle BI Publisher templates. Acceptable argument parameter values are:

    • argument1: maps to report parameter TEMPLATE, the template name.

    • argument2: maps to report parameter OUTPUT_FORMAT, the output format for Oracle BI Publisher document generation, for example, "pdf" or "html".

    • argument3: maps to report parameter LOCALE, the locale to be used while generating output.

  • BIPPrintService: for specifying the print action. Acceptable argument parameter values are:

    • argument1: maps to printerName

    • argument2: maps to numberOfCopies

    • argument3: maps to side

    • argument4: maps to tray

    • argument5: maps to pagesRange

    • argument6: maps to orientation

  • BIPDeliveryEmail: for specifying the email action. Acceptable argument parameter values are:

    • argument1: maps to emailServerName

    • argument2: maps to from

    • argument3: maps to to

    • argument4: maps to cc

    • argument5: maps to bcc

    • argument6: maps to replyTo

    • argument7: maps to subject

    • argument8: maps to messageBody

  • BIPDeliveryFax: for specifying the fax action. Acceptable argument parameter values are:

    • argument1: maps to faxServerName

    • argument2: maps to faxNumber

description

A description of this postprocessing action.

onSuccess

Determines whether this action should be performed on successful completion of the job.

onWarning

Determines whether this action should be performed when the job or step has completed with a warning.

onError

Determines whether this action should be performed when the job or step has completed with an error.

fileMgmtGroup

The name of the File Management Group. When using an Oracle BI Publisher template, the value of this parameter is XML, as defined in the job definition Program.FMG property with the value L.XML.

arguments

A list of arguments for the postprocessing action. See the actionName parameter for values you can use for the arguments parameter.
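The proxy code in Example 65-37 generalizes to the other actions in Table 65-7. As an illustration of the argument ordering for a BIPDeliveryEmail action, the sketch below uses a simplified, self-contained stand-in for the proxy's PostProcessAction type. The stand-in class and the EmailActionBuilder helper are hypothetical; only the setter names and the argument1 through argument8 mapping come from this section.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the web service proxy's PostProcessAction type.
class PostProcessAction {
    private int actionOrder;
    private String actionName;
    private boolean onSuccess, onWarning, onError;
    private final List<String> arguments = new ArrayList<>();

    void setActionOrder(int order) { this.actionOrder = order; }
    void setActionName(String name) { this.actionName = name; }
    void setOnSuccess(boolean b) { this.onSuccess = b; }
    void setOnWarning(boolean b) { this.onWarning = b; }
    void setOnError(boolean b) { this.onError = b; }
    List<String> getArguments() { return arguments; }
    String getActionName() { return actionName; }
}

class EmailActionBuilder {
    // Per Table 65-7, argument1..argument8 of BIPDeliveryEmail map to
    // emailServerName, from, to, cc, bcc, replyTo, subject, messageBody,
    // in that order.
    static PostProcessAction buildEmailAction(String server, String from,
            String to, String cc, String bcc, String replyTo,
            String subject, String body) {
        PostProcessAction action = new PostProcessAction();
        action.setActionOrder(1); // processing starts at action order index 1
        action.setActionName("BIPDeliveryEmail");
        action.setOnSuccess(true);
        action.setOnWarning(false);
        action.setOnError(false);
        for (String arg : new String[] { server, from, to, cc, bcc,
                                         replyTo, subject, body }) {
            action.getArguments().add(arg);
        }
        return action;
    }
}
```

With the real proxy type, the resulting action would be added to the list passed to addPPActions, as in Example 65-37.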


65.16.4 What Happens When You Define Oracle BI Publisher Postprocessing Actions for a Scheduled Job

Depending on the FMG property set for the job definition, the relevant postprocessing action is selected for the job.

The ppArguments array stores the values collected from the view object attributes. The array is passed to the invokePostProcess method which executes in the Java class that defines the postprocessing action.

65.16.5 What Happens at Runtime: How Oracle BI Publisher Postprocessing Actions are Defined for a Scheduled Job

At runtime, the user interface uses the view object to collect the arguments for executing the postprocessing action as defined in the table APPLCP_PP_ACTIONS. These arguments also instruct the user interface as to how to invoke the action logic.

The postprocessing action accesses the XML output file from the job request, and passes the XML output to Oracle BI Publisher. The postprocessing action creates a report request containing the XML data.

The postprocessing action is displayed in the submission Oracle ADF UI. The UI enables adding a postprocessing action for the scheduled job, selecting arguments for the action using the view object, and selecting output options for the action. The user interface also displays the name of the File Management Group with which the output files are associated.

Note:

When testing the UI in a web browser, you may need to add a security exception to your browser so that the UI renders correctly. Follow the directions in the online help for your web browser.

65.16.6 Related Links

The following documents provide additional information related to subjects discussed in this section:

  • For more information about defining postprocessing actions for scheduled jobs, see "Creating a Business Domain Layer Using Entity Objects" in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

  • For more information on the web service, see the chapter "Using the Oracle Enterprise Scheduler Web Service" in Oracle Fusion Middleware Developer's Guide for Oracle Enterprise Scheduler.

  • For more information about configuring security certificates, see the chapter "Managing Keystores, Wallets, and Certificates" in the Oracle Fusion Middleware Administrator's Guide.

65.17 Monitoring Scheduled Job Requests Using an Oracle ADF UI

You can view previously submitted jobs by integrating the Monitor Processes task flow into an application.

For information about enabling tracing for jobs, see Chapter 61, "Debugging Oracle ADF and Oracle SOA Suite."

65.17.1 How to Monitor Scheduled Job Requests

The main steps involved in monitoring scheduled job requests using an Oracle ADF UI are as follows:

  • Configure Oracle Enterprise Scheduler in JDeveloper

  • Create and initialize an Oracle Fusion web application

  • Create a UI Shell page and drop the Monitor Processes task flow onto it

Note:

Fields such as submission date, ready time, scheduled date, process start, name, type, definition, and so on, are not set unless the job request or subrequest is successfully validated.

To monitor scheduled job requests using an Oracle ADF UI:

  1. Follow the instructions in Section 65.14.1, "How to Create an Oracle ADF User Interface for Submitting Job Requests" up to and including step 5.

  2. Under the ViewController project, right-click Web Content and create a new JSF page called Consumer.jspx. Select the following options:

    • UIShell (template)

    • Create as XML Document

  3. Create a new JSF page fragment. This page initializes the project.

  4. Open adfc-config.xml and drag Consumer.jspx onto adfc-config.xml.

  5. Right-click adfc-config.xml and select Create ADF Menu.

    The Create ADF Menu Model window displays.

  6. Rename the default file root_menu.xml to something else.

  7. Open the XML file created in the previous step. Look for an itemNode element as follows:

    <itemNode id="itemNode_JSF/JSPX page name">
    

    For example, the Consumer.jspx page has the following itemNode value:

    <itemNode id="itemNode_Consumer">
    
  8. In the Structure window, right-click the root itemNode and select Insert inside itemNode-itemNode_JSF/JSPX page name > itemNode.

  9. In Common Properties, enter the following values:

    • id: MonitorNode

    • focusViewId: /Consumer

  10. In Advanced Properties, enter Monitor Processes in the label field.

  11. Right-click the itemNode you just added and select Go to Properties.

  12. In the Property Inspector, select Advanced and do the following:

    • Select the dynamicMain task type.

    • In the taskFlowId field, enter the following:

      /WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/MonitorProcessesMainAreaFlow.xml#MonitorProcessesMainAreaFlow
      
    • Enter a string for the pageTitle parameter, which will become the title for the monitoring page. If this parameter is not specified, then the page title will be shown as "Manage Scheduled Processes".

  13. Repeat steps 8-12 to create a second itemNode element with the following properties:

    • id: __Launcher_itemNode__FndTaskList

    • focusViewId: /Launcher

    • label: #{applcoreBundle.TASKS}

    • Task Type: defaultRegional

    • taskFlowId: /WEB-INF/oracle/apps/fnd/applcore/patterns/uishell/ui/publicFlow/TasksList.xml#TasksList

  14. Right-click adfc-config.xml and select Link ADF Menu to Navigator.

  15. Configure Oracle JDeveloper Integrated Oracle WebLogic Server for development with Oracle Enterprise Scheduler extensions.

  16. Deploy and test the application.

65.17.2 How to Embed a Table of Search Results as a Region on a Page

You can embed a table of job request search results as a region on a page. Task flow parameters can be used to further specify the job requests returned by the search.

To embed a search results table as a region:

  1. Add the Applications Concurrent Processing (View Controller) library to the ViewController project.

    For more information about adding this library to the project, see Section 65.3.1.

  2. In the Resource Palette, select File System > Applications Core > MonitorProcesses-View.jar > ADF Task Flows.

  3. Drag and drop onto the page as a region the SearchResultsFlow task flow.

    The task flow accepts the following parameters:

    • processId: The request ID number uniquely identifying the process.

    • processName: The name of the process, which corresponds to the name of the job definition.

    • processNameList: Fetches the job requests of multiple process names, using a list that contains the relevant job names.

      If the processName parameter is also specified, it takes precedence over processNameList, and the requests returned are for the single process name specified by processName only.

    • scheduledDays: Queries requests for the last n days. If this parameter is not specified in a work area task flow, job requests from the last three days are displayed. If the value of this parameter is greater than three, it is treated as three, and only the last three days of job requests display.

    • status: The status of the request. This filter narrows down the result set to display only the requests with the selected status in the filter.

      If the status input parameter is not specified, then the results table shows all requests with all statuses (by default, the All value is selected in the status filter list).

      If the status input parameter is specified, then the results table shows only the requests with the given status. The selected status is chosen as the default in the status filter list.

    • isEmbedResults: A Boolean value that indicates whether search results are embedded in the task flow. Set to true to embed the results table.

    • Time Range Filter: This filter narrows down the result set to show only the requests for the last n hours. The filter lists the following values in a dropdown list: (1) Last 1 Hour, (2) Last 12 Hours, (3) Last 24 Hours, (4) Last 48 Hours, and (5) Last 72 Hours.

      The default selected item displays based on the value assigned or given to the task flow parameter scheduledDays.

      A scheduledDays value of 1 means the time range filter list displays only the first three items.

      A scheduledDays value of 2 means the time range filter list displays only the first four items.

      If the value of scheduledDays is 1, then by default, the time range dropdown list displays Last 24 Hours.

      If the value of scheduledDays is 3 or more, then by default, the time range dropdown list displays Last 72 Hours.

    • pageTitle: Optional. If passed, the task flow renders this String value as the page title.

    • requireRootOutcome: If the value true is passed, the task flow generates a value of root-outcome when the user clicks the Submit or Cancel button. By default, the task flow generates a value of parent-outcome.

    Specifying more than one of these parameters causes the search to run using the AND conjunction.
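The scheduledDays and Time Range Filter rules above can be sketched as follows. The TimeRangeFilter class and method names are hypothetical; only the listed rules come from the parameter descriptions, and the default for a scheduledDays value of 2 is an assumption by analogy, as the text does not state it.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the documented Time Range Filter defaulting rules.
class TimeRangeFilter {
    static final List<String> ITEMS = Arrays.asList(
        "Last 1 Hour", "Last 12 Hours", "Last 24 Hours",
        "Last 48 Hours", "Last 72 Hours");

    // A scheduledDays value greater than three is treated as three.
    static int effectiveScheduledDays(int scheduledDays) {
        return Math.min(scheduledDays, 3);
    }

    // scheduledDays = 1 shows the first three items; 2 shows the first
    // four; 3 or more shows all five.
    static List<String> visibleItems(int scheduledDays) {
        int days = effectiveScheduledDays(scheduledDays);
        int count = (days <= 1) ? 3 : (days == 2 ? 4 : ITEMS.size());
        return ITEMS.subList(0, count);
    }

    // Documented defaults: 1 day -> "Last 24 Hours"; 3 or more days ->
    // "Last 72 Hours".
    static String defaultItem(int scheduledDays) {
        int days = effectiveScheduledDays(scheduledDays);
        if (days <= 1) return "Last 24 Hours";
        if (days >= 3) return "Last 72 Hours";
        return "Last 48 Hours"; // assumed by analogy; not stated in the text
    }
}
```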

65.17.3 How to Log Scheduled Job Requests in an Oracle ADF UI

You can enable Oracle Diagnostic Logging in an Oracle ADF UI used to monitor scheduled job requests. When enabling logging, the UI displays a View Log button.

The View Log functionality in the monitoring UI applies only to scheduled requests whose persistenceMode property is set to a value of file. Hence, the View Log button in the scheduled request submission monitoring UI displays only when viewing such requests.

The only other valid value for the persistenceMode property is content; the View Log button is hidden for all requests with that value. If the persistenceMode property is not specified for a given request, the monitoring UI defaults to a persistenceMode value of file and displays the View Log button when viewing the relevant requests.
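The visibility rule above amounts to a simple default-and-compare check. The helper below is a hypothetical sketch of that logic, not an API provided by Oracle Enterprise Scheduler.

```java
// Sketch of the documented View Log visibility rule.
class ViewLogVisibility {
    // The View Log button displays only for requests whose persistenceMode
    // resolves to "file"; an unspecified persistenceMode defaults to "file",
    // and a value of "content" hides the button.
    static boolean showViewLogButton(String persistenceMode) {
        String mode = (persistenceMode == null) ? "file" : persistenceMode;
        return "file".equals(mode);
    }
}
```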

To log scheduled job requests:

  1. Open the server's logging.xml file.

  2. In the logging.xml file, enter the required logging level for oracle.apps.fnd.applcp.srs, for example: INFO, FINE, FINER or FINEST.

    Example 65-38 shows a sample of a logging.xml file with Oracle Diagnostic Logging configured.

    Example 65-38 Enabling Logging in the logging.xml File

    <logger name='oracle.apps.fnd.applcp.srs' level='FINEST'
        useParentHandlers='false'>
       <handler name='odl-handler'/>
    </logger>
    
  3. Save the logging.xml file and restart the server.

65.17.4 How to Troubleshoot an Oracle ADF UI Used to Monitor Scheduled Job Requests

The following tips are useful for troubleshooting the Oracle ADF UI used to monitor scheduled job requests.

  • Displaying a readable name. When defining metadata, use the display-name attribute to configure the name to be displayed in the Oracle ADF UI. The monitoring UI will display the value defined for the display-name attribute. If this attribute is not defined, the UI displays the value of the metadata-name attribute assigned to the metadata.

  • Displaying multiple links in the task flow UI that each display a popup window with a different job definition. The recommended approach is to create a single page fragment that contains the scheduled request submission task flow within an Oracle ADF region. This page is reused by each link to display a different job definition in the scheduled request submission UI. For each link, pass the relevant parameters such as the job definition name, package name, and so on. This approach ensures that the UI session creates and uses a single instance of the task flow.

  • Displaying the correct name given the metadata name and display name attributes. By default, the display name takes precedence and displays in the UI. If the display name is not defined, then the UI displays the job or job set name.

  • Resolving name conflicts between a job metadata parameter name and a request parameter with the same name. Oracle Enterprise Scheduler uses the following rules to resolve parameter name conflicts.

    • The last definition takes precedence. When the same parameter is defined repeatedly with the read-only flag set to false in all cases, the last parameter definition takes precedence. For example, a property specified at the job request level takes precedence over the same property specified at the job definition level.

    • The first read-only definition takes precedence. When the same parameter is defined repeatedly and at least one definition is read-only (that is, the ParameterInfo read-only flag is set to true), the first read-only definition takes precedence. For example a read-only parameter specified at the job type definition level takes precedence over a property with the same name specified at the job definition level, regardless of whether it is read-only.

  • Understanding the state of a job request. There are 20 possible states for a job request, each with a corresponding number value. These are shown in Table 65-8.

    Table 65-8 Job Request States

    Job State Number  Job Request State      Description
    -1                UNKNOWN                The state of the job request is unknown.
    1                 WAIT                   The job request is awaiting dispatch.
    2                 READY                  The job request has been dispatched and is awaiting processing.
    3                 RUNNING                The job request is being processed.
    4                 COMPLETED              The job request has completed and postprocessing has commenced.
    5                 BLOCKED                The job request is blocked by one or more incompatible job requests.
    6                 HOLD                   The job request has been explicitly held.
    7                 CANCELLING             The job request has been canceled and is awaiting acknowledgement.
    8                 EXPIRED                The job request expired before it could be processed.
    9                 CANCELLED              The job request was canceled.
    10                ERROR                  The job request has run and resulted in an error.
    11                WARNING                The job request has run and resulted in a warning.
    12                SUCCEEDED              The job request has run and completed successfully.
    13                PAUSED                 The job request paused for subrequest completion.
    14                PENDING_VALIDATION     The job request has been submitted but has not been validated.
    15                VALIDATION_FAILED      The job request has been submitted, but validation has failed.
    16                SCHEDULE_ENDED         The schedule for the job request has ended, or the job request expiration time specified at submission has been reached.
    17                FINISHED               The job request, and all child job requests, have finished.
    18                ERROR_AUTO_RETRY       The job request has run, resulted in an error, and is eligible for automatic retry.
    19                ERROR_MANUAL_RECOVERY  The job request requires manual intervention to be retried or transition to a terminal state.


  • Fixing an Oracle BI Publisher report that does not generate, even though the Oracle Enterprise Scheduler schema REQUEST_PROPERTY table contains all the relevant postprocessing parameters. Verify that the postprocessing parameters begin with index value of 1. If a set of parameters begins with an index value of 0 (such as the parameter pp.0.action), then the Oracle BI Publisher report will not generate. Oracle BI Publisher expects parameters to begin with an index value of 1. In the case of a job set with multiple Oracle BI Publisher jobs, verify that all the individual step postprocessing actions begin with an index value of 1.

  • Fixing a scheduled request submission UI that does not display, and throws a partial page rendering error in the browser indicating that the drTaskflowId property is invalid. This error may occur due to any of the following.

    • The object oracle.as.scheduler.JobDefinition may be unavailable to the scheduled request submission UI, which attempts to query the object using the MetadataService API.

    • The job definition name or the job definition package name is incorrect when passed as task flow parameters. Ensure that the package name does not end with a trailing forward slash.

    • The metadata permissions are not properly configured for the user who is currently logged in. The JobDefinition object, being stored in Oracle Metadata Repository, requires adequate metadata permissions to read and modify the JobDefinition metadata. Ensure that the Oracle Metadata Repository to which you are referring contains the job definition name in the proper package hierarchy.
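The two parameter-precedence rules described above can be sketched as a single resolution pass over the parameter definitions in the order they were applied (job type, then job definition, then request). The ParameterPrecedence class below is a hypothetical illustration, not Oracle Enterprise Scheduler code.

```java
import java.util.List;

// Sketch of the documented conflict-resolution rules for a parameter
// defined at several levels, listed in definition order.
class ParameterPrecedence {
    static final class Definition {
        final String value;
        final boolean readOnly;
        Definition(String value, boolean readOnly) {
            this.value = value;
            this.readOnly = readOnly;
        }
    }

    // The first read-only definition wins; if no definition is read-only,
    // the last definition wins.
    static String resolve(List<Definition> definitionsInOrder) {
        for (Definition d : definitionsInOrder) {
            if (d.readOnly) {
                return d.value; // first read-only definition takes precedence
            }
        }
        return definitionsInOrder.get(definitionsInOrder.size() - 1).value;
    }
}
```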

65.17.5 Related Links

The following document provides additional information related to subjects discussed in this section:

For more information about tracing Oracle Enterprise Scheduler jobs, see the section "Tracing Oracle Enterprise Scheduler Jobs" in the chapter "Managing Oracle Enterprise Scheduler Service and Jobs" in the Oracle Fusion Applications Administrator's Guide.

65.18 Using a Task Flow Template for Submitting Scheduled Requests Through an Oracle ADF UI

The Oracle ADF UI used to submit scheduled requests supports basic and advanced modes. Switching between modes requires page navigation between two view activities.

In some cases, you may want to use a custom parameter task flow for the UI in the context of an Oracle Fusion web application. One such use case is when you require a method call activity as the default activity of a custom bounded task flow so as to initialize the parameters view object and Flexfield filters defined in that task flow.

When using page navigation between two view activities and custom bounded task flows with a default method call activity, switching between basic and advanced modes might reinitialize the related view objects and entity objects. If this happens, any data entered in basic mode is lost when changing to advanced mode.

The task flow template enables switching between basic and advanced modes in the scheduled request submission Oracle ADF UI without losing data.

65.18.1 How to Use a Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

A bundled task flow template is provided, containing the components required to enable switching between basic and advanced modes in the Oracle ADF UI. The task flow template adds a router activity and an input parameter to the custom bounded task flow. Configure the router activity as the default activity.

You need only extend the task flow template and implement the activity IDs defined in it.

Example 65-39 shows a sample implementation of the task flow template.

Example 65-39 Task Flow Template

<?xml version="1.0" encoding="UTF-8" ?>
<adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
  <task-flow-template id="srs-custom-task-flow-template">
    <default-activity id="defActivity">defaultRouter</default-activity>
    <input-parameter-definition id="param1">
      <description id="paramDescription">Parameter to decide on initialization.</description>
      <name id="paramName">shouldInitialize</name>
      <value id="paramID">#{pageFlowScope.shouldInitialize}</value>
      <class id="paramType">boolean</class>
      <required/>
    </input-parameter-definition>
 
    <router id="defaultRouter">
      <case id="routerCaseID">
        <expression id="routerExprID">#{pageFlowScope.shouldInitialize}</expression>
        <outcome id="outcomeID">initializeTaskflow</outcome>
      </case>
      <default-outcome id="defOutcomeID">skip</default-outcome>   
    </router>
 
    <control-flow-rule id="ctrlFlwRulID">
      <from-activity-id id="FrmAc1">defaultRouter</from-activity-id>
      <control-flow-case id="CtrlCase1">
        <from-outcome id="FrmAct3">initializeTaskflow</from-outcome>
        <to-activity-id id="ToAct1">initActivity</to-activity-id>
      </control-flow-case>
      <control-flow-case id="CtrlCase2">
        <from-outcome id="FrmAct2">skip</from-outcome>
        <to-activity-id id="ToAct2">defaultView</to-activity-id>
      </control-flow-case>
    </control-flow-rule>
    <use-page-fragments/>
  </task-flow-template>
</adfc-config>

The task flow template defines the following:

  • A default-activity element

  • An input parameter of Boolean type

  • A router activity

  • A control-flow-rule element containing two cases

65.18.2 How to Extend the Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

If you need to create your own custom bounded task flow UI for the parameters section of the scheduled request submission UI, you will need to extend this template.

To extend the task flow template for the Oracle ADF UI used to submit scheduled requests:

  1. When creating a new task flow, extend the task flow by selecting Use a template.

    Example 65-40 Extending a Task Flow

    <template-reference>
         <document id="doc1">/WEB-INF/srs-custom-task-flow-template.xml</document>
         <id id="temid">srs-custom-task-flow-template</id>
    </template-reference>
    

    Note:

    Ensure your bounded task flow does not define any default activity.

  2. Implement the activity IDs defined in the template, which are invoked by the router activity in the template.

    • initActivity: The ID of the method call activity.

    • defaultView: The ID of the default view activity.

    To do this, drag and drop the createInsert method from the view object used in the default view onto the task flow. This creates a page definition file and adds the binding details to DataBindings.cpx.

  3. Define a control flow rule to navigate from the initActivity object to the defaultView object. This navigation depends on the outcome of the initActivity object, as well as individual use cases.

    Example 65-41 shows a sample implementation of a control flow rule.

    Example 65-41 Implementing a Control Flow Rule

    <control-flow-rule>
         <from-activity-id>initActivity</from-activity-id>
         <control-flow-case>
              <from-outcome>outcome_of_init_activity</from-outcome>
              <to-activity-id>defaultView</to-activity-id>
         </control-flow-case>
    </control-flow-rule>
    

65.18.3 What Happens When You Use a Task Flow Template for Submitting Scheduled Requests through an Oracle ADF UI

Based on the value of the input parameter, the router invokes the method call activity or skips it, and invokes the view activity directly. The Oracle ADF UI must pass the correct parameter values to the task flow while switching modes.

65.18.4 What Happens at Runtime: How a Task Flow Template Is Used to Submit Scheduled Requests through an Oracle ADF UI

When loading the initial page in basic mode, the method call activity is invoked. While loading the page in the advanced mode, the custom bounded task flow directly invokes the view activity. This ensures that the user entered data persists in the view objects across modes.

If the custom task flow UI does not render correctly, check whether transactional properties have been set in the custom task flow, such as the requires-transaction property, and so on.

Remove transactional properties from the task flow definition and set the data control scope to shared.

As the parent scheduled request submission UI task flow already has a transaction, Oracle ADF will commit all called task flow transactions if the data controls are shared.

Note:

When using the UI to schedule a job to run for a year, for example, a maximum of 300 occurrences display when clicking Customize Times.

65.18.5 Related Links

The following document provides additional information related to subjects discussed in this section:

For more information about creating task flows, see the part "Creating Oracle ADF Task Flows" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

65.19 Securing Oracle ADF UIs

When creating Oracle ADF UIs for scheduled jobs, you can secure the individual task flows involved using a security policy.

The task flows you can secure are as follows.

Scheduling Job Requests UI

  • /WEB-INF/ScheduleRequest-taskflow.xml

    • /WEB-INF/srs-test-task-flow.xml#srs-test-task-flow

    • /WEB-INF/LayoutRN-taskflow.xml#LayoutRN-taskflow

    • /WEB-INF/NotifyRN-taskflow.xml#NotifyRN-taskflow

    • /WEB-INF/ScheduleRN_taskflow.xml#ScheduleRN_taskflow

Monitoring Job Requests UI

  • /WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/MonitorProcessesMainAreaFlow.xml#MonitorProcessesMainAreaFlow

    • /WEB-INF/oracle/apps/fnd/applcp/monitor/ui/flow/EmptyFlow.xml

65.20 Integrating Scheduled Job Logging with Oracle Fusion Applications

Oracle Enterprise Scheduler is fully integrated with Oracle Fusion Applications logging. The logger captures Oracle Enterprise Scheduler-specific attributes when invoking logging from within the context of a running job request. You can set the values of these Oracle Enterprise Scheduler attributes within the context of defining a job.

Jobs can generate a log file on the file system that can be viewed with the Monitoring UI.

In a typically configured Oracle Enterprise Scheduler hosting application, log and output files are stored in an Oracle WebCenter Content repository rather than on the file system. These files are available to end users through a page you provide for monitoring scheduled job requests. For more information about request monitoring, see Section 65.17, "Monitoring Scheduled Job Requests Using an Oracle ADF UI."

65.21 Logging Scheduled Jobs and Writing to Output Files

Log messages written using the request log file APIs are written to the request log file and Oracle Fusion Applications logging at a severity level of FINE (only if logging is enabled at a level of FINE or lower).

65.21.1 Using the Request Log

Note:

Do not use the request log for debugging and internal error reporting. For Oracle Enterprise Scheduler jobs, the request log is equivalent to the end-user UI for online applications, so job code should log only translatable, end user-oriented messages to the request log. Debug messages and detailed internal error messages are oriented to system administrators and Oracle Support, not end users, and do not belong there.

Therefore, debug and detailed internal error messages should be logged to the log called FND_LOG only.

For example, if an end user enters a bad parameter to the Oracle Enterprise Scheduler job, a translated error message logged to the request log is displayed to the end user. The end user can then take the relevant corrective action.

Example 65-42 shows how to set log messages using the request log.

Example 65-42 Setting Log Messages Using the Request Log

-- Seeded message to be displayed to the end user.
FND_MESSAGE.SET_NAME('FND', 'INVALID_PARAMETER'); 
-- Runtime parameter information
FND_MESSAGE.SET_TOKEN('PARAM_NAME', pName); 
FND_MESSAGE.SET_TOKEN('PARAM_VALUE', pValue); 
-- The following is useful for auto-logging errors.
FND_MESSAGE.SET_MODULE('fnd.plsql.mypackage.myfunctionA');
fnd_file.put_line( FND_FILE.LOG, FND_MESSAGE.GET );

If the Oracle Enterprise Scheduler job fails due to an internal software error, log the detailed failure message to the log called FND_LOG for the system administrator or support. You can also log a high-level generic message to the request log so as to inform end users of the error. An example of a generic error message intended for end users: "Your request could not be completed due to an internal error."

65.21.2 Using the Output File

Note:

Do not use the output file for debugging and internal error reporting.

The output file is a formally formatted file generated by an Oracle Enterprise Scheduler job. An output file can be sent to a printer or viewed in a UI window. Example 65-43 shows an invoice sent to an output file.

Example 65-43 Invoice Output File

fnd_file.put_line( FND_FILE.OUTPUT, '******** XYZ Invoice ********' );

65.21.3 Debugging and Error Logging

Debug and error logging should be done using the Oracle Diagnostic Logging APIs only. The Oracle Enterprise Scheduler request log should not be used for system administrator or Oracle support-oriented debug and error logging purposes. The request log is for the end users and it should only contain messages that are clear and concise. When an error occurs in an Oracle Enterprise Scheduler job, use an appropriate high-level (and, ideally, translated) message to report the error to the end user through the request log. The details of the error and any debug messages should be logged with Oracle Diagnostic Logging APIs.

Common PL/SQL, Java, or C code that could be invoked by both Oracle Enterprise Scheduler jobs and interactive application code should only use Oracle Diagnostic Logging APIs. If needed, the wrapper Oracle Enterprise Scheduler job should perform appropriate batching and logging to the request log for progress reporting purposes.

Using Logging in a Java Application

In Java jobs, use the log called AppsLog for debugging and error logging. You can retrieve an AppsLog instance from the CpContext object, by calling the method getLog().

Example 65-44 shows the use of logging in a Java application.

Example 65-44 Logging in Java Using AppsLog

public boolean authenticate(AppsContext ctx, String user, String passwd)
      throws SQLException, NoSuchUserException {
    AppsLog alog = (AppsLog) ctx.getLog();
    boolean validUser = false;
    if (alog.isEnabled(Log.PROCEDURE))  /* To avoid String concat if not enabled */
      alog.write("fnd.security.LoginManager.authenticate.begin",
                 "User=" + user, Log.PROCEDURE);
    /* Never log plain-text security-sensitive parameters like passwd! */
    try {
      validUser = checkinDB(user, passwd);
    } catch (NoSuchUserException nsue) {
      if (alog.isEnabled(Log.EXCEPTION))
        alog.write("fnd.security.LoginManager.authenticate", nsue, Log.EXCEPTION);
      throw nsue; // Allow the caller to handle it appropriately
    } catch (SQLException sqle) {
      if (alog.isEnabled(Log.UNEXPECTED)) {
        alog.write("fnd.security.LoginManager.authenticate", sqle,
                   Log.UNEXPECTED);
        Message msg = new Message("FND", "LOGIN_ERROR"); /* System alert */
        msg.setToken("ERRNO", sqle.getErrorCode(), false);
        msg.setToken("REASON", sqle.getMessage(), false);
        /* Message Dictionary messages should be logged using write(..Message..),
         * and never using write(..String..) */
        alog.write("fnd.security.LoginManager.authenticate", msg, Log.UNEXPECTED);
      }
      throw sqle; // Allow the caller to handle it appropriately
    }
    if (alog.isEnabled(Log.PROCEDURE))  /* To avoid String concat if not enabled */
      alog.write("fnd.security.LoginManager.authenticate.end",
                 "validUser=" + validUser, Log.PROCEDURE);
    return validUser;
}

Note:

Example 65-44 uses an active WebAppsContext object. Do not attempt to log messages using an inactive or freed WebAppsContext object, as this can cause connection leaks.

Using Logging in a PL/SQL Application

PL/SQL logging APIs are part of the FND_LOG package. These APIs require invoking the relevant application user session initialization APIs, such as FND_GLOBAL.INITIALIZE(), to set up user session properties in the database session.

These application user session properties, including UserId, RespId, AppId, and SessionId, are needed by the log APIs. Typically, Applications Core invokes these session initialization APIs.

Log plain text messages with the method FND_LOG.STRING(). Log translatable message dictionary messages with the method FND_LOG.MESSAGE(). FND_LOG.MESSAGE() logs messages in encoded, but not translated, format, and allows the Log Viewer UI to handle translating messages based on the language preferences of the system administrator viewing the messages.

For details regarding the FND_LOG API, see the package specification in the script $fnd/patch/115/sql/AFUTLOGB.pls. Example 65-45 shows the PL/SQL logging syntax.

Example 65-45 PL/SQL Logging Syntax

PACKAGE FND_LOG IS
   LEVEL_UNEXPECTED CONSTANT NUMBER  := 6;
   LEVEL_ERROR      CONSTANT NUMBER  := 5;
   LEVEL_EXCEPTION  CONSTANT NUMBER  := 4;
   LEVEL_EVENT      CONSTANT NUMBER  := 3;
   LEVEL_PROCEDURE  CONSTANT NUMBER  := 2;
   LEVEL_STATEMENT  CONSTANT NUMBER  := 1;
 
  /*
   **  Writes the message to the log file for the specified 
   **  level and module
   **  if logging is enabled for this level and module 
   */
   PROCEDURE STRING(LOG_LEVEL IN NUMBER,
                    MODULE    IN VARCHAR2,
                    MESSAGE   IN VARCHAR2);
 
   /*
   **  Writes a message to the log file if this level and module 
   **  are enabled.
   **  The message gets set previously with FND_MESSAGE.SET_NAME, 
   **  SET_TOKEN, etc. 
   **  The message is displayed from the message dictionary stack, 
   **  if POP_MESSAGE is TRUE.  
   **  Pass FALSE for POP_MESSAGE if the message will also be 
   **  displayed to the user later.
   **  Example usage:
   **  FND_MESSAGE.SET_NAME(...);    -- Set message
   **  FND_MESSAGE.SET_TOKEN(...);   -- Set token in message
   **  FND_LOG.MESSAGE(..., FALSE);  -- Log message
   **  FND_MESSAGE.RAISE_ERROR;      -- Display message
   */
   PROCEDURE MESSAGE(LOG_LEVEL   IN NUMBER,
                     MODULE      IN VARCHAR2, 
                     POP_MESSAGE IN BOOLEAN DEFAULT NULL);
 
   /*
   ** Tests whether logging is enabled for this level and module, 
   ** to avoid the performance penalty of building long debug 
   ** message strings unnecessarily.
   */
   FUNCTION TEST(LOG_LEVEL IN NUMBER, MODULE IN VARCHAR2)
      RETURN BOOLEAN;
 
END FND_LOG;

Example 65-46 shows how to log a message in PL/SQL after the AOL session has been initialized.

Example 65-46 Logging a Message in PL/SQL After the AOL Session Has Been Initialized

begin
  /* Call a routine that logs messages. */
  /* For performance purposes, check whether logging is enabled. */
  if( FND_LOG.LEVEL_PROCEDURE >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
    FND_LOG.STRING(FND_LOG.LEVEL_PROCEDURE, 
        'fnd.plsql.MYSTUFF.FUNCTIONA.begin', 'Hello, world!' );
  end if;
end;
/

The global variable FND_LOG.G_CURRENT_RUNTIME_LEVEL allows callers to avoid a function call for messages below the currently configured level. If logging is disabled, the current runtime level is set to a large number, such as 9999, so that no valid message level satisfies the comparison and nothing is logged. This global variable is automatically populated by the FND_LOG_REPOSITORY package during session and context initialization.

Example 65-47 shows sample code that illustrates the use of the global variable FND_LOG.G_CURRENT_RUNTIME_LEVEL.

Example 65-47 Logging a Message in PL/SQL Using FND_LOG.G_CURRENT_RUNTIME_LEVEL

if( FND_LOG.LEVEL_STATEMENT >= FND_LOG.G_CURRENT_RUNTIME_LEVEL ) then
      dbg_msg := create_lengthy_debug_message(...);
      FND_LOG.STRING(FND_LOG.LEVEL_STATEMENT,
           'fnd.form.ABCDEFGH.PACKAGEA.FUNCTIONB.firstlabel', dbg_msg);
end if;
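The same guard pattern applies in Java jobs. A minimal sketch of the idea follows, using java.util.logging as a stand-in for AppsLog: checking the level first means the expensive message string is never built when the level is disabled. The class and method names are invented for illustration.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch only: Logger.isLoggable plays the role of the
// FND_LOG.G_CURRENT_RUNTIME_LEVEL comparison (or AppsLog.isEnabled),
// letting callers skip expensive message construction entirely.
public class LevelGuardSketch {
    private static final Logger LOG =
        Logger.getLogger("oracle.apps.example.LevelGuardSketch");

    static int buildCount = 0;  // counts how often the costly message is built

    static String lengthyDebugMessage() {
        buildCount++;  // stands in for expensive string concatenation
        return "state dump: " + "x".repeat(10_000);
    }

    static void traceStep() {
        // Guard first: with the default INFO level, FINEST is disabled,
        // so lengthyDebugMessage() is never called.
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest(lengthyDebugMessage());
        }
    }

    public static void main(String[] args) {
        traceStep();
        System.out.println("messages built: " + buildCount);
    }
}
```

With the default INFO configuration the guard evaluates to false and the message builder is skipped, which is exactly the cost the PL/SQL comparison against G_CURRENT_RUNTIME_LEVEL avoids.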

Note:

For PL/SQL in a Forms client, use the same APIs. Use the method FND_LOG.TEST() to check whether logging is enabled.

Example 65-48 shows logging message dictionary messages.

Example 65-48 Logging Message Dictionary Messages

if( FND_LOG.LEVEL_UNEXPECTED >=
            FND_LOG.G_CURRENT_RUNTIME_LEVEL) then
        FND_MESSAGE.SET_NAME('FND', 'LOGIN_ERROR'); -- Seeded Message
        -- Runtime Information
        FND_MESSAGE.SET_TOKEN('ERRNO', sqlcode); 
        FND_MESSAGE.SET_TOKEN('REASON', sqlerrm); 
        FND_LOG.MESSAGE(FND_LOG.LEVEL_UNEXPECTED, 
                        'fnd.plsql.Login.validate', TRUE); 
end if;

Using Logging in C

Example 65-49 illustrates the use of logging in a C application.

Example 65-49 Logging in C

#define  AFLOG_UNEXPECTED  6
#define  AFLOG_ERROR       5
#define  AFLOG_EXCEPTION   4
#define  AFLOG_EVENT       3
#define  AFLOG_PROCEDURE   2
#define  AFLOG_STATEMENT   1
 
/* 
** Writes a message to the log file if this level and module is 
** enabled 
*/
void aflogstr(/*_ sb4 level, text *module, text* message _*/);
 
/* 
** Writes a message to the log file if this level and module is 
** enabled. 
** If pop_message=TRUE, the message is popped off the message 
** Dictionary stack where it was set with afdstring() afdtoken(), 
** etc. The stack is not cleared (so messages below will still be 
** there in any case). 
*/
void aflogmsg(/*_ sb4 level, text *module, boolean pop_message _*/);
 
/* 
** Tests whether logging is enabled for this level and module, to
** avoid the performance penalty of building long debug message 
** strings 
*/
boolean aflogtest(/*_ sb4 level, text *module _*/);
 
/* 
** Internal.
** This routine initializes the logging system from the profiles.
** It also sets up the current session and user name in its state.
*/
void afloginit();

65.21.4 Related Links

The following document provides additional information related to subjects discussed in this section:

For more information about managing log files, see the chapter "Managing Log Files and Diagnostic Data" in Oracle Fusion Middleware Administrator's Guide.