5 Generating Reports and Running Other Jobs

The previous section, "Viewing Reports and Other Outputs," described how to find reports and other outputs that have already been generated. If you have the necessary security privileges, you can run new jobs to generate reports on current data and view the resulting reports.

You may also be able to run jobs to load data into Oracle Life Sciences Data Hub (Oracle LSH) from another system, such as SAS or Oracle Clinical, or to generate data files for export out of Oracle LSH to a regulatory agency or partner institution. For more details, see What Types of Jobs Are There?.

In addition to running jobs in the Reports tab, you can run jobs from the My Home page and, if you have the necessary privileges, from the Applications tab. For more details, see Running a Job from Reports, My Home, or Applications.

To track the job's progress, go to the Job Executions section of your My Home page. You can also view any outputs from there. For more information, see Tracking Job Execution and Viewing Reports and Other Outputs.

This section contains information on the following topics:

What Types of Jobs Are There?

Oracle LSH has different types of executables that perform different functions. Depending on your security privileges, you may be able to submit any of these for execution:

  • Programs generate reports and/or transform data and write data to Oracle LSH database tables.

  • Report Sets generate a whole set of reports with a unified table of contents.

  • Load Sets load data into Oracle LSH from another system. Load Sets can load SAS files, text files, or data from an external Oracle system. There are specialized Load Sets for loading data and meta-data from Oracle Clinical. For detailed information on the different types of Load Sets, see "Load Set Types" in the Oracle Life Sciences Data Hub Application Developer's Guide.

  • Data Marts produce files containing data intended for export out of Oracle LSH; for example, for submission to a regulatory agency or to send to a partner. For detailed information on the different types of Data Marts, see "Data Mart Types" in the Oracle Life Sciences Data Hub Application Developer's Guide.

  • Workflows can include any number of the other types of executables united in a single execution process. For example, a Workflow execution could run one or more Load Sets to load the most current data from SAS into Oracle LSH, then run several Programs to transform the data, and finally generate a Report Set with the most current data available in the source system.

Executing any of these executable objects is called a job.

Running a Job from Reports, My Home, or Applications

This section contains the following topics:

Running a Job in the Reports Tab

In the Reports tab you can submit any Execution Setup to which you have security access; in other words, any Execution Setup you can see.

To run any type of executable in the Reports tab, do the following:

  1. Find the Execution Setup you need. You find Execution Setups the same way you do outputs. For more details, see Browsing in the Outputs Screen and Searching.
  2. Click the Execution Setup icon in the Action column for the Execution Setup.

    The system then displays the Execution Setup.

  3. Look at the default Submission Details and change any you need to change. For more details, see Setting Submission Details.
  4. In the Submission Parameters subtab, look at the default values and change any you need to change. For more details, see Setting Load Set, Data Mart, or Program Parameters.
  5. In the Source Data Currency subtab, specify whether you want to operate on the most recent data or on a snapshot based on either a snapshot label or on a refresh timestamp. For more details, see Setting Source Data Currency.

Alternatively, you can click the Output icon in the Outputs tab, click the Job ID hyperlink, and then click Re-Submit.

Running a Job from the My Home Tab

In the My Home tab you can submit any Execution Setup that you have previously submitted. By default, the system uses the Parameter values and data currency used by the previous execution. You can change these if necessary.

Note:

The submission uses the latest version of the executable instance, even if the previous submission used an earlier version.

  1. Find the record of the job in the Job Executions section. You can sort on any column heading by clicking it.
  2. Click the Job ID hyperlink. The Job Execution Details screen for that job opens.
  3. Click Re-Submit. The Submission screen for the Execution Setup you previously used opens.
  4. Change submission details and Parameter settings if necessary; see "Setting Submission Details".
  5. Click Submit. The system submits the job and displays a message with the Job ID. Go back to the My Home page (by clicking the My Home breadcrumb or tab) to view the job's progress.

Running a Job from the Applications Tab

If you have access to the Applications tab, you can go to any executable object instance to which you have security access and click Submit to run the default Execution Setup.

To run a different Execution Setup, select Execution Setup from the Actions drop-down list, and click Submit for the Execution Setup you choose.

Setting Submission Details

The submission detail settings control various aspects of job execution. Depending on the type of object being executed and the way the Execution Setup is defined, you may or may not see all possible submission details. The complete list includes the following details:

Execution Priority

From the drop-down list, select the priority—Low, Normal, or High—that this job should have relative to other jobs.

Submission Type

From the drop-down list, select the timing of the submission:

  • Immediate. The system executes the job as quickly as possible after you click Submit.

  • Deferred. The system displays additional screens to allow you to schedule a single execution at a later time. Enter the date and time you want to execute the job in the fields that appear at the bottom of the section.

  • Scheduled. The system displays additional screens to allow you to schedule multiple executions at regular intervals.

Submission Mode

From the drop-down list, select Full or Incremental. This field applies only to jobs that load data to tables that use Reload processing.

  • Incremental mode compares the records being loaded or written in the current job to those loaded or written to the same tables in the previous job, makes the appropriate updates and insertions, but does not delete any data.

  • Full mode also compares records and performs updates and insertions, but in addition it soft-deletes any records that are not included in the current job.

If you are incrementally adding records to a table, or loading updated versions of different subsets of data, select Incremental. If you are reloading a complete set of the most up-to-date records that may be missing some records due to deletions, select Full. See "Reload Processing" in the Execution chapter of the Oracle Life Sciences Data Hub Application Developer's Guide.
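
The difference between the two modes can be illustrated with a minimal sketch of the comparison logic described above. This is only an illustration; the function and record structures are hypothetical and do not represent the actual Oracle LSH Reload processing implementation.

    # Hypothetical illustration of Reload processing modes; not actual Oracle LSH code.
    def reload(existing, incoming, mode):
        """existing and incoming map a primary key to a record (a dict of column values)."""
        result = {key: dict(record) for key, record in existing.items()}
        for key, record in incoming.items():
            result[key] = dict(record)              # insert new records, update changed ones
        if mode == "Full":
            for key in existing:
                if key not in incoming:
                    result[key]["deleted"] = True   # soft-delete; the data is retained
        return result

    previous = {1: {"name": "A"}, 2: {"name": "B"}}
    current = {1: {"name": "A2"}}                   # record 2 is missing from this load
    print(reload(previous, current, "Incremental")) # record 2 is kept unchanged
    print(reload(previous, current, "Full"))        # record 2 is soft-deleted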

Data Currency

From the drop-down list, select one of the following options. Not all options may be available.

  • Current Immediate Source. The system processes the current data in the tables or data sets that are read by the job.

  • Most Current Available (Trigger Forward Chain). A forward chain refreshes data in some or all tables downstream by running jobs that read from the tables written to by this Program, Load Set, or Workflow, then jobs that read from the tables those Programs write to, and so on. See "Forward Chaining" in the Execution chapter of the Oracle Life Sciences Data Hub Application Developer's Guide. If you select this option, two fields appear:

    • Report Only: If selected, the system does not actually run the job, but determines which objects would be executed as part of the forward chain and lists them in the log file.

    • Desired Top Level Hierarchy: By default this is set to All and all possible objects that are part of the forward chain are included. However, you can limit the scope of the job by selecting a Work Area, Application Area, or Domain that you want to serve as the top level of the hierarchy; objects outside it are not included in the job.

  • Most Current Available (Trigger Backchain). A backchain job checks for more current data and executes jobs along the data flow to bring the most current data possible to the current job. See "Backchaining" in the Execution chapter of the Oracle Life Sciences Data Hub Application Developer's Guide.

  • Specify Snapshot. You can run the job on noncurrent data by specifying a data snapshot—the state of data at a previous point in time. If you select this option, you must specify the snapshot you want to use in the Data Currency subtab. For more details, see Setting Source Data Currency.

    Note:

    The Execution Setup may be defined with a fixed data snapshot that you cannot change.

Blind Break

This Parameter is relevant only when one or more source Table instances either currently or formerly contained blinded data. Special privileges are required to run a job on real data that is either currently blinded or was formerly blinded. Different special privileges are required to see the resulting output(s). The choices available depend both on the state of the data and on your security privileges. They are:

  • Not Applicable. If none of the data has ever been blinded, the only option available is Not Applicable. No special privileges are required.

  • Dummy. If any of the data has ever been blinded, this option appears. If you do not have the privileges required to run the job on real data, this is the only option available. If you do have the privileges required, you see Dummy plus one of the following options, depending on the state of the data and your privileges. Select Dummy to run the job on dummy (not real) data.

    If you are using DMW and all source tables with a Blinding Status of Blinded are blinded at the column, row, or cell level (not the table level), the Dummy Blind Break option allows processing of the masking values in Oracle LSH. These values are stored in the Dummy partition of the target table(s).

    Tip:

    Oracle recommends processing DMW data in DMW, not LSH. In particular, do not run DMW transformations and validation checks in LSH.

  • Real (Blind Break). If any of the data is currently blinded, and you have the required privileges, you can select this option to run the job on the real data, according to your company's policies. The table(s) containing the blinded data remain blinded after you run the job.

  • Real (Unblinded). If any of the data was formerly blinded but is currently unblinded and none of the data is currently blinded, and you have the required privileges, you can select this option to run the job on the real data, according to your company's policies.

Notify on Completion

From the drop-down list, select the conditions in which you want to receive a notification of the job's ending status:

  • Success. If you select Success, you receive a notification if the job ends with a status of Success, Warning, or Failure. This is the default value.

  • Warning. If you select Warning, you receive a notification if the job ends with a status of Warning or Failure.

  • Failure. If you select Failure, you receive a notification if the job ends with a status of Failure.

  • Never. If you select Never, you do not receive a notification no matter what status the job ends with.

Force Duplicate Execution

When you submit a job, Oracle LSH checks the Parameter settings, Blind Break setting, data currency, and executable object instance version and compares them with previous executions of the same executable object.

  • No. If the settings and data currency are the same, so that the job would duplicate a previous job, the system does not run the job.

  • Yes. The system runs the job even if it duplicates a previous job.

    Note:

    In the case of Report Sets, setting Force Duplicate Execution to Yes affects only the Report Set Entries that are selected for inclusion in the job.

Timeout Value

Enter the number of units of time (minutes or hours, for example) past the scheduled execution time after which the system should no longer start the job. For example, if a report is scheduled to run once a day, you might enter a timeout period of 24 hours so that, if the system is down for that period, the system does not attempt to generate a report for that day.

From the drop-down list, select the unit of time you want the system to use, together with the Timeout Value, to determine how long to try to run the job before timing out.

Apply Snapshot Label

From the drop-down list, select the tables to which you want to apply a snapshot label to identify the state of data immediately after this job is executed:

  • Both. The system applies the label to both source and target tables.

  • Targets. The system applies the label only to target tables.

  • None. The system does not apply a label to any table.

Note:

It does not make sense to apply a snapshot label on a job with a repeating execution schedule, because the same label is applied each time.

Also note that when you run a Load Set you can apply a snapshot label only to its target Table instances, because its source tables or files are not located in Oracle LSH. You can select either Both or Targets to apply a label to the target Table instances.

Label

Enter the text of the snapshot label that you want to apply to the Table instances you specified. If the label already exists for another currency of any of the same Table instances, the system automatically moves the label, and the log file contains a record of the change.

Schedule Submission Details

If you select Scheduled in the Submission Type field, the system displays the following fields at the bottom of the section:

  • Frequency Type. From the drop-down list, select Daily, Hourly, Monthly, or Weekly.

  • Start Date. Click on the calendar icon to select the date.

  • Start Time. Enter the time you want the first job of the series to start. Use 24-hour time; for example, enter 23:00 to start the job at 11:00 p.m.

  • Ongoing. Select this radio button if you do not want to set an end date for the schedule.

  • End Date. Click on the calendar icon to select the end date, if you want to set one.

  • End Time. Enter the time after which you want the scheduled job executions to stop. Use 24-hour time.

  • Frequency Details. The system displays different fields depending on the Frequency Type you selected. Enter values as directed to complete the schedule.

Setting Output Suffixes

To distinguish the output of this job from the output of a different execution of the same Program or other executable object, you can enter text that the system will append to the title and description of the output during this execution. The system applies the text you enter to all Planned Outputs of this job, including the log file. The system does not apply the suffix to the coversheet.

  • Output Title Suffix. The system appends a space and then this text after the output's title. If the suffix length plus the title length together are more than 200 characters, the system truncates the suffix.

  • Output Description Suffix. The system appends a space and then this text to the output's description. If the suffix length plus the description length together are more than 2000 characters, the system truncates the suffix.

    Note:

    If the system truncates these values, you can resubmit with shorter values for the suffixes or edit the title and description by going to Job Executions in the My Home page, clicking the Job ID, clicking Re-Submit, and modifying as necessary.

Setting Source Data Currency

When you run a report or other job, you may want to use the most current data, data snapshots from a previous point in time, or data snapshots that have been labeled because they represent data at a significant stage; for example, Interim or Final. You may need to use source data with different labels; for example, because you are reporting on data from different studies. You may also want to use a label for some studies or tables and the most current data for other studies or tables.

The Submission screen's Data Currency tab displays all the source Table instances to be used for that submission, arranged by Work Area. The default setting for each table is Current for current data. You can specify the source data currency you want using lists of values at three levels:

  • for all source Table instances, across Work Areas

  • for all source Table instances in a particular Work Area

  • for individual source Table instances

The list of values at each level provides these options:

  • Snapshot Labels. The list of values includes every label that is applied to any source Table instance to be used for the submission within the scope of the list of values: across Work Areas, within a single Work Area, or for a single Table instance. When you select a label, the system displays it for all Table instances in scope that have that label and that do not already have a label displayed.

  • Most Current Label. When you select this option, for each Table instance in the scope of the list of values, the system displays the label with the most recent timestamp, whatever it may be. The system does not change the data currency setting for Table instances that have no labels or for Table instances that already have a label displayed.

  • Most Current Data. The system overwrites all labels you have applied and resets all source Table instances in the Work Area to use the most current data.

  • Clear Currency. The system removes all currency settings.

  • No Change (represented by a blank line). This option allows you to explore the options in the drop-down list and return to the previous setting even if you do not remember what it was. This option is not required at the across-Work Area level.

The current setting for each Table instance is displayed in the columns Source Currency (for labels) and Refresh TS (for timestamps).

The settings you select apply only to the current submission. After you submit the job, the settings return to the default value, which is to use the current data. If you want to reuse the settings you specified or make a few changes, go to the My Home page, click the Job ID, and click Re-Submit. You can modify all settings as necessary.

When you use the Most Current Label or specify a specific snapshot, you can work iteratively; the system preserves your first selections as you continue to work on additional Table instances.

This section contains the following topics:

Example 1

Table instance DEMOG has the following currencies:

  • 01JAN = INTERIM1

  • 01FEB = INTERIM2

  • 01MAR

  • 01APR = FINAL

  • 01MAY

Table instance VIT has the following currencies:

  • 01JAN = INTERIM1

  • 15FEB

  • 20MAR

You can select:

  • A Snapshot Label. If you select FINAL, the system displays FINAL for DEMOG and does not change the default setting of Current for VIT because VIT does not have the label FINAL. If you then select INTERIM1, the system displays INTERIM1 for VIT and does not change the setting for DEMOG because it already has a label displayed.

  • Most Current Label. Given the same Table instances at their default Current setting, if you selected Most Current Label, the system would display FINAL for DEMOG and INTERIM1 for VIT at the same time because those are the most recent timestamps with a label for each Table instance.

  • Most Current Data. In either case, if you selected Most Current Data after setting labels for the two Table instances, the system would reset the data currency for both Table instances to Current. If the submission ran with these settings, the system would process the 01MAY timestamped data in DEMOG and the 20MAR timestamped data in VIT.
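
The selection rules shown in Example 1 can also be summarized in a short sketch. This is an illustration only; the data structures and functions are hypothetical and are not part of Oracle LSH.

    # Hypothetical sketch of the data currency selection rules shown in Example 1.
    # Each table maps a refresh timestamp to its snapshot label (None = no label).
    DEMOG = {"01JAN": "INTERIM1", "01FEB": "INTERIM2", "01MAR": None,
             "01APR": "FINAL", "01MAY": None}
    VIT = {"01JAN": "INTERIM1", "15FEB": None, "20MAR": None}
    TABLES = {"DEMOG": DEMOG, "VIT": VIT}
    ORDER = ["01JAN", "01FEB", "15FEB", "01MAR", "20MAR", "01APR", "01MAY"]  # oldest first

    def apply_label(settings, label):
        """Display the label for tables that have it and do not already display a label."""
        for name, snapshots in TABLES.items():
            if settings[name] == "Current" and label in snapshots.values():
                settings[name] = label
        return settings

    def most_current_label(settings):
        """For each table still set to Current, display its most recently labeled snapshot."""
        for name, snapshots in TABLES.items():
            if settings[name] == "Current":
                labeled = [ts for ts in ORDER if snapshots.get(ts)]
                if labeled:
                    settings[name] = snapshots[labeled[-1]]
        return settings

    defaults = {"DEMOG": "Current", "VIT": "Current"}
    print(apply_label(dict(defaults), "FINAL"))   # {'DEMOG': 'FINAL', 'VIT': 'Current'}
    print(most_current_label(dict(defaults)))     # {'DEMOG': 'FINAL', 'VIT': 'INTERIM1'}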

Example 2

Suppose you are running a report on data from five studies, each of which uses a different Work Area. Three of the studies are closed and all their Table instances are labeled FINAL, one study is near completion, and the other has completed its first interim analysis. You could select the label FINAL at the across-Work Area level to pick up all the Table instances in the closed studies and some of the Table instances in the nearly completed study. You could then select Most Current Label at the across-Work Area level to get the most recent data with a snapshot label in the noncompleted studies. All other data would still be set to Current.

Limit Snapshot by Cut-Off Date

Use this option to limit the snapshots available for selection to those up to and including a particular point in time. For example, you may not want to use any labels that have been applied to data in the last month. You can select a cut-off date from a calendar.

The cut-off date affects the snapshot labels displayed in all the Data Currency lists of values. By default, the cut-off date is the current date (sysdate).

Setting Data Currency Across All Work Areas

If you want to use a label that is assigned to data in different Work Areas, use the Use Snapshot drop-down at the top of the Data Currency tab:

  1. (Optional) Select a cut-off date; see "Limit Snapshot by Cut-Off Date".

  2. The Use Snapshot drop-down displays all snapshot labels assigned to any of the data for the submission, arranged by timestamp with the most recent first, with the standard options at the bottom. Do one of the following:

    • Select a snapshot label and click Apply. The system displays that label for each Table instance that has the label, regardless of which Work Area the Table instance is in. Table instances that do not have the selected label are not affected.

    • Select Most Current Label and click Apply. The system displays the most recent label associated with every source Table instance that has a label, regardless of the text of the label or what Work Area the Table instance is in.

    • Select Most Current Data and click Apply. The system overwrites all labels you have applied and resets all source Table instances to use the most current data.

    • Select Clear Currency and click Apply. The system removes all currency settings.

Setting Data Currency for Table Instances in a Single Work Area

To set source data currency for multiple Table instances in the same Work Area, use the Snapshot Label drop-down in the Work Area's row:

  1. (Optional) Select a cut-off date. For more details, see Limit Snapshot by Cut-Off Date.

  2. The Use Snapshot drop-down in each Work Area's row displays all snapshot labels assigned to any of the data in the Work Area for the submission, arranged by timestamp with the most recent first, with the standard options at the bottom. Do one of the following:

    • Select a snapshot label. The system displays that label for each Table instance in the Work Area that has the label. Table instances that do not have the selected label are not affected.

    • Select Most Current Label. The system displays the most recent label associated with every source Table instance in the Work Area that has a label, regardless of the text of the label.

    • Select Most Current Data. The system overwrites all labels you have applied and resets all source Table instances in the Work Area to use the most current data.

    • Select Clear Currency. The system removes all currency settings from all source Table instances in the Work Area.

    • Select the blank line to leave any previous settings unchanged.

Setting Data Currency for a Single Source Table Instance

To set source data currency for one Table instance at a time, use the Snapshot Label drop-down in the Table instance's row:

  1. (Optional) Select a cut-off date. For more details, see Limit Snapshot by Cut-Off Date.

  2. The Use Snapshot drop-down in each Table instance's row displays all snapshot labels assigned to any of the data in the Table instance, arranged by timestamp with the most recent first, with the standard options at the bottom. Do one of the following:

    • Select a snapshot label. The system displays that label for the Table instance.

    • Select Most Current Label. The system displays the most recent label associated with the Table instance.

    • Select Most Current Data. The system overwrites any label you have applied and resets the Table instance to use the most current data.

    • Select Clear Currency. The system removes the currency settings from the Table instance.

    • Select the blank line to leave the previous setting unchanged.

Note:

You cannot specify a snapshot in a source system to load into Oracle LSH. Oracle LSH loads the most current data available in the location you specify. Oracle LSH records the data currency of the loaded data as follows:

  • For Oracle Clinical Data Extract Oracle and SAS Views, the system uses the timestamp of the most recent batch validation job in Oracle Clinical (unless you specify a Current view, in which case the system uses the sysdate).

    If you run a Load Set for which the system uses the batch validation timestamp more than once before batch validation runs in Oracle Clinical (and the Load Set version and Parameter settings remain the same), the system considers the run a duplicate job and does not execute the load unless you specify Force Duplicate Execution.

  • For all other loads from Oracle systems, the system uses the sysdate at the time of the Load Set execution.

  • For loads of data in files (SAS and text) there is no data currency given. However, you can apply a label to the target Table instances.

Setting Load Set, Data Mart, or Program Parameters

In many Execution Setups you can set Parameter values to determine how or on what data a job is executed. Some Parameters are required; if you do not supply a value, the execution fails. Some Parameters may have a fixed value that you cannot change.

Each type of Load Set and Data Mart, and some types of Programs, have different predefined Parameters that are explained in the sections below.

This section contains the following topics:

Setting Parameters

In the Parameters subtab of a Load Set, Data Mart, or Program, check the current settings of the Parameters and change them if necessary.

  1. In the Submission Parameters subtab, the system displays the following information:
    • Parameter Required? If set to Yes, the Parameter must have a value or the job cannot run. If set to No, a value is not required.

    • Parameter Value. If the Parameter currently has a value, the system displays the value.

  2. Set the Parameters that are required and any others you choose to set. Settable Parameters have an enterable field in the Parameter Value column. Depending on how each Parameter is defined, you can set its value in one of the following ways:
    • Enter a value. Some Parameters are defined so that the system checks the value you enter when you submit the Execution Setup.

      Note:

      Some Parameters may accept multiple values. In this case, enter a comma (,) between one value and the next. Never enter a comma as part of a single value; the system interprets it as the delimiter between two values.

    • Select a value from the drop-down list.

    • Click the Search icon. The system displays the values the Parameter is allowed to have. Click the Quick Select icon for the value you want, or select the checkbox in the Select column and then click the Select button. If multiple values are allowed, select multiple checkboxes or click Select All.

Load Set Submission Parameters

Different types of Load Sets have different predefined parameters. See the following topics for details:

SAS

Using a SAS Load Set you can load a single data set or multiple data sets contained in a SAS transport file. The predefined Parameters for SAS Load Sets are:

  • BLOB ID (Temporary). Do not enter a value for this parameter. It is for internal use only.

  • Dataset File Name. Browse to the file you want to upload. If you want to use a remote file, leave this field blank and enter appropriate values for the following parameters.

  • Load from Server OS. You have the option to load a data set from your local computer or from a remote system. Set this parameter to Yes (default is No) if your data set is either on the Oracle LSH DP server or on another computer that has its drives NFS-mounted on the DP server computer. If you set this parameter to No, you must upload a data set file from your local computer.

    Note:

    The system can load remote SAS data set files only from an Oracle LSH DP Server that is running the SAS and the SQL*Loader services. See the chapter on Setting Up Services in the Oracle Life Sciences Data Hub System Administrator's Guide for more information.

  • Server OS Filename. If you set Load From Server OS to Yes, enter the absolute path of the remote data set file; otherwise, leave this parameter blank.

  • Maximum Allowed Errors. Tolerance factor; the maximum number of invalid records you are willing to accept before SQL*Loader stops the loading process and marks the Load Set job as failed. The default value of this parameter is 0.

    Note:

    If you are uploading a SAS CPORT or XPORT file, note that the value you enter here applies to each data set contained within the CPORT or XPORT file.

  • Direct. If set to Yes (default is No), the system uses the direct path INSERT to load data from the SAS dataset file into the Oracle LSH target Table instance.

    Note:

    When writing to multiple Table instances, if not all of the Table instances are of the Transactional High Throughput processing type, the system uses the direct path INSERT for the Transactional High Throughput Tables and the conventional INSERT for Tables of other data processing types. The Load Set job completes with warnings and writes warning messages to the job log.

  • Drop and Recreate Indexes. If set to Yes (default), and if the value of the parameter Direct is Yes, the system drops all non-unique indexes before running a data-loading job and recreates the non-unique indexes after loading data into the target Tables. The system does not drop indexes if Direct is set to No.

Text

Using a Text Load Set, you can load a single text file in either fixed or delimited format. The predefined Parameters for Text Load Sets are:

  • Text Data File Name. Browse for the text data file you want to load.

  • Data Format. Choose either Fixed or Delimited:

    • Fixed. The system uses the Load Set's target Table Descriptor Column definitions to interpret the data type and length of the values in the text file. The file must contain the correct number of characters for each value in each column of each record.

    • Delimited. With delimited records, you specify the character used in the source file between values as the separator character. Values in the same column can have different lengths, up to the maximum allowed. The system loads the contents between separator characters into each consecutive Column of the target Table instance. (A short sketch of preparing such a file appears at the end of this section.)

      For example, if you have a Table instance with Columns Patient ID, Patient Initials, and Date of Birth, and the separator character was a comma, and the date format was DDMMYYYY, the first two records might look like this:

      54602,EKP,04081969

      66781,BAH,22011975

  • Delimiter Character. (Required for Delimited-format Load Sets only.) Specify the character used as the value delimiter in the source text file. In the example above, the comma (,) is the separator character. The default character is the comma (,). You can use any two unicode characters as a delimiter. For example, to use tab as the delimiter character, enter \t.

    Note:

    Even if you specify that this is a fixed format Load Set, this Parameter is displayed. The system does not use this value.

  • Enclosing Character. (For Delimited-format Load Sets only; recommended but not required.) If any record value may contain the delimiter character, you need an enclosing character. Specify the character used to enclose each value. The default character is double quotation marks (").

    For example, if a double quotation mark (") were the enclosing character, the same two records would look like this in the source text file:

    "54602","EKP","04081969"

    "66781","BAH","22011975"

    You can use any single unicode character as an enclosing character.

    Both delimiter and enclosing characters should be characters that never appear in the data content of the file.

    The delimiter character and the enclosing character must be different. If they are the same, the Load Set execution will fail.

    Note:

    Even if you specify that this is a fixed format Load Set, this Parameter is displayed and is required. The system does not use this value.

  • Skip Records. If you prefer not to load records at the beginning of the file, enter the number of records you want the system to skip. The default value is zero (0).

  • TMP BLOB ID. This Parameter is for internal use only. Do not modify it.

  • Max Errors. Tolerance factor; the maximum number of invalid rows you are willing to tolerate before SQL*Loader stops the load process. If you do not enter a value here, the job fails on the first error and generates an error file. The default value is 99999.

  • Date Format. Enter the date format used in the source text file, if any. If you do not enter a value, the system uses the format DD-MON-YYYY. If you enter a value, you must enter a valid Oracle to_date format string. Valid formats include:

    DD-MM-YYYY, DD-MON-YYYY, DD.MM.YYYY, DD.MON.YYYY, DD/MM/YYYY, DD/MON/YYYY, MM-DD-YYYY, MM.DD.YYYY, MM/DD/YYYY, YYYY-MM-DD, YYYY-MON-DD, YYYY.MM.DD, YYYY.MON.DD, YYYY/MM/DD, YYYY/MON/DD

  • Load From Server OS. You have the option to load a file from your local computer or from a remote system. Set this parameter to Yes (default is No) if your file is either on the Oracle LSH DP server or on another computer that has its drives NFS-mounted on the DP server computer. If you set this parameter to No, you must upload a text file from your local computer when running the Text Load Set job by providing its name in the Text Data File Name parameter.

    Note:

    The system can load remote text files only from an Oracle LSH DP Server that is running the SQL*Loader service. Contact your Oracle LSH administrator for more information.

  • Server OS Filename. If you set Load From Server OS to Yes, enter the absolute path of the remote file, including the filename and extension; otherwise, leave this parameter blank.

  • Direct. Set to Yes if you want to use the direct path INSERT to load data from the text file into the Oracle LSH target Table. If the Table Descriptor mapped to the Text Load Set is not of the Transactional High Throughput processing type, the system loads data using the conventional INSERT and writes warning messages to the job log.

    Note:

    See the Oracle Database Utilities Guide (part number B14215-01) for more information on this SQL*Loader Command-Line Parameter.

  • (Optional) Rows. For direct path loads (when you set Direct to Yes), this parameter indicates the number of rows that SQL*Loader reads together at one time from the datafile before writing them to the target Table and committing to the database. If you leave this parameter blank, by default the system reads all the rows from the datafile and then writes the data to the target Table.

    For conventional path loads, (if you set Direct to No), this parameter indicates the number of rows that SQL*Loader assigns to the bind array. If you leave this parameter blank, Oracle LSH calculates the number of rows.

    Note:

    A very high or a very low value for Rows can adversely affect the system's performance. Enter a value only if you are absolutely sure that you need to change this.

    See the Oracle Database Utilities Guide (part number B14215-01) for more information on this SQL*Loader Command-Line Parameter.

  • Drop and Recreate Indexes. If set to Yes (default), and if the value of the parameter Direct is set to Yes, the system drops all non-unique indexes before running a data-loading job and recreates the non-unique indexes after loading data into the target Tables. The system does not drop indexes if Direct is set to No.

  • Recoverable. Set to Yes (default) if you want the system to recover data in the event of a failure. If set to No, the data becomes unrecoverable because SQL*Loader does not maintain a redo log file for the data.

    Note:

    For more information on the SQL*Loader UNRECOVERABLE clause, see the Oracle Database Utilities Guide (B14215-01) at:

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/part_ldr.htm#i436326
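
As an illustration of how the Delimiter Character, Enclosing Character, and Date Format parameters relate to the contents of the source file, the following sketch writes a small delimited file like the Patient ID, Patient Initials, and Date of Birth example earlier in this section. It is only an example of preparing a source file; the column layout and file name are hypothetical and must match your Load Set's target Table Descriptor.

    # Hypothetical sketch: writing a delimited source file for a Text Load Set.
    # The columns and values match the Patient ID / Initials / Date of Birth example above.
    from datetime import date

    DELIMITER = ","         # value you would enter for the Delimiter Character parameter
    ENCLOSER = '"'          # value you would enter for the Enclosing Character parameter
    DATE_FORMAT = "%d%m%Y"  # produces the DDMMYYYY dates used in the example

    records = [
        ("54602", "EKP", date(1969, 8, 4)),
        ("66781", "BAH", date(1975, 1, 22)),
    ]

    with open("patients.txt", "w") as source_file:
        for patient_id, initials, date_of_birth in records:
            values = [patient_id, initials, date_of_birth.strftime(DATE_FORMAT)]
            # Enclose each value so a delimiter inside the data cannot split a record.
            line = DELIMITER.join(ENCLOSER + value + ENCLOSER for value in values)
            source_file.write(line + "\n")
    # The file now contains:
    # "54602","EKP","04081969"
    # "66781","BAH","22011975"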

Oracle Tables and Views

Using an Oracle Tables and Views Load Set, you can load one or more tables or views from an Oracle system. The predefined Parameters for Oracle Tables and Views are:

  • Remote Location. From the drop-down list, select the Remote Location from which you want to load tables or views.

    If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in "User Connections". You must have a valid username and password on the remote location to run the Load Set.

  • Direct. Set this parameter to Yes if you want to use the direct path INSERT to load data into target Table instance(s) of Transactional High Throughput data processing type.

    Note:

    When writing to multiple Table instances, if not all of the Table instances are of the Transactional High Throughput processing type, the system first inserts data into the Transactional High Throughput Tables using the direct path INSERT and then inserts data into Tables of other data processing types using the conventional INSERT. The Load Set job completes with warnings and writes warning messages to the job log.

  • Drop and Recreate Indexes. This parameter applies only when you set Direct to Yes.

    If set to Yes (default), the system drops all non-unique indexes before running a data-loading job and recreates the non-unique indexes after loading data into the target Table instances. The system does not drop indexes if Direct is set to No.

    This parameter applies only to Load Sets loading data into Table(s) of Transactional High Throughput data processing type.

  • Logging. This parameter applies only when you set Direct to Yes.

    If set to Yes (default), in the event of a failure the system can recover data committed to the database. If set to No, the system does not maintain a redo log file and cannot recover any data when a database failure occurs.

Oracle Clinical Global Metadata

Oracle Clinical Global Metadata Load Sets load the OC Questions (with DVGs, if any) and Question Groups you have defined in a particular Oracle Clinical Domain and convert them to Oracle LSH defined objects. The predefined Parameters for Oracle Clinical Global Library Meta-Data Load Sets are:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load Oracle Clinical Global Library meta-data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in "User Connections". You must have a valid username and password on the remote location to run the Load Set.

  • Library Domain Name. From the list of values, select the name of the Oracle Clinical Global Library Domain that you want to load.

Oracle Clinical Data Extract Views

Oracle Clinical Data Extract Oracle Views Load Sets load the OC DX Oracle views and data. They have the following predefined Parameters:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load data and meta-data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in "User Connections". You must have a valid username and password on the remote location to run the Load Set.

  • View Type. From the list of values, select the Oracle Clinical Study or Study Set Access Account that maintains the views you want.

Oracle Clinical Data Extract SAS Views

Oracle Clinical Data Extract SAS Views Load Sets load OC DX SAS views, data, and meta-data. They have the following predefined Parameters:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in "User Connections". You must have a valid username and password on the remote location to run the Load Set.

  • View Type. From the list of values, select the Oracle Clinical Study or Study Set Access Account that maintains the views you want.

Oracle Clinical Study Data

Oracle Clinical Study Data Load Sets load Study-specific nonpatient data, including discrepancies, DCFs, page tracking, patient statuses, and more. They have the following predefined Parameters:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in User Connections. You must have a valid username and password on the remote location to run the Load Set.

  • Study. Specify the study from which to load data.

Oracle Clinical Design and Definition

Oracle Clinical Design and Definition Load Sets load the meta-data on which CRFs are based, including DCMs, DCIs, Procedures, Copy Groups, and DX Queries and Templates. They have the following predefined Parameters:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load meta-data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in User Connections. You must have a valid username and password on the remote location to run the Load Set.

  • Study. Specify the study from which to load data.

Oracle Clinical Labs

Oracle Clinical Labs Load Sets load lab reference ranges and associated meta-data from Oracle Clinical Labs-related tables. They have the following predefined Parameters:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load labs meta-data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in "User Connections". You must have a valid username and password on the remote location to run the Load Set.

  • Lab. Specify the lab for which to load data.

Oracle Clinical 4.5 Stable Interface

Oracle Clinical 4.5 Stable Interface Load Sets load any Oracle Clinical table that is documented in the Oracle Clinical 4.5 Stable Interface Guide.

They have one predefined Parameter: Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load data or meta-data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in User Connections. You must have a valid username and password on the remote location to run the Load Set.

Oracle Clinical Randomization

Oracle Clinical Randomization Load Sets load real or dummy treatment pattern information for Oracle Clinical studies. They have the following predefined Parameters:

  • Remote Location. Click the Search icon, search, and select the Remote Location from which you want to load real or dummy data. If the correct location does not appear on the list, you may need to set up your own connection to it. Click the Preferences link at the top of the screen and follow instructions in User Connections. You must have a valid username and password on the remote location to run the Load Set.

  • Study. Specify the study for which to load real or dummy treatment codes.

Data Mart Submission Parameters

Different types of Data Marts have different predefined parameters. See the following topics for details:

Text

Text Data Marts put the Oracle LSH data into one or more text files. They have the following predefined Parameters:

  • Zip Result Flag.

    • No. If set to No, the system does not generate a zipped file. The Data Mart output consists of one text file for each Table Descriptor and the log file. The system disregards the defined Export Filename.

    • Zip. If set to Zip, the system generates a zipped file that includes the text files generated for each Table Descriptor and the log file.

    • Both. If set to Both, the system generates a text file for each Table Descriptor, creates a zipped file that contains all the text files and the log file, and also leaves a copy of each text file outside the zipped file.

  • First Row Desc. If you are generating a fixed-format Text Data Mart, this Parameter must have a value. It determines the way the Data Mart handles the first row for each Table Descriptor:

    • Yes. If set to Yes, the system inserts the Column names as the first row in the text file for each Table Descriptor. If the Column name is too long for its length, the system truncates it. If it is a different format, the system inserts a warning in the Data Mart log file.

    • No. If set to No, the system starts the file with the data of the first record, not the column names.

  • Operating System. Select your operating system: UNIX or MS Windows. The system inserts the appropriate carriage return for the operating system at the end of every record.

  • Separator Character (Delimited-format Text Data Marts only). Oracle LSH appends the separator character (delimiter) to every column value of every row (except the last value, where there is a carriage return instead) in order to clarify where one value ends and the next one begins.

    The separator character should not appear as part of any column value of any record. The default value is a comma (,). You can enter any other single character.

  • Enclosing Character (Delimited-format Text Data Marts only). If you select Yes for the Use Enclosing Character Parameter, the system prepends and appends the enclosing character to every character and date column value of every record. The character should be one that does not appear as part of any column value of any record. The default value is a double quotation mark (").

  • Use Enclosing Character (Delimited-format Text Data Marts only). Select Yes or No to determine whether Oracle LSH uses the enclosing character.

Oracle Export

The Oracle Export Utility requires the following parameters. More information is available in the Oracle Export Utility documentation.

  • Compress. Specifies whether Oracle Export exports data into a single extent. The default value is Y.

  • Statistics. Specifies the type of database optimizer statistics to generate when the exported data is imported. Options are ESTIMATE, COMPUTE, and NONE. The default value is ESTIMATE.

SAS

SAS Data Marts have the following Parameters:

  • Mode. Select the mode that corresponds to the type of output you need: CPORT, XPORT, or Dataset. The default value is CPORT.

  • Zip Result Flag. This Parameter has an effect only for SAS Data Marts in Dataset mode. CPORT and XPORT Data Marts cannot be zipped.

    For SAS Dataset Data Marts, if set to No, the system does not generate a zipped file. The Data Mart output consists of one sas7bdat file for each Table Descriptor, plus the log file. If set to Zip, the system generates a zipped file that includes the sas7bdat file generated for each Table Descriptor and the log file. The system also generates and uploads the individual SAS data sets for each Table Descriptor as separate outputs.

Program Submission Parameters

This section contains the following topics:

Oracle BI Publisher Programs

Oracle LSH BI Publisher Programs have the following predefined submission Parameters:

  • BIP Report Output Format. Contains a list of output formats supported by BI Publisher. Select the desired report output format. An Oracle LSH Definer may have set a value for this Parameter from an Oracle LSH BI Publisher Program. You can change this value to generate a report in another format.

  • BIP Template. This refers to the layout template that you need to attach to a BI Publisher report. You must type the name of the layout template (the value in the Name field from the BI Publisher Layout screen) in the text box. A template name may already be present that an Oracle LSH Definer set when creating the Program. Do not change this name unless you want to use another template and are sure its template file exists in your local system.

    Note:

    Refer to the BI Publisher Report Designer's Guide for complete details. You can browse through the documentation online and download what you need from the Oracle Technology Network. To view the list of available documentation for the Oracle Business Intelligence Suite Enterprise Edition (version 10.1.3.4), visit http://download.oracle.com/docs/cd/E10415_01/doc/nav/portal_booklist.htm.

Running Report Sets and Workflows

See the following topics on generating Report Sets and Workflows:

Instances Subtab

The submission screen displays different information for Report Sets and Workflows in the Instances subtab. See the following topics for details:

Report Sets

For Report Sets, you see the Report Set structure. Expand the Report Set node (+) to see all chapters, expand chapters to see sections, and so on. You can Expand All, expand a particular node, or select a Focus icon to hide the other nodes. (To see all nodes again, click the Report Set title just below Expand All and Collapse All.)

Click any chapter or section's hyperlink to set its Report Set Entry Parameters or the Parameters of its assigned Program instance. For more details, see Setting Parameters for Report Set and Workflow Submission.

Check the box in the Include column to include a chapter or section (Report Set Entry) in the execution. Uncheck the box to exclude that Report Set Entry.

Workflows

For Workflows, you can expand the Workflow node (+) to see all executable objects contained in the Workflow. Click any object's hyperlink to view or set its submission Parameters. See Setting Load Set, Data Mart, or Program Parameters for information on setting Parameters for each object type.

Report Sets and Workflows

The Instances subtab displays the following information:

  • Full Title. For Report Sets, this column displays each Report Set Entry's concatenated full title that includes its Entry Number Prefix, Parent Number, Delimiter, Entry Number, Entry Number Suffix, and Title.

    For Workflows, this column displays the name of each executable object contained in the Workflow.

  • Ready. If a checkmark in a green circle is displayed, all required Parameters associated with the object on that line have a value. For Parameters with a static list of values, the system checks if the value is valid.

    If an X in a red circle is displayed, at least one Parameter does not have a value, or has an invalid value. See Ready and Refresh.

  • Validation Status. The validation status of the object on that line.

  • Volume Name. (Applies only to Report Sets) If the Report Set Entry begins a new volume of the Report Set, the system displays the volume name.

  • Program Name. (Applies only to Report Sets) If a Program instance is assigned to the Report Set Entry, the system displays its name.

  • Program Checked Out By. (Applies only to Report Sets) If the assigned Program instance is checked out, the system displays the username of the person who checked it out.

  • Planned Output Name. (Applies only to Report Sets) If a Planned Output of the assigned Program instance is assigned to the Report Set Entry, the system displays its name.

  • Filename Reference. (Applies only to Report Sets) If there is a Planned Output assigned, the system displays its filename.

Ready and Refresh

Use the Ready column and the Refresh button to ensure that all required Parameters have values and to validate the values before you submit the job. You cannot submit the job until the Parameter values are ready.

The system displays the Ready icon (below) if all required Parameters associated with the object on that line have a value and if all Parameters with a static list of values have a valid value. The system checks all Report Set Entries that are children of the Report Set instance or Report Set Entry for which the icon is displayed; if any child Report Set Entry is not ready, the Not Ready icon is displayed for the parent as well.

The system displays the Not Ready icon (below) if any required Parameter does not have a value, or if any Parameter with a static list of values has an invalid value.

See the following topics for details:

Ready
The Ready icon is a green check mark.
Not Ready
The Not Ready icon is a red cross mark.

Use these icons to find missing and invalid values, fix them, and then click the Refresh button at the top of the screen. The system then validates all values for all included Report Set Entries or the whole Workflow at once, including Parameters whose values are passed from another Parameter and those that are validated programmatically, and resets the icons in the Ready column.

Note:

If you have included or excluded Report Set Entries since the last time you refreshed, you may need to click Refresh before you can submit the job.

Setting Parameters for Report Set and Workflow Submission

This section contains the following topics:

Program Parameters

A Report Set contains Programs that may have Parameters, and a Workflow may contain Programs, Load Sets, or Data Marts that may have Parameters. You may need to set values for these Parameters to determine which data the system processes.

See the following topics for details:

Report Sets

To see Program Parameters in a Report Set, do the following:

  1. Click the hyperlink for a Report Set Entry (chapter or section). The Report Set Entry Execution Setup screen opens.

  2. Click the Program subtab. The system displays the Parameters of the Program assigned to the Report Set Entry. This Program generates the report associated with the Report Set Entry's chapter or section.

  3. Follow instructions in Setting All Parameters for a Report Set or Workflow Submission.

Workflows

To see Parameters of Load Sets, Programs, or Data Marts contained in a Workflow, do the following:

  1. Click the hyperlink for the Load Set, Program, or Data Mart. The Execution Setup screen for that object opens.

  2. Click the Program subtab. The system displays the Parameters.

  3. Follow instructions in Setting All Parameters for a Report Set or Workflow Submission.

Report Set and Workflow Parameters

In the Parameters subtab there may be Parameters set up specifically to share their values with other Parameters in the Report Set, Report Set Entry, or Workflow.

See Setting All Parameters for a Report Set or Workflow Submission for instructions on setting these Parameters.

Post-Processing Parameters

The following Parameters are used to determine whether PDF post-processing occurs, and if so, how to format the Report Set.

Depending on how the Report Set is defined, you may not be able to change these values or even see all these Parameters. See Report Set and Workflow Parameters for instructions.

This section contains the following topics:

Post-Process

(Report Set-level only) If set to Yes, the Report Set uses post-processing to generate one or more volumes of concatenated reports in PDF format. If set to No, the system disregards all the other Post-Processing and Overlay Templates Parameters. The Programs run and produce individual outputs.

Post-processing may take a long time, so you may prefer to run a Report Set without it while you are developing the Report Set and are primarily concerned with testing its Programs.

Paper Size

Select either A4 (European and Japanese standard) or US Letter (North American standard).

Note:

The system uses this setting to determine which OTD Files to use from the specified Overlay Template Definition (OTD). Setting this Parameter does not guarantee that your reports will print correctly on the selected paper size; that depends on the OTD file itself and on the output of the Program.

Language

In Oracle LSH there is only one option: English, United States.

Start Page Number

(Report Set-level only) Enter a number. The page number displayed on the first page of the Report Set is always one (1); the value of this Parameter does not change that. However, if your Report Set is a continuation of another Report Set that you have defined separately, you may want to enter the number of pages in that first Report Set as the value here. XML Publisher displays page numbers as "1 of x," "2 of x," and so on, where x is the number of pages in the current Report Set. If you enter a value of 400, XML Publisher displays page one as "1 of (x+400)."
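
For example (hypothetical numbers): if the preceding Report Set has 400 pages and the current Report Set has 150 pages (x = 150), entering 400 here makes XML Publisher label the pages of the current Report Set "1 of 550," "2 of 550," and so on, up to "150 of 550."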

Watermark Text

Enter the text, if any, that you want in the watermark. (A watermark is text printed across the page so that it appears to be beneath the content of the report.) If the Overlay Template Definition file you specify does not have a watermark defined, the system ignores this value.

If you want the watermark to display the current validation status (Development, Quality Control, or Production) of the Report Set or Report Set Entry, enter: <VS>

To substitute your own terminology for any of the three validation statuses, enter three pipes (|) after the VS, each followed by the value you want to display when the validation status is Development, Quality Control, or Production, respectively. For example:

  • To display "Dev," "QC," or "Final," enter: <VS|Dev|QC|Final>

  • To display no watermark if the validation status is Production, but "Dev" or "QC" if the validation status is Development or Quality Control, enter: <VS|Dev|QC|>

You can also have a string of fixed text on either side of the validation status or its equivalent, or only fixed text.

  • To display the text y before the validation status and the text x after it, enter: y <VS|Dev|QC|Final> x

  • To display the string xyz by itself, enter: 'xyz'

Table 5-1 Examples of Validation Status-Based Watermark Values

Parameter Value            Development    Quality Control    Production
<VS>                       Development    Quality Control    Production
<VS|Dev|QC|Final>          Dev            QC                 Final
<VS|Dev|QC|>               Dev            QC                 <none>
y <VS|Dev|QC|Final> x      y Dev x        y QC x             y Final x
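
The substitution rules shown in Table 5-1 can also be summarized in a short sketch. This is purely illustrative and is not the Oracle LSH or XML Publisher implementation; the function name resolve_watermark and its arguments are hypothetical.

  # Illustrative only; restates the <VS|...> substitution rules, not Oracle LSH code.
  def resolve_watermark(parameter_value: str, validation_status: str) -> str:
      """Return the watermark text for one validation status."""
      start = parameter_value.find("<VS")
      end = parameter_value.find(">", start)
      if start == -1 or end == -1:
          return parameter_value                      # fixed text only, no token
      token = parameter_value[start + 1:end]          # e.g. "VS|Dev|QC|Final"
      parts = token.split("|")
      statuses = ["Development", "Quality Control", "Production"]
      if len(parts) == 1:                             # plain <VS>
          replacement = validation_status
      else:                                           # <VS|dev|qc|prod> form
          replacement = parts[1 + statuses.index(validation_status)]
      return (parameter_value[:start] + replacement + parameter_value[end + 1:]).strip()

  # Examples matching Table 5-1:
  # resolve_watermark("<VS|Dev|QC|Final>", "Quality Control")  -> "QC"
  # resolve_watermark("<VS|Dev|QC|>", "Production")            -> ""  (no watermark)
  # resolve_watermark("y <VS|Dev|QC|Final> x", "Development")  -> "y Dev x"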

Coversheet

(Report Set-level only) If set to Yes, XML Publisher incorporates the coversheet in the PDF output. The default setting is No.

Coversheet Orientation

(Report Set-level only) Select either Portrait (vertical) or Landscape (horizontal) for the paper orientation default value for the cover sheet. The default value is Portrait.

TOC

(Report Set-level only) If set to Yes, XML Publisher generates a table of contents (TOC) for the Report Set. If set to No, XML Publisher does not generate a table of contents.

TOC Orientation

(Report Set-level only) Select either Portrait (vertical) or Landscape (horizontal) for the paper orientation default value for the table of contents. The default value is Portrait.

Pre-Narrative Orientation

Select either Portrait (vertical) or Landscape (horizontal) for the paper orientation default value for narratives that appear before or instead of reports. The default value is Portrait.

Post-Narrative Orientation

Select either Portrait (vertical) or Landscape (horizontal) for the paper orientation default value for narratives that appear after reports. The default value is Portrait.

Note:

There is no Parameter for content orientation because the system determines this automatically based on the actual dimensions of each Program output.

The following Parameters appear only for Report Set Entries:

Calc Title

Do not enter a value here. This Parameter may be used to receive a value generated by the Program instance during Report Set execution.

Hlink1...5

You can define hyperlinks to other Report Set Entries in the Overlay Template. If you want this Report Set Entry to include a hyperlink to another Report Set Entry in the Report Set output, enter the target Report Set Entry's name here. Use the other Hlink...Target Parameters to create links to additional Report Set Entries.

Hlink1...5 Text

For each Hlink...Target Report Set Entry you defined, enter the hyperlink text you want displayed. If you leave this blank, the system uses the target Report Set Entry's Full Title.

Setting All Parameters for a Report Set or Workflow Submission

Depending on how each Parameter is defined, you may or may not be able to change its value. If you can set its value, you will be able to do one of the following in the Default Value column:

  • Enter a value. Some Parameters are defined so that the system checks the value you enter when you submit the Execution Setup.

    Note:

    Some Parameters may accept multiple values. In this case, enter a comma (,) between one value and the next. Never enter a comma as part of a single value; the system interprets it as the delimiter between two values. See the example after this list.

  • Click the Search icon (a magnifying glass).

    The system then displays the values the Parameter is allowed to have. Click the Quick Select icon for the value you want, or click the checkbox in the Select column and then click the Select button. If multiple values are allowed, click multiple checkboxes or click Select All.
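
For example (with hypothetical values): to pass three study site codes to a Parameter that accepts multiple values, you might enter SITE01,SITE02,SITE03. The commas mark the boundaries between the three values, so a single value that itself contains a comma, such as 1,000, would be read as the two values 1 and 000.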

The system displays the following information for each Parameter:

  • Prompt. The Prompt identifies the Parameter.

  • Allowed Values. The system displays the method that determines the Parameter's allowed values, if any:

    • None. The system does not limit the values you can enter. However, during execution the system may validate the value you enter.

    • Static List of Values. You must pick from a list of values that do not change.

    • Program Generated List of Values. You must pick from a list of values that are generated dynamically.

    • Classification List of Values. You must pick from a list of values that are derived from a classification hierarchy level.

  • Valid indicates whether or not the value is valid. For more details, see Valid.

  • Propagation indicates whether the Parameter is a source or target of value propagation, or neither. For more details, see Propagation.

This section contains the following topics:

Valid

The system displays the Ready icon if the value is valid, or if there is no value and none is required. The system displays the Not Ready icon if the value is not valid, or if there is no value and the Parameter is required.

Propagation

The icon in the Propagation column indicates whether the Parameter is a source or target of value propagation, or neither.

Resubmitting a Job with the Same or Similar Settings

You can resubmit a job with the same settings, or modify the settings as necessary before resubmitting. To resubmit a job, do the following:

  1. Go to the My Home page.

  2. Click the Job ID hyperlink. The Job Execution Details screen opens.

  3. Click Re-Submit. The Submission Details screen opens.

  4. Make any changes required to system parameters, submission parameters, or data currency.

  5. Click Submit. The system submits the job.