Oracle HRMS Data Pump

This essay provides the information that you need to understand and use the Oracle HRMS Data Pump. To understand this information you should already have a good functional and technical knowledge of the Oracle HRMS product architecture.

Restrictions

This essay does not describe the entire Data Pump schema in detail. Details are given as needed for some of the tables; in most cases you will use the PL/SQL routines to insert data into the batch interface tables. Full details are provided in the Oracle HRMS electronic Technical Reference Manual (eTRM), available on My Oracle Support.

Oracle delivers seed data to enable Data Pump API calls to use features such as passing in user values instead of system identifiers. This support is not available for all of the APIs that are delivered with Oracle HRMS. This essay describes a mechanism for calling APIs using Data Pump where the supporting seed data is not present.

When purging data from the Data Pump tables, take extra care that you do not delete information on User Keys that you might need for future loading of external data. See: User Key Values.

Contents

This essay includes the following sections:

Overview

Oracle HRMS has a set of predefined APIs that are business process related and you are strongly advised always to use these APIs to load data. The predefined APIs enforce all the business rules in the system and guarantee the integrity of any data loaded into the system.

The Oracle HRMS Data Pump supports rapid implementation by simplifying and standardizing the common tasks associated with loading batch data into the Oracle HRMS tables. This is done by providing a set of predefined batch tables and standard processes that simplify the tasks of data-loading using the supported APIs.

With the Oracle Data Pump you:

  1. Map the data items from your external system to the parameter values of the appropriate APIs.

    Because you map data to the parameters of the APIs you do not need to know the complexity of the HRMS data model. For example, to create an employee you need to co-ordinate inserting data into multiple tables. The create_employee API does this automatically, using the parameter values you pass in.

    A special feature of the Data Pump is that you can use user values in place of system IDs for the API parameters. These are translated automatically by the Data Pump.

  2. Load your data into a single generic batch lines table. (There is also a single batch header table to help you manage your batch loading processes.)

    The Data Pump works with a single generic batch lines table. It generates a specific view for each API so that you can easily review and update the data for each API using the parameter names for the API.

    Also, there are PL/SQL interface routines to insert your external data into the generic batch lines table.

  3. Run a standard process that automatically calls the appropriate API for each line of data in the batch table.

Components of Data Pump

Data Pump consists of the following components:

Meta-Mapper Process

This process generates the specific PL/SQL procedures and views for each of the supported API modules you want to use.

Use the Meta-Mapper to generate a set of views that you can use to examine or update data in the batch tables. For example you might want to correct data or change the order in which data is loaded.

Note: The Meta-Mapper is similar to an install process. You must run the Meta-Mapper before making a Data Pump API call. The Meta-Mapper usually runs during the loading of your software, but there are occasions when you may need to run it manually. For example, if you cannot find the Meta-Mapper, or if its version displays as invalid, then you should run the Meta-Mapper manually.

Batch Header Table and Batch Lines Table

Use these two tables to hold the header and lines information from your external data.

Note: The Meta-Mapper creates views based on the batch lines table called HRDPV_<API Procedure Name>, for example, HRDPV_CREATE_EMPLOYEE.

PL/SQL Routines

Use the predefined and generated PL/SQL routines to insert your external or legacy data into the batch lines table. Meta-Mapper generates a separate routine for each API that is supported by the Data Pump.

There is also a help routine to provide detailed information on the parameter options for specific procedures.

The Data Pump Engine Process

The Data Pump Engine process is a standard concurrent process that performs the actual data validation and loading operations. It takes these parameters:

Special Features of Data Pump

The following is a list of the special features provided with Data Pump:

User Keys

Data Pump enables you to define the combination of data items that uniquely identify records for loading into Oracle HRMS. For example, when you are loading data for a Person, you could use a combination of Last Name, First Name, Date of Birth, and Gender to identify that person uniquely in Oracle HRMS.

You store these user key definitions in the table HR_PUMP_BATCH_LINE_USER_KEYS.

Use Actual Values

In nearly all cases you can load data using actual names or values without having to identify a system value in Oracle HRMS. The conversion of name to ID is transparent to the user. For example, you can use a real Job Name without needing to identify the JOB_ID in Oracle HRMS; or you can use the value 'Male' for gender without needing to know that the code value is 'M'.

Alternative Meta-Mapper Generation Mode

It is possible to call the Meta-Mapper so that the Data Pump API call is essentially a direct call to the API. This feature is most useful when the seed data for Data Pump support is absent.

Automatic Parallel Processing Of Batch Load Process

Data Pump automatically supports parallel processing on multi-processor systems without any extra code. You turn this on by inserting or updating a row for THREADS in the PAY_ACTION_PARAMETER_VALUES table.

This is the same parameter that controls parallel processing for the Payroll Run and other processes in Oracle HRMS.

Note: When you are using parallel processing, use the P_LINK_VALUE parameter in the batch lines to group transactions that must be run within the same thread.

Explicit User Ordering of Operations

When loading batch lines with related data you must perform some operations in a strict sequence. For example, entering salary information for an employee must take place after the employee record has been created.

With Data Pump, you use the P_USER_SEQUENCE parameter to control the order of processing of batch lines.

Note: Data Pump cannot validate the sequence numbers you enter. It accepts the sequence and tries to process as instructed. If you use incorrect numbers the process may return validation errors when it tries to load your data in the wrong sequence. See: Running the Data Pump.

Validation Mode Operation

When you submit the Data Pump concurrent process you can choose to run it in validation mode. This enables you to review errors in batches or in related records in a batch and to change them before any of them are committed to the HRMS database.

Processing Batches

When you run Data Pump the process only loads data that has not already been processed successfully. This means that you can run a batch, review and correct errors for any specific lines, and then rerun the same batch. You can repeat this process until you have successfully loaded all lines in the batch.

To do this you submit the concurrent process with the same batch name. All unprocessed or errored lines are reprocessed automatically.

Logging Options

There are many logging options with Data Pump that help you find errors when running the process.

Using Data Pump

To use Data Pump, follow this sequence of tasks:

  1. Decide which of the supported API modules you require for loading your external data and run the Meta-Mapper to generate interface procedures for these APIs.

    See: Running the Meta-Mapper.

  2. Use the predefined PL/SQL routines and those created by the Meta-Mapper to transfer your external data into the Data Pump tables.

    See: Loading Data Into the Batch Tables.

    Note: For each entity that requires a User Key you must include the value you want to use as a unique identifier. For example, the parameters P_PERSON_USER_KEY and P_ASSIGNMENT_USER_KEY for create_employee.

  3. Optional. Run Data Pump in validation mode to check and correct data before it is loaded.

    See: Running the Data Pump Process.

  4. Run Data Pump to load data from batch tables into the Oracle HRMS tables.

    Note: When you load a record for the first time, Data Pump automatically inserts your user key value from the batch lines, and the unique key ID generated by the API into the HR_PUMP_BATCH_LINE_USER_KEYS table. This combination is used for all further data loads that update existing records in Oracle HRMS.

    For example, P_PERSON_USER_KEY = USER_KEY_VALUE and PERSON_ID = UNIQUE_KEY_ID.

  5. Review any errors and correct causes.

    See: Finding and Fixing Errors.

  6. If necessary, rerun Data Pump to load corrected batch lines.

    See: Rerunning the Data Pump Process.

    Repeat 5 and 6 until all lines are successfully loaded.

  7. Optional. Purge data from the batch tables.

    See: Purging Data.

Running the Meta-Mapper

Based on your implementation you might decide that you do not need to use all of the predefined APIs to load external data. Run the Meta-Mapper for all APIs or for each single API that you select. The Meta-Mapper generates a specific PL/SQL package and view for each API.

Note: For APIs with overloaded interfaces, the Meta-Mapper will only generate code for the latest interface. The latest interface is the interface that has the greatest number of mandatory parameters.

Use the following SQL*PLUS command to generate packages and views for a number of APIs (not, despite what the name GENERATEALL suggests, for all APIs):

sql> execute hr_pump_meta_mapper.generateall;

See also: APIs Supported by the GENERATEALL Command.

Use the following SQL*PLUS command to generate packages and views for one API:

sql> execute hr_pump_meta_mapper.generate( <package_name>,<procedure_name>);

For example:

sql> execute hr_pump_meta_mapper.generate( 'hr_employee_api', 'create_employee' );

The naming convention for the view is hrdpv_<api_module_name> and the naming convention for the PL/SQL package is hrdpp_<api module name>. This applies unless the name would exceed 30 bytes, in which case the name is truncated to 30 bytes. In the example, the name of the view is hrdpv_create_employee, and the name of the package is hrdpp_create_employee.

You can use the view to insert legacy data into the HRMS schema or the batch tables, or to update data already in the batch lines table. The PL/SQL package contains an insert_batch_lines procedure to make it easy to insert data from your external systems into the batch lines table; and a call procedure that executes the API on the rows in the batch lines table.
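For example, the following is a minimal sketch of correcting a value through the generated view before a rerun; the batch ID and user key value are hypothetical:

update hrdpv_create_employee
set    p_last_name = 'Smith'
where  batch_id    = 3
and    line_status = 'U'
and    p_person_user_key = 'SMITH1001 : PER USER KEY';

commit;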

View Generated by the Meta-Mapper

For each API the Meta-Mapper generates a view on the HR_PUMP_BATCH_LINES table that reflects the parameters of the API. This makes it easier to examine and update row values. The name of the view reflects the API name. For example, HRDPV_CREATE_EMPLOYEE. For a full listing of this view see: Table and View Descriptions.

In addition to the parameters for the API, the Meta-Mapper always creates the following columns in the view:

Column               Description
-------------------  -----------------------------------------
BATCH_ID             Foreign key to HR_PUMP_BATCH_HEADERS.
BATCH_LINE_ID        Foreign key to HR_PUMP_BATCH_LINES.
                     Primary key generated using the
                     hr_pump_batch_lines_s sequence.
API_MODULE_ID        Foreign key to HR_API_MODULES. This
                     tells Data Pump which API to call for
                     each row.
LINE_STATUS          Load status of this API call:
                     'U' - Unprocessed. This must be the
                           initial value for all lines.
                     'C' - Complete. The API call was
                           successful and the changes have
                           been committed.
                     'E' - Error.
                     'V' - Validated. The API call was
                           successful but the changes have
                           not been committed.
USER_SEQUENCE        Used to control processing order. For
                     example, to make sure that the address
                     for an employee is loaded after the
                     employee record has been created.
LINK_VALUE           Use a unique link value to link
                     multiple rows in a single batch. Set
                     this value when using parallel
                     processing to make sure that related
                     rows in a batch are processed together.
BUSINESS_GROUP_NAME  Alternative business group name to use
                     for a particular API call. If not null,
                     this overrides the value specified in
                     the batch header.

Meta-Mapper also creates other columns for specific APIs. For example, some of the columns on the create employee view are:

Other columns are created to reflect the PL/SQL OUT values returned from the API so that you can examine these values. For example:

You do not need to know which columns of the batch lines table hold specific parameters for the API.

Required Columns

If you use the view to insert data into the batch lines table then remember that, in addition to the data required for the insert batch lines procedure, you also need:

PL/SQL Package Generated by the Meta-Mapper

The Meta-Mapper also generates a separate package for each API to make it easier for you to load data to the batch lines table or to review the content of the table for specific APIs.

For example, the create_employee package hrdpp_create_employee contains two procedures:

Insert Batch Lines Procedure

Use this procedure to simplify loading data into the batch lines table.

See also: Default and Null Values for API Parameters.

A call to this procedure creates one row in the batch lines table, complete with all the parameters. For create employee, some of the parameters are:

p_batch_id                      number      in
p_data_pump_batch_line_id       number      in     default
p_data_pump_business_grp_name   varchar2    in     default
p_user_sequence                 number      in     default
p_link_value                    number      in     default
p_hire_date                     date        in
p_last_name                     varchar2    in
p_sex                           varchar2    in
p_per_comments                  varchar2    in     default
p_date_employee_data_verified   date        in     default
p_date_of_birth                 date        in     default
p_email_address                 varchar2    in     default
p_employee_number               varchar2    in
p_expense_check_send_to_addres  varchar2    in     default
p_first_name                    varchar2    in     default
p_known_as                      varchar2    in     default
p_marital_status                varchar2    in     default
p_middle_names                  varchar2    in     default
p_nationality                   varchar2    in     default
p_national_identifier           varchar2    in     default
p_previous_last_name            varchar2    in     default
p_registered_disabled_flag      varchar2    in     default
p_title                         varchar2    in     default
p_attribute1                    varchar2    in     default
p_attribute2                    varchar2    in     default
p_attribute3                    varchar2    in     default
p_attribute4                    varchar2    in     default
p_attribute5                    varchar2    in     default
p_attribute6                    varchar2    in     default
p_attribute7                    varchar2    in     default
p_attribute8                    varchar2    in     default
    ...
    ...
p_resume_exists                 varchar2    in     default
p_resume_last_updated           date        in     default
p_second_passport_exists        varchar2    in     default
p_student_status                varchar2    in     default
p_work_schedule                 varchar2    in     default
p_suffix                        varchar2    in     default
p_person_user_key               varchar2    in
p_assignment_user_key           varchar2    in
p_user_person_type              varchar2    in     default
p_vendor_name                   varchar2    in     default
p_correspondence_language       varchar2    in     default

This example does not show all the parameters as there are many more.

The optional p_data_pump_business_grp_name parameter specifies a business group name to override the name specified in the batch header.

The optional p_data_pump_batch_line_id parameter specifies the batch_line_id for the inserted row (if necessary an existing row with this batch_line_id will be deleted).

Note: This procedure requires two user key values: p_person_user_key and p_assignment_user_key. You must supply values for these keys. If you use Data Pump to create records in Oracle HRMS then Data Pump automatically inserts your key values and the HRMS key values generated by the APIs into the user keys table. For subsequent actions Data Pump can use these keys to match records from your external system with the Oracle HRMS records. A more detailed explanation and example are included in a later section of this document.

Call Procedure

This is the actual 'wrapper' procedure executed by the Data Pump process to call the API and pass in the appropriate parameter values. The procedure takes two arguments: p_business_group_id and p_batch_line_id.

Note: Direct calls to this procedure are NOT supported. You must use the Data Pump concurrent process to execute the procedures.

Meta-Mapper Help Procedure

The Meta-Mapper package also includes a help procedure, hr_pump_meta_mapper.help, that returns information on the generated PL/SQL package and view names, and the batch lines table parameter values for a given API.

The help procedure has two parameters:

You must set server output on before calling this procedure.

For example, use the following SQL*PLUS commands to get help for hr_employee_api.create_employee:

sql> set serveroutput on size 1000000;
sql> execute hr_pump_meta_mapper.help( 'hr_employee_api', 'create_employee' );

The output is as follows:

Generated package: hrdpp_create_employee
Generated view: hrdpv_create_employee
Parameter Name              Type      In/Out  Default?  Lookup Type
--------------------------  --------  ------  --------  -----------
P_HIRE_DATE                 DATE      IN
P_LAST_NAME                 VARCHAR2  IN
P_SEX                       LOOKUP    IN                SEX
P_PER_COMMENTS              VARCHAR2  IN      DEFAULT
P_DATE_EMPLOYEE
  _DATA_VERIFIED            DATE      IN      DEFAULT
P_DATE_OF_BIRTH             DATE      IN      DEFAULT
P_EMAIL_ADDRESS             VARCHAR2  IN      DEFAULT
P_EMPLOYEE_NUMBER           VARCHAR2  IN
P_EXPENSE_CHECK
  _SEND_TO_ADDRES           LOOKUP    IN      DEFAULT   HOME_OFFICE
P_FIRST_NAME                VARCHAR2  IN      DEFAULT
P_KNOWN_AS                  VARCHAR2  IN      DEFAULT
P_MARITAL_STATUS            LOOKUP    IN      DEFAULT   MAR_STATUS
P_MIDDLE_NAMES              VARCHAR2  IN      DEFAULT
P_NATIONALITY               LOOKUP    IN      DEFAULT   NATIONALITY
P_NATIONAL_IDENTIFIER       VARCHAR2  IN      DEFAULT
P_PREVIOUS_LAST_NAME        VARCHAR2  IN      DEFAULT
P_REGISTERED_DISABLED_FLAG  LOOKUP    IN      DEFAULT   YES_NO
P_TITLE                     LOOKUP    IN      DEFAULT   TITLE
P_WORK_TELEPHONE            VARCHAR2  IN      DEFAULT
P_ATTRIBUTE_CATEGORY        VARCHAR2  IN      DEFAULT
P_ATTRIBUTE1                VARCHAR2  IN      DEFAULT
P_ATTRIBUTE2                VARCHAR2  IN      DEFAULT
P_ATTRIBUTE3                VARCHAR2  IN      DEFAULT
P_ATTRIBUTE4                VARCHAR2  IN      DEFAULT
P_ATTRIBUTE5                VARCHAR2  IN      DEFAULT
P_ATTRIBUTE6                VARCHAR2  IN      DEFAULT
...
P_ASSIGNMENT_SEQUENCE       NUMBER    OUT
P_ASSIGNMENT_NUMBER         VARCHAR2  OUT
P_NAME_COMBINATION_WARNING  BOOLEAN   OUT
P_ASSIGN_PAYROLL_WARNING    BOOLEAN   OUT
P_USER_PERSON_TYPE          VARCHAR2  IN      DEFAULT
P_VENDOR_NAME               VARCHAR2  IN      DEFAULT
P_CORRESPONDENCE_LANGUAGE   VARCHAR2  IN      DEFAULT
...

The following is an explanation of the help output:

Loading Data Into the Batch Tables

The Meta-Mapper generates a specific PL/SQL package and view for each API. Use these PL/SQL interface procedures and views for loading data into the batch tables, except where stated otherwise in this document.

It is particularly important that inserts are performed exclusively through the interfaces. There are two reasons for this:

Tip: Test the validity of the legacy data capture code on a subset of the batch to be loaded. For example, if you plan to load details for 100000 people, test your routines to validate and load a subset of 100 representative people. This should help you to identify and resolve any obvious problems with your capture code before you attempt to load the bulk of your data.

The Batch Interface Tables

The main objective of the interface design was to keep everything as simple as possible. The result is that Data Pump only has one batch header and one batch lines table for loading data for all APIs. Views are generated by the Meta-Mapper with specific column names for each API.

Each row of the batch lines table holds the reference to an API and data values. Data Pump executes each API with the data passed in as parameters.

How to Control Processing Order

There are many instances where you need to control the order in which batch lines are loaded into the database. For example, Data Pump would generate an error if it tried to create an address for a person before it created the person.

To control the order in which operations are performed, use the p_user_sequence parameter to set the order manually. Choose some appropriate numeric values for this parameter when you insert the data to the batch lines table. Data Pump uses these numbers to determine processing order.
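For example, the following sketch uses the generated views to make sure that the address line for a person is processed after the create_employee line; the batch ID and user key value are hypothetical:

update hrdpv_create_employee
set    user_sequence = 10
where  batch_id = 3
and    p_person_user_key = 'SMITH1001 : PER USER KEY';

update hrdpv_create_us_person_address
set    user_sequence = 20
where  batch_id = 3
and    p_person_user_key = 'SMITH1001 : PER USER KEY';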

Different Approaches to Batch Loading

There are a number of approaches you can take when setting the order for processing batch lines.

One approach would be to load disparate data in separate batches. For example load personal information in one batch and address information in a second batch.

Another approach would be to create a batch containing lines with related API calls. For example, you could load person, address, and assignment information for one employee as part of one batch. In this approach, if you are using the parallel processing option, you would use the p_link_value parameter to make sure all the lines are processed in the same chunk. Use the default or p_user_sequence parameter to make sure that the different API calls are made in the correct order within the linked group.

Processing Order When Running Parallel

The Data Pump process has been optimized to take advantage of parallel processing options. If you want to run a multi-threaded process there are some special considerations for ordering batch lines.

When you run the Data Pump process in parallel, the concurrent manager generates multiple threads, each of which processes a defined number of batch lines before it commits them to the database. The number of lines is controlled by the CHUNK_SIZE payroll action parameter - see Other Parameters for details.

With parallel processing and chunking of lines, in theory a transaction that includes more than one line could be split between processes. This would mean that lines might not be processed in the order set by the p_user_sequence parameter.

You can prevent this by using the p_link_value parameter. This parameter tells Data Pump that a set of batch lines must be processed in the same chunk. Use the same link value for all the lines that must be processed by the same thread - this will automatically extend the number of rows processed by a single thread when necessary.

When lines have a common link value, they must also be in consecutive user sequence in order to be processed within a single chunk.

For example, in the following table, only the lines with the user sequences 1, 2 and 5 are guaranteed to be processed in the same thread.

User Sequence   Link Value
-------------   ----------
1               1
2               1
5               1
8               2
10              1

Note: When running Data Pump in parallel you may find that performance does not scale as expected. Remember that running business process APIs in parallel may cause lock contention because of extended validation. For example, in the past, the personal payment method and element entry APIs were known to have problems in this area.

Default and Null Values for API Parameters

Specifying a Default or NULL Parameter Value

Part of the design for the APIs in Oracle HRMS is that many parameters have default values set for them. This means that they can be called directly without having to pass values for all parameters.

When you use Data Pump there is a similar mechanism that means you do not have to supply values for all parameters.

The following rules apply:

Any other value passed as a parameter will be the value inserted into the batch line and subsequently passed to the appropriate API process.

Indicator Parameters

The insert_batch_lines procedure may be generated with indicator parameters. Each indicator parameter is generated in addition to the corresponding standard parameter, for example I_AMOUNT (indicator parameter) and P_AMOUNT (standard parameter). Indicator parameters allow the special value NULL to be specified for non-mandatory number and date parameters whose default value is not NULL. If the indicator parameter is 'Y' then the value NULL is written to the batch lines table; otherwise the standard parameter's value is used. The usual case for this is an update API where a number or date value needs to be updated to NULL.

Assumed Default Values

Occasionally, when the value NULL is used to specify a non-mandatory parameter, the wrong default value is passed to the API call. The usual reason for this is that the parameter in question has a non-standard default value that the seed data has not taken into account. In such cases, the correct default value for the parameter should be explicitly set in the batch lines row for the Data Pump API call.

Unless seeded otherwise, the Meta-Mapper assumes certain default values for API parameters, because it is not possible to get the actual default values from the database. The default value used for a create API (for example, create_employee) is NULL. For all other APIs, the default values used are shown in the following table:

Parameter Type   Default Value
--------------   ----------------
BOOLEAN          NULL
DATE             HR_APLG_DATE
LONG             NULL
NUMBER           HR_APLG_NUMBER
VARCHAR2         HR_APLG_VARCHAR2

Default and Null Values for Mapped Parameters

A mapped parameter is one where an actual value (or a user key) is used rather than a system identifier in the Data Pump API call. The Meta-Mapper call procedure calls a mapping function before making the API call to resolve the system identifier value from the input value. Such a mapping function will usually have two or more parameters: an obvious name parameter, for example P_JOB_NAME, and other parameters such as P_EFFECTIVE_DATE.

If one or more of the mapping function parameters is set to <NULL> in the batch lines then the mapped parameter is passed to the API as NULL. Otherwise, if one or more of the mapping function parameters is set to NULL in the batch lines and the default value is NULL or an HR_API value (for example, HR_API.G_NUMBER), then the mapped parameter is passed to the API with its default value.

Recommendation: To use this feature, set the name parameter to <NULL> or NULL in the batch lines table. You do not need to worry about the values of the other mapping function parameters.

Running the Data Pump Process

Use the Submit Reports and Processes form to start the Data Pump Engine process. It takes these parameters:

Note: Before running the Data Pump process you should decide whether to use parallel threads and whether you want to turn on any logging options.

Overview of Data Pump Action Parameters

You can control how the Data Pump process runs through action parameter settings. A number of these action parameters (THREADS, CHUNK_SIZE, MAX_ERRORS_ALLOWED) are also used by other processes, such as the payroll run.

With action parameter groups you can maintain separate action parameter values for different processes, which is highly recommended. Another use of action parameter groups is to switch in a group for debugging, for example so that Data Pump runs as a single thread with logging switched on.

Any action parameters not set within the specified action parameter group take their values from the default action parameter group (the null action parameter group). Furthermore, if action parameters are NULL then the Data Pump process uses default values for them.

You can set action parameter values from the Action Parameters form (navigate to Process And Reports->Action Parameters).

Running In Parallel

To enable parallel processing you set a value for the THREADS parameter in PAY_ACTION_PARAMETER_VALUES.

The threads value includes the starting process. This means that if you set a value of 2, the main engine process starts one slave process, making a total of two concurrent processes. When running in parallel, the 'master' process may finish before the slave processes. This is normal.

Note: The THREADS parameter also controls the parallel execution of the other Oracle Payroll processes. We recommend that you use action parameter groups to separate action parameters for Data Pump from normal payroll processing.
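As a minimal sketch, assuming a THREADS row already exists in the action parameter group you use for Data Pump (otherwise insert one, or set the value from the Action Parameters form):

-- If you use action parameter groups, add a predicate on the group
-- so that only the Data Pump group is changed.
update pay_action_parameter_values
set    parameter_value = '4'
where  parameter_name  = 'THREADS';

commit;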

Other Parameters

There are six other payroll action parameters you can set for Data Pump.

CHUNK_SIZE

Default = 10

Controls how many batch API calls are processed at a time per thread when running in parallel. It also controls the number of API calls per commit. Note that there are certain circumstances under which the actual number can vary from this number. For example, it can be higher if the p_link_value parameter is set.

MAX_ERRORS_ALLOWED

Default = 20

Controls how many errors in calling an API will be tolerated before the entire Data Pump engine fails. This is the number of errors per parallel thread.

PUMP_DEBUG_LEVEL

Use this parameter to turn on logging for tracking errors generated by the Data Pump process. For a list of valid values for this parameter, see Logging Options.

DATA_PUMP_DISABLE_CONTINUOUS_CALC

Default = N

Use this parameter to turn off continuous calculation triggers. This may be desirable for performance reasons. The value Y turns off the continuous calculation triggers.

DATA_PUMP_NO_FND_AUDIT

Default = N

Use this parameter to turn off Oracle Applications auditing. This may be desirable for performance reasons. The value Y turns off the auditing.

DATA_PUMP_NO_LOOKUP_CHECKS

Default = N

Use this parameter to turn off lookup validation in the Data Pump API call. The Data Pump API call assumes that values for lookup parameters are passed in as lookup codes only. This may be desirable for performance reasons. The value Y turns off the lookup validation.

Checking Run Status

The Data Pump runs as a concurrent process. You can check process status at any time using the View Concurrent Requests window. The concurrent manager only reports failure if the entire process has failed. Usually this happens because the number of errors exceeded the value set by the MAX_ERRORS_ALLOWED parameter.

Note: Even if the concurrent process completes successfully there may be some data errors encountered by the process. You should always check for batch line errors.

Finding and Fixing Errors

This section deals with the logging options available for tracking errors generated by the Data Pump process, as well as hints and tips on how to deal with these.

Logging Options

You enable logging options for Data Pump by inserting appropriate values for the PUMP_DEBUG_LEVEL parameter in the PAY_ACTION_PARAMETER_VALUES table.

Note: Turning logging on always affects the overall performance of the data pump process. You should only use logging to help track down problems when they occur. Remember also to switch logging off after you have solved your problem.

Valid values for PUMP_DEBUG_LEVEL are as follows.

Tip: The first three options are likely to be the most useful to you.

Option   Description
------   ----------------------------------------------------------
AMD      API Module Debug (enables trace output from API)
RRP      Range Row Processing logging (logs the number of errors
         that occurred for each unit of work, or range)
GID      Get_id function failure information (logs failures in
         functions that map user values to IDs)
MSG      Output specific logging messages
ROU      Routing information (entry to and exit from procedures)
WCD      Wrapper cache debug logging
STK      Stack dump logging (trace information on failure)
EXT      Exit information (trace information on success)
RRI      Range row insert logging
BLI      Batch Line Information (output the batch line number for
         the batch line being processed)
CLF      Concurrent Log File (logging messages output with the MSG
         option go to the concurrent manager log file)

You can combine any number of these options by concatenating the values, separated by a colon. For example, the string 'MSG:RRI:RRP' combines MSG, RRI, and RRP debugging.
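For example, a minimal sketch that switches on those three options, assuming a PUMP_DEBUG_LEVEL row already exists in the action parameter group used for Data Pump:

update pay_action_parameter_values
set    parameter_value = 'MSG:RRI:RRP'
where  parameter_name  = 'PUMP_DEBUG_LEVEL';

commit;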

How to View Logging Output

When you enable logging options, output is produced for every thread that may be running. Use the PYUPIP command to view this output.

To use this command you will need to know the ID for the concurrent process you are logging. Online you can use the View My Requests window to find the Concurrent Request IDs. Alternatively, you can query from the HR_PUMP_REQUESTS table. One row is inserted for each process that is running. For example:

select * from hr_pump_requests;

Typical output would be:

BATCH_ID    REQUEST_ID      PROCESS_TYPE
----------- --------------- -----------------
8437        98533           MASTER
8437        98534           SLAVE

This tells us that there are two processes running, and the request_id values are 98533 and 98534.

Use PYUPIP to trace the output in a separate command line window. For example:

PYUPIP <user/password>@database REQID98533
PYUPIP <user/password>@database REQID98534

Note: If you are running multiple threads, you should trace all of them. If you do not trace all the threads, processing comes to a halt when the database trace pipe fills up. It may be advisable to run a single thread only when tracing.

How to Find Errors in Batch Lines

When an error occurs during processing, Data Pump generates one or more rows in the HR_PUMP_BATCH_EXCEPTIONS table. There will be multiple rows if the API supports multiple messaging. In this release you must use SQL*PLUS to view this information.

Additionally, you can use SQL*PLUS to query rows in HR_PUMP_BATCH_LINES where the LINE_STATUS has a value of E - error.

Note: In validation mode, LINE_STATUS is set to V - validated, for a successful API call. In update mode, LINE_STATUS is set to C - complete, for a successful API call.
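For example, a sketch that lists the failed lines for a batch together with their exception text. The batch ID is hypothetical, and the join through SOURCE_ID is an assumption to verify against the exceptions table definition in the eTRM:

select l.batch_line_id,
       e.exception_text
from   hr_pump_batch_lines      l,
       hr_pump_batch_exceptions e
where  l.batch_id    = 3
and    l.line_status = 'E'
and    e.source_id   = l.batch_line_id
order  by l.batch_line_id;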

Investigating the Cause of Errors

Investigation strategies depend on the type of error and the indications of its origin. For some errors you may need experience with the use of APIs and the Oracle HRMS application to recognize what might be wrong.

Some specific advice for Data Pump follows:

How to Fix Errors

The most common cause of errors is likely to be that incorrect values have been loaded via the insert_batch_lines procedure and that these need to be corrected.

Using The Views To Correct Data

Use the HRDPV_ views on HR_PUMP_BATCH_LINES to correct values in the appropriate columns. You can use normal update statements on these views and this makes fixing data problems much simpler.

Warning: When using the views to make changes to problem data, you must not alter the LINE_STATUS on the HR_PUMP_BATCH_LINES table. The Data Pump engine uses this for processing.

Note: Views on HR_PUMP_BATCH_LINES display rows only for the APIs for which they were generated. Any attempt to update the API_MODULE_ID column with an incorrect value will fail with an ORA-1402 error. The views are generated with a WITH CHECK OPTION on the where-clause to prevent you from using a view to generate any row that the view could not select.

(The same warning applies to inserting rows into HR_PUMP_BATCH_LINES using the generated views.)

Rerunning The Data Pump Process

After you have fixed any problems you can rerun the batch by submitting the Data Pump process again using the same batch name. You can submit the process any number of times until all lines are successfully completed. Batch lines with a status of E - error, U - unprocessed, or V - validated are automatically reprocessed.

You do not have to take any action to remove rows from the exception table. Data Pump automatically deals with this.

Lines validated in previous Data Pump runs are reprocessed even if Data Pump is run in validation mode, because the results of the associated API calls were rolled back in those runs. Only lines with a status of C - complete are not reprocessed.

The following table summarizes each of the possible entries for batch line status, and explains whether each status is automatically reprocessed.

Status  Meaning                                        Automatically
                                                       Reprocessed?
------  ---------------------------------------------  -------------
U       Unprocessed.                                   Yes
N       Not Processed. Applies to Data Pump purges     Yes
        and indicates that this call was not
        processed because a previous linked call
        failed to complete processing.
R       Rolled Back. This linked API call completed    Yes
        successfully, indicating that the call
        parameters were valid. However, the results
        were rolled back because a subsequent linked
        API call failed to complete.
V       Validated. Not yet committed.                  Yes
E       Error. Not committed.                          Yes
C       Complete. Committed to the database.           No

Purging Data

You can use the Data Pump Purge Process to remove unwanted batches when you have successfully uploaded them to your database. You may wish to retain some of the batch information for future processing. When you purge a data pump batch, you can therefore select how much of the batch information you purge. The Data Pump Purge process enables you to select your criteria for purging and then submit your purge.

Before you submit a purge request, you should make sure that:

If you have existing Data Pump purge practices you can continue to use them in preference to the Data Pump Purge process.

See: How To Purge.

You run the Data Pump Purge process from the Submit Requests window.

To run the Data Pump Purge process:

  1. Enter the name of the batch that you want to purge. If you do not supply a batch name, confirm that you want to purge all batches.

  2. Specify whether you want to preserve the user keys so that you can use them for future batches.

  3. If you want to purge the unprocessed batch lines, enter Yes. However, these lines are probably unprocessed because Data Pump encountered the maximum number of errors and could not process any of the later batch lines. You can run these unprocessed lines again when you have fixed the data. So, if you want to run these batch lines again, enter No.

  4. If you want to purge the failed batch lines, enter Yes. However, if you intend to process these batch lines again when you have corrected the reason for failure, enter No to preserve these batch lines for future use.

  5. If you want to purge the completed batch lines, enter Yes.

  6. If you want to delete the batch header, enter Yes.

  7. Confirm that the action parameter group is correct.

How To Purge

In all cases you should start with the following actions:

TRUNCATE TABLE HR_PUMP_REQUESTS;
TRUNCATE TABLE HR_PUMP_RANGES;

Simple Purge Of All Rows

If you want to purge all rows regardless of status then use the following:

TRUNCATE TABLE HR_PUMP_BATCH_EXCEPTIONS;
TRUNCATE TABLE HR_PUMP_BATCH_LINE_USER_KEYS;
TRUNCATE TABLE HR_PUMP_BATCH_LINES;
TRUNCATE TABLE HR_PUMP_BATCH_HEADERS;

Purge Of All Successful Rows

This is more complicated. You should purge data only when all loads have been successful. This avoids the danger of purging rows that are still needed. A conservative approach is sketched below.
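The following sketch purges a single fully completed batch (batch 3, a hypothetical ID) while deliberately keeping the rows in HR_PUMP_BATCH_LINE_USER_KEYS so that future loads can still resolve the user keys:

declare
   l_incomplete number;
begin
   -- Only purge if every line in the batch has completed.
   select count(*)
   into   l_incomplete
   from   hr_pump_batch_lines
   where  batch_id = 3
   and    line_status <> 'C';

   if l_incomplete = 0 then
      delete from hr_pump_batch_lines   where batch_id = 3;
      delete from hr_pump_batch_headers where batch_id = 3;
      commit;
   end if;
end;
/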

Sample Code

This section contains some sample code showing how you could call the batch lines procedures.

This example is artificial in that the data for the API calls is generated. However, it shows how to prepare Data Pump to create a number of batch lines that create an employee, create an address for the employee, update the criteria of the default assignment, and create a secondary assignment.

The example also illustrates the use of p_link_value to make sure that the separate transactions for each employee and assignment are processed by the same thread.

------------------------ start of example -----------------------
create or replace package hrdp_cre_emp as
procedure hrdp_cre_emp (p_start in number, p_end in number);
end hrdp_cre_emp;
/
create or replace package body hrdp_cre_emp as
/*
 *  Insert a number of batch lines in preparation for
 *  running the data pump engine, which will then
 *  - create an employee
 *  - create an address for the employee
 *  - update the criteria of the default assignment
 *  - create a secondary assignment
 */
procedure hrdp_cre_emp (p_start in number, p_end in number) is
   l_last_name    varchar2(40);
   l_hire_date    date;
   l_birthday     date;
   l_first_name   varchar2(40);
   l_asgno        varchar2(40);
   -- These are the 'out' values.
   l_special_ceiling_step_id     number;
   l_person_user_key             varchar2(100);
   l_address_user_key            varchar2(100);
   l_assignment_user_key         varchar2(100);
   l_assignment_user_key2        varchar2(100);
   l_link_value                  number;
   l_commit_count number;
   l_commit_limit number;
   l_emp_count    number;
   l_address_line1 varchar2(256);
begin
   l_commit_limit := 10;   -- commit after every 10 employees.
   l_commit_count := 0;
   l_first_name   := 'David';
   l_hire_date    := to_date('1997/12/01', 'YYYY/MM/DD');
   l_birthday   := to_date('1970/01/01', 'YYYY/MM/DD');
   l_link_value := 0;
   for emp_count in p_start..p_end loop
      -- Prepare to create an employee.
     l_last_name := 'DUMP' || lpad(emp_count, 5, '0');
     l_person_user_key     := l_last_name || ' : PER USER KEY';
     l_assignment_user_key := l_last_name || ' : ASG USER KEY';
     l_address_user_key := l_last_name || ' : ADDR USER KEY';
     l_address_line1 := to_char(emp_count) || ', Union Square';
     hr_utility.trace('Last Name : ' || l_last_name);
     -- Allow linking together so that these API calls process
     -- by the same thread.
     l_link_value := l_link_value + 1;
     hrdpp_create_employee.insert_batch_lines
     (
         p_batch_id             => 3,
         p_user_sequence        => null,
         p_link_value           => l_link_value,
         p_person_user_key      => l_person_user_key,
         p_assignment_user_key  => l_assignment_user_key,
         p_hire_date            => l_hire_date,
         p_last_name            => l_last_name,
         p_sex                  => 'Male',
         p_employee_number      => null, 
         p_per_comments         => 'Comments for : ' || l_last_name,
         p_date_of_birth        => l_birthday,
         p_email_address        => 'somebody@us.oracle.com',
         p_first_name           => l_first_name,
         p_user_person_type     => 'Employee'
      );
      -- Create an address for the person.
      hrdpp_create_us_person_address.insert_batch_lines
      (
         p_batch_id             => 3,
         p_user_sequence        => null,
         p_link_value           => l_link_value,
         p_effective_date       => l_hire_date,
         p_primary_flag         => 'Yes',
         p_date_from            => l_hire_date,
         p_address_type         => 'Home',
         p_address_line1        => l_address_line1,
         p_city                 => 'Golden Valley',
         p_county               => 'Los Angeles',
         p_state                => 'California',
         p_zip_code             => '91350',
         p_country              => 'US',
         p_person_user_key      => l_person_user_key,
         p_address_user_key     => l_address_user_key
      );
      -- Let's update some criteria.
      l_special_ceiling_step_id := hr_api.g_number;
      hrdpp_update_emp_asg_criteria.insert_batch_lines
      (
         p_batch_id                     => 3,
         p_user_sequence                => null,
         p_link_value                   => l_link_value,
         p_effective_date               => l_hire_date,
         p_datetrack_update_mode        => 'CORRECTION',
         p_assignment_user_key          => l_assignment_user_key,
         p_payroll_name                 => 'Monthly',
         p_special_ceiling_step_id      => l_special_ceiling_step_id
      );
      l_assignment_user_key2 := l_assignment_user_key || '2';
      hrdpp_create_secondary_emp_asg.insert_batch_lines
      (
         p_batch_id                  => 3,
         p_user_sequence             => null,
         p_link_value                => l_link_value,
         p_assignment_user_key       => l_assignment_user_key2,
         p_person_user_key           => l_person_user_key,
         p_effective_date            => l_hire_date,
         p_assignment_number         => l_asgno,
         p_comments                  => 'asg created by data pump',
         p_organization_name         => 'Setup Business Group',
         p_grade_name                => 'faz1',
         p_job_name                  => 'TEST',
         p_payroll_name              => 'Monthly'
      );
      l_hire_date    := l_hire_date + 1;
      l_commit_count := l_commit_count + 1;
      if (l_commit_count = l_commit_limit) then
         -- Commit after so many employees.
         hr_utility.trace('Commit after ' || l_commit_limit || ' employees.');
         commit;
         l_commit_count := 0;
      end if;
   end loop;
   -- Commit any rows remaining since the last batch commit.
   commit;
end hrdp_cre_emp;
end hrdp_cre_emp;
/
------------------------- end of example ------------------------

Notes on Using The Generated Interfaces

The Meta-Mapper process generates a view and PL/SQL packages for each API. This section explains some of the factors that you should keep in mind when using them.

Finding System IDs from Names or Values

When you use APIs you must supply lookup codes and surrogate primary keys for many parameters. For example:

  ...
  p_sex        => 'M',
  p_payroll_id => 13456,
  ...

Without Data Pump you would need to write additional code to convert values from your external system to Oracle HRMS system IDs for each API.

However, with Data Pump you have a set of predefined procedures for each of the supported APIs that automatically convert user names or values into lookups and system IDs. For example:

  ...
  p_sex          => 'Male', 
  p_payroll_name => 'Monthly Payroll',
  ...

Note: For lookup parameters, you can use the meaning or the lookup code itself. For non-lookup type IDs you will find an alternative parameter to use.

Exceptions

There are three major exceptions to the use of names for parameter values:

Flexfield Attribute Parameters

Most of the API processes include flexfield attribute parameters with names like P_SEGMENT18 or P_ATTRIBUTE20. Data Pump cannot know what the mappings of these values are in your specific implementation and therefore value conversion is not supported.

This means that you must take responsibility for passing the correct lookup code or other value as appropriate.

PL/SQL IN/OUT Parameters

When an API performs a combination of different actions then you need to provide the appropriate ID or code values for the parameters rather than the user meanings. This should not be a great problem where the values for these items can be derived before the Data Pump run.

For example, in hr_assignment_api.update_emp_asg, p_special_ceiling_step_id must be passed in as an ID, even though other APIs require it to be a user key.

Note: You cannot provide user keys for PL/SQL IN/OUT parameters of the API because the Data Pump code that calls the specific API has no way to determine whether the user key existed before the API call and therefore whether it is to be created or its ID value updated after the API call.

Many APIs generate a comment_id as an output parameter. However, you are not required to supply a user key value for the comment_id. This avoids the generation of a lot of meaningless user keys.

Note: A comment_id user key is required for the comment_id parameters to the element entry creation and update APIs. You must add these user keys if you require them for the element entry API calls.

Legislation Specific Lookup Parameters

A similar situation arises with legislation-specific business process API calls where a specific lookup in the legislation-specific API call corresponds to a generic parameter in the generic business process API call.

For example, the p_region_1 parameter in the hr_person_address_api.create_person_address API corresponds to p_county lookup parameter in the hr_person_address_api.create_gb_person_address API.

When calling hr_person_address_api.create_person_address for a GB address via Data Pump, you would have to pass the 'GB_COUNTY' lookup code for the p_region_1 parameter. Alternatively you could use the 'GB_COUNTY' lookup meaning if you used hr_person_address_api.create_gb_person_address.

Note: You should use legislation-specific APIs where these are available.

User Key Values

When you are mapping data from your external system to Oracle HRMS you will find that there are some cases where an ID value for an Oracle entity cannot be derived from a logical unique key or name. Examples of this are Person, Assignment and Address. Consider the unique identifier for a person. It is very difficult, if not impossible, to identify a person uniquely. In theory different people may share the same first and last names, gender, birth date, marital status, and so forth.

There are similar problems if an entity does not have a logical key, and its surrogate ID cannot be derived easily from the names of any of its component entities. For example, it isn't easy to identify a unique Element Link by looking simply at names of its components - Payroll, Job, Position etc.

Or, the entity may be an abstract entity specific to the Oracle Applications products and is only identifiable using an ID value. For example an ID_FLEX_NUM.

The solution provided by Data Pump is to enable you to set a 'User Key' value. This value must be a unique character string. It could be a unique ID taken from your external system or it could be a concatenation of multiple values. For example a user key for a person could be the person's name concatenated with the existing employee number from your legacy system. An illustration would be:

p_person_user_key => 'Joe Bloggs' || '2345',  -- name + emp no

You must define user key values for any parameters with a name that ends 'user_key'. Data Pump uses these user key values to identify IDs for the records in the Oracle HRMS system.

Note: User key values must be unique across all entities. For example, it is not possible to have a Person user key value of 'SMITH1001', and an Assignment user key value also of 'SMITH1001'.

In most cases you will have one user key value for each system ID. However, with Data Pump you can define many different user keys for the same system ID. This is important if you are loading data from different external systems and the unique keys do not match.

User keys are held as rows in the HR_PUMP_BATCH_LINE_USER_KEYS table.

Creating User Key Values

User keys are created in one of two ways:

Once the user keys have been created you can use the same key with other APIs to update an existing entity, or to specify another entity. For example, two person user keys can be used to specify a contact relationship.

Utility Procedures Available With Data Pump

This section lists the utility procedures that are provided with the Data Pump.

All the procedures are in the HR_PUMP_UTILS package.

create_batch_header

Parameters :
    p_batch_name            : unique batch name.
    p_business_group_name   : name of business group (optional)
    p_reference             : user reference value (optional)
    p_atomic_linked_calls   : set to Y to enable the processing of linked calls
Returns
    The hr_pump_batch_headers.batch_id.
Description :
    Creates a batch header row.  This should be used to create
    the row rather than direct insert.

An example of a call to this procedure is:

  declare
     l_batch_id number;
  begin
     l_batch_id := hr_pump_utils.create_batch_header
                 ('Employees for Dept 071', 'AKA Enterprises');
  end;

add_user_key

Procedure  : add_user_key
Parameters : 
    p_user_key_value        : unique user key value.
    p_unique_key_id         : ID associated with the user key.
Description :
  Creates a user key for use with Data Pump API calls. 
  add_user_key is used to add a user key when the object
  referred to by the ID value has not been created by Data 
  Pump. This may happen when the object has no creation API but
  is required as a user key parameter to an API called by Data 
  Pump, or if the object was created before Data Pump was 
  available.
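For example, the following sketch registers a person record that was created before Data Pump was available; the user key value and PERSON_ID are hypothetical:

begin
   -- Person 10013 already exists in Oracle HRMS, so register
   -- a user key for it manually.
   hr_pump_utils.add_user_key
      ('LEGACY-EMP-0042 : PER USER KEY', 10013);
end;
/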

modify_user_key

Procedure  : modify_user_key
Parameters : 
    p_user_key_value        : unique user key value identifying
                              the user key to be changed.
    p_new_user_key_value    : new unique user key value.
    p_unique_key_id         : new ID associated with the user 
                              key.
Description :
    The main purpose of modify_user_key is to fix an incorrect 
    user key created by add_user_key. If either 
    p_new_user_key_value or p_unique_key_id are null then the
    corresponding column is not updated for the user key.


APIs Supported by the GENERATEALL Command

Package Name                 Business Process
---------------------------  -----------------------------
HR_APPLICANT_API             CREATE_APPLICANT
                             CREATE_GB_APPLICANT
                             CREATE_US_APPLICANT
HR_ASSIGNMENT_API            ACTIVATE_EMP_ASG
                             ACTUAL_TERMINATION_EMP_ASG
                             CREATE_SECONDARY_EMP_ASG
                             CREATE_GB_SECONDARY_EMP_ASG
                             CREATE_US_SECONDARY_EMP_ASG
                             SUSPEND_EMP_ASG
                             UPDATE_EMP_ASG
                             UPDATE_EMP_ASG_CRITERIA
                             UPDATE_GB_EMP_ASG
                             UPDATE_US_EMP_ASG
HR_CONTACT_API               CREATE_PERSON
HR_CONTACT_REL_API           CREATE_CONTACT
HR_EMPLOYEE_API              CREATE_EMPLOYEE
                             CREATE_GB_EMPLOYEE
                             CREATE_US_EMPLOYEE
HR_EX_EMPLOYEE_API           ACTUAL_TERMINATION_EMP
                             FINAL_PROCESS_EMP
HR_JOB_API                   CREATE_JOB
HR_JOB_REQUIREMENT_API       CREATE_JOB_REQUIREMENT
HR_PERSONAL_PAY_METHOD_API   CREATE_GB_PERSONAL_PAY_METHOD
                             CREATE_PERSONAL_PAY_METHOD
                             CREATE_US_PERSONAL_PAY_METHOD
                             DELETE_PERSONAL_PAY_METHOD
                             UPDATE_PERSONAL_PAY_METHOD
                             UPDATE_GB_PERSONAL_PAY_METHOD
                             UPDATE_US_PERSONAL_PAY_METHOD
HR_PERSON_ADDRESS_API        CREATE_GB_PERSON_ADDRESS
                             CREATE_PERSON_ADDRESS
                             CREATE_US_PERSON_ADDRESS
                             UPDATE_PERSON_ADDRESS
                             UPDATE_GB_PERSON_ADDRESS
                             UPDATE_US_PERSON_ADDRESS
HR_PERSON_API                UPDATE_PERSON
                             UPDATE_GB_PERSON
                             UPDATE_US_PERSON
HR_POSITION_API              CREATE_POSITION
                             UPDATE_POSITION
HR_POSITION_REQUIREMENT_API  CREATE_POSITION_REQUIREMENT
HR_SIT_API                   CREATE_SIT
HR_VALID_GRADE_API           CREATE_VALID_GRADE
PY_ELEMENT_ENTRY_API         CREATE_ELEMENT_ENTRY
                             DELETE_ELEMENT_ENTRY
                             UPDATE_ELEMENT_ENTRY

Using Data Pump with Unsupported APIs

Sometimes the necessary seed data for a Data Pump call to a particular API is not present. The usual problem when running the meta-mapper generate is the lack of mapping functions to resolve system identifiers from user values, for example:

ORA-20001: Seed data error: Mapping function get_set_of_books_id does not exist. Please contact your support representative.

This type of error is usually caused by API parameters with names ending in _ID, for example, P_JOB_ID.

You can call the meta-mapper in an alternative generate mode that essentially generates a direct call to the API rather than processing parameter values beforehand to get system values. Making a Data Pump call with this generate mode requires a better understanding of the API itself than is required when using the standard generate mode.

Use this SQL*PLUS command to generate packages and views for an API:

sql > execute hr_pump_meta_mapper.generate (<package_name>, <procedure_name>, false) ;
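
For example, to generate the direct-call package and view for an API that lacks the supporting seed data (HR_GRADE_API.CREATE_GRADE is used here purely as an illustration; substitute the package and procedure that you need):

sql > execute hr_pump_meta_mapper.generate ('hr_grade_api', 'create_grade', false) ;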

Use these SQL*PLUS commands to display the help text for the API:

sql > set serveroutput on size 1000000;

sql > execute hr_pump_meta_mapper.help (<package_name>, <procedure_name>) ;

The view and package generated are the same as in the standard generation mode discussed earlier in this essay, and you can use them in the same way. However, when using this generate mode, note the following:

Table and View Descriptions

The following section provides more details of the specific tables and views that you use with Oracle HRMS Data Pump.

HR_API_MODULES

API modules supported by Data Pump.

Name                      Description
------------------------- --------------------
API_MODULE_ID             Sequence generated unique ID.
API_MODULE_TYPE           Type of the API represented by:
                          'RH' - Row Handler 
                           (not of interest to Data Pump).
                          'BP' - Business Process API.
                          'AI' - Alternative Interface API. 
MODULE_NAME               API procedure name.
MODULE_PACKAGE            API package name when the 
                          module type is 'BP' or 'AI'.
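
For example, each batch line must reference the API it calls through API_MODULE_ID. A query such as the following (the module name shown is illustrative) returns the ID to use:

  select api_module_id
  from   hr_api_modules
  where  api_module_type = 'BP'
  and    module_name     = 'CREATE_EMPLOYEE';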

HR_PUMP_BATCH_LINE_USER_KEYS

This table holds key mappings between your external system and the Oracle HRMS system. These keys are required for specific entities where it may be difficult to identify the record uniquely in Oracle HRMS from a single field in the batch line table. For example, you might want to use Name||National Identifier from the external system to map to Person ID in Oracle HRMS.

This table is populated automatically by the Data Pump process when you create new records in Oracle HRMS, for example, when you load your legacy data. If your legacy data is already loaded, you can insert new rows into this table yourself.

You can have multiple external key mappings to the same unique_key_id in Oracle HRMS. For example, if you want to interface data from an external payroll system and an external benefits system to Oracle HR where the unique IDs are different.
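
For example, the following query (unique key ID 1234 is a hypothetical value) lists every external key that resolves to the same Oracle HRMS record:

  select user_key_value
  from   hr_pump_batch_line_user_keys
  where  unique_key_id = 1234;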

Name                        Null?    Type          Description
--------------------------- -------- ------------- -------------
USER_KEY_ID                 NOT NULL NUMBER(9)
BATCH_LINE_ID                        NUMBER(9)
USER_KEY_VALUE              NOT NULL VARCHAR2(240) User Defined
                                                   key to identify
                                                   a record.
UNIQUE_KEY_ID               NOT NULL NUMBER(15)    Unique Key in
                                                   Oracle HRMS
LAST_UPDATE_DATE                     DATE
LAST_UPDATED_BY                      NUMBER(15)
LAST_UPDATE_LOGIN                    NUMBER(15)
CREATED_BY                           NUMBER(15)
CREATION_DATE                        DATE

HR_PUMP_BATCH_HEADERS

This table holds batch header information for Data Pump. BATCH_NAME is a parameter for the Data Pump concurrent process.

 Name                   Null?    Type         Description
 ---------------------- -------- ------------ -------------
 BATCH_ID               NOT NULL NUMBER(9)
 BATCH_NAME             NOT NULL VARCHAR2(80) Unique name for
                                              the batch
 BATCH_STATUS           NOT NULL VARCHAR2(30) Status; decode
                                              using the
                                              'ACTION STATUS'
                                              lookup type.
 REFERENCE                       VARCHAR2(80)
 BUSINESS_GROUP_NAME             VARCHAR2(80)
 LAST_UPDATE_DATE                DATE
 LAST_UPDATE_LOGIN               NUMBER(15)
 LAST_UPDATED_BY                 NUMBER(15)
 CREATED_BY                      NUMBER(15)
 CREATION_DATE                   DATE
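
For example, to check the outcome of a batch after the Data Pump concurrent process has run (the batch name is illustrative):

  select batch_id, batch_status
  from   hr_pump_batch_headers
  where  batch_name = 'LEGACY_LOAD_01';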

HR_PUMP_BATCH_LINES

This table holds the individual batch lines that will be loaded by Data Pump.

Name                Null?    Type         Description
------------------- -------- ------------ -------------
BATCH_LINE_ID       NOT NULL NUMBER(9)    Sequence generated ID
BATCH_ID            NOT NULL NUMBER(9)    Foreign key to
                                          HR_PUMP_BATCH_HEADERS
API_MODULE_ID       NOT NULL NUMBER(9)    Foreign key to
                                          HR_API_MODULES
LINE_STATUS         NOT NULL VARCHAR2(1)  Load status of this API
                                          call:
                                          'U' - Unprocessed (initial
                                           value).
                                          'N' - Unprocessed (same as
                                           'U', but applies to Data
                                           Pump purges; indicates
                                           that this call was not
                                           processed because a
                                           previous linked call
                                           failed to complete
                                           processing).
                                          'R' - This linked API call
                                           completed successfully,
                                           indicating that the call
                                           parameters were valid.
                                           However, the results were
                                           rolled back because a
                                           subsequent linked API
                                           call failed to complete.
                                          'V' - Validated, but
                                           record not committed.
                                          'C' - Complete, and record
                                           committed.
                                          'E' - Error.
PROCESS_SEQUENCE              NUMBER(9)
USER_SEQUENCE                 NUMBER(9)
LINK_VALUE                    NUMBER
PVAL001                       VARCHAR2(2000)
PVAL002                       VARCHAR2(2000)
PVAL003                       VARCHAR2(2000)
PVAL004                       VARCHAR2(2000)
PVAL005                       VARCHAR2(2000)
PVAL006                       VARCHAR2(2000)
PVAL007                       VARCHAR2(2000)
PVAL008                       VARCHAR2(2000)
PVAL009                       VARCHAR2(2000)
PVAL010                       VARCHAR2(2000)
...
PVAL230                       VARCHAR2(2000)
PLONGVAL                      LONG
BUSINESS_GROUP_NAME           VARCHAR2(240)
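
For example, you can summarize the progress of a batch by counting its lines in each status (batch ID 123 is a hypothetical value):

  select   line_status, count(*)
  from     hr_pump_batch_lines
  where    batch_id = 123
  group by line_status;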

HR_PUMP_BATCH_EXCEPTIONS

Holds exception information.

Name                        Description
------------------------    ------------ 
EXCEPTION_SEQUENCE          Sequence generated unique ID.
EXCEPTION_LEVEL             Decode using 'MESSAGE_LEVEL' lookup.
SOURCE_ID                   BATCH_ID or BATCH_LINE_ID.
SOURCE_TYPE                 Indicates what SOURCE_ID holds:
                                 'BATCH_HEADER' : BATCH_ID
                                 'BATCH_LINE' : BATCH_LINE_ID
EXCEPTION_TEXT              Text of exception.
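
For example, the following query (again using hypothetical batch ID 123) returns the exception text for every errored line in a batch:

  select l.batch_line_id, e.exception_text
  from   hr_pump_batch_lines      l,
         hr_pump_batch_exceptions e
  where  l.batch_id    = 123
  and    l.line_status = 'E'
  and    e.source_type = 'BATCH_LINE'
  and    e.source_id   = l.batch_line_id;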

HRDPV_CREATE_EMPLOYEE

Name                        Null?    Type
--------------------------- -------- ----
BATCH_ID                    NOT NULL NUMBER(9)
BATCH_LINE_ID               NOT NULL NUMBER(9)
API_MODULE_ID               NOT NULL NUMBER(9)
LINE_STATUS                 NOT NULL VARCHAR2(1)
USER_SEQUENCE                        NUMBER(9)
LINK_VALUE                           NUMBER
BUSINESS_GROUP_NAME                  VARCHAR2(240)
P_HIRE_DATE                          VARCHAR2(2000)
P_LAST_NAME                          VARCHAR2(2000)
P_SEX                                VARCHAR2(2000)
P_PER_COMMENTS                       VARCHAR2(2000)
P_DATE_EMPLOYEE_DATA_VERIFIED        VARCHAR2(2000)
P_DATE_OF_BIRTH                      VARCHAR2(2000)
P_EMAIL_ADDRESS                      VARCHAR2(2000)
P_EMPLOYEE_NUMBER                    VARCHAR2(2000)
P_EXPENSE_CHECK_SEND_TO_ADDRES       VARCHAR2(2000)
P_FIRST_NAME                         VARCHAR2(2000)
P_KNOWN_AS                           VARCHAR2(2000)
P_MARITAL_STATUS                     VARCHAR2(2000)
P_MIDDLE_NAMES                       VARCHAR2(2000)
P_NATIONALITY                        VARCHAR2(2000)
P_NATIONAL_IDENTIFIER                VARCHAR2(2000)
P_PREVIOUS_LAST_NAME                 VARCHAR2(2000)
P_REGISTERED_DISABLED_FLAG           VARCHAR2(2000)
P_TITLE                              VARCHAR2(2000)
P_WORK_TELEPHONE                     VARCHAR2(2000)
P_ATTRIBUTE_CATEGORY                 VARCHAR2(2000)
P_ATTRIBUTE1                         VARCHAR2(2000)
P_ATTRIBUTE2                         VARCHAR2(2000)
P_ATTRIBUTE3                         VARCHAR2(2000)
...
P_ATTRIBUTE30                        VARCHAR2(2000)
P_PER_INFORMATION_CATEGORY           VARCHAR2(2000)
P_PER_INFORMATION1                   VARCHAR2(2000)
P_PER_INFORMATION2                   VARCHAR2(2000)
P_PER_INFORMATION3                   VARCHAR2(2000)
...
P_PER_INFORMATION30                  VARCHAR2(2000)
P_BACKGROUND_CHECK_STATUS            VARCHAR2(2000)
P_BACKGROUND_DATE_CHECK              VARCHAR2(2000)
P_BLOOD_TYPE                         VARCHAR2(2000)
P_FAST_PATH_EMPLOYEE                 VARCHAR2(2000)
P_FTE_CAPACITY                       VARCHAR2(2000)
P_HONORS                             VARCHAR2(2000)
P_INTERNAL_LOCATION                  VARCHAR2(2000)
P_LAST_MEDICAL_TEST_BY               VARCHAR2(2000)
P_LAST_MEDICAL_TEST_DATE             VARCHAR2(2000)
P_MAILSTOP                           VARCHAR2(2000)
P_OFFICE_NUMBER                      VARCHAR2(2000)
P_ON_MILITARY_SERVICE                VARCHAR2(2000)
P_PRE_NAME_ADJUNCT                   VARCHAR2(2000)
P_PROJECTED_START_DATE               VARCHAR2(2000)
P_RESUME_EXISTS                      VARCHAR2(2000)
P_RESUME_LAST_UPDATED                VARCHAR2(2000)
P_SECOND_PASSPORT_EXISTS             VARCHAR2(2000)
P_STUDENT_STATUS                     VARCHAR2(2000)
P_WORK_SCHEDULE                      VARCHAR2(2000)
P_SUFFIX                             VARCHAR2(2000)
P_PERSON_USER_KEY                    VARCHAR2(2000)
P_ASSIGNMENT_USER_KEY                VARCHAR2(2000)
P_PER_OBJECT_VERSION_NUMBER          VARCHAR2(2000)
P_ASG_OBJECT_VERSION_NUMBER          VARCHAR2(2000)
P_PER_EFFECTIVE_START_DATE           VARCHAR2(2000)
P_PER_EFFECTIVE_END_DATE             VARCHAR2(2000)
P_FULL_NAME                          VARCHAR2(2000)
P_PER_COMMENT_ID                     VARCHAR2(2000)
P_ASSIGNMENT_SEQUENCE                VARCHAR2(2000)
P_ASSIGNMENT_NUMBER                  VARCHAR2(2000)
P_NAME_COMBINATION_WARNING           VARCHAR2(2000)
P_ASSIGN_PAYROLL_WARNING             VARCHAR2(2000)
P_USER_PERSON_TYPE                   VARCHAR2(2000)
P_VENDOR_NAME                        VARCHAR2(2000)
P_CORRESPONDENCE_LANGUAGE            VARCHAR2(2000)
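
Rather than inserting into this view directly, you would normally load lines through the insert_batch_lines procedure of the generated interface package. The following is a minimal sketch only: the hrdpp_create_employee package name follows the standard Meta-Mapper naming, but check the package specification generated on your system for the exact parameter list (the batch ID and data values are hypothetical):

  begin
    -- Queue one create_employee call on a hypothetical batch 123.
    hrdpp_create_employee.insert_batch_lines
      (p_batch_id            => 123
      ,p_user_sequence       => 1
      ,p_hire_date           => to_date('01/06/1996','DD/MM/YYYY')
      ,p_last_name           => 'Smith'
      ,p_sex                 => 'M'
      ,p_employee_number     => 'EMP100'
      ,p_person_user_key     => 'SMITH_J_NI123456A'
      ,p_assignment_user_key => 'SMITH_J_NI123456A_ASG');
    commit;
  end;
  /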

PAY_ACTION_PARAMETER_GROUPS

Name                        Null?    Type
--------------------------- -------- ------------
ACTION_PARAMETER_GROUP_ID   NOT NULL NUMBER(9)
ACTION_PARAMETER_GROUP_NAME NOT NULL VARCHAR2(30)

PAY_ACTION_PARAMETER_VALUES

Name                        Null?    Type
--------------------------- -------- ------------
PARAMETER_NAME              NOT NULL VARCHAR2(30)
PARAMETER_VALUE             NOT NULL VARCHAR2(80)
ACTION_PARAMETER_GROUP_ID            NUMBER(9)
Note: The PAY_ACTION_PARAMETERS view returns only those rows from PAY_ACTION_PARAMETER_VALUES that have a null ACTION_PARAMETER_GROUP_ID.
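
For example, to set an action parameter that applies to all processes, insert a row with a null ACTION_PARAMETER_GROUP_ID (THREADS is shown here for illustration; check the action parameters documented for your release):

  insert into pay_action_parameter_values
    (parameter_name, parameter_value)
  values
    ('THREADS', '4');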

Using Data Pump Process Manager

You can find Data Pump Process Manager in the Mass Information eXchange (MIX) menu of HRMS.

You can then use each of the following pages for easier processing of your Data Pump batches.

The Processing Summary Page

This is the entry page. Use the Processing Summary page to view the processing status and processing details of an existing batch that has been submitted to Data Pump.

The Submit Data Pump Process Page

The Submit Data Pump page enables you to submit a Data Pump batch for processing and to control how the batch is processed.

The Processing Details Page

The Processing Details page enables you to: