6 Support Utilities

Some support utilities will be exposed for implementers directly in APEX, allowing you to run functions such as database cleanup without Oracle involvement. These utilities may also be used by Oracle Support when responding to Service Requests on your RAP environments. If a process documented here is intended only for Oracle Support usage and not for customers directly, it will be noted in the detailed description.

Data Cleanup Utilities

Data Warehouse Table Cleanup

Because foundation data is always loaded first through the shared data warehouse (also called the RADM01 schema), implementers often need to erase data from these tables in preparation for a new load. Database functions have been exposed to APEX to allow targeted deletion of data by table name. The delete functions can be disabled by Oracle upon request if you do not want the functionality exposed after customer go-live.

Note:

These utilities are only for data loaded using the AIF DATA schedule jobs in POM. If you have loaded data into one or more AIF applications (AIF APPS schedule) then there is a cleanup job named AIF_APPS_MAINT_DATA_CLEANUP_ADHOC_PROCESS in the AIF APPS schedule. Refer to the Data Cleanup Utility section in the AI Foundation Applications Standalone Processes chapter.

The sample command below is the basic method used for table cleanup of a specific table. Specify the schema name and table name to be truncated, then run the PL/SQL block.

DECLARE
  SCHEMANAME VARCHAR2(200);
  TABLENAME VARCHAR2(200);
BEGIN
    SCHEMANAME := 'RADM01';
    TABLENAME := 'W_RTL_SLS_TRX_IT_LC_DY_FS';
    RI_SUPPORT_UTIL.CLEAR_SELECTED_RI_TABLES(
        SCHEMANAME => SCHEMANAME,
        TABLENAME => TABLENAME
    );
END;

If the process is successful, you will see that the PL/SQL block was successfully executed with no further message or results. If the process encounters any error, it will display the error details in the results panel in APEX. Error details may also be logged in the RI_LOG_MSG table unless it is an unexpected failure that was not caught by the program.

To quickly clean the entire database schema instead of individual tables, you may instead call the following command. This command will erase all customer data except for the calendar, system configuration tables, and seed data records. This command will also delete user data from the C_HIST_LOAD_STATUS table, which was generated for any history loads. Use this command if you need to reset the environment in preparation for a new data load using a different dataset:

DECLARE
  SCHEMANAME VARCHAR2(200);
BEGIN
    SCHEMANAME := 'RADM01';
    RI_SUPPORT_UTIL.CLEAR_SELECTED_RI_TABLES(
        SCHEMANAME => SCHEMANAME
    );
END;

Calendar removal is provided as a separate function, because you cannot remove calendar information without also erasing all partitions (which are specific to your currently loaded calendar). The function, named CLEAR_RI_MCAL_TABLES, can be called the same way as the schema clear script above, passing in the schema name as the input. Before you perform any calendar cleanup, review the following:

  • Partition removal is based on the current partition configuration in C_MODULE_ARTIFACT; it will not modify tables that are not enabled for partitioning. Ensure the configuration table reflects your current cleanup needs.

  • Because calendar cleanup includes partition removal, you cannot use the system for a new data load without first re-partitioning the system. Refer to the RAP Implementation Guide for the steps to reload the calendar and partition the database.

DECLARE
  SCHEMANAME VARCHAR2(200);
BEGIN
    SCHEMANAME := 'RADM01';
    RI_SUPPORT_UTIL.CLEAR_RI_MCAL_TABLES(
        SCHEMANAME => SCHEMANAME
    );
END;

There is a function named RI_SUBJECTAREA_TABLE that erases functional areas of the data warehouse one by one, which can be useful for targeted cleanup of related groups of tables. The function uses the list of tables and subject area names from the database table C_RI_SUBJECTAREA, which you can query from APEX to identify which values you want to use. If you run the command for the Price or Inventory Position subject areas, then it will also clean up the C_HIST_LOAD_STATUS table for all related entries.
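For example, you can review the available subject areas and their member tables from APEX before deciding what to clean up. The queries below are a simple sketch; the SUBJECTAREA_NAME column is assumed based on the parameter name used by the function, so adjust the column name if your version differs.

-- list all subject areas and the tables assigned to them
select * from c_ri_subjectarea;

-- narrow the list to a single subject area (column name assumed)
select * from c_ri_subjectarea where subjectarea_name = 'Organization';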

The command syntax is shown below.

DECLARE
  SUBJECTAREA_NAME VARCHAR2(200);
  OWNER_NAME VARCHAR2(200);
BEGIN
    OWNER_NAME := 'RADM01';
    SUBJECTAREA_NAME := 'Organization';
    RI_SUPPORT_UTIL.RI_SUBJECTAREA_TABLE(
        SUBJECTAREA_NAME => SUBJECTAREA_NAME,
        OWNER_NAME => OWNER_NAME
  );
END;

When you run any of the cleanup commands above, it may take an hour or longer to complete depending upon the amount of data in your schema and the number of partitions requiring deletion. It is recommended to run large cleanup activities from a SQL Script, not from the SQL Commands screen. Add the commands to a script and save it, then execute it separately. Saved scripts that are executed are allowed to run in the background, which means you can navigate away from the page as soon as you start it, and it will continue to run until completed. To monitor the activity after starting a cleanup command or script, you can query the RI_LOG_MSG table and check for new log messages:

select * from ri_log_msg order by msg_ts desc;

If the process is still running, you will see new log entries being added for ri_support_util methods. If no recent entries are added and the last set of messages shows the END messages for a process step, then you can verify that all your tables are cleared and proceed with your implementation activities. When verifying table counts and contents after a cleanup script is run, you must include a hint in your SQL to prevent cached results or stale statistics from being returned.

For example:

SELECT /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */  COUNT(*) from w_product_d;

Integration Layer Table Cleanup

Tables used to move data between RAP applications are maintained in a separate database schema called RDX (short for retail data exchange). These tables are not accessible to cleanup packages like ri_support_util. A separate package is provided just for these tables, called rap_support_util. You may call this package to delete data from the RDX schema, specifically for tables that use RUN_ID as the primary key and partition structure, such as W_PDS_SLS_IT_LC_WK_A.

The rap_support_util package has a single method PURGE_INTF_RUNS. This method allows you to delete data from an RDX table fully or for specific RUN_ID values. The parameters for the method are:

  • p_app_code – Required. The code of the application whose integration tables you are purging. Use % as a wildcard to purge data regardless of the application source. Valid codes can be found on RAP_INTF_CFG.

  • p_intf_name – Required. The name of the table that needs to be purged. Use % as a wildcard to purge all supported tables (recommended to only use this option with a non-wildcard value on p_app_code).

  • p_run_id – Optional. The run_id to be deleted.

  • p_max_retained_run_id – Optional. The maximum run_id that needs to be retained in the table. Data with a run ID less than this value will be purged.

  • p_purge_age_run_id – Optional. Purge run IDs older than this age.

The first two parameters must always be used, and you may optionally provide one of the other three parameters to further refine the data to be deleted.

To erase all runs in a table, you only need to provide the table name. For example, to erase the W_PDS_INVRTV_IT_LC_WK_A table, use the command:

DECLARE
    TABLE_NAME VARCHAR2(200); 
BEGIN
    TABLE_NAME := 'W_PDS_INVRTV_IT_LC_WK_A';
    RAP_SUPPORT_UTIL.PURGE_INTF_RUNS(p_app_code => '%', p_intf_name => TABLE_NAME); 
END;

To limit the RUN_ID values that are purged, pick any one of the other parameters as a third input value after the table name. The run ID and max retained run ID are numbers that should contain a single RUN_ID from the available values in the database table. The purge age is a number representing the number of days back from sysdate to purge (any run created before sysdate-p_purge_age_run_id will be deleted).
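For example, to purge only runs older than a certain age from the same table, pass the purge age as a third, named parameter. This is a sketch using the parameters documented above; the 30-day value is only illustrative.

DECLARE
    TABLE_NAME VARCHAR2(200);
BEGIN
    TABLE_NAME := 'W_PDS_INVRTV_IT_LC_WK_A';
    -- purge any run created before sysdate-30 for this table
    RAP_SUPPORT_UTIL.PURGE_INTF_RUNS(
        p_app_code         => '%',
        p_intf_name        => TABLE_NAME,
        p_purge_age_run_id => 30
    );
END;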

Data Warehouse Partition Cleanup

Most tables in the data warehouse are partitioned using the loaded fiscal calendar, and these partitions are created either through ad hoc processes or automatically in batch. When you have major data changes, such as altering the calendar or reconfiguring the flexible fact tables, you might be required to purge existing partitions as part of data cleanup. There is a function named REMOVE_ALL_PARTITIONS that can purge partitions from a single table. It has only one parameter input: the name of the table to purge.

DECLARE
    TABLE_NAME VARCHAR2(200); 
BEGIN
    TABLE_NAME := 'W_RTL_FLEXFACT1_F';
    RI_SUPPORT_UTIL.REMOVE_ALL_PARTITIONS(p_table_name => TABLE_NAME); 
END;

Data Delete Utility

During data validations, you may encounter times when a particular subset of data is incorrect, and you wish to reload it. If you do not want to truncate the entire table, you may be able to delete just the segment of data having an issue by using this data deletion utility. This utility allows you to specify ranges of calendar, product, and location values that you need to delete and it will remove only that data, allowing you to reprocess those intersections without reloading the entire history.

Currently, the following functional modules are supported (shown with their module codes):

  • Sales – SLS
  • Inventory Position – INV
  • Inventory Receipts – INVRC
  • Inventory Transfers – INVTSF
  • Inventory Adjustments – INVADJ
  • Inventory RTVs – INVRTV
  • Markdowns – MKDN
  • Wholesale/Franchise – SLSWF

The job for this utility is in the AIF DATA schedule, Standalone process section. The process name is DATA_DELETE_PROCESS_ADHOC and has only one job, DATA_DELETE_JOB. The job accepts two parameter values:

  1. The first value is required and must specify the module code (for example, SLS)

  2. The second value is optional and may be the value ALL. This overrides the configurations for the job and instead performs a full table truncate on the module, similar to the data cleanup utility but only for the specific table covered by this program.

The parameters are provided directly in the POM edit parameters screen or as values on the job parameters in a Postman call (an example message body is shown below). If entered in the POM UI, you might enter SLS ALL to erase the sales data, or just SLS to run the sales delete process for your configuration.
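A Postman message body for this process might look like the following. This sketch follows the same pattern as the other ad hoc process calls in this chapter; the exact requestParameters string is an assumption based on that pattern, so confirm the job parameter format in your environment.

{
  "cycleName":"Adhoc",
  "flowName":"Adhoc",
  "requestType":"POM Scheduler",
  "processName":"DATA_DELETE_PROCESS_ADHOC",
  "requestParameters":"jobParams.DATA_DELETE_JOB=SLS ALL"
}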

To configure what data will be deleted, there are two configuration tables available from the Manage System Configurations screen. The first table is C_MODULE_CLNUP_TABLE. This lists the available tables in the database that can be cleaned up by the utility. You may change the column CLNUP_IND to Y or N, which enables or disables that table for the cleanup functions. The second table is C_MODULE_CLNUP_CFG. This is where you specify the intersections of product, location, and calendar to be deleted from the tables. You may insert a row into this table to set the parameters. The columns to update are listed below, followed by a sample query for reviewing your configuration.

  • MODULE_CODE – Enter the module code from the supported list of codes, like SLS or INV.

  • PROD_LVL_NAME – Enter the product hierarchy level that your IDs will be specified for, including ITEM, SBC, CLS, DEPT, GRP, or DIV.

  • PROD_LVL_ID_LST – Enter a comma-separated list of unique identifiers for product hierarchy nodes (equivalent to W_PROD_CAT_DH.LEVEL_ID or PRODUCT.csv values like LVL4_PRODCAT_UID or ITEM) that you want data deleted for.

  • PROD_LVL_ID_LST2 – Optional field for more identifiers if your list exceeds the field length of the prior column.

  • ORG_LVL_NAME – Enter the location hierarchy level that your IDs will be specified for, including LOCATION, DISTRICT, REGION, AREA, or CHAIN.

  • ORG_LVL_ID_LST – Enter a comma-separated list of unique identifiers for location hierarchy nodes (equivalent to ORGANIZATION.csv ORG_HIER10_NUM or other org hierarchy values) that you want data deleted for.

  • ORG_LVL_ID_LST2 – Optional field for more identifiers if your list exceeds the field length of the prior column.

  • CLNUP_START_DATE – Set the start date for the data cleanup. Use a week-starting date if cleaning week-level tables. Make sure there is no timestamp component (time shows as 00:00:00).

  • CLNUP_END_DATE – Set the end date for the data cleanup. Use a week-ending date if cleaning week-level tables. Make sure there is no timestamp component (time shows as 00:00:00).

  • CLNUP_IND – Enter Y or N to enable or disable the configuration record.
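Before running the delete job, you can review your active configuration rows from APEX. The query below is a simple sketch using only the columns described above; add or remove columns as needed.

select module_code,
       prod_lvl_name,
       org_lvl_name,
       clnup_start_date,
       clnup_end_date,
       clnup_ind
  from c_module_clnup_cfg
 where clnup_ind = 'Y';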

The data delete utility will use this information in the following ways:

  • When not using the ALL option, the utility issues truncate-partition statements if no product/location intersections are provided, or delete statements limited by your product, location, and calendar values. All combinations of the specified values are deleted for the given date range.

  • When using the ALL option, the configuration table is ignored and the tables having CLNUP_IND=Y will be truncated entirely.

  • Week-level tables are always truncated by weekly partition or deleted by week; partial-week cleanup is not possible, which is why week start/end dates should be used in the configuration.

For the hierarchy information, the tables W_PRODUCT_D_RTL_CUR_TMP and W_INT_ORG_DH_RTL_CUR_TMP are used to select the necessary intersections and keys. These should already be populated by the hierarchy loads into the system, but if the tool is not deleting the data you expect, check that your hierarchies have been loaded into these tables. Only the latest hierarchy definitions are used, so if reclassifications were performed, the utility will not delete data for inactive or outdated hierarchy definitions.

Innovation Workbench Process Cleanup

If a process running from Innovation Workbench needs to be terminated for any reason, it is possible to do this from Innovation Workbench itself. As a prerequisite, you will need two pieces of information:

  • the account name which invoked the process/query that needs to be terminated (for example: first.lastname@domainname.com)

  • the minimum required runtime of the database session(s) to be killed (in minutes)

After collecting this information, you will use a package called RAP_DBADMIN_LOCAL_UTIL, which holds the program units associated with session management. This package has a routine named KILL_USER_RUNAWAY_SESSION, which kills long-running sessions for the specified client information. It supports two parameters that control which sessions are selected. The first parameter (p_client_info) is optional and, if not provided, defaults to the user who invoked the routine. The second parameter (p_max_runtime_allowed_min) is also optional and specifies the number of minutes a session must have been active before it is selected to be killed. This defaults to 60 minutes if not provided. To kill all IW sessions, without regard for runtime, provide a value of 0 for p_max_runtime_allowed_min. The routine never terminates the session that invoked it.

Based on the parameters provided, sessions that are ACTIVE and meet the search conditions provided will be killed using a command like the following:

BEGIN
rap_dbadmin_local_util.kill_user_runaway_session(:userid_to_kill, :minimum_runtime_minutes);
END;
/

The variables shown above are placeholders for the values to be used. The process will kill all sessions that are associated with the specified account (except for the session initiating this request) and have been active for more than the specified number of minutes. The runtime parameter makes cleanup safer, terminating only longer-running sessions while leaving more recent sessions untouched.

As an alternative to using this process, it is also possible to schedule your commands to run through DBMS_SCHEDULER. Those statements run in the background and can be monitored through the normal data dictionary views associated with DBMS_SCHEDULER. If a statement runs through DBMS_SCHEDULER, and it needs to be terminated, then instead of using this RAP_DBADMIN_LOCAL_UTIL package to kill the session, it is possible to use DBMS_SCHEDULER.STOP_JOB to stop the job that was submitted. 

Aggregation Utility

The data warehouse has over 100 different tables for pre-calculating data at higher levels of aggregation, mainly for the purpose of BI reporting and analytics. These tables do not need to be populated during initial historical data loads but would be needed before end-users begin accessing data in RI. Some aggregates are also used for PDS integrations (at the item/location/week level). To populate or update these tables after history loads are complete, an aggregation utility is provided that can use the base intersection of a functional area to calculate all of the higher-level tables.

The utility currently supports the following subject areas in RI:

  • Sales – SLS
  • Sales Pack – SLSPK
  • Sales Promotion – SLSPR
  • Sales Wholesale – SLSWF
  • Inventory – INV
  • Inventory Adjustments – INVADJ
  • Inventory Receipts – INVRC
  • Inventory Transfers – INVTSF
  • Inventory Reclasses – INVRECLASS
  • Inventory Return to Vendor – INVRTV
  • Markdowns – MKDN
  • Net Profit – NPROF
  • Customer Loyalty Transactions – CUST

Within these subject areas, the aggregation does have some limitations on which columns are populated (relative to nightly batches). Aggregate columns that are derived by joining multiple tables together during batch processing are not included in this utility because the data may not be available or accurate for the calculations. This includes:

  • Inventory availability columns, such as the counts and amounts based on presentation stock and demo stock levels

  • Inventory age and weeks-in-store calculations based on new receipt activity

  • Any columns that join to the clearance dimension to get the clearance indicator and markdown event ID for a specific inventory or transaction record

Prerequisites for using the utility (all steps must be completed every time you want to use the utility):

  1. Partitioning has been run for the target functional areas such as sales (SLS), inventory (INV), and so on. Follow the steps in the RAP Implementation Guide to perform additional partitioning as needed. If you have not used the utility since the last time you received a product patch, you should re-run the partitioning process again to ensure all tables are partitioned.

  2. The base fact for the functional area has already been loaded with data for the entire date range you want to aggregate on. For example, the W_RTL_SLS_TRX_IT_LC_DY_F table is loaded before attempting to aggregate it to W_RTL_SLS_IT_LC_WK_A.

  3. Database statistics have been collected recently using REFRESH_RADM_JOB and ANALYZE_TEMP_TABLES_JOB (either as part of an ad hoc data load or automatically as part of nightly batch).

The configuration table to control the utility is C_RI_AGGREGATION_MAP, which is available from the Control Center in the AI Foundation user interface. It contains a list of aggregate tables in the data warehouse that can be processed by the utility. For each table you want to load, set the START_DT as the earliest date to process and the END_DT to the final date to process. The tables are grouped by functional area such as MODULE_CODE=SLS so you can update all tables relating to that fact.

When specifying the start/end dates, make sure to consider the calendar level of the table. Day level tables can have any start/end dates because they use daily partitions. Week level (WK) tables should use week starting/ending dates to ensure each full week of data is always aggregated into the table. Similarly, Gregorian month (GMH) tables should use month start/end dates. The utility also has functions to auto-extend your date ranges to encompass full weeks and months even if you make mistakes in the configuration. By default, dates will always be auto-extended so that full weeks/months are always loaded where needed. This can be changed using parameter RI_AGG_FULL_LOAD_TYPE on C_ODI_PARAM_VW if you only want the dates you specify to be included in the aggregations. Valid values include:

  • F – full auto-extend of dates

  • FE – extend end dates only

  • FS – extend start dates only

  • N/A (or any other value) – Use only the dates in the mapping table
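Before running the utility, you can review the configured date ranges from APEX. The query below is a minimal sketch for the sales tables; it relies on the MODULE_CODE, AGGREGATION_LEVEL, START_DT, and END_DT columns referenced in this section.

select *
  from c_ri_aggregation_map
 where module_code = 'SLS'
 order by aggregation_level;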

Once the necessary updates are performed, you will execute an ad hoc process in POM named AGGREGATION_UTILITY_ADHOC. This process is a first-time manual run to validate the configuration is working as intended and to set up the temp tables. This process has 3 jobs in it:

  • AGG_UTILITY_PRE_JOB – Calculates a temporary lookup table for product hierarchy relationships

  • AGG_UTILITY_ORG_PRE_JOB – Calculates a temporary lookup table for organization hierarchy relationships

  • AGG_UTILITY_JOB – Performs an aggregation action for a specific table name and run type

The AGG_UTILITY_JOB requires two input parameters: the name of the table as found in C_RI_AGGREGATION_MAP and the type of aggregation to perform (FRESH or RESTART). When FRESH is specified, it assumes you want to aggregate the entire date range specified in the configuration table, even if it has been run before. If RESTART is specified, it will run only from the last completed period (the job aggregates one quarter at a time, so it will not re-run earlier quarters that already completed). Also use the RESTART option if you changed the END_DT to a date further in the future and only want to process the incomplete dates resulting from the change. In most use cases you can specify RESTART as the option and it will perform the required actions.

Example Postman message body for the process call:

{
  "cycleName":"Adhoc", 
  "flowName":"Adhoc", 
  "requestType":"POM Scheduler", 
  "processName":"AGGREGATION_UTILITY_ADHOC",
  "requestParameters":"jobParams.AGG_UTILITY_JOB= W_RTL_SLS_CS_IT_LC_DY_A RESTART"
}

Once you have issued the command to start the process, you may monitor the detailed run status by querying the table C_BULK_LOAD_STATUS from APEX. A record will be inserted for each calendar quarter that has been processed until the entire date range is aggregated. The POM job will complete successfully after the table is loaded for all dates. You may then compare the base fact table with the target aggregate and confirm the values have been rolled up as expected.
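The checks below are a simple sketch of that monitoring from APEX. The count query reuses the statistics hint shown earlier in this chapter and the weekly sales aggregate named in the prerequisites; substitute your own target table as needed.

-- check the utility's progress by calendar quarter
select * from c_bulk_load_status;

-- spot-check that the target aggregate now contains data
SELECT /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */ COUNT(*) from w_rtl_sls_it_lc_wk_a;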

The aggregate tables must be populated in a specific sequence based on the value in the AGGREGATION_LEVEL column in C_RI_AGGREGATION_MAP. For each MODULE_CODE, the level 1 tables must be populated first, then the level 2 tables, and so on. To automate this execution sequence, there is a separate job available in POM, named AGG_SRVC_JOB. The aggregation service job accepts a single input parameter for the MODULE_CODE value. The job will execute all tables in the associated record set in C_RI_AGGREGATION_MAP for that module, following the AGGREGATION_LEVEL sequence as needed. The two PRE jobs (AGG_UTILITY_PRE_JOB and AGG_UTILITY_ORG_PRE_JOB) are prerequisites for this job, so ensure you’ve already run those at least once before using AGG_SRVC_JOB.

Example Postman message body for the process call:

{
  "cycleName":"Adhoc", 
  "flowName":"Adhoc", 
  "requestType":"POM Scheduler", 
  "processName":"AGGREGATION_SRVC_ADHOC",
  "requestParameters":"jobParams.AGG_SRVC_JOB=INV"
}

If any processes in the AGG_SRVC_JOB have a failure or are taking too long to run, you may also check the following tables for more information:

  • C_RI_SRVC_REQ_QUEUE – Contains the status of individual service calls invoked by the aggregation process. A status of 6 means success while 7 means failed.

  • RI_LOG_MSG – If a process does fail, the detailed trace logs will be written to this table to help you identify the problem. Look for records where PROGRAM_UNIT = RI_AGGREGATION_UTIL.
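For example, the following queries can be run from APEX to check the service queue and the trace logs. They are sketches based on the tables and filters mentioned above; the service queue is selected in full because its column layout is not described here.

-- review aggregation service calls (status meanings: 6 = success, 7 = failed)
select * from c_ri_srvc_req_queue;

-- review detailed trace logs for the aggregation utility
select *
  from ri_log_msg
 where program_unit = 'RI_AGGREGATION_UTIL'
 order by msg_ts desc;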

Database Statistics Utility

A critical part of working with large datasets in Oracle Database is the collection of statistics on your database tables. The POM processes used to load data generally include a job to collect statistics on the entire database schema to ensure stats are always up-to-date. The drawback of this program is that it can take a significant amount of time to run, even if you only need to refresh statistics on a single table. To help implementers collect statistics on specific tables, a utility is provided using the POM standalone program COLLECT_STATS_JOB.

The COLLECT_STATS_JOB accepts a single input parameter for the database module code you wish to gather stats on. The module codes are defined from the configuration table C_MODULE_DBSTATS, which is available from the Control & Tactical Center in the AI Foundation UI. The configuration table will come pre-defined with some core modules that often need stats collected on them using the codes SLS, INV, and PRICE. You have the ability to insert new rows into the table to define your own custom values for MODULE_CODE. You may specify any value you wish for the MODULE_CODE, along with one or more tables you plan to collect stats on. You would then pass the MODULE_CODE value into the job parameters to collect stats on your chosen list of tables. TABLE_NAME and MODULE_CODE are the only required values for tables in the RADM01 schema. If you are collecting stats on a temp table (in the RABE01USER schema) then you must also populate the OWNER_TYPE as BATCH.
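To review the current stats configuration from APEX before running the job, a query like the following can be used. This is a sketch that selects only the columns named in this section.

select module_code,
       table_name,
       owner_type,
       op_type
  from c_module_dbstats
 order by module_code, table_name;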

The C_MODULE_DBSTATS column OP_TYPE provides a way to handle various issues with locked statistics. It accepts one of the following codes:

  • SKIP – If a table’s statistics are locked, then skip it and continue processing

  • C_LOCK – If a table’s statistics are locked, unlock it, collect stats, then lock it again

  • C_UNLOCK – If a table’s statistics are locked, unlock it and collect stats, leaving it unlocked

  • UNLOCK – If a table’s statistics are locked, unlock them

After reviewing the configuration, you may invoke the job from POM or Postman, providing your MODULE_CODE as the only input parameter.

Example Postman message body for the process call:

{
 "cycleName":"Adhoc", 
 "flowName":"Adhoc", 
 "requestType":"POM Scheduler", 
 "processName":"COLLECT_STATS_ADHOC",
 "requestParameters":"jobParams.COLLECT_STATS_JOB=SLS"
}

External Table Load Logs

The first step of importing a file into RAP applications is to map the raw file as an external table on the Oracle database. The file is then pulled from the external table into an actual staging table in the target database schema. From a batch job perspective, the external table steps are performed by the jobs having STG in the name, such as W_RTL_CMP_CLOSED_DS_STG_JOB or STG_SI_ORGANIZATION_JOB. Issues that occur during the external table setup and load process result in rejected records on the application server that are not immediately visible to the database, since no data is yet loaded into the system.

To access rejected records from external tables, a temporary link is created in the database that points to the log files. You must use a procedure in the ri_support_util package to access this data. The procedure is named get_file_load_result and it accepts two input parameters:

  1. The log file type, using values LOG or BAD. LOG files are the detailed log messages, while BAD files are the actual rejected records from the source data.

  2. The numerical sequence of the database object linked to the logs. This is obtained from the error message when a job fails in POM.

Here is an example log message you might get from a failed job in POM:

Status check shows failed job, due to [ORA-20003: Reject limit reached, query table "RADM01"."COPY$124_LOG" for error details

The table referenced in this message is actually an external table link to a log file on the server. To access the data, log into Innovation Workbench and call the support utility with this command:

create table BATCH_LOG124 as select * from table (ri_support_util.get_file_load_result('LOG', '124'));

Creating a table allows you to preserve the logs without re-querying the application server. If there are rejected records associated with the same load, then there will also be a BAD table, which can use the same command but replacing LOG with BAD. External table logs are temporary, and they will be erased frequently by automated processes. You will need to extract the relevant data from the logs the same day the job fails, or it may be deleted.
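For example, if the same load also produced rejected records, the BAD file can be captured in the same way by replacing LOG with BAD. The target table name here is only illustrative, and the call assumes a BAD file exists for sequence 124.

create table BATCH_BAD124 as select * from table (ri_support_util.get_file_load_result('BAD', '124'));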

Managing Rejected Records

Data Warehouse Rejection Process

A core feature of the foundation data warehouse is the capturing and storage of rejected fact records. A rejected record is one in which the data was able to be staged into the database from a file or other integration, but there was a problem with the data that prevented it from loading into the final data warehouse table. The most common reason for rejection is that one or more of the key columns in the fact record does not have any matching value in the associated dimension tables. For example, you provide a sales transaction with ITEM = 12340 but there is no such item as part of your product dimension, so the system is unable to load that record.

The general process for rejecting data is:

  1. Records are loaded from the staging (FS) table to a temporary (TMP) table where the data is joined with internal dimensions and foreign keys are obtained.

  2. Records are moved from the temporary table to the target fact (F) table using either an INSERT or MERGE statement, depending on the fact program.

  3. The program compares the records in the FS and TMP tables and any differences are written to an error (E$) table for review. E$ tables do not exist when the system is first installed; they are created dynamically at runtime.

  4. A summary of the errors is written to the W_ETL_REJECTED_RECORDS table when the job completes. Jobs do not fail due to rejected records; they will load any valid data and end successfully.

A list of rejected record tables is provided below for reference. These tables belong to the RABE01USER database user, so when querying them you should append the username in front of the table name. If you are attempting to query one of these tables from APEX and you get an error that the table does not exist, then one of two things may be the reason:

  • You do not have any rejections yet, so the table has not been created by the load program.

  • The table has not been granted to APEX. The ability to select from E$ tables is given by RABE_TO_RTLWSP_GRANTS_JOB and this job must be executed as part of any ad hoc process to refresh the table grants.
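For example, to review rejected sales transactions from APEX, prefix the table with the owning user as described above. This is a sketch; the table only exists once rejections have occurred and the grants have been refreshed.

select * from RABE01USER.E$_W_RTL_SLS_TRX_IT_LC_DY_TMP;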

The rejected records tables are listed below by subject area:

  • Allocation Details – E$_W_RTL_ALC_IT_LC_DY_TMP
  • Base Cost – E$_W_RTL_BCOST_IT_LC_DY_TMP
  • Cluster Items – E$_W_RTL_CLSTR_GRP_IT_TMP
  • Competitor Price – E$_W_RTL_COMP_PRICE_IT_LC_DY_T
  • Customer Loyalty Awards – E$_W_RTL_CUST_LYL_AWD_TRX_DY_T
  • Customer Loyalty Transactions – E$_W_RTL_CUST_LYL_TRX_LC_DY_TM
  • Deal Income – E$_W_RTL_DEALINC_IT_LC_DY_TMP
  • Fact Aggregate 1 – E$_W_RTL_FACT1_PROD1_LC1_T1_TMP
  • Fact Aggregate 2 – E$_W_RTL_FACT2_PROD2_LC2_T2_TMP
  • Fact Aggregate 3 – E$_W_RTL_FACT3_PROD3_LC3_T3_TMP
  • Fact Aggregate 4 – E$_W_RTL_FACT4_PROD4_LC4_T4_TMP
  • Flex Fact 1 – E$_W_RTL_FLEXFACT1_TMP
  • Flex Fact 2 – E$_W_RTL_FLEXFACT2_TMP
  • Flex Fact 3 – E$_W_RTL_FLEXFACT3_TMP
  • Flex Fact 4 – E$_W_RTL_FLEXFACT4_TMP
  • Gift Card Sales – E$_W_RTL_GCN_TRX_LC_DY_TMP
  • Intercompany Margin – E$_W_RTL_ICM_IT_LC_DY_TMP
  • Inventory Adjustments – E$_W_RTL_INVADJ_IT_LC_DY_TMP
  • Inventory OOS – E$_W_RTL_INVOOS_IT_LC_WK_TMP
  • Inventory Count (Perpetual) – E$_W_RTL_INVPS_CNT_IT_LC_DY_TM
  • Inventory Count (Systemic) – E$_W_RTL_INVSS_CNT_IT_LC_DY_TM
  • Inventory Receipts – E$_W_RTL_INVRC_IT_LC_DY_TMP
  • Inventory Reclass – E$_W_RTL_INVRECLASS_IT_LC_DY_T
  • Inventory Return to Vendor – E$_W_RTL_INVRTV_IT_LC_DY_TMP
  • Inventory Transfers – E$_W_RTL_INVTSF_IT_LC_DY_TMP
  • Inventory Unavailable – E$_W_RTL_INVU_IT_LC_DY_TMP
  • Inventory Position (when RI_INVAGE_REQ_IND=N) – E$_W_RTL_INV_IT_LC_DY_TMP
  • Inventory Position (when RI_INVAGE_REQ_IND=Y) – E$_W_RTL_INV_IT_LC_DY_TMP1
  • Markdowns – E$_W_RTL_MKDN_IT_LC_DY_TMP
  • Market Sales (Consumer Group) – E$_W_RTL_MKTSLS_TA_CH_CNG_WK_T
  • Market Sales (Household Group) – E$_W_RTL_MKTSLS_TA_CH_HG_WK_TM
  • Net Cost – E$_W_RTL_NCOST_IT_LC_DY_TMP
  • Plan 1 – E$_W_RTL_PLAN1_PROD1_LC1_T1_TMP
  • Plan 2 – E$_W_RTL_PLAN2_PROD2_LC2_T2_TMP
  • Plan 3 – E$_W_RTL_PLAN3_PROD3_LC3_T3_TMP
  • Plan 4 – E$_W_RTL_PLAN4_PROD4_LC4_T4_TMP
  • Plan 5 – E$_W_RTL_PLAN5_PROD5_LC5_T5_TMP
  • Plan Forecast 1 – E$_W_RTL_PLANFC_PROD1_LC1_T1_T
  • Plan Forecast 2 – E$_W_RTL_PLANFC_PROD2_LC2_T2_T
  • Promotion Actual – E$_W_RTL_PRACT_IT_LC_DY_TMP
  • Promotion Budget – E$_W_RTL_PRBDGT_IT_LC_TMP
  • Purchase Order Allocations – E$_W_RTL_PO_ONALC_IT_LC_DY_TMP
  • Purchase Orders – E$_W_RTL_PO_ONORD_IT_LC_DY_TMP
  • Price – E$_W_RTL_PRICE_IT_LC_DY_TMP
  • Replenishment Demand – E$_W_RTL_REPL_DMD_IT_LC_DY_TMP
  • Replenishment WF Orders – E$_W_RTL_REPL_WF_ORD_IT_LC_DY_
  • Sales – E$_W_RTL_SLS_TRX_IT_LC_DY_TMP
  • Sales Consignment – E$_W_RTL_SLSCC_TRX_IT_LC_DY_TM
  • Sales Discount – E$_W_RTL_SLSDSC_TRX_IT_LC_DY_T
  • Sales Extensions – E$_W_RTL_SLS_TRX_EXT_IT_LC_DY_
  • Sales Pack – E$_W_RTL_SLSPK_IT_LC_DY_TMP
  • Sales Promotion – E$_W_RTL_SLSPR_TX_IT_LC_DY_TMP
  • Sales Wholesale – E$_W_RTL_SLSWF_IT_LC_DY_TMP
  • Shipment Details – E$_W_RTL_SHIP_IT_LC_DY_TMP
  • Stock Ledger (Gregorian Month) – E$_W_RTL_STCKLDGR_SC_LC_MH_G_T
  • Stock Ledger (Month) – E$_W_RTL_STCKLDGR_SC_LC_MH_TMP
  • Stock Ledger (Week) – E$_W_RTL_STCKLDGR_SC_LC_WK_TMP
  • Store Traffic – E$_W_RTL_STTRFC_LC_DY_MI_TMP
  • Supplier Compliance – E$_W_RTL_SUPPCM_IT_LC_DY_TMP
  • Supplier Compliance Unfulfilled – E$_W_RTL_SUPPCMUF_LC_DY_TMP
  • Transaction Tender – E$_W_RTL_TRX_TNDR_LC_DY_TMP
  • Transfer Details – E$_W_RTL_TSF_IT_LC_DY_TMP
  • XStore Sales – E$_W_RTL_SLS_POS_IT_LC_DY_TMP

Rejected Record Reprocessing

Some subject areas have support utilities to aid with reloading records that were rejected because of bad or missing data in nightly batch executions. When records are rejected, they will first be placed into separate tables prefixed with E$. From here, you may review the data for issues and go back to the source systems to make corrections and avoid future batch problems. The rejected record utilities support the following areas:

The supported subject areas, their module codes, and their rejected records tables are:

  • Sales (SLS) – E$_W_RTL_SLS_TRX_IT_LC_DY_TMP
  • Sales Promotion (SLSPR) – E$_W_RTL_SLSPR_TX_IT_LC_DY_TMP
  • Inventory (INV) – E$_W_RTL_INV_IT_LC_DY_TMP or E$_W_RTL_INV_IT_LC_DY_TMP1
  • Price (PRICE) – E$_W_RTL_PRICE_IT_LC_DY_TMP

As a prerequisite to running these processes, some one-time cleanup must be done. The inventory and price reload jobs use the C_HIST_LOAD_STATUS table in the same manner as historical loads. For this reason, you must erase the values from the MAX_COMPLETED_DATE and HIST_LOAD_STATUS columns of this table. All rows should show as null values for these fields. This cleanup can be done using the Control & Tactical Center UI.

For inventory only, there are two rejected record tables used by the process, but only one of them is listed in the configuration tables. You do not need to alter the configuration to specify the other TMP table; the program will select the correct table automatically based on the value of RI_INVAGE_REQ_IND in C_ODI_PARAM. E$_W_RTL_INV_IT_LC_DY_TMP is used when the parameter is set to N (which means the system is not tracking receipt dates or inventory age) while the other table E$_W_RTL_INV_IT_LC_DY_TMP1 is used when the parameter value is Y.
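To confirm which inventory rejection table applies to your environment, you can check the parameter value directly from APEX. This is a sketch; it assumes RI_INVAGE_REQ_IND is stored in the PARAM_NAME/PARAM_VALUE columns of C_ODI_PARAM, as used elsewhere in this chapter.

select param_name, param_value
  from c_odi_param
 where param_name = 'RI_INVAGE_REQ_IND';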

Before attempting to reload any rejections, you will also need to perform another batch run or ad hoc load to correct the associated dimensions, such as adding any missing items or locations. For example, for records that are rejected on Day 1, you must fix the source data and run a normal batch on Day 2; then you are ready to reprocess the older rejections on Day 2+ after the batch cycle. Once the dimensions are fixed, you can follow the steps below to reload the rejected records.

  1. Run the E_FS_RELOAD_JOB in the process E_FS_RELOAD_PROCESS_ADHOC. This job accepts three input values: module code, start date, and end date. The module code is required and comes from the table above. The dates are optional and specify the range of CHECK_DATE values to extract from the E$ table. If no dates are provided to the job, then it will use trunc(sysdate-1) to trunc(sysdate) as the start and end date. The values should be entered as parameters on the job in the format:

    SLS 20230808 20230809
  2. The set of records found for the provided input parameters will be moved from the associated E$ table to another table prefixed with ERR. This table will keep the history of reprocessed records so the data is not lost. From there, it will be moved to the staging (FS) table. The specific tables used for each module are listed in the configuration table C_MODULE_REJECT_TABLE.

  3. Verify the FS table now contains the data you want to load. You also have the ability to directly update the data in the staging tables from Innovation Workbench if any further changes need to be done to make it load successfully.

  4. Run the reload process for the data to move the records from the staging table into the data warehouse tables. This will be one of the following ad hoc processes (make sure all jobs in these processes are enabled in Batch Administration before trying to run them):

    • E_SLS_RELOAD_PROCESS_ADHOC

    • E_SLSPR_RELOAD_PROCESS_ADHOC

    • E_INV_RELOAD_PROCESS_ADHOC

    • E_PRICE_RELOAD_PROCESS_ADHOC

Sales and Sales Promotion are kept separate because it is possible to have transactions that were loaded to the base sales tables but were rejected from the promotional sales tables. In this case, you might only reprocess the SLSPR module, which will not load any new data into the base sales transaction tables. If you see rows rejected in both the SLS and SLSPR E$ tables, then you should reprocess both modules, as they load different tables.

For Inventory and Price, you can see the status of the reload using the C_HIST_LOAD_STATUS table, similar to how the historical load is performed. Once a range of dates is loaded successfully in this manner, you cannot go back and reprocess the same records again: the job will not allow you to insert any item/locations that already exist in the fact tables. If you have different, rejected item/location records that still need to be reprocessed, then you must first reset C_HIST_LOAD_STATUS to allow past dates to be reprocessed. The inventory reload is also used only to populate the core tables that are common to RAP (W_RTL_INV_IT_LC_G, W_RTL_INV_IT_LC_DY_F and W_RTL_INV_IT_LC_WK_A). For any other inventory aggregates that need to be reloaded, the Aggregation Utility must be used.

The intermediate ERR tables used to hold the reload history have a DELETE_FLG column to indicate that they’ve been reloaded to FS tables once and should not be used again on future runs of the jobs. If you do want to reprocess the same set of records again to FS tables, there is a separate process named E_FS_RESET_DELETE_FLG_PROCESS_ADHOC with one job (E_RESET_DELETE_FLG_JOB) that accepts a module code, start date, and end date similar to the E_FS_RELOAD_JOB parameters. This will change the delete flag back to N only for that subset of records, allowing you to start over from step 1 above.

Rejected Record Notifications

When records are rejected during a nightly batch, the jobs themselves do not generally fail. The batch will be allowed to complete but the rejected records are placed in separate tables for review. You may enable notifications that will alert your administrator users any time rejections happen in the AIF DATA batch cycles. The notifications are visible anywhere the Notifications panel is available on the left side of the screen, such as in the Retail Home and AI Foundation user interfaces. You can also customize the recipients and behavior from the Manage Notifications screen in Retail Home. Refer to "Notifications Administration" in the Retail Home Administration Guide for details. These notifications will be present under the Retail Insights application in the dropdown menu, with a notification type code of AIF_DATA_REJECTION and a notification name of AIF Data Rejections. AIF DATA refers to the schedule in POM by the same name, which is what these notifications are issued for.

To enable the rejected record notifications, you must first enable the nightly jobs below in the AIF DATA nightly batch schedule. By default, all jobs should be enabled except E_INV_REJECT_DATA_NOTIF_JOB. This job is not needed unless you are configuring the job to fail when rejections are found (using the options described further below).

  • E_REJECT_DATA_NOTIF_JOB – Checks for rejected records and triggers notifications when they are found. Will only check for rejections in data loaded in the last 24 hours, so that it does not trigger repeatedly for older data.
  • E_INV_REJECT_DATA_NOTIF_JOB – Checks for rejected records in inventory data and optionally causes the batch to fail immediately, giving you the opportunity to correct the data and reload it before resuming the batch. The batch failure trigger is configured separately, based on the severity configuration.
  • E_DIMM_LKUP_PROD_CHK_JOB – Compares the loaded product dimension data with the internal lookup table used for downstream integrations and logs any differences for review. Mismatches in the lookup table can cause missing data later in AIF or RPAS apps. The missing records will be in the table RABE01USER.W_RTL_REJECT_DIMENSION_TMP with ERR_PROD_LKP_TMP as the TABLE_NAME.
  • E_DIMM_LKUP_ORG_CHK_JOB – Compares the loaded organization dimension data with the internal lookup table used for downstream integrations and logs any differences for review. Mismatches in the lookup table can cause missing data later in AIF or RPAS apps. The missing records will be in the table RABE01USER.W_RTL_REJECT_DIMENSION_TMP with ERR_ORG_LKP_TMP as the TABLE_NAME.

Once enabled, the notifications will be issued based on the configuration table C_MODULE_REJECT_TABLE. This table has two columns that can be updated:

  • E_NOTIFICATION_ON – Set to N to disable the notifications or Y to enable them
  • E_SEVERITY_LEVEL – Set to 1 to mark the notification as Critical, which will also cause the POM job to fail. Set to 2 or 3 for lower severity messages, which will not cause the POM job to fail but will still issue notification messages.

When you set a notification to severity 1 and it causes the batch job to fail, then you must also mark that notification as read to prevent it from causing the job to keep failing in future runs. This can be done on the notifications user interface, either by clicking the X icon to clear the notification from the task panel or by opening the full notifications tab and marking them as read using the UI action for it. Refer to the Retail Home User Guide chapter on "Notifications" for details on marking notifications as read.

In addition to the Notifications panel in the UI, the messages are also logged in the database if you wish to access them from Innovation Workbench or create a custom service using the data. The table used for the messages themselves is RADM01.RAF_NOTIFICATION; you must use the database username as a prefix, because the same table exists for all Oracle users (RASE01 could be used instead for AIF Apps notifications). The table used to log the job activity is RI_LOG_MSG; you may query this table where PROGRAM_UNIT = ‘RI_NOTIFICATION_UTIL’ to see when messages are triggered.
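For example, the following queries can be run from Innovation Workbench to review the notification messages and the related job activity. These are sketches based on the table names and filter given above; the RAF_NOTIFICATION column layout is not described here, so all columns are selected.

-- review issued notification messages (prefix with the owning schema)
select * from RADM01.RAF_NOTIFICATION;

-- review when the notification utility triggered messages
select *
  from ri_log_msg
 where program_unit = 'RI_NOTIFICATION_UTIL'
 order by msg_ts desc;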

Database Hints for SQL Jobs

Oracle Support may need to alter or add to the Oracle SQL hints used by specific programs to improve performance on your dataset. All AIF DATA jobs in ODI support configurable hints using rows added to the C_ODI_PARAM table.

The general process is to insert a row into C_ODI_PARAM with PARAM_NAME set to 'IKM_OPTIMIZER_HINT_INSERT' or 'IKM_OPTIMIZER_HINT_SELECT' and with INTEGRATION_ID set to 'Step/Interface Name'.

Insert statement template:

  INSERT INTO c_odi_param (
        row_wid,
        scenario_name,
        scenario_version,
        param_name,
        param_value,
        integration_id,
        created_on_dt,
        change_on_dt
    )
        ( SELECT
            2,
            $ODI_SCENARIO_NAME,
            '001',
            'IKM_OPTIMIZER_HINT_INSERT',
            $HINT_DEFINITION,
            $STEP_NAME,
            sysdate,
            sysdate
        FROM
            dual
        )

As an example, we want to add a hint for job step SIL_Retail_SalesTransactionFact inside the scenario 'SIL_RETAIL_SALESTRANSACTIONFACT'. We would run the following statement to add the hint:

INSERT INTO c_odi_param (
    row_wid,
    scenario_name,
    scenario_version,
    param_name,
    param_value,
    integration_id,
    created_on_dt,
    change_on_dt
)
    ( SELECT
        2,
        'SIL_RETAIL_SALESTRANSACTIONFACT',  -- Scenario Name
        '001',
        'IKM_OPTIMIZER_HINT_INSERT',
        '/*+ APPEND */',
        'SIL_Retail_SalesTransactionFact',  -- Step/Interface Name
        sysdate,
        sysdate
    FROM
        dual
    )

Once a row is added for the first time, it should not be inserted again. Instead, update the param_value with the new hint SQL.
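For example, to change the hint after the row already exists, an update like the following could be used. This is a sketch based on the columns shown above; the new hint text is only illustrative, so substitute the hint and names appropriate for your scenario.

UPDATE c_odi_param
   SET param_value  = '/*+ APPEND PARALLEL(8) */',  -- the new hint text (illustrative)
       change_on_dt = sysdate
 WHERE scenario_name  = 'SIL_RETAIL_SALESTRANSACTIONFACT'
   AND param_name     = 'IKM_OPTIMIZER_HINT_INSERT'
   AND integration_id = 'SIL_Retail_SalesTransactionFact';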

Data Model Utilities

Data Warehouse Models

The Innovation Workbench workspace in APEX provides access to internal data warehouse objects by using synonyms. This results in the end-user being unable to directly describe the objects to see their column definitions and other information, because the synonym does not provide any of that information about its underlying table. To access data model information, a utility package named RI_DATA_MODEL is provided to query the information in a user-friendly format. Currently, this package provides access to data in the RADM01 and RABE01USER schemas, which covers all tables in the AIF DATA data warehouse.

If you need to query the list of tables in one of the data warehouse schemas, you can use the following commands to do so:

select * from table(ri_data_model.ri_table_list('RADM01'));
select * from table(ri_data_model.ri_table_list('RADM01')) WHERE TABLE_NAME LIKE '%DEAL%';

To get the column information for a table, use the following SQL statement, changing the table name in the command as needed:

select * from table(ri_data_model.ri_table_desc('W_RTL_SLS_TRX_IT_LC_DY_F')); 

To get the primary key (PK) information for a table, use the following SQL statement, changing the table name in the command as needed:

select * from table(ri_data_model.ri_table_pk('W_RTL_SLS_TRX_IT_LC_DY_F')); 

To get the foreign key (FK) information for a table, use the following SQL statement, changing the table name in the command as needed:

select * from table(ri_data_model.ri_table_fk('W_RTL_SLS_TRX_IT_LC_DY_F')); 

AI Foundation Models

The AI Foundation applications have a separate utility in Innovation Workbench for reviewing the data models for each solution. There is a help view that can be used to get details about the routines provided. Use the following query:

 SELECT * FROM rse_data_model_support_help_vw;

This will provide a list of the routines that are available, along with a description of the routines and any parameters they support. A summary of the utility functions is provided below.

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_LIST - Return a list of objects that match the parameter patterns provided

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_PK_LIST - Return a set of primary keys for one or more objects (TABLE and VIEW)

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_FK_LIST - Return a set of foreign keys for one or more objects (TABLE or VIEW)

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_UK_LIST - Return a set of unique keys for one or more objects (TABLE and VIEW)

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_DESC - Return a list of columns for one or more objects

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_PK - Return a set of primary keys columns for one or more objects (TABLE and VIEW)

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_FK - Return a set of foreign keys columns for one or more objects (TABLE and VIEW)

  • RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_UK - Return a set of unique keys columns for one or more objects (TABLE and VIEW)

Some examples of how to use the routines:

  • Get help with a routine:

    SELECT * FROM rse_data_model_support_help_vw WHERE program_unit = 'RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_LIST'
  • Execute a routine, providing an optional parameter to help specify the object to return information for:

    SELECT * FROM RSE_DATA_MODEL_SUPPORT_UTIL.GET_TABLE_LIST ( 'RSE_SLS_%' );

Any of the parameters listed in the help output can, optionally, be provided to limit the output that the routine provides.