Sales History Load

RAP supports the loading of sales transaction history using actual transaction data or daily/weekly sales totals. If loading data at an aggregate level, all key columns (such as the transaction ID) must still be populated with some value. The sales data may be provided for a range of dates in a single file. The data should be loaded sequentially from the earliest week to the latest week; however, unlike inventory position, gaps or out-of-order loads are acceptable because the data is not stored positionally. Refer to Data File Generation for more details on the file requirements.

Note:

Many parts of AI Foundation require transactional sales data, so load aggregate data only when you have no better alternative.

If you are not loading sales history specifically for Retail Insights, then many aggregation programs can be disabled in the POM standalone process. Most aggregation programs (jobs ending in _A_JOB) populate additional tables used only in BI reporting. The following jobs must be enabled in the HIST_SALES_LOAD_ADHOC process to support AIF and Planning data needs; all others can be disabled for non-RI projects:

  • VARIABLE_REFRESH_JOB

  • ETL_REFRESH_JOB

  • W_EMPLOYEE_D_JOB

  • SEED_EMPLOYEE_D_JOB

  • W_PARTY_PER_D_JOB

  • SEED_PARTY_PER_D_JOB

  • W_RTL_CO_HEAD_D_JOB

  • W_RTL_CO_LINE_D_JOB

  • SEED_CO_HEAD_D_JOB

  • SEED_CO_LINE_D_JOB

  • W_RTL_SLS_TRX_IT_LC_DY_F_JOB

  • RA_ERROR_COLLECTION_JOB

  • RI_GRANT_ACCESS_JOB

  • RI_CREATE_SYNONYM_JOB

  • ANAYLZE_TEMP_TABLES_JOB

  • W_RTL_SLS_IT_LC_DY_TMP_JOB

  • W_RTL_SLS_IT_LC_WK_A_JOB

  • W_RTL_PROMO_D_TL_JOB

  • SEED_PROMO_D_TL_JOB

  • W_PROMO_D_RTL_TMP_JOB

  • W_RTL_SLSPR_TRX_IT_LC_DY_F_JOB

  • W_RTL_SLSPK_IT_LC_DY_F_JOB

  • W_RTL_SLSPK_IT_LC_WK_A_JOB

  • REFRESH_RADM_JOB

The other process used, HIST_STG_CSV_SALES_LOAD_ADHOC, can be run with all jobs enabled, as it is only responsible for staging the files in the database. Make sure to check the enabled jobs in both processes before continuing.

After confirming the list of enabled sales jobs, perform the following steps:

  1. Create the file SALES.csv containing one or more days of sales data, along with a CTX file defining the columns that are populated. Optionally, include the SALES_PACK.csv file as well.

  2. Upload the history files to Object Storage using the RAP_DATA_HIST.zip file.

  3. Execute the HIST_ZIP_FILE_LOAD_ADHOC process.

  4. Execute the HIST_STG_CSV_SALES_LOAD_ADHOC process to stage the data in the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this; a minimal query sketch also follows this list.

  5. Execute the HIST_SALES_LOAD_ADHOC batch process to load the data. If no data is available for certain dimensions used by sales, the load process can seed those dimensions from the history file automatically. Enable seeding for all of the dimensions according to the initial configuration guidelines; providing the data in other files is optional.

    Several supplemental dimensions are involved in this load process and may or may not be provided, depending on your data requirements. For example, sales history data includes promotion identifiers, which require data on the promotion dimension.

    Sample Postman message bodies:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_STG_CSV_SALES_LOAD_ADHOC"
    }
    
    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_SALES_LOAD_ADHOC"
    }

    Note:

    If any job with STG in the name fails during the run, review the POM logs; they should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.
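
As a quick pre-load sanity check for step 4, the following sketch counts staged rows and units by business date. It assumes the transaction sales staging table is W_RTL_SLS_TRX_IT_LC_DY_FS with DAY_DT and SLS_QTY columns; confirm the actual table and column names from your interface definitions (or use the Sample Validation SQLs) before running it.

    -- Sketch only: table and column names are assumptions; verify them
    -- against your interface definitions before use.
    SELECT day_dt,
           COUNT(*)     AS row_count,
           SUM(sls_qty) AS total_units
      FROM w_rtl_sls_trx_it_lc_dy_fs
     GROUP BY day_dt
     ORDER BY day_dt;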

After the load is complete, check for rejected records: rejections do not cause the job to fail, but they do mean that not all data was loaded successfully. Query the table W_ETL_REJECTED_RECORDS from IW to see a summary of rejections. If you cannot immediately identify the root cause (for example, missing products or locations causing the data load to skip the records), there is a utility job, W_RTL_REJECT_DIMENSION_TMP_JOB, that allows you to analyze the rejections for common reject reasons. Refer to the AIF Operations Guide for details on configuring and running the job for the first time if you have not used it before.
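
For example, a quick query from APEX or IW can confirm whether any rejections exist and show a sample of the rejected rows. The column layout of the rejection table is not repeated here, so SELECT * is used rather than naming specific columns.

    -- Count rejected rows, then inspect a sample of them.
    SELECT COUNT(*) AS rejected_rows
      FROM w_etl_rejected_records;

    SELECT *
      FROM w_etl_rejected_records
     FETCH FIRST 50 ROWS ONLY;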

This process can be repeated as many times as needed to load all history files for the sales transaction data. If you are sending data to multiple RAP applications, do not wait until all data files are processed to start using those applications. Instead, load a month or two of data files and process them into all apps to verify the flows before continuing.
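
One way to confirm how much history has been processed before moving on is to count loaded rows per date in the target fact table named in the job list above. This is a sketch only: the DT_WID date key column is an assumption and should be confirmed in your environment.

    -- Sketch: rows loaded per business date key (DT_WID is an assumed column name).
    SELECT dt_wid,
           COUNT(*) AS row_count
      FROM w_rtl_sls_trx_it_lc_dy_f
     GROUP BY dt_wid
     ORDER BY dt_wid;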

Note:

Data cannot be reloaded for the same records multiple times, as sales data is treated as additive. If data needs correction, you must post only the delta records (for example, send -5 to reduce a value by 5 units) or erase the table and restart the load process using RI_SUPPORT_UTIL procedures in APEX. Raise a Service Request with Oracle if neither of these options resolves your issue.

Once you have performed the load and validated the data one time, you may wish to automate the remaining file loads. A standalone process flow, RI_FLOW_ADHOC, is available in POM that can run the sales history load multiple times at start times you specify. Follow the steps below to leverage this process:

  1. Upload multiple ZIP files, each containing one SALES.csv, naming them RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, and so on, incrementing the index at the end of the ZIP file name. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table once they are uploaded and the HIST_ZIP_FILE_UNLOAD_JOB has run at least once (a query sketch follows these steps).

  2. In the POM Batch Administration screen, ensure all of the jobs in the RI_FLOW_ADHOC flow are enabled, matching your initial ad hoc run. Schedule the standalone flows from Scheduler Administration to occur at various intervals throughout the day, spacing out the runs based on how long it took to process your first file.

  3. Monitor the load progress from the Batch Monitoring screen to see the results from each run cycle.
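
As noted in step 1, the C_HIST_FILES_LOAD_STATUS table tracks each uploaded ZIP file. A minimal query such as the following shows the current entries; its column layout is not listed here, so SELECT * is used rather than naming specific status columns.

    -- Show the status rows recorded for the uploaded history ZIP files.
    SELECT *
      FROM c_hist_files_load_status;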