Loading Dimensions into RI

You cannot load any fact data into the platform until the related dimensions have been processed and verified. The processes in this section are provided to initialize the core dimensions needed to begin fact data loads and to verify file formats and data completeness. Some dimensions that are not used in history loads are not part of the initialization process, as they are expected to arrive in the nightly batches at a later time.

For the complete list of dimension files and their file specifications, refer to the AI Foundation Interfaces Guide on My Oracle Support. The steps below assume you have enabled or disabled the appropriate dimension loaders in POM per your requirements. The process flow examples also assume CSV file usage; different programs are available for legacy DAT files. The AI Foundation Operations Guide provides a list of all the job and process flows used by foundation data files, so you can identify the jobs required for your files and disable unused programs in POM.

When you are using RDE jobs to source dimension data from RMFCS and you are not providing any flat files such as PRODUCT.csv, it is necessary to disable all file-based loaders in the RI_DIM_INITIAL_ADHOC process flow in POM. Any job whose name starts with one of the following prefixes can be disabled, because RDE jobs bypass these steps and insert directly into staging tables:

  • COPY_SI_

  • STG_SI_

  • SI_

  • STAGING_SI_
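The prefix rule above can be applied mechanically when reviewing the process flow. A minimal sketch with hypothetical job names (the real names come from the RI_DIM_INITIAL_ADHOC process flow in your POM environment):

```python
# Prefixes of file-based loader jobs that RDE sourcing makes redundant.
DISABLE_PREFIXES = ("COPY_SI_", "STG_SI_", "SI_", "STAGING_SI_")

# Hypothetical job names for illustration only.
jobs = [
    "COPY_SI_PRODUCT_JOB",
    "STG_SI_ORGANIZATION_JOB",
    "SI_W_PRODUCT_DS_JOB",
    "STAGING_SI_EXCH_RATE_JOB",
    "W_PRODUCT_D_JOB",  # dimension table load, not file-based; keep enabled
]

# str.startswith accepts a tuple, so one pass flags every matching job.
to_disable = [j for j in jobs if j.startswith(DISABLE_PREFIXES)]
print(to_disable)
```

The actual disabling is still done through the POM Batch Administration screen; this only helps identify candidate jobs.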

  1. Disable any dimension jobs you are not using from Batch Administration, referring to the process flows for DAT and CSV files in the AIF Operations Guide as needed. If you are not sure if you need to disable a job, it’s best to leave it enabled initially. Restart the POM schedule in Batch Monitoring to apply the changes.

  2. Provide your dimension files and context files through File Transfer Services (packaged using the RAP_DATA_HIST.zip file). All files should be included in a single zip file upload. If you are using data from Merchandising, this is where you should run the RDE ADHOC processes such as RDE_EXTRACT_DIM_INITIAL_ADHOC.

  3. Execute the HIST_ZIP_FILE_LOAD_ADHOC process if you need to unpack a new ZIP file.

  4. Execute the RI_DIM_INITIAL_ADHOC process to stage, transform, and load your dimension data from the files. The ETL date on the command should be at least one day before the start of your history load timeframe, but 3-6 months before is ideal; the extra months give you room to reprocess dimension loads on different dates before history begins. The date format is YYYY-MM-DD; any other format will not be processed. After running the process, you can verify the dates are correct in the W_RTL_CURR_MCAL_G table. If the business date was not set correctly, your data may not load properly.

    Sample Postman message body:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"RI_DIM_INITIAL_ADHOC", 
      "requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
    }
    

    Note:

    If any job with STG in its name fails during the run, review the POM logs; they should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.
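Step 2's single-zip requirement can be scripted. The sketch below uses Python's standard zipfile module with hypothetical file names; substitute the dimension and context files you actually provide per the AI Foundation Interfaces Guide.

```python
import zipfile
from pathlib import Path

# Hypothetical file names for illustration; include every dimension file
# and its context file in one archive.
files = ["PRODUCT.csv", "PRODUCT.csv.ctx", "ORGANIZATION.csv", "ORGANIZATION.csv.ctx"]

for name in files:  # create empty stand-ins so this example is runnable
    Path(name).touch()

# All files must be packaged into a single zip for the FTS upload.
with zipfile.ZipFile("RAP_DATA_HIST.zip", "w") as zf:
    for name in files:
        zf.write(name)

print(zipfile.ZipFile("RAP_DATA_HIST.zip").namelist())
```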
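The sample request body from step 4 can also be assembled programmatically. This sketch is illustrative only: the POM REST endpoint URL and authentication are deployment-specific and not shown, and the date check is a minimal guard for the required YYYY-MM-DD format.

```python
import json
from datetime import datetime

def build_adhoc_body(etl_date: str) -> str:
    """Build the RI_DIM_INITIAL_ADHOC request body; reject non-YYYY-MM-DD dates."""
    # strptime raises ValueError if the date does not match the format.
    datetime.strptime(etl_date, "%Y-%m-%d")
    return json.dumps({
        "cycleName": "Adhoc",
        "flowName": "Adhoc",
        "processName": "RI_DIM_INITIAL_ADHOC",
        "requestParameters": f"jobParams.ETL_BUSINESS_DATE_JOB={etl_date}",
    })

print(build_adhoc_body("2017-12-31"))
```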

If this is your first dimension load, you will want to validate the core dimensions such as product and location hierarchies using APEX. Refer to Sample Validation SQLs for sample queries you can use for this.

If any jobs fail during this load process, you may need to alter one or more dimension data files, re-send them in a new zip file upload, and re-execute the programs. Only after all core dimension files have been loaded (CALENDAR, PRODUCT, ORGANIZATION, and EXCH_RATE) can you proceed to history loads for fact data. Make sure to query the RI_DIM_VALIDATION_V view for any warnings or errors after the run. Refer to the AI Foundation Operations Guide for more details on the validation messages that may occur. This view primarily uses the table C_DIM_VALIDATE_RESULT, which can be queried directly instead of the view to see all of its available columns.
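As a rough sketch of how such validation output might be triaged after export from APEX (the column names and sample rows here are hypothetical; the real columns are on C_DIM_VALIDATE_RESULT in your environment):

```python
# Hypothetical result rows shaped like RI_DIM_VALIDATION_V output.
rows = [
    {"TABLE_NAME": "W_PRODUCT_D", "ERROR_LEVEL": "WARNING",
     "MESSAGE": "Row count lower than prior load"},
    {"TABLE_NAME": "W_INT_ORG_D", "ERROR_LEVEL": "ERROR",
     "MESSAGE": "Hierarchy level contains null values"},
]

# Split results by severity: errors should block history loads until
# resolved, while warnings need review before proceeding.
errors = [r for r in rows if r["ERROR_LEVEL"] == "ERROR"]
warnings = [r for r in rows if r["ERROR_LEVEL"] == "WARNING"]
print(f"{len(errors)} error(s), {len(warnings)} warning(s)")
```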

If you need to reload the same file multiple times due to errors, you must Restart the Schedule in POM and then run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these steps. This will remove any load statuses from the prior run and give you a clean start on the next execution.

Note:

Starting with version 23.1.101.0, the product and organization file loaders have been redesigned specifically for the initial ad hoc loads. In prior versions, you could not reload multiple product or organization files for the same ETL business date, because the load treated any changes as a reclassification and could cause data issues while loading history. In version 23.x, the dimensions are handled as “Type 1” slowly changing dimensions, meaning the programs do not look for reclasses and instead perform simple merge logic to apply the latest hierarchy data to the existing records, even if levels have changed.
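The “Type 1” behavior described above can be illustrated with a small sketch (keys and columns are hypothetical): the latest record simply overwrites attributes in place, with no reclassification tracking, so prior hierarchy values are not preserved.

```python
# Existing dimension rows keyed by item ID (hypothetical structure).
existing = {
    1001: {"ITEM": "Shirt", "DEPT": "Menswear"},
}

# Incoming hierarchy data from the latest file load.
incoming = [
    {"ITEM_ID": 1001, "ITEM": "Shirt", "DEPT": "Casualwear"},   # moved departments
    {"ITEM_ID": 1002, "ITEM": "Scarf", "DEPT": "Accessories"},  # new item
]

# Type 1 merge: update matching rows, insert new ones; old values are lost
# rather than being tracked as reclassifications.
for rec in incoming:
    existing[rec["ITEM_ID"]] = {"ITEM": rec["ITEM"], "DEPT": rec["DEPT"]}

print(existing[1001]["DEPT"])  # latest hierarchy value wins
```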

As a best practice, you should disable all POM jobs in the RI_DIM_INITIAL_ADHOC process except the ones you are providing new files for. For example, if you are loading the PRODUCT, ORGANIZATION, and EXCH_RATE files as your dimension data for AI Foundation, you could execute only the set of jobs for those files and disable the others. Refer to the AI Foundation Operations Guide for a list of the POM jobs involved in loading each foundation file if you wish to streamline the load process by disabling jobs you do not plan to use.