Preparing to Load Data

Implementations can follow this general outline for the data load process:

  1. Initialize the business and system calendars and perform table partitioning, which prepares the database for loading fact data.

  2. Load initial dimension data into the dimension and hierarchy tables and perform validation checks on the data using DV/APEX or RI reports.

  3. If implementing any AI Foundation or Planning module, load the dimension data into those systems now. Data that loads successfully into the input tables can still have issues that only become visible after it is processed by those systems. Do not start loading history data until your dimensions are working in all target applications.

  4. Load the first set of history files (for example, one month of sales or inventory) and validate the results using DV/APEX (see the example checks after this list).

  5. If implementing any AI Foundation or Planning module, stop here and load the history data to those systems as well. Validate that the history data in those systems is complete and accurate per your business requirements.

  6. Continue loading history data into RAP until you are finished with all data. You can stop at any time to move some of the data into downstream modules for validation purposes.

  7. After history loads are complete, all positional tables, such as Inventory Position, must be seeded with a full snapshot of source data before they can be loaded by regular nightly batches. Seeding creates a starting position in the database that daily delta extracts can then increment. These full-snapshot files can be included in the first nightly batch you run if you want to avoid loading each seed file manually through one-off executions.

  8. When all history and seeding loads are completed and downstream systems are also populated with that data, nightly batches can be started.
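
Steps 2 and 4 call for validating the loaded data with SQL run in APEX or DV. The sketch below shows the kind of row-count and key checks typically involved; it is only illustrative, the table and column names are placeholders, and the python-oracledb connection simply stands in for running the same statements interactively in APEX.

```python
# Minimal sketch of post-load validation checks. Table and column names are
# illustrative placeholders; in practice the same SQL is run interactively in APEX or DV.
import oracledb

# Connection details are environment-specific and shown only for completeness.
conn = oracledb.connect(user="APEX_USER", password="<password>", dsn="<dsn>")

checks = {
    "rows loaded per business date":
        "SELECT day_dt, COUNT(*) FROM sales_history_stg GROUP BY day_dt ORDER BY day_dt",
    "rows with missing item or location keys":
        "SELECT COUNT(*) FROM sales_history_stg WHERE item_id IS NULL OR location_id IS NULL",
}

with conn.cursor() as cur:
    for label, sql in checks.items():
        cur.execute(sql)
        print(label)
        for row in cur.fetchall():
            print("  ", row)
```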

Before you begin this process, prepare your working environment by identifying the tools and connections you will need to interact with your Oracle cloud services, as detailed in Implementation Tools and Data File Generation.

Prerequisites for loading files and running POM processes include:

Prerequisite                                          Tool / Process
--------------------------------------------------    -----------------------------------
Upload ZIPs to Object Storage                         File Transfer Service (FTS) scripts
Invoke ad hoc jobs to unpack and load the data        Postman (or similar REST API tool)
Monitor job progress after invoking POM commands      POM UI (Batch Monitoring tab)
Monitor data loads                                    APEX / DV (direct SQL queries)
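
For the upload prerequisite, the FTS flow is generally: request an upload location from the File Transfer Service, then PUT the ZIP file to the returned Object Storage URL. The sketch below illustrates that flow under stated assumptions; the token endpoint, OAuth scope, FTS paths, and response field names are placeholders, so take the exact values from the delivered FTS scripts or the service documentation.

```python
# Minimal sketch of an FTS-style upload, assuming OAuth client credentials and
# hypothetical endpoint paths; the real paths and payloads come from the FTS scripts/docs.
import requests

IDCS_TOKEN_URL = "https://<idcs-host>/oauth2/v1/token"   # assumption: your OCI IAM / IDCS token endpoint
FTS_BASE_URL   = "https://<rap-host>/<fts-path>"          # assumption: environment-specific FTS base path

def get_token(client_id: str, client_secret: str, scope: str) -> str:
    # Standard OAuth2 client-credentials exchange against OCI IAM / IDCS.
    resp = requests.post(
        IDCS_TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def upload_zip(token: str, zip_path: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # Hypothetical step 1: ask FTS for a pre-authenticated upload URL for this file.
    par = requests.post(
        f"{FTS_BASE_URL}/upload-url",                     # placeholder path
        headers=headers,
        json={"fileName": zip_path.split("/")[-1]},
    )
    par.raise_for_status()
    upload_url = par.json()["uploadUrl"]                  # placeholder response field
    # Step 2: PUT the ZIP to the returned Object Storage URL.
    with open(zip_path, "rb") as fh:
        requests.put(upload_url, data=fh).raise_for_status()

# Example usage (credentials, scope, and file name are illustrative):
# upload_zip(get_token("<client-id>", "<client-secret>", "<scope>"), "RAP_DATA.zip")
```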

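Ad hoc job invocation and monitoring can also be scripted rather than run from Postman interactively. The snippet below sketches a start-and-poll loop against POM's REST layer; the endpoint paths, response fields, schedule name, and process name are assumptions, so use the POM documentation or a delivered Postman collection for the exact request formats. Progress can always be watched in the Batch Monitoring tab of the POM UI as well.

```python
# Minimal sketch of starting and polling a POM ad hoc process over REST.
# Endpoint paths, payload keys, and names are placeholders, not confirmed POM URLs.
import time
import requests

POM_BASE_URL = "https://<pom-host>/ProcessServices/services"   # assumption
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def start_adhoc(schedule: str, process_name: str) -> str:
    resp = requests.post(
        f"{POM_BASE_URL}/adhoc/{schedule}/{process_name}/start",   # placeholder path
        headers=HEADERS,
        json={},
    )
    resp.raise_for_status()
    return resp.json().get("executionId", "")                      # placeholder response field

def wait_for_completion(schedule: str, execution_id: str, poll_secs: int = 60) -> str:
    while True:
        resp = requests.get(
            f"{POM_BASE_URL}/adhoc/{schedule}/executions/{execution_id}",   # placeholder path
            headers=HEADERS,
        )
        resp.raise_for_status()
        status = resp.json().get("status", "UNKNOWN")
        if status not in ("PENDING", "RUNNING"):
            return status
        time.sleep(poll_secs)

# Example usage (schedule and process names are illustrative):
# exec_id = start_adhoc("<schedule>", "<adhoc-process-name>")
# print(wait_for_completion("<schedule>", exec_id))
```
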
Users must also have the necessary permissions in Oracle Cloud Infrastructure Identity and Access Management (OCI IAM) to perform all the implementation tasks. Before you begin, ensure that your user has at least the following groups (and their _PREPROD equivalents if using a stage/dev environment):

Access Needed                          Groups Needed
-------------------------------------  ----------------------------------------
Batch Job Execution                    BATCH_ADMINISTRATOR_JOB
                                       PROCESS_SERVICE_ADMIN_JOB
Database Monitoring                    <tenant ID>-DVContentAuthor (DV)
                                       DATA_SCIENCE_ADMINISTRATOR_JOB (APEX)
Retail Home                            RETAIL_HOME_ADMIN
                                       PLATFORM_SERVICES_ADMINISTRATOR
                                       PLATFORM_SERVICES_ADMINISTRATOR_ABSTRACT
RI and AI Foundation Configurations    ADMINISTRATOR_JOB
MFP Configurations                     MFP_ADMIN_STAGE / PROD
IPO Configurations                     IPO_ADMIN_STAGE / PROD
AP Configurations                      AP_ADMIN_STAGE / PROD