3 Data Loads and Initial Batch Processing

This chapter describes the common data requirements for implementing any of the Retail Analytics and Planning modules, where to get additional information for optional or application-specific data interfaces, and how to load an initial dataset into the cloud environments and distribute it across your desired applications.

Data Requirements

Preparing data for one or more Retail Analytics and Planning modules can consume a significant amount of project time, so it is crucial to identify the minimum data requirements for the platform first, followed by the additional requirements that are specific to your implementation plan. Data requirements that are called out for the platform are typically shared across all modules, meaning you only need to provide the inputs once to leverage them everywhere. This is the case for foundational data elements, such as your product and location hierarchies. Foundation data must be provided for any module of the platform to be implemented. Foundation data can come from different sources depending on your current software landscape, such as the on-premises Oracle Retail Merchandising System (RMS) or third-party applications.

Figure 3-1 Inbound Foundation Data Flows

Retail Insights is used as the foundational data warehouse that collects and coordinates data on the platform. You do not need to purchase Retail Insights Cloud Service to leverage the data warehouse for storage and integration; it is included as part of any RAP solution. Regardless of which RAP solutions you are implementing, the integration flows shown above are used.

Application-specific data requirements are in addition to the shared foundation data, and may only be used by one particular module of the platform. These application data requirements may have different formats and data structures from the core platform-level dataset, so pay close attention to those additional interface specifications. References and links are provided later in this chapter to guide you to the relevant materials for application-specific inputs and data files.

If you are using RMS as your primary data source, then you may not need to produce some or all of these foundation files, as they will be created by other Oracle Retail processes for you. However, it is often the case that historical data requires a different set of foundation files from your future post-implementation needs. If you are loading manually-generated history files, or you are not using an Oracle Retail data source for foundation data, then review the rest of this section for details.

Platform Data Requirements

There is a subset of core platform data files that can be created and loaded once and then used across some or all application modules. These files use a specific format as detailed below.

Note:

Every application included with Retail Analytics and Planning has additional data needs beyond this foundation data. But this common set of files can be used to initialize the system before moving on to those specific requirements.

The first table defines the minimum dimensional data. A dimension is a collection of descriptive elements, attributes, or hierarchical structures that provide context to your business data. Dimensions tell the platform what your business looks like and how it operates. This is not the entire list of possible dimension files, just the main ones needed to use the platform. Refer to Legacy Foundation File Reference for a complete list of available platform foundation files, along with a cross-reference to the legacy interfaces they most closely align with. A complete interface specification document is also available in My Oracle Support to assist you in planning your application-specific interface needs.

Table 3-1 Common Foundation Dimensions

  • Product (PRODUCT.csv): The product foundation data includes the items you sell, their core merchandise attributes, and their hierarchical structure in your source systems.

  • Product (PRODUCT_ALT.csv): Alternate product attributes and hierarchy levels intended for downstream Planning application extensions.

  • Organization (ORGANIZATION.csv): The organization foundation data includes all of your business entities involved in the movement or sale of merchandise, such as stores, warehouses, partner/finisher locations, web stores, and virtual warehouses. It also includes your organizational hierarchy and core location attributes.

  • Organization (ORGANIZATION_ALT.csv): Alternate location attributes and hierarchy levels intended for downstream Planning application extensions.

  • Calendar (CALENDAR.csv): The calendar foundation data defines your business (or fiscal) calendar. This is the calendar that you operate in when making critical business decisions, reporting on financial results, and planning for the future. The most common calendar used by all modules of the platform is a 4-5-4 fiscal calendar.

  • Exchange Rates (EXCH_RATE.csv): Exchange rates define the conversion of monetary values from the currency they are recorded in to your primary operating currency. Most data is provided to the platform in the local currency of the data source and is converted to your primary currency during the load process.

  • Product Attributes (ATTR.csv, PROD_ATTR.csv): Product attributes describe the physical and operational characteristics of your merchandise and are a critical piece of information for many AI Foundation modules, such as Demand Transference and Size Profile Optimization. They are not required as an initial input to start data loads but will eventually be needed for most applications to function.

  • System Parameters (RA_SRC_CURR_PARAM_G.dat): A parameters file that supports certain batch processes, such as the ability to load multiple ZIP files and run batches in sequence. Include this file with nightly batch uploads to specify the current business date, which enables the system to run multiple batches in sequence without customer input. Required once you begin nightly or weekly batch uploads.

The other set of foundation files is referred to as facts. Fact data covers all of the actual events, transactions, and activities occurring throughout the day in your business. Each module in the platform has specific fact data needs, but the most common ones are listed below. At a minimum, you should expect to provide Sales, Inventory, and Receipts data for use in most platform modules. The intersection of all fact data (meaning the dimensional values used) is at a common level of item/location/date. Additional identifiers may be needed on some files; for example, the sales data should be at the transaction level, the inventory file has a clearance indicator, and the adjustments file has type codes and reason codes.

Table 3-2 Common Foundation Facts

  • Sales (SALES.csv): Transaction-level records for customer sales (wholesale data is provided separately). Used across all modules.

  • Inventory (INVENTORY.csv): Physical inventory levels for owned merchandise as well as consignment and concession items. Used across all modules.

  • Receipts (RECEIPT.csv): Inventory receipts into any location, including purchase order and transfer receipts. Used by Insights, Planning, and the AI Foundation modules to identify the movement of inventory into a location and to track first/last receipt date attributes.

  • Adjustments (ADJUSTMENT.csv): Inventory adjustments made at a location, including shrink, wastage, theft, stock counts, and other reasons. Used by Insights and Planning modules.

  • Purchase Orders (ORDER_HEAD.csv, ORDER_DETAIL.csv): The purchase order data for all orders placed with suppliers. Held at a level of order number, supplier, item, location, and date. Separate files are needed for the order attributes and the order quantities/amounts. Used by Insights and Planning modules.

  • Markdowns (MARKDOWN.csv): The currency amount above or below the listed price of an item when that item's price is modified for any reason (planned markdown, POS discount, promotional sale, and so on). Used by Insights and Planning modules.

  • Transfers (TRANSFER.csv): The movement of inventory between two locations (both physical and virtual). Used by Insights and Planning modules.

  • Returns to Vendor (RTV.csv): The return of owned inventory to a supplier or vendor. Used by Insights and Planning modules.

  • Prices (PRICE.csv): The current selling retail value of an item at a location. Used by Insights and AI Foundation modules.

  • Costs (COST.csv): The base unit cost and derived net costs of an item at a location. Used by Retail Insights only.

  • Wholesale/Franchise (SALES_WF.csv): Sales and markdown data from wholesale and franchise operations. Used by all modules.

  • Deal Income (DEAL_INCOME.csv): Income associated with deals made with suppliers and vendors. Used by Insights and Planning modules.

Details on which application modules make use of specific files (or columns within a file) can be found in the Interfaces Guide on My Oracle Support. Make sure you have a full understanding of the data needs for each application you are implementing before moving on to later steps in the process. If it is your first time creating these files, read Data File Generation for important information about key file structures and business rules that must be followed for each foundation file.

File Upload Samples

When you first upload foundation data into the platform, you will likely provide a small subset of the overall set of files required by your applications. The following examples show a possible series of initial file uploads to help you verify that you are providing the correct sets of data. All dimension files in initialization and history loads must be full snapshots of data; never send partial or incremental files.

Example #1: Calendar Initialization

When you first configure the system you must upload and process the CALENDAR.csv file. You will generate the file following the specifications, and also provide a context (ctx) file, as described in Data File Generation.

File to upload: RAP_DATA_HIST.zip

Zip file contents:

  • CALENDAR.csv

  • CALENDAR.csv.ctx

Example #2: Product and Location Setup

You have finalized the calendar and want to initialize your core product and organization dimensions. You must provide the PRODUCT.csv and ORGANIZATION.csv files along with their context files, as described in Data File Generation.

File to upload: RAP_DATA_HIST.zip

Zip file contents:

  • PRODUCT.csv

  • PRODUCT.csv.ctx

  • ORGANIZATION.csv

  • ORGANIZATION.csv.ctx

Example #3: Full Dimension Load

You have a set of dimension files you want to process using the initial dimension load ad hoc processes in POM.

File to upload: RAP_DATA_HIST.zip

Zip file contents:

  • PRODUCT.csv

  • PRODUCT.csv.ctx

  • ORGANIZATION.csv

  • ORGANIZATION.csv.ctx

  • ATTR.csv

  • ATTR.csv.ctx

  • PROD_ATTR.csv

  • PROD_ATTR.csv.ctx

  • EXCH_RATE.csv

  • EXCH_RATE.csv.ctx

Example #4: Sales Data Load

You have finished the dimensions and you are ready to start processing sales history files.

File to upload: RAP_DATA_HIST.zip

Zip file contents:

  • SALES.csv

  • SALES.csv.ctx

Example #5: Multi-File Fact Data Load

Once you are confident in your data file format and contents, you may send multiple files as separate ZIP uploads for sequential loading in the same run. This process uses a numerical sequence on the end of the ZIP file name. You should still include the base ZIP file to start the process. The actual method to loop over these files is to use the intraday cycle in the RI POM schedule, which executes the fact history load once every cycle, for up to 12 cycles per day.

Files to upload: RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, RAP_DATA_HIST.zip.3

Zip file contents (in each uploaded zip):

  •  SALES.csv

  •  SALES.csv.ctx – The CTX file is only required in the first ZIP file, but it is best to include it every time so you can refer to it later in archived files, if needed.

In this example you are loading sales month by month iteratively, but the intraday process supports all other fact loads as well. You can also combine multiple fact files (for different facts covering the same period of time) in each ZIP file upload. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table after each cycle execution; it shows whether the file was loaded successfully and how many more files are available to process.
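
If you prefer to check the file queue from APEX or DV rather than the application screens, a simple query like the following works as a sketch (filter or sort by the columns available in your version of the table):

select * from C_HIST_FILES_LOAD_STATUS;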

Uploading ZIP Packages

When providing data to the platform, push the compressed files into Object Storage using a ZIP file format. Review the File Transfer Services section for details on how to interact with Object Storage. The ZIP file you use will depend on the data you are attempting to load. The default ZIP file packages are below, but the history ZIP file name is configurable in C_ODI_PARAM using parameter name HIST_ZIP_FILE if a different one is desired.
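
For example, you can confirm the currently configured history ZIP name from APEX before uploading files. This is a sketch that assumes the standard PARAM_NAME and PARAM_VALUE columns on C_ODI_PARAM; check your data model version if the column names differ (and remember that configuration changes themselves are made from the Control & Tactical Center, not by direct updates):

select param_name, param_value from C_ODI_PARAM where param_name = 'HIST_ZIP_FILE';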

Table 3-3 Platform ZIP File Usage

  • RAP_DATA_HIST.zip (Ad Hoc): Used for historical files (such as sales and inventory history for the last 1-2 years), for loading initial dimensions (such as the calendar, merchandise, and location hierarchies) prior to history loads, and for initial seeding loads.

  • RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, …, RAP_DATA_HIST.zip.N (Ad Hoc / Intraday): Multiple ZIP uploads are supported for sending historical fact data that should be loaded sequentially. Append a sequence number to the ZIP files, starting from 1 and increasing to N, where N is the number of files you are loading. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table.

  • RAP_DATA.zip, RI_RMS_DATA.zip, RI_CE_DATA.zip, RI_MFP_DATA.zip, RI_EXT_DATA.zip (Daily): Can be used for daily ongoing loads into the platform (for RI and foundation common inputs) and for any daily data going to downstream applications through RI's nightly batch. Different files can be used for different source systems.

  • RI_REPROCESS_DATA.zip (Ad Hoc): Used to upload individual files that will be appended to an existing nightly batch file set.

  • ORASE_WEEKLY_ADHOC.zip (Ad Hoc): Used for loading AI Foundation files with ad hoc processes.

  • ORASE_WEEKLY.zip (Weekly): Used for weekly batch files sent directly to AI Foundation.

  • ORASE_INTRADAY.zip (Intraday): Used for intraday batch files sent directly to AI Foundation.

Other supported file packages, such as output files and optional input files, are detailed in each module’s implementation guides. Except for Planning-specific integrations and customizations (which support additional integration paths and formats), it is expected that all files will be communicated to the platform using one of the filenames above.

Preparing to Load Data

Implementations can follow this general outline for the data load process:

  1. Initialize the business and system calendars and perform table partitioning, which prepares the database for loading fact data.

  2. Load initial dimension data into the dimension and hierarchy tables and perform validation checks on the data from DV/APEX or using RI reports.

  3. If implementing any AI Foundation or Planning module, load the dimension data to those systems now. Data might look fine in the input tables but have issues that only become visible after processing in those systems. Do not start loading history data until your dimensions are working with all target applications.

  4. Load the first set of history files (for example, one month of sales or inventory) and validate the results using DV/APEX.

  5. If implementing any AI Foundation or Planning module, stop here and load the history data to those systems as well. Validate that the history data in those systems is complete and accurate per your business requirements.

  6. Continue loading history data into RAP until you are finished with all data. You can stop at any time to move some of the data into downstream modules for validation purposes.

  7. After history loads are complete, all positional tables, such as Inventory Position, need to be seeded with a full snapshot of source data before they can be loaded using regular nightly batches. This seeding process is used to create a starting position in the database which can be incremented by daily delta extracts. These full-snapshot files can be included in the first nightly batch you run, if you want to avoid manually loading each seed file through one-off executions.

  8. When all history and seeding loads are completed and downstream systems are also populated with that data, nightly batches can be started.

Before you begin this process, it is best to prepare your working environment by identifying the tools and connections needed for all your Oracle cloud services that will allow you to interact with the platform, as detailed in Implementation Tools and Data File Generation.

Prerequisites for loading files and running POM processes include:

  • Upload ZIPs to Object Storage: File Transfer Service (FTS) scripts

  • Invoke ad hoc jobs to unpack and load the data: Postman (or a similar REST API tool)

  • Monitor job progress after invoking POM commands: POM UI (Batch Monitoring tab)

  • Monitor data loads: APEX / DV (direct SQL queries)

Users must also have the necessary permissions in Oracle Cloud Infrastructure Identity and Access Management (OCI IAM) to perform all the implementation tasks. Before you begin, ensure that your user has at least the following groups:

  • Batch Job Execution: BATCH_ADMINISTRATOR_JOB, PROCESS_SERVICE_ADMIN_JOB

  • Database Monitoring: <tenant ID>-DVContentAuthor (DV), DATA_SCIENCE_ADMINISTRATOR_JOB (APEX)

  • Retail Home: RETAIL_HOME_ADMIN, PLATFORM_SERVICES_ADMINISTRATOR, PLATFORM_SERVICES_ADMINISTRATOR_ABSTRACT

  • RI and AI Foundation Configurations: ADMINISTRATOR_JOB

  • MFP Configurations: MFP_ADMIN_STAGE / PROD

  • RDF Configurations: RDF_ADMIN_STAGE / PROD

  • AP Configurations: AP_ADMIN_STAGE / PROD

Calendar and Partition Setup

This is the first step that must be performed in all new environments, including projects that are implementing only AI Foundation or Planning solutions and not RI. Before beginning this step, ensure your configurations are complete per the initial configuration sections in the prior chapter. Your START_DT and END_DT variables must be set correctly for your calendar range (START_DT at least 12 months before the start of your history data), and the C_MODULE_ARTIFACT table must have all of the required tables enabled for partitioning. C_MODULE_EXACT_TABLE must be configured if you need PLAN partitions for planning data loads.

  1. Upload the calendar file CALENDAR.csv (and associated context file) through Object Storage or SFTP (packaged using the RAP_DATA_HIST.zip file).

  2. Execute the HIST_ZIP_FILE_LOAD_ADHOC process. Example Postman message body:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_ZIP_FILE_LOAD_ADHOC"
    }
  3. Verify that the jobs in the ZIP file load process completed successfully using the POM Monitoring screen. Download logs for the tasks as needed for review.

  4. Execute the CALENDAR_LOAD_ADHOC process. This transforms the data and moves it into all internal RI tables. It also performs table partitioning based on your input date range.

    Sample Postman message body:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"CALENDAR_LOAD_ADHOC",
      "requestParameters":"jobParams.CREATE_PARTITION_PRESETUP_JOB=2018-12-30,jobParams.ETL_BUSINESS_DATE_JOB=2021-02-06"
    }
    

    There are two date parameters provided for this command:

    1. The first date value specifies the first day of partitioning. It must be earlier than the first actual day of data being loaded; the recommendation is 1-6 months prior to the planned start of history so that you have room for back-posted data and changes to start dates. However, you should not create excessive partitions for years of data you will not be loading, as it can impact system performance. The date must also be greater than or equal to the START_DT value set in C_ODI_PARAM_VW, because RAP cannot partition dates that do not exist in the system; you do not need to partition your entire calendar range.

    2. The second date (ETL business date) specifies the target business date, which is typically the date the system is expected to reach after loading all history data and starting daily batches. It is okay to estimate a future date for this value, but note that the partition process automatically extends 4 months past the date you specify. Your fiscal calendar must have enough periods to cover the 4 months after this date or this job will fail. This date can be changed later if needed, and partitioning can be re-executed multiple times for different timeframes.

  5. If this is your first time loading a calendar file, check the RI_DIM_VALIDATION_V view to confirm that no warnings or errors are detected. Refer to the AI Foundation Operations Guide for more details on the validations performed. The validation job will fail if it does not detect data in the final table (W_MCAL_PERIOD_D). Refer to Sample Validation SQLs for sample queries you can use to check the data; a quick check is also shown after these steps.

  6. If you need to reload the same file multiple times due to errors, you must Restart the Schedule in POM and then run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these steps. This will remove any load statuses from the prior run and give you a clean start on the next execution.

    Note:

    If any job having STG in the name fails during the run, then review the POM logs and it should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.
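
As a quick concrete check for steps 5 and 6, you can run simple queries like the following from APEX (a sketch only; the Sample Validation SQLs appendix contains more thorough versions):

-- confirm calendar data reached the final period table
select count(*) from W_MCAL_PERIOD_D;

-- review any validation warnings or errors raised for the calendar
select * from RI_DIM_VALIDATION_V;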

You can monitor the partitioning process while it is running by querying the RI_LOG_MSG table from APEX. This table captures the detailed partitioning steps being performed by the script in real time (whereas POM logs are only refreshed at the end of execution). If the process fails in POM after exactly 4 hours, this is just a POM process timeout; the job may still be running in the background, so check for new inserts into the RI_LOG_MSG table to confirm.

The partitioning process will take some time (~5 hours per 100k partitions) to complete if you are loading multiple years of history, as this may require 100,000+ partitions to be created across the data model. This process must be completed successfully before continuing with the data load process. Contact Oracle Support if there are any questions or concerns. Partitioning can be performed after some data has been loaded; however, it will take significantly longer to execute, as it has to move all of the loaded data into the proper partitions.

You can also estimate the number of partitions needed based on the details below:

  • RAP needs to partition around 120 week-level tables if all functional areas are enabled, so take the number of weeks in your history time window multiplied by this number of tables.

  • RAP needs to partition around 160 day-level tables if all functional areas are enabled, so take the number of days in your history time window multiplied by this number of tables.

For a 3-year history window, this results in: 120*52*3 + 160*365*3 = 193,920 partitions. If you wish to confirm your final counts before proceeding to the next data load steps, you can execute these queries from APEX:

select count(*) cnt from dba_tab_partitions where table_owner = 'RADM01' and table_name like 'W_RTL_%';
select table_name, count(*) cnt from dba_tab_partitions where table_owner = 'RADM01' and table_name like 'W_RTL_%' group by table_name;

The queries should return a count roughly equal to your expected totals (it will not be exact, as the data model will add/remove tables over time and some tables come with pre-built partitions or default MAXVALUE partitions).

Loading Data from Files

When history and initial seed data comes from flat files, use the following tasks to upload them into RAP:

Table 3-4 Flat File Load Overview

  • Initialize Dimensions: Initialize dimensional data (products, locations, and so on) to provide a starting point for historical records to join with. Separate initial load processes are provided for this task.

  • Load History Data: Run history loads in one or multiple cycles depending on the data volume, starting from the earliest date in history and loading forward to today.

  • Reloading Dimensions: Reload dimensional data as needed throughout the process to maintain correct key values for all fact data. Dimension files can be provided in the same package as history files, with the ad hoc processes run in sequence when loading.

  • Seed Positional Facts: Seed initial values for positional facts using full snapshots of all active item/locations in the source system. This must be loaded for the date prior to the start of nightly batches to avoid gaps in ongoing data.

  • Run Nightly Batches: Nightly batches must be started from the business date after the initial seeding was performed.

Completing these steps will load all of your data into the Retail Insights data model, which is required for all implementations. From there, proceed with moving data downstream to other applications as needed, such as AI Foundation modules and Merchandise Financial Planning.

Note:

The steps above are listed sequentially, but some of them can be executed in parallel. For example, you may load dimensions into RI, then on to AI Foundation and Planning applications, before loading any historical fact data. While historical fact data is loaded, other activities can occur in Planning, such as the domain build and configuration updates.

Initialize Dimensions

Loading Dimensions into RI

You cannot load any fact data into the platform until the related dimensions have been processed and verified. The processes in this section are provided to initialize the core dimensions needed to begin fact data loads and verify file formats and data completeness. Some dimensions which are not used in history loads are not part of the initialization process, as they are expected to come in the nightly batches at a later time.

For the complete list of dimension files and their file specifications, refer to the AI Foundation Interfaces Guide on My Oracle Support. The steps below assume you have enabled or disabled the appropriate dimension loaders in POM per your requirements. The process flow examples also assume CSV files are used; different programs are available for legacy DAT files. The AI Foundation Operations Guide provides a list of all the job and process flows used by foundation data files, so you can identify the jobs required for your files and disable unused programs in POM.

When you are using RDE jobs to source dimension data from RMFCS and you are not providing any flat files like PRODUCT.csv, it is necessary to disable all file-based loaders in the LOAD_DIM_INITIAL_ADHOC process flow from POM. Any job name starting with the following text can be disabled, because RDE jobs will bypass these steps and insert directly to staging tables:

  • COPY_SI_

  • STG_SI_

  • SI_

  • STAGING_SI_

  1. Provide your dimension files and context files through File Transfer Services (packaged using the RAP_DATA_HIST.zip file). All files should be included in a single zip file upload. If your prior upload included dimensions along with the calendar file, skip to the third step.

  2. Execute the HIST_ZIP_FILE_LOAD_ADHOC process if you need to unpack a new ZIP file.

  3. Execute the LOAD_DIM_INITIAL_ADHOC process to stage, transform, and load your dimension data from the files. The ETL date on the command should be at a minimum one day before the start of your history load timeframe, but 3-6 months before is ideal. It is best to give yourself a few months of space for reprocessing dimension loads on different dates prior to the start of history. The date format is YYYY-MM-DD; any other format will not be processed. After running the process, you can verify the dates are correct in the W_RTL_CURR_MCAL_G table. If the business date was not set correctly, your data may not load properly.

    Sample Postman message body:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"LOAD_DIM_INITIAL_ADHOC", 
      "requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
    }
    

    Note:

    If any job having STG in the name fails during the run, then review the POM logs and it should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.

If this is your first dimension load, you will want to validate the core dimensions such as product and location hierarchies using APEX. Refer to Sample Validation SQLs for sample queries you can use for this.

If any jobs fail during this load process, you may need to alter one or more dimension data files, re-send them in a new zip file upload, and re-execute the programs. Only after all core dimension files have been loaded (CALENDAR, PRODUCT, ORGANIZATION, and EXCH_RATE) can you proceed to history loads for fact data. Make sure to query the RI_DIM_VALIDATION_V view for any warnings/errors after the run. Refer to the AI Foundation Operations Guide for more details on the validation messages that may occur. This view primarily uses the table C_DIM_VALIDATE_RESULT, which can be separately queried instead of the view to see all the columns available on it.

If you need to reload the same file multiple times due to errors, you must Restart the Schedule in POM and then run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these steps. This will remove any load statuses from the prior run and give you a clean start on the next execution.
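
The cleanup process can be invoked with the same ad hoc message pattern used throughout this chapter (shown here as a sketch; no additional request parameters are assumed). Sample Postman message body:

{
  "cycleName":"Adhoc",
  "flowName":"Adhoc",
  "processName":"C_LOAD_DATES_CLEANUP_ADHOC"
}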

Note:

Starting with version 23.1.101.0, the product and organization file loaders have been redesigned specifically for the initial ad hoc loads. In prior versions, you must not reload multiple product or organization files for the same ETL business date, because the load treats any changes as a reclassification and can cause data issues while loading history. In version 23.x, the dimensions are handled as “Type 1” slowly changing dimensions, meaning the programs do not look for reclasses and instead perform simple merge logic to apply the latest hierarchy data to the existing records, even if levels have changed.

As a best practice, you should disable all POM jobs in the LOAD_DIM_INITIAL_ADHOC process except the ones you are providing new files for. For example, if you are loading the PRODUCT, ORGANIZATION, and EXCH_RATE files as your dimension data for AI Foundation, then you could execute just the set of jobs for those files and disable the others. Refer to the AI Foundation Operations Guide for a list of the POM jobs involved in loading each foundation file if you wish to streamline the load process by disabling jobs you do not plan to use.

Hierarchy Deactivation

Beginning in version 23, foundation dimension ad hoc loads have been changed to use Type 1 slowly-changing dimension (SCD) behavior, which means that the system will no longer create new records every time a parent/child relationship changes. Instead, it performs a simple merge on top of existing data to maintain as-is hierarchy definitions. The foundation data model holds hierarchy records separately from product data, so it is also necessary to perform maintenance on hierarchies to maintain a single active set of records that should be propagated downstream to other RAP applications. This maintenance is performed using the program W_PROD_CAT_DH_CLOSE_JOB in the LOAD_DIM_INITIAL_ADHOC process. The program detects unused hierarchy nodes that have no children after the latest data has been loaded into W_PROD_CAT_DH and closes them (sets CURRENT_FLG=N). This is required because, in the data model, each hierarchy level is stored as a separate record, even if that level is not being used by any products on other tables. Without this cleanup activity, unused hierarchy levels would accumulate in W_PROD_CAT_DH and be available in AI Foundation, which is generally not desired.

There are some scenarios where you may want to disable this program. For example, if you know the hierarchy is going to change significantly over a period of time and you don’t want levels to be closed and re-created every time a new file is loaded, you must disable W_PROD_CAT_DH_CLOSE_JOB. You can re-enable it later and it will close any unused levels that remain after all your changes are processed. Also be aware that the program is part of the nightly batch process too, so once you switch from historical to nightly loads, this job will be enabled and will close unused hierarchy levels unless you intentionally disable it.
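
If you want to confirm what the close job has done, a simple check from APEX is to count open versus closed hierarchy records (a sketch only, assuming the Y/N values on CURRENT_FLG described above):

select current_flg, count(*) cnt from W_PROD_CAT_DH group by current_flg;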

Loading Dimensions to Other Applications

Once you have successfully loaded dimension data, you should pause the dataload process and push the dimensions to AI Foundation Cloud Services and the Planning Data Store (if applicable). This allows for parallel data validation and domain build activities to occur while you continue loading data. Review sections Sending Data to AI Foundation and Sending Data to Planning for details on the POM jobs you may execute for this.

The main benefits of this order of execution are:

  1. Validating the hierarchy structure from the AI Foundation interface provides an early view for the customer to see some application screens with their data.

  2. Planning apps can perform the domain build activity without waiting for history file loads to complete, and can start to do other planning implementation activities in parallel to the history loads.

  3. Data can be made available for custom development or validations in Innovation Workbench.

Do not start history loads for facts until you are confident all dimensions are working throughout your solutions. Once you begin loading facts, it becomes much harder to reload dimension data without impacts to other areas. For example, historical fact data already loaded will not be automatically re-associated with hierarchy changes loaded later in the process.

Load History Data

Historical fact data is a core foundational element to all solutions in Retail Analytics and Planning. As such, this phase of the implementation can take the longest amount of time during the project, depending on the volumes of data, the source of the data, and the amount of transformation and validation that needs to be completed before and after loading it into the Oracle database.

It is important to know where in the RAP database you can look to find what data has been processed, what data may have been rejected or dropped due to issues, and how far along in the overall load process you are. The following tables provide critical pieces of information throughout the history load process and can be queried from APEX.

Table 3-5 Inbound Load Status Tables

  • C_HIST_LOAD_STATUS: Tracks the progress of historical ad hoc load programs for inventory and pricing facts. This table tells you which Retail Insights tables are being populated with historical data, the most recent status of the job executions, and the most recently completed period of historical data for each table. Use APEX or Data Visualizer to query this table after historical data load runs to ensure the programs are completing successfully and processing the expected historical time periods.

  • C_HIST_FILES_LOAD_STATUS: Tracks the progress of ZIP file processing when loading multiple files in sequence using scheduled intraday cycles.

  • C_LOAD_DATES: Check for detailed statuses of historical load jobs. This is the only place that tracks this information at the individual ETL thread level. For example, it is possible for a historical load using 8 threads to successfully complete 7 threads but fail on one thread due to data issues. The job itself may just return as Failed in POM, so knowing which thread failed will help identify the records that may need correcting and which thread should be reprocessed.

  • W_ETL_REJECTED_RECORDS: Summary table capturing rejected fact record counts that do not get processed into their target tables in Retail Insights. Use this to identify the tables containing specific rejected data to analyze. Does not apply to dimensions, which do not have rejected record support at this time.

  • E$_W_RTL_SLS_TRX_IT_LC_DY_TMP: Example of a rejected record detail table for Sales Transactions. All rejected record tables start with the E$_ prefix. These tables are created at the moment the first rejection occurs for a load program. W_ETL_REJECTED_RECORDS tells you which tables contain rejected data for a load.
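
A quick way to review these tables from APEX or DV is with simple full-table queries, filtering further as needed for your load (a sketch only; the available columns vary by version):

select * from C_HIST_LOAD_STATUS;
select * from C_HIST_FILES_LOAD_STATUS;
select * from C_LOAD_DATES;
select * from W_ETL_REJECTED_RECORDS;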

When loading data from flat files for the first time, it is common to have bad records that cannot be processed by the RAP load procedures, such as when the identifiers on the record are not present in the associated dimension tables. The foundation data loads leverage rejected record tables to capture all such data so you can see what was dropped by a specific data load and needs to be corrected and reloaded. These tables do not exist until rejected records occur during program execution. Periodically monitor these tables for rejected data that may require reloading.

The overall sequence of files to load will depend on your specific data sources and conversion activities, but the recommendation is listed below as a guideline.

  1. Sales – Sales transaction data is usually first to be loaded, as the data is critical to running most applications and needs the least amount of conversion.

  2. Inventory Receipts – If you need receipt dates for downstream usage, such as in Offer Optimization, then you need to load receipt transactions in parallel with Inventory Positions. For each file of receipts loaded, also load the associated inventory positions afterwards.

  3. Inventory Position – The main stock-on-hand positions file is loaded next. This history load also calculates and stores data using the receipts file, so INVENTORY.csv and RECEIPT.csv must be loaded at the same time, for the same periods.

  4. Pricing – The price history file is loaded after sales and inventory are complete, because many applications need only the first two datasets for processing. Price history is also potentially the largest volume of data, so it is good to be working within your other applications in parallel with loading price data.

  5. All other facts – There is no specific order to load any of the other facts like transfers, adjustments, markdowns, costs, and so on. They can be loaded based on your downstream application needs and the availability of the data files.

Automated History Loads

Once you have performed your history loads manually a couple of times (following all steps in later sections) and validated the data is correct, you may wish to automate the remaining file loads. An intraday cycle is available in POM that can run the fact history loads multiple times using your specified start times. Follow the steps below to enable this process:

  1. Upload multiple ZIP files using FTS, each containing one set of files for the same historical period. Name the files like RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2 and so on, incrementing the index on the end of the zip file name after the first one.

  2. In the POM batch administration screen, ensure all of the jobs in the RI_INTRADAY_CYCLE are enabled, matching your initial ad hoc runs. Schedule the intraday cycles from Scheduler Administration to occur at various intervals throughout the day. Space out the cycles based on how long it took to process your first file.

  3. Monitor the load progress from the Batch Monitoring screen to see the results from each run cycle. Validate that data is being loaded successfully in your database periodically throughout the intraday runs. If an intraday run fails for any reason, it will not allow more runs to proceed until the issue is resolved.

Sales History Load

RAP supports the loading of sales transaction history using actual transaction data or daily/weekly sales totals. If loading data at an aggregate level, all key columns (such as the transaction ID) are still required to have some value. The sales data may be provided for a range of dates in a single file. The data should be loaded sequentially from the earliest week to the latest week but, unlike inventory position, you may have gaps or out-of-order loads, because the data is not stored positionally. Refer to Data File Generation for more details on the file requirements.

Note:

Many parts of AI Foundation require transactional data for sales, so loading aggregate data should not be done unless you have no better alternative.

If you are not loading sales history for Retail Insights specifically, then there are many aggregation programs that can be disabled in the POM standalone process. These programs populate additional tables used only in BI reporting. The following list of jobs can be disabled if you are not using RI (effectively all aggregate “A” tables and jobs except for the item/location/week level can be turned off):

  • W_RTL_SLS_IT_LC_DY_A_JOB

  • W_RTL_SLS_SC_LC_WK_A_JOB

  • W_RTL_SLS_CL_LC_DY_A_JOB

  • W_RTL_SLS_CL_LC_WK_A_JOB

  • W_RTL_SLS_DP_LC_DY_A_JOB

  • W_RTL_SLS_DP_LC_WK_A_JOB

  • W_RTL_SLS_IT_DY_A_JOB

  • W_RTL_SLS_IT_WK_A_JOB

  • W_RTL_SLS_SC_DY_A_JOB

  • W_RTL_SLS_SC_WK_A_JOB

  • W_RTL_SLS_LC_DY_A_JOB

  • W_RTL_SLS_LC_WK_A_JOB

  • W_RTL_SLS_IT_LC_DY_SN_TMP_JOB

  • W_RTL_SLS_IT_LC_DY_SN_A_JOB

  • W_RTL_SLS_IT_LC_WK_SN_A_JOB

  • W_RTL_SLS_IT_DY_SN_A_JOB

  • W_RTL_SLS_IT_WK_SN_A_JOB

  • W_RTL_SLS_SC_LC_DY_CUR_A_JOB

  • W_RTL_SLS_SC_LC_WK_CUR_A_JOB

  • W_RTL_SLS_CL_LC_DY_CUR_A_JOB

  • W_RTL_SLS_DP_LC_DY_CUR_A_JOB

  • W_RTL_SLS_CL_LC_WK_CUR_A_JOB

  • W_RTL_SLS_DP_LC_WK_CUR_A_JOB

  • W_RTL_SLS_SC_DY_CUR_A_JOB

  • W_RTL_SLS_CL_DY_CUR_A_JOB

  • W_RTL_SLS_DP_DY_CUR_A_JOB

  • W_RTL_SLS_SC_WK_CUR_A_JOB

  • W_RTL_SLS_CL_WK_CUR_A_JOB

  • W_RTL_SLS_DP_WK_CUR_A_JOB

  • W_RTL_SLSPR_PC_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PP_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PE_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PP_CUST_LC_DY_A_JOB

  • W_RTL_SLSPR_PC_CS_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PC_HH_WK_A_JOB

  •  W_RTL_SLS_CNT_LC_DY_A_JOB

  •  W_RTL_SLS_CNT_IT_LC_DY_A_JOB

  •  W_RTL_SLS_CNT_SC_LC_DY_A_JOB

  •  W_RTL_SLS_CNT_CL_LC_DY_A_JOB

  •  W_RTL_SLS_CNT_DP_LC_DY_A_JOB

  •  W_RTL_SLS_CNT_SC_LC_DY_CUR_A_JOB

  •  W_RTL_SLS_CNT_CL_LC_DY_CUR_A_JOB

  •  W_RTL_SLS_CNT_DP_LC_DY_CUR_A_JOB

After confirming the list of enabled sales jobs, perform the following steps:

  1. Create the file SALES.csv containing one or more days of sales data along with a CTX file defining the columns which are populated. Optionally include the SALES_PACK.csv file as well.

  2. Upload the history files to Object Storage using the RAP_DATA_HIST.zip file.

  3. Execute the HIST_ZIP_FILE_LOAD_ADHOC process.

  4. Execute the HIST_STG_CSV_SALES_LOAD_ADHOC process to stage the data in the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.

  5. Execute the HIST_SALES_LOAD_ADHOC batch process to load the data. If no data is available for certain dimensions used by sales, then the load process can seed the dimension from the history file automatically. Enable seeding for all of the dimensions according to the initial configuration guidelines; providing the data in other files is optional.

    Several supplemental dimensions are involved in this load process, which may or may not be provided depending on the data requirements. For example, sales history data has promotion identifiers, which would require data on the promotion dimension.

    Sample Postman message bodies:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_STG_CSV_SALES_LOAD_ADHOC"
    }
    
    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_SALES_LOAD_ADHOC"
    }

    Note:

    If any job having STG in the name fails during the run, then review the POM logs and it should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.

After the load is complete, you should check for rejected records; rejections will not cause the job to fail, but they mean that not all data was loaded successfully. Query the table W_ETL_REJECTED_RECORDS from IW to see a summary of rejections. If you cannot immediately identify the root cause (for example, missing products or locations causing the data load to skip the records), there is a utility job W_RTL_REJECT_DIMENSION_TMP_JOB that allows you to analyze the rejections for common reject reasons. Refer to the AIF Operations Guide for details on configuring and running the job for the first time if you have not used it before.
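
For example, the following queries (a sketch only) summarize the rejection counts and then inspect the detail table for sales transactions (the E$ table only exists once a rejection has occurred):

select * from W_ETL_REJECTED_RECORDS;
select * from E$_W_RTL_SLS_TRX_IT_LC_DY_TMP;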

This process can be repeated as many times as needed to load all history files for the sales transaction data. If you are sending data to multiple RAP applications, do not wait until all data files are processed to start using those applications. Instead, load a month or two of data files and process them into all apps to verify the flows before continuing.

Note:

Data cannot be reloaded for the same records multiple times, as sales data is treated as additive. If data needs correction, you must post only the delta records (for example, send -5 to reduce a value by 5 units) or erase the table and restart the load process using RI_SUPPORT_UTIL procedures in APEX. Raise a Service Request with Oracle if neither of these options resolves your issue.

Once you have performed the load and validated the data one time, you may wish to automate the remaining file loads. An intraday cycle is available in POM that can run the sales history load multiple times using your specified start times. Follow the steps below to leverage this process:

  1. Upload multiple ZIP files, each containing one SALES.csv file, naming them RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, and so on, incrementing the index at the end of the ZIP file name. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table once they are uploaded and the HIST_ZIP_FILE_UNLOAD_JOB process has been run at least once.

  2. In the POM batch administration screen, ensure all of the jobs in the RI_INTRADAY_CYCLE are enabled, matching your initial ad hoc run. Schedule the intraday cycles from Scheduler Administration to occur at various intervals throughout the day. Space out the cycles based on how long it took to process your first file.

  3. Monitor the load progress from the Batch Monitoring screen to see the results from each run cycle.

Inventory Position History Load

RAP supports the loading of inventory position history using full, end-of-week snapshots. These weekly snapshots may be provided one week at a time or as multiple weeks in a single file. The data must be loaded sequentially from the earliest week to the latest week with no gaps or out-of-order periods. For example, you cannot start with the most recent inventory file and go backward; you must start from the first week of history. Refer to Data File Generation for more details on the file requirements.

A variety of C_ODI_PARAM_VW settings are available in the Control Center to disable inventory features that are not required for your implementation. All of the following parameters can be changed to a value of N during the history load and enabled later for daily batches, as it will greatly improve the load times:

  • RI_INVAGE_REQ_IND – Disables calculation of first/last receipt dates and inventory age measures. Receipt date calculation is used in RI and required for Offer Optimization (as a method of determining entry/exit dates for items). It is also required for Short Lifecycle (SLC) forecasting methods. Set to Y if using any of these applications.

  • RA_CLR_LEVEL – Disables the mapping of clearance event IDs to clearance inventory updates. Used only in RI reporting.

  • RI_PRES_STOCK_IND – Disables use of replenishment data for presentation stock to calculate inventory availability measures. Used only in RI reporting.

  • RI_BOH_SEEDING_IND – Disables the creation of initial beginning-on-hand records so analytics has a non-null starting value in the first week. Used only in RI reporting.

  • RI_MOVE_TO_CLR_IND – Disables calculation of move-to-clearance inventory measures when an item/location goes into or out of clearance status. Used only in RI reporting.

  • RI_MULTI_CURRENCY_IND – Disables recalculation of primary currency amounts if you are only using a single currency. Should be enabled for multi-currency, or disabled otherwise.

The following steps describe the process for loading inventory history:

  1. If you need inventory to keep track of first/last receipt dates for use in Offer Optimization or Short Lifecycle (SLC) forecasting, then you must first load a RECEIPT.csv file for the same historical period as your inventory file (because receipt dates feed forecasting, this may also make them required for your Inventory Optimization loads if you plan to use SLC forecasting). You must also set RI_INVAGE_REQ_IND to Y. Receipts are loaded using the HIST_CSV_INVRECEIPTS_LOAD_ADHOC process. Receipts may be provided at the day or week level depending on your history needs.

  2. Create the file INVENTORY.csv containing one or more weeks of inventory snapshots in chronological order along with your CTX file to define the columns that are populated. The DAY_DT value on every record must be an end-of-week date (Saturday by default). The only exception to this is the final week of history, which may be the middle of the week as long as you perform initial seeding loads on the last day of that week.

  3. Upload the history file and its context file to Object Storage using the RAP_DATA_HIST.zip file.

  4. Update the column HIST_LOAD_LAST_DATE on the table C_HIST_LOAD_STATUS to the date matching the last day of your overall history load (this will be later than the dates in the current file). This can be done from the Control & Tactical Center. If you are loading history after your nightly batches were already started, then you must set this date to be the last week-ending date before your first daily/weekly batch. No other date value can be used in this case.

  5. Execute the HIST_ZIP_FILE_LOAD_ADHOC process.

  6. If you are providing RECEIPT.csv for tracking receipt dates in history, run HIST_CSV_INVRECEIPTS_LOAD_ADHOC at this time (a sample message body is shown after these steps).

  7. Execute the HIST_STG_CSV_INV_LOAD_ADHOC process to stage your data into the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.

  8. Execute the HIST_INV_LOAD_ADHOC batch process to load the file data. The process loops over the file one week at a time until all weeks are loaded. It updates the C_HIST_LOAD_STATUS table with the progress, which you can monitor from APEX or DV. Sample Postman message bodies:

    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_STG_CSV_INV_LOAD_ADHOC"
    }
    
    {
      "cycleName":"Adhoc", 
      "flowName":"Adhoc", 
      "processName":"HIST_INV_LOAD_ADHOC"
    }
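
If you are also loading RECEIPT.csv (steps 1 and 6 above), the receipts history process can be invoked with the same message pattern (a sketch; no additional request parameters are assumed):

{
  "cycleName":"Adhoc",
  "flowName":"Adhoc",
  "processName":"HIST_CSV_INVRECEIPTS_LOAD_ADHOC"
}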

This process can be repeated as many times as needed to load all history files for the inventory position. Remember that inventory cannot be loaded out of order, and you cannot go back in time to reload files after you have processed them. If you load a set of inventory files and then find issues during validation, erase the tables in the database and restart the load with corrected files.

If you finish the entire history load and need to test downstream systems (like Inventory Optimization) then you will want to populate the table W_RTL_INV_IT_LC_G first (the history load skips this table). There is a separate standalone job HIST_LOAD_INVENTORY_GENERAL_JOB that you may execute to copy the final week of inventory from the fact table to this table.

If your inventory history has invalid data, you may get rejected records, and the batch process will fail with a message that rejects exist in the data. If this occurs, you cannot proceed until you resolve your input data, because rejections on positional data MUST be resolved for one date before moving on to the next. If you move on to the next date without reprocessing any rejected data, that data is lost and cannot be loaded at a later time without starting over. When this occurs:

  1. The inventory history load will automatically populate the table W_RTL_REJECT_DIMENSION_TMP with a list of invalid dimensions it has identified. If you are running any other jobs besides the history load, you can also run the process W_RTL_REJECT_DIMENSION_TMP_ADHOC to populate that table manually. You can either fix the data and reload new files or proceed with the current file.

  2. After reviewing the rejected records, run REJECT_DATA_CLEANUP_ADHOC, which will erase the E$ table and move all rejected dimensions into a skip list. You must pass in the module code you want to clean up data for as a parameter on the POM job (in this case the module code is INV). The skip list is loaded to the table C_DISCARD_DIMM. Skipped identifiers will be ignored for the current file load, and then reset for the start of the next run.

    Example Postman message body:

    {
    "cycleName": "Adhoc", 
    "flowName":"Adhoc", 
    "processName":"REJECT_DATA_CLEANUP_ADHOC",
    "requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=INV"
    }
  3. If you want to fix your files instead of continuing the current load, stop here and reload your dimensions and/or fact data following the normal process flows.

  4. If you are resuming with the current file with the intent to skip all data in C_DISCARD_DIMM, restart the failed POM job now. The skipped records are permanently lost and cannot be reloaded unless you erase your inventory data and start loading files from the beginning.

Log a Service Request with Oracle Support for assistance with any of the above steps if you are having difficulties with loading inventory history or dealing with rejected records.

Price History Load

Certain applications, such as Promotion and Markdown Optimization, require price history to perform their calculations. Price history is similar to inventory in that it is positional, but it can be loaded in a more compressed manner due to the extremely high data volumes involved. The required approach for price history is as follows:

  1. Update C_HIST_LOAD_STATUS for the PRICE records in the table, specifying the last date of history load, just as you did for inventory. If you are loading history after your nightly batches already started, then you must set this date to be the last week-ending date before your first daily/weekly batch. No other date value can be used in this case.

  2. If you are loading prices for the PMO/OO applications specifically, then go to C_ODI_PARAM_VW in the Control Center and change the parameter RI_LAST_MKDN_HIST_IND to have a value of Y. This will populate some required fields for PMO markdown price history.

  3. Create an initial, full snapshot of price data in PRICE.csv for the first day of history and load this file into the platform using the history processes in this section. All initial price records must come with a type code of 0.

  4. Create additional PRICE files containing just price changes for a period of time (such as a month) with the appropriate price change type codes and effective day dates for those changes. Load each file one at a time using the history processes.

  5. The history procedure will iterate over the provided files day by day, starting from the first day of history, up to the last historical load date specified in C_HIST_LOAD_STATUS for the pricing fact. For each date, the procedure checks the staging data for effective price change records and loads them, then moves on to the next date.

The process to perform price history loads is similar to the inventory load steps. It uses the PRICE.csv file and the HIST_CSV_PRICE_LOAD_ADHOC process (the price load only has one load process instead of two like sales/inventory). Just like inventory, you must load the data sequentially; you cannot back-post price changes to earlier dates than what you have already loaded. Refer to Data File Generation for complete details on how to build this file.
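
As with the other history loads, the price process can be invoked with the same ad hoc message pattern once the ZIP file has been uploaded and unpacked (a sketch; no additional request parameters are assumed):

{
  "cycleName":"Adhoc",
  "flowName":"Adhoc",
  "processName":"HIST_CSV_PRICE_LOAD_ADHOC"
}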

Just like inventory, the REJECT_DATA_CLEANUP_ADHOC process may be used when records are rejected during the load. Price loads cannot continue until you review and clear the rejections.

{
  "cycleName": "Adhoc", 
  "flowName":"Adhoc", 
  "processName":"REJECT_DATA_CLEANUP_ADHOC",
  "requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=PRICE"
}
Other History Loads

While sales and inventory are the most common facts to load history for, you may also want to load history for other areas such as receipts and transfers. Separate ad hoc history load processes are available for the following fact areas:

  • HIST_CSV_ADJUSTMENTS_LOAD_ADHOC

  • HIST_CSV_INVRECEIPTS_LOAD_ADHOC

  • HIST_CSV_MARKDOWN_LOAD_ADHOC

  • HIST_CSV_INVRTV_LOAD_ADHOC

  • HIST_CSV_TRANSFER_LOAD_ADHOC

  • HIST_CSV_DEAL_INCOME_LOAD_ADHOC

  • HIST_CSV_ICMARGIN_LOAD_ADHOC

  • HIST_CSV_INVRECLASS_LOAD_ADHOC

All of these interfaces deal with transactional data (not positional) so you may use them at any time to load history files in each area.
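
These processes follow the same invocation pattern as the sales, inventory, and price history loads. For example, a sample Postman message body for the inventory receipts history load (a sketch assuming no additional job parameters are required):

{
  "cycleName":"Adhoc", 
  "flowName":"Adhoc", 
  "processName":"HIST_CSV_INVRECEIPTS_LOAD_ADHOC"
}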

Note:

These processes are intended to support history data for downstream applications such as AI Foundation and Planning, so the tables populated by each process by default should satisfy the data needs of those applications. Jobs not needed by those apps are not included in these processes.

Some data files used by AIF and Planning applications do not have a history load process, because the data is only used from the current business date forwards. For Purchase Order data (ORDER_DETAIL.csv), refer to the section below on Seed Positional Facts if you need to load the file before starting your nightly batch processing. For other areas like transfers/allocations used by Inventory Optimization, those jobs are only included in the nightly batch schedule and do not require any history to be loaded.

Modifying Staged Data

If you find problems in the data you’ve staged in the RAP database (specific to RI/AIF input interfaces), you have the option to update those tables directly from APEX, allowing you to reprocess the records without uploading new files through FTS. You have the privileges to insert, delete, or update records in tables where data is staged before being loaded into the core data model, such as W_RTL_INV_IT_LC_DY_FTS for inventory data.

Directly updating the staging table data can be useful for quickly debugging load failures and correcting minor issues. For example, if you are attempting to load PRODUCT.csv for the first time and discover that some required fields are missing data for some rows, you may directly update the W_PRODUCT_DTS table to put values in those fields and rerun the POM job, allowing you to progress with your data load and find any additional issues before generating a new file. Similarly, you may have loaded an inventory receipts file, only to discover after staging the file that data was written to the wrong column (INVRC_QTY contains the AMT values and vice versa). You can correct the staged values, continue the load to the target tables to verify the results, and then fix your source data from the next run onward.

These privileges extend only to staging tables, such as table names ending in FTS, DTS, FS, or DS. You cannot modify internal tables holding the final fact or dimension data. You cannot modify configuration tables as they must be updated from the Control & Tactical Center. The privileges do not apply to objects in the RDX or PDS database schemas.

Reloading Dimensions

It is common to reload dimensions at various points throughout the history load, or even in sync with every history batch run. Ensure that your core dimensions, such as the product and location hierarchies, are up-to-date and aligned with the historical data being processed. To reload dimensions, follow the same process as described in the Initial Dimension Load steps, ensuring that the current business load date in the system is on or before the date in history when the dimensions will be required. For example, if you are loading history files in a monthly cadence, ensure that new product and location data required for the next month has been loaded no later than the first day of that month, so it is effective for all dates in the history data files. Do not reload dimensions for the same business date multiple times; advance the date to some point after the prior load each time you want to load a new set of changed dimension files.
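
For example, assuming the business load date is advanced with the same LOAD_CURRENT_BUSINESS_DATE_ADHOC process used elsewhere in this chapter, you could set the date to the first day of the next month of history (the date below is illustrative) before rerunning the dimension load processes:

{
  "cycleName":"Adhoc", 
  "flowName":"Adhoc", 
  "processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC", 
  "requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2018-01-01"
}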

It is also very important to understand that history load procedures are unable to handle reclassifications that have occurred in source systems when you are loading history files. For example, if you are using current dimension files from the source system to process historical data, and the customer has reclassified products so they are no longer correct for the historical time periods, then your next history load may place sales or inventory under the new classifications, not the ones that were relevant in history. For this reason, reclassifications should be avoided if at all possible during history load activities, unless you can maintain historical dimension snapshots that will accurately reflect historical data needs.

Seed Positional Facts

Once sales and inventory history have been processed, you will need to perform seeding of the positional facts you wish to use. Seeding a fact means to load a full snapshot of the data for all active item/locations, thus establishing a baseline position for every possible record before nightly batches start loading incremental updates to those values. Seeding of positional facts should only occur once history data is complete and daily batch processing is ready to begin. Seed loads should also be done for a week-ending date, so that you do not have a partial week of daily data in the system when you start daily batches.

Instead of doing separate seed loads, you also have the option of just providing full snapshots of all positional data in your first nightly batch run. This will make the first nightly batch take a long time to complete (potentially 8+ hours) but it allows you to skip all of the steps documented below. This method of seeding the positional facts is generally the preferred approach for implementers, but if you want to perform manual seeding as a separate activity, review the rest of this section. If you are also implementing RMFCS, you can leverage Retail Data Extractor (RDE) programs for the initial seed load as part of your first nightly batch run, following the steps in that chapter instead.

If you did not previously disable the optional inventory features in C_ODI_PARAM (parameters RI_INVAGE_REQ_IND, RA_CLR_LEVEL, RI_PRES_STOCK_IND, RI_BOH_SEEDING_IND, RI_MOVE_TO_CLR_IND, and RI_MULTI_CURRENCY_IND) then you should review these settings now and set all parameters to N if the functionality is not required. Once this is done, follow the steps below to perform positional seeding:

  1. Create the files containing your initial full snapshots of positional data. This may include one or more of the following:

    • PRICE.csv

    • COST.csv (used for both BCOST and NCOST data interfaces)

    • INVENTORY.csv

    • ORDER_DETAIL.csv (ORDER_HEAD.csv should already be loaded using the dimension process)

    • W_RTL_INVU_IT_LC_DY_FS.dat

  2. Upload the files to Object Storage using the RAP_DATA_HIST.zip file.

  3. Execute the LOAD_CURRENT_BUSINESS_DATE_ADHOC process to set the load date to be the next week-ending date after the final date in your history load.

    {
    "cycleName":"Adhoc", 
    "flowName":"Adhoc", 
    "processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC", 
    "requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
    }
    
  4. Execute the ad hoc seeding batch processes depending on which files have been provided. Sample Postman messages:

    {
    "cycleName":"Adhoc", 
    "flowName":"Adhoc", 
    "processName":"SEED_CSV_W_RTL_PRICE_IT_LC_DY_F_PROCESS_ADHOC"
    }
    
    {
    "cycleName": "Adhoc",
    "flowName":"Adhoc", 
    "processName":"SEED_CSV_W_RTL_NCOST_IT_LC_DY_F_PROCESS_ADHOC"
    }
    
    {
    "cycleName":"Adhoc", 
    "flowName":"Adhoc", 
    "processName":"SEED_CSV_W_RTL_BCOST_IT_LC_DY_F_PROCESS_ADHOC"
    }
    
    {
    "cycleName":"Adhoc", 
    "flowName":"Adhoc", 
    "processName":"SEED_CSV_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC"
    }
    
    {
    "cycleName":"Adhoc", 
    "flowName":"Adhoc", 
    "processName":"SEED_CSV_W_RTL_INVU_IT_LC_DY_F_PROCESS_ADHOC"
    }
    
    {
    "cycleName":"Adhoc", 
    "flowName":"Adhoc",
    "processName":"SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC"
    }

Once all initial seeding is complete and data has been validated, you are ready to perform a regular batch run. Provide the data files expected for a full batch, such as RAP_DATA.zip or RI_RMS_DATA.zip for foundation data, RI_MFP_DATA.zip for externally-sourced planning data (for RI reporting and AI Foundation forecasting), and any AI Foundation Cloud Services files in the ORASE_WEEKLY.zip file. If you are sourcing daily data from RMFCS, then you need to ensure that the RDE batch flow is configured to run nightly along with the RAP batch schedule. Batch dependencies between RDE and RI should be checked and enabled if they are not already turned on.

From this point on, the nightly batch takes care of advancing the business date and loading all files, assuming that you want the first load of nightly data to occur the day after seeding. The following diagram summarizes a potential set of dates and activities using the history and seeding steps described in this chapter:

Diagram of History and Seeding Steps

Note:

The sequential nature of this flow of events must be followed for positional facts (for example, inventory) but not for transactional facts (such as sales). Transactional data supports posting for dates other than what the current system date is, so you can choose to load sales history at any point in this process.

Run Nightly Batches

As soon as initial seeding is performed (or instead of initial seeding), you need to start nightly batch runs. If you are using the nightly batch to seed positional facts, ensure your first ZIP file upload for the batch has those full snapshots included. Once those full snapshots are loaded through seeding or the first full batch, then you can send incremental files rather than full snapshots.

Nightly batch schedules can be configured in parallel with the history load processes using a combination of the Customer Modules Management (in Retail Home) and the POM administration screens. It is not recommended to configure the nightly jobs manually in POM, as there are over 500 batch programs; choosing which to enable can be a time-intensive and error-prone activity. Customer Modules Management greatly simplifies this process and preserves dependencies and required process flows. Batch Orchestration describes the batch orchestration process and how you can configure batch schedules for nightly execution.

Once you move to nightly batches, you may also want to switch dimension interfaces from Full to Incremental loading of data. Several interfaces, such as the Product dimension, can be loaded incrementally, sending only the changed records every day instead of a full snapshot. These options use the IS_INCREMENTAL flag in the C_ODI_PARAM_VW table and can be accessed from the Control & Tactical Center. If you are unsure of which flags you want to change, refer to the Retail Insights Implementation Guide for detailed descriptions of all parameters.

Note:

At this time, incremental product and location loads are supported when using RDE for integration or when using legacy DAT files. CSV files should be provided as full snapshots.

As part of nightly batch uploads, also ensure that the parameter file RA_SRC_CURR_PARAM_G.dat is included in each ZIP package and that it is automatically updated with the current business date for that set of files. This file is used for business date validation so that incorrect files are not processed, and it helps Oracle Support identify the current business date of a particular set of files if they need to intervene in the batch run or retrieve archived files for past dates. Refer to the System Parameters File section for file format details.

In summary, here are the main steps that must be completed to move from history loads to nightly batches:

  1. All files must be bundled into a supported ZIP package like RAP_DATA.zip for the nightly uploads, and this process should be automated to occur every night.

  2. Include the system parameter file RA_SRC_CURR_PARAM_G.dat in each nightly upload ZIP and automate the setting of the vdate parameter in that file (not applicable if RDE jobs are used).

  3. Sync POM schedules with the Customer Module configuration using the Sync with MDF button in the Batch Administration screen, restart the POM schedules to reflect the changes, and then review the enabled/disabled jobs to ensure the necessary data will be processed in the batch.

  4. Move the RI ETL business date up to the date one day before the current nightly load (using LOAD_CURRENT_BUSINESS_DATE_ADHOC). The nightly load takes care of advancing the date from this point forward.

  5. Close and re-open the batch schedules in POM as needed to align the POM business date with the date used in the data (all POM schedules should be open for the current business date before running the nightly batch).

  6. Schedule the start time from the Scheduler Administration screen > RI schedule > Nightly tab. Enable it and set a start time. Restart your schedule again to pick up the new start time.

Sending Data to AI Foundation

All AI Foundation modules leverage a common batch infrastructure to initialize the core dataset, followed by ad hoc, application-specific programs to generate additional data as needed. Before loading any data into an AI Foundation module, it is necessary to complete initial dimension loads into RI and validate that core structures (calendar, products, locations) match what you expect to see. Once you are comfortable with the data that has been loaded in, leverage the following jobs to move data into one or more AI Foundation applications.

Table 3-6 Extracts for AI Foundation

POM Process Name Usage Details

INPUT_FILES_ADHOC_PROCESS

Receive inbound zip files intended for AI Foundation, archive and extract the files. This process looks for the ORASE_WEEKLY_ADHOC.zip file.

RSE_MASTER_ADHOC_PROCESS

Foundation data movement from RI to AI Foundation, including core hierarchies and dimensional data. Accepts many different parameters to run specific steps in the load process.

<app>_MASTER_ADHOC_PROCESS

Each AI Foundation module, such as SPO or IO, has a master job for extracting and loading data that is required for that application, in addition to the RSE_MASTER processes. AI Foundation module jobs may look for a combination of data from RI and input files in ORASE_WEEKLY_ADHOC.zip.

Because AI Foundation Cloud Services ad hoc procedures have been exposed using only one job in POM, they are not triggered like RI procedures. AI Foundation programs accept a number of single-character codes representing different steps in the data loading process. These codes can be provided directly in POM by editing the Parameters of the job in the Batch Monitoring screen, then executing the job through the user interface.

AI Foundation Standalone Batch Jobs Actions Menu

For example, this string of parameters will move all dimension data from RI to AI Foundation:

Parameters for Moving Dimension Data from RI to AI Foundation

Additional parameters are available when moving periods of historical data, such as inventory and sales:

Parameters for Moving Periods of Historical Data

A typical workflow for moving core foundation data into AI Foundation is:

  1. Load the core foundation files (like Calendar, Product, and Organization) into RI.

  2. Use the RSE_MASTER_ADHOC_PROCESS to move those same datasets to AI Foundation, providing specific flag values to only run the needed steps.

  3. Load some of your history files for Sales to validate the inputs.

  4. Load the same range of sales to AI Foundation using the sales load with optional from/to date parameters.

  5. Repeat the previous two steps until all sales data is loaded into both RI and AI Foundation.

    Performing the process iteratively provides you early opportunities to find issues in the data before you’ve loaded everything, but it is not required. You can load all the data into AI Foundation at one time.

Follow the same general flow for the other application-specific, ad hoc flows into the AI Foundation modules. For a complete list of parameters in each program, refer to the AI Foundation Operations Guide.

Sending Data to Planning

If a Planning module is being implemented, then additional RI jobs should be executed as part of the initial and nightly batch runs. These jobs are available through ad hoc calls, and the nightly jobs are included in the RI nightly schedule. Review the list below for more details on the core Planning extracts available.

Table 3-7 Extracts for Planning

POM Job Name Usage Details

W_PDS_PRODUCT_D_JOB

Exports a full snapshot of Product master data (for non-pack items only) and associated hierarchy levels.

W_PDS_ORGANIZATION_D_JOB

Exports a full snapshot of Location master data (for stores and warehouses only) and associated hierarchy levels.

W_PDS_CALENDAR_D_JOB

Exports a full snapshot of Calendar data at the day level and associated hierarchy levels.

Note:

While RI exports the entire calendar, PDS will only import 5 years around the RPAS_TODAY date (current year +/- 2 years).

W_PDS_EXCH_RATE_G_JOB

Exports a full snapshot of exchange rates.

W_PDS_PRODUCT_ATTR_D_JOB

Exports a full snapshot of item-attribute relationships. This is inclusive of both diffs and UDAs.

W_PDS_DIFF_D_JOB

Exports a full snapshot of Differentiators such as Color and Size.

W_PDS_DIFF_GRP_D_JOB

Exports a full snapshot of differentiator groups (most commonly size groups used by SPO and AP).

W_PDS_UDA_D_JOB

Exports a full snapshot of User-Defined Attributes.

W_PDS_BRAND_D_JOB

Exports a full snapshot of Brand data (regardless of whether they are currently linked to any items).

W_PDS_SUPPLIER_D_JOB

Exports a full snapshot of Supplier data (regardless of whether they are currently linked to any items).

W_PDS_REPL_ATTR_IT_LC_D_JOB

Exports a full snapshot of Replenishment Item/Location Attribute data (equivalent to REPL_ITEM_LOC from RMFCS).

W_PDS_DEALINC_IT_LC_WK_A_JOB

Incremental extract of deal income data (transaction codes 6 and 7 from RMFCS) posted in the current business week.

W_PDS_PO_ONORD_IT_LC_WK_A_JOB

Incremental extract of future on-order amounts for the current business week, based on the expected OTB date.

W_PDS_INV_IT_LC_WK_A_JOB

Incremental extract of inventory positions for the current business week. Inventory is always posted to the current week; there are no back-posted records.

W_PDS_SLS_IT_LC_WK_A_JOB

Incremental extract of sales transactions posted in the current business week (includes back-posted transactions to prior transaction dates).

W_PDS_INVTSF_IT_LC_WK_A_JOB

Incremental extract of inventory transfers posted in the current business week (transaction codes 30, 31, 32, 33, 37, 38 from RMFCS).

W_PDS_INVRC_IT_LC_WK_A_JOB

Incremental extract of inventory receipts posted in the current business week. Only includes purchase order receipts (transaction code 20 from RMFCS).

W_PDS_INVRTV_IT_LC_WK_A_JOB

Incremental extract of returns to vendor posted in the current business week (transaction code 24 from RMFCS).

W_PDS_INVADJ_IT_LC_WK_A_JOB

Incremental extract of inventory adjustments posted in the current business week (transaction codes 22 and 23 from RMFCS).

W_PDS_SLSWF_IT_LC_WK_A_JOB

Incremental extract of wholesale and franchise transactions posted in the current business week (transaction codes 82, 83, 84, 85, 86, 88 from RMFCS).

W_PDS_MKDN_IT_LC_WK_A_JOB

Incremental extract of markdowns posted in the current business week (transaction codes 11, 12, 13, 14, 15, 16, 17, 18 from RMFCS).

W_PDS_INV_IT_LC_WK_A_INITIAL_JOB

Initial history extract for inventory position data based on the timeframe established in C_SOURCE_CDC.

W_PDS_FLEXFACT1_F_JOB

Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures. Flex Fact 1 is also used by OO to display custom measures.

W_PDS_FLEXFACT2_F_JOB

Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures.

W_PDS_FLEXFACT3_F_JOB

Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures.

W_PDS_FLEXFACT4_F_JOB

Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures.

The PDS jobs are linked with several ad hoc processes in POM, providing you with the ability to extract specific datasets on-demand as you progress with history and initial data loads. The table below summarizes the ad hoc processes, which can be called using the standard methods such as cURL or Postman.

Table 3-8 RI Ad Hoc Processes for Planning

POM Process Name Usage Details

LOAD_PDS_DIMENSION_PROCESS_ADHOC

Groups all of the dimension (D/G table) extracts for PDS into one process for ad hoc execution. Disable individual jobs in POM to run only some of them.

LOAD_PDS_FACT_PROCESS_ADHOC

Groups all of the fact (F/A table) extracts for PDS into one process for ad hoc execution. Disable individual jobs in POM to run only some of them.

LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC

History extract process for inventory positions, run only to pull a full snapshot of inventory from RI for one or multiple weeks of data. You must set the start and end dates for extraction in the C_SOURCE_CDC table before running this process.

C_LOAD_DATES_CLEANUP_ADHOC

Purge the successful job records from C_LOAD_DATES, which is necessary when you are running the same jobs multiple times per business date.

All of the PDS fact jobs leverage the configuration table C_SOURCE_CDC to track the data that has been extracted. On the first run of any incremental process, the job will extract all available data in a single run. From that point forward, the extract will incrementally load only the data that has been added or modified since the last extract. The initial inventory job requires start and end dates for extraction, so you must update C_SOURCE_CDC from the Control & Tactical Center before running it. For all other jobs, the extract dates are written to C_SOURCE_CDC alongside the extract table name after each execution and can be overwritten as needed when doing multiple loads or re-running the same time period. If you run the same process more than once per business date, use C_LOAD_DATES_CLEANUP_ADHOC to reset the run statuses before the next run.
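
For example, a sample Postman message body to reset the run statuses before a re-run, following the same conventions as the other ad hoc calls (a sketch; no job parameters assumed):

{
  "cycleName":"Adhoc", 
  "flowName":"Adhoc", 
  "processName":"C_LOAD_DATES_CLEANUP_ADHOC"
}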

Table 3-9 Planning Integration Configuration Tables

Table Usage

C_SOURCE_CDC

Configuration and tracking table that shows the interfaces supported for RI to Planning integration and the currently processed date ranges.

RAP_INTF_CONFIG

Configuration and tracking table for integration ETL programs between all RAP modules. Contains their most recent status, run ID, and data retention policy.

RAP_INTF_RUN_STATUS

RAP integration run history and statuses.

RAP_LOG_MSG

RAP integration logging table; the specific contents vary depending on the program and logging level.

After the planning data has been extracted from RI, the Planning systems use the same processes to extract both initial and incremental data for each interface. As long as RI has exported historical data for the current run, the regular interfaces scheduled to run using POM will pull the historical or incremental data specific to that interface into PDS.

Note:

Before loading dimension data into Planning, you must configure the value of RPAS_TODAY and do any other initial configurations as specified in the MFP, RDF, AP, or RPASCE Implementation Guides.

Generating Forecasts for MFP

Before you can complete an MFP implementation, you must set up and run forecasting within AI Foundation Cloud Services. Review the steps below for the initial setup of Forecasting:

  1. In Manage Forecast Configurations in the AI Foundation UI, start by setting up a run type in the Setup train stop.

  2. Click the + icon above the table and fill in the fields in the popup. For MFP forecasting, the forecast method should be selected as Automatic Exponential Smoothing.

  3. Create a run type for each forecast measure/forecast intersection combination that is required for MFP.

  4. Create test runs in the Test train stop once you are done setting up the run types:

    1. Click a run type in the top table, then click on the + icon in the bottom table to create a run.

    2. If necessary, change the configuration parameters for the estimation process and forecast process in their respective tabs in the Create Run popup.

      For example, if you want to test a run using the Bayesian method, edit the Estimation Method parameter in the Estimation tab using the edit icon above the table.

    3. After modifying and reviewing the configuration parameters, click the Submit button to start the run.

  5. Once the run is complete, the status changes to Forecast Generation Complete.

Test runs are optional. In addition, you will need to review and modify the run type configuration, activate the run type, enable auto-approve, and map the run type to the downstream application (in this case, MFP). In the Manage train stop, select a row, click Edit Configurations Parameters, and edit the estimation and forecast parameters as needed. Once you are done, go to the Review tab, click Validate, then close the tab.

Note:

If the run type is active, you will only be able to view the parameters. To edit the parameters, the run type must be inactive.

To activate the run type and enable auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to MFP, go to the Map train stop and click the + icon to create a new mapping.

When configuring forecasts for the MFP base implementation, the following list of forecast runs may be required, and you will want to configure and test each run type following the general workflow above. Additional runs can be added to satisfy your MFP implementation requirements.

Note:

The “Channel” level in MFP is often referred to as “Area” level in RI and AI Foundation, so be sure to select the correct levels which align to your hierarchy.

MFP Plan | MFP Levels | Method | Data Source | Measure
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Amt
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Amt
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Unit
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Unit
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Total Returns Amount
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Total Returns Units
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Amt
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Amt
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Unit
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Unit
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Total Returns Amount
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Total Returns Units
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Gross Sales Amount
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Gross Sales Unit
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Returns Amount
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Returns Units
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Gross Sales Amount
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Gross Sales Unit
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Returns Amount
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Returns Units

Generating Forecasts for RDF

The same forecasting interface described in the previous section for MFP is also used to generate the base demand and initial forecasts for Retail Demand Forecasting (RDF). Demand and forecasts must be generated in AI Foundation as part of your RDF implementation. The general workflow is the same, but the forecasting levels and methods used will vary depending on your RDF implementation needs. For example, your RDF forecasts may be at an item/location/week level of granularity instead of higher levels like MFP requires. You will also use other forecasting methods such as Causal-Short Life Cycle instead of the MFP default method (Auto ES).

RDF directly integrates the demand and forecast parameters between AI Foundation Cloud Services and PDS tables using the RAP data exchange layer (RDX) as needed. Outputs from the forecasting engine will be written to tables prefixed with RSE_FCST_*. Outputs from RDF back into the data exchange layer will be in tables prefixed with RDF_APPR_FCST_*. For more details on importing the forecasts to RDF after they are generated, refer to the RDF Implementation Guide.

Implementation Flow Example

The steps below describe a minimal implementation workflow for RDF, which has the most complex process of the Planning applications. A similar process would be followed for other Planning applications, except that the integration would largely be one-way (RI and AIF pushing data to MFP/AP). Note that these manual steps are provided for first-time implementations and testing; all jobs would be automated as part of nightly batches before going live with the application.

Regardless of which RAP solutions you are implementing, the integration flows shown below are the same.

Before performing this workflow, you should already have an understanding of the Forecast Configuration UI in AIF. AIF should also be configured for RDF following the setup steps in the AI Foundation Implementation Guide section on "Forecast Configuration for RDF and AIF" (https://docs.oracle.com/cd/F17841_01/orscs/pdf/2311010/aifcs-2311010-impg.pdf).

Figure 3-2 Integration Summary

Integration Summary Flow Diagram
  1. Integrate your foundation data (core dimensions and fact history) using either RMFCS direct loads or object storage file uploads and run the load programs following the steps described earlier in this chapter.

  2. Move foundation data to the Data Exchange (RDX) using the LOAD_PDS_DIMENSION_PROCESS_ADHOC and LOAD_PDS_FACT_PROCESS_ADHOC processes.

  3. AI Foundation and Forecast Setup Flow

    AI Foundation and Forecast Setup Flow Diagram
    1. Move the data to AI Foundation using the RSE_MASTER_ADHOC_JOB process (passing in the appropriate parameters for your data). After that completes, run the forecast setup processes using PMO_CREATE_BATCH_RUN_JOB and PMO_RUN_EXEC_PROCESS_ADHOC.

    2. Set up run types and execute test runs in the Forecasting module of AI Foundation, then approve and map those runs to RDF. Set up Flex Groups in AIF to be used with the forecasts in RDF.

    3. Export AIF setup data for RDF to the Data Exchange (RDX) using the jobs below (MFP and AP do not require most of these jobs; instead, you would simply run RSE_FCST_EXPORT_ADHOC_PROCESS jobs for MFP/AP exports):

      • RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS

      • RSE_FCST_ESC_LVL_EXPORT_ADHOC_PROCESS

      • RSE_PROMO_OFFER_EXPORT_ADHOC_PROCESS

      • RSE_FLEX_GROUP_EXPORT_ADHOC_PROCESS

      • RSE_FCST_EXPORT_ADHOC_PROCESS (enabling the RDF job only)

  4. RDF Setup Flow

    RDF Setup Flow Diagram
    1. Import hierarchy and foundation data from RDX to RDF:

      • RDF_HIER_IMP_RDX_W_JOB

      • RDF_PRE_DATA_IMP_RDX_W_JOB

      • Import Flexible Grouping (OAT task)

    2. Import any app-specific non-foundation files from Object Storage to RDF:

      • RDF_HIER_IMP_OBS_W_JOB

      • RDF_PRE_DATA_IMP_OBS_W_JOB

    3. Perform your initial RDF Workflow Tasks following the RDF Implementation and User Guides, such as building the domain, setting up forecast parameters, new items, what-ifs, and so on.

    4. Run the RDF Pre-Forecast and Export Forecast Parameters Batches using RDF_PRE_BATCH_W_JOB and RDF_PRE_EXP_RDX_W_JOB processes.

  5. Forecast Execution Flow

    Forecast Execution Flow Diagram
    1. Import the updated RDF parameters to AIF using the jobs:

      • RSE_RDX_FCST_PARAM_ADHOC_PROCESS

      • RSE_FCST_RDX_NEW_ITEM_ENABLE_ADHOC_PROCESS

      • RSE_LIKE_RDX_RSE_ADHOC_PROCESS

      • PMO_EVENT_IND_RDF_ADHOC_PROCESS

    2. Return to the AIF Forecasting module and generate new forecasts using the RDF parameters. Create new runs under the same run type as before, generate the forecast(s), approve the demand parameters, and click Approve Base Demand and Forecast. Ensure you activate the run type from the Manage Forecast Configurations screen and enable auto-approve (if starting nightly runs).

    3. Export the forecasts using the RSE_FCST_EXPORT_ADHOC_PROCESS (you can directly run the RDF job RSE_RDF_FCST_EXPORT_ADHOC_JOB), RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS, and RSE_PROMO_OFFER_SALES_EXPORT_ADHOC_PROCESS.

    4. Import the forecasts to RDF using RDF_POST_DATA_IMP_PROCESS. Also re-run any of the previous RDF jobs (for example, RDF_PRE_DATA_IMP_RDX_W_JOB) if any other data has changed since the last run.

  6. Forecast Approval Flow

    Forecast Approval Flow Diagram
    1. Perform RDF Workflow Tasks to review, modify, and approve the final forecasts (Run Post Forecast Batch, Build Forecast Review, run Forecast What-Ifs).

    2. Export the approved forecast from RDF using the Export Approved Forecast OAT Task or the RDF_POST_EXP_RDX POM job.

    3. Use AIF POM jobs to process the approved forecast and generate flat file exports for RMFCS (if required):

      • RSE_RDX_APPD_FCST_ADHOC_PROCESS (import final forecast to AIF)

      • RSE_RDF_APPR_FCST_EXPORT_ADHOC_PROCESS (export week level)

      •  RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_PROCESS (export day level)

Generating Forecasts for AP

The same forecasting interface described in the previous section for MFP is also used to generate the Assortment Planning forecasts. When configuring forecasts for the AP base implementation, the following list of forecast runs may be required, and you will want to configure and test each run type. Additional runs can be added to satisfy your AP implementation requirements.

Note:

Although AP has an in-season plan, it still leverages Auto ES as the base forecasting method.

Bayesian (which includes plan data in the forecast) is set up as the estimation method for the in-season run. This is also why Store Sales is set as the data source for all runs: all runs have the ability to include plan data, based on the estimation methods used, in addition to store sales.

AP Plan | AP Levels | Method | Data Source | Measure
Item Plan Pre-Season | Item-Location-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Units
Item Plan In-Season | Item-Location-Week | Auto ES (with Bayesian estimation method) | Store Sales | Regular and Promotion Gross Sales Units
Subclass Plan Pre-Season | Subclass-Location-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Units

Loading Plans to RI

If you are implementing a Planning module and need to generate plan-influenced forecasts (such as the AP in-season forecast) then you will need to first integrate your plans into RI (which acts as the singular data warehouse for this data both when the plans come from Oracle Retail solutions and when they come from outside of Oracle).

If you are pushing the plans from MFP or AP directly, then you will enable a set of POM jobs to copy plan outputs directly to RI tables. You may also use an ad hoc process to move the plan data on-demand during the implementation.

Table 3-10 Plan Extracts from MFP and AP

POM Job Name Usage Details

W_RTL_PLAN1_PROD1_LC1_T1_FS_SDE_JOB

Extract the MFP plan output from the MFP_PLAN1_EXP table in the RDX schema.

W_RTL_PLAN2_PROD2_LC2_T2_FS_SDE_JOB

Extract the MFP plan output from the MFP_PLAN2_EXP table in the RDX schema.

W_RTL_PLAN3_PROD3_LC3_T3_FS_SDE_JOB

Extract the MFP plan output from the MFP_PLAN3_EXP table in the RDX schema.

W_RTL_PLAN4_PROD4_LC4_T4_FS_SDE_JOB

Extract the MFP plan output from the MFP_PLAN4_EXP table in the RDX schema.

W_RTL_PLAN5_PROD5_LC5_T5_FS_SDE_JOB

Extract the AP plan output from the AP_PLAN1_EXP table in the RDX schema.

In all of the jobs above, the target table name is in the job name (such as W_RTL_PLAN2_PROD2_LC2_T2_FS as the target for the PLAN2 extract). Once the data is moved to the staging layer in RI, the source-agnostic fact load jobs will import the data (W_RTL_PLAN2_PROD2_LC2_T2_FS is loaded to W_RTL_PLAN2_PROD2_LC2_T2_F, and so on). The fact tables in RI are then available for AI Foundation jobs to import them as needed for forecasting usage.

The plan tables in RI have configurable levels based on your Planning implementation. The default levels are aligned to the standard outputs of MFP and AP if you do not customize or extend them. If you have modified the planning solution to operate at different levels, then you must also reconfigure the RI interfaces to match. This includes the use of flexible alternate hierarchy levels (for example, the PRODUCT_ALT.csv interface) which require custom configuration changes to the PLAN interfaces before you can bring that data back to RI and AIF. These configurations are in C_ODI_PARAM_VW, accessed from the Control & Tactical Center in AI Foundation. For complete details on the plan configurations in RI, refer to the Retail Insights Implementation Guide.

Loading Forecasts to RI

If you are implementing RDF and RI, then you may want to integrate your approved forecast with RI for reporting. The same RI interfaces are used both for RDF forecasts and external, non-Oracle forecasts.

If you are pushing the forecast from RDF, use a set of POM jobs to copy forecast outputs directly to RI tables. You may also use an ad hoc process to move the forecast data on-demand during the implementation.

Table 3-11 Forecast Extracts from AIF and RDF

POM Job Name Usage Details

W_RTL_PLANFC_PROD1_LC1_T1_FS_SDE_JOB

Extracts the baseline forecast from the AI Foundation table RSE_FCST_DEMAND_DTL_CAL_EXP in the RDX schema.

W_RTL_PLANFC_PROD2_LC2_T2_FS_SDE_JOB

Extracts the final, approved forecast from the AI Foundation view RSE_RDF_APPR_FCST_VW, which contains only the approved forecast export from RDF.

In all of the jobs above, the target table name is in the job name (such as W_RTL_PLANFC_PROD1_LC1_T1_FS as the target for the PLANFC1 extract). Once the data is moved to the staging layer in RI, the source-agnostic fact load jobs will import the data (W_RTL_PLANFC_PROD1_LC1_T1_FS is loaded to W_RTL_PLANFC_PROD1_LC1_T1_F, and so on).

The forecast tables in RI have configurable levels based on your implementation. The default levels are aligned to the standard outputs of AI Foundation and RDF (item/location/week) if you do not customize or extend them. If you have modified the RDF solution to operate at different levels, then you must also reconfigure the RI interfaces to match. These configurations are in C_ODI_PARAM_VW, accessed from the Control & Tactical Center in AI Foundation. For complete details on the forecast configurations in RI, refer to the Retail Insights Implementation Guide.

Migrate Data Between Environments

The process of moving all your data from one cloud environment to another is referred to as a “lift-and-shift” process. It is common to perform all of the dataload activities in this chapter in one environment, then have the final dataset copied into another environment (instead of reloading it all from the beginning). This activity is performed by Oracle and can be requested using a Service Request. The lift-and-shift process can take several days to complete, depending on data volumes and how many other Oracle applications may be involved. It is expected that you will raise SRs to schedule the activity at least 2-3 weeks in advance of the target date.

When requesting the activity, you may specify whether you need to migrate the entire database (both data and structures) or only the data. The first time you perform this process, it must be a full migration of data and structures to synchronize the source and target environments. It is currently recommended to begin your implementation in the Production environment to avoid needing a lift-and-shift in order to go live. Around the time of your go-live date, you can request a lift-and-shift into your non-production environments to synchronize the data for future use.

Note:

The product versions between the source and target must be aligned. It is the project team’s responsibility to plan for appropriate upgrades and lift-and-shift activities such that this will be true.