3 Data Loads and Initial Batch Processing

This chapter describes the data requirements for implementing modules of the Retail Analytics Platform, where to get additional information for optional or application-specific data interfaces, and how to load an initial dataset into the cloud environments and distribute it across your desired applications.

Data Requirements

Preparing data for one or more modules of the Retail Analytics Platform can consume a significant amount of project time, so it is crucial to identify the minimum data requirements for the platform first, followed by any additional requirements that are specific to your implementation plan. Data requirements that are called out for the platform are typically shared across all modules, meaning you only need to provide the inputs once to leverage them everywhere. This is the case for foundational data elements, such as your product and location hierarchies. Foundation data must be provided for any module of the platform to be implemented. Foundation data can come from different sources depending on your current software landscape, such as the on-premise Oracle Retail Merchandising System (RMS) or third-party applications.

Figure 3-1 Inbound Foundation Data Flows

Application-specific data requirements are in addition to the shared foundation data, and may only be used by one particular module of the platform. These application data requirements may have different formats and data structures from the core platform-level dataset, so pay close attention to those additional interface specifications. References and links are provided later in this chapter to guide you to the relevant materials for application-specific inputs and data files.

If you are using RMS as your primary data source, then you may not need to produce some or all of these foundation files, as they will be created by other Oracle Retail processes for you. However, it is often the case that historical data requires a different set of foundation files from your future post-implementation needs. If you are loading manually-generated history files, or you are not using an Oracle Retail data source for foundation data, then review the rest of this section for details.

File Types and Data Format

The shared platform data files discussed in this chapter use a standard comma-delimited (CSV) file format, with text strings optionally enclosed in quotation marks or other characters. The first line of each file must contain the column headers, and all subsequent lines contain the input data. This applies to any filename ending in “.csv”. For specific columns in each of these files, the following standards can be used as a guideline (though they can be changed by configuration options).

Table 3-1 Foundation File Formatting

Datatype | Format | Example | Explanation
Number | 0.00 | 340911.10 | Numbers should be unformatted, with periods for decimal places. Commas or other symbols should not be used within the numerical values.
Character | “abc” | “Item #4561” | Any alphanumeric string can be optionally enclosed by quotation marks to encapsulate special characters, such as commas in a descriptive value.
Date | YYYYMMDD | 20201231 | Dates should be provided as simple 8-digit values in year-month-day sequence with no formatting.
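For illustration, a file that follows these conventions might begin like this. The column names and values are hypothetical and are shown only to demonstrate the formatting rules; refer to the published interface specifications for the actual column lists.

    ITEM,ITEM_DESC,UNIT_RETAIL,START_DT
    190085310001,"Men's T-Shirt, Blue",19.99,20201231
    190085310002,"Men's T-Shirt, Black",19.99,20201231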

Context File Configuration

Note:

If you are an existing RI or Science customer, or your environments were initially provisioned on version 19 or earlier, then this information does NOT apply. Refer to Legacy Context Files, later in this chapter, for legacy context file handling.

Before creating and processing a data file on the platform, choose the fields that will be populated and instruct the platform to look for data only in those columns. This configuration is handled through Context (CTX) files that are uploaded alongside each base data file. For example, the context file for PRODUCT.csv is PRODUCT.csv.ctx (the .ctx extension is appended to the end of the base filename).

Within each context file, you must provide a single column of values containing:

  • One or more parameters defining the behavior of the file load and the format of the file.

  • The list of fields contained in the source file, in the order in which they appear in the file specification:

    • #TABLE#<Staging Table Name>#

    • #DELIMITER#<Input Value>#

    • #DATEFORMAT#<Input Value>#

    • #REJECTLIMIT#<Input Value>#

    • #RECORDDELIMITER#<Input Value>#

    • #IGNOREBLANKLINES#<Input Value>#

    • #SKIPHEADERS#<Input Value>#

    • #TRIMSPACES#<Input Value>#

    • #TRUNCATECOL#<Input Value>#

    • #COLUMNLIST#

      <COL1>

      <COL2>

      <COL3>

The following is an example context file for the CALENDAR.csv data file:

File Name: CALENDAR.csv.ctx

File Contents:

	#TABLE#W_MCAL_PERIOD_DTS#
	#DELIMITER#,#
	#DATEFORMAT#YYYY-MM-DD#
	#REJECTLIMIT#100#
	#RECORDDELIMITER#\n#
	#IGNOREBLANKLINES#false#
	#SKIPHEADERS#1#
	#TRIMSPACES#rtrim#
	#TRUNCATECOL#false#
	#COLUMNLIST#
MCAL_CAL_ID
MCAL_PERIOD_TYPE
MCAL_PERIOD_NAME
MCAL_PERIOD
MCAL_PERIOD_ST_DT
MCAL_PERIOD_END_DT
MCAL_QTR
MCAL_YEAR
MCAL_QTR_START_DT
MCAL_QTR_END_DT
MCAL_YEAR_START_DT
MCAL_YEAR_END_DT

The TABLE field is required: it indicates the name of the database staging table updated by the file. Most of the other parameters are optional, and their rows can be excluded from the context file; however, any excluded parameter falls back to a system default that may not align with your file format. The server maintains a copy of all the context files used, so you do not need to send a context file every time. If no context files are found, the Analytics Platform uses the last known configuration.

The COLUMNLIST tag is also required: it determines the columns the customer uses in their .dat or .csv file. The column list must match the order of fields in the file from left to right, which must also align with the published file specifications. Include the list of columns after the #COLUMNLIST# tag.

Note:

Both RI and Science can use these context files to determine the format of incoming data.

For additional format options, the available values come from the DBMS_CLOUD package format options in Autonomous Data Warehouse (ADW).

Legacy Context Files

For RI interfaces in environments created on version 19 or earlier (including environments currently on v21 that were provisioned on v19), a more basic CTX context file is used. These context files accept only a list of columns in the data file; they cannot control formatting or other options. They must also have a filename ending in .dat.ctx regardless of the associated file, using the name of the RI target table as the base filename. For example, CALENDAR.csv has a context file named W_MCAL_PERIOD_DTS.dat.ctx. The file format itself is also fixed: CSV files are comma-delimited, while DAT files are pipe-delimited. DAT files use the same name for both the data file and the context file, such as W_CODE_DS.dat and W_CODE_DS.dat.ctx.

The following is an example legacy context file for the CALENDAR.csv data file:

File Name: W_MCAL_PERIOD_DTS.dat.ctx

File Contents:

MCAL_CAL_ID
MCAL_PERIOD_TYPE
MCAL_PERIOD_NAME
MCAL_PERIOD
MCAL_PERIOD_ST_DT
MCAL_PERIOD_END_DT
MCAL_QTR
MCAL_YEAR
MCAL_QTR_START_DT
MCAL_QTR_END_DT
MCAL_YEAR_START_DT
MCAL_YEAR_END_DT

Note:

This legacy context file format applies strictly to Retail Insights. Science applications do not support the legacy context file format; they expect the full data file contents in their interfaces. For a complete list of mappings between CSV filenames and table names for CTX file usage, refer to Context File Table Reference.

Platform Data Requirements

There is a subset of core platform data files that can be created and loaded once and then used across some or all application modules. These files use a specific format as detailed below.

Note:

Every application on the Retail Analytics Platform has additional data needs beyond this foundation data; however, this common set of files can be used to initialize the system before moving on to those specific requirements.

The first table defines the minimum dimensional data. A dimension is a collection of descriptive elements, attributes, or hierarchical structures that provide context to your business data. Dimensions tell the platform what your business looks like and how it operates. This is not the entire list of possible dimension files, just the main ones needed to use the platform. Refer to Legacy Foundation File Reference for a complete list of available platform foundation files, along with a cross-reference to the legacy interfaces they most closely align with. A complete interface specification document is also available in My Oracle Support to assist you in planning your application-specific interface needs.

Table 3-2 Common Foundation Dimensions

Dimension | Filename(s) | Usage
Product | PRODUCT.csv | The product foundation data includes the items you sell, their core merchandise attributes, and their hierarchical structure in your source systems.
Organization | ORGANIZATION.csv | The organization foundation data includes all of your business entities involved in the movement or sale of merchandise, including stores, warehouses, partner/finisher locations, web stores, and virtual warehouses. It also includes your organizational hierarchy and core location attributes.
Calendar | CALENDAR.csv | The calendar foundation data defines your business (or fiscal) calendar. This is the calendar that you operate in when making critical business decisions, reporting on financial results, and planning for the future. The most common calendar used by all modules of the platform is a 4-5-4 fiscal calendar.
Exchange Rates | EXCH_RATE.csv | Exchange rates define the conversion of monetary values from the currency they are recorded in to your primary operating currency. Most data is provided to the platform in the local currency of the data source, and it is converted to your primary currency during the load process.
Product Attributes | ATTR.csv, PROD_ATTR.csv | Product attributes describe the physical and operational characteristics of your merchandise and are a critical piece of information for many Science modules, such as Demand Transference and Size Profile Optimization.
System Parameters | RA_SRC_CURR_PARAM_G.dat | Parameters file that supports certain batch processes, such as the ability to load multiple ZIP files and run batches in sequence. Include this file with nightly RI batch uploads to specify the current business date, which enables the system to run multiple batches in sequence without customer input.

The other set of foundation files is referred to as facts. Fact data covers all of the actual events, transactions, and activities occurring throughout the day in your business. Each module in the platform has specific fact data needs, but the most common ones are listed below. At a minimum, you should expect to provide Sales, Inventory, and Receipts data for use in most platform modules. The intersection of all fact data (meaning which dimensional values are used) is at a common level of item/location/date. Additional identifiers may be needed on some files; for example, the sales data should be at the transaction level, the inventory file has a clearance indicator, and the adjustments file has type codes and reason codes.

Table 3-3 Common Foundation Facts

Fact | Filename(s) | Usage
Sales | SALES.csv | Transaction-level records for customer sales (wholesale data is provided separately). Used across all modules.
Inventory | INVENTORY.csv | Physical inventory levels for owned merchandise as well as consignment and concession items. Used across all modules.
Receipts | RECEIPT.csv | Inventory receipts into any location, including purchase order and transfer receipts. Used by Insights, Planning, and the Science modules to identify the movement of inventory into a location and to track first/last receipt date attributes.
Adjustments | ADJUSTMENT.csv | Inventory adjustments made at a location, including shrink, wastage, theft, stock counts, and other reasons. Used by Insights and Planning modules.
Purchase Orders | ORDER_HEAD.csv, ORDER_DETAIL.csv | The purchase order data for all orders placed with suppliers. Held at a level of order number, supplier, item, location, and date. Separate files are needed for the order attributes and order quantities/amounts. Used by Insights and Planning modules.
Markdowns | MARKDOWN.csv | The currency amount above or below the listed price of an item when that item's price is modified for any reason (planned markdown, POS discount, promotional sale, and so on). Used by Insights and Planning modules.
Transfers | TRANSFER.csv | The movement of inventory between two locations (both physical and virtual). Used by Insights and Planning modules.
Returns to Vendor | RTV.csv | The return of owned inventory to a supplier or vendor. Used by Insights and Planning modules.
Prices | PRICE.csv | The current selling retail value of an item at a location. Used by Insights and Science modules.
Costs | COST.csv | The base unit cost and derived net costs of an item at a location. Used by Insights and Science modules.
Wholesale/Franchise | SALES_WF.csv | Sales and markdown data from wholesale and franchise operations. Used by all modules.
Deal Income | DEAL_INCOME.csv | Income associated with deals made with suppliers and vendors. Used by Insights and Planning modules.

Details on which application modules make use of specific files (or columns within a file) can be found in the Interfaces Guide. Make sure you have a full understanding of the data needs for each application you are implementing before moving on to later steps in the process. If this is your first time creating these files, read Data File Generation for important information about key file structures and business rules that must be followed for each file.

Other Platform File Considerations

The following section describes additional data considerations that may apply to your implementation depending on the types and volumes of data being provided to the platform. Review each topic closely, as it affects the data provided in the foundation files.

Positional Data Handling

The largest sets of fact data in the platform tend to be those that represent every possible item/location combination (such as prices or costs). To efficiently store and process these data volumes, a data warehouse technique known as compression is used to capture only the changed records on a day-to-day basis, effectively maintaining a “current position” for every set of identifiers, which is updated during each batch execution. The output of this compression process is called positional data, and the following functional areas use this method of data load and storage:

  • Inventory

  • Prices

  • Costs

  • Purchase Orders

Positional data loads follow very specific rules and cannot be processed in the same manner as non-positional data such as sales transactions.

Table 3-4 Positional Data Rules

Rule | Explanation
Data Must be Sequential | Positional data must be loaded in the order of the calendar date on which it occurs and cannot be loaded out of order. For example, when loading history data for inventory, you must provide each week of inventory one after the other, starting from Week 1, then Week 2, Week 3, and so on.
Data Cannot be Back-Posted | Positional data cannot be posted to any date prior to the current load date or business date of the system. If your current load date is Week 52 2021, you cannot post records back to Week 50: those past positions cannot be changed. Any corrections must be loaded effective from the current date forward.
Data Must be Seeded | Because positional data must maintain the current position of all data elements in the fact (even those that are inactive or not changing), positional facts must be initialized, or "seeded," with a starting value for every possible combination of identifiers. Special seed programs are provided to load these initial full snapshots of data, after which you can provide incremental datasets (posting only the positions that change, not the full daily or weekly snapshot). Incremental loads are one of the main benefits of using positional data, as they greatly reduce your nightly batch runtime.

Throughout the initial data load process, there will be additional steps called out any time a positional load must be performed, to ensure you accurately capture both historical and initial seed data before starting nightly batch runs.
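To illustrate the positional (compressed) storage pattern described above, the following sketch shows how a feed containing only changed item/location positions might be merged into a current-position table. The table and column names are hypothetical and do not reflect the actual RI data model; the sketch only demonstrates the merge-on-change behavior.

    -- Hypothetical example: apply a changed-records-only feed to a current-position table
    MERGE INTO inv_current_position tgt
    USING inv_position_changes src
       ON (tgt.item_id = src.item_id AND tgt.location_id = src.location_id)
    WHEN MATCHED THEN
      UPDATE SET tgt.on_hand_qty = src.on_hand_qty,
                 tgt.position_dt = src.position_dt
    WHEN NOT MATCHED THEN
      INSERT (item_id, location_id, on_hand_qty, position_dt)
      VALUES (src.item_id, src.location_id, src.on_hand_qty, src.position_dt);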

System Parameters File

The RA_SRC_CURR_PARAM_G.dat dimension file is not used directly in your history load process; instead, it provides an important piece of information to the platform for operational activities. This file must contain the current business date associated with the files in the ZIP package. The file should be included with the nightly ZIP upload that contains your foundation data, such as RI_RMS_DATA.zip or RAP_DATA.zip.

The file has only two generic columns, PARAM_NAME and PARAM_VALUE. When data is sourced from RMFCS, it will be automatically generated and sent to RI in the nightly batch. If your data does not come from RMFCS, then you need to include the file manually. Currently, only one row will be used, but future releases may look for additional parameters in the file.

The file should contain the following row:

VDATE|20220101

The second value is the current business date that all your other files were generated for in YYYYMMDD format. The date should match the values on your fact data, such as the DAY_DT columns in sales, inventory, and so on. This format is not configurable and should be provided as shown.
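As an example, a nightly foundation package containing this parameter file could be assembled with a standard zip utility before being uploaded to Object Storage. The data filenames shown are illustrative; include whichever foundation and fact files you are providing for that business date.

    # Write the current business date into the parameter file
    echo "VDATE|20220101" > RA_SRC_CURR_PARAM_G.dat

    # Package it together with the foundation files for the same date
    zip RAP_DATA.zip RA_SRC_CURR_PARAM_G.dat SALES.csv SALES.csv.ctx INVENTORY.csv INVENTORY.csv.ctx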

Loading Data on Multiple Threads

Large data files, such as sales and inventory, can support multiple threads for parallel execution, which can greatly reduce the load times for history files. Some tables have an explicit column called ETL_THREAD_VAL to define your threads for incoming data. Parameters are available in C_ODI_PARAM_VW (in the Control Center) to configure each load program's maximum number of threads, using the parameter name LOC_NUM_OF_THREAD. For larger retailers, it is common to define 5 to 8 threads for incoming data feeds. You may choose how you split data across threads when generating the files, but a common strategy is to assign stores to threads, such that all the data for a given location is processed on the same thread. It is also helpful to balance the data across threads to be roughly equal in volume because batch programs must wait for all threads to complete before advancing to the next step.

For other interfaces, the ETL_THREAD_VAL column has been removed and, in those cases, the system will automatically calculate the threads based on the LOC_NUM_OF_THREAD parameter values. If ETL_THREAD_VAL is not provided on a file that has the column, the same automation will be applied to determine multi-threading behavior.
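As an illustration of the store-to-thread strategy, a thread number could be derived from the location identifier when generating the files, so that every record for a given location always lands on the same thread. The source table and column names here are hypothetical, and eight threads are assumed to match the LOC_NUM_OF_THREAD setting.

    -- Hypothetical derivation of ETL_THREAD_VAL while extracting a fact file
    SELECT location_id,
           MOD(location_id, 8) + 1 AS etl_thread_val
      FROM source_inventory_extract;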

Application-Specific Data

Each application within the Retail Analytics Platform may require data to be provided following specific rules and data formats, which can differ from those used in the common platform files. This section describes the use-cases for alternate data formats and lays out the basic rules that must be followed.

Retail Insights

Retail Insights has a large number of legacy interfaces that do not follow the shared platform data formats. These interfaces are populated with files named after their target database table with a file extension of .dat, such as W_PRODUCT_DS.dat. All files ending with a .dat extension are pipe-delimited files (using the | symbol as the column separator). These files also have a Unix line-ending character by default, although the line-ending character can be configured to be a different value, if needed. These files may be created by a legacy Merchandising (RMS) extract process or may be produced through existing integrations to an older version of RI or Science.

Table 3-5 Retail Insights Legacy File Formatting

Datatype | Format | Example | Explanation
Number | 0.00 | 340911.10 | Unformatted numbers with periods for decimal places. Commas or other symbols cannot be used within the numerical values.
Character | abc | Item #4561 | Any alphanumeric string is provided as-is, except that it must NOT contain pipe characters or line-ending characters.
Date | YYYY-MM-DD;00:00:00 | 2020-05-09;00:00:00 | Dates without timestamps must still use a timestamp format, but the time must be hard-coded to 00:00:00. Date fields (such as DAY_DT columns) must NOT have a non-zero time, or they will not load correctly.
Timestamp | YYYY-MM-DD;HH:MM:SS | 2020-05-09;09:35:19 | Use full date-and-time formatting ONLY when a full timestamp is expected on the column. This is not commonly used and should be noted in the interface specifications, if supported.

If you are implementing Retail Insights as one of your modules and you are in an environment that was originally installed with version 19 or earlier of RI, you may need to provide some files in this data format, in addition to the foundation files which use the CSV format. This file format is also used when integrating with legacy solutions such as the Retail Merchandising System (RMS) through the Retail Data Extractor (RDE).

Example data from the file W_RTL_PLAN1_PROD1_LC1_T1_FS.dat:

70|-1|13|-1|2019-05-04;00:00:00|RETAIL|0|1118.82|1|70~13~2019-05-04;00:00:00~0
70|-1|13|-1|2019-05-11;00:00:00|RETAIL|0|476.09|1|70~13~2019-05-11;00:00:00~0
70|-1|13|-1|2019-05-18;00:00:00|RETAIL|0|296.62|1|70~13~2019-05-18;00:00:00~0

Retail Science Platform

Modules within the Science Platform leverage the same Context (CTX) file concepts as described in the common foundation file formats. You may control the structure and contents of Science files using the parameters in the context files. The full list of interfaces used by Science modules is included in the Interfaces Guide.

Planning Platform

Planning solutions using PDS (Planning Data Store), such as Merchandise Financial Planning, have two main types of files:

  • Hierarchy/Dimension Files – Foundation Data for the Hierarchy/Dimensions

  • Measure/Fact Files – Factual Data specific to loadable metric/measures.

When loading directly to Planning applications, both types of files should only be in CSV format and they should contain headers. Headers should contain the details of the dimension names for Hierarchy/Dimension Files and the fact names for Measure/Fact Files.

Hierarchy/Dimension Files use the naming convention <Hierarchy Name>.hdr.csv.dat, and Measure Files can use any meaningful fact grouping name with an allowed extension of .ovr, .rpl, or .inc (see the example after the following list):

  • OVR extension should be used for override files

  • RPL extension is used to delete and replace position-based data sets

  • INC extension is for incremental files which can increment positional data.
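For example, assuming a product hierarchy named PROD and a hypothetical measure grouping for planned sales, the files could be named as follows. These names are illustrative only; refer to the Planning Implementation Guides for the actual hierarchy names and loadable measures.

    PROD.hdr.csv.dat        (hierarchy file; the header row lists the dimension names)
    planned_sales.ovr       (override measure file; the header row lists the fact names)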

If you are using the common foundation CSV files, most of the data can be interfaced through those shared integrations. However, certain files (such as the VAT Hierarchy) must be loaded directly to Planning; they are not sourced from RI at this time. Refer to the application-specific Planning Implementation Guides for more details about the list of files that are not included in foundation integrations.

Monitoring Load Progress

Throughout the setup and data load process, make sure you are regularly monitoring log messages and data tables to ensure procedures are completing successfully. The table below summarizes how to find more information depending on your current activity.

Table 3-6 How to Monitor Load Progress

Activity | Where to Monitor | How to Monitor
Ad Hoc Batch Processes | POM | POM User Guide, “Monitoring” chapter.
File Uploads | Object Storage APIs | Object Storage
Data Loading and Transformation Jobs | Data Visualizer | Data Visualizer
Planning Application Data Loads | Online Admin Tools | Online Administration Tools

Historical Load Status

It is important to understand that most data loaded into the Retail Analytics Platform is passed through the Retail Insights data warehouse before moving on to its final destination. Because of this, you will need to know where in Retail Insights you can look to find what data has been processed, what data may have been rejected or dropped due to issues, and how far along in the overall load process you are. The following tables represent the first step in the flow of data through the platform, which involves taking your input files and moving them through Retail Insights.

Table 3-7 Inbound Load Status Tables

Table | Usage
C_HIST_LOAD_STATUS | Tracks the progress of historical ad hoc load programs. This table will tell you which Retail Insights tables are being populated with historical data, the most recent status of the job executions, and the most recently completed period of historical data for each table. Use APEX or Data Visualizer to query this table after historical data load runs to ensure the programs are completing successfully and processing the expected historical time periods.
C_LOAD_DATES | Check for detailed statuses of historical load jobs. This is the only place that tracks this information at the individual ETL thread level. For example, it is possible for a historical load using 8 threads to successfully complete 7 threads but fail on one thread due to data issues. The job itself may just return as Failed in POM, so knowing which thread failed will help identify the records that may need correcting and which thread should be reprocessed.
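For example, after a historical load run you could review the status from APEX or Data Visualizer with a query along the following lines. The HIST_LOAD_STATUS and HIST_LOAD_LAST_DATE columns are referenced later in this chapter; the remaining column shown is an assumption to verify in your environment.

    SELECT table_name,            -- assumed column identifying the target RI table
           hist_load_status,
           hist_load_last_date
      FROM c_hist_load_status;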

Data Load Rejections

When loading data from flat files for the first time, it is common to have bad records that cannot be processed by the RAP load procedures, such as when the identifiers on a record are not present in the associated dimension tables. Retail Insights leverages dynamic rejected-record tables to capture all such data so you can easily see what was left out of a specific data load and needs to be corrected in subsequent runs. These tables do not exist until rejected records occur during a batch program execution; they are created and populated when the job fails or completes. Rejected records do not automatically result in a job failure, so you should monitor the system for rejected data throughout the load process. You may query the tables from the Data Visualizer any time you are running data load programs. Look for any tables created with the E$_ prefix, which indicates the presence of rejected records.

Table 3-8 Rejected Record Tables

Table | Usage
W_ETL_REJECTED_RECORDS | Summary table capturing rejected fact record counts that do not get processed into their target tables in Retail Insights. Use this to identify other tables with specific rejected data to analyze. Does not apply to dimensions, which do not have rejected record support at this time.
E$_W_RTL_SLS_TRX_IT_LC_DY_TMP | Example of a rejected record detail table for Sales Transactions. All rejected record tables start with the E$_ prefix. These tables are created at the moment the first rejection occurs for a load program.
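For example, you could list the rejected-record tables that exist and review the summary counts from Data Visualizer or APEX with queries such as the following. The RADM01 owner matches the partitioning queries later in this chapter; adjust it for your environment.

    SELECT table_name FROM all_tables WHERE owner = 'RADM01' AND table_name LIKE 'E$%';
    SELECT * FROM w_etl_rejected_records;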

Planning Foundation Data Status

If you are integrating data to a Planning module of the Retail Analytics Platform, there are additional tables you can monitor as you perform your initial setup and load process. These tables are part of the integration layer between the Retail Insights data warehouse and the Planning data store.

Table 3-9 Planning Integration Tables

Table | Usage
C_SOURCE_CDC | Configuration and tracking table that shows the interfaces supported for RI to Planning integration and the currently processed date ranges.
RAP_INTF_CONFIG | Configuration and tracking table for integration ETL programs between all RAP modules. Contains their most recent status, run ID, and data retention policy.
RAP_INTF_RUN_STATUS | RAP integration run history and statuses.
RAP_LOG_MSG | RAP integration logging table; specific contents vary depending on the program and logging level.

Preparing to Load Data

Implementations can follow this general outline for the data load process:

  1. Initialize the business and system calendars and perform table partitioning, which prepares the database for loading fact data.

  2. Begin staging dimension files, but do NOT load them all the way through to the target tables. For the first few runs, it is best to leverage APEX or DV to validate your data in RI’s staging tables and ensure it is correct and complete before loading it through.

  3. Load verified dimension data into RI, and perform additional checks on the data from DV/APEX, or using RI reports.

  4. If implementing any Science or Planning module, stop here and load dimension data to those systems as well. Data might work fine within RI but have issues only visible after processing in those systems.

  5. Load the first set of history files (for example, one month of sales or inventory) into RI. Again pause at the staging layer, validate the data with SQL queries, and then proceed to load it into RI.

  6. If implementing any Science or Planning module, stop here and load history data to those systems as well. Validate that the history data in those systems is complete and accurate per your business requirements.

  7. Continue loading history data into RI until you are finished with all data. You can stop at any time to move some of the data into other modules for validation purposes.

  8. After history loads are complete, all positional tables, such as Inventory Position, need to be seeded with a full snapshot of source data before they can be loaded using regular nightly batches. This seeding process is used to create a starting position in the database which can be incremented by daily delta extracts.

  9. When all history and seeding loads are completed and downstream systems are also populated with that data, nightly batches can be started.

Before you begin this process, it is best to prepare your working environment by identifying the tools and connections needed with Oracle cloud services that will allow you to interact with the platform, as detailed in the following sections.

Executing Ad Hoc Processes

To execute an ad hoc process outside of nightly batch processing, you can leverage a command-line interface (CLI) to issue cURL commands. The cURL utility is available on most platforms, including Windows, macOS, and Linux. All of the cURL command examples provided in this document use variables for the username and password. Newer versions of POM require OAuth authentication instead of Basic authentication, so you will pass your OAuth tokens instead of a user/password. Also note that the format of cURL commands varies between platforms, meaning that some samples included in this document may work on Linux or macOS but not on Windows. Become familiar with the formatting requirements of the utility on your chosen platform, and modify the sample commands accordingly.

Alternatively, you may issue the same commands from a GUI-based web service program such as Postman. When using Postman, POST your requests to the same URL as mentioned in the cURL commands (ending with “execution”) and set the authentication in the appropriate tab along with the request parameters.

Note:

POM uses different authentication methods depending on the version you have in your environment. If you are using POM version 19 or earlier, then follow the examples in the rest of this chapter using Basic authentication. If you are using POM v21 or later, you must use OAuth 2.0 authentication instead. When using OAuth 2.0 it is easier to use Postman as your batch management tool, at least until you automate the batch process for nightly usage.
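For example, on an OAuth-enabled version of POM, the same POST request used throughout this chapter would carry a bearer token in place of the ${user}:${password} credentials. The token value is represented here by a ${token} variable; obtaining and refreshing the token from IDCS or OCI IAM is outside the scope of these examples.

    curl -X POST \
      'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' \
      -H "Authorization: Bearer ${token}" \
      -H 'content-type:application/json' \
      -d '{
      "cycleName": "Adhoc",
      "flowName":"Adhoc",
      "processName":"HIST_ZIP_FILE_LOAD_ADHOC"
      }'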

Retail Insights ad hoc processing follows different flows depending on whether you are loading DAT or CSV files. For most new implementations, use CSV files. The summary table below highlights the main ad hoc flows that load history data into RI using the CSV files described in this document.

Data Load Activity / File Type | Stage Files to Database | Transform and Load Files
Calendar Initialization | CALENDAR_STG_CSV_LOAD_ADHOC | CALENDAR_LOAD_ADHOC
Initial Dimensions | LOAD_DIM_INITIAL_CSV_ADHOC + LOAD_EXT_DIM_INITIAL_SI_ADHOC | LOAD_DIM_INITIAL_ADHOC
Sales History | HIST_STG_CSV_SALES_LOAD_ADHOC | HIST_SALES_LOAD_ADHOC
Inventory History | HIST_STG_CSV_INV_LOAD_ADHOC | HIST_INV_LOAD_ADHOC
Price History | HIST_CSV_PRICE_LOAD_ADHOC

If you are providing any legacy DAT files, then the original set of staging load jobs from version 19 and earlier may still be used. For example, the comparable DAT file loaders for the above activities are:

  • CALENDAR_STG_LOAD_ADHOC

  • LOAD_DIM_INITIAL_STAGE_ADHOC

  • HIST_STG_SALES_LOAD_ADHOC

  • HIST_PRICE_LOAD_ADHOC

  • HIST_STG_INV_LOAD_ADHOC

After using either the DAT or CSV staging loads (but never both for the same data), the Transform and Load processes are the same for both sets of files.

If you choose to use Postman to run ad hoc processes, there are some general guidelines for interacting with POM services:

  1. Authentication must be configured for every service call, using either Basic authentication (for POM v19 and earlier) or OAuth 2.0 authentication (for POM v21+).

  2. In the Parameters, provide a key of skipVersion with a value of true.

  3. In the Body of the service calls, copy and paste the lines of text from the sample cURL commands in between (and including) the brackets, and specify the type as JSON.
  4. Ensure all commands are issued as a POST message type.

Note:

This same configuration can also be used to execute nightly batch flows on demand, instead of through the POM scheduler, by providing the necessary cycleName and flowName values for a nightly batch cycle.

Uploading ZIP Packages

When providing data to the platform, push the compressed files into Object Storage using a ZIP file format. Review the Configuration Tools section for details on how to interact with Object Storage. The ZIP file you use will depend on what data you are trying to load. Some common or required ZIP packages include:

Table 3-10 Platform ZIP File Usage

Filenames | Frequency | Notes
RIHIST_RMS_DATA.zip | Ad Hoc | Used for: historical files, such as sales and inventory history for the last 1-2 years; loading initial dimensions, such as calendar, merchandise, and location hierarchies, prior to history loads; and initial seeding loads.
RAP_DATA.zip, RI_RMS_DATA.zip, RI_CE_DATA.zip, RI_MFP_DATA.zip, RI_EXT_DATA.zip | Daily | Can be used for daily ongoing loads into Retail Insights, and for any daily data going to downstream applications through RI’s nightly batch. Different files can be used for different source systems.
ORASE_WEEKLY.zip | Weekly | Used for weekly batch files sent directly to RSP.
ORASE_INTRADAY.zip | Intraday | Used for intraday batch files sent directly to RSP.

Other supported file packages, such as output files and optional input files, are detailed in each module’s implementation guides. Except for Planning-specific integrations and customizations (which support additional integration paths and formats), it is expected that all files will be communicated to the platform using one of the filenames above.

Important:

Once you begin loading historical data with RIHIST_RMS_DATA.zip uploads, do not re-process the same dimension file more than once for the same business date. Also, do not run the ad hoc process for a dimension without a file present, as doing so effectively closes out all data and tells the system you no longer have any active products or locations for that date. After successfully loading a dimension and moving on to other files, it is recommended to disable the related POM jobs until you need them again.

User Setup

Users must have the necessary permissions in Oracle Identity Cloud Services (IDCS) or Oracle Cloud Infrastructure Identity and Access Management (OCI IAM) to perform all the implementation tasks. Before you begin, ensure that your user has at least the following groups:

Table 3-11 Implementation User Permissions

Access Needed | Groups Needed
Batch Execution (POM) | BATCH_ADMINISTRATOR_JOB, PROCESS_SERVICE_ADMIN_JOB
Database Monitoring (DV) | DVContentAuthor (if available), or BIAuthors_JOB (use in older environments where the DV roles are not added)
Retail Home | RETAIL_HOME_ADMIN, PLATFORM_SERVICES_ADMINISTRATOR
RI and RSP Configurations | ADMINISTRATOR_JOB
MFP Configurations | MFP_ADMIN

Calendar and Partition Setup

This is the first step that must be performed in all new environments, including projects that are implementing only RSP or MFP and not RI. Before beginning this step, ensure your configurations are complete per the initial configuration sections in the prior chapter. Your START_DT and END_DT variables must be set correctly for your calendar range (START_DT must be earlier than the first date in your calendar file), and the C_MODULE_ARTIFACT table must have all of the non-PLAN tables enabled for partitioning.
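One way to confirm these values is to query them from APEX or Data Visualizer before proceeding. The sketch below assumes the parameters are held as PARAM_NAME/PARAM_VALUE rows in C_ODI_PARAM_VW; verify the exact table and column names in your environment.

    SELECT param_name, param_value
      FROM c_odi_param_vw
     WHERE param_name IN ('START_DT', 'END_DT');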

If this is the first time running any batch programs through POM, then there are some important, one-time updates that are necessary before you begin this process:

  • Ensure that the POM program for SETUP_CTX_FILE_JOB is enabled in the Administration tab in both Nightly and Standalone schedules for all processes where it occurs.

  • Restart your POM schedules from the Monitoring tab and execute the SETUP_CTX_FILE_PROCESS_ADHOC process to verify it is successful.

  • If you encounter an error when running this job, open a Service Request (SR) with Oracle Support, as a configuration step may be missing in your environment.

Note:

These steps cannot be skipped, and errors from this job cannot be ignored. If this process is not able to run successfully, then your CTX files will be ignored during data loads. Once these processes are successful, continue on to the steps below.

  1. Provide a calendar file CALENDAR.csv through Object Storage or SFTP (packaged using the RIHIST_RMS_DATA.zip file). This file contains your business calendar and is used to generate the Fiscal Calendar in Retail Insights and generate calendar-based partitions. Include the CTX file as well, if this is the first file load.

  2. Execute the HIST_ZIP_FILE_LOAD_ADHOC batch process.

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc",
    "processName":"HIST_ZIP_FILE_LOAD_ADHOC"
    }'
    
  3. Execute the CALENDAR_STG_CSV_LOAD_ADHOC batch process using a cURL command similar to the URL format below (updating it for your actual server host):

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc",
    "flowName":"Adhoc", 
    "processName":"CALENDAR_STG_CSV_LOAD_ADHOC"}'
    
  4. Verify the data from your file was loaded to the target tables W_MCAL_PERIOD_DTS and W_MCAL_PERIODS_DS as needed. Refer to Sample Validation SQLs for sample queries you can use for this. Now run the process CALENDAR_LOAD_ADHOC using a similar command as above. This transforms the data and moves it into all internal RI tables.

  5. Validate all jobs in the calendar process are successful in POM. Validate that the tables W_MCAL_PERIOD_D and W_MCAL_DAY_D are populated with a full range of calendar dates based on your uploaded file (using DV to query the database). Do not begin the partitioning process until you are confident the calendar data is correctly loaded, as invalid calendar data can result in incorrect partitions.

  6. Begin the partitioning process using the cURL command below. There are two date parameters provided for this command:

    1. The 1st date value specifies the first day of partitioning. It can be before the first actual day of data being loaded, but you should not create excessive partitions as it can impact the performance of the system.

    2. The 2nd date (ETL business date) specifies the target business date, which is typically the day the system should be at after loading all history data and starting daily batches.

      curl -u ${user}:${password} -X POST
      'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
      "cycleName": "Adhoc", 
      "flowName":"Adhoc", 
      "processName":"CREATE_PARTITION_ADHOC",
      "requestParameters":"jobParams.CREATE_PARTITION_PRESETUP_
      JOB=2018-12-30,jobParams.ETL_BUSINESS_DATE_JOB=2021-02-06"
      }'
      

The partitioning process will take some time (~5 hours per 100k partitions) to complete if you are loading multiple years of history, as this may require 100,000+ partitions to be created across the data model. This process must be completed successfully before continuing with the data load process. Contact Oracle Support if there are any questions or concerns. Partitioning can be performed after some data has been loaded; however, it will take significantly longer to execute, as it has to move all of the loaded data into the proper partitions.

If your historical time range is very long, the partitioning process may require multiple executions. It has the ability to run multiple times and resume the partitioning from where it left off. An easy way to see if all partitions are created is to run the process and see if it completes right away (in under a minute). If the process is still taking a long time to run on the second or third execution, then it hasn’t completed all partitions yet.

You can also estimate the number of partitions needed based on the details below:

  • RI needs to partition around 120 week-level tables, so take the number of weeks in your history time window multiplied by this number of tables.

  • RI needs to partition around 160 day-level tables, so take the number of days in your history time window multiplied by this number of tables.

For a 3-year history window, this results in: 120*52*3 + 160*365*3 = 193,920 partitions. If you wish to confirm your counts with Oracle before proceeding, you can raise a Service Request to ask for the results of the following two queries, or execute them from DV or APEX:

select count(*) cnt from dba_tab_partitions where table_owner = 'RADM01' and table_name like 'W_RTL_%';
select table_name, count(*) cnt from dba_tab_partitions where table_owner = 'RADM01' and table_name like 'W_RTL_%' group by table_name;

The queries should return a value roughly equal to your expected counts (it will not be exact, as the data model will add/remove tables over time and some tables come with pre-built partitions or default MAXVALUE partitions).

Loading Data from Files

When history and initial seed data comes from flat files, use the following tasks to upload them into RAP:

Table 3-12 Flat File Load Overview

Activity | Description
Initialize Dimensions | Initialize dimensional data (products, locations, and so on) to provide a starting point for historical records to join with. Separate initial load processes are provided for this task.
Load History Data | Run history loads in one or multiple cycles depending on the data volume, starting from the earliest date in history and loading forward to today.
Reload Dimensions | Reload dimensional data as needed throughout the process to maintain correct key values for all fact data. Dimensional files can be provided in the same package with history files and ad hoc processes run in sequence when loading.
Seed Positional Facts | Seed initial values for positional facts using full snapshots of all active item/locations in the source system. This must be loaded for the date prior to the start of nightly batches to avoid gaps in ongoing data.
Run Nightly Batches | Nightly batches must be started from the business date after the initial seeding was performed.

Completing these steps will load all of your data into the Retail Insights data model, which is required for all implementations. From there, proceed with moving data downstream to other applications as needed, such as Science modules and Merchandise Financial Planning.

Note:

All steps are described sequentially, but some activities can be executed in parallel. For example, you may load dimensions into RI, then on to RSP and MFP, before loading any historical fact data. While historical fact data is loaded, other activities can occur in MFP, such as the domain build and configuration updates.

Initialize Dimensions

Loading Dimensions into RI

You cannot load any fact data into the platform until the related dimensions have been processed and verified. The processes in this section are provided to initialize the core dimensions needed to begin fact data loads and verify file formats and data completeness. Some dimensions which are not used in history loads are not part of the initialization process, as they are expected to come in the nightly batches at a later time.

For the complete list of dimension files and their file specifications, refer to the Retail Insights Interfaces Guide on My Oracle Support. The steps below assume you have enabled or disabled the appropriate dimension loaders in POM per your requirements. Do not start the processes below before reviewing the POM ad hoc flows and disabling any file loads/jobs that are not a part of the current ZIP upload package. Also note that the process flow examples assume CSV file usage. Additional processes are used for the DAT files that go into legacy RI interfaces as defined in Executing Ad Hoc Processes.

  1. Provide your dimension files through Object Storage (packaged using the RIHIST_RMS_DATA.zip file). All files should be included in a single zip file upload.

  2. Execute the HIST_ZIP_FILE_LOAD_ADHOC batch process.

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc",
    "processName":"HIST_ZIP_FILE_LOAD_ADHOC"
    }'
    
  3. Execute the LOAD_CURRENT_BUSINESS_DATE_ADHOC batch process to set the load date to one day before the start of your history load timeframe.

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName":"Adhoc", "flowName":"Adhoc",
    "processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC", "requestParameters": "jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
    }'
    
  4. Execute the LOAD_DIM_INITIAL_CSV_ADHOC and LOAD_EXT_DIM_INITIAL_SI_ADHOC processes to stage data into the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.

  5. Execute the LOAD_DIM_INITIAL_ADHOC process to transform and load your staged dimension files. It is not required to send all files listed in the program parameters; refer to the Interfaces Guide to identify which files you wish to load. The ETL date on the command should be the same one used in the prior steps.

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc", 
    "processName":"LOAD_DIM_INITIAL_ADHOC", 
    "requestParameters": "jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
    }'
    

If any jobs fail during this load process, you may need to alter one or more dimension data files, re-send them in a new zip file upload, and re-execute the programs. Only after all dimension files have been loaded can you proceed to history loads for fact data. If you do need to load the same file multiple times, run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these steps. This will remove any load statuses from the prior run and give you a clean start on the next execution.
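The cleanup process uses the same cURL pattern as the other ad hoc processes in this chapter, for example (using the same placeholder host as the earlier commands):

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc",
    "flowName":"Adhoc",
    "processName":"C_LOAD_DATES_CLEANUP_ADHOC"
    }'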

Note:

You must not reload the same foundation dimension file on the same business date if it was already imported successfully. You must move the business date forward before loading these files again, even if the data has not changed, as the batch programs are designed around daily execution (one load per file per business day).

As a best practice, you should disable all POM jobs in the LOAD_DIM_INITIAL_ADHOC process except the ones you are providing new files for. This will prevent any mistakes, such as running the Organization loads without having a file present (which can result in closing all records in the database). Once you have loaded data through an interface, do not re-run that interface without data present, unless you have advanced the business date and provided a new file.

Loading Dimensions to Other Applications

Once you have successfully loaded dimension data, you may choose to pause the data load process and push the dimensions to the Science Platform or Merchandise Financial Planning (if applicable). This allows parallel data validation and domain build activities to occur while you continue loading data. Review the sections Sending Data to Science and Sending Data to Planning for details on the POM jobs you may execute for this.

The main benefits of this order of execution are:

  1. Validating the hierarchy structure from the Science interface provides an early view for the customer to see some application screens with their data.

  2. MFP performs the domain build activity without waiting for history file loads to complete, and can start to do other planning implementation activities in parallel to the history loads.

  3. Data can be made available for custom development or validations in Innovation Workbench.

Load History Data

Historical fact data is a core foundational element to all solutions in the Retail Analytics Platform. As such, this phase of the implementation can take the longest amount of time during the project, depending on the volumes of data, the source of the data, and the amount of transformation and validation that needs to be completed before and after loading it into the platform.

Before loading any large history files, such as sales and inventory, update the multi-threading parameter (LOC_NUM_OF_THREAD) in the C_ODI_PARAM table for these processes. In general, a good starting value for multi-threading is 8 threads, after which you can monitor the job performance of each thread and add or remove threads as needed. Thread-level runtimes can be found in the C_LOAD_DATES table, which is viewable from APEX or the Control & Tactical Center.

The scenario names below are most often used in the history load processes and should have their multi-threading values checked and adjusted in C_ODI_PARAM; a sample update statement follows the list.

Note:

All related jobs MUST use the same thread value, or data could be missed. For example, all INVPOSITION scenarios must use the same value.

SIL_RETAIL_SALESTRANSACTIONFACT
SIL_RETAILINVPOSITIONFACT
SIL_RETAILINVRECEIPTSFACT
PLP_RETAILINVPOSITIONITLCGMHAGGREGATE
PLP_RETAILINVPOSITIONITLCWKAGGREGATE
PLP_RETAILINVPOSITIONSCLCDYCURRRECLASSAGGREGATE
PLP_RETAILINVPOSITIONSCLCDYWKCURRRECLASSAGGREGATE
PLP_RETAILSALESITLCDYTEMPLOAD
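The following is a minimal sketch of that update, assuming PARAM_NAME, PARAM_VALUE, and SCENARIO_NAME columns on C_ODI_PARAM. Confirm the actual column names from the Control & Tactical Center before running it, and keep the value identical for every related scenario, as noted above.

    UPDATE c_odi_param
       SET param_value = '8'
     WHERE param_name = 'LOC_NUM_OF_THREAD'
       AND scenario_name IN ('SIL_RETAILINVPOSITIONFACT',
                             'PLP_RETAILINVPOSITIONITLCGMHAGGREGATE',
                             'PLP_RETAILINVPOSITIONITLCWKAGGREGATE',
                             'PLP_RETAILINVPOSITIONSCLCDYCURRRECLASSAGGREGATE',
                             'PLP_RETAILINVPOSITIONSCLCDYWKCURRRECLASSAGGREGATE');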

Inventory Position History Load

RI supports the loading of inventory position history using full, end-of-week snapshots. These weekly snapshots may be provided one week at a time or as multiple weeks in a single file. The data must be loaded sequentially from the earliest week to the latest week with no gaps or out-of-order periods. For example, you cannot start with the most recent inventory file and go backward; you must start from the first week of history.

Also, note that a variety of C_ODI_PARAM settings are available to disable inventory features that are not required for your implementation. All of the following parameters can be changed to a value of N during the history load and enabled later for daily batches, as it will greatly improve the load times:

  • RI_INVAGE_REQ_IND – Disables calculation of first/last receipt dates and inventory age measures. Receipt date calculation is used in RI and also Offer Optimization (as a method of determining entry/exit dates for items).

  • RA_CLR_LEVEL – Disables the mapping of clearance event IDs to clearance inventory updates. Used only in RI reporting.

  • RI_PRES_STOCK_IND – Disables use of replenishment data for presentation stock to calculate inventory availability measures. Used only in RI reporting.

  • RI_BOH_SEEDING_IND – Disables the creation of initial beginning-on-hand records so analytics has a non-null starting value in the first week. Used only in RI reporting.

  • RI_MOVE_TO_CLR_IND – Disables calculation of move-to-clearance inventory measures when an item/location goes into or out of clearance status. Used only in RI reporting.

  • RI_MULTI_CURRENCY_IND – Disables recalculation of primary currency amounts if you are only using a single currency. Should be enabled for multi-currency, or disabled otherwise.

The following steps describe the process for loading inventory history:

  1. Create the file INVENTORY.csv containing one or more weeks of inventory snapshots in chronological order, along with your CTX file to define the columns that are populated. The DAY_DT value on every record must be an end-of-week date (Saturday by default). The only exception is the final week of history, which may end in the middle of the week, but only if daily batches will start mid-week from the following date.

  2. Upload the history file to Object Storage using the RIHIST_RMS_DATA.zip file.

  3. Update column HIST_LOAD_LAST_DATE on the table C_HIST_LOAD_STATUS to be the date matching the last day of your overall history load (can be later than the dates in the current file). This can be done from the Control & Tactical Center as discussed in the prior sections for initial configuration.

  4. Execute the HIST_ZIP_FILE_LOAD_ADHOC batch process.

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc",
    "processName":"HIST_ZIP_FILE_LOAD_ADHOC"
    }'
    
    
  5. Execute the HIST_STG_CSV_INV_LOAD_ADHOC process to stage your data into the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.

  6. Execute the HIST_INV_LOAD_ADHOC batch process to load the file data. The process loops over the file one week at a time until all weeks are loaded.

    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc", 
    "processName":"HIST_STG_CSV_INV_LOAD_ADHOC"
    }'
    
    curl -u ${user}:${password} -X POST
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc", 
    "processName":"HIST_INV_LOAD_ADHOC"
    }'
    

This process can be repeated as many times as needed to load all history files for the inventory position. Remember that inventory cannot be loaded out of order, and you cannot go back in time to reload files after you have processed them. If you load a set of inventory files and then find issues during validation, erase the tables in the database and restart the load with corrected files.

If your inventory history has invalid data, you may get rejected records, and the batch process will fail with a message that rejects exist in the data. If this occurs, you cannot proceed until you resolve your input data, because rejections on positional data MUST be resolved for one date before moving on to the next. If you move on to the next date without reprocessing any rejected data, that data is lost and cannot be loaded at a later time without starting over. When this occurs:

  1. Review the input data for issues such as items that do not exist in your foundation data, and correct or delete the offending data as needed.

  2. Run the ad hoc process W_RTL_REJECT_DIMENSION_TMP_ADHOC, which will analyze the rejected records for you and call out all primary key mismatches which would result in rejections. The results are written to table W_RTL_REJECT_DIMENSION_TMP. You have the choice to fix the data and reload new files or proceed with the current file.

  3. If you want to ignore the rejected records and treat them as invalid data (that you will not be reloading at any future time), run REJECT_DATA_CLEANUP_ADHOC, which moves all rejected dimensions into a skip list. The list is loaded to the table C_DISCARD_DIMM. Skipped identifiers are ignored for the current file load and then reset at the start of the next run.

  4. If you want to fix your files, stop here and reload your dimensions and/or fact data following the normal process flows.

  5. When you are ready to continue the load, update the C_HIST_LOAD_STATUS table to set HIST_LOAD_STATUS back to INPROGRESS for the load that failed. Then run the inventory history load process again; it should resume from the step and load date where it stopped processing.

  6. If there are leftover records in the E$_W_RTL_INV_IT_LC_DY_TMP1 and E$_W_RTL_INV_IT_LC_DY_TMP tables, you can use REJECT_DATA_CLEANUP_ADHOC to purge them (a sample invocation is shown after this list).
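
Both of the rejection-handling processes referenced above (W_RTL_REJECT_DIMENSION_TMP_ADHOC and REJECT_DATA_CLEANUP_ADHOC) are standard POM ad hoc processes, so they can be triggered with the same execution API pattern used throughout this chapter. The following is a minimal sketch, assuming the same placeholder hostname and Adhoc cycle/flow values as the earlier examples:

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc",
    "flowName":"Adhoc",
    "processName":"W_RTL_REJECT_DIMENSION_TMP_ADHOC"
    }'

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc",
    "flowName":"Adhoc",
    "processName":"REJECT_DATA_CLEANUP_ADHOC"
    }'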

Log a Service Request with Oracle Support for assistance with any of the above steps if you are having difficulties with loading inventory history or dealing with rejected records.

Sales History Load

RI supports the loading of sales transaction history using actual transaction data or daily/weekly sales totals. If loading data at an aggregate level, all key columns (such as the transaction ID) are still required to have some value. The sales data may be provided for a range of dates in a single file. The data should be loaded sequentially from the earliest week to the latest week but, unlike inventory position, you may have gaps or out-of-order loads, because the data is not stored positionally.

If you are not loading sales history for Retail Insights specifically, then there are many aggregation programs that can be disabled in the POM standalone process. These programs populate additional tables used only in BI reporting. The following list of jobs can be disabled if you are not using RI (effectively all aggregate “A” tables and jobs except for the item/location/week level can be turned off):

  • W_RTL_SLS_IT_LC_DY_A_JOB

  • W_RTL_SLS_IT_LC_GMH_A_JOB

  • W_RTL_SLS_SC_LC_DY_A_JOB

  • W_RTL_SLS_SC_LC_WK_A_JOB

  • W_RTL_SLS_CL_LC_DY_A_JOB

  • W_RTL_SLS_CL_LC_WK_A_JOB

  • W_RTL_SLS_DP_LC_DY_A_JOB

  • W_RTL_SLS_DP_LC_WK_A_JOB

  • W_RTL_SLS_IT_DY_TMP_JOB

  • W_RTL_SLS_IT_DY_A_JOB

  • W_RTL_SLS_IT_WK_A_JOB

  • W_RTL_SLS_SC_DY_A_JOB

  • W_RTL_SLS_SC_WK_A_JOB

  • W_RTL_SLS_LC_DY_TMP_JOB

  • W_RTL_SLS_LC_DY_A_JOB

  • W_RTL_SLS_LC_WK_A_JOB

  • W_RTL_SLS_IT_LC_DY_SN_TMP_JOB

  • W_RTL_SLS_IT_LC_DY_SN_A_JOB

  • W_RTL_SLS_IT_LC_WK_SN_A_JOB

  • W_RTL_SLS_IT_DY_SN_A_JOB

  • W_RTL_SLS_IT_WK_SN_A_JOB

  • W_RTL_SLS_SC_LC_DY_CUR_TMP_JOB

  • W_RTL_SLS_SC_LC_DY_CUR_A_JOB

  • W_RTL_SLS_SC_LC_WK_CUR_A_JOB

  • W_RTL_SLS_CL_LC_DY_CUR_A_JOB

  • W_RTL_SLS_DP_LC_DY_CUR_A_JOB

  • W_RTL_SLS_CL_LC_WK_CUR_A_JOB

  • W_RTL_SLS_DP_LC_WK_CUR_A_JOB

  • W_RTL_SLS_SC_DY_CUR_A_JOB

  • W_RTL_SLS_CL_DY_CUR_A_JOB

  • W_RTL_SLS_DP_DY_CUR_A_JOB

  • W_RTL_SLS_SC_WK_CUR_A_JOB

  • W_RTL_SLS_CL_WK_CUR_A_JOB

  • W_RTL_SLS_DP_WK_CUR_A_JOB

  • W_RTL_SLSPR_PC_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PP_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PE_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PP_CUST_LC_DY_A_JOB

  • W_RTL_SLSPR_PC_CS_IT_LC_DY_A_JOB

  • W_RTL_SLSPR_PC_HH_WK_A_JOB

After confirming the list of enabled sales jobs, perform the following steps:

  1. Create the file SALES.csv containing one or more days of sales data along with a CTX file defining the columns which are populated.

  2. Upload the history files to Object Storage using the RIHIST_RMS_DATA.zip file.

  3. Execute the HIST_ZIP_FILE_LOAD_ADHOC batch process.

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc",
    "processName":"HIST_ZIP_FILE_LOAD_ADHOC"
    }'
    
  4. Execute the HIST_STG_CSV_SALES_LOAD_ADHOC process to stage the data in the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.

  5. Execute the HIST_SALES_LOAD_ADHOC batch process to load the data. If no data is available for certain dimensions used by sales, the load process can seed those dimensions from the history file automatically. Enable seeding for all of the dimensions according to the initial configuration guidelines; providing the data in other files is optional.

    Note:

    Several supplemental dimensions are involved in this load process, which may or may not be provided depending on your data requirements. For example, sales history data that includes customer identifiers may also require data for the customer dimension.

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc", 
    "processName":"HIST_STG_CSV_SALES_LOAD_ADHOC"
    }'
    
    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", 
    "flowName":"Adhoc", 
    "processName":"HIST_SALES_LOAD_ADHOC"
    }'

This process can be repeated as many times as needed to load all history files for the sales transaction data.

Note:

Data cannot be reloaded for the same records multiple times, as sales data is treated as additive in Retail Insights. Create a Service Request with Oracle Support for assistance with clearing out incorrect or unwanted data in your environment.

Price History Load

Certain applications, such as Promotion and Markdown Optimization, require price history to perform their calculations. Price history is similar to inventory in that it is positional data, but it can be loaded in a more compressed manner because of the extremely high data volumes involved. The recommended approach for price history is as follows:

  1. Create an initial, full snapshot of price data on the first day of history and load this file into the platform using the history processes.

  2. Create additional files containing price changes for a period of time (such as a month) with the appropriate price change type codes and effective day dates for those changes. Load each file one at a time using the history processes.

  3. The history procedure will iterate over the provided files day by day, starting from the first day of history, up to the last historical load date specified in C_HIST_LOAD_STATUS for the pricing fact. For each date, the procedure checks the staging data for effective price change records and loads them, then moves on to the next date.

The process to perform price history loads uses the PRICE.csv file and the HIST_CSV_PRICE_LOAD_ADHOC batch process. Just like inventory, you must load the data sequentially; you cannot back-post price changes to earlier dates than what you have already loaded. Refer to Data File Generation for complete details on how to build this file.
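
As with the other history loads in this chapter, the price history process can be triggered through the POM execution API. The following is a minimal sketch, assuming the PRICE.csv file has already been uploaded and unpacked through HIST_ZIP_FILE_LOAD_ADHOC, and using the same placeholder hostname as the earlier examples:

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc",
    "flowName":"Adhoc",
    "processName":"HIST_CSV_PRICE_LOAD_ADHOC"
    }'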

Other History Loads

While sales and inventory are the most common facts to load history for, you may also want to load history for other areas such as receipts and transfers. Separate ad hoc history load processes are available for the following fact areas:

  • HIST_CSV_ADJUSTMENTS_LOAD_ADHOC

  • HIST_CSV_INVRECEIPTS_LOAD_ADHOC

  • HIST_CSV_MARKDOWN_LOAD_ADHOC

  • HIST_CSV_INVRTV_LOAD_ADHOC

  • HIST_CSV_TRANSFER_LOAD_ADHOC

  • HIST_CSV_DEAL_INCOME_LOAD_ADHOC

  • HIST_CSV_ICMARGIN_LOAD_ADHOC

  • HIST_CSV_INVRECLASS_LOAD_ADHOC

All of these interfaces deal with transactional data (not positional), so you may use them at any time to load history files for each area.
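
Each of these processes follows the same invocation pattern as the other ad hoc history loads. For example, the following sketch triggers the receipts history load, assuming the corresponding history file has already been uploaded through the ZIP file load process and using the same placeholder hostname:

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc",
    "flowName":"Adhoc",
    "processName":"HIST_CSV_INVRECEIPTS_LOAD_ADHOC"
    }'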

Note:

These processes are intended to support history data for downstream applications such as Science and Planning, so the tables populated by each process by default should satisfy the data needs of those applications.

Reload Dimensions

It is common to reload dimensions at various points throughout the history load, or even in sync with every history batch run. Ensure that your core dimensions, such as the product and location hierarchies, are up to date and aligned with the historical data being processed. To reload dimensions, follow the same process described in the Initial Dimension Load steps, ensuring that the current business load date in the system is on or before the date in history when the dimensions will be required. For example, if you are loading history files in a monthly cadence, ensure that new product and location data required for the next month has been loaded no later than the first day of that month, so it is effective for all dates in the history data files. Do not reload dimensions for the same business date multiple times; advance the date to some point in the future each time you want to load a new set of changed dimension files.
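
One way to advance the business date ahead of a dimension reload is the LOAD_CURRENT_BUSINESS_DATE_ADHOC process that is also used for seeding later in this chapter, assuming your implementation uses that process to manage the load date during history. A minimal sketch, using an illustrative date of 2017-06-04 and the same placeholder hostname as the other examples:

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC",
    "requestParameters": "jobParams.ETL_BUSINESS_DATE_JOB=2017-06-04"
    }'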

It is also very important to understand that the history load procedures cannot handle reclassifications that occurred in source systems while you are loading history files. For example, if you are using current dimension files from the source system to process historical data, and products have been reclassified so that the current hierarchy is no longer correct for the historical time periods, then your next history load may place sales or inventory under the new classifications rather than the ones that were in effect at that point in history. For this reason, avoid reclassifications during history load activities if at all possible, unless you can maintain historical dimension snapshots that accurately reflect the historical data.

Seed Positional Facts

Once sales and inventory history have been processed, you will need to seed the positional facts you wish to use. Seeding a fact means loading a full snapshot of the data for all active item/locations, which establishes a baseline position for every possible record before nightly batches start loading incremental updates to those values. Seeding of positional facts should only occur once history data is complete and daily batch processing is ready to begin. It is normal to test this process in a pre-production environment first, before performing it for the final production cutover.

Initial seed loads can take a long time to execute, as each file contains all possible item/location combinations that are active in the source systems. Plan for 2-3 days to execute these processes until you have performed them once and have a better estimate for the total time required. Seed loads should also be done for a week-ending date, so that you do not have a partial week of daily data in the system when you start daily batches.

If you did not previously disable the optional inventory features in C_ODI_PARAM (parameters RI_INVAGE_REQ_IND, RA_CLR_LEVEL, RI_PRES_STOCK_IND, RI_BOH_SEEDING_IND, RI_MOVE_TO_CLR_IND, and RI_MULTI_CURRENCY_IND) then you should review these settings now and set all parameters to “N” if the functionality is not required. Once this is done, follow the steps below to perform positional seeding:

  1. Create the files containing your initial full snapshots of positional data. It may be one or more of the following:

    • PRICE.csv

    • COST.csv

    • INVENTORY.csv

    • ORDER_DETAIL.csv (ORDER_HEAD.csv should already be loaded using the dimension load process)

  2. Upload the files to Object Storage using the RIHIST_RMS_DATA.zip file.

  3. Execute the LOAD_CURRENT_BUSINESS_DATE_ADHOC batch process to set the load date to be the next week-ending date after the final date in your history load.

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC", 
    "requestParameters": "jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
    }'
    
  4. Execute the ad hoc seeding batch processes depending on which files have been provided.

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_PRICE_IT_LC_DY_F_PROCESS_ADHOC"
    }'
    
    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_NCOST_IT_LC_DY_F_PROCESS_ADHOC"
    }'
    
    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_BCOST_IT_LC_DY_F_PROCESS_ADHOC"
    }'
    
    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC"
    }'
    
    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc",
    "processName":"SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC"
    }'
    

Once all initial seeding is complete and the data has been validated, you are ready to perform a regular batch run. Provide the data files expected for a full batch, such as RAP_DATA.zip or RI_RMS_DATA.zip for foundation data, RI_MFP_DATA.zip for planning data (for RI reporting), and any Science Platform files in ORASE_WEEKLY.zip. If you are sourcing daily data from RMS, then you need to ensure that the RDE batch flow is configured to run nightly along with the RAP batch schedule. Batch dependencies between RDE and RI should be checked and enabled if they are not already turned on.

From this point on, the nightly batch takes care of advancing the business date and loading all files, assuming that you want the first load of nightly data to occur the day after seeding. The following diagram summarizes a potential set of dates and activities using the history and seeding steps described in this chapter:

Diagram of History and Seeding Steps

Note:

The sequential nature of this flow of events must be followed for positional facts (for example, inventory) but not for transactional facts (such as sales). Transactional data supports posting for dates other than what the current system date is, so you can choose to load sales history at any point in this process.

Run Nightly Batches

As soon as initial seeding is performed, prepare for starting nightly batch runs in full. Nightly batch schedules can be configured in parallel with the history load processes using a combination of the Customer Modules Management (in Retail Home) and the POM administration screens. It is not recommended to configure the nightly jobs manually in POM, as there are over 500 batch programs; choosing which to enable can be a time-intensive and error-prone activity. Customer Modules Management greatly simplifies this process and preserves dependencies and required process flows. Batch Orchestration describes the batch orchestration process and how you can configure batch schedules for nightly execution.

Once you move to nightly batches, you may also want to switch interfaces from Full to Incremental loading of data. Several RI interfaces, such as the Product dimension, can be loaded incrementally, sending only the changed records every day instead of a full snapshot. These options use the IS_INCREMENTAL flag in the C_ODI_PARAM table and can be accessed from the Control & Tactical Center. If you are unsure of which flags you want to change, refer to the Retail Insights Implementation Guide for detailed descriptions of all parameters.

As part of nightly batch uploads, ensure that the parameter file RA_SRC_CURR_PARAM_G.dat is included in each ZIP package, and that it is being automatically updated with the current business date for that set of files.

Sending Data to Science

All Science modules leverage a common batch infrastructure to initialize the core dataset, followed by ad hoc, application-specific programs to generate additional data as needed. Before loading any data into a Science module, it is necessary to complete initial dimension loads into RI and validate that core structures (calendar, products, locations) match what you expect to see. You may also perform some or all of the history loads before extracting any data to Science. Once you are comfortable with the data that has been loaded in, leverage the following jobs to move data into one or more Science applications.

Table 3-13 Extracts for Science

POM Process Name Usage Details

INPUT_FILES_ADHOC_PROCESS

Receive inbound zip files intended for Science, archive and extract the files.

RSE_MASTER_ADHOC_PROCESS

Foundation data movement from RI to Science, including core hierarchies and dimensional data. Accepts many different parameters to run specific steps in the load process.

<app>_MASTER_ADHOC_PROCESS

Each Science module, such as SPO or IO, has a master job for extracting and loading data that is required for that application, in addition to the RSE_MASTER processes.

Because the Science Platform ad hoc procedures are exposed through a single job in POM, they are not triggered in the same way as the RI procedures. Science programs accept a number of single-character codes representing different steps in the data loading process. These codes can be provided directly in POM by editing the Parameters of the job in the Batch Monitoring screen and then executing the job through the user interface.

RSP Standalone Batch Jobs Actions Menu

For example, this string of parameters will move all dimension data from RI to RSP:

Parameters for Moving Dimension Data from RI to RSP

Additional parameters are available when moving periods of historical data, such as inventory and sales:

Parameters for Moving Periods of Historical Data

A typical workflow for moving core foundation data into Science is:

  1. Load the core foundation files (like Calendar, Product, and Organization) into RI.

  2. Use the RSE_MASTER_ADHOC_PROCESS to move those same datasets to RSP, providing specific flag values to only run the needed steps.

  3. Load some of your history files for Sales to validate the inputs.

  4. Load the same range of sales to RSP using the sales load with optional from/to date parameters.

  5. Repeat the previous two steps until all sales data is loaded into both RI and RSP.

    Performing the process iteratively gives you early opportunities to find issues in the data before you have loaded everything, but it is not required. You can load all of the data into RSP at one time.

Follow the same general flow for the other application-specific, ad hoc flows into the Science modules. For a complete list of parameters in each program, refer to the RI and Science Operations Guide.

Sending Data to Planning

If a Planning module is being implemented, then additional RI jobs should be executed as part of the initial and nightly batch runs. These jobs are available through ad hoc calls, and the nightly jobs are included in the RI nightly schedule. Review the list below for more details on the core Planning extracts available.

Table 3-14 Extracts for Planning

POM Job Name Usage Details

W_PDS_PRODUCT_D_JOB

Exports a full snapshot of Product master data (for non-pack items only) and associated hierarchy levels.

W_PDS_ORGANIZATION_D_JOB

Exports a full snapshot of Location master data (for stores and warehouses only) and associated hierarchy levels.

W_PDS_CALENDAR_D_JOB

Exports a full snapshot of Calendar data at the day level and associated hierarchy levels.

Note:

While RI exports the entire calendar, PDS will only import 5 years around the RPAS_TODAY date (current year +/- 2 years).

W_PDS_EXCH_RATE_G_JOB

Exports a full snapshot of exchange rates.

W_PDS_DIFF_D_JOB

Exports a full snapshot of Differentiators such as Color and Size.

W_PDS_UDA_D_JOB

Exports a full snapshot of User-Defined Attributes.

W_PDS_DEALINC_IT_LC_WK_A_JOB

Incremental extract of deal income data (transaction codes 6 and 7 from RMFCS) posted in the current business week.

W_PDS_PO_ONORD_IT_LC_WK_A_JOB

Incremental extract of future on-order amounts for the current business week, based on the expected OTB date.

W_PDS_INV_IT_LC_WK_A_JOB

Incremental extract of inventory positions for the current business week. Inventory is always posted to the current week; there are no back-posted records.

W_PDS_SLS_IT_LC_WK_A_JOB

Incremental extract of sales transactions posted in the current business week (includes back-posted transactions to prior transaction dates).

W_PDS_INVTSF_IT_LC_WK_A_JOB

Incremental extract of inventory transfers posted in the current business week (transaction codes 30, 31, 32, 33, 37, 38 from RMFCS).

W_PDS_INVRC_IT_LC_WK_A_JOB

Incremental extract of inventory receipts posted in the current business week. Only includes purchase order receipts (transaction code 20 from RMFCS).

W_PDS_INVRTV_IT_LC_WK_A_JOB

Incremental extract of returns to vendor posted in the current business week (transaction code 24 from RMFCS).

W_PDS_INVADJ_IT_LC_WK_A_JOB

Incremental extract of inventory adjustments posted in the current business week (transaction codes 22 and 23 from RMFCS).

W_PDS_SLSWF_IT_LC_WK_A_JOB

Incremental extract of wholesale and franchise transactions posted in the current business week (transaction codes 82, 83, 84, 85, 86, 88 from RMFCS).

W_PDS_MKDN_IT_LC_WK_A_JOB

Incremental extract of markdowns posted in the current business week (transaction codes 11, 12, 13, 14, 15, 16, 17, 18 from RMFCS).

W_PDS_INV_IT_LC_WK_A_INITIAL_JOB

Initial history extract for inventory position data based on the timeframe established in C_SOURCE_CDC.

The PDS jobs are linked with several ad hoc processes in POM, providing you with the ability to extract specific datasets on-demand as you progress with history and initial data loads. The table below summarizes the ad hoc processes, which can be called using the standard methods such as cURL or Postman.

Table 3-15 RI Ad Hoc Processes for Planning

POM Process Name Usage Details

LOAD_PDS_DIMENSION_PROCESS_ADHOC

Groups all of the dimension (D/G table) extracts for PDS into one process for ad hoc execution. Disable individual jobs in POM to run only some of them.

LOAD_PDS_FACT_PROCESS_ADHOC

Groups all of the fact (F/A table) extracts for PDS into one process for ad hoc execution. Disable individual jobs in POM to run only some of them.

LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC

History extract process for inventory positions, run only to pull a full snapshot of inventory from RI for one or multiple weeks of data. You must set the start and end dates for extraction in the C_SOURCE_CDC table before running this process.

C_LOAD_DATES_CLEANUP_ADHOC

Purge the successful job records from C_LOAD_DATES, which is necessary when you are running the same jobs multiple times per business date.

All of the PDS fact jobs use the configuration table C_SOURCE_CDC to track the data that has been extracted. On the first run of any incremental process, the job extracts all available data in a single run. From that point forward, the extract incrementally loads only the data that has been added or modified since the last extract. The initial inventory job requires start/end dates for extraction, so you must update C_SOURCE_CDC from the Control & Tactical Center before running it. For all other jobs, the extract dates are written to C_SOURCE_CDC alongside the extract table name after each execution and can be overwritten as needed when doing multiple loads or re-running the same time period. If you run the same process more than once, use C_LOAD_DATES_CLEANUP_ADHOC to reset the run statuses before the next run.
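
Because these are standard POM ad hoc processes, they can be triggered with cURL (or Postman) using the same execution API pattern shown throughout this chapter. A minimal sketch that runs the fact extracts and then clears the run statuses, assuming the same placeholder hostname as the earlier examples:

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc",
    "processName":"LOAD_PDS_FACT_PROCESS_ADHOC"
    }'

    curl -u ${user}:${password} -X POST \
    'https://xxx-yyy-rsi-ris.oracleindustry.com/ProcessServices/services/private/executionEngine/schedules/RI/execution?skipVersion=' -H 'content-type:application/json' -d '{
    "cycleName": "Adhoc", "flowName":"Adhoc",
    "processName":"C_LOAD_DATES_CLEANUP_ADHOC"
    }'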

After the planning data has been extracted from RI, the Planning systems use the same processes to extract both initial and incremental data for each interface. As long as RI has exported historical data for the current run, the regular interfaces scheduled to run using POM will pull the historical or incremental data specific to that interface into PDS.

Note:

Before loading dimension data into Planning, you must configure the value of RPAS_TODAY and do any other initial configurations as specified in the MFP or RPAS CE Implementation Guides.

Generating Forecasts

Before you can complete an MFP implementation, you must set up and run forecasting within the Science platform. Review the steps below for the initial setup of Forecasting:

  1. In Manage Forecast Configurations in the RSP UI, start by setting up a run type in the Setup train stop.

  2. Click the + icon above the table and fill in the fields in the popup. For MFP forecasting, the forecast method should be selected as Automatic Exponential Smoothing.

  3. Create a run type for each forecast measure/forecast intersection combination that is required for MFP.

  4. Create test runs in the Test train stop once you are done setting up the run types:

    1. Click a run type in the top table, then click on the + icon in the bottom table to create a run.

    2. If necessary, change the configuration parameters for the estimation process and the forecast process in their respective tabs in the Create Run popup.

      For example, if you want to test a run using the Bayesian method, edit the Estimation Method parameter in the Estimation tab using the edit icon above the table.

    3. After modifying and reviewing the configuration parameters, click the Submit button to start the run.

  5. Once the run is complete, the status changes to Forecast Generation Complete.

Test runs are optional. In addition, you will need to modify and review the run type's configuration, activate the run type, enable auto-approve, and map the run type to the downstream application (in this case, MFP). In the Manage train stop, select a row, click Edit Configuration Parameters, and edit the estimation and forecast parameters as needed. When you are done, go to the Review tab, click Validate, and then close the tab.

Note:

If the run type is active, you will only be able to view the parameters. To edit the parameters, the run type must be inactive.

To activate the run type and enable the auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to MFP, go to the Map train stop and click the + icon to create a new mapping.

When configuring forecasts for the MFP base implementation, the following list of forecast runs may be required, and you will want to configure and test each run type following the general workflow above. Additional runs can be added to satisfy your MFP implementation requirements.

Note:

The “Channel” level in MFP is often referred to as the “Area” level in RI and RSP, so be sure to select the levels that align with your hierarchy.

All of the forecast runs listed below use the Auto ES method with Store Sales as the data source. They are grouped by MFP plan and forecast intersection (MFP levels), with the required forecast measures for each:

MFP Merch Target (Department-Channel-Week)

  • Regular and Promotion Gross Sales Amt

  • Clearance Gross Sales Amt

  • Regular and Promotion Gross Sales Unit

  • Clearance Gross Sales Unit

  • Total Returns Amount

  • Total Returns Units

MFP Merch Plan (Subclass-Channel-Week)

  • Regular and Promotion Gross Sales Amt

  • Clearance Gross Sales Amt

  • Regular and Promotion Gross Sales Unit

  • Clearance Gross Sales Unit

  • Total Returns Amount

  • Total Returns Units

MFP Location Target (Company-Location-Week)

  • Total Gross Sales Amount

  • Total Gross Sales Unit

  • Total Returns Amount

  • Total Returns Units

MFP Location Plan (Department-Location-Week)

  • Total Gross Sales Amount

  • Total Gross Sales Unit

  • Total Returns Amount

  • Total Returns Units

Migrate Data Between Environments

The process of moving all of your data from one cloud environment to another is referred to as a “lift-and-shift” process. It is common to perform all of the data load activities in this chapter in one environment and then have the final dataset copied into another environment, instead of reloading it all from the beginning. This activity is performed by Oracle and can be requested through a Service Request. The lift-and-shift process can take several days to complete, depending on data volumes and how many other Oracle applications are involved. Plan to raise SRs to schedule the activity at least 2-3 weeks in advance of the target date.

When requesting the activity, specify whether you need to migrate the entire database (both data and structures) or only the data. The first migration must be a full migration of data and structures to synchronize the source and target environments. It is currently recommended to begin your implementation in the Production environment to avoid needing a lift-and-shift in order to go live. Around your go-live date, you can request a lift-and-shift into your non-production environments to synchronize the data for future use.

Note:

The product versions between the source and target must be aligned. It is the project team’s responsibility to plan for appropriate upgrades and lift-and-shift activities such that this will be true.