3 Data Loads and Initial Batch Processing
This chapter describes the common data requirements for implementing any of the Retail Analytics and Planning modules, where to get additional information for optional or application-specific data interfaces, and how to load an initial dataset into the cloud environments and distribute it across your desired applications.
Data Requirements
Preparing data for one or more Retail Analytics and Planning modules can consume a significant amount of project time, so it is crucial to identify the minimum data requirements for the platform first, followed by additional requirements that are specific to your implementation plan. Data requirements that are called out for the platform are typically shared across all modules, meaning you only need to provide the inputs once to leverage them everywhere. This is the case for foundational data elements, such as your product and location hierarchies. Foundation data must be provided for any module of the platform to be implemented. Foundation data is provided using different sources depending on your current software landscape, including the on-premise Oracle Retail Merchandising System (RMS) or 3rd-party applications.
Figure 3-1 Inbound Foundation Data Flows
Retail Insights is used as the foundational data warehouse that collects and coordinates data on the platform. You do not need to purchase Retail Insights Cloud Service to leverage the data warehouse for storage and integration; it is included as part of any RAP solution. Regardless of which RAP solutions you are implementing, the integration flows shown above are used.
Application-specific data requirements are in addition to the shared foundation data, and may only be used by one particular module of the platform. These application data requirements may have different formats and data structures from the core platform-level dataset, so pay close attention to those additional interface specifications. References and links are provided later in this chapter to guide you to the relevant materials for application-specific inputs and data files.
If you are using RMS as your primary data source, then you may not need to produce some or all of these foundation files, as they will be created by other Oracle Retail processes for you. However, it is often the case that historical data requires a different set of foundation files from your future post-implementation needs. If you are loading manually-generated history files, or you are not using an Oracle Retail data source for foundation data, then review the rest of this section for details.
Platform Data Requirements
There is a subset of core platform data files that can be created and loaded once and then used across some or all application modules. These files use a specific format as detailed below.
Note:
Every application included with Retail Analytics and Planning has additional data needs beyond this foundation data. However, this common set of files can be used to initialize the system before moving on to those specific requirements.
The first table defines the minimum dimensional data. A dimension is a collection of descriptive elements, attributes, or hierarchical structures that provide context to your business data. Dimensions tell the platform what your business looks like and how it operates. This is not the entire list of possible dimension files, just the main ones needed to use the platform. Refer to Legacy Foundation File Reference for a complete list of available platform foundation files, along with a cross-reference to the legacy interfaces they most closely align with. A complete interface specification document is also available on My Oracle Support to assist you in planning your application-specific interface needs.
Table 3-1 Common Foundation Dimensions
Dimension | Filename(s) | Usage |
---|---|---|
Product |
PRODUCT.csv |
The product foundation data includes the items you sell, their core merchandise attributes, and their hierarchical structure in your source systems. |
Product |
PRODUCT_ALT.csv |
Alternate product attributes and hierarchy levels intended for downstream Planning application extensions. |
Organization |
ORGANIZATION.csv |
The organization foundation data includes all of your business entities involved in the movement or sale of merchandise. This includes stores, warehouses, partner/finisher locations, web stores, and virtual warehouses. It also includes your organizational hierarchy and core location attributes. |
Organization |
ORGANIZATION_ALT.csv |
Alternate location attributes and hierarchy levels intended for downstream Planning application extensions. |
Calendar |
CALENDAR.csv |
The calendar foundation data defines your business (or fiscal) calendar. This is the calendar that you operate in when making critical business decisions, reporting on financial results, and planning for the future. The most common calendar used by all modules of the platform is a 4-5-4 fiscal calendar. |
Exchange Rates |
EXCH_RATE.csv |
Exchange rates define the conversion of monetary values from the currency they are recorded in to your primary operating currency. Most data is provided to the platform in the local currency of the data source, and it is converted to your primary currency during the load process. |
Product Attributes |
ATTR.csv PROD_ATTR.csv |
Product attributes describe the physical and operational characteristics of your merchandise and are a critical piece of information for many AI Foundation modules, such as Demand Transference and Size Profile Optimization. They are not required as an initial input to start data loads but will eventually be needed for most applications to function. |
System Parameters |
RA_SRC_CURR_PARAM_G.dat |
Parameters file that supports certain batch processes, such as the ability to load multiple ZIP files and run batches in sequence. Include this file with nightly batch uploads to specify the current business date, which enables the system to run multiple batches in sequence without customer input. Required once you begin nightly or weekly batch uploads. |
The other set of foundation files are referred to as facts. Fact data covers all of the actual events, transactions, and activities occurring throughout the day in your business. Each module in the platform has specific fact data needs, but the most common ones are listed below. At a minimum, you should expect to provide Sales, Inventory, and Receipts data for use in most platform modules. The intersection of all data (meaning which dimensional values are used) is at a common level of item/location/date. Additional identifiers may be needed on some files; for example, the sales data should be at the transaction level, the inventory file has a clearance indicator, and the adjustments file has type codes and reason codes.
Table 3-2 Common Foundation Facts
Fact | Filename(s) | Usage |
---|---|---|
Sales |
SALES.csv |
Transaction-level records for customer sales (wholesale data provided separately). Used across all modules. |
Inventory |
INVENTORY.csv |
Physical inventory levels for owned merchandise as well as consignment and concession items. Used across all modules. |
Receipts |
RECEIPT.csv |
Inventory receipts into any location, including purchase order and transfer receipts. Used by Insights, Planning, and the AI Foundation modules to identify the movement of inventory into a location and to track first/last receipt date attributes. |
Adjustments |
ADJUSTMENT.csv |
Inventory adjustments made at a location, including shrink, wastage, theft, stock counts, and other reasons. Used by Insights and Planning modules. |
Purchase Orders |
ORDER_HEAD.csv ORDER_DETAIL.csv |
The purchase order data for all orders placed with suppliers. Held at a level of order number, supplier, item, location, and date. Separate files are needed for the order attributes and order quantities/amounts. Used by Insights and Planning modules. |
Markdowns |
MARKDOWN.csv |
The currency amount above or below the listed price of an item when that item's price is modified for any reason (planned markdown, POS discount, promotional sale, and so on). Used by Insights and Planning modules. |
Transfers |
TRANSFER.csv |
The movement of inventory between two locations (both physical and virtual). Used by Insights and Planning modules. |
Returns to Vendor |
RTV.csv |
The return of owned inventory to a supplier or vendor. Used by Insights and Planning modules. |
Prices |
PRICE.csv |
The current selling retail value of an item at a location. Used by Insights and AI Foundation modules. |
Costs |
COST.csv |
The base unit cost and derived net costs of an item at a location. Used by Retail Insights only. |
Wholesale/Franchise |
SALES_WF.csv |
Sales and markdown data from wholesale and franchise operations. Used by all modules. |
Deal Income |
DEAL_INCOME.csv |
Income associated with deals made with suppliers and vendors. Used by Insights and Planning modules. |
Details on which application modules make use of specific files (or columns within a file) can be found in the Interfaces Guide on My Oracle Support. Make sure you have a full understanding of the data needs for each application you are implementing before moving on to later steps in the process. If it is your first time creating these files, read Data File Generation for important information about key file structures and business rules that must be followed for each foundation file.
File Upload Samples
When you first upload foundation data into the platform, you will likely provide a small subset of the overall set of files required by your applications. The following examples show a possible series of initial file uploads to help you verify that you are providing the correct sets of data. All dimension files in initialization and history loads must be full snapshots of data; never send partial or incremental files.
Example #1: Calendar Initialization
When you first configure the system, you must upload and process the CALENDAR.csv file. You will generate the file following the specifications, and also provide a context (ctx) file, as described in Data File Generation.
File to upload: RAP_DATA_HIST.zip
Zip file contents:
-
CALENDAR.csv
-
CALENDAR.csv.ctx
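If you script the packaging step, the pairing of each data file with its context file can be automated. A minimal sketch using Python's standard zipfile module, with the file names from the example above (the placeholder file contents are only for illustration):

```python
import zipfile

def package_history_zip(zip_path, data_files):
    """Package each data file plus its .ctx context file into one upload ZIP."""
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for name in data_files:
            zf.write(name)            # the data file, e.g. CALENDAR.csv
            zf.write(name + ".ctx")   # its context file, e.g. CALENDAR.csv.ctx

# Create placeholder files for the sketch, then build the upload package.
for name in ("CALENDAR.csv", "CALENDAR.csv.ctx"):
    with open(name, "w") as fh:
        fh.write("placeholder\n")

package_history_zip("RAP_DATA_HIST.zip", ["CALENDAR.csv"])
```

The same helper applies to the later examples by passing more file names in the list.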
Example #2: Product and Location Setup
You have finalized the calendar and want to initialize your core product and organization dimensions. You must provide the PRODUCT.csv and ORGANIZATION.csv files along with their context files, as described in Data File Generation.
File to upload: RAP_DATA_HIST.zip
Zip file contents:
-
PRODUCT.csv
-
PRODUCT.csv.ctx
-
ORGANIZATION.csv
-
ORGANIZATION.csv.ctx
Example #3: Full dimension load
You have a set of dimension files you want to process using the initial dimension load ad hoc processes in POM.
File to upload: RAP_DATA_HIST.zip
Zip file contents:
-
PRODUCT.csv
-
PRODUCT.csv.ctx
-
ORGANIZATION.csv
-
ORGANIZATION.csv.ctx
-
ATTR.csv
-
ATTR.csv.ctx
-
PROD_ATTR.csv
-
PROD_ATTR.csv.ctx
-
EXCH_RATE.csv
-
EXCH_RATE.csv.ctx
Example #4: Sales Data Load
You have finished the dimensions and you are ready to start processing sales history files.
File to upload: RAP_DATA_HIST.zip
Zip file contents:
-
SALES.csv
-
SALES.csv.ctx
Example #5: Multi-File Fact Data Load
Once you are confident in your data file format and contents, you may send multiple files as separate ZIP uploads for sequential loading in the same run. This process uses a numerical sequence at the end of the ZIP file name. You should still include the base ZIP file to start the process. The actual method to loop over these files is to use the RI_FLOW_ADHOC standalone flow in the AIF DATA POM schedule, which executes the fact history load for all the standard history load files.
Files to upload: RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, RAP_DATA_HIST.zip.3
Zip file contents (in each uploaded zip):
-
SALES.csv
-
SALES.csv.ctx
– The CTX is only required in the first ZIP file, but it’s best to always include it so you can refer to it later in archived files, if needed.
In this example you are loading sales month by month iteratively, but the standalone flow supports all other fact loads as well. You can also combine multiple fact files (for different facts covering the same period of time) in each ZIP file upload. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table after each cycle execution; it shows whether the file was loaded successfully and how many more files are available to process.
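The sequencing convention can be generated programmatically when splitting history into several uploads. A small sketch, using the naming pattern from the example above:

```python
def sequenced_zip_names(base, total):
    """Upload names for `total` sequential history ZIPs: the base ZIP starts
    the process, then numbered files base.1 .. base.(total-1) follow."""
    return [base] + [f"{base}.{i}" for i in range(1, total)]

# Four monthly sales uploads, as in the example above:
names = sequenced_zip_names("RAP_DATA_HIST.zip", 4)
# → ['RAP_DATA_HIST.zip', 'RAP_DATA_HIST.zip.1',
#    'RAP_DATA_HIST.zip.2', 'RAP_DATA_HIST.zip.3']
```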
Uploading ZIP Packages
When providing data to the platform, push the compressed files into Object Storage using a ZIP file format. Review the File Transfer Services section for details on how to interact with Object Storage. The ZIP file you use will depend on the data you are attempting to load. The default ZIP file packages are listed below, but the history ZIP file name is configurable in C_ODI_PARAM using the parameter HIST_ZIP_FILE if a different name is desired.
Table 3-3 Platform ZIP File Usage
Filenames | Frequency | Notes |
---|---|---|
RAP_DATA_HIST.zip |
Ad Hoc |
Used for history and initialization data loads, such as the dimension and fact load examples earlier in this chapter. |
RAP_DATA_HIST.zip.1 RAP_DATA_HIST.zip.2 … RAP_DATA_HIST.zip.N |
Ad Hoc / Intraday |
Multiple ZIP uploads are supported for sending historical fact data that should be loaded sequentially. Append a sequence number to the ZIP files, starting from 1 and increasing to N, where N is the number of files you are loading. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table. |
RAP_DATA.zip RI_RMS_DATA.zip RI_CE_DATA.zip RI_MFP_DATA.zip RI_EXT_DATA.zip |
Daily |
Can be used for daily ongoing loads into the platform (for RI and foundation common inputs), and for any daily data going to downstream applications through RI’s nightly batch. Different files can be used for different source systems. |
RI_REPROCESS_DATA.zip |
Ad Hoc |
Used to upload individual files which will be appended into an existing nightly batch file set. |
ORASE_WEEKLY_ADHOC.zip |
Ad Hoc |
Used for loading AI Foundation files with ad hoc processes. |
ORASE_WEEKLY.zip |
Weekly |
Used for weekly batch files sent directly to AI Foundation. |
ORASE_INTRADAY.zip |
Intraday |
Used for intraday batch files sent directly to AI Foundation. |
Other supported file packages, such as output files and optional input files, are detailed in each module’s implementation guides. Except for Planning-specific integrations and customizations (which support additional integration paths and formats), it is expected that all files will be communicated to the platform using one of the filenames above.
Preparing to Load Data
Implementations can follow this general outline for the data load process:
-
Initialize the business and system calendars and perform table partitioning, which prepares the database for loading fact data.
-
Load initial dimension data into the dimension and hierarchy tables and perform validation checks on the data from DV/APEX or using RI reports.
-
If implementing any AI Foundation or Planning module, load the dimension data to those systems now. Data might look fine in the input tables but have issues that only become visible after processing in those systems. Do not start loading history data until your dimensions are working in all target applications.
-
Load the first set of history files (for example, one month of sales or inventory) and validate the results using DV/APEX.
-
If implementing any AI Foundation or Planning module, stop here and load the history data to those systems as well. Validate that the history data in those systems is complete and accurate per your business requirements.
-
Continue loading history data into RAP until you are finished with all data. You can stop at any time to move some of the data into downstream modules for validation purposes.
-
After history loads are complete, all positional tables, such as Inventory Position, need to be seeded with a full snapshot of source data before they can be loaded using regular nightly batches. This seeding process is used to create a starting position in the database which can be incremented by daily delta extracts. These full-snapshot files can be included in the first nightly batch you run, if you want to avoid manually loading each seed file through one-off executions.
-
When all history and seeding loads are completed and downstream systems are also populated with that data, nightly batches can be started.
Before you begin this process, it is best to prepare your working environment by identifying the tools and connections needed for all your Oracle cloud services that will allow you to interact with the platform, as detailed in Implementation Tools and Data File Generation.
Prerequisites for loading files and running POM processes include:
Prerequisite | Tool / Process |
---|---|
Upload ZIPs to Object Storage |
File Transfer Service (FTS) scripts |
Invoke adhoc jobs to unpack and load the data |
Postman (or similar REST API tool) |
Monitor job progress after invoking POM commands |
POM UI (Batch Monitoring tab) |
Monitoring data loads |
APEX / DV (direct SQL queries) |
Users must also have the necessary permissions in Oracle Cloud Infrastructure Identity and Access Management (OCI IAM) to perform all the implementation tasks. Before you begin, ensure that your user has at least the following groups (and their _PREPROD equivalents if using a stage/dev environment):
Access Needed | Groups Needed |
---|---|
Batch Job Execution |
BATCH_ADMINISTRATOR_JOB PROCESS_SERVICE_ADMIN_JOB |
Database Monitoring |
<tenant ID>-DVContentAuthor (DV) DATA_SCIENCE_ADMINISTRATOR_JOB (APEX) |
Retail Home |
RETAIL_HOME_ADMIN PLATFORM_SERVICES_ADMINISTRATOR PLATFORM_SERVICES_ADMINISTRATOR_ABSTRACT |
RI and AI Foundation Configurations |
ADMINISTRATOR_JOB |
MFP Configurations |
MFP_ADMIN_STAGE / PROD |
IPO Configurations |
IPO_ADMIN_STAGE / PROD |
AP Configurations |
AP_ADMIN_STAGE / PROD |
Calendar and Partition Setup
This is the first step that must be performed in all new environments, including projects that will implement only AI Foundation or Planning solutions rather than RI. Before beginning this step, ensure your configurations are complete per the initial configuration sections in the prior chapter. Your START_DT and END_DT variables must be set correctly for your calendar range (START_DT at least 12 months before the start of history data), and the C_MODULE_ARTIFACT table must have all of the required tables enabled for partitioning. C_MODULE_EXACT_TABLE must be configured if you need PLAN partitions for planning data loads.
-
Upload the calendar file CALENDAR.csv (and its associated context file) through Object Storage, packaged using the RAP_DATA_HIST.zip file.
-
Execute the HIST_ZIP_FILE_LOAD_ADHOC process. Example Postman message body:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"HIST_ZIP_FILE_LOAD_ADHOC" }
-
Verify that the jobs in the ZIP file load process completed successfully using the POM Monitoring screen. Download logs for the tasks as needed for review.
-
Execute the CALENDAR_LOAD_ADHOC process. This transforms the data and moves it into all internal RI tables. It also performs table partitioning based on your input date range. Sample Postman message body:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"CALENDAR_LOAD_ADHOC", "requestParameters":"jobParams.CREATE_PARTITION_PRESETUP_JOB=2018-12-30,jobParams.ETL_BUSINESS_DATE_JOB=2021-02-06" }
There are two date parameters provided for this command:
-
The first date value specifies the first day of partitioning. It must be some time before the first actual day of data being loaded. The recommendation is 1-6 months prior to the planned start of the history, so that you have room for back-posted data and changes to start dates. However, you should not create excessive partitions for years of data you won't be loading, as this can impact system performance. The date should also be >= the START_DT value set in C_ODI_PARAM_VW, because RAP cannot partition dates that don't exist in the system; but you don't need to partition your entire calendar range.
-
The second date (ETL business date) specifies the target business date, which is typically the day the system should be at after loading all history data and starting daily batches. It is okay to guess some date in the future for this value, but note that the partition process automatically extends 4 months past the date you specified. Your fiscal calendar must have enough periods in it to cover the 4 months after this date or this job will fail. This date can be changed later if needed, and partitioning can be re-executed multiple times for different timeframes.
-
If this is your first time loading a calendar file, check the RI_DIM_VALIDATION_V view to confirm no warnings or errors are detected. Refer to the AI Foundation Operations Guide for more details on the validations performed. The validation job will fail if it doesn't detect data moved to the final table (W_MCAL_PERIOD_D). Refer to Sample Validation SQLs for sample queries you can use to check the data.
-
If you need to reload the same file multiple times due to errors, you must restart the schedule in POM and then run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these steps. This will remove any load statuses from the prior run and give you a clean start on the next execution.
Note:
If any job having STG in the name fails during the run, review the POM logs; they should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.
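The rules for the two date parameters can be sanity-checked before invoking the process. A minimal sketch, assuming plain Gregorian dates stand in for fiscal periods and approximating the 4-month extension as 122 days (the sample values besides the two Postman dates are assumptions):

```python
from datetime import date, timedelta

def check_partition_dates(partition_start, history_start, start_dt,
                          etl_date, calendar_end):
    """Sanity-check the two CALENDAR_LOAD_ADHOC date parameters against the
    rules described above. All arguments are datetime.date values."""
    errors = []
    if partition_start < start_dt:
        errors.append("partition start precedes START_DT; those dates cannot be partitioned")
    if partition_start >= history_start:
        errors.append("partition start must fall before the first day of history data")
    # Partitioning automatically extends ~4 months past the ETL business date;
    # the fiscal calendar must cover that range or the job fails.
    if etl_date + timedelta(days=122) > calendar_end:
        errors.append("fiscal calendar must extend at least 4 months past the ETL business date")
    return errors

# Dates from the sample message body above; the other values are illustrative.
errs = check_partition_dates(
    partition_start=date(2018, 12, 30),
    history_start=date(2019, 1, 7),
    start_dt=date(2018, 1, 1),
    etl_date=date(2021, 2, 6),
    calendar_end=date(2022, 1, 29),
)
# → [] (no violations)
```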
You can monitor the partitioning process while it's running by querying the RI_LOG_MSG table from APEX. This table captures the detailed partitioning steps being performed by the script in real time (whereas POM logs are only refreshed at the end of execution). If the process fails in POM after exactly 4 hours, this is just a POM process timeout; the job may still be running in the background, so you can check for new inserts to the RI_LOG_MSG table.
The partitioning process will take some time (~5 hours per 100k partitions) to complete if you are loading multiple years of history, as this may require 100,000+ partitions to be created across the data model. This process must be completed successfully before continuing with the data load process. Contact Oracle Support if there are any questions or concerns. Partitioning can be performed after some data has been loaded; however, it will take significantly longer to execute, as it has to move all of the loaded data into the proper partitions.
You can also estimate the number of partitions needed based on the details below:
-
RAP needs to partition around 120 week-level tables if all functional areas are enabled, so take the number of weeks in your history time window multiplied by this number of tables.
-
RAP needs to partition around 160 day-level tables if all functional areas are enabled, so take the number of days in your history time window multiplied by this number of tables.
For a 3-year history window, this results in: 120*52*3 + 160*365*3 = 193,920 partitions. If you wish to confirm your final counts before proceeding to the next dataload steps, you can execute these queries from APEX:
select count(*) cnt from dba_tab_partitions where table_owner = 'RADM01' and table_name like 'W_RTL_%'
select table_name, count (*) cnt from dba_tab_partitions where table_owner = 'RADM01' and table_name like 'W_RTL_%' group by table_name
The queries should return a count roughly equal to your expected totals (it will not be exact, as the data model will add and remove tables over time, and some tables come with pre-built partitions or default MAXVALUE partitions).
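The estimation arithmetic above can be captured in a small helper, using the table counts stated earlier in this section:

```python
def estimate_partitions(history_weeks, history_days,
                        week_tables=120, day_tables=160):
    """Rough partition count: ~120 week-level tables and ~160 day-level
    tables when all functional areas are enabled."""
    return history_weeks * week_tables + history_days * day_tables

# Three years of history, matching the worked example above:
total = estimate_partitions(history_weeks=52 * 3, history_days=365 * 3)
# → 193920
```

Adjust the per-table counts downward if you have disabled functional areas, since fewer tables are then partitioned.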
Loading Data from Files
When history and initial seed data comes from flat files, use the following tasks to upload them into RAP:
Table 3-4 Flat File Load Overview
Activity | Description |
---|---|
Initialize dimensions |
Initialize dimensional data (products, locations, and so on) to provide a starting point for historical records to join with. Separate initial load processes are provided for this task. |
Load history |
Run history loads in one or multiple cycles depending on the data volume, starting from the earliest date in history and loading forward to today. |
Reload dimensions |
Reload dimensional data as needed throughout the process to maintain correct key values for all fact data. Dimensional files can be provided in the same package with history files, with the ad hoc processes run in sequence when loading. |
Seed positional facts |
Seed initial values for positional facts using full snapshots of all active item/locations in the source system. This must be loaded for the date prior to the start of nightly batches to avoid gaps in ongoing data. |
Start nightly batches |
Nightly batches must be started from the business date after the initial seeding was performed. |
Completing these steps will load all of your data into the Retail Insights data model, which is required for all implementations. From there, proceed with moving data downstream to other applications as needed, such as AI Foundation modules and Merchandise Financial Planning.
Note:
The steps are presented sequentially, but some can be executed in parallel. For example, you may load dimensions into RI, then on to AI Foundation and Planning applications, before loading any historical fact data. While historical fact data is loaded, other activities can occur in Planning, such as the domain build and configuration updates.
Initialize Dimensions
Loading Dimensions into RI
You cannot load any fact data into the platform until the related dimensions have been processed and verified. The processes in this section are provided to initialize the core dimensions needed to begin fact data loads and verify file formats and data completeness. Some dimensions which are not used in history loads are not part of the initialization process, as they are expected to come in the nightly batches at a later time.
For the complete list of dimension files and their file specifications, refer to the AI Foundation Interfaces Guide on My Oracle Support. The steps below assume you have enabled or disabled the appropriate dimension loaders in POM per your requirements. The process flow examples also assume CSV file usage; different programs are available for legacy DAT files. The AI Foundation Operations Guide provides a list of all the job and process flows used by foundation data files, so you can identify the jobs required for your files and disable unused programs in POM.
When you are using RDE jobs to source dimension data from RMFCS and you are not providing any flat files such as PRODUCT.csv, it is necessary to disable all file-based loaders in the LOAD_DIM_INITIAL_ADHOC process flow in POM. Any job name starting with the following text can be disabled, because RDE jobs bypass these steps and insert directly into the staging tables:
-
COPY_SI_
-
STG_SI_
-
SI_
-
STAGING_SI_
-
Disable any dimension jobs you are not using from Batch Administration, referring to the process flows for DAT and CSV files in the AIF Operations Guide as needed. If you are not sure if you need to disable a job, it’s best to leave it enabled initially. Restart the POM schedule in Batch Monitoring to apply the changes.
-
Provide your dimension files and context files through File Transfer Services, packaged using the RAP_DATA_HIST.zip file. All files should be included in a single ZIP file upload. If you are using data from Merchandising, this is where you should run the RDE ad hoc processes such as RDE_EXTRACT_DIM_INITIAL_ADHOC.
-
Execute the HIST_ZIP_FILE_LOAD_ADHOC process if you need to unpack a new ZIP file.
-
Execute the LOAD_DIM_INITIAL_ADHOC process to stage, transform, and load your dimension data from the files. The ETL date on the command should be, at a minimum, one day before the start of your history load timeframe, but 3-6 months before is ideal. It is best to give yourself a few months of space for reprocessing dimension loads on different dates prior to the start of history. The date format is YYYY-MM-DD; any other format will not be processed. After running the process, you can verify the dates are correct in the W_RTL_CURR_MCAL_G table. If the business date was not set correctly, your data may not load properly.
Sample Postman message body:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"LOAD_DIM_INITIAL_ADHOC", "requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31" }
Note:
If any job having STG in the name fails during the run, review the POM logs; they should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.
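The ad hoc invocations in this chapter share a common message body shape. A helper that builds it, following the sample Postman bodies (how you send the body — endpoint URL and authentication — depends on your POM environment and is not shown):

```python
import json

def pom_adhoc_body(process_name, job_params=None):
    """Build the REST message body for invoking a POM ad hoc process,
    matching the shape of the sample Postman bodies in this chapter."""
    body = {"cycleName": "Adhoc", "flowName": "Adhoc",
            "processName": process_name}
    if job_params:
        # jobParams are passed as a single comma-separated string
        body["requestParameters"] = ",".join(
            f"jobParams.{job}={value}" for job, value in job_params.items())
    return json.dumps(body)

payload = pom_adhoc_body("LOAD_DIM_INITIAL_ADHOC",
                         {"ETL_BUSINESS_DATE_JOB": "2017-12-31"})
```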
If this is your first dimension load, you will want to validate the core dimensions such as product and location hierarchies using APEX. Refer to Sample Validation SQLs for sample queries you can use for this.
If any jobs fail during this load process, you may need to alter one or more dimension data files, re-send them in a new ZIP file upload, and re-execute the programs. Only after all core dimension files have been loaded (CALENDAR, PRODUCT, ORGANIZATION, and EXCH_RATE) can you proceed to history loads for fact data. Make sure to query the RI_DIM_VALIDATION_V view for any warnings or errors after the run. Refer to the AI Foundation Operations Guide for more details on the validation messages that may occur. This view primarily uses the table C_DIM_VALIDATE_RESULT, which can be queried directly instead of the view to see all of its available columns.
If you need to reload the same file multiple times due to errors, you must restart the schedule in POM and then run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these steps. This will remove any load statuses from the prior run and give you a clean start on the next execution.
Note:
Starting with version 23.1.101.0, the product and organization file loaders have been redesigned specifically for the initial ad hoc loads. In prior versions, you must not reload multiple product or organization files for the same ETL business date, as the system treats any changes as a reclassification and can cause data issues while loading history. In version 23.x, the dimensions are handled as “Type 1” slowly changing dimensions, meaning the programs do not look for reclasses and instead perform simple merge logic to apply the latest hierarchy data to the existing records, even if levels have changed.
As a best practice, you should disable all POM jobs in the LOAD_DIM_INITIAL_ADHOC process except the ones you are providing new files for. For example, if you are loading the PRODUCT, ORGANIZATION, and EXCH_RATE files as your dimension data for AI Foundation, then you could execute just the set of jobs for those files and disable the others. Refer to the AI Foundation Operations Guide for a list of the POM jobs involved in loading each foundation file if you wish to disable jobs you do not plan to use, to streamline the load process.
Hierarchy Deactivation
Beginning in version 23, foundation dimension ad hoc loads have been changed to use Type 1 slowly-changing dimension (SCD) behavior, which means that the system will no longer create new records every time a parent/child relationship changes. Instead, it performs a simple merge on top of existing data to maintain as-is hierarchy definitions. The foundation data model holds hierarchy records separately from product data, so it is also necessary to perform maintenance on hierarchies to maintain a single active set of records to be propagated downstream to other RAP applications. This maintenance is performed by the program W_PROD_CAT_DH_CLOSE_JOB in the LOAD_DIM_INITIAL_ADHOC process. The program detects unused hierarchy nodes which have no children after the latest data has been loaded into W_PROD_CAT_DH and closes them (sets CURRENT_FLG=N). This is required because, in the data model, each hierarchy level is stored as a separate record, even if that level is not being used by any products on other tables. Without this cleanup activity, unused hierarchy levels would accumulate in W_PROD_CAT_DH and remain available in AI Foundation, which is generally not desired.
There are some scenarios where you may want to disable this program. For example, if you know the hierarchy is going to change significantly over a period of time and you don’t want levels to be closed and re-created every time a new file is loaded, you must disable W_PROD_CAT_DH_CLOSE_JOB. You can re-enable it later and it will close any unused levels that remain after all your changes are processed. Also be aware that the program is part of the nightly batch process too, so once you switch from historical to nightly loads, this job will be enabled and will close unused hierarchy levels unless you intentionally disable it. This job must be disabled if you are using RDE programs to load Merchandising data.
Loading Dimensions to Other Applications
Once you have successfully loaded dimension data, you should pause the data load process and push the dimensions to AI Foundation Cloud Services and the Planning Data Store (if applicable). This allows for parallel data validation and domain build activities to occur while you continue loading data. Review the sections Sending Data to AI Foundation and Sending Data to Planning for details on the POM jobs you may execute for this.
The main benefits of this order of execution are:
-
Validating the hierarchy structure from the AI Foundation interface provides an early view for the customer to see some application screens with their data.
-
Planning apps can perform the domain build activity without waiting for history file loads to complete, and can start to do other planning implementation activities in parallel to the history loads.
-
Data can be made available for custom development or validations in Innovation Workbench.
Do not start history loads for facts until you are confident all dimensions are working throughout your solutions. Once you begin loading facts, it becomes much harder to reload dimension data without impacts to other areas. For example, historical fact data already loaded will not be automatically re-associated with hierarchy changes loaded later in the process.
Load History Data
Historical fact data is a core foundational element to all solutions in Retail Analytics and Planning. As such, this phase of the implementation can take the longest amount of time during the project, depending on the volumes of data, the source of the data, and the amount of transformation and validation that needs to be completed before and after loading it into the Oracle database.
It is important to know where in the RAP database you can look to find what data has been processed, what data may have been rejected or dropped due to issues, and how far along in the overall load process you are. The following tables provide critical pieces of information throughout the history load process and can be queried from APEX.
Table 3-5 Inbound Load Status Tables
Table | Usage
---|---
C_HIST_LOAD_STATUS | Tracks the progress of historical ad hoc load programs for inventory and pricing facts. This table will tell you which Retail Insights tables are being populated with historical data, the most recent status of the job executions, and the most recently completed period of historical data for each table. Use APEX or Data Visualizer to query this table after historical data load runs to ensure the programs are completing successfully and processing the expected historical time periods.
C_HIST_FILES_LOAD_STATUS | Tracks the progress of zip file processing when loading multiple files in sequence using scheduled standalone process flows.
C_LOAD_DATES | Check for detailed statuses of historical load jobs. This is the only place that tracks this information at the individual ETL thread level. For example, it is possible for a historical load using 8 threads to successfully complete 7 threads but fail on one thread due to data issues. The job itself may just return as Failed in POM, so knowing which thread failed will help identify the records that may need correcting and which thread should be reprocessed.
W_ETL_REJECTED_RECORDS | Summary table capturing rejected fact record counts that do not get processed into their target tables in Retail Insights. Use this to identify other tables with specific rejected data to analyze. Does not apply to dimensions, which do not have rejected record support at this time.
E$_W_RTL_SLS_TRX_IT_LC_DY_TMP | Example of a rejected record detail table for Sales Transactions. All rejected record tables start with the E$_ prefix.
When loading data from flat files for the first time, it is common to have bad records that cannot be processed by the RAP load procedures, such as when the identifiers on the record are not present in the associated dimension tables. The foundation data loads leverage rejected record tables to capture all such data so you can see what was dropped by a specific data load and needs to be corrected and reloaded. These tables do not exist until rejected records occur during program execution, and are not initially granted to APEX unless you have run RABE_GRANT_ACCESS_TO_IW_ADHOC_PROCESS. Periodically monitor these tables for rejected data which may require reloading.
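Once you have pulled the reject counts from W_ETL_REJECTED_RECORDS (for example, via an APEX or Innovation Workbench query), a small script can flag which target tables need review. This is only a sketch: the (table_name, reject_count) pair shape and the threshold are illustrative assumptions, not the actual table definition.

```python
def tables_to_review(summary_rows, threshold=0):
    """Return target tables whose rejected-record count exceeds a threshold.

    summary_rows: (table_name, reject_count) pairs, e.g. fetched from
    W_ETL_REJECTED_RECORDS via APEX/IW. The column shape is illustrative.
    """
    return sorted({t for t, n in summary_rows if n > threshold})

rows = [("W_RTL_SLS_TRX_IT_LC_DY_F", 125), ("W_RTL_INV_IT_LC_DY_F", 0)]
print(tables_to_review(rows))  # ['W_RTL_SLS_TRX_IT_LC_DY_F']
```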
The overall sequence of files to load will depend on your specific data sources and conversion activities, but the recommendation is listed below as a guideline.
-
Sales – Sales transaction data is usually first to be loaded, as the data is critical to running most applications and needs the least amount of conversion.
-
Inventory Receipts – If you need receipt dates for downstream usage, such as in Lifecycle Pricing Optimization, then you need to load receipt transactions in parallel with Inventory Positions. For each file of receipts loaded, also load the associated inventory positions afterwards.
-
Inventory Position – The main stock-on-hand positions file is loaded next. This history load also calculates and stores data using the receipts file, so INVENTORY.csv and RECEIPT.csv must be loaded at the same time, for the same periods.
-
Pricing – The price history file is loaded after sales and inventory are complete because many applications need only the first two datasets for processing. Price history may also be the largest volume of data, so it’s good to be working within your other applications in parallel with loading price data.
-
All other facts – There is no specific order to load any of the other facts like transfers, adjustments, markdowns, costs, and so on. They can be loaded based on your downstream application needs and the availability of the data files.
For your first time implementing this history load process, you may also leverage the reference paper and scripts in My Oracle Support (Doc ID 2539848.1) titled AI Foundation Historical Data Load Monitoring. This document will guide you through one way you can monitor the progress of history loads, gather statistics on commonly used tables, and verify that data is moving from the input tables to the target tables in the database.
Automated History Loads
Once you have performed your history loads manually a couple of times (following all steps in later sections) and validated the data is correct, you may wish to automate the remaining file loads. A standalone process flow RI_FLOW_ADHOC is available in POM that can run the fact history loads multiple times using your specified start times. Follow the steps below to enable this process:
-
Upload multiple ZIP files using FTS, each containing one set of files for the same historical period. Name the files like RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, and so on, incrementing the index on the end of the zip file name after the first one.
-
In the POM batch administration screen, ensure all of the jobs in the RI_FLOW_ADHOC flow are enabled, matching your initial fact history standalone runs. The following processes in the process flow are common to all runs and all jobs should be enabled in them:
-
FLOW_LOAD_START_PROCESS
-
FLOW_LOAD_END_PROCESS
-
CLEANUP_C_LOAD_DATES_FLOW_PROCESS
-
ZIP_FILE_LOAD_FLOW_PROCESS
-
FACT_FLOW_END_PROCESS
-
-
Schedule the standalone flows from Scheduler Administration to occur at various intervals throughout the day. Space out the cycles based on how long it took to process your first file.
-
Monitor the load progress from the Batch Monitoring screen to see the results from each run. Monitor the table C_HIST_FILES_LOAD_STATUS to verify the ZIP files you uploaded were all processed successfully. Validate that data is being loaded successfully in your database periodically throughout the standalone runs. If a run fails for any reason, it will not allow more runs to proceed until the issue is resolved.
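The ZIP naming convention used in step 1 (a fixed base name, then an incrementing numeric suffix after the first file) is easy to script when preparing many history periods. A minimal sketch; the helper name is illustrative, not a RAP utility:

```python
def hist_zip_names(count, base="RAP_DATA_HIST.zip"):
    """Generate ZIP file names for sequential history loads.

    The first file uses the base name as-is; subsequent files append
    an incrementing index (.1, .2, ...) per the naming convention.
    """
    return [base if i == 0 else f"{base}.{i}" for i in range(count)]

print(hist_zip_names(3))
# ['RAP_DATA_HIST.zip', 'RAP_DATA_HIST.zip.1', 'RAP_DATA_HIST.zip.2']
```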
Sales History Load
RAP supports the loading of sales transaction history using actual transaction data or daily/weekly sales totals. If loading data at an aggregate level, all key columns (such as the transaction ID) are still required to have some value. The sales data may be provided for a range of dates in a single file. The data should be loaded sequentially from the earliest week to the latest week but, unlike inventory position, you may have gaps or out-of-order loads, because the data is not stored positionally. Refer to Data File Generation for more details on the file requirements.
Note:
Many parts of AI Foundation require transactional data for sales, so loading aggregate data should not be done unless you have no better alternative.
If you are not loading sales history for Retail Insights specifically, then there are many aggregation programs that can be disabled in the POM standalone process. Most aggregation programs (jobs ending in _A_JOB) populate additional tables used only in BI reporting. The following list of jobs must be enabled in the HIST_SALES_LOAD_ADHOC process to support AIF and Planning data needs, but all others can be disabled for non-RI projects:
-
VARIABLE_REFRESH_JOB
-
ETL_REFRESH_JOB
-
W_EMPLOYEE_D_JOB
-
SEED_EMPLOYEE_D_JOB
-
W_PARTY_PER_D_JOB
-
SEED_PARTY_PER_D_JOB
-
W_RTL_CO_HEAD_D_JOB
-
W_RTL_CO_LINE_D_JOB
-
SEED_CO_HEAD_D_JOB
-
SEED_CO_LINE_D_JOB
-
W_RTL_SLS_TRX_IT_LC_DY_F_JOB
-
RA_ERROR_COLLECTION_JOB
-
RI_GRANT_ACCESS_JOB
-
RI_CREATE_SYNONYM_JOB
-
ANALYZE_TEMP_TABLES_JOB
-
W_RTL_SLS_IT_LC_DY_TMP_JOB
-
W_RTL_SLS_IT_LC_WK_A_JOB
-
W_RTL_PROMO_D_TL_JOB
-
SEED_PROMO_D_TL_JOB
-
W_PROMO_D_RTL_TMP_JOB
-
W_RTL_SLSPR_TRX_IT_LC_DY_F_JOB
-
W_RTL_SLSPK_IT_LC_DY_F_JOB
-
W_RTL_SLSPK_IT_LC_WK_A_JOB
-
REFRESH_RADM_JOB
The other process used, HIST_STG_CSV_SALES_LOAD_ADHOC, can be run with all jobs enabled, as it is only responsible for staging the files in the database. Make sure to check the enabled jobs in both processes before continuing.
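The keep/disable decision above can be sanity-checked with a short script: keep the required jobs, disable everything else in the process. A sketch under stated assumptions: the required set below is a subset of the list above, the full job inventory would come from POM, and the disabled job name in the example is hypothetical.

```python
# Subset of the required HIST_SALES_LOAD_ADHOC jobs listed above (illustrative).
REQUIRED_SALES_JOBS = {
    "VARIABLE_REFRESH_JOB", "ETL_REFRESH_JOB", "W_EMPLOYEE_D_JOB",
    "SEED_EMPLOYEE_D_JOB", "W_RTL_SLS_TRX_IT_LC_DY_F_JOB",
    "W_RTL_SLS_IT_LC_WK_A_JOB",
}

def jobs_to_disable(all_jobs, required=REQUIRED_SALES_JOBS):
    """Jobs in the process that can be disabled for non-RI projects."""
    return sorted(set(all_jobs) - required)

# Hypothetical job inventory pulled from POM batch administration:
all_jobs = ["ETL_REFRESH_JOB", "W_RTL_SLS_IT_LC_DY_A_JOB", "W_RTL_SLS_IT_LC_WK_A_JOB"]
print(jobs_to_disable(all_jobs))  # ['W_RTL_SLS_IT_LC_DY_A_JOB']
```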
After confirming the list of enabled sales jobs, perform the following steps:
-
Create the file SALES.csv containing one or more days of sales data along with a CTX file defining the columns which are populated. Optionally include the SALES_PACK.csv file as well.
-
Upload the history files to Object Storage using the RAP_DATA_HIST.zip file.
-
Execute the HIST_ZIP_FILE_LOAD_ADHOC process.
-
Execute the HIST_STG_CSV_SALES_LOAD_ADHOC process to stage the data in the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.
-
Execute the HIST_SALES_LOAD_ADHOC batch process to load the data. If no data is available for certain dimensions used by sales, then the load process can seed the dimension from the history file automatically. Enable seeding for all of the dimensions according to the initial configuration guidelines; providing the data in other files is optional.
Several supplemental dimensions are involved in this load process, which may or may not be provided depending on the data requirements. For example, sales history data has promotion identifiers, which would require data on the promotion dimension.
Sample Postman message bodies:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"HIST_STG_CSV_SALES_LOAD_ADHOC" } { "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"HIST_SALES_LOAD_ADHOC" }
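When scripting these executions against the POM REST service, the same message bodies can be built programmatically. A sketch that only constructs the payloads; the endpoint URL and authentication are environment-specific and deliberately omitted here.

```python
import json

def adhoc_body(process_name):
    """Build the standard ad hoc execution request body shown above."""
    return {"cycleName": "Adhoc", "flowName": "Adhoc", "processName": process_name}

# The two sales history processes from the sample bodies:
for proc in ("HIST_STG_CSV_SALES_LOAD_ADHOC", "HIST_SALES_LOAD_ADHOC"):
    print(json.dumps(adhoc_body(proc)))
```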
Note:
If any job having STG in the name fails during the run, then review the POM logs; they should provide the name of an external LOG or BAD table with more information. These error tables can be accessed from APEX using a support utility. Refer to the AI Foundation Operations Guide section on “External Table Load Logs” for the utility syntax and examples.
After the load is complete, you should check for rejected records, as this will not cause the job to fail but it will mean not all data was loaded successfully. Query the table W_ETL_REJECTED_RECORDS from IW to see a summary of rejections. If you cannot immediately identify the root cause (for example, missing products or locations causing the data load to skip the records), there is a utility job W_RTL_REJECT_DIMENSION_TMP_JOB that allows you to analyze the rejections for common reject reasons. Refer to the AIF Operations Guide for details on configuring and running the job for the first time if you have not used it before.
This process can be repeated as many times as needed to load all history files for the sales transaction data. If you are sending data to multiple RAP applications, do not wait until all data files are processed to start using those applications. Instead, load a month or two of data files and process them into all apps to verify the flows before continuing.
Note:
Data cannot be reloaded for the same records multiple times, as sales data is treated as additive. If data needs correction, you must post only the delta records (for example, send -5 to reduce a value by 5 units) or erase the table and restart the load process using RI_SUPPORT_UTIL procedures in APEX. Raise a Service Request with Oracle if neither of these options resolves your issue.
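Because sales loads are additive, a correction file carries only the differences between what was loaded and the true values. A sketch of computing those delta records; the key tuple and measure are simplified assumptions for illustration.

```python
def correction_deltas(loaded, correct):
    """Compute additive correction records for sales quantities.

    loaded/correct: dicts keyed by (item, loc, day) with quantity values.
    Returns only the non-zero deltas, e.g. -5 to reduce a value by 5 units.
    """
    deltas = {}
    for key in set(loaded) | set(correct):
        d = correct.get(key, 0) - loaded.get(key, 0)
        if d != 0:
            deltas[key] = d
    return deltas

loaded = {("SKU1", "LOC1", "2023-01-07"): 15}
correct = {("SKU1", "LOC1", "2023-01-07"): 10}
print(correction_deltas(loaded, correct))  # {('SKU1', 'LOC1', '2023-01-07'): -5}
```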
Once you have performed the load and validated the data one time, you may wish to automate the remaining file loads. A standalone process flow RI_FLOW_ADHOC is available in POM that can run the sales history load multiple times using your specified start times. Follow the steps below to leverage this process:
-
Upload multiple ZIP files, each containing one SALES.csv file, naming them RAP_DATA_HIST.zip, RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, and so on, incrementing the index on the end of the zip file name. Track the status of the files in the C_HIST_FILES_LOAD_STATUS table once they are uploaded and at least one execution of the HIST_ZIP_FILE_UNLOAD_JOB process has been run.
-
In the POM batch administration screen, ensure all of the jobs in the RI_FLOW_ADHOC flow are enabled, matching your initial ad hoc run. Schedule the standalone flows from Scheduler Administration to occur at various intervals throughout the day. Space out the runs based on how long it took to process your first file.
-
Monitor the load progress from the Batch Monitoring screen to see the results from each run cycle.
Inventory Position History Load
RAP supports the loading of inventory position history using full, end-of-week snapshots. These weekly snapshots may be provided one week at a time or as multiple weeks in a single file. The data must be loaded sequentially from the earliest week to the latest week with no gaps or out-of-order periods. For example, you cannot start with the most recent inventory file and go backward; you must start from the first week of history. Refer to Data File Generation for more details on the file requirements.
A variety of C_ODI_PARAM_VW settings are available in the Control Center to disable inventory features that are not required for your implementation. All of the following parameters can be changed to a value of N during the history load and enabled later for daily batches, as this will greatly improve the load times:
-
RI_INVAGE_REQ_IND – Disables calculation of first/last receipt dates and inventory age measures. Receipt date calculation is used in RI and required for Lifecycle Pricing Optimization (as a method of determining entry/exit dates for items). It is also required for Short Lifecycle (SLC) forecasting methods. Set to Y if using any of these applications.
-
RA_CLR_LEVEL – Disables the mapping of clearance event IDs to clearance inventory updates. Used only in RI reporting.
-
RI_PRES_STOCK_IND – Disables use of replenishment data for presentation stock to calculate inventory availability measures. Used only in RI reporting.
-
RI_BOH_SEEDING_IND – Disables the creation of initial beginning-on-hand records so analytics has a non-null starting value in the first week. Used only in RI reporting.
-
RI_MOVE_TO_CLR_IND – Disables calculation of move-to-clearance inventory measures when an item/location goes into or out of clearance status. Used only in RI reporting.
-
RI_MULTI_CURRENCY_IND – Disables recalculation of primary currency amounts if you are only using a single currency. Should be enabled for multi-currency, or disabled otherwise.
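Taken together, the settings above amount to a history-load override set. A sketch of how you might track the values to enter in the Control Center; the function and its arguments are illustrative, and actual changes are made through C_ODI_PARAM_VW, not from code.

```python
def inventory_history_params(needs_receipt_dates=False, multi_currency=False):
    """Suggested C_ODI_PARAM_VW values during the inventory history load.

    Most features are set to N to speed up the load; RI_INVAGE_REQ_IND
    must stay Y when LPO or SLC forecasting needs receipt dates, and
    RI_MULTI_CURRENCY_IND stays Y only for multi-currency implementations.
    """
    return {
        "RI_INVAGE_REQ_IND": "Y" if needs_receipt_dates else "N",
        "RA_CLR_LEVEL": "N",
        "RI_PRES_STOCK_IND": "N",
        "RI_BOH_SEEDING_IND": "N",
        "RI_MOVE_TO_CLR_IND": "N",
        "RI_MULTI_CURRENCY_IND": "Y" if multi_currency else "N",
    }

print(inventory_history_params(needs_receipt_dates=True))
```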
If you will be loading inventory history after you have already started nightly batches, then you must also update two additional parameters:
-
INV_NIGHTLY_BATCH_IND – Change this to Y to indicate that nightly batches have been run but you are planning to load history for prior dates.
-
INV_LAST_HIST_LOAD_DT – Set this to the final week of history data you plan to load; it must be a week-ending date and it must be before the nightly batches were started.
Although it is supported, it is not advisable to load history data after nightly batches have started. It would be difficult to erase or correct historical data after it is loaded without affecting your nightly batch data as well. For this reason it is best to validate the history data thoroughly in a non-production environment before loading it to the production system.
The following steps describe the process for loading inventory history:
-
If you need inventory to keep track of First/Last Receipt Dates for use in Lifecycle Pricing Optimization or Forecasting (SLC), then you must first load a RECEIPT.csv file for the same historical period as your inventory file (because it is used in forecasting, it may also be required for your Inventory Planning Optimization loads if you plan to use SLC forecasting). You must also set RI_INVAGE_REQ_IND to Y. Receipts are loaded using the process HIST_CSV_INVRECEIPTS_LOAD_ADHOC. Receipts may be provided at day or week level depending on your history needs.
-
Create the file INVENTORY.csv containing one or more weeks of inventory snapshots in chronological order, along with your CTX file to define the columns that are populated. The DAY_DT value on every record must be an end-of-week date (Saturday by default).
-
Upload the history file and its context file to Object Storage using the RAP_DATA_HIST.zip file.
-
Update the column HIST_LOAD_LAST_DATE on the table C_HIST_LOAD_STATUS to be the date matching the last day of your overall history load (it will be later than the dates in the current file). This can be done from the Control & Tactical Center. If you are loading history after your nightly batches were already started, then you must set this date to be the last week-ending date before your first daily/weekly batch. No other date value can be used in this case.
-
Execute the HIST_ZIP_FILE_LOAD_ADHOC process.
-
If you are providing RECEIPT.csv for tracking receipt dates in history, run HIST_CSV_INVRECEIPTS_LOAD_ADHOC at this time.
-
Execute the HIST_STG_CSV_INV_LOAD_ADHOC process to stage your data into the database. Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries you can use for this.
-
Execute the HIST_INV_LOAD_ADHOC batch process to load the file data. The process loops over the file one week at a time until all weeks are loaded. It updates the C_HIST_LOAD_STATUS table with the progress, which you can monitor from APEX or DV. Sample Postman message bodies:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"HIST_STG_CSV_INV_LOAD_ADHOC" } { "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"HIST_INV_LOAD_ADHOC" }
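The week-ending and no-gaps rules for INVENTORY.csv can be pre-checked before upload. A sketch assuming the default Saturday week-end and ISO-format DAY_DT values; file parsing is simplified to a list of date strings.

```python
from datetime import date, timedelta

def check_inventory_weeks(day_dts):
    """Validate DAY_DT values for an inventory history file.

    Every date must be a Saturday (the default week-ending day) and the
    distinct weeks must be consecutive, in chronological order, with no gaps.
    Returns a list of error messages (empty means the file passes).
    """
    dates = sorted({date.fromisoformat(d) for d in day_dts})
    errors = [f"{d} is not a Saturday" for d in dates if d.weekday() != 5]
    for prev, nxt in zip(dates, dates[1:]):
        if nxt - prev != timedelta(days=7):
            errors.append(f"gap between {prev} and {nxt}")
    return errors

print(check_inventory_weeks(["2023-01-07", "2023-01-14"]))  # []
print(check_inventory_weeks(["2023-01-07", "2023-01-28"]))  # gap reported
```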
This process can be repeated as many times as needed to load all history files for the inventory position. Remember that inventory cannot be loaded out of order, and you cannot go back in time to reload files after you have processed them (for the same item/loc intersections). If you load a set of inventory files and then find issues during validation, erase the tables in the database and restart the load with corrected files.
If you finish the entire history load and need to test downstream systems (like Inventory Planning Optimization), then you must populate the table W_RTL_INV_IT_LC_G first (the history load skips this table). There is a separate standalone job HIST_LOAD_INVENTORY_GENERAL_JOB in the process HIST_INV_GENERAL_LOAD_ADHOC that you may execute to copy the final week of inventory from the fact table to this table.
If your inventory history has invalid data, you may get rejected records and the batch process will fail with a message that rejects exist in the data. If this occurs, you cannot proceed until you resolve your input data, because rejections on positional data MUST be resolved for one date before moving onto the next. If you move onto the next date without reprocessing any rejected data, that data is lost and cannot be loaded at a later time without starting over. When this occurs:
-
The inventory history load will automatically populate the table W_RTL_REJECT_DIMENSION_TMP with a list of invalid dimensions it has identified. If you are running any other jobs besides the history load, you can also run the process W_RTL_REJECT_DIMENSION_TMP_ADHOC to populate that table manually. You have the choice to fix the data and reload new files or proceed with the current file.
-
After reviewing the rejected records, run REJECT_DATA_CLEANUP_ADHOC, which will erase the E$ table and move all rejected dimensions into a skip list. You must pass in the module code you want to clean up data for as a parameter on the POM job (in this case the module code is INV). The skip list is loaded to the table C_DISCARD_DIMM. Skipped identifiers will be ignored for the current file load, and then reset for the start of the next run.
Example Postman message body:
{ "cycleName": "Adhoc", "flowName":"Adhoc", "processName":"REJECT_DATA_CLEANUP_ADHOC", "requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=INV" }
-
If you want to fix your files instead of continuing the current load, stop here and reload your dimensions and/or fact data following the normal process flows.
-
If you are resuming with the current file with the intent to skip all data in C_DISCARD_DIMM, restart the failed POM job now. The skipped records are permanently lost and cannot be reloaded unless you erase your inventory data and start loading files from the beginning.
Log a Service Request with Oracle Support for assistance with any of the above steps if you are having difficulties with loading inventory history or dealing with rejected records.
Reloading Inventory Data
If you need to reload new inventory data from the beginning, you must first erase the inventory fact tables and clean up the configurations. Use the Data Cleanup Utility described in the AIF Operations Guide to truncate all inventory fact tables. You should use the subject area cleanup for this purpose, passing in a value of Inventory Position as the subject area name. When run for the inventory subject area, this utility will truncate the tables, reset the columns in C_HIST_LOAD_STATUS, and clean up the metadata relating to the partitions to prepare for a fresh load of data. It will not truncate TMP tables, as those will be truncated automatically when the history load programs are executed.
Make sure you check C_HIST_LOAD_STATUS before starting a new inventory load, as you may want to enable different tables or change the HIST_LOAD_LAST_DATE values to align with your new data. Verify that the MAX_COMPLETED_DATE and HIST_LOAD_STATUS columns are null for all rows you will be reprocessing.
Price History Load
Certain applications, such as Lifecycle Pricing Optimization, require price history to perform their calculations. Price history is similar to inventory in that it is positional, but it can be loaded in a more compressed manner due to the extremely high data volumes involved. The required approach for price history is as follows:
-
Update C_HIST_LOAD_STATUS for the PRICE records in the table, specifying the last date of history load, just as you did for inventory. For whatever value you put in the HIST_LOAD_LAST_DATE column, ensure that you load price data up to and including that date. There is logic in the history load to refresh the W_RTL_PRICE_IT_LC_G table; that logic only runs on the final day of historical processing. But, if you don’t load files all the way up to that date, it will never run and you may have incorrect data as a result. If you are loading history after your nightly batches already started, then you must set this date to be the last week-ending date before your first daily/weekly batch. No other date value can be used in this case.
-
If you are loading prices for LPO applications specifically, then go to C_ODI_PARAM_VW in the Control Center and change the parameter RI_LAST_MKDN_HIST_IND to have a value of Y. This will populate some required fields for LPO markdown and promotion price history.
-
Create an initial, full snapshot of price data in PRICE.csv for the first day of history and load this file into the platform using the history processes in this section. All initial price records must come with a type code of 0.
-
Create additional PRICE files containing just price changes for a period of time (such as a month) with the appropriate price change type codes and effective day dates for those changes. Load each file one at a time using the history processes.
-
The history procedure will iterate over the provided files day by day, starting from the first day of history, up to the last historical load date specified in C_HIST_LOAD_STATUS for the pricing fact. For each date, the procedure checks the staging data for effective price change records and loads them, then moves on to the next date.
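The day-by-day behavior described above can be illustrated with a small simulation: a type-0 full snapshot establishes the starting price, then each later effective change is merged in date order. The record layout here is simplified (day, item, location, price) for illustration only.

```python
def replay_price_history(records, last_hist_date):
    """Simulate the sequential application of price history records.

    records: (day_dt, item, loc, price) tuples; the first day's records
    form the full snapshot (type 0), later ones are effective changes.
    Returns the current price per (item, loc) as of last_hist_date.
    ISO date strings compare correctly as plain strings.
    """
    current = {}
    for day_dt, item, loc, price in sorted(records):
        if day_dt <= last_hist_date:
            current[(item, loc)] = price  # latest effective price wins
    return current

records = [
    ("2023-01-01", "SKU1", "LOC1", 19.99),  # initial full snapshot
    ("2023-02-15", "SKU1", "LOC1", 14.99),  # later effective price change
]
print(replay_price_history(records, "2023-12-30"))  # {('SKU1', 'LOC1'): 14.99}
```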
The process to perform price history loads is similar to the inventory load steps. It uses the PRICE.csv file and the HIST_CSV_PRICE_LOAD_ADHOC process (the price load has only one load process instead of two like sales/inventory). Just like inventory, you must load the data sequentially; you cannot back-post price changes to dates earlier than what you have already loaded. Refer to Data File Generation for complete details on how to build this file.
Just like inventory, the REJECT_DATA_CLEANUP_ADHOC process may be used when records are rejected during the load. Price loads cannot continue until you review and clear the rejections.
{
"cycleName": "Adhoc",
"flowName":"Adhoc",
"processName":"REJECT_DATA_CLEANUP_ADHOC",
"requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=PRICE"
}
Reloading Price Data
If you need to reload new price data from the beginning, you must first erase the pricing fact tables and clean up the configurations. There are two methods available for this and both should be used as of the current release:
-
Use the Data Cleanup Utility found in the AIF Operations Guide to truncate all price tables. You may use the subject area cleanup for this purpose, passing in a value of Price as the subject area name. This is required in order to efficiently delete any large-volume tables, which perform better when truncated through this utility.
-
Use the history cleanup standalone process HIST_DATA_CLEANUP_ADHOC, which has only one job named HIST_DATA_CLEANUP_JOB. This job requires a module code as an input parameter, so pass in a value of PRICE to erase price data. This job will also take care of resetting C_HIST_LOAD_STATUS data for pricing loads. This can run after the first step and will take care of price history TMP table cleanup that the support utility does not do.
Make sure you check C_HIST_LOAD_STATUS before starting a new price load, as you may want to enable different tables or change the HIST_LOAD_LAST_DATE values to align with your new data. Verify that the MAX_COMPLETED_DATE and HIST_LOAD_STATUS columns are null for all rows you will be reprocessing.
Purchase Order Loads
Purchase Order data does not support loading of historical records, as all usage of PO data is for forward-looking calculations, such as open on-order amounts. However, it is common to need some test data to be loaded prior to nightly batches so that you can validate the purchase order calculations in your Planning solutions during implementation. When loading PO data, make sure you develop the files following the guidance in the data files generation section of this document and in the RAP interfaces guide.
-
Create the files ORDER_HEAD.csv and ORDER_DETAIL.csv containing a full snapshot of purchase order data for a single date. The DAY_DT value on every record in ORDER_DETAIL.csv must be the same and represent the current business date in RAP at the time the file will be loaded. The load will fail for any other value of DAY_DT besides the current business date, and it will fail if multiple dates are present in the file.
-
Upload the data files and their context files to Object Storage using the RAP_DATA_HIST.zip file.
-
Execute the HIST_ZIP_FILE_LOAD_ADHOC process.
-
Execute the SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC batch process to load the data. Example Postman message body:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC" }
Purchase Order data functions similarly to inventory in that it is a positional fact interface and cannot be loaded or reloaded multiple times for the same dates. If you load some data for a given business date and need to change or erase it, you must truncate the tables using support utilities and then load the new data. You also cannot load the data out of order. Once you load data for one date, you may only load new files for future dates after that point. The data warehouse target tables for this data are W_RTL_PO_DETAILS_D (for the ORDER_HEAD.csv file) and W_RTL_PO_ONORD_IT_LC_DY_F (for the ORDER_DETAIL.csv file).
Prior to starting nightly loads of PO data, you must also choose your configuration option for the parameter PO_FULL_LOAD_IND in the C_ODI_PARAM_VW configuration table in Manage System Configurations. By default, this parameter is set to N, which means that the nightly interface expects to get all delta/incremental updates to your existing PO data each night. This delta includes zero-balance records for when a PO is received in the source and moves to zero units on order. If tracking and sending deltas is not possible, you may change this parameter to Y, which indicates that your nightly file will be a full snapshot of open order records instead. The system will automatically zero out any purchase order lines that are not included in your nightly file, which allows you to extract only the non-zero lines in your source data.
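The full-snapshot behavior (PO_FULL_LOAD_IND set to Y) can be sketched as set logic: any previously open order line absent from tonight's file is zeroed out automatically. The line keys and measure below are simplified assumptions for illustration.

```python
def full_snapshot_updates(open_lines, tonight):
    """Derive the effective updates for a full-snapshot PO load.

    open_lines: currently open (order, item, loc) keys in the warehouse.
    tonight: {key: units_on_order} from tonight's ORDER_DETAIL file,
    which contains only the non-zero lines from the source.
    Returns tonight's values plus zero records for lines that disappeared.
    """
    updates = dict(tonight)
    for key in open_lines:
        if key not in tonight:
            updates[key] = 0  # line closed at the source; zero it out
    return updates

open_lines = {("PO1", "SKU1", "LOC1"), ("PO2", "SKU2", "LOC1")}
tonight = {("PO1", "SKU1", "LOC1"): 40}
print(full_snapshot_updates(open_lines, tonight))
```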
Other History Loads
While sales and inventory are the most common facts to load history for, you may also want to load history for other areas such as receipts and transfers. Separate ad hoc history load processes are available for the following fact areas:
-
HIST_CSV_ADJUSTMENTS_LOAD_ADHOC
-
HIST_CSV_INVRECEIPTS_LOAD_ADHOC
-
HIST_CSV_MARKDOWN_LOAD_ADHOC
-
HIST_CSV_INVRTV_LOAD_ADHOC
-
HIST_CSV_TRANSFER_LOAD_ADHOC
-
HIST_CSV_DEAL_INCOME_LOAD_ADHOC
-
HIST_CSV_ICMARGIN_LOAD_ADHOC
-
HIST_CSV_INVRECLASS_LOAD_ADHOC
-
HIST_STG_CSV_SALES_WF_LOAD_ADHOC
-
HIST_SALES_WF_LOAD_ADHOC
All of these interfaces deal with transactional data (not positional) so you may use them at any time to load history files in each area.
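These processes are triggered with the same Postman/cURL request pattern used for the other ad hoc processes in this chapter. For example, the adjustments history load would be invoked as:

```json
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"HIST_CSV_ADJUSTMENTS_LOAD_ADHOC" }
```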
Note:
These processes are intended to support history data for downstream applications such as AI Foundation and Planning, so the tables populated by each process should, by default, satisfy the data needs of those applications. Jobs not needed by those apps are not included in these processes. Some data files used by AIF and Planning applications do not have a history load process, because the data is only used from the current business date forwards. For Purchase Order data (ORDER_DETAIL.csv), refer to the section below on Seed Positional Facts if you need to load the file before starting your nightly batch processing. For other areas, like transfers/allocations used by Inventory Planning Optimization, those jobs are only included in the nightly batch schedule and do not require any history to be loaded.
Modifying Staged Data
If you find problems in the data you’ve staged in the RAP database (specific to RI/AIF input interfaces), you have the option to directly update those tables from APEX, allowing you to reprocess the records without uploading new files through FTS. You have the privileges to insert, delete, or update records in tables where data is staged before being loaded into the core data model, such as W_RTL_INV_IT_LC_DY_FTS for inventory data.
Directly updating the staging table data can be useful for quickly debugging load failures and correcting minor issues.
For example, suppose you are attempting to load PRODUCT.csv for the first time and you discover that some required fields are missing data for some rows. You may directly update the W_PRODUCT_DTS table to put values in those fields and rerun the POM job, allowing you to progress with your data load and find any additional issues before generating a new file. Similarly, you may have loaded an inventory receipts file, but discovered after staging the file that data was written to the wrong column (INVRC_QTY contains the AMT values and vice versa). You can update the fields, continue loading the data into the target tables to verify it, and then correct your source data from the next run forwards.
These privileges extend only to staging tables, such as table names ending in FTS, DTS, FS, or DS. You cannot modify internal tables holding the final fact or dimension data. You cannot modify configuration tables, as they must be updated from the Control & Tactical Center. The privileges do not apply to objects in the RDX or PDS database schemas.
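As a sketch of the receipts column-swap example above: the staging table name and the amount column name below are assumptions for illustration (only INVRC_QTY is named in the text), so confirm the actual names in the interface specification before running anything like this from APEX:

```sql
-- Hypothetical column-swap fix in a receipts staging table. In Oracle, the
-- right-hand side of each assignment uses the pre-update values, so a single
-- UPDATE swaps the two columns safely.
UPDATE W_RTL_INVRC_IT_LC_DY_FS   -- assumed staging table name
   SET INVRC_QTY     = INVRC_AMT_LCL,
       INVRC_AMT_LCL = INVRC_QTY;
COMMIT;
```

After the update, rerun the POM load job to move the corrected rows into the target tables.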
Reloading Dimensions
It is common to reload dimensions at various points throughout the history load, or even in sync with every history batch run. Ensure that your core dimensions, such as the product and location hierarchies, are up to date and aligned with the historical data being processed. To reload dimensions, you may follow the same process as described in the Initial Dimension Load steps, ensuring that the current business load date in the system is on or before the date in history when the dimensions will be required. For example, if you are loading history files in a monthly cadence, ensure that new product and location data required for the next month has been loaded no later than the first day of that month, so it is effective for all dates in the history data files.
It is also very important to understand that history load procedures are unable to handle reclassifications that have occurred in source systems when you are loading history files. For example, if you are using current dimension files from the source system to process historical data, and products have been reclassified so they are no longer correct for the historical time periods, then your next history load may place sales or inventory under the new classifications, not the ones that were relevant in history. For this reason, reclassifications should be avoided if at all possible during history load activities, unless you can maintain historical dimension snapshots that accurately reflect historical data needs.
Seed Positional Facts
Once sales and inventory history have been processed, you will need to perform seeding of the positional facts you wish to use. Seeding a fact means to load a full snapshot of the data for all active item/locations, thus establishing a baseline position for every possible record before nightly batches start loading incremental updates to those values. Seeding of positional facts should only occur once history data is complete and daily batch processing is ready to begin. Seed loads should also be done for a week-ending date, so that you do not have a partial week of daily data in the system when you start daily batches.
Instead of doing separate seed loads, you also have the option of just providing full snapshots of all positional data in your first nightly batch run. This will make the first nightly batch take a long time to complete (potentially 8+ hours) but it allows you to skip all of the steps documented below. This method of seeding the positional facts is generally the preferred approach for implementers, but if you want to perform manual seeding as a separate activity, review the rest of this section. If you are also implementing RMFCS, you can leverage Retail Data Extractor (RDE) programs for the initial seed load as part of your first nightly batch run, following the steps in that chapter instead.
If you did not previously disable the optional inventory features in C_ODI_PARAM (parameters RI_INVAGE_REQ_IND, RA_CLR_LEVEL, RI_PRES_STOCK_IND, RI_BOH_SEEDING_IND, RI_MOVE_TO_CLR_IND, and RI_MULTI_CURRENCY_IND), then you should review these settings now and set all parameters to N if the functionality is not required. Once this is done, follow the steps below to perform positional seeding:
- Create the files containing your initial full snapshots of positional data. It may be one or more of the following:
  - PRICE.csv
  - COST.csv (used for both BCOST and NCOST data interfaces)
  - INVENTORY.csv
  - ORDER_DETAIL.csv (ORDER_HEAD.csv may already be loaded using the dimension process, but it can also be done now, as it is required in order to load this file)
  - W_RTL_INVU_IT_LC_DY_FS.dat
- Upload the files to Object Storage using the RAP_DATA_HIST.zip file.
- Execute the LOAD_CURRENT_BUSINESS_DATE_ADHOC process to set the load date to be the next week-ending date after the final date in your history load. For example:

  { "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC", "requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31" }

- Execute the ad hoc seeding batch processes depending on which files have been provided. Sample Postman messages:
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_PRICE_IT_LC_DY_F_PROCESS_ADHOC" }
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_NCOST_IT_LC_DY_F_PROCESS_ADHOC" }
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_BCOST_IT_LC_DY_F_PROCESS_ADHOC" }
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC" }
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_INVU_IT_LC_DY_F_PROCESS_ADHOC" }
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC" }
Once all initial seeding is complete and data has been validated, you are ready to perform a regular batch run. Provide the data files expected for a full batch, such as RAP_DATA.zip or RI_RMS_DATA.zip for foundation data, RI_MFP_DATA.zip for externally-sourced planning data (for RI reporting and AI Foundation forecasting), and any AI Foundation Cloud Services files using the ORASE_WEEKLY.zip files. If you are sourcing daily data from RMFCS, then you need to ensure that the RDE batch flow is configured to run nightly along with the RAP batch schedule. Batch dependencies between RDE and RI should be checked and enabled, if they are not already turned on.
From this point on, the nightly batch takes care of advancing the business date and loading all files, assuming that you want the first load of nightly data to occur the day after seeding. The following diagram summarizes a potential set of dates and activities using the history and seeding steps described in this chapter:
Note:
The sequential nature of this flow of events must be followed for positional facts (for example, inventory) but not for transactional facts (such as sales). Transactional data supports posting for dates other than the current system date, so you can choose to load sales history at any point in this process.
Run Nightly Batches
As soon as initial seeding is performed (or instead of initial seeding), you need to start nightly batch runs. If you are using the nightly batch to seed positional facts, ensure your first ZIP file upload for the batch has those full snapshots included. Once those full snapshots are loaded through seeding or the first full batch, then you can send incremental files rather than full snapshots.
Nightly batch schedules can be configured in parallel with the history load processes using a combination of the Customer Modules Management (in Retail Home) and the POM administration screens. It is not recommended to configure the nightly jobs manually in POM, as there are over 500 batch programs; choosing which to enable can be a time-intensive and error-prone activity. Customer Modules Management greatly simplifies this process and preserves dependencies and required process flows. Batch Orchestration describes the batch orchestration process and how you can configure batch schedules for nightly execution.
Once you move to nightly batches, you may also want to switch dimension interfaces from Full to Incremental loading of data. Several interfaces, such as the Product dimension, can be loaded incrementally, sending only the changed records every day instead of a full snapshot. These options use the IS_INCREMENTAL flag in the C_ODI_PARAM_VW table and can be accessed from the Control & Tactical Center. If you are unsure of which flags you want to change, refer to the Retail Insights Implementation Guide for detailed descriptions of all parameters.
Note:
At this time, incremental product and location loads are supported when using RDE for integration or when using legacy DAT files. CSV files should be provided as full snapshots.
As part of nightly batch uploads, also ensure that the parameter file RA_SRC_CURR_PARAM_G.dat is included in each ZIP package, and that it is automatically updated with the current business date for that set of files. This file is used for business date validation so that incorrect files are not processed. It will also help Oracle Support identify the current business date of a particular set of files if they need to intervene in the batch run or retrieve files from the archives for past dates. Refer to the System Parameters File section for file format details.
In summary, here are the main steps that must be completed to move from history loads to nightly batches:
- All files must be bundled into a supported ZIP package like RAP_DATA.zip for the nightly uploads, and this process should be automated to occur every night.
- Include the system parameter file RA_SRC_CURR_PARAM_G.dat in each nightly upload ZIP and automate the setting of the vdate parameter in that file (not applicable if RDE jobs are used).
- Sync POM schedules with the Customer Module configuration using the Sync with MDF button in the Batch Administration screen, restart the POM schedules to reflect the changes, and then review the enabled/disabled jobs to ensure the necessary data will be processed in the batch.
- Move the RI ETL business date up to the date one day before the current nightly load (using LOAD_CURRENT_BUSINESS_DATE_ADHOC). The nightly load takes care of advancing the date from this point forward.
- Close and re-open the batch schedules in POM as needed to align the POM business date with the date used in the data (all POM schedules should be open for the current business date before running the nightly batch).
- Schedule the start time from the Scheduler Administration screen > RI schedule > Nightly tab. Enable it and set a start time. Restart your schedule again to pick up the new start time.
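The packaging and vdate steps above can be sketched as a small script. Everything here is an assumption for illustration (the staging folder name and the single-line date layout written into RA_SRC_CURR_PARAM_G.dat); the authoritative record format is defined in the System Parameters File section of this guide.

```python
# Hypothetical sketch of automating the nightly upload package. The staging
# folder and the parameter-file contents are assumptions; consult the System
# Parameters File section for the real record layout.
import zipfile
from datetime import date
from pathlib import Path

stage = Path("nightly_stage")
stage.mkdir(exist_ok=True)

# Regenerate the parameter file so its vdate always matches tonight's files.
vdate = date.today().strftime("%Y%m%d")
param_file = stage / "RA_SRC_CURR_PARAM_G.dat"
param_file.write_text(vdate + "\n")

# Bundle every flat file plus the parameter file into the supported package.
package = stage / "RAP_DATA.zip"
with zipfile.ZipFile(package, "w") as zf:
    for csv in sorted(stage.glob("*.csv")):
        zf.write(csv, arcname=csv.name)
    zf.write(param_file, arcname=param_file.name)
```

In a real deployment, the script would then transfer the ZIP to Object Storage using whatever upload mechanism your environment provides (for example, FTS), and the whole sequence would run from a scheduler each night.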
Sending Data to AI Foundation
All AI Foundation modules leverage a common batch infrastructure to initialize the core dataset, followed by ad hoc, application-specific programs to generate additional data as needed. Before loading any data into an AI Foundation module, it is necessary to complete initial dimension loads into RI and validate that core structures (calendar, products, locations) match what you expect to see. Once you are comfortable with the data that has been loaded in, leverage the following jobs to move data into one or more AI Foundation applications.
Table 3-6 Extracts for AI Foundation
| POM Process Name | Usage Details |
|---|---|
| INPUT_FILES_ADHOC_PROCESS | Receive inbound zip files intended for AI Foundation, archive and extract the files. This process looks for the |
| RSE_MASTER_ADHOC_PROCESS | Foundation data movement from RI to AI Foundation, including core hierarchies and dimensional data. Accepts many different parameters to run specific steps in the load process. |
| <app>_MASTER_ADHOC_PROCESS | Each AI Foundation module, such as SPO or IPO, has a master job for extracting and loading data that is required for that application, in addition to the |
Because AI Foundation Cloud Services ad hoc procedures have been exposed using only one job in POM, they are not triggered like RI procedures. AI Foundation programs accept a number of single-character codes representing different steps in the data loading process. These codes can be provided directly in POM by editing the Parameters of the job in the Batch Monitoring screen, then executing the job through the user interface.
For example, one string of parameters moves all dimension data from RI to AI Foundation, and additional parameters are available when moving periods of historical data, such as inventory and sales; the specific flag values for each step are shown in the workflow below.
A typical workflow for moving core foundation data into AI Foundation is:
- Load the core foundation files (like Calendar, Product, and Organization) using the AIF DATA schedule jobs.
- Use the RSE_MASTER_ADHOC_PROCESS to move those same datasets to AI Foundation Apps, providing specific flag values to only run the needed steps. The standard set of first-time flags is -pldg, which loads the core hierarchies required by all the applications.
- Load some of your history files for Sales using the AIF DATA jobs to validate the inputs. For example, load 1-3 months of sales for all products or a year of sales for only one department.
- Load the same range of sales to AI Foundation using the sales steps in the master process with optional from/to date parameters. The flags to use for this are -xwa, which loads the transaction table RSE_SLS_TXN as well as all sales aggregates that are shared across AIF.
- Repeat the previous two steps until all sales data is loaded into both RI and AI Foundation.
Performing the process iteratively provides early opportunities to find issues in the data before you’ve loaded everything, but it is not required. You can load all the data into AI Foundation at one time.
Follow the same general flow for the other application-specific, ad hoc flows into the AI Foundation modules. For a complete list of parameters in each program, refer to the AI Foundation Operations Guide.
When loading hierarchies, it is possible to have data issues on the first run due to missing or incomplete records from the customer. You may get an error like the following:
Error occurred in RSE_DDL_UTIL.create_local_index_stmt - Error while creating index: CREATE UNIQUE INDEX TMP$STG$RSE_LOC_SRC_XREF_2 ON TMP$STG$RSE_LOC_SRC_XREF (HIER_TYPE_ID, LOC_EXT_ID, LEAF_NODE_FLG, APP_SOURCE_ID) ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
The INDEX create statement tells you the name of the table and the columns being indexed. Querying that table is required to see what is duplicated or invalid in the source data. Because these tables are created dynamically when the job is run, you will need to first grant access to the table using a procedure like the one below in IW:
begin
  RSE_SUPPORT_UTIL.grant_temp_table(p_prefix => 'STG$RSE_LOC_SRC_XREF');
end;
The value passed into the procedure should be everything after the TMP$ in the table name (not the index name). The procedure also supports two other optional parameters that can be used instead of, or in addition to, the prefix:
- p_suffix – Provide the ending suffix of a temporary table, usually a number like 00001101, to grant access to all tables with that suffix.
- p_purge_flg – Purge flag (Y/N) which indicates whether to drop the temporary tables for a given run.
Once this is executed for a given prefix, a query like this can retrieve the data causing the failure:
SELECT *
  FROM RASE01.TMP$STG$RSE_LOC_SRC_XREF
 WHERE (HIER_TYPE_ID, LOC_EXT_ID, LEAF_NODE_FLG, APP_SOURCE_ID) IN
       (SELECT HIER_TYPE_ID, LOC_EXT_ID, LEAF_NODE_FLG, APP_SOURCE_ID
          FROM RASE01.TMP$STG$RSE_LOC_SRC_XREF
         GROUP BY HIER_TYPE_ID, LOC_EXT_ID, LEAF_NODE_FLG, APP_SOURCE_ID
        HAVING COUNT(*) > 1)
 ORDER BY HIER_TYPE_ID, LOC_EXT_ID, LEAF_NODE_FLG, APP_SOURCE_ID;
For location hierarchy data, the source of the issue will most commonly come from the W_INT_ORG_DH table. For product hierarchy, it could be W_PROD_CAT_DH. These are the hierarchy tables populated in the data warehouse by your foundation data loads.
Sending Data to Planning
If a Planning module is being implemented, then additional AIF DATA schedule jobs should be executed as part of the initial loads and nightly batch runs. These jobs are available through ad hoc calls, and the nightly jobs are included in the AIF DATA nightly schedule. Review the list below for more details on the core Planning extracts available.
Process Overview
Table 3-7 Extracts for Planning
| POM Job Name | Usage Details |
|---|---|
| W_PDS_PRODUCT_D_JOB | Exports a full snapshot of Product master data and associated hierarchy levels, including flex fields for alternates. |
| W_PDS_ORGANIZATION_D_JOB | Exports a full snapshot of Location master data (for stores and virtual warehouses only) and associated hierarchy levels, including flex fields for alternates. |
| W_PDS_CALENDAR_D_JOB | Exports a full snapshot of Calendar data at the day level and associated hierarchy levels, for both Fiscal and Gregorian calendars. Note: While RI exports the entire calendar, PDS will only import 5 years around the RPAS_TODAY date (current year +/- 2 years). |
| W_PDS_EXCH_RATE_G_JOB | Exports a full snapshot of exchange rates. |
| W_PDS_PRODUCT_ATTR_D_JOB | Exports a full snapshot of item-attribute relationships, inclusive of both diffs and UDAs. |
| W_PDS_DIFF_D_JOB | Exports a full snapshot of Differentiators such as Color and Size. |
| W_PDS_DIFF_GRP_D_JOB | Exports a full snapshot of differentiator groups (most commonly size groups used by SPO and AP). |
| W_PDS_UDA_D_JOB | Exports a full snapshot of User-Defined Attributes. |
| W_PDS_BRAND_D_JOB | Exports a full snapshot of Brand data (regardless of whether brands are currently linked to any items). |
| W_PDS_SUPPLIER_D_JOB | Exports a full snapshot of Supplier data (regardless of whether suppliers are currently linked to any items). |
| W_PDS_REPL_ATTR_IT_LC_D_JOB | Exports a full snapshot of Replenishment Item/Location Attribute data (equivalent to |
| W_PDS_DEALINC_IT_LC_WK_A_JOB | Incremental extract of deal income data (transaction codes 6 and 7 from RMFCS) posted in the current business week. |
| W_PDS_PO_ONORD_IT_LC_WK_A_JOB | Incremental extract of future on-order amounts for the current business week, based on the expected OTB date. |
| W_PDS_INV_IT_LC_WK_A_JOB | Incremental extract of inventory positions for the current business week. Inventory is always posted to the current week; there are no back-posted records. |
| W_PDS_SLS_IT_LC_WK_A_JOB | Incremental extract of sales transactions posted in the current business week (includes back-posted transactions to prior transaction dates). |
| W_PDS_INVTSF_IT_LC_WK_A_JOB | Incremental extract of inventory transfers posted in the current business week (transaction codes 30, 31, 32, 33, 37, 38 from RMFCS). |
| W_PDS_INVRC_IT_LC_WK_A_JOB | Incremental extract of inventory receipts posted in the current business week. Only includes purchase order receipts (transaction code 20 from RMFCS). |
| W_PDS_INVRTV_IT_LC_WK_A_JOB | Incremental extract of returns to vendor posted in the current business week (transaction code 24 from RMFCS). |
| W_PDS_INVADJ_IT_LC_WK_A_JOB | Incremental extract of inventory adjustments posted in the current business week (transaction codes 22 and 23 from RMFCS). |
| W_PDS_SLSWF_IT_LC_WK_A_JOB | Incremental extract of wholesale and franchise transactions posted in the current business week (transaction codes 82, 83, 84, 85, 86, 88 from RMFCS). |
| W_PDS_MKDN_IT_LC_WK_A_JOB | Incremental extract of markdowns posted in the current business week (transaction codes 11, 12, 13, 14, 15, 16, 17, 18 from RMFCS). |
| W_PDS_INV_IT_LC_WK_A_INITIAL_JOB | Initial history extract for inventory position data based on the timeframe established in C_SOURCE_CDC. |
| W_PDS_FLEXFACT1_F_JOB | Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures. Flex Fact 1 is also used by LPO to display custom measures. |
| W_PDS_FLEXFACT2_F_JOB | Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures. |
| W_PDS_FLEXFACT3_F_JOB | Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures. |
| W_PDS_FLEXFACT4_F_JOB | Exports flexible fact data from RI that can be at any configurable level of hierarchies. Can be used to extend PDS with additional measures. |
The PDS jobs are linked with several ad hoc processes in POM, providing you with the ability to extract specific datasets on-demand as you progress with history and initial data loads. The table below summarizes the ad hoc processes, which can be called using the standard methods such as cURL or Postman.
Table 3-8 RI Ad Hoc Processes for Planning
| POM Process Name | Usage Details |
|---|---|
| LOAD_PDS_DIMENSION_PROCESS_ADHOC | Groups all of the dimension (D/G table) extracts for PDS into one process for ad hoc execution. Disable individual jobs in POM to run only some of them. |
| LOAD_PDS_FACT_PROCESS_ADHOC | Groups all of the fact (F/A table) extracts for PDS into one process for ad hoc execution. Disable individual jobs in POM to run only some of them. |
| LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC | History extract process for inventory positions, run only to pull a full snapshot of inventory from RI for one or multiple weeks of data. You must set the start and end dates for extraction in the |
| C_LOAD_DATES_CLEANUP_ADHOC | Purge the successful job records from C_LOAD_DATES, which is necessary when you are running the same jobs multiple times per business date. |
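These processes are invoked with the same Postman/cURL request pattern used for the other ad hoc processes in this chapter. For example, to run the dimension extract:

```json
{ "cycleName":"Adhoc", "flowName":"Adhoc", "processName":"LOAD_PDS_DIMENSION_PROCESS_ADHOC" }
```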
Most of the PDS fact jobs leverage the configuration table C_SOURCE_CDC to track the data that has been extracted in each run. On the first run of an incremental job in LOAD_PDS_FACT_PROCESS_ADHOC, the job extracts all available data in a single run. From that point forward, the extract incrementally loads only the data that has been added or modified since the last extract, based on W_UPDATE_DT columns in the source tables. There are two exceptions to this incremental process: the Inventory and On Order interfaces. The normal incremental jobs for these two interfaces always extract the latest day’s data only, because they are positional facts that send the full snapshot of current positions to PDS each time they run.
To move inventory history prior to the current day, you must use the initial inventory extract to PDS (LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC). It requires manually entering the start/end dates to extract, so you must update C_SOURCE_CDC from the Control & Tactical Center for the inventory table record. LAST_MIN_DATE is the start of the history you wish to send, and LAST_MAX_DATE is the final date of history. For example, if you loaded one year of inventory, you might set LAST_MIN_DATE to 04-JUN-22 and LAST_MAX_DATE to 10-JUN-23. Make sure that the timestamps on the values entered are 00:00:00 when saved to the database; otherwise, the comparison between these values and your input data may not align.
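If you maintain these dates through SQL rather than through the Control & Tactical Center screens, using TO_DATE with a date-only format mask guarantees the stored timestamps are 00:00:00. The row-identifying column below (TABLE_NAME) is an assumption about the layout of C_SOURCE_CDC, so verify it in your environment first:

```sql
-- Hypothetical update of the inventory extraction window; the TABLE_NAME
-- column used to locate the row is an assumption for illustration.
UPDATE C_SOURCE_CDC
   SET LAST_MIN_DATE = TO_DATE('04-JUN-22', 'DD-MON-RR'),  -- start of history
       LAST_MAX_DATE = TO_DATE('10-JUN-23', 'DD-MON-RR')   -- final date of history
 WHERE TABLE_NAME = 'W_RTL_INV_IT_LC_WK_A';
COMMIT;
```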
For all other jobs, the extract dates are written automatically to C_SOURCE_CDC alongside the extract table name after each execution, and can be overwritten as needed when doing multiple loads or re-running for the same time period. If you run the same process more than once, use C_LOAD_DATES_CLEANUP_ADHOC to reset the run statuses before the next run, then edit C_SOURCE_CDC to change the minimum and maximum dates that you are pulling data for. Review the table below for a summary of the configuration tables involved in PDS extracts.
Table 3-9 Planning Integration Configuration Tables
| Table | Usage |
|---|---|
| C_SOURCE_CDC | Configuration and tracking table that shows the interfaces supported for RI to Planning integration and the currently processed date ranges. |
| C_LOAD_DATES | Tracks the execution of jobs and the most recent run status of each job. Prevents running the same job repeatedly if it was already successful, unless you first erase the records from this table. |
| RAP_INTF_CONFIG | Configuration and tracking table for integration ETL programs between all RAP modules. Contains their most recent status, run ID, and data retention policy. |
| RAP_INTF_RUN_STATUS | RAP integration run history and statuses. |
| RAP_LOG_MSG | RAP integration logging table; specific contents will vary depending on the program and logging level. |
After the planning data has been extracted from the data warehouse to PDS staging tables (W_PDS_*), the Planning applications use the same programs to extract both full and incremental data for each interface. You can run the dimension and fact loads for planning from the Online Administration (OAT) tasks, or use the RPASCE schedule in POM. Refer to the relevant implementation guides for MFP, AP, or IPO for details on these processes.
Usage Examples
The following examples show how to leverage the PDS extract processes to move data from the data warehouse tables to the PDS staging tables, where the data can be picked up by the Planning applications.
Scenario 1: Initial Dimension Extract
- Perform the initial loads into the data warehouse as described in the section Initialize Dimensions.
- Enable all jobs in the ad hoc process LOAD_PDS_DIMENSION_PROCESS_ADHOC and execute it.
- Verify that data has been moved successfully to the target tables, such as W_PDS_CALENDAR_D, W_PDS_PRODUCT_D, and W_PDS_ORGANIZATION_D.
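As one way to perform the verification step, a simple row-count query against the target tables confirms that the extract produced data (the table names come from the scenario above; what counts as a reasonable row count depends on your data volumes):

```sql
-- Quick sanity check that the PDS staging tables were populated.
SELECT 'W_PDS_PRODUCT_D' AS tbl, COUNT(*) AS cnt FROM W_PDS_PRODUCT_D
UNION ALL
SELECT 'W_PDS_ORGANIZATION_D', COUNT(*) FROM W_PDS_ORGANIZATION_D
UNION ALL
SELECT 'W_PDS_CALENDAR_D', COUNT(*) FROM W_PDS_CALENDAR_D;
```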
Scenario 2: Initial Sales Extract
- Perform the initial sales loads into the data warehouse as described in the section Sales History Load.
- Enable the job W_PDS_SLS_IT_LC_WK_A_JOB in the ad hoc process LOAD_PDS_FACT_PROCESS_ADHOC and execute the process.
- Verify that data has been moved successfully to the target tables W_PDS_SLS_IT_LC_WK_A and W_PDS_GRS_SLS_IT_LC_WK_A.
- Repeat the same steps for any other transactional history loads, such as adjustments, transfers, and RTVs (using the appropriate PDS job as described in the prior section).
Scenario 3: Initial Inventory Extract
- Perform the initial inventory loads into the data warehouse as described in the section Inventory Position History Load.
- Open the C_SOURCE_CDC table in Manage System Configurations and locate the row for W_RTL_INV_IT_LC_WK_A. Edit the values in the LAST_MIN_DATE and LAST_MAX_DATE columns to fully encompass your range of historical dates in the inventory history.
- Enable the job W_PDS_INV_IT_LC_WK_A_INITIAL_JOB in the ad hoc process LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC and execute the process.
- If the job fails with error code “ORA-01403: no data found,” it generally means that the dates in C_SOURCE_CDC are not set or do not align with your historical data. Update the dates and re-run the job.
- Verify that data has been moved successfully to the target table W_PDS_INV_IT_LC_WK_A.
Scenario 4: Updated Data Extracts
- Load additional history data as required for the project. For example, you may have started with one month of data for testing and are now going to load an additional 2 years of data.
- If the C_LOAD_DATES_CLEANUP_ADHOC process was not run at the start of your current data load, run it now to ensure there are no previous run statuses from past PDS export runs that will interfere with new runs.
- Run LOAD_PDS_DIMENSION_PROCESS_ADHOC to ensure the dimensions for PDS are in sync with the new history data being loaded.
- Review C_SOURCE_CDC and alter the minimum/maximum dates on each table, if required. If you want to re-push your entire history for a fact, including the previously loaded data, then you will need to adjust the dates before running the extracts.
- Run LOAD_PDS_FACT_PROCESS_ADHOC for all the facts, such as sales and receipts, that have had new history loaded in the data warehouse.
- Run LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC to pull the desired range of inventory periods.
- Verify the target tables contain the expected data before starting the import into Planning applications.
Generating Forecasts for MFP
Before you can complete an MFP implementation, you must set up and run forecasting within AI Foundation Cloud Services. Review the steps below for the initial setup of Forecasting:
- In Manage Forecast Configurations in the AI Foundation UI, start by setting up a run type in the Setup train stop.
- Click the + icon above the table and fill in the fields in the popup. For MFP forecasting, the forecast method should be selected as Automatic Exponential Smoothing.
- Create a run type for each forecast measure/forecast intersection combination that is required for MFP.
- Create test runs in the Test train stop once you are done setting up the run types:
  - Click a run type in the top table, then click the + icon in the bottom table to create a run.
  - If necessary, change the configuration parameters for the estimation process and forecast process in their respective tabs in the Create Run popup. For example, if you want to test a run using the Bayesian method, edit the Estimation Method parameter in the Estimation tab using the edit icon above the table.
  - After modifying and reviewing the configuration parameters, click the Submit button to start the run.
- Once the run is complete, the status changes to Forecast Generation Complete.
Doing test runs is an optional step. In addition, you will need to modify and review the configurations of the run type, activate the run type, enable auto-approve, and map the run type to the downstream application (in this case, MFP). In the Manage train stop, select a row, click Edit Configurations Parameters, and edit the estimation and forecast parameters as needed. Once you are done, go to the Review tab, click Validate, then close the tab.
Note:
If the run type is active, you will only be able to view the parameters. To edit the parameters, the run type must be inactive.
To activate the run type and enable auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to MFP, go to the Map train stop and click the + icon to create a new mapping.
When configuring forecasts for the MFP base implementation, the following list of forecast runs may be required, and you will want to configure and test each run type following the general workflow above. Additional runs can be added to satisfy your MFP implementation requirements.
Note:
The “Channel” level in MFP is often referred to as the “Area” level in RI and AI Foundation, so be sure to select the correct levels that align to your hierarchy.

MFP Plan | MFP Levels | Method | Data Source | Measure |
---|---|---|---|---|
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Amt |
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Amt |
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Unit |
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Unit |
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Total Returns Amount |
MFP Merch Target | Department-Channel-Week | Auto ES | Store Sales | Total Returns Units |
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Amt |
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Amt |
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Unit |
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Clearance Gross Sales Unit |
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Total Returns Amount |
MFP Merch Plan | Subclass-Channel-Week | Auto ES | Store Sales | Total Returns Units |
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Gross Sales Amount |
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Gross Sales Unit |
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Returns Amount |
MFP Location Target | Company-Location-Week | Auto ES | Store Sales | Total Returns Units |
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Gross Sales Amount |
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Gross Sales Unit |
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Returns Amount |
MFP Location Plan | Department-Location-Week | Auto ES | Store Sales | Total Returns Units |
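If you want to track this configuration checklist programmatically during an implementation, the run types above can be represented as plain data and sanity-checked before you enter them in the Forecasting module. This is purely an illustrative bookkeeping sketch for the implementer; the list and names mirror the table above and are not an Oracle API.

```python
# Illustrative checklist of the MFP base-implementation forecast runs from the
# table above. Every base MFP run uses the Auto ES method against Store Sales.
METHOD, DATA_SOURCE = "Auto ES", "Store Sales"

MERCH_MEASURES = (
    "Regular and Promotion Gross Sales Amt", "Clearance Gross Sales Amt",
    "Regular and Promotion Gross Sales Unit", "Clearance Gross Sales Unit",
    "Total Returns Amount", "Total Returns Units",
)
LOCATION_MEASURES = (
    "Total Gross Sales Amount", "Total Gross Sales Unit",
    "Total Returns Amount", "Total Returns Units",
)

# (plan, levels, measure) tuples for every required run type.
MFP_RUNS = (
    [("MFP Merch Target", "Department-Channel-Week", m) for m in MERCH_MEASURES]
    + [("MFP Merch Plan", "Subclass-Channel-Week", m) for m in MERCH_MEASURES]
    + [("MFP Location Target", "Company-Location-Week", m) for m in LOCATION_MEASURES]
    + [("MFP Location Plan", "Department-Location-Week", m) for m in LOCATION_MEASURES]
)
```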
Generating Forecasts for Inventory Planning Optimization Cloud Service-Demand Forecasting
The same forecasting interface described in the previous section for MFP is also used to generate the base demand and initial forecasts for Inventory Planning Optimization Cloud Service-Demand Forecasting (IPOCS-Demand Forecasting). Demand and forecasts must be generated in AI Foundation as part of your forecasting implementation. The general workflow is the same, but the forecasting levels and methods used will vary depending on your implementation needs. For example, your IPOCS-Demand Forecasting forecasts may be at an item/location/week level of granularity instead of the higher levels that MFP requires. You will also use other forecasting methods, such as Causal-Short Life Cycle, instead of the MFP default method (Auto ES).
IPOCS-Demand Forecasting directly integrates the demand and forecast parameters between AI Foundation Cloud Services and PDS tables using the RAP data exchange layer (RDX) as needed. Outputs from the forecasting engine will be written to tables prefixed with RSE_FCST_*. Outputs from IPOCS-Demand Forecasting back into the data exchange layer will be in tables prefixed with RDF_APPR_FCST_*. For more details on importing the forecasts after they are generated, refer to the IPO Demand Forecasting Implementation Guide.
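As a quick illustration of the table-prefix convention just described, a small helper can classify a data-exchange table name by its prefix. The function and return labels are ours for illustration, not part of the product:

```python
def classify_rdx_table(name: str) -> str:
    """Classify a RAP data exchange table by the prefixes described above."""
    if name.startswith("RSE_FCST_"):
        # Written by the AI Foundation forecasting engine for IPOCS-DF to read.
        return "forecast engine output"
    if name.startswith("RDF_APPR_FCST_"):
        # Written by IPOCS-Demand Forecasting back into the data exchange layer.
        return "approved forecast output"
    return "other"
```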
Implementation Flow Example
The steps below describe a minimal implementation workflow for IPOCS-Demand Forecasting, which has the most complex process of the Planning applications. A similar process would be followed for other Planning applications, except the integration would largely be one-way (AIF pushing data to MFP/AP). Note that these manual steps are provided for first-time implementations and testing; all jobs would be automated as part of nightly batches before going live with the application.
Retail Insights is used as the foundational data warehouse that collects and coordinates data on the platform. You do not need to purchase Retail Insights Cloud Service to leverage the data warehouse for storage and integration; it is included as part of any RAP solution. Regardless of which RAP solutions you are implementing, the integration flows shown below are the same.
This section provides high-level steps to give you an understanding of the major events in the setup process; detailed explanations may be found in the AI Foundation Implementation Guide sections on “Forecast Configuration for IPO-DF and AIF” and "Workflow for IPO-DF Implementation". Refer to that document for additional configuration guidance; this section is only a summary to help you understand the interactions between the applications.
Figure 3-2 Integration Summary
- Integrate your foundation data (core dimensions and fact history) using either RMFCS direct loads or object storage file uploads and run the load programs following the steps described earlier in this chapter.
- Move foundation data to the Data Exchange (RDX) using the LOAD_PDS_DIMENSION_PROCESS_ADHOC and LOAD_PDS_FACT_PROCESS_ADHOC processes.
- AI Foundation and Forecast Setup Flow
  - Move the data to AI Foundation using the RSE_MASTER_ADHOC_JOB process (passing in the appropriate parameters for your data).
  - Set up run types and execute test runs in the Forecasting module of AI Foundation, then approve and map those runs to IPOCS-Demand Forecasting. Set up Flex Groups in AIF to be used with the forecasts in IPOCS-Demand Forecasting.
  - Export AIF setup data for IPOCS-Demand Forecasting to the Data Exchange (RDX) using the jobs below (MFP and AP do not require most of these jobs; instead you would simply run RSE_FCST_EXPORT_ADHOC_PROCESS jobs for MFP/AP exports):
    - RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS
    - RSE_PROMO_OFFER_EXPORT_ADHOC_PROCESS
    - RSE_FCST_EXPORT_ADHOC_PROCESS (enabling the IPOCS-Demand Forecasting job only)
- IPOCS-Demand Forecasting Setup Flow
  - Import hierarchy and foundation data from RDX to IPOCS-Demand Forecasting.
  - Import any app-specific non-foundation files from Object Storage to IPOCS-Demand Forecasting directly.
  - Perform your initial IPOCS-Demand Forecasting Workflow Tasks following the IPO-DF Implementation and User Guides, such as building the domain, setting up forecast parameters, new items, what-ifs, and so on.
  - Run the IPOCS-Demand Forecasting Pre-Forecast and Export Forecast Parameters Batches.
- Forecast Execution Flow
  - Import the updated IPOCS-Demand Forecasting parameters to AIF using the jobs:
    - RSE_RDX_FCST_PARAM_ADHOC_PROCESS
    - RSE_FCST_RDX_NEW_ITEM_ENABLE_ADHOC_PROCESS
    - RSE_LIKE_RDX_RSE_ADHOC_PROCESS
    - PMO_EVENT_IND_RDF_ADHOC_PROCESS
  - Return to the AIF Forecasting module and generate new forecasts using the IPOCS-Demand Forecasting parameters. Create new runs under the same run type as before, generate the forecast(s), approve the demand parameters, and click Approve Base Demand and Forecast. Ensure you activate the run type from the Manage Forecast Configurations screen and enable auto-approve (if starting nightly runs).
  - Export the forecasts using RSE_FCST_EXPORT_ADHOC_PROCESS (you can directly run the IPOCS-Demand Forecasting job RSE_RDF_FCST_EXPORT_ADHOC_JOB), RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS, and RSE_PROMO_OFFER_SALES_EXPORT_ADHOC_PROCESS.
  - Import the forecasts to IPOCS-Demand Forecasting. Also re-run any of the previous IPOCS-Demand Forecasting steps if any other data has changed since the last run.
- Forecast Approval Flow
  - Perform IPOCS-Demand Forecasting Workflow Tasks to review/modify/approve the final forecasts (Run Post Forecast Batch, Build Forecast Review, run Forecast What-Ifs).
  - Export the approved forecast from IPOCS-Demand Forecasting using the Export Approved Forecast OAT Task or the associated POM job.
  - Use AIF POM jobs to process the approved forecast and generate flat file exports for RMFCS (if required):
    - RSE_RDX_APPD_FCST_ADHOC_PROCESS (import final forecast to AIF)
    - RSE_RDF_APPR_FCST_EXPORT_ADHOC_PROCESS (export week level)
    - RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_PROCESS (export day level)
Generating Forecasts for AP
The same forecasting interface described in the previous section for MFP is also used to generate the Assortment Planning forecasts. When configuring forecasts for the AP base implementation, the following list of forecast runs may be required, and you will want to configure and test each run type. Additional runs can be added to satisfy your AP implementation requirements.
Note:
Although AP has an in-season plan, it still leverages Auto ES as the base forecasting method. Bayesian (which includes plan data in the forecast) is set up as the estimation method for the run. This is also why Store Sales is set as the data source for all runs: all runs have the ability to include plan data based on the estimation methods used (in addition to store sales).
AP Plan | AP Levels | Method | Data Source | Measure |
---|---|---|---|---|
Item Plan Pre-Season | Item-Location-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Units |
Item Plan In-Season | Item-Location-Week | Auto ES (with Bayesian estimation method) | Store Sales | Regular and Promotion Gross Sales Units |
Subclass Plan Pre-Season | Subclass-Location-Week | Auto ES | Store Sales | Regular and Promotion Gross Sales Units |
Loading Plans to RI
If you are implementing a Planning module and need to generate plan-influenced forecasts (such as the AP in-season forecast), then you must first integrate your plans into RI. RI acts as the single data warehouse for this data, both when the plans come from Oracle Retail solutions and when they come from outside of Oracle.
If you are pushing the plans from MFP or AP directly, then you will enable a set of POM jobs to copy plan outputs directly to RI tables. You may also use an ad hoc process to move the plan data on-demand during the implementation.
Table 3-10 Plan Extracts from MFP and AP
POM Job Name | Usage Details |
---|---|
W_RTL_PLAN1_PROD1_LC1_T1_FS_SDE_JOB | Extract the MFP plan output from the |
W_RTL_PLAN2_PROD2_LC2_T2_FS_SDE_JOB | Extract the MFP plan output from the |
W_RTL_PLAN3_PROD3_LC3_T3_FS_SDE_JOB | Extract the MFP plan output from the |
W_RTL_PLAN4_PROD4_LC4_T4_FS_SDE_JOB | Extract the MFP plan output from the |
W_RTL_PLAN5_PROD5_LC5_T5_FS_SDE_JOB | Extract the AP plan output from the |
In all of the jobs above, the target table name is in the job name (such as W_RTL_PLAN2_PROD2_LC2_T2_FS as the target for the PLAN2 extract). Once the data is moved to the staging layer in RI, the source-agnostic fact load jobs will import the data (W_RTL_PLAN2_PROD2_LC2_T2_FS is loaded to W_RTL_PLAN2_PROD2_LC2_T2_F, and so on). The fact tables in RI are then available for AI Foundation jobs to import them as needed for forecasting usage.
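The staging-to-target naming convention can be captured in a small helper, shown here as an illustrative sketch only (it simply assumes the `_FS`/`_F` suffix pattern described above holds):

```python
def target_for_staging(staging_table: str) -> str:
    """Map an RI staging table name (suffix _FS) to its target fact table (_F)."""
    if not staging_table.endswith("_FS"):
        raise ValueError(f"not a staging table name: {staging_table}")
    # Drop the trailing "S" of "_FS" to form the target table name.
    return staging_table[: -len("_FS")] + "_F"
```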
The plan tables in RI have configurable levels based on your Planning implementation. The default levels are aligned to the standard outputs of MFP and AP if you do not customize or extend them. If you have modified the planning solution to operate at different levels, then you must also reconfigure the RI interfaces to match. This includes the use of flexible alternate hierarchy levels (for example, the PRODUCT_ALT.csv interface), which require custom configuration changes to the PLAN interfaces before you can bring that data back to RI and AIF. These configurations are in C_ODI_PARAM_VW, accessed from the Control & Tactical Center in AI Foundation. For complete details on the plan configurations in RI, refer to the Retail Insights Implementation Guide.
Loading Forecasts to RI
If you are implementing IPOCS-Demand Forecasting and RI, then you may want to integrate your approved forecast with RI for reporting. The same RI interfaces are used both for IPOCS-Demand Forecasting forecasts and external, non-Oracle forecasts.
If you are pushing the forecast from IPOCS-Demand Forecasting, use a set of POM jobs to copy forecast outputs directly to RI tables. You may also use an ad hoc process to move the forecast data on-demand during the implementation.
Table 3-11 Forecast Extracts from AIF and IPOCS-Demand Forecasting
POM Job Name | Usage Details |
---|---|
W_RTL_PLANFC_PROD1_LC1_T1_FS_SDE_JOB | Extracts the baseline forecast from the AI Foundation table |
W_RTL_PLANFC_PROD2_LC2_T2_FS_SDE_JOB | Extracts the final, approved forecast from the AI Foundation view |
In all of the jobs above, the target table name is in the job name (such as W_RTL_PLANFC_PROD1_LC1_T1_FS as the target for the PLANFC1 extract). Once the data is moved to the staging layer in RI, the source-agnostic fact load jobs will import the data (W_RTL_PLANFC_PROD1_LC1_T1_FS is loaded to W_RTL_PLANFC_PROD1_LC1_T1_F, and so on).
The forecast tables in RI have configurable levels based on your implementation. The default levels are aligned to the standard outputs of AI Foundation and IPOCS-Demand Forecasting (item/location/week) if you do not customize or extend them. If you have modified the IPOCS-Demand Forecasting solution to operate at different levels, then you must also reconfigure the RI interfaces to match. These configurations are in C_ODI_PARAM_VW, accessed from the Control & Tactical Center in AI Foundation. For complete details on the forecast configurations in RI, refer to the Retail Insights Implementation Guide.
Loading Aggregate History Data
If you are only looking to implement a Planning solution or certain basic modules of AI Foundation and you cannot provide transaction-level history data, then you have the option to load pre-aggregated historical fact data into RAP, bypassing the usual transaction-level interfaces. The custom fact aggregates allow for up to 4 different intersections of measure data at levels at or above item/location/date. The fact columns are generic and accept various numeric measure data across all typical functional areas (sales, receipts, inventory, transfers, and so on) in the same interface. The non-numeric fields on each interface are only for integration to PDS; they won’t be used by AI Foundation.
The aggregate fact interface files and their associated data warehouse tables are listed below. Refer to the RAP Interfaces Guide in My Oracle Support for complete file specifications.
Filename | Staging Table | Target Table |
---|---|---|
W_RTL_FACT1_PROD1_LC1_T1_FS.dat | W_RTL_FACT1_PROD1_LC1_T1_FS | W_RTL_FACT1_PROD1_LC1_T1_F |
W_RTL_FACT2_PROD2_LC2_T2_FS.dat | W_RTL_FACT2_PROD2_LC2_T2_FS | W_RTL_FACT2_PROD2_LC2_T2_F |
W_RTL_FACT3_PROD3_LC3_T3_FS.dat | W_RTL_FACT3_PROD3_LC3_T3_FS | W_RTL_FACT3_PROD3_LC3_T3_F |
W_RTL_FACT4_PROD4_LC4_T4_FS.dat | W_RTL_FACT4_PROD4_LC4_T4_FS | W_RTL_FACT4_PROD4_LC4_T4_F |
You must configure the data intersections for these tables before you can use them, as each table can only have one intersection defined. The parameters are in the C_ODI_PARAM_VW table in the Control & Tactical Center Manage System Configurations screen. The parameters for each interface are listed below.
Parameter Name | Default Value |
---|---|
RI_FACT1_ATTR_LEVEL | ALL |
RI_FACT1_CAL_LEVEL | DAY |
RI_FACT1_ORG_LEVEL | LOCATION |
RI_FACT1_PROD_LEVEL | ITEM |
RI_FACT1_SUPP_LEVEL | ALL |
RI_FACT2_ATTR_LEVEL | ALL |
RI_FACT2_CAL_LEVEL | DAY |
RI_FACT2_ORG_LEVEL | LOCATION |
RI_FACT2_PROD_LEVEL | ITEM |
RI_FACT2_SUPP_LEVEL | ALL |
RI_FACT3_ATTR_LEVEL | ALL |
RI_FACT3_CAL_LEVEL | DAY |
RI_FACT3_ORG_LEVEL | LOCATION |
RI_FACT3_PROD_LEVEL | ITEM |
RI_FACT3_SUPP_LEVEL | ALL |
RI_FACT4_ATTR_LEVEL | ALL |
RI_FACT4_CAL_LEVEL | DAY |
RI_FACT4_ORG_LEVEL | LOCATION |
RI_FACT4_PROD_LEVEL | ITEM |
RI_FACT4_SUPP_LEVEL | ALL |
In the current release, the ATTR and SUPP parameters should remain as ALL; other options are not supported when integrating the data throughout the platform. You can configure the PROD, ORG, and CAL levels for each interface to match the intersection of data being loaded there. Valid parameter values for each type are listed below.
Product (PROD) | Organization (ORG) | Calendar (CAL) |
---|---|---|
CMP | COMPANY | YEAR |
DIV | CHAIN | HALFYEAR |
GRP | AREA | QUARTER |
DEPT | REGION | PERIOD |
CLS | DISTRICT | WEEK |
SBC | LOCATION | DAY |
ITEM | | |
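Before updating C_ODI_PARAM_VW, it can be useful to validate a planned intersection against the table above. The following sketch simply restates the valid values as data; the function name is ours for illustration:

```python
# Valid level values per dimension, restating the table above.
VALID_LEVELS = {
    "PROD": {"CMP", "DIV", "GRP", "DEPT", "CLS", "SBC", "ITEM"},
    "ORG": {"COMPANY", "CHAIN", "AREA", "REGION", "DISTRICT", "LOCATION"},
    "CAL": {"YEAR", "HALFYEAR", "QUARTER", "PERIOD", "WEEK", "DAY"},
}

def check_fact_levels(prod: str, org: str, cal: str) -> bool:
    """Return True when a FACT interface intersection uses only supported values."""
    return (
        prod in VALID_LEVELS["PROD"]
        and org in VALID_LEVELS["ORG"]
        and cal in VALID_LEVELS["CAL"]
    )
```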
Before using the interfaces, you must also partition them using either day- or week-level partitioning (depending on the data intersections specified above). Partitioning is controlled using two tables accessible from the Control & Tactical Center: C_MODULE_ARTIFACT and C_MODULE_EXACT_TABLE.

In C_MODULE_ARTIFACT, locate the rows where the module code starts with FACT (such as FACT1) and set them to both ACTIVE_FLG=Y and PARTITION_FLG=Y.

Locate the same modules in C_MODULE_EXACT_TABLE and modify the columns PARTITION_COLUMN_TYPE and PARTITION_INTERVAL to be either WK (for week-level data) or DY (for day-level data). Lastly, run the partitioning process as described in Calendar and Partition Setup.
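To make the two configuration changes above concrete, the following sketch generates illustrative UPDATE statements per module. The column name MODULE_CODE is our assumption based on the "module code" wording above, and in practice you would make these changes through the Control & Tactical Center screens rather than raw SQL:

```python
def partition_config_sql(module: str, interval: str) -> list:
    """Build illustrative UPDATE statements for FACT1-4 partitioning setup.

    interval must be 'WK' (week-level data) or 'DY' (day-level data).
    """
    if interval not in ("WK", "DY"):
        raise ValueError("interval must be 'WK' or 'DY'")
    return [
        # Activate and flag the module for partitioning.
        "UPDATE C_MODULE_ARTIFACT SET ACTIVE_FLG = 'Y', PARTITION_FLG = 'Y' "
        f"WHERE MODULE_CODE = '{module}'",
        # Set the partition column type and interval for the same module.
        f"UPDATE C_MODULE_EXACT_TABLE SET PARTITION_COLUMN_TYPE = '{interval}', "
        f"PARTITION_INTERVAL = '{interval}' WHERE MODULE_CODE = '{module}'",
    ]
```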
After the interfaces are configured and partitioned, you must prepare the data files for upload following these guidelines:
- All key columns on the interface must be populated, even if you have specified ALL as the data level. You should use a default value of -1 to populate these fields. This includes the fields PROD_DH_NUM, PROD_DH_ATTR, ORG_DH_NUM, SUPPLIER_NUM, and CAL_DATE.
- The calendar (CAL_DATE) field must always be a date. If loading the data above day level, use the end-of-period date. The format must match the date mask specified on the context (CTX) file.
- The PLANNING_TYPE_CODE field was originally used to specify whether the planning domain was COST or RETAIL, but this makes no functional difference in the data file at this time and can be set to any value for your own reference.
- The VERSION_NUM field is for future use; it can be defaulted to a value of 0.
- The DATASOURCE_NUM_ID field must be provided with a hard-coded value of 1, similar to all other interface specifications that contain this column.
- The INTEGRATION_ID field must be provided with a unique value that identifies the record, such as a concatenation of all primary key values.
- The data file should be formatted based on the options specified in the associated context (CTX) file, such as choosing to use pipes or commas for delimiters.
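Putting these guidelines together, here is a minimal sketch of building one pipe-delimited FACT1 record with defaulted keys and an end-of-week date. The column order and the SLS_QTY measure are illustrative assumptions; always follow the actual interface specification and your CTX file:

```python
import datetime

def fact1_record(item, loc, week_end, sls_qty):
    """Build one illustrative pipe-delimited FACT1 line per the guidelines above."""
    fields = {
        "ITEM": item,
        "ORG_NUM": loc,
        # End-of-period date; the format must match the CTX file's date mask.
        "CAL_DATE": week_end.strftime("%Y-%m-%d"),
        # Unused key columns still must be populated; default them to -1.
        "PROD_DH_NUM": "-1",
        "PROD_DH_ATTR": "-1",
        "ORG_DH_NUM": "-1",
        "SUPPLIER_NUM": "-1",
        "PLANNING_TYPE_CODE": "RETAIL",  # no functional impact currently
        "VERSION_NUM": "0",              # reserved for future use
        "DATASOURCE_NUM_ID": "1",        # always hard-coded to 1
        "SLS_QTY": str(sls_qty),
    }
    # INTEGRATION_ID uniquely identifies the record, e.g. concatenated keys.
    fields["INTEGRATION_ID"] = f"{item}~{loc}~{fields['CAL_DATE']}"
    return "|".join(fields.values())
```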
To load the files into the data warehouse, use the standalone process in the AIF DATA schedule named HIST_AGGR_FACT_LOAD_ADHOC. Make sure you enable and run all jobs related to your target table (such as W_RTL_FACT1_PROD1_LC1_T1_F). The sequence of jobs to be executed for one table is as follows:
- VARIABLE_REFRESH_JOB
- ETL_REFRESH_JOB
- W_RTL_FACT1_PROD1_LC1_T1_FS_COPY_JOB
- W_RTL_FACT1_PROD1_LC1_T1_FS_STG_JOB
- W_FACT1_PROD1_LC1_T1_F_VALIDATOR_JOB
- W_RTL_FACT1_PROD1_LC1_T1_TMP_JOB
- W_RTL_FACT1_PROD1_LC1_T1_F_JOB
- RABE_TO_RTLWSP_GRANTS_JOB
To push the data downstream to Planning applications, use the standalone processes named LOAD_PDS_FACT1_AGGR_PROCESS_ADHOC through LOAD_PDS_FACT4_AGGR_PROCESS_ADHOC. The planning loads will populate RDX schema tables, such as W_PDS_FACT1_PROD1_LC1_T1_F, which can then be used for customizations and extensions in PDS (in the GA solutions this data would not be used).
After data is loaded into the core data warehouse tables, you will also need to configure and load the AI Foundation application tables before the data is accessible to any AIF modules. Because the intersections for the data are flexible and the populated columns are unknown until the data is loaded, you will need to instruct the system on how to use your aggregate data.
The measure metadata will be stored in the AIF table RSE_MD_CDA. This table is loaded programmatically using an ad hoc job in the RSP schedule named RSE_AGGREGATE_METADATA_LOAD_ADHOC_JOB. The program will detect the columns with data and add entries for each measure with a generic name assigned. Once the program is complete, you can modify the UI display name to be something meaningful to end-users from the Control & Tactical Center.
The measures themselves will first be loaded into RSE_PR_LC_CAL_CDA, which is the staging area in AIF to prepare the measures for the applications. After the metadata is configured, you may run another ad hoc job in the RSP schedule named RSE_AGGREGATE_ACTUALS_LOAD_ADHOC_JOB. This will populate the columns in RSE_PR_LC_CAL_CDA based on their metadata.
Lastly, you must map the measure data into the application tables that require access to aggregate facts. This is performed using the configuration table RSE_MD_FACT_COLUMN_MAP, which is accessible for inserts and updates in the Control & Tactical Center. Possible configuration options supported by the AIF applications are listed in their respective implementation guides, but a sample set of values is provided below for a sales and inventory measure mapping, which are the most common use cases:
SOURCE_TABLE | SOURCE_COLUMN | TARGET_TABLE | TARGET_COLUMN |
---|---|---|---|
W_RTL_FACT1_PROD1_LC1_T1_F | SLSRG_QTY | RSE_SLS_PR_LC_WK | SLS_QTY |
W_RTL_FACT1_PROD1_LC1_T1_F | BOH_QTY | RSE_INV_PR_LC_WK_A | INV_QTY_BOH |
Separate POM jobs are included in the RSP schedule to move the data from the CDA tables to their final target tables. The jobs come in pairs, with job names ending in AGGR_MEAS_SETUP_ADHOC_JOB followed by AGGR_MEAS_PROCESS_ADHOC_JOB. For example, to load the sales table in the sample mapping, use RSE_SLS_PR_LC_WK_AGGR_MEAS_SETUP_ADHOC_JOB and RSE_SLS_PR_LC_WK_AGGR_MEAS_PROCESS_ADHOC_JOB. For additional details on the individual AIF application usage of these mappings and jobs, refer to the AIF Implementation Guide.
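The job-pair naming convention can be sketched as a small helper (hypothetical; it simply appends the documented suffixes to a target table name):

```python
def aggr_measure_jobs(target_table: str):
    """Return the (setup, process) ad hoc job name pair for a CDA measure move."""
    return (
        f"{target_table}_AGGR_MEAS_SETUP_ADHOC_JOB",
        f"{target_table}_AGGR_MEAS_PROCESS_ADHOC_JOB",
    )
```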
If you need in-season forecasts, then you must plan to configure MFP or AP plan exports to RI as part of your planning implementation. You must populate the same columns on the plan exports that you are using on the FACT1-4 interfaces for actuals. When generating in-season forecasts with aggregated data, the forecast process expects the same column in a PLAN and FACT table at the same intersection, so that it can load the associated plan measure alongside the actuals and perform a plan-influenced forecast run. For example, if you are populating the SLS_QTY column on the FACT1 interface, then you must also send an SLS_QTY value on the PLAN1 interface, or else it won’t be used in the plan-influenced forecast.
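This column alignment lends itself to a simple pre-flight check on your interface design. The sketch below is illustrative; the sets represent the columns you plan to populate on each interface:

```python
def missing_plan_columns(fact_columns, plan_columns):
    """Columns populated on a FACT interface but absent from its PLAN counterpart.

    Any column returned here would not be usable in plan-influenced forecasts.
    """
    return set(fact_columns) - set(plan_columns)
```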
Migrate Data Between Environments
The process of moving all your data from one cloud environment to another is referred to as a “lift-and-shift” process. It is common to perform all of the data load activities in this chapter in one environment, then have the final dataset copied into another environment (instead of reloading it all from the beginning). This activity is performed by Oracle and can be requested using a Service Request. The lift-and-shift process can take several days to complete, depending on data volumes and how many other Oracle applications may be involved. It is expected that you will raise SRs to schedule the activity at least 2-3 weeks in advance of the target date.
When requesting the activity, you may specify if you need to migrate the entire database (both data and structures) or only the data. The first time doing this process must be a full migration of data and structures to synchronize the source and target environments. It is currently recommended to begin your implementation in the Production environment to avoid needing a lift-and-shift in order to go live. Around the time of your go-live date, you can request a lift-and-shift be done into your non-production environments to synchronize the data for future use.
Note:
The product versions between the source and target must be aligned. It is the project team’s responsibility to plan for appropriate upgrades and lift-and-shift activities such that this will be true.