2 AI Foundation Data Standalone Processes
The primary function of standalone processes in the AI Foundation Data (AIF DATA) schedule is to load history data in a new environment for use in one or more applications on the platform. These process flows group together multiple, related programs that load data files, stage them in the database, and transform them into multiple target tables in the data warehouse. Processes are also available for integrations with Planning applications (MFP, AP, and so on) and Xstore.
Adjustments History Load
Module Name | HIST_CSV_ADJUSTMENTS_LOAD_ADHOC |
---|---|
Description | Loads the ADJUSTMENT.csv file containing inventory adjustment transaction history. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
The history load process for Inventory Adjustment transactions accepts an input file at the transaction level using the
file specification for ADJUSTMENT.csv
. It assumes the file has already been moved into place using the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms
it to RI’s internal staging tables, then loads it into the base fact (item/location/day), as well as the week aggregate used
for integrations (item/location/week). The Reason dimension is also seeded with records if the reason code and reason description
are provided on the transactions.
Note:
This process does not currently populate BI aggregate tables. Those jobs need to be run separately after each execution of this process if it is necessary to use this data for reporting in RI.
Aggregate Fact History Load
Module Name | HIST_AGGR_FACT_LOAD_ADHOC |
---|---|
Description | Loads pre-aggregated fact data from flat files into the data warehouse for data that is above the item/location intersection. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
The history load process for aggregate fact data is intended for new customers or migrating customers that cannot provide
item/location level history for sales, inventory, and other areas. Pre-aggregated data can be loaded into one of four identical
file interfaces that support a wide variety of measures across all functional areas. Each interface must have data for a single
intersection (such as subclass/area/week) and the intersection must be configured in C_ODI_PARAM_VW
before
running the loads. This process will take care of importing the files and loading the data into data warehouse tables for
storage.
There are jobs included in this process to validate the partition structure of these tables, because they support flexible
partitioning by day or week levels. The validator jobs (such as W_FACT1_PROD1_LC1_T1_F_VALIDATOR_JOB
) verify
whether the partition structure is correct relative to the configuration in C_ODI_PARAM_VW
. If the configuration
and table structures do not match, it will automatically drop and re-create all partitions in the expected format. This will
result in dropped data as well, so it is important to verify the configuration before loading any data.
When using these tables, the overall flow of data is as follows:

1. Update the configuration table (C_ODI_PARAM_VW) from the Control & Tactical Center to specify your base intersections for each table.
   - The parameter names for these tables are prefixed with RI_FACT, such as RI_FACT1_PROD_LEVEL for the product level of the FACT1 table.
   - The complete list of values you may put into these parameters is provided below.

Product (PROD) | Organization (ORG) | Calendar (CAL) |
---|---|---|
CMP | COMPANY | YEAR |
DIV | CHAIN | HALFYEAR |
GRP | AREA | QUARTER |
DEPT | REGION | PERIOD |
CLS | DISTRICT | WEEK |
SBC | LOCATION | DAY |
ITEM | CHANNEL | GREGORIANYEAR |
ALL | PLANNING_CHANNEL | GREGORIANQUARTER |
FLEX1-FLEX20 | PRICE_ZONE | GREGORIANMONTH |
| ALL | GREGORIANDAY |

2. Update the table partition configuration in C_MODULE_EXACT_TABLE by setting the PARTITION_COLUMN_TYPE and PARTITION_INTERVAL to WK for week data or DY for day data. Calendar levels above the week level do not require partitioning, as it is assumed the data volume will be low.
3. Enable the FACT1 through FACT4 modules (based on your intended usage) in C_MODULE_ARTIFACT by setting the PARTITION_FLG and ACTIVE_FLG to Y.
4. Use the process CREATE_PARTITION_ADHOC to re-execute the partitioning programs for any configuration changes made above, unless you have not yet loaded any calendar data, in which case the CALENDAR_LOAD_ADHOC process will also perform partitioning for these tables.
5. Use this process (HIST_AGGR_FACT_LOAD_ADHOC) to populate staging tables and move data from staging (FS) to target (F) tables. Enable all jobs relating to the table(s) you will be loading (each table load has a COPY, STG, VALIDATOR, TMP, and F job, and all of them should be enabled).
   - Records may be rejected due to bad or missing data in the three base dimensions supported on the facts (product, location, calendar). The job does not fail if rejects occur; it will load any valid records.
   - Rejected records will be copied to E$ tables such as E$_W_RTL_FACT1_PROD1_LC1_T1_TMP. E$ tables are created dynamically when records get rejected, so the table may not exist initially.
   - E$ tables will contain the full rejected record, which you may insert back into the staging (FS) tables later to attempt to load them again, as sketched after this list.
   - E$ tables will not purge or drop data unless you perform a full schema cleanup or database clone from another environment.
6. If sending the data to a Planning application, use the processes LOAD_PDS_FACT1_AGGR_PROCESS_ADHOC through LOAD_PDS_FACT4_AGGR_PROCESS_ADHOC to export the contents of each table to PDS. The integration tables for PDS have similar names as the source tables, only adding PDS in the name. For example, W_RTL_FACT1_PROD1_LC1_T1_F is loaded to W_PDS_FACT1_PROD1_LC1_T1_F by the process LOAD_PDS_FACT1_AGGR_PROCESS_ADHOC.
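For reference, the configuration steps and the reject-recovery insert described above can be expressed as SQL. This is a minimal sketch, not an official script: the RI_FACT1_ORG_LEVEL and RI_FACT1_CAL_LEVEL parameter names and the filter columns in the WHERE clauses are assumptions to verify in your environment, and these settings are normally maintained through the Control & Tactical Center screens rather than direct SQL.

-- Step 1: set the base intersection for the FACT1 table (for example, subclass/area/week).
-- RI_FACT1_PROD_LEVEL is named above; the ORG/CAL parameter names are assumed.
UPDATE C_ODI_PARAM_VW SET PARAM_VALUE = 'SBC'
 WHERE PARAM_NAME = 'RI_FACT1_PROD_LEVEL';
UPDATE C_ODI_PARAM_VW SET PARAM_VALUE = 'AREA'
 WHERE PARAM_NAME = 'RI_FACT1_ORG_LEVEL';    -- assumed parameter name
UPDATE C_ODI_PARAM_VW SET PARAM_VALUE = 'WEEK'
 WHERE PARAM_NAME = 'RI_FACT1_CAL_LEVEL';    -- assumed parameter name

-- Step 2: partition the FACT1 target table by week.
UPDATE C_MODULE_EXACT_TABLE
   SET PARTITION_COLUMN_TYPE = 'WK', PARTITION_INTERVAL = 'WK'
 WHERE TABLE_NAME = 'W_RTL_FACT1_PROD1_LC1_T1_F';    -- assumed filter column

-- Step 3: activate the FACT1 module and its partitioning.
UPDATE C_MODULE_ARTIFACT
   SET PARTITION_FLG = 'Y', ACTIVE_FLG = 'Y'
 WHERE MODULE_CODE LIKE 'FACT1%';                    -- assumed filter column

-- Step 5 rejects: push E$ rows back to staging for another load attempt. In practice,
-- list the columns explicitly, since E$ tables may carry additional error columns.
INSERT INTO W_RTL_FACT1_PROD1_LC1_T1_FS
  SELECT * FROM E$_W_RTL_FACT1_PROD1_LC1_T1_TMP;
COMMIT;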
Key Tables Affected
Table | Usage |
---|---|
W_RTL_FACT1_PROD1_LC1_T1_FS | Staging |
W_RTL_FACT2_PROD2_LC2_T2_FS | Staging |
W_RTL_FACT3_PROD3_LC3_T3_FS | Staging |
W_RTL_FACT4_PROD4_LC4_T4_FS | Staging |
W_RTL_FACT1_PROD1_LC1_T1_F | Output |
W_RTL_FACT2_PROD2_LC2_T2_F | Output |
W_RTL_FACT3_PROD3_LC3_T3_F | Output |
W_RTL_FACT4_PROD4_LC4_T4_F | Output |
Allocation History Load
Module Name | HIST_ALCDETAIL_LOAD_ADHOC |
---|---|
Description | Loads a full snapshot of allocations data from the W_RTL_ALC_DETAILS_DS.dat and W_RTL_ALC_IT_LC_DY_FS.dat files. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
Data regarding allocations of merchandise is split between two interfaces, the dimension file W_RTL_ALC_DETAILS_DS.dat
and the fact file W_RTL_ALC_IT_LC_DY_FS.dat
. This process can be used to load full snapshots of historical
or currently active allocations to the data warehouse outside the nightly batch cycle. The two files must be in sync, meaning
that every allocation record on the detail file must have a record in the header file. The header file is always a full snapshot
of all allocations that should appear as currently active in the data warehouse, meaning that if any allocation records are
no longer sent on W_RTL_ALC_DETAILS_DS.dat
, they will be marked as inactive/closed in the data warehouse table (CURRENT_FLG = N) and should not be included in subsequent files.
This data is used primarily for Retail Insights reporting and for Inventory Planning Optimization (IPO).
Batch Individual File Reprocessing
Module Name | REPROCESS_ZIP_FILE_PROCESS_ADHOC |
---|---|
Description | Looks for the RI_REPROCESS_DATA.zip file and unpacks it so the contents can be added to an in-progress nightly batch run. |
Dependencies | None |
Business Activity | Nightly Batch Processing |
Design Overview
This process moves and unloads a ZIP file (specifically RI_REPROCESS_DATA.zip
) so that the file contents
may be added to an in-progress nightly batch run of the AIF DATA schedule. The ZIP file may contain one or multiple files.
It only needs to contain the files that you wish to update for the current batch run. Unlike the other ZIP file processes,
this process does not archive or delete existing files in the system, so it can safely be used repeatedly to upload new files
on top of existing data.
Batch Nightly File Reprocessing
Module Name | NIGHTLY_ZIP_FILE_PROCESS_ADHOC |
---|---|
Description | Looks for the nightly batch ZIP files (such as RAP_DATA.zip) and unpacks them for use in an in-progress nightly batch run. |
Dependencies | None |
Business Activity | Nightly Batch Processing |
Design Overview
This process moves and unloads nightly ZIP files (such as RAP_DATA.zip
) so that the file contents may
be used for an in-progress nightly batch run of the AIF DATA schedule. The ZIP file(s) must contain all data files you need
for a nightly batch run. This process contains the exact same jobs as the nightly AIF DATA batch and the primary purpose is
to let you reload a new ZIP file when your current nightly batch has failed or you’ve accidentally provided the wrong upload
and need to replace it.
The program uses the C_LOAD_DATES table to support restartability if it runs multiple times for the same or different ZIP files, based on the following scenarios:
- The C_LOAD_DATES entry it creates has PACKAGE_NAME='ZIPUNLOAD'.
- When you run it as part of the nightly batch or as part of this ad hoc job, it creates the entry in the C_LOAD_DATES table.
- When you have a "Success" status in C_LOAD_DATES, you can rerun the job by placing a new ZIP file; it will then unzip it into the internal Expand directory (retaining older extracted files in the Expand directory if they are not overwritten).
- When you have a status like "Failed", rerunning will mark it as "Success" and retain the files in the Expand directory; it will not erase existing files.
- When you do not have any entry in the C_LOAD_DATES table, running it erases all the directories, then unzips your file into the internal location and processes the contents as needed.
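To determine which scenario applies before rerunning, you can inspect the unzip entry in C_LOAD_DATES, for example from Innovation Workbench. A minimal sketch:

-- Check the restartability entry created by the unzip jobs (PACKAGE_NAME='ZIPUNLOAD').
SELECT * FROM C_LOAD_DATES WHERE PACKAGE_NAME = 'ZIPUNLOAD';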
Customer Loyalty Load
Module Name | HIST_CUST_LYL_LOAD_ADHOC |
---|---|
Description | Loads Customer Loyalty dimension and fact data for use in Retail Insights reporting. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
This process will load the Customer Loyalty dimension and fact files. This data is used for Retail Insights reporting only. All jobs in the process should be enabled before running it.
Key Tables Affected
Input Table | Target Table |
---|---|
W_RTL_CUST_LYL_PROG_DS | W_RTL_CUST_LYL_PROG_D |
W_RTL_CUST_LYL_ACCT_DS | W_RTL_CUST_LYL_ACCT_D |
W_RTL_CUST_LYL_AWD_ACCT_DS | W_RTL_CUST_LYL_AWD_ACCT_D |
W_RTL_CUST_LYL_TRX_LC_DY_FS | W_RTL_CUST_LYL_TRX_LC_DY_F |
W_RTL_CUST_LYL_AWD_TRX_DY_FS | W_RTL_CUST_LYL_AWD_TRX_DY_F |
W_RTL_CUST_LYL_TRX_LC_DY_F | W_RTL_CUST_LYL_PROG_LC_DY_A |
Data Security Load
Module Name | RAF_SEC_FILTER_LOAD_ADHOC |
---|---|
Description | Copies the data security staging table data (which is populated from IW) into the target tables. |
Dependencies | None |
Business Activity | Application Administration |
Design Overview
This process loads data for AIF data security functionality. The tables populated by this process limit what data an end user can see in certain AIF applications, such as RI and PMO. This data load flow only accepts data from IW, and is an alternative to sending flat files as part of the nightly batch process. This is only a replacement for the flat file load; if there are any jobs downstream in the applications that must be run, those are still required. The steps to use this process are:
1. Implement data security integration in Innovation Workbench/APEX to retrieve the users, groups, and data filter definitions from an external source. You may use REST APIs or custom file loads for this integration.
2. Develop SQL statements or procedures to insert your data into the staging tables (listed below).
3. Run the RAF_SEC_FILTER_LOAD_ADHOC process, which truncates the target tables and inserts your newly staged data. If a staging table is empty, it will not truncate the target table.
The entire process can be automated by establishing REST APIs that post into IW tables, creating a stored procedure that pushes the data from IW into the RAF staging tables, and adding a DBMS_SCHEDULER job that runs the POM process. Because a truncate-and-load process is used, you must maintain the full set of data security records somewhere to push into the AIF tables.
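As a rough sketch of that automation, assuming a custom PL/SQL procedure named PUSH_IW_SECURITY_TO_RAF (a hypothetical name) that performs the staging inserts from step 2, a DBMS_SCHEDULER job could stage the data on a schedule. Triggering the POM process itself would additionally require a call to the POM execution REST API, similar to the payload examples shown elsewhere in this chapter.

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'RAF_SEC_FILTER_REFRESH',     -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PUSH_IW_SECURITY_TO_RAF',    -- your procedure from step 2 (hypothetical)
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',        -- stage the data daily at 2 AM
    enabled         => TRUE,
    comments        => 'Stage data security records before RAF_SEC_FILTER_LOAD_ADHOC runs');
END;
/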
Deal Actuals History Load
Module Name | HIST_DEAL_LOAD_ADHOC |
---|---|
Description | Loads the W_RTL_DEALACT_IT_LC_DY_FS.dat file containing deal actuals history. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
The history load process for Deal Actuals accepts an input file at the deal/item/location/date level using the file specification
for W_RTL_DEALACT_IT_LC_DY_FS.dat
. It assumes the file has already been moved into place using the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a staging table in the database and loads it
into the base fact (item/location/day) table for reporting.
Deal Income History Load
Module Name | HIST_CSV_DEAL_INCOME_LOAD_ADHOC |
---|---|
Description | Loads the DEAL_INCOME.csv file containing deal income transaction history. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
The history load process for Deal Income transactions accepts an input file at the transaction level using the file specification
for DEAL_INCOME.csv
. It assumes the file has already been moved into place using the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms it to RI’s internal staging
tables, and then loads it into the base fact (item/location/day) as well as the week aggregate used for integrations (item/location/week).
Default Calendar Initialization
Module Name | AUTO_GEN_CALENDAR_LOAD_ADHOC |
---|---|
Description | Automatically generates a generic NRF fiscal calendar and sets up the data warehouse database with it. |
Dependencies | None |
Business Activity | Initial System Setup |
Design Overview
The auto-generated calendar process does not require any input files. Instead, it uses an internal calendar definition based on the National Retail Federation (NRF) 4-5-4 business calendar to populate the Retail Insights data model with basic calendar information. The NRF calendar typically starts around the first week of February and runs for 52 or 53 weeks, depending on the year. The default calendar starts from January 2017 and extends for approximately 30 years. It automatically includes 53-week years where appropriate and follows the NRF guidelines for fiscal weeks and periods.
This process performs all the necessary transform and load jobs required to set up the RAP data warehouse calendar. This
process should only be used if you cannot get a business calendar definition from any other source, and the retailer does
not want to provide a file themselves. Once this process runs, you can disable W_MCAL_PERIOD_DS_JOB
in your
nightly batch if you do not intend to ever provide a calendar file directly.
This process also populates the Gregorian system calendar at the same time the fiscal calendar is loaded. The Gregorian
calendar requires additional start and end date parameters from C_ODI_PARAM
to define the time range to generate.
It must be greater than the range of time in the fiscal calendar. Output tables that start with W_MCAL_
are
mainly used for fiscal calendar generation, while the other tables, such as W_DAY_D
, are used for the Gregorian
calendar. All output tables must be successfully populated with calendar data to use the platform.
Key Tables Affected
Table | Usage |
---|---|
W_MCAL_PERIOD_DS | Staging |
W_TIME_OF_DAY_D | Output |
W_DAY_D | Output |
W_YEAR_D | Output |
W_QTR_D | Output |
W_MONTH_D | Output |
W_WEEK_D | Output |
W_MINUTE_OF_DAY_D | Output |
W_MCAL_CONFIG_G | Output |
W_MCAL_CAL_D | Output |
W_MCAL_PERIOD_D | Output |
W_MCAL_DAY_D | Output |
W_MCAL_WEEK_D | Output |
W_MCAL_YEAR_D | Output |
W_MCAL_QTR_D | Output |
W_RTL_MCAL_DAY_SHIFT_D | Output |
W_RTL_MCAL_DAY_UNSHIFT_D | Output |
W_RTL_MCAL_DAY_GUNSHIFT_D | Output |
W_RTL_MCAL_DAY_CUSTOM_D | Output |
W_RTL_MCAL_WEEK_SHIFT_D | Output |
W_RTL_MCAL_WEEK_UNSHIFT_D | Output |
W_RTL_MCAL_PERIOD_SHIFT_D | Output |
W_RTL_MCAL_PERIOD_UNSHIFT_D | Output |
ETL Business Date Update
Module Name | LOAD_CURRENT_BUSINESS_DATE_ADHOC |
---|---|
Description | Overrides the current business date used for loading data into the data warehouse. |
Dependencies | None |
Business Activity | Batch Administration |
Design Overview
This process updates the current business date in the foundation data warehouse. The current business date usually reflects
the most recent data loaded into the platform. For example, if the last nightly batch imported data for February 2nd, 2023, the current business date will be that same date. Specific to Retail Insights, this date is also used whenever
calculating a repository variable like CurrentDate
or CurrentWeek
. There may be times during
an implementation of an Analytics and Planning solution where you need to alter this date, such as when you are loading historical
data for a specific past date and want the data warehouse to reflect that date as the current date. In such cases, you must
use this process to set the business date as the first step before running those other processes.
The ETL_BUSINESS_DATE_JOB
within this process requires a date as an input parameter. The date must be
in the format YYYY-MM-DD
. When you run the process from the POM UI, ensure you edit the parameters for this
job first and enter the date as the only input value.
Sample payload when using an API to call the process:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC",
"requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
}
Fix Unusable Indexes
Module Name | FIX_UNUSABLE_INDEX_ADHOC |
---|---|
Description | Repairs broken database indexes that may result from erasing and reloading dimension or fact data repeatedly. |
Dependencies | None |
Business Activity | Batch Administration |
Design Overview
This process executes a single job, FIX_UNUSABLE_INDEX_JOB
, which repairs any database indexes in an unusable
state. The most common reason to execute this job is that you have encountered the error:
ORA-26026: unique index <INDEX NAME> initially in unusable state
When this error occurs, you must run this process to correct it. If you have a failed batch job that encountered this error,
you may restart that job from POM after running this process. This job also runs as part of the AIF DATA weekly maintenance
process RI_MAINTENANCE_ADHOC
, which should be scheduled to run every week before your nightly batches.
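If you want to confirm whether any indexes are in an unusable state before or after running the process (for example, from Innovation Workbench), the standard Oracle data dictionary views can be queried:

-- Non-partitioned indexes in an unusable state.
SELECT index_name, status FROM user_indexes WHERE status = 'UNUSABLE';

-- Partitioned indexes track their status per partition.
SELECT index_name, partition_name, status
  FROM user_ind_partitions
 WHERE status = 'UNUSABLE';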
Flexible Fact Load
Module Name | FLEXFACT_LOAD_ADHOC |
---|---|
Description | Loads the flexible fact tables for use in AIF applications and Retail Insights reporting. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
This process populates the flex fact tables that are used in both AI Foundation to display custom
attributes and measures and in Retail Insights reporting on custom fact data. Prior to
loading flex fact data, you must configure the proper data levels in
C_ODI_PARAM
to match the hierarchy levels used in the files. Refer
to the Retail Insights Implementation Guide for additional details on configuring
flex facts.
There is a job included in this process to validate the partition structure of the
FLEXFACT1
table, because it supports flexible partitioning by day
or week levels. The validator job (W_RTL_FLEXFACT1_F_VALIDATOR_JOB
)
verifies whether the partition structure is correct relative to the configuration in
C_ODI_PARAM_VW
. If the configuration and table structures do not
match, it automatically drops and re-creates all partitions in the expected format. This
will result in dropped data as well, so it is important to verify the configuration
before loading any data.
Gift Card Sales Load
Module Name | HIST_GCN_TRX_LOAD_ADHOC |
---|---|
Description | Loads Gift Card Sales fact data for use in Retail Insights reporting. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
History Data Cleanup
Module Name | HIST_DATA_CLEANUP_ADHOC |
---|---|
Description | Erases all data from the Inventory and Price tables in RI so that you can restart your history load for those interfaces. |
Dependencies | None |
Business Activity | Historical Data Load |
Design Overview
This process erases all data from select functional areas (currently Inventory Position and Pricing facts). The purpose of the process is to reset the environment if the data currently loaded is invalid or unwanted, and you’d like to start over with empty tables.
Note:
This process does not erase partition structures, so you need to load data for the same range of dates already available. It also does not reset the C_HIST_LOAD_STATUS table, so you must update that table before loading any new data.
History Data File Upload
Module Name | HIST_ZIP_FILE_LOAD_ADHOC |
---|---|
Description | Looks for the RIHIST_RMS_DATA.zip file and unpacks it for use by history and seeding load jobs. |
Dependencies | None |
Business Activity | Historical Data Load |
Design Overview
This process moves and unloads a ZIP file (specifically RIHIST_RMS_DATA.zip
) so that the file contents
may be used for one or more history and seeding load jobs. The ZIP file may contain one or multiple files. This process is
a prerequisite to running any history or seeding load programs.
The first job in this process waits a set period of time for the ZIP file to be uploaded, and it fails if the file is not received in that time (4 hours by default). The second job moves the ZIP file to the internal server location and unzips it: it deletes any files previously in the destination folder, unzips the new file, and moves the ZIP file to an archive when complete. It fails if the ZIP does not contain any data files, as there is nothing for it to move.
History Data Master Flow
Module Name | RI_FLOW_ADHOC |
---|---|
Description | Ad hoc flow for loading all major foundation fact files in a single process that can be scheduled to run repeatedly in the same day for automated history loads. |
Dependencies | None |
Business Activity | Historical Data Load |
Design Overview
The RI_FLOW_ADHOC
process in the AIF DATA schedule is a replacement for the RI_INTRADAY_CYCLE
found in earlier versions. It provides a single master process flow that can load the most common RAP foundation fact files
and populate all data warehouse tables using one or multiple input zip files. When you provide multiple zip files, you will
also schedule the RI_FLOW_ADHOC
process to run multiple times and, each time, it will pick up the next ZIP
file in the sequence.
The process flow cannot be executed without first configuring which jobs to run. You may configure the flow by enabling or disabling sets of jobs by process name, and then, within the processes you are enabling, you may selectively disable jobs for tables you are not using. The processes included in the flow are listed below in order of execution (first to last). You should disable the processes you do not need before running or scheduling the flow.
Process Name | Usage |
---|---|
FLOW_LOAD_START_PROCESS | Required; non-functional jobs for logging/dependency purposes. |
CLEANUP_C_LOAD_DATES_FLOW_PROCESS | Required; clears prior run statuses from C_LOAD_DATES. |
ZIP_FILE_LOAD_FLOW_PROCESS | Required; selects a ZIP file for the current run and unpacks it. |
STG_SALES_LOAD_ADHOC_PROCESS | Stages the sales history file (DAT format). |
STG_CSV_SALES_LOAD_ADHOC_PROCESS | Stages the sales history file (CSV format). |
STG_RTV_LOAD_ADHOC_PROCESS | Stages the RTV history file (DAT format). |
STG_CSV_RTV_LOAD_ADHOC_PROCESS | Stages the RTV history file (CSV format). |
STG_INVADJ_LOAD_ADHOC_PROCESS | Stages the inventory adjustment history file (DAT format). |
STG_CSV_INVADJ_LOAD_ADHOC_PROCESS | Stages the inventory adjustment history file (CSV format). |
STG_INVRC_LOAD_ADHOC_PROCESS | Stages the inventory receipt history file (DAT format). |
STG_CSV_INVRC_LOAD_ADHOC_PROCESS | Stages the inventory receipt history file (CSV format). |
STG_INVTSF_LOAD_ADHOC_PROCESS | Stages the inventory transfer history file (DAT format). |
STG_CSV_INVTSF_LOAD_ADHOC_PROCESS | Stages the inventory transfer history file (CSV format). |
STG_MARKDOWN_LOAD_ADHOC_PROCESS | Stages the markdown history file (DAT format). |
STG_CSV_MARKDOWN_LOAD_ADHOC_PROCESS | Stages the markdown history file (CSV format). |
STG_DEAL_INCOME_ADHOC_PROCESS | Stages the deal income history file (DAT format). |
STG_CSV_DEAL_INCOME_ADHOC_PROCESS | Stages the deal income history file (CSV format). |
STG_INV_LOAD_ADHOC_PROCESS | Stages the inventory position history file (DAT format). |
STG_INVOOS_LOAD_ADHOC_PROCESS | Stages the inventory out-of-stock history file. |
STG_CSV_INV_LOAD_ADHOC_PROCESS | Stages the inventory position history file (CSV format). |
STG_PRICE_LOAD_ADHOC_PROCESS | Stages the price history file (DAT format). |
STG_CSV_PRICE_LOAD_ADHOC_PROCESS | Stages the price history file (CSV format). |
STG_ONORD_ADHOC_PROCESS | Stages the purchase order history file (DAT format). |
STG_CSV_ONORD_ADHOC_PROCESS | Stages the purchase order history file (CSV format). |
STG_INVU_LOAD_ADHOC_PROCESS | Stages the unavailable inventory history file. |
STG_SALES_WF_ADHOC_PROCESS | Stages the wholesale/franchise sales history file (DAT format). |
STG_CSV_SALES_WF_ADHOC_PROCESS | Stages the wholesale/franchise sales history file (CSV format). |
RESET_ETL_THREAD_VAL_ADHOC_PROCESS | Required; updates your staging data to remove any multi-threading parameter values that are no longer supported. |
ADHOC_REFRESH_RADM_PROCESS | Required; collects statistics on your tables before starting the target table loads. |
SALES_LOAD_ADHOC_PROCESS | Processes all the data from the Sales and Sales Pack files (CSV or DAT) into the target data warehouse tables and aggregates. |
RTV_LOAD_ADHOC_PROCESS | Processes all the data from the RTV file (CSV or DAT) into the target data warehouse tables and aggregates. |
INVADJ_LOAD_ADHOC_PROCESS | Processes all the data from the Adjustment file (CSV or DAT) into the target data warehouse tables and aggregates. |
INVRC_LOAD_ADHOC_PROCESS | Processes all the data from the Receipt file (CSV or DAT) into the target data warehouse tables and aggregates. |
INVTSF_LOAD_ADHOC_PROCESS | Processes all the data from the Transfer file (CSV or DAT) into the target data warehouse tables and aggregates. |
MARKDOWN_LOAD_ADHOC_PROCESS | Processes all the data from the Markdown file (CSV or DAT) into the target data warehouse tables and aggregates. |
DEAL_INCOME_ADHOC_PROCESS | Processes all the data from the Deal Income file (CSV or DAT) into the target data warehouse tables and aggregates. |
INV_LOAD_ADHOC_PROCESS | Processes all the data from the Inventory file (CSV or DAT) into the target data warehouse tables and aggregates. |
INVOOS_LOAD_ADHOC_PROCESS | Processes all the data from the Inventory OOS file into the target data warehouse tables and aggregates. |
PRICE_LOAD_ADHOC_PROCESS | Processes all the data from the Price file (CSV or DAT) into the target data warehouse tables and aggregates. |
ONORD_LOAD_ADHOC_PROCESS | Processes all the data from the Purchase Order file (CSV or DAT) into the target data warehouse tables and aggregates. |
INVU_LOAD_ADHOC_PROCESS | Processes all the data from the Unavailable Inventory file into the target data warehouse tables and aggregates. |
SALES_WF_ADHOC_PROCESS | Processes all the data from the Sales Wholesale/Franchise file (CSV or DAT) into the target data warehouse tables and aggregates. |
FACT_FLOW_END_PROCESS | Required; non-functional jobs for logging/dependency purposes. |
FLOW_LOAD_END_PROCESS | Required; non-functional jobs for logging/dependency purposes. |
Once you’ve disabled all the unused jobs at the process level, you may want to review the remaining jobs and disable table
loads that are not needed for your implementation. Unless you are implementing Retail Insights, you do not need any aggregate
table above the item/location/week level intersections. For example, you will need all sales tables containing IT_LC_DY
or IT_LC_WK
for item/loc/day and week intersections. You may not need the sales aggregates such as W_RTL_SLS_SC_DY_A, which is a BI aggregate for reporting, unless you are implementing Retail Insights.
If you are calling the process from Postman or Curl, then you may use a payload like the following to trigger the process flow:
{
"cycleName" : "Adhoc",
"flowName" : "RI_FLOW_ADHOC",
"requestType" : "POM Scheduler"
}
Initial Base Cost Seeding
Module Name | SEED_CSV_W_RTL_BCOST_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of base cost data from the COST.csv file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Base Cost data accepts an input file at the item-location-date-supplier level using the file
specification for COST.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms it to RI’s internal staging
tables, then loads it into the base fact (item/location/day). This process is only for the base cost; a separate process loads the net cost, if required.
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch is going to run. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Base Cost Seeding (Legacy)
Module Name | SEED_W_RTL_BCOST_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of base cost data from the W_RTL_BCOST_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Base Cost data accepts an input file at the item-location-date-supplier level using the file
specification for W_RTL_BCOST_IT_LC_DY_FS.dat
. It assumes the file has already been moved into place using
the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database,
transforms it to RI’s internal staging tables, then loads it into the base fact (item/location/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Calendar Load
Module Name | CALENDAR_LOAD_ADHOC |
---|---|
Description | Runs all calendar creation and load processes to set up or update the system and fiscal calendars in the data warehouse. Runs the table partitioning for all date-based partitions. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Initial System Setup |
Design Overview
The calendar load ad hoc process performs all the necessary stage, transform, and load jobs to set up the data warehouse calendars. It also performs the table partitioning that is driven by the calendar definition. It takes as input:
- A calendar data file (CALENDAR.csv) uploaded and unpacked using the HIST_ZIP_FILE_LOAD_ADHOC process
- Optional last-year mapping files to define shifted and unshifted calendars when reporting on LY data
- System calendar start and end dates in C_ODI_PARAM
- Partition configurations in C_MODULE_ARTIFACT
The calendar data must be in the form of a fiscal calendar (for example, a 4-5-4 or 13-period calendar). It must be at
the period level of detail (not the day level) and should include start and end date ranges for the period, quarter, and year
levels on each record. The RAP data warehouse currently supports a single, hard-coded calendar ID (Retail Calendar~41) that
should be used in the file’s first column (MCAL_CAL_ID
). Optional mapping files for this-year-to-last-year
mappings may be provided if the business uses a custom definition of LY in reporting and analytics. These mappings control
which range of dates are returned when pulling LY metrics in RI, such as when a fiscal week in the current year should be
mapped to a different week in LY. Default mappings are created by the process if no data is provided.
This process populates the Gregorian system calendar at the same time the fiscal calendar is loaded. The Gregorian calendar
requires additional start and end date parameters from C_ODI_PARAM
to define the time range to generate.
It must be greater than the range of time in the fiscal calendar. The calendar generation process does not support a 53-week
year as the starting year, so it’s recommended to make the start date of the Gregorian calendar at least 1 year earlier than
the start of the fiscal calendar, which avoids improperly formed data in the fiscal calendar if the 53-week year is the first
year.
Output tables that start with W_MCAL_
are mainly used for fiscal calendar generation, while the other
tables such as W_DAY_D
are used for the Gregorian calendar. All output tables must be successfully populated
with calendar data in order to use the platform. Validate the data closely after running this process to ensure nothing is
missing or incorrect in the generated calendar data.
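For reference, the Gregorian calendar range could be set with updates like the following before running the process. This is a sketch only: the parameter names shown are placeholders, so look up the actual system calendar start and end date entries in C_ODI_PARAM for your release, and keep the start date at least one year before the fiscal calendar start, as noted above.

UPDATE C_ODI_PARAM SET PARAM_VALUE = '2016-01-01'
 WHERE PARAM_NAME = 'SYSTEM_CALENDAR_START_DT';    -- placeholder parameter name
UPDATE C_ODI_PARAM SET PARAM_VALUE = '2050-12-31'
 WHERE PARAM_NAME = 'SYSTEM_CALENDAR_END_DT';      -- placeholder parameter name
COMMIT;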
Key Tables Affected
Table | Usage |
---|---|
W_MCAL_PERIOD_DTS | Input |
W_RTL_MCAL_DAY_SHIFT_DS | Input |
W_RTL_MCAL_DAY_UNSHIFT_DS | Input |
W_RTL_MCAL_DAY_GUNSHIFT_DS | Input |
W_RTL_MCAL_WEEK_SHIFT_DS | Input |
W_RTL_MCAL_WEEK_UNSHIFT_DS | Input |
W_MCAL_PERIOD_DS | Staging |
W_TIME_OF_DAY_D | Output |
W_DAY_D | Output |
W_YEAR_D | Output |
W_QTR_D | Output |
W_MONTH_D | Output |
W_WEEK_D | Output |
W_MINUTE_OF_DAY_D | Output |
W_MCAL_CONFIG_G | Output |
W_MCAL_CAL_D | Output |
W_MCAL_PERIOD_D | Output |
W_MCAL_DAY_D | Output |
W_MCAL_WEEK_D | Output |
W_MCAL_YEAR_D | Output |
W_MCAL_QTR_D | Output |
W_RTL_MCAL_DAY_SHIFT_D | Output |
W_RTL_MCAL_DAY_UNSHIFT_D | Output |
W_RTL_MCAL_DAY_GUNSHIFT_D | Output |
W_RTL_MCAL_DAY_CUSTOM_D | Output |
W_RTL_MCAL_WEEK_SHIFT_D | Output |
W_RTL_MCAL_WEEK_UNSHIFT_D | Output |
W_RTL_MCAL_PERIOD_SHIFT_D | Output |
W_RTL_MCAL_PERIOD_UNSHIFT_D | Output |
Initial Calendar Staging (Legacy)
Module Name | CALENDAR_STG_LOAD_ADHOC |
---|---|
Description | Stages the legacy calendar data file for the initial calendar load process. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Initial System Setup |
Initial Competitor Price Seeding
Module Name | SEED_W_RTL_COMP_PRICE_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of competitor price data from the W_RTL_COMP_PRICE_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Competitor Price fact data accepts an input file at the item-location-competitor store-date
level using the file specification for W_RTL_COMP_PRICE_IT_LC_DY_FS.dat
. It assumes the file has already
been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a staging
table, then loads it into the base fact (item/location/comp store/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Dimension Load
Module Name | RI_DIM_INITIAL_ADHOC |
---|---|
Description | Runs all core dimension load programs in the AIF DATA schedule in POM to stage, transform, and load dimension data to the foundation data warehouse tables. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
This process runs the dimension load programs needed to initialize the data model with the core dataset needed for history and seed loads. Not all dimensions supported by RI or AIF are part of the initial load process, only those that are used in some way for history or downstream application processing. The process will stage and load all the files in a single flow; no other processes are needed to load the dimensions. The jobs used by the process are the same as the ones in the nightly batch, so this also validates the quality and correctness of the files.
The process has three distinct types of jobs:
- File import jobs that take a CSV input and load it into database pre-staging tables (usually tables ending in DTS or FTS)
- Staging jobs that transform the raw inputs to the required formats and perform any defaulting of values on data columns
- Load jobs that move the staging data to internal target tables
The tables below are broken out by each type, so you can review the inputs and outputs for each block of jobs.
Files to Pre-Staging Tables
Input File | Output Table |
---|---|
PRODUCT.csv | W_PRODUCT_DTS |
PRODUCT_ALT.csv | W_PRODUCT_ALT_DTS |
ORGANIZATION.csv | W_INT_ORG_DTS |
ORGANIZATION_ALT.csv | W_ORGANIZATION_ALT_DTS |
EXCH_RATE.csv | W_EXCH_RATE_DTS |
CALENDAR.csv | W_MCAL_PERIODS_DTS |
SUPPLIER.csv | W_SUPPLIER_DTS |
EMPLOYEE.csv | W_EMPLOYEE_DTS |
PROD_LOC_ATTR.csv | W_PROD_LOC_ATTR_DTS |
PROD_LOC_REPL.csv | W_INVENTORY_PRODUCT_ATTR_DTS |
ATTR.csv | W_ATTR_DTS |
PROD_ATTR.csv | W_PRODUCT_ATTR_DTS |
SEASON.csv | W_RTL_SEASON_PHASE_DTS |
PROD_SEASON.csv | W_RTL_SEASON_PHASE_IT_DTS |
STORE_COMP.csv | W_RTL_LOC_COMP_MTX_DTS |
CODES.csv | W_RTL_CODE_DTS |
PROD_PACK.csv | W_RTL_ITEM_GRP2_DTS |
DIFF_GROUP.csv | W_DIFF_GROUP_DTS |
ADJUSTMENT.csv | W_ADJUSTMENT_FTS |
PROMOTION.csv | W_RTL_PROMO_EXT_DTS |
PROMO_DETAIL.csv | W_RTL_PROMO_IT_LC_DTS |
ORDER_HEAD.csv | W_ORDER_HEAD_FTS |
REPL_DISTRO.csv | W_RTL_REPL_DISTRO_IT_LC_DS |
REPL_REV_INT.csv | W_RTL_REPL_REV_INT_IT_LC_DS |
REPL_LT_INT.csv | W_RTL_REPL_LT_INT_IT_LC_DS |
Pre-Staging to Staging Tables
These processes apply all of the transformation scripts needed to take simplified interface (SI) data for dimensions and map it to the internal data model staging tables. The simplified interfaces are a one-to-many mapping to the internal data warehouse structures for dimensions, so this intermediate step is required to transform the incoming data and make it usable downstream.
Input Table | Output Table |
---|---|
W_PRODUCT_DTS | W_PROD_CAT_DHS |
W_PRODUCT_DTS | W_PRODUCT_ATTR_DS |
W_PRODUCT_DTS | W_PRODUCT_DS |
W_PRODUCT_DTS | W_PRODUCT_DS_TL |
W_PRODUCT_DTS | W_RTL_PRODUCT_BRAND_DS |
W_PRODUCT_DTS | W_RTL_PRODUCT_BRAND_DS_TL |
W_PRODUCT_DTS | W_RTL_IT_SUPPLIER_DS |
W_PRODUCT_DTS | W_DOMAIN_MEMBER_DS_TL |
W_PRODUCT_ALT_DTS | W_PRODUCT_FLEX_DS |
W_INT_ORG_DTS | W_INT_ORG_DS |
W_INT_ORG_DTS | W_INT_ORG_DS_TL |
W_INT_ORG_DTS | W_INT_ORG_DHS |
W_INT_ORG_DTS | W_DOMAIN_MEMBER_DS_TL |
W_INT_ORG_DTS | W_RTL_CHANNEL_DS |
W_INT_ORG_DTS | W_INT_ORG_ATTR_DS |
W_INT_ORG_DTS | W_RTL_CHANNEL_CNTRY_DS |
W_ORGANIZATION_ALT_DTS | W_ORGANIZATION_FLEX_DS |
W_EXCH_RATE_DTS | W_EXCH_RATE_GS |
W_MCAL_PERIODS_DTS | W_MCAL_PERIOD_DS |
W_SUPPLIER_DTS | W_PARTY_ATTR_DS |
W_SUPPLIER_DTS | W_PARTY_ORG_DS |
W_EMPLOYEE_DTS | W_EMPLOYEE_DS |
W_PROD_LOC_ATTR_DTS | W_RTL_IT_LC_DS |
W_INVENTORY_PRODUCT_ATTR_DTS | W_INVENTORY_PRODUCT_ATTR_DS |
W_ATTR_DTS | W_RTL_PRODUCT_ATTR_DS |
W_ATTR_DTS | W_RTL_PRODUCT_ATTR_DS_TL |
W_ATTR_DTS | W_DOMAIN_MEMBER_DS_TL |
W_ATTR_DTS | W_RTL_PRODUCT_COLOR_DS |
W_PRODUCT_ATTR_DTS | W_RTL_ITEM_GRP1_DS |
W_RTL_SEASON_PHASE_DTS | W_RTL_SEASON_DS |
W_RTL_SEASON_PHASE_DTS | W_RTL_PHASE_DS |
W_RTL_SEASON_PHASE_DTS | W_DOMAIN_MEMBER_DS_TL |
W_RTL_SEASON_PHASE_IT_DTS | W_RTL_SEASON_PHASE_IT_DS |
W_RTL_LOC_COMP_MTX_DTS | W_RTL_LOC_COMP_MTX_DS |
W_RTL_CODE_DTS | W_RTL_CODE_DS |
W_RTL_ITEM_GRP2_DTS | W_RTL_ITEM_GRP2_DS |
W_DIFF_GROUP_DTS | W_RTL_DIFF_GRP_DS |
W_DIFF_GROUP_DTS | W_RTL_DIFF_GRP_DS_TL |
W_ADJUSTMENT_FTS | W_REASON_DS |
W_ADJUSTMENT_FTS | W_DOMAIN_MEMBER_DS_TL |
W_RTL_PROMO_EXT_DTS | W_RTL_PROMO_EXT_DS |
W_RTL_PROMO_IT_LC_DTS | W_RTL_PROMO_IT_LC_DS |
W_ORDER_HEAD_FTS | W_RTL_PO_DETAILS_DS |
Staging to Target Tables
Input Table | Output Table |
---|---|
W_DOMAIN_MEMBER_DS_TL | W_DOMAIN_MEMBER_LKP_TL |
W_EMPLOYEE_DS | W_EMPLOYEE_D |
W_EXCH_RATE_GS | W_EXCH_RATE_G |
W_INT_ORG_DS | W_INT_ORG_D |
W_INT_ORG_DHS | W_INT_ORG_DH |
W_ORGANIZATION_FLEX_DS | W_ORGANIZATION_FLEX_D |
W_PARTY_ATTR_DS | W_PARTY_ATTR_D |
W_PARTY_ORG_DS | W_PARTY_ORG_D |
W_PARTY_PER_DS | W_PARTY_PER_D |
W_PROD_CAT_DHS | W_PROD_CAT_DH |
W_PRODUCT_ATTR_DS | W_PRODUCT_ATTR_D |
W_PRODUCT_DS | W_PRODUCT_D |
W_PRODUCT_FLEX_DS | W_PRODUCT_FLEX_D |
W_REASON_DS | W_REASON_D |
W_RTL_ALC_DETAILS_DS | W_RTL_ALC_DETAILS_D |
W_RTL_BUYER_DS | W_RTL_BUYER_D |
W_RTL_CHANNEL_DS | W_RTL_CHANNEL_D |
W_RTL_CHANNEL_CNTRY_DS | W_RTL_CHANNEL_CNTRY_D |
W_RTL_CO_HEAD_DS | W_RTL_CO_HEAD_D |
W_RTL_CO_LINE_DS | W_RTL_CO_LINE_D |
W_RTL_CO_SHIP_METHOD_DS | W_RTL_CO_SHIP_METHOD_D |
W_RTL_CO_SHIP_TYPE_DS | W_RTL_CO_SHIP_TYPE_D |
W_RTL_COMP_STORE_DS | W_RTL_COMP_STORE_D |
W_RTL_CONS_METADATA_GS | W_RTL_CONS_METADATA_G |
W_RTL_COUPON_DS | W_RTL_COUPON_D |
W_RTL_DIFF_GRP_DS | W_RTL_DIFF_GRP_D |
W_RTL_DIFF_RNG_DS | W_RTL_DIFF_RNG_D |
W_RTL_DISCOUNT_TYPE_DS | W_RTL_DISCOUNT_TYPE_D |
W_RTL_IT_SUPPLIER_DS | W_RTL_IT_SUPPLIER_D |
W_RTL_ITEM_GRP1_DS | W_RTL_ITEM_GRP1_D |
W_RTL_LOC_STOCK_CNT_DS | W_RTL_LOC_STOCK_CNT_D |
W_RTL_ORG_FIN_DS | W_RTL_ORG_FIN_D |
W_RTL_PHASE_DS | W_RTL_PHASE_D |
W_RTL_PO_DETAILS_DS | W_RTL_PO_DETAILS_D |
W_RTL_PRICE_CLR_IT_LC_DS | W_RTL_PRICE_CLR_IT_LC_D |
W_RTL_PRODUCT_ATTR_DS | W_RTL_PRODUCT_ATTR_D |
W_RTL_PRODUCT_BRAND_DS | W_RTL_PRODUCT_BRAND_D |
W_RTL_PROMO_DS_TL | W_RTL_PROMO_D_TL |
W_RTL_PROMO_IT_LC_DS | W_RTL_PROMO_IT_LC_D |
W_RTL_PROMO_CE_IT_LC_DS | W_RTL_PROMO_IT_LC_D |
W_RTL_REPL_DISTRO_IT_LC_DS | W_RTL_REPL_DISTRO_IT_LC_D |
W_RTL_REPL_REV_INT_IT_LC_DS | W_RTL_REPL_REV_INT_IT_LC_D |
W_RTL_REPL_LT_INT_IT_LC_DS | W_RTL_REPL_LT_INT_IT_LC_D |
W_RTL_SEASON_DS | W_RTL_SEASON_D |
W_RTL_SEASON_PHASE_IT_DS | W_RTL_SEASON_PHASE_IT_D |
W_RTL_TNDR_TYPE_DS | W_RTL_TNDR_TYPE_D |
W_STATUS_DS | W_STATUS_D |
Initial Dimension Staging (Legacy)
Module Name | LOAD_DIM_INITIAL_STAGE_ADHOC |
---|---|
Description | Stages all of the dimension DAT files from the server for initial data loads into the database. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
This process looks for all legacy DAT files for dimensions placed on the server by a history ZIP file upload and moves them into staging tables in the data warehouse for use by the RI_DIM_INITIAL_ADHOC process. You cannot use both the CSV and DAT staging processes for the same data, as they overwrite each other. However, you may use this process to load DAT files for an interface where a CSV file does not exist, such as W_PARTY_PER_DS.dat.
Key Tables Affected
Table | Usage |
---|---|
RA_SRC_CURR_PARAM_G | File Input |
W_CODE_DS | File Input |
W_DOMAIN_MEMBER_DS_TL | File Input |
W_EMPLOYEE_DS | File Input |
W_EXCH_RATE_GS | File Input |
W_INT_ORG_ATTR_DS | File Input |
W_INT_ORG_DHS | File Input |
W_INT_ORG_DS | File Input |
W_INT_ORG_DS_TL | File Input |
W_PARTY_ATTR_DS | File Input |
W_PARTY_ORG_DS | File Input |
W_PARTY_PER_DS | File Input |
W_PROD_CAT_DHS | File Input |
W_PRODUCT_ATTR_DS | File Input |
W_PRODUCT_DS | File Input |
W_PRODUCT_DS_TL | File Input |
W_REASON_DS | File Input |
W_RTL_ALC_DETAILS_DS | File Input |
W_RTL_BUYER_DS | File Input |
W_RTL_CHANNEL_DS | File Input |
W_RTL_CO_HEAD_DS | File Input |
W_RTL_CO_LINE_DS | File Input |
W_RTL_CO_SHIP_METHOD_DS | File Input |
W_RTL_CO_SHIP_TYPE_DS | File Input |
W_RTL_CODE_DS | File Input |
W_RTL_COMP_STORE_DS | File Input |
W_RTL_COUPON_DS | File Input |
W_RTL_COUPON_DS_TL | File Input |
W_RTL_DIFF_GRP_DS | File Input |
W_RTL_DIFF_GRP_DS_TL | File Input |
W_RTL_DIFF_RNG_DS | File Input |
W_RTL_DIFF_RNG_DS_TL | File Input |
W_RTL_DISCOUNT_TYPE_DS | File Input |
W_RTL_IT_SUPPLIER_DS | File Input |
W_RTL_ITEM_GRP1_DS | File Input |
W_RTL_LOC_STOCK_CNT_DS | File Input |
W_RTL_ORG_FIN_DS | File Input |
W_RTL_PARTY_PER_ATTR_DS | File Input |
W_RTL_PHASE_DS | File Input |
W_RTL_PO_DETAILS_DS | File Input |
W_RTL_PRICE_CLR_IT_LC_DS | File Input |
W_RTL_PROD_HIER_ATTR_LKP_DHS | File Input |
W_RTL_PRODUCT_BRAND_DS | File Input |
W_RTL_PRODUCT_BRAND_DS_TL | File Input |
W_RTL_PROMO_CE_DS | File Input |
W_RTL_PROMO_CE_IT_LC_DS | File Input |
W_RTL_PROMO_DS | File Input |
W_RTL_PROMO_DS_TL | File Input |
W_RTL_PROMO_EXT_DS | File Input |
W_RTL_SEASON_DS | File Input |
W_RTL_SEASON_PHASE_IT_DS | File Input |
W_RTL_TNDR_TYPE_DS | File Input |
W_STATUS_DS | File Input |
Initial Inventory Seeding
Module Name | SEED_CSV_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of inventory data from the INVENTORY.csv file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Inventory data accepts an input file at the item-location-date level using the file specification
for INVENTORY.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms it to internal staging tables,
then loads it into the base fact (item/location/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute, but avoids the manual load processes for all the positional facts.
Initial Inventory Seeding (Legacy)
Module Name | SEED_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of inventory data from the W_RTL_INV_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Inventory data accepts an input file at the item-location-date level using the file specification
for W_RTL_INV_IT_LC_DY_FS.dat
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file to internal staging tables, then loads it into the base fact (item/location/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Net Cost Seeding
Module Name | SEED_CSV_W_RTL_NCOST_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of net cost data from the COST.csv file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Net Cost data accepts an input file at the item-location-date-supplier level using the file
specification for COST.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms it to internal staging tables,
then loads it into the base fact (item/location/day). This process is only for the net cost; a separate process loads the
base cost, if required.
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute, but avoids the manual load processes for all the positional facts.
Initial Net Cost Seeding (Legacy)
Module Name | SEED_W_RTL_NCOST_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of net cost data from the W_RTL_NCOST_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Net Cost data accepts an input file at the item-location-date-supplier level using the file
specification for W_RTL_NCOST_IT_LC_DY_FS.dat
. It assumes the file has already been moved into place using
the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database,
transforms it to internal staging tables, then loads it into the base fact (item/location/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute, but avoids the manual load processes for all the positional facts.
Initial Price Seeding
Module Name | SEED_CSV_W_RTL_PRICE_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of price data from the PRICE.csv file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Price data accepts an input file at the item-location-date level using the file specification
for PRICE.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms it to internal staging tables,
then loads it into the base fact (item/location/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Price Seeding (Legacy)
Module Name | SEED_W_RTL_PRICE_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of price data from the W_RTL_PRICE_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Price data accepts an input file at the item-location-date level using the file specification
for W_RTL_PRICE_IT_LC_DY_FS.dat
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a staging table, then loads it into the base fact (item/location/day).
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Purchase Order Seeding
Module Name | SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of purchase order data from the ORDER_HEAD.csv and ORDER_DETAIL.csv files. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Purchase Order data accepts two input files at the order header and order detail levels using
the file specifications for ORDER_HEAD.csv
and ORDER_DETAIL.csv
. It assumes the files have
already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the files into
preprocessing tables in the database, transforms them to internal staging tables, then loads them into the base dimension
and facts. The dimension is loaded first to support loading the fact table against those foreign keys.
Note:
Seeding processes require a full snapshot of data for a single date, which covers all purchase orders and item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Purchase Order Seeding (Legacy)
Module Name | SEED_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of purchase order data from the W_RTL_PO_ONORD_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Purchase Order fact data accepts an input file at the item-location-date level using the file
specification for W_RTL_PO_ONORD_IT_LC_DY_FS.dat
. It assumes the file has already been moved into place by
the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a staging table, then loads it into
the base fact (item/location/day). It assumes the dimension has already been loaded separately using the initial dimension
loads.
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Initial Purchase Order Allocation Seeding
Module Name | SEED_W_RTL_PO_ONALC_IT_LC_DY_F_PROCESS_ADHOC |
---|---|
Description | Loads a full snapshot of purchase order allocation data from the W_RTL_PO_ONALC_IT_LC_DY_FS.dat file. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The seeding load process for Purchase Order Allocation fact data accepts an input file at the item-location-date level
using the file specification for W_RTL_PO_ONALC_IT_LC_DY_FS.dat
. It assumes the file has already been moved
into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a staging table, then
loads it into the base fact (item/location/day). It assumes the dimension has already been loaded separately using the initial
dimension loads.
Note:
Seeding processes require a full snapshot of data for a single date, which covers all item/location combinations that should have a starting position for this fact. The seeding process must load data for the day before the nightly batch runs. Alternatively, you can include the full snapshots of data in your very first nightly batch and skip the seeding steps. This causes the nightly batch to take a significantly longer time to execute but avoids the manual load processes for all the positional facts.
Intercompany Margin History Load
Module Name | HIST_CSV_ICMARGIN_LOAD_ADHOC |
---|---|
Description | Loads the IC_MARGIN.csv file containing intercompany margin transaction history. |
Dependencies | HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
The history load process for Intercompany Margin transactions accepts an input file at the item/location/day level using
the file specification for IC_MARGIN.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms
it to RI’s internal staging tables, then loads it into the base fact (item/location/day) as well as the week aggregate used
for integrations (item/location/week).
Inventory History Current Position Load
Module Name | HIST_INV_GENERAL_LOAD_ADHOC |
---|---|
Description | Copies the ending positions of inventory history for the last week into the General (G) table for the purpose of testing the data and integrations within RAP. |
Dependencies | HIST_INV_LOAD_ADHOC |
Business Activity | Nightly Batch Preparation |
Design Overview
This process takes the final week of inventory data loaded using the
HIST_INV_LOAD_ADHOC
process and copies it into the table for current inventory
positions (W_RTL_INV_IT_LC_G
). This program uses an INSERT
statement, so it cannot be re-run multiple times without first truncating the table. The purpose
of this program is to test any integrations or reports that use this table prior to actually
running nightly batches, when it would normally be populated. The most common use case is for
Inventory Planning Optimization testing, which uses this table to get the current inventory
position during ad hoc and weekly batch runs.
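Because the copy is a plain INSERT, clear the target table before any rerun. A minimal sketch, assuming your database access permits the statement (for example, through Innovation Workbench):

-- Empty the current inventory position table so the ad hoc process can be re-run.
TRUNCATE TABLE W_RTL_INV_IT_LC_G;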
Inventory History Load
Module Name | HIST_INV_LOAD_ADHOC |
---|---|
Description | Processes any staged inventory history data for end-of-week snapshots, starting from the last processed week. |
Dependencies | HIST_STG_CSV_INV_LOAD_ADHOC |
Business Activity | Historical Data Load |
Design Overview
The inventory history load process supports loading of end-of-week inventory snapshots over a long period of time to populate the data warehouse with historical data. It requires the inventory data to already be staged into the database by one of the available staging processes. Multiple weeks of inventory can be provided in a single file, though it is recommended to not load more than one month at a time unless the volumes are low. Every record in the data must be for a week-ending date; other dates in the file will not work using this process.
The C_HIST_LOAD_STATUS
configuration table controls the actions taken by the process. Before running the
process for the first time, you must set up this table for the following:
-
Set the history load date (
HIST_LOAD_LAST_DATE
) to be the very latest date you expect to load history for (this can be changed later if needed to load more weeks). The date must be a week-ending date and should have 00:00:00
as the timestamp after saving the date to the database table. -
Disable any aggregate (
_A
) tables you do not wish to populate by setting ENABLED_IND
to N
. When loading data only for AI Foundation or Planning, you only need the history temporary table (W_RTL_INV_IT_LC_DY_HIST_TMP
), base fact (W_RTL_INV_IT_LC_DY_F
) and week aggregate (W_RTL_INV_IT_LC_WK_A
). For RI, all tables should be enabled and loaded.
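For illustration, the two setup steps above could be expressed as SQL similar to the following. This is a minimal sketch: the date is an example, and the TABLE_NAME column and filter are assumptions to verify against the actual C_HIST_LOAD_STATUS structure; in practice these values are normally edited from the Control & Tactical Center.
-- Step 1: set the latest week-ending date you expect to load (example date)
update C_HIST_LOAD_STATUS
set HIST_LOAD_LAST_DATE = to_date('2023-12-30', 'YYYY-MM-DD');
-- Step 2: disable aggregates that are not needed (assumed TABLE_NAME filter)
update C_HIST_LOAD_STATUS
set ENABLED_IND = 'N'
where TABLE_NAME not in ('W_RTL_INV_IT_LC_DY_HIST_TMP', 'W_RTL_INV_IT_LC_DY_F', 'W_RTL_INV_IT_LC_WK_A');
commit;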
Once setup is complete, begin processing files from the earliest week-ending date you plan to load. You must start
from the beginning of the history and load data sequentially. You cannot load data out of order and you cannot load the same
week multiple times without first erasing the data from your database. After a week is loaded successfully, the C_HIST_LOAD_STATUS
records are updated with the most recent load status and date.
If you will be loading inventory history after you have already started nightly batches, then you must also change two
parameters in C_ODI_PARAM_VW
from the Control Center:
-
INV_NIGHTLY_BATCH_IND
– Change this to Y
to indicate that nightly batches have been run but you are planning to load history for prior dates. -
INV_LAST_HIST_LOAD_DT
– Set this to the final week of history data you plan to load, which must be a week-ending date and must be before the nightly batches were started.
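A minimal sketch of these two updates, assuming C_ODI_PARAM_VW exposes PARAM_NAME and PARAM_VALUE columns and that direct updates are permitted against the view (the Control Center screen is the supported way to make these edits; the date format is an example):
update C_ODI_PARAM_VW set PARAM_VALUE = 'Y' where PARAM_NAME = 'INV_NIGHTLY_BATCH_IND';
update C_ODI_PARAM_VW set PARAM_VALUE = '2023-12-30' where PARAM_NAME = 'INV_LAST_HIST_LOAD_DT';
commit;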
Key Tables Affected
Table | Usage |
---|---|
C_HIST_LOAD_STATUS | Configuration |
W_RTL_INV_IT_LC_DY_FS | Input |
W_RTL_INV_IT_LC_DY_F | Output |
W_RTL_INV_IT_LC_G | Output |
W_RTL_INV_IT_LC_GMH_A | Output |
W_RTL_INV_IT_LC_WK_A | Output |
W_RTL_INV_IT_RG_DY_A | Output |
W_RTL_INV_IT_DY_A | Output |
W_RTL_INV_IT_WK_A | Output |
W_RTL_INV_SC_LC_DY_A | Output |
W_RTL_INV_CL_LC_DY_A | Output |
W_RTL_INV_DP_LC_DY_A | Output |
W_RTL_INV_SC_LC_DY_CUR_A | Output |
W_RTL_INV_SC_DY_A | Output |
W_RTL_INV_SC_DY_CUR_A | Output |
W_RTL_INV_SC_LC_WK_A | Output |
W_RTL_INV_CL_LC_WK_A | Output |
W_RTL_INV_DP_LC_WK_A | Output |
W_RTL_INV_SC_LC_WK_CUR_A | Output |
W_RTL_INV_SC_WK_A | Output |
W_RTL_INV_SC_WK_CUR_A | Output |
Inventory History Staging
Module Name |
HIST_STG_CSV_INV_LOAD_ADHOC |
Description |
Stages the INVENTORY.csv file for loading by the HIST_INV_LOAD_ADHOC process. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process looks for the INVENTORY.csv
file placed on the server by a history ZIP file upload, moves
it into a preprocessing table, and transforms it for use by the HIST_INV_LOAD_ADHOC
process.
Note:
The inventory file used for history data must contain only week-ending dates and must be full, weekly snapshots of data.
Inventory History Staging (Legacy)
Module Name |
HIST_STG_INV_LOAD_ADHOC |
Description |
Stages the W_RTL_INV_IT_LC_DY_FS.dat file for loading by the HIST_INV_LOAD_ADHOC process. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process looks for the W_RTL_INV_IT_LC_DY_FS.dat
file placed on the server by a history ZIP file upload
and loads it for use by the HIST_INV_LOAD_ADHOC
process.
Note:
The inventory file used for history data must contain only week-ending dates and must be full, weekly snapshots of data.
Inventory Out of Stock Load
Module Name |
HIST_INV_OOS_LOAD_ADHOC |
Description |
Stages and loads the INVENTORY_OOS.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process looks for the INVENTORY_OOS.csv
file placed on the server by a history ZIP file upload, moves
it into a preprocessing table, and transforms it into the target table for use in AI Foundation loads.
Note:
The inventory OOS file must contain only week-ending dates; any other dates will not be accepted into the interface.
Inventory Reclass History Load
Module Name |
HIST_CSV_INVRECLASS_LOAD_ADHOC |
Description |
Loads the INV_RECLASS.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The history load process for Inventory Reclass transactions accepts an input file at the item/location/day level using
the file specification for INV_RECLASS.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms
it to internal staging tables, then loads it into the base fact (item/location/day) as well as the week aggregate used for
integrations (item/location/week).
Inventory Refresh from Merchandising
Module Name |
RDE_INSERT_FULL_INV_POS_ADHOC |
Description |
Prepares the RDE nightly extract for inventory positions to pull all possible item/locations for the next run as a means of refreshing RAP application inventory from the source without using separate ad hoc jobs. |
Dependencies |
None |
Business Activity |
Nightly Batch Maintenance |
Design Overview
This process contains only one job, which is named RDE_INSERT_FULL_INV_POS_JOB
. The purpose of this job
is to query the Merchandising Foundation Cloud Service (MFCS) table ITEM_LOC
and extract all valid item/locations
into a temporary table. The list of item/locations is filtered to the same set of items that the nightly batch would allow
from MFCS to RAP (only active and approved items). The next time that the inventory position extract RDE_EXTRACT_FACT_P7_INVILDSDE_JOB
runs, it will include all of these item/locations even if their inventory values did not change since yesterday. This process
is only applicable when MFCS is on version 23 or greater, as the job needs to directly query MFCS data that is being replicated
to the Analytics & Planning database.
Inventory Reload
Module Name |
INV_RELOAD_PROCESS_ADHOC |
Description |
Provides an automated way to reload a single week of historical inventory in order to correct bad data on the inventory position fact tables. |
Dependencies |
None |
Business Activity |
Data Correction |
Design Overview
This process contains only one job, which is named INV_RELOAD_JOB
. This job deletes a week of data from
your inventory position fact tables and inserts the data found on W_RTL_INV_IT_LC_DY_FS
in its place. The
process expects a single week of inventory data (with DAY_DT
equal to a week-ending date) to be loaded into
the W_RTL_INV_IT_LC_DY_FS
staging table. This job deletes that specific week of data from the W_RTL_INV_IT_LC_DY_F
and W_RTL_INV_IT_LC_WK_A
tables and then inserts the staging table data. If you then need to move this
data to downstream applications, you must also run the associated ad hoc processes to load inventory data to those solutions.
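Before running the job, it is worth confirming that the staging table holds exactly one week of data; a simple check from Innovation Workbench might look like this:
select distinct DAY_DT from W_RTL_INV_IT_LC_DY_FS;
The query should return exactly one row, and the value must be a week-ending date.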
This process is designed for customers that provide full weekly snapshots of their inventory positions, such that deleting
a week of data from the internal tables can be based solely on the DAY_DT
in the staging table and no other
date. This job does not update the W_RTL_INV_IT_LC_G
current positions, as it assumes you are only trying
to reload prior weeks of inventory, not the current week. The data is assumed to be correct in W_RTL_INV_IT_LC_G
; if it is not, you must push in new inventory data using nightly batch processing to correct any records in the current
week. For example, if the current week has some non-zero positions that need to be zeroed out after your data correction,
you need to include those zero-balance rows in a nightly batch file so they can be both applied to W_RTL_INV_IT_LC_G
and updated for the current week in the other data warehouse tables and downstream applications.
Inventory Selling Date Seeding
Module Name |
LOAD_W_RTL_INV_IT_LC_G_FIRST_SOLD_DT_ADHOC |
Description |
Calculates the initial value of First Sold Date for all item/locations in inventory, based on sales history data. |
Dependencies |
SEED_CSV_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process populates the fields W_RTL_INV_IT_LC_G.FIRST_SOLD_DT
and
LAST_SOLD_DT
with values, using your historical sales data to
calculate the first time each item/location with stock on hand was sold. This process
should only run after all inventory and sales history is completely loaded and you are
ready to begin nightly batches. If this process does not run, then all item/locations
will start with a first/last selling date based on the first transaction that occurs for them in
nightly batch runs. These date values are used by the AI Foundation Cloud Services
(Lifecycle Pricing Optimization) as an input to determine item lifecycles from the
history data in RI.
Markdown History Load
Module Name |
HIST_CSV_MARKDOWN_LOAD_ADHOC |
Description |
Loads the MARKDOWN.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The history load process for Markdown transactions accepts an input file at the item/location/day level using the file
specification for MARKDOWN.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms it to internal staging tables,
then loads it into the base fact (item/location/day) as well as the week aggregate used for integrations (item/location/week).
Market Data Load
Module Name |
HIST_MARKET_LOAD_ADHOC |
Description |
Loads all dimension and fact data relating to Market Items, Market Attributes, and Market Sales subject areas. These tables are used for Retail Insights reporting and Advanced Clustering (AC) analysis. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process loads all the dimensions, facts, and aggregate tables relating to Market Data. Market data comes from a non-Oracle source and represents consumer buying behavior and products available on the market that are related to your own merchandise. It is used in Retail Insights reporting, as well as specific metrics in Advanced Clustering where you may compare performance of your store clusters against market performance in specific product categories. Market data is updated infrequently, such as once a month or quarter, so you may choose to exclude it from your nightly batches and only load it using this ad hoc process.
Key Tables Affected
Input Table | Target Table |
---|---|
W_RTL_MARKET_PRODUCT_DS | W_RTL_MARKET_PRODUCT_D |
W_RTL_MARKET_PROD_DHS | W_RTL_MARKET_PROD_DH |
W_RTL_MARKET_PRODUCT_DS_TL | W_RTL_MARKET_PRODUCT_D_TL |
W_RTL_MARKET_PROD_BRAND_DS_TL | W_RTL_MARKET_PROD_BRAND_D_TL |
W_RTL_MARKET_PRODUCT_MTX_DS | W_RTL_MARKET_PRODUCT_MTX_D |
W_RTL_MARKET_PROD_ATTR_DS | W_RTL_MARKET_PROD_ATTR_D |
W_RTL_MARKET_PROD_ATTR_MTX_DS | W_RTL_MARKET_PROD_ATTR_MTX_D |
W_RTL_MARKET_PROD_BRAND_DS | W_RTL_MARKET_PROD_BRAND_D |
W_RTL_MARKET_PROD_DH_MTX_DS | W_RTL_MARKET_PROD_DH_MTX_D |
W_RTL_MKTSLS_TA_CH_CNG_WK_FS | W_RTL_MKTSLS_TA_CH_CNG_WK_F |
W_RTL_MKTSLS_TA_CH_HG_WK_FS | W_RTL_MKTSLS_TA_CH_HG_WK_F |
W_RTL_MKTSLS_TA_CH_CNG_WK_F | W_RTL_MKTSLS_TA_CMG_CS_QR_A |
W_RTL_MKTSLS_TA_CH_CNG_WK_F | W_RTL_MKTSLS_TA_CMG_QR_A |
W_RTL_MKTSLS_TA_CH_CNG_WK_F | W_RTL_MKTSLS_TA_CL_CS_QR_CUR_A |
W_RTL_MKTSLS_TA_CH_CNG_WK_F | W_RTL_MKTSLS_TA_CL_QR_CUR_A |
Nightly Batch Status Cleanup
Module Name |
C_LOAD_DATES_CLEANUP_ADHOC |
Description |
Erases the execution status of nightly batch programs. This is required to run a nightly process outside of a batch. |
Dependencies |
None |
Business Activity |
Batch Administration |
Design Overview
This process erases records from the C_LOAD_DATES
database table. Any time a job
runs as part of the nightly batch, or a job runs that is included in both nightly and ad
hoc processing, a status record is inserted into C_LOAD_DATES
. The job
is then blocked from executing again while this record exists, as a safety measure when
restarting batch processes that failed midway through execution. During initial
dimension loads, you may need to execute the same jobs multiple times to work through
file or data issues. In that case, you may execute this process before each run to clear
the status of prior runs from the database.
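To review which jobs currently have status records before clearing them, you can query the table directly (for example, from Innovation Workbench):
select * from C_LOAD_DATES;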
Note:
This process should only run during history and initial data loads, or at the guidance of Oracle Support. It should not be run during regular nightly batch processing. Clearing C_LOAD_DATES
while the batch is running normally
could cause data corruption, as it would allow the same jobs to run multiple times for the same business date.
Plan Data Integration
Module Name |
LOAD_PLANNING1_DATA_ADHOC LOAD_PLANNING2_DATA_ADHOC LOAD_PLANNING3_DATA_ADHOC LOAD_PLANNING4_DATA_ADHOC LOAD_PLANNING5_DATA_ADHOC |
Description |
Extracts data from the MFP and AP Plan Export interfaces to RI's internal planning tables. |
Dependencies |
CLEANUP_C_LOAD_DATES_PLANNING_ADHOC |
Business Activity |
RI Integrations |
Design Overview
This set of processes moves Merchandise Financial Planning (MFP) and Assortment Planning (AP) export data from the data exchange (RDX) layer to internal staging tables, then triggers the AIF DATA load programs for planning data. Each process contains the end-to-end flow of data for a single interface. Use these processes to perform integration testing and plan data validations during an RI and MFP/AP implementation, or to trigger an on-demand refresh of plan data in RI outside the normal batch cycle. If you run these on the same day as a normal batch run, or you run them multiple times, you must run the cleanup process shown in the dependencies prior to each run.
Key Tables Affected
Table | Usage |
---|---|
MFP_PLAN1_EXP | Input |
W_RTL_PLAN1_PROD1_LC1_T1_FS | Staging |
W_RTL_PLAN1_PROD1_LC1_T1_F | Output |
MFP_PLAN2_EXP | Input |
W_RTL_PLAN2_PROD2_LC2_T2_FS | Staging |
W_RTL_PLAN2_PROD2_LC2_T2_F | Output |
MFP_PLAN3_EXP | Input |
W_RTL_PLAN3_PROD3_LC3_T3_FS | Staging |
W_RTL_PLAN3_PROD3_LC3_T3_F | Output |
MFP_PLAN4_EXP | Input |
W_RTL_PLAN4_PROD4_LC4_T4_FS | Staging |
W_RTL_PLAN4_PROD4_LC4_T4_F | Output |
AP_PLAN1_EXP | Input |
W_RTL_PLAN5_PROD5_LC5_T5_FS | Staging |
W_RTL_PLAN5_PROD5_LC5_T5_F | Output |
Planning Dimension Export
Module Name |
LOAD_PDS_DIMENSION_PROCESS_ADHOC |
Description |
Exports all supported dimensions from the data warehouse to the data exchange schema for Planning. |
Dependencies |
RI_DIM_INITIAL_ADHOC |
Business Activity |
RI Integrations |
Design Overview
This process runs all the planning data schema dimension exports from the data warehouse to the data exchange layer, where PDS batch processes can pick up and load the data the rest of the way. Each time the exports run, the data is truncated and inserted as full snapshots. Planning exports do not support incremental or delta extracts for dimensions. The programs apply various filters and criteria to the export data to align with Planning Data Schema requirements for dimensions, as described in the RAP Implementation Guide. The programs only export specific columns from each dimension, based on the downstream application needs. Review the PDS integration tables in detail to understand which data will be exported.
Key Tables Affected
Input Table | Output Table |
---|---|
W_PRODUCT_D | W_PDS_PRODUCT_D |
W_PRODUCT_D_TL | W_PDS_PRODUCT_D |
W_PROD_CAT_DH | W_PDS_PRODUCT_D |
W_PRODUCT_ATTR_D | W_PDS_PRODUCT_D |
W_DOMAIN_MEMBER_LKP_TL | W_PDS_PRODUCT_D |
W_INT_ORG_D | W_PDS_ORGANIZATION_D |
W_INT_ORG_D_TL | W_PDS_ORGANIZATION_D |
W_INT_ORG_DH | W_PDS_ORGANIZATION_D |
W_DOMAIN_MEMBER_LKP_TL | W_PDS_ORGANIZATION_D |
W_INT_ORG_ATTR_D | W_PDS_ORGANIZATION_D |
W_MCAL_DAY_D | W_PDS_CALENDAR_D |
W_EXCH_RATE_G | W_PDS_EXCH_RATE_G |
W_RTL_ITEM_GRP1_D | W_PDS_PRODUCT_ATTR_D |
W_DOMAIN_MEMBER_LKP_TL | W_PDS_PRODUCT_ATTR_D |
W_RTL_PRODUCT_ATTR_D | W_PDS_UDA_D |
W_DOMAIN_MEMBER_LKP_TL | W_PDS_UDA_D |
W_RTL_PRODUCT_ATTR_D | W_PDS_DIFF_D |
W_RTL_PRODUCT_ATTR_D_TL | W_PDS_DIFF_D |
W_RTL_ITEM_GRP2_D | W_PDS_PRODUCT_PACK_D |
W_RTL_CUSTSEG_D | W_PDS_CUSTSEG_D |
W_INVENTORY_PRODUCT_ATTR_D | W_PDS_REPL_ATTR_IT_LC_D |
W_INT_ORG_D_CFA | W_PDS_ORG_ATTR_STR_D |
W_INT_ORG_ATTR_D | W_PDS_ORG_ATTR_STR_D |
W_ORGANIZATION_FLEX_D | W_PDS_ORG_ATTR_STR_D |
W_INT_ORG_D_CFA | W_PDS_ORG_ATTR_NBR_D |
W_INT_ORG_D_CFA | W_PDS_ORG_ATTR_DT_D |
W_PRODUCT_D_CFA | W_PDS_PRODUCT_ATTR_STR_D |
W_PRODUCT_D_CFA | W_PDS_PRODUCT_ATTR_NBR_D |
W_PRODUCT_D_CFA | W_PDS_PRODUCT_ATTR_DT_D |
W_RTL_IT_LC_D_CFA | W_PDS_PROD_ORG_ATTR_STR_D |
W_RTL_IT_LC_D_CFA | W_PDS_PROD_ORG_ATTR_NBR_D |
W_RTL_IT_LC_D_CFA | W_PDS_PROD_ORG_ATTR_DT_D |
Planning Fact Export
Module Name |
LOAD_PDS_FACT_PROCESS_ADHOC |
Description |
Exports all supported facts from the data warehouse to the data exchange schema for Planning. |
Dependencies |
HIST_SALES_LOAD_ADHOC HIST_INV_LOAD_ADHOC HIST_CSV_ADJUSTMENTS_LOAD_ADHOC HIST_CSV_INVRECEIPTS_LOAD_ADHOC HIST_CSV_MARKDOWN_LOAD_ADHOC HIST_CSV_INVRTV_LOAD_ADHOC HIST_CSV_TRANSFER_LOAD_ADHOC HIST_CSV_DEAL_INCOME_LOAD_ADHOC HIST_CSV_ICMARGIN_LOAD_ADHOC HIST_CSV_INVRECLASS_LOAD_ADHOC |
Business Activity |
RI Integrations |
Design Overview
This process runs all the planning data schema fact exports from the data warehouse to the data exchange layer, where PDS
batch processes pick up and load the data the rest of the way. Each run of these jobs inserts to the target tables with a
new RUN_ID
. Old runs are preserved for a configurable period of time (such as 7 days) to ensure PDS has adequate
time to retrieve the data before it is erased. All fact exports are incremental and send only the current week’s data based
on when it was posted into the data warehouse. This means the exports include all back-posted transaction data regardless
of the transaction date, as long as it was posted to RI in the current fiscal week.
The range of dates exported by this process is tracked and configured from the table C_SOURCE_CDC
. This
table can be edited from the Control & Tactical Center to alter the range of dates exported in one batch execution, such
as when you are sending historical data to MFP, or when you need to refresh the PDS data for more than a week. The table is
automatically updated after every run to reflect the most recent export dates. The next export begins from the last date/time
used.
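For example, you can review the current export window for each interface before a run (a sketch; inspect the date range columns in the output, as their exact names may vary by version):
select * from C_SOURCE_CDC;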
Key Tables Affected
Input Table | Output Table |
---|---|
W_RTL_SLS_IT_LC_WK_A | W_PDS_SLS_IT_LC_WK_A |
W_RTL_SLS_IT_LC_WK_A | W_PDS_GRS_SLS_IT_LC_WK_A |
W_RTL_SLSWF_IT_LC_WK_A | W_PDS_SLSWF_IT_LC_WK_A |
W_RTL_INV_IT_LC_WK_A | W_PDS_INV_IT_LC_WK_A |
W_RTL_INVU_IT_LC_WK_A | W_PDS_INVU_IT_LC_WK_A |
W_RTL_PO_ONORD_IT_LC_DY_F | W_PDS_PO_ONORD_IT_LC_WK_A |
W_RTL_PO_ONORD_IT_LC_DY_F | W_PDS_PO_ONORD_IT_LC_DY_F |
W_RTL_MKDN_IT_LC_WK_A | W_PDS_MKDN_IT_LC_WK_A |
W_RTL_INVADJ_IT_LC_WK_A | W_PDS_INVADJ_IT_LC_WK_A |
W_RTL_INVRC_IT_LC_WK_A | W_PDS_INVRC_IT_LC_WK_A |
W_RTL_INVTSF_IT_LC_WK_A | W_PDS_INVTSF_IT_LC_WK_A |
W_RTL_INVRTV_IT_LC_WK_A | W_PDS_INVRTV_IT_LC_WK_A |
W_RTL_INVRECLASS_IT_LC_WK_A | W_PDS_INVRECLASS_IT_LC_WK_A |
W_RTL_DEALINC_IT_LC_WK_A | W_PDS_DEALINC_IT_LC_WK_A |
W_RTL_ICM_IT_LC_WK_A | W_PDS_ICM_IT_LC_WK_A |
W_RTL_TSF_IT_LC_DY_F | W_PDS_TSF_IT_LC_DY_F |
W_RTL_ALC_IT_LC_DY_F | W_PDS_ALC_IT_LC_DY_F |
Planning Initial Inventory Export
Module Name |
LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC |
Description |
Exports a full snapshot of historical inventory to the data exchange schema for Planning. |
Dependencies |
HIST_INV_LOAD_ADHOC |
Business Activity |
RI Integrations |
Design Overview
This process exports inventory history from the data warehouse to Planning. The base inventory extract for PDS only sends
the current week’s inventory, as the data is positional in the data warehouse and the current week reflects all current values
on the fact. This process can send a range of weeks at one time by configuring the start date and end date in C_SOURCE_CDC
for this interface (where TABLE_NAME = W_RTL_INV_IT_LC_WK_A
). The start and end dates must be week-ending
dates and must match a range of weeks loaded as inventory history into RAP. If the job is run without configuring C_SOURCE_CDC
with valid dates, it will fail with the error “no data found”. All weeks of data are written for a
single Run ID in the output table. Running the PDS import process consumes the entire range of data into the Planning inventory facts.
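A minimal sketch of this configuration, assuming the range is held in two date columns on C_SOURCE_CDC (the LAST_MIN_DATE and LAST_MAX_DATE column names below are assumptions; confirm the actual names in the Control & Tactical Center before editing):
update C_SOURCE_CDC
set LAST_MIN_DATE = to_date('2023-01-07', 'YYYY-MM-DD'), -- first week-ending date in the range (assumed column name)
LAST_MAX_DATE = to_date('2023-03-04', 'YYYY-MM-DD') -- last week-ending date in the range (assumed column name)
where TABLE_NAME = 'W_RTL_INV_IT_LC_WK_A';
commit;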
Planning Load Cleanup
Module Name |
CLEANUP_C_LOAD_DATES_PLANNING_ADHOC |
Description |
Erases the execution status of planning batch programs. This is required to run a program multiple times for the same business date. |
Dependencies |
None |
Business Activity |
RI Integrations |
Design Overview
This process erases records from the C_LOAD_DATES
database table. Any time a job
runs as part of the nightly batch, or a job is included in both nightly and ad hoc
processing, a status record is inserted into C_LOAD_DATES
. The job is
then blocked from executing again while this record exists, as a safety measure when
restarting batch processes that failed midway through execution. During initial planning
integration loads, you may need to execute the same jobs multiple times to work through
file or data issues. In that case, you may execute this process before each run to clear
the status of prior runs from the database.
Note:
This process should only run during history and initial data loads, or at the guidance of Oracle Support. It should not run during regular nightly batch processing. Clearing C_LOAD_DATES
when the batch is running normally
could cause data corruption, as it would allow the same jobs to run multiple times for the same business date.
POS Sales Integration
Module Name |
LOAD_POSLOG_DATA_ADHOC |
Description |
Integrates data from Xstore, received through the POSLOG broadcaster services, into the RI data model. |
Dependencies |
None |
Business Activity |
RI Integrations |
Design Overview
Retail Insights supports loading intraday sales transactions from Xstore’s string-based XML receiver API. The data loaded by this method is specifically for reporting today’s sales before the end-of-day batch processes the full snapshot of audited sales transactions. The sales data from Xstore is not used as a primary source of sales history in Retail Insights, as the system was designed around the concept of a Sales Audit system being used prior to data coming into the data warehouse.
The data first comes to the Retail AI Foundation Cloud Services from Xstore’s web service API. The API is configured as part of the AI Foundation Cloud Services, but is used by Retail Insights to get the raw XML POSLOGs into the database for transformation to the RI data model. This process can then move the data from AI Foundation to RI staging tables, and from there to RI’s internal data model for BI reports. Refer to the RI Implementation Guide for additional details.
Price History Load
Module Name |
HIST_CSV_PRICE_LOAD_ADHOC |
Description |
Loads the PRICE.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The price history load process supports loading of price information over a long period of time to populate the data warehouse
with historical data. This process both stages the PRICE.csv
file into the database and processes it into
RI. Multiple weeks of pricing data can be provided in a single file, though it is recommended not to load more than one month
at a time, unless the volumes are low. Pricing data must start with a full snapshot of all item/locations on the earliest
day in history that you will be loading. This can be loaded by itself to validate that the file is correctly formatted and the data is correct.
From then on, you can provide only the price change events on the dates that they occur (such as regular and markdown price
changes). The price history load will iterate through the provided files one day at a time and load the available price change
events for each date in order.
The C_HIST_LOAD_STATUS
configuration table determines the actions taken by the process. Before running
the process for the first time, you must set the history load date in this table to the very latest date you expect
to load history for (this can be changed later if needed to load more weeks). Once that setup is complete, you can begin processing
files from the earliest date you plan to load. You must start from the beginning of the history and load it sequentially.
You cannot load data out of order, and you cannot load the same date multiple times without first erasing the data from the
database. After a date is loaded successfully, the C_HIST_LOAD_STATUS
records are updated with the most recent
load status and date.
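As with the inventory history load, this setup could be sketched in SQL (the date is an example and the TABLE_NAME filter is an assumption; in practice, edit the table from the Control & Tactical Center):
update C_HIST_LOAD_STATUS
set HIST_LOAD_LAST_DATE = to_date('2023-12-30', 'YYYY-MM-DD')
where TABLE_NAME like 'W_RTL_PRICE%'; -- assumed filter for the price fact rows
commit;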
Price History Load (Legacy)
Module Name |
HIST_PRICE_LOAD_ADHOC |
Description |
Stages and loads the W_RTL_PRICE_IT_LC_DY_FS.dat file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The price history load process supports loading of price information over a long period of time to populate the data warehouse
with historical data. This process stages the W_RTL_PRICE_IT_LC_DY_FS.dat
file into the database and processes
it into RI. Multiple weeks of pricing data can be provided in a single file, though it is recommended not to load more than
one month at a time unless the volumes are low. Pricing data must start with a full snapshot of all item/locations
on the earliest day in the history that you are loading. This can be loaded by itself to validate that the file is correctly formatted and
the data is correct. From then on, you can provide only the price change events on the dates that they occur (such as regular
and markdown price changes). The price history load iterates through the provided files one day at a time and loads the available
price change events for each date in order.
The actions taken by the process are guided by the configuration table C_HIST_LOAD_STATUS
. Before running
the process for the first time, you must set the history load date in this table to the very latest date you expect
to load history for (this can be changed later if needed to load more weeks). Once that setup is complete, you can begin processing
files from the earliest date you plan to load. You must start from the beginning of the history and load it sequentially.
You cannot load data out of order and you cannot load the same date multiple times without first erasing the data from your
database. After a date is loaded successfully, the C_HIST_LOAD_STATUS
records are updated with the most recent
load status and date.
Promotion Budget Load
Module Name |
HIST_PROMO_FACT_LOAD_ADHOC |
Description |
Loads promotion budget and actuals for use in Retail Insights reporting. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
RDE Grants to APEX
Module Name |
RDE_GRANT_MFCS_TO_APEX_ADHOC |
Description |
Refreshes the grants and synonyms for Merchandising replicated objects that should be exposed to Innovation Workbench. |
Dependencies |
None |
Business Activity |
Historical Data Load |
Design Overview
This process runs the job RDE_GRANT_MFCS_TO_APEX_JOB
, which re-applies the necessary grants and objects
to allow a user to query Merchandising data from Innovation Workbench. This process assumes an environment in
which Merchandising is version 22 or later and the data is being actively replicated to RAP using Oracle GoldenGate.
The synonyms are created under the RABE01USER
schema in the database, so when querying Merchandising objects,
you may query a table like this:
select * from RABE01USER.ITEM_MASTER
Receipts History Load
Module Name |
HIST_CSV_INVRECEIPTS_LOAD_ADHOC |
Description |
Loads the RECEIPT.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The history load process for Inventory Receipt transactions accepts an input file at the item/location/day level using
the file specification for RECEIPT.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms
it to internal staging tables, then loads it into the base fact (item/location/day) as well as the week aggregate used for
integrations (item/location/week).
Rejected Record Analysis
Module Name |
W_RTL_REJECT_DIMENSION_TMP_ADHOC |
Description |
Analyzes rejected records in the pricing and inventory position facts for any known causes of rejection, such as missing dimension keys for the records, and outputs a summary for review. |
Dependencies |
None |
Business Activity |
Historical Data Load |
Design Overview
The rejected record analysis ad hoc process provides a set of queries comparing rejected data to
all related dimensional tables. If any dimension keys are found on the rejected data but not
in the related tables, a summary of the comparison is output to a database table for review.
This tool can help debug invalid input data so it can be corrected and reprocessed. The ad hoc
job currently runs for the Sales Transaction, Inventory Position, and Pricing facts, which are
the most common history loads performed. The job is run automatically for Inventory and Price
loads because they will fail if any records are rejected, but it requires manual setup and
execution for Sales Transaction analysis. The modules enabled for the job are listed in the
configuration table W_RTL_REJECT_DIMLKUP_TMP
. The rejected dimension keys are
output to W_RTL_REJECT_DIMENSION_TMP
.
To add the sales module before running the job for transaction loads, follow these steps:
-
Navigate to the Control & Tactical Center’s Manage System Configurations screen.
-
Locate the table
C_MODULE_REJECT_TABLE
and check whether there is already a row for MODULE_CODE=SLS.
-
If there is not a row for SLS, then add a new row with these values for the first three columns (see the SQL sketch after this list):
SLS, E$_W_RTL_SLS_TRX_IT_LC_DY_TMP, W_RTL_SLS_TRX_IT_LC_DY_FS
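If you prefer to see the change as SQL, the new row could be sketched as follows; the column names are assumptions based on the three values above, and the Manage System Configurations screen remains the supported way to add the row:
insert into C_MODULE_REJECT_TABLE (MODULE_CODE, REJECT_TABLE_NAME, STAGE_TABLE_NAME)
values ('SLS', 'E$_W_RTL_SLS_TRX_IT_LC_DY_TMP', 'W_RTL_SLS_TRX_IT_LC_DY_FS');
commit;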
To run the job from Postman, use a message body like the one below:
{
"cycleName": "Adhoc",
"flowName":"Adhoc",
"processName":" W_RTL_REJECT_DIMENSION_TMP_ADHOC",
"requestParameters":"jobParams. W_RTL_REJECT_DIMENSION_TMP_JOB=SLS 20230102 20230109"
}
The parameters for the job are the MODULE_CODE
value from the configuration table followed by start and
end dates in YYYYMMDD
format. The dates correspond to the load date that resulted in rejected records on W_ETL_REJECTED_RECORDS
. After you run the job, query W_RTL_REJECT_DIMENSION_TMP
to see the results.
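For example, from Innovation Workbench:
select * from W_RTL_REJECT_DIMENSION_TMP;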
Rejected Record Cleanup
Module Name |
REJECT_DATA_CLEANUP_ADHOC |
Description |
Purges rejected records from certain E$ reject tables so that blocked positional history loads can resume. |
Dependencies |
None |
Business Activity |
Historical Data Load |
Design Overview
The rejected record cleanup ad hoc process provides a way to clear out rejected data for
positional fact history loads (currently inventory and price) that are blocked by having any
rejections. The data is erased from the E$
tables and any invalid keys that
do not have matching dimensions are written to the C_DISCARD_DIMM
output
table. If you then re-run the failed history job from POM, the job will ignore all of the
discarded dimension keys and proceed to load the rest of the data file for the current
day/week of processing. It is important to note that once you discard positional data in this
manner, you cannot reload it later: you are declaring the data as unwanted/unusable. If you
instead want to reload your data file with corrected records, you would not re-run your
current history load job. You would go back and reload dimension and fact files as needed and
start a fresh job run.
This job requires an input parameter of INV
or PRICE
, which tells the job which fact
to clean up. The Postman message body format is shown below.
{
"cycleName": "Adhoc",
"flowName":"Adhoc",
"processName":"REJECT_DATA_CLEANUP_ADHOC",
"requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=INV"
}
After doing the cleanup, check the C_HIST_LOAD_STATUS
table to see where the history job stopped processing.
If all steps are marked COMPLETE
and the TMP
table has a later value for the MAX_COMPLETED_DATE
(for example, the TMP
table has a date of 04/18/2021
and the other tables show 04/11/2021
) then you may simply rerun the POM job to resume the data load. In this scenario it will use the existing
data in the HIST
table for the week of 04/18/2021 and continue to load those records into the F/A tables (ignoring
the discarded dimension keys).
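For example, the status records can be reviewed from Innovation Workbench (the MAX_COMPLETED_DATE column is referenced above; other column names may vary):
select * from C_HIST_LOAD_STATUS;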
Reprocess CSV Files
Module Name |
CSV_REPROCESS_ADHOC |
Description |
Reprocesses customer data files (with .csv extensions) that failed to load during the nightly batch. |
Dependencies |
REPROCESS_ZIP_FILE_PROCESS_ADHOC |
Business Activity |
Batch Administration |
Design Overview
This process provides an on-demand way to load data files that caused failures in the AIF DATA nightly batch cycle and
have a file extension of .csv
; for example, you attempted to load the file PRODUCT.csv
,
but the nightly batch failed on the job STG_SI_PRODUCT_JOB
. The steps to correct this issue and resume the
batch are as follows:
-
Review and correct the file on your local server and generate a new
PRODUCT.csv
file, then add it to RI_REPROCESS_DATA.zip
. -
Upload the ZIP file using FTS to the
ris/incoming
prefix, then run the AIF DATA process REPROCESS_ZIP_FILE_PROCESS_ADHOC
to load it. Verify that all jobs in the process complete successfully before continuing. -
Select the AIF DATA process
CSV_REPROCESS_ADHOC
, then locate the job named COPY_SI_PRODUCT_JOB
and run it. -
If you want to verify the new file is correct, you may also run
STG_SI_PRODUCT_JOB
from within the same process. You can also go directly back to the nightly batch cycle and re-run the failed STG_SI_PRODUCT_JOB
from there. Re-running the nightly batch job will attempt to stage the file into the database and resume the batch.
Your batch may also fail on the job DIM_PROD_VALIDATOR_JOB
, which means the PRODUCT.csv
file loaded successfully but the contents have other issues that prevent the data from passing into the data warehouse. When the
batch fails here, run all of the steps above (COPY
and STG
jobs for the file) plus all of
the associated “simplified interface” or “SI” jobs that transform your CSV file data into the internal database structures.
In the case of PRODUCT.csv
, these additional jobs are one or more of the following, depending on what is
enabled in your nightly batch:
-
SI_W_PROD_CAT_DHS_JOB
-
SI_W_PRODUCT_ATTR_DS_JOB
-
SI_W_PRODUCT_DS_JOB
-
SI_W_PRODUCT_DS_TL_JOB
-
SI_W_RTL_IT_SUPPLIER_DS_JOB
-
SI_W_RTL_PRODUCT_ATTR_IMG_DS_JOB
-
SI_W_RTL_PRODUCT_BRAND_DS_JOB
-
SI_W_RTL_PRODUCT_BRAND_DS_TL_JOB
After all the ad hoc jobs complete, return to the nightly batch cycle and re-run the failed DIM_PROD_VALIDATOR_JOB
. If the data issues are corrected, then the job will complete successfully.
An alternative way to make data corrections specifically when a VALIDATOR
job fails is to edit the data
directly from Innovation Workbench. For example, when the DIM_PROD_VALIDATOR_JOB
fails, instead of loading
a new PRODUCT.csv
file and re-running all of these jobs, you may instead log into Innovation Workbench and
execute SQL statements against the records that have issues in the staging tables, such as W_PRODUCT_DS
. This should only
be done if you are familiar with the data model and feel comfortable editing the data directly. This may allow you to complete
the current batch cycle faster, but you will still need to go back to your source system and correct the file for tomorrow’s
batch onwards.
Reprocess DAT Files
Module Name |
DAT_REPROCESS_ADHOC |
Description |
Reprocesses customer data files (with .dat extensions) that failed to load during the nightly batch. |
Dependencies |
REPROCESS_ZIP_FILE_PROCESS_ADHOC |
Business Activity |
Batch Administration |
Design Overview
This process provides an on-demand way to load data files that caused failures in the AIF DATA nightly batch cycle and
have a .dat
file extension. For example, you attempted to load the file W_RTL_REPL_DAY_DS.dat
but the nightly batch failed on the job W_RTL_REPL_DAY_DS_STG_JOB
. The steps to correct this issue and
resume the batch process are as follows:
-
Review and correct the file on your local server and generate a new
W_RTL_REPL_DAY_DS.dat
file, then add it to RI_REPROCESS_DATA.zip.
-
Upload the ZIP file using FTS to the
ris/incoming
prefix, then run the AIF DATA process REPROCESS_ZIP_FILE_PROCESS_ADHOC
to load it. Verify that all jobs in the process complete successfully before continuing. -
Select the AIF DATA process
DAT_REPROCESS_ADHOC
, then locate the W_RTL_REPL_DAY_DS_COPY_JOB
job and run it. -
If you want to verify the new file is correct, run
W_RTL_REPL_DAY_DS_STG_JOB
from within the same process. You can also go directly back to the nightly batch cycle and re-run the failed W_RTL_REPL_DAY_DS_STG_JOB
from there. Re-running the nightly batch job will attempt to stage the file into the database and resume the batch.
RTV History Load
Module Name |
HIST_CSV_INVRTV_LOAD_ADHOC |
Description |
Loads the RTV.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The history load process for Inventory Returns to Vendor (RTV) transactions accepts an input file
at the item/location/day level using the file specification for
RTV.csv
. It assumes the file has already been moved into place by the
HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file
into a preprocessing table in the database, transforms it to RI’s internal staging
tables, then loads it into the base fact (item/location/day) as well as the week
aggregate used for integrations (item/location/week).
RTV History Load (Legacy)
Module Name |
HIST_INVRTV_LOAD_ADHOC |
Description |
Stages and loads the W_RTL_INVRTV_IT_LC_DY_FS.dat file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The history load process for Inventory Returns to Vendor (RTV) transactions accepts an input file at the item/location/day
level using the file specification for W_RTL_INVRTV_IT_LC_DY_FS.dat
. It assumes the file has already been
moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process will import the file into RI’s internal
staging tables and then load it into the base fact (item/location/day) as well as the week aggregate used for integrations
(item/location/week).
Sales History Load
Module Name |
HIST_SALES_LOAD_ADHOC |
Description |
Processes any staged sales history data and runs all aggregation programs for a specified history range. |
Dependencies |
HIST_STG_CSV_SALES_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The sales history load process supports loading of sales transaction data over a long period of time to populate the data
warehouse with historical data. It requires the sales data to already be staged into the database using one of the available
staging processes. Multiple weeks of sales can be provided in a single file, though it is recommended not to load more than
one month at a time unless the volumes are low. This process populates all sales tables in the data warehouse (but not AIF
applications), both for integration and BI reporting purposes. If you are not using RI for reporting, disable the aggregation
table programs in POM (except the W_RTL_SLS_IT_LC_WK_A
aggregate) before running the process.
Key Tables Affected
Table | Usage |
---|---|
W_RTL_SLS_TRX_IT_LC_DY_FS | Input |
W_RTL_SLSPK_IT_LC_DY_FS | Input |
W_RTL_SLS_TRX_IT_LC_DY_F | Output (Base Fact) |
W_RTL_SLSPK_IT_LC_DY_F | Output (Base Fact) |
W_RTL_SLS_IT_LC_WK_A | Aggregate (for integrations) |
W_RTL_SLS_IT_LC_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_LC_GMH_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_LC_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_LC_WK_A | Aggregate (for BI reporting) |
W_RTL_SLS_CL_LC_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_CL_LC_WK_A | Aggregate (for BI reporting) |
W_RTL_SLS_DP_LC_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_DP_LC_WK_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_WK_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_WK_A | Aggregate (for BI reporting) |
W_RTL_SLS_LC_DY_A | Aggregate (for BI reporting) |
W_RTL_SLS_LC_WK_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_LC_DY_SN_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_LC_WK_SN_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_DY_SN_A | Aggregate (for BI reporting) |
W_RTL_SLS_IT_WK_SN_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_LC_DY_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_LC_WK_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_CL_LC_DY_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_DP_LC_DY_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_CL_LC_WK_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_DP_LC_WK_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_DY_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_CL_DY_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_DP_DY_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_SC_WK_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_CL_WK_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLS_DP_WK_CUR_A | Aggregate (for BI reporting) |
W_RTL_SLSPK_IT_LC_WK_A | Aggregate (for BI reporting) |
W_RTL_SLSPK_IT_DY_A | Aggregate (for BI reporting) |
W_EMPLOYEE_D | Supporting Dimension (for BI reporting) |
W_PARTY_PER_D | Supporting Dimension (for BI reporting) |
W_RTL_CO_HEAD_D | Supporting Dimension (for BI reporting) |
W_RTL_CO_LINE_D | Supporting Dimension (for BI reporting) |
Sales History Staging
Module Name |
HIST_STG_CSV_SALES_LOAD_ADHOC |
Description |
Stages the sales history data file (CSV format) for loading by the HIST_SALES_LOAD_ADHOC process. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Sales History Staging (Legacy)
Module Name |
HIST_STG_SALES_LOAD_ADHOC |
Description |
Stages the legacy sales history data file (DAT format) for loading by the HIST_SALES_LOAD_ADHOC process. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Sales Tender Load
Module Name |
HIST_SALES_TEND_LOAD_ADHOC |
Description |
Processes any staged sales tender data into the sales tender fact table, including processing of related dimensions used by the fact. |
Dependencies |
HIST_STG_SALES_TEND_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The sales tender load process supports loading of sales transaction tender data over a long period of time to populate the data warehouse with historical data. It requires the sales tender data to already be staged in the database using the separate staging process. Multiple weeks of tender data can be provided in a single file, though it is recommended not to load more than one month at a time unless the volumes are low. This process populates all sales tender tables in the data warehouse, both for integration and BI reporting purposes.
Sales Tender Staging
Module Name |
HIST_STG_SALES_TEND_LOAD_ADHOC |
Description |
Processes a sales tender data file into the sales tender fact staging table. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Sales Wholesale/Franchise Staging
Module Name |
HIST_STG_CSV_SALES_WF_LOAD_ADHOC |
Description |
Processes the sales for wholesale/franchise data file into the sales WF fact staging table. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Sales Wholesale/Franchise Load
Module Name |
HIST_SALES_WF_LOAD_ADHOC |
Description |
Processes any staged sales wholesale/franchise data into the sales wholesale/franchise fact table and aggregates. |
Dependencies |
HIST_STG_CSV_SALES_WF_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The sales wholesale/franchise load process supports loading of sales measures for wholesale/franchise locations over a long period of time to populate the data warehouse with historical data. It requires the sales data to already be staged in the database using the separate staging process. Multiple weeks of sales data can be provided in a single file. This process populates all sales wholesale/franchise tables in the data warehouse, for both integration and BI reporting purposes.
Key Tables Affected
Table | Usage |
---|---|
W_RTL_SLSWF_IT_LC_DY_FS | Staging |
W_RTL_SLSWF_IT_LC_DY_F | Base Fact |
W_RTL_SLSWF_IT_LC_WK_A | Aggregate (PDS Integration) |
W_RTL_SLSWF_IT_DY_A | Aggregate (BI Reporting) |
W_RTL_SLSWF_IT_WK_A | Aggregate (BI Reporting) |
W_RTL_SLSWF_SC_LC_DY_A | Aggregate (BI Reporting) |
W_RTL_SLSWF_SC_LC_WK_A | Aggregate (BI Reporting) |
W_RTL_SLSWF_SC_LC_DY_CUR_A | Aggregate (BI Reporting) |
W_RTL_SLSWF_SC_LC_WK_CUR_A | Aggregate (BI Reporting) |
Shipments History Load
Module Name |
SEED_CSV_W_RTL_SHIP_IT_LC_DY_F_PROCESS_ADHOC |
Description |
Loads a full snapshot of shipment data from SHIPMENT_HEAD.csv and SHIPMENT_DETAIL.csv into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
Data regarding shipments of merchandise is split between two interfaces, the dimension file SHIPMENT_HEAD.csv
and the fact file SHIPMENT_DETAIL.csv
. This process can be used to load full snapshots of historical or
currently active shipments to the data warehouse outside of the nightly batch cycle. The two files must be in sync, meaning
that every shipment record on the detail file must have a record in the header file. The header file is always a full snapshot
of all shipments that should appear as currently active in the data warehouse, meaning that if any shipment records are no
longer sent on SHIPMENT_HEAD.csv
, they will be marked as inactive/closed in the data warehouse table (CURRENT_FLG = N
) and should no longer appear in the files.
Stock Count Load
Module Name |
HIST_STOCK_COUNT_LOAD_ADHOC |
Description |
Loads the stock count dimension and facts for Retail Insights reporting. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process loads the stock count dimension and fact tables for use in Retail Insights reporting. The fact tables are for counts coming from two different sources, systemic counts (INVSS) from a merchandising solution and perpetual counts (INVPS) from a perpetual/real-time inventory solution. The same dimension supports both facts, so the dimension file must be a combination of all stock count header records from any source.
Stock Ledger Load
Module Name |
HIST_STCKLDGR_LOAD_ADHOC |
Description |
Loads Stock Ledger fact data for use in Retail Insights reporting. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process loads the stock ledger fact files for week- and month-level stock ledgers. These facts are used only in Retail Insights reporting. Before using this process, you must configure it to disable one of the two month-level jobs:
-
W_RTL_STCKLDGR_SC_LC_MH_F_JOB
-
W_RTL_STCKLDGR_SC_LC_MH_F_GREG_JOB
The first job is used for the fiscal calendar only, while the second job is used for the Gregorian calendar only. Attempting to run both jobs in the same environment will result in failures, because you cannot have both calendar types in the table at the same time.
Store Traffic Load
Module Name |
HIST_STTRFC_LOAD_ADHOC |
Description |
Loads Store Traffic fact data for use in Retail Insights reporting. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Supplier Compliance Load
Module Name |
HIST_SUPPCM_LOAD_ADHOC |
Description |
Loads Supplier Compliance fact data for use in Retail Insights reporting. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Supplier Invoice Load
Module Name |
HIST_SUPP_IVC_LOAD_ADHOC |
Description |
Loads Supplier Invoice fact data for use in Retail Insights reporting. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Table Partitioning
Module Name |
CREATE_PARTITION_ADHOC |
Description |
Uses the provided range of dates and the loaded calendar information to generate table partitions across the data warehouse data model. |
Dependencies |
CALENDAR_LOAD_ADHOC |
Business Activity |
Initial System Setup |
Design Overview
This process must be used after the Calendar load is complete to partition all of your database tables. Tables in Retail
Insights are partitioned dynamically based on your fiscal calendar using the days and weeks defined in W_MCAL_DAY_D
and W_MCAL_WEEK_D
. This type of partitioning provides optimal performance in BI reporting, where the SQL
queries can prune the selected partitions to only those that hold data for your time-based filters and attributes. Without
this partitioning in place, batch programs will not insert data into the expected partitions, some programs could fail to
load data at all, and BI reporting will have very poor performance.
This process can be run repeatedly to ensure all partitions are created. Each time it runs, it
resumes from where it left off, if any partitions still need to be added to the data
model. If you have run the process several times and it is now completing in under a
minute, then it is no longer creating any new partitions. The functional areas being
partitioned should be reviewed in the table C_MODULE_ARTIFACT
. All
tables should be enabled for partitioning, except for tables that have
PLAN
in their naming structure.
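To review the current partitioning configuration, you can query the table directly (for example, from Innovation Workbench):
select * from C_MODULE_ARTIFACT;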
Transfer Detail History Load
Module Name |
HIST_TSFDETAIL_LOAD_ADHOC |
Description |
Loads a full snapshot of transfers data from W_RTL_TSF_DETAILS_DS.dat and W_RTL_TSF_IT_LC_DY_FS.dat into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
Data regarding transfers of merchandise is split between two interfaces, the dimension file W_RTL_TSF_DETAILS_DS.dat
and the fact file W_RTL_TSF_IT_LC_DY_FS.dat
. This process can be used to load full snapshots of historical
or currently active transfers to the data warehouse outside of the nightly batch cycle. The two files must be in sync, meaning
that every transfer record on the detail file must have a record in the header file. The header file is always a full snapshot
of all transfers that should appear as currently active in the data warehouse, meaning that if any transfer records are no
longer sent on W_RTL_TSF_DETAILS_DS.dat
, they will be marked as inactive/closed in the data warehouse table
(CURRENT_FLG = N
) and should no longer appear in the files.
This data is not the same as the transfer transactions file (TRANSFER.csv
). The transfer transactions
are aggregated at the item/loc/day level of detail, while these two files are for the individual transfer activities at the
lowest level of detail. The aggregated transfer transactions are mainly for Retail Insights and Merchandise Financial Planning,
while these transfer detail files are for RI, AI Foundation, and Inventory Planning Optimization usage.
Transfer Transaction History Load
Module Name |
HIST_CSV_TRANSFER_LOAD_ADHOC |
Description |
Loads the TRANSFER.csv file into the data warehouse. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
The history load process for Inventory Transfer transactions accepts an input file at the item/location/day level using
the file specification for TRANSFER.csv
. It assumes the file has already been moved into place by the HIST_ZIP_FILE_LOAD_ADHOC
process. This process imports the file into a preprocessing table in the database, transforms
it to RI’s internal staging tables, then loads it into the base fact (item/location/day) as well as the week aggregate used
for integrations (item/location/week).
This data is not the same as the transfer details files (W_RTL_TSF_DETAILS_DS.dat
and W_RTL_TSF_IT_LC_DY_FS.dat
). The transfer transactions on this file are aggregated at the item/loc/day level of detail, while the transfer detail files
are for the individual transfer activities at the lowest level of detail. The aggregated transfer transactions on this file
are for Retail Insights and Merchandise Financial Planning.
Translation Lookup Load (Legacy)
Module Name |
W_DOMAIN_MEMBER_LKP_TL_PROCESS_ADHOC |
Description |
Processes the translatable string data in the W_DOMAIN_MEMBER_DS_TL.dat file. |
Dependencies |
HIST_ZIP_FILE_LOAD_ADHOC |
Business Activity |
Historical Data Load |
Design Overview
This process looks for the W_DOMAIN_MEMBER_DS_TL.dat
file placed on the server by a history ZIP file upload
and loads it to the target table in the data warehouse for translatable strings. When using CSV file uploads, all the translatable
strings from the CSV files are automatically inserted into this table and loaded in the data warehouse without a second file
being provided. However, if you are using legacy files, or you need to update records in this table directly, you can use
this process to manually load string lookup records.