13 Control and Tactical Center

This chapter describes the Control and Tactical Center, where an administrative user can access and manage configurations for different applications.

Under the Control and Tactical Center, a user can access Strategy and Policy Management, which is the central place for managing the configurations of different applications and for setting up and configuring the demand forecasting runs. In addition, the modules for managing product attributes (Attribute Extraction and Attribute Binning), as well as the link for managing web service-related credentials, can also be found under the Control and Tactical Center.

Depending on the role, a user will see one or more of the following links, as shown below. Typically, the administrative user has access to all of them.

  • Strategy and Policy Management

  • Manage Credential Stores

  • Attribute Extraction

  • Attribute Binning

Figure 13-1 Control and Tactical Center


Strategy and Policy Management

In Strategy and Policy Management, a user can edit the configurations of different applications using the Manage System Configurations screen. Only the tables that are editable can be accessed in this screen. Within each table, one or more columns are editable, and a user can override the values in those columns.

Manage Forecast Configurations can be used to set up, manage, and configure the demand forecasting runs for different applications such as Lifecycle Pricing Optimization (LPO), Inventory Planning Optimization-Inventory Optimization (IPO-IO), Inventory Planning Optimization-Demand Forecasting (IPO-DF), Inventory Planning Optimization-Lifecycle Allocation and Replenishment (IPO-LAR), Assortment Planning (AP), and Merchandise Financial Planning (MFP).

Forecast Configuration for MFP and AP

To configure the forecast process for MFP and AP, complete the following two steps:

  1. Use the Manage System Configurations screen to review and modify the global configurations in RSE_CONFIG. For further details, see "Configuration Updates". The relevant configurations in RSE_CONFIG that must be reviewed and edited as required are listed in the following table. (A sample review query is sketched after this procedure.)

    Table 13-1 Configurations

    APPL_CODE PARAM_NAME PARAM_VALUE DESCR

    RSE EXTENDED_HIERARCHY_SRC Default value is NON-RMS. Data source providing extended hierarchy data (RMS/NON-RMS).
    RSE LOAD_EXTENDED_PROD_HIER Default value is N. If the product hierarchy data has 9 levels, set this value to Y. If it has 7 levels, keep this value as N. Y/N value. This parameter is used by the product hierarchy ETL to determine if the extended product hierarchy is required.
    PMO PMO_PROD_HIER_TYPE Default value is 3. If the product hierarchy data has nine levels (that is, it has an extended hierarchy), keep this value as 3. If it has seven levels, change this value to 1. The hierarchy ID to use for the product (installation configuration).
    RSE PROD_HIER_SLSTXN_HIER_LEVEL_ID Default value is 9. This parameter identifies the hierarchy level at which sales transactions are provided (7-Style, 8-Style/Color, or 9-Style/Color/Size). It must match the extended hierarchy leaf level.

    RSE FCST_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run data.
    RSE FCST_PURGE_ALL_RUNS Default value is N. Y/N flag indicating all run data should be deleted.
    RSE FCST_TY_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run type data.
    RSE FCST_TY_PURGE_ALL_RUNS Default value is N. Y/N flag indicating all run type data should be deleted.
  2. Use the Manage Forecast Configurations screen to set up the forecast runs for MFP and AP, as follows.

    In Manage Forecast Configurations, start by setting up a run type in the Setup train stop. Click the + icon above the table and fill in the fields in the pop-up. For MFP and AP forecasting, the preferred forecast method is Automatic Exponential Smoothing. However, the Sales & Promo and Life Cycle forecast methods are also supported. You must create a run type for each forecast measure/forecast intersection combination that is required for MFP and/or AP.

    Once you have finished setting up the run types, click Start Data Aggregation in the Setup train stop. Select all the run types that were created and click Submit. When the aggregation is complete, the aggregation status will change to Complete. At this point, ad-hoc test runs and batch runs can be created and executed to generate a forecast.

    To create an ad-hoc run, go to the Test train stop. First, click a run type in the top table and then click the + icon in the bottom table to create a run. In the Create Run pop-up, you can change the configuration parameters related to the estimation process and the forecast process in their respective tabs. For example, if you want to test a run using the Bayesian method, edit the Estimation Method parameter in the Estimation tab using the Edit icon above the table. After modifying and reviewing the configuration parameters, click Submit to start the run. Once the run is complete, the status will change to Forecast Generation Complete.

    Test runs are optional. In addition, you must review and modify the configurations of the run type, activate the run type, enable auto-approve, and map the run type to the downstream application (in this case, to MFP or AP). In the Manage train stop, select a row, click Edit Configurations Parameters, and edit the estimation and forecast parameters as necessary. Once you are done, go to the Review tab, click Validate, and then close the tab.

    Note:

    If the run type is active, you will only be able to view the parameters. In order to edit the parameters, the run type must be inactive.

    To activate the run type and enable auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to MFP or AP, go to the Map train stop and click the + icon to create a new mapping.
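The parameters in Table 13-1 can also be reviewed directly against the RSE_CONFIG table, as referenced in Step 1. The following is a minimal sketch, assuming Innovation Workbench (or equivalent) read access and the APPL_CODE/PARAM_NAME/PARAM_VALUE/DESCR columns shown in the table; the values themselves should still be changed through the Manage System Configurations screen.

-- Review the forecast-related global configurations listed in Table 13-1
SELECT appl_code, param_name, param_value, descr
  FROM rse_config
 WHERE (appl_code = 'RSE' AND param_name IN ('EXTENDED_HIERARCHY_SRC', 'LOAD_EXTENDED_PROD_HIER', 'PROD_HIER_SLSTXN_HIER_LEVEL_ID', 'FCST_PURGE_RETENTION_DAYS', 'FCST_PURGE_ALL_RUNS', 'FCST_TY_PURGE_RETENTION_DAYS', 'FCST_TY_PURGE_ALL_RUNS'))
    OR (appl_code = 'PMO' AND param_name = 'PMO_PROD_HIER_TYPE');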

Run Type Configurations for MFP and AP to Set Up GA Runs

To set up GA runs, create one run type for each forecast intersection/measure combination using the Automatic Exponential Smoothing method. The following table shows the configurations for each run type.

Table 13-2 Run Type Configurations

Forecast Method: Automatic Exponential Smoothing for all run types. Data Source: Store Sales for all run types.

Plan Forecast Level (Merchandise/Location/Calendar) Forecast Measures

Location Plan (MFP) Department/Location/Week Total Gross Sales Units; Total Gross Sales Amount; Total Returns Units; Total Returns Amount
Location Target (MFP) Company/Location/Week Total Gross Sales Units; Total Gross Sales Amount; Total Returns Units; Total Returns Amount
Merchandise Plan (MFP) Subclass/Area/Week Clearance Gross Sales Units; Clearance Gross Sales Amount; Regular and Promotion Gross Sales Units; Regular and Promotion Gross Sales Amount; Total Returns Units; Total Returns Amount; Other Gross Sales Units; Other Gross Sales Amount
Merchandise Target (MFP) Department/Area/Week Clearance Gross Sales Units; Clearance Gross Sales Amount; Regular and Promotion Gross Sales Units; Regular and Promotion Gross Sales Amount; Total Returns Units; Total Returns Amount; Other Gross Sales Units; Other Gross Sales Amount
Assortment Planning (AP) Subclass/Location/Week Clearance Gross Sales Units; Clearance Gross Sales Amount; Regular and Promotion Gross Sales Units; Regular and Promotion Gross Sales Amount
Item Planning (AP) SKU/Location/Week Clearance Gross Sales Units; Clearance Gross Sales Amount; Regular and Promotion Gross Sales Units; Regular and Promotion Gross Sales Amount

Note:

For all batch runs, in addition to the pre-season forecast, the Bayesian forecast is also generated by default. In the Manage train stop, you only need to set the method for the pre-season forecast (either Automatic Exponential Smoothing or Seasonal Exponential Smoothing).

Batch and Ad Hoc Jobs for MFP and AP Forecasting

The Batch and Ad Hoc jobs listed in the following table are used for loading foundation data (product hierarchy, location hierarchy, calendar, product attributes, and so on).

Table 13-3 Configuration and Main Data Load Jobs

JobName Description RmsBatch

ORASE_START_BATCH_JOB ORASE_START_BATCH_JOB rse_process_state_update.ksh
ORASE_START_BATCH_SET_ACTIVE_JOB ORASE_START_BATCH_SET_ACTIVE_JOB rse_batch_type_active.ksh
ORASE_START_BATCH_REFRESH_RESTR_JOB ORASE_START_BATCH_REFRESH_RESTR_JOB rse_batch_freq_restriction.ksh
ORASE_START_BATCH_END_JOB ORASE_START_BATCH_END_JOB rse_process_state_update.ksh
RSE_WEEKLY_INPUT_FILES_START_JOB RSE_WEEKLY_INPUT_FILES_START_JOB rse_process_state_update.ksh
WEEKLY_INPUT_FILES_WAIT_JOB WEEKLY_INPUT_FILES_WAIT_JOB rse_batch_zip_file_wait.ksh
WEEKLY_INPUT_FILES_VAL_JOB WEEKLY_INPUT_FILES_VAL_JOB rse_batch_zip_file_extract.ksh
WEEKLY_INPUT_FILES_COPY_JOB WEEKLY_INPUT_FILES_COPY_JOB rse_batch_inbound_file_copy.ksh
RSE_WEEKLY_INPUT_FILES_END_JOB RSE_WEEKLY_INPUT_FILES_END_JOB rse_process_state_update.ksh
RSE_PROD_HIER_ETL_START_JOB RSE_PROD_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_PROD_SRC_XREF_LOAD_JOB RSE_PROD_SRC_XREF_LOAD_JOB rse_prod_src_xref_load.ksh
RSE_PROD_HIER_LOAD_JOB RSE_PROD_HIER_LOAD_JOB rse_prod_hier_load.ksh
RSE_PROD_TC_LOAD_JOB RSE_PROD_TC_LOAD_JOB rse_prod_tc_load.ksh
RSE_PROD_DH_LOAD_JOB RSE_PROD_DH_LOAD_JOB rse_prod_dh_load.ksh
RSE_PROD_GROUP_LOAD_JOB RSE_PROD_GROUP_LOAD_JOB rse_load_prod_group.ksh
RSE_PROD_HIER_ETL_END_JOB RSE_PROD_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_LOC_HIER_ETL_START_JOB RSE_LOC_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_LOC_SRC_XREF_LOAD_JOB RSE_LOC_SRC_XREF_LOAD_JOB rse_loc_src_xref_load.ksh
RSE_LOC_HIER_LOAD_JOB RSE_LOC_HIER_LOAD_JOB rse_loc_hier_load.ksh
RSE_LOC_HIER_TC_LOAD_JOB RSE_LOC_HIER_TC_LOAD_JOB rse_loc_hier_tc_load.ksh
RSE_LOC_HIER_DH_LOAD_JOB RSE_LOC_HIER_DH_LOAD_JOB rse_loc_hier_dh_load.ksh
RSE_LOC_HIER_ETL_END_JOB RSE_LOC_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_LOC_ATTR_LOAD_START_JOB RSE_LOC_ATTR_LOAD_START_JOB rse_process_state_update.ksh
RSE_CDA_ETL_LOAD_LOC_JOB RSE_CDA_ETL_LOAD_LOC_JOB rse_cda_etl_load_location.ksh
RSE_LOC_ATTR_LOAD_END_JOB RSE_LOC_ATTR_LOAD_END_JOB rse_process_state_update.ksh
RSE_LIKE_LOC_LOAD_START_JOB RSE_LIKE_LOC_LOAD_START_JOB rse_process_state_update.ksh
RSE_LIKE_LOC_STG_JOB RSE_LIKE_LOC_STG_JOB rse_like_loc_stg.ksh
RSE_LIKE_LOC_COPY_JOB RSE_LIKE_LOC_COPY_JOB
RSE_LIKE_LOC_STG_CNE_JOB RSE_LIKE_LOC_STG_CNE_JOB
RSE_LIKE_LOC_LOAD_JOB RSE_LIKE_LOC_LOAD_JOB rse_like_loc_load.ksh
RSE_LIKE_LOC_LOAD_END_JOB RSE_LIKE_LOC_LOAD_END_JOB rse_process_state_update.ksh
RSE_LIKE_PROD_LOAD_START_JOB RSE_LIKE_PROD_LOAD_START_JOB rse_process_state_update.ksh
RSE_LIKE_PROD_STG_JOB RSE_LIKE_PROD_STG_JOB rse_like_prod_stg.ksh
RSE_LIKE_PROD_COPY_JOB RSE_LIKE_PROD_COPY_JOB
RSE_LIKE_PROD_STG_CNE_JOB RSE_LIKE_PROD_STG_CNE_JOB
RSE_LIKE_PROD_LOAD_JOB RSE_LIKE_PROD_LOAD_JOB rse_like_prod_load.ksh
RSE_LIKE_PROD_LOAD_END_JOB RSE_LIKE_PROD_LOAD_END_JOB rse_process_state_update.ksh
RSE_DATA_STAGING_START_JOB RSE_DATA_STAGING_START_JOB rse_process_state_update.ksh
RSE_MD_CDA_VALUES_STG_JOB RSE_MD_CDA_VALUES_STG_JOB rse_md_cda_values_stg.ksh
RSE_MD_CDA_VALUES_COPY_JOB RSE_MD_CDA_VALUES_COPY_JOB
RSE_MD_CDA_VALUES_STG_CNE_JOB RSE_MD_CDA_VALUES_STG_CNE_JOB
RSE_PR_LC_CDA_STG_JOB RSE_PR_LC_CDA_STG_JOB rse_pr_lc_cda_stg.ksh
RSE_PR_LC_CDA_COPY_JOB RSE_PR_LC_CDA_COPY_JOB
RSE_PR_LC_CDA_STG_CNE_JOB RSE_PR_LC_CDA_STG_CNE_JOB
RSE_PR_LC_CAL_CDA_JOB RSE_PR_LC_CAL_CDA_JOB rse_pr_lc_cal_cda_stg.ksh
RSE_PR_LC_CAL_CDA_COPY_JOB RSE_PR_LC_CAL_CDA_COPY_JOB
RSE_PR_LC_CAL_CDA_STG_CNE_JOB RSE_PR_LC_CAL_CDA_STG_CNE_JOB
RSE_MD_CDA_STG_JOB RSE_MD_CDA_STG_JOB rse_md_cda_stg.ksh
RSE_MD_CDA_COPY_JOB RSE_MD_CDA_COPY_JOB
RSE_MD_CDA_STG_CNE_JOB RSE_MD_CDA_STG_CNE_JOB
RSE_MD_CDA_LOAD_JOB RSE_MD_CDA_LOAD_JOB rse_md_cda_load.ksh
RSE_MD_CDA_VALUES_LOAD_JOB RSE_MD_CDA_VALUES_LOAD_JOB rse_md_cda_values_load.ksh
RSE_DATA_STAGING_END_JOB RSE_DATA_STAGING_END_JOB rse_process_state_update.ksh
RSE_PROD_ATTR_LOAD_START_JOB RSE_PROD_ATTR_LOAD_START_JOB rse_process_state_update.ksh
RSE_CDA_ETL_LOAD_PROD_JOB RSE_CDA_ETL_LOAD_PROD_JOB rse_cda_etl_load_product.ksh
RSE_PROD_ATTR_GRP_VALUE_STG_JOB RSE_PROD_ATTR_GRP_VALUE_STG_JOB rse_prod_attr_grp_value_stg.ksh
RSE_PROD_ATTR_GRP_VALUE_COPY_JOB RSE_PROD_ATTR_GRP_VALUE_COPY_JOB
RSE_PROD_ATTR_GRP_VALUE_STG_CNE_JOB RSE_PROD_ATTR_GRP_VALUE_STG_CNE_JOB
RSE_PROD_ATTR_GRP_VALUE_LOAD_JOB RSE_PROD_ATTR_GRP_VALUE_LOAD_JOB rse_prod_attr_grp_value_load.ksh
RSE_PROD_ATTR_VALUE_XREF_STG_JOB RSE_PROD_ATTR_VALUE_XREF_STG_JOB rse_prod_attr_value_xref_stg.ksh
RSE_PROD_ATTR_VALUE_XREF_COPY_JOB RSE_PROD_ATTR_VALUE_XREF_COPY_JOB
RSE_PROD_ATTR_VALUE_XREF_STG_CNE_JOB RSE_PROD_ATTR_VALUE_XREF_STG_CNE_JOB
RSE_PROD_ATTR_VALUE_XREF_LOAD_JOB RSE_PROD_ATTR_VALUE_XREF_LOAD_JOB rse_prod_attr_value_xref_load.ksh
RSE_PROD_ATTR_LOAD_END_JOB RSE_PROD_ATTR_LOAD_END_JOB rse_process_state_update.ksh
RSE_CM_GRP_HIER_LOAD_START_JOB RSE_CM_GRP_HIER_LOAD_START_JOB rse_process_state_update.ksh
RSE_CM_GRP_XREF_LOAD_JOB RSE_CM_GRP_XREF_LOAD_JOB rse_cm_grp_xref_load.ksh
RSE_CM_GRP_HIER_LOAD_JOB RSE_CM_GRP_HIER_LOAD_JOB rse_cm_grp_hier_load.ksh
RSE_CM_GRP_TC_LOAD_JOB RSE_CM_GRP_TC_LOAD_JOB rse_cm_grp_tc_load.ksh
RSE_CM_GRP_DH_LOAD_JOB RSE_CM_GRP_DH_LOAD_JOB rse_cm_grp_dh_load.ksh
RSE_CM_GRP_HIER_LOAD_END_JOB RSE_CM_GRP_HIER_LOAD_END_JOB rse_process_state_update.ksh
RSE_TRADE_AREA_HIER_LOAD_START_JOB RSE_TRADE_AREA_HIER_LOAD_START_JOB rse_process_state_update.ksh
RSE_TRADE_AREA_SRC_XREF_LOAD_JOB RSE_TRADE_AREA_SRC_XREF_LOAD_JOB rse_trade_area_src_xref_load.ksh
RSE_TRADE_AREA_HIER_LOAD_JOB RSE_TRADE_AREA_HIER_LOAD_JOB rse_trade_area_hier_load.ksh
RSE_TRADE_AREA_TC_LOAD_JOB RSE_TRADE_AREA_TC_LOAD_JOB rse_trade_area_tc_load.ksh
RSE_TRADE_AREA_DH_LOAD_JOB RSE_TRADE_AREA_DH_LOAD_JOB rse_trade_area_dh_load.ksh
RSE_TRADE_AREA_HIER_LOAD_END_JOB RSE_TRADE_AREA_HIER_LOAD_END_JOB rse_process_state_update.ksh
RSE_CUST_CONS_SEG_HIER_ETL_START_JOB RSE_CUST_CONS_SEG_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_CUSTSEG_SRC_XREF_LOAD_JOB RSE_CUSTSEG_SRC_XREF_LOAD_JOB rse_custseg_src_xref_load.ksh
RSE_CUSTSEG_HIER_LOAD_JOB RSE_CUSTSEG_HIER_LOAD_JOB rse_custseg_hier_load.ksh
RSE_CUSTSEG_HIER_TC_LOAD_JOB RSE_CUSTSEG_HIER_TC_LOAD_JOB rse_custseg_hier_tc_load.ksh
RSE_CUSTSEG_CUST_XREF_LOAD_JOB RSE_CUSTSEG_CUST_XREF_LOAD_JOB rse_custseg_cust_xref_load.ksh
RSE_CONSEG_LOAD_JOB RSE_CONSEG_LOAD_JOB rse_conseg_load.ksh
RSE_CONSEG_ALLOC_LOAD_JOB RSE_CONSEG_ALLOC_LOAD_JOB rse_conseg_alloc_load.ksh
RSE_CUSTSEG_ALLOC_LOAD_JOB RSE_CUSTSEG_ALLOC_LOAD_JOB rse_custseg_alloc_load.ksh
RSE_CUST_CONS_SEG_HIER_ETL_END_JOB RSE_CUST_CONS_SEG_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_CAL_HIER_ETL_START_JOB RSE_CAL_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_REGULAR_MAIN_LOAD_JOB RSE_REGULAR_MAIN_LOAD_JOB rse_regular_main_load.ksh
RSE_FISCAL_MAIN_LOAD_JOB RSE_FISCAL_MAIN_LOAD_JOB rse_fiscal_main_load.ksh
RSE_CAL_HIER_ETL_END_JOB RSE_CAL_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_END_START_JOB RSE_DIMENSION_LOAD_END_START_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_END_END_JOB RSE_DIMENSION_LOAD_END_END_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_START_START_JOB RSE_DIMENSION_LOAD_START_START_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_START_END_JOB RSE_DIMENSION_LOAD_START_END_JOB rse_process_state_update.ksh
RSE_SLS_START_START_JOB RSE_SLS_START_START_JOB rse_process_state_update.ksh
RSE_SLS_START_END_JOB RSE_SLS_START_END_JOB rse_process_state_update.ksh
RSE_SLS_END_START_JOB RSE_SLS_END_START_JOB rse_process_state_update.ksh
RSE_SLS_END_END_JOB RSE_SLS_END_END_JOB rse_process_state_update.ksh
ORASE_END_START_JOB ORASE_END_START_JOB rse_process_state_update.ksh
ORASE_END_RUN_DATE_UPDT_JOB ORASE_END_RUN_DATE_UPDT_JOB rse_batch_run_date_update.ksh
ORASE_END_END_JOB ORASE_END_END_JOB rse_process_state_update.ksh
RSE_MASTER_ADHOC_JOB Run RSE master script rse_master.ksh
PMO_MASTER_ADHOC_JOB Run PMO master script pmo_master.ksh

PMO_CUMUL_SLS_SETUP_ADHOC_JOB Ad hoc setup job for cumulative sales calculation pmo_cum_sls_load_setup.ksh
PMO_CUMUL_SLS_PROCESS_ADHOC_JOB Ad hoc process job for cumulative sales calculation pmo_cum_sls_load_process.ksh
PMO_ACTIVITY_STG_ADHOC_JOB Ad hoc PMO Activity load setup job pmo_activity_load_setup.ksh
PMO_ACTIVITY_LOAD_ADHOC_JOB Ad hoc PMO Activity load process job pmo_activity_load_process.ksh
PMO_ACTIVITY_WH_LOAD_SETUP_ADHOC_JOB Ad hoc PMO Activity setup job to aggregate sales data for warehouse transfers pmo_activity_wh_load_setup.ksh
PMO_ACTIVITY_WH_LOAD_PROCESS_ADHOC_JOB Ad hoc PMO Activity process job to aggregate sales data for warehouse transfers pmo_activity_wh_load_process.ksh
PMO_ACTIVITY_SHIPMENT_LOAD_SETUP_ADHOC_JOB Ad hoc PMO Activity setup job to aggregate sales data for warehouse shipments pmo_activity_shipment_load_setup.ksh
PMO_ACTIVITY_SHIPMENT_LOAD_PROCESS_ADHOC_JOB Ad hoc PMO Activity process job to aggregate sales data for warehouse shipments pmo_activity_shipment_load_process.ksh
PMO_RETURN_DATAPREP_SETUP_ADHOC_JOB Ad hoc PMO Returns data load preparation setup job pmo_return_dataprep_setup.ksh
PMO_RETURN_DATAPREP_PROCESS_ADHOC_JOB Ad hoc PMO Returns data load preparation processing job pmo_return_dataprep_process.ksh
PMO_RETURN_CALC_PROCESS_ADHOC_JOB Ad hoc PMO Returns calculations processing job pmo_return_calculation.ksh
PMO_HOLIDAY_LOAD_ADHOC_JOB Ad hoc PMO Holiday load job pmo_holiday_load.ksh

PMO_ACTIVITY_LOAD_START_JOB PMO Activity load start job rse_process_state_update.ksh
PMO_ACTIVITY_STG_JOB PMO Activity data staging pmo_activity_load_setup.ksh
PMO_ACTIVITY_LOAD_JOB PMO Activity load job pmo_activity_load_process.ksh
PMO_ACTIVITY_LOAD_END_JOB PMO Activity load end job rse_process_state_update.ksh

The Batch and Ad hoc jobs listed in the following table are used to prepare weekly data and to run weekly batches for MFP and AP.

Table 13-4 Batch and Ad Hoc Jobs for MFP and AP Forecasting

JobName Description RmsBatch

RSE_FCST_SALES_PLAN_START_JOB Start job for sales plan data load for Bayesian method rse_process_state_update.ksh
RSE_FCST_SALES_PLAN_LOAD_JOB Load job for sales plan data for Bayesian method rse_fcst_sales_plan_load.ksh
RSE_FCST_SALES_PLAN_END_JOB End job for sales plan data load for Bayesian method rse_process_state_update.ksh
RSE_ASSORT_PLAN_LOAD_START_JOB Loading assortment plan through interface for AP rse_process_state_update.ksh
RSE_ASSORT_PLAN_LOAD_STG_JOB Loading assortment plan through interface for AP rse_assort_plan_per_stg.ksh
RSE_ASSORT_PLAN_LOAD_COPY_JOB Loading assortment plan through interface for AP
RSE_ASSORT_PLAN_LOAD_STG_CNE_JOB Loading assortment plan through interface for AP
RSE_RDX_ASSORT_PLAN_IMPORT_JOB Importing assortment plan from RDX to AIF for AP rse_rdx_assort_plan_import.ksh
RSE_ASSORT_PLAN_LOAD_JOB Loading assortment plan through interface for AP rse_assort_plan_per_load.ksh
RSE_ASSORT_PLAN_LOAD_END_JOB Loading assortment plan through interface for AP rse_process_state_update.ksh
RSE_PLANNING_PERIOD_LOAD_START_JOB Loading assortment period through interface for AP rse_process_state_update.ksh
RSE_PLANNING_PERIOD_STG_JOB Loading assortment period through interface for AP rse_planning_period_stg.ksh
RSE_PLANNING_PERIOD_COPY_JOB Loading assortment period through interface for AP
RSE_PLANNING_PERIOD_STG_CNE_JOB Loading assortment period through interface for AP
RSE_RDX_ASSORT_PERIOD_IMPORT_JOB Importing assortment period from RDX to AIF for AP rse_rdx_assort_period_import.ksh
RSE_PLANNING_PERIOD_LOAD_JOB Loading assortment period through interface for AP rse_planning_period_load.ksh
RSE_PLANNING_PERIOD_LOAD_END_JOB Loading assortment period through interface for AP rse_process_state_update.ksh
RSE_MFP_FCST_BATCH_RUN_START_JOB Start job for executing MFP and AP forecast batch runs rse_process_state_update.ksh
RSE_CREATE_MFP_BATCH_RUN_PROC_JOB Create MFP and AP batch runs rse_create_mfp_batch_run_proc.ksh
RSE_MFP_FCST_BATCH_PROCESS_JOB Execute MFP and AP forecast batch rse_fcst_mfp_batch_process.ksh
RSE_MFP_FCST_BATCH_RUN_END_JOB End job for executing MFP and AP forecast batch runs rse_process_state_update.ksh
RSE_CREATE_MFP_BATCH_RUN_PROC_ADHOC_JOB Ad hoc job to create MFP and AP batch runs rse_create_mfp_batch_run_proc.ksh
RSE_MFP_FCST_BATCH_PROCESS_ADHOC_JOB Ad hoc job to execute MFP and AP forecast batch rse_fcst_mfp_batch_process.ksh

The forecast values generated by runs that are associated with active run types are exported to the RDX schema. The jobs for exporting the outputs to the RDX schema are listed in the following table.

Table 13-5 Export Jobs for MFP and AP from AIF to RDX

JobName Description RmsBatch

RSE_FCST_EXPORT_START_JOB Start job for MFP and AP forecast export rse_process_state_update.ksh
RSE_MFP_FCST_EXPORT_JOB Export MFP forecast rse_mfp_export.ksh
RSE_AP_FCST_EXPORT_JOB Export AP forecast rse_ap_export.ksh
RSE_FCST_EXPORT_END_JOB End job for MFP and AP forecast export rse_process_state_update.ksh
RSE_MFP_FCST_EXPORT_ADHOC_JOB Ad hoc job to export MFP forecast rse_mfp_export.ksh
RSE_AP_FCST_EXPORT_ADHOC_JOB Ad hoc job to export AP forecast rse_ap_export.ksh
RSE_RDX_ASSORT_ELASTICITY_EXPORT_JOB Export assortment elasticity for AP rse_rdx_assort_elasticity_export.ksh
RSE_RDX_ASSORT_ELASTICITY_EXPORT_ADHOC_JOB Ad hoc job to export assortment elasticity for AP rse_rdx_assort_elasticity_export.ksh
RSE_RDX_LOC_CLUSTER_EXPORT_JOB Export store clusters for AP rse_rdx_loc_cluster_export.ksh
RSE_RDX_LOC_CLUSTER_EXPORT_ADHOC_JOB Ad hoc job to export store clusters for AP rse_rdx_loc_cluster_export.ksh
RSE_RDX_SIZE_PROFILE_EXPORT_JOB Export size profiles for AP rse_rdx_size_profile_export.ksh
RSE_RDX_SIZE_PROFILE_EXPORT_ADHOC_JOB Ad hoc job to export size profiles for AP rse_rdx_size_profile_export.ksh

Forecast Configuration for IPO (DF, IO, and LAR) and LPO

To configure the forecast process for IPO and LPO, complete the following two steps:

  1. Use the Manage System Configurations screen to review and modify the global configurations in RSE_CONFIG. For further details, see "Configuration Updates". The relevant configurations in RSE_CONFIG that must be reviewed and edited as required are listed in the following table.

    Table 13-6 RSE Configurations

    APPL_CODE PARAM_NAME PARAM_VALUE DESCR

    RSE EXTENDED_HIERARCHY_SRC Default value is NON-RMS. Data source providing extended hierarchy data (RMS/NON-RMS).
    RSE LOAD_EXTENDED_PROD_HIER Default value is N. If the product hierarchy data has 9 levels, set this value to Y. If it has 7 levels, keep this value as N. Y/N value. This parameter is used by the product hierarchy ETL to determine if the extended product hierarchy is required.
    PMO PMO_PROD_HIER_TYPE Default value is 3. If the product hierarchy data has 9 levels (that is, it has an extended hierarchy), keep this value as 3. If it has 7 levels, change this value to 1. The hierarchy ID to use for the product (installation configuration).
    RSE PROD_HIER_SLSTXN_HIER_LEVEL_ID Default value is 9. This parameter identifies the hierarchy level at which sales transactions are provided (7-Style, 8-Style/Color, or 9-Style/Color/Size). It must match the extended hierarchy leaf level.
    PMO PMO_AGGR_INVENTORY_DATA_FLG Default value is Y. Specifies if inventory data is present and if it should be used when aggregating activities data. Set this value to N if inventory data is not loaded (inventory data is not required for MFP and AP, but it may be required for other applications such as LPO and IPO).
    RSE SLS_TXN_EXCLUDE_LIABILITY_SLS Default value is N. Y/N flag indicating if liability sales columns should be excluded when retrieving sales transaction data.
    RSE RSE_LOC_HIER_EXCL_FRANCHISE Default value is N. Y/N flag indicating if franchise locations should be excluded from the location hierarchy.
    RSE PROMOTIONAL_SALES_AVAIL Default value is Y. Y/N flag indicating if promotion data is available for Sales & Promo run types. Set this value to N if promotion data is not yet loaded into AIF.
    RSE DFLT_LIFE_CYCLE Default value is SLC. This parameter identifies the primary forecast method for the environment. Keep this value as SLC if the primary forecast method is Life Cycle and change this value to LLC if the primary forecast method is Sales & Promo.

    PMO PMO_OUTLIER_CALCULATION_FLG Default value is Y. Y/N flag indicating if the Outlier flag calculation should happen internally in AIF.
    RSE FCST_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run data.
    RSE FCST_PURGE_ALL_RUNS Default value is N. Y/N flag indicating all run data should be deleted.
    RSE FCST_TY_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run type data.
    RSE FCST_TY_PURGE_ALL_RUNS Default value is N. Y/N flag indicating all run type data should be deleted.
    RSE RSE_FIRST_INV_RECEIPT_HIST_MAX_WEEKS Default value is 106. Records with an inventory first receipt date older than this many weeks (from the latest week with inventory) will be excluded from the model start date calculation.
    RSE INV_SRC_RI Default value is Y. Flag which indicates whether inventory is obtained from AIF data warehouse (Y) or loaded directly to AIF applications (N).
    RSE INV_RC_DT_SRC_RI Default value is Y. Flag that indicates whether receipt dates are obtained from AIF data warehouse (Y) or loaded directly to AIF applications (N).
    RSE RSE_FCST_NEW_ITEM_STRATEGY Default value is RDX. Allowed values are RDX (for population from RDX), RI (for population from RI interface), and RSP (programmatic population using RSE_FCST_NEW_ITEM_AGE_THRESH).
    RSE RSE_FCST_NEW_ITEM_AGE_THRESH Default value is 28. When RSE_FCST_NEW_ITEM_STRATEGY is RSP, this threshold will be used to identify new items. If the number of days passed from the prod_created_dt of the item is smaller than or equal to this value, the item will be considered new.
    RSE RSE_BD_ENABLE_POISSON Default value is Y. Y/N flag indicating whether to also use the Poisson method for calculating base demand for Sales & Promo run types.
    RSE SWITCH_TO_TXN_LOAD_DATE Default value is SYSDATE. For customers who provide FACT data at an aggregate level, this date indicates the stop point for loading aggregate data. If SYSDATE is greater than or equal to this date, the data load from RI FACT tables to AIF will not run. Applicable date format: YYYYMMDD.
  2. Use the Manage Forecast Configurations screen to set up the forecast runs, as follows.

    In Manage Forecast Configurations, start by setting up a run type in the Setup train stop. Click the + icon above the table and fill in the fields in the pop-up. The customer should decide the primary forecast method for the environment and accordingly set the value of the parameter DFLT_LIFE_CYCLE in the table RSE_CONFIG (APPL_CODE = RSE) from the Manage System Configurations screen. All the items that are not supposed to be forecast by the primary forecast method must be stored in the table RSE_FCST_LIFE_CYCLE_CLSF (populated through an interface). If the customer wants to forecast separately on the items stored in RSE_FCST_LIFE_CYCLE_CLSF and on the items that are not, then run types for both the Sales & Promo and Life Cycle forecast methods must be created. Each run type will use either the items present in the table RSE_FCST_LIFE_CYCLE_CLSF or the rest of the items, depending on the Forecast Method for the run type and the DFLT_LIFE_CYCLE parameter. You must create a run type for each forecast intersection combination that is required. (A sample setup check is sketched at the end of this topic.)

    Once you have finished setting up the run types, click Start Data Aggregation in the Setup train stop. Select all the run types that were created and click Submit. When the aggregation is complete, the aggregation status will change to Complete. At this point, ad-hoc test runs and batch runs can be created and executed to generate a forecast.

    To create an ad-hoc run, go to the Test train stop. First, click a run type in the top table and then click the + icon in the bottom table to create a run. In the Create Run pop-up, you can change the configuration parameters related to the estimation process, base demand, and forecast process in their respective tabs. After modifying and reviewing the configuration parameters, click Submit to start the run. Upon submission, a validation process checks the values of the configuration parameters. If there is any error, correct the corresponding parameter and submit again. Once the run is complete, the status will change to Forecast Generation Complete.

    Once you are done with testing, you must review and modify the configurations of the run type, activate the run type, enable auto-approve, and map the run type to the downstream application (for example, IPO-DF). In the Manage train stop, select a row, click Edit Configurations Parameters, and edit the estimation and forecast parameters as necessary. Once you are done, go to the Review tab, click Validate, and then close the tab.

    Note:

    If the run type is active, you will only be able to view the parameters. In order to edit the parameters, the run type must be inactive.

    To activate the run type and enable auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to IPO-DF, go to the Map train stop and click the + icon to create a new mapping.

    Note:

    For IPO-DF, you can map one or more run types to the same "external key" run type. The run types mapped to the same external key must have the same forecast intersection, forecast measure, price zone and customer segment flag, and data source, but different forecast methods.

By default, the approved forecast source for a run type is AIF. But, when a run type is mapped to IPO-DF, irrespective of what other applications it is mapped to, the approved forecast source changes to RDX. The approved forecast source changes back to AIF when the run type mapping to IPO-DF is deleted. So, the approved forecast source is unique to a run type and changes based on the mapping situation. Also, note that when a run type is mapped to RI/RMS, the approved forecast from that run type (either directly from AIF or via IPO-DF depending on whether a mapping also exists to IPO-DF) will be pushed to RI for BI reporting and/or RMS for merchandising. Another nuance is that when a Life Cycle method run type is mapped to LPO-Promotion/Markdown, this run type will treat the value of the DFLT_LIFE_CYCLE parameter as SLC. This enables an environment to forecast on all items using both Sales & Promo and Life Cycle methods, given that the Life Cycle method run type is mapped to LPO-Promotion/Markdown, the RSE_FCST_LIFE_CYCLE_CLSF table is empty, and the DFLT_LIFE_CYCLE parameter is set to LLC.
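The following is a minimal sketch, assuming Innovation Workbench (or equivalent) read access, of the setup check referenced in Step 2: it reads the DFLT_LIFE_CYCLE value and verifies whether RSE_FCST_LIFE_CYCLE_CLSF has been populated through the interface (the table's column names are not documented here, so only a row count is used).

-- Primary forecast method for the environment (SLC or LLC)
SELECT param_value FROM rse_config
 WHERE appl_code = 'RSE' AND param_name = 'DFLT_LIFE_CYCLE';

-- Number of exception life cycle items loaded through the interface
SELECT COUNT(*) FROM rse_fcst_life_cycle_clsf;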

Batch and Ad-Hoc Jobs for IPO-DF Forecasting

The configuration and main data load jobs listed in Table 13-3 are used for loading foundation data (product hierarchy, location hierarchy, calendar, product attributes, and so on).

The Batch and Ad-Hoc jobs listed in the following table are used for preparing weekly data and running weekly batches for IPO-DF.

Table 13-7 Batch and Ad Hoc Jobs for IPO-DF Forecasting

JobName Description RmsBatch

RSE_PROMO_HIER_ETL_START_JOB Start job for loading promotion hierarchy data rse_process_state_update.ksh
RSE_PROMO_SRC_XREF_LOAD_JOB Job for loading promotion hierarchy data rse_promo_src_xref_load.ksh
RSE_PROMO_HIER_LOAD_JOB Job for loading promotion hierarchy data rse_promo_hier_load.ksh
RSE_PROMO_HIER_DH_LOAD_JOB Job for loading promotion hierarchy data rse_promo_hier_dh_load.ksh
RSE_PROMO_HIER_ETL_END_JOB End job for loading promotion hierarchy data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_PL_LLC_START_JOB Start job for loading promotion lift data rse_process_state_update.ksh

PMO_ACTIVITY_LOAD_PL_LLC_SETUP_JOB Setup job for loading promotion lift data pmo_activity_load_llc_pl_setup.ksh
PMO_ACTIVITY_LOAD_PL_LLC_PROCESS_JOB Process job for loading promotion lift data pmo_activity_load_llc_pl_process.ksh

PMO_ACTIVITY_LOAD_PL_LLC_END_JOB End job for loading promotion lift data rse_process_state_update.ksh

PMO_ACTIVITY_LOAD_PL_LLC_SETUP_ADHOC_JOB Ad-hoc setup job for loading promotion lift data pmo_activity_load_llc_pl_setup.ksh
PMO_ACTIVITY_LOAD_PL_LLC_PROCESS_ADHOC_JOB Ad-hoc process job for loading promotion lift data pmo_activity_load_llc_pl_process.ksh

PMO_ACTIVITY_LOAD_OFFERS_INITIAL_ADHOC_JOB Initial one-time job for loading promotion offers data pmo_activity_load_offers_initial.ksh


PMO_ACTIVITY_LOAD_OFFERS_START_JOB Start job for loading promotion offers data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_OFFERS_LOAD_JOB Job for loading promotion offers data pmo_activity_load_offers_weekly.ksh
PMO_ACTIVITY_LOAD_OFFERS_END_JOB End job for loading promotion offers data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_OFFERS_LOAD_ADHOC_JOB Ad-hoc job for loading promotion offers data pmo_activity_load_offers_weekly.ksh

RSE_FLEX_GROUP_LOAD_START_JOB Loading flex group data through interface rse_process_state_update.ksh
RSE_FLEX_GROUP_DTL_STG_JOB Loading flex group data through interface (populates rse_flex_group_dtl_stg) rse_flex_group_dtl_stg.ksh
RSE_FLEX_GROUP_DTL_COPY_JOB Loading flex group data through interface
RSE_FLEX_GROUP_DTL_STG_CNE_JOB Loading flex group data through interface
RSE_FLEX_GROUP_DTL_LOAD_JOB Loading flex group data through interface (populates rse_flex_group and rse_flex_group_dtl based on rse_flex_group_set and rse_flex_group_dtl_stg) rse_flex_group_dtl_load.ksh
RSE_FLEX_GROUP_LOAD_END_JOB Loading flex group data through interface rse_process_state_update.ksh
RSE_FCST_SPREAD_PROFILE_LOAD_START_JOB Loading spread profile data through interface rse_process_state_update.ksh
RSE_FCST_SPREAD_PROFILE_STG_JOB Loading spread profile data through interface (populates rse_fcst_spread_profile_stg) rse_fcst_spread_profile_stg.ksh
RSE_FCST_SPREAD_PROFILE_COPY_JOB Loading spread profile data through interface
RSE_FCST_SPREAD_PROFILE_STG_CNE_JOB Loading spread profile data through interface
RSE_FCST_SPREAD_PROFILE_LOAD_JOB Loading spread profile data through interface (populates rse_fcst_spread_profile based on rse_fcst_spread_profile_stg) rse_fcst_spread_profile_load.ksh
RSE_FCST_SPREAD_PROFILE_LOAD_END_JOB Loading spread profile data through interface rse_process_state_update.ksh
RSE_FCST_LIFE_CYCLE_CLSF_LOAD_START_JOB Loading exception life cycle items through interface rse_process_state_update.ksh
RSE_FCST_LIFE_CYCLE_CLSF_STG_JOB Loading exception life cycle items through interface (populates rse_fcst_life_cycle_clsf_stg) rse_fcst_life_cycle_clsf_stg.ksh
RSE_FCST_LIFE_CYCLE_CLSF_COPY_JOB Loading exception life cycle items through interface
RSE_FCST_LIFE_CYCLE_CLSF_STG_CNE_JOB Loading exception life cycle items through interface
RSE_FCST_LIFE_CYCLE_CLSF_LOAD_JOB Loading exception life cycle items through interface (populates rse_fcst_life_cycle_clsf based on rse_fcst_life_cycle_clsf_stg) rse_fcst_life_cycle_clsf_load.ksh
RSE_FCST_LIFE_CYCLE_CLSF_LOAD_END_JOB Loading exception life cycle items through interface rse_process_state_update.ksh

RSE_INV_LOAD_START_JOB Inventory Load Start Job rse_process_state_update.ksh
RSE_INV_PR_LC_WK_STG_JOB Inventory price location week file stage (populates rse_inv_pr_lc_wk_a_stg) rse_inv_pr_lc_wk_stg.ksh
RSE_INV_PR_LC_WK_COPY_JOB Loading price location week file
RSE_INV_PR_LC_WK_STG_CNE_JOB Loading price location week file
RSE_INV_PR_LC_WK_LOAD_JOB Inventory price location week file load (populates rse_inv_pr_lc_wk_a and rse_inv_pr_lc_hist based on rse_inv_pr_lc_wk_a_stg) rse_inv_pr_lc_wk_load.ksh
RSE_INV_PR_LC_WK_SETUP_JOB Inventory price location week AIF Data load setup rse_inv_pr_lc_wk_setup.ksh
RSE_INV_PR_LC_WK_PROCESS_JOB Inventory price location week AIF Data load process (populates rse_inv_pr_lc_wk_a and rse_inv_pr_lc_hist based on w_rtl_inv_it_lc_wk_a and w_rtl_invu_it_lc_wk_a) rse_inv_pr_lc_wk_process.ksh
RSE_INV_LOAD_END_JOB Inventory Load End Job rse_process_state_update.ksh
RSE_EXIT_DATE_LOAD_START_JOB Start job for exit dates rse_process_state_update.ksh
RSE_EXIT_DATE_STG_JOB Staging job for exit dates (populates rse_exit_date_stg) rse_exit_date_stg.ksh
RSE_EXIT_DATE_COPY_JOB CNE copy job for exit dates  
RSE_EXIT_DATE_STG_CNE_JOB CNE staging job for exit dates  
RSE_EXIT_DATE_LOAD_JOB Load job for exit dates (populates rse_exit_date based on rse_exit_date_stg) rse_exit_date_load.ksh
RSE_EXIT_DATE_LOAD_END_JOB End job for exit dates rse_process_state_update.ksh

PMO_RUN_EXEC_START_JOB Start job for PMO run execution rse_process_state_update.ksh
PMO_CREATE_BATCH_RUN_JOB Create batch run for PMO execution rse_create_pmo_batch_run_proc.ksh
PMO_RUN_EXEC_SETUP_JOB Setup job for PMO run execution pmo_run_exec_setup.ksh
PMO_RUN_EXEC_PROCESS_JOB Process job for PMO run execution pmo_run_exec_process.ksh
PMO_RUN_EXEC_END_JOB End job for PMO run execution rse_process_state_update.ksh
PMO_RUN_EXEC_ADHOC_JOB Ad-hoc job for PMO run execution pmo_run_execution.ksh
RSE_FCST_BATCH_RUN_START_JOB Start job for forecast batch run rse_process_state_update.ksh
RSE_CREATE_FCST_BATCH_RUN_JOB Create forecast batch run rse_create_fcst_batch_run_proc.ksh
RSE_FCST_BATCH_PROCESS_JOB Execute Base Demand and Demand Forecast for forecast run rse_fcst_batch_process.ksh
RSE_FCST_BATCH_RUN_END_JOB End job for creating forecast batch run rse_process_state_update.ksh
RSE_CREATE_FCST_BATCH_RUN_ADHOC_JOB Ad-hoc job to create forecast batch run rse_create_fcst_batch_run_proc.ksh
RSE_FCST_BATCH_PROCESS_ADHOC_JOB Ad-hoc job to execute Base Demand and Demand Forecast for forecast run rse_fcst_batch_process.ksh

The forecast values generated by runs that are approved, associated with active run types, and mapped to IPO-DF are exported to the RDX schema. The following table lists the jobs that export the main forecast data.

Table 13-8 Jobs for Exporting Forecast Outputs from AIF to RDX

JobName Description RmsBatch

RSE_RDF_FCST_EXPORT_JOB Export IPO-DF forecast rse_rdf_export.ksh
RSE_RDF_FCST_EXPORT_ADHOC_JOB Ad hoc job to export IPO-DF forecast rse_rdf_export.ksh

The following table lists the additional jobs for exporting other data, such as promotions and configuration parameters for run types.

Table 13-9 Additional Jobs for Exporting Data from AIF to RDX

JobName Description RmsBatch

RSE_HIST_PROMO_OFFER_SALES_EXPORT_ADHOC_JOB Initial one-time job for exporting promotion offer sales figures rse_rdf_offer_sales_hist_exp.ksh
RSE_PROMO_OFFER_EXPORT_START_JOB Start job for exporting promotion offers rse_process_state_update.ksh
RSE_PROMO_OFFER_EXPORT_JOB Export promotion offers rse_rdf_offers_hier_exp.ksh
RSE_PROMO_OFFER_SALES_EXPORT_JOB Export promotion offer sales figures rse_rdf_offer_sales_exp.ksh
RSE_PROMO_OFFER_EXPORT_END_JOB End job for exporting promotion offers rse_process_state_update.ksh
RSE_PROMO_OFFER_EXPORT_ADHOC_JOB Ad hoc job to export promotion offers rse_rdf_offers_hier_exp.ksh
RSE_PROMO_OFFER_SALES_EXPORT_ADHOC_JOB Ad hoc job to export promotion offer sales figures rse_rdf_offer_sales_exp.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_START_JOB Start job for exporting run type config rse_process_state_update.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_SETUP_JOB Setup job for exporting run type config rse_fcst_run_type_conf_exp_setup.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_PROCESS_JOB Process job for exporting run type config rse_fcst_run_type_conf_exp_process.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_END_JOB End job for exporting run type config rse_process_state_update.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_SETUP_ADHOC_JOB Ad-hoc setup job for exporting run type config rse_fcst_run_type_conf_exp_setup.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_PROCESS_ADHOC_JOB Ad-hoc process job for exporting run type config rse_fcst_run_type_conf_exp_process.ksh

The jobs for importing forecast parameters from the RDX schema are listed in the following table.

Table 13-10 Jobs for Importing Forecast Parameters from RDX to AIF

JobName Description RmsBatch

RSE_RDX_FCST_PARAM_START_JOB Start job for importing forecast parameters rse_process_state_update.ksh
RSE_RDX_FCST_PARAM_SETUP_JOB Setup job for importing forecast parameters rse_rdx_fcst_param_setup.ksh
RSE_RDX_FCST_PARAM_PROCESS_JOB Process job for importing forecast parameters rse_rdx_fcst_param_process.ksh
RSE_RDX_FCST_PARAM_END_JOB End job for importing forecast parameters rse_process_state_update.ksh
RSE_RDX_FCST_PARAM_SETUP_ADHOC_JOB Ad-hoc setup job for importing forecast parameters rse_rdx_fcst_param_setup.ksh
RSE_RDX_FCST_PARAM_PROCESS_ADHOC_JOB Ad-hoc process job for importing forecast parameters rse_rdx_fcst_param_process.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_START_JOB Start job for importing enablement flags for new item forecast rse_process_state_update.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_SETUP_JOB Setup job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_setup.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_PROCESS_JOB Process job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_process.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_END_JOB End job for importing enablement flags for new item forecast rse_process_state_update.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_SETUP_ADHOC_JOB Ad-hoc setup job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_setup.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_PROCESS_ADHOC_JOB Ad-hoc process job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_process.ksh
PMO_EVENT_IND_RDF_START_JOB Start job for importing event indicators for preprocessing rse_process_state_update.ksh
PMO_EVENT_IND_RDF_SETUP_JOB Setup job for importing event indicators for preprocessing pmo_event_ind_rdf_setup.ksh
PMO_EVENT_IND_RDF_PROCESS_JOB Process job for importing event indicators for preprocessing pmo_event_ind_rdf_process.ksh
PMO_EVENT_IND_RDF_END_JOB End job for importing event indicators for preprocessing rse_process_state_update.ksh
PMO_EVENT_IND_RDF_SETUP_ADHOC_JOB Ad-hoc setup job for importing event indicators for preprocessing pmo_event_ind_rdf_setup.ksh
PMO_EVENT_IND_RDF_PROCESS_ADHOC_JOB Ad-hoc process job for importing event indicators for preprocessing pmo_event_ind_rdf_process.ksh
RSE_LIKE_RDX_RSE_START_JOB Start job for importing new item parameters rse_process_state_update.ksh
RSE_LIKE_RDX_RSE_SETUP_JOB Setup job for importing new item parameters rse_like_rdx_rse_setup.ksh
RSE_LIKE_RDX_RSE_PROCESS_JOB Process job for importing new item parameters rse_like_rdx_rse_process.ksh
RSE_LIKE_RDX_RSE_END_JOB End job for importing new item parameters rse_process_state_update.ksh
RSE_LIKE_RDX_RSE_SETUP_ADHOC_JOB Ad-hoc setup job for importing new item parameters rse_like_rdx_rse_setup.ksh
RSE_LIKE_RDX_RSE_PROCESS_ADHOC_JOB Ad-hoc process job for importing new item parameters rse_like_rdx_rse_process.ksh

The approved forecast is exported out of RDX into AIF and then sent to RMS via flat files. The jobs are listed in the following table. Note that the approved forecast source for a run type depends on whether it is mapped to the IPO-DF application. If this mapping exists, then the approved forecast source for the run type is RDX; otherwise, it is AIF.

Table 13-11 Approved Forecast Export Jobs

JobName Description RmsBatch

RSE_RDX_APPD_FCST_START_JOB Start job for exporting approved forecast data rse_process_state_update.ksh
RSE_RDX_APPD_FCST_SETUP_JOB Setup job for exporting approved forecast data rse_rdx_appd_fcst_setup.ksh
RSE_RDX_APPD_FCST_PROCESS_JOB Process job for exporting approved forecast data rse_rdx_appd_fcst_process.ksh
RSE_RDF_APPR_FCST_EXPORT_JOB Export approved weekly forecast to a flat file rse_rdf_appr_fcst_export.ksh
RSE_RDF_APPR_FCST_DAY_EXPORT_JOB Export approved daily forecast to a flat file rse_rdf_appr_fcst_day_export.ksh
RSE_RDX_APPD_FCST_END_JOB End job for exporting approved forecast data rse_process_state_update.ksh
RSE_RDX_APPD_FCST_SETUP_ADHOC_JOB Ad-hoc setup job for exporting approved forecast data rse_rdx_appd_fcst_setup.ksh
RSE_RDX_APPD_FCST_PROCESS_ADHOC_JOB Ad-hoc process job for exporting approved forecast data rse_rdx_appd_fcst_process.ksh
RSE_RDF_APPR_FCST_EXPORT_ADHOC_JOB Ad-hoc job to export approved weekly forecast to a flat file rse_rdf_appr_fcst_export.ksh
RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_JOB Ad-hoc job to export approved daily forecast to a flat file rse_rdf_appr_fcst_day_export.ksh

Loading and Calculating Event Flags in AIF

The Out-of-Stock and Outlier indicators are important for the estimation stage of forecasting. These two flags are stored in the AIF table PMO_EVENT_INDICATOR, which is used by the Sales & Promo forecast method.

Loading Out-of-Stock and Outlier Indicators Through the RAP Interface

The customer can load Out-of-Stock and Outlier indicators at the SKU/Store/Week level using the RAP interface (staging table: W_RTL_INVOOS_IT_LC_WK_FS; target table: W_RTL_INVOOS_IT_LC_WK_F). The indicators then flow into AIF, where they are used in forecasting.
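After the interface load runs, a quick row count against the target table can confirm that the indicators arrived. This is a minimal sketch, assuming Innovation Workbench (or equivalent) read access to the RAP table:

-- Confirm that Out-of-Stock/Outlier indicator rows were loaded
SELECT COUNT(*) FROM w_rtl_invoos_it_lc_wk_f;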

Loading Out-of-Stock and Outlier Indicators Through the RDX Interface

This is implemented via the package RAP_INTF_SUPPORT_UTIL, which is also granted to Innovation Workbench (IW), and by granting DML permissions on all the interface tables that reside in RDX01. In order to publish data for an interface, the implementer must first gather some details about the interface being published. For this specific use case, the interface name is RDF_FCST_PARM_CAL_EXP.

SELECT * FROM RAP_INTF_CFG WHERE INTF_NAME = 'RDF_FCST_PARM_CAL_EXP';

In the output from the above query, pay attention to the PUBLISH_APP_CODE (for this example, the value is PDS) and to the CUSTOMER_PUBLISHABLE_FLG. In order to publish data to an interface, CUSTOMER_PUBLISHABLE_FLG must be Y for any interface that IW is going to populate. For customer implementations, this requires requesting AMS to update the value to Y, like this:

UPDATE RAP_INTF_CFG SET CUSTOMER_PUBLISHABLE_FLG = 'Y' WHERE INTF_NAME = 'RDF_FCST_PARM_CAL_EXP';
COMMIT;

Once the above has been done, the implementer/customer can write code in IW that does something like this:
DECLARE
  v_run_id         NUMBER;
  v_partition_name VARCHAR2(30);
BEGIN
  -- 'PDS' is the PUBLISH_APP_CODE value obtained from the RAP_INTF_CFG query above.
  RAP_INTF_SUPPORT_UTIL.PREP_INTF_RUN('PDS', 'RDF_FCST_PARM_CAL_EXP', v_run_id, v_partition_name);

  -- Replace .... with the actual column list and an appropriate SELECT statement.
  -- The RUN_ID column must be populated with the value returned in v_run_id above.
  EXECUTE IMMEDIATE 'INSERT /*+ append */ INTO RDF_FCST_PARM_CAL_EXP partition ( '||v_partition_name||') ....  ';

  -- After the data has been populated, make it available to consuming application modules:
  RAP_INTF_SUPPORT_UTIL.PUBLISH_DATA('PDS', v_run_id, 'RDF_FCST_PARM_CAL_EXP', 'Ready');
END;

Once the above steps have been completed, the data is ready to be consumed by the application modules that use this interface.

Note:

If an attempt is made to call any of the routines inside RAP_INTF_SUPPORT_UTIL for an interface that was not updated so that CUSTOMER_PUBLISHABLE_FLG = Y, then an error will be returned indicating "No Interfaces for [<values they provided to the procedure>] are allowed/available to be published from custom code."

Calculating Outlier Indicators in AIF

Outlier indicators can also be calculated directly in AIF at any Product/Location/Week level. A historical sale for a particular Product/Location/Week is flagged as an outlier if it is more than the last 52-week average rate-of-sales for the Product/Location times a threshold, as sketched below. The threshold value is provided using the parameter PP_LLC_OUTLIER_THRESHOLD in the Preprocessing tab when setting up a Sales & Promo forecast run in the Manage Forecast Configurations screen. This feature can be turned on or off using the parameter PMO_OUTLIER_CALCULATION_FLG in the table RSE_CONFIG (APPL_CODE = PMO) from the Manage System Configurations screen. It is expected that an environment will either load outlier indicators through the RAP/RDX interface or calculate them in AIF.
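The rule can be expressed in SQL roughly as follows. This is an illustrative sketch only: the sales history table and columns (sls_hist, prod_id, loc_id, week_id, sls_qty) are hypothetical stand-ins rather than actual AIF schema objects, and the bind variables stand for the latest history week and the PP_LLC_OUTLIER_THRESHOLD value.

-- Flag Product/Location/Week sales exceeding threshold x 52-week average rate-of-sales
SELECT s.prod_id, s.loc_id, s.week_id, s.sls_qty
  FROM sls_hist s
  JOIN (SELECT prod_id, loc_id, AVG(sls_qty) AS avg_ros
          FROM sls_hist
         WHERE week_id > :latest_week - 52   -- last 52 weeks of history
         GROUP BY prod_id, loc_id) a
    ON a.prod_id = s.prod_id
   AND a.loc_id = s.loc_id
 WHERE s.sls_qty > :outlier_threshold * a.avg_ros;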

Workflow for IPO-DF Implementation

The AIF-IPO-DF workflow can be implemented in two ways.

The main dependency in the first approach is that, even before exporting the first round of forecasts from AIF to RDX, parameters need to be imported from RDX to AIF (the reason is that IPO-DF prefers to receive forecasts for a subset of the product/locations for which forecasts are generated in AIF). For the import of parameters to work properly, run types must be mapped to IPO-DF and assigned an external key. Here are the steps for the first approach:

  1. Create run types in AIF in Setup train stop screen.

  2. Create test runs in AIF in the Test train stop screen to verify that the setup is correct.

  3. Map run types to IPO-DF and assign external keys in Map train stop screen.

  4. Import parameters from RDX to AIF (IPO-DF sends parameters using the same external keys used in Step 3).

  5. Run forecasts again in Test train stop screen (this time, the forecasts will use the imported parameters).

  6. Approve the estimation and forecast runs in Test train stop screen.

  7. Activate the run types in Manage train stop screen.

  8. Export forecasts from AIF to RDX.

  9. Cycle continues:

    1. Import parameters from RDX to AIF.

    2. Run forecasts.

    3. Approve runs.

    4. Export forecasts from AIF to RDX.

In the second approach, there is no dependency on importing parameters from RDX to AIF before exporting forecasts from AIF to RDX. (This simplifies the workflow and saves time during implementation; however, the batch processes still have the dependency.) Here are the steps for the second approach:

  1. Create run types in AIF in Setup train stop screen.

  2. Create test runs in AIF in the Test train stop screen to verify that the setup is correct.

  3. Approve the estimation and forecast runs in Test train stop screen.

  4. Activate the run types in Manage train stop screen.

  5. Map run types to IPO-DF and assign external keys in Map train stop screen.

  6. Export forecasts from AIF to RDX (this exports all product/locations for which AIF has generated a forecast; however, the export takes longer because of the large data volume; this is the tradeoff between the two implementation approaches).

  7. Cycle continues as in Step 9 of the first approach.

The internal (with respect to AIF) difference between the two approaches is as follows: when the table in AIF that stores the parameters imported from RDX is empty, AIF will export forecasts for all product/locations (second approach); otherwise, AIF will export forecasts only for the subset of product/locations imported from RDX (first approach), as sketched below.
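Per the parameter import in the workflow example later in this chapter (Step 5), the parameters imported from RDX land in RSE_RDF_FCST_RUN_TYPE_PRM. A minimal sketch of the emptiness check, assuming read access to that table:

-- If this returns 0, AIF exports forecasts for all product/locations (second approach)
SELECT COUNT(*) FROM rse_rdf_fcst_run_type_prm;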

Note that once a forecast run has been exported from AIF to RDX, it cannot be exported again. For an active run type, the export code finds the latest forecast run that has completed, has been approved, and has not been exported before.

AIF-IPO-DF Workflow Example

Here is a set of sample steps using the first approach.

  1. In AIF Manage Forecast Configurations train stop 1 Setup, create Run Type.
    • For forecasts used by IPO-DF, choose Sales & Promo or Life Cycle as the Forecast Method.
    • If any run type has an aggregation status of Not Started, click the Start Data Aggregation button for the new run type just created. This process can take a few hours to complete.

  2. In AIF Manage Forecast Configurations train stop 2 Test.
    • Once aggregation completes, create a new run under the selected run type. Choose to run Estimation and Base Demand in this step; you do not have to toggle Generate Forecast initially.

    • Approve the estimation parameters by clicking on Approve Demand Parameters.
    • Tables to check:
      • Seasonality (from estimation)
        • For both methods: RSE_SEAS_CURVE_DIM (using pmo run id)

        • For both methods: RSE_SEAS_CURVE_WK (using pmo run id)
      • Base Demand
        • For Life Cycle: RSE_BASE_DEMAND (using fcst run id)

        • For Sales & Promo: RSE_LLC_BD_RES (using fcst run id)
      • Forecast (when Generate Forecast is selected)
        • For Life Cycle: RSE_DMD_FCST (using fcst run id)

        • For Sales & Promo: RSE_LLC_FCST_BLINE (using fcst run id)
  3. In AIF Manage Forecast Configurations train stop 4 Map.
    • Map the forecast run type to IPO-DF

    • Make sure to assign the external application key for IPO-DF, which is usually Weekly Forecast (this name must match the IPO-DF side)
  4. In IPO-DF:
    • Perform your initial IPO-DF Workflow Tasks to set up forecast parameters (History Length, Forecast Start Date, Forecast End Date), new items, etc.

    • Run the IPO-DF Pre-Forecast and Export Forecast Parameters Batches using IPO_PRE_BATCH_W_JOB and IPO_PRE_EXP_RDX_W_JOB processes.
      • POM job IPO_PRE_BATCH_W_JOB triggers Pre-Forecast Batch task in IPO-DF OAT

      • POM job IPO_PRE_EXP_RDX_W_JOB triggers Export Forecast Parameters task in IPO-DF OAT
      • Check the following tables after the batches complete.
        • RDF_FCST_PARM_HDR_EXP
          • rdf_rdfruntype01:RDF_RUN_TYPE_KEY

        • RDF_RUN_TYPE_PARM_EXP
          • rdf_rdfruntype01:RDF_RUN_TYPE_KEY

          • rdf_runnewitem01:ENABLE_NEW_ITEM
          • rdf_runfrcst01:ENABLE_FORECAST
        • RDF_FCST_PARM_EXP
          • rdf_hislen01:FCST_HIST_LENGTH

          • rdf_frcststartdt01:FCST_START_DATE
          • rdf_frcstenddtlw01:FCST_END_DATE
        • RDF_FCST_PARM_CAL_EXP
          • rdf_ppsbopdate01:BOP_DATE

          • rdf_totadjslsovr01:USER_OVERRIDE
          • rdf_outageind01:PPS_OUTAGE_IND
          • rdf_outlierind01:PPS_OUTLIER_IND
        • RDF_NIT_PARM_EXP
          • PROD_HIER_LEVEL:SKU

          • LOC_HIER_LEVEL:STOR
          • rdf_fcpsubm:NIT_SUB_METHOD
          • rdf_fcpros:NIT_ROS_USER_IN
          • rdf_nitapplkitm1:NIT_LIKE_ITEM
          • rdf_nstapplkstr1:NIT_LIKE_STOR
          • rdf_fcpadj:ADJUSTMENT_FACTOR
  5. Run jobs in POM to import parameters from RDX to AIF.
    • RSE_RDX_FCST_PARAM_ADHOC_PROCESS
      • RSE_RDX_FCST_PARAM_SETUP_ADHOC_JOB

      • RSE_RDX_FCST_PARAM_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_RDF_FCST_RUN_TYPE_PRM (from RDF_FCST_PARM_HDR_EXP and RDF_FCST_PARM_EXP)

    • RSE_FCST_RDX_NEW_ITEM_ENABLE_ADHOC_PROCESS
      • RSE_FCST_RDX_NEW_ITEM_ENABLE_SETUP_ADHOC_JOB

      • RSE_FCST_RDX_NEW_ITEM_ENABLE_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_FCST_RDX_NEW_ITEM_ENABLE (from RDF_RUN_TYPE_PARM_EXP)

    • RSE_LIKE_RDX_RSE_ADHOC_PROCESS
      • RSE_LIKE_RDX_RSE_SETUP_ADHOC_JOB

      • RSE_LIKE_RDX_RSE_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_FCST_NEW_ITEM_PARAM (from RDF_NIT_PARM_EXP)

    • PMO_EVENT_IND_RDF_ADHOC_PROCESS
      • PMO_EVENT_IND_RDF_SETUP_ADHOC_JOB

      • PMO_EVENT_IND_RDF_PROCESS_ADHOC_JOB
      • Table to check:
        • PMO_EVENT_INDICATOR (from RDF_FCST_PARM_CAL_EXP)

  6. In AIF Manage Forecast Configurations train stop 2 Test.
    • Create a new run under the same run type you have been working with. Choose to run Base Demand only if there is no need to run Estimation, and toggle Generate Forecast this time.

    • If estimation has been run, approve the Estimation parameters by clicking on Approve Demand Parameters
    • Approve the base demand and forecast by clicking on Approve Base Demand and Forecast
    • Tables to check:
      • Seasonality (when estimation is run):
        • For both methods: RSE_SEAS_CURVE_DIM (using pmo run id)

        • For both methods: RSE_SEAS_CURVE_WK (using pmo run id)
      • Base Demand
        • For Life Cycle: RSE_BASE_DEMAND (using fcst run id)

        • For Sales & Promo: RSE_LLC_BD_RES (using fcst run id)
      • Forecast
        • For Life Cycle: RSE_DMD_FCST (using fcst run id)

        • For Sales & Promo: RSE_LLC_FCST_BLINE (using fcst run id)
  7. In AIF Manage Forecast Configurations, train stop 3 Manage:
    • Use View/Edit Configuration Parameters to see and apply parameters from a previously approved run (these parameters are used for batch runs; this is not needed for forecast runs created from the AIF UI).

    • Activate the forecast run type.
    • Use Manage Auto-Approve to enable Auto-Approve for the run type before starting batch cycles (not needed for forecast runs created from the AIF UI).
  8. Run jobs in POM to export forecasts to RDX from AIF.
    • RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS
      • RSE_FCST_RUN_TYPE_CONF_EXPORT_SETUP_ADHOC_JOB

      • RSE_FCST_RUN_TYPE_CONF_EXPORT_PROCESS_ADHOC_JOB
    • RSE_FCST_EXPORT_ADHOC_PROCESS (do not run the whole process, as it also includes the AP and MFP forecast exports; run only the following job)
      • RSE_RDF_FCST_EXPORT_ADHOC_JOB

  9. After the jobs complete, check the following tables in Innovation Workbench to do a quick validation (a sample spot-check query follows these steps).
    • Table: rdx01.rse_fcst_run_type_config (gets populated by RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS; based on an independent export run id)

    • Table: rdx01.rse_fcst_run_hdr_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; this table associates the export run id to the forecast run id approved in Step 6)
    • Table: rdx01.rse_fcst_run_config (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_appr_base_dmd_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_demand_dtl_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_demand_src_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_demand_dtl_cal_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
  10. In IPO-DF:
    • Import the forecasts to IPO-DF using IPO_POST_DATA_IMP_PROCESS, which runs the following two OAT tasks in IPO-DF:
      • Import Forecast and Preprocessing Component task in IPO-DF OAT
        • RSE_FCST_RUN_HDR_EXP
          • rdf_rserunid01:RUN_ID

        • RSE_FCST_DEMAND_DTL_CAL_EXP
          • rdf_sysbaseline01:BASELINE_FCST_QTY

          • rdf_sysfrcst01:DEMAND_FCST_QTY
          • rdf_syspeak01:EVENT_PEAK_QTY
          • rdf_eventfut01:EVENT_CLND
          • rdf_prcdiscfut01:PRICE_DISC_PERCENT
        • RSE_FCST_DEMAND_DTL_EXP
          • rdf_basedemand01:BASE_DEMAND_QTY

          • rdf_priceelas01:PRICE_ELASTICITY
        • RSE_FCST_APPR_BASE_DMD_EXP
          • rdf_appchosenlevel01:APPR_BASE_DEMAND_QTY

        • RSE_FCST_DEMAND_SRC_EXP
          • rdf_outageind01:STOCKOUT_IND

          • rdf_outageadj01:LOST_SLS_QTY
          • rdf_outlierind01:OUTLIER_IND
          • rdf_outlieradj01:OUTLIER_SLS_QTY
          • rdf_depromoadj01:PROMO_SLS_QTY
          • rdf_deprice01:CLR_SLS_QTY
          • rdf_totadjsls01:PREPROCESSED_SLS_QTY
      • Post Forecast Batch task in IPO-DF OAT
    • Build the Forecast Review workspace, then review, update, and approve the forecasts.
    • Export the approved forecasts by running the IPO_POST_EXP_RDX POM job.
      • Export Approved Forecast OAT Task
        • RDF_APPR_FCST_HDR_EXP
          • rdf_apprserunid01:RSE_RUN_ID

        • RDF_APPR_FCST_CAL_EXP
          • rdf_bopdate01:BOP_DATE

          • rdf_appbaseline01:APPR_BASELINE_FCST
          • rdf_appfrcst01:APPR_DEMAND_FCST
          • rdf_appcumint01:APPR_CUMINT
  11. To push the approved forecast from RDX to RMS/object store (as a flat file) via AIF:
    • Run RSE_RDX_APPD_FCST_ADHOC_PROCESS in POM to import approved forecast from RDX to AIF:
      • RSE_RDX_APPD_FCST_SETUP_ADHOC_JOB

      • RSE_RDX_APPD_FCST_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_RDF_APPR_FCST (from RDF_APPR_FCST_HDR_EXP and RDF_APPR_FCST_CAL_EXP)

    • Run RSE_RDF_APPR_FCST_EXPORT_ADHOC_PROCESS (RSE_RDF_APPR_FCST_EXPORT_ADHOC_JOB) in POM to export the data from RSE_RDF_APPR_FCST_VW (week-level forecast) to RMS/object store as a flat file
    • Run RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_PROCESS (RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_JOB) in POM to export the data from RSE_RDF_APPR_FCST_DAY_VW (day-level forecast) to RMS/object store as a flat file, if a day-level forecast is generated.
  12. Run the W_RTL_PLANFC_PROD2_LC2_T2_F_JOB job in POM to import approved forecasts into RI (this job is in the RI_FORECAST_CYCLE jobs).
    • Note that W_RTL_PLANFC_PROD2_LC2_T2_F gets the source data from RSE_RDF_APPR_FCST_VW.

Note:

For Steps 11 and 12 to work, the run types also need to be mapped to the RI/RMS application in AIF Manage Forecast Configurations train stop 4 Map.
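
As a quick spot check for the Step 9 validation, you can row-count the export tables in Innovation Workbench. The following read-only query is a minimal sketch; adapt the table list as needed.

-- Spot-check that the export jobs populated the RDX tables (Step 9)
SELECT 'rse_fcst_run_type_config' AS tbl, COUNT(*) AS row_cnt FROM rdx01.rse_fcst_run_type_config
UNION ALL
SELECT 'rse_fcst_run_hdr_exp', COUNT(*) FROM rdx01.rse_fcst_run_hdr_exp
UNION ALL
SELECT 'rse_fcst_run_config', COUNT(*) FROM rdx01.rse_fcst_run_config
UNION ALL
SELECT 'rse_fcst_appr_base_dmd_exp', COUNT(*) FROM rdx01.rse_fcst_appr_base_dmd_exp
UNION ALL
SELECT 'rse_fcst_demand_dtl_exp', COUNT(*) FROM rdx01.rse_fcst_demand_dtl_exp
UNION ALL
SELECT 'rse_fcst_demand_src_exp', COUNT(*) FROM rdx01.rse_fcst_demand_src_exp
UNION ALL
SELECT 'rse_fcst_demand_dtl_cal_exp', COUNT(*) FROM rdx01.rse_fcst_demand_dtl_cal_exp;

A non-zero count in each table after the export jobs complete indicates that the export populated the RDX side; drill into individual tables by export run id for deeper checks.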

Using the Add Multiple Run Types Feature

An "Add Multiple Run Types" feature is available in the Setup train stop within the Manage Forecast Configurations screen. To add/edit/delete the rows visible within the "Add Multiple Run Types" table, please edit the two tables, RSE_CUSTOM_PROCESS_HDR and RSE_CUSTOM_PROCESS_DTL, available in the Manage System Configurations screen.

Building an Alternate Hierarchy in AIF Applications

AIF applications have the ability to build alternate location and alternate product hierarchies. Alternate hierarchy information should be available in the AIF data warehouse tables (also used by Retail Insights). RSE_ALT_HIER_TYPE_STG and RSE_ALT_HIER_LEVEL_STG are the two relevant tables available in the Manage System Configurations screen. It is possible to add/edit/delete rows in these tables. Information about the alternate hierarchy types and levels must be provided through these tables. Then, the alternate hierarchy jobs must be executed to generate the alternate hierarchies in AIF. The following table lists the relevant jobs.
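
Before running the jobs, you can review the staged definitions in Innovation Workbench. A minimal read-only spot check, assuming query access to the staging tables:

-- Review the staged alternate hierarchy definitions before running the load jobs
SELECT * FROM RSE_ALT_HIER_TYPE_STG;
SELECT * FROM RSE_ALT_HIER_LEVEL_STG;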

Table 13-12 Alternate Hierarchy Jobs

JobName Description RMSBatch

RSE_ALT_HIER_SETUP_START_JOB Start job for Alternate Hierarchy Type Setup rse_process_state_update.ksh
RSE_ALT_HIER_LOAD_JOB Set up alternate hierarchy types by loading data from the RSE_ALT_HIER_TYPE_STG table rse_alt_hier_load.ksh
RSE_ALT_HIER_SETUP_END_JOB End job for Alternate Hierarchy Type Setup rse_process_state_update.ksh
RSE_ALT_HIER_LOAD_ADHOC_JOB Ad hoc job to set up alternate hierarchy types by loading data from the RSE_ALT_HIER_TYPE_STG table rse_alt_hier_load.ksh
RSE_ALT_LOC_HIER_START_JOB Start job for Alternate Location Hierarchy Load rse_process_state_update.ksh
RSE_ALT_LOC_HIER_SRC_XREF_LOAD_JOB Load the RSE_LOC_SRC_XREF table with alternate location hierarchy data rse_alt_loc_hier_process.ksh
RSE_ALT_LOC_SRC_XREF_DUP_CHECK_JOB Check for RSE_ALT_LOC_SRC_XREF column values that were either modified or for which rows were bypassed to avoid duplicate keys on index errors rse_alt_loc_src_xref_dup_check.ksh
RSE_ALT_LOC_HIER_LOAD_JOB Load the RSE_LOC_HIER table with alternate location hierarchy data rse_alt_loc_hier_load.ksh
RSE_ALT_LOC_HIER_TC_LOAD_JOB Load the RSE_LOC_HIER_TC table with alternate location hierarchy data rse_alt_loc_hier_tc_load.ksh
RSE_ALT_LOC_HIER_DH_LOAD_JOB Load the RSE_LOC_HIER_DH table with alternate location hierarchy data rse_alt_loc_hier_dh_load.ksh
RSE_ALT_LOC_HIER_END_JOB End job for Alternate Location Hierarchy Load rse_process_state_update.ksh
RSE_ALT_LOC_HIER_SRC_XREF_LOAD_ADHOC_JOB Ad hoc job to load the RSE_LOC_SRC_XREF table with alternate location hierarchy data rse_alt_loc_hier_process.ksh
RSE_ALT_LOC_SRC_XREF_DUP_CHECK_ADHOC_JOB Ad hoc job to check for RSE_ALT_LOC_SRC_XREF column values that were either modified or for which rows were bypassed to avoid duplicate keys on index errors rse_alt_loc_src_xref_dup_check.ksh
RSE_ALT_LOC_HIER_LOAD_ADHOC_JOB Ad hoc job to load the RSE_LOC_HIER table with alternate location hierarchy data rse_alt_loc_hier_load.ksh
RSE_ALT_LOC_HIER_TC_LOAD_ADHOC_JOB Ad hoc job to load the RSE_LOC_HIER_TC table with alternate location hierarchy data rse_alt_loc_hier_tc_load.ksh
RSE_ALT_LOC_HIER_DH_LOAD_ADHOC_JOB Ad hoc job to load the RSE_LOC_HIER_DH table with alternate location hierarchy data rse_alt_loc_hier_dh_load.ksh
RSE_ALT_PROD_HIER_START_JOB Start job for Alternate Product Hierarchy Load rse_process_state_update.ksh
RSE_ALT_PROD_HIER_SRC_XREF_LOAD_JOB Load the RSE_PROD_SRC_XREF table with alternate product hierarchy data rse_alt_prod_hier_process.ksh
RSE_ALT_PROD_SRC_XREF_DUP_CHECK_JOB Check for RSE_ALT_PROD_SRC_XREF column values that were either modified or for which rows were bypassed to avoid duplicate keys on index errors rse_alt_prod_src_xref_dup_check.ksh
RSE_ALT_PROD_HIER_LOAD_JOB Load the RSE_PROD_HIER table with alternate product hierarchy data rse_alt_prod_hier_load.ksh
RSE_ALT_PROD_HIER_TC_LOAD_JOB Load the RSE_PROD_HIER_TC table with alternate product hierarchy data rse_alt_prod_hier_tc_load.ksh
RSE_ALT_PROD_HIER_DH_LOAD_JOB Load the RSE_PROD_HIER_DH table with alternate product hierarchy data rse_alt_prod_hier_dh_load.ksh
RSE_ALT_PROD_HIER_END_JOB End job for Alternate Product Hierarchy Load rse_process_state_update.ksh
RSE_ALT_PROD_HIER_SRC_XREF_LOAD_ADHOC_JOB Ad hoc job to load the RSE_PROD_SRC_XREF table with alternate product hierarchy data rse_alt_prod_hier_process.ksh
RSE_ALT_PROD_SRC_XREF_DUP_CHECK_ADHOC_JOB Ad hoc job to check for RSE_ALT_PROD_SRC_XREF column values that were either modified or for which rows were bypassed to avoid duplicate keys on index errors rse_alt_prod_src_xref_dup_check.ksh
RSE_ALT_PROD_HIER_LOAD_ADHOC_JOB Ad hoc job to load the RSE_PROD_HIER table with alternate product hierarchy data rse_alt_prod_hier_load.ksh
RSE_ALT_PROD_HIER_TC_LOAD_ADHOC_JOB Ad hoc job to load the RSE_PROD_HIER_TC table with alternate product hierarchy data rse_alt_prod_hier_tc_load.ksh
RSE_ALT_PROD_HIER_DH_LOAD_ADHOC_JOB Ad hoc job to load the RSE_PROD_HIER_DH table with alternate product hierarchy data rse_alt_prod_hier_dh_load.ksh

To perform forecasting in AIF using the created alternate location (or product) hierarchy, the PMO_LOC_HIER_TYPE (or PMO_PROD_HIER_TYPE, respectively) parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set correctly.

Note:

Before changing the location and/or product hierarchy type for the environment, all existing forecast run types need to be deleted. New run types need to be created after the hierarchy type changes are done.
Any existing alternate hierarchies must be deleted first if you are going to restructure them with different levels and mappings for the same hierarchy type. Attempting to restructure a hierarchy without deleting it can result in data issues and job failures. If you need to delete an alternate hierarchy, perform the following steps.
  1. Update the delete flag to Y on the row in the RSE_ALT_HIER_TYPE_STG table corresponding to the alternate hierarchy being deleted.

  2. Delete all the level rows from the RSE_ALT_HIER_LEVEL_STG table corresponding to the alternate hierarchy being deleted.

  3. Run the processes/jobs to rebuild the alternate hierarchies, in the same way you previously built them.
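
Shown as plain SQL for illustration only (the actual edits are made through the Manage System Configurations screen, and the DELETE_FLG and HIER_TYPE_ID column names are assumptions, not the documented schema), the first two steps amount to:

-- Illustrative sketch: flag the hierarchy type for deletion and remove its levels
UPDATE RSE_ALT_HIER_TYPE_STG
   SET DELETE_FLG = 'Y'             -- assumed flag column
 WHERE HIER_TYPE_ID = :alt_hier_id; -- assumed key column

DELETE FROM RSE_ALT_HIER_LEVEL_STG
 WHERE HIER_TYPE_ID = :alt_hier_id; -- assumed key column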

If you are working with standalone jobs, then either use RSE_MASTER_ADHOC_JOB to run the hierarchy loads again (using the -X flag for the alternate hierarchy load, and others as needed) or use the ad hoc jobs listed in the table above. For batch runs, the jobs listed in the table above will handle the deletion. Note that you cannot delete and rebuild the same hierarchy in the same batch cycle or standalone process.

The alternate hierarchy batch or ad hoc jobs perform the following cleanup:
  1. The delete flag in the RSE_ALT_HIER_TYPE table will be set to Y.
  2. The corresponding row in the RSE_HIER_TYPE table will be deleted.
  3. The corresponding rows in the RSE_ALT_HIER_LEVEL table will be deleted.
  4. The corresponding rows in the RSE_HIER_LEVEL table will be deleted.
  5. The corresponding partition in the RSE_LOC_HIER and/or RSE_PROD_HIER tables will be dropped.
  6. The corresponding partition in the RSE_LOC_HIER_DH and/or RSE_PROD_HIER_DH tables will be dropped.
  7. The corresponding partition in the RSE_LOC_HIER_TC and/or RSE_PROD_HIER_TC tables will be dropped.
  8. The corresponding partition in the RSE_LOC_SRC_XREF and/or RSE_PROD_SRC_XREF tables will be dropped.

After the alternate hierarchy batch or ad hoc jobs run a second time, the remaining row in the RSE_ALT_HIER_TYPE table whose delete flag had been set to Y is deleted.

Custom Jobs Through Innovation Workbench (IW)

Custom processes can be created through IW (see also Loading Out-of-Stock and Outlier Indicators Through the RDX Interface as an example). AIF provides nine built-in custom jobs that can be used for setting up custom steps at various positions within the AIF applications batch pipeline. The three pre-batch and the three post-batch jobs are positioned to run at the beginning and at the end of the AIF applications batch, respectively. Additionally, for the forecast engine, the three pre-estimation jobs are positioned to run after the RDX to AIF forecast parameter import and before the estimation runs for forecasting. The following table lists the relevant jobs.

Table 13-13 Custom Jobs for IW Custom Processes

JobName Description RMSBatch

PMO_PRE_ESTIM_CUSTOM_START_JOB Start job for PMO pre-estimation custom process rse_process_state_update.ksh
PMO_PRE_ESTIM_CUSTOM_1_JOB PMO pre-estimation custom job 1 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_2_JOB PMO pre-estimation custom job 2 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_3_JOB PMO pre-estimation custom job 3 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_END_JOB End job for PMO pre-estimation custom process rse_process_state_update.ksh
PMO_PRE_ESTIM_CUSTOM_1_ADHOC_JOB PMO pre-estimation custom ad hoc job 1 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_2_ADHOC_JOB PMO pre-estimation custom ad hoc job 2 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_3_ADHOC_JOB PMO pre-estimation custom ad hoc job 3 rse_custom_job.ksh

RSE_PRE_BATCH_CUSTOM_START_JOB Start job for AIF applications Pre-Batch custom process rse_process_state_update.ksh
RSE_PRE_BATCH_CUSTOM_1_JOB AIF applications Pre-Batch custom job 1 rse_custom_job.ksh
RSE_PRE_BATCH_CUSTOM_2_JOB AIF applications Pre-Batch custom job 2 rse_custom_job.ksh
RSE_PRE_BATCH_CUSTOM_3_JOB AIF applications Pre-Batch custom job 3 rse_custom_job.ksh
RSE_PRE_BATCH_CUSTOM_END_JOB End job for AIF applications Pre-Batch custom process rse_process_state_update.ksh
RSE_PRE_BATCH_CUSTOM_1_ADHOC_JOB AIF applications Pre-Batch custom ad hoc job 1 rse_custom_job.ksh
RSE_PRE_BATCH_CUSTOM_2_ADHOC_JOB AIF applications Pre-Batch custom ad hoc job 2 rse_custom_job.ksh
RSE_PRE_BATCH_CUSTOM_3_ADHOC_JOB AIF applications Pre-Batch custom ad hoc job 3 rse_custom_job.ksh
RSE_POST_BATCH_CUSTOM_START_JOB Start job for AIF applications Post-Batch custom process rse_process_state_update.ksh
RSE_POST_BATCH_CUSTOM_1_JOB AIF applications Post-Batch custom job 1 rse_custom_job.ksh
RSE_POST_BATCH_CUSTOM_2_JOB AIF applications Post-Batch custom job 2 rse_custom_job.ksh
RSE_POST_BATCH_CUSTOM_3_JOB AIF applications Post-Batch custom job 3 rse_custom_job.ksh
RSE_POST_BATCH_CUSTOM_END_JOB End job for AIF applications Post-Batch custom process rse_process_state_update.ksh
RSE_POST_BATCH_CUSTOM_1_ADHOC_JOB AIF applications Post-Batch custom ad hoc job 1 rse_custom_job.ksh
RSE_POST_BATCH_CUSTOM_2_ADHOC_JOB AIF applications Post-Batch custom ad hoc job 2 rse_custom_job.ksh
RSE_POST_BATCH_CUSTOM_3_ADHOC_JOB AIF applications Post-Batch custom ad hoc job 3 rse_custom_job.ksh

Here are the steps to get the custom jobs working:

  1. In IW, create a package and procedure for the process that needs to run as a custom job. After logging in to Innovation Workbench, navigate to Manage Workbench -> SQL Workshop, click Package under Create Object on the right-hand side, and proceed with creating the package specification and body; then create the procedure inside the package. (A minimal package sketch follows these steps.)

  2. Using IW or the Manage System Config screen, modify the table RSE_CUSTOM_JOB_CFG and edit values for the following columns:

    1. JOB_NAME: This value is not editable and indicates a placeholder where a custom process can be added.

    2. PACKAGE_NAME: Enter the name of the package that was created in IW.

    3. PROCEDURE_NAME: Enter the name of the procedure that was created in IW.

    4. PROCEDURE_DESCR: Enter a description if desired.

    5. RUN_TIME_LIMIT: The run time limit is 900 seconds by default. It can be changed to a smaller value but not to a larger value. If the custom process runs for longer than the value indicated in RUN_TIME_LIMIT when running as part of the batch process, the custom process is stopped and the batch moves on to the next job/process.

    6. CONNECTION_TYPE: Valid values are LOW and MEDIUM. This value should almost always be LOW unless the job is supposed to run a process that would need multiple threads. HIGH is not a valid value. If HIGH is entered, it will switch to LOW by default when the job runs.

    7. ENABLE_FLG: Set this value to Y to indicate that this job should be executed as part of the batch process.
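
As an illustration of Step 1, the following is a minimal PL/SQL package sketch; the names cust_batch_pkg and refresh_flags are hypothetical placeholders, not delivered objects.

-- Hypothetical custom package created in the Innovation Workbench SQL Workshop
CREATE OR REPLACE PACKAGE cust_batch_pkg AS
  PROCEDURE refresh_flags;
END cust_batch_pkg;
/

CREATE OR REPLACE PACKAGE BODY cust_batch_pkg AS
  PROCEDURE refresh_flags IS
  BEGIN
    -- Custom logic goes here; keep elapsed time under RUN_TIME_LIMIT
    NULL;
  END refresh_flags;
END cust_batch_pkg;
/

This job would then be registered on one of the placeholder rows in RSE_CUSTOM_JOB_CFG by setting PACKAGE_NAME to CUST_BATCH_PKG, PROCEDURE_NAME to REFRESH_FLAGS, and ENABLE_FLG to Y.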

Purge Forecast Run and Run Type Data

Forecast run data can be purged through both batch and ad hoc jobs. The parameters that control this purge process are FCST_PURGE_RETENTION_DAYS and FCST_PURGE_ALL_RUNS in the RSE_CONFIG table (APPL_CODE = RSE), available in the Manage System Configurations screen. The ad hoc job can take an input that acts as an override to the FCST_PURGE_RETENTION_DAYS value. When FCST_PURGE_ALL_RUNS = Y, both the batch and ad hoc jobs delete data for all forecast runs, irrespective of the FCST_PURGE_RETENTION_DAYS value.

Forecast run type data can be purged through the ad hoc job. The parameters that control this purge process are FCST_TY_PURGE_RETENTION_DAYS and FCST_TY_PURGE_ALL_RUNS in the same RSE_CONFIG table. The ad hoc job can take an input that acts as an override to the FCST_TY_PURGE_RETENTION_DAYS value. When FCST_TY_PURGE_ALL_RUNS = Y, the ad hoc job deletes data for all forecast run types, irrespective of the FCST_TY_PURGE_RETENTION_DAYS value.
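
Before scheduling a purge, you can confirm the current parameter values in Innovation Workbench. A minimal read-only sketch, assuming the standard RSE_CONFIG columns (APPL_CODE, PARAM_NAME, PARAM_VALUE):

-- Review the purge-related configuration parameters
SELECT PARAM_NAME, PARAM_VALUE
  FROM RSE_CONFIG
 WHERE APPL_CODE = 'RSE'
   AND PARAM_NAME IN ('FCST_PURGE_RETENTION_DAYS', 'FCST_PURGE_ALL_RUNS',
                      'FCST_TY_PURGE_RETENTION_DAYS', 'FCST_TY_PURGE_ALL_RUNS');

The following table lists the relevant jobs.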

Table 13-14 Purge Jobs for Forecast Run and Run Type Data

JobName Description RMSBatch
RSE_PURGE_RUN_START_JOB Start job for purging run data rse_process_state_update.ksh
RSE_PURGE_RUN_JOB Purge job for run data rse_purge_run_data.ksh
RSE_PURGE_RUN_END_JOB End job for purging run data rse_process_state_update.ksh
RSE_PURGE_RUN_ADHOC_JOB Ad hoc purge job for run data rse_purge_run_data.ksh
RSE_PURGE_RUN_TYPE_ADHOC_JOB Ad hoc purge job for run type data rse_purge_run_type_data.ksh

Configurations in AIF to Generate Forecasts for FACT Measures

There are four FACT tables (W_RTL_FACT1_PROD1_LC1_T1_F, W_RTL_FACT2_PROD2_LC2_T2_F, W_RTL_FACT3_PROD3_LC3_T3_F, and W_RTL_FACT4_PROD4_LC4_T4_F) in RI that store the FACT measures data. The data from these FACT tables flows into the various critical AIF tables (RSE_SLS_PR_LC_WK, RSE_SLS_PR_LC_CS_WK, RSE_SLS_PH_LC_WK_A, and RSE_INV_PR_LC_WK_A) via RSE_PR_LC_CAL_CDA. The table RSE_MD_CDA programmatically stores the mapping between the FACT tables and the columns of RSE_PR_LC_CAL_CDA. The following table shows the jobs that populate the various critical AIF tables from the FACT tables.

Table 13-15 Jobs to Populate the AIF Tables from FACT Tables

JobName Description RMSBatch
RSE_AGGREGATE_METADATA_LOAD_ADHOC_JOB Ad hoc job to load aggregate measure metadata from RI to AIF (populates RSE_MD_CDA) rse_aggregate_metadata_load.ksh
RSE_AGGREGATE_ACTUALS_LOAD_ADHOC_JOB Ad hoc job to load aggregate measure actuals from RI to AIF (populates RSE_PR_LC_CAL_CDA) rse_aggregate_actuals_load.ksh
RSE_SLS_PR_LC_WK_AGGR_MEAS_SETUP_ADHOC_JOB Ad hoc job for product at leaf level/location/week sales aggregate measure setup rse_sls_pr_lc_wk_aggr_meas_setup.ksh
RSE_SLS_PR_LC_WK_AGGR_MEAS_PROCESS_ADHOC_JOB Ad hoc job for product at leaf level/location/week sales aggregate measure process (populates RSE_SLS_PR_LC_WK and RSE_SLS_PR_LC_CS_WK) rse_sls_pr_lc_wk_aggr_meas_process.ksh
RSE_SLS_PH_LC_WK_AGGR_MEAS_SETUP_ADHOC_JOB Ad hoc job for product at non-leaf level/location/week sales aggregate measure setup rse_sls_ph_lc_wk_aggr_meas_setup.ksh
RSE_SLS_PH_LC_WK_AGGR_MEAS_PROCESS_ADHOC_JOB Ad hoc job for product at non-leaf level/location/week sales aggregate measure process (populates RSE_SLS_PH_LC_WK_A) rse_sls_ph_lc_wk_aggr_meas_process.ksh
RSE_INV_PR_LC_WK_AGGR_MEAS_SETUP_ADHOC_JOB Ad hoc job for product at leaf level/location/week inventory aggregate measure setup rse_inv_pr_lc_wk_aggr_meas_setup.ksh
RSE_INV_PR_LC_WK_AGGR_MEAS_PROCESS_ADHOC_JOB Ad hoc job for product at leaf level/location/week inventory aggregate measure process (populates RSE_INV_PR_LC_WK_A) rse_inv_pr_lc_wk_aggr_meas_process.ksh

Complete the following steps in Manage System Configurations screen to configure the generation of forecasts for FACT measures.

  1. Configure data in RSE_FCST_DFLT_RUN_TYPE_PARAM.

    The pre-configured measures for forecasting are specified in RSE_FCST_DFLT_RUN_TYPE_PARAM under the PARAMETER_NAME=FORECAST_MEASURE. To configure a new forecast measure (for example, Shrink Qty), the user must insert the values in RSE_FCST_DFLT_RUN_TYPE_PARAM using the UI.

    For example, a row with the following values must be inserted:

    (PARAMETER_NAME, PARAMETER_VALUE, DESCR, UI_PARAMETER_LABEL, DISPLAY_ORDER, CURRENT_FLG, DELETE_FLG)

    ('FORECAST_MEASURE', 'SHRINK_QTY', 'Shrink Qty', 'Shrink Qty', '13', 'Y', 'N')

  2. Configure data in RSE_MD_FACT_COLUMN_MAP.

    To generate a forecast for FACT measures, data must flow into the PMO_ACTIVITIES table. If the FACT measure is one of the forecast measures that are available in the GA forecast engine, then the configuration must be done such that the appropriate named column in PMO_ACTIVITIES is used in the mapping. The GA measures can be found in RSE_FCST_DFLT_RUN_TYPE_PARAM under the PARAMETER_NAME=FORECAST_MEASURE.

    If the FACT measure is not one of the GA measures, then the configuration must be done such that one of the flex measure columns in PMO_ACTIVITIES is used in the mapping.

    For example, if the user wants to forecast Shrink Qty, the FACT measure SHRINK_QTY must be loaded into one of the FLEX columns in PMO_ACTIVITIES. The following row must be inserted into RSE_MD_FACT_COLUMN_MAP using the UI.

    (SOURCE_TABLE, SOURCE_COLUMN, TARGET_TABLE, TARGET_COLUMN)

    ('W_RTL_FACT1_PROD1_LC1_T1_F', 'SHRINK_QTY', 'PMO_ACTIVITIES', 'FLEX_MEAS_VALUE1')

  3. Configure data in RSE_FCST_RUN_TYPE_MTHD_PARAM.

    The forecast engine supports a few different forecast methods, and each method can be used to forecast certain measures based on certain data sources. This mapping is configured in RSE_FCST_RUN_TYPE_MTHD_PARAM. FACT measures can be forecasted using Automatic Exponential Smoothing. Note that all the required data for the method that is being used must be provided via the interface for FACT measures.

    For example, to forecast Shrink Qty using Automatic Exponential Smoothing, the following row must be inserted in RSE_FCST_RUN_TYPE_MTHD_PARAM using the UI:

    (FORECAST_METHOD, FORECAST_MEASURE, DESCR, FORECAST_MEASURE_UI_LABEL, FORECAST_METHOD_UI_LABEL, CURRENT_FLG, DATA_SOURCE)

    ('AUTO_ES', 'SHRINK_QTY', 'Shrink Qty', 'Shrink Qty', 'Automatic Exponential Smoothing', 'Y', 'STORE_SALES')
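
After making these UI edits, you can confirm the configuration in Innovation Workbench. A minimal read-only sketch, using the column names shown in the examples above:

-- Confirm the forecast measure is registered
SELECT PARAMETER_NAME, PARAMETER_VALUE
  FROM RSE_FCST_DFLT_RUN_TYPE_PARAM
 WHERE PARAMETER_NAME = 'FORECAST_MEASURE';

-- Confirm the FACT column is mapped into PMO_ACTIVITIES
SELECT SOURCE_TABLE, SOURCE_COLUMN, TARGET_COLUMN
  FROM RSE_MD_FACT_COLUMN_MAP
 WHERE TARGET_TABLE = 'PMO_ACTIVITIES';

-- Confirm the method/measure/data source mapping
SELECT FORECAST_METHOD, FORECAST_MEASURE, DATA_SOURCE
  FROM RSE_FCST_RUN_TYPE_MTHD_PARAM
 WHERE FORECAST_MEASURE = 'SHRINK_QTY';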

The following table shows the jobs that must be run to aggregate FACT measures data (as required) and populate PMO_ACTIVITIES.

Table 13-16 Jobs to Populate PMO_ACTIVITIES Using FACT Data

JobName Description RMSBatch
PMO_ACTIVITY_AGGR_MEAS_SETUP_JOB PMO aggregate measures load setup job pmo_activity_aggr_meas_setup.ksh
PMO_ACTIVITY_AGGR_MEAS_PROCESS_JOB PMO aggregate measures load process job pmo_activity_aggr_meas_process.ksh
PMO_ACTIVITY_AGGR_MEAS_SETUP_ADHOC_JOB PMO aggregate measures load setup ad hoc job pmo_activity_aggr_meas_setup.ksh
PMO_ACTIVITY_AGGR_MEAS_PROCESS_ADHOC_JOB PMO aggregate measures load process ad hoc job pmo_activity_aggr_meas_process.ksh

Data Requirements for AIF

AI Foundation has a wide range of capabilities and solutions, each with its own specific data needs. While the files themselves are common across the Retail Analytics and Planning solutions, which fields you provide data for requires careful consideration. This section covers the four most critical data areas that must be discussed with customers as part of requirements and design. This data will directly impact the results across your implementation and determine which RAP features you will be able to benefit from, both now and in the future. This is not a replacement for the Implementation Guide or the Interface Guide, as those provide more detailed requirements for each cloud service. Refer to the detailed documentation before you begin the development of the data files.

Data Definitions for Core Fact Files

This section describes the data definitions for core fact files.

SALES.csv

Sales data represents sales transactions resulting from customer purchases and returns. Sales data can be at varying levels of detail, depending on the solutions used, ranging from item/loc/week summary data to the granular transaction-line details. This is the core fact used across all modules and it has many variations, as defined below. How you define your sales data will affect both current and future implementations of any AIF solutions, so you must consider your product roadmap when deciding on the level of detail to provide here. If you provide pre-aggregated data, for example, then it will prevent you from using modules that require transaction data without doing a re-implementation of your data loads and integrations. Any additional effort spent on the data up front can result in significant savings on implementation costs for future projects.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Sales Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

The minimum data for forecasting has several types, depending on the use case. From least to most detailed, they are:
  1. Item/loc/week or item/loc/day aggregated sales

  2. Item/loc/week or item/loc/day transactional sales

  3. Item/loc/week or item/loc/day promotion-linked transactional sales

  4. Item/loc/week or item/loc/day customer-linked transactional sales

If you are only implementing basic Inventory Planning Optimization Demand Forecasting (IPO-DF) or Lifecycle Pricing Optimization (LPO) forecasting and do not require transaction level details, then you can simply provide pre-summed item/loc/day or week sales totals. Detailed forecast method breakdowns are provided in later sections to clarify requirements by forecast type. If you only want to forecast the units, then you can simply provide SLS_QTY for the gross sales units. SLS_TRX_ID is defaulted to a unique value to differentiate the rows, as it is part of the table key. You might only have a single row per item/loc/week/retail type with one value of SLS_TRX_ID per data intersection.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,SLS_QTY,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,0000,R,12345678,112,USD,USD

If you are forecasting based on sales amounts or net sales/returns, then you can also include the additional columns for those values, which must still be at a day or week aggregated level.

If you need to perform more advanced forecasting methods (Halo and Cannibalization feature) or you are using transaction-level AIF applications (Customer Decision Trees, Customer Segmentation), then you must provide the actual sales transaction information, not the aggregated values. In this instance, the SLS_TRX_ID is a critical field that must represent the true POS transaction identifier to uniquely identify one customer basket/sale.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,SLS_QTY,SLS_AMT_LCL,RET_QTY,RET_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,1700,R,21013172971,1,124,0,0,USD,USD
1234799,2072,20210131,1700,R,21013172971,1,55,0,0,USD,USD
1234884,2072,20210131,1700,R,21013172971,2,70,0,0,USD,USD
1235584,2072,20210131,1700,R,21013172971,0,0,1,99,USD,USD

This represents a single customer purchase in which the customer bought three items and returned one. The SLS_TRX_ID is the same for all transaction lines belonging to this one customer sale. If your POS system does not have a unique transaction ID by itself, then you may have to concatenate other parts on top of it (that is, take the POS tran number + loc number + register + date). A single day’s sales file to AIF will have many such lines for the same item/loc for every sale or return transaction that happened that day. This must match your POS system. Do not default the SLS_TRX_ID to some other value or the AIF results will not be accurate.
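
For instance, a composite identifier could be derived during extract. The following SQL is a minimal sketch in which pos_tran_lines and all of its columns are hypothetical placeholders for your POS extract:

-- Hypothetical extract: build a composite transaction ID when the POS
-- transaction number alone is not unique across locations/registers/days
SELECT tran_num || '-' || loc_num || '-' || register_num || '-'
       || TO_CHAR(tran_date, 'YYYYMMDD') AS sls_trx_id
  FROM pos_tran_lines;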

If you are using any of the promotion components of forecasting, such as promotion lift or estimation, then the sales data must be linked to the specific promotion IDs that occurred on them. This comes as two fields for the promotion ID and individual offer/component ID of the discount or price change in the promotion. One transaction line can have multiple promotions on it, which come as nearly identical lines of sales that differ only by promotion IDs.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,PROMO_ID,PROMO_COMP_ID,SLS_QTY,SLS_AMT_LCL,RET_QTY,RET_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,1700,P,21013172971,2498,12,1,124,0,0,USD,USD
1234799,2072,20210131,1700,P,21013172971,2498,12,1,55,0,0,USD,USD
1234799,2072,20210131,1700,P,21013172971,4693,111,1,55,0,0,USD,USD
1235584,2072,20210131,1700,R,21013172971,-2,-3,0,0,1,99,USD,USD
1234781,2072,20210131,1800,R,21013173592,-2,-3,2,248,0,0,USD,USD

This example shows two different promotions, and one line item that had two promotions on it in the same transaction. Note that non-promotional sales use the default values of -2 and -3, respectively. These are the standard values the platform uses when a promotion is not known or not used. If you are providing promotion identifiers, you will also be expected to provide other files relating to promotions, such as PROMOTION.csv. The Oracle Retail Analytics and Planning Cloud Services Data Interfaces provides the complete details on other promotion files and their requirements.

The last of the minimum required datasets is customer-linked transaction data, required if you are also using AIF modules such as CDT, DT, Segmentation, or segment-based forecasting/optimizations. You must provide the unique customer identifier for each sales transaction that is linked to a known customer in the source system.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,CUSTOMER_NUM,SLS_QTY,SLS_AMT_LCL,RET_QTY,RET_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,1700,R,21013172971,112193,1,124,0,0,USD,USD
1234884,2072,20210131,1700,R,21013172971,112193,2,70,0,0,USD,USD
1235584,2072,20210131,1700,R,21013172971,112193,0,0,1,99,USD,USD
1234781,2072,20210131,1800,R,21013173592,33986,2,248,0,0,USD,USD

Field-Level Data Variations

As mentioned previously, DAY_DT can be daily dates or week-ending dates, depending on the requirements. If you are providing weekly files, it is fine to still provide the daily DAY_DT values for the days the transaction occurred, but all the data is provided in one week-ending file and no data is provided on other days of the week. (This is what is meant by item/loc/week transaction file: it is a weekly file with accumulated daily transactions.) When using actual transaction level data, however, it is best to provide the data every day due to the data volumes involved. Providing one file only at end of week can cause the weekly batch processing to be too long.

SLS_TRX_ID can be defaulted to a random/unique value if aggregated data is being used, but it must be the true POS customer transaction identifier if transaction level data is being used, as defined in the minimum requirement. It must always be provided as it is part of the primary key on the table.

RTL_TYPE_CODE allows values of R/P/C/O for regular/promotion/clearance/other sales, but not all types are required for all solutions. In some simple IPO-DF implementations, only Regular/R sales can be used to start if promotions/clearances do not matter to you. Note that you must delete and reload the sales history later if you need to change the retail types.

IT_SEQ_NUM is a situational column that you can provide if you are sending raw POS transactions directly from the source. This value uniquely identifies line numbers on a transaction when the same item could be scanned repeatedly. A unique value is required here in such cases or the duplicate rows will be discarded.

REVISION_NUM is a situational column that helps to track the number of revisions made to the same transaction over time. If you are providing aggregated data, then this field is not required. If you are providing transaction data with meaningful SLS_TRX_ID values, then you can also provide REVISION_NUM starting from one to track the number of revisions done to a transaction, as compared to the source system. Each time you post an update to a transaction, increment the number by one, and this will be updated on the transaction record.

SLS_PROFIT_AMT_LCL and RET_PROFIT_AMT_LCL may be relevant to you as additional measure columns if you require any measurement of cost. There are no cost fields in the SALES.csv file; you are instead expected to provide the sales profit/margin amounts associated with the transaction. The platform derives cost-related measures internally by subtracting the profit from the total sales amount when needed, such as when populating Sales Cost for RPASCE (see the sketch below).
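
For illustration, the derivation is a simple subtraction. In the following sketch, sales_staging is a hypothetical staging table holding SALES.csv rows:

-- Cost-related measures are derived as sales amount minus profit
SELECT item, org_num, day_dt,
       sls_amt_lcl - sls_profit_amt_lcl AS derived_sls_cost -- derived, not a file field
  FROM sales_staging;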

RECEIPT.csv

Inventory receipts data refers to any movement of stock into a location, which can be the result of any allocation, transfer, or PO shipment accepted into that location. Receipts data for AIF applications drives the first/last receipt dates by item/location, as described in the data requirements in a later section. The quantity and type of receipts do not matter to forecasting; they matter only to RI reporting and applications that directly use receipts (IPO-IO, MFP, AP).

First and last receipt dates refer to the first or last time that a receipt transaction occurred against an item/location. You do not directly provide these calculated measures: you provide the transactions when receipts occur, and the platform maintains the first and last receipt date calculations. For example, a new item gets a receipt on 5/3/2023, so you post a receipt transaction with this date for that item/location. During the data load, the RAP interface takes the inventory update, which should reflect the change to stock on hand, and the receipt update, which indicates the date that the receipt occurred, and calculates that the first receipt date and last receipt date should both now be 5/3/2023. If another receipt arrives later, the new inventory and receipt records will be used to update the last receipt date to that new date, while the first receipt date remains unchanged.
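
Conceptually, the maintained dates behave like a MIN/MAX over receipt activity. In the following sketch, receipt_staging is a hypothetical staging table holding RECEIPT.csv rows:

-- First/last receipt dates are effectively the earliest and latest
-- receipt activity seen for each item/location
SELECT item, org_num,
       MIN(day_dt) AS first_receipt_date,
       MAX(day_dt) AS last_receipt_date
  FROM receipt_staging
 GROUP BY item, org_num;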

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Receipts Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

For forecasting at item/loc/week, it is sufficient to provide a single receipt record for any item/loc having a receipt during that week so that the first and last receipt dates are updated accordingly. You do not need every receipt between the first and last if you are only interested in establishing the first/last receipt dates for an item/loc. You can leave out in-between records as long as you still align the receipt and inventory load time ranges. The type and quantity do not matter to forecasting; it only needs to know that inventory entered that location for that item. For example, this might be the only entry for an item/loc/week:
ITEM,ORG_NUM,DAY_DT,INVRC_TYPE_CODE,LOC_CURR_CODE,DOC_CURR_CODE

30025,1183,20210123,20,USD,USD

Field-Level Data Variations

DAY_DT can be daily dates or week-ending dates. In either case, it is aggregated to week level for the purpose of weekly forecast usage. In batch, the data can come on end-of-week dates only, with no data for this file on the other days of the week. If you require daily data, then you can send updates every day or accumulate all the updates and provide one end-of-week file containing all the daily receipt activity. If you only send end-of-week data, then your inventory must likewise be end-of-week only, so the two files are aligned. Inventory and receipts must be aligned: it does not make sense for a receipt to occur with no inventory update posted. Both must be either daily files or weekly files.

The INVRC_TYPE_CODE value only matters to RI, IPO-IO, and MFP. You can default it to some value (for example, 44~T) for all rows if it is not required in those applications. Type 20 (PO receipts) typically occurs at warehouses and is required to accurately represent WH receipts. Type 44~T is a transfer receipt. This is most common for store locations, as it is the receipt of any inventory as a result of a transfer between two locations; the source can be a WH or another store. Type 44~A is an optional sub-type for receipts that occurred due to allocations from the WH to the store. This type matters only to RI reporting at this time.

INVENTORY.csv

Inventory positions (that is, stock on hand) capture the current stock levels at all owned locations. Any time a transaction occurs involving stock on hand (sales, receipts, price changes, and so on), an update to the inventory position must also occur to post the new stock level and related measures to AIF.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Inventory Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

For forecasting at item/loc/week, it is sufficient to provide a single item/loc/week record for each item/loc on the week-ending date. The on-hand (INV_SOH_QTY) is required as it is used in the base demand calculation. INV_IN_TRAN_QTY and INV_ON_ORD_QTY are not necessarily required, but they contribute to the total inventory quantity for forecasting purposes, and some applications, such as LPO and IPO-IO, will expect them, based on the solution requirements. The data must exactly match the receipt activity, as it is assumed any receipt will create a change to inventory. Conversely, if you are using the file, any change to inventory must be the result of some transaction, such as a sale, adjustment, receipt, or RTV.
ITEM,ORG_NUM,DAY_DT,CLEARANCE_FLG,INV_SOH_QTY,LOC_CURR_CODE,DOC_CURR_CODE

30025,1183,20210123,N,5,USD,USD
30027,1112,20210123,N,0,USD,USD

Field-Level Data Variations

For history data, DAY_DT is always a week-ending date; it cannot be daily. Receipts history can be daily or weekly, and any receipt dates will be used by the weekly history loads to determine first/last receipt dates.

For ongoing batch files, the data can come every day or end-of-week only. If the data comes weekly, then it must be the final week-ending positions of all item/locs that had any change during the week. You must only send one record per item/loc/week with the final ending stock on hand. DAY_DT cannot be different on different rows in the file; it must always be the current day on all rows or the week-ending date on all rows. Because of the first/last receipt date calculations discussed regarding the receipts file, the RECEIPT and INVENTORY files must be in sync for ongoing loads. That is, both must send data daily or weekly, and you must not mix them. If you do not care about the receipt date calculations because you are not using those forecasting methods and never will, then this requirement can be ignored and you can send daily receipts but weekly inventory. However, this is not recommended.

The CLEARANCE_FLG is not a part of the primary key in the data. You cannot have multiple records per item/loc with different flags. However, it is a required field, as the inventory status (regular or clearance) is necessary for many of the applications. If you are only implementing an application such as IPO-DF and do not plan by clearance/non-clearance separately, you can default the value to N on all rows.

The data must include zero-balance INV_SOH_QTY rows in some cases. For example, if a receipt occurred but the item immediately sold out, an INV_SOH_QTY=0 record must still be provided for that day or week because it is required to link with the receipt activity. If an inventory balance moved from non-zero to zero in that day or week, the zero balance record is also required to come once so that there is an inventory level update indicating it went out of stock. Other kinds of changes in the source can also generate an inventory record update even for a zero balance, such as changes to the unit cost/retail price requiring an update to some fields on this interface. Refer to the Oracle Retail Analytics and Planning Implementation Guide for additional details on building the inventory file and zero-balance needs in history and ongoing data.

If you are familiar with RMFCS (Merchandising Foundation Cloud Service), the way inventory is extracted from that system is by looking at all transaction activity for the day and pulling inventory for every item/loc with any transactions of any kind. Even unrelated transaction types will trigger inventory pulls to ensure that the daily positions in RAP are up to date for all fields.

PRICE.csv

The price data represents price or cost change events for an item/location. The data is only for executed events, not future planned events. Typically, these will include the same price events that are pushed out to stores to update the selling prices for all items.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Price Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

For forecasting at item/loc/week, it is sufficient to provide any new prices and price changes that are effective on the week-ending date per item/loc. Both SELLING_UNIT_RTL_AMT_LCL and BASE_COST_AMT_LCL must be provided on all records unless you are only implementing SPO or IPO-DF, in which case only SELLING_UNIT_RTL_AMT_LCL is required.
ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

83074502,2030,20200208,0,103.76,49.34,USD,USD

Field-Level Data Variations

DAY_DT can be daily or weekly. In either case, it is aggregated to week level for the purpose of weekly forecast usage. The history data can also be daily or weekly, which is different from inventory. Daily price changes are loaded from history files exactly as provided for the date they occurred. You only provide a record when a price change event happens for an item/loc.

For ongoing batch files, the data can come every day or only for end-of-week. If the data comes weekly, then it must be the final week-ending positions of all item/locs that had any change during the week. You must only send one record per item/loc/week with the final ending price. DAY_DT cannot be different on different rows in the file; it must always be the current day on all rows or the week-ending date on all rows. If you know that multiple price changes will occur within one week, then you must send the updates every day in order to capture each price change as it happens. For example, if a promotion or clearance starts in the middle of the week and ends two days later, sending a weekly price file would capture only the week-ending regular price and miss those changes. In this case, you must plan to send daily price updates.

PRICE_CHANGE_TRAN_TYPE must be managed correctly to capture the type of price changes for an item/loc. The typical flow of price changes for an item/location is as follows:

  1. New price/cost (PRICE_CHANGE_TRAN_TYPE=0)
  2. Regular cost changes (PRICE_CHANGE_TRAN_TYPE=2)
  3. Regular price changes (PRICE_CHANGE_TRAN_TYPE=4)
  4. Promotional/temporary markdowns (PRICE_CHANGE_TRAN_TYPE=9)
  5. Clearance markdowns (PRICE_CHANGE_TRAN_TYPE=8)
  6. Price reset due to new selling cycle or season change (PRICE_CHANGE_TRAN_TYPE=0)

All item/locations must have an initial price when the item is created (type 0). An item/location can then have many changes with types 2/4/8/9 before eventually staying at a type of 8 (for a final markdown) or resetting to a type of 0 (if the item lifecycle must restart).
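
For illustration, a typical lifecycle for one item/location might produce the following sequence of PRICE.csv records over time (the dates and amounts are hypothetical, following the format of the earlier example):

ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

83074502,2030,20200208,0,103.76,49.34,USD,USD
83074502,2030,20200314,2,103.76,45.10,USD,USD
83074502,2030,20200613,4,89.99,45.10,USD,USD
83074502,2030,20200815,9,79.99,45.10,USD,USD
83074502,2030,20201010,8,59.99,45.10,USD,USD
83074502,2030,20210206,0,109.99,52.10,USD,USD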

Data Requirements for Various Forecast Methods

This section describes the data requirements for various forecast methods.

SALES.csv

Forecast Method: Sales & Promo

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of two years, as well as on an ongoing basis.

  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.
  2. If you want to generate customer segment-level forecasts, then you must provide the customer-linked transactions sales data. The retailer does not have to provide the data aggregated to the segment level but must provide the customer ID and customer segment mapping as part of the customer segment interface.
  3. If the user wants to forecast by net sales units, then sales returns must also be provided so that the system can calculate the net sales units. As a general rule, for items that exhibit very low return rates (less than 10 percent), this additional data does not have to be provided.
  4. Regular and Promotional Sales sent via this interface are used to generate forecasts on these measures.
  5. Transaction level data is required for using the Halo and Cannibalization feature.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

Forecast Method: Life Cycle

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of one year and an additional one season, which for most retailers is close to 18 to 24 months, as well as on an ongoing basis. It is recommended to provide sales history for two years.

  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.
  2. If you want to generate customer segment-level forecasts, then you must provide the customer-linked transactions sales data. The retailer does not have to provide the data aggregated to the segment level but must provide the customer ID and customer segment mapping as part of the customer segment interface.
  3. If the user wants to forecast by net sales units, then sales returns must also be provided so that the system can calculate the net sales units. The returns data identifies the original location where the item was purchased and when it was purchased. This information is useful in returns analysis and in calculating the returns parameters.
  4. Regular and Promotional Sales sent via this interface are used to generate forecasts on these measures. Note that LPO supports optimization on the Total Gross Sales Units or Total Net Sales Units with this forecasting method.
  5. Transaction level data is required for using the Halo and Cannibalization feature.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

Forecast Method: Automatic Exponential Smoothing

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of two years, as well as on an ongoing basis.
  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.

  2. If the user wants to forecast returns, then this must also be provided.

  3. Regular Sales, Promotional Sales, Clearance Sales, and Other Sales are also sent via this interface. These are required to generate forecasts on these measures.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

Forecast Method: Hierarchical Bayesian

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of one year, as well as on an ongoing basis. It is recommended to provide sales history for one year and an additional one season length, which for most retailers is close to 18 to 24 months.

  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.
  2. If you want to generate customer segment-level forecasts, then you must provide the customer-linked transactions sales data. The retailer does not have to provide the data aggregated to the segment level but must provide the customer ID and customer segment mapping as part of the customer segment interface.
  3. If the user wants to forecast by net sales units, then sales returns must be provided as well so that the system can calculate the net sales units.
  4. This method only supports a forecast on the Total Gross Sales Units and Total Net Sales Units. Note that LPO supports optimization on the Total Gross Sales Units or Total Net Sales Units with this forecasting method.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

RECEIPT.csv

Forecast Method: Sales & Promo

This forecast method does not require data from RECEIPT.csv.

Forecast Method: Life Cycle

This forecast method requires data from RECEIPT.csv. The requirement is to provide a snapshot of the first and last inventory receipt dates for all active items, as well as on an ongoing basis. It is recommended to provide the first and last inventory receipt dates for all items, at least for the most recent two quarters. The first and last inventory receipt dates are used to calculate the model start date, which helps assign the season code necessary to choose the correct seasonality curves. The RSE_FIRST_INV_RECEIPT_HIST_MAX_WEEKS parameter value in the RSE_CONFIG table (APPL_CODE = RSE), available in the Manage System Configurations screen, controls this calculation: first inventory receipt dates older than this many weeks from the latest week with inventory are excluded from the model start date calculation.

Forecast Method: Automatic Exponential Smoothing

This forecast method does not require data from RECEIPT.csv.

Forecast Method: Hierarchical Bayesian

This forecast method does not require data from RECEIPT.csv.

INVENTORY.csv

Forecast Method: Sales & Promo

This forecast method does not require data from INVENTORY.csv. In the absence of inventory data in the environment, the PMO_AGGR_INVENTORY_DATA_FLG parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to N before the data aggregation is completed for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen.
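
A quick way to confirm the current flag value in Innovation Workbench before aggregating, assuming the standard RSE_CONFIG columns (APPL_CODE, PARAM_NAME, PARAM_VALUE):

-- Check the inventory aggregation flag before running data aggregation
SELECT PARAM_NAME, PARAM_VALUE
  FROM RSE_CONFIG
 WHERE APPL_CODE = 'PMO'
   AND PARAM_NAME = 'PMO_AGGR_INVENTORY_DATA_FLG';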

Forecast Method: Life Cycle

This forecast method requires data from INVENTORY.csv. In the presence of inventory data in the environment, the PMO_AGGR_INVENTORY_DATA_FLG parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to Y before the data aggregation is completed for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen. The requirement is to provide the inventory data for all active items, as well as on an ongoing basis. It is recommended to provide the inventory data for all items, at least for the most recent two quarters. The inventory information is used to calculate the base demand of the items, using Bayesian methods, and also to obtain store counts for active items.

For items with an extremely low ROS (rate of sale), non-Bayesian methods will over-forecast, so an inventory-informed forecast (Bayesian methods) is determined instead. This is different from an inventory-constrained forecast: a demand forecast, not a sales forecast, is determined. For example, 15 units might be forecast even if inventory is 10 units; the forecast is not bounded by inventory. The role of inventory is to serve as a prior hint in the Bayesian methods, so that 100 units are not forecast for an item that has an initial inventory of 10.

Forecast Method: Automatic Exponential Smoothing

This forecast method does not require data from INVENTORY.csv. In the absence of inventory data in the environment, the PMO_AGGR_INVENTORY_DATA_FLG parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to N before the data aggregation is done for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen.

Forecast Method: Hierarchical Bayesian

This forecast method may require data from INVENTORY.csv. It supports inventory units as a feature, and thus, if there is a desire to consider the impacts of the inventory on the forecast, then that data must be provided for about 18 months, and also on an ongoing basis. The PMO_AGGR_INVENTORY_DATA_FLG parameter value in RSE_CONFIG table (APPL_CODE = PMO) available in the Manage System Configurations screen must be set up accordingly before the data aggregation is done for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen.

PRICE.csv

Forecast Method: Sales & Promo

This forecast method may require data from PRICE.csv. It supports price as a feature in the promotion module, and thus, if there is a need to consider the impacts of price changes on the forecast, then that data must be provided. The recommendation is to provide price history proportional to the length of the sales history provided, as well as on an ongoing basis. Note the following:
  1. This method uses the latest price change as of the week ending, so it is not necessary to send daily price changes.

  2. PRICE_CHANGE_TRAN_TYPE=0 is used to identify the full price or original price of the item. More than one record can have a value of 0 in situations such as when an item is repeated or reset, or when the original or full price changes. This record is required for every item/location that had a sale at any point in the history.

  3. PRICE_CHANGE_TRAN_TYPE=4 records are used to identify the regular price changes for the item. These records help identify regular price changes and thus calculate the impact of the regular price change.

  4. PRICE_CHANGE_TRAN_TYPE=8 records are used to identify the ticket price changes for the item. These records help identify markdowns and calculate the price impact for a markdown. If these records are not provided, the system will use a fallback option to calculate the price response, but it might result in lower forecast accuracies.

Forecast Method: Life Cycle

This forecast method may require data from PRICE.csv. Price data is used to identify the appropriate price changes to calculate the most accurate price response behavior. In the absence of this data, the sales data used in base demand calculation will not be de-priced. The recommendation is to provide price history proportional to the length of the sales history provided, as well as on an ongoing basis. Note the following:
  1. This method uses the latest price change as of the week ending, so it is not necessary to send daily price changes.

  2. PRICE_CHANGE_TRAN_TYPE=0 is used to identify the full price or original price of the item. More than one record can have a value of 0 in situations such as if an item is repeated, or reset, or when the original or full price changes. This record is required for every item/location that had a sale at any point in the history.

  3. PRICE_CHANGE_TRAN_TYPE=4 records are used to identify the regular price changes for the item and thus to calculate the impact of a regular price change.

  4. PRICE_CHANGE_TRAN_TYPE=8 records are used to identify the ticket price changes for the item. These records help identify markdowns and calculate the price impact for a markdown. If these records are not provided, the system will use a fallback option to calculate the price response, but this might result in lower forecast accuracy.

Forecast Method: Automatic Exponential Smoothing

This forecast method does not require data from PRICE.csv.

Forecast Method: Hierarchical Bayesian

This forecast method requires data from PRICE.csv. Price data is used to identify the appropriate price changes in order to calculate the most accurate price response behavior. The requirement is to provide price history proportional to the length of the sales history provided, as well as on an ongoing basis. Note the following:
  1. This method uses the latest price change as of the week ending, so it is not necessary to send daily price changes.

  2. PRICE_CHANGE_TRAN_TYPE=0 is used to identify the full price or original price of the item. More than one record can have a value of 0 in situations such as when an item is repeated or reset, or when the original or full price changes. This record is required for every item/location that had a sale at any point in the history.

  3. PRICE_CHANGE_TRAN_TYPE=4 records are used to identify the regular price changes for the item and thus to calculate the impact of a regular price change.

  4. PRICE_CHANGE_TRAN_TYPE=8 records are used to identify the ticket price changes for the item. These records help identify markdowns and calculate the price impact for a markdown. If these records are not provided, the system will use a fallback option to calculate the price response, but this might result in lower forecast accuracy.

Data Correction and Reload Process

As part of building the dataset and reviewing the ETL process, it is important to understand what is possible for correcting data after it has been loaded, as this determines how much time you should spend validating the data up-front before sending it to RAP solutions.

SALES.csv and RECEIPT.csv

Sales and receipts are transactional interfaces, meaning that they load data as incremental deltas. If a record is new, it is inserted into the database; if it already exists, the incoming value is SUMMED with the existing value. Because of this summation, incoming data for an existing record must contain only the difference between the existing and expected values, as sketched below.
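The following pseudo-SQL sketches these load semantics; the staging and target table names and columns are illustrative only, not the actual RAP load logic:

    -- Illustrative merge: new records are inserted, existing records are SUMMED
    MERGE INTO sales_fact t
    USING sales_staging s
       ON (t.item = s.item AND t.loc = s.loc AND t.day = s.day
           AND t.sls_trx_id = s.sls_trx_id)
    WHEN MATCHED THEN
      UPDATE SET t.sls_qty = t.sls_qty + s.sls_qty,  -- incoming delta is added
                 t.sls_amt = t.sls_amt + s.sls_amt
    WHEN NOT MATCHED THEN
      INSERT (item, loc, day, sls_trx_id, sls_qty, sls_amt)
      VALUES (s.item, s.loc, s.day, s.sls_trx_id, s.sls_qty, s.sls_amt);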

When you need to reload sales or receipt history due to data issues, or you need to revise existing data loaded in prior batch cycles, you have three options:
  1. It is ideal to calculate the adjustments required and load those delta values on top of the current data to apply the corrections. This does not require any cleanup or resetting of the existing data, and it is how revisions must be done during nightly batches.

  2. Delete the existing sales or receipts data and reload it cleanly with the complete dataset that has been corrected for any issues. This is a complete deletion of the database tables for all history. This is done during history loads to make major changes. You do not normally do this after go-live.

  3. Raise an SR requesting that specific subsets of data be deleted so that you can reload only those records. The subset of data to be deleted must be defined by a grouping of product, location, or calendar hierarchies; it must not be made up of arbitrary records. For example, delete all data for department 1001 so that the history for it can be reloaded. Oracle will delete the data, and then you must load the corrected data using the standard implementation processes.

When you are posting updates to existing records (option 1 above), you must consider the following:
  1. For all of the dimension values, including item, loc, date, transaction ID, promo ID, retail type, and so on, the existing record must match the new revision so that the correct row is updated. If any one value does not match, then the revision may get inserted as a new row instead.
  2. The value you are modifying must be the difference between the old and new value, while values you are not modifying must be provided as zeros. For example, updating SLS_QTY from 3 to 4 must have SLS_QTY=1 on the incoming revision and 0 in the other columns (see the illustration after this list).
  3. For sales only, REVISION_NUM must be 2 or greater if you want to keep track of the revision count; however, it is not required. Receipts and other transaction types do not have revision number tracking.
  4. Regardless of the data levels (aggregated item/loc/week or daily transactions), the correction process is the same. The only difference is how you have configured columns such as SLS_TRX_ID or PROMO_ID, because you must use the same values each time you post revisions.
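As an illustration of these rules, suppose an existing record has SLS_QTY=3 and the correct value is 4, with no other changes. The revision row carries only the +1 difference, zeros for the untouched measures, and dimension values identical to the original. The column layout here is a hypothetical simplification of the SALES.csv interface:

    ITEM,LOC,DAY_DT,SLS_TRX_ID,PROMO_ID,SLS_QTY,SLS_AMT,REVISION_NUM
    10001234,1001,2023-06-10,TRX9001,-1,1,0,2

If any dimension value differed from the original row, this revision would be inserted as a new record instead of updating the existing one.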

Note that for receipt corrections to history data loaded before daily batches begin, you can reload data incrementally to change numbers such as INVRC_QTY; but if you are adding or removing entire receipt transactions, then you must delete and reload both receipts and inventory in parallel, as defined in the RAP implementation steps. This assumes you require the first/last receipt date calculations, as discussed previously. The receipts are applied to inventory calculations during the inventory history load, not during the receipts history load, which is why they must happen in parallel. If you do not require first/last receipt date calculations, then the receipts can be loaded separately from inventory.

INVENTORY.csv

Inventory is a positional interface; that is, it tracks the current position of your inventory levels at each point in time. Positional values cannot be updated or overwritten once loaded: it is assumed that you provided the exact value of the position for each item/loc/date, so it does not make sense to adjust those values later. For example, if your on-hand inventory was five units today, then that was the physical count of units you had at that time. Any changes that occur later apply only to some future date, not to past dates.

When you must reload inventory history due to data issues, you have two options:

  1. Delete the existing inventory data and reload it cleanly with the complete dataset that has been corrected for any issues. This is a complete deletion of the database tables for all history. It is done during history loads to make major changes; you would not normally do this after go-live. As noted in the receipts section, if you delete and reload inventory, you should do the same for receipts, assuming you require receipt date calculations for forecasting.
  2. Raise an SR requesting that specific subsets of data be deleted so that you can reload only those records. The subset of data to be deleted must be defined by a grouping of product, location, or calendar hierarchies; it must not be a group of arbitrary records. For example, delete all data for department 1001 so that the history for it can be reloaded. Oracle will delete the data, and then you must load the corrected data using the standard implementation processes. Request the same deletion for receipts data and reload both in parallel if you are using receipts for forecasting.

When reloading inventory, if first/last receipt dates are required, then you must also provide receipts data for the same item/loc/weeks in the RECEIPT.csv at the same time, otherwise you will lose the receipt date calculations in your reloaded inventory data.

Once you have started nightly batches, you cannot back-post inventory data; your inventory updates will always be posted for the current business date in RAP. If inventory data was not posted for one or more prior days for any reason, such as issues in your source system or problems at the stores, then include those updates in the next batch and they will be posted for the current date, making the inventory position values correct from that day forward.

PRICE.csv

Price data is a positional interface similar to inventory, but it has additional complexities that make it even more challenging to modify once loaded. During the load process, the price interface calculates many metrics over time, such as the number of markdowns that have occurred or the last known clearance price. These values are dynamic: they are updated every day as the price changes in the history files are processed in chronological order. Once the data is loaded, you cannot directly alter or update it. Your options for data correction are:
  1. Delete the existing price data and reload it cleanly with the complete dataset that has been corrected for any issues.
  2. Raise an SR requesting that specific subsets of data be deleted so that you can reload only those records. The subset of data to be deleted must be defined by a grouping of product or location hierarchies; it must not be arbitrary records, and it cannot be a subset of calendar periods. Oracle must delete all data for the entire calendar range so that derived metrics can be recalculated by any new loads. For example, delete all data for department 1001 so that the history for it can be reloaded. Oracle will delete the data, and then you must load the corrected data using the standard implementation processes.

AIF Questions

Here are some answers to typical questions related to AIF.

What is the flow of sales data in AIF that comes from RI?

Sales data from RI first comes into RSE_SLS_TXN (transaction level); aggregated data then flows into RSE_SLS_PR_LC_DY (sku/store/day), RSE_SLS_PR_LC_WK (sku/store/week), and RSE_SLS_PR_LC_CS_WK (sku/store/custseg/week).

Which table stores the aggregated sales data for forecasting at various intersection levels?

PMO_ACTIVITIES (the main table for all forecast methods) and PMO_CUM_SLS (the cumulative sales table, used only by the Life Cycle forecast method) store aggregated weekly sales data for each unique forecast intersection level, specified by the AGGR_LVL_KEY column. These tables obtain data from RSE_SLS_PR_LC_CS_WK.

Why is the run type that I created not showing up in the Test screen?

Aggregation must be completed for a run type before it shows up in the Test screen and can be used for forecast runs. For a run type based on the Life Cycle forecast method, cumulative aggregation must also be completed.

How much data do I require for forecasting?

Data requirements differ by forecast method. The default method for IPO-DF is the Sales & Promo forecast method, which requires at least 104 weeks of sales history at each of the different forecast intersection levels. This can be verified from the PMO_ACTIVITIES table by filtering on the desired AGGR_LVL_KEY (a verification sketch follows). Only prod/locs that have at least 104 weeks between their first and last sales will have a seasonality curve generated; so even if at least 104 weeks of sales history are available in total, this method may not work if none of the prod/locs at the forecast level satisfies the condition. Other forecast methods, such as Life Cycle, additionally require inventory and receipts data. Refer to Data Requirements for Various Forecast Methods for more information.
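A diagnostic query along these lines can verify the history span; the PROD_KEY, LOC_KEY, and WK_DT column names are assumptions to be adapted to the actual PMO_ACTIVITIES schema:

    -- prod/locs with at least 104 weeks between their first and last sale
    SELECT PROD_KEY, LOC_KEY,
           (MAX(WK_DT) - MIN(WK_DT)) / 7 + 1 AS weeks_spanned
      FROM PMO_ACTIVITIES
     WHERE AGGR_LVL_KEY = :aggr_lvl_key
     GROUP BY PROD_KEY, LOC_KEY
    HAVING (MAX(WK_DT) - MIN(WK_DT)) / 7 + 1 >= 104;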

What is the difference between forecast run type and run?

A forecast run type can be thought of as a collection of forecast runs grouped by some important parameters, such as the forecast intersection (merchandise/location) level, forecast method, and forecast measure. All forecast runs within a particular run type have the same values for these parameters. A forecast run cannot be part of more than one run type. Similarly, a forecast run type always belongs to a unique AGGR_LVL_KEY, but an AGGR_LVL_KEY can have multiple run types mapped to it.

What is the difference between forecast run and estimation run?

Every forecast run and estimation run belongs to a unique run type. An estimation run covers the pre-processing and parameter estimation stages; a forecast run covers the base demand calculation and forecast generation stages. The same estimation run can be used by multiple forecast runs, but each forecast run depends on a unique estimation run.

What is the use of the DFLT_LIFE_CYCLE parameter?

DFLT_LIFE_CYCLE is a parameter in the RSE_CONFIG table and can be accessed from the Manage System Configurations screen. It works together with the data in the RSE_FCST_LIFE_CYCLE_CLSF table; as a default configuration, RSE_FCST_LIFE_CYCLE_CLSF should be empty. DFLT_LIFE_CYCLE should be set to LLC when using the Sales & Promo forecast method and to SLC when using the Life Cycle forecast method. Other forecast methods are not impacted by DFLT_LIFE_CYCLE. Also, forecast run types that are based on the Life Cycle forecast method and mapped to LPO - Promotion/Markdown treat the value of DFLT_LIFE_CYCLE as SLC.

Can I map one forecast run type to more than one application?

Yes, a forecast run type can be mapped to multiple applications. Each run type-application mapping has a specific usage. When setting up a mapping with the IPO-DF application, an External Run Type Key must be provided; it is used by the IPO-DF UI to represent this run type.

What is the source for approved forecast?

The approved forecast source is determined for each run type independently. By default, it is AIF, unless the run type is mapped to the IPO-DF application, in which case it is RDX. This holds even if the run type is also mapped to other applications in addition to IPO-DF.

At what level must IPO-DF send pre-forecast batch parameters to AIF?

AIF requires all of the pre-forecast batch parameters at each of the different forecast levels at which the IPO-DF user wants to generate a forecast. In other words, AIF requires these parameters for each of the different run types mapped to IPO-DF.

At what level does AIF require Out-of-Stock and Outlier indicators?

Out-of-Stock and Outlier indicators are used by the Sales & Promo forecast method during the pre-processing steps. Pre-processing happens at the forecast intersection level, so these indicators are needed at each of the different forecast intersection levels for the Sales & Promo method.

Why are the IPO-DF forecast results in AIF not getting exported to RDX?

Export of IPO-DF forecast results from AIF to RDX depends on two factors: a particular prod/loc must have a forecast generated, and that prod/loc must be present in the table RSE_RDF_FCST_RUN_TYPE_PRM for the relevant run type. Note that this table is populated from the RDX table RDF_FCST_PARM_EXP during the pre-forecast batch. So, you must determine whether there are any common prod/locs between RSE_RDF_FCST_RUN_TYPE_PRM, filtered by the relevant run type, and RSE_LLC_FCST_BLINE (for the Sales & Promo method) or RSE_DMD_FCST (for the Life Cycle method), filtered by the relevant forecast run (a diagnostic sketch follows). If there are no common prod/locs, then no export will happen from AIF to RDX. Either change the prod/locs sent from IPO-DF during the pre-forecast batch or update the data files so that the desired prod/locs have forecasts generated in AIF.
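A diagnostic sketch for the Sales & Promo case might look like the following; the prod/loc key and filter column names are assumptions to be adapted to the actual schema:

    -- prod/locs requested by IPO-DF that also have a generated forecast
    SELECT p.PROD_KEY, p.LOC_KEY
      FROM RSE_RDF_FCST_RUN_TYPE_PRM p
      JOIN RSE_LLC_FCST_BLINE b
        ON b.PROD_KEY = p.PROD_KEY
       AND b.LOC_KEY  = p.LOC_KEY
     WHERE p.RUN_TYPE_ID = :run_type_id
       AND b.RUN_ID      = :forecast_run_id;

An empty result means no export will occur for that run.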

What checks must I make to ensure that the AIF-IPO-DF workflow is ready to go?

The desired forecast run type must be mapped to IPO-DF and assigned an External Run Type Key. The IPO-DF user must send pre-forecast batch parameters using the same External Run Type Key. The run type should be in the Active state, and the completed forecast run that is to be exported from AIF to RDX must be Approved.

Why am I not able to export out a particular forecast run from AIF to RDX?

A forecast run can only be exported once. After that, the run is considered dead and cannot be exported again. This can be verified from the RSE_FCST_RUN_HDR table: filter by the ID column (the forecast run ID) and check the EXPORTED_DT column value. This value remains empty until the forecast run is exported and is populated when the export happens; once populated, the same forecast run cannot be exported again.
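For example, a check along these lines, using the ID and EXPORTED_DT columns named above:

    -- An empty EXPORTED_DT means the run has not yet been exported
    SELECT ID, EXPORTED_DT
      FROM RSE_FCST_RUN_HDR
     WHERE ID = :forecast_run_id;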

Can I use a different forecast method for MFP and AP other than Auto Exponential Smoothing?

Yes, the Sales & Promo method and the Life Cycle method are also supported for MFP and AP. Once a run type based on the Sales & Promo or Life Cycle method is set up and mapped to MFP or AP, the forecast exports from that run type go to the RDX table RSE_FCST_DEMAND_EXP, which is where MFP and AP read the forecasts from.

How can I check the seasonality curves used in forecasting?

RSE_SEAS_CURVE_DIM stores the seasonality curves for each estimation run. A ROW_PROC_LEVEL of 1 represents the escalation level at which each seasonality curve was computed; a ROW_PROC_LEVEL of 3 represents the forecast level. So, if the forecast level is part of the escalation path, and a curve computed for a forecast-level prod/loc is not pruned, then the same prod/loc appears for that curve under both ROW_PROC_LEVEL 1 and 3. RSE_SEAS_CURVE_WK stores the seasonality indices for each estimation run and each curve (a diagnostic sketch follows).
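A minimal sketch for inspecting the curves of one estimation run, assuming the run is identified by a RUN_ID column:

    -- Curve counts at the escalation level (1) versus the forecast level (3)
    SELECT ROW_PROC_LEVEL, COUNT(*) AS curve_count
      FROM RSE_SEAS_CURVE_DIM
     WHERE RUN_ID = :estimation_run_id
     GROUP BY ROW_PROC_LEVEL;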

How can I check which seasonality curves were pruned, and why?

For each estimation run, PMO_RUN_LLC_SEASONALITY_CURVE and PMO_RUN_SEASONALITY_CURVE store all of the generated curves (both pruned and reliable), along with the reason for pruning, for the Sales & Promo method and the Life Cycle method, respectively.

Why am I not able to see the approved forecast in RI and/or RMS?

The relevant forecast run type must be mapped to the RI/RMS application for the approved forecast to flow to RI for BI reporting and/or to RMS for merchandising. If the RI/RMS application is not visible in the Map screen drop-down list, then it can be turned on from the RSE_APP_SOURCE table in the Manage System Configurations screen.

How can I turn off the GA Outlier calculation feature in order to implement a custom solution?

If the GA feature for Outlier indicator calculation is not required, it can be turned off by setting PMO_OUTLIER_CALCULATION_FLG to N in the RSE_CONFIG table in the Manage System Configurations screen.

How can I use an alternate hierarchy in AIF for forecasting?

The desired alternate hierarchy must first be constructed in AIF. Refer to the Building an Alternate Hierarchy in AIF Applications section earlier in this chapter for more information. The merchandise and location hierarchy types must be set up correctly in the PMO_PROD_HIER_TYPE and PMO_LOC_HIER_TYPE parameters, respectively, in the RSE_CONFIG table in the Manage System Configurations screen.

How can I set up custom processes for forecasting through IW?

Different custom processes are required for different stages of the forecasting pipeline. To execute such processes during weekly batches through POM, the correct custom POM jobs must be used; these ensure the desired POM dependencies on the pre and post POM processes. Refer to Custom Jobs Through Innovation Workbench (IW) for more information.

How can I set up Fact Measures data for forecasting in AIF?

Different configuration steps must be followed in AIF to consume the aggregated sales data coming through the RI fact interfaces. Refer to the Configurations in AIF to Generate Forecasts for FACT Measures section earlier in this chapter for more information. The Automatic Exponential Smoothing forecast method can use the fact measures data source when the desired run type is created correctly.

Why did the additional week of data loaded into AIF not get aggregated for some AGGR_LVL_KEY?

For the additional week of data to get aggregated in PMO_ACTIVITIES for a particular AGGR_LVL_KEY, at least one of the corresponding run types based on that AGGR_LVL_KEY must be in Active state.

How can I check the DB logs for errors for the different processes running in AIF?

DB logs are stored in RSE_LOG_MSG. To see the latest logs, order by MSG_TS descending. To check for errors, filter by LOG_LEVEL = 'ERROR'. To see the progress of a particular process, filter by PROGRAM_UNIT or ROUTINE_NAME. For some logs, the PROCESS_ID serves as the estimation run ID or the forecast run ID, but be careful with its usage, as the PROCESS_ID might also mean something else. The AUDSID remains the same for a particular process execution, so it can also be used as a filter to check the logs for that process (example queries follow).
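For example, using the columns named above:

    -- Latest errors first
    SELECT *
      FROM RSE_LOG_MSG
     WHERE LOG_LEVEL = 'ERROR'
     ORDER BY MSG_TS DESC;

    -- Trace one process execution end-to-end via its audit session ID
    SELECT *
      FROM RSE_LOG_MSG
     WHERE AUDSID = :audsid
     ORDER BY MSG_TS;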

Why do the forecasting estimation runs complete in a few seconds without seeming to do anything?

For submitted estimation runs to execute the different stages properly, seed data must be present in PMO_RUN_STAGE_EXEC_REQUEST for a RUN_HDR_ID of -1. Also, data must be present in both PMO_RUN_STAGE_EXEC_REQUEST and PMO_RUN_STAGE_EXECUTION for the relevant estimation runs.

Why did the base demand fail or complete with zero rows for the Sales & Promo forecast method?

One of the primary reasons could be that there is no seasonality data in RSE_SEAS_CURVE_DIM for the relevant estimation run. This, in turn, could be caused by the DFLT_LIFE_CYCLE parameter not being set to LLC or by insufficient sales history in PMO_ACTIVITIES for the relevant AGGR_LVL_KEY.

Why did the base demand fail or complete with zero rows for the Life Cycle forecast method?

One of the primary reasons could be that there is no seasonality data in RSE_SEAS_CURVE_DIM for the relevant estimation run or no in-season base demand data in RSE_BD_IN_SEAS_SLS for the relevant forecast run. This, in turn, could be caused by the DFLT_LIFE_CYCLE parameter not being set to SLC or by insufficient inventory data and old first receipt dates (a diagnostic sketch follows).
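A quick diagnostic is to confirm that rows exist for the relevant runs in both tables; the RUN_ID column names are assumptions:

    -- Zero rows in either query points to the failure cause described above
    SELECT COUNT(*) FROM RSE_SEAS_CURVE_DIM WHERE RUN_ID = :estimation_run_id;
    SELECT COUNT(*) FROM RSE_BD_IN_SEAS_SLS WHERE RUN_ID = :forecast_run_id;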

Why did the forecast run not generate the new item forecast?

New item data should be populated in RSE_FCST_NEW_ITEM_PARAM for the relevant forecast run type before the forecast run is executed. The new item base demand is copied from the like item at the same location, so the like item must have base demand generated for that particular location.

Why is the RUN_ID not the same in RSE_FCST_RUN_TYPE_CONFIG and RSE_FCST_RUN_HDR_EXP?

The RUN_ID values in RSE_FCST_RUN_TYPE_CONFIG and RSE_FCST_RUN_HDR_EXP are different by design, as these two RDX tables are populated by separate processes. The RUN_ID in the RDX tables is assigned incrementally every time a new process writes to one of the RDX tables. Also, the RUN_ID in the RDX tables is entirely different from the forecast or estimation run IDs in AIF.

Can I forecast seasonal items with the Sales & Promo method or long-running items with the Life Cycle method?

Both the Sales & Promo and the Life Cycle methods are capable of forecasting all kinds of items. The choice of forecast method depends on multiple factors, such as data availability, the retailer's business model, the purpose of the forecast output, and so on. The default method for IPO-DF customers should be the Sales & Promo method; they should switch to other methods only after proper discussion and a clear understanding of their requirements.