13 Control and Tactical Center

This chapter describes the Control and Tactical Center, where an administrative user can access and manage configurations for different applications.

Under the Control and Tactical Center, a user can access Strategy and Policy Management, which is the central place for managing the configurations of different applications and for setting up and configuring the demand forecasting runs. In addition, the modules for managing product attributes (Attribute Extraction and Attribute Binning), as well as the link for managing web service-related credentials, can also be found under the Control and Tactical Center.

Depending on the role, a user will see one or more of the following links, as shown in Figure 13-1. Typically, the administrative user has access to all of them.

  • Strategy and Policy Management

  • Manage Credential Stores

  • Attribute Extraction

  • Attribute Binning

Figure 13-1 Control and Tactical Center


Strategy and Policy Management

In Strategy and Policy Management, a user can edit the configurations of different applications using the Manage System Configurations screen. Only tables that are editable can be accessed in this screen. Within each table, one or more columns are editable, and a user can override the values in those columns.

Manage Forecast Configurations can be used to set up, manage, and configure the demand forecasting runs for different applications such as Offer Optimization (OO), Inventory Optimization (IO), Retail Demand Forecasting (RDF), Assortment Planning (AP), and Merchandise Financial Planning (MFP).

Forecast Configuration for MFP and AP

To configure the forecast process for MFP and AP, complete the following two steps:

  1. Use the Manage System Configurations screen to review and modify the global configurations in RSE_CONFIG. For further details, see "Configuration Updates". The relevant configurations in RSE_CONFIG that must be reviewed and edited as required are listed in Table 13-1 (a verification query sketch follows this procedure).

    Table 13-1 Configurations

    APPL_CODE PARAM_NAME PARAM_VALUE DESCR

    RSE EXTENDED_HIERARCHY_SRC Default value is NON-RMS. Data source providing extended hierarchy data (RMS/NON-RMS).
    RSE LOAD_EXTENDED_PROD_HIER Default value is Y. If the product hierarchy data has nine levels, keep this value as Y. If it has seven levels, change this value to N. Y/N value. This parameter is used by the product hierarchy ETL to determine whether the extended product hierarchy is required.
    PMO PMO_PROD_HIER_TYPE Default value is 3. If the product hierarchy data has nine levels (that is, it has an extended hierarchy), keep this value as 3. If it has seven levels, change this value to 1. The hierarchy ID to use for the product (installation configuration).
    RSE PROD_HIER_SLSTXN_HIER_LEVEL_ID Default value is 9. This parameter identifies the hierarchy level at which sales transactions are provided (7-Style, 8-Style/Color, or 9-Style/Color/Size). It must match the extended hierarchy leaf level.
    RSE FCST_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run data.
    RSE FCST_PURGE_ALL_RUNS Default value is N. Y/N flag indicating whether all run data should be deleted.
    RSE FCST_TY_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run type data.
    RSE FCST_TY_PURGE_ALL_RUNS Default value is N. Y/N flag indicating whether all run type data should be deleted.
  2. Use the Manage Forecast Configurations screen to set up the forecast runs for MFP and AP, as follows.

    In Manage Forecast Configurations, start by setting up a run type in the Setup train stop. Click the + icon above the table and fill in the fields in the pop-up. For MFP and AP forecasting, select Automatic Exponential Smoothing as the forecast method. You must create a run type for each forecast measure/forecast intersection combination that is required for MFP and/or AP.

    Once you have set up the run types, click Start Data Aggregation in the Setup train stop. Select all the run types that were created and click Submit. When the aggregation is complete, the aggregation status changes to Complete. At this point, ad hoc test runs and batch runs can be created and executed to generate a forecast.

    To create an ad hoc run, go to the Test train stop. First, click a run type in the top table and then click the + icon in the bottom table to create a run. In the Create Run pop-up, you can change the configuration parameters related to the estimation and forecast processes in their respective tabs. For example, if you want to test a run using the Bayesian method, edit the Estimation Method parameter in the Estimation tab using the Edit icon above the table. After modifying and reviewing the configuration parameters, click Submit to start the run. Once the run is complete, the status changes to Forecast Generation Complete.

    Test runs are optional. However, you must modify and review the configurations of the run type, activate the run type, enable auto-approve, and map the run type to the downstream application (in this case, MFP or AP). In the Manage train stop, select a row, click Edit Configurations Parameters, and edit the estimation and forecast parameters as necessary. Once you are done, go to the Review tab, click Validate, and then close the tab.

    Note:

    If the run type is active, you will only be able to view the parameters. In order to edit the parameters, the run type must be inactive.

    To activate the run type and enable auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to MFP or AP, go to the Map train stop and click the + icon to create a new mapping.
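Configuration changes are made through the Manage System Configurations screen; direct database edits are not the supported path. As a read-only verification sketch (table and column names are as shown in Table 13-1), the values from step 1 could be reviewed with a query such as the following:

    -- Review the Table 13-1 parameters after editing them in the UI.
    SELECT appl_code, param_name, param_value
      FROM rse_config
     WHERE (appl_code = 'RSE'
            AND param_name IN ('EXTENDED_HIERARCHY_SRC',
                               'LOAD_EXTENDED_PROD_HIER',
                               'PROD_HIER_SLSTXN_HIER_LEVEL_ID',
                               'FCST_PURGE_RETENTION_DAYS',
                               'FCST_PURGE_ALL_RUNS',
                               'FCST_TY_PURGE_RETENTION_DAYS',
                               'FCST_TY_PURGE_ALL_RUNS'))
        OR (appl_code = 'PMO' AND param_name = 'PMO_PROD_HIER_TYPE')
     ORDER BY appl_code, param_name;

Per Table 13-1, a seven-level product hierarchy, for example, should show LOAD_EXTENDED_PROD_HIER = N and PMO_PROD_HIER_TYPE = 1.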

Run Type Configurations for MFP and AP to Set Up GA Runs

To set up GA runs, create one run type for each forecast intersection/measure combination. Table 13-2 shows the configurations for each run type.

Table 13-2 Run Type Configurations

Forecast Level (Merchandise/Location/Calendar), Forecast Measures, Forecast Method, Data Source

All run types use Automatic Exponential Smoothing as the forecast method and Store Sales as the data source.

Location Plan (MFP), Department/Location/Week: Total Gross Sales Units, Total Gross Sales Amount, Total Returns Units, Total Returns Amount

Location Target (MFP), Company/Location/Week: Total Gross Sales Units, Total Gross Sales Amount, Total Returns Units, Total Returns Amount

Merchandise Plan (MFP), Subclass/Area/Week: Clearance Gross Sales Units, Clearance Gross Sales Amount, Regular and Promotion Gross Sales Units, Regular and Promotion Gross Sales Amount, Total Returns Units, Total Returns Amount

Merchandise Target (MFP), Department/Area/Week: Clearance Gross Sales Units, Clearance Gross Sales Amount, Regular and Promotion Gross Sales Units, Regular and Promotion Gross Sales Amount, Total Returns Units, Total Returns Amount

Forecast Level 1 (AP), Subclass/Location/Week: Clearance Gross Sales Units, Clearance Gross Sales Amount, Regular and Promotion Gross Sales Units, Regular and Promotion Gross Sales Amount

Forecast Level 2 (AP), SKU/Location/Week: Clearance Gross Sales Units, Clearance Gross Sales Amount, Regular and Promotion Gross Sales Units, Regular and Promotion Gross Sales Amount

Note:

For all batch runs, the Bayesian forecast is generated by default in addition to the pre-season forecast. In the Manage train stop, you only need to set the method for the pre-season forecast (either Automatic Exponential Smoothing or Seasonal Exponential Smoothing).

Batch and Ad Hoc Jobs for MFP and AP Forecasting

The Batch and Ad Hoc jobs listed in Table 13-3 are used for loading foundation data (product hierarchy, location hierarchy, calendar, product attributes, and so on).

Table 13-3 Configuration and Main Data Load Jobs

JobName Description RmsBatch

ORASE_START_BATCH_JOB ORASE_START_BATCH_JOB rse_process_state_update.ksh
ORASE_START_BATCH_SET_ACTIVE_JOB ORASE_START_BATCH_SET_ACTIVE_JOB rse_batch_type_active.ksh
ORASE_START_BATCH_REFRESH_RESTR_JOB ORASE_START_BATCH_REFRESH_RESTR_JOB rse_batch_freq_restriction.ksh
ORASE_START_BATCH_END_JOB ORASE_START_BATCH_END_JOB rse_process_state_update.ksh
RSE_WEEKLY_INPUT_FILES_START_JOB RSE_WEEKLY_INPUT_FILES_START_JOB rse_process_state_update.ksh
WEEKLY_INPUT_FILES_WAIT_JOB WEEKLY_INPUT_FILES_WAIT_JOB rse_batch_zip_file_wait.ksh
WEEKLY_INPUT_FILES_VAL_JOB WEEKLY_INPUT_FILES_VAL_JOB rse_batch_zip_file_extract.ksh
WEEKLY_INPUT_FILES_COPY_JOB WEEKLY_INPUT_FILES_COPY_JOB rse_batch_inbound_file_copy.ksh
RSE_WEEKLY_INPUT_FILES_END_JOB RSE_WEEKLY_INPUT_FILES_END_JOB rse_process_state_update.ksh
RSE_PROD_HIER_ETL_START_JOB RSE_PROD_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_PROD_SRC_XREF_LOAD_JOB RSE_PROD_SRC_XREF_LOAD_JOB rse_prod_src_xref_load.ksh
RSE_PROD_HIER_LOAD_JOB RSE_PROD_HIER_LOAD_JOB rse_prod_hier_load.ksh
RSE_PROD_TC_LOAD_JOB RSE_PROD_TC_LOAD_JOB rse_prod_tc_load.ksh
RSE_PROD_DH_LOAD_JOB RSE_PROD_DH_LOAD_JOB rse_prod_dh_load.ksh
RSE_PROD_GROUP_LOAD_JOB RSE_PROD_GROUP_LOAD_JOB rse_load_prod_group.ksh
RSE_PROD_HIER_ETL_END_JOB RSE_PROD_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_LOC_HIER_ETL_START_JOB RSE_LOC_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_LOC_SRC_XREF_LOAD_JOB RSE_LOC_SRC_XREF_LOAD_JOB rse_loc_src_xref_load.ksh
RSE_LOC_HIER_LOAD_JOB RSE_LOC_HIER_LOAD_JOB rse_loc_hier_load.ksh
RSE_LOC_HIER_TC_LOAD_JOB RSE_LOC_HIER_TC_LOAD_JOB rse_loc_hier_tc_load.ksh
RSE_LOC_HIER_DH_LOAD_JOB RSE_LOC_HIER_DH_LOAD_JOB rse_loc_hier_dh_load.ksh
RSE_LOC_HIER_ETL_END_JOB RSE_LOC_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_LOC_ATTR_LOAD_START_JOB RSE_LOC_ATTR_LOAD_START_JOB rse_process_state_update.ksh
RSE_CDA_ETL_LOAD_LOC_JOB RSE_CDA_ETL_LOAD_LOC_JOB rse_cda_etl_load_location.ksh
RSE_LOC_ATTR_LOAD_END_JOB RSE_LOC_ATTR_LOAD_END_JOB rse_process_state_update.ksh
RSE_LIKE_LOC_LOAD_START_JOB RSE_LIKE_LOC_LOAD_START_JOB rse_process_state_update.ksh
RSE_LIKE_LOC_STG_JOB RSE_LIKE_LOC_STG_JOB rse_like_loc_stg.ksh
RSE_LIKE_LOC_COPY_JOB RSE_LIKE_LOC_COPY_JOB
RSE_LIKE_LOC_STG_CNE_JOB RSE_LIKE_LOC_STG_CNE_JOB
RSE_LIKE_LOC_LOAD_JOB RSE_LIKE_LOC_LOAD_JOB rse_like_loc_load.ksh
RSE_LIKE_LOC_LOAD_END_JOB RSE_LIKE_LOC_LOAD_END_JOB rse_process_state_update.ksh
RSE_LIKE_PROD_LOAD_START_JOB RSE_LIKE_PROD_LOAD_START_JOB rse_process_state_update.ksh
RSE_LIKE_PROD_STG_JOB RSE_LIKE_PROD_STG_JOB rse_like_prod_stg.ksh
RSE_LIKE_PROD_COPY_JOB RSE_LIKE_PROD_COPY_JOB
RSE_LIKE_PROD_STG_CNE_JOB RSE_LIKE_PROD_STG_CNE_JOB
RSE_LIKE_PROD_LOAD_JOB RSE_LIKE_PROD_LOAD_JOB rse_like_prod_load.ksh
RSE_LIKE_PROD_LOAD_END_JOB RSE_LIKE_PROD_LOAD_END_JOB rse_process_state_update.ksh
RSE_DATA_STAGING_START_JOB RSE_DATA_STAGING_START_JOB rse_process_state_update.ksh
RSE_MD_CDA_VALUES_STG_JOB RSE_MD_CDA_VALUES_STG_JOB rse_md_cda_values_stg.ksh
RSE_MD_CDA_VALUES_COPY_JOB RSE_MD_CDA_VALUES_COPY_JOB
RSE_MD_CDA_VALUES_STG_CNE_JOB RSE_MD_CDA_VALUES_STG_CNE_JOB
RSE_PR_LC_CDA_STG_JOB RSE_PR_LC_CDA_STG_JOB rse_pr_lc_cda_stg.ksh
RSE_PR_LC_CDA_COPY_JOB RSE_PR_LC_CDA_COPY_JOB
RSE_PR_LC_CDA_STG_CNE_JOB RSE_PR_LC_CDA_STG_CNE_JOB
RSE_PR_LC_CAL_CDA_JOB RSE_PR_LC_CAL_CDA_JOB rse_pr_lc_cal_cda_stg.ksh
RSE_PR_LC_CAL_CDA_COPY_JOB RSE_PR_LC_CAL_CDA_COPY_JOB
RSE_PR_LC_CAL_CDA_STG_CNE_JOB RSE_PR_LC_CAL_CDA_STG_CNE_JOB
RSE_MD_CDA_STG_JOB RSE_MD_CDA_STG_JOB rse_md_cda_stg.ksh
RSE_MD_CDA_COPY_JOB RSE_MD_CDA_COPY_JOB
RSE_MD_CDA_STG_CNE_JOB RSE_MD_CDA_STG_CNE_JOB
RSE_MD_CDA_LOAD_JOB RSE_MD_CDA_LOAD_JOB rse_md_cda_load.ksh
RSE_MD_CDA_VALUES_LOAD_JOB RSE_MD_CDA_VALUES_LOAD_JOB rse_md_cda_values_load.ksh
RSE_DATA_STAGING_END_JOB RSE_DATA_STAGING_END_JOB rse_process_state_update.ksh
RSE_PROD_ATTR_LOAD_START_JOB RSE_PROD_ATTR_LOAD_START_JOB rse_process_state_update.ksh
RSE_CDA_ETL_LOAD_PROD_JOB RSE_CDA_ETL_LOAD_PROD_JOB rse_cda_etl_load_product.ksh
RSE_PROD_ATTR_GRP_VALUE_STG_JOB RSE_PROD_ATTR_GRP_VALUE_STG_JOB rse_prod_attr_grp_value_stg.ksh
RSE_PROD_ATTR_GRP_VALUE_COPY_JOB RSE_PROD_ATTR_GRP_VALUE_COPY_JOB
RSE_PROD_ATTR_GRP_VALUE_STG_CNE_JOB RSE_PROD_ATTR_GRP_VALUE_STG_CNE_JOB
RSE_PROD_ATTR_GRP_VALUE_LOAD_JOB RSE_PROD_ATTR_GRP_VALUE_LOAD_JOB rse_prod_attr_grp_value_load.ksh
RSE_PROD_ATTR_VALUE_XREF_STG_JOB RSE_PROD_ATTR_VALUE_XREF_STG_JOB rse_prod_attr_value_xref_stg.ksh
RSE_PROD_ATTR_VALUE_XREF_COPY_JOB RSE_PROD_ATTR_VALUE_XREF_COPY_JOB
RSE_PROD_ATTR_VALUE_XREF_STG_CNE_JOB RSE_PROD_ATTR_VALUE_XREF_STG_CNE_JOB
RSE_PROD_ATTR_VALUE_XREF_LOAD_JOB RSE_PROD_ATTR_VALUE_XREF_LOAD_JOB rse_prod_attr_value_xref_load.ksh
RSE_PROD_ATTR_LOAD_END_JOB RSE_PROD_ATTR_LOAD_END_JOB rse_process_state_update.ksh
RSE_CM_GRP_HIER_LOAD_START_JOB RSE_CM_GRP_HIER_LOAD_START_JOB rse_process_state_update.ksh
RSE_CM_GRP_XREF_LOAD_JOB RSE_CM_GRP_XREF_LOAD_JOB rse_cm_grp_xref_load.ksh
RSE_CM_GRP_HIER_LOAD_JOB RSE_CM_GRP_HIER_LOAD_JOB rse_cm_grp_hier_load.ksh
RSE_CM_GRP_TC_LOAD_JOB RSE_CM_GRP_TC_LOAD_JOB rse_cm_grp_tc_load.ksh
RSE_CM_GRP_DH_LOAD_JOB RSE_CM_GRP_DH_LOAD_JOB rse_cm_grp_dh_load.ksh
RSE_CM_GRP_HIER_LOAD_END_JOB RSE_CM_GRP_HIER_LOAD_END_JOB rse_process_state_update.ksh
RSE_TRADE_AREA_HIER_LOAD_START_JOB RSE_TRADE_AREA_HIER_LOAD_START_JOB rse_process_state_update.ksh
RSE_TRADE_AREA_SRC_XREF_LOAD_JOB RSE_TRADE_AREA_SRC_XREF_LOAD_JOB rse_trade_area_src_xref_load.ksh
RSE_TRADE_AREA_HIER_LOAD_JOB RSE_TRADE_AREA_HIER_LOAD_JOB rse_trade_area_hier_load.ksh
RSE_TRADE_AREA_TC_LOAD_JOB RSE_TRADE_AREA_TC_LOAD_JOB rse_trade_area_tc_load.ksh
RSE_TRADE_AREA_DH_LOAD_JOB RSE_TRADE_AREA_DH_LOAD_JOB rse_trade_area_dh_load.ksh
RSE_TRADE_AREA_HIER_LOAD_END_JOB RSE_TRADE_AREA_HIER_LOAD_END_JOB rse_process_state_update.ksh
RSE_CUST_CONS_SEG_HIER_ETL_START_JOB RSE_CUST_CONS_SEG_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_CUSTSEG_SRC_XREF_LOAD_JOB RSE_CUSTSEG_SRC_XREF_LOAD_JOB rse_custseg_src_xref_load.ksh
RSE_CUSTSEG_HIER_LOAD_JOB RSE_CUSTSEG_HIER_LOAD_JOB rse_custseg_hier_load.ksh
RSE_CUSTSEG_HIER_TC_LOAD_JOB RSE_CUSTSEG_HIER_TC_LOAD_JOB rse_custseg_hier_tc_load.ksh
RSE_CUSTSEG_CUST_XREF_LOAD_JOB RSE_CUSTSEG_CUST_XREF_LOAD_JOB rse_custseg_cust_xref_load.ksh
RSE_CONSEG_LOAD_JOB RSE_CONSEG_LOAD_JOB rse_conseg_load.ksh
RSE_CONSEG_ALLOC_LOAD_JOB RSE_CONSEG_ALLOC_LOAD_JOB rse_conseg_alloc_load.ksh
RSE_CUSTSEG_ALLOC_LOAD_JOB RSE_CUSTSEG_ALLOC_LOAD_JOB rse_custseg_alloc_load.ksh
RSE_CUST_CONS_SEG_HIER_ETL_END_JOB RSE_CUST_CONS_SEG_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_CAL_HIER_ETL_START_JOB RSE_CAL_HIER_ETL_START_JOB rse_process_state_update.ksh
RSE_REGULAR_MAIN_LOAD_JOB RSE_REGULAR_MAIN_LOAD_JOB rse_regular_main_load.ksh
RSE_FISCAL_MAIN_LOAD_JOB RSE_FISCAL_MAIN_LOAD_JOB rse_fiscal_main_load.ksh
RSE_CAL_HIER_ETL_END_JOB RSE_CAL_HIER_ETL_END_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_END_START_JOB RSE_DIMENSION_LOAD_END_START_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_END_END_JOB RSE_DIMENSION_LOAD_END_END_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_START_START_JOB RSE_DIMENSION_LOAD_START_START_JOB rse_process_state_update.ksh
RSE_DIMENSION_LOAD_START_END_JOB RSE_DIMENSION_LOAD_START_END_JOB rse_process_state_update.ksh
RSE_SLS_START_START_JOB RSE_SLS_START_START_JOB rse_process_state_update.ksh
RSE_SLS_START_END_JOB RSE_SLS_START_END_JOB rse_process_state_update.ksh
RSE_SLS_END_START_JOB RSE_SLS_END_START_JOB rse_process_state_update.ksh
RSE_SLS_END_END_JOB RSE_SLS_END_END_JOB rse_process_state_update.ksh
ORASE_END_START_JOB ORASE_END_START_JOB rse_process_state_update.ksh
ORASE_END_RUN_DATE_UPDT_JOB ORASE_END_RUN_DATE_UPDT_JOB rse_batch_run_date_update.ksh
ORASE_END_END_JOB ORASE_END_END_JOB rse_process_state_update.ksh
RSE_MASTER_ADHOC_JOB Run RSE master script rse_master.ksh
PMO_MASTER_ADHOC_JOB Run PMO master script pmo_master.ksh
PMO_CUMUL_SLS_SETUP_ADHOC_JOB Ad hoc setup job for cumulative sales calculation pmo_cum_sls_load_setup.ksh
PMO_CUMUL_SLS_PROCESS_ADHOC_JOB Ad hoc process job for cumulative sales calculation pmo_cum_sls_load_process.ksh
PMO_ACTIVITY_STG_ADHOC_JOB Ad hoc PMO Activity load setup job pmo_activity_load_setup.ksh
PMO_ACTIVITY_LOAD_ADHOC_JOB Ad hoc PMO Activity load process job pmo_activity_load_process.ksh
PMO_ACTIVITY_WH_LOAD_SETUP_ADHOC_JOB Ad hoc PMO Activity setup job to aggregate sales data for warehouse transfers pmo_activity_wh_load_setup.ksh
PMO_ACTIVITY_WH_LOAD_PROCESS_ADHOC_JOB Ad hoc PMO Activity process job to aggregate sales data for warehouse transfers pmo_activity_wh_load_process.ksh
PMO_RETURN_DATAPREP_SETUP_ADHOC_JOB Ad hoc PMO Returns data load preparation setup job pmo_return_dataprep_setup.ksh
PMO_RETURN_DATAPREP_PROCESS_ADHOC_JOB Ad hoc PMO Returns data load preparation processing job pmo_return_dataprep_process.ksh
PMO_RETURN_CALC_PROCESS_ADHOC_JOB Ad hoc PMO Returns calculations processing job pmo_return_calculation.ksh
PMO_HOLIDAY_LOAD_ADHOC_JOB Ad hoc PMO Holiday load job pmo_holiday_load.ksh

The Batch and Ad hoc jobs listed in Table 13-4 are used to prepare weekly data and to run weekly batches for MFP and AP.

Table 13-4 Batch and Ad Hoc Jobs for MFP and AP Forecasting

JobName Description RmsBatch

PMO_ACTIVITY_LOAD_START_JOB PMO Activity load start job rse_process_state_update.ksh
PMO_ACTIVITY_STG_JOB PMO Activity data staging pmo_activity_load_setup.ksh
PMO_ACTIVITY_LOAD_JOB PMO Activity load job pmo_activity_load_process.ksh
PMO_ACTIVITY_LOAD_END_JOB PMO Activity load end job rse_process_state_update.ksh
RSE_FCST_SALES_PLAN_START_JOB Start job for sales plan data load for Bayesian method rse_process_state_update.ksh
RSE_FCST_SALES_PLAN_LOAD_JOB Load job for sales plan data for Bayesian method rse_fcst_sales_plan_load.ksh
RSE_FCST_SALES_PLAN_END_JOB End job for sales plan data load for Bayesian method rse_process_state_update.ksh
RSE_ASSORT_PLAN_LOAD_START_JOB Loading assortment plan through interface for AP rse_process_state_update.ksh
RSE_ASSORT_PLAN_LOAD_STG_JOB Loading assortment plan through interface for AP rse_assort_plan_per_stg.ksh
RSE_ASSORT_PLAN_LOAD_COPY_JOB Loading assortment plan through interface for AP
RSE_ASSORT_PLAN_LOAD_STG_CNE_JOB Loading assortment plan through interface for AP
RSE_RDX_ASSORT_PLAN_IMPORT_JOB Importing assortment plan from RDX to AIF for AP rse_rdx_assort_plan_import.ksh
RSE_ASSORT_PLAN_LOAD_JOB Loading assortment plan through interface for AP rse_assort_plan_per_load.ksh
RSE_ASSORT_PLAN_LOAD_END_JOB Loading assortment plan through interface for AP rse_process_state_update.ksh
RSE_PLANNING_PERIOD_LOAD_START_JOB Loading assortment period through interface for AP rse_process_state_update.ksh
RSE_PLANNING_PERIOD_STG_JOB Loading assortment period through interface for AP rse_planning_period_stg.ksh
RSE_PLANNING_PERIOD_COPY_JOB Loading assortment period through interface for AP
RSE_PLANNING_PERIOD_STG_CNE_JOB Loading assortment period through interface for AP
RSE_RDX_ASSORT_PERIOD_IMPORT_JOB Importing assortment period from RDX to AIF for AP rse_rdx_assort_period_import.ksh
RSE_PLANNING_PERIOD_LOAD_JOB Loading assortment period through interface for AP rse_planning_period_load.ksh
RSE_PLANNING_PERIOD_LOAD_END_JOB Loading assortment period through interface for AP rse_process_state_update.ksh
RSE_MFP_FCST_BATCH_RUN_START_JOB Start job for executing MFP and AP forecast batch runs rse_process_state_update.ksh
RSE_CREATE_MFP_BATCH_RUN_PROC_JOB Create MFP and AP batch runs rse_create_mfp_batch_run_proc.ksh
RSE_MFP_FCST_BATCH_PROCESS_JOB Execute MFP and AP forecast batch rse_fcst_mfp_batch_process.ksh
RSE_MFP_FCST_BATCH_RUN_END_JOB End job for executing MFP and AP forecast batch runs rse_process_state_update.ksh
RSE_CREATE_MFP_BATCH_RUN_PROC_ADHOC_JOB Ad hoc job to create MFP and AP batch runs rse_create_mfp_batch_run_proc.ksh
RSE_MFP_FCST_BATCH_PROCESS_ADHOC_JOB Ad hoc job to execute MFP and AP forecast batch rse_fcst_mfp_batch_process.ksh

The forecast values generated by runs that are associated with active run types are exported to the RDX schema. The jobs for exporting the outputs to the RDX schema are listed in Table 13-5.

Table 13-5 Export Jobs for MFP and AP from AIF to RDX

JobName Description RmsBatch

RSE_FCST_EXPORT_START_JOB Start job for MFP and AP forecast export rse_process_state_update.ksh
RSE_MFP_FCST_EXPORT_JOB Export MFP forecast rse_mfp_export.ksh
RSE_AP_FCST_EXPORT_JOB Export AP forecast rse_ap_export.ksh
RSE_FCST_EXPORT_END_JOB End job for MFP and AP forecast export rse_process_state_update.ksh
RSE_MFP_FCST_EXPORT_ADHOC_JOB Ad hoc job to export MFP forecast rse_mfp_export.ksh
RSE_AP_FCST_EXPORT_ADHOC_JOB Ad hoc job to export AP forecast rse_ap_export.ksh
RSE_RDX_ASSORT_ELASTICITY_EXPORT_JOB Export assortment elasticity for AP rse_rdx_assort_elasticity_export.ksh
RSE_RDX_ASSORT_ELASTICITY_EXPORT_ADHOC_JOB Ad hoc job to export assortment elasticity for AP rse_rdx_assort_elasticity_export.ksh
RSE_RDX_LOC_CLUSTER_EXPORT_JOB Export store clusters for AP rse_rdx_loc_cluster_export.ksh
RSE_RDX_LOC_CLUSTER_EXPORT_ADHOC_JOB Ad hoc job to export store clusters for AP rse_rdx_loc_cluster_export.ksh
RSE_RDX_SIZE_PROFILE_EXPORT_JOB Export size profiles for AP rse_rdx_size_profile_export.ksh
RSE_RDX_SIZE_PROFILE_EXPORT_ADHOC_JOB Ad hoc job to export size profiles for AP rse_rdx_size_profile_export.ksh

Forecast Configuration for RDF and AIF (including Inventory Optimization and Offer Optimization)

To configure the forecast process for RDF, complete the following two steps:

  1. Use the Manage System Configurations screen to review and modify the global configurations in RSE_CONFIG. For further details, see "Configuration Updates". The relevant configurations in RSE_CONFIG that must be reviewed and edited as required are listed in Table 13-6.

    Table 13-6 RSE Configurations

    APPL_CODE PARAM_NAME PARAM_VALUE DESCR

    RSE EXTENDED_HIERARCHY_SRC Default value is NON-RMS. Data source providing extended hierarchy data (RMS/NON-RMS).
    RSE LOAD_EXTENDED_PROD_HIER Default value is Y. If the product hierarchy data has nine levels, keep this value as Y. If it has seven levels, change this value to N. Y/N value. This parameter is used by the product hierarchy ETL to determine whether the extended product hierarchy is required.
    PMO PMO_PROD_HIER_TYPE Default value is 3. If the product hierarchy data has nine levels (that is, it has an extended hierarchy), keep this value as 3. If it has seven levels, change this value to 1. The hierarchy ID to use for the product (installation configuration).
    RSE PROD_HIER_SLSTXN_HIER_LEVEL_ID Default value is 9. This parameter identifies the hierarchy level at which sales transactions are provided (7-Style, 8-Style/Color, or 9-Style/Color/Size). It must match the extended hierarchy leaf level.
    PMO PMO_AGGR_INVENTORY_DATA_FLG Default value is Y. Specifies whether inventory data is present and whether it should be used when aggregating activities data. Set this value to N if inventory data is not loaded (inventory data is not required for MFP and AP forecasting, but it is required for other applications such as Offer Optimization, Inventory Optimization, and Retail Demand Forecasting).
    RSE SLS_TXN_EXCLUDE_LIABILITY_SLS Default value is N. Y/N flag indicating whether liability sales columns should be excluded when retrieving sales transaction data.
    RSE RSE_LOC_HIER_EXCL_FRANCHISE Default value is N. Y/N flag indicating whether franchise locations should be excluded from the location hierarchy.
    RSE PROMOTIONAL_SALES_AVAIL Default value is Y. Y/N flag indicating whether promotion data is available for Causal-Long Life Cycle run types. Set this value to N if promotion data is not yet loaded into AIF.
    RSE DFLT_LIFE_CYCLE Default value is SLC. This parameter identifies the main life cycle of the items. Keep this value as SLC if most of the items have a short life cycle; change it to LLC if most of the items have a long life cycle.
    RSE RSE_APPROVED_FCST_SOURCE Default value is RDX. Valid values are RSE (approved forecast is taken from AIF directly for the latest approved run) or RDX (approved forecast is taken from RDF for the latest approved run in the RDF UI).
    PMO PMO_OUTLIER_CALCULATION_FLG Default value is Y. Y/N flag indicating whether the GA solution for Outlier flag calculation should be used.
    RSE FCST_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run data.
    RSE FCST_PURGE_ALL_RUNS Default value is N. Y/N flag indicating whether all run data should be deleted.
    RSE FCST_TY_PURGE_RETENTION_DAYS Default value is 180. Number of days to wait before permanently deleting run type data.
    RSE FCST_TY_PURGE_ALL_RUNS Default value is N. Y/N flag indicating whether all run type data should be deleted.
    RSE RSE_FIRST_INV_RECEIPT_HIST_MAX_WEEKS Default value is 106. Records with an inventory first receipt date older than this many weeks (from the latest week with inventory) are excluded from the model start date calculation.
    RSE INV_SRC_RI Default value is Y. Flag that indicates whether inventory is obtained from RI (Y) or loaded directly to ORASE (N).
    RSE INV_RC_DT_SRC_RI Default value is Y. Flag that indicates whether receipt dates are obtained from RI (Y) or loaded directly to ORASE (N).
  2. Use the Manage Forecast Configurations screen to set up the forecast runs, as follows.

    In Manage Forecast Configurations, start by setting up a run type in the Setup train stop. Click the + icon above the table and fill in the fields in the pop-up. The customer should decide the majority life cycle of the items and set the value of the DFLT_LIFE_CYCLE parameter in the RSE_CONFIG table accordingly. All items of the opposite life cycle should be stored in the table RSE_FCST_LIFE_CYCLE_CLSF (populated through an interface). If the customer has both Short Life Cycle and Long Life Cycle items, you must create one run type for each. Depending on the run type's Forecast Method and the DFLT_LIFE_CYCLE parameter, a run type uses either the items present in RSE_FCST_LIFE_CYCLE_CLSF or the remaining items. You must create a run type for each forecast intersection combination that is required (see the classification query sketch after these steps).

    Once you have finished setting up the run types, click Start Data Aggregation in the Setup train stop. Select all the run types that were created and click Submit. When the aggregation is complete, the aggregation status changes to Complete. At this point, ad hoc test runs and batch runs can be created and executed to generate a forecast.

    To create an ad hoc run, go to the Test train stop. First, click a run type in the top table and then click the + icon in the bottom table to create a run. In the Create Run pop-up, you can change the configuration parameters related to the estimation process, base demand, and forecast process in their respective tabs. After modifying and reviewing the configuration parameters, click Submit to start the run. Upon submission, a validation process checks the configuration parameter values. If there is any error, correct the corresponding parameter and submit again. Once the run is complete, the status changes to Forecast Generation Complete.

    Once you are done with testing, you must modify and review the configurations of the run type, activate the run type, enable auto-approve, and map the run type to the downstream application (for example, RDF). In the Manage train stop, select a row, click Edit Configurations Parameters, and edit the estimation and forecast parameters as necessary. Once you are done, go to the Review tab, click Validate, and then close the tab.

    Note:

    If the run type is active, you will only be able to view the parameters. In order to edit the parameters, the run type must be inactive.

    To activate the run type and enable auto-approve, select a run type in the table and click the corresponding buttons above the table. Lastly, to map the run type to RDF, go to the Map train stop and click the + icon to create a new mapping.

    Note:

    For RDF, you can map one or multiple run types to the same "external key" run type. The run types that are mapped to the same external key must have the same forecast intersection, the same price zone and customer segment flags, and the same data source, but opposite life cycles.
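The life cycle classification described in step 2 can be checked directly against the tables involved. The following is a sketch only: RSE_CONFIG and RSE_FCST_LIFE_CYCLE_CLSF are the tables named above, but the ITEM_ID column is a hypothetical placeholder; check the actual interface definition for the real column names.

    -- Majority life cycle configured for the instance (SLC or LLC).
    SELECT param_value AS dflt_life_cycle
      FROM rse_config
     WHERE appl_code = 'RSE'
       AND param_name = 'DFLT_LIFE_CYCLE';

    -- Items classified with the opposite life cycle, loaded through the
    -- interface. ITEM_ID is a hypothetical column name for illustration.
    SELECT COUNT(DISTINCT item_id) AS exception_item_cnt
      FROM rse_fcst_life_cycle_clsf;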

Batch and Ad Hoc Jobs for RDF Forecasting

The configuration and main data load jobs listed in Table 13-3 are used for loading foundation data (product hierarchy, location hierarchy, calendar, product attributes, and so on).

The Batch and Ad Hoc jobs listed in Table 13-7 are used for preparing weekly data and running weekly batches for RDF.

Table 13-7 Batch and Ad Hoc Jobs for RDF Forecasting

JobName Description RmsBatch

PMO_ACTIVITY_LOAD_START_JOB PMO Activity load start job rse_process_state_update.ksh
PMO_ACTIVITY_STG_JOB PMO Activity data staging pmo_activity_load_setup.ksh
PMO_ACTIVITY_LOAD_JOB PMO Activity load job pmo_activity_load_process.ksh
PMO_ACTIVITY_LOAD_END_JOB PMO Activity load end job rse_process_state_update.ksh
RSE_PROMO_HIER_ETL_START_JOB Start job for loading promotion hierarchy data rse_process_state_update.ksh
RSE_PROMO_SRC_XREF_LOAD_JOB Job for loading promotion hierarchy data rse_promo_src_xref_load.ksh
RSE_PROMO_HIER_LOAD_JOB Job for loading promotion hierarchy data rse_promo_hier_load.ksh
RSE_PROMO_HIER_DH_LOAD_JOB Job for loading promotion hierarchy data rse_promo_hier_dh_load.ksh
RSE_PROMO_HIER_ETL_END_JOB End job for loading promotion hierarchy data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_PL_LLC_START_JOB Start job for loading promotion lift data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_PL_LLC_LOAD_JOB Job for loading promotion lift data pmo_activity_load_pl_llc.ksh
PMO_ACTIVITY_LOAD_PL_LLC_END_JOB End job for loading promotion lift data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_PL_LLC_LOAD_ADHOC_JOB Ad hoc job for loading promotion lift data pmo_activity_load_pl_llc.ksh
PMO_ACTIVITY_LOAD_OFFERS_INITIAL_ADHOC_JOB Initial one-time job for loading promotion offers data pmo_activity_load_offers_initial.ksh
PMO_ACTIVITY_LOAD_OFFERS_START_JOB Start job for loading promotion offers data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_OFFERS_LOAD_JOB Job for loading promotion offers data pmo_activity_load_offers_weekly.ksh
PMO_ACTIVITY_LOAD_OFFERS_END_JOB End job for loading promotion offers data rse_process_state_update.ksh
PMO_ACTIVITY_LOAD_OFFERS_LOAD_ADHOC_JOB Ad hoc job for loading promotion offers data pmo_activity_load_offers_weekly.ksh
RSE_FLEX_GROUP_LOAD_START_JOB Loading flex group data through interface rse_process_state_update.ksh
RSE_FLEX_GROUP_DTL_STG_JOB Loading flex group data through interface (populates rse_flex_group_dtl_stg) rse_flex_group_dtl_stg.ksh
RSE_FLEX_GROUP_DTL_COPY_JOB Loading flex group data through interface
RSE_FLEX_GROUP_DTL_STG_CNE_JOB Loading flex group data through interface
RSE_FLEX_GROUP_DTL_LOAD_JOB Loading flex group data through interface (populates rse_flex_group and rse_flex_group_dtl based on rse_flex_group_set and rse_flex_group_dtl_stg) rse_flex_group_dtl_load.ksh
RSE_FLEX_GROUP_LOAD_END_JOB Loading flex group data through interface rse_process_state_update.ksh
RSE_FCST_SPREAD_PROFILE_LOAD_START_JOB Loading spread profile data through interface rse_process_state_update.ksh
RSE_FCST_SPREAD_PROFILE_STG_JOB Loading spread profile data through interface (populates rse_fcst_spread_profile_stg) rse_fcst_spread_profile_stg.ksh
RSE_FCST_SPREAD_PROFILE_COPY_JOB Loading spread profile data through interface
RSE_FCST_SPREAD_PROFILE_STG_CNE_JOB Loading spread profile data through interface
RSE_FCST_SPREAD_PROFILE_LOAD_JOB Loading spread profile data through interface (populates rse_fcst_spread_profile based on rse_fcst_spread_profile_stg) rse_fcst_spread_profile_load.ksh
RSE_FCST_SPREAD_PROFILE_LOAD_END_JOB Loading spread profile data through interface rse_process_state_update.ksh
RSE_FCST_LIFE_CYCLE_CLSF_LOAD_START_JOB Loading exception life cycle items through interface rse_process_state_update.ksh
RSE_FCST_LIFE_CYCLE_CLSF_STG_JOB Loading exception life cycle items through interface (populates rse_fcst_life_cycle_clsf_stg) rse_fcst_life_cycle_clsf_stg.ksh
RSE_FCST_LIFE_CYCLE_CLSF_COPY_JOB Loading exception life cycle items through interface
RSE_FCST_LIFE_CYCLE_CLSF_STG_CNE_JOB Loading exception life cycle items through interface
RSE_FCST_LIFE_CYCLE_CLSF_LOAD_JOB Loading exception life cycle items through interface (populates rse_fcst_life_cycle_clsf based on rse_fcst_life_cycle_clsf_stg) rse_fcst_life_cycle_clsf_load.ksh
RSE_FCST_LIFE_CYCLE_CLSF_LOAD_END_JOB Loading exception life cycle items through interface rse_process_state_update.ksh
RSE_INV_LOAD_START_JOB Inventory Load Start Job rse_process_state_update.ksh
RSE_INV_PR_LC_WK_STG_JOB Inventory price location week file stage (populates rse_inv_pr_lc_wk_a_stg) rse_inv_pr_lc_wk_stg.ksh
RSE_INV_PR_LC_WK_COPY_JOB Loading price location week file
RSE_INV_PR_LC_WK_STG_CNE_JOB Loading price location week file
RSE_INV_PR_LC_WK_LOAD_JOB Inventory price location week file load (populates rse_inv_pr_lc_wk_a and rse_inv_pr_lc_hist based on rse_inv_pr_lc_wk_a_stg) rse_inv_pr_lc_wk_load.ksh
RSE_INV_PR_LC_WK_SETUP_JOB Inventory price location week RI load setup rse_inv_pr_lc_wk_setup.ksh
RSE_INV_PR_LC_WK_PROCESS_JOB Inventory price location week RI load process (populates rse_inv_pr_lc_wk_a and rse_inv_pr_lc_hist based on w_rtl_inv_it_lc_wk_a and w_rtl_invu_it_lc_wk_a) rse_inv_pr_lc_wk_process.ksh
RSE_INV_LOAD_END_JOB Inventory Load End Job rse_process_state_update.ksh
RSE_EXIT_DATE_LOAD_START_JOB Start job for exit dates rse_process_state_update.ksh
RSE_EXIT_DATE_STG_JOB Staging job for exit dates (populates rse_exit_date_stg) rse_exit_date_stg.ksh
RSE_EXIT_DATE_COPY_JOB CNE copy job for exit dates
RSE_EXIT_DATE_STG_CNE_JOB CNE staging job for exit dates
RSE_EXIT_DATE_LOAD_JOB Load job for exit dates (populates rse_exit_date based on rse_exit_date_stg) rse_exit_date_load.ksh
RSE_EXIT_DATE_LOAD_END_JOB End job for exit dates rse_process_state_update.ksh
PMO_RUN_EXEC_START_JOB Start job for PMO run execution rse_process_state_update.ksh
PMO_CREATE_BATCH_RUN_JOB Create batch run for PMO execution rse_create_pmo_batch_run_proc.ksh
PMO_RUN_EXEC_SETUP_JOB Setup job for PMO run execution pmo_run_exec_setup.ksh
PMO_RUN_EXEC_PROCESS_JOB Process job for PMO run execution pmo_run_exec_process.ksh
PMO_RUN_EXEC_END_JOB End job for PMO run execution rse_process_state_update.ksh
PMO_RUN_EXEC_ADHOC_JOB Ad hoc job for PMO run execution pmo_run_execution.ksh
RSE_FCST_BATCH_RUN_START_JOB Start job for forecast batch run rse_process_state_update.ksh
RSE_CREATE_FCST_BATCH_RUN_JOB Create forecast batch run rse_create_fcst_batch_run_proc.ksh
RSE_FCST_BATCH_PROCESS_JOB Execute Base Demand and Demand Forecast for forecast run rse_fcst_batch_process.ksh
RSE_FCST_BATCH_RUN_END_JOB End job for creating forecast batch run rse_process_state_update.ksh
RSE_CREATE_FCST_BATCH_RUN_ADHOC_JOB Ad hoc job to create forecast batch run rse_create_fcst_batch_run_proc.ksh
RSE_FCST_BATCH_PROCESS_ADHOC_JOB Ad hoc job to execute Base Demand and Demand Forecast for forecast run rse_fcst_batch_process.ksh

The forecast values generated by runs that are associated with active run types and mapped to RDF are exported to the RDX schema. Table 13-8 lists the jobs that export the main forecast data.

Table 13-8 Jobs for Exporting Forecast Outputs from AIF to RDX

JobName Description RmsBatch

RSE_RDF_FCST_EXPORT_JOB Export RDF forecast rse_rdf_export.ksh
RSE_RDF_FCST_EXPORT_ADHOC_JOB Ad hoc job to export RDF forecast rse_rdf_export.ksh

Table 13-9 lists the additional jobs for exporting other data, such as promotions and configuration parameters for run types.

Table 13-9 Additional Jobs for Exporting Data from AIF to RDX

JobName Description RmsBatch

RSE_HIST_PROMO_OFFER_SALES_EXPORT_ADHOC_JOB Initial one-time job for exporting promotion offer sales figures rse_rdf_offer_sales_hist_exp.ksh
RSE_PROMO_OFFER_EXPORT_START_JOB Start job for exporting promotion offers rse_process_state_update.ksh
RSE_PROMO_OFFER_EXPORT_JOB Export promotion offers rse_rdf_offers_hier_exp.ksh
RSE_PROMO_OFFER_SALES_EXPORT_JOB Export promotion offer sales figures rse_rdf_offer_sales_exp.ksh
RSE_PROMO_OFFER_EXPORT_END_JOB End job for exporting promotion offers rse_process_state_update.ksh
RSE_PROMO_OFFER_EXPORT_ADHOC_JOB Ad hoc job to export promotion offers rse_rdf_offers_hier_exp.ksh
RSE_PROMO_OFFER_SALES_EXPORT_ADHOC_JOB Ad hoc job to export promotion offer sales figures rse_rdf_offer_sales_exp.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_START_JOB Start job for exporting runtype config rse_process_state_update.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_SETUP_JOB Setup job for exporting runtype config rse_fcst_run_type_conf_exp_setup.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_PROCESS_JOB Process job for exporting runtype config rse_fcst_run_type_conf_exp_process.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_END_JOB End job for exporting runtype config rse_process_state_update.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_SETUP_ADHOC_JOB Ad hoc setup job for exporting runtype config rse_fcst_run_type_conf_exp_setup.ksh
RSE_FCST_RUN_TYPE_CONF_EXPORT_PROCESS_ADHOC_JOB Ad hoc process job for exporting runtype config rse_fcst_run_type_conf_exp_process.ksh

The jobs for importing forecast parameters from the RDX schema are listed in Table 13-10:

Table 13-10 Jobs for Importing Forecast Parameters from RDX to AIF

JobName Description RmsBatch

RSE_RDX_FCST_PARAM_START_JOB Start job for importing forecast parameters rse_process_state_update.ksh
RSE_RDX_FCST_PARAM_SETUP_JOB Setup job for importing forecast parameters rse_rdx_fcst_param_setup.ksh
RSE_RDX_FCST_PARAM_PROCESS_JOB Process job for importing forecast parameters rse_rdx_fcst_param_process.ksh
RSE_RDX_FCST_PARAM_END_JOB End job for importing forecast parameters rse_process_state_update.ksh
RSE_RDX_FCST_PARAM_SETUP_ADHOC_JOB Ad hoc setup job for importing forecast parameters rse_rdx_fcst_param_setup.ksh
RSE_RDX_FCST_PARAM_PROCESS_ADHOC_JOB Ad hoc process job for importing forecast parameters rse_rdx_fcst_param_process.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_START_JOB Start job for importing enablement flags for new item forecast rse_process_state_update.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_SETUP_JOB Setup job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_setup.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_PROCESS_JOB Process job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_process.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_END_JOB End job for importing enablement flags for new item forecast rse_process_state_update.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_SETUP_ADHOC_JOB Ad hoc setup job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_setup.ksh
RSE_FCST_RDX_NEW_ITEM_ENABLE_PROCESS_ADHOC_JOB Ad hoc process job for importing enablement flags for new item forecast rse_fcst_rdx_new_item_enable_process.ksh
PMO_EVENT_IND_RDF_START_JOB Start job for importing event indicators for preprocessing rse_process_state_update.ksh
PMO_EVENT_IND_RDF_SETUP_JOB Setup job for importing event indicators for preprocessing pmo_event_ind_rdf_setup.ksh
PMO_EVENT_IND_RDF_PROCESS_JOB Process job for importing event indicators for preprocessing pmo_event_ind_rdf_process.ksh
PMO_EVENT_IND_RDF_END_JOB End job for importing event indicators for preprocessing rse_process_state_update.ksh
PMO_EVENT_IND_RDF_SETUP_ADHOC_JOB Ad hoc setup job for importing event indicators for preprocessing pmo_event_ind_rdf_setup.ksh
PMO_EVENT_IND_RDF_PROCESS_ADHOC_JOB Ad hoc process job for importing event indicators for preprocessing pmo_event_ind_rdf_process.ksh
RSE_LIKE_RDX_RSE_START_JOB Start job for importing new item parameters rse_process_state_update.ksh
RSE_LIKE_RDX_RSE_SETUP_JOB Setup job for importing new item parameters rse_like_rdx_rse_setup.ksh
RSE_LIKE_RDX_RSE_PROCESS_JOB Process job for importing new item parameters rse_like_rdx_rse_process.ksh
RSE_LIKE_RDX_RSE_END_JOB End job for importing new item parameters rse_process_state_update.ksh
RSE_LIKE_RDX_RSE_SETUP_ADHOC_JOB Ad hoc setup job for importing new item parameters rse_like_rdx_rse_setup.ksh
RSE_LIKE_RDX_RSE_PROCESS_ADHOC_JOB Ad hoc process job for importing new item parameters rse_like_rdx_rse_process.ksh

The approved forecast is exported out of RDX into AIF and then written to a flat file. The jobs are listed in Table 13-11. The RSE_APPROVED_FCST_SOURCE parameter value in the RSE_CONFIG table (APPL_CODE = RSE), available in the Manage System Configurations screen, must be set correctly (a verification query sketch follows Table 13-11).

Table 13-11 Approved Forecast Export Jobs

JobName Description RmsBatch

RSE_RDX_APPD_FCST_START_JOB Start job for exporting approved forecast data rse_process_state_update.ksh
RSE_RDX_APPD_FCST_SETUP_JOB Setup job for exporting approved forecast data rse_rdx_appd_fcst_setup.ksh
RSE_RDX_APPD_FCST_PROCESS_JOB Process job for exporting approved forecast data rse_rdx_appd_fcst_process.ksh
RSE_RDF_APPR_FCST_EXPORT_JOB Export approved weekly forecast to a flat file rse_rdf_appr_fcst_export.ksh
RSE_RDF_APPR_FCST_DAY_EXPORT_JOB Export approved daily forecast to a flat file rse_rdf_appr_fcst_day_export.ksh
RSE_RDX_APPD_FCST_END_JOB End job for exporting approved forecast data rse_process_state_update.ksh
RSE_RDX_APPD_FCST_SETUP_ADHOC_JOB Ad hoc setup job for exporting approved forecast data rse_rdx_appd_fcst_setup.ksh
RSE_RDX_APPD_FCST_PROCESS_ADHOC_JOB Ad hoc process job for exporting approved forecast data rse_rdx_appd_fcst_process.ksh
RSE_RDF_APPR_FCST_EXPORT_ADHOC_JOB Ad hoc job to export approved weekly forecast to a flat file rse_rdf_appr_fcst_export.ksh
RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_JOB Ad hoc job to export approved daily forecast to a flat file rse_rdf_appr_fcst_day_export.ksh
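As a verification sketch, the export source setting mentioned above can be read back from RSE_CONFIG (table, column, and parameter names as documented in this chapter):

    -- Expected values: RDX (approved forecast taken from RDF) or
    -- RSE (approved forecast taken directly from AIF).
    SELECT param_value AS approved_fcst_source
      FROM rse_config
     WHERE appl_code = 'RSE'
       AND param_name = 'RSE_APPROVED_FCST_SOURCE';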

Loading and Calculating Event Flags in AIF

The Out-of-Stock and Outlier indicators are important for the estimation stage of forecasting. These two flags are stored in the AIF table PMO_EVENT_INDICATOR, which is used by the Causal-Long Life Cycle forecast method.

Loading Out-of-Stock Indicators Through the RAP Interface

The customer can load Out-of-Stock indicators at the SKU/Store/Week level using the RAP interface (staging table: W_RTL_INVOOS_IT_LC_WK_FS; target table: W_RTL_INVOOS_IT_LC_WK_F). The indicators then flow into AIF and are used in forecasting. The same interface can also be used to load Outlier indicators. A verification sketch follows.
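As a quick check that the indicators have landed, assuming read access to the two interface tables named above, staged rows can be compared against rows that reached the target table (a sketch; week or partition filters are omitted):

    -- Compare staged rows with rows loaded into the target table.
    SELECT 'W_RTL_INVOOS_IT_LC_WK_FS' AS tbl, COUNT(*) AS row_cnt
      FROM w_rtl_invoos_it_lc_wk_fs
    UNION ALL
    SELECT 'W_RTL_INVOOS_IT_LC_WK_F', COUNT(*)
      FROM w_rtl_invoos_it_lc_wk_f;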

Calculating Outlier Indicators in AIF

Outlier indicators can be calculated directly in AIF at the Product/Location/Week level. A sale for a particular Product/Location/Week is flagged as an outlier if it is greater than the average rate-of-sales for the Product/Location multiplied by a threshold. The indicators are used in forecasting. The threshold value can be provided using the parameter PP_LLC_OUTLIER_THRESHOLD in the Preprocessing tab while setting up a forecast run in the Manage Forecast Configurations screen. This feature can be turned on or off using the parameter PMO_OUTLIER_CALCULATION_FLG in the table RSE_CONFIG (APPL_CODE = PMO) from the Manage System Configurations screen. The rule is sketched below.
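The following restates the rule above in SQL on a hypothetical weekly sales table SLS_WK(PROD_ID, LOC_ID, WK_ID, SLS_QTY); AIF's internal implementation and table layout may differ, and the bind variable stands in for the PP_LLC_OUTLIER_THRESHOLD run parameter:

    -- Flag a Product/Location/Week as an outlier when its sales exceed
    -- the Product/Location average rate-of-sales times the threshold.
    -- SLS_WK and its columns are hypothetical names for illustration.
    SELECT s.prod_id, s.loc_id, s.wk_id,
           CASE WHEN s.sls_qty > a.avg_ros * :pp_llc_outlier_threshold
                THEN 'Y' ELSE 'N'
           END AS outlier_ind
      FROM sls_wk s
      JOIN (SELECT prod_id, loc_id, AVG(sls_qty) AS avg_ros
              FROM sls_wk
             GROUP BY prod_id, loc_id) a
        ON a.prod_id = s.prod_id
       AND a.loc_id  = s.loc_id;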

Workflow for RDF Implementation

The AIF-RDF workflow can be implemented in two ways.

The main dependency in the first process is that, even before the first round of forecasts is exported from AIF to RDX, parameters must be imported from RDX to AIF (the reason is that RDF prefers to receive forecasts for only a subset of the prod/locs for which forecasts are generated in AIF). For the parameter import to work properly, run types must be mapped to RDF and assigned an ext key. Here are the steps for the first process:

  1. Create run types in AIF in Setup train stop screen.

  2. Create test runs in AIF in Test train stop screen to verify that the setup is correct.

  3. Map run types to RDF and assign ext keys in Map train stop screen.

  4. Import parameters from RDX to AIF (RDF sends parameters using the same ext keys used in Step 3).

  5. Run forecasts again in Test train stop screen (this time, the forecasts will use the imported parameters).

  6. Approve the estimation and forecast runs in Test train stop screen.

  7. Activate the run types in Manage train stop screen.

  8. Export forecasts from AIF to RDX.

  9. Cycle continues:

    1. Import parameters from RDX to AIF.

    2. Run forecasts.

    3. Approve runs.

    4. Export forecasts from AIF to RDX.

In the second process, there is no dependency of importing parameters from RDX to AIF before exporting forecast from AIF to RDX. (This simplifies the workflow and saves time during implementation; however, the batch processes will still have the dependency.) Here are the steps for the second process:

  1. Create run types in AIF in Setup train stop screen.

  2. Create test runs in AIF in Test train stop screen to verify that the setup is correct.

  3. Approve the estimation and forecast runs in Test train stop screen.

  4. Activate the run types in Manage train stop screen.

  5. Map run types to RDF and assign ext keys in Map train stop screen.

  6. Export forecasts from AIF to RDX (this exports all prod/locs for which AIF has generated forecasts, not just a subset; however, the export takes longer because the amount of data is much larger, so this is the trade-off between the two implementation processes).

  7. Cycle continues as in Step 9 of the first process.

Internally (with respect to AIF), the difference between the two processes is as follows: when the AIF table that stores parameters imported from RDX is empty, AIF exports forecasts for all prod/locs (second process); otherwise, AIF exports forecasts only for the subset of prod/locs imported from RDX (first process).
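A quick way to tell which behavior a given environment will exhibit is to check whether the imported-parameter table holds any rows. RSE_RDF_FCST_RUN_TYPE_PRM is the table the parameter import jobs populate (see the workflow example below); this is a sketch:

    -- Zero rows: AIF exports forecasts for all prod/locs (second process).
    -- Nonzero:   AIF exports only the subset imported from RDX (first process).
    SELECT COUNT(*) AS imported_param_rows
      FROM rse_rdf_fcst_run_type_prm;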

Note that once a forecast run has been exported from AIF to RDX, it cannot be exported again. For an active run type, the export code finds the latest forecast run that has completed, has been approved, and has not been exported before.

AIF-RDF Workflow Example

Here is a set of sample steps using the first process.

  1. In AIF Manage Forecast Configurations train stop 1 Setup, create Run Type.
    • For forecasts used by RDF, choose Causal-Short Life Cycle or Causal-Long Life Cycle as Forecast Method.
    • If any run type has an aggregation status of Not Started, click the Start Data Aggregation button for the new run type just created. This process can take a few hours to complete.

  2. In AIF Manage Forecast Configurations train stop 2 Test.
    • Once aggregation completes, create a new run under the selected run type. Choose to run Estimation and Base Demand in this step; you do not have to toggle Generate Forecast initially.

    • Approve the estimation parameters by clicking on Approve Demand Parameters.
    • Tables to check:
      • Seasonality (from estimation)
        • For both SLC and LLC: RSE_SEAS_CURVE_DIM (using pmo run id)

        • For both SLC and LLC: RSE_SEAS_CURVE_WK (using pmo run id)
      • Base Demand
        • For SLC: RSE_BASE_DEMAND (using fcst run id)

        • For LLC: RSE_LLC_BD_RES (using fcst run id)
      • Forecast (when Generate Forecast is selected)
        • For SLC: RSE_DMD_FCST (using fcst run id)

        • For LLC: RSE_LLC_FCST_BLINE (using fcst run id)
  3. In AIF Manage Forecast Configurations train stop 4 Map.
    • Map the forecast run type to RDF

    • Make sure to assign the external application key for RDF, which is usually Weekly Forecast (this name must match the RDF side)
  4. In RDF:
    • Perform your initial RDF Workflow Tasks to set up forecast parameters (History Length, Forecast Start Date, Forecast End Date), new items, etc.

    • Run the RDF Pre-Forecast and Export Forecast Parameters Batches using RDF_PRE_BATCH_W_JOB and RDF_PRE_EXP_RDX_W_JOB processes.
      • POM job RDF_PRE_BATCH_W_JOB triggers Pre-Forecast Batch task in RDF OAT

      • POM job RDF_PRE_EXP_RDX_W_JOB triggers Export Forecast Parameters task in RDF OAT
      • Check the following tables after the batches complete.
        • RDF_FCST_PARM_HDR_EXP
          • rdf_rdfruntype01:RDF_RUN_TYPE_KEY

        • RDF_RUN_TYPE_PARM_EXP
          • rdf_rdfruntype01:RDF_RUN_TYPE_KEY

          • rdf_runnewitem01:ENABLE_NEW_ITEM
          • rdf_runfrcst01:ENABLE_FORECAST
        • RDF_FCST_PARM_EXP
          • rdf_hislen01:FCST_HIST_LENGTH

          • rdf_frcststartdt01:FCST_START_DATE
          • rdf_frcstenddtlw01:FCST_END_DATE
        • RDF_FCST_PARM_CAL_EXP
          • rdf_ppsbopdate01:BOP_DATE

          • rdf_totadjslsovr01:USER_OVERRIDE
          • rdf_outageind01:PPS_OUTAGE_IND
          • rdf_outlierind01:PPS_OUTLIER_IND
        • RDF_NIT_PARM_EXP
          • PROD_HIER_LEVEL:SKU

          • LOC_HIER_LEVEL:STOR
          • rdf_fcpsubm:NIT_SUB_METHOD
          • rdf_fcpros:NIT_ROS_USER_IN
          • rdf_nitapplkitm1:NIT_LIKE_ITEM
          • rdf_nstapplkstr1:NIT_LIKE_STOR
          • rdf_fcpadj:ADJUSTMENT_FACTOR
  5. Run jobs in POM to import parameters from RDX to AIF.
    • RSE_RDX_FCST_PARAM_ADHOC_PROCESS
      • RSE_RDX_FCST_PARAM_SETUP_ADHOC_JOB

      • RSE_RDX_FCST_PARAM_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_RDF_FCST_RUN_TYPE_PRM (from RDF_FCST_PARM_HDR_EXP and RDF_FCST_PARM_EXP)

    • RSE_FCST_RDX_NEW_ITEM_ENABLE_ADHOC_PROCESS
      • RSE_FCST_RDX_NEW_ITEM_ENABLE_SETUP_ADHOC_JOB

      • RSE_FCST_RDX_NEW_ITEM_ENABLE_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_FCST_RDX_NEW_ITEM_ENABLE (from RDF_RUN_TYPE_PARM_EXP)

    • RSE_LIKE_RDX_RSE_ADHOC_PROCESS
      • RSE_LIKE_RDX_RSE_SETUP_ADHOC_JOB

      • RSE_LIKE_RDX_RSE_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_FCST_NEW_ITEM_PARAM (from RDF_NIT_PARM_EXP)

    • PMO_EVENT_IND_RDF_ADHOC_PROCESS
      • PMO_EVENT_IND_RDF_SETUP_ADHOC_JOB

      • PMO_EVENT_IND_RDF_PROCESS_ADHOC_JOB
      • Table to check:
        • PMO_EVENT_INDICATOR (from RDF_FCST_PARM_CAL_EXP)

  6. In AIF Manage Forecast Configurations train stop 2 Test.
    • Create a new run under the same run type you have been working with. Choose to run only Base Demand if there is no need to run Estimation, and toggle Generate Forecast this time.

    • If estimation has been run, approve the estimation parameters by clicking Approve Demand Parameters.
    • Approve the base demand and forecast by clicking on Approve Base Demand and Forecast
    • Tables to check:
      • Seasonality (when estimation is run):
        • For both SLC and LLC: RSE_SEAS_CURVE_DIM (using pmo run id)

        • For both SLC and LLC: RSE_SEAS_CURVE_WK (using pmo run id)
      • Base Demand
        • For SLC: RSE_BASE_DEMAND (using fcst run id)

        • For LLC: RSE_LLC_BD_RES (using fcst run id)
      • Forecast
        • For SLC: RSE_DMD_FCST (using fcst run id)

        • For LLC: RSE_LLC_FCST_BLINE (using fcst run id)
  7. In AIF Manage Forecast Configurations train stop 3 Manage.
    • Use View/Edit Configuration Parameters to see or apply parameters from a previously approved run (these parameters are used for batch runs; they are not needed for forecast runs created from the AIF UI)

    • Activate the forecast run type
    • Manage Auto-Approve to enable Auto-Approve for the run type before starting batch cycles (not needed for forecast runs created from AIF UI)
  8. Run jobs in POM to export forecasts to RDX from AIF.
    • RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS
      • RSE_FCST_RUN_TYPE_CONF_EXPORT_SETUP_ADHOC_JOB

      • RSE_FCST_RUN_TYPE_CONF_EXPORT_PROCESS_ADHOC_JOB
    • RSE_FCST_EXPORT_ADHOC_PROCESS (do not run the full process, as it also includes the AP and MFP forecast exports; run only the job below)
      • RSE_RDF_FCST_EXPORT_ADHOC_JOB

  9. After the jobs complete, check the following tables in the Data Visualization tool to do a quick validation (a row-count query sketch appears after this procedure).
    • Table: rdx01.rse_fcst_run_type_config (gets populated by RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS; based on an independent export run id)

    • Table: rdx01.rse_fcst_run_hdr_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; this table associates the export run id to the forecast run id approved in Step 6)
    • Table: rdx01.rse_fcst_run_config (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_appr_base_dmd_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_demand_dtl_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_demand_src_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
    • Table: rdx01.rse_fcst_demand_dtl_cal_exp (gets populated by RSE_RDF_FCST_EXPORT_ADHOC_JOB; based on the export run id from rdx01.rse_fcst_run_hdr_exp)
  10. In RDF:
    • Import the forecasts to RDF using RDF_POST_DATA_IMP_PROCESS, which runs the following two OAT tasks in RDF:
      • Import Forecast and Preprocessing Component task in RDF OAT
        • RSE_FCST_RUN_HDR_EXP
          • rdf_rserunid01:RUN_ID

        • RSE_FCST_DEMAND_DTL_CAL_EXP
          • rdf_sysbaseline01:BASELINE_FCST_QTY

          • rdf_sysfrcst01:DEMAND_FCST_QTY
          • rdf_syspeak01:EVENT_PEAK_QTY
          • rdf_eventfut01:EVENT_CLND
          • rdf_prcdiscfut01:PRICE_DISC_PERCENT
        • RSE_FCST_DEMAND_DTL_EXP
          • rdf_basedemand01:BASE_DEMAND_QTY

          • rdf_priceelas01:PRICE_ELASTICITY
        • RSE_FCST_APPR_BASE_DMD_EXP
          • rdf_appchosenlevel01:APPR_BASE_DEMAND_QTY

        • RSE_FCST_DEMAND_SRC_EXP
          • rdf_outageind01:STOCKOUT_IND

          • rdf_outageadj01:LOST_SLS_QTY
          • rdf_outlierind01:OUTLIER_IND
          • rdf_outlieradj01:OUTLIER_SLS_QTY
          • rdf_depromoadj01:PROMO_SLS_QTY
          • rdf_deprice01:CLR_SLS_QTY
          • rdf_totadjsls01:PREPROCESSED_SLS_QTY
      • Post Forecast Batch task in RDF OAT
    • Build the Forecast Review workspace to review, update, and approve forecasts
    • Export approved forecasts by running the RDF_POST_EXP_RDX POM job
      • Export Approved Forecast OAT Task
        • RDF_APPR_FCST_HDR_EXP
          • rdf_apprserunid01:RSE_RUN_ID

        • RDF_APPR_FCST_CAL_EXP
          • rdf_bopdate01:BOP_DATE

          • rdf_appbaseline01:APPR_BASELINE_FCST
          • rdf_appfrcst01:APPR_DEMAND_FCST
          • rdf_appcumint01:APPR_CUMINT
  11. To push the approved forecast from RDX to RMS/object store (as a flat file) via AIF:
    • Run RSE_RDX_APPD_FCST_ADHOC_PROCESS in POM to import approved forecast from RDX to AIF:
      • RSE_RDX_APPD_FCST_SETUP_ADHOC_JOB

      • RSE_RDX_APPD_FCST_PROCESS_ADHOC_JOB
      • Table to check:
        • RSE_RDF_APPR_FCST (from RDF_APPR_FCST_HDR_EXP and RDF_APPR_FCST_CAL_EXP)

    • Run RSE_RDF_APPR_FCST_EXPORT_ADHOC_PROCESS (RSE_RDF_APPR_FCST_EXPORT_ADHOC_JOB) in POM to export the data from RSE_RDF_APPR_FCST_VW (week-level forecast) to RMS/object store as a flat file
    • Run RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_PROCESS (RSE_RDF_APPR_FCST_DAY_EXPORT_ADHOC_JOB) in POM to export the data from RSE_RDF_APPR_FCST_DAY_VW (day-level forecast) to RMS/object store as a flat file, if day-level forecast is generated
  12. Run the W_RTL_PLANFC_PROD2_LC2_T2_F_JOB job in POM to import approved forecasts into RI (this job is in the RI_FORECAST_CYCLE jobs)
    • Note that W_RTL_PLANFC_PROD2_LC2_T2_F gets the source data from RSE_RDF_APPR_FCST_VW
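The row-count sketch below covers the Step 9 validation; the table names are taken from the steps above, and the query assumes read access to the rdx01 schema (the Data Visualization tool or any SQL client with that access will do):

    -- Non-zero counts indicate that each export populated its table.
    SELECT 'rse_fcst_run_type_config' AS tbl, COUNT(*) AS row_cnt
      FROM rdx01.rse_fcst_run_type_config
    UNION ALL SELECT 'rse_fcst_run_hdr_exp', COUNT(*)
      FROM rdx01.rse_fcst_run_hdr_exp
    UNION ALL SELECT 'rse_fcst_run_config', COUNT(*)
      FROM rdx01.rse_fcst_run_config
    UNION ALL SELECT 'rse_fcst_appr_base_dmd_exp', COUNT(*)
      FROM rdx01.rse_fcst_appr_base_dmd_exp
    UNION ALL SELECT 'rse_fcst_demand_dtl_exp', COUNT(*)
      FROM rdx01.rse_fcst_demand_dtl_exp
    UNION ALL SELECT 'rse_fcst_demand_src_exp', COUNT(*)
      FROM rdx01.rse_fcst_demand_src_exp
    UNION ALL SELECT 'rse_fcst_demand_dtl_cal_exp', COUNT(*)
      FROM rdx01.rse_fcst_demand_dtl_cal_exp;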

Using the Add Multiple Run Types Feature

An "Add Multiple Run Types" feature is available in the Setup train stop within the Manage Forecast Configurations screen. To add, edit, or delete the rows visible within the "Add Multiple Run Types" table, edit the two tables RSE_CUSTOM_PROCESS_HDR and RSE_CUSTOM_PROCESS_DTL, available in the Manage System Configurations screen.
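Before editing, the current contents of the two tables can be reviewed; as a sketch (SELECT * is used because the column layouts are not documented here):

    -- Review the custom process definitions behind Add Multiple Run Types.
    SELECT * FROM rse_custom_process_hdr;
    SELECT * FROM rse_custom_process_dtl;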

Building Alternate Hierarchy in AIF

AIF has the ability to build alternate location and alternate product hierarchies. Alternate hierarchy information should be available in RI. RSE_ALT_HIER_TYPE_STG and RSE_ALT_HIER_LEVEL_STG are the two relevant tables available in the Manage System Configurations screen. It is possible to add, edit, or delete rows in these tables. Information about the alternate hierarchy types and levels must be provided through these tables (a review sketch of the staged rows follows). Then, the alternate hierarchy jobs must be executed to generate the alternate hierarchies in AIF. Table 13-12 lists the relevant jobs.
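As a review sketch, the staged hierarchy definitions can be inspected before running the jobs (SELECT * is used because the column layouts are not documented here):

    -- Review the alternate hierarchy types and levels provided so far.
    SELECT * FROM rse_alt_hier_type_stg;
    SELECT * FROM rse_alt_hier_level_stg;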

Table 13-12 Alternate Hierarchy Jobs

JobName Description RMSBatch
RSE_ALT_HIER_SETUP_START_JOB Start Job for Alternate Hierarchy Type Setup rse_process_state_update.ksh
RSE_ALT_HIER_LOAD_JOB Setup alternate hierarchy types by loading data from RSE_ALT_HIER_TYPE_STG table rse_alt_hier_load.ksh
RSE_ALT_HIER_SETUP_END_JOB End Job for Alternate Hierarchy Type Setup rse_process_state_update.ksh
RSE_ALT_LOC_HIER_START_JOB Start Job for Alternate Location Hierarchy Load rse_process_state_update.ksh
RSE_ALT_LOC_HIER_SRC_XREF_LOAD_JOB Load the RSE_LOC_SRC_XREF table with alternate location hierarchy data rse_alt_loc_hier_process.ksh
RSE_ALT_LOC_HIER_LOAD_JOB Load the RSE_LOC_HIER table with alternate location hierarchy data rse_alt_loc_hier_load.ksh
RSE_ALT_LOC_HIER_TC_LOAD_JOB Load the RSE_LOC_HIER_TC table with alternate location hierarchy data rse_alt_loc_hier_tc_load.ksh
RSE_ALT_LOC_HIER_DH_LOAD_JOB Load the RSE_LOC_HIER_DH table with alternate location hierarchy data rse_alt_loc_hier_dh_load.ksh
RSE_ALT_LOC_HIER_END_JOB End Job for Alternate Location Hierarchy Load rse_process_state_update.ksh
RSE_ALT_PROD_HIER_START_JOB Start Job for Alternate Product Hierarchy Load rse_process_state_update.ksh
RSE_ALT_PROD_HIER_SRC_XREF_LOAD_JOB Load the RSE_PROD_SRC_XREF table with alternate product hierarchy data rse_alt_prod_hier_process.ksh
RSE_ALT_PROD_HIER_LOAD_JOB Load the RSE_PROD_HIER table with alternate product hierarchy data rse_alt_prod_hier_load.ksh
RSE_ALT_PROD_HIER_TC_LOAD_JOB Load the RSE_PROD_HIER_TC table with alternate product hierarchy data rse_alt_prod_hier_tc_load.ksh
RSE_ALT_PROD_HIER_DH_LOAD_JOB Load the RSE_PROD_HIER_DH table with alternate product hierarchy data rse_alt_prod_hier_dh_load.ksh
RSE_ALT_PROD_HIER_END_JOB End Job for Alternate Product Hierarchy Load rse_process_state_update.ksh
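Before running these jobs, the staging tables must describe the alternate hierarchy. As a rough sketch only (the staging table layouts are not documented here, so the column names below are illustrative assumptions; use the Manage System Configurations screen to see the actual columns):

-- Hypothetical example: register an alternate product hierarchy type and one level.
-- Column names are assumptions for illustration only.
INSERT INTO rse_alt_hier_type_stg (hier_type_name, hier_type_descr, delete_flg)
VALUES ('ALT_PROD_BRAND', 'Alternate product hierarchy by brand', 'N');

INSERT INTO rse_alt_hier_level_stg (hier_type_name, level_num, level_name)
VALUES ('ALT_PROD_BRAND', 1, 'BRAND');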

To perform forecasting in AIF using the created alternate location (product) hierarchy, the PMO_LOC_HIER_TYPE (PMO_PROD_HIER_TYPE) parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to the ID of that alternate hierarchy type.
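For example, a sketch of this update, assuming the RSE_CONFIG columns APPL_CODE, PARAM_NAME, and PARAM_VALUE used throughout this chapter and a hypothetical alternate hierarchy ID of 4 (the same change can be made directly in the Manage System Configurations screen):

-- Point PMO forecasting at an alternate product hierarchy type (the ID 4 is illustrative).
UPDATE rse_config
   SET param_value = '4'
 WHERE appl_code = 'PMO'
   AND param_name = 'PMO_PROD_HIER_TYPE';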

If you need to delete an alternate hierarchy, perform the following steps. If you are going to restructure an alternate hierarchy with different levels and mappings for the same hierarchy type, you must delete the existing hierarchy first; attempting to restructure a hierarchy without deleting it can result in data issues and job failures.
  1. Update the delete flag to Y on the row in the RSE_ALT_HIER_TYPE_STG table corresponding to the alternate hierarchy being deleted.

  2. Delete all the level rows from the RSE_ALT_HIER_LEVEL_STG table corresponding to the alternate hierarchy being deleted. (A SQL sketch of these two steps follows this list.)

  3. Run the processes/jobs that rebuild the main and alternate hierarchies, in the same way you previously did to build the hierarchies.
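A minimal SQL sketch of steps 1 and 2, reusing the illustrative column names from the earlier sketch (verify the actual columns in the Manage System Configurations screen):

-- Step 1: flag the alternate hierarchy type for deletion (column names are assumptions).
UPDATE rse_alt_hier_type_stg
   SET delete_flg = 'Y'
 WHERE hier_type_name = 'ALT_PROD_BRAND';

-- Step 2: remove the corresponding level rows.
DELETE FROM rse_alt_hier_level_stg
 WHERE hier_type_name = 'ALT_PROD_BRAND';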

If you are working with standalone jobs, use RSE_MASTER_ADHOC_JOB to run the hierarchy loads again (using the -X flag for the alternate hierarchy load, plus others as needed). For batch runs, the following jobs include the deletion handling, based on the configurations above. The deletion takes effect only after these jobs run, so you cannot delete and rebuild the same hierarchy in the same batch cycle.

Location Hierarchies:

RSE_ALT_LOC_HIER_START_JOB

RSE_ALT_LOC_HIER_SRC_XREF_LOAD_JOB

RSE_ALT_LOC_HIER_LOAD_JOB

RSE_ALT_LOC_HIER_TC_LOAD_JOB

RSE_ALT_LOC_HIER_DH_LOAD_JOB

RSE_ALT_LOC_HIER_END_JOB

Product Hierarchies:

RSE_ALT_PROD_HIER_START_JOB

RSE_ALT_PROD_HIER_SRC_XREF_LOAD_JOB

RSE_ALT_PROD_HIER_LOAD_JOB

RSE_ALT_PROD_HIER_TC_LOAD_JOB

RSE_ALT_PROD_HIER_DH_LOAD_JOB

RSE_ALT_PROD_HIER_END_JOB

The alternate hierarchy ad hoc steps and nightly jobs will do the following cleanup:
  1. The delete flag in the RSE_ALT_HIER_TYPE table will be set to Y.
  2. The corresponding row in the RSE_HIER_TYPE table will be deleted.
  3. The corresponding rows in the RSE_ALT_HIER_LEVEL table will be deleted.
  4. The corresponding rows in the RSE_HIER_LEVEL table will be deleted.
  5. The corresponding partition in the RSE_LOC_HIER or RSE_PROD_HIER tables will be dropped.
  6. The corresponding partition in the RSE_LOC_HIER_DH or RSE_PROD_HIER_DH tables will be dropped.
  7. The corresponding partition in the RSE_LOC_HIER_TC or RSE_PROD_HIER_TC tables will be dropped.
  8. The corresponding partition in the RSE_LOC_SRC_XREF or RSE_PROD_SRC_XREF table will be dropped.

After the process is run a second time, the remaining corresponding row in the RSE_ALT_HIER_TYPE table whose delete flag had been set to Y is then deleted.

Custom Jobs Through Innovation Workbench (IW)

Custom processes can be created through IW. AIF provides three built-in custom jobs that can be used to set up custom steps that are executed before the estimation runs for forecasting. Anything that can enhance the estimation results should be set up using these jobs. Table 13-13 lists the relevant jobs.

Table 13-13 Custom Jobs for Pre-Estimation Custom Processes

JobName Description RMSBatch
PMO_PRE_ESTIM_CUSTOM_START_JOB Start job for PMO pre-estimation custom process rse_process_state_update.ksh
PMO_PRE_ESTIM_CUSTOM_1_JOB PMO pre-estimation custom job 1 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_2_JOB PMO pre-estimation custom job 2 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_3_JOB PMO pre-estimation custom job 3 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_END_JOB End job for PMO pre-estimation custom process rse_process_state_update.ksh
PMO_PRE_ESTIM_CUSTOM_1_ADHOC_JOB PMO pre-estimation custom ad hoc job 1 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_2_ADHOC_JOB PMO pre-estimation custom ad hoc job 2 rse_custom_job.ksh
PMO_PRE_ESTIM_CUSTOM_3_ADHOC_JOB PMO pre-estimation custom ad hoc job 3 rse_custom_job.ksh

Here are the steps to get the custom jobs working:

  1. In IW, create a package and procedure for the process that needs to run as a custom job. After logging in to Innovation Workbench -> Manage Workbench -> SQL Workshop, click "Package" under "Create Object" on the right-hand side, then create the package specification and body, and create the procedure inside the package. (A sketch follows these steps.)

  2. Using IW or the Manage System Config screen, modify the table RSE_CUSTOM_JOB_CFG and edit values for the following columns:

    1. JOB_NAME: This value is not editable and indicates a placeholder where a custom process can be added.

    2. PACKAGE_NAME: Enter the name of the package that was created in IW.

    3. PROCEDURE_NAME: Enter the name of the procedure that was created in IW.

    4. PROCEDURE_DESCR: Enter a description if desired.

    5. RUN_TIME_LIMIT: The run time limit is 900 seconds by default. It can be changed to a smaller value but not a larger one. If the custom process runs longer than the value indicated in RUN_TIME_LIMIT when running as part of the batch process, the custom process is stopped and the batch moves on to the next job/process.

    6. CONNECTION_TYPE: Valid values are LOW and MEDIUM. This value should almost always be LOW unless the job is supposed to run a process that would need multiple threads. HIGH is not a valid value. If HIGH is entered, it will switch to LOW by default when the job runs.

    7. ENABLE_FLG: Set this value to Y to indicate that this job should be executed as part of the batch process.
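For reference, here is a minimal sketch of step 1; the package and procedure names are hypothetical, and the body is a placeholder for your own pre-estimation logic:

-- Hypothetical IW package; replace the procedure body with custom logic
-- that prepares or enhances data before the estimation run.
CREATE OR REPLACE PACKAGE my_custom_pkg AS
  PROCEDURE pre_estim_adjust;
END my_custom_pkg;
/
CREATE OR REPLACE PACKAGE BODY my_custom_pkg AS
  PROCEDURE pre_estim_adjust IS
  BEGIN
    NULL; -- placeholder: custom pre-estimation work goes here
  END pre_estim_adjust;
END my_custom_pkg;
/

Step 2 could then be performed with an update such as the following sketch, which assumes the columns listed above and registers the hypothetical package against the first custom job slot; the same edit can be made through the Manage System Config screen:

-- Register the hypothetical package/procedure for custom job 1.
UPDATE rse_custom_job_cfg
   SET package_name    = 'MY_CUSTOM_PKG',
       procedure_name  = 'PRE_ESTIM_ADJUST',
       procedure_descr = 'Custom pre-estimation adjustment',
       run_time_limit  = 600,          -- must not exceed the 900-second default
       connection_type = 'LOW',
       enable_flg      = 'Y'
 WHERE job_name = 'PMO_PRE_ESTIM_CUSTOM_1_JOB';  -- assumed key; verify in the screen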

Purge Forecast Run and Run Type Data

Forecast run data can be purged through both batch and ad hoc jobs. The parameters that control this purge process are FCST_PURGE_RETENTION_DAYS and FCST_PURGE_ALL_RUNS in the RSE_CONFIG table (APPL_CODE = RSE), available in the Manage System Configurations screen. The ad hoc job can take an input that acts as an override to the FCST_PURGE_RETENTION_DAYS value. When FCST_PURGE_ALL_RUNS = Y, both the batch and ad hoc jobs delete data for all forecast runs, irrespective of the FCST_PURGE_RETENTION_DAYS value.

Forecast run type data can be purged through the ad hoc job. The parameters that control this purge process are FCST_TY_PURGE_RETENTION_DAYS and FCST_TY_PURGE_ALL_RUNS in the same table. The ad hoc job can take an input that acts as an override to the FCST_TY_PURGE_RETENTION_DAYS value. When FCST_TY_PURGE_ALL_RUNS = Y, the ad hoc job deletes data for all forecast run types, irrespective of the FCST_TY_PURGE_RETENTION_DAYS value. Table 13-14 lists the relevant jobs.

Table 13-14 Purge Jobs for Forecast Run and Run Type Data

JobName Description RMSBatch
RSE_PURGE_RUN_START_JOB Start job for purging run data rse_process_state_update.ksh
RSE_PURGE_RUN_JOB Purge job for run data rse_purge_run_data.ksh
RSE_PURGE_RUN_END_JOB End job for purging run data rse_process_state_update.ksh
RSE_PURGE_RUN_ADHOC_JOB Ad hoc purge job for run data rse_purge_run_data.ksh
RSE_PURGE_RUN_TYPE_ADHOC_JOB Ad hoc purge job for run type data rse_purge_run_type_data.ksh
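To review the current purge settings before running these jobs, a simple query against RSE_CONFIG (a sketch using the APPL_CODE/PARAM_NAME/PARAM_VALUE columns referenced throughout this chapter) might look like this:

SELECT param_name, param_value
  FROM rse_config
 WHERE appl_code = 'RSE'
   AND param_name IN ('FCST_PURGE_RETENTION_DAYS', 'FCST_PURGE_ALL_RUNS',
                      'FCST_TY_PURGE_RETENTION_DAYS', 'FCST_TY_PURGE_ALL_RUNS');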

Configurations in AIF to Generate Forecasts for FACT Measures

Complete the following steps to configure the generation of forecasts for FACT measures.

  1. Configure data in RSE_FCST_DFLT_RUN_TYPE_PARAM.

    The pre-configured measures for forecasting are specified in RSE_FCST_DFLT_RUN_TYPE_PARAM under the param_name 'FORECAST_MEASURE'. To configure a new forecast measure (for example, shrink qty), the user must insert the value using the System Configuration screen under Control and Tactical Center.

    For example, a row with the following values for (PARAMETER_NAME, PARAMETER_VALUE, DESCR, UI_PARAMETER_LABEL, DISPLAY_ORDER, CURRENT_FLG, DELETE_FLG) must be inserted:

    'FORECAST_MEASURE', 'SHRINK_QTY', 'Shrink Qty', 'Shrink Qty', '13', 'Y', 'N'

  2. Configure data in RSE_MD_FACT_COLUMN_MAP.

    To generate a forecast for FACT measures, data must flow into the PMO_ACTIVITIES table. If the FACT measure is one of the forecast measures that are available in the GA forecast engine, there is no need for any configuration in RSE_MD_FACT_COLUMN_MAP. The GA measures can be found in RSE_FCST_DFLT_RUN_TYPE_PARAM under the param_name 'FORECAST_MEASURE'.

    If the FACT measure is not one of the GA measures, it must be configured in RSE_MD_FACT_COLUMN_MAP.

    For example, if the user wants to forecast shrink quantity, the FACT measure SHRINK_QTY must be loaded to one of the FLEX columns in PMO_ACTIVITIES. The following row must be inserted into RSE_MD_FACT_COLUMN_MAP using the System Configuration UI.

    INSERT INTO rse_md_fact_column_map (source_table, source_column, target_table, target_column)
    VALUES ('W_RTL_FACT1_PROD1_LC1_T1_F', 'SHRINK_QTY', 'PMO_ACTIVITIES', 'FLEX_MEAS_VALUE1');

  3. Configure data in RSE_FCST_RUN_TYPE_MTHD_PARAM.

    The forecast engine supports a few different forecast methods, and each method can be used to forecast certain measures. This mapping is configured in RSE_FCST_RUN_TYPE_MTHD_PARAM. FACT measures can be forecasted using Automatic Exponential Smoothing, Causal Long-Life Cycle, and Causal Short-Life Cycle methods. Note that all the required data for the method that is being used must be provided via the interface for FACT measures.

    For example, to forecast SHRINK_QTY using Automatic Exponential Smoothing, the following row must be inserted in RSE_FCST_RUN_TYPE_MTHD_PARAM using the System Configurations UI.

    INSERT INTO rse_fcst_run_type_mthd_param (forecast_method, forecast_measure, descr, forecast_measure_ui_label, forecast_method_ui_label)
    VALUES ('AUTO_ES', 'SHRINK_QTY', 'Shrink Qty', 'Shrink Qty', 'Automatic Exponential Smoothing');
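    After both inserts, a quick check (a sketch reusing the column names from the statement above) can confirm that the mapping exists:

    SELECT forecast_method, forecast_measure
      FROM rse_fcst_run_type_mthd_param
     WHERE forecast_measure = 'SHRINK_QTY';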

POM Jobs

The following table shows the jobs that must be run to aggregate data (as required) and populate PMO_ACTIVITIES.

Table 13-15 POM Jobs

JobName Description RmsBatch ProcessName
PMO_ACTIVITY_AGGR_MEAS_SETUP_JOB PMO aggregate measures load setup job pmo_activity_aggr_meas_setup.ksh PMO_ACTIVITY_LOAD_PROCESS
PMO_ACTIVITY_AGGR_MEAS_PROCESS_JOB PMO aggregate measures load process job pmo_activity_aggr_meas_process.ksh PMO_ACTIVITY_LOAD_PROCESS

Data Requirements for AIF

AI Foundation has a wide range of capabilities and solutions, each with its own specific data needs. While the files themselves are common across the Retail Analytics and Planning solutions, which fields you provide data for requires careful consideration. This section covers the four most critical data areas that must be discussed with customers as part of requirements and design. This data directly impacts the results across your implementation and determines which RAP features you can benefit from, both now and in the future. This section is not a replacement for the Implementation Guide or the Interface Guide, which provide more detailed requirements for each cloud service. Refer to the detailed documentation before you begin developing the data files.

Data Definitions for Core Fact Files

This section describes the data definitions for core fact files.

SALES.csv

Sales data represents sales transactions resulting from customer purchases and returns. Sales data can be at varying levels of detail, depending on the solutions used, ranging from item/loc/week summary data to the granular transaction-line details. This is the core fact used across all modules and it has many variations, as defined below. How you define your sales data will affect both current and future implementations of any AIF solutions, so you must consider your product roadmap when deciding on the level of detail to provide here. If you provide pre-aggregated data, for example, then it will prevent you from using modules that require transaction data without doing a re-implementation of your data loads and integrations. Any additional effort spent on the data up front can result in significant savings on implementation costs for future projects.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Sales Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

The minimum data for forecasting has several types, depending on the use case. From least to most detailed, they are:
  1. Item/loc/week or item/loc/day aggregated sales

  2. Item/loc/week or item/loc/day transactional sales

  3. Item/loc/week or item/loc/day promotion-linked transactional sales

  4. Item/loc/week or item/loc/day customer-linked transactional sales

If you are only implementing basic Retail Demand Forecasting (RDF) or Offer Optimization (OO) forecasting and do not require transaction-level details, then you can simply provide pre-summed item/loc/day or week sales totals. Detailed forecast method breakdowns are provided in later sections to clarify requirements by forecast type. If you only want to forecast units, then you can simply provide SLS_QTY for the gross sales units. SLS_TRX_ID can be defaulted to a unique value to differentiate the rows, as it is part of the table key; you might have only a single row per item/loc/week/retail type, with one value of SLS_TRX_ID per data intersection.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,SLS_QTY,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,0000,R,12345678,112,USD,USD

If you are forecasting based on sales amounts or net sales/returns, then you can also include the additional columns for those values, which must still be at a day or week aggregated level.

If you need to perform more advanced forecasting methods (Halo and Cannibalization feature) or you are using transaction-level AIF applications (Customer Decision Trees, Customer Segmentation), then you must provide the actual sales transaction information, not the aggregated values. In this instance, the SLS_TRX_ID is a critical field that must represent the true POS transaction identifier to uniquely identify one customer basket/sale.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,SLS_QTY,SLS_AMT_LCL,RET_QTY,RET_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,1700,R,21013172971,1,124,0,0,USD,USD
1234799,2072,20210131,1700,R,21013172971,1,55,0,0,USD,USD
1234884,2072,20210131,1700,R,21013172971,2,70,0,0,USD,USD
1235584,2072,20210131,1700,R,21013172971,0,0,1,99,USD,USD

This represents a single customer purchase in which the customer bought three items and returned one. The SLS_TRX_ID is the same for all transaction lines belonging to this one customer sale. If your POS system does not have a unique transaction ID by itself, then you may have to concatenate other parts onto it (that is, take the POS tran number + loc number + register + date), as in the sketch below. A single day's sales file to AIF will have many such lines for the same item/loc for every sale or return transaction that happened that day. This must match your POS system. Do not default the SLS_TRX_ID to some other value or the AIF results will not be accurate.
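A sketch of such a concatenation in SQL (the source table and column names are illustrative assumptions; use whatever your POS extract provides):

-- Build a unique SLS_TRX_ID when the POS tran number alone is not unique.
SELECT pos_tran_num || '-' || loc_num || '-' || register_num || '-'
       || TO_CHAR(tran_date, 'YYYYMMDD') AS sls_trx_id
  FROM pos_transactions;  -- hypothetical source table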

If you are using any of the promotion components of forecasting, such as promotion lift or estimation, then the sales data must be linked to the specific promotion IDs that occurred on them. This comes as two fields for the promotion ID and individual offer/component ID of the discount or price change in the promotion. One transaction line can have multiple promotions on it, which come as nearly identical lines of sales that differ only by promotion IDs.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,PROMO_ID,PROMO_COMP_ID,SLS_QTY,SLS_AMT_LCL,RET_QTY,RET_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,1700,P,21013172971,2498,12,1,124,0,0,USD,USD
1234799,2072,20210131,1700,P,21013172971,2498,12,1,55,0,0,USD,USD
1234799,2072,20210131,1700,P,21013172971,4693,111,1,55,0,0,USD,USD
1235584,2072,20210131,1700,R,21013172971,-2,-3,0,0,1,99,USD,USD
1234781,2072,20210131,1800,R,21013173592,-2,-3,2,248,0,0,USD,USD

This example shows two different promotions, and one line item that had two promotions on it in the same transaction. Note that non-promotional sales use the default values of -2 and -3, respectively. These are the standard values the platform uses when a promotion is not known or not used. If you are providing promotion identifiers, you will also be expected to provide other files relating to promotions, such as PROMOTION.csv. The Oracle Retail Analytics and Planning Cloud Services Data Interfaces provides the complete details on other promotion files and their requirements.

The last of the minimum required datasets is customer-linked transaction data, required if you are also using AIF modules such as CDT, DT, Segmentation, or segment-based forecasting/optimizations. You must provide the unique customer identifier for each sales transaction that is linked to a known customer in the source system.

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,CUSTOMER_NUM,SLS_QTY,SLS_AMT_LCL,RET_QTY,RET_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,1700,R,21013172971,112193,1,124,0,0,USD,USD
1234884,2072,20210131,1700,R,21013172971,112193,2,70,0,0,USD,USD
1235584,2072,20210131,1700,R,21013172971,112193,0,0,1,99,USD,USD
1234781,2072,20210131,1800,R,21013173592,33986,2,248,0,0,USD,USD

Field-Level Data Variations

As mentioned previously, DAY_DT can contain daily dates or week-ending dates, depending on the requirements. If you are providing weekly files, it is fine to still provide the daily DAY_DT values for the days the transactions occurred, as long as all the data is provided in one week-ending file and no data is provided on other days of the week. (This is what is meant by an item/loc/week transaction file: it is a weekly file of accumulated daily transactions.) When using actual transaction-level data, however, it is best to provide the data every day due to the data volumes involved; providing one file only at the end of the week can cause the weekly batch processing to run too long.

SLS_TRX_ID can be defaulted to a random/unique value if aggregated data is being used, but it must be the true POS customer transaction identifier if transaction level data is being used, as defined in the minimum requirement. It must always be provided as it is part of the primary key on the table.

RTL_TYPE_CODE allows values of R/P/C/O for regular/promotion/clearance/other sales, but not all types are required for all solutions. In some simple RDF implementations, only Regular/R sales can be used to start if promotions/clearances do not matter to you. Note that you must delete and reload the sales history later if you need to change the retail types.

IT_SEQ_NUM is a situational column that you can provide if you are sending raw POS transactions directly from the source. This value uniquely identifies line numbers on a transaction when the same item could be scanned repeatedly. A unique value is required here in such cases or the duplicate rows will be discarded.

REVISION_NUM is a situational column that helps to track the number of revisions made to the same transaction over time. If you are providing aggregated data, then this field is not required. If you are providing transaction data with meaningful SLS_TRX_ID values, then you can also provide REVISION_NUM starting from one to track the number of revisions done to a transaction, as compared to the source system. Each time you post an update to a transaction, increment the number by one, and this will be updated on the transaction record.

SLS_PROFIT_AMT_LCL and RET_PROFIT_AMT_LCL may be relevant to you as additional measure columns if you require any measurement of cost. There are no cost fields in the SALES.csv file; you are instead expected to provide the sales profit/margin amounts associated with the transaction. The platform derives cost-related measures internally by subtracting the profit from the total sales amount when needed, such as when populating Sales Cost for RPASCE.

RECEIPT.csv

Inventory receipts data refers to any movement of stock into a location, which can be the result of any allocation, transfer, or PO shipment accepted into that location. Receipts data for AIF drives the first/last receipt dates by item/location, as described in the data requirements in a later section. The quantity and type of receipts do not matter to forecasting; they matter only to RI reporting and applications that directly use receipts (IO, MFP, AP).

The first and last receipt dates are calculated values indicating the first and last time a receipt transaction occurred against an item/location. You do not directly provide these calculated measures; you provide the transactions when receipts occur, and the platform maintains the calculations for the first and last receipt dates. For example, a new item gets a receipt on 5/3/2023, so you post a receipt transaction with this date for that item/location. During the data load, the RAP interface takes the inventory update, which should reflect the change to stock on hand, and the receipt update, which indicates the date the receipt occurred, and calculates that the first receipt date and last receipt date should now both be 5/3/2023. If another receipt arrives later, the new inventory and receipt records are used to set the last receipt date to that new date, while the first receipt date remains unchanged.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Receipts Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

For forecasting at item/loc/week, it is sufficient to provide a single receipt record for any item/loc having a receipt during that week so that the first and last receipt date is updated accordingly. You do not need every possible receipt between the first and last if you are only interested in establishing the first/last receipt dates for an item/loc. You can leave out in-between records as long as you still align the receipt and inventory load time ranges. The type and quantity do not matter to forecasting. Forecasting only needs to know that inventory entered that location for that item. For example, this might be the only entry for an item/loc/week:
ITEM,ORG_NUM,DAY_DT,INVRC_TYPE_CODE,LOC_CURR_CODE,DOC_CURR_CODE

30025,1183,20210123,20,USD,USD

Field-Level Data Variations

DAY_DT can contain daily dates or week-ending dates. In either case, it is aggregated to the week level for the purpose of weekly forecast usage. The data may come only on end-of-week dates in batch, with no data for this file on all other days. If you require daily data, then you can send updates every day, or accumulate all the updates and provide one end-of-week file containing all the daily receipt activity. If you only send end-of-week data, then your inventory must likewise be end-of-week only, so that the two files are aligned. Inventory and receipts must be aligned: it does not make sense for a receipt to occur with no inventory update posted. Both must be either daily files or weekly files.

The INVRC_TYPE_CODE value only matters to RI, IO, and MFP. You can default it to some value (for example, 44~T) for all rows if not required in those applications. Type 20 (PO receipts) typically happen at warehouses and are required to accurately represent WH receipts. Type 44~T are transfer receipts. This is most common for store locations as it is the receipt of any inventory as a result of a transfer between two locations. The source can be a WH or another store. Type 44~A is an optional sub-type for receipts that occurred due to allocations from the WH to the store. This type matters only to RI reporting at this time.

INVENTORY.csv

Inventory positions (that is, stock on hand) capture the current stock levels at all owned locations. Any time a transaction occurs involving stock on hand (sales, receipts, price changes, and so on.), an update to the inventory position must also occur to post the new stock level and related measures to AIF.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Inventory Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

For forecasting at item/loc/week, it is sufficient to provide a single item/loc/week record for each item/loc on the week-ending date. The on-hand quantity (INV_SOH_QTY) is required, as it is used in the base demand calculation. INV_IN_TRAN_QTY and INV_ON_ORD_QTY are not strictly required, but they contribute to the total inventory quantity for forecasting purposes, and some applications, such as OO and IO, expect them, based on the solution requirements. The data must exactly match the receipt activity, as any receipt is assumed to create a change to inventory. Conversely, if you are providing the file, any change to inventory must be the result of some transaction, such as a sale, adjustment, receipt, or RTV.
ITEM,ORG_NUM,DAY_DT,CLEARANCE_FLG,INV_SOH_QTY,LOC_CURR_CODE,DOC_CURR_CODE

30025,1183,20210123,N,5,USD,USD
30027,1112,20210123,N,0,USD,USD

Field-Level Data Variations

For history data, DAY_DT is always a week-ending date; it cannot be daily. Receipts history can be daily or weekly, and any receipt dates will be used by the weekly history loads to determine first/last receipt dates.

For ongoing batch files, the data can come every day or end-of-week only. If the data comes weekly, then it must be the final week-ending positions of all item/locs that had any change during the week. You must only send one record per item/loc/week with the final ending stock on hand. DAY_DT cannot be different on different rows in the file; it must always be the current day on all rows or the week-ending date on all rows. Because of the first/last receipt date calculations discussed regarding the receipts file, the RECEIPT and INVENTORY files must be in sync for ongoing loads. That is, both must send data daily or weekly, and you must not mix them. If you do not care about the receipt date calculations because you are not using those forecasting methods and never will, then this requirement can be ignored and you can send daily receipts but weekly inventory. However, this is not recommended.

The CLEARANCE_FLG is not a part of the primary key in the data. You cannot have multiple records per item/loc with different flags. However, it is a required field, as the inventory status (regular or clearance) is necessary for many of the applications. If you are only implementing an application such as RDF and do not plan by clearance/non-clearance separately, you can default the value to N on all rows.

The data must include zero-balance INV_SOH_QTY rows in some cases. For example, if a receipt occurred but the item immediately sold out, an INV_SOH_QTY=0 record must still be provided for that day or week because it is required to link with the receipt activity. If an inventory balance moved from non-zero to zero in that day or week, the zero balance record is also required to come once so that there is an inventory level update indicating it went out of stock. Other kinds of changes in the source can also generate an inventory record update even for a zero balance, such as changes to the unit cost/retail price requiring an update to some fields on this interface. Refer to the Oracle Retail Analytics and Planning Implementation Guide for additional details on building the inventory file and zero-balance needs in history and ongoing data.

If you are familiar with RMFCS (Merchandising Foundation Cloud Service), inventory is extracted from that system by looking at all transaction activity for the day and pulling inventory for every item/loc with transactions of any kind. Even unrelated transaction types trigger inventory pulls to ensure that the daily positions in RAP are up to date for all fields.

PRICE.csv

The price data represents price or cost change events for an item/location. The data is only for executed events, not future planned events. Typically, these will include the same price events that are pushed out to stores to update the selling prices for all items.

This section provides a high-level summary of the main considerations for the file. More details and requirements are available in the “Price Data Requirements” section of the Data File Generation chapter of the Oracle Retail Analytics and Planning Implementation Guide.

Minimum Data (Forecasting)

For forecasting at item/loc/week, it is sufficient to provide any new prices and price changes that are effective on the week-ending date per item/loc. Both SELLING_UNIT_RTL_AMT_LCL and BASE_COST_AMT_LCL must be provided on all the records, unless you are only implementing SPO or RDF, in which case only SELLING_UNIT_RTL_AMT_LCL is required.
ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

83074502,2030,20200208,0,103.76,49.34,USD,USD

Field-Level Data Variations

DAY_DT can be daily or weekly. In either case, it is aggregated to week level for the purpose of weekly forecast usage. The history data can also be daily or weekly, which is different from inventory. Daily price changes are loaded from history files exactly as provided for the date they occurred. You only provide a record when a price change event happens for an item/loc.

For ongoing batch files, the data can come every day or only at end-of-week. If the data comes weekly, then it must be the final week-ending positions of all item/locs that had any change during the week. You must send only one record per item/loc/week with the final ending price. DAY_DT cannot be different on different rows in the file; it must always be the current day on all rows or the week-ending date on all rows. If you know that multiple price changes will occur within one week, then you must send updates every day in order to capture each price change as it happens. For example, if a promotion or clearance happens in the middle of the week and ends two days later, sending only a weekly price will not capture those price changes; you would send only the week-ending regular price. In that case, you must plan to send daily price updates.

PRICE_CHANGE_TRAN_TYPE must be managed correctly to capture the type of price changes for an item/loc. The typical flow of price changes for an item/location is as follows:

  1. New price/cost (PRICE_CHANGE_TRAN_TYPE=0)
  2. Regular cost changes (PRICE_CHANGE_TRAN_TYPE=2)
  3. Regular price changes (PRICE_CHANGE_TRAN_TYPE=4)
  4. Promotional/temporary markdowns (PRICE_CHANGE_TRAN_TYPE=9)
  5. Clearance markdowns (PRICE_CHANGE_TRAN_TYPE=8)
  6. Price reset due to new selling cycle or season change (PRICE_CHANGE_TRAN_TYPE=0)

All item/locations must have an initial price when the item is created (type 0). An item/location can then have many changes with types 2/4/8/9 before eventually staying at a type of 8 (for a final markdown) or resetting to a type of 0 (if the item lifecycle must restart).
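For illustration, that lifecycle might appear in PRICE.csv as the following sequence of records for one item/location (dates and amounts are made up for this example):

ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE

83074502,2030,20200208,0,103.76,49.34,USD,USD
83074502,2030,20200321,2,103.76,51.10,USD,USD
83074502,2030,20200404,4,99.95,51.10,USD,USD
83074502,2030,20200516,9,79.96,51.10,USD,USD
83074502,2030,20200613,8,49.99,51.10,USD,USD
83074502,2030,20200905,0,105.00,52.25,USD,USD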

Data Requirements for Various Forecast Methods

This section describes the data requirements for various forecast methods.

SALES.csv

Forecast Method: Causal-LLC

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of two years, as well as on an ongoing basis.

  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.
  2. If you want to generate customer segment-level forecasts, then you must provide the customer-linked transaction sales data. The retailer does not have to provide the data aggregated to the segment level but must provide the customer ID and customer segment mapping as part of the customer segment interface.
  3. If the user wants to forecast by net sales units, then sales returns must also be provided so that the system can calculate the net sales units. As a general rule, for items that exhibit very low return rates (less than 10 percent), you do not have to provide this additional data.
  4. Regular and Promotional Sales sent via this interface are used to generate forecasts on these measures.
  5. Transaction level data is required for using the Halo and Cannibalization feature.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

Forecast Method: Causal-SLC

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of one year plus one additional season, which for most retailers is close to 18 to 24 months, as well as on an ongoing basis. It is recommended to provide sales history for two years.

  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.
  2. If you want to generate customer segment-level forecasts, then you must provide the customer-linked transaction sales data. The retailer does not have to provide the data aggregated to the segment level but must provide the customer ID and customer segment mapping as part of the customer segment interface.
  3. If the user wants to forecast by net sales units, then sales returns must also be provided so that the system can calculate the net sales units. The returns data identifies the original location where the item was purchased and when it was purchased. This information is useful in returns analysis and in calculating the returns parameters.
  4. Regular and Promotional Sales sent via this interface are used to generate forecasts on these measures. Note that OO supports optimization on the Total Gross Sales Units or Total Net Sales Units with this forecasting method.
  5. Transaction level data is required for using the Halo and Cannibalization feature.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

Forecast Method: Automatic Exponential Smoothing

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of two years, as well as on an ongoing basis.
  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.

  2. If the user wants to forecast returns, then this must also be provided.

  3. Regular Sales, Promotional Sales, Clearance Sales, and Other Sales are also sent via this interface. These are required to generate forecasts on these measures.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

Forecast Method: Hierarchical Bayesian

This forecast method requires data from SALES.csv. Sales data at the item/location level is required to run this method. The requirement is to provide sales history for a minimum of one year, as well as on an ongoing basis. It is recommended to provide sales history for one year plus one additional season length, which for most retailers is close to 18 to 24 months.

  1. When extreme exogenous situations occur, enough sales history data is required on either side of the event.
  2. If you want to generate customer segment-level forecasts, then you must provide the customer-linked transaction sales data. The retailer does not have to provide the data aggregated to the segment level but must provide the customer ID and customer segment mapping as part of the customer segment interface.
  3. If the user wants to forecast by net sales units, then sales returns must be provided as well so that the system can calculate the net sales units.
  4. This method only supports a forecast on the Total Gross Sales Units and Total Net Sales Units. Note that OO supports optimization on the Total Gross Sales Units or Total Net Sales Units with this forecasting method.

When the system is in production, the latest incremental sales data is obtained as part of the batch process.

RECEIPT.csv

Forecast Method: Causal-LLC

This forecast method does not require data from RECEIPT.csv.

Forecast Method: Causal-SLC

This forecast method requires data from RECEIPT.csv. The requirement is to provide a snapshot of the first and last inventory receipt dates for all active items, as well as on an ongoing basis. It is recommended to provide the first and last inventory receipt dates for all items, at least for the most recent two quarters. The first and last inventory receipt dates are used to calculate the model start date, which helps assign the season code necessary to choose the correct seasonality curves. The RSE_FIRST_INV_RECEIPT_HIST_MAX_WEEKS parameter in the RSE_CONFIG table (APPL_CODE = RSE), available in the Manage System Configurations screen, controls this behavior: first inventory receipt dates older than this many weeks from the latest week with inventory are excluded from the model start date calculation.

Forecast Method: Automatic Exponential Smoothing

This forecast method does not require data from RECEIPT.csv.

Forecast Method: Hierarchical Bayesian

This forecast method does not require data from RECEIPT.csv.

INVENTORY.csv

Forecast Method: Causal-LLC

This forecast method does not require data from INVENTORY.csv. In the absence of inventory data in the environment, the PMO_AGGR_INVENTORY_DATA_FLG parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to N before the data aggregation is completed for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen.

Forecast Method: Causal-SLC

This forecast method requires data from INVENTORY.csv. In the presence of inventory data in the environment, the PMO_AGGR_INVENTORY_DATA_FLG parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to Y before the data aggregation is completed for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen. The requirement is to provide the inventory data for all active items, as well as on an ongoing basis. It is recommended to provide the inventory data for all items, at least for the most recent two quarters. The inventory information is used to calculate the base demand of the items, using Bayesian methods, and also to obtain store counts for active items.

For items with extremely low rate of sale (ROS), non-Bayesian methods over-forecast, so an inventory-informed forecast (using Bayesian methods) is determined instead. This is different from an inventory-constrained forecast: what is determined is a forecast of demand, not sales. For example, 15 units might be forecast even if inventory is 10 units, so the forecast is not bounded by inventory. The role of inventory is to serve as a prior hint in the Bayesian methods, so that 100 units are not forecast for an item that has an initial inventory of 10.

Forecast Method: Automatic Exponential Smoothing

This forecast method does not require data from INVENTORY.csv. In the absence of inventory data in the environment, the PMO_AGGR_INVENTORY_DATA_FLG parameter value in the RSE_CONFIG table (APPL_CODE = PMO), available in the Manage System Configurations screen, must be set to N before the data aggregation is done for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen.

Forecast Method: Hierarchical Bayesian

This forecast method may require data from INVENTORY.csv. It supports inventory units as a feature, and thus, if there is a desire to consider the impacts of the inventory on the forecast, then that data must be provided for about 18 months, and also on an ongoing basis. The PMO_AGGR_INVENTORY_DATA_FLG parameter value in RSE_CONFIG table (APPL_CODE = PMO) available in the Manage System Configurations screen must be set up accordingly before the data aggregation is done for the forecast run type from the Setup train stop in the Manage Forecast Configurations screen.

PRICE.csv

Forecast Method: Causal-LLC

This forecast method may require data from PRICE.csv. It supports price as a feature in the promotion module, and thus, if there is a need to consider the impacts of price changes on the forecast, then that data must be provided. The recommendation is to provide price history proportional to the length of the sales history provided, as well as on an ongoing basis. Note the following:
  1. This method uses the latest price change as of the week ending, so it is not necessary to send daily price changes.

  2. PRICE_CHANGE_TRAN_TYPE=0 is used to identify the full price or original price of the item. More than one record can have a value of 0 in situations such as when an item is repeated or reset, or when the original or full price changes. This record is required for every item/location that had a sale at any point in the history.

  3. PRICE_CHANGE_TRAN_TYPE=4 records are used to identify the regular price changes for the item. These records help identify regular price changes and thus calculate the impact of the regular price change.

  4. PRICE_CHANGE_TRAN_TYPE=8 records are used to identify the ticket price changes for the item. These records help identify markdowns and calculate the price impact for a markdown. If these records are not provided, the system will use a fallback option to calculate the price response, but it might result in lower forecast accuracies.

Forecast Method: Causal-SLC

This forecast method may require data from PRICE.csv. Price data is used to identify the appropriate price changes to calculate the most accurate price response behavior. In the absence of this data, the sales data used in base demand calculation will not be de-priced. The recommendation is to provide price history proportional to the length of the sales history provided, as well as on an ongoing basis. Note the following:
  1. This method uses the latest price change as of the week ending, so it is not necessary to send daily price changes.

  2. PRICE_CHANGE_TRAN_TYPE=0 is used to identify the full price or original price of the item. More than one record can have a value of 0 in situations such as if an item is repeated, or reset, or when the original or full price changes. This record is required for every item/location that had a sale at any point in the history.

  3. PRICE_CHANGE_TRAN_TYPE=4 records are used to identify the regular price changes for the item. These records help identify regular price changes and thus calculate the impact of the regular price change.

  4. PRICE_CHANGE_TRAN_TYPE=8 records are used to identify the ticket price changes for the item. These records help identify markdowns and calculate the price impact for a markdown. If these records are not provided, the system will use a fallback option to calculate the price response, but it might result in lower forecast accuracies.

Forecast Method: Automatic Exponential Smoothing

This forecast method does not require data from PRICE.csv.

Forecast Method: Hierarchical Bayesian

This forecast method requires data from PRICE.csv. Price data is used to identify the appropriate price changes to calculate the most accurate price response behavior. The requirement is to provide price history proportional to the length of the sales history provided, as well as on an ongoing basis. Note the following:
  1. This method uses the latest price change as of the week ending, so it is not necessary to send daily price changes.

  2. PRICE_CHANGE_TRAN_TYPE=0 is used to identify the full price or original price of the item. More than one record can have a value of 0 in situations such as when an item is repeated or reset, or when the original or full price changes. This record is required for every item/location that had a sale at any point in the history.

  3. PRICE_CHANGE_TRAN_TYPE=4 records are used to identify the regular price changes for the item. These records help identify regular price changes and thus calculate the impact of the regular price change.

  4. PRICE_CHANGE_TRAN_TYPE=8 records are used to identify the ticket price changes for the item. These records help identify markdowns and calculate the price impact for a markdown. If these records are not provided, the system will use a fallback option to calculate the price response, but it might result in lower forecast accuracies.

Data Correction and Reload Process

As part of building the dataset and reviewing the ETL Process, it is important to understand what is possible for correcting data after you have loaded it, as this will impact the amount of time you spend validating the data up-front before attempting to send it to RAP solutions.

SALES.csv and RECEIPT.csv

Sales and receipts are considered transactional interfaces, meaning that they load the data as incremental deltas. If a record is new, then it is inserted into the database; if it already exists, then the incoming value is SUMMED with the existing value. This summation is why deltas are used: the incoming data must be only the difference between the existing and expected values.

When you need to reload sales or receipt history due to data issues, or you need to revise existing data loaded in prior batch cycles, you have three options:
  1. It is ideal to calculate the adjustments required and load those delta values on top of the current data to apply the corrections. This does not require any cleanup or resetting of the existing data. This is how revisions must be done during nightly batches.

  2. Delete the existing sales or receipts data and reload it cleanly with the complete dataset that has been corrected for any issues. This is a complete deletion of the database tables for all history. This is done during history loads to make major changes. You do not normally do this after go-live.

  3. Raise an SR requesting that specific subsets of data be deleted so that you can reload only those records. The subset of data to be deleted must be defined by a grouping of product, location, or calendar hierarchies; it must not be made up of arbitrary records. For example, delete all data for department 1001 so that the history for it can be re-loaded. Oracle will delete the data and then you must load the corrected data using the standard implementation processes.

When you are posting updates to existing records (option 1 above), you must consider the following:
  1. For all of the dimension values, including item, loc, date, transaction ID, promo ID, retail type, and so on, the existing record must match the new revision so that the correct row is updated. If any one value does not match, then the revision may get inserted as a new row instead.
  2. The value you are modifying must be the difference between the old and new value, while values you are not modifying must be provided as zeros (see the worked example after this list). For example, updating SLS_QTY from 3 to 4 must have SLS_QTY=1 on the incoming revision and 0 in the other columns.
  3. For sales only, REVISION_NUM must be 2 or greater if you want to keep track of the revision count; however, it is not required. Receipts and other transaction types do not have revision number tracking.
  4. Regardless of the data levels (aggregated item/loc/week or daily transactions), the correction process is the same. The only difference is how you configured the columns such as SLS_TRX_ID or PROMO_ID to work, because you must use the same values each time you post revisions.
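As a worked example of option 1 (values are illustrative), suppose an aggregated weekly record was loaded with SLS_QTY=112 but the correct value is 113. The revision row repeats every dimension value exactly as originally loaded and carries only the delta:

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,SLS_QTY,LOC_CURR_CODE,DOC_CURR_CODE

1234781,2072,20210131,0000,R,12345678,1,USD,USD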

Note that for receipt corrections to history data, before daily batches begin, you can reload data incrementally to change numbers such as INVRC_QTY; but if you are adding or removing entire receipt transactions, then you must delete and reload both receipts and inventory in parallel, as defined in the RAP implementation steps. This assumes you require the first/last receipt date calculations, as discussed previously. The receipts are applied to inventory calculations during the inventory history load, not during the receipts history load, so they must happen in parallel. If you do not require first/last receipt date calculations, then the receipts can be loaded separately from inventory.

INVENTORY.csv

Inventory is a positional interface; that is, it tracks the current positions of your inventory levels at each point in time. Positional values cannot be updated or overwritten once loaded. It is assumed that you provided the exact value of the position for each item/loc/date, so it does not make sense to adjust those values later. For example, if your on-hand inventory units were five today, then that was the physical count of units you had at that time. Any changes that occur later apply only to some future date, not to past dates.

When you must reload inventory history due to data issues, you have two options:

  1. Delete the existing inventory data and reload it cleanly with the complete dataset that has been corrected for any issues. This is a complete deletion of the database tables for all history. This is done during history loads to make major changes; you would not normally do this after go-live. As noted in the receipts section, if you delete and reload inventory, you must do the same for receipts, assuming you require the receipt date calculations for forecasting.
  2. Raise an SR requesting that specific subsets of data be deleted so that you can reload only those records. The subset of data to be deleted must be defined by a grouping of product, location, or calendar hierarchies; it must not be a group of arbitrary records. For example, delete all data for department 1001 so the history for it can be reloaded. Oracle will delete the data and then you must load the corrected data using the standard implementation processes. Request the same deletion for receipts data and reload both in parallel if you are using receipts for forecasting.

When reloading inventory, if first/last receipt dates are required, then you must also provide receipts data for the same item/loc/weeks in the RECEIPT.csv at the same time, otherwise you will lose the receipt date calculations in your reloaded inventory data.

Once you have started nightly batches, you cannot back-post inventory data; your inventory updates will always be posted for the current business date in RAP. If inventory data was not posted for 1+ prior days for any reason, such as issues in your source system or problems at the stores, then include those updates in the next batch and they will be posted for the current date, setting the inventory position values to be correct from that day forward.

PRICE.csv

Price data is a positional interface similar to inventory, but it has additional complexities that make it even more challenging to modify once loaded. Price data calculates many metrics over time during the load process, such as the number of markdowns that have occurred or the last known clearance price. These values are dynamic; they are updated every day as price changes in the history files are processed in chronological order. Once the data is loaded, you cannot directly alter or update it. Your options for data corrections are:
  1. Delete the existing price data and reload it cleanly with the complete dataset that has been corrected for any issues.
  2. Raise an SR requesting that specific subsets of data be deleted so that you can reload only those records. The subset of data to be deleted must be defined by a grouping of product or location hierarchies; it must not be arbitrary records, and it cannot be a subset of calendar periods. Oracle must delete all data for the entire calendar range so derived metrics can be recalculated by any new loads. For example, delete all data for department 1001 so Oracle can reload the history for it. Oracle will delete the data and then you must load the corrected data using the standard implementation processes.