7 AIP Java/Oracle Daily Batch Process Details

The batch scripts are executed on the AIP Oracle database before and after the AIP batch execution on the AIP RPAS platform.

The batch scripts initiate a number of processes serially. It is accepted that they do not provide optimum parallelization or restartability/recovery. Restarting or recovering the batch from the highest-level scripts requires a significant amount of the batch to be rerun. You have the option of manually running the lower-level scripts from the failing script onward; however, this is not recommended in production. For this reason, the second-level scripts are modularized so that they can be executed by the job scheduler, with the appropriate dependencies, and run individually (followed by the scheduled dependent processes).

This chapter contains the following sections:

  • Locking and Unlocking Users

  • Virtual Date Maintenance

  • Pre-Export Batch

  • Export DM and OM Data (cron_export)

  • Import Data into AIP Oracle Database

  • DM Post-Load Batch

  • Prepare for Intra-day Order Load and Release

  • Import Orders

  • Smooth and Scale Purchase Orders

  • Merge Purchase Orders

  • Release Orders to RMS

Locking and Unlocking Users

Online users should be locked out from making changes to data that is going to be processed by the batch processes. In order to do this, a script called batch_lock.sh is provided in the home directory of AIP Oracle batch scripts. This script is used with different parameters for both locking and unlocking users.

The client's job scheduler should execute the script to lock users before the start of every overnight or Intra-day batch run. When locking for an Intra-day batch, only the release wave that is about to be processed should be locked. The scheduler should then execute the script again at the end of the batch processes to unlock users so that they can resume online activities. The overnight batch lock is more restrictive than the Intra-day batch lock; the latter applies only to the data that will be processed in the locked release wave.

The overnight batch processes on AIP-Oracle usually begin with the process that increments the VDATE before exporting data from Oracle to AIP-RPAS, and end with the post-release process. The Intra-day batch processes usually begin with the process that captures the inventory snapshot in external systems such as RMS, and end with the release of Intra-day orders.

Script Name: batch_lock.sh

This script calls the PL/SQL package function aip_util.set_batch_lock to set or clear the lock. Inside the function, the system parameters BATCH_LOCK and BATCH_LOCK_RELEASE_WAVE are updated with values depending upon the input parameters passed to the script. Valid values for BATCH_LOCK are Y and N: Y indicates the lock is in place; N means no lock. BATCH_LOCK_RELEASE_WAVE is -1 for overnight and a number between 0 and 23 for an Intra-day release wave.

Input Parameters:

Following are the input parameters for the batch_lock.sh script.

  • Lock option (l): -1 for an overnight lock; a value between 0 and 23 for an Intra-day release wave lock.

  • Unlock option (u): no value. Clears all locks.

Examples:

  • batch_lock.sh -l -1: Overnight lock

  • batch_lock.sh -l 7: Intra-day release wave 7 lock

  • batch_lock.sh -u: Unlock
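As an illustration, a minimal scheduler sketch that brackets the overnight batch with the lock and unlock calls is shown below. The wrapper name run_overnight_batch.sh is hypothetical; substitute the actual scheduled batch chain.

  #!/bin/sh
  # Hedged sketch: bracket the overnight batch with the batch lock.
  batch_lock.sh -l -1          # lock all online users for the overnight batch
  ./run_overnight_batch.sh     # placeholder for the scheduled overnight chain
  batch_lock.sh -u             # clear the lock so online work can resume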

Virtual Date Maintenance

The entire batch process is run for the next business day. To guarantee that the whole batch process uses the same date, and is not affected by a potential change in calendar days on the server, a virtual date is used.

The virtual date, also referred to as the batch run date or vdate, is set immediately prior to the commencement of the batch. If the scheduled start time is prior to midnight the virtual date is one day ahead of the system date. If the scheduled start time is after midnight the virtual date matches the system date. On normal, day-to-day batch runs the virtual date can simply be incremented by one day to achieve the proper batch run date (regardless of scheduled start time).

A script is used to maintain the vdate. The maintenance of the vdate can be easily accomplished as a scheduled daily task and as a manual task.

The system operator schedules the increment of the vdate on a daily basis.

In the event of a batch failure all or portions of the batch may be re-run without changing the virtual date. No other batch process attempts to update or maintain the vdate. The scheduler and system operator are solely responsible for this maintenance.

The system operator is also able to manually set the vdate. Any attempt to set an invalid date results in an error being logged; the date is not updated. If the operator updates the date directly rather than through the provided vdate.sh script, any attempt to retrieve or use the invalid date results in an error that halts the calling process and all dependent processes.

All batch processing dependent on dates (effective dates or end dates) considers the vdate as the current effective date. The export planning horizon and any other calculated time periods (for example, sister store/warehouse copy, walking order cycles, and so forth) are calculated based on the vdate as the current effective date.

Script Name: vdate.sh

This script is a wrapper for the vdate functions get_vdate, set_vdate, and inc_vdate from the aip_util package.


Note:

The vdate.sh script should not be used to set or increment the vdate more than once in a single, complete batch run.

Input Parameters

This script accepts the following commands:

  • Action: [set|get|inc]

  • Export: [noexport|export]

  • Date: [YYYYMMDD] (only accepted for the set action)

set [date in YYYYMMDD]

This function performs the following:

  • Calls the SET_VDATE function (using call_bsa_sql.sh).

  • If export is specified, writes the vdate value to a text file.

get

This function calls the GET_VDATE function (using call_bsa_sql.sh).

inc

This function performs the following:

  • Calls the INC_VDATE function (using call_bsa_sql.sh).

  • If export is specified, writes the vdate value to a text file using the GET_VDATE function output.

Example

Script Call                       Explanation
vdate.sh                          Retrieves the vdate.
vdate.sh get                      Retrieves the vdate.
vdate.sh inc                      Increments the vdate without exporting it.
vdate.sh inc noexport             Increments the vdate without exporting it.
vdate.sh inc export               Increments the vdate and exports it.
vdate.sh set noexport 20001130    Sets the vdate without exporting it.
vdate.sh set export 20001130      Sets the vdate and exports it.
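As a usage illustration only, a scheduler could increment and export the vdate once, immediately before the overnight batch starts. The timing and path below are assumptions:

  # Hypothetical cron entry: at 23:45 each night, increment the vdate and
  # export it for the rest of the batch (run exactly once per batch run).
  45 23 * * * /path/to/scripts/vdate.sh inc export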

Pre-Export Batch

Prior to physically exporting the DM Oracle data out of the tables, the walking order cycle lead times are calculated and the export planning horizon start and end dates are updated.

Export Planning Horizon

The export planning horizon is a calculated planning horizon value that is used to limit the amount of data exported to AIP RPAS. The export planning horizon start date is equal to the virtual batch run date (vdate). The export planning horizon end date is equal to the start date plus the calculated maximum export planning horizon value. These values are used throughout the extract logic to limit the extracted data to that which falls within the export planning horizon start and end dates. In addition to the maximum planning horizon, individual SKU planning horizons are also calculated in this step. The horizon itself is exported to AIP-RPAS for use in AIP-RPAS batch processes.

It is important to note that this export planning horizon does not control replenishment. However, the data that is exported directly affects the replenishment plan that the batch calculates. Therefore, it is important that all data is exported for the duration of its corresponding planning horizon.

Walking Order Cycles

Walking order cycles are used to drive stock into a new store prior to its opening. A profile/store release schedule exception is created with a special empty order cycle to wipe out all lead times for the store. Then, a second calculated release schedule lead time exception is created. This is the walking order cycle lead time which is recalculated each day.

Because the user has the opportunity to create new Profiles during the online day the profile/store release schedule exception must be created for the new profiles after the online day. The profile/store release schedule exception is created for all new profile/store combinations prior to the export of DM Online data to AIP RPAS.

Prior to exporting the DM Online data for running the AIP RPAS batch calculations the walking order cycle lead time is calculated for the current batch run date (the next business day). This lead time is saved as a SKU/store release schedule exception.

Build Scalable and Smoothable Assignments

If smoothing functionality is turned ON, then smoothable assignments are identified and set up in preparation for the future smoothing run. Similarly, if scaling functionality is turned ON, then scalable assignments are identified and set up in preparation for the future scaling run. One of the two global scaling flags needs to be turned ON. Active scaling assignments are considered scalable depending upon the local (scaling group level) flags, scaling horizons, and scaling constraints.

Smoothable assignments from the last export run are deleted, and new smoothable assignments for the upcoming smoothing run are set in the smoothable_assignment table. Similarly, scalable assignments from the last export run are deleted, and new scalable assignments for the upcoming scaling run are set in the scalable_assignment table.

During export, a list of distinct vendor/SKU combinations from the scalable and smoothable assignments is exported to AIP-RPAS so that AIP-RPAS can return the warehouse release schedule and other required information only for the vendor/SKU combinations listed in the file.

Script Call

These steps are performed just prior to the physical export of data but are executed by the same control script which triggers the exports. The control script is described in detail in the next section.

Export DM and OM Data (cron_export)

The DM Online and Order Management applications are used to enter and maintain AIP supply chain configurations, replenishment parameters, and purchase orders. While all of the user entered data is mastered in the Oracle database, it must ultimately be used to generate the desired replenishment plans.

Technical Details

cron_export is the wrapper script executed to export all the data from DM for the AIP RPAS platform. It performs the following activities:

  1. Calls the pre_aiponline_extract.sh script to do the following activities:

    • Remove all the .dat and .int files in the ONL_OUTBOUND_DIR directory.

    • Call the pre_extract_wrapper function from the aip_util package.

  2. Processes and archives hierarchy and DM data.

    The script loops through the following configuration files and executes each export script listed in the file:

    • $INTEGRATION_HOME/config/export_hierarchy.config

    • $INTEGRATION_HOME/config/export_dm.config

  3. Builds smoothing and scaling data.

    The script calls the build_scalable_assignment and build_smoothing_data functions in the ocs_validation package to prepare assignments for the next batch run of scaling/smoothing.

  4. Updates the interface parameters.

    The update_interface_param.sh script updates the last used planning date for the SKU planning horizon in the planning_horizon_sku table and the MAX planning horizon in the interface_parameters table.

Files to Export Data

The following files are needed to export data out of the AIP Oracle schema.

File                                                Description
Export Script ($INTEGRATION_HOME/scripts/export)    This folder contains the export SQL query.
Schema File ($INTEGRATION_HOME/schema)              Defines the .dat file format using XML tags.
Config File ($INTEGRATION_HOME/config)              Lists the .sh export scripts that can be executed in parallel.

Tables to Extract Data

Two key tables are used to extract data from the Oracle tables.

Table                            Description
INTERFACE_PARAMETER              Holds the timestamp of the last successful run of cron_export.sh. This is how RETL knows what was added or modified since the last extract. In many cases, only modified data is extracted.
INTERFACE_HORIZON_DATE_LOOKUP    Has one row for each day starting today through the entire planning horizon (as calculated in the pre-export logic). This table is used as a join in the SQL statement within the export script. It is used to blow out the data to the daily level or to ensure that the data being exported falls within the planning horizon.
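For illustration, the sketch below shows the general shape of such an export query. The name some_dm_table and the columns effective_date, last_update_ts, horizon_date, and last_export are assumptions for this sketch; the shipped export scripts contain the actual queries.

  -- Hedged sketch only: object and column names are illustrative.
  SELECT d.*
    FROM some_dm_table d,
         interface_horizon_date_lookup h,
         interface_parameter p
   WHERE d.effective_date = h.horizon_date   -- stay within the planning horizon
     AND d.last_update_ts > p.last_export;   -- keep only changed rows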


Export Flow Diagram

Figure 7-1 shows the flow of data out of the AIP Oracle schema.

Figure 7-1 Export Flow


Script Name: cron_export.sh

This script exports all the DM and hierarchy information from AIP-Oracle to AIP-RPAS, and prepares the Oracle database for the next scaling and smoothing runs.

Input Parameters

noTimestamp is an optional parameter that indicates whether to update the INTERFACE_PARAMETER table. By default, the table is updated; enter the value noTimestamp to prevent the LAST_EXPORT value from being updated.
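Examples:

  • cron_export.sh: exports all data and updates the LAST_EXPORT timestamp

  • cron_export.sh noTimestamp: exports all data without updating the LAST_EXPORT timestamp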

Requirements

This script requires a configuration file containing a list of export scripts.

Day on Day Processing

The script exports data from the Oracle database to AIP RPAS. The script is designed to be executed automatically by a scheduled cron job every night after the online day and before the AIP RPAS batch is run.


Note:

It is assumed that batch lock has been applied and VDATE has been updated as documented earlier before the start of this process.

The script first calls pre_aiponline_extract.sh to perform pre-export updates. Next, each export script is invoked. After the data is successfully extracted, the script updates the timestamp on the INTERFACE_PARAMETERS table.

Processing Steps

The following steps describe the processing performed by cron_export.sh. Details of these steps are described in their corresponding sections.

  1. Execute pre-extract setup and maintenance.

    The pre_aiponline_extract.sh script sets up the planning horizon days which are used throughout the extract logic and maintains/decrements the walking order cycle exceptions.

  2. Process hierarchy data.

    Use the process_aiponline_data.sh script with the input parameter export_hierarchy.config to process the hierarchy data.

    Input Parameters

    Following are the input parameters for the process_aiponline_data.sh script.

    • export_hierarchy.config

    • ${ONL_EXPORT_DIR}

    Details of Processing Hierarchy Data

    Each export script within the configuration file makes a single call to RETL with a database connection parameter. The export scripts contain the export SQL query. When more than one file is being generated, the export script contains one query for each unique file name being generated.

    Hierarchy data included in the export_hierarchy.config file are:

    • network_groups

    • order_cycle

    • order_groups

    • profile

    • sku_pack

    • store_order_cycle

    • warehouse

  3. Archive hierarchy output files.

    After processing the hierarchy data using the previous step, the data is archived by converting the hierarchy (.dat) files into tar files. For example, hierarchy.tar.Z file.

  4. Build scalable and smoothable assignments.

    Use a PL/SQL stored procedure to build a new set of scalable assignments if scaling is turned ON. Also call the stored procedures that build smoothing data, such as smoothable assignments and smoothing detail, when smoothing is turned ON.

  5. Process DM data.

    Use the process_aiponline_data.sh script with the input parameter export_dm.config to process the DM data.

    Input Parameters

    Following are the input parameters for the process_aiponline_data.sh script.

    • export_dm.config

    • ${ONL_EXPORT_DIR}

    Details of Processing DM Data

    The following DM data is included in the export_dm.config file:

    • assigned_commodity
    • chamber_product_type
    • commodity_order_cycle_exceptions
    • default_order_cycle
    • delivery_demand_percent (4 levels)
    • delivery_demand_percent_location
    • delivery_demand_percent_location_department
    • delivery_demand_percent_location_department_exception
    • delivery_demand_percent_location_exception
    • delivery_pattern
    • demand_group_data
    • department_profile
    • direct_store_format_pack_size
    • direct_store_pack_size
    • direct_supplier
    • inventory_snapshot_time
    • non_release_date
    • non_release_date_exceptions
    • on_supply_off_supply
    • order_cycle
    • order_group_assignment
    • order_history
    • order_multiple
    • orderable_unit
    • pack_break_size
    • pack_break_warehouse
    • pack_change
    • pack_change_clear_inventory_period
    • pack_change_destination
    • pallet_multiple
    • profile_network_group
    • profile_order_cycle
    • promotion_and_replace
    • promotion_and_replace (standard SKU, store and source)
    • range_commodity_pack_size
    • rdc_reconciliation
    • release_wave_assignment (default and exception)
    • replan_flag_default
    • secondary_source
    • singles_enabled_warehouse
    • sister_store
    • sister_warehouse
    • sister_wh
    • source_split_tb
    • stockless_indicator
    • store_calendar
    • store_format_pack_size
    • store_order_cycle
    • store_planning_horizon
    • store_priority
    • supplier_order_cycle_assignment
    • valid_warehouse
    • vendor_lock
    • vendor_sku_to_scale
    • warehouse_prealloc_mustconsume (global, class and sku)
    • warehouse_prealloc_preallocate_sku
    • warehouse_prealloc_stagingwindow (global, class and sku)
    • warehouse_rollout_status

  6. Archive DM output files.

    After processing the DM data using the previous step, the data is archived by converting the DM (.dat) files into tar files. For example, dm.tar.Z file.

  7. Update the interface parameters.

    The update_interface_param.sh script updates the INTERFACE_PARAMETERS table to reflect what the timestamp was at the end of this export.

    Because some data is extracted or maintained at high volumes, or both, that data is only exported when changes occur. In order to identify changes that have occurred since the last export, a last export timestamp is maintained. This timestamp is updated immediately after all export data files have been successfully created and compressed.

    All subsequent updates made in the DM Online application, or as part of the AIP Oracle Morning Batch, have a later timestamp and are therefore picked up in the next export.

    Details of Updating Interface Parameters

    When no parameter is passed into cron_export.sh, the last_export timestamp is updated to the current date and time. When the noTimestamp parameter is passed to cron_export.sh, the last_export timestamp is not updated.

Prerequisites for running cron_export.sh

Run this step after the DM and OM Online users have been locked by executing batch_lock.sh -l -1. This prevents users from adding or modifying data between the time that data is extracted from a table and the time that the last export timestamp is updated. Such an occurrence could result in AIP Oracle and AIP RPAS getting out of sync.

The data extracted also depends on the INTERFACE_HORIZON_DATE_LOOKUP table and the INTERFACE_PARAMETER table. If data is not extracted as expected, ensure that the days_in_export_horizon value in the Config package is set correctly to cover the longest planning horizon in the AIP system.

The data files that do not contain a full refresh of the data are controlled by the last_export timestamp value. The timestamp on the row being extracted is compared to the last_export timestamp. If the timestamp on the row is later than the last_export value, it was added or modified since the last extract occurred.

This step must run before the AIP RPAS batch step prep_onl_data.

Restart/Recovery steps for cron_export.sh

If this step fails, perform the following:

  1. Review the log file, which is located in the logs folder of the directory that is specified for the Oracle LOGHOME environment variable.

  2. Perform the necessary corrective actions.

  3. Re-run cron_export.sh.

Import Data into AIP Oracle Database

The following sections describe how to import data into the AIP Oracle Database.

Load Data

Much of the AIP supply chain management is performed in Data Management Online, which is built on an Oracle database. The RMS and RPAS data is imported into DM Oracle as part of the final steps of the AIP nightly batch.

The import includes foundation data and the orders that were produced by AIP RPAS batch. Once imported, today's orders are scaled and immediately released to RMS by OM batch. The release to RMS serves to execute the orders or to communicate them to the source warehouses and suppliers.

Import Overview

Upon completion of the batch jobs, RPAS generates flat files (.dat data files) that contain the hierarchy and measure data that is required in DM and OM Oracle Online. Data files exported from RPAS should then be retrieved by copy or FTP using a job scheduling application. Once the Oracle import process begins, the data is loaded into the Oracle database by using RETL (Oracle Retail Extract, Transformation, and Load). For this step, only the Load part of RETL is used.

Figure 7-2 Importing Flow


Staging tables, prefixed with INTERFACE_ or i_, temporarily store the records from the .dat file in the Oracle database. RETL then invokes a stored procedure to move the data from the staging table to the base table.

The following two files are needed to move the flat file to the staging table using RETL:

  • Import Script ($INTEGRATION_HOME/scripts/import)

  • Schema File ($INTEGRATION_HOME/schema)

Technical Details

The cron_import.sh script currently performs two main functions:

  • Import Hierarchy, DM, SKU attributes, DM Alerts, and OM data into the AIP Online database (process_aiponline_data.sh).

  • Call any post-load processing if necessary (post_import_wrapper.sh).

This script is a wrapper for a series of scripts.

The importing of data into AIP Online can run as independent processes, as described below.

Import Data Management files:

  • Retrieves the file containing DM data.

  • Calls the import processing script with the hierarchy config file.

  • Calls the import processing script with the dm config file.

  • Calls the post-load SQL processing script to execute the Automated Data Maintenance batch scripts.

Import SKU Attribute data:

  • Retrieves the file containing SKU Attributes from the import directory.

  • Calls the import processing script with the sku-attributes config file.

Import Order Management data:

  • Retrieves the file containing OM data.

  • Calls the import processing script with the om config file.

Import Data Management Alerts:

  • Retrieves the file containing DM Alerts data.

  • Calls the import processing script with the dm alerts config file.


Script Name: cron_import.sh

The cron_import.sh script is executed to process the files by passing the appropriate parameters.

Input Parameters

Following are the input parameters for the cron_import.sh script.

Module name: [dm|alerts|om|sku-attributes]
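Examples:

  • cron_import.sh dm: imports hierarchy and DM data and runs the DM post-load processing

  • cron_import.sh sku-attributes: imports SKU attribute data

  • cron_import.sh om: imports OM data

  • cron_import.sh alerts: imports DM Alerts data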

Restart/Recovery

The import process executes many import scripts in parallel. If this process fails, it is because one or more of the import scripts failed. After examining the logs to determine which scripts failed, and after correcting the errors, the process can be completed in a number of ways:

  • Simply restart the import processing. Since successful import scripts delete their input files, the entire import process can be run again. Missing input files only generate warnings, and processing continues.

  • Execute individual import scripts from the command line.

  • Create a config file containing the list of only the import scripts that have not completed successfully. Execute the import process using this config file.

If an error occurs in the post-load SQL processing script, then once the errors have been corrected that script can simply be re-executed.

DM Post-Load Batch

The following sections describe the process of DM Post-Load Batch.

Script: post_import_wrapper.sh

The DM scripts that are executed by cron_import.sh after the AIP Oracle load scripts are called the DM Post-Load batch. It is also referred to as the Automated Data Maintenance batch.

The following descriptions of each process provide an overview. Additional validation and restrictions may be applied that are not detailed in the overview.

Create Order Groups

The automatic creation of order groups is triggered by a new supplier arriving in the AIP system. When the new supplier has a ship-to value defined, it is compared against a mapping table. The mapping table maps ship-to values to sources/source warehouse types and destination warehouse types. One order group is created for each source and destination warehouse-chamber combination assigned to the ship-to source type and destination warehouse type. The chambers assigned to warehouses that match the ship-to destination warehouse types are assigned as the order group automation scheduling locations.

If the new supplier supplies SKU-packs that already exist in the system (SKU-packs are multi-supplied by another existing supplier), the demand groups of the SKU-packs are immediately assigned to the order group.

Create Profiles

The automatic creation of profiles is triggered by a new supplier arriving in the AIP system. The new supplier triggers the creation of a new direct profile by DM Oracle batch. The new direct profile is assigned a specific default order cycle which is created at implementation time.


Note:

The automatic creation of warehouse profiles occurs in the AIP RPAS batch.

Assign Profiles

The assignment of SKUs to profiles must be performed after the automatic creation of profiles. The automatic assignment of SKUs to profiles is triggered by a new Supplier/SKU-pack size combination. The SKUs which are identified as newly supplied by a supplier are assigned to that supplier's direct profile. If the supplier does not have a direct profile, and the SKU is not assigned to any profiles, the SKU is assigned to the Default (Class) Profile where defined.


Note:

The automatic assignment of SKUs to warehouse profiles is performed in AIP RPAS batch and is triggered by a new SKU.

Demand Group and SKU Group Maintenance

The following sections describe demand group and SKU group maintenance.

New SKU-pack Sizes

All new SKU-pack sizes that arrive in the AIP system are assigned to a single default demand group and SKU group. These SKU-pack sizes cannot be replenished in the warehouse until they are assigned to the proper demand group and SKU group.

A new SKU group is created for each new SKU.

The assignment of new SKU-pack sizes to demand groups is determined by a configuration option. This option, set at implementation time, determines whether the new pack sizes of a SKU should be in a single demand group for that SKU, or whether each pack size should have its own demand group.

The default configuration is to assign all of a SKU's pack sizes to the same demand group.

An additional parameter, the Inventory Tracking Level, can override the demand group assignment configuration option. The Inventory Tracking Level is a global string value set at implementation time. When it is set to Eaches, the demand group assignment configuration option previously mentioned is overruled, and all new pack sizes of a SKU must be assigned to a single demand group for the SKU.

Pre-priced SKU-pack Sizes

All SKU-pack sizes that are pre-priced/value-added have a parent SKU defined. The pre-priced/value-added SKUs must be assigned to the same SKU group and demand group as their parent SKU.

Any pre-priced/value-added SKU which is assigned to a SKU group other than its parent's SKU group is updated to reflect a SKU group assignment equal to the parent's SKU group.

Any pre-priced/value-added SKU-pack size which is assigned to a demand group other than its parent's demand group is updated to reflect a demand group assignment equal to the parent's demand group. Because a SKU's pack sizes may be assigned to multiple demand groups, the first available parent demand group (when ordered by the demand group code) is used for all of the pre-priced/value-added SKU-pack size assignments.

Standard SKUs

All SKUs that have changed status from pre-priced/value-added to a standard SKU must be assigned to their own demand group and SKU group.

A new SKU group is created for each SKU which is now a standard SKU.

A new demand group is created for each new standard SKU or SKU-pack size (depending on the configuration setting).

Clean-up

When SKU-packs become pre-priced/value-added, they are moved into their parent's demand group and SKU group. Moving SKU-packs out of an existing demand group may result in an empty demand group. All demand groups which have no SKU-packs assigned to them, with the exception of the default demand group, are deleted.

Range SKU-pack sizes

Automatic ranging is triggered by a new SKU-pack size. Any SKU-pack size that was not in the AIP system in the previous batch run is considered new. The new SKU-packs are ranged to warehouses based on a configurable system parameter setting that allows ranging based either on the SKU's supplier's ship-to value and order group destinations, or on all valid warehouses. The ranging rules applied in the DM Online application are also applied to the batch process.

Order Group Assignment

Automatic order group assignment is triggered by a new SKU-pack size. The assignment of demand groups to order groups must be performed after the automatic creation of order groups, after SKU group and demand group maintenance, and after automatic ranging.

New SKU-pack sizes' demand groups are assigned to order groups through their supplier's ship-to value. All order groups with sources that match the supplier's ship-to source value are retrieved. The new SKU-pack sizes' demand groups are assigned to all order group source/scheduling location combinations which are valid based on supplier/SKU-pack size links and ranging.

Reset Store Format Pack Size (WH and Direct to Store Format)

This process should be run after Range SKU-pack sizes to ensure all available pack sizes are considered for replacing the existing pack size.

The store format ordering pack size default is reset when the current pack size has a discontinuation date equal to the vdate. The new pack size is selected in a pre-determined order based on the SKU-pack size pack type. The first valid, available pack type that is not discontinued becomes the new ordering pack size default for the store format. If no alternative pack size is found, the ordering pack size is not reset.

Reset Store Pack Size (WH and Direct to Store)

This process should be run after Range SKU-pack sizes to ensure all available pack sizes are considered for replacing the existing pack size.

The store ordering pack size exception is reset when the current pack size has a discontinuation date equal to the vdate. The new pack size is selected in a pre-determined order based on the SKU-pack size pack type. The first valid, available pack type that is not discontinued becomes the new ordering pack size exception for the store. If no alternative pack size is found the ordering pack size is not reset.

Set Missing Store Format Pack Size (WH and Direct to Store Format)

This process should be run after Range SKU-pack sizes to avoid a day lag in creating store format pack sizes for new SKU-pack sizes.

The default store format pack size is automatically selected for all valid source/SKU/store format combinations. The pack size is selected in a pre-determined order based on SKU-pack size pack types. The first valid, available pack type that is not discontinued becomes the ordering pack size default for the store format.

This process runs nightly to pick up all valid source/SKU/store format combinations that do not currently have a default ordering pack size defined.

Reset Warehouse Orderable Unit

This process should be run after Range SKU-pack sizes and Demand Group and SKU Group Maintenance to ensure all available pack sizes are considered for the new orderable unit.

The warehouse orderable unit is reset when the current pack size has a discontinuation date equal to or before the vdate. The new pack size (orderable unit) is selected in a pre-determined order based on the SKU-pack size pack type. The first valid, available pack type that is not discontinued becomes the new orderable pack size default for the source/demand group/SKU/warehouse. If no alternative pack size is found, the orderable pack size is not reset.

Set Missing Warehouse Orderable Unit

This process should be run after Range SKU-pack sizes and Demand Group and SKU Group Maintenance to avoid a day lag in creating warehouse orderable units for new SKU-pack sizes.

The warehouse orderable unit is automatically selected for all valid source/demand group/SKU/warehouse combinations. The pack size (orderable unit) is selected in a pre-determined order based on SKU-pack size pack types. The first valid, available pack type that is not discontinued becomes the orderable pack size for the source/demand group/SKU/warehouse.

This process runs nightly to pick up all valid source/demand group/SKU/warehouse combinations that do not currently have a warehouse orderable unit defined.

Set Order Multiple

This process should be run after Range SKU-pack sizes to avoid a day lag in creating order multiples for new SKU-pack sizes.

The order multiple is automatically selected for all valid source/SKU/pack size/warehouse-chamber combinations. The order multiple is an integer value selected from the available pack sizes for the SKU. The pack size value that is used as the order multiple is selected in a pre-determined order which is based on pack types. The first valid, available pack type that is not discontinued becomes the order multiple for the source/SKU/pack size/warehouse-chamber.

This process runs nightly to pick up all valid source/SKU/pack size/warehouse-chamber combinations that do not currently have an order multiple defined.

Set Stacking Flag

This process should be run after Range SKU-pack sizes to avoid a day lag in setting stacking flag values for new SKU-pack sizes.

The stacking flag is automatically set for all valid source/SKU/pack size/warehouse-chamber combinations. The default stacking flag value is configurable at implementation time. This process runs nightly to pick up all valid source/SKU/pack size/warehouse-chamber combinations that do not currently have a stacking flag value defined.


Note:

Because this process picks up all missing values, suddenly enabling this logic on a full production set of data causes a significant increase in the batch run time.

Set Case Weight

This process should be run after Range SKU-pack sizes to avoid a day lag in setting case weight values for new SKU-pack sizes.

The case weight is automatically set for all valid source/SKU/pack size/warehouse-chamber combinations. The default case weight value is configurable at implementation time. This process runs nightly to pick up all valid source/SKU/pack size/warehouse-chamber combinations that do not currently have a case weight value defined.


Note:

Because this process picks up all missing values, suddenly enabling this logic on a full production set of data causes a significant increase in the batch run time.

Set Pallet Multiple

This process should run after order multiples are set to prevent a day lag in setting the pallet multiple.

A pallet multiple is automatically set for all source/SKU/pack size/warehouse-chamber combinations that have an effective order multiple on the batch run date and which do not have an effective pallet multiple defined.

Create Time Balanced Source Splits

This process should run after order group assignment to prevent a day lag in creating the time-balanced source splits.

Time balanced source splits are created for every demand group/warehouse combination that has a supplier-sourced order group assignment. The first supplier receives 100% of the source split percent. If more than one supplier supplies SKU-pack sizes in the demand group, an alert is created to indicate that the user should review the accuracy of the source split created.

Copy Sister Store

The sister store copy logic uses a combination of the new store open date and a configurable system parameter to determine when the sister store copy occurs. When a new sister store open date is received, an alert is generated to indicate that a new store open date was received. This alert also includes an expected copy date. A copy only occurs once for a particular new store. Sister store copies occur as soon as their calculated copy date is reached, and result in the supply chain of the copy-to store being a replica of the copy-from store.


Note:

Users and custom processes must not set up the supply chain in the DM Online application for a new store with a pending sister store copy. Doing so results in a failure of the copy. The only recourse is for the batch operator to manually delete the data pertaining to the new store or the user must manually complete the entire supply-chain setup.

Because the only recourse for a failed copy is manual correction or setup, it is highly recommended that the introduction of new stores with a sister store copy be a tightly controlled process. To control the process, the introduction of the new stores should not occur until the store open date falls within the copy timeframe. To additionally safeguard this process, the sister store offset weeks system parameter can be set to an arbitrarily large number that is less than the maximum export horizon (converted to weeks).


Copy Sister Warehouse

The sister warehouse copy logic uses a combination of the new warehouse open date and a configurable system parameter to determine when the sister warehouse copy occurs. When a new sister warehouse open date is received, an alert is generated to indicate that a new warehouse open date was received. This alert also includes an expected copy date. A copy only occurs once for a particular new warehouse. Sister warehouse copies occur as soon as their calculated copy date is reached, and result in the supply chain of the copy-to warehouse being a replica of the copy-from warehouse.


Note:

You must not set up the supply chain in the DM Online application for a warehouse with a pending sister warehouse copy. Doing so results in a failure of the copy. The only recourse is for the batch operator to manually delete the data pertaining to the new warehouse from the database, or the user must manually complete the entire supply chain setup.

Custom processes must not create chambers or set ranging status for warehouses with a pending sister warehouse copy. Doing so results in a failure of the copy. All data that is created as a result of these actions must be manually deleted from the database in order for a successful copy to occur. Otherwise, the user must manually complete the supply chain setup.

Because the only recourse for a failed copy is manual correction or setup, it is highly recommended that the introduction of new warehouses with a sister warehouse copy be a tightly controlled process. To control the process, the introduction of the new warehouse should not occur until the warehouse open date falls within the copy timeframe. To additionally safeguard this process, the sister warehouse offset weeks system parameter can be set to an arbitrarily large number that is less than the maximum export horizon system parameter (converted to weeks).


Prepare for Intra-day Order Load and Release

If Intra-day replanning or Intra-day release is desired, then the Oracle database must be prepared every time before loading and releasing replanned orders for release waves. In order to save critical time, this preparation should be done while the given release wave's orders are being replanned in AIP-RPAS. The preparations include identifying orders that must be removed when a new set of replanned orders for the given release wave is generated by AIP-RPAS and loaded into AIP-Oracle. The data required to identify such orders is assumed to be made available to the Oracle database by the overnight DM import process (cron_import.sh dm). Therefore, the first execution of this script should only occur after cron_import.sh dm has completed.

A shell script, prepare_for_intra_day_release.sh, is provided in the home directory of the AIP Oracle batch scripts. It must be executed once before loading orders for the given release wave. If something fails on the AIP-RPAS side and a new set of orders cannot be made available after the preparations have been done, the same script can be executed (with a different parameter) to undo the preparations.


Note:

If Intra-day replanning or Intra-day release is not desired, then this script should be skipped. Otherwise, this script should be executed once for each configured release wave, even when the replanning (on the AIP-RPAS side) for that wave results in no new orders being generated.

Input Parameters

Following are the input parameters for the prepare_for_intra_day_release.sh script.

  • Release Wave (w): This is a required parameter that indicates the release wave for which the system should prepare. Valid values are between 0 and 23.

  • Undo indicator (u): This is an optional parameter and should be used only when something fails and the system needs to undo the preparations.

Examples

prepare_for_intra_day_release.sh -w 2: This prepares the system for loading replanned orders for release wave 2.

prepare_for_intra_day_release.sh -w 5 -u: This undoes the preparations done for loading release wave 5 replanned orders.

Import Orders

The receipt plan that is calculated by AIP RPAS is exported from AIP RPAS upon completion of the replenishment calculations. The AIP-RPAS export configuration controls how much of the plan is exported and is therefore available to load into the Oracle database. The exported plan is eventually translated into Purchase Orders (POs) and Transfers that are sent to RMS and can be viewed and maintained. Both overnight and Intra-day replanned orders are imported into the Oracle tables using the same cron_import_order.sh script, but with different parameters.

Technical Details

The cron_import_order.sh script loads the RPAS generated orders from .dat files into AIP Online tables.

This script is parameterized so that critical orders can be loaded and released first. The non-critical orders can be loaded afterwards by passing in different parameters.

This script can be used to load the following combinations of overnight orders:

  • Into-Store Purchase Orders only

  • Into-Store Transfers only

  • Into-Warehouse Purchase Orders only

  • Into-Warehouse Transfers only

  • Into-Store Purchase Orders and Into-Store Transfers (that is, all Into-Store Orders)

  • Into-Warehouse Purchase Orders and Into-Warehouse Transfers (that is, all Into-Warehouse Orders)

  • Into-Store Purchase Orders and Into-Warehouse Purchase Orders (that is, all Purchase Orders)

  • Into-Store Transfers and Into-Warehouse Transfers (that is, all Transfers)

  • All orders (this includes Into-Store Purchase Orders and Into-Store Transfers and Into-Warehouse Purchase Orders and Into-Warehouse Transfers)

The same script can be used to load the following combinations of Intra-day orders:

  • Into-store purchase orders for a given release wave

  • Into-store transfers for a given release wave

  • All Into-store orders (includes into-store purchase orders and into-store transfers) for a given release wave

The into-store purchase orders are loaded from the strsplrord.dat file into the store_order table through the staging table interface_store_orders. When loading overnight into-store purchase orders, existing unreleased store purchase orders from the previous load in the store_order table are first removed. When loading Intra-day into-store purchase orders, it is assumed that the required preparations for the given release wave have already been done by running the prepare_for_intra_day_release.sh script.

Into-store transfers are loaded from the strwhord.dat file into the store_order table through a staging table, interface_store_transfers, in the same way as into-store purchase orders. Existing unreleased transfers are removed before the overnight load, and it is assumed that the required release wave preparations are done before loading Intra-day orders.

Into-warehouse transfers are loaded from the wh_to_wh_transfer.dat file into a staging table, i_non_contents_transfer. The cron_release.sh process moves the releasable transfers from the staging table into the non_contents_order table. Then cron_post_release.sh copies the forecast transfers from the staging table into the non_contents_order table.

Into-warehouse purchase orders are loaded from the vendor_to_wh_order.dat file into the warehouse_purchase_order table through a staging table, i_non_contents_order. The optional PO Smoothing and Scaling processes update or insert orders in the warehouse_purchase_order table. The required Merge process then copies the orders from the warehouse_purchase_order table to the non_contents_order table.

In all of the previous cases, existing unreleased forecast orders and transfers from the previous load are removed from the main tables (store_order, non_contents_order) during the overnight batch before new ones are added.

This script begins by fetching and extracting the order files from the compressed tar file, if the tar file is available in the /data folder. Then, depending upon the order_type and dest_type parameters passed into the script, it loads the appropriate order files.


Note:

Intra-day order files use the same name, that is, srp.tar.Z, as the overnight order files. Therefore, care must be taken to use the correct release wave number in the loading script, cron_import_order.sh, when loading a given release wave's tar file into the Oracle database.

Script Name: cron_import_order.sh

This script loads the RPAS generated orders from .dat files into AIP Online tables.

Input Parameters

Following are the input parameters for the cron_import_order.sh script.

  • order_type (o): [transfer|purchase|all]. This is a required parameter for both overnight and Intra-day orders.

    Parameter Action
    -o transfer Used if transfers are to be loaded.
    -o purchase Used if purchase orders are to be loaded.
    -o all Used if both transfers and purchase orders are to be loaded.

  • dest_type (d): [store|warehouse|all]. This is a required parameter.

    Parameter Action
    -d store Used if only into-store orders are to be loaded.
    -d warehouse Used if only into-warehouse orders are to be loaded.
    -d all Used if both into-store orders and into-warehouse orders are to be loaded.

  • Release Wave (w): [A number between 0 and 23]. This is an optional parameter that applies to destination_type store orders only. This is used to load Intra-day orders for a given release wave.

Using a combination of order_type, dest_type and release_wave, the desired load and its priority in the critical batch window can be easily configured.

Examples of Overnight Load

The following table provides examples of overnight load.

Parameter Action
cron_import_order.sh -o transfer -d all Loads into-store transfers and into-warehouse transfers.
cron_import_order.sh -o purchase -d store Loads into-store purchase orders only.
cron_import_order.sh -o all -d all Loads into-store purchase orders, into-store transfers, into-warehouse purchase orders, and into-warehouse transfers.

Examples of Intra-day Order Loading

The following table provides examples of Intra-day order loading.

Parameter Action
cron_import_order.sh -o purchase -d store -w 3 Loads into-store purchase orders corresponding to release wave 3.
cron_import_order.sh -o transfer -d store -w 8 Loads into-store transfers corresponding to release wave 8.
cron_import_order.sh -o all -d store -w 5 Loads into-store purchase orders and transfers corresponding to release wave 5.
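Taken together, a hedged sketch of one complete Intra-day wave (wave 5 is an arbitrary example) might be scheduled as follows; the RPAS replanning and the release step are site-specific and shown only as comments:

  # Illustrative sequence for release wave 5 (assumed wave number).
  batch_lock.sh -l 5                          # lock online activity for wave 5
  prepare_for_intra_day_release.sh -w 5       # prepare for the wave 5 load
  # ... AIP RPAS replans wave 5 and produces srp.tar.Z ...
  cron_import_order.sh -o all -d store -w 5   # load wave 5 into-store orders
  # ... release wave 5 orders to RMS ...
  batch_lock.sh -u                            # clear the lock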

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the batch. Since successful import scripts delete their input files, the entire import process can be run again. Missing input files only generate warnings and then continue.

Smooth and Scale Purchase Orders

As soon as warehouse purchase orders have been loaded, the smoothing and/or scaling logic can be executed. These steps are not required by all retailers; the logic is optional depending on the business need. The execution of all or portions of this logic can also be controlled by global system parameters that disable Order Smoothing, Supplier Scaling, or Container Scaling.

Pre-Scaling

Prior to smoothing and scaling, some system parameters are set that aid scaling. This script first gathers table statistics on important tables used in scaling. It then sets a system parameter to indicate whether or not any release date has multiple possible delivery dates. If quantity constraints are specified in terms of pallets, it then checks for the existence of pallet multiples and marks the invalid assignments in the list of smoothable and scalable assignments. Lastly, it sets a system parameter to indicate whether or not a scaling group Supplier/SKU/warehouse has an Order Multiple that changes in the future.

Script Name: pre_scale.sh

This script prepares the database for scaling the warehouse POs that will be loaded from AIP RPAS. It is required only when scaling is desired.

Input Parameters

None.

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the script.

Order Smoothing

The objective of order smoothing is to move orders to earlier delivery dates in order to reduce the aggregated receiving warehouse total so that it does not exceed the user-defined warehouse capacity constraint. If the aggregated warehouse quantity total is already below the set limit, or if no limit is defined, then no orders are moved. Smoothing is performed on delivery dates starting from the smoothing horizon in the future back to today, in decreasing order of delivery date. The system parameter GLOBAL_SMOOTHING_HORIZON is used as the default global smoothing horizon in the absence of a local smoothing horizon at the scaling group level. Alerts are logged if smoothing is unable to reduce the warehouse quantity total to below the set capacity.
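For example, if a warehouse capacity of 10,000 cases is defined for a delivery date whose aggregated orders total 12,000 cases, smoothing attempts to move approximately 2,000 cases to earlier delivery dates; if it cannot, an alert is logged.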

Technical Details

The smoothing logic first retrieves the system parameter SMOOTHING_GLOBAL_FLAG to determine whether smoothing is enabled at the global level. If smoothing is not enabled, the script completes successfully without doing anything. Otherwise, multiple threads are spawned to carry out smoothing over the SMOOTHABLE_ASSIGNMENTS table partitions. The maximum number of threads is configurable and is defined in the restart_control table.

The output of smoothing is a set of modified warehouse purchase orders, some of which are updated with changed quantities, while some are new relative to the input orders from RPAS. An intermediate table, smoothing_detail, stores the pre- and post-smoothing totals on delivery dates that have a valid capacity defined. This table is later used in the scaling modules to ensure that the warehouse receiving capacity is not broken in scaling either.

Script Name: smooth_order.sh

This script takes no input parameters. When scheduling the sequence of execution, this script must be executed before scale_order.sh and after orders have been loaded into the warehouse purchase order table by the cron_import_order.sh script. Also, pre_scale.sh must be executed before this script.

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Re-execute the script. The default recovery built into the smoothing logic starts from the last save-point; that is, it ensures that smoothing does not attempt to smooth a successfully smoothed Scaling Group/warehouse/delivery date combination. If, for any reason, the re-execution should redo smoothing on already smoothed combinations, flip the system parameter RESTART_SMOOTHING_FROM_SAVEPOINT to N before re-executing the script.
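If the save-point behavior must be overridden, the parameter can be flipped before re-execution. The sketch below is illustrative only; the system parameter table and column names are assumptions, so use the parameter maintenance mechanism provided with your installation.

  -- Hedged sketch: table and column names are assumed, not the shipped schema.
  UPDATE system_parameter
     SET param_value = 'N'
   WHERE param_name = 'RESTART_SMOOTHING_FROM_SAVEPOINT';
  COMMIT;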

Supplier and Container Scaling

The following sections describe the process of Supplier and Container Scaling.

Supplier Scaling

Supplier scaling occurs on a daily basis for today up to a user-specified scaling horizon. Those days on which the user has specified supplier minimums are candidates for supplier scaling.

Supplier scaling begins by summing the planned warehouse POs for a release date (also called the order date) and a grouping of Suppliers, SKUs, and Warehouses called a Scaling Group. If the summed PO totals are at least as large as the defined minimums, the minimums are considered satisfied for the release date.

If the summed PO totals are less than some of the minimums, the system attempts to move POs planned for future release dates to the current release date until the minimums are met.

This process is repeated sequentially for each release date, starting from today up to a user-specified supplier scaling horizon, in increasing order of release date.

Container Scaling

Container Scaling (CS) occurs on a daily basis for today up to a user-specified scaling horizon. If Supplier Minimum Scaling (SMS) is turned ON, then SMS happens before container scaling on any given release date. Those days on which the user has specified container dimension constraints are candidates for container scaling.

Container scaling first places order quantities into containers according to fit, at the same time attempting to group orders for a particular warehouse into the same containers. An order quantity does not fit in a container if adding the quantity to the existing total would exceed a container maximum constraint. When all orders have been placed in containers, those containers whose totals do not meet a user-specified minimum are subject to scaling. The system attempts to move PO quantities planned for future release dates to the current release until a container minimum has been met.

This process is repeated sequentially for each release date.

Technical Details

The scaling logic first retrieves the supplier scaling and container scaling system parameters to determine whether either module is enabled at the global level. If not, the script completes successfully having done nothing.

Next, data is pulled together into the driving table, SCALING_GROUP_ORDER_MAP. Yesterday's data is deleted from the table before rebuilding the table for the current run.

Finally, scaling is performed. The process spawns multiple threads to perform the operation over the SCALING_GROUP_ORDER_MAP table partitions. A record is logged in the RESTART_SCALING table each time scaling is completed for a Scaling Group, Release Date, and scaling module (supplier scaling or container scaling). The record aids restart recovery and prevents re-scaling orders, which, in certain circumstances, could produce inaccurate results.

Script Name: scale_order.sh

Input Parameters

Period to Scale: [releasing_today|releasing_in_future|releasing_any_day].

releasing_today: Scale only today and exit. This allows the critical release process to execute before scaling subsequent forecast days.

releasing_in_future: Scale all forecast days starting from tomorrow. This can be used after releasing_today.

releasing_any_day: Scale all days starting from today through the scaling horizon.


Note:

When separating the scaling execution by time period using the releasing_today and releasing_in_future parameters it is critical that the releasing_today script call is made before releasing_in_future.

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the batch. Recovery is built into the scaling logic such that scaling is not repeated for a completed Scaling Group/release date/scaling module (supplier scaling or container scaling).

Partial Pallet Rounding

If partial pallet rounding is turned on, then each order line is rounded before aggregation for comparison against the warehouse receiving capacity, Supplier minimum, or Container Scaling constraints.

By default, rounding is turned off in the system parameter PARTIAL_PALLET_ROUNDING_FLAG. When moving orders in scaling or smoothing with rounding turned on, the source-side rounded total is decreased by an amount equal to the difference between the rounded order totals before and after the move.

When moving orders in scaling with rounding turned on, the destination side rounded total is increased by either adding the rounded move quantity when no matching order exists on destination side or by adding the difference of rounded order total after and before the move when a matching order exists on destination side.

When loading orders into container, each partial pallet is counted as full pallet towards the container capacity. Please note that rounding is only effective when the constraints are set in terms of pallets and are applicable in smoothing or scaling.
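A small worked example may help; the pallet size and quantities below are illustrative only. Assume a pallet of 100 units and a source-day order of 250 units, which rounds up to 300. If scaling moves 120 units off that day, the remaining 130 units round to 200:

    Order before move:    250 units -> rounds to 300
    Quantity moved:       120 units
    Order after move:     130 units -> rounds to 200
    Source-side decrease: 300 - 200 = 100 (not round(120) = 200)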

Post-Scaling

When scaling is completed for all days (today and future release days) the modified order quantities and executed Supplier Purchase Quantities (SPQ) are sent to the AIP RPAS platform to be reflected in the WRP workbooks and WRP alert calculations.


Note:

The modified order quantities are not re-reconciled.

The RETL exports to be executed are contained in the export_scale.config file. Once the exports are complete, the files are archived and readied for transfer to AIP RPAS.

Script Name: post_scale.sh

This script exports modified warehouse POs as a result of smoothing and scaling and executed SPQ quantities to AIP-RPAS.

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the script.

Merge Purchase Orders

Prior to releasing warehouse purchase orders, they must be moved from an intermediate table to the final warehouse order table. A parameter passed into the executing script controls which days' orders are copied. This step first clears unreleased warehouse purchase orders (yesterday's forecast orders) from the NON_CONTENTS_ORDER table; only orders that fall within the specified time period are deleted. Then, the new purchase orders are copied from the WAREHOUSE_PURCHASE_ORDER table to the NON_CONTENTS_ORDER table. Only orders with an order quantity greater than 0 are copied.

Script Name: merge_order.sh

This script moves scaled into-warehouse POs to the final database table for Release.

Input Parameters

Following are the input parameters for the merge_order.sh script.

Release Period: [releasing_today|releasing_in_future|releasing_any_day]. This is a required parameter.


Note:

This script is used only for warehouse purchase orders. No other orders or transfers require this script to be run before releasing.

Parameter Action
releasing_today Clears out unreleased orders with a release date of today and copies new orders with a release date of today to the NON_CONTENTS_ORDER table.
releasing_in_future Clears out unreleased orders with a release date greater than today and copies new orders, with a release date greater than today, to the NON_CONTENTS_ORDER table.
releasing_any_day Clears out all unreleased orders and copies the new orders, having any release date, to the NON_CONTENTS_ORDER table.
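For instance, to refresh only today's warehouse purchase orders ahead of the critical release (an illustrative call that assumes the release period is passed as the script's single positional argument):

    merge_order.sh releasing_today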

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the script.

Release Orders to RMS

The following sections describe the process of Release Orders to RMS.

Order Release Overview

After the nightly RPAS batch run, the planned store and warehouse orders are extracted to a set of flat files (.dat data files) that are loaded into AIP Oracle. Similarly, after the Intra-day replan, store orders are extracted to a set of .dat files and loaded into AIP Oracle. The Order Release batch determines which orders must be released to the order execution system (RMS). Overnight orders that are candidates for release must have a release date of today. Intra-day orders that are candidates for release must have a release date of today and a release wave equal to the release wave passed into the batch release script. The release date is determined by the source lead time. Additionally, the destination of a transfer must have a warehouse-chamber status of either Release or Closing Down. Once these conditions are met, the order is assigned either a purchase order number or a transfer number. The purchase order or transfer is then released to RMS, where the SKU, destination, and order quantity are communicated to the supplier or warehouse source.

The released orders and forecast purchase orders are also visible from the OM Online screens. You are able to perform an early release or modify the order quantity, the delivery date, and the destination of a purchase order. In the case of an Intra-day release, if an order in a future release wave is manually released, its release wave is changed to null, indicating a manual release. These user-entered modifications must also be communicated to RMS to be executed by the sender of the order.

Figure 7-3 Order Release Flow


Release Orders

An overnight batch release of AIP RPAS-generated orders and transfers is performed once in the morning before the online users resume their daily screen activity. All orders and transfers that are scheduled for release today and are not assigned to any release wave are communicated to RMS. The orders are assigned a purchase order number or transfer number, and the order status is then set to Open.

The Intra-day batch release is a delayed release of into-store orders, with the option of replanning prior to release. An order for a SKU/store set to be released in a selected release wave can be replanned in prior release waves. Functionally, each replanned order replaces the previous order. Technically, the orders are not physically deleted; they are identified and updated as replanned. At the time of release, only those orders that are marked to be released in the given release wave and are not replanned (not deleted) are released.

There are some rules about the order in which release waves can be sequenced. The release wave number denotes a nominal time from 0 to 23, where 0 indicates midnight and 23 indicates 11:00 PM. When configuring the waves, gaps are allowed; for example, you can set up waves such as 2, 5, 8, and 10. Once configured, the waves must be executed in order of increasing wave number, and a wave cannot be skipped in order to execute a later one. Wave execution refers to both loading and releasing orders for a particular wave. The overnight load and overnight release always take place before any Intra-day load or Intra-day release.

Technical Details

This process releases store purchase orders, store transfers, warehouse transfers, and warehouse purchase orders to the merchandising system. The order information is written to a set of staging tables. The records on the staging tables provide detail about the order items and the type of action that must be performed by the RMS message subscription logic so that the RMS order is in sync with the AIP order. An Enterprise Java Bean called OrderSenderBean constantly polls the table to find new records. It then reads the details from the staging tables and the base order tables to generate the messages sent to RMS through the RIB.

Script Name: cron_release.sh

This is a wrapper shell script to batch release Purchase Orders and transfers from AIP Oracle tables.

Input Parameters

Following are the input parameters for the cron_release.sh script.

  • Order Type (o):[transfer|purchase|all]. This is a required parameter.

    Parameter Action
    -o transfer Releases transfers only.
    -o purchase Releases purchase orders only.
    -o all Releases both transfers and purchase orders.

  • Destination Type (d): [store|warehouse|all]. This is a required parameter.

    Parameter Action
    -d store Releases into-store orders only.
    -d warehouse Releases into-warehouse orders only.
    -d all Releases both into-store and into-warehouse orders.

  • Release Wave (w): [A number between 0 and 23]. This is an optional parameter that applies to destination_type store orders only. This is used to release Intra-day orders for a given release wave.

    Depending upon the order_type, destination_type, and release_wave parameter values passed into this script, the desired combination of orders can be released. This script, together with the cron_import_order.sh script, can be repeated with different parameters to load and release orders sequentially, in order of time sensitivity.

Example

For many retailers releasing overnight transfers is more time critical than releasing overnight purchase orders. The following example shows an execution order of scripts that can be followed to load and release overnight transfers before overnight purchase orders.

  1. Load and release all transfers first.

    cron_import_order.sh -o transfer -d all
    cron_release.sh -o transfer -d all

  2. Next, load and release all overnight POs.

    cron_import_order.sh -o purchase -d all
    cron_release.sh -o purchase -d all

  3. Execute post release and post-critical processing - cron_post_release.sh -o all -d all.

Processing Steps

This script calls the following scripts depending on the value of the destination type parameter passed into it:

  • scripts/cron_release_store_order.sh

  • scripts/cron_release_non_contents_order.sh

Release Store Orders

This process is executed by cron_release.sh when the dest_type parameter is store or all. When called for releasing overnight orders, release wave parameter is overnight. When called for releasing Intra-day orders, release wave parameter is a number between 0 and 23. This process releases store transfers and purchase orders to the merchandising system. Order line items are grouped together and assigned a purchase order number or transfer number, and the order status is then set to Open. The order information is written to a set of staging tables. The records on the staging tables provide detail about the order items and the type of action that must be performed by the RMS message subscription logic so that the RMS order is in sync with the AIP order. An Enterprise Java Bean called OrderSenderBean constantly polls the table to find new records. It then reads the details from the staging tables and the base order tables in order to generate the messages sent to RMS through the RIB.

Technically, into-store purchase orders and into-store transfers follow the UPDATE model of the release process, which is based on assigning an order_number and changing the order_status using an UPDATE or MERGE statement on order lines already present in the main table.
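A minimal sketch of the UPDATE model is shown below. The column names, status code, and literal order number are illustrative assumptions; the actual ORDER_EXPORTER logic draws numbers from the ORDER_NUMBER range table and groups order lines according to the ORDER_DEFINITION configuration.

    -- Illustrative only: number and open today's valid store order lines
    -- that are already present in the main table.
    UPDATE store_order
       SET order_number = 123456,   -- placeholder; taken from the ORDER_NUMBER range
           order_status = 'O'       -- assumed code for Open
     WHERE release_date = TRUNC(SYSDATE)
       AND validation_failure_ind = 'N'
       AND order_number IS NULL;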

Script Name: cron_release_store_order.sh

The script calls the following PL/SQL wrapper procedure to perform the release of store purchase orders and/or transfers. Also listed are some of the important pre-release validation checks called from the wrapper procedure.


Note:

Users should avoid executing this script directly. Instead, the wrapper script cron_release.sh should be executed.

Package Procedure
ORDER_EXPORTER release_store_order

Check for Existence of Order Definitions

Checks to see if order definitions exist.

Causes a batch failure if order definitions are missing. The order definitions are used when assigning order numbers during order release.

Check for Existence of RMS/AIP SKU Mappings

Checks to see if RMS/AIP SKU mappings exist for all SKU/pack sizes.

Orders for a particular SKU/pack size cannot be released to RMS if the mapping is missing.

The validation failure indicator is set to Y on the STORE_ORDER table and an alert is created for the user to view in DM Online.

This does not cause the release process to halt.

The remaining SKU/pack sizes with valid mappings are released.

Check for 0 order quantity

Sets VALIDATION_FAILURE_IND to Y if order quantity is 0.

This prevents the order from being released to RMS.

Restart/Recovery Steps for Releasing Store Orders

In case of failure/errors while running cron_release_store_order.sh through cron_release.sh script, perform the following action:

Review the log file, which is located in the logs folder of the directory specified for the Oracle LOGHOME environment variable.

Error Indicating that the Maximum Transfer or Purchase Order Number was Reached

If you receive an error indicating that the maximum transfer or purchase order number was reached, perform the following steps.


Note:

This error means that the system ran out of available order numbers to assign to the orders that are ready to be released. Purchase Order number recycling happens automatically if a custom RMS purge script captures the purged purchase orders and sends them to AIP.

  1. Ensure that the custom rmse_order_purge.dat file is being received and processed once the AIP purchase order is purged from RMS.

  2. To immediately address the issue, the purchase order numbers and transfer numbers in RMS need to be compared against the range of numbers specified for AIP in the ORDER_NUMBER table.

    • Order_type P identifies the AIP number range for purchase orders.

    • Order_type T identifies the AIP transfer number range.

  3. If a sufficient range of values can be found, the ORDER_NUMBER current_value for the order type can be updated to match the smallest available number. Restart cron_release_store_order.sh from cron_release.sh.


    Note:

    Purchase order numbers and transfer number ranges may be allocated to many different systems. All of these systems feed orders into RMS, but RMS will not accept duplicate order numbers. Therefore, it is not recommended that you update the ORDER_NUMBER low_value or high_value without a thorough analysis of the entire system.
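The inspection described in steps 2 and 3 can be done with simple queries. This is a hedged sketch assuming direct SQL*Plus access; the repositioning value is an example only and must be verified against RMS first:

    -- Inspect AIP's configured ranges and current positions (order_type P and T).
    SELECT order_type, low_value, high_value, current_value
      FROM order_number;

    -- After verifying availability against RMS, reposition the counter for
    -- purchase orders (500000 is an example value only).
    UPDATE order_number
       SET current_value = 500000
     WHERE order_type = 'P';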

If you receive an error indicating that the order definitions check failed, insert the missing rows and restart cron_release_store_order.sh from cron_release.sh.


Note:

This error means that rows were removed from the ORDER_DEFINITION table. This table must contain four rows, one for each destination type and order type combination.

Table 7-2 shows the only order definition configuration supported in AIP.

Table 7-2 AIP Supported Order Definition Configuration

DEST_TYPE ORDER_TYPE USE_SOURCE USE_COMMODITY USE_PACK_SIZE USE_DEST USE_DELIVERY_DATE
S T Y N N Y Y
W T Y N N Y Y
S P Y Y Y Y Y
W P Y Y Y Y Y


An error raised from the orderExporter package indicates a failure within the order release logic itself.

Dependencies

The ability to release orders to RMS depends upon having the AIP RPAS Replenishment Planning batch generated orders loaded in the Oracle database. Therefore, the script should not run until the corresponding import is complete.

Release Warehouse Orders

This process is executed by cron_release.sh when the dest_type parameter is warehouse or all. A batch release of warehouse orders is performed once in the morning before the online users resume their daily activities. Order line items are grouped together and assigned a purchase order number or transfer number, and the order status is then set to Open. All warehouse orders scheduled for release today are communicated to RMS.

The order information is written to a set of staging tables. The records on the staging tables provide detail about the order items and the type of action that must be performed by the RMS message subscription logic so that the RMS order is in sync with the AIP order. An Enterprise Java Bean called OrderSenderBean constantly polls the table to find new records. It then reads the details from the staging tables and the base order tables to generate the messages sent to RMS through the RIB.

Technically, into-warehouse purchase orders follow the UPDATE model of the release process, while into-warehouse transfers follow the INSERT model. The UPDATE model is based on assigning an order_number and changing the order_status using an UPDATE or MERGE statement on order lines already present in the main table. The INSERT model assigns the order_number as part of the INSERT statement when inserting order lines into the main table. Functionally, both models of release are equivalent.
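For contrast with the UPDATE model sketched earlier, the INSERT model can be pictured as follows. The column list, status code, literal order number, and the use of i_non_contents_transfer as the source are illustrative assumptions:

    -- Illustrative only: warehouse transfer lines receive their number as
    -- part of the INSERT that moves them into the main table.
    INSERT INTO non_contents_order (order_number, order_status, order_quantity)
    SELECT 123457,            -- placeholder; taken from the ORDER_NUMBER range
           'O',               -- assumed code for Open
           t.order_quantity
      FROM i_non_contents_transfer t
     WHERE t.release_date = TRUNC(SYSDATE);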

Script Name: cron_release_non_contents_order.sh

The script calls the following PL/SQL wrapper procedure to perform the release of warehouse purchase orders and/or transfers. Also listed are some of the important pre-release validation checks called from the wrapper procedure.


Note:

This script should not be called directly; instead, the wrapper script cron_release.sh should be executed.

Package Procedure
ORDER_EXPORTER release_non_contents_order

Check for Existence of Order Definitions

Checks to see if order definitions exist.

Causes a batch failure if order definitions are missing. The order definitions are used when assigning order numbers during order release.

Check for Existence of AIP/RMS SKU Mappings

Checks to see if RMS/AIP SKU mappings exist for all SKU/pack sizes.

Orders for a particular SKU/pack size cannot be released to RMS if the mapping is missing.

The validation failure indicator is set to Y on the NON_CONTENTS_ORDER table and an alert is created for the user to view in DM Online.

This does not cause the release process to halt.

The remaining SKU/pack sizes with valid mappings are released.

Check for 0 order quantity and chamber status

Sets VALIDATION_FAILURE_IND to Y if order quantity is 0 or if the destination chamber does not have the status of Release or Closing Down. This prevents the order from being released to RMS.

Restart/Recovery Steps for Releasing Into-warehouse Orders

In case of failure/errors while running cron_release_non_contents_order.sh through cron_release.sh script, perform the following action. Review the log file, which is located in the logs folder of the directory specified for the Oracle LOGHOME environment variable.

Error Indicating that the Maximum Transfer or Purchase Order Number was Reached

If you receive an error indicating that the maximum transfer or purchase order number was reached, perform the following:


Note:

This error means that the system ran out of available order numbers to assign to the orders that are ready to be released. Purchase Order number recycling happens automatically if a custom RMS purge script captures the purchase orders and sends them to AIP.

  1. Ensure that the custom rmse_order_purge.dat file is being received and processed once the AIP purchase order is purged from RMS.

  2. To immediately address the issue, the purchase order numbers and transfer numbers in RMS need to be compared against the range of numbers specified for AIP in the ORDER_NUMBER table.

    • Order_type P identifies the AIP number range for purchase orders.

    • Order_type T identifies the AIP transfer number range.

  3. If a sufficient range of values can be found, the ORDER_NUMBER current_value for the order type can be updated to match the smallest available number. Restart cron_release_non_contents_order.sh from cron_release.sh.


Note:

Purchase order numbers and transfer number ranges may be allocated to many different systems. All of these systems feed orders into RMS, which will not accept duplicate order numbers. Therefore, it is not recommended that you update the ORDER_NUMBER low_value or high_value without a thorough analysis of the entire system.

If you receive an error indicating that the order definitions check failed, insert the missing rows and restart cron_release_non_contents_order.sh from cron_release.sh.


Note:

This error means that rows were removed from the ORDER_DEFINITION table. This table must contain four rows, one for each destination type and order type combination.

Table 7-2 shows the only order definition configuration supported in AIP.

An error raised from the orderExporter package indicates a failure within the order release logic itself.

Dependencies

The ability to release orders to RMS depends on having the OM-generated orders loaded in the Oracle database. Therefore, the script should not run until the corresponding import is completed.

Post Release

After all the desired orders are released, the post release process should be executed. It is recommended that this script be executed any time after the overnight release but before OM online users start their daily activities. This step does the following:

  • Identifies overdue orders.

  • Updates order history of into-warehouse orders.

  • Refreshes forecast transfers in the non_contents_order table: removes previously loaded forecast transfers and then copies the new forecast transfers from i_non_contents_transfer into the main table.

  • Signals that users can now start making changes or creating new data in the Online screens.

Technical Details

The structure of cron_post_release.sh is very similar to the structure of the cron_release.sh script. The script takes two input parameters, order_type and dest_type, to control the order types and destination types that are processed in any given execution.

Post release is a required post-critical process that should be executed after the critical overnight release process is over and before online user activity starts.

Script Name: cron_post_release.sh

This is a wrapper shell script for post release activities performed after cron_release is completed.

Input Parameters

Following are the input parameters for the cron_post_release.sh script.

  • Order Type (o): [transfer|purchase|all] This is a required parameter.

    Parameter Action
    -o transfer Executes post-release steps of transfers only.
    -o purchase Executes post-release steps of purchase orders only.
    -o all Executes post-release steps of both purchase orders and transfers.

  • Destination Type (d): [warehouse|store|all]. This is a required parameter.

    Parameter Action
    -d warehouse Executes post-release steps of into-warehouse orders only.
    -d store Executes post-release steps of into-store orders only.
    -d all Executes post-release steps of both into-warehouse and into-store orders.

Processing Steps

This script calls the following scripts depending on the value of the destination type parameter passed into it:

  • scripts/post_release_store_order.sh

  • scripts/post_release_warehouse_order.sh

Restart/Recovery

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the script.

Purge Closed Orders in AIP Online

Closed orders are purged on a daily basis to keep database access times fast. The closed order status must be provided to AIP from RMS or the external order management system. Closed orders are purged after a period of time that is configurable by the system administrators. The partitions in the STORE_ORDER and NON_CONTENTS_ORDER tables that become empty as a result of purging closed orders are dropped at the end of the order purge process. Finally, all empty sub-partitions are truncated to release the unused allocated tablespace back to Oracle.

Technical Details

The structure of order purging is also similar to the structure of the cron_release.sh script. The script takes two input parameters, order_type and dest_type, to control the order types and destination types that are processed in any given execution.

Purging of orders is not a critical process, so it can be scheduled to run in any non-critical time window of the day. However, it is recommended that cron_purge.sh be scheduled to run daily to prevent a build-up of closed orders in the tables (see the example invocation after the parameter descriptions below).

Script Name: cron_purge.sh

This is a wrapper shell script for purging closed Purchase Orders and closed transfers from AIP Oracle tables.

Input Parameters

Following are the input parameters for the cron_purge.sh script.

  • Order Type (o): [transfer|purchase|all] This is a required parameter.

    Parameter Action
    -o transfer Purges transfers only.
    -o purchase Purges purchase orders only.
    -o all Purges both purchase orders and transfers.

  • Destination Type (d): [warehouse|store|all]. This is a required parameter.

    Parameter Action
    -d warehouse Purges into-warehouse orders only.
    -d store Purges into-store orders only.
    -d all Purges both into-warehouse and into-store orders.
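For example, a crontab entry could run the full purge daily in a quiet window, per the scheduling recommendation above. The schedule, installation path, and log location are illustrative only:

    # Purge all closed orders daily at 02:30, outside the critical batch window.
    30 2 * * *  /path/to/aip/cron_purge.sh -o all -d all >> /path/to/logs/cron_purge.log 2>&1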

Processing Steps

The script cron_purge.sh calls the following scripts depending on the value of the destination type parameter passed into it.

Script Action
cron_purge_store_order.sh Purges Store Orders
cron_purge_non_contents_order.sh Purges Warehouse Orders

Purge Store Orders

The following sections describe the process of purging store orders.

Technical Details

This logic purges the closed orders from the STORE_ORDER table that have exceeded their purge age. The purge age is the PURGE_PERIOD (in days) set in the ORDER_PURGE_PERIOD table. The order age is the difference between the timestamp when the order was closed and today. It is recommended that closed store orders are purged by calling the wrapper script cron_purge.sh instead of calling the script cron_purge_store_order.sh directly.
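Conceptually, the purge reduces to a delete along these lines. The status code, timestamp column, and single-row parameter table are assumptions; the actual script performs this work through its own PL/SQL:

    -- Illustrative only: remove closed store orders older than the purge age.
    DELETE FROM store_order
     WHERE order_status = 'C'            -- assumed code for Closed
       AND close_timestamp <             -- assumed name of the close timestamp
           SYSDATE - (SELECT purge_period FROM order_purge_period);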

Restart/Recovery Steps for Purging Closed Store Orders

The purging of closed orders is a maintenance task that is not essential to the successful completion of the batch, though over time it is essential for maintaining manageable table sizes.

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the script.

Purge Warehouse Orders

The following sections describe the process of purging warehouse orders.

Technical Details

This logic purges the closed orders from the NON_CONTENTS_ORDER table that have exceeded their purge age. The purge age is the PURGE_PERIOD (in days) set in the ORDER_PURGE_PERIOD table. The order age is the difference between the timestamp when the order was closed and today. It is recommended that closed warehouse orders are purged by calling the wrapper script cron_purge.sh instead of calling the script cron_purge_non_contents_order.sh directly.

Restart/Recovery Steps for Purging Closed Warehouse Orders

The purging of closed orders is a maintenance task that is not essential to the successful completion of the batch, though over time it is essential for maintaining manageable table sizes.

If this script fails, perform the following steps:

  1. Examine the log files to determine the cause of the failure.

  2. Correct any identified setup or environment issues.

  3. Restart the script.

Purge PLSQL Logs

If PLSQL logging is turned on, log messages are written to a table called plsql_log. To avoid a build-up of excessive log data, it is recommended that old log messages be purged regularly during non-critical time. A shell script called purge_log.sh is provided in the integration home of the AIP Oracle batch scripts to perform the purging. The script takes no input parameters.

Logging can be turned on by setting the following two system parameters:

  1. PLSQL_LOG_LEVEL: This indicates the PLSQL logging level. Valid values are DEBUG, INFORMATION (default), ERROR, and NONE. DEBUG is the most informative level and ERROR the least; a level of NONE results in no logging at all.

  2. PLSQL_LOG_TARGETS: This indicates where the PLSQL logs are written. Valid values are FILES_ONLY (for BSA log files), TABLES_ONLY (for logging table) and FILES_AND_TABLES (for both, also the default).

Another system parameter, PLSQL_LOG_PURGE_PERIOD, controls the length of log retention (in number of log days). Records with a logging VDATE <= (current VDATE - PLSQL_LOG_PURGE_PERIOD) are purged when purge_log.sh is executed. The default value of this parameter is seven days.
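For example, with the default PLSQL_LOG_PURGE_PERIOD of seven days and a current VDATE of 2024-06-10, running the script removes records whose logging VDATE is on or before 2024-06-03 (the dates are illustrative only):

    purge_log.sh    # no parameters; purges plsql_log rows where
                    # log VDATE <= current VDATE - PLSQL_LOG_PURGE_PERIOD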