Enhancements for Consumption-Driven Planning

This chapter covers the following topics:

Business Logic Engine

This section describes the Business Logic Engine (BLE) enhancements that support the consumption-driven planning (CDP) process. These enhancements are available only in Consumption-Driven Planning.

When you run a worksheet, Demantra re-evaluates all of the client expressions in the worksheet and saves the changes to the database. If data in the worksheet has been modified at an aggregated level, then Demantra splits the resulting data to the lowest level and saves it to the database.

Many of the calculations used to support inventory and order replenishment must reference values that are calculated at the worksheet (client) level and must be saved to the database. CDP worksheets view and calculate values at a very low level (for example, item or site), so unless the BLE is run, the results of these calculations would not be saved to the database simply by re-running the worksheet. For this reason, enhancements were made to provide better BLE support for CDP. These include the ability to trigger a BLE calculation when a worksheet is saved, or to invoke it as a user-driven method.

Other BLE enhancements include:

CDP also provides CDP BLE worksheets which contain calculated values and are used in the CDP BLE workflows. For more information, refer to CDP Business Logic Engine Worksheets.

Deploying the Business Logic Engine Cluster

Business Logic Engine (BLE) Cluster refers to running a BLE worksheet in a multiprocess, multithreaded manner rather than through the BLE step, which is a single-instance server with no clustering capability. BLE Cluster allows mass parallelization of a BLE process, enabling more efficient use of available system resources and dramatically reducing run times.

Assumptions

It is assumed that the Analytical Engine is deployed in your environment with the proper directory structure on Linux or UNIX. The details of the Analytical Engine are not discussed in this guide. For information on the Analytical Engine, refer to the Oracle Demantra Analytical Engine Guide.

BLE clustering requires robust database capabilities and is only available when the database is deployed on Oracle Exadata.

BLE clustering is currently available with the following Oracle Demantra modules:

BLE Cluster Design

BLE Cluster uses the Analytical Engine's distributed infrastructure when executing. BLE Cluster runs when the Analytical Engine runs. After each engine is finished with forecast calculation, it can start a BLE process filtered to the same data subset that engine was processing (engine task). One BLE Java process runs for each engine task that is running.

Each engine profile can execute multiple BLE worksheets (see Configuration below for more information).

BLE Cluster Deployment

The BLE Cluster must be deployed in the root directory /Engine where the /lib and /bin subdirectories of the Analytical Engine are located. Refer to the sections below for details.

Files and Directories

The file ble.sh is located in the Windows installation folder, in the following archive file:

To use the file on Linux, you must unpack it from the following path inside the archive file:

Next, perform the following steps:

  1. Copy the file Integration/ble.sh to the Engine/lib directory on the Exadata or SuperCluster machine.

  2. Copy the Integration directory (from the Windows installation set) into the /Engine directory on the Exadata or SuperCluster machine.

    Note: If you will be running the Analytical Engine on more than one Virtual Machine (VM), this step must be performed on each VM.

  3. Run dos2unix ble.sh. This file is located in the /lib folder. (The dos2unix program converts plain text files from DOS/MAC format to UNIX format.)

    Note: If you want to allocate more memory to each BLE Cluster process, you can alter the -Xms and -Xmx JVM parameters inside the file ble.sh.
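Step 3 can be sketched as follows on a throwaway copy. The documented tool is dos2unix; this illustration uses tr as a portable stand-in to show the same effect (stripping carriage returns), since a real run simply executes dos2unix ble.sh inside Engine/lib.

```shell
# Illustrative only: a real run targets Engine/lib/ble.sh with dos2unix.
f=$(mktemp)
printf 'echo hello\r\n' > "$f"          # simulate DOS (CR/LF) line endings
tr -d '\r' < "$f" > "$f.unix"           # same effect as: dos2unix ble.sh
mv "$f.unix" "$f"
cat "$f"                                # file now has UNIX line endings
```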

  4. Define the environment variable JARS and append all JAR files under Engine/Integration/lib to it.

    Copy the following commands into the .bash_profile file:

    for X in $ENGINE_ROOT/Integration/lib/*.jar

    do

    JARS=$JARS:$X

    done

    export JARS

    Note: Make sure all JAR files are copied in binary mode, not text mode. The shell keyword export must be lowercase; this applies to standard shells (sh, bash, ksh) on both Linux and Solaris.
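The loop in step 4 can be exercised against a scratch directory standing in for $ENGINE_ROOT/Integration/lib; the loop body is the same one placed in .bash_profile. The directory and JAR names below are illustrative.

```shell
# Simulate the Integration/lib layout with two empty JAR files.
ENGINE_ROOT=$(mktemp -d)                # real runs: the /Engine directory
mkdir -p "$ENGINE_ROOT/Integration/lib"
touch "$ENGINE_ROOT/Integration/lib/a.jar" "$ENGINE_ROOT/Integration/lib/b.jar"

# The .bash_profile loop: append each JAR path to JARS, colon-separated.
JARS=
for X in "$ENGINE_ROOT"/Integration/lib/*.jar
do
  JARS=$JARS:$X
done
export JARS
echo "$JARS"                            # colon-separated list of both JARs
```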

  5. Copy the files Integration/conf/DataSource.properties and Integration/conf/logconf.lcf into the Engine/lib/conf directory. If necessary, create the "conf" directory under /Engine/lib first. Set the appropriate values inside each of these files.

    Refer to the Oracle Demantra Installation Guide for information about the DataSource.properties file. Refer to the Oracle Demantra Implementation Guide for details about the logconf.lcf file.
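Step 5 can be sketched as below. ENGINE_ROOT stands in for the /Engine directory; here a scratch directory simulates the layout produced by step 2 so the commands are self-contained.

```shell
# Simulate the /Engine layout from step 2 (illustrative paths only).
ENGINE_ROOT=$(mktemp -d)                              # real runs: /Engine
mkdir -p "$ENGINE_ROOT/Integration/conf"
touch "$ENGINE_ROOT/Integration/conf/DataSource.properties" \
      "$ENGINE_ROOT/Integration/conf/logconf.lcf"     # simulated inputs

# Step 5 proper: create Engine/lib/conf if needed and copy both files.
mkdir -p "$ENGINE_ROOT/lib/conf"
cp "$ENGINE_ROOT/Integration/conf/DataSource.properties" \
   "$ENGINE_ROOT/Integration/conf/logconf.lcf" \
   "$ENGINE_ROOT/lib/conf/"
ls "$ENGINE_ROOT/lib/conf"                            # both files now present
```

After copying, edit both files to set the environment-specific values described in the Installation and Implementation guides.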

  6. Provide the appropriate permissions to the Engine folder (for example, using the "chmod" command).

  7. Define the BLE Cluster configuration parameters. These are described in the next section.

BLE Cluster Configuration Parameters

In the INIT_PARAMS_XXX table, set the VALUE_STRING column for the "EngPostProcessScript" parameter that corresponds to the engine profile that you will be running.

The template for the VALUE_STRING column for the "EngPostProcessScript" parameter is:

./ble.sh #BRANCH_ID# #TABLE1# #COLUMN1# #TABLE2# #COLUMN2# #SERVICE_NAME# #SKIP_ENG# BLEWsApp_ID1, BLEWsApp_ID2 BLEIncremental_shift Absolute_path_to_logs_Folder

The following parameters can be configured.

Important: The other parameters (those not listed below) should NOT be modified.

Example of the "EngPostProcessScript" parameter:

./ble.sh #BRANCH_ID# #TABLE1# #COLUMN1# #TABLE2# #COLUMN2# #SERVICE_NAME# #SKIP_ENG# QUERY:13267,QUERY:13320 0 /u01/demantra/7.3.1.5/EngineManager/Engine/lib

(The entire value is a single command line.)

The BLE Workflow Step

This section describes the BLE enhancements to the BLE steps in the workflow.

The field Select Filter Context can be set to None, Save Data, or Method. When None is selected, the field Select Relative Time Period becomes available to support net-change BLE execution. If Relative Time Period is greater than zero, the BLE executes only on combinations that have changed within that range, thereby minimizing unnecessary processing. For environments where the BLE is run weekly, Oracle recommends setting this parameter to 7 days.

When Select Filter Context is set to Save Data, the field Select series group becomes available. The series group defined here is used to evaluate whether the BLE needs to run when data is saved in a worksheet. As data is saved, the system update workflow runs. If a BLE step set to Save Data is included in the update workflow, the step evaluates whether a series in the selected series group was modified as part of the update. For any combination where at least one series in the group was modified, the BLE step executes. Save Data is appropriate only when the workflow is called as part of an update data process; including it in any other workflow has no effect.

If the Method option is chosen, the workflow calling the BLE is meant to be invoked ad hoc. When the method is called, the full context of the member from which the call is made is used as a filter on the BLE worksheet, and only combinations falling within this filter are processed.

Configuring BLE as Part of the Update Mechanism

The Update Data workflow runs the BLE as part of the update mechanism. To enable the BLE to run when the end user saves data in a worksheet, update the system parameter "ble_enable_worksheet_calculations" to 1 (default is 0) from the System tab in Business Modeler. This workflow includes the following steps:

When the ble_enable_worksheet_calculations parameter is set to 1, the Update Data workflow moves from the BLE Condition to the BLE Launcher step. The BLE Launcher step invokes the CDP BLE On Demand workflow.

The CDP BLE On Demand workflow is set up to call the CDP BLE calculations.

BLE steps are called under the CDP BLE On Demand workflow. Each BLE step in this workflow is configured to run in the Save Data context with a relevant series group that triggers the execution of this BLE step. The steps and the trigger series group are as follows:

If an update to a series in the series group occurs and Save Data is selected, then the appropriate BLE step in the list above is invoked. The BLE step receives the combination detail where the update was made and runs the BLE calculation on these combinations.

Filtering the BLE Workflow

The BLE workflow step can now be configured with a filter. The filter allows the same BLE worksheet to be called in different contexts based on business requirements. For example, if two business units need replenishment calculated at a different time of day, the same BLE worksheet can be used, with a different filter when called for each business unit.

Configure the BLE filter as follows:

  1. Navigate to the Parameters tab of the workflow step.

  2. Add a parameter with the name extra_filters.

  3. For values, populate pairs of level ID and member ID. Separate the level and member values within each pair with a comma, and separate the pairs with semicolons.

    Example: 425,3;425,4 will filter the BLE worksheet to the level with internal ID=425 and members = 3 or 4.
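The comma/semicolon structure of the extra_filters value can be illustrated with a short shell sketch; the parsing below is for illustration only (Demantra interprets the value internally), and the IDs are the same hypothetical ones as in the example above.

```shell
# Decompose "level,member;level,member" into its level/member pairs.
FILTERS="425,3;425,4"
echo "$FILTERS" | tr ';' '\n' | while IFS=',' read -r level member; do
  echo "level=$level member=$member"
done
# Prints:
#   level=425 member=3
#   level=425 member=4
```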

General Levels

Detailed consumption data can be viewed at very low levels in Demantra worksheets, such as at the store level and in daily time buckets.

CDP

The following General Levels are provided to support CDP:

The CDP level itself is primarily an internal construct used to bring Item and Store together. You should primarily use levels Store and Store Group when viewing consumption data. The CDP level, like all General Levels that have a Population Type set to Searchable, includes a Base Time Resolution setting. This setting, which is visible when creating or modifying a Level in Business Modeler, enables Demantra end users to view data in a worksheet at a time level that may be lower and more granular than the system time resolution. (The system time resolution is typically set to either Month or Week.)

The default Base Time Resolution setting for the CDP level is "Day," which means that CDP users can view data at the daily level in a worksheet, even if the system time resolution is set to Week.

The following system parameters have been added to control the history and forecast periods when viewing the daily CDP worksheets.

If the worksheet "time resolution" selection is "lowest period", then the worksheet uses these date parameters to determine the history and forecast periods. (Valid values for these parameters include sysdate, sysdate+1, sysdate-1, and a fixed date such as 04-08-2013 00:00:00.) If the worksheet "time resolution" selection is not "lowest period", then max_sales_date and min_forecast_date determine the history and forecast periods.

The default settings for these parameters are as follows:

If CDP is configured as weekly, then you should change MinForeDateLowestPeriod to sysdate+6. You can also set these parameters manually to a specific date. The dates are not changed automatically.

For more information about the Time Resolution setting, refer to "Adding a Population Attribute to a General Level" in the Oracle Demantra Implementation Guide.

Launch Management Level

The Launch Management general level supports the new product and new store introduction processes. The hierarchy includes the following levels:

Rolling Data Profiles

The following Rolling Data Profiles populate the Store Sell through Final Forecast Lag and Store Sell through Forecast Lag Series:

By default, these Series are all included in the predefined Rolling Profile Group called Store Sell Through.

Run the workflow Sell Through Forecast Archival to archive the Series above. For more information on the workflows available in CDP, see CDP Workflows.

Launch Management

This section provides information about the launch management functionality that supports CDP:

Using New Product Introduction

Use the CDP New Product Launch Management worksheet to perform new product introduction (NPI). This process links a new product (target) with an existing similar product (source) at a store, store group, or account. Additional historical information can also be copied from the source product and used as pseudo-history for the new target product. When selecting pseudo-history for an item, one or more data streams are copied from the source product. The pseudo-history is used for predicting future sales and demand.

For information about the CDP New Product Launch Management worksheet and how to create a new product introduction launch, refer to CDP New Product Launch Management worksheet.

Using New Store Introduction

Use the CDP New Store Launch Management worksheet to perform new store introduction (NSI). This process links a new store (target) to an existing similar store (source). Once the new store introduction launch is defined and requested, you can view, edit, or delete the store launch from the worksheet. Editing and deleting the new store launch request is only available if the Store Launch Date has not been reached.

For information about the CDP New Store Launch Management worksheet and how to create a new store introduction launch, refer to CDP New Store Launch Management worksheet.