Oracle® Retail Size Profile Optimization Implementation Guide Release 14.1 E55738-01
This chapter contains a summary of the scripts that are needed to execute and maintain SPO through batch processing. The SPO batch process computes the following measure data:
Raw size profiles at all enabled escalation levels
Several dimension maps, including the SKU to Size Profile Code Map measure
Before the first batch run, the system environment must be set up along with certain data measures (batch parameters) that control the batch calculations. Pre-batch setup is outlined in the following sections.
After each batch run, a new set of raw size profiles are available for review and approval.
See the Oracle Retail Predictive Application Server Administration Guide for details on formatting load data files and on utilities that enable administrators to load data into RPAS.
Note: Comma-separated values (CSV) files are recommended to reduce the sizes of load files.
The following directories are used by the batch scripts. These directories are subdirectories of the <spo_directory> directory, which is defined by the implementer.
Table 8-1 Directories Used by Batch Scripts
Directory Name | Content of the Directory |
---|---|
bin | Batch scripts |
config | SPO template configuration |
domain | Domains |
input | Input files for building the domain |
logs | Log files from running any of the batch scripts |
temp | Temporary files used by the batch scripts |
Table 8-2 summarizes the available batch scripts. The batch scripts are located in the <spo_directory>/bin directory.
The following information is included in the table:
Name of the script
Short description of the script
Suggestion on how often to run the script
List of other batch scripts on which there is a dependency
Table 8-2 Batch Script Summary
Script Name | Type | Suggested Frequency | Dependencies |
---|---|---|---|
spo_batch.sh | Generate Size Profile | Weekly | None |
loadSizeOptMeasures.sh | Load measure data | Weekly | None |
There are two ways to check if a batch completed successfully:
In the batch log file, check for any errors, exceptions, or failures. If there are none, the batch completed successfully.
A generation ID is created when a batch run completes successfully. If a generation ID is available in the wizard process for the approve workbook, the batch ran successfully.
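As a hedged illustration of the log check described above, a small helper such as the following could scan a batch log for those keywords. The function name and the keyword pattern are assumptions for illustration, not part of the product:

```shell
# Returns success (0) when the given batch log contains no errors,
# exceptions, or failures. The keyword pattern is an assumption based
# on the guidance above, not the product's actual log format.
batch_log_ok() {
    ! grep -Eiq "error|exception|fail" "$1"
}
```

For example, `batch_log_ok logs/spo_batch.log && echo "clean run"` would confirm a clean run under these assumptions.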
This section contains detailed information on the following batch script:
Script
loadSizeOptMeasures.sh
Usage
loadSizeOptMeasures.sh
Notes
loadSizeOptMeasures.sh loads the following measure data:
Inventory
Sales
Measure Descriptions
Measure Labels
Message Table
Workbook Template Group Labels
Workbook Template Labels
Size Rank
Size to Size Range Display Map
SKU to Size Profile Code Map
SKU to Size Map
Script
spo_batch.sh
Usage
spo_batch.sh
Notes
spo_batch.sh runs the size profile batch generation. A rule group is provided to calculate the resolved size profile.
Note: The rules in the common_data rule group are crucial to the batch process. This group should never be modified in the implementation.
This script uses the RPAS mace utility. See the Oracle Retail Predictive Application Server Administration Guide for the Classic Client or the Oracle Retail Predictive Application Server Administration Guide for the Fusion Client for details on this utility.
To add any custom rules, add the custom rules into custom rule groups and invoke the custom rule groups in this script. For more details, see the following sections.
To run the batch, use the spo_batch.sh script (located in the $SIZEOPT_HOME/bin folder) as follows:
spo_batch.sh -d "{your-domain-path}"
The complete usage statement is as follows. Square brackets, [ ], indicate optional arguments to the script:
spo_batch.sh -d {masterpath}
[-noparallel | -maxprocesses {n}]
[-gidlabel {string}]
The following are notes on command line arguments:
If the -maxprocesses flag is omitted, the environment variable RPAS_PROCESSES is used to decide the maximum number of background processes. You can override this variable by setting the environment variable SIZEOPT_PROCESSES to a number greater than or equal to 1.
Using -noparallel is equivalent to using -maxprocesses set to 1.
The -gidlabel flag allows you to give a custom label/name to the Profile Generation ID. If -gidlabel is omitted, a default label will be assigned based on the time and date that the Profile was generated. This default label format is yyyy-mm-dd hh:mm:ss (Gn) (where Gn is the Generation ID).
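The precedence among -maxprocesses, SIZEOPT_PROCESSES, and RPAS_PROCESSES described above can be sketched as follows. This is an illustrative reimplementation of the documented behavior, not the script's actual code:

```shell
# Illustrative sketch of the documented process-count precedence;
# the real script's internals may differ.
# 1. An explicit -maxprocesses value (passed here as $1) wins.
# 2. Otherwise SIZEOPT_PROCESSES overrides RPAS_PROCESSES.
# 3. Otherwise RPAS_PROCESSES is used (falling back to 1 here).
resolve_maxprocesses() {
    if [ -n "${1:-}" ]; then
        echo "$1"
    elif [ -n "${SIZEOPT_PROCESSES:-}" ]; then
        echo "$SIZEOPT_PROCESSES"
    else
        echo "${RPAS_PROCESSES:-1}"
    fi
}
```

Under this sketch, -noparallel corresponds to resolving the count to 1.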
Note: In SPO, Generation IDs (GIDs) are run specific. The approval-related measures are independent of the GID; there is only one copy of them in the domain. Once an SPO run is approved, the results are copied over to the Approved Size Profile measures (for example, Approved Profile, Approved By, and Approve Date). If a subsequent GID (run) is approved, the Approved Size Profile measures are overwritten again. The intent is that the Approved Size Profile measures always reflect the latest approved size profiles, independent of the GID. These approved size profile measures are exported to other systems. The total alerted profiles are calculated based on the most recent approved profile status; there is only one version, which is not GID specific.
Script
spo_batch_localdomain.sh
Usage
spo_batch_localdomain.sh
Notes
spo_batch_localdomain.sh runs the size profile batch generation in a local domain to update or generate size profiles only for that local domain. A rule group is provided to calculate the resolved size profile.
Note: The rules in the common_data rule group are crucial to the batch process. This group should never be modified in the implementation.
This script uses the RPAS mace utility. See the Oracle Retail Predictive Application Server Administration Guide for the Classic Client or the Oracle Retail Predictive Application Server Administration Guide for the Fusion Client for details on this utility.
To add any custom rules, add the custom rules into custom rule groups and invoke the custom rule groups in this script. For more details, see the next section.
To run the batch, use the spo_batch_localdomain.sh script (located in the $SIZEOPT_HOME/bin folder) as follows:
spo_batch_localdomain.sh -d "{your-domain-path}" -gidlabel "{string}"
The complete usage statement is as follows. Square brackets, [ ], indicate optional arguments to the script:
spo_batch_localdomain.sh -d {local domain path}
-gidlabel {string}
[-gid {generation ID}]
The following are notes on command line arguments:
-d is the path to the local domain in which size profile generation is to be run.
-gidlabel is a label associated with the generation ID.
-gid is optional. If specified, the generation ID is used for the size profile generation. If not specified, the first unused GID is used.
Only the local domain (the non-HBI escalation levels) is run in this batch. The master domain (the HBI escalation levels) is not run.
Set the following variables:
RPAS_HOME
RPAS_JAVA_CLASSPATH
LD_LIBRARY_PATH
LIBPATH
PATH
Update the following variable settings in the file $SIZEOPT_HOME/bin/environment.sh to reflect current directory paths and environment:
SIZEOPT_HOME
SIZEOPT_DOMAINHOME
LOGLEVEL
RECORDLOGLEVEL
The following syntax allows the script to set a default value for each variable when it is not defined, but leaves the value unchanged if the variable has been previously defined in, say, the user's .profile:
: ${variable:=value}
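For example, using LOGLEVEL (one of the variables listed above), the idiom behaves as follows:

```shell
# If LOGLEVEL is already set (for example, in the user's .profile),
# the colon no-op leaves it alone; otherwise the default is assigned.
unset LOGLEVEL
: ${LOGLEVEL:=warning}
echo "$LOGLEVEL"      # warning (default applied)

LOGLEVEL=error
: ${LOGLEVEL:=warning}
echo "$LOGLEVEL"      # error (existing value preserved)
```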
The directory $SIZEOPT_HOME/bin should exist and be added to the PATH variable.
The values for LOGLEVEL and RECORDLOGLEVEL can be any one of the following: all, profile, debug, audit, information, warning, error, or none. These two variables are usually both set to warning or both set to error.
Make sure to include both $RPAS_HOME/bin and $SIZEOPT_HOME/bin in the PATH variable.
Some batch parameters must be set up before running the batch process. Typically, these parameters are set in workbooks. The following are measure labels of these parameters:
Batch Execution Flag: must be set to TRUE
Default History Start Date
Default History End Date
Default History Start Date Override: optional
Default History End Date Override: optional
Empty Generation ID: at least one Generation ID should be set to TRUE
Max Iteration Profile Optimization: should typically be 4 or greater
Angle Threshold: represented in degrees; should typically be less than 2
Profile Gen Method: 0 indicates the MLE method (best); 1 indicates the Full Presentation method (faster)
Enable/Disable Escalation Levels: should be set to TRUE for each escalation level the user wishes to enable
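A pre-flight check of these parameter values might look like the following sketch. The function and the way values are passed in as plain arguments are assumptions for illustration; the real values live in the domain and are set through workbooks:

```shell
# Illustrative validation of the batch parameters listed above.
# Arguments: batch flag, max iterations, angle threshold (degrees),
# profile generation method. Not part of the product.
check_batch_params() {
    flag="$1"; max_iter="$2"; angle="$3"; method="$4"
    [ "$flag" = "TRUE" ] || { echo "Batch Execution Flag must be TRUE"; return 1; }
    [ "$max_iter" -ge 4 ] || { echo "Max Iteration should typically be 4 or greater"; return 1; }
    # Angle Threshold is in degrees and should typically be less than 2
    awk "BEGIN { exit !($angle < 2) }" || { echo "Angle Threshold should be less than 2"; return 1; }
    case "$method" in
        0) ;;  # MLE method (best)
        1) ;;  # Full Presentation method (faster)
        *) echo "Profile Gen Method must be 0 or 1"; return 1 ;;
    esac
    echo "OK"
}
```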
The Prepack Optimization module has the following batch scripts:
pckopt_batch.sh: see Prepack Optimization Batch
pckopt_export_batch.sh: see Prepack Optimization Export Batch
pckopt_patchdata_spo.sh: see Prepack Optimization SPO Patch Batch
packopt_patchdata_AP.sh: see Prepack Optimization AP Patch Batch
The pckopt_batch.sh script runs the prepack optimization batch process. It has the following inputs:
-d: master domain path
-x: execute domain path; used only when the script is run from a local domain
-level: prepack optimization level number
-maxprocesses: maximum number of processes
-debug: debug mode
-approve: autoapprove
-noparallel: no parallelization
The pckopt_export_batch.sh script is the prepack optimization export batch. It converts prepack configurations and prepack calendars from prepack's internal format into a format that is acceptable to AP. The script also generates the pack hierarchy load file that supports the prepack configuration and prepack calendar output files. These files are passed to AP. The pack hierarchy file is loaded into the AP domain before the prepack configuration and prepack calendar are loaded in the AP domain.
To enhance performance, the exported information is generated directly into flat files. The pack hierarchy, prepack definition, and prepack calendar measures in SPO are not updated. To view the exported result in SPO, the pack hierarchy file must be loaded into the SPO domain before the prepack definition and prepack calendar measures are loaded.
This script has the following inputs:
-d: master domain path
-x: execute domain path; used only when the script is run from a local domain
-level: prepack optimization level number; if not specified, all levels are exported
-f: prepack definition output file; used if required to export to a plain text file
-c: prepack calendar output file; used if required to export to a plain text file
-h: prepack hierarchy output file
The pckopt_patchdata_spo.sh script must be run after each change in the product hierarchy of the SPO domain. It performs the following functions:
regenerates the size profile used in Prepack Optimization
reruns the export batch to generate updated prepack configurations and prepack calendars for AP to load
This script has the following inputs:
-d: master domain path
-debug: debug mode
-maxprocesses: maximum number of processes
-noparallel: no parallelization
-f: updated prepack definition output file
-h: updated prepack hierarchy file
The packopt_patchdata_AP.sh script must be run after each change in the product hierarchy of AP/Prepack; this includes both DPM actions and product hierarchy updates. This script performs the same functions as the pckopt_patchdata_spo.sh script. For more details, see Prepack Optimization SPO Patch Batch.
The SPO batch report, which runs after an SPO batch process, contains statistical information for the preprocessing and post-processing filters. The spo_batch.sh script generates the report as a CSV file. To generate the file, the batch process must first be run with the noclean option so that the intermediate measures are preserved. The report script is then run and performs calculations based on the intermediate and result measures.
The report script requires the following inputs:
-d: domain path - required
-intx: report intersection - required. This is provided by the user. For preprocessing results, this value must be higher than item/store. For post-processing results, this value must be higher than the escalation level normal intersection.
-elvl: escalation level - optional
-gid: generation ID - required
-out: output file name - required
-maxprocesses: maximum number of processes - optional
-noparallel: no parallelization - optional
-postonly: generate postprocessing report only - optional
The preprocessing filter report is generated based on a generation ID and a report intersection. It consists of two tables: statistics for item/store filters and statistics for style-color/store filters. Together, these tables report the following fields:
Number of all sku/stores
Number of sku/stores with non-zero inventory
Number of sku/stores with non-zero sales
Total sales of all sku/stores
Number of sku/stores with non-zero sales within a user-defined period
Total sales of all sku/store within a user-defined period
Number of sku/stores that passed the total seasonal length threshold filter
Total Sales within a user-defined period of all sku/stores that passed the seasonal length threshold filter
Number of sku/stores that passed the total sku/str sales threshold filter and season length filter
Total sales of all sku/stores that passed the total sku/str sales threshold filter and the season length filter within a user-defined period
Number of sku/stores that passed the total sku/str sales threshold, season length threshold, and eligible week percentage filters
Total sales of all sku/stores that passed the total sku/str sales threshold, season length threshold, and eligible week percentage filters
Number of skup/stores with non-zero sales within a user-defined period
Number of skup/stores that have at least one child (sku/store) that passed all sku/store filters
Total sales of all skup/stores that have at least one eligible child (sku/store) that passed all sku/store filters
Number of skup/stores that have at least X child (sku/store) that passed total sales filters and seasonal length filter, where X is the eligible sku's threshold
Total sales of all skup/stores that have at least X eligible child and passed the total sales filter and seasonal length filter, where X is the eligible sku's threshold
The count of skup/str/weeks, where weeks are within the start/end of skup/stores that have passed the total skup/str sales and eligible sku's threshold filter
The count of eligible MLE skup/str/weeks, where weeks are eligible (that is, eligible sku percentage and eligible skus filters are satisfied)
The count of eligible FP skup/str/weeks, where weeks are eligible (that is, eligible sku percentage and eligible skus filters are satisfied)
Total sales from final eligible sku/str/weeks for profile generation
Number of skup/stores that have passed all MLE filters (includes eligible sku percentage and eligible week count)
Number of skup/stores that have passed all FP filters (includes eligible sku percentage and eligible week count)
The postprocessing filter report is generated based on a generation ID, an escalation level, and a report intersection. The report contains the following fields:
The number of possible profiles using unpreprocessed sales within user periods
The number of possible profiles using unpreprocessed sales larger than the total sales threshold
The number of raw profiles (calculated using preprocessed sales)
The number of raw profiles that have passed the total sales per size, eligible skup/str/weeks and percentage significant sizes filters
The number of raw profiles that have passed the sales profile correlations, and the above filters
The number of system profiles that have passed the correlation, low sales quarter, and the above filters
The number of non-kink system profiles