Batch Framework Service Catalog

This section describes the batch services that are available to be configured.

Batch Exec Service

The Batch Exec service is the controller for all the other services, specifying groups of tasks to be run, their sequences, and top-level parameters.

The Batch Exec service groups are specified in a text file batch_exec_list.txt. In this file, each active line takes this form:

batch_type | service | service parameter

The first column is an identifier, which may be repeated on several lines to define a grouping of tasks to be run together. The second column indicates which task from the catalogue is being requested. The third column gives parameter details for that task (as necessary). Comments may be placed in the batch_exec_list.txt file by starting a comment line with the hash sign (#).

Here is a sample batch_exec_list.txt file for reference:

# Daily Batch Cycle
daily | waittrigger | daily_upd.txt~3600
daily | unpack      | daily_upd.zip
daily | calc        | exp_set
daily | measexport  | daily_exp_set
daily | measload    | load_oo_list
daily | sendtrigger | batch_load_complete.txt~ftp
daily | calc        | batch_oo
daily | rebuild | rebuild_daily_group
 
# Batch Cycle to Load OO
load_oo | measload | load_oo_list
load_oo | calc     | batch_oo
 
# Weekly Batch Cycle
weekly | calc       | exp_calc_set
weekly | measexport | weekly_exp_set
weekly | hierload   | clnd~14~N
weekly | hierload   | prod~14~N
weekly | hierload   | loc~14~N
weekly | measload   | load_act_list
weekly | measload   | load_oo_list
weekly | calc       | batch_week
weekly | calc       | batch_fcst
weekly | refresh    | refresh_weekly
weekly | rebuild    | rebuild_weekly
weekly | autobuild  |

In this sample file, three batch task groups are specified: daily, load_oo, and weekly. Note that these names are implementer-defined identifiers; there is nothing special about the names "daily" or "weekly". Each identifier is thus associated with a sequence of tasks, which will run in the order they are listed in the file.

Note also that no information is provided about times or schedules on which these task groups should be run. Scheduling information must be specified in the RPASCE Online Administration Tool.

The services listed for each batch task group are run in the order specified when that type of batch run is requested through the OAT interface. Details on the individual batch services and what their service parameters mean are described in the following sections.

Load Measure (Fact) Data: measload

The Load Measures service loads one or several measures, the data for which may be found in one or several files. The measload service checks for the required data files in the Object Storage incoming files area. The service can either validate the presence of all data files, treating any missing file as an error condition, or treat the files as optional and continue without error when they are not present.

Groups of measures to be loaded are specified in a control file, batch_loadmeas_list.txt, with the columns as follows:

  • Load set name

  • Parameter type, which must be one of the following:

    • H - Fact data file name

    • V - Validate option. If present, missing data files are treated as an error condition. No third-column parameter is required.

    • R - Rejected record threshold (optional). Requires a third-column parameter, which must be a positive integer. When this parameter is present, the process is stopped (error code 37) with a failure trigger when the number of rejected records exceeds the number specified. If this option is not specified, a warning trigger is created if there are any rejected records in the measure load; however, the process continues.

    • C - RPASCE v19 parameter, which is no longer supported.

    • M - RPASCE v19 parameter, which is no longer supported.

    • S - RPASCE v19 parameter, which is no longer supported.

  • Parameter value. Relative to the parameter type specified above.

Fact data files may contain the data for one or several facts in comma-separated value (CSV) format with a header line.

Here is an example control file for the Load Measure service:

# Load Forecast Measures
load_fcst|R|200
load_fcst|V|
load_fcst|H|mp_fcst.csv.ovr
load_fcst|H|mt_fcst.csv.ovr
load_fcst|H|lp_fcst.csv.ovr
load_fcst|H|lt_fcst.csv.ovr

In this example, if any files for the listed measures are absent, an error condition will be reported.

The Validate option checks for required data files in the internal input file directory, as well as files that have been placed in Object Storage. This allows the measload task's Validate option to correctly detect files that were previously placed in the internal input location by an unpack batch task.

When fact data files are loaded, some lines in the file may be rejected (possibly due to an incorrectly formatted input file or a position that does not exist in the dimension). By default, the RPASCE measure load process does not treat these rejected lines as errors and continues loading any valid lines from the rest of the file. Because the batch framework does not report rejections as an error, the measload batch task writes the rejected record count into its own log file and also creates a rejected-records warning file in the outgoing area of Object Storage so that rejected lines can be detected.

The warning file has no content, providing all relevant information in the file name itself. The file name indicates the name of the measure, the count of rejected records, and a timestamp to indicate when the task was run.

In the following example, the measure apcpfcstslsu had four rejected records when it was loaded on 26-April-2018 at 7:52am:

warning.eebatch_loadmeas.apcpfcstslsu.rejected.4.20180426075212

If the optional |R| parameter is given in the control file, the numerical value indicates a limit to the number of rejected lines, above which the rejections are reported as an error rather than a warning. For example, in the load_fcst configuration shown above, the limit is given as 200. If, while loading any particular measure in this load group, more than 200 rejected record lines are detected, the task halts and reports an error, and the batch sequence that includes this task also halts. In this way, if a badly formatted or corrupted data file is uploaded, later batch steps such as calculations or workbook refreshes will not be performed.

Export Measure (Fact) Data: measexport

The Export Measures service allows the flat-file export of one or more measures, using a control file, batch_export_list.txt, to group (and parameterize) the measure lists for particular export operations. The control file allows multiple parameters to specify the details of each export group.

Here are the columns in the export control file:

  • Export Set Name

  • Parameter type, which must be one of the following:

    • M - Measure name, format name (optional), and output file name (optional), separated by |. The format name is the name of the format directive specified in the measure_format_list.txt file. The output file name option is only supported when the control set uses the I parameter; if the I parameter is not specified, the output file name is determined by the O parameter.

    • F - Filter mask measure.

    • X - Base intersection. F or X is required.

    • O - Output file name (optional). One single output file for all exported measures. This parameter is ignored if the I parameter is specified. If multiple O parameters are provided, only the first one will be used.

    • I - Flag to use an individual output file for each measure (optional). The default file name is measure_name.csv.ovr unless it is overridden by the output file name option of the M parameter.

    • S - File share destination. Keywords: ftp, temp, cloud:<app>, where <app> is one of: ri, mfp, rdf, ap, rms. (For the rare case where multiple instances of a single RPASCE application are to be deployed, the second instance may be integrated by using the values mfp2, rdf2, or ap2.) The cloud:<app> keyword sends the output file to the indicated Oracle RGBU Cloud Service application, if configured for your environment. The temp keyword sends the output to an internal temporary location, where it will not be accessible externally but can be used by other configured batch tasks such as the transform file service (by specifying temp as the input value for the subsequent task).

    • C - Compress output. Optional; file (or files) will be compressed into .zip format.

    • D - Delimiter. Optional character to use in place of a comma; to select the | character as the delimiter, specify the keyword PIPE.

      Note:

      D simply replaces all commas with the delimiter. It does not work well with string measure values that include commas.

    • U - Uppercase position name (optional). Does not take a third-column option. If specified, position names in the output file are converted to uppercase.

    • P - useDate parameter (optional). Requires either the value start or end in the third column. It is used for a measure that has a high level on the CLND hierarchy (for example, Mnth). When specified, it replaces the measure's CLND-level position in each data record with the lowest CLND-level position (for example, Day) corresponding to that higher-level position. The value start or end determines whether the starting day position or the ending day position of the corresponding period is output.

    • N - Specifies when to skip NA values in export (optional). Valid options for the third column are never, allna, and anyna.

      • never - Export the corresponding data point even if all of the measure's values are NA. This option essentially exports all data points in the logical space.

      • allna - Do not output the corresponding data point if all measure values are NA (the default mode).

      • anyna - Do not output the corresponding data point if any one measure value is NA.

    • T - Appends a unique identifier as a suffix to the file name (optional). This generates a unique name so that parallel exports can proceed. This flag can only be used with default file naming, that is, without the |O| flag. It is specifically designed to work with the intradayexport() expression.

  • Parameter value. Relative to the parameter type selected above.

Here is an example control file for the Export Measure service:

# Export PoC Plan CP
lpcp|F|lpcpexportb
lpcp|S|ftp
lpcp|M|lpcpbopc
lpcp|M|lpcpbopr
lpcp|M|lpcpbopu
lpcp|M|lpcpeopc
lpcp|M|lpcpeopr 

For the lpcp export group, the implementer has provided a Filter Mask measure, has indicated that the file will be published to the Object Storage outgoing file location (the ftp parameter naming is for compatibility with previous RPASCE versions), and has provided a list of several measures to be included in the output.
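For further illustration, here is a hypothetical export set (the set name, intersection name, and measure names are placeholders; use an intersection valid for your configuration) showing the X, I, D, and N options described above. Each measure is exported to its own pipe-delimited file, skipping any data point whose measure values are all NA:

# Hypothetical per-measure export with pipe delimiter
wk_exp|X|week_sku_str
wk_exp|I|
wk_exp|D|PIPE
wk_exp|N|allna
wk_exp|S|ftp
wk_exp|M|drtyslsu
wk_exp|M|drtyslsr

Because the I flag is set and no per-measure output file names are given, each measure would be written to its default file name (for example, drtyslsu.csv.ovr).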

Mace Calculation Service: calc

The Calc service, which indicates that the RPASCE utility mace is to be run, uses a control file called batch_calc_list.txt. The format of this file is as follows:

calc_list | [group or expression] | <group name or expr text>

The first column provides an identifier for each group of calc instructions. These identifiers are used to select calculations to be run either directly, or as part of a Batch Exec run.

The second column must contain the keyword group or expression to indicate whether the calculation to be run is a rule group registered in the application configuration or an individual expression given in the control file itself.

The final column provides either the name of the rule group to be executed or the text of the expression to be run.

As with the other control files, any line starting with # is ignored and can be used to comment or document the file, as needed.

Here is an example file for the calculation service:

# Calc Set for Batch Aggregation Weekly
batch_week | group | Batch_GB
batch_week | group | Batch_AggW
batch_week | group | Batch_InvRoll
batch_week | expression | LTWPNSlsR = DRTYNSls1R+DRTYNSls2R
batch_week | expression | LTWPNSlsU = DRTYSls1U+DRTYSls2U-DRTYRtn1U-DRTYRtn2U

# Calc Set for Generating Forecast
batch_fcst | group | Batch_Fcst_G
batch_fcst | group | Batch_Fcst_L

Export Hierarchy: hierexport

The Export Hierarchy service allows the flat-file export of one hierarchy (dimension/levels) using a control file, batch_exporthier_list.txt, to specify available options. The format of the control file is similar to the Export Measure control file, with the exception that only one hierarchy may be specified at a time.

The three columns in the control file are:

  • Export Set Name

  • Parameter Type, which must be one of these options:

    • H - Hierarchy name (required)

    • T - Export type: F - only formal positions; I - only informal positions; A (or omitted) - all positions

    • L - Header line export. Output file includes header line for dimension and label columns.

    • U - Export positions of user-defined dimensions. This implies header-line export mode; it cannot be used with the formal-only export type.

    • O - Output file name (optional; defaults to <hier>.dat).

    • C - Compress result file to .zip (optional)

    • S - File export destination. Keywords: ftp, cloud:<app>, where <app> is one of: ri, mfp, rdf, ap, or rms. (For the rare case where multiple instances of a single RPASCE application are to be deployed, the second instance may be integrated by using the values mfp2, rdf2, or ap2.) This sends the output file to the indicated Oracle RGBU Cloud Service application, if configured for your environment.

  • Parameter value, if required by the parameter type.

Here is an example control file for the Export Hier service:

# Export PROD hierarchy, compressed
prod_export|H|prod
prod_export|T|F
prod_export|O|prod_exp.dat
prod_export|C|
prod_export|S|ftp

In this example, the prod_export grouping indicates that only the formal positions in the PROD hierarchy will be written to a compressed file prod_exp.dat.zip and placed in the Object Storage outgoing file location (the ftp parameter naming is for compatibility with previous RPASCE versions).
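As a further illustration, here is a hypothetical export set (the set name is a placeholder) that exports all positions of the LOC hierarchy, including user-defined dimension positions, with a header line:

# Hypothetical LOC export with header line and user-defined dimensions
loc_export|H|loc
loc_export|T|A
loc_export|L|
loc_export|U|
loc_export|S|ftp

Note that the U option is compatible here because the export type is A (all positions) rather than formal-only.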

Load PDS Dimension: loaddimdata

This service supports the option of loading dimension (hierarchy) data into your application from flat files. This task does not require a separate control file, but can be fully specified inside the Batch Exec (batch_exec_list.txt) control file itself.

The parameter column provided in the Batch Exec file contains two values, separated by the ~ character: the hierarchy to be loaded and the purgeAge value.

Here is an example:

weekly | loaddimdata | prod~28
weekly | loaddimdata | loc~28

This task, when run, looks for <hier>.csv.dat or <hier>.hdr.csv.dat files in Object Storage in the path planning/incoming/input. The task also checks for planning/incoming/input/dimdata.zip, and if found, will unpack the zip file and use any relevant .dat files located in it. If no incoming data files are found, a log message will indicate this, and then the Batch process will continue without error.

Load Alternate Rollup Data: altdimload

This service supports the option of loading alternate rollup data for a dimension (hierarchy) into your application from flat files. This task does not require a separate control file, but can be fully specified inside the Batch Exec (batch_exec_list.txt) control file itself.

The parameter column provided in the Batch Exec file contains one value, which is the input file name. This file must have been uploaded into Object Storage in the path planning/<subnamespace>/incoming/input.

Here is an example:

weekly | altdimload | scl1.csv.dat 

This task, when run, looks for <name>.csv.dat in Object Storage in the path planning/<subnamespace>/incoming/input. If no incoming data files are found, a log message will indicate this, and the batch process will continue without error.

Batch Control Entries Required to Execute Interfaces

The batch control framework supports calling the new interfaces through the interface service in the batch_exec_list.txt control file. Customers can update their batch control files to call the new interfaces. The interface service takes the name of the interface to execute as its main parameter.

In addition to the interface name, the service accepts two optional parameters, INTF_PROC_PARM and INTF_EXEC_PARM, using a tilde (~) as the separator. INTF_PROC_PARM can be set to a specific <PARAM> if the customer wants to call the interface for a subset of its parameters; if it is not passed, the service executes that interface for all configured parameters (PARAM) in a single call. INTF_EXEC_PARM can currently take the optional value L to re-run the last executed interface using the same dataset. By default, each interface execution internally uses a unique RUN_ID for the dataset in the interface, and the dataset is marked as processed after the successful completion of the interface execution. If the interface is re-run, it will not fetch any new data unless the source system sends new data; the optional L parameter can be used to re-run the interface using the last-used RUN_ID.

Typical control file entries for calling multiple interfaces (both exporters and importers) within a weekly batch are shown in the following example batch_exec_list.txt file.

Example

batch_weekly | interface | MFP_PLAN1_EXP
batch_weekly | interface | W_PDS_CALENDAR_D
batch_weekly | interface | W_PDS_SLS_IT_LC_WK_A
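The optional parameters, when needed, follow the interface name with tilde separators. For illustration (MFP_PLAN1 is a placeholder PARAM value, not taken from the product documentation), the first line below restricts an interface execution to a single configured parameter, and the second also passes L as the INTF_EXEC_PARM to re-run the interface with the last-used RUN_ID:

batch_weekly | interface | MFP_PLAN1_EXP~MFP_PLAN1
batch_weekly | interface | MFP_PLAN1_EXP~MFP_PLAN1~L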

Wait for Trigger File: waittrigger

For a recurring batch task (such as a nightly or weekly batch), you can schedule the batch to run at a particular time, but you must also ensure that it will not start processing until the required input files are available. This requirement is supported by the waittrigger task. The trigger file is a temporary file that is uploaded to Object Storage under the planning/incoming path. Note that this file must be uploaded to the Object Storage last, after all the other required files are present. Note also that this trigger file will be deleted once the waittrigger task sees it, so you must not specify an actual data file as your trigger. For example, if the batch must wait for prod.dat to be present, you must specify a second file name, such as prod_dat_trigger.txt, and the external integration process that sends the latest prod.dat into the Cloud environment must also create prod_dat_trigger.txt after the prod.dat file is available.

By default, the waittrigger task waits for 23 hours for the trigger file to appear before timing out and reporting an error. A shorter timeout may optionally be specified, given in the number of seconds to wait.

The waittrigger task requires only an entry in the batch_exec_list.txt control file; no separate control file is required. Here is an example configuration for a waittrigger task:

daily | waittrigger | daily_upd.txt~3600

This example daily batch task waits up to one hour for the file daily_upd.txt to be present in the Object Storage location. The third column uses the tilde (~) character as a separator and gives two parameter values:

  • The trigger file name (simple file names only, no paths)

  • (Optional) The number of seconds to wait before timing out

Send a Trigger File: sendtrigger

In order to notify other processes, either internal or external to the Oracle RGBU Cloud environment, of the progress of a batch task sequence, the sendtrigger task may be configured. This task takes two parameter values, separated by the tilde (~) character. The first parameter specifies the trigger file name.

The second portion specifies the destination in which the trigger file will be created:

  • ftp - writes to the Object Storage outgoing location for this application (ftp designation is for legacy compatibility with earlier RPASCE versions).

  • cloud:<app> - sends the file to the Oracle RGBU Cloud Service indicated by <app>, with valid values ri, mfp, rdf, ap, or rms. (For the rare case where multiple instances of a single RPASCE application are to be deployed, the second instance may be integrated by using the values mfp2, rdf2, or ap2.)

  • input - writes the file into the input directory of the current application.

The sendtrigger task requires only an entry in the batch_exec_list.txt control file; no separate control file is required. Here is an example entry for a sendtrigger task:

daily | sendtrigger | batch_load_complete.txt~ftp

This control line indicates that the file batch_load_complete.txt will be created in the Object Storage outgoing file area once batch execution successfully reaches this point in the daily batch sequence.
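Similarly, a hypothetical entry (the trigger file name is illustrative) that notifies a downstream Retail Insights environment would use the cloud:<app> destination described above:

daily | sendtrigger | mfp_export_complete.txt~cloud:ri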

Note that no automatic clean-up of the trigger file is performed, so other processes that look for the presence of this trigger file must remove it. If a trigger file from the previous batch run is still in place during a subsequent batch run, the file will remain in place and the file's timestamp will be updated.

Extract Input Files from Archive: unpack

Batch tasks such as loaddimdata or measload expect to find their individual .dat or .ovr files in the incoming file areas. For some integration needs, it may be preferable to send these files together in a compressed archive for faster upload and to ensure that all matching files arrive together. This integration scenario is supported by the unpack task. The unpack task may specify files with the .zip extension to be found in the incoming file area and unpacked into the application input directory. The archive must contain only simple file names and no subdirectory structure, as a subdirectory structure would prevent the files from being found in the <application>/input directory, where later batch tasks expect them.

The daily batch example above contains this usage for the unpack task:

daily | unpack      | daily_upd.zip

The task specifies that the archive file daily_upd.zip is expected to be in the incoming file area, and it will be unpacked into the application's input directory before any subsequent batch tasks are performed.

Transform File Service

The Transform File service provides simple integration capabilities for file transformation before hierarchy or measure file loads, such as splitting a file, renaming a file, swapping columns in a file, and so on. It also provides an option to filter file records based on particular data values. It does not call any RPASCE utilities; instead, it uses a set of pre-defined functions that are called and controlled through control file settings. This provides powerful integration capabilities without requiring the user to create any external process to format the files, so it fits readily into the regular batch framework. For example, a source system might send data for multiple measures in a single file while the configured RPASCE solution expects one file per measure; in such cases, users can call this service to split those files. This process can also be used to transform exported output files into required formats that can be copied to other locations.

The parameters for this service are provided in a control file batch_xform_list.txt. This service can be invoked from batch_exec_list.txt (Batch Exec Control file) as follows:

<batch_set_name> | transform | <transform_set_name>
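For example, a transform set named rms_oo (as defined in Example 1 below) could be invoked from a hypothetical daily batch group as follows:

daily | transform | rms_oo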

Here are the columns in the batch transform control file, separated by the pipe symbol ("|"), for the different functions that can be used:

  • Transform Set Name

  • Parameter Type, which must be one of the following:

    Table 1-1 Transform Parameters

    • I - Input file location. Valid parameter values are: cloud (cloud share location, for files coming in from other Oracle RGBU Cloud Services), ftp_in (SFTP input), ftp_out (SFTP output), dom_in (application input; the default), dom_out (application output), temp (internal temporary location), ftp_dim_in (<SFTP>/rdm_input/dimdata), ftp_fact_in (<SFTP>/rdm_input/factdata), rdm_dim_in (<RDM_ROOT>/dimdata), and rdm_fact_in (<RDM_ROOT>/factdata, to transform interim export files).

    • O - Output file location. Valid parameter values are: dom_in (application input directory; the default), dom_out (application output directory), ftp_in (SFTP input directory), ftp_out (SFTP output directory), rdm_dim_in (PDS dimension data directory), rdm_fact_in (PDS fact data directory), and cloud:<app> (sends the file to another Oracle RGBU Cloud Service's SFTP input directory; valid values for <app> are ri, mfp, rdf, ap, and rms; for the rare case where multiple instances of a single RPASCE application are deployed, the second instance may be integrated by using the values mfp2, rdf2, or ap2).

    • D - Input field delimiter (default: comma); to use the pipe character, specify PIPE.

    • E - Output file delimiter (default: comma); to use the pipe character, specify PIPE.

    • F - Input file name (required). Multiple F entries may be given in order to merge multiple files, but at least one F entry is required.

    • X - Transformed file name and field numbers (in order) as the parameter value. The field list may also use the Linux cut command format. One entry must be created for each separate output file. An additional argument may also be given; if provided, it is used as the header for the transformed file.

    • V - Validate that the input files are present.

    • Q - Add quotes; required if the data can contain commas and the input delimiter is not a comma.

    • L - Filter the file based on a filter column and filter value given as parameters. By default, records whose column equals the value are kept; to filter on not-equal instead, add the additional parameter N after the filter value.

    • U - Create output files containing only unique records.

    • C - Copy a column to the end of the file (the parameter is the column number to copy).

    • W - Swap columns from the input file (the parameters are the column numbers to swap).

    • S - Sort file columns based on key columns.

    • A - Add a constant value at the end of the file.

    • J - Join two columns using a separator and add the result to the end of the file.

    • G - Create a completion trigger output file, if needed, at the end.

    • Z - Compress output files of a particular pattern into a zip file. It requires the following four parameters, delimited by |:

      • <output_compressed_filename>

      • <input_files_prefix>

      • <input_files_extn>

      • <delete_compressed_files_flag (Y/N)>

    • M - Move or copy files of a particular pattern from the input to the output location. If M is used, the files are only moved or copied; no further processing is applied to them. It requires two parameters, delimited by |:

      • <move_file_pattern>

      • <delete_file_flag (Y/N)>

      If the delete_file_flag is Y, the source is removed and the operation is a true move. If the delete_file_flag is N, the operation is a copy and the source is not removed.

    • N - Do not push output files to the cloud:<app> location at the end; instead, hold the files in a temporary location for further processing, for example, to compress the files before pushing them to the cloud:<app> location.

    • H - Add a header to the transformed output file when option X is not used to transform the output file. The parameter must be the complete header for that file and is added as the first record of the output file.

    • Y - Delete the input files from the input directory after the transform; otherwise, the used input files will not be deleted.

    • B - Do not merge input files by file name pattern. By default, this task takes all input files with the same pattern and merges them. For example, if the input file name is given as sls.csv.ovr, it will merge all files matching sls.csv.ovr*. This option indicates that only the single input file with the exact name must be used.

  • Parameter Values - Relative to the parameter type selected above.

Example 1: To split a single file into multiple files based on column IDs.

rms_oo|F|rms_oo.csv.ovr
rms_oo|I|cloud
rms_oo|V|
rms_oo|X|drtyoou.csv.rpl|1,2,3,6
rms_oo|X|drtyooc.csv.rpl|1,2,3,7
rms_oo|X|drtyoor.csv.rpl|1,2,3,8

This example shows an input file split into multiple files using multiple X options based on column numbers. In this example, the output files are created in the application input directory.

Example 2: To split a single file into multiple files based on column IDs and also to filter records based on a column value.

rms_inv1|F|rms_inv.csv.ovr
rms_inv1|I|cloud
rms_inv1|V|
rms_inv1|L|5|N
rms_inv1|X|drtyeop1u.csv.ovr|1,2,3,6
rms_inv1|X|drtyeop1c.csv.ovr|1,2,3,7
rms_inv1|X|drtyeop1r.csv.ovr|1,2,3,8

In this example, the file is first filtered to keep only the records whose fifth column value is N, and those records are then split into multiple files.

Example 3: To copy columns and swap columns before writing the output file.

rms_curh|F|rms_curr.csv.ovr
rms_curh|I|cloud
rms_curh|C|3
rms_curh|W|2|6
rms_curh|U|
rms_curh|X|curh.csv.dat|2,3

In this example, the original file contains only five columns. The third column is copied to the end of the file as the sixth column due to the use of option C. Then, columns 2 and 6 are swapped due to the use of option W. Finally, columns 2 and 3 are written out after removing duplicates due to the use of option U.

Example 4: To add a constant value to a file and to join two columns based on a separator.

rms_patt3|F|rms_prod.csv.dat
rms_patt3|I|cloud
rms_patt3|L|22|NA|N
rms_patt3|A|BRAND
rms_patt3|J|34|22|_
rms_patt3|X|drdvprdattt.csv.ovr.3|1,34,35 

The scenario requires adding a constant value BRAND, concatenating it with another column, and exporting both columns.

In this example, the original file contains only 33 columns. It is first filtered for records not equal to NA in column 22. Then a constant value BRAND is added as column 34. Next, columns 34 and 22 are joined using the separator _ (underscore), and the result is added as column 35. Finally, column 1, along with the newly added columns 34 and 35, is extracted into an output file.

Example 5: The following sample shows the use of the E option to create an output file with a different delimiter and the Z option to compress the output file.

mfp_exp_ri|F|ri_mpop_plan.dat
mfp_exp_ri|F|ri_mpcp_plan.dat
mfp_exp_ri|I|temp
mfp_exp_ri|V|
mfp_exp_ri|X|W_RTL_PLAN1_PROD1_LC1_T1_FS.dat|4-
mfp_exp_ri|O|cloud:ri
mfp_exp_ri|E|PIPE
mfp_exp_ri|Z|RI_MFP_DATA|W_RTL_PLAN|dat|Y

In this example, the use of multiple F options merges the two input files and creates one output file containing only the fields from column 4 onward (the cut format 4-). Although the fields are comma-delimited by default, the final output file is created with the pipe delimiter due to the use of option E.

In addition, the use of the Z option compresses the output files matching the pattern W_RTL_PLAN*.dat created at the cloud:ri location into a compressed file named RI_MFP_DATA.zip and deletes the generated files after compressing them.

Example 6: The following example shows the use of the M option to copy a set of files from one location to another.

copy_dom_in|I|dom_in
copy_dom_in|O|ftp_out 
copy_dom_in|M|*.dat|N

This example copies all files matching the pattern *.dat from the application input location to the Object Storage outgoing file area. Because the delete_file_flag is set to N, the input files are not deleted and the operation is a copy. To move the files instead, the flag should be set to Y.

Custom Function: ap_set_datr

This feature allows product attributes to be assigned during item creation if the product attributes are defined as dimension attributes. For this functionality, the product attribute hierarchy (for example, PATR) must be defined with two dimensions (product attribute value and product attribute). The mapping of the product attribute values to items must be defined in the measure at the item/product attribute intersection level (for example, addvprdattt). This custom function can be called in a customer's batch control framework to register the loaded product attributes as dimension attributes.

This application-level function registers all the loaded product attributes as dimension attributes. It takes two parameters: the product attribute measure and the product attribute hierarchy name. It must be called each time a new set of product attributes is loaded.

The following example illustrates calling this custom function to register dimension attributes. The product attribute measure name is addvprdattt and the product attribute hierarchy used by the customer is patr. ra_custom is the service name used to call custom functions in batch_exec_list.txt, and the custom function name is ap_set_datr. The two parameters for the function must be separated with ~.

Example:

batch_datr | ra_custom | ap_set_datr~addvprdattt~patr

Convert Informal Positions to Formal: formalize

The formalize service converts current informal positions (created on a hierarchy and dimension that have Dynamic Position Maintenance, or DPM, enabled) into formal positions. One or more files matching the pattern <hier>.formalize[.extension] must be uploaded via Object Storage to the planning/incoming/input location. This file (or files) specifies which informal positions to formalize. An option is also available to formalize all current informal positions on a dimension. Parameters are specified via the batch_formalizepositions_list.txt batch control file.

The batch_formalizepositions_list.txt control file contains multiple lines to specify each formalize task, each with three required columns, as follows:

  • Formalization Set Name.

  • Parameter Type, which must be one of the following:

    • C - Column index (starting from 1) of the dimension position name in the hierarchy file (as generated by exportHier) [required].

    • D - Dimension to formalize [required].

    • H - Hierarchy of the dimension [required].

    • A - Formalize all informal positions on the dimension (no value needed) [optional; omit if sending a <hier>.formalize.dat file].

  • Parameter Value (varies by parameter type).

If the (A)ll option is not provided, then at least one <hier>.formalize[.extension] file is expected to be available in Object Storage. If no formalize files are present, the batch task will report an error. File format details are available in the Oracle Retail Predictive Application Server Cloud Edition Administration Guide.

Here is an example specifying a formalization task called prod_sku, which operates on positions of the SKU dimension in the PROD hierarchy:

prod_sku|H|prod|
prod_sku|D|sku|
prod_sku|C|1|
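By contrast, a hypothetical set (the set name is illustrative) that formalizes all current informal positions on the SKU dimension, so that no <hier>.formalize file needs to be uploaded, adds the A parameter:

prod_sku_all|H|prod|
prod_sku_all|D|sku|
prod_sku_all|C|1|
prod_sku_all|A||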

Rename Positions in a Hierarchy: renamepositions

The Rename Positions service enables the renaming of existing positions in a hierarchy. The task is configured by specifying the hierarchy on which positions are to be renamed and looks for a file with the name [hier].rn.dat in Object Storage. If the rename data file is not present, the rename task will exit without error so that the following batch sequence steps may continue.

The format of the rename data file is as specified in the Oracle Retail Predictive Application Server Cloud Edition Administration Guide.

The rename positions batch task does not require a separate control file, but may be specified as an entry in the batch_exec_list.txt file, for example:

weekly  |  renamepositions  |  prod

This control line indicates that the weekly batch task will look for a rename positions data file for the PROD hierarchy, prod.rn.dat, and will carry out the renaming operations specified in it.

Workspace Refresh by Template Name: refresh

The Workspace Refresh service enables the refresh of particular workspaces with current application data. This allows the selection of all workspaces built from a particular template, including the ability to match on partial template names.

The batch_refresh_list.txt file contains only two columns: a refresh group identifier and a template name pattern to match. Here is an example of this file:

refresh_weekly | mt_wb
refresh_weekly | mp_wb
refresh_weekly | lt_wb
refresh_weekly | lp_wb

The example contains one refresh group, with four template pattern names to match. All workspaces that are built from templates matching those patterns will be refreshed.

Workspace Rebuild by Template Name: rebuild

The Workspace Rebuild service is similar to the Workspace Refresh task; however, it completely rebuilds the workspaces rather than just refreshing their data. This covers the case where new positions have been added and must be reflected in the workspaces.

The batch_rebuild_list.txt file contains only two columns: a rebuild group identifier and a template name pattern to match. Here is an example of this file:

rebuild_weekly | mt_wb
rebuild_weekly | mp_wb

The example contains one rebuild group, with two template pattern names to match. All workspaces that are built from templates matching those patterns will be rebuilt.

Workspace Delete by Template Name: delete

The Workspace Delete service enables the bulk deletion of all workspaces built from a particular template. This service does not require a separate control file but can be fully specified within the batch_exec_list.txt file. All workspaces built from the given template will be removed. To remove workspaces from several template types, specify one delete task for each template.

Here is an example of an entry in the batch_exec_list.txt file for this task:

weekly | delete | AD_POC

Run Segment Build Queue: autobuild

The autobuild service is the simplest to configure in the RPASCE batch framework, as it requires no parameters to be specified. When the autobuild service is included in a batch task group, the wbbatch utility is run to invoke the -startQueue build option. Any segments that have been previously queued for automatic build will be created by this call. Since no further parameters are needed, there is no third column for the autobuild service line.

Here is an example of an entry in the batch_exec_list.txt file for this task:

batch_weekly | autobuild  |

Note that nothing is required after the second pipe (|) character.

Initialize Testing Environment: initrpac

This is the first of two batch tasks that work together to provide automated test capabilities. See the full explanation of RPAC test automation capabilities in Automated Testing with RPAC; the specific activities carried out by the batch tasks are described briefly here.

The initrpac task serves two purposes related to setting up the environment to be ready to run your automated test cases. First, it checks for new or updated test collateral files in Object Storage. Note that there are three archives of test collateral files that can be sent: tests.zip, input.zip, and compare.zip. If any of these collateral file archives are present in Object Storage under the planning/incoming/rpac subdirectory, then they will be moved into the internal holding area, ready to be used by the next step in the process; if no new files are present, then the previously sent files will continue to be used.

Note:

Incrementally adding test collateral files is not supported; previous file sets of each type are removed before unpacking the new archive, so any updated archive must contain all collateral files of that type. This prevents stale test scripts or data files from being left in the testing environment, which could otherwise cause unexpected test failures.

The second purpose of initrpac is to stage the contents of input.zip into the internal input directory. This places any hierarchy load (.dat) or measure load (.ovr, .clr, .rpl) files into position so that subsequent batch tasks can set the application into a known state, ready for automated tests to run and verify the expected result values.

The initrpac task entry in the batch_exec_list.txt control file does not require any parameters. It would normally be placed as the first entry in a test-enabled alternate version of a daily or weekly batch execution sequence. See the full example in Automated Testing with RPAC.

rpac_validate | initrpac |

Execute Automated Tests: runrpac

This is the second of the two batch tasks that work together to support automated testing capabilities. The runrpac task executes all automated tests in a single test XML file. (See further information about the RPAC automation testing framework in Automated Testing with RPAC.) While the preceding initrpac task needs to be run only once, you may specify as many runrpac tasks as needed to execute all configured automation tests, possibly at several different points in an overall batch execution sequence.

The runrpac task entry in the batch_exec_list.txt control file takes one parameter that combines an identifier for the test with the filename of the test XML file to be executed, separated by the ~ character:

rpac_validate | runrpac   | MFPCS_Sample_Test_1~RT01_MT_WB.xml

In this case, the test file RT01_MT_WB.xml will be executed under the identifying title "MFPCS_Sample_Test_1". See Automated Testing with RPAC for a full example of a test-enabled batch execution sequence.

Summary test results will be visible in the output log for the batch execution (visible in the Online Administration dashboard), and full test result details will be available in the log file archive that is sent to Object Storage under planning/outgoing after the batch execution completes.