Processing System Records
Processes that retrieve records in the system and perform some action on them require an algorithm to select the records to be processed and another algorithm to process them. The base processes implement standard background process functionality, including parallel background process logic and the ability to create To Do entries for errors. This allows an implementation to take advantage of the pre-built support and provide plug-ins that contain the logic unique to the specific use case.
The system provides the following processes that support plug-ins for selecting and processing the records:
Ad-hoc Process. This background process is provided for implementations that have some custom business logic that needs to be performed on a group of records. The base batch control Plug-in Driven Generic Template (F1-PDBG) may be used as a template.
Extract Process. This background process is provided for implementations that have extract files to produce for integration with external systems. The process includes parameters to configure the file path and file name for the created file along with other parameters to control how the file is formatted. The base batch control Plug-in Driven Extract Template (F1-PDBEX) may be used as a template.
The following sections provide more information about the functionality that is common for both ad-hoc and extract processes. Refer to Extract Record Processing for additional information specific to extract processes.
Select Records Algorithm
The first important algorithm to design when implementing a plug-in driven batch process is the Select Records algorithm, plugged in on the batch control page. This algorithm type must define the first parameter as the SQL. The batch job will directly access the SQL parameter value in the metadata (rather than invoking the algorithm). All other parameters are available for the algorithm to use for its own logic.
In addition, when the algorithm is invoked, it must return the strategy to use (Thread Level SQL Select or Job Level SQL Select). Refer to parallel background processes for more information about the two strategies and when to use each. When choosing the Thread Level SQL Select strategy, the algorithm should return the name of the primary key in the Key Field parameter, and the SQL should include a BETWEEN clause with the bind variables for the low and high IDs of the ranges. See below for the bind variable syntax.
If the SQL statement includes variables that are determined at execution time, it must use bind parameters. Bind parameters are referenced in the SQL statement using a colon and a parameter name (for example, :parameter). Some variables provided by the system are populated by the batch job at execution time; these use f1_ as their prefix.
The system supports the following pre-defined bind parameters:
:f1_lowID and :f1_highID - these should be used in the BETWEEN clause for the Thread Level SQL Select strategy. The batch job will substitute the appropriate ID range as required.
:f1_batchCode and :f1_batchNumber - these are common attributes of the batch control that are referenced on a record for selection purposes. Note that the batch run number is set according to whether the batch job is a re-run of a previous run or not.
:f1_businessDate - the batch job will populate this with the input batch business date, if provided; otherwise, the current date is used.
Note:
The system supports both "f1." and "f1_" as a prefix for the bind variables. However, the "f1." prefix will result in an error if the SQL security property setting is turned on. As such, the underscore syntax is recommended. The period syntax remains for backwards compatibility.
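For illustration, the following is a minimal sketch of a Select Records SQL for the Thread Level SQL Select strategy. The table and column names (CM_EXTRACT_REQUEST, CM_EXTRACT_ID, BO_STATUS_CD) are hypothetical and serve only to demonstrate the bind variable syntax:

SELECT CM_EXTRACT_ID, BO_STATUS_CD
  FROM CM_EXTRACT_REQUEST
 WHERE BATCH_CD = :f1_batchCode
   AND BO_STATUS_CD = 'PENDING'
   AND CM_EXTRACT_ID BETWEEN :f1_lowID AND :f1_highID

With this strategy the algorithm would also return CM_EXTRACT_ID as the Key Field. For the Job Level SQL Select strategy, the BETWEEN clause is omitted.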
For any other custom parameters, the Select Records algorithm may return one or more sets of field name / variable name / value, where the variable name matches a bind variable in the SQL. The field name provides information about the length that assists the SQL binding logic in properly substituting the values. Note that the variable name cannot start with the f1 prefix. The batch job will use the value returned by the algorithm to set the bind parameter in the SQL statement. Also note that the logic binds all variables as strings, so for date variables the SQL must include the "TO_DATE" function.
The plug-in spot receives a list of the ad hoc parameters for the batch job as name / value pairs. If the list includes parameters whose values are to be used in selecting records, your algorithm can identify the relevant batch parameter passed as input and populate the field name and output bind variable accordingly.
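As a sketch of the custom variable support, assume a batch control defines an ad hoc parameter holding a cutoff date and the Select Records algorithm returns a bind variable named cutoffDate populated with that parameter's value (the variable name, table, and columns below are hypothetical). Because all variables are bound as strings, the SQL wraps the variable in TO_DATE:

SELECT CM_REQUEST_ID
  FROM CM_REQUEST
 WHERE CRE_DTTM <= TO_DATE(:cutoffDate, 'YYYY-MM-DD')
   AND CM_REQUEST_ID BETWEEN :f1_lowID AND :f1_highID

Note that the custom variable does not use the f1 prefix, which is reserved for the system-provided bind variables.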
The product provides a base algorithm type for this plug-in spot that simply defines a parameter for the SQL. It also includes parameters for the strategy and the key field name. This algorithm type may be used by any custom batch process where the SQL does not rely on any special bind variables that must be determined. Simply create an algorithm for the algorithm type and provide the appropriate SQL. Refer to the algorithm type Select Records by Predefined Query (F1-PDB-SR) for more information.
Process Records Algorithm
The other important algorithm to design when implementing a plug-in driven batch process is the Process Record algorithm, plugged in on the batch control page. This algorithm is called for each record selected for the process and receives all the information selected by the SQL defined in the Select Records plug-in.
For the ad-hoc processing batch process, algorithms plugged into this spot are responsible for doing the work for each record based on the desired logic.
For the extract batch process, algorithms plugged into this spot are responsible for returning the data that should be written to the file. Refer to Extract Record Processing for more information on the functionality specific to extract processes.
Also note that algorithms for this plug-in spot are passed two Booleans, isFirst and isLast, which indicate whether the current work unit is the first and/or last for that thread. This allows the plug-in to do additional work if needed. Note that the isFirst indication is available for both types of batch processes, ad-hoc and extract. However, the isLast indication is only applicable to the file extract batch; for the ad-hoc batch process this value is always set to false. Extracts always execute in a single database transaction. In a single-transaction run, any error causes the run to be aborted so that it restarts from the beginning when resubmitted. This avoids writing partial files along with an inaccurate setting of the isLast element.
Configuring a New Process
The following points summarize the steps needed to implement a new background process that acts on records in the system using plug-ins for the specific functionality:
Verify the SQL that the background process should execute. Keep in mind that all the data selected in the SQL is available to pass to the plug-in that processes the records. If the performance of the background process is important, be sure to consult with a DBA when designing the SQL.
If the SQL does not require any custom variables to substitute at runtime, create an algorithm for the base algorithm type F1-PDB-SR and configure the SQL. In addition, configure the strategy and the primary key name (for the Thread Level SQL Select strategy).
If the SQL requires custom variables, a new plug-in script must be designed and coded to populate the variable names and values using the algorithm entity Batch Control - Select Records. Besides defining the variables, the algorithm must also indicate the strategy and the primary key name (for the Thread Level SQL Select strategy). Define the algorithm type for the newly created script. The first parameter of the algorithm type must be the SQL as illustrated in the base algorithm type. Note that the other parameters are available for use by this algorithm type if needed. Define the algorithm, populating the SQL as appropriate (using the custom variables).
Design the logic required to process each record. If your edge product does not deliver a suitable plug-in, create a plug-in script where the algorithm entity is Batch Control - Process Record. Note that the plug-in receives all the information selected by the SQL defined in the Select Records plug-in.
For an ad-hoc process, the algorithm should perform whatever process is required based on the business use case. Note that the background process is responsible for committing the records.
For an extract process, the algorithm is responsible for returning one or more schema instances populated with information that should be written to the file. Refer to Extract Record Processing for more information and examples of extract algorithms.
In either case, if a new processing script is required, define the algorithm type and algorithm for the newly created script.
Create a batch control by duplicating the appropriate base template for the type of background process needed. Plug in the algorithms created above and configure the parameters as appropriate. Note that you may configure custom ad hoc parameters on the batch control if required. Both base and custom batch parameter values are available to the Select Records and Process Records plug-in algorithms.