

Using Batch Runtime
This chapter contains the following topics:
Configuration Files
The configuration files are located in the CONF directory of Batch Runtime.
BatchRT.conf
This file contains variable definitions.
These variables must be set before using Batch Runtime.
Messages.conf
This file contains the messages used by Batch Runtime.
The messages may be translated into a local language.
FunctionReturnCode.conf
This file contains internal codes associated with a message.
ReturnCode.conf
This file contains return codes associated with a message and returned to the KSH script.
Setting Environment Variables
Some variables (such as ORACLE_SID, COBDIR, LIBPATH, COBPATH …) are shared between different components and are not described in this document.
For more information, see the Rehosting Workbench Installation Guide.
Table 3‑1 lists the environment variables that are used in the KSH scripts and must be defined before using the software.
Table 3‑2 lists the environment variables that are used by Batch Runtime and must be defined before using the software.
The default value is GENERATION_FILE. To manage GDG files in a database, you need to set the value to GENERATION_FILE_DB and configure MT_GDG_DB_ACCESS appropriately. If the value is NULL or specifies an incorrect directory name, an error occurs when this environment variable is used.
MT_JESDECRYPT must be set to the jesdecrypt object file.
TUXEDO SRVGRP value of the ARTDPL server.
Table 3‑3 lists optional environment variables used by Batch Runtime.
 
A list of executable programs that do not exist but that should not cause jobs to fail. When m_ProgramExec invokes a nonexistent program, the job continues if that program is specified in this list. For example:
A list of executable programs that are invoked by runbexci instead of runb. For each program in this list, the program is invoked only by runbexci, whether or not -n is specified for m_ProgramExec.
Note:
It's mandatory if MT_GENERATION is set to GENERATION_FILE_DB.
MT_GDG_USEDCB=Y: Create a .dcb file for the GDG (default behavior). In this mode, LSEQ or SEQ can be specified as the file type of GDG members in the m_FileAssign statement.
MT_GDG_USEDCB=N: Do not create a .dcb file for the GDG. In this mode, the file type of GDG members can only be LSEQ; any file type specified in the m_FileAssign statement is ignored.
The host name (or IP address) of the machine where Workbench is installed; Workbench is invoked there to convert a JCL job to a KSH job. The value of MT_WB_HOSTNAME is null if Workbench is on the local host. A user name can optionally be added. For example:
MT_WB_HOSTNAME=host1: Set the value of MT_WB_HOSTNAME to host1
MT_WB_HOSTNAME=user1@host1: Set the value of MT_WB_HOSTNAME to user1@host1
Note:
The full installation path of Workbench refine, which is invoked to convert a JCL job to a KSH job. For example:
The value of environment variable REFINEDISTRIB, which is used when Workbench converts a JCL job. For example:
MT_REFINEDISTRIB = Linux64: Set REFINEDISTRIB to Linux64
MT_REFINEDISTRIB = Linux32: Set REFINEDISTRIB to Linux32
A variable used to enable step-level CPU time usage monitoring for all jobs. Set MT_CPU_MON_STEP=yes to enable it. If MT_CPU_MON_STEP is not configured or its value is not "yes", this feature is disabled.
If "SYSIN" is set, the stdin for utilitiy will be redirect to file ${DD_SYSIN}, if DD_SYSIN doesn't exist, don't redirect. example: MT_SYS_IO_REDIRECT=SYSIN
If "SYSOUT" is set, the stdout and stderr for utilitiy will be redirect to file ${DD_SYSOUT}, if DD_SYSOUT doesn't exist, don't redirect.
Example: MT_SYS_IO_REDIRECT=SYSOUT
"SYSIN" and "SYSOUT" can be set at the same time, separated by comma, such as "SYSIN,SYSOUT"
Example: MT_SYS_IO_REDIRECT=SYSIN,SYSOUT
By default, MT_SYS_IO_REDIRECT=SYSIN,SYSOUT
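For example, a minimal sketch of enabling these optional variables in the environment before running jobs (values are taken from the descriptions above; adjust to your installation, or place the equivalent assignments in BatchRT.conf if that is how your site manages Batch Runtime variables):
export MT_CPU_MON_STEP=yes              # enable step-level CPU time usage monitoring
export MT_SYS_IO_REDIRECT=SYSIN,SYSOUT  # redirect utility stdin/stdout to ${DD_SYSIN}/${DD_SYSOUT}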
Configuring Batch Runtime in MP Mode
Batch Runtime (EJR) must be specially configured to work in MP mode if users want either to use EJR to run jobs that may share resources (normally files) from different machines, or to configure an MP mode TuxJES domain and submit jobs from any node through the utility provided by TuxJES.
In the latter case, a job submitted from node A may be run by node B, and the execution sequence is not deterministic. Similarly, jobs submitted from different nodes may share resources.
This section clarifies the details of configuring Batch Runtime (EJR) to support MP mode.
1.
2.
MT_ACC_FILEPATH should be located on shared storage (NFS) with the same mount point on all machines in the domain, because the control files used for file locking are placed in this directory. In addition, make sure that the AccLock and AccWait files under this directory can be read and written by the effective user of the process running the jobs.
3.
4.
Creating a Script
General Structure of a Script
Oracle Tuxedo Application Runtime for Batch normalizes Korn shell script formats by proposing a script model where the different execution phases of a job are clearly identified.
Oracle Tuxedo Application Runtime for Batch scripts respect a specific format that allows the definition and the chaining of the different phases of the KSH (JOB).
Within Batch Runtime, a phase corresponds to an activity or a step on the source system.
A phase is identified by a label and delimited by the next phase.
At the end of each phase, the JUMP_LABEL variable is updated to give the label of the next phase to be executed.
In the following example, the last functional phase sets JUMP_LABEL to JOBEND: this label allows a normal termination of the job (exits from the phase loop).
The mandatory parts of the script (the beginning and end parts) are shown in bold and the functional part of the script (the middle part) in normal style as shown in Table 3‑4. The optional part of the script must contain the labels, branching and end of steps as described below. The items of the script to be modified are shown in italics.
 
Table 3‑4 Script Structure
m_JobBegin -j JOBNAME -s START -v 2.00
(PENULTIMATESTEP)
For the label, which must point to END_JOB. The underscore is necessary because this character is forbidden on z/OS.
Script Example
Listing 3‑1 shows a Korn shell script example.
Listing 3‑1 Korn shell Script Example
#!/bin/ksh
#@(#)--------------------------------------------------------------
#@(#)-
m_JobBegin -j METAW01D -s START -v 1.00 -c A
while true ;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
# -----------------------------------------------------------------
# 1) 1st Step: DELVCUST
# Delete the existing file.
# 2) 2nd Step: DEFVCUST
# Allocates the Simple Sample Application VSAM customers file
# -----------------------------------------------------------------
#
# -Step 1: Delete...
JUMP_LABEL=DELVCUST
;;
(DELVCUST)
m_FileAssign -d OLD FDEL ${DATA}/METAW00.VSAM.CUSTOMER
m_FileDelete ${DD_FDEL}
m_RcSet 0
#
# -Step 2: Define...
JUMP_LABEL=DEFVCUST
;;
(DEFVCUST)
# IDCAMS DEFINE CLUSTER IDX
m_FileBuild -t IDX -r 266 -k 1+6 ${DATA}/METAW00.VSAM.CUSTOMER
JUMP_LABEL=END_JOB
;;
(ABORT)
break
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT} "Unknown label : ${JUMP_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
#@(#)--------------------------------------------------------------
 
Defining and Using Symbols
Symbols are internal script variables that allow script statements to be easily modifiable. A value is assigned to a symbol through the m_SymbolSet function as shown in Listing 3‑2. To use a symbol, use the following syntax: $[symbol]
Note:
Listing 3‑2 Symbol Use Examples
(STEP00)
m_SymbolSet VAR=40
JUMP_LABEL=STEP01
;;
(STEP01)
m_FileAssign -d SHR FILE01 ${DATA}/PJ01DDD.BT.QSAM.KBSTO0$[VAR]
m_ProgramExec BAI001
 
Creating a Step That Executes a Program
A step (also called a phase) is generally a coherent set of calls to Batch Runtime functions that enables the execution of a functional (or technical) activity.
The most frequent steps are those that execute an application or utility program. These kinds of steps are generally composed of one or several file assignment operations followed by the execution of the desired program. All the file assignment operations must precede the program execution operation, as shown in Listing 3‑3.
Listing 3‑3 Application Program Execution Step Example
(STEPPR15)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRO099
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRO001
m_OutputAssign -c "*" SYSOUT
m_FileAssign -i LOGIN
IN-STREAM DATA
_end
m_FileAssign -d MOD LOGOUT ${DATA}/PJ01DDD.BT.QSAM.KBPRO091
m_ProgramExec BPRAB001 "20071120"
JUMP_LABEL=END_JOB
;;
 
Application Program Abend Execution
ILBOABN0, an abend routine, can be called from a running program to force it to abort and return the abend code to the KSH script. ILBOABN0 is supplied as both source and a binary gnt file. It can be called directly by any user-defined COBOL program.
Listing 3‑4 Application Program Abend Execution Example (KSH)
(STEPPR15)
 
m_ProgramExec USER
JUMP_LABEL=END_JOB
;;
 
Listing 3‑5 USER.cbl Example
PROCEDURE DIVISION.
 
PROGRAM-BEGIN.
DISPLAY "USER: HELLO USER".
MOVE 2 TO RT-PARAM.
CALL "ILBOABN0" USING RT-PARAM.
DISPLAY "USER: CAN'T REACH HERE WHEN ILBOABN0 IS CALLED".
PROGRAM-DONE.
...
 
Creating a Procedure
Oracle Tuxedo Application Runtime for Batch offers a set of functions to define and use "procedures". These procedures generally follow the same principles as z/OS JCL procedures.
The advantages of procedures are:
Procedures can be of two types:
Creating an In-Stream Procedure
Unlike the z/OS JCL convention, an in-stream procedure must be written after the end of the main JOB, that is: all the in-stream procedures belonging to a job must appear after the call to the function m_JobEnd.
An in-stream procedure in a Korn shell script always starts with a call to the m_ProcBegin function, followed by all the tasks composing the procedure and terminating with a call to the m_ProcEnd function. Listing 3‑6 is an example.
Listing 3‑6 In-stream Procedure Example
m_ProcBegin PROCA
JUMP_LABEL=STEPA
;;
(STEPA)
m_FileAssign -c "*" SYSPRINT
m_FileAssign -d SHR SYSUT1 ${DATA}/PJ01DDD.BT.DATA.PDSA/BIEAM00$[SEQ]
m_FileAssign -d MOD SYSUT2 ${DATA}/PJ01DDD.BT.QSAM.KBIEO005
m_FileLoad ${DD_SYSUT1} ${DD_SYSUT2}
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
m_ProcEnd
 
Creating an External Procedure
External procedures do not require the use of the m_ProcBegin and m_ProcEnd functions; simply code the tasks that are part of the procedure, as shown in Listing 3‑7.
In order to simplify the integration of a procedure’s code with the calling job, always begin a procedure with:
JUMP_LABEL=FIRSTSTEP
;;
(FIRSTSTEP)
and end it with:
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
Listing 3‑7 External Procedure Example
JUMP_LABEL=PR2STEP1
;;
(PR2STEP1)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRI001
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRO001
m_OutputAssign -c "*" SYSOUT
m_FileAssign -d SHR LOGIN ${DATA}/PJ01DDD.BT.SYSIN.SRC/BPRAS002
m_FileAssign -d MOD LOGOUT ${DATA}/PJ01DDD.BT.QSAM.KBPRO091
m_ProgramExec BPRAB002
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
 
Using a Procedure
The use of a procedure inside a Korn shell script is made through a call to the m_ProcInclude function.
As described in Script Execution Phases, during the Conversion Phase, a Korn shell script is expanded by including the procedure's code each time a call to the m_ProcInclude function is encountered. It is necessary that after this operation, the resulting expanded Korn shell script still respects the rules of the general structure of a script as defined in the General Structure of a Script.
A procedure, either in-stream or external, can be used in any place inside a calling job provided that the above principles are respected, as shown in Listing 3‑8.
Listing 3‑8 Call to the m_ProcInclude Function Example
(STEPPR14)
m_ProcInclude BPRAP009
JUMP_LABEL=STEPPR15
 
Modifying a Procedure at Execution Time
The execution of the tasks defined in a procedure can be modified in two different ways:
Listing 3‑9 and Listing 3‑10 are examples.
Listing 3‑9 Defining Procedure Example
m_ProcBegin PROCE
JUMP_LABEL=STEPE
;;
(STEPE)
m_FileAssign -d SHR SYSUT1 ${DATA}/DATA.IN.PDS/DTS$[SEQ]
m_FileAssign -d MOD SYSUT2 ${DATA}/DATA.OUT.PDS/DTS$[SEQ]
m_FileLoad ${DD_SYSUT1} ${DD_SYSUT2}
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
m_ProcEnd
 
Listing 3‑10 Calling Procedure Example
(COPIERE)
m_ProcInclude PROCE SEQ="1"
JUMP_LABEL=COPIERF
;;
 
Using Overrides for File Assignments
As specified in Best Practices, this way of coding procedures is provided mainly for supporting Korn shell scripts resulting from z/OS JCL translation and it is not recommended for Korn shell scripts newly written for the target platform.
The overriding of a file assignment is made using the m_FileOverride function that specifies a replacement for the assignment present in the procedure. The call to the m_FileOverride function must follow the call to the procedure in the calling script.
Listing 3‑11 shows how to replace the assignment of the logical file SYSUT1 using the m_FileOverride function.
Listing 3‑11 m_FileOverride Function Example
m_ProcBegin PROCE
JUMP_LABEL=STEPE
;;
(STEPE)
m_FileAssign -d SHR SYSUT1 ${DATA}/DATA.IN.PDS/DTS$[SEQ]
m_FileAssign -d MOD SYSUT2 ${DATA}/DATA.OUT.PDS/DTS$[SEQ]
m_FileLoad ${DD_SYSUT1} ${DD_SYSUT2}
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
m_ProcEnd
 
Listing 3‑12 m_FileOverride Procedure Call:
(COPIERE)
m_ProcInclude PROCE SEQ="1"
m_FileOverride -i -s STEPE SYSUT1
Overriding test data
_end
JUMP_LABEL=COPIERF
;;
 
Controlling a Script's Behavior
Conditioning the Execution of a Step
Using m_CondIf, m_CondElse, and m_CondEndif
The m_CondIf, m_CondElse and m_CondEndif functions can be used to condition the execution of one or several steps in a script. The behavior is similar to the z/OS JCL statement constructs IF, THEN, ELSE and ENDIF.
The m_CondIf function must always have a relational expression as a parameter as shown in Listing 3‑13. These functions can be nested up to 15 times.
Listing 3‑13 m_CondIf, m_CondElse, and m_CondEndif Example
(STEPIF01)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_ProgramExec BAX001
m_CondIf "STEPIF01.RC,LT,5"
JUMP_LABEL=STEPIF02
;;
(STEPIF02)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF002
m_ProgramExec BAX002
m_CondElse
JUMP_LABEL=STEPIF03
;;
(STEPIF03)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF003
m_ProgramExec BAX003
m_CondEndif
 
Using m_CondExec
The m_CondExec function is used to condition the execution of a step. The m_CondExec must have at least one condition as a parameter and can have several conditions at the same time. In case of multiple conditions, the step is executed only if all the conditions are satisfied.
A condition can be of three forms:
m_CondExec 4,LT,STEPEC01
m_CondExec EVEN
m_CondExec ONLY
The m_CondExec function must be the first function to be called inside the concerned step as shown in Listing 3‑14.
Listing 3‑14 m_CondExec Example with Multiple Conditions
(STEPEC01)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_ProgramExec BACC01
JUMP_LABEL=STEPEC02
;;
(STEPEC02)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF002
m_ProgramExec BACC02
JUMP_LABEL=STEPEC03
;;
(STEPEC03)
m_CondExec 4,LT,STEPEC01 8,GT,STEPEC02 EVEN
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF003
 
Controlling the Execution Flow
The script's execution flow is determined, and can be controlled, in the following ways:
The start label specified by the m_JobBegin function: this label is usually the first label in the script, but can be changed to any label present in the script if the user wants to start the script execution from a specific step.
The value assigned to the JUMP_LABEL variable in each step: this assignment is mandatory in each step, but its value is not necessarily the label of the following step.
The usage of the m_CondExec, m_CondIf, m_CondElse and m_CondEndif functions: see Conditioning the Execution of a Step.
Changing Default Error Messages
If the Batch Runtime administrator wishes to change the default messages (to change the language, for example), this can be done through a configuration file whose path is specified by the environment variable MT_DISPLAY_MESSAGE_FILE.
This file is a CSV (comma separated values) file with a semicolon as a separator. Each record in this file describes a certain message and is composed of 6 fields:
1.
2.
3.
4.
5.
6.
Different Behaviors from z/OS
On z/OS, before a job is executed, JES checks its syntax. If any error is found, JES reports it and runs nothing of the job. For example, if a JCL statement applies "NEW" to generation (0) of a GDG, because NEW is not allowed to be applied to existing files, JES reports this error and does not run the job.
However, in ART for Batch, the JCL job is first converted to a ksh job by Oracle Tuxedo ART Workbench, and ART for Batch only checks the ksh script syntax of the converted ksh job. Grammar errors, if any, are detected only when the erroneous statement runs; as a result, the statements before it are executed without being affected, and the statements after it are also executed.
Using Files
Creating a File Definition
Files are created using the m_FileBuild or the m_FileAssign function.
Four file organizations are supported:
You must specify the file organization for the file being created. For indexed files, the length and the primary key specifications must also be mentioned.
m_FileBuild Examples
m_FileBuild -t LSEQ ${DATA}/PJ01DDD.BT.VSAM.ESDS.KBIDO004
m_FileBuild -t IDX -r 266 -k 1+6 ${DATA}/METAW00.VSAM.CUSTOMER
m_FileAssign examples
m_FileAssign -d NEW -t SEQ -r 80 ${DATA}/PJ01DDD.BT.VSAM.ESDS.KBIDO005
Assigning and Using Files
When using Batch Runtime, a file can be used either by a Batch Runtime function (for example: m_FileSort, m_FileRename etc.) or by a program, such as a COBOL program.
In both cases, before being used, a file must first be assigned. Files are assigned using the m_FileAssign function that:
The environment variable defined by the m_FileAssign function is named DD_IFN (where IFN is the internal file name). This naming convention is used because it is the one used by Micro Focus COBOL to map internal file names to external file names.
Once a file is assigned, it can be passed as an argument to any of Batch Runtime functions handling files by using the ${DD_IFN} variable.
For COBOL programs, the link is made implicitly by Micro Focus Cobol.
Listing 3‑15 Example of File Assignment
(STEPCP01)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIDI001
m_FileAssign -d SHR OUTFIL ${DATA}/PJ01DDD.BT.VSAM.KBIDU001
m_FileLoad ${DD_INFIL} ${DD_OUTFIL}
 
Listing 3‑16 Example of Using a File by a COBOL Program
(STEPCBL1)
m_FileAssign -d OLD INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIFI091
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIFO091
m_ProgramExec BIFAB090
 
About DD DISP=MOD
ART/BatchRT keeps the behavior of DISP=MOD consistent with the mainframe; that is, DISP=MOD behaves on the target operating system of ART/BatchRT the same way it does on the mainframe. Currently, BatchRT depends on the following two kinds of COBOL compile/runtime environments:
Note:
MicroFocus
For Micro Focus, a new file handler (ARTEXTFH.gnt) is added to BatchRT. To make the behavior of DISP=MOD correct, users need to compile their COBOL programs with this file handler; that is, they need to add the following compile option:
CALLFH("ARTEXTFH")
If this compile option is not specified, a write operation with open mode "open output" in the COBOL program erases the existing file contents, which is not the expected behavior.
It is suggested that you always add this compile option when compiling COBOL programs. Table 3‑5 lists the behavior of the APIs that support DDN.
 
m_ProgramExec: COBOL Program
m_ProgramExec: Other Program
INPUT means an input file; only read operations occur on it. Specifying DISP=MOD for an input file is not meaningful, because no data is written to it, but it is allowed; for an input file, DISP=MOD always acts as DISP=OLD.
OUTPUT means an output file; read and write operations occur on it. All data written to an output file is appended to the original file regardless of the open mode in the COBOL program ("open output" or "open extend").
COBOL-IT
For COBOL-IT, no new file handler is added (unlike Micro Focus), so there is no special requirement for compiling COBOL programs. Table 3‑6 lists the behavior of the APIs that support DDN.
 
m_ProgramExec: COBOL Program
m_ProgramExec: Other Program
INPUT means an input file; only read operations occur on it. Specifying DISP=MOD for an input file is not meaningful, and it is not allowed in COBOL-IT; if an input file is assigned as DISP=MOD, its contents cannot be read.
OUTPUT means an output file; read and write operations occur on it. All data written to an output file is appended to the original file regardless of the open mode in the COBOL program ("open output" or "open extend").
Concurrent File Accessing Control
Batch Runtime provides a lock mechanism to prevent one file from being written simultaneously by two jobs.
To enable the concurrent file access control, do the following:
1. Use the environment variable MT_ACC_FILEPATH to specify a directory for the lock files required by the concurrent access control mechanism.
2. Create two empty files, AccLock and AccWait, under the directory specified in step 1. Make sure the effective user executing jobs has read/write permission to these two files.
Notes:
The file names of AccLock and AccWait are case sensitive.
The following two lines in ejr/CONF/BatchRT.conf should be commented out:
${MT_ACC_FILEPATH}/AccLock
${MT_ACC_FILEPATH}/AccWait
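A minimal setup sketch (the directory path is illustrative):
export MT_ACC_FILEPATH=/shared/batch/acc                         # directory for the lock control files
mkdir -p ${MT_ACC_FILEPATH}
touch ${MT_ACC_FILEPATH}/AccLock ${MT_ACC_FILEPATH}/AccWait      # file names are case sensitive
chmod 664 ${MT_ACC_FILEPATH}/AccLock ${MT_ACC_FILEPATH}/AccWait  # effective user running jobs needs read/write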
Using Generation Data Group (GDG)
Oracle Tuxedo Application Runtime for Batch allows you to manage Generation Data Group (GDG) files either on a file basis or on a database (DB) basis. In file-based management, Batch Runtime manages GDG files in separate "*.gens" files, where one "*.gens" file corresponds to one GDG. In DB-based management, ART for Batch allows users to manage GDG information in an Oracle or DB2 database.
GDG Management Functionalities
In order to emulate the notion of generation files present on the z/OS mainframe, which is not a UNIX standard, Batch Runtime provides a set of functions to manage this type of file. These functions are available for both file-based and DB-based management.
Note:
Defining and/or Redefining a GDG
It is required to define a GDG before using it.
A GDG file is defined and/or redefined through m_GenDefine. The operation of defining or redefining a GDG is committed immediately and cannot be rolled back.
As shown in Listing 3‑17, the first line defines a GDG and sets its maximum number of generations to 15; the second line redefines the same GDG with a maximum of 30 generations; the third line defines a GDG without the "-s" option (its maximum number of generations is set to 9999); the fourth line defines a GDG implicitly and sets its maximum generations to 9999; the fifth line defines a GDG using the model file $DATA/FILE, which can be either a GDG file or a normal file.
Listing 3‑17 Example of Defining and Redefining GDG Files
m_GenDefine -s 15 ${DATA}/PJ01DDD.BT.FILE1
m_GenDefine -s 30 -r ${DATA}/PJ01DDD.BT.FILE1
m_GenDefine ${DATA}/PJ01DDD.BT.FILE2
m_FileAssign -d NEW,CATLG -g +1 SYSUT2 ${DATA}/PJ01DDD.BT.FILE3
m_FileAssign -d NEW,CATLG -g +1 -S $DATA/FILE FILE1 $DATA/GDG
 
Adding Generation Files in a GDG
To add a new generation file (GDS) into a GDG, call m_FileAssign with "-d NEW/MOD,…" and "-g +n" parameters. GDS file types can be only LSEQ or SEQ.
There are four key points to add generation files in a GDG.
The filename of a newly created GDS is generated by the generation number specified in m_FileAssign in the format of <current GDS number> + <GenNum>. See Listing 3‑20 for an example.
The four examples below elaborate these key points individually.
Listing 3‑18 Example of Adding Multiple Generation Files Discontinuously and Disorderedly
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d MOD,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g +9 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +2 SYSUT2 "$DATA/GDG1"
 
The above example adds the following GDS files to GDG.
$DATA/GDG1.Gen.0001
$DATA/GDG1.Gen.0002
$DATA/GDG1.Gen.0005
$DATA/GDG1.Gen.0009
Listing 3‑19 Example of Adding One Generation Number Multiple Times in a Job (Incorrect Usage)
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g +4 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
 
The above example shows an incorrect usage, where generation number (+5) is added two times.
Listing 3‑20 Example of Listing GDS Filenames
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d MOD,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
 
In the above example, suppose $DATA/GDG1 has three GDS numbered as 1, 2, and 4, respectively. The corresponding GDS files are listed as below.
$DATA/GDG1.Gen.0001
$DATA/GDG1.Gen.0002
$DATA/GDG1.Gen.0004
After the above job runs, $DATA/GDG1 has five GDS numbered as 1, 2, 4, 5, and 9, respectively. The corresponding GDS files are listed as below.
$DATA/GDG1.Gen.0001
$DATA/GDG1.Gen.0002
$DATA/GDG1.Gen.0004
$DATA/GDG1.Gen.0005
$DATA/GDG1.Gen.0009
Listing 3‑21 Example of Defining the Current GDS
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d MOD,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g +2 SYSUT3 "$DATA/GDG1"
 
In the above example, the GDS whose RGN equals +5 becomes the current GDS, meaning its RGN becomes 0 after job finishes successfully.
Referring to an Existing Generation File in a GDG
To refer to an existing generation file (GDS) in a GDG, call m_FileAssign with "-d OLD/SHR/MOD,…" and "-g 0", "-g all", or "-g -n" parameters. "-g 0" refers to the current generation, "-g all" refers to all generation files, and "-g -n" refers to the generation file that is the nth generation counting backward from the current generation (generation 0).
When using a relative generation number (RGN) to reference a GDS, note that "relative generation number" means the position relative to the newest GDS, whose generation number is 0.
For example, if GDG1 contains six GDS numbered as 1, 4, 6, 7, 9, and 10, respectively, the mapping of GN and RGN is listed as below.
 
In the following job, use RGN=-1 to reference GDS whose GN equals 9 and use RGN=-4 to reference GDS whose GN equals 4.
Listing 3‑22 Example of Referencing Existing Generation Files
(STEP1)
m_FileAssign -d SHR,KEEP,KEEP -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d SHR,KEEP,KEEP -g -4 SYSUT2 "$DATA/GDG1"
 
If "DELETE" is specified in the DISPOSITION filed of m_FileAssign, the corresponding GDS will be deleted after the current step finishes, resulting in a change of mapping between GN and RGN. The changed mapping will be visible in the next step.
For example, if GDG1 contains six GDS numbered as 1, 4, 6, 7, 9, and 10, respectively, the mapping of GN and RGN is listed as below.
 
In the following job, use RGN=-1 to reference GDS whose GN equals 9 and use RGN=-4 to reference GDS whose GN equals 4.
You can run a job as below.
Listing 3‑23 Example of Referencing Existing Generation Files with DELETE Specified
(STEP1)
m_FileAssign -d OLD,DELETE,DELETE -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d OLD,DELETE,DELETE -g -4 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d OLD,DELETE,DELETE -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d OLD,DELETE,DELETE -g -2 SYSUT2 "$DATA/GDG1"
 
In the above example, after STEP1 finishes, the mapping of GN and RGN becomes the one as below.
 
In STEP2, the GDS pointed by SYSUT1 (the GDS whose GN is 7) and the GDS pointed by SYSUT2 (the GDS whose GN is 6) are deleted.
After STEP2 finishes, the mapping of GN and RGN becomes the one as below.
 
Deleting Generation Files in a GDG
ART for Batch allows you to delete generation files, whether newly added or currently existing, through the DD disposition specified for m_FileAssign.
Listing 3‑24 Deleting Newly Added GDS
(STEP1)
m_FileAssign -d NEW,DELETE,DELETE -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,DELETE,DELETE -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g +5 SYSUT2 "$DATA/GDG1"
 
In the above example, eventually, no GDS is added to GDG1.
Listing 3‑25 Deleting Existing GDS
(STEP1)
m_FileAssign -d NEW,DELETE,DELETE -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g -3 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,DELETE,DELETE -g -1 SYSUT3 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g -3 SYSUT4 "$DATA/GDG1"
 
In the above example, GDG1 has six GDS numbered as 1, 4, 6, 7, 9, and 10, respectively. The GDS pointed by SYSUT1 (the GDS whose GN is 9), by SYSUT2 (the GDS whose GN is 6), by SYSUT3 (the GDS file whose GN is 7), and by SYSUT4 (the GDS file whose GN is 1) are deleted.
Note:
Deleting a GDG
You can delete a GDG as a whole by calling m_FileDelete with the GDG base name, as shown in Listing 3‑26. In this way, all the GDG's GDS will be deleted accordingly. The operation of deleting GDG is committed immediately and cannot be rolled back.
Listing 3‑26 Deleting a GDG
m_FileDelete ${DATA}/PJ01DDD.BT.GDG
 
Cataloging a GDG
Only the GDG base can be cataloged; its GDS cannot be cataloged individually.
The "file catalog" function must be enabled for ART for Batch to catalog a GDG. Additionally, in catalog mode, the [-v volume] parameter specified in m_FileAssign is ignored.
Note:
Committing a GDG
All GDGs that have changes in the current step are committed, whether or not the current step finishes successfully.
Committing a GDG updates the information in the GDG management system (the Oracle database or the *.gens file) and commits the temporary generation files; however, committing a GDG does not change the mapping between GN and RGN, meaning that within one step of a job an RGN always references the same GDS.
For example, GDG1 has six GDS numbered as 1, 4, 6, 7, 9, and 10, respectively.
Listing 3‑27 Example of Committing a GDG
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +2 SYSUT2 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g -1 SYSUT3 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g -1 SYSUT4 "$DATA/GDG1"
 
In STEP1, the mapping of GN and RGN (both in job and in GDG management system) becomes the one as below. SYSUT3 references to the GDS whose GN is 9.
 
In STEP2, the mapping of GN and RGN in GDG management system becomes the one as below.
 
However, the mapping of GN and RGN in the currently running job is not changed; in the example below, SYSUT4 still references the GDS whose GN is 9 rather than the GDS whose GN is 11.
 
File-Based Management
Configuration
The MT_GENERATION variable specifies the way GDG files are managed. To manage GDGs in *.gens files, you need to set the value to GENERATION_FILE.
Concurrency Control and Authorization
In the file-based GDG management mechanism, one GDG file can be accessed by only one job at a time; that is, a single GDG cannot be accessed by multiple jobs simultaneously. To access a GDG file, the file lock must be acquired through the existing internal function mi_FileConcurrentAccessReservation. The file-based GDG management mechanism uses a *.gens file (* represents the GDG base name) to control concurrency and authorization. User access checking depends on whether the *.gens file can be accessed.
DB-Based Management
For DB-based management, Oracle Database and DB2 database are supported.
Note:
To enable this function, MT_GENERATION must be set to GENERATION_FILE_DB, MT_DB must be set to DB_ORACLE or DB_DB2LUW, and MT_GDG_DB_ACCESS must be set to a valid database connection string for accessing the Oracle or DB2 database.
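For example, a sketch of the settings needed for DB-based management with an Oracle database (the connection string is illustrative):
export MT_GENERATION=GENERATION_FILE_DB
export MT_DB=DB_ORACLE
export MT_GDG_DB_ACCESS=scott/tiger@orcl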
Database Tables
Table 3‑7 shows the general management information for each GDG managed by Batch Runtime. In this table, each row represents a GDG. All GDG files share a single GDG_DETAIL table.
 
Table 3‑7 GDG_DEFINE
It cannot contain only a relative path relative to a single repository. The length of GDG_BASE_NAME is limited to 1024, i.e. the minimum of PATH_MAX on different UNIX platforms.
It contains the upper limit of generations specified by the -s option, which can be set in the range 1-9999.
Primary Key: GDG_BASE_NAME
Table 3‑8 shows the detailed information of all the GDG generation files. In this table, each row represents a generation file of a GDG.
 
Table 3‑8 GDG_DETAIL
Primary Key: GDG_BASE_NAME+ GDG_ABS_NUM
GDG_FILE_NAME (the physical generation file name) is not stored in table GDG_DETAIL since it can be constructed from GDG_BASE_NAME in GDG_DEFINE and GDG_ABS_NUM in GDG_DETAIL.
Note:
Generation File Naming Rule
Table 3‑7 shows the rule of generation file name:
 
Configuration Variables
MT_GENERATION
This variable specifies the way GDG files are managed. To manage GDG files in a database, you need to set the value to GENERATION_FILE_DB and configure MT_GDG_DB_ACCESS appropriately.
MT_GDG_DB_ACCESS
This variable is used along with MT_GENERATION when it is set to GENERATION_FILE_DB, and must be set to a valid database login account. For accessing an Oracle database, it should be specified in the format userid/password@sid, for example, scott/tiger@orcl.
External Shell Scripts
You can use the following two external shell scripts to create and drop the GDG database tables automatically.
CreateTableGDG.sh
Description
Creates tables GDG_DEFINE and GDG_DETAIL in the database.
Usage
CreateTableGDG.sh <DB_LOGIN_PARAMETER>
Sample
CreateTableGDG.sh scott/tiger@orcl
DropTableGDG.sh
Description
Drops tables GDG_DEFINE and GDG_DETAIL from the database.
Usage
DropTableGDG.sh <DB_LOGIN_PARAMETER>
Sample
DropTableGDG.sh scott/tiger@orcl
Concurrency Control and Authorization
The DB-based GDG management mechanism maintains the same concurrency control behavior as the file-based mechanism, but has a different *.ACS (* represents the GDG base name) file format. In the DB-based mechanism, you do not need to lock the tables mentioned in Database Tables, because any job that accesses the rows corresponding to a GDG must first acquire the file lock of that GDG; that is, there is no need to perform concurrency control at the database access level. You cannot access the database if you do not have access permission (read or write) to the corresponding *.ACS file. If you need to modify a GDG file, you must have write permission to the generation files and to the directory holding them, and MT_GDG_DB_ACCESS must be configured correctly to have appropriate permissions on the tables mentioned in Database Tables.
You can only copy DB-based GDG management description entirely and replace the file name.
Exception Handling
There are four kinds of information in DB-based GDG management mechanism:
*.ACS file
This information must be kept consistent for a GDG file. Batch Runtime checks the consistency from GDG_DEFINE through to the physical files when a GDG file is accessed for the first time in a job. If exceptions happen and result in inconsistency among this information, Batch Runtime terminates the current job and reports an error.
This behavior is different from the existing file-based mechanism, which does not check the consistency but only reports exceptions encountered in the process.
Support for Data Control Block (DCB)
Both file-based GDG and DB-based GDG support Data Control Block (DCB).
Defining .dcb File
.dcb file can have two values: "-t <file type>" and "-r <record length>".
-t <file type>
-t <file type> must be LSEQ or SEQ in m_FileAssign to create the first generation file. If you don't specify any file type in job ksh file, LSEQ will be used by default.
-r <record length>
For SEQ file, the value is mandatory and must be a number or "number1-number2".
For LSEQ file, the value is optional. Once set, this value must be a number.
Creating .dcb file
A .dcb file is created for a GDG data set when the first generation file is created by m_FileAssign -g +1.
Notes:
If a GDG is created by m_GenDefine rather than m_FileAssign, .dcb file will not exist until the first generation file is created by m_FileAssign -g +1.
Once .dcb file is created, its contents will not be changed by any other m_FileAssign statement afterwards, unless such m_FileAssign creates the first generation file again.
Deleting .dcb file
If a GDG is deleted by m_FileDelete, the corresponding .dcb file will be deleted automatically.
However, if all generation files in one GDG are deleted while the GDG itself exists, the corresponding .dcb file will not be deleted.
Using an In-Stream File
To define and use a file whose data is written directly inside the Korn shell script, use the m_FileAssign function with the -i parameter. By default, the string _end is the "end" delimiter of the in-stream flow, as shown in Listing 3‑28.
Listing 3‑28 In-stream Data Example
(STEP1)
m_FileAssign -i INFIL
data record 1
data record 2
_end
 
 
Using a Set of Concatenated Files
To use a set of files as a concatenated input (which in z/OS JCL was coded as DD cards, where only the first one contains a label), use the m_FileAssign function with the -C parameter as shown in Listing 3‑29.
Listing 3‑29 Using a Concatenated Set of Files Example
(STEPDD02)
m_FileAssign -d SHR INF ${DATA}/PJ01DDD.BT.QSAM.KBDDI002
m_FileAssign -d SHR -C ${DATA}/PJ01DDD.BT.QSAM.KBDDI001
m_ProgramExec BDDAB001
 
Using an External “sysin”
To use an "external sysin" file which contains the commands to be executed, use the m_UtilityExec function.
m_FileAssign -d OLD SYSIN ${SYSIN}/SYSIN/MUEX07
m_UtilityExec
Deleting a File
Files (including generation files) can be deleted using the m_FileDelete function:
m_FileDelete ${DATA}/PJ01DDD.BT.QSAM.KBSTO045
RDB Files
In a migration project from z/OS to UNIX/Linux, some permanent data files may be converted to relational tables. See the File-to-Oracle chapter of the Oracle Tuxedo Application Runtime Workbench documentation.
When a file is converted to a relational table, this change has an impact on the components that use it. Specifically, when such a file is used in a z/OS JCL, the converted Korn shell script corresponding to that JCL should be able to handle operations that involve this file.
In order to keep the translated Korn shell script as standard as possible, this change is not handled in the translation process. Instead, all the management of this type of file is performed at execution time within Batch Runtime.
For example, if the z/OS JCL contained a file copy operation involving the converted file, this is translated to a standard Batch Runtime file copy operation (an m_FileLoad operation).
The management of a file converted to a table is made possible through an RDB file. An RDB file is a file that has the same name as the file that is converted to a table but with an additional suffix: .rdb.
Each time a file-related function is executed by Batch Runtime, it checks whether the files were converted to tables (by testing for the presence of a corresponding .rdb file). If one of the files concerned has been converted to a table, the function performs the required intermediate operations (such as unloading and reloading the table to a file) before performing the final action.
All of this management is transparent to the end-user.
Using an RDBMS Connection
When executing an application program that needs to connect to the RDBMS, the -b option must be used when calling the m_ProgramExec function.
Connection and disconnection (as well as the commit and rollback operations) are handled implicitly by Batch Runtime and can be defined using the following two methods:
Set the environment variable MT_DB_LOGIN before booting the TuxJES system.
Note:
The MT_DB_LOGIN value must use the following form: dbuser/dbpasswd[@ssid] or "/".
Note:
"/" should be used when the RDBMS is configured to allow the use of UNIX authentication and not RDBMS authentication, for the database connexion user.
Please check with the database administrator whether "/" should be used or not.
The -b option must also be used if the main program executed does not directly use the RDBMS but one of its subsequent sub-programs does as shown in Listing 3‑30.
Listing 3‑30 RDBMS Connection Example
(STEPDD02)
m_FileAssign -d MOD OUTF ${DATA}/PJ01DDD.BT.QSAM.REPO001
m_ProgramExec -b DBREP001
 
The m_ProgramExec function may submit three types of executable files (COBOL executable, command language script, or C executable). It launches the runb program. The runb program is provided in $ARTDIR/Batch_RT/ejr_mf_ora (on Linux) and ejr_ora (on other platforms). If you use neither the Micro Focus COBOL compiler nor Oracle Database, go to $ARTDIR/Batch_RT/ejr and run "make.sh" to generate the runb you require.
The runb program, a runtime compiled with the database libraries, runs the runbatch program.
The runbatch program is in charge of:
- connecting to the database (if necessary)
- running the user program
- committing or rolling back (if necessary)
- disconnecting from the database (if necessary)
Submitting a Job Using INTRDR Facility
The INTRDR facility allows you to submit the contents of a sysout to TuxJES (see the TuxJES documentation). If TuxJES is not present, a "nohup EJR" command is used.
Example:
m_FileAssign -d SHR SYSUT1 ${DATA}/MTWART.JCL.INFO
m_OutputAssign -w INTRDR SYSUT2
m_FileRepro -i SYSUT1 -o SYSUT2
The contents of the file ${DATA}/MTWART.JCL.INFO (ddname SYSUT1) are copied into the file whose ddname is SYSUT2 and, through the "-w INTRDR" option, submitted.
Note that the output file must contain valid ksh syntax.
Note:
Submitting a Job With EJR
When using Batch Runtime, TuxJES can be used to launch jobs (see the TuxJES documentation), but a job can also be executed directly using the EJR spawner.
Before performing this type of execution, ensure that the entire context is correctly set. This includes environment variables and directories required by Batch Runtime.
Example of launching a job with EJR:
# EJR DEFVCUST.ksh
For a complete description of the EJR spawner, please refer to the Oracle Tuxedo Application Runtime for Batch Reference Guide.
User-Defined Entry/Exit
Batch Runtime allows you to add custom pre- or post-actions for public APIs. For each m_* function (* represents any function name), you can provide m_*_Begin and m_*_End functions and put them in the ejr/USER_EXIT directory. They are invoked automatically when a job execution enters or leaves an m_* API.
Whether an m_* API calls its user-defined entry/exit function depends on the existence of m_*_Begin and m_*_End under ejr/USER_EXIT.
A pair of general user entry/exit APIs, mi_UserEntry and mi_UserExit, are called at the entry and exit point of each external API. The argument to these APIs consists of the function name in which they are called, and the original argument list of that function. You don’t need to modify these two APIs, but just need to provide your custom entry/exit for m_* external APIs. mi_UserEntry and mi_UserExit are placed under ejr/COMMON.
Note:
You are advised not to call exit in a user entry/exit function, because in the framework exit is aliased to an internal function, mif_ExitTrap, which is ultimately invoked if exit is called in a user entry/exit function. If exit 0 is called, the framework does nothing and the job continues; if exit with a non-zero value is called, a global variable is set and the current job may be terminated.
Configuration
You should include only one function, e.g. m_*_Begin or m_*_End, in a single file with the same name as the function, and then put all such files under ejr/USER_EXIT.
You are not allowed to provide custom entry/exit functions for any mi_ prefix function provided by Batch Runtime.
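As an illustration only, a user entry for m_FileAssign could be provided in a file named ejr/USER_EXIT/m_FileAssign_Begin (the file and function name follow the m_*_Begin convention described above; the body is hypothetical):
m_FileAssign_Begin()
{
    # Hypothetical trace written before every m_FileAssign call;
    # "$@" carries the original m_FileAssign argument list.
    echo "m_FileAssign about to run with arguments: $@"
    return 0
}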
Batch Runtime Logging
This section contains the following topics:
General Introduction
Log Message Format
Each log message defined in CONF/Messages.conf is composed of six fields, as listed in Table 3‑10:
 
Table 3‑10 Log Message Format
The levels of these messages are set to 4 by default.
You can specify the message level of Batch Runtime to control whether to print these three messages in job log.
Log Message Level
Table 3‑11 lists the log message levels provided by Batch Runtime:
 
Table 3‑11 Log Message Level
Log Level Control
The default level for displaying messages in the job log is 3. You can choose one of the following ways to change the level:
Use the -V option of EJR
The display level set by EJR can override the level set by MT_DISPLAY_LEVEL.
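For example (a sketch; it assumes that the -V option takes the display level as its argument, and the job name is taken from the example later in this chapter):
export MT_DISPLAY_LEVEL=4      # raise the default display level through the environment
EJR -V 4 DEFVCUST.ksh          # or override the level for a single execution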
Log File Structure
For each launched job, Batch Runtime produces a log file containing information for each step that was executed. This log file has the following structure as shown in Listing 3‑31.
Listing 3‑31 Log File Example
JOB Jobname BEGIN AT 20091212/22/09 120445
BEGIN PHASE Phase1
Log produced for Phase1
.......
.......
.......
END PHASE Phase1 (RC=Xnnnn, JOBRC=Xnnnn)
BEGIN PHASE Phase2
Log produced for Phase2
.......
.......
.......
END PHASE Phase2 (RC=Xnnnn, JOBRC=Xnnnn)
..........
..........
BEGIN PHASE END_JOB
..........
END PHASE END_JOB (RC=Xnnnn, JOBRC=Xnnnn)
 
JOB ENDED WITH CODE (C0000)
Or
JOB ENDED ABNORMALLY WITH CODE (S990)
 
When not using TuxJes, the log file is created under the ${MT_LOG} directory with the following name: <Job name>_<TimeStamp>_<Job id>.log
For more information, see Using Tuxedo Job Enqueueing Service (TuxJES).
Log Header
Batch Runtime logging functionality provides an informative log header in front of each log line, in the following format:
YYYYmmdd:HH:MM:SS:TuxSiteID:JobID:JobName:JobStepName
You can configure the format of the log header, but doing so should not impact the configuration or behavior of the existing specific message headers: types 0, 1, and b.
Table 3‑10 shows the variables you can use for specifying the general log header:
 
Name of the job assigned by m_JobBegin in the job script.
Name of the proc when the code included from a PROC by m_ProcInclude is executing; empty otherwise.
Configuration
MT_LOG_HEADER is a new configuration variable added in CONF/BatchRT.conf, for example:
MT_LOG_HEADER='$(date'+%Y%m%d:%H%M%S'):${MTI_SITE_ID}:${MTI_JOB_NAME}:${MTI_JOB_ID}:${MTI_JOB_STEP}: '
If the value of MT_LOG_HEADER is not a null string, its contents are evaluated as a shell statement to get its real value to be printed as the log header, otherwise this feature is disabled.
Note:
The string configured in MT_LOG_HEADER is treated as a shell statement in the source code and is interpreted by the "eval" command to generate the corresponding string used as the log header:
Syntax inside: eval mt_MessageHeader=\"${MT_LOG_HEADER}\"
To configure this variable, you need to comply with the following rules:
MT_LOG_HEADER must be a valid shell statement for "eval", and must be quoted by single quotation marks.
All the variables used in MT_LOG_HEADER must be quoted by "${}". For example: ${MTI_JOB_STEP}
All the command lines used in MT_LOG_HEADER must be quoted by "$()". For example: $(date '+%Y%m%d:%H%M%S')
You can modify the above examples according to your format needs using only the variables listed in Table 3‑10.
This configuration variable is commented out by default; you need to uncomment it to enable this feature.
File Information Logging
The logging system can log detailed file information in the job log, as well as information about when a file is assigned to a DD and when it is released.
File assignment information is logged in the following functions:
m_FileAssign
File release information is logged in the following functions:
m_PhaseEnd
File information is logged in the following functions:
Configuration
Messages.conf
The following message identifiers are defined in CONF/Messages.conf to support the use of mi_DisplayFormat to write file assignment and file information log messages.
Notes:
CONF/Messages.conf is not configurable. Do not edit this file.
The string "%s" at the end of each identifier represents it will be written to log file. You can configure its value using the following variables defined in CONF/Batch.conf. For more information, see Table 3‑12.
MT_LOG_FILE_ASSIGN (for FileAssign)
MT_LOG_FILE_RELEASE (for FileRelease)
MT_LOG_FILE_INFO (for FileInfo)
BatchRT.conf
Three configuration variables should be defined in CONF/BatchRT.conf to determine the detailed file information format. With the placeholders listed in Table 3‑13, you can configure file log information more flexibly.
 
Table 3‑13 Placeholders
SHR or NEW
 
Note:
"operation" is hard-coded into source code, such as FileCopy source, FileCopy Destination, and FileDelete etc.
To configure strings to these MT_LOG_FILE_* variables, replace the placeholders with corresponding values (just string replacement). The result is treated as a shell statement, and is interpreted by "eval" command to generate the corresponding string writing to log:
Syntax inside: eval mt_FileInfo=\"${MT_LOG_FILE_INFO}\"
To configure these variables, you need to comply with the following rules:
After the placeholders are replaced, MT_LOG_FILE_* must be a valid shell statement for "eval", and must be quoted by single quotation marks.
Only the placeholders listed in Table 3‑13 can be used in MT_LOG_FILE_*.
All the command lines used in MT_LOG_FILE_* must be quoted by "$()". For example: $(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)
If the level of the FileInfo message is equal to or less than the message level specified for Batch Runtime and MT_LOG_FILE_* is set to a null string, the FileInfo message is not displayed in the job log. If MT_LOG_FILE_* is set to an incorrect command that makes the file information invisible, the FileInfo message is likewise not displayed in the job log, but job execution is not impacted.
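As an illustration, a possible MT_LOG_FILE_INFO setting in CONF/BatchRT.conf, built from the <%FULLPATH%> placeholder and the command shown in the rule above (other placeholders from Table 3‑13 can be added the same way):
MT_LOG_FILE_INFO='$(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)'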
Note:
Using Batch Runtime With a Job Scheduler
Entry points are provided in some functions (m_JobBegin, m_JobEnd, m_PhaseBegin, m_PhaseEnd) in order to insert specific actions to be made in relation with the selected Job Scheduler.
Executing an SQL Request
A SQL request may be executed using the function m_ExecSQL.
Depending on the target database, the function executes a "sqlplus" command with an Oracle database, or a "db2 -tsx" command with UDB.
Note that the environment variable MT_DB_LOGIN must be set (database connection user login).
The SYSIN file must contain the SQL requests, and the user has to verify that the contents are suitable for the target database.
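A sketch of such a step (it assumes, as described above, that m_ExecSQL reads the SQL requests from the assigned SYSIN file; the SQL statement itself is illustrative):
(STEPSQL1)
m_FileAssign -i SYSIN
DELETE FROM MYTABLE WHERE STATUS = 'OBSOLETE';
_end
m_ExecSQL
JUMP_LABEL=END_JOB
;;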
Simple Application on COBOL-IT / BDB
Batch COBOL programs compiled by COBOL-IT can access the indexed ISAM files which are converted from Mainframe VSAM files through the ART Workbench. VSAM files can be stored in BDB through COBOL-IT.
To enable this function in Batch Runtime, do the following at runtime:
Set DB_HOME correctly because it is required by BDB; DB_HOME points to a place where temporary files are put by BDB.
Unset the COB_ENABLE_XA environment variable before booting the TuxJES system:
unset COB_ENABLE_XA
Note:
It is required to set COB_ENABLE_XA when you use COBOL-IT with ART CICS Runtime.
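A minimal environment sketch applied before booting the TuxJES system (the DB_HOME path is illustrative):
export DB_HOME=/path/to/bdb_home   # location where BDB puts its temporary files
unset COB_ENABLE_XA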
Dynamic JCL Job Execution
This section contains the following topics:
General Introduction
Oracle Tuxedo ART Batch Runtime allows users to manage native JCL jobs with real-time Workbench conversion, without any pre-conversion. For more information, please refer to JCL Conversion.
Requirements
Oracle Tuxedo ART Workbench must be installed and executable.
The following two additional requirements must be fulfilled if Oracle Tuxedo ART Workbench is deployed on a remote machine (host1) while the ARTJESCONV server is deployed on another machine (host2).
A trusted SSH connection must be configured between host1 and host2, that is, the user (user2) who boots up ARTJESCONV is allowed to log into host1 without passwords. By doing this, ARTJESCONV can invoke ART Workbench installed on host1 directly without interaction.
Note:
If multiple ARTJESCONV servers on more than one machine are configured in JES domain, the trusted SSH connection should be configured on each machine equipped with ARTJESCONV.
If Workbench is deployed on local machine, it is optional to set the host.
For example, you must do the following to add user2@host2 as a trusted user of user1@host1 if Oracle Tuxedo ART Workbench and ARTJESCONV are deployed on different machines.
1. Log in to host2 with user name user2.
2. Run "cd $HOME/.ssh" on host2.
3. Run "ssh-keygen -t rsa" to generate id_rsa and id_rsa.pub.
4. Log in to host1 with user name user1.
5. Run "cd $HOME/.ssh" on host1.
6. Add the content of the host2:$HOME/.ssh/id_rsa.pub file to authorized_keys.
Configurations
Working Folder Configurations for JCL Conversion
The template working folder for JCL conversion is $JESROOT/jcl_conv_dir, which is created automatically at startup if it does not exist. The contents of $JESDIR/Batch_RT/jcl_conv_dir are automatically copied to this working folder when Batch Runtime starts. When a JCL job is submitted, JES copies this template folder to the folder $JESROOT/<JOBID>/JCL and puts the JCL job file in the folder $JESROOT/<JOBID>/JCL/source/JCL/, where Workbench works.
Users need to copy all the INCL, PROC, and SYSIN files to the template working folder for each JCL job. When converting and executing a JCL job, $JESROOT/<JOBID>/JCL/target/PROC:$JESROOT/<JOBID>/JCL/target/INCL is added to the head of the environment variable PROCLIB, and $JESROOT/<JOBID>/JCL/target/Master-SYSIN is set to the environment variable SYSIN.
The Workbench configuration file in the working folder for JCL conversion is param/config-trad-JCL.desc. Users should customize it; otherwise, default values will be used. For more information, please refer to The JCL-Translation Configuration File.
EJR Configurations
MT_REFINEDIR and MT_REFINEDISTRIB are required to be configured. For more information, please refer to Table 3‑3.
The Queue for JCL Conversion
The queue, CONV_JCL, is added to the queue space JES2QSPACE to support JCL conversion. For more information, please refer to Table 8 TuxJES Queues.
Using JES Client to Manage JCL Jobs
Submitting a JCL Job
Option -I is used to submit a JCL job with the following usage.
artjesadmin -I JCLScriptName (in the shell command line)
submitjob -I JCLScriptName (in the artjesadmin console)
Printing Jobs
[-t JCL|KSH] is used as a filter with the following usage.
Print JCL jobs: printjob -t JCL
Print KSH jobs: printjob -t KSH
The column, job type, is added to the results with one of the following values.
JCL for JCL jobs
KSH for KSH jobs
Before the conversion phase completes, the JCL job name and class are null, and the priority is displayed as 0.
Holding/Releasing/Canceling/Purging a JCL job
The usage is the same as KSH jobs.
JCL Conversion Log
The JCL conversion log is $JESROOT/<JOBID>/LOG/<JOBID>.jcllog.
Network Job Entry (NJE) Support
This section contains the following topics.
General Introduction
With NJE support, users can implement the following functionalities in Batch Runtime exactly as they do in JCL jobs.
Using the m_JobSetExecLocation API of Batch Runtime, users can develop KSH jobs with NJE support. For example,
Configurations
Job Execution Server Group
When specifying the server group name, which is specified as the job execution group in the m_JobSetExecLocation API, please ensure the following.
The specified server group must exist in ubbconfig file of JES domain.
At least one ARTJESINITIATOR server must be deployed in that server group.
ON/OFF Setting of NJE Support
There is a corresponding setting item in JES configuration file.
 
ON: Enable NJE support
OFF: Disable NJE support
If NJE support is disabled in jesconfig, the statement m_JobSetExecLocation <SvrGrpName> is ignored by TuxJES and the job may be executed by any ARTJESINITIATOR in any server group.
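For example, assuming the usual NAME=VALUE form of jesconfig entries, NJE support is switched on with:
NJESUPPORT=ON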
Environment Variable MT_TMP in MP Mode
In MP mode, MT_TMP needs to be configured on NFS, and all the nodes in the Tuxedo domain should have the same value of MT_TMP and share it.
MT_TMP can be configured in the file $MT_ROOT/CONF/BatchRT.conf, or exported as an environment variable before tlisten is started on each node.
Queue EXECGRP
If NJESUPPORT is enabled in jesconfig, a new queue named EXECGRP must be created in the existing queue space JES2QSPACE. If EXECGRP is not created, no jobs can be processed by JES.
NJE Job Sample
Listing 3‑32 Sample of Specifying Job Execution Server Group (XEQ)
m_JobBegin -j SAMPLEJCL -s START -v 2.0 -c R
m_JobSetExecLocation "ATLANTA"
while true ;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
# XEQ ATLANTA
JUMP_LABEL=STEP01
;;
(STEP01)
m_OutputAssign -c "*" SYSPRINT
m_FileAssign -i SYSIN
m_FileDelete ${DATA}/GBOM.J.PRD.ABOMJAW1.ABEND02
m_RcSet 0
_end
m_UtilityExec
JUMP_LABEL=END_JOB
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT:-S999} "Unknown label : ${CURRENT_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
 
In the above sample, the job can be submitted on any JES node, but is executed only by an ARTJESINITIATOR that belongs to JES's Tuxedo server group ATLANTA.
Listing 3‑33 Sample of Transmitting and Submitting a Job to Another Server Group (XMIT)
m_JobBegin -j JOBA -s START -v 2.0
while true;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
m_FileAssign -i -D \_DML_XMIT_TEST1 SYSIN
m_JobBegin -j TEST1 -s START -v 2.0 -c B
m_JobSetExecLocation "ATLANTA"
while true ;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
JUMP_LABEL=STEP01
;;
(STEP01)
m_OutputAssign -c "*" SYSPRINT
m_FileAssign -i SYSIN
m_FileDelete ${DATA}/GBOM.J.PRD.ABOMJAW1.ABEND02
m_RcSet 0
_end
 
m_UtilityExec
JUMP_LABEL=END_JOB
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT:-S999} "Unknown label : ${CURRENT_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
_DML_XMIT_TEST1
m_ProgramExec artjesadmin -i ${DD_SYSIN}
JUMP_LABEL=END_JOB
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT:-S999} "Unknown label : ${CURRENT_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
 
In the above sample, job TEST1 will be submitted by the current job and executed by the ARTJESINITIATOR which belongs to JES's Tuxedo server group ATLANTA.
File Catalog Support
This section contains the following topics.
General Introduction
With file catalog support in Batch Runtime, users can access datasets under volumes. A volume is a dataset carrier and exists as a folder; each dataset should belong to a volume.
The file catalog contains the mapping from each dataset to its volume. When an existing, cataloged file is referenced, as on the mainframe, the file catalog is queried to find out the volume in which the file is located, and then the file is accessed.
If file catalog functionality is disabled, the behavior in Batch Runtime remains the same as it is without such functionality.
Database Table
This table shows the general management for file catalog functionality by Batch Runtime. In this table, each row represents one file-to-volume mapping.
 
Primary Key: PK_ART_BATCH_CATALOG
Configuration Variables
Three configuration variables must be added to BatchRT.conf or set as environment variables.
MT_USE_FILE_CATALOG
If it is set to yes (MT_USE_FILE_CATALOG=yes), the file catalog functionality is enabled; otherwise, the functionality is disabled.
MT_VOLUME_DEFAULT
If no volumes are specified when a new dataset is created, Batch Runtime uses the volume defined by MT_VOLUME_DEFAULT. MT_VOLUME_DEFAULT contains only one volume. For example, MT_VOLUME_DEFAULT=volume1.
MT_DB_LOGIN
This variable contains database access information. Since the file catalog is stored in database, Batch Runtime accesses it through MT_DB_LOGIN. For Oracle, its value is username/password@sid, such as scott/tiger@gdg001. For Db2, its value is your-database USER your-username USING your-password, such as db2linux USER db2svr USING db2svr.
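For example, a sketch of these three settings (values are illustrative and follow the formats described above):
MT_USE_FILE_CATALOG=yes
MT_VOLUME_DEFAULT=volume1
MT_DB_LOGIN=scott/tiger@gdg001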
External Shell Scripts
You can use CreateTableCatalog[Oracle|Db2].sh or DropTableCatalog[Oracle|Db2].sh to create or drop the new database table.
CreateTableCatalog[Oracle|Db2].sh
Description
Creates table ART_BATCH_CATALOG in database.
Usage
CreateTableCatalog[Oracle|Db2].sh <DB_LOGIN_PARAMETER>
Sample
CreateTableCatalogOracle.sh scott/tiger@orcl
DropTableCatalog[Oracle|Db2].sh
Description
Drops table ART_BATCH_CATALOG from database.
Usage
DropTableCatalog[Oracle|Db2].sh <DB_LOGIN_PARAMETER>
Sample
DropTableCatalogOracle.sh scott/tiger@orcl
External Dependency
To use the file catalog functionality in Batch Runtime, the File Converter and JCL Converter in ART Workbench should have catalog functionality enabled. For more information, please refer to the Oracle Tuxedo Application Rehosting Workbench User Guide.

Copyright © 1994, 2017, Oracle and/or its affiliates. All rights reserved.