Some variables (such as ORACLE_SID, COBDIR, LIBPATH, COBPATH, and so on) are shared between different components and are not described in this document. For more information, see the Rehosting Workbench Installation Guide.

Table 3‑1 lists the environment variables that are used in the KSH scripts and must be defined before using the software.
Table 3‑1 KSH Script Environment Variables
Note: Table 3‑2 lists the environment variables that are used by Batch Runtime and must be defined before using the software.
(See the BatchRT.conf configuration file.) When this variable is set to "yes", m_DBTableLoad and m_DBTableUnload call COBOL programs; in this mode, the related data files for load/unload are in the same format as on z/OS for the DSNUTILB utility (Load/Unload) and the DSNTIAUL utility (Unload). The COBOL program name can be schema-table-L (DSNUTILB Load), schema-table-U (DSNUTILB Unload), or schema-table-u (DSNTIAUL Unload). When it is set to any value other than "yes", "MT_CTL_FILES" is necessary for m_DBTableLoad and m_DBTableUnload.

Indicates the default DB schema when MT_DSNUTILB_LOADUNLOAD is set to "yes". The default value is "DEFSCHEMA". This variable is used to specify the schema for the COBOL programs "schema-table-L" and "schema-table-U".

(See the BatchRT.conf configuration file.) The default value is GENERATION_FILE. To manage GDG files in a database, set the value to GENERATION_FILE_DB and configure MT_GDG_DB_ACCESS appropriately. If the value is NULL or an incorrect directory name, an error occurs when this environment variable is used.
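As a concrete illustration of the settings above, a minimal environment sketch (values are illustrative only; consult BatchRT.conf for the authoritative defaults, and note that the schema variable's name is not shown in this excerpt, so it is omitted here):

```shell
# Illustrative exports only -- actual deployments set these in BatchRT.conf
# or in the job submission environment.
export MT_DSNUTILB_LOADUNLOAD=yes      # m_DBTableLoad/m_DBTableUnload call COBOL programs
export MT_GENERATION=GENERATION_FILE   # default; GENERATION_FILE_DB manages GDG in a database
```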
• Do not copy pdksh from other machines. Either download the source code from the official website and build the pdksh executable from it, or install pdksh through the official installer included in the corresponding OS release image.
• If you build pdksh from source code, it is recommended that you change the CPU frequency (CLK_TCK, in ksh_time.h) from 60 HZ to 100 HZ, as most modern Linux/UNIX platforms use 100 as their default CPU frequency.
• (See the BatchRT.conf configuration file.) MT_JESDECRYPT must be set to the jesdecrypt object file.

• (See the BatchRT.conf configuration file.) The TUXEDO SRVGRP value of the ARTDPL server.
Note: Table 3‑3 lists the optional environment variables used by Batch Runtime.
A variable containing valid database login information for accessing the database file catalog. Its format is the same as MT_DB_LOGIN.

Note: It takes precedence over MT_DB_LOGIN when accessing the file catalog. If the file catalog database is the same as the data database, configuring MT_DB_LOGIN alone is sufficient; otherwise, both must be configured.

Controls whether empty SYSOUT files are cleaned up at the end of job execution.
• MT_CLEANUP_EMPTY_SYSOUT=Y: empty SYSOUT files are cleaned up.
• MT_CLEANUP_EMPTY_SYSOUT=N: empty SYSOUT files are not cleaned up.

A variable to use a specified configuration file instead of the default configuration file BatchRT.conf under "ejr/CONF". For jobs submitted from EJR, export this variable in advance. For jobs submitted from TuxJES, export this variable before restarting the TuxJES servers. You can restore the default configuration file BatchRT.conf by unsetting this variable (and, for TuxJES, restarting the TuxJES servers).

A variable used to enable the CPU time usage monitor of steps for all jobs. Set MT_CPU_MON_STEP=yes to enable it. If MT_CPU_MON_STEP is not configured or its value is not "yes", this feature is disabled.

If MT_DB_LOGIN2 has a non-null value, BatchRT uses runb2 (which supports parallel Oracle and DB2 access).

Specifies which DB preprocessor is executed before SQL is executed. The built-in DB2-to-Oracle SQL Converter is "${JESDIR}/tools/sql/oracle/BatchSQLConverter.sh".

Specifies a full file path. The file is used to store the mapping from "DB SYSTEM" to "DB connection credential string". The file format is:
• When <DB TYPE> is "ORA", the <connection string> format is <user>/<pwd>@<instance id>. For example, ORA01:ORA:tigger/scot@orains01.
• When <DB TYPE> is "DB2", the <connection string> format is <instance id> user <user> using <pwd>. For example, DB202:DB2:db2ins02 user tom using cat.

This file is accessed when "DB SYSTEM" is specified in the following EJR APIs: m_ProgramExec, m_DBTableLoad, m_DBTableUnload, m_ExecSQL, m_DSNUTILB, and m_UtilityExec.
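The mapping file's colon-separated format can be checked with plain shell parameter expansion. A small sketch using the ORA example line from above (nothing here is Batch Runtime API, just standard shell):

```shell
# Split one illustrative mapping line of the form
# <DB SYSTEM>:<DB TYPE>:<connection string>.
line='ORA01:ORA:tigger/scot@orains01'
dbsys=${line%%:*}       # field 1: DB SYSTEM name
rest=${line#*:}
dbtype=${rest%%:*}      # field 2: DB TYPE (ORA or DB2)
connstr=${rest#*:}      # field 3: connection credential string
echo "$dbsys $dbtype $connstr"
```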
Note: When "DB SYSTEM" is specified in a nested way, only the outer setting takes effect. For example, in the following case, only "ORA01" takes effect (and "ORA02" is ignored). If it is configured to "Y", Batch runtime provides you DSNTIAUL utility to unload data from Oracle Database tables. This utility has the same functionality as DSNTIAUL utlity on mainframe with DB2. If it is configured to "N" or if it is not configured, Batch runtime executes SQL statement and writes the output to a specific file in plain text. The default value is "Y". If it is configured to "Y", BatchRT generates an EJR log file and writes every phase's log to it. If it is configured to "N", BatchRT does not generate the EJR log file. The default value is "Y". A list of executable programs. The programs are invoked by runbexci instead of runb. For each program in this list, whether or not -n is specified by m_ProgramExec, the program is invoked only by runbexci.
Note: If configured to "Y", the GDG changes are committed using a single database access. If configured to "N", the GDG changes are committed using one or more database accesses. The default value is "N".
• MT_GDG_USEDCB=Y: Create .dcb file for GDG (default behavior). In this mode, LSEQ or SEQ can be specified as file type of GDG members in m_FileAssign statement.
• MT_GDG_USEDCB=N: Do not create a .dcb file for GDG. In this mode, the file type of GDG members can only be LSEQ; any file type you specify in the m_FileAssign statement is ignored.
• DB_ORACLE for ORACLE
• DB_DB2LUW for UDB

If MT_META_DB has a non-null value, BatchRT uses the database type defined in MT_META_DB for metadata. Otherwise, MT_DB is used.

The full install path of the Workbench refine, which is invoked to convert a JCL job to a KSH job. For example:

The value of the environment variable REFINEDISTRIB, which is used when Workbench converts a JCL job. For example:
• MT_REFINEDISTRIB = Linux64: Set REFINEDISTRIB to Linux64
• MT_REFINEDISTRIB = Linux32: Set REFINEDISTRIB to Linux32

In BatchRT.conf, this item is used to make runb redirect SYSIN and SYSOUT for COBOL programs run by m_ProgramExec. If "SYSIN" is set, the stdin for the utility is redirected to file ${DD_SYSIN}; if DD_SYSIN does not exist, no redirection occurs. Example: MT_SYS_IO_REDIRECT=SYSIN. If "SYSOUT" is set, the stdout and stderr for the utility are redirected to file ${DD_SYSOUT}; if DD_SYSOUT does not exist, no redirection occurs. Example: MT_SYS_IO_REDIRECT=SYSOUT. Example: MT_SYS_IO_REDIRECT=SYSIN,SYSOUT. By default, MT_SYS_IO_REDIRECT=SYSIN,SYSOUT.

In EJR mode, if it is configured to "Y", BatchRT generates a SYSLOG file. If it is configured to "N", BatchRT does not generate the SYSLOG file. The default value is "Y".

If it is configured to "Y", hour, minute, second, and millisecond are used for Step Start Time and Step End Time in the SYSLOG file. If it is configured to "N", only hour, minute, and second are used (millisecond is not). The default value is "N".
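For example, the default redirection behavior can be made explicit in the environment (a sketch; the value shown is the documented default):

```shell
# Redirect both stdin (from ${DD_SYSIN}) and stdout/stderr (to ${DD_SYSOUT})
# for programs run by m_ProgramExec. This is the documented default value.
export MT_SYS_IO_REDIRECT=SYSIN,SYSOUT
```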
• MT_USERENTRYEXIT=Y: user entry/exit function is enabled.
• MT_USERENTRYEXIT=N: user entry/exit function is disabled.

A list of executable programs that do not exist but should not cause job failures. When m_ProgramExec invokes a nonexistent program, the JOB continues if that program is specified in this list. For example:

The host name (or IP address) of the machine where Workbench is installed, to be invoked to convert a JCL job to a KSH job. The value of MT_WB_HOSTNAME is null if Workbench is on the localhost. A user name can optionally be added. For example:
Note: This is required if Workbench is deployed on a remote machine while the ARTJESCONV server is deployed on another machine.

If configured to "Y", record-sequential ASCII files are sorted in EBCDIC order. If configured to "N", record-sequential ASCII files are sorted in ASCII order.

If configured to "Y", for numeric DISPLAY items with included signs, the signs are interpreted according to the EBCDIC convention. If configured to "N", the signs are interpreted according to the ASCII convention.

Any return code greater than or equal to MT_PROG_RC_ABORT is considered an abort; any code less than MT_PROG_RC_ABORT is considered a commit.

Table 3‑4 lists the environment variables that are used by Native JCL Batch Runtime and must be defined before using the software.
File concurrency access directory that contains the files AccLock and AccWait. These files must be created empty before you run Batch Runtime.
• COBOL_MF: use Micro Focus COBOL
• COBOL_IT: use COBOL-IT COBOL
• DB_ORACLE: use Oracle
• DB_DB2LUW: use DB2

Table 3‑5 lists the optional environment variables used by Native JCL Batch Runtime.
Sets the number of threads used to insert records into the table in the load process of the DSNUTILB utility. The default value is 5.

Controls the log output level. Its value can be one of the following: ERROR, WARN, INFO, DEBUG, and DUMP. The default value is INFO.

When MT_VOLUME_DEFAULT is set to a non-empty value, the catalog feature is enabled. The value is used as the volume value if no volume is specified when a new dataset is created. If MT_VOLUME_DEFAULT is not set, the catalog feature is disabled.

If MT_DB_LOGIN2 has a non-null value, BatchRT uses parallel Oracle and DB2 access.

Specifies a full-path file name. The file is used to store the mapping from "DB SYSTEM" to "DB connection credential string". The file format is:
• When <DB TYPE> is "ORA", the <connection string> format is <user>/<pwd>@<instance id>. For example, ORA01:ORA:tigger/scot@orains01.
• When <DB TYPE> is "DB2", the <connection string> format is <instance id> user <user> using <pwd>. For example, DB202:DB2:db2ins02 user tom using cat.

When MT_DB2_SYSTEM_MAPPING is defined, the mapping from "DB SYSTEM" to "DB connection credential string" is enabled; otherwise, it is disabled.

If it is configured to "Y" or not configured, Batch Runtime provides the DSNTIAUL utility to unload data from Oracle Database tables. This utility has the same functionality as the DSNTIAUL utility on mainframe DB2. If it is configured to "N", Batch Runtime executes the SQL statement and writes the output to a specific file in plain text. The default value is "Y".
• SORT_MicroFocus for the Micro Focus sort utility
• SORT_SyncSort for the SyncSort sort utility
• SORT_CIT for the citsort utility

If not specified, the value depends on MT_COBOL:

• SORT_MicroFocus, if MT_COBOL=MF is set
• SORT_CIT, if MT_COBOL=IT is set

Identifies how numeric DISPLAY items with included signs are interpreted:
• Y: Default. They are interpreted according to the EBCDIC convention.
• N: They are interpreted according to the ASCII convention.
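The sort-utility fallback described above (if no sort utility is specified, the choice depends on MT_COBOL) can be sketched in plain shell. The variable name MT_SORT is a stand-in, since the real variable name is not shown in this excerpt:

```shell
# Sketch of the documented fallback for sort-utility selection: if the sort
# variable (MT_SORT here is a hypothetical stand-in name) is unset, derive it
# from MT_COBOL.
MT_COBOL=MF                      # illustrative; MF (Micro Focus) or IT (COBOL-IT)
if [ -z "${MT_SORT:-}" ]; then
    case "$MT_COBOL" in
        MF) MT_SORT=SORT_MicroFocus ;;
        IT) MT_SORT=SORT_CIT ;;
    esac
fi
echo "$MT_SORT"
```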
• The default value is SYSOUT.

Any return code greater than or equal to MT_PROG_RC_ABORT is considered an abort; any code less than MT_PROG_RC_ABORT is considered a commit.
2. MT_ACC_FILEPATH should be located on shared storage (NFS) with the same mount point on all machines in the domain, since the control files for file locking are put in this directory. In addition, make sure the AccLock and AccWait files under this directory can be read and written by the effective user of the process running the jobs.

Within Batch Runtime, a phase corresponds to an activity or a step on the source system. At the end of each phase, the JUMP_LABEL variable is updated to the label of the next phase to be executed. In the following example, the last functional phase sets JUMP_LABEL to JOBEND: this label allows a normal termination of the job (exiting the phase loop).

The mandatory parts of the script (the beginning and end parts) are shown in bold and the functional part of the script (the middle part) in normal style, as shown in Table 3‑6. The optional part of the script must contain the labels, branching, and end of steps as described below. The items of the script to be modified are shown in italics.
Table 3‑6 Script Structure

JUMP_LABEL=STEP2 (PENULTIMATESTEP) For the label, which must point to END_JOB. The _ is necessary because this character is forbidden on z/OS.

Listing 3‑1 shows a Korn shell script example.

Listing 3‑1 Korn shell Script Example

Symbols are internal script variables that allow script statements to be easily modified. A value is assigned to a symbol through the m_SymbolSet function, as shown in Listing 3‑2. To use a symbol, use the following syntax: $[symbol]

Listing 3‑2 Symbol Use Examples

The most frequent steps are those that execute an application or utility program. These kinds of steps are generally composed of one or several file assignment operations followed by the execution of the desired program. All the file assignment operations must precede the program execution operation, as shown in Listing 3‑3.

Listing 3‑3 Application Program Execution Step Example

The ABEND routines ILBOABN0, CEE3ABD, and ART3ABD can be called from a running program to force it to abort and return the abend code to the KSH script. For example, ILBOABN0 is supplied as both source and a binary gnt file. It can be directly called by any user-defined COBOL program.

Listing 3‑5 USER.cbl Example

An in-stream procedure in a Korn shell script always starts with a call to the m_ProcBegin function, followed by all the tasks composing the procedure, and terminates with a call to the m_ProcEnd function. Listing 3‑6 is an example.

Listing 3‑6 In-stream Procedure Example

External procedures do not require the use of the m_ProcBegin and m_ProcEnd functions; simply code the tasks that are part of the procedure, as shown in Listing 3‑7.

Listing 3‑7 External Procedure Example

The use of a procedure inside a Korn shell script is made through a call to the m_ProcInclude function. As described in Script Execution Phases, during the Conversion Phase a Korn shell script is expanded by including the procedure's code each time a call to the m_ProcInclude function is encountered.
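The JUMP_LABEL phase loop described above can be sketched in plain shell. This is a simplified illustration: a real generated script uses m_JobBegin, phase labels, and related Batch Runtime functions rather than a hand-written loop:

```shell
# Simplified sketch of the phase loop: each phase sets JUMP_LABEL to the next
# label to execute; JOBEND terminates the job normally (exits the loop).
JUMP_LABEL=STEP1
while [ "$JUMP_LABEL" != "JOBEND" ]; do
    case "$JUMP_LABEL" in
        STEP1)
            echo "running STEP1"
            JUMP_LABEL=STEP2 ;;
        STEP2)
            echo "running STEP2"
            JUMP_LABEL=JOBEND ;;      # last functional phase branches to JOBEND
        *)
            echo "unknown label: $JUMP_LABEL" >&2
            break ;;
    esac
done
```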
It is necessary that, after this operation, the resulting expanded Korn shell script still respects the rules of the general structure of a script as defined in General Structure of a Script. A procedure, either in-stream or external, can be used in any place inside a calling job provided that the above principles are respected, as shown in Listing 3‑8.

Listing 3‑8 Call to the m_ProcInclude Function Example

Listing 3‑9 and Listing 3‑10 are examples.

Listing 3‑9 Defining Procedure Example

Listing 3‑10 Calling Procedure Example

As specified in Best Practices, this way of coding procedures is provided mainly for supporting Korn shell scripts resulting from z/OS JCL translation and is not recommended for Korn shell scripts newly written for the target platform.

The overriding of a file assignment is made using the m_FileOverride function, which specifies a replacement for the assignment present in the procedure. The call to the m_FileOverride function must follow the call to the procedure in the calling script. Listing 3‑11 shows how to replace the assignment of the logical file SYSUT1 using the m_FileOverride function.

Listing 3‑11 m_FileOverride Function Example

Listing 3‑12 m_FileOverride Procedure Call

The m_CondIf, m_CondElse, and m_CondEndif functions can be used to condition the execution of one or several steps in a script. The behavior is similar to the z/OS JCL statement constructs IF, THEN, ELSE, and ENDIF. The m_CondIf function must always have a relational expression as a parameter, as shown in Listing 3‑13. These functions can be nested up to 15 times.

Listing 3‑13 m_CondIf, m_CondElse, and m_CondEndif Example

The m_CondExec function is used to condition the execution of a step. m_CondExec must have at least one condition as a parameter and can have several conditions at the same time.
In case of multiple conditions, the step is executed only if all the conditions are satisfied. The m_CondExec function must be the first function called inside the concerned step, as shown in Listing 3‑14.

Listing 3‑14 m_CondExec Example with Multiple Conditions
• The start label specified by the m_JobBegin function: this label is usually the first label in the script, but can be changed to any label present in the script if the user wants to start the script execution from a specific step.
• The value assigned to the JUMP_LABEL variable in each step: this assignment is mandatory in each step, but its value is not necessarily the label of the following step.
• The usage of the m_CondExec, m_CondIf, m_CondElse, and m_CondEndif functions: see Conditioning the Execution of a Step.

If the Batch Runtime administrator wishes to change the default messages (to change the language, for example), this can be done through a configuration file whose path is specified by the environment variable MT_DISPLAY_MESSAGE_FILE.

When using Batch Runtime, a file can be used either by a Batch Runtime function (for example, m_FileSort, m_FileRename, and so on) or by a program, such as a COBOL program. In both cases, before being used, a file must first be assigned. Files are assigned using the m_FileAssign function that:

The environment variable defined via the m_FileAssign function is named DD_IFN. This naming convention is used by Micro Focus COBOL to map internal file names to external file names. Once a file is assigned, it can be passed as an argument to any of the Batch Runtime functions handling files by using the ${DD_IFN} variable.

Listing 3‑15 Example of File Assignment

Listing 3‑16 Example of Using a File by a COBOL Program
• It is suggested that you always add this compile option when compiling COBOL programs.

Table 3‑7 lists the behavior of the APIs that support DDN.
Table 3‑7 Micro Focus COBOL DISP=MOD Behavior m_ProgramExec: COBOL Program m_ProgramExec: Other Program

For COBOL-IT, there is no file-handle-level support for DISP=MOD (as in Micro Focus COBOL), so there is no special requirement for compiling COBOL programs. Table 3‑8 lists the behavior of the APIs that support DDN.
Table 3‑8 COBOL-IT DISP=MOD Behavior m_ProgramExec: COBOL Program m_ProgramExec: Other Program
1. Use the environment variable MT_ACC_FILEPATH to specify a directory for the lock files required by the concurrent access control mechanism.
2. The following two lines in ejr/CONF/BatchRT.conf should be commented out:

A GDG file is defined and/or redefined through m_GenDefine. The operation of defining or redefining a GDG is committed immediately and cannot be rolled back. As shown in Listing 3‑17, the first line defines a GDG and sets its maximum generations to 15; the second line redefines the same GDG's maximum generations to 30; the third line defines a GDG without specifying the "-s" option (its maximum generations is set to 9999); the fourth line defines a GDG implicitly and sets its maximum generations to 9999; the fifth line defines a GDG using the model file $DATA/FILE, which can be either a GDG file or a normal file.

Listing 3‑17 Example of Defining and Redefining GDG Files

To add a new generation file (GDS) into a GDG, call m_FileAssign with the "-d NEW/MOD,…" and "-g +n" parameters. GDS file types can only be LSEQ or SEQ.
• One generation number (GenNum) can be added only once in a job. Listing 3‑19 shows an incorrect usage.
• The filename of a newly created GDS is generated by the generation number specified in m_FileAssign in the format of <current GDS number> + <GenNum>. See Listing 3‑20 for an example.
• In a job, if multiple generation files (GDS) are newly created, the GDS with the maximum RGN becomes the current GDS after the job finishes. See Listing 3‑21 for an example.

Listing 3‑20 Example of Listing GDS Filenames

In the above example, suppose $DATA/GDG1 has three GDS numbered 1, 2, and 4, respectively. The corresponding GDS files are listed below. After the above job runs, $DATA/GDG1 has five GDS numbered 1, 2, 4, 5, and 9, respectively. The corresponding GDS files are listed below.

Listing 3‑21 Example of Defining the Current GDS

In the above example, the GDS whose RGN equals +5 becomes the current GDS, meaning its RGN becomes 0 after the job finishes successfully.

To refer to an existing generation file (GDS) in a GDG, call m_FileAssign with the "-d OLD/SHR/MOD,…" and "-g 0", "-g all", or "-g -n" parameters. "-g 0" refers to the current generation, "-g all" refers to all generation files, and "-g -n" refers to the generation file that is the nth generation counting backward from the current generation (generation 0). For example, if GDG1 contains six GDS numbered 1, 4, 6, 7, 9, and 10, respectively, the mapping of GN and RGN is listed below.
In the following job, use RGN=-1 to reference the GDS whose GN equals 9 and RGN=-4 to reference the GDS whose GN equals 4.

Listing 3‑22 Example of Referencing Existing Generation Files

If "DELETE" is specified in the DISPOSITION field of m_FileAssign, the corresponding GDS is deleted after the current step finishes, resulting in a change of the mapping between GN and RGN. The changed mapping is visible in the next step. For example, if GDG1 contains six GDS numbered 1, 4, 6, 7, 9, and 10, respectively, the mapping of GN and RGN is listed below.
In the following job, use RGN=-1 to reference the GDS whose GN equals 9 and RGN=-4 to reference the GDS whose GN equals 4.
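The backward-counting RGN arithmetic used in these examples can be sketched in plain shell. The GN list 1 4 6 7 9 10 comes from the example above; rgn_to_gn is an illustrative helper, not a Batch Runtime function:

```shell
# Map a relative generation number (RGN <= 0) to an absolute GN, given the
# ordered GN list from the example above (the current generation is the last).
gns="1 4 6 7 9 10"
rgn_to_gn() {
    # $1 is the RGN (0, -1, -2, ...); count backward from the newest GN.
    idx=0
    for gn in $gns; do idx=$((idx + 1)); done   # idx = number of generations
    want=$((idx + $1))                          # 1-based position of target GN
    i=0
    for gn in $gns; do
        i=$((i + 1))
        [ "$i" -eq "$want" ] && { echo "$gn"; return; }
    done
}
rgn_to_gn 0     # current generation
rgn_to_gn -1
rgn_to_gn -4
```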
ART for Batch allows you to delete generation files, newly added or currently existing, through the disposition of the DD specified for m_FileAssign.
• Deleting Newly Added GDS (See Listing 3‑24 for an example)
• Deleting Existing GDS (see Listing 3‑25 for an example)

Listing 3‑24 Deleting Newly Added GDS

Listing 3‑25 Deleting Existing GDS

You can delete a GDG as a whole by calling m_FileDelete with the GDG base name, as shown in Listing 3‑26. In this way, all the GDG's GDS are deleted accordingly. The operation of deleting a GDG is committed immediately and cannot be rolled back.

Listing 3‑26 Deleting a GDG

The "file catalog" function must be enabled in ART for Batch to catalog a GDG. Additionally, in catalog mode, the parameter [-v volume] specified in m_FileAssign is ignored.

Committing a GDG updates the information in the GDG management system, such as the Oracle database or file (*.gens), and commits the temporary generation files; however, committing a GDG does not change the mapping relationship between GN and RGN, meaning that, in one step of a job, an RGN always references the same GDS. For example, GDG1 has six GDS numbered 1, 4, 6, 7, 9, and 10, respectively.

Listing 3‑27 Example of Committing a GDG
The MT_GENERATION variable specifies the way GDG files are managed. To manage GDG in *.gens files, set the value to GENERATION_FILE. In the file-based GDG management mechanism, one GDG file can be accessed by only one job at any time; that is, a single GDG cannot be accessed by multiple jobs simultaneously. To access a GDG file, the file lock must be acquired through the existing internal function mi_FileConcurrentAccessReservation. The file-based GDG management mechanism uses a file *.gens (* represents the GDG base name) to control concurrency and authorization. User access checking depends on whether the *.gens file can be accessed.
Note: To enable this function, MT_GENERATION must be set to GENERATION_FILE_DB, MT_DB must be set to DB_ORACLE or DB_DB2LUW (or MT_META_DB set to DB_ORACLE or DB_DB2LUW), and MT_GDG_DB_ACCESS must be set to a valid database connection string to access the Oracle or DB2 database.

Table 3‑5 shows the general management for each GDG managed by Batch Runtime. In this table, each row represents a GDG. All GDG files share a single GDG_DETAIL table.
Table 3‑9 GDG_DEFINE

It cannot contain only a relative path relative to a single repository. The length of GDG_BASE_NAME is limited to 1024, that is, the minimum of PATH_MAX on different UNIX platforms. It contains the upper limit of generations specified by the -s option; the -s option can be set in the range 1-9999. Primary Key: GDG_BASE_NAME

Table 3‑6 shows the detailed information of all the GDG generation files. In this table, each row represents a generation file of a GDG.
Table 3‑10 GDG_DETAIL

Primary Key: GDG_BASE_NAME + GDG_ABS_NUM

GDG_FILE_NAME (the physical generation file name) is not stored in table GDG_DETAIL, since it can be constructed from GDG_BASE_NAME in GDG_DEFINE and GDG_ABS_NUM in GDG_DETAIL.
Note: Table 3‑7 shows the rule for generation file names:
Table 3‑11 Generation File Naming Rule

This variable specifies the way GDG files are managed. To manage GDG files in a database, set the value to GENERATION_FILE_DB and configure MT_GDG_DB_ACCESS appropriately.

This variable is used along with MT_GENERATION when it is set to GENERATION_FILE_DB, and must be set to a valid database login account. For accessing Oracle DB, it should be specified in the format userid/password@sid, for example, scott/tiger@orcl.

Used along with MT_GENERATION when set to GENERATION_FILE_DB. It indicates how to commit GDG changes to the database during the commit phase. If configured to "Y", the GDG changes are committed using a single database access. If configured to "N", the GDG changes are committed using one or more database accesses.

The DB-based GDG management mechanism maintains the same concurrency control behavior as the file-based mechanism, but has a different *.ACS (* represents the GDG base name) file format. In the DB-based GDG management mechanism, you do not need to lock the tables mentioned in Database Tables, as any job that accesses the rows corresponding to a GDG must first acquire the file lock of the GDG. That is to say, there is no need to perform concurrency control at the database access level. You cannot access the database if you do not have access permission (read or write) to the corresponding *.ACS file. If you need to modify a GDG file, you must have write permissions to the generation files and the directory holding them, and MT_GDG_DB_ACCESS must be configured correctly with appropriate permissions to the tables mentioned in Database Tables.
• *.ACS file

This information must be kept consistent for a GDG file. Batch Runtime checks the consistency from GDG_DEFINE to the physical files when a GDG file is accessed for the first time in a job. If exceptions happen and result in inconsistency among this information, Batch Runtime terminates the current job and reports an error.

-t <file type> must be LSEQ or SEQ in m_FileAssign to create the first generation file. If you do not specify any file type in the job ksh file, LSEQ is used by default.
Notes: If a GDG is created by m_GenDefine rather than m_FileAssign, the .dcb file does not exist until the first generation file is created by m_FileAssign -g +1. Once the .dcb file is created, its contents are not changed by any other m_FileAssign statement afterwards, unless such an m_FileAssign creates the first generation file again. However, if all generation files in one GDG are deleted while the GDG itself exists, the corresponding .dcb file is not deleted.

To define and use a file whose data is written directly inside the Korn shell script, use the m_FileAssign function with the -i parameter. By default the string _end is the "end" delimiter of the in-stream flow, as shown in Listing 3‑28.

Listing 3‑28 In-stream Data Example

To use a set of files as a concatenated input (which in z/OS JCL was coded as a DD card where only the first one contains a label), use the m_FileAssign function with the -C parameter, as shown in Listing 3‑29.

Listing 3‑29 Using a Concatenated Set of Files Example

To use an "external sysin" file which contains commands to be executed, use the m_UtilityExec function.

m_FileAssign -d OLD SYSIN ${SYSIN}/SYSIN/MUEX07

Files (including generation files) can be deleted using the m_FileDelete function.

In a migration project from z/OS to UNIX/Linux, some permanent data files may be converted to relational tables. See the File-to-Oracle chapter of the Oracle Tuxedo Application Runtime Workbench. If in the z/OS JCL there was a file copy operation involving the converted file, this is translated to a standard copy operation for files in Batch Runtime (in other words, an m_FileLoad operation).

When executing an application program that needs to connect to the RDBMS, the -b option must be used when calling the m_ProgramExec function.
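The in-stream data mechanism described above (m_FileAssign -i, terminated by the default _end delimiter) behaves much like a shell here-document. A plain-shell analogue (this is not the Batch Runtime API, just standard shell):

```shell
# Plain-shell analogue of in-stream data: a here-document delimited by _end,
# mirroring the default in-stream terminator described above.
tmp=$(mktemp)
cat > "$tmp" <<'_end'
RECORD1
RECORD2
_end
lines=$(wc -l < "$tmp")   # two in-stream records were captured
rm -f "$tmp"
```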
• Set the environment variable MT_DB_LOGIN before booting the TuxJES system.
Note: "/" should be used for the database connection user when the RDBMS is configured to allow UNIX authentication instead of RDBMS authentication. The -b option must also be used if the main program executed does not directly use the RDBMS but one of its sub-programs does, as shown in Listing 3‑30.

Listing 3‑30 RDBMS Connection Example

The m_ProgramExec function may submit three types of files.
• Generated code files (.gnt file extension) compiled from COBOL source code files. Make sure that the .gnt files can be found in $COBPATH (for Micro Focus COBOL) or $COB_LIBRARY_PATH (for COBOL-IT).
• Callable shared libraries (.so file extension) compiled from C source code files. Make sure the callable shared library file can be found in $COBPATH (for Micro Focus COBOL) or $COB_LIBRARY_PATH (for COBOL-IT), or in a system library search path such as LIBPATH, LD_LIBRARY_PATH, and so on. For example, the callable shared library file ProgA.so must contain a function declared by one of the following:
• ProgA(short* arglen, char* argstr): if you need parameters
• ProgA(): if you do not need parameters

m_ProgramExec determines the deliverable type of the program in the following sequence: COBOL program (.gnt), C program in a callable shared library (.so), and other executable programs. Once a COBOL program is executed, m_ProgramExec does not execute other programs with the same name. For example, once ProgA.gnt is executed, ProgA.so or other programs named ProgA are not executed. For .gnt and .so files, m_ProgramExec launches the runb program to run them. ART provides runb for the following:
• $JESDIR/ejr_mf_ora for combination of Micro Focus COBOL and Oracle database
• $JESDIR/ejr_mf_db2 for combination of Micro Focus COBOL and DB2 database
• $JESDIR/ejr_cit_ora for combination of COBOL-IT and Oracle database
• $JESDIR/ejr_cit_db2 for the combination of COBOL-IT and DB2 database

If you do not use one of the above four combinations, go to $JESDIR/ejr and run make.sh to generate your personalized runb. The runbatch program is in charge of:

The INTRDR facility allows you to submit the contents of a sysout to TuxJES (see the Using Tuxedo Job Enqueueing Service (TuxJES) documentation). If TuxJES is not present, a "nohup EJR" command is used. In this example, the contents of the file ${DATA}/MTWART.JCL.INFO (ddname SYSUT1) are copied into the file (ddname SYSUT2) which uses the option -w INTRDR, and then this file (ddname SYSUT2) is submitted.

An INTRDR job generated by a COBOL program can be submitted automatically in real time. Once a COBOL program closes INTRDR, the INTRDR job is submitted immediately without waiting for the current step to finish. To enable this feature, the file handler ARTEXTFH.gnt needs to be linked to the COBOL program. ARTEXTFH.gnt is placed at "${MT_ROOT}/COBOL_IT/ARTEXTFH.gnt". If this feature is not enabled, INTRDR jobs are submitted after the current step finishes.
Note: When using Batch Runtime, TuxJES can be used to launch jobs (see the Using Tuxedo Job Enqueueing Service (TuxJES) documentation), but a job can also be executed directly using the EJR spawner.

Batch Runtime allows you to add custom pre- or post-actions for public APIs. For each m_* (* represents any function name) function, you can provide m_*_Begin and m_*_End functions and put them in the ejr/USER_EXIT directory. They are invoked automatically when a job execution enters or leaves an m_* API. Whether an m_* API calls its user-defined entry/exit function depends on the existence of m_*_Begin and m_*_End under ejr/USER_EXIT.

A pair of general user entry/exit APIs, mi_UserEntry and mi_UserExit, are called at the entry and exit point of each external API. The arguments to these APIs consist of the name of the function in which they are called and the original argument list of that function. You do not need to modify these two APIs; just provide your custom entry/exit for m_* external APIs. mi_UserEntry and mi_UserExit are placed under ejr/COMMON.

You are advised not to call exit in a user entry/exit function, because in the framework exit is aliased to an internal function, mif_ExitTrap, which is ultimately invoked if exit is called in a user entry/exit function. If exit 0 is called, the framework does nothing and the job continues; if exit is called with a non-zero value, a global variable is set and may terminate the current job.

You should include only one function, that is, m_*_Begin or m_*_End, in a single file with the same name as the function, and then put all such files under ejr/USER_EXIT. You are not allowed to provide custom entry/exit functions for any mi_-prefixed function provided by Batch Runtime.
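The user-exit naming convention can be illustrated with stub functions. The hook bodies and arguments below are invented for illustration; real hooks live one per same-named file under ejr/USER_EXIT and are invoked by the framework, not by hand:

```shell
# Stub user-exit hooks for the public API m_FileAssign. In a real deployment,
# each function sits in its own same-named file under ejr/USER_EXIT and is
# invoked automatically on entry to / exit from m_FileAssign.
m_FileAssign_Begin() { echo "before m_FileAssign: $*"; }
m_FileAssign_End()   { echo "after m_FileAssign: $*"; }

m_FileAssign_Begin -d SHR INFILE /data/input.txt   # what the framework would call
m_FileAssign_End   -d SHR INFILE /data/input.txt
```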
Table 3‑12 Log Message Format
• b: Specific to exception messages (Fatal/Error/Warning)
Table 3‑9 lists the log message levels provided by Batch Runtime:
Table 3‑13 Log Message Levels
Same as level 3, plus the high-level functions that correspond to the -d regexp option.
Same as level 7, plus the technical-level functions that correspond to the -d regexp option.
• Use the -V option of EJR
• Use the environment variable MT_DISPLAY_LEVEL
The display level set by the EJR -V option overrides the level set by MT_DISPLAY_LEVEL.
For each launched job, Batch Runtime produces a log file containing information for each step that was executed. This log file has the structure shown in Listing 3‑31.
Listing 3‑31 Log File Example
When not using TuxJES, the log file is created under the ${MT_LOG} directory with the following name: <Job name>_<TimeStamp>_<Job id>.log
Table 3‑10 shows the variables you can use for specifying the general log header:
Table 3‑14 Variables for Specifying the General Log Header
Name of the job assigned by m_JobBegin in the job script.
Name of the PROC when code included from a PROC by m_ProcInclude is executing; empty otherwise.
MT_LOG_HEADER is a configuration variable defined in CONF/BatchRT.conf, for example:
MT_LOG_HEADER='$(date '+%Y%m%d:%H%M%S'):${MTI_SITE_ID}:${MTI_JOB_NAME}:${MTI_JOB_ID}:${MTI_JOB_STEP}: '
If the value of MT_LOG_HEADER is not a null string, its contents are evaluated as a shell statement to produce the actual string printed as the log header; otherwise this feature is disabled.
Note: The string configured in MT_LOG_HEADER is treated as a shell statement in the source code and is interpreted by the "eval" command to generate the string used as the log header:
Syntax inside: eval mt_MessageHeader=\"${MT_LOG_HEADER}\"
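The evaluation can be reproduced in plain shell. In this sketch the MTI_* values are stand-ins; at run time Batch Runtime populates the real ones:

```shell
# Stand-in values for the MTI_* variables normally set by Batch Runtime.
MTI_SITE_ID="SITE1"; MTI_JOB_NAME="MTWART"
MTI_JOB_ID="J000001"; MTI_JOB_STEP="STEP01"

# Single quotes defer all expansion until the "eval" below.
MT_LOG_HEADER='$(date +%Y%m%d):${MTI_SITE_ID}:${MTI_JOB_NAME}:${MTI_JOB_ID}:${MTI_JOB_STEP}: '

# Same statement as used internally.
eval mt_MessageHeader=\"${MT_LOG_HEADER}\"
echo "${mt_MessageHeader}"   # e.g. 20240101:SITE1:MTWART:J000001:STEP01:
```

Only the eval mechanism itself is illustrated here; the header format string is the one shown in the example above.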
• MT_LOG_HEADER must be a valid shell statement for "eval" and must be quoted with single quotation marks.
• All command invocations used in MT_LOG_HEADER must be wrapped in "$()". For example: $(date '+%Y%m%d:%H%M%S')
You can modify the above examples according to your format needs, using only the variables listed in Table 3‑10.
The following message identifiers are defined in CONF/Messages.conf to support using mi_DisplayFormat to write file assignment and file information logs. CONF/Messages.conf is not configurable; do not edit this file.
The string "%s" at the end of each identifier indicates that it will be written to the log file. You can configure its value using the following variables defined in CONF/BatchRT.conf. For more information, see Table 3‑12.
• MT_LOG_FILE_ASSIGN (for FileAssign)
• MT_LOG_FILE_RELEASE (for FileRelease)
• MT_LOG_FILE_INFO (for FileInfo)
Three configuration variables should be defined in CONF/BatchRT.conf to determine the detailed file information format. With the placeholders listed in Table 3‑11, you can configure the file log information more flexibly.
Table 3‑15 Placeholders SHR or NEW
Table 3‑16 Configuration Variables in CONF/BatchRT.conf
Note: "operation" is hard-coded in the source code, with values such as FileCopy Source, FileCopy Destination, and FileDelete. When configuring these MT_LOG_FILE_* variables, replace the placeholders with the corresponding values (a plain string replacement). The result is treated as a shell statement and is interpreted by the "eval" command to generate the string written to the log:
Syntax inside: eval mt_FileInfo=\"${MT_LOG_FILE_INFO}\"
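The two-stage substitution (placeholder replacement, then eval) can be mimicked in plain shell. The MT_LOG_FILE_INFO value and file path below are assumptions for this demo:

```shell
# Hypothetical MT_LOG_FILE_INFO value: report the file size in bytes.
MT_LOG_FILE_INFO='size=$(wc -c < <%FULLPATH%>)'

# Sample file to report on (path is an assumption for this demo).
fullpath="/tmp/art_fileinfo_demo.$$"
printf 'HELLO' > "${fullpath}"                      # 5 bytes

# Stage 1: plain string replacement of the placeholder.
resolved=$(printf '%s\n' "${MT_LOG_FILE_INFO}" | sed "s#<%FULLPATH%>#${fullpath}#g")

# Stage 2: same statement as used internally (with the resolved string).
eval mt_FileInfo=\"${resolved}\"
echo "${mt_FileInfo}"                               # size=5 on Linux
rm -f "${fullpath}"
```

Because stage 1 is pure string replacement, any shell syntax in the configured value survives intact until stage 2 executes it.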
• After placeholders are replaced, MT_LOG_FILE_* must be a valid shell statement for "eval" and must be quoted with single quotation marks.
• All command invocations used in MT_LOG_FILE_* must be wrapped in "$()". For example: $(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)
If the level of the FileInfo message is equal to or less than the message level specified for Batch Runtime and MT_LOG_FILE_* is set to a null string, the FileInfo message is not displayed in the job log. If MT_LOG_FILE_* is set to an incorrect command that makes the file information invisible, the FileInfo message is likewise not displayed in the job log, but job execution is not impacted.
Entry points are provided in some functions (m_JobBegin, m_JobEnd, m_PhaseBegin, m_PhaseEnd) in order to insert specific actions to be performed in relation to the selected job scheduler.
Note that the environment variable MT_DB_LOGIN must be set (database connection user login). The SYSIN file must contain the SQL requests, and the user has to verify the contents against the target database.
• Set DB_HOME correctly because it is required by BDB; DB_HOME points to the location where BDB puts its temporary files.
• Unset the COB_ENABLE_XA environment variable before booting the TuxJES system.
Note: It is required to set COB_ENABLE_XA when you use COBOL-IT with ART CICS Runtime.
TuxJES must be set up to use a database to manage jobs. See Setting up TuxJES as an Oracle Tuxedo Application (Using Database) for more information. All the environment variables described in Setting Environment Variables are also available for native JCL execution.
You can use the -I option to submit a JCL job with the following usage:
• artjesadmin -I JCLScriptName (in the shell command line)
• submitjob -I JCLScriptName (in the artjesadmin console)
You can also specify an environment file when submitting a job:
• artjesadmin -o "-e <envfile_path>" -I JCLScriptName (in the shell command line)
• submitjob -o "-e <envfile_path>" -I JCLScriptName (in the artjesadmin console)
[-t JCL|KSH] is used as a filter with the following usage:
• Print all jobs: printjob
• Print JCL jobs: printjob -t JCL
• Print KSH jobs: printjob -t KSH
The JCL conversion log is $JESROOT/<JOBID>/LOG/<JOBID>.trace. Table 3‑17 lists the supported JCL statements. Table 3‑18 lists the supported utilities.
Table 3‑17 Supported JCL Statements
Table 3‑18 Supported Utilities
• COBOL_MF: Micro Focus COBOL
• COBOL_IT: COBOL-IT COBOL
• DB_ORACLE: Oracle database
• DB_DB2LUW: DB2 database
• SORT_MicroFocus: Micro Focus COBOL Sort Utility
• SORT_SyncSort: Syncsort Sort Utility
• SORT_CIT: COBOL-IT COBOL Sort Utility
Configure any additional utilities that you need in the native JCL configuration file. This configuration file is located at ${JESDIR}/jclexec/conf/JCLExecutor.conf; its ADDUTILITYLIST item defines the additional utility list. For example, to define the MYUTILITY utility, specify ADDUTILITYLIST=MYUTILITY; to define multiple utilities, specify ADDUTILITYLIST=MYUTILITY1,MYUTILITY2,…, using a comma (',') to separate the utilities.
You can use the artjclchk tool to launch test mode for a job or a group of jobs. The command line syntax for the artjclchk tool is as follows:
Specifies the destination directory in which to save output report files. There are three types of output report file; all of them are generated here. See Test Mode Report Files for more information.
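The utility registration described above is a one-line addition to the configuration file; the utility names here are examples:

```shell
# ${JESDIR}/jclexec/conf/JCLExecutor.conf
# Register additional utilities (comma-separated; names are examples).
ADDUTILITYLIST=MYUTILITY1,MYUTILITY2
```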
• If you specify -r but do not specify -i, this command generates a category report and a summary report for every individual report under the -d directory.
• If you specify both -r and -i, this command generates individual reports for all jobs that -i specifies, and then generates a category report and a summary report only for these individual reports.
• If you do not specify -r, the category report and summary report are not generated.
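Putting the -i, -d, and -r options together, typical invocations might look like the following sketch. The directory and job names are assumptions; the exact -i argument form follows the artjclchk syntax reference above.

```shell
# Individual report only, for the job selected by -i:
artjclchk -i MYJOB.ksh -d ./reports

# Individual report for the selected job, plus category and
# summary reports covering just that individual report:
artjclchk -i MYJOB.ksh -d ./reports -r
```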
Note: If you run the artjclchk tool twice with the same -i and -d option values, results from the second run replace results from the first run.
There are three types of report files that artjclchk generates.
An individual report file is a job-specific report file. artjclchk generates an individual report file for each job; anything found in the job is reported in this file.
A category report file is organized according to the type of information. artjclchk generates a category report file for each type of information; any occurrence falling into the category, together with its job location and line number, is reported in this file.
A summary report file is a simplified version of a category report, also generated by artjclchk. Unlike a category report file, a summary report file records only the issues and their occurrence counts. A summary report file has the same name as the corresponding category report file but without "Occurences".
An individual report file is a job-specific report file. artjclchk generates an individual report file for each job; anything found in the job is reported in this file. This file is named in the format <JOBFILENAME>.rpt; fields in each line are separated by commas. See the following tables for these fields:
• Table 3‑20 lists fields for JCL elements
• Table 3‑21 lists fields for IKJEFTxx utilities
• Table 3‑22 lists fields for other utilities
Table 3‑20 Report Fields for JCL Elements
Identifies a PROC issue.
Identifies an INCLUDE issue.
Identifies a STEP issue.
Identifies whether a SYMBOL object is defined or not defined.
The object name; it can be a PROC name, INCLUDE name, PROGRAM name, UTILITY name, PARAM name, DATASET name, or another object name.
Table 3‑21 Report Fields for IKJEFTxx Utilities
Identifies a STEP issue.
Identifies whether a PROGRAM object is found or not found.
The object name; it can be a PROC name, INCLUDE name, PROGRAM name, UTILITY name, or another object name.
Identifies the line location. The FILE and LINE locations are relative to the JCL job itself, for example, the STEP location where the current utility is launched.
Table 3‑22 Report Fields for Other Utilities
The object name; it can be a PROC name, INCLUDE name, PROGRAM name, UTILITY name, or another object name.
Identifies the line location. The FILE and LINE locations are relative to the JCL job itself, for example, the STEP location where the current utility is launched.
Identifies the name of the utility that generates the report line, for example, IEBGENER, SORT, or PKZIP.
A category report file is organized according to the type of information. artjclchk generates a category report file for each type of information; any occurrence falling into the category, together with its job location and line number, is reported in this file.
This is the category report file for missing items. It is named in the format "Missing_Item_<DATETIME>_Occurences.csv". See Table 3‑23 for its columns.
This is the category report file for unsupported items. It is named in the format "Unsupported_Item_<DATETIME>_Occurences.csv". See Table 3‑24 for its columns.
This is the category report file for ignored items. It is named in the format "Ignored_Item_<DATETIME>_Occurences.csv". See Table 3‑25 for its columns.
This is the category report file for code defects. It is named in the format "CodeDefect_<DATETIME>_Occurences.csv". See Table 3‑26 for its columns.
This is the category report file for missing datasets. It is named in the format "Missing_Dataset_<DATETIME>_Occurences.csv". See Table 3‑27 for its columns.
This is the category report file for internal errors. It is named in the format "Internal_Error_<DATETIME>_Occurences.csv". See Table 3‑28 for its columns.
This is the category report file for supported utilities. It is named in the format "Supported_Utility_<DATETIME>_Occurences.csv". See Table 3‑29 for its columns.
Table 3‑23 Category Report File: Missing Item Report
Table 3‑25 Category Report File: Ignored Item Report
Table 3‑27 Category Report File: Missing Dataset Report
Table 3‑28 Category Report File: Internal Error Report
A summary report file is a simplified version of a category report, generated by artjclchk. Unlike a category report file, a summary report file records only the issues and their occurrence counts. A summary report file has the same name as the corresponding category report file but without "Occurences".
This report file is named in the format "Missing_Item_<DATETIME>.csv". See Table 3‑30 for its columns.
Note: This is the simplified version of the "Missing Item Report" category report file.
This report file is named in the format "Unsupported_Item_<DATETIME>.csv". See Table 3‑31 for its columns.
Note: This is the simplified version of the "Unsupported Item Report" category report file.
This report file is named in the format "Ignored_Item_<DATETIME>.csv". See Table 3‑32 for its columns.
Note: This is the simplified version of the "Ignored Item Report" category report file.
This report file is named in the format "CodeDefect_<DATETIME>.csv". See Table 3‑33 for its columns.
Note: This is the simplified version of the "Suspicious Code Defect Report" category report file.
This report file is named in the format "Missing_Dataset_<DATETIME>.csv". See Table 3‑34 for its columns.
Note: This is the simplified version of the "Missing Dataset Report" category report file.
This report file is named in the format "Internal_Error_<DATETIME>.csv". See Table 3‑35 for its columns.
Note: This is the simplified version of the "Internal Error Report" category report file.
This report file is named in the format "Supported_Utility_<DATETIME>.csv". See Table 3‑36 for its columns.
Note: This is the simplified version of the "Supported Utility Report" category report file.
Table 3‑30 Summary Report File: Missing Item Report
Table 3‑31 Summary Report File: Unsupported Item Report
Table 3‑32 Summary Report File: Ignored Item Report
Table 3‑34 Summary Report File: Missing Dataset Report
Table 3‑35 Summary Report File: Internal Error Report
• Using the m_JobSetExecLocation API of Batch Runtime, users can develop KSH jobs with NJE support. For example:
When specifying the server group name, which is set as the job execution group in the m_JobSetExecLocation API, ensure the following:
• The specified server group must exist in ubbconfig file of JES domain.
• At least one ARTJESINITIATOR server must be deployed in that server group.
Table 3‑37 Configurations in <APPDIR>/jesconfig
ON: Enable NJE support. OFF: Disable NJE support.
If NJE support is disabled in jesconfig, the statement m_JobSetExecLocation <SvrGrpName> is ignored by TuxJES, and the job may be executed by any ARTJESINITIATOR in any server group.
In MP mode, MT_TMP needs to be configured on NFS, and all the nodes in the Tuxedo domain must share the same value of MT_TMP. MT_TMP can be configured in the file $MT_ROOT/CONF/BatchRT.conf, or exported as an environment variable before tlisten is started on each node.
If NJESUPPORT is enabled in jesconfig, a new queue named EXECGRP must be created in the existing queue space JES2QSPACE. If EXECGRP is not created, no jobs can be processed by JES.
m_JobSetExecLocation "ATLANTA"
In the above sample, the job can be submitted on any JES node, but is executed only by an ARTJESINITIATOR that belongs to the JES Tuxedo server group ATLANTA.
In the above sample, job TEST1 is submitted by the current job and executed by an ARTJESINITIATOR that belongs to the JES Tuxedo server group ATLANTA.
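In job-script form, the ATLANTA sample above sits near the top of a standard Batch Runtime KSH job skeleton. The job name, step name, and step body below are examples, not prescribed values:

```shell
# Sketch of a converted KSH job pinned to Tuxedo server group ATLANTA.
# The m_JobSetExecLocation call is ignored when NJESUPPORT is OFF.
m_JobBegin -j JOBNJE01 -s START -v 2.0
m_JobSetExecLocation "ATLANTA"   # only initiators in group ATLANTA run this job
while true ; do
   m_PhaseBegin
   case ${CURRENT_LABEL} in
   (START)
       m_ProgramExec MYPROG      # example step body
       JUMP_LABEL=END_JOB ;;
   (END_JOB)
       break ;;
   esac
   m_PhaseEnd
done
m_JobEnd
```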
Table 3‑38 Batch Runtime Catalog Primary Key: PK_ART_BATCH_CATALOG
Four configuration variables are required to be added in BatchRT.conf or set as environment variables:
If it is set to yes (MT_USE_FILE_CATALOG=yes), the file catalog functionality is enabled; otherwise, the functionality is disabled.
If no volumes are specified when a new dataset is created, Batch Runtime uses the volume defined by MT_VOLUME_DEFAULT. MT_VOLUME_DEFAULT contains only one volume, for example, MT_VOLUME_DEFAULT=volume1.
This variable contains the database access information. For Oracle, its value is "username/password@sid" (for example, "scott/tiger@gdg001"). For Db2, its value is "your-database USER your-username USING your-password" (for example, "db2linux USER db2svr USING db2svr").
This variable contains the file catalog database access information. Its format is the same as MT_DB_LOGIN. Since the file catalog is stored in a database, Batch Runtime must access it through MT_DB_LOGIN or MT_CATALOG_DB_LOGIN. MT_CATALOG_DB_LOGIN takes precedence over MT_DB_LOGIN for file catalog access. If the file catalog database is the same as the data database, configuring only MT_DB_LOGIN is sufficient; otherwise, both must be configured.
You can use CreateTableCatalog[Oracle|Db2].sh or DropTableCatalog[Oracle|Db2].sh to create or drop the database table. CreateTableCatalog[Oracle|Db2].sh creates table ART_BATCH_CATALOG in the database; DropTableCatalog[Oracle|Db2].sh drops table ART_BATCH_CATALOG from the database.
To use the file catalog functionality in Batch Runtime, the File Converter and JCL Converter in ART Workbench must have the catalog functionality enabled. For more information, refer to the Oracle Tuxedo Application Rehosting Workbench User Guide.
MT_REXX_PATH has no default value. It should be set to the main path where all REXX execs are located. Place REXX programs in the proper subdirectories under ${MT_REXX_PATH}.
These subdirectories correspond to the PDSs on the mainframe where the REXX programs live. DD SYSEXEC specifies where to find the target REXX programs.
All relevant REXX files (REXX APIs and TSO commands) are located in the Batch_RT/tools/rexx directory. The directory structure is as follows: Batch_RT/tools/rexx/tso is where TSO commands are located; REXX APIs should be put in the Batch_RT/tools/rexx/lib directory.
Table 3‑39 Required Environment Variables
Listing 3‑34 Example for Setting Environment Variables
Listing 3‑37 Example for Preprocessing COBOL Programs
Listing 3‑38 Example for Compiling COBOL Programs (CIT)