• Description of the Output Files including the Generated Objects.
• Detailed Processing including the Command-Line Syntax.
Table 6‑1 z/OS to UNIX File Organizations

Files that are part of a PDS are identified as such by their physical file name, for example: METAW00.NIV1.ESSAI(FIC). Generation Data Group (GDG) files are handled specially by the unloading and reloading components in order to maintain their specificity (the number of GDG archives to unload and reload). They are subsequently managed as generation files by Oracle Tuxedo Application Runtime Batch (for more information, see the Oracle Tuxedo Application Runtime Batch Reference Guide). On the target platform these files have a LINE SEQUENTIAL organization.
Note: When "enable-buffer-converter" is set in system description file, the source data reside in the data buffer passed from the caller program, and the converted output data is send back to the caller program through the same data buffer.When "enable-reverse-converter" is set in system description file, the source file is on the UNIX/Linux platform and the file format only supports "record sequential". The converted output file format is "z/OS sequential".Listing 6‑1 db-param.cfg Template
Table 6‑2 db-param.cfg Parameters

The default value is “cobol_mf” for Micro Focus COBOL. hexa-map-file specifies a mapping table file between EBCDIC (z/OS code set) and ASCII (Linux/UNIX code set) hexadecimal values; if hexa-map-file is not specified, a warning is logged. When present, this file is executed automatically at the end of the generation process; it is called with the <configuration name> as an argument.

Listing 6‑2 file-template.txt

When required, another version of the file-template.txt file can be placed in the $PARAM/file directory. The use of an alternative file is signaled during the execution of file.sh by the message:

Listing 6‑3 Execution Log with Alternative Template File

This file is placed during the installation of Tuxedo ART Workbench; it controls the transfer of the generated components into the different installation directories. It indicates the location of each component to copy during the installation phase of file.sh, when launched using file.sh -i.
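The post-generation script described above receives the <configuration name> as its single argument. A minimal sketch of such a hook, with an assumed function name and an illustrative body only:

```shell
# Hypothetical post-generation hook: file.sh calls this at the end of the
# generation process, passing the configuration name as the only argument.
post_generation_hook() {
    config="$1"
    # Illustrative body: log which configuration was processed.
    printf 'post-processing generated components for configuration %s\n' "$config"
}
```

A real hook would typically patch or relocate the generated components for the named configuration.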
Note: In the mapper file, the converted clause is associated with the RDBMS table target only. Do not use this clause in the File-to-File Converter.
When used with the attributes LOGICAL_MODULE_ONLY clause, it indicates that this file is kept as a Micro Focus COBOL or COBOL-IT file; the file is then accessed through a logical access COBOL function by Oracle Tuxedo Application Runtime CICS. Without the attributes clause, it indicates that this file is to be converted to an RDBMS table. Omit this clause to indicate that the file is to be converted to a file. The converted clause can be combined with the transferred clause.
• record name: corresponds to the level 01 field name of the copy description.
• path/COPY name: corresponds to the access path and name of the descriptive copy of the file to migrate.
Note: “map record” and “source record” parameters must use the same “record name” and “descriptive copy”. They are used for forward compatibility.
• record name: corresponds to the level 01 field name of the copy description of the file to migrate.
• path/COPY name: corresponds to the access path and name of the descriptive copy of the file to migrate.
Note: “map record” and “source record” parameters must use the same “record name” and “descriptive copy”. They are used for forward compatibility.
Table 6‑4 Discrimination Rules
Note: These conditions can be parenthesized. NUMERIC cannot be used as an argument of a comparison operator.

In the following example the fields DPODP-DMDCHQ, DPONO-PRDTIV, and DP5CP-VALZONNUM are redefined.

Listing 6‑5 Discrimination Rule COBOL Description

Listing 6‑6 Discrimination Rules

The first rule tests the value of the numeric field DPODP-RDCRPHY. The second rule tests the first two characters of the alphanumeric field DPONO-NPDT; only the values 01 and 02 are allowed. The third rule tests whether the field DPODP-RDCRPHY is numeric.

Once the COBOL description files have been prepared, the copy files described in the mapper-<configuration name>.re file should be placed in the $PARAM/file/recs-source directory. If you use a COBOL copy book from the source platform to describe a file (see COBOL Description), then the location of that copy book is used directly.
Note: The unloading and loading components generated with the -i $HOME/trf option are placed in the following locations:
Table 6‑5 Component Locations

<file name>.jclunload

When "enable-buffer-converter" is set in the system description file, two additional programs are generated and named:

When "enable-reverse-converter" is set in the system description file, one additional program and one script file are generated and named:

If you used the attributes clause in the mapper file, an access function is generated:
Note: <target table name> is the file name on the target platform; this file name is furnished in the mapper file.

The JCLs used to unload the files are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:

Each JCL contains two steps and unloads one file using the z/OS IDCAMS REPRO utility. The JCL return code is equal to 0 or 4 for a normal termination.
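Because both 0 and 4 denote normal termination of the unload JCL, a driver script should accept either value. A minimal sketch (the helper name is an assumption):

```shell
# Succeeds when a JCL return code denotes normal termination
# (0 or 4, per the documented convention); anything else is an error.
jcl_ok() {
    [ "$1" -eq 0 ] || [ "$1" -eq 4 ]
}
```

For example, `jcl_ok 4 && echo "unload finished normally"`.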
The JCLs are named: <file name>.jclunload
Note: The .jclunload extension should be deleted for execution under z/OS.
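Removing the extension can be scripted before the members are transferred to z/OS; a minimal sketch (the helper name is an assumption):

```shell
# Strip the .jclunload extension from every generated JCL in a directory,
# as required before execution under z/OS.
strip_jclunload() {
    for f in "$1"/*.jclunload; do
        [ -e "$f" ] || continue    # no matches: the glob stayed literal
        mv "$f" "${f%.jclunload}"
    done
}
```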
Listing 6‑7 Unload JCL Example

The JCLs used to unload the Generation Data Sets of a Generation Data Group organization are also generated using the -g option of the file.sh command, where <id> is a numerical value that identifies each version (0: current version, 1: first previous, 2: ...).

Listing 6‑8 Unload JCL Example for GDG

The COBOL transcoding programs are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:

The programs are named: RELFILE-<logical file name>.cbl

The programs should be compiled using the target COBOL compilation options documented in Compiler Options. The compilation of these programs requires the presence of a CONVERTMW.cpy copy file adapted to the project, documented in the Codeset Conversion chapter.

Listing 6‑9 FILE CONTROL Section - for Transcoding Programs

Listing 6‑10 FILE CONTROL Section - for Transcoding Programs

The compilation of these programs requires the presence of a CONVERTMW.cpy copy file adapted to the project, documented in the Codeset Conversion chapter. Note that this CONVERTMW.cpy is not the same as the one used for the migration of z/OS files to UNIX/Linux files.

The programs should be compiled using the target COBOL compilation options documented in Compiler Options. In addition, the following compilation options must be added to successfully compile <logical file name>1.cbl and <logical file name>2.cbl: for Micro Focus COBOL, the options "IBMCOMP" and "NOTRUNC"; for COBOL-IT, the option "variable-rec-pad-mf:yes".

The reloading Korn shell scripts are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:

The scripts are named: loadfile-<logical file name>.ksh

The execution of the scripts produces an execution log in $MT_LOG/<logical file name>.log

Listing 6‑11 Reloading File Script Variables
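The generated component names follow the fixed patterns given above (<file name>.jclunload, RELFILE-<logical file name>.cbl, loadfile-<logical file name>.ksh). A sketch that derives them for one logical file name (the helper name and the ODCSF0 example are assumptions):

```shell
# Print the expected generated component names for one logical file,
# following the naming patterns documented above.
list_components() {
    f="$1"
    printf '%s\n' \
        "${f}.jclunload" \
        "RELFILE-${f}.cbl" \
        "loadfile-${f}.ksh"
}
```

For example, `list_components ODCSF0` lists the unload JCL, the transcoding program, and the reloading script for that file.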
Note: When "use-file-catalog" is set in system description file, the "${DATA}" is removed from DD_SORTIE in the generated script file; to change the DD_SORTIE names, set the DD_SORTIE variables only with the file name without ${DATA} before calling the script.Various messages may be generated during the execution phases of the scripts, these messages are explained in Oracle Tuxedo Application Rehosting Workbench Messages.On normal end, a return code of 0 is returned.When "enable-reverse-converter" is set in system description file, the following script file is generated and named: loadfile-R-<logical file name>.ksh. They contain a transcoding (or loading) phase and a check phase. These different phases can be launched separately.The execution of the scripts produces an execution log in $MT_LOG/<logical file name>.log.The variables are set at the beginning of each script, and the format is almost the same as that listed in chapter "Reloading Korn Shell Scripts for Migrating z/OS QSAM/VSAM Files to UNIX/Linux Files".
Note: Even when "use-file-catalog" is set in the system description file, "${DATA}" is not removed from DD_SORTIE in the generated script file; to change the DD_SORTIE names, set the DD_SORTIE variables with the file name always prefixed with ${DATA} before calling the script file.

The master scripts are named: loadgdg-<logical file name>.ksh. For each version, that is for each Generation Data Set, they call the script loadgds-<logical file name>.ksh and perform a check phase. The loadgdg-*.ksh script contains a transcoding (or loading) phase. These different phases can be launched separately. The execution of the master script produces an execution log in $MT_LOG/<logical file name>.log

Listing 6‑12 shows the variables that are set at the beginning of each script.

Listing 6‑12 Reloading File Script Variables
Note: To change the prefix of the file names, set the DD_ENTREE and DD_SORTIE variables before calling the script. When "use-file-catalog" is set in the system description file, "${DATA}" is removed from DD_SORTIE in the generated script file; to change the DD_SORTIE names, set the DD_SORTIE variables with the file name only, without ${DATA}, before calling the script.

Various messages may be generated during the execution phases of the scripts; these messages are explained in Oracle Tuxedo Application Rehosting Workbench Messages.

EJR is a part of Oracle Tuxedo Application Runtime Batch; for more information, see the Oracle Tuxedo Application Runtime Batch Reference Guide. It contains a delete step and a reload step.

These access functions are generated using the -g option of file.sh and installed in $HOME/trf/DML using the -i and -s options.
Table 6‑6 Access Functions

Optional module generated when there are multiple assigns. When using the File-to-File Converter tool, this module is generated when the attributes clause is present in the mapper configuration file. Calls all init_all_files_<configuration name>.cbl (function used by Oracle Tuxedo Application Runtime Batch). Initializes a transaction: all variables used by the relational module and the ASG_<logical file name> module are initialized for the configuration name listed (function used by Oracle Tuxedo Application Runtime Batch). Closes a transaction: this program closes all cursors opened in tables for the configuration listed and unlocks all files opened with the logical accessor ASG_<logical file name> (function used by Oracle Tuxedo Application Runtime Batch). Calls all close_all_files_<configuration name>.cbl (function used by Oracle Tuxedo Application Runtime Batch).

The ASG_<logical file name>.cbl access functions use the following variables:
Table 6‑7 Access Call Implemented Variables

Indicates the type of operation to execute, for example OPEN, WRITE, etc.; the code is passed using the FILE-CODE-F variable of the MWFITECH copy file. A file can be opened in different modes: INPUT, OUTPUT, I-O, EXTEND; the mode is passed using the FILE-OPEN-MODE variable of the MWFITECH copy file. The name of the secondary key is passed using the FILE-ALT-KEY-NAME variable of the MWFITECH copy file. For a relative file, the value of the relative key is passed to or from the access module using the FILE-REL-KEY variable of the MWFITECH copy file.

Listing 6‑13 LINKAGE SECTION Structure

For all OPEN operations, the FILE-CODE-F variable should contain the keyword OPEN. The FILE-OPEN-MODE variable should contain the type of OPEN to perform, as follows:
Table 6‑8 Call Argument File Open Modes

For CLOSE operations, the FILE-CODE-F variable should contain the keyword CLOSE. For CLOSE LOCK operations, the FILE-CODE-F variable should contain the keyword CLOSE-LOCK.
Table 6‑9 Call Argument Delete Modes
Table 6‑10 Read Operation Values Depending on Arguments

If DataName1 is a variable corresponding to the key AltKey1
Note: If the INTO clause is found, a MOVE operation is added after the call in order to set the value of the indicated field.
Note: If the FROM clause is found, a MOVE operation is added before the call in order to set the value of the indicated field.
The following copy files are used by certain access functions. They should be placed in the directory <installation platform>/fixed-copy/ during the installation of Tuxedo ART Workbench.

The following compiler internal copy file is used by the access functions; it should be placed into the compiler COBCPY environment before the access functions are compiled.

file.sh creates different execution reports depending on the options chosen. In the following examples the following command is used:

This section describes the Command-Line Syntax used by the File-to-File Converter, and gives a summary of the Process Steps.
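Making the fixed copy files visible on the compiler's COBCPY search path, as required above, can be sketched as follows (the helper name and directory path are assumptions):

```shell
# Prepend a fixed-copy directory to the COBCPY search path so the access
# functions find the internal copy file at compile time; echoes the result.
add_cobcpy() {
    dir="$1"
    COBCPY="$dir${COBCPY:+:$COBCPY}"
    export COBCPY
    printf '%s\n' "$COBCPY"
}
```

For example, `add_cobcpy /opt/art/fixed-copy` prepends that directory to any existing COBCPY value before the compilation step.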
Note: When "enable-buffer-converter" is set in system description file, steps "Unloading Data", "Reloading the Data", "Reloading the Data" and "Checking the Transfers" are not involved for conversion of data in a buffer between z/OS format and UNIX/Linux format.When "enable-reverse-converter" is set in system description file, step "Unloading Data" is not involved for migration of UNIX/Linux record sequential file to z/OS sequential dataset. The target file locates on UNIX/Linux platform, and the file format satisfies the z/OS sequential dataset format. The upload of the target file to z/OS is not introduced.file.sh - generate file migration components.file.sh generates Tuxedo ART Workbench components used to migrate z/OS files to UNIX Micro Focus COBOL/COBOL-IT files.Triggers the generation, for the configuration indicated, of the unloading and loading components in $TMPPROJECT. This generation depends on the information found in the configuration files.Makes the generated SHELL scripts executable. COBOL programs are adapted to the target COBOL fixed format. When present, the shell script described in File Modifying Generated Components is executed.Places the components in the installation directory. This operation uses the information located in the file-move-assignation.pgm file.All these files are created in $PARAM/dynamic-configIf the file.sh options are used one at a time, they should be used in the following order:
3.
4.
5. The components used for the unloading (generated in $HOME/trf/unload/file) should be installed on the source z/OS platform (the generated JCL may need adapting to specific site constraints, including JOB cards, library access paths, and access paths to input and output files). The components used for the reloading (generated in $HOME/trf/reload/file) should be installed on the target platform. Table 6‑14 lists the environment variables that should be set on the target platform.
Table 6‑14 Target Platform Environment Variables

The location of the generic reload and control scripts ($HOME/trf/reload/bin). This UNIX/Linux variable has to contain the directory of the Oracle Tuxedo Application Runtime for Batch utilities. If you choose COBOL-IT as the target COBOL compiler and you use BDB ISAM files on the open system, you must set these two environment variables so that BatchRT can generate BDB format files; otherwise Micro Focus COBOL compatible files will be generated:

export COB_EXTFH_LIB=/path_to_Cobol-IT/lib/libbdbextfh.so

Compiling these programs requires the presence of a copy of CONVERTMW.cpy adapted to the project. These unloading JCLs are named <logical filename>.jclunload. The files transferred to the target UNIX/Linux platform should be stored in the $DATA_SOURCE directory.

When "enable-buffer-converter" is set in the system description file, the data to be converted reside in the buffer passed from the caller program (for example, a data buffer passed in from an IMS application). The converted output data reside in the same data buffer, which is then passed back to the caller program.
Note: The loadgdg-<logical file name>.ksh script enables the execution of the different loadgds-<logical file name>.ksh scripts. Each loadgds script is used to reload one unitary generation of the file (each data set within a GDG is called a generation, or Generation Data Set (GDS)). loadfile transcodes and reloads data to a file. loadgdg and loadgds transcode and reload data to a file.
Note: The loadgdg-<logical file name>.ksh script calls the loadgds-<logical file name>.ksh script for each Generation Data Set. This check uses the following option of the loadfile-<logical file name>.ksh or loadgdg-<logical file name>.ksh scripts: