Oracle Tuxedo Application Rehosting Workbench File-to-File Converter
Overview
Purpose
This chapter describes how to install, implement, and configure the Rehosting Workbench File-to-File converter in order to migrate files from the source z/OS environment to the target environment.
Skills
When migrating files, a good knowledge of COBOL, JCL, z/OS utilities and UNIX/Linux Korn Shell is required.
See also
For a comprehensive view of the migration process, see the Data Conversion and Cobol Conversion chapters of the Oracle Tuxedo Application Rehosting Workbench Reference Guide, as well as the Rehosting Workbench Cobol Converter chapter of this guide.
Organization
Migrating data files is described in the following sections:
The File-to-File Migration Process
Initializing the process
Preparing the environment
Generating the components
Using the make utility
Performing the migration
Troubleshooting
The File-to-File Migration Process
File Organizations Processed
When migrating from a z/OS source platform to a target platform, the first question to ask, where VSAM files are concerned, is whether to keep a file as a file or to migrate the data to an Oracle table.
The Oracle Tuxedo Application Rehosting Workbench File-to-File converter is used for those files that keep their source platform format (sequential, relative or indexed files) on the target platform. On the target platform, these files use a Micro Focus COBOL file organization equivalent to the one on the source platform.
The following table lists the file organizations handled by z/OS and indicates the organization proposed on the target platform:
Note:
PDS file organization
Files that are part of a PDS are identified as such by their physical file name, for example: METAW00.NIV1.ESSAI(FIC).
An unloading JCL adapted to PDS is generated in this case. The source and target file organizations as indicated in the above table are applied.
GDG file organization
Generation Data Group (GDG) files are handled specially by the unloading and reloading components in order to maintain their specificity (number of GDG archives to unload and reload). They are subsequently managed as generation files by Oracle Tuxedo Application Runtime Batch (see the Oracle Tuxedo Application Runtime Batch Reference Guide). On the target platform these files have a LINE SEQUENTIAL organization.
Migration Process Steps
The principal steps in the File-to-File migration process, explained in detail in the rest of this chapter, are:
1. Listing the files to be migrated.
2. Writing a COBOL description and, where needed, discrimination rules for each file.
3. Initializing the environment variables.
4. Implementing the configuration files.
5. Installing the copy files.
6. Generating the unloading and reloading components.
7. Installing the components on the source and target platforms.
8. Unloading the files under z/OS.
9. Transferring the files to the target platform.
10. Transcoding and reloading the files on the target platform.
Initializing the process
This section describes the steps to be performed before starting the file migration.
Requirements
The migration of z/OS files to UNIX/Linux is dependent on the results of the Oracle Tuxedo Application Rehosting Workbench Cataloger. It has no impact on the conversion of COBOL components or the translation of JCL components.
Listing the files to be migrated
The first task is to list all of the files to be migrated, for example, the permanent files used as input to processing units that do not come from an Oracle table.
File descriptions and managing files with the same structure
For each candidate file for migration, its structure should be described in COBOL format. This description is used in a COBOL copy by the Rehosting Workbench Cobol converter, subject to the limitations described in COBOL description.
Once built, the list of files to migrate can be purged of files with the same structure in order to save work when migrating the files, by limiting the number of programs required to transcode and reload the data.
Using the purged list of files, the final task is to build the COBOL description of each file, as described below.
COBOL description
A COBOL description is associated with each file and is considered the representative COBOL description used within the application programs. This description can be a complex COBOL structure using all COBOL data types, including the OCCURS and REDEFINES notions.
This COBOL description is often more developed than the COBOL file description (FD). For example, an FD field can be described as PIC X(364) but actually contain an area defined three times: in one case a table of COMP-3 numeric fields, in another a complex description of several character and digit fields, and so on.
It is this developed COBOL description that reflects the reality of the application, and it is therefore used as the basis for migrating a specific physical file.
The quality of the file processing execution depends on the quality of this COBOL description. From this point on, the COBOL description is inseparable from the file; when referring to the file concerned, we mean both the file and its representative COBOL description. The description must be provided in COBOL format, in a file with the following name:
<COPY name>.cpy
Note:	A COBOL copy book from the source platform may be used as the description of a file; in that case, the location of the copy book is used directly in the mapping parameter file (see Installing the copy files).
COBOL description format
The format of the COBOL description must conform to the following rules:
Example
COBOL description and related discrimination rules
Within a COBOL description there are often several different ways to describe the same memory field, which means that objects with different structures and descriptions are stored in the same place.
Because the same memory field can contain objects with different descriptions, reading the file requires a mechanism to determine which description to use in order to correctly interpret this data area.
We need a rule which, according to some criteria, generally the content of one or more fields of the record, enables us to determine (discriminate) the description to use for reading the redefined area.
In the Rehosting Workbench this rule is called a discrimination rule.
Any redefinition inside a COBOL description lacking a discrimination rule presents a major risk during file transcoding. Therefore, any non-equivalent redefined field requires a discrimination rule. On the other hand, any equivalent redefinition (called a technical redefinition) should be cleansed from the COBOL description (see the example below).
The discrimination rules must be presented per file and must highlight the differences between the discriminated areas. A discrimination rule cannot reference a field outside of the file description.
The following description is a sample of a COPY as expected by the Rehosting Workbench:
Listing 3‑1 Cobol COPY sample
01 FV14.
05 FV14-X1 PIC X.
05 FV14-X2 PIC XXX.
05 FV14-X3.
10 FV14-MTMGFA PIC 9(2).
10 FV14-NMASMG PIC X(2).
10 FV14-FILLER PIC X(12).
10 FV14-COINFA PIC 9(6)V99.
05 FV14-X4 REDEFINES FV14-X3.
10 FV14-MTMGFA PIC 9(6)V99.
10 FV14-FILLER PIC X(4).
10 FV14-IRETCA PIC X(01).
10 FV14-FILLER PIC X(2).
10 FV14-ZNCERT.
15 FV14-ZNALEA COMP-2.
15 FV14-NOSCP1 COMP-2.
15 FV14-NOSEC2 COMP-2.
15 FV14-NOCERT PIC 9(4) COMP-3.
15 FV14-FILLER PIC X(16).
05 FV14-X5 REDEFINES FV14-X3.
10 FV14-FIL1 PIC X(16).
10 FV14-MNT1 PIC S9(6)V99.
05 FV14-X6 REDEFINES FV14-X3.
10 FV14-FIL3 PIC X(16).
10 FV14-MNT3 PIC S9(6).
10 FV14-FIL4 PIC X(2).
 
The discrimination rules are written in the following format:
Listing 3‑2 Cobol COPY discrimination rules
Field FV14-X3
Rule if FV14-X1 = “A” then FV14-X3
elseif FV14-X1 = “B” then FV14-X4
elseif FV14-X1 = “C” then FV14-X5
else FV14-X6
 
Note:
The copy name of the COBOL description is: <COPY name>.cpy
Redefinition examples
Non-equivalent redefinition
Listing 3‑3 Non-equivalent redefinition example
01 FV15.
05 FV15-MTMGFA PIC 9(2).
05 FV15-ZNPCP3.
10 FV15-NMASMG PIC X(2).
10 FV15-FILLER PIC X(12).
10 FV15-COINFA PIC 9(6)V99.
05 FV15-ZNB2T REDEFINES FV15-ZNPCP3.
10 FV15-MTMGFA PIC 9(4)V99.
10 FV15-FILLER PIC X(4).
10 FV15-IRETCA PIC X(01).
10 FV15-FILLER PIC X(2).
10 FV15-ZNCERT.
15 FV15-ZNALEA COMP-2.
15 FV15-NOSCP1 COMP-2.
15 FV15-NOSEC2 COMP-2.
15 FV15-NOCERT PIC 9(4) COMP-3.
15 FV15-FILLER PIC X(16).
 
In the above example, two fields (FV15-ZNPCP3 and FV15-ZNB2T) have different structures: an EBCDIC alphanumeric field in one case, and a field composed of EBCDIC data plus COMP-2 and COMP-3 data in the other.
The implementation of a discrimination rule is necessary to migrate the data to a UNIX platform.
Listing 3‑4 Related discrimination rules
Field FV15-ZNPCP3
Rule if FV15-MTMGFA = 12 then FV15-ZNPCP3
elseif FV15-MTMGFA = 08 and FV15-NMASMG = "KC " then FV15-ZNB2T
 
Equivalent redefinition called technical redefinition
Listing 3‑5 Technical redefinition initial situation
01 FV1.
05 FV1-ZNPCP3.
10 FV1-MTMGFA PIC 9(6)V99.
10 FV1-NMASMG PIC X(25).
10 FV1-FILLER PIC X(12).
10 FV1-COINFA PIC 9(10).
10 FV2-COINFA REDEFINES FV1-COINFA.
15 FV2-ZNALEA PIC 9(2).
15 FV2-NOSCP1 PIC 9(4).
15 FV2-FILLER PIC 9(4).
10 FV15-MTMGFA PIC 9(6)V99.
10 FV15-FILLER PIC X(4).
10 FV15-IRETCA PIC X(01).
10 FV15-FILLER PIC X(2).
 
Listing 3‑6 Technical redefinition potential expected results
01 FV1.
05 FV1-ZNPCP3.
10 FV1-MTMGFA PIC 9(6)V99.
10 FV1-NMASMG PIC X(25).
10 FV1-FILLER PIC X(12).
10 FV1-COINFA PIC 9(10).
10 FV15-MTMGFA PIC 9(6)V99.
10 FV15-FILLER PIC X(4).
10 FV15-IRETCA PIC X(01).
10 FV15-FILLER PIC X(2).
 
In the above example, the two descriptions correspond to a simple EBCDIC alphanumeric character string (without binary, packed or signed numeric fields). This type of structure does not require the implementation of a discrimination rule; one possible cleansing, shown above, simply removes the redefinition.
Preparing the environment
This section describes the tasks to perform before generating the components to be used to migrate the data files.
Initializing environment variables
Before executing the Rehosting Workbench set the following environment variables:
TMPPROJECT — the location for storing temporary objects generated by the process.
PARAM — the location of the configuration files.
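For example, a minimal Korn shell initialization might look like the following sketch; the directory values are hypothetical and should be adapted to your project layout:
# Hypothetical locations; adjust to the project file structure.
export TMPPROJECT=$HOME/tmp/wbproject   # temporary objects generated by the process
export PARAM=$HOME/trf/param            # location of the configuration files
mkdir -p $TMPPROJECT $PARAM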
Implementing the configuration files
Three files need to be placed in the Rehosting Workbench file structure:
db-param.cfg,
Datamap-<configuration name>.re,
mapper-<configuration name>.re.
For a File-to-File conversion you must create these files yourself.
Note:
Two other configuration files are automatically placed in the file structure during the installation of the Rehosting Workbench. If specific versions of these files are required for particular z/OS files, they are placed in the $PARAM/file file structure.
Configuring the files
The following examples show the configuration of three files; two QSAM files and one VSAM KSDS file. There are no discrimination rules to implement for these files.
Database Parameter File (db-param.cfg)
For the db-param.cfg file, the only parameter you may need to modify is the target_os parameter.
Listing 3‑7 db-param.cfg example
# This configuration file is used by FILE & RDBMS converter
# Lines beginning with "#" are ignored
# write information in lower case
# common parameters for FILE and RDBMS
# source information is written into system descriptor file (OS, DBMS=,
# DBMS-VERSION=)
target_rdbms_name:oracle
target_rdbms_version:11
target_os:unix
#
# specific parameters for FILE to RDBMS conversion
file:char_limit_until_varchar:29
# specific parameters for RDBMS conversion
rdbms:date_format:YYYY/MM/DD
rdbms:timestamp_format:YYYY/MM/DD HH24 MI SS
rdbms:time_format:HH24 MI SS
# rename object files
# the file param/rdbms/rename-objects-<schema>.txt is automatically loaded
# by the tool if it exists.
 
Datamap parameter file (Datamap-<configuration name>.re)
Each z/OS file to be migrated must be listed.
The following parameters must be set:
Table 3‑2 Datamap parameters
Note:
In the following example, the first two files are QSAM files; the organization is therefore always sequential. The PJ01AAA.SS.VSAM.CUSTOMER file is a VSAM KSDS file; the organization is therefore indexed. The parameters, keys offset 1 bytes length 6 bytes primary, describe the key. In this example, the key is six bytes long, starting in position 1.
Listing 3‑8 Example datamap file: Datamap-FTFIL001.re
%% Lines beginning with "%%" are ignored
 
data map FTFIL001-map system cat::PROJ001
%%
%% Datamap File PJ01DDD.DO.QSAM.KBCOI001
%%
file PJ01DDD.DO.QSAM.KBCOI001
organization Sequential
%%
%% Datamap File PJ01DDD.DO.QSAM.KBCOI002
%%
file PJ01DDD.DO.QSAM.KBCOI002
organization Sequential
%%
%% Datamap File PJ01AAA.SS.VSAM.CUSTOMER
%%
file PJ01AAA.SS.VSAM.CUSTOMER
organization Indexed
keys offset 1 bytes length 6 bytes primary
 
Mapping parameter file (mapper-<configuration name>.re)
Each z/OS file to be migrated that is included in the Datamap configuration file must be listed.
The following parameters must be set:
Table 3‑3 Mapping parameters
include "#VAR:RECS-SOURCE#/BCOAC01E.cpy"
During the generation, the string #VAR:RECS-SOURCE# will be replaced by the directory name where the copy files are located: $PARAM/file/recs-source
The name of the copy file BCOAC01E.cpy is freely chosen by the user when creating the file.
REC-ENTREE corresponds to the level 01 field name in the copy file.
Listing 3‑9 Example mapper file: mapper-FTFIL001.re
%% Lines beginning with "%%" are ignored
ufas mapper FTFIL001
%%
%% Desc file PJ01DDD.DO.QSAM.KBCOI001
%%
file PJ01DDD.DO.QSAM.KBCOI001 transferred
include "#VAR:RECS-SOURCE#/BCOAC01E.cpy"
map record REC-ENTREE defined in "#VAR:RECS-SOURCE#/BCOAC01E.cpy"
source record REC-ENTREE defined in "#VAR:RECS-SOURCE#/BCOAC01E.cpy"
logical name FQSAM01
converter name FQSAM01
%%
%% Desc file PJ01DDD.DO.QSAM.KBCOI002
%%
file PJ01DDD.DO.QSAM.KBCOI002 transferred
include "#VAR:RECS-SOURCE#/BCOAC04E.cpy"
map record REC-ENTREE-2 defined in "#VAR:RECS-SOURCE#/BCOAC04E.cpy"
source record REC-ENTREE-2 defined in "#VAR:RECS-SOURCE#/BCOAC04E.cpy"
logical name FQSAM02
converter name FQSAM02
%%
%% Desc file PJ01AAA.SS.VSAM.CUSTOMER
%%
file PJ01AAA.SS.VSAM.CUSTOMER transferred
include "COPY/ODCSF0B.cpy"
map record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
source record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
logical name ODCSF0B
converter name ODCSF0B
 
Installing the copy files
Create a $PARAM/file/recs-source directory to hold the copy files.
Once the COBOL description files have been prepared, the copy files described in the mapper-<configuration name>.re file should be placed in the $PARAM/file/recs-source directory.
If you use a COBOL copy book from the source platform to describe a file (see note in COBOL description), then it is the location of the copy book that is directly used in the mapping parameter file as in the "COPY/ODCSF0B.cpy" example above.
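For example, assuming the copy files have been prepared in the current directory, they might be installed as follows (the copy file names are taken from the mapper example above):
# Create the directory and install the copy files referenced by
# mapper-FTFIL001.re (BCOAC01E.cpy and BCOAC04E.cpy).
mkdir -p $PARAM/file/recs-source
cp BCOAC01E.cpy BCOAC04E.cpy $PARAM/file/recs-source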
Generating the components
To generate the components used to migrate z/OS files the Rehosting Workbench uses the file.sh command. This section describes the command.
file.sh
Name
file.sh — Generate z/OS migration components.
Synopsis
file.sh [ [-g] [-m] [-i <installation directory>] <configuration name> | -s <installation directory> (<configuration name>,...) ]
Description
file.sh generates the components used to migrate z/OS files using the Rehosting Workbench.
Options
-g <configuration name>
Generation option. The unloading and loading components are generated in $TMPPROJECT using the information provided by the configuration files.
-m <configuration name>
Modification option. Makes the generated shell scripts executable. The COBOL programs are adapted to Micro Focus COBOL fixed format. When present, the shell script that modifies the generated source files is executed.
-i <installation directory> <configuration name>
Installation option. Places the components in the installation directory. This operation uses the information located in the file-move-assignation.pgm file.
-s
Not applicable to File-to-File migration.
Example
file.sh -gmi $HOME/trf FTFIL001
Locations of generated files
The unloading and loading components generated with the -i $HOME/trf option are placed in the following locations:
$HOME/trf/unload/file/<configuration name>
$HOME/trf/reload/file/<configuration name>
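For example, after the generation shown above, the output for the FTFIL001 configuration can be checked with:
# List the generated unloading and reloading components.
ls $HOME/trf/unload/file/FTFIL001
ls $HOME/trf/reload/file/FTFIL001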
The generation log files Mapper-log-<configuration name> can be used to resolve problems.
Modifying generated components
The generated components may be modified using a project’s own scripts. These scripts (sed, awk, perl, …) should be placed in:
$PARAM/file/file-modif-source.sh
When present, this file will be automatically executed at the end of the generation process. It will be called using the <configuration name> as an argument.
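The following minimal sketch illustrates what such a script could look like; the sed substitution and the location of the generated sources under $TMPPROJECT are hypothetical:
#!/bin/ksh
# $PARAM/file/file-modif-source.sh (hypothetical example)
# Called automatically at the end of generation, with the
# configuration name as its only argument.
CONFIG=$1
# Example: apply a site-specific substitution to every COBOL source
# generated for this configuration (location assumed to be $TMPPROJECT).
for f in $TMPPROJECT/*.cbl; do
    sed 's/OLD-SITE-VALUE/NEW-SITE-VALUE/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done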
Using the make utility
Make is a UNIX utility intended to automate and optimize the construction of targets (files or actions).
You should have a descriptor file named makefile in the source directory in which all operations are implemented (a makefile is prepared in the source directory during the initialization of a project).
The next two sections describe configuring a make file and how to use the Rehosting Workbench File-To-File Converter functions with a make file.
Configuring a make file
Version.mk
The version.mk configuration file in $PARAM is used to set the variables and parameters required by the make utility.
In version.mk, specify where each type of component is installed, their file extensions, and the versions of the different tools to be used. This file also describes how the log files are organized.
The general variables required by the make utility should be set in the version.mk file at the beginning of the migration process.
In addition, the FILE_SCHEMAS variable is specific to file migration; it indicates the different configurations to process.
This configuration should be completed before using the make file.
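For example, a hypothetical fragment of version.mk declaring the configurations to process (FTFIL002 is an invented second configuration for illustration):
# version.mk (extract)
FILE_SCHEMAS = FTFIL001 FTFIL002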
make file contents
The contents of the makefile summarize the tasks to be performed.
A makefile and a version.mk file are provided with the Rehosting Workbench Simple Application.
Using a makefile with the Rehosting Workbench File-To-File Converter
The make FileConvert command can be used to launch the Rehosting Workbench File-To-File Converter. It enables the generation of the components required to migrate z/OS files to a UNIX/Linux target platform.
The make file launches the file.sh tool with the -g, -m and -i options, for all configurations contained in the FILE_SCHEMAS variable.
Performing the migration
This section describes the tasks of unloading, transfer and reloading using the components generated using the Rehosting Workbench (see Generating the components).
Preparation
Configuring the environments and installing the components
Installing the unloading components under z/OS
The components used for the unloading (generated in $HOME/trf/unload/file) should be installed on the source z/OS platform. The generated JCL may need adapting to specific site constraints including JOB cards, library access paths and access paths to input and output files (Data Set Name – DSN).
Installing the reloading components on target platform
The components used for the reloading (generated in $HOME/trf/reload/file) should be installed on the target platform (runtime).
The following environment variables should be set on the target platform:
Table 3‑5 Target platform environment variables
DATA_SOURCE — the directory containing the transferred input files.
DATA — the directory receiving the transcoded and reloaded files.
MT_LOG — the directory containing the execution logs.
Unloading JCL
An unloading JCL is generated for each z/OS file listed in the Datamap parameter file (Datamap-<configuration name>.re). These unloading JCLs are named <logical file name>.jclunload.
Note:
The .jclunload extension should be deleted for execution under z/OS.
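If the renaming is done on the UNIX/Linux side before the JCL is transferred to z/OS, a simple Korn shell loop can remove the extension:
# Strip the .jclunload extension from every generated unloading JCL.
for f in *.jclunload; do
    mv "$f" "${f%.jclunload}"
done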
Transferring the files
Files should be transferred between the source z/OS platform and the target platform in binary format using the file transfer tools available at the site (CFT, FTP, …).
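For example, a sketch of a binary FTP transfer of the VSAM customer file from z/OS; the host name, user, password and local file name are hypothetical, and site tools such as CFT may be used instead:
# Pull PJ01AAA.SS.VSAM.CUSTOMER in binary mode into $DATA_SOURCE.
ftp -n zoshost <<EOF
user myuser mypassword
binary
get 'PJ01AAA.SS.VSAM.CUSTOMER' $DATA_SOURCE/ODCSF0B
bye
EOF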
Compiling the transcoding programs
The generated COBOL programs used for transcoding and reloading are named:
RELFILE-<logical file name>
For the example provided in Mapping parameter file (mapper-<configuration name>.re), the generated programs are:
RELFILE-FQSAM01.cbl
RELFILE-FQSAM02.cbl
RELFILE-ODCSF0B.cbl
Sequential files
When migrating a sequential file, a Micro Focus LINE SEQUENTIAL output file will be generated:
Listing 3‑10 FILE CONTROL example – extract from program: RELFILE-FQSAM01.cbl:
SELECT SORTIE
ASSIGN TO "SORTIE"
ORGANIZATION IS LINE SEQUENTIAL
FILE STATUS IS IO-STATUS.
 
 
VSAM KSDS files
When migrating a VSAM KSDS file, an INDEXED output file will be generated:
Listing 3‑11 FILE CONTROL example – extract from program: RELFILE-ODCSF0B.cbl:
SELECT MW-SORTIE
ASSIGN TO "SORTIE"
ORGANIZATION IS INDEXED
ACCESS IS DYNAMIC
RECORD KEY IS VS-CUSTIDENT
FILE STATUS IS IO-STATUS.
 
These COBOL programs should be compiled with Micro Focus COBOL using the options described in the Cobol converter section of the Rehosting Workbench Reference Guide.
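For example, with Micro Focus Server Express, a compilation might look like the following sketch; the exact compiler options are those described in the Reference Guide and are not reproduced here:
# Compile the transcoding programs to dynamically loadable modules.
cob -u RELFILE-FQSAM01.cbl
cob -u RELFILE-ODCSF0B.cbl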
Executing the transcoding and reloading scripts
The transcoding and reloading scripts use the following parameters:
Synopsis
loadfile-<logical file name>.ksh [-t/-l] [-c <method>]
Options
-t
Transcode and reload the file.
-l
Transcode and reload the file (same action as -t).
-c ftp:<…>:<…>
Implement the verification of the transfer (see Checking the transfers).
Examples
For the example provided in Mapping parameter file (mapper-<configuration name>.re), the generated scripts are:
loadfile-FQSAM01.ksh
loadfile-FQSAM02.ksh
loadfile-ODCSF0B.ksh
Files
By default, the input file is located in the directory indicated by $DATA_SOURCE, and the output file is placed in the directory indicated by $DATA.
These files are named with the logical file name used in the Mapping parameter file (mapper-<configuration name>.re).
An execution log is created in the directory indicated by $MT_LOG.
A non-zero return code is produced when a problem is encountered.
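For example, a hypothetical run that transcodes and reloads the customer file, assuming $DATA_SOURCE, $DATA and $MT_LOG are set and the transferred file is present in $DATA_SOURCE:
# Transcode and reload the ODCSF0B file, then test the return code.
loadfile-ODCSF0B.ksh -t
if [ $? -ne 0 ]; then
    echo "reload failed, check the execution log in $MT_LOG" >&2
fi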
Checking the transfers
This check uses the following option of the loadfile-<logical file name>.ksh script:
-c ftp:<name of transferred physical file>:<name of FTP log under UNIX>
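For example, a hypothetical invocation combining transcoding with transfer verification; the physical file name comes from the datamap example above, and the FTP log path is illustrative:
# Transcode, reload, and verify the FTP transfer of the customer file.
loadfile-ODCSF0B.ksh -t -c ftp:PJ01AAA.SS.VSAM.CUSTOMER:/tmp/ftp-customer.log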
Troubleshooting
This section describes problems resulting from usage errors that have been encountered when migrating files from the source to target platform.
Overview
When executing any of the Rehosting Workbench tools, users should check whether the Mapper-log-<configuration name> file contains any errors (see Common problems and solutions).
Error messages and associated explanations are listed in the appendix of the Oracle Tuxedo Application Rehosting Workbench Reference Guide.
Common problems and solutions
Error: Unknown file organization *UNDEFINED*
When executing file.sh -gmi $HOME/trf STFILEORA the following message appears:
Refine error...
Log
The contents of the Mapper-log-STFILEORA log file include:
file PJ01AAA.SS.QSAM.CUSTOMER.REPORT loaded/unloaded
file logical name MW-SYSOUT
*** Unknown file organization : *UNDEFINED*
mapping record MW-SYSOUT
record MW-SYSOUT: logical name MW-SYSOUT
record MW-SYSOUT: logical name MW-SYSOUT
Explanation
A file to be migrated is present in the mapper-<configuration name>.re file but absent from the Datamap-<configuration name>.re file.
Error: Record... not found
When executing file.sh -gmi $HOME/trf STFILEORA1 the following message appears:
Refine error...
Log
The contents of the Mapper-log-STFILEORA1 log file include:
file PJ01AAA.SS.QSAM.CUSTOMER.REPORT loaded/unloaded
file logical name MW-SYSOUT
file is sequential: no primary key
*** record `MW-SYSOUT in COPY/MW_SYSOU2T.cpy' not found ***
*** ERROR: all records omitted ***
mapping records
Explanation
The copy file is unknown.
Error: Record... not found
When executing file.sh -gmi $HOME/trf STFILEORA2 the following message appears:
Refine error...
Log
The contents of the Mapper-log-STFILEORA2 log file include:
file PJ01AAA.SS.QSAM.CUSTOMER.REPORT loaded/unloaded
file logical name MW-SYSOUT
file is sequential: no primary key
*** record `MW-SYSOUTTT in COPY/MW_SYSOUT.cpy' not found ***
record MW-SYSOUT reselected (all records omitted)
mapping record MW-SYSOUT
record MW-SYSOUT: logical name MW-SYSOUT
Explanation
The RECORD name (level 01 field) is unknown.
Error: External Variable PARAM is not set
When executing file.sh -gmi $HOME/trf STFILEORA3 the following message appears:
Refine error...
Log
The contents of the Mapper-log-STFILEORA3 log file include:
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-
############################################################################
Control of schema STFILEORA3
External Variable PARAM is not setted !
ERROR : Check directive files for schema STFILEORA3
 
Explanation
The variable $PARAM has not been set.
 
