

File Convertor

This chapter describes the Rehosting Workbench File Convertor used to migrate files from the source platform (z/OS) to Unix/Linux Micro Focus files or to Oracle tables, and describes the migration tools that are generated. The conversion is performed in the context of other components translated or generated by the other Oracle Tuxedo Application Rehosting Workbench tools.
Several configuration files must be set before launching the conversion process; see Description of the configuration files.
The different objects generated are described in Description of the output files. Some objects are only generated when migrating VSAM files to Oracle tables (PCO programs, SQL files, relational module, logical module, utilities and configuration files for JCL and COBOL conversion).
Overview of the File Convertor
Purpose
The purpose of this section is to describe precisely all the features of the Rehosting Workbench File Convertor tools including:
Structure
See also
The conversion of data is closely linked to the conversion of COBOL programs, see:
File organizations processed
When migrating files from a z/OS source platform to a target platform, the first question to ask for VSAM files is whether to keep the file organization or to migrate the data to an Oracle table.
Keeping z/OS file organization on the target platform
The Oracle Tuxedo Application Rehosting Workbench File Convertor is used for those files that keep their source platform format (sequential, relative or indexed files) on the target platform. On the target platform, these files use a Micro Focus COBOL file organization equivalent to the one on the source platform.
The following table lists the file organizations handled by z/OS and indicates the organization proposed on the target platform:
PDS file organization
Files that are part of a PDS are identified as such by their physical file name, for example: METAW00.NIV1.ESSAI(FIC).
An unloading JCL adapted to PDS is generated in this case. The source and target file organizations as indicated in the above table are applied.
GDG file organization
Generation Data Group (GDG) files are handled specially by the unloading and reloading components in order to maintain their specificity (number of GDG archives to unload and reload). They are subsequently managed as generation files by Oracle Tuxedo Application Runtime Batch (see the Oracle Tuxedo Application Runtime Batch Reference Guide). On the target platform these files have a LINE SEQUENTIAL organization.
Migrating to Oracle table on the target platform
KSDS, RRDS and ESDS VSAM files can be migrated into Oracle tables.
To make this work, the first task is to list all of the VSAM files to be migrated, and then identify those files that should be converted to Oracle tables: for example, permanent files to be used later via Oracle, or files that need locking at the record level.
Oracle Tuxedo Application Rehosting Workbench configuration name
A configuration name is related to a set of files to be converted. Each set of files can be freely assembled. Each configuration could, for example, correspond to a different application, or to a set of files required for tests.
File descriptions and managing files with the same structure
For each candidate file for migration, its structure should be described in COBOL format. This description is used in a COBOL copy by the Rehosting Workbench COBOL converter, subject to the limitations described in COBOL description.
Once built, the list of files to migrate can be purged of files with the same structure in order to save work when migrating the files by limiting the number of programs required to transcode and reload data.
Using the purged list of files, a last task consists of building the files:
COBOL description
A COBOL description is related to each file and is considered the representative COBOL description used within the application programs. This description can be a complex COBOL structure using all COBOL data types, including OCCURS and REDEFINES clauses.
This COBOL description is often more developed than the COBOL file description (FD). For example, an FD field can be described as PIC X(364) but actually contain an area defined three times, including in one case a table of COMP-3 numeric fields and in another a complex description of several character/digit fields, and so on.
It is this developed COBOL description that reflects the application reality and is therefore used as the basis for migrating a specific physical file.
The quality of the file processing execution depends on the quality of this COBOL description. From this point on, the COBOL description is inseparable from the file: when referring to the file concerned, we mean both the file and its representative COBOL description. The description must be provided in COBOL format, in a file with the following name:
<COPY name>.cpy
Note:
COBOL description format
The format of the COBOL description must conform to the following rules:
COBOL description and related discrimination rules
Within a COBOL description there are often several different ways to describe the same area, that is, objects with different structures and descriptions are stored in the same place.
Because the same zone can contain objects with different descriptions, reading the file requires a mechanism for determining which description to use in order to correctly interpret a given data area.
We need a rule which, according to some criteria (generally the content of one or more fields of the record), enables us to determine (discriminate) the description to use when reading the redefined area.
In the Rehosting Workbench this rule is called a discrimination rule.
Any redefinition inside a COBOL description lacking discrimination rules presents a major risk during file transcoding. Therefore, any non-equivalent redefined field requires a discrimination rule. On the other hand, any equivalent redefinition (called a technical redefinition) should be cleansed from the COBOL description (see the example below COBOL description format).
The discrimination rules must be presented per file and highlight the differences and discriminated areas. For files, it is impossible to reference a field external to the file description.
The discrimination rules are provided in the mapper file. The syntax is described in chapter Mapper file of this document.
VSAM files becoming Oracle tables
Specific Migration rules applied
This column is incremented each time a new line is added to the table and becomes the primary key of the table.
The size of the column is deduced from the information supplied in the Datamap parameter file; the column becomes the primary key of the table.
The column:
Rules applied to Picture clauses
The following rules are applied to COBOL Picture clauses when migrating data from VSAM files to Oracle tables:
Environment variables
Before starting the process of migrating data, two environment variables should be set:
Indicates the location to store temporary objects generated by the process.
Indicates the location where the configuration files required by the process are stored.
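The variable names are not shown above, but based on the variables used later in this chapter ($TMPPROJECT for temporary objects and $PARAM for configuration files), a typical setup might be the following sketch; the actual paths are illustrative:

```shell
# TMPPROJECT: where the process stores temporary generated objects
# (the chapter later notes it is typically $HOME/tmp)
export TMPPROJECT=$HOME/tmp

# PARAM: where the configuration files (db-param.cfg, Datamap, mapper)
# are stored -- this path is an illustrative example
export PARAM=$HOME/wkb/param

mkdir -p "$TMPPROJECT" "$PARAM"
```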
Description of the input components
File locations
Location of file.sh
The file.sh tool is located in the directory:
$REFINEDIR/convert-data/
Location of db-param.cfg file
The db-param.cfg configuration file is located in the directory given in the variable:
$PARAM
Description of the configuration files
This section lists the files, and their parameters, that can be used to control the migration of z/OS files to UNIX/Linux files or an Oracle table.
db-param.cfg
This file should be created in the directory indicated by the $PARAM variable:
$PARAM/db-param.cfg
Listing 4‑1 db-param.cfg template
#
# This configuration file is used by FILE & RDBMS converter
# Lines beginning by "#" are ignored
# write information in lower case
#
# common parameters for FILE and RDBMS
#
# source information is written into system descriptor file (DBMS=, DBMS-VERSION=)
target_rdbms_name:<target_rdbms_name>
target_rdbms_version:<target_rdbms_version>
target_os:<target_os>
#
# specific parameters for FILE to RDBMS conversion
file:char_limit_until_varchar:<char_limit>
 
 
Parameters and syntaxes
If the parameter contains file:char_limit_until_varchar:29, alphanumeric fields longer than 29 characters are converted to VARCHAR2 columns, while shorter fields become CHAR columns (consistent with the generated tables shown later in this chapter, where PIC X(30) becomes VARCHAR2(30) and PIC X(20) becomes CHAR(20)).
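For instance, a filled-in db-param.cfg for an Oracle target on Linux might look like the following sketch; the values are illustrative and must match the system descriptor file of your project:

```
# common parameters for FILE and RDBMS
target_rdbms_name:oracle
target_rdbms_version:11
target_os:linux
#
# specific parameters for FILE to RDBMS conversion:
# alphanumeric fields longer than 29 characters become VARCHAR2 columns
file:char_limit_until_varchar:29
```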
File modifying generated components
The generated components may be modified using a project's own scripts. These scripts (sed, awk, perl,…) should be placed in:
$PARAM/file/file-modif-source.sh
When present, this file will be automatically executed at the end of the generation process. It will be called using the <configuration name> as an argument.
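As a sketch, a minimal file-modif-source.sh could apply a sed substitution to the generated components; everything below (the file name, the substitution, the target mount point) is hypothetical, and the demo operates on a simulated generated script in a temporary directory:

```shell
# Hypothetical body for $PARAM/file/file-modif-source.sh.
# file.sh passes the configuration name as the first argument.
config="${1:-STFILEORA}"
workdir=$(mktemp -d)

# Simulate one generated reload script for the demonstration.
printf 'export DD_ENTREE=${DATA_SOURCE}/ODCSFU\n' > "$workdir/loadfile-ODCSFU.ksh"

# Project-specific change: point DATA_SOURCE at a per-configuration mount.
for f in "$workdir"/loadfile-*.ksh; do
  sed "s|\${DATA_SOURCE}|/project/$config/data|" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done

result=$(cat "$workdir/loadfile-ODCSFU.ksh")
rm -rf "$workdir"
echo "$result"
```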
file-template.txt
This file is put in place during the installation of the Rehosting Workbench; it contains the templates that perform the generation of the different migration tools. The file is located in:
$REFINEDIR/convert-data/default/file/file-templates.txt
Listing 4‑2 file-template.txt
% Unloading all File ********************
% All SAM file were transfered using FTP/Binary
% VSAM unloaded step:
#VAR:TEMPLATES#/unloading/jcl-unload-MVS-REPRO.pgm
%
% To create a specific template, copy this template into :
% -- #VAR:PARAM#/file/specific-templates/unloading/jcl-unload-customer.pgm
%
% Loading **********************************************
#VAR:TEMPLATES#/loading/file-reload-files-txt.pgm
% Loading File to File ***************************
#VAR:TEMPLATES#/loading/unix-file/reload-files-ksh.pgm
#VAR:TEMPLATES#/loading/unix-file/reload-mono-rec.pgm
% Loading File to Oracle *************************
#VAR:TEMPLATES#/loading/unix-oracle/load-tables-ksh.pgm
#VAR:TEMPLATES#/loading/unix-oracle/rel-mono-rec.pgm
#VAR:TEMPLATES#/dml/clean-tables-ksh.pgm
#VAR:TEMPLATES#/dml/drop-tables-ksh.pgm
#VAR:TEMPLATES#/dml/create-tables-ksh.pgm
#VAR:TEMPLATES#/dml/ifempty-tables-ksh.pgm
#VAR:TEMPLATES#/dml/ifexist-tables-ksh.pgm
%
% Generate Logical & Relational Module *****************
#VAR:TEMPLATES#/dml/module/open-multi-assign-free.pgm
#VAR:TEMPLATES#/dml/module/open-mono-rec-idx-perf.pgm
#VAR:TEMPLATES#/dml/module/open-mono-rec-sequential.pgm
#VAR:TEMPLATES#/dml/module/open-mono-rec-relative.pgm
%
% and utilities ****************************************
#VAR:TEMPLATES#/dml/module/decharge-mono-rec.pgm
#VAR:TEMPLATES#/dml/module/recharge-table.pgm
#VAR:TEMPLATES#/dml/module/close-all-files.pgm
#VAR:TEMPLATES#/dml/module/init-all-files.pgm
%
% configuration file for translation and runtime *******
#VAR:TEMPLATES#/dml/generate-config-wb-translator-jcl.pgm
#VAR:TEMPLATES#/dml/generate-rdb-txt.pgm
%
% included file to include into modified-components
#VAR:TEMPLATES#/dml/include-modified-components.pgm
%
% ***************************************
% MANDATORY
% : used just after the generation
#VAR:TEMPLATES#/dml/generate-post-process.pgm
% : used when using -i arguments
#VAR:DEFAULT#/file-move-assignation.pgm
 
When required, another version of the file-template.txt file can be placed in the $PARAM/file directory. The use of an alternative file is signaled during the execution of file.sh by the message:
Listing 4‑3 Execution log with alternative template file
##########################################################################
Control of templates
OK: Use Templates list file from current project:
File name is /home2/wkb9/param/file/file-templates.txt
##########################################################################
 
file_move_assignation.txt
This file is placed during the installation of the Rehosting Workbench; it controls the transfer of the generated components to the different installation directories. This file indicates the location of each component to copy during the installation phase of file.sh, when launched using file.sh -i.
The file is located in:
$REFINEDIR/convert-data/default/file/file-move-assignation.pgm
This file can be modified following the instructions found at the beginning of the file:
Listing 4‑4 file_move_assignation.txt modification instructions
[…]
*@ (c) Metaware:file-move-assignation.pgm. $Revision: 1.2 $
*release_format=2.3
*
* format is:
* <typ>:<source_directory>:<file_name>:<target_directory>
*
* typ:
* O: optional copy: if the <file_name> is missing, it is ignored
* M: Mandatory copy: abort if <file_name> is missing.
* E: Execution: execute the mandatory script <file_name>.
* Parameters for script to be executed are:
* basedir: directory of REFINEDIR/convert-data
* targetoutputdir: value of "-i <targetdir>"
* schema: schema name
* target_dir: value written as 4th parameter in this file.
* d: use this tag to display the word which follows
*
* source_directory:
* T: generated components written in <targetdir>/Templates/<schema>
* O: components written in <targetdir>/outputs/<schema>
* S: SQL requests (DDL) generated into <targetdir>/SQL/<schema> directory
* F: fixed components present in REFINEDIR
* s: used with -s arguments: indicates the target directory for DML utilities
* (in REFINEDIR/modified-components/) which manipulate all schemas.
*
* file_name: (except for typ:d)
* name of the file in <source_directory>
*
* target_directory: (except for typ:d, given at 4th argument for typ:E)
* name of the target directory
* if the 1st character is "/", component is copied using static directory and not in <td> directory
*
[…]
 
Datamap file
This is a configuration file used by the Rehosting Workbench file converter to add or modify information on the physical files of a system.
Each z/OS file to be migrated must be listed in this file; the file contains only the list of files to be migrated.
The Datamap file must be created in the $PARAM/file directory with the complete name:
Datamap-<configuration name>.re
Where <configuration name> is the name of the current configuration used.
Datamap syntax and parameters
Listing 4‑5 Datamap file
data map <configuration name>-map system cat::<project name>
file <physical file name>
organization <organization>
[is-gdg limit <p> [scratch/noscratch] [empty/noempty]]
[keys offset <n> bytes length <m> bytes primary]
[relkey size <m> bytes]
 
 
is-gdg limit <p> [scratch/noscratch] [empty/noempty]
The <p> parameter value specifies the total number of generations that the GDG may contain.
The scratch and noscratch parameters are mutually exclusive. scratch specifies that whenever an entry of the GDG is removed from the index, it is physically deleted and uncataloged; noscratch specifies that whenever an entry of the GDG is removed from the index, it is uncataloged but not physically deleted.
The empty and noempty parameters are mutually exclusive. empty specifies that all existing generations of the GDG are to be uncataloged whenever the number of generations reaches the maximum limit; noempty specifies that only the oldest generation of the GDG is to be uncataloged when the limit is reached.
For indexed files, the keys clause describes the key, where <n> is the start position (offset) and <m> is the length of the key.
Listing 4‑6 Datamap example
data map STFILEORA-map system cat::STFILEORA
%% Datamap File PJ01AAA.SS.QSAM.CUSTOMER
file PJ01AAA.SS.QSAM.CUSTOMER
organization Sequential
%% Datamap File PJ01AAA.SS.VSAM.CUSTOMER
file PJ01AAA.SS.VSAM.CUSTOMER
organization Indexed
keys offset 1 bytes length 6 bytes primary
 
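A GDG entry would combine the organization and is-gdg clauses from Listing 4‑5; the file name, limit, and options in this sketch are illustrative:

```
%% Hypothetical Datamap entry for a GDG file
file PJ01AAA.SS.GDG.ARCHIVE
organization Sequential
is-gdg limit 5 noscratch noempty
```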
Mapper file
This is a configuration file used by the Rehosting Workbench File Convertor to associate each file to migrate with:
Each z/OS file listed in the Datamap file must be described in the mapper file.
Mapping file clause
Mapping a file consists of choosing, for each physical file to be processed, the associated COBOL description and discrimination rules.
Listing 4‑7 Mapper file clause structure
file <Physical file name>
[converted] [transferred]
table name <Table Name>
include <"path/Copy name">
map record <record name> defined in <"path/Copy name">
source record <record name> defined in <"path/Copy name">
logical name <logical file name>
converter name <converter name>
[attributes <attribute clause>]
[mapping strategies clauses]
 
 
 
record name: corresponds to the level 01 field name of the copy description.
path/COPY name: corresponds to the access path and name of the descriptive copy of the file to migrate.
record name: corresponds to the level 01 field name of the copy description of the file to migrate.
path/COPY name: corresponds to the access path and name of the descriptive copy of the file to migrate.
Listing 4‑8 Mapper file example
ufas mapper STFILEORA
file PJ01AAA.SS.VSAM.CUSTOMER converted transferred
table name CUSTOMER
include "COPY/ODCSF0B.cpy"
map record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
source record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
logical name ODCSF0B
converter name ODCSF0B
attributes LOGICAL_MODULE_IN_ADDITION
 
In this example the mapper file is named STFILEORA. Only one file, PJ01AAA.SS.VSAM.CUSTOMER, is processed; it is migrated to an Oracle table (the converted keyword). The ODCSF0B.cpy copy file used to describe the file is one of the source copy files.
Mapping strategy clauses
[ field <field_name>
[ use detail table ]
[ use opaque field <field name> ]
[ table name <target table name> ]
[ mapped type <target data type> ]
[ discard field <field name> ]
[ discard subfields <field name> ]
[ discrimination rule ] ]
Mapping strategy clause syntax and parameters
For OCCURS and REDEFINES clauses, using discrimination rules, three reengineering possibilities are proposed:
Table 4‑6 Mapping strategies
Mapping strategy examples
Discard subfield example
05 NIV1.
10 NIV2A PIC 99.
10 NIV2B PIC 999.
When discarding subfields at the level NIV1, the Rehosting Workbench File Convertor only processes the field NIV1 PIC 9(5). When not discarding subfields, the NIV1 field is ignored and the two fields NIV2A and NIV2B are processed.
Redefines with default option example
This redefines example is without any specific options:
Listing 4‑9 Descriptive copy of the file: PJ01AAA.SS.VSAM.CUSTOMER
01 VS-ODCSF0-RECORD.
05 VS-CUSTIDENT PIC 9(006).
05 VS-CUSTLNAME PIC X(030).
05 VS-CUSTFNAME PIC X(020).
05 VS-CUSTADDRS PIC X(030).
05 VS-CUSTCITY PIC X(020).
05 VS-CUSTSTATE PIC X(002).
05 VS-CUSTBDATE PIC 9(008).
05 VS-CUSTBDATE-G REDEFINES VS-CUSTBDATE.
10 VS-CUSTBDATE-CC PIC 9(002).
10 VS-CUSTBDATE-YY PIC 9(002).
10 VS-CUSTBDATE-MM PIC 9(002).
10 VS-CUSTBDATE-DD PIC 9(002).
05 VS-CUSTEMAIL PIC X(040).
05 VS-CUSTPHONE PIC 9(010).
05 VS-FILLER PIC X(100).
 
The mapper file implemented is:
Listing 4‑10 Mapper file for the file: PJ01AAA.SS.VSAM.CUSTOMER
ufas mapper STFILEORA
file PJ01AAA.SS.VSAM.CUSTOMER converted transferred
table name CUSTOMER
include "COPY/ODCSF0B.cpy"
map record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
source record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
logical name ODCSF0B
converter name ODCSF0B
attributes LOGICAL_MODULE_IN_ADDITION
field VS-CUSTBDATE
rule if VS-CUSTSTATE = "02" then VS-CUSTBDATE
else VS-CUSTBDATE-G
 
The table is generated as follows (all the unitary fields of the REDEFINES are handled).
Listing 4‑11 Table generation for the file: PJ01AAA.SS.VSAM.CUSTOMER
WHENEVER SQLERROR CONTINUE;
DROP TABLE CUSTOMER CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE CUSTOMER (
VS_CUSTIDENT NUMBER(6) NOT NULL,
VS_CUSTLNAME VARCHAR2(30),
VS_CUSTFNAME CHAR (20),
VS_CUSTADDRS VARCHAR2(30),
VS_CUSTCITY CHAR (20),
VS_CUSTSTATE CHAR (2),
VS_CUSTBDATE NUMBER(8),
VS_CUSTBDATE_CC NUMBER(2),
VS_CUSTBDATE_YY NUMBER(2),
VS_CUSTBDATE_MM NUMBER(2),
VS_CUSTBDATE_DD NUMBER(2),
VS_CUSTEMAIL VARCHAR2(40),
VS_CUSTPHONE NUMBER(10),
VS_FILLER VARCHAR2(100),
CONSTRAINT PK_CUSTOMER PRIMARY KEY (
VS_CUSTIDENT)
);
 
REDEFINES with OPAQUE FIELD option example
Listing 4‑12 Descriptive copy of the file: PJ01AAA.SS.VSAM.CUSTOMER
01 VS-ODCSF0-RECORD.
05 VS-CUSTIDENT PIC 9(006).
05 VS-CUSTLNAME PIC X(030).
05 VS-CUSTFNAME PIC X(020).
05 VS-CUSTADDRS PIC X(030).
05 VS-CUSTCITY PIC X(020).
05 VS-CUSTSTATE PIC X(002).
05 VS-CUSTBDATE PIC 9(008).
05 VS-CUSTBDATE-G REDEFINES VS-CUSTBDATE.
10 VS-CUSTBDATE-CC PIC 9(002).
10 VS-CUSTBDATE-YY PIC 9(002).
10 VS-CUSTBDATE-MM PIC 9(002).
10 VS-CUSTBDATE-DD PIC 9(002).
05 VS-CUSTEMAIL PIC X(040).
05 VS-CUSTPHONE PIC 9(010).
05 VS-FILLER PIC X(100).
 
The mapper file implemented is:
Listing 4‑13 Mapper file for the file: PJ01AAA.SS.VSAM.CUSTOMER
ufas mapper STFILEORA
file PJ01AAA.SS.VSAM.CUSTOMER converted transferred
table name CUSTOMER
include "COPY/ODCSF0B.cpy"
map record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
source record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
logical name ODCSF0B
converter name ODCSF0B
attributes LOGICAL_MODULE_IN_ADDITION
field VS-CUSTBDATE
use opaque field
rule if VS-CUSTSTATE = "02" then VS-CUSTBDATE
else VS-CUSTBDATE-G
 
The table is generated as follows (only the VS_CUSTBDATE field is kept).
Listing 4‑14 Table generation for the file: PJ01AAA.SS.VSAM.CUSTOMER
WHENEVER SQLERROR CONTINUE;
DROP TABLE CUSTOMER CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE CUSTOMER (
VS_CUSTIDENT NUMBER(6) NOT NULL,
VS_CUSTLNAME VARCHAR2(30),
VS_CUSTFNAME CHAR (20),
VS_CUSTADDRS VARCHAR2(30),
VS_CUSTCITY CHAR (20),
VS_CUSTSTATE CHAR (2),
VS_CUSTBDATE RAW (8),
VS_CUSTEMAIL VARCHAR2(40),
VS_CUSTPHONE NUMBER(10),
VS_FILLER VARCHAR2(100),
CONSTRAINT PK_CUSTOMER PRIMARY KEY (
VS_CUSTIDENT)
);
REDEFINES with DETAIL TABLE option example
Listing 4‑15 Descriptive copy of the file: PJ01AAA.SS.VSAM.CUSTOMER
01 VS-ODCSF0-RECORD.
05 VS-CUSTIDENT PIC 9(006).
05 VS-CUSTLNAME PIC X(030).
05 VS-CUSTFNAME PIC X(020).
05 VS-CUSTADDRS PIC X(030).
05 VS-CUSTCITY PIC X(020).
05 VS-CUSTSTATE PIC X(002).
05 VS-CUSTBDATE PIC 9(008).
05 VS-CUSTBDATE-G REDEFINES VS-CUSTBDATE.
10 VS-CUSTBDATE-CC PIC 9(002).
10 VS-CUSTBDATE-YY PIC 9(002).
10 VS-CUSTBDATE-MM PIC 9(002).
10 VS-CUSTBDATE-DD PIC 9(002).
05 VS-CUSTEMAIL PIC X(040).
05 VS-CUSTPHONE PIC 9(010).
05 VS-FILLER PIC X(100).
 
The mapper file implemented is:
Listing 4‑16 Mapper file for the file: PJ01AAA.SS.VSAM.CUSTOMER
ufas mapper STFILEORA
file PJ01AAA.SS.VSAM.CUSTOMER converted transferred
table name CUSTOMER
include "COPY/ODCSF0B.cpy"
map record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
source record VS-ODCSF0-RECORD defined in "COPY/ODCSF0B.cpy"
logical name ODCSF0B
converter name ODCSF0B
attributes LOGICAL_MODULE_IN_ADDITION
field VS-CUSTBDATE
use detail table
rule if VS-CUSTSTATE = "02" then VS-CUSTBDATE
else VS-CUSTBDATE-G
 
The tables are generated as follows (a parent table is generated using the fields not part of the REDEFINES, and two child tables are generated, one for each REDEFINES description).
Listing 4‑17 Table generation for the file: PJ01AAA.SS.VSAM.CUSTOMER
WHENEVER SQLERROR CONTINUE;
DROP TABLE CUSTOMER CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE CUSTOMER (
VS_CUSTIDENT NUMBER(6) NOT NULL,
VS_CUSTLNAME VARCHAR2(30),
VS_CUSTFNAME CHAR (20),
VS_CUSTADDRS VARCHAR2(30),
VS_CUSTCITY CHAR (20),
VS_CUSTSTATE CHAR (2),
VS_CUSTEMAIL VARCHAR2(40),
VS_CUSTPHONE NUMBER(10),
VS_FILLER VARCHAR2(100),
CONSTRAINT PK_CUSTOMER PRIMARY KEY (
VS_CUSTIDENT)
);
 
 
WHENEVER SQLERROR CONTINUE;
DROP TABLE VS_CUSTBDATE CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE VS_CUSTBDATE (
VS_CUSTBDATE_CUSTIDENT NUMBER(6) NOT NULL,
VS_CUSTBDATE NUMBER(8),
CONSTRAINT FK_VS_CUSTBDATE_CUSTOMER FOREIGN KEY (
VS_CUSTBDATE_CUSTIDENT) REFERENCES CUSTOMER (
VS_CUSTIDENT),
CONSTRAINT PK_VS_CUSTBDATE PRIMARY KEY (
VS_CUSTBDATE_CUSTIDENT)
);
 
WHENEVER SQLERROR CONTINUE;
DROP TABLE VS_CUSTBDATE_G CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE VS_CUSTBDATE_G (
VS_CUSTBDATE_G_CUSTIDENT NUMBER(6) NOT NULL,
VS_CUSTBDATE_CC NUMBER(2),
VS_CUSTBDATE_YY NUMBER(2),
VS_CUSTBDATE_MM NUMBER(2),
VS_CUSTBDATE_DD NUMBER(2),
CONSTRAINT FK_VS_CUSTBDATE_G_CUSTOMER FOREIGN KEY (
VS_CUSTBDATE_G_CUSTIDENT) REFERENCES CUSTOMER (
VS_CUSTIDENT),
CONSTRAINT PK_VS_CUSTBDATE_G PRIMARY KEY (
VS_CUSTBDATE_G_CUSTIDENT));
 
Discrimination rules
A discrimination rule must be set on the redefined field; it defines how to determine which description of the REDEFINES to use, and when.
[field <field_name>]
[…]
rule if <condition> then Field_Name_x
[elseif <condition> then field_Name_y]
[else Field_Name_z]
Discrimination rules syntax and parameters
Discrimination rules examples
In the following example the fields DPODP-DMDCHQ, DPONO-PRDTIV, DP5CP-VALZONNUM are redefined.
Listing 4‑18 Discrimination rule COBOL description
01 ZART1.
05 DPODP PIC X(20).
05 DPODP-RDCRPHY PIC 9.
05 DPODP-DMDCHQ PIC X(6).
05 DPODP-REMCHQ REDEFINES DPODP-DMDCHQ.
10 DPODP-REMCHQ1 PIC 999.
10 DPODP-REMCHQ2 PIC 999.
05 DPODP-VIREXT REDEFINES DPODP-DMDCHQ.
10 DPODP-VIREXT1 PIC S9(11) COMP-3.
05 DPONO-NPDT PIC X(5).
05 DPONO-PRDTIV PIC 9(8)V99.
05 DPONO-PRDPS REDEFINES DPONO-PRDTIV PIC X(10).
05 DP5CP-VALZONNUM PIC 9(6).
05 DP5CP-VALZON REDEFINES DP5CP-VALZONNUM PIC X(6).
 
The following discrimination rules are applied:
Listing 4‑19 Discrimination rules
field DPODP-DMDCHQ
rule if DPODP-RDCRPHY = 1 then DPODP-DMDCHQ
elseif DPODP-RDCRPHY = 2 then DPODP-REMCHQ
elseif DPODP-RDCRPHY = 3 then DPODP-VIREXT
else DPODP-DMDCHQ,
field DPONO-PRDTIV
rule if DPONO-NPDT (1:2) = "01" then DPONO-PRDTIV
elseif DPONO-NPDT (1:2) = "02" then DPONO-PRDPS,
field DP5CP-VALZONNUM
rule if DPODP-RDCRPHY is numeric then DP5CP-VALZONNUM
else DP5CP-VALZON
 
The first rule tests the value of the numeric field DPODP-RDCRPHY.
The second rule tests the first two characters of an alphanumeric field DPONO-NPDT. Only the values 01 and 02 are allowed.
The third rule tests whether the field DPODP-RDCRPHY is numeric.
Links to COBOL copy
As seen in the file clause of the example, the mapper file is linked to a COBOL copy file. This COBOL copy describes the unloaded data file: it contains column descriptions as well as technical fields. The unloaded file is created by the DSNTIAUL utility and contains column data and null indicator values.
For each column, a field name and two values for the attributes are generated:
Used when the column accepts the NULL flag or has the NOT NULL attribute respectively.
For each technical field, File Convertor generates:
COBOL Description
Oracle Tuxedo Application Rehosting Workbench File Convertor needs a description associated with each table, so a first step generates a COBOL copy description.
Once the COBOL description files have been prepared, the copy files described in the mapper-<configuration name>.re file should be placed in the $PARAM/file/recs-source directory.
If you use a COBOL copy book from the source platform to describe a file (see COBOL description), then it is the location of the copy book that is directly used.
POB files
These files are created during cataloging, for further information see POB files for ASTs.
Symtab file
symtab-<schema name>.pob
This file is created during cataloging; it must be up-to-date and present so that File Convertor can migrate DB2 objects to Oracle. See The Cataloger Symtab and other miscellaneous files.
Description of the output files
File locations
Location of temporary files
The temporary objects generated by the Rehosting Workbench File Convertor are stored in:
$TMPPROJECT
$TMPPROJECT/Template/<configuration name>
$TMPPROJECT/outputs/<configuration name>
Note:
The $TMPPROJECT variable is set to: $HOME/tmp
Location of log files
The execution log files are stored in:
$TMPPROJECT/outputs/mapper-log-<configuration name>
Location of generated files
The unloading and loading components generated with the -i $HOME/trf option are placed in the following locations:
$HOME/trf/unload/file/<configuration name>
<file name>.jclunload
$HOME/trf/reload/file/<configuration name>
Location by <configuration name> of the COBOL programs and KSH used for each loading.
Location by <configuration name> of the SQL scripts used to create the Oracle objects.
Note:
<target table name> is the file name on the target platform; this file name is supplied in the mapper file.
Generated objects
The following sections describe the objects generated during the migration of z/OS files and the directories in which they are placed.
Unloading JCL
The JCLs used to unload the files are generated using the -g option of the file.sh command. They are then installed (using the -i option) in:
$HOME/trf/unload/file/<configuration name>
Each JCL contains two steps and unloads one file using the z/OS IDCAMS REPRO utility. The JCL return code is equal to 0 or 4 for a normal termination.
The JCLs are named: <file name>.jclunload
Note:
The .jclunload extension should be deleted for execution under z/OS.
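For example, a small (hypothetical) helper script can strip the extension before the JCLs are uploaded; the demo below works on a simulated file in a temporary directory:

```shell
# Strip the .jclunload extension so the members can be executed under z/OS.
workdir=$(mktemp -d)
touch "$workdir/CUSTOMER.jclunload"     # simulated generated JCL

for f in "$workdir"/*.jclunload; do
  mv "$f" "${f%.jclunload}"
done

result=$(ls "$workdir")                 # now contains: CUSTOMER
rm -rf "$workdir"
echo "$result"
```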
The generated JCL may need adapting to specific site constraints including:
JOB cards: <crdjob>,
Listing 4‑20 Unload JCL example
//<crdjob> <cardjob_parameter_1>,'FIL QSAM',
// <cardjob_parameter_2>
// <cardjob_parameter_3>
// <cardjob_parameter_4>
//*@ (C) Metaware:jcl-unload-MVS-REPRO.pgm. $Revision: 1.6 $
//********************************************************
//* UNLOAD THE FILE:
//* <datain>.QSAM.CUSTOMER
//* INTO <data>.AV.QSAM
//* LENGTH=266
//******************************************************
//*------------------------------------------*
//* DELETE DATA AND LOG FILES
//*------------------------------------------*
//DEL EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD *
DELETE <data>.AV.QSAM.LOG
DELETE <data>.AV.QSAM
SET MAXCC=0
//*------------------------------------------*
//* LAUNCH REPRO UTILITY
//*------------------------------------------*
//COPYFILE EXEC PGM=IDCAMS
//SYSPRINT DD SPACE=(CYL,(150,150),RLSE),
// DISP=(NEW,CATLG),
// UNIT=SYSDA,
// DSN=<data>.AV.QSAM.LOG
//SYSOUT DD SYSOUT=*
//INDD DD DISP=SHR,
DSN=METAW00.QSAM.CUSTOMER
//OUTD DD SPACE=(CYL,(150,150),RLSE),
// DISP=(NEW,CATLG),
// UNIT=SYSDA,
// DCB=(LRECL=266,RECFM=FB),
// DSN=<data>.AV.QSAM
//SYSIN DD *
REPRO INFILE(INDD) OUTFILE(OUTD)
/*
 
COBOL transcoding programs
Migration of z/OS files to UNIX/Linux files
The COBOL transcoding programs are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:
$HOME/trf/reload/file/<configuration name>/src
The programs are named: RELFILE_<logical file name>.cbl
The programs should be compiled using the Micro Focus COBOL compilation options documented in Compiler options.
The compilation of these programs requires the presence of a CONVERTMW.cpy copy file adapted to the project.
These programs read an input file and write an output file with the same organization as on z/OS (Sequential, Relative, Indexed). For sequential files, the organization in the UNIX/Linux Micro Focus environment is Line Sequential.
Listing 4‑21 FILE CONTROL section - for transcoding programs
SELECT MW-ENTREE
ASSIGN TO "ENTREE"
ORGANIZATION IS SEQUENTIAL
ACCESS IS SEQUENTIAL
FILE STATUS IS IO-STATUS.
SELECT MW-SORTIE
ASSIGN TO "SORTIE"
ORGANIZATION IS LINE SEQUENTIAL
FILE STATUS IS IO-STATUS.
 
A record count is written to the output file and is displayed at the end of processing via:
DISPLAY "RELOADING TERMINATED OK".
DISPLAY "Nb rows reloaded: " D-NB-RECS.
DISPLAY " ".
DISPLAY "NUMERIC MOVED WHEN USING CHAR FORMAT: "
DISPLAY " NUMERIC-BCD : " MW-COUNT-NUMERIC-BCD-USE-X.
DISPLAY " NUMERIC-DISP: " MW-COUNT-NUMERIC-DISP-USE-X.
The last two lines displayed signal the movement of data into fields where the COBOL description does not match the content of the input file (packed numeric fields containing non-numeric data and numeric DISPLAY fields containing non-numeric data). When such cases are encountered, each field name and its value are displayed.
Note:
Migration of z/OS files to Oracle tables
The COBOL transcoding programs are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:
$HOME/trf/reload/file/<configuration name>/src
The programs are named: RELTABLE_<logical file name>.cbl
The programs should be compiled using the Micro Focus COBOL compilation options documented in Compiler options.
The compilation of these programs requires the presence of a CONVERTMW.cpy copy file adapted to the project.
These programs read an input file and load an Oracle table directly, using the SQL INSERT verb.
Listing 4‑22 FILE CONTROL section - for transcoding programs
SELECT MW-ENTREE
ASSIGN TO "ENTREE"
ORGANIZATION IS RECORD SEQUENTIAL
ACCESS IS SEQUENTIAL
FILE STATUS IS IO-STATUS.
 
A commit is made every 1000 records:
IF MW-NB-INSERT >= 1000
CALL "do_commit"
Note:
The do_commit module is part of Oracle Tuxedo Application Runtime Batch.
A record count is written to the output file and is displayed at the end of processing via:
DISPLAY "RELOADING TERMINATED OK".
DISPLAY "Nb rows reloaded: " D-NB-RECS.
DISPLAY " ".
DISPLAY "NUMERIC MOVED WHEN USING CHAR FORMAT : "
DISPLAY " NUMERIC-BCD : " MW-COUNT-NUMERIC-BCD-USE-X.
DISPLAY " NUMERIC-DISP: " MW-COUNT-NUMERIC-DISP-USE-X.
The last two lines displayed signal the movement of data into fields where the COBOL description does not match the content of the input file (packed numeric fields containing non-numeric data and numeric DISPLAY fields containing non-numeric data). When such cases are encountered, each error is displayed.
Reloading Korn shell scripts
The Reloading Korn shell scripts are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:
$HOME/trf/reload/file/<configuration name>
Reloading Korn shell scripts for migrating z/OS files to UNIX/Linux files
The scripts are named: loadfile-<logical file name>.ksh
They contain a transcoding (or loading) phase and a check phase. These phases can be launched separately.
The execution of the scripts produces an execution log in $MT_LOG/<logical file name>.log
The following variables are set at the beginning of each script:
Listing 4‑23 Reloading file script variables
f="@ (c) Metaware:reload-files-ksh.pgm. $Revision: 1.9 $null"
echo "Reloading file ODCSFU ODCSFU"
export DD_ENTREE=${DD_ENTREE:-${DATA_SOURCE}/ODCSFU}
export DD_SORTIE=${DD_SORTIE:-${DATA}/ODCSFU}
logfile=${MT_LOG}/ODCSFU.log
reportfile=${MT_LOG}/ODCSFU.rpt
[…]
 
Note:
Various messages may be generated during the execution phases of the scripts, these messages are explained in Oracle Tuxedo Application Rehosting Workbench Messages.
On normal end, a return code of 0 is returned.
Transcoding and loading phases
These steps launch the execution of the COBOL transcoding program associated with the file processed:
cobrun RELFILE-ODCSFU >> $logfile
On normal termination the following message is displayed:
echo "File ${DD_ENTREE} successfully transcoded and reloaded into ${DD_SORTIE}"
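The surrounding script logic can be sketched as follows. This is a minimal illustration, not the generated script itself: `run_program` stands in for the real `cobrun RELFILE-ODCSFU` call, which only runs in a Micro Focus environment, and the log path and messages are demo values.

```shell
# Sketch of the transcoding-phase wrapper (illustrative, not the real script).
logfile=$(mktemp)
run_program() { echo "RELOADING TERMINATED OK"; }   # stand-in for cobrun RELFILE-ODCSFU
run_program >> "$logfile"
if grep -q "RELOADING TERMINATED OK" "$logfile"; then
  status=0
  echo "File ENTREE successfully transcoded and reloaded into SORTIE"
else
  status=1
fi
```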
Check phase
This step verifies, after the reloading, that the reloaded file contains the same number of records as were transferred from the z/OS source platform. If the number of records is different, an error message is produced:
if [[ "$recsreloaded" != "$recstransferred" ]];
If the number of records is equal, this message is produced:
echo "Number of rows written in output file is equal to number calculated using the log file: OK"
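The comparison performed by the check phase can be sketched as below. The counts are demo values; in the generated script they would be derived from the transfer log and the reload execution report.

```shell
# Minimal sketch of the check-phase comparison (counts are demo values).
recstransferred=42      # would come from the transfer log
recsreloaded=42         # would come from the reload execution report
if [ "$recsreloaded" != "$recstransferred" ]; then
  check_msg="ERROR: reloaded $recsreloaded records, expected $recstransferred"
else
  check_msg="Number of rows written in output file is equal to number calculated using the log file: OK"
fi
echo "$check_msg"
```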
Reloading Korn shell scripts for migrating z/OS files to Oracle tables
The scripts are named: loadtable-<logical file name>.ksh
They contain a DDL creation phase, a transcoding (or loading) phase and a check phase. The different phases may be launched separately.
The execution of the scripts produces an execution log in $MT_LOG/<logical file name>.log
The following variables are set at the beginning of each script:
Listing 4‑24 Reloading table script variables
f="@ (c) Metaware:load-tables-ksh.pgm. $Revision: 1.14 $null"
echo "reloading ODCSF0B into ORACLE"
export DD_ENTREE=${DD_ENTREE:-${DATA_SOURCE}/ODCSF0B}
logfile=$MT_LOG/ODCSF0B.log
reportfile=${MT_LOG}/ODCSF0B.rpt
ddlfile=${DDL}/STFILEORA/ODCSF0B.sql
[…]
 
To change the file names, set the DD_* variables before calling the script.
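The override works through the standard `${VAR:-default}` idiom visible in the listing above: a value exported before the script runs wins over the computed default. A small demonstration (the paths are illustrative):

```shell
# ${VAR:-default} idiom used by the generated scripts (demo paths).
DATA_SOURCE=/data/source
unset DD_ENTREE                                   # no override: default applies
export DD_ENTREE=${DD_ENTREE:-${DATA_SOURCE}/ODCSF0B}
echo "default : $DD_ENTREE"

DD_ENTREE=/tmp/my_transfer/ODCSF0B                # explicit override wins
export DD_ENTREE=${DD_ENTREE:-${DATA_SOURCE}/ODCSF0B}
echo "override: $DD_ENTREE"
```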
Various messages may be generated during the three execution phases of the scripts; explanations of these messages are listed in Oracle Tuxedo Application Rehosting Workbench Messages.
On normal end, a return code of 0 is returned.
Creating Oracle DDL phase
Oracle objects are created under SQLPLUS using: ${DDL}/STFILEORA/ODCSF0B.sql
sqlplus $MT_DB_LOGIN >>$logfile 2>&1 <<!EOF
WHENEVER SQLERROR EXIT 3;
start ${ddlfile}
exit
!EOF
 
On normal termination the following message is displayed:
echo "Table(s) created"
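Because of `WHENEVER SQLERROR EXIT 3`, sqlplus returns 3 to the script on a SQL error, which the script can branch on. A sketch of that branching, with `rc` set directly for the demo (in the real script it would be `$?` after the sqlplus here-document):

```shell
# Sketch: branching on the DDL phase's exit status (rc is a demo value;
# WHENEVER SQLERROR EXIT 3 makes sqlplus return 3 on SQL error).
rc=0
if [ "$rc" -eq 0 ]; then
  ddl_msg="Table(s) created"
else
  ddl_msg="DDL phase failed (sqlplus rc=$rc)"
fi
echo "$ddl_msg"
```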
Transcoding and loading phases
These steps launch the execution of the COBOL transcoding program associated with the file processed:
runb RELTABLE-ODCSF0B >> $logfile 2>&1
On normal termination the following message is displayed:
echo "File ${DD_ENTREE} successfully transcoded and reloaded into ORACLE"
Check phase
This step verifies, after the reloading, that the reloaded Oracle table contains the same number of records as were transferred from z/OS. If the number of records is different, an error message is produced. If the number of records is equal, this message is produced:
"Number of rows written in output file is equal to number calculated using the log file: OK"
Target DDL
The Oracle DDL files are generated using the -g option of the file.sh command. They are then (using the -i option) installed in:
$HOME/trf/SQL/file/<schema name>
They are named: <target file name>.ddl.
The format used is:
WHENEVER SQLERROR CONTINUE;
DROP TABLE <target_table_name> CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE <target_table_name> (
<target_column_name> <column_data_type> <attribute>[, …]
CONSTRAINT <constraint_name> PRIMARY KEY (<target_column_name>)
);
Where:
The primary key constraint name is PK_<Oracle table name>.
Listing 4‑25 DDL generation sql example
WHENEVER SQLERROR CONTINUE;
DROP TABLE CUSTOMER CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE CUSTOMER (
VS_CUSTIDENT NUMBER(6) NOT NULL,
VS_CUSTLNAME VARCHAR2(30),
VS_CUSTFNAME CHAR (20),
VS_CUSTADDRS VARCHAR2(30),
VS_CUSTCITY CHAR (20),
VS_CUSTSTATE CHAR (2),
VS_CUSTBDATE NUMBER(8),
VS_CUSTEMAIL VARCHAR2(40),
VS_CUSTPHONE NUMBER(10),
VS_FILLER VARCHAR2(100),
CONSTRAINT PK_CUSTOMER PRIMARY KEY (
VS_CUSTIDENT)
);
 
Access functions and utility programs
Access functions
These access functions are generated using the -g option of file.sh and installed in $HOME/trf/DML using the -i and -s options.
Table 4‑9 Access functions
Access function call arguments
The RM_<logical file name>.pco and ASG_<logical file name>.cbl access functions use the following variables:
The name of the secondary key is passed using the FILE-ALT-KEY-NAME variable of the MWFITECH copy file.
Listing 4‑26 LINKAGE SECTION structure
LINKAGE SECTION.
01 IO-STATUS PIC XX.
COPY MWFITECH.
* *COBOL Record Description
01 VS-ODCSF0-RECORD.
06 X-VS-CUSTIDENT.
07 VS-CUSTIDENT PIC 9(006).
06 VS-CUSTLNAME PIC X(030).
06 VS-CUSTFNAME PIC X(020).
06 VS-CUSTADDRS PIC X(030).
06 VS-CUSTCITY PIC X(020).
06 VS-CUSTSTATE PIC X(002).
06 X-VS-CUSTBDATE.
07 VS-CUSTBDATE PIC 9(008).
06 VS-CUSTBDATE-G REDEFINES VS-CUSTBDATE.
11 X-VS-CUSTBDATE-CC.
12 VS-CUSTBDATE-CC PIC 9(002).
11 X-VS-CUSTBDATE-YY.
12 VS-CUSTBDATE-YY PIC 9(002).
11 X-VS-CUSTBDATE-MM.
12 VS-CUSTBDATE-MM PIC 9(002).
11 X-VS-CUSTBDATE-DD.
12 VS-CUSTBDATE-DD PIC 9(002).
06 VS-CUSTEMAIL PIC X(040).
06 X-VS-CUSTPHONE.
07 VS-CUSTPHONE PIC 9(010).
06 VS-FILLER PIC X(100).
PROCEDURE DIVISION USING IO-STATUS
MW-FILE-TECH
VS-ODCSF0-RECORD.
 
Call arguments used
OPEN
For all OPEN operations, the FILE-CODE-F variable should contain the key-word OPEN.
The FILE-OPEN-MODE variable should contain the type of OPEN to perform as follows:
CLOSE
For CLOSE operations, the FILE-CODE-F variable should contain the key-word CLOSE.
CLOSE-LOCK
For CLOSE LOCK operations, the FILE-CODE-F variable should contain the key-word CLOSE-LOCK.
DELETE
Depending on the file access mode, the DELETE operation deletes either the current record or the one indicated by the file key.
The corresponding function code is indicated as follows:
READ
The function code depends on the file access mode and the type of read required: sequential read, read primary key or read secondary key.
READ filename1 [NEXT]
READ-NEXT => FILE-CODE-F
READ filename1
READ-KEY => FILE-CODE-F
READ filename1 NEXT
READ-NEXT => FILE-CODE-F
READ filename1
READ-KEY => FILE-CODE-F
READ filename1 PREVIOUS
READ-PREV => FILE-CODE-F
If DataName1 is a variable corresponding to the keyAltKey1
READ filename1 KEY DataName1
READ-ALT-KEY => FILE-CODE-F "AltKey1" => FILE-ALT-KEY-NAME
READ filename1
READ-REL-KEY => FILE-CODE-F "RelKeyVar" => FILE-REL-KEY
REWRITE
The function code depends on the file access mode and the type of rewrite required.
REWRITE RecName1
REWRITE-CUR => FILE-CODE-F
REWRITE RecName1
REWRITE-KEY => FILE-CODE-F
START
Whether the file is relative, indexed, with or without secondary key, the function code depends on the exact type of start.
START file1
START-EQUAL => FILE-CODE-F
START file1 KEY {EQUAL| = |EQUALS} DataName1
START-EQUAL => FILE-CODE-F
START file1 KEY {EXCEEDS| > |GREATER} DataName1
START-SUP => FILE-CODE-F
START file1 KEY {NOT LESS |GREATER OR EQUAL | NOT < | >= } DataName1
START-SUPEQ => FILE-CODE-F
START file1 KEY {< |LESS} DataName1
START-INF => FILE-CODE-F
START file1 KEY {NOT GREATER |LESS OR EQUAL | NOT > | <= } DataName1
START-INFEQ => FILE-CODE-F
START file1 KEY {EQUAL| = |EQUALS} DataName1
AltKey1 => FILE-ALT-KEY-NAME
START-ALT-EQUAL => FILE-CODE-F
START file1 KEY {EXCEEDS| > |GREATER} DataName1
AltKey1 => FILE-ALT-KEY-NAME
START-ALT-SUP => FILE-CODE-F
START file1 KEY {NOT LESS| GREATER OR EQUAL | NOT < | >=} DataName1
AltKey1 => FILE-ALT-KEY-NAME
START-ALT-SUPEQ => FILE-CODE-F
START file1 KEY {< |LESS} DataName1
AltKey1 => FILE-ALT-KEY-NAME
START-ALT-INF => FILE-CODE-F
START file1 KEY {NOT GREATER |LESS OR EQUAL | NOT > | <= } DataName1
AltKey1 => FILE-ALT-KEY-NAME
START-ALT-INFEQ => FILE-CODE-F
WRITE
The function code depends on the file access mode and the type of write required.
Copy files to be implemented
The following copy files are used by certain access functions. They should be placed in the directory <installation platform>/fixed-copy/ during the installation of the Rehosting Workbench:
Korn shell utilities
These KSH scripts are generated using the -g option of file.sh and then installed in $HOME/trf/SQL/file/<configuration name> using the -i option. When necessary, they are used by Oracle Tuxedo Application Runtime Batch.
Oracle Tuxedo Application Runtime for CICS configuration files
The desc.vsam and envfile_tux files are generated in the $HOME/trf/config/tux/ directory when VSAM files are migrated to Oracle tables. They are used by Oracle Tuxedo Application Runtime CICS.
COBOL and JCL conversion guide files
These files are generated using the -s option of the file.sh command.
This file is used by the Rehosting Workbench Cobol Converter and JCL Converter to rename object names.
Table 4‑18 Conversion file names
.rdb files
These files are created when VSAM files are converted to Oracle tables. They are used by Oracle Tuxedo Application Runtime Batch to bridge the technical differences between the z/OS file on the source platform and the corresponding Oracle table on the target platform.
The files are generated in: $HOME/trf/data
They are named: <source platform physical file name>.rdb
The files contain two lines described in the next section.
Parameters and syntax
${DATA}/<source platform physical file name> <max> <org> <form> UL_<logical file name> <asgn_in> DL_<logical file name> <asgn_out> RM_<logical file name> <target table name> ${DDL}/<configuration name>/cleantable-<target table name>.ksh ${DDL}/<configuration name>/droptable-<target table name>.ksh ${DDL}/<configuration name>/createtable-<target table name>.ksh ${DDL}/<configuration name>/ifemptytable-<target table name>.ksh ${DDL}/<configuration name>/ifexisttable-<target table name>.ksh
IDX_KEY <column name> <n m>
REL_KEY - <m>
<source platform physical file name>
UL_<logical file name>
Uploading component name used by Oracle Tuxedo Application Runtime Batch.
DL_<logical file name>
RM_<logical file name>
<target table name>
n: offset of the indexed key (in COBOL description).
m: length of the indexed key (in COBOL description).
m: length of the relative key (in COBOL description).
Example of .rdb file
The following example is generated when migrating an indexed VSAM file to an Oracle table. On the source platform, the VSAM file is named: PJ01AAA.SS.VSAM.CUSTOMER
Listing 4‑27 .rdb indexed VSAM example
${DATA}/PJ01AAA.SS.VSAM.CUSTOMER 266 IDX FIX UL_ODCSF0B ENTREE DL_ODCSF0B SORTIE RM_ODCSF0B CUSTOMER ${DDL}/STFILEORA/cleantable-ODCSF0B.ksh ${DDL}/STFILEORA/droptable-ODCSF0B.ksh ${DDL}/STFILEORA/createtable-ODCSF0B.ksh ${DDL}/STFILEORA/ifemptytable-ODCSF0B.ksh ${DDL}/STFILEORA/ifexisttable-ODCSF0B.ksh
IDX_KEY VS-CUSTIDENT 1 6
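The first line of the .rdb file is a single space-separated record, so its fields can be picked apart with ordinary shell word splitting. An illustrative parse of the leading fields of the sample line above:

```shell
# Illustrative parse of the leading fields of the sample .rdb line
# (field meanings follow the parameter list in this section).
line='${DATA}/PJ01AAA.SS.VSAM.CUSTOMER 266 IDX FIX UL_ODCSF0B ENTREE DL_ODCSF0B SORTIE RM_ODCSF0B CUSTOMER'
set -- $line                      # split the record into positional fields
reclen=$2; org=$3; form=$4; table=${10}
echo "file=$1 reclen=$reclen org=$org form=$form table=$table"
```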
 
Execution Reports
file.sh creates different execution reports depending on the options chosen. In the following examples the following command is used:
file.sh -gmi $HOME/trf STFILEORA
 
Listing 4‑28 Messages produced when using the options -g with file.sh (step 1)
#########################################################################
Control of configuration STFILEORA
#########################################################################
Control of templates
Project Templates list file is missing /home2/wkb9/param/file/file-templates.txt
OK: Use Default Templates list file
File name is /Qarefine/release/M2_L4_1/convert-data/default/file/file-templates.txt
##########################################################################
Control of Mapper
##########################################################################
COMPONENTS GENERATION
CMD : /Qarefine/release/M2_L4_1/scripts/launch file-converter -s /home2/wkb9/param/system.desc -mf /home2/wkb9/tmp/mapper-STFILEORA.re.tmp -dmf /home2/wkb9/param/file/Datamap-STFILEORA.re -td /home2/wkb9/tmp -tmps /home2/wkb9/tmp/file-templates-STFILEORA.tmp -target-sgbd oracle11 -target-os unix -varchar2 29 -print-ddl -print-dml -abort
MetaWorld starter
Loading lib: /Qarefine/release/M2_L4_1/Linux64/lib64/localext.so
(funcall LOAD-THE-SYS-AND-APPLY-DMAP-AND-MAPPER)
*File-Converter*: We are in BATCH mode
Comand line arguments: begining of analyze
recognized argument -s value: /home2/wkb9/param/system.desc
recognized argument -mf value: /home2/wkb9/tmp/mapper-STFILEORA.re.tmp
recognized argument -dmf value: /home2/wkb9/param/file/Datamap-STFILEORA.re
recognized argument -td value: /home2/wkb9/tmp
recognized argument -tmps value: /home2/wkb9/tmp/file-templates-STFILEORA.tmp
recognized argument -target-sgbd value: oracle11
recognized argument -target-os value: unix
recognized argument -varchar2 value: 29
recognized argument -print-ddl
recognized argument -print-dml
recognized argument -abort
End of Analyze
Parsing mapper file /home2/wkb9/tmp/mapper-STFILEORA.re.tmp ...
Parsing data-map file /home2/wkb9/param/file/Datamap-STFILEORA.re ...
Parsing system description file /home2/wkb9/param/system.desc ...
Warning! OS clause is absent, assuming OS is IBM
Current OS is IBM-MF
Loading /home2/wkb9/source/symtab-STFILEORA.pob at 10:19:32... done at 10:19:32
Build-Symtab-DL1 #1<a SYMTAB-DL1>
... Postanalyze-System-RPL...
sym=#2<a SYMTAB>
PostAnalyze-Common #2<a SYMTAB>
0 classes
0 classes
0 classes
0 classes
1 classes
13 classes
Point 1 !!
Point 2 !!
Parsing file /home2/wkb9/source/COPY/ODCSF0.cpy ...
*Parsed 22 lines*
Parsing file /home2/wkb9/source/COPY/MW_SYSOUT.cpy ...
*Parsed 8 lines*
Parsing file /home2/wkb9/source/COPY/ODCSFU.cpy ...
*Parsed 24 lines*
Parsing file /home2/wkb9/source/COPY/ODCSF0B.cpy ...
*Parsed 22 lines*
Point 3 !!
Point 4 !!
Point 5 !!
loading pob file /Qarefine/release/M2_L4_1/convert-data/templates/file/unloading/jcl-unload-MVS-REPRO.pgm.pob
Expanding /Qarefine/release/M2_L4_1/convert-data/templates/file/unloading/jcl-unload-MVS-REPRO.pgm ...
Writing ODCSF0B.jclunload
Writing MW-SYSOUT.jclunload
Writing ODCSFU.jclunload
Writing ODCSF0.jclunload
[…]
loading pob file /Qarefine/release/M2_L4_1/convert-data/templates/file/dml/generate-post-process.pgm.pob
Expanding /Qarefine/release/M2_L4_1/convert-data/templates/file/dml/generate-post-process.pgm ...
Writing post-process-file.sh
Parsing template file /Qarefine/release/M2_L4_1/convert-data/default/file/file-move-assignation.pgm
Expanding /Qarefine/release/M2_L4_1/convert-data/default/file/file-move-assignation.pgm ...
Writing file-move-assignation.lst
Rest in peace, Refine...
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-
Generated components are in /home2/wkb9/tmp/Template/STFILEORA
(Optionaly in /home2/wkb9/tmp/SQL/STFILEORA)
 
Listing 4‑29 Messages produced when using the options -m with file.sh (step 2)
#########################################################################
FORMATTING COBOL LINES
########################################################################
CHANGE ATTRIBUTE TO KSH or SH scripts
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-
Components are modified into /home2/wkb9/tmp directory
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-
Messages produced when using the option -i with file.sh (step 3)
INSTALL COMPONENTS INTO SPECIFIC DIRECTORY USING file-move-assignation.lst
===================================================
==_PJ01AAA.SS.VSAM.CUSTOMER_==
Copied <Templates>:ODCSF0B.jclunload to <td>/unloading/file/STFILEORA/ODCSF0B.jclunload
Copied <Templates>:loadtable-ODCSF0B.ksh to <td>/reload/file/STFILEORA/loadtable-ODCSF0B.ksh
Copied <Templates>:RELTABLE-ODCSF0B.pco to <td>/reload/file/STFILEORA/RELTABLE-ODCSF0B.pco
Copied <Templates>:ASG_ODCSF0B.cbl to <td>/DML/ASG_ODCSF0B.cbl
Copied <Templates>:RM_ODCSF0B.pco to <td>/DML/RM_ODCSF0B.pco
Copied <Templates>:DL_ODCSF0B.cbl to <td>/DML/DL_ODCSF0B.cbl
Copied <Templates>:UL_ODCSF0B.cbl to <td>/DML/UL_ODCSF0B.cbl
Copied <Templates>:PJ01AAA.SS.VSAM.CUSTOMER.rdb to <td>/data/PJ01AAA.SS.VSAM.CUSTOMER.rdb
Copied <SQL>:ODCSF0B.sql to <td>/SQL/file/STFILEORA/ODCSF0B.sql
Copied <Templates>:cleantable-ODCSF0B.ksh to <td>/SQL/file/STFILEORA/cleantable-ODCSF0B.ksh
Copied <Templates>:droptable-ODCSF0B.ksh to <td>/SQL/file/STFILEORA/droptable-ODCSF0B.ksh
Copied <Templates>:createtable-ODCSF0B.ksh to <td>/SQL/file/STFILEORA/createtable-ODCSF0B.ksh
Copied <Templates>:ifemptytable-ODCSF0B.ksh to <td>/SQL/file/STFILEORA/ifemptytable-ODCSF0B.ksh
Copied <Templates>:ifexisttable-ODCSF0B.ksh to <td>/SQL/file/STFILEORA/ifexisttable-ODCSF0B.ksh
 
 
 
===================================================
==_PJ01AAA.SS.QSAM.CUSTOMER.REPORT_==
Copied <Templates>:loadfile-MW-SYSOUT.ksh to <td>/reload/file/STFILEORA/loadfile-MW-SYSOUT.ksh
Copied <Templates>:RELFILE-MW-SYSOUT.cbl to <td>/reload/file/STFILEORA/RELFILE-MW-SYSOUT.cbl
===================================================
==_PJ01AAA.SS.QSAM.CUSTOMER.UPDATE_==
Copied <Templates>:loadfile-ODCSFU.ksh to <td>/reload/file/STFILEORA/loadfile-ODCSFU.ksh
Copied <Templates>:RELFILE-ODCSFU.cbl to <td>/reload/file/STFILEORA/RELFILE-ODCSFU.cbl
===================================================
==_PJ01AAA.SS.QSAM.CUSTOMER_==
Copied <Templates>:loadfile-ODCSF0.ksh to <td>/reload/file/STFILEORA/loadfile-ODCSF0.ksh
Copied <Templates>:RELFILE-ODCSF0.cbl to <td>/reload/file/STFILEORA/RELFILE-ODCSF0.cbl
===================================================
Copied <Templates>:close_all_files_STFILEORA.cbl to <td>/DML/close_all_files_STFILEORA.cbl
Copied <Templates>:init_all_files_STFILEORA.cbl to <td>/DML/init_all_files_STFILEORA.cbl
Copied <Templates>:reload-files.txt to <td>/reload/file/STFILEORA/reload-files.txt
Copied <fixed-components>:getfileinfo.cbl to <td>/DML/getfileinfo.cbl
Copied <fixed-components>:MWFITECH.cpy to <td>/fixed-copy/MWFITECH.cpy
Copied <fixed-components>:MW-PARAM-ERROR.cpy to <td>/fixed-copy/MW-PARAM-ERROR.cpy
Copied <fixed-components>:MW-PARAM-ERROR-VAR.cpy to <td>/fixed-copy/MW-PARAM-ERROR-VAR.cpy
Copied <fixed-components>:MW-PARAM-TRACE.cpy to <td>/fixed-copy/MW-PARAM-TRACE.cpy
Copied <fixed-components>:MW-PARAM-TRACE-VAR.cpy to <td>/fixed-copy/MW-PARAM-TRACE-VAR.cpy
Copied <fixed-components>:MW-PARAM-GETFILEINFO.cpy to <td>/fixed-copy/MW-PARAM-GETFILEINFO.cpy
Copied <fixed-components>:MW-PARAM-GETFILEINFO-VAR.cpy to <td>/fixed-copy/MW-PARAM-GETFILEINFO-VAR.cpy
Copied <fixed-components>:MW-PARAM-DML-LOCKING.cpy to <td>/fixed-copy/MW-PARAM-DML-LOCKING.cpy
Copied <fixed-components>:ERROR-SQLCODE.cpy to <td>/fixed-copy/ERROR-SQLCODE.cpy
Copied <fixed-components>:RunSqlLoader.sh to <td>/reload/bin/RunSqlLoader.sh
Copied <fixed-components>:CreateReportFromMVS.sh to <td>/reload/bin/CreateReportFromMVS.sh
===================================================
Dynamic_configuration
Copied <Templates>:File-in-table-STFILEORA to /home2/wkb9/param/dynamic-config/File-in-table-STFILEORA
Copied <Templates>:../../Conv-ctrl-STFILEORA to /home2/wkb9/param/dynamic-config/Conv-ctrl-STFILEORA
===================================================
post-process
executed <Templates>:post-process-file.sh
/home2/wkb9/param/dynamic-config/Conv-ctrl-STFILEORA treated
=====
Number of copied files: 37
Number of executed scripts: 1
Number of ignored files: 0
 
######################################################################
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-
Components are copied into /home2/wkb9/trf directory
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 
 
Detailed Processing
This section describes the Command-line syntax used by the File Convertor, and the Process Steps summary.
The processes required on the source and target platforms concern:
Command-line syntax
file.sh
Name
file.sh - generate file migration components.
Synopsis
file.sh [ [-g] [-m] [-i <installation directory>] <configuration name> | -s <installation directory> (<configuration name1>,<configuration name2>,...) ]
Description
file.sh generates the Rehosting Workbench components used to migrate z/OS files to UNIX Micro Focus files and Oracle databases.
Options
Generation options
-g <configuration name>
Triggers the generation, for the configuration indicated, of the unloading and loading components in $TMPPROJECT. This generation depends on the information found in the configuration files.
Modification options
-m <configuration name>
Makes the generated SHELL scripts executable. COBOL programs are adapted to Micro Focus COBOL fixed format. When present, the shell script described in File modifying generated components is executed.
Installation option
-i <installation directory> <configuration name>
Places the components in the installation directory. This operation uses the information located in the file_move_assignation.txt file.
Final option
-s <installation directory> (<configuration name 1>, <configuration name 2>, …)
Enables the generation of the COBOL and JCL converter configuration file. This file takes into account all of the unitary files of the project.
All these files are created in $PARAM/dynamic-config
Example
file.sh -gmi $HOME/trf FTFIL001
Unitary usage sequence
If the file.sh options are used one at a time, they should be used in the following order:
1. file.sh -g <configuration name>
2. file.sh -m <configuration name>
3. file.sh -i <installation directory> <configuration name>
4. file.sh -s <installation directory> (<configuration name 1>, <configuration name 2>, …)
Process Steps
Configuring the environments and installing the components
This section describes the preparation work on the source and target platforms.
Installing the unloading components under z/OS
The components used for the unloading (generated in $HOME/trf/unload/file) should be installed on the source z/OS platform (the generated JCL may need adapting to specific site constraints including JOB cards, library access paths and access paths to input and output files).
Installing the reloading components on the target platform
The components used for the reloading (generated in $HOME/trf/reload/file) should be installed on the target platform.
The following environment variables should be set on the target platform:
($HOME/trf/SQL/file/<configuration name>).
In addition, the following variable should be set according to the information in the Oracle Tuxedo Application Rehosting Workbench Installation Guide:
Compiling COBOL transcoding programs
The COBOL transcoding programs should be compiled using the options specified in Compiler options.
Compiling these programs requires the presence of a copy of CONVERTMW.cpy adapted to the project.
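A compile loop over the generated programs might look like the sketch below. The `cob` invocation is commented out because the exact Micro Focus compiler command and options are project-specific (see Compiler options); the demo files simply make the loop runnable.

```shell
# Hedged sketch of compiling each generated transcoding program
# (directory layout and compiler call are assumptions; demo files used).
srcdir=$(mktemp -d)
touch "$srcdir/RELFILE_ODCSF0.cbl" "$srcdir/RELFILE_ODCSFU.cbl"   # demo sources
compiled=0
for src in "$srcdir"/RELFILE_*.cbl; do
  # cob -u "$src"                 # real compile step, environment-dependent
  echo "compiling $(basename "$src")"
  compiled=$((compiled + 1))
done
```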
Unloading data
To unload each file, a JCL using the IBM IDCAMS REPRO utility is executed. The IDCAMS REPRO utility creates two files for each file:
These unloading JCLs are named <logical filename>.jclunload
A return code of 0 is sent on normal job end.
Transferring the data
The unloaded data files should be transferred between the source z/OS platform and the target UNIX/Linux platform in binary format using the file transfer tools available at the site (CFT, FTP, …).
The files transferred to the target UNIX/Linux platform should be stored in the $DATA_SOURCE directory.
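Before launching the reload scripts, it is worth confirming that each transferred file actually landed, non-empty, in $DATA_SOURCE. A minimal sketch (the temporary directory and demo file stand in for the real transfer area):

```shell
# Sketch: verify a transferred file is present and non-empty in $DATA_SOURCE
# (mktemp and the demo file stand in for the real transfer area).
DATA_SOURCE=$(mktemp -d)
printf 'binary payload' > "$DATA_SOURCE/ODCSF0"   # stand-in transferred file
if [ -s "$DATA_SOURCE/ODCSF0" ]; then
  transfer_ok=1
  echo "ODCSF0 present in \$DATA_SOURCE"
else
  transfer_ok=0
fi
```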
Reloading the data
The scripts enabling the transcoding and reloading of data are generated in the directory:
$HOME/trf/reload/file/<configuration name>
For a file-to-file conversion, the format of the script names is:
loadfile-<logical file name>.ksh
For GDG files, the formats are:
loadgdg-<logical file name>.ksh and loadgds-<logical file name>.ksh
Note:
The loadgdg-<logical file name>.ksh script enables the execution of the different loadgds-<logical file name>.ksh scripts. Each loadgds script is used to reload one unitary generation of the file (each data set within a GDG is called a generation or a Generation Data Set – GDS).
For a file-to-Oracle conversion, the format of the script names is:
loadtable-<logical file name>.ksh
Transcoding and reloading command for files and tables
Name
loadfile, loadtable - transcode and reload data to either a file or a table.
Synopsis
loadfile-<logical file name>.ksh [-t] [-l] [-c: <method>]
loadtable-<logical file name>.ksh [-t] [-l] [-c: <method>]
Options
-t
Transcode and reload the file.
-l
Transcode and reload the file (same action as -t parameter).
-c ftp:<…>:<…>
Implement the verification of the transfer (see Checking the transfers).
Transcoding and reloading command for GDG files
Name
loadgdg, loadgds - transcode and reload GDG data to files.
Synopsis
loadgdg-<logical file name>.ksh [-t] [-l] [-c: <method>]
loadgds-<logical file name>.ksh [-t] [-l] [-c: <method>]
Options
-t
Transcode the member files of the GDG.
-l
Reload the member files of the GDG using the Oracle Tuxedo Application Runtime for Batch utilities.
-c ftp:<…>:<…>
Implement the verification of the transfer (see Checking the transfers).
Checking the transfers
This check uses the following option of the loadfile-<logical file name>.ksh or loadtable-<logical file name>.ksh
-c ftp:<name of transferred physical file>:<name of FTP log under UNIX>
This option verifies, after the reloading, that the physical file transferred from z/OS and the file reloaded on the target platform contain the same number of records. This check is performed using the FTP log and the execution report of the reloading program. If the number of records is different, an error message is produced.
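One plausible way to perform this comparison for fixed-length records is to divide the byte count reported in the FTP log by the record length and compare the result with the count in the reload report. The sketch below uses demo values throughout; it is an illustration of the check, not the generated code.

```shell
# Illustrative transfer check for fixed-length records (all values are demo
# data; in the real script they would be parsed from the FTP log and report).
reclen=266
bytes_transferred=10640                         # from the FTP log
recstransferred=$((bytes_transferred / reclen)) # 10640 / 266 = 40
recsreloaded=40                                 # from the reload report
if [ "$recsreloaded" -eq "$recstransferred" ]; then
  echo "transfer check OK: $recsreloaded records"
else
  echo "transfer check FAILED: $recsreloaded reloaded, $recstransferred transferred" >&2
fi
```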
 
Copyright © 1994, 2017, Oracle and/or its affiliates. All rights reserved.