• Description of the Output Files including the Generated Objects.
• Detailed Processing including the Command-line Syntax.

By default, if the SQL commands of the DB2 DDL are prefixed by a qualifier or an authorization ID, the Rehosting Workbench uses the prefix as the name of the schema, for example, CREATE TABLE <qualifier or authorization ID>.<table name>. The schema name can also be determined by the Rehosting Workbench using the global-options clause of the System Description File.

Listing 4‑1 Schema Name Example

Table 4‑1 lists the DB2 objects that are migrated to Oracle.
Table 4‑1 DB2 Objects to Convert

This file contains all indexes associated with the table <target_table_name>. This file is not generated if no indexes were defined on the table <target_table_name>.

This file contains all constraints associated with the table <target_table_name>. This file is not generated if no constraints were defined on the table <target_table_name>.

Table 4‑2 shows all DB2 data types and their conversion to the Oracle database target.
Table 4‑2 DB2 to Oracle Data Type Conversion
Note: The ROWID data type is not converted. In most cases, a ROWID column is only created for LOB data type compliance. If the customer application uses a ROWID column implicitly, you need to manually adapt the generated TABLE-<table_name>.sql file.

Table 4‑3 shows all DB2 column properties and their conversion to the Oracle database target.
Table 4‑3 DB2 Column Property Conversion

<value> depends on the DB2 z/OS data type. For example, the SQODCSF0.ddl file contains the source of different DB2 objects. A SQODCSF0.ddl.pob file is created in the $SOURCE/DDL/pobest directory.

This file is created during cataloging; it must be up-to-date and present so that the DB2-to-Oracle Convertor can migrate DB2 objects to Oracle. See The Cataloger Symtab and Other Miscellaneous Files.

In addition to the parameters concerning the schema, one other parameter should be set in the System Description File.

Listing 4‑2 db-param.cfg Template

# the file param/rdbms/rename-objects-<schema>.txt is automatically loaded by the tool if it exists.
Table 4‑4 db-param.cfg Parameters
– <jcl_unload_utility_name> is set to "dsnuproc"

csv Limitation: When using the csv value, column data containing the hexadecimal value x0A or x0D0A (the UNIX line feed and Windows carriage return/line feed characters) cannot be correctly loaded into the Oracle database.
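The effect of this limitation can be sketched in Python: a line-oriented load treats an embedded x0A byte as a record boundary, so the row no longer matches its ";"-separated layout. The sample record below is hypothetical; real unload layouts depend on the table definition.

```python
# A hypothetical two-column csv-format record; the second column's
# data contains an embedded x0A (line feed) byte.
record = "000001;free text with\nan embedded line feed"

# A line-oriented reader sees two truncated "records" instead of one.
broken = record.splitlines()
assert broken == ["000001;free text with", "an embedded line feed"]

# The same parse on clean data yields one well-formed row.
clean = "000001;free text".splitlines()
assert clean == ["000001;free text"]
assert clean[0].split(";") == ["000001", "free text"]
```

This is why such data cannot round-trip through the text-mode csv format and must be unloaded another way.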
Note: If your DB2 data is encoded with a multiple-byte character set (MBCS), you should choose "csv" as the unload file format and "dsnuproc" as the unload utility.

These three parameters indicate the date, timestamp, and time formats used by z/OS DB2 and stored in DSNZPARM. These parameters affect the reloading operations and the COBOL date and time manipulations. They are optional and only necessary if the DB2 database contains DATE, TIME, or TIMESTAMP fields. They should be supplied using the Oracle formats.

Listing 4‑3 SQL CREATE INDEX Script Using Index Parameter

nlssort(MYCOL11, 'nls_sort=FRENCH') ASC
• By default, the tools generated by the DB2 to Oracle migration expect NLS_SORT to be set to BINARY.
• If a DB2 table contains CLOB or BLOB data types, you can only use the dsnutilb utility.
• If a DB2 table contains MBCS data type, you can only use the dsnuproc utility.
• The dsnuproc utility does not accept CLOB, BLOB, REAL, FLOAT, and DOUBLE data types when <jcl_unload_format_file> is set to binary or is not present.
• The dsnuproc utility does not accept CLOB and BLOB data types when <jcl_unload_format_file> is set to csv.

The csv value can only be used with the dsnuproc DB2 utility. The file format is similar to a CSV file with the ";" character as separator. The file created by the DB2 utility must be transferred in text mode; that is, the transfer protocol should transcode the file to the ASCII codeset. The binary value for <jcl_unload_format_file> (or omitting this optional parameter) requires the file to be transferred in image or binary mode.
WARNING: Do not use the csv value if the transfer protocol does not properly convert all COBOL characters and all characters from EBCDIC to ASCII.

These parameters are optional and only necessary if the DB2 schema contains CLOB or BLOB data types. When downloading a table with CLOB or BLOB columns, each row of such a column is written to a separate file whose name is written into the current unloading file. The value of this parameter depends on the following dataset types (see Unloading JCL: DSNUTILB):
• <data>: access paths to output files. The string <data> is hardcoded in the JCL
• <schema_name>: schema name
• <table_name>: table name
• <column_name6digit>: column name reduced to 6 characters with a digit suffix (1 for the first LOBS column of the table, 2 for the second, and so forth)
• <file_name>: member name or file name, eight characters long, automatically generated by the DB2 utility.

You need to calculate the maximum length of this string, including parenthesis and slash characters.

When present, this file is automatically executed at the end of the generation process. It is called with the <schema name> as an argument. This file should be placed in the directory indicated by the $PARAM/rdbms variable.
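The maximum-length calculation above can be sketched in Python. The concatenation pattern assumed here, <data>/<schema>.<table>.<column6+digit>(<member8>), is an illustration only; check the actual path template produced by your unloading JCL.

```python
def max_lob_path_length(data_prefix, schema_name, table_name):
    """Estimate the longest LOBS file path, assuming a hypothetical
    <data>/<schema>.<table>.<column6+digit>(<member8>) pattern."""
    column_part = 6 + 1   # column name reduced to 6 characters plus a digit suffix
    member_part = 8       # member/file name generated by the DB2 utility
    # slash, two dots, and the parentheses joining the components
    separators = len("/") + 2 * len(".") + len("()")
    return (len(data_prefix) + len(schema_name) + len(table_name)
            + column_part + member_part + separators)
```

For example, with a hypothetical <data> prefix of "HLQ.LOBS", schema "TLOB", and table "TESTCLOB", the estimate is 40 characters.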
Note: To apply the modifications to all schema names and/or all table names, place a * in the position of <schema name> and/or <DB2 table name>. See the second example below.

Listing 4‑4 Renaming File Example

Listing 4‑5 rdbms-template.txt

When required, another version of the rdbms-template.txt file can be placed in the $PARAM/rdbms directory. The use of an alternative file is signaled during the execution of rdbms.sh by the following message:

Listing 4‑6 Execution Log with Alternative Template File

This file is placed during the installation of the Rehosting Workbench; it controls the transfer of the components generated into the different installation directories. This file indicates the location of each component to copy during the installation phase of rdbms.sh, when launched using rdbms.sh -i.
• The DB2-to-Oracle Convertor has discarded the object and all associated objects. For instance, if a DB2 table has a VARBINARY column, the tool does not generate TABLE-<table_name>.sql, INDEX-<table_name>.sql, VIEW-<view_name>.sql, and so on.

• The DB2-to-Oracle Convertor has generated all components. You have to ignore or adapt the generated components listed in the warning files. For instance, if a DB2 table has a FLOAT column and you set the dsnuproc DB2 utility in db-param.cfg, the tool indicates that some components associated with this DB2 object have to be discarded. This behavior allows the generation of all components even if some objects cannot be correctly generated.

Listing 4‑8 Warning File Unsupported-<schema name>.log

Listing 4‑9 Check-Warning-<schema name>.log
Table 4‑5 Written Lines Format
• The unloading and loading components generated with the -i $HOME/trf option are placed in the following locations:
Table 4‑6 Component Locations

<table name_8>.jclunload

Location by <schema name> of the SQL scripts used to create the Oracle objects.

Location by <schema name> of the COBOL transcoding programs.
Note: These programs can be absent depending on the value of <jcl_unload_format_file>.

Location by <schema name> of the CTL files used by SQL*LOADER.

Location by <schema name> of the reloading Korn shell scripts.
Notes: <table name_8>: If the table name is shorter or longer than eight characters, the Rehosting Workbench DB2-to-Oracle Convertor assigns an eight-character name for the z/OS JCL, as close as possible to the original. The renaming process maintains the uniqueness of each table name.

<target table name>: the table name in the Oracle database.
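The renaming constraint can be illustrated with a small sketch. This is an illustration of the rule (truncate to eight characters, keep the result unique), not the Workbench's actual algorithm; the table names used are hypothetical.

```python
def make_jcl_names(table_names):
    """Illustrative only: shorten each table name to 8 characters
    while keeping the generated names unique (not the actual
    Rehosting Workbench algorithm)."""
    seen, result = set(), {}
    for name in table_names:
        candidate = name[:8]
        suffix = 0
        while candidate in seen:        # collision: append a counter
            suffix += 1
            tail = str(suffix)
            candidate = name[:8 - len(tail)] + tail
        seen.add(candidate)
        result[name] = candidate
    return result

names = make_jcl_names(["CUSTOMER_ORDERS", "CUSTOMER_ORDERS_HIST", "ODCSF0"])
# Two long names truncate to the same prefix, so the second gets a suffix;
# the short name passes through unchanged.
```

Here the result maps CUSTOMER_ORDERS to CUSTOMER, CUSTOMER_ORDERS_HIST to CUSTOME1, and ODCSF0 to itself, so every generated JCL name stays unique.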
Table 4‑7 Datamap File Name Parameters

Listing 4‑10 Datamap File
Table 4‑8 Datamap File Parameters
Table 4‑9 Mapper File Name Parameters

Listing 4‑11 Mapper File Example

Listing 4‑12 Mapper File

Each file directive is used to describe a table; each field directive describes a column of the table. The field directives <target_column_name>-IND and <target_column_name>-LEN are optionally used to modify certain options in the technical fields added to the unloaded file by the DB2 unloading utility (see MVS Utility Parameters). Table 4‑10 lists the field directives.
Table 4‑10 Field Directive Parameters
• For a CLOB or BLOB column, a FNAME field is added. The COBOL field length is given by the <lobs_fname_length> parameter option (see LOBS Parameters).
Table 4‑11 Copy File Name Parameters

Listing 4‑13 Copy File: SQL Code

For this example of DB2 DDL, the copy file generated by the Rehosting Workbench when using the parameter rdbms:lobs_fname_length:50 (see LOBS Parameters) is:

Listing 4‑14 Copy File: Generation Sample

Listing 4‑15 Copy File
Table 4‑12 Copy File Parameters

The JCLs used to unload the DB2 tables are generated using the -g option of the rdbms.sh command. They are then (using the -i option) installed in:

Each JCL performs two steps and unloads one DB2 table using the z/OS DSNTIAUL utility by default. The JCL return code is equal to 0 or 4 for a normal termination.
The JCLs are named: <table name>.jclunload
Note: The .jclunload extension should be deleted for execution under z/OS.

The default DB2 unloading utility is DSNTIAUL. You can choose another unloading utility by adding the parameter in the db-param.cfg file. In case of CLOB or BLOB columns, you should choose the dsnutilb value.
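The utility constraints stated earlier (LOB columns require dsnutilb; MBCS data requires dsnuproc with the csv format; DSNTIAUL otherwise by default) can be summarized in a small decision sketch. This is an illustration of the documented rules, not part of the Workbench.

```python
def choose_unload_utility(column_types, mbcs_data=False):
    """Sketch of the unload-utility constraints described in this
    chapter (illustrative only): LOB columns can only be unloaded
    with dsnutilb, MBCS data only with dsnuproc in csv format."""
    has_lobs = any(t in ("CLOB", "BLOB") for t in column_types)
    if has_lobs and mbcs_data:
        # Per the stated rules no single utility covers both cases.
        raise ValueError("no single utility supports both LOBs and MBCS data")
    if has_lobs:
        return "dsnutilb"
    if mbcs_data:
        return "dsnuproc"   # with <jcl_unload_format_file> set to csv
    return "DSNTIAUL"       # default unloading utility
```

For example, a table with a CLOB column selects dsnutilb, while a plain CHAR/INTEGER table keeps the DSNTIAUL default.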
• JOB cards: <cardjob_parameter_<digit>>
• library access paths: <db2_runlib_load_librairy>
• data source name: <ds_db2>
• plan name: <plan_dsntiaul>
Listing 4‑16 Unload DSNTIAUL JCL Example
• JOB cards: <cardjob_parameter_<digit>>
• library access paths: <db2_runlib_load_librairy>
• data source name: <ds_db2>
Listing 4‑18 shows the content of a JCL when these parameters are set in the db-param.cfg:
• JOB cards: <cardjob_parameter_<digit>>
• library access paths: <db2_runlib_load_librairy>
• data source name: <ds_db2>

You can use DSNUTILB even if your table does not contain any LOBS columns. Listing 4‑19 shows the content of a JCL when these parameters are set in the db-param.cfg:

The COBOL transcoding programs are generated using the -g option of the rdbms.sh command, and installed in $HOME/trf/reload/rdbms/<schema name>/src using the -i option. They are generated when the <jcl_unload_format_file> option is not used or is set to "binary" (see MVS Utility Parameters).

For each DB2 table, the COBOL programs are generated and named: MOD_<table name>.cbl

For each DB2 table with a CLOB column, the rdbms.sh command generates a secondary COBOL program. The secondary programs are named: CLOB_<table name>_<column_name>.cbl

All these programs should be compiled using the target COBOL compilation options, which are documented in Compiler Options. The compilation of these programs requires the presence of a CONVERTMW.cpy copy file adapted to the project.

These programs read a file on input and write a sequential file with fixed-length records on output. The output file is read by the SQL*LOADER utility. For each CLOB and BLOB column, these programs write a line sequential file on output. This file contains the list of LOBS files written by the DB2 unloading utility (one file per column per row). The reloading Korn shell script processes this line sequential file.

Listing 4‑21 FILE CONTROL Section - Addendum for Transcoding Programs MOD_<table_name> in Case of LOBS Columns

String columns (CHAR, VARCHAR) are converted from EBCDIC to ASCII using the CONVERTMW.cpy copy file. This conversion applies to SBCS data only. If the data contains MBCS characters, you need to perform the conversion using transfer tools, such as FTP. Binary columns (SMALLINT, INTEGER) are converted depending on the target platform.

These programs read the list of CLOB files written by the MOD_<table_name> programs.
They read all CLOB files (written by the DB2 unloading utility, one file per CLOB column per row) on input and write sequential files of the same size on output. The output files are read by the SQL*LOADER utility during the loading process. CLOB columns are converted from EBCDIC to ASCII using the CONVERTMW.cpy copy file.

The reloading Korn shell scripts are generated using the -g option of the rdbms.sh command. They are then (using the -i option) installed in:

The scripts are named: loadrdbms-<table name>.ksh

The execution of the scripts produces an execution log in $MT_LOG/<table name>.log

Listing 4‑24 shows the variables set at the beginning of each script.
Note: Listing 4‑24 Reloading Script Variables (when the jcl_unload_format_file option is not used or is set to 'binary')

export DD_LONGTEXT_FNAME=${DD_LONGTEXT_FNAME:-${DATA_TRANSCODE}/TLOB.TESTCLOB/LONGTEXT_FNAME.loblst}

To change the LOBS default parameters, set the LOBDIR, DD_ENTREE_DIR, and DD_SORTIE_DIR variables before calling the script. All binary LOBS files created by the unloading utility have to be transferred in binary mode into the directory ${DD_ENTREE_DIR}/${LOBDIR}.

Various messages may be generated during the three execution phases of the scripts. These messages are listed in Oracle Tuxedo Application Rehosting Workbench Messages.

"runb" is a part of Oracle Tuxedo Application Runtime Batch (see the Oracle Tuxedo Application Runtime Batch Reference Guide).
Note: If the table contains CLOB or BLOB columns and you want to launch this step only once (that is, without launching the next CLOB or BLOB step), use the -T option of the rdbms.sh command.

The program also displays the dataset type used for LOBS files during the unloading step (it depends on the <jcl_unload_lob_file_system> parameter).

This list of BLOB files is created by the MOD_<table_name> COBOL program above (see the DD_<column_name>_FNAME variable).
Note: If you want to launch this step only once, use the rdbms.sh command -T option (the COBOL program MOD_<table_name> should be launched at least once).

This COBOL program transcodes all CLOB files. The list of CLOB files is created by the MOD_<table_name> COBOL program above (see the DD_<column_name>_FNAME variable). runb is a part of Oracle Tuxedo Application Runtime Batch. For more information, see the Oracle Tuxedo Application Runtime Batch Reference Guide.
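Conceptually, the EBCDIC-to-ASCII conversion that the MOD_ and CLOB_ transcoding programs perform through CONVERTMW.cpy is a code page translation of character data. A minimal Python sketch, using IBM code page 037 as an example EBCDIC code page (the project's actual conversion tables live in CONVERTMW.cpy and are project-specific):

```python
def transcode_char_column(ebcdic_bytes, source_codepage="cp037"):
    """Translate a CHAR/VARCHAR or CLOB value from EBCDIC to ASCII.
    cp037 is used here only as an example EBCDIC code page; the real
    conversion table is the project's CONVERTMW.cpy copy file."""
    return ebcdic_bytes.decode(source_codepage).encode("ascii")

# 'HELLO' encoded in EBCDIC code page 037
ebcdic_value = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6])
assert transcode_char_column(ebcdic_value) == b"HELLO"
```

Note that, as stated above, this kind of translation is valid for SBCS data only; MBCS data must instead be converted by the transfer tools.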
Note: If you want to launch this step only once, use the rdbms.sh command -T option (the COBOL program MOD_<table_name> should be launched at least once).

This step loads the Oracle table using the SQL*LOADER utility.

This step verifies that the reloaded Oracle table contains the same number of records as the equivalent table unloaded from z/OS by the DB2 unloading utility. If the number of records is different, an error message is produced:

The Oracle DDL is generated using the -c or -C option of the rdbms.sh command. It is then (using the -i option) installed in:

Listing 4‑27 Oracle Table and Column Generation

Listing 4‑28 Oracle Index Generation

The parameters <nls_function> and <nls_attributes> are optional. If the db-param parameters rdbms:indexsort and rdbms:indexlang are set in the db-param.cfg file, the Rehosting Workbench generates these options in the CREATE INDEX command. The following samples show an index creation with the rdbms:indexsort=french parameter.

Listing 4‑31 Oracle Constraint Generation

Listing 4‑32 Oracle Comment Generation

Listing 4‑33 Oracle View Generation

Listing 4‑34 Oracle Sequence Generation

Listing 4‑35 Oracle Synonym Generation
• Trigger associated with table <target_table_name>.

Listing 4‑36 Oracle Sequence and Trigger Generation

This file is generated using the -c or -C option of the rdbms.sh command. It is then (using the -i option) installed in:

It is named <schema name>.lst.

Listing 4‑37 Sample COBOL Conversion Guide

They show all the SQL*Loader data types and the file structures managed by this Oracle loader utility. They are generated using different parameters (see db-param.cfg).
Note: Listing 4‑39 shows the content of a CTL file when the parameters below are set in the db-param.cfg:

Listing 4‑40 shows the content of a CTL file when the parameters below are set in the db-param.cfg:

Listing 4‑40 CTL File Example With CLOB Column

Listing 4‑41 DDL Translator Log File — Header

Listing 4‑42 DDL Translator Log File — Parameters

Listing 4‑43 DDL Translator Log File — Database Parameters

Listing 4‑44 DDL Translator Log File — Schema Translation

Listing 4‑45 DDL Translator Log File — DDL Analysis

Listing 4‑46 DDL Translator Log File — Output Files

Listing 4‑47 DDL Translator Log File — Translation Issues

Listing 4‑48 DDL Translator Log File — Trailer

rdbms.sh creates different execution reports depending on the options chosen. In the following examples, the following command is used:

Listing 4‑52 Messages Produced by the -r Option of rdbms.sh

Listing 4‑53 Messages Produced by the -i Option of rdbms.sh

This section describes the command-line syntax used by the DB2-to-Oracle Convertor, the Process Steps summary, and the Conversion of DB2 Data Types.

rdbms.sh - generate DB2 migration components.

rdbms.sh generates the Rehosting Workbench components used to migrate z/OS DB2 databases to UNIX/Linux Oracle databases.

Triggers the generation in $TMPPROJECT, for the schema indicated, of the following components: Oracle DDL, CTL files for SQL*LOADER, the XML file used by the COBOL Converter, and the configuration files (mapper file and Datamap file).

This option has the same result as the -C option, except that the process aborts on any error or warning.

Triggers the generation in $TMPPROJECT, for the schema indicated, of the unloading and loading components. This generation depends on the information found in the configuration files.

Makes the generated shell scripts executable. COBOL programs are adapted to the target COBOL fixed format.
When present, the shell script belonging to a project (see File Modifying Generated Components) that modifies the generated sources is executed.

Removes the schema name from generated objects (Oracle DDL, CTL file, KSH). When this option is used, the name of the schema can also be removed from COBOL components by using the remove-sql-qualifier clause located in the COBOL conversion configuration file used when converting COBOL components.

All these files are created in $PARAM/dynamic-config.

If the rdbms.sh options are used one at a time, they should be used in the following order:
1.
2.
3.
4.
5. The components used for the reloading (generated in $HOME/trf/reload/rdbms) should be installed on the target platform.
Table 4‑13 Target Platform Environment Variables

The location of the generic reload and control scripts ($HOME/trf/reload/bin).

Directory containing the <table name>.ctl files used by SQL*LOADER ($HOME/trf/reload/rdbms/<schema name>/ctl).
Note: This directory is not used when the value of <jcl_unload_format_file> is "csv". The MVS file created by the DB2 utility has already been converted by the transfer protocol.

Set according to the instructions in the Index, Sort Parameters section and the Oracle Database Globalization Support Guide.

Set according to the instructions in the Date, Time Parameters section and the Oracle Database Globalization Support Guide.

Oracle Tuxedo Application Rehosting Workbench has to reproduce on the Oracle database the same behavior found on the DB2/zOS platform. In order to handle certain DB2/zOS-specific cases, the DB2-to-Oracle Convertor includes Oracle packages named MWDB2ORA and MWDB2ORA_CONST, which contain the Oracle functions and the project constants respectively. These functions are used for managing all DATE, TIME, and TIMESTAMP features. They are added to the COBOL programs by the COBOL Converter SQL Rules.
1.
2. Modify the MWDB2ORA_CONST.sql package to adapt the format of the constant values.
3. Install the package under SQLPLUS:

If the DB2 table contains CLOB or BLOB data types, the dsnutilb utility creates one file per LOBS column and row in the DB2 table:
• a LOBS data file (the location depends on the <jcl_unload_lob_file_system> parameter)

These unloading JCLs are named <table name>.jclunload

If the data files are in Single-Byte Character Set (SBCS) encoding, the transfer modes listed in Table 4‑14 can be used with the file transfer tools according to the file types and db-param.cfg parameters:
Table 4‑14 Transfer Mode Used for File Transfer Tools

The files transferred to the target UNIX/Linux platform should be stored in the $DATA_SOURCE directory.
2. Run the "tso ftp" command to connect to the target platform.

file_system_codepage and network_transfer_codepage are the Mainframe code pages corresponding to your target MBCS. For example, for the Simplified Chinese character set, choose IBM-5488 as file_system_codepage and IBM-1388 or UTF-8 as network_transfer_codepage.
4. Run the "ftp put" command to transfer the data file to the target platform.
5. Set the NLS_LANG environment parameter and reload the data file without transcoding. For example: export NLS_LANG="SIMPLIFIED CHINESE_CHINA.XXXX"

The scripts enabling the transcoding (except when <jcl_unload_format_file> is set to "csv") and the reloading of data are generated in the directory:

The reloading script uses the SQL*LDR Oracle utility. Because this utility can access the local ORACLE server only, this variable should not contain the @<oracle_sid> string, especially for this reloading step.

loadrdbms – transcode and reload data.

This check uses the following option of the loadrdbms-<table name>.ksh