

Oracle Tuxedo Application Rehosting Workbench DB2 to Oracle Converter

Overview
Purpose
This chapter describes how to install, implement, and configure the Rehosting Workbench DB2-to-Oracle converter in order to migrate files from a source DB2 database to a target Oracle database.
Skills
When migrating DB2, a good knowledge of COBOL, JCL, z/OS utilities, DB2 and Oracle databases as well as UNIX/Linux Korn Shell is required.
See Also
For a comprehensive view of the migration process, see the Data Conversion and COBOL Conversion chapters of the Oracle Tuxedo Application Rehosting Workbench Reference Guide, as well as the COBOL Converter chapter of this guide.
Organization
Migrating data files is described in the following sections:
The DB2-to-Oracle Migration Process
File Organizations Processed
When migrating from a z/OS DB2 source platform to an Oracle UNIX target platform, the first question to ask is which tables should be migrated. When not all DB2 tables are to be migrated, a DB2 DDL representing the subset of objects to be migrated should be built.
Migration Process Steps
The principal steps in the DB2-to-Oracle migration process are explained in detail in the rest of this chapter.
Interaction With Other Oracle Tuxedo Application Rehosting Workbench Tools
The DB2-to-Oracle migration is dependent on the results of the Cataloger; the DB2-to-Oracle migration impacts the COBOL conversion and should be completed before beginning the program conversion work.
Reengineering Rules to Implement
This section describes the reengineering rules applied by the Rehosting Workbench when migrating data from a DB2 database to an Oracle database.
Migration Rules Applied
The DB2 objects included in the migration to Oracle are listed in Creating the Generated Oracle Objects.
Migrated DB2 objects keep their names when migrated to Oracle except for the application of the Rehosting Workbench renaming rules (see Preparing and Implementing Renaming Rules).
DB2-to-Oracle Data Type Migration Rules
 
DB2-to-Oracle Column Property Migration Rules
A column property can change the behavior of an application program.
The following table shows all of the DB2 column properties and how they are converted for the target Oracle database.
 
<value> depends on DB2/z/OS data type.
Preparing and Implementing Renaming Rules
Oracle Tuxedo Application Rehosting Workbench permits the modification of the different names in the DDL source file (table name, column name).
Renaming rules can be implemented for the following cases:
Note:
Renaming rules should be placed in a file named rename-objects-<schema name>.txt. This file should be placed in the directory indicated by the $PARAM/rdbms parameter.
Renaming rules have the following format:
table;<schema name>;<DB2 table name>;<Oracle table name>
column;<schema name>;<DB2 table name>;<DB2 column name>;<Oracle column name>
Comments can be added as follows: % Text.
Example:
% Modification applied to the AUALPH0T table
column;AUANPR0U;AUALPH0T;NUM_ALPHA;MW_NUM_ALPHA
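The mechanics of such a rule file can be sketched outside the Workbench. The awk fragment below is an illustration only (the Workbench applies the rules itself); it reuses the AUALPH0T example above, and the sample DDL file is hypothetical.

```shell
# Illustration only: apply a column renaming rule of the
# rename-objects-<schema>.txt format to a sample DDL with awk.
cat > rename-objects-AUANPR0U.txt <<'EOF'
% Modification applied to the AUALPH0T table
column;AUANPR0U;AUALPH0T;NUM_ALPHA;MW_NUM_ALPHA
EOF
cat > aualph0t.sql <<'EOF'
CREATE TABLE AUALPH0T (NUM_ALPHA CHAR(10) NOT NULL);
EOF
awk -F';' '
  FNR == NR { if ($1 == "column") map[$4] = $5; next }  # load rules; % comments ignored
  { for (old in map) gsub(old, map[old]); print }
' rename-objects-AUANPR0U.txt aualph0t.sql > aualph0t.renamed.sql
cat aualph0t.renamed.sql
```

The column NUM_ALPHA is rewritten as MW_NUM_ALPHA, matching the renaming rule.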
Preparing and Implementing LOBS Data Type Migration
Oracle Tuxedo Application Rehosting Workbench permits the download of CLOB and BLOB data types. The DB2 unloading utility downloads each row of CLOB or BLOB columns into a separate file (PDS or HFS dataset type). This utility (DSNUTILB) downloads the data of all columns and the NULL technical flags into a unique MVS member file, except for CLOB or BLOB columns, which are replaced by the file name of the separate CLOB or BLOB file.
Depending on your MVS system configuration, a PDS dataset type may not accommodate all of the files; you may need to choose another dataset type for downloading CLOB or BLOB columns.
Based on those two constraints, you should set the correct parameters in the db-param.cfg configuration file (see Implementing the Configuration Files).
Preparing and Implementing MBCS Data Migration
Oracle Tuxedo Application Rehosting Workbench provides transcoding for single-byte data. However, if your DB2 data contains MBCS characters, you should choose the DSNUPROC unloading utility and set the csv data format; the MBCS transcoding is then done by the transfer tools.
Based on this constraint, you must set the correct parameters in the db-param.cfg configuration file (see Implementing the Configuration Files).
Example of a Migration of DB2 Objects
In this example, the DB2 DDL contains a table named ODCSF0 with a primary key and a unique index named XCUSTIDEN:
Listing 4‑1 DDL Example Before Migration
DROP TABLE ODCSF0;
COMMIT;
CREATE TABLE ODCSF0
(CUSTIDENT DECIMAL(6, 0) NOT NULL,
CUSTLNAME CHAR(030) NOT NULL,
CUSTFNAME CHAR(020) NOT NULL,
CUSTADDRS CHAR(030) NOT NULL,
CUSTCITY CHAR(020) NOT NULL,
CUSTSTATE CHAR(002) NOT NULL,
CUSTBDATE DATE NOT NULL,
CUSTEMAIL CHAR(040) NOT NULL,
CUSTPHONE CHAR(010) NOT NULL,
PRIMARY KEY(CUSTIDENT))
IN DBPJ01A.TSPJ01A
CCSID EBCDIC;
COMMIT;
CREATE UNIQUE INDEX XCUSTIDEN
ON ODCSF0
(CUSTIDENT ASC) USING STOGROUP SGPJ01A;
COMMIT;
 
After applying the migration rules, and without implementing any renaming rules, the following Oracle objects are obtained:
Listing 4‑2 Oracle Table Example After Migration
WHENEVER SQLERROR CONTINUE;
DROP TABLE ODCSF0 CASCADE CONSTRAINTS;
WHENEVER SQLERROR EXIT 3;
CREATE TABLE ODCSF0 (
CUSTIDENT NUMBER(6) NOT NULL,
CUSTLNAME CHAR(30) NOT NULL,
CUSTFNAME CHAR(20) NOT NULL,
CUSTADDRS CHAR(30) NOT NULL,
CUSTCITY CHAR(20) NOT NULL,
CUSTSTATE CHAR(2) NOT NULL,
CUSTBDATE DATE NOT NULL,
CUSTEMAIL CHAR(40) NOT NULL,
CUSTPHONE CHAR(10) NOT NULL);
 
Listing 4‑3 Oracle Index Example After Migration
WHENEVER SQLERROR CONTINUE;
DROP INDEX XCUSTIDEN;
WHENEVER SQLERROR EXIT 3;
CREATE UNIQUE INDEX XCUSTIDEN ON ODCSF0
(
CUSTIDENT ASC
);
 
Listing 4‑4 Oracle Constraint Example After Migration
WHENEVER SQLERROR CONTINUE;
ALTER TABLE ODCSF0 DROP CONSTRAINT CONSTRAINT_01;
WHENEVER SQLERROR EXIT 3;
ALTER TABLE ODCSF0 ADD
CONSTRAINT CONSTRAINT_01 PRIMARY KEY (CUSTIDENT);
 
Preparing the Environment
This section describes the tasks to perform before generating the components to be used to migrate the DB2 data to Oracle.
Implementing the Cataloging of the DB2 DDL Source Files
The DB2 DDL source files to be migrated are identified when preparing the cataloging operations. During the migration process, all valid DB2 syntax is accepted, although only the SQL CREATE command is handled and migrated to Oracle.
system.desc File Parameters
For a DB2-to-Oracle migration, a parameter must be set in the system.desc System Description File used by all of the Rehosting Workbench tools. This parameter indicates the version of the RDBMS to migrate.
Schemas
A schema should consist of a coherent set of objects (for example there should be no CREATE INDEX for a table that does not exist in the schema).
By default, if the SQL commands of the DB2 DDL are prefixed by a qualifier or an authorization ID, the prefix is used by the Rehosting Workbench as the name of the schema—for example, CREATE TABLE <qualifier or authorization ID>.table name.
The schema name can also be determined by the Rehosting Workbench using the global-options clause of the system.desc file.
For example:
system STDB2ORA root ".."
global-options
catalog="..",
sql-schema=<schema name>.
The schema name can also be determined for each DDL directory by the Rehosting Workbench using the directory options clause of the system.desc file. See the options-clause section of the Cataloger chapter.
directory "DDL" type SQL-SCRIPT
files "*.sql"
options SQL-Schema = "<schema name>".
 
Implementing the Configuration Files
Only one file, db-param.cfg, needs to be placed in the Rehosting Workbench file structure as described by $PARAM.
Two other configuration files are automatically generated in the file structure during the installation of the Rehosting Workbench. If specific versions of these files are required, they should be placed in the $PARAM/rdbms file structure.
Initializing Environment Variables
Before executing the Rehosting Workbench, set the following environment variables:
TMPPROJECT: the location for storing temporary objects generated by the process (for example, export TMPPROJECT=$HOME/tmp). You should regularly clean this directory.
PARAM: the location of the configuration files.
Generation Parameters
Listing 4‑5 Example db-param.cfg File
#
# This configuration file is used by FILE & RDBMS converter
# Lines beginning by "#" are ignored
# write information in lower case
#
# common parameters for FILE and RDBMS
# source information is written into system descriptor file (OS, DBMS=,
# DBMS-VERSION=)
target_rdbms_name:oracle
target_rdbms_version:11
target_os:unix
# optional parameter
target_cobol:cobol_mf
#
# specific parameters for FILE to RDBMS conversion
file:char_limit_until_varchar:29
# specific parameters for RDBMS conversion
rdbms:date_format:YYYY/MM/DD
rdbms:timestamp_format:YYYY/MM/DD HH24 MI SS FF6
rdbms:time_format:HH24 MI SS
rdbms:lobs_fname_length:75
rdbms:jcl_unload_lob_file_system:pds
rdbms:jcl_unload_utility_name:dsnutilb
#rdbms:jcl_unload_format_file:csv
# rename object files
# the file param/rdbms/rename-objects-<schema>.txt is automatically loaded by # the tool if it exists.
 
Only the parameters target_<xxxxx> and rdbms:<xxxxx> need to be adapted.
Mandatory Parameters
target_rdbms_name: name of the target RDBMS.
target_rdbms_version: version of the target RDBMS.
target_os: name of the target operating system.
Optional Parameters
target_cobol:cobol_mf
Name of the COBOL language. Accepted values are “cobol_mf” (default value) and “cobol_it”.
In this example, language is COBOL Microfocus.
Optional Parameters in Case of DATE, TIME and TIMESTAMP Data Types
The following rdbms parameters indicate the date, timestamp, and time formats used by z/OS DB2 and stored in DSNZPARM:
rdbms:date_format:YYYY/MM/DD
rdbms:timestamp_format:YYYY/MM/DD HH24 MI SS FF6
rdbms:time_format:HH24 MI SS
These parameters impact the reloading operations and COBOL date and time manipulations. They are optional and only necessary if the DB2 database contains DATE, TIME, or TIMESTAMP fields.
WARNING:
Optional Parameters in Case of CLOB or BLOB Data Types
The following rdbms parameters are optional and only necessary if the DB2 schema contains CLOB or BLOB data types.
WARNING:
The number of member files that can be created in a PDS is limited. Because the DB2 unloading utility creates a new member file for each CLOB/BLOB column and row, the number of LOB files may exceed the maximum allowed on a PDS dataset type; in that case, you need to choose the HFS dataset type. Contact your DB2 MVS administrator for more help. By default, use “pds”.
You need to calculate the maximum length of the CLOB or BLOB file name as written by the DB2 unloading JCL in the table data file:
If the target MVS dataset name is “MIGR.SCH1.TAB1.COLUMN1” (22 characters), the maximum length of the string created by the JCL is 32: 22 + 2 (parenthesis characters) + 8 (member name).
If the target MVS directory name is “/LOB/SCHEMA2/TABLE2/SECOND2” (27 characters), the maximum length of the string created by the JCL is 36: 27 + 1 (slash character) + 8 (file name).
Note:
You should set the rdbms:jcl_unload_utility_name parameter to “dsnutilb”.
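The length arithmetic above can be checked with plain shell arithmetic; the two values below reproduce the PDS and HFS examples from the text.

```shell
# Reproduce the rdbms:lobs_fname_length arithmetic for the two examples above.
pds_dsn="MIGR.SCH1.TAB1.COLUMN1"
pds_len=$(( ${#pds_dsn} + 2 + 8 ))   # + "(" and ")" + 8-character member name
echo "pds: $pds_len"                 # 22 + 10 = 32

hfs_dir="/LOB/SCHEMA2/TABLE2/SECOND2"
hfs_len=$(( ${#hfs_dir} + 1 + 8 ))   # + "/" + 8-character file name
echo "hfs: $hfs_len"                 # 27 + 9 = 36
```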
Optional Parameters for JCL Unloading Utility
The following parameters are optional:
You can also change the value depending on which DB2 unloading utility is present on the MVS.
Note:
The second parameter can be set to “csv” only if the jcl_unload_utility_name is set to “dsnuproc”.
If the database contains MBCS characters, you should choose "dsnuproc" as the unloading utility and "csv" as the unloading format.
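Assuming the sample db-param.cfg shown earlier, an MBCS configuration would change the two unloading parameters as follows (a sketch, not a mandatory setting):

```
# MBCS data: unload with DSNUPROC in csv format
rdbms:jcl_unload_utility_name:dsnuproc
rdbms:jcl_unload_format_file:csv
```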
Generating the Components
To generate the components used to migrate data from DB2 databases to Oracle databases, the Rehosting Workbench uses the rdbms.sh command. This section describes the command.
rdbms.sh
Name
rdbms.sh — Generate DB2 to Oracle database migration components.
Synopsis
rdbms.sh [ [-c|-C] [-g] [-m] [-r] [-i <installation directory>] <schema name> ] [ -s <installation directory> (<schema name>,...) ]
Description
rdbms.sh generates the Rehosting Workbench components used to migrate z/OS DB2 databases to UNIX Oracle databases.
Options
Generation Options
-C <schema name>
The following components are generated in $TMPPROJECT: the Oracle DDL, the SQL*LOADER CTL files, the XML file used by the COBOL converter, and the configuration files (mapper.re and Datamap.re). If an error or warning is encountered, the process does not abort.
See Executing the Transcoding and Reloading Scripts for information about the SQL scripts created during the generation operation.
-c <schema name>
This option has the same result as the -C option except the process will abort if an error or warning is generated.
-g <schema name>
The unloading and loading components are generated in $TMPPROJECT using the information provided by the configuration files. You should run rdbms.sh with the -C or -c option before using this option.
Modification Options
-m <schema name>
Makes the generated shell scripts executable. The COBOL programs are adapted to the target COBOL fixed format. When present, the shell script that modifies the generated source is executed.
-r <schema name>
Removes the schema name from the generated objects (create table, table name, CTL file for SQL*LOADER, KSH). When this option is used, the name of the schema can also be removed from the COBOL components by using the option: sql-remove-schema-qualifier located in the config-cobol file (COBOL conversion configuration file) used when converting the COBOL components.
Installation Option
-i <installation directory> <schema name>
Places the components in the installation directory. This operation uses the information located in the rdbms-move-assignation.pgm file.
Generate Configuration Files for COBOL Conversion
-s <installation directory> (<schema name>,...)
Enables the generation of the COBOL converter configuration file. This file takes all of the unitary XML files of the project. All these files are created in $PARAM/dynamic-config.
Example: rdbms-conv.txt rdbms-conv-PJ01DB2.xml
Example
rdbms.sh -Cgrmi $HOME/trf PJ01DB2
Using the Make Utility
Make is a UNIX utility intended to automate and optimize the construction of targets (files or actions).
You should have a descriptor file named makefile in the source directory in which all operations are implemented (a makefile is prepared in the source directory during the initialization of a project).
The next two sections describe configuring a make file and how to use the Rehosting Workbench DB2-To-Oracle Converter functions with a make file.
Configuring a Make File
Version.mk
The version.mk configuration file in $PARAM is used to set the variables and parameters required by the make utility.
In version.mk specify where each type of component is installed and their extensions, as well as the versions of the different tools to be used. This file also describes how the log files are organized.
The following general variables should be set at the beginning of migration process in the version.mk file:
In addition, the RDBMS_SCHEMAS variable is specific to the DB2 migration; it indicates the different schemas to process.
This configuration should be complete before using the make file.
Make File Contents
The contents of the makefile summarize the tasks to be performed:
A makefile and a version.mk file are provided with the Rehosting Workbench Simple Application.
Using a Makefile With the Rehosting Workbench DB2-To-Oracle Converter
The make RdbmsConvert command can be used to launch the Rehosting Workbench DB2-To-Oracle Converter. It enables the generation of the components required to migrate a DB2 database to Oracle.
The make file launches the rdbms.sh tool with the -C, -g, -r, -m and -i options, for all schemas contained in the RDBMS_SCHEMAS variable.
Locations of Generated Files
The unloading and loading components generated with the -i $HOME/trf option are placed in the following locations:
 
$HOME/trf/reload/rdbms/<schema name>/src
$HOME/trf/reload/rdbms/<schema name>/ctl
$HOME/trf/reload/rdbms/<schema name>/ksh
$TMPPROJECT/outputs/<schema name>
Modifying Generated Components
The generated components may be modified using a project’s own scripts. These scripts (sed, awk, perl, …) should be placed in:
$PARAM/rdbms/rdbms-modif-source.sh
When present, this file will be automatically executed at the end of the generation process. It will be called using the <schema name> as an argument.
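As a sketch of what such a script might do (the header stamp and the sample directory tree below are purely hypothetical; the Workbench only guarantees that the script is called with the schema name as its argument):

```shell
#!/bin/ksh
# Hypothetical rdbms-modif-source.sh sketch. The sample tree below stands in
# for the real $HOME/trf output so the script can be tried in isolation.
schema="PJ01DB2"
sqldir=$(mktemp -d)/SQL/rdbms/$schema
mkdir -p "$sqldir"
echo "CREATE TABLE ODCSF0 (CUSTIDENT NUMBER(6) NOT NULL);" > "$sqldir/ODCSF0.sql"

# Example site-specific rewrite: stamp a header comment on each generated script
for f in "$sqldir"/*.sql; do
    { echo "-- post-processed for schema $schema"; cat "$f"; } > "$f.tmp"
    mv "$f.tmp" "$f"
done
head -1 "$sqldir/ODCSF0.sql"
```

In a real project the loop would operate on $HOME/trf/SQL/rdbms/<schema name> and could equally run sed or perl substitutions.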
Performing the Migration
This section describes the tasks of unloading, transfer and reloading using the components generated using the Rehosting Workbench (see Generating the Components).
Preparation
Configuring the Environments and Installing the Components
Installing the Unloading Components Under z/OS
The components used for the unloading (generated in $HOME/trf/unload/rdbms) should be installed on the source z/OS platform. The generated JCL may need adapting to specific site constraints including JOB cards, library access paths and access paths to input and output files (Data Set Name – DSN).
Installing the Reloading Components on the Target Platform
The components used for the reloading (generated in $HOME/trf/reload/rdbms) should be installed on the target platform (runtime).
Table 4‑4 lists environment variables that should be set on the target platform.
 
Directory containing the <table name>.ctl files used by the SQL*LOADER ($HOME/trf/reload/rdbms/<schema name>/ctl).
Set according to the instructions in Oracle Tuxedo Application Rehosting Workbench Reference Guide and other Oracle documentation.
The following variable should be set according to the information in the Oracle Tuxedo Application Rehosting Workbench Installation Guide:
The reloading script loadrdbms-<table name>.ksh uses the SQL*LDR Oracle utility. Because this utility can only access Oracle servers directly, the script should be run on the Oracle server itself and not over a client connection. This variable should not contain an @<oracle_sid> string, especially for this reloading step.
Installing the MWDB2ORA Package Component on the Target Platform
The package functions called by COBOL programs (converted by Oracle Tuxedo Application Rehosting Workbench COBOL Converter) should be installed on the target platform (runtime).
The packages are located in REFINEDIR/convert-data/fixed-components/MWDB2ORA.plb and REFINEDIR/convert-data/fixed-components/MWDB2ORA_CONST.plb. You should adapt the MWDB2ORA_CONST.plb package and install these packages under SQLPLUS as documented in the Oracle Tuxedo Application Rehosting Workbench DB2 to Oracle Converter.
Unloading JCL
To unload each DB2 table, a JCL using the IBM unloading utility is executed. Generally, the unloading utility creates three files for each table:
If the table contains CLOB or BLOB data types, the unloading utility creates:
These files are written to a PDS dataset or an HFS directory when the rdbms:jcl_unload_lob_file_system parameter is set to pds or hfs, respectively.
These unloading JCLs are named <table name>.jclunload.
If the table name is longer than eight characters, the Rehosting Workbench attributes an eight-character name to the z/OS JCL as close as possible to the original. The renaming process maintains the uniqueness of each table name.
Example: ODCSF0X1.jclunload
In the example used in this chapter, the table named ODCSF0 is lengthened to ODCSF0X1 when naming the z/OS JCL.
Transferring the Files
The unloaded data files should be transferred between the source z/OS platform and the target UNIX platform in binary format using the file transfer tools available at the site (CFT, FTP, …).
The LOG and SYSPUNCH files should be transferred in text mode.
The files transferred to the target UNIX platform should be stored in the $DATA_SOURCE directory.
The CLOB and BLOB data files should be transferred in binary mode and stored in the $DATA_SOURCE/<schema_name>.<column_name> directory.
The MBCS data files should be transferred in text mode and properly transcoded by the transfer tools. For more information, see the Oracle Tuxedo Application Rehosting Workbench Reference Guide.
Note:
On MVS, the Rehosting Workbench gives the dataset or directory a six-character name derived from <column_name>, followed by a digit (1 for the first CLOB or BLOB column of the table, 2 for the second, and so on). On the UNIX/Linux platform, the loadrdbms.sh script uses the real column name.
Creating the Generated Oracle Objects
The scripts creating the Oracle objects (tables, indexes, constraints, …) are created in the $HOME/trf/SQL/rdbms/<schema name> directory. They should be executed in the target Oracle instance.
The <schema name>.lst file contains the names of all of the tables in hierarchical sequence (parent table then child tables).
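Replaying the creation scripts can be sketched as a loop over the .lst file. The table names and the table-<name>.sql naming below are assumptions for illustration, and the sqlplus command is only echoed because connection details are site specific.

```shell
# Sketch: execute the creation scripts in the parent-then-child order given
# by <schema name>.lst. Table names and script naming are illustrative.
schema="PJ01DB2"
sqldir=$(mktemp -d)                  # stands in for $HOME/trf/SQL/rdbms/$schema
printf 'ODCSF0\nODCSF1\n' > "$sqldir/$schema.lst"
while read -r table; do
    [ -n "$table" ] || continue
    echo "sqlplus user/password @$sqldir/table-$table.sql"
done < "$sqldir/$schema.lst" > "$sqldir/run.log"
cat "$sqldir/run.log"
```

Because the .lst file lists parent tables before child tables, running the scripts in file order keeps referential constraints satisfiable.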
Table 4‑5 lists the DB2 objects managed by the Rehosting Workbench and the name of the script used to create them:
 
Table 4‑5 DB2 Objects
This file contains all the CREATE INDEX statements associated with the table <target_table_name>. It is not generated if no indexes are defined on the table <target_table_name>.
Compiling the Transcoding Programs
The generated COBOL programs used for transcoding are named:
MOD_<table name>.cbl
For the example used in this chapter, the generated program is MOD_ODCSF0.cbl.
The programs should be compiled using the target COBOL compiler and the options documented in the Oracle Tuxedo Application Rehosting Workbench Reference Guide.
The programs produce RECORD SEQUENTIAL files on output that will then be read by the SQL*LOADER utility.
Listing 4‑6 FILE CONTROL Example – Extracted from Program: MOD_ODCSF0.cbl
SELECT MW-SORTIE
ASSIGN TO "SORTIE"
ORGANIZATION IS RECORD SEQUENTIAL
ACCESS IS SEQUENTIAL
FILE STATUS IS IO-STATUS.
 
The generated COBOL programs used for transcoding CLOB columns are named:
CLOB_<table name>_<column_name>.cbl
Executing the Transcoding and Reloading Scripts
The scripts enabling the transcoding and reloading of data are generated in the directory:
$HOME/trf/reload/rdbms/<schema name>/ksh
The format of the script names is:
loadrdbms-<table name>.ksh
In the example used in this chapter, the script is named:
loadrdbms-ODCSF0.ksh
Each script launches the COBOL program that performs the transcoding and then the SQL*LOADER utility. The CTL files used by SQL*LOADER are named:
<table name>.ctl.
The CTL file used for the example in this chapter is named:
ODCSF0.ctl
Transcoding and Reloading Command
The transcoding and reloading scripts have the following parameters:
Synopsis
loadrdbms-<table name>.ksh [-t | [-O|-T]] [-l] [-c: <method>]
Options
-t
Transcodes the file and all BLOB or CLOB files, if any exist.
-T
Transcodes only the file associated with the table (excluding CLOB and BLOB files). This option is used when a table contains CLOB or BLOB columns.
-O
For BLOB columns: creates only a UNIX link to each transferred binary BLOB file.
For CLOB columns: transcodes only the transferred binary CLOB files.
-l
Reloads the data into the Oracle table.
-c rows
Implements verification of the transfer (see Checking the Transfers).
Examples
For the example provided in Example of a Migration of DB2 Objects, the generated script is loadrdbms-ODCSF0.ksh.
Checking the Transfers
This check uses the following option of the loadrdbms-<table name>.ksh script:
-c rows
Note:
Troubleshooting
This section describes problems resulting from usage errors that have been encountered when migrating data from a source DB2 database to a target Oracle database.
Overview
When executing any of the Rehosting Workbench tools, users should check whether the rdbms-converter-<schema name>.log file contains any errors (see Common Problems and Solutions).
Error messages and associated explanations are listed in the appendix of the Oracle Tuxedo Application Rehosting Workbench Reference Guide.
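A minimal scan of that log can be scripted; the sample log content below is taken from the error listings later in this section, and the check itself is a sketch rather than a Workbench feature.

```shell
# Sketch: scan a converter log for errors; the sample file stands in for
# $TMPPROJECT/outputs/<schema name>/rdbms-converter-<schema name>.log.
log=$(mktemp)
cat > "$log" <<'EOF'
CONVERSION OF DDLs and CTL files and GENERATION of directive files
Fatal RDBMS error.
EOF
if grep -qi 'error' "$log"; then
    status="errors found"
else
    status="clean"
fi
echo "$status"
```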
Common Problems and Solutions
Error: RDBMS-0105
When executing $REFINEDIR/$VERS/rdbms.sh -Cgrmi $HOME/trf PJ01DB2 STFILEORA the following message appears:
Fatal RDBMS error.
Error: RDBMS-0105: Catalog for /home2/wkb9/param/system.desc is out of date
and needs to be updated externally.
Refine error...
Explanation
Changes have been made to the DDL; re-perform the cataloging operation.
Error: conversion aborted. Can not read
When executing $REFINEDIR/$VERS/rdbms.sh -Cgrmi $HOME/trf SCHEMA the following message appears:
Refine error...
/tmp/refine-exit-status.MOaZwgTphIN14075
ERROR : conversion aborted . Can not read /home2/wkb9/tmp/outputs/SCHEMA/rdbms-converter-SCHEMA.log log file
abort
Explanation
The schema name is not known.
Error: Configuration file /db-param.cfg is missing!
When executing $REFINEDIR/$VERS/rdbms.sh -Cgrmi $HOME/trf PJ01DB2 the following message appears:
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=--=-
#########################################################################
CONVERSION OF DDLs and CTL files and GENERATION of directive files
ERROR : Configuration file /db-param.cfg is missing !
ERROR : Error in reading configuration file
Abort
Explanation
The external variable PARAM is not set.
Error: Target output directory... is missing
When executing $REFINEDIR/$VERS/rdbms.sh -Cgrmi $HOME/bad-directory PJ01DB2 the following message appears:
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-
Target output directory /home2/wkb9/bad-directory is missing
Check parameters: -i <output_directory> <schema>
ERROR : usage : rdbms.sh [ [-c|-C] [-g] [-m] [-r] [-i <output_directory>] <schema_name> ] -s <output_directory> (<schema>,...) ]
abort
Explanation
The target directory does not exist.
Error: Abort when using -c option... in case of unsupported features
When executing $REFINEDIR/$VERS/rdbms.sh -c WWARN the following message appears:
*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-
WARNING: some unsupported db2 objects have been ignored by this tool.
Check file /home2/wkb9/tmp/outputs/WWARN/unsupported-WWARN.log to see a detail of those objects.
ERROR:
RDBMSWB-0199: conversion aborted due to 26 Warning message(s). Check previous error messages and try -C option instead of -c
abort
Explanation
The DDL contains some unsupported features; check the warning files. You can ignore this abort by replacing the -c option with the -C option.

Copyright © 1994, 2017, Oracle and/or its affiliates. All rights reserved.