
Oracle Migration Workbench Reference Guide for IBM DB2/400 V4R5 Migrations
Release 9.2.0 for Microsoft Windows 98/2000 and Microsoft Windows NT

Part Number A97252-01

3
Migration Process

This chapter introduces the migration process by outlining the architecture of both IBM DB2/400 V4R5 and Oracle. It includes information on the following topics:

IBM DB2/400 V4R5 Architecture

Oracle Architecture

Preparing for Migration

Extending the Application

Using Offline Data Loading

IBM DB2/400 V4R5 Architecture

IBM DB2/400 V4R5 is an integrated database management system for OS/400, the AS/400 operating system. It is a database system used for storing and manipulating large volumes of data and it forms the basis for most of the business applications that run on the AS/400.

Everything in the OS/400 operating system, including the database, is organized as objects. Each object has a qualified name. Examples of OS/400 objects are program files, database files, and user profiles. There are about 80 object types, and some object types contain sub-types; for example, the file object type *FILE includes physical files, logical files, printer files, display files, and communications files. An OS/400 object is uniquely identified by its qualified name and object type.

The Distributed Data Management (DDM) architecture provides a basis for distributed file access. Only native data access is allowed for DDM files. IBM created the Distributed Relational Database Architecture (DRDA) layer on top of DDM, and this provides the protocol that a SQL application can use to access distributed tables and data. Oracle supports the DRDA standard and the Oracle Transparent Gateway for IBM DB2/400 V4R5 provides the capability to transparently access and update data stored in distributed locations in IBM DB2/400 V4R5 databases.

High-Level Languages (HLLs) supported by IBM DB2/400 V4R5 include RPG, COBOL, C++, and CL (Command Language).

IBM DB2/400 V4R5 databases can be accessed from any application program using the Microsoft Open Database Connectivity (ODBC) interface, the Java Database Connectivity (JDBC) interface, or a Common Object Request Broker Architecture (CORBA) interface broker.

IBM DB2/400 V4R5 is no longer supported by IBM. The Migration Workbench provides customers with the ability to migrate from IBM DB2/400 V4R5 to Oracle, on any Oracle-supported platform.

Oracle Architecture

Oracle9i is a powerful, flexible, and scalable relational database management system (RDBMS) server that runs on a range of computer systems, from personal computers to the largest mainframes.

The architectural features described in this chapter are only a few of the features provided by Oracle, and they relate only to an IBM DB2/400 V4R5 migration. Refer to the Oracle Server documentation for a complete description of the Oracle architecture. These manuals are also available in online format on CD-ROM.

PL/SQL Programming Language

PL/SQL is a modern, full-featured programming language with exception handling. You can use PL/SQL to write stored programs and triggers in Oracle. It is also the programming language used in many of the client-side tools available from Oracle, such as Forms from the Oracle Developer suite of products.
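
For illustration, the following is a minimal sketch of an anonymous PL/SQL block with exception handling; the emp table, its columns, and the employee number are hypothetical:

-- Hypothetical example: look up one employee name and handle the case
-- where no matching row exists.
DECLARE
  v_name VARCHAR2(30);
BEGIN
  SELECT ename INTO v_name FROM emp WHERE empno = 7839;
  DBMS_OUTPUT.PUT_LINE('Employee: ' || v_name);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No such employee');
END;
/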

Triggers and Stored Procedures

Oracle allows you to write and store code in the DBMS along with data. You can associate trigger code with an UPDATE, INSERT, or DELETE event for each row or for a table as a whole. You can also set a trigger to run before or after the event. For example, you can set a trigger to run after any row is updated.

A stored procedure is a general routine, either function or subroutine, that is stored in precompiled form on the server. A trigger can call stored procedures, but triggers are activated only by specific database activity, such as the insertion of a row in a table.
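
For illustration, the following is a minimal sketch of a row-level trigger that fires after an UPDATE and calls a stored procedure; the emp table, its columns, and the log_salary_change procedure are hypothetical:

-- Hypothetical example: record every salary change after a row is updated.
CREATE OR REPLACE TRIGGER emp_salary_audit
AFTER UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  -- log_salary_change is a hypothetical stored procedure
  log_salary_change(:OLD.empno, :OLD.sal, :NEW.sal);
END;
/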

Sequences

A sequence is a unique number generator that is implemented in shared memory on a server. It is designed to provide a set of unique values for use as primary keys in high-performance applications.
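
For illustration, the following is a minimal sketch of creating a sequence and using it to supply primary key values; the order_seq sequence and the orders table are hypothetical:

-- Hypothetical example: a sequence that supplies primary key values.
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 CACHE 20;

INSERT INTO orders (order_id, customer_name)
VALUES (order_seq.NEXTVAL, 'ACME');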

Transactions

Oracle supports an implicit transaction model. Each SQL statement is part of a logical transaction. A logical transaction begins with the first SQL statement and ends with a COMMIT or ROLLBACK statement. Immediately after either of these statements, a new transaction takes effect with the next SQL statement.
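
For illustration, the following is a minimal sketch of the implicit transaction model; the accounts table and its columns are hypothetical:

-- The first statement implicitly begins a transaction.
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE acct_id = 2;
COMMIT;  -- ends the transaction; the next statement begins a new one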

Other Oracle Features

A database administrator has great flexibility when configuring Oracle. The administrator can write data on multiple disks for increased performance, tune rollback and recovery options, and allocate computer resources to optimize the configuration for each server. Oracle also supports distributed processing, so data can be distributed across multiple systems. Oracle offers a version of the server called Trusted Oracle Server for applications that require a higher level of user and use authentication.

Preparing for Migration

You must back up the IBM DB2/400 V4R5 database files before using the Migration Workbench Capture wizard to migrate to Oracle.

Extending the Application

After you move the data management portion of the IBM DB2/400 V4R5 application to Oracle, you can rely on Oracle to protect the data and maintain all referential integrity and business rules that you have encoded in PL/SQL. With this foundation, you can use a wide range of tools such as Oracle JDeveloper and Oracle Objects for OLE to extend the application.

In addition, if the application grows, you can move the Oracle server to larger computers without changing the application.

Using Offline Data Loading

You can use the extract_nn add-in program, shipped with the Migration Workbench, to extract the data of an IBM DB2/400 V4R5 database into delimited flat files. The Migration Workbench uses the extract_nn add-in with SQL*Loader to provide an offline data loading capability for large tables. The following topics explain the process of offline data loading:

extract_nn Add-in

Script Directory Structure

Generating Extract Scripts

Using the Extract Scripts

Installing the extract_nn program to the source AS/400 system

Calling the extract_nn program

extract_nn Add-in

The extract_nn add-in uses SQL to describe the fields of a file and to retrieve the data from the file, given a library name and a file name. The field data is written to a file in the Integrated File System that you specify as a parameter to the extract_nn add-in. In the output, each field is followed by a field-terminator string and each record by a record-terminator string; both strings are specified in the extract.omwb file, as described in "Using the Extract Scripts".

NULL entries in the table are not written explicitly. Instead, a NULL value is indicated by the position of the surrounding terminators, as follows:



  • NULL neither at the start nor the end of a row: two successive field-terminators

  • NULL in the last column: a field-terminator followed by a record-terminator

  • NULL in the first column of any row other than the first row: a record-terminator followed by a field-terminator

  • NULL in the first column of the first row of the table: a field-terminator at the very start of the file
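
For illustration, assume a field-terminator of | and a newline as the record-terminator (both hypothetical choices; the actual strings are set by the "<ec>" and "<er>" parameters described in "Using the Extract Scripts"). A three-column table whose first record has a NULL in the middle column and whose second record has a NULL in the last column would be extracted as:

100||SMITH
101|NEW YORK|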



The output file cannot contain any binary data. Decimal and zoned decimal fields are written as numeric strings with the appropriate sign and decimal point, as required. Floating-point columns are formatted using the %f format.

The following restrictions apply to the column types that the extract_nn program can process: GRAPHIC types, Datalink columns, and BLOB columns are not valid, and all CHAR and VARCHAR columns must be SBCS and must not have a CCSID of 65535. For more information on data types, refer to "Data Types in Oracle and IBM DB2/400 V4R5".

Script Directory Structure

The %ORACLE_HOME%\Omwb\sqlloader_scripts directory contains all data extraction scripts. It contains a subdirectory named db2400v4r3 that holds the SQL*Loader script output for IBM DB2/400 V4R5. The Migration Workbench creates a subdirectory in the db2400v4r3 directory named for the date and time at which the SQL*Loader scripts were generated. For example, a subdirectory named 1-06-01_17-56-16 contains scripts generated at 17:56:16 on June 1, 2001.

The extract.omwb file is created by the Generate SQL*Loader Scripts command and is located in this subdirectory. The file contains the command lines that the extract_nn add-in requires to extract data from the specified tables. You must edit some of the fields in this file; refer to "Using the Extract Scripts" for more information. Use FTP to copy this file and the extract_nn add-in program file to the omwb_lib library on the source AS/400 system.

When you generate SQL*Loader scripts in the Migration Workbench, a subdirectory called oracle is created in the timestamp directory. The oracle directory contains the SQL*Loader control files and a SQL*Loader script called sql_load_script.bat. The SQL*Loader control files and the data files that you create must be located in this directory. Therefore, after running the extract_nn add-in on the source AS/400 system, use FTP to copy the resulting data files back to the
sqlloader_scripts/db2400v4r3/timestamp/oracle directory on the target system before executing the sql_load_script.bat file.
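
The generated control files follow the standard SQL*Loader conventions for delimited data. The following is a minimal sketch of what such a control file might look like, assuming a hypothetical CUSTOMER table, a data file named CUSTOMER.DAT, and a field-terminator of |; the Migration Workbench generates the actual control files for you:

-- Hypothetical SQL*Loader control file for a delimited data file.
LOAD DATA
INFILE 'CUSTOMER.DAT'
INTO TABLE CUSTOMER
FIELDS TERMINATED BY '|'
(CUSTNO, NAME, CITY)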

Generating Extract Scripts

To create the extract_nn data extraction scripts and the SQL*Loader control files for all tables:

  1. From the Oracle Model, select the Tables folder.

  2. Choose Object -> Generate SQL*Loader Scripts.

    Note:

    You can also generate the scripts for a specific table by selecting that table from the Oracle Model, then choosing Object -> Generate SQL*Loader Scripts.




  3. When you are sure you want to generate the SQL*Loader scripts for the tables specified, click Yes.

  4. After noting the location of the SQL*Loader scripts, click OK.

Using the Extract Scripts

After generating the SQL*Loader scripts, you can use them to load the data into the Oracle database. You must copy the extract_nn file to the source AS/400 system using FTP.

The command line for each table in the extract.omwb file has the following format:

EXTRACT( "SCHEMA_NAME" , "TABLE_NAME" , "<OUTPUT INTEGRATED FILE SYSTEM 
NAME>/TABLE_NAME.DAT", "<ec>" , "<er>" , "" , "<LOGFILE INTEGRATED FILE SYSTEM 
NAME> /TABLE_NAME_LOG.DAT" )

The parameters in this command line are described below:

"SCHEMA_NAME"

The schema containing the table (file) that you want to extract. The Migration Workbench completes this parameter automatically.

"TABLE_NAME"

The table from which you want to extract the data. The Migration Workbench completes this parameter automatically.

"<OUTPUT INTEGRATED FILE SYSTEM NAME>"

The file in the Integrated File System (IFS) that receives the data extracted from the table. You must fill in this parameter in the extract.omwb file before running the batch file.

"<ec>"

The character string used to delimit each field (terminated by a 0x00 character). If the terminator is a single character, each CHAR, VARCHAR, and null-CHAR string is checked for that character, and any occurrence in the database string is doubled. This is the field-terminator string, which is set to "<ec>" by default.

"<er>"

The character string used to delimit each record (terminated by a 0x00 character). This is the record-terminator string, which is set to "<er>" by default.

"NULL"

The character string written in place of NULL values. If this parameter is left empty, zero-length data and NULL entries are both written to the file in the same format ("<ec><ec>", with nothing between the terminators), so an empty string cannot be distinguished from a NULL. You may therefore want to specify the string that represents NULL.

"<LOGFILE INTEGRATED FILE SYSTEM NAME>"

The file in the Integrated File System (IFS) that receives the logfile generated by the extract process. It contains descriptions of any errors encountered. You must fill in this parameter in the extract.omwb file before running the batch file.
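
For illustration, a completed command line might look like the following, where PRODLIB, CUSTOMER, the /home/omwb paths, and the | and ~ terminator strings are hypothetical values chosen for this example:

EXTRACT( "PRODLIB" , "CUSTOMER" , "/home/omwb/CUSTOMER.DAT", "|" , "~" , "" , "/home/omwb/CUSTOMER_LOG.DAT" )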

Installing the extract_nn program to the source AS/400 system

The extract_nn program is included with the other add-in programs in the savefile that is copied to the %oracle_home%\omwb\addins\DB2400V4R3 directory during the installation of the Migration Workbench.

To run the extract_nn add-in program, you must FTP the savefile to the source AS/400 system. The add-in program is installed automatically in the omwb_lib library on the source AS/400 system during the database capture process of the Migration Workbench. If you have used the Capture Wizard for the IBM DB2/400 V4R3 plug-in, you can run the extract_nn program from the omwb_lib library on the source AS/400 system using the call commands outlined in "Calling the extract_nn program".

If, however, you want to use the extract_nn program without going through the capture process, you have to carry out the following steps:

    1. Log in to the source AS/400 system using a user ID that has *SECOFR or *SECADM authority; QSECOFR is a suitable user ID.

    2. Create a library on the source AS/400 system. A good name is omwb_lib, but you can use any library name you want. We will use omwb_lib in the steps that follow.

    3. Create a savefile (object type *SAVF) in the library created in step 2. Call that file savefile. For example:

      CRTSAVF OMWB_lib/SAVEFILE
      
      
    4. Open an FTP connection to the source AS/400 system.

    5. Set the transfer mode to Binary.

    6. Use a cd subcommand to make sure the data goes to the correct library on the AS/400. For example, cd omwb_lib.

    7. FTP the savefile in binary mode from the
      %oracle_home%\omwb\addins\DB2400V4R3 directory on the Migration Workbench installation system to the library on the source AS/400 system using the put command. This file is approximately 1 MB in size. For example:

      PUT SAVEFILE
      
      
    8. The savefile on the source AS/400 system now contains nine *MODULE objects and a *FILE object.

    Restore the objects from the savefile using the RSTOBJ command. The command should be entered on one line:

    RSTOBJ OBJ(*ALL) SAVLIB(QTEMP) DEV(*SAVF) SAVF(OMWB_lib/SAVEFILE) RSTLIB(OMWB_lib)
    
    
  9. Create the DORST program by entering the following command:

    CRTPGM PGM(OMWB_lib/DORST) MODULE(OMWB_lib/DORST)
    
    
  10. Run the DORST program to complete the installation. A single parameter is required: the name of the library in which the add-in program is placed. In the following example, the program is placed in the same library as the savefile and the objects extracted from that savefile:

    CALL PGM(OMWB_lib/DORST) PARM(OMWB_lib)
    
    
  11. You also need to create extract scripts for input to the extract_nn program. The format of these scripts is defined in "Using the Extract Scripts".

  12. You can now run the extract_nn program.

Calling the extract_nn program

You can run the extract_nn program after it has been installed on the source AS/400 system and the extract.omwb file has been copied to that system, as described in the preceding sections.

The call to the extract_nn program accepts up to two parameters.

  1. The first parameter, which is required, is a string containing the location of the extract.omwb file. An example of a call to the extract_nn program using this single parameter is as follows:

    CALL OMWB_lib/EXTRACT_NN PARM( '/home/usr/EXTRACT.OMWB' )
    
    
  2. A second optional parameter can be added to the call to the extract_nn program. This parameter is used to:

    • Set the length of the returned string for a TIMESTAMP value.

    • Request an estimate of file sizes to be returned by the extract_nn program.

    An example of a call to the extract_nn program to retrieve a TIMESTAMP value no longer than 19 characters is as follows:

    CALL OMWB_lib/EXTRACT_NN PARM('/home/usr/EXTRACT.OMWB', 'TIMESTAMP_LEN=19')
    
    

    The above call tells the extract_nn program to truncate any TIMESTAMP values to 19 characters.

    An example of a call to the extract_nn program to retrieve an estimate of file sizes the extract_nn program will generate is as follows:

    CALL OMWB_lib/EXTRACT_NN PARM('/home/usr/EXTRACT.OMWB', 'DISKSPACE')
    
    

    The above call does not extract any data to files. Instead, it writes comments to the log file describing how large the output files are likely to be.


    Note:

    The file size estimate is approximate and depends on the actual data.


To use the extract scripts to perform a manual data extraction:

  1. Capture the schema model of the source IBM DB2/400 V4R5 database using the Migration Workbench, and generate the extraction scripts as described in "Generating Extract Scripts".

  2. Migrate the IBM DB2/400 V4R5 table schema to your Oracle database. Select No when asked "Do you want to migrate table data to Oracle?" in Step 3 of the Migration Wizard.

  3. Follow the steps listed in "Generating Extract Scripts".

  4. Edit the resulting extract.omwb file in the
    %ORACLE_HOME%\Omwb\sqlloader_scripts\DB2400V4R3 directory and enter the values for the required fields (refer to "Using the Extract Scripts" for more information). Copy the edited extract.omwb file to the source AS/400 system. If required, install the extract_nn program to the source AS/400 system; refer to "Installing the extract_nn program to the source AS/400 system".

  5. Call the extract_nn add-in using the following command:

    CALL LIBRARY_NAME/EXTRACT_NN PARM( "<EXTRACT.OMWB IFS>/EXTRACT.OMWB" )
    
    

    In the above example, LIBRARY_NAME is the library to which the extract_nn program was copied and <EXTRACT.OMWB IFS> is the Integrated File System location to which the extract.omwb file was copied. Wait until each command line in the batch file has completed. For examples of how to call the extract_nn program and the options available, see "Calling the extract_nn program".

  6. Copy the resulting output files from the AS/400 system to the
    %ORACLE_HOME%\Omwb\sqlloader_scripts\db2400v4r3\timestamp\oracle directory on the Migration Workbench client system.

  7. Run the sql_load_script.bat file in the
    %ORACLE_HOME%\Omwb\sqlloader_scripts\db2400v4r3\timestamp\oracle directory to load the data from the flat files into the Oracle database.

Copyright © 1998, 2002 Oracle Corporation.

All Rights Reserved.