Oracle® Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator
11g Release 1 (11.1.1)

Part Number E12644-07

2 Oracle Database

This chapter describes how to work with Oracle Database in Oracle Data Integrator.

This chapter includes the following sections:

  • Section 2.1, "Introduction"

  • Section 2.2, "Installation and Configuration"

  • Section 2.3, "Setting up the Topology"

  • Section 2.4, "Setting Up an Integration Project"

  • Section 2.5, "Creating and Reverse-Engineering an Oracle Model"

  • Section 2.6, "Setting up Changed Data Capture"

  • Section 2.7, "Setting up Data Quality"

  • Section 2.8, "Designing an Interface"

  • Section 2.9, "Troubleshooting"

2.1 Introduction

Oracle Data Integrator (ODI) seamlessly integrates data in an Oracle Database. All Oracle Data Integrator features are designed to work best with the Oracle Database engine, including reverse-engineering, changed data capture, data quality, and integration interfaces.

2.1.1 Concepts

The Oracle Database concepts map the Oracle Data Integrator concepts as follows: An Oracle Instance corresponds to a data server in Oracle Data Integrator. Within this instance, a schema maps to an Oracle Data Integrator physical schema. A set of related objects within one schema corresponds to a data model, and each table, view or synonym will appear as an ODI datastore, with its attributes, columns and constraints.

Oracle Data Integrator uses Java Database Connectivity (JDBC) to connect to an Oracle database instance.

2.1.2 Knowledge Modules

Oracle Data Integrator provides the Knowledge Modules (KM) listed in Table 2-1 for handling Oracle data. These KMs use Oracle-specific features. It is also possible to use the generic SQL KMs with the Oracle Database. See Chapter 4, "Generic SQL" for more information.

Table 2-1 Oracle Database Knowledge Modules

  • RKM Oracle: Reverse-engineers tables, views, columns, primary keys, non-unique indexes and foreign keys.

  • JKM Oracle 10g Consistent (Streams): Creates the journalizing infrastructure for consistent set journalizing on Oracle 10g tables, using Oracle Streams.

  • JKM Oracle 11g Consistent (Streams): Creates the journalizing infrastructure for consistent set journalizing on Oracle 11g tables, using Oracle Streams.

  • JKM Oracle Consistent: Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers.

  • JKM Oracle Consistent (Update Date): Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables.

  • JKM Oracle Simple: Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers.

  • JKM Oracle to Oracle Consistent (OGG): Creates and manages the ODI CDC framework infrastructure when using Oracle GoldenGate for CDC. See Chapter 29, "Oracle GoldenGate" for more information.

  • CKM Oracle: Checks data integrity against constraints defined on an Oracle table.

  • LKM File to Oracle (EXTERNAL TABLE): Loads data from a file to an Oracle staging area using the EXTERNAL TABLE SQL command.

  • LKM File to Oracle (SQLLDR): Loads data from a file to an Oracle staging area using the SQL*Loader command line utility.

  • LKM MSSQL to Oracle (BCP SQLLDR): Loads data from a Microsoft SQL Server database to an Oracle staging area using the BCP and SQL*Loader utilities.

  • LKM Oracle BI to Oracle (DBLINK): Loads data from any Oracle BI physical layer to an Oracle target database using database links. See Chapter 18, "Oracle Business Intelligence Enterprise Edition" for more information.

  • LKM Oracle to Oracle (DBLINK): Loads data from an Oracle source database to an Oracle staging area database using database links.

  • LKM Oracle to Oracle (datapump): Loads data from an Oracle source database to an Oracle staging area database using external tables in the datapump format.

  • LKM SQL to Oracle: Loads data from any ANSI SQL-92 standard compliant source database to an Oracle staging area.

  • LKM SAP BW to Oracle (SQLLDR): Loads data from SAP BW systems to an Oracle staging area using the SQL*Loader utility. See the Oracle Fusion Middleware Application Adapters Guide for Oracle Data Integrator for more information.

  • LKM SAP ERP to Oracle (SQLLDR): Loads data from SAP ERP systems to an Oracle staging area using the SQL*Loader utility. See the Oracle Fusion Middleware Application Adapters Guide for Oracle Data Integrator for more information.

  • IKM Oracle AW Incremental Update: Integrates data in an Oracle target table in incremental update mode and is able to refresh a cube in an Analytic Workspace. See Chapter 23, "Oracle OLAP" for more information.

  • IKM Oracle Incremental Update: Integrates data in an Oracle target table in incremental update mode.

  • IKM Oracle Incremental Update (MERGE): Integrates data in an Oracle target table in incremental update mode, using a MERGE statement.

  • IKM Oracle Incremental Update (PL SQL): Integrates data in an Oracle target table in incremental update mode using PL/SQL.

  • IKM Oracle Multi Table Insert: Integrates data from one source into one or more Oracle target tables in append mode, using a multi-table insert (MTI) statement.

  • IKM Oracle Slowly Changing Dimension: Integrates data in an Oracle target table used as a Type II Slowly Changing Dimension.

  • IKM Oracle Spatial Incremental Update: Integrates data into an Oracle (9i or above) target table in incremental update mode using the MERGE DML statement. This module supports the SDO_GEOMETRY datatype.

  • IKM Oracle to Oracle Control Append (DBLINK): Integrates data from one Oracle instance into an Oracle target table on another Oracle instance in control append mode. This IKM is typically used for ETL configurations: source and target tables are on different Oracle instances and the interface's staging area is set to the logical schema of the source tables or a third schema.

  • SKM Oracle: Generates data access Web services for Oracle databases. See "Working with Data Services" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for information about how to use this SKM.


2.2 Installation and Configuration

Make sure you have read the information in this section before you start using the Oracle Knowledge Modules:

  • Section 2.2.1, "System Requirements and Certifications"

  • Section 2.2.2, "Technology Specific Requirements"

  • Section 2.2.3, "Connectivity Requirements"

2.2.1 System Requirements and Certifications

Before performing any installation you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.

The list of supported platforms and versions is available on Oracle Technology Network (OTN):

http://www.oracle.com/technology/products/oracle-data-integrator/index.html.

2.2.2 Technology Specific Requirements

Some of the Knowledge Modules for Oracle use specific features of this database. This section lists the requirements related to these features.

2.2.2.1 Using the SQL*Loader Utility

This section describes the requirements that must be met before using the SQL*Loader utility with Oracle database.

  • The Oracle Client and the SQL*Loader utility must be installed on the machine running the Oracle Data Integrator Agent.

  • The server names defined in the Topology must match the Oracle TNS name used to access the Oracle instances.

  • A specific log file is created by SQL*Loader. We recommend looking at this file in case of error. Control Files (CTL), Log files (LOG), Discard Files (DSC) and Bad files (BAD) are placed in the work directory defined in the physical schema of the source files.

  • Using the DIRECT mode requires that the Oracle Data Integrator Agent runs on the target Oracle server machine. The source file must also be on that machine.
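
For illustration, a minimal SQL*Loader control file of the kind generated by the LKMs might look like the following. File, table and column names here are hypothetical; the actual control file is generated by the KM in the work directory mentioned above.

    LOAD DATA
    INFILE 'CUSTOMERS.dat'
    BADFILE 'CUSTOMERS.bad'
    DISCARDFILE 'CUSTOMERS.dsc'
    APPEND
    INTO TABLE odi_staging.c$_customers
    FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
    (cust_id, cust_name, city)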

2.2.2.2 Using External Tables

This section describes the requirements that must be met before using external tables in Oracle database.

  • The file to be loaded by the External Table command needs to be accessible from the Oracle instance. This file must be located on the file system of the server machine, be reachable through a Universal Naming Convention (UNC) path, or be stored locally.

  • For performance reasons, it is recommended to install the Oracle Data Integrator Agent on the target server machine.
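
The EXTERNAL TABLE command mentioned above is standard Oracle SQL. As a sketch only, with hypothetical directory, table, file and column names, the statements generated by the LKM have the following general form:

    CREATE DIRECTORY dat_dir AS '/data/odi/files';

    CREATE TABLE customers_ext (
      cust_id   NUMBER(10),
      cust_name VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dat_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ';'
      )
      LOCATION ('customers.dat')
    );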

2.2.2.3 Using Oracle Streams

This section describes the requirements for using Oracle Streams Journalizing knowledge modules.

Note:

It is recommended to first review the "Change Data Capture" chapter in the Oracle Database Data Warehousing Guide, which contains the comprehensive list of requirements for Oracle Streams.

The following requirements must be met before setting up changed data capture using Oracle Streams:

  • Oracle Streams must be installed on the Oracle Database.

  • The Oracle database must run using a SPFILE (only required for AUTO_CONFIGURATION option).

  • The AQ_TM_PROCESSES option must be either left to the default value, or set to a value different from 0 and 10.

  • The COMPATIBLE option should be set to 10.1 or higher.

  • The database must run in ARCHIVELOG mode.

  • PARALLEL_MAX_SERVERS must be increased in order to account for the number of Apply and Capture processes. It should be increased by at least 6 for a Standalone configuration, 9 for Low-Activity and 21 for High-Activity.

  • UNDO_RETENTION must be set to at least 3600.

  • STREAMS_POOL_SIZE must be increased by 100 MB for a Standalone configuration, 236 MB for Low-Activity and 548 MB for High-Activity.

  • All the columns of the primary key defined in the ODI Model must be part of a SUPPLEMENTAL LOG GROUP.

  • When using the AUTO_CONFIGURATION knowledge module option, all the above requirements are checked and set up automatically, except for some actions that must be performed manually. See "Using the Streams JKMs" for more information.

    To run this KM without the AUTO_CONFIGURATION knowledge module option, the following system privileges must be granted (a SQL sketch follows this list):

    • DBA role to the connection user

    • Streams Administrator to the connection user

    • RESOURCE role to the work schema

    • SELECT ANY TABLE to the work schema

  • Asynchronous mode gives the best performance on the journalized system, but it requires extra Oracle Database initialization configuration and additional privileges for configuration.

  • Asynchronous mode requires the journalized database to be in ARCHIVELOG mode. Before turning this option on, you should first understand the concept of asynchronous AutoLog publishing. See the Oracle Database Administrator's Guide for information about running a database in ARCHIVELOG mode. See "Asynchronous Change Data Capture" in the Oracle Database Data Warehousing Guide for more information on supplemental logging. This will help you correctly manage the archives and avoid common issues, such as hanging the Oracle instance if the archive files are not removed regularly from the archive repository.

  • When using asynchronous mode, the user connecting to the instance must be granted admin authorization on Oracle Streams. This is done using the DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure when logged in with a user already having this privilege (for example, the SYSTEM user).

  • The work schema must be granted the SELECT ANY TABLE privilege to be able to create views referring to tables stored in other schemas.
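
The following SQL sketch summarizes the manual grants listed above, together with an example of a supplemental log group on the primary key columns. All user, schema, table and column names are hypothetical; adapt them to your environment and review them with your DBA.

    -- Privileges for the ODI connection user (hypothetical user ODI_USER)
    GRANT DBA TO odi_user;

    BEGIN
      -- Must be run by a user that already has this privilege (for example SYSTEM)
      DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ODI_USER');
    END;
    /

    -- Privileges for the work schema (hypothetical schema ODI_WORK)
    GRANT RESOURCE TO odi_work;
    GRANT SELECT ANY TABLE TO odi_work;

    -- The primary key columns of each journalized table must be part of a supplemental log group
    ALTER TABLE app_data.customers
      ADD SUPPLEMENTAL LOG GROUP customers_pk_lg (cust_id) ALWAYS;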

For detailed information on all other prerequisites, see the "Change Data Capture" chapter in the Oracle Database Data Warehousing Guide.

2.2.3 Connectivity Requirements

This section lists the requirements for connecting to an Oracle Database.

JDBC Driver

Oracle Data Integrator is installed with a default version of the Oracle Type 4 JDBC driver. This driver directly uses the TCP/IP network layer and requires no other installed component or configuration.

It is possible to connect to an Oracle server through the Oracle JDBC OCI driver, or even using ODBC. For performance reasons, it is recommended to use the Type 4 driver.

Connection Information

You must ask the Oracle DBA for the following information:

  • Network Name or IP address of the machine hosting the Oracle Database.

  • Listening port of the Oracle listener.

  • Name of the Oracle Instance (SID).

  • TNS alias of the connected instance.

  • Login and password of an Oracle User.

2.3 Setting up the Topology

Setting up the Topology consists of:

  1. Creating an Oracle Data Server

  2. Creating an Oracle Physical Schema

2.3.1 Creating an Oracle Data Server

An Oracle data server corresponds to an Oracle Database Instance connected with a specific Oracle user account. This user will have access to several schemas in this instance, corresponding to the physical schemas in Oracle Data Integrator created under the data server.

2.3.1.1 Creation of the Data Server

Create a data server for the Oracle technology using the standard procedure, as described in "Creating a Data Server" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section details only the fields required or specific for defining an Oracle data server:

  1. In the Definition tab:

    • Name: Name of the data server that will appear in Oracle Data Integrator.

    • Instance/dblink: TNS Alias used for this Oracle instance. It will be used to identify the Oracle instance when using database links and SQL*Loader.

    • User/Password: Oracle user (with its password), having select privileges on the source schemas, select/insert privileges on the target schemas and select/insert/object creation privileges on the work schemas that will be indicated in the Oracle physical schemas created under this data server.

  2. In the JDBC tab:

    • JDBC Driver: oracle.jdbc.driver.OracleDriver

    • JDBC URL: jdbc:oracle:thin:@<network name or ip address of the Oracle machine>:<port of the Oracle listener (1521)>:<name of the Oracle instance>

      To connect an Oracle RAC instance with the Oracle JDBC thin driver, use an Oracle RAC database URL as shown in the following example:

      jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)
      (ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=service)))
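
      For example, a single-instance URL might look like the following (hypothetical host name and SID):

      jdbc:oracle:thin:@dbhost1:1521:ORCL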
      

2.3.2 Creating an Oracle Physical Schema

Create an Oracle physical schema using the standard procedure, as described in "Creating a Physical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

Create for this physical schema a logical schema using the standard procedure, as described in "Creating a Logical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator and associate it in a given context.

2.4 Setting Up an Integration Project

Setting up a project using the Oracle Database follows the standard procedure. See "Creating an Integration Project" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

To get started with Oracle Database, it is recommended to import into your project the knowledge modules from Table 2-1 that correspond to the reverse-engineering, loading, integration, journalizing and check strategies you intend to use.

2.5 Creating and Reverse-Engineering an Oracle Model

This section contains the following topics:

  • Create an Oracle Model

  • Reverse-engineer an Oracle Model

2.5.1 Create an Oracle Model

Create an Oracle Model using the standard procedure, as described in "Creating a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

2.5.2 Reverse-engineer an Oracle Model

Oracle supports both Standard reverse-engineering, which uses only the abilities of the JDBC driver, and Customized reverse-engineering, which uses an RKM to retrieve the structure of the objects directly from the Oracle dictionary.

In most cases, consider using the standard JDBC reverse-engineering to start with. Standard reverse-engineering with Oracle retrieves tables, views, columns, primary keys, and references.

Consider switching to customized reverse-engineering for retrieving more metadata. Oracle customized reverse-engineering retrieves the table and view structures, including columns, primary keys, alternate keys, indexes, check constraints, synonyms, and references.

Standard Reverse-Engineering

To perform a Standard Reverse-Engineering on Oracle use the usual procedure, as described in "Reverse-engineering a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

Customized Reverse-Engineering

To perform a Customized Reverse-Engineering on Oracle with a RKM, use the usual procedure, as described in "Reverse-engineering a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section details only the fields specific to the Oracle technology:

In the Reverse tab of the Oracle Model, select the KM: RKM Oracle.<project name>.

2.6 Setting up Changed Data Capture

The ODI Oracle Knowledge Modules support the Changed Data Capture feature. See "Working with Changed Data Capture" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for details on how to set up journalizing and how to use captured changes.

Oracle Journalizing Knowledge Modules support Simple Journalizing and Consistent Set Journalizing. The Oracle JKMs use either triggers or Oracle Streams to capture data changes on the source tables.

Oracle Data Integrator provides the Knowledge Modules listed in Table 2-2 for journalizing Oracle tables.

Table 2-2 Oracle Journalizing Knowledge Modules

  • JKM Oracle 10g Consistent (Streams): Creates the journalizing infrastructure for consistent set journalizing on Oracle 10g tables, using Oracle Streams.

  • JKM Oracle 11g Consistent (Streams): Creates the journalizing infrastructure for consistent set journalizing on Oracle 11g tables, using Oracle Streams.

  • JKM Oracle Consistent: Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers.

  • JKM Oracle Consistent (Update Date): Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables.

  • JKM Oracle Simple: Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers.


Note that it is also possible to use Oracle GoldenGate to consume changed records from an Oracle database. See Chapter 29, "Oracle GoldenGate" for more information.

Using the Streams JKMs

The Streams KMs work with the default values. The following settings are recommended:

  • Leave the AUTO_CONFIGURATION option at its default value so that the requirements listed in Section 2.2.2.3, "Using Oracle Streams" are checked and configured automatically. Review the execution log for configuration tasks that could not be performed automatically.

  • Choose the configuration type (Standalone, Low-Activity or High-Activity) that matches the activity of the journalized database, since the PARALLEL_MAX_SERVERS and STREAMS_POOL_SIZE requirements depend on it.

Using the Update Date JKM

This JKM assumes that a column containing the last update date exists in all the journalized tables. This column name is provided in the UPDATE_DATE_COL_NAME knowledge module option.
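
As an illustration, assuming a hypothetical source table whose LAST_UPDATE_DATE column is maintained by a trigger (the JKM only reads this column; how it is populated is up to the application):

    ALTER TABLE app_data.customers ADD (last_update_date DATE);

    -- Keeps the column current on every insert or update
    CREATE OR REPLACE TRIGGER app_data.customers_lud
      BEFORE INSERT OR UPDATE ON app_data.customers
      FOR EACH ROW
    BEGIN
      :NEW.last_update_date := SYSDATE;
    END;
    /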

2.7 Setting up Data Quality

Oracle Data Integrator provides the CKM Oracle for checking data integrity against constraints defined on an Oracle table. See "Set up Flow Control and Post-Integration Control" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for details.

Oracle Data Integrator provides the Knowledge Module listed in Table 2-4 to perform a check on Oracle. It is also possible to use the generic SQL KMs. See Chapter 4, "Generic SQL" for more information.

Table 2-4 Check Knowledge Modules for Oracle Database

  • CKM Oracle (recommended): Uses Oracle's ROWID to identify records.


2.8 Designing an Interface

You can use Oracle as a source, staging area or a target of an integration interface. It is also possible to create ETL-style integration interfaces based on the Oracle technology.

The KM choice for an interface or a check determines the abilities and performance of this interface or check. The recommendations in this section help in the selection of the KM for different situations concerning an Oracle data server.

2.8.1 Loading Data from and to Oracle

Oracle can be used as a source, target or staging area of an interface. The LKM choice in the Interface Flow tab to load data between Oracle and another type of data server is essential for the performance of an interface.

2.8.1.1 Loading Data from Oracle

The following KMs implement optimized methods for loading data from an Oracle database to a target or staging area database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technology involved.

Target or staging area technology, recommended KM, and notes:

  • Oracle: LKM Oracle to Oracle (DBLINK). Creates a view on the source server, and synonyms on this view on the target server.

  • Oracle: LKM Oracle to Oracle (datapump). Uses external tables in the datapump format.


2.8.1.2 Loading Data to Oracle

The following KMs implement optimized methods for loading data from a source or staging area into an Oracle database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technology involved.

Source or staging area technology, recommended KM, and notes:

  • Oracle: LKM Oracle to Oracle (DBLINK). Views created on the source server, synonyms on the target.

  • SAP BW: LKM SAP BW to Oracle (SQLLDR). Uses Oracle's bulk loader. The File technology cannot be used as the staging area.

  • SAP ERP: LKM SAP ERP to Oracle (SQLLDR). Uses Oracle's bulk loader. The File technology cannot be used as the staging area.

  • Files: LKM File to Oracle (EXTERNAL TABLE). Loads file data using external tables.

  • Files: LKM File to Oracle (SQLLDR). Uses Oracle's bulk loader. The File technology cannot be used as the staging area.

  • Oracle: LKM Oracle to Oracle (datapump). Uses external tables in the datapump format.

  • Oracle BI: LKM Oracle BI to Oracle (DBLINK). Creates synonyms for the target staging table and uses the OBIEE populate command.

  • Microsoft SQL Server: LKM MSSQL to Oracle (BCP SQLLDR). Unloads data from SQL Server using BCP and loads it into Oracle using SQL*Loader.

  • All technologies: LKM SQL to Oracle. Faster than the generic LKM because it uses statistics.


2.8.2 Integrating Data in Oracle

The data integration strategies in Oracle are numerous and cover several modes. The IKM choice in the Interface Flow tab determines the performance and possibilities for integrating.

The following KMs implement optimized methods for integrating data into an Oracle target. In addition to these KMs, you can also use the Generic SQL KMs.

Integration mode, recommended KM, and notes:

  • Update: IKM Oracle Incremental Update. Optimized for Oracle.

  • Update: IKM Oracle Spatial Incremental Update. Supports the SDO_GEOMETRY datatype.

  • Update: IKM Oracle Incremental Update (MERGE). Recommended for very large volumes of data because of its bulk set-based MERGE feature.

  • Update: IKM Oracle Incremental Update (PL SQL). Uses PL/SQL and supports LONG and BLOB datatypes in incremental update mode.

  • Specific: IKM Oracle Slowly Changing Dimension. Supports Type 2 Slowly Changing Dimensions.

  • Specific: IKM Oracle Multi Table Insert. Supports multi-table insert statements.

  • Append: IKM Oracle to Oracle Control Append (DBLINK). Optimized for Oracle using DB*Links.
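
To illustrate the set-based strategy used by the IKM Oracle Incremental Update (MERGE), the statement generated by this KM follows the general pattern sketched below. Schema, table and column names are hypothetical; the actual statement is built by the KM from the interface mappings.

    MERGE INTO target_sch.customers t
    USING odi_work.i$_customers i
    ON (t.cust_id = i.cust_id)
    WHEN MATCHED THEN UPDATE SET
      t.cust_name = i.cust_name,
      t.city      = i.city
    WHEN NOT MATCHED THEN INSERT (cust_id, cust_name, city)
      VALUES (i.cust_id, i.cust_name, i.city);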


Using Slowly Changing Dimensions

For using slowly changing dimensions, make sure to set the Slowly Changing Dimension value for each column of the Target datastore. This value is used by the IKM Oracle Slowly Changing Dimension to identify the Surrogate Key, Natural Key, Overwrite or Insert Column, Current Record Flag and Start/End Timestamps columns.

Using Multi Table Insert

The IKM Oracle Multi Table Insert is used to integrate data from one source into one or more Oracle target tables with a multi-table insert statement. This IKM must be used in integration interfaces that are sequenced in a Package. This Package must meet the following conditions:

  • The first interface of the Package must have a temporary target and the KM option DEFINE_QUERY set to YES.

    This first interface defines the structure of the SELECT clause of the multi-table insert statement (that is, the source flow).

  • Subsequent integration interfaces must source from this temporary datastore and have the KM option IS_TARGET_TABLE set to YES.

  • The last interface of the Package must have the KM option EXECUTE set to YES in order to run the multi-table insert statement.

  • Do not set Use Temporary Interface as Derived Table (Sub-Select) to true on any of the interfaces.

If large amounts of data are appended, consider setting the KM option OPTIMIZER_HINT to /*+ APPEND */.
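
As an illustration of what this KM executes, a conditional multi-table insert follows the general pattern below. Table and column names are hypothetical; the SELECT part corresponds to the first interface (DEFINE_QUERY) and each WHEN ... INTO clause corresponds to one target interface (IS_TARGET_TABLE). With OPTIMIZER_HINT set, the statement would start with INSERT /*+ APPEND */ ALL.

    INSERT ALL
      -- Each WHEN clause routes rows to one of the target tables
      WHEN order_amount < 1000 THEN
        INTO small_orders (order_id, customer_id, order_amount)
        VALUES (order_id, customer_id, order_amount)
      WHEN order_amount >= 1000 THEN
        INTO large_orders (order_id, customer_id, order_amount)
        VALUES (order_id, customer_id, order_amount)
    -- The SELECT clause is defined by the first interface of the Package
    SELECT order_id, customer_id, order_amount
    FROM odi_work.temp_orders;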

Using Spatial Datatypes

To perform incremental update operations on Oracle Spatial datatypes, you need to declare the SDO_GEOMETRY datatype in the Topology and use the IKM Oracle Spatial Incremental Update. When comparing two columns of SDO_GEOMETRY datatype, the GEOMETRY_TOLERANCE option is used to define the error margin inside which the geometries are considered to be equal. See the Oracle Spatial User's Guide and Reference for more information.
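
As a hypothetical sketch of how the tolerance enters the comparison, the SDO_GEOM.RELATE function can test two geometries for equality within a given tolerance; rows whose geometries differ beyond the tolerance would be candidates for update:

    SELECT i.cust_id
    FROM odi_work.i$_customers i, target_sch.customers t
    WHERE i.cust_id = t.cust_id
      AND SDO_GEOM.RELATE(i.location, 'EQUAL', t.location, 0.005) <> 'EQUAL';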

2.8.3 Designing an ETL-Style Interface

See "Working with Integration Interface" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for generic information on how to design integration interfaces. This section describes how to design an ETL-style interface where the staging area is Oracle database or any ANSI-92 compliant database and the target on Oracle database.

In an ETL-style interface, ODI processes the data in a staging area, which is different from the target. Oracle Data Integrator provides two ways for loading the data from an Oracle staging area to an Oracle target:

Depending on the KM strategy that is used, flow and static control are supported.

Using a Multi-connection IKM

A multi-connection IKM allows updating a target where the staging area and sources are on different data servers.

Oracle Data Integrator provides the following multi-connection IKM for handling Oracle data: IKM Oracle to Oracle Control Append (DBLINK). You can also use the generic SQL multi-connection IKMs. See Chapter 4, "Generic SQL" for more information.

See Table 2-5 for more information on when to use a multi-connection IKM.

To use a multi-connection IKM in an ETL-style interface:

  1. Create an integration interface with the staging area on Oracle or an ANSI-92 compliant technology and the target on Oracle using the standard procedure as described in "Creating an Interface" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section describes only the ETL-style specific steps.

  2. In the Definition tab of the Interface Editor, select Staging Area different from Target and select the logical schema of the source tables or a third schema.

  3. In the Flow tab, select one of the Source Sets, by clicking its title. The Property Inspector opens for this object.

  4. Select an LKM from the LKM Selector list to load from the source(s) to the staging area. See Table 2-5 to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. In the Flow tab, select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, select an ETL multi-connection IKM from the IKM Selector list to load the data from the staging area to the target. See Table 2-5 to determine the IKM you can use.

Note the following when setting the KM options:

  • For IKM Oracle to Oracle Control Append (DBLINK)

    • If large amounts of data are appended, set the KM option OPTIMIZER_HINT to /*+ APPEND */.

    • Set AUTO_CREATE_DB_LINK to true to automatically create the database link on the target schema. If AUTO_CREATE_DB_LINK is set to false (the default), a link with this name must already exist in the target schema (see the sketch after this list).

    • If you set the options FLOW_CONTROL and STATIC_CONTROL to Yes, select a CKM in the Controls tab. If FLOW_CONTROL is set to Yes, the flow table is created on the target.
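
If you create the database link manually, a statement of the following general form is needed in the target schema. The link name, credentials and TNS alias below are hypothetical; the name must match the one expected by the KM.

    CREATE DATABASE LINK source_link
      CONNECT TO odi_staging IDENTIFIED BY staging_password
      USING 'ORCL_SOURCE';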

Using an LKM and a mono-connection IKM

If there is no dedicated multi-connection IKM, use a standard exporting LKM in combination with a standard mono-connection IKM. The exporting LKM is used to load the flow table from the staging area to the target. The mono-connection IKM is used to integrate the data flow into the target table.

Oracle Data Integrator supports any ANSI SQL-92 standard compliant technology as the source of an ETL-style interface. The staging area and the target must be Oracle.

See Table 2-5 for more information on when to use the combination of a standard exporting LKM and a mono-connection IKM.

To use an LKM and a mono-connection IKM in an ETL-style interface:

  1. Create an integration interface with the staging area and target on Oracle using the standard procedure as described in "Creating an Interface" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section describes only the ETL-style specific steps.

  2. In the Definition tab of the Interface Editor, select Staging Area different from Target and select the logical schema of the source tables or a third schema.

  3. In the Flow tab, select one of the Source Sets.

  4. In the Property Inspector, select an LKM from the LKM Selector list to load from the source(s) to the staging area. See Table 2-5 to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. Select the Staging Area. In the Property Inspector, select an LKM from the LKM Selector list to load from the staging area to the target. See Table 2-5 to determine the LKM you can use.

  7. Optionally, modify the options.

  8. Select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, select a standard mono-connection IKM from the IKM Selector list to update the target. See Table 2-5 to determine the IKM you can use.

Table 2-5 KM Guidelines for ETL-Style Interfaces with Oracle Data

  • Source: ANSI SQL-92 standard compliant. Staging area: Oracle. Target: Oracle.
    Exporting LKM: NA. IKM: IKM Oracle to Oracle Control Append (DBLINK). KM strategy: Multi-connection IKM.
    Use this KM strategy to perform control append and to use DB*Links for performance reasons. Supports flow and static control.

  • Source: ANSI SQL-92 standard compliant. Staging area: Oracle or any ANSI SQL-92 standard compliant database. Target: Oracle or any ANSI SQL-92 standard compliant database.
    Exporting LKM: NA. IKM: IKM SQL to SQL Incremental Update. KM strategy: Multi-connection IKM.
    Allows an incremental update strategy with no temporary target-side objects. Use this KM if it is not possible to create temporary objects in the target server. The updates are made directly from source to target, without temporary objects on the target. The configuration where the flow table is created on the staging area and not in the target should be used only for small volumes of data. Supports flow and static control.

  • Source: Oracle. Staging area: Oracle. Target: Oracle.
    Exporting LKM: LKM Oracle to Oracle (DBLINK). IKM: IKM Oracle Slowly Changing Dimension. KM strategy: LKM + standard IKM.

  • Source: Oracle. Staging area: Oracle. Target: Oracle.
    Exporting LKM: LKM Oracle to Oracle (DBLINK). IKM: IKM Oracle Incremental Update. KM strategy: LKM + standard IKM.

  • Source: Oracle. Staging area: Oracle. Target: Oracle.
    Exporting LKM: LKM Oracle to Oracle (DBLINK). IKM: IKM Oracle Incremental Update (MERGE). KM strategy: LKM + standard IKM.

2.9 Troubleshooting

This section provides information on how to troubleshoot problems that you might encounter when using Oracle Knowledge Modules. It contains the following topics:

  • Troubleshooting Oracle Database Errors

  • Common Problems and Solutions

2.9.1 Troubleshooting Oracle Database Errors

Errors often appear in Oracle Data Integrator in the following form:

java.sql.SQLException: ORA-01017: invalid username/password; logon denied
at ...
at ...
...

The java.sql.SQLException code simply indicates that a query was made to the database through the JDBC driver, which has returned an error. This error is frequently a database or driver error, and must be interpreted as such.

Only the Oracle error message itself (ORA-01017 in the example above) needs to be considered first, and should be searched for in the Oracle documentation. Because it contains an Oracle-specific error code, the error can be identified immediately.

If such an error is identified in the execution log, it is necessary to analyze the SQL code sent to the database to find the source of the error. The code is displayed in the Description tab of the erroneous task.

2.9.2 Common Problems and Solutions

This section describes common problems and solutions.

  • ORA-12154 TNS:could not resolve service name

    This is a TNS alias resolution problem. It may occur when using the OCI driver, or with a KM that uses database links. Check the configuration of the TNS aliases on the machines.

  • ORA-02019 connection description for remote database not found

    The KM you are using relies on database links that do not exist. Check the KM options for creating the database links.

  • ORA-00900 invalid SQL statement

    ORA-00923 FROM Keyword not found where expected

    The code generated by the interface, or typed in a procedure, is invalid for Oracle. This is usually related to an input error in the mapping, filter or join. The typical case is a missing quote or an unclosed bracket.

    Another frequent cause is the use of non-SQL syntax, such as calling an Oracle stored procedure with:

    EXECUTE SCHEMA.PACKAGE.PROC(PARAM1, PARAM2)

    The valid SQL call for a stored procedure is:

    BEGIN
    SCHEMA.PACKAGE.PROC(PARAM1, PARAM2);
    END;
    

    The EXECUTE SCHEMA.PACKAGE.PROC(PARAM1, PARAM2) syntax is specific to SQL*Plus and does not work with JDBC.

  • ORA-00904 invalid column name

    A keying error in a mapping, join, or filter: a string that is not a column name is interpreted as a column name, or a column name is misspelled.

    This error may also appear when accessing an error table associated with a datastore whose structure has been recently modified. It is necessary to propagate the structure modification to the error table, or to drop the error tables and let Oracle Data Integrator recreate them in the next execution.

  • ORA-00903 invalid table name

    The table used (source or target) does not exist in the Oracle schema. Check the mapping's logical/physical schema for the context, and check that the table physically exists in the schema accessed for this context.

  • ORA-00972 identifier is too long

    Oracle limits the length of object identifiers (usually to 30 characters). This error appears when a table created during the execution of the interface exceeded this limit (see the execution log for more details).

    Check in the Topology, for the Oracle technology, that the maximum lengths for object names (tables and columns) correspond to your Oracle configuration.

  • ORA-01790 expression must have same datatype as corresponding expression

    You are trying to combine two values of different datatypes that cannot be implicitly converted (in a mapping, a join, and so forth). Use explicit conversion functions on these values, as in the example below.
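
As a hypothetical illustration, the following UNION would fail with ORA-01790 if CREATED_DATE is a DATE in the first branch and CREATED_TEXT is a VARCHAR2 in the second; the explicit TO_CHAR conversion aligns the datatypes:

    SELECT cust_id, TO_CHAR(created_date, 'YYYY-MM-DD') AS created
    FROM customers_v1
    UNION ALL
    SELECT cust_id, created_text AS created
    FROM customers_v2;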