2 Oracle Database

This chapter describes how to work with Oracle Database in Oracle Data Integrator.

This chapter includes the following sections:

Introduction

Oracle Data Integrator (ODI) seamlessly integrates data in an Oracle Database. All Oracle Data Integrator features are designed to work best with the Oracle Database engine, including reverse-engineering, changed data capture, data quality, and mappings.

Concepts

The Oracle Database concepts map to the Oracle Data Integrator concepts as follows: an Oracle instance corresponds to a data server in Oracle Data Integrator. Within this instance, a schema maps to an Oracle Data Integrator physical schema. A set of related objects within one schema corresponds to a data model, and each table, view, or synonym appears as an ODI datastore with its attributes, columns, and constraints.

Oracle Data Integrator uses Java Database Connectivity (JDBC) to connect to an Oracle database instance.

Knowledge Modules

Oracle Data Integrator provides the Knowledge Modules (KMs) listed in the following table for handling Oracle data. The KMs use Oracle-specific features. It is also possible to use the generic SQL KMs with the Oracle Database. See Generic SQL for more information.

Table 2-1 Oracle KMs

Knowledge Module Description

RKM Oracle

Reverse-engineers tables, views, columns, primary keys, non-unique indexes, and foreign keys.

JKM Oracle 11g Consistent (Streams)

Creates the journalizing infrastructure for consistent set journalizing on Oracle 11g tables, using Oracle Streams.

This KM is deprecated.

JKM Oracle Consistent

Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers.

JKM Oracle Consistent (Update Date)

Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables.

JKM Oracle Simple

Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers.

JKM Oracle to Oracle Consistent (OGG Online)

Creates and manages the ODI CDC framework infrastructure when using Oracle GoldenGate for CDC. See Oracle GoldenGate for more information.

CKM Oracle

Checks data integrity against constraints defined on an Oracle table.

LKM File to Oracle (EXTERNAL TABLE)

Loads data from a file to an Oracle staging area using the EXTERNAL TABLE SQL Command.

LKM File to Oracle (SQLLDR)

Loads data from a file to an Oracle staging area using the SQL*Loader command line utility.

LKM MSSQL to Oracle (BCP SQLLDR)

Loads data from a Microsoft SQL Server to Oracle database (staging area) using the BCP and SQL*Loader utilities.

LKM Oracle BI to Oracle (DBLINK)

Loads data from any Oracle BI physical layer to an Oracle target database using database links. See Oracle Business Intelligence Enterprise Edition for more information.

LKM Oracle to Oracle (DBLINK)

Loads data from an Oracle source database to an Oracle staging area database using database links.

LKM Oracle to Oracle Pull (DB Link)

Loads data from an Oracle source database to an Oracle staging area database using database links. It does not create a view in the source database, nor a synonym in the staging database. Built-in KM.

LKM Oracle to Oracle Push (DB Link)

Loads and integrates data into an Oracle target table using database links. It does not create a synonym in the staging database. Any settings in the IKM are ignored. Built-in KM.

LKM Oracle to Oracle (datapump)

Loads data from an Oracle source database to an Oracle staging area database using external tables in the datapump format.

LKM SQL to Oracle

Loads data from any ANSI SQL-92 source database to an Oracle staging area.

LKM SAP BW to Oracle (SQLLDR)

Loads data from SAP BW systems to an Oracle staging area using the SQL*Loader utility. See the Application Adapters Guide for Oracle Data Integrator for more information.

LKM SAP ERP to Oracle (SQLLDR)

Loads data from SAP ERP systems to an Oracle staging area using the SQL*Loader utility. See the Application Adapters Guide for Oracle Data Integrator for more information.

IKM Oracle Incremental Update

Integrates data in an Oracle target table in incremental update mode. Supports Flow Control.

IKM Oracle Incremental Update (MERGE)

Integrates data in an Oracle target table in incremental update mode, using a MERGE statement. Supports Flow Control.

IKM Oracle Incremental Update (PL SQL)

Integrates data in an Oracle target table in incremental update mode using PL/SQL. Supports Flow Control.

IKM Oracle Insert

Integrates data into an Oracle target table in append mode. The data is loaded directly in the target table with a single INSERT SQL statement. Built-in KM.

IKM Oracle Update

Integrates data into an Oracle target table in incremental update mode. The data is loaded directly into the target table with a single UPDATE SQL statement. Built-in KM.

IKM Oracle Merge

Integrates data into an Oracle target table in incremental update mode. The data is loaded directly into the target table with a single MERGE SQL statement. Built-in KM.

IKM Oracle Multi-Insert

Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert statement (MTI). This IKM can be utilized in a single mapping to load multiple targets. Built-in KM.

IKM Oracle Multi Table Insert

Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert statement (MTI). Supports Flow Control.

IKM Oracle Slowly Changing Dimension

Integrates data in an Oracle target table used as a Type II Slowly Changing Dimension. Supports Flow Control.

IKM Oracle Spatial Incremental Update

Integrates data into an Oracle (9i or above) target table in incremental update mode using the MERGE DML statement. This module supports the SDO_GEOMETRY datatype. Supports Flow Control.

IKM Oracle to Oracle Control Append (DBLINK)

Integrates data from one Oracle instance into an Oracle target table on another Oracle instance in control append mode. Supports Flow Control.

This IKM is typically used for ETL configurations: source and target tables are on different Oracle instances and the mapping's staging area is set to the logical schema of the source tables or a third schema.

SKM Oracle

Generates data access Web services for Oracle databases. For information about how to use this SKM, see Generating and Deploying Data Services in Administering Oracle Data Integrator.

Installation and Configuration

Make sure you have read the information in this section before you start using the Oracle Knowledge Modules:

System Requirements and Certifications

Before performing any installation, you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.

The list of supported platforms and versions is available on Oracle Technology Network (OTN):

http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html.

Technology Specific Requirements

Some of the Knowledge Modules for Oracle use specific features of this database. This section lists the requirements related to these features.

Using the SQL*Loader Utility

This section describes the requirements that must be met before using the SQL*Loader utility with Oracle database.

  • The Oracle Client and the SQL*Loader utility must be installed on the machine running the Oracle Data Integrator Agent.

  • The server names defined in the Topology must match the Oracle TNS name used to access the Oracle instances.

  • A specific log file is created by SQL*Loader. We recommend looking at this file in case of error. Control files (CTL), log files (LOG), discard files (DSC), and bad files (BAD) are placed in the work directory defined in the physical schema of the source files.

  • Using the DIRECT mode requires that the Oracle Data Integrator Agent run on the target Oracle server machine. The source file must also be on that machine.
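As a rough illustration of what the SQLLDR-based KMs drive, a minimal SQL*Loader control file might look like the following sketch. The paths, schema, table, and column names are assumptions for illustration only, not the KM's actual generated output:

```
-- Hypothetical control file (load_customers.ctl); names are illustrative.
OPTIONS (DIRECT=FALSE, ERRORS=50)
LOAD DATA
INFILE '/data/src/customers.dat'
BADFILE '/data/src/customers.bad'
DISCARDFILE '/data/src/customers.dsc'
APPEND INTO TABLE ODI_WORK.C$_CUSTOMERS
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
(CUST_ID, CUST_NAME, LAST_UPDATE DATE "YYYY-MM-DD")
```

The LOG, BAD, and DSC files referenced here are the files mentioned above that land in the work directory of the physical schema.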

Using External Tables

This section describes the requirements that must be met before using external tables in Oracle database.

  • The file to be loaded by the External Table command needs to be accessible from the Oracle instance. This file must be located on the file system of the server machine, reachable through a Universal Naming Convention (UNC) path, or stored locally.

  • For performance reasons, it is recommended to install the Oracle Data Integrator Agent on the target server machine.
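As a hedged sketch of the mechanism (not the KM's actual generated DDL), loading a file through an external table amounts to a definition like the following; the directory object, table, and file names are assumptions:

```sql
-- Illustrative ORGANIZATION EXTERNAL definition of the kind the
-- LKM File to Oracle (EXTERNAL TABLE) relies on; all names are assumptions.
CREATE TABLE ODI_WORK.C$_CUSTOMERS (
  CUST_ID    NUMBER,
  CUST_NAME  VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ODI_SRC_DIR      -- directory object visible to the instance
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
  )
  LOCATION ('customers.dat')
);
```

Because the instance reads the file itself, the file must be visible from the database server, which is why the accessibility requirement above matters.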

Using Oracle Wallet

Oracle Wallet provides a simple and easy method to manage database credentials across multiple domains.

Attention:

This section applies only to Data Integration Platform Cloud.

This section describes the requirements that are necessary for creating and managing wallet in an Oracle database environment.

Note:

For more details on creating and managing wallets, refer to the Managing Oracle Wallets section of the Enterprise User Security Administrator's Guide.

This environment provides all the necessary commands and libraries, including the $ORACLE_HOME/oracle_common/bin/mkstore command.

  • Oracle recommends that you create and manage wallets in a database environment. This task is often completed by a database administrator, and the wallet is provided to the client. You can also install the Oracle Client Runtime package to provide the necessary commands and libraries to create and manage Oracle Wallet.

    Create a wallet on the client using the command mkstore -wrl <wallet_location> -create, where wallet_location is the path to the directory where you want to create and store the wallet.

    For example: mkstore -wrl /scratch/ewallet -createCredential jdbc:oracle:thin:@kkm00ebs.in.oracle.com:1523:oditest odiUser odi, where:

    • kkm00ebs.in.oracle.com is the host name

    • 1523 is the port number

    • oditest is the SID

    • odiUser denotes the user name

    • odi denotes the password for the user

    Note:

    You can store multiple credentials for multiple databases in a single client wallet. You cannot store multiple credentials (for logging into multiple schema) for the same database in the same wallet. If you have multiple login credentials for the same database, then they must be stored in separate wallets.
  • To add database login credentials to an existing client wallet, use the command mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>, where:

    • wallet_location is the path to the directory where you have created the wallet

    • db_connect_string must be identical to the connection string that you specify in the URL used in the datasource definition (the part of the string that follows the @).

      It can be either the short form or the long form of the URL.

      • For Example — jdbc:oracle:thin:@kkm00ebs.in.oracle.com:1523/oditest

        or
      • (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost-scan)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

        For example: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<hostname or ipaddress>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<db service>))(security=(ssl_server_cert_dn="<certificate_info>")))

Note:

Oracle supports two types of wallets: password-protected wallets (ewallet.p12) and auto-login wallets (cwallet.sso). An auto-login wallet is used to retrieve connection details and establish a secure connection to an instance by verifying the certificate details associated with that instance.

Connectivity Requirements

This section lists the requirements for connecting to an Oracle Database.

JDBC Driver

Oracle Data Integrator is installed with a default version of the Oracle Type 4 JDBC driver. This driver uses the TCP/IP network layer directly and requires no other installed component or configuration.

It is possible to connect to an Oracle server through the Oracle JDBC OCI driver, or even using ODBC. For performance reasons, it is recommended to use the Type 4 driver.

Connection Information

You must ask the Oracle DBA for the following information:

  • Network Name or IP address of the machine hosting the Oracle Database.

  • Listening port of the Oracle listener.

  • Name of the Oracle Instance (SID) or Service Name

  • Login and password of an Oracle User.

Setting up the Topology

Setting up the Topology consists of:

  1. Creating an Oracle Data Server
  2. Creating an Oracle Physical Schema

Creating an Oracle Data Server

An Oracle data server corresponds to an Oracle Database Instance connected with a specific Oracle user account. This user will have access to several schemas in this instance, corresponding to the physical schemas in Oracle Data Integrator created under the data server.

Creation of the Data Server

Create a data server for the Oracle technology using the standard procedure, as described in Creating a Data Server of Administering Oracle Data Integrator. This section details only the fields required or specific for defining an Oracle data server:

  1. In the Definition tab:

    Data Server

    • Name: Name of the data server that will appear in Oracle Data Integrator.

    • Instance/dblink (Data Server): TNS alias used for this Oracle instance. It will be used to identify the Oracle instance when using database links and SQL*Loader.

    Connection

    • User/Password: Oracle user (with its password), having select privileges on the source schemas, select/insert privileges on the target schemas and select/insert/object creation privileges on the work schemas that will be indicated in the Oracle physical schemas created under this data server.

    • JNDI Connection: Select this check-box to configure the JNDI connection settings. Navigate to the JNDI tab, and fill in the required fields.

    • Use Credential File: Select this check box to upload the connection details directly from a pre-configured wallet file or credential zip file.

    Credential Details

    Attention:

    This section applies only to Data Integration Platform Cloud.
    • Credential File: Click the browse icon present beside the Credential File text box, to browse for the location of the required wallet file containing the connection details.

      By default, the credential location points to the same wallet file (ewallet.p12) or credential zip file (auto-login wallet) used for login, which is present in <homedir>/.odi/oracledi/ewallet. You can also browse to any other location where a credential file with the required database connection details is stored.

      Based on your selection, the following options appear:

      1. If you have selected a password protected wallet file (ewallet.p12),

        • The Credential File Password text box appears. It represents the password that you configured during wallet creation and it is required to open the wallet file. Enter the relevant password of the wallet file in this text box.

          Note:

          ODI supervisor can distribute ewallet.p12 and wallet password to other ODI users.
        • Connection Details – If the entered password is valid, ODI retrieves a list of all available database connection details that are present in the wallet and displays it. Click the Connection Details drop down arrow to choose the required database connection from the list.

      2. If you have selected a credential zip file (auto login wallet),

        • The Connection Details text box appears. Click the Connection Details drop down arrow to choose the required connection URL from the list of available connection URLs retrieved from tnsnames.ora.

    • Upon selecting the required connection URL, the JDBC Driver, JDBC URL, Username and Password fields are disabled and auto-populated with the details retrieved from the pre-configured wallet file or credential zip file. You can change them only by selecting a different connection from the wallet file (ewallet.p12) or credential zip (tnsnames.ora).

    • Click Save to save the data server details.

    • Click Test Connection to test the established connection.

  2. If the data server supports JNDI access, fill-in the details of the JNDI tab:
    • JNDI authentication: Select the required option from the following:

      • None: Anonymous access to the naming or directory service

      • Simple: Authenticated access, non-encrypted

      • CRAM-MD5: Authenticated access, encrypted MD5

      • <other value>: Authenticated access, encrypted according to <other value>

    • JNDI User/Password: User/password connecting to the JNDI directory

    • JNDI Protocol: Protocol used for the connection

      Note:

      Only the most common protocols are listed here. This is not an exhaustive list.
      • LDAP: Access to an LDAP directory

      • SMQP: Access to a SwiftMQ MOM directory

      • <other value>: access following the sub-protocol <other value>

    • JNDI Driver: The driver allowing the JNDI connection

      For Example — Sun LDAP directory: com.sun.jndi.ldap.LdapCtxFactory

    • JNDI URL: The URL allowing the JNDI connection

      For example: ldap://suse70:389/o=linuxfocus.org

    • JNDI Resource: The directory element containing the connection parameters

      For example: cn=sampledb

  3. If the data server supports JDBC access, fill-in the details of the JDBC tab:
    • JDBC Driver: Name of the JDBC driver used for connecting to the data server

      oracle.jdbc.OracleDriver

    • JDBC URL: URL allowing you to connect to the data server.

      jdbc:oracle:thin:@<network name or ip address of the Oracle machine>:<port of the Oracle listener>:<name of the Oracle instance>

      To connect an Oracle RAC instance with the Oracle JDBC thin driver, use an Oracle RAC database URL as shown in the following example:

      jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)
      (ADDRESS=(PROTOCOL=TCP)(HOST=host1) (PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=host2) (PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=service)))

Creating an Oracle Physical Schema

Create an Oracle physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Create for this physical schema a logical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator and associate it in a given context.

Setting Up an Integration Project

Setting up a project using the Oracle Database follows the standard procedure. See Creating an Integration Project of the Developing Integration Projects with Oracle Data Integrator.

It is recommended to import the following knowledge modules into your project for getting started with Oracle Database:

  • RKM Oracle

  • CKM Oracle

  • LKM SQL to Oracle

  • LKM File to Oracle (SQLLDR)

  • LKM File to Oracle (EXTERNAL TABLE)

  • IKM Oracle Incremental Update

Creating and Reverse-Engineering an Oracle Model

This section contains the following topics:

Create an Oracle Model

Create an Oracle Model using the standard procedure, as described in Creating a Model of Developing Integration Projects with Oracle Data Integrator.

Reverse-engineer an Oracle Model

Oracle supports both Standard reverse-engineering - which uses only the abilities of the JDBC driver - and Customized reverse-engineering, which uses an RKM to retrieve the structure of the objects directly from the Oracle dictionary.

In most cases, consider starting with the standard JDBC reverse-engineering. Standard reverse-engineering with Oracle retrieves tables, views, columns, primary keys, and references.

Consider switching to customized reverse-engineering for retrieving more metadata. Oracle customized reverse-engineering retrieves the table and view structures, including columns, primary keys, alternate keys, indexes, check constraints, synonyms, and references.

Standard Reverse-Engineering

To perform a Standard Reverse-Engineering on Oracle use the usual procedure, as described in Reverse-engineering a Model of Developing Integration Projects with Oracle Data Integrator.

Customized Reverse-Engineering

To perform a Customized Reverse-Engineering on Oracle with an RKM, use the usual procedure, as described in Reverse-engineering a Model of Developing Integration Projects with Oracle Data Integrator. This section details only the fields specific to the Oracle technology:

In the Reverse Engineer tab of the Oracle Model, select the KM: RKM Oracle.<project name>.

Setting up Changed Data Capture

The ODI Oracle Knowledge Modules support the Changed Data Capture feature. See Using Changed Data of Developing Integration Projects with Oracle Data Integrator for details on how to set up journalizing and how to use captured changes.

Oracle Journalizing Knowledge Modules support Simple Journalizing and Consistent Set Journalizing. The Oracle JKMs use either triggers or Oracle Streams to capture data changes on the source tables.

Oracle Data Integrator provides the Knowledge Modules listed in Table 2-2 for journalizing Oracle tables.

Table 2-2 Oracle Journalizing Knowledge Modules

KM Notes

JKM Oracle 11g Consistent (Streams)

Creates the journalizing infrastructure for consistent set journalizing on Oracle 11g tables, using Oracle Streams.

JKM Oracle Consistent

Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers.

JKM Oracle Consistent (Update Date)

Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables.

JKM Oracle Simple

Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers.

Note that it is also possible to use Oracle GoldenGate to consume changed records from an Oracle database. See Oracle GoldenGate for more information.
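As a rough, simplified sketch of what trigger-based journalizing amounts to (not the KM's actual generated code), a JKM creates a journal table and a trigger on each journalized table. The J$ table layout, trigger body, and all names below are assumptions for illustration:

```sql
-- Simplified journal table for a journalized table APP.CUSTOMERS
-- with primary key CUST_ID; names and columns are illustrative.
CREATE TABLE ODI_WORK.J$CUSTOMERS (
  JRN_SUBSCRIBER VARCHAR2(400),
  JRN_FLAG       VARCHAR2(1),     -- 'I' for insert/update, 'D' for delete
  JRN_DATE       DATE,
  CUST_ID        NUMBER
);

-- Trigger recording every change into the journal table.
CREATE OR REPLACE TRIGGER ODI_WORK.T$CUSTOMERS
AFTER INSERT OR UPDATE OR DELETE ON APP.CUSTOMERS
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO ODI_WORK.J$CUSTOMERS VALUES ('SUNOPSIS', 'D', SYSDATE, :OLD.CUST_ID);
  ELSE
    INSERT INTO ODI_WORK.J$CUSTOMERS VALUES ('SUNOPSIS', 'I', SYSDATE, :NEW.CUST_ID);
  END IF;
END;
```

Consumers of the changes then read the journal table, which is why only the primary key, the change flag, and the change date need to be captured.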

Using the Streams JKMs

The Streams KMs work with the default values. The following are the recommended settings:

  • By default, the AUTO_CONFIGURATION KM option is set to Yes. If set to Yes, the KM provides automatic configuration of the Oracle database and ensures that all prerequisites are met. As this option automatically changes the database initialization parameters, it is not recommended for use in a production environment. You should check the Create Journal step in the Oracle Data Integrator execution log to detect configuration tasks that have not been performed correctly (Warning status).

  • By default, the CONFIGURATION_TYPE option is set to Low Activity. Leave this value if your database has low transactional activity.

    Set this option to Standalone for installation on a standalone database such as a development database or on a laptop.

    Set this option to High Activity if the database is intensively used for transactional processing.

  • By default, the STREAMS_OBJECT_GROUP option is set to CDC. The value entered is used to generate object names that can be shared across multiple CDC sets journalized with this JKM. If the value of this option is CDC, the naming rules listed in Table 2-3 will be applied.

    Note that this option can only take upper case ASCII characters and must not exceed 15 characters.

    Table 2-3 Naming Rules Example for the CDC Group Name

    CDC Group: Naming Convention

    Capture Process: ODI_CDC_C

    Queue: ODI_CDC_Q

    Queue Table: ODI_CDC_QT

    Apply Process: ODI_CDC_A

  • VALIDATE enables extra steps to validate the correct use of the KM. This option checks various requirements without configuring anything (for configuration steps, see the AUTO_CONFIGURATION option). When a requirement is not met, an error message is written to the log and the execution of the JKM is stopped in error.

    By default, this option is set to Yes in order to provide an easier use of this complex KM out of the box.

Using the Update Date JKM

This JKM assumes that a column containing the last update date exists in all the journalized tables. This column name is provided in the UPDATE_DATE_COL_NAME knowledge module option.
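A hedged illustration of the underlying idea: changed rows are those whose last-update column is more recent than the previous capture. The table, column, and bind names below are assumptions, not the JKM's generated code:

```sql
-- Rows changed since the last capture, identified by the column named
-- in the UPDATE_DATE_COL_NAME option (here assumed to be LAST_UPDATE_DATE).
SELECT CUST_ID
FROM   APP.CUSTOMERS
WHERE  LAST_UPDATE_DATE > :LAST_CAPTURE_DATE;
```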

Setting up Data Quality

Oracle Data Integrator provides the CKM Oracle for checking data integrity against constraints defined on an Oracle table. See Flow Control and Static Control in Developing Integration Projects with Oracle Data Integrator.

Oracle Data Integrator provides the Knowledge Module listed in Table 2-4 to perform a check on Oracle. It is also possible to use the generic SQL KMs. See Generic SQL for more information.

Table 2-4 Check Knowledge Modules for Oracle Database

Recommended KM Notes

CKM Oracle

Uses Oracle's Rowid to identify records

Designing a Mapping

You can use Oracle as a source, staging area or a target of a mapping. It is also possible to create ETL-style mappings based on the Oracle technology.

The KM choice for a mapping or a check determines the abilities and performance of this mapping or check. The recommendations in this section help in the selection of the KM for different situations concerning an Oracle data server.

Loading Data from and to Oracle

Oracle can be used as a source, target or staging area of a mapping. The LKM choice in the Mapping's Loading Knowledge Module tab to load data between Oracle and another type of data server is essential for the performance of a mapping.

Loading Data from Oracle

The following KMs implement optimized methods for loading data from an Oracle database to a target or staging area database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technology involved.

Target or Staging Area Technology KM Notes

Oracle

LKM Oracle to Oracle (dblink)

Creates a view on the source server, and synonyms on this view on the target server.

Oracle

LKM Oracle to Oracle Push (DB Link)

Creates a view on the source server, but does not create synonyms on this view on the target server. This KM ignores any settings on the IKM. Built-in KM.

Oracle

LKM Oracle to Oracle Pull (DB Link)

Does not create a view on the source server, or the synonyms on this view on the target server. Built-in KM.

Oracle

LKM Oracle to Oracle (datapump)

Uses external tables in the datapump format.
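The DBLINK strategies above can be sketched in plain SQL. This is a simplified illustration of the classic view-plus-synonym approach, assuming illustrative object names and a database link named MY_SRC_LINK; the Pull and Push variants skip one or both of the first two steps:

```sql
-- On the source instance: a view isolating the source flow.
CREATE VIEW ODI_WORK.C$V_CUSTOMERS AS
  SELECT CUST_ID, CUST_NAME FROM APP.CUSTOMERS;

-- On the target/staging instance: a synonym over the remote view.
CREATE SYNONYM ODI_STAGE.C$_CUSTOMERS
  FOR ODI_WORK.C$V_CUSTOMERS@MY_SRC_LINK;

-- Data is then pulled across the link into the staging table.
INSERT INTO ODI_STAGE.CUST_STAGING
  SELECT * FROM ODI_STAGE.C$_CUSTOMERS;
```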

Loading Data to Oracle

The following KMs implement optimized methods for loading data from a source or staging area into an Oracle database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technology involved.

Source or Staging Area Technology KM Notes

Oracle

LKM Oracle to Oracle (dblink)

Views created on the source server, synonyms on the target.

Oracle

LKM Oracle to Oracle Push (DB Link)

Views not created on the source server, synonyms created on the target. Built-in KM.

Oracle

LKM Oracle to Oracle Pull (DB Link)

Views not created on the source server, synonyms not created on the target. Built-in KM.

SAP BW

LKM SAP BW to Oracle (SQLLDR)

Uses Oracle's bulk loader. File cannot be Staging Area.

SAP ERP

LKM SAP ERP to Oracle (SQLLDR)

Uses Oracle's bulk loader. File cannot be Staging Area.

Files

LKM File to Oracle (EXTERNAL TABLE)

Loads file data using external tables.

Files

LKM File to Oracle (SQLLDR)

Uses Oracle's bulk loader. File cannot be Staging Area.

Oracle

LKM Oracle to Oracle (datapump)

Uses external tables in the datapump format.

Oracle BI

LKM Oracle BI to Oracle (DBLINK)

Creates synonyms for the target staging table and uses the OBIEE populate command.

MSSQL

LKM MSSQL to Oracle (BCP SQLLDR)

Unloads data from SQL Server using BCP and loads it into Oracle using SQL*Loader.

All

LKM SQL to Oracle

Faster than the Generic LKM (Uses Statistics)

Integrating Data in Oracle

The data integration strategies in Oracle are numerous and cover several modes. The IKM choice in the mapping's physical diagram determines the performance and possibilities for integrating.

The following KMs implement optimized methods for integrating data into an Oracle target. In addition to these KMs, you can also use the Generic SQL KMs.

Mode KM Note

Update

IKM Oracle Incremental Update

Optimized for Oracle. Supports Flow Control.

Update

IKM Oracle Update

Optimized for Oracle. Oracle UPDATE statement KM. Built-in KM.

Update

IKM Oracle Merge

Optimized for Oracle. Oracle MERGE statement KM. Built-in KM.

Update

IKM Oracle Spatial Incremental Update

Supports SDO_GEOMETRY datatypes. Supports Flow Control.

Update

IKM Oracle Incremental Update (MERGE)

Recommended for very large volumes of data because of bulk set-based MERGE feature. Supports Flow Control.

Update

IKM Oracle Incremental Update (PL SQL)

Uses PL/SQL and supports LONG and BLOB datatypes in incremental update mode. Supports Flow Control.

Specific

IKM Oracle Slowly Changing Dimension

Supports type 2 Slowly Changing Dimensions. Supports Flow Control.

Specific

IKM Oracle Multi Table Insert

Supports multi-table insert statements. Supports Flow Control.

Append

IKM Oracle to Oracle Control Append (DBLINK)

Optimized for Oracle using DB*Links. Supports Flow Control.

Append

IKM Oracle Insert

Optimized for Oracle. Oracle INSERT statement KM. Built-in KM. Supports Flow Control.

Append

IKM Oracle Multi-Insert

Optimized for Oracle. Oracle multi-target INSERT statement KM, applied to each target. Built-in KM.
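The MERGE-based incremental update strategies above can be illustrated with a simplified statement of the kind such KMs generate. All table and column names here are assumptions; the actual generated statement depends on the mapping:

```sql
-- Sketch of an incremental-update MERGE from a flow table into the target.
MERGE INTO TARGET_SCHEMA.CUSTOMERS T
USING ODI_WORK.I$_CUSTOMERS S          -- flow table built during loading
ON (T.CUST_ID = S.CUST_ID)
WHEN MATCHED THEN
  UPDATE SET T.CUST_NAME        = S.CUST_NAME,
             T.LAST_UPDATE_DATE = SYSDATE
WHEN NOT MATCHED THEN
  INSERT (CUST_ID, CUST_NAME, LAST_UPDATE_DATE)
  VALUES (S.CUST_ID, S.CUST_NAME, SYSDATE);
```

A single set-based MERGE is why the (MERGE) variant is recommended for very large volumes: one statement both updates existing rows and inserts new ones.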

Using Slowly Changing Dimensions

For using slowly changing dimensions, make sure to set the Slowly Changing Dimension value for each column of the Target datastore. This value is used by the IKM Oracle Slowly Changing Dimension to identify the Surrogate Key, Natural Key, Overwrite or Insert Column, Current Record Flag and Start/End Timestamps columns.

Using Multi Table Insert

The IKM Oracle Multi Table Insert is used to integrate data from one source into one or many Oracle target tables with a multi-table insert statement. This IKM must be used in mappings that are sequenced in a Package. This Package must meet the following conditions:

  • The first mapping of the Package must have a temporary target and the KM option DEFINE_QUERY set to YES.

    This first mapping defines the structure of the SELECT clause of the multi-table insert statement (that is the source flow).

  • Subsequent mappings must source from this temporary datastore and have the KM option IS_TARGET_TABLE set to YES.

  • The last mapping of the Package must have the KM option EXECUTE set to YES in order to run the multi-table insert statement.

  • Do not set Use Temporary Mapping as Derived Table (Sub-Select) to true on any of the mappings.

If large amounts of data are appended, consider setting the KM option OPTIMIZER_HINT to /*+ APPEND */.
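A simplified sketch of the resulting multi-table insert, with hypothetical target tables and columns; the SELECT corresponds to the source flow defined by the first mapping (DEFINE_QUERY set to YES):

```sql
-- Conditional multi-table insert routing one source flow to two targets.
INSERT /*+ APPEND */ ALL
  WHEN ORDER_STATUS = 'OPEN'
    THEN INTO OPEN_ORDERS   (ORDER_ID, AMOUNT) VALUES (ORDER_ID, AMOUNT)
  WHEN ORDER_STATUS = 'CLOSED'
    THEN INTO CLOSED_ORDERS (ORDER_ID, AMOUNT) VALUES (ORDER_ID, AMOUNT)
SELECT ORDER_ID, ORDER_STATUS, AMOUNT
FROM   ODI_WORK.C$_ORDERS;
```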

Using Spatial Datatypes

To perform incremental update operations on Oracle Spatial datatypes, you need to declare the SDO_GEOMETRY datatype in the Topology and use the IKM Oracle Spatial Incremental Update. When comparing two columns of SDO_GEOMETRY datatype, the GEOMETRY_TOLERANCE option is used to define the error margin inside which the geometries are considered to be equal. See the Oracle Spatial User's Guide and Reference for more information.
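As a hedged sketch of a tolerance-based geometry comparison, the documented SDO_GEOM.RELATE function can test equality within a tolerance; the tolerance value (0.005), tables, and columns below are assumptions, not the KM's generated code:

```sql
-- Rows whose geometry differs from the incoming flow, within tolerance.
SELECT S.SITE_ID
FROM   TARGET_SCHEMA.SITES T, ODI_WORK.I$_SITES S
WHERE  T.SITE_ID = S.SITE_ID
AND    SDO_GEOM.RELATE(T.LOCATION, 'EQUAL', S.LOCATION, 0.005) <> 'EQUAL';
```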

Designing an ETL-Style Mapping

See Creating a Mapping in Developing Integration Projects with Oracle Data Integrator for generic information on how to design mappings. This section describes how to design an ETL-style mapping where the staging area is an Oracle database or any ANSI SQL-92 compliant database and the target is an Oracle database.

In an ETL-style mapping, ODI processes the data in a staging area, which is different from the target. Oracle Data Integrator provides two ways for loading the data from an Oracle staging area to an Oracle target:

Depending on the KM strategy that is used, flow and static control are supported.

Using a Multi-connection IKM

A multi-connection IKM allows updating a target where the staging area and sources are on different data servers.

Oracle Data Integrator provides the following multi-connection IKM for handling Oracle data: IKM Oracle to Oracle Control Append (DBLINK). You can also use the generic SQL multi-connection IKMs. See Generic SQL for more information.

See Table 2-5 for more information on when to use a multi-connection IKM.

To use a multi-connection IKM in an ETL-style mapping:

  1. Create a mapping with the staging area on Oracle or an ANSI SQL-92 compliant technology and the target on Oracle using the standard procedure as described in Creating a Mapping in Developing Integration Projects with Oracle Data Integrator. This section describes only the ETL-style specific steps.

  2. Change the staging area for the mapping to the logical schema of the source tables or a third schema. See Configuring Execution Locations of Developing Integration Projects with Oracle Data Integrator for information about how to change the staging area.

  3. In the Physical diagram, select an access point. The Property Inspector opens for this object.

  4. In the Loading Knowledge Module tab, select an LKM to load from the source(s) to the staging area. See Table 2-5 to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. In the Physical diagram, select the Target by clicking its title. The Property Inspector opens for this object.

    In the Integration Knowledge Module tab, select an ETL multi-connection IKM to load the data from the staging area to the target. See Table 2-5 to determine the IKM you can use.

Note the following when setting the KM options:

  • For IKM Oracle to Oracle Control Append (DBLINK)

    • If large amounts of data are appended, set the KM option OPTIMIZER_HINT to /*+ APPEND */.

    • Set AUTO_CREATE_DB_LINK to true to automatically create the database link on the target schema. If AUTO_CREATE_DB_LINK is set to false (the default), a database link with this name must already exist in the target schema.

    • If you set the options FLOW_CONTROL and STATIC_CONTROL to Yes, select a CKM in the Check Knowledge Module tab. If FLOW_CONTROL is set to Yes, the flow table is created on the target.
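When AUTO_CREATE_DB_LINK is left at false, the database link expected by the IKM must be created manually in the target schema. A hedged sketch; the link name, user, password, and TNS alias are placeholders, not values ODI requires:

```sql
-- Hypothetical manual creation of the database link used by the DBLINK IKM.
CREATE DATABASE LINK source_db_link
  CONNECT TO odi_staging IDENTIFIED BY staging_password
  USING 'SOURCE_TNS_ALIAS';
```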

Using an LKM and a mono-connection IKM

If there is no dedicated multi-connection IKM, use a standard exporting LKM in combination with a standard mono-connection IKM. The exporting LKM is used to load the flow table from the staging area to the target. The mono-connection IKM is used to integrate the data flow into the target table.

Oracle Data Integrator supports any ANSI SQL-92 standard compliant technology as a source of an ETL-style mapping. The staging area and the target are Oracle.

See Table 2-5 for more information on when to use the combination of a standard exporting LKM and a mono-connection IKM.

To use an LKM and a mono-connection IKM in an ETL-style mapping:

  1. Create a mapping with the staging area and target on Oracle using the standard procedure as described in Creating a Mapping in Developing Integration Projects with Oracle Data Integrator. This section describes only the ETL-style specific steps.
  2. Change the staging area for the mapping to the logical schema of the source tables or a third schema. See Configuring Execution Locations of Developing Integration Projects with Oracle Data Integrator for information about how to change the staging area.
  3. In the Physical diagram, select an access point. The Property Inspector opens for this object.
  4. In the Loading Knowledge Module tab, select an LKM to load from the source(s) to the staging area. See Table 2-5 to determine the LKM you can use.
  5. Optionally, modify the KM options.
  6. Select the access point for the Staging Area. The Property Inspector for this object appears.
  7. In the Loading Knowledge Module tab, select an LKM to load from the staging area to the target. See Table 2-5 to determine the LKM you can use.
  8. Optionally, modify the KM options.
  9. Select the Target by clicking its title. The Property Inspector opens for this object.

    In the Integration Knowledge Module tab, select a standard mono-connection IKM to update the target. See Table 2-5 to determine the IKM you can use.

Table 2-5 KM Guidelines for ETL-Style Mappings with Oracle Data

Source: ANSI SQL-92 standard compliant
Staging Area: Oracle
Target: Oracle
Exporting LKM: NA
IKM: IKM Oracle to Oracle Control Append (DBLINK)
KM Strategy: Multi-connection IKM
Comment: Use this KM strategy to perform control append or to use DB*Links for performance reasons. Supports flow and static control.

Source: ANSI SQL-92 standard compliant
Staging Area: Oracle or any ANSI SQL-92 standard compliant database
Target: Oracle or any ANSI SQL-92 standard compliant database
Exporting LKM: NA
IKM: IKM SQL to SQL Incremental Update
KM Strategy: Multi-connection IKM
Comment: Allows an incremental update strategy with no temporary target-side objects. Use this KM if it is not possible to create temporary objects in the target server. The updates are made directly from source to target, without temporary objects on the target. The configuration where the flow table is created on the staging area and not in the target should be used only for small volumes of data. Supports flow and static control.

Source: Oracle
Staging Area: Oracle
Target: Oracle
Exporting LKM: LKM to Oracle to Oracle (DBLINK)
IKM: IKM Oracle Slowly Changing Dimension
KM Strategy: LKM + standard IKM
Comment: NA

Source: Oracle
Staging Area: Oracle
Target: Oracle
Exporting LKM: LKM to Oracle to Oracle (DBLINK)
IKM: IKM Oracle Incremental Update
KM Strategy: LKM + standard IKM
Comment: NA

Source: Oracle
Staging Area: Oracle
Target: Oracle
Exporting LKM: LKM to Oracle to Oracle (DBLINK)
IKM: IKM Oracle Incremental Update (MERGE)
KM Strategy: LKM + standard IKM
Comment: NA

Troubleshooting

This section provides information on how to troubleshoot problems that you might encounter when using Oracle Knowledge Modules. It contains the following topics:

Troubleshooting Oracle Database Errors

Errors often appear in Oracle Data Integrator in the following form:

java.sql.SQLException: ORA-01017: invalid username/password; logon denied
at ...
at ...
...

The java.sql.SQLException code simply indicates that a query was made to the database through the JDBC driver and returned an error. This is frequently a database or driver error, and must be interpreted as such.

Only the database error message itself, here ORA-01017: invalid username/password; logon denied, needs to be considered first. Search for it in the Oracle documentation. If it contains an error code specific to Oracle (ORA-xxxxx), as in this example, the error can be identified immediately.

If such an error is identified in the execution log, analyze the SQL code sent to the database to find the source of the error. The code is displayed in the Description tab of the erroneous task.

Common Problems and Solutions

This section describes common problems and solutions.

  • ORA-12154 TNS:could not resolve service name

    This is a TNS alias resolution problem. It may occur when using the OCI driver, or with a KM that uses database links. Check the configuration of the TNS aliases on the machines.

  • ORA-02019 connection description for remote database not found

    A KM is using a database link that does not exist. Check the KM options for creating the database links.

  • ORA-00900 invalid SQL statement

    ORA-00923 FROM Keyword not found where expected

    The code generated by the mapping, or typed in a procedure, is invalid for Oracle. This is usually related to an input error in the mapping, filter, or join. The typical case is a missing quote or an unclosed bracket.

    A frequent cause is also the use of non-SQL syntax, such as calling an Oracle stored procedure with the syntax

    EXECUTE SCHEMA.PACKAGE.PROC(PARAM1, PARAM2).
    

    The valid SQL call for a stored procedure is:

    BEGIN
    SCHEMA.PACKAGE.PROC(PARAM1, PARAM2);
    END;
    

    The syntax EXECUTE SCHEMA.PACKAGE.PROC(PARAM1, PARAM2) is specific to SQL*Plus and does not work with JDBC.

  • ORA-00904 invalid column name

    Keying error in a mapping, join, or filter: a string that is not a column name is interpreted as a column name, or a column name is misspelled.

    This error may also appear when accessing an error table associated with a datastore whose structure was recently modified. Propagate the modification to the error table, or drop the error tables and let Oracle Data Integrator recreate them in the next execution.

  • ORA-00903 invalid table name

    The table used (source or target) does not exist in the Oracle schema. Check the mapping logical/physical schema for the context, and check that the table physically exists on the schema accessed for this context.

  • ORA-00972 identifier is too long

    Oracle limits the length of object identifiers (usually to 30 characters). This error appears when a table created during the execution of the mapping exceeds this limit (see the execution log for more details).

    Check in the Topology, for the Oracle technology, that the maximum lengths for the object names (tables and columns) correspond to your Oracle configuration.

  • ORA-01790 expression must have same datatype as corresponding expression

    You are trying to combine two values of different datatypes that cannot be implicitly converted (in a mapping, a join, and so on). Use explicit conversion functions on these values.
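The explicit conversion mentioned above can be illustrated with a UNION, a common way to trigger ORA-01790. A hypothetical sketch; the table and column names are illustrative:

```sql
-- Without TO_CHAR, this UNION raises ORA-01790 because a NUMBER column is
-- paired with a VARCHAR2 column; the explicit conversion makes the datatypes agree.
SELECT TO_CHAR(customer_id) AS id FROM customers   -- NUMBER, converted explicitly
UNION
SELECT customer_code        AS id FROM prospects;  -- VARCHAR2
```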


