Oracle® Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator
11g Release 1 (11.1.1)

Part Number E12644-05

4 Generic SQL

This chapter describes how to work with technologies supporting the ANSI SQL-92 syntax in Oracle Data Integrator.

Note:

This is a generic chapter. The information described in this chapter can be applied to technologies supporting the ANSI SQL-92 syntax, including Oracle, Microsoft SQL Server, Sybase ASE, IBM DB2, Teradata, PostgreSQL, MySQL, Derby and so forth.

Some of the ANSI SQL-92 compliant technologies are covered in a separate chapter in this guide. Refer to the dedicated technology chapter for specific information on how to leverage the ODI optimizations and database utilities of the given technology.

This chapter includes the following sections:

  • Section 4.1, "Introduction"

  • Section 4.2, "Installation and Configuration"

  • Section 4.3, "Setting up the Topology"

  • Section 4.4, "Setting up an Integration Project"

  • Section 4.5, "Creating and Reverse-Engineering a Model"

  • Section 4.6, "Setting up Changed Data Capture"

  • Section 4.7, "Setting up Data Quality"

  • Section 4.8, "Designing an Interface"

4.1 Introduction

Oracle Data Integrator supports ANSI SQL-92 standard compliant technologies.

4.1.1 Concepts

The mapping between the concepts used in ANSI SQL-92 standard compliant technologies and the Oracle Data Integrator concepts is as follows:

  • A data server in Oracle Data Integrator corresponds to a data processing resource that stores and serves data in the form of tables. Depending on the technology, this resource may be called, for example, a database, instance, or server.

  • Within this resource, a sub-division maps to an Oracle Data Integrator physical schema. This sub-division may be called a schema, database, catalog, library, and so forth.

  • A set of related objects within one schema corresponds to a data model, and each table, view, or synonym appears as an ODI datastore with its attributes, columns, and constraints.

4.1.2 Knowledge Modules

Oracle Data Integrator provides a wide range of Knowledge Modules for handling data stored in ANSI SQL-92 standard compliant technologies. The Knowledge Modules listed in Table 4-1 are generic SQL Knowledge Modules and apply to the most popular ANSI SQL-92 standard compliant databases.

Oracle Data Integrator also provides specific Knowledge Modules for some particular databases to leverage the specific utilities. Technology-specific KMs, using features such as loaders or external tables, are listed in the corresponding technology chapter.

Table 4-1 Generic SQL Knowledge Modules

Knowledge Module Description

CKM SQL

Checks data integrity against constraints defined on a Datastore. Rejects invalid records in the error table created dynamically. Can be used for static controls as well as for flow controls.

Consider using this KM if you plan to check data integrity on an ANSI SQL-92 compliant database. Use specific CKMs instead if available for your database.

IKM SQL Control Append

Integrates data in an ANSI SQL-92 compliant target table in replace/append mode. When flow data needs to be checked using a CKM, this IKM creates a temporary staging table before invoking the CKM.

Consider using this IKM if you plan to load your SQL compliant target table in replace mode, with or without data integrity check.

To use this IKM, the staging area must be on the same data server as the target.

IKM SQL Incremental Update

Integrates data in an ANSI SQL-92 compliant target table in incremental update mode. This KM creates a temporary staging table to stage the data flow. It then compares its content to the target table to identify the records to insert and the records to update. It also allows performing a data integrity check by invoking the CKM. This KM is therefore not recommended for large volumes of data.

Consider using this KM if you plan to load your ANSI SQL-92 compliant target table to insert missing records and to update existing ones. Use technology-specific incremental update IKMs whenever possible as they are more optimized for performance.

To use this IKM, the staging area must be on the same data server as the target.

IKM SQL to File Append

Integrates data in a target file from an ANSI SQL-92 compliant staging area in replace mode.

Consider using this IKM if you plan to transform and export data to a target file. If your source datastores are located on the same data server, we recommend using this data server as the staging area to avoid extra loading phases (LKMs).

To use this IKM, the staging area must be different from the target.

IKM SQL to SQL Control Append

Integrates data into an ANSI SQL-92 compliant target database from any ANSI SQL-92 compliant staging area.

This IKM is typically used for ETL configurations: source and target tables are on different databases and the interface's staging area is set to the logical schema of the source tables or a third schema.

IKM SQL to SQL Incremental Update

Integrates data from any ANSI SQL-92 compliant database into any ANSI SQL-92 compliant target table in incremental update mode.

This IKM is typically used for ETL configurations: source and target tables are on different databases and the interface's staging area is set to the logical schema of the source tables or a third schema.

LKM File to SQL

Loads data from an ASCII or EBCDIC File to an ANSI SQL-92 compliant database used as a staging area. This LKM uses the Agent to read selected data from the source file and write the result in the staging temporary table created dynamically.

Consider using this LKM if one of your source datastores is an ASCII or EBCDIC file. Use technology-specific LKMs for your target staging area whenever possible as they are more optimized for performance. For example, if you are loading to an Oracle database, use the LKM File to Oracle (SQLLDR) or the LKM File to Oracle (EXTERNAL TABLE) instead.

LKM SQL to SQL

Loads data from an ANSI SQL-92 compliant database to an ANSI SQL-92 compliant staging area. This LKM uses the Agent to read selected data from the source database and write the result into the staging temporary table created dynamically.

Consider using this LKM if your source datastores are located on a SQL compliant database different from your staging area. Use technology-specific LKMs for your source and target staging area whenever possible as they are more optimized for performance. For example, if you are loading from an Oracle source server to an Oracle staging area, use the LKM Oracle to Oracle (dblink) instead.

LKM SQL to SQL (row by row)

Loads data from any ANSI SQL-92 compliant database to any ANSI SQL-92 compliant target database. This LKM uses a Jython script to read selected data from the source database and write the result into the target temporary table, which is created dynamically. It loads data from a staging area to a target and indicates the state of each processed row.

The following options are used for the logging mechanism:

  • LOG_LEVEL: This option is used to set the granularity of the data logged.

    The following log levels can be set:

    • 0: nothing to log

    • 1: each JDBC action is logged, such as 'select action', 'delete action', 'insert action', and so forth

    • 2: in addition to level 1, all records that generate an error will be logged

    • 3: in addition to level 2, all processed records will be logged

  • LOG_FILE_NAME: Full path to the log file used. The directory into which this log file is written must be created before executing the interface.

  • MAX_ERRORS: Specify the maximum number of errors.

    The LKM process stops when the maximum number of errors specified in this option is reached.

This Knowledge Module is NOT RECOMMENDED for LARGE VOLUMES of data. Other specific modules using bulk utilities (SQL*LOADER, BULK INSERT...) or direct links (DBLINKS, Linked Servers...) are usually more efficient.

LKM SQL to SQL (JYTHON)

Loads data from an ANSI SQL-92 compliant database to an ANSI SQL-92 compliant staging area. This LKM uses Jython scripting to read selected data from the source database and write the result into the staging temporary table created dynamically. This LKM allows you to modify the default JDBC data type binding between the source database and the target staging area by editing the underlying Jython code provided.

Consider using this LKM if your source datastores are located on an ANSI SQL-92 compliant database different from your staging area and if you plan to specify your own data type binding method.

Use technology-specific LKMs for your source and target staging area whenever possible as they are more optimized for performance. For example, if you are loading from an Oracle source server to an Oracle staging area, use the LKM Oracle to Oracle (dblink) instead.

RKM SQL (JYTHON)

Retrieves JDBC metadata for tables, views, system tables and columns from an ANSI SQL-92 compliant database. This RKM may be used to specify your own strategy to convert JDBC metadata into Oracle Data Integrator metadata.

Consider using this RKM if you encounter problems with the standard JDBC reverse-engineering process due to some specificities of your JDBC driver. This RKM allows you to edit the underlying Jython code to make it match the specificities of your JDBC driver.

SKM SQL

Generates data access Web services for ANSI SQL-92 compliant databases. Data access services include data manipulation operations such as adding, removing, updating or filtering records as well as changed data capture operations such as retrieving changed data. Data manipulation operations are subject to integrity check as defined by the constraints of your datastores.

Consider using this SKM if you plan to generate and deploy data manipulation or changed data capture web services to your Service Oriented Architecture infrastructure. Use specific SKMs instead if available for your database.
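To illustrate the row-by-row logging mechanism of LKM SQL to SQL (row by row) described above, the following sketch mimics the LOG_LEVEL and MAX_ERRORS options in plain Python. The table rows, the in-memory "staging table", and the failure condition are hypothetical; the actual KM operates through JDBC.

```python
# Hypothetical sketch of the LKM SQL to SQL (row by row) logging options.
# LOG_LEVEL: 0 = nothing, 1 = JDBC actions, 2 = + error rows, 3 = + all rows.
# MAX_ERRORS: processing stops once this many rows have failed.

LOG_LEVEL = 2
MAX_ERRORS = 1

def transfer(rows, insert_row, log):
    errors = 0
    if LOG_LEVEL >= 1:
        log.append("insert action")          # level 1: log the JDBC action
    for row in rows:
        try:
            insert_row(row)
            if LOG_LEVEL >= 3:
                log.append("processed: %r" % (row,))   # level 3: every row
        except ValueError as exc:
            errors += 1
            if LOG_LEVEL >= 2:
                log.append("error: %r (%s)" % (row, exc))  # level 2: error rows
            if errors >= MAX_ERRORS:
                raise RuntimeError("MAX_ERRORS reached")
    return errors

# Usage with an in-memory stand-in for the staging temporary table.
staging, log = [], []

def insert_row(row):
    if row is None:
        raise ValueError("null row")  # simulated bad record
    staging.append(row)

errors = transfer([(1, "a"), (2, "b")], insert_row, log)
```

This row-at-a-time loop is also why the KM is discouraged for large volumes: each record crosses the network individually instead of moving in bulk.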


4.2 Installation and Configuration

Make sure you have read the information in this section before you start using the generic SQL Knowledge Modules:

  • Section 4.2.1, "System Requirements and Certifications"

  • Section 4.2.2, "Technology-Specific Requirements"

  • Section 4.2.3, "Connectivity Requirements"

4.2.1 System Requirements and Certifications

Before performing any installation, you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.

The list of supported platforms and versions is available on Oracle Technology Network (OTN):

http://www.oracle.com/technology/products/oracle-data-integrator/index.html.

4.2.2 Technology-Specific Requirements

See the Technology Specific Requirements section of the specific technology chapter for more information.

If your technology does not have a dedicated chapter in this guide, see the documentation of your technology for any technology-specific requirements.

4.2.3 Connectivity Requirements

See the Connectivity Requirements section of the specific technology chapter for more information.

Java Database Connectivity (JDBC) is the standard for connecting to a database and other data sources. If your technology does not have a dedicated chapter in this guide, see the documentation of your technology for the JDBC configuration information, including the required driver files, the driver name, and the JDBC URL format.
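As an illustration of why the URL format matters, the sketch below assembles a driver-specific JDBC URL from its parts. The subprotocol, host, port, and database name shown are hypothetical placeholders, not values prescribed by this guide; always take the exact format from your driver's documentation.

```python
# Hypothetical example: assembling a driver-specific JDBC URL from its parts.
# The "jdbc:<subprotocol>://<host>:<port>/<database>" shape is common but not
# universal; some drivers use a completely different layout.
def make_jdbc_url(subprotocol, host, port, database):
    return "jdbc:%s://%s:%d/%s" % (subprotocol, host, port, database)

# Placeholder values for illustration only.
url = make_jdbc_url("postgresql", "dbhost.example.com", 5432, "sales")
# url is "jdbc:postgresql://dbhost.example.com:5432/sales"
```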

4.3 Setting up the Topology

Setting up the Topology consists of:

  1. Creating a Data Server

  2. Creating a Physical Schema

4.3.1 Creating a Data Server

Create a data server under the ANSI SQL-92 compliant technology listed in the Physical Architecture accordion using the standard procedure, as described in "Creating a Data Server" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

If your technology has a dedicated chapter in this guide, see this chapter for more information. For other technologies, see the documentation of your technology for the JDBC driver name and JDBC URL format.

4.3.2 Creating a Physical Schema

Create a Physical Schema using the standard procedure, as described in "Creating a Physical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

If your technology has a dedicated chapter in this guide, see this chapter for more information.

4.4 Setting up an Integration Project

Setting up a Project using an ANSI SQL-92 compliant database follows the standard procedure. See "Creating an Integration Project" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

The recommended knowledge modules to import into your project for getting started depend on the corresponding technology. If your technology has a dedicated chapter in this guide, see this chapter for more information.

4.5 Creating and Reverse-Engineering a Model

This section contains the following topics:

  • Section 4.5.1, "Create a Data Model"

  • Section 4.5.2, "Reverse-engineer a Data Model"

4.5.1 Create a Data Model

Create a data model based on the ANSI SQL-92 compliant technology using the standard procedure, as described in "Creating a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

If your technology has a dedicated chapter in this guide, see this chapter for more information.

4.5.2 Reverse-engineer a Data Model

ANSI SQL-92 standard compliant technologies support both types of reverse-engineering: Standard reverse-engineering, which uses only the capabilities of the JDBC driver, and Customized reverse-engineering, which uses an RKM that provides logging features.

In most cases, consider using the standard JDBC reverse-engineering instead of the RKM SQL (Jython). However, you can use this RKM as a starting point if you plan to enhance it by adding your own metadata reverse-engineering behavior.

Standard Reverse-Engineering

To perform a Standard Reverse-Engineering on ANSI SQL-92 technologies, use the usual procedure, as described in "Reverse-engineering a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

If your technology has a dedicated chapter in this guide, see this chapter for more information.

Customized Reverse-Engineering

To perform a Customized Reverse-Engineering on ANSI SQL-92 technologies with an RKM, use the usual procedure, as described in "Reverse-engineering a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section details only the fields specific to the usage of the RKM SQL (Jython).

This RKM provides two logging options:

  • USE_LOG: Set to Yes if you want the reverse-engineering to process log details in a log file.

  • LOG_FILE_NAME: Enter the name for the log file. Note that the directory into which this log file is written must be created before executing the interface.
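To make the behavior of these two options concrete, the sketch below mimics them in plain Python: writing happens only when USE_LOG is enabled, and the directory holding LOG_FILE_NAME must already exist before the run. The file path and log message are hypothetical stand-ins.

```python
import os
import tempfile

# Hypothetical sketch of the USE_LOG / LOG_FILE_NAME behavior of the
# RKM SQL (Jython). A temporary directory stands in for the pre-created
# log directory required by the RKM.
USE_LOG = True
LOG_DIR = tempfile.mkdtemp()                 # must exist before the run
LOG_FILE_NAME = os.path.join(LOG_DIR, "reverse.log")

def log_detail(message):
    # Only write when USE_LOG is enabled, mirroring the RKM option.
    if USE_LOG:
        with open(LOG_FILE_NAME, "a") as f:
            f.write(message + "\n")

log_detail("reverse-engineering table: CUSTOMERS")  # hypothetical table name
```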

4.6 Setting up Changed Data Capture

Oracle Data Integrator does not provide journalizing Knowledge Modules for ANSI SQL-92 compliant technologies.

4.7 Setting up Data Quality

Oracle Data Integrator provides the CKM SQL for checking data integrity against constraints defined on an ANSI SQL-92 compliant table. See "Set up Flow Control and Post-Integration Control" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for details.
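The flow implemented by a CKM can be pictured as follows: each record is validated against the constraints defined on the datastore, and invalid records are diverted to the error table instead of the target. The constraint and the sample rows below are hypothetical, meant only to illustrate the principle.

```python
# Hypothetical sketch of a CKM-style integrity check: rows that satisfy the
# constraint reach the target, the rest are diverted to an error table.
def check_rows(rows, constraint):
    target, error_table = [], []
    for row in rows:
        if constraint(row):
            target.append(row)
        else:
            error_table.append(row)
    return target, error_table

# Example constraint: the AGE column (index 1) must not be negative.
rows = [("alice", 30), ("bob", -5), ("carol", 41)]
target, errors = check_rows(rows, lambda r: r[1] >= 0)
```

The same routine serves both static control (checking data already in a table) and flow control (checking incoming flow data before integration), which is why the CKM SQL supports both.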

4.8 Designing an Interface

You can use ANSI SQL-92 compliant technologies as a source, staging area or a target of an integration interface. It is also possible to create ETL-style integration interfaces based on an ANSI SQL-92 compliant technology.

The KM choice for an interface or a check determines the capabilities and performance of this interface or check. The recommendations below help in the selection of the KM for different situations concerning a data server based on an ANSI SQL-92 compliant technology.

4.8.1 Loading Data From and to an ANSI SQL-92 Compliant Technology

ANSI SQL-92 compliant technologies can be used as a source, target or staging area of an interface. The LKM choice in the Interface Flow tab to load data between an ANSI SQL-92 compliant technology and another type of data server is essential for the performance of an interface.

4.8.1.1 Loading Data from an ANSI SQL-92 Compliant Technology

The generic KMs that are listed in Table 4-2 implement methods for loading data from an ANSI SQL-92 compliant database to a target or staging area database. In addition to these KMs, Oracle Data Integrator provides KMs specific to the target or staging area database. If your technology has a dedicated chapter in this guide, see this chapter for more information.

Table 4-2 KMs to Load from an ANSI SQL-92 Compliant Technology

Source or Staging Area KM Notes

ANSI SQL-92 compliant technology

LKM SQL to SQL

Standard KM for SQL-92 to SQL-92 transfers

ANSI SQL-92 compliant technology

LKM SQL to SQL (Jython)

This LKM uses Jython scripting to read selected data from the source database and write the result into the staging temporary table created dynamically. This LKM allows you to modify the default JDBC data types binding between the source database and the target staging area by editing the underlying Jython code provided.

ANSI SQL-92 compliant technology

LKM SQL to SQL (row by row)

This LKM uses row by row logging.


4.8.1.2 Loading Data to an ANSI SQL-92 Compliant Technology

The generic KMs that are listed in Table 4-3 implement methods for loading data from a source or staging area into an ANSI SQL-92 compliant database. In addition to these KMs, Oracle Data Integrator provides KMs specific to the source or staging area database. If your technology has a dedicated chapter in this guide, see this chapter for more information.

Table 4-3 KMs to Load to an ANSI SQL-92 Compliant Technology

Source or Staging Area KM Notes

File

LKM File to SQL

Standard KM

ANSI SQL-92 compliant technology

LKM SQL to SQL

Standard KM

ANSI SQL-92 compliant technology

LKM SQL to SQL (Jython)

This LKM uses Jython scripting to read selected data from the source database and write the result into the staging temporary table created dynamically. This LKM allows you to modify the default JDBC data types binding between the source database and the target staging area by editing the underlying Jython code provided.

ANSI SQL-92 compliant technology

LKM SQL to SQL (row by row)

This LKM uses row by row logging.


4.8.2 Integrating Data in an ANSI SQL-92 Compliant Technology

An ANSI SQL-92 compliant technology can be used as a target of an interface. The IKM choice in the Interface Flow tab determines the performance and possibilities for integrating.

The KMs listed in Table 4-4 implement methods for integrating data into an ANSI SQL-92 compliant target. In addition to these KMs, Oracle Data Integrator provides KMs specific to the source or staging area database. See the corresponding technology chapter for more information.

Table 4-4 KMs to Integrate Data in an ANSI SQL-92 Compliant Technology

Source or Staging Area KM Notes

ANSI SQL-92 compliant technology

IKM SQL Control Append

Uses Bulk data movement inside data server

ANSI SQL-92 compliant technology

IKM SQL Incremental Update

Uses Bulk data movement inside data server

ANSI SQL-92 compliant technology

IKM SQL to File Append

Uses agent for data movement

ANSI SQL-92 compliant technology

IKM SQL to SQL Incremental Update

Uses the agent or Jython for data movement

ANSI SQL-92 compliant technology

IKM SQL to SQL Control Append

Uses agent for control append strategies


4.8.3 Designing an ETL-Style Interface

See "Working with Integration Interface" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for generic information on how to design integration interfaces. This section describes how to design an ETL-style interface where the staging area and target are ANSI SQL-92 compliant.

In an ETL-style interface, ODI processes the data in a staging area, which is different from the target. Oracle Data Integrator provides two ways for loading the data from an ANSI SQL-92 compliant staging area to an ANSI SQL-92 compliant target:

  • Using a multi-connection IKM

  • Using an LKM and a mono-connection IKM

Depending on the KM strategy that is used, flow and static control are supported.

Using a Multi-connection IKM

A multi-connection IKM allows updating a target where the staging area and sources are on different data servers.

Oracle Data Integrator provides the following multi-connection IKMs for ANSI SQL-92 compliant technologies: IKM SQL to SQL Incremental Update and IKM SQL to SQL Control Append.

See Table 4-5 for more information on when to use a multi-connection IKM.

To use a multi-connection IKM in an ETL-style interface:

  1. Create an integration interface with an ANSI SQL-92 compliant staging area and target using the standard procedure as described in "Creating an Interface" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section describes only the ETL-style specific steps.

  2. In the Definition tab of the Interface Editor, select Staging Area different from Target and select the logical schema of the source tables or a third schema.

  3. In the Flow tab, select one of the Source Sets by clicking its title. The Property Inspector opens for this object.

  4. Select an LKM from the LKM Selector list to load from the source(s) to the staging area. See Table 4-5 to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. In the Flow tab, select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, select an ETL multi-connection IKM from the IKM Selector list to load the data from the staging area to the target. See Table 4-5 to determine the IKM you can use.

Note the following when setting the KM options:

  • For IKM SQL to SQL Incremental Update

    • If you do not want to create any tables on the target system, set FLOW_CONTROL=false and FLOW_TABLE_LOCATION=STAGING.

      Please note that this will lead to row-by-row processing and therefore significantly lower performance.

    • If you set the options FLOW_CONTROL or STATIC_CONTROL to true, select a CKM in the Controls tab. Note that if FLOW_CONTROL is set to true, the flow table is created on the target, regardless of the value of FLOW_TABLE_LOCATION.

    • The FLOW_TABLE_LOCATION option can take the following values:

      Value Description Comment
      TARGET Objects are created on the target. Default value.
      STAGING Objects are created only on the staging area, not on the target. Cannot be used with flow control. Leads to row-by-row processing and therefore a loss of performance.
      NONE No objects are created on the staging area or the target. Cannot be used with flow control. Leads to row-by-row processing and therefore a loss of performance. Requires reading the source data twice in the case of journalized data sources.
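The interaction between FLOW_CONTROL and FLOW_TABLE_LOCATION described above can be sketched as a small validation routine. The option names come from the IKM; the function itself is purely illustrative:

```python
# Illustrative check of the FLOW_CONTROL / FLOW_TABLE_LOCATION rules:
# STAGING and NONE cannot be combined with flow control, and enabling flow
# control forces the flow table onto the target regardless of the option.
def resolve_flow_table(flow_control, flow_table_location):
    if flow_table_location not in ("TARGET", "STAGING", "NONE"):
        raise ValueError("unknown FLOW_TABLE_LOCATION")
    if flow_control:
        # Flow control always creates the flow table on the target.
        return "TARGET"
    return flow_table_location

# With flow control enabled, STAGING is overridden and the flow table
# still lands on the target.
location = resolve_flow_table(flow_control=True, flow_table_location="STAGING")
```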

Using an LKM and a mono-connection IKM

If there is no dedicated multi-connection IKM, use a standard exporting LKM in combination with a standard mono-connection IKM. The exporting LKM is used to load the flow table from the staging area to the target. The mono-connection IKM is used to integrate the data flow into the target table.

Oracle Data Integrator supports any ANSI SQL-92 standard compliant technology as a source, staging area, and target of an ETL-style interface.

See Table 4-5 for more information on when to use the combination of a standard LKM and a mono-connection IKM.

To use an LKM and a mono-connection IKM in an ETL-style interface:

  1. Create an integration interface with an ANSI SQL-92 compliant staging area and target using the standard procedure as described in "Creating an Interface" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section describes only the ETL-style specific steps.

  2. In the Definition tab of the Interface Editor, select Staging Area different from Target and select the logical schema of the source tables or a third schema.

  3. In the Flow tab, select one of the Source Sets.

  4. In the Property Inspector, select an LKM from the LKM Selector list to load from the source(s) to the staging area. See Table 4-5 to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. Select the Staging Area. In the Property Inspector, select an LKM from the LKM Selector list to load from the staging area to the target. See Table 4-5 to determine the LKM you can use.

  7. Optionally, modify the options.

  8. Select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, select a standard mono-connection IKM from the IKM Selector list to update the target. See Table 4-5 to determine the IKM you can use.
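The two-phase movement described in these steps, an exporting LKM loading the flow table on the target server followed by a mono-connection IKM merging it into the target table, can be sketched as below. The function names and table contents are hypothetical; the real KMs generate SQL against the two data servers.

```python
# Hypothetical sketch of an ETL-style flow: an exporting LKM moves the
# staging flow to the target data server, then a mono-connection IKM
# merges it into the target table in incremental update mode.
def exporting_lkm(staging_rows):
    # Phase 1: load the flow table on the target data server.
    return list(staging_rows)

def mono_connection_ikm(flow_table, target):
    # Phase 2: insert missing keys and update existing ones (incremental update).
    for key, value in flow_table:
        target[key] = value
    return target

staging = [(1, "new"), (2, "updated")]       # rows produced in the staging area
target = {2: "old", 3: "kept"}               # existing target table contents
target = mono_connection_ikm(exporting_lkm(staging), target)
```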

Table 4-5 KM Guidelines for ETL-Style Interfaces based on an ANSI SQL-92 standard compliant technology

Source Staging Area Target Exporting LKM IKM KM Strategy Comment

ANSI SQL-92 standard compliant

ANSI SQL-92 standard compliant database

ANSI SQL-92 standard compliant database

NA

IKM SQL to SQL Incremental Update

Multi-connection IKM

Allows an incremental update strategy with no temporary target-side objects. Use this KM if it is not possible to create temporary objects in the target server.

The updates are made directly from the source to the target, without temporary objects on the target. The configuration where the flow table is created on the staging area and not on the target should be used only for small volumes of data.

Supports flow and static control.

ANSI SQL-92 standard compliant

ANSI SQL-92 standard compliant database

ANSI SQL-92 standard compliant database

NA

IKM SQL to SQL Control Append

Multi-connection IKM

Use this KM strategy to perform control append.

Supports flow and static control.

ANSI SQL-92 standard compliant

ANSI SQL-92 standard compliant database

ANSI SQL-92 standard compliant database

any standard KM loading from an ANSI SQL-92 standard compliant technology to an ANSI SQL-92 standard compliant technology

IKM SQL Incremental Update

Mono-connection IKM

Allows an incremental update strategy.