25 Oracle Changed Data Capture Adapters

This chapter describes how to work with Oracle Changed Data Capture Adapters as well as with Attunity Stream in order to integrate changes captured on legacy sources using Oracle Data Integrator.

This chapter includes the following sections:

  • Section 25.1, "Introduction"

  • Section 25.2, "Installation and Configuration"

  • Section 25.3, "Setting up the Topology"

  • Section 25.4, "Setting Up an Integration Project"

  • Section 25.5, "Creating and Reverse-Engineering an Attunity Stream Model"

  • Section 25.6, "Designing an Interface Using the LKM Attunity to SQL"

25.1 Introduction

Oracle Changed Data Capture Adapters offer log-based change data capture (CDC) for enterprise data sources such as CICS, VSAM, Tuxedo, IMS DB, and IMS TM. Captured changes are stored in a storage area called the Staging Area (which is different from the staging area of Oracle Data Integrator interfaces).

Attunity Stream is part of the Attunity Integration Suite (AIS) and provides the same features as the Oracle Changed Data Capture Adapters. In this chapter, we refer to both products as Attunity Stream.

The Attunity Stream Staging Area contains the Change Tables used by Attunity Stream to store changes captured from the sources. It maintains the last position read by Oracle Data Integrator (the Attunity Stream Context, which is different from the Oracle Data Integrator context concept) and resumes from this point the next time a request from Oracle Data Integrator is received. The Change Tables are accessed through Attunity Stream Datasources.

Oracle Data Integrator uses Attunity Stream datasources as sources of integration interfaces. They cannot be used as a target or staging area. Journalizing and data quality checks are not possible on this technology.

25.1.1 Concepts

The Attunity Stream concepts map the Oracle Data Integrator concepts as follows: One Workspace within an Attunity Agent (or Daemon) listening on a port corresponds to one data server in Oracle Data Integrator. Within this Daemon, each Datasource (or Datasource/Owner pair) corresponds to one ODI Physical Schema. In each datasource, the Change Tables appear as ODI Datastores in an ODI model based on the Attunity technology.
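For example (all names here are illustrative): an Attunity daemon on host cdcsrv01 listening on port 2551 and exposing a workspace named Navigator would be declared as one data server in Oracle Data Integrator; the datasource MYDS with owner PUBLIC within that workspace would be one physical schema of this data server; and a change table such as ORDERS_CT in MYDS would appear as an ORDERS_CT datastore after reverse-engineering a model based on this physical schema.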

25.1.2 Knowledge Modules

Oracle Data Integrator provides the LKM Attunity to SQL for handling Attunity Stream data. This KM uses Attunity Stream-specific features.

The Oracle Data Integrator CDC Knowledge Module provides integration from Attunity Stream Staging Areas via a JDBC interface. It is able to:

  • Read Attunity Stream data from Attunity Stream Data Sources.

  • Load this Attunity Stream data into an ANSI SQL-92 compliant database used as a staging area in Oracle Data Integrator.

  • Handle the Attunity Stream Context to ensure consistent consumption of the changes read.

Using the data provided in the Attunity staging area, the Oracle CDC KM cleans the working environment (dropping temporary tables), determines and saves the Attunity Stream Context information, loads the journalized data into the collect table, and purges the loaded data from the journal.

Note:

Although Attunity Stream is used to capture changes in source systems, it is used as a regular JDBC source (only an LKM is used). The Oracle Data Integrator journalizing framework (JKM) is not used for this technology.
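To make the load cycle described above more concrete, the following is a minimal conceptual sketch of the same four steps expressed as plain JDBC calls. This is not the code of the shipped KM: connection handling is simplified, and the collect table C$_ORDERS and change table MYDS:PUBLIC.ORDERS_CT are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LkmCycleSketch {
        // Conceptual outline of the four KM steps described above; NOT the
        // shipped KM code. All table names are hypothetical placeholders.
        static void loadChanges(Connection attunity, Connection staging)
                throws SQLException {
            try (Statement src = attunity.createStatement();
                 Statement stg = staging.createStatement()) {
                // 1. Clean the working environment: drop leftover temporary tables.
                stg.execute("DROP TABLE C$_ORDERS");
                // 2. Determine and save the Attunity Stream Context, that is,
                //    the restart position of this consumption cycle.
                // 3. Load the journalized rows into the collect (C$) table.
                try (ResultSet rs = src.executeQuery(
                        "SELECT * FROM MYDS:PUBLIC.ORDERS_CT")) {
                    while (rs.next()) {
                        // INSERT each row into C$_ORDERS on the staging side.
                    }
                }
                // 4. Purge the consumed changes from the journal so that they
                //    are not read twice (see the Note above).
            }
        }
    }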

25.2 Installation and Configuration

Make sure you have read the information in this section before you start using the Attunity Stream Knowledge Module:

25.2.1 System Requirements

Before performing any installation you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.

The list of supported platforms and versions is available on Oracle Technology Network (OTN):

http://www.oracle.com/technology/products/oracle-data-integrator/index.html.

25.2.2 Technology Specific Requirements

Please review the Attunity Stream or Oracle Changed Data Capture Adapters documentation for the requirements and instructions for installing and setting up Attunity Stream for your source servers.

25.2.3 Connectivity Requirements

In order to use the Attunity Stream technology, you must first install the Attunity drivers in the drivers directory of your Oracle Data Integrator installation and restart ODI. See "Add Additional Drivers and Open Tools" in the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator.

The driver files include the following: nvjdbc2.jar, nvapispy2.jar, nvlog2.jar.

25.3 Setting up the Topology

Setting up the Topology consists of:

  1. Creating an Attunity Stream Data Server

  2. Creating an Attunity Stream Physical Schema

25.3.1 Creating an Attunity Stream Data Server

An Attunity Stream data server corresponds to the server and workspace storing the Attunity Stream datasources.

25.3.1.1 Creation of the Data Server

Create a data server for the Attunity Stream technology using the standard procedure, as described in "Creating a Data Server" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section details only the fields required or specific for defining an Attunity Stream data server:

  1. In the Definition tab:

    • User: User profile used to connect to the workspace. If you are using anonymous access or specifying the user and password in the URL, leave this field and the Password field empty.

    • Password: Master password for the user profile.

  2. In the JDBC tab:

    • JDBC Driver: com.attunity.jdbc.NvDriver

    • JDBC URL: jdbc:attconnect://<host_name>:<port>/<workspace> [;AddDefaultSchema=1][;<parameter>=<value>]

      You can use in the URL the properties listed in Table 25-1.

      Table 25-1 JDBC Attunity Driver Properties

      Option               Description
      -------------------  -------------------------------------------------------
      <host_name>          Name of the machine running the Attunity daemon.

      <port>               Port on which the daemon listens.

      <workspace>          Daemon's workspace. Default is Navigator.

      AddDefaultSchema=1   Specifies that the schema shows the default owner
                           name public if the data source does not natively
                           support owners. It may be needed in some cases, as
                           Oracle Data Integrator makes use of the owner value.

      <parameter>=<value>  Any other parameter available for the JDBC driver.
                           Note that it is not necessary to specify the
                           datasource using the DefTdpName driver parameter,
                           as Oracle Data Integrator accesses the change tables
                           using the fully qualified syntax:
                           DATASOURCE:OWNER.TABLE_NAME


      For more information on the JDBC URL connection details, see the Oracle Application Server CDC Adapters Installation Guide.
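For example, the following minimal Java program opens a connection through this driver and reads from a change table using the fully qualified syntax. The host, port, workspace, credentials, and table name are illustrative placeholders to adapt to your own topology; only the driver class and URL pattern are taken from the settings above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AttunityConnectionTest {
        public static void main(String[] args) throws Exception {
            // URL built from the pattern above; all names are placeholders.
            String url = "jdbc:attconnect://cdcsrv01:2551/Navigator;AddDefaultSchema=1";
            Class.forName("com.attunity.jdbc.NvDriver");
            try (Connection conn = DriverManager.getConnection(url, "odi_user", "odi_pwd");
                 Statement stmt = conn.createStatement();
                 // Change tables are addressed with the fully qualified
                 // DATASOURCE:OWNER.TABLE_NAME syntax.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT * FROM MYDS:PUBLIC.ORDERS_CT")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // first column of each change row
                }
            }
        }
    }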

25.3.2 Creating an Attunity Stream Physical Schema

Create an Attunity Stream physical schema using the standard procedure, as described in "Creating a Physical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

This physical schema represents the Attunity Stream datasource from which you want to read the changed data. While defining the physical schema, the list of datasources and owners available for your workspace is displayed, provided that the data server is correctly configured. If no datasources or owners exist, Public is displayed.

Create for this physical schema a logical schema using the standard procedure, as described in "Creating a Logical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator and associate it in a given context.
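For example (names are illustrative), you could create a logical schema named LS_ATTUNITY_SALES and associate it, in the Development context, with the physical schema corresponding to the MYDS datasource and PUBLIC owner; your models and interfaces then reference LS_ATTUNITY_SALES rather than the physical connection details.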

25.4 Setting Up an Integration Project

Setting up a project using the Attunity Stream follows the standard procedure. See "Creating an Integration Project" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

It is recommended to import the LKM Attunity to SQL into your project to get started with Attunity Stream.

25.5 Creating and Reverse-Engineering an Attunity Stream Model

This section contains the following topics:

  • Create an Attunity Stream Model

  • Reverse-engineer an Attunity Stream Model

25.5.1 Create an Attunity Stream Model

Create an Attunity Stream Model using the standard procedure, as described in "Creating a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

25.5.2 Reverse-engineer an Attunity Stream Model

Attunity Stream supports standard reverse-engineering. Standard reverse-engineering returns the change tables stored in the datasource as datastores. The change tables contain some CDC header columns in addition to the data columns used for integration. These columns include timestamps, table_name, operation, transactionID, context, and so forth. See the Attunity Stream documentation for more information on the header columns content.
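For example, a reverse-engineered change table datastore might look like the following. This layout is illustrative only; the exact header column names depend on your Attunity Stream version.

      ORDERS_CT
        CONTEXT          CDC header: position of the change, used for restart
        OPERATION        CDC header: type of change (insert, update, or delete)
        TRANSACTION_ID   CDC header: identifier of the source transaction
        TIMESTAMP        CDC header: time at which the change was captured
        ORDER_ID         data column from the source table
        CUSTOMER_ID      data column from the source table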

To perform a Standard Reverse-Engineering on Attunity Stream, use the usual procedure, as described in "Reverse-engineering a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

25.6 Designing an Interface Using the LKM Attunity to SQL

Oracle Data Integrator uses Attunity Stream datasources as sources of integration interfaces. They cannot be used as a target or staging area. Journalizing and data quality checks are not possible on this technology.

To create an integration interface that loads Attunity Stream data into your Oracle Data Integrator integration project, perform the following steps:

  1. Create an integration interface with Attunity Stream source datastores.

  2. Create joins, filters and mappings as usual. Note that joins between change tables are not allowed on the source. They should be performed on the interface's staging area.

  3. In the Flow tab of the interface, select the source set containing the source change table(s) and select the LKM Attunity to SQL.

  4. Set the KM options as follows:

    • DELETE_TEMPORARY_OBJECTS - Set this option to No if you wish to retain temporary objects (files and scripts) after integration.

    • PK_LIST - Specify the list of source columns that hold the primary key of the journalized table. Use SQL syntax and separate each column name with a comma (,), without prefixing it with the table alias, for example: ORDER_ID, CUSTOMER_ID

Note:

When running an interface using this LKM, the changes are consumed from the change table. This KM does not support reading the same change twice.