28 Oracle GoldenGate

You can work with Oracle GoldenGate to capture changes on source transactional systems and replicate them in a staging server for consumption by Oracle Data Integrator mappings.

This chapter includes the following sections:

Introduction

The Oracle GoldenGate (OGG) product offers solutions that provide key business applications with continuous availability and real-time information. It provides guaranteed capture, routing, transformation, and delivery across heterogeneous databases and environments in real time.

Using the Oracle GoldenGate knowledge modules requires that you know and understand Oracle GoldenGate concepts and architecture. See the Oracle GoldenGate Documentation on OTN for more information:

http://www.oracle.com/technetwork/middleware/goldengate/overview/index.html

Overview of the GoldenGate CDC Process

Oracle Data Integrator can capture changes in a source database using Oracle GoldenGate to process them in the ODI CDC framework. Oracle Data Integrator uses Oracle GoldenGate to replicate data from a source database to a staging database. This staging database contains a copy of the source tables and the ODI Changed Data Capture (CDC) infrastructure, both loaded using Oracle GoldenGate.

The staging database can be stored in an Oracle or Teradata schema. The source database can be Oracle, Microsoft SQL Server, DB2 UDB, or Sybase ASE. In this chapter, <database> refers to any of these source database technologies.

Setting up CDC with GoldenGate is done using the following process:

  1. A replica of the source tables is created in the staging database, using, for example, the Oracle Data Integrator Common Format Designer feature.

  2. Oracle Data Integrator Changed Data Capture (CDC) is activated on the source tables using either the JKM <database> to Oracle Consistent (OGG Online) or the JKM <database> to Teradata Consistent (OGG Online).

  3. The journals are started in either online mode or offline mode.

    • Online mode: Starting the journals in online mode configures and starts the GoldenGate Capture (Extract) process to capture the changes in the source database and corresponding Delivery (Replicat) processes to replicate the changes in the staging database. Changes are replicated into both the replicated source table and the CDC infrastructure.

      The GoldenGate Capture and Delivery processes are deployed and started using the GoldenGate JAgent interface. The GoldenGate JAgent facilitates communication between Oracle Data Integrator and Oracle GoldenGate.

    • Offline mode: Starting the journals in offline mode creates the Oracle GoldenGate configuration files and sets up a CDC infrastructure in the staging database. Note that no active process is started for capturing source data at this stage.

      Using the generated configuration files, an Oracle GoldenGate Capture process is configured and started to capture changes from the source database, and corresponding Delivery processes are configured and started to replicate these changes into the staging database. Changes are replicated into both the replicated source table and the CDC infrastructure.

      GoldenGate can optionally be configured to perform the initial load of the source data into the staging tables.

      Note:

      The offline mode requires an Oracle GoldenGate data server to be first created in Topology. See Define the Oracle GoldenGate Data Servers for instructions on how to create one.
  4. ODI mappings can source from the replicated tables and use captured changes seamlessly within any ODI scenario.

Knowledge Modules

Oracle Data Integrator provides the Knowledge Modules listed in Table 28-1 for replicating online data from a source to a staging database. Like other CDC JKMs, the Oracle GoldenGate JKMs journalize data in the source server.

The JKM <database> to Oracle Consistent (OGG Online) and the JKM <database> to Teradata Consistent (OGG Online) perform the same tasks:

  • Create and manage the ODI CDC framework infrastructure on the replicated tables.

  • If the journals are started in online mode, configure and start the Oracle GoldenGate Capture and Delivery processes on the GoldenGate servers using the GoldenGate JAgent.

  • If the journals are started in offline mode, generate the parameter files to set up the Oracle GoldenGate Capture and Delivery processes and the Readme.txt explaining how to complete the setup.

  • Provide extra steps to check the configuration of the source database and propose tips to correct the configuration.

Table 28-1 Oracle GoldenGate Knowledge Modules

Knowledge Module Description

JKM Oracle to Oracle Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on an Oracle staging server and generates the Oracle GoldenGate configuration for replicating data from an Oracle source to this staging server.

JKM DB2 UDB to Oracle Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on an Oracle staging server and generates the Oracle GoldenGate configuration for replicating data from an IBM DB2 UDB source to this staging server.

JKM Sybase ASE to Oracle Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on an Oracle staging server and generates the Oracle GoldenGate configuration for replicating data from a Sybase ASE source to this staging server.

JKM MSSQL to Oracle Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on an Oracle staging server and generates the Oracle GoldenGate configuration for replicating data from a Microsoft SQL Server source to this staging server.

JKM Oracle to Teradata Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on a Teradata staging server and generates the Oracle GoldenGate configuration for replicating data from an Oracle source to this staging server.

JKM DB2 UDB to Teradata Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on a Teradata staging server and generates the Oracle GoldenGate configuration for replicating data from an IBM DB2 UDB source to this staging server.

JKM Sybase ASE to Teradata Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on a Teradata staging server and generates the Oracle GoldenGate configuration for replicating data from a Sybase ASE source to this staging server.

JKM MSSQL to Teradata Consistent (OGG Online)

Creates the infrastructure for consistent set journalizing on a Teradata staging server and generates the Oracle GoldenGate configuration for replicating data from a Microsoft SQL Server source to this staging server.

Installation and Configuration

Make sure you have read the information in this section before you start using the Oracle GoldenGate Knowledge Modules:

System Requirements and Certifications

Before performing any installation, you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.

The list of supported platforms and versions is available on Oracle Technical Network (OTN):

http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html

See also the Oracle GoldenGate documentation on OTN for source and staging database version platform support.

Technology Specific Requirements

In order to run the Capture and Delivery processes, Oracle GoldenGate must be installed on both the source and staging servers. Installing Oracle GoldenGate installs all of the components required to run and manage GoldenGate processes.

The Oracle GoldenGate Manager process must be running on each system before Capture or Delivery can be started, and must remain running during their execution for resource management.

In order to perform online journalizing, the Oracle GoldenGate JAgent process must be configured and running on the Oracle GoldenGate instances.

Oracle GoldenGate has specific requirements and installation instructions that must be completed before starting the Capture and Delivery processes configured with the Oracle GoldenGate JKMs. See the Oracle GoldenGate documentation on OTN for more information.

Connectivity Requirements

If the source database is Oracle, there are no connectivity requirements for using Oracle GoldenGate data in Oracle Data Integrator.

If the source database is IBM DB2 UDB, Microsoft SQL Server, or Sybase ASE, Oracle GoldenGate uses an ODBC driver to connect to the source database. You need to install the ODBC driver and declare the data source in your system. You also need to set the data source name (DSN) in the KM option SRC_DSN.

Working with the Oracle GoldenGate JKMs

To use the JKM <database> to Oracle Consistent (OGG Online) or the JKM <database> to Teradata Consistent (OGG Online) in your Oracle Data Integrator integration projects, you need to perform the following steps:

  1. Define the Topology

  2. Create the Replicated Tables

  3. Set Up an Integration Project

  4. Configure CDC for the Source Datastores

  5. Configure and Start Oracle GoldenGate Processes (Offline mode only)

  6. Design Mappings Using Replicated Data

Define the Topology

This step consists of declaring in Oracle Data Integrator the staging data server and the source data server, as well as the physical and logical schemas attached to these servers.

To define the topology in this configuration, perform the following tasks:

  1. Define the Source Data Server

  2. Create the Source Physical Schema

  3. Define the Staging Server

  4. Create the Staging Physical Schema

  5. Define the Oracle GoldenGate Data Servers

  6. Create the Oracle GoldenGate Physical Schemas

  7. Create the Oracle GoldenGate Logical Schemas

Define the Source Data Server

You have to define a source data server from which Oracle GoldenGate will capture changes.

Create a data server for your source technology using the standard procedure. For more information, see the chapter corresponding to your source technology in this guide:

Create the Source Physical Schema

Create a physical schema under the data server that you have created in Define the Source Data Server. Use the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Create for this physical schema a logical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator and associate it in a given context.

Define the Staging Server

Create a data server for the Oracle or Teradata technology. For more information, see:

Create the Staging Physical Schema

Create an Oracle or Teradata physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Note:

The physical schema defined in the staging server will contain in the data schema the changed records captured and replicated by the Oracle GoldenGate processes. The work schema will be used to store the ODI CDC infrastructure.

Create for this physical schema a logical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator and associate it in a given context.

Define the Oracle GoldenGate Data Servers

An Oracle GoldenGate data server corresponds to the Oracle GoldenGate JAgent process in Oracle Data Integrator (ODI). The Oracle GoldenGate JAgent process facilitates communication between ODI and the Oracle GoldenGate servers. You must create a JAgent process for both the source and the target Oracle GoldenGate servers.

Create a data server for the Oracle GoldenGate technology using the standard procedure, as described in Creating a Data Server of Administering Oracle Data Integrator. This section details only the fields required or specific for defining an Oracle GoldenGate data server:

  1. In the Definition tab:
    • Name: Name of the data server as it will appear in Oracle Data Integrator.

    • Host: Hostname or the IP address of the server where the JAgent process is running.

    • JMX Port: Port number of the JAgent process.

    • Manager Port: Port number of the Oracle GoldenGate manager instance.

    • JMX User: User name to connect to the JAgent.

    • Password: Password of the user credentials.

    • Installation Path: Location path for the Oracle GoldenGate installation. You must use this path when you create the capture process definitions from a model.

Create the Oracle GoldenGate Physical Schemas

The Oracle GoldenGate physical schemas in ODI correspond to the GoldenGate Capture and Delivery processes that perform CDC in Oracle GoldenGate. You must define the Oracle GoldenGate physical schemas to configure the Capture process on the source GoldenGate server and Delivery process on the target GoldenGate server.

Create a physical schema under the Oracle GoldenGate data server that you have created in Define the Oracle GoldenGate Data Servers. Use the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator. This section details only the fields required or specific to create the physical schemas to configure the Oracle GoldenGate Capture and Delivery processes.

Note:

Alternatively, you can create the Oracle GoldenGate physical schemas from the model. See Create Oracle GoldenGate Physical Schemas from the model for information about how to create physical schemas from the model.

GoldenGate Capture Process Fields

Note that the GoldenGate Capture process must be configured on the source GoldenGate server.

  1. In the Process Definition tab:

    • Process Type: Type of the process that you want to configure. Select Capture as the process type.

    • Name: Name of the process (physical schema) in Oracle Data Integrator. The process name cannot exceed 8 characters and only uppercase is allowed.

    • Trail File Path: Location of the Oracle GoldenGate trail file. Only two characters for the file name part are allowed.

    • Remote Trail File Path: Location of the remote trail file. Only two characters for the file name part are allowed.

    • Trail File Size: Size of the Oracle GoldenGate trail file in Megabytes.

    • Report Fetch: Enables report information to include the fetching statistics.

    • Report Count Frequency: Reports the total operations count at specific intervals. If the interval is not specified, the entry is not added to the parameter file.

    • Select a parameter: List of available Oracle GoldenGate parameters. Only the parameters for the supported database are listed. Select a parameter and click Add. A template of the selected parameter is added to the text box.

      See the Oracle GoldenGate Reference Guide on OTN for information about the GoldenGate parameters.
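To make these fields concrete, an Extract (Capture) parameter file built from such a process definition might resemble the following sketch. This is an illustration only, not the exact output of the JKM: the group name ODIC1, the credentials, the trail path, and the schema and table names are all hypothetical placeholders.

```
-- Hypothetical Extract parameter file; all names and paths are placeholders
EXTRACT ODIC1
USERID ogg_user, PASSWORD ogg_password
-- The trail file name part is limited to two characters (here "C1"),
-- matching the Trail File Path field above
EXTTRAIL ./dirdat/C1
-- Corresponds to the Report Count Frequency field
REPORTCOUNT EVERY 30 MINUTES
TABLE SRC_SCHEMA.CUSTOMER;
TABLE SRC_SCHEMA.ORDERS;
```

The parameters added through the Select a parameter list are appended to this file in the same way.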

Delivery Process Fields

Note that the GoldenGate Delivery process must be configured on the target GoldenGate server.

  1. In the Process Definition tab:
    • Process Type: Type of the process that you want to configure. Select Delivery as the process type.

    • Name: Name of the process (physical schema) in Oracle Data Integrator. Process name cannot exceed 7 characters and only uppercase is allowed.

    • Trail File Path: Location of the trail file. Only two characters for the filename part are allowed.

    • Discard File Path: Location of the discard file.

    • Definition File Path: Location of the definition file.

    • Report Detail: Enables report information to include any collision counts.

    • Report Count Frequency: Reports the total operations count at specific intervals. If the interval is not specified, the entry is not added to the parameter file.

    • Select a parameter: List of available Oracle GoldenGate parameters. Only the parameters for the supported database are listed. Select a parameter and click Add.

      See the Oracle GoldenGate Reference Guide on OTN for information about the GoldenGate parameters.

      Note:

      In the definition of the logical schema, you must select the logical schema for the staging database.
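As an illustration, a Replicat (Delivery) parameter file built from these fields might resemble the following sketch. Every name and path shown (the group name ODID1, the credentials, the definition and discard file paths, the schemas) is a hypothetical placeholder, not a value produced by the JKM.

```
-- Hypothetical Replicat parameter file; names and paths are placeholders
REPLICAT ODID1
USERID ogg_user, PASSWORD ogg_password
-- Corresponds to the Definition File Path field
SOURCEDEFS ./dirdef/source.def
-- Corresponds to the Discard File Path field
DISCARDFILE ./dirrpt/ODID1.dsc, PURGE
-- Corresponds to the Report Count Frequency field
REPORTCOUNT EVERY 30 MINUTES
MAP SRC_SCHEMA.CUSTOMER, TARGET STG_SCHEMA.CUSTOMER;
```

The MAP statements route each replicated source table to its copy in the staging schema selected as the Target DB Logical Schema.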
Create the Oracle GoldenGate Logical Schemas

Create logical schemas for the GoldenGate physical schemas (GoldenGate Capture and Delivery processes) that you created in section Create the Oracle GoldenGate Physical Schemas. You must create a logical schema for both the Capture process and the Delivery process.

To create logical schemas:

  1. In the Topology Navigator expand the Technologies node in the Logical Architecture accordion.
  2. Right-click Oracle GoldenGate and select New Logical Schema.
  3. Fill in the Logical Schema Name.
  4. Select the appropriate process type, either Capture or Delivery, to which you want to attach your logical schema.
  5. For each Context in the left column, select an existing Physical Schema in the right column. This Physical Schema is automatically associated to the logical schema in this context. Repeat this operation for all necessary contexts.

    Note:

    If the process type is set to 'Delivery', you must select the name of the logical schema that GoldenGate will use to deliver the changes. In this case, select a logical schema name for the 'Target DB Logical Schema'.
  6. From the File menu, click Save.

Create the Replicated Tables

Oracle GoldenGate replicates into the staging server the records changed in the source. To perform this replication, the source table structures must be replicated in the staging server.

To replicate these source tables:

  1. Create a new Data Model using the Oracle or Teradata technology. This model must use the logical schema created using the instructions in Create the Staging Physical Schema.

    See Creating a Model in Developing Integration Projects with Oracle Data Integrator for more information on model creation.

    Note that you do not need to reverse-engineer this data model.

  2. Create a new diagram for this model and add to this diagram the source tables that you want to replicate.
  3. Generate the DDL Scripts and run these scripts for creating the tables in the staging data server.
  4. An initial load of the source data can be made to replicate this data into the staging tables. You can perform this initial load with ODI using the Generate Interface IN feature of Common Format Designer. Alternatively, you can use Oracle GoldenGate to perform this initial load, by specifying a capture or delivery process to perform the initial load or by setting the USE_OGG_FOR_INIT JKM option to Yes to create a process to perform the initial load when you Configure CDC for the Source Datastores.

Note:

See Creating Data Models with Common Format Designer in Developing Integration Projects with Oracle Data Integrator for more information on diagrams, generating DDL, and generating Interface IN features.

Set Up an Integration Project

Setting up a project using Oracle GoldenGate features follows the standard procedure. See Creating an Integration Project of Developing Integration Projects with Oracle Data Integrator.

Depending on the technology of your source data server and staging server, import one of the following KMs into your project:

  • JKM Oracle to Oracle Consistent (OGG Online)

  • JKM DB2 UDB to Oracle Consistent (OGG Online)

  • JKM Sybase ASE to Oracle Consistent (OGG Online)

  • JKM MSSQL to Oracle Consistent (OGG Online)

  • JKM Oracle to Teradata Consistent (OGG Online)

  • JKM DB2 UDB to Teradata Consistent (OGG Online)

  • JKM Sybase ASE to Teradata Consistent (OGG Online)

  • JKM MSSQL to Teradata Consistent (OGG Online)

Configure CDC for the Source Datastores

Changed Data Capture must be configured for the source datastores. This configuration is similar to setting up consistent set journalizing and is performed using the following steps.

  1. Edit the data model that contains the source datastore. In the Journalizing tab of the data model, set the Journalizing Mode to Consistent Set and select the appropriate JKM <database> to Oracle Consistent (OGG Online) or JKM <database> to Teradata Consistent (OGG Online).

    Select the following GoldenGate processes (physical schemas) using the process selection drop-down list:

    • Capture Process

    • Delivery Process

    • Initial Load Capture Process

    • Initial Load Delivery Process

    If you do not want to use an existing GoldenGate process, you can create new processes from here using the Create button next to the <Process Name> field. See Create Oracle GoldenGate Physical Schemas from the model for information about how to create GoldenGate processes from the model.

    Set the KM options as follows:

    • ONLINE: If you set this option to true, the JKM configures the CDC infrastructure and configures and starts the GoldenGate Capture and Delivery processes. If you set this option to false, the JKM generates the CDC infrastructure and the configuration files that are required to set up the GoldenGate Capture and Delivery processes. It also generates the Readme.txt that contains the instructions to configure and start the GoldenGate processes.

      For more information about online and offline mode, see Overview of the GoldenGate CDC Process.

      For information about how to configure and start GoldenGate processes using the configuration files, see Configure and Start Oracle GoldenGate Processes (Offline mode only).

    • LOCAL_TEMP_DIR: Full path to a temporary folder into which the Oracle GoldenGate configuration files will be generated

    • SRC_DSN: Name of the data source. This KM option is required when the ODBC driver is used. Note that this option does not exist in the JKM Oracle to Oracle Consistent (OGG Online).

      Note:

      For Sybase users only: When defining the data source name, you have to add the database server name to the datasource name as follows:

      DSN_name@SYBASE_DBSERVER

    • USE_OGG_FOR_INIT: Applicable for offline mode only. Generates the Oracle GoldenGate processes to perform the initial load of the replicated tables. If you have performed this initial load using Oracle Data Integrator while Creating the Replicated Tables, you can leave this option set to NO.

    • USE_INTEGRATED_REPLICAT_MODE: This KM option determines whether the delivery mode is classic or integrated Replicat.

      Values

      True: Use integrated replicat mode

      False: Use classic mode (default value)

      Only the following KMs have this parameter implemented:

      KM_JKM DB2 UDB to Oracle Consistent (OGG Online).xml

      KM_JKM MSSQL to Oracle Consistent (OGG Online).xml

      KM_JKM Oracle to Oracle Consistent (OGG Online).xml

      KM_JKM Sybase ASE to Oracle Consistent (OGG Online).xml

  2. Select the datastores that you want to replicate, or the model if you want to replicate all datastores, right-click, and then select Changed Data Capture > Add to CDC.
  3. Select the model, right-click then select Changed Data Capture > Subscriber > Subscribe. Add subscribers for this model.
  4. Select the model, right-click then select Changed Data Capture > Start Journal. If journals are started in online mode (ONLINE option for the JKM is set to true), the JKM creates the CDC infrastructure and configures and starts the Oracle GoldenGate processes. If journals are started in offline mode (ONLINE option for the JKM is set to false), the JKM creates the CDC infrastructure and generates the configuration files that are required to configure the Oracle GoldenGate processes. It also generates Readme.txt that contains the instructions to configure and start the GoldenGate processes.

    For information about how to configure and start GoldenGate processes, see Configure and Start Oracle GoldenGate Processes (Offline mode only).

You can review the result of the journal startup action:

  • If journals are started in online mode, the Oracle GoldenGate processes are configured and started. The changed data in the source datastores is captured and replicated in the staging tables.

  • If the journals are started in offline mode, the Oracle GoldenGate configuration files, as well as a Readme.txt file are generated in the directory that is specified in the LOCAL_TEMP_DIR KM option. You can use these files to Configure and Start Oracle GoldenGate Processes (Offline mode only).

  • The CDC infrastructure is set up correctly. The journalized datastores appear in the Models accordion with a Journalizing Active flag. You can right-click the model and select Changed Data Capture > Journal Data... to access the journalized data for these datastores.

See Using Journalizing in Developing Integration Projects with Oracle Data Integrator for more conceptual information and detailed instructions on CDC.

Note:

Although this CDC configuration supports consistent set journalizing, it is not required to order datastores in the Journalized Tables tab of the model after adding them to CDC.

Create Oracle GoldenGate Physical Schemas from the model

You can create the Oracle GoldenGate physical schemas for the following GoldenGate processes from the Journalizing tab of the Model Editor:

  • Capture Process

  • Delivery Process

  • Initial Load Capture Process (Capture process to be used for the initial load)

  • Initial Load Delivery Process (Delivery process to be used for the initial load)

When you create the Oracle GoldenGate physical schemas from the models, the default values are derived from the JAgent and the Model details.

To create the Oracle GoldenGate physical schemas from the model:

  1. In the Designer Navigator expand the Models panel.
  2. Expand the Models folder that contains the model from which you want to create the physical schemas.
  3. Right-click the Model and select Open.
  4. Click the Journalizing tab of the Model Editor.
  5. Click the Create button next to the Capture Process field.
  6. Select the appropriate JAgent and Context.
  7. Fill in the Process Name and Logical Process Name.
  8. Click OK to create and select the Capture process.

    WARNING:

    The physical schema generated for the Capture process needs to be changed manually. The Remote Trail File Path property of the physical schema uses the path for the Capture instance and needs to be changed to use the path for the Delivery instance.

  9. Click the Create button next to the Delivery Process field.
  10. Select the appropriate JAgent and Context.
  11. Fill in the Process Name and Logical Process Name.
  12. Select the Target Database Logical Schema for the Delivery process.
  13. Click OK.
  14. Similarly, click the Create buttons next to the Initial Load Capture Process and Initial Load Delivery Process fields to create physical schemas for them.

Configure and Start Oracle GoldenGate Processes (Offline mode only)

Note:

  • This section is applicable only if the journals are started in offline mode, that is, only if the ONLINE option for the JKM is set to false.

  • Connection to a JAgent is not required to configure Oracle GoldenGate Processes in offline mode. However, the necessary information must be available in Topology.

The JKM generates in the LOCAL_TEMP_DIR a folder named after the source and target object groups. This folder contains the following:

  • The Readme.txt file that contains detailed instructions for configuring and starting the Oracle GoldenGate processes.

  • The src folder that contains configuration files to upload on the source server, in the Oracle GoldenGate installation directory.

  • The stg folder that contains configuration files to upload on the staging server, in the Oracle GoldenGate installation directory.
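For orientation, the generated folder might look like the following sketch, assuming hypothetical source and target object group names SRC and STG; the exact folder name and file layout depend on your configuration.

```
<LOCAL_TEMP_DIR>/
  SRC_STG/          -- name derived from the source and target object groups
    Readme.txt      -- customized setup instructions
    src/            -- files to copy into the source GoldenGate installation
    stg/            -- files to copy into the staging GoldenGate installation
```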

The detailed instructions, customized for your configuration, are provided in the readme file.

These instructions include:

  1. Uploading or copying files from the src folder to the source server.
  2. Uploading or copying files from the stg folder to the staging server.
  3. Running on the source server the OBEY file generated by the JKM for starting the Capture process, using the ggsci command line.
  4. Generating the definition file on the source server using the defgen command line.
  5. Copying this definition file to the staging server.
  6. If the initial load option is used:
    • Running on the staging server the OBEY file generated by the JKM for the initial load, using the ggsci command line.

    • Running on the source server the OBEY file generated by the JKM for the initial load, using the ggsci command line.

  7. Finally, running on the staging server the OBEY file generated by the JKM for starting the Delivery processes, using the ggsci command line.

See the Oracle GoldenGate documentation on OTN for more information on OBEY files and the ggsci and defgen utilities.
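The steps above are typically run from the GoldenGate installation directory. The following is a hedged sketch of what that looks like; the OBEY and parameter file names are placeholders — use the actual names given in the generated Readme.txt.

```
# On the source server, from the GoldenGate installation directory:
./ggsci
GGSCI> OBEY <obey_file_from_src_folder>    -- name given in Readme.txt
GGSCI> EXIT

# Generate the source definition file with defgen:
./defgen paramfile <defgen_parameter_file>

# Copy the generated definition file to the staging server, then:
./ggsci
GGSCI> OBEY <obey_file_from_stg_folder>    -- name given in Readme.txt
```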

Design Mappings Using Replicated Data

You can use the replicated data as a source in your mappings. This process is similar to using a source datastore journalized in consistent set mode. See Using Changed Data: Consistent Set Journalizing in Developing Integration Projects with Oracle Data Integrator for more information.

Advanced Configuration

This section includes the following advanced configuration topics:

Initial Load Method

The staging tables contain a replica of the structure and data from the source tables. The Oracle GoldenGate processes capture changes on the source tables and apply them to the target. Yet the staging tables must be initially loaded with the original content of the source tables. You can use the following methods to perform the initial load:

  • Using Oracle GoldenGate: A specific GoldenGate process loads the whole content of the source tables into the staging tables.

  • Using Oracle Data Integrator: the Generate Interface IN option of Oracle Data Integrator's Common Format Designer. This method uses ODI mappings to transfer the data.

  • Using database backup/restore tools to copy data and structures.
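When GoldenGate performs the initial load, one common variant is a direct-load task: a special Extract reads the source tables directly and sends the rows to an initial-load Replicat on the staging server. The sketch below is illustrative only — all group names, hosts, ports, and schemas are hypothetical placeholders, and the actual parameters are generated by the JKM.

```
-- Initial-load Extract (source server); created as an initial-load task
-- (SOURCEISTABLE), so it reads the tables directly rather than the redo log
EXTRACT ODIIL1
USERID ogg_user, PASSWORD ogg_password
RMTHOST stg_host, MGRPORT 7809
RMTTASK REPLICAT, GROUP ODIIL2
TABLE SRC_SCHEMA.CUSTOMER;

-- Initial-load Replicat (staging server)
REPLICAT ODIIL2
USERID ogg_user, PASSWORD ogg_password
ASSUMETARGETDEFS
MAP SRC_SCHEMA.CUSTOMER, TARGET STG_SCHEMA.CUSTOMER;
```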

Tuning Replication Performance

The following KM options can be used to improve replication performance:

  • COMPATIBLE: This Oracle-specific option affects the use of the PURGE keyword and the way statistics are collected (using DBMS_STATS or ANALYZE). Set this value to the database version of your staging server.

  • NB_APPLY_PROCESS: Number of Oracle GoldenGate Delivery processes created on the staging server.

  • TRAIL_FILE_SIZE: Size of the Oracle GoldenGate trail file in Megabytes.

For the NB_APPLY_PROCESS and TRAIL_FILE_SIZE parameters, see the Oracle GoldenGate Documentation on OTN for more information on performance tuning.

One Source Multiple Staging Configuration (Offline mode only)

Note that a one-source-to-multiple-staging configuration can be set up only in offline journalizing mode.

It is possible to set up a configuration where changes are captured on a single source and replicated to several staging servers. The example below illustrates how to set this up in a typical configuration.

In this example, changes are captured on source server SRC and replicated to both the STG1 and STG2 staging servers.

  1. Edit the source model and ensure that the logical schema for STG1 is selected.
  2. Start the journals in offline mode and follow the instructions in the readme to set up the Oracle GoldenGate processes in SRC and STG1.
  3. Edit the source model again, and select the logical schema for STG2.
  4. Start the journals in offline mode and follow the instructions in the readme to set up the Oracle GoldenGate process in SRC and STG2.

    Note:

    Running the configuration on SRC again will not recreate the capture process, trail files, or definition files. It simply creates a new Oracle GoldenGate Datapump process to push data to STG2.

Integrated Capture

In Integrated Capture mode, the Oracle GoldenGate Extract process interacts directly with a database logmining server to receive data changes in the form of logical change records (LCR).

The following are the benefits of Integrated Capture:

  • Because Integrated Capture uses the database logmining server to access the Oracle redo stream, you can automatically switch between different copies of archive logs or different mirrored versions of the online logs

  • Being fully integrated with the database, no additional steps are required to work with Oracle RAC, ASM, and TDE

  • Enables faster filtering of tables

  • Handles point-in-time recovery and RAC integration more efficiently

  • Enables integrated log management, as the Oracle Recovery Manager (RMAN) automatically retains the archive logs required for the extract

  • Supports capture from a multi-tenant container database

  • Because Integrated Capture and Integrated Apply are both database objects, object naming follows the same rules as for other Oracle database objects

  • For a release 11.2.0.4 source database and later (with source compatibility set to 11.2.0.4 or higher), the capture of DDL is performed by the logmining server asynchronously and requires no installation of special triggers, tables, or other data objects

  • A DDL trigger and supporting objects are required when Extract is in integrated mode with an Oracle 11g source database earlier than version 11.2.0.4

  • Oracle GoldenGate upgrades can be performed without stopping the user applications

Figure 28-1 Configuration of Extract in Integrated Capture
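
Outside of ODI, an integrated Extract is registered with the database and created with the INTEGRATED TRANLOG option. A minimal GGSCI sketch (group, user, and trail names are illustrative):

```text
GGSCI> DBLOGIN USERID ggadmin PASSWORD <password>
GGSCI> REGISTER EXTRACT EXT1 DATABASE
GGSCI> ADD EXTRACT EXT1, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/et, EXTRACT EXT1
```

REGISTER EXTRACT creates the logmining server for the group in the database; the INTEGRATED TRANLOG clause then creates the Extract in Integrated Capture mode.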

Integrated Capture Deployment Options

Depending on where the mining database is deployed, you have two deployment options for integrated capture. The mining database is the one where the logmining server is deployed.

Local Deployment

For local deployment, the source database and the mining database are the same. The source database is the database:

  • For which you want to mine the redo stream to capture changes.

  • Where you deploy the logmining server.

As Integrated Capture is fully integrated with the database, this mode does not require any special database setup.

Downstream Deployment

In downstream deployment, the source and mining databases are different databases. When using a downstream mining configuration, the source database and mining database must be of the same platform. For example, if the source database is running on Windows 64-bit, the downstream database must also be on a Windows 64-bit platform. Setting up a downstream deployment involves two main tasks:

  1. Create the logmining server at the downstream database.

  2. Configure redo transport at the source database to ship the redo logs to the downstream mining database for capture at that location.

Note:

Using a downstream mining server for capture is recommended because it offloads the capture overhead, as well as any overhead from transformation or other processing, from the production server. However, it requires log shipping and additional configuration.
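
Redo transport from the source to the downstream mining database uses standard Oracle redo transport parameters. A hedged sketch of the source-side configuration (the database service names srcdb and miningdb are illustrative):

```sql
-- On the source database: ship redo to the downstream mining database
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(srcdb,miningdb)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
  'SERVICE=miningdb ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
```

The downstream database must also be configured with standby redo logs (for real-time mining) or to accept archived logs; see the Oracle GoldenGate documentation for the complete procedure.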

Deciding Which Apply Method to Use

The Replicat process enables the application of replicated data to an Oracle target database. For more information about Oracle GoldenGate processes, see Administering Oracle GoldenGate for Windows and UNIX.

For an Oracle target database, you can run Replicat in either nonintegrated or integrated mode. The following sections explain these modes and the database versions that each mode supports:

Nonintegrated Replicat

In nonintegrated mode, the Replicat process uses standard SQL to apply data directly to the target tables.

You can apply transactions in parallel with a nonintegrated Replicat, by using a coordinated Replicat configuration. For more information, see Administering Oracle GoldenGate for Windows and UNIX.

Use nonintegrated Replicat when:

  • The target Oracle database is a version earlier than Oracle 11.2.0.4.

  • You want to extensively use features that are not supported in integrated Replicat mode.

In nonintegrated mode, Replicat operates as follows:

  1. Reads the Oracle GoldenGate trail.

  2. Performs data filtering, mapping, and conversion.

  3. Constructs SQL statements that represent source database DML or DDL transactions (in committed order).

  4. Applies the SQL to the target through Oracle Call Interface (OCI).
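
The steps above correspond to an ordinary Replicat configuration. A minimal nonintegrated Replicat parameter file might look as follows (the group, user, and schema names are illustrative):

```text
REPLICAT REP1
USERID ggadmin, PASSWORD <password>
-- Assume source and target table structures are identical
ASSUMETARGETDEFS
-- Map each source table to its target table
MAP SRC_SCHEMA.*, TARGET TGT_SCHEMA.*;
```

Because no INTEGRATED keyword is used when the group is added (ADD REPLICAT REP1, EXTTRAIL ./dirdat/rt), Replicat applies the constructed SQL directly through OCI.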

Figure 28-2 Nonintegrated Replicat Configuration

Integrated Replicat

In integrated mode, the Replicat process leverages the apply processing functionality that is available within the Oracle database. In this mode, Replicat operates as follows:

  • Reads the Oracle GoldenGate trail.

  • Performs data filtering, mapping, and conversion.

  • Constructs logical change records (LCR) that represent source database DML transactions (in committed order). DDL is applied directly by Replicat.

  • Attaches to a background process in the target database known as a database inbound server by means of a lightweight streaming interface.

  • Transmits the LCRs to the inbound server, which applies the data to the target database.

Figure 28-3 Integrated Replicat Configuration


Within a single Replicat configuration, multiple inbound server child processes known as apply servers apply transactions in parallel, while preserving the original transaction atomicity. You can increase this parallelism as much as your target system will support, either when you configure the Replicat process or dynamically as needed.
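
For illustration, an integrated Replicat is added with the INTEGRATED keyword, and the number of apply servers is controlled through DBOPTIONS INTEGRATEDPARAMS (group, user, schema names, and the parallelism value are illustrative):

```text
GGSCI> ADD REPLICAT IREP1, INTEGRATED, EXTTRAIL ./dirdat/rt
```

```text
REPLICAT IREP1
USERID ggadmin, PASSWORD <password>
-- Request four parallel apply servers in the inbound server
DBOPTIONS INTEGRATEDPARAMS(PARALLELISM 4)
MAP SRC_SCHEMA.*, TARGET TGT_SCHEMA.*;
```

Raising PARALLELISM increases apply throughput at the cost of target-side resources; the inbound server still preserves dependency ordering among transactions.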

Figure 28-4 Integrated Replicat with Two Parallel Apply Servers


Integrated Replicat applies transactions asynchronously. Transactions that do not have interdependencies can be safely executed and committed out of order to achieve fast throughput. Transactions with dependencies are guaranteed to be applied in the same order as on the source.

A reader process in the inbound server computes the dependencies among the transactions in the workload based on the constraints defined at the target database (primary key, unique, foreign key). Barrier transactions and DDL operations are managed automatically, as well. A coordinator process coordinates multiple transactions and maintains order among the apply servers.

If the inbound server does not support a configured feature or column type, Replicat disengages from the inbound server, waits for the inbound server to complete transactions in its queue, and then applies the transaction to the database in direct apply mode through OCI. Replicat resumes processing in integrated mode after applying the direct transaction.

The following features are applied in direct mode by Replicat:

  • DDL operations

  • Sequence operations

  • SQLEXEC parameter within a TABLE or MAP parameter

  • EVENTACTIONS processing

  • UDT. If the Extract uses USENATIVEOBJSUPPORT to capture the UDT, integrated Replicat applies it with the inbound server; otherwise, Replicat handles it directly.

Note:

Because transactions are applied serially in direct apply mode, heavy use of such operations may reduce the performance of the integrated Replicat mode. Integrated Replicat performs best when most of the apply processing can be performed in integrated mode.

User exits are executed in integrated mode. A user exit may produce unexpected results if the exit code depends on data in the replication stream.

Integrated Replicat Requirements

To use integrated Replicat, the following must be true:

  • The target Oracle database must be Oracle 11.2.0.4 or later.

  • Supplemental logging must be enabled on the source database to support the computation of dependencies among tables and the scheduling of concurrent transactions on the target. Supplemental logging can be enabled at any time before you start the Oracle GoldenGate processes.
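
Supplemental logging is typically enabled through GGSCI; a minimal sketch (the schema name is illustrative):

```text
GGSCI> DBLOGIN USERID ggadmin PASSWORD <password>
GGSCI> ADD SCHEMATRANDATA SRC_SCHEMA
```

ADD SCHEMATRANDATA enables schema-level supplemental logging, which also covers tables added later; ADD TRANDATA <schema>.<table> is the table-level alternative.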

Using Different Capture and Apply Modes Together

You can use the following capture and apply modes together:

  • Classic capture (Oracle or non-Oracle source) and nonintegrated Replicat

  • Classic capture (Oracle or non-Oracle source) and integrated Replicat

  • Integrated capture and nonintegrated Replicat

  • Integrated capture and integrated Replicat

You can use integrated capture and classic capture concurrently within the same source Oracle GoldenGate instance, and you can use integrated Replicat and nonintegrated Replicat concurrently within the same target Oracle GoldenGate instance.

This configuration requires careful placement of your objects within the appropriate process group, because there is no coordination of DDL or DML between classic and integrated capture modes, nor between nonintegrated and integrated Replicat modes. Each Extract group must process objects that are suited to the processing mode, based on table data types and attributes. No objects in one Extract can have DML or DDL dependencies on objects in the other Extract. The same type of segregation must be applied to the Replicat configuration.

The recommended Oracle GoldenGate configuration, when supported by the Oracle version, is to use one integrated capture on an Oracle source and one integrated Replicat per source database on an Oracle target. Integrated capture supports certain data types more completely than classic capture. One integrated Replicat configuration supports all Oracle data types either through the inbound server or by switching to direct apply when necessary, and it preserves source transaction integrity. You can adjust the parallelism settings to the desired apply performance level as needed.

If the target database is an Oracle version that does not support integrated Replicat, or if it is a non-Oracle database, you can use a coordinated Replicat configuration. For more information, see Administering Oracle GoldenGate for Windows and UNIX.

Switching to a Different Process Mode

You can switch between the process modes. For example, you can switch from classic capture to integrated capture, or from integrated capture to classic capture. For instructions, see Administering Oracle GoldenGate for Windows and UNIX.

Upgrading GoldenGate Classic Extract to Integrated

To run integrated Extract in GoldenGate 11.2.1, the following requirements must be met:

  • Oracle RDBMS must be 11.2.0.3 or higher.

  • The required RDBMS (database) patches must be applied, including the 11.2.0.3 database-specific bundle patch for Integrated Extract 11.2.x.

  • Redo compatibility must be set to 11.2.0.3, matching the database version.

The following steps describe the upgrade procedure:

  1. If you are using a RAC environment and OGG version 11.2.1.0.23 or later, execute steps a to d. If you are using an OGG version prior to 11.2.1.0.23, skip these steps and proceed with step 2.

    a. For a running Extract, issue the following command:

       SEND EXTRACT <extract name> TRANLOGOPTIONS PREPAREFORUPGRADETOIE

    b. For a stopped Extract, start it after adding the following line to the parameter file:

       TRANLOGOPTIONS PREPAREFORUPGRADETOIE

    c. Monitor the ggserr.log file or the corresponding Extract report file for an INFO OGG-01873 message, indicating that the change has taken effect and that you can proceed with the upgrade.

      Note:

      For the INFO message to be displayed, the Extract has to process a committed transaction on all the RAC nodes for a table being captured. As an alternative, a dummy table can be added to the Extract parameter file; performing DML on this table from all the threads gives the Extract commit-boundary current checkpoints for all the threads.

      Example from the report file:
      2014-06-05 17:06:09  INFO    OGG-01873  The parameter TRANLOGOPTIONS PREPAREFORUPGRADETOIE has taken effect. Proceed to the next step in the upgrade process.
      Example from the ggserr.log file:
      2014-06-05 17:06:09  INFO    OGG-01873  Oracle GoldenGate Capture for Oracle, src.prm:  The parameter TRANLOGOPTIONS PREPAREFORUPGRADETOIE has taken effect.

    d. Once the message appears, stop the Extract, perform DBLOGIN, and alter the Extract for conversion to Integrated as described in the following steps.

  2. Connect to the Extract database and grant the following privilege to the GG Admin user:

     SQL> exec dbms_goldengate_auth.grant_admin_privilege('<ggadmin>')

  3. Log in to GGSCI and check whether the upgrade is possible:

     GGSCI> DBLOGIN USERID <ID> PASSWORD <PW>
     GGSCI> INFO <extract_name> UPGRADE

     If there are existing open transactions, the upgrade may fail.

  4. Stop the Extract and log in to the database:

     GGSCI> STOP EXTRACT <extract_name>
     GGSCI> DBLOGIN USERID <ggadmin>, PASSWORD <password>

  5. Register the Extract in the database (if not done already), upgrade it to integrated mode, and restart it:

     GGSCI> REGISTER EXTRACT <extract_name> DATABASE
     GGSCI> ALTER EXTRACT <extract_name>, UPGRADE INTEGRATED TRANLOG
     GGSCI> START EXTRACT <extract_name>