3 Configuring Oracle GoldenGate for DB2 LUW

This chapter provides an overview of the basic steps required to configure Oracle GoldenGate for a DB2 LUW source and target database.

Topics:

What to Expect from these Instructions

These instructions show you how to configure basic parameter (configuration) files for the following processes:

  • A primary Extract (captures transaction data from the data source)

  • A data-pump Extract (propagates the data from local storage to the target system)

  • A Replicat (applies replicated data to the target database)

Your business requirements will probably require a more complex topology, but this procedure forms a basis for the rest of your configuration steps.

By performing these steps, you can:

  • Get the basic configuration files established.

  • Build upon them later by adding more parameters as you make decisions about features or requirements that apply to your environment.

  • Use copies of them to make the creation of additional parameter files faster than starting from scratch.

Where to Get More Information

See Administering Oracle GoldenGate and Securing the Oracle GoldenGate Environment for more information about:

  • The processes and files that you are configuring

  • Detailed configuration information

  • Security options

  • Data-integration options (filtering, mapping, conversion)

  • Instructions for configuring complex topologies

  • Steps to perform initial instantiation of the replication environment

Configuring the Primary Extract

These steps configure the primary Extract to capture transaction data from a source DB2 LUW and write the data to a local trail for temporary storage.

  1. In GGSCI on the source system, create the Extract parameter file.
    EDIT PARAMS name
    

    Where: name is the name of the primary Extract.

  2. Enter the Extract parameters in the order shown, starting a new line for each parameter statement.

    Basic parameters for the primary Extract

    EXTRACT finance
    SOURCEDB mysource, USERIDALIAS myalias 
    ENCRYPTTRAIL AES192
    EXTTRAIL /ggs/dirdat/lt
    TABLE hr.*;
    
    Parameter Description
    EXTRACT group

    group is the name of the Extract group.

    SOURCEDB database, USERIDALIAS alias

    Specifies the real name of the source DB2 LUW database (not an alias), plus the alias of the database login credential of the user that is assigned to Extract. This credential must exist in the Oracle GoldenGate credential store. For more information, see Database User for Oracle GoldenGate Processes.

    ENCRYPTTRAIL algorithm

    Encrypts the local trail.

    EXTTRAIL pathname

    Specifies the path name of the local trail to which the primary Extract writes captured data for temporary storage.

    TABLE schema.object;
    

    Specifies the database object for which to capture data.

    • TABLE is a required keyword.

    • schema is the schema name or a wildcarded set of schemas.

    • object is the table name, or a wildcarded set of tables.

    Only the asterisk (*) wildcard is supported for DB2 LUW; the question mark (?) wildcard is not supported for this database.

    Terminate the parameter statement with a semi-colon.

    To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.

    For more information and for additional options that control data filtering, mapping, and manipulation, see TABLE | MAP in Reference for Oracle GoldenGate.
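
    For example, to capture all tables in the hr schema except one, you could combine TABLE with the TABLEEXCLUDE parameter, as in the following sketch (the excluded table name hr.bonus_temp is hypothetical):

    EXTRACT finance
    SOURCEDB mysource, USERIDALIAS myalias
    ENCRYPTTRAIL AES192
    EXTTRAIL /ggs/dirdat/lt
    TABLEEXCLUDE hr.bonus_temp
    TABLE hr.*;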

  3. Enter any optional Extract parameters that are recommended for your configuration. You can edit this file at any point before starting processing by using the EDIT PARAMS command in GGSCI.
  4. Save and close the file.

Configuring the Data Pump Extract

These steps configure the data pump that reads the local trail and sends the data across the network to a remote trail on the target. The data pump is optional, but recommended.

  1. In GGSCI on the source system, create the data-pump parameter file.
    EDIT PARAMS name
    

    Where name is the name of the data-pump Extract.

  2. Enter the data-pump Extract parameters in the order shown, starting a new line for each parameter statement. Your input variables will be different. See Table 3-* for descriptions.

    Basic parameters for the data-pump Extract group:

    EXTRACT extpump
    SOURCEDB mypump, USERIDALIAS myalias
    RMTHOST fin1, MGRPORT 7809, ENCRYPT AES192, KEYNAME securekey2
    RMTTRAIL /ggs/dirdat/rt
    TABLE hr.*;
    
    Parameter Description
    EXTRACT group

    group is the name of the data pump Extract.

    SOURCEDB database, USERIDALIAS alias

    Specifies the real name of the source DB2 LUW database (not an alias), plus the alias of the database login credential of the user that is assigned to Extract. This credential must exist in the Oracle GoldenGate credential store. For more information, see Database User for Oracle GoldenGate Processes.

    RMTHOST hostname, MGRPORT portnumber
    [, ENCRYPT algorithm KEYNAME keyname]
    • RMTHOST specifies the name or IP address of the target system.

    • MGRPORT specifies the port number where Manager is running on the target.

    • ENCRYPT specifies optional encryption of data across TCP/IP.

    RMTTRAIL pathname

    Specifies the path name of the remote trail.

    TABLE schema.object;

    Specifies a table, or multiple tables specified with a wildcard. In most cases, this listing will be the same as that in the primary Extract parameter file.

    • TABLE is a required keyword.

    • schema is the schema name or a wildcarded set of schemas.

    • object is the name of a table or a wildcarded set of tables.

    Only the asterisk (*) wildcard is supported for DB2 LUW; the question mark (?) wildcard is not supported for this database. See Administering Oracle GoldenGate for information about how to specify object names with and without wildcards.

    Terminate the parameter statement with a semi-colon.

    To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.

    For more information and for additional TABLE options that control data filtering, mapping, and manipulation, see TABLE | MAP in Reference for Oracle GoldenGate.

  3. Enter any optional Extract parameters that are recommended for your configuration. You can edit this file at any point before starting processing by using the EDIT PARAMS command in GGSCI.
  4. Save and close the file.

Creating a Temporal Table

A temporal table is a table that maintains the history of its data and the time periods during which that data is valid. Oracle GoldenGate uses temporal tables to keep track of all the old rows that are deleted or updated in a table, and to maintain the business validity of rows and data; for example, Oracle GoldenGate keeps track of the time period during which a row is valid. There are three types of temporal tables: system-period, application-period, and bitemporal.

Support for Temporal Tables

  • Replication between system-period temporal tables and application-period temporal tables is not supported.

  • Replication from a non-temporal table to a temporal table is not supported.

  • Replication of temporal tables with the INSERTALLRECORDS parameter is not supported.

  • Bidirectional replication is supported only with the default replication.

  • CDR in bidirectional replication is not supported.

  • CDR in application-period temporal tables is supported.

Replicating with Temporal Tables

You can choose one of the following methods to replicate a system-period or bitemporal temporal table:

  • You can replicate a temporal table to another temporal table only; this is the default behavior. Oracle GoldenGate does not replicate the SYSTEM_TIME period and transaction-start ID columns, because these columns are automatically generated at the apply side; the database manager populates them in the target temporal table using the system clock time and the default values. To preserve the original values of these columns, use any of the following:

    • Add extra timestamp columns in the target temporal table and map the columns accordingly. The extra columns are automatically added to the associated history table.

    • Use a non-temporal table at the apply side and map the columns appropriately. In this scenario, you will not be able to maintain the history table.

    • In a heterogeneous configuration where the source is DB2 LUW and the target is a different database, you can either ignore the automatically generated columns, or use an appropriate column-conversion function to convert the column values into a format that the target database supports and map them to the target columns accordingly.

    Or

  • You can replicate a temporal table, along with its associated history table, to a temporal table and history table respectively. To do so, you must specify the Replicat parameter DBOPTIONS SUPPRESSTEMPORALUPDATES, and you must specify both the temporal table and the history table to be captured in the Extract parameter file. Oracle GoldenGate then replicates the SYSTEM_TIME period and transaction-start ID column values. You must ensure that the database instance has execute permission to run the stored procedure at the apply side.

Oracle GoldenGate cannot detect and resolve conflicts when using default replication, because the SYSTEM_TIME period and transaction-start ID columns remain automatically generated; these columns cannot be specified in SET and WHERE clauses. If you use the SUPPRESSTEMPORALUPDATES parameter, then Oracle GoldenGate supports CDR.
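
As a sketch, a Replicat parameter file for the SUPPRESSTEMPORALUPDATES method might look like the following (the group, database, and schema names are hypothetical; policy_info and hist_policy_info are the temporal and history tables from the examples below):

REPLICAT fintemp
TARGETDB mytarget, USERIDALIAS myalias
DBOPTIONS SUPPRESSTEMPORALUPDATES
ASSUMETARGETDEFS
MAP hr.policy_info, TARGET hr2.policy_info;
MAP hr.hist_policy_info, TARGET hr2.hist_policy_info;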

Converting

You can convert an already existing table into a temporal table, which changes the structure of the table. This section describes how the structure of the table changes. The following sample existing table is converted into all three temporal table types in the examples in this section:

CREATE TABLE policy_info
(
  policy_id CHAR(4) NOT NULL PRIMARY KEY,
  coverage  INT NOT NULL
);

The table contains the following initial rows:

POLICY_ID COVERAGE
--------- --------
ABC          12000
DEF          13000
ERT          14000

Example 1   Converting an existing table into a system-period temporal table.

You convert the sample existing table into a system-period temporal table by adding SYSTEM_PERIOD and transaction-start ID columns and a SYSTEM_TIME period, as in the following:

ALTER TABLE policy_info
   ADD COLUMN sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN;
ALTER TABLE policy_info 
   ADD COLUMN sys_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END;
ALTER TABLE policy_info
   ADD COLUMN ts_id TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID;
ALTER TABLE policy_info ADD PERIOD SYSTEM_TIME(sys_start, sys_end);

Then you create a history table for the new temporal table using one of the following two methods:

  • 	CREATE TABLE hist_policy_info
    (
      policy_id CHAR(4) NOT NULL,
      coverage  INT NOT NULL,
      sys_start TIMESTAMP(12) NOT NULL,
      sys_end   TIMESTAMP(12) NOT NULL,
      ts_id     TIMESTAMP(12) NOT NULL
    );
    ALTER TABLE hist_policy_info ADD RESTRICT ON DROP;

  • CREATE TABLE hist_policy_info LIKE policy_info WITH RESTRICT ON DROP;


    The RESTRICT ON DROP clause prevents the history table from being dropped when the system-period temporal table is dropped; otherwise, the history table is implicitly dropped along with its associated temporal table. You can create a history table without RESTRICT ON DROP. A history table cannot be explicitly dropped.

    Do not use the GENERATED ALWAYS clause when creating a history table. The primary key of the system-period temporal table also does not apply here, because many updates to a particular row in the base table trigger many inserts into the history table with the same set of primary key values. Apart from these differences, the structure of a history table must be exactly the same as that of its associated system-period temporal table: the history table must have the same number and order of columns, and its columns cannot be explicitly added, dropped, or changed. You must associate a system-period temporal table with its history table with the following statement:

     ALTER TABLE policy_info ADD VERSIONING USE HISTORY TABLE hist_policy_info;
    

    The GENERATED ALWAYS columns of the table are the ones that are always populated by the database manager so you do not have any control over these columns. The database manager populates these columns based on the system time.

    The added SYSTEM_PERIOD and transaction-start ID columns have default values for the already existing rows, as in the following:

    POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID
    --------- -------- -------------------------------- -------------------------------- --------------------------------
    ABC          12000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
    DEF          13000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
    ERT          14000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
    

    The associated history table is populated with the before images once you start updating the temporal table.
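
For example, updating a row moves its before image into the history table, and the FOR SYSTEM_TIME clause can then query past states. The following statements are an illustrative sketch (actual period timestamps depend on the system clock):

UPDATE policy_info SET coverage = 15000 WHERE policy_id = 'ABC';

-- The before image (coverage 12000) is now in hist_policy_info.
-- Query the row as it existed at an earlier point in time:
SELECT policy_id, coverage
  FROM policy_info FOR SYSTEM_TIME AS OF '2001-06-01-00.00.00'
  WHERE policy_id = 'ABC';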

Example 2   Converting an existing table into an application-period temporal table.

You can convert the sample existing table into an application-period temporal table by adding time columns and a BUSINESS_TIME period, as in the following:

ALTER TABLE policy_info ADD COLUMN bus_start DATE NOT NULL DEFAULT '10/10/2001';
ALTER TABLE policy_info ADD COLUMN bus_end DATE NOT NULL DEFAULT '10/10/2002';
ALTER TABLE policy_info ADD PERIOD BUSINESS_TIME(bus_start, bus_end);

When adding the time columns, make sure that the business validity time values you enter keep the bus_start value earlier than the bus_end value, because these columns specify the business validity of the rows.

The new application-period temporal table will look similar to:

POLICY_ID COVERAGE BUS_START  BUS_END
--------- -------- ---------- ----------
ERT          14000 10/10/2001 10/10/2002
DEF          13000 10/10/2001 10/10/2002
ABC          12000 10/10/2001 10/10/2002

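
With the BUSINESS_TIME period in place, you can update a row for only part of its business-validity period; the database manager automatically splits the row to preserve the remainder. A sketch:

-- Change coverage for part of the business-time period only.
UPDATE policy_info
   FOR PORTION OF BUSINESS_TIME FROM '2002-01-01' TO '2002-06-01'
   SET coverage = 15000
   WHERE policy_id = 'ABC';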
Example 3   Converting an existing table into a bitemporal table.

You can convert the sample existing table into a bitemporal table by adding SYSTEM_PERIOD, transaction-start ID, and time columns, along with the SYSTEM_TIME and BUSINESS_TIME periods, as in the following:

ALTER TABLE policy_info
   ADD COLUMN sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN;
ALTER TABLE policy_info 
   ADD COLUMN sys_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END;
ALTER TABLE policy_info
   ADD COLUMN ts_id TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID;
ALTER TABLE policy_info ADD PERIOD SYSTEM_TIME(sys_start, sys_end);
 
ALTER TABLE policy_info ADD COLUMN bus_start DATE NOT NULL DEFAULT '10/10/2001';
ALTER TABLE policy_info ADD COLUMN bus_end DATE NOT NULL DEFAULT '10/10/2002';
ALTER TABLE policy_info ADD PERIOD BUSINESS_TIME(bus_start, bus_end);

When adding the time columns, make sure that the business validity time values you enter keep the bus_start value earlier than the bus_end value, because these columns specify the business validity of the rows.

Then you create a history table for the new temporal table using one of the following two methods:

  • 	CREATE TABLE hist_policy_info
    (
      policy_id CHAR(4) NOT NULL,
      coverage  INT NOT NULL,
      sys_start TIMESTAMP(12) NOT NULL,
      sys_end   TIMESTAMP(12) NOT NULL,
      ts_id     TIMESTAMP(12) NOT NULL
    );
    ALTER TABLE hist_policy_info ADD RESTRICT ON DROP;

  • CREATE TABLE hist_policy_info LIKE policy_info WITH RESTRICT ON DROP;
    
    The RESTRICT ON DROP clause prevents the history table from being dropped when the system-period temporal table is dropped; otherwise, the history table is implicitly dropped along with its associated temporal table. You can create a history table without RESTRICT ON DROP. A history table cannot be explicitly dropped.

    Do not use the GENERATED ALWAYS clause when creating a history table. The primary key of the system-period temporal table also does not apply here, because many updates to a particular row in the base table trigger many inserts into the history table with the same set of primary key values. Apart from these differences, the structure of a history table must be exactly the same as that of its associated system-period temporal table: the history table must have the same number and order of columns, and its columns cannot be explicitly added, dropped, or changed. You must associate a system-period temporal table with its history table with the following statement:

     ALTER TABLE policy_info ADD VERSIONING USE HISTORY TABLE hist_policy_info;
    


The added SYSTEM_TIME period, transaction-start ID, and time columns have default values for the already existing rows, as in the following:

POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID                            BUS_START  BUS_END
--------- -------- -------------------------------- -------------------------------- -------------------------------- ---------- ----------
ABC          12000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000 10/10/2001 10/10/2002
DEF          13000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000 10/10/2001 10/10/2002
ERT          14000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000 10/10/2001 10/10/2002

The history table is populated with the before images once you start updating the temporal table.

Example 4   Replication in a Heterogeneous Environment.

In a heterogeneous configuration in which you do not have temporal tables at the apply side, you can replicate only the system-period and bitemporal tables, not the associated history tables. When replicating in this situation, you must take care with the SYSTEM_PERIOD and transaction-start ID column values, because these columns can contain values that the target database does not support. First use the map conversion functions to convert these values into a format that the target database supports, and then map the columns accordingly.

For example, MySQL has a DATETIME range from 1000-01-01 00:00:00.000000 to 9999-12-31 23:59:59.999999. You cannot replicate a timestamp value of 0001-01-01-00.00.00.000000000000 to MySQL. To replicate such values, you must convert this value into the MySQL DATETIME value 1000-01-01 00:00:00.000000, and then map the columns. If you have the following row in the policy_info system-period table:

POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID
--------- -------- -------------------------------- -------------------------------- --------------------------------
ABC          12000 0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000

To replicate the row into MySQL, you would use the COLMAP clause with column-conversion functions:

MAP source_schema.policy_info, TARGET target_schema.policy_info, COLMAP
(policy_id=policy_id, coverage=coverage,
 sys_start= @IF( ( @NUMSTR( @STREXT(sys_start,1,4))) > 1000, sys_start, '1000-01-01 00.00.00.000000'),
 sys_end=sys_end,
 ts_id= @IF( ( @NUMSTR( @STREXT(ts_id,1,4))) > 1000, ts_id, '1000-01-01 00.00.00.000000'));

Creating a Checkpoint Table

The checkpoint table is a required component of Replicat.

Replicat maintains its recovery checkpoints in the checkpoint table, which is stored in the target database. Checkpoints are written to the checkpoint table within the Replicat transaction. Because a checkpoint either succeeds or fails with the transaction, Replicat ensures that a transaction is only applied once, even if there is a failure of the process or the database.

To configure a checkpoint table, see Creating a Checkpoint Table in Administering Oracle GoldenGate.
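
For example, you could create the checkpoint table from GGSCI on the target system as in the following sketch (the owner and name ggadmin.ggs_checkpoint are hypothetical):

GGSCI> DBLOGIN SOURCEDB mytarget USERIDALIAS myalias
GGSCI> ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint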

Configuring the Replicat Parameter File

These steps configure the Replicat process. This process applies replicated data to a DB2 LUW target database.

  1. In GGSCI on the target system, create the Replicat parameter file.
    EDIT PARAMS name
    

    Where: name is the name of the Replicat group.

  2. Enter the Replicat parameters in the order shown, starting a new line for each parameter statement.

    Basic parameters for the Replicat group:

    REPLICAT financer
    TARGETDB mytarget, USERIDALIAS myalias
    ASSUMETARGETDEFS
    MAP hr.*, TARGET hr2.*;
    
    Parameter Description
    REPLICAT group

    group is the name of the Replicat group.

    TARGETDB database, USERIDALIAS alias

    Specifies the real name of the target DB2 LUW database (not an alias), plus the alias of the database login credential of the user that is assigned to Replicat. This credential must exist in the Oracle GoldenGate credential store. For more information, see Database User for Oracle GoldenGate Processes.

    ASSUMETARGETDEFS

    Specifies how to interpret data definitions. ASSUMETARGETDEFS assumes the source and target tables have identical definitions. (This procedure assumes identical definitions.)

    Use the alternative SOURCEDEFS if the source and target tables have different definitions, and create a source data-definitions file with the DEFGEN utility.

    MAP schema.object, TARGET schema.object;

    Specifies the relationship between a source table or multiple objects, and the corresponding target object or objects.

    • MAP specifies the source portion of the MAP statement and is a required keyword. Specify the source objects in this clause.

    • TARGET specifies the target portion of the MAP statement and is a required keyword. Specify the target objects to which you are mapping the source objects.

    • schema is the schema name or a wildcarded set of schemas.

    • object is the name of a table or a wildcarded set of objects.

    Terminate this parameter statement with a semi-colon.

    Only the asterisk (*) wildcard is supported for DB2 LUW; the question mark (?) wildcard is not supported for this database. To exclude objects from a wildcard specification, use the MAPEXCLUDE parameter.
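
    For example, to apply all hr source tables to the hr2 schema except one, you could combine MAP with the MAPEXCLUDE parameter, as in the following sketch (the excluded table name hr.bonus_temp is hypothetical):

    REPLICAT financer
    TARGETDB mytarget, USERIDALIAS myalias
    ASSUMETARGETDEFS
    MAPEXCLUDE hr.bonus_temp
    MAP hr.*, TARGET hr2.*;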

  3. Enter any optional Replicat parameters that are recommended for your configuration. You can edit this file at any point before starting processing by using the EDIT PARAMS command in GGSCI.
  4. Save and close the file.

When to Start Replicating Transactional Changes

You must start replication when the source and target data is in a synchronized state, where the corresponding rows in the source and target tables contain identical data values. Unless you are starting with brand new source and target databases with no current user activity, you will need to activate change capture and apply processes to handle ongoing transactional changes while an initial load is being applied to the target. This process is known as initial synchronization, also called instantiation. The initial load captures a point-in-time snapshot of the source data and applies it to the target, while Oracle GoldenGate maintains any changes that are made after that point.

See Instantiating Oracle GoldenGate with an Initial Load in Administering Oracle GoldenGate for instantiation options.

Testing Your Configuration

It is important to test your configuration in a test environment before deploying it live on your production machines. This is especially important in an active-active or high availability configuration, where trusted source data may be touched by the replication processes. Testing enables you to find and resolve any configuration mistakes or data issues without the need to interrupt user activity for re-loads on the target or other troubleshooting activities.