11 Preparing the DB2 for z/OS Transaction Logs for Oracle GoldenGate
Topics:
Making Transaction Data Available
Oracle GoldenGate can extract DB2 transaction data from the active and archived logs. Follow these guidelines to configure the logs so that Extract can capture data.
Enabling Change Capture
Follow these steps to configure DB2 to log data changes in the expanded format that is supplied by the DATA CAPTURE CHANGES feature of the CREATE TABLE and ALTER TABLE statements. This format provides Oracle GoldenGate with the entire before and after images of rows that are changed by UPDATE statements.
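As an illustration, change-data logging can be enabled with DDL along these lines (the table name and columns are examples, not part of the original text):

```sql
-- Enable expanded logging on an existing table
ALTER TABLE FIN.ACCOUNTS DATA CAPTURE CHANGES;

-- Or include the clause when the table is created
CREATE TABLE FIN.ACCOUNTS_HIST
  (ACCT_ID  INTEGER NOT NULL,
   BALANCE  DECIMAL(15,2))
  DATA CAPTURE CHANGES;
```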
Enabling Access to Log Records
Activate DB2 Monitor Trace Class 1 (TRACE(MONITOR) CLASS(1)) so that DB2 allows Extract to read the active log. The default destination of OPX is sufficient, because Oracle GoldenGate does not use a destination.
To Start the Trace Manually
- Log on to DB2 as a DB2 user who has the TRACE privilege or at least SYSOPR authority.
- Issue the following command:
  start trace(monitor) class(1) scope(group)
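From a DB2 console, the trace can be started and then verified with the DISPLAY TRACE command; this sketch assumes the standard "-" command prefix for the subsystem:

```
-START TRACE(MONITOR) CLASS(1) SCOPE(GROUP)
-DISPLAY TRACE(MONITOR)
```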
To Start the Trace Automatically When DB2 is Started
Do either of the following:
- Set MONITOR TRACE to "YES" on the DSNTIPN installation tracing panel.
- Set 'DSN6SYSP MON=YES' in the DSNTIJUZ installation job, as described in the DB2 UDB Installation Guide.
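As an illustrative excerpt, the setting appears in the DSNTIJUZ system-parameter job roughly as follows (other parameters are omitted here; consult the generated job for the exact assembly layout):

```
* Illustrative excerpt from the DSNTIJUZ job; only the
* monitor-trace parameter is shown.
DSN6SYSP MON=YES
```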
Note:
The primary authorization ID, or one of the secondary authorization IDs, of the ODBC plan executor must also have the MONITOR2 privilege.
Sizing and Retaining the Logs
When tables are defined with DATA CAPTURE CHANGES, more data is logged than when they are defined with DATA CAPTURE NONE. If any of the following is true, you might need to increase the number and size of the active and archived logs.
- Your applications generate large amounts of DB2 data.
- Your applications have infrequent commits.
- You expect to stop Extract for long periods of time.
- Your network is unreliable or slow.
To control log retention, use the DSN6LOGP MAXARCH system parameter in the DSNTIJUZ installation job.
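For illustration, the retention parameter appears in the DSNTIJUZ job along these lines (the value shown is an example, not a recommendation):

```
* Illustrative excerpt from the DSNTIJUZ job: record up to
* 1000 archive log volumes in the BSDS.
DSN6LOGP MAXARCH=1000
```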
Retain enough log data so that Extract can start again from its checkpoints after you stop it or after an unplanned outage. Extract must have access to the log that contains the start of the oldest uncommitted unit of work, and all logs thereafter.
If data that Extract needs during processing was not retained, either in online or archived logs, one of the following corrective actions might be required:
- Alter Extract to capture from a later point in time for which log data is available (and accept possible data loss on the target).
- Resynchronize the source and target tables, and then start the Oracle GoldenGate environment over again.
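The first corrective action, repositioning Extract past the missing log data, can be performed from GGSCI; the group name here is illustrative:

```
GGSCI> ALTER EXTRACT EXTDB2, BEGIN NOW
GGSCI> START EXTRACT EXTDB2
```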
Note:
The IBM documentation makes recommendations for improving the performance of log reads. In particular, you can use large log output buffers and large active logs, and you can archive to disk.
Using Archive Logs on Tape
Oracle GoldenGate can read DB2 archive logs on tape, but doing so degrades performance. For example, DB2 reserves taped archives for a single recovery task, so Extract cannot read an archive tape that is being used to recover a table until the recovery is finished. You can use DFHSM or an equivalent tool to move the archive logs between online DASD storage and tape in a seamless manner, but Extract must wait until the transfer is finished. Delays in Extract processing increase the latency between the source and target data.
Controlling Log Flushes
When reading the transaction log, Extract does not process a transaction until it captures the commit record. If the commit record is on a data block that is not full, it cannot be captured until more log activity is generated to complete the block. The API that is used by Extract to read the logs only retrieves full physical data blocks.
A delay in receiving blocks that contain commits can cause latency between the source and target data. If the applications are not generating enough log records to fill a block, Extract generates its own log records by issuing SAVEPOINT and COMMIT statements until the block fills one way or the other and is released.
In a data sharing group, each API call causes DB2 to flush the data blocks of all active members, eliminating the need for Extract to perform flushes.
To prevent Extract from performing flushes, use the Extract parameter TRANLOGOPTIONS with the NOFLUSH option.
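A minimal Extract parameter-file sketch with flushing disabled follows; the group name, login, trail path, and table are illustrative placeholders:

```
EXTRACT EXTDB2
-- Illustrative credentials, trail, and source table
USERID GGUSER, PASSWORD ggpass
TRANLOGOPTIONS NOFLUSH
EXTTRAIL /ggs/dirdat/aa
TABLE FIN.ACCOUNTS;
```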