1.176 TRANLOGOPTIONS

Valid For

Extract

Description

Use the TRANLOGOPTIONS parameter to control the way that Extract interacts with the transaction log or with the API that passes transaction data, depending on the database or capture mode. You can use multiple TRANLOGOPTIONS statements in the same parameter file, or you can specify multiple options within the same TRANLOGOPTIONS statement, if permissible for those options.

Use a given TRANLOGOPTIONS option only for the database or databases for which it is intended.

Default

None

Syntax

TRANLOGOPTIONS {
[ALLOWTABLECOMPRESSION]
[ALTLOGDEST path | REMOTE]
[ALWAYSONREADONLYROUTING]
[ASYNCTRANSPROCESSING buffer_size | NOASYNCTRANSPROCESSING]
[BUFSIZE size]
[CHECKPOINTRETENTIONTIME days]
[DB2APIRETRY retry_count]
[DB2ZV11COMPATIBILITYMODE]
[DBLOGREADERBUFSIZE size]
[DICTIONARY_CACHE_SIZE value]
[DISABLESOFTEOFDELAY]
[DLFAILOVER_TIMEOUT seconds]
[ENABLE_AUTO_CAPTURE | DISABLE_AUTO_CAPTURE]
[ENABLE_PROCEDURAL_REPLICATION Y]
[EXCLUDETAG {tag | NULL} | EXCLUDETAG +]
[EXCLUDETRANS transaction]
[EXCLUDEUSER user]
[EXCLUDEUSERID Oracle_uid]
[FAILOVERTARGETDESTID n]
[FETCHPARTIALJSON]
[FETCHPARTIALLOB]
[FETCHPARTIALXML]
[FILTERTABLE table]
[FORCEFETCHLOB]
[GETCTASDML | NOGETCTASDML]
[HANDLEDLFAILOVER [STANDBY_WARNING value | STANDBY_ABEND value]]
[IFILOCKSECONDS seconds]
[IGNOREDATACAPTURECHANGES | NOIGNOREDATACAPTURECHANGES]
[INCLUDEAUX (AUX_specification)]
[INCLUDEREGIONID | INCLUDEREGIONIDWITHOFFSET]
[INCLUDETAG tag]
[INTEGRATEDPARAMS (parameter value [, ...])]
[LOB_CHUNK_SIZE size]
[MANAGECDCCLEANUP | NOMANAGECDCCLEANUP]
[MANAGESECONDARYTRUNCATIONPOINT | NOMANAGESECONDARYTRUNCATIONPOINT]
[MAXAUTOCMTTRANSSIZE size]
[MININGUSER {/ | user} [, MININGPASSWORD password]
   [algorithm ENCRYPTKEY {key_name | DEFAULT}] [SYSDBA]]
[MININGUSERALIAS alias [DOMAIN domain]]
[MIXEDENDIAN [ON | OFF]]
[PERFORMANCEPROFILE HIGH | MEDIUM | LOW_RES]
[QUERYRETRYCOUNT count]
[QUERYTIMEOUT seconds]
[READQUEUESIZE size]
[READTIMEOUT milliseconds]
[REDO_TRANSPORT_LAG_THRESHOLD seconds]
[REDO_TRANSPORT_LAG_TIMEOUT value]
[REQUIRELONGDATACAPTURECHANGES | NOREQUIRELONGDATACAPTURECHANGES]
[SKIPUNKNOWNEVENT]
[SOURCE_OS_TIMEZONE timezone]
[SUPPRESSNOOOPUPDATES]
[TRACKSCHEMACHANGES]
[TRANCOUNT integer]
[TSLOOKUPBEGINLRI | TSLOOKUPENDLRI]
[USE_ROOT_CONTAINER_TIMEZONE]
[USENATIVEOBJSUPPORT | NOUSENATIVEOBJSUPPORT]
[VALIDATEINLINESFLOB]
[VERSIONCHECK DYNAMIC | IMMEDIATE]
}

ALWAYSONREADONLYROUTING

Valid for SQL Server.

The ALWAYSONREADONLYROUTING parameter allows Extract for SQL Server to route its read-only processing to an available read-intent Secondary when connected to an Always On availability group listener.
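For example, a minimal usage sketch; the connection itself is assumed to be made through the availability group listener:

TRANLOGOPTIONS ALWAYSONREADONLYROUTING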

ALTLOGDEST path | REMOTE

Valid for MySQL.

Specifies the location of the MySQL log index file. Extract looks for the log files in this location instead of the database default location. ALTLOGDEST can be used when the database configuration does not include the full path name to the logs or when there are multiple MySQL installations on the machine. Extract reads the log index file to find the binary log file that it needs to read. When ALTLOGDEST is used, Extract assumes that the logs and the index are in the same location.

Supply the full path name, including the name of the log index file.

On Windows, enclose the path within double quotes if the path contains any spaces, such as in the following example.

TRANLOGOPTIONS ALTLOGDEST "C:\Program Files\MySQL\MySQL Server 5.7\log\binlog.index"

On a Linux system:

TRANLOGOPTIONS ALTLOGDEST "/mnt/rdbms/mysql/data/logs/binlog.index" 

When capturing against a remote MySQL database, use the REMOTE option instead of the index file path. For remote capture, specify the following in the Extract parameter file.

TRANLOGOPTIONS ALTLOGDEST REMOTE

For more information on using the REMOTE option, see Setting Logging Parameters.

ASYNCTRANSPROCESSING buffer_size

Valid for Extract in integrated capture mode for Oracle.

Controls whether integrated capture runs in asynchronous or synchronous processing mode, and controls the buffer size when Extract is in asynchronous mode. The minimum is 1 and the maximum is 1024; the default is 300.

ASYNCTRANSPROCESSING buffer_size

In asynchronous transaction processing mode, there are two threads of control:

  • One thread groups logical change records (LCRs) into transactions, performs object-level filtering, and handles partial rollback processing.

  • The other thread formats committed transactions, performs any user-specified transformations, and writes to the trail file.

The transaction buffer is the buffer between these two threads and is used to transfer work from one thread to the other. The default transaction buffer size is 300 committed transactions, but is adjusted downward by the Oracle GoldenGate memory manager if its cache memory is close to being exhausted.

NOASYNCTRANSPROCESSING

Disables asynchronous processing and causes Extract to operate in synchronous mode. In this mode, one thread performs all capture work.
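For example, a minimal sketch that raises the transaction buffer to 500 committed transactions (valid values are 1 through 1024):

TRANLOGOPTIONS ASYNCTRANSPROCESSING 500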

BUFSIZE size

Valid for DB2 LUW and DB2 z/OS. Valid for DB2 for i from Oracle GoldenGate 19c and higher. Valid for Oracle database from Oracle GoldenGate 21c and higher.

Controls the maximum size, in bytes, of the buffers that are allocated to contain the data that is read from the transaction log.

High values increase capture speed but cause Extract to consume more memory. Low values reduce memory usage but increase I/O because Extract must store data that exceeds the cache size to disk.

For Oracle database, the DDL operation record size is limited by the Oracle GoldenGate internal record capture buffer size. The DDL size can be up to the Oracle RDBMS size limit. Although Oracle Database 21c allows DDL statements larger than 10 MB, the maximum internal record capture buffer size is limited to 10 MB.

The default buffer size is determined by the source of the redo data. The following are the valid ranges and default sizes, in bytes:

DB2 LUW:

  • Minimum: 8,192

  • Maximum: 10,000,000

  • Default: 204,800

  • The preceding values must be in multiples of the 4096 page size. Extract will truncate to a multiple if a given value does not meet this requirement.

DB2 z/OS and DB2 for i:

  • Minimum: 36KB (36864)

  • Maximum: 32MB (33554432)

  • Default: 2MB (2097152)

  • The preceding values must be in multiples of the 4096 page size. Extract will truncate to a multiple if a given value does not meet this requirement.

  • Each Extract uses a fixed 32 bytes of ECSA on the DB2 z/OS system that the Extract connects to. This does not apply to DB2 for i.
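For example, a sketch for DB2 LUW that doubles the default buffer; 409600 is a multiple of the 4096 page size, so Extract does not truncate it:

TRANLOGOPTIONS BUFSIZE 409600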

CHECKPOINTRETENTIONTIME days

Valid for Extract in integrated mode only for Oracle.

Controls the number of days that Extract retains checkpoints before they are purged. Partial days can be specified using decimal values. For example, 8.25 specifies 8 days and 6 hours. When the checkpoint of an Extract in integrated capture mode is purged, LogMiner data dictionary information for the archived redo log file that corresponds to the checkpoint is purged, and the first_scn value of the capture process is reset to the SCN value corresponding to the first change in the next archived redo log file. The default is seven days and the minimum is 0.00001.
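For example, to retain checkpoints for 8 days and 6 hours, as described above:

TRANLOGOPTIONS CHECKPOINTRETENTIONTIME 8.25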

DB2APIRETRY retry_count

If Extract receives an error from the DB2 log reading API db2ReadLog(), then for certain errors the API call is retried. Use DB2APIRETRY to change the number of retries. The default number of retries is 3. The SQL code for which the API call is retried is SQLCODE -30108.
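
For example, a minimal sketch that raises the retry count from the default of 3 to 5:

TRANLOGOPTIONS DB2APIRETRY 5
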
DB2ZV11COMPATIBILITYMODE

Valid for Extract for DB2 z/OS.

When using Oracle GoldenGate to extract from DB2 z/OS version 11 in some compatibility modes, the Extract process may not be able to programmatically determine the actual database version, and an OGG-00551 or OGG-00804 error occurs. Use this option in your Extract parameter file to manually set the correct database version.

DICTIONARY_CACHE_SIZE value

Use this option to tune dictionary cache size from Extract. The default value is 5000. If PERFORMANCEPROFILE is set to HIGH, then the default value is 10000.
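For example, a minimal sketch that raises the cache to the value used when PERFORMANCEPROFILE is set to HIGH:

TRANLOGOPTIONS DICTIONARY_CACHE_SIZE 10000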

DLFAILOVER_TIMEOUT seconds

Valid for Extract in integrated mode for Oracle.

Provides a configurable timeout in seconds to allow for standby database reinstatement post-role transition. It is used in conjunction with HANDLEDLFAILOVER to allow Integrated Extract to start up immediately after a role transition. At the end of the timeout period, if the standby database is still not available, then Extract will terminate.

The default is 300 seconds. You can also use centiseconds or milliseconds.
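For example, a hedged sketch that extends the reinstatement window to 600 seconds, combined in one statement with HANDLEDLFAILOVER, which this option is used in conjunction with:

TRANLOGOPTIONS HANDLEDLFAILOVER, DLFAILOVER_TIMEOUT 600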

DISABLESOFTEOFDELAY

Valid for Extract only in integrated mode for Oracle and DB2 LUW.

Use DISABLESOFTEOFDELAY in the Extract parameter file to disable the wait time that takes effect when an EOF status is reported with no records to return.

EXCLUDETAG {tag | NULL} | EXCLUDETAG +

Use EXCLUDETAG tag to direct the Extract process to ignore the individual records that are tagged with the specified redo tag. Compared with older versions, the new trail file format contains tag tokens, which do not introduce problems for older trail readers.

Use EXCLUDETAG + to direct the Extract process to ignore the individual records that are tagged with any redo tag.

EXCLUDETAG is used to exclude changes that were previously tagged, either by Replicat using the DBOPTIONS SETTAG option or within the Oracle database session using the dbms_xstream.set_tag procedure.

Example
The following are examples of how to use tag specifiers with EXCLUDETAG.

To exclude all tagged changes:
TRANLOGOPTIONS EXCLUDETAG +
To exclude specific tagged changes:
TRANLOGOPTIONS EXCLUDETAG 00
TRANLOGOPTIONS EXCLUDETAG 0952
EXCLUDETRANS transaction

Valid for Integrated Extract for Oracle.

Specifies the transaction name of the Replicat database user or any other user so that those transactions are not captured by Extract. Use for bi-directional processing to prevent data looping between the databases.

For more information about bidirectional synchronization, see Overview of an Active-Active Configuration in Administering Oracle GoldenGate.
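For example, a hedged sketch, where ggs_trans is a hypothetical transaction name used by the Replicat database user:

TRANLOGOPTIONS EXCLUDETRANS ggs_trans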

EXCLUDEUSER user

Valid for DB2 LUW, DB2 for z/OS, DB2 for i, and Oracle.

Specifies the name of the Replicat database user, or of any other user, to be used as a filter that identifies transactions that will be subject to the rules of the GETREPLICATES or IGNOREREPLICATES parameter. Typically, this option is used to identify Replicat transactions in a bi-directional or cascading processing configuration, for the purpose of excluding or capturing them. However, it can be used to identify transactions by any other user, such as those of a specific business application.

You can use EXCLUDEUSER and EXCLUDEUSERID in the same parameter file. Do not use wildcards in either parameter.

The user name must be valid. Oracle GoldenGate queries the database to get the associated user ID and maps the numeric identifier back to the user name. For this reason, if the specified user is dropped and recreated while name resolution is set to the default of DYNAMICRESOLUTION, EXCLUDEUSER remains valid. If the same transaction is performed when name resolution is set to NODYNAMICRESOLUTION, EXCLUDEUSER becomes invalid, and Extract must be stopped and then started to make EXCLUDEUSER take effect. See DYNAMICRESOLUTION.

  • DB2 z/OS considerations: In DB2 for z/OS, the user is always the primary authorization ID of the transaction, which is typically that of the original RACF user who logged on, but also could be a different authorization ID if changed by a transaction processor or by DB2 exits.

  • Oracle considerations: For an Oracle database, multiple EXCLUDEUSER statements can be used. All specified users are considered the same as the Replicat user, in the sense that they are subject to the rules of GETREPLICATES or IGNOREREPLICATES. Unlike with all other supported databases, you must include the IGNOREAPPLOPS parameter for EXCLUDEUSER to operate correctly. EXCLUDEUSERID is not supported for multitenant source databases.

Example
The following Oracle example filters for two users (one by name and one by user ID). The transactions generated by these users will be handled according to the GETREPLICATES or IGNOREREPLICATES rules, and a new transaction buffer size is specified.
TRANLOGOPTIONS EXCLUDEUSER ggsrep, EXCLUDEUSERID 90, BUFSIZE 100000
To use EXCLUDEUSER with multitenant, you must specify the PDB.USERNAME. The following example excludes any DML operation made by PDBXYZ.SCOTT:
TRANLOGOPTIONS EXCLUDEUSER PDBXYZ.SCOTT
EXCLUDEUSERID Oracle_uid

Valid for Extract for Oracle.

Specifies the database user ID (uid) of the Replicat database user, or of any other user, to be used as a filter that identifies transactions that will be subject to the rules of the GETREPLICATES or IGNOREREPLICATES parameter. This parameter is not valid for multitenant Extracts. Use tagging and EXCLUDETAG instead.

Usage is the same as that of EXCLUDEUSER.

Oracle_uid is a non-negative integer with a maximum value of 2147483638. There are several system views that can be queried to get the user ID. The simplest one is the ALL_USERS view. Oracle GoldenGate does not validate the user ID. If the user that is associated with the specified user ID is dropped and recreated, a new user ID is assigned; therefore, EXCLUDEUSERID becomes invalid for that user.
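For example, a hedged sketch of looking up the numeric user ID through the ALL_USERS view mentioned above (GGSREP is a hypothetical user name) and then excluding it:

SELECT user_id FROM all_users WHERE username = 'GGSREP';

TRANLOGOPTIONS EXCLUDEUSERID 90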

FAILOVERTARGETDESTID n

Valid for Extract for Oracle.

When using Oracle GoldenGate Extract with Oracle Data Guard without Fast Start Failover (FSFO), FAILOVERTARGETDESTID identifies the standby database that the Oracle GoldenGate Extract process must remain behind; Extract does not extract redo data that has not yet been applied to that Oracle Data Guard standby database. To determine the correct value for FAILOVERTARGETDESTID, examine the LOG_ARCHIVE_DEST_n initialization parameter, where n is the archive log destination identifier for the standby. This parameter is used in combination with HANDLEDLFAILOVER to control whether Extract throttles its writing of trail data based on the apply progress of the Oracle Data Guard standby database. The minimum value is 0, the maximum is 32, and the default is 0.

Example
The following is an example of how to use the TRANLOGOPTIONS FAILOVERTARGETDESTID Extract parameter.

TRANLOGOPTIONS FAILOVERTARGETDESTID 2

SQL> show parameters log_archive_dest

NAME                 TYPE    VALUE
-------------------- ------- --------------------------------------------
log_archive_dest_1   string  location=USE_DB_RECOVERY_FILE_DEST,
                             valid_for=(ALL_LOGFILES, ALL_ROLES)
log_archive_dest_2   string  service="ggs2d", ASYNC NOAFFIRM delay=0
                             optional compression=disable max_failure=0
                             max_connections=1 reopen=300
                             db_unique_name="GGS2D" net_timeout=30,
                             valid_for=(online_logfile,all_roles)

FAILOVERTARGETDESTID is set to 2 because log_archive_dest_2 points to the standby database that Oracle GoldenGate should stay behind. The first entry (log_archive_dest_1) is the local archive log destination for the database, and the second is the standby database destination.
FETCHPARTIALJSON

Valid for Extract for MySQL.

Use this option in the Extract parameter file to fetch data directly from the table when there are partial updates to the JSON data type columns of a table.

Note:

Processing JSON column data updates depends on the MySQL server variable binlog_row_value_options, which must be set to PARTIAL_JSON, and the Extract parameter file must include the FETCHPARTIALJSON parameter.

FETCHPARTIALLOB

Valid for Extract in integrated capture mode for Oracle.

Use this option when replicating to a heterogeneous target or in other conditions where the full LOB image is required. It causes Extract to fetch the full LOB object, instead of using the partial change object from the redo record. By default, the database logmining server sends Extract a whole or partial LOB, depending on whether all or part of the source LOB was updated. To ensure the correct snapshot of the LOB, the Oracle Flashback feature must be enabled for the table and Extract must be configured to use it. The Extract FETCHOPTIONS parameter controls fetching and must be set to USESNAPSHOT (the default in the absence of NOUSESNAPSHOT). Without a Flashback snapshot, Extract fetches the LOB from the table, which may be a different image from the point in time when the redo record was generated.
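For example, a sketch that pairs the option with the fetch configuration described above (USESNAPSHOT is the default and is shown only for clarity):

TRANLOGOPTIONS FETCHPARTIALLOB
FETCHOPTIONS USESNAPSHOT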

FETCHPARTIALXML

Valid for Extract in integrated capture mode for Oracle.

Use this option when replicating to a heterogeneous target or in other conditions where the full LOB image is required. It causes Extract to fetch the full XML document, instead of using the partial change image from the redo record. By default, the database logmining server sends Extract a whole or partial XML document, depending on whether all or part of the source XML was updated. To ensure the correct snapshot of the XML, the Oracle Flashback feature must be enabled for the table and Extract must be configured to use it. The Extract FETCHOPTIONS parameter controls fetching and must be set to USESNAPSHOT (the default in the absence of NOUSESNAPSHOT). Without a Flashback snapshot, Extract fetches the XML document from the table, which may be a different image from the point in time when the redo record was generated.

FILTERTABLE table

Valid for Extract for MySQL, PostgreSQL, and SQL Server.

Use this option to identify a source transaction for filtering. If a source transaction includes any operation on the specified FILTERTABLE, then that transaction is identified as a replicated transaction. Transaction filtering is based on the GETREPLICATES/IGNOREREPLICATES and GETAPPLOPS/IGNOREAPPLOPS parameters. The default is IGNOREREPLICATES, so all replicated transactions are ignored by default for the filter table.

This option may be used to avoid data looping in a bidirectional configuration of Oracle GoldenGate by specifying FILTERTABLE as the fully qualified name of the checkpoint table used by the target Replicat. When a Replicat uses a checkpoint table, it writes a recovery record to the checkpoint table at the end of each transaction that it applies. Because every transaction applied by the Replicat contains an update to the checkpoint table, Extract ignores the entire transaction applied by the Replicat, which prevents data looping. For PostgreSQL and SQL Server, ensure that TRANDATA has been added for the checkpoint table.

If using a parallel Replicat in a bidirectional replication for MySQL and PostgreSQL, then multiple filter tables are supported using the TRANLOGOPTIONS FILTERTABLE option, which can be specified multiple times with different table names or wildcards.

You can include single or multiple TRANLOGOPTIONS FILTERTABLE entries in the Extract parameter file, with explicit object names or wildcards, as shown in the examples below.

For information about creating a checkpoint table, see Creating a Checkpoint Table in Administering Oracle GoldenGate. To specify object names and wildcards correctly, see Using Wildcards in Database Object Names in Administering Oracle GoldenGate.

Specify the fully qualified name of the filter table. It is used to avoid looping in the bi-directional scenario.

Example
TRANLOGOPTIONS FILTERTABLE ggschema.repcheckpointtable

The following example shows the Extract parameter file containing a single TRANLOGOPTIONS FILTERTABLE with an explicit object name:
TRANLOGOPTIONS FILTERTABLE ggs.chkpt1
The following example shows the Extract parameter file containing multiple TRANLOGOPTIONS FILTERTABLE specifications with explicit object names or wildcards.
TRANLOGOPTIONS FILTERTABLE ggs.chkpt2
TRANLOGOPTIONS FILTERTABLE ggs.chkpt_RABC_*
FORCEFETCHLOB

Valid for Extract in classic and integrated capture modes for Oracle.

Overrides the default behavior of capturing LOB data from the redo log and causes LOBs to be fetched from the database instead.

Caution:

If a value gets deleted before the fetch occurs, Extract writes a null to the trail. If a value gets updated before a fetch, Extract writes the updated value. To prevent these inaccuracies, try to keep Extract latency low. The Oracle GoldenGate documentation provides guidelines for tuning process performance. Also, see Interactions Between Fetches from a Table and DDL in Using Oracle GoldenGate for Oracle Database for instructions on setting fetch options.
GETCTASDML | NOGETCTASDML

Enables Create Table As Select (CTAS) functionality. When GETCTASDML is enabled, CTAS DMLs are sent from LogMiner and replicated on the target. This option is enabled by default. Execution of the CTAS DDL is suppressed on the target. This parameter cannot be enabled while using the DDL metadata trigger. Trail files produced with the CTAS functionality enabled cannot be consumed by a Replicat version lower than 12.1.2.1.0.

Use GETCTASDML to allow CTAS to replay the inserts of the CTAS, thus preserving OIDs during replication. This parameter is only supported with Integrated Dictionary, and any downstream Replicat must be version 12.1.2.1 or greater to consume the trail; otherwise, there may be divergence.

HANDLEDLFAILOVER [ STANDBY_WARNING value | STANDBY_ABEND value ]

Valid for Extract for Oracle.

STANDBY_WARNING and STANDBY_ABEND are valid for Oracle Database 21c and higher.

Controls whether Extract will throttle its writing of trail data based on the apply progress of the Fast Start Failover standby database. It is intended to keep Extract at a safe point behind any data loss failover.

When using this in a Data Guard configuration without Fast Start Failover (FSFO) to protect against data loss, you must set the FAILOVERTARGETDESTID Extract parameter to identify the archive log destination ID through which the standby database is connected.

Extract is considered to be in a stalled state when it queries the standby database apply SCN (SELECT applied_scn FROM v$archive_dest WHERE dest_id=n) and this SCN is less than the SCN of the LCR that Extract is processing. In this case, Extract does not process the LCR and waits until applied_scn becomes greater than or equal to the SCN of the LCR being processed.

STANDBY_WARNING value
The amount of time before a warning message is written to the Extract report file, if Extract is stalled. The default is 60 seconds.
STANDBY_ABEND value
The amount of time before Extract abends, if Extract is stalled. The default is 30 minutes.

If both STANDBY_WARNING and STANDBY_ABEND are specified, STANDBY_ABEND should always be greater than STANDBY_WARNING.
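For example, a hedged sketch, assuming both values are given in seconds, that warns after 2 minutes of stall and abends after 1 hour (STANDBY_ABEND remains greater than STANDBY_WARNING):

TRANLOGOPTIONS HANDLEDLFAILOVER STANDBY_WARNING 120 STANDBY_ABEND 3600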

IFILOCKSECONDS seconds

Valid for DB2 z/OS.

Sets the interval, in seconds, for which the Extract holds the implicit locks taken in the database by the calls to IFCID 0306. The locks can affect the ability to perform certain database operations, such as REORGs. The default value is 20 seconds, with minimum and maximum values of 1 second and 300 seconds, respectively.

Note:

If the IFILOCKSECONDS parameter is set to a longer duration, other database operations, such as REORGs, can be impacted due to internal locking caused by the Extract IFI calls. Therefore, if any lock contention occurs in relation to an Extract, either set the lock timeout for the operation to a duration longer than the value of the IFILOCKSECONDS parameter, or shut down the Extract.

IGNOREDATACAPTURECHANGES | NOIGNOREDATACAPTURECHANGES

Valid for DB2 LUW

Controls whether or not Extract captures tables for which DATA CAPTURE CHANGES is not set. IGNOREDATACAPTURECHANGES ignores tables for which DATA CAPTURE CHANGES is not set. Use if tables were specified with a wildcard to ensure that processing continues for tables that do have change capture set. A warning is issued to the error log for tables that were skipped. The default is NOIGNOREDATACAPTURECHANGES.
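For example, a minimal sketch for a wildcarded TABLE configuration in which some tables lack DATA CAPTURE CHANGES:

TRANLOGOPTIONS IGNOREDATACAPTURECHANGES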

INCLUDEREGIONID | INCLUDEREGIONIDWITHOFFSET

Valid for Extract in integrated mode for Oracle only.

These options support the Oracle data type TIMESTAMP WITH TIME ZONE specified as TZR (which represents the time zone region, such as US/Pacific). By default, Extract abends on TIMESTAMP WITH TIME ZONE if it includes a time zone region. These options enable you to handle this timestamp based on the target database type.

When Extract detects that the source data type is TIMESTAMP and there is a region ID mapping token, Replicat applies the timestamp as follows:

  • A TIMESTAMP WITH TIME ZONE with TZR is applied if the target Oracle version supports it.

  • A timestamp with a UTC offset is applied to a heterogeneous database, or to an earlier version of Oracle that does not support TIMESTAMP WITH TIME ZONE with TZR.

INCLUDEREGIONID

Valid for Oracle Integrated Extract only.

INCLUDEREGIONID is deprecated as of Oracle GoldenGate 19c (19.1.0). From Oracle GoldenGate 19c (19.1.0) onward, TIMESTAMP WITH TIME ZONE with region ID data is included by default, including during initial load.

Use when replicating from an Oracle source to an Oracle target of the same version or later. When INCLUDEREGIONID is specified, Extract adds a column index and the two-byte TMZ value as a time-zone mapping token and outputs it to the trail in the UTC format of YYYY-MM-DD HH:MI.SS.FFFFFF +00:00.

INCLUDEREGIONIDWITHOFFSET

Valid for Oracle Integrated Extract only.

Use this option to convert the region ID to an hour and minute offset value (for example, +06:00). If the option is not specified, then the timestamp is always written to the trail file in UTC and the time zone is always +00:00.

If you need to preserve the time zone value in hour and minutes instead of UTC, then this option can be used.

In the following cases, this option is forced on to preserve the TIME ZONE value as an hour and minute offset:
  • Old trail file formats, because Replicat versions that read them do not support region IDs.

  • XML, TEXT, and SQL formats, because they do not support region IDs.
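For example, a minimal sketch that preserves the time zone as an hour and minute offset instead of UTC:

TRANLOGOPTIONS INCLUDEREGIONIDWITHOFFSET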

INCLUDETAG tag

Valid for integrated Extract.

Use INCLUDETAG tag to include records tagged with the specified redo tag in the trail files. The tag value can be up to 2000 hexadecimal digits (0-9 A-F).

Note:

FFFF and + (plus symbol) are not supported for tag usage.

To avoid conflicts, don't use INCLUDETAG in conjunction with EXCLUDETAG.

Example: TRANLOGOPTIONS INCLUDETAG 00

LOB_CHUNK_SIZE size

Valid for SQL Server, PostgreSQL.

If you have very large LOB data sizes, then you can adjust LOB_CHUNK_SIZE from the default of 4000 bytes to a higher value, up to 65535 bytes, so that the fetch size is increased, reducing the trips needed to fetch the entire LOB.

Example: TRANLOGOPTIONS LOB_CHUNK_SIZE 8000

(PostgreSQL) Specifies the chunk size for the LOB (CLOB/BLOB) data that is pushed into COM. Its unit is bytes. The minimum and maximum LOB_CHUNK_SIZE values are 4000 and 65535 bytes, respectively.

INTEGRATEDPARAMS (parameter value [, ...])

Valid for Extract in integrated capture mode for Oracle Standard or Enterprise Edition 12c or later.

Passes parameters and values to the Oracle Database logmining server when Extract is in integrated capture mode. The input must be in the form of parameter value, as in:

TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

Valid parameter specifications and their values are the following:

max_sga_size

Specifies the amount of SGA memory that is used by the database logmining server. Can be a positive integer in megabytes. The default is 1 GB if streams_pool_size is greater than 1 GB; otherwise, it is 75% of streams_pool_size.

parallelism

Specifies the number of processes supporting the database logmining server. Can be a positive integer. The default is 2.

downstream_real_time_mine

Specifies whether or not integrated capture mines a downstream mining database in real-time mode. A value of Y specifies real-time capture and requires standby redo logs to be configured at the downstream mining database. A value of N specifies capture from archived logs shipped to the downstream mining database. The default is N.

enable_procedural_replication

Enables procedural replication at capture. Procedural replication is disabled by default. A value of Y enables procedural replication. Once this option is turned on for an Extract, it remains on; the parameter value cannot be toggled back.

ENABLE_AUTO_CAPTURE | DISABLE_AUTO_CAPTURE

Set this option to enable auto capture mode, which delivers LCRs of tables enabled for automatic capture. This option can be set when the source database's Oracle binary version is 21c or higher.

MANAGESECONDARYTRUNCATIONPOINT | NOMANAGESECONDARYTRUNCATIONPOINT

Valid for PostgreSQL.

MANAGESECONDARYTRUNCATIONPOINT is the default setting and controls the restart_lsn of the replication slot for the specific Extract.

NOMANAGESECONDARYTRUNCATIONPOINT does not move the Extract’s replication slot and is typically only used for development and testing purposes where an Extract needs to be repositioned to an earlier LSN from the Extract’s recovery LSN. If used, the PostgreSQL write-ahead log will continue to grow and consume disk space.
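For example, a sketch of the non-default setting for a test environment where the Extract's replication slot must not advance:

TRANLOGOPTIONS NOMANAGESECONDARYTRUNCATIONPOINT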

MAXAUTOCMTTRANSSIZE size

Valid for DB2 for i only.

Specifies the maximum size, in records, of an implicitly created (autocommitted) transaction.

DB2 for i autocommitted records (journal entries with a CCID equal to 0) do not have a commit record in the journal, and therefore Oracle GoldenGate must create an implicit transaction to include these records in the trail. The default allows for a single record to be included in a single transaction, which maintains the accuracy of the indicated IO time for each record because the IO time is based on the commit for the transaction.

This parameter sets the maximum number of records that will be included in an implicitly created transaction, but the number could be less if any other type of entry is seen in the journal before the maximum is reached. This behavior avoids issues with overlap for checkpoints on records that belong to explicitly committed records.

Setting the value for this parameter to 1 (the default) provides an accurate IO time for each record in the trail for records that are autocommitted (have a CCID of 0 in the journal entry), at the potential expense of throughput for the Extract. The value of this parameter also affects the maximum potential size of a cached transaction for these records in memory. Setting it to a lower value causes the transaction memory to be lower if the Extract is able to store the maximum number of entries per implicit transaction. By definition, there can be only one such implicit transaction in memory at any given time, because any other transaction record causes an immediate commit to the trail of any records in an implicit transaction already in memory.

The valid range is 1 to 10000 and the default value is 1.
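For example, a hedged sketch that trades per-record IO time accuracy for throughput by allowing up to 100 autocommitted records per implicit transaction:

TRANLOGOPTIONS MAXAUTOCMTTRANSSIZE 100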

MININGUSER {/ | user} [, MININGPASSWORD password]
[algorithm ENCRYPTKEY {key_name | DEFAULT}] [SYSDBA]

Valid for Extract in integrated capture mode for Oracle.

Specifies login credentials for Extract to log in to a downstream Oracle mining database to interact with the logmining server. Can be used instead of the MININGUSERALIAS option if an Oracle GoldenGate credential store is not being used.

This user must:

  • Have the privileges granted in dbms_goldengate_auth.grant_admin_privilege.

  • Be the user that issues the MININGDBLOGIN or MININGDBLOGINALIAS and REGISTER EXTRACT or UNREGISTER EXTRACT commands for the Extract group that is associated with this MININGUSER.

  • Not be changed while Extract is in integrated capture mode.

/

Directs Oracle GoldenGate to use an operating-system login for Oracle, not a database user login. Use this argument only if the database allows authentication at the operating-system level. Bypassing database-level authentication eliminates the need to update Oracle GoldenGate parameter files if application passwords frequently change.

To use this option, the correct user name must exist in the database, in relation to the value of the Oracle OS_AUTHENT_PREFIX initialization parameter. The value set with OS_AUTHENT_PREFIX is concatenated to the beginning of a user's operating system account name and then compared to the database user name. Those two names must match.

When OS_AUTHENT_PREFIX is set to ' ' (a null string), the user name must be created with IDENTIFIED EXTERNALLY. For example, if the OS user name is ogg, you would use the following to create the database user:

CREATE USER ogg IDENTIFIED EXTERNALLY;

When OS_AUTHENT_PREFIX is set to OPS$ or another string, the user name must be created in the format of:

OS_AUTHENT_PREFIX_value OS_user_name

For example, if the OS user name is ogg, you would use the following to create the database user:

CREATE USER ops$ogg IDENTIFIED BY oggpassword;

user

Specifies the name of the mining database user or a SQL*Net connect string.

password

The user's password. Use when database authentication is required to specify the password for the database user. If the password was encrypted by means of the ENCRYPT PASSWORD command, supply the encrypted password; otherwise, use the clear-text password. If the password is case-sensitive, type it that way. If either the user ID or password changes, the change must be made in the Oracle GoldenGate parameter files, including the re-encryption of the password if necessary.

algorithm

Specifies the encryption algorithm that was used to encrypt the password with ENCRYPT PASSWORD. Can be one of:

AES128
AES192
AES256
BLOWFISH
ENCRYPTKEY {key_name | DEFAULT}

Specifies the encryption key that was specified with ENCRYPT PASSWORD.

  • ENCRYPTKEY key_name specifies the logical name of a user-created encryption key in the ENCKEYS lookup file. Use if ENCRYPT PASSWORD was used with the KEYNAME key_name option.

  • ENCRYPTKEY DEFAULT directs Oracle GoldenGate to use a random key. Use if ENCRYPT PASSWORD was used with the KEYNAME DEFAULT option.

SYSDBA

Specifies that the user logs in as sysdba.

MININGUSERALIAS alias

Valid for Extract in integrated capture mode for Oracle.

Specifies the alias for the login credentials that Extract uses to log in to a downstream Oracle mining database to interact with the logmining server. Can be used instead of MININGUSER if an Oracle GoldenGate credential store is being used.

This alias must be:

  • Associated with a database user login credential that is stored in the local Oracle GoldenGate credential store. This user must have the privileges granted in dbms_goldengate_auth.grant_admin_privilege.

  • The user that issues the MININGDBLOGIN or MININGDBLOGINALIAS and REGISTER EXTRACT or UNREGISTER EXTRACT commands for the Extract group that is associated with this MININGUSERALIAS.

This alias and user must not be changed while Extract is in integrated capture mode.
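For example, a hedged sketch, assuming an alias named ggadmin_mine was already added to the Oracle GoldenGate credential store under the domain OracleGoldenGate:

TRANLOGOPTIONS MININGUSERALIAS ggadmin_mine DOMAIN OracleGoldenGate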

MIXEDENDIAN [ON|OFF]

Valid for DB2 LUW with Oracle GoldenGate primary Extract.

Oracle GoldenGate Extract for DB2 LUW supports cross-endian capture where the database and Oracle GoldenGate are running on different byte order servers. Detection of byte order is automatic for DB2 LUW database version 10.5 and higher. If you need to disable auto-detection on DB2 LUW 10.5, then you can override it by specifying this parameter. For version 10.1, the parameter must be used in the Extract parameter file for the cross-endian capture. By default, the value is set to OFF for version 10.1.

Syntax:

TRANLOGOPTIONS MIXEDENDIAN [ON|OFF] 

ON: If this is set, then the Extract assumes that the database and Oracle GoldenGate are running on servers with a different byte order and necessary byte reversal conversion is performed.

OFF: If this is set, then the Extract assumes that the database and Oracle GoldenGate are running on servers with the same byte order and no byte order reversal conversion is performed.

MANAGECDCCLEANUP | NOMANAGECDCCLEANUP

Valid for SQL Server.

MANAGECDCCLEANUP is the default and recommended setting that instructs the Extract to validate the existence of the Oracle GoldenGate CDC Cleanup job or Purge Change Data task, depending on the architecture and version of Oracle GoldenGate.

For all Oracle GoldenGate classic architecture versions, and for microservices versions prior to Oracle GoldenGate 21.4, use the ogg_cdc_cleanup_setup.bat/sh program to install the Oracle GoldenGate CDC Cleanup job and associated tables and stored procedures.

For Oracle GoldenGate microservices 21.4 and later installations, create a Purge Change Data task from the Tasks page in the Configuration section of the Administration Service web UI. The Purge Change Data task creates the required stored procedures and associated tables, and handles the purge function within Oracle GoldenGate rather than through a SQL Server Agent job.

The NOMANAGECDCCLEANUP option instructs Extract not to check for the existence of the Oracle GoldenGate CDC Cleanup job or Purge Change Data task. This is not a recommended option for production environments but can be used for testing an Extract without having to create the Oracle GoldenGate CDC Cleanup job or task.

PERFORMANCEPROFILE HIGH|MEDIUM|LOW_RES

Valid for Extract in Integrated Capture mode.

For tuning Integrated Capture.

It can be set to HIGH, MEDIUM (default), or LOW_RES. It helps achieve better performance by grouping the parameters that affect performance. Once the performance profile is set up, this option automatically configures the relevant parameters, to achieve the desired throughput and latency.

  • The HIGH option allows a high workload with continuous throughput to be processed more efficiently in terms of end-to-end replication. The HIGH option increases the trail file buffer size to 4 MB and decreases the end-of-file and flush option values to 0.1 second.

  • The MEDIUM option sets the trail file buffer size to 1 MB, and the values for end-of-file delay and flush to 1 second.

  • The LOW_RES option is applicable when resources are low and has been added for memory or resource constrained deployment.

When the HIGH option is enabled for a low- to medium-intensity workload, it can spike integrated Extract latency to several seconds. This is because the HIGH option increases the Extract read buffer size to 8 MB, and the rule for flushing the Extract read buffer is to flush either when the buffer is full or when there are no incoming records for a duration of 0.2 seconds. Therefore, any continuous workload with an Extract ingestion rate below 8 MB/sec results in integrated Extract latency exceeding 1 second. If the Extract ingestion rate (or the redo generation rate, if 100% of the redo is being captured) is below a specific value, such as ~15 MB/sec to get ~0.5 second Extract latency, do not use the HIGH option. If sub-second latency is required, it is recommended to lower the buffer size accordingly; for example, set the buffer size to one-third of the redo generation rate in MB/sec to get ~0.3 second maximum Extract latency.
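For example, a sketch for a sustained high-volume workload, per the guidance above:

TRANLOGOPTIONS PERFORMANCEPROFILE HIGH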

QUERYTIMEOUT seconds

Valid for SQL Server.

Specifies how long queries to SQL Server will wait for results before reporting a timeout error message. This option takes an integer value to represent the number of seconds. The default query timeout value is 300 seconds (5 minutes). The minimum value is 0 seconds (infinite timeout). The maximum is 2147483645 seconds.

The following example instructs SQL Server to wait 60 seconds for results before timing out.

TRANLOGOPTIONS QUERYTIMEOUT 60

QUERYRETRYCOUNT count

Valid for Extract for SQL Server and MySQL.

Specifies how many times to retry calls to the CDC stored procedure used by Extract, in case of a result set timeout.

QUERYRETRYCOUNT can be specified to retry multiple times. If all of the retry attempts fail, Extract abends with the normal connection timeout error message.

For SQL Server, the default is one retry attempt, after which the process abends. The minimum setting (0) is infinite, the maximum is 1000, and the default is 1.

For MySQL, the minimum and default setting is 50 and the maximum is 1000. There is no infinite value. Any attempt to set QUERYRETRYCOUNT to less than the minimum is ignored with no error or warning.

The following example causes Extract to attempt its CDC stored procedure call 4 times:

TRANLOGOPTIONS QUERYRETRYCOUNT 4
The following example causes Extract to attempt its CDC stored procedure call 100 times:
TRANLOGOPTIONS QUERYRETRYCOUNT 100
READQUEUESIZE size

Valid for MySQL.

Specifies the internal queue size, in bytes, for transaction data. It can be increased to improve performance. Valid values are integers from 3 through 1500. The default is 256 bytes; start with the default and evaluate performance before adjusting upward.
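For example, a hedged sketch that doubles the default queue size (any integer from 3 through 1500 is accepted):

TRANLOGOPTIONS READQUEUESIZE 512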

REDO_TRANSPORT_LAG_THRESHOLD seconds
Valid for Integrated Extract in Downstream Mining Mode.

Monitors the network latency between the source database and the target database when redo logs are shipped. If the latency exceeds the specified threshold, then a warning appears in the report file, and a subsequent informational message appears when the lag drops back to the normal level.

The default threshold value is 30 seconds. The minimum threshold value that can be specified is 15 seconds.

For more information, see Configuring Redo Transport from Source to Downstream Mining Database.

REDO_TRANSPORT_LAG_TIMEOUT value
Valid for Integrated Extract in Downstream Mining Mode.

Specifies the time period, in seconds, for which Extract waits for redo from each thread. If all threads have waited for the timeout period without receiving any redo, then Extract abends.

REQUIRELONGDATACAPTURECHANGES | NOREQUIRELONGDATACAPTURECHANGES

Valid for DB2 LUW.

Controls the response of Extract when DATA CAPTURE is set to NONE or to CHANGES without INCLUDE LONGVAR COLUMNS and the parameter file includes any of the following Oracle GoldenGate parameters that require the presence of before images for some or all column values: GETBEFOREUPDATES, NOCOMPRESSUPDATES, and NOCOMPRESSDELETES. Both of those DATA CAPTURE settings prevent the logging of before values for LONGVAR columns. If those columns are not available to Extract, it can affect the integrity of the target data.

REQUIRELONGDATACAPTURECHANGES

Extract abends with an error.

NOREQUIRELONGDATACAPTURECHANGES

Extract issues a warning but continues processing the data record.

SOURCE_OS_TIMEZONE timezone

Valid for Extract in integrated capture mode for Oracle.

Specifies the system time zone of the source database. The system time zone of a database is usually given by the default time zone of its operating system, and can also be overridden by setting the TZ environment variable when the database is started. You should specify this option only if the source database and the Extract process use different system time zones. For example, in a downstream capture deployment where the source database and the Extract process run on different servers in different time zones.

You can specify the value of this option in a time zone region name or a UTC offset form, and you must use the same form used by the source database. For example, if the source database uses a region name form, you must specify a region name such as America/New_York, US/Eastern, or EST5EDT. Alternatively, if the source database uses a UTC offset form like -05:00, then you must use the syntax (GMT)[+|-]hh[:mm]. For example, GMT-05:00 or -5.
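For example, a sketch for a downstream capture deployment where the source database system time zone is US Eastern, using the region name form described above:

TRANLOGOPTIONS SOURCE_OS_TIMEZONE America/New_York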

SKIPUNKNOWNEVENT

Valid for MySQL.

You can use this parameter in the Extract parameter file to skip any unhandled or unknown event in the MySQL binary log. If this parameter is specified, then the Oracle GoldenGate for MySQL Extract continues processing without error when it finds an event that is not handled by the current Extract process.

SUPPRESSNOOOPUPDATES

Valid for Extract on Oracle Database 12c (12.2) and later.

You can control whether no-op updates are filtered or not in Integrated Extract. The default is no suppression.

TRACKSCHEMACHANGES

Valid for DB2 z/OS and MySQL.

This parameter enables Extract to capture table-level DDL statements and retain a history of the changes, which is used to process DML when the log record refers to a table version that is earlier than the current version of the table. These records are usually before images of updates, but can be after images, inserts, or deletes if the Extract is running with a lag behind the log. When Extract encounters appropriate DDL operations, it notes the version number of the DDL and updates the DDL history table with the new information. Extract also creates a new TDR record relating to the change in the trail. When Extract encounters prior versions of the table in the log, it references the DDL history to correctly interpret the data for the older version of the table. The DDL change itself is not replicated; any changes to the source table must still be manually applied by the user in the target database.

Syntax:

TRANLOGOPTIONS TRACKSCHEMACHANGES

This enables table-level DDL changes to be tracked by the Extract and the trail metadata to be updated as appropriate. To use TRACKSCHEMACHANGES properly, the table metadata must be in a known consistent state, which means that all the tables that need version tracking must be created and never altered or reorganized before using TRACKSCHEMACHANGES, so that no prior table versions appear in the transaction log for update or delete operations. The script ddl_update.sh is provided to assist in creating an initial set of DDL history records for the database.

To use DDL processing, the database needs to be set up with a history table that will capture DDL changes of the various versions of all tables on the database system. Also system tables need to be enabled for data capture changes. To create and maintain the history table the following UNIX shell scripts are provided:

  • ddl_create.sh : This script is used to create the DDL history table. It also enables data capture changes for the following system tables:

    • SYSIBM.SYSTABLES

    • SYSIBM.SYSCOLUMNS

    • SYSIBM.SYSINDEXES

    • SYSIBM.SYSKEYCOLUSE

    Example:

    ./ddl_create.sh -f crt_ddl_hist.sql -s OGGSCHEMA

    In this example, the resulting file, crt_ddl_hist.sql, must be processed by another program.

    ./ddl_create.sh -d DB2DSXY -u gguser -p ggpw -s OGGSCHEMA

    This calls a local DB2 instance to make a remote connection to a mainframe database and immediately create the DDL history table.

  • ddl_remove.sh: This script is used to remove the DDL history table. However, the system tables are not altered.

  • ddl_update.sh: This script should be run to establish an initial start point for the DDL version tracking using TRACKSCHEMACHANGES. It must be run after the creation of the DDL history table and should not be necessary to be run again unless the extract must be repositioned in such a way that table versions may be missed in the transaction log. Rerunning ddl_update.sh will only add information for tables that are not already present in the DDL history.

  • execsql.sh: This script should not be run directly but must be available to the other scripts to be sourced. It provides common facilities for parsing command line and executing SQL or writing it to a file.

These scripts may be used to create the tables directly using a db2 connection or they may be used to create SQL files which may be run in a SQL processing program of choice. The files have been checked to be compatible with DB2 remote and SPUFI.

Following is a description of options accepted by these scripts:
  • -h shows this usage help.
  • -d dsn specifies the DB2 DSN to connect to.
  • -u userid specifies the User ID to connect to the database with.
  • -p password specifies the password to connect to the database with.
  • -s ggschema specifies the name of the schema the DDL history table should be stored in. This schema should be the same as GGSCHEMA in GLOBALS.
  • -t ddltable specifies the name of the DDL history table. GGS_DDL_HIST is the default.
  • -f outfile specifies the name of a file to write the SQL statements to instead of executing them. Either -d, -u, and -p must all be specified, or -f must be used. Currently, -t should not be used. -f must be used if only the db2cli command is available on the remote host, because db2cli cannot run the SQL statements that are generated.
TRANCOUNT integer

Valid for SQL Server.

Allows adjustment of the number of transactions processed per call by Extract to pull data from the SQL Server change data capture staging tables. Based on your transaction workload, adjusting this value may improve capture rate throughput. The minimum value is 1, maximum is 100, and the default is 10.

Example:

TRANLOGOPTIONS TRANCOUNT 20 

This example instructs Extract to fetch 20 transactions at a time from change data capture enabled tables.

TSLOOKUPBEGINLRI | TSLOOKUPENDLRI

Valid for DB2 LUW v 10.1 and later.

When you specify an LRI range using these parameters, Extract looks for the timestamp specified in the ADD or ALTER EXTRACT command within this range. This helps Extract optimize the lookup process for a particular timestamp in the database transaction log. The TSLOOKUPBEGINLRI parameter is mandatory, while TSLOOKUPENDLRI is optional. Specifying only TSLOOKUPENDLRI without TSLOOKUPBEGINLRI is invalid. For example:

TRANLOGOPTIONS TSLOOKUPBEGINLRI 75200.666197, TSLOOKUPENDLRI 75207.666216 
TRANLOGOPTIONS TSLOOKUPBEGINLRI 75200.666197 

If the provided timestamp falls within the given LRI range, or the provided timestamp falls after the TSLOOKUPBEGINLRI timestamp, then Extract starts from a record with a timestamp equal to or nearest less than the provided timestamp.

If the provided timestamp falls before the TSLOOKUPBEGINLRI timestamp, Extract starts from the specified TSLOOKUPBEGINLRI LRI. If the provided timestamp falls after the TSLOOKUPENDLRI timestamp, then Extract abends. If you only specify TSLOOKUPENDLRI, then an informational message is displayed and Extract starts from a record with a timestamp equal to or nearest less than the provided timestamp.

USENATIVEOBJSUPPORT | NOUSENATIVEOBJSUPPORT

Valid for Extract in integrated capture mode for Oracle.

Integrated Capture adds redo-based capture for User Defined Type (UDT) and ANYDATA data types. It is enabled by default and can only be enabled if the source database version is 12.1.0.1 or greater and the source database compatibility is 12.0.0.0.0 or greater. Replicat from Oracle GoldenGate release 12.1.2.1.0 must be used. To use Native Support, all of your Oracle databases and Oracle GoldenGate instances must be release 12.1.0.1 or greater to be compatible.

If redo-based capture is enabled but a UDT contains an unsupported attribute, Integrated Capture reverts to capturing the UDT using fetch. For limitations of support for capture, see XML Data Types in Using Oracle GoldenGate for Oracle Database. If you create object tables by using a CREATE TABLE AS SELECT (CTAS) statement, Integrated Capture must be configured to capture DML from the CTAS operation in order to fully support object tables. For CTAS usage information, see How Oracle GoldenGate Handles Derived Object Names in Using Oracle GoldenGate for Oracle Database.

The default is USENATIVEOBJSUPPORT if supported.

USE_ROOT_CONTAINER_TIMEZONE

Valid for Oracle integrated Extract only.

This parameter is for a CDB environment. Each PDB in a CDB can use a different database time zone. If the database time zone is available, Extract tries to get the time zone of a PDB from Integrated Dictionary. The time zone extraction requires a patch on the mining database. If the patch is not available, Extract sends a query to the PDB to get the time zone. If the database patch or a connection to the PDB is not available, and this parameter is specified, Extract assumes that the PDB database time zone is the same as the root container database time zone.

VERSIONCHECK DYNAMIC | IMMEDIATE

This is valid for SQL Server.

Use this option when you want Extract to validate the CDC object versions of common stored procedures, such as OracleCDCExtract and OracleGGCreateProcs, at startup.

DYNAMIC (default) identifies the CDC object versions of the tables specified in the parameter file once per table while records are processed.

IMMEDIATE identifies CDC object version issues upfront, instead of while records are processed. Databases with large numbers of tables configured for capture require longer startup validation.
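
For example, a sketch that performs all CDC object version validation upfront at startup:

TRANLOGOPTIONS VERSIONCHECK IMMEDIATE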