Valid For
TABLE is valid for Extract. MAP is valid for Replicat.
Description
The TABLE and MAP parameters control the selection, mapping, and manipulation of the objects that are to be affected by an Oracle GoldenGate process. These parameters work as follows:
Use the TABLE parameter in an Extract parameter file to specify one or more objects that are to be captured from the data source by the Extract process. TABLE options specify processing work, such as filtering and token definitions, that must be performed before Extract writes the captured data to the Oracle GoldenGate trail.
Use the MAP parameter in the Replicat parameter file to map the data from the source objects to the appropriate target objects. MAP options specify processing work, such as filtering, conversion, and error handling, that must be performed before the data is applied to the target objects. Each target object that you want to synchronize with a source object must be associated with that source object by means of a MAP parameter. Multiple source-target relationships can be specified by means of a wildcard.
TABLE and MAP are valid for initial-load configurations and for online processes configured to support the replication of transactional changes.
You can process the following objects with TABLE and MAP:
Indexes
Triggers
Materialized views
Tables
To specify a sequence for capture by Extract, use the SEQUENCE parameter.
Note:
Oracle GoldenGate supports the replication of the actual data values of Oracle materialized views. Oracle GoldenGate supports the replication of Oracle and Teradata DDL for indexes and triggers, but not the content of those objects.
You can use one or more TABLE or MAP statements in a parameter file, with or without wildcards, to specify all of the objects that you want to process.
You can exclude objects from a wildcarded TABLE or MAP statement with the TABLEEXCLUDE and MAPEXCLUDE parameters. Additional exclusion parameters are CATALOGEXCLUDE, SCHEMAEXCLUDE, and EXCLUDEWILDCARDOBJECTSONLY.
For more information about using TABLE and MAP, see Administering Oracle GoldenGate.
Default
None
Syntax for TABLE
For tables, you can use all of the TABLE options. For non-table objects, use TABLE only to specify an object for capture.
TABLE source_table
[, TARGET target_table]
[, ATTRCHARSET (charset)]
[, CHARSET character_set]
[, COLCHARSET character_set]
[, COLMAP (column_mapping)]
[, {COLS | COLSEXCEPT} (column_list)]
[, {DEF | TARGETDEF} template]
[, EVENTACTIONS action]
[, EXITPARAM 'parameter']
[, {FETCHCOLS | FETCHCOLSEXCEPT} (column_list)]
[, {FETCHMODCOLS | FETCHMODCOLSEXCEPT} (column_list)]
[, FETCHBEFOREFILTER]
[, FILTER (filter_clause)]
[, GETBEFORECOLS (column_specification)]
[, KEYCOLS (columns)]
[, SQLEXEC (SQL_specification)]
[, SQLPREDICATE 'WHERE where_clause']
[, TOKENS (token_definition)]
[, TRIMSPACES | NOTRIMSPACES]
[, TRIMVARSPACES | NOTRIMVARSPACES]
[, WHERE (clause)]
[, container.]schema.table PARTITIONOBJID ptn_object_ID [, ptn_object_ID]
;
Syntax for MAP
MAP source_table, TARGET target_table
[, MOD_COMPARE_COLS (tgt_col = source)]
[, COLMAP (column_mapping)]
[, COMPARECOLS (column_specification)]
[, COORDINATED]
[, {DEF | TARGETDEF} template]
[, EXCEPTIONSONLY]
[, EXITPARAM 'parameter']
[, EVENTACTIONS (action)]
[, FILTER (filter_clause)]
[, HANDLECOLLISIONS | NOHANDLECOLLISIONS]
[, INSERTALLRECORDS]
[, INSERTAPPEND | NOINSERTAPPEND]
[, KEYCOLS (columns)]
[, MAPALLCOLUMNS | NOMAPALLCOLUMNS]
[, MAPEXCEPTION (exceptions_mapping)]
[, MAPINVISIBLECOLUMNS | NOMAPINVISIBLECOLUMNS]
[, REPERROR (error, response)]
[, RESOLVECONFLICT (conflict_resolution_specification)]
[, SQLEXEC (SQL_specification)]
[, THREAD (thread_ID)]
[, THREADRANGE (thread_range[, column_list])]
[, TRIMSPACES | NOTRIMSPACES]
[, TRIMVARSPACES | NOTRIMVARSPACES]
[, WHERE (clause)]
[, container.]schema.table PARTITIONOBJID ptn_object_ID [, ptn_object_ID]
;
TABLE and MAP Options
The following table summarizes the options that are available for the TABLE and MAP parameters. Note that not all options are valid for both parameters.
Table 6-21 Summary of TABLE and MAP Syntax Components
Component | Description | Valid For |
---|---|---|
MOD_COMPARE_COLS (tgt_col = source [,...]) | Valid only for Replicat. Assigns a specified source value to the before image of a target column as the key value used in the WHERE clause. | MAP |
TABLE source_table [, TARGET target_table] | Specifies the source object in an Extract parameter file. | TABLE |
MAP source_table, TARGET target_table | Specifies the source-target object mapping for the Replicat process. Use in the Replicat parameter file. | MAP |
ATTRCHARSET (charset) | Specifies the source character set information at the UDT attribute level. | TABLE |
CHARSET character_set | Specifies any supported character set. | TABLE |
COLCHARSET character_set | Specifies any supported character set at the column level. | TABLE |
COLMAP (column_mapping) | Maps records between different source and target columns. | TABLE and MAP |
{COLS | COLSEXCEPT} (column_list) | Selects or excludes columns for processing. | TABLE |
COMPARECOLS (column_specification) | Specifies columns to use for conflict detection and resolution. | MAP |
COORDINATED | Forces a transaction to be processed as a barrier transaction. | MAP |
{DEF | TARGETDEF} template | Specifies a source-definitions or target-definitions template. | TABLE and MAP |
EXCEPTIONSONLY | Specifies that the MAP statement is an exceptions mapping used for error handling. | MAP |
EVENTACTIONS (action) | Triggers an action based on a record that satisfies a specified filter rule. | TABLE and MAP |
EXITPARAM 'parameter' | Passes a parameter in the form of a literal string to a user exit. | TABLE and MAP |
FETCHBEFOREFILTER | Directs a column fetch to be performed before the FILTER clause is executed. | TABLE |
{FETCHCOLS | FETCHCOLSEXCEPT} (column_list) | Enables the fetching of column values from the source database when the values are not in the transaction record. | TABLE |
{FETCHMODCOLS | FETCHMODCOLSEXCEPT} (column_list) | Forces column values to be fetched from the database even when the columns are present in the transaction log. | TABLE |
FILTER (filter_clause) | Selects records based on a numeric value. | TABLE and MAP |
GETBEFORECOLS (column_specification) | Forces before images of columns to be captured and written to the trail. | TABLE |
HANDLECOLLISIONS | NOHANDLECOLLISIONS | Reconciles the results of changes made to the target table by an initial load process with those applied by a change-synchronization group. | MAP |
INSERTALLRECORDS | Applies all row changes as inserts. | MAP |
INSERTAPPEND | NOINSERTAPPEND | Controls whether or not Replicat uses an Oracle APPEND hint for inserts. | MAP |
KEYCOLS (columns) | Designates columns that uniquely identify rows. | TABLE and MAP |
MAPALLCOLUMNS | NOMAPALLCOLUMNS | Controls whether or not Replicat obtains non-key columns. | MAP |
MAPEXCEPTION (exceptions_mapping) | Specifies that the MAP statement contains an exceptions mapping for wildcarded source objects. | MAP |
MAPINVISIBLECOLUMNS | NOMAPINVISIBLECOLUMNS | Controls whether or not Replicat includes invisible columns in Oracle target tables for default column mapping. Invisible columns in Oracle target tables that use explicit column mapping are always mapped, so they do not require this option. | MAP |
REPERROR (error, response) | Controls how Replicat responds to errors when executing the MAP statement. | MAP |
RESOLVECONFLICT (conflict_resolution_specification) | Specifies rules for conflict resolution. | MAP |
SQLEXEC (SQL_specification) | Executes stored procedures and queries. | TABLE and MAP |
SQLPREDICATE 'WHERE where_clause' | Enables a WHERE clause in the SELECT statement that Extract uses during an initial load. | TABLE |
THREAD (thread_ID) | Valid for Replicat in coordinated mode. Specifies that the MAP statement is processed by the specified thread. | MAP |
THREADRANGE (thread_range [, column_list]) | Valid for Replicat in coordinated mode. Specifies that the MAP statement is processed by the specified range of threads. | MAP |
TOKENS (token_definition) | Defines user tokens. | TABLE |
TRIMSPACES | NOTRIMSPACES | Controls whether or not trailing spaces are trimmed when mapping CHAR to VARCHAR columns. | TABLE and MAP |
TRIMVARSPACES | NOTRIMVARSPACES | Controls whether or not trailing spaces are trimmed when mapping VARCHAR to CHAR or VARCHAR columns. | TABLE and MAP |
WHERE (clause) | Selects records based on conditional operators. | TABLE and MAP |
; | (Semicolon) Terminates the TABLE or MAP statement. | TABLE and MAP |
PARTITIONOBJID ptn_object_ID | Available for Integrated Extract. Specifies the object IDs of the partitions to be captured for partitioned tables. | TABLE and MAP |
MAP MOD_COMPARE_COLS (tgt_col = source [,...])
tgt_col must be a target table column name, and should be a key column to take effect properly. source can be a source table column, a constant value (number or string), a column-mapping function, or SQLEXEC results. For example, suppose source col1 is mapped to target col1, and the before-image value of source col1 is 1, a dummy value because it is masked by the DBA for security. Replicat can query the actual before-image value of target col1 by SQLEXEC. Using MOD_COMPARE_COLS(), Replicat replaces the dummy source col1 value with the SQLEXEC result, so that the UPDATE or DELETE operation works properly.
TABLE source_table [, TARGET target_table]
TABLE is valid in an Extract parameter file.
Use TABLE to specify a source object for which you want Extract to capture data. Specify the fully qualified two-part or three-part name of the object, such as schema.table or catalog.schema.table. You can use a wildcard to specify multiple objects with one TABLE statement. To specify object names and wildcards correctly, see Administering Oracle GoldenGate.
Use the TARGET option only when Extract must refer to a target definitions file (specified with the TARGETDEFS parameter) to perform conversions, or when the COLMAP option is used to map columns. Otherwise, it can be omitted from a TABLE parameter. Column mapping with COLMAP and conversion work are usually performed on the target system to minimize the impact of replication activities on the source system, but can be performed on the source system if required. For example, column mapping and conversion can be performed on the source system in a configuration where there are multiple sources and one target. In this scenario, it may be easier to manage one target definitions file than to manage a definitions file for each source database, especially if frequent application changes require new definitions files to be generated.
Using TARGET in a TABLE parameter identifies the metadata of the extracted data based on the target structure, rather than that of the source, to reflect the structure of the record in the definitions file or the column map. Do not use three-part names if TARGET specifies tables in a target Oracle container database. Replicat can only connect to one container or catalog, so it is assumed that the container or catalog portion of the name is the same as the one that Replicat logs into (as specified with USERID, USERIDALIAS, or TARGETDB, depending on the database).
If no other TABLE syntax options are required to process the specified source data, you can use a simple TABLE statement, making sure to terminate it with a semicolon.
TABLE sales.customers;
The following shows the use of a wildcard to specify multiple tables:
TABLE sales.*;
The preceding TABLE statements direct Extract to capture all supported column data for the specified objects and write it to the trail without performing any filtering, conversion, or other manipulation.
MAP source_table, TARGET target_table
MAP is valid in a Replicat parameter file.
Use MAP to specify a source object, and use TARGET to specify the target object to which Replicat applies the replicated source data. Together, the MAP and TARGET clauses comprise a mapping.
For MAP source_table, specify the source object. Specify the fully qualified two-part or three-part name of the object, such as schema.table or catalog.schema.table. You can use a wildcard to specify multiple source objects.
For TARGET target_table, specify a two-part name, even if the target is a container database. Replicat can only connect to one container or catalog, so it is assumed that the container or catalog portion of the name is the same as the one that Replicat logs into (as specified with USERID, USERIDALIAS, or TARGETDB, depending on the database). You can use a wildcard to specify multiple target objects.
The following shows the use of a wildcard to specify multiple tables. Note that the TARGET clause does not include the tab prefix before the wildcard. That specification would be invalid, because the wildcard would be resolved as sales.tabtab1, sales.tabtab2, and so forth.
MAP sales.tab*, TARGET sales.*;
To specify object names and wildcards correctly in the MAP and TARGET clauses, see Administering Oracle GoldenGate.
If no filtering, mapping, or other work is required for the objects, you can use simple MAP statements like the following, making sure to terminate each one with a semicolon.
MAP sales.customers, TARGET sales.customers;
MAP fin.*, TARGET fin.*;
ATTRCHARSET (charset)
ATTRCHARSET is valid for TABLE.
Use the ATTRCHARSET clause to specify the source character set information at the UDT attribute level. It overrides the character set defined in the trail file or specified by the SOURCECHARSET, CHARSET, or COLCHARSET parameters.
Valid values are character set names and valid UDT attribute names. Wildcard attribute names are supported. For example:
TABLE SCHEMA.T*, ATTRCHARSET(WE8DEC, col*.attr1, col1.attr*.attr3);
CHARSET character_set
CHARSET is valid for TABLE.
Use the CHARSET clause to specify any supported character set. See CHARSET for more information.
COLCHARSET character_set
COLCHARSET is valid for TABLE.
Use the COLCHARSET clause to specify any supported character set. See COLCHARSET for more information.
COLMAP (column_mapping)
COLMAP is valid for TABLE and MAP.
Use COLMAP to:
Map individual source columns to target columns when the source and target columns have different names.
Specify default column mapping when the source and target names are identical.
COLMAP provides instructions for selecting, translating, and moving column data from a source column to a target column.
Note:
To create global rules for column mapping across all tables in subsequent MAP statements, use the COLMATCH parameter.
Getting More Information About Configuring Column Mapping
See Administering Oracle GoldenGate for more information about using COLMAP. To use COLMAP, related configuration considerations must be taken into account, such as whether the source and target column structures are identical or different, and whether global column-mapping parameters may be sufficient.
Syntax
COLMAP ( [USEDEFAULTS, ] target_column = source_expression [BINARYINPUT] [, ...] )
USEDEFAULTS
Automatically maps source and target columns that have the same name if they were not specified in an explicit column mapping. The data types are translated automatically, as needed, based on the local data-definitions file. USEDEFAULTS eliminates the need for an explicit column mapping if those columns have the same name and the data does not require any filtering or conversion.
Specify USEDEFAULTS before explicit column mappings in the COLMAP clause. For additional information about default column mapping in COLMAP, see Administering Oracle GoldenGate.
target_column = source_expression
Defines an explicit source-target column mapping.
target_column
Specifies the name of the target column. For supported characters in column names, see Administering Oracle GoldenGate.
source_expression
Can be any of the following:
The name of a source column, such as ORD_DATE
A numeric constant, such as 123
A string constant within single quotes, such as 'ABCD'
An expression using an Oracle GoldenGate column-conversion function, such as @STREXT (COL1, 1, 3). See "Column Conversion Functions" for more information.
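A hedged example that combines the expression types listed above in one COLMAP clause (the table and column names are hypothetical):

```
MAP sales.orders, TARGET rpt.orders,
COLMAP (USEDEFAULTS,                     -- default-map columns with matching names
        order_date = ORD_DATE,           -- source column
        region_id  = 123,                -- numeric constant
        status     = 'ABCD',             -- string constant
        ord_prefix = @STREXT (COL1, 1, 3)); -- column-conversion function
```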
BINARYINPUT
Use BINARYINPUT when the target column is defined as a binary data type, such as RAW or BLOB, but the source input contains binary zeros in the middle of the data. The source input is handled as binary input, and replacement of data values is suppressed.
{COLS | COLSEXCEPT} (column_list)
COLS and COLSEXCEPT are valid for TABLE.
Use COLS and COLSEXCEPT to control the columns for which data is captured.
COLS specifies columns that contain the data that you want to capture. When COLS is used, all columns that are not in the COLS list are ignored by Oracle GoldenGate.
COLSEXCEPT specifies columns to exclude from being captured. When COLSEXCEPT is used, all columns that are not in the COLSEXCEPT list are captured by Oracle GoldenGate. For tables with numerous columns, COLSEXCEPT may be more efficient than listing each column with COLS.
Caution:
Do not exclude key columns, and do not use COLSEXCEPT to exclude columns that contain data types that are not supported by Oracle GoldenGate. COLSEXCEPT does not exclude unsupported data types.
To use COLS, the following is required:
The table must have one or more key columns, or a substitute key must be defined with the KEYCOLS option. See "KEYCOLS (columns)".
The key columns, or the columns specified with KEYCOLS, must be included in the column list that is specified with COLS. Otherwise, they will not be captured, and an error will be generated during processing.
Without a primary key, a unique key, or a KEYCOLS clause in the TABLE statement, Oracle GoldenGate uses all of the columns in the table, rendering COLS unnecessary.
Note:
Do not use this option for tables that are processed in pass-through mode by a data-pump Extract group.
Syntax
{COLS | COLSEXCEPT} (column [, ...])
column
The name of a column. To specify multiple columns, create a comma-delimited list, for example:
COLS (name, city, state, phone)
Note:
If the database only logs values for columns that were changed in an update operation, a column specified for capture with COLS might not be available. To make those columns available, use the FETCHCOLS option in the TABLE statement or enable supplemental logging for the column.
Example
The COLS clause in this example captures only columns 1 and 3, whereas the COLSEXCEPT clause captures all columns except columns 1 and 3.
TABLE hq.acct, COLS (col1, col3);
TABLE hq.sales, COLSEXCEPT (col1, col3);
COMPARECOLS (column_specification)
COMPARECOLS is valid for MAP.
Use COMPARECOLS to specify the columns that Replicat uses to detect and resolve update or delete conflicts when configured with the RESOLVECONFLICT option of MAP in a multi-master configuration. A conflict is a mismatch between the before image of a record in the trail and the correct data in the target table.
To use COMPARECOLS, the before image must be available in the trail record by means of the GETBEFORECOLS parameter in the Extract TABLE statement. The specified columns must exist in the target database and must also be part of the Replicat configuration (that is, satisfy the TARGET specification, with or without a COLMAP clause).
Only scalar data types are supported by COMPARECOLS as comparison columns. A scalar data type has a single, atomic value, has no internal components, and can be used in a WHERE clause. Scalar data types supported by Oracle GoldenGate include the following, but not LOBs:
Numeric data types
Date data types
Character data types
Some examples of non-scalar data types are spatial data, user-defined data types, large objects (LOB), XML, reference data types, and RAW. A row being considered for CDR can include non-scalar data as long as the conflict is not in the non-scalar data itself.
To specify conflict resolution routines, use the RESOLVECONFLICT option of MAP. COMPARECOLS and RESOLVECONFLICT can be in any order in the MAP statement.
Getting More Information About Configuring the CDR Feature
See Administering Oracle GoldenGate for more information about configuring conflict detection and resolution.
Syntax
COMPARECOLS( {ON UPDATE | ON DELETE} {ALL | KEY | KEYINCLUDING (col[,...]) | ALLEXCLUDING (col[,...]) } [,...] )
{ON UPDATE | ON DELETE}
Specifies whether the before image of the specified columns should be compared for updates or deletes. You can use ON UPDATE only, ON DELETE only, or both. If using both, specify them within the same COMPARECOLS clause. See the example for how to use both.
{ALL | KEY | KEYINCLUDING (col [,...]) | ALLEXCLUDING (col [,...])}
Specifies the columns for which a before image is captured.
ALL
Compares using all columns in the target table. An error is generated if any corresponding before images are not available in the trail. Using ALL imposes the highest processing load for Replicat, but allows conflict-detection comparisons to be performed using all columns for maximum accuracy.
KEY
Compares only the primary key columns. This is the fastest option, but does not permit the most accurate conflict detection, because keys can match while non-key columns differ.
KEYINCLUDING
Compares the primary key columns and the specified column or columns. This is a reasonable compromise between speed and detection accuracy.
ALLEXCLUDING
Compares all columns except the specified columns. For tables with numerous columns, ALLEXCLUDING may be more efficient than KEYINCLUDING. Do not exclude key columns.
In the following example, the key columns plus the name, address, and salary columns are compared for conflicts.
MAP src, TARGET tgt, COMPARECOLS (ON UPDATE KEYINCLUDING (name, address, salary), ON DELETE KEYINCLUDING (name, address, salary));
In the following example, the comment column is ignored and all other columns are compared for conflicts.
MAP src, TARGET tgt, COMPARECOLS (ON UPDATE ALLEXCLUDING (comment));
COORDINATED
COORDINATED is valid for MAP. This option is valid when Replicat is in coordinated mode.
Use the COORDINATED option to force transactions made on objects in the same MAP statement to be processed as barrier transactions. It causes all of the threads across all MAP statements to synchronize to the same trail location. The synchronized position is the beginning of the transaction that contains a record that satisfies a MAP statement that contains the COORDINATED keyword. The transaction is then applied atomically by a single thread, which is either the thread with the lowest thread ID among the currently running threads, or a dedicated thread with the ID of 0 if USEDEDICATEDCOORDINATIONTHREAD is specified in the parameter file.
THREAD and THREADRANGE clauses specified in conjunction with COORDINATED are ignored, because the record will not be applied by the designated thread(s). The COORDINATED keyword results in temporarily suspending parallelism so that the target tables are in a consistent state before the force-coordinated transaction is applied. After this point, parallel execution commences again.
Replicat by default coordinates transactions in which the primary key is updated, transactions that perform DDL, and certain EVENTACTIONS actions. COORDINATED provides for explicit coordination.
See Administering Oracle GoldenGate for more information about Coordinated Replicat.
Syntax
COORDINATED
Example
The following is an example of the use of the COORDINATED option. In this example, business rules require that the target tables be in a consistent state before Replicat executes transactions that include SQLEXEC operations on the objects specified in the MAP statement. Parallelism must be temporarily converted to serial SQL processing in this case.
Given the following MAP statement, if another thread inserts into t2 a record with a value of 100 for col_val before the insert to t1 is performed by thread 1, then the SQLEXEC will delete the row. If other threads are still processing the record that has the value of 100, the SQLEXEC fails. The results of this MAP statement are, therefore, not predictable.
MAP u1.t1, TARGET u2.t1, SQLEXEC (ID test2, QUERY ' delete from u2.t2 where col_val =100 ', NOPARAMS), THREAD (1);
Conversely, when COORDINATED is used, all of the threads synchronize at a common point, including the one processing the col_val=100 record, thereby removing the ambiguity of the results.
MAP u1.t1, TARGET u2.t1, SQLEXEC (ID test2, QUERY ' delete from u2.t2 where col_val =100 ', NOPARAMS), THREAD (1), COORDINATED;
{DEF | TARGETDEF} template
DEF and TARGETDEF are valid for TABLE and MAP.
Use DEF and TARGETDEF to specify the name of a definitions template that was created by the DEFGEN utility.
DEF specifies a source-definitions template.
TARGETDEF specifies a target-definitions template.
A template is based on the definitions of a specific table. It enables new tables that have the same definitions as the original table to be added to the Oracle GoldenGate configuration without running DEFGEN for them, and without having to stop and start the Oracle GoldenGate process. The definitions in the template are used for definitions lookups.
Getting More Information About Creating Definitions Templates
For more information about templates and DEFGEN, see Administering Oracle GoldenGate.
Syntax
{DEF | TARGETDEF} template
template
The name of one of the following definitions templates generated by the DEFGEN utility:
Use DEF to specify a source-definitions template generated by the DEF option of the TABLE parameter in the DEFGEN parameter file.
Use TARGETDEF to specify a target-definitions template generated by the TARGETDEF option of the TABLE parameter in the DEFGEN parameter file.
The definitions contained in the template must be identical to the definitions of the table or tables that are specified in the same TABLE or MAP statement.
Case-sensitivity of the template name is observed when the name is specified the same way that it is stored in the database. Make certain that the template name is specified the same way in both the DEF or TARGETDEF clause in this TABLE or MAP statement, and in the DEFGEN parameter file that created the template.
This example shows a case-insensitive template name.
MAP acct.cust*, TARGET acct.cust*, DEF custdef;
This example shows a case-sensitive template name when the database requires quotes to enforce case-sensitivity.
TABLE acct.cust*, DEF "CustDef";
This example shows a case where both DEF and TARGETDEF are used.
MAP acct.cust*, TARGET acc.cust*, DEF custdef, TARGETDEF tcustdef;
EXCEPTIONSONLY
EXCEPTIONSONLY is valid for MAP.
Use EXCEPTIONSONLY in an exceptions MAP statement intended for error handling. The exceptions MAP statement must follow the MAP statement for which errors are anticipated. The exceptions MAP statement executes only if an error occurs for the last record processed in the preceding regular MAP statement.
To use EXCEPTIONSONLY, use a REPERROR statement with the EXCEPTION option, either within the regular MAP statement or at the root of the parameter file. See "REPERROR" for more information.
Note:
If using the Oracle GoldenGate Conflict Detection and Resolution (CDR) feature, a REPERROR with EXCEPTION is not needed. CDR automatically sends all operations that cause errors to the exceptions MAP statement.
The exceptions MAP statement must specify the same source table as the regular MAP statement, but the target table in the exceptions MAP statement must be an exceptions table.
Note:
See "MAPEXCEPTION (exceptions_mapping)" to support wildcarded object names.
Getting More Information About Configuring Exceptions Handling
For more information about configuring exceptions handling with an exceptions MAP statement, see Administering Oracle GoldenGate.
Syntax
EXCEPTIONSONLY
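Putting these pieces together, the following is a minimal sketch of a regular MAP statement paired with its exceptions MAP statement. The table names and exceptions-table columns are hypothetical; the @GETENV 'LASTERR' tokens capture details about the failed operation:

```
-- Hypothetical sketch: errors on tgt.orders fall through to the
-- exceptions MAP, which inserts a row into the exceptions table.
MAP src.orders, TARGET tgt.orders,
REPERROR (DEFAULT, EXCEPTION);
MAP src.orders, TARGET tgt.orders_exc,
EXCEPTIONSONLY,
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,
        err_num = @GETENV ('LASTERR', 'DBERRNUM'),
        op_type = @GETENV ('LASTERR', 'OPTYPE'));
```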
EVENTACTIONS (action)
EVENTACTIONS is valid for TABLE and MAP. Some options apply only to one or the other parameter and are noted as such in the descriptions.
Use EVENTACTIONS to cause the process to take a defined action based on a record in the trail, known as the event record, that qualifies for a specific filter rule. You can use this system, known as the event marker system (or event marker infrastructure), to customize processing based on database events. For example, you can suspend a process to perform a transformation or report statistics. The event marker feature is supported for the replication of data changes, but not for initial loads.
To trigger actions that do not require data to be applied to target tables, you can use the Replicat TABLE parameter with filtering options that support EVENTACTIONS. See "TABLE for Replicat" for more information.
Caution:
EVENTACTIONS is not supported if the source database is Teradata and Extract is configured in maximum performance mode.
You may need to combine two or more actions to achieve your goals. When multiple actions are combined, the entire EVENTACTIONS statement is parsed first, and then the specified options execute in order of precedence. The following list shows the order of precedence. The actions listed before Process the record occur before the record is written to the trail or applied to the target (depending on the process). Actions listed after Process the record are executed after the record is processed.
TRACE
LOG
CHECKPOINT BEFORE
DISCARD
SHELL
ROLLOVER
IGNORE
REPORT
SUSPEND
ABORT
Process the record
CHECKPOINT AFTER
FORCESTOP
STOP
To prevent the event record itself from being processed in the normal manner, use the IGNORE or DISCARD option. Because IGNORE and DISCARD are evaluated before the record itself, they prevent the record from being processed. Without those options, EVENTACTIONS for Extract writes the record to the trail, and EVENTACTIONS for Replicat applies that operation to the target database.
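For example, the following hedged sketch (the table and column names are hypothetical) logs an informational message and ignores the event record itself, so Extract does not write the record to the trail:

```
-- Hypothetical sketch: treat rows whose job_status is 'DONE' as event
-- records; log them and ignore them instead of writing them to the trail.
TABLE src.job_control,
FILTER (@STREQ (job_status, 'DONE') = 1),
EVENTACTIONS (LOG INFO, IGNORE);
```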
You should take into account the possibility that a transaction could contain two or more records that trigger an event action. In such a case, there could be multiple executions of certain EVENTACTIONS specifications. For example, encountering two qualifying records that trigger two successive ROLLOVER actions will cause Extract to roll over the trail twice, leaving one of the two files empty of transaction data.
You should also take into account that when the GETUPDATEBEFORES parameter is in effect, two records are generated for UPDATE operations: a record that contains the before image and a record that contains the after image. An event action is triggered for each of those records when the operation qualifies as an event record. You can use the BEFOREAFTERINDICATOR token of the GGHEADER column-conversion function as a filter in a FILTER clause to qualify the records so that the event action triggers only once, either on the before record or the after record, but not both.
The following example filters on the BEFORE indicator. The EVENTACTIONS statement issues the ECHO shell command to output the string 'Triggered on BEFORE' to the event log when a BEFORE record is encountered.
TABLE qasource.test, & FILTER(@STRFIND('BEFORE', @GETENV('GGHEADER' , 'BEFOREAFTERINDICATOR')) > 0), & EVENTACTIONS ( shell ('echo --== Triggered on BEFORE ==-- '), LOG);
The following shows the result of the event action:
2013-03-06 17:59:31 INFO OGG-05301 Shell command output: '--== Triggered on BEFORE ==--'
The following example does the same thing, but for the AFTER indicator.
TABLE qasource.test, & FILTER(@STRFIND('AFTER', @GETENV('GGHEADER' , 'BEFOREAFTERINDICATOR')) > 0), & EVENTACTIONS ( shell ('echo --== Triggered on AFTER ==-- '), LOG);
In a Teradata configuration where Extract is configured in maximum protection mode, use EVENTACTIONS only in the VAM-sort Extract group. It is not supported by the primary Extract in this configuration, because concurrent changes are not sorted in transaction order at this point in the processing stream. For more information, see Using Oracle GoldenGate for Heterogeneous Databases.
Getting More Information About Configuring the Event Marker System
See Administering Oracle GoldenGate for more information about using EVENTACTIONS and the Event Marker System.
Syntax
EVENTACTIONS (
[STOP | SUSPEND | ABORT | FORCESTOP]
[IGNORE [RECORD | TRANSACTION [INCLUDEVENT]]]
[DISCARD]
[LOG [INFO | WARNING]]
[REPORT]
[ROLLOVER]
[SHELL 'command' | SHELL ('command', VAR variable = {column_name | expression} [, ...])]
[TRACE[2] file [TRANSACTION] [DDL[INCLUDE] | DDLONLY] [PURGE | APPEND]]
[CHECKPOINT [BEFORE | AFTER | BOTH]]
[, ...]
)
STOP
Valid in TABLE
for Extract and in MAP
for Replicat.
Brings the process to a graceful stop when the specified event record is encountered. The process waits for other operations within event transaction to be completed before stopping. If the transaction is a Replicat grouped or batched transaction, the current group of transactions are applied before the process stops gracefully. The process restarts at the next record after the event record, so long as that record also signified the end of a transaction.
The process logs a message if it cannot stop immediately because a transaction is still open. However, if the event record is encountered within a long-running open transaction, there is no warning message that alerts you to the uncommitted state of the transaction. Therefore, the process may remain running for a long time despite the STOP
event.
STOP
can be combined with other EVENTACTIONS
options except for ABORT
and FORCESTOP
.
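As an illustration, a TABLE statement that stops Extract gracefully when a marker row is captured might look like the following sketch. The schema, table, and column names (admin.event_marker, event_type) are assumptions, not part of this reference.

```
-- Stop Extract gracefully when a 'STOP_EXTRACT' marker row is captured
TABLE admin.event_marker, &
FILTER (@STREQ(event_type, 'STOP_EXTRACT') > 0), &
EVENTACTIONS (STOP);
```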
SUSPEND
Valid in TABLE
for Extract and in MAP
for Replicat.
Pauses the process so that it retains the active context of the current run and can still respond to SEND
commands that are issued in GGSCI. When a process is suspended, the INFO
command shows it as RUNNING
, and the RBA
field shows the last checkpoint position.
To resume processing, issue the SEND
command with the RESUME
option.
To use the CHECKPOINT BEFORE
option in conjunction with SUSPEND
, the event record must be the start of a transaction for the SUSPEND
to take place. That way, if the process is killed while in the suspended state, the event record with the SUSPEND
action is the first record to be reprocessed upon restart. If both CHECKPOINT BEFORE
and SUSPEND
are specified, but the event record is not the start of a transaction, the process abends before SUSPEND
can take place.
To use the CHECKPOINT AFTER
option in conjunction with SUSPEND
, the RESUME
command must be issued before the checkpoint can take place, and the event record must be a COMMIT
record. If the process is killed while in a SUSPEND
state, the process reprocesses the transaction from the last checkpointed position upon restart.
SUSPEND
cannot be combined with ABORT
but can be combined with all other options.
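A minimal sketch of SUSPEND in a MAP statement follows; the table and group names (admin.event_marker, rep1) are hypothetical.

```
-- Pause Replicat when a marker row arrives, and note it in the log
MAP admin.event_marker, TARGET admin.event_marker, &
EVENTACTIONS (SUSPEND, LOG INFO);
```

After the condition that prompted the suspension is resolved, processing resumes from GGSCI with SEND REPLICAT rep1, RESUME.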
ABORT
Valid in TABLE
for Extract and in MAP
for Replicat.
Forces the process to exit immediately when the specified event record is encountered, whether or not there are open transactions. The event record is not processed. A fatal error is written to the log, and the event record is written to the discard file if DISCARD
is also specified. The process will undergo recovery on startup.
ABORT
can be combined only with CHECKPOINT BEFORE
, DISCARD
, SHELL
, and REPORT
.
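Because ABORT can be combined with SHELL, a statement such as the following sketch can raise an external alert before the process exits. The table name, column, and alert file are assumptions for illustration.

```
-- Alert via shell command, then force an immediate exit
TABLE fin.trades, &
WHERE (status = 'POISON'), &
EVENTACTIONS (SHELL 'echo poison record detected >> alert.log', ABORT);
```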
FORCESTOP
Valid in TABLE
for Extract and in MAP
for Replicat.
Forces the process to stop gracefully when the specified event record is encountered, but only if the event record is the last operation in the transaction or the only record in the transaction. The record is written normally.
If the event record is encountered within a long-running open transaction, the process writes a warning message to the log and exits immediately, as in ABORT
. In this case, recovery may be required on startup.
FORCESTOP
can be combined with other EVENTACTIONS
options except for ABORT
, STOP
, CHECKPOINT AFTER
, and CHECKPOINT BOTH
. If used with ROLLOVER
, the rollover only occurs if the process stops gracefully.
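For example, a hypothetical batch-control table (hr.batch_control with a job_name column) could signal an orderly end-of-day shutdown:

```
-- Stop gracefully once the end-of-day marker commits
TABLE hr.batch_control, &
FILTER (@STREQ(job_name, 'EOD_COMPLETE') > 0), &
EVENTACTIONS (FORCESTOP);
```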
IGNORE [RECORD | TRANSACTION [INCLUDEEVENT]]
Valid in TABLE
for Extract and in MAP
for Replicat.
Ignores some or all of the transaction, depending on the selected action.
RECORD
is the default. It forces the process to ignore only the specified event record, but not the rest of the transaction. No warning or message is written to the log, but the Oracle GoldenGate statistics are updated to show that the record was ignored.
Use TRANSACTION
to ignore the entire transaction that contains the record that triggered the event. If TRANSACTION
is used, the event record must be the first one in the transaction. When ignoring a transaction, the event record is also ignored by default. TRANSACTION
can be shortened to TRANS
.
Use INCLUDEEVENT
with TRANSACTION
to propagate the event record to the trail or to the target, but ignore the rest of the associated transaction.
IGNORE
can be combined with all other EVENTACTIONS
options except ABORT
and DISCARD
.
An IGNORE
action is processed after all the qualification, filtering, mapping, and user-exit operations are processed. The record or transaction is ignored in the final output phase and prevents the record or transaction from being written to the output target (the trail in the case of Extract or the database in the case of Replicat). Therefore, in certain expressions, for example those that include SQLEXEC
operations, the SQLEXEC
will be executed before the IGNORE
is processed. This means that, while the record is not written to the trail or target database, all of the effects of processing the record through qualification, filtering, mapping and user-exit will occur.
This action is not valid for DDL records. Because DDL operations are autonomous, ignoring a record is equivalent to ignoring the entire transaction.
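The following sketch propagates a marker record but skips the rest of its transaction; the table name is an assumption, and the marker must be the first record in the transaction for TRANSACTION to apply.

```
-- Keep the event record, ignore the transaction it starts
TABLE fin.audit_marker, &
EVENTACTIONS (IGNORE TRANSACTION INCLUDEEVENT);
```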
DISCARD
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes the process to:
write the specified event record to the discard file.
update the Oracle GoldenGate statistics to show that the record was discarded.
The process resumes processing with the next record in the trail.
DISCARD
can be combined with all other EVENTACTIONS
options except IGNORE
.
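For illustration, DISCARD can be paired with LOG so that a suspect record is both preserved in the discard file and flagged in the log. The table and column names are hypothetical.

```
-- Discard invalid orders and write a warning to the log
MAP sales.orders, TARGET sales.orders, &
WHERE (order_status = 'INVALID'), &
EVENTACTIONS (DISCARD, LOG WARNING);
```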
LOG [INFO | WARNING]
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes the process to log the event when the specified event record is encountered. The message is written to the report file, to the Oracle GoldenGate error log, and to the system event log.
Use the following options to specify the severity of the message:
INFO
specifies a low-severity informational message. This is the default.
WARNING
specifies a high-severity warning message.
LOG
can be combined with all other EVENTACTIONS
options except ABORT
. If using ABORT
, LOG
is not needed because ABORT
logs a fatal error before the process exits.
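A sketch of a LOG action that flags unusually large values while letting processing continue (the table and column names are assumptions):

```
-- Log a warning whenever a credit limit above the threshold is replicated
TABLE fin.limits, &
FILTER (credit_limit > 1000000), &
EVENTACTIONS (LOG WARNING);
```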
REPORT
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes the process to generate a report file when the specified event record is encountered. This is the same as using the SEND
command with the REPORT
option in GGSCI.
The REPORT
message occurs after the event record is processed (unless DISCARD
, IGNORE
, or ABORT
are used), so the report data will include the event record.
REPORT
can be combined with all other EVENTACTIONS
options.
ROLLOVER
Valid in TABLE
for Extract.
Causes Extract to roll over the trail to a new file when the specified event record is encountered. The ROLLOVER
action occurs before Extract writes the event record to the trail file, which causes the record to be the first one in the new file unless DISCARD
, IGNORE
or ABORT
are also used.
ROLLOVER
can be combined with all other EVENTACTIONS
options except ABORT
. ROLLOVER
cannot be combined with ABORT
because ROLLOVER
does not cause the process to write a checkpoint, and ROLLOVER
happens before ABORT
.
Without a ROLLOVER
checkpoint, ABORT
causes Extract to go to its previous checkpoint upon restart, which would be in the previous trail file. In effect, this cancels the rollover.
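Because ROLLOVER combines with IGNORE, a marker record can start a fresh trail file without itself being written, as in this sketch (the table and marker column are hypothetical):

```
-- Begin a new trail file at end of day; do not write the marker itself
TABLE dw.batch_marker, &
FILTER (@STREQ(marker_type, 'END_OF_DAY') > 0), &
EVENTACTIONS (ROLLOVER, IGNORE);
```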
SHELL '
command'
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes the process to execute the specified shell command when the event record is encountered. SHELL '
command'
executes a basic shell command. The command string is taken at its literal value and sent to the system that way. The command is case-sensitive. Enclose the command string within single quote marks, for example:
EVENTACTIONS (SHELL 'echo hello world! > output.txt')
If the shell command is successful, the process writes an informational message to the report file and to the event log. Success is based upon the exit status of the command in accordance with the UNIX shell language. In that language, zero indicates success.
If the system call is not successful, the process abends with a fatal error. In the UNIX shell language, non-zero equals failure. Note that the error message relates only to the execution of the SHELL
command itself, and not the exit status of any subordinate commands. For example, SHELL
can execute a script successfully, but commands in that script could fail.
SHELL
can be combined with all other EVENTACTIONS
options.
SHELL ('
command
', VAR
variable
= {
column_name
|
expression
} [, ...])
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes the process to execute the specified shell command when the event record is encountered and supports parameter passing. The command and the parameters are case-sensitive.
When SHELL
is used with arguments, the entire command and argument strings must be enclosed within parentheses, for example:
EVENTACTIONS (SHELL ('Current timestamp: $1 SQLEXEC result is $2 ',VAR $1 = @GETENV('JULIANTIMESTAMP'),VAR $2 = mytest.description));
The input is as follows:
command
Is the command, which is passed literally to the system.
VAR
Is a required keyword that starts the parameter input.
variable
Is the user-defined name of the placeholder variable where the run-time variable value will be substituted. Extra variables that are not used in the command are ignored. Note that any literal in the SHELL
command that matches a VAR
variable name is replaced by the substituted VAR
value. This may have unintended consequences, so test your code before putting it into production.
column_name
Can be the before or after (current) image of a column value.
expression
can be the following, depending on whether column data or DDL is being handled.
Valid expressions for column data:
The value from a TOKENS
clause in a TABLE
statement.
A return value from any Oracle GoldenGate column-conversion function.
A return value from a SQLEXEC
query or procedure.
Valid expressions for DDL:
Return value from @TOKEN
function (Replicat only).
Return value from @GETENV
function.
Return value from other functions that do not reference column data (for example, @DATENOW
).
Return value from @DDL
function.
TRACE[2]
file
[TRANSACTION] [DDL[INCLUDE] | DDLONLY] [PURGE | APPEND]
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes process trace information to be written to a trace file when the specified event record is encountered. TRACE
provides step-by-step processing information. TRACE2
identifies the code segments on which the process is spending the most time.
By default (without options), standard DML tracing without consideration of transaction boundaries is enabled until the process terminates.
file
specifies the name of the trace file and must appear immediately after the TRACE
keyword. You can specify a unique trace file, or use the default trace file that is specified with the standalone TRACE
or TRACE2
parameter.
The same trace file can be used across different TABLE
or MAP
statements in which EVENTACTIONS
TRACE
is used. If multiple TABLE
or MAP
statements specify the same trace file name, but the TRACE
options are not used consistently, preference is given to the options in the last resolved TABLE
or MAP
that contains this trace file.
Use TRANSACTION
to enable tracing only until the end of the current transaction, instead of when the process terminates. For Replicat, transaction boundaries are based on the source transaction, not the typical Replicat grouped or batched target transaction. TRANSACTION
can be shortened to TRANS
. This option is valid only for DML operations.
DDL[INCLUDE]
traces DDL and also DML transactional data processing. Either DDL
or DDLINCLUDE
is valid.
DDLONLY
traces DDL but does not trace DML transactional data.
These options are valid only for Replicat. By default DDL tracing is disabled.
Use PURGE
to truncate the trace file before writing additional trace records, or use APPEND
to write new trace records at the end of the existing records. APPEND
is the default.
TRACE
can be combined with all other EVENTACTIONS
options except ABORT
.
To disable tracing to the specified trace file, issue the GGSCI SEND
process
command with the TRACE OFF
file_name
option.
CHECKPOINT [BEFORE | AFTER | BOTH]
Valid in TABLE
for Extract and in MAP
for Replicat.
Causes the process to write a checkpoint when the specified event record is encountered. Checkpoint actions provide a context around the processing that is defined in TABLE
or MAP
statements. This context has a begin point and an end point, thus providing synchronization points for mapping the functions that are performed with SQLEXEC
and user exits.
BEFORE
BEFORE
for an Extract process writes a checkpoint before Extract writes the event record to the trail. BEFORE
for a Replicat process writes a checkpoint before Replicat applies the SQL operation that is contained in the record to the target.
BEFORE
requires the event record to be the first record in a transaction. If it is not the first record, the process will abend. Use BEFORE
to ensure that all transactions prior to the one that begins with the event record are committed.
When using EVENTACTIONS
for a DDL record, note that since each DDL record is autonomous, the DDL record is guaranteed to be the start of a transaction; therefore the CHECKPOINT BEFORE
event action is implied for a DDL record.
CHECKPOINT BEFORE
can be combined with all EVENTACTIONS
options.
AFTER
AFTER
for Extract writes a checkpoint after Extract writes the event record to the trail. AFTER
for Replicat writes a checkpoint after Replicat applies the SQL operation that is contained in the record to the target.
AFTER
flags the checkpoint request as an advisory, meaning that the process will only issue a checkpoint at the next practical opportunity. For example, in the case where the event record is one of a multi-record transaction, the checkpoint will take place at the next transaction boundary, in keeping with the Oracle GoldenGate data-integrity model.
When using EVENTACTIONS
for a DDL record, note that since each DDL record is autonomous, the DDL record is guaranteed to be the end (boundary) of a transaction; therefore the CHECKPOINT AFTER
event action is implied for a DDL record.
CHECKPOINT AFTER
can be combined with all EVENTACTIONS
options except ABORT.
BOTH
BOTH
combines BEFORE
and AFTER
. The Extract or Replicat process writes a checkpoint before and after it processes the event record.
CHECKPOINT BOTH
can be combined with all EVENTACTIONS
options except ABORT.
CHECKPOINT
can be shortened to CP
.
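As a sketch, the shortened CP keyword can be combined with a LOG action; the table name is an assumption, and because BEFORE is used, the event record must be the first record in its transaction.

```
-- Checkpoint before applying the day-close record, and note it in the log
MAP fin.day_close, TARGET fin.day_close, &
EVENTACTIONS (CP BEFORE, LOG INFO);
```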
The following example shows how you can configure a process to ignore certain records. When Extract processes any trail record that has name = abc
, it ignores the record.
TABLE fin.cust, & WHERE (name = 'abc'), & EVENTACTIONS (ignore);
Based on the compatibility and precedence rules of EVENTACTIONS
options, DISCARD
takes higher precedence than ABORT
, so in this example the event record gets written to the discard file before the process abends.
MAP fin.cust, TARGET fin.cust2, & WHERE (name = 'abc'), & EVENTACTIONS (DISCARD, ABORT);
The following example executes a SHELL
action. It gets the result of a SQLEXEC
query and pairs it with the current timestamp.
TABLE src.tab & SQLEXEC (id mytest, query 'select description from lookup & where pop = :mycol2', params (mycol2 = col2) ), & EVENTACTIONS (SHELL ('Current timestamp: $1 SQLEXEC result is $2 ', & VAR $1 = @GETENV('JULIANTIMESTAMP'), VAR $2 = mytest.description));
The shell command that results from this example could be similar to the following:
'Current timestamp: 212156002704718000 SQLEXEC result is test passed'
The following example shows how invalid results can occur if a placeholder name conflicts with literal text in the command string. In this example, a placeholder named $1
is associated with a column value, and the SHELL
command echoes a literal string that includes $1
.
MAP src.tab1, TARGET targ.tab1 & EVENTACTIONS (SHELL ('echo Extra charge for $1 is $1', VAR $1 = COL1));
This is the unintended result, assuming the column value is gift wrap
:
'Extra charge for gift wrap is gift wrap'
Changing the placeholder variable to $col
results in the correct output:
MAP src.tab1, TARGET targ.tab1 & EVENTACTIONS (SHELL ('echo Extra charge for $col is $1', VAR $col = COL1));
'Extra charge for gift wrap is $1'
The following shows similar potential for unintended results:
MAP src.tab1, TARGET targ.tab1 & EVENTACTIONS (SHELL ('Timestamp: $1 Price is $13 > out.txt ', & VAR $1 = @GETENV('JULIANTIMESTAMP')));
The redirected output file might contain a string like this (notice the second timestamp contains an appended value of 3
):
'Timestamp: 212156002704718000 Price is 2121560027047180003'
The intended result is this:
'Timestamp: 212156002704718000 Price is $13'
These examples show different ways to configure tracing.
MAP tab1, TARGET tab1 EVENTACTIONS (TRACE ./dirrpt/trace1.txt); MAP tab2, TARGET tab2 EVENTACTIONS (TRACE ./dirrpt/trace2.txt TRANSACTION);
In the first MAP
statement, the trace1.txt
trace file is generated just before the first tab1
event record is applied to the target. It contains all of the tracing information from that point forward until Replicat terminates or unless tracing is turned off with the GGSCI SEND REPLICAT
command.
Because the second MAP
statement contains the TRANSACTION
option, the trace2.txt
file is generated just before the first tab2
event record is applied to the target, but the tracing stops automatically at the conclusion of the transaction that contains the tab2
event record.
The following shows how EVENTACTIONS
with SUSPEND
can be used.
Case 1: You are replicating DDL, and you want to ensure that there is enough space in the target database to create a new table. Use EVENTACTIONS
with SUSPEND
in the MAP
statement that maps the CREATE TABLE DDL
operation, and then execute a SQL statement in that MAP
statement to query the amount of space remaining in a tablespace. If there is enough space, use SEND REPLICAT
with RESUME
to resume processing immediately; if not, leave Replicat suspended until a DBA can add the space, and then use SEND REPLICAT
with RESUME
to resume processing.
Case 2: You want to fix unique key violations when they occur on any table. Because Replicat is processing thousands of tables, you do not want to stop the process each time there is a violation, because this would cause Replicat to spend time rebuilding the object cache again upon restart. By using EVENTACTIONS
with SUSPEND
, you can simply suspend processing until the problem is fixed.
Case 3: At the end of the day, you suspend Replicat to run daily reports, and then resume processing immediately without stopping and restarting the process.
EXITPARAM '
parameter
'
EXITPARAM
is valid for TABLE
and MAP
.
Use EXITPARAM
to pass a parameter to the EXIT_PARAMS
function of a user exit routine whenever a record from the TABLE
or MAP
statement is encountered.
Getting More Information about User Exits
See Administering Oracle GoldenGate for instructions on how to configure user exits.
See "User Exit Functions" for more information about the syntax for the user exits.
Syntax
EXITPARAM 'parameter string'
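For illustration, the following sketch passes a literal string to the EXIT_PARAMS function of a user exit whenever a record from the statement is encountered. The table name and the parameter string 'mask_ssn' are hypothetical; the string is meaningful only to the user exit routine that receives it.

```
-- Hand the string 'mask_ssn' to the user exit for records from this table
TABLE hr.employees, &
EXITPARAM 'mask_ssn';
```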
FETCHBEFOREFILTER
FETCHBEFOREFILTER
is valid for TABLE
.
Use FETCHBEFOREFILTER
to fetch columns that are specified with FETCHCOLS
or FETCHCOLSEXCEPT
before a FILTER
operation is executed. Fetching before the filter ensures that values required for the filter are available. Without FETCHBEFOREFILTER
, fetches specified with FETCHCOLS
or FETCHCOLSEXCEPT
are not performed until after filters are executed. Specify FETCHBEFOREFILTER
before FILTER
in the parameter file.
Do not use this option for tables being processed in pass-through mode by a data-pump Extract group.
Syntax
FETCHBEFOREFILTER
Example
TABLE hr.salary, FETCHCOLS (sal_level),
FETCHBEFOREFILTER,
FILTER (sal_level >= 8)
;
{FETCHCOLS | FETCHCOLSEXCEPT} (
column_list
)
FETCHCOLS
and FETCHCOLSEXCEPT
are valid for TABLE
. These options are only valid for the primary Extract.
Use FETCHCOLS
and FETCHCOLSEXCEPT
to fetch column values from the database when the values are not present in the transaction log record. Use this option if the database only logs the values of columns that were changed in an update operation, but you need to ensure that other column values required for FILTER
operations are available.
FETCHCOLS
fetches the specified columns.
FETCHCOLSEXCEPT
fetches all columns except the specified columns. For tables with numerous columns, FETCHCOLSEXCEPT
may be more efficient than listing each column with FETCHCOLS
.
FETCHCOLS
and FETCHCOLSEXCEPT
are valid for all databases that are supported by Oracle GoldenGate.
For an Oracle Database, Oracle GoldenGate fetches the values from the undo tablespace through Oracle's Flashback Query mechanism. The query provides a read-consistent image of the columns as of a specific time or SCN. For more information about how Oracle GoldenGate uses Flashback Query, see Using Oracle GoldenGate for Oracle Database.
Instead of using FETCHCOLS
or FETCHCOLSEXCEPT
, it may be more efficient to enable supplemental logging for the desired columns.
To control fetching and enable a response when a column specified for fetching cannot be located, use the FETCHOPTIONS
parameter. To include fetch results in statistical displays generated by the STATS EXTRACT
command, use the STATOPTIONS
parameter.
If values for columns specified with FETCHCOLS
or FETCHCOLSEXCEPT
are present in the transaction log, no database fetch is performed. This reduces database overhead.
Syntax
{FETCHCOLS | FETCHCOLSEXCEPT} (column [, ...])
Example
The FETCHCOLS
clause in this example fetches only columns 1 and 3, whereas the FETCHCOLSEXCEPT
clause fetches all columns except columns 1 and 3.
TABLE hq.acct, FETCHCOLS (col1, col3); TABLE hq.sales, FETCHCOLSEXCEPT (col1, col3);
{FETCHMODCOLS | FETCHMODCOLSEXCEPT} (
column_list
)
FETCHMODCOLS
and FETCHMODCOLSEXCEPT
are valid for TABLE
. These options are only valid for the primary Extract.
Use FETCHMODCOLS
and FETCHMODCOLSEXCEPT
to force column values to be fetched from the database even if the columns are present in the transaction log. Depending on the database type, a log record can contain all of the columns of a table or only the columns that changed in the given transaction operation.
FETCHMODCOLS
fetches the specified columns.
FETCHMODCOLSEXCEPT
fetches all columns that are present in the transaction log, except the specified columns. For tables with numerous columns, FETCHMODCOLSEXCEPT
might be more efficient than listing each column with FETCHMODCOLS
.
FETCHMODCOLS
and FETCHMODCOLSEXCEPT
are valid for all databases that are supported by Oracle GoldenGate.
Observe the following usage guidelines:
Do not use FETCHMODCOLS
and FETCHMODCOLSEXCEPT
for key columns.
Syntax
{FETCHMODCOLS | FETCHMODCOLSEXCEPT} (column [, ...])
Example
The FETCHMODCOLS
clause in this example fetches only columns 1 and 3, whereas the FETCHMODCOLSEXCEPT
clause fetches all columns except columns 1 and 3.
TABLE hq.acct, FETCHMODCOLS (col1, col3); TABLE hq.sales, FETCHMODCOLSEXCEPT (col1, col3);
FILTER (
filter_clause
)
FILTER
is valid for TABLE
and MAP
.
Use FILTER
to select or exclude records based on a numeric value. A filter expression can use conditional operators, Oracle GoldenGate column-conversion functions, or both.
Note:
To filter based on a string, use one of the Oracle GoldenGate string functions. See "Column Conversion Functions" for more information about these functions. You can also use the WHERE
option. See "WHERE (clause)".
Separate all FILTER
components with commas. A FILTER
clause can include the following:
Numbers
Columns that contain numbers
Functions that return numbers
Arithmetic operators:
+ (plus)
- (minus)
* (multiply)
/ (divide)
\ (remainder)
Comparison operators:
> (greater than)
>= (greater than or equal)
< (less than)
<= (less than or equal)
= (equal)
<> (not equal)
Results derived from comparisons can be zero (indicating FALSE
) or non-zero (indicating TRUE
).
Parentheses (for grouping results in the expression)
Conjunction operators: AND
, OR
Enclose literals in single quotes. Specify case-sensitive column names as they are stored in the database, and enclose them in double quotes if the database requires quotes to enforce case-sensitivity (such as Oracle).
Oracle GoldenGate supports FILTER
for columns that have a multi-byte character set.
Getting More Information about Record Filtering
See Administering Oracle GoldenGate for more information about FILTER
and other filtering options.
Syntax
FILTER ( [, ON INSERT | ON UPDATE | ON DELETE] [, IGNORE INSERT | IGNORE UPDATE | IGNORE DELETE], filter_clause [, RAISEERROR error_number] )
filter_clause
Selects records based on an expression, such as:
FILTER ((PRODUCT_PRICE*PRODUCT_AMOUNT) > 10000)
You can use the column-conversion functions of Oracle GoldenGate in a filter clause, as in:
FILTER (@COMPUTE (PRODUCT_PRICE*PRODUCT_AMOUNT)>10000)
Enclose literals in single quotes. Specify case-sensitive column names as they are stored in the database, and enclose them in double quotes if the database requires quotes to enforce case-sensitivity (such as Oracle).
Oracle GoldenGate does not support FILTER
for columns that have a multi-byte character set or a character set that is incompatible with the character set of the local operating system.
The maximum size of the filter clause is 5,000 bytes.
ON INSERT | ON UPDATE | ON DELETE
Restricts record filtering to the specified operation(s). Separate operations with commas, for example:
FILTER (ON UPDATE, ON DELETE, @COMPUTE (PRODUCT_PRICE*PRODUCT_AMOUNT)>10000)
The preceding example executes the filter for UPDATE
and DELETE
operations, but not INSERT
operations.
IGNORE INSERT | IGNORE UPDATE | IGNORE DELETE
Does not apply the filter for the specified operation(s). Separate operations with commas, for example:
FILTER (IGNORE INSERT, @COMPUTE (PRODUCT_PRICE*PRODUCT_AMOUNT)>10000)
The preceding example executes the filter on UPDATE
and DELETE
operations, but ignores INSERT
operations.
RAISEERROR
error
Raises a user-defined error number if the filter fails. Can be used as input to the REPERROR
parameter to invoke error handling. Make certain that the value for error
is outside the range of error numbers that is used by the database or by Oracle GoldenGate. For example: RAISEERROR 21000
.
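A sketch of RAISEERROR paired with the REPERROR parameter follows; the table and column names are assumptions. REPERROR is a separate Replicat parameter that maps the user-defined error number to an error-handling response.

```
-- Discard any update whose filter raises error 21000
REPERROR (21000, DISCARD)
MAP fin.acct, TARGET fin.acct, &
FILTER (ON UPDATE, balance >= 0, RAISEERROR 21000);
```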
GETBEFORECOLS (
column_specification
)
GETBEFORECOLS
is valid for TABLE
.
Use GETBEFORECOLS
to specify columns for which you want the before image to be captured and written to the trail upon an update or delete operation. Use GETBEFORECOLS
when using the Oracle GoldenGate Conflict Detection and Resolution (CDR) feature in a bi-directional or multi-master configuration. Also use it when using conversion functions or other processing features that require the before image of a record.
For updates, the before image of the specified columns is included in the trail whether or not any given column is modified. In addition to the columns specified in the GETBEFORECOLS
clause, an Oracle database will also log the before image of other columns that are modified. For other supported databases, you can use the GETUPDATEBEFORES
parameter to force the inclusion of the before values of other columns that are modified.
Note:
GETUPDATEBEFORES
overrides GETBEFORECOLS
if both are used in the same parameter file.
To use this parameter, supplemental logging must be enabled for any database that does not log before values by default.
GETBEFORECOLS
overrides COMPRESSUPDATES
and COMPRESSDELETES
if used in the same parameter file.
This parameter is valid for all databases except DB2. For DB2 on all platforms that are supported by Oracle GoldenGate, use the GETUPDATEBEFORES
parameter instead of GETBEFORECOLS.
Syntax
GETBEFORECOLS( {ON UPDATE | ON DELETE} {ALL | KEY | KEYINCLUDING (col[,...]) | KEYANDMOD | ALLEXCLUDING (col[,...])} [,...] )
{ON UPDATE | ON DELETE}
Specifies whether the before image of the specified columns should be captured for updates or deletes. You can use ON UPDATE
only, ON DELETE
only, or both. If using both, specify them within the same GETBEFORECOLS
clause. See the example for how to use both.
{ALL | KEY | KEYINCLUDING (
col
[,...]) | KEYANDMOD | ALLEXCLUDING (
col
[,...])}
Specifies the columns for which a before image is captured.
ALL
Captures a before image of all supported data type columns in the target table, including the primary key; all unsupported columns are skipped and logged in the Extract or Replicat parameter file as an information message. This imposes the highest processing load for Extract, but allows conflict-detection comparisons to be performed using all columns for maximum accuracy.
KEY
Captures the before image only for the primary key. This is the fastest option, but does not permit the most accurate conflict detection, because keys can match while non-key columns differ. KEY
is the default.
KEYINCLUDING
Captures the before image of the primary key and also the specified column or columns. This is a reasonable compromise between speed and detection accuracy.
KEYANDMOD
Use this option as an extension of the key option for both Extract and Replicat. For update DMLs on the source, Extract logs the key and modified columns. Replicat on the target will use the KEY
and MODIFIED
columns during conflict detection in a WHERE
clause. With Oracle databases, the modified column is always used for conflict detection by default and this parameter makes it explicit.
ALLEXCLUDING
Captures the before image of all columns except the specified columns. For tables with numerous columns, ALLEXCLUDING
may be more efficient than KEYINCLUDING
. Do not exclude key columns.
Example
In the following example, the before images for the key column(s) plus the name
, address
, and salary
are always written to the trail file on update and delete operations.
TABLE src, GETBEFORECOLS ( ON UPDATE KEYINCLUDING (name, address, salary), ON DELETE KEYINCLUDING (name, address, salary));
HANDLECOLLISIONS | NOHANDLECOLLISIONS
HANDLECOLLISIONS
and NOHANDLECOLLISIONS
are valid for MAP
.
Use HANDLECOLLISIONS
and NOHANDLECOLLISIONS
to control whether or not Oracle GoldenGate reconciles the results of an initial load with replicated transactional changes that are made to the same tables. When Oracle GoldenGate applies replicated changes after the load is finished, HANDLECOLLISIONS
causes Replicat to overwrite duplicate records in the target tables and provides alternate handling of errors for missing records.
HANDLECOLLISIONS
and NOHANDLECOLLISIONS
can be used globally for all MAP
statements in the parameter file or as an ON/OFF
switch for groups of tables specified with MAP
statements, and they can be used within a MAP
statement. When used in a MAP
statement, they override the global specifications.
See "HANDLECOLLISIONS | NOHANDLECOLLISIONS" for syntax and usage.
INSERTALLRECORDS
INSERTALLRECORDS
is valid for MAP
.
Use the INSERTALLRECORDS
parameter to convert all mapped operations to INSERT
operations on the target. INSERTALLRECORDS
can be used at the root level of the parameter file, within a MAP
statement, and within a MAPEXCEPTION
clause of a MAP
statement.
See "INSERTALLRECORDS" for syntax and usage.
INSERTAPPEND | NOINSERTAPPEND
INSERTAPPEND
is valid for MAP
.
Use the INSERTAPPEND
and NOINSERTAPPEND
parameters to control whether or not Replicat uses an APPEND
hint when it applies INSERT
operations to Oracle target tables. These parameters are valid only for Oracle databases.
See "INSERTAPPEND | NOINSERTAPPEND" for syntax and usage.
KEYCOLS (
columns
)
KEYCOLS
is valid for TABLE
and MAP
.
Use KEYCOLS
to define one or more columns of the target table as unique. The primary use for KEYCOLS
is to define a substitute primary key when a primary key or an appropriate unique index is not available for the table. You can also use KEYCOLS
to specify additional columns to use in the row identifier that Replicat uses. Without the availability of a key or KEYCOLS
clause, Replicat uses all columns of the table to build its WHERE
clause, essentially performing a full table scan.
The columns of a key rendered by KEYCOLS
must uniquely identify a row, and they must match the columns that are used as a key on the source table. The source table must contain at least as many key or index columns as the KEYCOLS
key specified for the target table. Otherwise, in the event of an update to the source key or index columns, Replicat will not have the before images for the extra target KEYCOL
columns.
When defining a substitute key with KEYCOLS
, observe the following guidelines:
If the source and target tables both lack keys or unique indexes, use a KEYCOLS
clause in the TABLE
parameter and in the MAP
parameter, and specify matching sets of columns in each KEYCOLS
clause.
If one of the tables lacks a key or unique index, use KEYCOLS for that table, and specify columns that match the actual key or index columns of the other table. If a matching set cannot be defined this way, use KEYCOLS for both the source table (TABLE parameter) and the target table (MAP parameter), and specify matching sets of columns that contain unique values. KEYCOLS overrides an existing key or unique index.
If the target table has a larger key than the source table does (or if it has more unique-index columns), use KEYCOLS
in the TABLE
statement to specify the source columns that match the extra target columns. You must also include the actual source key or index columns in this KEYCOLS
clause. Using KEYCOLS
in this way ensures that before images are available to Replicat in case the non-key columns are updated on the source.
When using KEYCOLS
, make certain that the specified columns are configured for logging so that they are available to Replicat in the trail records. For an Oracle database, you can enable the logging by using the COLS
option of the ADD TRANDATA
command.
On the target tables, create a unique index on the KEYCOLS
-defined key columns. An index improves the speed with which Oracle GoldenGate locates the target rows that it needs to process.
Do not use KEYCOLS
for tables being processed in pass-through mode by a data-pump Extract group.
KEYCOLS
when using Parallel Replicat or Integrated Replicat:
When using KEYCOLS
with ALLOWDUPTARGETMAP
, the key columns must be the same for each mapped table. For example, if you map HR.EMP
to HR.EMP_TARGET
and HR.EMP_BACKUP
and if you specify KEYCOLS
, they must be the same for both HR.EMP_TARGET
and HR.EMP_BACKUP
.
When using KEYCOLS
to map from multiple source tables to the same target table, the MAP
statements must use the same set of KEYCOLS
.
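As a sketch of the ALLOWDUPTARGETMAP case described above (the emp_id key column is hypothetical), each duplicate target mapping specifies the same KEYCOLS:

```
ALLOWDUPTARGETMAP
MAP hr.emp, TARGET hr.emp_target, KEYCOLS (emp_id);
MAP hr.emp, TARGET hr.emp_backup, KEYCOLS (emp_id);
```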
Syntax
KEYCOLS (column [, ... ])
column
Defines a column to be used as a substitute primary key. If a primary or unique key exists, those columns must be included in the KEYCOLS
specification. To specify multiple columns, create a comma-delimited list as in:
KEYCOLS (id, name)
The following column types are not supported in KEYCOLS:
Oracle: Virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
SQL Server, DB2 LUW, DB2 z/OS, MySQL, and Teradata: Columns that contain a timestamp or non-materialized computed column, and any columns that are excluded from the Oracle GoldenGate configuration. For SQL Server, Oracle GoldenGate requires that the total length of row data in target tables without a primary key be less than 8000 bytes.
Example
TABLE hr.emp, KEYCOLS (id, first, last, birthdate);
MAPEXCEPTION (
exceptions_mapping
)
MAPEXCEPTION
is valid for MAP
.
Use MAPEXCEPTION
as part of an exceptions MAP
statement intended for error handling. MAPEXCEPTION
maps failed operations that are flagged as exceptions by the REPERROR
parameter to an exceptions table. Replicat writes the values of these operations along with other information to the exceptions table.
You can use MAPEXCEPTION
within the same MAP
statement that includes the source-target table mapping and other standard MAP
options. The source and target table names can include wildcards.
When using MAPEXCEPTION
, use a REPERROR
statement with the EXCEPTION
option either within the same MAP
statement or at the root of the Replicat parameter file. See "EXCEPTIONSONLY" and "REPERROR".
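A minimal sketch of this pairing (hypothetical table names): REPERROR flags the failed operations as exceptions, and MAPEXCEPTION routes them to the exceptions table:

```
MAP src.orders, TARGET trg.orders,
REPERROR (DEFAULT, EXCEPTION),
MAPEXCEPTION (TARGET trg.orders_exc, INSERTALLRECORDS);
```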
Getting More Information About Exceptions Handling
For more information about configuring exceptions handling with an exceptions MAP
statement, see Administering Oracle GoldenGate.
Syntax
MAPEXCEPTION (TARGET exceptions_table, INSERTALLRECORDS [, exception_MAP_options])
TARGET
exceptions_table
The fully qualified name of the exceptions table. Standard Oracle GoldenGate rules for object names apply to the name of the exceptions table. See Administering Oracle GoldenGate.
exception_MAP_options
Any valid options of the MAP
parameter that you want to apply to the exceptions handling.
INSERTALLRECORDS
Applies all exceptions to the exceptions table as INSERT
operations. This parameter is required when using MAPEXCEPTION
.
Example
This is an example of how to use MAPEXCEPTION
for exceptions mapping. The MAP
and TARGET
clauses contain wildcard source and target table names. Exceptions that occur when processing any table with a name beginning with TRX
will be captured to the fin.trxexceptions
table using the specified mapping.
MAP src.trx*, TARGET trg.*, MAPEXCEPTION (TARGET fin.trxexceptions, INSERTALLRECORDS, COLMAP (USEDEFAULTS, ACCT_NO = ACCT_NO, OPTYPE = @GETENV ('LASTERR', 'OPTYPE'), DBERR = @GETENV ('LASTERR', 'DBERRNUM'), DBERRMSG = @GETENV ('LASTERR', 'DBERRMSG') ) );
MAPALLCOLUMNS | NOMAPALLCOLUMNS
MAPALLCOLUMNS
and NOMAPALLCOLUMNS
are valid for MAP
.
Use MAPALLCOLUMNS
to detect unmapped (non-key) columns. When this option is specified, Extract or Replicat checks whether all source columns are directly mapped to the target without the column mapping function. If any source column is not mapped, the Extract or Replicat process abends.
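For example, a sketch (hypothetical table names) that makes Replicat abend if any source column of the mapped table is not directly mapped to the target:

```
MAP hr.emp, TARGET hr.emp, MAPALLCOLUMNS;
```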
MAPINVISIBLECOLUMNS | NOMAPINVISIBLECOLUMNS
MAPINVISIBLECOLUMNS
and NOMAPINVISIBLECOLUMNS
are valid for MAP
.
Use MAPINVISIBLECOLUMNS
and NOMAPINVISIBLECOLUMNS
to control whether or not Replicat includes invisible columns in Oracle target tables for default column mapping. Invisible columns in Oracle target tables that use explicit column mapping are always mapped, so they do not require this option.
MAPINVISIBLECOLUMNS
and NOMAPINVISIBLECOLUMNS
can be used in two different ways. When specified at a global level, one parameter remains in effect for all subsequent MAP statements, until the other parameter is specified. When used within a MAP statement, they override the global specifications.
See “MAPINVISIBLECOLUMNS | NOMAPINVISIBLECOLUMNS” for syntax and usage.
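As a sketch of both usage styles (hypothetical table names), the global setting below applies to the first mapping, and the second mapping overrides it:

```
MAPINVISIBLECOLUMNS
MAP sales.orders, TARGET sales.orders;
MAP sales.items, TARGET sales.items, NOMAPINVISIBLECOLUMNS;
```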
REPERROR (
error
,
response
)
REPERROR
is valid for MAP
.
Use REPERROR
to specify an error and a response that together control how Replicat responds to the error when executing the MAP
statement. You can use REPERROR
at the MAP
level to override and supplement global error handling rules set with the REPERROR
parameter at the root level of the parameter file. Multiple REPERROR
statements can be applied to the same MAP
statement to enable automatic, comprehensive management of errors and interruption-free replication processing.
For syntax and descriptions, see "REPERROR".
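For instance, in this sketch (hypothetical table names), the MAP-level REPERROR overrides the global rule so that errors on this one mapping are discarded instead of abending Replicat:

```
REPERROR (DEFAULT, ABEND)
MAP src.audit_log, TARGET trg.audit_log, REPERROR (DEFAULT, DISCARD);
```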
RESOLVECONFLICT (
conflict_resolution_specification
)
RESOLVECONFLICT
is valid for MAP
.
Use RESOLVECONFLICT
in a bi-directional or multi-master configuration to specify how Replicat handles conflicts on operations made to the tables in the MAP
statement.
Multiple resolutions can be specified for the same conflict type and are executed in the order listed in RESOLVECONFLICT
. Multiple resolutions are limited to INSERTROWEXISTS
and UPDATEROWEXISTS
conflicts only.
RESOLVECONFLICT
can be used multiple times in a MAP
statement to specify different resolutions for different conflict types.
The following are the data types and platforms that are supported by RESOLVECONFLICT
.
RESOLVECONFLICT
supports all databases that are supported by Oracle GoldenGate for Windows and UNIX.
To use RESOLVECONFLICT
, the database must reside on a Windows, Linux, or UNIX system (including those running on NonStop OSS).
Conflict detection and resolution (CDR) supports data types that can be compared with simple SQL and without explicit conversion. See the individual parameter options for details.
Do not use RESOLVECONFLICT
for columns that contain LOBs, abstract data types (ADT), or user-defined types (UDT).
Getting More Information About Configuring Conflict Resolution
See Administering Oracle GoldenGate for detailed instructions on configuring bi-directional replication and conflict resolution, including use cases and examples.
Syntax
RESOLVECONFLICT ( {INSERTROWEXISTS | UPDATEROWEXISTS | UPDATEROWMISSING | DELETEROWEXISTS | DELETEROWMISSING} ( {DEFAULT | resolution_name}, {USEMAX (resolution_column) | USEMAXEQ (resolution_column) | USEMIN (resolution_column) | USEMINEQ (resolution_column) | USEDELTA | DISCARD | OVERWRITE | IGNORE} ) [, COLS (column[,...])] )
INSERTROWEXISTS | UPDATEROWEXISTS | UPDATEROWMISSING |
DELETEROWEXISTS | DELETEROWMISSING
The type of conflict that this resolution handles.
INSERTROWEXISTS
An inserted row violates a uniqueness constraint on the target.
UPDATEROWEXISTS
An updated row exists on the target, but one or more columns have a before image in the trail that is different from the current value in the database.
UPDATEROWMISSING
An updated row does not exist in the target.
DELETEROWEXISTS
A deleted row exists in the target, but one or more columns have a before image in the trail that is different from the current value in the database.
DELETEROWMISSING
A deleted row does not exist in the target.
DEFAULT |
resolution_name
DEFAULT
The default column group. The resolution that is associated with the DEFAULT
column group is used for all columns that are not in an explicitly named column group. You must define a DEFAULT
column group.
resolution_name
A name for a specific column group that is linked to a specific resolution type. Supply a name that identifies the resolution type. Valid values are alphanumeric characters; underscores are permitted, but avoid spaces and other special characters. For example:
delta_res_method
Use either a named resolution or DEFAULT
, but not both.
USEMAX (
resolution_column
) | USEMAXEQ (
resolution_column
) | USEMIN (
resolution_column
) | USEMINEQ (
resolution_column
) | USEDELTA |
DISCARD | OVERWRITE | IGNORE
The conflict-handler logic that is used to resolve the conflict. Valid resolutions are:
USEMAX
If the value of resolution_column
in the trail record is greater than the value of the column in the database, the appropriate action is performed.
(INSERTROWEXISTS
conflict) Apply the trail record, but change the insert to an update to avoid a uniqueness violation, and overwrite the existing values.
(UPDATEROWEXISTS
conflict) Apply the trail record as an update.
USEMAXEQ
If the value of resolution_column
in the trail record is greater than or equal to the value of the column in the database, the appropriate action is performed.
(INSERTROWEXISTS
conflict) Apply the trail record, but change the insert to an update to avoid a uniqueness violation, and overwrite the existing values.
(UPDATEROWEXISTS
conflict) Apply the trail record as an update.
USEMIN
If the value of resolution_column
in the trail record is less than the value of the column in the database, the appropriate action is performed:
(INSERTROWEXISTS
conflict) Apply the trail record, but change the insert to an update to avoid a uniqueness violation, and overwrite the existing values.
(UPDATEROWEXISTS
conflict) Apply the update from the trail record.
USEMINEQ
If the value of resolution_column
in the trail record is less than or equal to the value of the column in the database, the appropriate action is performed:
(INSERTROWEXISTS
conflict) Apply the trail record, but change the insert to an update to avoid a uniqueness violation, and overwrite the existing values.
(UPDATEROWEXISTS
conflict) Apply the update from the trail record.
resolution_column
The name of a NOT NULL
column that serves as the resolution column. This column must be part of the column group that is associated with this resolution. The value of the resolution column compared to the current value in the target database determines how a resolution should be applied. The after image of the resolution column is used for the comparison, if available; otherwise the before image value is used. Use a column that can be compared through simple SQL:
NUMERIC
DATE
TIMESTAMP
CHAR/NCHAR
VARCHAR/NVARCHAR
To use a latest-timestamp resolution, use a timestamp column as the resolution_column
and set the timestamp column to the current time when a row is inserted or updated. If possible, define the resolution column with the SYSTIMESTAMP
data type, which supports fractional seconds. When comparisons are performed with sub-second granularity, there is little need for tie-breaking conflict handlers that resolve cases where the value of the resolution column is identical in both trail and target. If you ensure that the value of the timestamp column can only increase or only decrease (depending on the resolution), then USEMAX
and USEMIN
do not lead to data divergence.
Note:
Do not use a primary key column as the resolution column in a USEMAX
statement for the UPDATEROWEXISTS
conflict. Otherwise, Replicat abends with an error similar to the following:
2013-04-04 10:18:38 ERROR OGG-01922 Missing RESOLUTION COLUMN NAME while mapping to target table "FIN"."ACCT".
USEDELTA
(UPDATEROWEXISTS
conflict only) Add the difference between the before and after values in the trail record to the current value of the column in the target database. If any of the values is NULL
, an error is raised. Base USEDELTA
on columns that contain NUMERIC
data types. USEDELTA
is useful in a multi-node configuration when a row is getting simultaneously updated on multiple nodes. It propagates only the difference in the column values to the other nodes, so that all nodes become synchronized.
DISCARD
(Valid for all conflict types) Retain the current value in the target database, and write the data in the trail record to the discard file.
Use DISCARD
with caution, because it can lead to data divergence.
OVERWRITE
(Valid for all conflict types except DELETEROWMISSING
) Apply the trail record as follows:
(INSERTROWEXISTS
conflict) Apply the trail record but change the insert to an update to avoid a uniqueness violation, and overwrite the existing values.
(UPDATEROWEXISTS
conflict) Apply the update from the trail record.
(UPDATEROWMISSING
conflict) Apply the trail record but convert the missing UPDATE
to an INSERT
by using the modified columns from the after image and the unmodified columns from the before image. To convert an update to an insert, the before image of all columns of the row must be available in the trail. Use supplemental logging if the database does not log before images by default, and specify ALL
for the Extract GETBEFORECOLS
parameter.
(DELETEROWEXISTS
conflict) Apply the delete from the trail record, but use only the primary key columns in the WHERE
clause.
Use OVERWRITE
with caution, because it can lead to data divergence.
IGNORE
(Valid for all conflict types) Retain the current value in the target database and ignore the trail record: it is not applied to the target table, nor is it written to a discard file.
COLS (
column
[, ...])
A non-default column group. This is a list of columns in the target database (after mapping) that are linked to, and operated upon by, a specific resolution type. If no column group is specified for a conflict, then all columns are affected by the resolution that is specified for the given conflict.
Alternatively, you can specify a DEFAULT
column group, which includes all columns that are not listed in another column group. See the DEFAULT
option.
You can specify multiple column groups, each with a different resolution. For example, you could use OVERWRITE
for col2
and col3
, and you could use USEDELTA
for col4
. No column in any group can be in any other group. Conflicts for columns in different column groups are resolved separately according to the specified resolution, and in the order listed.
Column groups work as follows:
For INSERTROWEXISTS
and UPDATEROWEXISTS
conflicts, you can use different column groups to specify more than one of these conflict types and resolutions per table. Conflicts for columns in different column groups are resolved separately, according to the conflict resolution method specified for the column group.
For UPDATEROWMISSING
, DELETEROWEXISTS
, and DELETEROWMISSING
, you can use only one column group, and all columns of the table must be in this column group (considered the default column group).
Examples
The following examples are explained in detail in Administering Oracle GoldenGate.
This example demonstrates all conflict types with USEMAX
, OVERWRITE
, DISCARD
.
MAP fin.src, TARGET fin.tgt, COMPARECOLS (ON UPDATE ALL, ON DELETE ALL), RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_time))), RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMAX (last_mod_time))), RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)), RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)), RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));
This example demonstrates UPDATEROWEXISTS
with USEDELTA
and USEMAX
.
MAP fin.src, TARGET fin.tgt, COMPARECOLS (ON UPDATE KEYINCLUDING (address, phone, salary, last_mod_time), ON DELETE KEYINCLUDING (address, phone, salary, last_mod_time)), RESOLVECONFLICT ( UPDATEROWEXISTS, (delta_res_method, USEDELTA, COLS (salary)), (DEFAULT, USEMAX (last_mod_time)));
This example demonstrates UPDATEROWEXISTS
with USEDELTA
, USEMAX
, and IGNORE
.
MAP fin.src, TARGET fin.tgt, COMPARECOLS (ON UPDATE ALLEXCLUDING (comment)), RESOLVECONFLICT ( UPDATEROWEXISTS, (delta_res_method, USEDELTA, COLS (salary, balance)), (max_res_method, USEMAX (last_mod_time), COLS (address, last_mod_time)), (DEFAULT, IGNORE));
SQLEXEC (
SQL_specification
)
SQLEXEC
is valid for TABLE
and MAP
.
Use SQLEXEC
to execute a SQL stored procedure or query from within a MAP
statement during Oracle GoldenGate processing. SQLEXEC
enables Oracle GoldenGate to communicate directly with the database to perform any work that is supported by the database. This work can be part of the synchronization process, such as retrieving values for column conversion, or it can be independent of extracting or replicating data, such as executing a stored procedure that executes an action within the database.
See "SQLEXEC" for syntax and usage.
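As a sketch (the procedure, parameter, and column names are hypothetical), SQLEXEC can execute a lookup procedure whose output is then used by COLMAP to populate a target column:

```
MAP hr.emp, TARGET hr.emp_full,
SQLEXEC (SPNAME lookup_dept, PARAMS (code = dept_id)),
COLMAP (USEDEFAULTS, dept_name = @GETVAL (lookup_dept.dept_name));
```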
SQLPREDICATE 'WHERE
where_clause
'
SQLPREDICATE
is valid for TABLE
.
Use SQLPREDICATE
to include a conventional SQL WHERE
clause in the SELECT
statement that Extract uses when selecting data from a table in preparation for an initial load. SQLPREDICATE
forces the records returned by the selection to be ordered by the key values.
SQLPREDICATE
is a faster selection method for initial loads than the WHERE
or FILTER
options. It affects the SQL statement directly and does not require Extract to fetch all records before filtering them.
For Oracle tables, SQLPREDICATE
reduces the amount of data that is stored in the undo segment, which can reduce the incidence of snapshot-too-old errors. This is useful when loading very large tables.
By using a SQLPREDICATE
clause, you can partition the rows of a large table among two or more parallel Extract processes. This configuration enables you to take advantage of parallel delivery load processing as well.
SQLPREDICATE
also enables you to select data based on a timestamp or other criteria to filter the rows that are extracted and loaded to the target table. SQLPREDICATE
can be used for ORDER BY
clauses or any other type of selection clause.
Make certain that the WHERE
clause contains columns that are part of a key or index. Otherwise, Extract performs a full table scan, which reduces the efficiency of the SELECT
statement.
SQLPREDICATE
is valid for Oracle, DB2 LUW, DB2 on z/OS, SQL Server, and Teradata databases. Do not use SQLPREDICATE
for an Extract group that is configured to synchronize transactional changes. It is only appropriate for an initial-load Extract, because it requires a SELECT
statement that selects records directly from tables.
Syntax
TABLE source_table, SQLPREDICATE 'WHERE where_clause';
Example
TABLE hr.emp, SQLPREDICATE 'WHERE state = ''CO'' and city = ''DENVER''';
THREAD (
thread_ID
)
THREAD
is valid for MAP
. This option is valid when Replicat is in coordinated mode.
Use THREAD
to specify that all of the object or objects in the same MAP
statement are to be processed by the specified Replicat thread. The specified thread handles filtering, manipulation, delivery to the target, error handling, and other work that is configured for those objects. Wildcards can be used in the TARGET
clause when THREAD
is used.
All tables that have referential dependencies among one another must be mapped in the same thread. For example, if tables scott.cust
and scott.ord
have a foreign-key relationship, the following is a possible mapping:
MAP scott.cust, TARGET scott.cust, THREAD (5); MAP scott.ord, TARGET scott.ord, THREAD (5);
The thread with the lowest thread ID always processes barrier transactions if the THREAD
or THREADRANGE
option is omitted. Additionally, any work that is not explicitly assigned to a thread is processed through this thread. For example, if there are threads with IDs ranging from 1 to 10, barrier and non-assigned transactions are performed by thread 1.
To process a MAP
statement among multiple threads, see THREADRANGE (thread_range, column_list). THREAD
and THREADRANGE
are mutually exclusive options. Do not use them together in the same MAP
statement.
For more information about Replicat modes, see Deciding Which Apply Method to Use in Using Oracle GoldenGate for Oracle Database and "BATCHSQL".
Syntax
THREAD (thread_ID)
thread_ID
A numerical identifier for the thread that will process this MAP
statement. Valid values are 1 through the value that was specified with the MAXTHREADS
option of the ADD REPLICAT
command that created this group. You can use the INFO REPLICAT
command to verify the maximum number of threads allowed for a Replicat group. When specifying thread IDs, the following must be true:
The total number of threads specified across all MAP
statements of a Replicat group cannot exceed the value of MAXTHREADS
.
No single thread_ID
value in the Replicat group can be higher than the value of MAXTHREADS
. For example, if MAXTHREADS
is 25, there cannot be a thread_ID
of 26 or higher.
If MAXTHREADS
was not used, the default maximum number of threads is 25.
Examples
The following examples show some ways to use the THREAD
option.
In this example, thread 1 processes table cust
.
MAP scott.cust, TARGET scott.cust, THREAD (1);
In this example, thread 1 processes all of the tables in the scott
schema.
MAP scott.*, TARGET scott.*, THREAD (1);
In this example, the orders
table is partitioned among two MAP
statements through the use of FILTER (filter_clause)
and the @RANGE
function. For more information about @RANGE
, see "@RANGE".
MAP scott.orders, TARGET scott.orders, FILTER (@RANGE (1, 2, OID)), THREAD (1); MAP scott.orders, TARGET scott.orders, FILTER (@RANGE (2, 2, OID)), THREAD (2);
THREADRANGE (
thread_range
,
column_list
)
THREADRANGE
is valid for MAP
. This option is valid when Replicat is in coordinated mode.
Use THREADRANGE
to specify that the workload of the target table is to be partitioned evenly among a range of Replicat threads, based on the value of a specified column or columns. For example, if the partitioning is based on the value of a column named ID
, and the THREADRANGE
value is 1-3, then thread 1 processes rows with ID
values from 1 through 10, thread 2 processes rows with ID
values from 11 through 20, and thread 3 processes rows with ID
values from 21 through 30. The partitioning may not be as even as shown in the preceding example, depending on the initial calculation of the workload, but it is coordinated so that the same row is always processed by the same thread. Each specified thread handles filtering, manipulation, error handling, delivery to the target, and other work for its range of rows.
Partitioning a table across a range of threads may improve apply performance for very large tables or tables that frequently incur long-running transactions or heavy volume, but can be used in other cases, as well. You can process more than one table through the same range of threads.
A wildcarded TARGET
clause can be used when THREADRANGE
is used if the optional column list is omitted. When using a column list, use separate explicit MAP
statements for each table that is using the same thread range.
To process a MAP
statement with one specific thread, see THREAD (thread_ID). THREAD
and THREADRANGE
are mutually exclusive options. Do not use them together in the same MAP
statement.
Do not specify tables that have referential dependencies among one another in a thread range. Use the THREAD
option and process all of those tables with the same thread.
Do not use THREADRANGE
to partition sequences. If coordination is required, for example when a sequence is part of a SQLEXEC
operation, partition the sequence work to one thread with the THREAD
option.
The thread with the lowest thread ID always processes barrier transactions if the THREAD
or THREADRANGE
option is omitted. Additionally, any work that is not explicitly assigned to a thread is processed through this thread. For example, if there are threads with IDs ranging from 1 to 10, barrier and non-assigned transactions are performed by thread 1.
Note:
The columns specified in a list of columns must exist in the trail file. You can control this using KEYCOLS
in the Extract to include this column, or by using FETCHCOLS
in the Extract for the column, or by ensuring that the column is part of the supplemental log group and then using LOGALLSUPCOLS
.
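For example, a sketch of an Extract TABLE statement (hypothetical table and column names) that fetches the partitioning column so it is guaranteed to be present in the trail:

```
TABLE scott.orders, FETCHCOLS (OID);
```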
For more information about Replicat modes, see "Deciding Which Apply Method to Use" in Using Oracle GoldenGate for Oracle Database and "BATCHSQL".
Syntax
THREADRANGE (lowID-highID, [column[, column][, ...]])
lowID
The lowest thread identifier of this range. Valid values are 1 through 500.
highID
The highest thread identifier of this range, which must be a higher number than lowID
. Valid values are lowID
+1
through 500. The number of threads in the range cannot exceed the value that was specified with the MAXTHREADS
option of the ADD REPLICAT
command. If MAXTHREADS
was not used, the default maximum number of threads is 25.
[
column
[,
column
][, ...]]
Optional. Specifies one or more unique columns on which to base the row partitioning. To specify multiple columns, use a comma-delimited list, such as col1, col2, col3
. When this option is omitted, the partitioning among the threads is based by default on the following columns, in the order of preference shown:
Primary key
KEYCOLS
clause in the same MAP
statement
All of the columns of the table that are supported by Oracle GoldenGate for use as a key.
Example
The following example divides the orders
and order_lines
tables between the same two threads, based on the value of the OID
column.
MAP scott.orders, TARGET scott.orders, THREADRANGE (1-2, OID); MAP scott.order_lines, TARGET scott.order_lines, THREADRANGE (1-2, OID);
TOKENS (
token_definition
)
TOKENS
is valid for TABLE
.
Use TOKENS
to define a user token and associate it with data. Tokens enable you to extract and store data within the user token area of a trail record header. Token data can be retrieved and used in many ways to customize the way that Oracle GoldenGate delivers data. For example, you can use token data in column maps, stored procedures called by SQLEXEC
, or macros.
To use the defined token data in target tables, use the @TOKEN
column-conversion function in the COLMAP
clause of a Replicat MAP
statement. The @TOKEN
function maps the name of a token to a target column.
Do not use this option for tables being processed in pass-through mode by a data-pump Extract group.
The character set of token data is not converted. The token must be in the character set of the source database for Extract and in the character set of the target database for Replicat.
Do not use this option for source tables that are encoded as EBCDIC on a z/OS system if the target tables are not EBCDIC.
For more information about using tokens, see Administering Oracle GoldenGate.
Syntax
TOKENS (token_name = token_data [, ...])
token_name
A name of your choice for the token. It can be any number of valid characters and is not case-sensitive. Multi-byte names are not supported.
token_data
Any valid character string of up to 2000 bytes. The data can be either a literal that is enclosed within single quotes (or double quotes if NOUSEANSISQLQUOTES
is in use) or the result of an Oracle GoldenGate column-conversion function.
Example
The following creates tokens named TK-OSUSER
, TK-GROUP
, and TK-HOST
and maps them to token data obtained with the @GETENV
function.
TABLE ora.oratest, TOKENS ( TK-OSUSER = @GETENV ('GGENVIRONMENT' , 'OSUSERNAME'), TK-GROUP = @GETENV ('GGENVIRONMENT' , 'GROUPNAME'), TK-HOST = @GETENV ('GGENVIRONMENT' , 'HOSTNAME'));
TRIMSPACES | NOTRIMSPACES
TRIMSPACES
and NOTRIMSPACES
are valid for TABLE
and MAP
.
Use TRIMSPACES
and NOTRIMSPACES
at the root level of a parameter file or within a TABLE
or MAP
statement to control whether or not trailing spaces in a source CHAR
column are truncated when applied to a target CHAR
or VARCHAR
column. The default is TRIMSPACES
.
See "TRIMSPACES | NOTRIMSPACES" for syntax and usage.
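For example, a sketch (hypothetical table names) that preserves trailing spaces for one mapping while the default trimming applies elsewhere:

```
MAP src.cust, TARGET trg.cust, NOTRIMSPACES;
```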
TRIMVARSPACES | NOTRIMVARSPACES
TRIMVARSPACES
and NOTRIMVARSPACES
are valid for TABLE
and MAP
.
Use TRIMVARSPACES
and NOTRIMVARSPACES
at the root level of a parameter file or within a TABLE
or MAP
statement to control whether or not trailing spaces in a source VARCHAR
column are truncated when applied to a target CHAR
or VARCHAR
column. The default is NOTRIMVARSPACES
.
See "TRIMVARSPACES | NOTRIMVARSPACES" for syntax and usage.
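For example, a sketch (hypothetical table names) that enables trimming of trailing spaces from source VARCHAR columns for all subsequent mappings:

```
TRIMVARSPACES
MAP src.notes, TARGET trg.notes;
```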
WHERE (
clause
)
WHERE
is valid for TABLE
and MAP
.
Use WHERE
to select records based on a conditional statement. WHERE
does not support the following:
Columns that have a multi-byte character set or a character set that is incompatible with the character set of the local operating system.
The evaluation of the before image of a primary key column in the conditional statement as part of a primary key update operation.
Enclose literals in single quotes. Specify case-sensitive column names as they are stored in the database, and enclose them in double quotes if the database requires quotes to enforce case-sensitivity (such as Oracle).
Getting More Information about Record Filtering
See Administering Oracle GoldenGate for more information about WHERE
and other filtering options.
Syntax
WHERE (clause)
clause
Selects records based on a condition, such as:
WHERE (branch = 'NY')
Table 6-22 shows permissible WHERE
operators.
Table 6-22 Permissible WHERE Operators
Operator | Example
---|---
Column names | PRODUCT_AMT, "Product_Amt"
Numeric values | -123, 5500.123
Literal strings enclosed in single quotes | 'AUTO', 'Ca'
Column tests | @NULL, @PRESENT, @ABSENT
Comparison operators | =, <>, >, <, >=, <=
Conjunctive operators | AND, OR
Grouping parentheses | Use open and close parentheses for logical grouping of multiple elements.
Example
The following WHERE
example returns all records when the AMOUNT
column is over 10,000 and does not cause a record to be discarded when AMOUNT
is absent.
WHERE (amount = @PRESENT AND amount > 10000)
PARTITIONOBJID
Valid for Integrated Extract.
PARTITIONOBJID
is used to specify the object IDs of the partitions to be captured for partitioned tables. PARTITIONOBJID is different from ALTID because PARTITIONOBJID applies to Integrated Extract, while ALTID applies to Classic Extract. For an index-organized table (with or without an overflow area), the index segment object ID should be used for partition-level filtering. In this case, PARTITIONOBJID in the MAP or TABLE statement specifies the index segment object IDs of the partitions to be extracted.
Syntax
MAP/TABLE [container.]schema.table PARTITIONOBJID ptn_object_ID [, ptn_object_ID]
The following restrictions apply:
Wildcarded table names are not allowed for a MAP/TABLE
parameter that contains PARTITIONOBJID
.
DDL capture and replication are not supported when using PARTITIONOBJID.
Syntax for IO table TABLE
statement:
TABLE [container.]schema.table PARTITIONOBJID index_segment_object_ID [, index_segment_object_ID]
Syntax for IO table MAP
statement:
MAP [container.]schema.table PARTITIONOBJID index_segment_object_ID [, index_segment_object_ID]
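A sketch of partition-level capture (the schema, table, and object IDs are hypothetical):

```
TABLE sales.orders PARTITIONOBJID 74232, 74233;
```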