12 Configuring DDL Support

This chapter contains information to help you understand and configure DDL support in Oracle GoldenGate.

Topics:

Prerequisites for Configuring DDL

Extract can capture DDL operations from a source Oracle Database through the use of a special DDL trigger or natively through the Oracle logmining server.

Which of these methods you can use depends on the Extract capture mode and the version of the source Oracle Database. This section describes the available support in each capture mode. See Choosing Capture and Apply Modes.

Support for DDL Capture in Integrated Capture Mode

The integrated capture mode of Extract supports two DDL capture methods:

  • Oracle 11.2.0.4 or later: Oracle Databases that have the database COMPATIBLE parameter set to 11.2.0.4 or higher support DDL capture through the database logmining server. This method is known as native DDL capture (also known as triggerless DDL capture). No trigger or installed supportive objects are required. Native DDL capture is the only supported method for capturing DDL from a multitenant container database. For downstream mining, the source database must also have database COMPATIBLE set to 11.2.0.4 or higher to support DDL capture through the database logmining server.

  • Versions earlier than 11.2.0.4: Oracle Databases that have the COMPATIBLE parameter set to anything earlier than 11.2.0.4 require the use of the Oracle GoldenGate DDL trigger. To use trigger-based DDL capture, you must install the DDL trigger and supporting database objects before you configure Extract for DDL support.

Support for DDL Capture in Classic Capture Mode

Classic capture mode requires the use of the Oracle GoldenGate DDL trigger to capture DDL from an Oracle Database. Native DDL capture is not supported by classic capture mode.

DDL capture from a multitenant container database is not supported by classic capture mode.

When you are using classic capture mode and replicating a CREATE USER operation with the DDL trigger, the trigger owner and the Extract login user must match; otherwise, a privilege error occurs when the CREATE USER command is replicated.

To use trigger-based DDL capture, you must install the DDL trigger and supporting database objects before you configure Extract for DDL support. See Installing Trigger-Based DDL Capture.

Overview of DDL Synchronization

Oracle GoldenGate supports the synchronization of DDL operations from one database to another.

DDL synchronization can be active when:

  • business applications are actively accessing and updating the source and target objects.

  • Oracle GoldenGate transactional data synchronization is active.

The components that support the replication of DDL and the replication of transactional data changes (DML) are independent of each other. Therefore, you can synchronize:

  • just DDL changes

  • just DML changes

  • both DDL and DML

For a list of supported objects and operations for DDL support for Oracle, see Supported Objects and Operations in Oracle DDL.

Limitations of Oracle GoldenGate DDL Support

This topic contains some limitations of the DDL feature.

For any additional limitations that were found after this documentation was published, see the Release Notes for Oracle GoldenGate.

DDL Statement Length

Oracle GoldenGate measures the length of a DDL statement in bytes, not in characters. The supported length is approximately 4 MB, allowing for some internal overhead that can vary in size depending on the name of the affected object and its DDL type, among other characteristics. If the DDL is longer than the supported size, Extract will issue a warning and ignore the DDL operation.

If Extract is capturing DDL by means of the DDL trigger, the ignored DDL is saved in the marker table. You can capture Oracle DDL statements that are ignored, as well as any other Oracle DDL statement, by using the ddl_ddl2file.sql script, which saves the DDL operation to a text file in the USER_DUMP_DEST directory of Oracle. The script prompts for the following input:

  • The name of the schema that contains the Oracle GoldenGate DDL objects, which is specified in the GLOBALS file.

  • The Oracle GoldenGate marker sequence number, which is recorded in the Extract report file when DDLOPTIONS with the REPORT option is used in the Extract parameter file.

  • A name for the output file.
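A hypothetical SQL*Plus session that runs the script is sketched below. The prompt wording, the schema name ggadmin, the marker sequence number, and the output file name are all illustrative assumptions, not verbatim output:

```
SQL> @ddl_ddl2file
Enter the schema that contains the GoldenGate DDL objects: ggadmin
Enter the marker sequence number: 5337
Enter a name for the output file: ignored_ddl.txt
```

The resulting text file is written to the USER_DUMP_DEST directory of the database.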

Supported Topologies

Oracle GoldenGate supports DDL synchronization only in a like-to-like configuration. The source and target object definitions must be identical.

DDL replication is supported only for Oracle-to-Oracle replication. It is not supported between different databases, such as Oracle to Teradata or SQL Server to Oracle.

Oracle GoldenGate does not support DDL on a standby database.

Oracle GoldenGate supports DDL replication in all supported unidirectional configurations, and in bidirectional configurations between two, and only two, systems. For special considerations in an Oracle active-active configuration, see Propagating DDL in Active-Active (Bidirectional) Configurations.

Filtering, Mapping, and Transformation

DDL operations cannot be transformed by any Oracle GoldenGate process. However, source DDL can be mapped and filtered to a different target object by a primary Extract or a Replicat process. Mapping or filtering of DDL by a data-pump Extract is not permitted, and the DDL is passed as it was received from the primary Extract.

For example, ALTER TABLE TableA is processed by a data pump as ALTER TABLE TableA. It cannot be mapped by that process as ALTER TABLE TableB, regardless of any TABLE statements that specify otherwise.

Renames

RENAME operations on tables are converted to the equivalent ALTER TABLE RENAME so that a schema name can be included in the target DDL statement. For example, RENAME tab1 TO tab2 could be changed to ALTER TABLE schema.tab1 RENAME TO schema.tab2. The conversion is reported in the Replicat process report file.

Interactions Between Fetches from a Table and DDL

Oracle GoldenGate supports some data types by identifying the modified row from the redo stream and then querying the underlying table to fetch the changed columns. For instance, in classic capture, partial updates on LOBs (modifications done via dbms_lob package) are supported by identifying the modified row and the LOB column from the redo log, and then querying for the LOB column value for the row from the base table. A similar technique is employed to support UDT (both in classic and integrated capture).

Note:

Integrated capture only requires fetch for UDT when not using native object support.

Such fetch-based support is implemented by issuing a flashback query to the database based on the SCN (System Change Number) at which the transaction committed. The flashback query feature has certain limitations. Certain DDL operations act as barriers such that flashback queries to get data prior to these DDLs do not succeed. Examples of such DDL are ALTER TABLE MODIFY COLUMN and ALTER TABLE DROP COLUMN.

Thus, in cases where there is Extract capture lag, an intervening DDL may cause fetch requests for data prior to the DDL to fail. In such cases, Extract falls back and fetches the current snapshot of the data for the modified column. There are several limitations to this approach: First, the DDL could have modified the column that Extract needs to fetch (for example, suppose the intervening DDL added a new attribute to the UDT that is being captured). Second, the DDL could have modified one of the columns that Extract uses as a logical row identifier. Third, the table could have been renamed before Extract had a chance to fetch the data.

To prevent fetch-related inconsistencies such as these, take the following precautions while modifying columns.

  1. Pause all DML to the table.

  2. Wait for Extract to finish capturing all remaining redo, and wait for Replicat to finish processing the captured data from trail. To determine whether Replicat is finished, issue the following command in GGSCI until you see a message that there is no more data to process.

    INFO REPLICAT group
    
  3. Execute the DDL on the source.

  4. Resume source DML operations.

Comments in SQL

If a source DDL statement contains a comment in the middle of an object name, that comment will appear at the end of the object name in the target DDL statement. For example:

Source:

CREATE TABLE hr./*comment*/emp ...

Target:

CREATE TABLE hr.emp /*comment*/ ...

This does not affect the integrity of DDL synchronization. Comments in any other area of a DDL statement remain in place when replicated.

Compilation Errors

If a CREATE operation on a trigger, procedure, function, or package results in compilation errors, Oracle GoldenGate executes the DDL operation on the target anyway. Technically, the DDL operations themselves completed successfully and should be propagated to allow dependencies to be executed on the target, for example in recursive procedures.

Interval Partitioning

DDL replication is unaffected by interval partitioning, because the DDL is implicit. However, partitions that are created implicitly receive system-generated names, which Replicat cannot convert to the corresponding partition names on the target. To drop such a partition, use the FOR clause on the source so that the statement does not reference a system-generated name. For example:

alter table t2 drop partition for (20); 

DML or DDL Performed Inside a DDL Trigger

DML or DDL operations performed from within a DDL trigger are not captured.

LogMiner Data Dictionary Maintenance

Oracle recommends that you gather dictionary statistics after Extract is registered (which creates the logminer session) and the LogMiner dictionary is loaded, or after any significant DDL activity on the database.

Configuration Guidelines for DDL Support

The following are guidelines to take into account when configuring Oracle GoldenGate processes to support DDL replication.

Database Privileges

For database privileges that are required for Oracle GoldenGate to support DDL capture and replication, see Establishing Oracle GoldenGate Credentials.

Parallel Processing

If using parallel Extract and/or Replicat processes, keep related DDL and DML together in the same process stream to ensure data integrity. Configure the processes so that:

  • all DDL and DML for any given object are processed by the same Extract group and by the same Replicat group.

  • all objects that are relational to one another are processed by the same process group.

For example, if ReplicatA processes DML for Table1, then it should also process the DDL for Table1. If Table2 has a foreign key to Table1, then its DML and DDL operations also should be processed by ReplicatA.

If an Extract group writes to multiple trails that are read by different Replicat groups, Extract sends all of the DDL to all of the trails. Use each Replicat group to filter the DDL by using the filter options of the DDL parameter in the Replicat parameter file.
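For example, two Replicat parameter files reading trails from the same Extract might each keep only the DDL for their own objects. The schema names fin and hr are illustrative, and GoldenGate parameter-file comments use the -- syntax:

```
-- Replicat group that owns the fin schema
MAP fin.*, TARGET fin.*;
DDL INCLUDE MAPPED OBJNAME fin.*

-- Replicat group that owns the hr schema
MAP hr.*, TARGET hr.*;
DDL INCLUDE MAPPED OBJNAME hr.*
```

Each group applies DML and DDL only for the objects it is responsible for, which preserves the per-object ordering described above.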

Object Names

Oracle GoldenGate preserves the database-defined object name, case, and character set. This support preserves single-byte and multibyte names, symbols, and accent characters at all levels of the database hierarchy.

Object names must be fully qualified with their two-part or three-part names when supplied as input to any parameters that support DDL synchronization. You can use the question mark (?) and asterisk (*) wildcards to specify object names in configuration parameters that support DDL synchronization, but the wildcard specification also must be fully qualified as a two-part or three-part name. For more information about support for wildcards, see Administering Oracle GoldenGate. To process wildcards correctly, the WILDCARDRESOLVE parameter is set to DYNAMIC by default. If WILDCARDRESOLVE is set to anything else, the Oracle GoldenGate process that is processing DDL operations will abend and write the error to the process report.

Data Definitions

Because DDL support requires a like-to-like configuration, the ASSUMETARGETDEFS parameter must be used in the Replicat parameter file. Replicat will abend if objects are configured for DDL support and the SOURCEDEFS parameter is being used. For more information about ASSUMETARGETDEFS, see Reference for Oracle GoldenGate.

For more information about using a definitions file, see Administering Oracle GoldenGate.

Truncates

TRUNCATE statements can be supported as follows:

  • As part of the Oracle GoldenGate full DDL support, which supports TRUNCATE TABLE, ALTER TABLE TRUNCATE PARTITION, and other DDL. This is controlled by the DDL parameter (see Enabling DDL Support.)

  • As standalone TRUNCATE support. This support enables you to replicate TRUNCATE TABLE, but no other DDL. The GETTRUNCATES parameter controls the standalone TRUNCATE feature. For more information, see Reference for Oracle GoldenGate.

To avoid errors from duplicate operations, only one of these features can be active at the same time.
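As a sketch, the two mutually exclusive configurations might appear in a parameter file as follows; use one or the other, never both:

```
-- Full DDL support (includes TRUNCATE TABLE and ALTER TABLE TRUNCATE PARTITION)
DDL INCLUDE MAPPED

-- Standalone TRUNCATE support (TRUNCATE TABLE only, no other DDL)
GETTRUNCATES
```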

Initial Synchronization

To configure DDL replication, start with a target database that is synchronized with the source database. DDL support is compatible with the Replicat initial load method.

Before executing an initial load, disable DDL extraction and replication. DDL processing is controlled by the DDL parameter in the Extract and Replicat parameter files.

After initial synchronization of the source and target data, use all of the source sequence values at least once with NEXTVAL before you run the source applications. You can use a script that selects NEXTVAL from every sequence in the system. This must be done while Extract is running.
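One way to cycle every sequence once is a PL/SQL block like the following sketch. It assumes you have privileges to query DBA_SEQUENCES and to select from each sequence; the schema filter is illustrative:

```sql
-- Sketch: advance every application sequence once with NEXTVAL.
DECLARE
  v NUMBER;
BEGIN
  FOR s IN (SELECT sequence_owner, sequence_name
            FROM dba_sequences
            WHERE sequence_owner NOT IN ('SYS', 'SYSTEM'))  -- illustrative filter
  LOOP
    EXECUTE IMMEDIATE
      'SELECT "' || s.sequence_owner || '"."' ||
      s.sequence_name || '".NEXTVAL FROM dual' INTO v;
  END LOOP;
END;
/
```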

Data Continuity After CREATE or RENAME

To replicate DML operations on new Oracle tables resulting from a CREATE or RENAME operation, the names of the new tables must be specified in TABLE and MAP statements in the parameter files. You can use wildcards to make certain that they are included.

To create a new user with CREATE USER and then move new or renamed tables into that schema, the new user name must be specified in TABLE and MAP statements. To create a new user fin2 and move new or renamed tables into that schema, the parameter statements could look as follows, depending on whether you want the fin2 objects mapped to the same, or different, schema on the target:

Extract:

TABLE fin2.*;

Replicat:

MAP fin2.*, TARGET different_schema.*;

Understanding DDL Scopes

Database objects are classified into scopes. A scope is a category that defines how DDL operations on an object are handled by Oracle GoldenGate.

The scopes are:

  • MAPPED

  • UNMAPPED

  • OTHER

The use of scopes enables granular control over the filtering of DDL operations, string substitutions, and error handling.

Mapped Scope

Objects that are specified in TABLE and MAP statements are of MAPPED scope. Extraction and replication instructions in those statements apply to both data (DML) and DDL on the specified objects, unless override rules are applied.

For objects in TABLE and MAP statements, the DDL operations listed in the following table are supported.

Table 12-1 Objects That Can be Mapped in MAP and TABLE Statements

  Operations                       On any of these objects (1)
  -------------------------------  -----------------------------------------
  CREATE, ALTER, DROP, RENAME,     TABLE (3), INDEX, TRIGGER, SEQUENCE,
  COMMENT ON (2)                   MATERIALIZED VIEW, VIEW, FUNCTION,
                                   PACKAGE, PROCEDURE, SYNONYM,
                                   PUBLIC SYNONYM (4)

  GRANT, REVOKE                    TABLE, SEQUENCE, MATERIALIZED VIEW

  ANALYZE                          TABLE, INDEX, CLUSTER

Footnote 1: TABLE and MAP do not support some special characters that could be used in an object name affected by these operations. Objects with unsupported special characters are supported by the UNMAPPED and OTHER scopes.

Footnote 2: Applies to COMMENT ON TABLE and COMMENT ON COLUMN.

Footnote 3: Includes AS SELECT.

Footnote 4: The table name must be qualified with the schema name.

For Extract, MAPPED scope marks an object for DDL capture according to the instructions in the TABLE statement. For Replicat, MAPPED scope marks DDL for replication and maps it to the object specified by the schema and name in the TARGET clause of the MAP statement. To perform this mapping, Replicat issues ALTER SESSION to set the schema of the Replicat session to the schema that is specified in the TARGET clause. If the DDL contains unqualified objects, the schema that is assigned on the target depends on circumstances described in Correctly Identifying Unqualified Object Names in DDL.

Assume the following TABLE and MAP statements:

Extract (source)

TABLE fin.expen;
TABLE hr.tab*;

Replicat (target)

MAP fin.expen, TARGET fin2.expen2;
MAP hr.tab*, TARGET hrBackup.bak_*;

Also assume a source DDL statement of:

ALTER TABLE fin.expen ADD notes varchar2(100);

In this example, because the source table fin.expen is in a MAP statement with a TARGET clause that maps to a different schema and table name, the target DDL statement becomes:

ALTER TABLE fin2.expen2 ADD notes varchar2(100);

Likewise, the following source and target DDL statements are possible for the second set of TABLE and MAP statements in the example:

Source:

CREATE TABLE hr.tabPayables ... ;

Target:

CREATE TABLE hrBackup.bak_tabPayables ...;

When objects are of MAPPED scope, you can omit their names from the DDL configuration parameters, unless you want to refine their DDL support further. If you ever need to change the object names in TABLE and MAP statements, the changes will apply automatically to the DDL on those objects.

If you include an object in a TABLE statement, but not in a MAP statement, the DDL for that object is MAPPED in scope on the source but UNMAPPED in scope on the target.

Unmapped Scope

If a DDL operation is supported for use in a TABLE or MAP statement, but its base object name is not included in one of those parameters, it is of UNMAPPED scope.

An object name can be of UNMAPPED scope on the source (not in an Extract TABLE statement), but of MAPPED scope on the target (in a Replicat MAP statement), or the other way around. When Oracle DDL is of UNMAPPED scope in the Replicat configuration, Replicat will by default do the following:

  1. Set the current schema of the Replicat session to the schema of the source DDL object.
  2. Execute the DDL as that schema.
  3. Restore the Replicat user as the current schema of the Replicat session.
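In effect, for source DDL executed by a schema src that is not in any MAP statement, Replicat behaves as sketched below; the schema names src and ggadmin are illustrative:

```sql
ALTER SESSION SET CURRENT_SCHEMA = src;      -- 1. assume the source DDL schema
CREATE TABLE tab1 (x NUMBER);                -- 2. execute the DDL as that schema
ALTER SESSION SET CURRENT_SCHEMA = ggadmin;  -- 3. restore the Replicat user's schema
```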

Other Scope

DDL operations that cannot be mapped are of OTHER scope. When DDL is of OTHER scope in the Replicat configuration, it is applied to the target with the same schema and object name as in the source DDL.

An example of OTHER scope is a DDL operation that makes a system-specific reference, such as DDL that operates on data file names.

Some other examples of OTHER scope:

CREATE USER joe IDENTIFIED by joe;
CREATE ROLE ggs_gguser_role IDENTIFIED GLOBALLY;
ALTER TABLESPACE gg_user TABLESPACE GROUP gg_grp_user;

See Understanding DDL Scopes.

Correctly Identifying Unqualified Object Names in DDL

Extract captures the current schema (also called session schema) that is in effect when a DDL operation is executed. The current container is also captured if the source is a multitenant container database.

The container and schema are used to resolve unqualified object names in the DDL.

Consider the following example:

CONNECT SCOTT/TIGER
CREATE TABLE TAB1 (X NUMBER);
CREATE TABLE SRC1.TAB2(X NUMBER) AS SELECT * FROM TAB1;

In both of those DDL statements, the unqualified table TAB1 is resolved as SCOTT.TAB1 based on the current schema SCOTT that is in effect during the DDL execution.

There is another way of setting the current schema, which is to set the current_schema for the session, as in the following example:

CONNECT SCOTT/TIGER
ALTER SESSION SET CURRENT_SCHEMA=SRC;
CREATE TABLE TAB1 (X NUMBER);
CREATE TABLE SRC1.TAB2(X NUMBER) AS SELECT * FROM TAB1;

In both of those DDL statements, the unqualified table TAB1 is resolved as SRC.TAB1 based on the current schema SRC that is in effect during the DDL execution.

In both classic and integrated capture modes, Extract captures the current schema that is in effect during DDL execution, and it resolves the unqualified object names (if any) by using the current schema. As a result, MAP statements specified for Replicat work correctly for DDL with unqualified object names.

You can also map a source session schema to a different target session schema, if that is required for the DDL to succeed on the target. This mapping is global and overrides any other mappings that involve the same schema names. To map session schemas, use the DDLOPTIONS parameter with the MAPSESSIONSCHEMA option. For more information, see Reference for Oracle GoldenGate.

If the default or mapped session schema mapping fails, you can handle the error with the following DDLERROR parameter statement, where error 1435 means that the schema does not exist.

DDLERROR 1435 IGNORE INCLUDE OPTYPE ALTER OBJTYPE SESSION

Enabling DDL Support

Data Definition Language (DDL) support is useful in dynamic environments that change constantly.

By default, the status of DDL replication support is as follows:

  • On the source, Oracle GoldenGate DDL support is disabled by default. You must configure Extract to capture DDL by using the DDL parameter.

  • On the target, DDL support is enabled by default, to maintain the integrity of transactional data that is replicated. By default, Replicat will process all DDL operations that the trail contains. If needed, you can use the DDL parameter to configure Replicat to ignore or filter DDL operations.
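A minimal sketch of enabling DDL capture in an Extract parameter file follows; the group name, credential alias, trail path, and schema are illustrative:

```
EXTRACT exte
USERIDALIAS ggadmin
EXTTRAIL ./dirdat/aa
DDL INCLUDE MAPPED
TABLE fin.*;
```

With DDL INCLUDE MAPPED, Extract captures DDL only for objects that are of MAPPED scope, that is, objects named in TABLE statements.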

Filtering DDL Replication

By default, all DDL is passed to Extract.

You can use the following methods to filter DDL operations so that specific (or all) DDL is applied to the target database according to your requirements.

  • Filtering with PL/SQL Code: Valid only for trigger-based DDL capture. This method uses an Oracle function, called by the DDL trigger when a DDL operation occurs, to compute whether or not to send the DDL to Extract. Use PL/SQL filtering only to improve the performance of the source database when the DDL trigger is in use. It can be combined with built-in filter rules and with DDL parameter filtering. Any DDL that is passed to Extract after it is filtered by the DDL trigger or filter rules can be filtered further with the DDL parameter to meet specific needs.

  • Filtering with Built-In Filter Rules: Valid only for trigger-based DDL capture. This method uses procedures that you run to build filter rules into the Oracle GoldenGate trigger logic. It allows discrete control over the types of objects that are sent to Extract, and it allows the ordering of rule evaluation. Use this method only to improve the performance of the source database when the DDL trigger is in use. You can combine built-in rules with PL/SQL and DDL parameter filtering. Any DDL that is passed to Extract after it is filtered by the DDL trigger or filter rules can be filtered further with the DDL parameter to meet specific needs.

    Note:

    Filtering with PL/SQL or built-in filter rules is unnecessary for an Extract that operates in integrated-capture mode. If Extract must operate in classic mode and you use these filtering methods, the same filtering must happen for any transactional data (DML) that is associated with the filtered objects. For example, if you filter out the DDL that creates a table named ACCOUNTS, make certain the ACCOUNTS table is not specified in any TABLE or MAP statements, or use the appropriate exclusion parameter to exclude it from wildcard resolution. See Reference for Oracle GoldenGate for a list of wildcard exclusion parameters.

  • Filtering with the DDL Parameter: Valid for both trigger-based and native DDL capture. This is the preferred method of filtering. It is performed within Oracle GoldenGate, and both Extract and Replicat can execute filter criteria. Extract can perform the filtering, or it can send all of the DDL to a trail and let Replicat perform the filtering. Alternatively, you can filter in a combination of different locations. The DDL parameter gives you control over where the filtering is performed, and it also offers more filtering options than the trigger method, including the ability to filter collectively based on the DDL scope (for example, include all MAPPED scope).

    Note:

    If a DDL operation fails in the middle of a transaction, it forces a commit, which means that the transaction spanning the DDL is split into two: the first half is committed, and the second half can be restarted. If a recovery occurs, the second half of the transaction cannot be filtered, because the information contained in the header of the transaction is no longer available.

Filtering with PL/SQL Code

This method is only valid for trigger-based capture.

You can write PL/SQL code to pass information about the DDL to a function that computes whether or not the DDL is passed to Extract. By sending fewer DDL operations to Extract, you can improve capture performance.

  1. Copy the ddl_filter.sql file that is in the Oracle GoldenGate installation directory to a test machine where you can test the code that you will be writing.
  2. Open the file for editing. It contains a PL/SQL function named filterDDL, which you can modify to specify if/then filter criteria. The information that is passed to this function includes:
    • ora_owner: the schema of the DDL object

    • ora_name: the defined name of the object

    • ora_objtype: the type of object, such as TABLE or INDEX

    • ora_optype: the operation type, such as CREATE or ALTER

    • ora_login_user: The user that executed the DDL

    • retVal: can be either INCLUDE to include the DDL, or EXCLUDE to exclude the DDL from Extract processing.

    In the location after the 'compute retVal here' comment, write filter code for each type of DDL that you want to be filtered. The following is an example:

    if ora_owner='SYS' then
    retVal:='EXCLUDE';
    end if;
    if ora_objtype='USER' and ora_optype ='DROP' then
    retVal:='EXCLUDE';
    end if;
    if ora_owner='JOE' and ora_name like 'TEMP%' then
    retVal:='EXCLUDE';
    end if;
    

    In this example, the following DDL is excluded from being processed by the DDL trigger:

    • DDL for objects owned by SYS

    • any DROP USER

    • any DDL on JOE.TEMP%

  3. (Optional) To trace the filtering, you can add the following syntax to each if/then statement in the PL/SQL:
    if ora_owner='JOE' and ora_name like 'TEMP%' then
      retVal:='EXCLUDE';
      if "&gg_user".DDLReplication.trace_level >= 1 then
        "&gg_user".trace_put_line ('DDLFILTER', 'excluded JOE.TEMP%');
      end if;
    end if;
    

    Where:

    • &gg_user is the schema of the Oracle GoldenGate DDL support objects.

    • .DDLReplication.trace_level is the level of DDL tracing. To use trigger tracing, the TRACE or TRACE2 parameter must be used with the DDL or DDLONLY option in the Extract parameter file. The .DDLReplication.trace_level parameter must be set to >=1.

    • trace_put_line writes a user-defined text string to the trace file to indicate the type of DDL that was filtered.

  4. Save the code.
  5. Stop DDL activity on the test system.
  6. In SQL*Plus, compile the ddl_filter.sql file as follows, where schema_name is the schema where the Oracle GoldenGate DDL objects are installed.
    @ddl_filter schema_name
    
  7. Test in the test environment to make certain that the filtering works. It is important to perform this testing, because any errors in the code could cause source and target DDL to become out of synchronization.
  8. After a successful test, copy the file to the Oracle GoldenGate installation directory on the source production system.
  9. Stop DDL activity on the source system.
  10. Compile the ddl_filter.sql file as you did before.
    @ddl_filter schema_name
    
  11. Resume DDL activity on the source system.

Filtering With Built-in Filter Rules

This method is only valid for trigger-based capture.

You can add inclusion and exclusion rules to control the DDL operations that are sent to Extract by the DDL trigger. By storing rules and sending fewer DDL operations to Extract, you can improve capture performance.

  1. Use the DDLAUX.addRule() function to define your rules according to the following instructions. This function is installed in the Oracle GoldenGate DDL schema after the DDL objects are installed with the ddl_setup.sql script.
  2. To activate the rules, execute the function in SQL*Plus, or enter a collection of rules in a SQL file and execute that file in SQL*Plus.

DDLAUX.addRule() Function Definition
FUNCTION addRule( obj_name IN VARCHAR2 DEFAULT NULL,
base_obj_name IN VARCHAR2 DEFAULT NULL,
owner_name IN VARCHAR2 DEFAULT NULL,
base_owner_name IN VARCHAR2 DEFAULT NULL,
base_obj_property IN NUMBER DEFAULT NULL,
obj_type IN NUMBER DEFAULT NULL,
command IN VARCHAR2 DEFAULT NULL,
inclusion IN boolean DEFAULT NULL ,
sno IN NUMBER DEFAULT NULL)
RETURN NUMBER;

Parameters for DDLAUX.addRule()

The information passed to this function consists of the following parameters, which correlate to the attributes of an object. All parameters are optional, and more than one parameter can be specified.

  • sno: Specifies a serial number that identifies the rule. The order of evaluation of rules is from the lowest serial number to the highest, until a match is found. You can use sno to place an inclusion rule ahead of an exclusion rule, so as to make an exception to the exclusion rule. Because this is a function and not a procedure, it returns the serial number of the rule; use that serial number to drop the rule with DDLAUX.dropRule(). The serial number is generated automatically unless you specify one at the beginning of your code.

    For example:

    DECLARE 
      sno NUMBER; 
    BEGIN 
      sno := tkggadmin.DDLAUX.ADDRULE(obj_name => 'GGS%',
                                      obj_type => TYPE_TABLE);
    END;
    /
    
  • obj_name: Specifies the object name. If the name is case-sensitive, enclose it within double quotes.

  • owner_name: Specifies the name of the object schema.

  • base_obj_name: Specifies the base object name of the DDL object (such as the base table if the object is an index). If the name is case-sensitive, enclose it within double quotes.

  • base_owner_name: Specifies the base object schema name.

  • base_obj_property: Specifies the base object property.

  • obj_type: Specifies the object type.

  • command: Specifies the command.

  • inclusion = TRUE: Indicates that the specified objects are to be captured by the DDL trigger. If this parameter is not specified, the rule becomes an exclusion rule, and the specified objects are not captured. You can specify both an exclusion rule and an inclusion rule. If a DDL does not match any of the rules, it is included (passed to Extract) by default. Calling DDLAUX.addRule() without any parameters generates an empty rule that excludes all DDL on all the objects.

Valid DDL Components for DDLAUX.addRule()

The following are the defined DDL object types, base object properties, and DDL commands that can be specified in the function code.

Valid object types are:

  • TYPE_INDEX
  • TYPE_TABLE
  • TYPE_VIEW
  • TYPE_SYNONYM
  • TYPE_SEQUENCE
  • TYPE_PROCEDURE
  • TYPE_FUNCTION
  • TYPE_PACKAGE
  • TYPE_TRIGGER

Valid base object properties are:

  • TB_IOT
  • TB_CLUSTER
  • TB_NESTED
  • TB_TEMP
  • TB_EXTERNAL

Valid commands are:

  • CMD_CREATE
  • CMD_DROP
  • CMD_TRUNCATE
  • CMD_ALTER

Examples of Rule-based Trigger Filtering

The following example excludes all temporary tables, except tables with names that start with IMPTEMP.

1. DDLAUX.ADDRULE(obj_name => 'IMPTEMP%', base_obj_property => TB_TEMP, obj_type => TYPE_TABLE, INCLUSION => TRUE);
2. DDLAUX.ADDRULE(base_obj_property => TB_TEMP, obj_type => TYPE_TABLE); 

Note:

Since the IMPTEMP% tables must be included, that rule should come first.

The following example excludes all tables with names like 'GGS%'.

DECLARE sno NUMBER; BEGIN sno := DDLAUX.ADDRULE(obj_name => 'GGS%', obj_type => TYPE_TABLE); END;

The following example excludes all temporary tables.

DDLAUX.ADDRULE(base_obj_property => TB_TEMP, obj_type => TYPE_TABLE); 

The following example excludes all indexes on TEMP tables.

DDLAUX.ADDRULE(base_obj_property => TB_TEMP, obj_type => TYPE_INDEX); 

The following example excludes all objects in schema TKGGADMIN.

DDLAUX.ADDRULE(owner_name => 'TKGGADMIN'); 

The following example excludes TRUNCATE operations made to temporary tables.

DDLAUX.ADDRULE(base_obj_property => TB_TEMP, obj_type => TYPE_TABLE, command => CMD_TRUNCATE);

Dropping Filter Rules

Use the DDLAUX.dropRule() function to drop a filter rule. This function is installed in the Oracle GoldenGate DDL schema after the DDL objects are installed with the ddl_setup.sql script. As input, specify the serial number of the rule that you want to drop.

FUNCTION dropRule(sno IN NUMBER) RETURN BOOLEAN;
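As a sketch, assuming the DDL objects have been installed with ddl_setup.sql and using the hypothetical GGS% rule from the earlier examples, you can capture the serial number returned by DDLAUX.addRule() and later pass it to DDLAUX.dropRule():

```sql
DECLARE
  sno NUMBER;   -- serial number of the rule
  ok  BOOLEAN;
BEGIN
  -- Add an exclusion rule and keep its serial number.
  sno := DDLAUX.ADDRULE(obj_name => 'GGS%', obj_type => TYPE_TABLE);
  -- ... later, drop the same rule by its serial number.
  ok := DDLAUX.DROPRULE(sno);
END;
/
```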

Filtering with the DDL Parameter

This method is valid for trigger-based and integrated capture modes.

The DDL parameter is the main Oracle GoldenGate parameter for filtering DDL within the Extract and Replicat processes.

When used without options, the DDL parameter performs no filtering, and it causes all DDL operations to be propagated as follows:

  • As an Extract parameter, it captures all supported DDL operations that are generated on all supported database objects and sends them to the trail.

  • As a Replicat parameter, it replicates all DDL operations from the Oracle GoldenGate trail and applies them to the target. This is the same as the default behavior without this parameter.

When used with options, the DDL parameter acts as a filtering agent to include or exclude DDL operations based on:

  • scope

  • object type

  • operation type

  • object name

  • strings in the DDL command syntax or comments, or both

Only one DDL parameter can be used in a parameter file, but you can combine multiple inclusion and exclusion options, along with other options, to filter the DDL to the required level.

  • DDL filtering options are valid for a primary Extract that captures from the transaction source, but not for a data-pump Extract.

  • When combined, multiple filter option specifications are linked logically as AND statements.

  • All filter criteria specified with multiple options must be satisfied for a DDL statement to be replicated.

  • When using complex DDL filtering criteria, it is recommended that you test your configuration in a test environment before using it in production.
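For example, the following Extract fragment is a sketch, with hypothetical object names, of how inclusion and exclusion options can be combined; see Reference for Oracle GoldenGate for the full syntax:

```
-- Capture CREATE TABLE within the MAPPED scope,
-- but skip any table whose name starts with hr.temp.
DDL INCLUDE MAPPED OBJTYPE 'TABLE' OPTYPE CREATE, EXCLUDE OBJNAME hr.temp*
```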

For DDL parameter syntax and additional usage guidelines, see Reference for Oracle GoldenGate.

Note:

Before you configure DDL support, it might help to review How DDL is Evaluated for Processing.

Special Filter Cases

This topic describes the special cases that you must consider before creating your DDL filters.

The following are the special cases for creating filter conditions.

DDL EXCLUDE ALL

DDL EXCLUDE ALL is a special processing option that is intended primarily for Extract when using trigger-based DDL capture. DDL EXCLUDE ALL blocks the replication of DDL operations, but ensures that Oracle GoldenGate continues to keep the object metadata current. When Extract receives DDL directly from the logmining server (triggerless DDL capture mode), current metadata is always maintained.

You can use DDL EXCLUDE ALL when using a method other than Oracle GoldenGate to apply DDL to the target and you want Oracle GoldenGate to replicate data changes to the target objects. It provides the current metadata to Oracle GoldenGate as objects change, thus preventing the need to stop and start the Oracle GoldenGate processes. The following special conditions apply to DDL EXCLUDE ALL:

  • DDL EXCLUDE ALL does not require the use of an INCLUDE clause.

  • When using DDL EXCLUDE ALL, you can set the WILDCARDRESOLVE parameter to IMMEDIATE to allow immediate DML resolution if required. For more information, see Reference for Oracle GoldenGate.
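A minimal parameter sketch of this configuration (placement within the parameter file is illustrative):

```
-- Keep object metadata current without replicating DDL operations.
DDL EXCLUDE ALL
-- Optionally resolve wildcarded object names immediately for DML.
WILDCARDRESOLVE IMMEDIATE
```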

To prevent all DDL metadata and operations from being replicated, omit the DDL parameter entirely.

Implicit DDL

User-generated DDL operations can generate implicit DDL operations. For example, the following statement generates two distinct DDL operations.

CREATE TABLE customers (custID number, name varchar2(50), address varchar2(75), address2 varchar2(75), city varchar2(50), state varchar2(2), zip number, contact varchar2(50), areacode number(3), phone number(7), primary key (custID));

The first (explicit) DDL operation is the CREATE TABLE statement itself.

The second DDL operation is an implicit CREATE UNIQUE INDEX statement that creates the index for the primary key. This operation is generated by the database engine, not a user application.

Guidelines for Filtering Implicit DDL

How to filter implicit DDL depends on the mechanism that you are using to filter DDL. See Filtering DDL Replication for more information.

  • When the DDL parameter is used to filter DDL operations, Oracle GoldenGate filters out any implicit DDL by default, because the explicit DDL will generate the implicit DDL on the target. For example, the target database will create the appropriate index when the CREATE TABLE statement in the preceding example is applied by Replicat.

  • When the DDL trigger is being used to filter DDL operations, you must handle the implicit DDL in your filter rules based on the following:

    • If your filtering rules exclude the explicit DDL from being propagated, you must also create a rule to exclude the implicit DDL. For example, if you exclude the CREATE TABLE statement in the following example, but do not exclude the implicit CREATE UNIQUE INDEX statement, the target database will try to create the index on a non-existent table.

      CREATE TABLE customers (custID number, name varchar2(50), address varchar2(75), address2 varchar2(75), city varchar2(50), state varchar2(2), zip number, contact varchar2(50), areacode number(3), phone number(7), primary key (custID));
      
    • If your filtering rules permit the propagation of the explicit DDL, you do not need to exclude the implicit DDL. It will be handled correctly by Oracle GoldenGate and the target database.
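With trigger-based capture, a sketch of such a rule pair for the customers table from the example above (assuming the DDLAUX package is installed in the DDL schema) might be:

```sql
DECLARE
  sno NUMBER;
BEGIN
  -- Exclude the explicit CREATE TABLE on the customers table.
  sno := DDLAUX.ADDRULE(obj_name => 'CUSTOMERS', obj_type => TYPE_TABLE);
  -- Also exclude the implicit CREATE UNIQUE INDEX on the same base table.
  sno := DDLAUX.ADDRULE(base_obj_name => 'CUSTOMERS', obj_type => TYPE_INDEX);
END;
/
```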

How Oracle GoldenGate Handles Derived Object Names

DDL operations can contain a base object name and also a derived object name.

A base object is an object that contains data. A derived object is an object that inherits some attributes of the base object to perform a function related to that object. DDL statements that have both base and derived objects are:

  • RENAME and ALTER RENAME

  • CREATE and DROP on an index, synonym, or trigger

Consider the following DDL statement:

CREATE INDEX hr.indexPayrollDate ON TABLE hr.tabPayroll (payDate);

In this case, the table is the base object. Its name (hr.tabPayroll) is the base name and is subject to mapping with TABLE or MAP under the MAPPED scope. The derived object is the index, and its name (hr.indexPayrollDate) is the derived name.

You can map a derived name in its own TABLE or MAP statement, separately from that of the base object. Or, you can use one MAP statement to handle both. In the case of MAP, the conversion of derived object names on the target works as follows.

MAP Exists for Base Object, But Not Derived Object

If there is a MAP statement for the base object, but not for the derived object, the derived name is converted according to any mapping that matches the derived object name. Derived objects are mapped only when the MAPDERIVED option is enabled, which is the default.

For example, consider the following:

Extract (source)

Table hr.*;

Replicat (target)

MAP hr.*, TARGET hrBackup.*;

Assume the following source DDL statement:

CREATE INDEX hr.indexPayrollDate ON TABLE hr.Payroll (payDate);

The CREATE INDEX statement is executed by Replicat on the target as follows:

CREATE INDEX hrBackup.indexPayrollDate ON TABLE hrBackup.Payroll (payDate);

In this example, the mapping also matches the derived object name, so the derived object schema is changed from hr to hrBackup.

In the next example, no mapping matches the derived object name, so the derived object name remains the same.

Extract (source)

Table hr.tab*;

Replicat (target)

MAP hr.tab*, TARGET hrBackup.*;

Assume the following source DDL statement:

CREATE INDEX hr.indexPayrollDate ON TABLE hr.tabPayroll (payDate);

The CREATE INDEX statement is executed by Replicat on the target as follows:

CREATE INDEX hr.indexPayrollDate ON TABLE hrBackup.tabPayroll (payDate);

MAP Exists for Base and Derived Objects

If there is a MAP statement for the base object and also one for the derived object, the result is an explicit mapping. Assuming the DDL statement includes MAPPED, Replicat converts the schema and name of each object according to its own TARGET clause. For example, assume the following:

Extract (source)

TABLE hr.tab*;
TABLE hr.index*;

Replicat (target)

MAP hr.tab*, TARGET hrBackup.*;
MAP hr.index*, TARGET hrIndex.*;

Assume the following source DDL statement:

CREATE INDEX hr.indexPayrollDate ON TABLE hr.tabPayroll (payDate);

The CREATE INDEX statement is executed by Replicat on the target as follows:

CREATE INDEX hrIndex.indexPayrollDate ON TABLE hrBackup.tabPayroll (payDate);

Use an explicit mapping when the index on the target must be owned by a different schema from that of the base object, or when the name on the target must be different from that of the source.

MAP Exists for Derived Object, But Not Base Object

If there is a MAP statement for the derived object, but not for the base object, Replicat does not perform any name conversion for either object. The target DDL statement is the same as that of the source. To map a derived object, the choices are:

  • Use an explicit MAP statement for the base object.

  • If names permit, map both base and derived objects in the same MAP statement by means of a wildcard.

  • Create a MAP statement for each object, depending on how you want the names converted.

New Tables as Derived Objects

The following explains how Oracle GoldenGate handles new tables that are created from:

  • RENAME and ALTER RENAME

  • CREATE TABLE AS SELECT

CREATE TABLE AS SELECT

A CREATE TABLE AS SELECT (CTAS) statement includes SELECT and INSERT operations that can reference any number of underlying objects. By default, Oracle GoldenGate obtains the data for the AS SELECT clause from the target database. You can force the CTAS operation to preserve the original inserts by using the GETCTASDML parameter.

Note:

Because the AS SELECT data is selected on the target, Oracle XMLType tables created from a CTAS (CREATE TABLE AS SELECT) statement cannot be supported. For XMLType tables, the row object IDs must match between source and target, which cannot be maintained in this scenario. XMLType tables created by an empty CTAS statement (one that does not insert data in the new table) can be maintained correctly.

In addition, you can use the GETCTASDML parameter, which allows Replicat to replay the original inserts of the CTAS operation, thus preserving OIDs during replication. This parameter is supported only with Integrated Dictionary, and any downstream Replicat that consumes the trail must be version 12.1.2.1 or greater; otherwise, there may be divergence.

The objects in the AS SELECT clause must exist in the target database, and their names must be identical to the ones on the source.

In a MAP statement, Oracle GoldenGate only maps the name of the new table (CREATE TABLE name) to the TARGET specification, but does not map the names of the underlying objects from the AS SELECT clause. There could be dependencies on those objects that could cause data inconsistencies if the names were converted to the TARGET specification.

The following shows an example of a CREATE TABLE AS SELECT statement on the source and how it would be replicated to the target by Oracle GoldenGate.

CREATE TABLE a.tab1 AS SELECT * FROM a.tab2;

The MAP statement for Replicat is as follows:

MAP a.tab*, TARGET a.x*;

The target DDL statement that is applied by Replicat is the following:

CREATE TABLE a.xtab1 AS SELECT * FROM a.tab2;

The name of the table in the AS SELECT * FROM clause remains as it was on the source: tab2 (rather than xtab2).

To keep the data in the underlying objects consistent on source and target, you can configure them for data replication by Oracle GoldenGate. In the preceding example, you could use the following statements to accommodate this requirement:

Source

TABLE a.tab*;

Target

MAPEXCLUDE a.tab2
MAP a.tab*, TARGET a.x*;
MAP a.tab2, TARGET a.tab2;

See Correctly Identifying Unqualified Object Names in DDL.

RENAME and ALTER TABLE RENAME

In RENAME and ALTER TABLE RENAME operations, the base object is always the new table name. In the following example, the base object name is considered to be index_paydate.

ALTER TABLE hr.indexPayrollDate RENAME TO index_paydate;

or...

RENAME hr.indexPayrollDate TO index_paydate;

The derived object name is hr.indexPayrollDate.

Disabling the Mapping of Derived Objects

Use the DDLOPTIONS parameter with the NOMAPDERIVED option to prevent the conversion of the name of a derived object according to a TARGET clause of a MAP statement that includes it. NOMAPDERIVED overrides any explicit MAP statements that contain the name of the base or derived object. Source DDL that contains derived objects is replicated to the target with the same schema and object names as on the source.

The following table shows the results of MAPDERIVED compared to NOMAPDERIVED, based on whether there is a MAP statement just for the base object, just for the derived object, or for both.

Base Object   Derived Object   Option         Derived object converted per a MAP?   Derived object gets schema of base object?
mapped*       mapped           MAPDERIVED     yes                                   no
mapped        not mapped       MAPDERIVED     no                                    yes
not mapped    mapped           MAPDERIVED     no                                    no
not mapped    not mapped       MAPDERIVED     no                                    no
mapped        mapped           NOMAPDERIVED   no                                    no
mapped        not mapped       NOMAPDERIVED   no                                    no
not mapped    mapped           NOMAPDERIVED   no                                    no
not mapped    not mapped       NOMAPDERIVED   no                                    no

* Mapped means included in a MAP statement.

The following examples illustrate the results of MAPDERIVED as compared to NOMAPDERIVED. In the first example, both the trigger and the table are owned by rpt on the target, because both the base and derived names are converted by means of MAPDERIVED.

MAP statement: MAP fin.*, TARGET rpt.*;
Source DDL statement captured by Extract: CREATE TRIGGER fin.act_trig ON fin.acct;
Target DDL statement applied by Replicat: CREATE TRIGGER rpt.act_trig ON rpt.acct;

In the next example, the trigger remains owned by fin, because conversion of the derived name is prevented by means of NOMAPDERIVED.

MAP statement: MAP fin.*, TARGET rpt.*;
Source DDL statement captured by Extract: CREATE TRIGGER fin.act_trig ON fin.acct;
Target DDL statement applied by Replicat: CREATE TRIGGER fin.act_trig ON rpt.acct;

Note:

In the case of a RENAME statement, the new table name is considered to be the base table name, and the old table name is considered to be the derived table name.

Using DDL String Substitution

You can substitute strings within a DDL operation while it is being processed by Oracle GoldenGate.

This feature provides a convenience for changing and mapping directory names, comments, and other things that are not directly related to data structures. For example, you could substitute one tablespace name for another, or substitute a string within comments. String substitution is controlled by the DDLSUBST parameter. For more information, see Reference for Oracle GoldenGate.
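As a hedged sketch (hypothetical tablespace names; see Reference for Oracle GoldenGate for the exact syntax), a DDLSUBST statement that substitutes one tablespace name for another in table DDL might look like this:

```
-- Replace the source tablespace name with the target one in table DDL.
DDLSUBST 'users_src' WITH 'users_tgt' INCLUDE OBJTYPE 'TABLE'
```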

Note:

Before you create a DDLSUBST parameter statement, it might help to review How DDL is Evaluated for Processing in this chapter.

Controlling the Propagation of DDL to Support Different Topologies

To support bidirectional and cascading replication configurations, it is important for Extract to be able to identify the DDL that is performed by Oracle GoldenGate and by other applications, such as the local business applications.

Depending on the configuration that you want to deploy, it might be appropriate to capture one or both of these sources of DDL on the local system.

Note:

Oracle GoldenGate DDL consists of ALTER TABLE statements performed by Extract to create log groups and the DDL that is performed by Replicat to replicate source DDL changes.

The following options of the DDLOPTIONS parameter control whether DDL on the local system is captured by Extract and then sent to a remote system, assuming Oracle GoldenGate DDL support is configured and enabled:

  • The GETREPLICATES and IGNOREREPLICATES options control whether Extract captures or ignores the DDL that is generated by Oracle GoldenGate. The default is IGNOREREPLICATES, which does not propagate the DDL that is generated by Oracle GoldenGate. To identify the DDL operations that are performed by Oracle GoldenGate, the following comment is part of each Extract and Replicat DDL statement:

    /* GOLDENGATE_DDL_REPLICATION */
    
  • The GETAPPLOPS and IGNOREAPPLOPS options control whether Extract captures or ignores the DDL that is generated by applications other than Oracle GoldenGate. The default is GETAPPLOPS, which propagates the DDL from local applications (other than Oracle GoldenGate).

The result of these default settings is that Extract ignores its own DDL and the DDL that is applied to the local database by a local Replicat, so that the DDL is not sent back to its source, and Extract captures all other DDL that is configured for replication. The following is the default DDLOPTIONS configuration.

DDLOPTIONS GETAPPLOPS, IGNOREREPLICATES

This behavior can be modified. See the following topics:

Propagating DDL in Active-Active (Bidirectional) Configurations

Oracle GoldenGate supports active-active DDL replication between two systems. For an active-active bidirectional replication, the following must be configured in the Oracle GoldenGate processes:

  1. DDL that is performed by a business application on one system must be replicated to the other system to maintain synchronization. To satisfy this requirement, include the GETAPPLOPS option in the DDLOPTIONS statement in the Extract parameter files on both systems.

  2. DDL that is applied by Replicat on one system must be captured by the local Extract and sent back to the other system. To satisfy this requirement, use the GETREPLICATES option in the DDLOPTIONS statement in the Extract parameter files on both systems.

    Note:

    An internal Oracle GoldenGate token will cause the actual Replicat DDL statement itself to be ignored to prevent loopback. The purpose of propagating Replicat DDL back to the original system is so that the Replicat on that system can update its object metadata cache, in preparation to receive incoming DML, which will have the new metadata.

  3. Each Replicat must be configured to update its object metadata cache whenever the remote Extract sends over a captured Replicat DDL statement. To satisfy this requirement, use the UPDATEMETADATA option in the DDLOPTIONS statement in the Replicat parameter files on both systems.

The resultant DDLOPTIONS statements should look as follows:

Extract (primary and secondary)

DDLOPTIONS GETREPLICATES, GETAPPLOPS 

Replicat (primary and secondary)

DDLOPTIONS UPDATEMETADATA

WARNING:

Before you allow new DDL or DML to be issued for the same object(s) as the original DDL, allow time for the original DDL to be replicated to the remote system and then captured again by the Extract on that system. This ensures that the operations arrive in the correct order at the Replicat on the original system, preventing DML errors caused by metadata inconsistencies. See the following diagram for more information.

The labels in the diagram represent the following:

  • 1: ALTER TABLE Customer ADD Birth_Date date

  • 2: New metadata: First_Name varchar2(50), Last_Name varchar2(50), Address varchar2(50), City varchar2(50), Country varchar2(25), Birth_Date date

  • 3: ALTER TABLE

  • 4: New metadata: First_Name varchar2(50), Last_Name varchar2(50), Address varchar2(50), City varchar2(50), Country varchar2(25), Birth_Date date

  • 5: ALTER TABLE

  • 6: DDLOPTIONS UPDATEMETADATA New metadata: First_Name varchar2(50), Last_Name varchar2(50), Address varchar2(50), City varchar2(50), Country varchar2(25), Birth_Date date

For more information about DDLOPTIONS, see Reference for Oracle GoldenGate.

For more information about configuring a bidirectional configuration, see Administering Oracle GoldenGate.

Propagating DDL in a Cascading Configuration

In a cascading configuration, use the following setting for DDLOPTIONS in the Extract parameter file on each intermediary system. This configuration forces Extract to capture the DDL from Replicat on an intermediary system and cascade it to the next system downstream.

DDLOPTIONS GETREPLICATES, IGNOREAPPLOPS

For more information about DDLOPTIONS, see Reference for Oracle GoldenGate

For more information about configuring a cascading configuration, see Administering Oracle GoldenGate for Windows and UNIX.

Adding Supplemental Log Groups Automatically

Use the DDLOPTIONS parameter with the ADDTRANDATA option to perform the following tasks:

  • Enable Oracle's supplemental logging automatically for new tables created with a CREATE TABLE.

  • Update Oracle's supplemental logging for tables affected by an ALTER TABLE to add or drop columns.

  • Update Oracle's supplemental logging for tables that are renamed.

  • Update Oracle's supplemental logging for tables where unique or primary keys are added or dropped.

To use DDLOPTIONS ADDSCHEMATRANDATA, the ADD SCHEMATRANDATA command must be issued in GGSCI to enable schema-level supplemental logging.

By default, the ALTER TABLE that adds the supplemental logging is not replicated to the target unless the GETREPLICATES parameter is in use.
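A sketch of an Extract parameter fragment that enables this behavior and also captures the resulting ALTER TABLE so that it is replicated (options as named in this topic; placement illustrative):

```
-- Add supplemental log data automatically for DDL-affected tables,
-- and capture the resulting ALTER TABLE so it is replicated too.
DDLOPTIONS ADDTRANDATA, GETREPLICATES
```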

DDLOPTIONS ADDTRANDATA is not supported for multitenant container databases. See Configuring Logging Properties for more information.

Removing Comments from Replicated DDL

You can use the DDLOPTIONS parameter with the REMOVECOMMENTS BEFORE and REMOVECOMMENTS AFTER options to prevent comments that were used in the source DDL from being included in the target DDL.

By default, comments are not removed, so that they can be used for string substitution.
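For example, the following sketch (hypothetical substitution strings) leaves comments in place so that DDLSUBST can act on them, then strips them before the DDL is propagated:

```
-- Substitute a string found in source DDL comments ...
DDLSUBST 'src_note' WITH 'tgt_note' INCLUDE INSTRCOMMENTS 'src_note'
-- ... then remove the comments from the propagated DDL.
DDLOPTIONS REMOVECOMMENTS AFTER
```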

For more information about DDLOPTIONS, see Reference for Oracle GoldenGate.

Replicating an IDENTIFIED BY Password

Use the DDLOPTIONS parameter with the DEFAULTUSERPASSWORDALIAS and REPLICATEPASSWORD | NOREPLICATEPASSWORD options to control how the password of a replicated {CREATE | ALTER} USER name IDENTIFIED BY password statement is handled. These options must be used together.

See the USEPASSWORDVERIFIERLEVEL option of DDLOPTIONS for important information about specifying the password verifier when Replicat operates against an Oracle 10g or 11g database.

For more information about DDLOPTIONS, see Reference for Oracle GoldenGate.

Note:

Replication of CREATE | ALTER PROFILE will fail because the profile/password verification function must exist in the SYS schema. To replicate these DDL statements successfully, create the password verification function manually on both the source and the target(s), because DDL on the SYS schema is excluded.

How DDL is Evaluated for Processing

This topic explains how Oracle GoldenGate processes DDL statements on the source and target systems.

It shows the order in which different criteria in the Oracle GoldenGate parameters are processed, and it explains the differences between how Extract and Replicat each process the DDL.

Extract

  1. Extract captures a DDL statement.

  2. Extract separates comments, if any, from the main statement.

  3. Extract searches for the DDL parameter. (This example assumes it exists.)

  4. Extract searches for the IGNOREREPLICATES parameter. If it is present, and if Replicat produced this DDL on this system, Extract ignores the DDL statement. (This example assumes no Replicat operations on this system.)

  5. Extract determines whether the DDL statement is a RENAME. If so, the rename is flagged internally.

  6. Extract gets the base object name and, if present, the derived object name.

  7. If the statement is a RENAME, Extract changes it to ALTER TABLE RENAME.

  8. Extract searches for the DDLOPTIONS REMOVECOMMENTS BEFORE parameter. If it is present, Extract removes the comments from the DDL statement, but stores them in case there is a DDL INCLUDE or DDL EXCLUDE clause that uses INSTR or INSTRCOMMENTS.

  9. Extract determines the DDL scope: MAPPED, UNMAPPED or OTHER:

    • It is MAPPED if the operation and object types are supported for mapping, and the base object name and/or derived object name (if RENAME) is in a TABLE parameter.

    • It is UNMAPPED if the operation and object types are supported for mapping, but the base object name and/or derived object name (if RENAME) is not in a TABLE parameter.

    • Otherwise the operation is identified as OTHER.

  10. Extract checks the DDL parameter for INCLUDE and EXCLUDE clauses, and it evaluates the DDL parameter criteria in those clauses. All options must evaluate to TRUE in order for the INCLUDE or EXCLUDE to evaluate to TRUE. The following occurs:

    • If an EXCLUDE clause evaluates to TRUE, Extract discards the DDL statement and evaluates another DDL statement. In this case, the processing steps start over.

    • If an INCLUDE clause evaluates to TRUE, or if the DDL parameter does not have any INCLUDE or EXCLUDE clauses, Extract includes the DDL statement, and the processing logic continues.

  11. Extract searches for a DDLSUBST parameter and evaluates the INCLUDE and EXCLUDE clauses. If the criteria in those clauses add up to TRUE, Extract performs string substitution. Extract evaluates the DDL statement against each DDLSUBST parameter in the parameter file. For all true DDLSUBST specifications, Extract performs string substitution in the order that the DDLSUBST parameters are listed in the file.

  12. After DDLSUBST has been processed, Extract searches for the REMOVECOMMENTS AFTER parameter. If it is present, Extract removes the comments from the DDL statement.

  13. Extract searches for DDLOPTIONS ADDTRANDATA. If it is present, and if the operation is CREATE TABLE, Extract issues the ALTER TABLE name ADD SUPPLEMENTAL LOG GROUP command on the table.

  14. Extract writes the DDL statement to the trail.

Replicat

  1. Replicat reads the DDL statement from the trail.

  2. Replicat separates comments, if any, from the main statement.

  3. Replicat searches for DDLOPTIONS REMOVECOMMENTS BEFORE. If it is present, Replicat removes the comments from the DDL statement.

  4. Replicat evaluates the DDL synchronization scope to determine whether the DDL qualifies for name mapping. Any DDL that does not qualify is of OTHER scope.

  5. Replicat evaluates the MAP statements in the parameter file. If the source base object name for this DDL (as read from the trail) appears in any of the MAP statements, the operation is marked as MAPPED in scope. Otherwise it is marked as UNMAPPED in scope.

  6. Replicat replaces the source base object name with the base object name that is specified in the TARGET clause of the MAP statement.

  7. If there is a derived object, Replicat searches for DDLOPTIONS MAPDERIVED. If it is present, Replicat replaces the source derived name with the target derived name from the MAP statement.

  8. Replicat checks the DDL parameter for INCLUDE and EXCLUDE clauses, and it evaluates the DDL parameter criteria contained in them. All options must evaluate to TRUE in order for the INCLUDE or EXCLUDE to evaluate to TRUE. The following occurs:

    • If any EXCLUDE clause evaluates to TRUE, Replicat discards the DDL statement and starts evaluating another DDL statement. In this case, the processing steps start over.

    • If any INCLUDE clause evaluates to TRUE, or if the DDL parameter does not have any INCLUDE or EXCLUDE clauses, Replicat includes the DDL statement, and the processing logic continues.

  9. Replicat searches for the DDLSUBST parameter and evaluates the INCLUDE and EXCLUDE clauses. If the options in those clauses add up to TRUE, Replicat performs string substitution. Replicat evaluates the DDL statement against each DDLSUBST parameter in the parameter file. For all true DDLSUBST specifications, Replicat performs string substitution in the order that the DDLSUBST parameters are listed in the file.

  10. After DDLSUBST has been processed, Replicat searches for the REMOVECOMMENTS AFTER parameter. If it is present, Replicat removes the comments from the DDL statement.

  11. Replicat executes the DDL statement on the target database.

  12. If there are no errors, Replicat processes the next DDL statement. If there are errors, Replicat performs the following steps.

  13. Replicat analyzes the INCLUDE and EXCLUDE rules in the Replicat DDLERROR parameters in the order that they appear in the parameter file. If Replicat finds a rule for the error code, it applies the specified error handling; otherwise, it applies DEFAULT handling.

  14. If the error handling does not enable the DDL statement to succeed, Replicat does one of the following: abends, ignores the operation, or discards it as specified in the rules.

Note:

If there are multiple targets for the same source in a MAP statement, the processing logic executes for each one.

Viewing DDL Report Information

By default, Oracle GoldenGate shows basic statistics about DDL at the end of the Extract and Replicat reports.

To enable expanded DDL reporting, use the DDLOPTIONS parameter with the REPORT option. Expanded reporting includes the following information about DDL processing:

  • A step-by-step history of the DDL operations that were processed by Oracle GoldenGate.

  • The DDL filtering and processing parameters that are being used.

Expanded DDL report information increases the size of the report file, but it might be useful in certain situations, such as for troubleshooting or to determine when an ADDTRANDATA to add supplemental logging was applied.

To view a report, use the VIEW REPORT command in GGSCI.

VIEW REPORT group

Viewing DDL Reporting in Replicat

The Replicat report lists:

  • The entire syntax and source Oracle GoldenGate SCN of each DDL operation that Replicat processed from the trail. You can use the source SCN for tracking purposes, especially when there are restores from backup and Replicat is positioned backward in the trail.

  • A subsequent entry that shows the scope of the operation (MAPPED, UNMAPPED, OTHER) and how object names were mapped in the target DDL statement, if applicable.

  • Another entry that shows how processing criteria were applied.

  • Additional entries that show whether the operation succeeded or failed, and whether or not Replicat applied error handling rules.

The following excerpt from a Replicat report illustrates a sequence of steps, including error handling:

2011-01-20 15:11:45  GGS INFO     2104  DDL found, operation [drop table myTableTemp ], Source SCN [1186713.0].
 2011-01-20 15:11:45  GGS INFO     2100  DDL is of mapped scope, after mapping new operation [drop table "QATEST2"."MYTABLETEMP" ].
 2011-01-20 15:11:45  GGS INFO     2100  DDL operation included [include objname myTable*], optype [DROP], objtype [TABLE], objname [QATEST2.MYTABLETEMP].
 2011-01-20 15:11:45  GGS INFO     2100  Executing DDL operation.
 2011-01-20 15:11:48  GGS INFO     2105  DDL error ignored for next retry: error code [942], filter [include objname myTableTemp], error text [ORA-00942: table or view does not exist], retry [1].
 2011-01-20 15:11:48  GGS INFO     2100  Executing DDL operation , trying again due to RETRYOP parameter.
 2011-01-20 15:11:51  GGS INFO     2105  DDL error ignored for next retry: error code [942], filter [include objname myTableTemp], error text [ORA-00942: table or view does not exist], retry [2].
 2011-01-20 15:11:51  GGS INFO     2100  Executing DDL operation, trying again due to RETRYOP parameter.
 2011-01-20 15:11:54  GGS INFO     2105  DDL error ignored for next retry: error code [942], filter [include objname myTableTemp], error text [ORA-00942: table or view does not exist], retry [3].
 2011-01-20 15:11:54  GGS INFO     2100  Executing DDL operation, trying again due to RETRYOP parameter.
 2011-01-20 15:11:54  GGS INFO     2105  DDL error ignored: error code [942], filter [include objname myTableTemp], error text [ORA-00942: table or view does not exist].
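When diagnosing repeated DDL errors, the retry messages above lend themselves to simple scripting. The following is a minimal sketch (not part of Oracle GoldenGate) that extracts the retry counters from report text; the abridged report string is taken from the excerpt above:

```python
import re

# Abridged Replicat report messages from the excerpt above.
report = """\
2011-01-20 15:11:48  GGS INFO 2105  DDL error ignored for next retry: retry [1].
2011-01-20 15:11:51  GGS INFO 2105  DDL error ignored for next retry: retry [2].
2011-01-20 15:11:54  GGS INFO 2105  DDL error ignored for next retry: retry [3].
2011-01-20 15:11:54  GGS INFO 2105  DDL error ignored.
"""

# Each "ignored for next retry" message carries a retry counter; collect them.
retries = [int(n) for n in re.findall(r"retry \[(\d+)\]", report)]
print(len(retries))   # -> 3
print(max(retries))   # -> 3
```

A retry count that keeps reaching the RETRYOP maximum, as here, suggests the error is persistent rather than transient.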

Viewing DDL Reporting in Extract

The Extract report lists the following:

  • The entire syntax of each captured DDL operation, the start and end SCN, the Oracle instance, the DDL sequence number (from the SEQNO column of the history table), and the size of the operation in bytes.

  • A subsequent entry that shows how processing criteria were applied to the operation, for example string substitution or INCLUDE and EXCLUDE filtering.

  • Another entry showing whether the operation was written to the trail or excluded.

The following, taken from an Extract report, shows an included operation and an excluded operation. Note that only the included operation is followed by a message confirming that it was written to the trail.

2011-01-20 15:11:41  GGS INFO     2100  DDL found, operation [create table myTable (
    myId number (10) not null,
    myNumber number,
    myString varchar2(100),
    myDate date,
    primary key (myId)
) ], start SCN [1186754], commit SCN [1186772] instance [test11g (1)], DDL seqno [4134].
 
2011-01-20 15:11:41  GGS INFO     2100  DDL operation included [INCLUDE OBJNAME myTable*], optype [CREATE], objtype [TABLE], objname [QATEST1.MYTABLE].
 
2011-01-20 15:11:41  GGS INFO     2100  DDL operation written to extract trail file.
 
2011-01-20 15:11:42  GGS INFO     2100  Successfully added TRAN DATA for table with the key, table [QATEST1.MYTABLE], operation [ALTER TABLE "QATEST1"."MYTABLE" ADD SUPPLEMENTAL LOG GROUP "GGS_MYTABLE_53475" (MYID) ALWAYS  /* GOLDENGATE_DDL_REPLICATION */ ].
 
2011-01-20 15:11:43  GGS INFO     2100  DDL found, operation [create table myTableTemp (
    vid varchar2(100),
    someDate date,
    primary key (vid)
) ], start SCN [1186777], commit SCN [1186795] instance [test11g (1)], DDL seqno [4137].
 
2011-01-20 15:11:43  GGS INFO     2100  DDL operation excluded [EXCLUDE OBJNAME myTableTemp OPTYPE CREATE], optype [CREATE], objtype [TABLE], objname [QATEST1.MYTABLETEMP].

Statistics in the Process Reports

You can send current statistics for DDL processing to the Extract and Replicat reports by using the SEND command in GGSCI.

SEND {EXTRACT | REPLICAT} group REPORT

The statistics show totals for:

  • All DDL operations

  • Operations that are MAPPED in scope

  • Operations that are UNMAPPED in scope

  • Operations that are OTHER in scope

  • Operations that were excluded (number of operations minus included ones)

  • Errors (Replicat only)

  • Retried errors (Replicat only)

  • Discarded errors (Replicat only)

  • Ignored operations (Replicat only)

Tracing DDL Processing

If you open a support case with Oracle GoldenGate Technical Support, you might be asked to turn on tracing. The TRACE and TRACE2 parameters control DDL tracing.
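As an illustration only, DDL tracing is enabled through options on these parameters in the Extract or Replicat parameter file. The trace file path and the DDLINCLUDE and DDLONLY options below are a sketch and should be confirmed against the reference documentation for your release:

```
-- Illustrative parameter entry; enable only when requested by Oracle Support.
TRACE ./dirrpt/trace_ddl.trc DDLINCLUDE    -- trace DDL in addition to regular processing
```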

Using Tools that Support Trigger-Based DDL Capture

This section documents the additional tools available to support trigger-based capture.

Tracing the DDL Trigger

To trace the activity of the Oracle GoldenGate DDL trigger, use the following tools.

  • ggs_ddl_trace.log trace file: Oracle GoldenGate creates a trace file in the USER_DUMP_DEST directory of Oracle. On RAC, each node has its own trace file that captures DDL tracing for that node. You can find the location of the trace file with the following query:

    select value from sys.v_$parameter where name = 'user_dump_dest';
    
  • ddl_tracelevel script: Edit and run this script to set the trace level. A value of None generates no DDL tracing, except for fatal errors and installation logging. The default value of 0 generates minimal tracing information. A value of 1 or 2 generates a much larger amount of information in the trace file. Do not use 1 or 2 unless requested to do so by an Oracle GoldenGate Technical Support analyst as part of a support case.

  • ddl_cleartrace script: Run this script on a regular schedule to prevent the trace file from consuming excessive disk space as it expands. It deletes the file, but Oracle GoldenGate will create another one. The DDL trigger stops writing to the trace file when the Oracle directory gets low on space, and then resumes writing when space is available again. This script is in the Oracle GoldenGate directory. Back up the trace file before running the script.

Viewing Metadata in the DDL History Table

Use the DUMPDDL command in GGSCI to view the information that is contained in the DDL history table. This information is stored in proprietary format, but you can export it in human-readable form to the screen or to a series of SQL tables that you can query. The information in the DDL history table is the same as that used by the Extract process.
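For example, after logging in to the source database from GGSCI, the history table contents can be dumped to the screen with the SHOW option (the user name shown is a placeholder):

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD password
GGSCI> DUMPDDL SHOW
```

Without the SHOW option, DUMPDDL writes the output to SQL tables that you can query.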

Handling DDL Trigger Errors

Use the params.sql non-executable script to handle failures of the Oracle GoldenGate DDL trigger in relation to whether the source DDL fails or succeeds. The params.sql script is in the root Oracle GoldenGate directory. The parameters to use are the following:

  • ddl_fire_error_in_trigger: If set to TRUE, failures of the Oracle GoldenGate DDL trigger are raised with an Oracle GoldenGate error message and a database error message to the source end-user application. The source operation fails.

    If set to FALSE, no errors are raised, and a message is written to the trigger trace file in the Oracle GoldenGate directory. The source operation succeeds, but no DDL is replicated. The target application will eventually fail if subsequent data changes do not match the old target object structure. The default is FALSE.

  • ddl_cause_error: If set to TRUE, tests the error response of the trigger by deliberately causing an error. To generate the error, Oracle GoldenGate attempts to SELECT zero rows without exception handling. Revert this flag to the default of FALSE after testing is done.
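As an illustration only, setting these parameters amounts to editing the corresponding define statements in params.sql and rerunning the script; the exact syntax below is a hypothetical sketch and should be verified against the script shipped with your installation:

```sql
-- Hypothetical excerpt from params.sql
define ddl_fire_error_in_trigger = 'TRUE'   -- raise DDL trigger failures to the source application
define ddl_cause_error = 'FALSE'            -- keep deliberate error injection disabled
```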

Using Edition-Based Redefinition

Oracle GoldenGate supports the use of Edition-based Redefinition (EBR) with Oracle Databases, enabling you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating downtime.

Editions are non-schema objects to which editioned objects belong; an edition can be thought of as a namespace that owns its editioned objects. Every database, including an upgraded database, starts with one edition named ORA$BASE. More than one edition can exist in a database, but each edition can have only one child. For example, if you create three editions in succession, edition1, edition2, and edition3, then edition1 is the parent of edition2, which is the parent of edition3, regardless of the user or database session that creates them or which edition was current when each new one was created. When you create an edition, it inherits all the editioned objects of its parent. To use editions with Oracle GoldenGate, you must create them. For more information about creating and managing editions, see Oracle Database Administrator's Guide.

An object is considered editioned if it is an editionable type, it is created with the EDITIONABLE attribute, and the schema is enabled for editioning of that object type. When you create, alter, or drop an editioned object, the redo log contains the name of the edition to which it belongs. In a container database, editions belong to the container and each container has its own default edition.

The CREATE | DROP EDITION DDLs are captured for all Extract configurations. They fall into the OTHER category and are assigned an OBJTYPE option value of EDITION. The OBJTYPE option can be used for filtering, for example:

DDL EXCLUDE OBJTYPE EDITION
DDL EXCLUDE OBJTYPE EDITION OPTYPE CREATE
DDL EXCLUDE OBJTYPE EDITION OPTYPE DROP
DDL EXCLUDE OBJTYPE EDITION OPTYPE DROP ALLOWEMPTYOWNER OBJNAME edition_name

You must use the following syntax to exclude an edition from Extract or Replicat:

EXCLUDE OBJTYPE EDITION, ALLOWEMPTYOWNER OBJNAME edition_name
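For example, a DDL parameter that replicates mapped operations while excluding a specific edition might look like the following sketch (the edition name is a placeholder):

```
DDL INCLUDE MAPPED, EXCLUDE OBJTYPE EDITION, ALLOWEMPTYOWNER OBJNAME app_patch_edition
```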

Editions fall into the OTHER category, so no mapping is performed on the edition name. When the DDL is applied, the USE permission is automatically granted to the Replicat user. Replicat also grants USE on the edition, WITH GRANT OPTION, to the original creating user if that user exists on the target database. Because editions are not mappable operations, they do not have owners, so the standard EXCLUDE statement does not work.

The DDLs used to create or alter editions do not actually enable the user for editions; rather, they enable the schema for editions. This is an important distinction because it means that the Replicat user does not need to be enabled for editions to apply DDLs to editioned objects. When Replicat applies a CREATE EDITION DDL, it grants the original creating user permission to USE it if that user exists on the target database. For any unreplicated CREATE EDITION statements, you must issue a USE WITH GRANT OPTION grant to the Replicat user.

Whether an editionable object becomes editioned is controlled by the schema in which it is created. Replicat switches its current session edition before applying a DDL if the edition name attribute exists in the trail file and is not empty.

Container database environments are supported for both Extract and Replicat. No additional configuration is necessary. The Replicat user's schema cannot be enabled for editions if it is a common user. The Replicat user's schema does not need to be enabled for editions when applying DDLs to editioned objects in other schemas.

Note:

EBR support is limited to Integrated Dictionary; it is not supported when using a DDL trigger.