3 Configuring Oracle GoldenGate in a Multitenant Container Database

This chapter provides additional configuration instructions for configuring Oracle GoldenGate using either the per-PDB capture mode or the CDB root capture mode.

Topics:

Requirements for Configuring Container Databases for Oracle GoldenGate

The following are the configuration requirements to enable replication to and from multitenant container databases:

  • The pluggable databases in a multitenant container database can have different character sets. Oracle GoldenGate captures data from pluggable databases with different character sets into one trail file and replicates the data without corruption caused by the differing character sets.

  • Oracle GoldenGate on-premises 21c and higher with Oracle Database 21c supports the per-PDB Extract feature. You can create a per-PDB Extract by connecting as the local ggadmin user in a specific pluggable database to create and register the Extract. You do not need the CONTAINER clause or the SOURCECATALOG parameter to set up a per-PDB Extract. The procedure for setting up a per-PDB Extract is the same as for a non-CDB database.

    If required, you can instead set up a root-level Extract by connecting as the common c##ggadmin user in the root container to create and register the Extract for specific pluggable databases using the CONTAINER clause. See Granting User Privileges for Oracle Database 21c and Lower, depending on the Oracle database installation that you need to configure.

    See Establishing Oracle GoldenGate Credentials for how to create a user for the Oracle GoldenGate processes and grant the correct privileges.

  • Due to changes in the redo logs, to capture from a multitenant database that is Oracle 12.2 or higher, Extract must specify a trail format of release 12.3 or higher.

  • The dbms_goldengate_auth.grant_admin_privilege package grants the appropriate privileges for capture and apply within a multitenant container database. This includes the container parameter, which must be set to ALL, as shown in the following example:

    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE ('c##ggadmin',container=>'all')
  • DDL replication works the same as regular replication for multitenant databases. However, DDL on the root container should not be replicated, because Replicats must not connect to the root container, only to PDBs.

You can also see Quickstart Your Data Replication with Oracle GoldenGate Microservices Architecture for a list of required privileges and how to configure the container and pluggable databases for Oracle GoldenGate.
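For example, a root-level Extract for specific pluggable databases might be registered as follows. This is a sketch only: the group name (exte), credential alias (ggnorth), and PDB names (pdbeast, pdbwest) are illustrative, not prescribed.

    DBLOGIN USERIDALIAS ggnorth
    ADD EXTRACT exte, INTEGRATED TRANLOG, BEGIN NOW
    REGISTER EXTRACT exte DATABASE CONTAINER (pdbeast, pdbwest)

For a per-PDB Extract, the DBLOGIN alias instead connects directly to the ggadmin user in the pluggable database, and the CONTAINER clause is omitted from the REGISTER EXTRACT command.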

Flush Sequence for Multitenant Container Database

Use FLUSH SEQUENCE immediately after you start Extract for the first time during an initial synchronization or a re-synchronization. This command updates an Oracle sequence, so that initial redo records are available at the time that Extract starts to capture transaction data. Normally, redo is not generated until the current cache is exhausted. The flush gives Replicat an initial start point with which to synchronize to the correct sequence value on the target system. From then on, Extract can use the redo that is associated with the usual cache reservation of sequence values.
  1. The following Oracle procedures are used by FLUSH SEQUENCE:

     Database   Procedure           User and Privileges
     Source     updateSequence      Grants EXECUTE to the owner of the Oracle GoldenGate DDL objects, or other selected user if not using DDL support.
     Target     replicateSequence   Grants EXECUTE to the Oracle GoldenGate Replicat user.

    The sequence.sql script installs these procedures. Normally, this script is run as part of the Oracle GoldenGate installation process, but make certain that was done before using FLUSH SEQUENCE. If sequence.sql was not run, the flush fails and an error message similar to the following is generated:

    Cannot flush sequence {0}. Refer to the Oracle GoldenGate for Oracle 
    documentation for instructions on how to set up and run the sequence.sql 
    script. Error {1}.
  2. The GLOBALS file must contain a GGSCHEMA parameter that specifies the schema in which the procedures are installed. This user must have CONNECT, RESOURCE, and DBA privileges.

  3. Before using FLUSH SEQUENCE, issue the DBLOGIN command as the database user that has EXECUTE privilege on the updateSequence procedure. If logging into a multitenant container database, log into the pluggable database that contains the sequence that is to be flushed.

FLUSH SEQUENCE must be issued at the PDB level. Create an Oracle GoldenGate user in each PDB for which sequence replication is required, use DBLOGIN to log into that PDB, and then run the FLUSH SEQUENCE command.

It is recommended that you use the same schema in each PDB, so that it works with the GGSCHEMA parameter in the GLOBALS file. Here is an example:

Environment information:
    OGG 21.3, Oracle 21c to Oracle 21c replication, Integrated Extract, Parallel Replicat
    Source: CDB NORTH, PDB PDBEAST
    Target: CDB SOUTH, PDB PDBWEST

Source Oracle GoldenGate configuration:
    Container user: C##GGADMIN
    PDB user for sequences: GGATE

Create the PDB user and run @sequence:

    sqlplus / as sysdba
    SQL> ALTER SESSION SET CONTAINER=PDBEAST;
    SQL> CREATE USER ggate IDENTIFIED BY password DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS CONTAINER=CURRENT;
    SQL> @sequence

When prompted, enter GGATE.

GLOBALS file:

    GGSCHEMA GGATE

FLUSH SEQUENCE:

    DBLOGIN USERIDALIAS ggeast DOMAIN OracleGoldenGate
    FLUSH SEQUENCE PDBEAST.HR.*

Target Oracle GoldenGate configuration:
    PDB user: ggadmin

Run @sequence:

    sqlplus / as sysdba
    SQL> ALTER SESSION SET CONTAINER=PDBWEST;
    SQL> @sequence

When prompted, enter ggadmin.

The @sequence.sql script must likewise be run in each PDB from which you are going to capture.

Using the Root Container Extract from PDB

Oracle GoldenGate 21c and higher releases allow capturing from a PDB in a multitenant container Oracle database. To capture from a multitenant database in this mode, you use an Extract that is configured at the root level using a c## account. To apply data into a multitenant database, a separate Replicat is needed for each PDB, because a Replicat connects at the PDB level and does not have access to objects outside of that PDB.

One Extract group can capture from multiple pluggable databases to a single trail. In the parameter file, source objects must be specified in TABLE and SEQUENCE statements with their fully qualified three-part names in the format of container.schema.object.

As an alternative to specifying three-part names, you can specify a default pluggable database with the SOURCECATALOG parameter, and then specify only the schema.object in subsequent TABLE or SEQUENCE parameters. You can use multiple instances of this configuration to handle multiple source pluggable databases. For example:

SOURCECATALOG DBEAST
TABLE hr.employees;
SEQUENCE hr.seq;
SOURCECATALOG DBWEST
TABLE hr.employees;
SEQUENCE hr.seq;

You can also refer to the Quickstart Bidirectional Replication for steps to perform in a bidirectional set up for Oracle GoldenGate on Oracle multitenant database.

Topics:

About Extract

The Oracle GoldenGate Extract process interacts directly with a database logmining server to receive data changes in the form of logical change records (LCRs).

The following diagram illustrates the configuration of Extract.

(Illustration: downstream_ext.png)

Some of the additional features of Oracle GoldenGate Extract are:

  • Extract is fully integrated with the database, allowing seamless interoperability between features such as Oracle RAC, ASM, and TDE.

  • Extract uses the database logmining server to access the Oracle redo stream, with the benefit of being able to automatically switch between different copies of archive logs or different mirrored versions of the online logs. Thus, capture can transparently handle the absence of a log file caused by disk corruption, hardware failure, or operator error, assuming that additional copies of the archived and online logs are available.

  • Extract enables faster filtering of tables.

  • Extract handles point-in-time recovery and RAC integration more efficiently.

  • Extract features integrated log management. The Oracle Recovery Manager (RMAN) automatically retains the archive logs that are needed by Extract.

  • Extract supports capture from a multitenant container database and from per-PDB capture mode.

  • Extract and Replicat (integrated) are both database objects, so the naming of the objects follows the same rules as other Oracle database objects. See Specifying Object Names in Oracle GoldenGate Input in the Oracle GoldenGate Microservices documentation.

  • When Extract runs on a remote system whose endian value differs from that of the system where the Oracle database is running, Oracle GoldenGate automatically enables cross-endian interoperability. For cross-endian Extract to work, the COMPATIBLE parameter of the source database must be 11.2.0.4 or higher.

Extract Deployment Options

The deployment options for Extract are described in this section and depend on where the mining database is deployed. The mining database is the one where the logmining server is deployed.

  • Local deployment: For a local deployment, the source database and the mining database are the same. The source database is the database for which you want to mine the redo stream to capture changes, and also where you deploy the logmining server. Because integrated capture is fully integrated with the database, this mode does not require any special database setup.

  • Downstream deployment: In a downstream deployment, the source and mining databases are different databases. You create the logmining server at the downstream database. You configure redo transport at the source database to ship the redo logs to the downstream mining database for capture at that location. Using a downstream mining server for capture may be desirable to offload the capture overhead and any other overhead from transformation or other processing from the production server, but requires log shipping and other configuration.

    When using a downstream mining configuration, the source database and mining database must be of the same platform. For example, if the source database is running on Windows 64-bit, the downstream database must also be on a Windows 64-bit platform. See Configuring a Downstream Mining Database to configure a downstream mining database.

  • Downstream sourceless Extract deployment: In the Extract parameter file, replace the USERID parameter with NOUSERID, and specify TRANLOGOPTIONS MININGUSER. Extract obtains all required information from the downstream mining database and does not depend on a connection to the source database, so the source database can be shut down and restarted without affecting Extract.

    Extract will abend if it encounters redo changes that require data to be fetched from the source database.

    To capture any tables that are listed as ID KEY in the dba_goldengate_support_mode view, you need a FETCHUSERID or FETCHUSERIDALIAS connection to support those tables. Tables that are listed as FULL do not require this. In addition, SQLEXEC operations that perform a query or execute a stored procedure cannot be used with this method, because SQLEXEC requires USERID or USERIDALIAS, which are incompatible with NOUSERID.
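On the source database of a downstream deployment, redo transport along the lines of the following sketch ships redo to the mining database. The service name (dbmscap) and destination number are illustrative assumptions; see Configuring a Downstream Mining Database for the complete setup.

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dbmscap ASYNC NOREGISTER
      VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;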

Configuring Primary Extract

The mining database, from whose logmining server the primary Extract captures log change records, can be either local to or downstream from the source database.

These steps configure the primary Extract to capture transaction data from either location. See Configuring a Downstream Mining Database and see the following examples:

  1. Example 1: Capturing from One Source Database in Real-time Mode

  2. Example 2: Capturing from Multiple Sources in Archive-log-only Mode

  3. Example 3: Capturing from Multiple Sources with Mixed Real-time and Archive-log-only Mode

Note:

One Extract group is generally sufficient to capture from a single database or multiple pluggable databases within a multitenant container database. See Configuring Oracle GoldenGate in a Multitenant Container Database. You can also choose per-PDB capture mode when working in an Oracle Autonomous Database or cloud environment. See Configuring Extract to Capture from an Autonomous Database.

  1. In GGSCI, Admin Client, or REST API client on the source system, create the Extract parameter file.
    EDIT PARAMS name

    Where: name is the name of the primary Extract.

  2. Enter the Extract parameters in the order shown, starting a new line for each parameter statement. Examples are shown for a regular database, a multitenant container database, and downstream deployments for both non-CDB and multitenant databases. The difference between the two is whether you must use two-part or three-part object names in the TABLE and SEQUENCE specifications. See the basic parameters for primary Extract for more information and parameter descriptions.

    Basic parameters for Extract mining a non-multitenant database

    EXTRACT financep
    USERIDALIAS c##_alias
    DDL INCLUDE MAPPED
    EXTTRAIL /ggs/dirdat/lt
    SEQUENCE hr.employees_seq;
    TABLE hr.*;

    Basic parameters for Extract capturing from a multitenant database

    EXTRACT financep
    USERIDALIAS c##_alias
    DDL INCLUDE MAPPED 
    EXTTRAIL /ggs/dirdat/lt
    TABLE test.ogg.tab1;
    SEQUENCE test.hr.employees_seq;
    TABLE test.hr.*;
    TABLE test.sales.*;
    TABLE test.acct.*;
    

    Basic parameters for Extract where the mining database is a downstream database and is a non-CDB database

    EXTRACT financep
    USERIDALIAS c##_alias 
    TRANLOGOPTIONS MININGUSERALIAS c##_alias 
    TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)
    LOGALLSUPCOLS
    UPDATERECORDFORMAT COMPACT
    DDL INCLUDE MAPPED
    ENCRYPTTRAIL AES192
    EXTTRAIL /ggs/dirdat/lt
    SEQUENCE hr.employees_seq;
    TABLE hr.*;
    

    Basic parameters for the primary Extract where the mining database is a downstream database and is a multitenant container database

    EXTRACT financep
    USERIDALIAS tiger1 
    TRANLOGOPTIONS MININGUSERALIAS tiger2 
    TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 164, & 
       DOWNSTREAM_REAL_TIME_MINE y)
    LOGALLSUPCOLS
    UPDATERECORDFORMAT COMPACT
    DDL INCLUDE MAPPED SOURCECATALOG pdb1 INCLUDE MAPPED SOURCECATALOG pdb2
    ENCRYPTTRAIL AES192
    EXTTRAIL /ggs/dirdat/lt
    TABLE test.ogg.tab1;
    SOURCECATALOG pdb1
    SEQUENCE hr.employees_seq;
    TABLE hr.*;
    SOURCECATALOG pdb2
    TABLE sales.*;
    TABLE acct.*;
    
    Parameter Description
    EXTRACT group

    group is the name of the Extract group. For more information, see Reference for Oracle GoldenGate.

    USERIDALIAS alias

    Specifies the alias of the database login credential of the user that is assigned to Extract. This credential must exist in the Oracle GoldenGate credential store.

    LOGALLSUPCOLS

    Writes all supplementally logged columns to the trail, including those required for conflict detection and resolution and the scheduling columns required to support integrated Replicat. (Scheduling columns are primary key, unique index, and foreign key columns.) You configure the database to log these columns with GGSCI commands. See Establishing Oracle GoldenGate Credentials.

    UPDATERECORDFORMAT COMPACT

    Combines the before and after images of an UPDATE operation into a single record in the trail. This parameter is valid for Oracle Databases version 12c and later to support Replicat in integrated mode. Although not a required parameter, UPDATERECORDFORMAT COMPACT is a best practice and significantly improves Replicat performance.

    TRANLOGOPTIONS MININGUSERALIAS alias

    Specifies connection information for the logmining server at the downstream mining database, if being used.

    MININGUSERALIAS specifies the alias of the Extract user for the downstream mining database. This is the user that you created in Configuring a Downstream Mining Database. The credential for this user must be stored in the Oracle GoldenGate credential store.

    Use MININGUSERALIAS only if the database logmining server is in a different database from the source database; otherwise just use USERIDALIAS. When using MININGUSERALIAS, use it in addition to USERIDALIAS, because credentials are required for both databases.

    TRANLOGOPTIONS [INTEGRATEDPARAMS (parameter[, ...])]

    Optional, passes parameters to the Oracle Database that contains the database logmining server. Use only to change logmining server parameters from their default settings. See Additional Parameter Options for Extract.

    TRANLOGOPTIONS CHECKPOINTRETENTIONTIME days

    Optional, controls the number of days that Extract retains checkpoints before purging them automatically. Partial days can be specified using decimal values. For example, 8.25 specifies 8 days and 6 hours. For more information, see Reference for Oracle GoldenGate.

    DDL include_clause

    Required if replicating DDL operations. See Configuring DDL Support for more information.

    ENCRYPTTRAIL algorithm

    Encrypts the local trail.

    EXTTRAIL pathname

    Specifies the path name of the local trail to which the primary Extract writes captured data.

    SOURCECATALOG container

    Use this parameter when the source database is a multitenant container database. Specifies the name of a pluggable database that is to be used as the default container for all subsequent TABLE and SEQUENCE parameters that contain two-part names. This parameter enables you to use two-part object names (schema.object) rather than three-part names (container.schema.object). It remains in effect until another SOURCECATALOG parameter is encountered or a full three-part TABLE or SEQUENCE specification is encountered.

    {TABLE | SEQUENCE} [container.]schema.object;
    

    Specifies the database object for which to capture data.

    • TABLE specifies a table or a wildcarded set of tables.

    • SEQUENCE specifies a sequence or a wildcarded set of sequences.

    • container is the name of the pluggable database (PDB) that contains the object, if this database is a multitenant container database. The container part of the name is not required if this Extract group will only process data from one PDB and the default PDB is specified with the SOURCECATALOG parameter.

    • schema is the schema name or a wildcarded set of schemas.

    • object is the table or sequence name, or a wildcarded set of those objects.

    Terminate the parameter statement with a semi-colon.

    To exclude a name from a wildcard specification, use the CATALOGEXCLUDE, SCHEMAEXCLUDE, TABLEEXCLUDE, and EXCLUDEWILDCARDOBJECTSONLY parameters as appropriate.

    MAPINVISIBLECOLUMNS

    Controls whether or not Replicat includes invisible columns in Oracle target tables for default column mapping. To work with invisible columns otherwise, explicitly specify the column names in your column mapping. For example:

    CREATE TABLE tab1 (id NUMBER, data CLOB INVISIBLE);
    INSERT INTO tab1 VALUES (1, 'a');              -- ERROR: ORA-00913
    INSERT INTO tab1 (id, data) VALUES (1, 'a');   -- OK
    

    You can change the column visibility using ALTER TABLE. The invisible column can be part of an index, including primary key and unique index.

  3. Enter any optional Extract parameters that are recommended for your configuration. You can edit this file at any point before starting processing by using the EDIT PARAMS command in GGSCI.
  4. Save and close the file.
Add the Local Trail

These steps add the local trail to which the primary Extract writes captured data.

On the source system, issue the ADD EXTTRAIL command on the command line:

ADD EXTTRAIL pathname, EXTRACT group name

Where:

  • EXTTRAIL specifies that the trail is to be created on the local system.

  • pathname is the relative or fully qualified name of the trail, including the two-character name.

  • EXTRACT group name is the name of the primary Extract group.

Note:

Oracle GoldenGate creates this trail automatically during processing.

Example 3-1

ADD EXTTRAIL /north/ea, EXTRACT exte
Add the Remote Trail

Although it is read by Replicat, this trail must be associated with the Extract, so it must be added on the source system, not the target.

These steps add the remote trail:

On the source system, issue the following command:

ADD RMTTRAIL pathname, EXTRACT group name

Where:

  • RMTTRAIL specifies that the trail is to be created on the target system.

  • pathname is the relative or fully qualified name of the trail, including the two-character name.

  • EXTRACT group name is the name of the data-pump Extract group.

Note:

Oracle GoldenGate creates this trail automatically during processing.

Example 3-2

ADD RMTTRAIL /south/re, EXTRACT exts

Setting up the Auto-Capture Mode

The automatic Extract mode captures changes for all the tables that are enabled for logical replication.

The auto-capture mode is available from Oracle Database 21c and higher. A table is enabled for logical replication or auto-capture when:
  • It has sufficient ID or scheduling-key supplemental log data at table or schema level.

  • It has primary key (PK), unique identifier (UI), and foreign key (FK) supplemental log data, as well as ALLKEYS supplemental log data. ALLKEYS is required in addition to PK, UI, and FK data because, at the schema-wide supplemental logging level, it logs all unique keys in the absence of a primary key.
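Schema-level supplemental log data can be enabled with GGSCI or Admin Client commands; in the following sketch, the credential alias and schema name are illustrative:

    DBLOGIN USERIDALIAS ggeast
    ADD SCHEMATRANDATA hr ALLCOLS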

Benefits of Using the Auto Extract Mode

  • The captured table set is easy to configure.

  • When captured table set changes, you don't need to update the TABLE/TABLEEXCLUDE parameter, or stop and restart Extract.

Enabling Auto Capture

See TRANLOGOPTIONS for syntax and usage.

Use the following DDL statements to enable auto-capture at the table level:

CREATE/ALTER TABLE table_name ENABLE LOGICAL REPLICATION ALLKEYS

or

CREATE/ALTER TABLE table_name ENABLE LOGICAL REPLICATION ALLOWNONVALIDATEDKEYS
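In the Extract parameter file, auto-capture is then selected with the TRANLOGOPTIONS parameter. The following is a minimal sketch; the group name, credential alias, and trail path are illustrative:

    EXTRACT autocap
    USERIDALIAS ggeast
    TRANLOGOPTIONS ENABLE_AUTO_CAPTURE
    EXTTRAIL /ggs/dirdat/at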

Mining Mode Toggling

Mining mode toggling is not supported. After you create an Extract that mines in CDB$ROOT, that session continues to mine in CDB$ROOT; likewise, if you create an Extract that mines in a PDB, that session continues to mine in that particular PDB.

However, if you need to migrate an existing Oracle GoldenGate session from mining in CDB$ROOT to mining in a specific PDB, there is a migration process that you can follow:
  1. Register a new per-PDB Extract. For example, the SCN returned is X.

  2. Let the old root-level Extract mine past SCN X, and note the SCN at which it stops, for example, Y.

  3. Stop the old Extract.

  4. Alter the new Extract so that its current SCN is set to Y.

  5. Start the new Extract.

The new Extract picks up from where the old Extract left off.

Note:

There may be some duplicate transactions at SCN Y if multiple transactions committed at the same SCN Y. However, Replicat can handle duplicate transactions by default.
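The migration process can be sketched with the following commands; the group names, credential alias, and SCN value are illustrative. REGISTER EXTRACT returns SCN X; stop the old root-level Extract only after it has mined past X, and use the SCN at which it stopped as Y.

    DBLOGIN USERIDALIAS ggeast
    REGISTER EXTRACT extpdb DATABASE
    STOP EXTRACT extroot
    ALTER EXTRACT extpdb, SCN Y
    START EXTRACT extpdb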

Applying to Pluggable Databases

Replicat can only connect and apply to one pluggable database. To specify the correct one, use a SQL*Net connect string for the database user that you specify with the USERID or USERIDALIAS parameter. For example: GGADMIN@DBEAST.
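For example, a credential that connects Replicat to a specific pluggable database might be added to the credential store as follows; the user, net service name, and alias are illustrative:

    ALTER CREDENTIALSTORE ADD USER ggadmin@DBEAST PASSWORD password ALIAS ggeast
    DBLOGIN USERIDALIAS ggeast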

In the parameter file, specify only the schema.object in the TARGET portion of the MAP statements. In the MAP portion, identify source objects captured from more than one pluggable database with their three-part names or use the SOURCECATALOG parameter with two-part names. The following is an example of this configuration.

SOURCECATALOG pdbeast
MAP hr.employees, TARGET hr.employees;
MAP hr.seq, TARGET hr.seq;
SOURCECATALOG pdbwest
MAP hr.*, TARGET hr.*;
MAP hr.seq, TARGET hr.*;

The following is an example without the use of SOURCECATALOG to identify the source pluggable database. In this case, the source objects are specified with their three-part names.

MAP pdbeast.hr.employees, TARGET hr.*;
MAP pdbeast.hr.seq, TARGET hr.*;

To configure replication from multiple source pluggable databases to multiple target pluggable databases, you can configure parallel Extract and Replicat streams, each handling data for one pluggable database. Alternatively, you can configure one Extract capturing from multiple source pluggable databases, which writes to one trail that is read by multiple Replicat groups, each applying to a different target pluggable database.

Yet another alternative is to use one Extract writing to multiple trails, with each trail read by a Replicat assigned to a specific target pluggable database.

See Configuring Replicat for steps for Replicat configuration at the PDB level and adding Replicat groups.

Add the Replicat Group

These steps add the Replicat group that reads the remote trail and applies the data changes to the target Oracle Database.

  1. Connect to the deployment from the Admin Client using the CONNECT command.
  2. If using integrated Replicat, issue the DBLOGIN command to log into the database.
    DBLOGIN USERIDALIAS alias

    Where: alias specifies the alias of the database login credential that is assigned to Replicat. This credential must exist in the Oracle GoldenGate credential store. For more information, see Establishing Oracle GoldenGate Credentials.

  3. Issue the ADD REPLICAT command with the following syntax.
    ADD REPLICAT group name, [INTEGRATED,] EXTTRAIL pathname

    Where:

    • group name is the name of the Replicat group.

    • INTEGRATED creates an integrated Replicat group.

    • EXTTRAIL pathname is the relative or fully qualified name of the remote trail, including the two-character name.

      For more information, see Reference for Oracle GoldenGate.

Example 3-3 Adds a Nonintegrated Replicat

ADD REPLICAT repe, EXTTRAIL east/ea

Example 3-4 Adds an Integrated Replicat

ADD REPLICAT repn, INTEGRATED, EXTTRAIL north/em

Excluding Objects from the Configuration

To exclude pluggable databases, schemas, and objects from the configuration, you can use the CATALOGEXCLUDE, SCHEMAEXCLUDE, TABLEEXCLUDE, MAPEXCLUDE, and EXCLUDEWILDCARDOBJECTSONLY parameters.