C Example Downstream Mining Configuration

This appendix contains examples for preparing a downstream Oracle mining database to support Extract in integrated capture mode.

For instructions on configuring a downstream mining database, see Configuring a Downstream Mining Database.

Topics:

C.1 Example 1: Capturing from One Source Database in Real-time Mode

This example captures changes from source database DBMS1 by deploying an integrated capture session at a downstream mining database DBMSCAP.

Note:

The example assumes that you created the necessary standby redo log files as shown in Configuring a Downstream Mining Database.

This example assumes that the following users exist (a sketch of creating the corresponding credential store entries follows this list):

  • User GGADM1 in DBMS1 whose credentials Extract will use to fetch data and metadata from DBMS1. This user has the alias of ggadm1 in the Oracle GoldenGate credential store and logs in as ggadm1@dbms1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the source database.

  • User GGADMCAP in DBMSCAP whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database DBMSCAP. This user has the alias of ggadmcap in the Oracle GoldenGate credential store and logs in as ggadmcap@dbmscap. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the mining database.
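
For reference, credential store entries like these are typically created with the GGSCI credential store commands, as in the following minimal sketch. The password prompts are illustrative, and your credential store location may differ:

GGSCI> ADD CREDENTIALSTORE
GGSCI> ALTER CREDENTIALSTORE ADD USER ggadm1@dbms1 ALIAS ggadm1
Password: ********
GGSCI> ALTER CREDENTIALSTORE ADD USER ggadmcap@dbmscap ALIAS ggadmcap
Password: ********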

C.1.1 Prepare the Mining Database to Archive its Local Redo

To prepare the mining database to archive its local redo:

  1. Ensure that the downstream mining database is in archive log mode. You can enable archive log mode by issuing the following DDL.
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    
  2. At the downstream mining database, set log_archive_dest_1 to archive local redo.
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local
    VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
    
  3. Enable log_archive_dest_1.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
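
To confirm the result, archive log mode can be verified with a standard query against V$DATABASE (the output shown is what you would expect to see):

select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG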
    

C.1.2 Prepare the Mining Database to Archive Redo Received in Standby Redo Logs from the Source Database

To prepare the mining database to archive the redo received in standby redo logs from the source database:

  1. At the downstream mining database, set log_archive_dest_2 as shown in the following example.
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms1
    VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';
    
  2. Enable log_archive_dest_2 as shown in the following example.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    
  3. Set DG_CONFIG at the downstream mining database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)';
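
If you want to double-check the destination attributes before proceeding, a query along these lines against V$ARCHIVE_DEST shows each location and its VALID_FOR settings (a routine check, not required by the procedure):

select dest_name, destination, valid_type, valid_role
from v$archive_dest
where dest_name in ('LOG_ARCHIVE_DEST_1','LOG_ARCHIVE_DEST_2');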
    

C.1.3 Prepare the Source Database to Send Redo to the Mining Database

To prepare the source database to send redo to the mining database:

  1. Make sure that the source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
    
    NAME            VALUE
    ---------       ---------------------
    compatible      11.1.0.7.0
    

    The minimum compatibility setting required for integrated capture is 11.1.0.0.0.

  2. Set DG_CONFIG at the source database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)';
    
  3. Set up redo transport at the source database.
    ALTER SYSTEM
    SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
    
  4. Enable the downstream destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
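
Optionally, you can confirm at the source database that transport to DBMSCAP is healthy; V$ARCHIVE_DEST_STATUS is the standard view for this (shown as a sketch of a routine check):

select dest_name, status, error
from v$archive_dest_status
where dest_name = 'LOG_ARCHIVE_DEST_2';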
    

C.1.4 Set up Integrated Capture (ext1) on DBMSCAP

To set up integrated capture (ext1) on DBMSCAP:

  1. Register Extract with the downstream mining database. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    GGSCI> DBLOGIN USERIDALIAS ggadm1
    GGSCI> MININGDBLOGIN USERIDALIAS ggadmcap
    GGSCI> REGISTER EXTRACT ext1 DATABASE
    
  2. Create Extract at the downstream mining database.
    GGSCI> ADD EXTRACT ext1 INTEGRATED TRANLOG, BEGIN NOW
    
  3. Edit the Extract parameter file ext1.prm. The following lines must be present to take advantage of real-time capture. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    USERIDALIAS ggadm1
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    
  4. Start Extract.
    GGSCI> START EXTRACT ext1
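
Once started, the status of the Extract can be checked from GGSCI:

GGSCI> INFO EXTRACT ext1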
    

Note:

You can create multiple Extracts running in real-time integrated capture mode in the downstream mining database, as long as they are all capturing data from the same source database, such as capturing changes for database DBMS1 in the preceding example.

C.2 Example 2: Capturing from Multiple Sources in Archive-log-only Mode

The following example captures changes from databases DBMS1 and DBMS2 by deploying an integrated capture session at a downstream mining database DBMSCAP.

It assumes the following users:

  • User GGADM1 in DBMS1 whose credentials Extract will use to fetch data and metadata from DBMS1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS1.

  • User GGADM2 in DBMS2 whose credentials Extract will use to fetch data and metadata from DBMS2. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS2.

  • User GGADMCAP in DBMSCAP whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the downstream mining database DBMSCAP.

This procedure also assumes that the downstream mining database is configured in archive log mode.

C.2.1 Prepare the Mining Database to Archive its Local Redo

To prepare the mining database to archive its local redo:

  1. Ensure that the downstream mining database is in archive log mode. You can enable archive log mode by issuing the following DDL.
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    
  2. At the downstream mining database, set log_archive_dest_1 to archive local redo.
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local
    VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
    
  3. Enable log_archive_dest_1.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
    

C.2.2 Prepare the Mining Database to Archive Redo from the Source Database

Set DG_CONFIG at the downstream mining database.

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbms2,dbmscap)';
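
To confirm the setting took effect, the parameter can be queried (a routine check):

select value from v$parameter where name = 'log_archive_config';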

C.2.3 Prepare the First Source Database to Send Redo to the Mining Database

To prepare the first source database to send redo to the mining database:

  1. Make sure that the DBMS1 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
    
    NAME            VALUE
    ---------       ---------------------
    compatible      11.1.0.0.0
    

    The minimum compatibility setting required for integrated capture is 11.1.0.0.0.

  2. Set DG_CONFIG at DBMS1 source database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1, dbmscap)'; 
    
  3. Set up redo transport at the DBMS1 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database. (The TEMPLATE format specifiers are explained after these steps.)
    ALTER SYSTEM
    SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER
    TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
    
  4. Enable the downstream destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
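
In the TEMPLATE clause, %t, %s, and %r are the standard Oracle format specifiers for the redo thread number, log sequence number, and resetlogs ID. With the template above, redo from thread 1, sequence 215, in an incarnation with resetlogs ID 804626570 would land at the mining database as (illustrative values):

/usr/orcl/arc_dest/dbms1/dbms1_arch_1_215_804626570.log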
    

C.2.4 Prepare the Second Source Database to Send Redo to the Mining Database

To prepare the second source database to send redo to the mining database:

  1. Make sure that the DBMS2 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
    
    NAME                    VALUE
    ---------               ---------------------
    compatible              11.1.0.0.0
    

    The minimum compatibility setting required for integrated capture is 11.1.0.0.0.

  2. Set DG_CONFIG at DBMS2 source database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms2, dbmscap)'; 
    
  3. Set up redo transport at the DBMS2 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database.
    ALTER SYSTEM
    SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER
    TEMPLATE=/usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
    
  4. Enable the downstream destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    

C.2.5 Set up Extracts at Downstream Mining Database

These steps set up the Extracts at the downstream mining database to capture from the archived logs sent by DBMS1 and DBMS2. The procedure mirrors the archive-log-only setup in Example 3 (see C.3.6.1 and C.3.6.2); a condensed sketch follows.
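
The following sketch shows ext1 for DBMS1, assuming credential store aliases analogous to Example 1 (ggadm1 and ggadmcap; the alias names are illustrative here). The ext2 setup is identical except that it uses the alias for GGADM2. Register the Extract, add it as an integrated Extract, and set downstream_real_time_mine to N in the parameter file:

GGSCI> DBLOGIN USERIDALIAS ggadm1
GGSCI> MININGDBLOGIN USERIDALIAS ggadmcap
GGSCI> REGISTER EXTRACT ext1 DATABASE
GGSCI> ADD EXTRACT ext1 INTEGRATED TRANLOG, BEGIN NOW

The parameter file ext1.prm would contain:

USERIDALIAS ggadm1
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)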

C.3 Example 3: Capturing from Multiple Sources with Mixed Real-time and Archive-log-only Mode

The following example captures changes from databases DBMS1, DBMS2, and DBMS3 by deploying an integrated capture session at a downstream mining database DBMSCAP.

Note:

This example assumes that you created the necessary standby redo log files as shown in Configuring a Downstream Mining Database.

It assumes the following users:

  • User GGADM1 in DBMS1 whose credentials Extract will use to fetch data and metadata from DBMS1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS1.

  • User GGADM2 in DBMS2 whose credentials Extract will use to fetch data and metadata from DBMS2. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS2.

  • User GGADM3 in DBMS3 whose credentials Extract will use to fetch data and metadata from DBMS3. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS3.

  • User GGADMCAP in DBMSCAP whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the downstream mining database DBMSCAP.

This procedure also assumes that the downstream mining database is configured in archive log mode.

In this example, the redo sent by DBMS3 will be mined in real-time mode, whereas the redo data sent from DBMS1 and DBMS2 will be mined in archive-log-only mode.

C.3.1 Prepare the Mining Database to Archive its Local Redo

To prepare the mining database to archive its local redo:

  1. Ensure that the downstream mining database is in archive log mode. You can enable archive log mode by issuing the following DDL.
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    
  2. At the downstream mining database, set log_archive_dest_1 to archive local redo.
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local
    VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
    
  3. Enable log_archive_dest_1.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
    

C.3.2 Prepare the Mining Database to Accept Redo from the Source Databases

Because redo data is being accepted in the standby redo logs of the downstream mining database, the appropriate number of correctly sized standby redo logs must exist. If you did not configure the standby logs, see Configuring a Downstream Mining Database.

  1. At the downstream mining database, set the second archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter as shown in the following example. This destination is needed to archive the redo received in the standby redo logs.
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms3
    VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';
    
  2. Enable the LOG_ARCHIVE_DEST_STATE_2 initialization parameter that corresponds with the LOG_ARCHIVE_DEST_2 parameter as shown in the following example.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    
  3. Set DG_CONFIG at the downstream mining database to accept redo data from all of the source databases.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbms2,dbms3,dbmscap)';
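
To confirm that the standby redo logs are in place before redo starts arriving, V$STANDBY_LOG can be queried (a routine check):

select group#, thread#, sequence#, bytes, status from v$standby_log;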
    

C.3.3 Prepare the First Source Database to Send Redo to the Mining Database

To prepare the first source database to send redo to the mining database:

  1. Make sure that the DBMS1 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
    
    NAME            VALUE
    ---------       ---------------------
    compatible      11.1.0.0.0
    

    The minimum compatibility setting required for integrated capture is 11.1.0.0.0.

  2. Set DG_CONFIG at DBMS1 source database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1, dbmscap)'; 
    
  3. Set up redo transport at the DBMS1 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database.
    ALTER SYSTEM
    SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER
    TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
    
  4. Enable the downstream destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    

C.3.4 Prepare the Second Source Database to Send Redo to the Mining Database

To prepare the second source database to send redo to the mining database:

  1. Make sure that the DBMS2 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
    
    NAME                    VALUE
    ---------               ---------------------
    compatible              11.1.0.0.0
    

    The minimum compatibility setting required for integrated capture is 11.1.0.0.0.

  2. Set DG_CONFIG at DBMS2 source database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms2, dbmscap)'; 
    
  3. Set up redo transport at the DBMS2 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database.
    ALTER SYSTEM
    SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER
    TEMPLATE=/usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
    
  4. Enable the downstream destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    

C.3.5 Prepare the Third Source Database to Send Redo to the Mining Database

To prepare the third source database to send redo to the mining database:

  1. Make sure that the DBMS3 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
    
    NAME                    VALUE
    ---------               ---------------------
    compatible              11.1.0.0.0
    

    The minimum compatibility setting required for integrated capture is 11.1.0.0.0.

  2. Set DG_CONFIG at DBMS3 source database.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms3, dbmscap)'; 
    
  3. Set up redo transport at DBMS3 source database. Because DBMS3 is the source that will send its online redo logs to the standby redo logs at the downstream mining database, do not specify a TEMPLATE clause.
    ALTER SYSTEM
    SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
    
  4. Enable the downstream destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    

C.3.6 Set up Extracts at Downstream Mining Database

These steps set up the Extracts at the downstream mining database: ext1 and ext2 capture from the archived logs sent by DBMS1 and DBMS2, and ext3 captures in real-time mode from the online redo sent by DBMS3.

C.3.6.1 Set up Extract (ext1) to Capture Changes from Archived Logs Sent by DBMS1

Perform the following steps on the DBMSCAP downstream mining database.

  1. Register Extract with DBMSCAP for the DBMS1 source database. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    GGSCI> DBLOGIN USERIDALIAS ggadm1
    GGSCI> MININGDBLOGIN USERIDALIAS ggadmcap
    GGSCI> REGISTER EXTRACT ext1 DATABASE
    
  2. Add Extract at the mining database DBMSCAP.
    GGSCI> ADD EXTRACT ext1 INTEGRATED TRANLOG, BEGIN NOW
    
  3. Edit the Extract parameter file ext1.prm. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    USERIDALIAS ggadm1
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
    
  4. Start Extract.
    GGSCI> START EXTRACT ext1
    

C.3.6.2 Set up Extract (ext2) to Capture Changes from Archived Logs Sent by DBMS2

Perform the following steps on the DBMSCAP downstream mining database.

  1. Register Extract with the mining database for source database DBMS2. In the credential store, the alias name of ggadm2 is linked to a user connect string of ggadm2@dbms2. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    GGSCI> DBLOGIN USERIDALIAS ggadm2
    GGSCI> MININGDBLOGIN USERIDALIAS ggadmcap
    GGSCI> REGISTER EXTRACT ext2 DATABASE
    
  2. Create Extract at the mining database.
    GGSCI> ADD EXTRACT ext2 INTEGRATED TRANLOG, BEGIN NOW
    
  3. Edit the Extract parameter file ext2.prm. In the credential store, the alias name of ggadm2 is linked to a user connect string of ggadm2@dbms2. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    USERIDALIAS ggadm2
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
    
  4. Start Extract.
    GGSCI> START EXTRACT ext2
    

C.3.6.3 Set up Extract (ext3) to Capture Changes in Real-time Mode from Online Logs Sent by DBMS3

Perform the following steps on the DBMSCAP downstream mining database.

  1. Register Extract with the mining database for source database DBMS3. In the credential store, the alias name of ggadm3 is linked to a user connect string of ggadm3@dbms3. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    GGSCI> DBLOGIN USERIDALIAS ggadm3
    GGSCI> MININGDBLOGIN USERIDALIAS ggadmcap
    GGSCI> REGISTER EXTRACT ext3 DATABASE
    
  2. Create Extract at the mining database.
    GGSCI> ADD EXTRACT ext3 INTEGRATED TRANLOG, BEGIN NOW
    
  3. Edit the Extract parameter file ext3.prm. To enable real-time mining, you must set downstream_real_time_mine to Y. In the credential store, the alias name of ggadm3 is linked to a user connect string of ggadm3@dbms3. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    USERIDALIAS ggadm3
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    
  4. Start Extract.
    GGSCI> START EXTRACT ext3
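
With all three Extracts started, their states can be confirmed together from GGSCI (the output shown is illustrative):

GGSCI> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:00      00:00:07
EXTRACT     RUNNING     EXT2        00:00:00      00:00:04
EXTRACT     RUNNING     EXT3        00:00:00      00:00:02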
    

Note:

You can create multiple Extracts running in real-time integrated capture mode in the downstream mining database, as long as they are all capturing data from the same source database, such as all capturing changes for database DBMS3 in the preceding example.