Case 1: Capture from One Source Database in Real-time Mode
This example captures changes from source database DBMS1 by deploying an Extract at a
downstream mining database DBMSCAP.
The example assumes that you created the necessary standby redo log files as shown in Configure Extract for a Downstream Deployment.
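As a reminder, standby redo log groups at the downstream mining database are added with ALTER DATABASE ADD STANDBY LOGFILE. The following is only a hypothetical sketch; the group numbers, file paths, and size are placeholders. Size the standby logs to match the source online redo logs, and create at least one more standby group than the source has online groups:

    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oracle/dbs/slog4.rdo') SIZE 500M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/oracle/dbs/slog5.rdo') SIZE 500M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/oracle/dbs/slog6.rdo') SIZE 500M;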
It also assumes that the following users exist:
- User GGADM1 in DBMS1, whose credentials Extract will use to fetch data and metadata from DBMS1. This user has the alias of ggadm1 in the Oracle GoldenGate credential store and logs in as ggadm1@dbms1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the source database.
- User GGADMCAP in DBMSCAP, whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database DBMSCAP. This user has the alias of ggadmcap in the Oracle GoldenGate credential store and logs in as ggadmcap@dbmscap. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the mining database.
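As a sketch, those grants might be issued as follows, run by a privileged user at each database; the single-argument call relies on the procedure's default privilege options:

    -- At the source database DBMS1:
    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADM1');

    -- At the downstream mining database DBMSCAP:
    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMCAP');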
Prepare the Mining Database to Archive its Local Redo
To prepare the mining database to archive its local redo:
- The downstream mining database must be in archive log mode. You can do this by issuing the following statements:

    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;

- At the downstream mining database, set log_archive_dest_1 to archive local redo:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local VALID_FOR=(ONLINE_LOGFILE, PRIMARY_ROLE)'

- Enable log_archive_dest_1:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE
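To confirm these settings at DBMSCAP, you can query the standard dynamic performance views; for example, the expected values below reflect the configuration above:

    -- Expect LOG_MODE = ARCHIVELOG:
    SELECT log_mode FROM v$database;

    -- Expect the local archiving destination to be VALID:
    SELECT dest_id, destination, status FROM v$archive_dest WHERE dest_id = 1;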
Prepare the Mining Database to Archive Redo Received in Standby Redo Logs from the Source Database
To prepare the mining database to archive the redo received in standby redo logs from the source database:
- At the downstream mining database, set log_archive_dest_2 as shown in the following example:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms1 VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'

- Enable log_archive_dest_2 as shown in the following example:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE

- Set DG_CONFIG at the downstream mining database:

    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)'
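Assuming the standby redo logs were created as described earlier, a quick check at DBMSCAP confirms that they exist and that the standby-logfile destination is active:

    -- Standby redo logs that receive redo from DBMS1:
    SELECT group#, thread#, bytes, status FROM v$standby_log;

    -- Expect the standby-logfile destination to be VALID:
    SELECT dest_id, destination, status FROM v$archive_dest WHERE dest_id = 2;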
Prepare the Source Database to Send Redo to the Mining Database
To prepare the source database to send redo to the mining database:
- Make sure that the source database is running with the required compatibility. The minimum compatibility setting required for integrated capture is 11.1.0.0.0:

    SELECT name, value FROM v$parameter WHERE name = 'compatible';

- Set DG_CONFIG at the source database:

    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)';

- Set up redo transport at the source database:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';

- Enable the downstream destination:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
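After enabling the destination, you can force a log switch at the source and verify that redo ships to DBMSCAP without errors; for example:

    -- Trigger an immediate send of the current online redo log:
    ALTER SYSTEM SWITCH LOGFILE;

    -- Expect STATUS = VALID and an empty ERROR column for the downstream destination:
    SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;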
Set up Extract (ext1) on DBMSCAP
To set up Extract (ext1) on DBMSCAP:
- Register Extract with the downstream mining database. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.

    DBLOGIN USERIDALIAS ggadm1
    MININGDBLOGIN USERIDALIAS ggadmcap
    REGISTER EXTRACT ext1 DATABASE

- Create Extract at the downstream mining database:

    ADD EXTRACT ext1 INTEGRATED TRANLOG BEGIN NOW

- Edit the Extract parameter file ext1.prm. The following lines must be present to take advantage of real-time capture (see the complete sample after this list):

    USERIDALIAS ggadm1
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

- Start Extract:

    START EXTRACT ext1
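Putting the required lines together, a complete ext1.prm might look like the following sketch. The trail name ./dirdat/lt and the TABLE specification hr.* are placeholders for illustration only; substitute your own trail and table list:

    EXTRACT ext1
    USERIDALIAS ggadm1
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    -- Placeholder local trail and capture scope:
    EXTTRAIL ./dirdat/lt
    TABLE hr.*;

The trail itself would be added in GGSCI before starting Extract, for example with ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1.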
Note:
You can create multiple Extracts running in real-time mode in the downstream mining database, as long as they all capture data from the same source database, such as capturing changes for database DBMS1 in the preceding example.