7 Configuring a Downstream Mining Database
For examples of the downstream mining configuration, see the following:
- Example 1: Capturing from One Source Database in Real-time Mode
- Example 2: Capturing from Multiple Sources in Archive-log-only Mode
- Example 3: Capturing from Multiple Sources with Mixed Real-time and Archive-log-only Mode
Topics:
- Evaluating Capture Options for a Downstream Deployment
  Downstream deployment allows you to offload the source database.
- Preparing the Source Database for Downstream Deployment
  The source database ships its redo logs to a downstream database, and Extract uses the logmining server at the downstream database to mine the redo logs.
- Preparing the Downstream Mining Database
  A downstream mining database can accept both archived logs and online redo logs from a source database.
- Enabling Downstream Extract Registration Using ADG Redirection in Downstream Configuration
  Oracle GoldenGate supports downstream Extract registration using ADG redirection in a downstream mining database configuration.
- Example 1: Capturing from One Source Database in Real-time Mode
  This example captures changes from source database DBMS1 by deploying an Extract at a downstream mining database DBMSCAP.
- Example 2: Capturing from Multiple Sources in Archive-log-only Mode
  This example captures changes from databases DBMS1 and DBMS2 by deploying an Extract at a downstream mining database DBMSCAP.
- Example 3: Capturing from Multiple Sources with Mixed Real-time and Archive-log-only Mode
  This example captures changes from databases DBMS1, DBMS2, and DBMS3 by deploying an Extract at a downstream mining database DBMSCAP.
Evaluating Capture Options for a Downstream Deployment
Downstream deployment allows you to offload the source database.
A downstream mining database can accept both archived logs and online redo logs from a source database.
Multiple source databases can send their redo data to a single downstream database; however, the downstream mining database can accept online redo logs from only one of those source databases. The rest of the source databases must ship archived logs.
When online logs are shipped to the downstream database, real-time capture by Extract is possible. Changes are captured as though Extract is reading from the source logs. In order to accept online redo logs from a source database, the downstream mining database must have standby redo logs configured.
When using a downstream mining configuration, the source database and mining database must have the same endianness and bit size, which is 64 bits. For example, if the source database runs on Linux 64-bit, the mining database can run on Windows 64-bit, because the two platforms have the same endianness and bit size.
Parent topic: Configuring a Downstream Mining Database
Preparing the Source Database for Downstream Deployment
The source database ships its redo logs to a downstream database, and Extract uses the logmining server at the downstream database to mine the redo logs.
This section guides you through the process of:
- Creating the Source User Account
  There must be an Extract user on the source database. Extract uses the credentials of this user to do metadata queries and to fetch column values as needed from the source database.
- Configuring Redo Transport from Source to Downstream Mining Database
  To set up the transfer of redo log files from a source database to the downstream mining database, and to prepare the downstream mining database to accept these redo log files, perform the steps given in this topic.
Parent topic: Configuring a Downstream Mining Database
Creating the Source User Account
There must be an Extract user on the source database. Extract uses the credentials of this user to do metadata queries and to fetch column values as needed from the source database.
The source user is specified by the USERIDALIAS parameter.
To assign the required privileges, follow the procedure in Establishing Oracle GoldenGate Credentials.
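As a sketch, creating such a source user and granting it the capture privileges might look like the following. The user name ggadm1, password placeholder, and tablespace are illustrative assumptions, not values mandated by this guide:

```sql
-- Illustrative only: user name, password, and tablespace are assumptions.
CREATE USER ggadm1 IDENTIFIED BY a_strong_password
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;

-- Grant the privileges Extract needs for metadata queries and fetches.
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADM1');
```

The DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure is the mechanism referenced in Establishing Oracle GoldenGate Credentials; consult that topic for the exact options for your database version.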
Parent topic: Preparing the Source Database for Downstream Deployment
Configuring Redo Transport from Source to Downstream Mining Database
To set up the transfer of redo log files from a source database to the downstream mining database, and to prepare the downstream mining database to accept these redo log files, perform the steps given in this topic.
The following summarizes the rules for supporting multiple sources sending redo to a single downstream mining database:
- Only one source database can be configured to send online redo to the standby redo logs at the downstream mining database. The log_archive_dest_n setting for this source database should not have a TEMPLATE clause.
- Source databases that are not sending online redo to the standby redo logs of the downstream mining database must have a TEMPLATE clause specified in the log_archive_dest_n parameter.
- Each of the source databases that sends redo to the downstream mining database must have a unique DBID. You can select the DBID column from the v$database view of these source databases to ensure that the DBIDs are unique.
- The FAL_SERVER value must be set to the downstream mining database. FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is a list of Oracle Net service names, which are assumed to be configured properly on the standby database system to point to the desired FAL servers. The list contains the net service name of any database that can potentially ship redo to the downstream database.
- When using redo transport, there could be a delay in processing redo due to network latency. For Extract, this latency is monitored by measuring the delay between LCRs received from the source database and reporting it. If the latency exceeds a threshold, a warning message appears in the report file, and a subsequent informational message appears when the lag drops back to normal values. The default threshold is 10 seconds.
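To illustrate these rules, the initialization parameters at the source databases might look like the following sketch. The service name dbmscap, the destination number, and the TEMPLATE path are assumptions for illustration only:

```sql
-- Source shipping online redo to the downstream standby redo logs (no TEMPLATE):
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP ASYNC NOREGISTER
  VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';

-- Source shipping archived logs only (TEMPLATE clause required):
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP ASYNC NOREGISTER
  VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
  TEMPLATE=/usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log
  DB_UNIQUE_NAME=dbmscap';

-- At each source, point FAL_SERVER at the downstream mining database:
ALTER SYSTEM SET FAL_SERVER=DBMSCAP;
```

The first form feeds the standby redo logs for real-time capture; the second ships archived logs to the path given by TEMPLATE.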
Note:
The archived logs shipped from the source databases are called foreign archived logs. You must not use the recovery area at the downstream mining database to store foreign archived logs. Such a configuration is not supported by Extract. Foreign archived logs stored in the Flash Recovery Area (FRA) are not automatically deleted by RMAN jobs. These archived logs must be manually purged.
These instructions take into account the requirements to ship redo from multiple sources, if required. You must configure an Extract process for each of those sources.
To Configure Redo Transport
Parent topic: Preparing the Source Database for Downstream Deployment
Preparing the Downstream Mining Database
A downstream mining database can accept both archived logs and online redo logs from a source database.
The following sections explain how to prepare the downstream mining database:
- Creating the Downstream Mining User Account
- Registering Extract with the Mining Database
  You need to create a database logmining server to capture redo data when using a downstream database.
- Configuring the Mining Database to Archive Local Redo Log Files
- Configure the Wallet for the Downstream Mining Database
- Preparing a Downstream Mining Database for Real-time Capture
Parent topic: Configuring a Downstream Mining Database
Creating the Downstream Mining User Account
When using a downstream mining configuration, there must be an Extract mining user on the downstream database. The mining Extract process uses the credentials of this user to interact with the downstream logmining server. The downstream mining user is specified by the TRANLOGOPTIONS parameter with the MININGUSERALIAS option. See Establishing Oracle GoldenGate Credentials to assign the correct credentials for the version of your database.
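For instance, an Extract parameter file might reference both the source alias and the mining alias as in this sketch; the alias names ggadm1 and ggadmcap are assumptions:

```
EXTRACT ext1
-- Source database credentials (USERIDALIAS) and
-- downstream mining database credentials (MININGUSERALIAS); names are illustrative.
USERIDALIAS ggadm1
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
```

Both aliases must already exist in the Oracle GoldenGate credential store.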
Parent topic: Preparing the Downstream Mining Database
Registering Extract with the Mining Database
You need to create a database logmining server to capture redo data when using a downstream database.
The creation of the logmining server captures a snapshot of the source database in the redo stream of the source database. In a source multitenant container database, you register Extract with each of the pluggable databases that you want to include for capture.
WARNING:
Make certain that you know the earliest SCN of the log stream at which you want Extract to begin processing. Extract cannot have a starting SCN value that is lower than the first SCN that is specified when the underlying database capture process is created with the REGISTER EXTRACT command. You can use the SCN option of REGISTER EXTRACT to specify that first SCN.
Note:
The register process may take a few to several minutes to complete, even though the REGISTER command returns immediately.
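In GGSCI or the Admin Client, registration with an explicit starting SCN might be sketched as follows; the Extract name, aliases, and SCN value are illustrative assumptions:

```
-- Log in to the source and the downstream mining database (aliases are assumptions).
DBLOGIN USERIDALIAS ggadm1
MININGDBLOGIN USERIDALIAS ggadmcap

-- Register the Extract; the SCN value shown is a placeholder.
REGISTER EXTRACT ext1 DATABASE SCN 778899
```

Any later ADD EXTRACT or ALTER EXTRACT positioning must not specify an SCN lower than the one used here.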
Parent topic: Preparing the Downstream Mining Database
Configuring the Mining Database to Archive Local Redo Log Files
This procedure configures the downstream mining database to archive redo data in its online redo logs. These are redo logs that are generated at the downstream mining database.
Archiving must be enabled at the downstream mining database if you want to run Extract in real-time integrated capture mode, but it is also recommended for archive-log-only capture. Extract in integrated capture mode writes state information in the database. Archiving and regular backups will enable you to recover this state information in case there are disk failures or corruption at the downstream mining database.
To Archive Local Redo Log Files
For more information about these initialization parameters, see Set Primary Database Initialization Parameters in the Oracle Data Guard Concepts and Administration guide.
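A minimal sketch of the kind of settings involved at the downstream mining database, assuming a local archive destination of /u01/arch and a database already running in ARCHIVELOG mode, might be:

```sql
-- Archive the mining database's own online redo logs to a local destination.
-- The path /u01/arch is an assumption for illustration.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u01/arch
  VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
```

VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) restricts this destination to the mining database's local redo, keeping it separate from any destination used for redo received from the source.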
Parent topic: Preparing the Downstream Mining Database
Configure the Wallet for the Downstream Mining Database
When TDE is enabled on the source database and the downstream mining database, the wallet or keys on the downstream mining database must be the same as on the source database.
1. Shut down the downstream database using the shutdown immediate command.
2. Remove the wallet directory on the downstream database:
   rm $T_WORK/wallet/*
3. Copy $T_WORK/wallet/* from the source database to the downstream database.
4. Restart the downstream database.
5. Run a checksum on the source database and the downstream database to verify that the wallets match:
   cksum $T_WORK/wallet/*
Parent topic: Preparing the Downstream Mining Database
Preparing a Downstream Mining Database for Real-time Capture
This procedure is only required if you want to use real-time capture at a downstream mining database. It is not required for archive-log-only capture mode. To use real-time capture, it is assumed that the downstream database has already been configured to archive its local redo data, as shown in Configuring the Mining Database to Archive Local Redo Log Files.
Parent topic: Preparing the Downstream Mining Database
Create the Standby Redo Log Files
The following steps outline the procedure for adding standby redo log files to the downstream mining database. These rules apply when creating the standby redo logs:
- Each standby redo log file must be at least as large as the largest redo log file of the redo source database. For administrative ease, Oracle recommends that all redo log files at the source database and the standby redo log files at the downstream mining database be of the same size.
- The standby redo log must have at least one more redo log group than the redo log at the source database, for each redo thread at the source database.
The specific steps and SQL statements that are required to add standby redo log files depend on your environment. See Oracle Data Guard Concepts and Administration 11g Release 2 (11.2) for detailed instructions about adding standby redo log files to a database.
Note:
If there will be multiple source databases sending redo to a single downstream mining database, only one of those sources can send redo to the standby redo logs of the mining database. An Extract process that mines the redo from this source database can run in real-time mode. All other source databases must send only their archived logs to the downstream mining database, and the Extracts that read this data must be configured to run in archived-log-only mode.
To Create the Standby Redo Log Files
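The SQL typically resembles the following sketch, under the stated rules and assuming a source with three online redo log groups of 500 MB each and a single redo thread; the file paths are illustrative:

```sql
-- One more standby log group than the source's online log groups (3 + 1 = 4),
-- each at least as large as the largest source log file. Paths are assumptions.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/u01/oradata/dbmscap/srl04.f') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('/u01/oradata/dbmscap/srl05.f') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
  ('/u01/oradata/dbmscap/srl06.f') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
  ('/u01/oradata/dbmscap/srl07.f') SIZE 500M;
```

With multiple redo threads at the source, repeat the pattern per thread using the THREAD clause.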
Configure the Database to Archive Standby Redo Log Files Locally
This procedure configures the downstream mining database to archive the standby redo logs that receive redo data from the online redo logs of the source database. Keep in mind that foreign archived logs should not be archived in the recovery area of the downstream mining database.
To Archive Standby Redo Logs Locally
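A sketch of a local destination for the standby redo logs, kept outside the recovery area as required for foreign archived logs, might look like this; the path /u01/arch_srl is an assumption:

```sql
-- Archive redo received in the standby redo logs to a local, non-FRA destination.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u01/arch_srl
  VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
```

VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE) ensures that only redo received in the standby redo logs is archived to this destination, separate from the mining database's local redo destination.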
Enabling Downstream Extract Registration Using ADG Redirection in Downstream Configuration
Oracle GoldenGate supports downstream Extract registration using ADG redirection in a downstream mining database configuration.
This approach uses an Active Data Guard (ADG) configuration in cascaded mode to transport redo logs to a downstream mining database for use with a downstream Extract, which reduces the overhead on the source database.
Extract must be started with the sourceless (NOUSERID) option so that it does not connect to the source database. Instead, Extract connects to the ADG using FETCHUSERID or FETCHUSERIDALIAS when it needs to fetch any non-native data types.
Note:
The following commands can be executed against the ADG standby in this configuration:
- SCHEMATRANDATA
- TRANDATA
- FLUSH SEQUENCE
- TRACETABLE
- HEARTBEATTABLE
- REGISTER EXTRACT
For SCHEMATRANDATA and TRANDATA, even though the command is executed on the standby, the actual log groups are created and maintained on the primary database where the actual DML operations take place.
This feature is supported for CDB and supports wildcard registration. It is only supported when using Oracle Database 21c and higher.
Parent topic: Configuring a Downstream Mining Database
How to Enable Downstream Extract Registration Using ADG Redirection
1. Add an additional LOG_ARCHIVE_DEST_n (LAD) on the ADG standby, as shown in the following example:
   alter system set log_archive_dest_3='service=<service name of mining db> ASYNC NOREGISTER VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE) DB_UNIQUE_NAME=<db unique name of 3rd db>' scope=both;
   This step transports and generates the standby_logfiles for an ADG standby.
2. Set LOG_ARCHIVE_CONFIG on the ADG standby to ship the logs to the mining database, as shown in the following example:
   ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='dg_config=(<db unique name of 1st db>,<db unique name of 2nd db>,<db unique name of 3rd db>)' scope=both;
3. On the mining database, set up the location to store the incoming standby_logfiles:
   alter system set log_archive_dest_2='location=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)' scope=both;
4. Set LOG_ARCHIVE_CONFIG on the mining database as well, so that the Extract process is able to read the logs there, as shown in the following example:
   ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='dg_config=(<db unique name of 1st db>,<db unique name of 2nd db>,<db unique name of 3rd db>)' scope=both;
5. For a downstream Extract, ensure that the database connections are appropriately configured for GGSCI and the Admin Client. When registering the Extract, make sure that the DBLOGIN connection is made to the ADG standby, which is open for read-only activity. To log in, use commands such as the following:
   dblogin userid ggadmin@inst2, password ggadmin (inst2 is the ADG standby, not the primary)
   miningdblogin userid ggadmin@inst3, password ggadmin (inst3 is the mining database)
6. Add and register an Extract that uses the NOUSERID parameter:
   add extract ext1, integrated tranlog, begin now
   register extract ext1 database
7. After the Extract is registered, you can use it to mine data and start it normally.
Example 1: Capturing from One Source Database in Real-time Mode
This example captures changes from source database DBMS1 by deploying an Extract at a downstream mining database DBMSCAP.
Note:
The example assumes that you created the necessary standby redo log files as shown in Configuring a Downstream Mining Database.
This assumes that the following users exist:
- User GGADM1 in DBMS1, whose credentials Extract will use to fetch data and metadata from DBMS1. This user has the alias ggadm1 in the Oracle GoldenGate credential store and logs in as ggadm1@dbms1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the source database.
- User GGADMCAP in DBMSCAP, whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database DBMSCAP. This user has the alias ggadmcap in the Oracle GoldenGate credential store and logs in as ggadmcap@dbmscap. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the mining database.
- Prepare the Mining Database to Archive its Local Redo
- Prepare the Mining Database to Archive Redo Received in Standby Redo Logs from the Source Database
- Prepare the Source Database to Send Redo to the Mining Database
- Set up Extract (ext1) on DBMSCAP
Parent topic: Configuring a Downstream Mining Database
Prepare the Mining Database to Archive its Local Redo
To prepare the mining database to archive its local redo:
Prepare the Mining Database to Archive Redo Received in Standby Redo Logs from the Source Database
To prepare the mining database to archive the redo received in standby redo logs from the source database:
Prepare the Source Database to Send Redo to the Mining Database
To prepare the source database to send redo to the mining database:
Set up Extract (ext1) on DBMSCAP
To set up Extract (ext1) on DBMSCAP:
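Under the assumptions above (aliases ggadm1 and ggadmcap), the setup might be sketched as follows; the trail name is omitted rather than invented, and downstream_real_time_mine Y is the TRANLOGOPTIONS setting that enables real-time mining:

```
-- GGSCI: register and add the Extract (alias names are assumptions).
DBLOGIN USERIDALIAS ggadm1
MININGDBLOGIN USERIDALIAS ggadmcap
REGISTER EXTRACT ext1 DATABASE
ADD EXTRACT ext1, INTEGRATED TRANLOG, BEGIN NOW

-- Extract parameter file (ext1.prm): enable real-time downstream mining.
EXTRACT ext1
USERIDALIAS ggadm1
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
```

The DBLOGIN connects to DBMS1 and the MININGDBLOGIN connects to DBMSCAP, matching the two users described above.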
Note:
You can create multiple Extracts running in real-time Extract mode in the downstream mining database, as long as they all are capturing data from the same source database, such as capturing changes for database DBMS1 in the preceding example.
Example 2: Capturing from Multiple Sources in Archive-log-only Mode
The following example captures changes from databases DBMS1 and DBMS2 by deploying an Extract at a downstream mining database DBMSCAP.
It assumes the following users:
- User GGADM1 in DBMS1, whose credentials Extract will use to fetch data and metadata from DBMS1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS1.
- User GGADM2 in DBMS2, whose credentials Extract will use to fetch data and metadata from DBMS2. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS2.
- User GGADMCAP in DBMSCAP, whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the downstream mining database DBMSCAP.
This procedure also assumes that the downstream mining database is configured in archive log mode.
- Prepare the Mining Database to Archive its Local Redo
- Prepare the Mining Database to Archive Redo from the Source Database
- Prepare the First Source Database to Send Redo to the Mining Database
- Prepare the Second Source Database to Send Redo to the Mining Database
- Set up Extracts at Downstream Mining Database
Parent topic: Configuring a Downstream Mining Database
Prepare the Mining Database to Archive its Local Redo
To prepare the mining database to archive its local redo:
Prepare the Mining Database to Archive Redo from the Source Database
Set DG_CONFIG at the downstream mining database:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbms2,dbmscap)';
Prepare the First Source Database to Send Redo to the Mining Database
To prepare the first source database to send redo to the mining database:
Prepare the Second Source Database to Send Redo to the Mining Database
To prepare the second source database to send redo to the mining database:
Example 3: Capturing from Multiple Sources with Mixed Real-time and Archive-log-only Mode
The following example captures changes from databases DBMS1, DBMS2, and DBMS3 by deploying an Extract at a downstream mining database DBMSCAP.
Note:
This example assumes that you created the necessary standby redo log files as shown in Configuring a Downstream Mining Database.
It assumes the following users:
- User GGADM1 in DBMS1, whose credentials Extract will use to fetch data and metadata from DBMS1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS1.
- User GGADM2 in DBMS2, whose credentials Extract will use to fetch data and metadata from DBMS2. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS2.
- User GGADM3 in DBMS3, whose credentials Extract will use to fetch data and metadata from DBMS3. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS3.
- User GGADMCAP in DBMSCAP, whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the downstream mining database DBMSCAP.
This procedure also assumes that the downstream mining database is configured in archive log mode.
In this example, the redo sent by DBMS3 will be mined in real-time mode, whereas the redo data sent from DBMS1 and DBMS2 will be mined in archive-log-only mode.
- Prepare the Mining Database to Archive its Local Redo
- Prepare the Mining Database to Accept Redo from the Source Databases
- Prepare the First Source Database to Send Redo to the Mining Database
- Prepare the Second Source Database to Send Redo to the Mining Database
- Prepare the Third Source Database to Send Redo to the Mining Database
- Set up Extracts at Downstream Mining Database
Parent topic: Configuring a Downstream Mining Database
Prepare the Mining Database to Archive its Local Redo
To prepare the mining database to archive its local redo:
Prepare the Mining Database to Accept Redo from the Source Databases
Because redo data is being accepted in the standby redo logs of the downstream mining database, the appropriate number of correctly sized standby redo logs must exist. If you did not configure the standby logs, see Configuring a Downstream Mining Database.
Prepare the First Source Database to Send Redo to the Mining Database
To prepare the first source database to send redo to the mining database:
Prepare the Second Source Database to Send Redo to the Mining Database
To prepare the second source database to send redo to the mining database:
Prepare the Third Source Database to Send Redo to the Mining Database
To prepare the third source database to send redo to the mining database:
Set up Extracts at Downstream Mining Database
These steps set up Extracts at the downstream mining database to capture from the archived logs sent by DBMS1 and DBMS2, and in real-time mode from the online redo logs sent by DBMS3.
Set up Extract (ext1) to Capture Changes from Archived Logs Sent by DBMS1
Perform the following steps on the DBMSCAP downstream mining database.
Parent topic: Set up Extracts at Downstream Mining Database
Set up Extract (ext2) to Capture Changes from Archived Logs Sent by DBMS2
Perform the following steps on the DBMSCAP downstream mining database.
Parent topic: Set up Extracts at Downstream Mining Database
Set up Extract (ext3) to Capture Changes in Real-time Mode from Online Logs Sent by DBMS3
Perform the following steps on the DBMSCAP downstream mining database.
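The parameter that distinguishes ext3 from the archive-log-only Extracts is the downstream_real_time_mine setting of TRANLOGOPTIONS INTEGRATEDPARAMS; a sketch, with the alias name ggadmcap assumed from the example's user list:

```
-- ext3 mines the standby redo logs in real time:
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

-- ext1 and ext2, by contrast, would mine only archived logs:
-- TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
```

Only one source (DBMS3 here) can feed the standby redo logs, so only its Extracts may set downstream_real_time_mine to Y.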
Note:
You can create multiple Extracts running in real-time integrated capture mode in the downstream mining database, as long as they all are capturing data from the same source database, such as all capturing for database DBMS3 in the preceding example.
Parent topic: Set up Extracts at Downstream Mining Database