Configure Redo Transport from Source Database to Downstream Mining Database
To set up the transfer of redo log files from a source database to the downstream mining database, and to prepare the downstream mining database to accept these redo log files, perform the steps given in this topic.
The following rules apply when multiple source databases send redo to a single downstream mining database:
- Only one source database can be configured to send online redo to the standby redo logs at the downstream mining database. The LOG_ARCHIVE_DEST_n setting for this source database must not have a TEMPLATE clause.
- Source databases that do not send online redo to the standby redo logs of the downstream mining database must have a TEMPLATE clause specified in the LOG_ARCHIVE_DEST_n parameter.
- Each source database that sends redo to the downstream mining database must have a unique DBID. You can select the DBID column from the V$DATABASE view of these source databases to ensure that the DBIDs are unique.
- The FAL_SERVER value must be set at the downstream mining database. FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is a list of Oracle Net service names, which are assumed to be configured properly on the standby database system to point to the desired FAL servers. The list contains the net service name of any database that can potentially ship redo to the downstream database.
- When using redo transport, there can be a delay in processing redo due to network latency. Extract monitors this latency by measuring the delay between LCRs received from the source database, and reports it. If the latency exceeds a threshold, a warning message appears in the report file, and a subsequent informational message appears when the lag drops back to normal values. The default value for the threshold is 10 seconds.
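The DBID rule above can be verified with a simple query, and the FAL_SERVER rule applied with an ALTER SYSTEM statement. The following is a sketch only; the net service names dbms1 and dbms2 are hypothetical placeholders for your source databases:

```sql
-- Run on each source database; every source must return a different DBID.
SELECT DBID, NAME FROM V$DATABASE;

-- Sketch: at the downstream mining database, list the net service names
-- of the source databases (dbms1 and dbms2 are hypothetical names).
ALTER SYSTEM SET FAL_SERVER='dbms1','dbms2';
```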
Note:
The archived logs shipped from the source databases are called foreign archived logs. You must not use the recovery area at the downstream mining database to store foreign archived logs; such a configuration is not supported by Extract. Foreign archived logs stored in the Flash Recovery Area (FRA) are not automatically deleted by RMAN jobs, so these archived logs must be purged manually.
These instructions take into account the requirements to ship redo from multiple sources, if required. You must configure an Extract process for each of those sources.
To configure redo transport:
- Configure a database connection between the source database and the mining database.
- Configure authentication at each source database and at the downstream mining database to support the transfer of redo data. Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If a source database has a remote login password file, copy it to the appropriate directory of the mining database system. The password file must be the same at all source databases, and at the mining database.
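One way to confirm that a remote login password file is in use at a given database is to query the corresponding initialization parameter. This is a sketch of such a check:

```sql
-- The value should be EXCLUSIVE (or SHARED) when a remote login password
-- file is used to authenticate redo transport sessions.
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'remote_login_passwordfile';
```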
- At each source database, configure one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream mining database. Set the attributes of this parameter as shown in one of the following examples, depending on whether real-time or archived-log-only capture mode is to be used.
  - Example for real-time capture at the downstream logmining server, where the source database sends its online redo logs to the downstream database:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap'

  - Example for archived-log-only capture at the downstream logmining server:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) TEMPLATE=/usr/oracle/log_for_dbms1/dbms1_arch_%t_%s_%r.log DB_UNIQUE_NAME=dbmscap'
Note:
When using an archived-log-only downstream mining database, you must specify a value for the TEMPLATE attribute. Oracle also recommends that you use the TEMPLATE clause in the source databases so that the log files from all remote source databases are kept separated from the local database log files, and from each other.
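After configuring the destination, it can be inspected from the source database. The following is a sketch, assuming DEST_ID 2 matches the LOG_ARCHIVE_DEST_2 examples above:

```sql
-- Inspect the configured destination; DEST_ID 2 corresponds to
-- LOG_ARCHIVE_DEST_2 in the examples above.
SELECT DEST_NAME, DESTINATION, DB_UNIQUE_NAME, STATUS
  FROM V$ARCHIVE_DEST
 WHERE DEST_ID = 2;
```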
- At the source database, set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream mining database to ENABLE, as shown in the following example:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE
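Once the destination is enabled, its state can be checked at the source database. A sketch, again assuming DEST_ID 2:

```sql
-- STATUS should become VALID once redo is shipping; ERROR shows the
-- reason if the destination is in an error state.
SELECT STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;
```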
- At the source database, and at the downstream mining database, set the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database, as shown in the following example:

    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)'
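To confirm that the setting took effect, the parameter can be queried at both databases. A sketch:

```sql
-- Run at both the source and the downstream mining database; both
-- DB_UNIQUE_NAME values (dbms1 and dbmscap in this topic's examples)
-- should appear in the DG_CONFIG list.
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'log_archive_config';
```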