B Configuring a Downstream Mining Database

This appendix contains instructions for preparing a downstream Oracle mining database to support Extract in integrated capture mode.

For more information about integrated capture, see Deciding Which Capture Method to Use.

For examples of the downstream mining configuration, see Example Downstream Mining Configuration.

Topics:

  • Evaluating Capture Options for a Downstream Deployment

  • Preparing the Source Database for Downstream Deployment

  • Preparing the Downstream Mining Database

  • Preparing a Downstream Mining Database for Real-time Capture

Evaluating Capture Options for a Downstream Deployment

Downstream deployment allows you to offload the source database.

A downstream mining database can accept both archived logs and online redo logs from a source database.

Multiple source databases can send their redo data to a single downstream database; however, the downstream mining database can accept online redo logs from only one of those source databases. The rest of the source databases must ship archived logs.

When online logs are shipped to the downstream database, real-time capture by Extract is possible. Changes are captured as though Extract is reading from the source logs. In order to accept online redo logs from a source database, the downstream mining database must have standby redo logs configured.

When using a downstream mining configuration, the source database and the mining database must have the same endianness and the same bit size, which is 64 bits. For example, if the source database is on Linux 64-bit, the mining database can run on Windows 64-bit, because the two platforms have the same endianness and bit size.
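
To confirm that the platforms are compatible, you can compare the endian format of each database. For example, run the following query at both the source and the mining database; the ENDIAN_FORMAT values must match:

      SELECT tp.PLATFORM_NAME, tp.ENDIAN_FORMAT
      FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
      WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;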

Preparing the Source Database for Downstream Deployment

The source database ships its redo logs to a downstream database, and Extract uses the logmining server at the downstream database to mine the redo logs.

This section guides you through the following tasks:

  • Creating the Source User Account

  • Configuring Redo Transport from Source to Downstream Mining Database

Creating the Source User Account

There must be an Extract user on the source database. Extract uses this user's credentials to perform metadata queries and to fetch column values as needed from the source database.

The source user is specified by the USERIDALIAS parameter.

To assign the required privileges, follow the procedure in Establishing Oracle GoldenGate Credentials.
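
For example, assuming a source user named GGADM (an illustrative name, not a required one) that already exists, you could grant it the capture privileges with the DBMS_GOLDENGATE_AUTH package in SQL*Plus:

      EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADM');

Then, in GGSCI, store the user's credentials under the alias that USERIDALIAS will reference (the alias ggadm_src and service name dbms1 are hypothetical):

      ADD CREDENTIALSTORE
      ALTER CREDENTIALSTORE ADD USER ggadm@dbms1 PASSWORD ggadm_pw ALIAS ggadm_src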

Configuring Redo Transport from Source to Downstream Mining Database

To set up the transfer of redo log files from a source database to the downstream mining database, and to prepare the downstream mining database to accept these redo log files, perform the steps given in this topic.

The following summarizes the rules for supporting multiple sources sending redo to a single downstream mining database:

  • Only one source database can be configured to send online redo to the standby redo logs at the downstream mining database. The LOG_ARCHIVE_DEST_n setting for this source database should not have a TEMPLATE clause.

  • Source databases that are not sending online redo to the standby redo logs of the downstream mining database must have a TEMPLATE clause specified in the LOG_ARCHIVE_DEST_n parameter.

  • Each of the source databases that sends redo to the downstream mining database must have a unique DBID. You can query the DBID column in the V$DATABASE view of each source database to ensure that the DBIDs are unique; see the example after this list.

  • The FAL_SERVER value must be set at the downstream mining database. FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is a list of Oracle Net service names, which are assumed to be configured properly on the standby database system to point to the desired FAL servers. The list contains the net service name of any database that can potentially ship redo to the downstream database.

  • When using redo transport, there can be a delay in processing redo due to network latency. Extract monitors this latency by measuring the delay between the LCRs received from the source database, and reports it. If the latency exceeds a threshold, a warning message appears in the report file; a subsequent informational message appears when the lag drops back to normal values. The default threshold is 10 seconds.
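
For example, the following illustrates the DBID check and the FAL_SERVER setting. The service names dbms1 and dbms2 stand in for hypothetical source databases:

      -- Run at each source database; the returned DBIDs must all differ:
      SELECT DBID FROM V$DATABASE;

      -- Run at the downstream mining database; list every source that may ship redo:
      ALTER SYSTEM SET FAL_SERVER='dbms1','dbms2';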

Note:

The archived logs shipped from the source databases are called foreign archived logs. From Oracle Database 12.2.0.1 onward, foreign archived logs are purged automatically from the downstream mining database, as long as they are stored in the fast recovery area (FRA).

These instructions take into account the requirements for shipping redo from multiple sources, where applicable. You must configure one Extract process for each of those sources.

To Configure Redo Transport

  1. Configure Oracle Net so that each source database can communicate with the mining database. For instructions, see Oracle Database Net Services Administrator's Guide.
  2. Configure authentication at each source database and at the downstream mining database to support the transfer of redo data. Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If a source database has a remote login password file, copy it to the appropriate directory of the mining database system. The password file must be the same at all source databases, and at the mining database. For more information about authentication requirements for redo transport, see Preparing the Primary Database for Standby Database Creation in Oracle Data Guard Concepts and Administration.
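    For example, on Linux or UNIX with default password file locations, and hypothetical ORACLE_SIDs of dbms1 at the source and dbmscap at the mining database, the copy might look like this:

    scp $ORACLE_HOME/dbs/orapwdbms1 mininghost:$ORACLE_HOME/dbs/orapwdbmscap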
  3. At each source database, configure one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream mining database. Set the attributes of this parameter as shown in one of the following examples, depending on whether real-time or archived-log-only capture mode is to be used.
    • Example for real-time capture at the downstream logmining server, where the source database sends its online redo logs to the downstream database:

      ALTER SYSTEM 
      SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC NOREGISTER
      VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap'
      
    • Example for archived-log-only capture at the downstream logmining server:

      ALTER SYSTEM SET
      LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC NOREGISTER
      VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
      TEMPLATE=/usr/oracle/log_for_dbms1/dbms1_arch_%t_%s_%r.log
      DB_UNIQUE_NAME=dbmscap'

    Note:

    When using an archived-log-only downstream mining database, you must specify a value for the TEMPLATE attribute. Oracle also recommends that you use the TEMPLATE clause in the source databases so that the log files from all remote source databases are kept separated from the local database log files, and from each other.

  4. At the source database, set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream mining database to ENABLE, as shown in the following example.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE
    
  5. At the source database, and at the downstream mining database, set the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database, as shown in the following example.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)'
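
After both parameters are set, you can confirm from the source database that redo transport to the downstream destination is healthy. For example, a query along these lines (DEST_ID 2 corresponds to the LOG_ARCHIVE_DEST_2 destination used in these examples) should return a VALID status and no error:

      SELECT DEST_ID, STATUS, ERROR
      FROM V$ARCHIVE_DEST_STATUS
      WHERE DEST_ID = 2;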

Preparing the Downstream Mining Database

A downstream mining database can accept both archived logs and online redo logs from a source database.

The following sections explain how to prepare the downstream mining database:

  • Creating the Downstream Mining User Account

  • Configuring the Mining Database to Archive Local Redo Log Files

Creating the Downstream Mining User Account

When using a downstream mining configuration, there must be an Extract mining user on the downstream database. The mining Extract process uses the credentials of this user to interact with the downstream logmining server. The downstream mining user is specified by the TRANLOGOPTIONS parameter with the MININGUSERALIAS option. See Establishing Oracle GoldenGate Credentials to assign the correct credentials for the version of your database.
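
For example, an Extract parameter file for downstream capture might reference both the source alias and the mining alias. The Extract name extcap and the aliases ggadm_src and ggadm_cap below are illustrative only, and the DOWNSTREAM_REAL_TIME_MINE option applies only when real-time capture is configured:

      EXTRACT extcap
      USERIDALIAS ggadm_src
      TRANLOGOPTIONS MININGUSERALIAS ggadm_cap
      TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)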

Configuring the Mining Database to Archive Local Redo Log Files

This procedure configures the downstream mining database to archive redo data in its online redo logs. These are redo logs that are generated at the downstream mining database.

Archiving must be enabled at the downstream mining database if you want to run Extract in real-time integrated capture mode, but it is also recommended for archived-log-only capture. Extract in integrated capture mode writes state information in the database. Archiving and regular backups enable you to recover this state information if there is a disk failure or corruption at the downstream mining database.
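
You can check whether archiving is already enabled with a query like the following; it returns ARCHIVELOG when the database is in archive log mode:

      SELECT LOG_MODE FROM V$DATABASE;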

To Archive Local Redo Log Files

  1. Alter the downstream mining database to be in archive log mode. You can do this by issuing the following commands.
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    
  2. At the downstream mining database, set the first archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter as shown in the following example:
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local
    VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
    

    Alternatively, you can use a command like the following:

    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST
    VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
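
    If you use USE_DB_RECOVERY_FILE_DEST, the fast recovery area must already be configured. A minimal sketch, with an illustrative size and location:

    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 20G;
    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/home/oracle/fra';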
    

    Note:

    The online redo logs generated by the downstream mining database can be archived to a recovery area. However, you must not use the recovery area of the downstream mining database to stage foreign archived logs or to archive standby redo logs. For information about configuring a fast recovery area, see Oracle Database Backup and Recovery User’s Guide.

  3. Enable the local archive destination.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE
     

For more information about these initialization parameters, see Oracle Data Guard Concepts and Administration.

Preparing a Downstream Mining Database for Real-time Capture

This procedure is required only if you want to use real-time capture at a downstream mining database. It is not required for archived-log-only capture mode. To use real-time capture, it is assumed that the downstream database has already been configured to archive its local redo data, as shown in Configuring the Mining Database to Archive Local Redo Log Files.

Create the Standby Redo Log Files

The following steps outline the procedure for adding standby redo log files to the downstream mining database. The rules for creating the standby redo logs are summarized here:

  • Each standby redo log file must be at least as large as the largest redo log file of the redo source database. For administrative ease, Oracle recommends that all redo log files at the source database and the standby redo log files at the downstream mining database be of the same size.

  • The standby redo log must have at least one more redo log group than the redo log at the source database, for each redo thread at the source database.

The specific steps and SQL statements that are required to add standby redo log files depend on your environment. See Oracle Data Guard Concepts and Administration 11g Release 2 (11.2) for detailed instructions about adding standby redo log files to a database.

Note:

If there will be multiple source databases sending redo to a single downstream mining database, only one of those sources can send redo to the standby redo logs of the mining database. An Extract process that mines the redo from this source database can run in real-time mode. All other source databases must send only their archived logs to the downstream mining database, and the Extracts that read this data must be configured to run in archived-log-only mode.

To Create the Standby Redo Log Files

  1. In SQL*Plus, connect to the source database as an administrative user.
  2. Determine the size of the source log file. Make note of the results.
    SELECT BYTES FROM V$LOG;
    
  3. Determine the number of online log file groups that are configured on the source database. Make note of the results.
    SELECT COUNT(GROUP#) FROM V$LOG;
    
  4. Connect to the downstream mining database as an administrative user.
  5. Add the standby log file groups to the mining database. The standby log file size must be at least as large as the source log file size. The number of standby log file groups must be at least one more than the number of source online log file groups. This applies to each instance (thread) in a RAC installation. So if you have n threads at the source database, each having m redo log groups, you should configure n*(m+1) standby redo log groups at the downstream mining database.

    The following example shows three standby log groups.

    ALTER DATABASE ADD STANDBY LOGFILE GROUP 3
    ('/oracle/dbs/slog3a.rdo', '/oracle/dbs/slog3b.rdo') SIZE 500M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
    ('/oracle/dbs/slog4a.rdo', '/oracle/dbs/slog4b.rdo') SIZE 500M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
    ('/oracle/dbs/slog5a.rdo', '/oracle/dbs/slog5b.rdo') SIZE 500M;
    
  6. Confirm that the standby log file groups were added successfully.
    SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS
    FROM V$STANDBY_LOG;
    

    The output should be similar to the following:

    GROUP#     THREAD#    SEQUENCE#  ARC STATUS
    ---------- ---------- ---------- --- ----------
             3          0          0 YES UNASSIGNED
             4          0          0 YES UNASSIGNED
             5          0          0 YES UNASSIGNED
    
  7. Ensure that log files from the source database are appearing in the location that is specified in the LOCATION attribute of the local LOG_ARCHIVE_DEST_n that you set. You might need to switch the log file at the source database to see files in the directory.
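    For example, to force a log switch at the source database:

    ALTER SYSTEM SWITCH LOGFILE;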

Configure the Database to Archive Standby Redo Log Files Locally

This procedure configures the downstream mining database to archive the standby redo logs that receive redo data from the online redo logs of the source database. Keep in mind that foreign archived logs should not be archived in the recovery area of the downstream mining database.

To Archive Standby Redo Logs Locally

  1. At the downstream mining database, set the second archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter as shown in the following example.
    ALTER SYSTEM SET
    LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms1
    VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
    

    Oracle recommends that foreign archived logs (logs from remote source databases) be kept separate from local mining database log files, and from each other. You must not use the recovery area of the downstream mining database to stage foreign archived logs.

  2. Enable the LOG_ARCHIVE_DEST_2 parameter you set in the previous step as shown in the following example.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE
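
After the next log switch at the source database, you can verify at the downstream mining database that the standby redo logs are being archived to this destination. A query along these lines is one way to check (DEST_ID 2 corresponds to the LOG_ARCHIVE_DEST_2 destination configured above):

      SELECT NAME, THREAD#, SEQUENCE#
      FROM V$ARCHIVED_LOG
      WHERE DEST_ID = 2
      ORDER BY SEQUENCE#;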