D Transaction Log Audit Data Collection for Oracle Database

You can fine-tune audit data collection by setting REDO log parameters for Oracle Database targets.

D.1 Introduction to Transaction Log Audit Trails for Oracle Database Using Oracle GoldenGate

Learn about the recommended settings for collection from REDO logs using Oracle GoldenGate.

REDO log files, also known as transaction logs, are files used by Oracle Database to maintain logs of all the transactions that have occurred in the database. This chapter contains the recommendations for setting initialization parameters to use the TRANSACTION LOG audit trail type to collect audit data from the REDO logs of an Oracle Database target.

These log files allow Oracle Database to recover the changes made to the database in case of a failure. For example, if a user updates a salary value in a table that contains employee-related data, a REDO record is generated. It contains the value before the change (old value) and the new, changed value. REDO records are used to guarantee ACID (Atomicity, Consistency, Isolation, and Durability) properties over a crash or hardware failure. In case of a database crash, the system redoes (re-processes) all the changes on the data files, taking the database back to the state it was in when the last REDO record was written.

REDO log records contain before and after values for every DML (Data Manipulation Language) and DDL (Data Definition Language) operation. Oracle Audit Vault and Database Firewall provides the ability to monitor the changed values from REDO logs using the Transaction Log collector.

The Transaction Log collector takes advantage of Oracle GoldenGate’s Integrated Extract process to move the REDO log data from the database to XML files. The Extract process is configured to run either against the source database or on a Downstream Mining database (Oracle only). It captures DML and DDL operations that are performed on the configured objects. The captured operations from the transaction logs are transferred to GoldenGate XML trail files. The Oracle AVDF Transaction Log collector collects the transaction log records from the generated XML files and forwards them to the Audit Vault Server. The DDL changes are available in the All Activity report. The DML changes, with the before and after values that were changed, are available in the Data Modification Before-After Values report.

Starting with Oracle AVDF 20.10, the Data Modification Before-After Values report has additional information about key columns. GoldenGate, by default, uses the primary key columns of the table as key columns. If no primary keys are defined for the table, or if you want to use some other columns as key columns, then GoldenGate provides an option to specify key columns in the parameter file.

Figure D-1 Transaction Log Collection Process


See Also:

Oracle Database Setup Scripts for instructions on setting up privileges in the Oracle Database for collecting audit data from the REDO logs.

D.2 Sizing Guidelines

Learn and follow the sizing guidelines outlined in this topic.

Prerequisites

Adhere to the system and sizing requirements outlined in System Requirements and Operating System Requirements of Oracle GoldenGate documentation.

General sizing guidelines

  1. The general recommendation for memory and CPU is to start with 32 GB of memory and 2 CPUs per Integrated Extract, because it is a multi-threaded process and uses a large amount of memory when processing large transactions. Depending on the transaction volume and transaction pattern, scale up the resources appropriately following the guidelines in the Oracle GoldenGate documentation.
  2. The general recommendation for disk space is to start with 2 TB and adjust based on the volume of data that the Integrated Extract captures from the source databases. Extract uses storage for trail files, and temporary disk space for cache files when there are large transactions to buffer for processing.

The computation formulas referred to in the guidelines use several database dictionary views that provide information on transaction log size. For example, v$log gives detailed information about each online redo log. Similarly, the number of log switches per day can be estimated from v$log_history or gv$log_history.
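
A minimal sketch of queries that could be used to gather these inputs (the column selections are illustrative, not prescriptive):

-- Size of each online redo log group (MB)
select group#, thread#, bytes/1024/1024 as size_mb, status from v$log;

-- Approximate number of log switches per day
select trunc(first_time) as day, count(*) as log_switches
from v$log_history
group by trunc(first_time)
order by day;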

Large transactions can fill up the in-memory cache and spill over to cached transaction data or temporary files on disk. Configure an archive policy and define the retention period of the files so that they can be recycled accordingly.

Maintain enough physical memory to handle large transactions. As per the guidelines, have at least 32 GB of memory available for Extract to use. For a more accurate estimate, collect statistics from the database server's run history and check the size of the largest transaction. Oracle GoldenGate provides the send <extract> cachemgr, cachestats command, which displays transaction cache statistics that are helpful in determining a baseline for the estimate.
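
For example, a hypothetical invocation of this command from GGSCI or the Admin Client, assuming an Extract named ext1:

SEND EXTRACT ext1, CACHEMGR CACHESTATS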

In general, the sizing, storage, and memory for the Oracle GoldenGate Integrated Extract process are highly dependent on the transaction volume and transaction pattern. Because there is no standard value, collect these statistics from every database server to produce an estimate.

The number of databases that can be supported by a single GoldenGate instance depends on the system resources available to run multiple Extracts. Ensure that you configure one Extract for every database.

D.3 Restricted Use License for Oracle GoldenGate

Learn about the restricted use license for Oracle GoldenGate.

A restricted use license for Oracle GoldenGate is included with Oracle Audit Vault and Database Firewall release 20. This license permits you to install Oracle GoldenGate and use the Integrated Extract process to capture transactional changes in database systems monitored by Oracle AVDF. The extracted data from Oracle GoldenGate is consumed only by Oracle AVDF. Deploy Oracle GoldenGate Microservices Architecture on a server other than the one on which the Oracle AVDF appliance is deployed, and then configure the Integrated Extract feature of Oracle GoldenGate. Oracle GoldenGate 19.1.0.0.4 is the minimum version supported with Oracle Audit Vault and Database Firewall 20.9 and earlier, and Oracle GoldenGate 21.9.0.0.0 is the minimum version supported with Oracle Audit Vault and Database Firewall 20.10 and later. To support Oracle Database versions prior to 12.2, Downstream Mining must be configured. It requires the deployment of Oracle Database Enterprise Edition, which has to be licensed separately.

D.4 Installing Oracle GoldenGate on Oracle Databases

Follow these instructions to install Oracle GoldenGate for Oracle Databases.

Deploy Oracle GoldenGate on a separate server other than the server on which the Oracle AVDF appliance is deployed. Then configure the Oracle GoldenGate Integrated Extract feature.

Oracle AVDF 20.9 and Earlier

Download and install Oracle GoldenGate 19.1.0.0.0 Microservices architecture from Oracle Software Delivery Cloud.

Follow the instructions for Installing Oracle GoldenGate for Oracle Databases in the Oracle GoldenGate 19c documentation. After installing Oracle GoldenGate, apply the Oracle GoldenGate 19.1.0.0.4 Microservices architecture patch from My Oracle Support.

Note:

After installing Oracle GoldenGate, contact Oracle Support to create a Merge Label Request for applying patches 32175609, 32063871, 33701099, and 34014874. These patches need to be applied on the Oracle GoldenGate installation.

Oracle AVDF 20.10 and Later

Download and install Oracle GoldenGate 21.9.0.0.0 Microservices architecture from My Oracle Support (patch 34958369 complete install).

Follow the instructions for Installing Oracle GoldenGate in the Oracle GoldenGate Microservices documentation for Oracle GoldenGate 21c.

D.5 Capturing Transaction Log Data from Oracle Database 12.2.0.1 and Later

Learn how to capture Transaction Log data from Oracle Database versions 12.2.0.1 and later.

The Oracle GoldenGate Integrated Extract process is supported only for Oracle Database versions 12.2.0.1 and later. In this case, Oracle GoldenGate Integrated Extract is configured on the source database. To capture transaction log data from Oracle Database 12.2.0.1 or later, perform the steps in the following sections, in this order:

  1. Create User and Grant Relevant Privileges
  2. Configure Oracle GoldenGate Parameters for Oracle Database
  3. Create a New Credential in the GoldenGate Administration Server
  4. Create a New Integrated Extract in Oracle GoldenGate Administration Server

D.6 Downstream Mining to Capture Transaction Log Data from Oracle Database Prior to 12.2.0.1

Learn how to capture Transaction Log data from Oracle Database versions prior to 12.2.0.1.

The Oracle GoldenGate Integrated Extract process is supported only for Oracle Database versions 12.2.0.1 and later, where Integrated Extract is configured on the source database.

To capture transaction log data from Oracle Database versions prior to 12.2.0.1, Downstream Mining must be used. In this case there are two databases: the source database and the Downstream Mining database. The source database (Oracle Database prior to 12.2.0.1) is configured to ship the online REDO logs to a Downstream database (Oracle Database version 12.2.0.1 or later). Integrated Extract is then configured on the Downstream database.

D.7 Migrating Transaction Log Audit Trail from Oracle AVDF 12.2 to 20

Learn how to migrate transaction log audit trail from Oracle AVDF 12.2 to 20.

Transaction log audit trail data can be migrated from Oracle AVDF 12.2 to 20. Follow this procedure before upgrading to Oracle AVDF 20:

  1. Install and deploy Oracle GoldenGate.
  2. Run the following procedure for every transaction log audit trail in Oracle AVDF 12.2:

    1. Ensure the Oracle AVDF 12.2 transaction log audit trail is running on the Oracle source database. Create the Oracle GoldenGate Integrated Extract. If the Oracle source database is older than 12.2.0.1, then configure Downstream Mining and create the Integrated Extract for the Downstream Mining database. If the Oracle source database is version 12.2.0.1 or later, then create the Integrated Extract for the source database.
    2. Configure the Integrated Extract XML files for each source database instance in a unique location.
    3. Wait for five minutes after creating the Integrated Extract to ensure that it is running successfully. If the Integrated Extract fails, check the logs in the Reports tab and fix the issue.
    4. After confirming that the Integrated Extract is running successfully, wait until DDL or DML statements run. Ensure that the Integrated Extract file contains XML data.
    5. Stop the 12.2 transaction log audit trail. Until the Oracle AVDF 12.2 transaction log audit trail is stopped, the GoldenGate Integrated Extract and the Oracle AVDF 12.2 transaction log audit trail run concurrently, so duplicate records are observed for this brief duration. You can safely ignore these duplicate records.
    6. Ensure these steps complete successfully for all the 12.2 transaction log audit trails.
  3. If the current version of Oracle AVDF is prior to 12.2.0.9.0, then first upgrade to 12.2.0.9.0 and then upgrade to Oracle AVDF 20.
  4. After upgrading to Oracle AVDF 20, perform these steps for each target database that has a transaction log audit trail:

    1. Delete the old transaction log audit trail.
    2. Create a new transaction log audit trail.
    3. Make sure the trail location is the full path of the directory containing Integrated Extract XML files.

D.8 Create User and Grant Relevant Privileges

Learn how to create a user and grant the required privileges.

Create a new user depending on the type of the database:

  • For a standalone database, create a new user and grant the relevant privileges to the user. This new user can fetch REDO log data from the Oracle Database using Oracle GoldenGate Integrated Extract.

  • For a multitenant database, create a new user in the CDB and grant the relevant privileges to the user. This new CDB user can fetch REDO log data from the individual PDBs in Oracle Database using Oracle GoldenGate Integrated Extract.

Follow this procedure for the standalone database:

  1. Log in to the database as sysdba.
  2. Execute the following command to create an example user avggadmin:

    create user avggadmin identified by avggadmin;
  3. Execute the following commands to grant privileges to the newly created user:

    grant create session, resource, alter system to avggadmin;
    grant unlimited tablespace to avggadmin;
  4. Execute the following commands to grant GoldenGate admin privilege to the example user avggadmin:

    
    begin
    DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(
    'avggadmin',
    '*',
    TRUE,
    TRUE,
    NULL,
    NULL,
    NULL,
    'CURRENT');
    end;
    /
    

Follow this procedure for multitenant database:

  1. Log in to CDB$ROOT as sysdba.
  2. Execute the following command to create an example user c##avggadmin:
    create user c##avggadmin identified by c##avggadmin container=all;
  3. Execute the following commands to grant privileges to the newly created user:
    grant create session, resource, alter system to c##avggadmin container=all;
    grant unlimited tablespace to c##avggadmin container=all;
  4. Execute the following commands to grant GoldenGate admin privilege to the example user c##avggadmin:
    
    begin
    DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(
    'c##avggadmin',
    '*',
    TRUE,
    TRUE,
    NULL,
    NULL,
    NULL,
    'ALL');
    end;
    /
    

D.9 Configure Oracle GoldenGate Parameters for Oracle Database

Follow this procedure to configure Oracle GoldenGate parameters for Oracle Database.

  1. For a multitenant database, log in to CDB$ROOT as sysdba. For a standalone database, log in as sysdba.
  2. Execute the following command to enable GoldenGate replication:
    alter system set enable_goldengate_replication=true scope=spfile;
  3. Execute the following commands to enable Archive Log:
    shutdown immediate
    startup mount
    alter database archivelog;
    alter database open;
    alter pluggable database all open  /*Applicable only for multitenant database*/;
    select name,log_mode from v$database;
  4. Enabling Forced Logging is recommended by Oracle GoldenGate. Execute the following command to enable Forced Logging:
    alter database force logging;
  5. Execute the following commands to enable Supplemental Logging:
    alter database add supplemental log data;
    select force_logging, supplemental_log_data_min from v$database;
  6. Change the database compatibility only if the version is prior to 12.2.0.1.0. Execute the following command to see the database compatibility:
    show parameter compatible;
  7. The database compatibility parameter needs to be changed only on the database on which Integrated Extract will be configured. Execute the following command to set the database compatibility to version 12.2.0.1.0 or higher. For a normal Integrated Extract, execute the command on the source database. For a Downstream Mining configuration, execute the command only on the Downstream Mining database and not on the source database.
    alter system set compatible = '12.2.0.1.0' scope=spfile;
  8. Set the appropriate streams_pool_size depending on the number of Integrated Extracts on the database. Refer to the Oracle GoldenGate Performance Best Practices guide for complete information on sizing; a worked sizing sketch follows this list.
  9. Execute the following command to check the current parameter values:
    show parameter streams;
  10. If the streams_pool_size does not match the sizing guidance above, then set the appropriate streams_pool_size by executing the following commands:
    alter system set streams_pool_size=1250M scope=spfile;
    shutdown immediate;
    startup;
    alter pluggable database all open /*Applicable only for multitenant database*/;
    show parameter streams;
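
As a rough illustration of how such a value might be derived, the sketch below assumes the commonly cited guideline of roughly 1 GB of Streams pool per Integrated Extract plus 25 percent headroom; confirm the exact formula in the Oracle GoldenGate Performance Best Practices guide for your release.

-- Assumed guideline: 1 GB per Integrated Extract, plus 25% headroom.
-- One extract:  1 * 1024 MB * 1.25 = 1280 MB (close to the 1250M used above).
-- Two extracts: 2 * 1024 MB * 1.25 = 2560 MB, for example:
alter system set streams_pool_size=2560M scope=spfile;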

D.10 Create a New Credential in the GoldenGate Administration Server

Create a new credential for the target database in the Oracle GoldenGate Administration Server.

  1. Log in to the Oracle GoldenGate Administration Server.
  2. Click Configuration in the left navigation menu.
  3. Click the plus button next to Credentials on the main page.
  4. Enter the domain name in the Credential Domain field. For example, inst1.
  5. Enter the alias in the Credential Alias field. For example, avggadmin_inst1.
  6. In the User ID field, enter the user name in the following format: <username>@<connect string>. For example:
    avggadmin@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST= foobar.example.com)(PORT=1234))(CONNECT_DATA=(SERVICE_NAME= foobar.example.com)))
  7. Enter the password.
  8. Re-enter the password in the Verify Password field.
  9. Click Submit.
  10. After creating the credential, click the Log in database icon to verify that the newly created credential can connect to the target database.
  11. Complete the TRANDATA Information section.

    After you test the database connection, the TRANDATA Information section appears below the table of credentials. (In 19c, it's the Transaction Information section.)

    The following steps are for 21c, as an example (a command-line alternative is sketched after these steps):

    1. Click the plus button next to TRANDATA Information.
    2. Select Table and add the table name in the Table Name field.
    3. Edit the columns as needed.
    4. Select nowait in the Prepare CSN Mode drop-down list.
    5. Click Submit.
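
For reference, the same supplemental logging can also be enabled from the Admin Client command line instead of the console. The following minimal sketch assumes a credential alias avggadmin_inst1 in domain inst1 and a table scott.emp, both of which are placeholders:

DBLOGIN USERIDALIAS avggadmin_inst1 DOMAIN inst1
ADD TRANDATA scott.emp PREPARECSN NOWAIT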

D.11 Create a New Integrated Extract in Oracle GoldenGate Administration Server

Follow this procedure to create a new Integrated Extract in the Oracle GoldenGate Administration Server for the target database.

  1. Log in to the Oracle GoldenGate Administration Server.
  2. Click the Overview tab in the left navigation menu.
  3. Click the plus button next to Extract on the main page.
  4. In the Add Extract section, under the Extract Type sub section, select the Integrated Extract radio button.
  5. Click Next.
  6. In the Extract Options sub section, enter the Process Name.
  7. Optionally, enter the Description.
  8. Select Unidirectional in the Intent field.
  9. Select the Credential Domain from the drop down.
  10. Select the Credential Alias from the drop down.
  11. Select Now in the Begin field.
  12. Enter a two-character trail name in the Trail Name field.
  13. Enter the Trail Subdirectory.

    Note:

    The trail subdirectory can be the full path of any directory. This directory must already exist on the file system.
  14. Enter the Trail Size (in MB).

    Note:

    If the record generation rate of GoldenGate is low (less than 50 records per second), then it is recommended to set the Trail Size to a lower value, for example, 100 MB.
  15. Select the PDB container in the Register to PDBs field.
  16. The maximum size of the XML file can be configured using the Trail Size field. After the XML file reaches this size, it is rotated and the Integrated Extract starts writing to a new XML file.
  17. The other fields can be left unchanged as they are optional.
  18. Click Next.
  19. In the Parameter File sub section, enter the following parameters:
    
    extract <extract_name>
    useridalias <credential_userid_alias> domain <credential_domain>
    OUTPUTFORMAT XML _AUDIT_VAULT
    exttrail <sub_directory>/<trail_name>
    SOURCECATALOG <pdb_name>
    DDL <ddl options to include or exclude schemas and tables>
    TABLE <schema>.<table>;
    

    Example parameter file:

    
    extract int_ex_1
    useridalias avggadmin_inst1 domain inst1
    OUTPUTFORMAT XML _AUDIT_VAULT
    exttrail subdirectory/aa
    SOURCECATALOG cdb1_pdb1
    DDL INCLUDE ALL
    TABLE scott.*;
    

    Note:

    • The SOURCECATALOG parameter is applicable only for the multitenant database and is not required for the standalone database.
    • There is a space between XML and _AUDIT_VAULT in the OUTPUTFORMAT parameter.
    • The OUTPUTFORMAT parameter must appear before the exttrail parameter in the parameter file. Otherwise, the XML files are not generated.
    • Ensure the TABLE command always ends with a semicolon (;).
    • Ensure that the parameters appear in the exact order shown above.
    • The DDL INCLUDE command is used to specify the tables for which DDL changes need to be captured.
    • The TABLE command is used to specify the tables for which DML changes need to be captured.
    • Refer to Oracle GoldenGate Reference for Oracle GoldenGate for Windows and UNIX for more information about the DDL, TABLE, and TABLEEXCLUDE commands.
  20. After entering the values in the Parameter File field, click the Create and Run button to start the Integrated Extract process.
  21. The newly created Integrated Extract is displayed in the Extracts panel. To view the status of the Integrated Extract, follow these steps:
    1. Click the Actions drop-down list next to the Integrated Extract icon.
    2. Select Details.
    3. Click the Report tab to view the diagnostic messages. If the extract process fails, the relevant errors are displayed in the report.

D.12 Periodic Backup of LogMiner Dictionary

Learn when to take a backup of the LogMiner dictionary.

Oracle GoldenGate strongly recommends a periodic backup (preferably every day) of the LogMiner dictionary. This can be performed by extracting the LogMiner dictionary to the redo log files. Database jobs can be created to perform the periodic backup.
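
A minimal sketch of such a backup, assuming the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option and a daily DBMS_SCHEDULER job (the job name LOGMNR_DICT_BACKUP is a placeholder):

-- One-time extraction of the LogMiner dictionary to the redo log files
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- Hypothetical daily job that repeats the extraction
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOGMNR_DICT_BACKUP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
END;
/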

D.13 Sample Oracle GoldenGate Integrated Extract Parameter Files

Use these Oracle GoldenGate Integrated Extract parameter files as samples.

Audit DML and DDL in the schema excluding some tables

The following parameter file configures Integrated Extract to capture the following:

  • DDL operations on all objects, except the objects in the accounts schema
  • DML operations on all tables in the scott schema, except the emp table in the scott schema

extract <extract_name>
useridalias <credential_userid_alias> domain <credential_domain>
OUTPUTFORMAT XML _AUDIT_VAULT
exttrail <sub_directory>/<trail_name>
SOURCECATALOG cdb1_pdb1
DDL INCLUDE ALL, EXCLUDE OBJNAME accounts.*
TABLE scott.*;
TABLEEXCLUDE scott.emp

Audit all DDL and DML in a schema

The following parameter file configures Integrated Extract to capture the following:

  • DDL operations on all objects in the scott schema
  • DML operations on all tables in the scott schema

extract <extract_name>
useridalias <credential_userid_alias> domain <credential_domain>
OUTPUTFORMAT XML _AUDIT_VAULT
exttrail <sub_directory>/<trail_name>
SOURCECATALOG cdb1_pdb1
DDL INCLUDE OBJNAME scott.*
TABLE scott.*;

Audit only DDL in a schema

The following parameter file configures Integrated Extract to capture DDL operations on all objects in the scott schema.


extract <extract_name>
useridalias <credential_userid_alias> domain <credential_domain>
OUTPUTFORMAT XML _AUDIT_VAULT
exttrail <sub_directory>/<trail_name>
SOURCECATALOG cdb1_pdb1
DDL INCLUDE OBJNAME scott.*

Audit only DML in a schema

The following parameter file configures Integrated Extract to capture DML operations on all tables in the scott schema.


extract <extract_name>
useridalias <credential_userid_alias> domain <credential_domain>
OUTPUTFORMAT XML _AUDIT_VAULT
exttrail <sub_directory>/<trail_name>
SOURCECATALOG cdb1_pdb1
TABLE scott.*;

Audit all DDL in all schemas

The following parameter file configures Integrated Extract to capture DDL operations on all objects.


extract <extract_name>
useridalias <credential_userid_alias> domain <credential_domain>
OUTPUTFORMAT XML _AUDIT_VAULT
exttrail <sub_directory>/<trail_name>
SOURCECATALOG cdb1_pdb1
DDL INCLUDE ALL

Audit DML for a table and set the columns to be used as key columns

The following parameter file configures Integrated Extract to do the following:

  • Capture DML operations on the emp table in the scott schema
  • Set empno and ename as key columns

extract <extract_name>
useridalias <credential_userid_alias> domain <credential_domain>
OUTPUTFORMAT XML _AUDIT_VAULT
exttrail <sub_directory>/<trail_name>
SOURCECATALOG cdb1_pdb1
KEYCOLS (empno, ename)
TABLE scott.emp;

D.14 Audit Trail Creation in Audit Vault Console

Learn how to set a mandatory target attribute before creating the audit trail.

Ensure the mandatory target attribute AV.COLLECTOR.TIMEZONEOFFSET is set on the Oracle Database target in the Audit Vault Server console before creating the audit trail. This attribute must be set to the time zone offset of the Oracle Database, for example, +03:00 for a positive offset or -03:00 for a negative offset.
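
One way to determine the offset value is to query the database, as in this minimal sketch (which of these values applies depends on how the target database handles time zones):

-- Database time zone (may be an offset such as +03:00 or a named region)
select dbtimezone from dual;

-- Current offset of the database host time zone
select to_char(systimestamp, 'TZH:TZM') as tz_offset from dual;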

Create the audit trail by specifying the following details and guidelines:

  • Trail Type: TRANSACTION LOG
  • Trail Location: full path of the directory containing the Integrated Extract XML files
  • The Audit Vault Agent should be running on a host machine that has access to the trail location
  • The Agent user should have read permission on the trail location

D.15 Audit Trail Cleanup

Learn how to delete the files that have been read by the Audit Vault Agent.

The Oracle GoldenGate collector writes the checkpoint information into the Audit Trail Cleanup (ATC) file. This file is present in the <Agent_Home>/av/atc directory and has the extension .atc. The ATC file contains the target type, target name, trail type, trail directory, and the checkpoint timestamp. All the records with an event timestamp older than the checkpoint timestamp have been read by the Audit Vault Agent and written into the event_log table in the Audit Vault Server.

Note:

The timestamp in ATC file is in UTC (Coordinated Universal Time) time zone.

Here is an example ATC file:


securedTargetType=Oracle Database
SecuredTargetName=secured_target_oracle_one
TrailType=TRANSACTION LOG
TrailName=/foo/bar/trail_files
2020-06-30 07:11:46.0

For Oracle AVDF 20.3 and Earlier

To delete the files that have been read by the Audit Vault Agent, create a script that deletes the files whose last-modified timestamp is older than the checkpoint timestamp in the ATC file. This script can be scheduled to run periodically.
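
A minimal sketch of such a script, assuming the checkpoint timestamp has already been read from the .atc file and the XML trail files are under /foo/bar/trail_files (both values are placeholders; GNU find is assumed for -newermt, and the file list should be verified before enabling deletion):

#!/bin/sh
# Placeholder values: read the real checkpoint timestamp from the .atc file.
CHECKPOINT="2020-06-30 07:11:46"
TRAIL_DIR="/foo/bar/trail_files"

# List files last modified before the checkpoint timestamp; replace -print
# with -delete only after the output has been verified.
find "$TRAIL_DIR" -type f ! -newermt "$CHECKPOINT" -print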

For Oracle AVDF 20.4 and Later

To delete the files that have been read by the Audit Vault Agent, use the Oracle GoldenGate Extract Cleanup utility, which is available starting with Oracle AVDF 20.4.

D.16 Configure GoldenGate Downstream Mining

Learn how to configure GoldenGate downstream mining.

The Oracle GoldenGate Integrated Extract process is supported only for Oracle Database versions 12.2.0.1 and later, where Integrated Extract is configured on the source database. To capture transaction log data from Oracle Database versions prior to 12.2.0.1, Downstream Mining is used. In this case there are two databases: the source database and the Downstream Mining database. The source database (Oracle Database prior to 12.2.0.1) is configured to send the online REDO logs to a Downstream database (Oracle Database version 12.2.0.1 or later). Integrated Extract is then configured on the Downstream database to generate XML files in _AUDIT_VAULT format.

Prerequisite

Execute the steps in Create User and Grant Relevant Privileges and Configure Oracle GoldenGate Parameters for Oracle Database on both the source database and the Downstream Mining database.

Configuring the Password File

  1. Execute the following command on the source database to see the database compatibility:

    show parameter compatible;
  2. If the version of the source database is 12.1.0.2.0, then execute the following commands:

    alter system set compatible = '12.1.0.2.0' scope=spfile;
    shutdown immediate
    startup
  3. Execute the following command and check if the compatibility has changed to 12.1.0.2.0:

    show parameter compatible;
  4. Execute the following query to find the global_name on both the source database and the Downstream Mining database:

    select * from global_name;
  5. Ensure the source database and the Downstream Mining database do not have the same global_name.

  6. If the source database has a remote login password file, copy it to the appropriate directory of the mining database system. The password file must be the same on the source database and the mining database. If the source database and the Downstream Mining database do not have the same password file, then execute the following commands on the source database and then copy the source password file over to the Downstream Mining database:

    alter system set remote_login_passwordfile = 'shared' scope = spfile;
    shutdown immediate
    startup
  7. In the source database, the password file is $ORACLE_HOME/dbs/orapw<$ORACLE_SID>.

    The example source password file is /foo/bar/orapwsource.

  8. Execute the following command on the Downstream Mining database to find the downstream password file:

    select file_name from v$passwordfile_info;

    The example downstream password file is /foo/bar/orapwdownstream.

  9. Execute the following command to take a backup of the existing downstream password file:

    cp /foo/bar/orapwdownstream /foo/bar/orapwdownstream_orig
  10. Execute the following command to copy the source password file to downstream password file location:

    cp /foo/bar/orapwsource /foo/bar/orapwdownstream

Configuring the Source Database

In this example, the database unique name for source database is source_db_unique_name and for the Downstream Mining database is downstream_db_unique_name.

Execute the following command to find the database unique name:

select db_unique_name from v$database;

Execute the following commands on the source database to configure it to transmit redo data to the Downstream Mining database. When setting the LOG_ARCHIVE_DEST_2 parameter, provide the connection details of the Downstream Mining database.

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(source_db_unique_name,downstream_db_unique_name)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE="(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=foo.bar.com)(PORT=1234))(CONNECT_DATA=(SERVICE_NAME=foo.bar.com)))" ASYNC OPTIONAL NOREGISTER VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=downstream_db_unique_name';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

Configuring the Downstream Mining Database

  1. Archiving must be enabled in the Downstream Mining database to run Integrated Extract in real-time integrated capture mode. Execute the following commands on the Downstream Mining database to archive local redo log files:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/foo/bar/arc_dest/local valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
  2. The Downstream Mining database must be configured to archive the standby redo logs that receive redo data from the online redo logs of the source database. The foreign archived logs should not be archived in the recovery area of the Downstream Mining database. Execute the following commands on the Downstream Mining database to archive the standby redo logs locally:

    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(source_db_unique_name,downstream_db_unique_name)';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/foo/bar/arc_dest/standbyredo VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
  3. Execute the following command on the source database and make a note of the results:

    select group#, thread#, bytes from v$log;
  4. Add the standby log file groups to the mining database. The standby log file size must be at least the size of the source log file. The number of standby log file groups must be at least one more than the number of source online log file groups. This applies to each instance (thread) in the case of an Oracle RAC installation. If you have n threads in the source database, each having m redo log groups, then configure n*(m+1) standby redo log groups in the Downstream Mining database.

    For example, let us assume the following is the result of the query select group#, thread#, bytes from v$log;:

    GROUP# THREAD# BYTES
    1 1 26214400
    2 1 26214400

    Number of threads ( n ) is 1.

    Number of groups per thread ( m ) is 2.

    Hence n*(m+1) = 3 standby redo log groups are required in the Downstream Mining database, where the size of each log group should be at least 26214400 bytes.

  5. For this example, execute the following statements on the Downstream Mining database:

    
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 3
    ('/foo/bar/arc_dest/standbyloggroups/slog3a.rdo', '/foo/bar/arc_dest/standbyloggroups/slog3b.rdo') SIZE 500M;
     
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
    ('/foo/bar/arc_dest/standbyloggroups/slog4a.rdo', '/foo/bar/arc_dest/standbyloggroups/slog4b.rdo') SIZE 500M;
     
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
    ('/foo/bar/arc_dest/standbyloggroups/slog5a.rdo', '/foo/bar/arc_dest/standbyloggroups/slog5b.rdo') SIZE 500M;
     
    SELECT * FROM V$STANDBY_LOG;

Registering Integrated Extract

  1. Create credentials for both the source database and the Downstream Mining database on Oracle GoldenGate administration server by following the steps in Create a New Credential in the GoldenGate Administration Server.

  2. Launch the adminclient command line interface:

    $GG_HOME/bin/adminclient
  3. Execute the command to connect to the GoldenGate Service Manager. If SSL is configured, then execute the following command:

    connect https://<hostname>:<port> as <username> password <password> !

    If SSL is not configured, then execute the following command:

    connect http://<hostname>:<port> as <username> password <password> !

    Example command if SSL is configured:

    connect https://localhost:1234 as ggadminuser password ggadminpassword !

    Example command if SSL is not configured:

    connect http://localhost:1234 as ggadminuser password ggadminpassword !
  4. Execute the following command to log in to the source database in adminclient:

    dblogin useridalias <source db user id> domain <source db domain>

    For example:

    dblogin useridalias avggadmin_remotesourceinst1 domain remotesourceinst1
  5. Execute the following command to log in to the Downstream Mining database in adminclient:

    miningdblogin useridalias <downstream db user id> domain <downstream db domain>

    For example:

    miningdblogin useridalias avggadmin_remotedowninst1 domain remotedowninst1
  6. Execute the following commands to add and register the Integrated Extract. Before executing these steps, manually create the subdirectory where the Integrated Extract XML files need to be stored.

    ADD EXTRACT <extract name> INTEGRATED TRANLOG BEGIN NOW
    REGISTER EXTRACT <extract name> DATABASE
    ADD EXTTRAIL <subdirectory>/<trail name>, EXTRACT <extract name>

    After executing this command, you may see the message OGG-12029 The file with name '<extract name>.prm' does not exist. Ignore this message.

    For example:

    
    ADD EXTRACT ext1 INTEGRATED TRANLOG BEGIN NOW
    REGISTER EXTRACT ext1 DATABASE
    ADD EXTTRAIL e1/e1, EXTRACT ext1
  7. Log in to the Oracle GoldenGate administration server.

  8. In the Extracts panel, the newly created Integrated Extract is displayed. To update the parameter file of the Integrated Extract follow these steps:

    1. Click Actions drop down next to the Integrated Extract icon.
    2. Select Details.
    3. In the Parameters tab, enter the following parameters:

      
      extract <extract name>
      useridalias <source db user id> domain <source db domain>
      TRANLOGOPTIONS MININGUSERALIAS <downstream db user id> domain <downstream db domain>
      TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
      OUTPUTFORMAT XML _AUDIT_VAULT
      exttrail <subdirectory>/<trail name>
      DDL INCLUDE ALL
      TABLE <schema>.<table>;
      

      For example:

      
      extract ext1
      useridalias avggadmin_remotesourceinst1 domain remotesourceinst1
      TRANLOGOPTIONS MININGUSERALIAS avggadmin_remotedowninst1 domain remotedowninst1
      TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
      OUTPUTFORMAT XML _AUDIT_VAULT
      exttrail e1/e1
      DDL INCLUDE ALL
      TABLE scott.*;
      
  9. After updating the parameters, click the Apply button.

  10. Click Actions drop down next to the Integrated Extract icon.

  11. Click the Start button to start the Integrated Extract. Wait for five minutes for the Integrated Extract to start successfully and create the background log mining process. The log mining process runs in the background and is not visible to the user.

  12. Execute the following commands on the source database to switch the log files on the source database:

    select * from v$log;
    alter system switch logfile;
    select * from v$log;
  13. Wait for five minutes after performing the log switch. The Integrated Extract needs a few minutes to start creating the XML files.

Checking the status of Downstream Mining

Execute the following commands on both the source database and the Downstream Mining database:

select * from V$archive_dest_status;
select * from V$archive_dest;

In the rows where the dest_name column has the value LOG_ARCHIVE_DEST_1 or LOG_ARCHIVE_DEST_2, ensure that the status column has the value VALID and the gap_status column has the value NO GAP or is null.

If the status column has a value ERROR, then the error column shows the relevant error message.
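
A narrower form of the same check, assuming the two destinations configured above (the gap_status column may not be present in every database version):

select dest_name, status, gap_status, error
from v$archive_dest_status
where dest_name in ('LOG_ARCHIVE_DEST_1', 'LOG_ARCHIVE_DEST_2');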

Checking the status of Integrated Extract

  1. Log in to the Oracle GoldenGate administration server.
  2. View the Extracts panel. The Integrated Extract is displayed.
  3. To check the status of the Integrated Extract, click the Actions drop-down list next to the Integrated Extract icon.
  4. Select Details.
  5. Click the Report tab to view the diagnostic messages. If the extract process fails, the relevant errors are displayed in the report.

See Also:

To understand the downstream mining process, see Configuring a Downstream Mining Database and Example Downstream Mining Configuration.