These requirements supplement the basic configuration requirements documented in Configuring Capture in Classic Mode.
This section does not apply to Extract in integrated capture mode.
The following special configuration steps are required to support TDE when Extract is in classic capture mode.
Note:
When in integrated mode, Extract leverages the database logging server and supports TDE column encryption and TDE tablespace encryption without special setup requirements or parameter settings. For more information about integrated capture, see Choosing Capture and Apply Modes.
Parent topic: Additional Configuration Steps for Using Classic Capture
TDE support when Extract is in classic capture mode requires the exchange of two kinds of keys:
The encrypted key can be a table key (column-level encryption), an encrypted redo log key (tablespace-level encryption), or both. This key is shared between the Oracle Database and Extract.
The decryption key is named ORACLEGG, and its password is known as the shared secret. This key is stored securely in the Oracle and Oracle GoldenGate domains. Only a party that has possession of the shared secret can decrypt the table and redo log keys.
The encrypted keys are delivered to the Extract process by means of built-in PL/SQL code. Extract uses the shared secret to decrypt the data. Extract never handles the wallet master key itself, nor is it aware of the master key password. Those remain within the Oracle Database security framework.
Extract never writes the decrypted data to any file other than a trail file, not even a discard file (specified with the DISCARDFILE parameter). The word "ENCRYPTED" is written to any discard file that is in use.
The impact of this feature on Oracle GoldenGate performance should mirror the impact of decryption on database performance. Other than a slight increase in Extract startup time, replicating TDE data should have a minimal effect on performance.
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
The following are requirements for Extract to support TDE capture:
To maintain high security standards, the Oracle GoldenGate Extract process should run as part of the oracle user (the user that runs the Oracle Database). That way, the keys are protected in memory by the same privileges as the oracle user.
The Extract process must run on the same machine as the database installation.
Even if using TDE with a Hardware Security Module, you must use a software wallet. Instructions are provided in Oracle Security Officer Tasks in the configuration steps for moving from an HSM-only to an HSM-plus-wallet configuration and configuring the sqlnet.ora file correctly.
Whenever the source database is upgraded, you must rekey the master key.
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
To support TDE on Oracle 11.2.0.2, refer to article 1557031.1 on the My Oracle Support website (https://support.oracle.com).
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
The following outlines the steps that the Oracle Security Officer and the Oracle GoldenGate Administrator take to establish communication between the Oracle server and the Extract process.
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
Agree on a shared secret password that meets or exceeds Oracle password standards. This password must not be known by anyone else. For guidelines on creating secure passwords, see Oracle Database Security Guide.
Parent topic: Configuring Classic Capture for TDE Support
Oracle GoldenGate requires the use of a software wallet even with HSM. If you are currently using HSM-only mode, move to HSM-plus-wallet mode by taking the following steps:
Change the sqlnet.ora file configuration as shown in the following example, where the wallet directory can be any location on disk that is accessible (rwx) by the owner of the Oracle Database. This example shows a best-practice location, where my_db is the $ORACLE_SID.
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=HSM)(METHOD_DATA=
    (DIRECTORY=/etc/oracle/wallets/my_db)))
Log in to orapki (or Wallet Manager) as the owner of the Oracle Database, and create an auto-login wallet in the location that you specified in the sqlnet.ora file. When prompted for the wallet password, specify the same password as the HSM password (or HSM Connect String). These two passwords must be identical.
cd /etc/oracle/wallets/my_db
orapki wallet create -wallet . -auto_login[_local]
Note:
The Oracle Database owner must have full operating system privileges on the wallet.
Add the following entry to the empty wallet to enable an 'auto-open' HSM:
mkstore -wrl . -createEntry ORACLE.TDE.HSM.AUTOLOGIN non-empty-string
Create an entry named ORACLEGG in the wallet. ORACLEGG must be the name of this key. The password for this key must be the agreed-upon shared secret, but do not enter this password on the command line. Instead, wait to be prompted.
mkstore -wrl ./ -createEntry ORACLE.SECURITY.CL.ENCRYPTION.ORACLEGG
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.
Your secret/Password is missing in the command line
Enter your secret/Password: sharedsecret
Re-enter your secret/Password: sharedsecret
Enter wallet password: hsm/wallet_password
Verify the ORACLEGG entry.
mkstore -wrl . -list
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.
Enter wallet password: hsm/wallet_password
Oracle Secret Store entries:
ORACLE.SECURITY.CL.ENCRYPTION.ORACLEGG
Log in to SQL*Plus as a user with the SYSDBA system privilege.
Close and then re-open the wallet.
SQL> alter system set encryption wallet close identified by "hsm/wallet_password";
System altered.
SQL> alter system set encryption wallet open identified by "hsm/wallet_password";
System altered.
This inserts the password into the auto-open wallet, so that no password is required to access encrypted data with the TDE master encryption key stored in HSM.
Switch log files.
alter system switch logfile;
System altered.
If this is an Oracle RAC environment and you are using copies of the wallet on each node, make the copies now and then reopen each wallet.
Note:
Oracle recommends using one wallet in a shared location, with synchronized access among all Oracle RAC nodes.
Parent topic: Configuring Classic Capture for TDE Support
Parent topic: Configuring Classic Capture for TDE Support
Extract decrypts the TDE data and writes it to the trail as clear text. To maintain data security throughout the path to the target database, it is recommended that you also deploy Oracle GoldenGate security features to:
encrypt the data in the trails
encrypt the data in transit across TCP/IP
For more information, see Administering Oracle GoldenGate.
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
If DDL will ever be performed on a table that has column-level encryption, or if table keys will ever be re-keyed, you must either quiesce the table while the DDL is performed or enable Oracle GoldenGate DDL support. It is more practical to have the DDL environment active so that it is ready, because a re-key usually is a response to a security violation and must be performed immediately. To install the Oracle GoldenGate DDL environment, see Installing Trigger-Based DDL Capture. To configure Oracle GoldenGate DDL support, see Configuring DDL Support. For tablespace-level encryption, the Oracle GoldenGate DDL support is not required.
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
Whenever the source database is upgraded and Oracle GoldenGate is capturing TDE data, you must rekey the master key, and then restart the database and Extract. The command to rekey the master key is:
alter system set encryption key identified by "mykey";
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
Use this procedure to update and encrypt the TDE shared secret within the Extract parameter file.
Parent topic: Configuring Oracle TDE Data in Classic Capture Mode
The following general guidelines apply to Oracle RAC when Extract is operating in classic capture mode.
During operations, if the primary database instance against which Oracle GoldenGate is running stops or fails for any reason, Extract abends. To resume processing, you can restart the instance or mount the Oracle GoldenGate binaries to another node where the database is running and then restart the Oracle GoldenGate processes. Stop the Manager process on the original node before starting Oracle GoldenGate processes from another node.
Whenever the number of redo threads changes, the Extract group must be dropped and re-created. For the recommended procedure, see Administering Oracle GoldenGate.
Extract ensures that transactions are written to the trail file in commit order, regardless of the RAC instance where the transaction originated. When Extract is capturing in archived-log-only mode, where one or more RAC instances may be idle, you may need to perform archive log switching on the idle nodes to ensure that operations from the active instances are recorded in the trail file in a timely manner. You can instruct the Oracle RDBMS to do this log archiving automatically at a preset interval by setting the archive_lag_target parameter. For example, to ensure that logs are archived every fifteen minutes, regardless of activity, you can issue the following command in all instances of the RAC system:
SQL> alter system set archive_lag_target=900;
To process the last transaction in a RAC cluster before shutting down Extract, insert a dummy record into a source table that Oracle GoldenGate is replicating, and then switch log files on all nodes. This updates the Extract checkpoint and confirms that all available archive logs can be read. It also confirms that all transactions in those archive logs are captured and written to the trail in the correct order.
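As a sketch of the flush procedure above, the marker table shown is hypothetical (any source table that Extract captures will do), and ARCHIVE LOG CURRENT is used because it switches and archives the log for all enabled redo threads in one statement:

```sql
-- Insert a dummy record into a source table that Extract captures.
-- ggadmin.gg_marker is a hypothetical table created for this purpose.
INSERT INTO ggadmin.gg_marker (id, note)
VALUES (1, 'flush before Extract shutdown');
COMMIT;

-- Archive the current log on every RAC thread so the marker
-- (and all preceding transactions) reaches the archived logs.
ALTER SYSTEM ARCHIVE LOG CURRENT;
```

Alternatively, you can run ALTER SYSTEM SWITCH LOGFILE on each node individually, as described above.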
The following table shows some Oracle GoldenGate parameters that are of specific benefit in Oracle RAC.
Parameter | Description |
---|---|
| Sets the amount of data that Extract queues in memory before sending it to the target system. Tuning these parameters might increase Extract performance on Oracle RAC. |
| Controls how Extract handles orphaned transactions, which can occur when a node fails during a transaction and Extract cannot capture the rollback. Although the database performs the rollback on the failover node, the transaction would otherwise remain in the Extract transaction list indefinitely and prevent further checkpointing for the Extract thread that was processing the transaction. By default, Oracle GoldenGate purges these transactions from its list after they are confirmed as orphaned. This functionality can also be controlled on demand with the |
Parent topic: Additional Configuration Steps for Using Classic Capture
This topic covers additional configuration requirements that apply when Oracle GoldenGate mines transaction logs that are stored in Oracle Automatic Storage Management (ASM).
Parent topic: Additional Configuration Steps for Using Classic Capture
Extract must be configured to read logs that are stored in ASM. Depending on the database version, the following options are available:
Parent topic: Mining ASM-stored Logs in Classic Capture Mode
Use the TRANLOGOPTIONS parameter with the DBLOGREADER option in the Extract parameter file if the RDBMS is Oracle 11.1.0.7, or Oracle 11.2.0.2 or later 11g R2 versions.
An API is available in those releases (but not in Oracle 11g R1 versions) that uses the database server to access the redo and archive logs. When used, this API enables Extract to use a read buffer of up to 4 MB. A larger buffer may improve the performance of Extract when the redo rate is high. You can use the DBLOGREADERBUFSIZE option of TRANLOGOPTIONS to specify a buffer size.
Note:
DBLOGREADER also can be used when the redo and archive logs are on regular disk or on a raw device.
When using DBLOGREADER with Oracle Data Vault, grant the DV_GOLDENGATE_REDO_ACCESS role to the Extract database user in addition to the privileges that are listed in Establishing Oracle GoldenGate Credentials.
Parent topic: Accessing the Transaction Logs in ASM
If the RDBMS version is not one of those listed in Reading Transaction Logs Through the RDBMS, do the following:
Parent topic: Accessing the Transaction Logs in ASM
To ensure that the Oracle GoldenGate Extract process can connect to an ASM instance, list the ASM instance in the tnsnames.ora file. The recommended method for connecting to an ASM instance when Oracle GoldenGate is running on the database host machine is to use a bequeath (BEQ) protocol. The BEQ protocol does not require a listener. If you prefer to use the TCP/IP protocol, verify that the Oracle listener is listening for new connections to the ASM instance. The listener.ora file must contain an entry similar to the following.
SID_LIST_LISTENER_ASM =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ASM)
      (ORACLE_HOME = /u01/app/grid)
      (SID_NAME = +ASM1)
    )
  )
Note:
A BEQ connection does not work when using a remote Extract configuration. Use TNSNAMES with the TCP/IP protocol.
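For the BEQ approach, a tnsnames.ora entry typically spawns the server process directly. The following is a sketch only; the ORACLE_HOME path and ASM SID are examples and must match your Grid Infrastructure installation:

```
ASM =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = BEQ)
      (PROGRAM = /u01/app/grid/bin/oracle)
      (ARGV0 = oracle+ASM1)
      (ARGS = '(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS = 'ORACLE_HOME=/u01/app/grid,ORACLE_SID=+ASM1')
    )
  )
```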
Parent topic: Mining ASM-stored Logs in Classic Capture Mode
To ensure the continuity and integrity of capture processing when Extract operates in classic capture mode, enable archive logging.
The archive logs provide a secondary data source should the online logs recycle before Extract is finished with them. The archive logs for open transactions must be retained on the system in case Extract needs to recapture data from them to perform a recovery.
WARNING:
If you cannot enable archive logging, there is a high risk that you will need to completely resynchronize the source and target objects and reinstantiate replication should there be a failure that causes an Extract outage while transactions are still active. If you must operate this way, configure the online logs according to the following guidelines to retain enough data for Extract to capture what it needs before the online logs recycle. Allow for Extract backlogs caused by network outages and other external factors, as well as long-running transactions.
In a RAC configuration, Extract must have access to the online and archived logs for all nodes in the cluster, including the one where Oracle GoldenGate is installed.
Parent topic: Additional Configuration Steps for Using Classic Capture
The following summarizes the different recovery modes that Extract might use and their log-retention requirements:
By default, the Bounded Recovery mode is in effect, and Extract requires access to the logs only as far back as twice the Bounded Recovery interval that is set with the BR parameter. This interval is an integral multiple of the standard Extract checkpoint interval, as controlled by the CHECKPOINTSECS parameter. These two parameters control the Oracle GoldenGate Bounded Recovery feature, which ensures that Extract can recover in-memory captured data after a failure, no matter how old the oldest open transaction was at the time of failure. For more information about Bounded Recovery, see Reference for Oracle GoldenGate.
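As a sketch, the two parameters might appear together in the Extract parameter file as follows; the values shown are illustrative (BR defaults to a four-hour interval):

```
CHECKPOINTSECS 10
BR BRDIR ./BR, BRINTERVAL 4H
```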
In the unlikely event that the Bounded Recovery mechanism fails when Extract attempts a recovery, Extract reverts to normal recovery mode and must have access to the archived log that contains the beginning of the oldest open transaction in memory at the time of failure and all logs thereafter.
Parent topic: Ensuring Data Availability for Classic Capture
Depending on the version of Oracle, there are different options for ensuring that the required logs are retained on the system.
Parent topic: Ensuring Data Availability for Classic Capture
For these versions, Extract can be configured to work with Oracle Recovery Manager (RMAN) to retain the logs that Extract needs for recovery. You enable this feature when you issue the REGISTER EXTRACT command. See Creating Process Groups for more information. To use this feature, the Extract database user must have the following privileges, in addition to the basic privileges listed in Establishing Oracle GoldenGate Credentials.
Oracle EE version | Privileges |
---|---|
11.1 and 11.2.0.1 | |
11.2.0.3 and later | Run the package to grant the Oracle GoldenGate admin privilege: exec dbms_goldengate_auth.grant_admin_privilege('user') |
When log retention is enabled, Extract retains enough logs to perform a Bounded Recovery, but you can configure Extract to retain enough logs through RMAN for a normal recovery by using the TRANLOGOPTIONS parameter with the LOGRETENTION option set to SR. There is also an option to disable the use of RMAN log retention. Review the options of LOGRETENTION in Reference for Oracle GoldenGate before you configure Extract. If you set LOGRETENTION to DISABLED, see Determining How Much Data to Retain.
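For example, to retain logs through RMAN for a normal recovery rather than only a Bounded Recovery, the parameter file entry would look like this sketch:

```
TRANLOGOPTIONS LOGRETENTION SR
```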
Note:
To support RMAN log retention on Oracle RAC for Oracle versions prior to 11.2.0.3, you must download and install the database patch that is provided in BUGFIX 11879974 before you add the Extract groups.
The RMAN log retention feature creates an underlying (but non-functioning) Oracle Streams Capture process for each Extract group. The name of the Capture is based on the name of the associated Extract group. The log retention feature can operate concurrently with other local Oracle Streams installations. When you create an Extract group, the logs are retained from the current database SCN.
Note:
If the storage area is full, RMAN purges the archive logs even when they are needed by Extract. This limitation exists so that the requirements of Extract (and other Oracle replication components) do not interfere with the availability of redo to the database.
Parent topic: Log Retention Options
For versions of Oracle other than Enterprise Edition, you must manage the log retention process with your preferred administrative tools. Follow the directions in Determining How Much Data to Retain.
Parent topic: Log Retention Options
When managing log retention, try to ensure rapid access to the logs that Extract would require to perform a normal recovery (not a Bounded Recovery). See Log Retention Requirements per Extract Recovery Mode. If you must move the archives off the database system, the TRANLOGOPTIONS parameter provides a way to specify an alternate location. See Specifying the Archive Location.
The recommended retention period is at least 24 hours worth of transaction data, including both online and archived logs. To determine the oldest log that Extract might need at any given point, issue the SEND EXTRACT command with the SHOWTRANS option. You might need to do some testing to determine the best retention time given your data volume and business requirements.
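For example, from GGSCI (the group name ext1 is a placeholder):

```
GGSCI> SEND EXTRACT ext1, SHOWTRANS
```

The output lists open transactions along with the redo log positions they require, which identifies the oldest log that must still be available.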
If data that Extract needs during processing was not retained, either in online or archived logs, one of the following corrective actions might be required:
Alter Extract to capture from a later point in time for which log data is available (and accept possible data loss on the target).
Resynchronize the source and target data, and then start the Oracle GoldenGate environment over again.
Parent topic: Ensuring Data Availability for Classic Capture
Make certain not to use backup or archive options that cause old archive files to be overwritten by new backups. Ideally, new backups should be separate files with different names from older ones. This ensures that if Extract looks for a particular log, it will still exist, and it also ensures that the data is available in case it is needed for a support case.
Parent topic: Ensuring Data Availability for Classic Capture
If the archived logs reside somewhere other than the Oracle default directory, specify that directory with the ALTARCHIVELOGDEST option of the TRANLOGOPTIONS parameter in the Extract parameter file.
You might also need to use the ALTARCHIVEDLOGFORMAT option of TRANLOGOPTIONS if the format that is specified with the Oracle parameter LOG_ARCHIVE_FORMAT contains sub-directories. ALTARCHIVEDLOGFORMAT specifies an alternate format that removes the sub-directory from the path. For example, %T/log_%t_%s_%r.arc would be changed to log_%t_%s_%r.arc. As an alternative to using ALTARCHIVEDLOGFORMAT, you can create the sub-directory manually, and then move the log files to it.
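As a sketch, assuming an archive directory of /mnt/arch (a placeholder path), both options could appear in the Extract parameter file as:

```
TRANLOGOPTIONS ALTARCHIVELOGDEST /mnt/arch, ALTARCHIVEDLOGFORMAT log_%t_%s_%r.arc
```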
Parent topic: Ensuring Data Availability for Classic Capture
If the online and archived redo logs are stored on a different platform from the one that Extract is built for, do the following:
NFS-mount the archive files.
Map the file structure to the structure of the source system by using the LOGSOURCE and PATHMAP options of the Extract parameter TRANLOGOPTIONS. For more information, see Reference for Oracle GoldenGate.
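A sketch of such a mapping, assuming Linux-format logs and an NFS mount point of /mnt/source_arch (both placeholders):

```
TRANLOGOPTIONS LOGSOURCE LINUX, PATHMAP /u01/app/oracle/arch /mnt/source_arch
```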
Parent topic: Ensuring Data Availability for Classic Capture
You can configure Extract to read exclusively from the archived logs. This is known as Archived Log Only (ALO) mode.
In this mode, Extract reads exclusively from archived logs that are stored in a specified location. ALO mode enables Extract to use production logs that are shipped to a secondary database (such as a standby) as the data source. The online logs are not used at all. Oracle GoldenGate connects to the secondary database to get metadata and other required data as needed. As an alternative, ALO mode is supported on the production system.
Note:
ALO mode is not compatible with Extract operating in integrated capture mode.
Parent topic: Additional Configuration Steps for Using Classic Capture
Observe the following limitations and requirements when using Extract in ALO mode.
Log resets (RESETLOG) cannot be done on the source database after the standby database is created.
ALO cannot be used on a standby database if the production system is Oracle RAC and the standby database is non-RAC. In addition to both systems being Oracle RAC, the number of nodes on each system must be identical.
ALO on Oracle RAC requires a dedicated connection to the source server. If that connection is lost, Oracle GoldenGate processing will stop.
It is a best practice to use separate archive log directories when using Oracle GoldenGate for Oracle RAC in ALO mode. This will avoid any possibility of the same file name showing up twice, which could result in Extract returning an "out of order scn" error.
The LOGRETENTION parameter defaults to DISABLED when Extract is in ALO mode. You can override this with a specific LOGRETENTION setting, if needed.
Parent topic: Configuring Classic Capture in Archived Log Only Mode
To configure Extract for ALO mode, follow these steps as part of the overall process for configuring Oracle GoldenGate, as documented in Configuring Capture in Classic Mode.
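At minimum, ALO mode is enabled with TRANLOGOPTIONS entries in the Extract parameter file, sketched below; the archive location shown is a placeholder:

```
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST /mnt/standby_arch
```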
Parent topic: Configuring Classic Capture in Archived Log Only Mode
You can configure Classic Extract to access both redo data and metadata in real-time to successfully replicate source database activities using Oracle Active Data Guard. This is known as Active Data Guard (ADG) mode.
ADG mode enables Extract to use production logs that are shipped to a standby database as the data source. The online logs are not used at all. Oracle GoldenGate connects to the standby database to get metadata and other required data as needed.
This mode is useful in load-sensitive environments where ADG is already in place or can be implemented. It can also be used as a cost-effective method to implement high availability through ADG Broker planned (switchover) and unplanned (failover) role changes. In an ADG configuration, switchover and failover are considered roles. When either operation occurs, it is considered a role change. For more information, see Oracle Data Guard Concepts and Administration and Oracle Data Guard Broker.
You can configure Integrated Extract to fetch table data and metadata required for the fetch from an ADG instead of the source database. This is possible because an ADG is a physical replica of the source database. Fetching from an ADG using the FETCHUSER parameter is supported by Extract in all configurations except when running as Classic Extract. Classic Extract already has the ability to connect directly to an ADG and mine its redo logs and fetch from it using standard connection information supplied using the USERID parameter. The impact to the source database is minimized because Extract gathers information from the source database at startup, including compatibility level, database type, and source database validation checks, when fetching from an ADG.
All previous fetch functionality and parameters are supported.
Note:
Integrated Extract cannot capture from a standby database because it requires READ and WRITE access to the database, and an ADG standby only provides READ ONLY access.
Parent topic: Additional Configuration Steps for Using Classic Capture
Observe the following limitations and requirements when using Extract in ADG mode.
Extract in ADG mode will only apply redo data that has been applied to the standby database by the apply process. If Extract runs ahead of the standby database, it will wait for the standby database to catch up.
You must explicitly specify ADG mode in your classic Extract parameter file to run Extract against the standby database.
You must specify the database user and password to connect to the ADG system because fetch and other metadata resolution occurs in the database.
The number of redo threads in the standby logs on the standby database must match the number of nodes in the primary database.
No new RAC instance can be added to the primary database after classic Extract has been created on the standby database. If you do add new instances, the redo data from the new thread will not be captured by classic Extract.
Archived logs and standby redo logs accessed from the standby database will be an exact duplicate of the primary database. The size and the contents will match, including redo data, transactional data, and supplemental data. This is guaranteed by a properly configured ADG deployment.
ADG role changes are infrequent, and both switchover and failover require user intervention.
With a switchover, there will be an indicator in the redo log file header (end of the redo log or EOR marker) to indicate end of log stream so that classic Extract on the standby can complete the RAC coordination successfully and ship all of the committed transactions to the trail file.
With a failover, a new incarnation is created on both the primary and the standby databases with a new incarnation ID, RESETLOG sequence number, and SCN value.
You must connect to the primary database from GGSCI to add TRANDATA or SCHEMATRANDATA because this is done on the primary database.
DDL triggers cannot be used on the standby database. To support DDL replication (except ADDTRANDATA), you must install the Oracle GoldenGate DDL package on the primary database.
DDL ADDTRANDATA is not supported in ADG mode; you must use ADDSCHEMATRANDATA for DDL replication.
When adding Extract on the standby database, you must specify the starting position using a specific SCN value, timestamp, or log position. Relative timestamp values, such as NOW, become ambiguous and may lead to data inconsistency.
When adding Extract on the standby database, you must specify a thread count that includes all of the relevant redo threads from the primary database.
During or after failover or switchover, no thread can be added or dropped from either primary or standby databases.
Classic Extract will only use one intervening RESETLOG operation.
If you do not want to relocate your Oracle GoldenGate installation, then you must position it in a shared space where the Oracle GoldenGate installation directory can be accessed from both the primary and standby databases.
If you are moving capture off of an ADG standby database to a primary database, then you must point your net alias to the primary database and you must remove the TRANLOG options.
Only Oracle Database releases that are running with compatibility setting of 10.2 or higher (10g Release 2) are supported.
Classic Extract cannot use the DBLOGREADER option. Use ASMUSER (there is approximately a 20 GB/hr read limit) or move the online and archive logs outside of ASM on both the primary and the standby databases.
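The positioning and thread requirements listed above can be sketched in GGSCI; the group name, thread count, and SCN value below are placeholders:

```
GGSCI> ADD EXTRACT ext1, TRANLOG, THREADS 4, SCN 2367337
```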
To configure Classic Extract for ADG mode, follow these steps as part of the overall process for configuring Oracle GoldenGate, as documented in Configuring Capture in Classic Mode.
You must have your parameter files, checkpoint files, bounded recovery files, and trail files stored in shared storage or copied to the ADG database before attempting to migrate a classic Extract to or from an ADG database. Additionally, you must ensure that there has not been any intervening role change or Extract will mine the same branch of redo.
Use the following steps to move to an ADG database:
Edit the parameter file ext1.prm to add the following parameters:
DBLOGIN USERID userid@ADG PASSWORD password
TRANLOGOPTIONS MINEFROMACTIVEDG
Start Extract by issuing the START EXTRACT ext1 command.
Use the following steps to move from an ADG database:
In a role change involving a standby database, all sessions in the primary and the standby database are first disconnected including the connections used by Extract. Then both databases are shut down, then the original primary is mounted as a standby database, and the original standby is opened as the primary database.
The procedure for a role change is determined by the initial deployment of Classic Extract and the deployment relation that you want, database or role. The following table outlines the four possible role changes and is predicated on an ADG configuration comprised of two databases, prisys and stansys. The prisys system contains the primary database and the stansys system contains the standby database; prisys has two redo threads active, whereas stansys has four redo threads active.
Initial Deployment Primary (prisys) | Initial Deployment ADG (stansys) |
---|---|
Original Deployment: ext1.prm DBLOGIN USERID userid@prisys, PASSWORD password | ext1.prm DBLOGIN USERID userid@stansys, PASSWORD password TRANLOGOPTIONS MINEFROMACTIVEDG |
Database Related: After Role Transition: Classic Extract to ADG | After Role Transition: ADG to classic Extract |
Role Related: After Role Transition: Classic Extract to classic Extract | After Role Transition: ADG to ADG |
When Oracle GoldenGate captures data from the redo logs, I/O bottlenecks can occur because Extract is reading the same files that are being written by the database logging mechanism.
Performance degradation increases with the number of Extract processes that read the same logs. You can:
Try using faster drives and a faster controller. Both Extract and the database logging mechanism will be faster on a faster I/O system.
Store the logs on RAID 0+1. Avoid RAID 5, which computes parity on every block written and is not a good choice for high levels of continuous I/O.
Parent topic: Additional Configuration Steps for Using Classic Capture