Oracle® Streams Replication Administrator's Guide
11g Release 1 (11.1)

B28322-03

1 Understanding Oracle Streams Replication

This chapter contains conceptual information about Oracle Streams replication.

Overview of Oracle Streams Replication

Replication is the process of sharing database objects and data at multiple databases. To maintain replicated database objects and data at multiple databases, a change to one of these database objects at a database is shared with the other databases. In this way, the database objects and data are kept synchronized at all of the databases in the replication environment. In an Oracle Streams replication environment, the database where a change originates is called the source database, and a database where a change is shared is called a destination database.

When you use Oracle Streams, replication of a DML or DDL change typically includes three steps:

  1. A capture process, a synchronous capture, or an application creates one or more logical change records (LCRs) and enqueues them. An LCR is a message with a specific format that describes a database change. A capture process reformats changes captured from the redo log into LCRs, a synchronous capture uses an internal mechanism to reformat changes into LCRs, and applications can construct LCRs. If the change was a data manipulation language (DML) operation, then each LCR encapsulates a row change resulting from the DML operation to a shared table at the source database. If the change was a data definition language (DDL) operation, then an LCR encapsulates the DDL change that was made to a shared database object at a source database.

  2. A propagation propagates the staged LCR to another queue, which usually resides in a database that is separate from the database where the LCR was captured. An LCR can be propagated to a number of queues before it arrives at a destination database.

  3. At a destination database, an apply process consumes the change by applying the LCR to the shared database object. An apply process can dequeue the LCR and apply it directly, or an apply process can dequeue the LCR and send it to an apply handler. In an Oracle Streams replication environment, an apply handler performs customized processing of the LCR and then applies the LCR to the shared database object.

Step 1 and Step 3 are required, but Step 2 is optional because, in some cases, an application can enqueue an LCR directly at a destination database. In addition, in a heterogeneous replication environment in which an Oracle database shares information with a non-Oracle database, an apply process can apply changes directly to a non-Oracle database without propagating LCRs.
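As a concrete sketch, the following PL/SQL calls configure these three steps for a single table. The queue names, process names, database link (dest.example.com), and table are illustrative assumptions, not values prescribed by this guide:

```sql
-- Step 1 (source database): create a capture process that captures
-- DML changes to hr.employees and enqueues them as LCRs.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'capture',
    streams_name => 'capture_hr',
    queue_name   => 'strmadmin.src_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/

-- Step 2 (source database): propagate the staged LCRs to a queue at
-- the destination database over a database link.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'prop_hr',
    source_queue_name      => 'strmadmin.src_queue',
    destination_queue_name => 'strmadmin.dest_queue@dest.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE);
END;
/

-- Step 3 (destination database): create an apply process that dequeues
-- the LCRs and applies them to the shared table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'apply',
    streams_name => 'apply_hr',
    queue_name   => 'strmadmin.dest_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/
```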

Figure 1-1 illustrates the information flow in an Oracle Streams replication environment.

Figure 1-1 Oracle Streams Information Flow


This document describes how to use Oracle Streams for replication.

Replication is one form of information sharing. Oracle Streams enables replication, and it also enables other forms of information sharing, such as messaging, event management and notification, data warehouse loading, and data protection.

See Also:

Oracle Streams Concepts and Administration for more information about the other information sharing capabilities of Oracle Streams

Rules in an Oracle Streams Replication Environment

A rule is a database object that enables a client to perform an action when an event occurs and a condition is satisfied. Rules are evaluated by a rules engine, which is a built-in part of Oracle. You use rules to control the information flow in an Oracle Streams replication environment. Each of the following mechanisms is a client of the rules engine:

  • Capture process

  • Synchronous capture

  • Propagation

  • Apply process

You control the behavior of each of these Oracle Streams clients using rules. A rule set contains a collection of rules. You can associate a positive and a negative rule set with a capture process, a propagation, and an apply process, but a synchronous capture can only have a positive rule set.

In a replication environment, an Oracle Streams client performs an action if an LCR satisfies its rule sets. In general, a change satisfies the rule sets for an Oracle Streams client if no rules in the negative rule set evaluate to TRUE for the LCR, and at least one rule in the positive rule set evaluates to TRUE for the LCR. If an Oracle Streams client is associated with both a positive and negative rule set, then the negative rule set is always evaluated first.

Specifically, you control the information flow in an Oracle Streams replication environment in the following ways:

  • Specify the changes that a capture process captures from the redo log or discards. That is, if a change found in the redo log satisfies the rule sets for a capture process, then the capture process captures the change. If a change found in the redo log does not satisfy the rule sets for a capture process, then the capture process discards the change.

  • Specify the changes that a synchronous capture captures or discards. That is, if a DML change made to a table satisfies the rule set for a synchronous capture, then the synchronous capture captures the change. If a DML change made to a table does not satisfy the rule set for a synchronous capture, then the synchronous capture discards the change.

  • Specify the LCRs that a propagation propagates from one queue to another or discards. That is, if an LCR in a queue satisfies the rule sets for a propagation, then the propagation propagates the LCR. If an LCR in a queue does not satisfy the rule sets for a propagation, then the propagation discards the LCR.

  • Specify the LCRs that an apply process dequeues or discards. That is, if an LCR in a queue satisfies the rule sets for an apply process, then the LCR is retrieved and processed by the apply process. If an LCR in a queue does not satisfy the rule sets for an apply process, then the apply process discards the LCR.

You can use the Oracle-supplied DBMS_STREAMS_ADM PL/SQL package to create rules for an Oracle Streams replication environment. You can specify these system-created rules at the following levels:

  • Table - Contains a rule condition that evaluates to TRUE for changes made to a particular table

  • Schema - Contains a rule condition that evaluates to TRUE for changes made to a particular schema

  • Global - Contains a rule condition that evaluates to TRUE for all changes made to a database

In addition, a single system-created rule can evaluate to TRUE for DML changes or for DDL changes, but not both. So, for example, if you want to replicate both DML and DDL changes to a particular table, then you need both a table-level DML rule and a table-level DDL rule for the table.
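For example, a hedged sketch of the table-level case using DBMS_STREAMS_ADM (the process and queue names here are illustrative, not prescribed):

```sql
-- One call generates both a table-level DML rule and a separate
-- table-level DDL rule for hr.employees in the capture process's
-- positive rule set.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'capture_hr',
    queue_name     => 'strmadmin.src_queue',
    include_dml    => TRUE,   -- generates the DML rule
    include_ddl    => TRUE,   -- generates the separate DDL rule
    inclusion_rule => TRUE);  -- add the rules to the positive rule set
END;
/
```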

Note:

Synchronous captures only use table rules. Synchronous captures ignore schema and global rules.

See Also:

Oracle Streams Concepts and Administration for more information about how rules are used in Oracle Streams

Nonidentical Replicas with Oracle Streams

Oracle Streams replication supports sharing database objects that are not identical at multiple databases. Different databases in the Oracle Streams environment can contain shared database objects with different structures. You can configure rule-based transformations during capture, propagation, or apply to make any necessary changes to LCRs so that they can be applied at a destination database. In Oracle Streams replication, a rule-based transformation is any modification to an LCR that results when a rule in a positive rule set evaluates to TRUE.

For example, a table at a source database can have the same data as a table at a destination database, but some of the column names can be different. In this case, a rule-based transformation can change the names of the columns in LCRs from the source database so that they can be applied successfully at the destination database.

There are two types of rule-based transformations: declarative and custom. Declarative rule-based transformations cover a set of common transformation scenarios for row LCRs, including renaming a schema, renaming a table, adding a column, renaming a column, and deleting a column. You specify (or declare) such a transformation using a procedure in the DBMS_STREAMS_ADM package. Oracle Streams performs declarative transformations internally, without invoking PL/SQL.

A custom rule-based transformation requires a user-defined PL/SQL function to perform the transformation. Oracle Streams invokes the PL/SQL function to perform the transformation. A custom rule-based transformation can modify captured LCRs, persistent LCRs, or user messages. For example, a custom rule-based transformation can change the data type of a particular column in an LCR. A custom rule-based transformation must be defined as a PL/SQL function that takes an ANYDATA object as input and returns an ANYDATA object.
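For instance, the column-renaming scenario described earlier can be declared with DBMS_STREAMS_ADM.SET_RENAME_COLUMN. The rule and column names below are assumptions for illustration:

```sql
-- Declaratively rename a column in row LCRs that satisfy an existing
-- rule, so that LCRs using the source column name apply cleanly at the
-- destination.
BEGIN
  DBMS_STREAMS_ADM.SET_RENAME_COLUMN(
    rule_name        => 'strmadmin.employees_rule',
    table_name       => 'hr.employees',
    from_column_name => 'emp_salary',  -- column name at the source
    to_column_name   => 'salary',      -- column name at the destination
    value_type       => '*',           -- transform both old and new values
    operation        => 'ADD');        -- add (rather than remove) the transformation
END;
/
```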

Rule-based transformations can be done at any point in the Oracle Streams information flow. That is, a capture process or a synchronous capture can perform a rule-based transformation on a change when a rule in its positive rule set evaluates to TRUE for the change. Similarly, a propagation or an apply process can perform a rule-based transformation on a message when a rule in its positive rule set evaluates to TRUE for the message.

Note:

Throughout this document, "rule-based transformation" is used when the text applies to both declarative and custom rule-based transformations. This document distinguishes between the two types of rule-based transformations when necessary.

See Also:

Oracle Streams Concepts and Administration for more information about rule-based transformations

Subsetting with Oracle Streams

Oracle Streams also supports subsetting of table data through the use of subset rules. If a shared table in a database in an Oracle Streams replication environment contains only a subset of data, then you can configure Oracle Streams to manage changes to a table so that only the appropriate subset of data is shared with the subset table. For example, a particular database can maintain data for employees in a particular department only. In this case, you can use subset rules to share changes to the data for employees in that department with the subset table, but not changes to employees in other departments.

Subsetting can be done at any point in the Oracle Streams information flow. That is, a capture process or synchronous capture can use a subset rule to capture a subset of changes to a particular table, a propagation can use a subset rule to propagate a subset of changes to a particular table, and an apply process can use a subset rule to apply only a subset of changes to a particular table.
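A minimal sketch of the departmental example, adding subset rules to an apply process (the department predicate, process name, and queue name are assumptions):

```sql
-- Share only rows for department 40 with the subset table by adding
-- subset rules to the apply process at the destination database.
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'hr.employees',
    dml_condition => 'department_id = 40',
    streams_type  => 'apply',
    streams_name  => 'apply_hr',
    queue_name    => 'strmadmin.dest_queue');
END;
/
```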

See Also:

Oracle Streams Concepts and Administration for more information about subset rules

Capture and Oracle Streams Replication

To maintain replicated database objects and data, you must capture changes made to these database objects and their data. Next, you must share these changes with the databases in the replication environment. In an Oracle Streams replication environment, you can capture changes in the following ways:

Change Capture Using a Capture Process

This section contains a brief overview of the capture process and conceptual information that is important for a capture process in a replication environment.

See Also:

Oracle Streams Concepts and Administration for general conceptual information about a capture process

Capture Process Overview

Changes made to database objects in an Oracle database are logged in the redo log to guarantee recoverability in the event of user error or media failure. A capture process is an Oracle background process that reads the database redo log to capture DML and DDL changes made to database objects. The source database for a change that was captured by a capture process is always the database where the change was generated in the redo log. A capture process formats these changes into messages called LCRs and enqueues them into the buffered queue portion of a queue. LCRs enqueued by a capture process are called captured LCRs. Because a running capture process automatically captures changes based on its rules, change capture using a capture process is a form of implicit capture.

There are two types of LCRs: a row LCR contains information about a change to a row in a table resulting from a DML operation, and a DDL LCR contains information about a DDL change to a database object. You use rules to specify which changes are captured. A single DML operation can change more than one row in a table. Therefore, a single DML operation can result in more than one row LCR, and a single transaction can consist of multiple DML operations.

Changes are captured by a capture user. The capture user captures all DML changes and DDL changes that satisfy the capture process rule sets.

A capture process can capture changes locally at the source database, or it can capture changes remotely at a downstream database. Figure 1-2 illustrates a local capture process.

Figure 1-2 Local Capture Process


Downstream capture means that a capture process runs on a database other than the source database. The following types of configurations are possible for a downstream capture process:

  • A real-time downstream capture configuration means that redo transport services use the log writer process (LGWR) at the source database to send redo data from the online redo log to the downstream database. At the downstream database, a remote file server process (RFS) receives the redo data and stores it in the standby redo log, and the archiver at the downstream database archives the redo data in the standby redo log. The real-time downstream capture process captures changes from the standby redo log whenever possible and from the archived redo log whenever necessary.

  • An archived-log downstream capture configuration means that archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these archived redo log files. You can copy the archived redo log files to the downstream database using redo transport services, the DBMS_FILE_TRANSFER package, file transfer protocol (FTP), or some other mechanism.
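As one illustration of the copy step, an archived redo log file can be pushed to the downstream database with the DBMS_FILE_TRANSFER package. The directory objects, file name, and database service name below are assumptions:

```sql
-- Copy an archived redo log file from the source database to the
-- downstream database; directory objects must already exist at both sites.
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SOURCE_ARCH_DIR',
    source_file_name             => 'arch_1_100.log',
    destination_directory_object => 'DOWNSTREAM_ARCH_DIR',
    destination_file_name        => 'arch_1_100.log',
    destination_database         => 'downstream.example.com');
END;
/
```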

Figure 1-3 illustrates a real-time downstream capture process.

Figure 1-3 Real-Time Downstream Capture


Figure 1-4 illustrates an archived-log downstream capture process.

Figure 1-4 Archived-Log Downstream Capture Process


A local capture process reads the online redo log whenever possible and archived redo log files otherwise. A real-time downstream capture process reads the standby redo log whenever possible and archived standby redo log files otherwise. An archived-log downstream capture process always reads archived redo log files from the source database.

Note:

  • As illustrated in Figure 1-4, the source database for a change captured by a downstream capture process is the database where the change was recorded in the redo log, not the database running the downstream capture process.

  • References to "downstream capture processes" in this document apply to both real-time downstream capture processes and archived-log downstream capture processes. This document distinguishes between the two types of downstream capture processes when necessary.

Supplemental Logging for Oracle Streams Replication

Supplemental logging places additional column data into a redo log whenever an operation is performed. The capture process captures this additional information and places it in LCRs. Supplemental logging is always configured at a source database, regardless of the location of the capture process that captures changes to the source database.

There are two types of supplemental logging: database supplemental logging and table supplemental logging. Database supplemental logging specifies supplemental logging for an entire database, while table supplemental logging enables you to specify log groups for supplemental logging of a particular table. If you use table supplemental logging, then you can choose between two types of log groups: unconditional log groups and conditional log groups.

Unconditional log groups log the before images of specified columns when the table is changed, regardless of whether the change affected any of the specified columns. Unconditional log groups are sometimes referred to as "always log groups." Conditional log groups log the before images of all specified columns only if at least one of the columns in the log group is changed.

Supplemental logging at the database level, unconditional log groups at the table level, and conditional log groups at the table level together determine which old values are logged for a change.
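The two kinds of table supplemental log groups, and database supplemental logging of primary key columns, are specified with ALTER TABLE and ALTER DATABASE. The table, group, and column names below are illustrative:

```sql
-- Unconditional ("always") log group: the before images of the listed
-- columns are logged for every change to the table.
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_pk (employee_id) ALWAYS;

-- Conditional log group: before images of the listed columns are logged
-- only when at least one column in the group is changed.
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_cond (first_name, last_name);

-- Database-level supplemental logging of primary key columns.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```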

If you plan to use one or more apply processes to apply LCRs captured by a capture process, then you must enable supplemental logging at the source database for the following types of columns in tables at the destination database:

  • Any columns at the source database that are used in a primary key in tables for which changes are applied at a destination database must be unconditionally logged in a log group or by database supplemental logging of primary key columns.

  • If the parallelism of any apply process that will apply the changes is greater than 1, then any unique constraint column at a destination database that comes from multiple columns at the source database must be conditionally logged. Supplemental logging does not need to be specified if a unique constraint column comes from a single column at the source database.

  • If the parallelism of any apply process that will apply the changes is greater than 1, then any foreign key column at a destination database that comes from multiple columns at the source database must be conditionally logged. Supplemental logging does not need to be specified if the foreign key column comes from a single column at the source database.

  • If the parallelism of any apply process that will apply the changes is greater than 1, then any bitmap index column at a destination database that comes from multiple columns at the source database must be conditionally logged. Supplemental logging does not need to be specified if the bitmap index column comes from a single column at the source database.

  • Any columns at the source database that are used as substitute key columns for an apply process at a destination database must be unconditionally logged. You specify substitute key columns for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

  • The columns specified in a column list for conflict resolution during apply must be conditionally logged if more than one column at the source database is used in the column list at the destination database.

  • Any columns at the source database that are used by a DML handler or error handler at a destination database must be unconditionally logged.

  • Any columns at the source database that are used by a rule or a rule-based transformation must be unconditionally logged.

  • Any columns at the source database that are specified in a value dependency virtual dependency definition at a destination database must be unconditionally logged.

  • If you specify row subsetting for a table at a destination database, then any columns at the source database that are in the destination table or columns at the source database that are in the subset condition must be unconditionally logged. You specify a row subsetting condition for an apply process using the dml_condition parameter in the ADD_SUBSET_RULES procedure in the DBMS_STREAMS_ADM package.

If you do not use supplemental logging for these types of columns at a source database, then changes involving these columns might not apply properly at a destination database.
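For the substitute key column case in the list above, a sketch of the destination-side specification (the column list is an assumption for illustration):

```sql
-- Designate substitute key columns for hr.employees at the destination
-- database; the corresponding columns at the source database must then
-- be unconditionally logged.
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'first_name,last_name,email');
END;
/
```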

Note:

LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be part of a supplemental log group.

Change Capture Using a Synchronous Capture

This section contains a brief overview of synchronous capture and conceptual information that is important for a synchronous capture in a replication environment.

Synchronous capture is an optional Oracle Streams client that captures data manipulation language (DML) changes made to tables. Synchronous capture uses an internal mechanism to capture DML changes to specified tables. When synchronous capture is configured to capture changes to tables, the database that contains these tables is called the source database. Because a synchronous capture automatically captures changes based on its rules, change capture using a synchronous capture is a form of implicit capture.

When a DML change is made to a table, it can result in changes to one or more rows in the table. Synchronous capture captures each row change and converts it into a specific message format called a row logical change record (row LCR). After capturing a row LCR, synchronous capture enqueues a message containing the row LCR into the persistent queue portion of a queue. LCRs enqueued by a synchronous capture are persistent LCRs.

Figure 1-5 shows a synchronous capture capturing LCRs.

Figure 1-5 Synchronous Capture


Change Capture Using a Custom Application

A custom application can capture the changes made to an Oracle database by reading from transaction logs, by using triggers, or by some other method. The application must assemble and order the transactions and must convert each change into a logical change record (LCR). Next, the application must enqueue the LCRs in an Oracle database using the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package. The application must commit after enqueuing all LCRs in each transaction.

Because the LCRs are constructed and enqueued manually by a user or application, change capture that manually enqueues constructed LCRs is sometimes called explicit capture. If you have a heterogeneous replication environment where you must capture changes at a non-Oracle database and share these changes with an Oracle database, then you can create a custom application to capture changes made to the non-Oracle database.
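A minimal sketch of explicit capture at an Oracle database, constructing a row LCR and enqueuing it with DBMS_STREAMS_MESSAGING (the database name, queue name, and column values are assumptions):

```sql
-- Construct a row LCR describing an INSERT and enqueue it; an apply
-- process at this database can then dequeue and apply it. The enqueue
-- must be followed by a COMMIT.
DECLARE
  lcr SYS.LCR$_ROW_RECORD;
BEGIN
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
    source_database_name => 'src.example.com',
    command_type         => 'INSERT',
    object_owner         => 'HR',
    object_name          => 'EMPLOYEES');
  lcr.ADD_COLUMN('NEW', 'EMPLOYEE_ID', ANYDATA.ConvertNumber(999));
  lcr.ADD_COLUMN('NEW', 'LAST_NAME',   ANYDATA.ConvertVarchar2('Doe'));
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.dest_queue',
    payload    => ANYDATA.ConvertObject(lcr));
  COMMIT;
END;
/
```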

Propagation and Oracle Streams Replication

In an Oracle Streams replication environment, propagations can propagate logical change records (LCRs) to the appropriate databases so that changes to replicated database objects can be shared. You can use ANYDATA queues to stage LCRs, and propagations to propagate these LCRs to the appropriate databases. In some configurations, the combined capture and apply optimization enables capture processes to send LCRs directly to apply processes.

The following sections describe staging and propagation in an Oracle Streams replication environment:

See Also:

Oracle Streams Concepts and Administration for more information about staging and propagation in Oracle Streams

LCR Staging

Captured LCRs are staged in a staging area. In Oracle Streams, the staging area is an ANYDATA queue that can store row LCRs and DDL LCRs, as well as other types of messages. LCRs are staged in the following ways:

  • LCRs captured by a capture process (captured LCRs) are staged in a buffered queue, which is System Global Area (SGA) memory associated with a queue.

  • LCRs captured by a synchronous capture (persistent LCRs) are staged in a persistent queue, which is hard disk space associated with a queue.

  • LCRs captured by an application can be enqueued into a buffered queue or a persistent queue. If you want these LCRs to be dequeued and processed by an apply process, then they must be enqueued into a persistent queue (persistent LCRs).

Staged LCRs can be propagated by a propagation or applied by an apply process, and a particular staged LCR can be both propagated and applied. A running propagation automatically propagates LCRs based on the rules in its rule sets, and a running apply process automatically applies LCRs based on the rules in its rule sets.

See Also:

Oracle Streams Concepts and Administration for more information about buffered queues

LCR Propagation

In an Oracle Streams replication environment, a propagation typically propagates LCRs from a queue in the local database to a queue in a remote database. The queue from which the LCRs are propagated is called the source queue, and the queue that receives the LCRs is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.

Figure 1-6 Propagation from a Source Queue to a Destination Queue


Even after an LCR is propagated by a propagation or applied by an apply process, it can remain in the source queue if you have also configured Oracle Streams to propagate the LCR to one or more other queues. Also, notice that an ANYDATA queue can store non-LCR user messages as well as LCRs. Typically, non-LCR user messages are used for messaging applications, not for replication.

You can configure an Oracle Streams replication environment to propagate LCRs through one or more intermediate databases before arriving at a destination database. Such a propagation environment is called a directed network. An LCR might or might not be processed by an apply process at an intermediate database. Rules determine which LCRs are propagated to each destination database, and you can specify the route that LCRs will traverse on their way to a destination database.

The advantage of using a directed network is that a source database does not need to have a physical network connection with the destination database. So, if you want LCRs to propagate from one database to another, but there is no direct network connection between the computers running these databases, then you can still propagate the LCRs without reconfiguring your network, as long as one or more intermediate databases connect the source database to the destination database. If you use directed networks, and an intermediate site goes down for an extended period of time or is removed, then you might need to reconfigure the network and the Oracle Streams environment.

See Also:

Oracle Streams Concepts and Administration for more information about directed networks

Combined Capture and Apply Optimization

In Oracle Database 11g Release 1 (11.1) and later, a capture process can send logical change records (LCRs) directly to an apply process under specific conditions. This configuration is called combined capture and apply.

Combined capture and apply automatically optimizes the path of the stream so that the capture process communicates directly with the apply process for certain configurations. When combined capture and apply is in use, the capture process acts as the propagation sender to transmit LCRs directly from the capture process to the apply process through a database link. In this mode, the buffered queue is optimized to improve efficiency.

Apply and Oracle Streams Replication

In an Oracle Streams replication environment, changes made to shared database objects are captured and propagated to destination databases where they are applied. You configure one or more apply processes at each destination database to apply these changes. The following sections describe the concepts related to change apply in an Oracle Streams replication environment:

See Also:

Oracle Streams Concepts and Administration for more information about change apply with an apply process

Overview of the Apply Process

An apply process is an optional Oracle background process that dequeues logical change records (LCRs) and user messages from a specific queue and either applies each one directly or passes it as a parameter to a user-defined procedure. The LCRs dequeued by an apply process contain the results of DML changes or DDL changes that an apply process can apply to database objects in a destination database. A user-defined message dequeued by an apply process is of type ANYDATA and can contain any user message, including a user-constructed LCR.

LCRs are applied by an apply user. The apply user applies all row changes resulting from DML operations and all DDL changes. The apply user also runs custom rule-based transformations configured for apply process rules, and runs apply handlers configured for an apply process.

Apply Processing Options for LCRs

An apply process is a flexible mechanism for processing the logical change records (LCRs) in a queue. You have options to consider when you configure one or more apply processes for your environment. Typically, to accomplish replication in an Oracle Streams environment, an apply process applies LCRs, not non-LCR user messages. This section discusses the LCR processing options available to you with an apply process.

Captured LCRs and Persistent LCRs

A single apply process can apply messages from either a buffered queue or a persistent queue, but not both. If a queue at a destination database contains LCRs in both its buffered queue and persistent queue, then the destination database must have at least two apply processes to dequeue these LCRs.

When an apply process is dequeuing messages from a buffered queue, it can only dequeue messages that were captured by a capture process (captured LCRs). An apply process cannot dequeue messages that were enqueued by an application into a buffered queue. When an apply process is dequeuing messages from a persistent queue, it can dequeue persistent LCRs that were enqueued by a synchronous capture or by an application.

You can use the DBMS_STREAMS_ADM package or the DBMS_APPLY_ADM package to create an apply process that dequeues captured LCRs, but only the CREATE_APPLY procedure in the DBMS_APPLY_ADM package can create an apply process that dequeues persistent LCRs.
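A sketch of the persistent-LCR case (the queue and apply process names are illustrative):

```sql
-- Create an apply process that dequeues persistent LCRs (for example,
-- those enqueued by a synchronous capture) by setting
-- apply_captured => FALSE.
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.dest_queue',
    apply_name     => 'apply_sync',
    apply_captured => FALSE);  -- dequeue persistent LCRs, not captured LCRs
END;
/
```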

Direct and Custom Apply of LCRs

Direct apply means that an apply process applies an LCR without running a user procedure. The apply process either successfully applies the change in the LCR to a database object or, if a conflict or an apply error is encountered, tries to resolve the error with a conflict handler or a user-specified procedure called an error handler.

If a conflict handler can resolve the conflict, then it either applies the LCR, or it discards the change in the LCR. If an error handler can resolve the error, then it should apply the LCR, if appropriate. An error handler can resolve an error by modifying the LCR before applying it. If the error handler cannot resolve the error, then the apply process places the transaction, and all LCRs associated with the transaction, into the error queue.

Custom apply means that an apply process passes the LCR as a parameter to a user procedure for processing. The user procedure can process the LCR in a customized way.

A user procedure that processes row LCRs resulting from DML statements is called a DML handler, while a user procedure that processes DDL LCRs resulting from DDL statements is called a DDL handler. An apply process can have many DML handlers but only one DDL handler, which processes all DDL LCRs dequeued by the apply process.

For each table associated with an apply process, you can set a separate DML handler to process each of the following types of operations in row LCRs:

  • INSERT

  • UPDATE

  • DELETE

  • LOB_UPDATE

For example, the hr.employees table can have one DML handler to process INSERT operations and a different DML handler to process UPDATE operations.

A user procedure can be used for any customized processing of LCRs. For example, if you want to skip DELETE operations for the hr.employees table at a certain destination database, then you can specify a DML handler for DELETE operations on this table to accomplish this goal. Such a handler is not invoked for INSERT, UPDATE, or LOB_UPDATE operations on the table. Or, if you want to log DDL changes before applying them, then you can create a user procedure that processes DDL operations to accomplish this.
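A DELETE-skipping handler of this kind can be a procedure that simply returns without acting on the row LCR. The following is a sketch, assuming a Streams administrator schema named strmadmin and an apply process that already applies changes to hr.employees; the handler and schema names are illustrative:

```sql
-- Hypothetical handler: does nothing with the row LCR, so the DELETE is skipped
CREATE OR REPLACE PROCEDURE strmadmin.skip_deletes(in_any IN ANYDATA) IS
BEGIN
  NULL;  -- the DELETE operation is not applied at this destination
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',
    object_type    => 'TABLE',
    operation_name => 'DELETE',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.skip_deletes');
END;
/
```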

A DML handler should never commit and never roll back, except to a named savepoint that the user procedure has established. To execute DDL inside a DDL handler, invoke the EXECUTE member procedure for the LCR.

In addition to DML handlers and DDL handlers, you can specify a precommit handler for an apply process. A precommit handler is a PL/SQL procedure that takes the commit SCN from an internal commit directive in the queue used by the apply process. The precommit handler can process the commit information in any customized way. For example, it can record the commit information for an apply process in an audit table.
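As a sketch of the audit-table example, a precommit handler is a procedure with a single NUMBER parameter that receives the commit SCN. The audit table, schema, and apply process name (strm_apply) below are assumptions for illustration:

```sql
-- Hypothetical precommit handler: record each commit SCN in an audit table
CREATE OR REPLACE PROCEDURE strmadmin.record_commit(commit_number IN NUMBER) IS
BEGIN
  INSERT INTO strmadmin.apply_audit (commit_scn, logged_at)
    VALUES (commit_number, SYSTIMESTAMP);
END;
/

-- Associate the handler with an existing apply process
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name        => 'strm_apply',
    precommit_handler => 'strmadmin.record_commit');
END;
/
```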

Caution:

Do not modify LONG, LONG RAW, or nonassembled LOB column data in an LCR with DML handlers, error handlers, or custom rule-based transformation functions. DML handlers and error handlers can modify LOB columns in row LCRs that have been constructed by LOB assembly.

Apply Processes and Dependencies

The following sections describe how apply processes handle dependencies:

How Dependent Transactions Are Applied

The parallelism apply process parameter controls the parallelism of an apply process. When apply process parallelism is set to 1, a single apply server applies transactions in the same order as the order in which they were committed on the source database. In this case, dependencies are not an issue. For example, if transaction A committed before transaction B on the source database, then, on the destination database, all of the LCRs in transaction A are applied before any LCRs in transaction B.

However, when apply process parallelism is set to a value greater than 1, multiple apply servers apply transactions simultaneously. When an apply process is applying transactions in parallel, it applies the row LCRs in these transactions until it detects a row LCR that depends on a row LCR in another transaction. When a dependent row LCR is detected, an apply process finishes applying the LCRs in the transaction with the lower commit system change number (CSCN) and commits this transaction before it finishes applying the remaining row LCRs in the transaction with the higher CSCN.

For example, consider two transactions: transaction A and transaction B. The transactions are dependent transactions, and each transaction contains 100 row LCRs. Transaction A committed on the source database before transaction B. Therefore, transaction A has the lower CSCN of the two transactions. An apply process can apply these transactions in parallel in the following way:

  1. The apply process begins to apply row LCRs from both transactions in parallel.

  2. Using a constraint in the destination database's data dictionary or a virtual dependency definition at the destination database, the apply process detects a dependency between a row LCR in transaction A and a row LCR in transaction B.

  3. Because transaction B has the higher CSCN of the two transactions, the apply process waits to apply transaction B and does not apply the dependent row LCR in transaction B. The row LCRs that precede the dependent row LCR in transaction B might already have been applied. For example, if the dependent row LCR in transaction B is the 81st row LCR, then the apply process might already have applied 80 of the 100 row LCRs in transaction B.


  4. Because transaction A has the lower CSCN of the two transactions, the apply process applies all the row LCRs in transaction A and commits.

  5. The apply process applies the dependent row LCR in transaction B and the remaining row LCRs in transaction B. When all of the row LCRs in transaction B are applied, the apply process commits transaction B.

Note:

You can set the parallelism apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package.
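For example, assuming an apply process named strm_apply, the following call sets its parallelism to 4:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm_apply',
    parameter  => 'parallelism',
    value      => '4');
END;
/
```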

Row LCR Ordering During Apply

An apply process orders and applies row LCRs in the following way:

  • Row LCRs within a single transaction are always applied in the same order as the corresponding changes on the source database.

  • Row LCRs that depend on each other in different transactions are always applied in the same order as the corresponding changes on the source database. When apply process parallelism is greater than 1, and the apply process detects a dependency between row LCRs in different transactions, the apply process always executes the transaction with the lower CSCN before executing the dependent row LCR. This behavior is described in more detail in "How Dependent Transactions Are Applied".

  • If the commit_serialization apply process parameter is set to full, then the apply process commits all transactions, regardless of whether they contain dependent row LCRs, in the same order as the corresponding transactions on the source database.

  • If the commit_serialization apply process parameter is set to none, then the apply process might apply transactions that do not depend on each other in a different order than the commit order of the corresponding transactions on the source database.

Note:

You can set the commit_serialization apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package.

Dependencies and Constraints

If the names of shared database objects are the same at the source and destination databases, and if the objects are in the same schemas at these databases, then an apply process automatically detects dependencies between row LCRs, assuming constraints are defined for the database objects at the destination database. Information about these constraints is stored in the data dictionary at the destination database.

Regardless of the setting for the commit_serialization parameter and apply process parallelism, an apply process always respects dependencies between transactions that are enforced by database constraints. When an apply process is applying a transaction that contains row LCRs that depend on row LCRs in another transaction, the apply process ensures that the row LCRs are applied in the correct order and that the transactions are committed in the correct order to maintain the dependencies. Apply processes detect dependencies for captured row LCRs and persistent row LCRs.

However, some environments have dependencies that are not enforced by database constraints, such as environments that enforce dependencies using applications. If your environment has dependencies for shared database objects that are not enforced by database constraints, then set the commit_serialization parameter to full for apply processes that apply changes to these database objects.

Dependency Detection, Rule-Based Transformations, and Apply Handlers

When rule-based transformations are specified for rules used by an apply process, and apply handlers are configured for the apply process, LCRs are processed in the following order:

  1. The apply process dequeues LCRs from its queue.

  2. The apply process runs rule-based transformations on LCRs, when appropriate.

  3. The apply process detects dependencies between LCRs.

  4. The apply process passes LCRs to apply handlers, when appropriate.

See Also:

Oracle Streams Concepts and Administration for more information about apply servers

Virtual Dependency Definitions

In some cases, an apply process requires additional information to detect dependencies in row LCRs that are being applied in parallel. The following are examples of cases in which an apply process requires additional information to detect dependencies:

  • The data dictionary at the destination database does not contain the required information. The following are examples of this case:

    • The apply process cannot find information about a database object in the data dictionary of the destination database. This can happen when there are data dictionary differences for shared database objects between the source and destination databases. For example, a shared database object can have a different name or can be in a different schema at the source database and destination database.

    • A relationship exists between two or more tables, and the relationship is not recorded in the data dictionary of the destination database. This can happen when database constraints are deliberately omitted to improve performance, or when an application, rather than database constraints, enforces dependencies during database operations.

  • Data is denormalized by an apply handler after dependency computation. For example, the information in a single row LCR can be used to create multiple row LCRs that are applied to multiple tables.

Apply errors or incorrect processing can result when an apply process cannot determine dependencies properly. In some of the cases described in the previous list, rule-based transformations can be used to avoid apply problems. For example, if a shared database object is in different schemas at the source and destination databases, then a rule-based transformation can change the schema in the appropriate LCRs. However, the disadvantage with using rule-based transformations is that they cannot be executed in parallel.

A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions at a destination database. A virtual dependency definition is not described as a constraint in the data dictionary of the destination database. Instead, it is specified using procedures in the DBMS_APPLY_ADM package. Virtual dependency definitions enable an apply process to detect dependencies that it would not be able to detect by using only the constraint information in the data dictionary. After dependencies are detected, an apply process schedules LCRs and transactions in the correct order for apply.

Virtual dependency definitions provide required information so that apply processes can detect dependencies correctly before applying LCRs directly or passing LCRs to apply handlers. Virtual dependency definitions enable apply handlers to process these LCRs correctly, and the apply handlers can process them in parallel to improve performance.

A virtual dependency definition can define one of the following types of dependencies:

Note:

A destination database must be running Oracle Database 10g Release 2 or later to specify virtual dependency definitions.

Value Dependency

A value dependency defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables. A value dependency is set for one or more columns, and an apply process uses a value dependency to detect dependencies between row LCRs that contain values for these columns. Value dependencies can define virtual foreign key relationships between tables, but, unlike foreign key relationships, value dependencies can involve more than two tables.

Value dependencies are useful when relationships between columns in tables are not described by constraints in the data dictionary of the destination database. Value dependencies describe these relationships, and an apply process uses the value dependencies to determine when two or more row LCRs in different transactions involve the same row in a table at the destination database. For transactions that are being applied in parallel, when two or more row LCRs involve the same row, the transactions that include these row LCRs are dependent transactions.

Use the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package to define or remove a value dependency at a destination database. In this procedure, table columns are specified as attributes.
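For example, the following sketch defines a value dependency named order_id_dep on the order_id column of two tables, so that row LCRs carrying the same order_id value in different transactions are treated as dependent. The dependency, table, and column names are illustrative:

```sql
BEGIN
  -- Add each table's order_id column to the same value dependency
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',
    object_name     => 'oe.orders',
    attribute_list  => 'order_id');

  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',
    object_name     => 'oe.order_items',
    attribute_list  => 'order_id');
END;
/
```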

The following restrictions pertain to value dependencies:

  • The row LCRs that involve the database objects specified in a value dependency must originate from a single source database.

  • Each value dependency must contain only one set of attributes for a particular database object.

Also, any columns specified in a value dependency at a destination database must be supplementally logged at the source database. These columns must be unconditionally logged.

Object Dependency

An object dependency defines a parent-child relationship between two objects at a destination database. An apply process schedules execution of transactions that involve the child object after all transactions with lower commit system change number (CSCN) values that involve the parent object have been committed. An apply process uses the object identifier in each row LCR to detect dependencies. The apply process does not use column values in the row LCRs to detect object dependencies.

Object dependencies are useful when relationships between tables are not described by constraints in the data dictionary of the destination database. Object dependencies describe these relationships, and an apply process uses the object dependencies to determine when two or more row LCRs in different transactions involve these tables. For transactions that are being applied in parallel, when a row LCR in one transaction involves the child table, and a row LCR in a different transaction involves the parent table, the transactions that include these row LCRs are dependent transactions.

Use the CREATE_OBJECT_DEPENDENCY procedure to create an object dependency at a destination database. Use the DROP_OBJECT_DEPENDENCY procedure to drop an object dependency at a destination database. Both of these procedures are in the DBMS_APPLY_ADM package.
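For example, assuming hr.departments is the parent table and hr.employees the child, the following sketch creates and later drops the object dependency:

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'hr.employees',    -- child object
    parent_object_name => 'hr.departments'); -- parent object
END;
/

BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name        => 'hr.employees',
    parent_object_name => 'hr.departments');
END;
/
```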

Note:

Tables with circular dependencies can result in apply process deadlocks when apply process parallelism is greater than 1. The following is an example of a circular dependency: Table A has a foreign key constraint on table B, and table B has a foreign key constraint on table A. Apply process deadlocks are possible when two or more transactions that involve the tables with circular dependencies commit at the same SCN.

Barrier Transactions

When an apply process cannot identify the table row or the database object specified in a row LCR by using the destination database's data dictionary and virtual dependency definitions, the transaction that contains the row LCR is applied after all of the other transactions with lower CSCN values. Such a transaction is called a barrier transaction. Transactions with higher CSCN values than the barrier transaction are not applied until after the barrier transaction has committed. In addition, all DDL transactions are barrier transactions.

Considerations for Applying DML Changes to Tables

The following sections discuss considerations for applying DML changes to tables:

Constraints and Applying DML Changes to Tables

You must ensure that the primary key columns at the destination database are logged in the redo log at the source database for every update. A unique key or foreign key constraint at a destination database that contains data from more than one column at the source database requires additional logging at the source database.

There are various ways to ensure that a column is logged at the source database. For example, whenever the value of a column is updated, the column is logged. Also, Oracle has a feature called supplemental logging that automates the logging of specified columns.

For a unique key or foreign key constraint at a destination database that contains data from only one column at a source database, no supplemental logging is required. However, for a constraint that contains data from multiple columns at the source database, you must create a conditional supplemental log group containing all the columns at the source database that are used by the constraint at the destination database.
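For example, if a foreign key constraint at the destination database uses the department_id and manager_id columns of hr.employees, then a conditional supplemental log group such as the following can be created at the source database. The group name is illustrative; omitting the ALWAYS keyword makes the group conditional:

```sql
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_fk (department_id, manager_id);
```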

Typically, unique key and foreign key constraints include the same columns at the source database and destination database. However, in some cases, an apply handler or custom rule-based transformation can combine a multi-column constraint from the source database into a single key column at the destination database. Also, an apply handler or custom rule-based transformation can separate a single key column from the source database into a multi-column constraint at the destination database. In such cases, the number of columns in the constraint at the source database determines whether a conditional supplemental log group is required. If there is more than one column in the constraint at the source database, then a conditional supplemental log group containing all the constraint columns is required at the source database. If there is only one column in the constraint at the source database, then no supplemental logging is required for the key column.

Substitute Key Columns

If possible, each table for which changes are applied by an apply process should have a primary key. When a primary key is not possible, Oracle recommends that each table have a set of columns that can be used as a unique identifier for each row of the table. If the tables that you plan to use in your Oracle Streams environment do not have a primary key or a set of unique columns, then consider altering these tables accordingly.

To detect conflicts and handle errors accurately, Oracle must be able to identify uniquely and match corresponding rows at different databases. By default, Oracle Streams uses the primary key of a table to identify rows in the table, and if a primary key does not exist, Oracle Streams uses the smallest unique key that has at least one NOT NULL column to identify rows in the table. When a table at a destination database does not have a primary key or a unique key with at least one NOT NULL column, or when you want to use columns other than the primary key or unique key for the key, you can designate a substitute key at the destination database. A substitute key is a column or set of columns that Oracle can use to identify rows in the table during apply.

You can specify the substitute key columns for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. Unlike true primary keys, substitute key columns can contain nulls. Also, the substitute key columns take precedence over any existing primary key or unique keys for the specified table for all apply processes at the destination database.

If you specify a substitute key for a table in a destination database, and these columns are not a primary key for the same table at the source database, then you must create an unconditional supplemental log group containing the substitute key columns at the source database.
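For example, the following sketch designates three columns of hr.employees as the substitute key at the destination database and creates the corresponding unconditional supplemental log group at the source database; the column choice and group name are illustrative:

```sql
-- At the destination database: designate the substitute key columns
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'first_name,last_name,email');
END;
/

-- At the source database: ALWAYS makes the log group unconditional
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_key
    (first_name, last_name, email) ALWAYS;
```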

In the absence of substitute key columns, primary key constraints, and unique key constraints, an apply process uses all of the columns in the table as the key columns, excluding LOB, LONG, and LONG RAW columns. In this case, you must create an unconditional supplemental log group containing these columns at the source database. Using substitute key columns is preferable when there is no primary key constraint for a table because fewer columns are needed in the row LCR.

Note:

  • Oracle recommends that each column you specify as a substitute key column be a NOT NULL column. You should also create a single index that includes all of the columns in a substitute key. Following these guidelines improves performance for changes because the database can locate the relevant row more efficiently.

  • LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be specified as substitute key columns.

Apply Process Behavior for Column Discrepancies

A column discrepancy is any difference in the columns in a table at a source database and the columns in the same table at a destination database. If there are column discrepancies in your Oracle Streams environment, then use rule-based transformations or DML handlers to make the columns in row LCRs being applied by an apply process match the columns in the relevant tables at a destination database. The following sections describe apply process behavior for common column discrepancies.

Missing Columns at the Destination Database

If the table at the destination database is missing one or more columns that are in the table at the source database, then an apply process raises an error and moves the transaction that caused the error into the error queue. You can avoid such an error by creating a rule-based transformation or DML handler that removes these columns from the row LCRs before they are applied. Specifically, the transformation or handler can remove the columns using the DELETE_COLUMN member procedure on the row LCR.
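For example, the following DML handler sketch removes a hypothetical commission_pct column, assumed absent at the destination, from each row LCR before applying it; the procedure and column names are illustrative:

```sql
-- Hypothetical DML handler: strip a column that does not exist at the destination
CREATE OR REPLACE PROCEDURE strmadmin.drop_extra_col(in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  -- remove the column from both the old and new values ('*')
  lcr.DELETE_COLUMN(column_name => 'commission_pct', value_type => '*');
  lcr.EXECUTE(TRUE);  -- apply the modified row LCR
END;
/
```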

Extra Columns at the Destination Database

If the table at the destination database has more columns than the table at the source database, then apply process behavior depends on whether the extra columns are required for dependency computations. If the extra columns are not used for dependency computations, then an apply process applies changes to the destination table. In this case, if column defaults exist for the extra columns at the destination database, then these defaults are used for these columns for all inserts. Otherwise, these inserted columns are NULL.

If, however, the extra columns are used for dependency computations, then an apply process places the transactions that include these changes in the error queue. The following types of columns are required for dependency computations:

  • For all changes, all key columns

  • For INSERT and DELETE statements, all columns involved with constraints

  • For UPDATE statements, if a constraint column is changed, such as a unique key constraint column or a foreign key constraint column, then all columns involved in the constraint

Column Data Type Mismatch

A column data type mismatch results when the data type for a column in a table at the destination database does not match the data type for the same column at the source database. An apply process can automatically convert certain data types when it encounters a column data type mismatch. If an apply process cannot automatically convert the data type, then the apply process places transactions containing the changes to the mismatched column into the error queue. To avoid such an error, you can create a custom rule-based transformation or DML handler that converts the data type.

See Also:

Oracle Streams Concepts and Administration for more information about automatic data type conversion during apply

Conflict Resolution and an Apply Process

Conflicts are possible in an Oracle Streams configuration where data is shared between multiple databases. A conflict is a mismatch between the old values in an LCR and the expected data in a table. A conflict can occur if DML changes are allowed to a table for which changes are captured and to a table where these changes are applied.

For example, a transaction at the source database can update a row at nearly the same time as a different transaction that updates the same row at a destination database. In this case, if data consistency between the two databases is important, then when the change is propagated to the destination database, an apply process must be instructed either to keep the change at the destination database or replace it with the change from the source database. When data conflicts occur, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules.

Oracle Streams automatically detects conflicts and, for update conflicts, tries to use an update conflict handler to resolve them if one is configured. Oracle Streams offers a variety of prebuilt handlers that enable you to define a conflict resolution system for your database that resolves conflicts in accordance with your business rules. If you have a unique situation that a prebuilt conflict resolution handler cannot resolve, then you can build and use your own custom conflict resolution handlers in an error handler or DML handler. Conflict detection can be disabled for nonkey columns.

Handlers and Row LCR Processing

Any of the following handlers can process a row LCR:

  • DML handler

  • Error handler

  • Update conflict handler

The following sections describe the possible scenarios involving these handlers:

You cannot have a DML handler and an error handler simultaneously for the same operation on the same table. Therefore, there is no scenario in which they could both be invoked.

No Relevant Handlers

If there are no relevant handlers for a row LCR, then an apply process tries to apply the change specified in the row LCR directly. If the apply process can apply the row LCR, then the change is made to the row in the table. If there is a conflict or an error during apply, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Relevant Update Conflict Handler

Consider a case where there is a relevant update conflict handler configured, but no other relevant handlers are configured. An apply process tries to apply the change specified in a row LCR directly. If the apply process can apply the row LCR, then the change is made to the row in the table.

If there is an error during apply that is caused by a condition other than an update conflict, including a uniqueness conflict or a delete conflict, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

If there is an update conflict during apply, then the relevant update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, then the apply process either applies the LCR or discards the LCR, depending on the resolution of the update conflict, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the update conflict handler cannot resolve the conflict, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

DML Handler But No Relevant Update Conflict Handler

Consider a case where an apply process passes a row LCR to a DML handler, and there is no relevant update conflict handler configured.

The DML handler processes the row LCR. The designer of the DML handler has complete control over this processing. Some DML handlers can perform SQL operations or run the EXECUTE member procedure of the row LCR. If the DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR might have been modified by the DML handler.

If any SQL operation performed by the DML handler fails, or if an attempt to run the EXECUTE member procedure fails, then the DML handler can try to handle the exception. If the DML handler does not raise an exception, then the apply process assumes the DML handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.

If the DML handler cannot handle the exception, then the DML handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

DML Handler And a Relevant Update Conflict Handler

Consider a case where an apply process passes a row LCR to a DML handler and there is a relevant update conflict handler configured.

The DML handler processes the row LCR. The designer of the DML handler has complete control over this processing. Some DML handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR could have been modified by the DML handler.

If any SQL operation performed by the DML handler fails, or if an attempt to run the EXECUTE member procedure fails for any reason other than an update conflict, then the behavior is the same as that described in "DML Handler But No Relevant Update Conflict Handler". Note that uniqueness conflicts and delete conflicts are not update conflicts.

If an attempt to run the EXECUTE member procedure fails because of an update conflict, then the behavior depends on the setting of the conflict_resolution parameter in the EXECUTE member procedure:

The conflict_resolution Parameter Is Set to TRUE

If the conflict_resolution parameter is set to TRUE, then the relevant update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, and all other operations performed by the DML handler succeed, then the DML handler finishes without raising an exception, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.

If the update conflict handler cannot resolve the conflict, then the DML handler can try to handle the exception. If the DML handler does not raise an exception, then the apply process assumes the DML handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the DML handler cannot handle the exception, then the DML handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

The conflict_resolution Parameter Is Set to FALSE

If the conflict_resolution parameter is set to FALSE, then the relevant update conflict handler is not invoked. In this case, the behavior is the same as that described in "DML Handler But No Relevant Update Conflict Handler".

Error Handler But No Relevant Update Conflict Handler

Consider a case where an apply process encounters an error when it tries to apply a row LCR. This error can be caused by a conflict or by some other condition. There is an error handler for the table operation but no relevant update conflict handler configured.

The row LCR is passed to the error handler. The error handler processes the row LCR. The designer of the error handler has complete control over this processing. Some error handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the error handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR could have been modified by the error handler.

If any SQL operation performed by the error handler fails, or if an attempt to run the EXECUTE member procedure fails, then the error handler can try to handle the exception. If the error handler does not raise an exception, then the apply process assumes the error handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.

If the error handler cannot handle the exception, then the error handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Error Handler And a Relevant Update Conflict Handler

Consider a case where an apply process encounters an error when it tries to apply a row LCR. There is an error handler for the table operation, and there is a relevant update conflict handler configured.

The handler that is invoked to handle the error depends on the type of error:

  • If the error is caused by a condition other than an update conflict, such as a uniqueness conflict or a delete conflict, then the error handler is invoked, and the behavior is the same as that described in "Error Handler But No Relevant Update Conflict Handler".

  • If the error is caused by an update conflict, then the update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, then the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. In this case, the error handler is not invoked.

    If the update conflict handler cannot resolve the conflict, then the error handler is invoked. If the error handler does not raise an exception, then the apply process assumes the error handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the error handler cannot process the LCR, then the error handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Considerations for Applying DDL Changes

The following sections discuss considerations for applying DDL changes to tables:

System-Generated Names

If you plan to capture DDL changes at a source database and apply these DDL changes at a destination database, then avoid using system-generated names. If a DDL statement results in a system-generated name for an object, then the name of the object typically will be different at the source database and each destination database applying the DDL change from this source database. Different names for objects can result in apply errors for future DDL changes.

For example, suppose the following DDL statement is run at a source database:

CREATE TABLE sys_gen_name (n1 NUMBER NOT NULL);

This statement results in a NOT NULL constraint with a system-generated name. For example, the NOT NULL constraint might be named sys_001500. When this change is applied at a destination database, the system-generated name for this constraint might be sys_c1000.

Suppose the following DDL statement is run at the source database:

ALTER TABLE sys_gen_name DROP CONSTRAINT sys_001500;

This DDL statement succeeds at the source database, but it fails at the destination database and results in an apply error.

To avoid such an error, explicitly name all objects resulting from DDL statements. For example, to name a NOT NULL constraint explicitly, run the following DDL statement:

CREATE TABLE sys_gen_name (n1 NUMBER CONSTRAINT sys_gen_name_nn NOT NULL);

CREATE TABLE AS SELECT Statements

When applying a change resulting from a CREATE TABLE AS SELECT statement, an apply process performs two steps:

  1. The CREATE TABLE AS SELECT statement is executed at the destination database, but it creates only the structure of the table; it does not insert any rows into the table. If the CREATE TABLE AS SELECT statement fails, then an apply process error results. Otherwise, the statement commits automatically, and the apply process performs Step 2.

  2. The apply process inserts the rows that were inserted at the source database as a result of the CREATE TABLE AS SELECT statement into the corresponding table at the destination database. It is possible that a capture process, a propagation, or an apply process will discard all of the row LCRs with these inserts based on their rule sets. In this case, the table remains empty at the destination database.
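For example, suppose the following statement is run at a source database (the hr.high_paid table name is a hypothetical example; hr.employees is from the Oracle sample schemas):

```sql
-- Run at the source database. The apply process at the destination
-- replays this in two steps: first a DDL LCR creates the empty table,
-- then row LCRs insert the selected rows.
CREATE TABLE hr.high_paid AS
  SELECT employee_id, last_name, salary
  FROM   hr.employees
  WHERE  salary > 10000;
```

If the rule sets discard the row LCRs for these inserts, then only the first step takes effect, and hr.high_paid remains empty at the destination database.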

See Also:

Oracle Streams Concepts and Administration for more information about how rules are used in Oracle Streams

Instantiation SCN and Ignore SCN for an Apply Process

In an Oracle Streams environment that shares information within a single database or between multiple databases, a source database is the database where changes are generated in the redo log. Suppose an environment has the following characteristics:

  • A capture process or a synchronous capture captures changes to tables at the source database and stages the changes as LCRs in a queue.

  • An apply process applies these LCRs, either at the same database or at a destination database to which the LCRs have been propagated.

In such an environment, for each table, only changes that committed after a specific system change number (SCN) at the source database are applied. An instantiation SCN specifies this value for each table.

An instantiation SCN can be set during instantiation, or an instantiation SCN can be set using a procedure in the DBMS_APPLY_ADM package. If the tables do not exist at the destination database before the Oracle Streams replication environment is configured, then these tables are physically created (instantiated) using copies from the source database, and the instantiation SCN is set for each table during instantiation. If the tables already exist at the destination database before the Oracle Streams replication environment is configured, then these tables are not instantiated using copies from the source database. Instead, the instantiation SCN must be set manually for each table using one of the following procedures in the DBMS_APPLY_ADM package: SET_TABLE_INSTANTIATION_SCN, SET_SCHEMA_INSTANTIATION_SCN, or SET_GLOBAL_INSTANTIATION_SCN.
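For example, the following sketch sets the instantiation SCN manually for a single table. The table name, source database name, and database link are assumptions for illustration; the SET_TABLE_INSTANTIATION_SCN procedure is in the DBMS_APPLY_ADM package, and the current SCN at the source database can be obtained with DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER:

```sql
-- Run at the destination database. The current SCN is obtained from
-- the source database over a hypothetical database link dbs1.example.com.
DECLARE
  iscn NUMBER;  -- instantiation SCN
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@dbs1.example.com;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'dbs1.example.com',
    instantiation_scn    => iscn);
END;
/
```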

The instantiation SCN for a database object controls which LCRs that contain changes to the database object are ignored by an apply process and which LCRs are applied by an apply process. If the commit SCN of an LCR for a database object from a source database is less than or equal to the instantiation SCN for that database object at a destination database, then the apply process at the destination database discards the LCR. Otherwise, the apply process applies the LCR.

Also, if there are multiple source databases for a shared database object at a destination database, then an instantiation SCN must be set for each source database, and the instantiation SCN can be different for each source database. You can set instantiation SCNs by using export/import or transportable tablespaces. You can also set an instantiation SCN by using a procedure in the DBMS_APPLY_ADM package.

Oracle Streams also records the ignore SCN for each database object. The ignore SCN is the SCN below which changes to the database object cannot be applied. The instantiation SCN for an object cannot be set lower than the ignore SCN for the object. This value corresponds to the SCN value at the source database at the time when the object was prepared for instantiation. An ignore SCN is set for a database object only when the database object is instantiated using Export/Import.

You can view the instantiation SCN and ignore SCN for database objects by querying the DBA_APPLY_INSTANTIATED_OBJECTS data dictionary view.
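For example, a query along these lines (the column list is taken from the view's documented columns) shows both values for each instantiated object:

```sql
-- Instantiation SCN and ignore SCN for each instantiated database object
SELECT source_object_owner,
       source_object_name,
       source_database,
       instantiation_scn,
       ignore_scn
FROM   dba_apply_instantiated_objects;
```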

The Oldest SCN for an Apply Process

If an apply process is running, then the oldest SCN is the earliest SCN of the transactions currently being dequeued and applied. For a stopped apply process, the oldest SCN is the earliest SCN of the transactions that were being applied when the apply process was stopped.

The following are two common scenarios in which the oldest SCN is important:

  • You must recover the database in which the apply process is running to a certain point in time.

  • You stop using an existing capture process that captures changes for the apply process and use a different capture process to capture changes for the apply process.

In both cases, you should determine the oldest SCN for the apply process by querying the DBA_APPLY_PROGRESS data dictionary view. The OLDEST_MESSAGE_NUMBER column in this view contains the oldest SCN. Next, set the start SCN for the capture process that is capturing changes for the apply process to this oldest SCN value. If the capture process is capturing changes for other apply processes, then these other apply processes might receive duplicate LCRs when you reset the start SCN for the capture process. In this case, the other apply processes automatically discard the duplicate LCRs.
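A minimal sketch of these two steps follows. The apply process name, capture process name, and SCN value are assumptions for illustration; the ALTER_CAPTURE procedure is in the DBMS_CAPTURE_ADM package:

```sql
-- Step 1: find the oldest SCN for the apply process.
SELECT apply_name, oldest_message_number
FROM   dba_apply_progress
WHERE  apply_name = 'STRM01_APPLY';

-- Step 2: set the start SCN of the capture process to that value,
-- substituting the SCN returned by the query above.
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'STRM01_CAPTURE',
    start_scn    => 1034921);
END;
/
```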

Note:

The oldest SCN is only valid for apply processes that apply LCRs that were captured by a capture process. The oldest SCN does not pertain to apply processes that apply LCRs captured by synchronous capture or LCRs enqueued explicitly.

Low-Watermark and High-Watermark for an Apply Process

The low-watermark for an apply process is the system change number (SCN) up to which all LCRs have been applied. That is, LCRs that were committed at an SCN less than or equal to the low-watermark have definitely been applied, but some LCRs that were committed with a higher SCN also might have been applied. The low-watermark SCN for an apply process is equivalent to the applied SCN for a capture process.

The high-watermark for an apply process is the SCN beyond which no LCRs have been applied. That is, no LCRs that were committed with an SCN greater than the high-watermark have been applied.

You can view the low-watermark and high-watermark for one or more apply processes by querying the V$STREAMS_APPLY_COORDINATOR and ALL_APPLY_PROGRESS data dictionary views.
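For example, the following query shows both values for each apply process; in the V$STREAMS_APPLY_COORDINATOR view, the LWM_MESSAGE_NUMBER and HWM_MESSAGE_NUMBER columns contain the low-watermark and high-watermark, respectively:

```sql
-- Low-watermark and high-watermark for each running apply process
SELECT apply_name,
       lwm_message_number AS low_watermark,
       hwm_message_number AS high_watermark
FROM   v$streams_apply_coordinator;
```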

Trigger Firing Property

You can control a DML or DDL trigger's firing property using the SET_TRIGGER_FIRING_PROPERTY procedure in the DBMS_DDL package. This procedure lets you specify whether a trigger's firing property is set to fire once.

If a trigger's firing property is set to fire once, then it does not fire in the following cases:

  • When a relevant change is made by an apply process

  • When a relevant change results from the execution of one or more apply errors using the EXECUTE_ERROR or EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package

If a trigger is not set to fire once, then it fires in both of these cases.

By default, DML and DDL triggers are set to fire once. You can check a trigger's firing property by using the IS_TRIGGER_FIRE_ONCE function in the DBMS_DDL package.
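For example, the following sketch sets a trigger's firing property and then checks it with IS_TRIGGER_FIRE_ONCE. The hr.update_job_history trigger from the Oracle sample schemas is used for illustration, and the fire_once setting shown is the default:

```sql
-- Set the trigger to fire once (the default), so that it does not fire
-- for changes made by an apply process
BEGIN
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'hr',
    trig_name  => 'update_job_history',
    fire_once  => TRUE);
END;
/

-- Check the firing property
BEGIN
  IF DBMS_DDL.IS_TRIGGER_FIRE_ONCE('hr', 'update_job_history') THEN
    DBMS_OUTPUT.PUT_LINE('update_job_history is set to fire once');
  END IF;
END;
/
```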

For example, in the hr schema, the update_job_history trigger adds a row to the job_history table when data is updated in the job_id or department_id column in the employees table. Suppose, in an Oracle Streams environment, the following configuration exists:

  • A capture process or synchronous capture captures changes to both of these tables at the dbs1.example.com database.

  • A propagation propagates these changes to the dbs2.example.com database.

  • An apply process applies these changes at the dbs2.example.com database.

  • The update_job_history trigger exists in the hr schema in both databases.

If the update_job_history trigger is not set to fire once at dbs2.example.com in this scenario, then these actions result:

  1. The job_id column is updated for an employee in the employees table at dbs1.example.com.

  2. The update_job_history trigger fires at dbs1.example.com and adds a row to the job_history table that records the change.

  3. The capture process or synchronous capture at dbs1.example.com captures the changes to both the employees table and the job_history table.

  4. A propagation propagates these changes to the dbs2.example.com database.

  5. An apply process at the dbs2.example.com database applies both changes.

  6. The update_job_history trigger fires at dbs2.example.com when the apply process updates the employees table.

In this case, the change to the employees table is recorded twice at the dbs2.example.com database: when the apply process applies the change to the job_history table and when the update_job_history trigger fires to record the change made to the employees table by the apply process.

A database administrator might not want the update_job_history trigger to fire at the dbs2.example.com database when a change is made by the apply process. Similarly, a database administrator might not want a trigger to fire because of the execution of an apply error transaction. If the update_job_history trigger's firing property is set to fire once, then it does not fire at dbs2.example.com when the apply process applies a change to the employees table, and it does not fire when an executed error transaction updates the employees table.

Also, if you use the ON SCHEMA clause to create a schema trigger, then the schema trigger fires only when the user associated with that schema performs a relevant change. Therefore, when an apply process is applying changes, a schema trigger that is set to always fire fires only if the apply user is the same as the schema specified in the schema trigger. If the schema trigger is set to fire once, then it never fires when an apply process applies changes, regardless of whether the apply user is the same as the schema specified in the schema trigger.

For example, if you specify a schema trigger that always fires on the hr schema at a source database and destination database, but the apply user at a destination database is strmadmin, then the trigger fires when the hr user performs a relevant change on the source database, but the trigger does not fire when this change is applied at the destination database. However, if you specify a schema trigger that always fires on the strmadmin schema at the destination database, then this trigger fires whenever a relevant change is made by the apply process, regardless of any trigger specifications at the source database.

Note:

Only DML and DDL triggers can be set to fire once. All other types of triggers always fire.

See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about setting a trigger's firing property with the SET_TRIGGER_FIRING_PROPERTY procedure