Oracle® Database 2 Day + Data Replication and Integration Guide
11g Release 1 (11.1)

B28324-03

4 Replicating Data Using Oracle Streams

This chapter contains conceptual information about Oracle Streams replication and describes how to replicate data continuously between databases.

This chapter contains the following sections:

About Oracle Streams Replication

Replication is the process of sharing database objects and data at multiple databases. When one of these shared database objects changes at one database, the change is shared with the other databases. In this way, the database objects and data are kept synchronized at all of the databases in the replication environment.

Some replication environments must continually replicate the changes made to shared database objects. Oracle Streams is the Oracle Database feature for continuous replication. Typically, in such environments, the databases that contain the shared database objects are connected to the network nearly all the time and continually push database changes over these network connections.

When a change is made to one shared database object, Oracle Streams performs the following actions to ensure that the same change is made to the corresponding shared database object at each of the other databases:

  1. Oracle Streams automatically captures the change and stages it in a queue.

  2. Oracle Streams automatically pushes the change to a queue in each of the other databases that contain the shared database object.

  3. Oracle Streams automatically consumes the change at each of the other databases. During consumption, Oracle Streams dequeues the change and applies the change to the shared database object.

Figure 4-1 shows the Oracle Streams information flow:

Figure 4-1 Oracle Streams Information Flow

You can use Oracle Streams replication to share data at multiple databases and efficiently keep the data current at these databases. For example, a company with several call centers throughout the world might want to store customer information in a local database at each call center. In such an environment, continuous replication with Oracle Streams can ensure that a change made to customer data at one location is pushed to all of the other locations as soon as possible.

When you use Oracle Streams to capture changes to database objects, the changes are formatted into logical change records (LCRs). An LCR is a message with a specific format that describes a database change. If the change was a data manipulation language (DML) operation, then a row LCR encapsulates each row change resulting from the DML operation. One DML operation might result in multiple row changes, and so one DML operation might result in multiple row LCRs. If the change was a data definition language (DDL) operation, then a single DDL LCR encapsulates the DDL change.

The following topics describe Oracle Streams replication in more detail:

About Change Capture

Oracle Streams provides two ways to capture database changes automatically:

  • A capture process should be used to capture data manipulation language (DML) changes to a relatively large number of tables, an entire schema, or a database. Also, a capture process must be used to capture data definition language (DDL) changes to tables and other database objects. See "About Change Capture with a Capture Process".

  • A synchronous capture should be used to capture DML changes to a relatively small number of tables. See "About Change Capture with a Synchronous Capture".

A single capture process or a single synchronous capture can capture changes made to only one database. The database where a change originated is called the source database for the change.
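As a sketch of how automatic change capture might be set up for a single table, the following PL/SQL uses the DBMS_STREAMS_ADM package. The strmadmin administrator user, the streams_queue queue, and the hr.employees table are assumptions for illustration, not names from this guide.

```sql
-- Hypothetical example: create a queue and a capture process that
-- captures DML changes to hr.employees (run as the Oracle Streams
-- administrator, here assumed to be strmadmin).
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');

  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'capture',
    streams_name => 'capture_emp',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);  -- this guide replicates DML changes only
END;
/
```

In practice, the configuration procedures described later in this chapter usually create the queues, capture processes, and rules for you.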

Note:

The examples in this guide replicate DML changes only. You should understand the implications of replicating DDL changes before doing so. See Oracle Streams Replication Administrator's Guide and Oracle Database PL/SQL Packages and Types Reference for information about replicating DDL changes.

About Change Capture with a Capture Process

A capture process is an optional Oracle Database background process that asynchronously captures changes recorded in the redo log. When a capture process captures a database change, it converts the change into a logical change record (LCR) and enqueues the LCR.

A capture process is always associated with a single queue, and it enqueues LCRs into this queue only. For improved performance, captured LCRs are always stored in a buffered queue, which is System Global Area (SGA) memory associated with a queue.

Figure 4-2 shows how a capture process works.

Figure 4-2 Capture Process

A capture process can run on the source database or on a remote database. When a capture process runs on the source database, the capture process is called a local capture process. When a capture process runs on a remote database, the capture process is called a downstream capture process.

With downstream capture, redo transport services use the log writer process (LGWR) at the source database to send redo data to the database that runs the downstream capture process. A downstream capture process requires fewer resources at the source database because a different database captures the changes. A local capture process, however, is easier to configure and manage than a downstream capture process. Local capture processes also provide more flexibility in replication environments with different platforms or different versions of Oracle Database.

About Change Capture with a Synchronous Capture

Instead of asynchronously mining the redo log, a synchronous capture uses an internal mechanism to capture data manipulation language (DML) changes when they are made to tables. A single DML change can result in changes to one or more rows in the table. A synchronous capture captures each row change, converts it into a row logical change record (LCR), and enqueues it.

A synchronous capture is always associated with a single queue, and it enqueues row LCRs into this queue only. Synchronous capture always enqueues row LCRs into the persistent queue. The persistent queue is the portion of a queue that stores messages on hard disk in a queue table, not in memory.

It is usually best to use synchronous capture in a replication environment that captures changes to a relatively small number of tables. If you must capture changes to many tables, to an entire schema, or to an entire database, then you should use a capture process instead of a synchronous capture.
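A synchronous capture can be created with the same DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure by specifying the sync_capture streams type, as in this sketch. The queue, capture, and table names are assumptions for illustration.

```sql
-- Hypothetical example: create a synchronous capture that captures
-- DML changes to hr.departments and enqueues the resulting row LCRs
-- into the persistent queue.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture_dep',
    queue_name   => 'strmadmin.streams_queue');
END;
/
```

A synchronous capture captures only DML changes, so there is no option to include DDL changes here.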

Figure 4-3 shows how a synchronous capture works.

Figure 4-3 Synchronous Capture

Note:

If you are using Oracle Database 11g Standard Edition, then synchronous capture is the only Oracle Streams component that can capture database changes automatically. To use capture processes, you must have Oracle Database 11g Enterprise Edition.

About Change Propagation Between Databases

A propagation sends messages from one queue to another. You can use Oracle Streams to configure message propagation between two queues in the same database or in different databases. Oracle Streams uses a database link and Oracle Scheduler jobs to send messages. A propagation is always between a source queue and a destination queue. In an Oracle Streams replication environment, a propagation typically sends messages that describe database changes (in the form of LCRs) from a source queue in the local database to a destination queue in a remote database.
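A propagation for a single table might be configured as in the following sketch, where the source and destination queue names, the global database names, and the database link are assumptions for illustration.

```sql
-- Hypothetical example: create a propagation that sends LCRs for
-- hr.employees from a local source queue to a destination queue at
-- the remote database dest.example.com, reached over a database link
-- with the same name as the destination global name.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'prop_emp',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dest.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'src.example.com');
END;
/
```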

Figure 4-4 shows a propagation.

About Change Apply

After database changes have been captured and propagated, they reside in a queue and are ready to be applied to complete the replication process. An apply process is an optional Oracle Database background process that dequeues logical change records (LCRs) and other types of messages from a specific queue. In a simple Oracle Streams replication environment, an apply process typically applies the changes in the LCRs that it dequeues directly to the database objects in the local database.

An apply process is always associated with a single queue, and it dequeues messages from this queue only. A single apply process either can dequeue messages from the buffered queue or from the persistent queue, but not both. Therefore, if an apply process applies changes that were captured by a capture process, then the apply process must be configured to dequeue LCRs from the buffered queue. However, if an apply process applies changes that were captured by a synchronous capture, then the apply process must be configured to dequeue LCRs from the persistent queue.
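An apply process for a single table might be configured with the same ADD_TABLE_RULES procedure, as in this sketch; the apply name, queue name, and global source database name are assumptions for illustration.

```sql
-- Hypothetical example: at the destination database, create an apply
-- process that dequeues and applies LCRs for hr.employees that
-- originated at src.example.com.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply',
    streams_name    => 'apply_emp',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => FALSE,
    source_database => 'src.example.com');
END;
/
```

Whether the apply process dequeues from the buffered queue or the persistent queue is determined when the apply process is created; see the apply_captured parameter of DBMS_APPLY_ADM.CREATE_APPLY.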

Figure 4-5 shows how an apply process works.

When an apply process cannot apply an LCR successfully, it moves the LCR, and all of the other LCRs in the transaction, to a special queue called the error queue. The error queue contains all of the current apply errors for a database. If there are multiple apply processes in a database, then the error queue contains the apply errors for each apply process. You can correct the condition that caused an error and then reexecute the corresponding transaction in the error queue to apply its changes. For example, you might modify a row in a table to correct the condition that caused an error in a transaction and then reexecute the transaction.
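The error queue can be inspected and a failed transaction reexecuted as in the following sketch. The transaction ID shown is a placeholder; use the value reported for the actual error.

```sql
-- Inspect the current apply errors for the database.
SELECT apply_name, local_transaction_id, error_message
  FROM dba_apply_error;

-- Hypothetical example: after correcting the condition that caused
-- the error, reexecute the failed transaction.
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2478',
    execute_as_user      => FALSE);
END;
/
```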

For an apply process to apply changes to a database object, an instantiation system change number (SCN) must be set for the database object. An instantiation SCN is the SCN for a database object that specifies that only changes that were committed after the SCN at the source database are applied by an apply process. The instantiation SCN for a table assumes that the table is consistent at the source and destination database at the specified SCN. Typically, the instantiation SCN is set automatically when you configure the Oracle Streams replication environment.
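If an instantiation SCN must be set manually, a common pattern is to obtain the current SCN at the source database and record it at the destination, as in this sketch. The global database names and the database link are assumptions for illustration.

```sql
-- Hypothetical example: run at the source database. Capture the
-- current SCN and set it as the instantiation SCN for hr.employees
-- at the destination database over a database link.
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@dest.example.com(
    source_object_name   => 'hr.employees',
    source_database_name => 'src.example.com',
    instantiation_scn    => iscn);
END;
/
```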

Note:

An apply process can also pass a message that it dequeues as a parameter to a user-defined procedure called an apply handler for custom processing. Apply handlers are beyond the scope of this guide. See Oracle Streams Concepts and Administration for more information.

About Rules for Controlling the Behavior of Capture, Propagation, and Apply

An Oracle Streams replication configuration must identify what to replicate. Capture processes, synchronous captures, propagations, and apply processes are called Oracle Streams clients. Rules determine what Oracle Streams clients replicate. You can configure rules for each Oracle Streams client independently, and the rules for different Oracle Streams clients do not need to match.

Rules can be organized into rule sets, and the behavior of each Oracle Streams client is determined by the rules in the rule sets that are associated with it. You can associate a positive rule set and a negative rule set with a capture process, a propagation, and an apply process, but a synchronous capture can only have a positive rule set.

In a replication environment, an Oracle Streams client performs its task if a database change satisfies its rule sets. In general, a change satisfies the rule sets for an Oracle Streams client if no rules in the negative rule set evaluate to TRUE for the change, and at least one rule in the positive rule set evaluates to TRUE for the change. The negative rule set is always evaluated first.

Specifically, you use rule sets in an Oracle Streams replication environment to do the following:

  • Specify the changes that a capture process captures from the redo log or discards. If a change found in the redo log satisfies the rule sets for a capture process, then the capture process captures the change. If a change found in the redo log does not satisfy the rule sets for a capture process, then the capture process discards the change.

  • Specify the changes that a synchronous capture captures. If a data manipulation language (DML) change satisfies the rule set for a synchronous capture, then the synchronous capture captures the change immediately after the change is committed. If a DML change made to a table does not satisfy the rule set for a synchronous capture, then the synchronous capture does not capture the change.

  • Specify the changes (encapsulated in LCRs) that a propagation sends from one queue to another or discards. If an LCR in a queue satisfies the rule sets for a propagation, then the propagation sends the LCR. If an LCR in a queue does not satisfy the rule sets for a propagation, then the propagation discards the LCR.

  • Specify the LCRs that an apply process dequeues or discards. If an LCR in a queue satisfies the rule sets for an apply process, then the apply process dequeues and processes the LCR. If an LCR in a queue does not satisfy the rule sets for an apply process, then the apply process discards the LCR.

About Rule-Based Transformations for Nonidentical Copies

A rule-based transformation is an additional configuration option that provides flexibility when a database object is not identical at the different databases. Rule-based transformations modify changes to a database object so that the changes can be applied successfully at each database. Specifically, a rule-based transformation is any modification to a message when a rule in a positive rule set evaluates to TRUE.

For example, suppose a table has five columns at the database where changes originated, but the shared table at a different database only has four of the five columns. When a data manipulation language (DML) operation is performed on the table at the source database, the row changes are captured and formatted as row LCRs. A rule-based transformation can delete the extra column in these row LCRs so that they can be applied successfully at the other database. If the row LCRs are not transformed, then the apply process at the other database will raise errors because the row LCRs have an extra column.

There are two types of rule-based transformations: declarative and custom. Declarative rule-based transformations include a set of common transformation scenarios for row changes resulting from DML changes (row LCRs). Custom rule-based transformations require a user-defined PL/SQL function to perform the transformation. This guide discusses only declarative rule-based transformations.

The following declarative rule-based transformations are available:

  • An add column transformation adds a column to a row LCR.

  • A delete column transformation deletes a column from a row LCR.

  • A rename column transformation renames a column in a row LCR.

  • A rename schema transformation renames the schema in a row LCR.

  • A rename table transformation renames the table in a row LCR.

When you add one of these declarative rule-based transformations, you specify the rule to associate with the transformation. When the specified rule evaluates to TRUE for a row LCR, Oracle Streams performs the declarative transformation internally on the row LCR. Typically, rules and rule sets are created automatically when you configure your Oracle Streams replication environment.

You can configure multiple declarative rule-based transformations for a single rule, but you can only configure one custom rule-based transformation for a single rule. In addition, you can configure declarative rule-based transformations and a custom rule-based transformation for a single rule.

A transformation can occur at any stage in the Oracle Streams information flow: during capture, propagation, or apply. When a transformation occurs depends on the rule with which the transformation is associated. For example, to perform a transformation during propagation, associate the transformation with a rule in the positive rule set for a propagation.
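Continuing the earlier five-column example, a delete column transformation might be added as in this sketch. The rule name and column name are assumptions for illustration; in a configured environment, query the data dictionary for the actual rule names.

```sql
-- Hypothetical example: when the rule strmadmin.employees15 evaluates
-- to TRUE for a row LCR on hr.employees, delete the extra column
-- (assumed here to be job_history) from the row LCR.
BEGIN
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.employees15',
    table_name  => 'hr.employees',
    column_name => 'job_history',
    value_type  => '*',     -- transform both old and new column values
    step_number => 0,
    operation   => 'ADD');  -- 'ADD' attaches the transformation
END;
/
```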

About Supplemental Logging

Supplemental logging is the process of adding additional column data to the redo log whenever an operation is performed (such as a row update). A capture process captures this additional information and places it in logical change records (LCRs). Apply processes that apply these LCRs might need this additional information to apply database changes properly.
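Supplemental logging can be enabled at the database level or for individual tables, as in this sketch. The log group name and column list are assumptions for illustration; the configuration procedures described later in this chapter normally configure supplemental logging for you.

```sql
-- Hypothetical example: log primary key and unique key columns for
-- all tables whenever a row changes.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Hypothetical example: always log two specific columns of one table.
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp (employee_id, salary) ALWAYS;
```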

About Conflicts and Conflict Resolution

Conflicts occur when two different databases that are sharing data in a table modify the same row in the table at nearly the same time. When these changes are captured at one of these databases and sent to the other database, an apply process detects the conflict when it attempts to apply a row LCR to the table. By default, apply errors are placed in the error queue, where they can be resolved manually. To avoid apply errors, you must configure conflict resolution so that apply processes handle conflicts in the best way for your environment.

Oracle Database supplies prebuilt conflict handlers that provide conflict resolution when a conflict results from an UPDATE on a row. These handlers are called prebuilt update conflict handlers.

When an apply process encounters an update conflict for a row LCR that it has dequeued, it must either apply the row LCR or discard it to keep the data in the two databases consistent. The most common way to resolve update conflicts is to keep the change with the most recent time stamp and discard the older change. See "Tutorial: Configuring Latest Time Conflict Resolution for a Table" for instructions about configuring latest time conflict resolution for a table.
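A latest time conflict handler might be configured as in the following sketch, which assumes the table has a time_stamp column that records when each row was last changed; the table and column names are assumptions for illustration.

```sql
-- Hypothetical example: for update conflicts on hr.employees, keep
-- the change whose time_stamp value is greater (that is, the most
-- recent change) and discard the older change.
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'time_stamp';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'time_stamp',
    column_list       => cols);
END;
/
```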

Note:

Conflicts are not possible in a replication environment when changes to only one database are captured. In these replication environments, typically the replicas at the other databases are read-only.

About Tags for Avoiding Change Cycling

Change cycling means sending a change back to the database where it originated. Typically, change cycling should be avoided because it can cause each change to loop endlessly back to its originating database, producing unintended data and taxing the networking and computing resources of an environment. By default, Oracle Streams is designed to avoid change cycling.

A tag is additional information in a change record. Each redo entry that records a database change and each logical change record (LCR) that encapsulates a database change includes a tag. The data type of the tag is RAW.

By default, change records have the following tag values:

  • When a user or application generates database changes, the value of the tag is NULL for each change. This default can be changed for a particular database session.

  • When an apply process generates database changes by applying them to database objects, the tag value for each change is the hexadecimal equivalent of '00' (double zero). This default can be changed for a particular apply process.

The tag value in an LCR depends on how the LCR was captured:

  • An LCR captured by a capture process has the tag value of the redo record that was captured.

  • An LCR captured by a synchronous capture has the tag value of the database session that made the change.

Rules for Oracle Streams clients can include conditions for tag values. For example, the rules for a capture process can determine whether a change in the redo log is captured based on the tag value of the redo record. In an Oracle Streams replication environment, Oracle Streams clients use tags and rules to avoid change cycling.

Note:

  • Change cycling is not possible in a replication environment when changes to only one database are captured.

  • You can also use tags to avoid replicating the changes made by a particular session. Use the DBMS_STREAMS.SET_TAG procedure to set the tag for a session. See "Correcting Apply Errors in Database Objects" for an example.
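Setting and resetting a session tag might look like the following sketch. The tag value and the statement being shielded from capture are assumptions for illustration.

```sql
-- Hypothetical example: set an arbitrary non-NULL session tag so that
-- rules requiring a NULL tag do not capture this session's changes.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

-- Changes made now carry the tag '1D' in their redo records.
UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 100;
COMMIT;

-- Reset the tag to NULL so that normal capture resumes.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```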

About the Common Types of Oracle Streams Replication Environments

Oracle Streams enables you to configure many different types of custom replication environments. However, three types of replication environments are the most common: two-database, hub-and-spoke, and n-way.

The following topics describe these common types of replication environments and help you decide which one is best for you:

About Two-Database Replication Environments

A two-database replication environment is one in which only two databases share the replicated database objects. The changes made to replicated database objects at one database are captured and sent directly to the other database, where they are applied. In a two-database replication environment, only one database might allow changes to the database objects, or both databases might allow changes to them.

If only one database allows changes to the replicated database objects, then the other database contains read-only replicas of these database objects. This is a one-way replication environment and typically has the following basic components:

  • The first database has a capture process or synchronous capture to capture changes to the replicated database objects.

  • The first database has a propagation that sends the captured changes to the other database.

  • The second database has an apply process to apply changes from the first database.

  • For the best performance, each capture process and apply process has its own queue.

Figure 4-6 shows a two-database replication environment configured for one-way replication.

Figure 4-6 One-Way Replication in a Two-Database Replication Environment

In a two-database replication environment, both databases can allow changes to the replicated database objects. In this case, both databases capture changes to these database objects and send the changes to the other database, where they are applied. This is a bi-directional replication environment and typically has the following basic components:

  • Each database has a capture process or synchronous capture to capture changes to the replicated database objects.

  • Each database has a propagation that sends the captured changes to the other database.

  • Each database has an apply process to apply changes from the other database.

  • For the best performance, each capture process and apply process has its own queue.

Figure 4-7 shows a two-database replication environment configured for bi-directional replication.

Figure 4-7 Bi-Directional Replication in a Two-Database Replication Environment

Typically, in a bi-directional replication environment, you should configure conflict resolution to keep the replicated database objects synchronized. You can configure a two-database replication environment using the configuration procedures in the DBMS_STREAMS_ADM package. See "About the Oracle Streams Replication Configuration Procedures".
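A bi-directional two-database environment might be configured with a single call to one of the configuration procedures, as in this sketch. The directory objects, global database names, and table names are assumptions for illustration.

```sql
-- Hypothetical example: configure bi-directional replication of two
-- tables between src.example.com and dest.example.com. The directory
-- objects must already exist at the respective databases.
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => 'hr.employees, hr.departments',
    source_directory_object      => 'SOURCE_DIR',
    destination_directory_object => 'DEST_DIR',
    source_database              => 'src.example.com',
    destination_database         => 'dest.example.com',
    bi_directional               => TRUE);
END;
/
```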

About Hub-And-Spoke Replication Environments

A hub-and-spoke replication environment is one in which a central database, or hub, communicates with secondary databases, or spokes. The spokes do not communicate directly with each other. In a hub-and-spoke replication environment, the spokes might or might not allow changes to the replicated database objects.

If the spokes do not allow changes, then they contain read-only replicas of the database objects at the hub. This type of hub-and-spoke replication environment typically has the following basic components:

  • The hub has a capture process or synchronous capture to capture changes to the replicated database objects.

  • The hub has propagations that send the captured changes to each of the spokes.

  • Each spoke has an apply process to apply changes from the hub.

  • For the best performance, each capture process and apply process has its own queue.

Figure 4-8 shows a hub-and-spoke replication environment with read-only spokes.

Figure 4-8 Hub-and-Spoke Replication Environment with Read-Only Spokes

If the spokes allow changes to the database objects, then typically the changes are captured and sent back to the hub, and the hub replicates the changes with the other spokes. This type of hub-and-spoke replication environment typically has the following basic components:

  • The hub has a capture process or synchronous capture to capture changes to the replicated database objects.

  • The hub has propagations that send the captured changes to each of the spokes.

  • Each spoke has a capture process or synchronous capture to capture changes to the replicated database objects.

  • Each spoke has a propagation that sends changes made at the spoke back to the hub.

  • Each spoke has an apply process to apply changes from the hub and from the other spokes.

  • The hub has a separate apply process to apply changes from each spoke. A different apply process must apply changes from each spoke.

  • For the best performance, each capture process and apply process has its own queue.

Figure 4-9 shows a hub-and-spoke replication environment with read/write spokes.

Figure 4-9 Hub-and-Spoke Replication Environment with Read/Write Spokes

Typically, in a hub-and-spoke replication environment that allows changes at spoke databases, you should configure conflict resolution to keep the replicated database objects synchronized. Some hub-and-spoke replication environments allow changes to the replicated database objects at some spokes but not at others.

You can configure a hub-and-spoke replication environment using the configuration procedures in the DBMS_STREAMS_ADM package. See "About the Oracle Streams Replication Configuration Procedures".

See "When to Replicate Data with Oracle Streams" for information about when hub-and-spoke replication is useful.

About N-Way Replication Environments

An n-way replication environment is one in which each database communicates directly with each other database in the environment. The changes made to replicated database objects at one database are captured and sent directly to each of the other databases in the environment, where they are applied.

An n-way replication environment typically has the following basic components:

  • Each database has one or more capture processes or synchronous captures to capture changes to the replicated database objects.

  • Each database has propagations that send the captured changes to each of the other databases.

  • Each database has apply processes that apply changes from each of the other databases. A different apply process must apply changes from each source database.

  • For the best performance, each capture process and apply process has its own queue.

Figure 4-10 shows an n-way replication environment.

Figure 4-10 N-Way Replication Environment

You can configure an n-way replication environment by using the following Oracle-supplied packages:

  • DBMS_STREAMS_ADM can be used to perform most of the configuration actions, including setting up queues, creating capture processes or synchronous captures, creating propagations, creating apply processes, and configuring rules and rule sets for the replication environment.

  • DBMS_CAPTURE_ADM can be used to start any capture processes you configured in the replication environment.

  • DBMS_APPLY_ADM can be used to configure apply processes, configure conflict resolution, and start apply processes, as well as other configuration tasks.
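After configuration, the capture and apply components might be started as in this sketch; the capture and apply names are assumptions for illustration.

```sql
-- Hypothetical example: start a configured capture process and a
-- configured apply process by name.
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture_emp');
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_emp');
END;
/
```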

See "When to Replicate Data with Oracle Streams" for information about when n-way replication is useful.

Typically, in an n-way replication environment, you should configure conflict resolution to keep the replicated database objects synchronized.

Configuring an n-way replication environment is beyond the scope of this guide. See Oracle Streams Replication Administrator's Guide for a detailed example that configures an n-way replication environment.

About the Oracle Streams Replication Configuration Procedures

The easiest way to configure an Oracle Streams replication environment is by running one of the following configuration procedures in the DBMS_STREAMS_ADM package:

  • The MAINTAIN_GLOBAL procedure configures an Oracle Streams environment that replicates changes at the database level between two databases.

  • The MAINTAIN_SCHEMAS procedure configures an Oracle Streams environment that replicates changes to specified schemas between two databases.

  • The MAINTAIN_SIMPLE_TTS procedure clones a simple tablespace from a source database at a destination database and uses Oracle Streams to maintain this tablespace at both databases.

  • The MAINTAIN_TABLES procedure configures an Oracle Streams environment that replicates changes to specified tables between two databases.

  • The MAINTAIN_TTS procedure clones a set of tablespaces from a source database at a destination database and uses Oracle Streams to maintain these tablespaces at both databases.

These procedures configure two databases at a time, and they require you to specify one database as the source database and the other database as the destination database. They can be used to configure a replication environment with more than two databases by running them multiple times.

Table 4-1 describes the required parameters for these procedures.

Table 4-1 Required Parameters for the Oracle Streams Replication Configuration Procedures

source_directory_object (all procedures)
    The directory object for the directory on the computer system running the source database into which the generated Data Pump export dump file is placed.

destination_directory_object (all procedures)
    The directory object for the directory on the computer system running the destination database into which the generated Data Pump export dump file is transferred. The dump file is used to instantiate the replicated database objects at the destination database.

source_database (all procedures)
    The global name of the source database. The specified database must contain the database objects to be replicated.

destination_database (all procedures)
    The global name of the destination database. The database objects to be replicated need not exist at the destination database; if they do not exist there, then they are instantiated by Data Pump export/import.

    If the local database is not the destination database, then a database link from the local database to the destination database, with the same name as the global name of the destination database, must exist and must be accessible to the user who runs the procedure.

schema_names (MAINTAIN_SCHEMAS only)
    The schemas to be configured for replication.

tablespace_name (MAINTAIN_SIMPLE_TTS only)
    The tablespace to be configured for replication.

table_names (MAINTAIN_TABLES only)
    The tables to be configured for replication.

tablespace_names (MAINTAIN_TTS only)
    The tablespaces to be configured for replication.


In addition, each of these procedures has several optional parameters. The bi_directional parameter is an important optional parameter. If you want changes to the replicated database objects to be captured at each database and sent to the other database, then the bi_directional parameter must be set to TRUE. The default setting for this parameter is FALSE. When the bi_directional parameter is set to FALSE, the procedures configure a one-way replication environment, where the changes made at the destination database are not captured.

These procedures perform several tasks to configure an Oracle Streams replication environment. These tasks include:

  • Configure supplemental logging for the replicated database objects at the source database. See "About Supplemental Logging".

  • If the bi_directional parameter is set to TRUE, then configure supplemental logging for the replicated database objects at the destination database.

  • Instantiate the database objects at the destination database. If the database objects do not exist at the destination database, then the procedures use Data Pump export/import to instantiate them at the destination database.

  • Configure a capture process to capture changes to the replicated database objects at the source database. This capture process can be a local capture process or a downstream capture process. If the procedure is run at the database specified in the source_database parameter, then the procedure configures a local capture process on this database. If the procedure is run at a database other than the database specified in the source_database parameter, then the procedure configures a downstream capture process on the database that runs the procedure. See "About Change Capture with a Capture Process".

  • If the bi_directional parameter is set to TRUE, then configure a capture process to capture changes to the replicated database objects at the destination database. This capture process must be a local capture process.

  • Configure one or more queues at each database to store captured changes.

  • Configure a propagation to send changes made to the database objects at the source database to the destination database. See "About Change Propagation Between Databases".

  • If the bi_directional parameter is set to TRUE, then configure a propagation to send changes made to the database objects at the destination database to the source database.

  • Configure an apply process at the destination database to apply changes from the source database. See "About Change Apply".

  • If the bi_directional parameter is set to TRUE, then configure an apply process at the source database to apply changes from the destination database.

  • Configure rule sets and rules for each capture process, propagation, and apply process. The rules instruct the Oracle Streams clients to replicate changes to the specified database objects. See "About Rules for Controlling the Behavior of Capture, Propagation, and Apply".

  • Set the instantiation SCN for the replicated database objects at the destination database. See "About Change Apply".

  • If the bi_directional parameter is set to TRUE, then set the instantiation SCN for the replicated database objects at the source database.

Tip:

To view all of the actions performed by one of these procedures in detail, use the procedure to generate a script, and view the script in a text editor. You can use the perform_actions, script_name, and script_directory_object parameters to generate a script.
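For example, a call like the following is one way to generate such a script without performing any configuration actions. This is a sketch only; the directory objects, database names, and the script file name configure_rep.sql are placeholders for values in your environment:

```sql
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'db1_dir',
    destination_directory_object => 'db2_dir',
    source_database              => 'db1.example.com',
    destination_database         => 'db2.example.com',
    perform_actions              => FALSE,               -- generate a script only
    script_name                  => 'configure_rep.sql', -- placeholder file name
    script_directory_object      => 'db1_dir');
END;
/
```

Because perform_actions is FALSE, the procedure writes the script to the specified directory instead of configuring the environment, and you can review the script in a text editor before deciding how to proceed.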

These procedures always configure tags for a hub-and-spoke replication environment. The following are important considerations about these procedures and tags:

  • If you are configuring a two-database replication environment, then you can use these procedures to configure it. These procedures configure tags in a two-database environment to avoid change cycling. If you plan to expand the replication environment beyond two databases in the future, then it is important to understand how the tags are configured. If the expanded database environment is not a hub-and-spoke environment, then you might need to modify the tags to avoid change cycling.

  • If you are configuring a replication environment that involves three or more databases, then these procedures can only be used to configure a hub-and-spoke replication environment. These procedures configure tags in a hub-and-spoke environment to avoid change cycling.

  • If you are configuring an n-way replication environment, then do not use these procedures to configure it. Change cycling might result if you do so.

See "About Tags for Avoiding Change Cycling" and "About the Common Types of Oracle Streams Replication Environments" for more information about tags and the different types of replication environments.

Note:

Currently, these configuration procedures configure only capture processes to capture changes. You cannot use these procedures to configure a replication environment that uses synchronous captures. You can configure a synchronous capture using the ADD_TABLE_RULES and ADD_SUBSET_RULES procedures in the DBMS_STREAMS_ADM package.
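For example, a call like the following creates a synchronous capture by specifying 'sync_capture' for the streams_type parameter. This is a sketch only; the hr.employees table and the strmadmin.streams_queue queue are placeholders for objects in your environment:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',            -- placeholder table
    streams_type => 'sync_capture',            -- creates a synchronous capture
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue'); -- placeholder queue
END;
/
```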

About Key Oracle Streams Supplied PL/SQL Packages and Data Dictionary Views

In addition to Oracle Enterprise Manager, you can use several supplied PL/SQL packages to configure and administer an Oracle Streams replication environment. You can also use several data dictionary views to monitor an Oracle Streams replication environment.

The following topics describe the key Oracle Streams supplied PL/SQL packages and data dictionary views:

About Key Oracle Streams Supplied PL/SQL Packages

Table 4-2 describes the supplied PL/SQL packages that are important for configuring and administering an Oracle Streams replication environment.

Table 4-2 Key Oracle Streams Supplied PL/SQL Packages

  • DBMS_STREAMS_ADM: Provides an easy way to complete common tasks in an Oracle Streams environment. This package contains procedures that enable you to configure and maintain an Oracle Streams replication environment. It also provides an administrative interface for adding and removing simple rules for capture processes, synchronous captures, propagations, and apply processes at the table, schema, and database level, and it contains procedures for creating queues and for managing Oracle Streams metadata, such as data dictionary information.

  • DBMS_CAPTURE_ADM: Provides an administrative interface for starting, stopping, and configuring a capture process, and for configuring a synchronous capture. This package also provides administrative procedures that prepare database objects at the source database for instantiation at a destination database.

  • DBMS_PROPAGATION_ADM: Provides an administrative interface for starting, stopping, and configuring a propagation.

  • DBMS_APPLY_ADM: Provides an administrative interface for starting, stopping, and configuring an apply process. This package also includes subprograms for configuring conflict detection and resolution, for managing apply errors, and for configuring apply handlers, as well as administrative procedures that set the instantiation SCN for objects at a destination database.

  • DBMS_STREAMS_AUTH: Provides subprograms for granting privileges to and revoking privileges from Oracle Streams administrators.

  • DBMS_STREAMS_ADVISOR_ADM: Provides an interface for gathering information about an Oracle Streams environment and advising database administrators based on the information gathered. This package is part of the Oracle Streams Performance Advisor.


See Also:

About Key Oracle Streams Data Dictionary Views

Table 4-3 describes the data dictionary views that are important for monitoring an Oracle Streams replication environment.

Table 4-3 Key Oracle Streams Data Dictionary Views

  • ALL_APPLY and DBA_APPLY: Display information about apply processes.

  • ALL_APPLY_CONFLICT_COLUMNS and DBA_APPLY_CONFLICT_COLUMNS: Display information about conflict handlers.

  • ALL_APPLY_ERROR and DBA_APPLY_ERROR: Display information about the error transactions generated by apply processes.

  • ALL_CAPTURE and DBA_CAPTURE: Display information about capture processes.

  • ALL_PROPAGATION and DBA_PROPAGATION: Display information about propagations.

  • ALL_STREAMS_COLUMNS and DBA_STREAMS_COLUMNS: Display information about the columns that are not supported by synchronous captures and apply processes.

  • ALL_STREAMS_RULES and DBA_STREAMS_RULES: Display information about the rules used by capture processes, synchronous captures, propagations, and apply processes.

  • ALL_STREAMS_UNSUPPORTED and DBA_STREAMS_UNSUPPORTED: Display information about the database objects that are not supported by capture processes.

  • ALL_SYNC_CAPTURE and DBA_SYNC_CAPTURE: Display information about synchronous captures.

  • V$BUFFERED_QUEUES: Displays information about buffered queues and the messages in them.

  • V$PROPAGATION_RECEIVER: Displays information about the messages received into buffered queues by propagations.

  • V$PROPAGATION_SENDER: Displays information about the messages sent from buffered queues by propagations.

  • V$STREAMS_APPLY_COORDINATOR: Displays information about apply process coordinators for enabled apply processes.

  • V$STREAMS_APPLY_READER: Displays information about apply process reader servers for enabled apply processes.

  • V$STREAMS_APPLY_SERVER: Displays information about apply process apply servers for enabled apply processes.

  • V$STREAMS_CAPTURE: Displays information about enabled capture processes.

  • V$STREAMS_TRANSACTION: Displays information about transactions that are being processed by capture processes or apply processes. Use this view to identify long-running transactions and to determine how many logical change records (LCRs) are being processed in each transaction. This view contains information only about LCRs that were captured by a capture process.
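For example, you can query some of these views in SQL*Plus to get a quick status summary of a replication environment. This is a sketch; the columns shown are a small subset of what each view provides:

```sql
-- Check the state of each capture process
SELECT CAPTURE_NAME, STATUS, CAPTURE_TYPE FROM DBA_CAPTURE;

-- Check each propagation
SELECT PROPAGATION_NAME, STATUS FROM DBA_PROPAGATION;

-- Check each apply process, and look for error transactions
SELECT APPLY_NAME, STATUS FROM DBA_APPLY;
SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_MESSAGE FROM DBA_APPLY_ERROR;
```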


See Also:

Preparing for Oracle Streams Replication

Before configuring Oracle Streams replication, prepare the databases that will participate in the replication environment.

To prepare for Oracle Streams replication:

  1. Set initialization parameters properly before you configure a replication environment with Oracle Streams:

    • Global Names: Set the GLOBAL_NAMES initialization parameter to TRUE at each database that will participate in the Oracle Streams replication environment. See "Setting the GLOBAL_NAMES Initialization Parameter to TRUE".

    • Compatibility: To use the latest features of Oracle Streams, it is best to set the COMPATIBLE initialization parameter as high as you can. If possible, then set this parameter to 11.1.0 or higher.

    • System Global Area (SGA) and the Oracle Streams pool: Ensure that the Oracle Streams pool is large enough to accommodate the Oracle Streams components created for the replication environment. The Oracle Streams pool is part of the System Global Area (SGA). You can manage the Oracle Streams pool by setting the MEMORY_TARGET initialization parameter (Automatic Memory Management), the SGA_TARGET initialization parameter (Automatic Shared Memory Management), or the STREAMS_POOL_SIZE initialization parameter. See Oracle Streams Concepts and Administration for more information about the Oracle Streams pool.

      The memory requirements for Oracle Streams components are:

      • Each queue requires at least 10 MB of memory.

      • Each capture process requires at least 10 MB of memory for each degree of parallelism. The parallelism capture process parameter controls the number of processes used by the capture process to capture changes. You might be able to improve capture process performance by adjusting capture process parallelism.

      • Each propagation requires at least 1 MB of memory.

      • Each apply process requires at least 1 MB of memory for each degree of parallelism. The parallelism apply process parameter controls the number of processes used by the apply process to apply changes. You might be able to improve apply process performance by adjusting apply process parallelism.

    • Processes and Sessions: Oracle Streams capture processes, propagations, and apply processes use processes that run in the background. You might need to increase the value of the PROCESSES and SESSIONS initialization parameters to accommodate these processes.
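The initialization parameter settings above can be checked and adjusted with statements like the following. This is a sketch only; the specific values are placeholders to be tuned for your system, not recommendations:

```sql
-- Verify the current values in SQL*Plus
SHOW PARAMETER GLOBAL_NAMES
SHOW PARAMETER COMPATIBLE
SHOW PARAMETER STREAMS_POOL_SIZE

-- Example adjustments
ALTER SYSTEM SET GLOBAL_NAMES = TRUE;
ALTER SYSTEM SET STREAMS_POOL_SIZE = 200M;      -- size for the Streams components
ALTER SYSTEM SET PROCESSES = 300 SCOPE=SPFILE;  -- static; takes effect after restart
```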

  2. Review the best practices for Oracle Streams replication environments and follow the best practices when you configure the environment. See Oracle Streams Replication Administrator's Guide for information about best practices.

    Following the best practices ensures that your environment performs optimally and avoids problems. The configuration procedures in the DBMS_STREAMS_ADM package follow the best practices automatically. However, if you plan to configure an Oracle Streams replication environment without using a configuration procedure, then learn about the best practices and follow them whenever possible.

    The following are some of the important best practices to follow during Oracle Streams configuration:

    • Configure a separate tablespace for the Oracle Streams administrator. The instructions in "Tutorial: Creating an Oracle Streams Administrator" follow this best practice.

    • Use separate queues for capture processes, synchronous captures, and apply processes. For the best performance, these components typically should not share a queue.

    • Use queue-to-queue propagations.

    After the Oracle Streams environment is configured, the following are some of the important best practices to follow for operation of the Oracle Streams environment:

    • Monitor performance and make adjustments when necessary.

    • Monitor queues for size.

    • Follow the Oracle Streams best practices for backups.

    • Check the alert log for Oracle Streams information.

    • Set capture process parallelism for best performance.

    • Set apply process parallelism for best performance.

    • Check for apply errors and manage them if they occur.

See Oracle Streams Replication Administrator's Guide for detailed information about these best practices, and for information about other Oracle Streams best practices.

See Also:

Configuring Oracle Streams Replication: Examples

This section uses examples to show you how to configure Oracle Streams replication environments. The examples configure the most common types of Oracle Streams replication environments.

The following are descriptions of the examples:

This section also includes an example that configures conflict resolution for a table. See "Tutorial: Configuring Latest Time Conflict Resolution for a Table". Use conflict resolution in a replication environment that allows more than one database to perform DML changes on replicated tables. This example configures latest time conflict resolution. Therefore, when a conflict occurs for a row change to a table, the most recent change is retained, and the older change is discarded. See "About Conflicts and Conflict Resolution" for more information about conflict resolution.

Note:

Another common Oracle Streams replication environment is the n-way environment. See "About N-Way Replication Environments".

Tutorial: Configuring Two-Database Replication with Local Capture Processes

This example configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to all of the tables in the hr schema. This example configures a two-database replication environment with local capture processes to capture changes. This example uses the global database names db1.example.com and db2.example.com. However, you can substitute databases in your environment to complete the example. See "About Two-Database Replication Environments" for more information about two-database replication environments.

This example uses the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure the two-database replication environment. This procedure is the fastest and simplest way to configure an Oracle Streams environment that replicates one or more schemas. In addition, the procedure follows established best practices for Oracle Streams replication environments.

The database objects being configured for replication might or might not exist at the destination database when you run the MAINTAIN_SCHEMAS procedure. If the database objects do not exist at the destination database, then the MAINTAIN_SCHEMAS procedure instantiates them at the destination database using a Data Pump export/import. During instantiation, the instantiation SCN is set for these database objects. If the database objects already exist at the destination database, then the MAINTAIN_SCHEMAS procedure retains the existing database objects and sets the instantiation SCN for them. In this example, the hr schema exists at both the db1.example.com database and the db2.example.com database before the MAINTAIN_SCHEMAS procedure is run.

This example provides instructions for configuring either one-way or bi-directional replication. To configure bi-directional replication, you must complete additional steps and set the bi_directional parameter to TRUE when you run the configuration procedure.

Figure 4-11 provides an overview of the environment created in this example. The additional components required for bi-directional replication are shown in gray, and their actions are indicated by dashed lines.

Figure 4-11 Two-Database Replication Environment with Local Capture Processes

Description of Figure 4-11 follows
Description of "Figure 4-11 Two-Database Replication Environment with Local Capture Processes"

To configure this two-database replication environment:

  1. Complete the following tasks to prepare for the two-database replication environment:

    1. Configure network connectivity so that the db1.example.com database can communicate with the db2.example.com database.

      See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.

    2. Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Tutorial: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.

    3. Create a database link from the db1.example.com database to the db2.example.com database.

      The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the other database. Both the name and the service name of the database link must be db2.example.com. See "Tutorial: Creating a Database Link" for instructions.
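For example, while connected as strmadmin at the db1.example.com database, a statement like the following creates the required link (the password is a placeholder):

```sql
CREATE DATABASE LINK db2.example.com
  CONNECT TO strmadmin IDENTIFIED BY "placeholder_password"
  USING 'db2.example.com';
```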

    4. Configure the db1.example.com database to run in ARCHIVELOG mode. For a capture process to capture changes generated at a source database, the source database must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.
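As a sketch, ARCHIVELOG mode can be enabled with statements like the following, run while connected as a user with SYSDBA privileges. Note that the database is unavailable to users while it is shut down and mounted:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Confirm the log mode
SELECT LOG_MODE FROM V$DATABASE;
```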

  2. To configure a bi-directional replication environment, complete the following steps. If you are configuring a one-way replication environment, then these steps are not required, and you can move on to Step 3.

    1. Create a database link from the db2.example.com database to the db1.example.com database.

      The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the other database. Both the name and the service name of the database link must be db1.example.com. See "Tutorial: Creating a Database Link" for instructions.

    2. Configure the db2.example.com database to run in ARCHIVELOG mode. For a capture process to capture changes generated at a source database, the source database must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.

  3. Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.

  4. On a command line, open SQL*Plus and connect to the db2.example.com database as the Oracle Streams administrator.

    See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

  5. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named db2_dir that points to the /usr/db2_log_files directory:

    CREATE DIRECTORY db2_dir AS '/usr/db2_log_files';
    
  6. In SQL*Plus, connect to the db1.example.com database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  7. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named db1_dir that points to the /usr/db1_log_files directory:

    CREATE DIRECTORY db1_dir AS '/usr/db1_log_files';
    
  8. Run the MAINTAIN_SCHEMAS procedure to configure replication of the hr schema between the db1.example.com database and the db2.example.com database.

    Ensure that the bi_directional parameter is set properly for the replication environment that you are configuring. Either set this parameter to FALSE for one-way replication, or set it to TRUE for bi-directional replication.

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                 => 'hr',
        source_directory_object      => 'db1_dir',
        destination_directory_object => 'db2_dir',
        source_database              => 'db1.example.com',
        destination_database         => 'db2.example.com',
        bi_directional               => FALSE); -- Set to TRUE for bi-directional
    END;
    /
    

    The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks. Do not allow data manipulation language (DML) or data definition language (DDL) changes to the replicated database objects at the destination database while the procedure is running. See "About the Oracle Streams Replication Configuration Procedures".

    When a configuration procedure is run, information about its progress is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.
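For example, you can check the progress of a running configuration procedure from another session with queries like the following. This is a sketch; the columns shown are a subset of what these views provide:

```sql
-- Overall status of each configuration operation
SELECT SCRIPT_ID, INVOKING_PROCEDURE, STATUS, DONE_BLOCK_NUM, TOTAL_BLOCKS
  FROM DBA_RECOVERABLE_SCRIPT;

-- Details of any errors encountered
SELECT SCRIPT_ID, BLOCK_NUM, ERROR_MESSAGE
  FROM DBA_RECOVERABLE_SCRIPT_ERRORS;
```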

  9. If you configured bi-directional replication, then configure latest time conflict resolution for all of the tables in the hr schema at both databases. This schema includes the countries, departments, employees, jobs, job_history, locations, and regions tables. See "Tutorial: Configuring Latest Time Conflict Resolution for a Table" for instructions.

When you complete the example, a two-database replication environment with the following characteristics is configured:

  • At db1.example.com, supplemental logging is configured for the tables in the hr schema.

  • The db1.example.com database has the following components:

    • A capture process with a system-generated name. The capture process captures DML changes to the hr schema.

    • A queue with a system-generated name. This queue is for the capture process at the database.

    • A propagation with a system-generated name that sends changes from the queue at the db1.example.com database to the queue at the db2.example.com database.

  • The db2.example.com database has the following components:

    • A queue with a system-generated name that receives the changes sent from the db1.example.com database. This queue is for the apply process at the local database.

    • An apply process with a system-generated name. The apply process dequeues changes from its queue and applies them to the hr schema.

  • If the replication environment is bi-directional, then the following are also configured:

    • At db2.example.com, supplemental logging for the tables in the hr schema.

    • At db2.example.com, a capture process with a system-generated name. The capture process captures DML changes to the hr schema.

    • At db2.example.com, a queue with a system-generated name. This queue is for the capture process at the database.

    • At db1.example.com, a queue with a system-generated name that receives the changes sent from the db2.example.com database. This queue is for the apply process at the local database.

    • At db1.example.com, an apply process with a system-generated name. The apply process dequeues changes from its queue and applies them to the hr schema.

  • If the replication environment is bi-directional, then tags are used to avoid change cycling in the following way:

    • Each apply process uses an apply tag, and redo records for changes applied by the apply process include the tag. Each apply process uses an apply tag that is unique in the replication environment.

    • Each capture process captures all of the changes to the replicated database objects, regardless of the tag in the redo record. Therefore, each capture process captures the changes applied by the apply processes on its source database.

    • Each propagation sends all changes made to the replicated database objects to the other database in the replication environment, except for changes that originated at the other database. The propagation rules instruct the propagation to discard these changes.

    See "About Tags for Avoiding Change Cycling" for more information about how the replication environment avoids change cycling. If you configured one-way replication, then change cycling is not possible because changes are only captured in a single location.

To check the Oracle Streams replication configuration:

  1. At the db1.example.com database, ensure that the capture process is enabled and that the capture type is local. To do so, follow the instructions in "Viewing Information About a Capture Process", and check the Status and Capture Type fields on the Capture subpage.

  2. At the db1.example.com database, ensure that the propagation is enabled. To do so, follow the instructions in "Viewing Information About a Propagation", and check the Status field on the Propagation subpage.

  3. At the db2.example.com database, ensure that the apply process is enabled. To do so, follow the instructions in "Viewing Information About an Apply Process", and check the Status field on the Apply subpage.

  4. If you configured bi-directional replication, then complete the following steps:

    1. At the db2.example.com database, ensure that the capture process is enabled and that the capture type is local.

    2. At the db2.example.com database, ensure that the propagation is enabled.

    3. At the db1.example.com database, ensure that the apply process is enabled.
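If you prefer SQL*Plus to Oracle Enterprise Manager, the same checks can be sketched as queries against the data dictionary views described earlier in this chapter:

```sql
-- Run at db1.example.com (and at db2.example.com for bi-directional replication)
SELECT CAPTURE_NAME, STATUS, CAPTURE_TYPE FROM DBA_CAPTURE;
SELECT PROPAGATION_NAME, STATUS FROM DBA_PROPAGATION;

-- Run at db2.example.com (and at db1.example.com for bi-directional replication)
SELECT APPLY_NAME, STATUS FROM DBA_APPLY;
```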

To replicate changes:

  1. At a database that captures changes to the hr schema, make DML changes to any table in the hr schema. In this example, the db1.example.com database captures changes to the hr schema, and, if you configured bi-directional replication, then db2.example.com also captures changes to the hr schema.

  2. After some time has passed to allow for replication of the changes, use SQL*Plus to query the modified table at the other database to view the DML changes.
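For example, the following DML change at db1.example.com, sketched here using the standard hr.employees sample table, should appear at db2.example.com after a short delay:

```sql
-- At db1.example.com
UPDATE hr.employees SET salary = salary * 1.05 WHERE employee_id = 100;
COMMIT;

-- After a short wait, at db2.example.com
SELECT salary FROM hr.employees WHERE employee_id = 100;
```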

Note:

The configuration procedures in the DBMS_STREAMS_ADM package do not configure the replicated tables to be read-only at the destination databases. If one-way replication is configured and the replicated tables should be read-only, then configure privileges at the destination databases accordingly. However, the apply user for the apply process must be able to make DML changes to the replicated database objects. In this example, the apply user is the Oracle Streams administrator. See Oracle Database Security Guide for information about configuring privileges.

Tutorial: Configuring Two-Database Replication with a Downstream Capture Process

The example in this topic configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to all of the tables in the hr schema. This example configures a two-database replication environment with a downstream capture process at the destination database. This example uses the global database names src.example.com and dest.example.com. However, you can substitute databases in your environment to complete the example. See "About Two-Database Replication Environments" for more information about two-database replication environments.

In this example, the downstream capture process runs on the destination database dest.example.com. Therefore, the resources required to capture changes are freed at the source database src.example.com. This example configures a real-time downstream capture process, not an archived-log downstream capture process. The advantage of real-time downstream capture is that it reduces the amount of time required to capture the changes made at the source database. The time is reduced because the real-time downstream capture process does not need to wait for the redo log file to be archived before it can capture data from it.

This example assumes that the replicated database objects are used for reporting and analysis at the destination database. Therefore, these database objects are assumed to be read-only at the dest.example.com database.

This example uses the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure the two-database replication environment. This procedure is the fastest and simplest way to configure an Oracle Streams environment that replicates one or more schemas. In addition, the procedure follows established best practices for Oracle Streams replication environments.

The database objects being configured for replication might or might not exist at the destination database when you run the MAINTAIN_SCHEMAS procedure. If the database objects do not exist at the destination database, then the MAINTAIN_SCHEMAS procedure instantiates them at the destination database using a Data Pump export/import. During instantiation, the instantiation SCN is set for these database objects. If the database objects already exist at the destination database, then the MAINTAIN_SCHEMAS procedure retains the existing database objects and sets the instantiation SCN for them. In this example, the hr schema exists at both the src.example.com database and the dest.example.com database before the MAINTAIN_SCHEMAS procedure is run.

Figure 4-12 provides an overview of the environment created in this example.

Figure 4-12 Two-Database Replication Environment with a Downstream Capture Process

Description of Figure 4-12 follows
Description of "Figure 4-12 Two-Database Replication Environment with a Downstream Capture Process"

Note:

Local capture processes provide more flexibility than downstream capture processes in replication environments with different platforms or different versions of Oracle Database. See Oracle Streams Concepts and Administration for more information.

To configure this two-database replication environment:

  1. Complete the following tasks to prepare for the two-database replication environment:

    1. Configure network connectivity so that the src.example.com database and the dest.example.com database can communicate with each other.

      See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.

    2. Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Tutorial: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.

    3. Create a database link from the source database to the destination database and from the destination database to the source database. In this example, create the following database links:

      • From the src.example.com database to the dest.example.com database. Both the name and the service name of the database link must be dest.example.com.

      • From the dest.example.com database to the src.example.com database. Both the name and the service name of the database link must be src.example.com.

      The database link from the dest.example.com database to the src.example.com database is necessary because the src.example.com database is the source database for the downstream capture process at the dest.example.com database. This database link simplifies the creation and configuration of the capture process.

      Each database link should be created in the Oracle Streams administrator's schema. Also, each database link should connect to the Oracle Streams administrator at the other database. See "Tutorial: Creating a Database Link" for instructions.
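      For reference, links of this form can be created with statements like the following, run as the Oracle Streams administrator at each database. This is a sketch; the password shown is a placeholder, and the referenced tutorial covers the details:

```sql
-- At src.example.com, connected as strmadmin:
CREATE DATABASE LINK dest.example.com
   CONNECT TO strmadmin IDENTIFIED BY password
   USING 'dest.example.com';

-- At dest.example.com, connected as strmadmin:
CREATE DATABASE LINK src.example.com
   CONNECT TO strmadmin IDENTIFIED BY password
   USING 'src.example.com';
```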

    4. Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.

    5. Configure both databases to run in ARCHIVELOG mode. For a downstream capture process to capture changes generated at a source database, both the source database and the downstream capture database must be running in ARCHIVELOG mode. In this example, the src.example.com and dest.example.com databases must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.

    6. Configure authentication at both databases to support the transfer of redo data.

      Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If the source database has a remote login password file, then copy it to the appropriate directory on the downstream capture database system. The password file must be the same at the source database and the downstream capture database.

      In this example, the source database is src.example.com and the downstream capture database is dest.example.com. See Oracle Data Guard Concepts and Administration for detailed information about authentication requirements for redo transport.

  2. At the source database src.example.com, set the following initialization parameters to configure redo transport services to transmit redo data from the online redo log at the source database to the standby redo log at the downstream database dest.example.com:

    • At the source database, configure at least one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream database. To do this, set the following attributes of this parameter:

      • SERVICE - Specify the network service name of the downstream database.

      • ASYNC or SYNC - Specify a redo transport mode.

        The advantage of specifying ASYNC is that it results in little or no effect on the performance of the source database. If the source database is running Oracle Database 10g Release 1 or later, then ASYNC is recommended to avoid affecting source database performance if the downstream database or network is performing poorly.

        The advantage of specifying SYNC is that redo data is sent to the downstream database faster than when ASYNC is specified. Also, specifying SYNC AFFIRM results in behavior that is similar to the MAXIMUM AVAILABILITY standby protection mode. Note that specifying an ALTER DATABASE STANDBY DATABASE TO MAXIMIZE AVAILABILITY SQL statement has no effect on an Oracle Streams capture process.

      • NOREGISTER - Specify this attribute so that the location of the archived redo log files is not recorded in the downstream database control file.

      • VALID_FOR - Specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).

      • DB_UNIQUE_NAME - The unique name of the downstream database. Use the name specified for the DB_UNIQUE_NAME initialization parameter at the downstream database.

      The following example is a LOG_ARCHIVE_DEST_n setting that specifies a downstream database:

      LOG_ARCHIVE_DEST_2='SERVICE=DEST.EXAMPLE.COM ASYNC NOREGISTER
         VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
         DB_UNIQUE_NAME=dest'
      
    • LOG_ARCHIVE_DEST_STATE_n - At the source database, set this initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.

      For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:

      LOG_ARCHIVE_DEST_STATE_2=ENABLE 
      
    • LOG_ARCHIVE_CONFIG - Set the DG_CONFIG attribute in this initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database.

      For example, if the DB_UNIQUE_NAME of the source database is src, and the DB_UNIQUE_NAME of the downstream database is dest, then specify the following parameter:

      LOG_ARCHIVE_CONFIG='DG_CONFIG=(src,dest)'
      

      By default, the LOG_ARCHIVE_CONFIG parameter enables a database to both send and receive redo.

    See Also:

    Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters
  3. At the downstream database dest.example.com, set the following initialization parameters to configure archiving of the redo data generated locally:

    • Set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to either a directory or to the flash recovery area on the computer system running the downstream database. To do this, set the following attributes of this parameter:

      • LOCATION - Specify either a valid path name for a disk directory or USE_DB_RECOVERY_FILE_DEST. Each destination that specifies the LOCATION attribute must specify either a unique directory path name or USE_DB_RECOVERY_FILE_DEST. This is the local destination for archived redo log files generated by the local database.

      • VALID_FOR - Specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).

      The following example is a LOG_ARCHIVE_DEST_n setting for the locally generated redo data at the real-time downstream capture database:

      LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local_rl
         VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
      

      A real-time downstream capture configuration should keep archived standby redo log files separate from the archived online redo log files that are generated by the downstream database itself. To accomplish this, specify ONLINE_LOGFILE instead of ALL_LOGFILES for the redo log type in the VALID_FOR attribute.

      You can specify other attributes in the LOG_ARCHIVE_DEST_n initialization parameter if necessary.

    • Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter previously set in this step to ENABLE.

      For example, if the LOG_ARCHIVE_DEST_1 initialization parameter is set, then set the LOG_ARCHIVE_DEST_STATE_1 parameter in the following way:

      LOG_ARCHIVE_DEST_STATE_1=ENABLE 
      
  4. At the downstream database dest.example.com, set the following initialization parameters to configure the downstream database to receive redo data from the source database and write the redo data to the standby redo log at the downstream database:

    • Set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to either a directory or to the flash recovery area on the computer system running the downstream database. To do this, set the following attributes of this parameter:

      • LOCATION - Specify either a valid path name for a disk directory or USE_DB_RECOVERY_FILE_DEST. Each destination that specifies the LOCATION attribute must specify either a unique directory path name or USE_DB_RECOVERY_FILE_DEST. This is the local destination for archived redo log files written from the standby redo logs. Log files from a remote source database should be kept separate from local database log files.

      • VALID_FOR - Specify either (STANDBY_LOGFILE,PRIMARY_ROLE) or (STANDBY_LOGFILE,ALL_ROLES).

      The following example is a LOG_ARCHIVE_DEST_n setting at the real-time downstream capture database:

      LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_src1
         VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
      

      You can specify other attributes in the LOG_ARCHIVE_DEST_n initialization parameter if necessary.

    • Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter previously set in this step to ENABLE.

      For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:

      LOG_ARCHIVE_DEST_STATE_2=ENABLE 
      
    • LOG_ARCHIVE_CONFIG - Set the DG_CONFIG attribute in this initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database.

      For example, if the DB_UNIQUE_NAME of the source database is src, and the DB_UNIQUE_NAME of the downstream database is dest, then specify the following parameter:

      LOG_ARCHIVE_CONFIG='DG_CONFIG=(src,dest)'
      

      By default, the LOG_ARCHIVE_CONFIG parameter enables a database to both send and receive redo.

  5. If you set any initialization parameters dynamically while an instance was running at a database in Step 2, 3, or 4, then consider also setting them in the relevant initialization parameter file, so that the new values are retained when the database is restarted.

    If you did not set the initialization parameters dynamically, but instead set them only in the initialization parameter file in Step 2, 3, or 4, then restart the database. The source database must be open when it sends redo data to the downstream database, because the global name of the source database is sent to the downstream database only if the source database is open.
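    For example, if the database uses a server parameter file (spfile), a dynamic parameter change can be recorded persistently in the same statement. The following is a sketch; substitute the parameter and value that apply to your environment:

```sql
-- Applies the change to the running instance and also records it in the
-- server parameter file, so the value survives a database restart:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;
```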

  6. At the downstream database dest.example.com, connect as an administrative user and create standby redo log files.

    Note:

    The following steps outline the general procedure for adding standby redo log files to the downstream database. The specific steps and SQL statements used to add standby redo log files depend on your environment. For example, in an Oracle Real Application Clusters (Oracle RAC) environment, the steps are different. See Oracle Data Guard Concepts and Administration for detailed instructions about adding standby redo log files to a database.
    1. Open SQL*Plus and connect to the source database src.example.com as an administrative user.

      For example, to connect to the src.example.com database as an administrative user:

      sqlplus system@src.example.com
      Enter password: password
      

      See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

    2. Determine the log file size used on the source database src.example.com. The standby log file size must be at least as large as the source database log file size. For example, if the source database log file size is 500 MB, then the standby log file size must be 500 MB or larger. You can determine the size of the redo log files at the source database (in bytes) by querying the V$LOG view at the source database.

      For example, query the V$LOG view:

      SELECT BYTES FROM V$LOG;
      
    3. Determine the number of standby log file groups required on the downstream database dest.example.com. The number of standby log file groups must be at least one more than the number of online log file groups on the source database. For example, if the source database has two online log file groups, then the downstream database must have at least three standby log file groups. You can determine the number of source database online log file groups by querying the V$LOG view at the source database.

      For example, while still connected in SQL*Plus as an administrative user to src.example.com, query the V$LOG view:

      SELECT COUNT(GROUP#) FROM V$LOG;
      
    4. In SQL*Plus, connect to the downstream database dest.example.com as an administrative user.

      CONNECT system@dest.example.com
      Enter password: password
      
    5. Use the SQL statement ALTER DATABASE ADD STANDBY LOGFILE to add the standby log file groups to the downstream database dest.example.com.

      For example, assume that the source database has two online redo log file groups and is using a log file size of 500 MB. In this case, use the following statements to create the appropriate standby log file groups:

      ALTER DATABASE ADD STANDBY LOGFILE GROUP 3
         ('/oracle/dbs/slog3a.rdo', '/oracle/dbs/slog3b.rdo') SIZE 500M;
      
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
         ('/oracle/dbs/slog4a.rdo', '/oracle/dbs/slog4b.rdo') SIZE 500M;
      
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
         ('/oracle/dbs/slog5a.rdo', '/oracle/dbs/slog5b.rdo') SIZE 500M;
      
    6. Ensure that the standby log file groups were added successfully by running the following query at dest.example.com:

      SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS
         FROM V$STANDBY_LOG;
      

      Your output should be similar to the following:

          GROUP#    THREAD#  SEQUENCE# ARC STATUS
      ---------- ---------- ---------- --- ----------
               3          0          0 YES UNASSIGNED
               4          0          0 YES UNASSIGNED
               5          0          0 YES UNASSIGNED
      
    7. Ensure that log files from the source database are appearing in the directory specified in the LOCATION attribute in Step 4. You might need to switch the log file at the source database to see files in the directory.
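      For example, one way to force a log switch, and therefore the archiving of the current log file group, is to run the following statement at src.example.com:

```sql
-- Forces the source database to switch to a new online redo log file
-- group; the previous group becomes eligible for archiving:
ALTER SYSTEM SWITCH LOGFILE;
```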

  7. In SQL*Plus, connect to the src.example.com database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  8. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named src_dir that points to the /usr/src_log_files directory:

    CREATE DIRECTORY src_dir AS '/usr/src_log_files';
    
  9. In SQL*Plus, connect to the dest.example.com database as the Oracle Streams administrator.

  10. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named dest_dir that points to the /usr/dest_log_files directory:

    CREATE DIRECTORY dest_dir AS '/usr/dest_log_files';
    
  11. While still connected to the dest.example.com database as the Oracle Streams administrator, run the MAINTAIN_SCHEMAS procedure to configure replication between the src.example.com database and the dest.example.com database:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                 => 'hr',
        source_directory_object      => 'src_dir',
        destination_directory_object => 'dest_dir',
        source_database              => 'src.example.com',
        destination_database         => 'dest.example.com',
        capture_name                 => 'capture',
        capture_queue_table          => 'streams_queue_qt',
        capture_queue_name           => 'streams_queue',
        apply_name                   => 'apply',
        apply_queue_table            => 'streams_queue_qt',
        apply_queue_name             => 'streams_queue');
    END;
    /
    

    The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks. Do not allow data manipulation language (DML) or data definition language (DDL) changes to the replicated database objects at the destination database while the procedure is running.

    In the MAINTAIN_SCHEMAS procedure, only the following parameters are required: schema_names, source_directory_object, destination_directory_object, source_database, and destination_database. See "About the Oracle Streams Replication Configuration Procedures".

    This example specifies the other parameters to show that you can choose the name for the capture process, capture process queue table, capture process queue, apply process, apply process queue table, and apply process queue. If you do not specify these parameters, then system-generated names are used.

    When you use a configuration procedure to configure downstream capture, the parameters that specify the queue and queue table names are important. In such a configuration, it is more efficient for the capture process and apply process to use the same queue at the downstream capture database to avoid propagating changes between queues. To improve efficiency in this sample configuration, notice that streams_queue is specified for both the capture_queue_name and apply_queue_name parameters. Also, streams_queue_qt is specified for both the capture_queue_table and apply_queue_table parameters.

    When a configuration procedure is run, information about its progress is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.

    Wait until the procedure completes successfully before proceeding to the next step.
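    While the procedure runs, you can monitor its progress from another session by querying these views. The following query is a sketch; see Oracle Database Reference for the complete column list:

```sql
-- One row for each configuration operation; STATUS shows whether the
-- generated script is still executing, has finished, or has stopped
-- because of an error:
SELECT SCRIPT_ID, STATUS, DONE_BLOCK_NUM, TOTAL_BLOCKS
   FROM DBA_RECOVERABLE_SCRIPT;
```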

  12. While still connected to the dest.example.com database as the Oracle Streams administrator, set the downstream_real_time_mine capture process parameter to Y:

    BEGIN
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => 'capture',
        parameter    => 'downstream_real_time_mine',
        value        => 'Y');
    END;
    /
    

    If you would rather set the capture process parameter using Enterprise Manager, then see "Setting a Capture Process Parameter" for instructions.

  13. In SQL*Plus, connect to the source database src.example.com as an administrative user.

  14. Archive the current log file at the source database:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
    

    Archiving the current log file at the source database starts real-time mining of the source database redo log.

When you complete the example, a two-database replication environment with the following characteristics is configured:

  • At the src.example.com database, supplemental logging is configured for the tables in the hr schema.

  • The dest.example.com database has the following components:

    • A downstream capture process named capture. The capture process captures changes to the hr schema in the redo log information sent from the source database src.example.com.

    • A queue named streams_queue that uses a queue table named streams_queue_qt. This queue is for the capture process and apply process at the database.

    • An apply process named apply. The apply process applies changes to the hr schema.

To check the Oracle Streams replication configuration:

  1. At the dest.example.com database, ensure that the capture process is enabled and that the capture type is downstream. To do so, follow the instructions in "Viewing Information About a Capture Process", and check the Status and Capture Type fields on the Capture subpage.

  2. At the dest.example.com database, ensure that the apply process is enabled. To do so, follow the instructions in "Viewing Information About an Apply Process", and check the Status field on the Apply subpage.
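If you prefer SQL*Plus to Enterprise Manager, you can make equivalent checks at the dest.example.com database by querying the DBA_CAPTURE and DBA_APPLY data dictionary views:

```sql
-- The capture process should show STATUS = 'ENABLED' and
-- CAPTURE_TYPE = 'DOWNSTREAM':
SELECT CAPTURE_NAME, STATUS, CAPTURE_TYPE FROM DBA_CAPTURE;

-- The apply process should show STATUS = 'ENABLED':
SELECT APPLY_NAME, STATUS FROM DBA_APPLY;
```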

To replicate changes:

  1. At the src.example.com database, make DML changes to any table in the hr schema, and commit the changes.

  2. After some time has passed to allow for replication of the changes, use SQL*Plus to query the modified table at the dest.example.com database to view the DML changes.
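For example, the following statements sketch a simple test using the employees table in the hr sample schema; the values shown are placeholders:

```sql
-- At src.example.com, make and commit a DML change:
UPDATE hr.employees SET phone_number = '650.555.0100'
   WHERE employee_id = 206;
COMMIT;

-- After allowing time for replication, at dest.example.com, confirm
-- that the change has been applied:
SELECT employee_id, phone_number FROM hr.employees
   WHERE employee_id = 206;
```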

Note:

The configuration procedures in the DBMS_STREAMS_ADM package do not configure the replicated tables to be read only at the destination database. If they should be read only, then configure privileges at the destination database accordingly. However, the apply user for the apply process must be able to make DML changes to the replicated database objects. In this example, the apply user is the Oracle Streams administrator. See Oracle Database Security Guide for information about configuring privileges.

Tutorial: Configuring Hub-and-Spoke Replication with Local Capture Processes

The example in this topic configures an Oracle Streams hub-and-spoke replication environment that replicates data manipulation language (DML) changes to all of the tables in the hr schema. This example uses a capture process at each database to capture these changes. Hub-and-spoke replication means that a central hub database replicates changes with one or more spoke databases. The spoke databases do not communicate with each other directly. In this sample configuration, the hub database sends changes generated at one spoke database to the other spoke database. See "About Hub-And-Spoke Replication Environments" for more information about hub-and-spoke replication environments.

This example uses the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure the hub-and-spoke replication environment. This procedure is the fastest and simplest way to configure an Oracle Streams environment that replicates one or more schemas. In addition, the procedure follows established best practices for Oracle Streams replication environments.

In this example, the global name of the hub database is hub.example.com, and the global names of the spoke databases are spoke1.example.com and spoke2.example.com. However, you can substitute databases in your environment to complete the example.

The database objects being configured for replication might or might not exist at the destination databases when you run the MAINTAIN_SCHEMAS procedure. If the database objects do not exist at a destination database, then the MAINTAIN_SCHEMAS procedure instantiates them at the destination database using a Data Pump export/import. During instantiation, the instantiation SCN is set for these database objects. If the database objects already exist at a destination database, then the MAINTAIN_SCHEMAS procedure retains the existing database objects and sets the instantiation SCN for them. In this example, the hr schema exists at each database before the MAINTAIN_SCHEMAS procedure is run.

Figure 4-13 provides an overview of the environment created in this example.

Figure 4-13 Sample Hub-and-Spoke Environment with Capture Processes and Read/Write Spokes


To configure this hub-and-spoke replication environment with read/write spokes:

  1. Complete the following tasks to prepare for the hub-and-spoke replication environment:

    1. Configure network connectivity so that the following databases can communicate with each other:

      • The hub.example.com database and the spoke1.example.com database

      • The hub.example.com database and the spoke2.example.com database

      See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.

    2. Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Tutorial: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.

    3. Create a database link from the hub database to each spoke database and from each spoke database to the hub database. In this example, create the following database links:

      • From the hub.example.com database to the spoke1.example.com database. Both the name and the service name of the database link must be spoke1.example.com.

      • From the hub.example.com database to the spoke2.example.com database. Both the name and the service name of the database link must be spoke2.example.com.

      • From the spoke1.example.com database to the hub.example.com database. Both the name and the service name of the database link must be hub.example.com.

      • From the spoke2.example.com database to the hub.example.com database. Both the name and the service name of the database link must be hub.example.com.

      Each database link should be created in the Oracle Streams administrator's schema. Also, each database link should connect to the Oracle Streams administrator at the destination database. See "Tutorial: Creating a Database Link" for instructions.

    4. Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.

    5. Configure each source database to run in ARCHIVELOG mode. For a capture process to capture changes generated at a source database, the source database must be running in ARCHIVELOG mode. In this example, all databases must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.

  2. On a command line, open SQL*Plus and connect to the spoke1.example.com database as the Oracle Streams administrator.

    See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

  3. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named spoke1_dir that points to the /usr/spoke1_log_files directory:

    CREATE DIRECTORY spoke1_dir AS '/usr/spoke1_log_files';
    
  4. In SQL*Plus, connect to the spoke2.example.com database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  5. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named spoke2_dir that points to the /usr/spoke2_log_files directory:

    CREATE DIRECTORY spoke2_dir AS '/usr/spoke2_log_files';
    
  6. In SQL*Plus, connect to the hub.example.com database as the Oracle Streams administrator.

  7. Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure, including the Data Pump export dump file used for instantiation. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hub_dir that points to the /usr/hub_log_files directory:

    CREATE DIRECTORY hub_dir AS '/usr/hub_log_files';
    
  8. While still connected in SQL*Plus to the hub.example.com database as the Oracle Streams administrator, run the MAINTAIN_SCHEMAS procedure to configure replication between the hub.example.com database and the spoke1.example.com database:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                 => 'hr',
        source_directory_object      => 'hub_dir',
        destination_directory_object => 'spoke1_dir',
        source_database              => 'hub.example.com',
        destination_database         => 'spoke1.example.com',
        capture_name                 => 'capture_hns',
        capture_queue_table          => 'source_hns_qt',
        capture_queue_name           => 'source_hns',
        propagation_name             => 'propagation_spoke1',
        apply_name                   => 'apply_spoke1',
        apply_queue_table            => 'destination_spoke1_qt',
        apply_queue_name             => 'destination_spoke1',
        bi_directional               => TRUE);
    END;
    /
    

    The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks. Do not allow data manipulation language (DML) or data definition language (DDL) changes to the replicated database objects at the destination database while the procedure is running.

    In the MAINTAIN_SCHEMAS procedure, only the following parameters are required: schema_names, source_directory_object, destination_directory_object, source_database, and destination_database. Also, when you use a configuration procedure to configure bi-directional replication, the bi_directional parameter must be set to TRUE. See "About the Oracle Streams Replication Configuration Procedures".

    This example specifies the other parameters to show that you can choose the name for the capture process, capture process queue table, capture process queue, propagation, apply process, apply process queue table, and apply process queue. If you do not specify these parameters, then system-generated names are used.

    When a configuration procedure is run, information about its progress is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.

  9. While still connected in SQL*Plus to the hub.example.com database as the Oracle Streams administrator, run the MAINTAIN_SCHEMAS procedure to configure replication between the hub.example.com database and the spoke2.example.com database:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                 => 'hr',
        source_directory_object      => 'hub_dir',
        destination_directory_object => 'spoke2_dir',
        source_database              => 'hub.example.com',
        destination_database         => 'spoke2.example.com',
        capture_name                 => 'capture_hns',
        capture_queue_table          => 'source_hns_qt',
        capture_queue_name           => 'source_hns',
        propagation_name             => 'propagation_spoke2',
        apply_name                   => 'apply_spoke2',
        apply_queue_table            => 'destination_spoke2_qt',
        apply_queue_name             => 'destination_spoke2',
        bi_directional               => TRUE);
    END;
    /
    
  10. Configure latest time conflict resolution for all of the tables in the hr schema at the hub.example.com, spoke1.example.com, and spoke2.example.com databases. This schema includes the countries, departments, employees, jobs, job_history, locations, and regions tables. See "Tutorial: Configuring Latest Time Conflict Resolution for a Table" for instructions.

When you complete the example, a hub-and-spoke replication environment with the following characteristics is configured:

  • Supplemental logging is configured for the tables in the hr schema at each database.

  • Each database has a capture process named capture_hns. The capture process captures changes to the hr schema at the database.

  • Each database has a queue named source_hns that uses a queue table named source_hns_qt. This queue is for the capture process at the database.

  • The hub database hub.example.com has the following additional components:

    • An apply process named apply_spoke1. This apply process applies changes to the hr schema that were sent from the spoke1.example.com database.

    • A queue named destination_spoke1 that uses a queue table named destination_spoke1_qt. This queue is for the apply_spoke1 apply process at the database.

    • An apply process named apply_spoke2. This apply process applies changes to the hr schema that were sent from the spoke2.example.com database.

    • A queue named destination_spoke2 that uses a queue table named destination_spoke2_qt. This queue is for the apply_spoke2 apply process at the database.

    • A propagation named propagation_spoke1. This propagation sends changes to the hr schema from the source_hns queue to the destination_spoke1 queue at the spoke1.example.com database.

    • A propagation named propagation_spoke2. This propagation sends changes to the hr schema from the source_hns queue to the destination_spoke2 queue at the spoke2.example.com database.

  • The spoke database spoke1.example.com has the following additional components:

    • An apply process named apply_spoke1. The apply process applies changes to the hr schema that were sent from the hub.example.com database.

    • A queue named destination_spoke1 that uses a queue table named destination_spoke1_qt. This queue is for the apply_spoke1 apply process at the database.

    • A propagation named propagation_spoke1. This propagation sends changes to the hr schema from the source_hns queue to the destination_spoke1 queue at the hub.example.com database.

  • The spoke database spoke2.example.com has the following additional components:

    • An apply process named apply_spoke2. The apply process applies changes to the hr schema that were sent from the hub.example.com database.

    • A queue named destination_spoke2 that uses a queue table named destination_spoke2_qt. This queue is for the apply_spoke2 apply process at the database.

    • A propagation named propagation_spoke2. This propagation sends changes to the hr schema from the source_hns queue to the destination_spoke2 queue at the hub.example.com database.

  • Tags are used to avoid change cycling in the following way:

    • Each apply process uses an apply tag, and redo records for changes applied by the apply process include the tag. Each apply process uses an apply tag that is unique in the replication environment.

    • Each capture process captures all of the changes to the replicated database objects, regardless of the tag in the redo record. Therefore, each capture process captures the changes applied by the apply processes on its source database.

    • Each propagation sends all changes made to the replicated database objects to another database in the replication environment, except for changes that originated at the other database. The propagation rules instruct the propagation to discard these changes.

    See "About Tags for Avoiding Change Cycling" and Oracle Database PL/SQL Packages and Types Reference for more information about how the replication environment avoids change cycling.

To check the Oracle Streams replication configuration:

  1. At each database, ensure that the capture process is enabled and that the capture type is local. To do so, follow the instructions in "Viewing Information About a Capture Process", and check the Status and Capture Type fields on the Capture subpage.

  2. At each database, ensure that each propagation is enabled. To do so, follow the instructions in "Viewing Information About a Propagation", and check the Status field on the Propagation subpage. The hub database should have two propagations, and they should both be enabled. Each spoke database should have one propagation that is enabled.

  3. At each database, ensure that each apply process is enabled. To do so, follow the instructions in "Viewing Information About an Apply Process", and check the Status field on the Apply subpage. The hub database should have two apply processes, and they should both be enabled. Each spoke database should have one apply process that is enabled.
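Alternatively, you can check the same information from SQL*Plus. Connected as the Oracle Streams administrator at each database, queries like the following show the status of each component in the standard data dictionary views:

    SELECT CAPTURE_NAME, STATUS, CAPTURE_TYPE FROM DBA_CAPTURE;
    SELECT PROPAGATION_NAME, STATUS FROM DBA_PROPAGATION;
    SELECT APPLY_NAME, STATUS FROM DBA_APPLY;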

To replicate changes:

  1. At one of the databases, make DML changes to any table in the hr schema.

  2. After some time has passed to allow for replication of the changes, use SQL*Plus to query the modified table at the other databases to view the DML changes.
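For example, a change like the following made at one database should appear at each of the other databases after it is replicated. The employee_id value 100 is only an illustration; use any existing row.

    UPDATE hr.employees SET salary = salary * 1.05 WHERE employee_id = 100;
    COMMIT;

    -- At another database, after allowing time for replication:
    SELECT salary FROM hr.employees WHERE employee_id = 100;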

Tutorial: Configuring Two-Database Replication with Synchronous Captures

The example in this topic configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to two tables in the hr schema. This example uses a synchronous capture at each database to capture these changes. In this example, the global names of the databases in the Oracle Streams replication environment are sync1.example.com and sync2.example.com. However, you can substitute any two databases in your environment to complete the example. See "About Two-Database Replication Environments" for more information about two-database replication environments.

Specifically, this example configures a two-database Oracle Streams replication environment that shares the hr.employees and hr.departments tables at the sync1.example.com and sync2.example.com databases. The two databases replicate all of the DML changes to these tables. The hr sample schema is installed by default with Oracle Database.

Note:

A synchronous capture can only capture changes at the table level. It cannot capture changes at the schema or database level. You can configure a synchronous capture using the ADD_TABLE_RULES and ADD_SUBSET_RULES procedures in the DBMS_STREAMS_ADM package.

Figure 4-14 provides an overview of the environment created in this example.

Figure 4-14 Two-Database Replication Environment with Synchronous Captures


To configure this replication environment with synchronous captures:

  1. Complete the following tasks to prepare for the two-database replication environment:

    1. Configure network connectivity so that the two databases can communicate with each other. See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.

    2. Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Tutorial: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.

    3. Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.

    4. Ensure that the hr.employees and hr.departments tables exist at the two databases and are consistent at these databases. If the database objects exist at only one database, then you can use export/import to create and populate them at the other database. See Oracle Database Utilities for information about export/import.

  2. Create two ANYDATA queues at each database. For this example, create the following two queues at each database:

    • A queue named capture_queue owned by the Oracle Streams administrator strmadmin. This queue will be used by the synchronous capture at the database.

    • A queue named apply_queue owned by the Oracle Streams administrator strmadmin. This queue will be used by the apply process at the database.

    See "Creating an ANYDATA Queue" for instructions.
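    For example, the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package can create a queue and its queue table in a single call. The following sketch creates the capture_queue queue; the queue table name capture_queue_table is an assumption, and you can choose any name:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.capture_queue_table',
        queue_name  => 'strmadmin.capture_queue',
        queue_user  => 'strmadmin');
    END;
    /

    Repeat the call with apply_queue and a second queue table name to create the apply queue.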

  3. Create a database link from each database to the other database:

    1. Create a database link from the sync1.example.com database to the sync2.example.com database. The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the sync2.example.com database. Both the name and the service name of the database link must be sync2.example.com.

    2. Create a database link from the sync2.example.com database to the sync1.example.com database. The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the sync1.example.com database. Both the name and the service name of the database link must be sync1.example.com.

    See "Tutorial: Creating a Database Link" for instructions.
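    For example, while connected to the sync1.example.com database as strmadmin, a statement like the following creates the required link. Replace password with the actual password of the Oracle Streams administrator at sync2.example.com:

    CREATE DATABASE LINK sync2.example.com
      CONNECT TO strmadmin IDENTIFIED BY password
      USING 'sync2.example.com';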

  4. Configure an apply process at the sync1.example.com database. This apply process will apply changes to the shared tables that were captured at the sync2.example.com database and propagated to the sync1.example.com database.

    1. Open SQL*Plus and connect to the sync1.example.com database as the Oracle Streams administrator.

      See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

    2. Create the apply process:

      BEGIN
        DBMS_APPLY_ADM.CREATE_APPLY(
          queue_name     => 'strmadmin.apply_queue',
          apply_name     => 'apply_emp_dep',
          apply_captured => FALSE);
      END;
      /
      

      The apply_captured parameter is set to FALSE because the apply process applies changes in the persistent queue. These are changes that were captured by a synchronous capture. The apply_captured parameter should be set to TRUE only when the apply process applies changes captured by a capture process.

      Do not start the apply process.

    3. Add a rule to the apply process rule set:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name      => 'hr.employees',
          streams_type    => 'apply',
          streams_name    => 'apply_emp_dep',
          queue_name      => 'strmadmin.apply_queue',
          source_database => 'sync2.example.com');
      END;
      /
      

      This rule instructs the apply process apply_emp_dep to apply all DML changes to the hr.employees table that appear in the apply_queue queue. The rule also specifies that the apply process applies only changes that were captured at the sync2.example.com source database.

    4. Add an additional rule to the apply process rule set:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name      => 'hr.departments',
          streams_type    => 'apply',
          streams_name    => 'apply_emp_dep',
          queue_name      => 'strmadmin.apply_queue',
          source_database => 'sync2.example.com');
      END;
      /
      

      This rule instructs the apply process apply_emp_dep to apply all DML changes to the hr.departments table that appear in the apply_queue queue. The rule also specifies that the apply process applies only changes that were captured at the sync2.example.com source database.

  5. Configure an apply process at the sync2.example.com database. This apply process will apply changes that were captured at the sync1.example.com database and propagated to the sync2.example.com database.

    1. In SQL*Plus, connect to the sync2.example.com database as the Oracle Streams administrator.

      See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

    2. Create the apply process:

      BEGIN
        DBMS_APPLY_ADM.CREATE_APPLY(
          queue_name     => 'strmadmin.apply_queue',
          apply_name     => 'apply_emp_dep',
          apply_captured => FALSE);
      END;
      /
      

      The apply_captured parameter is set to FALSE because the apply process applies changes in the persistent queue. These changes were captured by a synchronous capture. The apply_captured parameter should be set to TRUE only when the apply process applies changes captured by a capture process.

      Do not start the apply process.

    3. Add a rule to the apply process rule set:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name      => 'hr.employees',
          streams_type    => 'apply',
          streams_name    => 'apply_emp_dep',
          queue_name      => 'strmadmin.apply_queue',
          source_database => 'sync1.example.com');
      END;
      /
      

      This rule instructs the apply process apply_emp_dep to apply all DML changes that appear in the apply_queue queue to the hr.employees table. The rule also specifies that the apply process applies only changes that were captured at the sync1.example.com source database.

    4. Add an additional rule to the apply process rule set:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name      => 'hr.departments',
          streams_type    => 'apply',
          streams_name    => 'apply_emp_dep',
          queue_name      => 'strmadmin.apply_queue',
          source_database => 'sync1.example.com');
      END;
      /
      

      This rule instructs the apply process apply_emp_dep to apply all DML changes that appear in the apply_queue queue to the hr.departments table. The rule also specifies that the apply process applies only changes that were captured at the sync1.example.com source database.

  6. Create a propagation to send changes from a queue at the sync1.example.com database to a queue at the sync2.example.com database:

    1. In SQL*Plus, connect to the sync1.example.com database as the Oracle Streams administrator.

    2. Create the propagation that sends changes to the sync2.example.com database:

      BEGIN
        DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
          table_name              => 'hr.employees',
          streams_name            => 'send_emp_dep',
          source_queue_name       => 'strmadmin.capture_queue',
          destination_queue_name  => 'strmadmin.apply_queue@sync2.example.com',
          source_database         => 'sync1.example.com',
          queue_to_queue          => TRUE);
      END;
      /
      

      The ADD_TABLE_PROPAGATION_RULES procedure creates the propagation and its positive rule set. This procedure also adds a rule to the propagation rule set that instructs it to send DML changes to the hr.employees table to the apply_queue queue in the sync2.example.com database.

    3. Add an additional rule to the propagation rule set:

      BEGIN
        DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
          table_name              => 'hr.departments',
          streams_name            => 'send_emp_dep',
          source_queue_name       => 'strmadmin.capture_queue',
          destination_queue_name  => 'strmadmin.apply_queue@sync2.example.com',
          source_database         => 'sync1.example.com',
          queue_to_queue          => TRUE);
      END;
      /
      

      The ADD_TABLE_PROPAGATION_RULES procedure adds a rule to the propagation rule set that instructs it to send DML changes to the hr.departments table to the apply_queue queue in the sync2.example.com database.

  7. Create a propagation to send changes from a queue at the sync2.example.com database to a queue at the sync1.example.com database:

    1. In SQL*Plus, connect to the sync2.example.com database as the Oracle Streams administrator.

    2. Create the propagation that sends changes to the sync1.example.com database:

      BEGIN
        DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
          table_name              => 'hr.employees',
          streams_name            => 'send_emp_dep',
          source_queue_name       => 'strmadmin.capture_queue',
          destination_queue_name  => 'strmadmin.apply_queue@sync1.example.com',
          source_database         => 'sync2.example.com',
          queue_to_queue          => TRUE);
      END;
      /
      

      The ADD_TABLE_PROPAGATION_RULES procedure creates the propagation and its positive rule set. This procedure also adds a rule to the propagation rule set that instructs it to send DML changes to the hr.employees table to the apply_queue queue in the sync1.example.com database.

    3. Add an additional rule to the propagation rule set:

      BEGIN
        DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
          table_name              => 'hr.departments',
          streams_name            => 'send_emp_dep',
          source_queue_name       => 'strmadmin.capture_queue',
          destination_queue_name  => 'strmadmin.apply_queue@sync1.example.com',
          source_database         => 'sync2.example.com',
          queue_to_queue          => TRUE);
      END;
      /
      

      The ADD_TABLE_PROPAGATION_RULES procedure adds a rule to the propagation rule set that instructs it to send DML changes to the hr.departments table to the apply_queue queue in the sync1.example.com database.

  8. Configure a synchronous capture at the sync1.example.com database:

    1. In SQL*Plus, connect to the sync1.example.com database as the Oracle Streams administrator.

    2. Run the ADD_TABLE_RULES procedure to create the synchronous capture and add a rule to instruct it to capture changes to the hr.employees table:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name    => 'hr.employees',
          streams_type  => 'sync_capture',
          streams_name  => 'sync_capture',
          queue_name    => 'strmadmin.capture_queue');
      END;
      /
      
    3. Add an additional rule to the synchronous capture rule set:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name    => 'hr.departments',
          streams_type  => 'sync_capture',
          streams_name  => 'sync_capture',
          queue_name    => 'strmadmin.capture_queue');
      END;
      /
      

    Running these procedures performs the following actions:

    • Creates a synchronous capture named sync_capture at the current database. A synchronous capture with the same name must not exist.

    • Enables the synchronous capture. A synchronous capture cannot be disabled.

    • Associates the synchronous capture with an existing queue named capture_queue owned by strmadmin.

    • Creates a positive rule set for synchronous capture sync_capture. The rule set has a system-generated name.

    • Creates a rule that captures DML changes to the hr.employees table and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.

    • Prepares the hr.employees table for instantiation by automatically running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function for the table.

    • Creates a rule that captures DML changes to the hr.departments table and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.

    • Prepares the hr.departments table for instantiation by automatically running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function for the table.

  9. Configure a synchronous capture at the sync2.example.com database:

    1. In SQL*Plus, connect to the sync2.example.com database as the Oracle Streams administrator.

    2. Run the ADD_TABLE_RULES procedure to create the synchronous capture and add a rule to instruct it to capture changes to the hr.employees table:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name    => 'hr.employees',
          streams_type  => 'sync_capture',
          streams_name  => 'sync_capture',
          queue_name    => 'strmadmin.capture_queue');
      END;
      /
      
    3. Add an additional rule to the synchronous capture rule set:

      BEGIN 
        DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name    => 'hr.departments',
          streams_type  => 'sync_capture',
          streams_name  => 'sync_capture',
          queue_name    => 'strmadmin.capture_queue');
      END;
      /
      

    Step 8 describes the actions performed by these procedures at the current database.

  10. Set the instantiation SCN for the tables at the sync2.example.com database:

    1. In SQL*Plus, connect to the sync1.example.com database as the Oracle Streams administrator.

    2. Set the instantiation SCN for the hr.employees table at the sync2.example.com database:

      DECLARE
        iscn  NUMBER;    -- Variable to hold instantiation SCN value
      BEGIN
        iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
        DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@sync2.example.com(
          source_object_name    => 'hr.employees',
          source_database_name  => 'sync1.example.com',
          instantiation_scn     => iscn);
      END;
      /
      
    3. Set the instantiation SCN for the hr.departments table at the sync2.example.com database:

      DECLARE
        iscn  NUMBER;    -- Variable to hold instantiation SCN value
      BEGIN
        iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
        DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@sync2.example.com(
          source_object_name    => 'hr.departments',
          source_database_name  => 'sync1.example.com',
          instantiation_scn     => iscn);
      END;
      /
      

    An instantiation SCN is the lowest SCN for which an apply process can apply changes to a table. Before the apply process can apply changes to the shared tables at the sync2.example.com database, an instantiation SCN must be set for each table.
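    To confirm that the instantiation SCNs are set, you can query the DBA_APPLY_INSTANTIATED_OBJECTS data dictionary view at the sync2.example.com database. The source database name is typically stored in uppercase:

    SELECT SOURCE_OBJECT_OWNER, SOURCE_OBJECT_NAME, INSTANTIATION_SCN
      FROM DBA_APPLY_INSTANTIATED_OBJECTS
      WHERE SOURCE_DATABASE = 'SYNC1.EXAMPLE.COM';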

  11. Set the instantiation SCN for the tables at the sync1.example.com database:

    1. In SQL*Plus, connect to the sync2.example.com database as the Oracle Streams administrator.

    2. Set the instantiation SCN for the hr.employees table at the sync1.example.com database:

      DECLARE
        iscn  NUMBER;    -- Variable to hold instantiation SCN value
      BEGIN
        iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
        DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@sync1.example.com(
          source_object_name    => 'hr.employees',
          source_database_name  => 'sync2.example.com',
          instantiation_scn     => iscn);
      END;
      /
      
    3. Set the instantiation SCN for the hr.departments table at the sync1.example.com database:

      DECLARE
        iscn  NUMBER;    -- Variable to hold instantiation SCN value
      BEGIN
        iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
        DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@sync1.example.com(
          source_object_name    => 'hr.departments',
          source_database_name  => 'sync2.example.com',
          instantiation_scn     => iscn);
      END;
      /
      
  12. Start the apply process at each database:

    1. In SQL*Plus, connect to the sync1.example.com database as the Oracle Streams administrator.

    2. Start the apply process:

      BEGIN
        DBMS_APPLY_ADM.START_APPLY(
          apply_name => 'apply_emp_dep');
      END;
      /
      
    3. In SQL*Plus, connect to the sync2.example.com database as the Oracle Streams administrator.

    4. Start the apply process:

      BEGIN
        DBMS_APPLY_ADM.START_APPLY(
          apply_name => 'apply_emp_dep');
      END;
      /
      

    If you would rather start the apply processes using Enterprise Manager, then see "Starting and Stopping an Apply Process" for instructions.

  13. Configure latest time conflict resolution for the hr.departments and hr.employees tables at the sync1.example.com and sync2.example.com databases. See "Tutorial: Configuring Latest Time Conflict Resolution for a Table" for instructions.

A two-database replication environment with the following characteristics is configured:

  • Each database has a synchronous capture named sync_capture. The synchronous capture captures all DML changes to the hr.employees and hr.departments tables.

  • Each database has a queue named capture_queue. This queue is for the synchronous capture at the database.

  • Each database has an apply process named apply_emp_dep. The apply process applies all DML changes to the hr.employees and hr.departments tables.

  • Each database has a queue named apply_queue. This queue is for the apply process at the database.

  • Each database has a propagation named send_emp_dep. The propagation sends changes from the capture_queue in the local database to the apply_queue in the other database. The propagation sends all DML changes to the hr.employees and hr.departments tables.

  • Tags are used to avoid change cycling in the following way:

    • Each apply process uses the default apply tag. The default apply tag is the hexadecimal equivalent of '00' (double zero).

    • Each synchronous capture captures only changes made in sessions with a NULL tag. Therefore, neither synchronous capture captures the changes that are being applied by the local apply process, because the apply process makes its changes in sessions with a non-NULL tag. The synchronous capture rules instruct the synchronous capture not to capture these changes.

    See "About Tags for Avoiding Change Cycling" for more information about how the replication environment avoids change cycling.

To check the Oracle Streams replication configuration:

  1. At each database, complete the following steps to ensure that synchronous capture is configured:

    1. Start SQL*Plus and connect to the database as the Oracle Streams administrator.

      See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

    2. Query the ALL_SYNC_CAPTURE data dictionary view:

      SELECT CAPTURE_NAME FROM ALL_SYNC_CAPTURE;
      

      Ensure that a synchronous capture named sync_capture exists at each database.

  2. At each database, ensure that the propagation is enabled. To do so, follow the instructions in "Viewing Information About a Propagation", and check Status on the Propagation subpage.

  3. At each database, ensure that the apply process is enabled. To do so, follow the instructions in "Viewing Information About an Apply Process", and check Status on the Apply subpage.

To replicate changes:

  1. At one of the databases, make DML changes to the hr.employees table or hr.departments table.

  2. After some time has passed to allow for replication of the changes, use SQL*Plus to query the hr.employees or hr.departments table at the other database to view the changes.
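For example, an update like the following at one database should appear at the other database after it is replicated. The department_id value 30 is only an illustration; use any existing row.

    UPDATE hr.departments SET manager_id = 114 WHERE department_id = 30;
    COMMIT;

    -- At the other database, after allowing time for replication:
    SELECT manager_id FROM hr.departments WHERE department_id = 30;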

Tutorial: Configuring Latest Time Conflict Resolution for a Table

Conflict resolution automatically resolves conflicts in a replication environment. See "About Conflicts and Conflict Resolution" for more information about conflict resolution.

The most common way to resolve update conflicts is to keep the change with the most recent time stamp and discard the older change. With this method, when a conflict is detected during apply, the apply process applies the change if the time-stamp column for the change is more recent than the corresponding row in the table. If the time-stamp column in the table is more recent, then the apply process discards the change.

The example in this topic configures latest time conflict resolution for the hr.departments table by completing the following actions:

  • Adds a time column of the TIMESTAMP WITH TIME ZONE data type to the table

  • Configures a trigger to update the time column in a row with the current time when the row is changed

  • Adds supplemental logging for the columns in the table

  • Runs the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package to configure conflict resolution for the table

You can use the steps in this topic to configure conflict resolution for any table. To do so, substitute your schema name for hr and your table name for departments. Also, substitute the columns in your table for the columns in the hr.departments table when you run the SET_UPDATE_CONFLICT_HANDLER procedure.

To configure latest time conflict resolution for the hr.departments table:

  1. Add a time column to the table.

    1. In SQL*Plus, connect to the database as an administrative user, such as the Oracle Streams administrator or SYSTEM. Alternatively, you can connect as the user who owns the table to which the time column will be added.

      See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

    2. Use the ALTER TABLE SQL statement to add the time column to the table. In this example, run the following statement to add the time column to the hr.departments table.

      ALTER TABLE hr.departments ADD (time TIMESTAMP WITH TIME ZONE);
      
  2. Create a trigger to update the time column in each master table with the current time when a change occurs.

    Tip:

    Instead of using a trigger to update the time column, an application can populate the time column each time it modifies or inserts a row into a table.

    1. In Oracle Enterprise Manager, log in to the database as an administrative user, such as the Oracle Streams administrator or SYSTEM.

    2. Go to the Database Home page.

    3. Click Schema to open the Schema subpage.

    4. Click Triggers in the Programs section.

    5. On the Triggers page, click Create.

      The Create Trigger page appears, showing the General subpage.


    6. Enter the name of the trigger in the Name field. In this example, enter insert_departments_time.

    7. Enter the schema that owns the table in the Schema field. In this example, enter hr in the Schema field.

    8. Enter the following in the Trigger Body field:

      BEGIN
         -- Consider time synchronization problems. The previous update to this
         -- row might have originated from a site with a clock time ahead of
         -- the local clock time.
         IF :OLD.TIME IS NULL OR :OLD.TIME < SYSTIMESTAMP THEN
           :NEW.TIME := SYSTIMESTAMP;
         ELSE
           :NEW.TIME := :OLD.TIME + 1 / 86400;
         END IF;
      END;
      
    9. Click Event to open the Event subpage.

    10. Ensure that Table is selected in the Trigger On list.

    11. Enter the table name in the form schema.table in the Table (Schema.Table) field, or use the flashlight icon to find the database object. In this example, enter hr.departments.

    12. Ensure that Before is selected for Fire Trigger.

    13. Select Insert and Update of Columns for Event.

      The columns in the table appear.

    14. Select every column in the table except for the new time column.

    15. Click Advanced to open the Advanced subpage.

    16. Select Trigger for each row.

    17. Click OK to create the trigger.

    Note:

    You can also use the CREATE TRIGGER SQL statement to create a trigger.
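    For example, a statement like the following creates the same trigger from SQL*Plus. The trigger name and the column list match the values entered in Enterprise Manager in this example:

    CREATE OR REPLACE TRIGGER hr.insert_departments_time
      BEFORE INSERT OR UPDATE OF department_id, department_name,
        manager_id, location_id ON hr.departments
      FOR EACH ROW
    BEGIN
      IF :OLD.TIME IS NULL OR :OLD.TIME < SYSTIMESTAMP THEN
        :NEW.TIME := SYSTIMESTAMP;
      ELSE
        :NEW.TIME := :OLD.TIME + 1 / 86400;
      END IF;
    END;
    /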
  3. In SQL*Plus, connect to the database as the Oracle Streams administrator.

    See Oracle Database 2 Day DBA for more information about starting SQL*Plus.

  4. Add supplemental logging for the columns in the table:

    ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    

    Supplemental logging is required for conflict resolution during apply.

  5. Run the SET_UPDATE_CONFLICT_HANDLER procedure to configure latest time conflict resolution for the table.

    For example, run the following procedure to configure latest time conflict resolution for the hr.departments table:

    DECLARE
      cols  DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'department_id';
      cols(2) := 'department_name';
      cols(3) := 'manager_id';
      cols(4) := 'location_id';
      cols(5) := 'time';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name        =>  'hr.departments',
        method_name        =>  'MAXIMUM',
        resolution_column  =>  'time',
        column_list        =>  cols);
    END;
    /
    

    Include all of the columns in the table in the cols column list.
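    You can verify the configured handler by querying the DBA_APPLY_CONFLICT_COLUMNS data dictionary view:

    SELECT OBJECT_OWNER, OBJECT_NAME, METHOD_NAME, RESOLUTION_COLUMN
      FROM DBA_APPLY_CONFLICT_COLUMNS;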

  6. Repeat these steps for any tables that require conflict resolution in your replication environment. You might need to configure conflict resolution for the tables at several databases.

    If you are completing an example that configures or extends a replication environment, then configure latest time conflict resolution for the appropriate tables at each database in the environment.

If you were directed to this section from an example, then go back to the example now.