Oracle® Streams Replication Administrator's Guide
10g Release 1 (10.1)

Part Number B10728-01

7
Managing Capture, Propagation, and Apply

This chapter contains instructions for managing Streams capture processes, propagations, and Streams apply processes in a Streams replication environment. This chapter also includes instructions for managing Streams tags, and for performing database point-in-time recovery at a destination database in a Streams environment.

This chapter contains these topics:

Managing Capture for Streams Replication

The following sections describe management tasks for a capture process in a Streams replication environment:

You also may need to perform other management tasks.

See Also:

Oracle Streams Replication Administrator's Guide for more information about managing a capture process

Creating a Capture Process

A capture process typically starts the process of replicating a database change by capturing the change, converting the change into a logical change record (LCR), and enqueuing the change into a SYS.AnyData queue. From there, the LCR can be propagated to other databases and applied at these databases to complete the replication process.

You can create a capture process that captures changes to the local source database, or you can create a capture process that captures changes remotely at a downstream database. If a capture process runs on a downstream database, then archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these files at the downstream database.
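For illustration, a downstream capture process might be created with the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package. The following is a minimal sketch only; the queue name, capture process name, and source database name are hypothetical, and it assumes that redo transport from the source database and a database link to it are already configured:

```sql
-- Run at the downstream database. Assumes a database link to the
-- source database dbs1.net exists and archived redo log files from
-- dbs1.net are available at the downstream database.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.strep01_queue',
    capture_name      => 'strep01_ds_capture',
    source_database   => 'dbs1.net',
    use_database_link => true);
END;
/
```

After the capture process exists, rules can be added to it with procedures such as ADD_SCHEMA_RULES in the DBMS_STREAMS_ADM package.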

You can use any of the following procedures to create a local capture process:

The following is an example that runs the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to create a local capture process:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'capture',
    streams_name       => 'strep01_capture',
    queue_name         => 'strep01_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => NULL,
    inclusion_rule     => true);
END;
/

Running this procedure performs the following actions:

Managing Supplemental Logging in a Streams Replication Environment

Supplemental logging must be specified for certain columns at a source database for changes to the columns to be applied successfully at a destination database. The following sections illustrate how to manage supplemental logging at a source database:

Specifying Table Supplemental Logging Using Unconditional Log Groups

The following sections describe creating an unconditional log group:

Specifying an Unconditional Supplemental Log Group for Primary Key Column(s)

To specify an unconditional supplemental log group that only includes the primary key column(s) for a table, use an ALTER TABLE statement with the PRIMARY KEY option in the ADD SUPPLEMENTAL LOG DATA clause.

For example, the following statement adds the primary key column of the hr.regions table to an unconditional log group:

ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

The log group has a system-generated name.
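To see the name that the database generated for the log group, you can query the DBA_LOG_GROUPS data dictionary view. The query below matches the example above; the ALWAYS column indicates whether a group is unconditional or conditional:

```sql
SELECT log_group_name, log_group_type, always
  FROM dba_log_groups
  WHERE owner = 'HR' AND table_name = 'REGIONS';
```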

Specifying an Unconditional Supplemental Log Group for All Table Columns

To specify an unconditional supplemental log group that includes all of the columns in a table, use an ALTER TABLE statement with the ALL option in the ADD SUPPLEMENTAL LOG DATA clause.

For example, the following statement adds all of the columns in the hr.departments table to an unconditional log group:

ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

The log group has a system-generated name.

Specifying an Unconditional Supplemental Log Group That Includes Selected Columns

To specify an unconditional supplemental log group that contains columns that you select, use an ALTER TABLE statement with the ALWAYS specification in the ADD SUPPLEMENTAL LOG GROUP clause. These log groups may include key columns, if necessary.

For example, the following statement adds the department_id column and the manager_id column of the hr.departments table to an unconditional log group named log_group_dep_pk:

ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP log_group_dep_pk
  (department_id, manager_id) ALWAYS;

The ALWAYS specification makes this log group an unconditional log group.

Specifying Table Supplemental Logging Using Conditional Log Groups

The following sections describe creating a conditional log group:

Specifying a Conditional Log Group Using the ADD SUPPLEMENTAL LOG DATA Clause

You can use the following options in the ADD SUPPLEMENTAL LOG DATA clause of an ALTER TABLE statement:

If you specify more than one option in a single ALTER TABLE statement, then a separate conditional log group is created for each option.

For example, the following statement creates two conditional log groups:

ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA 
  (UNIQUE, FOREIGN KEY) COLUMNS;

One conditional log group includes the unique key columns for the table, and the other conditional log group includes the foreign key columns for the table. Both log groups have system-generated names.

Specifying a Conditional Log Group Using the ADD SUPPLEMENTAL LOG GROUP Clause

To specify a conditional supplemental log group that includes any columns you choose to add, you can use the ADD SUPPLEMENTAL LOG GROUP clause in the ALTER TABLE statement. To make the log group conditional, do not include the ALWAYS specification.

For example, suppose the min_salary and max_salary columns in the hr.jobs table are included in a column list for conflict resolution at a destination database. The following statement adds the min_salary and max_salary columns to a conditional log group named log_group_jobs_cr:

ALTER TABLE hr.jobs ADD SUPPLEMENTAL LOG GROUP log_group_jobs_cr 
  (min_salary, max_salary);

Dropping a Supplemental Log Group

To drop a conditional or unconditional supplemental log group, use the DROP SUPPLEMENTAL LOG GROUP clause in the ALTER TABLE statement. For example, to drop a supplemental log group named log_group_jobs_cr, run the following statement:

ALTER TABLE hr.jobs DROP SUPPLEMENTAL LOG GROUP log_group_jobs_cr;

Specifying Database Supplemental Logging of Key Columns

You also have the option of specifying supplemental logging for all primary key, unique key, and foreign key columns in a source database. You may choose this option if you configure a capture process to capture changes to an entire database. To specify supplemental logging for all primary key, unique key, and foreign key columns in a source database, issue the following SQL statement:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA 
   (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;

If your primary, unique, and foreign key columns are the same at all source and destination databases, then running this command at the source database provides the supplemental logging needed for primary, unique, and foreign key columns at all destination databases. When you specify the FOREIGN KEY option, all columns of a row's foreign key are placed in the redo log file if any column belonging to the foreign key is modified.

You may omit one or more of these options. For example, if you do not want to supplementally log all of the foreign key columns in the database, then you can omit the FOREIGN KEY option, as in the following example:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA 
   (PRIMARY KEY, UNIQUE) COLUMNS;

In addition to PRIMARY KEY, UNIQUE, and FOREIGN KEY, you also can use the ALL option. The ALL option specifies that, when a row is changed, all the columns of that row (except for LOB, LONG, LONG RAW, and user-defined type columns) are placed in the redo log file.

Supplemental logging statements are cumulative. If you issue two consecutive ALTER DATABASE ADD SUPPLEMENTAL LOG DATA commands, each with a different identification key, then both keys are supplementally logged.
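For example, issuing the following two statements one after the other results in supplemental logging of both primary key and unique key columns:

```sql
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```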

Dropping Database Supplemental Logging of Key Columns

To drop database supplemental logging of key columns in a source database, issue the ALTER DATABASE DROP SUPPLEMENTAL LOG DATA statement. For example, to drop database supplemental logging for all primary key, unique key, and foreign key columns, issue the following SQL statement:

ALTER DATABASE DROP SUPPLEMENTAL LOG DATA 
  (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;


Note:

Dropping database supplemental logging of key columns does not affect any existing table-level supplemental log groups.


Managing Staging and Propagation for Streams Replication

The following sections describe management tasks for LCR staging and propagation in a Streams replication environment:

You also may need to perform other management tasks.

See Also:

Oracle Streams Replication Administrator's Guide for more information about managing event staging and propagation

Creating a SYS.AnyData Queue to Stage LCRs

In a Streams replication environment, SYS.AnyData queues stage LCRs that encapsulate captured changes. These queues may be used by capture processes, propagations, and apply processes as an LCR goes through a stream from a source database to a destination database.

You use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create a SYS.AnyData queue. This procedure enables you to specify the following for the SYS.AnyData queue it creates:

This procedure creates a queue that is both a secure queue and a transactional queue and starts the newly created queue.

For example, to create a SYS.AnyData queue named strep01_queue in the strmadmin schema with a queue table named strep01_queue_table, run the following procedure:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table  => 'strmadmin.strep01_queue_table',
    queue_name   => 'strmadmin.strep01_queue');
END;
/

You also can use procedures in the DBMS_AQADM package to create a SYS.AnyData queue.
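For example, the following is a sketch of creating an equivalent SYS.AnyData queue with the DBMS_AQADM package. Unlike SET_UP_QUEUE, you create the queue table, create the queue, and start the queue in separate steps (the names shown match the earlier example and are illustrative):

```sql
BEGIN
  -- Create a queue table with a SYS.AnyData payload
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.strep01_queue_table',
    queue_payload_type => 'SYS.AnyData',
    multiple_consumers => true);
  -- Create the queue in that queue table
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'strmadmin.strep01_queue',
    queue_table => 'strmadmin.strep01_queue_table');
  -- Start the queue
  DBMS_AQADM.START_QUEUE(
    queue_name => 'strmadmin.strep01_queue');
END;
/
```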

See Also:

Oracle Streams Concepts and Administration for information about managing SYS.AnyData queues

Creating a Propagation That Propagates LCRs

To replicate LCRs between databases, you must propagate the LCRs from the database where they were first staged in a queue to the database where they are applied. To accomplish this goal, you may use any number of separate propagations.

You can use any of the following procedures to create a propagation:

The following tasks must be completed before you create a propagation:

The following is an example that runs the ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to create a propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name             => 'hr',
    streams_name            => 'strep01_propagation',
    source_queue_name       => 'strmadmin.strep01_queue',
    destination_queue_name  => 'strmadmin.strep02_queue@rep2.net',
    include_dml             => true,
    include_ddl             => true,
    include_tagged_lcr      => false,
    source_database         => 'rep1.net',
    inclusion_rule          => true);
END;
/

Running this procedure performs the following actions:

Managing Apply for Streams Replication

The following sections describe management tasks for an apply process in a Streams replication environment:

You also may need to perform other management tasks.

See Also:

Oracle Streams Concepts and Administration for more information about managing an apply process

Creating an Apply Process That Applies LCRs

When an apply process applies an LCR or sends an LCR to an apply handler that executes it, the replication process for the LCR is complete. That is, the database change that is encapsulated in the LCR is shared with the database where the LCR is applied.

You can use any of the following procedures to create an apply process that applies LCRs:

Before you create an apply process, create a SYS.AnyData queue to associate with the apply process, if one does not exist.

The following is an example that runs the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to create an apply process:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'apply',
    streams_name       => 'strep01_apply',
    queue_name         => 'strep02_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'rep1.net',
    inclusion_rule     => true);
END;
/

Running this procedure performs the following actions:

Managing the Substitute Key Columns for a Table

This section contains instructions for setting and removing the substitute key columns for a table.

See Also:

Setting Substitute Key Columns for a Table

When an apply process applies changes to a table, substitute key columns can either replace the primary key columns for a table that has a primary key or act as the primary key columns for a table that does not have a primary key. Set the substitute key columns for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. This setting applies to all of the apply processes that apply local changes to the database.

For example, to set the substitute key columns for the hr.employees table to the first_name, last_name, and hire_date columns, replacing the employee_id column, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'hr.employees',
    column_list         => 'first_name,last_name,hire_date');
END;
/

Note:
  • You must specify an unconditional supplemental log group at the source database for all of the columns specified as substitute key columns in the column_list or column_table parameter at the destination database. In this example, you would specify an unconditional supplemental log group including the first_name, last_name, and hire_date columns in the hr.employees table.
  • If an apply process applies changes to a remote non-Oracle database, then it may use different substitute key columns for the same table. You can run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package to specify substitute key columns for changes that will be applied to a remote non-Oracle database by setting the apply_database_link parameter to a non-NULL value.

See Also:

Removing the Substitute Key Columns for a Table

You remove the substitute key columns for a table by specifying NULL for the column_list or column_table parameter in the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. If the table has a primary key, then the table's primary key is used by any apply process for local changes to the database after you remove the substitute primary key.

For example, to remove the substitute key columns for the hr.employees table, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name  => 'hr.employees',
    column_list  => NULL);
END;
/

Managing a DML Handler

This section contains instructions for creating, setting, and removing a DML handler.

See Also:

"Apply Processing Options for LCRs"

Creating a DML Handler

A DML handler must have the following signature:

PROCEDURE user_procedure (
  parameter_name  IN  SYS.AnyData);

Here, user_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is a SYS.AnyData encapsulation of a row LCR.

The following restrictions apply to the user procedure:

A DML handler can be used for any customized processing of row LCRs. For example, the handler may modify an LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a row LCR in a DML handler, the apply process applies the LCR without calling the DML handler again.

You may also use a DML handler for recording the history of DML changes. For example, a DML handler may insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure. To create such a DML handler, first create a table to hold the history information:

CREATE TABLE strmadmin.history_row_lcrs(
  timestamp             DATE,
  source_database_name  VARCHAR2(128),
  command_type          VARCHAR2(30),
  object_owner          VARCHAR2(32),
  object_name           VARCHAR2(32),
  tag                   RAW(10),
  transaction_id        VARCHAR2(10),
  scn                   NUMBER,
  commit_scn            NUMBER,
  old_values            SYS.LCR$_ROW_LIST,
  new_values            SYS.LCR$_ROW_LIST)
    NESTED TABLE old_values STORE AS old_values_ntab
    NESTED TABLE new_values STORE AS new_values_ntab;

Create the procedure that inserts the information about the row LCR into the history_row_lcrs table and executes the row LCR:

CREATE OR REPLACE PROCEDURE history_dml(in_any IN SYS.ANYDATA)  
 IS
  lcr   SYS.LCR$_ROW_RECORD;
  rc    PLS_INTEGER;
 BEGIN
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  -- Insert information about the LCR into the history_row_lcrs table
  INSERT INTO strmadmin.history_row_lcrs VALUES 
   (SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),   
    lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_TAG(), 
    lcr.GET_TRANSACTION_ID(), lcr.GET_SCN(), lcr.GET_COMMIT_SCN(),
    lcr.GET_VALUES('old'), lcr.GET_VALUES('new', 'n'));
  --  Apply row LCR
  lcr.EXECUTE(true);
END;
/

Note:

You must specify an unconditional supplemental log group at the source database for any columns needed by a DML handler at the destination database. This example DML handler does not require any additional supplemental logging because it simply records information about the row LCR and does not manipulate the row LCR in any other way.


See Also:

Setting a DML Handler

A DML handler processes each row LCR dequeued by any apply process that contains a specific operation on a specific table. You can specify multiple DML handlers on the same table, to handle different operations on the table. All apply processes that apply changes to the specified table in the local database use the specified DML handler.

Set the DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DML handler for UPDATE operations on the hr.locations table. Therefore, when any apply process that applies changes locally dequeues a row LCR containing an UPDATE operation on the hr.locations table, the apply process sends the row LCR to the history_dml PL/SQL procedure in the strmadmin schema for processing. The apply process does not apply a row LCR containing such a change directly.

In this example, the apply_name parameter is set to NULL. Therefore, the DML handler is a general DML handler that is used by all of the apply processes in the database.

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.locations',
    object_type         => 'TABLE',
    operation_name      => 'UPDATE',
    error_handler       => false,
    user_procedure      => 'strmadmin.history_dml',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/


Note:

If an apply process applies changes to a remote non-Oracle database, then it may use a different DML handler for the same table. You can run the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package to specify a DML handler for changes that will be applied to a remote non-Oracle database by setting the apply_database_link parameter to a non-NULL value.


See Also:

Unsetting a DML Handler

You unset a DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. When you run that procedure, set the user_procedure parameter to NULL for a specific operation on a specific table. After the DML handler is unset, any apply process that applies changes locally will apply a row LCR containing such a change directly.

For example, the following procedure unsets the DML handler for UPDATE operations on the hr.locations table:

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.locations',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => false,
    user_procedure => NULL,
    apply_name     => NULL);
END;
/

Managing the DDL Handler for an Apply Process

This section contains instructions for creating, specifying, and removing the DDL handler for an apply process.


Note:

All applied DDL LCRs commit automatically. Therefore, if a DDL handler calls the EXECUTE member procedure of a DDL LCR, then a commit is performed automatically.


See Also:

Creating a DDL Handler for an Apply Process

A DDL handler must have the following signature:

PROCEDURE handler_procedure (
  parameter_name  IN  SYS.AnyData);

Here, handler_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is a SYS.AnyData encapsulation of a DDL LCR.

A DDL handler can be used for any customized processing of DDL LCRs. For example, the handler may modify the LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a DDL LCR in a DDL handler, the apply process applies the LCR without calling the DDL handler again.

You may also use a DDL handler to record the history of DDL changes. For example, a DDL handler may insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure.

To create such a DDL handler, first create a table to hold the history information:

CREATE TABLE strmadmin.history_ddl_lcrs(
  timestamp             DATE,
  source_database_name  VARCHAR2(128),
  command_type          VARCHAR2(30),
  object_owner          VARCHAR2(32),
  object_name           VARCHAR2(32),
  object_type           VARCHAR2(18),
  ddl_text              CLOB,
  logon_user            VARCHAR2(32),
  current_schema        VARCHAR2(32),
  base_table_owner      VARCHAR2(32),
  base_table_name       VARCHAR2(32),
  tag                   RAW(10),
  transaction_id        VARCHAR2(10),
  scn                   NUMBER);

Create the procedure that inserts the information about the DDL LCR into the history_ddl_lcrs table and executes the DDL LCR:

CREATE OR REPLACE PROCEDURE history_ddl(in_any IN SYS.ANYDATA)  
 IS
   lcr       SYS.LCR$_DDL_RECORD;
   rc        PLS_INTEGER;
   ddl_text  CLOB;
 BEGIN
   -- Access the LCR
   rc := in_any.GETOBJECT(lcr);
   DBMS_LOB.CREATETEMPORARY(ddl_text, true);
   lcr.GET_DDL_TEXT(ddl_text);
   --  Insert DDL LCR information into history_ddl_lcrs table
   INSERT INTO strmadmin.history_ddl_lcrs VALUES( 
     SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(), 
     lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(), 
     ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(), 
     lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(), 
     lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
   --  Apply DDL LCR
   lcr.EXECUTE();
   -- Free temporary LOB space
   DBMS_LOB.FREETEMPORARY(ddl_text);
END;
/

Setting the DDL Handler for an Apply Process

A DDL handler processes all DDL LCRs dequeued by an apply process. Set the DDL handler for an apply process using the ddl_handler parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DDL handler for an apply process named strep01_apply to the history_ddl procedure in the strmadmin schema.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'strep01_apply',
    ddl_handler => 'strmadmin.history_ddl');
END;
/

Removing the DDL Handler for an Apply Process

A DDL handler processes all DDL LCRs dequeued by an apply process. You remove the DDL handler for an apply process by setting the remove_ddl_handler parameter to true in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure removes the DDL handler from an apply process named strep01_apply.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name         => 'strep01_apply',
    remove_ddl_handler => true);
END;
/

Managing Streams Conflict Detection and Resolution

This section describes the following tasks:

Setting an Update Conflict Handler

Set an update conflict handler using the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. You can use one of the following prebuilt methods when you create an update conflict resolution handler:

For example, suppose a Streams environment captures changes to the hr.jobs table at dbs1.net and propagates these changes to the dbs2.net destination database, where they are applied. In this environment, applications can perform DML changes on the hr.jobs table at both databases, but, if there is a conflict for a particular DML change, then the change at the dbs1.net database should always overwrite the change at the dbs2.net database. In this environment, you can accomplish this goal by specifying an OVERWRITE handler at the dbs2.net database.

To specify an update conflict handler for the hr.jobs table in the hr schema at the dbs2.net database, run the following procedure at dbs2.net:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => 'OVERWRITE',
    resolution_column => 'job_title',
    column_list       => cols);
END;
/

All apply processes running on a database that apply changes to the specified table locally use the specified update conflict handler.


Note:
  • The resolution_column is not used for OVERWRITE and DISCARD methods, but one of the columns in the column_list still must be specified.
  • You must specify a conditional supplemental log group at the source database for all of the columns in the column_list at the destination database. In this example, you would specify a conditional supplemental log group including the job_title, min_salary, and max_salary columns in the hr.jobs table at the dbs1.net database.
  • Prebuilt update conflict handlers do not support LOB, LONG, LONG RAW, and user-defined type columns. Therefore, you should not include these types of columns in the column_list parameter when running the procedure SET_UPDATE_CONFLICT_HANDLER.
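To verify the handler configuration, you can query the DBA_APPLY_CONFLICT_COLUMNS data dictionary view, which lists the method, resolution column, and column list for each update conflict handler:

```sql
SELECT object_owner, object_name, method_name,
       resolution_column, column_name
  FROM dba_apply_conflict_columns
  WHERE object_owner = 'HR' AND object_name = 'JOBS';
```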

See Also:

Modifying an Existing Update Conflict Handler

You can modify an existing update conflict handler by running the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. To update an existing conflict handler, specify the same table and resolution column as the existing conflict handler.

To modify the update conflict handler created in "Setting an Update Conflict Handler", you specify the hr.jobs table and the job_title column as the resolution column. You can modify this update conflict handler by specifying a different type of prebuilt method or a different column list, or both. However, if you want to change the resolution column for an update conflict handler, then you must remove and re-create the handler.

For example, suppose the environment changes, and you want changes from dbs1.net to be discarded in the event of a conflict, whereas previously changes from dbs1.net overwrote changes at dbs2.net. You can accomplish this goal by specifying a DISCARD handler at the dbs2.net database.

To modify the existing update conflict handler for the hr.jobs table in the hr schema at the dbs2.net database, run the following procedure:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => 'DISCARD',
    resolution_column => 'job_title',
    column_list       => cols);
END;
/

Removing an Existing Update Conflict Handler

You can remove an existing update conflict handler by running the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. To remove an existing conflict handler, specify NULL for the method, and specify the same table, column list, and resolution column as the existing conflict handler.

For example, suppose you want to remove the update conflict handler created in "Setting an Update Conflict Handler" and then modified in "Modifying an Existing Update Conflict Handler". To remove this update conflict handler, run the following procedure:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => NULL,
    resolution_column => 'job_title',
    column_list       => cols);
END;
/

Stopping Conflict Detection for Nonkey Columns

You can stop conflict detection for nonkey columns using the COMPARE_OLD_VALUES procedure in the DBMS_APPLY_ADM package.

For example, suppose you configure a time column for conflict resolution for the hr.employees table, as described in "MAXIMUM". In this case, you may decide to stop conflict detection for the other nonkey columns in the table. After adding the time column and creating the trigger as described in that section, add the columns in the hr.employees table to the column list for an update conflict handler:

DECLARE
  cols  DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1)  := 'first_name';
  cols(2)  := 'last_name';
  cols(3)  := 'email';
  cols(4)  := 'phone_number';
  cols(5)  := 'hire_date';
  cols(6)  := 'job_id';
  cols(7)  := 'salary';
  cols(8)  := 'commission_pct';
  cols(9)  := 'manager_id';
  cols(10) := 'department_id';
  cols(11) := 'time';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'time',
    column_list       => cols);
END;
/

This example does not include the primary key for the table in the column list because it assumes that the primary key is never updated. However, other key columns are included in the column list.

To stop conflict detection for all nonkey columns in the table for both UPDATE and DELETE operations at a destination database, run the following procedure:

DECLARE
  cols DBMS_UTILITY.LNAME_ARRAY;
BEGIN
  cols(1) := 'first_name';
  cols(2) := 'last_name';
  cols(3) := 'email';
  cols(4) := 'phone_number';
  cols(5) := 'hire_date';
  cols(6) := 'job_id';
  cols(7) := 'salary';
  cols(8) := 'commission_pct';
  DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name  => 'hr.employees',
    column_table => cols,
    operation    => '*',
    compare      => false);
END;
/

The asterisk (*) specified for the operation parameter means that conflict detection is stopped for both UPDATE and DELETE operations. After you run this procedure, all apply processes running on the database that apply changes to the specified table locally do not detect conflicts on the specified columns. Therefore, in this example, the time column is the only column used for conflict detection.
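You can confirm the conflict detection settings for the table's columns by querying the DBA_APPLY_TABLE_COLUMNS data dictionary view:

```sql
SELECT column_name, compare_old_on_update, compare_old_on_delete
  FROM dba_apply_table_columns
  WHERE object_owner = 'HR' AND object_name = 'EMPLOYEES';
```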


Note:

The example in this section sets an update conflict handler before stopping conflict detection for nonkey columns. However, an update conflict handler is not required before you stop conflict detection for nonkey columns.


See Also:

Replicating and Maintaining Tablespaces Using Streams

You can use the MAINTAIN_SIMPLE_TABLESPACE procedure to configure Streams replication for a simple tablespace, and you can use the MAINTAIN_TABLESPACES procedure to configure Streams replication for a set of self-contained tablespaces. Both of these procedures are in the DBMS_STREAMS_ADM package. These procedures use transportable tablespaces, Data Pump, the DBMS_STREAMS_TABLESPACE_ADM package, and the DBMS_FILE_TRANSFER package to configure the environment.

A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, then the tablespace is not self-contained. A simple tablespace is a self-contained tablespace that uses only one datafile. When there is more than one tablespace, a self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces.

These procedures set up either a single source Streams configuration with the local database as the source database, or a bi-directional Streams configuration with both databases acting as source and destination databases. The bi_directional parameter for each procedure controls whether the Streams configuration is single source or bi-directional. If bi_directional is false, then a capture process at the local database captures DML changes to the tables in the specified tablespace or tablespace set, a propagation propagates these changes to the destination database, and an apply process at the destination database applies these changes. If bi_directional is true, then each database captures changes and propagates them to the other database, and each database applies changes from the other database.

These procedures cannot be used to configure multidirectional replication, where changes can be cycled back to a source database by a third database in the environment. For example, these procedures cannot be used to configure a Streams replication environment with three databases where each database shares changes with the other two databases in the environment. If one of these procedures were used to configure such a three-way replication environment, then changes made at a source database would be cycled back to that same source database. In a valid three-way replication environment, a particular change is made only once at each database.

These procedures do not configure the Streams environment to maintain DDL changes to the tablespace or to the database objects in the tablespace. For example, the Streams environment is not configured to replicate ALTER TABLESPACE statements on the tablespace, nor is it configured to replicate ALTER TABLE statements on tables in the tablespace. You can configure the Streams environment to maintain DDL changes manually.
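One way to maintain DDL changes manually is to add DDL rules to the capture and apply rule sets using a procedure such as ADD_SCHEMA_RULES in the DBMS_STREAMS_ADM package. The following is a sketch only; the schema, capture process, and queue names are assumptions for illustration and are not created by the tablespace procedures:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'hr',                          -- hypothetical schema
    streams_type   => 'capture',
    streams_name   => 'capture_ddl',                 -- hypothetical capture process
    queue_name     => 'strmadmin.streams_queue',     -- hypothetical queue
    include_dml    => false,
    include_ddl    => true,                          -- add a DDL rule only
    inclusion_rule => true);
END;
/
```

A corresponding DDL rule would also be needed for the propagation and the apply process at the other database.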

The example in this section uses the MAINTAIN_TABLESPACES procedure to configure and maintain a bi-directional Streams replication environment. This example makes the following assumptions:

This example configures the bi-directional Streams replication environment directly, but you also have the option of generating a script. You may modify the script and run it to configure the environment.

Complete the following steps to configure and maintain the bi-directional Streams replication environment:

  1. Configure a Streams administrator at both sts1.net and sts2.net if they do not exist. See Oracle Streams Concepts and Administration for instructions. This example assumes that the username of the Streams administrator is strmadmin at both databases.
  2. Create a database link from the sts1.net database to the sts2.net database:
    CONNECT strmadmin/strmadminpw@sts1.net
    
    CREATE DATABASE LINK sts2.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
       USING 'sts2.net';
    
    
  3. Create a database link from the sts2.net database to the sts1.net database:
    CONNECT strmadmin/strmadminpw@sts2.net
    
    CREATE DATABASE LINK sts1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
       USING 'sts1.net';
    
    
  4. While connected as the Streams administrator to the database that contains the tablespace set, create a directory object for the directory that contains the datafiles for the tablespaces in the tablespace set. For example, the following statement creates a directory object named tbs_directory that corresponds to the /orc/dbs directory:
    CONNECT strmadmin/strmadminpw@sts1.net
    
    CREATE DIRECTORY tbs_directory AS '/orc/dbs';
    
    

    If the datafiles are in multiple directories, then a directory object must exist for each of these directories, and the user who runs the MAINTAIN_TABLESPACES procedure in Step 7 must have READ privilege to these directory objects. In this example, the Streams administrator has this privilege because this user creates the directory object.

  5. While connected as the Streams administrator to the database that contains the tablespace set, create a directory object that will hold the generated Data Pump Export dump file and the datafiles that comprise the cloned tablespace set. For example, the following statement creates a directory object named source_directory that corresponds to the /usr/db_files directory:
    CONNECT strmadmin/strmadminpw@sts1.net
    
    CREATE DIRECTORY source_directory AS '/usr/db_files';
    
    
  6. While connected as the Streams administrator to the database that does not contain the tablespace set, create a directory object that will hold the transferred Data Pump Export dump file and the datafiles. For example, the following statement creates a directory object named dest_directory that corresponds to the /usr/trans_files directory:
    CONNECT strmadmin/strmadminpw@sts2.net
    
    CREATE DIRECTORY dest_directory AS '/usr/trans_files';
    
    
  7. While connected as the Streams administrator to the database that contains the tablespace set, run the MAINTAIN_TABLESPACES procedure:
    CONNECT strmadmin/strmadminpw@sts1.net
    
    DECLARE
      t_names  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      -- Tablespace names
      t_names(1) := 'TBS1';
      t_names(2) := 'TBS2';
      DBMS_STREAMS_ADM.MAINTAIN_TABLESPACES(
         tablespace_names             => t_names,
         source_directory_object      => 'SOURCE_DIRECTORY',
         destination_directory_object => 'DEST_DIRECTORY',
         destination_database         => 'STS2.NET',
         bi_directional               => true);
    END;
    /
    
    

    When this procedure completes, the Streams bi-directional replication environment is configured. The procedure uses default names for the SYS.AnyData queues and capture processes it creates, and the procedure produces system-generated names for the propagations and apply processes it creates. You can specify different names by using additional parameters available in the MAINTAIN_TABLESPACES procedure. This procedure also starts the queues, capture processes, and apply processes, and it enables the propagation schedules for the propagations.
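You can display the names of the components that the procedure created by querying the Streams data dictionary views at each database. For example:

```sql
-- Capture processes and the queues and rule sets they use
SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME FROM DBA_CAPTURE;

-- Propagations and their source and destination queues
SELECT PROPAGATION_NAME, SOURCE_QUEUE_NAME, DESTINATION_QUEUE_NAME
  FROM DBA_PROPAGATION;

-- Apply processes and their status
SELECT APPLY_NAME, QUEUE_NAME, STATUS FROM DBA_APPLY;
```

Run these queries as the Streams administrator at both sts1.net and sts2.net to see the components at each database.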

  8. Configure conflict resolution for the objects in the tablespace set if necessary.

    Typically, conflicts are possible in a bi-directional replication environment. If conflicts are possible in the environment created by the MAINTAIN_TABLESPACES procedure, then configure conflict resolution before you allow users to make changes to the objects in the tablespace set.
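For example, the following sketch configures a prebuilt MAXIMUM update conflict handler at one of the databases. The table name, column list, and resolution column are assumptions for illustration; they must match objects that actually exist in the tablespace set:

```sql
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';           -- hypothetical columns in a replicated table
  cols(2) := 'commission_pct';
  cols(3) := 'time';             -- hypothetical timestamp resolution column
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',  -- hypothetical table
    method_name       => 'MAXIMUM',       -- keep the row whose resolution
    resolution_column => 'time',          --   column has the greater value
    column_list       => cols);
END;
/
```

In a bi-directional environment, configure an equivalent handler at both databases so that conflicts resolve consistently.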

    See Also:

Managing Streams Tags

You can set or get the value of the tags generated by the current session or by an apply process. The following sections describe how to set and get tag values.

Managing Streams Tags for the Current Session

This section contains instructions for setting and getting the tag for the current session.

Setting the Tag Values Generated by the Current Session

You can set the tag for all redo entries generated by the current session using the SET_TAG procedure in the DBMS_STREAMS package. For example, to set the tag to the hexadecimal value of '1D' in the current session, run the following procedure:

BEGIN
   DBMS_STREAMS.SET_TAG(
      tag  =>  HEXTORAW('1D'));
END;
/

After running this procedure, each redo entry generated by DML or DDL statements in the current session will have a tag value of 1D. Running this procedure affects only the current session.
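To stop generating a non-NULL tag in the current session, set the tag back to NULL, which is the default:

```sql
BEGIN
  DBMS_STREAMS.SET_TAG(
    tag => NULL);  -- redo entries generated after this call have a NULL tag
END;
/
```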

Getting the Tag Value for the Current Session

You can get the tag for all redo entries generated by the current session using the GET_TAG procedure in the DBMS_STREAMS package. For example, to get the hexadecimal value of the tags generated in the redo entries for the current session, run the following procedure:

SET SERVEROUTPUT ON
DECLARE
   raw_tag RAW(2048);
BEGIN
   raw_tag := DBMS_STREAMS.GET_TAG();
   DBMS_OUTPUT.PUT_LINE('Tag Value = ' || RAWTOHEX(raw_tag));
END;
/

You also can display the tag value for the current session by querying the DUAL table:

SELECT DBMS_STREAMS.GET_TAG FROM DUAL;

Managing Streams Tags for an Apply Process

This section contains instructions for setting and removing the tag for an apply process.

See Also:

Setting the Tag Values Generated by an Apply Process

An apply process generates redo entries when it applies changes to a database or invokes handlers. You can set the default tag for all redo entries generated by an apply process when you create the apply process using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, or when you alter an existing apply process using the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. In both of these procedures, set the apply_tag parameter to the value you want to specify for the tags generated by the apply process.
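For example, the following sketch creates an apply process with a specific tag value. The queue name and rule set name are assumptions for illustration:

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',  -- hypothetical queue
    apply_name     => 'strep01_apply',
    rule_set_name  => 'strmadmin.strep01_rules',  -- hypothetical rule set
    apply_tag      => HEXTORAW('7'),              -- tag for generated redo entries
    apply_captured => true);                      -- apply captured LCRs
END;
/
```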

For example, to set the value of the tags generated in the redo log by an existing apply process named strep01_apply to the hexadecimal value of '7', run the following procedure:

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
     apply_name  =>  'strep01_apply',
     apply_tag   =>  HEXTORAW('7'));
END;
/

After running this procedure, each redo entry generated by the apply process will have a tag value of 7.

Removing the Apply Tag for an Apply Process

You remove the apply tag for an apply process by setting the remove_apply_tag parameter to true in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. Removing the apply tag means that each redo entry generated by the apply process has a NULL tag. For example, the following procedure removes the apply tag from an apply process named strep01_apply.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name       => 'strep01_apply',
    remove_apply_tag => true);
END;
/

Changing the DBID or Global Name of a Source Database

Typically, database administrators change the DBID and global name of a database when it is a clone of another database. You can view the DBID of a database by querying the DBID column in the V$DATABASE dynamic performance view, and you can view the global name of a database by querying the GLOBAL_NAME static data dictionary view. When you change the DBID or global name of a source database, any existing capture processes that capture changes originating at this source database become unusable. The capture processes may be local capture processes or downstream capture processes that capture changes originating at the source database. Also, any existing apply processes that apply changes from the source database become unusable.
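For example, the following queries display the DBID and the global name of the local database:

```sql
SELECT DBID FROM V$DATABASE;

SELECT GLOBAL_NAME FROM GLOBAL_NAME;
```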

If a capture process is capturing changes generated by a database for which you have changed the DBID or global name, then complete the following steps:

  1. Shut down the source database.
  2. Restart the source database with RESTRICTED SESSION enabled using STARTUP RESTRICT.
  3. Drop the capture process using the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package. The capture process may be a local capture process at the source database or a downstream capture process at a remote database.
  4. At the source database, run the ALTER SYSTEM SWITCH LOGFILE statement on the database.
  5. If any changes have been captured from the source database, then manually resynchronize the data at all destination databases that apply changes originating at this source database. If the database never captured any changes, then this step is not necessary.
  6. Modify any rules that use the source database name as a condition. The source database name should be changed to the new global name of the source database where appropriate in these rules. You may need to modify capture process rules, propagation rules, and apply process rules at the local database and at remote databases in the environment.
  7. Drop the apply processes that apply changes from the capture process that you dropped in Step 3. Use the DROP_APPLY procedure in the DBMS_APPLY_ADM package to drop an apply process.
  8. At each destination database that applies changes from the source database, re-create the apply processes you dropped in Step 7. You may want to associate each apply process with the same rule sets it used before it was dropped. See "Creating an Apply Process That Applies LCRs" for instructions.
  9. Re-create the capture process you dropped in Step 3, if necessary. You may want to associate the capture process with the same rule sets used by the capture process you dropped in Step 3. See "Creating a Capture Process" for instructions.
  10. At the source database, prepare database objects whose changes will be captured by the re-created capture process for instantiation. See "Preparing Database Objects for Instantiation at a Source Database".
  11. At each destination database that applies changes from the source database, set the instantiation SCN for all database objects to which changes from the source database will be applied. See "Setting Instantiation SCNs at a Destination Database" for instructions.
  12. Disable the restricted session using the ALTER SYSTEM DISABLE RESTRICTED SESSION statement.
  13. At each destination database that applies changes from the source database, start the apply processes you created in Step 8.
  14. At the source database, start the capture process you created in Step 9.
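Several of these steps can be sketched as follows for a hypothetical capture process named strm01_capture. The names are assumptions for illustration, and the full sequence of steps above still applies:

```sql
-- Step 2: restart the source database in restricted mode
-- (run from SQL*Plus after SHUTDOWN IMMEDIATE)
--   STARTUP RESTRICT

-- Step 3: drop the capture process
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name => 'strm01_capture');  -- hypothetical capture process
END;
/

-- Step 4: switch the log file at the source database
ALTER SYSTEM SWITCH LOGFILE;

-- Step 12: disable the restricted session
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```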

    See Also:

    Oracle Database Utilities for more information about changing the DBID of a database using the DBNEWID utility

Resynchronizing a Source Database in a Multiple Source Environment

A multiple source environment is one in which there is more than one source database for any of the shared data. If a source database in a multiple source environment cannot be recovered to the current point in time, then you can use the method described in this section to resynchronize the source database with the other source databases in the environment. Some reasons why a database cannot be recovered to the current point in time include corrupted archived redo logs or the media failure of an online redo log group.

For example, a bidirectional Streams environment is one in which exactly two databases share the replicated database objects and data. In this example, assume that database A is the database that must be resynchronized and that database B is the other source database in the environment. To resynchronize database A in this bidirectional Streams environment, complete the following steps:

  1. Verify that database B has applied all of the changes sent from database A. You can query the V$BUFFERED_SUBSCRIBERS dynamic performance view at database B to determine whether the apply process that applies these changes has any unapplied changes in its queue. See "Viewing the Propagations Dequeuing LCRs From Each Buffered Queue" for an example of such a query. Do not continue until all of these changes have been applied.
  2. Remove the Streams configuration from database A by running the REMOVE_STREAMS_CONFIGURATION procedure in the DBMS_STREAMS_ADM package. See PL/SQL Packages and Types Reference for more information about this procedure.
  3. At database B, drop the apply process that applies changes from database A. Do not drop the rule sets used by this apply process because you will re-create the apply process in a subsequent step.
  4. Complete the steps in "Adding a New Database to an Existing Multiple Source Environment" to add database A back into the Streams environment.
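Steps 2 and 3 can be sketched as follows. The apply process name is an assumption for illustration:

```sql
-- Step 2: at database A, remove the entire Streams configuration
BEGIN
  DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
END;
/

-- Step 3: at database B, drop the apply process that applies
-- changes from database A (do not drop its rule sets)
BEGIN
  DBMS_APPLY_ADM.DROP_APPLY(
    apply_name => 'apply_from_a');  -- hypothetical apply process
END;
/
```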

Performing Database Point-in-Time Recovery in a Streams Environment

Point-in-time recovery is the recovery of a database to a specified noncurrent time, SCN, or log sequence number. The following sections discuss performing point-in-time recovery in a Streams replication environment:

Performing Point-in-Time Recovery on the Source in a Single Source Environment

A single source Streams replication environment is one in which there is only one source database for shared data. If database point-in-time recovery is required at the source database in a single source Streams environment, then you must stop all capture processes that capture changes generated at the source database before you perform the recovery operation. Both local and downstream capture processes that capture changes generated at the source database must be stopped. Typically, database administrators reset the log sequence number of a database during point-in-time recovery. The ALTER DATABASE OPEN RESETLOGS statement is an example of a statement that resets the log sequence number.

The instructions in this section assume that the single source replication environment has the following characteristics:

If point-in-time recovery must be performed on the source database, then you can follow these instructions to recover as many transactions as possible at the source database by using transactions applied at the destination database. These instructions assume that you can identify the transactions applied at the destination database after the source point-in-time SCN and execute these transactions at the source database.


Note:

Oracle Corporation recommends that you set the apply process parameter COMMIT_SERIALIZATION to FULL when performing point-in-time recovery in a single source Streams replication environment.


Complete the following steps to perform point-in-time recovery on the source database in a single source Streams replication environment:

  1. Perform point-in-time recovery on the source database if you have not already done so. Note the point-in-time recovery SCN because it is needed in subsequent steps.
  2. Ensure that the source database is in restricted mode.
  3. Stop the capture process using the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
  4. At the source database, perform a data dictionary build:
    SET SERVEROUTPUT ON
    DECLARE
      scn  NUMBER;
    BEGIN
      DBMS_CAPTURE_ADM.BUILD(
        first_scn => scn);
      DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
    END;
    /
    
    

    Note the SCN value returned because it is needed in Step 13.

  5. At the destination database, wait until all of the transactions from the source database in the apply process queue have been applied. The apply processes should become idle when these transactions have been applied. You can query the STATE column in both the V$STREAMS_APPLY_READER and V$STREAMS_APPLY_SERVER dynamic performance views. The state should be IDLE for the apply process in both views before you continue.
  6. Perform a query at the destination database to determine the highest SCN for a transaction that was applied.

    If the apply process is running, then perform the following query:

    SELECT HWM_MESSAGE_NUMBER FROM V$STREAMS_APPLY_COORDINATOR
      WHERE APPLY_NAME = 'STRM01_APPLY';
    
    

    If the apply process is disabled, then perform the following query:

    SELECT APPLIED_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS
      WHERE APPLY_NAME = 'STRM01_APPLY';
    
    

    Note the highest apply SCN returned by the query because it is needed in subsequent steps.

  7. If the highest apply SCN obtained in Step 6 is less than the point-in-time recovery SCN noted in Step 1, then proceed to Step 8. Otherwise, if the highest apply SCN obtained in Step 6 is greater than or equal to the point-in-time recovery SCN noted in Step 1, then the apply process has applied some transactions from the source database after the point-in-time recovery SCN. In this case, complete the following steps:
    1. Manually execute transactions applied after the point-in-time SCN at the source database. When you execute these transactions at the source database, make sure you set a Streams tag in the session so that the transactions will not be captured by the capture process. If no such Streams session tag is set, then these changes may be cycled back to the destination database. See "Managing Streams Tags for the Current Session" for instructions.
    2. Disable the restricted session at the source database.
  8. If you completed the actions in Step 7, then proceed to Step 12. Otherwise, if the highest apply SCN obtained in Step 6 is less than the point-in-time recovery SCN noted in Step 1, then the apply process has not applied any transactions from the source database after the point-in-time recovery SCN. In this case, complete the following steps:
    1. Disable the restricted session at the source database.
    2. Ensure that the apply process is running at the destination database.
    3. Set the maximum_scn capture process parameter of the original capture process to the point-in-time recovery SCN using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package.
    4. Set the start SCN of the original capture process to the oldest SCN of the apply process. You can determine the oldest SCN of a running apply process by querying the OLDEST_SCN_NUM column in the V$STREAMS_APPLY_READER dynamic performance view at the destination database. To set the start SCN of the capture process, specify the start_scn parameter when you run the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
    5. Ensure that the capture process writes information to the alert log by running the following procedure:
      BEGIN
        DBMS_CAPTURE_ADM.SET_PARAMETER(
          capture_name => 'strm01_capture',
          parameter    => 'write_alert_log', 
          value        => 'Y');
      END;
      /
      
      
    6. Start the original capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
    7. Ensure that the original capture process has captured all changes up to the maximum_scn setting by querying the CAPTURED_SCN column in the DBA_CAPTURE data dictionary view. When the value returned by the query is equal to or greater than the maximum_scn value, the capture process should stop automatically. When the capture process is stopped, proceed to the next step.
    8. Find the value of the LAST_ENQUEUE_MESSAGE_NUMBER in the alert log. Note this value because it is needed in subsequent steps.
    9. At the destination database, wait until all the changes are applied. You can monitor the applied changes for the apply process strm01_apply by running the following queries at the destination database:
      SELECT DEQUEUED_MESSAGE_NUMBER
        FROM V$STREAMS_APPLY_READER
        WHERE APPLY_NAME = 'STRM01_APPLY' AND
              DEQUEUED_MESSAGE_NUMBER = last_enqueue_message_number;
      
      

      Substitute the LAST_ENQUEUE_MESSAGE_NUMBER found in the alert log in the previous step for last_enqueue_message_number on the last line of the query. When this query returns a row, all of the changes from the capture database have been applied at the destination database.

      Also, ensure that the state of the apply process reader server and each apply server is IDLE. For example, run the following queries for an apply process named strm01_apply:

      SELECT STATE FROM V$STREAMS_APPLY_READER 
        WHERE APPLY_NAME = 'STRM01_APPLY';
      
      SELECT STATE FROM V$STREAMS_APPLY_SERVER 
        WHERE APPLY_NAME = 'STRM01_APPLY';
      
      

      When both of these queries return IDLE, move on to the next step.

  9. At the destination database, drop the apply process using the DROP_APPLY procedure in the DBMS_APPLY_ADM package.
  10. At the destination database, create a new apply process. The new apply process should use the same queue and rule sets used by the original apply process.
  11. At the destination database, start the new apply process using the START_APPLY procedure in the DBMS_APPLY_ADM package.
  12. Drop the original capture process using the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
  13. Create a new capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to replace the capture process you dropped in Step 12. Specify the SCN returned by the data dictionary build in Step 4 for both the first_scn and start_scn parameters. The new capture process should use the same queue and rule sets as the original capture process.
  14. Start the new capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
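Steps 12 through 14 can be sketched as follows. The capture process, queue, and rule set names, and the SCN value, are assumptions for illustration; the SCN must be the value returned by the data dictionary build in Step 4:

```sql
-- Step 12: drop the original capture process
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name => 'strm01_capture');
END;
/

-- Step 13: create a new capture process, specifying the SCN from the
-- data dictionary build in Step 4 for both first_scn and start_scn
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'strmadmin.streams_queue',  -- same queue as before
    capture_name  => 'strm02_capture',           -- hypothetical new name
    rule_set_name => 'strmadmin.strm01_rules',   -- same rule set as before
    first_scn     => 829381993,                  -- example SCN from Step 4
    start_scn     => 829381993);
END;
/

-- Step 14: start the new capture process
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'strm02_capture');
END;
/
```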

Performing Point-in-Time Recovery in a Multiple Source Environment

A multiple source environment is one in which there is more than one source database for any of the shared data. If database point-in-time recovery is required at a source database in a multiple source Streams environment, then you can use another source database in the environment to recapture the changes made to the recovered source database after the point-in-time recovery.

For example, in a multiple source Streams environment, one source database may become unavailable at time T2 and undergo point-in-time recovery to an earlier time T1. After recovery to T1, transactions performed at the recovered database between T1 and T2 are lost at the recovered database. However, before the recovered database became unavailable, assume that these transactions were propagated to another source database and applied. In this case, this other source database can be used to restore the lost changes to the recovered database.

Specifically, to restore changes made to the recovered database after the point-in-time recovery, you configure a capture process to recapture these changes from the redo logs at the other source database, a propagation to propagate these changes from the database where changes are recaptured to the recovered database, and an apply process at the recovered database to apply these changes.

Changes originating at the other source database that were applied at the recovered database between T1 and T2 also have been lost and must be recovered. To accomplish this, alter the capture process at the other source database to start capturing changes at an earlier SCN. This SCN is the oldest SCN for the apply process at the recovered database.

The following SCN values are required to restore lost changes to the recovered database:

You should record the point-in-time SCN when you perform point-in-time recovery on the recovered database. You can use the GET_SCN_MAPPING procedure in the DBMS_STREAMS_ADM package to determine the other necessary SCN values.
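A call to GET_SCN_MAPPING might look like the following sketch. The apply process name and SCN value are assumptions for illustration, and the parameter names shown are from recollection of the 10g API; verify them against PL/SQL Packages and Types Reference before use:

```sql
SET SERVEROUTPUT ON
DECLARE
  inst_scn  NUMBER;
  start_scn NUMBER;
  skip_txns DBMS_UTILITY.NAME_ARRAY;
BEGIN
  DBMS_STREAMS_ADM.GET_SCN_MAPPING(
    apply_name             => 'apply_recovered',  -- hypothetical apply process
    src_pit_scn            => 829381993,          -- recorded point-in-time SCN
    dest_instantiation_scn => inst_scn,
    dest_start_scn         => start_scn,
    dest_skip_txn_ids      => skip_txns);
  DBMS_OUTPUT.PUT_LINE('Instantiation SCN = ' || inst_scn);
  DBMS_OUTPUT.PUT_LINE('Start SCN = ' || start_scn);
END;
/
```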

See Also:

PL/SQL Packages and Types Reference for more information about the GET_SCN_MAPPING procedure

Performing Point-in-Time Recovery on a Destination Database

If database point-in-time recovery is required at a destination database in a Streams environment, then you must reapply the captured changes that had already been applied after the point-in-time recovery.

For each relevant capture process, you can choose either of the following methods to perform point-in-time recovery at a destination database in a Streams environment:

Resetting the start SCN for the capture process is simpler than creating a new capture process. However, if the capture process captures changes that are applied at multiple destination databases, then the changes are resent to all the destination databases, including the ones that did not perform point-in-time recovery. If a change is already applied at a destination database, then it is discarded by the apply process, but you may not want to use the network and computer resources required to resend the changes to multiple destination databases. In this case, you can create and temporarily use a new capture process and a new propagation that propagates changes only to the destination database that was recovered.

The following sections provide instructions for each task:

If there are multiple apply processes at the destination database where you performed point-in-time recovery, then complete one of the tasks in this section for each apply process.

Neither of these methods should be used if any of the following conditions are true regarding the destination database you are recovering:

If any of these conditions are true in your environment, then you cannot use the methods described in this section. Instead, you must manually resynchronize the data at all destination databases.

See Also:

Oracle Streams Concepts and Administration for more information about SCN values relating to a capture process and directed networks

Resetting the Start SCN for the Existing Capture Process to Perform Recovery

If you decide to reset the start SCN for the existing capture process to perform point-in-time recovery, then complete the following steps:

  1. If the destination database is also a source database in a multiple source Streams environment, then complete the actions described in "Performing Point-in-Time Recovery in a Multiple Source Environment".
  2. If you are not using directed networks between the source database and destination database, then drop the propagation that propagates changes from the source queue at the source database to the destination queue at the destination database. Use the DROP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to drop the propagation.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the propagation at each intermediate database in the path to the destination database, including the propagation at the source database.

    Do not drop the rule sets used by the propagations you drop.


    Note:

    You must drop the appropriate propagation(s). Disabling them is not sufficient. You will re-create the propagation(s) in Step 7, and dropping them now ensures that only events created after resetting the start SCN for the capture process are propagated.


    See Also:

    Oracle Streams Concepts and Administration for more information about directed networks

  3. Perform the point-in-time recovery at the destination database.
  4. Query for the oldest message number (oldest SCN) from the source database for the apply process at the destination database. Make a note of the results of the query. The oldest message number is the earliest system change number (SCN) that may need to be applied.

    The following query at a destination database lists the oldest message number for each apply process:

    SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
    
    
  5. Stop the existing capture process using the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
  6. Reset the start SCN of the existing capture process.

    To reset the start SCN for an existing capture process, run the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package and set the start_scn parameter to the value you recorded from the query in Step 4. For example, to reset the start SCN for a capture process named strm01_capture to the value 829381993, run the following ALTER_CAPTURE procedure:

    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name  =>  'strm01_capture',
        start_scn     =>  829381993);
    END;
    /
    
    
  7. If you are not using directed networks between the source database and destination database, then create a new propagation to propagate changes from the source queue to the destination queue using the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. Specify any rule sets used by the original propagation when you create the propagation.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a new propagation at each intermediate database in the path to the destination database, including the propagation at the source database.

  8. Start the existing capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
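Steps 2, 5, 7, and 8 can be sketched as follows. The propagation, capture process, queue, database link, and rule set names are assumptions for illustration:

```sql
-- Step 2: drop the propagation (do not drop its rule sets)
BEGIN
  DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
    propagation_name => 'strm01_propagation');  -- hypothetical name
END;
/

-- Step 5: stop the existing capture process
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => 'strm01_capture');
END;
/

-- Step 7: re-create the propagation with the original rule sets
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'strm01_propagation',
    source_queue       => 'strmadmin.streams_queue',       -- hypothetical
    destination_queue  => 'strmadmin.streams_queue',       -- hypothetical
    destination_dblink => 'dest.net',                      -- hypothetical
    rule_set_name      => 'strmadmin.strm01_prop_rules');  -- original rule set
END;
/

-- Step 8: start the existing capture process
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'strm01_capture');
END;
/
```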

Creating a New Capture Process to Perform Recovery

If you decide to create a new capture process to perform point-in-time recovery, then complete the following steps:

  1. If the destination database is also a source database in a multiple source Streams environment, then complete the actions described in "Performing Point-in-Time Recovery in a Multiple Source Environment".
  2. If you are not using directed networks between the source database and destination database, then drop the propagation that propagates changes from the source queue at the source database to the destination queue at the destination database. Use the DROP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to drop the propagation.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the propagation that propagates events between the last intermediate database and the destination database. You do not need to drop the propagations at the other intermediate databases or at the source database.
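    For example, the following procedure call drops a propagation named strm01_propagation (a hypothetical name; substitute the name of the propagation in your environment):

    BEGIN
      DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
        propagation_name  =>  'strm01_propagation');
    END;
    /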


    Note:

    You must drop the appropriate propagation. Disabling it is not sufficient.


    See Also:

    Oracle Streams Concepts and Administration for more information about directed networks

  3. Perform the point-in-time recovery at the destination database.
  4. Query for the oldest message number (oldest SCN) from the source database for the apply process at the destination database. Make a note of the results of the query. The oldest message number is the earliest system change number (SCN) that may need to be applied.

    The following query at a destination database lists the oldest message number for each apply process:

    SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
    
    
  5. Using the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package, create a queue at the source database for the new capture process to use.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a queue at each intermediate database in the path to the destination database, including the new queue at the source database. Do not create a new queue at the destination database.
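    For example, the following procedure call creates a queue and its queue table owned by the Streams administrator. The queue table, queue, and user names shown are hypothetical:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table  =>  'strmadmin.strm02_queue_table',
        queue_name   =>  'strmadmin.strm02_queue',
        queue_user   =>  'strmadmin');
    END;
    /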

  6. If you are not using directed networks between the source database and destination database, then create a new propagation to propagate changes from the source queue created in Step 5 to the destination queue using the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. Specify any rule sets used by the original propagation when you create the propagation.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a propagation at each intermediate database in the path to the destination database, including the propagation from the source database to the first intermediate database. These propagations propagate changes captured by the capture process you will create in Step 7 between the queues created in Step 5.

  7. Create a new capture process at the source database using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package. Set the queue_name parameter to the local queue you created in Step 5 and the start_scn parameter to the value you recorded from the query in Step 4. Also, specify any rule sets used by the original capture process. If the rule sets used by the original capture process instruct it to capture events that should not be sent to the recovered destination database, then you can create and use smaller, customized rule sets that share some rules with the original rule sets.
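    For example, the following procedure call creates a capture process that uses the queue created in Step 5 and begins capturing changes at the SCN recorded in Step 4. The capture process, queue, and rule set names shown are hypothetical:

    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name     =>  'strmadmin.strm02_queue',
        capture_name   =>  'strm02_capture',
        rule_set_name  =>  'strmadmin.strm01_rule_set',
        start_scn      =>  829381993);
    END;
    /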
  8. Start the capture process you created in Step 7 using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
  9. When the oldest message number of the apply process at the recovered database is approaching the capture number of the original capture process at the source database, stop the original capture process using the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

    At the destination database, you can use the following query to determine the oldest message number from the source database for the apply process:

    SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
    
    

    At the source database, you can use the following query to determine the capture number of the original capture process:

    SELECT CAPTURE_NAME, CAPTURE_MESSAGE_NUMBER FROM V$STREAMS_CAPTURE;
    
    
  10. When the oldest message number of the apply process at the recovered database is beyond the capture number of the original capture process at the source database, drop the new capture process created in Step 7.
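    For example, the following procedure calls stop and then drop a capture process named strm02_capture (a hypothetical name). Stop the capture process before dropping it:

    BEGIN
      DBMS_CAPTURE_ADM.STOP_CAPTURE(
        capture_name  =>  'strm02_capture');
      DBMS_CAPTURE_ADM.DROP_CAPTURE(
        capture_name  =>  'strm02_capture');
    END;
    /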
  11. If you are not using directed networks between the source database and destination database, then drop the new propagation created in Step 6.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the new propagation at each intermediate database in the path to the destination database, including the new propagation at the source database.

  12. If you are not using directed networks between the source database and destination database, then remove the queue created in Step 5.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the new queue at each intermediate database in the path to the destination database, including the new queue at the source database. Do not drop the queue at the destination database.

  13. If you are not using directed networks between the source database and destination database, then create a propagation that propagates changes from the original source queue at the source database to the destination queue at the destination database. Use the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to create the propagation. Specify any rule sets used by the original propagation when you create the propagation.

    If you are using directed networks, and there are intermediate databases between the source database and destination database, then re-create the propagation from the last intermediate database to the destination database. You dropped this propagation in Step 2.

  14. Start the capture process you stopped in Step 9.

All of the steps after Step 8 can be deferred to a later time, or they can be done as soon as the condition described in Step 9 is met.