Oracle8 Replication
Release 8.0

A58245-01


7
Advanced Techniques

This chapter describes several advanced techniques that you can use when implementing an Oracle replicated database environment.

Using Procedural Replication

Procedural replication can offer performance advantages for batch-oriented operations that modify large numbers of rows and that can be run serially within a replicated environment.

A good example of an appropriate application is a purge operation (also referred to as an archive operation) that you run infrequently (for example, once per quarter) during off hours to remove old data, or data that was "logically" deleted from the online database. An example using procedural replication to purge deleted rows is described in "Avoiding Delete Conflicts" on page 7-25.

Restrictions on Procedural Replication

All parameters for a replicated procedure must be IN parameters; OUT and IN OUT modes are not supported. The datatypes supported for these parameters are NUMBER, DATE, VARCHAR2, CHAR, ROWID, RAW, BLOB, CLOB, NCHAR, NVARCHAR2, and NCLOB.
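For example, the following sketch satisfies these restrictions; the package, procedure, and parameter names are hypothetical:

CREATE OR REPLACE PACKAGE corp.maint AS
  -- IN-only parameters, using supported datatypes (DATE, VARCHAR2)
  PROCEDURE mark_inactive(cutoff IN DATE, reason IN VARCHAR2);
END;
/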

Oracle cannot detect update conflicts produced by replicated procedures. Replicated procedures must detect and resolve conflicts themselves. Because of the difficulties involved in writing your own conflict resolution routines, it is best to simply avoid the possibility of conflicts altogether.

When you plan to use procedural replication, adhering to the following guidelines will help you ensure that your tables remain consistent at all sites.

Serialization of Transactions

Serial execution ensures that your data remains consistent. The advanced replication facility propagates and executes replicated transactions one at a time. For example, assume that you have two procedures, A and B, that perform updates on local data. Now assume that you perform the following actions, in order:

  1. Execute A and B locally.
  2. Queue requests to execute other replicas of A and B on other nodes.
  3. Commit.

The replicas of A and B on the other nodes are executed completely serially, in the same order that they were committed at the originating site. If A and B execute concurrently at the originating site, however, they may produce different results locally than they do remotely. Executing A and B serially at the originating site ensures that all sites have identical results. Propagating the transaction serially ensures that A and B will be executing in serial order at the target site in all cases.

Alternatively, you could write the procedures carefully to ensure serialization. For example, you could use SELECT ... FOR UPDATE for queries to ensure serialization at the originating site and at the target site if you are using parallel propagation.
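As a sketch, a replicated procedure might lock the rows it will modify before updating them; the table and column names here are hypothetical (borrowing the MAIL_LIST example used later in this chapter):

DECLARE
  CURSOR c_targets IS
    SELECT custno FROM corp.mail_list
     WHERE state = 'CA'
     FOR UPDATE;
BEGIN
  -- Locking the rows first serializes concurrent executions of this
  -- procedure on the same data, at the originating and target sites.
  FOR r IN c_targets LOOP
    UPDATE corp.mail_list
       SET phone = NULL
     WHERE CURRENT OF c_targets;
  END LOOP;
END;
/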

Generating Support for Replicated Procedures

You must disable row-level replication support at the start of your procedure, and then re-enable support at the end. This ensures that any updates that occur as a result of executing the procedure are not propagated to other sites. Row-level replication is enabled and disabled by calling the DBMS_REPUTIL.REPLICATION_ON and DBMS_REPUTIL.REPLICATION_OFF procedures, respectively.

Additional Information: See "Disabling the Advanced Replication Facility" on page 7-33.

When you generate replication support for your replicated package, Oracle creates a wrapper package in the schema of the replication propagator.

Note: Unregistering the current propagator drops all existing generated wrappers in the propagator's schema. Replication support for wrapped stored procedures must be regenerated after you register a new propagator.

The wrapper package has the same name as the original package, but its name is prefixed with the string you supply when you generate replication support for the procedure. If you do not supply a prefix, Oracle uses the default prefix, "defer_". The wrapper procedure has the same parameters as the original, along with two additional parameters: CALL_LOCAL and CALL_REMOTE. These two boolean parameters determine where the procedure gets executed. When CALL_LOCAL is TRUE, the procedure is executed locally. When CALL_REMOTE is TRUE, the procedure will ultimately be executed at all other master sites in the replicated environment.

If you are propagating changes synchronously, the remote procedures are called directly; if you are propagating changes asynchronously, calls to these procedures are added to the deferred transaction queue. By default, CALL_LOCAL is FALSE, and CALL_REMOTE is TRUE.

Oracle generates replication support for a package in two phases. The first phase creates the package specification at all sites. Phase two generates the package body at all sites. These two phases are necessary to support synchronous replication.

For example, suppose you create the package EMP_MGMT containing the procedure NEW_DEPT, which takes one argument, ENAME. To replicate this package to all master sites in your system, you can use Replication Manager to add the package to a master group and then generate replication support for the object. See "Managing Master Groups" on page 3-15 for more information about managing master groups and replicated objects using Replication Manager. After completing these steps, an application can call the procedure in the replicated package as follows:

defer_emp_mgmt.new_dept(ename       => 'Jones',
                        call_local  => TRUE,
                        call_remote => TRUE);

As shown in Figure 7-1, the logic of the wrapper procedure ensures that the procedure is called at the local site and subsequently at all remote sites. The logic of the wrapper procedure also ensures that when the replicated procedure is called at the remote sites, CALL_REMOTE is FALSE, ensuring that the procedure is not further propagated.

If you are operating in a mixed replicated environment with static partitioning of data ownership (that is, if you are not preventing row-level replication), the replication facility will preserve the order of operations at the remote node, since both row-level and procedural replication use the same asynchronous queue.

Figure 7-1 Asynchronous Procedural Replication

Using Synchronous Data Propagation

Asynchronous data propagation is the normal configuration for advanced replication environments. However, Oracle also supports synchronous data propagation for applications with special requirements. Synchronous data propagation occurs when an application updates a local replica of a table, and within the same transaction also updates all other replicas of the same table. Consequently, synchronous data replication is also called real-time data replication. Use synchronous replication only when applications require that replicated sites remain continuously synchronized.

Note: A replication system that uses real-time propagation of replication data is highly dependent on system and network availability because it can function only when all sites in the system are concurrently available.

The following sections explain more about synchronous data propagation and how to manage a replicated database system that uses synchronous data propagation.

Understanding Synchronous Data Propagation

As shown in Figure 7-2, whenever an application makes a DML change to a local replicated table and the replication group is using synchronous row-level replication, the change is synchronously propagated to the other master sites in the replicated environment using internal triggers. When the application applies a local change, the internal triggers issue calls to generated procedures at the remote master sites in the security context of the replication propagator. Oracle ensures that all distributed transactions either commit or roll back in the event of a failure.

Additional Information: See the discussion of distributed updates in the book Oracle8 Distributed Database Systems.

Figure 7-2 Propagating Changes using Synchronous Row-Level Replication

Restrictions

Because of the locking mechanism used by synchronous replication, deadlocks can occur. When an application performs a synchronous update to a replicated table, Oracle first locks the local row and then uses an AFTER ROW trigger to lock the corresponding remote row. Oracle releases the locks when the transaction commits at each site.

Destination of Synchronously Replicated Transactions

The necessary remote procedure calls to support synchronous replication are included in the internal trigger for each object. When you generate replication support for a replicated object, Oracle activates the triggers at all master sites to add the necessary remote procedure calls for the new site. Conversely, when you remove a master site from a master group, Oracle removes the calls from the internal triggers.

Conflict Detection

If all sites of a master group communicate synchronously with one another, applications should never experience replication conflicts. However, if even one site is sending changes asynchronously to another site, applications can experience conflicts at any site in the replicated environment.

If the change is being propagated synchronously, an error is raised and a rollback will be required. If the change is propagated asynchronously, Oracle automatically detects the conflicts and either logs the conflict or, if you designate an appropriate resolution method, resolves the conflict.

Additional Information: See Chapter 5, "Conflict Resolution".

Adding New Sites to an Advanced Replication Environment

When you add a new master or snapshot site to a replication group in an advanced replication environment, Replication Manager allows you to select the data propagation mode (method) for the new site.

See Chapter 3, "Using Multimaster Replication" and Chapter 4, "Using Snapshot Site Replication" for more information about adding master and snapshot sites to an advanced replication environment, respectively.

Understanding Mixed-Mode Multimaster Systems

In some situations, you might decide to have a mixed-mode environment in which some master sites propagate a master group's changes asynchronously and others propagate changes synchronously. The order in which you add new master sites to a group with different data propagation modes can be important.

For example, suppose that you have three master sites: A, B, and C. If you first create site A as the master definition site, and then add site B with a synchronous propagation mode, site A will send changes to site B synchronously and site B will send changes to site A synchronously. There is no need to concern yourself with the scheduling of links at either site, because neither site is creating deferred transactions.

Now suppose that you create master site C with an asynchronous propagation mode. The propagation modes are now as illustrated in Figure 7-3.

Figure 7-3 Selecting a Propagation Mode

You must now schedule propagation of the deferred transaction queue from site A to site C, from site B to site C, and from site C to sites A and B.

As another example, consider what happens if you create site A as the master definition site, then add site C with an asynchronous propagation mode, and then add site B with a synchronous propagation mode. Now the propagation modes are as shown in Figure 7-4.

Figure 7-4 Ordering Considerations

Each time that you add a new master site to a mixed-mode multimaster system, consider how the addition will affect the data propagation modes to and from existing sites.

Tip: You can view the data propagation modes between master group sites in a multimaster system quickly by using a Replication Manager destination map. See "Displaying a Destination Map for a Master Group" on page 3-29 for more information about master group destination maps.

Altering a Master Site's Data Propagation Mode

To change the data propagation mode from one master site to another in a master group, use the destination map for the group in Replication Manager.

  1. Suspend replication activity for the master group.
  2. Right-click the link that you want to alter.
  3. Click Make Asynchronous or Make Synchronous.
  4. Resume replication activity for the master group.

API Reference: DBMS_REPCAT.ALTER_MASTER_PROPAGATION
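If you use the replication management API instead of Replication Manager, the sequence might look like the following sketch. The group, site, and link names are hypothetical, and the parameter names are assumptions to confirm against the API reference:

/*--- Suspend activity, alter propagation to one destination, resume ---*/
connect repadmin/repadmin@acct_sf.world
execute dbms_repcat.suspend_master_activity(gname => 'acct_group')
execute dbms_repcat.alter_master_propagation( -
     gname            => 'acct_group', -
     master           => 'acct_sf.world', -
     dblink_list      => 'acct_ny.world', -
     propagation_mode => 'ASYNCHRONOUS')
execute dbms_repcat.resume_master_activity(gname => 'acct_group')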

After you switch the propagation mode between one or more master sites in a master group:

Designing for Survivability

Survivability provides the capability to continue running applications despite system or site failures. It allows you to run applications on a fail-over system, accessing the same, or very nearly the same, data that was available on the primary system when it failed. As shown in Figure 7-5, the Oracle Server provides two different technologies for accomplishing survivability: the Oracle Parallel Server and the advanced replication facility.

Figure 7-5 Survivability Methods: Advanced Replication vs. Parallel Server


Oracle Parallel Server versus Advanced Replication

The Oracle Parallel Server supports fail-over to surviving systems when a system supporting an instance of the Oracle Server fails. The Oracle Parallel Server requires a cluster or massively parallel hardware platform, and thus is applicable for protection against processor system failures in the local environment where the cluster or massively parallel system is running.

In these environments, the Oracle Parallel Server is the ideal solution for survivability - supporting high transaction volumes with no lost transactions or data inconsistencies in the event of an instance failure. If an instance fails, a surviving instance of the Oracle Parallel Server automatically recovers any incomplete transactions. Applications running on the failed system can execute on the fail-over system, accessing all data in the database.

The Oracle Parallel Server does not, however, provide survivability for site failures (such as flood, fire, or sabotage) that render an entire site, and thus the entire cluster or massively parallel system, inoperable. To provide survivability for site failures, you can use the advanced replication facility to maintain a replica of a database at a geographically remote location.

Should the local system fail, the application can continue to execute at the remote site. Advanced replication, however, cannot guarantee the protection of all transactions. Also, special care must be taken to prevent data inconsistencies when recovering the primary site.

Note: You can also configure a standby database to protect an Oracle database from site failures. For more information about Oracle's standby database feature, see the Oracle8 Backup and Recovery Guide.

Designing for Survivability

If you choose to use the advanced replication facility for survivability, you should consider the following issues:

Implementing a Survivable System

Oracle's advanced replication facility can be used to provide survivability against site failures by using multiple replicated master sites. You must configure your system using one of the following methods. These methods are listed in order of increasing implementation difficulty.

Snapshot Cloning and Offline Instantiation

By default, Oracle builds and populates replicas when you:

When building a large replicated environment, the amount of data necessary to build and populate replicas throughout the system can generate excessive network traffic. To avoid saturating the network with this data, Oracle lets you perform offline instantiation of new sites in both basic and advanced replication systems. The following sections explain how to clone snapshots in a basic replication environment and how to perform offline instantiation of master and snapshot sites in an advanced replication environment.

Snapshot Cloning for Basic Replication Environments

To reduce the network overhead associated with the creation of the same set of snapshots in many databases, you can perform snapshot "cloning" by following these steps:

  1. Create the snapshots and corresponding refresh groups that you want to clone in a database separate from the database that contains the master tables. The database in which you create snapshots of the master table data is the template snapshot database for the cloning process. If you are not familiar with creating snapshots in a basic replication environment, see Chapter 2, "Using Basic Replication".
  2. At the template snapshot database, export the schemas containing the snapshots and refresh groups that you want to clone to other snapshot databases. The Export utility does not support the export of individual snapshots. Therefore, you must export snapshots at the schema level.
  3. Prepare all other snapshot databases for snapshots of the master database. If you are not familiar with setting up the accounts and database links necessary to prepare a snapshot database, see "Preparing a Database for Snapshots" on page 2-4.

    Note: To make sure that you have prepared a snapshot database properly, connect to each snapshot schema in the database. Next, execute the defining queries of the proposed snapshots to ensure that they execute without error. Additionally, check to make sure that each new snapshot database has enough free space to hold the new snapshots.

  4. At each new snapshot database, import the snapshot schemas that were exported from the template snapshot database. This creates and populates the same snapshots that were exported from the template snapshot database.
  5. Perform a fast refresh of all fast-refreshable snapshots. Doing so registers the new snapshots in the master database.
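For instance, steps 2 and 4 might use the Export and Import utilities from an operating system prompt as sketched below; the connect strings, file name, and schema name are hypothetical:

# Step 2: schema-level export at the template snapshot database
exp system/manager@snap_template FILE=snapshots.dmp OWNER=sales

# Step 4: import the snapshot schemas at each new snapshot database
imp system/manager@snap_new FILE=snapshots.dmp FROMUSER=sales TOUSER=sales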

Offline Instantiation of a Master Site in an Advanced Replication System

Offline instantiation of a master site lets you add the new master site to a master group while limiting the amount of downtime required for existing sites in the multimaster replication system. Offline instantiation of a master site is useful primarily for systems with very large databases where the time required to transfer the data through network links to a new site would be prohibitive.

Assuming that you are using a default replication configuration that uses asynchronous row-level replication, the steps necessary to create a new master site using offline instantiation are as follows:

  1. Prepare the new master site with the Replication Manager setup wizard. At the new master site, make sure to create the schemas that will contain objects of the master group. If you are not sure how to prepare a master site, see "The Replication Setup Wizard" on page 3-4 and "Adding a Master Site to a Master Group" on page 3-24 for more information.
  2. Synchronize the master sites that replicate the master group. This process includes pushing all outstanding administration requests at each master site, resolving any administration requests that have an error status, resolving error transactions at each master site, and so on.
  3. Suspend replication activity for the master group.

    Warning: Do not resume replication activity or do other replication administration for the master group until the new master site appears at the master definition site. Otherwise, changes that you make at any site will not be propagated to the new site, and you might have problems synchronizing the group's data.

  4. From the group's master definition site, call the replication management API procedure DBMS_OFFLINE_OG.BEGIN_INSTANTIATION with the parameters gname (the name of the master group) and new_site (the name of the new master site). This procedure adds the new site to the master group.
  5. Export the individual objects in the master group from any master site. When a master group contains replication objects that originate from more than one schema, make sure to export the objects from each schema.
  6. From the master definition site, call the replication management API procedure DBMS_OFFLINE_OG.RESUME_SUBSET_OF_MASTERS with the parameters gname (the name of the master group) and new_site (the name of the new master site).

    At this point, normal non-administrative activities can resume at the existing master sites. However, replication activity for the group remains suspended at the new site.

  7. Transfer the export file to the new master site.
  8. At the new master site, call the replication management API procedure DBMS_OFFLINE_OG.BEGIN_LOAD with the parameters gname (the name of the master group) and new_site (the name of the new master site). This procedure ensures that applications cannot update master group objects while you import master group data in the next step, disables internal replication triggers, and disables propagation of the deferred RPC queue from all other master sites to the new site.
  9. Use the Import utility to import the data from the Export file.
  10. When the import is complete at the new master site, call the replication management API procedure DBMS_OFFLINE_OG.END_LOAD (gname, new_site) to enable the new site.
  11. Return to the master definition site and call the procedure DBMS_OFFLINE_OG.END_INSTANTIATION with the parameters gname (the name of the master group) and new_site (the name of the new master site). This procedure resumes replication activity at the new site.

Additional Information: See the book Oracle8 Utilities to learn about the Import and Export utilities.
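A condensed sketch of the API calls in steps 4, 6, 8, 10, and 11 follows, in the same script style as the examples later in this chapter; the group name, site names, and connect strings are hypothetical:

/*--- Step 4: at the master definition site ---*/
connect repadmin/repadmin@hq.world
execute dbms_offline_og.begin_instantiation(gname => 'acct_group', new_site => 'newyork.world')

/*--- Step 6: at the master definition site, after the export ---*/
execute dbms_offline_og.resume_subset_of_masters(gname => 'acct_group', new_site => 'newyork.world')

/*--- Steps 8 and 10: at the new master site, around the import ---*/
connect repadmin/repadmin@newyork.world
execute dbms_offline_og.begin_load(gname => 'acct_group', new_site => 'newyork.world')
/*--- ... import the master group data here (step 9) ... ---*/
execute dbms_offline_og.end_load(gname => 'acct_group', new_site => 'newyork.world')

/*--- Step 11: back at the master definition site ---*/
connect repadmin/repadmin@hq.world
execute dbms_offline_og.end_instantiation(gname => 'acct_group', new_site => 'newyork.world')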

Please note the following:

Offline Instantiation of a Snapshot Site in an Advanced Replication System

Offline instantiation of a snapshot site in an advanced replication environment is useful when you need to create a large snapshot site and the time required to transfer replication data through the network to the new site would be prohibitive.

Use the following steps to create a snapshot site using offline instantiation:

At the Master Site
  1. Before you begin, create a snapshot log for each master table.
  2. Create a snapshot of each master group table. Create each snapshot in the same schema as its master table. To accomplish this, each snapshot's name must be different from that of its corresponding master table. Additionally, the snapshot's defining query must use a loopback database link to reference the master table. For example, to create a snapshot EMPLOYEE of the EMP table in the same database DBS1:
    CREATE SNAPSHOT sales.employee AS
      SELECT * FROM sales.emp@dbs1;

    Attention: Before creating snapshots, make sure that the master database has ample storage space.

  3. As schema owner, use Export to export all schemas that contain the snapshots.
  4. Drop the snapshots at the master site that were created for offline instantiation.
  5. Transfer the Export file(s) to the new snapshot site.
At a New Snapshot Site
  1. Using the Replication Manager setup wizard, prepare the new snapshot site to communicate with the master site. If you are not familiar with this process, see "Preparing for Snapshot Site Replication" on page 4-5.
  2. Using the Replication Manager snapshot group wizard, create an empty snapshot group to correspond with the master group. Do not choose to replicate any objects of the master group.
  3. For each schema and snapshot, use the procedure DBMS_OFFLINE_SNAPSHOT.BEGIN_LOAD (gname, sname, master_site, snapshot_oname) to create empty snapshots in the specified schema and object group at the new snapshot site, as well as all the necessary supporting objects.
  4. Import only the snapshot base tables from the Export file(s).
  5. When the import is complete, for each schema and snapshot, use the procedure DBMS_OFFLINE_SNAPSHOT.END_LOAD (gname, sname, snapshot_oname).
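For example, steps 3 through 5 at the new snapshot site might look like the following sketch; the group, schema, site, and snapshot names are hypothetical:

/*--- Step 3: prepare an empty snapshot at the new snapshot site ---*/
connect snapadmin/snapadmin@sales.world
execute dbms_offline_snapshot.begin_load( -
     gname          => 'acct_group', -
     sname          => 'sales', -
     master_site    => 'hq.world', -
     snapshot_oname => 'employee')

/*--- Step 4: import only the snapshot base tables here ---*/

/*--- Step 5: complete the load ---*/
execute dbms_offline_snapshot.end_load( -
     gname          => 'acct_group', -
     sname          => 'sales', -
     snapshot_oname => 'employee')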

Additional Information: See the book Oracle8 Utilities.

Security Setup for Multimaster Replication

Nearly all users should find it easiest to use the Replication Manager setup wizard when configuring multimaster replication security. However, for certain specialized cases, you may need to use the replication management API to perform these setup operations.

To configure a replication environment, the replication administrator must have DBA privileges, including the ability to connect as SYS.

First set up user accounts at each master site with the appropriate privileges to configure and maintain the replication environment and to propagate and apply replicated changes. You must also define links for users at each master site.

In addition to the end users who will access replicated objects, there are three special categories of "users" in a replication environment: the replication administrator, the propagator, and the receiver.

Typically, a single user acts as administrator, propagator, and receiver. However, you can have separate users perform each of these functions. You can choose to have a single, global replication administrator or, if your replication groups do not span schema boundaries, you may prefer to have separate replication administrators for different schemas. Note, however, that you can have only one registered propagator for each database.

Table 7-1 describes the necessary privileges that must be assigned to these specialized accounts. Most privileges needed by these users are granted to them through calls to the replication management API. You will also need to grant certain privileges directly.

Table 7-1 Required User Accounts

User                                    Privileges
--------------------------------------  -----------------------------------------
global replication administrator        DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA

schema-level replication administrator  DBMS_REPCAT_ADMIN.GRANT_ADMIN_SCHEMA

propagator                              DBMS_DEFER_SYS.REGISTER_PROPAGATOR

receiver                                grant EXECUTE ANY PROCEDURE
                                        grant CREATE SESSION

                                        or, if you have stricter security
                                        requirements:

                                        grant CREATE SESSION
                                        grant EXECUTE on DBMS_DEFER_INTERNAL_SYS
                                        grant EXECUTE on necessary $RP, $RL, and
                                        wrapper packages

After you have created these accounts, create the necessary private database links, including username and password, between each site.

Assuming you have designated a single user account to act as replication administrator, propagator, and receiver, you will need to create N(N-1) links, where N is the number of master sites in your replication environment. For example, with three master sites you need 3 x 2 = 6 private links.

After creating these links, you must call DBMS_DEFER_SYS.SCHEDULE_PUSH and DBMS_DEFER_SYS.SCHEDULE_PURGE, at each location, to define how frequently you want to propagate your deferred transaction queue to each remote location, and how frequently you wish to purge this queue. You will need to call DBMS_DEFER_SYS.SCHEDULE_PUSH multiple times at each site, once for each remote location.

A sample script for setting up multimaster replication between HQ.WORLD and SALES.WORLD is shown below:

/*--- Create global replication administrator at HQ ---*/
connect system/manager@hq.world
create user repadmin identified by repadmin;
execute dbms_repcat_admin.grant_admin_any_schema(username => 'repadmin')

/*--- Create global replication administrator at Sales ---*/
connect system/manager@sales.world
create user repadmin identified by repadmin;
execute dbms_repcat_admin.grant_admin_any_schema(username => 'repadmin')

/*--- Create single user to act as both propagator and receiver at HQ ---*/
connect system/manager@hq.world
create user prop_rec identified by prop_rec;
/*--- Grant privileges necessary to act as propagator ---*/
execute dbms_defer_sys.register_propagator(username => 'prop_rec')
/*--- Grant privileges necessary to act as receiver ---*/
grant execute any procedure to prop_rec;
grant create session to prop_rec;

/*--- Create single user to act as both propagator and receiver at Sales ---*/
connect system/manager@sales.world
create user prop_rec identified by prop_rec;
/*--- Grant privileges necessary to act as propagator ---*/
execute dbms_defer_sys.register_propagator(username => 'prop_rec')
/*--- Grant privileges necessary to act as receiver ---*/
grant execute any procedure to prop_rec;
grant create session to prop_rec;

/*--- Create public link from HQ to Sales with necessary USING clause ---*/
connect system/manager@hq.world
create public database link sales.world using 'sales.world';

/*--- Create private repadmin to repadmin link ---*/
connect repadmin/repadmin@hq.world
create database link sales.world connect to repadmin identified by repadmin;

/*--- Schedule replication from HQ to Sales ---*/
execute dbms_defer_sys.schedule_push( -
     destination => 'sales.world', -
     interval => 'sysdate + 1/24', -
     next_date => sysdate, -
     stop_on_error => FALSE, -
     delay_seconds => 0, -
     parallelism => 1)

/*--- Schedule purge of def tran queue at HQ ---*/
execute dbms_defer_sys.schedule_purge( -
     next_date => sysdate, -
     interval => 'sysdate + 1', -
     delay_seconds => 0, -
     rollback_segment => '')

/*--- Create link from propagator to receiver for scheduled push ---*/
connect prop_rec/prop_rec@hq.world
create database link sales.world connect to prop_rec identified by prop_rec;

/*--- Create public link from Sales to HQ with necessary USING clause ---*/
connect system/manager@sales.world
create public database link hq.world using 'hq.world';

/*--- Create private repadmin to repadmin link ---*/
connect repadmin/repadmin@sales.world
create database link hq.world connect to repadmin identified by repadmin;

/*--- Schedule replication from Sales to HQ ---*/
execute dbms_defer_sys.schedule_push( -
     destination => 'hq.world', -
     interval => 'sysdate + 1/24', -
     next_date => sysdate, -
     stop_on_error => FALSE, -
     delay_seconds => 0, -
     parallelism => 1)

/*--- Schedule purge of def tran queue at Sales ---*/
execute dbms_defer_sys.schedule_purge( -
     next_date => sysdate, -
     interval => 'sysdate + 1', -
     delay_seconds => 0, -
     rollback_segment => '')

/*--- Create link from propagator to receiver for scheduled push ---*/
connect prop_rec/prop_rec@sales.world
create database link hq.world connect to prop_rec identified by prop_rec;

Security Setup for Snapshot Replication

Nearly all users should find it easiest to use the Replication Manager setup wizard when configuring snapshot replication security. However, for certain specialized cases, you may need to use the replication management API to perform these setup operations.

To configure a replication environment you must have DBA privileges, including the ability to connect as SYS.

First set up user accounts at each snapshot site with the appropriate privileges to configure and maintain the replication environment and to propagate replicated changes. You must also define links for these users to the associated master site. You may need to create additional users, or assign additional privileges to users at the associated master site.

In addition to end users who will be accessing replicated objects, there are three special categories of "users" at a snapshot site: the snapshot site replication administrator, the propagator, and the refresher.

Typically, a single user performs each of these functions. However, there may be situations where you need different users performing these functions. For example, snapshots may be created by a snapshot site administrator and refreshed by another end user.

Table 7-2 describes the privileges needed to create and maintain a snapshot site.

Table 7-2 Required Snapshot Site User Accounts

User                                     Privileges
---------------------------------------  -----------------------------------------
snapshot site replication administrator  DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA

propagator                               DBMS_DEFER_SYS.REGISTER_PROPAGATOR

refresher                                CREATE ANY SNAPSHOT
                                         ALTER ANY SNAPSHOT

In addition to creating the appropriate users at the snapshot site, you may need to create additional users at the associated master site as well. Table 7-3 describes the privileges needed by master site users to support a new snapshot site.

Table 7-3 Required Master Site User Accounts

User                               Privileges
---------------------------------  ----------------------------------------------
proxy snapshot site administrator  DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA

                                   or, grant the following privileges:

                                   CREATE SESSION, CREATE ANY TRIGGER,
                                   CREATE ANY PROCEDURE, SELECT ANY TABLE, and
                                   EXECUTE on DBMSOBJGWRAPPER, DBMS_REPCAT_UTL2,
                                   and DBMS_REPCAT_UNTRUSTED.

                                   In addition, the proxy administrator requires
                                   the following privileges to create snapshot
                                   logs: COMMENT ANY TABLE, CREATE ANY TABLE, and
                                   SELECT ANY TABLE or SELECT on master tables.

receiver                           grant EXECUTE ANY PROCEDURE
                                   grant CREATE SESSION

                                   or

                                   grant CREATE SESSION
                                   grant EXECUTE on DBMS_DEFER_INTERNAL_SYS
                                   grant EXECUTE on necessary $RP, $RL, and
                                   wrapper packages

proxy refresher                    grant CREATE SESSION
                                   grant SELECT ANY TABLE

                                   or

                                   grant CREATE SESSION
                                   grant SELECT on necessary master tables and
                                   snapshot logs

After creating the accounts at both the snapshot and associated master sites, you need to create the necessary private database links, including username and password, from the snapshot site to the master site.

Assuming you have designated a single user account to act as replication administrator, propagator, and receiver, you will need to create one link for each snapshot site. You do not need a link from the master site to the snapshot site.

After creating these links, you must call DBMS_DEFER_SYS.SCHEDULE_PUSH and DBMS_DEFER_SYS.SCHEDULE_PURGE at the snapshot site to define how frequently you want to propagate your deferred transaction queue to the associated master site, and how frequently you wish to purge this queue. You must also call DBMS_REFRESH.REFRESH at the snapshot site to schedule how frequently to pull changes from the associated master site.
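A sketch of these calls at the snapshot site follows, modeled on the multimaster script above; the site name, refresh group name, and intervals are hypothetical:

/*--- Schedule push of the deferred transaction queue to the master ---*/
execute dbms_defer_sys.schedule_push( -
     destination => 'hq.world', -
     interval => 'sysdate + 1/24', -
     next_date => sysdate)

/*--- Schedule purge of the deferred transaction queue ---*/
execute dbms_defer_sys.schedule_purge( -
     next_date => sysdate, -
     interval => 'sysdate + 1')

/*--- Refresh the snapshot refresh group to pull changes now ---*/
execute dbms_refresh.refresh('sales.snap_group')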

Alternative Security Setup for an Advanced Replication Snapshot Site

You can configure snapshot site security using an alternative approach that provides both greater simplicity and security. Set up this configuration using the replication management API. The Replication Manager setup wizard does not support this approach, nor can you use Replication Manager, in most cases, to administer snapshot sites that use this configuration.

This approach requires that all snapshots at the snapshot site, and the corresponding master tables at the master site, be contained within a single schema. The schema owner is then authorized as the snapshot site administrator for that schema and as the propagator for the snapshot site. Similarly, the schema owner is authorized as the snapshot replication receiver at the master site. This approach is simpler and more secure because the snapshot site administrator does not need privileges to administer objects in other schemas.

A sample script implementing this approach between HQ, as the master site, and Sales, as the snapshot site, for the order_entry schema is shown below.

/*--- Grant snap_repadmin privileges to schema owner at snapshot site ---*/
connect system/manager@sales.world
execute dbms_repcat_admin.grant_admin_schema(username => 'order_entry')

/*--- Register schema owner as propagator at snapshot site ---*/
execute dbms_defer_sys.register_propagator(username => 'order_entry')

/*--- Grant proxy snap_repadmin privileges to schema owner at master site ---*/
connect system/manager@hq.world
grant execute on sys.dbmsobjgwrapper to order_entry;
grant execute on sys.dbms_defer_internal_sys to order_entry;
grant execute on sys.dbms_repcat_untrusted to order_entry;
grant execute on sys.dbms_repcat_utl2 to order_entry;

/*--- Create link from snapshot schema owner to master schema owner ---*/
connect order_entry/order_entry@sales.world
create database link hq.world
  connect to order_entry identified by order_entry using 'hq.world';

Avoiding Delete Conflicts

To avoid encountering delete conflicts, you might find it easiest to mark rows as deleted and purge them later. This section outlines a simple technique for purging these marked rows using procedural replication.

Suppose that your database contains the following MAIL_LIST table:

Name                 Null?      Type
-------------------  ---------  -------------
CUSTNO               NOT NULL   NUMBER(4)      PRIMARY KEY
CUSTNAME                        VARCHAR2(10)
ADDR1                           VARCHAR2(30)
ADDR2                           VARCHAR2(30)
CITY                            VARCHAR2(30)
STATE                           VARCHAR2(2)
ZIP                             NUMBER(9)
PHONE                           NUMBER(10)
REMOVE_DATE                     DATE

Instead of deleting a customer when he or she requests to be removed from your mailing list, the REMOVE_DATE column would be used to indicate former customers; a NULL value would indicate a current customer. After customers request removal from the mailing list, their rows are no longer updated. This convention avoids conflicts when the rows are actually deleted sometime later. A view of current customers could be defined as follows:

CREATE OR REPLACE VIEW corp.current_mail_list AS
 SELECT custno, custname, addr1, addr2, city, state, zip, phone 
  FROM corp.mail_list WHERE remove_date IS NULL;

Periodically, perhaps once a year after the holiday sales, the former customers would be purged from the table using the REMOVE_DATE column. Such a purge could be performed with row-level replication simply by issuing the following delete:

DELETE corp.mail_list
  WHERE remove_date IS NOT NULL AND remove_date < '01-JAN-95';

However, for a large company with an extensive mail order business, the number of former customers could be quite large, resulting in undesired network traffic and database overhead. Instead, procedural replication could be used, with the following package:

CREATE OR REPLACE PACKAGE corp.purge AS
 PROCEDURE remove_cust(purge_date IN DATE);
END;
/
CREATE OR REPLACE PACKAGE BODY corp.purge AS
 PROCEDURE remove_cust(purge_date IN DATE) IS
 BEGIN
  -- turn off row-level replication for set delete
  dbms_reputil.replication_off;
  -- prevent phantom reads
  LOCK TABLE corp.mail_list IN EXCLUSIVE MODE;
  DELETE corp.mail_list WHERE remove_date IS NOT NULL AND 
                              remove_date < purge_date;
  dbms_reputil.replication_on;
 EXCEPTION WHEN others THEN
  dbms_reputil.replication_on;
  RAISE;
 END;
END;
/

The DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT procedure would have been used to generate the DEFER_PURGE package during the initial replication setup. Then, the procedural replication package could be called as follows by a single master site:

BEGIN
 defer_purge.remove_cust(purge_date  => '14-APR-97',
                         call_local  => TRUE,
                         call_remote => TRUE);
END;
/

The procedure PURGE.REMOVE_CUST would be executed locally and asynchronously at each master site, resulting in many rows being deleted with only minimal network traffic.

To ensure that there are no outstanding transactions against the rows to be purged, your application should be written never to update logically deleted rows, and the REMOVE_DATE value should be old enough to ensure that the logical delete of the row is propagated before the row is purged. Thus, in the previous example, it is probably not necessary to lock the table in EXCLUSIVE mode, although this is another way of guaranteeing that these rows are not updated during the purge.

Using Dynamic Ownership Conflict Avoidance

This section describes a more advanced method of designing your applications to avoid conflicts. This method, known as token passing, is similar to the workflow method described in Chapter 1. Although this section describes how to use this method to control the ownership of an entire row, you can use a modified form of this method to control ownership of the individual column groups within a row.

Both workflow and token passing allow dynamic ownership of data. With dynamic ownership, only one site at a time is allowed to update a row, but ownership of the row can be passed from site to site. Both workflow and token passing use the value of one or more "identifier" columns to determine who is currently allowed to update the row.

Workflow

With workflow partitioning, you can think of data ownership as being "pushed" from site to site. Only the current owner of the row is allowed to push the ownership of the row to another site, by changing the value of the "identifier" columns.

Take the simple example of separate sites for ordering, shipping, and billing. Here, the identifier columns are used to indicate the status of an order. The status determines which site can update the row. After a user at the ordering site has entered the order, he or she updates the status of this row to SHIP. Users at the ordering site are no longer allowed to modify this row - ownership has been pushed to the shipping site.

After shipping the order, the user at the shipping site will update the status of this row to BILL, thus pushing ownership to the billing site, and so on.

To successfully avoid conflicts, applications implementing dynamic data ownership must ensure that the following conditions are met:

With workflow partitioning, only the current owner of the row can push the ownership of the row to the next site by updating the "identifier" columns. No site is given ownership unless another site has given up ownership; thus ensuring there is never more than one owner.

Because the flow of work is ordered, ordering conflicts can be resolved by applying the change from the site that occurs latest in the flow of work. Any ordering conflicts can be resolved using a form of the PRIORITY conflict resolution method, where the priority value increases with each step in the work flow process.

The PRIORITY conflict resolution method successfully converges for more than one master as long as the priority value is always increasing.

Token Passing

Token passing uses a more generalized approach to meeting these criteria. To implement token passing, instead of the "identifier" columns, your replicated tables must have owner and epoch columns. The owner column stores the global database name of the site currently believed to own the row, and the epoch column stores a number that is incremented each time ownership of the row changes.
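For example, the supporting columns might be added to a replicated table as follows; the table name and column sizes are hypothetical:

ALTER TABLE sales.emp ADD (
  owner VARCHAR2(128),  -- global database name of the owning site
  epoch NUMBER          -- incremented by one on each change of ownership
);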

Once you have designed a token passing mechanism, you can use it to implement a variety of forms of dynamic partitioning of data ownership, including workflow.

You should design your application to implement token passing for you automatically. You should not allow the owner or epoch columns to be updated outside this application.

Whenever you attempt to update a row, your application should:

  1. Locate the current owner of the row.
  2. Lock the row to prevent updates while ownership is changing.
  3. Grab ownership of the row.
  4. Perform the update. (Oracle releases the lock when you commit your transaction.)

For example, Figure 7-6 illustrates how ownership of employee 100 passes from the ACCT_SF database to the ACCT_NY database.

Figure 7-6 Grabbing the Token

Locating the Owner of a Row

To obtain ownership, the ACCT_NY database uses a simple recursive algorithm to locate the owner of the row. The pseudo code for this algorithm is shown below:

-- Pseudo code for locating the token owner.
-- This is for a table TABLE_NAME with primary key PK.
-- Initial call should initialize loc_epoch to 0 and loc_owner
-- to the local global name.
get_owner(PK IN primary_key_type, loc_epoch IN OUT NUMBER, 
          loc_owner IN OUT VARCHAR2)
{
  -- use dynamic SQL (dbms_sql) to perform a select similar to
  -- the following:
  SELECT owner, epoch into rmt_owner, rmt_epoch
     FROM TABLE_NAME@loc_owner
     WHERE primary_key = PK FOR UPDATE;
  IF rmt_owner = loc_owner AND rmt_epoch >= loc_epoch THEN
   loc_owner := rmt_owner;
   loc_epoch := rmt_epoch;
   RETURN;
  ELSIF rmt_epoch >= loc_epoch THEN 
   get_owner(PK, rmt_epoch, rmt_owner);
   loc_owner := rmt_owner;
   loc_epoch := rmt_epoch;
   RETURN;
  ELSE
   raise_application_error(-20000, 'No owner for row');
  END IF;
}

Obtaining Ownership

After locating the owner of the row, the ACCT_NY site gets ownership from the ACCT_SF site by completing the following steps:

  1. Lock the row at the SF site to prevent any changes from occurring while ownership is being exchanged.
  2. Synchronously update the owner information at both the SF and NY sites. This ensures that only one site considers itself to be the owner at all times. The update at the SF site should not be replicated; disable replication for this update with DBMS_REPUTIL.REPLICATION_OFF. The replicated change of ownership at the NY site in step 4 will ultimately be propagated to all other sites in the replicated environment (including the SF site, where it will have no effect).
  3. Update the row information at the new owner site, NY, with the information from the current owner site, SF. This data is guaranteed to be the most recent. This time, the change at the NY site should not be replicated. Any queued changes to this data at the SF site will be propagated to all other sites in the usual manner. When the SF change is propagated to NY, it will be ignored because of the values of the epoch numbers, as described in the next step.
  4. Update the epoch number at the new owner site to be one greater than the value at the previous site. Perform this update at the new owner only, and then asynchronously propagate this update to the other master sites. Incrementing the epoch number at the new owner site prevents ordering conflicts.

    When the SF changes (that were in the deferred queue in Step 2 above) are ultimately propagated to the NY site, the NY site will ignore them because they will have a lower epoch number than the epoch number at the NY site for the same data.

    As another example, if the HQ site received the SF changes after receiving the NY changes, the HQ site would ignore the SF changes because the changes applied from the NY site would have the greater epoch number.
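The following PL/SQL sketch shows one way to carry out these steps from the NY site. It assumes a replicated table SALES.EMP with OWNER and EPOCH columns, a database link ACCT_SF to the current owner site, and a SAL column standing in for the row data; all of these names are hypothetical. Calling DBMS_REPUTIL.REPLICATION_OFF through the database link disables replication in the remote session used by that link.

DECLARE
  v_sal   NUMBER;
  v_epoch NUMBER;
BEGIN
  -- Step 1: lock the row at the SF site and capture its current state.
  SELECT sal, epoch INTO v_sal, v_epoch
    FROM sales.emp@acct_sf
    WHERE empno = 100
    FOR UPDATE;

  -- Step 2: update the owner at the SF site without replicating the change.
  dbms_reputil.replication_off@acct_sf;
  UPDATE sales.emp@acct_sf SET owner = 'ACCT_NY' WHERE empno = 100;
  dbms_reputil.replication_on@acct_sf;

  -- Step 3: refresh the local row with the most recent data from SF,
  -- again without replicating the change.
  dbms_reputil.replication_off;
  UPDATE sales.emp SET sal = v_sal WHERE empno = 100;
  dbms_reputil.replication_on;

  -- Step 4: take ownership and increment the epoch in a single replicated
  -- update; this change propagates asynchronously to all other sites.
  UPDATE sales.emp
    SET owner = 'ACCT_NY', epoch = v_epoch + 1
    WHERE empno = 100;

  COMMIT;  -- releases the locks at both sites
END;
/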

Applying the Change

You should design your application to implement this method of token passing for you automatically whenever you perform an update. You should not allow the owner or epoch columns to be updated outside this application. The lock that you grab when you change ownership is released when you apply your actual update. The changed information, along with the updated owner and epoch information, will be asynchronously propagated to the other sites in the usual manner.

Modifying Tables without Replicating the Modifications

You may encounter a situation where you need to modify a replicated object, but you do not want this modification replicated to the other sites in the replicated environment. For example, you might want to disable replication in the following situations:

To modify tables without replicating the modifications, use the REPLICATION_ON and REPLICATION_OFF procedures in the DBMS_REPUTIL package. These procedures take no arguments and are used as flags by the generated replication triggers.

Note: To enable and disable replication, you must have the EXECUTE privilege on the DBMS_REPUTIL package.

Disabling the Advanced Replication Facility

The DBMS_REPUTIL.REPLICATION_OFF procedure sets the state of an internal replication variable for the current session to FALSE. Because all replicated triggers check the state of this variable before queuing any transactions, modifications made to the replicated tables that use row-level replication do not result in any queued deferred transactions.

Attention: Turning replication on or off affects only the current session. That is, other users currently connected to the same server are not restricted from placing committed changes in the deferred transaction queue.

If you are using procedural replication, you should call REPLICATION_OFF at the start of your procedure, as shown in the following example. This ensures that the advanced replication facility does not attempt to use row-level replication to propagate the changes that you make.

CREATE OR REPLACE PACKAGE update_pkg AS
  PROCEDURE update_emp(adjustment IN NUMBER);
END;
/
CREATE OR REPLACE PACKAGE BODY update_pkg AS
  PROCEDURE update_emp(adjustment IN NUMBER) IS
  BEGIN
   -- turn off row-level replication for set update
   dbms_reputil.replication_off;
   UPDATE emp . . .;
   -- re-enable replication
   dbms_reputil.replication_on;
  EXCEPTION WHEN OTHERS THEN
   . . .
   dbms_reputil.replication_on;
  END;
END;
/

Re-enabling the Advanced Replication Facility

After resolving any conflicts, or at the end of your replicated procedure, be certain to call DBMS_REPUTIL.REPLICATION_ON to resume normal replication of changes to your replicated tables or snapshots. This procedure takes no arguments. Calling REPLICATION_ON sets the internal replication variable to TRUE.

Triggers and Replication

If you have defined a replicated trigger on a replicated table, you may need to ensure that the trigger fires only once for each change that you make. Typically, you will only want the trigger to fire when the change is first made, and you will not want the remote trigger to fire when the change is replicated to the remote site.

You should check the value of the DBMS_REPUTIL.FROM_REMOTE package variable at the start of your trigger. The trigger should update the table only if the value of this variable is FALSE.
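For example, a trigger might guard its body as in the following sketch; the trigger, table, and audit-table names are hypothetical:

CREATE OR REPLACE TRIGGER sales.emp_audit_trg
  AFTER UPDATE ON sales.emp
  FOR EACH ROW
BEGIN
  -- Fire only for changes made at this site, not for changes that
  -- arrive from a remote site through replication.
  IF NOT dbms_reputil.from_remote THEN
    INSERT INTO sales.emp_audit (empno, changed_on)
      VALUES (:NEW.empno, SYSDATE);
  END IF;
END;
/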

Alternatively, you can disable replication at the start of the trigger and re-enable it at the end of the trigger when modifying rows other than the one that caused the trigger to fire. Using this method, only the original change is replicated to the remote sites. Then the replicated trigger will fire at each remote site. Any updates performed by the replicated trigger will not be pushed to any other sites.

Using this approach, conflict resolution is not invoked. Therefore, you must ensure that the changes resulting from the trigger do not affect the consistency of the data.

Enabling/Disabling Replication for Snapshots

To disable all local replication triggers for snapshots at your current site, set the internal refresh variable to TRUE by calling SET_I_AM_A_REFRESH, as shown in the following example:

DBMS_SNAPSHOT.SET_I_AM_A_REFRESH(value => TRUE);

To re-enable the triggers, set the internal refresh variable to FALSE, as shown below:

DBMS_SNAPSHOT.SET_I_AM_A_REFRESH(value => FALSE);

To determine the value of the internal refresh variable, call the I_AM_A_REFRESH function as shown below:

ref_stat := DBMS_SNAPSHOT.I_AM_A_REFRESH;



