
12 Managing Staging and Propagation

This chapter provides instructions for managing ANYDATA queues, propagations, and messaging environments.

This chapter contains these topics:

Each task described in this chapter should be completed by a Streams administrator who has been granted the appropriate privileges, unless specified otherwise.

Managing ANYDATA Queues

An ANYDATA queue stages messages whose payloads are of ANYDATA type. Therefore, an ANYDATA queue can stage a message with a payload of nearly any type, if the payload is wrapped in an ANYDATA wrapper. Each Streams capture process, apply process, and messaging client is associated with one ANYDATA queue, and each Streams propagation is associated with one ANYDATA source queue and one ANYDATA destination queue.

This section contains instructions for completing the following tasks related to ANYDATA queues:

Creating an ANYDATA Queue

The easiest way to create an ANYDATA queue is to use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. This procedure enables you to specify the following settings for the ANYDATA queue it creates:

  • The queue table for the queue

  • A storage clause for the queue table

  • The queue name

  • A queue user that will be configured as a secure queue user of the queue and granted ENQUEUE and DEQUEUE privileges on the queue

  • A comment for the queue

If the specified queue table does not exist, then SET_UP_QUEUE creates it, so a single call to the procedure can create both an ANYDATA queue and the queue table used by the queue. If the specified queue table exists, then the existing queue table is used for the new queue. If you do not specify a queue table when you create the queue, then streams_queue_table is used by default.

For example, run the following procedure to create an ANYDATA queue with the SET_UP_QUEUE procedure:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'hr');
END;
/

Running this procedure performs the following actions:

  • Creates a queue table named streams_queue_table. The queue table is created only if it does not already exist. Queues based on the queue table stage messages of ANYDATA type. Queue table names can be a maximum of 24 bytes.

  • Creates a queue named streams_queue. The queue is created only if it does not already exist. Queue names can be a maximum of 24 bytes.

  • Specifies that the streams_queue queue is based on the strmadmin.streams_queue_table queue table.

  • Configures the hr user as a secure queue user of the queue, and grants this user ENQUEUE and DEQUEUE privileges on the queue.

  • Starts the queue.

Default settings are used for the parameters that are not explicitly set in the SET_UP_QUEUE procedure.

When the SET_UP_QUEUE procedure creates a queue table, the following DBMS_AQADM.CREATE_QUEUE_TABLE parameter settings are specified:

  • If the database is Oracle Database 10g Release 2 or later, the sort_list setting is commit_time. If the database is a release prior to Oracle Database 10g Release 2, the sort_list setting is enq_time.

  • The multiple_consumers setting is true.

  • The message_grouping setting is transactional.

  • The secure setting is true.

The other parameters in the CREATE_QUEUE_TABLE procedure are set to their default values.

You can use the CREATE_QUEUE_TABLE procedure in the DBMS_AQADM package to create a queue table of ANYDATA type with different properties than the default properties specified by the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. After you create the queue table with the CREATE_QUEUE_TABLE procedure, you can create a queue that uses the queue table. To do so, specify the queue table in the queue_table parameter of the SET_UP_QUEUE procedure.

Similarly, you can use the CREATE_QUEUE procedure in the DBMS_AQADM package to create a queue instead of SET_UP_QUEUE. Use CREATE_QUEUE if you require custom settings for the queue. For example, use CREATE_QUEUE to specify a custom retry delay or retention time. If you use CREATE_QUEUE, then you must start the queue manually.
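
For example, the following is a minimal sketch of this approach; the queue table name, queue name, retry delay, and retention time are illustrative values, not requirements:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.custom_queue_table',
    queue_payload_type => 'SYS.ANYDATA',
    multiple_consumers => true,
    secure             => true);
  DBMS_AQADM.CREATE_QUEUE(
    queue_name     => 'strmadmin.custom_queue',
    queue_table    => 'strmadmin.custom_queue_table',
    retry_delay    => 5,       -- custom retry delay in seconds
    retention_time => 86400);  -- retain processed messages for one day
  -- Queues created with CREATE_QUEUE must be started manually
  DBMS_AQADM.START_QUEUE(queue_name => 'strmadmin.custom_queue');
END;
/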


Note:

A message cannot be enqueued into a queue unless a subscriber who can dequeue the message is configured.

Enabling a User to Perform Operations on a Secure Queue

For a user to perform queue operations, such as enqueue and dequeue, on a secure queue, the user must be configured as a secure queue user of the queue. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create the secure queue, then the queue owner and the user specified by the queue_user parameter are configured as secure users of the queue automatically. If you want to enable other users to perform operations on the queue, then you can configure these users in one of the following ways:

  • Run SET_UP_QUEUE and specify a queue_user, as shown in the sketch after this list. Queue creation is skipped if the queue already exists, but a new queue user is configured if one is specified.

  • Associate the user with an AQ agent manually.
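
The following is a minimal sketch of the first option. It assumes that the strmadmin.streams_queue queue from "Creating an ANYDATA Queue" already exists, so the call only configures the oe user as a secure queue user:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'oe');  -- queue exists, so only the queue user is configured
END;
/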

The following example illustrates associating a user with an AQ agent manually. Suppose you want to enable the oe user to perform queue operations on the streams_queue created in "Creating an ANYDATA Queue". The following steps configure the oe user as a secure queue user of streams_queue:

  1. Connect as an administrative user who can create AQ agents and alter users.

  2. Create an agent:

    EXEC DBMS_AQADM.CREATE_AQ_AGENT(agent_name => 'streams_queue_agent');
    
  3. If the user must be able to dequeue messages from the queue, then make the agent a subscriber of the secure queue:

    DECLARE
      subscriber SYS.AQ$_AGENT;
    BEGIN
      subscriber :=  SYS.AQ$_AGENT('streams_queue_agent', NULL, NULL);  
      DBMS_AQADM.ADD_SUBSCRIBER(
        queue_name          =>  'strmadmin.streams_queue',
        subscriber          =>  subscriber,
        rule                =>  NULL,
        transformation      =>  NULL);
    END;
    /
    
  4. Associate the user with the agent:

    BEGIN
      DBMS_AQADM.ENABLE_DB_ACCESS(
        agent_name  => 'streams_queue_agent',
        db_username => 'oe');
    END;
    /
    
  5. Grant the user EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package, if the user is not already granted these privileges:

    GRANT EXECUTE ON DBMS_STREAMS_MESSAGING TO oe;
    
    GRANT EXECUTE ON DBMS_AQ TO oe;
    

When these steps are complete, the oe user is a secure queue user of the streams_queue queue. To perform queue operations, such as enqueue and dequeue, the user still must be granted the specific privileges that those operations require.
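
For example, a sketch of such a grant using the GRANT_QUEUE_PRIVILEGE procedure in the DBMS_AQADM package:

BEGIN
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
    privilege    => 'ALL',   -- grants both ENQUEUE and DEQUEUE
    queue_name   => 'strmadmin.streams_queue',
    grantee      => 'oe',
    grant_option => false);
END;
/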


Disabling a User from Performing Operations on a Secure Queue

You might want to disable a user from performing queue operations on a secure queue for the following reasons:

  • You dropped a capture process, but you did not drop the queue that was used by the capture process, and you do not want the user who was the capture user to be able to perform operations on the remaining secure queue.

  • You dropped an apply process, but you did not drop the queue that was used by the apply process, and you do not want the user who was the apply user to be able to perform operations on the remaining secure queue.

  • You used the ALTER_APPLY procedure in the DBMS_APPLY_ADM package to change the apply_user for an apply process, and you do not want the old apply_user to be able to perform operations on the apply process queue.

  • You enabled a user to perform operations on a secure queue by completing the steps described in Enabling a User to Perform Operations on a Secure Queue, but you no longer want this user to be able to perform operations on the secure queue.

To disable a secure queue user, you can revoke ENQUEUE and DEQUEUE privileges on the queue from the user, or you can run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package. For example, suppose you want to disable the oe user from performing queue operations on the streams_queue created in "Creating an ANYDATA Queue".


Attention:

If an AQ agent is used for multiple secure queues, then running DISABLE_DB_ACCESS for the agent prevents the user associated with the agent from performing operations on all of these queues.

  1. Run the following procedure to disable the oe user from performing queue operations on the secure queue streams_queue:

    BEGIN
      DBMS_AQADM.DISABLE_DB_ACCESS(
        agent_name  => 'streams_queue_agent',
        db_username => 'oe');
    END;
    /
    
  2. If the agent is no longer needed, you can drop the agent:

    BEGIN
      DBMS_AQADM.DROP_AQ_AGENT(
        agent_name  => 'streams_queue_agent');
    END;
    /
    
  3. Revoke privileges on the queue from the user, if the user no longer needs these privileges.

    BEGIN
      DBMS_AQADM.REVOKE_QUEUE_PRIVILEGE (
       privilege   => 'ALL',
       queue_name  => 'strmadmin.streams_queue',
       grantee     => 'oe');
    END;
    /
    

Removing an ANYDATA Queue

You use the REMOVE_QUEUE procedure in the DBMS_STREAMS_ADM package to remove an existing ANYDATA queue. When you run the REMOVE_QUEUE procedure, it waits until any existing messages in the queue are consumed. Next, it stops the queue, which means that no further enqueues into the queue or dequeues from the queue are allowed. When the queue is stopped, it drops the queue.

You can also drop the queue table for the queue if it is empty and is not used by another queue. To do so, specify true, the default, for the drop_unused_queue_table parameter.

In addition, you can drop any Streams clients that use the queue by setting the cascade parameter to true. By default, the cascade parameter is set to false.

For example, to remove an ANYDATA queue named streams_queue in the strmadmin schema and drop its empty queue table, run the following procedure:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_QUEUE(
    queue_name              => 'strmadmin.streams_queue',
    cascade                 => false,
    drop_unused_queue_table => true);
END;
/

In this case, because the cascade parameter is set to false, this procedure drops the streams_queue only if no Streams clients use the queue. If the cascade parameter is set to false and any Streams client uses the queue, then an error is raised.

Managing Streams Propagations and Propagation Jobs

A propagation propagates messages from a Streams source queue to a Streams destination queue. This section provides instructions for completing the following tasks:

In addition, you can use the features of Oracle Advanced Queuing (AQ) to manage Streams propagations.


Creating a Propagation Between Two ANYDATA Queues

You can use any of the following procedures to create a propagation between two ANYDATA queues:

  • The ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, ADD_GLOBAL_PROPAGATION_RULES, or ADD_SUBSET_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package

Each of these procedures in the DBMS_STREAMS_ADM package creates a propagation with the specified name if it does not already exist, creates either a positive rule set or negative rule set for the propagation if the propagation does not have such a rule set, and can add table rules, schema rules, or global rules to the rule set. The CREATE_PROPAGATION procedure creates a propagation, but does not create a rule set or rules for the propagation. However, the CREATE_PROPAGATION procedure enables you to specify an existing rule set to associate with the propagation, either as a positive or a negative rule set. All propagations are started automatically upon creation.

The following tasks must be completed before you create a propagation:

Example of Creating a Propagation Using DBMS_STREAMS_ADM

The following example runs the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to create a propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name              => 'hr.departments',
    streams_name            => 'strm01_propagation',
    source_queue_name       => 'strmadmin.strm_a_queue',
    destination_queue_name  => 'strmadmin.strm_b_queue@dbs2.net',
    include_dml             => true,
    include_ddl             => true,
    include_tagged_lcr      => false,
    source_database         => 'dbs1.net',
    inclusion_rule          => true,
    queue_to_queue          => true);
END;
/

Running this procedure performs the following actions:

  • Creates a propagation named strm01_propagation. The propagation is created only if it does not already exist.

  • Specifies that the propagation propagates LCRs from strm_a_queue in the current database to strm_b_queue in the dbs2.net database.

  • Specifies that the propagation uses the dbs2.net database link to propagate the LCRs, because the destination_queue_name parameter contains @dbs2.net.

  • Creates a positive rule set and associates it with the propagation because the inclusion_rule parameter is set to true. The rule set uses the evaluation context SYS.STREAMS$_EVALUATION_CONTEXT. The rule set name is system generated.

  • Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.departments table. The other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.departments table. The rule names are system generated.

  • Adds the two rules to the positive rule set associated with the propagation. The rules are added to the positive rule set because the inclusion_rule parameter is set to true.

  • Specifies that the propagation propagates an LCR only if it has a NULL tag, because the include_tagged_lcr parameter is set to false. This behavior is accomplished through the system-created rules for the propagation.

  • Specifies that the source database for the LCRs being propagated is dbs1.net, which might or might not be the current database. This propagation does not propagate LCRs in the source queue that have a different source database.

  • Creates a propagation job for the queue-to-queue propagation.


Note:

To use queue-to-queue propagation, the compatibility level must be 10.2.0 or higher for each database that contains a queue involved in the propagation.

Example of Creating a Propagation Using DBMS_PROPAGATION_ADM

The following example runs the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to create a propagation:

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'strm02_propagation',
    source_queue       => 'strmadmin.strm03_queue',
    destination_queue  => 'strmadmin.strm04_queue',
    destination_dblink => 'dbs2.net',
    rule_set_name      => 'strmadmin.strm01_rule_set',
    queue_to_queue     => true);
END;
/

Running this procedure performs the following actions:

  • Creates a propagation named strm02_propagation. A propagation with the same name must not exist.

  • Specifies that the propagation propagates messages from strm03_queue in the current database to strm04_queue in the dbs2.net database. Depending on the rules in the rule sets for the propagation, the propagated messages can be captured messages or user-enqueued messages, or both.

  • Specifies that the propagation uses the dbs2.net database link to propagate the messages.

  • Associates the propagation with an existing rule set named strm01_rule_set. This rule set is the positive rule set for the propagation.

  • Creates a propagation job for the queue-to-queue propagation.


Note:

To use queue-to-queue propagation, the compatibility level must be 10.2.0 or higher for each database that contains a queue involved in the propagation.

Starting a Propagation

You run the START_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to start an existing propagation. For example, the following procedure starts a propagation named strm01_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'strm01_propagation');
END;
/

Stopping a Propagation

You run the STOP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to stop an existing propagation. For example, the following procedure stops a propagation named strm01_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'strm01_propagation',
    force            => false);
END;
/

To clear the statistics for the propagation when it is stopped, set the force parameter to true. If there is a problem with a propagation, then stopping the propagation with the force parameter set to true and restarting the propagation might correct the problem. If the force parameter is set to false, then the statistics for the propagation are not cleared.

Altering the Schedule of a Propagation Job

To alter the schedule of an existing propagation job, use the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package. The following sections contain examples that alter the schedule of a propagation job for a queue-to-queue propagation and for a queue-to-dblink propagation. These examples set the propagation job to propagate messages every 15 minutes (900 seconds), with each propagation lasting 300 seconds, and a 25-second wait before new messages in a completely propagated queue are propagated.


Altering the Schedule of a Propagation Job for a Queue-to-Queue Propagation

To alter the schedule of a propagation job for a queue-to-queue propagation that propagates messages from the strmadmin.strm_a_queue source queue to the strmadmin.strm_b_queue destination queue using the dbs2.net database link, run the following procedure:

BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
   queue_name        => 'strmadmin.strm_a_queue',
   destination       => 'dbs2.net',
   duration          => 300,
   next_time         => 'SYSDATE + 900/86400',
   latency           => 25,
   destination_queue => 'strmadmin.strm_b_queue'); 
END;
/

Because each queue-to-queue propagation has its own propagation job, this procedure alters only the schedule of the propagation that propagates messages between the two specified queues. To alter the propagation schedule of a queue-to-queue propagation, the destination_queue parameter must specify the name of the destination queue.

Altering the Schedule of a Propagation Job for a Queue-to-Dblink Propagation

To alter the schedule of a propagation job for a queue-to-dblink propagation that propagates messages from the strmadmin.streams_queue source queue using the dbs3.net database link, run the following procedure:

BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
   queue_name  => 'strmadmin.streams_queue',
   destination => 'dbs3.net',
   duration    => 300,
   next_time   => 'SYSDATE + 900/86400',
   latency     => 25); 
END;
/

Because the propagation is a queue-to-dblink propagation, the destination_queue parameter is not specified. Completing this task affects all queue-to-dblink propagations that propagate messages from the source queue to all destination queues that use the dbs3.net database link.

Specifying the Rule Set for a Propagation

You can specify one positive rule set and one negative rule set for a propagation. The propagation propagates a message if it evaluates to TRUE for at least one rule in the positive rule set and discards a message if it evaluates to TRUE for at least one rule in the negative rule set. The negative rule set is evaluated before the positive rule set.

Specifying a Positive Rule Set for a Propagation

You specify an existing rule set as the positive rule set for an existing propagation using the rule_set_name parameter in the ALTER_PROPAGATION procedure. This procedure is in the DBMS_PROPAGATION_ADM package.

For example, the following procedure sets the positive rule set for a propagation named strm01_propagation to strm02_rule_set.

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name  => 'strm01_propagation',
    rule_set_name     => 'strmadmin.strm02_rule_set');
END;
/

Specifying a Negative Rule Set for a Propagation

You specify an existing rule set as the negative rule set for an existing propagation using the negative_rule_set_name parameter in the ALTER_PROPAGATION procedure. This procedure is in the DBMS_PROPAGATION_ADM package.

For example, the following procedure sets the negative rule set for a propagation named strm01_propagation to strm03_rule_set.

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name        => 'strm01_propagation',
    negative_rule_set_name  => 'strmadmin.strm03_rule_set');
END;
/

Adding Rules to the Rule Set for a Propagation

To add rules to the rule set of a propagation, you can run one of the following procedures in the DBMS_STREAMS_ADM package:

  • ADD_TABLE_PROPAGATION_RULES

  • ADD_SUBSET_PROPAGATION_RULES

  • ADD_SCHEMA_PROPAGATION_RULES

  • ADD_GLOBAL_PROPAGATION_RULES

Excluding the ADD_SUBSET_PROPAGATION_RULES procedure, these procedures can add rules to the positive rule set or negative rule set for a propagation. The ADD_SUBSET_PROPAGATION_RULES procedure can add rules only to the positive rule set for a propagation.

Adding Rules to the Positive Rule Set for a Propagation

The following example runs the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the positive rule set of an existing propagation named strm01_propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name              => 'hr.locations',
    streams_name            => 'strm01_propagation',
    source_queue_name       => 'strmadmin.strm_a_queue',
    destination_queue_name  => 'strmadmin.strm_b_queue@dbs2.net',
    include_dml             => true,
    include_ddl             => true,
    source_database         => 'dbs1.net',
    inclusion_rule          => true);
END;
/

Running this procedure performs the following actions:

  • Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.locations table. The other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.locations table. The rule names are system generated.

  • Specifies that both rules evaluate to TRUE only for LCRs whose changes originated at the dbs1.net source database.

  • Adds the two rules to the positive rule set associated with the propagation because the inclusion_rule parameter is set to true.

Adding Rules to the Negative Rule Set for a Propagation

The following example runs the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the negative rule set of an existing propagation named strm01_propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name              => 'hr.departments',
    streams_name            => 'strm01_propagation',
    source_queue_name       => 'strmadmin.strm_a_queue',
    destination_queue_name  => 'strmadmin.strm_b_queue@dbs2.net',
    include_dml             => true,
    include_ddl             => true,
    source_database         => 'dbs1.net',
    inclusion_rule          => false);
END;
/

Running this procedure performs the following actions:

  • Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.departments table, and the other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.departments table. The rule names are system generated.

  • Specifies that both rules evaluate to TRUE only for LCRs whose changes originated at the dbs1.net source database.

  • Adds the two rules to the negative rule set associated with the propagation because the inclusion_rule parameter is set to false.

Removing a Rule from the Rule Set for a Propagation

You remove a rule from the rule set for an existing propagation by running the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the following procedure removes a rule named departments3 from the positive rule set of a propagation named strm01_propagation.

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => 'departments3',
    streams_type     => 'propagation',
    streams_name     => 'strm01_propagation',
    drop_unused_rule => true,
    inclusion_rule   => true);
END;
/

In this example, the drop_unused_rule parameter in the REMOVE_RULE procedure is set to true, which is the default setting. Therefore, if the rule being removed is not in any other rule set, then it will be dropped from the database. If the drop_unused_rule parameter is set to false, then the rule is removed from the rule set, but it is not dropped from the database even if it is not in any other rule set.

If the inclusion_rule parameter is set to false, then the REMOVE_RULE procedure removes the rule from the negative rule set for the propagation, not the positive rule set.

To remove all of the rules in the rule set for the propagation, specify NULL for the rule_name parameter when you run the REMOVE_RULE procedure.
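
For example, the following sketch removes all of the rules in the positive rule set for the strm01_propagation propagation; because drop_unused_rule defaults to true, rules that are not in another rule set are also dropped:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name      => NULL,
    streams_type   => 'propagation',
    streams_name   => 'strm01_propagation',
    inclusion_rule => true);
END;
/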

Removing a Rule Set for a Propagation

You specify that you want to remove a rule set from a propagation using the ALTER_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. This procedure can remove the positive rule set, negative rule set, or both. Specify true for the remove_rule_set parameter to remove the positive rule set for the propagation. Specify true for the remove_negative_rule_set parameter to remove the negative rule set for the propagation.

For example, the following procedure removes both the positive and the negative rule set from a propagation named strm01_propagation.

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name         => 'strm01_propagation',
    remove_rule_set          => true,
    remove_negative_rule_set => true);
END;
/

Note:

If a propagation does not have a positive or negative rule set, then the propagation propagates all messages in the source queue to the destination queue.

Dropping a Propagation

You run the DROP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to drop an existing propagation. For example, the following procedure drops a propagation named strm01_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
    propagation_name      => 'strm01_propagation',
    drop_unused_rule_sets => true);
END;
/

Because the drop_unused_rule_sets parameter is set to true, this procedure also drops any rule sets used by the propagation strm01_propagation, unless a rule set is used by another Streams client. If the drop_unused_rule_sets parameter is set to true, then both the positive rule set and negative rule set for the propagation might be dropped. If this procedure drops a rule set, then it also drops any rules in the rule set that are not in another rule set.


Note:

When you drop a propagation, the propagation job used by the propagation is dropped automatically, if no other propagations are using the propagation job.

Managing a Streams Messaging Environment

Streams enables messaging with queues of type ANYDATA. These queues stage user messages whose payloads are of ANYDATA type, and an ANYDATA payload can be a wrapper for payloads of different datatypes.

This section provides instructions for completing the following tasks:


Note:

The examples in this section assume that you have configured a Streams administrator at each database.


Wrapping User Message Payloads in an ANYDATA Wrapper and Enqueuing Them

You can wrap almost any type of payload in an ANYDATA payload. The following sections provide examples of enqueuing messages into, and dequeuing messages from, an ANYDATA queue.

The following steps illustrate how to wrap payloads of various types in an ANYDATA payload.

  1. Connect as an administrative user who can create users, grant privileges, create tablespaces, and alter users at the dbs1.net database.

  2. Grant EXECUTE privilege on the DBMS_AQ package to the oe user so that this user can run the ENQUEUE and DEQUEUE procedures in that package:

    GRANT EXECUTE ON DBMS_AQ TO oe;
    
  3. Connect as the Streams administrator, as in the following example:

    CONNECT strmadmin/strmadminpw@dbs1.net
    
  4. Create an ANYDATA queue if one does not already exist.

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table  => 'oe_q_table_any',
        queue_name   => 'oe_q_any',
        queue_user   => 'oe');
    END;
    /
    

    The oe user is configured automatically as a secure queue user of the oe_q_any queue and is given ENQUEUE and DEQUEUE privileges on the queue. In addition, an AQ agent named oe is configured and is associated with the oe user. However, a message cannot be enqueued into a queue unless a subscriber who can dequeue the message is configured.

  5. Add a subscriber for the oe_q_any queue. This subscriber will perform explicit dequeues of messages.

    DECLARE
      subscriber SYS.AQ$_AGENT;
    BEGIN
      subscriber :=  SYS.AQ$_AGENT('OE', NULL, NULL);  
      SYS.DBMS_AQADM.ADD_SUBSCRIBER(
        queue_name  =>  'strmadmin.oe_q_any',
        subscriber  =>  subscriber);
    END;
    /
    
  6. Connect as the oe user.

    CONNECT oe/oe@dbs1.net
    
  7. Create a procedure that takes as an input parameter an object of ANYDATA type and enqueues a message containing the payload into an existing ANYDATA queue.

    CREATE OR REPLACE PROCEDURE oe.enq_proc (payload ANYDATA) 
    IS
      enqopt     DBMS_AQ.ENQUEUE_OPTIONS_T;
      mprop      DBMS_AQ.MESSAGE_PROPERTIES_T;
      enq_msgid  RAW(16);
    BEGIN
      mprop.SENDER_ID := SYS.AQ$_AGENT('OE', NULL, NULL); 
      DBMS_AQ.ENQUEUE(
        queue_name          =>  'strmadmin.oe_q_any',
        enqueue_options     =>  enqopt,
        message_properties  =>  mprop,
        payload             =>  payload,
        msgid               =>  enq_msgid);
    END;
    /
    
  8. Run the procedure you created in Step 7, specifying the appropriate ANYDATA conversion function (ConvertVarchar2, ConvertNumber, ConvertObject, and so on) for each payload type. The following commands enqueue messages of various types.

    VARCHAR2 type:

    EXEC oe.enq_proc(ANYDATA.ConvertVarchar2('Chemicals - SW'));
    COMMIT;
    

    NUMBER type:

    EXEC oe.enq_proc(ANYDATA.ConvertNumber('16'));
    COMMIT;
    

    User-defined type:

    BEGIN
      oe.enq_proc(ANYDATA.ConvertObject(oe.cust_address_typ(
        '1646 Brazil Blvd','361168','Chennai','Tam', 'IN')));
    END;
    /
    COMMIT;
    

See Also:

"Viewing the Contents of User-Enqueued Messages in a Queue" for information about viewing the contents of these enqueued messages

Dequeuing a Payload that Is Wrapped in an ANYDATA Payload

The following steps illustrate how to dequeue a payload wrapped in an ANYDATA payload. This example assumes that you have completed the steps in "Wrapping User Message Payloads in an ANYDATA Wrapper and Enqueuing Them".

To dequeue messages, you must know the consumer of the messages. To find the consumer for the messages in a queue, connect as the owner of the queue and query the AQ$queue_table_name view, where queue_table_name is the name of the queue table. For example, to find the consumers of the messages in the oe_q_any queue, run the following query:

CONNECT strmadmin/strmadminpw@dbs1.net

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME FROM AQ$OE_Q_TABLE_ANY;

  1. Connect as the oe user:

    CONNECT oe/oe@dbs1.net
    
  2. Create a procedure that takes as an input the consumer of the messages you want to dequeue. The following example procedure dequeues messages of oe.cust_address_typ and prints the contents of the messages.

    CREATE OR REPLACE PROCEDURE oe.get_cust_address (
    consumer IN VARCHAR2) AS
      address         OE.CUST_ADDRESS_TYP;
      deq_address     ANYDATA; 
      msgid           RAW(16); 
      deqopt          DBMS_AQ.DEQUEUE_OPTIONS_T; 
      mprop           DBMS_AQ.MESSAGE_PROPERTIES_T;
      new_addresses   BOOLEAN := true;
      next_trans      EXCEPTION;
      no_messages     EXCEPTION; 
      pragma exception_init (next_trans, -25235);
      pragma exception_init (no_messages, -25228);
      num_var         pls_integer;
    BEGIN
         deqopt.consumer_name := consumer;
         deqopt.wait := 1;
         WHILE (new_addresses) LOOP
         BEGIN
          DBMS_AQ.DEQUEUE( 
             queue_name          =>  'strmadmin.oe_q_any',
             dequeue_options     =>  deqopt,
             message_properties  =>  mprop,
             payload             =>  deq_address,
             msgid               =>  msgid);
          deqopt.navigation := DBMS_AQ.NEXT;
          DBMS_OUTPUT.PUT_LINE('****');
          IF (deq_address.GetTypeName() = 'OE.CUST_ADDRESS_TYP') THEN
              DBMS_OUTPUT.PUT_LINE('Message TYPE is: ' ||
                                    deq_address.GetTypeName()); 
              num_var := deq_address.GetObject(address);    
              DBMS_OUTPUT.PUT_LINE(' **** CUSTOMER ADDRESS **** ');
              DBMS_OUTPUT.PUT_LINE(address.street_address);
              DBMS_OUTPUT.PUT_LINE(address.postal_code);
              DBMS_OUTPUT.PUT_LINE(address.city);
              DBMS_OUTPUT.PUT_LINE(address.state_province);
              DBMS_OUTPUT.PUT_LINE(address.country_id);
          ELSE
             DBMS_OUTPUT.PUT_LINE('Message TYPE is: ' ||    
                                   deq_address.GetTypeName()); 
          END IF;  
        COMMIT;   
        EXCEPTION
          WHEN next_trans THEN
          deqopt.navigation := DBMS_AQ.NEXT_TRANSACTION;
          WHEN no_messages THEN
            new_addresses := false;
            DBMS_OUTPUT.PUT_LINE('No more messages');
         END;
      END LOOP; 
    END;
    /
    
  3. Run the procedure you created in Step 2 and specify the consumer of the messages you want to dequeue, as in the following example:

    SET SERVEROUTPUT ON SIZE 100000
    EXEC oe.get_cust_address('OE');
    

Configuring a Messaging Client and Message Notification

This section contains instructions for configuring the following elements in a database:

  • An enqueue procedure that enqueues messages into an ANYDATA queue at a database. In this example, the enqueue procedure uses a trigger to enqueue a message every time a row is inserted into the oe.orders table.

  • A messaging client that can dequeue user-enqueued messages based on rules. In this example, the messaging client uses a rule so that it dequeues only messages that involve the oe.orders table. The messaging client uses the DEQUEUE procedure in the DBMS_STREAMS_MESSAGING package to dequeue one message at a time and display the order number for the order.

  • Message notification for the messaging client. In this example, a notification is sent to an email address when a message is enqueued into the queue used by the messaging client. The message can be dequeued by the messaging client because the message satisfies the rule sets of the messaging client.

You can query the DBA_STREAMS_MESSAGE_CONSUMERS data dictionary view for information about existing messaging clients and notifications.
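
For example, the following query is a sketch that lists each messaging client, its queue, and its notification settings; the COLUMN formatting is illustrative:

COLUMN STREAMS_NAME HEADING 'Messaging Client' FORMAT A16
COLUMN QUEUE_NAME HEADING 'Queue' FORMAT A28
COLUMN NOTIFICATION_TYPE HEADING 'Notification|Type' FORMAT A12
COLUMN NOTIFICATION_ACTION HEADING 'Notification|Action' FORMAT A25

SELECT STREAMS_NAME, QUEUE_OWNER || '.' || QUEUE_NAME QUEUE_NAME,
       NOTIFICATION_TYPE, NOTIFICATION_ACTION
  FROM DBA_STREAMS_MESSAGE_CONSUMERS;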

Complete the following steps to configure a messaging client and message notification:

  1. Connect as an administrative user who can grant privileges and execute subprograms in supplied packages.

  2. Set the host name used to send the email, the mail port, and the email account that sends email messages for email notifications using the DBMS_AQELM package. The following example sets the mail host name to smtp.mycompany.com, the mail port to 25, and the email account to Mary.Smith@mycompany.com:

    BEGIN
      DBMS_AQELM.SET_MAILHOST('smtp.mycompany.com') ;
      DBMS_AQELM.SET_MAILPORT(25) ;
      DBMS_AQELM.SET_SENDFROM('Mary.Smith@mycompany.com');
    END;
    /
    

    You can use procedures in the DBMS_AQELM package to determine the current mail host, mail port, and send from settings for a database. For example, to determine the current mail host for a database, use the DBMS_AQELM.GET_MAILHOST procedure.
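
    For example, a minimal sketch that prints the current mail host setting:

    SET SERVEROUTPUT ON
    DECLARE
      host VARCHAR2(4000);
    BEGIN
      -- GET_MAILHOST returns the current setting through an OUT parameter
      DBMS_AQELM.GET_MAILHOST(host);
      DBMS_OUTPUT.PUT_LINE('Current mail host: ' || host);
    END;
    /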

  3. Grant the necessary privileges to the users who will create the messaging client, enqueue and dequeue messages, and specify message notifications. In this example, the oe user performs all of these tasks.

    GRANT EXECUTE ON DBMS_AQ TO oe;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO oe;
    GRANT EXECUTE ON DBMS_STREAMS_MESSAGING TO oe;
    
    BEGIN 
      DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
        privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
        grantee      => 'oe', 
        grant_option => false);
    END;
    /
    
    BEGIN 
      DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
        privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ, 
        grantee      => 'oe', 
        grant_option => false);
    END;
    /
    
    BEGIN 
      DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
        privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
        grantee      => 'oe', 
        grant_option => false);
    END;
    /
    
  4. Connect as the oe user:

    CONNECT oe/oe
    
  5. Create an ANYDATA queue using SET_UP_QUEUE, as in the following example:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table  => 'oe.notification_queue_table',
        queue_name   => 'oe.notification_queue');
    END;
    /
    
  6. Create the types for the user-enqueued messages, as in the following example:

    CREATE TYPE oe.user_msg AS OBJECT(
      object_name    VARCHAR2(30),
      object_owner   VARCHAR2(30),
      message        VARCHAR2(50));
    /
    
  7. Create a trigger that enqueues a message into the queue whenever an order is inserted into the oe.orders table, as in the following example:

    CREATE OR REPLACE TRIGGER oe.order_insert AFTER INSERT
    ON oe.orders FOR EACH ROW
    DECLARE
      msg            oe.user_msg;
      str            VARCHAR2(2000);
    BEGIN
      str := 'New Order - ' || :NEW.ORDER_ID || ' Order ID';
      msg  := oe.user_msg(
                 object_name   => 'ORDERS',
                 object_owner  => 'OE',
                 message       => str);
      DBMS_STREAMS_MESSAGING.ENQUEUE (
        queue_name   => 'oe.notification_queue',
        payload      => ANYDATA.CONVERTOBJECT(msg));
    END;
    /
    
  8. Create the messaging client that will dequeue messages from the queue and the rule used by the messaging client to determine which messages to dequeue, as in the following example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_MESSAGE_RULE (
        message_type   => 'oe.user_msg',
        rule_condition => ' :msg.OBJECT_OWNER = ''OE'' AND  ' ||
                          ' :msg.OBJECT_NAME = ''ORDERS'' ',
        streams_type   => 'dequeue',
        streams_name   => 'oe',
        queue_name     => 'oe.notification_queue');
    END;
    /
    
  9. Set the message notification to send email upon enqueue of messages that can be dequeued by the messaging client, as in the following example:

    BEGIN
      DBMS_STREAMS_ADM.SET_MESSAGE_NOTIFICATION (
        streams_name         => 'oe',
        notification_action  => 'Mary.Smith@mycompany.com',
        notification_type    => 'MAIL',
        include_notification => true,
        queue_name           => 'oe.notification_queue');
    END;
    /
    
  10. Create a PL/SQL procedure that dequeues messages using the messaging client, as in the following example:

    CREATE OR REPLACE PROCEDURE oe.deq_notification(consumer IN VARCHAR2) AS
      msg            ANYDATA;
      user_msg       oe.user_msg;
      num_var        PLS_INTEGER;
      more_messages  BOOLEAN := true;
      navigation     VARCHAR2(30);
    BEGIN
      navigation := 'FIRST MESSAGE';
      WHILE (more_messages) LOOP
        BEGIN
          DBMS_STREAMS_MESSAGING.DEQUEUE(
            queue_name   => 'oe.notification_queue',
            streams_name => consumer,
            payload      => msg,
            navigation   => navigation,
            wait         => DBMS_STREAMS_MESSAGING.NO_WAIT);
          IF msg.GETTYPENAME() = 'OE.USER_MSG' THEN
            num_var := msg.GETOBJECT(user_msg);
            DBMS_OUTPUT.PUT_LINE(user_msg.object_name);
            DBMS_OUTPUT.PUT_LINE(user_msg.object_owner);
            DBMS_OUTPUT.PUT_LINE(user_msg.message);
          END IF;
          navigation := 'NEXT MESSAGE';
          COMMIT;
        EXCEPTION WHEN SYS.DBMS_STREAMS_MESSAGING.ENDOFCURTRANS THEN
                    navigation := 'NEXT TRANSACTION';
                  WHEN DBMS_STREAMS_MESSAGING.NOMOREMSGS THEN
                    more_messages := false;
                    DBMS_OUTPUT.PUT_LINE('No more messages.');
                  WHEN OTHERS THEN
                    RAISE;  
        END;
      END LOOP;
    END;
    /
    
  11. Insert rows into the oe.orders table, as in the following example:

    INSERT INTO oe.orders VALUES(2521, 'direct', 144, 0, 922.57, 159, NULL);
    INSERT INTO oe.orders VALUES(2522, 'direct', 116, 0, 1608.29, 153, NULL);
    COMMIT;
    INSERT INTO oe.orders VALUES(2523, 'direct', 116, 0, 227.55, 155, NULL);
    COMMIT;
    

Message notification sends a message to the email address specified in Step 9 for each message that was enqueued. Each notification is an AQXmlNotification, which includes the following:

  • notification_options, which includes the following:

    • destination - The destination queue from which the message was dequeued

    • consumer_name - The name of the messaging client that dequeued the message

  • message_set - The set of message properties

The following example shows the AQXmlNotification format sent in an email notification:

<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://ns.oracle.com/AQ/schemas/envelope">
    <Body>
        <AQXmlNotification xmlns="http://ns.oracle.com/AQ/schemas/access">
            <notification_options>
                <destination>OE.NOTIFICATION_QUEUE</destination>
                <consumer_name>OE</consumer_name>
            </notification_options>
            <message_set>
                <message>
                    <message_header>
                        <message_id>CB510DDB19454731E034080020AE3E0A</message_id>
                        <expiration>-1</expiration>
                        <delay>0</delay>
                        <priority>1</priority>
                        <delivery_count>0</delivery_count>
                        <sender_id>
                            <agent_name>OE</agent_name>
                            <protocol>0</protocol>
                        </sender_id>
                        <message_state>0</message_state>
                    </message_header>
                </message>
            </message_set>
        </AQXmlNotification>
    </Body>
</Envelope>

You can dequeue the messages enqueued in this example by running the oe.deq_notification procedure:

SET SERVEROUTPUT ON SIZE 100000
EXEC oe.deq_notification('OE');


17 Other Streams Management Tasks

This chapter provides instructions for performing full database export/import in a Streams environment. This chapter also provides instructions for removing a Streams configuration.

This chapter contains these topics:

Each task described in this chapter should be completed by a Streams administrator who has been granted the appropriate privileges, unless specified otherwise.

Performing Full Database Export/Import in a Streams Environment

This section describes how to perform a full database export/import on a database that is running one or more Streams capture processes, propagations, or apply processes. These instructions pertain to a full database export/import where the import database and export database are running on different computers, and the import database replaces the export database. The global name of the import database and the global name of the export database must match. These instructions assume that both databases already exist. The export/import described in this section can be performed using Data Pump Export/Import utilities or the original Export/Import utilities.


Note:

If you want to add a database to an existing Streams environment, then do not use the instructions in this section. Instead, see Oracle Streams Replication Administrator's Guide.


Complete the following steps to perform a full database export/import on a database that is using Streams:

  1. If the export database contains any destination queues for propagations from other databases, then stop each propagation that propagates messages to the export database. You can stop a propagation using the STOP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package.

  2. Make the necessary changes to your network configuration so that the database links used by the propagation jobs you stopped in Step 1 point to the computer running the import database.

    To complete this step, you might need to re-create the database links used by these propagation jobs or modify your Oracle networking files at the databases that contain the source queues.

  3. Notify all users to stop making data manipulation language (DML) and data definition language (DDL) changes to the export database, and wait until these changes have stopped.

  4. Make a note of the current export database system change number (SCN). You can determine the current SCN using the GET_SYSTEM_CHANGE_NUMBER function in the DBMS_FLASHBACK package. For example:

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      current_scn NUMBER;
    BEGIN
      current_scn:= DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
          DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
    END;
    /
    

    In this example, assume that the current SCN returned is 7000000.

    After completing this step, do not stop any capture process running on the export database. Step 7c instructs you to use the V$STREAMS_CAPTURE dynamic performance view to ensure that no DML or DDL changes were made to the database after Step 3. The information about a capture process in this view is reset if the capture process is stopped and restarted.

    For the check in Step 7c to be valid, this information should not be reset for any capture process. To prevent a capture process from stopping automatically, you might need to set the message_limit and time_limit capture process parameters to infinite if these parameters are set to another value for any capture process.
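
    For example, the following sketch sets both parameters to infinite for a capture process named capture; the capture process name is illustrative:

    BEGIN
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => 'capture',
        parameter    => 'message_limit',
        value        => 'infinite');
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => 'capture',
        parameter    => 'time_limit',
        value        => 'infinite');
    END;
    /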

  5. If any downstream capture processes are capturing changes that originated at the export database, then make sure the log file containing the SCN determined in Step 4 has been transferred to the downstream database and added to the capture process session. See "Displaying the Registered Redo Log Files for Each Capture Process" for queries that can determine this information.

  6. If the export database is not running any apply processes, and is not propagating user-enqueued messages, then start the full database export now. Make sure that the FULL export parameter is set to y so that the required Streams metadata is exported.

    If the export database is running one or more apply processes or is propagating user-enqueued messages, then do not start the export and proceed to the next step.

  7. If the export database is the source database for changes captured by any capture processes, then complete the following steps for each capture process:

    a. Wait until the capture process has scanned past the redo record that corresponds to the SCN determined in Step 4. You can view the SCN of the redo record last scanned by a capture process by querying the CAPTURE_MESSAGE_NUMBER column in the V$STREAMS_CAPTURE dynamic performance view. Make sure the value of CAPTURE_MESSAGE_NUMBER is greater than or equal to the SCN determined in Step 4 before you continue.

    b. Monitor the Streams environment until the apply process at the destination database has applied all of the changes from the capture database. For example, if the name of the capture process is capture, the name of the apply process is apply, the global name of the destination database is dest.net, and the SCN value returned in Step 4 is 7000000, then run the following query at the capture database:

      CONNECT strmadmin/strmadminpw
      
      SELECT cap.ENQUEUE_MESSAGE_NUMBER
        FROM V$STREAMS_CAPTURE cap
        WHERE cap.CAPTURE_NAME = 'CAPTURE' AND
              cap.ENQUEUE_MESSAGE_NUMBER IN (
                SELECT DEQUEUED_MESSAGE_NUMBER
                FROM V$STREAMS_APPLY_READER@dest.net reader,
                     V$STREAMS_APPLY_COORDINATOR@dest.net coord
                WHERE reader.APPLY_NAME = 'APPLY' AND
                  reader.DEQUEUED_MESSAGE_NUMBER = reader.OLDEST_SCN_NUM AND
                  coord.APPLY_NAME = 'APPLY' AND
                  coord.LWM_MESSAGE_NUMBER = coord.HWM_MESSAGE_NUMBER AND
                  coord.APPLY# = reader.APPLY#) AND
                cap.CAPTURE_MESSAGE_NUMBER >= 7000000;
      

      When this query returns a row, all of the changes from the capture database have been applied at the destination database, and you can move on to the next step.

      If this query returns no results for an inordinately long time, then make sure the Streams clients in the environment are enabled by querying the STATUS column in the DBA_CAPTURE view at the source database and the DBA_APPLY view at the destination database. You can check the status of the propagation by running the query in "Displaying the Schedule for a Propagation Job".

      If a Streams client is disabled, then try restarting it. If a Streams client will not restart, then troubleshoot the environment using the information in Chapter 18, "Troubleshooting a Streams Environment".

      The query in this step assumes that a database link accessible to the Streams administrator exists between the capture database and the destination database. If such a database link does not exist, then you can perform two separate queries at the capture database and destination database.

    c. Verify that the enqueue message number of each capture process is less than or equal to the SCN determined in Step 4. You can view the enqueue message number for each capture process by querying the ENQUEUE_MESSAGE_NUMBER column in the V$STREAMS_CAPTURE dynamic performance view.

      If the enqueue message number of each capture process is less than or equal to the SCN determined in Step 4, then proceed to Step 9.

      However, if the enqueue message number of any capture process is higher than the SCN determined in Step 4, then one or more DML or DDL changes were made after the SCN determined in Step 4, and these changes were captured and enqueued by a capture process. In this case, perform all of the steps in this section again, starting with Step 1.


      Note:

      For this verification to be valid, each capture process must have been running uninterrupted since Step 4.

  8. If any downstream capture processes captured changes that originated at the export database, then drop these downstream capture processes. You will re-create them in Step 14a.

  9. If the export database has any propagations that are propagating user-enqueued messages, then stop these propagations using the STOP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package.

  10. If the export database is running one or more apply processes, or is propagating user-enqueued messages, then start the full database export now. Make sure that the FULL export parameter is set to y so that the required Streams metadata is exported. If you already started the export in Step 6, then proceed to Step 11.

  11. When the export is complete, transfer the export dump file to the computer running the import database.

  12. Perform the full database import. Make sure that the STREAMS_CONFIGURATION and FULL import parameters are both set to y so that the required Streams metadata is imported. The default setting is y for the STREAMS_CONFIGURATION import parameter. Also, make sure no DML or DDL changes are made to the import database during the import.

  13. If any downstream capture processes are capturing changes that originated at the database, then make the necessary changes so that log files are transferred from the import database to the downstream database. See "Preparing to Transmit Redo Data to a Downstream Database" for instructions.

  14. Re-create downstream capture processes:

    a. Re-create any downstream capture processes that you dropped in Step 8, if necessary. These dropped downstream capture processes were capturing changes that originated at the export database. Configure the re-created downstream capture processes to capture changes that originate at the import database.

    b. Re-create in the import database any downstream capture processes that were running in the export database, if necessary. If the export database had any downstream capture processes, then those downstream capture processes were not exported.


    See Also:

    "Creating a Capture Process" for information about creating a downstream capture process

  15. If any local or downstream capture processes will capture changes that originate at the database, then, at the import database, prepare the database objects whose changes will be captured for instantiation. See Oracle Streams Replication Administrator's Guide for information about preparing database objects for instantiation.

  16. Let users access the import database, and shut down the export database.

  17. Restart any propagations you stopped in Steps 1 and 9.

  18. If you reset the value of a message_limit or time_limit capture process parameter in Step 4, then, at the import database, reset these parameters to their original settings.

Removing a Streams Configuration

You run the REMOVE_STREAMS_CONFIGURATION procedure in the DBMS_STREAMS_ADM package to remove a Streams configuration at the local database.


Attention:

Running this procedure is dangerous. You should run this procedure only if you are sure you want to remove the entire Streams configuration at a database.

To remove the Streams configuration at the local database, run the following procedure:

EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

After running this procedure, drop the Streams administrator at the database, if possible.


See Also:

Oracle Database PL/SQL Packages and Types Reference for detailed information about the actions performed by the REMOVE_STREAMS_CONFIGURATION procedure


18 Troubleshooting a Streams Environment

This chapter contains information about identifying and resolving common problems in a Streams environment.

This chapter contains these topics:


See Also:

Oracle Streams Replication Administrator's Guide for more information about troubleshooting Streams replication environments

Troubleshooting Capture Problems

If a capture process is not capturing changes as expected, or if you are having other problems with a capture process, then use the following checklist to identify and resolve capture problems:

Is the Capture Process Enabled?

A capture process captures changes only when it is enabled.

You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE data dictionary view. For example, to check whether a capture process named capture is enabled, run the following query:

SELECT STATUS FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'CAPTURE';

If the capture process is disabled, then your output looks similar to the following:

STATUS
--------
DISABLED

If the capture process is disabled, then try restarting it. If the capture process is aborted, then you might need to correct an error before you can restart it successfully.
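
For example, the following sketch restarts a capture process named capture using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package:

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture');
END;
/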

To determine why the capture process aborted, query the DBA_CAPTURE data dictionary view or check the trace file for the capture process. The following query shows when the capture process aborted and the error that caused it to abort:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT CAPTURE_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_CAPTURE WHERE STATUS='ABORTED';

Is the Capture Process Current?

If a capture process has not captured recent changes, then the cause might be that the capture process has fallen behind. To check, you can query the V$STREAMS_CAPTURE dynamic performance view. If capture process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism capture process parameter.
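
For example, the following sketch approximates the capture latency in seconds for each capture process, and then raises the parallelism parameter for a capture process named capture; the capture process name and parameter value are illustrative:

SELECT CAPTURE_NAME,
       (SYSDATE - CAPTURE_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS
  FROM V$STREAMS_CAPTURE;

BEGIN
  -- NOTE: changing parallelism stops and restarts the capture process
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'parallelism',
    value        => '4');
END;
/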

Are Required Redo Log Files Missing?

When a capture process is started or restarted, it might need to scan redo log files that were generated before the log file that contains the start SCN. You can query the DBA_CAPTURE data dictionary view to determine the first SCN and start SCN for a capture process. Removing required redo log files before they are scanned by a capture process causes the capture process to abort and results in the following error in a capture process trace file:

ORA-01291: missing logfile

If you see this error, then try restoring any missing redo log file and restarting the capture process. You can check the V$LOGMNR_LOGS dynamic performance view to determine the missing SCN range, and add the relevant redo log files. A capture process needs the redo log file that includes the required checkpoint SCN and all subsequent redo log files. You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process.
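
For example, to display the first SCN, start SCN, and required checkpoint SCN for a capture process named capture, run the following query:

SELECT FIRST_SCN, START_SCN, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE
  WHERE CAPTURE_NAME = 'CAPTURE';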

If you are using the flash recovery area feature of Recovery Manager (RMAN) on a source database in a Streams environment, then RMAN might delete archived redo log files that are required by a capture process. RMAN might delete these files when the disk space used by the recovery-related files is nearing the specified disk quota for the flash recovery area. To prevent this problem in the future, complete one or more of the following actions:

  • Increase the disk quota for the flash recovery area. Increasing the disk quota makes it less likely that RMAN will delete a required archived redo log file, but it will not always prevent the problem.

  • Configure the source database to store archived redo log files in a location other than the flash recovery area. A local capture process will be able to use the log files in the other location if the required log files are missing in the flash recovery area. In this case, a database administrator must manage the log files manually in the other location.

Is a Downstream Capture Process Waiting for Redo Data?

If a downstream capture process is not capturing changes, then it might be waiting for redo data to scan. Redo log files can be registered implicitly or explicitly for a downstream capture process. Redo log files registered implicitly typically are registered in one of the following ways:

  • For a real-time downstream capture process, redo transport services use the log writer process (LGWR) to transfer the redo data from the source database to the standby redo log at the downstream database. Next, the archiver at the downstream database registers the redo log files with the downstream capture process when it archives them.

  • For an archived-log downstream capture process, redo transport services transfer the archived redo log files from the source database to the downstream database and register the archived redo log files with the downstream capture process.

If redo log files are registered explicitly for a downstream capture process, then you must manually transfer the redo log files to the downstream database and register them with the downstream capture process.

Regardless of whether the redo log files are registered implicitly or explicitly, the downstream capture process can capture changes made to the source database only if the appropriate redo log files are registered with the downstream capture process. You can query the V$STREAMS_CAPTURE dynamic performance view to determine whether a downstream capture process is waiting for a redo log file. For example, run the following query for a downstream capture process named strm05_capture:

SELECT STATE FROM V$STREAMS_CAPTURE WHERE CAPTURE_NAME='STRM05_CAPTURE';

If the capture process state is either WAITING FOR DICTIONARY REDO or WAITING FOR REDO, then verify that the redo log files have been registered with the downstream capture process by querying the DBA_REGISTERED_ARCHIVED_LOG and DBA_CAPTURE data dictionary views. For example, the following query lists the redo log files currently registered with the strm05_capture downstream capture process:

COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 9999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A30
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.SOURCE_DATABASE,
       r.SEQUENCE#, 
       r.NAME, 
       r.DICTIONARY_BEGIN, 
       r.DICTIONARY_END 
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE c.CAPTURE_NAME = 'STRM05_CAPTURE' AND 
        r.CONSUMER_NAME = c.CAPTURE_NAME;

If this query does not return any rows, then no redo log files are registered with the capture process currently. If you configured redo transport services to transfer redo data from the source database to the downstream database for this capture process, then make sure the redo transport services are configured correctly. If the redo transport services are configured correctly, then run the ALTER SYSTEM ARCHIVE LOG CURRENT statement at the source database to archive a log file. If you did not configure redo transport services to transfer redo data, then make sure the method you are using for log file transfer and registration is working properly. You can register log files explicitly using an ALTER DATABASE REGISTER LOGICAL LOGFILE statement.
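
For example, the following is a minimal sketch of explicit registration for the strm05_capture capture process; the archived log file name shown is hypothetical:

ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/remote_logs/1_27_482640565.arc' FOR 'strm05_capture';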

If the downstream capture process is waiting for redo, then it also is possible that there is a problem with the network connection between the source database and the downstream database. There also might be a problem with the log file transfer method. Check your network connection and log file transfer method to ensure that they are working properly.

If you configured a real-time downstream capture process, and no redo log files are registered with the capture process, then try switching the log file at the source database. You might need to switch the log file more than once if there is little or no activity at the source database.

Also, if you plan to use a downstream capture process to capture changes to historical data, then consider the following additional issues:

  • Both the source database that generates the redo log files and the database that runs a downstream capture process must be Oracle Database 10g databases.

  • The start of a data dictionary build must be present in the oldest redo log file added, and the capture process must be configured with a first SCN that matches the start of the data dictionary build.

  • The database objects for which the capture process will capture changes must be prepared for instantiation at the source database, not at the downstream database. In addition, you cannot specify a time in the past when you prepare objects for instantiation. Objects are always prepared for instantiation at the current database SCN, and only changes to a database object that occurred after the object was prepared for instantiation can be captured by a capture process.

Are You Trying to Configure Downstream Capture Incorrectly?

To create a downstream capture process, you must use one of the following procedures:

  • DBMS_CAPTURE_ADM.CREATE_CAPTURE

  • DBMS_STREAMS_ADM.MAINTAIN_GLOBAL

  • DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS

  • DBMS_STREAMS_ADM.MAINTAIN_SIMPLE_TTS

  • DBMS_STREAMS_ADM.MAINTAIN_TABLES

  • DBMS_STREAMS_ADM.MAINTAIN_TTS

  • PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP in the DBMS_STREAMS_ADM package

The procedures in the DBMS_STREAMS_ADM package can configure a downstream capture process as well as the other Oracle Streams components in an Oracle Streams replication environment.

If you try to create a downstream capture process without using one of these procedures, then Oracle returns the following error:

ORA-26678: Streams capture process must be created first

To correct the problem, use one of these procedures to create the downstream capture process.

If you are trying to create a local capture process using a procedure in the DBMS_STREAMS_ADM package, and you encounter this error, then make sure the database name specified in the source_database parameter of the procedure you are running matches the global name of the local database.
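
For example, to display the global name of the local database so that you can compare it with the source_database parameter value, run the following query:

SELECT GLOBAL_NAME FROM GLOBAL_NAME;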

Are More Actions Required for Downstream Capture without a Database Link?

When downstream capture is configured with a database link, the database link can be used to perform operations at the source database and obtain information from the source database automatically. When downstream capture is configured without a database link, these actions must be performed manually, and the information must be obtained manually. If you do not complete these actions manually, then errors result when you try to create the downstream capture process.

Specifically, the following actions must be performed manually when you configure downstream capture without a database link:

  • In certain situations, you must run the DBMS_CAPTURE_ADM.BUILD procedure at the source database to extract the data dictionary at the source database to the redo log before a capture process is created.

  • You must prepare the source database objects for instantiation.

  • You must obtain the first SCN for the downstream capture process and specify the first SCN using the first_scn parameter when you create the capture process with the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
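
As a sketch of the last two actions, the following statements run the BUILD procedure at the source database and then create the downstream capture process with an explicit first SCN. The capture process name, queue name, source database name, and SCN value are assumptions for illustration:

SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  -- Extract the source data dictionary to the redo log and
  -- return the first SCN (run at the source database)
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
END;
/

BEGIN
  -- Create the downstream capture process without a database link
  -- (run at the downstream database)
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.streams_queue',
    capture_name      => 'strm05_capture',
    source_database   => 'dbs1.net',
    use_database_link => false,
    first_scn         => 829381993);
END;
/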

Troubleshooting Propagation Problems

If a propagation is not propagating changes as expected, then use the following checklist to identify and resolve propagation problems:

Does the Propagation Use the Correct Source and Destination Queue?

If messages are not appearing in the destination queue for a propagation as expected, then the propagation might not be configured to propagate messages from the correct source queue to the correct destination queue.

For example, to check the source queue and destination queue for a propagation named dbs1_to_dbs2, run the following query:

COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A35
COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A35

SELECT
  p.SOURCE_QUEUE_OWNER||'.'||
    p.SOURCE_QUEUE_NAME||'@'||
    g.GLOBAL_NAME SOURCE_QUEUE, 
  p.DESTINATION_QUEUE_OWNER||'.'||
    p.DESTINATION_QUEUE_NAME||'@'||
    p.DESTINATION_DBLINK DESTINATION_QUEUE 
  FROM DBA_PROPAGATION p, GLOBAL_NAME g
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2';

Your output looks similar to the following:

Source Queue                        Destination Queue
----------------------------------- -----------------------------------
STRMADMIN.STREAMS_QUEUE@DBS1.NET    STRMADMIN.STREAMS_QUEUE@DBS2.NET

If the propagation is not using the correct queues, then create a new propagation. You might need to remove the existing propagation if it is not appropriate for your environment.
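
For example, the following is a minimal sketch for removing an existing propagation named dbs1_to_dbs2:

BEGIN
  DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
    propagation_name => 'dbs1_to_dbs2');
END;
/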

Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled.

You can find the following information about a propagation:

  • The database link used to propagate messages from the source queue to the destination queue

  • Whether the propagation is ENABLED, DISABLED, or ABORTED

  • The date of the last error, if there are any propagation errors

  • The error number of the last error, if there are any propagation errors

  • The error message of the last error, if there are any propagation errors

For example, to check whether a propagation named streams_propagation is enabled, run the following query:

COLUMN DESTINATION_DBLINK HEADING 'Database|Link'      FORMAT A10
COLUMN STATUS             HEADING 'Status'             FORMAT A8
COLUMN ERROR_DATE         HEADING 'Error|Date'
COLUMN ERROR_MESSAGE      HEADING 'Error Message'      FORMAT A50
 
SELECT DESTINATION_DBLINK,
       STATUS,
       ERROR_DATE,
       ERROR_MESSAGE
  FROM DBA_PROPAGATION
  WHERE PROPAGATION_NAME = 'STREAMS_PROPAGATION';

If the propagation is disabled currently, then your output looks similar to the following:

Database            Error
Link       Status   Date      Error Message
---------- -------- --------- --------------------------------------------------
INST2.NET  DISABLED 27-APR-05 ORA-25307: Enqueue rate too high, flow control
                              enabled

If there is a problem, then try the following actions to correct it:

  • If a propagation is disabled, then you can enable it using the START_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package, if you have not done so already.

  • If the propagation is disabled or aborted, and the Error Date and Error Message fields are populated, then diagnose and correct the problem based on the error message.

  • If the propagation is disabled or aborted, then check the trace file for the propagation job process. The query in "Displaying the Schedule for a Propagation Job" displays the propagation job process.

  • If the propagation job is enabled, but is not propagating messages, then try stopping and restarting the propagation.
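
For example, the following sketches correspond to the first and last actions in this list, for a propagation named streams_propagation:

-- Enable a disabled propagation
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION('streams_propagation');

-- Stop and restart a propagation that is enabled but not propagating
EXEC DBMS_PROPAGATION_ADM.STOP_PROPAGATION('streams_propagation');
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION('streams_propagation');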

Are There Enough Job Queue Processes?

Propagation jobs use job queue processes to propagate messages. Make sure the JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance that runs propagations. It should be set to a value that is high enough to accommodate all of the jobs that run simultaneously.
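
For example, the following statement raises the value dynamically; the value 10 is an arbitrary illustration, and the appropriate value depends on your workload:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;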


Is Security Configured Properly for the ANYDATA Queue?

ANYDATA queues are secure queues, and security must be configured properly for users to be able to perform operations on them. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to configure a secure ANYDATA queue, then an error is raised if the agent that SET_UP_QUEUE tries to create already exists and is associated with a user other than the user specified by queue_user in this procedure. In this case, rename or remove the existing agent using the ALTER_AQ_AGENT or DROP_AQ_AGENT procedure, respectively, in the DBMS_AQADM package. Next, retry SET_UP_QUEUE.
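
For example, the following is a minimal sketch, assuming the conflicting agent is named streams_agent:

BEGIN
  DBMS_AQADM.DROP_AQ_AGENT(
    agent_name => 'streams_agent');
END;
/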

In addition, you might encounter one of the following errors if security is not configured properly for an ANYDATA queue:


See Also:

"Secure Queues"

ORA-24093 AQ Agent not granted privileges of database user

Secure queue access must be granted to an AQ agent explicitly for both enqueue and dequeue operations. You grant the agent these privileges using the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package.

For example, to grant an agent named explicit_dq privileges of the database user oe, run the following procedure:

BEGIN
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'explicit_dq',
    db_username => 'oe');
END;
/

To check the privileges of the agents in a database, run the following query:

SELECT AGENT_NAME "Agent", DB_USERNAME "User" FROM DBA_AQ_AGENT_PRIVS;

Your output looks similar to the following:

Agent                          User
------------------------------ ------------------------------
EXPLICIT_ENQ                   OE
APPLY_OE                       OE
EXPLICIT_DQ                    OE

See Also:

"Enabling a User to Perform Operations on a Secure Queue" for a detailed example that grants privileges to an agent

ORA-25224 Sender name must be specified for enqueue into secure queues

To enqueue into a secure queue, the SENDER_ID in the message properties must be set to an AQ agent that has secure queue privileges for the queue.
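
For example, the following sketch sets the SENDER_ID to an agent named explicit_enq before enqueuing a simple ANYDATA message; the queue name and agent name are assumptions:

DECLARE
  enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  enq_msgid          RAW(16);
BEGIN
  -- The SENDER_ID must be an AQ agent with privileges on the secure queue
  message_properties.SENDER_ID := SYS.AQ$_AGENT('explicit_enq', NULL, NULL);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => ANYDATA.CONVERTVARCHAR2('test message'),
    msgid              => enq_msgid);
  COMMIT;
END;
/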


See Also:

"Wrapping User Message Payloads in an ANYDATA Wrapper and Enqueuing Them" for an example that sets the SENDER_ID for enqueue

Troubleshooting Apply Problems

If an apply process is not applying changes as expected, then use the following checklist to identify and resolve apply problems:

Is the Apply Process Enabled?

An apply process applies changes only when it is enabled. You can check whether an apply process is enabled, disabled, or aborted by querying the DBA_APPLY data dictionary view. For example, to check whether an apply process named apply is enabled, run the following query:

SELECT STATUS FROM DBA_APPLY WHERE APPLY_NAME = 'APPLY';

If the apply process is disabled, then your output looks similar to the following:

STATUS
--------
DISABLED

If the apply process is disabled, then try restarting it. If the apply process is aborted, then you might need to correct an error before you can restart it successfully. If the apply process did not shut down cleanly, then it might not restart. In this case, it returns the following error:

ORA-26666 cannot alter STREAMS process

If this happens, then run the STOP_APPLY procedure in the DBMS_APPLY_ADM package with the force parameter set to true. Next, restart the apply process.
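
For example, the following is a sketch for an apply process named apply:

BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => 'apply',
    force      => true);
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply');
END;
/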

To determine why an apply process aborted, query the DBA_APPLY data dictionary view or check the trace files for the apply process. The following query shows when the apply process aborted and the error that caused it to abort:

COLUMN APPLY_NAME HEADING 'APPLY|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY WHERE STATUS='ABORTED';

Is the Apply Process Current?

If an apply process has not applied recent changes, then the problem might be that the apply process has fallen behind. You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view. If apply process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism apply process parameter.
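
For example, the following query is a sketch of one way to estimate apply latency; the expression converts days to seconds:

COLUMN APPLY_NAME HEADING 'Apply|Name' FORMAT A15
COLUMN LATENCY_SECONDS HEADING 'Latency|(Seconds)' FORMAT 9999999

SELECT APPLY_NAME,
       (HWM_TIME - HWM_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS
  FROM V$STREAMS_APPLY_COORDINATOR;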

Does the Apply Process Apply Captured Messages or User-Enqueued Messages?

An apply process can apply either captured messages or user-enqueued messages, but not both types of messages. An apply process might not be applying messages of one type because it was configured to apply the other type of messages.

You can check the type of messages applied by an apply process by querying the DBA_APPLY data dictionary view. For example, to check whether an apply process named apply applies captured messages or user-enqueued messages, run the following query:

COLUMN APPLY_CAPTURED HEADING 'Type of Messages Applied' FORMAT A25

SELECT DECODE(APPLY_CAPTURED,
                'YES', 'Captured',
                'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE APPLY_NAME = 'APPLY';

If the apply process applies captured messages, then your output looks similar to the following:

Type of Messages Applied
-------------------------
Captured

If an apply process is not applying the expected type of messages, then you might need to create a new apply process to apply the messages.

Is the Apply Process Queue Receiving the Messages to be Applied?

An apply process must receive messages in its queue before it can apply these messages. Therefore, if an apply process is applying captured messages, then the capture process that captures these messages must be enabled, and it must be configured properly. Similarly, if messages are propagated from one or more databases before reaching the apply process, then each propagation must be enabled and must be configured properly. If a capture process or a propagation on which the apply process depends is not enabled or is not configured properly, then the messages might never reach the apply process queue.

The rule sets used by all Streams clients, including capture processes and propagations, determine the behavior of these Streams clients. Therefore, make sure the rule sets for any capture processes or propagations on which an apply process depends contain the correct rules. If the rules for these Streams clients are not configured properly, then the apply process queue might never receive the appropriate messages. Also, a message traveling through a stream is the composition of all of the transformations done along the path. For example, if a capture process uses subset rules and performs row migration during capture of a message, and a propagation uses a rule-based transformation on the message to change the table name, then, when the message reaches an apply process, the apply process rules must account for these transformations.

In an environment where a capture process captures changes that are propagated and applied at multiple databases, you can use the following guidelines to determine whether a problem is caused by a capture process or a propagation on which an apply process depends or by the apply process itself:

  • If no other destination databases of a capture process are applying changes from the capture process, then the problem is most likely caused by the capture process or a propagation near the capture process. In this case, first make sure the capture process is enabled and configured properly, and then make sure the propagations nearest the capture process are enabled and configured properly.

  • If other destination databases of a capture process are applying changes from the capture process, then the problem is most likely caused by the apply process itself or a propagation near the apply process. In this case, first make sure the apply process is enabled and configured properly, and then make sure the propagations nearest the apply process are enabled and configured properly.

Is a Custom Apply Handler Specified?

You can use apply handlers to handle messages dequeued by an apply process in a customized way. These handlers include DML handlers, DDL handlers, precommit handlers, and message handlers. If an apply process is not behaving as expected, then check the handler procedures used by the apply process, and correct any flaws. You might need to modify a handler procedure or remove it to correct an apply problem.

You can find the names of these procedures by querying the DBA_APPLY_DML_HANDLERS and DBA_APPLY data dictionary views.


Is the AQ_TM_PROCESSES Initialization Parameter Set to Zero?

The AQ_TM_PROCESSES initialization parameter controls time monitoring on queue messages and controls processing of messages with delay and expiration properties specified. In Oracle Database 10g, the database automatically controls these activities when the AQ_TM_PROCESSES initialization parameter is not set.

If an apply process is not applying messages, but there are messages that satisfy the apply process rule sets in the apply process queue, then make sure the AQ_TM_PROCESSES initialization parameter is not set to zero at the destination database. If this parameter is set to zero, then unset this parameter or set it to a nonzero value and monitor the apply process to see if it begins to apply messages.
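
For example, the following sketches check the current setting and then set the parameter to a nonzero value:

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'aq_tm_processes';

ALTER SYSTEM SET AQ_TM_PROCESSES = 1;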

To determine whether there are messages in a buffered queue, you can query the V$BUFFERED_QUEUES and V$BUFFERED_SUBSCRIBERS dynamic performance views. To determine whether there are user-enqueued messages in a queue, you can query the queue table for the queue.
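
For example, the following query shows the number of messages, including spilled messages, in each buffered queue:

COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15

SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;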

Does the Apply User Have the Required Privileges?

If the apply user does not have explicit EXECUTE privilege on an apply handler procedure or custom rule-based transformation function, then an ORA-06550 error might result when the apply user tries to run the procedure or function. Typically, this error causes the apply process to abort without adding errors to the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator reports the error. Specifically, errors similar to the following appear in the trace file:

ORA-12801 in STREAMS process
ORA-12801: error signaled in parallel query server P000
ORA-06550: line 1, column 15:
PLS-00201: identifier 'STRMADMIN.TO_AWARDFCT_RULEDML' must be declared
ORA-06550: line 1, column 7:
PL/SQL:  Statement ignored

In this example, the apply user dssdbo does not have EXECUTE privilege on the strmadmin.to_awardfct_ruledml function. To correct the problem, grant the required EXECUTE privilege to the apply user.
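
For example, the following is a sketch of the required grant for this case:

GRANT EXECUTE ON strmadmin.to_awardfct_ruledml TO dssdbo;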

Are Any Apply Errors in the Error Queue?

When an apply process cannot apply a message, it moves the message and all of the other messages in the same transaction into the error queue. You should check for apply errors periodically to see if there are any transactions that could not be applied.

You can check for apply errors by querying the DBA_APPLY_ERROR data dictionary view. Also, you can reexecute a particular transaction from the error queue or all of the transactions in the error queue.
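
For example, the following sketches reexecute a single error transaction and then all error transactions for an apply process named apply; the transaction identifier shown is hypothetical:

-- Reexecute one transaction from the error queue
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '5.4.312',
    execute_as_user      => false);
END;
/

-- Reexecute all transactions in the error queue
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'apply');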

Troubleshooting Problems with Rules and Rule-Based Transformations

When a capture process, a propagation, an apply process, or a messaging client is not behaving as expected, the problem might be that rules or rule-based transformations for the Streams client are not configured properly. Use the following checklist to identify and resolve problems with rules and rule-based transformations:

Are Rules Configured Properly for the Streams Client?

If a capture process, a propagation, an apply process, or a messaging client is behaving in an unexpected way, then the problem might be that the rules in either the positive rule set or negative rule set for the Streams client are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table.

You can check the rules for a particular Streams client by querying the DBA_STREAMS_RULES data dictionary view. If you use both positive and negative rule sets in your Streams environment, then it is important to know whether a rule returned by this view is in the positive or negative rule set for a particular Streams client.

A Streams client performs an action, such as capture, propagation, apply, or dequeue, for messages that satisfy its rule sets. In general, a message satisfies the rule sets for a Streams client if no rules in the negative rule set evaluate to TRUE for the message, and at least one rule in the positive rule set evaluates to TRUE for the message.

"Rule Sets and Rule Evaluation of Messages" contains more detailed information about how a message satisfies the rule sets for a Streams client, including information about Streams client behavior when one or more rule sets are not specified.

This section includes the following subsections:

Checking Schema and Global Rules

Schema and global rules in the positive rule set for a Streams client instruct the Streams client to perform its task for all of the messages relating to a particular schema or database, respectively. Schema and global rules in the negative rule set for a Streams client instruct the Streams client to discard all of the messages relating to a particular schema or database, respectively. If a Streams client is not behaving as expected, then it might be because schema or global rules are not configured properly for the Streams client.

For example, suppose a database is running an apply process named strm01_apply, and you want this apply process to apply LCRs containing changes to the hr schema. If the apply process uses a negative rule set, then make sure there are no schema rules that evaluate to TRUE for this schema in the negative rule set. Such rules cause the apply process to discard LCRs containing changes to the schema. "Displaying the Rules in the Negative Rule Set for a Streams Client" contains an example of a query that shows such rules.

If the query returns any such rules, then the rules returned might be causing the apply process to discard changes to the schema. If this query returns no rows, then make sure there are schema rules in the positive rule set for the apply process that evaluate to TRUE for the schema. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains an example of a query that shows such rules.
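
For example, the following query is a sketch that lists schema rules for the strm01_apply apply process; it assumes that schema rules appear in this view with the schema name populated and a NULL object name:

SELECT RULE_NAME, RULE_SET_TYPE, RULE_TYPE, SCHEMA_NAME
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'STRM01_APPLY' AND
        SCHEMA_NAME  = 'HR' AND
        OBJECT_NAME IS NULL;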

Checking Table Rules

Table rules in the positive rule set for a Streams client instruct the Streams client to perform its task for the messages relating to one or more particular tables. Table rules in the negative rule set for a Streams client instruct the Streams client to discard the messages relating to one or more particular tables.

If a Streams client is not behaving as expected for a particular table, then it might be for one of the following reasons:

  • One or more global rules in the rule sets for the Streams client instruct the Streams client to behave in a particular way for messages relating to the table because the table is in a specific database. That is, a global rule in the negative rule set for the Streams client might instruct the Streams client to discard all messages from the source database that contains the table, or a global rule in the positive rule set for the Streams client might instruct the Streams client to perform its task for all messages from the source database that contains the table.

  • One or more schema rules in the rule sets for the Streams client instruct the Streams client to behave in a particular way for messages relating to the table because the table is in a specific schema. That is, a schema rule in the negative rule set for the Streams client might instruct the Streams client to discard all messages relating to database objects in the schema, or a schema rule in the positive rule set for the Streams client might instruct the Streams client to perform its task for all messages relating to database objects in the schema.

  • One or more table rules in the rule sets for the Streams client instruct the Streams client to behave in a particular way for messages relating to the table.

If you are sure that no global or schema rules are causing the unexpected behavior, then you can check for table rules in the rule sets for a Streams client. For example, if you expect a capture process to capture changes to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the positive and negative rule sets for the capture process do not instruct it to capture changes to the table.

Suppose a database is running a capture process named strm01_capture, and you want this capture process to capture changes to the hr.departments table. If the capture process uses a negative rule set, then make sure there are no table rules that evaluate to TRUE for this table in the negative rule set. Such rules cause the capture process to discard changes to the table. "Displaying the Rules in the Negative Rule Set for a Streams Client" contains an example of a query that shows rules in a negative rule set.

If that query returns any such rules, then the rules returned might be causing the capture process to discard changes to the table. If that query returns no rules, then make sure there are one or more table rules in the positive rule set for the capture process that evaluate to TRUE for the table. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains an example of a query that shows rules in a positive rule set.

You can also determine which rules have a particular pattern in their rule condition, as described in "Listing Each Rule that Contains a Specified Pattern in Its Condition". For example, you can find all of the rules with the string "departments" in their rule condition, and you can make sure these rules are in the correct rule sets.


See Also:

"Table Rules Example" for more information about specifying table rules

Checking Subset Rules

A subset rule can be in the rule set used by a capture process, propagation, apply process, or messaging client. A subset rule evaluates to TRUE only if a DML operation contains a change to a particular subset of rows in the table. For example, to check for table rules that evaluate to TRUE for an apply process named strm01_apply when there are changes to the hr.departments table, run the following query:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A20
COLUMN DML_CONDITION HEADING 'Subset Condition' FORMAT A30

SELECT RULE_NAME, RULE_TYPE, DML_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME   = 'STRM01_APPLY' AND 
        STREAMS_TYPE   = 'APPLY' AND
        SCHEMA_NAME    = 'HR' AND
        OBJECT_NAME    = 'DEPARTMENTS';

Your output looks similar to the following:

Rule Name            Rule Type            Subset Condition
-------------------- -------------------- ------------------------------
DEPARTMENTS5         DML                  location_id=1700
DEPARTMENTS6         DML                  location_id=1700
DEPARTMENTS7         DML                  location_id=1700

Notice that this query returns any subset condition for the table in the DML_CONDITION column, which is labeled "Subset Condition" in the output. In this example, subset rules are specified for the hr.departments table. These subset rules evaluate to TRUE only if an LCR contains a change that involves a row where the location_id is 1700. So, if you expected the apply process to apply all changes to the table, then these subset rules cause the apply process to discard changes that involve rows where the location_id is not 1700.


Note:

Subset rules must reside only in positive rule sets.


Checking for Message Rules

A message rule can be in the rule set used by a propagation, apply process, or messaging client. Message rules pertain only to user-enqueued messages of a specific message type, not to captured messages. A message rule evaluates to TRUE if a user-enqueued message in a queue is of the type specified in the message rule and satisfies the rule condition of the message rule.

If you expect a propagation, apply process, or messaging client to perform its task for some user-enqueued messages, but the Streams client is not performing its task for these messages, then the cause might be that the rules in the positive and negative rule sets for the Streams client do not instruct it to perform its task for these messages. Similarly, if you expect a propagation, apply process, or messaging client to discard some user-enqueued messages, but the Streams client is not discarding these messages, then the cause might be that the rules in the positive and negative rule sets for the Streams client do not instruct it to discard these messages.

For example, suppose you want a messaging client named oe to dequeue messages of type oe.user_msg that satisfy the following condition:

:"VAR$_2".OBJECT_OWNER = 'OE' AND  :"VAR$_2".OBJECT_NAME = 'ORDERS'

If the messaging client uses a negative rule set, then make sure there are no message rules that evaluate to TRUE for this message type in the negative rule set. Such rules cause the messaging client to discard these messages. For example, to determine whether there are any such rules in the negative rule set for the messaging client, run the following query:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A30

SELECT RULE_NAME, RULE_CONDITION 
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME       = 'OE' AND
        MESSAGE_TYPE_OWNER = 'OE' AND
        MESSAGE_TYPE_NAME  = 'USER_MSG' AND
        RULE_SET_TYPE      = 'NEGATIVE';

If this query returns any rules, then the rules returned might be causing the messaging client to discard messages. Examine the rule condition of the returned rules to determine whether these rules are causing the messaging client to discard the messages that it should be dequeuing. If this query returns no rules, then make sure there are message rules in the positive rule set for the messaging client that evaluate to TRUE for this message type and condition.

For example, to determine whether any message rules evaluate to TRUE for this message type in the positive rule set for the messaging client, run the following query:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A35
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35

SELECT RULE_NAME, RULE_CONDITION 
  FROM DBA_STREAMS_RULES 
  WHERE STREAMS_NAME       = 'OE' AND
        MESSAGE_TYPE_OWNER = 'OE' AND
        MESSAGE_TYPE_NAME  = 'USER_MSG' AND
        RULE_SET_TYPE      = 'POSITIVE';

If you have message rules that evaluate to TRUE for this message type in the positive rule set for the messaging client, then these rules are returned. In this case, your output looks similar to the following:

Rule Name                           Rule Condition
----------------------------------- -----------------------------------
RULE$_3                             :"VAR$_2".OBJECT_OWNER = 'OE' AND
                                    :"VAR$_2".OBJECT_NAME = 'ORDERS'

Examine the rule condition for the rules returned to determine whether they instruct the messaging client to dequeue the proper messages. Based on these results, the messaging client named oe should dequeue messages of oe.user_msg type that satisfy the condition shown in the output. In other words, no rule in the negative messaging client rule set discards these messages, and a rule exists in the positive messaging client rule set that evaluates to TRUE when the messaging client finds a message in its queue of the oe.user_msg type that satisfies the rule condition.


Resolving Problems with Rules

If you determine that a Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules must be added to the rule set for the Streams client, then you can use one of the following procedures in the DBMS_STREAMS_ADM package to add appropriate rules:

  • ADD_GLOBAL_PROPAGATION_RULES

  • ADD_GLOBAL_RULES

  • ADD_SCHEMA_PROPAGATION_RULES

  • ADD_SCHEMA_RULES

  • ADD_SUBSET_PROPAGATION_RULES

  • ADD_SUBSET_RULES

  • ADD_TABLE_PROPAGATION_RULES

  • ADD_TABLE_RULES

  • ADD_MESSAGE_PROPAGATION_RULE

  • ADD_MESSAGE_RULE

You can use the DBMS_RULE_ADM package to add customized rules, if necessary.
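
For example, the following is a minimal sketch that adds table rules to the positive rule set of a capture process; the capture process name and queue name are assumptions:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'capture',
    streams_name => 'strm01_capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => false);
END;
/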

It is also possible that the Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules should be altered or removed from a rule set.

If you have the correct rules, and the relevant messages are still filtered out by a Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is a Streams data dictionary. The following information might be included in such warning messages:

  • gdbnm: Global name of the source database of the missing object

  • scn: SCN for the transaction that has been missed

If you find such messages, and you are using custom capture process rules or reusing existing capture process rules for a new destination database, then make sure you run the appropriate procedure to prepare for instantiation:

  • PREPARE_TABLE_INSTANTIATION

  • PREPARE_SCHEMA_INSTANTIATION

  • PREPARE_GLOBAL_INSTANTIATION

Also, make sure propagation is working from the source database to the destination database. Streams data dictionary information is propagated to the destination database and loaded into the dictionary at the destination database.
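
For example, the following sketch prepares a single table for instantiation at the source database:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.departments');
END;
/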


Are Declarative Rule-Based Transformations Configured Properly?

A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL. If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the declarative rule-based transformations specified for the rules used by the Streams client and correct any mistakes.

The most common problems with declarative rule-based transformations are:

  • The declarative rule-based transformation is specified for a table or involves columns in a table, but the schema either was not specified or was incorrectly specified when the transformation was created. If the schema is not correct in a declarative rule-based transformation, then the transformation will not be run on the appropriate LCRs. You should specify the owning schema for a table when you create a declarative rule-based transformation. If the schema is not specified when a declarative rule-based transformation is created, then the user who creates the transformation is specified for the schema by default.

    If the schema is not correct for a declarative rule-based transformation, then, to correct the problem, remove the transformation and re-create it, specifying the correct schema for each table.

  • If more than one declarative rule-based transformation is specified for a particular rule, then make sure the ordering is correct for execution of these transformations. Incorrect ordering of declarative rule-based transformations can result in errors or inconsistent data.

    If the ordering is not correct for the declarative rule-based transformation specified on a single rule, then, to correct the problem, remove the transformations and re-create them with the correct ordering.
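
For example, assuming a rename-table declarative transformation on a rule named strmadmin.departments5 was created without the owning schema, the following sketch removes it and re-creates it with the correct schema; all names are assumptions:

BEGIN
  -- Remove the incorrectly specified transformation
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments5',
    from_table_name => 'departments',
    to_table_name   => 'depts',
    operation       => 'REMOVE');
  -- Re-create it with the owning schema specified for each table
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments5',
    from_table_name => 'hr.departments',
    to_table_name   => 'hr.depts',
    operation       => 'ADD');
END;
/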

Are the Custom Rule-Based Transformations Configured Properly?

A custom rule-based transformation is any modification by a user-defined function to a message when a rule evaluates to TRUE. A custom rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION for the name and a user-created function name for the value. This user-created function performs the transformation. If the user-created function contains any flaws, then unexpected behavior can result.

If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the custom rule-based transformation functions specified for the rules used by the Streams client and correct any flaws. You can find the names of these functions by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to modify a transformation function or remove a custom rule-based transformation to correct the problem. Also, make sure the name of the function is spelled correctly when you specify the transformation for a rule.
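
For example, the following query lists each custom rule-based transformation function:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN TRANSFORM_FUNCTION_NAME HEADING 'Transformation Function' FORMAT A30

SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;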

An error caused by a custom rule-based transformation might cause a capture process, propagation, apply process, or messaging client to abort. In this case, you might need to correct the transformation before the Streams client can be restarted or invoked.

Rule evaluation is done before a custom rule-based transformation. For example, if you have a transformation that changes the name of a table from emps to employees, then make sure each rule using the transformation specifies the table name emps, rather than employees, in its rule condition.


Are Incorrectly Transformed LCRs in the Error Queue?

In some cases, incorrectly transformed LCRs might have been moved to the error queue by an apply process. When this occurs, you should examine the transaction in the error queue to analyze the feasibility of reexecuting the transaction successfully. If an abnormality is found in the transaction, then you might be able to configure a DML handler to correct the problem. The DML handler will run when you reexecute the error transaction. When a DML handler is used to correct a problem in an error transaction, the apply process that uses the DML handler should be stopped to prevent the DML handler from acting on LCRs that are not involved with the error transaction. After successful reexecution, if the DML handler is no longer needed, then remove it. Also, correct the rule-based transformation to avoid future errors.

Checking the Trace Files and Alert Log for Problems

Messages about each capture process, propagation, and apply process are recorded in trace files for the database in which the process or propagation job is running. A local capture process runs on a source database, a downstream capture process runs on a downstream database, a propagation job runs on the database containing the source queue in the propagation, and an apply process runs on a destination database. These trace file messages can help you to identify and resolve problems in a Streams environment.

All trace files for background processes are written to the destination directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.

For example, on some operating systems, the trace file name for a process is sid_xxxxx_iiiii.trc, where:

  • sid is the system identifier for the database

  • xxxxx is the name of the process

  • iiiii is the operating system process number

Also, you can set the write_alert_log parameter to y for both a capture process and an apply process. When this parameter is set to y, which is the default setting, the alert log for the database contains messages about why the capture process or apply process stopped.

You can control the information in the trace files by setting the trace_level capture process or apply process parameter using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM and DBMS_APPLY_ADM packages.
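
For example, the following sketch raises the tracing level for a capture process named capture; the level value shown is an assumption, and nonzero levels generally are intended for use under the guidance of Oracle Support Services:

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'trace_level',
    value        => '2');
END;
/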

Use the following checklist to check the trace files related to Streams:


Does a Capture Process Trace File Contain Messages About Capture Problems?

A capture process is an Oracle background process named cnnn, where nnn is the capture process number. For example, on some operating systems, if the system identifier for a database running a capture process is hqdb and the capture process number is 01, then the trace file for the capture process starts with hqdb_c001.


See Also:

"Displaying Change Capture Information About Each Capture Process" for a query that displays the capture process number of a capture process

Do the Trace Files Related to Propagation Jobs Contain Messages About Problems?

Each propagation uses a propagation job that depends on the job queue coordinator process and a job queue process. The job queue coordinator process is named cjqnn, where nn is the job queue coordinator process number, and a job queue process is named jnnn, where nnn is the job queue process number.

For example, on some operating systems, if the system identifier for a database running a propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a job queue process is 001, then the trace file for the job queue process starts with hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.


See Also:

"Is the Propagation Enabled?" for a query that displays the job queue process used by a propagation job

Does an Apply Process Trace File Contain Messages About Apply Problems?

An apply process is an Oracle background process named annn, where nnn is the apply process number. For example, on some operating systems, if the system identifier for a database running an apply process is hqdb and the apply process number is 001, then the trace file for the apply process starts with hqdb_a001.

An apply process also uses parallel execution servers. Information about an apply process might be recorded in the trace file for one or more parallel execution servers. The process name of a parallel execution server is pnnn, where nnn is the process number. So, on some operating systems, if the system identifier for a database running an apply process is hqdb and the process number is 001, then the trace file that contains information about a parallel execution server used by an apply process starts with hqdb_p001.



9 Streams High Availability Environments

This chapter explains concepts relating to Streams high availability environments.

This chapter contains these topics:

Overview of Streams High Availability Environments

Configuring a high availability solution requires careful planning and analysis of failure scenarios. Database backups and physical standby databases provide physical copies of a source database for failover protection. Oracle Data Guard, in SQL apply mode, implements a logical standby database in a high availability environment. Because Oracle Data Guard is designed for a high availability environment, it handles most failure scenarios. However, some environments might require the flexibility available in Oracle Streams, so that they can take advantage of the extended feature set offered by Streams.

This chapter discusses some of the scenarios that can benefit from a Streams-based solution and explains Streams-specific issues that arise in high availability environments. It also contains information about best practices for deploying Streams in a high availability environment, including hardware failover within a cluster, instance failover within an Oracle Real Application Clusters (RAC) cluster, and failover and switchover between replicas.

Protection from Failures

RAC is the preferred method for protecting from an instance or system failure. After a failure, services are provided by a surviving node in the cluster. However, clustering does not protect from user error, media failure, or disasters. These types of failures require redundant copies of the database. You can make both physical and logical copies of a database.

Physical copies are identical, block for block, with the source database, and are the preferred means of protecting data. There are three types of physical copies: database backup, mirrored or multiplexed database files, and a physical standby database.

Logical copies contain the same information as the source database, but the information can be stored differently within the database. Creating a logical copy of your database offers many advantages. However, you should always create a logical copy in addition to a physical copy, not instead of a physical copy.

A logical copy has the following benefits:

There are three types of logical copies of a database:

  • Logical standby databases

  • Streams replica databases

  • Application-maintained copies

Logical standby databases are best maintained using Oracle Data Guard in SQL apply mode. The rest of this chapter discusses Streams replica databases and application-maintained copies.


Streams Replica Database

Like Oracle Data Guard in SQL apply mode, Oracle Streams can capture database changes, propagate them to destinations, and apply the changes at these destinations. Streams is optimized for replicating data. Streams can capture changes locally in the online redo log as it is written, and the captured changes can be propagated asynchronously to replica databases. This optimization can reduce the latency and can enable the replicas to lag the primary database by no more than a few seconds.

Nevertheless, you might choose to use Streams to configure and maintain a logical copy of your production database. Although using Streams might require additional work, it offers increased flexibility that might be required to meet specific business requirements. A logical copy configured and maintained using Streams is called a replica, not a logical standby, because it provides many capabilities that are beyond the scope of the normal definition of a standby database. Some of the requirements that can best be met using an Oracle Streams replica are listed in the following sections.


See Also:

Oracle Streams Replication Administrator's Guide for more information about replicating database changes with Streams

Updates at the Replica Database

The greatest difference between a replica database and a standby database is that a replica database can be updated and a standby database cannot. Applications that must update data can run against the replica, including job queues and reporting applications that log reporting activity. Replica databases also allow local applications to operate autonomously, protecting local applications from WAN failures and reducing latency for database operations.

Heterogeneous Platform Support

The production and the replica do not need to be running on the exact same platform. This provides more flexibility in using computing assets, and facilitates migration between platforms.

Multiple Character Sets

Streams replicas can use different character sets than the production database. Data is automatically converted from one character set to another before being applied. This ability is extremely important if you have global operations and you must distribute data in multiple countries.

Mining the Online Redo Logs to Minimize Latency

If the replica is used for near real-time reporting, Streams can lag the production database by no more than a few seconds, providing up-to-date and accurate queries. Changes can be read from the online redo logs as the logs are written, rather than from the redo logs after archiving.

Greater than Ten Copies of Data

Streams supports unlimited numbers of replicas. Its flexible routing architecture allows for hub-and-spoke configurations that can efficiently propagate data to hundreds of replicas. This ability can be important if you must provide autonomous operation to many local offices in your organization. In contrast, because standby databases configured with Oracle Data Guard use the LOG_ARCHIVE_DEST_n initialization parameter to specify destinations, there is a limit of ten copies when you use Oracle Data Guard.

Fast Failover

Streams replicas can be open to read/write operations at all times. If a primary database fails, then Streams replicas are able to instantly resume processing. A small window of data might be left at the primary database, but this data will be automatically applied when the primary database recovers. This ability can be important if you value fast recovery time over no lost data. Assuming the primary database can eventually be recovered, the data is only temporarily unavailable.

Single Capture for Multiple Destinations

In a complex environment, changes need only be captured once. These changes can then be sent to multiple destinations. This ability enables more efficient use of the resources needed to mine the redo logs for changes.

When Not to Use Streams

As mentioned previously, there are scenarios in which you might choose to use Streams to meet some of your high availability requirements. One of the rules of high availability is to keep it simple. Oracle Data Guard is designed for high availability and is easier to implement than a Streams-based high availability solution. If you decide to leverage the flexibility offered by Streams, then you must be prepared to invest in the expertise and planning required to make a Streams-based solution robust. This means writing scripts to implement much of the automation and management tools provided with Oracle Data Guard.

Application-maintained Copies

The best availability can be achieved by designing the maintenance of logical copies of data directly into an application. The application knows what data is valuable and must be immediately moved off-site to guarantee no data loss. It can also synchronously replicate truly critical data, while asynchronously replicating less critical data. Applications maintain copies of data by either synchronously or asynchronously sending data to other applications that manage another logical copy of the data. Synchronous operations are performed using the distributed SQL or remote procedure features of the database. Asynchronous operations are performed using Advanced Queuing. Advanced Queuing is a database message queuing feature that is part of Oracle Streams.

Although the highest levels of availability can be achieved with application-maintained copies of data, great care is required to realize these results. Typically, a great amount of custom development is required. Many of the difficult boundary conditions that have been analyzed and solved with solutions such as Oracle Data Guard and Streams replication must be reanalyzed and solved by the custom application developers. In addition, standard solutions like Oracle Data Guard and Streams replication undergo stringent testing both by Oracle and its customers. It will take a great deal of effort before a custom-developed solution can exhibit the same degree of maturity. For these reasons, only organizations with substantial patience and expertise should attempt to build a high availability solution with application maintained copies.


See Also:

Oracle Streams Advanced Queuing User's Guide and Reference for more information about developing applications with Advanced Queuing

Best Practices for Streams High Availability Environments

Implementing Streams in a high availability environment requires consideration of possible failure and recovery scenarios, and the implementation of procedures to ensure Streams continues to capture, propagate, and apply changes after a failure. Some of the issues that must be examined include the following:

The following sections discuss these issues in detail.

Configuring Streams for High Availability

When configuring a solution using Streams, it is important to anticipate failures and design availability into the architecture. You must examine every database in the distributed system, and design a recovery plan in case of failure of that database. In some situations, failure of a database affects only services accessing data on that database. In other situations, a failure is multiplied, because it can affect other databases.

Directly Connecting Every Database to Every Other Database

A configuration where each database is directly connected to every other database in the distributed system is the most resilient to failures, because a failure of one database will not prevent any other databases from operating or communicating. Assuming all data is replicated, services that were using the failed database can connect to surviving replicas.




Creating Hub-and-Spoke Configurations

Although configurations where each database is directly connected to every other database provide the best high availability characteristics, they can become difficult to manage when the number of databases becomes large. Hub-and-spoke configurations solve this manageability issue by funneling changes from many databases into a hub database, and then to other hub databases, or to other spoke databases. To add a new source or destination, you simply connect it to a hub database, rather than establishing connections to every other database.

A hub, however, becomes a very important node in your distributed environment. Should it fail, all communications flowing through the hub will fail. Due to the asynchronous nature of the messages propagating through the hub, it can be very difficult to redirect a stream from one hub to another. A better approach is to make the hub resilient to failures.

The same techniques used to make a single database resilient to failures also apply to distributed hub databases. Oracle recommends RAC to provide protection from instance and node failures. This configuration should be combined with a "no loss" physical standby database, to protect from disasters and data errors. Oracle does not recommend using a Streams replica as the only means to protect from disasters or data errors.


See Also:

Oracle Streams Replication Administrator's Guide for a detailed example of such an environment

Configuring Oracle Real Application Clusters with Streams

Using RAC with Streams introduces some important considerations. When running in a RAC cluster, a capture process runs on the instance that owns the queue that is receiving the captured logical change records (LCRs). Job queues should be running on all instances, and a propagation job running on an instance will propagate LCRs from any queue owned by that instance to destination queues. An apply process runs on the instance that owns the queue from which the apply process dequeues its messages. That might or might not be the same queue on which capture runs.

Any propagation to the database running RAC is made over database links. The database links must be configured to connect to the destination instance that owns the queue that will receive the messages.
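
For example, the following is a minimal sketch of such a link, assuming a Streams administrator named strmadmin and a net service name dest_q_inst whose connect descriptor resolves to the instance that owns the destination queue (the link name, password, and service name are illustrative):

-- The dest_q_inst net service name must resolve to the specific instance
-- that owns the destination queue (for example, by specifying
-- INSTANCE_NAME in its connect descriptor).
CREATE DATABASE LINK dest.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'dest_q_inst';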

You might choose to use a cold failover cluster to protect from system failure rather than RAC. A cold failover cluster is not RAC. Instead, a cold failover cluster uses a secondary node to mount and recover the database when the first node fails.

Local or Downstream Capture with Streams

Beginning in Oracle Database 10g, Streams supports capturing changes from the redo log on the local source database or at a downstream database at a different site. The choice of local capture or downstream capture has implications for availability. When a failure occurs at a source database, some changes might not have been captured. With local capture, those changes might not be available until the source database is recovered. In the event of a catastrophic failure, those changes might be lost.

Downstream capture at a remote database reduces the window of potential data loss in the event of a failure. Depending on the configuration, downstream capture enables you to guarantee all changes committed at the source database are safely copied to a remote site, where they can be captured and propagated to other databases and applications. Streams uses the same mechanism as Oracle Data Guard to copy redo data or log files to remote destinations, and supports the same operational modes, including maximum protection, maximum availability, and maximum performance.
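
For example, the following sketch shows one way a source database might be configured to send redo data to a downstream capture database (the dstream.net service name and the destination number 2 are illustrative; the exact attributes depend on the operational mode you choose):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dstream.net LGWR ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;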

Recovering from Failures

The following sections provide best practices for recovering from failures.

Automatic Capture Process Restart After a Failover

After a failure and restart of a single-node database, or a failure and restart of a database on another node in a cold failover cluster, the capture process automatically returns to the status it was in at the time of the failure. That is, if it was running at the time of the failure, then the capture process restarts automatically.

Similarly, for a capture process running in a RAC environment, if an instance running the capture process fails, then the queue that receives the captured messages is assigned to another node in the cluster, and the capture process is restarted automatically. A capture process follows its queue to a different instance if the current owner instance becomes unavailable, and the queue itself follows the rules for primary instance and secondary instance ownership.
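
After a restart, you can verify the status of each capture process with a query such as the following:

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;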

Database Links Reestablishment After a Failover

It is important to ensure that a propagation continues to function after a failure of a destination database instance. After a failure, a propagation job retries its database link up to sixteen times, with an increasing delay between retries, until the connection is reestablished. If the connection is not reestablished after sixteen tries, then the propagation schedule is disabled.

If the database is restarted on the same node, or on a different node in a cold failover cluster, then the connection should be reestablished. In some circumstances, the database link could be waiting on a read or write, and will not detect the failure until a lengthy timeout expires. The timeout is controlled by the TCP_KEEPALIVE_INTERVAL TCP/IP parameter. In such circumstances, you should drop and re-create the database link to ensure that communication is reestablished quickly.

When an instance in a RAC cluster fails, the instance is recovered by another node in the cluster. Each queue that was previously owned by the failed instance is assigned to a new instance. If the failed instance contained one or more destination queues for propagations, then queue-to-queue propagations automatically failover to the new instance. However, for queue-to-dblink propagations, you must drop and reestablish any inbound database links to point to the new instance that owns a destination queue. You do not need to modify a propagation that uses a re-created database link.

In a high availability environment, you can prepare scripts that will drop and re-create all necessary database links. After a failover, you can execute these scripts so that Streams can resume propagation.
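
For example, such a script might contain statements similar to the following for each affected link (the link name, password, service name, and queue name are illustrative):

DROP DATABASE LINK dest.net;

CREATE DATABASE LINK dest.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'dest.net';

-- Re-enable the propagation schedule if it was disabled by repeated failures
BEGIN
  DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
    queue_name  => 'strmadmin.streams_queue',
    destination => 'dest.net');
END;
/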




Propagation Job Restart After a Failover

For messages to be propagated from a source queue to a destination queue, a propagation job must run on the instance owning the source queue. In a single-node database, or cold failover cluster, propagation resumes when the single database instance is restarted.

When running in a RAC environment, a propagation job runs on the instance that owns the source queue from which the propagation job sends messages to a destination queue. If the owner instance for a propagation job goes down, then the propagation job automatically migrates to a new owner instance. You should not alter instance affinity for Streams propagation jobs, because Streams manages instance affinity for propagation jobs automatically. Also, for any jobs to run on an instance, the modifiable initialization parameter JOB_QUEUE_PROCESSES must be greater than zero for that instance.
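
For example, the following sketch sets this parameter for one instance (the value 10 and the instance name inst1 are illustrative):

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10 SID = 'inst1';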

Automatic Apply Process Restart After a Failover

After a failure and restart of a single-node database, or a failure and restart of a database on another node in a cold failover cluster, the apply process automatically returns to the status it was in at the time of the failure. That is, if it was running at the time of the failure, then the apply process restarts automatically.

Similarly, in a RAC cluster, if an instance hosting the apply process fails, then the queue from which the apply process dequeues messages is assigned to another node in the cluster, and the apply process is restarted automatically. An apply process follows its queue to a different instance if the current owner instance becomes unavailable, and the queue itself follows the rules for primary instance and secondary instance ownership.
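
As with capture processes, you can verify the status of each apply process after a restart with a query such as the following:

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;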

Monitoring Rule-Based Transformations

24 Monitoring Rule-Based Transformations

A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. This chapter provides sample queries that you can use to monitor rule-based transformations.

This chapter contains these topics:


Note:

The Streams tool in the Oracle Enterprise Manager Console is also an excellent way to monitor a Streams environment. See the online help for the Streams tool for more information.




Displaying Information About All Rule-Based Transformations

The query in this section displays the following information about each rule-based transformation in a database:

  • The owner of the rule on which the transformation is specified

  • The name of the rule

  • The type of the transformation

Run the following query to display this information for the rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN TRANSFORM_TYPE HEADING 'Transformation Type' FORMAT A30

SELECT RULE_OWNER, 
       RULE_NAME, 
       TRANSFORM_TYPE
  FROM DBA_STREAMS_TRANSFORMATIONS;

Your output looks similar to the following:

Rule Owner           Rule Name            Transformation Type
-------------------- -------------------- ------------------------------
STRMADMIN            EMPLOYEES23          DECLARATIVE TRANSFORMATION
STRMADMIN            JOBS26               DECLARATIVE TRANSFORMATION
STRMADMIN            DEPARTMENTS33        SUBSET RULE
STRMADMIN            DEPARTMENTS32        SUBSET RULE
STRMADMIN            DEPARTMENTS34        SUBSET RULE
STRMADMIN            DEPARTMENTS32        CUSTOM TRANSFORMATION
STRMADMIN            DEPARTMENTS33        CUSTOM TRANSFORMATION
STRMADMIN            DEPARTMENTS34        CUSTOM TRANSFORMATION

Displaying Declarative Rule-Based Transformations

A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL.

The query in this section displays the following information about each declarative rule-based transformation in a database:

  • The owner of the rule on which the transformation is specified

  • The name of the rule

  • The type of declarative rule-based transformation

  • The precedence of the transformation type

  • The step number of the transformation

Run the following query to display this information for the declarative rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN DECLARATIVE_TYPE HEADING 'Declarative|Type' FORMAT A15
COLUMN PRECEDENCE HEADING 'Precedence' FORMAT 99999
COLUMN STEP_NUMBER HEADING 'Step Number' FORMAT 99999

SELECT RULE_OWNER, 
       RULE_NAME, 
       DECLARATIVE_TYPE,
       PRECEDENCE,
       STEP_NUMBER
  FROM DBA_STREAMS_TRANSFORMATIONS
  WHERE TRANSFORM_TYPE = 'DECLARATIVE TRANSFORMATION';

Your output looks similar to the following:

                                Declarative
Rule Owner      Rule Name       Type            Precedence Step Number
--------------- --------------- --------------- ---------- -----------
STRMADMIN       JOBS26          RENAME TABLE             4           0
STRMADMIN       EMPLOYEES23     ADD COLUMN               3           0

Based on this output, the ADD COLUMN transformation executes before the RENAME TABLE transformation because the step number is the same (zero) for both transformations and the ADD COLUMN transformation has the lower precedence.

When you determine which types of declarative rule-based transformations are in a database, you can display more detailed information about each transformation. The following data dictionary views contain detailed information about the various types of declarative rule-based transformations:

  • DBA_STREAMS_ADD_COLUMN

  • DBA_STREAMS_DELETE_COLUMN

  • DBA_STREAMS_RENAME_COLUMN

  • DBA_STREAMS_RENAME_SCHEMA

  • DBA_STREAMS_RENAME_TABLE

For example, the previous query listed an ADD COLUMN transformation and a RENAME TABLE transformation. The following sections contain queries that display detailed information about these transformations:


Note:

Precedence and step number pertain only to declarative rule-based transformations. They do not pertain to subset rule transformations or custom rule-based transformations.

Displaying Information About ADD COLUMN Transformations

The following query displays detailed information about the ADD COLUMN declarative rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule|Owner' FORMAT A9
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A12
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN TABLE_NAME HEADING 'Table|Name' FORMAT A9
COLUMN COLUMN_NAME HEADING 'Column|Name' FORMAT A10
COLUMN COLUMN_TYPE HEADING 'Column|Type' FORMAT A8

SELECT RULE_OWNER, 
       RULE_NAME, 
       SCHEMA_NAME,
       TABLE_NAME,
       COLUMN_NAME,
       ANYDATA.AccessDate(COLUMN_VALUE) "Value",
       COLUMN_TYPE
  FROM DBA_STREAMS_ADD_COLUMN;

Your output looks similar to the following:

Rule      Rule         Schema Table     Column                          Column
Owner     Name         Name   Name      Name       Value                Type
--------- ------------ ------ --------- ---------- -------------------- --------
STRMADMIN EMPLOYEES23  HR     EMPLOYEES BIRTH_DATE                      SYS.DATE

This output shows the following information about the ADD COLUMN declarative rule-based transformation:

  • It is specified on the employees23 rule in the strmadmin schema.

  • It adds a column to row LCRs that involve the employees table in the hr schema.

  • The column name of the added column is birth_date.

  • The value of the added column is NULL. Notice that the COLUMN_VALUE column in the DBA_STREAMS_ADD_COLUMN view is of type ANYDATA. In this example, because the column type is DATE, the ANYDATA.AccessDate member function is used to display the value. Use the appropriate member function to display values of other types, as shown in the sketch following this list.

  • The type of the added column is DATE.
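
For example, the following sketch displays the values of added columns of NUMBER type (the 'SYS.NUMBER' filter value is an assumption that follows the pattern of the 'SYS.DATE' value shown above):

SELECT RULE_NAME, 
       COLUMN_NAME,
       ANYDATA.AccessNumber(COLUMN_VALUE) "Value"
  FROM DBA_STREAMS_ADD_COLUMN
  WHERE COLUMN_TYPE = 'SYS.NUMBER';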

Displaying Information About RENAME TABLE Transformations

The following query displays detailed information about the RENAME TABLE declarative rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule|Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A10
COLUMN FROM_SCHEMA_NAME HEADING 'From|Schema|Name' FORMAT A10
COLUMN TO_SCHEMA_NAME HEADING 'To|Schema|Name' FORMAT A10
COLUMN FROM_TABLE_NAME HEADING 'From|Table|Name' FORMAT A15
COLUMN TO_TABLE_NAME HEADING 'To|Table|Name' FORMAT A15

SELECT RULE_OWNER, 
       RULE_NAME, 
       FROM_SCHEMA_NAME,
       TO_SCHEMA_NAME,
       FROM_TABLE_NAME,
       TO_TABLE_NAME
  FROM DBA_STREAMS_RENAME_TABLE;

Your output looks similar to the following:

                      From       To         From            To
Rule       Rule       Schema     Schema     Table           Table
Owner      Name       Name       Name       Name            Name
---------- ---------- ---------- ---------- --------------- ---------------
STRMADMIN  JOBS26     HR         HR         JOBS            ASSIGNMENTS

This output shows the following information about the RENAME TABLE declarative rule-based transformation:

  • It is specified on the jobs26 rule in the strmadmin schema.

  • It renames the hr.jobs table in row LCRs to the hr.assignments table.
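
For reference, a transformation like this one could have been declared with a call similar to the following sketch (assuming the strmadmin.jobs26 rule already exists):

BEGIN
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs26',
    from_table_name => 'hr.jobs',
    to_table_name   => 'hr.assignments',
    step_number     => 0,
    operation       => 'ADD');
END;
/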

Displaying Custom Rule-Based Transformations

A custom rule-based transformation is a rule-based transformation that requires a user-defined PL/SQL function. The query in this section displays the following information about each custom rule-based transformation specified in a database:

  • The owner of the rule on which the transformation is specified

  • The name of the rule

  • The name of the transformation function

  • The type of the custom rule-based transformation, either ONE TO ONE or ONE TO MANY

Run the following query to display this information:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN TRANSFORM_FUNCTION_NAME HEADING 'Transformation Function' FORMAT A30
COLUMN CUSTOM_TYPE HEADING 'Type' FORMAT A11
 
SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME, CUSTOM_TYPE
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;

Your output looks similar to the following:

Rule Owner           Rule Name       Transformation Function        Type
-------------------- --------------- ------------------------------ -----------
STRMADMIN            DEPARTMENTS31   "HR"."EXECUTIVE_TO_MANAGEMENT" ONE TO ONE
STRMADMIN            DEPARTMENTS32   "HR"."EXECUTIVE_TO_MANAGEMENT" ONE TO ONE
STRMADMIN            DEPARTMENTS33   "HR"."EXECUTIVE_TO_MANAGEMENT" ONE TO ONE

Note:

The transformation function name must be of type VARCHAR2. If it is not, then the value of TRANSFORM_FUNCTION_NAME is NULL. The VALUE_TYPE column in the DBA_STREAMS_TRANSFORM_FUNCTION view displays the type of the transform function name.

Single-Database Capture and Apply Example

27 Single-Database Capture and Apply Example

This chapter illustrates an example of a single database that captures changes to a table, reenqueues the captured changes into a queue, and then uses a DML handler during apply to insert a subset of the changes into a different table.

This chapter contains these topics:

Overview of the Single-Database Capture and Apply Example

The example in this chapter illustrates using Streams to capture and apply data manipulation language (DML) changes at a single database named cpap.net. Specifically, this example captures DML changes to the employees table in the hr schema, placing row logical change records (LCRs) into a queue named streams_queue. Next, an apply process dequeues these row LCRs from the same queue, reenqueues them into this queue, and sends them to a DML handler.

When the row LCRs are captured, they reside in the buffered queue and cannot be dequeued explicitly. After the row LCRs are reenqueued during apply, they are available for explicit dequeue by an application. This example does not create the application that dequeues these row LCRs.

This example illustrates a DML handler that inserts records of deleted employees into an emp_del table in the hr schema. This example assumes that the emp_del table is used to retain the records of all deleted employees. The DML handler is used to determine whether each row LCR contains a DELETE statement. When the DML handler finds a row LCR containing a DELETE statement, it converts the DELETE into an INSERT on the emp_del table and then inserts the row.

Figure 27-1 provides an overview of the environment.

Figure 27-1 Single Database Capture and Apply Example





Prerequisites

The following prerequisites must be completed before you begin the example in this chapter.

Set Up the Environment

Complete the following steps to create the hr.emp_del table, set up the Streams administrator, and create the queue.

  1. Show Output and Spool Results

  2. Create the hr.emp_del Table

  3. Set Up Users at cpap.net

  4. Create the ANYDATA Queue at cpap.net

  5. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to the database.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL streams_setup_capapp.out

/*

Step 2   Create the hr.emp_del Table

Connect to cpap.net as the hr user.

*/
 
CONNECT hr/hr@cpap.net

/*

Create the hr.emp_del table. The columns in the emp_del table are the same as the columns in the employees table, except for one added timestamp column that records the date when a row is inserted into the emp_del table.

*/

CREATE TABLE emp_del( 
  employee_id    NUMBER(6), 
  first_name     VARCHAR2(20), 
  last_name      VARCHAR2(25), 
  email          VARCHAR2(25), 
  phone_number   VARCHAR2(20), 
  hire_date      DATE, 
  job_id         VARCHAR2(10), 
  salary         NUMBER(8,2), 
  commission_pct NUMBER(2,2), 
  manager_id     NUMBER(6), 
  department_id  NUMBER(4),
  timestamp      DATE);

CREATE UNIQUE INDEX emp_del_id_pk ON emp_del (employee_id);

ALTER TABLE emp_del ADD (CONSTRAINT emp_del_id_pk PRIMARY KEY (employee_id));

/*

Step 3   Set Up Users at cpap.net

Connect to cpap.net as the SYSTEM user.

*/
 
CONNECT SYSTEM/MANAGER@cpap.net

/*

Create the Streams administrator named strmadmin and grant this user the necessary privileges. These privileges enable the user to manage queues, execute subprograms in packages related to Streams, create rule sets, create rules, and monitor the Streams environment by querying data dictionary views and queue tables. You can choose a different name for this user.

In this example, the Streams administrator will be the apply user for the apply process and must be able to apply changes to the hr.emp_del table. Therefore, the Streams administrator is granted ALL privileges on this table.


Note:

  • For security purposes, use a password other than strmadminpw for the Streams administrator.

  • The ACCEPT command must appear on a single line in the script.


*/

GRANT DBA TO strmadmin IDENTIFIED BY strmadminpw;

ACCEPT streams_tbs PROMPT 'Enter Streams administrator tablespace on cpap.net: '

ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs
                     QUOTA UNLIMITED ON &streams_tbs;

/*

This example executes a subprogram in a Streams package within a stored procedure. Specifically, the emp_dq procedure created in Step 8 runs the DEQUEUE procedure in the DBMS_STREAMS_MESSAGING package. Therefore, the Streams administrator must be granted EXECUTE privilege explicitly on the package. In this case, EXECUTE privilege cannot be granted through a role. The GRANT_ADMIN_PRIVILEGE procedure grants EXECUTE on all Streams packages, as well as other privileges relevant to Streams.

*/

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',    
    grant_privileges => true);
END;
/

/*

Grant the Streams administrator all privileges on the emp_del table, because the Streams administrator will be the apply user and must be able to insert records into this table. Alternatively, you can alter the apply process to specify that hr is the apply user.

*/

GRANT ALL ON hr.emp_del TO STRMADMIN;

/*

Step 4   Create the ANYDATA Queue at cpap.net

Connect to cpap.net as the strmadmin user.

*/

CONNECT strmadmin/strmadminpw@cpap.net

/*

Run the SET_UP_QUEUE procedure to create a queue named streams_queue at cpap.net. This queue is an ANYDATA queue that will stage the captured changes to be dequeued by an apply process and the user-enqueued changes to be dequeued by a dequeue procedure.

Running the SET_UP_QUEUE procedure performs the following actions:

*/

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table  => 'strmadmin.streams_queue_table',
    queue_name   => 'strmadmin.streams_queue');
END;
/

/*

Step 5   Check the Spool Results

After this script completes, check the streams_setup_capapp.out spool file to ensure that all actions finished successfully.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Configure Capture and Apply

Complete the following steps to capture changes to the hr.employees table and apply these changes on a single database in a customized way using a DML handler.

  1. Show Output and Spool Results

  2. Configure the Capture Process at cpap.net

  3. Set the Instantiation SCN for the hr.employees Table

  4. Create the DML Handler Procedure

  5. Set the DML Handler for the hr.employees Table

  6. Create a Messaging Client for the Queue

  7. Configure the Apply Process at cpap.net

  8. Create a Procedure to Dequeue the Messages

  9. Start the Apply Process at cpap.net

  10. Start the Capture Process at cpap.net

  11. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to the database.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL streams_config_capapp.out

/*

Step 2   Configure the Capture Process at cpap.net

Connect to cpap.net as the strmadmin user.

*/
 
CONNECT strmadmin/strmadminpw@cpap.net

/*

Configure the capture process to capture DML changes to the hr.employees table at cpap.net. This step creates the capture process and adds a rule to its positive rule set that instructs the capture process to capture DML changes to this table. This step also prepares the hr.employees table for instantiation and enables supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the table.

Supplemental logging places additional information in the redo log for changes made to tables. The apply process needs this extra information to perform some operations, such as unique row identification.

*/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',   
    streams_type   => 'capture',
    streams_name   => 'capture_emp',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    =>  true,
    include_ddl    =>  false,
    inclusion_rule =>  true);
END;
/

/*

Step 3   Set the Instantiation SCN for the hr.employees Table

Because this example captures and applies changes in a single database, no instantiation is necessary. However, the apply process at the cpap.net database still must be instructed to apply changes that were made to the hr.employees table after a specific system change number (SCN).

This example uses the GET_SYSTEM_CHANGE_NUMBER function in the DBMS_FLASHBACK package to obtain the current SCN for the database. This SCN is used to run the SET_TABLE_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package.

The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are ignored by an apply process and which LCRs for a table are applied by an apply process. If the commit SCN of an LCR for a table from a source database is less than or equal to the instantiation SCN for that table at a destination database, then the apply process at the destination database discards the LCR. Otherwise, the apply process applies the LCR. In this example, the cpap.net database is both the source database and the destination database.

The apply process will apply only those transactions to the hr.employees table that committed after the SCN obtained in this step.


Note:

The hr.employees table also must be prepared for instantiation. This preparation was done automatically when the capture process was configured with a rule to capture DML changes to the hr.employees table in Step 2.

*/

DECLARE
  iscn  NUMBER;         -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name    => 'hr.employees',
    source_database_name  => 'cpap.net',
    instantiation_scn     => iscn);
END;
/

/*

Step 4   Create the DML Handler Procedure

This step creates the emp_dml_handler procedure. This procedure will be the DML handler for DELETE changes to the hr.employees table. It converts any row LCR containing a DELETE command type into an INSERT row LCR and then inserts the converted row LCR into the hr.emp_del table by executing the row LCR.

*/

CREATE OR REPLACE PROCEDURE emp_dml_handler(in_any IN ANYDATA) IS
  lcr          SYS.LCR$_ROW_RECORD;
  rc           PLS_INTEGER;
  command      VARCHAR2(30);
  old_values   SYS.LCR$_ROW_LIST;
BEGIN    
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  -- Get the object command type
  command := lcr.GET_COMMAND_TYPE();
  -- Check for DELETE command on the hr.employees table
  IF command = 'DELETE' THEN
    -- Set the command_type in the row LCR to INSERT
    lcr.SET_COMMAND_TYPE('INSERT');
    -- Set the object_name in the row LCR to EMP_DEL
    lcr.SET_OBJECT_NAME('EMP_DEL');
    -- Get the old values in the row LCR
    old_values := lcr.GET_VALUES('old');
    -- Set the old values in the row LCR to the new values in the row LCR
    lcr.SET_VALUES('new', old_values);
    -- Set the old values in the row LCR to NULL
    lcr.SET_VALUES('old', NULL);
    -- Add a SYSDATE value for the timestamp column
    lcr.ADD_COLUMN('new', 'TIMESTAMP', ANYDATA.ConvertDate(SYSDATE));
    -- Apply the row LCR as an INSERT into the hr.emp_del table
    lcr.EXECUTE(true);
  END IF;
END;
/

/*

Step 5   Set the DML Handler for the hr.employees Table

Set the DML handler for the hr.employees table to the procedure created in Step 4. Notice that the DML handler must be set separately for each possible operation on the table: INSERT, UPDATE, and DELETE.

*/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.employees',
    object_type         => 'TABLE',
    operation_name      => 'INSERT',
    error_handler       => false,
    user_procedure      => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.employees',
    object_type         => 'TABLE',
    operation_name      => 'UPDATE',
    error_handler       => false,
    user_procedure      => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.employees',
    object_type         => 'TABLE',
    operation_name      => 'DELETE',
    error_handler       => false,
    user_procedure      => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/

/*

Step 6   Create a Messaging Client for the Queue

Create a messaging client that can be used by an application to dequeue the reenqueued messages. A messaging client must be specified before the messages can be reenqueued into the queue.

*/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',   
    streams_type   => 'dequeue',
    streams_name   => 'hr',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    =>  true,
    include_ddl    =>  false,
    inclusion_rule =>  true);
END;
/

/*

Step 7   Configure the Apply Process at cpap.net

Create an apply process to apply DML changes to the hr.employees table. Although the DML handler for the apply process causes deleted employees to be inserted into the emp_del table, this rule specifies the employees table, because the row LCRs in the queue contain changes to the employees table, not the emp_del table. When you run the ADD_TABLE_RULES procedure to create the apply process, the out parameter dml_rule_name contains the name of the DML rule created. This rule name is then passed to the SET_ENQUEUE_DESTINATION procedure.

The SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package specifies that any apply process using the DML rule generated by ADD_TABLE_RULES will enqueue messages that satisfy this rule into streams_queue. In this case, the DML rule is for row LCRs with DML changes to the hr.employees table. A local queue other than the apply process queue can be specified if appropriate.

*/

DECLARE
  emp_rule_name_dml  VARCHAR2(30);
  emp_rule_name_ddl  VARCHAR2(30);
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply', 
    streams_name    => 'apply_emp',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     =>  true,
    include_ddl     =>  false,
    source_database => 'cpap.net',
    dml_rule_name   => emp_rule_name_dml,
    ddl_rule_name   => emp_rule_name_ddl);
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name               =>  emp_rule_name_dml,
    destination_queue_name  =>  'strmadmin.streams_queue');
END;
/

/*

Step 8   Create a Procedure to Dequeue the Messages

The emp_dq procedure created in this step can be used to dequeue the messages that are reenqueued by the apply process. In Step 7, the SET_ENQUEUE_DESTINATION procedure was used to instruct the apply process to enqueue row LCRs containing changes to the hr.employees table into streams_queue. When the emp_dq procedure is executed, it dequeues each row LCR in the queue and displays the type of command in the row LCR, either INSERT, UPDATE, or DELETE. Any information in the row LCRs can be accessed and displayed, not just the command type.


See Also:

"Displaying Detailed Information About Apply Errors" for more information about displaying information in LCRs

*/

CREATE OR REPLACE PROCEDURE emp_dq (consumer IN VARCHAR2) AS
  msg            ANYDATA;
  row_lcr        SYS.LCR$_ROW_RECORD;
  num_var        pls_integer;
  more_messages  BOOLEAN := true;
  navigation     VARCHAR2(30);
BEGIN
  navigation := 'FIRST MESSAGE';
  WHILE (more_messages) LOOP
    BEGIN
      DBMS_STREAMS_MESSAGING.DEQUEUE(
        queue_name   => 'strmadmin.streams_queue',
        streams_name => consumer,
        payload      => msg,
        navigation   => navigation,
        wait         => DBMS_STREAMS_MESSAGING.NO_WAIT);
      IF msg.GETTYPENAME() = 'SYS.LCR$_ROW_RECORD' THEN
        num_var := msg.GetObject(row_lcr);   
        DBMS_OUTPUT.PUT_LINE(row_lcr.GET_COMMAND_TYPE || ' row LCR dequeued');
      END IF;
      navigation := 'NEXT MESSAGE';
      COMMIT;
    EXCEPTION WHEN SYS.DBMS_STREAMS_MESSAGING.ENDOFCURTRANS THEN
                navigation := 'NEXT TRANSACTION';
              WHEN DBMS_STREAMS_MESSAGING.NOMOREMSGS THEN
                more_messages := false;
                DBMS_OUTPUT.PUT_LINE('No more messages.');
              WHEN OTHERS THEN
                RAISE; 
    END;
  END LOOP;
END;
/

/*

Step 9   Start the Apply Process at cpap.net

Set the disable_on_error parameter to n so that the apply process will not be disabled if it encounters an error, and start the apply process at cpap.net.

*/

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name  => 'apply_emp', 
    parameter   => 'disable_on_error', 
    value       => 'n');
END;
/
 
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name  => 'apply_emp');
END;
/

/*

Step 10   Start the Capture Process at cpap.net

Start the capture process at cpap.net.

*/

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name  => 'capture_emp');
END;
/

/*

Step 11   Check the Spool Results

After this script completes, check the streams_config_capapp.out spool file to ensure that all actions finished successfully.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Make DML Changes, Query for Results, and Dequeue Messages

Complete the following steps to confirm that the apply process is configured correctly, make DML changes to the hr.employees table, query for the resulting inserts into the hr.emp_del table and the reenqueued messages in the streams_queue_table, and dequeue the messages that were reenqueued by the DML handler.


Step 1   Confirm the Rule Action Context

Step 7 creates an apply process rule that specifies a destination queue into which LCRs that satisfy the rule are enqueued. In this case, LCRs that satisfy the rule are row LCRs with changes to the hr.employees table.

Complete the following steps to confirm that the rule specifies a destination queue:

  1. Run the following query to determine the name of the rule for DML changes to the hr.employees table used by the apply process apply_emp:

    CONNECT strmadmin/strmadminpw@cpap.net
    
    SELECT RULE_OWNER, RULE_NAME FROM DBA_STREAMS_RULES 
      WHERE STREAMS_NAME = 'APPLY_EMP' AND
            STREAMS_TYPE = 'APPLY' AND
            SCHEMA_NAME  = 'HR' AND
            OBJECT_NAME  = 'EMPLOYEES' AND
            RULE_TYPE    = 'DML'
      ORDER BY RULE_NAME;
    

    Your output looks similar to the following:

    RULE_OWNER                     RULE_NAME
    ------------------------------ ------------------------------
    STRMADMIN                      EMPLOYEES3
    
  2. View the action context for the rule returned by the query in Step 1:

    COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
    COLUMN DESTINATION_QUEUE_NAME HEADING 'Destination Queue' FORMAT A30
    
    SELECT RULE_OWNER, DESTINATION_QUEUE_NAME
      FROM DBA_APPLY_ENQUEUE
      WHERE RULE_NAME = 'EMPLOYEES3'
      ORDER BY DESTINATION_QUEUE_NAME;
    

    Make sure you substitute the rule name returned in Step 1 in the WHERE clause. Your output looks similar to the following:

    Rule Owner      Destination Queue
    --------------- ------------------------------
    STRMADMIN       "STRMADMIN"."STREAMS_QUEUE"
    

    The output should show that LCRs that satisfy the apply process rule are enqueued into streams_queue.

Step 2   Perform an INSERT, UPDATE, and DELETE on hr.employees

Make the following DML changes to the hr.employees table.

CONNECT hr/hr@cpap.net

INSERT INTO hr.employees VALUES(207, 'JOHN', 'SMITH', 'JSMITH@MYCOMPANY.COM', 
  NULL, '07-JUN-94', 'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;

UPDATE hr.employees SET salary=5999 WHERE employee_id=206;
COMMIT;

DELETE FROM hr.employees WHERE employee_id=207;
COMMIT;

Step 3   Query the hr.emp_del Table and the streams_queue_table

After some time passes to allow for capture and apply of the changes performed in the previous step, run the following queries to see the results:

CONNECT strmadmin/strmadminpw@cpap.net

SELECT employee_id, first_name, last_name, timestamp 
  FROM hr.emp_del ORDER BY employee_id;

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME 
  FROM AQ$STREAMS_QUEUE_TABLE ORDER BY MSG_ID;

When you run the first query, you should see a record for the employee with an employee_id of 207. This employee was deleted in the previous step. When you run the second query, you should see the reenqueued messages resulting from all of the changes in the previous step, and the MSG_STATE should be READY for these messages.

Step 4   Dequeue Messages Reenqueued by the DML Handler

Use the emp_dq procedure to dequeue the messages that were reenqueued by the DML handler.

SET SERVEROUTPUT ON SIZE 100000

EXEC emp_dq('HR');

For each row changed by a DML statement, one line is returned, and each line states the command type of the change (either INSERT, UPDATE, or DELETE). If you repeat the query on the queue table in Step 3 after the messages are dequeued, then the dequeued messages should have been consumed. That is, either the MSG_STATE should be PROCESSED for these messages, or the messages should no longer be in the queue.

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME 
  FROM AQ$STREAMS_QUEUE_TABLE ORDER BY MSG_ID;
Introduction to Streams

1 Introduction to Streams

This chapter briefly describes the basic concepts and terminology related to Oracle Streams. These concepts are described in more detail in other chapters in this book and in the Oracle Streams Replication Administrator's Guide.

This chapter contains these topics:

Overview of Streams

Oracle Streams enables information sharing. Using Oracle Streams, each unit of shared information is called a message, and you can share these messages in a stream. The stream can propagate information within a database or from one database to another. The stream routes specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing messages, and sharing the messages with other databases and applications. Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions. You can use all of the capabilities of Oracle Streams at the same time. If your needs change, then you can implement a new capability of Streams without sacrificing existing capabilities.

Using Oracle Streams, you control what information is put into a stream, how the stream flows or is routed from database to database, what happens to messages in the stream as they flow into each database, and how the stream terminates. By configuring specific capabilities of Streams, you can address specific requirements. Based on your specifications, Streams can capture, stage, and manage messages in the database automatically, including, but not limited to, data manipulation language (DML) changes and data definition language (DDL) changes. You can also put user-defined messages into a stream, and Streams can propagate the information to other databases or applications automatically. When messages reach a destination, Streams can consume them based on your specifications.

Figure 1-1 shows the Streams information flow.

Figure 1-1 Streams Information Flow


What Can Streams Do?

The following sections provide an overview of what Streams can do.

Capture Messages at a Database

A capture process can capture database events, such as changes made to tables, schemas, or an entire database. Such changes are recorded in the redo log for a database, and a capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The rules used by a capture process determine which changes it captures, and these captured changes are called captured messages.

The database where changes are generated in the redo log is called the source database. A capture process can capture changes locally at the source database, or it can capture changes remotely at a downstream database. A capture process enqueues logical change records (LCRs) into a queue that is associated with it. When a capture process captures messages, it is sometimes referred to as implicit capture.

Users and applications can also enqueue messages into a queue manually. These messages are called user-enqueued messages, and they can be LCRs or messages of a user-defined type called user messages. When users and applications enqueue messages into a queue manually, it is sometimes referred to as explicit capture.

Stage Messages in a Queue

Messages are stored (or staged) in a queue. These messages can be captured messages or user-enqueued messages. A capture process enqueues messages into an ANYDATA queue. An ANYDATA queue can stage messages of different types. Users and applications can enqueue messages into an ANYDATA queue or into a typed queue. A typed queue can stage messages of one specific type only.
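
For example, the following is a minimal sketch of wrapping a user message in an ANYDATA wrapper and enqueuing it explicitly (the queue name and payload are illustrative):

BEGIN
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => ANYDATA.ConvertVarchar2('New employee hired'));
END;
/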

Propagate Messages from One Queue to Another

Streams propagations can propagate messages from one queue to another. These queues can be in the same database or in different databases. Rules determine which messages are propagated by a propagation.

Consume Messages

A message is consumed when it is dequeued from a queue. An apply process can dequeue messages from a queue implicitly. A user, application, or messaging client can dequeue messages explicitly. The database where messages are consumed is called the destination database. In some configurations, the source database and the destination database can be the same.

Rules determine which messages are dequeued and processed by an apply process. An apply process can apply messages directly to database objects or pass messages to custom PL/SQL subprograms for processing.

Rules determine which messages are dequeued by a messaging client. A messaging client dequeues messages when it is invoked by an application or a user.

Other Capabilities of Streams

Streams includes other capabilities beyond those described in the preceding sections. These capabilities are discussed briefly later in this chapter and in detail later in this document and in the Oracle Streams Replication Administrator's Guide.

What Are the Uses of Streams?

The following sections briefly describe some of the reasons for using Streams. In some cases, Streams components provide infrastructure for various features of Oracle.

Message Queuing

Oracle Streams Advanced Queuing (AQ) enables user applications to enqueue messages into a queue, propagate messages to subscribing queues, notify user applications that messages are ready for consumption, and dequeue messages at the destination. A queue can be configured to stage messages of a particular type only, or a queue can be configured as an ANYDATA queue. Messages of almost any type can be wrapped in an ANYDATA wrapper and staged in ANYDATA queues. AQ supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, Internet propagation, transformations, and gateways to other messaging subsystems.

You can create a queue at a database, and applications can enqueue messages into the queue explicitly. Subscribing applications or messaging clients can dequeue messages directly from this queue. If an application is remote, then a queue can be created in a remote database that subscribes to messages published in the source queue. The destination application can dequeue messages from the remote queue. Alternatively, the destination application can dequeue messages directly from the source queue using a variety of standard protocols.


See Also:

Oracle Streams Advanced Queuing User's Guide and Reference for more information about AQ

Data Replication

Streams can capture DML and DDL changes made to database objects and replicate those changes to one or more other databases. A Streams capture process captures changes made to source database objects and formats them into LCRs, which can be propagated to destination databases and then applied by Streams apply processes.

The destination databases can allow DML and DDL changes to the same database objects, and these changes might or might not be propagated to the other databases in the environment. In other words, you can configure a Streams environment with one database that propagates changes, or you can configure an environment where changes are propagated between databases bidirectionally. Also, the tables for which data is shared do not need to be identical copies at all databases. Both the structure and the contents of these tables can differ at different databases, and the information in these tables can be shared between these databases.


See Also:

Oracle Streams Replication Administrator's Guide for more information about using Streams for replication

Event Management and Notification

Business events are valuable communications between applications or organizations. An application can enqueue messages that represent events into a queue explicitly, or a Streams capture process can capture database events and encapsulate them into messages called LCRs. These captured messages can be the results of DML or DDL changes. Propagations can propagate messages in a stream through multiple queues. Finally, a user application can dequeue messages explicitly, or a Streams apply process can dequeue messages implicitly. An apply process can reenqueue these messages explicitly into the same queue or a different queue if necessary.

You can configure queues to retain explicitly-enqueued messages after consumption for a specified period of time. This capability enables you to use Advanced Queuing (AQ) as a business event management system. AQ stores all messages in the database in a transactional manner, where they can be automatically audited and tracked. You can use this audit trail to extract intelligence about the business operations.
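
For example, the following sketch retains consumed messages in a queue for one day (the queue name and retention period are illustrative):

BEGIN
  DBMS_AQADM.ALTER_QUEUE(
    queue_name     => 'strmadmin.streams_queue',
    retention_time => 86400);   -- retention period in seconds (one day)
END;
/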

Streams capture processes, propagations, apply processes, and messaging clients perform actions based on rules. You specify which events are captured, propagated, applied, and dequeued using rules, and a built-in rules engine evaluates events based on these rules. The ability to capture events and propagate them to relevant consumers based on rules means that you can use Streams for event notification. Messages representing events can be staged in a queue and dequeued explicitly by a messaging client or an application, and then actions can be taken based on these events, which can include an email notification, or passing the message to a wireless gateway for transmission to a cell phone or pager.




Data Warehouse Loading

Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing existing data, and adding new data from the operational databases. Streams components can capture changes made to a production system and send those changes to a staging database or directly to a data warehouse or operational data store. Streams capture of redo data avoids unnecessary overhead on the production systems. Support for data transformations and user-defined apply procedures enables the necessary flexibility to reformat data or update warehouse-specific data fields as data is loaded. In addition, Change Data Capture uses some of the components of Streams to identify data that has changed so that this data can be loaded into a data warehouse.


See Also:

Oracle Database Data Warehousing Guide for more information about data warehouses

Data Protection

One solution for data protection is to create a local or remote copy of a production database. In the event of human error or a catastrophe, the copy can be used to resume processing. You can use Streams to configure flexible high availability environments.

In addition, you can use Oracle Data Guard, a data protection feature that uses some of the same infrastructure as Streams, to create and maintain a logical standby database, which is a logically equivalent standby copy of a production database. As in the case of Streams replication, a capture process captures changes in the redo log and formats these changes into LCRs. These LCRs are applied at the standby databases. The standby databases are fully open for read/write and can include specialized indexes or other database objects. Therefore, these standby databases can be queried as updates are applied.

It is important to move the updates to the remote site as soon as possible with a logical standby database. Doing so ensures that, in the event of a failure, lost transactions are minimal. By directly and synchronously writing the redo logs at the remote database, you can achieve no data loss in the event of a disaster. At the standby system, the changes are captured and directly applied to the standby database with an apply process.




Database Availability During Upgrade and Maintenance Operations

You can use the features of Oracle Streams to achieve little or no database down time during database upgrade and maintenance operations. Maintenance operations include migrating a database to a different platform, migrating a database to a different character set, modifying database schema objects to support upgrades to user-created applications, and applying an Oracle software patch.

Overview of the Capture Process

Changes made to database objects in an Oracle database are logged in the redo log to guarantee recoverability in the event of user error or media failure. A capture process is an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. A capture process formats these changes into messages called LCRs and enqueues them into a queue. There are two types of LCRs: row LCRs contain information about a change to a row in a table resulting from a DML operation, and DDL LCRs contain information about a DDL change to a database object. Rules determine which changes are captured. Figure 1-2 shows a capture process capturing LCRs.

Figure 1-2 Capture Process


You can configure change capture locally at a source database or remotely at a downstream database. A local capture process runs at the source database and captures changes from the local source database redo log. The following types of configurations are possible for a downstream capture process:


Note:

A capture process does not capture some types of DML and DDL changes, and it does not capture changes made in the SYS, SYSTEM, or CTXSYS schemas.


See Also:

Chapter 2, "Streams Capture Process" for more information about capture processes and for detailed information about which DML and DDL statements are captured by a capture process

Overview of Message Staging and Propagation

Streams uses queues to stage messages for propagation or consumption. Propagations send messages from one queue to another, and these queues can be in the same database or in different databases. The queue from which the messages are propagated is called the source queue, and the queue that receives the messages is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.

Messages that are staged in a queue can be consumed by an apply process, a messaging client, or an application. Rules determine which messages are propagated by a propagation. Figure 1-3 shows propagation from a source queue to a destination queue.

Figure 1-3 Propagation from a Source Queue to a Destination Queue



See Also:

Chapter 3, "Streams Staging and Propagation" for more information about staging and propagation

Overview of Directed Networks

Streams enables you to configure an environment in which changes are shared through directed networks. In a directed network, propagated messages pass through one or more intermediate databases before arriving at a destination database where they are consumed. The messages might or might not be consumed at an intermediate database in addition to the destination database. Using Streams, you can choose which messages are propagated to each destination database, and you can specify the route messages will traverse on their way to a destination database.

Explicit Enqueue and Dequeue of Messages

User applications can enqueue messages into a queue explicitly. The user applications can format these user-enqueued messages as LCRs or user messages, and an apply process, a messaging client, or a user application can consume these messages. Messages that were enqueued explicitly into a queue can be propagated to another queue or explicitly dequeued from the same queue. Figure 1-4 shows explicit enqueue of messages into and dequeue of messages from the same queue.

Figure 1-4 Explicit Enqueue and Dequeue of Messages in a Single Queue


When messages are propagated between queues, messages that were enqueued explicitly into a source queue can be dequeued explicitly from a destination queue by a messaging client or user application. These messages can also be processed by an apply process. Figure 1-5 shows explicit enqueue of messages into a source queue, propagation to a destination queue, and then explicit dequeue of messages from the destination queue.

Figure 1-5 Explicit Enqueue, Propagation, and Dequeue of Messages



See Also:

"ANYDATA Queues and User Messages" for more information about explicit enqueue and dequeue of messages

Overview of the Apply Process

An apply process is an Oracle background process that dequeues messages from a queue and either applies each message directly to a database object or passes the message as a parameter to a user-defined procedure called an apply handler. Apply handlers include message handlers, DML handlers, DDL handlers, precommit handlers, and error handlers.

Typically, an apply process applies messages to the local database where it is running, but, in a heterogeneous database environment, it can be configured to apply messages at a remote non-Oracle database. Rules determine which messages are dequeued by an apply process. Figure 1-6 shows an apply process processing LCRs and user messages.

Figure 1-6 Apply Process


Overview of the Messaging Client

A messaging client consumes user-enqueued messages when it is invoked by an application or a user. Rules determine which user-enqueued messages are dequeued by a messaging client. These user-enqueued messages can be LCRs or user messages. Figure 1-7 shows a messaging client dequeuing user-enqueued messages.

Figure 1-7 Messaging Client


Overview of Automatic Conflict Detection and Resolution

An apply process detects conflicts automatically when directly applying LCRs in a replication environment. A conflict is a mismatch between the old values in an LCR and the expected data in a table. Typically, a conflict results when the same row in the source database and destination database is changed at approximately the same time.

When a conflict occurs, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules. Streams offers a variety of prebuilt conflict handlers. Using these prebuilt handlers, you can define a conflict resolution system for each of your databases that resolves conflicts in accordance with your business rules. If you have a unique situation that prebuilt conflict resolution handlers cannot resolve, then you can build your own conflict resolution handlers.

If a conflict is not resolved, or if a handler procedure raises an error, then all messages in the transaction that raised the error are saved in the error queue for later analysis and possible reexecution.
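For example, the following call is a minimal sketch of specifying a prebuilt update conflict handler with the DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER procedure; the hr.employees table, the column list, and the MAXIMUM resolution method are illustrative choices, not requirements of the procedure:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  -- Columns for which this handler resolves update conflicts
  cols(1) := 'salary';
  cols(2) := 'commission_pct';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',   -- illustrative table
    method_name       => 'MAXIMUM',        -- keep the row with the greater resolution column value
    resolution_column => 'salary',
    column_list       => cols);
END;
/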

Overview of Rules

Streams enables you to control which information to share and where to share it using rules. A rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query.

A rule consists of the following components:

  • A rule condition, which combines one or more expressions and operators and returns a Boolean value (TRUE, FALSE, or NULL)

  • An optional rule evaluation context, which defines external data that can be referenced in rule conditions

  • An optional rule action context, which contains information associated with the rule that is interpreted by the client of the rules engine

You can group related rules together into rule sets. In Streams, rule sets can be positive or negative.

For example, the following rule condition can be used for a rule in Streams to specify that the schema name that owns a table must be hr and that the table name must be departments for the condition to evaluate to TRUE:

:dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'DEPARTMENTS'

The :dml variable is used in rule conditions for row LCRs. In a Streams environment, a rule with this condition can be used in the following ways:

  • In the positive rule set for a capture process, to capture row changes made to the hr.departments table

  • In the positive rule set for a propagation, to propagate LCRs that contain row changes to the hr.departments table

  • In the positive rule set for an apply process, to apply LCRs that contain row changes to the hr.departments table

  • In the positive rule set for a messaging client, to dequeue LCRs that contain row changes to the hr.departments table

Streams performs tasks based on rules. These tasks include capturing messages with a capture process, propagating messages with a propagation, applying messages with an apply process, dequeuing messages with a messaging client, and discarding messages.
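For example, the following call is a minimal sketch of how the DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure can generate a rule with a condition similar to the one shown previously and add it to the positive rule set of a capture process; the capture process name and queue name are illustrative:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.departments',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',            -- illustrative capture process name
    queue_name     => 'strmadmin.streams_queue',   -- illustrative queue name
    include_dml    => true,
    include_ddl    => false,
    inclusion_rule => true);                       -- add the rule to the positive rule set
END;
/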

Overview of Rule-Based Transformations

A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. There are two types of rule-based transformations: declarative and custom.

Declarative rule-based transformations cover a set of common transformation scenarios for row LCRs, including renaming a schema, renaming a table, adding a column, renaming a column, and deleting a column. You specify (or declare) such a transformation using a procedure in the DBMS_STREAMS_ADM package. Streams performs declarative transformations internally, without invoking PL/SQL.
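For example, the following call is a minimal sketch of declaring a rule-based transformation that renames the hr.departments table to hr.depts in row LCRs that satisfy the specified rule; the rule name is hypothetical and must identify an existing rule in your environment:

BEGIN
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments12',  -- hypothetical rule name
    from_table_name => 'hr.departments',
    to_table_name   => 'hr.depts',
    operation       => 'ADD');                     -- add the transformation to the rule
END;
/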

A custom rule-based transformation requires a user-defined PL/SQL function to perform the transformation. Streams invokes the PL/SQL function to perform the transformation. A custom rule-based transformation can modify either captured messages or user-enqueued messages, and these messages can be LCRs or user messages. For example, a custom rule-based transformation can change the datatype of a particular column in an LCR.

To specify a custom rule-based transformation, use the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure. The transformation function takes as input an ANYDATA object containing a message and returns an ANYDATA object containing the transformed message. For example, a transformation can use a PL/SQL function that takes as input an ANYDATA object containing an LCR with a NUMBER datatype for a column and returns an ANYDATA object containing an LCR with a VARCHAR2 datatype for the same column.
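The following is a minimal sketch of a custom rule-based transformation function and the call that associates it with a rule; the function name, the rule name, and the schema renaming logic are illustrative assumptions, and a production function might instead modify column datatypes or values:

CREATE OR REPLACE FUNCTION strmadmin.change_owner(in_any IN ANYDATA)
RETURN ANYDATA
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  NUMBER;
BEGIN
  -- Transform only row LCRs; return any other payload unchanged
  IF in_any.GETTYPENAME() = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    IF lcr.GET_OBJECT_OWNER() = 'HR' THEN
      lcr.SET_OBJECT_OWNER('HR_REPORTS');  -- hypothetical destination schema
    END IF;
    RETURN ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN in_any;
END;
/

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.departments12',   -- hypothetical rule name
    transform_function => 'strmadmin.change_owner');
END;
/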

Either type of rule-based transformation can occur at the following times:

  • During capture, when a capture process evaluates its rule sets against a captured message

  • During propagation, when a propagation evaluates its rule sets against a message being propagated

  • During dequeue, when an apply process or messaging client evaluates its rule sets against a message being dequeued

When a transformation is performed during apply, an apply process can apply the transformed message directly or send the transformed message to an apply handler for processing. Figure 1-8 shows a rule-based transformation during apply.

Figure 1-8 Transformation During Apply



Note:

  • A rule must be in a positive rule set for its rule-based transformation to be invoked. A rule-based transformation specified for a rule in a negative rule set is ignored by capture processes, propagations, apply processes, and messaging clients.

  • Throughout this document, "rule-based transformation" is used when the text applies to both declarative and custom rule-based transformations. This document distinguishes between the two types of rule-based transformations when necessary.


Overview of Streams Tags

Every redo entry in the redo log has a tag associated with it. The datatype of the tag is RAW. By default, when a user or application generates redo entries, the value of the tag is NULL for each redo entry, and a NULL tag consumes no space in the redo entry. The size limit for a tag value is 2000 bytes.

In Streams, rules can have conditions relating to tag values to control the behavior of Streams clients. For example, a tag can be used to determine whether an LCR contains a change that originated in the local database or at a different database, so that you can avoid change cycling (sending an LCR back to the database where it originated). Also, a tag can be used to specify the set of destination databases for each LCR. Tags can be used for other LCR tracking purposes as well.

You can specify Streams tags for redo entries generated by a certain session or by an apply process. These tags then become part of the LCRs captured by a capture process. Typically, tags are used in Streams replication environments, but you can use them whenever it is necessary to track database changes and LCRs.
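For example, the following statements are a minimal sketch of setting and then displaying the tag for redo entries generated by the current session; the tag value '1D' is an arbitrary example:

BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));  -- arbitrary example tag value
END;
/

-- Display the tag currently set for the session
SELECT DBMS_STREAMS.GET_TAG() FROM DUAL;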


See Also:

Oracle Streams Replication Administrator's Guide for more information about Streams tags

Overview of Heterogeneous Information Sharing

In addition to information sharing between Oracle databases, Streams supports information sharing between Oracle databases and non-Oracle databases. The following sections contain an overview of this support.


See Also:

Oracle Streams Replication Administrator's Guide for more information about heterogeneous information sharing with Streams

Overview of Oracle to Non-Oracle Data Sharing

If an Oracle database is the source and a non-Oracle database is the destination, then the non-Oracle database destination lacks the following Streams mechanisms:

  • A queue to receive messages

  • An apply process to dequeue and apply messages

To share DML changes from an Oracle source database with a non-Oracle destination database, the Oracle database functions as a proxy and carries out some of the steps that would normally be done at the destination database. That is, the messages intended for the non-Oracle destination database are dequeued in the Oracle database itself, and an apply process at the Oracle database uses Heterogeneous Services to apply the messages to the non-Oracle database across a network connection through a gateway. Figure 1-9 shows an Oracle database sharing data with a non-Oracle database.

Figure 1-9 Oracle to Non-Oracle Heterogeneous Data Sharing



See Also:

Oracle Database Heterogeneous Connectivity Administrator's Guide for more information about Heterogeneous Services

Overview of Non-Oracle to Oracle Data Sharing

To capture and propagate changes from a non-Oracle database to an Oracle database, a custom application is required. This application gets the changes made to the non-Oracle database by reading from transaction logs, using triggers, or some other method. The application must assemble and order the transactions and must convert each change into an LCR. Next, the application must enqueue the LCRs into a queue in an Oracle database by using the PL/SQL interface, where they can be processed by an apply process. Figure 1-10 shows a non-Oracle database sharing data with an Oracle database.

Figure 1-10 Non-Oracle to Oracle Heterogeneous Data Sharing


Example Streams Configurations

Figure 1-11 shows how Streams might be configured to share information within a single database, while Figure 1-12 shows how Streams might be configured to share information between two different databases.

Figure 1-11 Streams Configuration in a Single Database


Figure 1-12 Streams Configuration Sharing Information Between Databases


Administration Tools for a Streams Environment

Several tools are available for configuring, administering, and monitoring your Streams environment. Oracle-supplied PL/SQL packages are the primary configuration and management tools, and the Streams tool in Oracle Enterprise Manager provides some configuration, administration, and monitoring capabilities to help you manage your environment. Additionally, Streams data dictionary views keep you informed about your Streams environment.

Oracle-Supplied PL/SQL Packages

The following Oracle-supplied PL/SQL packages contain procedures and functions for configuring and managing a Streams environment.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about these packages

DBMS_STREAMS_ADM Package

The DBMS_STREAMS_ADM package provides an administrative interface for adding and removing simple rules for capture processes, propagations, and apply processes at the table, schema, and database level. It also enables you to add rules that control which messages a propagation propagates and which messages a messaging client dequeues, and it contains procedures for creating queues, for managing Streams metadata such as data dictionary information, and for configuring and maintaining a Streams replication environment. This package is provided as an easy way to complete common tasks in a Streams environment. You can use other packages, such as the DBMS_CAPTURE_ADM, DBMS_PROPAGATION_ADM, DBMS_APPLY_ADM, DBMS_RULE_ADM, and DBMS_AQADM packages, to complete these same tasks, as well as tasks that require additional customization.

DBMS_CAPTURE_ADM Package

The DBMS_CAPTURE_ADM package provides an administrative interface for starting, stopping, and configuring a capture process. This package also provides administrative procedures that prepare database objects at the source database for instantiation at a destination database.
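For example, the following calls are a minimal sketch of stopping and restarting a capture process; the capture process name is illustrative:

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'strm01_capture');   -- illustrative name
END;
/

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'strm01_capture');
END;
/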

DBMS_PROPAGATION_ADM Package

The DBMS_PROPAGATION_ADM package provides an administrative interface for configuring propagation from a source queue to a destination queue.

DBMS_APPLY_ADM Package

The DBMS_APPLY_ADM package provides an administrative interface for starting, stopping, and configuring an apply process. It includes procedures that enable you to configure apply handlers, set enqueue destinations for messages, and specify execution directives for messages, as well as procedures that set the instantiation SCN for objects at a destination database. The package also includes subprograms for configuring conflict detection and resolution and for managing apply errors.
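For example, the following call is a minimal sketch of designating a DML handler with the SET_DML_HANDLER procedure; the table is illustrative, and the handler procedure is hypothetical and must be created separately:

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',                 -- illustrative table
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => false,
    user_procedure => 'strmadmin.emp_dml_handler');   -- hypothetical handler procedure
END;
/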

DBMS_STREAMS_MESSAGING Package

The DBMS_STREAMS_MESSAGING package provides interfaces to enqueue messages into and dequeue messages from an ANYDATA queue.
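For example, the following call is a minimal sketch of enqueuing a simple VARCHAR2 message wrapped in an ANYDATA wrapper; the queue name and message text are illustrative:

BEGIN
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',                      -- illustrative queue
    payload    => ANYDATA.CONVERTVARCHAR2('order 1503 shipped'));
END;
/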

DBMS_RULE_ADM Package

The DBMS_RULE_ADM package provides an administrative interface for creating and managing rules, rule sets, and rule evaluation contexts. This package also contains subprograms for managing privileges related to rules.

DBMS_RULE Package

The DBMS_RULE package contains the EVALUATE procedure, which evaluates a rule set. The goal of this procedure is to produce the list of satisfied rules, based on the data. This package also contains subprograms that enable you to use iterators during rule evaluation. Instead of returning all rules that evaluate to TRUE or MAYBE for an evaluation, iterators can return one rule at a time.

DBMS_STREAMS Package

The DBMS_STREAMS package provides interfaces to convert ANYDATA objects into LCR objects, to return information about Streams attributes and Streams clients, and to annotate redo entries generated by a session with a tag. This tag can affect the behavior of a capture process, a propagation, an apply process, or a messaging client whose rules include specifications for these tags in redo entries or LCRs.

DBMS_STREAMS_TABLESPACE_ADM Package

The DBMS_STREAMS_TABLESPACE_ADM package provides administrative procedures for creating and managing a tablespace repository. This package also provides administrative procedures for copying tablespaces between databases and moving tablespaces from one database to another. This package uses transportable tablespaces, Data Pump, and the DBMS_FILE_TRANSFER package.

DBMS_STREAMS_AUTH Package

The DBMS_STREAMS_AUTH package provides interfaces for granting privileges to and revoking privileges from Streams administrators.
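For example, the following call is a minimal sketch of granting the privileges required by a Streams administrator named strmadmin:

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');
END;
/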

Streams Data Dictionary Views

Every database in a Streams environment has Streams data dictionary views. These views maintain administrative information about local rules, objects, capture processes, propagations, apply processes, and messaging clients. You can use these views to monitor your Streams environment.
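For example, the following queries are a minimal sketch of checking the status of the capture processes and apply processes in a database:

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;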


Streams Tool in the Oracle Enterprise Manager Console

To help configure, administer, and monitor Streams environments, Oracle provides a Streams tool in the Oracle Enterprise Manager Console. You can also use the Streams tool to generate Streams configuration scripts, which you can then modify and run to configure your Streams environment. The Streams tool online help contains the primary documentation for this tool.

Figure 1-13 shows the top portion of the Streams page in Enterprise Manager.

Figure 1-13 Streams Page in Enterprise Manager


Figure 1-14 shows the Streams Topology, which is on the bottom portion of the Streams page in Enterprise Manager.

Figure 1-14 Streams Topology



See Also:

The online help for the Streams tool in the Oracle Enterprise Manager

Using Information Provisioning

16 Using Information Provisioning

This chapter describes how to use information provisioning. This chapter includes an example that creates a tablespace repository, examples that transfer tablespaces between databases, and an example that uses a file group repository to store different versions of files.

This chapter contains these topics:

Using a Tablespace Repository

The following procedures in the DBMS_STREAMS_TABLESPACE_ADM package can create a tablespace repository, add versioned tablespace sets to a tablespace repository, and copy versioned tablespace sets from a tablespace repository:

  • ATTACH_TABLESPACES

  • CLONE_TABLESPACES

  • DETACH_TABLESPACES

This section illustrates how to use a tablespace repository with an example scenario. In the scenario, the goal is to run quarterly reports on the sales tablespaces (sales_tbs1 and sales_tbs2). Sales are recorded in these tablespaces in the inst1.net database. The example clones the tablespaces quarterly and stores a new version of the tablespaces in the tablespace repository. The tablespace repository also resides in the inst1.net database. When a specific version of the tablespace set is required to run reports at a reporting database, it is copied from the tablespace repository and attached to the reporting database.

In this example scenario, the following databases are the reporting databases:

  • The inst2.net database, which shares a file system with the inst1.net database

  • The inst3.net database, which does not share a file system with the inst1.net database

The following sections describe how to create and populate the tablespace repository and how to use the tablespace repository to run reports at the other databases:

These examples must be run by an administrative user with the necessary privileges to run the procedures listed previously.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about these procedures and the privileges required to run them

Creating and Populating a Tablespace Repository

This example creates a tablespace repository and adds a new version of a tablespace set to the repository after each quarter. The tablespace set consists of the sales tablespaces for a business: sales_tbs1 and sales_tbs2.

Figure 16-1 provides an overview of the tablespace repository created in this example:

Figure 16-1 Example Tablespace Repository


Table 16-1 shows the tablespace set versions created in this example, their directory objects, and the corresponding file system directory for each directory object.

Table 16-1 Versions in the Tablespace Repository

Version        Directory Object   Corresponding File System Directory
v_q1fy2005     q1fy2005           /home/sales/q1fy2005
v_q2fy2005     q2fy2005           /home/sales/q2fy2005


This example makes the following assumptions:

  • The inst1.net database exists.

  • The sales_tbs1 and sales_tbs2 tablespaces exist in the inst1.net database.

The following steps create and populate a tablespace repository:

  1. Connect as an administrative user to the database where the sales tablespaces are modified with new sales data:

    CONNECT strmadmin/strmadminpw@inst1.net
    

    The administrative user must have the necessary privileges to run the procedures in the DBMS_STREAMS_TABLESPACE_ADM package and must have the necessary privileges to create directory objects.

  2. Create a directory object for the first quarter in fiscal year 2005 on inst1.net:

    CREATE OR REPLACE DIRECTORY q1fy2005 AS '/home/sales/q1fy2005';
    

    The specified file system directory must exist when you create the directory object.

  3. Create a directory object that corresponds to the directory that contains the datafiles for the tablespaces in the inst1.net database. For example, if the datafiles for the tablespaces are in the /orc/inst1/dbs directory, then create a directory object that corresponds to this directory:

    CREATE OR REPLACE DIRECTORY dbfiles_inst1 AS '/orc/inst1/dbs';
    
  4. Clone the tablespace set and add the first version of the tablespace set to the tablespace repository:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.CLONE_TABLESPACES(
        tablespace_names            => tbs_set,
        tablespace_directory_object => 'q1fy2005',
        file_group_name             => 'strmadmin.sales',
        version_name                => 'v_q1fy2005');
    END;
    /
    

    The sales file group is created automatically if it does not exist.

  5. When the second quarter in fiscal year 2005 is complete, create a directory object for the second quarter in fiscal year 2005:

    CREATE OR REPLACE DIRECTORY q2fy2005 AS '/home/sales/q2fy2005';
    

    The specified file system directory must exist when you create the directory object.

  6. Clone the tablespace set and add the next version of the tablespace set to the tablespace repository at the inst1.net database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.CLONE_TABLESPACES(
        tablespace_names            => tbs_set,
        tablespace_directory_object => 'q2fy2005',
        file_group_name             => 'strmadmin.sales',
        version_name                => 'v_q2fy2005');
    END;
    /
    

Steps 5 and 6 can be repeated whenever a quarter ends to store a version of the tablespace set for each quarter. Each time, create a new directory object to store the tablespace files for the quarter, and specify a unique version name for the quarter.

Using a Tablespace Repository for Remote Reporting with a Shared File System

This example runs reports at inst2.net on specific versions of the sales tablespaces stored in a tablespace repository at inst1.net. These two databases share a file system, and the reports that are run on inst2.net might make changes to the tablespaces. Therefore, the tablespaces are made read/write at inst2.net. When the reports are complete, a new version of the tablespace files is stored in a separate directory from the original version of the tablespace files.

Figure 16-2 provides an overview of how tablespaces in a tablespace repository are attached to a different database in this example:

Figure 16-2 Attaching Tablespaces with a Shared File System


Figure 16-3 provides an overview of how tablespaces are detached and placed in a tablespace repository in this example:

Figure 16-3 Detaching Tablespaces with a Shared File System


Table 16-2 shows the tablespace set versions in the tablespace repository when this example is complete. It shows the directory object for each version and the corresponding file system directory for each directory object. The versions that are new are created in this example. The versions that existed prior to this example were created in "Creating and Populating a Tablespace Repository".

Table 16-2 Versions in the Tablespace Repository After inst2.net Reporting

Version        Directory Object   Corresponding File System Directory   New?
v_q1fy2005     q1fy2005           /home/sales/q1fy2005                  No
v_q1fy2005_r   q1fy2005_r         /home/sales/q1fy2005_r                Yes
v_q2fy2005     q2fy2005           /home/sales/q2fy2005                  No
v_q2fy2005_r   q2fy2005_r         /home/sales/q2fy2005_r                Yes


This example makes the following assumptions:

  • The inst1.net and inst2.net databases exist.

  • The inst1.net and inst2.net databases can access a shared file system.

  • Networking is configured between the databases so that these databases can communicate with each other.

  • A tablespace repository that contains a version of the sales tablespaces (sales_tbs1 and sales_tbs2) for various quarters exists in the inst1.net database. This tablespace repository was created and populated in the example "Creating and Populating a Tablespace Repository".

Complete the following steps:

  1. Connect to inst1.net:

    CONNECT strmadmin/strmadminpw@inst1.net
    

    The administrative user must have the necessary privileges to create directory objects.

  2. Create a directory object that will store the tablespace files for the first quarter in fiscal year 2005 on inst1.net after the inst2.net database has completed reporting on this quarter:

    CREATE OR REPLACE DIRECTORY q1fy2005_r AS '/home/sales/q1fy2005_r';
    

    The specified file system directory must exist when you create the directory objects.

  3. Connect as an administrative user to the inst2.net database:

    CONNECT strmadmin/strmadminpw@inst2.net
    

    The administrative user must have the necessary privileges to run the procedures in the DBMS_STREAMS_TABLESPACE_ADM package, create directory objects, and create database links.

  4. Create two directory objects for the first quarter in fiscal year 2005 on inst2.net. These directory objects must have the same names and correspond to the same directories on the shared file system as the directory objects used by the tablespace repository in the inst1.net database for the first quarter:

    CREATE OR REPLACE DIRECTORY q1fy2005 AS '/home/sales/q1fy2005';
    
    CREATE OR REPLACE DIRECTORY q1fy2005_r AS '/home/sales/q1fy2005_r';
    
  5. Create a database link from inst2.net to the inst1.net database:

    CREATE DATABASE LINK inst1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
       USING 'inst1.net';
    
  6. Attach the tablespace set to the inst2.net database from the strmadmin.sales file group in the inst1.net database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q1fy2005',
        datafiles_directory_object => 'q1fy2005_r',
        repository_db_link         => 'inst1.net',
        tablespace_names           => tbs_set);
    END;
    /
    

    Notice that q1fy2005_r is specified for the datafiles_directory_object parameter. Therefore, the datafiles for the tablespaces and the export dump file are copied from the /home/sales/q1fy2005 location to the /home/sales/q1fy2005_r location by the procedure. The attached tablespaces in the inst2.net database use the datafiles in the /home/sales/q1fy2005_r location. The Data Pump import log file also is placed in this directory.

    The attached tablespaces use the datafiles in the /home/sales/q1fy2005_r location. However, the v_q1fy2005 version of the tablespaces in the tablespace repository consists of the files in the original /home/sales/q1fy2005 location.

  7. Make the tablespaces read/write at inst2.net:

    ALTER TABLESPACE sales_tbs1 READ WRITE;
    
    ALTER TABLESPACE sales_tbs2 READ WRITE;
    
  8. Run the reports on the data in the sales tablespaces at the inst2.net database. The reports make changes to the tablespaces.

  9. Detach the version of the tablespace set for the first quarter of 2005 from the inst2.net database:

    DECLARE
      tbs_set  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.DETACH_TABLESPACES(
        tablespace_names        => tbs_set,
        export_directory_object => 'q1fy2005_r',
        file_group_name         => 'strmadmin.sales',
        version_name            => 'v_q1fy2005_r',
        repository_db_link      => 'inst1.net');
    END;
    /
    

    Only one version of a tablespace set can be attached to a database at a time. Therefore, the version of the sales tablespaces for the first quarter of 2005 must be detached from inst2.net before the version of this tablespace set for the second quarter of 2005 can be attached.

    Also, notice that the specified export_directory_object is q1fy2005_r, and that the version_name is v_q1fy2005_r. After the detach operation, there are two versions of the tablespace files for the first quarter of 2005 stored in the tablespace repository on inst1.net: one version of the tablespace prior to reporting and one version after reporting. These two versions have different version names and are stored in different directory objects.

  10. Connect to inst1.net, and create a directory object that will store the tablespace files for the second quarter in fiscal year 2005 on inst1.net after the inst2.net database has completed reporting on this quarter:

    CONNECT strmadmin/strmadminpw@inst1.net
    
    CREATE OR REPLACE DIRECTORY q2fy2005_r AS '/home/sales/q2fy2005_r';
    

    The specified file system directory must exist when you create the directory object.

  11. Connect to inst2.net, and create two directory objects for the second quarter in fiscal year 2005 at inst2.net. These directory objects must have the same names and correspond to the same directories on the shared file system as the directory objects used by the tablespace repository in the inst1.net database for the second quarter:

    CONNECT strmadmin/strmadminpw@inst2.net
    
    CREATE OR REPLACE DIRECTORY q2fy2005 AS '/home/sales/q2fy2005';
    
    CREATE OR REPLACE DIRECTORY q2fy2005_r AS '/home/sales/q2fy2005_r';
    
  12. Attach the tablespace set for the second quarter of 2005 to the inst2.net database from the sales file group in the inst1.net database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q2fy2005',
        datafiles_directory_object => 'q2fy2005_r',
        repository_db_link         => 'inst1.net',
        tablespace_names           => tbs_set);
    END;
    /
    
  13. Make the tablespaces read/write at inst2.net:

    ALTER TABLESPACE sales_tbs1 READ WRITE;
    
    ALTER TABLESPACE sales_tbs2 READ WRITE;
    
  14. Run the reports on the data in the sales tablespaces at the inst2.net database. The reports make changes to the tablespaces.

  15. Detach the version of the tablespace set for the second quarter of 2005 from inst2.net:

    DECLARE
      tbs_set  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.DETACH_TABLESPACES(
        tablespace_names        => tbs_set,
        export_directory_object => 'q2fy2005_r',
        file_group_name         => 'strmadmin.sales',
        version_name            => 'v_q2fy2005_r',
        repository_db_link      => 'inst1.net');
    END;
    /
    

Steps 10-15 can be repeated whenever a quarter ends to run reports on each quarter.

Using a Tablespace Repository for Remote Reporting Without a Shared File System

This example runs reports at inst3.net on specific versions of the sales tablespaces stored in a tablespace repository at inst1.net. These two databases do not share a file system, and the reports that are run on inst3.net do not make any changes to the tablespace. Therefore, the tablespaces remain read-only at inst3.net, and, when the reports are complete, there is no need for a new version of the tablespace files in the tablespace repository on inst1.net.

Figure 16-4 provides an overview of how tablespaces in a tablespace repository are attached to a different database in this example:

Figure 16-4 Attaching Tablespaces Without a Shared File System


Table 16-3 shows the directory objects used in this example. It shows the existing directory objects that are associated with tablespace repository versions on the inst1.net database, and it shows the new directory objects created on the inst3.net database in this example. The directory objects that existed prior to this example were created in "Creating and Populating a Tablespace Repository".

Table 16-3 Directory Objects Used in Example

Directory Object   Database    Version                         Corresponding File System Directory   New?
q1fy2005           inst1.net   v_q1fy2005                      /home/sales/q1fy2005                  No
q2fy2005           inst1.net   v_q2fy2005                      /home/sales/q2fy2005                  No
q1fy2005           inst3.net   Not associated with a version   /usr/sales_data/fy2005q1              Yes
q2fy2005           inst3.net   Not associated with a version   /usr/sales_data/fy2005q2              Yes


This example makes the following assumptions:

  • The inst1.net and inst3.net databases exist.

  • The inst1.net and inst3.net databases do not share a file system.

  • Networking is configured between the databases so that they can communicate with each other.

  • The sales tablespaces (sales_tbs1 and sales_tbs2) exist in the inst1.net database.

Complete the following steps:

  1. Connect as an administrative user to the inst3.net database:

    CONNECT strmadmin/strmadminpw@inst3.net
    

    The administrative user must have the necessary privileges to run the procedures in the DBMS_STREAMS_TABLESPACE_ADM package, create directory objects, and create database links.

  2. Create a database link from inst3.net to the inst1.net database:

    CREATE DATABASE LINK inst1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
       USING 'inst1.net';
    
  3. Create a directory object for the first quarter in fiscal year 2005 on inst3.net. Although inst3.net is a remote database that does not share a file system with inst1.net, the directory object must have the same name as the directory object used by the tablespace repository in the inst1.net database for the first quarter. However, the directory paths of the directory objects on inst1.net and inst3.net do not need to match.

    CREATE OR REPLACE DIRECTORY q1fy2005 AS '/usr/sales_data/fy2005q1';
    

    The specified file system directory must exist when you create the directory object.

  4. Connect as an administrative user to the inst1.net database:

    CONNECT strmadmin/strmadminpw@inst1.net
    

    The administrative user must have the necessary privileges to run the procedures in the DBMS_FILE_TRANSFER package and create database links. This example uses the DBMS_FILE_TRANSFER package to copy the tablespace files from inst1.net to inst3.net. If some other method is used to transfer the files, then the privileges to run the procedures in the DBMS_FILE_TRANSFER package are not required.

  5. Create a database link from inst1.net to the inst3.net database:

    CREATE DATABASE LINK inst3.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
       USING 'inst3.net';
    

    This database link will be used to transfer files to the inst3.net database in Step 6.

  6. Copy the datafile for each tablespace and the export dump file for the first quarter to the inst3.net database:

    BEGIN
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q1fy2005',
        source_file_name             => 'sales_tbs1.dbf',
        destination_directory_object => 'q1fy2005',
        destination_file_name        => 'sales_tbs1.dbf',
        destination_database         => 'inst3.net');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q1fy2005',
        source_file_name             => 'sales_tbs2.dbf',
        destination_directory_object => 'q1fy2005',
        destination_file_name        => 'sales_tbs2.dbf',
        destination_database         => 'inst3.net');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q1fy2005',
        source_file_name             => 'expdat16.dmp',
        destination_directory_object => 'q1fy2005',
        destination_file_name        => 'expdat16.dmp',
        destination_database         => 'inst3.net');
    END;
    /
    

    Before you run the PUT_FILE procedure for the export dump file, you can query the DBA_FILE_GROUP_FILES data dictionary view to determine the name and directory object of the export dump file. For example, run the following query to list this information for the export dump file in the v_q1fy2005 version:

    COLUMN FILE_NAME HEADING 'Export Dump|File Name' FORMAT A35
    COLUMN FILE_DIRECTORY HEADING 'Directory Object' FORMAT A35
    
    SELECT FILE_NAME, FILE_DIRECTORY FROM DBA_FILE_GROUP_FILES
      where FILE_GROUP_NAME = 'SALES' AND
            VERSION_NAME    = 'V_Q1FY2005';
    
  7. Connect to inst3.net and attach the tablespace set for the first quarter of 2005 to the inst3.net database from the sales file group in the inst1.net database:

    CONNECT strmadmin/strmadminpw@inst3.net
    
    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q1fy2005',
        datafiles_directory_object => 'q1fy2005',
        repository_db_link         => 'inst1.net',
        tablespace_names           => tbs_set);
    END;
    /
    

    The tablespaces are read-only when they are attached. Because the reports on inst3.net do not change the tablespaces, the tablespaces can remain read-only.

  8. Run the reports on the data in the sales tablespaces at the inst3.net database.

  9. Drop the tablespaces and their contents at inst3.net:

    DROP TABLESPACE sales_tbs1 INCLUDING CONTENTS;
    
    DROP TABLESPACE sales_tbs2 INCLUDING CONTENTS;
    

    The tablespaces are dropped from the inst3.net database, but the tablespace files remain in the directory object.

  10. Create a directory object for the second quarter in fiscal year 2005 on inst3.net. The directory object must have the same name as the directory object used by the tablespace repository in the inst1.net database for the second quarter. However, the directory paths of the directory objects on inst1.net and inst3.net do not need to match.

    CREATE OR REPLACE DIRECTORY q2fy2005 AS '/usr/sales_data/fy2005q2';
    

    The specified file system directory must exist when you create the directory object.

  11. Connect to the inst1.net database and copy the datafile and the export dump file for the second quarter to the inst3.net database:

    CONNECT strmadmin/strmadminpw@inst1.net
    
    BEGIN
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q2fy2005',
        source_file_name             => 'sales_tbs1.dbf',
        destination_directory_object => 'q2fy2005',
        destination_file_name        => 'sales_tbs1.dbf',
        destination_database         => 'inst3.net');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q2fy2005',
        source_file_name             => 'sales_tbs2.dbf',
        destination_directory_object => 'q2fy2005',
        destination_file_name        => 'sales_tbs2.dbf',
        destination_database         => 'inst3.net');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q2fy2005',
        source_file_name             => 'expdat18.dmp',
        destination_directory_object => 'q2fy2005',
        destination_file_name        => 'expdat18.dmp',
        destination_database         => 'inst3.net');
    END;
    /
    

    Before you run the PUT_FILE procedure for the export dump file, you can query the DBA_FILE_GROUP_FILES data dictionary view to determine the name and directory object of the export dump file. For example, run the following query to list this information for the export dump file in the v_q2fy2005 version:

    COLUMN FILE_NAME HEADING 'Export Dump|File Name' FORMAT A35
    COLUMN FILE_DIRECTORY HEADING 'Directory Object' FORMAT A35
    
    SELECT FILE_NAME, FILE_DIRECTORY FROM DBA_FILE_GROUP_FILES
      where FILE_GROUP_NAME = 'SALES' AND
            VERSION_NAME    = 'V_Q2FY2005';
    
  12. Attach the tablespace set for the second quarter of 2005 to the inst3.net database from the sales file group in the inst1.net database:

    CONNECT strmadmin/strmadminpw@inst3.net
    
    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q2fy2005',
        datafiles_directory_object => 'q2fy2005',
        repository_db_link         => 'inst1.net',
        tablespace_names           => tbs_set);
    END;
    /
    

    The tablespaces are read-only when they are attached. Because the reports on inst3.net do not change the tablespaces, the tablespaces can remain read-only.

  13. Run the reports on the data in the sales tablespaces at the inst3.net database.

  14. Drop the tablespaces and their contents:

    DROP TABLESPACE sales_tbs1 INCLUDING CONTENTS;
    
    DROP TABLESPACE sales_tbs2 INCLUDING CONTENTS;
    

    The tablespaces are dropped from the inst3.net database, but the tablespace files remain in the directory object.

Steps 10-14 can be repeated whenever a quarter ends to run reports on each quarter.

Using a File Group Repository

The DBMS_FILE_GROUP package can create a file group repository, add versioned file groups to the repository, and copy versioned file groups from the repository. This section illustrates how to use a file group repository with a scenario that stores reports in the repository.

In this scenario, a business sells books and music over the internet. The business runs weekly reports on the sales data in the inst1.net database and stores these reports in two HTML files on a computer file system. The book_sales.htm file contains the report for book sales, and the music_sales.htm file contains the report for music sales. The business wants to store these weekly reports in a file group repository at the inst2.net remote database. Every week, the two reports are generated on the inst1.net database, transferred to the computer system running the inst2.net database, and added to the repository as a file group version. The file group repository stores all of the file group versions that contain the reports for each week.

Figure 16-5 provides an overview of the file group repository created in this example:

Figure 16-5 Example File Group Repository


The benefits of the file group repository are that it stores metadata about each file group version in the data dictionary and provides a standard interface for managing the file group versions. For example, when the business needs to view a specific sales report, it can query the data dictionary in the inst2.net database to determine the location of the report on the computer file system.
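For example, the following query is a minimal sketch of locating the report files in the first version of the reports file group created later in this example:

SELECT FILE_NAME, FILE_DIRECTORY FROM DBA_FILE_GROUP_FILES
  WHERE FILE_GROUP_NAME = 'REPORTS' AND
        VERSION_NAME    = 'SALES_REPORTS_V1';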

Table 16-4 shows the directory objects created in this example. It shows the directory object created on the inst1.net database to store new reports, and it shows the directory objects that are associated with file group repository versions on the inst2.net database.

Table 16-4 Directory Objects Created in Example

Directory Object   Database    Version                         Corresponding File System Directory
sales_reports      inst1.net   Not associated with a version   /home/sales_reports
sales_reports1     inst2.net   sales_reports_v1                /home/sales_reports/fg1
sales_reports2     inst2.net   sales_reports_v2                /home/sales_reports/fg2


This example makes the following assumptions:

  • The inst1.net and inst2.net databases exist.

  • Networking is configured between the databases so that they can communicate with each other.

The following steps configure and populate a file group repository at a remote database:

  1. Connect as an administrative user to the remote database that will contain the file group repository:

    CONNECT strmadmin/strmadminpw@inst2.net
    

    The administrative user must have the necessary privileges to create directory objects and run the procedures in the DBMS_FILE_GROUP package.

  2. Create a directory object to hold the first version of the file group:

    CREATE OR REPLACE DIRECTORY sales_reports1 AS '/home/sales_reports/fg1';
    

    The specified file system directory must exist when you create the directory object.

  3. Connect as an administrative user to the database that runs the reports:

    CONNECT strmadmin/strmadminpw@inst1.net
    

    The administrative user must have the necessary privileges to create directory objects.

  4. Create a directory object to hold the latest reports:

    CREATE OR REPLACE DIRECTORY sales_reports AS '/home/sales_reports';
    

    The specified file system directory must exist when you create the directory object.

  5. Create a database link to the inst2.net database:

    CREATE DATABASE LINK inst2.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
       USING 'inst2.net';
    
  6. Run the reports on the inst1.net database. Running the reports should place the book_sales.htm and music_sales.htm files in the directory specified in Step 4.

  7. Transfer the report files from the computer system running the inst1.net database to the computer system running the inst2.net database using file transfer protocol (FTP) or some other method. Make sure the files are copied to the directory that corresponds to the directory object created in Step 2.

  8. Connect in SQL*Plus to inst2.net:

    CONNECT strmadmin/strmadminpw@inst2.net
    
  9. Create the file group repository that will contain the reports:

    BEGIN
      DBMS_FILE_GROUP.CREATE_FILE_GROUP(
        file_group_name => 'strmadmin.reports');
    END;
    /
    

    The reports file group repository is created with the following default properties:

    • The minimum number of versions in the repository is 2. When the file group is purged, the number of versions cannot drop below 2.

    • The maximum number of versions is infinite. A file group version is not purged because of the number of versions of the file group in the repository.

    • The number of retention days is infinite. A file group version is not purged because of the amount of time it has been in the repository.

  10. Create the first version of the file group:

    BEGIN
      DBMS_FILE_GROUP.CREATE_VERSION(
        file_group_name => 'strmadmin.reports',
        version_name    => 'sales_reports_v1',
        comments        => 'Sales reports for week of 06-FEB-2005');
    END;
    /
    
  11. Add the report files to the file group version:

    BEGIN
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'book_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports1',
        version_name     => 'sales_reports_v1');
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'music_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports1',
        version_name     => 'sales_reports_v1');
    END;
    /
    
  12. Create a directory object on inst2.net to hold the next version of the file group:

    CREATE OR REPLACE DIRECTORY sales_reports2 AS '/home/sales_reports/fg2';
    

    The specified file system directory must exist when you create the directory object.

  13. At the end of the next week, run the reports on the inst1.net database. Running the reports should place new book_sales.htm and music_sales.htm files in the directory specified in Step 4. If necessary, remove the old files from this directory before running the reports.

  14. Transfer the report files from the computer system running the inst1.net database to the computer system running the inst2.net database using file transfer protocol (FTP) or some other method. Make sure the files are copied to the directory that corresponds to the directory object created in Step 12.

  15. While connected in SQL*Plus to inst2.net as an administrative user, create the next version of the file group:

    BEGIN
      DBMS_FILE_GROUP.CREATE_VERSION(
        file_group_name => 'strmadmin.reports',
        version_name    => 'sales_reports_v2',
        comments        => 'Sales reports for week of 13-FEB-2005');
    END;
    /
    
  16. Add the report files to the file group version:

    BEGIN
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'book_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports2',
        version_name     => 'sales_reports_v2');
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'music_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports2',
        version_name     => 'sales_reports_v2');
    END;
    /
    

The file group repository now contains two versions of the file group that contains the sales report files. Repeat steps 12-16 to add new versions of the file group to the repository.


Information Provisioning

8 Information Provisioning

Information provisioning makes information available when and where it is needed. Information provisioning is part of Oracle grid computing, which pools large numbers of servers, storage areas, and networks into a flexible, on-demand computing resource for enterprise computing needs. Information provisioning uses many of the features that also are used for information integration.

This chapter contains these topics:


Overview of Information Provisioning

Oracle grid computing enables resource provisioning with features such as Oracle Real Application Clusters (RAC), Oracle Scheduler, and Database Resource Manager. RAC enables you to provision hardware resources by running a single Oracle database server on a cluster of physical servers. Oracle Scheduler enables you to provision database workload over time for more efficient use of resources. Database Resource Manager provisions resources to database users, applications, or services within an Oracle database.

In addition to resource provisioning, Oracle grid computing also enables information provisioning. Information provisioning delivers information when and where it is needed, regardless of where the information currently resides on the grid. In a grid environment with distributed systems, the grid must move or copy information efficiently to make it available where it is needed.

Information provisioning can take the following forms:

These information provisioning capabilities can be used individually or in combination to provide a full information provisioning solution in your environment. The remaining sections in this chapter discuss the ways to provision information in more detail.

Bulk Provisioning of Large Amounts of Information

Oracle provides several ways to move or copy large amounts of information from database to database efficiently. Data Pump can export and import at the database, tablespace, schema, or table level. There are several ways to move or copy a tablespace set from one Oracle database to another. Transportable tablespaces can move or copy a subset of an Oracle database and "plug" it in to another Oracle database. Transportable tablespace from backup with RMAN enables you to move or copy a tablespace set while the tablespaces remain online. The procedures in the DBMS_STREAMS_TABLESPACE_ADM package combine several steps that are required to move or copy a tablespace set into one procedure call.

Each method for moving or copying a tablespace set requires that the tablespace set is self-contained. A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, then the tablespace is not self-contained. A self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces. For example, if a partitioned table is partially contained in the set of tablespaces, then the set of tablespaces is not self-contained. To determine whether a set of tablespaces is self-contained, use the TRANSPORT_SET_CHECK procedure in the Oracle supplied package DBMS_TTS.
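For example, the following statements are a minimal sketch of checking whether a tablespace set consisting of the sales_tbs1 and sales_tbs2 tablespaces is self-contained; any violations are reported in the TRANSPORT_SET_VIOLATIONS view:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_tbs1,sales_tbs2', TRUE);

SELECT * FROM TRANSPORT_SET_VIOLATIONS;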

The following sections describe the options for moving or copying large amounts of information and when to use each option:

Data Pump Export/Import

Data Pump export/import can move or copy data efficiently between databases. Data Pump can export/import a full database, tablespaces, schemas, or tables to provision large or small amounts of data for a particular requirement. Data Pump exports and imports can be performed using command line clients (expdp and impdp) or the DBMS_DATAPUMP package.

A transportable tablespaces export/import is specified using the TRANSPORT_TABLESPACES parameter. Transportable tablespaces enables you to unplug a set of tablespaces from a database, move or copy them to another location, and then plug them into another database. The transport is quick because the process transfers metadata and files; it does not unload and load the data. In transportable tablespaces mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is unloaded at the source and loaded at the target. This allows the tablespace datafiles to be copied to the target Oracle database and incorporated efficiently.

The tablespaces being transported can be either dictionary managed or locally managed. Moving or copying tablespaces using transportable tablespaces is faster than performing either an export/import or unload/load of the same data. To use transportable tablespaces, you must have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. The tablespaces being transported must be read-only during export, and the export cannot have a degree of parallelism greater than 1.


Transportable Tablespace from Backup with RMAN

The Recovery Manager (RMAN) TRANSPORT TABLESPACE command copies tablespaces without requiring that the tablespaces be in read-only mode during the transport process. Appropriate database backups must be available to perform RMAN transportable tablespace from backup.

DBMS_STREAMS_TABLESPACE_ADM Procedures

The following procedures in the DBMS_STREAMS_TABLESPACE_ADM package can be used to move or copy tablespaces:

  • ATTACH_TABLESPACES: Uses Data Pump to import a self-contained tablespace set previously exported using the DBMS_STREAMS_TABLESPACE_ADM package, Data Pump export, or the RMAN TRANSPORT TABLESPACE command.

  • CLONE_TABLESPACES: Uses Data Pump export to clone a set of self-contained tablespaces. The tablespace set can be attached to a database after it is cloned. The tablespace set remains in the database from which it was cloned.

  • DETACH_TABLESPACES: Uses Data Pump export to detach a set of self-contained tablespaces. The tablespace set can be attached to a database after it is detached. The tablespace set is dropped from the database from which it was detached.

  • PULL_TABLESPACES: Uses Data Pump export/import to copy a set of self-contained tablespaces from a remote database and attach the tablespace set to the current database.

In addition, the DBMS_STREAMS_TABLESPACE_ADM package also contains the following procedures: ATTACH_SIMPLE_TABLESPACE, CLONE_SIMPLE_TABLESPACE, DETACH_SIMPLE_TABLESPACE, and PULL_SIMPLE_TABLESPACE. These procedures operate on a single tablespace that uses only one datafile instead of a tablespace set.

File Group Repository

In the context of a file group, a file is a reference to a file stored on hard disk. A file is composed of a file name, a directory object, and a file type. The directory object references the directory in which the file is stored on hard disk. A version is a collection of related files, and a file group is a collection of versions.

A file group repository is a collection of all of the file groups in a database. A file group repository can contain multiple file groups and multiple versions of a particular file group.

For example, a file group named reports can store versions of sales reports. The reports can be generated on a regular schedule, and each version can contain the report files. The file group repository can version the file group under names such as sales_reports_v1, sales_reports_v2, and so on.

File group repositories can contain all types of files. You can create and manage file group repositories using the DBMS_FILE_GROUP package.


Tablespace Repository

A tablespace repository is a collection of tablespace sets in a file group repository. Tablespace repositories are built on file group repositories, but tablespace repositories only contain the files required to move or copy tablespaces between databases. A file group repository can store versioned sets of files, including, but not restricted to, tablespace sets.

Different tablespace sets can be stored in a tablespace repository, and different versions of a particular tablespace set can also be stored. A version of a tablespace set in a tablespace repository consists of the following files:

  • The Data Pump export dump file for the tablespace set

  • The Data Pump log file for the export

  • The datafiles that make up the tablespace set

All of the files in a version can reside in a single directory, or they can reside in different directories. The following procedures can move or copy tablespaces with or without using a tablespace repository:

  • ATTACH_TABLESPACES

  • CLONE_TABLESPACES

  • DETACH_TABLESPACES

If one of these procedures is run without using a tablespace repository, then a tablespace set is moved or copied, but it is not placed in or copied from a tablespace repository. If the CLONE_TABLESPACES or DETACH_TABLESPACES procedure is run using a tablespace repository, then the procedure places a tablespace set in the repository as a version of the tablespace set. If the ATTACH_TABLESPACES procedure is run using a tablespace repository, then the procedure copies a particular version of a tablespace set from the repository and attaches it to a database.

When to Use a Tablespace Repository

A tablespace repository is useful when you need to store different versions of one or more tablespace sets. For example, a tablespace repository can be used to accomplish the following goals:

  • You want to run quarterly reports on a tablespace set. You can clone the tablespace set quarterly for storage in a versioned tablespace repository, and a specific version of the tablespace set can be requested from the repository and attached to another database to run the reports.

  • You want applications to be able to attach required tablespace sets on demand in a grid environment. You can store multiple versions of several different tablespace sets in the tablespace repository. Each tablespace set can be used for a different purpose by the application. When the application needs a particular version of a particular tablespace set, the application can scan the tablespace repository and attach the correct tablespace set to a database.

Differences Between the Tablespace Repository Procedures

The procedures that include the file_group_name parameter in the DBMS_STREAMS_TABLESPACE_ADM package behave differently with regard to the tablespace set, the datafiles in the tablespace set, and the export dump file. Table 8-1 describes these differences.

Table 8-1 Tablespace Repository Procedures

ATTACH_TABLESPACES

Tablespace set: The tablespace set is added to the local database.

Datafiles: If the datafiles_directory_object parameter is non-NULL, then the datafiles are copied from their current location(s) for the version in the tablespace repository to the directory object specified in the datafiles_directory_object parameter. The attached tablespace set uses the datafiles that were copied. If the datafiles_directory_object parameter is NULL, then the datafiles are not moved or copied. The datafiles remain in the directory object(s) for the version in the tablespace repository, and the attached tablespace set uses these datafiles.

Export dump file: If the datafiles_directory_object parameter is non-NULL, then the export dump file is copied from its directory object for the version in the tablespace repository to the directory object specified in the datafiles_directory_object parameter. If the datafiles_directory_object parameter is NULL, then the export dump file is not moved or copied.

CLONE_TABLESPACES

Tablespace set: The tablespace set is retained in the local database.

Datafiles: The datafiles are copied from their current location(s) to the directory object specified in the tablespace_directory_object parameter or in the default directory for the version or file group. This parameter specifies where the version of the tablespace set is stored in the tablespace repository. The current location of the datafiles can be determined by querying the DBA_DATA_FILES data dictionary view. A directory object must exist, and must be accessible to the user who runs the procedure, for each datafile location.

Export dump file: The export dump file is placed in the directory object specified in the tablespace_directory_object parameter or in the default directory for the version or file group.

DETACH_TABLESPACES

Tablespace set: The tablespace set is dropped from the local database.

Datafiles: The datafiles are not moved or copied. The datafiles remain in their current location(s). A directory object must exist, and must be accessible to the user who runs the procedure, for each datafile location. These datafiles are included in the version of the tablespace set stored in the tablespace repository.

Export dump file: The export dump file is placed in the directory object specified in the export_directory_object parameter or in the default directory for the version or file group.


Remote Access to a Tablespace Repository

A tablespace repository can reside in the database that uses the tablespaces, or it can reside in a remote database. If it resides in a remote database, then a database link must be specified in the repository_db_link parameter when you run one of the procedures, and the database link must be accessible to the user who runs the procedure.

Only One Tablespace Version Can Be Online in a Database

A version of a tablespace set in a tablespace repository can be either online or offline in a database. A tablespace set version is online in a database when it is attached to the database using the ATTACH_TABLESPACES procedure. Only a single version of a tablespace set can be online in a database at a particular time. However, the same version or different versions of a tablespace set can be online in different databases at the same time. In this case, it might be necessary to ensure that only one database can make changes to the tablespace set.

Tablespace Repository Procedures Use the DBMS_FILE_GROUP Package Automatically

Although tablespace repositories are built on file group repositories, it is not necessary to use the DBMS_FILE_GROUP package to create a file group repository before using one of the procedures in the DBMS_STREAMS_TABLESPACE_ADM package. If you run the CLONE_TABLESPACES or DETACH_TABLESPACES procedure and specify a file group that does not exist, then the procedure creates the file group automatically.

A Tablespace Repository Provides Versioning but Not Source Control

A tablespace repository provides versioning of tablespace sets, but it does not provide source control. If two or more versions of a tablespace set are changed at the same time and placed in a tablespace repository, then these changes are not merged.

Read-Only Tablespaces Requirement During Export

The procedures in the DBMS_STREAMS_TABLESPACE_ADM package that perform a Data Pump export make any read/write tablespace being exported read-only. After the export is complete, if a procedure in the DBMS_STREAMS_TABLESPACE_ADM package made a tablespace read-only, then the procedure returns the tablespace to read/write.

Automatic Platform Conversion for Tablespaces

When one of the procedures in the DBMS_STREAMS_TABLESPACE_ADM package moves or copies tablespaces to a database that is running on a different platform, the procedure can convert the datafiles to the appropriate platform if the conversion is supported. The V$TRANSPORTABLE_PLATFORM dynamic performance view lists all platforms that support cross-platform transportable tablespaces.

When a tablespace repository is used, the platform conversion is automatic if it is supported. When a tablespace repository is not used, you must specify the platform to which or from which the tablespace is being converted.
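For example, a query such as the following shows the supported platforms and their endian formats, which determine whether datafile conversion is required:

SELECT PLATFORM_NAME, ENDIAN_FORMAT
  FROM V$TRANSPORTABLE_PLATFORM
  ORDER BY PLATFORM_NAME;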


Options for Bulk Information Provisioning

Table 8-2 describes when to use each option for bulk information provisioning.

Table 8-2 Options for Moving or Copying Tablespaces

Option: Use this Option Under these Conditions

Data Pump export/import

  • You want to move or copy data at the database, tablespace, schema, or table level.

  • You want to perform each step required to complete the Data Pump export/import.

Data Pump export/import with the TRANSPORT_TABLESPACES option

  • The tablespaces being moved or copied can be read-only during the operation.

  • You want to perform each step required to complete the Data Pump export/import.

Transportable tablespace from backup with the RMAN TRANSPORT TABLESPACE command

  • The tablespaces being moved or copied must remain online (writeable) during the operation.

DBMS_STREAMS_TABLESPACE_ADM procedures without a tablespace repository

  • The tablespaces being moved or copied can be read-only during the operation.

  • You want to combine multiple steps in the Data Pump export/import into one procedure call.

  • You do not want to use a tablespace repository for the tablespaces being moved or copied.

DBMS_STREAMS_TABLESPACE_ADM procedures with a tablespace repository

  • The tablespaces being moved or copied can be read-only during the operation.

  • You want to combine multiple steps in the Data Pump export/import into one procedure call.

  • You want to use a tablespace repository for the tablespaces being moved or copied.

  • You want platform conversion to be automatic.


Incremental Information Provisioning with Streams

Streams can share and maintain database objects in different databases at each of the following levels:

Streams can keep shared database objects synchronized at two or more databases. Specifically, a Streams capture process captures changes to a shared database object in a source database's redo log, one or more propagations propagate the changes to another database, and a Streams apply process applies the changes to the shared database object. If database objects are not identical at different databases, then Streams can transform them at any point in the process. That is, a change can be transformed during capture, propagation, or apply. In addition, Streams provides custom processing of changes during apply with apply handlers. Database objects can be shared between Oracle databases, or they can be shared between Oracle and non-Oracle databases through the use of Oracle Transparent Gateways. In addition to data replication, Streams provides messaging, event management and notification, and data warehouse loading.

A combination of Streams and bulk provisioning enables you to copy and maintain a large amount of data by running a single procedure. The following procedures in the DBMS_STREAMS_ADM package use Data Pump to copy data between databases and configure Streams to maintain the copied data incrementally:

In addition, the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures configure a Streams environment that replicates changes either at the database level or to specified tablespaces between two databases. These procedures must be used together, and instantiation actions must be performed manually, to complete the Streams replication configuration.

Using these procedures, you can export data from one database, ship it to another database, reformat the data if the second database is on a different platform, import the data into the second database, and begin synchronizing the data with changes made at the first database. If the second database is on a grid, then you have just migrated your application to a grid with one command.

These procedures can configure Streams clients to maintain changes originating at the source database in a single-source replication environment, or they can configure Streams clients to maintain changes originating at both databases in a bidirectional replication environment. Maintaining these changes keeps the data synchronized at both databases. These procedures can either perform these actions directly, or they can generate one or more scripts that perform these actions.
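One such procedure is MAINTAIN_SCHEMAS. As a hedged sketch only, the following call copies the hr schema to a destination database and configures Streams to maintain it; the directory objects and global database names are assumptions for this example, and many optional parameters are omitted.

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'source_dir',       -- assumed name
    destination_directory_object => 'dest_dir',         -- assumed name
    source_database              => 'src.example.com',  -- assumed name
    destination_database         => 'dest.example.com', -- assumed name
    perform_actions              => TRUE,
    bi_directional               => FALSE);
END;
/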


On-Demand Information Access

Users and applications can access information without moving or copying it to a new location. Distributed SQL allows grid users to access and integrate data stored in multiple Oracle and, through Oracle Transparent Gateways, non-Oracle databases. Transparent remote data access with distributed SQL allows grid users to run their applications against any other database without making any code change to the applications. While integrating data and managing transactions across multiple data stores, the Oracle database optimizes the execution plans to access data in the most efficient manner.
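For example, assuming a database link named sales_db to a remote database that contains the sample hr schema, the following query reads remote data transparently, with no change to the application:

SELECT last_name, salary
  FROM hr.employees@sales_db
  WHERE department_id = 10;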



Preface

Oracle Streams Concepts and Administration describes the features and functionality of Streams. This document contains conceptual information about Streams, along with information about managing a Streams environment. In addition, this document contains detailed examples that configure a Streams capture and apply environment and a rule-based application.

This Preface contains these topics:

Audience

Oracle Streams Concepts and Administration is intended for database administrators who create and maintain Streams environments. These administrators perform one or more of the following tasks:

To use this document, you need to be familiar with relational database concepts, SQL, distributed database administration, Advanced Queuing concepts, PL/SQL, and the operating systems under which you run a Streams environment.

Documentation Accessibility

Our goal is to make Oracle products, services, and supporting documentation accessible, with good usability, to the disabled community. To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Accessibility standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers. For more information, visit the Oracle Accessibility Program Web site at http://www.oracle.com/accessibility/.

Accessibility of Code Examples in Documentation

Screen readers may not always correctly read the code examples in this document. The conventions for writing code require that closing braces should appear on an otherwise empty line; however, some screen readers may not always read a line of text that consists solely of a bracket or brace.

Accessibility of Links to External Web Sites in Documentation

This documentation may contain links to Web sites of other companies or organizations that Oracle does not own or control. Oracle neither evaluates nor makes any representations regarding the accessibility of these Web sites.

TTY Access to Oracle Support Services

Oracle provides dedicated Text Telephone (TTY) access to Oracle Support Services within the United States of America 24 hours a day, 7 days a week. For TTY support, call 800.446.2398. Outside the United States, call +1.407.458.2479.

Related Documents

For more information, see these Oracle resources:

Many of the examples in this book use the sample schemas of the sample database, which is installed by default when you install Oracle Database. Refer to Oracle Database Sample Schemas for information on how these schemas were created and how you can use them yourself.

Printed documentation is available for sale in the Oracle Store at

http://oraclestore.oracle.com/

To download free release notes, installation documentation, white papers, or other collateral, please visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at

http://www.oracle.com/technology/membership/

If you already have a username and password for OTN, then you can go directly to the documentation section of the OTN Web site at

http://www.oracle.com/technology/documentation/

Conventions

The following text conventions are used in this document:

boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.

Rule-Based Application Example

28 Rule-Based Application Example

This chapter illustrates a rule-based application that uses the Oracle rules engine.

The examples in this chapter are independent of Streams. That is, no Streams capture processes, propagations, apply processes, or messaging clients are clients of the rules engine in these examples, and no queues are used.

This chapter contains these topics:

Overview of the Rule-Based Application

Each example in this chapter creates a rule-based application that handles customer problems. When a new problem is reported, the application uses rules to determine which actions to complete based on the problem priority. For example, the application assigns each problem to a particular company center based on the problem priority.

The application enforces these rules using the rules engine. An evaluation context named evalctx is created to define the information surrounding a support problem. Rules are created based on the requirements described previously, and they are added to a rule set named rs.

The task of assigning problems is done by a user-defined procedure named problem_dispatch, which calls the rules engine to evaluate rules in the rule set rs and then takes appropriate action based on the rules that evaluate to TRUE.

Using Rules on Nontable Data Stored in Explicit Variables

This example illustrates how to use rules to evaluate data stored in explicit variables. It handles customer problems based on priority and uses the following rules:

The evaluation context contains only one explicit variable named priority, which refers to the priority of the problem being dispatched. The value for this variable is passed to the DBMS_RULE.EVALUATE procedure by the problem_dispatch procedure.

Complete the following steps:

  1. Show Output and Spool Results

  2. Create the support User

  3. Grant the support User the Necessary System Privileges on Rules

  4. Create the evalctx Evaluation Context

  5. Create the Rules that Correspond to Problem Priority

  6. Create the rs Rule Set

  7. Add the Rules to the Rule Set

  8. Query the Data Dictionary

  9. Create the problem_dispatch PL/SQL Procedure

  10. Dispatch Sample Problems

  11. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_stored_variables.out

/*

Step 2   Create the support User

*/
CONNECT SYSTEM/MANAGER AS SYSDBA;

GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE,
  CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, CREATE INDEXTYPE, 
  CREATE OPERATOR, CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE
TO support IDENTIFIED BY support;

/*

Step 3   Grant the support User the Necessary System Privileges on Rules

*/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
    grantee      => 'support', 
    grant_option => false);
END;
/

/*

Step 4   Create the evalctx Evaluation Context

*/
CONNECT support/support

SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
SET SERVEROUTPUT ON
DECLARE
  vt SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  vt := SYS.RE$VARIABLE_TYPE_LIST(
    SYS.RE$VARIABLE_TYPE('priority', 'NUMBER', NULL, NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'evalctx',
    variable_types             => vt,
    evaluation_context_comment => 'support problem definition');
END;
/

/*

Step 5   Create the Rules that Correspond to Problem Priority

The following code creates one action context for each rule, and one name-value pair in each action context.

*/

DECLARE
  ac  SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('San Jose'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r1',
    condition      => ':priority > 2',
    action_context => ac,
    rule_comment   => 'Low priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('New York'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r2',
    condition      => ':priority <= 2',
    action_context => ac,
    rule_comment   => 'High priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('ALERT', ANYDATA.CONVERTVARCHAR2('John Doe'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r3',
    condition      => ':priority = 1',
    action_context => ac,
    rule_comment   => 'Urgent problems');
END;
/

/*

Step 6   Create the rs Rule Set

*/
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'rs',
    evaluation_context => 'evalctx',
    rule_set_comment   => 'support rules');
END;
/

/*

Step 7   Add the Rules to the Rule Set

*/
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r1', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r2', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r3', 
    rule_set_name => 'rs');
END;
/

/*

Step 8   Query the Data Dictionary

At this point, you can view the evaluation context, rules, and rule set you created in the previous steps.

*/

COLUMN EVALUATION_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A30
COLUMN EVALUATION_CONTEXT_COMMENT HEADING 'Eval Context Comment' FORMAT A40

SELECT EVALUATION_CONTEXT_NAME, EVALUATION_CONTEXT_COMMENT
  FROM USER_EVALUATION_CONTEXTS
  ORDER BY EVALUATION_CONTEXT_NAME;

SET LONGCHUNKSIZE 4000
SET LONG 4000
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A5
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35
COLUMN ACTION_CONTEXT_NAME HEADING 'Action|Context|Name' FORMAT A10
COLUMN ACTION_CONTEXT_VALUE HEADING 'Action|Context|Value' FORMAT A10

SELECT RULE_NAME, 
       RULE_CONDITION,
       AC.NVN_NAME ACTION_CONTEXT_NAME, 
       AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM USER_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  ORDER BY RULE_NAME;

COLUMN RULE_SET_NAME HEADING 'Rule Set Name' FORMAT A20
COLUMN RULE_SET_EVAL_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A12
COLUMN RULE_SET_EVAL_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A25
COLUMN RULE_SET_COMMENT HEADING 'Rule Set|Comment' FORMAT A15

SELECT RULE_SET_NAME, 
       RULE_SET_EVAL_CONTEXT_OWNER,
       RULE_SET_EVAL_CONTEXT_NAME,
       RULE_SET_COMMENT
  FROM USER_RULE_SETS
  ORDER BY RULE_SET_NAME;

/*

Step 9   Create the problem_dispatch PL/SQL Procedure

*/
CREATE OR REPLACE PROCEDURE problem_dispatch (priority NUMBER) 
IS
    vv        SYS.RE$VARIABLE_VALUE;
    vvl       SYS.RE$VARIABLE_VALUE_LIST;
    truehits  SYS.RE$RULE_HIT_LIST;
    maybehits SYS.RE$RULE_HIT_LIST;
    ac        SYS.RE$NV_LIST;
    namearray SYS.RE$NAME_ARRAY;
    name      VARCHAR2(30);
    cval      VARCHAR2(100);
    rnum      INTEGER;
    i         INTEGER;
    status    PLS_INTEGER;
BEGIN
  vv  := SYS.RE$VARIABLE_VALUE('priority',
                               ANYDATA.CONVERTNUMBER(priority));
  vvl := SYS.RE$VARIABLE_VALUE_LIST(vv);
  truehits := SYS.RE$RULE_HIT_LIST();
  maybehits := SYS.RE$RULE_HIT_LIST();
  DBMS_RULE.EVALUATE(
      rule_set_name        => 'support.rs',
      evaluation_context   => 'evalctx',
      variable_values      => vvl,
      true_rules           => truehits,
      maybe_rules          => maybehits);
  FOR rnum IN 1..truehits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('Using rule '|| truehits(rnum).rule_name);
    ac := truehits(rnum).rule_action_context;
    namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'CENTER') THEN
          DBMS_OUTPUT.PUT_LINE('Assigning problem to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Sending alert to: '|| cval);
        END IF;
      END LOOP;
  END LOOP;
END;
/

/*

Step 10   Dispatch Sample Problems

*/
EXECUTE problem_dispatch(1);
EXECUTE problem_dispatch(2);
EXECUTE problem_dispatch(3);
EXECUTE problem_dispatch(5);

/*

Step 11   Check the Spool Results

Check the rules_stored_variables.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Using Rules on Data in Explicit Variables with Iterative Results

This example is the same as the previous example "Using Rules on Nontable Data Stored in Explicit Variables", except that this example returns evaluation results iteratively instead of all at once.

Complete the following steps:

  1. Show Output and Spool Results

  2. Make Sure You Have Completed the Preliminary Steps

  3. Replace the problem_dispatch PL/SQL Procedure

  4. Dispatch Sample Problems

  5. Clean Up the Environment (Optional)

  6. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_stored_variables_iterative.out

/*

Step 2   Make Sure You Have Completed the Preliminary Steps

Make sure you have completed Steps 1 to 8 in "Using Rules on Nontable Data Stored in Explicit Variables". If you have not completed these steps, then complete them before you continue.

*/ 

PAUSE Press <RETURN> to continue when the preliminary steps have been completed.

/*

Step 3   Replace the problem_dispatch PL/SQL Procedure

Replace the problem_dispatch procedure created in Step 9 with the procedure in this step. The difference between the two procedures is that the procedure created in Step 9 returns all evaluation results at once while the procedure in this step returns evaluation results iteratively.

*/

CONNECT support/support

SET SERVEROUTPUT ON
CREATE OR REPLACE PROCEDURE problem_dispatch (priority NUMBER) 
IS
    vv          SYS.RE$VARIABLE_VALUE;
    vvl         SYS.RE$VARIABLE_VALUE_LIST;
    truehits    BINARY_INTEGER;
    maybehits   BINARY_INTEGER;
    hit         SYS.RE$RULE_HIT;
    ac          SYS.RE$NV_LIST;
    namearray   SYS.RE$NAME_ARRAY;
    name        VARCHAR2(30);
    cval        VARCHAR2(100);
    i           INTEGER;
    status      PLS_INTEGER;
    iter_closed EXCEPTION;
    PRAGMA EXCEPTION_INIT(iter_closed, -25453);
BEGIN
  vv  := SYS.RE$VARIABLE_VALUE('priority',
                               ANYDATA.CONVERTNUMBER(priority));
  vvl := SYS.RE$VARIABLE_VALUE_LIST(vv);
  DBMS_RULE.EVALUATE(
      rule_set_name        => 'support.rs',
      evaluation_context   => 'evalctx',
      variable_values      => vvl,
      true_rules_iterator  => truehits,
      maybe_rules_iterator => maybehits);
  LOOP
    hit := DBMS_RULE.GET_NEXT_HIT(truehits);
    EXIT WHEN hit IS NULL;
    DBMS_OUTPUT.PUT_LINE('Using rule '|| hit.rule_name);
    ac := hit.rule_action_context;
    namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'CENTER') THEN
          DBMS_OUTPUT.PUT_LINE('Assigning problem to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Sending alert to: '|| cval);
        END IF;
      END LOOP;
  END LOOP;
  -- Close iterators
  BEGIN
    DBMS_RULE.CLOSE_ITERATOR(truehits);
  EXCEPTION
    WHEN iter_closed THEN
      NULL;
  END;
  BEGIN
    DBMS_RULE.CLOSE_ITERATOR(maybehits);
  EXCEPTION
    WHEN iter_closed THEN
      NULL;
  END;
END;
/

/*

Step 4   Dispatch Sample Problems

*/
EXECUTE problem_dispatch(1);
EXECUTE problem_dispatch(2);
EXECUTE problem_dispatch(3);
EXECUTE problem_dispatch(5);

/*

Step 5   Clean Up the Environment (Optional)

You can clean up the sample environment by dropping the support user.

*/

CONNECT SYSTEM/MANAGER AS SYSDBA;

DROP USER support CASCADE;

/*

Step 6   Check the Spool Results

Check the rules_stored_variables_iterative.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Using Partial Evaluation of Rules on Data in Explicit Variables

This example illustrates how to use partial evaluation when an event causes rules to evaluate to MAYBE instead of TRUE or FALSE. It handles customer problems based on priority and problem type, and uses the following rules:

For problems whose problem type is NULL, rules that reference the problem type evaluate to MAYBE. This example uses partial evaluation to take an action when MAYBE rules are returned to the rules engine client. In this case, the action is to assign the problem to the Texas Center.

The evaluation context contains an explicit variable named priority, which refers to the priority of the problem being dispatched. The evaluation context also contains an explicit variable named problem_type, which refers to the type of problem being dispatched (either HARDWARE or SOFTWARE). The values for these variables are passed to the DBMS_RULE.EVALUATE procedure by the problem_dispatch procedure.

Complete the following steps:

  1. Show Output and Spool Results

  2. Create the support User

  3. Grant the support User the Necessary System Privileges on Rules

  4. Create the evalctx Evaluation Context

  5. Create the Rules that Correspond to Problem Priority

  6. Create the rs Rule Set

  7. Add the Rules to the Rule Set

  8. Query the Data Dictionary

  9. Create the problem_dispatch PL/SQL Procedure

  10. Dispatch Sample Problems

  11. Clean Up the Environment (Optional)

  12. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_stored_variables_partial.out

/*

Step 2   Create the support User

*/
CONNECT SYSTEM/MANAGER AS SYSDBA;

GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE,
  CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, CREATE INDEXTYPE, 
  CREATE OPERATOR, CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE
TO support IDENTIFIED BY support;

/*

Step 3   Grant the support User the Necessary System Privileges on Rules

*/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
    grantee      => 'support', 
    grant_option => false);
END;
/

/*

Step 4   Create the evalctx Evaluation Context

*/
CONNECT support/support

SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
SET SERVEROUTPUT ON
DECLARE
  vt  SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  vt := SYS.RE$VARIABLE_TYPE_LIST(
        SYS.RE$VARIABLE_TYPE('priority', 'NUMBER', NULL, NULL),
        SYS.RE$VARIABLE_TYPE('problem_type', 'VARCHAR2(30)', NULL, NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'evalctx',
    variable_types             => vt,
    evaluation_context_comment => 'support problem definition');
END;
/

/*

Step 5   Create the Rules that Correspond to Problem Priority

The following code creates one action context for each rule, and one name-value pair in each action context.

*/

DECLARE
  ac  SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('ALERT', ANYDATA.CONVERTVARCHAR2('John Doe'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r1',
    condition      => ':priority = 1',
    action_context => ac,
    rule_comment   => 'Urgent problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('TRUE CENTER', ANYDATA.CONVERTVARCHAR2('San Jose'));
  ac.ADD_PAIR('MAYBE CENTER', ANYDATA.CONVERTVARCHAR2('Texas'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name       => 'r2',
    condition       => ':problem_type = ''HARDWARE''',
    action_context  => ac,
    rule_comment    => 'Hardware problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('TRUE CENTER', ANYDATA.CONVERTVARCHAR2('New York'));
  ac.ADD_PAIR('MAYBE CENTER', ANYDATA.CONVERTVARCHAR2('Texas'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name       => 'r3',
    condition       => ':problem_type = ''SOFTWARE''',
    action_context  => ac,
    rule_comment    => 'Software problems');
END;
/

/*

Step 6   Create the rs Rule Set

*/
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'rs',
    evaluation_context => 'evalctx',
    rule_set_comment   => 'support rules');
END;
/

/*

Step 7   Add the Rules to the Rule Set

*/
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r1', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r2', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r3', 
    rule_set_name => 'rs');
END;
/

/*

Step 8   Query the Data Dictionary

At this point, you can view the evaluation context, rules, and rule set you created in the previous steps.

*/

COLUMN EVALUATION_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A30
COLUMN EVALUATION_CONTEXT_COMMENT HEADING 'Eval Context Comment' FORMAT A40

SELECT EVALUATION_CONTEXT_NAME, EVALUATION_CONTEXT_COMMENT
  FROM USER_EVALUATION_CONTEXTS
  ORDER BY EVALUATION_CONTEXT_NAME;

SET LONGCHUNKSIZE 4000
SET LONG 4000
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A5
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35
COLUMN ACTION_CONTEXT_NAME HEADING 'Action|Context|Name' FORMAT A10
COLUMN ACTION_CONTEXT_VALUE HEADING 'Action|Context|Value' FORMAT A10

SELECT RULE_NAME, 
       RULE_CONDITION,
       AC.NVN_NAME ACTION_CONTEXT_NAME, 
       AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM USER_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  ORDER BY RULE_NAME;

COLUMN RULE_SET_NAME HEADING 'Rule Set Name' FORMAT A20
COLUMN RULE_SET_EVAL_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A12
COLUMN RULE_SET_EVAL_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A25
COLUMN RULE_SET_COMMENT HEADING 'Rule Set|Comment' FORMAT A15

SELECT RULE_SET_NAME, 
       RULE_SET_EVAL_CONTEXT_OWNER,
       RULE_SET_EVAL_CONTEXT_NAME,
       RULE_SET_COMMENT
  FROM USER_RULE_SETS
  ORDER BY RULE_SET_NAME;

/*

Step 9   Create the problem_dispatch PL/SQL Procedure

*/
CREATE OR REPLACE PROCEDURE problem_dispatch (priority     NUMBER,
                                              problem_type VARCHAR2 := NULL) 
IS
    vvl       SYS.RE$VARIABLE_VALUE_LIST;
    truehits  SYS.RE$RULE_HIT_LIST;
    maybehits SYS.RE$RULE_HIT_LIST;
    ac        SYS.RE$NV_LIST;
    namearray SYS.RE$NAME_ARRAY;
    name      VARCHAR2(30);
    cval      VARCHAR2(100);
    rnum      INTEGER;
    i         INTEGER;
    status    PLS_INTEGER;
BEGIN
  IF (problem_type IS NULL) THEN 
    vvl  := SYS.RE$VARIABLE_VALUE_LIST(
            SYS.RE$VARIABLE_VALUE('priority',
                                  ANYDATA.CONVERTNUMBER(priority)));
  ELSE
    vvl  := SYS.RE$VARIABLE_VALUE_LIST(
            SYS.RE$VARIABLE_VALUE('priority',
                                  ANYDATA.CONVERTNUMBER(priority)),
            SYS.RE$VARIABLE_VALUE('problem_type',
                                  ANYDATA.CONVERTVARCHAR2(problem_type)));
  END IF;
  truehits := SYS.RE$RULE_HIT_LIST();
  maybehits := SYS.RE$RULE_HIT_LIST();
  DBMS_RULE.EVALUATE(
      rule_set_name        => 'support.rs',
      evaluation_context   => 'evalctx',
      variable_values      => vvl,
      true_rules           => truehits,
      maybe_rules          => maybehits);
  FOR rnum IN 1..truehits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('Using rule '|| truehits(rnum).rule_name);
    ac := truehits(rnum).rule_action_context;
    namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'TRUE CENTER') THEN
          DBMS_OUTPUT.PUT_LINE('Assigning problem to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Sending alert to: '|| cval);
        END IF;
      END LOOP;
  END LOOP;
  FOR rnum IN 1..maybehits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('Using rule '|| maybehits(rnum).rule_name);
    ac := maybehits(rnum).rule_action_context;
    namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'MAYBE CENTER') THEN
          DBMS_OUTPUT.PUT_LINE('Assigning problem to ' || cval);
        END IF;
      END LOOP;
  END LOOP;
END;
/

/*

Step 10   Dispatch Sample Problems

The first problem dispatch in this step uses partial evaluation and takes an action based on the partial evaluation. Specifically, the first problem dispatch specifies that the priority is 1 and the problem_type is NULL. In this case, the rules engine returns a MAYBE rule for the event, and the problem_dispatch procedure assigns the problem to the Texas center.

The second and third problem dispatches do not use partial evaluation. Each of these problems evaluates to TRUE for a rule, and the problem is assigned accordingly by the problem_dispatch procedure.

*/

EXECUTE problem_dispatch(1, NULL);
EXECUTE problem_dispatch(2, 'HARDWARE');
EXECUTE problem_dispatch(3, 'SOFTWARE');

/*

Step 11   Clean Up the Environment (Optional)

You can clean up the sample environment by dropping the support user.

*/

CONNECT SYSTEM/MANAGER AS SYSDBA;

DROP USER support CASCADE;

/*

Step 12   Check the Spool Results

Check the rules_stored_variables_partial.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Using Rules on Data Stored in a Table

This example illustrates how to use rules to evaluate data stored in a table. This example is similar to the example described in "Using Rules on Nontable Data Stored in Explicit Variables". In both examples, the application routes customer problems based on priority. However, in this example, the problems are stored in a table instead of variables.

The application uses the problems table in the support schema, into which customer problems are inserted. This example uses the following rules for handling customer problems:

The evaluation context consists of the problems table. The relevant row of the table, which corresponds to the problem being routed, is passed to the DBMS_RULE.EVALUATE procedure as a table value.

Complete the following steps:

  1. Show Output and Spool Results

  2. Create the support User

  3. Grant the support User the Necessary System Privileges on Rules

  4. Create the problems Table

  5. Create the evalctx Evaluation Context

  6. Create the Rules that Correspond to Problem Priority

  7. Create the rs Rule Set

  8. Add the Rules to the Rule Set

  9. Create the problem_dispatch PL/SQL Procedure

  10. Log Problems

  11. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_table.out

/*

Step 2   Create the support User

*/
CONNECT SYSTEM/MANAGER AS SYSDBA;

CREATE TABLESPACE support_tbs1 DATAFILE 'support_tbs1.dbf'
  SIZE 5M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

CREATE USER support
IDENTIFIED BY support
  DEFAULT TABLESPACE support_tbs1
  QUOTA UNLIMITED ON support_tbs1;

GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE,
  CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, CREATE INDEXTYPE, 
  CREATE OPERATOR, CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE
TO support;

/*

Step 3   Grant the support User the Necessary System Privileges on Rules

*/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
    grantee      => 'support', 
    grant_option => false);
END;
/

/*

Step 4   Create the problems Table

*/
CONNECT support/support

SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
SET SERVEROUTPUT ON

CREATE TABLE problems(
  probid          NUMBER PRIMARY KEY,
  custid          NUMBER,
  priority        NUMBER,
  description     VARCHAR2(4000),
  center          VARCHAR2(100));

/*

Step 5   Create the evalctx Evaluation Context

*/
DECLARE
  ta  SYS.RE$TABLE_ALIAS_LIST;
BEGIN
  ta := SYS.RE$TABLE_ALIAS_LIST(SYS.RE$TABLE_ALIAS('prob', 'problems'));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'evalctx',
    table_aliases              => ta,
    evaluation_context_comment => 'support problem definition');
END;
/

/*

Step 6   Create the Rules that Correspond to Problem Priority

The following code creates one action context for each rule, and one name-value pair in each action context.

*/

DECLARE
  ac  SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('San Jose'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r1',
    condition      => 'prob.priority > 2',
    action_context => ac,
    rule_comment   => 'Low priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('New York'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r2',
    condition      => 'prob.priority <= 2',
    action_context => ac,
    rule_comment   => 'High priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('ALERT', ANYDATA.CONVERTVARCHAR2('John Doe'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r3',
    condition      => 'prob.priority = 1',
    action_context => ac,
    rule_comment   => 'Urgent problems');
END;
/

/*

Step 7   Create the rs Rule Set

*/
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'rs',
    evaluation_context => 'evalctx',
    rule_set_comment   => 'support rules');
END;
/

/*

Step 8   Add the Rules to the Rule Set

*/
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r1', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r2', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r3', 
    rule_set_name => 'rs');
END;
/

/*

Step 9   Create the problem_dispatch PL/SQL Procedure

*/
CREATE OR REPLACE PROCEDURE problem_dispatch 
IS
    CURSOR c IS SELECT probid, rowid FROM problems WHERE center IS NULL;
    tv        SYS.RE$TABLE_VALUE;
    tvl       SYS.RE$TABLE_VALUE_LIST;
    truehits  SYS.RE$RULE_HIT_LIST;
    maybehits SYS.RE$RULE_HIT_LIST;
    ac        SYS.RE$NV_LIST;
    namearray SYS.RE$NAME_ARRAY;
    name      VARCHAR2(30);
    cval      VARCHAR2(100);
    rnum      INTEGER;
    i         INTEGER;
    status    PLS_INTEGER;
BEGIN
  FOR r IN c LOOP
    tv  := SYS.RE$TABLE_VALUE('prob', ROWIDTOCHAR(r.rowid));
    tvl := SYS.RE$TABLE_VALUE_LIST(tv);
    truehits := SYS.RE$RULE_HIT_LIST();
    maybehits := SYS.RE$RULE_HIT_LIST();
    DBMS_RULE.EVALUATE(
      rule_set_name        => 'support.rs',
      evaluation_context   => 'evalctx',
      table_values         => tvl,
      true_rules           => truehits,
      maybe_rules          => maybehits);
    FOR rnum IN 1..truehits.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Using rule '|| truehits(rnum).rule_name);
      ac := truehits(rnum).rule_action_context;
      namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'CENTER') THEN
          UPDATE problems SET center = cval WHERE rowid = r.rowid;
          DBMS_OUTPUT.PUT_LINE('Assigning '|| r.probid || ' to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Alert: '|| cval || ' Problem:' || r.probid);
        END IF;
       END LOOP;
    END LOOP;
  END LOOP;
END;
/

/*

Step 10   Log Problems

*/
INSERT INTO problems(probid, custid, priority, description)
  VALUES(10101, 11, 1, 'no dial tone');

INSERT INTO problems(probid, custid, priority, description)
  VALUES(10102, 21, 2, 'noise on local calls');

INSERT INTO problems(probid, custid, priority, description)
  VALUES(10103, 31, 3, 'noise on long distance calls');

COMMIT;

/*

Step 11   Check the Spool Results

Check the rules_table.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

See Also:

"Dispatching Problems and Checking Results for the Table Examples" for the steps to complete to dispatch the problems logged in this example and check the results of the problem dispatch

Using Rules on Both Explicit Variables and Table Data

This example illustrates how to use rules to evaluate data stored in explicit variables and in a table. The application uses the problems table in the support schema, into which customer problems are inserted. This example uses the following rules for handling customer problems:

The evaluation context consists of the problems table. The relevant row of the table, which corresponds to the problem being routed, is passed to the DBMS_RULE.EVALUATE procedure as a table value.

Some of the rules in this example refer to the current time, which is represented as an explicit variable named current_time. The current time is treated as additional data in the evaluation context. It is represented as a variable for the following reasons:

Complete the following steps:

  1. Show Output and Spool Results

  2. Create the support User

  3. Grant the support User the Necessary System Privileges on Rules

  4. Create the problems Table

  5. Create the evalctx Evaluation Context

  6. Create the Rules that Correspond to Problem Priority

  7. Create the rs Rule Set

  8. Add the Rules to the Rule Set

  9. Create the problem_dispatch PL/SQL Procedure

  10. Log Problems

  11. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_var_tab.out

/*

Step 2   Create the support User

*/
CONNECT SYSTEM/MANAGER AS SYSDBA;

CREATE TABLESPACE support_tbs2 DATAFILE 'support_tbs2.dbf'
  SIZE 5M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

CREATE USER support
IDENTIFIED BY support
  DEFAULT TABLESPACE support_tbs2
  QUOTA UNLIMITED ON support_tbs2;

GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE,
  CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, CREATE INDEXTYPE, 
  CREATE OPERATOR, CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE
TO support;

/*

Step 3   Grant the support User the Necessary System Privileges on Rules

*/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
    grantee      => 'support', 
    grant_option => false);
END;
/

/*

Step 4   Create the problems Table

*/
CONNECT support/support

SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
SET SERVEROUTPUT ON

CREATE TABLE problems(
  probid          NUMBER PRIMARY KEY,
  custid          NUMBER,
  priority        NUMBER,
  description     VARCHAR2(4000),
  center          VARCHAR2(100));

/*

Step 5   Create the evalctx Evaluation Context

*/
DECLARE
  ta SYS.RE$TABLE_ALIAS_LIST;
  vt SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  ta := SYS.RE$TABLE_ALIAS_LIST(SYS.RE$TABLE_ALIAS('prob', 'problems'));
  vt := SYS.RE$VARIABLE_TYPE_LIST(
          SYS.RE$VARIABLE_TYPE('current_time', 'DATE', NULL, NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'evalctx',
    table_aliases              => ta,
    variable_types             => vt,
    evaluation_context_comment => 'support problem definition');
END;
/

/*

Step 6   Create the Rules that Correspond to Problem Priority

The following code creates one action context for each rule, and one name-value pair in each action context.

*/

DECLARE
  ac SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('San Jose'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r1',
    condition      => 'prob.priority > 2',
    action_context => ac,
    rule_comment   => 'Low priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('New York'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r2',
    condition      => 'prob.priority = 2',
    action_context => ac,
    rule_comment   => 'High priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('ALERT', ANYDATA.CONVERTVARCHAR2('John Doe'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r3',
    condition      => 'prob.priority = 1',
    action_context => ac,
    rule_comment   => 'Urgent problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('Tampa'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'r4',
    condition => '(prob.priority = 1) and ' ||
                 '(TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) >= 8) and ' ||
                 '(TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) <= 20)',
    action_context => ac,
    rule_comment => 'Urgent daytime problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('Bangalore'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'r5',
    condition => '(prob.priority = 1) and ' ||
                 '((TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) < 8) or ' ||
                 ' (TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) > 20))',
    action_context => ac,
    rule_comment => 'Urgent nighttime problems');
END;
/

/*

Step 7   Create the rs Rule Set

*/
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'rs',
    evaluation_context => 'evalctx',
    rule_set_comment   => 'support rules');
END;
/

/*

Step 8   Add the Rules to the Rule Set

*/
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r1', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r2', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r3', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r4', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r5', 
    rule_set_name => 'rs');
END;
/

/*

Step 9   Create the problem_dispatch PL/SQL Procedure

*/
CREATE OR REPLACE PROCEDURE problem_dispatch
IS
    CURSOR c IS SELECT probid, rowid FROM problems WHERE center IS NULL;
    tv        SYS.RE$TABLE_VALUE;
    tvl       SYS.RE$TABLE_VALUE_LIST;
    vv1       SYS.RE$VARIABLE_VALUE;
    vvl       SYS.RE$VARIABLE_VALUE_LIST;
    truehits  SYS.RE$RULE_HIT_LIST;
    maybehits SYS.RE$RULE_HIT_LIST;
    ac        SYS.RE$NV_LIST;
    namearray SYS.RE$NAME_ARRAY;
    name      VARCHAR2(30);
    cval      VARCHAR2(100);
    rnum      INTEGER;
    i         INTEGER;
    status    PLS_INTEGER;
BEGIN
  FOR r IN c LOOP
    tv  := SYS.RE$TABLE_VALUE('prob', ROWIDTOCHAR(r.rowid));
    tvl := SYS.RE$TABLE_VALUE_LIST(tv);
    vv1 := SYS.RE$VARIABLE_VALUE('current_time',
                                 ANYDATA.CONVERTDATE(SYSDATE));
    vvl := SYS.RE$VARIABLE_VALUE_LIST(vv1);
    truehits := SYS.RE$RULE_HIT_LIST();
    maybehits := SYS.RE$RULE_HIT_LIST();
    DBMS_RULE.EVALUATE(
        rule_set_name        => 'support.rs',
        evaluation_context   => 'evalctx',
        table_values         => tvl,
        variable_values      => vvl,
        true_rules           => truehits,
        maybe_rules          => maybehits);
    FOR rnum IN 1..truehits.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Using rule '|| truehits(rnum).rule_name);
      ac := truehits(rnum).rule_action_context;
      namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'CENTER') THEN
          UPDATE problems SET center = cval
          WHERE rowid = r.rowid;
          DBMS_OUTPUT.PUT_LINE('Assigning '|| r.probid || ' to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Alert: '|| cval || ' Problem:' || r.probid);
        END IF;
      END LOOP;
    END LOOP;  
  END LOOP;
END;
/

/*

Step 10   Log Problems

*/
INSERT INTO problems(probid, custid, priority, description)
  VALUES(10201, 12, 1, 'no dial tone');

INSERT INTO problems(probid, custid, priority, description)
  VALUES(10202, 22, 2, 'noise on local calls');

INSERT INTO problems(probid, custid, priority, description)
  VALUES(10203, 32, 3, 'noise on long distance calls');

COMMIT;

/*

Step 11   Check the Spool Results

Check the rules_var_tab.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

See Also:

"Dispatching Problems and Checking Results for the Table Examples" for the steps to complete to dispatch the problems logged in this example and check the results of the problem dispatch

Using Rules on Implicit Variables and Table Data

This example illustrates how to use rules to evaluate implicit variables and data stored in a table. The application uses the problems table in the support schema, into which customer problems are inserted. This example uses the following rules for handling customer problems:

The evaluation context consists of the problems table. The relevant row of the table, which corresponds to the problem being routed, is passed to the DBMS_RULE.EVALUATE procedure as a table value.

As in the example illustrated in "Using Rules on Both Explicit Variables and Table Data", the current time is represented as a variable named current_time. However, this variable value is not specified during evaluation by the caller. That is, current_time is an implicit variable in this example. A PL/SQL function named timefunc is specified for current_time, and this function is invoked once during evaluation to get its value.

Using implicit variables can be useful in other cases if one of the following conditions is true:

Complete the following steps:

  1. Show Output and Spool Results

  2. Create the support User

  3. Grant the support User the Necessary System Privileges on Rules

  4. Create the problems Table

  5. Create the timefunc Function to Return the Value of current_time

  6. Create the evalctx Evaluation Context

  7. Create the Rules that Correspond to Problem Priority

  8. Create the rs Rule Set

  9. Add the Rules to the Rule Set

  10. Create the problem_dispatch PL/SQL Procedure

  11. Log Problems

  12. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_implicit_var.out

/*

Step 2   Create the support User

*/
CONNECT SYSTEM/MANAGER AS SYSDBA;

CREATE TABLESPACE support_tbs3 DATAFILE 'support_tbs3.dbf'
  SIZE 5M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

CREATE USER support
IDENTIFIED BY support
  DEFAULT TABLESPACE support_tbs3
  QUOTA UNLIMITED ON support_tbs3;

GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE,
  CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, CREATE INDEXTYPE, 
  CREATE OPERATOR, CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE
TO support;

/*

Step 3   Grant the support User the Necessary System Privileges on Rules

*/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
    grantee      => 'support', 
    grant_option => false);
END;
/

/*

Step 4   Create the problems Table

*/
CONNECT support/support

SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
SET SERVEROUTPUT ON

CREATE TABLE problems(
  probid          NUMBER PRIMARY KEY,
  custid          NUMBER,
  priority        NUMBER,
  description     VARCHAR2(4000),
  center          VARCHAR2(100));

/*

Step 5   Create the timefunc Function to Return the Value of current_time

*/
CREATE OR REPLACE FUNCTION timefunc(
  eco    VARCHAR2, 
  ecn    VARCHAR2, 
  var    VARCHAR2,
  evctx  SYS.RE$NV_LIST)
RETURN SYS.RE$VARIABLE_VALUE
IS
BEGIN
  IF (var = 'CURRENT_TIME') THEN
    RETURN(SYS.RE$VARIABLE_VALUE('current_time',
                                 ANYDATA.CONVERTDATE(SYSDATE)));
  ELSE
    RETURN(NULL);
  END IF;
END;
/

/*

Step 6   Create the evalctx Evaluation Context

*/
DECLARE
  ta SYS.RE$TABLE_ALIAS_LIST;
  vt SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  ta := SYS.RE$TABLE_ALIAS_LIST(SYS.RE$TABLE_ALIAS('prob', 'problems'));
  vt := SYS.RE$VARIABLE_TYPE_LIST(
          SYS.RE$VARIABLE_TYPE('current_time', 'DATE', 'timefunc', NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'evalctx',
    table_aliases              => ta,
    variable_types             => vt,
    evaluation_context_comment => 'support problem definition');
END;
/

/*

Step 7   Create the Rules that Correspond to Problem Priority

The following code creates one action context for each rule, and one name-value pair in each action context.

*/

DECLARE
  ac SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('San Jose'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r1',
    condition      => 'prob.priority > 2',
    action_context => ac,
    rule_comment   => 'Low priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('New York'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r2',
    condition      => 'prob.priority = 2',
    action_context => ac,
    rule_comment   => 'High priority problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('ALERT', ANYDATA.CONVERTVARCHAR2('John Doe'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'r3',
    condition      => 'prob.priority = 1',
    action_context => ac,
    rule_comment   => 'Urgent problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('Tampa'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'r4',
    condition => '(prob.priority = 1) and ' ||
                 '(TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) >= 8) and ' ||
                 '(TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) <= 20)',
    action_context => ac,
    rule_comment   => 'Urgent daytime problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('Bangalore'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'r5',
    condition => '(prob.priority = 1) and ' ||
                 '((TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) < 8) or ' ||
                 ' (TO_NUMBER(TO_CHAR(:current_time, ''HH24'')) > 20))',
    action_context => ac,
    rule_comment => 'Urgent nighttime problems');
END;
/

/*

Step 8   Create the rs Rule Set

*/
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'rs',
    evaluation_context => 'evalctx',
    rule_set_comment   => 'support rules');
END;
/

/*

Step 9   Add the Rules to the Rule Set

*/
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r1', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r2', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r3', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r4', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r5', 
    rule_set_name => 'rs');
END;
/

/*

Step 10   Create the problem_dispatch PL/SQL Procedure

*/
CREATE OR REPLACE PROCEDURE problem_dispatch
IS
    CURSOR c IS SELECT probid, rowid FROM problems WHERE center IS NULL;
    tv        SYS.RE$TABLE_VALUE;
    tvl       SYS.RE$TABLE_VALUE_LIST;
    truehits  SYS.RE$RULE_HIT_LIST;
    maybehits SYS.RE$RULE_HIT_LIST;
    ac        SYS.RE$NV_LIST;
    namearray SYS.RE$NAME_ARRAY;
    name      VARCHAR2(30);
    cval      VARCHAR2(100);
    rnum      INTEGER;
    i         INTEGER;
    status    PLS_INTEGER;
BEGIN
  FOR r IN c LOOP
    tv  := SYS.RE$TABLE_VALUE('prob', rowidtochar(r.rowid));
    tvl := SYS.RE$TABLE_VALUE_LIST(tv);
    truehits := SYS.RE$RULE_HIT_LIST();
    maybehits := SYS.RE$RULE_HIT_LIST();
    DBMS_RULE.EVALUATE(
        rule_set_name        => 'support.rs',
        evaluation_context   => 'evalctx',
        table_values         => tvl,
        true_rules           => truehits,
        maybe_rules          => maybehits);
    FOR rnum IN 1..truehits.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Using rule '|| truehits(rnum).rule_name);
      ac := truehits(rnum).rule_action_context;
      namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.COUNT LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'CENTER') THEN
          UPDATE problems SET center = cval
            WHERE rowid = r.rowid;
          DBMS_OUTPUT.PUT_LINE('Assigning '|| r.probid || ' to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Alert: '|| cval || ' Problem:' || r.probid);
        END IF;
      END LOOP;
    END LOOP;
  END LOOP;
END;
/

/*

Step 11   Log Problems

*/
INSERT INTO problems(probid, custid, priority, description)
  VALUES(10301, 13, 1, 'no dial tone');

INSERT INTO problems(probid, custid, priority, description)
  VALUES(10302, 23, 2, 'noise on local calls');

INSERT INTO problems(probid, custid, priority, description)
  VALUES(10303, 33, 3, 'noise on long distance calls');

COMMIT;

/*

Step 12   Check the Spool Results

Check the rules_implicit_var.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

See Also:

"Dispatching Problems and Checking Results for the Table Examples" for the steps to complete to dispatch the problems logged in this example and check the results of the problem dispatch

Using Event Contexts and Implicit Variables with Rules

An event context is a varray of type SYS.RE$NV_LIST that contains name-value pairs that contain information about the event. This optional information is not directly used or interpreted by the rules engine. Instead, it is passed to client callbacks such as an evaluation function, a variable value function (for implicit variables), or a variable method function.

In this example, assume every customer has a primary contact person, and the goal is to assign the problem reported by a customer to the support center to which the customer's primary contact person belongs. The customer name is passed in the event context.

This example illustrates how to use event contexts with rules to evaluate implicit variables. Specifically, when an event is evaluated using the DBMS_RULE.EVALUATE procedure, the event context is passed to the variable value function for implicit variables in the evaluation context. The name of the variable value function is find_contact, and this PL/SQL function returns the contact person based on the name of the company specified in the event context. The rule set is evaluated based on the contact person name and the priority for an event.

This example uses the following rules for handling customer problems:

  * Problems for customers whose primary contact is Jane are routed to the San Jose center.

  * Problems for customers whose primary contact is Fred are routed to the New York center.

  * Problems for customers whose primary contact is George are routed to the Texas center.

  * An alert is sent to John Doe for urgent problems (priority 1).

Complete the following steps:

  1. Show Output and Spool Results

  2. Create the support User

  3. Grant the support User the Necessary System Privileges on Rules

  4. Create the find_contact Function to Return a Customer's Contact

  5. Create the evalctx Evaluation Context

  6. Create the Rules that Correspond to Problem Priority and Contact

  7. Create the rs Rule Set

  8. Add the Rules to the Rule Set

  9. Query the Data Dictionary

  10. Create the problem_dispatch PL/SQL Procedure

  11. Dispatch Sample Problems

  12. Clean Up the Environment (Optional)

  13. Check the Spool Results


Note:

If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

/************************* BEGINNING OF SCRIPT ******************************

Step 1   Show Output and Spool Results

Run SET ECHO ON and specify the spool file for the script. Check the spool file for errors after you run this script.

*/

SET ECHO ON
SPOOL rules_event_context.out

/*

Step 2   Create the support User

*/
CONNECT SYSTEM/MANAGER AS SYSDBA;

GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE,
  CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW, CREATE INDEXTYPE, 
  CREATE OPERATOR, CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE
TO support IDENTIFIED BY support;

/*

Step 3   Grant the support User the Necessary System Privileges on Rules

*/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, 
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'support', 
    grant_option => false);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 
    grantee      => 'support', 
    grant_option => false);
END;
/

/*

Step 4   Create the find_contact Function to Return a Customer's Contact

*/
CONNECT support/support

SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
SET SERVEROUTPUT ON
CREATE OR REPLACE FUNCTION find_contact(
  eco       VARCHAR2, 
  ecn       VARCHAR2, 
  var       VARCHAR2,
  evctx     SYS.RE$NV_LIST)
RETURN SYS.RE$VARIABLE_VALUE IS
  cust      VARCHAR2(30);
  contact   VARCHAR2(30);
  status    PLS_INTEGER;
BEGIN  
  IF (var = 'CUSTOMER_CONTACT') THEN
    status := evctx.GET_VALUE('CUSTOMER').GETVARCHAR2(cust);    
    IF (cust = 'COMPANY1') THEN     -- COMPANY1's contact person is Jane
      contact := 'JANE';
    ELSIF (cust = 'COMPANY2') THEN  -- COMPANY2's contact person is Fred
      contact := 'FRED';
    ELSE        -- Assign customers without primary contact person to George
      contact := 'GEORGE';
    END IF;
    RETURN SYS.RE$VARIABLE_VALUE('customer_contact',
                                 ANYDATA.CONVERTVARCHAR2(contact));
  ELSE
    RETURN NULL;
  END IF;
END;
/

/*

Step 5   Create the evalctx Evaluation Context

*/
DECLARE
  vt  SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  vt := SYS.RE$VARIABLE_TYPE_LIST(
        SYS.RE$VARIABLE_TYPE('priority', 'NUMBER', NULL, NULL),
        SYS.RE$VARIABLE_TYPE('customer_contact', 'VARCHAR2(30)', 
                             'find_contact', NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name    => 'evalctx',
    variable_types             => vt,
    evaluation_context_comment => 'support problem definition');
END;
/

/*

Step 6   Create the Rules that Correspond to Problem Priority and Contact

The following code creates one action context for each rule, and one name-value pair in each action context.

*/

DECLARE
  ac  SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('San Jose'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name       => 'r1',
    condition       => ':customer_contact = ''JANE''',
    action_context  => ac,
    rule_comment    => 'Jane''s customer problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('New York'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name       => 'r2',
    condition       => ':customer_contact = ''FRED''',
    action_context  => ac,
    rule_comment    => 'Fred''s customer problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('CENTER', ANYDATA.CONVERTVARCHAR2('Texas'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name       => 'r3',
    condition       => ':customer_contact = ''GEORGE''',
    action_context  => ac,
    rule_comment    => 'George''s customer problems');
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('ALERT', ANYDATA.CONVERTVARCHAR2('John Doe'));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name       => 'r4',
    condition       => ':priority=1',
    action_context  => ac,
    rule_comment    => 'Urgent problems');
END;
/

/*

Step 7   Create the rs Rule Set

*/
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'rs',
    evaluation_context => 'evalctx',
    rule_set_comment   => 'support rules');
END;
/

/*

Step 8   Add the Rules to the Rule Set

*/
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r1', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r2', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r3', 
    rule_set_name => 'rs');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'r4', 
    rule_set_name => 'rs');
END;
/

/*

Step 9   Query the Data Dictionary

At this point, you can view the evaluation context, rules, and rule set you created in the previous steps.

*/

COLUMN EVALUATION_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A30
COLUMN EVALUATION_CONTEXT_COMMENT HEADING 'Eval Context Comment' FORMAT A40

SELECT EVALUATION_CONTEXT_NAME, EVALUATION_CONTEXT_COMMENT
  FROM USER_EVALUATION_CONTEXTS
  ORDER BY EVALUATION_CONTEXT_NAME;

SET LONGCHUNKSIZE 4000
SET LONG 4000
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A5
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35
COLUMN ACTION_CONTEXT_NAME HEADING 'Action|Context|Name' FORMAT A10
COLUMN ACTION_CONTEXT_VALUE HEADING 'Action|Context|Value' FORMAT A10

SELECT RULE_NAME, 
       RULE_CONDITION,
       AC.NVN_NAME ACTION_CONTEXT_NAME, 
       AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM USER_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  ORDER BY RULE_NAME;

COLUMN RULE_SET_NAME HEADING 'Rule Set Name' FORMAT A20
COLUMN RULE_SET_EVAL_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A12
COLUMN RULE_SET_EVAL_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A25
COLUMN RULE_SET_COMMENT HEADING 'Rule Set|Comment' FORMAT A15

SELECT RULE_SET_NAME, 
       RULE_SET_EVAL_CONTEXT_OWNER,
       RULE_SET_EVAL_CONTEXT_NAME,
       RULE_SET_COMMENT
  FROM USER_RULE_SETS
  ORDER BY RULE_SET_NAME;

/*

Step 10   Create the problem_dispatch PL/SQL Procedure

*/
CREATE OR REPLACE PROCEDURE problem_dispatch (priority  NUMBER,
                                              customer  VARCHAR2) 
IS
    vvl       SYS.RE$VARIABLE_VALUE_LIST;
    truehits  SYS.RE$RULE_HIT_LIST;
    maybehits SYS.RE$RULE_HIT_LIST;
    ac        SYS.RE$NV_LIST;
    namearray SYS.RE$NAME_ARRAY;
    name      VARCHAR2(30);
    cval      VARCHAR2(100);
    rnum      INTEGER;
    i         INTEGER;
    status    PLS_INTEGER;
    evctx     SYS.RE$NV_LIST;
BEGIN
  vvl  := SYS.RE$VARIABLE_VALUE_LIST(
            SYS.RE$VARIABLE_VALUE('priority',
                                  ANYDATA.CONVERTNUMBER(priority)));
  evctx := SYS.RE$NV_LIST(NULL);
  evctx.ADD_PAIR('CUSTOMER', ANYDATA.CONVERTVARCHAR2(customer));
  truehits  := SYS.RE$RULE_HIT_LIST();
  maybehits := SYS.RE$RULE_HIT_LIST();
  DBMS_RULE.EVALUATE(
      rule_set_name        => 'support.rs',
      evaluation_context   => 'evalctx',
      event_context        => evctx,
      variable_values      => vvl,
      true_rules           => truehits,
      maybe_rules          => maybehits);
  FOR rnum IN 1..truehits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('Using rule '|| truehits(rnum).rule_name);
    ac := truehits(rnum).rule_action_context;
    namearray := ac.GET_ALL_NAMES;
      FOR i IN 1..namearray.count LOOP
        name := namearray(i);
        status := ac.GET_VALUE(name).GETVARCHAR2(cval);
        IF (name = 'CENTER') THEN
          DBMS_OUTPUT.PUT_LINE('Assigning problem to ' || cval);
        ELSIF (name = 'ALERT') THEN
          DBMS_OUTPUT.PUT_LINE('Sending alert to: '|| cval);
        END IF;
      END LOOP;
  END LOOP;
END;
/

/*

Step 11   Dispatch Sample Problems

The first problem dispatch in this step uses the event context and the variable value function to determine the contact person for COMPANY1. The event context is passed to the find_contact variable value function, and this function returns the contact name JANE. Therefore, rule r1 evaluates to TRUE. The problem_dispatch procedure sends the problem to the San Jose office because JANE belongs to that office. In addition, the priority for this event is 1, which causes rule r4 to evaluate to TRUE. As a result, the problem_dispatch procedure sends an alert to John Doe.

The second problem dispatch in this step uses the event context and the variable value function to determine the contact person for COMPANY2. The event context is passed to the find_contact variable value function, and this function returns the contact name FRED. Therefore, rule r2 evaluates to TRUE. The problem_dispatch procedure sends the problem to the New York office because FRED belongs to that office.

The third problem dispatch in this step uses the event context and the variable value function to determine the contact person for COMPANY3. This company does not have a dedicated contact person. The event context is passed to the find_contact variable value function, and this function returns the contact name GEORGE, because GEORGE is the default contact when no contact person is found. Therefore, rule r3 evaluates to TRUE. The problem_dispatch procedure sends the problem to the Texas office because GEORGE belongs to that office.

*/

EXECUTE problem_dispatch(1, 'COMPANY1');
EXECUTE problem_dispatch(2, 'COMPANY2');
EXECUTE problem_dispatch(5, 'COMPANY3');
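
Because the procedure reports its actions with DBMS_OUTPUT (SET SERVEROUTPUT ON is in effect from Step 4), these dispatches produce output similar to the following sketch, which is derived from the problem_dispatch procedure in Step 10. The order of the lines can vary:

Using rule R1
Assigning problem to San Jose
Using rule R4
Sending alert to: John Doe
Using rule R2
Assigning problem to New York
Using rule R3
Assigning problem to Texas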

/*

Step 12   Clean Up the Environment (Optional)

You can clean up the sample environment by dropping the support user.

*/

CONNECT SYSTEM/MANAGER AS SYSDBA;

DROP USER support CASCADE;

/*

Step 13   Check the Spool Results

Check the rules_event_context.out spool file to ensure that all actions completed successfully after this script completes.

*/

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Dispatching Problems and Checking Results for the Table Examples

Each of the previous table examples configures a problem_dispatch procedure that updates information in the problems table.

The steps in this section dispatch the problems by running the problem_dispatch procedure and display the results in the problems table.


Step 1   Query the Data Dictionary

View the evaluation context, rules, and rule set you created in the example:

CONNECT support/support

COLUMN EVALUATION_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A30
COLUMN EVALUATION_CONTEXT_COMMENT HEADING 'Eval Context Comment' FORMAT A40

SELECT EVALUATION_CONTEXT_NAME, EVALUATION_CONTEXT_COMMENT
  FROM USER_EVALUATION_CONTEXTS
  ORDER BY EVALUATION_CONTEXT_NAME;

SET LONGCHUNKSIZE 4000
SET LONG 4000
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A5
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35
COLUMN ACTION_CONTEXT_NAME HEADING 'Action|Context|Name' FORMAT A10
COLUMN ACTION_CONTEXT_VALUE HEADING 'Action|Context|Value' FORMAT A10

SELECT RULE_NAME, 
       RULE_CONDITION,
       AC.NVN_NAME ACTION_CONTEXT_NAME, 
       AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM USER_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  ORDER BY RULE_NAME;

COLUMN RULE_SET_NAME HEADING 'Rule Set Name' FORMAT A20
COLUMN RULE_SET_EVAL_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A12
COLUMN RULE_SET_EVAL_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A25
COLUMN RULE_SET_COMMENT HEADING 'Rule Set|Comment' FORMAT A15

SELECT RULE_SET_NAME, 
       RULE_SET_EVAL_CONTEXT_OWNER,
       RULE_SET_EVAL_CONTEXT_NAME,
       RULE_SET_COMMENT
  FROM USER_RULE_SETS
  ORDER BY RULE_SET_NAME;

Step 2   List the Problems in the problems Table

This SELECT statement should show the problems logged previously.

COLUMN probid HEADING 'Problem ID' FORMAT 99999
COLUMN custid HEADING 'Customer ID' FORMAT 99
COLUMN priority HEADING 'Priority' FORMAT 9
COLUMN description HEADING 'Problem Description' FORMAT A30
COLUMN center HEADING 'Center' FORMAT A10

SELECT probid, custid, priority, description, center FROM problems
  ORDER BY probid;

Your output looks similar to the following:

Problem ID Customer ID Priority Problem Description            Center
---------- ----------- -------- ------------------------------ ----------
     10301          13        1 no dial tone
     10302          23        2 noise on local calls
     10303          33        3 noise on long distance calls

Notice that the Center column is NULL for each new row inserted.

Step 3   Dispatch the Problems by Running the problem_dispatch Procedure

Execute the problem_dispatch procedure.

SET SERVEROUTPUT ON
EXECUTE problem_dispatch;
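
The procedure reports its actions with DBMS_OUTPUT. For the implicit-variable example run during the day, the output is similar to the following sketch (the order of the lines can vary, the urgent problem is routed to the nighttime center outside of 8 a.m. through 8 p.m., and the problem IDs differ if you ran the explicit-variable example):

Using rule R2
Assigning 10302 to New York
Using rule R1
Assigning 10303 to San Jose
Using rule R3
Alert: John Doe Problem:10301
Using rule R4
Assigning 10301 to Tampa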

Step 4   List the Problems in the problems Table

If the problems were dispatched successfully in Step 3, then this SELECT statement should show the center to which each problem was dispatched in the Center column.

SELECT probid, custid, priority, description, center FROM problems
  ORDER BY probid;

Your output looks similar to the following:

Problem ID Customer ID Priority Problem Description            Center
---------- ----------- -------- ------------------------------ ----------
     10201          12        1 no dial tone                   Tampa
     10202          22        2 noise on local calls           New York
     10203          32        3 noise on long distance calls   San Jose

Note:

The output will vary depending on which example you used to create the problem_dispatch procedure.

Step 5   Clean Up the Environment (Optional)

You can clean up the sample environment by dropping the support user.

CONNECT SYSTEM/MANAGER AS SYSDBA;

DROP USER support CASCADE;

22 Monitoring Streams Apply Processes

This chapter provides sample queries that you can use to monitor your Streams apply processes.

This chapter contains these topics:


Note:

The Streams tool in the Oracle Enterprise Manager Console is also an excellent way to monitor a Streams environment. See the online help for the Streams tool for more information.


See Also:


Determining the Queue, Rule Sets, and Status for Each Apply Process

You can determine the following information for each apply process in a database by running the query in this section:

  * The apply process name

  * The queue used by the apply process

  * The name of the positive rule set used by the apply process

  * The name of the negative rule set used by the apply process

  * The status of the apply process, which can be ENABLED, DISABLED, or ABORTED

To display this general information about each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Apply|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Apply|Process|Status' FORMAT A15

SELECT APPLY_NAME, 
       QUEUE_NAME, 
       RULE_SET_NAME, 
       NEGATIVE_RULE_SET_NAME,
       STATUS
  FROM DBA_APPLY;

Your output looks similar to the following:

Apply           Apply                                           Apply
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set        Rule Set        Status
--------------- --------------- --------------- --------------- ---------------
STRM01_APPLY    STREAMS_QUEUE   RULESET$_36                     ENABLED
APPLY_EMP       STREAMS_QUEUE   RULESET$_16                     DISABLED
APPLY           STREAMS_QUEUE   RULESET$_21     RULESET$_23     ENABLED

If the status of an apply process is ABORTED, then you can query the ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_APPLY data dictionary view to determine the error. These columns are populated when an apply process aborts or when an apply process is disabled after reaching a limit. These columns are cleared when an apply process is restarted.
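
For example, a query along these lines (a sketch using the same DBA_APPLY view) displays the error information for any apply process that has aborted:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY
  WHERE STATUS = 'ABORTED';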


Note:

The ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_APPLY data dictionary view are not related to the information in the DBA_APPLY_ERROR data dictionary view.


See Also:

"Checking for Apply Errors" to check for apply errors if the apply process status is ABORTED

Displaying General Information About Each Apply Process

You can display the following general information about each apply process in a database by running the query in this section:

  * The apply process name

  * The type of messages applied by the apply process (captured or user-enqueued)

  * The apply user

To display this general information about each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN APPLY_CAPTURED HEADING 'Type of Messages Applied' FORMAT A25
COLUMN APPLY_USER HEADING 'Apply User' FORMAT A30

SELECT APPLY_NAME, 
       DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED,
       APPLY_USER
  FROM DBA_APPLY;

Your output looks similar to the following:

Apply Process Name   Type of Messages Applied  Apply User
-------------------- ------------------------- ------------------------------
STRM01_APPLY         Captured                  STRMADMIN
APPLY_OE             User-Enqueued             STRMADMIN
APPLY                Captured                  HR

Listing the Parameter Settings for Each Apply Process

The following query displays the current setting for each apply process parameter for each apply process in a database:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN PARAMETER HEADING 'Parameter' FORMAT A25
COLUMN VALUE HEADING 'Value' FORMAT A20
COLUMN SET_BY_USER HEADING 'Set by User?' FORMAT A15

SELECT APPLY_NAME,
       PARAMETER, 
       VALUE,
       SET_BY_USER  
  FROM DBA_APPLY_PARAMETERS;

Your output looks similar to the following:

Apply Process
Name            Parameter                 Value                Set by User?
--------------- ------------------------- -------------------- ---------------
APPLY_HR        ALLOW_DUPLICATE_ROWS      N                    NO
APPLY_HR        COMMIT_SERIALIZATION      FULL                 NO
APPLY_HR        DISABLE_ON_ERROR          Y                    NO
APPLY_HR        DISABLE_ON_LIMIT          N                    NO
APPLY_HR        MAXIMUM_SCN               INFINITE             NO
APPLY_HR        PARALLELISM               1                    NO
APPLY_HR        STARTUP_SECONDS           0                    NO
APPLY_HR        TIME_LIMIT                INFINITE             NO
APPLY_HR        TRACE_LEVEL               0                    NO
APPLY_HR        TRANSACTION_LIMIT         INFINITE             NO
APPLY_HR        TXN_LCR_SPILL_THRESHOLD   5000                 YES
APPLY_HR        WRITE_ALERT_LOG           Y                    NO

Note:

If the Set by User? column is NO for a parameter, then the parameter is set to its default value. If the Set by User? column is YES for a parameter, then the parameter might or might not be set to its default value.
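
You can change these settings with the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. The following is a sketch (the apply process name apply_hr is taken from the sample output); passing a NULL value restores the default for a parameter:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_hr',
    parameter  => 'disable_on_error',
    value      => 'N');  -- continue applying after an error instead of disabling
END;
/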

Displaying Information About Apply Handlers

This section contains instructions for displaying information about apply process message handlers and error handlers.

Displaying All of the Error Handlers for Local Apply Processes

When you specify a local error handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package at a destination database, you can specify either that the handler runs for a specific apply process or that the handler is a general handler that runs for all apply processes in the database that apply changes locally. When an error is raised, a specific error handler takes precedence over a general error handler. An error handler is run for a specified operation on a specific table.
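
For example, the general error handler shown in the sample output later in this section might have been registered with a call along these lines (a sketch; the handler procedure name is taken from that output):

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.regions',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => TRUE,
    user_procedure => 'strmadmin.errors_pkg.regions_pk_error',
    apply_name     => NULL);  -- NULL makes this a general handler for all local apply processes
END;
/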

To display the error handler for each apply process that applies changes locally in a database, run the following query:

COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A5
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A10
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A10
COLUMN USER_PROCEDURE HEADING 'Handler Procedure' FORMAT A30
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15

SELECT OBJECT_OWNER, 
       OBJECT_NAME, 
       OPERATION_NAME, 
       USER_PROCEDURE,
       APPLY_NAME 
  FROM DBA_APPLY_DML_HANDLERS
  WHERE ERROR_HANDLER = 'Y'
  ORDER BY OBJECT_OWNER, OBJECT_NAME;

Your output looks similar to the following:

Table                                                      Apply Process
Owner Table Name Operation  Handler Procedure              Name
----- ---------- ---------- ------------------------------ --------------
HR    REGIONS    INSERT     "STRMADMIN"."ERRORS_PKG"."REGI
                            ONS_PK_ERROR"

Apply Process Name is NULL for the strmadmin.errors_pkg.regions_pk_error error handler. Therefore, this handler is a general handler that runs for all of the local apply processes.

Displaying the Message Handler for Each Apply Process

To display each message handler in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN MESSAGE_HANDLER HEADING 'Message Handler' FORMAT A20

SELECT APPLY_NAME, MESSAGE_HANDLER FROM DBA_APPLY
  WHERE MESSAGE_HANDLER IS NOT NULL;

Your output looks similar to the following:

Apply Process Name   Message Handler
-------------------- --------------------
STRM03_APPLY         "OE"."MES_HANDLER"

Displaying the Precommit Handler for Each Apply Process

To display each precommit handler in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN PRECOMMIT_HANDLER HEADING 'Precommit Handler' FORMAT A30
COLUMN APPLY_CAPTURED HEADING 'Type of|Messages|Applied' FORMAT A15

SELECT APPLY_NAME, 
       PRECOMMIT_HANDLER,
       DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE PRECOMMIT_HANDLER IS NOT NULL;

Your output looks similar to the following:

                                                    Type of
                                                    Messages
Apply Process Name   Precommit Handler              Applied
-------------------- ------------------------------ ---------------
STRM01_APPLY         "STRMADMIN"."HISTORY_COMMIT"   Captured

Displaying Information About the Reader Server for Each Apply Process

The reader server for an apply process dequeues messages from the queue. The reader server is a parallel execution server that computes dependencies between LCRs and assembles messages into transactions. The reader server then returns the assembled transactions to the coordinator, which assigns them to idle apply servers.

The query in this section displays the following information about the reader server for each apply process:

  * The name of the apply process

  * The type of messages dequeued by the reader server (captured LCRs or user-enqueued messages)

  * The name of the parallel execution server used by the reader server

  * The current state of the reader server

  * The total number of messages dequeued by the reader server since the apply process was last started

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display this information for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN APPLY_CAPTURED HEADING 'Apply Type' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process|Name' FORMAT A7
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_MESSAGES_DEQUEUED HEADING 'Total Messages|Dequeued' FORMAT 99999999

SELECT r.APPLY_NAME,
       DECODE(ap.APPLY_CAPTURED,
                'YES','Captured LCRS',
                'NO','User-enqueued messages','UNKNOWN') APPLY_CAPTURED,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED
       FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap 
       WHERE r.SID = s.SID AND 
             r.SERIAL# = s.SERIAL# AND 
             r.APPLY_NAME = ap.APPLY_NAME;

Your output looks similar to the following:

Apply Process                          Process                   Total Messages
Name            Apply Type             Name    State                   Dequeued
--------------- ---------------------- ------- ----------------- --------------
APPLY$_STM2_14  Captured LCRS          P000    DEQUEUE MESSAGES            5650

Monitoring Transactions and Messages Spilled by Each Apply Process

If the txn_lcr_spill_threshold apply process parameter is set to a value other than infinite, then an apply process can spill messages from memory to hard disk when the number of messages in a transaction exceeds the specified number.
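
For example, the following sketch sets the threshold to 5000 messages for an apply process named apply_hr (the name is taken from the sample output below):

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_hr',
    parameter  => 'txn_lcr_spill_threshold',
    value      => '5000');  -- spill transactions with more than 5000 messages to disk
END;
/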

The first query in this section displays the following information about each transaction currently being applied for which the apply process has spilled messages:

  * The name of the apply process

  * The transaction ID of the transaction with spilled messages

  * The system change number (SCN) of the first message in the transaction

  * The number of messages spilled in the transaction

To display this information for each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20
COLUMN 'Transaction ID' HEADING 'Transaction ID' FORMAT A15
COLUMN FIRST_SCN HEADING 'First SCN'   FORMAT 99999999
COLUMN MESSAGE_COUNT HEADING 'Message Count' FORMAT 99999999
 
SELECT APPLY_NAME,
       XIDUSN ||'.'|| 
       XIDSLT ||'.'||
       XIDSQN "Transaction ID",
       FIRST_SCN,
       MESSAGE_COUNT
  FROM DBA_APPLY_SPILL_TXN;

Your output looks similar to the following:

Apply Name           Transaction ID  First SCN Message Count
-------------------- --------------- --------- -------------
APPLY_HR             1.42.2277         2246944           100

The next query in this section displays the following information about the messages spilled by the apply processes in the local database:

  * The name of the apply process

  * The total number of messages spilled by the apply process since it was last started

  * The amount of time the apply process spent spilling messages

To display this information for each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A15
COLUMN TOTAL_MESSAGES_SPILLED HEADING 'Total|Spilled Messages' FORMAT 99999999
COLUMN ELAPSED_SPILL_TIME HEADING 'Elapsed Time|Spilling Messages' FORMAT 99999999.99

SELECT APPLY_NAME,
       TOTAL_MESSAGES_SPILLED,
       (ELAPSED_SPILL_TIME/100) ELAPSED_SPILL_TIME
  FROM V$STREAMS_APPLY_READER;

Your output looks similar to the following:

                           Total      Elapsed Time
Apply Name      Spilled Messages Spilling Messages
--------------- ---------------- -----------------
APPLY_HR                     100              2.67

Note:

The elapsed time spilling messages is displayed in seconds. The V$STREAMS_APPLY_READER view displays elapsed time in centiseconds by default. A centisecond is one-hundredth of a second. The query in this section divides each elapsed time by one hundred to display the elapsed time in seconds.

Determining Capture to Dequeue Latency for a Message

The query in this section displays the following information about the last message dequeued by each apply process:

  * The name of the apply process

  * The latency, which is the amount of time between when the message was created at the source database and when it was dequeued by the apply process

  * The creation time of the message at the source database

  * The time when the message was dequeued

  * The message number of the last dequeued message

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display this information for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 9999
COLUMN CREATION HEADING 'Message Creation' FORMAT A17
COLUMN LAST_DEQUEUE HEADING 'Last Dequeue Time' FORMAT A20
COLUMN DEQUEUED_MESSAGE_NUMBER HEADING 'Dequeued|Message Number' FORMAT 999999

SELECT APPLY_NAME,
     (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
     TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') CREATION,
     TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
     DEQUEUED_MESSAGE_NUMBER  
  FROM V$STREAMS_APPLY_READER;

Your output looks similar to the following:

                  Latency
Apply Process          in                                              Dequeued
Name              Seconds Message Creation  Last Dequeue Time    Message Number
----------------- ------- ----------------- -------------------- --------------
APPLY$_STM1_14          1 15:22:15 06/13/05 15:22:16 06/13/05            502129

Displaying General Information About Each Coordinator Process

A coordinator process gets transactions from the reader server and passes these transactions to apply servers. The coordinator process name is annn, where nnn is a coordinator process number.

The query in this section displays the following information about the coordinator process for each apply process:

  * The apply process name

  * The coordinator process name

  * The session identifier of the coordinator session

  * The serial number of the coordinator session

  * The current state of the coordinator process

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display this information for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Coordinator|Process|Name' FORMAT A11
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A21

SELECT c.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.SID,
       c.SERIAL#,
       c.STATE
       FROM V$STREAMS_APPLY_COORDINATOR c, V$SESSION s
       WHERE c.SID = s.SID AND
             c.SERIAL# = s.SERIAL#;

Your output looks similar to the following:

                  Coordinator         Session
Apply Process     Process     Session  Serial
Name              Name             ID  Number State
----------------- ----------- ------- ------- ---------------------
APPLY_FROM_MULT1  A001             16       1 APPLYING
APPLY_FROM_MULT2  A002             18       1 APPLYING

Displaying Information About Transactions Received and Applied

The query in this section displays the following information about the transactions received, applied, and being applied by each apply process:

  * The apply process name

  * The total number of transactions received by the coordinator process since the apply process was last started

  * The total number of transactions successfully applied by the apply process

  * The number of transactions that resulted in apply errors

  * The number of transactions currently being applied

  * The total number of transactions ignored by the coordinator process

The information displayed by this query is valid only for an enabled apply process.

For example, to display this information for an apply process named apply, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A25
COLUMN TOTAL_RECEIVED HEADING 'Total|Trans|Received' FORMAT 99999999
COLUMN TOTAL_APPLIED HEADING 'Total|Trans|Applied' FORMAT 99999999
COLUMN TOTAL_ERRORS HEADING 'Total|Apply|Errors' FORMAT 9999
COLUMN BEING_APPLIED HEADING 'Total|Trans Being|Applied' FORMAT 99999999
COLUMN TOTAL_IGNORED HEADING 'Total|Trans|Ignored' FORMAT 99999999

SELECT APPLY_NAME,
       TOTAL_RECEIVED,
       TOTAL_APPLIED,
       TOTAL_ERRORS,
       (TOTAL_ASSIGNED - (TOTAL_ROLLBACKS + TOTAL_APPLIED)) BEING_APPLIED,
       TOTAL_IGNORED 
       FROM V$STREAMS_APPLY_COORDINATOR;

Your output looks similar to the following:

                              Total     Total  Total       Total     Total
                              Trans     Trans  Apply Trans Being     Trans
Apply Process Name         Received   Applied Errors     Applied   Ignored
------------------------- --------- --------- ------ ----------- ---------
APPLY_FROM_MULT1                 81        73      2           6         0
APPLY_FROM_MULT2                114        96      0          14         4

Determining the Capture to Apply Latency for a Message for Each Apply Process

This section contains two different queries that show the capture to apply latency for a particular message. That is, for captured messages, these queries show the amount of time between when the message was created at a source database and when the message was applied by the apply process. One query uses the V$STREAMS_APPLY_COORDINATOR dynamic performance view. The other uses the DBA_APPLY_PROGRESS static data dictionary view.


Note:

These queries assume that the apply process applies captured messages, not user-enqueued messages.

The two queries differ in the following ways: the query on the V$STREAMS_APPLY_COORDINATOR dynamic performance view returns results only while the apply process is enabled, while the query on the DBA_APPLY_PROGRESS static data dictionary view returns results even when the apply process is disabled. In addition, the information in the DBA_APPLY_PROGRESS view might not be as current as the information in the dynamic performance view.

Both queries display the following information about a message applied by each apply process:

  * The apply process name

  * The latency, which is the amount of time between when the message was created at the source database and when it was applied by the apply process

  * The creation time of the message at the source database

  * The time when the message was applied

  * The message number of the message

Example V$STREAMS_APPLY_COORDINATOR Query for Latency

Run the following query to display the capture to apply latency using the V$STREAMS_APPLY_COORDINATOR view for a message for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN 'Latency in Seconds' FORMAT 999999
COLUMN 'Message Creation' FORMAT A17
COLUMN 'Apply Time' FORMAT A17
COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999

SELECT APPLY_NAME,
     (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') 
        "Message Creation",
     TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     HWM_MESSAGE_NUMBER  
  FROM V$STREAMS_APPLY_COORDINATOR;

Your output looks similar to the following:

                                                                         Applied
Apply Process                                                            Message
Name              Latency in Seconds Message Creation  Apply Time         Number
----------------- ------------------ ----------------- ----------------- -------
APPLY$_STM1_14                     4 14:05:13 06/13/05 14:05:17 06/13/05  498215

Example DBA_APPLY_PROGRESS Query for Latency

Run the following query to display the capture to apply latency using the DBA_APPLY_PROGRESS view for a message for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN 'Latency in Seconds' FORMAT 999999
COLUMN 'Message Creation' FORMAT A17
COLUMN 'Apply Time' FORMAT A17
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999

SELECT APPLY_NAME,
     (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') 
        "Message Creation",
     TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     APPLIED_MESSAGE_NUMBER  
  FROM DBA_APPLY_PROGRESS;

Your output looks similar to the following:

                                                                         Applied
Apply Process                                                            Message
Name              Latency in Seconds Message Creation  Apply Time         Number
----------------- ------------------ ----------------- ----------------- -------
APPLY$_STM1_14                    33 14:05:13 06/13/05 14:05:46 06/13/05  498215

Displaying Information About the Apply Servers for Each Apply Process

An apply process can use one or more apply servers that apply LCRs to database objects as DML statements or DDL statements or pass the LCRs to their appropriate handlers. For non-LCR messages, the apply servers pass the messages to the message handler. Each apply server is a parallel execution server.

The query in this section displays the following information about the apply servers for each apply process:

  * The name of the apply process

  * The name of the parallel execution server used by each apply server

  * The current state of each apply server

  * The total number of transactions assigned to each apply server since the apply process was last started

  * The total number of messages applied by each apply server since the apply process was last started

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display information about the apply servers for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED, 
       r.TOTAL_MESSAGES_APPLIED
  FROM V$STREAMS_APPLY_SERVER R, V$SESSION S 
  WHERE r.SID = s.SID AND 
        r.SERIAL# = s.SERIAL# 
  ORDER BY r.APPLY_NAME, r.SERVER_ID;

Your output looks similar to the following:

                                                             Total      Total
                                                      Transactions   Messages
Apply Process Name     Process Name State                 Assigned    Applied
---------------------- ------------ ----------------- ------------ ----------
APPLY                  P001         IDLE                        94       2141
APPLY                  P002         IDLE                        12        276
APPLY                  P003         IDLE                         0          0

Displaying Effective Apply Parallelism for an Apply Process

In some environments, an apply process might not use all of the apply servers available to it. For example, apply process parallelism can be set to five, but only three apply servers are ever used by the apply process. In this case, the effective apply parallelism is three.

The following query displays the effective apply parallelism for an apply process named apply:

SELECT COUNT(SERVER_ID) "Effective Parallelism"
  FROM V$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY' AND
        TOTAL_MESSAGES_APPLIED > 0;

Your output looks similar to the following:

Effective Parallelism
---------------------
                    2

This query returned two for the effective parallelism. If parallelism is set to three for the apply process named apply, then one apply server has not been used since the last time the apply process was started.

You can display the total number of messages applied by each apply server by running the following query:

COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total Messages Applied' FORMAT 999999

SELECT SERVER_ID, TOTAL_MESSAGES_APPLIED 
  FROM V$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY'
  ORDER BY SERVER_ID;

Your output looks similar to the following:

Apply Server ID Total Messages Applied
--------------- ----------------------
              1                   2141
              2                    276
              3                      0

In this case, apply server 3 has not been used by the apply process since it was last started. If the parallelism setting for an apply process is higher than the effective parallelism for the apply process, then consider lowering the parallelism setting.
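
For example, the following sketch lowers the parallelism apply process parameter for the apply process named apply to match the observed effective parallelism:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply',
    parameter  => 'parallelism',
    value      => '2');
END;
/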

Viewing Rules that Specify a Destination Queue on Apply

You can specify a destination queue for a rule using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package. If an apply process has such a rule in its positive rule set, and a message satisfies the rule, then the apply process enqueues the message into the destination queue.
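
For example, the destination queue setting shown in the sample output below might be created with a call along these lines (a sketch; the rule and queue names are taken from that output):

BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.departments17',
    destination_queue_name => 'strmadmin.streams_queue');
END;
/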

To view destination queue settings for rules, run the following query:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN DESTINATION_QUEUE_NAME HEADING 'Destination Queue' FORMAT A30

SELECT RULE_OWNER, RULE_NAME, DESTINATION_QUEUE_NAME
  FROM DBA_APPLY_ENQUEUE;

Your output looks similar to the following:

Rule Owner      Rule Name       Destination Queue
--------------- --------------- ------------------------------
STRMADMIN       DEPARTMENTS17   "STRMADMIN"."STREAMS_QUEUE"

Viewing Rules that Specify No Execution on Apply

You can specify an execution directive for a rule using the SET_EXECUTE procedure in the DBMS_APPLY_ADM package. An execution directive controls whether a message that satisfies the specified rule is executed by an apply process. If an apply process has a rule in its positive rule set with NO for its execution directive, and a message satisfies the rule, then the apply process does not execute the message and does not send the message to any apply handler.
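
For example, the execution directive shown in the sample output below might be set with a call along these lines (a sketch; the rule name is taken from that output):

BEGIN
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name => 'strmadmin.departments18',
    execute   => FALSE);  -- do not execute messages that satisfy this rule
END;
/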

To view each rule with NO for its execution directive, run the following query:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20

SELECT RULE_OWNER, RULE_NAME
  FROM DBA_APPLY_EXECUTE
  WHERE EXECUTE_EVENT = 'NO';

Your output looks similar to the following:

Rule Owner           Rule Name
-------------------- --------------------
STRMADMIN            DEPARTMENTS18

Checking for Apply Errors

To check for apply errors, run the following query:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME, 
       SOURCE_DATABASE, 
       LOCAL_TRANSACTION_ID, 
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

If there are any apply errors, then your output looks similar to the following:

Apply                 Local                                         Messages in
Process    Source     Transaction                                         Error
Name       Database   ID          Error Number Error Message        Transaction
---------- ---------- ----------- ------------ -------------------- -----------
APPLY_FROM MULT3.NET  1.62.948            1403 ORA-01403: no data f           1
_MULT3                                         ound

APPLY_FROM MULT2.NET  1.54.948            1403 ORA-01403: no data f           1
_MULT2                                         ound

If there are apply errors, then you can either try to reexecute the transactions that encountered the errors, or you can delete the transactions. If you want to reexecute a transaction that encountered an error, then first correct the condition that caused the transaction to raise an error.

If you want to delete a transaction that encountered an error, then you might need to resynchronize data manually if you are sharing data between multiple databases. Remember to set an appropriate session tag, if necessary, when you resynchronize data manually.
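
For example, the following sketch reexecutes one of the error transactions shown above and deletes the other (the local transaction IDs are taken from the sample output):

-- Reexecute the transaction after correcting the error condition
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.62.948');

-- Delete the other error transaction
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.54.948');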


See Also:


Displaying Detailed Information About Apply Errors

This section contains SQL scripts that you can use to display detailed information about the error transactions in the error queue in a database. These scripts are designed to display information about LCRs, but you can extend them to display information about any non-LCR messages used in your environment as well.

To use these scripts, complete the following steps:

  1. Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View

  2. Create a Procedure that Prints the Value in an ANYDATA Object

  3. Create a Procedure that Prints a Specified LCR

  4. Create a Procedure that Prints All the LCRs in the Error Queue

  5. Create a Procedure that Prints All the Error LCRs for a Transaction


Note:

These scripts display only the first 253 characters for VARCHAR2 values in LCRs.


Step 1   Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View

The user who creates and runs the print_errors and print_transaction procedures described in the following sections must be granted explicit SELECT privilege on the DBA_APPLY_ERROR data dictionary view. This privilege cannot be granted through a role. Running the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package on a user grants this privilege to the user.

To grant this privilege to a user directly, complete the following steps:

  1. Connect as an administrative user who can grant privileges.

  2. Grant SELECT privilege on the DBA_APPLY_ERROR data dictionary view to the appropriate user. For example, to grant this privilege to the strmadmin user, run the following statement:

    GRANT SELECT ON DBA_APPLY_ERROR TO strmadmin;
    
  3. Grant EXECUTE privilege on the DBMS_APPLY_ADM package. For example, to grant this privilege to the strmadmin user, run the following statement:

    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    
  4. Connect to the database as the user to whom you granted the privileges in Steps 2 and 3.

Step 2   Create a Procedure that Prints the Value in an ANYDATA Object

The following procedure prints the value in a specified ANYDATA object for some selected datatypes. You can add more datatypes to this procedure if you wish.
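
For example, one way to extend the procedure (a sketch; it assumes an additional local variable declared as ts TIMESTAMP) is to add a branch such as the following before the final ELSE:

  ELSIF tn = 'SYS.TIMESTAMP' THEN
    res := data.GETTIMESTAMP(ts);
    DBMS_OUTPUT.PUT_LINE(TO_CHAR(ts));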

CREATE OR REPLACE PROCEDURE print_any(data IN ANYDATA) IS
  tn  VARCHAR2(61);
  str VARCHAR2(4000);
  chr VARCHAR2(1000);
  num NUMBER;
  dat DATE;
  rw  RAW(4000);
  res NUMBER;
BEGIN
  IF data IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('NULL value');
    RETURN;
  END IF;
  tn := data.GETTYPENAME();
  IF tn = 'SYS.VARCHAR2' THEN
    res := data.GETVARCHAR2(str);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(str,0,253));
  ELSIF tn = 'SYS.CHAR' then
    res := data.GETCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(chr,0,253));
  ELSIF tn = 'SYS.VARCHAR' THEN
    res := data.GETVARCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(chr);
  ELSIF tn = 'SYS.NUMBER' THEN
    res := data.GETNUMBER(num);
    DBMS_OUTPUT.PUT_LINE(num);
  ELSIF tn = 'SYS.DATE' THEN
    res := data.GETDATE(dat);
    DBMS_OUTPUT.PUT_LINE(dat);
  ELSIF tn = 'SYS.RAW' THEN
    -- res := data.GETRAW(rw);
    -- DBMS_OUTPUT.PUT_LINE(SUBSTR(DBMS_LOB.SUBSTR(rw),0,253));
    DBMS_OUTPUT.PUT_LINE('RAW Value');
  ELSIF tn = 'SYS.BLOB' THEN
    DBMS_OUTPUT.PUT_LINE('BLOB Found');
  ELSE
    DBMS_OUTPUT.PUT_LINE('typename is ' || tn);
  END IF;
END print_any;
/

Step 3   Create a Procedure that Prints a Specified LCR

The following procedure prints a specified LCR. It calls the print_any procedure created in "Create a Procedure that Prints the Value in an ANYDATA Object".

CREATE OR REPLACE PROCEDURE print_lcr(lcr IN ANYDATA) IS
  typenm    VARCHAR2(61);
  ddllcr    SYS.LCR$_DDL_RECORD;
  proclcr   SYS.LCR$_PROCEDURE_RECORD;
  rowlcr    SYS.LCR$_ROW_RECORD;
  res       NUMBER;
  newlist   SYS.LCR$_ROW_LIST;
  oldlist   SYS.LCR$_ROW_LIST;
  ddl_text  CLOB;
  ext_attr  ANYDATA;
BEGIN
  typenm := lcr.GETTYPENAME();
  DBMS_OUTPUT.PUT_LINE('type name: ' || typenm);
  IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
    res := lcr.GETOBJECT(ddllcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' || 
                         ddllcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || ddllcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || ddllcr.IS_NULL_TAG);
    DBMS_LOB.CREATETEMPORARY(ddl_text, true);
    ddllcr.GET_DDL_TEXT(ddl_text);
    DBMS_OUTPUT.PUT_LINE('ddl: ' || ddl_text);    
    -- Print extra attributes in DDL LCR
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('serial#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
      END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('session#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
      END IF; 
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('thread#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
      END IF;   
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('tx_name');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
      END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('username');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
      END IF;      
    DBMS_LOB.FREETEMPORARY(ddl_text);
  ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
    res := lcr.GETOBJECT(rowlcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' || 
                         rowlcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || rowlcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || rowlcr.IS_NULL_TAG); 
    DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE); 
    oldlist := rowlcr.GET_VALUES('old');
    FOR i IN 1..oldlist.COUNT LOOP
      IF oldlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('old(' || i || '): ' || oldlist(i).column_name);
        print_any(oldlist(i).data);
      END IF;
    END LOOP;
    newlist := rowlcr.GET_VALUES('new', 'n');
    FOR i in 1..newlist.count LOOP
      IF newlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('new(' || i || '): ' || newlist(i).column_name);
        print_any(newlist(i).data);
      END IF;
    END LOOP;
    -- Print extra attributes in row LCR
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('row_id');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('row_id: ' || ext_attr.ACCESSUROWID());
      END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('serial#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
      END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('session#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
      END IF; 
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('thread#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
      END IF;   
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('tx_name');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
      END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('username');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
      END IF;          
  ELSE
    DBMS_OUTPUT.PUT_LINE('Non-LCR Message with type ' || typenm);
  END IF;
END print_lcr;
/

Step 4   Create a Procedure that Prints All the LCRs in the Error Queue

The following procedure prints all of the LCRs in all of the error queues. It calls the print_lcr procedure created in "Create a Procedure that Prints a Specified LCR".

CREATE OR REPLACE PROCEDURE print_errors IS
  CURSOR c IS
    SELECT LOCAL_TRANSACTION_ID,
           SOURCE_DATABASE,
           MESSAGE_NUMBER,
           MESSAGE_COUNT,
           ERROR_NUMBER,
           ERROR_MESSAGE
      FROM DBA_APPLY_ERROR
      ORDER BY SOURCE_DATABASE, SOURCE_COMMIT_SCN;
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errnum NUMBER := 0;
  errno  NUMBER;
  errmsg VARCHAR2(255);
  lcr    ANYDATA;
  r      NUMBER;
BEGIN
  FOR r IN c LOOP
    errnum := errnum + 1;
    msgcnt := r.MESSAGE_COUNT;
    txnid  := r.LOCAL_TRANSACTION_ID;
    source := r.SOURCE_DATABASE;
    msgno  := r.MESSAGE_NUMBER;
    errno  := r.ERROR_NUMBER;
    errmsg := r.ERROR_MESSAGE;
DBMS_OUTPUT.PUT_LINE('*************************************************');
    DBMS_OUTPUT.PUT_LINE('----- ERROR #' || errnum);
    DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
    DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
    DBMS_OUTPUT.PUT_LINE('----Error in Message: '|| msgno);
    DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
    DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
    FOR i IN 1..msgcnt LOOP
      DBMS_OUTPUT.PUT_LINE('--message: ' || i);
        lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid);
        print_lcr(lcr);
    END LOOP;
  END LOOP;
END print_errors;
/

To run this procedure after you create it, enter the following:

SET SERVEROUTPUT ON SIZE 1000000

EXEC print_errors

Step 5   Create a Procedure that Prints All the Error LCRs for a Transaction

The following procedure prints all the LCRs in the error queue for a particular transaction. It calls the print_lcr procedure created in "Create a Procedure that Prints a Specified LCR".

CREATE OR REPLACE PROCEDURE print_transaction(ltxnid IN VARCHAR2) IS
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errno  NUMBER;
  errmsg VARCHAR2(128);
  lcr    ANYDATA;
BEGIN
  SELECT LOCAL_TRANSACTION_ID,
         SOURCE_DATABASE,
         MESSAGE_NUMBER,
         MESSAGE_COUNT,
         ERROR_NUMBER,
         ERROR_MESSAGE
      INTO txnid, source, msgno, msgcnt, errno, errmsg
      FROM DBA_APPLY_ERROR
      WHERE LOCAL_TRANSACTION_ID =  ltxnid;
  DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
  DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
  DBMS_OUTPUT.PUT_LINE('----Error in Message: '|| msgno);
  DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
  DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
  FOR i IN 1..msgcnt LOOP
  DBMS_OUTPUT.PUT_LINE('--message: ' || i);
    lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid); -- gets the LCR
    print_lcr(lcr);
  END LOOP;
END print_transaction;
/

To run this procedure after you create it, pass to it the local transaction identifier of an error transaction. For example, if the local transaction identifier is 1.17.2485, then enter the following:

SET SERVEROUTPUT ON SIZE 1000000

EXEC print_transaction('1.17.2485')

21 Monitoring Streams Queues and Propagations

This chapter provides sample queries that you can use to monitor Streams queues and propagations.

This chapter contains these topics:


Note:

The Streams tool in the Oracle Enterprise Manager Console is also an excellent way to monitor a Streams environment. See the online help for the Streams tool for more information.


Monitoring ANYDATA Queues and Messaging

The following sections contain instructions for displaying information about ANYDATA queues and messaging:

Displaying the ANYDATA Queues in a Database

To display all of the ANYDATA queues in a database, run the following query:

COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Queue Name' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22
COLUMN USER_COMMENT HEADING 'Comment' FORMAT A15

SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
  FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
  WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
        q.QUEUE_TABLE = t.QUEUE_TABLE AND
        q.OWNER       = t.OWNER;

Your output looks similar to the following:

Owner      Queue Name                   Queue Table            Comment
---------- ---------------------------- ---------------------- ---------------
SYS        AQ$_SCHEDULER$_JOBQTAB_E     SCHEDULER$_JOBQTAB     exception queue
SYS        SCHEDULER$_JOBQ              SCHEDULER$_JOBQTAB     Scheduler job q
                                                               ueue
SYS        AQ$_DIR$EVENT_TABLE_E        DIR$EVENT_TABLE        exception queue
SYS        DIR$EVENT_QUEUE              DIR$EVENT_TABLE
SYS        AQ$_DIR$CLUSTER_DIR_TABLE_E  DIR$CLUSTER_DIR_TABLE  exception queue
SYS        DIR$CLUSTER_DIR_QUEUE        DIR$CLUSTER_DIR_TABLE
STRMADMIN  AQ$_STREAMS_QUEUE_TABLE_E    STREAMS_QUEUE_TABLE    exception queue
STRMADMIN  STREAMS_QUEUE                STREAMS_QUEUE_TABLE

An exception queue is created automatically when you create an ANYDATA queue.
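
To list only the exception queues in a database, you can filter the DBA_QUEUES view on its QUEUE_TYPE column. The following query is a minimal sketch of this approach:

-- List only the exception queues in the database
COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Exception Queue' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22

SELECT OWNER, NAME, QUEUE_TABLE
  FROM DBA_QUEUES
  WHERE QUEUE_TYPE = 'EXCEPTION_QUEUE';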

Viewing the Messaging Clients in a Database

You can view the messaging clients in a database by querying the DBA_STREAMS_MESSAGE_CONSUMERS data dictionary view. The query in this section displays the following information about each messaging client:

  • The name of the messaging client

  • The owner of the queue used by the messaging client

  • The name of the queue used by the messaging client

  • The name of the positive rule set used by the messaging client

  • The name of the negative rule set used by the messaging client

Run the following query to view this information about messaging clients:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A25
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A18
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A11
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A11

SELECT STREAMS_NAME, 
       QUEUE_OWNER, 
       QUEUE_NAME, 
       RULE_SET_NAME, 
       NEGATIVE_RULE_SET_NAME 
  FROM DBA_STREAMS_MESSAGE_CONSUMERS;

Your output looks similar to the following:

Messaging                 Queue                         Positive    Negative
Client                    Owner      Queue Name         Rule Set    Rule Set
------------------------- ---------- ------------------ ----------- -----------
SCHEDULER_PICKUP          SYS        SCHEDULER$_JOBQ    RULESET$_8
SCHEDULER_COORDINATOR     SYS        SCHEDULER$_JOBQ    RULESET$_4
HR                        STRMADMIN  STREAMS_QUEUE      RULESET$_15

See Also:

Chapter 3, "Streams Staging and Propagation" for more information about messaging clients

Viewing Message Notifications

You can configure a message notification to send a notification when a message that can be dequeued by a messaging client is enqueued into a queue. The notification can be sent to an email address, to an HTTP URL, or to a PL/SQL procedure. Run the following query to view the message notifications configured in a database:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A10
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A20
COLUMN NOTIFICATION_TYPE HEADING 'Notification|Type' FORMAT A15
COLUMN NOTIFICATION_ACTION HEADING 'Notification|Action' FORMAT A25

SELECT STREAMS_NAME, 
       QUEUE_OWNER, 
       QUEUE_NAME, 
       NOTIFICATION_TYPE, 
       NOTIFICATION_ACTION 
  FROM DBA_STREAMS_MESSAGE_CONSUMERS
  WHERE NOTIFICATION_TYPE IS NOT NULL;

Your output looks similar to the following:

Messaging  Queue                      Notification    Notification
Client     Owner Queue Name           Type            Action
---------- ----- -------------------- --------------- -------------------------
OE         OE    NOTIFICATION_QUEUE   MAIL            mary.smith@mycompany.com
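
For reference, a notification such as the one shown in this output can be configured with the SET_MESSAGE_NOTIFICATION procedure in the DBMS_STREAMS_ADM package. The following is a minimal sketch, not part of this example's configuration; the messaging client name and queue name mirror the sample output and are assumptions to adapt to your environment:

BEGIN
  DBMS_STREAMS_ADM.SET_MESSAGE_NOTIFICATION(
    streams_name         => 'oe',                        -- assumed client name
    notification_action  => 'mary.smith@mycompany.com',  -- e-mail address
    notification_type    => 'MAIL',
    include_notification => true,
    queue_name           => 'oe.notification_queue');    -- assumed queue name
END;
/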

Determining the Consumer of Each User-Enqueued Message in a Queue

To determine the consumer for each user-enqueued message in a queue, query AQ$queue_table_name in the queue owner's schema, where queue_table_name is the name of the queue table. For example, to find the consumers of the user-enqueued messages in the oe_q_table_any queue table, run the following query:

COLUMN MSG_ID HEADING 'Message ID' FORMAT A32
COLUMN MSG_STATE HEADING 'Message State' FORMAT A13
COLUMN CONSUMER_NAME HEADING 'Consumer' FORMAT A30

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME FROM AQ$OE_Q_TABLE_ANY;

Your output looks similar to the following:

Message ID                       Message State Consumer
-------------------------------- ------------- ------------------------------
B79AC412AE6E08CAE034080020AE3E0A PROCESSED     OE
B79AC412AE6F08CAE034080020AE3E0A PROCESSED     OE
B79AC412AE7008CAE034080020AE3E0A PROCESSED     OE

Note:

This query lists only user-enqueued messages, not captured messages.


See Also:

Oracle Streams Advanced Queuing User's Guide and Reference for an example that enqueues messages into an ANYDATA queue

Viewing the Contents of User-Enqueued Messages in a Queue

In an ANYDATA queue, to view the contents of a payload that is encapsulated within an ANYDATA payload, you query the queue table using the Accessdata_type member functions of the ANYDATA type, where data_type is the type of payload to view.


See Also:

"Wrapping User Message Payloads in an ANYDATA Wrapper and Enqueuing Them" for an example that enqueues the messages shown in the queries in this section into an ANYDATA queue

For example, to view the contents of a payload of type NUMBER in a queue with a queue table named oe_q_table_any, run the following query as the queue owner:

SELECT qt.user_data.AccessNumber() "Numbers in Queue" 
  FROM strmadmin.oe_q_table_any qt;

Your output looks similar to the following:

Numbers in Queue
----------------
              16

Similarly, to view the contents of a payload of type VARCHAR2 in a queue with a queue table named oe_q_table_any, run the following query:

SELECT qt.user_data.AccessVarchar2() "Varchar2s in Queue"
   FROM strmadmin.oe_q_table_any qt;

Your output looks similar to the following:

Varchar2s in Queue
--------------------------------------------------------------------------------
Chemicals - SW

To view the contents of a payload of a user-defined type, you query the queue table using a custom function that you create. For example, to view the contents of a payload of type oe.cust_address_typ, connect as the oe user, who owns the type, and create a function similar to the following:

CONNECT oe/oe

CREATE OR REPLACE FUNCTION oe.view_cust_address_typ(
in_any IN ANYDATA) 
RETURN oe.cust_address_typ
IS
  address   oe.cust_address_typ;
  num_var   NUMBER;
BEGIN
  IF (in_any.GetTypeName() = 'OE.CUST_ADDRESS_TYP') THEN
    num_var := in_any.GetObject(address);
    RETURN address;
  ELSE RETURN NULL;
  END IF;
END;
/

GRANT EXECUTE ON oe.view_cust_address_typ TO strmadmin;

GRANT EXECUTE ON oe.cust_address_typ TO strmadmin;

Query the queue table using the function, as in the following example:

CONNECT strmadmin/strmadminpw

SELECT oe.view_cust_address_typ(qt.user_data) "Customer Addresses"
  FROM strmadmin.oe_q_table_any qt 
  WHERE qt.user_data.GetTypeName() = 'OE.CUST_ADDRESS_TYP';

Your output looks similar to the following:

Customer Addresses(STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID
--------------------------------------------------------------------------------
CUST_ADDRESS_TYP('1646 Brazil Blvd', '361168', 'Chennai', 'Tam', 'IN')

Monitoring Buffered Queues

A buffered queue includes the following storage areas:

  • System Global Area (SGA) memory associated with the queue

  • Part of the queue table for the queue, which stores messages that have spilled from memory

Buffered queues are stored in the Streams pool, and the Streams pool is a portion of memory in the System Global Area (SGA) that is used by Streams. In a Streams environment, LCRs captured by a capture process always are stored in the buffered queue of an ANYDATA queue. Users and applications can also enqueue messages into buffered queues, and these buffered queues can be part of ANYDATA queues or part of typed queues.

Buffered queues enable Oracle databases to optimize messages by storing them in the SGA instead of always storing them in a queue table. Captured messages always are stored in buffered queues, but user-enqueued LCRs and user messages can be stored in buffered queues or persistently in queue tables. Messages in a buffered queue can spill from memory if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate queue table.
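
Because buffered queues reside in the Streams pool, it can be useful to check the current size of the pool when you interpret spill statistics. The following query is a minimal sketch; the component name 'streams pool' is the name commonly reported in the V$SGA_DYNAMIC_COMPONENTS view, but verify the exact name in your release:

-- Display the current memory allocated to the Streams pool
COLUMN COMPONENT HEADING 'Component' FORMAT A20
COLUMN CURRENT_SIZE HEADING 'Current Size|in Bytes' FORMAT 999999999999

SELECT COMPONENT, CURRENT_SIZE
  FROM V$SGA_DYNAMIC_COMPONENTS
  WHERE COMPONENT = 'streams pool';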

The following sections describe queries that monitor buffered queues:

Determining the Number of Messages in Each Buffered Queue

The V$BUFFERED_QUEUES dynamic performance view contains information about the number of messages in a buffered queue. The messages can be captured messages, or user-enqueued messages, or both.

You can determine the following information about each buffered queue in a database by running the query in this section:

  • The queue owner

  • The queue name

  • The number of messages currently in memory

  • The number of messages that have spilled from memory into the queue table

  • The total number of messages in the buffered queue, which includes the messages in memory and the messages spilled to the queue table

To display this information, run the following query:

COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT QUEUE_SCHEMA, 
       QUEUE_NAME, 
       (NUM_MSGS - SPILL_MSGS) MEM_MSG, 
       SPILL_MSGS, 
       NUM_MSGS
  FROM V$BUFFERED_QUEUES;

Your output looks similar to the following:

                                     Messages      Messages      Total Messages
Queue Owner     Queue Name          in Memory       Spilled   in Buffered Queue
--------------- --------------- ------------- ------------- -------------------
STRMADMIN       STREAMS_QUEUE             534            21                 555

Viewing the Capture Processes for the LCRs in Each Buffered Queue

A capture process is a queue publisher that enqueues captured messages into a buffered queue. These LCRs can be propagated to other queues subsequently. By querying the V$BUFFERED_PUBLISHERS dynamic performance view, you can display each capture process that captured the LCRs in the buffered queue. These LCRs might have been captured at the local database, or they might have been captured at a remote database and propagated to the queue specified in the query.

The query in this section assumes that the buffered queues in the local database only store captured messages, not user-enqueued messages. The query displays the following information about each capture process:

  • The name of a capture process that captured the LCRs in the buffered queue

  • If the capture process is running on a remote database, and the captured messages have been propagated to the local queue, then the name of the queue and database from which the captured messages were last propagated

  • The name of the local queue staging the captured messages

  • The total number of LCRs captured by a capture process that have been staged in the buffered queue since the database instance was last started

  • The message number of the LCR last enqueued into the buffered queue from the sender

To display this information, run the following query:

COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A13
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A27
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 99999999

SELECT SENDER_NAME,
       SENDER_ADDRESS,
       QUEUE_NAME,        
       CNUM_MSGS, 
       LAST_ENQUEUED_MSG
  FROM V$BUFFERED_PUBLISHERS;

Your output looks similar to the following:

                                                             Number      Last
Capture                                                     of LCRs  Enqueued
Process       Sender Queue                Queue Name       Enqueued       LCR
------------- --------------------------- --------------- --------- ---------
CAPTURE_HR    "STRMADMIN"."STREAMS_QUEUE" STREAMS_QUEUE         382       844
              @MULT3.NET

CAPTURE_HR    "STRMADMIN"."STREAMS_QUEUE" STREAMS_QUEUE         387       840
              @MULT2.NET

CAPTURE_HR                                STREAMS_QUEUE          75       833

This output shows the following:

  • 382 LCRs from the capture_hr capture process running on a remote database were propagated from a queue named streams_queue on database mult3.net to the local queue named streams_queue. The message number of the last enqueued LCR from this sender was 844.

  • 387 LCRs from the capture_hr capture process running on a remote database were propagated from a queue named streams_queue on database mult2.net to the local queue named streams_queue. The message number of the last enqueued LCR from this sender was 840.

  • 75 LCRs from the local capture_hr capture process were enqueued into the local queue named streams_queue. The capture process is local because the Sender Queue column is NULL. The message number of the last enqueued LCR from this capture process was 833.

Displaying Information About Propagations that Send Buffered Messages

The query in this section displays the following information about each propagation that sends buffered messages from a buffered queue in the local database:

  • The name of the propagation

  • The queue owner

  • The queue name

  • The name of the database link used by the propagation

  • The status of the propagation schedule

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20

SELECT p.PROPAGATION_NAME,
       s.QUEUE_SCHEMA,
       s.QUEUE_NAME,
       s.DBLINK,
       s.SCHEDULE_STATUS
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.DESTINATION_DBLINK = s.DBLINK AND
        p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

Your output looks similar to the following:

                Queue      Queue           Database
Propagation     Owner      Name            Link       Schedule Status
--------------- ---------- --------------- ---------- --------------------
MULT1_TO_MULT3  STRMADMIN  STREAMS_QUEUE   MULT3.NET  SCHEDULE ENABLED
MULT1_TO_MULT2  STRMADMIN  STREAMS_QUEUE   MULT2.NET  SCHEDULE ENABLED

Displaying the Number of Messages and Bytes Sent By Propagations

The query in this section displays the number of messages and the number of bytes sent by each propagation that sends buffered messages from a buffered queue in the local database. Specifically, the query displays the following information:

  • The name of the propagation

  • The queue name

  • The name of the database link used by the propagation

  • The total number of messages sent since the database instance was last started

  • The total number of bytes sent since the database instance was last started

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 99999999

SELECT p.PROPAGATION_NAME,
       s.QUEUE_NAME,
       s.DBLINK,
       s.TOTAL_MSGS,
       s.TOTAL_BYTES
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.DESTINATION_DBLINK = s.DBLINK AND
        p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

Your output looks similar to the following:

                Queue           Database       Total     Total
Propagation     Name            Link        Messages     Bytes
--------------- --------------- ---------- --------- ---------
MULT1_TO_MULT3  STREAMS_QUEUE   MULT3.NET         79     71467
MULT1_TO_MULT2  STREAMS_QUEUE   MULT2.NET         79     71467

Displaying Performance Statistics for Propagations that Send Buffered Messages

The query in this section displays the amount of time that a propagation sending buffered messages spends performing various tasks. Each propagation sends messages from the source queue to the destination queue. Specifically, the query displays the following information:

  • The name of the propagation

  • The queue name

  • The name of the database link used by the propagation

  • The amount of time spent dequeuing messages from the queue since the database instance was last started, in seconds

  • The amount of time spent pickling messages since the database instance was last started, in seconds. Pickling involves changing a message in memory into a series of bytes that can be sent over a network.

  • The amount of time spent propagating messages since the database instance was last started, in seconds

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.PROPAGATION_NAME,
       s.QUEUE_NAME,
       s.DBLINK,
       (s.ELAPSED_DEQUEUE_TIME / 100) ELAPSED_DEQUEUE_TIME,
       (s.ELAPSED_PICKLE_TIME / 100) ELAPSED_PICKLE_TIME,
       (s.ELAPSED_PROPAGATION_TIME / 100) ELAPSED_PROPAGATION_TIME
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.DESTINATION_DBLINK = s.DBLINK AND
        p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

Your output looks similar to the following:

                Queue         Database       Dequeue       Pickle  Propagation
Propagation     Name          Link              Time         Time         Time
--------------- ------------- --------- ------------ ------------ ------------
MULT1_TO_MULT2  STREAMS_QUEUE MULT2.NET        30.65        45.10        10.91
MULT1_TO_MULT3  STREAMS_QUEUE MULT3.NET        25.36        37.07         8.35

Viewing the Propagations Dequeuing Messages from Each Buffered Queue

Propagations are queue subscribers that can dequeue messages from a queue. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the propagations that can dequeue buffered messages from a queue.

You can also use the V$BUFFERED_SUBSCRIBERS dynamic performance view to determine the performance of a propagation. For example, if a propagation has a high number of spilled messages, then that propagation might not be dequeuing messages fast enough from the buffered queue. Spilling messages to a queue table has a negative impact on the performance of your Streams environment.

Apply processes also are queue subscribers. This query joins with the DBA_PROPAGATION and V$BUFFERED_QUEUES views to limit the output to propagations only and to show the propagation name of each propagation.

The query in this section displays the following information about each propagation that can dequeue messages from queues:

  • The name of the propagation.

  • The destination database, which is the database that contains the destination queue for the propagation.

  • The sequence number for the message most recently enqueued into the queue. The sequence number of a message shows the order of the message in the queue.

  • The sequence number for the message in the queue most recently browsed by the propagation.

  • The sequence number for the message most recently dequeued from the queue by the propagation.

  • The current number of messages in the queue waiting to be dequeued by the propagation.

  • The cumulative number of messages spilled from memory to the queue table for the propagation since the database last started.

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN SUBSCRIBER_ADDRESS HEADING 'Destination|Database' FORMAT A11
COLUMN CURRENT_ENQ_SEQ HEADING 'Current|Enqueued|Sequence' FORMAT 99999999
COLUMN LAST_BROWSED_SEQ HEADING 'Last|Browsed|Sequence' FORMAT 99999999
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)' 
  FORMAT 99999999

SELECT p.PROPAGATION_NAME,
       s.SUBSCRIBER_ADDRESS, 
       s.CURRENT_ENQ_SEQ,
       s.LAST_BROWSED_SEQ,     
       s.LAST_DEQUEUED_SEQ,
       s.NUM_MSGS,  
       s.TOTAL_SPILLED_MSG
FROM DBA_PROPAGATION p, V$BUFFERED_SUBSCRIBERS s, V$BUFFERED_QUEUES q 
WHERE q.QUEUE_ID = s.QUEUE_ID AND 
      p.SOURCE_QUEUE_OWNER = q.QUEUE_SCHEMA AND
      p.SOURCE_QUEUE_NAME = q.QUEUE_NAME AND 
      p.DESTINATION_DBLINK = s.SUBSCRIBER_ADDRESS; 

Your output looks similar to the following:

                                                          Number of    Number of
                              Current      Last      Last  Messages      Spilled
                Destination  Enqueued   Browsed  Dequeued  in Queue     Messages
Propagation     Database     Sequence  Sequence  Sequence (Current) (Cumulative)
--------------- ----------- --------- --------- --------- --------- ------------
MULT1_TO_MULT2  MULT2.NET         157       144       129        24            0
MULT1_TO_MULT3  MULT3.NET          98        88        81        53            0

Note:

If there are multiple propagations using the same database link but propagating messages to different queues at the destination database, then the statistics returned by this query are approximate rather than accurate.

Displaying Performance Statistics for Propagations that Receive Buffered Messages

The query in this section displays the amount of time that each propagation receiving buffered messages spends performing various tasks. Each propagation receives the messages and enqueues them into the destination queue for the propagation. Specifically, the query displays the following information:

  • The name of the source queue from which messages are propagated.

  • The name of the source database.

  • The amount of time spent unpickling messages since the database instance was last started, in seconds. Unpickling involves changing a series of bytes that can be sent over a network back into a buffered message in memory.

  • The amount of time spent evaluating rules for propagated messages since the database instance was last started, in seconds.

  • The amount of time spent enqueuing messages into the destination queue for the propagation since the database instance was last started, in seconds.

To display this information, run the following query:

COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99

SELECT SRC_QUEUE_NAME,
       SRC_DBNAME,
       (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
       (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
       (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME
  FROM V$PROPAGATION_RECEIVER;

Your output looks similar to the following:

Source                                                    Rule
Queue                Source              Unpickle   Evaluation      Enqueue
Name                 Database                Time         Time         Time
-------------------- --------------- ------------ ------------ ------------
STREAMS_QUEUE        MULT2.NET              45.65         5.44        45.85
STREAMS_QUEUE        MULT3.NET              53.35         8.01        50.41

Viewing the Apply Processes Dequeuing Messages from Each Buffered Queue

Apply processes are queue subscribers that can dequeue messages from a queue. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the apply processes that can dequeue messages from a queue.

You can also use the V$BUFFERED_SUBSCRIBERS dynamic performance view to determine the performance of an apply process. For example, if an apply process has a high number of spilled messages, then that apply process might not be dequeuing messages fast enough from the buffered queue. Spilling messages to a queue table has a negative impact on the performance of your Streams environment.

This query joins with the V$BUFFERED_QUEUES view to show the name of the queue. In addition, propagations are also queue subscribers, so this query limits the output to subscribers whose SUBSCRIBER_ADDRESS is NULL to return only apply processes.

The query in this section displays the following information about the apply processes that can dequeue messages from queues:

  • The name of the apply process.

  • The queue owner.

  • The queue name.

  • The sequence number for the message most recently dequeued by the apply process. The sequence number of a message shows the order of the message in the queue.

  • The current number of messages in the queue waiting to be dequeued by the apply process.

  • The cumulative number of messages spilled from memory to the queue table for the apply process since the database last started.

To display this information, run the following query:

COLUMN SUBSCRIBER_NAME HEADING 'Apply Process' FORMAT A16
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)' 
  FORMAT 99999999

SELECT s.SUBSCRIBER_NAME,
       q.QUEUE_SCHEMA,
       q.QUEUE_NAME, 
       s.LAST_DEQUEUED_SEQ,
       s.NUM_MSGS,
       s.TOTAL_SPILLED_MSG
FROM V$BUFFERED_QUEUES q, V$BUFFERED_SUBSCRIBERS s, DBA_APPLY a
WHERE q.QUEUE_ID = s.QUEUE_ID AND 
      s.SUBSCRIBER_ADDRESS IS NULL AND
      s.SUBSCRIBER_NAME = a.APPLY_NAME;

Your output looks similar to the following:

                                                 Last Number of   Number of
                 Queue      Queue            Dequeued  Messages     Spilled
Apply Process    Owner      Name             Sequence  in Queue    Messages
                                                       (Current)(Cumulative)
---------------- ---------- --------------- --------- --------- ------------
APPLY_FROM_MULT3 STRMADMIN  STREAMS_QUEUE          49       148            0
APPLY_FROM_MULT2 STRMADMIN  STREAMS_QUEUE          85       241            1

Monitoring Streams Propagations and Propagation Jobs

The following sections contain queries that you can run to display information about propagations and propagation jobs:

Displaying the Queues and Database Link for Each Propagation

You can display information about each propagation by querying the DBA_PROPAGATION data dictionary view. This view contains information about each propagation whose source queue is at the local database.

The query in this section displays the following information about each propagation:

  • The propagation name

  • The source queue name

  • The database link used by the propagation

  • The destination queue name

  • The status of the propagation, either ENABLED, DISABLED, or ABORTED

  • Whether the propagation is a queue-to-queue propagation

To display this information about each propagation in a database, run the following query:

COLUMN PROPAGATION_NAME        HEADING 'Propagation|Name'   FORMAT A19
COLUMN SOURCE_QUEUE_NAME       HEADING 'Source|Queue|Name'  FORMAT A17
COLUMN DESTINATION_DBLINK      HEADING 'Database|Link'      FORMAT A9
COLUMN DESTINATION_QUEUE_NAME  HEADING 'Dest|Queue|Name'    FORMAT A15
COLUMN STATUS                  HEADING 'Status'             FORMAT A8
COLUMN QUEUE_TO_QUEUE          HEADING 'Queue-|to-|Queue?'  FORMAT A6
 
SELECT PROPAGATION_NAME,
       SOURCE_QUEUE_NAME,
       DESTINATION_DBLINK, 
       DESTINATION_QUEUE_NAME,
       STATUS,
       QUEUE_TO_QUEUE
  FROM DBA_PROPAGATION;

Your output looks similar to the following:

                    Source                      Dest                     Queue-
Propagation         Queue             Database  Queue                    to-
Name                Name              Link      Name            Status   Queue?
------------------- ----------------- --------- --------------- -------- ------
STREAMS_PROPAGATION STREAMS_CAPTURE_Q INST2.NET STREAMS_APPLY_Q ENABLED  FALSE

Determining the Source Queue and Destination Queue for Each Propagation

You can determine the source queue and destination queue for each propagation by querying the DBA_PROPAGATION data dictionary view.

The query in this section displays the following information about each propagation:

  • The propagation name

  • The source queue owner

  • The source queue name

  • The database that contains the source queue

  • The destination queue owner

  • The destination queue name

  • The database that contains the destination queue

To display this information about each propagation in a database, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN SOURCE_QUEUE_OWNER HEADING 'Source|Queue|Owner' FORMAT A10
COLUMN 'Source Queue' HEADING 'Source|Queue' FORMAT A15
COLUMN DESTINATION_QUEUE_OWNER HEADING 'Dest|Queue|Owner'   FORMAT A10
COLUMN 'Destination Queue' HEADING 'Destination|Queue' FORMAT A15

SELECT p.PROPAGATION_NAME,
       p.SOURCE_QUEUE_OWNER,
       p.SOURCE_QUEUE_NAME ||'@'|| 
       g.GLOBAL_NAME "Source Queue",
       p.DESTINATION_QUEUE_OWNER,
       p.DESTINATION_QUEUE_NAME ||'@'|| 
       p.DESTINATION_DBLINK "Destination Queue"
  FROM DBA_PROPAGATION p, GLOBAL_NAME g;

Your output looks similar to the following:

                     Source                     Dest
Propagation          Queue      Source          Queue      Destination
Name                 Owner      Queue           Owner      Queue
-------------------- ---------- --------------- ---------- ---------------
STREAMS_PROPAGATION  STRMADMIN  STREAMS_CAPTURE STRMADMIN  STREAMS_APPLY_Q
                                _Q@INST1.NET               @INST2.NET

Determining the Rule Sets for Each Propagation

The query in this section displays the following information for each propagation:

  • The propagation name

  • The owner of the positive rule set for the propagation

  • The name of the positive rule set used by the propagation

  • The owner of the negative rule set used by the propagation

  • The name of the negative rule set used by the propagation

To display this general information about each propagation in a database, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN RULE_SET_OWNER HEADING 'Positive|Rule Set|Owner' FORMAT A10
COLUMN RULE_SET_NAME HEADING 'Positive Rule|Set Name' FORMAT A15
COLUMN NEGATIVE_RULE_SET_OWNER HEADING 'Negative|Rule Set|Owner' FORMAT A10
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative Rule|Set Name' FORMAT A15

SELECT PROPAGATION_NAME, 
       RULE_SET_OWNER, 
       RULE_SET_NAME, 
       NEGATIVE_RULE_SET_OWNER, 
       NEGATIVE_RULE_SET_NAME
  FROM DBA_PROPAGATION;

Your output looks similar to the following:

                     Positive                   Negative
Propagation          Rule Set   Positive Rule   Rule Set   Negative Rule
Name                 Owner      Set Name        Owner      Set Name
-------------------- ---------- --------------- ---------- ---------------
STRM01_PROPAGATION   STRMADMIN  RULESET$_22     STRMADMIN  RULESET$_31

Displaying the Schedule for a Propagation Job

The query in this section displays the following information about the propagation schedule for a propagation job used by a propagation named dbs1_to_dbs2:

  • The date and time when the propagation schedule started (or will start).

  • The duration of the propagation job, which is the amount of time the job propagates messages before restarting.

  • The next time the propagation will start.

  • The latency of the propagation job, which is the maximum wait time to propagate a new message during the duration, when all other messages in the queue to the relevant destination have been propagated.

  • Whether or not the propagation job is enabled.

  • The name of the process that most recently executed the schedule.

  • The number of consecutive times schedule execution has failed, if any. After 16 consecutive failures, a propagation job becomes disabled automatically.

Run this query at the database that contains the source queue:

COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 99999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 99999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99

SELECT DISTINCT TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
       s.PROPAGATION_WINDOW, 
       s.NEXT_TIME, 
       s.LATENCY,
       DECODE(s.SCHEDULE_DISABLED,
                'Y', 'Disabled',
                'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME,
       s.FAILURES
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2'
  AND p.DESTINATION_DBLINK = s.DESTINATION
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME;

Your output looks similar to the following:

                    Duration Next        Latency                   Number of
Start Date        in Seconds Time     in Seconds Status   Process   Failures
----------------- ---------- -------- ---------- -------- -------- ---------
15:23:40 03/02/02                              5 Enabled  J002             0

This propagation job uses the default schedule for a Streams propagation job. That is, the duration and next time are both NULL, and the latency is five seconds. When the duration is NULL, the job propagates changes without restarting automatically. When the next time is NULL, the propagation job is running currently.
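
If a propagation job has been disabled automatically after 16 consecutive failures, then you can restart the propagation after you correct the underlying problem. The following is a minimal sketch, assuming a propagation named dbs1_to_dbs2:

BEGIN
  -- Restart the disabled propagation (assumed propagation name)
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'dbs1_to_dbs2');
END;
/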


Determining the Total Number of Messages and Bytes Propagated

All propagation jobs from a source queue that share the same database link have a single propagation schedule. The query in this section displays the following information for each propagation:

  • The name of the propagation

  • The total time spent by the system executing the propagation schedule

  • The total number of messages propagated by the propagation schedule

  • The total number of bytes propagated by the propagation schedule

Run the following query to display this information for each propagation with a source queue at the local database:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN TOTAL_TIME HEADING 'Total Time|Executing|in Seconds' FORMAT 999999
COLUMN TOTAL_NUMBER HEADING 'Total Messages|Propagated' FORMAT 999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999

SELECT p.PROPAGATION_NAME, s.TOTAL_TIME, s.TOTAL_NUMBER, s.TOTAL_BYTES 
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
  WHERE p.DESTINATION_DBLINK = s.DESTINATION
    AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
    AND s.QNAME = p.SOURCE_QUEUE_NAME;

Your output looks similar to the following:

                     Total Time
Propagation           Executing Total Messages    Total Bytes
Name                 in Seconds   Propagated       Propagated
-------------------- ---------- -------------- --------------
MULT3_TO_MULT1              351          872           875252
MULT3_TO_MULT2              596          872           875252

See Also:

Oracle Streams Advanced Queuing User's Guide and Reference and Oracle Database Reference for more information about the DBA_QUEUE_SCHEDULES data dictionary view


A XML Schema for LCRs

The XML schema described in this appendix defines the format of a logical change record (LCR). The Oracle XML DB must be installed to use the XML schema for LCRs.
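
One way to verify that Oracle XML DB is installed is to query the DBA_REGISTRY view for the XDB component. The following query is a minimal sketch:

-- Check whether the Oracle XML DB (XDB) component is installed and valid
COLUMN COMP_NAME HEADING 'Component' FORMAT A30
COLUMN STATUS HEADING 'Status' FORMAT A10

SELECT COMP_NAME, STATUS
  FROM DBA_REGISTRY
  WHERE COMP_ID = 'XDB';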

This appendix contains this topic:

The namespace for this schema is the following:

http://xmlns.oracle.com/streams/schemas/lcr 

The schema is the following:

http://xmlns.oracle.com/streams/schemas/lcr/streamslcr.xsd

See Also:

Oracle XML DB Developer's Guide for more information about Oracle XML DB and for information about upgrading an existing XML schema for LCRs

Definition of the XML Schema for LCRs

The following is the XML schema definition for LCRs:

<schema xmlns="http://www.w3.org/2001/XMLSchema" 
        targetNamespace="http://xmlns.oracle.com/streams/schemas/lcr" 
        xmlns:lcr="http://xmlns.oracle.com/streams/schemas/lcr"
        xmlns:xdb="http://xmlns.oracle.com/xdb"
          version="1.0"
        elementFormDefault="qualified">
 
  <simpleType name = "short_name">
    <restriction base = "string">
      <maxLength value="30"/>
    </restriction>
  </simpleType>
 
  <simpleType name = "long_name">
    <restriction base = "string">
      <maxLength value="4000"/>
    </restriction>
  </simpleType>
 
  <simpleType name = "db_name">
    <restriction base = "string">
      <maxLength value="128"/>
    </restriction>
  </simpleType>
 
  <!-- Default session parameter is used if format is not specified -->
  <complexType name="datetime_format">
    <sequence>
      <element name = "value" type = "string" nillable="true"/>
      <element name = "format" type = "string" minOccurs="0" nillable="true"/>
    </sequence>
  </complexType>
 
  <complexType name="anydata">
    <choice>
      <element name="varchar2" type = "string" xdb:SQLType="CLOB" 
                                                        nillable="true"/>
 
      <!-- Represent char as varchar2. xdb:CHAR blank pads upto 2000 bytes! -->
      <element name="char" type = "string" xdb:SQLType="CLOB"
                                                        nillable="true"/>
      <element name="nchar" type = "string" xdb:SQLType="NCLOB"
                                                        nillable="true"/>
 
      <element name="nvarchar2" type = "string" xdb:SQLType="NCLOB"
                                                        nillable="true"/>
      <element name="number" type = "double" xdb:SQLType="NUMBER"
                                                        nillable="true"/>
      <element name="raw" type = "hexBinary" xdb:SQLType="BLOB" 
                                                        nillable="true"/>
      <element name="date" type = "lcr:datetime_format"/>
      <element name="timestamp" type = "lcr:datetime_format"/>
      <element name="timestamp_tz" type = "lcr:datetime_format"/>
      <element name="timestamp_ltz" type = "lcr:datetime_format"/>
 
      <!-- Interval YM should be as per format allowed by SQL -->
      <element name="interval_ym" type = "string" nillable="true"/>
 
      <!-- Interval DS should be as per format allowed by SQL -->
      <element name="interval_ds" type = "string" nillable="true"/>
 
      <element name="urowid" type = "string" xdb:SQLType="VARCHAR2"
                                                        nillable="true"/>
    </choice>
  </complexType>
 
  <complexType name="column_value">
    <sequence>
      <element name = "column_name" type = "lcr:long_name" nillable="false"/>
      <element name = "data" type = "lcr:anydata" nillable="false"/>
      <element name = "lob_information" type = "string" minOccurs="0"
                                                           nillable="true"/>
      <element name = "lob_offset" type = "nonNegativeInteger" minOccurs="0"
                                                           nillable="true"/>
      <element name = "lob_operation_size" type = "nonNegativeInteger" 
                                             minOccurs="0" nillable="true"/>
      <element name = "long_information" type = "string" minOccurs="0"
                                                           nillable="true"/>
    </sequence>
  </complexType>
 
  <complexType name="extra_attribute">
    <sequence>
      <element name = "attribute_name" type = "lcr:short_name"/>
      <element name = "attribute_value" type = "lcr:anydata"/>
    </sequence>
  </complexType>
 
  <element name = "ROW_LCR" xdb:defaultTable="">
    <complexType>
      <sequence>
        <element name = "source_database_name" type = "lcr:db_name" 
                                                            nillable="false"/>
        <element name = "command_type" type = "string" nillable="false"/>
        <element name = "object_owner" type = "lcr:short_name" 
                                                            nillable="false"/>
        <element name = "object_name" type = "lcr:short_name"
                                                            nillable="false"/>
        <element name = "tag" type = "hexBinary" xdb:SQLType="RAW" 
                                               minOccurs="0" nillable="true"/>
        <element name = "transaction_id" type = "string" minOccurs="0" 
                                                             nillable="true"/>
        <element name = "scn" type = "double" xdb:SQLType="NUMBER" 
                                               minOccurs="0" nillable="true"/>
        <element name = "old_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "old_value" type="lcr:column_value" 
                                                    maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
        <element name = "new_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "new_value" type="lcr:column_value" 
                                                    maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
        <element name = "extra_attribute_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "extra_attribute_value"
                       type="lcr:extra_attribute"
                       maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
      </sequence>
    </complexType>
  </element>
 
  <element name = "DDL_LCR" xdb:defaultTable="">
    <complexType>
      <sequence>
        <element name = "source_database_name" type = "lcr:db_name" 
                                                        nillable="false"/>
        <element name = "command_type" type = "string" nillable="false"/>
        <element name = "current_schema" type = "lcr:short_name"
                                                        nillable="false"/>
        <element name = "ddl_text" type = "string" xdb:SQLType="CLOB"
                                                        nillable="false"/>
        <element name = "object_type" type = "string"
                                        minOccurs = "0" nillable="true"/>
        <element name = "object_owner" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "object_name" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "logon_user" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "base_table_owner" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "base_table_name" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "tag" type = "hexBinary" xdb:SQLType="RAW"
                                        minOccurs = "0" nillable="true"/>
        <element name = "transaction_id" type = "string"
                                        minOccurs = "0" nillable="true"/>
        <element name = "scn" type = "double" xdb:SQLType="NUMBER"
                                        minOccurs = "0" nillable="true"/>
        <element name = "extra_attribute_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "extra_attribute_value"
                       type="lcr:extra_attribute"
                       maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
      </sequence>
    </complexType>
  </element>
</schema>

23 Monitoring Rules

This chapter provides sample queries that you can use to monitor rules, rule sets, and evaluation contexts.

This chapter contains these topics:


Note:

The Streams tool in the Oracle Enterprise Manager Console is also an excellent way to monitor a Streams environment. See the online help for the Streams tool for more information.


Displaying All Rules Used by All Streams Clients

Streams rules are created using the DBMS_STREAMS_ADM package or the Streams tool in the Oracle Enterprise Manager Console. Streams rules in the rule sets for a Streams client determine the behavior of the Streams client. Streams clients include capture processes, propagations, apply processes, and messaging clients. The rule sets for a Streams client can also contain rules created using the DBMS_RULE_ADM package, and these rules also determine the behavior of the Streams client.

For example, if a rule in the positive rule set for a capture process evaluates to TRUE for DML changes to the hr.employees table, then the capture process captures DML changes to this table. However, if a rule in the negative rule set for a capture process evaluates to TRUE for DML changes to the hr.employees table, then the capture process discards DML changes to this table.
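
As background, Streams rules such as the ones in this example are typically created with procedures in the DBMS_STREAMS_ADM package. The following is a minimal sketch, not part of this example's configuration: it adds a table-level DML rule for the hr.employees table to the positive rule set of a hypothetical capture process named strm01_capture. Setting inclusion_rule to false would add the rule to the negative rule set instead.

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',          -- assumed capture process name
    queue_name     => 'strmadmin.streams_queue', -- assumed queue name
    include_dml    => true,
    include_ddl    => false,
    inclusion_rule => true);  -- true: positive rule set; false: negative
END;
/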

You query the following data dictionary views to display all rules in the rule sets for Streams clients, including Streams rules and rules created using the DBMS_RULE_ADM package:

  • ALL_STREAMS_RULES

  • DBA_STREAMS_RULES

In addition, these two views display the current rule condition for each rule and whether the rule condition has been modified.

The query in this section displays the following information about all of the rules used by Streams clients in a database:

  • The name of the Streams client that uses the rule

  • The type of the Streams client, either CAPTURE, PROPAGATION, APPLY, or DEQUEUE

  • The name of the rule

  • The type of the rule set that contains the rule, either POSITIVE or NEGATIVE

  • For Streams rules, the level of the rule, either GLOBAL, SCHEMA, or TABLE

  • For Streams rules, the schema name and object name specified in the rule

  • For Streams rules, the type of the rule, either DML or DDL

Run the following query to display this information:

COLUMN STREAMS_NAME HEADING 'Streams|Name' FORMAT A14
COLUMN STREAMS_TYPE HEADING 'Streams|Type' FORMAT A11
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A12
COLUMN RULE_SET_TYPE HEADING 'Rule Set|Type' FORMAT A8
COLUMN STREAMS_RULE_TYPE HEADING 'Streams|Rule|Level' FORMAT A7
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A11
COLUMN RULE_TYPE HEADING 'Rule|Type' FORMAT A4

SELECT STREAMS_NAME, 
       STREAMS_TYPE,
       RULE_NAME,
       RULE_SET_TYPE,
       STREAMS_RULE_TYPE,
       SCHEMA_NAME,
       OBJECT_NAME,
       RULE_TYPE
  FROM DBA_STREAMS_RULES;

Your output looks similar to the following:

                                                 Streams
Streams        Streams     Rule         Rule Set Rule    Schema Object      Rule
Name           Type        Name         Type     Level   Name   Name        Type
-------------- ----------- ------------ -------- ------- ------ ----------- ----
STRM01_CAPTURE CAPTURE     JOBS4        POSITIVE TABLE   HR     JOBS        DML
STRM01_CAPTURE CAPTURE     JOBS5        POSITIVE TABLE   HR     JOBS        DDL
DBS1_TO_DBS2   PROPAGATION HR18         POSITIVE SCHEMA  HR                 DDL
DBS1_TO_DBS2   PROPAGATION HR17         POSITIVE SCHEMA  HR                 DML
APPLY          APPLY       HR20         POSITIVE SCHEMA  HR                 DML
APPLY          APPLY       JOB_HISTORY2 NEGATIVE TABLE   HR     JOB_HISTORY DML
OE             DEQUEUE     RULE$_28     POSITIVE

This output shows the rules in the rule sets used by each Streams client in the database. For example, the strm01_capture capture process uses a positive rule set with table-level DML and DDL rules for the hr.jobs table, and the apply process named apply uses a negative rule set with a table-level DML rule for the hr.job_history table.

The ALL_STREAMS_RULES and DBA_STREAMS_RULES views also contain information about the rule sets used by a Streams client, the current and original rule condition for Streams rules, whether the rule condition has been changed, the subsetting operation and DML condition for each Streams subset rule, the source database specified for each Streams rule, and information about the message type and message variable for Streams messaging rules.

The following data dictionary views also display Streams rules:

  • ALL_STREAMS_GLOBAL_RULES

  • ALL_STREAMS_MESSAGE_RULES

  • ALL_STREAMS_SCHEMA_RULES

  • ALL_STREAMS_TABLE_RULES

  • DBA_STREAMS_GLOBAL_RULES

  • DBA_STREAMS_MESSAGE_RULES

  • DBA_STREAMS_SCHEMA_RULES

  • DBA_STREAMS_TABLE_RULES

These views display Streams rules only. They do not display any manual modifications to these rules made by the DBMS_RULE_ADM package, and they do not display rules created using the DBMS_RULE_ADM package. These views can display the original rule condition for each rule only. They do not display the current rule condition for a rule if the rule condition was modified after the rule was created.

Displaying the Streams Rules Used by a Specific Streams Client

To determine which rules are in a rule set used by a particular Streams client, you can query the DBA_STREAMS_RULES data dictionary view. For example, suppose a database is running an apply process named strm01_apply. The following sections describe how to determine the rules in the positive rule set and negative rule set for this apply process.

Displaying the Rules in the Positive Rule Set for a Streams Client

The following query displays all of the rules in the positive rule set for an apply process named strm01_apply:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A12
COLUMN STREAMS_RULE_TYPE HEADING 'Streams|Rule|Level' FORMAT A7
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A11
COLUMN RULE_TYPE HEADING 'Rule|Type' FORMAT A4
COLUMN SOURCE_DATABASE HEADING 'Source' FORMAT A10
COLUMN INCLUDE_TAGGED_LCR HEADING 'Apply|Tagged|LCRs?' FORMAT A9

SELECT RULE_OWNER,
       RULE_NAME,
       STREAMS_RULE_TYPE,
       SCHEMA_NAME,
       OBJECT_NAME,
       RULE_TYPE,
       SOURCE_DATABASE,
       INCLUDE_TAGGED_LCR
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME  = 'STRM01_APPLY' AND
        RULE_SET_TYPE = 'POSITIVE';

If this query returns any rows, then the apply process applies LCRs containing changes that evaluate to TRUE for the rules.

Your output looks similar to the following:

                           Streams                                    Apply
           Rule            Rule    Schema Object      Rule            Tagged
Rule Owner Name            Level   Name   Name        Type Source     LCRs?
---------- --------------- ------- ------ ----------- ---- ---------- ---------
STRMADMIN  HR20            SCHEMA  HR                 DML   DBS1.NET  NO
STRMADMIN  HR21            SCHEMA  HR                 DDL   DBS1.NET  NO

Assuming the rule conditions for the Streams rules returned by this query have not been modified, these results show that the apply process applies LCRs containing DML changes and DDL changes to the hr schema and that the LCRs originated at the dbs1.net database. The rules in the positive rule set that instruct the apply process to apply these LCRs are owned by the strmadmin user and are named hr20 and hr21. Also, the apply process applies an LCR that satisfies one of these rules only if the tag in the LCR is NULL.

If the rule condition for a Streams rule has been modified, then you must check the current rule condition to determine the effect of the rule on a Streams client. Streams rules whose rule condition has been modified have NO for the SAME_RULE_CONDITION column.

Displaying the Rules in the Negative Rule Set for a Streams Client

The following query displays all of the rules in the negative rule set for an apply process named strm01_apply:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A15
COLUMN STREAMS_RULE_TYPE HEADING 'Streams|Rule|Level' FORMAT A7
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A11
COLUMN RULE_TYPE HEADING 'Rule|Type' FORMAT A4
COLUMN SOURCE_DATABASE HEADING 'Source' FORMAT A10
COLUMN INCLUDE_TAGGED_LCR HEADING 'Apply|Tagged|LCRs?' FORMAT A9

SELECT RULE_OWNER,
       RULE_NAME,
       STREAMS_RULE_TYPE,
       SCHEMA_NAME,
       OBJECT_NAME,
       RULE_TYPE,
       SOURCE_DATABASE,
       INCLUDE_TAGGED_LCR
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME  = 'STRM01_APPLY' AND
        RULE_SET_TYPE = 'NEGATIVE';

If this query returns any rows, then the apply process discards LCRs containing changes that evaluate to TRUE for the rules.

Your output looks similar to the following:

                           Streams                                    Apply
           Rule            Rule    Schema Object      Rule            Tagged
Rule Owner Name            Level   Name   Name        Type Source     LCRs?
---------- --------------- ------- ------ ----------- ---- ---------- ---------
STRMADMIN  JOB_HISTORY22   TABLE   HR     JOB_HISTORY DML  DBS1.NET   YES
STRMADMIN  JOB_HISTORY23   TABLE   HR     JOB_HISTORY DDL  DBS1.NET   YES

Assuming the rule conditions for the Streams rules returned by this query have not been modified, these results show that the apply process discards LCRs containing DML changes and DDL changes to the hr.job_history table and that the LCRs originated at the dbs1.net database. The rules in the negative rule set that instruct the apply process to discard these LCRs are owned by the strmadmin user and are named job_history22 and job_history23. Also, the apply process discards an LCR that satisfies one of these rules regardless of the value of the tag in the LCR.

If the rule condition for a Streams rule has been modified, then you must check the current rule condition to determine the effect of the rule on a Streams client. Streams rules whose rule condition has been modified have NO for the SAME_RULE_CONDITION column.

Displaying the Current Condition for a Rule

If you know the name of a rule, then you can display its rule condition. For example, consider the rule returned by the query in "Displaying the Streams Rules Used by a Specific Streams Client". The name of the rule is hr1, and you can display its condition by running the following query:

SET LONG  8000
SET PAGES 8000
SELECT RULE_CONDITION "Current Rule Condition"
  FROM DBA_STREAMS_RULES 
  WHERE RULE_NAME  = 'HR1' AND
        RULE_OWNER = 'STRMADMIN';

Your output looks similar to the following:

Current Rule Condition
-----------------------------------------------------------------
(:dml.get_object_owner() = 'HR' and :dml.is_null_tag() = 'Y' and 
:dml.get_source_database_name() = 'DBS1.NET' )

Displaying Modified Rule Conditions for Streams Rules

It is possible to modify the rule condition of a Streams rule. These modifications can change the behavior of the Streams clients using the Streams rule. In addition, some modifications can degrade rule evaluation performance.

The following query displays the rule name, the original rule condition, and the current rule condition for each Streams rule whose condition has been modified:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A12
COLUMN ORIGINAL_RULE_CONDITION HEADING 'Original Rule Condition' FORMAT A33
COLUMN RULE_CONDITION HEADING 'Current Rule Condition' FORMAT A33

SET LONG  8000
SET PAGES 8000
SELECT RULE_NAME, ORIGINAL_RULE_CONDITION, RULE_CONDITION
  FROM DBA_STREAMS_RULES 
  WHERE SAME_RULE_CONDITION = 'NO';

Your output looks similar to the following:

Rule Name    Original Rule Condition           Current Rule Condition
------------ --------------------------------- ---------------------------------
HR20         ((:dml.get_object_owner() = 'HR') ((:dml.get_object_owner() = 'HR')
              and :dml.is_null_tag() = 'Y' )    and :dml.is_null_tag() = 'Y' and
                                                :dml.get_object_name() != 'JOB_H
                                               ISTORY')

In this example, the output shows that the condition of the hr20 rule has been modified. Originally, this schema rule evaluated to TRUE for all changes to the hr schema. The current modified condition for this rule evaluates to TRUE for all changes to the hr schema, except for DML changes to the hr.job_history table.


Note:

The query in this section applies only to Streams rules. It does not apply to rules created using the DBMS_RULE_ADM package because these rules always show NULL for the ORIGINAL_RULE_CONDITION column and NULL for the SAME_RULE_CONDITION column.

Displaying the Evaluation Context for Each Rule Set

The following query displays the default evaluation context for each rule set in a database:

COLUMN RULE_SET_OWNER HEADING 'Rule Set|Owner' FORMAT A10
COLUMN RULE_SET_NAME HEADING 'Rule Set Name' FORMAT A20
COLUMN RULE_SET_EVAL_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A12
COLUMN RULE_SET_EVAL_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A30

SELECT RULE_SET_OWNER, 
       RULE_SET_NAME, 
       RULE_SET_EVAL_CONTEXT_OWNER,
       RULE_SET_EVAL_CONTEXT_NAME
  FROM DBA_RULE_SETS;

Your output looks similar to the following:

Rule Set                        Eval Context
Owner      Rule Set Name        Owner        Eval Context Name
---------- -------------------- ------------ ------------------------------
STRMADMIN  RULESET$_2           SYS          STREAMS$_EVALUATION_CONTEXT
STRMADMIN  STRM02_QUEUE_R       STRMADMIN    AQ$_STRM02_QUEUE_TABLE_V
STRMADMIN  APPLY_OE_RS          STRMADMIN    OE_EVAL_CONTEXT
STRMADMIN  OE_QUEUE_R           STRMADMIN    AQ$_OE_QUEUE_TABLE_V
STRMADMIN  AQ$_1_RE             STRMADMIN    AQ$_OE_QUEUE_TABLE_V
SUPPORT    RS                   SUPPORT      EVALCTX
OE         NOTIFICATION_QUEUE_R OE           AQ$_NOTIFICATION_QUEUE_TABLE_V

Displaying Information About the Tables Used by an Evaluation Context

The following query displays information about the tables used by an evaluation context named evalctx, which is owned by the support user:

COLUMN TABLE_ALIAS HEADING 'Table Alias' FORMAT A20
COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A40

SELECT TABLE_ALIAS,
       TABLE_NAME
  FROM DBA_EVALUATION_CONTEXT_TABLES
  WHERE EVALUATION_CONTEXT_OWNER = 'SUPPORT' AND
        EVALUATION_CONTEXT_NAME = 'EVALCTX';

Your output looks similar to the following:

Table Alias          Table Name
-------------------- ----------------------------------------
PROB                 problems

Displaying Information About the Variables Used in an Evaluation Context

The following query displays information about the variables used by an evaluation context named evalctx, which is owned by the support user:

COLUMN VARIABLE_NAME HEADING 'Variable Name' FORMAT A15
COLUMN VARIABLE_TYPE HEADING 'Variable Type' FORMAT A15
COLUMN VARIABLE_VALUE_FUNCTION HEADING 'Variable Value|Function' FORMAT A20
COLUMN VARIABLE_METHOD_FUNCTION HEADING 'Variable Method|Function' FORMAT A20

SELECT VARIABLE_NAME,
       VARIABLE_TYPE,
       VARIABLE_VALUE_FUNCTION,
       VARIABLE_METHOD_FUNCTION
  FROM DBA_EVALUATION_CONTEXT_VARS
  WHERE EVALUATION_CONTEXT_OWNER = 'SUPPORT' AND
        EVALUATION_CONTEXT_NAME = 'EVALCTX';

Your output looks similar to the following:

                                Variable Value       Variable Method
Variable Name   Variable Type   Function             Function
--------------- --------------- -------------------- --------------------
CURRENT_TIME    DATE            timefunc

Displaying All of the Rules in a Rule Set

The query in this section displays the following information about all of the rules in a rule set:

For example, to display this information for each rule in a rule set named oe_queue_r that is owned by the user strmadmin, run the following query:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_EVALUATION_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A27
COLUMN RULE_EVALUATION_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A11

SELECT R.RULE_OWNER, 
       R.RULE_NAME, 
       R.RULE_EVALUATION_CONTEXT_NAME,
       R.RULE_EVALUATION_CONTEXT_OWNER
  FROM DBA_RULES R, DBA_RULE_SET_RULES RS 
  WHERE RS.RULE_SET_OWNER = 'STRMADMIN' AND 
        RS.RULE_SET_NAME = 'OE_QUEUE_R' AND 
  RS.RULE_NAME = R.RULE_NAME AND 
  RS.RULE_OWNER = R.RULE_OWNER;

Your output looks similar to the following:

                                                            Eval Contex
Rule Owner Rule Name            Eval Context Name           Owner
---------- -------------------- --------------------------- -----------
STRMADMIN  HR1                  STREAMS$_EVALUATION_CONTEXT SYS
STRMADMIN  APPLY_LCRS           STREAMS$_EVALUATION_CONTEXT SYS
STRMADMIN  OE_QUEUE$3
STRMADMIN  APPLY_ACTION

Displaying the Condition for Each Rule in a Rule Set

The following query displays the condition for each rule in a rule set named hr_queue_r that is owned by the user strmadmin:

SET LONGCHUNKSIZE 4000
SET LONG 4000
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A45

SELECT R.RULE_OWNER, 
       R.RULE_NAME, 
       R.RULE_CONDITION
  FROM DBA_RULES R, DBA_RULE_SET_RULES RS 
  WHERE RS.RULE_SET_OWNER = 'STRMADMIN' AND 
        RS.RULE_SET_NAME = 'HR_QUEUE_R' AND 
  RS.RULE_NAME = R.RULE_NAME AND 
  RS.RULE_OWNER = R.RULE_OWNER;

Your output looks similar to the following:

Rule Owner      Rule Name       Rule Condition
--------------- --------------- ---------------------------------------------
STRMADMIN       APPLY_ACTION     hr.get_hr_action(tab.user_data) = 'APPLY'
STRMADMIN       APPLY_LCRS      :dml.get_object_owner() = 'HR' AND  (:dml.get
                                _object_name() = 'DEPARTMENTS' OR 
                                :dml.get_object_name() = 'EMPLOYEES')

STRMADMIN       HR_QUEUE$3      hr.get_hr_action(tab.user_data) != 'APPLY'

Listing Each Rule that Contains a Specified Pattern in Its Condition

To list each rule in a database that contains a specified pattern in its condition, you can query the DBA_RULES data dictionary view and use the DBMS_LOB.INSTR function to search for the pattern in the rule conditions. For example, the following query lists each rule that contains the pattern 'HR' in its condition:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A30
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30

SELECT RULE_OWNER, RULE_NAME FROM DBA_RULES 
  WHERE DBMS_LOB.INSTR(RULE_CONDITION, 'HR', 1, 1) > 0;

Your output looks similar to the following:

Rule Owner                     Rule Name
------------------------------ ------------------------------
STRMADMIN                      DEPARTMENTS4
STRMADMIN                      DEPARTMENTS5
STRMADMIN                      DEPARTMENTS6

Displaying Aggregate Statistics for All Rule Set Evaluations

You can query the V$RULE_SET_AGGREGATE_STATS dynamic performance view to display statistics for all rule set evaluations since the database instance last started.

The query in this section displays the following information about rule set evaluations:

Run the following query to display this information:

COLUMN NAME HEADING 'Name of Statistic' FORMAT A55
COLUMN VALUE HEADING 'Value' FORMAT 999999999

SELECT NAME, VALUE FROM V$RULE_SET_AGGREGATE_STATS;

Your output looks similar to the following:

Name of Statistic                                            Value
------------------------------------------------------- ----------
rule set evaluations (all)                                    5584
rule set evaluations (first_hit)                              5584
rule set evaluations (simple_rules_only)                      3675
rule set evaluations (SQL free)                               5584
rule set evaluation time (CPU)                                 179
rule set evaluation time (elapsed)                            1053
rule set SQL executions                                          0
rule set conditions processed                                11551
rule set true rules                                             10
rule set maybe rules                                           328
rule set user function calls (variable value function)         182
rule set user function calls (variable method function)      12794
rule set user function calls (evaluation function)            3857

Note:

The CPU time and elapsed time statistics in this view are expressed in centiseconds. A centisecond is one-hundredth of a second. So, for example, this output shows 1.79 seconds of CPU time and 10.53 seconds of elapsed time.

Displaying Information About Evaluations for Each Rule Set

You can query the V$RULE_SET dynamic performance view to display information about evaluations for each rule set since the database instance last started. The query in this section displays the following information about each rule set in a database:

Run the following query to display this information for each rule set in the database:

COLUMN OWNER HEADING 'Rule Set|Owner' FORMAT A9
COLUMN NAME HEADING 'Rule Set|Name' FORMAT A11
COLUMN EVALUATIONS HEADING 'Total|Evaluations' FORMAT 999999
COLUMN SQL_EXECUTIONS HEADING 'SQL|Executions' FORMAT 999999
COLUMN SQL_FREE_EVALUATIONS HEADING 'SQL Free|Evaluations' FORMAT 999999
COLUMN TRUE_RULES HEADING 'True|Rules' FORMAT 999999
COLUMN MAYBE_RULES HEADING 'Maybe|Rules' FORMAT 999999

SELECT OWNER, 
       NAME, 
       EVALUATIONS,
       SQL_EXECUTIONS,
       SQL_FREE_EVALUATIONS,
       TRUE_RULES,
       MAYBE_RULES
  FROM V$RULE_SET;

Your output looks similar to the following:

Rule Set  Rule Set          Total        SQL    SQL Free    True   Maybe
Owner     Name        Evaluations Executions Evaluations   Rules   Rules
--------- ----------- ----------- ---------- ----------- ------- -------
STRMADMIN RULESET$_18         403          0         403       0     200
STRMADMIN RULESET$_9         3454          0        3454       5      64

Note:

Querying the V$RULE_SET view can have a negative impact on performance if a database has a large library cache.

Determining the Resources Used by Evaluation of Each Rule Set

You can query the V$RULE_SET dynamic performance view to determine the resources used by evaluation of a rule set since the database instance last started. If a rule set was evaluated more than one time since the database instance last started, then some statistics are cumulative, including statistics for the amount of CPU time, evaluation time, and shared memory bytes used.

The query in this section displays the following information about each rule set in a database:

Run the following query to display this information for each rule set in the database:

COLUMN OWNER HEADING 'Rule Set|Owner' FORMAT A15
COLUMN NAME HEADING 'Rule Set Name' FORMAT A15
COLUMN CPU_SECONDS HEADING 'Seconds|of CPU|Time' FORMAT 999999.999
COLUMN ELAPSED_SECONDS HEADING 'Seconds of|Evaluation|Time' FORMAT 999999.999
COLUMN SHARABLE_MEM HEADING 'Bytes|of Shared|Memory' FORMAT 999999999

SELECT OWNER, 
       NAME, 
       (CPU_TIME/100) CPU_SECONDS,
       (ELAPSED_TIME/100) ELAPSED_SECONDS,
       SHARABLE_MEM
  FROM V$RULE_SET;

Your output looks similar to the following:

                                    Seconds  Seconds of      Bytes
Rule Set                             of CPU  Evaluation  of Shared
Owner           Rule Set Name          Time        Time     Memory
--------------- --------------- ----------- ----------- ----------
STRMADMIN       RULESET$_18            .840       8.550     444497
STRMADMIN       RULESET$_9             .700       1.750     444496

Note:

Querying the V$RULE_SET view can have a negative impact on performance if a database has a large library cache.

Displaying Evaluation Statistics for a Rule

You can query the V$RULE dynamic performance view to display evaluation statistics for a particular rule since the database instance last started. The query in this section displays the following evaluation statistics for a rule:

For example, run the following query to display this information for the locations25 rule in the strmadmin schema:

COLUMN TRUE_HITS HEADING 'True Evaluations' FORMAT 999999
COLUMN MAYBE_HITS HEADING 'Maybe Evaluations' FORMAT 999999
COLUMN SQL_EVALUATIONS HEADING 'SQL Evaluations' FORMAT 999999

SELECT TRUE_HITS, MAYBE_HITS, SQL_EVALUATIONS 
  FROM V$RULE
  WHERE RULE_OWNER = 'STRMADMIN' AND
        RULE_NAME  = 'LOCATIONS25';

Your output looks similar to the following:

True Evaluations Maybe Evaluations SQL Evaluations
---------------- ----------------- ---------------
            1518               154               0

19 Monitoring a Streams Environment

This chapter lists the static data dictionary views and dynamic performance views related to Streams. You can use these views to monitor your Streams environment.

This chapter contains these topics:


Note:

The Streams tool in the Oracle Enterprise Manager Console is also an excellent way to monitor a Streams environment. See the online help for the Streams tool for more information.


See Also:


Summary of Streams Static Data Dictionary Views

Table 19-1 lists the Streams static data dictionary views.

Table 19-1 Streams Static Data Dictionary Views

ALL_ Views                      DBA_ Views                      USER_ Views
------------------------------  ------------------------------  ------------------------------
ALL_APPLY                       DBA_APPLY                       N/A
ALL_APPLY_CONFLICT_COLUMNS      DBA_APPLY_CONFLICT_COLUMNS      N/A
ALL_APPLY_DML_HANDLERS          DBA_APPLY_DML_HANDLERS          N/A
ALL_APPLY_ENQUEUE               DBA_APPLY_ENQUEUE               N/A
ALL_APPLY_ERROR                 DBA_APPLY_ERROR                 N/A
ALL_APPLY_EXECUTE               DBA_APPLY_EXECUTE               N/A
N/A                             DBA_APPLY_INSTANTIATED_GLOBAL   N/A
N/A                             DBA_APPLY_INSTANTIATED_OBJECTS  N/A
N/A                             DBA_APPLY_INSTANTIATED_SCHEMAS  N/A
ALL_APPLY_KEY_COLUMNS           DBA_APPLY_KEY_COLUMNS           N/A
N/A                             DBA_APPLY_OBJECT_DEPENDENCIES   N/A
ALL_APPLY_PARAMETERS            DBA_APPLY_PARAMETERS            N/A
ALL_APPLY_PROGRESS              DBA_APPLY_PROGRESS              N/A
N/A                             DBA_APPLY_SPILL_TXN             N/A
ALL_APPLY_TABLE_COLUMNS         DBA_APPLY_TABLE_COLUMNS         N/A
N/A                             DBA_APPLY_VALUE_DEPENDENCIES    N/A
ALL_CAPTURE                     DBA_CAPTURE                     N/A
ALL_CAPTURE_EXTRA_ATTRIBUTES    DBA_CAPTURE_EXTRA_ATTRIBUTES    N/A
ALL_CAPTURE_PARAMETERS          DBA_CAPTURE_PARAMETERS          N/A
ALL_CAPTURE_PREPARED_DATABASE   DBA_CAPTURE_PREPARED_DATABASE   N/A
ALL_CAPTURE_PREPARED_SCHEMAS    DBA_CAPTURE_PREPARED_SCHEMAS    N/A
ALL_CAPTURE_PREPARED_TABLES     DBA_CAPTURE_PREPARED_TABLES     N/A
ALL_EVALUATION_CONTEXT_TABLES   DBA_EVALUATION_CONTEXT_TABLES   USER_EVALUATION_CONTEXT_TABLES
ALL_EVALUATION_CONTEXT_VARS     DBA_EVALUATION_CONTEXT_VARS     USER_EVALUATION_CONTEXT_VARS
ALL_EVALUATION_CONTEXTS         DBA_EVALUATION_CONTEXTS         USER_EVALUATION_CONTEXTS
ALL_FILE_GROUP_EXPORT_INFO      DBA_FILE_GROUP_EXPORT_INFO      USER_FILE_GROUP_EXPORT_INFO
ALL_FILE_GROUP_FILES            DBA_FILE_GROUP_FILES            USER_FILE_GROUP_FILES
ALL_FILE_GROUP_TABLES           DBA_FILE_GROUP_TABLES           USER_FILE_GROUP_TABLES
ALL_FILE_GROUP_TABLESPACES      DBA_FILE_GROUP_TABLESPACES      USER_FILE_GROUP_TABLESPACES
ALL_FILE_GROUP_VERSIONS         DBA_FILE_GROUP_VERSIONS         USER_FILE_GROUP_VERSIONS
ALL_FILE_GROUPS                 DBA_FILE_GROUPS                 USER_FILE_GROUPS
N/A                             DBA_HIST_STREAMS_APPLY_SUM      N/A
N/A                             DBA_HIST_STREAMS_CAPTURE        N/A
N/A                             DBA_HIST_STREAMS_POOL_ADVICE    N/A
ALL_PROPAGATION                 DBA_PROPAGATION                 N/A
N/A                             DBA_REGISTERED_ARCHIVED_LOG     N/A
ALL_RULE_SET_RULES              DBA_RULE_SET_RULES              USER_RULE_SET_RULES
ALL_RULE_SETS                   DBA_RULE_SETS                   USER_RULE_SETS
ALL_RULES                       DBA_RULES                       USER_RULES
N/A                             DBA_STREAMS_ADD_COLUMN          N/A
N/A                             DBA_STREAMS_ADMINISTRATOR       N/A
N/A                             DBA_STREAMS_DELETE_COLUMN       N/A
ALL_STREAMS_GLOBAL_RULES        DBA_STREAMS_GLOBAL_RULES        N/A
ALL_STREAMS_MESSAGE_CONSUMERS   DBA_STREAMS_MESSAGE_CONSUMERS   N/A
ALL_STREAMS_MESSAGE_RULES       DBA_STREAMS_MESSAGE_RULES       N/A
ALL_STREAMS_NEWLY_SUPPORTED     DBA_STREAMS_NEWLY_SUPPORTED     N/A
N/A                             DBA_STREAMS_RENAME_COLUMN       N/A
N/A                             DBA_STREAMS_RENAME_SCHEMA       N/A
N/A                             DBA_STREAMS_RENAME_TABLE        N/A
ALL_STREAMS_RULES               DBA_STREAMS_RULES               N/A
ALL_STREAMS_SCHEMA_RULES        DBA_STREAMS_SCHEMA_RULES        N/A
ALL_STREAMS_TABLE_RULES         DBA_STREAMS_TABLE_RULES         N/A
ALL_STREAMS_TRANSFORM_FUNCTION  DBA_STREAMS_TRANSFORM_FUNCTION  N/A
N/A                             DBA_STREAMS_TRANSFORMATIONS     N/A
ALL_STREAMS_UNSUPPORTED         DBA_STREAMS_UNSUPPORTED         N/A


Summary of Streams Dynamic Performance Views

The Streams dynamic performance views are:


Note:

  • When monitoring a Real Application Clusters (RAC) database, use the GV$ versions of the dynamic performance views.

  • To collect elapsed time statistics in these dynamic performance views, set the TIMED_STATISTICS initialization parameter to¬†true.
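For example, the following is a minimal sketch of both points, assuming a RAC database with at least one capture process configured (the columns shown are a subset of those in the view):

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

-- Query the GV$ version of a Streams view to see all instances
SELECT INST_ID, CAPTURE_NAME, STATE
  FROM GV$STREAMS_CAPTURE;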



15 Managing Rule-Based Transformations

In Streams, a rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. There are two types of rule-based transformations: declarative and custom. This chapter describes managing each type of rule-based transformation.


Note:

A transformation specified for a rule is performed only if the rule is in a positive rule set. If the rule is in the negative rule set for a capture process, propagation, apply process, or messaging client, then these Streams clients ignore the rule-based transformation.


See Also:

Chapter 7, "Rule-Based Transformations" for conceptual information about each type of rule-based transformation

Managing Declarative Rule-Based Transformations

You can use the following procedures in the DBMS_STREAMS_ADM package to manage declarative rule-based transformations: ADD_COLUMN, DELETE_COLUMN, RENAME_COLUMN, RENAME_SCHEMA, and RENAME_TABLE.

This section provides instructions for completing the following tasks:

Adding Declarative Rule-Based Transformations

The following sections contain examples that add declarative rule-based transformations to rules.

Adding a Declarative Rule-Based Transformation that Renames a Table

Use the RENAME_TABLE procedure in the DBMS_STREAMS_ADM package to add a declarative rule-based transformation that renames a table in a row LCR. For example, the following procedure adds a declarative rule-based transformation to the jobs12 rule in the strmadmin schema:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => 'hr.jobs',
    to_table_name   => 'hr.assignments', 
    step_number     => 0,
    operation       => 'ADD');
END;
/

The declarative rule-based transformation added by this procedure renames the table hr.jobs to hr.assignments in a row LCR when the rule jobs12 evaluates to TRUE for the row LCR. If more than one declarative rule-based transformation is specified for the jobs12 rule, then this transformation follows default transformation ordering because the step_number parameter is set to 0 (zero). In addition, the operation parameter is set to ADD to indicate that the transformation is being added to the rule, not removed from it.

The RENAME_TABLE procedure can also add a transformation that renames the schema in addition to the table. For example, in the previous example, to specify that the schema should be renamed to oe, specify oe.assignments for the to_table_name parameter.
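A minimal sketch of that variation follows. It reuses the jobs12 rule from the previous example and assumes an oe schema exists at the destination database:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => 'hr.jobs',
    to_table_name   => 'oe.assignments',  -- renames both schema and table
    step_number     => 0,
    operation       => 'ADD');
END;
/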

Adding a Declarative Rule-Based Transformation that Adds a Column

Use the ADD_COLUMN procedure in the DBMS_STREAMS_ADM package to add a declarative rule-based transformation that adds a column to a row in a row LCR. For example, the following procedure adds a declarative rule-based transformation to the employees35 rule in the strmadmin schema:

BEGIN 
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name    => 'employees35',
    table_name   => 'hr.employees',
    column_name  => 'birth_date', 
    column_value => ANYDATA.ConvertDate(NULL),
    value_type   => 'NEW',
    step_number  => 0,
    operation    => 'ADD');
END;
/

The declarative rule-based transformation added by this procedure adds a birth_date column of datatype DATE to an hr.employees table row in a row LCR when the rule employees35 evaluates to TRUE for the row LCR.

Notice that the ANYDATA.ConvertDate function specifies the column type and the column value. In this example, the added column value is NULL, but a valid date can also be specified. Use the appropriate AnyData function for the column being added. For example, if the datatype of the column being added is NUMBER, then use the ANYDATA.ConvertNumber function.

The value_type parameter is set to NEW to indicate that the column is added to the new values in a row LCR. You can also specify OLD to add the column to the old values.

If more than one declarative rule-based transformation is specified for the employees35 rule, then the transformation follows default transformation ordering because the step_number parameter is set to 0 (zero). In addition, the operation parameter is set to ADD to indicate that the transformation is being added, not removed.


Note:

The ADD_COLUMN procedure is overloaded. A column_function parameter can specify that the current system date or timestamp is the value for the added column. The column_value and column_function parameters are mutually exclusive.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about AnyData type functions
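For example, the following is a minimal sketch of the overloaded ADD_COLUMN procedure described in the preceding note. The load_date column name is hypothetical, and the sketch assumes that 'SYSDATE' is an accepted column_function setting and that the remaining parameters match the overload used earlier:

BEGIN 
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name       => 'employees35',
    table_name      => 'hr.employees',
    column_name     => 'load_date',       -- hypothetical column name
    column_function => 'SYSDATE',         -- current system date supplies the value
    value_type      => 'NEW',
    step_number     => 0,
    operation       => 'ADD');
END;
/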

Overwriting an Existing Declarative Rule-Based Transformation

When the operation parameter is set to ADD in a procedure that adds a declarative rule-based transformation, an existing declarative rule-based transformation is overwritten if the parameters in the following list match the existing transformation parameters:

  • ADD_COLUMN procedure: rule_name, table_name, column_name, and step_number parameters

  • DELETE_COLUMN procedure: rule_name, table_name, column_name, and step_number parameters

  • RENAME_COLUMN procedure: rule_name, table_name, from_column_name, and step_number parameters

  • RENAME_SCHEMA procedure: rule_name, from_schema_name, and step_number parameters

  • RENAME_TABLE procedure: rule_name, from_table_name, and step_number parameters

For example, suppose an existing declarative rule-based transformation was created by running the following procedure:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name         => 'departments33',
    table_name        => 'hr.departments',
    from_column_name  => 'manager_id', 
    to_column_name    => 'lead_id',
    value_type        => 'NEW',
    step_number       => 0,
    operation         => 'ADD');
END;
/

Running the following procedure overwrites this existing declarative rule-based transformation:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name         => 'departments33',
    table_name        => 'hr.departments',
    from_column_name  => 'manager_id', 
    to_column_name    => 'lead_id',
    value_type        => '*',
    step_number       => 0,
    operation         => 'ADD');
END;
/

In this case, the value_type parameter in the declarative rule-based transformation was changed from NEW to *. That is, in the original transformation, only new values were renamed in row LCRs, but, in the new transformation, both old and new values are renamed in row LCRs.

Removing Declarative Rule-Based Transformations

To remove a declarative rule-based transformation from a rule, use the same procedure used to add the transformation, but specify REMOVE for the operation parameter. For example, to remove the transformation added in "Adding a Declarative Rule-Based Transformation that Renames a Table", run the following procedure:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => 'hr.jobs',
    to_table_name   => 'hr.assignments', 
    step_number     => 0,
    operation       => 'REMOVE');
END;
/

When the operation parameter is set to REMOVE in any of the declarative transformation procedures listed in "Managing Declarative Rule-Based Transformations", the other parameters in the procedure are optional, excluding the rule_name parameter. If these optional parameters are set to NULL, then they become wildcards.

The RENAME_TABLE procedure in the previous example behaves in the following way when one or more of the optional parameters are set to NULL:

from_table_name  to_table_name  step_number  Result
---------------  -------------  -----------  ------------------------------------------------
NULL             NULL           NULL         Remove all rename table transformations for the specified rule
non-NULL         NULL           NULL         Remove all rename table transformations with the specified from_table_name for the specified rule
NULL             non-NULL       NULL         Remove all rename table transformations with the specified to_table_name for the specified rule
NULL             NULL           non-NULL     Remove all rename table transformations with the specified step_number for the specified rule
non-NULL         non-NULL       NULL         Remove all rename table transformations with the specified from_table_name and to_table_name for the specified rule
NULL             non-NULL       non-NULL     Remove all rename table transformations with the specified to_table_name and step_number for the specified rule
non-NULL         NULL           non-NULL     Remove all rename table transformations with the specified from_table_name and step_number for the specified rule

The other declarative transformation procedures work in a similar way when optional parameters are set to NULL and the operation parameter is set to REMOVE.
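For example, the following minimal sketch removes every rename table transformation for the jobs12 rule by leaving all of the optional parameters set to NULL:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => NULL,   -- wildcard
    to_table_name   => NULL,   -- wildcard
    step_number     => NULL,   -- wildcard
    operation       => 'REMOVE');
END;
/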

Managing Custom Rule-Based Transformations

Use the SET_RULE_TRANSFORM_FUNCTION procedure in the DBMS_STREAMS_ADM package to set or unset a custom rule-based transformation for a rule. This procedure modifies the rule action context to specify the custom rule-based transformation.

This section provides instructions for completing the following tasks:


Attention:

Do not modify LONG, LONG RAW, or LOB column data in an LCR with a custom rule-based transformation.


Note:

  • There is no automatic locking mechanism for a rule action context. Therefore, make sure an action context is not updated by two or more sessions at the same time.

  • When you perform custom rule-based transformations on DDL LCRs, you probably need to modify the DDL text in the DDL LCR to match any other modification. For example, if the transformation changes the name of a table in the DDL LCR, then the transformation should change the table name in the DDL text in the same way.


Creating a Custom Rule-Based Transformation

A custom rule-based transformation function always operates on one message, but it can return one message or many messages. A custom rule-based transformation function that returns one message is a one-to-one transformation function. A one-to-one transformation function must have the following signature:

FUNCTION user_function (
   parameter_name   IN  ANYDATA)
RETURN ANYDATA;

Here, user_function stands for the name of the function and parameter_name stands for the name of the parameter passed to the function. The parameter passed to the function is an ANYDATA encapsulation of a message, and the function must return an ANYDATA encapsulation of a message.

A custom rule-based transformation function that can return more than one message is a one-to-many transformation function. A one-to-many transformation function must have the following signature:

FUNCTION user_function (
   parameter_name   IN  ANYDATA)
RETURN STREAMS$_ANYDATA_ARRAY;

Here, user_function stands for the name of the function and parameter_name stands for the name of the parameter passed to the function. The parameter passed to the function is an ANYDATA encapsulation of a message, and the function must return an array that contains zero or more ANYDATA encapsulations of a message. If the array contains zero ANYDATA encapsulations of a message, then the original message is discarded. One-to-many transformation functions are supported only for Streams capture processes.

The STREAMS$_ANYDATA_ARRAY type is an Oracle-supplied type that has the following definition:

CREATE OR REPLACE TYPE SYS.STREAMS$_ANYDATA_ARRAY
   AS VARRAY(2147483647) of SYS.ANYDATA
/
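As an illustration only, the following is a minimal sketch of a one-to-many function. The function name strmadmin.clone_message is hypothetical, and a real transformation would construct transformed copies of the message rather than pass the input through unchanged:

CREATE OR REPLACE FUNCTION strmadmin.clone_message(in_any IN ANYDATA)
RETURN SYS.STREAMS$_ANYDATA_ARRAY
IS
  out_array SYS.STREAMS$_ANYDATA_ARRAY := SYS.STREAMS$_ANYDATA_ARRAY();
BEGIN
  -- Return the original message unchanged. Returning the array while
  -- it is still empty would discard the message instead.
  out_array.EXTEND;
  out_array(1) := in_any;
  RETURN out_array;
END;
/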

The following steps outline the general procedure for creating a custom rule-based transformation that uses a one-to-one function:

  1. Create a PL/SQL function that performs the transformation.


    Caution:

    Make sure the transformation function is deterministic. A deterministic function always returns the same value for any given set of input argument values, now and in the future. Also, make sure the transformation function does not raise any exceptions. Exceptions can cause a capture process, propagation, or apply process to become disabled, and you will need to correct the transformation function before the capture process, propagation, or apply process can proceed. Exceptions raised by a custom rule-based transformation for a messaging client can prevent the messaging client from dequeuing messages.

    The following example creates a function called executive_to_management in the hr schema that changes the value in the department_name column of the departments table from Executive to Management. Such a transformation might be necessary if one branch in a company uses a different name for this department.

    CONNECT hr/hr
    
    CREATE OR REPLACE FUNCTION hr.executive_to_management(in_any IN ANYDATA) 
    RETURN ANYDATA
    IS
      lcr SYS.LCR$_ROW_RECORD;
      rc  NUMBER;
      ob_owner VARCHAR2(30);
      ob_name VARCHAR2(30);
      dep_value_anydata ANYDATA;
      dep_value_varchar2 VARCHAR2(30);
    BEGIN
      -- Get the type of object
      -- Check if the object type is SYS.LCR$_ROW_RECORD
      IF in_any.GETTYPENAME='SYS.LCR$_ROW_RECORD' THEN
        -- Put the row LCR into lcr
        rc := in_any.GETOBJECT(lcr);
        -- Get the object owner and name
        ob_owner := lcr.GET_OBJECT_OWNER();
        ob_name := lcr.GET_OBJECT_NAME();
        -- Check for the hr.departments table
        IF ob_owner = 'HR' AND ob_name = 'DEPARTMENTS' THEN
          -- Get the old value of the department_name column in the LCR
          dep_value_anydata := lcr.GET_VALUE('old','DEPARTMENT_NAME');
          IF dep_value_anydata IS NOT NULL THEN
            -- Put the column value into dep_value_varchar2
            rc := dep_value_anydata.GETVARCHAR2(dep_value_varchar2);
            -- Change a value of Executive in the column to Management
            IF (dep_value_varchar2 = 'Executive') THEN
              lcr.SET_VALUE('OLD','DEPARTMENT_NAME',
                ANYDATA.CONVERTVARCHAR2('Management'));
            END IF;
          END IF;
          -- Get the new value of the department_name column in the LCR
          dep_value_anydata := lcr.GET_VALUE('new', 'DEPARTMENT_NAME', 'n');
          IF dep_value_anydata IS NOT NULL THEN
            -- Put the column value into dep_value_varchar2
            rc := dep_value_anydata.GETVARCHAR2(dep_value_varchar2);
            -- Change a value of Executive in the column to Management
            IF (dep_value_varchar2 = 'Executive') THEN
              lcr.SET_VALUE('new','DEPARTMENT_NAME',
                ANYDATA.CONVERTVARCHAR2('Management'));
            END IF;
          END IF;
        END IF;
        RETURN ANYDATA.CONVERTOBJECT(lcr);
      END IF;
    RETURN in_any;
    END;
    /
    
  2. Grant the Streams administrator EXECUTE privilege on the hr.executive_to_management function.

    GRANT EXECUTE ON hr.executive_to_management TO strmadmin;
    
  3. Create subset rules for DML operations on the hr.departments table. The subset rules will use the transformation created in Step 1.

    Subset rules are not required to use custom rule-based transformations. This example uses subset rules to illustrate an action context with more than one name-value pair. This example creates subset rules for an apply process on a database named dbs1.net. These rules evaluate to TRUE when an LCR contains a DML change to a row with a location_id of 1700 in the hr.departments table. This example assumes that an ANYDATA queue named streams_queue already exists in the database.

    To create these rules, connect as the Streams administrator and run the following ADD_SUBSET_RULES procedure:

    CONNECT strmadmin/strmadminpw
    
    BEGIN 
      DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
        table_name               =>  'hr.departments',
        dml_condition            =>  'location_id=1700',
        streams_type             =>  'apply',
        streams_name             =>  'strm01_apply',
        queue_name               =>  'streams_queue',
        include_tagged_lcr       =>  false,
        source_database          =>  'dbs1.net');
    END;
    /
    

    Note:

    • To create the rule and the rule set, the Streams administrator must have CREATE_RULE_SET_OBJ (or¬†CREATE_ANYRULE_SET_OBJ) and CREATE_RULE_OBJ (or¬†CREATE_ANY_RULE_OBJ) system privileges. You grant these privileges using the GRANT_SYSTEM_PRIVILEGE procedure in the DBMS_RULE_ADM package.

    • This example creates the rule using the DBMS_STREAMS_ADM package. Alternatively, you can create a rule, add it to a rule set, and specify a custom rule-based transformation using the DBMS_RULE_ADM package. Oracle Streams Replication Administrator's Guide contains an example of this procedure.

    • The ADD_SUBSET_RULES procedure adds the subset rules to the positive rule set for the apply process.


  4. Determine the names of the system-created rules by running the following query:

    SELECT RULE_NAME, SUBSETTING_OPERATION FROM DBA_STREAMS_RULES 
      WHERE OBJECT_NAME='DEPARTMENTS' AND DML_CONDITION='location_id=1700';
    

    This query displays output similar to the following:

    RULE_NAME                      SUBSET
    ------------------------------ ------
    DEPARTMENTS5                   INSERT
    DEPARTMENTS6                   UPDATE
    DEPARTMENTS7                   DELETE
    

    Note:

    You can also obtain this information using the OUT parameters when you run ADD_SUBSET_RULES.

    Because these are subset rules, two of them contain a non-NULL action context that performs an internal transformation:

    • The rule with a subsetting condition of INSERT contains an internal transformation that converts updates into inserts if the update changes the value of the location_id column to 1700 from some other value. The internal transformation does not affect inserts.

    • The rule with a subsetting condition of DELETE contains an internal transformation that converts updates into deletes if the update changes the value of the location_id column from 1700 to a different value. The internal transformation does not affect deletes.

    In this example, you can confirm that the rules DEPARTMENTS5 and DEPARTMENTS7 have a non-NULL action context, and that the rule DEPARTMENTS6 has a NULL action context, by running the following query:

    COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A13
    COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A27
    COLUMN ACTION_CONTEXT_VALUE HEADING 'Action Context Value' FORMAT A30
    
    SELECT 
        RULE_NAME,
        AC.NVN_NAME ACTION_CONTEXT_NAME, 
        AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
      FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
      WHERE RULE_NAME IN ('DEPARTMENTS5','DEPARTMENTS6','DEPARTMENTS7');
    

    This query displays output similar to the following:

    Rule Name     Action Context Name         Action Context Value
    ------------- --------------------------- ------------------------------
    DEPARTMENTS5  STREAMS$_ROW_SUBSET         INSERT
    DEPARTMENTS7  STREAMS$_ROW_SUBSET         DELETE
    

    The DEPARTMENTS6 rule does not appear in the output because its action context is NULL.

  5. Set the custom rule-based transformation for each subset rule by running the SET_RULE_TRANSFORM_FUNCTION procedure. This step runs this procedure for each rule and specifies hr.executive_to_management as the transformation function. Make sure no other users are modifying the action context at the same time.

    BEGIN
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments5',
        transform_function  => 'hr.executive_to_management');
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments6',
        transform_function  => 'hr.executive_to_management');
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments7',
        transform_function  => 'hr.executive_to_management');    
    END;
    /
    

    Specifically, this procedure adds a name-value pair to each rule action context that specifies the name STREAMS$_TRANSFORM_FUNCTION and a value that is an ANYDATA instance containing the name of the PL/SQL function that performs the transformation. In this case, the transformation function is hr.executive_to_management.


    Note:

    The SET_RULE_TRANSFORM_FUNCTION procedure does not verify that the specified transformation function exists. If the function does not exist, then an error is raised when a Streams process or job tries to invoke the transformation function.

Now, if you run the query that displays the name-value pairs in the action context for these rules, each rule, including the DEPARTMENTS6 rule, shows the name-value pair for the custom rule-based transformation:

SELECT 
    RULE_NAME,
    AC.NVN_NAME ACTION_CONTEXT_NAME, 
    AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  WHERE RULE_NAME IN ('DEPARTMENTS5','DEPARTMENTS6','DEPARTMENTS7');

This query displays output similar to the following:

Rule Name     Action Context Name         Action Context Value
------------- --------------------------- ------------------------------
DEPARTMENTS5  STREAMS$_ROW_SUBSET         INSERT
DEPARTMENTS5  STREAMS$_TRANSFORM_FUNCTION "HR"."EXECUTIVE_TO_MANAGEMENT"
DEPARTMENTS6  STREAMS$_TRANSFORM_FUNCTION "HR"."EXECUTIVE_TO_MANAGEMENT"
DEPARTMENTS7  STREAMS$_ROW_SUBSET         DELETE
DEPARTMENTS7  STREAMS$_TRANSFORM_FUNCTION "HR"."EXECUTIVE_TO_MANAGEMENT"

You can also view transformation functions using the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the SET_RULE_TRANSFORM_FUNCTION and the rule types used in this example

Altering a Custom Rule-Based Transformation

To alter a custom rule-based transformation, you can either edit the transformation function or run the SET_RULE_TRANSFORM_FUNCTION procedure to specify a different transformation function. This example runs the SET_RULE_TRANSFORM_FUNCTION procedure to specify a different transformation function. The SET_RULE_TRANSFORM_FUNCTION procedure modifies the action context of a specified rule to run a different transformation function. If you edit the transformation function itself, then you do not need to run this procedure.

This example alters a custom rule-based transformation for rule DEPARTMENTS5 by changing the transformation function from hr.executive_to_management to hr.executive_to_lead. The hr.executive_to_management rule-based transformation was added to the DEPARTMENTS5 rule in the example in "Creating a Custom Rule-Based Transformation".

In Streams, subset rules use name-value pairs in an action context to perform internal transformations that convert UPDATE operations into INSERT and DELETE operations in some situations. Such a conversion is called a row migration. The SET_RULE_TRANSFORM_FUNCTION procedure preserves the name-value pairs that perform row migrations.


See Also:

"Row Migration and Subset Rules" for more information about row migration

Complete the following steps to alter a custom rule-based transformation:

  1. You can view all of the name-value pairs in the action context of a rule by performing the following query:

    COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A30
    COLUMN ACTION_CONTEXT_VALUE HEADING 'Action Context Value' FORMAT A30
    
    SELECT 
        AC.NVN_NAME ACTION_CONTEXT_NAME, 
        AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
      FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
      WHERE RULE_NAME = 'DEPARTMENTS5';
    

    This query displays output similar to the following:

    Action Context Name            Action Context Value
    ------------------------------ ------------------------------
    STREAMS$_ROW_SUBSET            INSERT
    STREAMS$_TRANSFORM_FUNCTION    "HR"."EXECUTIVE_TO_MANAGEMENT"
    
  2. Run the SET_RULE_TRANSFORM_FUNCTION procedure to set the transformation function to executive_to_lead for the DEPARTMENTS5 rule. In this example, it is assumed that the new transformation function is hr.executive_to_lead and that the strmadmin user has EXECUTE privilege on it.

    BEGIN
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments5',
        transform_function  => 'hr.executive_to_lead');
    END;
    /  
    

    To ensure that the transformation function was altered properly, you can rerun the query in Step 1. You should alter the action context for the DEPARTMENTS6 and DEPARTMENTS7 rules in a similar way to keep the three subset rules consistent.


Note:

  • The SET_RULE_TRANSFORM_FUNCTION procedure does not verify that the specified transformation function exists. If the function does not exist, then an error is raised when a Streams process or job tries to invoke the transformation function.

  • If a custom rule-based transformation function is modified at the same time that a Streams client tries to access it, then an error might be raised.


Unsetting a Custom Rule-Based Transformation

To unset a custom rule-based transformation from a rule, run the SET_RULE_TRANSFORM_FUNCTION procedure and specify NULL for the transformation function. Specifying NULL unsets the name-value pair that specifies the custom rule-based transformation in the rule action context. This example unsets a custom rule-based transformation for rule DEPARTMENTS5. This transformation was added to the DEPARTMENTS5 rule in the example in "Creating a Custom Rule-Based Transformation".

In Streams, subset rules use name-value pairs in an action context to perform internal transformations that convert UPDATE operations into INSERT and DELETE operations in some situations. Such a conversion is called a row migration. The SET_RULE_TRANSFORM_FUNCTION procedure preserves the name-value pairs that perform row migrations.


See Also:

"Row Migration and Subset Rules" for more information about row migration

Run the following procedure to unset the custom rule-based transformation for rule DEPARTMENTS5:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name           => 'departments5',
    transform_function  => NULL);
END;
/

To ensure that the transformation function was unset, you can run the query in Step 1. You should alter the action context for the DEPARTMENTS6 and DEPARTMENTS7 rules in a similar way to keep the three subset rules consistent.




3 Streams Staging and Propagation

This chapter explains the concepts relating to staging messages in a queue and propagating messages from one queue to another.

This chapter contains these topics:

Introduction to Message Staging and Propagation

Streams uses queues to stage messages. A queue of ANYDATA type can stage messages of almost any type and is called an ANYDATA queue. A typed queue can store messages of a specific type. Streams clients always use ANYDATA queues.

In Streams, two types of messages can be encapsulated into an ANYDATA object and staged in an ANYDATA queue: logical change records (LCRs) and user messages. An LCR is an object that contains information about a change to a database object. A user message is a message of a user-defined type created by users or applications. Both types of messages can be used for information sharing within a single database or between databases.

In a messaging environment, both ANYDATA queues and typed queues can be used to stage messages of a specific type. Publishing applications can enqueue messages into a single queue, and subscribing applications can dequeue these messages.

Staged messages can be consumed or propagated, or both. Staged messages can be consumed by an apply process, by a messaging client, or by a user application. A running apply process implicitly dequeues messages, but messaging clients and user applications explicitly dequeue messages. Even after a message is consumed, it can remain in the queue if you also have configured a Streams propagation to propagate, or send, the message to one or more other queues or if message retention is specified for user-enqueued messages. Message retention does not apply to LCRs captured by a capture process.
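For example, a user application might explicitly dequeue a message with the DBMS_AQ package. The following is a minimal sketch, assuming an ANYDATA queue named strmadmin.streams_queue with a consumer (agent) named HR:

DECLARE
  deq_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  payload        ANYDATA;
  msgid          RAW(16);
BEGIN
  -- Dequeue the next message available to the HR consumer
  deq_options.consumer_name := 'HR';
  deq_options.wait          := DBMS_AQ.NO_WAIT;
  DBMS_AQ.DEQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    dequeue_options    => deq_options,
    message_properties => msg_properties,
    payload            => payload,
    msgid              => msgid);
  COMMIT;
END;
/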

The queues to which messages are propagated can reside in the same database or in different databases than the queue from which the messages are propagated. In either case, the queue from which the messages are propagated is called the source queue, and the queue that receives the messages is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.

Figure 3-1 shows propagation from a source queue to a destination queue.

Figure 3-1 Propagation from a Source Queue to a Destination Queue

Description of Figure 3-1 follows

You can create, alter, and drop a propagation, and you can define propagation rules that control which messages are propagated. The user who owns the source queue is the user who propagates messages, and this user must have the necessary privileges to propagate messages. These privileges include the following:

If the propagation propagates messages to a destination queue in a remote database, then the owner of the source queue must be able to use the database link used by the propagation, and the user to which the database link connects at the remote database must have enqueue privilege on the destination queue.


Note:

Connection qualifiers cannot be specified in the database links that are used by Streams propagations.


See Also:


Captured and User-Enqueued Messages in an ANYDATA Queue

Messages can be enqueued into an ANYDATA queue in two ways:

So, each captured message contains an LCR, but a user-enqueued message might or might not contain an LCR. Propagating a captured message or a user-enqueued message enqueues the message into the destination queue.

Messages can be dequeued from an ANYDATA queue in two ways:

The dequeued messages might have originated at the same database where they are dequeued, or they might have originated at a different database.


See Also:


Message Propagation Between Queues

You can use Streams to configure message propagation between two queues, which can reside in different databases. Streams uses job queues to propagate messages.

A propagation is always between a source queue and a destination queue. Although propagation is always between two queues, a single queue can participate in many propagations. That is, a single source queue can propagate messages to multiple destination queues, and a single destination queue can receive messages from multiple source queues. However, only one propagation is allowed between a particular source queue and a particular destination queue. Also, a single queue can be a destination queue for some propagations and a source queue for other propagations.

A propagation can propagate all of the messages in a source queue to a destination queue, or a propagation can propagate only a subset of the messages. Also, a single propagation can propagate both captured messages and user-enqueued messages. You can use rules to control which messages in the source queue are propagated to the destination queue and which messages are discarded.

Depending on how you set up your Streams environment, changes could be sent back to the site where they originated. You need to ensure that your environment is configured to avoid cycling a change in an endless loop. You can use Streams tags to avoid such a change cycling loop.
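For example, a minimal sketch of setting a non-NULL Streams tag for the current session before making changes, so that rules configured to exclude tagged LCRs do not recapture those changes (the tag value '1D' is arbitrary):

BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/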


Note:

Propagations can propagate user-enqueued ANYDATA messages that encapsulate payloads of object types, varrays, or nested tables between databases only if the databases use the same character set.


See Also:


Propagation Rules

A propagation either propagates or discards messages based on rules that you define. For LCRs, each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You can place these rules in a positive rule set or a negative rule set used by the propagation.

If a rule evaluates to TRUE for a message, and the rule is in the positive rule set for a propagation, then the propagation propagates the change. If a rule evaluates to TRUE for a message, and the rule is in the negative rule set for a propagation, then the propagation discards the change. If a propagation has both a positive and a negative rule set, then the negative rule set is always evaluated first.

You can specify propagation rules for LCRs at the following levels:

  • A table rule propagates or discards either row changes resulting from DML changes or DDL changes to a particular table. Subset rules are table rules that include a subset of the row changes to a particular table.

  • A schema rule propagates or discards either row changes resulting from DML changes or DDL changes to the database objects in a particular schema.

  • A global rule propagates or discards either all row changes resulting from DML changes or all DDL changes in the source queue.

For non-LCR messages, you can create your own rules to control propagation.

A queue subscriber that specifies a condition causes the system to generate a rule. The rule sets for all subscribers to a queue are combined into a single system-generated rule set to make subscription more efficient.
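As an illustration of table-level propagation rules, the following minimal sketch adds a rule to the positive rule set of a propagation; the propagation, queue, and database names are assumptions:

BEGIN 
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.departments',
    streams_name           => 'dbs1_to_dbs2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dbs2.net',
    include_dml            => true,
    include_ddl            => false,
    source_database        => 'dbs1.net',
    inclusion_rule         => true);  -- add to the positive rule set
END;
/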

Queue-to-Queue Propagations

A propagation can be queue-to-queue or queue-to-database link (queue-to-dblink). A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. Even when multiple queue-to-queue propagations use the same database link, you can enable, disable, or set the propagation schedule for each queue-to-queue propagation separately. Propagation jobs are described in detail later in this chapter.

A single database link can be used by multiple queue-to-queue propagations. The database link must be created with the service name specified as the global name of the database that contains the destination queue.

In contrast, a queue-to-dblink propagation shares a propagation job with other queue-to-dblink propagations from the same source queue that use the same database link. Therefore, these propagations share the same propagation schedule, and any change to the propagation schedule affects all of the queue-to-dblink propagations from the same source queue that use the database link.
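To make this concrete, the following minimal sketch creates a database link whose service name is the global name of the destination database and then creates a queue-to-queue propagation with the DBMS_PROPAGATION_ADM package; all names and credentials are assumptions:

CREATE DATABASE LINK dbs2.net CONNECT TO strmadmin
  IDENTIFIED BY strmadminpw USING 'dbs2.net';

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'dbs1_to_dbs2',
    source_queue       => 'strmadmin.streams_queue',
    destination_queue  => 'strmadmin.streams_queue',
    destination_dblink => 'dbs2.net',
    queue_to_queue     => true);  -- gives the propagation its own job
END;
/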

Queue-to-queue propagation connects to the destination queue service when one exists. Currently, a queue service is created when the database is a Real Application Clusters (RAC) database and the queue is a buffered queue. Because the queue service always runs on the owner instance of the queue, transparent failover can occur when RAC instances fail. When multiple queue-to-queue propagations use a single database link, the connect description for each queue-to-queue propagation changes automatically to propagate messages to the correct destination queue. In contrast, queue-to-dblink propagations require you to repoint your database links if the owner instance in a RAC database that contains the destination queue for the propagation fails.


Note:

To use queue-to-queue propagation, the compatibility level must be 10.2.0 or higher for each database that contains a queue involved in the propagation.


Ensured Message Delivery

A user-enqueued message is propagated successfully to a destination queue when the enqueue into the destination queue is committed. A captured message is propagated successfully to a destination queue when both of the following actions are completed:

  • The message is processed by all relevant apply processes associated with the destination queue.

  • The message is propagated successfully from the destination queue to all of its relevant destination queues.

When a message is successfully propagated between two ANYDATA queues, the destination queue acknowledges successful propagation of the message. If the source queue is configured to propagate a message to multiple destination queues, then the message remains in the source queue until each destination queue has sent confirmation of message propagation to the source queue. When each destination queue acknowledges successful propagation of the message, and all local consumers in the source queue database have consumed the message, the source queue can drop the message.

This confirmation system ensures that messages are always propagated from the source queue to the destination queue, but, in some configurations, the source queue can grow larger than an optimal size. When a source queue grows, it uses more SGA memory and might use more disk space.

There are two common reasons for source-queue growth:

  • If a message cannot be propagated to a specified destination queue for some reason (such as a network problem), then the message will remain in the source queue until the destination queue becomes available. This situation could cause the source queue to grow large. So, you should monitor your queues regularly to detect problems early.

  • Suppose a source queue is propagating captured messages to multiple destination queues, and one or more destination databases acknowledge successful propagation of messages much more slowly than the other queues. In this case, the source queue can grow because the slower destination databases create a backlog of messages that have already been acknowledged by the faster destination databases. In such an environment, consider creating more than one capture process to capture changes at the source database. Doing so lets you use one source queue for the slower destination databases and another source queue for the faster destination databases.

Directed Networks

A directed network is one in which propagated messages pass through one or more intermediate databases before arriving at a destination database. A message might or might not be processed by an apply process at an intermediate database. Using Streams, you can choose which messages are propagated to each destination database, and you can specify the route that messages will traverse on their way to a destination database. Figure 3-2 shows an example of a directed networks environment.

Figure 3-2 Example Directed Networks Environment


The advantage of using a directed network is that a source database does not need to have a physical network connection with a destination database. So, if you want messages to propagate from one database to another, but there is no direct network connection between the computers running these databases, then you can still propagate the messages without reconfiguring your network, as long as one or more intermediate databases connect the source database to the destination database.

If you use directed networks, and an intermediate site goes down for an extended period of time or is removed, then you might need to reconfigure the network and the Streams environment.

Queue Forwarding and Apply Forwarding

An intermediate database in a directed network can propagate messages using either queue forwarding or apply forwarding. Queue forwarding means that the messages being forwarded at an intermediate database are the messages received by the intermediate database. The source database for a message is the database where the message originated.

Apply forwarding means that the messages being forwarded at an intermediate database are first processed by an apply process. These messages are then recaptured by a capture process at the intermediate database and forwarded. When you use apply forwarding, the intermediate database becomes the new source database for the messages, because the messages are recaptured from the redo log generated there.

Consider the following differences between queue forwarding and apply forwarding when you plan your Streams environment:

  • With queue forwarding, a message is propagated through the directed network without being changed, assuming there are no capture or propagation transformations. With apply forwarding, messages are applied and recaptured at intermediate databases and can be changed by conflict resolution, apply handlers, or apply transformations.

  • With queue forwarding, a destination database must have a separate apply process to apply messages from each source database. With apply forwarding, fewer apply processes might be required at a destination database because recapturing of messages at intermediate databases can result in fewer source databases when changes reach a destination database.

  • With queue forwarding, one or more intermediate databases are in place between a source database and a destination database. With apply forwarding, because messages are recaptured at intermediate databases, the source database for a message can be the same as the intermediate database connected directly with the destination database.

A single Streams environment can use a combination of queue forwarding and apply forwarding.

Advantages of Queue Forwarding

Queue forwarding has the following advantages compared with apply forwarding:

  • Performance might be improved because a message is captured only once.

  • Less time might be required to propagate a message from the database where the message originated to the destination database, because the messages are not applied and recaptured at one or more intermediate databases. In other words, latency might be lower with queue forwarding.

  • The origin of a message can be determined easily by running the GET_SOURCE_DATABASE_NAME member procedure on the LCR contained in the message, as shown in the sketch following this list. If you use apply forwarding, then determining the origin of a message requires the use of Streams tags and apply handlers.

  • Parallel apply might scale better and provide more throughput when separate apply processes are used because there are fewer dependencies, and because there are multiple apply coordinators and apply reader processes to perform the work.

  • If one intermediate database goes down, then you can reroute the queues and reset the start SCN at the capture site to reconfigure end-to-end capture, propagation, and apply.

    If you use apply forwarding, then substantially more work might be required to reconfigure end-to-end capture, propagation, and apply of messages, because the destination database(s) downstream from the unavailable intermediate database were using the SCN information of this intermediate database. Without this SCN information, the destination databases cannot apply the changes properly.
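The following is a minimal, self-contained sketch of calling GET_SOURCE_DATABASE_NAME. It constructs a row LCR in place of one received from a queue, and the database and object names are placeholders:

SET SERVEROUTPUT ON
DECLARE
  row_lcr SYS.LCR$_ROW_RECORD;
BEGIN
  -- In practice, the row LCR would come from a dequeued message or a handler.
  row_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
    source_database_name => 'dbs1.example.com',
    command_type         => 'INSERT',
    object_owner         => 'hr',
    object_name          => 'employees');
  DBMS_OUTPUT.PUT_LINE('Origin: ' || row_lcr.GET_SOURCE_DATABASE_NAME());
END;
/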

Advantages of Apply Forwarding

Apply forwarding has the following advantages compared with queue forwarding:

  • A Streams environment might be easier to configure because each database can apply changes only from databases directly connected to it, rather than from multiple remote source databases.

  • In a large Streams environment where intermediate databases apply changes, the environment might be easier to monitor and manage because fewer apply processes might be required. An intermediate database that applies changes must have one apply process for each source database from which it receives changes. In an apply forwarding environment, the source databases of an intermediate database are only the databases to which it is directly connected. In a queue forwarding environment, the source databases of an intermediate database are all of the other source databases in the environment, whether they are directly connected to the intermediate database or not.

Binary File Propagation

You can propagate a binary file between databases by using Streams. To do so, you put one or more BFILE attributes in a message payload and then propagate the message to a remote queue. Each BFILE referenced in the payload is transferred to the remote database after the message is propagated, but before the message propagation is committed. The directory object and filename of each propagated BFILE are preserved, but you can map the directory object to different directories on the source and destination databases. The message payload can be a BFILE wrapped in an ANYDATA payload, or the message payload can be one or more BFILE attributes of an object wrapped in an ANYDATA payload.

The following are not supported in a message payload:

  • One or more BFILE attributes in a varray

  • A user-defined type object with an ANYDATA attribute that contains one or more BFILE attributes

Propagating a BFILE in Streams has the same restrictions as the procedure DBMS_FILE_TRANSFER.PUT_FILE.
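For illustration, here is a sketch of enqueuing a BFILE wrapped in an ANYDATA payload with DBMS_AQ.ENQUEUE. The BFILE_DIR directory object, the file name, and the queue name are hypothetical, and the queue is assumed to have a subscriber that can dequeue the message:

DECLARE
  enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  message_handle     RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => ANYDATA.ConvertBFile(BFILENAME('BFILE_DIR', 'report.pdf')),
    msgid              => message_handle);
  COMMIT;  -- make the enqueue visible for propagation
END;
/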


See Also:

Oracle Database Concepts, Oracle Database Administrator's Guide, and Oracle Database PL/SQL Packages and Types Reference for more information about transferring files with the DBMS_FILE_TRANSFER package

Messaging Clients

A messaging client dequeues user-enqueued messages when it is invoked by an application or a user. You use rules to specify which user-enqueued messages in the queue are dequeued by a messaging client. These user-enqueued messages can be user-enqueued LCRs or user-enqueued non-LCR messages.

You can create a messaging client by specifying dequeue for the streams_type parameter when you run one of the following procedures in the DBMS_STREAMS_ADM package:

When you create a messaging client, you specify the name of the messaging client and the ANYDATA queue from which the messaging client dequeues messages. These procedures can also add rules to the positive rule set or negative rule set of a messaging client. You specify the message type for each rule, and a single messaging client can dequeue messages of different types.

The user who creates a messaging client is granted the privileges to dequeue from the queue using the messaging client. This user is the messaging client user. The messaging client user can dequeue messages that satisfy the messaging client rule sets. A messaging client can be associated with only one user, but one user can be associated with many messaging clients.

Figure 3-3 shows a messaging client dequeuing user-enqueued messages.

Figure 3-3 Messaging Client


ANYDATA Queues and User Messages

Streams enables messaging with queues of type ANYDATA. These queues can stage user messages whose payloads are of ANYDATA type. An ANYDATA payload can be a wrapper for payloads of different datatypes.

By using ANYDATA wrappers for message payloads, publishing applications can enqueue messages of different types into a single queue, and subscribing applications can dequeue these messages, either explicitly using a messaging client or an application, or implicitly using an apply process. If the subscribing application is remote, then the messages can be propagated to the remote site, and the subscribing application can dequeue the messages from a local queue in the remote database. Alternatively, a remote subscribing application can dequeue messages directly from the source queue using a variety of standard protocols, such as PL/SQL and OCI.

Streams includes the features of Advanced Queuing (AQ), which supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, internet propagation, transformations, and gateways to other messaging subsystems.

You can wrap almost any type of payload in an ANYDATA payload. To do this, you use the Convertdata_type static functions of the ANYDATA type, where data_type is the type of object to wrap. These functions take the object as input and return an ANYDATA object.
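For example, this minimal sketch wraps a VARCHAR2 value and a NUMBER value in ANYDATA objects and prints the wrapped type names (the values are arbitrary):

SET SERVEROUTPUT ON
DECLARE
  msg SYS.ANYDATA;
BEGIN
  msg := ANYDATA.ConvertVarchar2('New employee hired');
  DBMS_OUTPUT.PUT_LINE(msg.GetTypeName);  -- prints SYS.VARCHAR2
  msg := ANYDATA.ConvertNumber(7839);
  DBMS_OUTPUT.PUT_LINE(msg.GetTypeName);  -- prints SYS.NUMBER
END;
/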

You cannot enqueue ANYDATA payloads that contain payloads of the following types into an ANYDATA queue:


Note:

  • Payloads of ROWID datatype cannot be wrapped in an ANYDATA wrapper. This restriction does not apply to payloads of UROWID datatype.

  • A queue that can stage messages of only one particular type is called a typed queue.



Buffered Messaging and Streams Clients

Buffered messaging enables users and applications to enqueue messages into and dequeue messages from a buffered queue. Propagations can propagate buffered messages from one buffered queue to another. Buffered messaging can improve the performance of a messaging environment by storing messages in memory instead of persistently on disk in a queue table. The following sections discuss how buffered messages interact with Streams clients:


Note:

To use buffered messaging, the compatibility level of the Oracle database must be 10.2.0 or higher.


Buffered Messages and Capture Processes

Messages enqueued into a buffered queue by a capture process can be dequeued only by an apply process. Captured messages cannot be dequeued by users or applications.

Buffered Messages and Propagations

A propagation will propagate any messages in its source queue that satisfy its rule sets. These messages can be stored in a buffered queue or stored persistently in a queue table. A propagation can propagate both types of messages if the messages satisfy the rule sets used by the propagation.

Buffered Messages and Apply Processes

Apply processes can dequeue and process messages in a buffered queue. To dequeue messages in a buffered queue that were enqueued by a capture process, the apply process must be configured with the apply_captured parameter set to true. To dequeue messages in a buffered queue that were enqueued by a user or application, the apply process must be configured with the apply_captured parameter set to false. An apply process sends user-enqueued messages to its message handler for processing.
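For example, here is a sketch of creating an apply process that dequeues user-enqueued messages and passes non-LCR messages to a message handler. The apply process name and the strmadmin.usermsg_handler procedure are hypothetical, and the handler procedure must already exist:

BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name      => 'strmadmin.streams_queue',
    apply_name      => 'apply_user_msgs',
    message_handler => 'strmadmin.usermsg_handler',
    apply_captured  => false);  -- dequeue user-enqueued messages only
END;
/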

Buffered Messages and Messaging Clients

Currently, messaging clients cannot dequeue buffered messages. In addition, the DBMS_STREAMS_MESSAGING package cannot be used to enqueue messages into or dequeue messages from a buffered queue.


Note:

The DBMS_AQ and DBMS_AQADM packages support buffered messaging.


See Also:

Oracle Streams Advanced Queuing User's Guide and Reference for more information about using the DBMS_AQ and DBMS_AQADM packages

Queues and Oracle Real Application Clusters

You can configure a queue to stage captured messages and user-enqueued messages in an Oracle Real Application Clusters (RAC) environment, and propagations can propagate these messages from one queue to another. In a RAC environment, only the owner instance can have a buffer for a queue, but different instances can have buffers for different queues. A buffered queue is System Global Area (SGA) memory associated with a queue. Buffered queues are discussed in more detail later in this chapter.

Streams processes and jobs support primary instance and secondary instance specifications for queue tables. If you use these specifications, then the secondary instance assumes ownership of a queue table when the primary instance becomes unavailable, and ownership is transferred back to the primary instance when it becomes available again. If both the primary and secondary instance for a queue table containing a destination queue become unavailable, then queue ownership is transferred automatically to another instance in the cluster. In this case, if the primary or secondary instance becomes available again, then ownership is transferred back to one of them accordingly. You can set primary and secondary instance specifications using the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package.
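For example, the following sketch designates instance 1 as the primary instance and instance 2 as the secondary instance for a queue table (the queue table name and instance numbers are placeholders):

BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'strmadmin.streams_queue_table',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/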

Each capture process and apply process is started on the owner instance for its queue, even if the start procedure is run on a different instance. For propagations, if the owner instance for a queue table containing a destination queue becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. A queue-to-queue propagation to a buffered destination queue uses a service to provide transparent failover in a RAC environment. That is, a propagation job for a queue-to-queue propagation automatically connects to the instance that owns the destination queue.

The service used by a queue-to-queue propagation always runs on the owner instance of the destination queue. This service is created only for buffered queues in a RAC database. If you plan to use buffered messaging with a RAC database, then messages can be enqueued into a buffered queue on any instance. If messages are enqueued on an instance that does not own the queue, then the messages are sent to the correct instance, but it is more efficient to enqueue messages on the instance that owns the queue. The service can be used to connect to the owner instance of the queue before enqueuing messages into a buffered queue.

Queue-to-dblink propagations do not use services. To make the propagation job connect to the correct instance on the destination database, manually reconfigure the database link from the source database to connect to the instance that owns the destination queue.

The NAME column in the DBA_SERVICES data dictionary view contains the service name for a queue. The NETWORK_NAME column in the DBA_QUEUES data dictionary view contains the network name for a queue. Do not manage the services for queue-to-queue propagations in any way. Oracle manages them automatically. For queue-to-dblink propagations, use the network name as the service name in the connect string of the database link to connect to the correct instance.
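For example, the following queries, run as a suitably privileged user, show the network names of the Streams administrator's queues and the available service names (the schema name is a placeholder):

SELECT name, network_name
  FROM dba_queues
 WHERE owner = 'STRMADMIN';

SELECT name
  FROM dba_services;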

The DBA_QUEUE_TABLES data dictionary view contains information about the owner instance for a queue table. A queue table can contain multiple queues. In this case, each queue in a queue table has the same owner instance as the queue table.


Note:

If a queue contains or will contain captured messages in a RAC environment, then queue-to-queue propagations should be used to propagate messages to a RAC destination database. If a queue-to-dblink propagation propagates captured messages to a RAC destination database, then this propagation must use an instance-specific database link that refers to the owner instance of the destination queue. If such a propagation connects to any other instance, then the propagation will raise an error.

Commit-Time Queues

You can control the order in which user-enqueued messages in a queue are browsed or dequeued. Message ordering in a queue is determined by its queue table, and you can specify message ordering for a queue table during queue table creation. Specifically, the sort_list parameter in the DBMS_AQADM.CREATE_QUEUE_TABLE procedure determines how user-enqueued messages are ordered. Oracle Database 10g Release 2 introduces commit-time queues. Each message in a commit-time queue is ordered by an approximate commit system change number (approximate CSCN) which is obtained when the transaction that enqueued the message commits.

Commit-time ordering is specified for a queue table, and queues that use the queue table are called commit-time queues. When commit_time is specified for the sort_list parameter in the DBMS_AQADM.CREATE_QUEUE_TABLE procedure, the resulting queue table uses commit-time ordering.

For Oracle Database 10g Release 2, the default sort_list setting for queue tables created by the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package is commit_time. For releases prior to Oracle Database 10g Release 2, the default is enq_time, which is described in the section that follows. When the queue_table parameter in the SET_UP_QUEUE procedure specifies an existing queue table, message ordering in the queue created by SET_UP_QUEUE is determined by the existing queue table.
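For reference, here is a sketch of creating a commit-time queue table directly with DBMS_AQADM.CREATE_QUEUE_TABLE. The queue table name is a placeholder, and the other settings mirror those used for Streams queue tables:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.commit_time_qt',
    queue_payload_type => 'SYS.ANYDATA',
    sort_list          => 'COMMIT_TIME',  -- order messages by approximate CSCN
    multiple_consumers => true,
    message_grouping   => DBMS_AQADM.TRANSACTIONAL,
    secure             => true);
END;
/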

When to Use Commit-Time Queues

A user or application can share information by enqueuing messages into a queue in an Oracle database. The enqueued messages can be shared within a single database or propagated to other databases, and the messages can be LCRs or user messages. For example, messages can be enqueued when an application-specific message occurs or when a trigger is fired for a database change. Also, in a heterogeneous environment, an application can enqueue messages that originated at a non-Oracle database into a queue in an Oracle database.

Other than commit_time, the settings for the sort_list parameter in the CREATE_QUEUE_TABLE procedure are priority and enq_time. The priority setting orders messages by the priority specified during enqueue, highest priority to lowest priority. The enq_time setting orders messages by the time when they were enqueued, oldest to newest.

Commit-time queues are useful when an environment must support either of the following requirements for concurrent enqueues of user-enqueued messages:

  • Transactional dependency ordering during dequeue

  • Consistent browses of the messages in a queue

Commit-time queues support these requirements. Neither priority nor enqueue time ordering supports these requirements, because both allow transactional dependency violations and nonconsistent browses. Both settings allow transactional dependency violations, because messages are dequeued independent of the original dependencies. Also, both settings allow nonconsistent browses of the messages in a queue, because multiple browses performed without any dequeue operations between them can result in different sets of messages.

Transactional Dependency Ordering During Dequeue

A transactional dependency occurs when one database transaction requires that another database transaction commits before it can commit successfully. Messages that contain information about database transactions can be enqueued into a queue. For example, a database trigger can fire to enqueue messages. Figure 3-4 shows how enqueue time ordering does not support transactional dependency ordering during dequeue of such messages.

Figure 3-4 Transactional Dependency Violation During Dequeue


Figure 3-4 shows how transactional dependency ordering can be violated with enqueue time ordering. The transaction that enqueued message e2 was committed before the transaction that enqueued messages e1 and e3 was committed, and the update in message e3 depends on the insert in message e2. So, the correct dequeue order that supports transactional dependencies is e2, e1, e3. However, with enqueue time ordering, e3 can be dequeued before e2. Therefore, when e3 is dequeued, an error results when an application attempts to apply the change in e3 to the hr.employees table. Also, after all three messages are dequeued, a row in the hr.employees table contains the wrong information because the change in e3 was not executed.

Consistent Browse of Messages in a Queue

Figure 3-5 shows how enqueue time ordering does not support consistent browse of messages in a queue.

Figure 3-5 Inconsistent Browse of Messages in a Queue


Figure 3-5 shows that a client browsing messages in a queue is not guaranteed a definite order with enqueue time ordering. Sessions 1 and 2 are concurrent sessions that are enqueuing messages. Session 3 shows two sets of client browses that return the three enqueued messages in different orders. If the client requires deterministic ordering of messages, then the client might fail. For example, the client might perform a browse to initiate a program state, and a subsequent dequeue might return messages in a different order than expected.

How Commit-Time Queues Work

The commit system change number (CSCN) for a message that is enqueued into a queue is not known until the redo record for the commit of the transaction that includes the message is written to the redo log. The CSCN cannot be recorded when the message is enqueued. Commit-time queues use the current SCN of the database when a transaction is committed as the approximate CSCN for all of the messages in the transaction. The order of messages in a commit-time queue is based on the approximate CSCN of the transaction that enqueued the messages.

In a commit-time queue, messages in a transaction are not visible to dequeue and browse operations until a deterministic order for the messages can be established using the approximate CSCN. When multiple transactions are enqueuing messages concurrently into the same commit-time queue, two or more transactions can commit at nearly the same time, and the commit intervals for these transactions can overlap. In this case, the messages in these transactions are not visible until all of the transactions have committed. At that time, the order of the messages can be determined using the approximate CSCN of each transaction. Dependencies are maintained by using the approximate CSCN for messages rather than the enqueue time. Read consistency for browses is maintained by ensuring that only messages with a fully determined order are visible.

A commit-time queue always maintains transactional dependency ordering for messages that are based on database transactions. However, applications and users can enqueue messages that are not based on database transactions. For these messages, if dependencies exist between transactions, then the application or user must ensure that transactions are committed in the correct order and that the commit intervals of the dependent transactions do not overlap.

The approximate CSCNs of transactions recorded by a commit-time queue might not reflect the actual commit order of these transactions. For example, transaction 1 and transaction 2 can commit at nearly the same time after enqueuing their messages. The approximate CSCN for transaction 1 can be lower than the approximate CSCN for transaction 2, but transaction 1 can take more time to complete the commit than transaction 2. In this case, the actual CSCN for transaction 2 is lower than the actual CSCN for transaction 1.


Note:

The sort_list parameter in CREATE_QUEUE_TABLE can be set to the following:
priority, commit_time

In this case, ordering is done by priority first and commit time second. Therefore, this setting does not ensure transactional dependency ordering and browse read consistency for messages with different priorities. However, transactional dependency ordering and browse read consistency are ensured for messages with the same priority.



See Also:

"Creating an ANYDATA Queue" for information about creating a commit-time queue

Streams Staging and Propagation Architecture

This section describes buffered queues, propagation jobs, and secure queues, and how they are used in Streams. In addition, this section discusses how transactional queues handle captured messages and user-enqueued messages, as well as the need for a Streams data dictionary at databases that propagate captured messages.

This section contains the following topics:


Streams Pool

The Streams pool is a portion of memory in the System Global Area (SGA) that is used by Streams. The Streams pool stores buffered queue messages in memory, and it provides memory for capture processes and apply processes. The Streams pool always stores LCRs captured by a capture process, and it stores LCRs and messages that are enqueued into a buffered queue by applications or users.

The Streams pool is initialized the first time any one of the following actions occurs in a database:

  • A message is enqueued into a buffered queue. Data Pump export and import operations initialize the Streams pool because these operations use buffered queues.

  • A capture process is started.

  • An apply process is started.

The size of the Streams pool is determined in one of the following ways, each described in the sections that follow:

  • Automatic Shared Memory Management sets the Streams pool size.

  • A database administrator sets the Streams pool size manually.

  • The default Streams pool size is used.


Note:

If the Streams pool cannot be initialized, then an ORA-00832 error is returned. If this happens, then first ensure that there is enough space in the SGA for the Streams pool. If necessary, reset the SGA_MAX_SIZE initialization parameter to increase the SGA size. Next, either set the SGA_TARGET or the STREAMS_POOL_SIZE initialization parameter (or both).

Streams Pool Size Set by Automatic Shared Memory Management

The Automatic Shared Memory Management feature manages the size of the Streams pool when the SGA_TARGET initialization parameter is set to a nonzero value. If the STREAMS_POOL_SIZE initialization parameter also is set to a nonzero value, then Automatic Shared Memory Management uses this value as a minimum for the Streams pool. You can set a minimum size if your environment needs a minimum amount of memory in the Streams pool to function properly.


See Also:

Oracle Database Administrator's Guide and Oracle Database Reference for more information about Automatic Shared Memory Management and the SGA_TARGET initialization parameter

Streams Pool Size Set Manually by a Database Administrator

If the STREAMS_POOL_SIZE initialization parameter is set to a nonzero value, and the SGA_TARGET parameter is set to 0 (zero), then the Streams pool size is the value specified by the STREAMS_POOL_SIZE parameter, in bytes. If you plan to set the Streams pool size manually, then you can use the V$STREAMS_POOL_ADVICE dynamic performance view to determine an appropriate setting for the STREAMS_POOL_SIZE initialization parameter.
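For example, the following hypothetical session queries the advice view and then sets the Streams pool size accordingly (the 256M value is arbitrary, and ALTER SYSTEM requires the appropriate privileges):

SELECT streams_pool_size_for_estimate AS size_mb,
       estd_spill_count,
       estd_unspill_count
  FROM v$streams_pool_advice
 ORDER BY streams_pool_size_for_estimate;

ALTER SYSTEM SET STREAMS_POOL_SIZE = 256M;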

Streams Pool Size Set by Default

If both the STREAMS_POOL_SIZE and the SGA_TARGET initialization parameters are set to 0 (zero), then, by default, the first use of Streams in a database transfers an amount of memory equal to 10% of the shared pool from the buffer cache to the Streams pool. The buffer cache is set by the DB_CACHE_SIZE initialization parameter, and the shared pool size is set by the SHARED_POOL_SIZE initialization parameter.

For example, consider the following configuration in a database before Streams is used for the first time:

  • DB_CACHE_SIZE is set to 100 MB.

  • SHARED_POOL_SIZE is set to 80 MB.

  • STREAMS_POOL_SIZE is set to zero.

  • SGA_TARGET is set to zero.

Given this configuration, the amount of memory allocated after Streams is used for the first time is the following:

  • The buffer cache has 92 MB.

  • The shared pool has 80 MB.

  • The Streams pool has 8 MB.

The first use of Streams in a database is the first attempt to allocate memory from the Streams pool. Memory is allocated from the Streams pool in the following ways:

  • A message is enqueued into a buffered queue. The message can be an LCR captured by a capture process, or it can be a user-enqueued LCR or message.

  • A capture process is started.

  • An apply process is started.

Buffered Queues

A buffered queue includes the following storage areas:

  • Streams pool memory associated with a queue that contains messages that were captured by a capture process or enqueued by applications or users

  • Part of a queue table that stores messages that have spilled from memory to disk

Queue tables are stored on disk. Buffered queues enable Oracle to optimize messages by buffering them in the SGA instead of always storing them in a queue table.

If the size of the Streams pool is not managed automatically, then you should increase the size of the Streams pool by 10 MB for each buffered queue in a database. Buffered queues improve performance, but some of the information in a buffered queue can be lost if the instance containing the buffered queue shuts down normally or abnormally. Streams automatically recovers from these cases, assuming full database recovery is performed on the instance.

Messages in a buffered queue can spill from memory into the queue table if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate AQ$_queue_table_name_p table, where queue_table_name is the name of the queue table for the queue. Also, for each spilled message, information is stored in the AQ$_queue_table_name_d table about any propagations and apply processes that are eligible for processing the message.

Captured messages are always stored in a buffered queue, but user-enqueued LCRs and user-enqueued non-LCR messages might or might not be stored in a buffered queue. For a user-enqueued message, the enqueue operation specifies whether the enqueued message is stored in the buffered queue or in the persistent queue. A persistent queue only stores messages on hard disk in a queue table, not in memory. The delivery_mode attribute in the enqueue_options parameter of the DBMS_AQ.ENQUEUE procedure determines whether a message is stored in the buffered queue or the persistent queue. Specifically, if the delivery_mode attribute is the default PERSISTENT, then the message is enqueued into the persistent queue. If it is set to BUFFERED, then the message is enqueued into the buffered queue, as shown in the sketch below. When a transaction is moved to the error queue, all messages in the transaction always are stored in a queue table, not in a buffered queue.
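For example, this sketch enqueues a message into the buffered portion of a queue. Buffered enqueues require IMMEDIATE visibility, the queue name is a placeholder, and a subscriber that can dequeue the message is assumed:

DECLARE
  enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  message_handle     RAW(16);
BEGIN
  enqueue_options.delivery_mode := DBMS_AQ.BUFFERED;
  enqueue_options.visibility    := DBMS_AQ.IMMEDIATE;  -- required for buffered messages
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => ANYDATA.ConvertVarchar2('buffered message'),
    msgid              => message_handle);
END;
/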


Note:

  • Using triggers on queue tables is not recommended because it can have a negative impact on performance. Also, the use of triggers on index-organized queue tables is not supported.

  • Although buffered and persistent messages can be stored in the same queue, it is sometimes more convenient to think of a queue having a buffered portion and a persistent portion, referred to here as "buffered queue" and "persistent queue".



Propagation Jobs

A Streams propagation is configured internally using the DBMS_JOB package. Therefore, a propagation job is a job used by a propagation that propagates messages from a source queue to a destination queue. Like other jobs configured using the DBMS_JOB package, propagation jobs have an owner, and they use job queue processes (Jnnn) as needed to execute jobs.

The following procedures can create a propagation job when they create a propagation:

  • The ADD_GLOBAL_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The ADD_SUBSET_PROPAGATION_RULE procedure in the DBMS_STREAMS_ADM package

  • The CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package

When one of these procedures creates a propagation, a new propagation job is created in the following cases:

  • If the queue_to_queue parameter is set to true, then a new propagation job always is created for the propagation. Each queue-to-queue propagation has its own propagation job. However, a job queue process can be used by multiple propagation jobs.

  • If the queue_to_queue parameter is set to false, then a propagation job is created when no propagation job exists for the source queue and database link specified. If a propagation job already exists for the specified source queue and database link, then the new propagation uses the existing propagation job and shares this propagation job with all of the other queue-to-dblink propagations that use the same database link.

A propagation job for a queue-to-dblink propagation can be used by more than one propagation. All destination queues at a database receive messages from a single source queue through a single propagation job. By using a single propagation job for multiple destination queues, Streams ensures that a message is sent to a destination database only once, even if the same message is received by multiple destination queues in the same database. Communication resources are conserved because messages are not sent more than once to the same database.
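For example, here is a sketch of creating a queue-to-queue propagation with its own exclusive propagation job. The propagation name, the queue names, and the dbs2.example.com database link are placeholders:

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'q2q_propagation',
    source_queue       => 'strmadmin.streams_queue',
    destination_queue  => 'strmadmin.streams_queue',
    destination_dblink => 'dbs2.example.com',
    queue_to_queue     => true);  -- false creates a queue-to-dblink propagation
END;
/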


Note:

The source queue owner performs the propagation, but the propagation job is owned by the user who creates it. These two users might or might not be the same.

Propagation Scheduling and Streams Propagations

A propagation schedule specifies how often a propagation job propagates messages from a source queue to a destination queue. Each queue-to-queue propagation has its own propagation job and propagation schedule, but queue-to-dblink propagations that use the same propagation job have the same propagation schedule.

A default propagation schedule is established when a new propagation job is created by a procedure in the DBMS_STREAMS_ADM or DBMS_PROPAGATION_ADM package.

The default schedule has the following properties:

  • The start time is SYSDATE().

  • The duration is NULL, which means infinite.

  • The next time is NULL, which means that propagation restarts as soon as it finishes the current duration.

  • The latency is three seconds, which is the wait time after a queue becomes empty to resubmit the propagation job. Therefore, the latency is the maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued.

You can alter the schedule for a propagation job using the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package. Changes made to a propagation job affect all propagations that use the propagation job.
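For example, the following sketch reduces the latency of a queue-to-queue propagation schedule to one second. The queue name, destination database link, and destination queue are placeholders; omit destination_queue to alter the schedule of a queue-to-dblink propagation:

BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
    queue_name        => 'strmadmin.streams_queue',
    destination       => 'dbs2.example.com',
    destination_queue => 'strmadmin.streams_queue',
    latency           => 1);
END;
/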

Propagation Jobs and RESTRICTED SESSION

When the restricted session is enabled during system startup by issuing a STARTUP RESTRICT statement, propagation jobs with enabled propagation schedules do not propagate messages. When the restricted session is disabled, each propagation schedule that is enabled and ready to run will run when there is an available job queue process.

When the restricted session is enabled in a running database by the SQL statement ALTER SYSTEM ENABLE RESTRICTED SESSION, any running propagation job continues to run to completion. However, any new propagation job submitted for a propagation schedule is not started. Therefore, propagation for an enabled schedule can eventually come to a halt.

Secure Queues

Secure queues are queues for which AQ agents must be associated explicitly with one or more database users who can perform queue operations, such as enqueue and dequeue. The owner of a secure queue can perform all queue operations on the queue, but other users cannot perform queue operations on a secure queue, unless they are configured as secure queue users. In Streams, secure queues can be used to ensure that only the appropriate users and Streams clients enqueue messages into a queue and dequeue messages from a queue.

Secure Queues and the SET_UP_QUEUE Procedure

All ANYDATA queues created using the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package are secure queues. When you use the SET_UP_QUEUE procedure to create a queue, any user specified by the queue_user parameter is configured as a secure queue user of the queue automatically, if possible. The queue user is also granted ENQUEUE and DEQUEUE privileges on the queue. To enqueue messages into and dequeue messages from a queue, a queue user must also have EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package. The SET_UP_QUEUE procedure does not grant either of these privileges. Also, a message cannot be enqueued into a queue unless a subscriber who can dequeue the message is configured.

To configure a queue user as a secure queue user, the SET_UP_QUEUE procedure creates an AQ agent with the same name as the user name, if one does not already exist. The user must use this agent to perform queue operations on the queue. If an agent with this name already exists and is associated with the queue user only, then the existing agent is used. SET_UP_QUEUE then runs the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package, specifying the agent and the user.

If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create a secure queue, and you want a user who is not the queue owner and who was not specified by the queue_user parameter to perform operations on the queue, then you can configure the user as a secure queue user of the queue manually. Alternatively, you can run the SET_UP_QUEUE procedure again and specify a different queue_user for the queue. In this case, SET_UP_QUEUE skips queue creation, but it configures the user specified by queue_user as a secure queue user of the queue.

If you create an ANYDATA queue using the DBMS_AQADM package, then you use the secure parameter when you run the CREATE_QUEUE_TABLE procedure to specify whether the queue is secure or not. The queue is secure if you specify true for the secure parameter when you run this procedure. When you use the DBMS_AQADM package to create a secure queue, and you want to allow users to perform queue operations on the secure queue, then you must configure these secure queue users manually.
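For example, here is a sketch of manually configuring a user (oe in this hypothetical case) as a secure queue user of an existing secure queue and granting the queue privileges:

BEGIN
  DBMS_AQADM.CREATE_AQ_AGENT(agent_name => 'oe');
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'oe',
    db_username => 'oe');
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
    privilege  => 'ALL',          -- ENQUEUE and DEQUEUE
    queue_name => 'strmadmin.streams_queue',
    grantee    => 'oe');
END;
/

The user also needs EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package to perform queue operations.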

Secure Queues and Streams Clients

When you create a capture process or an apply process, an AQ agent of the secure queue associated with the Streams process is configured automatically, and the user who runs the Streams process is specified as a secure queue user for this queue automatically. Therefore, a capture process is configured to enqueue into its secure queue automatically, and an apply process is configured to dequeue from its secure queue automatically. In either case, the AQ agent has the same name as the Streams client.

For a capture process, the user specified as the capture_user is the user who runs the capture process. For an apply process, the user specified as the apply_user is the user who runs the apply process. If no capture_user or apply_user is specified, then the user who invokes the procedure that creates the Streams process is the user who runs the Streams process.

Also, if you change the capture_user for a capture process or the apply_user for an apply process, then the specified capture_user or apply_user is configured as a secure queue user of the queue used by the Streams process. However, the old capture user or apply user remains configured as a secure queue user of the queue. To remove the old user, run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package, specifying the old user and the relevant AQ agent. You might also want to drop the agent if it is no longer needed. You can view the AQ agents and their associated users by querying the DBA_AQ_AGENT_PRIVS data dictionary view.
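For example, assuming an old apply user hr and an AQ agent named after a hypothetical apply process apply_emp, the cleanup might look like this:

SELECT agent_name, db_username
  FROM dba_aq_agent_privs;

BEGIN
  DBMS_AQADM.DISABLE_DB_ACCESS(
    agent_name  => 'apply_emp',
    db_username => 'hr');
  DBMS_AQADM.DROP_AQ_AGENT(agent_name => 'apply_emp');  -- only if no longer needed
END;
/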

When you create a messaging client, an AQ agent of the secure queue with the same name as the messaging client is associated with the user who runs the procedure that creates the messaging client. This messaging client user is specified as a secure queue user for this queue automatically. Therefore, this user can use the messaging client to dequeue messages from the queue.

A capture process, an apply process, or a messaging client can be associated with only one user. However, one user can be associated with multiple Streams clients, including multiple capture processes, apply processes, and messaging clients. For example, an apply process cannot have both hr and oe as apply users, but hr can be the apply user for multiple apply processes.

If you drop a capture process, apply process, or messaging client, then the users who were configured as secure queue users for these Streams clients remain secure queue users of the queue. To remove these users as secure queue users, run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package for each user. You might also want to drop the agent if it is no longer needed.


Note:

No configuration is necessary for propagations and secure queues. Therefore, when a propagation is dropped, no additional steps are necessary to remove secure queue users from the propagation's queues.

Transactional and Nontransactional Queues

A transactional queue is a queue in which user-enqueued messages can be grouped into a set that is applied as one transaction. That is, an apply process performs a COMMIT after it applies all the user-enqueued messages in the group. The SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package always creates a transactional queue.

A nontransactional queue is one in which each user-enqueued message is its own transaction. That is, an apply process performs a COMMIT after each user-enqueued message it applies. In either case, the user-enqueued messages might or might not contain user-created LCRs.

The difference between transactional and nontransactional queues is important only for user-enqueued messages. An apply process always applies captured messages in transactions that preserve the transactions executed at the source database. Table 3-1 shows apply process behavior for each type of message and each type of queue.

Table 3-1 Apply Process Behavior for Transactional and Nontransactional Queues

Message Type | Transactional Queue | Nontransactional Queue
Captured Messages | Apply process preserves the original transaction | Apply process preserves the original transaction
User-Enqueued Messages | Apply process applies a user-specified group of user-enqueued messages as one transaction | Apply process applies each user-enqueued message in its own transaction



Streams Data Dictionary for Propagations

When a database object is prepared for instantiation at a source database, a Streams data dictionary is populated automatically at the database where changes to the object are captured by a capture process. The Streams data dictionary is a multiversioned copy of some of the information in the primary data dictionary at a source database. The Streams data dictionary maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column datatypes. This mapping keeps each captured message as small as possible, because the message can store numbers rather than names internally.

The mapping information in the Streams data dictionary at the source database is needed to evaluate rules at any database that propagates the captured messages from the source database. To make this mapping information available to a propagation, Oracle automatically populates a multiversioned Streams data dictionary at each database that has a Streams propagation. Oracle automatically sends internal messages that contain relevant information from the Streams data dictionary at the source database to all other databases that receive captured messages from the source database.

The Streams data dictionary information contained in these internal messages in a queue might or might not be propagated by a propagation. Which Streams data dictionary information to propagate depends on the rule sets for the propagation. When a propagation encounters Streams data dictionary information for a table, the propagation rule sets are evaluated with partial information that includes the source database name, table name, and table owner. If the partial rule evaluation of these rule sets determines that there might be relevant LCRs for the given table from the specified database, then the Streams data dictionary information for the table is propagated.

When Streams data dictionary information is propagated to a destination queue, it is incorporated into the Streams data dictionary at the database that contains the destination queue, in addition to being enqueued into the destination queue. Therefore, a propagation reading the destination queue in a directed networks configuration can forward LCRs immediately without waiting for the Streams data dictionary to be populated. In this way, the Streams data dictionary for a source database always reflects the correct state of the relevant database objects for the LCRs relating to these database objects.


What's New in Oracle Streams?

This section describes new features of Oracle Streams for Oracle Database 10g Release 2 (10.2) and provides pointers to additional information. New features information from previous releases is also retained to help those users migrating to the current release.

The following sections describe the new features in Oracle Streams:

Oracle Database 10g Release 2 (10.2) New Features in Streams

The following sections describe the new features in Oracle Streams for Oracle Database 10g Release 2 (10.2):

Streams Performance Improvements

Oracle Database 10g Release 2 includes performance improvements for most Streams operations. Specifically, the following Streams components have been improved to perform more efficiently and handle greater workloads:

This release also includes the following specific performance improvements:

Streams Configuration and Manageability Enhancements

The following are Streams configuration manageability enhancements for Oracle Database 10g Release 2:

Automatic Shared Memory Management of the Streams Pool

The Oracle Automatic Shared Memory Management feature manages the size of the Streams pool when the SGA_TARGET initialization parameter is set to a nonzero value.


See Also:

"Streams Pool"

Streams Tool in Oracle Enterprise Manager

The Streams tool in Oracle Enterprise Manager enables you to configure, manage, and monitor a Streams environment using a Web browser.


Procedures for Starting and Stopping Propagations

The START_PROPAGATION and STOP_PROPAGATION procedures are added to the DBMS_PROPAGATION_ADM package.

Queue-to-Queue Propagations

A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Also, in an Oracle Real Application Clusters (RAC) environment, when the destination queue in a queue-to-queue propagation is a buffered queue, the queue-to-queue propagation uses a service for transparent failover to another instance if the primary RAC instance fails.

Declarative Rule-Based Transformations

Declarative rule-based transformations provide a simple interface for configuring a set of common transformation scenarios for row LCRs. No user-defined PL/SQL function is required to configure a declarative rule-based transformation.

Commit-Time Queues

Commit-time queues provide more control over the order in which user-enqueued messages in a queue are browsed or dequeued.

Supplemental Logging Enabled During Preparation for Instantiation

The following procedures in the DBMS_CAPTURE_ADM package now include a supplemental_logging parameter which controls the supplemental logging specifications for the database objects being prepared for instantiation: PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, and PREPARE_GLOBAL_INSTANTIATION.

Configurable Transaction Spill Threshold for Apply Processes

The new txn_lcr_spill_threshold apply process parameter enables you to specify that an apply process begins to spill messages for a transaction from memory to disk when the number of messages in memory for a particular transaction exceeds the specified number. The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you to monitor the number of transactions and messages spilled by an apply process.

Conversion of LCRs to and from XML

The following functions in the DBMS_STREAMS package convert a logical change record (LCR) to or from XML:

Retrying an Error Transaction with a User Procedure

A new parameter, user_procedure, is added to the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package. This parameter enables you to specify a user procedure that modifies one or more LCRs in an error transaction before the transaction is executed.

Enhanced Support for Index-Organized Tables

Streams capture processes and apply processes now support index-organized tables that contain the following datatypes, in addition to the datatypes that were supported in past releases of Oracle:

Logical change records (LCRs) containing these datatypes in index-organized tables can also be propagated using propagations.

Also, Streams now supports index-organized tables that include an OVERFLOW segment.

Row LCR Execution Enhancements

In previous releases, the EXECUTE member procedure for row LCRs could execute row LCRs only in an apply handler for an apply process. In Oracle Database 10g Release 2, the EXECUTE member procedure can execute user-constructed row LCRs, row LCRs in the error queue, and row LCRs that were last enqueued by an apply process, user, or application.

Information About Oldest Transaction in V$STREAMS_APPLY_READER

The following new columns are added to the V$STREAMS_APPLY_READER dynamic performance view: OLDEST_XIDUSN, OLDEST_XIDSLT, and OLDEST_XIDSQN. These columns show the transaction identification number of the oldest transaction being assembled or applied by an apply process. The DBA_APPLY_PROGRESS view also contains this information. However, for a running apply process, the information in the V$STREAMS_APPLY_READER view is more current than the information in the DBA_APPLY_PROGRESS view.


See Also:

Oracle Database Reference for more information about the V$STREAMS_APPLY_READER dynamic performance view

Streams Replication Enhancements

The following are Streams replication enhancements for Oracle Database 10g Release 2:

Simple Streams Replication Configuration

The following new procedures in the DBMS_STREAMS_ADM package simplify configuration of a Streams replication environment:

LOB Assembly

LOB assembly simplifies processing of row LCRs with LOB columns in DML handler and error handlers.

Virtual Dependency Definitions

A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions at a destination database. Virtual dependency definitions enable an apply process to detect dependencies that it would not be able to detect by using only the constraint information in the data dictionary.

Instantiation Using Transportable Tablespace from Backup

A new RMAN command, TRANSPORT TABLESPACE, enables you to instantiate a set of tablespaces while the tablespaces in the source database remain online. The tablespaces can be added to the destination database using Data Pump import or the ATTACH_TABLESPACES procedure in the DBMS_STREAMS_TABLESPACE_ADM package.

RMAN Database Instantiation Across Platforms

The RMAN CONVERT DATABASE command can be used to instantiate an entire database in a replication environment where the source and destination databases are running on different platforms that have the same endian format.

Apply Processes Allow Duplicate Rows

In releases prior to Oracle Database 10g Release 2, an apply process always raises an error when it encounters a row LCR that changes more than one row in a table. In Oracle Database 10g Release 2, the new allow_duplicate_rows apply process parameter can be set to true to allow an apply process to apply a row LCR that changes more than one row.

View for Monitoring Long Running Transactions

The V$STREAMS_TRANSACTION dynamic performance view enables monitoring of long running transactions that are currently being processed by Streams capture processes and apply processes.


See Also:

Oracle Database Reference for more information about the V$STREAMS_TRANSACTION dynamic performance view

Rules Interface Enhancement

In Oracle Database 10g Release 2, a new procedure, ALTER_EVALUATION_CONTEXT in the DBMS_RULE_ADM package, enables you to alter an existing evaluation context.

Information Provisioning Enhancements

Information provisioning makes information available when and where it is needed. Oracle Database 10g Release 2 makes it easier to bulk provision a large amount of information and to incrementally provision information using Streams.

Oracle Database 10g Release 1 (10.1) New Features in Streams

The following sections describe the new features in Oracle Streams for Oracle Database 10g Release 1 (10.1):

Streams Performance Improvements

Oracle Database 10g Release 1 includes performance improvements for most Streams operations. Specifically, the following Streams components have been improved to perform more efficiently and handle greater workloads:

This release also includes performance improvements for ANYDATA queue operations and rule set evaluations.

Streams Configuration and Manageability Enhancements

The following are Streams configuration and manageability enhancements for Oracle Database 10g Release 1:

Negative Rule Sets

Streams clients, which include capture processes, propagations, apply processes, and messaging clients, can use two rule sets: a positive rule set and a negative rule set. Negative rule sets make it easier to discard specific changes so that they are not processed by a Streams client.

Downstream Capture

A capture process can run on a database other than the source database. The redo log files from the source database are copied to the other database, called a downstream database, and the capture process captures changes in these redo log files at the downstream database.

Subset Rules for Capture and Propagation

You can use subset rules for capture processes, propagations, and messaging clients, as well as for apply processes.


See Also:

"Subset Rules"

Streams Pool

When Streams is used in a single database, memory is allocated from a pool in the System Global Area (SGA) called the Streams pool. The Streams pool contains buffered queues and is used for internal communications during parallel capture and apply. Also, a new dynamic performance view, V$STREAMS_POOL_ADVICE, provides information that you can use to determine the best size for the Streams pool.
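
For example, a query such as the following (a sketch; the column names are assumptions modeled on other SGA advice views) estimates spill activity at several candidate pool sizes:

SELECT STREAMS_POOL_SIZE_FOR_ESTIMATE, ESTD_SPILL_COUNT, ESTD_SPILL_TIME
  FROM V$STREAMS_POOL_ADVICE;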

Access to Buffered Queue Information

The following new dynamic performance views enable you to monitor buffered queues: V$BUFFERED_QUEUES, V$BUFFERED_SUBSCRIBERS, and V$BUFFERED_PUBLISHERS.

SYSAUX Tablespace Usage

The default tablespace for LogMiner has been changed from the SYSTEM tablespace to the SYSAUX tablespace. When configuring a new database to run a capture process, you no longer need to relocate the LogMiner tables to a non-SYSTEM tablespace.

Ability to Add User-Defined Conditions to System-Created Rules

Some of the procedures that create rules in the DBMS_STREAMS_ADM package include an and_condition parameter. This parameter enables you to add custom conditions to system-created rules.
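
For example, the following call (a sketch; the table, capture process, and queue names are illustrative) creates table rules and appends a custom condition that limits the DML rule to UPDATE statements:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name    => 'hr.employees',
    streams_type  => 'capture',
    streams_name  => 'capture1',
    queue_name    => 'strmadmin.streams_queue',
    include_dml   => TRUE,
    include_ddl   => FALSE,
    and_condition => ':lcr.GET_COMMAND_TYPE() = ''UPDATE''');
END;
/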

Simpler Rule-Based Transformation Configuration and Administration

A new procedure, SET_RULE_TRANSFORM_FUNCTION in the DBMS_STREAMS_ADM package, makes it easy to specify and administer rule-based transformations.
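
For example, the following call (a sketch; the rule name is illustrative, and hr.executive_to_management is assumed to be an existing function that takes an ANYDATA payload and returns the transformed ANYDATA) associates the transformation function with a rule:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.employees12',
    transform_function => 'hr.executive_to_management');
END;
/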

Enqueue Destinations Upon Apply

A new procedure, SET_ENQUEUE_DESTINATION in the DBMS_APPLY_ADM package, makes it easy to specify a destination queue for messages that satisfy a particular rule. When a message satisfies such a rule in an apply process rule set, the apply process enqueues the message into the specified queue.
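
For example, the following call (a sketch; the rule and queue names are illustrative) enqueues messages that satisfy the rule into a second queue:

BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.employees12',
    destination_queue_name => 'strmadmin.hr_queue');
END;
/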

Execution Directives Upon Apply

A new procedure, SET_EXECUTE in the DBMS_APPLY_ADM package, enables you to specify that apply processes do not execute messages that satisfy a specific rule.
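
For example, the following call (a sketch; the rule name is illustrative) prevents apply processes from executing messages that satisfy the rule:

BEGIN
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name => 'strmadmin.employees12',
    execute   => FALSE);
END;
/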

Support for Additional Datatypes

Streams capture processes and apply processes now support the following additional datatypes:

Logical change records (LCRs) containing these datatypes can also be propagated using propagations.

Support for Index-Organized Tables

Streams capture processes and apply processes now support processing changes to index-organized tables.

Precommit Handlers

You can use a new type of apply handler called a precommit handler to record information about commits processed by an apply process.

Better Interoperation with Oracle Real Application Clusters

The following are specific enhancements that improve Streams interoperation with Oracle Real Application Clusters (RAC):

Support for Function-Based Indexes and Descending Indexes

Streams capture processes and apply processes now support processing changes to tables that use function-based indexes and descending indexes.

Simpler Removal of Rule Sets When a Streams Client Is Dropped

A new parameter, drop_unused_rule_sets, is added to the following procedures: DROP_CAPTURE in the DBMS_CAPTURE_ADM package, DROP_PROPAGATION in the DBMS_PROPAGATION_ADM package, and DROP_APPLY in the DBMS_APPLY_ADM package.

If you drop a Streams client using one of these procedures and set this parameter to true, then the procedure drops any rule sets, positive and negative, used by the specified Streams client if these rule sets are not used by any other Streams client. Streams clients include capture processes, propagations, apply processes, and messaging clients. If the procedure drops a rule set, then it also drops any rules in that rule set that are not in another rule set.
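
For example, the following call (a sketch; capture1 is an illustrative capture process name) drops a capture process along with any rule sets that no other Streams client uses:

BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'capture1',
    drop_unused_rule_sets => TRUE);
END;
/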

Simpler Removal of ANYDATA Queues

A new procedure, REMOVE_QUEUE in the DBMS_STREAMS_ADM package, enables you to remove an ANYDATA queue. This procedure also has a cascade parameter. When cascade is set to true, any Streams client that uses the queue is also removed.
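
For example, the following call (a sketch; the queue name is illustrative) removes a queue and any Streams clients that use it:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_QUEUE(
    queue_name => 'strmadmin.streams_queue',
    cascade    => TRUE);
END;
/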


Control Over Data Dictionary Builds in the Redo Log

You can use the BUILD procedure in the DBMS_CAPTURE_ADM package to extract the data dictionary of the current database to the redo log. A capture process can use the extracted information in the redo log to create the LogMiner data dictionary for the capture process. This procedure also identifies a valid first system change number (SCN) value that can be used by the capture process. The first SCN for a capture process is the lowest SCN in the redo log from which a capture process can capture changes. In addition, you can reset the first SCN for a capture process to purge unneeded information in a LogMiner data dictionary.
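
For example, the following block (a minimal sketch) performs a build and displays the first SCN that the procedure identifies:

SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
END;
/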

Additional Streams Data Dictionary Views and View Columns

This release includes new Streams data dictionary views and new columns in Streams data dictionary views that existed in past releases.


Copying and Moving Tablespaces

The DBMS_STREAMS_TABLESPACE_ADM package provides administrative procedures for copying tablespaces between databases and moving tablespaces from one database to another. This package uses transportable tablespaces, Data Pump, and the DBMS_FILE_TRANSFER package.

Simpler Streams Administrator Configuration

In this release, granting the DBA role to a Streams administrator is sufficient for most actions performed by the Streams administrator. In addition, a new package, DBMS_STREAMS_AUTH, provides procedures that make it easy for you to configure and manage a Streams administrator.
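
For example, the following call (a sketch; strmadmin is the administrator name used elsewhere in this guide) grants the privileges that a Streams administrator typically requires:

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'strmadmin');
END;
/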

Streams Configuration Removal

A new procedure, REMOVE_STREAMS_CONFIGURATION in the DBMS_STREAMS_ADM package, enables you to remove the entire Streams configuration at a database.
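
For example, the following call (a minimal sketch; the procedure takes no parameters) removes the Streams configuration at the current database:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
END;
/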


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the REMOVE_STREAMS_CONFIGURATION procedure

Streams Replication Enhancements

The following are Streams replication enhancements for Oracle Database 10g Release 1:

Additional Supplemental Logging Options

For database supplemental logging, you can specify that all FOREIGN KEY columns in a database are supplementally logged, or that ALL columns in a database are supplementally logged. These new options are added to the PRIMARY KEY and UNIQUE options, which were available in past releases.

For table supplemental logging, you can specify the following options for log groups: PRIMARY KEY, UNIQUE, FOREIGN KEY, and ALL.

These new options make it easier to specify and manage supplemental logging at a source database because you can specify supplemental logging without listing each column in a log group. If a table changes in the future, then the correct columns are logged automatically. For example, if you specify FOREIGN KEY for a table's log group, then the foreign key for a row is logged when the row is changed, even if the columns in the foreign key change in the future.
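
For example, the following statements (a sketch; hr.employees is an illustrative table) enable FOREIGN KEY supplemental logging at the database level and at the table level, respectively:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;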


See Also:

Oracle Streams Replication Administrator's Guide for more information about supplemental logging in a Streams replication environment

Additional Ways to Perform Instantiations

In addition to original export/import, you can use Data Pump export/import, transportable tablespaces, and RMAN to perform Streams instantiations.


See Also:

Oracle Streams Replication Administrator's Guide for more information about performing instantiations

New Data Dictionary Views for Schema and Global Instantiations

The following new data dictionary views enable you to determine which database objects have a set instantiation SCN at the schema and global level: DBA_APPLY_INSTANTIATED_SCHEMAS and DBA_APPLY_INSTANTIATED_GLOBAL.

Recursively Setting Schema and Global Instantiation SCN

A new recursive parameter in the SET_SCHEMA_INSTANTIATION_SCN and SET_GLOBAL_INSTANTIATION_SCN procedures enables you to set the instantiation SCN for a schema or database, respectively, and for all of the database objects in the schema or database.
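
For example, the following block (a sketch; it assumes it is run at the destination database and that a database link named dbs1.example.com connects to the source database, which recursive => TRUE requires) sets the instantiation SCN for the hr schema and for all of its database objects:

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@dbs1.example.com;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'hr',
    source_database_name => 'dbs1.example.com',
    instantiation_scn    => iscn,
    recursive            => TRUE);
END;
/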

Access to Streams Client Information During LCR Processing

The DBMS_STREAMS package includes two new functions: GET_STREAMS_NAME and GET_STREAMS_TYPE. These functions return the name and type, respectively, of a Streams client that is processing an LCR. You can use these functions in rule conditions, rule-based transformations, apply handlers, and error handlers.

For example, if you use one error handler for multiple apply processes, then you can use the GET_STREAMS_NAME function to determine the name of the apply process that raised the error. Also, you can use the GET_STREAMS_TYPE function to instruct a DML handler to operate differently if it is processing messages from the error queue (ERROR_EXECUTION type) instead of the apply process queue (APPLY type).
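
A fragment along these lines (a sketch of logic inside a user-written handler; outside of a Streams client both functions return NULL) branches on the client type:

DECLARE
  client_name VARCHAR2(30);
BEGIN
  client_name := DBMS_STREAMS.GET_STREAMS_NAME();
  IF DBMS_STREAMS.GET_STREAMS_TYPE() = 'ERROR_EXECUTION' THEN
    DBMS_OUTPUT.PUT_LINE('Reexecuting an error transaction for ' || client_name);
  ELSE
    DBMS_OUTPUT.PUT_LINE('Normal apply processing for ' || client_name);
  END IF;
END;
/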


Maintaining Tablespaces

You can use the MAINTAIN_SIMPLE_TABLESPACE procedure to configure Streams replication for a simple tablespace, and you can use the MAINTAIN_TABLESPACES procedure to configure Streams replication for a set of self-contained tablespaces. Both of these procedures are in the DBMS_STREAMS_ADM package. These procedures use transportable tablespaces, Data Pump, the DBMS_STREAMS_TABLESPACE_ADM package, and the DBMS_FILE_TRANSFER package to configure the environment.

Control Over Comparing Old Values in Conflict Detection

The COMPARE_OLD_VALUES procedure in the DBMS_APPLY_ADM package enables you to specify whether to compare old values of one or more columns in a row LCR with the current value of the corresponding columns at the destination database during apply.
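
For example, the following call (a sketch; the table and column names are illustrative) directs apply processes to skip old-value comparison for two columns during UPDATE operations:

BEGIN
  DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name => 'hr.employees',
    column_list => 'salary,commission_pct',
    operation   => 'UPDATE',
    compare     => FALSE);
END;
/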

Extra Attributes in LCRs

You can optionally use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process to include the following extra attributes in LCRs: row_id, serial#, session#, thread#, tx_name, and username.
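
For example, the following call (a sketch; capture1 is an illustrative capture process name) instructs a capture process to include the username attribute in the LCRs that it captures:

BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'capture1',
    attribute_name => 'username',
    include        => TRUE);
END;
/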

New Procedure for Point-In-Time Recovery in a Streams Environment

The GET_SCN_MAPPING procedure in the DBMS_STREAMS_ADM package gets information about the SCN values to use for Streams capture and apply processes to recover transactions after point-in-time recovery is performed on a source database in a multiple-source Streams environment.

New Member Procedures and Functions for LCR Types

You can use the following new member procedures and functions for LCR types:


A Generated Script to Migrate from Advanced Replication to Streams

You can use the procedure DBMS_REPCAT.STREAMS_MIGRATION to generate a SQL*Plus script that migrates an existing Advanced Replication environment to a Streams environment.


See Also:

Oracle Streams Replication Administrator's Guide for information about migrating from Advanced Replication to Streams

Streams Messaging Enhancements

The following are Streams messaging enhancements for Oracle Database 10g Release 1:

Streams Messaging Client

A messaging client is a new type of Streams client that enables users and applications to dequeue messages from an ANYDATA queue based on rules. You can create a messaging client by specifying dequeue for the streams_type parameter in certain procedures in the DBMS_STREAMS_ADM package.

Simpler Enqueue and Dequeue of Messages

A new package, DBMS_STREAMS_MESSAGING, provides an easy interface for enqueuing messages into and dequeuing messages from an ANYDATA queue.
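
For example, the following block (a sketch; the queue name is illustrative) wraps a VARCHAR2 value in an ANYDATA wrapper and enqueues it:

BEGIN
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => ANYDATA.CONVERTVARCHAR2('order 24388 shipped'));
  COMMIT;
END;
/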


Simpler Configuration of Rule-Based Dequeue or Apply of Messages

A new procedure, ADD_MESSAGE_RULE in the DBMS_STREAMS_ADM package, enables you to configure messaging clients and apply processes, and it enables you to create the rules for user-enqueued messages that control the behavior of these messaging clients and apply processes.
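
For example, the following call (a sketch; it assumes an existing object type oe.order_typ with an order_status attribute, and the client and queue names are illustrative) creates a messaging client named oe that can dequeue messages satisfying the rule condition:

BEGIN
  DBMS_STREAMS_ADM.ADD_MESSAGE_RULE(
    message_type   => 'oe.order_typ',
    rule_condition => ':msg.order_status = 1',
    streams_type   => 'dequeue',
    streams_name   => 'oe',
    queue_name     => 'strmadmin.streams_queue');
END;
/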


Simpler Configuration of Rule-Based Propagations of Messages

A new procedure, ADD_MESSAGE_PROPAGATION_RULE in the DBMS_STREAMS_ADM package, enables you to configure propagations and create rules for propagations that propagate user-enqueued messages.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the ADD_MESSAGE_PROPAGATION_RULE procedure

Simpler Configuration of Message Notifications

A new procedure, SET_MESSAGE_NOTIFICATION in the DBMS_STREAMS_ADM package, enables you to configure message notifications that are sent when a Streams messaging client dequeues messages. The notification can be sent to an email address, a URL, or a PL/SQL procedure.
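
For example, the following call (a sketch; the email address and other values are illustrative, and an email server must be configured for MAIL notifications) sends an email message whenever the oe messaging client dequeues a message:

BEGIN
  DBMS_STREAMS_ADM.SET_MESSAGE_NOTIFICATION(
    streams_name         => 'oe',
    notification_action  => 'orders@example.com',
    notification_type    => 'MAIL',
    include_notification => TRUE,
    queue_name           => 'strmadmin.streams_queue');
END;
/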


Rules Interface Enhancements

The following are rules interface enhancements for Oracle Database 10g Release 1:

Iterative Evaluation Results

During rule set evaluation, a client can now specify that evaluation results are sent iteratively, instead of in a complete list at one time. The EVALUATE procedure in the DBMS_RULE package includes the following two new parameters that enable you to specify that evaluation results are sent iteratively: true_rules_iterator and maybe_rules_iterator.

In addition, a new procedure in the DBMS_RULE package, GET_NEXT_HIT, returns the next rule that evaluated to TRUE from a true rules iterator, or returns the next rule that evaluated to MAYBE from a maybe rules iterator. Also, the new CLOSE_ITERATOR procedure in the DBMS_RULE package enables you to close an open iterator.


New Dynamic Performance Views for Rule Sets and Rule Evaluations

You can use the following new dynamic performance views to monitor rule sets and rule evaluations: V$RULE, V$RULE_SET, and V$RULE_SET_AGGREGATE_STATS.

Managing a Capture Process

11 Managing a Capture Process

A capture process captures changes in a redo log, reformats the captured changes into logical change records (LCRs), and enqueues the LCRs into an ANYDATA queue.

This chapter contains these topics:

Each task described in this chapter should be completed by a Streams administrator that has been granted the appropriate privileges, unless specified otherwise.

Creating a Capture Process

You can create a capture process that captures changes either locally at the source database or remotely at a downstream database. If a capture process runs on a downstream database, then redo data from the source database is copied to the downstream database, and the capture process captures changes in redo data at the downstream database.

You can use any of the following procedures to create a