10 Managing Database Growth

This chapter describes how to manage database growth with the purge scripts for deleting large numbers of instances, table partitioning for enabling schema tables to be range-partitioned on time intervals, the truncate script for removing all records from runtime tables without dropping the tables, and the table recreation script (TRS script) for recreating tables with open composite instances and reclaiming database space.

This chapter includes the following sections:

Note:

Table partitioning is an advanced database task and must only be performed by an experienced database administrator (DBA).

For additional information about troubleshooting database growth issues, see Section B.2, "Parallel Purging and Table Partitioning Issues," Section B.7.4, "Extending Tablespaces to Avoid Problems at Runtime," and Section B.7.5, "Resolving Database Growth Issues Caused by a High Volume of Transactions."

For information about database growth management strategies, see Chapter 9, "Developing a Database Growth Management Strategy."

10.1 Introduction to Managing Database Growth

When the amount of data in the Oracle SOA Suite database grows very large, maintaining the database can become difficult. To address this challenge, several methods for managing database growth are provided, as described in Table 10-1.

Table 10-1 Database Growth Strategies

Environment Use See Section

Small development installations with less than 100K rows in the database

Delete With Options button of Oracle Enterprise Manager Fusion Middleware Control

Section 8.2, "Monitoring and Deleting SOA Composite Application Instances from the Application Home Page" and Section 8.3, "Monitoring and Deleting SOA Composite Application Instances at the SOA Infrastructure Level"

Small development installations with more than 100K rows in the database

Looped purge script

Section 10.3, "Deleting Large Numbers of Instances with the Purge Scripts"

Medium installations that generate less than 10 GB of data per day and retain less than 500 GB of data

Scheduled parallel purge with optimal thread count

Section 10.3, "Deleting Large Numbers of Instances with the Purge Scripts"

Large installations that generate more than 10 GB of data per day or retain more than 500 GB of data

  • Partitioning (for infrequent, long-running processes).

  • Combination of parallel purge and partitioning (for long-running processes that span several months). For example, perform daily purges and monthly partition dropping.

Section 10.3, "Deleting Large Numbers of Instances with the Purge Scripts"

Section 10.4.1, "Partitioning the Component Database Tables"

Section 10.4, "Partitioning Component Tables"

  • For recreating and rerunning test scenarios

  • For creating a production or test environment clone that keeps the schemas from the production environment, so that production customizations and new job definitions are preserved while all instance data, regardless of state, is truncated

Truncate scripts

Section 10.5, "Removing Records from the Runtime Tables Without Dropping the Tables"

For smaller installations with significant maintenance downtime periods in which corrective actions are required to move your installation to a more efficient database growth management strategy

TRS script

Section 10.6, "Recreating Tables with Open Composite Instances and Reclaiming Database Space"

Microsoft SQL Server databases

Looped purge script

Section 10.7, "Purging Instances on Microsoft SQL Server"


10.2 Developing a Purging and Partitioning Methodology

This section summarizes the main points into an action plan that you can follow to purge and partition the dehydration store. Purging is an essential part of any plan and should be performed when data is consuming too much space or you have some other reason for removing the data.

There are three main strategies for reducing the size of the schemas:

  • Purge script

  • Purge script + partitioning (or, more correctly, dropping table partitions)

  • Partitioning all tables

In the first two cases, the same purge script is used, although if you are partitioning, you must edit the purge script to comment out your partitioned tables.

The purge script uses standard SQL DELETE statements to remove rows from the BPEL tables. For most sites, this is sufficient. However, some sites accumulate so much data that the purge script takes too long to run. In this case, partitioning becomes the better solution. The trade-off is that partitioning involves significantly more database maintenance. Moreover, partitioning is an advanced technique that requires a knowledgeable and skilled DBA. By contrast, running the purge script is straightforward and does not require significant DBA knowledge.

Try to profile the input messages, database growth rate, and how much data is purged in the purge process. If the input rate and purge rate match, then regular purging is sufficient. Otherwise, consider partitioning.
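
To gauge whether the purge keeps pace with growth, you can sample the daily instance creation rate. The following is a minimal sketch, assuming access to the soainfra schema and that the COMPOSITE_INSTANCE table carries a CREATED_TIME creation timestamp column:

```sql
-- Approximate daily composite instance creation rate (run as the soainfra user).
-- Compare this figure with the number of rows each scheduled purge removes.
SELECT TRUNC(created_time) AS creation_day,
       COUNT(*)            AS instances_created
  FROM composite_instance
 GROUP BY TRUNC(created_time)
 ORDER BY creation_day;
```

If the daily creation count consistently exceeds the daily purge count, the backlog grows and partitioning is worth considering.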

If you use partitioning, Oracle recommends that you add disk space and eventually drop the partition. However, this creates additional requirements for managing disk capacity, deciding on the correct partition size, and so on. Do not use partitioning and then rely on the purge script to reclaim disk space.

10.3 Deleting Large Numbers of Instances with the Purge Scripts

Deleting thousands of instances with the Delete With Options button on the Instances page of a SOA composite application in Oracle Enterprise Manager Fusion Middleware Control takes time and may result in a transaction timeout. Instead, use the purge scripts for deleting instances. Note the following details about the purge scripts:

  • The purge scripts delete instances that have completed or are in error (have faulted). For more information, see Section 10.3.3, "Purge States."

    Note:

    For Oracle Mediator resequencer tables, faulted messages are not purged. For more information, see Section 10.3.4, "Resequenced Message Purge States for Oracle Mediator."
  • The purge scripts do not delete instances that are in-flight or can be recovered (are in a recovery required state).

  • The purge scripts delete all Oracle SOA Suite-related tables except for Oracle B2B. If you have an installation in which Oracle SOA Suite and Oracle B2B are co-located, ensure that you also invoke the Oracle B2B purge scripts. If you have separate Oracle SOA Suite and Oracle B2B installations, you must run only the appropriate purge scripts on the respective product schemas. For information about purging Oracle B2B, see Chapter "Purging Data" and Chapter "B2B Command-Line Tools" of the Oracle Fusion Middleware User's Guide for Oracle B2B.

The following sections provide examples of how to use the scripts:

Notes:

  • There is no purge script support on the IBM DB2 database. Starting with release 11g Release 1 (11.1.1.7), a new purge script is provided that enables you to purge instances on Microsoft SQL Server. For more information, see Section 10.7, "Purging Instances on Microsoft SQL Server."

  • The purge_soainfra_oracle.sql PL/SQL purge script provided in pre-11.1.1.4 releases has been deprecated. If you are an existing user of this script, you can continue to use it against your database in 11g Release 1 (11.1.1.4 or later). However, starting with 11g Release 1 (11.1.1.4), this script is no longer shipped. Oracle recommends that you use the purge script provided with 11g Release 1 (11.1.1.4 or later).

  • When upgrading from 11g Release 1 (11.1.1.3) to 11g Release 1 (11.1.1.4) or later, ensure that you run the purge setup scripts from the 11.1.1.4 or later location, respectively, as this contains the latest purge details. For more information about upgrade, see Oracle Fusion Middleware Upgrade Guide for Oracle SOA Suite, WebCenter, and ADF.

10.3.1 Looped Purge Script

The master purge script includes a looping construct that allows for a batched purge. You can also provide this script with a max_runtime parameter; the script stops looping when this time limit is exceeded.

The master script drives the purge of SOA database tables. You can use the delete_instances procedure to purge SOA database tables.

Note:

Set max_runtime to a higher value if there are many instances to purge. In this case, you should expect to wait for a longer time before the script exits. Alternatively, use a smaller batch size if you want the purge script to exit sooner.

10.3.1.1 delete_instances Procedure

Use the delete_instances procedure to delete instances. Example 10-1 shows the syntax.

Example 10-1 delete_instances Procedure Syntax

procedure delete_instances (
                   min_creation_date in timestamp,
                   max_creation_date in timestamp,
                   batch_size in integer,
                   max_runtime in integer,
                   retention_period in timestamp,
                   purge_partitioned_component in boolean,
                   ignore_state in boolean,
                   composite_name in varchar2,
                   composite_revision in varchar2,
                   soa_partition_name in varchar2
                   );

Table 10-2 describes the script parameters.

Table 10-2 delete_instances Procedure Parameter Descriptions

Parameter Description

min_creation_date

Beginning creation date for the composite instances.

max_creation_date

Ending creation date for the composite instances.

batch_size

Batch size used to loop the purge. The default value is 20000.

max_runtime

Expiration time at which the purge script exits the loop. The default value is 60. This value is specified in minutes.

retention_period

Retention period is only used by the BPEL process service engine (in addition to using the creation time parameter). This functionality is not extended to other components.

This parameter checks for and deletes records in the cube_instance table. The value for this parameter must be greater than or equal to max_creation_date. The default value is null.

Specify a retention period if you want to retain the composite instances based on the modify_date of the BPEL instances (cube_instance).

In this example, the modify_date of the BPEL instances table, which can be different than the composite created_date, is used as a second level of filtering:

min_creation_date = 1st June 2011
max_creation_date = 30th June 2011
retention_period  = 1st July 2011

This deletes all composite instances in which the creation_time of the composite is between 1st June 2011 and 30th June 2011 and the modify_date of the cube_instance is less than 1st July 2011.

purge_partitioned_component

You can invoke the same purge to delete partitioned data. The default value is false.

ignore_state

When set to true, purges all open and closed instances within the specified date range. This parameter is not supported with the parallel purge script described in Section 10.3.2, "Looped Purge in Parallel Script with dbms_scheduler." The default value is false.

Note: Use this parameter cautiously because purging open instances may leave your system in an inconsistent state.

composite_name

The name of the SOA composite application. This parameter, along with the composite_revision and soa_partition_name parameters, enables you to purge the instances of a specific SOA composite application. For more information, see Section 10.3.5, "Purging the Instances of a Specific SOA Composite Application."

composite_revision

The revision number of the SOA composite application.

soa_partition_name

The partition in which the SOA composite application is included.


Notes:

  • If you do not provide a value for retention_period, the value for this property defaults to the value of max_creation_date (that is, if retention_period equals null, then retention_period = max_creation_date). This consequence also applies to the script parameters described in Section 10.3.2, "Looped Purge in Parallel Script with dbms_scheduler."

  • The purge scripts operate only on the database and the rows that exist in a table when they run; they cannot take runtime execution into account. For example, assume an automatic recovery is attempted immediately after an active row has been deleted with the purge scripts (with the ignore_state parameter set to true). A new row can then be created after the purge has executed. This row remains dangling because its COMPOSITE_INSTANCE table row has already been deleted.

10.3.2 Looped Purge in Parallel Script with dbms_scheduler

This script is functionally the same as the looped purge script described in Section 10.3.1, "Looped Purge Script." However, this script uses the dbms_scheduler package to spawn multiple purge jobs, with each job working on subset data.

You can use the following procedure to purge SOA database tables.

Note:

If you have a multiple CPU host, use of the parallel script can be beneficial. However, Oracle recommends that you enable the parallel script only during off hours. In addition, when purging data during off hours, Oracle recommends that you drop indexes before purging large amounts of data and then add the indexes back in. This speeds up the purge process, and also keeps indexes from becoming unbalanced.

10.3.2.1 delete_instances Procedure in Parallel

Use the delete_instances procedure in parallel to delete instances. Example 10-2 shows the syntax.

Example 10-2 delete_instances Procedure in Parallel Syntax

PROCEDURE delete_instances_in_parallel (
                   min_creation_date in timestamp,
                   max_creation_date in timestamp,
                   batch_size in integer,
                   max_runtime in integer,
                   retention_period in integer,
                   DOP in integer,
                   max_count in integer,
                   purge_partitioned_component in boolean,
                   composite_name in varchar2,
                   composite_revision in varchar2,
                   soa_partition_name in varchar2
                   );

Table 10-3 describes the script parameters.

Table 10-3 delete_instances in Parallel Procedure Parameter Descriptions

Parameter Description

min_creation_date

Beginning creation date for the composite instances.

max_creation_date

Ending creation date for the composite instances.

batch_size

Batch size used to loop the purge. The default value is 20000.

max_runtime

Expiration time at which the purge script exits the loop. The default value is 60. This value is specified in minutes.

retention_period

Retention period is only used by the BPEL process service engine (in addition to using the creation time parameter). The default value is null. For more information about this parameter, see Table 10-2.

DOP

Defines the number of parallel jobs to schedule. The default value is 4.

max_count

Defines the maximum number of rows processed (not the number of rows deleted). A large temporary table is created, and purge jobs are then scheduled based on its data. The default value is 1000000.

purge_partitioned_component

You can invoke the same purge to delete partitioned data. The default value is false.

composite_name

The name of the SOA composite application. This parameter, along with the composite_revision and soa_partition_name parameters, enables you to purge the instances of a specific SOA composite application. For more information, see Section 10.3.5, "Purging the Instances of a Specific SOA Composite Application."

composite_revision

The revision number of the SOA composite application.

soa_partition_name

The partition in which the SOA composite application is included.


10.3.2.2 Resolving Deadlocks After Running the Looped Purge in Parallel Script

You may observe a deadlock in the thread logs for one thread after running the looped purge in parallel script. Example 10-3 provides details.

Example 10-3 Deadlock Error in the Thread Logs

SOA_PURGE_LOG_THREAD1 (total of 4 threads)
17-JUL-2012 03:03:48
: Purge AUDIT_DETAILS. Error Code = -60, Error Message = ORA-00060: deadlock
detected while waiting for resource
17-JUL-2012 03:03:48
: ERROR(delete_inst_in_parallel_job. Error Code = -60, Error Message =
ORA-00060: deadlock detected while waiting for resource

To resolve the deadlock issue, rebuild the AUDIT_DETAILS table and increase the value of either of the following:

  • PCTFREE (to allow for more interested transaction list (ITL) allocation).

  • INITRANS (initial ITLs). This option is described below.
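
If a full rebuild is not practical, the block settings can also be raised in place. The following is a sketch, with the caveat that ALTER TABLE changes to PCTFREE and INITRANS apply only to blocks formatted after the change, which is why the rebuild procedure below is usually more effective:

```sql
-- Raise free space and initial ITL slots for new blocks of AUDIT_DETAILS.
-- Existing blocks keep their old settings until the table is rebuilt or moved.
ALTER TABLE PS6_SOAINFRA.AUDIT_DETAILS PCTFREE 10 INITRANS 4;
```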

To recreate the AUDIT_DETAILS table and increase the INITRANS value:

  1. Create a temporary table and increase the value for INITRANS (for this example, a table named AUDIT_DETAILS_TMP is created).

    SQL> CREATE TABLE "PS6_SOAINFRA"."AUDIT_DETAILS_TMP"
       (    "CIKEY" NUMBER(*,0),
            "DETAIL_ID" NUMBER(*,0),
            "BIN_CSIZE" NUMBER(*,0),
            "BIN_USIZE" NUMBER(*,0),
            "DOC_REF" VARCHAR2(300),
            "BIN" BLOB,
            "CI_PARTITION_DATE" TIMESTAMP (6)
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 0 PCTUSED 1 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 331350016 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 6 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE
    DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PS6_SOAINFRA"
     LOB ("BIN") STORE AS BASICFILE (
      TABLESPACE "PS6_SOAINFRA" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 0
      CACHE
      STORAGE(INITIAL 16384 NEXT 8192 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE
    DEFAULT CELL_FLASH_CACHE DEFAULT)) ;
    
    SQL> INSERT /*+ APPEND */ into audit_details_TMP select * from audit_details;
    
    SQL> COMMIT;
    
  2. Drop the AUDIT_DETAILS table.

    SQL> DROP TABLE PS6_SOAINFRA.AUDIT_DETAILS CASCADE CONSTRAINTS;
    
  3. Rename the AUDIT_DETAILS_TMP temporary table to AUDIT_DETAILS.

    SQL> ALTER TABLE PS6_SOAINFRA.AUDIT_DETAILS_TMP RENAME TO AUDIT_DETAILS;
    
  4. Create a unique index on AUDIT_DETAILS.

    SQL> CREATE UNIQUE INDEX "PS6_SOAINFRA"."AD_PK" ON "PS6_SOAINFRA"."AUDIT_DETAILS" ("CIKEY", "DETAIL_ID");
    
  5. Add a constraint and primary key to AUDIT_DETAILS.

    SQL> ALTER TABLE "PS6_SOAINFRA"."AUDIT_DETAILS" ADD CONSTRAINT "AD_PK" PRIMARY
     KEY ("CIKEY", "DETAIL_ID") ENABLE;
    

10.3.3 Purge States

Instances in the following states are purged with the purge scripts:

  • Completed successfully

  • Faulted

  • Terminated by user

  • Stale

  • Unknown (instance tracking is disabled)

Purging of the following instance states is not supported:

  • Instances pending recovery at the BPEL process service engine level or SOA composite application level

  • Running instances

To purge these instances, you must first move them to one of the instance states supported by the purge scripts.

10.3.4 Resequenced Message Purge States for Oracle Mediator

The purge scripts include purge commands to purge the information persisted in the Oracle Mediator resequencer tables (mediator_group_status and mediator_resequencer_message). The following information is purged from the resequencer tables when you run the purge scripts:

  • Completed and aborted messages for all resequencer types

  • Timed out messages for standard resequencers

  • Groups in a ready state for best effort and FIFO (first in/first out) resequencers (these are the only groups that can be purged)

To allow fault recovery and message processing to be completed, the purge scripts do not purge all resequenced message information. In addition, standard resequencer groups store information that should not be purged. The following are not purged when you run the purge scripts:

  • Faulted messages for all resequencer types

  • Running messages for all resequencer types

  • Group information for standard resequencers

  • Groups in a state other than ready for best effort and FIFO resequencers

Note:

The purge scripts for the Oracle Mediator resequencer purge messages first and then move on to groups. If there are messages for a group in the mediator_resequencer_message table, the group cannot be deleted.

This processing applies to both the looped and parallel purge scripts, regardless of whether instance tracking is enabled or disabled. Before any sequence groups are purged, a check is performed to verify that all messages associated with the group have been processed.

Below is a list of group state codes used in the resequencer tables:

  • 0: Ready

  • 1: Locked

  • 2: Error

  • 4: Timed out

  • 6: Group error

Below is a list of message state codes used in the resequencer tables:

  • 0: Ready

  • 1: Locked

  • 2: Completed

  • 3: Error

  • 4: Timed out (this is ignored)

  • 5: Aborted

10.3.5 Purging the Instances of a Specific SOA Composite Application

You can purge the instances of a specific SOA composite application and leave the instances of other composites unpurged. This action enables you to purge certain flows more frequently than others due to high volume or retention period characteristics.

The purge scripts include an option for purging based on COMPOSITE_DN. Purging based on COMPOSITE_DN is supported with the parameters composite_name and composite_revision.

The purge logic is based on flows (execution context IDs (ECIDs)), and not COMPOSITE_IDs. Therefore, apart from the intended COMPOSITE_DNs, other composites sharing the same ECID may get deleted. The following scenarios may occur:

  • A composite instance is closed, but the flow is still open:

    In a scenario in which composite A calls composite B, the purge intends to delete instances of composite A. However, there may be a case in which an instance of composite A is closed, but the corresponding composite B instance is still open. Therefore, because the overall flow is still in an open state, the composite A instance (even though closed) is not purged.

  • The composite instance is closed and the flow is also closed:

    Composite A again calls composite B. The purge intends to delete instances of composite A. Therefore, in a case in which composite A is closed and composite B is also closed, because the overall flow is closed, both composite instances A and B are purged.

These scenarios maintain the consistency of the flow.

For information about the composite_name and composite_revision parameters, see Section 10.3.1, "Looped Purge Script" and Section 10.3.2, "Looped Purge in Parallel Script with dbms_scheduler."
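
As an illustration, a composite-scoped purge call might look like the following sketch. The composite name, revision, and partition values are illustrative, and parameters with documented defaults are omitted:

```sql
-- Sketch: purge only closed instances of a single composite within a date
-- range. 'OrderProcessing', '1.0', and 'default' are illustrative values.
BEGIN
  soa.delete_instances(
    min_creation_date  => to_timestamp('2010-01-01','YYYY-MM-DD'),
    max_creation_date  => to_timestamp('2010-01-31','YYYY-MM-DD'),
    batch_size         => 10000,
    max_runtime        => 60,
    composite_name     => 'OrderProcessing',
    composite_revision => '1.0',
    soa_partition_name => 'default');
END;
/
```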

10.3.6 Running the Purge Scripts

This section describes how to execute the purge scripts.

To execute the purge scripts:

  1. In SQL*Plus, connect to the database AS SYSDBA:

    CONNECT SYS AS SYSDBA
    
  2. Execute the following SQL commands:

    GRANT EXECUTE ON DBMS_LOCK TO USER;
    GRANT CREATE ANY JOB TO USER;
    

    where USER is the soainfra account that executes the scripts. These privileges are required to run the scripts.

  3. Load the purge scripts by running the main purge script in the MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/soa_purge directory.

    For a parallel purge, the debug logs from the spawned jobs are written to files created in the directory named SOA_PURGE_DIR. This directory must be accessible to the Oracle database.

  4. Create SOA_PURGE_DIR and grant write permissions to the soainfra user.

    mkdir -p /tmp/purgelog
    CREATE OR REPLACE DIRECTORY SOA_PURGE_DIR AS 'SERVER_DIRECTORY';
    

    where SERVER_DIRECTORY is the name of the directory to create (for example, '/tmp/purgelog/'). Note the required single quotes around the directory path.

  5. If you want to execute the scripts in debug mode, run common/debug_on.sql and set serverout to on in SQL*Plus. This step is optional.

    SET SERVEROUTPUT ON
    

    The logs from the spawned jobs are logged into the directory created in Step 4 (separate files per job). The rest of the logs are displayed on stdout (or the spool file, if configured).

    There are two options for purging:

    • Looped purge

    • Parallel purge

  6. Execute the purge scripts as shown below. Examples are provided for both options.

    1. For looped purge:

      DECLARE
         min_creation_date timestamp;
         max_creation_date timestamp;
         batch_size integer;
         max_runtime integer;
         retention_period timestamp;
      BEGIN
         min_creation_date := to_timestamp('2010-01-01','YYYY-MM-DD');
         max_creation_date := to_timestamp('2010-01-31','YYYY-MM-DD');
         batch_size := 10000;
         max_runtime := 60;
         retention_period := to_timestamp('2010-01-31','YYYY-MM-DD');
         soa.delete_instances(
            min_creation_date => min_creation_date,
            max_creation_date => max_creation_date,
            batch_size => batch_size,
            max_runtime => max_runtime,
            retention_period => retention_period,
            purge_partitioned_component => false);
      END;
      /
      
    2. For parallel purge:

      DECLARE
         min_creation_date timestamp;
         max_creation_date timestamp;
         retention_period timestamp;
      BEGIN
         min_creation_date := to_timestamp('2010-01-01','YYYY-MM-DD');
         max_creation_date := to_timestamp('2010-01-31','YYYY-MM-DD');
         retention_period := to_timestamp('2010-01-31','YYYY-MM-DD');
         soa.delete_instances_in_parallel(
            min_creation_date => min_creation_date,
            max_creation_date => max_creation_date,
            batch_size => 10000,
            max_runtime => 60,
            retention_period => retention_period,
            DOP => 3,
            max_count => 1000000,
            purge_partitioned_component => false);
      END;
      /
      

10.4 Partitioning Component Tables

The runtime and schema code for the following components has been modified to store the flow creation date column in their transactional tables:

  • Oracle BPEL Process Manager

  • Oracle Mediator

  • Human workflow

  • Oracle B2B

  • SOA Infrastructure (includes component instances)

  • Oracle BPM Suite

The CPST_CREATED_DATE column contains the flow creation date time populated by the instance tracking code. This is available as the normalized message property oracle.integration.platform.instance.CommonConstants.COMPOSITE_INSTANCE_CREATED_TIME.

All SOA components are partitioned on the same partition key. These partitioned components use the same time range and partition ID.

Note:

Before performing complete or partial partitioning, run the purge scripts.

10.4.1 Partitioning the Component Database Tables

Oracle SOA Suite has been instrumented with partition keys that enable DBAs to take advantage of Oracle RDBMS partitioning features and capabilities. This action enables the schema tables to be range-partitioned on time intervals. This is useful when you must reduce the database maintenance window of large tables. (Though not discussed in this chapter, this also provides for the possibility of archiving partitioned data.)

The task of partitioning the Oracle SOA Suite tables must be performed by an experienced DBA. Since partitioning tables is considered a core DBA skill, this chapter does not provide detailed, step-by-step instructions on how to partition tables. Rather, it provides the DBA with the knowledge and understanding of Oracle SOA Suite schemas and their associated scripts. With this knowledge, the DBA can customize any partitioning strategy for their environment, and incorporate any tuning parameters in response to the performance of their database. Tuning is never a one-size-fits-all proposition or a one-off configuration change. Rather, it is an iterative process of monitoring and tuning.

The partitioning schemes discussed in this chapter can only be used with Oracle SOA Suite 11g Release 1 (11.1.1.4 or later).

The following components are associated with their own database schemas:

  • Oracle BPEL Process Manager

  • Oracle Mediator

  • Human workflow

  • Oracle B2B

  • SOA Infrastructure

  • Oracle BPM Suite

For more information about table partitioning, see the Oracle database administration documentation library located at the following URL:

http://www.oracle.com/technetwork/indexes/documentation/index.html

Notes:

  • A hash subpartition is an option the DBA may want to explore, especially for tables with large object (LOB) segments. This can assist with high water (HW) enqueue contention.

  • A global hash index on primary keys that are monotonically increasing (like CIKEY) may relieve block contention.
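
The second note can be sketched as follows. The index name is hypothetical and the partition count is illustrative; the table and column follow the CUBE_INSTANCE example used elsewhere in this chapter:

```sql
-- Sketch: a global hash-partitioned index on the monotonically increasing
-- CIKEY column, to spread right-hand-side index block contention across
-- partitions. CI_CIKEY_GHX is a hypothetical index name.
CREATE INDEX ci_cikey_ghx ON cube_instance (cikey)
  GLOBAL PARTITION BY HASH (cikey) PARTITIONS 16;
```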

10.4.1.1 Referential Integrity and Equipartitioning

For performance reasons, the Oracle BPEL Process Manager, Oracle Mediator, human workflow, Oracle B2B, SOA Infrastructure, and Oracle BPM Suite schemas have no foreign key constraints to enforce integrity. This fact discounts the use of the 11g RDBMS feature known as referential partitioning. This feature provides significant benefits because it equipartitions master and detail tables across foreign key constraints. Equipartitioning means that the associated dependent table rows are in a database partition with the same partition key interval as their master table rows.

One benefit of this feature is that the state (for example, completed, faulted, and so on) of each detail row in the equipartition can be inferred from its associated master table row.

Although the 11g RDBMS referential partitioning feature cannot be used, similar behavior can be mimicked to achieve some of the same benefits. The Oracle BPEL Process Manager, Oracle Mediator, human workflow, Oracle B2B, SOA Infrastructure, and Oracle BPM Suite components ensure that the partition key of every detail table row is the same as the partition key of its master table row (that is, the date (timestamp) that is the partition key is pushed down). To then complete the setup, the DBA must ensure that the master and detail tables are range-partitioned on the same intervals. Some examples are provided in subsequent sections of this chapter.
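
As an illustration of mimicking equipartitioning, the master table and its dependent table can carry identical range-partition clauses on the pushed-down date column (here CI_PARTITION_DATE, as seen in the AUDIT_DETAILS DDL earlier in this chapter). This is a fragment only; partition names and intervals are illustrative:

```sql
-- Fragment: append the same clause to both the master table DDL and its
-- dependent table DDL so that partitions align by name and date interval.
PARTITION BY RANGE (ci_partition_date)
( PARTITION p2012_01 VALUES LESS THAN (TIMESTAMP '2012-02-01 00:00:00'),
  PARTITION p2012_02 VALUES LESS THAN (TIMESTAMP '2012-03-01 00:00:00'),
  PARTITION p2012_03 VALUES LESS THAN (TIMESTAMP '2012-04-01 00:00:00') )
```

With the partitions aligned this way, dropping a month's partition on the master table has a matching, same-named partition to drop on each dependent table.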

Note:

You may decide that referential integrity of aged partitions is not a concern for your site. For example, the site may have ample disk space, allowing data to significantly age, or there may be no apparent, adverse impact of allowing unreferenced data to be stored in the dependent tables.

10.4.1.2 Introduction to Partition Key Selection

The following factors were considered when selecting the schema partition keys:

  • Convey or imply state (for example, completed) for referential integrity

  • Allow range partitioning on time intervals for maintenance operations

  • Be static to avoid row movement that may lead to unreferenced data

  • Be static to avoid row movement when table maintenance operations are performed

  • Provide performance benefits for console queries through partition pruning

10.4.2 Configuring Partitions

Partitioning is not configured by default; it is a postinstallation step that must be performed manually. Once you decide to implement partitioning of the database, you must perform some initial configuration tasks only once:

  • Using the information in this chapter, decide which groups you want to partition.

  • For each of those groups, decide which tables you want to partition, remembering that there are some mandatory tables in each group that must be partitioned.

  • For each group, decide on the partition interval.

  • Create the partition scripts to partition the Oracle SOA Suite schemas. No scripts are supplied; each DBA is responsible for creating the partition scripts appropriate for their environment.

  • Edit the purge script and remove references to any tables that you partitioned.

10.4.3 Introduction to the Verification Script

A verification script is provided for a DBA to identify when to drop a partition and its equipartitioned dependent table. The verification script also identifies if there are active, long running instances. You can then move these instances to a different partition, and then drop the original partition.

Note:

The verification script does not drop any partitions; it just ensures that partitions are eligible to be dropped.

10.4.4 Component Tables

This section describes partitioning constraints and lists the component tables, the groups to which they belong, and their partition key.

10.4.4.1 Partitioning Constraints

Note the following table partitioning constraints:

  • You have the choice of the following approach to partitioning:

    • Complete partitioning: All tables of a service component/service engine are partitioned.

    • No partitioning: No tables of a service component/service engine are partitioned.

    • Partial partitioning: Restrict partitioning to specific tables with a high growth rate. Table 10-4 lists the master and dependent tables that can be partitioned.

      Table 10-4 Partial Partitioning

      Master Table         Dependent Table of Master Table

      COMPOSITE_INSTANCE   REFERENCE_INSTANCE
      CUBE_INSTANCE        CUBE_SCOPE
      XML_DOCUMENT         None
      MEDIATOR_INSTANCE    MEDIATOR_CASE_INSTANCE
      MEDIATOR_PAYLOAD     None


      You can partition any of the tables in Table 10-4 by following these constraints:

      • If you want to partition a dependent table, you must also partition its master table.

      • All tables should be equipartitioned with the same date ranges and the same partition names.

      • Always partition the COMPOSITE_INSTANCE table. This constraint is essential when the Audit Level property is set to Development or Production for any of the composites. The verification script checks for active flows based on the active composite instances within that partition. Therefore, if the COMPOSITE_INSTANCE table is not partitioned, the entire verification script logic based on the equipartitioning of all the tables fails.

  • Regardless of the group and component, all tables that are partitioned use the same time range and the partition ID.

10.4.4.2 Component Tables, Range Partition Keys, and Groups

The component tables listed in Table 10-5 through Table 10-10 are divided into three groups.

  • Group 1: This includes tables that are directly related to the end-to-end flow trace of a composite. A majority of the tables fall into this group.

  • Group 1A: This includes a small set of tables that are not directly related to the flow trace (for example, REJECTED_MESSAGE).

  • Group 2: This includes a small set of tables that have a dependency on multiple tables from Group 1 and 1A tables. You must first execute the group 1 verification script and drop the group 1 partitions before running the group 2 verification script.

    Note:

    Groups 1 and 1A are combined in the verification script. Running the verification script does not require you to have knowledge of this classification.

Table 10-5 Component: SOA Infrastructure

Table                         Range Partition Key   Group

COMPOSITE_INSTANCE            PARTITION_DATE        1
REFERENCE_INSTANCE            CPST_PARTITION_DATE   1
COMPOSITE_INSTANCE_FAULT      CPST_PARTITION_DATE   1
COMPOSITE_SENSOR_VALUE        CPST_PARTITION_DATE   1
COMPONENT_INSTANCE            CPST_PARTITION_DATE   1
REJECTED_MESSAGE              CREATED_TIME          1A
REJECTED_MSG_NATIVE_PAYLOAD   RM_PARTITION_DATE     1A
INSTANCE_PAYLOAD              CREATED_TIME          2
COMPOSITE_INSTANCE_ASSOC      CREATED_TIME          2


Table 10-6 Component: Oracle BPEL Process Manager

Table                  Range Partition Key      Group

CUBE_INSTANCE          CPST_INST_CREATED_TIME   1
CI_INDEXES             CI_PARTITION_DATE        1
CUBE_SCOPE             CI_PARTITION_DATE        1
DOCUMENT_CI_REF        CI_PARTITION_DATE        1
AUDIT_TRAIL            CI_PARTITION_DATE        1
AUDIT_DETAILS          CI_PARTITION_DATE        1
DLV_SUBSCRIPTION       CI_PARTITION_DATE        1
WORK_ITEM              CI_PARTITION_DATE        1
AUDIT_COUNTER          CI_PARTITION_DATE        1
WI_FAULT               CI_PARTITION_DATE        1
DLV_MESSAGE            RECEIVE_DATE             1A
HEADERS_PROPERTIES     DLV_PARTITION_DATE       1A
DOCUMENT_DLV_MSG_REF   DLV_PARTITION_DATE       1A
XML_DOCUMENT           DOC_PARTITION_DATE       2


Table 10-7 Component: Oracle Mediator

Table Name                  Range Partition Key       Group

MEDIATOR_INSTANCE           COMPOSITE_CREATION_DATE   1
MEDIATOR_CASE_INSTANCE      MI_PARTITION_DATE         1
MEDIATOR_CASE_DETAIL        MI_PARTITION_DATE         1
MEDIATOR_AUDIT_DOCUMENT     MI_PARTITION_DATE         1
MEDIATOR_DEFERRED_MESSAGE   CREATION_DATE             1A
MEDIATOR_PAYLOAD            CREATION_TIME             2


Table 10-8 Component: Human Workflow

Table                       Range Partition Key    Group

WFTASK                      COMPOSITECREATEDTIME   1
WFTask_TL                   COMPOSITECREATEDTIME   1
WFTaskHistory               COMPOSITECREATEDTIME   1
WFTaskHistory_TL            COMPOSITECREATEDTIME   1
WFComments                  COMPOSITECREATEDTIME   1
WFMessageAttribute          COMPOSITECREATEDTIME   1
WFAttachment                COMPOSITECREATEDTIME   1
WFAssignee                  COMPOSITECREATEDTIME   1
WFReviewer                  COMPOSITECREATEDTIME   1
WFCollectionTarget          COMPOSITECREATEDTIME   1
WFRoutingSlip               COMPOSITECREATEDTIME   1
WFNotification              COMPOSITECREATEDTIME   1
WFTaskTimer                 COMPOSITECREATEDTIME   1
WFTaskError                 COMPOSITECREATEDTIME   1
WFHeaderProps               COMPOSITECREATEDTIME   1
WFEvidence                  COMPOSITECREATEDTIME   1
WFTaskAssignmentStatistic   COMPOSITECREATEDTIME   1
WFTaskAggregation           COMPOSITECREATEDTIME   1


Table 10-9 Component: Oracle B2B

Table                      Range Partition Key      Group

B2B_BUSINESS_MESSAGE       CPST_INST_CREATED_TIME   1
B2B_APP_MESSAGE            CPST_INST_CREATED_TIME   1
B2B_WIRE_MESSAGE           CPST_INST_CREATED_TIME   1
B2B_DATA_STORAGE           CPST_INST_CREATED_TIME   1
B2B_EXT_BUSINESS_MESSAGE   CPST_INST_CREATED_TIME   1


Table 10-10 Component: Oracle BPM Suite

Table                           Range Partition Key   Group

BPM_AUDIT_QUERY                 CI_PARTITION_DATE     1
BPM_MEASUREMENT_ACTIONS         CI_PARTITION_DATE     1
BPM_MEASUREMENT_ACTION_EXCEPS   CI_PARTITION_DATE     1
BPM_CUBE_AUDITINSTANCE          CIPARTITIONDATE       1
BPM_CUBE_TASKPERFORMANCE        CIPARTITIONDATE       1
BPM_CUBE_PROCESSPERFORMANCE     CIPARTITIONDATE       1


10.4.5 Running the Verification Script

A verification script is provided for the DBA to identify when to drop a partition and its equipartitioned dependent table. The verification script is located in MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/verify.

To execute the verification script:

  1. Create a database directory object named PART_DIR with the SQL CREATE DIRECTORY command. For example:

    CREATE DIRECTORY PART_DIR AS '/tmp/verify';
    
  2. Provide the soainfra user with write privileges on this directory. The log and SQL files are generated in this directory.
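
For example, assuming the schema owner is SOAINFRA (adjust for any RCU prefix, such as DEV_SOAINFRA, used at your site):

```sql
-- Allow the SOA schema owner to write logs and SQL files to PART_DIR.
GRANT READ, WRITE ON DIRECTORY PART_DIR TO SOAINFRA;
```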

  3. Clear the temporary tables by calling the following stored procedures. This step is necessary because active instances are captured in a set of temporary tables that are processed by the row movement script.

    For group 1 tables:

    DECLARE
    BEGIN
    verify_soa.trunc_verify1_temp_tables;  
    END;
    

    For group 2 tables:

    DECLARE
    BEGIN
    verify_soa.trunc_verify2_temp_tables;
    END;
    
  4. To run the stored procedure, use the client script soa_exec_verify.sql. Edit soa_exec_verify.sql and enter the partition names that require verification in the array mySoa_drv_list.

    1. To execute function verify_soa.verify_1, pass 1 as the parameter.

    2. To execute function verify_soa.verify_2, pass 2 as the parameter.

  5. Review the logs and SQL files generated in the PART_DIR directory.
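
Assembled in SQL*Plus, a run might look like the following. The argument-passing form shown here is an assumption; check the header of soa_exec_verify.sql for the exact invocation used at your site:

```sql
-- Run the group 1 verification (pass 1); repeat with 2 for group 2 tables.
@soa_exec_verify.sql 1
```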

Note:

A verification script is not provided for business rules.

10.4.6 Verifying and Dropping Partitions

To verify and drop partitions:

  1. Execute function verify_soa.verify_1.

  2. Check the log file in the PART_DIR folder with the name SOA_PARTITION_NAME_LOG_1 for any failures. If you have active, long running instances, see Section 10.4.7, "Moving Active, Long Running Instances to a Different Partition."

  3. Drop the partitions that can be dropped by using the script generated in the PART_DIR folder with the name SOA_PARTITION_NAME_RESULT_1.sql.

  4. Execute verify_soa.verify_2.

  5. Check the log file in the PART_DIR folder with the name SOA_PARTITION_NAME_LOG_2 for any failures.

  6. Drop the droppable partitions using the script generated in the PART_DIR folder with the name SOA_PARTITION_NAME_RESULT_2.sql.
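
The generated result scripts contain the actual DROP statements. As an illustration only (the partition and table names below are hypothetical), a generated result script might contain statements of this form:

```sql
-- Hypothetical contents of a generated RESULT script: drop the verified
-- partition from each equipartitioned table, keeping global indexes usable.
ALTER TABLE COMPOSITE_INSTANCE DROP PARTITION P_2013_01 UPDATE GLOBAL INDEXES;
ALTER TABLE REFERENCE_INSTANCE DROP PARTITION P_2013_01 UPDATE GLOBAL INDEXES;
```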

Note:

There is an issue caused by the existence of foreign key constraints in the Oracle B2B tables. When a partition is dropped and the B2B partition purge is invoked, the foreign key constraints must be disabled before the partition is dropped and re-enabled afterward. To perform this action, execute the PL/SQL procedures b2b_disable_constraints and b2b_enable_constraints at the appropriate steps in the above procedure. Because foreign keys are disabled and re-enabled by these procedures, it is not recommended to run them on a live system.

10.4.7 Moving Active, Long Running Instances to a Different Partition

The verification script checks if there are active instances in the partition. When no instances are active, the partition can be dropped. However, you may have active, long running instances present in the partition. These active instances prevent a partition from being dropped. To avoid this problem, you can move long running instances to a different partition.

The Oracle database provides an option for enabling and disabling row movements across partitions. When you create or alter a partitioned table, a row movement clause, either ENABLE ROW MOVEMENT or DISABLE ROW MOVEMENT, can be specified. This clause either enables or disables the movement of a row to a new partition if its key is updated. This option can handle long-running processes that prevent partition drops.
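
For example, the following standard Oracle DDL enables row movement on an existing partitioned table:

```sql
-- Allow rows to migrate to a new partition when the partition key is updated.
ALTER TABLE COMPOSITE_INSTANCE ENABLE ROW MOVEMENT;
```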

The verify scripts provide a count of total instances, open instances, and the percentage of open instances in a partition. Based on this information, you now have a choice of running the row movement procedure. This updates the partition keys of the targeted tables, which in turn initiates a row movement of these instances to a different partition. Once all active instances from all partitioned tables have been moved to a different partition, the targeted partition can be dropped.

Notes:

  • Create a separate partition for active, long running transactions. You can then move these instances from the current partition to the new partition by providing a new_partition_date that falls into that partition's range. It is recommended that you run a periodic purge on these long-running instances.

  • Row movement involves expensive updates to multiple rows. Only exercise row movement when the number of active processes is small. While the verification script provides a count of active instances, many of the dependent tables have a many-to-one relationship with the master tables, so the set of rows to move between partitions can be much larger (if the dependent tables are also partitioned). Use row movement judiciously, based on the tables partitioned, the active instances in the partition, the data shape, and the available infrastructure setup.

  • Enabling row movement between partitions can significantly degrade runtime performance if the partition key is updated frequently. However, a partition key column is never changed for the Oracle SOA Suite database tables once they are created. Therefore, no row movement at runtime occurs.

To move long running instances to a different partition:

  1. Run the verification script for group 1. This script is described in Section 10.4.5, "Running the Verification Script."

  2. Check the log scripts to see if there are any active instances preventing the partition drop.

  3. Run the row movement procedure. Based on the logs from step 2, check the count of open instances. Based on this count, decide whether to execute the row movement script or postpone dropping the partition.

    1. Log in to SQL*Plus as the SOAINFRA user:

      CONNECT SOAINFRA/password
      
    2. Execute the following PL/SQL procedure to move rows for group 1 tables:

      PROCEDURE exec_row_movement_1( partition_name in varchar2,
                                     new_partition_date in timestamp );
      

      where:

      partition_name is the name of the partition on which to execute row movement.

      new_partition_date is the new date with which to update the partition key column.

  4. Execute the purge script to delete nonpartitioned tables as described in Section 10.3.6, "Running the Purge Scripts." The purge_partitioned_component parameter of the purge procedures must be set to false.

  5. Drop the group 1 partitions (based on step 3).

  6. Run the verification script for group 2.

  7. Check if there are any active instances holding back the partition drop.

  8. Run the row movement procedure. Based on the logs from step 7, check the count of open instances. Based on this count, decide whether to execute the row movement script or postpone dropping the partition.

    1. Return to SQL*Plus as the SOAINFRA user.

      CONNECT SOAINFRA/password
      
    2. Execute the following PL/SQL procedure to move rows for group 2 tables:

      PROCEDURE exec_row_movement_2( partition_name in varchar2,
                                     new_partition_date in timestamp );
      

      where:

      partition_name is the name of the partition on which to execute row movement.

      new_partition_date is the new date with which to update the partition key column.

  9. Drop the group 2 partitions (based on step 8).
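
As a sketch of the row movement calls in steps 3 and 8, and assuming the procedures are exposed through the same verify_soa package as the verification routines (confirm the package name at your site), the invocations might look like this. The partition name and date are illustrative:

```sql
-- Group 1: move open instances out of partition P_2013_01 into the partition
-- whose date range covers the new partition date.
BEGIN
  verify_soa.exec_row_movement_1('P_2013_01',
                                 TO_TIMESTAMP('2038-01-01', 'YYYY-MM-DD'));
END;
/

-- Group 2: the same operation for the group 2 tables.
BEGIN
  verify_soa.exec_row_movement_2('P_2013_01',
                                 TO_TIMESTAMP('2038-01-01', 'YYYY-MM-DD'));
END;
/
```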

10.4.8 Partial Partitioning of Components

If you have an environment in which some components are partitioned, while other components are not partitioned, the nonpartitioned data set must be purged using the purge scripts described in Section 10.3, "Deleting Large Numbers of Instances with the Purge Scripts."

For example, assume human workflow is not partitioned, while other components are partitioned. The verification script reports that all SOA partitions can be dropped using the command for dropping partitions. However, the human workflow tables continue to hold workflow data until the data is purged using the loop/parallel purge scripts.

10.5 Removing Records from the Runtime Tables Without Dropping the Tables

The truncate script enables you to remove all records from all Oracle SOA Suite runtime tables without dropping the tables. You cannot reclaim database space with the truncate scripts.

The truncate script is useful for the following scenarios:

  • To create a production or test environment clone (test-to-production or production-to-test) in which you want to keep the schemas from the production environment so that the production customizations and new job definitions are kept, but all instance data in the SOA Infrastructure (that is, in the cloned database) must be truncated, regardless of state.

  • For testing purposes in which test scenarios must be recreated and rerun.

The truncate script provides this option by including truncate statements covering all the runtime tables of the following components:

  • Oracle BPEL Process Manager

  • Oracle Mediator

  • Business rules

  • Oracle B2B

  • SOA Infrastructure

  • Oracle BPM Suite

To remove records from the runtime tables without dropping the tables:

  1. Start SQL*Plus:

    sqlplus
    
  2. In SQL*Plus, connect to the database as the SOAINFRA user:

    CONNECT soainfra/password
    
  3. Execute the truncate script located in the MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/truncate directory:

    SQL> @truncate_soa_oracle.sql
    

10.6 Recreating Tables with Open Composite Instances and Reclaiming Database Space

For some Oracle SOA Suite environments, the purge scripts may not always remove a sufficient number of composite instances to reclaim database space. This can lead to performance issues. The table recreation script (TRS script) is provided to perform the following tasks:

  • Delete all closed composite instances with respect to a retention policy that you specify.

  • Recreate the Oracle SOA Suite tables while preserving only open composite instances.

  • Reclaim space all at once for database tables that have grown too large without shrinking or rebuilding database segments (tables and indexes).

The deletion of all closed composite instances and the reorganization and reclamation of space are performed in a single step.

You can use the TRS script to get your database growth strategy back on track. The TRS script is provided to help installations reclaim space when the Oracle SOA Suite schema tables have been allowed to grow excessively, possibly without ever having been purged (with the looped or parallel script). Once the TRS script is used to reclaim space, you must implement an appropriate purge and partitioning strategy to maintain database growth. The TRS script is provided for corrective actions, and is not a replacement for the purge scripts.

The TRS script preserves data for open composites and service components and honors the specified retention policy. In addition, TRS generates DDL scripts to recreate the tables from the onsite Oracle SOA Suite schema, which therefore preserves any onsite customizations. (It is mandatory that the DDL is parsed and tested in collaboration with the onsite database administrator.)

TRS is executed in two phases:

  • Phase one: Populates a set of temporary tables that contain the closed composite and service component IDs to ignore (drop) when the tables are rebuilt. This phase also generates a DDL script to rebuild the tables.

  • Phase two: Executes the generated DDL script to recreate the specified Oracle SOA Suite tables. The script should be parsed and tested before execution and can be customized. Database backups are required before and after phase two.

10.6.1 Considerations for Using the TRS Script

Review the following requirements to determine if the TRS script is suitable for your Oracle SOA Suite environment.

Note:

You must first tune and use the Oracle SOA Suite purge scripts described in Section 10.3, "Deleting Large Numbers of Instances with the Purge Scripts" before using the TRS script.
  • The TRS script is provided for performing corrective actions, and is not a replacement for the purge scripts. Corrective actions are those required to move your installation to a more efficient database growth management strategy. This includes range partitioning tables and reducing the size of individual tables that have grown too large due to a lack of a purging strategy. Use the TRS script only to get your Oracle SOA Suite environment back on track. You must develop a growth management strategy that is best handled with the purge scripts. Once tuned, return to using the purge scripts as your primary method for deleting instances.

  • Though the TRS script implements direct path inserts, parallel query slaves, and no logging, it is expected to take longer than the purge scripts. The purge scripts delete only a subset of closed composites, whereas TRS removes all closed composites. The TRS script must still parse the composites in a similar manner to the purge scripts, but it does this for all composites (with consideration for the retention period).

  • The TRS script requires Oracle SOA Suite downtime because the tables must be recreated (dropped, created, and repopulated). This is the main reason the TRS script is not an alternative to the purge scripts. Many installations run 24-by-7 with small maintenance windows and are unlikely to use these scripts to fully maintain the growth of the Oracle SOA Suite database.

  • If there is no urgency to reclaim space, but there is a need to range partition SOA tables or modify their structure, then the TRS script may not be the best option. It may be preferable to rebuild the table using a plain CTAS (CREATE TABLE AS SELECT) or ITAS (INSERT TABLE AS SELECT) command, which copies all (closed and open) composites within the given table. Custom scripts can be written by the database administrator and are likely to be faster than TRS because they do not need to parse the composite to determine which are closed.

    In addition, to avoid any dangling references, some tables (master tables such as COMPOSITE_INSTANCE, described later in this section) may also require dependent tables to be rebuilt. A custom CTAS or ITAS script preserves all rows within the given master table, therefore avoiding the need to rebuild dependent tables. Therefore, TRS may not be the best option when space does not need to be reclaimed.

  • You must back up the Oracle SOA Suite database before and after the execution of the TRS DDL-generated script in phase two. This is especially important when no logging is specified by setting the redo_flag parameter to false in the trs_instances PL/SQL procedure in Section 10.6.4, "Phase One - Creating the TRS Script."

  • The dynamically created DDL script must be checked and tested thoroughly in a quality assurance (QA) environment by a database administrator before execution in a production environment.

  • The TRS script can rebuild all runtime tables except for the Oracle B2B tables in the Oracle SOA Suite schema.

  • The TRS script cannot be executed on non-Oracle databases.

  • Oracle SOA Suite tables not mentioned in the Oracle SOA Suite 11g purge scripts cannot be deleted.

10.6.2 TRS Script Features

The TRS script can rebuild very large tables and has been designed with the following features:

  • Performance features:

    • Parallel query slaves manage and traverse large tables.

    • The data manipulation language (DML) script is generated.

    • Minimal redo/undo logging reduces the number of redo logs generated.

    • Direct path input/output (I/O) reduces the number of redo and undo logs generated. Direct I/O also bypasses the database buffer cache. The TRS script uses a direct path insert with a subquery to populate the recreated table. The subquery selects only open composite flows from the source table and ignores (that is, deletes) closed composites. As a byproduct of recreating the table, disk space is also reclaimed and reorganized.

  • Onsite table and index definitions (data definition language (DDL)) are preserved, regardless of customizations that you make to the base table and index definitions.

  • The TRS script can be customized before execution, therefore facilitating table partitioning. The onsite database administrator must modify the generated DDL script.

  • Fine-grained table rebuilds target large tables. This is the primary purpose of the TRS script. This helps to avoid the significant escalations that arise when a purging strategy has not been implemented.

  • A drop_flag parameter determines if the source (original) table can be dropped. This helps with space estimations and validation.

The TRS script is executed in two phases, as described in Table 10-11.

Table 10-11 Two Phase Execution of the TRS Script

Phase One Tasks Phase Two Tasks

You execute the trs_instances PL/SQL procedure to create the TRS DDL script. This phase can be executed a few days in advance with an appropriate retention period setting.

The PL/SQL procedure performs the following tasks:

  • Reads input parameters.

  • Reads table names from a parameter table (TRS_PARAMETER).

  • Populates temporary ID tables (using direct path inserts) with the keys on which to rebuild the Oracle SOA Suite tables in phase two. The temporary ID tables contain the component IDs that are closed and ignored when the source tables are rebuilt with the generated DDL script.

  • Generates the table recreation DDL script to be executed to perform the rebuild in phase two. This script preserves any onsite table and index modifications you have made to the Repository Creation Utility (RCU)-generated Oracle SOA Suite schema. For example, you may have partitioned a table or changed large objects (LOBs) from basic files to secure files. The generated DDL script includes these changes.

  • Generates the log files.

Note: If phase one fails, the DDL script is not generated or only partially generated.

You execute the DDL script during Oracle SOA Suite installation downtime after testing it thoroughly in a QA environment.

The DDL script can be customized before execution. The script performs the following tasks:

  • Drops, creates, and inserts rows into the table based on composite flows (ECIDs) not in the temporary ID tables.

  • Recreates the dependent objects. This script contains the table, index, and constraint definitions and the direct path insert to populate the Oracle SOA Suite tables.


10.6.3 Prerequisites for Rebuilding the XML_DOCUMENT Table

You specify the tables to recreate with open composite instances in Section 10.6.4, "Phase One - Creating the TRS Script." The XML_DOCUMENT table is among the tables that you can specify. The XML_DOCUMENT table is shared by components across the SOA Infrastructure, with document IDs present in the following tables:

  • INSTANCE_PAYLOAD

  • DOCUMENT_DLV_MSG_REF

  • DOCUMENT_CI_REF

  • MEDIATOR_AUDIT_DOCUMENT (only used in development mode)

  • B2B_DATA_STORAGE

The XML_DOCUMENT table is a master table by itself. Unlike the logic used to rebuild the other tables, this table is rebuilt on document IDs that are still present in the above tables. Therefore, it is necessary to purge the above tables or recreate them with the TRS script before rebuilding the XML_DOCUMENT table.

This may require multiple executions of the TRS script (if the purge script is not used on the above tables). The first task is to remove all closed document IDs from the above tables and the second task is to reduce the size of the XML_DOCUMENT table based on what is still present in these tables.

The XML_DOCUMENT table is not rebuilt based on closed composite and service component IDs because of a foreign key association with the B2B_DATA_STORAGE table, whose closed document IDs cannot be removed by the TRS script. The rebuild of this table is therefore based on the document IDs still present in the referencing tables.

Before rebuilding the XML_DOCUMENT table, do one (or both) of the following:

  • Execute the purge script to remove rows from the INSTANCE_PAYLOAD, DOCUMENT_DLV_MSG_REF, DOCUMENT_CI_REF, and MEDIATOR_AUDIT_DOCUMENT tables.

  • Execute the TRS script to remove rows from the INSTANCE_PAYLOAD, DOCUMENT_DLV_MSG_REF, and DOCUMENT_CI_REF tables, and then execute the Oracle B2B purge script to remove rows from B2B_DATA_STORAGE.

10.6.4 Phase One - Creating the TRS Script

Note:

The XML_DOCUMENT table is among the tables that you can recreate with open composite instances. Before recreating the XML_DOCUMENT table, there are specific prerequisites that you must understand. For more information, see Section 10.6.3, "Prerequisites for Rebuilding the XML_DOCUMENT Table."

Use the trs_instances PL/SQL procedure to create the TRS DDL script. The script accepts input parameters and reads database tables to determine the tables to recreate. Example 10-4 shows the syntax.

Example 10-4 trs_instances Procedure

PROCEDURE trs_instances (i_older in TIMESTAMP default null,
                    drop_flag in boolean default false,
                    redo_flag in boolean default false,
                    DOP in number default 0,
                    sql_trace in boolean default false);

Table 10-12 describes the TRS script parameters.

Table 10-12 Parameter Descriptions

Parameter Description

i_older

Specifies the date that implements your retention policy. That is, any composite instances with creation dates older than this date are candidates for removal, as long as they are closed.

Any composites with creation dates later than this date are retained, whether open or closed. This is a mandatory parameter. There is no default value.

drop_flag

Specifies if the original (source) table is dropped as part of the TRS script. Use this flag to perform a trial run to ensure that there is enough database space to perform the TRS operation. The TRS operation requires enough space to accommodate an additional table equal in size to the original table.

  • true: The original (source) table is immediately dropped after recreation. The temporary table is renamed to the original. Therefore, the database requires enough space to accommodate the largest table in the DDL script.

  • false (the default value): The original table is not dropped. However, a temporary table is created that contains only open composites. Therefore, the database requires enough space to accommodate the original (source) and temporary (target) tables for all tables in the DDL script.

redo_flag

Specifies if database redo logging is performed when the selected table is recreated. Avoiding the generation of redo logging can significantly improve the performance of the TRS script operation, but has implications for database backup and recovery. It is recommended that you perform a backup before and after TRS script execution, regardless of this parameter setting.

  • true: (the default value) Database redo logging is performed.

  • false: Database redo logging is not performed. A database backup must be performed immediately after the TRS script is executed if no logging is specified (that is, redo_flag is set to false).

DOP

Specifies the degree of parallelism used to set the number of parallel query slaves for the TRS script operation. The parallel query slaves help with TRS performance on multi-CPU hardware. If set to 0 (the default value) or 1, no parallel query slaves are specified in the script.

sql_trace

Used for SQL tuning and debugging because it produces the SQL trace for the SQL statements executed by the trs_instances PL/SQL procedure. The SQL trace is also set in the generated DDL script. The SQL trace files can be found in the database user dump destination:

SQL> SHOW PARAMETER DUMP

A SQL trace adds significant overhead to SQL execution. The default value is false.
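
Putting the parameters together, a hypothetical phase one run that retains everything created on or after January 1, 2013, keeps the source tables for a trial run, skips redo logging, and uses four parallel query slaves might look like this (the date and DOP value are illustrative):

```sql
-- Hypothetical phase one invocation of the trs_instances procedure.
BEGIN
  trs_instances(i_older   => TO_TIMESTAMP('2013-01-01', 'YYYY-MM-DD'),
                drop_flag => false,   -- trial run: keep the source tables
                redo_flag => false,   -- no redo logging: back up before/after
                DOP       => 4,
                sql_trace => false);
END;
/
```

Remember that a database backup is required before and after phase two whenever redo_flag is set to false.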


RCU initially installs the TRS script. To complete TRS script installation, the following steps are required.

To complete TRS script installation:

  1. Create a directory to capture the DDL script and log files:

    mkdir -p /…/trs_output
    
  2. In SQL*Plus, connect to the database AS SYSDBA:

    SQL> CONNECT SYS AS SYSDBA
    
  3. Create the trs_dir database directory:

    SQL> CREATE OR REPLACE DIRECTORY trs_dir AS '/.../trs_output';
    
  4. Grant privileges to the Oracle SOA Suite schema owner:

    SQL> GRANT READ, WRITE ON DIRECTORY trs_dir TO soa_owner;
    
  5. For SQL tracing, the Oracle SOA Suite schema owner may require the ALTER SESSION privilege:

    SQL > GRANT ALTER SESSION TO soa_owner;
    
  6. Insert the table names of the Oracle SOA Suite tables to recreate into the TRS_PARAMETER table.

    • Table 10-13 shows the master component table name that you can insert and the corresponding tables that are automatically included with your selection.

    • Table 10-14 shows the individual tables that you can insert and the corresponding tables that are automatically included with your selection.

    The table has one column called TABLE_NAME that accepts the name of any Oracle SOA Suite tables mentioned in the purge script. One or more tables can be inserted into the TRS_PARAMETER table, but some tables that are considered master tables within the Oracle SOA Suite schema may require additional tables to be rebuilt to avoid dangling references. (TRS automatically determines table dependencies.)

    SQL> CREATE TABLE TRS_PARAMETER (TABLE_NAME VARCHAR2(200));
    SQL> INSERT INTO TRS_PARAMETER VALUES('COMPOSITE_INSTANCE');
    SQL> COMMIT;
    

    If the COMPOSITE_INSTANCE table is inserted into the TRS_PARAMETER table, then all component runtime tables must be rebuilt because the COMPOSITE_INSTANCE table holds the state (open, closed) of all composite flows. To avoid dangling references, the service component tables for Oracle BPEL Process Manager, Oracle Mediator, human workflow, and decision services (business rules) are rebuilt. This is automatically handled by the TRS script when the DDL is generated.

    In addition, if only the Oracle BPEL Process Manager master table CUBE_INSTANCE is inserted in the TRS_PARAMETER table, only the Oracle BPEL Process Manager runtime table is rebuilt. Therefore, the runtime tables for the SOA Infrastructure, Oracle Mediator, human workflow, and decision services (business rules) are untouched and the closed composites may still appear in Oracle Enterprise Manager Fusion Middleware Control through the SOA Infrastructure tables (COMPOSITE_INSTANCE). You must run the purge script described in Section 10.3, "Deleting Large Numbers of Instances with the Purge Scripts" to remove the closed SOA composite applications. (As stated, TRS is for corrective action so that purge and partitioning can be implemented.)

    Note:

    Though any individual Oracle SOA Suite table within the purge scripts can be recreated, some tables within the Oracle SOA Suite schema are considered master tables and others have dependencies. To avoid dangling references, the TRS script automatically includes dependent tables.

    Table 10-13 Master Component Tables and TRS_PARAMETER Table

    Master Component Table Names Automatically Included Tables

    COMPOSITE_INSTANCE

    Recreates all SOA Infrastructure, Oracle BPEL Process Manager, Oracle Mediator, human workflow, and business rule (decision service) component runtime tables.

    CUBE_INSTANCE

    Recreates all Oracle BPEL Process Manager runtime tables (including Oracle BPM Suite runtime tables).

    MEDIATOR_INSTANCE

    Recreates all Oracle Mediator runtime tables.

    WFTASK

    Recreates all human workflow runtime tables.

    BRDECISIONINSTANCE

    Recreates all business rule (decision service) runtime tables.

    XML_DOCUMENT (Footnote 1)

    Recreates the XML_DOCUMENT table.


    Footnote 1 This table typically grows large in size. Before recreating the XML_DOCUMENT table, there are specific prerequisites that you must perform. For more information, see Section 10.6.3, "Prerequisites for Rebuilding the XML_DOCUMENT Table."

    Table 10-14 Individual Tables and TRS_PARAMETER Table

    Table Name Automatically Included Tables

    INSTANCE_PAYLOAD (Footnote 1) or REJECTED_MSG_NATIVE_PAYLOAD

    • INSTANCE_PAYLOAD (Footnote 1)

    • REJECTED_MSG_NATIVE_PAYLOAD

    • REFERENCE_INSTANCE (Footnote 1)

    • COMPOSITE_INSTANCE_FAULT

    • REJECTED_MESSAGE

    • COMPONENT_INSTANCE

    ATTACHMENT or ATTACHMENT_REF

    • ATTACHMENT

    • ATTACHMENT_REF

    DLV_MESSAGE or DOCUMENT_DLV_MSG_REF or HEADERS_PROPERTIES

    • DLV_MESSAGE

    • DOCUMENT_DLV_MSG_REF

    • HEADERS_PROPERTIES

    BPM_AUDIT_QUERY or BPM_CUBE_AUDITINSTANCE or BPM_CUBE_PROCESSPERFORMANCE or BPM_CUBE_TASKPERFORMANCE or BPM_MEASURE_ACTION_EXCEPS or BPM_MEASUREMENT_ACTIONS

    • BPM_AUDIT_QUERY

    • BPM_CUBE_AUDITINSTANCE

    • BPM_CUBE_PROCESSPERFORMANCE

    • BPM_CUBE_TASKPERFORMANCE

    • BPM_MEASURE_ACTION_EXCEPS

    • BPM_MEASUREMENT_ACTIONS

    MEDIATOR_PAYLOAD (Footnote 1)

    • MEDIATOR_DEFERRED_MESSAGE

    • MEDIATOR_CASE_INSTANCE

    • MEDIATOR_RESEQUENCER_MESSAGE

    • MEDIATOR_GROUP_STATUS

    • MEDIATOR_PAYLOAD

    MEDIATOR_DEFERRED_MESSAGE

    • MEDIATOR_DEFERRED_MESSAGE

    • MEDIATOR_PAYLOAD

    MEDIATOR_CASE_INSTANCE

    • MEDIATOR_CASE_INSTANCE

    • MEDIATOR_PAYLOAD

    MEDIATOR_RESEQUENCER_MESSAGE

    • MEDIATOR_RESEQUENCER_MESSAGE

    • MEDIATOR_GROUP_STATUS

    • MEDIATOR_PAYLOAD


    Footnote 1 This table typically grows large in size.

  7. Generate the log file in database directory trs_dir.

    SQL> @debug_on
    
  8. Capture any API statement failures in a log file.

    SQL> SET SERVEROUTPUT ON
    
  9. Notes:

    • Phase one can take a long time to execute, depending on the volume of data.

    • You can monitor phase one progress by viewing the log in the trs_dir directory if debugging is on.

    • You can also monitor phase one SQL progress by examining the SQL trace file in the database user dump destination if SQL_TRACE is set to true.

  10. Exit to flush the last buffer to the log and SQL script files.

    SQL> EXIT
    

10.6.5 Phase Two - Running the TRS Script

Run the generated DDL script found in the database directory trs_dir. This script contains the statements to recreate the Oracle SOA Suite tables that you generated in Section 10.6.4, "Phase One - Creating the TRS Script."

  1. In SQL*Plus, connect to the database AS SYSDBA:

    SQL> CONNECT SYS AS SYSDBA
    
  2. Execute the generated DDL script in the trs_dir directory:

    SQL> @SOA_TRS_20120711091958.sql
    

    This creates a spool file named SOA_TRS_20120711091958.log.

10.6.6 Logging and Debugging

You can record an audit trail of the current procedure and SQL statement being executed. The audit trail is written to a log file named soa_trs_DATE.log in the trs_dir directory.

To enable logging and debugging:

  1. Change directories to the following location:

    $ cd …./trs_common
    
  2. Turn on debugging:

    SQL> @debug_on.sql;
    
  3. When DDL script execution is complete, turn off debugging:

    SQL> @debug_off.sql;
    

You can trace errors that occur during DDL script creation and execution. Note the following details about the sql_trace flag.

  • SQL trace is set (event 10046) for statements in the trs_instances procedure.

  • The SQL trace file is located in the database user dump destination.

  • SQL statements can be correlated with the DEBUG log setting to determine the procedure.

  • A SQL trace is set (event 10046) at the top of the TRS script.
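As a sketch, you can locate the user dump destination that holds these trace files from SQL*Plus (the parameter below applies to Oracle Database 11g; later releases consolidate trace files under the diagnostic destination):

```sql
SQL> SHOW PARAMETER user_dump_dest
```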

10.7 Purging Instances on Microsoft SQL Server

You can execute the looped (single thread) purge script described in Section 10.3.1, "Looped Purge Script" to purge database instances on Microsoft SQL Server.

Before running the looped purge script, note the following details:

  • There is no support for running the parallel purge script or for partitioning on Microsoft SQL Server.

  • The following Microsoft SQL Server versions are supported:

    • 2005 (all service pack levels included)

    • 2008 (all service pack levels included)

    • 2008 R2 (all service pack levels included)

  • The Oracle PL/SQL purge scripts are translated to Microsoft SQL Server Transact-SQL (T-SQL).

  • The scripts are executed slightly differently than on an Oracle database because of differences in T-SQL. These differences are described in subsequent sections.

10.7.1 Understanding Microsoft SQL Server Features

This section describes several basic Microsoft SQL Server features. For more specific details, see your Microsoft SQL Server documentation.

  • Microsoft SQL Server has separate databases that do not share disk files. This differs from an Oracle database, in which one database instance can have many users and schemas, and tablespaces are shared by users.

  • Microsoft SQL Server uses logins to provide access to an instance. Each database has users that map to a login to obtain individual access to tables, views, and so on.

  • Each user has a default schema and can have access to other schemas.

  • Logins with the sysadmin role have the default schema assigned to the database owner (dbo).

  • T-SQL is equivalent to Oracle PL/SQL.

  • Each login has access to multiple databases and schemas in the database. Access to any object must be with a fully qualified name:

    [DATABASE_NAME].[SCHEMA_NAME].[TABLE_NAME]
    
  • You can default to a database name before accessing the database objects:

    1. Run the USE [DATABASE_NAME] command.

    2. Access schema objects as [SCHEMA_NAME].[TABLE_NAME].

  • If the object is in the default schema specified for the user, it can be directly accessed without qualifying it with the schema name.
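The naming rules above can be sketched as follows (SOADB and SS1_SOAINFRA are hypothetical database and schema names):

```sql
-- Fully qualified access works from any database context:
SELECT COUNT(*) FROM SOADB.SS1_SOAINFRA.COMPOSITE_INSTANCE;
GO
-- Or default to a database first, then qualify only with the schema:
USE SOADB;
GO
SELECT COUNT(*) FROM SS1_SOAINFRA.COMPOSITE_INSTANCE;
GO
```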

10.7.2 Purge Script Differences on Microsoft SQL Server

This section describes the different aspects of using the purge script on Microsoft SQL Server.

  • Microsoft SQL Server does not support packages, while the purge scripts rely on separate packages for each component. Schemas in Microsoft SQL Server are used to achieve this functionality.

  • T-SQL does not directly support the logging mechanisms used on an Oracle database (dbms_output.put_line and utl_file.put_line). Instead, xp_cmdshell logs purge details to a file. This functionality requires additional configuration changes described in Section 10.7.3, "Running the Purge Scripts on Microsoft SQL Server."

  • T-SQL does not support compiler flags, which are used for switching debugging on and off in an Oracle database. An additional table manages the debugging settings.

  • Accessing tables from Microsoft SQL Server requires them to be qualified with the schema name. Additional procedures are provided for Microsoft SQL Server to create and drop synonyms, so that the main scripts can avoid schema name prefixes.

  • The xp_cmdshell extended stored procedure spawns a Windows command shell and passes it a string for execution. The sysadmin role is required to execute this procedure.
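As an illustration of the mechanism (not part of the purge scripts themselves), xp_cmdshell can be called directly; the directory path here is hypothetical:

```sql
-- Requires the sysadmin role, and xp_cmdshell must be enabled
-- through sp_configure (see Section 10.7.3).
EXEC xp_cmdshell 'dir C:\temp';
GO
```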

10.7.3 Running the Purge Scripts on Microsoft SQL Server

You run the purge script with the SQLCMD utility. This utility enables you to enter T-SQL statements, system procedures, and script files at the command prompt in the following locations:

  • The Query Editor in SQLCMD mode

  • A Windows script file

  • An operating system (cmd.exe) job step of a SQL Server Agent job

Because the purge script is packaged as multiple SQL files in different directories, each referenced by relative path, the SQLCMD utility is the best method for loading it.

The purge script is located in the MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/mssql/soa_purge directory.

  1. Connect to Microsoft SQL Server using the SQLCMD utility as a user with sysadmin privileges. For example:

    sqlcmd -S myhost.us.example.com\Instance_Name -d database_name -U user_name -P password
    
  2. Set the date format so that the purge scripts run with English date settings.

    Note:

    English is the only supported language for running the purge scripts on Microsoft SQL Server.

    set DATEFORMAT mdy
    

    With the mdy (month-day-year) format, for example, a date entered as 08122013 is interpreted as August 12, 2013.
    
  3. Load the purge scripts.

    :r soa_purge_scripts.sql
    
    (1 rows affected)
    
  4. Change the configuration to support xp_cmdshell.

    sp_configure 'xp_cmdshell', 1;
    GO
    RECONFIGURE;
    GO
    
  5. If required, enable debugging.

    :r common\debug_on.sql
    
  6. Create synonyms to target the purge on a specific schema.

    EXEC soa.create_synonyms
        @schema_name = 'SS1_SOAINFRA'
    GO
    
  7. Execute the purge script.

    EXEC soa.delete_instances
      @min_creation_date = '01 Jan 2012',
      @max_creation_date = '30 Jan 2012',
      @batch_size = 50,
      @max_runtime = 15,
      @retention_period = '31 Dec 2012',
      @write_file = 'C:\temp\out1.txt'
    GO
    

    The parameters to specify are the same as those described in Table 10-1 for running the looped purge script on an Oracle database, except that the purge_partitioned_component parameter is not supported.

    Notes: