Oracle9i Database Tuning Guide


WLI Schema Tuning

In this section, we highlight typical areas within the WLI schema that can perform better with tuning, depending on the application architecture of the target WLI application. The tuning techniques described in this section impose incremental costs in database system resources. Although the benefits of these techniques usually outweigh the additional resource costs, they should be applied only when a specific performance problem has been identified. See Oracle Statspack for a list of database performance issues commonly found in WLI applications.

This chapter includes the following sections:

   JPD Tables
   WLI_PROCESS_INSTANCE_INFO Table
   WLI_PROCESS_EVENT Table

JPD Tables

In WLI, business process logic is implemented using Java Process Definitions (JPDs). JPD information is persisted in JPD_PROCESS tables in the WLI schema. These tables can be tuned to accommodate a much higher level of concurrency and throughput by applying some database tuning techniques.

One of the largest performance increases for JPD_PROCESS tables comes from modifying the storage characteristics of the BLOB data column, CG_DATA. This column contains the serialized byte array representing the JPD instance.

BLOB storage in a JPD table suffers from two common problems with LOBs in Oracle: not caching the BLOB data and storing BLOB data in-line with table data. To remove these bottlenecks, the JPD table’s BLOB column should be cached and stored in a separate tablespace.

Caching JPD BLOB Data

JPD BLOB caching should be enabled in a WLI database where a JPD table has been identified as a bottleneck. BLOB data can be cached in Oracle's DEFAULT pool, KEEP pool, RECYCLE pool, or an alternate block size cache. Caching LOB data in the DEFAULT pool can have a negative effect on performance because the LOB data will compete for space with the most commonly cached data in the database. For this reason, LOB data should be cached in one of the alternate buffer pools.

To cache JPD BLOBs, a target buffer pool must be identified or created, and the JPD table must be created or altered to use the cache.

The following code samples show how to create a RECYCLE pool, and how to alter or create the jpd_processes_table to use it.

-- create RECYCLE pool
ALTER SYSTEM
   SET db_recycle_cache_size = 64M
   SCOPE = BOTH
/
-- alter existing JPD table to use RECYCLE pool
ALTER TABLE jpd_processes_table 
   MODIFY LOB (cg_data)
      (
         CACHE 
         STORAGE 
            (
               BUFFER_POOL RECYCLE
            )
      )
/
-- create new JPD table to use RECYCLE pool
CREATE TABLE jpd_processes_table
   (
      cg_id                 VARCHAR2(768 byte)      NOT NULL, 
      last_access_time      NUMBER(19),  
      cg_data               BLOB,
      CONSTRAINT jpd_processes_table_pk 
         PRIMARY KEY(cg_id)
            USING INDEX TABLESPACE wli_index
   )
   TABLESPACE wli_data
   LOB(cg_data) STORE AS 
      (
         CACHE
         STORAGE 
            (
               BUFFER_POOL RECYCLE
            )
      )
/
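To confirm that the change took effect, the LOB's caching mode and buffer pool assignment can be checked in the data dictionary. The query below is a sketch that assumes the sample table name used above:

```sql
-- check the LOB caching mode ('YES', 'NO', or 'CACHEREADS')
SELECT table_name, column_name, cache
   FROM user_lobs
   WHERE table_name = 'JPD_PROCESSES_TABLE'
/
-- check the buffer pool assigned to the LOB segment
SELECT segment_name, buffer_pool
   FROM user_segments
   WHERE segment_type = 'LOBSEGMENT'
/
```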

See LOB Tuning in Database Tuning for more information on LOB caching.

Separate Tablespace for BLOBs

A dedicated tablespace for WLI LOB data should be created. This tablespace can be created with either the default database block size or an alternate larger block size. Larger block sizes can increase performance for LOB data access.

The following code samples show how to create tablespaces with the default block size and with an alternate block size:

-- create default block size tablespace
CREATE TABLESPACE wli_lob_data
   DATAFILE '/u03/app/oracle/oradata/wlidb1/wli_lob_data01.dbf' SIZE 1000M
   EXTENT MANAGEMENT LOCAL
   UNIFORM SIZE 50M
   SEGMENT SPACE MANAGEMENT AUTO
/
-- create alternate block size tablespace
CREATE TABLESPACE wli_lob_data
   DATAFILE '/u03/app/oracle/oradata/wlidb1/wli_lob_data01.dbf' SIZE 1000M
   BLOCKSIZE 16K
   EXTENT MANAGEMENT LOCAL
   UNIFORM SIZE 50M
   SEGMENT SPACE MANAGEMENT AUTO
/
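Note that a tablespace with a non-default block size can be created only after a buffer cache of that block size has been configured. For the 16K sample above, this means setting db_16k_cache_size first; a minimal sketch follows, in which the 64M size is illustrative and should be sized for the expected LOB working set:

```sql
-- allocate a buffer cache for 16K blocks before creating the tablespace
ALTER SYSTEM
   SET db_16k_cache_size = 64M
   SCOPE = BOTH
/
```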

To store BLOB data from the JPD tables in a separate tablespace, the JPD table must be created or moved with the TABLESPACE storage parameter set to the alternate tablespace.

The following code samples show how to create a JPD table and alter an existing table:

-- create a new JPD table
CREATE TABLE jpd_processes_table
   (
      cg_id                 VARCHAR2(768 byte)      NOT NULL, 
      last_access_time      NUMBER(19),  
      cg_data               BLOB,
      CONSTRAINT jpd_processes_table_pk 
         PRIMARY KEY(cg_id)
         USING INDEX TABLESPACE wli_index
   )
   TABLESPACE wli_data
   LOB(cg_data) STORE AS 
      (
         TABLESPACE wli_lob_data
         DISABLE STORAGE IN ROW 
         CACHE
      )
/
-- alter an existing JPD table
ALTER TABLE jpd_processes_table
   MOVE
   LOB(cg_data) 
      STORE AS 
         (
            DISABLE STORAGE IN ROW
            TABLESPACE wli_lob_data
            CACHE
         )
/
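Whether the move succeeded can be verified from the data dictionary: USER_LOBS reports both the tablespace holding the LOB segment and the in-row setting. The query below assumes the sample names used above:

```sql
-- IN_ROW should be 'NO' and TABLESPACE_NAME should be WLI_LOB_DATA
SELECT table_name, column_name, tablespace_name, in_row
   FROM user_lobs
   WHERE table_name = 'JPD_PROCESSES_TABLE'
/
```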

See Multiple Block Size Tablespaces in Database Tuning for more information on multiple block size tablespaces.

WLI_PROCESS_INSTANCE_INFO Table

The WLI_PROCESS_INSTANCE_INFO table is updated on every persistent change in the JPD. In some applications built with WLI, this table can become a performance bottleneck due to a large number of concurrent inserts. Two tuning techniques that have had a positive effect on the performance of this table are adding a reverse-key index and partitioning the table by hash.

Reverse Key Index

The primary key of the WLI_PROCESS_INSTANCE_INFO table is populated by a sequence. This sequential population causes the index on the primary key to be right-growing and to suffer performance problems under heavy concurrent load. Reversing the index on the primary key alleviates this problem by distributing inserts across the index leaf blocks, removing the contention that otherwise concentrates on the right-most block.

To reverse the index for the primary key of the WLI_PROCESS_INSTANCE_INFO table, the index has to be altered or the table has to be recreated.

The following code sample shows how to reverse the index.

-- rebuild the index reverse
ALTER INDEX pk_wli_process_instance_info 
   REBUILD 
   REVERSE 
   COMPUTE STATISTICS
/
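After the rebuild, the change can be confirmed from USER_INDEXES, which reports reverse-key indexes with an index type of 'NORMAL/REV':

```sql
-- INDEX_TYPE is 'NORMAL/REV' for a reverse key index
SELECT index_name, index_type
   FROM user_indexes
   WHERE index_name = 'PK_WLI_PROCESS_INSTANCE_INFO'
/
```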

See Reverse Key Indexes in Database Tuning for more information on using reverse-key indexes.

Partitioning

As the concurrent demand for access to the WLI_PROCESS_INSTANCE_INFO table grows in a high-volume WLI application, contention can begin to occur at the block level. Partitioning the table decreases this contention by distributing table data across many physical partitions, thereby reducing the likelihood that concurrent transactions will try to access the same physical block.

To partition the WLI_PROCESS_INSTANCE_INFO table, it has to be recreated. Partition values should be set to powers of two. Good performance has been observed with partition values of 32 and 64.

The following code sample shows how to partition the WLI_PROCESS_INSTANCE_INFO table.

-- create partitioned WLI_PROCESS_INSTANCE_INFO table
CREATE TABLE WLI_PROCESS_INSTANCE_INFO
   (
      PROCESS_INSTANCE        VARCHAR(768) NOT NULL,
      PROCESS_TYPE            VARCHAR(200) NOT NULL,
      PROCESS_LABEL           VARCHAR(1000),
      PROCESS_STATUS          SMALLINT     NOT NULL,
      PROCESS_START_TIME      NUMBER       NOT NULL,
      PROCESS_END_TIME        NUMBER,
      SLA_EXCEED_TIME         NUMBER,
      SEQUENCE_ID             INTEGER      NOT NULL,
      CONSTRAINT PK_WLI_PROCESS_INSTANCE_INFO 
         PRIMARY KEY(PROCESS_INSTANCE)
            USING INDEX TABLESPACE wli_index
   )
   PARTITION BY HASH (PROCESS_INSTANCE) PARTITIONS 64
   TABLESPACE wli_data
/
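The resulting partitions can be listed from USER_TAB_PARTITIONS to confirm that the table was created with the expected partition count:

```sql
-- expect 64 hash partitions (system-named SYS_P... by default)
SELECT COUNT(*)
   FROM user_tab_partitions
   WHERE table_name = 'WLI_PROCESS_INSTANCE_INFO'
/
```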

See Partitioning in Database Tuning for more information on partitioning tables.

WLI_PROCESS_EVENT Table

The WLI_PROCESS_EVENT table contains detailed tracking information that describes the events that occurred within a JPD. At the end of a JPD transaction, all the events generated during that transaction are sent to a JMS queue and are written to the WLI_PROCESS_EVENT table. The number of events actually generated depends on the complexity of the JPD and the TrackingLevel set through the OA&M console. With the TrackingLevel set at its most verbose setting, contention for access to this table can degrade system performance. This performance degradation can be alleviated by partitioning the WLI_PROCESS_EVENT table.

Partitioning

As the number of events being tracked increases, contention for access to the WLI_PROCESS_EVENT table at the block level also increases. Partitioning the table decreases this contention by distributing table data across many physical partitions, thereby reducing the likelihood that concurrent transactions will try to access the same physical block.

To partition the WLI_PROCESS_EVENT table, it has to be recreated. Partition values should be set to powers of two. Good performance has been observed with partition values of 32 and 64.

The following code sample shows how to partition the WLI_PROCESS_EVENT table.

-- create partitioned WLI_PROCESS_EVENT table
CREATE TABLE wli_process_event
   (
      process_type            VARCHAR(200) NOT NULL,
      process_event_id        VARCHAR(60)  NOT NULL,
      process_instance        VARCHAR(768) NOT NULL,
      deployment_id           INTEGER      NOT NULL,
      event_time              INTEGER      NOT NULL,
      activity_id             SMALLINT     NOT NULL,
      event_type              SMALLINT     NOT NULL,
      event_data              BLOB,
      process_label           VARCHAR(1000),
      is_rolled_back          SMALLINT     NOT NULL,
      event_elapsed_time      NUMBER,
      start_event_id          VARCHAR(60),
      event_count             INT,
      CONSTRAINT pk_wli_process_event
         PRIMARY KEY (process_instance, process_event_id)
            USING INDEX TABLESPACE wli_index
   )
   PARTITION BY HASH (process_instance, process_event_id) PARTITIONS 64
   TABLESPACE wli_data
/
