31 Tuning Integration Flows

To maximize the performance of your integration flows, you must monitor, analyze, and tune all the components. This chapter describes the tools that you can use to monitor performance and the techniques for optimizing the performance of the integration flows and the underlying Oracle Fusion Middleware components.

This chapter includes the following sections:

31.1 Introduction to Tuning

Performance tuning usually involves a series of trade-offs. After you have determined what is causing the bottlenecks, you may have to modify performance in some other areas to achieve the desired results. However, if you have a clearly defined plan for achieving your performance objectives, the decision on what to trade for higher performance is easier because you have identified the most important areas. Tuning is never a one-size-fits-all proposition or a one-off configuration change. Rather, it is an iterative process of monitoring and tuning.

31.1.1 How to Use Baselines

The most effective way to tune is to have an established performance baseline that can be used for comparison if a performance issue arises.

It is important to identify the peak periods at the site and install a monitoring tool that gathers performance data for those high-load times. Optimally, data gathering should be configured when the application is in its initial trial phase during the QA cycle. Otherwise, this should be configured when the system is first in production.

31.1.2 How to Handle Resource Saturation

If any of the hardware resources are saturated (consistently at or near 100% utilization), one or more of the following conditions may exist:

  • The hardware resources are insufficient to run the application.

  • The system is not properly configured.

  • The application or database must be tuned.

  • There could be a serious problem or bottleneck within the code, service, or process that consumes excessive resources or does not release resources for a long time.

  • There could be a problem with ecosystem applications not responding within the expected time, causing a choke point.

  • There could be situations where unrelated applications or databases belonging to the customer are running on the same hardware and consuming all of the available resources.

Any bottlenecks identified in production should be fixed in the code and a performance improvement patch must be applied.

For a consistently saturated resource, the solution is to reduce load or increase resources. For peak traffic periods when increased response time is not acceptable, consider increasing resources in the form of additional memory or additional nodes in an existing cluster. Alternatively, determine whether some traffic can be rescheduled to reduce the peak load, for example by throttling at the source or at the middleware, or by scheduling batch and background operations during slower periods.

31.1.3 How to Use Proactive Monitoring

Proactive monitoring usually occurs on a regularly scheduled interval, where several performance statistics are examined to identify whether the system behavior and resource usage has changed. Proactive monitoring can also be considered as proactive tuning.

Usually, monitoring does not result in configuration changes to the system, unless the monitoring exposes a serious problem that is developing. In some situations, experienced performance engineers can identify potential problems through statistics alone, although accompanying performance degradation is usual.

Experimenting with or tweaking a system when there is no apparent performance degradation as a proactive action can be a dangerous activity, resulting in unnecessary performance drops. Tweaking a system should be considered reactive tuning, and the steps for reactive tuning should be followed.

Oracle Fusion Middleware provides a variety of technologies and tools that can be used to monitor server and application performance. These tools are described at length in later chapters. Monitoring is an important step in performance tuning and enables you to evaluate server activity, watch trends, diagnose system bottlenecks, debug applications with performance problems, and gather data that can assist you in tuning the system.

For more information, see "Monitoring of Oracle SOA Suite and Oracle BPM Suite" in Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

Also, refer to "Garbage Collection Configuration" in the Oracle Fusion Middleware Performance and Tuning Guide for monitoring memory by enabling garbage collections.

31.1.4 How to Eliminate Bottlenecks

Tuning usually implies fixing a performance problem. However, tuning should be part of the application life cycle, through the analysis, design, coding, production, and maintenance stages. Often the tuning phase is left until the system is in production. At that point, tuning becomes a reactive fire-fighting exercise, where the most important bottleneck is identified and fixed.

System testing should be done rigorously against the baseline performance requirements identified during the requirements, analysis, and design phases. System testing should be planned with robust test cases that cover all possible scenarios, with expected peak loads and peak stress in each use case; this helps to eliminate most of the serious bottlenecks. This falls under the preventive tuning category because the problem can be identified before production. Any bottleneck or performance issue identified during production falls under the reactive tuning category.

The only constraint with proactive or preventive tuning is that the production-like environment or deployment topology cannot be fully predicted during the QA phase, even though some baselines are identified during the requirements and analysis phases. Using appropriate hardware and extrapolation logic helps in addressing this constraint.

During the application's initial trial run, or while entering the QA phase, a full-fledged effort should be made to tune the application and the environment as part of baseline data collection. Rather than deploying all parts of the application and testing it as it would run in the production environment, take a bottom-up approach. It is highly recommended that you take a single end-to-end flow and optimize it fully before introducing multiple flows into the mix. Even for a single flow, it is better to test it with a single user running five to ten iterations. Keep tuning the flow until the average response time and throughput for that flow adhere to the KPI. Look at the metrics for all of the components participating in the flow, identify the areas where most of the time is spent, and find ways to fix them.

The process described in the previous paragraph must be repeated with multiple concurrent users executing the same flow. Start with a small number of concurrent users and gradually increase the number. During this testing phase, in addition to looking at metrics generated by the services, look at how garbage collection is occurring by looking at the garbage collection log. This is a key piece of information that will shed light on how the JVM environment can be configured for better performance and throughput. In addition, review the AWR report to check how the database is behaving.

After all the end-to-end flows have been individually optimized to perform according to the KPIs for the expected number of concurrent users, multiple disparate flows can be tested simultaneously to check whether a specific flow's characteristics have any impact on other flows.

Though this section describes the steps to be carried out in tuning the system before going to production, Oracle AIA emphasizes that monitoring and tuning is a lifelong activity, so that the systems remain under control, predictable, less prone to failure, and optimally utilized.

31.1.5 Top Performance Areas

Even though this chapter can be considered a 'Quick Start' guide for fine-tuning integration flows, it is not intended to provide a complete list of areas to tune or the techniques used to identify and fix the issues.

For complete information about fine-tuning Fusion Middleware applications, see Oracle Fusion Middleware Performance and Tuning Guide.

Here is the list of critical Oracle Fusion Middleware performance areas that must be examined to performance-tune your integration flows.

  • Oracle database

  • SOA dehydration database

    • Configuring database parameters for the most optimal performance

    • Identifying and fixing the bottlenecks

  • Java Virtual Machine (JVM)

    • Configuring Garbage Collection to get the most optimal performance

    • Monitoring and Profiling the JVM

  • Oracle Fusion Middleware components

    • Configuring FMW components for the most optimal performance

    • Configuring components for concurrency

    • Configuring logging levels

    • Configuring Meta Data Service

  • Oracle AIA Process Integration Packs

    Configuring AIA Services. Pay attention to external contributing factors such as the size of payloads and latency in edge systems. These factors can have an adverse impact on the overall performance of the PIPs.

The following sections discuss how to configure and tune each of the Oracle Fusion Middleware Performance areas.

31.2 Oracle Database Performance Tuning

This section includes the following topics:

31.2.1 How to Tune the Oracle Database

Oracle Database plays a critical role in the execution of AIA Process Integration Packs. The database is leveraged by Oracle SOA Suite and by the integration packs for multiple purposes. Apart from storing the metadata for all design-time artifacts, it persists the state and audit information for in-flight and completed instances. In addition, the database manages the relationships between semantically equivalent object instances (cross-reference data) residing in multiple applications. The Oracle database must be monitored and tuned to get the best performance from AIA applications.

Database administrators must monitor the database (for example, by generating Automatic Workload Repository (AWR) reports for the Oracle database) to observe lock contention and I/O usage, and take appropriate action to address the issues.

31.2.2 Introducing Automatic Workload Repository

The Automatic Workload Repository (AWR) collects, processes, and maintains performance statistics for problem detection and self-tuning purposes. This data is both in memory and stored in the database. The gathered data can be displayed in both reports and views.

Oracle generates many types of cumulative statistics for the system, sessions, and individual SQL statements. Oracle also tracks cumulative statistics on segments and services. When analyzing a performance problem in any of these scopes, you typically look at the change in statistics (delta value) over the period you are interested in. Specifically, you look at the difference between the cumulative value of a statistic at the start of the period and the cumulative value at the end.

Cumulative values for statistics are generally available through dynamic performance views, such as the V$SESSTAT and V$SYSSTAT views. Note that the cumulative values in dynamic views are reset when the database instance is shut down.

The AWR automatically persists the cumulative and delta values for most of the statistics at all levels except the session level. This process is repeated on a regular time period and the result is called an AWR snapshot. The delta values captured by the snapshot represent the changes for each statistic over the time period.

AWR supports the capture of baseline data by enabling you to specify and preserve a pair or range of AWR snapshots as a baseline. Carefully consider the time period you choose as a baseline; the baseline should be a good representation of the peak load on the system. In the future, you can compare these baselines with snapshots captured during periods of poor performance.
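
As a minimal sketch of this practice, the following statements take a manual AWR snapshot and preserve a pair of snapshots as a named baseline. The snapshot IDs and baseline name are hypothetical; use the IDs that bracket your own peak-load window.

BEGIN
  -- Take a manual snapshot, for example just before and just after a peak-load test.
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
END;
/

-- List recent snapshots to find the IDs that bracket the peak period.
SELECT snap_id, begin_interval_time, end_interval_time
  FROM dba_hist_snapshot
 ORDER BY snap_id;

BEGIN
  -- Preserve the pair of snapshots representing peak load as a named baseline
  -- (the snapshot IDs 120 and 132 are illustrative only).
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 120,
    end_snap_id   => 132,
    baseline_name => 'peak_load_baseline');
END;
/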

Oracle Enterprise Manager is the recommended tool for viewing both real time data in the dynamic performance views and historical data from the AWR history tables. Oracle Enterprise Manager can also be used to capture operating system and network statistical data that can be correlated with AWR data.

For more information, see Oracle Database 2 Day + Performance Tuning Guide.

The statistics collected and processed by AWR include:

  • Object statistics that determine both access and usage statistics of database segments

  • Time model statistics based on time usage for activities, displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views

  • Some system and session statistics collected in the V$SYSSTAT and V$SESSTAT views

  • SQL statements that are producing the highest load on the system, based on criteria such as elapsed time and CPU time

  • Active Session History (ASH) statistics, representing the history of recent session activity

31.2.3 Configuring Performance Related Database Initialization Parameters

Below are the minimum basic configurations to be set for the dehydration store.

Tip:

This SHOULD be implemented in the staging and production environment.

Table 31-1 provides common init.ora parameters and their descriptions. Consider following these guidelines to set the database parameters. One would use either pfile (init[SID].ora) or spfile to manage the parameters. Ultimately, however, the DBA should monitor the database health and tune parameters based on the need.

The values should be considered as starting values. Again, these values can vary depending upon the target environment's hardware and software topologies. Additional applications or processes, if any, that run on the target environment could have a significant impact on these properties. Further, these values must be adjusted based on the inferences made using the data collected through monitoring tools.

Table 31-1 Common init.ora Parameters

Database Parameter | Recommended Starting Value for Testing | Description

shared_pool_size

800M

Applicable when SGA auto-tuning using sga_target and sga_max_size is not being used. Set the value as high as possible. Ensure that the sum of the total memory consumed by the database, any other applications running on your system, and the operating system does not exceed the amount of available physical RAM. Customer implementations having tens of gigabytes of shared_pool_size are quite normal.

Note: If it is an 11g database, consider setting MEMORY_TARGET instead of setting the SGA and the PGA separately.

sga_max_size

1504M

The SGA_MAX_SIZE initialization parameter specifies the maximum size of the System Global Area for the lifetime of the instance. You can dynamically alter the initialization parameters affecting the size of the buffer caches, shared pool, and large pool, but only to the extent that the sum of these sizes and the sizes of the other components of the SGA (fixed SGA, variable SGA, and redo log buffers) does not exceed the value specified by SGA_MAX_SIZE. Set the value as high as possible. Ensure that the sum of the total memory consumed by the database, any other applications running on your system, and the operating system does not exceed the amount of available physical RAM. Customer implementations having tens of gigabytes of sga_max_size are quite normal.

Ensure that you regularly monitor the buffer cache hit ratio and size the SGA so that the buffer cache has an adequate number of frames for the workload. The buffer cache hit ratio may be calculated from data in the view V$SYSSTAT. Also the view V$DB_CACHE_ADVICE provides data that can be used to tune the buffer cache.

Note: If it is an 11g database, consider setting MEMORY_TARGET instead of setting the SGA and the PGA separately.

pga_aggregate_target

800M

Specifies the target aggregate PGA memory available to all server processes attached to the instance. Starting from 11g, set MEMORY_TARGET instead of setting SGA and the PGA separately.

processes

1500

Sets the maximum number of operating system processes that can be connected to Oracle concurrently. The value of this parameter must account for Oracle background processes. The SESSIONS parameter is derived from this value.

log_buffer (applicable only for 10g database)

30523392

LOG_BUFFER specifies the amount of memory (in bytes) that Oracle uses when buffering redo entries to a redo log file. Redo log entries contain a record of the changes that have been made to the database block buffers. The LGWR process writes redo log entries from the log buffer to a redo log file.

The goal in sizing log_buffer is to set a value that results in the least overall amount of log-related wait events. A too-small log_buffer commonly causes high "redo log space requests" waits, while a too-large log_buffer may result in high "log file sync" waits.

db_block_size

8192

DB_BLOCK_SIZE specifies (in bytes) the size of Oracle database blocks. The default block size of 8K is optimal for most systems. Set this parameter at the time of database creation.

job_queue_processes

10

JOB_QUEUE_PROCESSES specifies the maximum number of processes that can be created for the execution of jobs. It specifies the number of job queue processes per instance.

UNDO_MANAGEMENT

AUTO

UNDO_MANAGEMENT specifies which undo space management mode the system should use. When set to AUTO, the instance starts in automatic undo management mode. In manual undo management mode, undo space is allocated externally as rollback segments. Starting with Oracle Database 11g Release 1 (11.1), the default value of the UNDO_MANAGEMENT parameter is AUTO so that automatic undo management is enabled by default. You must set the parameter to MANUAL to turn off automatic undo management, if required.

open_cursors

1000

OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening too many cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number varies from one application to another. Assuming a session does not open the number of cursors specified by OPEN_CURSORS, there is no added performance impact to setting this value higher than actually needed. A value of 1000 for open_cursors is a reasonable number to start with.

Sga_target

1504M

Setting this parameter to a nonzero value enables Automatic Shared Memory Management. Consider using automatic memory management, both to simplify configuration and to improve performance. Set the value as high as possible. Ensure that the sum of the total memory consumed by the database, any other applications running on your system, and the operating system does not exceed the amount of available physical RAM. Customer implementations having tens of gigabytes of SGA_TARGET are quite normal.

Note: If it is an 11g database, consider setting MEMORY_TARGET instead of setting the SGA and the PGA separately.

MEMORY_TARGET

2500M

MEMORY_TARGET specifies the Oracle system-wide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.

MEMORY_MAX_TARGET

3000M

MEMORY_MAX_TARGET specifies the maximum value to which a DBA can set the MEMORY_TARGET initialization parameter.

Session_cached_cursors

200

SESSION_CACHED_CURSORS specifies the number of session cursors to cache. Repeated parse calls of the same SQL statement cause the session cursor for that statement to be moved into the session cursor cache. Subsequent parse calls find the cursor in the cache and do not reopen the cursor. Oracle uses a least recently used algorithm to remove entries in the session cursor cache to make room for new entries when needed. This parameter also constrains the size of the PL/SQL cursor cache which PL/SQL uses to avoid having to re-parse as statements are re-executed by a user. A starting value of 200 is reasonable for a typical AIA system.

TRACE_ENABLED

FALSE

TRACE_ENABLED controls tracing of the execution history, or code path, of Oracle. Oracle Support Services uses this information for debugging. Although the performance impact incurred from processing is not excessive, you may improve performance by setting TRACE_ENABLED to FALSE.

AUDIT_TRAIL

NONE

-

NLS_SORT

BINARY

-

FAST_START_MTTR_TARGET

3600

Oracle 10g introduced an advisory utility that enables you to specify your optimal mean time to recovery (MTTR) interval and uses this to suggest the optimal redo log size. In Oracle 10g, the fast_start_mttr_target parameter is used.

Oracle recommends using the fast_start_mttr_target initialization parameter to control the duration of startup after instance failure. With 10g, the Oracle database can self-tune checkpointing to achieve good recovery times with low impact on normal throughput. You no longer have to set any checkpoint-related parameters.

This method reduces the time required for cache recovery and makes the recovery bounded and predictable by limiting the number of dirty buffers and the number of redo records generated between the most recent redo record and the last checkpoint. Administrators specify a target (bounded) time to complete the cache recovery phase of recovery with the fast_start_mttr_target initialization parameter, and Oracle automatically varies the incremental checkpoint writes to meet that target.

DISK_ASYNCH_IO

TRUE

DISK_ASYNCH_IO controls whether I/O to datafiles, control files, and logfiles is asynchronous (that is, whether parallel server processes can overlap I/O requests with CPU processing during table scans). If your platform supports asynchronous I/O to disk, it is recommended that you leave this parameter set to its default value. However, if the asynchronous I/O implementation is not stable, you can set this parameter to false to disable asynchronous I/O. If your platform does not support asynchronous I/O to disk, this parameter has no effect.

If you set DISK_ASYNCH_IO to false, then you should also set DBWR_IO_SLAVES to a value other than its default of zero to simulate asynchronous I/O.

FILESYSTEMIO_OPTIONS

SETALL

For optimal disk performance, Oracle should always use direct I/O to its data files, bypassing any caching at the OS layer. Direct I/O must be enabled both in Oracle and in the operating system.

Oracle controls direct I/O with a parameter named filesystemio_options. filesystemio_options parameter must be set to "setall" (the preferred method, according to the Oracle documentation) or "directio" in order for Oracle to read data blocks directly from disk.

Using direct I/O enables you to enhance I/O by bypassing the redundant OS block buffers, reading the data block directly into the Oracle SGA. Using direct I/O also allows you to create multiple block-size tablespaces to improve I/O performance.

Methods for configuring the OS vary depending on the operating system and file system in use.
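
As a hedged illustration, the starting values from Table 31-1 can be applied with ALTER SYSTEM when an spfile is in use. The statements below cover only a subset of the parameters, use the suggested starting values verbatim, and must be revisited after monitoring; static parameters take effect only after the next instance restart.

-- Illustrative only: apply selected starting values from Table 31-1 (spfile in use).
ALTER SYSTEM SET memory_target          = 2500M    SCOPE=SPFILE;
ALTER SYSTEM SET memory_max_target      = 3000M    SCOPE=SPFILE;
ALTER SYSTEM SET processes              = 1500     SCOPE=SPFILE;
ALTER SYSTEM SET open_cursors           = 1000     SCOPE=BOTH;
ALTER SYSTEM SET session_cached_cursors = 200      SCOPE=SPFILE;
ALTER SYSTEM SET job_queue_processes    = 10       SCOPE=BOTH;
ALTER SYSTEM SET filesystemio_options   = 'SETALL' SCOPE=SPFILE;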


31.2.4 Tuning Redo Logs Location and Sizing

Managing the database I/O load balancing is a non-trivial task. However, tuning the redo log options can provide performance improvement for applications running in an Oracle Fusion Middleware environment, and in some cases, you can significantly improve I/O throughput by moving the redo logs to a separate disk.

The size of the redo log files can also influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance by reducing checkpoint activity. It is not possible to provide a specific size recommendation for redo log files, but redo log files in the range of 2 GB each, with at least 3 redo log groups, are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes. Set the initialization parameter LOG_CHECKPOINTS_TO_ALERT = TRUE to have checkpoint times written to the alert file. The complete set of required redo log files can be created during database creation. After they are created, the size of a redo log file cannot be changed. New, larger files can be added later, however, and the original (smaller) ones can be dropped.
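
The following is a minimal sketch of adding larger redo log groups and enabling checkpoint logging; the group numbers, file paths, and sizes are hypothetical and must be adapted to your storage layout.

-- Illustrative only: add larger online redo log groups on a separate disk.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/redo04a.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/redo05a.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u02/oradata/redo06a.log') SIZE 2G;

-- Record checkpoint times in the alert log to verify log switch frequency.
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=BOTH;

-- After switching onto the new groups, the original smaller groups can be dropped, for example:
-- ALTER DATABASE DROP LOGFILE GROUP 1;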

For more information, see Oracle Fusion Middleware Performance and Tuning Guide.

31.2.5 Automatic Segment-Space Management (ASSM)

For permanent tablespaces, consider using automatic segment-space management. Such tablespaces, often referred to as bitmap tablespaces, are locally managed tablespaces with bitmap segment space management.

For backward compatibility, the default local tablespace segment-space management mode is MANUAL.
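
A minimal sketch of creating such a tablespace follows; the tablespace name, datafile path, and sizes are hypothetical.

-- Illustrative only: locally managed tablespace with automatic segment-space management.
CREATE TABLESPACE aia_data
  DATAFILE '/u02/oradata/aia_data01.dbf' SIZE 2G AUTOEXTEND ON NEXT 256M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;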

For more information, see "Space Management in Data Blocks" in Oracle Database Concepts and "Specifying Segment Space Management in Locally Managed Tablespaces" in Oracle Database Administrator's Guide.

31.2.6 Tuning Cross Reference Data Table (XREF_DATA)

In situations where the cross-reference data table is likely to accumulate several hundred million rows over time, you should seriously consider taking advantage of the database partitioning feature to distribute the XREF_DATA data. By limiting the amount of data to be examined or operated on, and by providing data distribution for parallel execution, partitioning provides several performance benefits. Partition pruning is the simplest and also the most substantial means to improve performance using partitioning. Partition pruning can often improve query performance by several orders of magnitude. For example, suppose the XREF_DATA table, which contains the cross references for all logical cross-reference tables, is partitioned by cross-reference table name. A query requesting cross-reference information pertaining to an object instance belonging to a particular cross-reference table would only access a single partition of the XREF_DATA table.

A list partitioning strategy should be employed on XREF_TABLE_NAME. You might want to create named partitions only for cross-reference tables that have a large volume. Creating a separate partition for each of these fast-growing entities helps distribute the data better and reduces the amount of data accessed. One partition can hold one or more cross-reference tables.

Here is the DDL snippet that shows how partitions can be created. The tablespace names listed are just for reference. Consult your DBAs to identify the right tablespaces.

PARTITION BY LIST (XREF_TABLE_NAME) (       
PARTITION INSTALLED_PRODUCT_ENTITY VALUES ('INSTALLEDPRODUCT_ID') TABLESPACE TS01, 
PARTITION ORDER_ENTITY VALUES ('SALESORDER_LINEID') TABLESPACE TS02, 
PARTITION OTHER_DEFAULT VALUES (DEFAULT) TABLESPACE TS03
);

Deciding to partition an existing XREF_DATA table in a production environment calls for planned execution. Another table having the same XREF_DATA structure (say XREF_DATA_NEW) must be created with the partitions in place. Data must then be copied into the new table from the old table. Completion of the data copy must be followed by dropping the existing table and renaming the newly created table to XREF_DATA. Before performing these tasks, ensure that adequate space is available in the redo logs and in the tablespaces where the various partitions are created.
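
The outline below is a simplified sketch of this copy-and-rename approach, assuming a maintenance window during which XREF_DATA is not being written to. The partition names and tablespaces reuse the earlier snippet for illustration; indexes, constraints, and grants must also be recreated, and the old table should be dropped only after verification.

-- Sketch only: create a partitioned copy of XREF_DATA, copy the data, then swap names.
CREATE TABLE XREF_DATA_NEW
  PARTITION BY LIST (XREF_TABLE_NAME) (
    PARTITION INSTALLED_PRODUCT_ENTITY VALUES ('INSTALLEDPRODUCT_ID') TABLESPACE TS01,
    PARTITION ORDER_ENTITY             VALUES ('SALESORDER_LINEID')   TABLESPACE TS02,
    PARTITION OTHER_DEFAULT            VALUES (DEFAULT)               TABLESPACE TS03
  )
  AS SELECT * FROM XREF_DATA;

-- Recreate the indexes, constraints, and grants of XREF_DATA on XREF_DATA_NEW here.

RENAME XREF_DATA TO XREF_DATA_OLD;
RENAME XREF_DATA_NEW TO XREF_DATA;
-- Drop XREF_DATA_OLD only after the new table has been verified.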

31.2.7 Recommendations for managing high-volume BPEL Tables

When the volume of data in the Oracle BPEL Process Manager dehydration store grows very large, maintaining the database can become difficult. The number of BPEL instances generated on a daily basis and the number of days of retention can significantly impact performance and maintenance. For example, generating more than 500 thousand BPEL instances in a day with 5 to 7 days of retention can result in tables becoming very large very quickly, which becomes a concern to administrators in terms of performance and maintenance. To address this maintenance challenge, Oracle BPEL Process Manager 10.1.3.5 has been instrumented with partition keys that allow database administrators (DBAs) to take advantage of the Oracle RDBMS partitioning features and capabilities. With the new instrumentation of the BPEL engine, the schema tables can be partitioned using a composite interval-hash scheme. Refer to the Oracle white paper Oracle BPEL Process Manager 10g Database Schema Partitioning (133743) for more information on partitioning strategies.

Partitioning of BPEL tables needs due diligence. The task of partitioning the Oracle SOA Suite tables must be performed by an experienced DBA. Careful analysis and thought have to be put in before implementing the partitioning strategy. Hence, ensure that sufficient time is given for implementing the task. It is also highly recommended that customers analyze whether they will need to employ a partitioning strategy in their environment within a year or two, based on the projected growth rate; if so, it is better to implement it before going to production.

31.2.8 Recommendations for Queue Tables

Reducing contention by spreading queues across tablespaces

Great planning must go into the creation of queue tables. Storage parameters for the queue tables must be specified when creating a queue table, using the storage_clause parameter.

The tablespace of the queue table should have sufficient space to accommodate data from all the objects associated with the queue table. With retention specified, the history table and the queue table can grow to be quite big.

The database administrator must work with business analysts to understand the behavior of the critical transactions. Understanding the behavior plays a big role in knowing which queues will be accessed concurrently and where the hotspots will be. Administrators must ensure that not all the queue tables are created in a single tablespace. Efforts should be taken to spread them across multiple tablespaces to mitigate the contention. Refer to the Oracle Database SQL manual to get a better understanding of the storage clauses that must be used to accomplish this.
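
A hedged sketch of creating a queue table in a dedicated tablespace through the storage_clause parameter follows; the queue table name, payload type, and tablespace name are hypothetical placeholders.

BEGIN
  -- Sketch only: place the queue table in its own tablespace via storage_clause.
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'AIA_ORDER_QTAB',                -- hypothetical queue table
    queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE',
    storage_clause     => 'TABLESPACE AIA_QUEUE_TS01',     -- hypothetical dedicated tablespace
    multiple_consumers => TRUE,
    compatible         => '10.0');
END;
/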

Limit usual access to a queue from one instance only in the Oracle RAC database

Oracle Real Application Clusters (Oracle RAC) can be used to ensure highly available access to queue data. The entry and exit points of a queue, commonly called its tail and head respectively, can be extreme hot spots. Because Oracle RAC may not scale well in the presence of hot spots, limit usual access to a queue to one instance only. If an instance failure occurs, then messages managed by the failed instance can be processed immediately by one of the surviving instances.

You can associate Oracle RAC instance affinities with 8.1-compatible queue tables. You can use ALTER_QUEUE_TABLE or CREATE_QUEUE_TABLE on the queue table and set primary_instance to the appropriate instance_id.

This optional parameter specifies the primary owner of the queue table. Queue monitor scheduling and propagation for the queues in the queue table are done in this instance. The default value 0 means queue monitor scheduling and propagation is done in any available instance.

Buffered messaging operations in a Real Application Clusters environment will be fastest on the OWNER_INSTANCE of the queue.

Queue tables have an associated primary_instance property, and its default value is 0. The DBA can use the DBMS_AQADM CREATE_QUEUE_TABLE or ALTER_QUEUE_TABLE procedure to set primary_instance to the instance_id of a database instance within the Oracle RAC. With this, enqueueing and dequeueing of messages into and from the queues present in that table happen only in that primary database instance. Failure of that instance leads to processing of messages by one of the surviving database instances within the Oracle RAC.

Here is the signature of the PL/SQL stored procedure used to create a queue table.

DBMS_AQADM.CREATE_QUEUE_TABLE(
   queue_table          IN      VARCHAR2,
   queue_payload_type   IN      VARCHAR2,
   storage_clause       IN      VARCHAR2        DEFAULT NULL,
   sort_list            IN      VARCHAR2        DEFAULT NULL,
   multiple_consumers   IN      BOOLEAN         DEFAULT FALSE,
   message_grouping     IN      BINARY_INTEGER  DEFAULT NONE,
   comment              IN      VARCHAR2        DEFAULT NULL,
   primary_instance     IN      BINARY_INTEGER  DEFAULT 0,
   secondary_instance   IN      BINARY_INTEGER  DEFAULT 0,
   compatible           IN      VARCHAR2        DEFAULT NULL,
   secure               IN      BOOLEAN         DEFAULT FALSE);

primary_instance: This optional parameter specifies the primary owner of the queue table. Queue monitor scheduling and propagation for the queues in the queue table are done in this instance. The default value 0 means queue monitor scheduling and propagation is done in any available instance.

You can specify and modify this parameter only if compatible is 8.1 or higher.

secondary_instance: This optional parameter specifies the owner of the queue table if the primary instance is not available. The default value 0 means that the queue table fails over to any available instance.

You can specify and modify this parameter only if primary_instance is also specified and compatible is 8.1 or higher.

The following is the signature of the PL/SQL stored procedure used to alter the queue table.

DBMS_AQADM.ALTER_QUEUE_TABLE (
   queue_table          IN   VARCHAR2, 
   comment              IN   VARCHAR2       DEFAULT NULL,
   primary_instance     IN   BINARY_INTEGER DEFAULT NULL, 
   secondary_instance   IN   BINARY_INTEGER DEFAULT NULL);

The above procedure alters the existing properties of a queue table.
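
For example, a hedged sketch of pinning an existing queue table to one Oracle RAC instance (the schema, queue table name, and instance numbers are hypothetical):

BEGIN
  -- Sketch only: make RAC instance 1 the primary owner of this queue table,
  -- with instance 2 taking over if instance 1 becomes unavailable.
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'JMSUSER.AIA_ORDER_QTAB',   -- hypothetical queue table
    primary_instance   => 1,
    secondary_instance => 2);
END;
/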

Table 31-2 Parameters in Queue Table

Parameter | Description

queue_table

This required parameter specifies the queue table name.

comment

This optional parameter is a user-specified description of the queue table. This user comment is added to the queue catalog.

primary_instance

This optional parameter specifies the primary owner of the queue table. Queue monitor scheduling and propagation for the queues in the queue table are done in this instance.

You can specify and modify this parameter only if compatible is 8.1 or higher.

secondary_instance

This optional parameter specifies the owner of the queue table if the primary instance is not available.

You can specify and modify this parameter only if primary_instance is also specified and compatible is 8.1 or higher.


Note:

In general, DDL statements are not supported on queue tables and may even render them inoperable. For example, issuing an ALTER TABLE ... SHRINK statement against a queue table results in an internal error, and all subsequent attempts to use the queue table also result in errors. Oracle recommends that you not use DDL statements on queue tables.

Generating table statistics

A process must be in place to gather statistics for queue tables. The frequency must be determined based on the data traffic hitting these tables.

Ensure that statistics are being gathered so that the optimal access paths for retrieving messages are being chosen. By default, queue tables are locked out from automatic gathering of statistics. The recommended use is to gather statistics with a representative queue message load and lock them.

The recommendation is to collect representative statistics on the queue table and lock them so that the optimizer uses the index for AQ queries. By default, AQ does not allow gathering of statistics on the queue table because, irrespective of the queue table load, it always wants the optimizer to select the index. Use the dbms_stats package to unlock the statistics, collect them, and lock them again for the queue table.
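
A minimal sketch of this unlock, gather, and re-lock cycle follows; the schema and queue table names are hypothetical placeholders.

BEGIN
  -- Sketch only: gather representative statistics on a queue table, then lock them.
  DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'JMSUSER', tabname => 'AIA_ORDER_QTAB');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'JMSUSER', tabname => 'AIA_ORDER_QTAB',
                                cascade => TRUE);
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'JMSUSER', tabname => 'AIA_ORDER_QTAB');
END;
/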

For more information, see Oracle Database Performance Tuning Guide.

Establish a process for scheduling queue maintenance tasks

The queue table indexes must be coalesced periodically. In 10.2, with automatic segment-space management (ASSM), an online shrink operation may be used for the same purpose. This reduces queue monitor CPU consumption and ensures optimal enqueue and dequeue performance. Refer to the appropriate documentation for information on the following DDL commands.

Alter table … coalesce
Alter table … shrink space

See My Oracle Support Notes 271855.1, 284692.1, and 421474.1 for additional details.

The above tasks must be run manually before the test runs until the process is established.
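
As an illustration only, the maintenance commands above expand to statements such as the following. The object name is a hypothetical placeholder for one of the AQ index-organized tables associated with a queue table; verify the exact objects for your release against the My Oracle Support notes before running anything.

-- Sketch only: coalesce an AQ index-organized table associated with a queue table.
ALTER TABLE jmsuser.aq$_aia_order_qtab_i COALESCE;

-- With ASSM (10.2 onward), an online shrink can be used for the same purpose.
ALTER TABLE jmsuser.aq$_aia_order_qtab_i SHRINK SPACE;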

31.2.9 Recommendations for AIA / BPEL / ESB / AQ tables

Generating database table statistics

A process must be in place to generate the database table statistics at regular intervals. Ensure that statistics are being gathered for the tables that are heavily accessed so that the optimal query plans and access paths are employed by the cost-based optimizer. Statistics must be generated for the queue tables too.

Statistics must be gathered for all of the tables residing in the AIA, orabpel, oraesb, jmsuser, and xref schemas. The frequency of statistics gathering can vary anywhere from every 6 hours to once a week, depending on the volume of inserts, updates, and deletes. For example, creation of more than 1 million BPEL instances in a day calls for more frequent statistics gathering.
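
A hedged sketch of gathering statistics for one of these schemas follows; repeat for each schema, and handle locked queue-table statistics separately as described below.

BEGIN
  -- Sketch only: gather statistics for one schema; the schema name is one example.
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'ORABPEL',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/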

For queue tables, you have to unlock the statistics before collecting new statistics, and this must be followed by locking the statistics again. Ensure that the statistics are gathered for a data population that is representative of a typical day's activity.

No AIA / BPEL / ESB / AQ tables in SYSTEM table space

Ensure that the queue tables and the AIA and BPEL related tables are not in the SYSTEM tablespace. The SYSTEM tablespace is used by the Oracle database to store data dictionary information. Oracle strongly recommends keeping user data separate from data dictionary information. This gives you more flexibility in various database administration operations and reduces contention among dictionary objects and schema objects for the same datafiles. AWR and ADDM reports should shed light on the objects that reside in the SYSTEM tablespace. You can use multiple tablespaces to:

  • Control disk space allocation for database data

  • Assign specific space quotas for database users

  • Control availability of data by taking individual tablespaces online or offline

  • Perform partial database backup or recovery operations

Proper space management schemes have to be put in place before moving these artifacts to non-SYSTEM tablespaces.
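
As a quick check, the following query lists application segments that have landed in the SYSTEM tablespace; the owner list is a sketch and should be adjusted to the schema names used by your installation.

-- Sketch only: find application segments residing in the SYSTEM tablespace.
SELECT owner, segment_name, segment_type, bytes/1024/1024 AS size_mb
  FROM dba_segments
 WHERE tablespace_name = 'SYSTEM'
   AND owner IN ('AIA', 'ORABPEL', 'ORAESB', 'JMSUSER');   -- hypothetical schema names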

Tables / Indices Storage Management

AIA delivers the SQL scripts that contain the DDL commands for creating the tables. The SQL scripts specify the structural definitions for the relational tables and the index definitions. However, the DDL scripts do not include the storage clause (segment and physical attributes), because those attributes are very closely related to space management and are therefore very customer specific.

The tablespace management design, based on the characteristics of the business transactions, expected growth rate, backup, maintenance and purging strategies, and distribution of data, leads to identification of the attributes listed below. The values help you decide whether the default values must be overridden.

  • Number of data tablespaces (single vs. many, dictionary managed vs. locally managed, read-only tablespace, transactional vs. setup, data vs. index)

  • Tablespace in which an object must reside

  • Size of the first extent (how much must be allocated for the first extent when Oracle creates this object)

  • Size of the second extent

  • Rate at which the third and subsequent extents grow over the preceding extent

  • Minimum and maximum number of extents for an object

  • Percentage of space in each data object reserved for future updates

  • Minimum percentage of used space that Oracle maintains for each data block

Other recommendations

Do not set the db_file_multiblock_read_count parameter (remove it from your spfile or init.ora file, as sketched after this list) and let the database determine the number of blocks read in multi-block I/O operations.

  • In Oracle 10g R2, Oracle defaults db_file_multiblock_read_count to the maximum number of blocks that can be effectively read. Although this value is also platform-dependent, Oracle documentation states that it is 1 MB for most platforms. This 1 MB size allows much more data to be read in a single operation in 10g R2 than in previous releases.

  • The DB_FILE_MULTIBLOCK_READ_COUNT parameter controls the number of blocks pre-fetched into the buffer cache during scan operations, such as full table scan and index fast full scan. Oracle Database 10g Release 2 automatically selects the appropriate value for this parameter depending on the operating system optimal I/O size and the size of the buffer cache.
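
A minimal sketch of removing the explicit setting when an spfile is in use (the database-determined default takes effect after the next restart):

-- Sketch only: remove the explicit setting so the database chooses the value itself.
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';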

31.2.10 Recommendations for SecureFiles migration (applicable only to 11g R1 / R2)

In 11g R1, Oracle introduced a new LOB storage mechanism called Oracle SecureFiles. It enables file-system-like performance for LOBs. In general, Oracle customers have seen performance improvements of 3x to 6x, with some seeing even larger gains, by migrating from BasicFiles to SecureFiles. To get these gains, data stored in the older format must be migrated to the SecureFiles storage format.

There are several ways to migrate applications to SecureFiles. Oracle recommends Online Redefinition as the preferred method of migration as it allows the database and the table being migrated to be online during the migration process. Explaining the process is beyond the scope of this document. Refer to 'SecureFiles Migration – An Oracle White Paper – August 2008' to get complete information.

Migration of LOB structures to SecureFiles needs due diligence. The task of migrating to SecureFiles from BasicFiles must be performed by an experienced DBA. Careful analysis and thought have to be put in before implementing the SecureFiles migration. Hence, ensure that sufficient time is given for implementing the task. It is also highly recommended that customers analyze whether adopting SecureFiles is the right strategy for their implementation before implementing it.
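
A heavily simplified sketch of the online redefinition approach follows. All schema, table, and column names are hypothetical placeholders, and the real procedure includes copying dependents, synchronizing, and validation as described in the referenced white paper.

-- Sketch only: interim table that stores the LOB column as a SecureFile.
CREATE TABLE app_owner.my_lob_table_sf (
  id      NUMBER,
  payload CLOB
) LOB (payload) STORE AS SECUREFILE;

BEGIN
  -- Verify the source table can be redefined using its primary key.
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP_OWNER', 'MY_LOB_TABLE',
                                    DBMS_REDEFINITION.CONS_USE_PK);
  DBMS_REDEFINITION.START_REDEF_TABLE('APP_OWNER', 'MY_LOB_TABLE', 'MY_LOB_TABLE_SF');
  -- Copy dependent objects (indexes, constraints, grants) and synchronize here,
  -- for example with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS and SYNC_INTERIM_TABLE.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP_OWNER', 'MY_LOB_TABLE', 'MY_LOB_TABLE_SF');
END;
/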

For more information, see "Generating Automatic Workload Repository Reports" in Oracle Database Performance Tuning Guide and "Monitoring Performance" in Oracle Database Administrator's Guide.

31.2.11 Changing the Driver Name to Support XA Drivers

If your data sources require support for XA drivers, you must change the driver name on Oracle WebLogic Server. This is particularly true for environments in which BPEL processes assume XA is present when calling database adapters and JMS adapters.

Change the driver name using one of the following methods:

31.2.11.1 Edit in Oracle WebLogic Server Administration Console

To edit in Oracle WebLogic Server Administration Console

  1. Log in to Oracle WebLogic Server Administration Console.

  2. In the left pane, select Domain Structure.

  3. Select Services > JDBC > Data Source > SOADataSource > Connection Pool.

  4. For the Driver Class Name, change the value to oracle.jdbc.xa.client.OracleXADataSource.

    This provides support for the XA driver.

  5. Restart the server.

31.2.11.2 Edit the SOADataSource-jdbc.xml file

To edit the SOADataSource-jdbc.xml file

  1. Open the soaDataSource-jdbc.xml file on Oracle WebLogic Server.

  2. Change the SOADataSource driver name from

    oracle.jdbc.OracleDriver to oracle.jdbc.xa.client.OracleXADataSource as shown in Example 31-1.

    Example 31-1 soaDataSource-jdbc.xml file

    <?xml version="1.0" encoding="UTF-8"?>
    <jdbc-data-source ...>
      <name>SOADataSource</name>
      <jdbc-driver-params>
        <url>jdbc:oracle:thin:@adc60086fems.xy.example.com:1537:co0yd570</url>
        <driver-name>oracle.jdbc.xa.client.OracleXADataSource</driver-name>
        <properties>
          <property>
            <name>user</name>
            <value>fusion_soainfra</value>
          </property>
        </properties>
        ...
      </jdbc-driver-params>
      ...
    </jdbc-data-source>
    

31.2.12 Configuring Database Connections and Datasource Statement Caching

By properly configuring the connection pool attributes in JDBC data sources in your WebLogic Server domain, you can improve application and system performance. The following sections include information about tuning options for the connection pool in a JDBC data source:

31.2.12.1 JDBC Datasource Connection Pool Settings

  • Statement Cache Type

    The Statement Cache Type (or algorithm) determines which prepared and callable statements to store in the cache for each connection in a data source. AIA recommends setting this property to 'LRU'.

  • Statement Cache Size

    The Statement Cache Size attribute determines the total number of prepared and callable statements to cache for each connection in each instance of the data source. By caching statements, you can increase your system performance. However, you must consider how your DBMS handles open prepared and callable statements. In many cases, the DBMS maintains a cursor for each open statement. This applies to prepared and callable statements in the statement cache.

    AIA recommends keeping the default value of 10. If you cache too many statements, you may exceed the limit of open cursors on your database server. For example, if you have a data source with 50 connections deployed on 2 servers and you set the Statement Cache Size to 10 (the default), you may open 1000 (50 x 2 x 10) cursors on your database server for the cached statements.

  • Initial Capacity and Maximum Capacity

    For applications that use a database, performance can improve when the connection pool associated with a data source limits the number of connections. You can use the Maximum Capacity to limit the database requests from Oracle Application Server so that incoming requests do not saturate the database, or to limit the database requests so that the database access does not overload the Oracle Application Server-tier resource.

    The connection pool MaxCapacity attribute specifies the maximum number of connections that a connection pool allows. By default, the value of MaxCapacity is set to 15. For best performance, you should specify a value for MaxCapacity that matches the number appropriate to your database performance characteristics.

    Limiting the total number of open database connections to a number your database can handle is an important tuning consideration. You should check to ensure that your database is configured to allow at least as large a number of open connections as the total of the values specified for all the data sources MaxCapacity option, as specified in all the applications that access the database.

    AIA recommends setting Initial and Maximum Capacity to 50 to start with, and making the necessary adjustments based on the database performance.

31.2.12.2 Getting the Right Mix of Performance and Fault Tolerance

These four options can be used to get the right mix of performance and fault tolerance for your system.

  • Test Frequency

    Enable periodic background connection testing by entering the number of seconds between periodic tests. If the request is made within the time specified for Seconds to Trust an Idle Pool Connection, since the connection was tested or successfully used by an application, WebLogic Server skips the connection test.

  • Test Reserved Connections

    If Test Reserved Connections is enabled on your data source, when an application requests a database connection, WebLogic Server tests the database connection before giving it to the application. If the request is made within the time specified for Seconds to Trust an Idle Pool Connection, since the connection was tested or successfully used by an application, WebLogic Server skips the connection test before delivering it to an application.

  • Seconds to Trust an Idle Pool Connection

    Seconds to Trust an Idle Pool Connection is a tuning feature that can improve application performance by minimizing the delay caused by database connection testing, especially during heavy traffic. However, it can reduce the effectiveness of connection testing, especially if the value is set too high. The appropriate value depends on your environment and the likelihood that a connection becomes defunct.

  • Remove Infected Connections Enabled

    Specifies whether a connection will be removed from the connection pool after the application uses the underlying vendor connection object. If you disable removing infected connections, you must ensure that the database connection is suitable for reuse by other applications.

    When set to true (the default), the physical connection is not returned to the connection pool after the application closes the logical connection. Instead, the physical connection is closed and re-created.

    When set to false, when the application closes the logical connection, the physical connection is returned to the connection pool and can be reused by the application or by another application.

For more information, see:

  • "Generating Automatic Workload Repository Reports" in Oracle Database Performance Tuning Guide.

  • "Monitoring Performance" in Oracle Database Administrator's Guide.

  • "JDBC Data Source: Configuration: Connection Pool" in Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help.

  • "Tuning Data Source Connection Pool Options" in Oracle Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server.

31.2.13 Oracle Metadata Service (MDS) Performance Tuning

For optimal performance of MDS APIs, the database schema for the MDS repository must be monitored and tuned by the database administrator. This section lists some recommended actions to tune the database repository:

  • Collect schema statistics

  • Increase redo log size

  • Reclaim disk space

  • Monitor the database performance

31.2.13.1 Using Database Polling Interval for Change Detection

MDS employs a polling thread that queries the database to determine whether the data in the MDS in-memory cache is out of sync with the data in the database. This can happen when metadata is updated in another JVM. If it is out of sync, MDS clears any out-of-date cached data so that subsequent operations see the latest versions of the metadata.

MDS invalidates the document cache and the MDS cache, so subsequent operations have the latest version of the metadata.

The polling interval can be configured or changed post-deployment through MBeans. The element maps to the ExternalChangeDetection and ExternalChangeDetectionInterval attributes of the MDSAppConfig MBean.

Before packaging the Enterprise ARchive (EAR) file, you can configure the polling interval by adding the entry shown in Example 31-2 to adf-config.xml:

Example 31-2 Configuring the Polling Interval in the adf-config.xml

<mds-config>
<persistence-config>
<external-change-detection enabled="true" polling-interval-secs="T"/>
</persistence-config>
</mds-config>

In Example 31-2, 'T' specifies the polling interval in seconds. The minimum value is 1. Lower values cause metadata updates made in other JVMs to be seen more quickly. It is important to note, however, that a lower value can also create increased middle-tier and database CPU consumption because of the more frequent queries. By default, polling is enabled ('true') and the default value of 30 seconds should be suitable for most purposes. Consider increasing the value to a higher number if updates to MDS are few and far between.

For more information, see "Changing MDS Configuration Attributes for Deployed Applications" in Oracle Fusion Middleware Administrator's Guide.

31.2.13.2 Tuning Cache Configuration

MDS uses a cache to store metadata objects and related objects (such as XML content) in memory. MDS Cache is a shared cache that is accessible to all users of the application (on the same JVM). If a metadata object is requested repeatedly, with the same customizations, that object may be retrieved more quickly from the cache (a "warm" read). If the metadata object is not found in the cache (a "cold" read), then MDS may cache that object to facilitate subsequent read operations depending on the cache configuration, the type of metadata object and the frequency of access.

The cache can be configured or changed post-deployment through MBeans. This element maps to the MaximumCacheSize attribute of the MDSAppConfig MBean.

For more information, see "Changing MDS Configuration Attributes for Deployed Applications" in Oracle Fusion Middleware Administrator's Guide.

Having a correctly sized cache can significantly improve throughput for repeated reading of metadata objects. The optimal cache size depends on the number of metadata objects used and the individual sizes of these objects. Before packaging the Enterprise ARchive (EAR) file, you can manually update the cache-config in adf-config.xml, by adding the entry shown in Example 31-3.

Example 31-3 Manually Updating the cache-config in adf-config.xml

<mds-config>
<cache-config>
<max-size-kb>200000</max-size-kb>
</cache-config>
</mds-config>

For more information about tuning the MDS and the database, see "Optimizing Instance Performance" in Oracle Database Performance Tuning Guide and "Oracle Metadata Service (MDS) Performance Tuning" in Oracle Fusion Middleware Performance and Tuning Guide.

31.3 Configuring the Common SOA Infrastructure

This section discusses properties that impact the entire SOA infrastructure.

31.3.1 Configuring SOA Infrastructure Properties

These settings apply to all SOA Composite applications running in the SOA infrastructure. The properties set at this level impact all deployed SOA composite applications, except those composites for which you explicitly set different audit level values at the composite application or service engine levels. AIA recommends configuring at least the following properties.

auditLevel

This property sets the audit trail logging level. This property controls the amount of audit events that are logged by a process.

AIA recommends the following value to be set to this property:

Production: The BPEL service engine does not capture the payload. The payload details are not available in the flow audit trails. Payload details for other BPEL activities are collected, except for assign activities. This level is optimal for most normal operations and testing.

captureCompositeInstanceState

Enabling this option may result in additional runtime overhead during instance processing. This option provides for separate tracking of running instances. All instances are captured as either running or not running. This information displays later in the State column of the composite instances tables for the SOA Infrastructure and SOA composite application, where it shows the counts of running instances versus total instances. You can also limit the view to running instances only.

Valid states are Running, Completed, Faulted, Recovery Needed, Stale, Terminated, Suspended, and State Not Available.

The Running and Completed states are captured only if this check box is selected. Otherwise, the state is set to Unknown. The conditional capturing of these states is done mainly to reduce the performance overhead on SOA Infrastructure runtime.

Note:

If this property is disabled and you create an instance of a SOA composite application, an instance is created, but the instance does not display as running, faulted, stale, suspended, terminated, completed, or requiring recovery in the table of the Dashboard page of the composite application. This is because capturing the composite state of instances is a performance-intensive process.

AIA recommends disabling this property.

PayloadValidation

This property validates incoming and outgoing XML documents. If set to True, the SOA Infrastructure applies schema validation for incoming and outgoing XML documents.

This property is applicable to both durable and transient processes. The default value is False. This value can be overridden to turn on schema validation based on business needs. Turning on schema validation both at SOA Infrastructure and at the BPEL Process Service engine causes the schema validation to be done twice.

31.3.2 Disabling HTTP Logging

To disable HTTP logging from WebLogic Console for Admin, SOA and BAM servers

  1. Log in to the WebLogic Console.

  2. Environment -> Servers -> Admin Server -> Logging -> HTTP -> Uncheck the HTTP access log file enabled option.

To change log level to error (From Oracle Enterprise Manager):

  1. Log in to Oracle Enterprise Manager (EM).

  2. SOA Infrastructure -> Logs -> Log Configuration

31.4 BPEL - General Performance Recommendations

This section includes the following topics:

31.4.1 Configuring BPEL Process Service Engine Properties

The basic BPEL Process Manager performance tuning properties can be configured using WLST or Oracle Enterprise Manager. These properties must be configured:

auditLevel

This property sets the audit trail logging level and controls the number of audit events that are logged by a process.

AIA recommends setting this property as follows:

Production: The BPEL service engine does not capture the payload. The payload details are not available in the flow audit trails. Payload details for other BPEL activities are collected, except for assign activities. This level is optimal for most normal operations and testing.

instanceKeyBlockSize

This property controls the instance ID range size. Oracle BPEL Server creates instance keys (a range of process instance IDs) in batches using the value specified. After creating this range of in-memory IDs, the next range is updated and saved in the ci_id_range table. For example, if instanceKeyBlockSize is set to 100, Oracle BPEL Server creates a range of 100 instance keys in memory, which are later inserted into the cube_instance table as the cikey. To maintain optimal performance, ensure that the block size is larger than the number of updates to the ci_id_range table.

The default value is 10000. AIA recommends using the default value as is.

auditDetailThreshold

This property sets the maximum size (in bytes) of an audit trail details string before it is stored separately from the audit trail. If an audit trail details string is larger than the threshold setting, it is not immediately loaded when the audit trail is initially retrieved; a link is displayed with the size of the details string. Strings larger than the threshold setting are stored in the audit_details table instead of the audit_trail table. The details string typically contains the contents of a BPEL variable. In cases where the variable is very large, performance can be severely impacted by logging it to the audit trail.

The default value is 50000 (50 kilobytes). AIA recommends retaining the default value.

LargeDocumentThreshold

This property sets the large XML document persistence threshold. This is the maximum size (in bytes) of a BPEL variable before it is stored in a separate location from the rest of the instance scope data. This property is applicable to both durable and transient processes. Large XML documents impact the performance of the entire Oracle BPEL Server if they are constantly read in and written out whenever processing on an instance must be performed.

The default value is 10000 (10 kilobytes). AIA recommends changing the value to 50000 (50 kilobytes).

PayloadValidation

This property validates incoming and outgoing XML documents. If set to True, the Oracle BPEL Process Manager applies schema validation for incoming and outgoing XML documents.

This property is applicable to both durable and transient processes. The default value is False. This value can be overridden to turn on schema validation based on business needs. Turning on schema validation both at SOA Infrastructure and at the BPEL Process Service engine causes the schema validation to be done twice.

DispatcherInvokeThreads (dspInvokeThreads in 10g)

This property specifies the total number of threads allocated to process invocation dispatcher messages. Invocation dispatcher messages are generated for each payload received and are meant to instantiate a new instance. If the majority of requests processed by the engine are instance invocations (as opposed to instance callbacks), greater performance may be achieved by increasing the number of invocation threads. Higher thread counts may cause greater CPU utilization due to higher context switching costs.

The default value is 20. AIA recommends this value as the starting point. This parameter can be used to throttle requests to avoid overloading the mid-tier, database tier, and participating application systems, and to tune for best performance.

DispatcherEngineThreads (dspEngineThreads in 10g)

This property specifies the total number of threads allocated to process engine dispatcher messages. Engine dispatcher messages are generated whenever an activity must be processed asynchronously. If the majority of processes deployed are durable with a large number of dehydration points (mid-process receive, onMessage, onAlarm, and wait activities), greater performance may be achieved by increasing the number of engine threads. Higher thread counts can cause greater CPU utilization due to higher context switching costs.

The default value is 30. Because most AIA flows do not have dehydration points, AIA recommends setting the value to 20. This parameter can be used to throttle requests to avoid overloading the mid-tier, database tier, and participating application systems, and to tune for best performance.

DispatcherSystemThreads (dspSystemThreads in 10g)

This property specifies the total number of threads allocated to process system dispatcher messages. System dispatcher messages are general clean-up tasks that are typically processed quickly by the server (for example, releasing stateful message beans back to the pool). Typically, only a small number of threads are required to handle the number of system dispatch messages generated during runtime.

The default value is 2. AIA recommends keeping the default value.

Disable BPEL Monitors and Sensors

Select this check box to disable all BPEL monitors and sensors defined for all BPEL components across all deployed SOA composite applications.

syncMaxWaittime

This property sets the maximum time the process result receiver waits for a result before returning. Results from asynchronous BPEL processes are retrieved synchronously by a receiver that waits for a result from Oracle BPEL Server. This property is applicable to transient processes.

The default value is 45 seconds. The appropriate value for syncMaxWaittime depends mainly on the scenario and the number of concurrent processes. Although the value can be increased to avoid transaction timeouts, be aware that the end-user experience degrades once the value is raised beyond a certain point.

StatsLastN

This property sets the size of the most-recently processed request list. After each request is finished, statistics for the request are kept in a list.

A value less than or equal to 0 disables statistics gathering. This property is applicable to both durable and transient processes.

For more information, see "Configuring BPEL Process Service Engine Properties" in Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

31.4.2 Configuring BPEL Properties Inside a Composite

This section lists configuration properties from selected sections of the deployment descriptor.

For each configuration property, a description is given along with the expected engine behavior when the property is changed. The properties set in this section affect only the BPEL component of the composite for which they are set; each BPEL process can be created as a component of a composite. These properties are modified through WLST.

inMemoryOptimization

This property indicates to Oracle BPEL Server that this process is a transient process and dehydration of the instance is not required. When set to True, Oracle BPEL Server keeps the instances of this process in memory only during execution. This property can only be set to True for transient processes or processes that do not contain any dehydration points such as mid-process receive, wait, onMessage and onAlarm activities. This property has the following values:

  • False (default): Instances are persisted completely and recorded in the dehydration store database.

  • True: Oracle BPEL Process Manager keeps instances in memory only.

AIA recommends setting the value to True for transient BPEL processes. BPEL processes that have request-response pattern with no dehydration points should have this property set to True. Setting this property to true and setting completionPersistPolicy to faulted ensures that successfully completed transient processes will not be persisted.

completionPersistPolicy

This property configures how much of instance data should be saved. It can only be set at the BPEL component level.

AIA recommends setting this property to off for transient services if no instances need to be saved, or to faulted if only the faulted instances should be persisted. Combined with inMemoryOptimization set to True, the faulted setting ensures that successfully completed transient instances are not persisted.
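
As an illustration, the following sketch shows how these two settings could be declared on a BPEL component in composite.xml using the bpel.config property prefix. The component name and values are illustrative only; verify the property names against the documentation for your release.

<component name="SampleTransientBPELProcess">
<implementation.bpel src="SampleTransientBPELProcess.bpel"/>
<!-- Illustrative: keep transient instances in memory only -->
<property name="bpel.config.inMemoryOptimization">true</property>
<!-- Illustrative: persist only faulted instances -->
<property name="bpel.config.completionPersistPolicy">faulted</property>
</component>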

PayloadValidation

This property is set at the partnerlink level. This property validates incoming and outgoing XML documents.

If set to True, Oracle BPEL Process Manager applies schema validation for incoming and outgoing XML documents. When set to True, the engine validates the XML message against the XML schema during <receive> and <invoke> activities for this partner link. If the XML message is invalid, a bpelx:invalidVariables runtime BPEL fault is thrown. This setting overrides the domain-level validateXML property.

The default value is False. This value can be overridden to turn on schema validation based on business needs.
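
As an illustration, in the 10g-style bpel.xml deployment descriptor this validation could be enabled for a single partner link as sketched below. The partner link name is illustrative, and the syntax should be verified against the deployment descriptor reference for your release.

<partnerLinkBindings>
<partnerLinkBinding name="SampleProviderService">
<!-- Illustrative: validate XML messages exchanged over this partner link -->
<property name="validateXML">true</property>
</partnerLinkBinding>
</partnerLinkBindings>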

31.4.3 How to Monitor the BPEL Service Engine

The request and thread statistics for all BPEL process service components running in the service engine must be monitored regularly to check whether the above properties must be tweaked. The statistics page for the BPEL engine provides valuable information. These details help determine how the BPEL engine is performing:

  • Details on Pending requests in the service engine

  • Active requests in the service engine

  • Thread statistics for the service engine

The statistics page also provides information about the count, minimum, average, and maximum processing times for each of the tasks. This information helps the developer and the administrator identify the hot spots.

For more information, see Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

31.5 Oracle Mediator: General Performance Recommendations

This section includes the following topics:

31.5.1 Configuring Mediator Service Engine Properties

You can configure the Mediator Service Engine by setting the parameters described in this section on the Mediator Service Engine Properties page. These properties must be configured:

auditLevel

This property sets the audit trail logging level and controls the number of audit events that are logged by a process.

AIA recommends setting this property as follows:

Production: All events are logged. All audit details, except the details of assign activities, are logged. Instance tracking information is collected, but payload details are not captured and these details are not available in the flow audit trails. This level is optimal for most typical operations and testing.

metricsLevel

Administrators can set the Mediator-specific metricsLevel flag to configure the Dynamic Monitoring Service (DMS) metrics level. DMS metrics are used to measure the performance of application components. The possible values of this flag are:

  • Enabled - Enables DMS metrics tracking

  • Disabled - Disables DMS metrics tracking

AIA recommends disabling DMS metrics tracking in production. Enable it only when DMS metrics need to be tracked.

DeferredWorkerThreadCount

This property specifies the number of deferred dispatchers for processing messages in parallel. For higher loads, consider increasing this parameter to provide more outbound threads for deferred processing, because each parallel routing rule is processed by one of the deferred worker threads.

The default value is 4 threads. AIA recommends increasing this value only when a significant number of Mediator services use parallel routing rules.

DeferredMaxRowsRetrieved

When the Mediator routing rule type is set to Parallel, DeferredMaxRowsRetrieved sets the maximum number of rows (the maximum number of messages for parallel routing rule processing) that are retrieved in one batch from the Mediator store table, which holds the messages for parallel routing rules.

Each message retrieved in a batch is processed by one worker thread at a time. The default value is 20 rows. AIA recommends altering this value only when a significant number of Mediator services use parallel routing rules.

DeferredLockerThreadSleep

For processing parallel routing rules, Oracle Mediator has a daemon locker thread that retrieves and locks messages from the Mediator store database. The thread polls the database to retrieve messages for parallel processing. When no messages are available, the locker thread sleeps for the amount of time specified in DeferredLockerThreadSleep, which prevents unnecessary round trips to the database.

The default value is 2 seconds. Consider increasing this value, for example to 120 seconds, for integrations that have no Mediator components with parallel routing rules.

No messages are available for parallel routing, and the locker thread therefore sleeps, in either of the following cases:

  • There are no Mediator components with parallel routing rules deployed.

  • Mediator components with parallel routing rules are deployed, but there are no continuous incoming messages for those components.

SyncMaxWaitTime

This property sets the maximum time a request-response operation takes before timing out.

Default value is 45 seconds.

31.5.2 How to Monitor the Mediator Service Engine

The efficiency level of the Mediator service engine can be assessed by monitoring the following:

  • Routing statistics

  • Request breakdown statistics

  • Instance statistics

The request breakdown statistics provide information about the count and the time taken for processing each of the following actions:

  • Invoke one-way

  • Enqueues

  • Transformation

  • Invoke request-response

  • Publish

  • Condition evaluation

  • Message validation

Keeping the number of time-consuming actions, such as enqueues, message validations, and transformations, to a minimum is critical for improving performance and scalability.

For more information, see Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

31.6 Tuning Oracle Adapters for Performance

Oracle technology adapters integrate Oracle Application Server and Oracle Fusion Middleware components, such as Oracle BPEL Process Manager (Oracle BPEL PM) and Oracle Mediator, with file systems, FTP servers, database queues (advanced queues, or AQ), Java Message Services (JMS), database tables, and message queues (MQ Series).

This section describes how to tune Oracle Adapters for optimal performance.

This section focuses only on the tuning aspect for technology adapters that are used by AIA to integrate with applications.

For more information about Oracle Adapters, see Oracle Fusion Middleware User's Guide for Technology Adapters.

31.6.1 How to Tune JMS Adapters

This section describes some properties that can be set for the Oracle SOA JMS Adapter to optimize performance.

For more information, see "Introduction to the Oracle JMS Adapter" in Oracle Fusion Middleware User's Guide for Technology Adapters.

If, during stress testing, reading messages off the queue takes too long, the number of JMS adapter threads shown in Example 31-4 can be increased to achieve the desired throughput. The following parameter controls this behavior:

To improve performance, the adapter.jms.receive.threads property can be tuned for an adapter service. The default value is 1, but multiple inbound threads can be used to improve performance. When specified, the value of adapter.jms.receive.threads is used to spawn multiple inbound poller threads.

Be very cautious about increasing the value of this property. Increasing the value increases the load on the Fusion Middleware server and puts additional stress on downstream systems such as application systems and databases. Consider increasing the value only when the CPU is underutilized.

Example 31-4 shows the parameter that must be set in the composite.xml property file to achieve the above purpose.

Example 31-4 Tuning JMS Adapters Using the composite.xml Property File

<activationAgents>
<activationAgent className="…" partnerLink="MsgQueuePL">
... <property name="adapter.jms.receive.threads">5</property>
</activationAgent>
</activationAgents>
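
For an 11g composite, the same thread count can instead be supplied as a binding property on the inbound <service> in composite.xml, mirroring the AQ pattern shown in Example 31-5. The following is a minimal sketch; the service name, WSDL interface, and JCA file name are illustrative.

<service name="DequeueJMS" ui:wsdlLocation="DequeueJMS.wsdl">
<interface.wsdl
interface="http://xmlns.oracle.com/pcbpel/adapter/jms/sample/dequeue/#wsdl.interface(Consume_Message_ptt)"/>
<binding.jca config="dequeue_jms.jca">
<!-- Illustrative: five inbound poller threads for this JMS endpoint -->
<property name="adapter.jms.receive.threads" type="xs:string"
many="false">5</property>
</binding.jca>
</service>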

31.6.2 How to Tune AQ Adapters

If, during stress testing, reading messages off the queue takes too long, the number of AQ adapter threads shown in Example 31-5 can be increased to achieve the desired throughput. The following parameter controls this behavior:

To improve performance, the adapter.aq.dequeue.threads property can be tuned for an adapter service. The default value is 1, but multiple inbound threads can be used to improve performance. When specified, the value of adapter.aq.dequeue.threads is used to spawn multiple inbound poller threads.

Be very cautious about increasing the value of this property. Increasing the value increases the load on the Fusion Middleware server and puts additional stress on downstream systems such as application systems and databases. Consider increasing the value only when the CPU is underutilized.

Example 31-5 shows the parameter that must be set in the composite.xml property file to achieve the above purpose.

Example 31-5 Tuning AQ Adapters Using the composite.xml Property File

<service name="dequeue" ui:wsdlLocation="dequeue.wsdl">
<interface.wsdl
interface="http://xmlns.oracle.com/pcbpel/adapter/aq/raw/raw/dequeue/#wsdl.interface(Dequeue_ptt)"/>
<binding.jca config="dequeue_aq.jca">
<property name="adapter.aq.dequeue.threads" type="xs:string"
many="false">5</property>
</binding.jca>
</service>
<service name="dequeue" ui:wsdlLocation="dequeue.wsdl">

minimumDelayBetweenMessages

There may be situations where a downstream or back-office application cannot keep pace with the SOA server, so more requests are sent to the application than it can process. This can lead to longer request processing times and, ultimately, transaction timeouts. In these situations, the following parameter can be used for throttling:

<property name="minimumDelayBetweenMessages">1000</property>

This value can be tuned further (increased or decreased) based on the actual time required to complete your transactions.

This property is available only for JMS-based endpoints within BPEL-based services.

31.6.3 How to Tune Database Adapters

Consider the following properties when tuning database adapters (a sample activation specification sketch follows this list):

  • MaxTransactionSize

    This property controls the number of records processed per transaction by each thread. This should be set to a reasonable value, such as 100.

  • NumberOfThreads

    This property controls how many threads the DB adapter uses to process records. The default is 1.

    Adjusting the number of processor threads (for example, setting numProcessorThread=4) may improve performance on some hardware, such as Core 2 Duo systems, but not on others, such as NetBurst-based systems.

  • UseBatchDestroy

    This property controls how the processed records are updated (for example, Deleted for DeletePollingStrategy, MarkedProcessed for LogicalDeleteStrategy). If set, only one update or delete is executed for all the rows that are part of that transaction. The number of rows in a transaction is controlled by the MaxTransactionSize option.

  • MaxTransactionSize

    • If set to a large value such as 1000, turning on the UseBatchDestroy option could have a negative impact on performance.

    • Setting a large MaxTransactionSize and a small MaxRaiseSize could have a negative impact on performance.

    • Maintaining a 10:1 ratio between MaxTransactionSize and MaxRaiseSize could give better performance.
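
The following is a hedged sketch of how these properties typically appear in the .jca file of an inbound Database Adapter polling service. The adapter-config name, connection factory location, port type, and values are illustrative; take the activation-spec class name and the overall structure from the .jca file generated by the adapter configuration wizard.

<adapter-config name="PollOrders" adapter="Database Adapter"
xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
<connection-factory location="eis/DB/SampleDataSource"/>
<endpoint-activation portType="PollOrders_ptt" operation="receive">
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
<!-- Illustrative values: 100 records per transaction, 4 poller threads, 10:1 raise ratio -->
<property name="MaxTransactionSize" value="100"/>
<property name="NumberOfThreads" value="4"/>
<property name="MaxRaiseSize" value="10"/>
<property name="UseBatchDestroy" value="false"/>
</activation-spec>
</endpoint-activation>
</adapter-config>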

31.6.4 Throttling Inbound Message Flows

In 11g, the minimumDelayBetweenMessages setting should be configured as a binding property in the composite.xml for the corresponding (inbound) <service>, as shown in Example 31-6.

Example 31-6 Binding Property in the composite.xml

<service name="Inbound">
<interface.wsdl interface="http://xmlns.oracle.com/pcbpel/demo1#wsdl.interface
(SampleInbound_PortType)"/>
<binding.jca config="Dequeue_jms.jca">

<property name="minimumDelayBetweenMessages">1000</property>

</binding.jca>
</service>

This setting ensures that there is a delay of at least 1000 milliseconds between two consecutive messages being posted to the BPEL process.

Note:

This setting pertains only to one adapter polling thread. If multiple adapter polling threads (for example, multiple JMS dequeuer threads) have been configured, this setting controls the speed of each thread, not the combined speed.
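
For illustration, the following hedged sketch (property values are illustrative) combines multiple polling threads with a per-thread delay. With 5 threads each waiting at least 1000 milliseconds between messages, the endpoint can still deliver up to roughly 5 messages per second in aggregate.

<binding.jca config="Dequeue_jms.jca">
<!-- Illustrative: 5 polling threads, each delayed by at least 1000 ms between messages -->
<property name="adapter.jms.receive.threads" type="xs:string" many="false">5</property>
<property name="minimumDelayBetweenMessages">1000</property>
</binding.jca>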

31.7 Purging the Completed Composite Instances

Data growth from auditing and dehydration can have a significant impact on database performance and throughput. See the sections on auditLevel for audit configuration and inMemoryOptimization for dehydration configuration.

These tables are impacted by instance data growth:

  • Audit_trail

  • Audit_details

  • Cube_instance

  • Cube_scope

  • Dlv_message

  • Dlv_subscription

  • Document_ci_ref

  • Document_dlv_message_ref

  • Schema_md

  • Task

  • Work_item

  • Xml_document

  • Header_properties

The database administrator must continuously monitor the growth of these tables and ensure that they are configured properly for optimal performance.

In addition, AIA highly recommends having a process in place to archive and clean up the completed composite instances after a specified period.

  • Completed composite instances for synchronous request-response services should be purged from the SOA dehydration store after a month.

  • Data synchronization related composite instances should be purged 3 - 4 months after their completion.

  • Composite Business Processes should be purged 1 - 2 years after their completion.

Although these durations may vary based on your company's policies, attempt to purge at the earliest possible time. Purging must be preceded by archiving of the instance data.

For more information about performing this task, see "Deleting Large Numbers of Instances with the Purge Script" in Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

31.8 Tuning Java Virtual Machines (JVMs)

This section discusses how to optimize the JVM heap.

31.8.1 How to Optimize the JVM Heap - Specifying Heap Size Values

The goal of tuning your heap size is to minimize the time that your JVM spends doing garbage collection while maximizing the number of clients that the Fusion Middleware stack can handle at a given time.

Specifically, the Java heap is where the objects of a Java program live. It is a repository for live objects, dead objects, and free memory. When an object can no longer be reached from any pointer in the running program, it is considered "garbage" and ready for collection. A best practice is to tune the time spent doing garbage collection to within 5% of execution time.

The JVM heap size determines how often and how long the virtual machine spends collecting garbage. An acceptable rate for garbage collection is application-specific and should be adjusted after analyzing the actual time and frequency of garbage collections. If you set a large heap size, full garbage collection is slower, but it occurs less frequently. If you set your heap size in accordance with your memory needs, full garbage collection is faster, but occurs more frequently.

In production environments, set the minimum heap size and the maximum heap size to the same value to prevent wasting virtual machine resources used to constantly grow and shrink the heap. Ensure that the sum of the maximum heap size of all the JVMs running on your system does not exceed the amount of available physical RAM. If this value is exceeded, the operating system starts paging and performance degrades significantly. The virtual machine always uses more memory than the heap size. The memory required for internal virtual machine functionality, native libraries outside of the virtual machine, and permanent generation memory (memory required to store classes and methods) is allocated in addition to the heap size settings.

To tune the JVM garbage collection options you must analyze garbage collection data and check for the frequency and type of garbage collections, the size of the memory pools, and the time spent on garbage collection.

Before you configure JVM garbage collection, analyze the following data points:

  • How often is garbage collection taking place? Compare the time stamps around the garbage collection.

  • How long is a full garbage collection taking?

  • What is the heap size after each full garbage collection? If the heap is always 85% free, for example, you might set the heap size smaller.

  • Do the young generation heap sizes (Sun) or the nursery size (JRockit) need tuning?

You can manually log garbage collection and memory pool sizes using verbose garbage collection logging:

Sun JVM command line options:

-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps

JRockit JVM command line options:

-XXverbose:gc

For example, you can use the following JVM options to tune the heap:

  • If you run out of heap memory (not due to a memory leak), increase -Xmx.

  • If you run out of native memory, you must decrease -Xmx.

  • For Sun JVM, modify -Xmn to tune the size of the heap for the young generation.

  • For Oracle JRockit, modify -Xns:<nursery size> to tune the size of the nursery.

  • If your AIA service instance runs on a dedicated host, set the heap size value as high as possible but make sure it does not exceed the available physical RAM.

It is highly recommended that the testing and production environments use a 64-bit JVM running on 64-bit hardware with at least 8 GB of memory. The following configurations are based on the assumption that the environment has at least 6 GB of memory, that the host system has sufficient physical RAM installed, and that other OS processes are not consuming a significant portion of the available memory.

Table 31-3 JVM Heap Size Values

For each JVM parameter, the recommended starting value for testing and accompanying notes are listed below. Heap and generation sizes are in MB.

Initial heap size

Recommended starting value for testing: 4096.

It is recommended that the initial heap and maximum heap be set to the same value for proper memory utilization and to minimize major garbage collection.

A small heap becomes full quickly and must be garbage collected more often. It is also prone to more fragmentation, making object allocation slower. A large heap introduces a slight overhead in garbage collection times. A heap that is larger than the available physical memory in the system must be paged out to disk, which leads to long access times or even application freezes, especially during garbage collection.

In general, the extra overhead caused by a larger heap is smaller than the gains in garbage collection frequency and allocation speed, as long as the heap does not get paged to disk. Thus, a good heap size setting is a heap that is as large as possible within the available physical memory.

Maximum heap size

Recommended starting value for testing: 4096. See the notes for Initial heap size.

Enable J2SE 5.0 Platform MBeans

Recommended starting value for testing: Enable.

Keep this option enabled so that memory and CPU can be monitored in real time and further tuning can be performed if required.

-XX:PermSize

Recommended starting value for testing: 256.

This setting is used to allocate memory outside of the heap space to hold Java classes.

Note: The JRockit JVM does not have this setting.

-XX:MaxPermSize

Recommended starting value for testing: 256.

-XX:AppendRatio

Recommended starting value for testing: 3.

-XX:NewSize

Recommended starting value for testing: 1638.

Another important heap configuration is the garbage collector's generational settings. The garbage collector optimizes collection by classifying objects by how long they live. Most of the BPEL engine's objects are short lived, so they live in the "Eden" space. AIA recommends sizing the "Eden" space to be 40-50% of the total maximum heap size; the recommended starting value of 1638 sizes Eden at roughly 40% of a 4096 MB heap. Also, if the system has two or more CPUs and is a non-Windows system, it is recommended to use the -XX:+AggressiveOpts JVM flag. The -XX:+AggressiveOpts option inspects the system resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory allocation-intensive jobs. Note that this flag does not affect Windows performance.

In the JRockit JVM, the young (new) generation is called the nursery, and its size is specified using -Xns<size>.

-XX:MaxNewSize

Recommended starting value for testing: 1638.

-XX:SurvivorRatio

Recommended starting value for testing: 6.

-XX:+AggressiveOpts

Recommended starting value for testing: Set in the JVM arguments.


Example 31-7 shows a sample JVM configuration for a 64-bit Sun JVM running on 64-bit Linux on 64-bit hardware with 16 GB of RAM. Settings such as heap space (-mx and -ms), young generation (MaxNewSize and NewSize), permanent generation size (MaxPermSize), and ParallelGCThreads must be adjusted according to the memory and processors available in your environment. Similarly, the directory paths in several of the directives listed below must be adjusted accordingly.

Example 31-7 Sample JVM Configurations

<data id="java-options" value="
-d64 
-server -Doc4j.jms.implementation=oracle.j2ee.jms 
-Dcom.sun.management.jmxremote 
-mx8192M 
-ms8192M 
-XX:MaxPermSize=512M 
-XX:MaxNewSize=2450M 
-XX:NewSize=2450M 
-Xmn2500M 
-XX:SurvivorRatio=6 
-XX:AppendRatio=3 
-verbose:gc 
-XX:+AggressiveOpts 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps 
-XX:+UseParallelOldGC 
-XX:ParallelGCThreads=8 
-Xloggc:/export/oracle/product/10.1.3.1/OracleAS_1/opmn/logs/j2eegc.log 
-Djava.security.policy=$ORACLE_HOME/j2ee/OC4J_SOA/config/java2.policy 
-Djava.awt.headless=true 
-Dhttp.webdir.enable=false 
-Doc4j.userThreads=true 
-Doracle.mdb.fastUndeploy=60 
-Doc4j.formauth.redirect=true 
-Djava.net.preferIPv4Stack=true   
-Dorabpel.home=/export/oracle/product/10.1.3.1/OracleAS_1/bpel
-Xbootclasspath/p:/export/oracle/product/10.1.3.1/OracleAS_1/bpel/
   lib/orabpel-boot.jar 
-Dhttp.proxySet=false
-Doraesb.home=/export/oracle/product/10.1.3.1/OracleAS_1/integration/esb 
-Dhttp.proxySet=false -Daia.home=/export/oracle/AIA" />

31.8.1.1 Java Performance Analysis Tools

A profiler is a performance analysis tool that enables you to reveal hot spots in the application that result in either high CPU utilization or high contention for shared resources. Several commercial and open-source profilers are available for the JVM.

31.9 Tuning WebLogic Application Server

This section discusses how to optimize the WebLogic server for high performance.

31.9.1 Domain Startup Mode - Production

The WebLogic server must be created using production mode. This sets many properties like log level to minimum to optimize the WebLogic server performance.

31.9.2 Work Manager - default

To handle thread management and perform self-tuning, WebLogic Server implements a default Work Manager. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors.

In many situations, the default Work Manager is sufficient for most application requirements: WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

31.9.3 Tuning Network I/O

Download and enable the performance pack for the specific operating system on which WebLogic Server is installed.

Additional Reading