22 Tuning Oracle Service Bus

You can tune Oracle Service Bus (OSB) to optimize its performance in providing connectivity, routing, mediation, management, and also some process orchestration capabilities between two or more applications.

About Oracle Service Bus

Within a SOA framework, Oracle Service Bus (OSB) provides connectivity, routing, mediation, management, and also some process orchestration capabilities.

The design philosophy for OSB is to be a high performance and stateless (non-persistent state) intermediary between two or more applications. However, given the diversity in scale and functionality of SOA implementations, OSB applications are subject to a large variety of usage patterns, message sizes, and QOS requirements.

In most SOA deployments, OSB is part of a larger system where it plays the role of an intermediary between two or more applications (servers). A typical OSB configuration involves a client invoking an OSB proxy service, which may make one or more service callouts to intermediate back-end services and then route the request to the destination back-end system before responding to the client.

It is necessary to understand that OSB is part of a larger system and the objective of tuning is the optimization of the overall system performance. This involves not only tuning OSB as a standalone application, but also using OSB to implement flow-control patterns such as throttling, request-buffering, caching, prioritization and parallelism.

For more information about Oracle Service Bus, see Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.

Tuning OSB Parameters

Oracle Service Bus performance largely depends on the performance of the other components.

The following components affect OSB performance:

  • WebLogic Server

  • Coherence

  • Adapters

Begin tuning Oracle Service Bus only after the above components are tuned to your satisfaction.

Tuning Oracle Service Bus with Work Managers

Starting in 12c (12.2.1), Oracle Service Bus can be tuned by using Oracle WebLogic Server Work Managers.

For example, Split-Join tuning can be accomplished by using Work Managers. By default, applications do not specify a Work Manager for Split-Joins, but Split-Joins can be assigned a Work Manager if there are strict thread constraints that need to be met, such as scheduling parallel tasks.

For optimal performance, strike a balance between the following Work Manager constraints:

  • min-threads-constraint so that Split-Join operations are not starved of threads.

  • max-threads-constraint so that Split-Joins do not starve other resources.

By default, there is no minimum or maximum thread constraint defined, which could either slow Split-Join operations down or slow down other operations sharing the same thread pool.

Work Managers take Split-Join operations into account when allotting threads to system-wide processes so that this balance is met automatically.
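As a sketch, a Split-Join Work Manager with both constraints can be declared in the self-tuning section of the domain's config.xml. All names, targets, and counts below are illustrative, not defaults, and the exact element layout may vary by release; in practice you would define these through the WebLogic Server Administration Console.

```xml
<!-- Hypothetical example: constraint and Work Manager names are illustrative. -->
<self-tuning>
  <min-threads-constraint>
    <name>SplitJoinMinThreads</name>
    <target>osb_server1</target>
    <count>5</count>   <!-- guarantee threads so Split-Join operations are not starved -->
  </min-threads-constraint>
  <max-threads-constraint>
    <name>SplitJoinMaxThreads</name>
    <target>osb_server1</target>
    <count>20</count>  <!-- cap threads so Split-Joins do not starve other work -->
  </max-threads-constraint>
  <work-manager>
    <name>SplitJoinWorkManager</name>
    <target>osb_server1</target>
    <min-threads-constraint>SplitJoinMinThreads</min-threads-constraint>
    <max-threads-constraint>SplitJoinMaxThreads</max-threads-constraint>
  </work-manager>
</self-tuning>
```

The Work Manager is then assigned to the Split-Join in its transport configuration by dispatch-policy name.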

For more information on tuning OSB with Work Managers, see Using Work Managers with Oracle Service Bus in Developing Services with Oracle Service Bus.

Tuning OSB Operation Settings

Table 22-1 lists and describes the parameters you are most likely to need to tune to improve performance. For more information on monitoring Oracle Service Bus to diagnose trouble areas, see Monitoring Oracle Service Bus in Administering Oracle Service Bus.

Table 22-1 Essential OSB Operation Tuning

Parameter Problem Tuning Recommendation Trade-offs

Monitoring and Alerting

Default: Disabled

The Monitoring and Alerting framework is designed to have minimal impact on performance, but all of these processes have performance impacts.

In general, the more monitoring rules and pipeline actions you have defined, the larger the performance impact.

Keep the default of Disabled at the OSB level. Most settings can be defined globally or per service.

The settings for monitoring and alerting can be configured in the Enterprise Manager Administrator Console.

Note that monitoring must be enabled for SLA alerts but not for Pipeline alerts.

Disabling these processes to improve performance means you are sacrificing certain metrics and alerts that could help you troubleshoot issues in the future.

For more information on the OSB Monitoring Framework, see Introduction to the Oracle Service Bus Monitoring Framework in Administering Oracle Service Bus.


Tracing

Default: Disabled

If you have large message sizes and high throughput scenarios, tracing may be slowing your system down.

Leave tracing disabled to improve performance.

For more information, see How to Enable or Disable Tracing in Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.

If disabled, you lose diagnostic detail that could help troubleshoot message flows.

Tracing prints the entire message context, including headers and the message body. This is an extremely useful feature in both development and production environments for debugging, diagnosing, and troubleshooting problems involving message flows in one or more proxy services.


Proxy service runtime cache size

Default: 100

You may have one of the following issues:

Proxy services are accessed slowly. This means you want to store more proxy services in the static portion of the OSB cache for pipeline service runtime metadata. The proxy services stored here are never garbage collected, so they are accessed faster.


You are seeing a lot of cache misses in DMS dumps.

If you want to include more proxy services in the static cache, increase this value, provided there is sufficient memory for runtime data processing for a large number of proxy services.

If you are seeing cache misses in DMS dumps, increase this value.

This system property caps the number of proxy services in the static portion of the OSB cache for pipeline service runtime metadata. These services never get garbage collected.

You set this value in the setDomainEnv.sh file as an extra Java argument at domain startup.
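A minimal sketch of that extra Java argument, assuming the cache cap is exposed as the com.bea.wli.sb.pipeline.RouterRuntimeCache.size system property (confirm the exact property name against your OSB release); this example raises the cap to 3000:

```shell
# setDomainEnv.sh (hypothetical property name -- verify for your release):
# cap the static pipeline runtime cache at 3000 proxy services
EXTRA_JAVA_PROPERTIES="-Dcom.bea.wli.sb.pipeline.RouterRuntimeCache.size=3000 ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES
```

Restart the domain for the new setting to take effect.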


Increasing this value decreases the time it takes to make initial calls to proxy services. It can also preload the cache when a configuration session is committed. However, while caching proxy services helps reduce compilation costs, it also increases memory consumption.

Decreasing this value may mean you free up memory, but initial calls to proxy services may take longer.


Reorder JSON input (REST services)

Default: False

JSON input to REST service may not be ordered as expected by the schema definition.

When converting from JSON to XML, the OSB runtime uses the order in which JSON name/value pairs appear to construct the corresponding XML elements. While well-formed, the resulting XML may not be valid according to the XML schema.

Set this parameter to True by running the REST wizard and checking the box on the first page.

Checking this option makes the REST service reorder the input JSON so that the response from the external REST endpoint can be ordered as per the valid schema definition.

Using this option adds significant performance overhead.

Using Other Tuning Strategies

After you have performed the recommended modifications, you can make additional changes that are specific to your deployment.

Consider carefully whether the additional tuning recommendations are appropriate for your environment.

Tuning Resequencer in OSB

A resequencer rearranges a stream of related but out-of-sequence messages back into order. It sequences incoming messages that arrive in random order and then sends them to the target services in the correct order.

You can fine-tune the Resequencer by setting the properties listed in Table 22-2 using the Global operational settings page in the OSB EM console:

Table 22-2 Essential Resequencer Tuning

Parameter Problem Tuning Recommendation Trade-offs


MaxGroupsLocked

Default: 4 groups

This parameter defines the maximum number of message groups that can be locked by resequencer locker threads for parallel processing. The locked groups can then use worker threads to process their respective messages.

If message processing is being delayed, identify which of the following situations is true:

  • Incoming messages belong to many groups.

  • There are many messages and they belong to fewer groups.

If you have many groups with a small number of messages each, increase this parameter's value. The resequencer will then lock more groups in one iteration.

If you have a few groups with many messages each, decrease this value. The resequencer will then lock fewer groups for processing.

Increasing the MaxGroupsLocked value may result in locking more groups than there are available worker threads. This could result in groups getting blocked while waiting for the availability of the worker threads for message processing.

Decreasing the default value may result in underutilization of resources.


Locker thread sleep interval

Default: 10 seconds

The resequencer locker thread queries the database to lock groups for parallel processing. When no groups are available, the locker thread sleeps for the configured amount of time specified by this parameter.

If you have either of the following situations, this parameter needs tuning:

  • You have a high number of messages and processing time between database queries is slow.

  • You have few messages but frequent database queries.

Decrease this parameter value if you have a high number of messages to reduce the lag time during processing.

If Resequencer locker threads are making frequent database round trips even though you do not have many incoming messages, increase this value.

If the sleep time is too short, there may not be enough worker threads available to process incoming messages of the locked groups. Too many database queries will also cause slow performance.

If the time interval between incoming messages is already long, configuring a higher value is not beneficial.


Delete completed messages

Default: True

The resequencer database is low on space. If you changed this parameter's value to False, processed messages remain in the resequencer database and slow down database queries.

Keep the default value of True to delete messages after successful execution. This frees up database space.

You do not have a detailed history of processed messages.

Considering Design Time for Proxy Applications

Consider the design configurations described in Table 22-3 for proxy applications based on your OSB usage and use case scenarios:

Table 22-3 Tuning Design Time for Proxy Application

Strategy Description Recommendations

Avoid creating many OSB context variables that are used once within another XQuery

Context variables created by using an Assign action are converted to XmlBeans and then converted back to the native XQuery format for the next XQuery. Multiple Assign actions can be collapsed into a single Assign action by using a FLWOR expression. Intermediate values can be created by using let statements.

Avoiding redundant context variable creation eliminates overheads that are associated with internal data format conversions. This benefit has to be balanced against visibility of the code and reuse of the variables.
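As an illustrative sketch (the element and variable names are hypothetical), three separate Assign actions can be collapsed into one Assign whose expression uses let bindings for the intermediate values:

```xquery
(: single Assign expression; the $body structure and names are illustrative :)
let $order  := $body/Order[1]
let $total  := data($order/Summary[1]/Total[1])
let $status := data($order/Summary[1]/Status[1])
return
  <OrderInfo>
    <Total>{ $total }</Total>
    <Status>{ $status }</Status>
  </OrderInfo>
```

Each let binding replaces what would otherwise be a separate Assign action, avoiding a round trip through XmlBeans per intermediate variable.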

Transform contents of a context variable such as $body.

Transforming the contents of a context variable could be time-consuming.

Use a Replace action to complete the transformation in a single step.

If the entire content of $body is to be replaced, leave the XPath field blank and select Replace node contents. This is faster than pointing to the child node of $body (for example, $body/Order) and selecting Replace entire node.

Leaving the XPath field blank eliminates an extra XQuery evaluation.

Specify a special XPath.

A general XPath like $body/Order must be evaluated by the XQuery engine before the primary transformation resource is executed. OSB treats $body/*[1] as a special XPath that can be evaluated without invoking the XQuery engine.

Use $body/*[1] to represent the contents of $body as an input to a Transformation (XQuery / XSLT) resource.

This is faster than specifying an absolute path pointing to the child of $body.

Enable streaming for pure content-based routing scenarios.

OSB leverages the partial parsing capabilities of the XQuery engine when streaming is used in conjunction with indexed XPaths.

See Tuning XQuery for additional details.

Enabling streaming means that the payload is parsed and processed only to the field referred to in the XPath. Streaming also eliminates the overhead that is associated with parsing and serialization of XmlBeans.

Trade-offs: If the payload is accessed a large number of times for reading multiple fields, the gains from streaming can be negated. If all fields read are located in a single subsection of the XML document, a hybrid approach provides the best performance.

The output of a transformation is stored in a compressed buffer format either in memory or on disk. Therefore, streaming should be avoided when running out of memory is not a concern.

Set the appropriate QOS level and transaction settings.

OSB can invoke a back-end HTTP service asynchronously if the QOS is Best-Effort. Asynchronous invocation allows OSB to scale better with long-running back-end services. It also allows Publish over HTTP to be truly fire-and-forget.

Do not set XA or Exactly-Once unless the required reliability level is once-and-only-once and it is possible to use the setting. If the client is an HTTP client, it is not possible to use this setting. If OSB initiates a transaction, it is possible to replace XA with LLR to achieve the same level of reliability.

Disable or delete all log actions.

Log actions add an I/O overhead. Logging also involves an XQuery evaluation, which can be expensive. Writing to a single device (resource or directory) can also result in lock contentions.

Disable or delete all log actions.

Tuning XQuery

OSB uses XQuery and XPath extensively for various actions like Assign, Replace, and Routing Table. The following XML structure ($body) is used to explain XQuery and XPath tuning concepts:

<Order>
    <CtrlArea>
        <CustName>…</CustName>
    </CtrlArea>
    <Item name="ACE_Car">20000</Item>
    <Item name="Ext_Warranty">1500</Item>
    …. a large number of items
    <Summary>
        <Total>…</Total>
        <Status>…</Status>
    </Summary>
    <Shipping>My Shipping Firm</Shipping>
</Order>

You can use the tuning strategies listed in Table 22-4 to tune XQuery.

Table 22-4 XQuery Tuning Strategies

Strategy Description Recommendations

Avoid the use of double front slashes (//) in XPaths.

// implies all occurrences of a node, irrespective of its location in the XML tree. Thus, the entire depth and breadth of the XML tree has to be searched for the pattern specified after //.

Use // only if the exact location of a node is not known at design time.

Index XPaths when applicable.

Indexing helps your system process only what is needed. When indexing, only the top part of the document is processed by the XQuery engine.

Index an XPath by adding [1] after each node of the path.

For example, the XPath $body/Order/CtrlArea/CustName implies returning all instances of Order under $body and all instances of CtrlArea under Order. The entire document has to be read to correctly process this XPath.

But if you know that there is a single instance of Order under $body and a single instance of CtrlArea under Order, you can index the above XPath by rewriting it as $body/Order[1]/CtrlArea[1]/CustName[1]. This only returns the first instances of the child nodes.

Note: Do not index when you need a whole array of nodes returned. Indexing only returns the first item node of the array.

Extract frequently used parts of a large XML document as intermediate variables within a FLWOR expression.

An intermediate variable can be used to store the common context for multiple values.

Using intermediate variables consumes more memory but reduces redundant XPath processing.

Use a hybrid approach for read-only scenarios with streaming.

If the payload is accessed a large number of times for reading multiple fields, the gains from streaming can be negated. If all fields read are located in a single subsection of the XML document, a hybrid approach provides the best performance.

Enable streaming at the proxy level and assign the relevant subsection to a context variable. The individual fields can then be accessed from this context variable.

The fields Total and Status can be retrieved by using three Assign actions:

Assign "$body/Order[1]/Summary[1]" to "foo"
Assign "$foo/Total" to "total"
Assign "$foo/Status" to "status"


Pipelines enabled for content streaming should use XQuery 1.0. Using XQuery 2004 does work, but it incurs a significant performance overhead because on-the-fly conversions happen to and from the XQuery 1.0 engine. There is a design-time warning to that effect.

Tuning Poller-based Transports

Latency and throughput of poller-based transports depends on the frequency with which a source is polled and the number of files and messages read per polling sweep.

Setting the Polling Interval

Consider using a smaller polling interval for high throughput scenarios where the message size is not very large and the CPU is not saturated. The primary polling interval defaults are listed below with links to additional information:

  • File Transport: 60 seconds. See File Transport Configuration Page in Developing Services with Oracle Service Bus.

  • FTP Transport: 60 seconds. See FTP Transport Configuration Page in Developing Services with Oracle Service Bus.

  • MQ Transport: 1000 milliseconds. See MQ Transport Configuration Page in Developing Services with Oracle Service Bus.

  • SFTP Transport: 60 seconds. See SFTP Transport Configuration Page in Developing Services with Oracle Service Bus.

  • JCA Transport: 60 seconds. See JCA Transport Configuration Page in Developing Services with Oracle Service Bus.

Setting Read Limit

The read limit determines the number of files or messages that are read per polling sweep. You can tune it with the information in Table 22-5.

For more information, see Using the File Transport in Developing Services with Oracle Service Bus.

Table 22-5 Essential Read Limit Tuning

Parameter Symptoms if not properly tuned Tuning Recommendation Performance Trade-offs

Read Limit

Default: 10 for File and FTP transports

Excessive memory use due to a large number of files being read into memory simultaneously.

Set this value to the desired concurrency. It can be set to 0 to specify no limit.

The read limit determines the number of files or messages that are read per polling sweep.

Setting the Read Limit to a high value and the Polling Interval to a small value may result in a large number of messages being read into memory simultaneously. If the message size is large, this can lead to an out-of-memory error.