G Work Managers and Threading

This appendix describes the internal threading model used by Oracle Service Bus and its implications regarding performance and server stability. It focuses on the HTTP transport.

This appendix includes the following sections:

  • Key Threading Concepts
  • Pipeline Actions
  • Work Managers
  • Designating Work Managers

G.1 Key Threading Concepts

The following concepts are important to consider when assigning Work Managers to services.

  • Request and response pipelines always execute in separate threads. While the request thread originates from the proxy service transport, the response thread originates from the business service transport.

  • When external services are invoked, threads can be blocking or non-blocking, depending on the pipeline action, the Quality of Service (QoS) configuration, and the transport being used.

  • When using blocking calls, a Work Manager with a minimum thread constraint must be associated with the response in order to prevent server deadlocks.

Service Bus optimizes invocations between proxy services. When one HTTP proxy service calls a second HTTP proxy service, the transport layer is bypassed. The request message is not sent using a network socket, so the transport overhead is eliminated. Instead, the thread processing the initial proxy service continues to process the request pipeline of the called service. Similarly, when invoking a business service, the proxy service thread is also used to send the request. Because the HTTP transport uses the asynchronous capabilities of WebLogic Server, the response of the business service is processed by a different thread.

For discussions about threading, it is important to remember that Service Bus runs on top of WebLogic Server. At times, Service Bus depends on the WebLogic Server HTTP, JMS, and core engines for executing different types of requests. In some cases, Service Bus hands part of the processing over to WebLogic Server, which executes it using threads from the WebLogic Server self-tuning thread pool. Service Bus documentation does not cover scenarios in which Service Bus depends on WebLogic Server and its self-tuning thread pool for processing.

As an example, in the case of a service callout, even though it is a synchronous call within the Service Bus layer, the responsibility for reading the response from the back end lies with the WebLogic Server socket muxer threads. After the muxer threads receive the response from the back-end system, they notify the Service Bus thread, which is blocked waiting for the response.

It is strongly recommended that you ensure that all Service Bus proxy and business services are configured to execute using threads from their respective Work Managers. Work Manager minimum and maximum constraints should be set based on the maximum load. This ensures that proxy and business services do not depend on the WebLogic Server default execute thread pool, and that WebLogic Server has enough free or idle threads to process any ad hoc requests from Service Bus without leading to stuck thread problems.
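
For example, the following WLST sketch creates a Work Manager with both a minimum and a maximum thread constraint and targets it to the managed server running Service Bus. It is a minimal, assumption-based sketch: the credentials, URL, server name (osb_server1), Work Manager and constraint names, and thread counts are placeholders, and the SelfTuning MBean operations should be verified against your WebLogic Server release.

    # WLST sketch: create a Work Manager with minimum and maximum thread
    # constraints and target it to the managed server running Service Bus.
    # Credentials, URL, names, and counts are illustrative placeholders.
    connect('weblogic', 'password', 't3://adminhost:7001')
    edit()
    startEdit()

    selfTuning = cmo.getSelfTuning()            # domain-level SelfTuning MBean
    target = getMBean('/Servers/osb_server1')   # server hosting Service Bus

    minTC = selfTuning.createMinThreadsConstraint('OSBResponseMinTC')
    minTC.setCount(2)                           # small guaranteed thread count
    minTC.addTarget(target)

    maxTC = selfTuning.createMaxThreadsConstraint('OSBResponseMaxTC')
    maxTC.setCount(20)                          # sized for the expected peak load
    maxTC.addTarget(target)

    wm = selfTuning.createWorkManager('OSBResponseWM')
    wm.setMinThreadsConstraint(minTC)
    wm.setMaxThreadsConstraint(maxTC)
    wm.addTarget(target)

    save()
    activate()
    disconnect()

Once activated and targeted, the Work Manager name should appear as an available dispatch policy for proxy and business services in the Service Bus Console.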

G.2 Pipeline Actions

There are three pipeline actions that specifically affect threading: route actions, publish actions, and service callout actions.

G.2.1 Route Action

By default, the HTTP transport uses asynchronous features of WebLogic Server to prevent thread blocking while waiting for a business service response. For the execution of a route action, once the thread finishes sending the request, it returns to the pool where it is then used to process other work. When a response is returned from the external service, a second thread is scheduled to process it. This behavior can be modified by using a route options action and setting the QoS to Exactly Once.

G.2.2 Publish Action

A publish action is a one-way send. It provides a means of invoking an external service without receiving a response. This is often used to provide notification of an event, such as for logging or auditing. By default, no feedback about whether the call was successful is returned to the pipeline thread. For both route and publish actions, setting the QoS to Exactly Once forces the request thread to block until a response is received. This allows the request thread to notify the caller of an error immediately, without a callback. This behavior is also useful when attempting to throttle the number of threads simultaneously processing a proxy service.

G.2.3 Service Callout Action

A service callout is implemented as a synchronous blocking call. It is designed to let you invoke an external service to enrich a request message before routing the request to the target service. While the callout is awaiting a response, the request pipeline thread blocks until a response thread notifies it that the response is ready and processing can continue.
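
The sketch below is purely conceptual (plain Python threading, not Service Bus or WebLogic Server code) and simply illustrates the blocking pattern: the request thread parks on a synchronization object and makes no progress until a separate thread delivers the response and signals it.

    # Conceptual sketch only -- plain Python threading, not Service Bus code.
    # It shows the blocking pattern: the request thread parks on an event and
    # cannot continue until a separate response thread signals it.
    import threading

    response_ready = threading.Event()
    response_holder = {}

    def request_pipeline():
        # ... enrich the message and issue the service callout ...
        response_ready.wait()            # request pipeline thread blocks here
        print('continue pipeline with: ' + response_holder['body'])

    def response_thread():
        # In Service Bus, this work runs on a different thread, scheduled under
        # the business service's Work Manager, once the back end replies.
        response_holder['body'] = 'enriched data'
        response_ready.set()             # releases the blocked request thread

    t_req = threading.Thread(target=request_pipeline)
    t_resp = threading.Thread(target=response_thread)
    t_req.start()
    t_resp.start()
    t_req.join()
    t_resp.join()

If no thread is available to run the response side (for example, because all threads are themselves blocked in callouts), the blocked request threads are never released. This is the deadlock scenario that a minimum thread constraint on the response Work Manager is intended to prevent.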

G.3 Work Managers

WebLogic Server uses a self-tuning thread pool for executing all application-related work. The pool size is managed by an increment manager, which adds or removes threads from the pool as needed. The number of active threads never exceeds 400. As requests enter the server, a scheduler manages the order in which the requests are executed. When the number of requests exceeds the number of available threads, requests are queued and then executed as threads return to the pool and become available. Work Managers indicate to the scheduler the type of work and the priority of a request.
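
One way to observe the pool and its queue on a running server is to read the ThreadPoolRuntime MBean from WLST, as in the sketch below. The credentials and URL are placeholders, and the attributes shown are standard ThreadPoolRuntimeMBean attributes that should be verified against your WebLogic Server release.

    # WLST sketch: inspect the self-tuning thread pool of a running server.
    # Credentials and URL are placeholders.
    connect('weblogic', 'password', 't3://osbhost:7003')
    serverRuntime()
    cd('ThreadPoolRuntime/ThreadPoolRuntime')
    print('Total execute threads: %d' % cmo.getExecuteThreadTotalCount())
    print('Idle threads         : %d' % cmo.getExecuteThreadIdleCount())
    print('Hogging threads      : %d' % cmo.getHoggingThreadCount())
    print('Queued requests      : %d' % cmo.getQueueLength())
    print('Throughput           : %.2f' % cmo.getThroughput())
    disconnect()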

You create and configure Work Managers using the WebLogic Server Administration Console. This section describes Work Manager concepts that are key to Service Bus optimization. For more information about Work Managers, see "Using Work Managers to Optimize Scheduled Work" in Administering Server Environments for Oracle WebLogic Server.

G.3.1 Work Manager Configuration

Two key properties when configuring a Work Manager are Max Thread Constraints and Min Thread Constraints. A maximum thread constraint limits the number of concurrent threads executing a type of request by restricting the scheduler from executing more than the configured number at one time. However, the thread pool is shared among all Work Managers, so there is no guarantee the maximum number of threads will be available for processing at any given time.

A minimum thread constraint guarantees a minimum number of threads for processing. If sufficient threads are not available in the thread pool to process up to the minimum number, the scheduler uses standby threads to satisfy the minimum. Standby threads are not counted as part of the maximum of 400 threads in the pool. When a thread finishes executing a request associated with a Work Manager that has a minimum thread constraint, the Work Manager first checks the queue for another request associated with the same constraint and executes it (instead of returning the thread to the free pool). For this reason, use minimum thread constraints judiciously. Over-use can cause resource starvation of the default Work Manager, leading to unpredictable results.
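
One way to see whether a constrained Work Manager is actually receiving work, and whether requests are queuing or threads are becoming stuck behind it, is to read the per-application WorkManagerRuntime MBeans. The WLST sketch below is assumption-based: the credentials and URL are placeholders, and the navigation assumes the standard ApplicationRuntime to WorkManagerRuntimes hierarchy, which should be verified against your WebLogic Server release.

    # WLST sketch: print runtime statistics for the Work Managers visible on a
    # running server. Credentials and URL are placeholders.
    connect('weblogic', 'password', 't3://osbhost:7003')
    serverRuntime()
    for app in cmo.getApplicationRuntimes():
        for wm in app.getWorkManagerRuntimes() or []:
            print('%s / %s: pending=%d completed=%d stuck=%d' % (
                app.getName(), wm.getName(),
                wm.getPendingRequests(), wm.getCompletedRequests(),
                wm.getStuckThreadCount()))
    disconnect()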

G.3.2 Work Manager Priority

By default, Work Managers have equal priority with the scheduler, with a share value of 50. When the scheduler attempts to execute waiting requests, it ensures that requests associated with each Work Manager are given an equal number of thread resources (assuming an equal number of waiting requests).
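
The share itself is configurable: a fair share request class can be attached to a Work Manager to give its requests a larger or smaller portion of the available threads. The WLST sketch below is an assumption-based illustration; the credentials, URL, names, and the share value of 80 are placeholders, OSBResponseWM refers to the Work Manager created in the earlier sketch, and the MBean operations should be verified against your WebLogic Server release.

    # WLST sketch: raise the share of the response Work Manager above the
    # default of 50 by attaching a fair share request class.
    # Names, credentials, and values are illustrative placeholders.
    connect('weblogic', 'password', 't3://adminhost:7001')
    edit()
    startEdit()

    selfTuning = cmo.getSelfTuning()
    target = getMBean('/Servers/osb_server1')

    rc = selfTuning.createFairShareRequestClass('OSBResponseShare')
    rc.setFairShare(80)                  # the default share for a Work Manager is 50
    rc.addTarget(target)

    # 'OSBResponseWM' is the Work Manager created in the earlier sketch.
    selfTuning.lookupWorkManager('OSBResponseWM').setFairShareRequestClass(rc)

    save()
    activate()
    disconnect()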

From within Service Bus, a Work Manager is associated with a service by specifying a dispatch policy. If you have a simple proxy service that routes to a business service, you can assign different dispatch policies to each service so the scheduler recognizes that new requests are different work from responses received. Subsequently, each Work Manager is scheduled evenly when work requests (either a new incoming request message for a proxy service or a response message for a business service) are added to the queue. This becomes vital when a business service is invoked with a blocking call, which is the case with a service callout.

Once a thread is processing under the designation of a specific Work Manager, it continues to do so until it is returned to the pool. Therefore, when invoking a business service or a proxy service from within a pipeline, the currently executing thread continues processing under the Work Manager of the initial proxy service. The Work Manager specified for the service being called is ignored in this case.

G.4 Designating Work Managers

The Work Manager (dispatch policy) configuration for a business service should depend on how the business service is invoked. If a proxy service invokes the business service using a service callout, a publish action, or routing with exactly-once QoS (as described in Pipeline Actions), consider using different Work Managers for the proxy service and the business service instead of using the same for both. For the business service Work Manager, configure the Min Thread Constraint property to a small number (1-3) to guarantee an available thread.
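
As a concrete illustration of this guidance, the following WLST sketch attaches a minimum thread constraint with a count of 2 (within the recommended range of 1-3) to the Work Manager used by the business service. The sketch is assumption-based: the credentials, URL, server name, and constraint name are placeholders, and BizServiceResponseWM stands for an existing Work Manager that is selected as the business service's dispatch policy.

    # WLST sketch: guarantee a small number of threads for the Work Manager
    # referenced by the business service's dispatch policy.
    # All names and connection details are placeholders.
    connect('weblogic', 'password', 't3://adminhost:7001')
    edit()
    startEdit()

    selfTuning = cmo.getSelfTuning()
    target = getMBean('/Servers/osb_server1')

    minTC = selfTuning.createMinThreadsConstraint('BizServiceMinTC')
    minTC.setCount(2)                    # small value (1-3) guarantees a thread
    minTC.addTarget(target)

    # 'BizServiceResponseWM' is assumed to exist and to be set as the business
    # service's dispatch policy.
    selfTuning.lookupWorkManager('BizServiceResponseWM').setMinThreadsConstraint(minTC)

    save()
    activate()
    disconnect()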