Sun Java System Web Proxy Server 4.0.11 Performance Tuning, Sizing, and Scaling Guide

Understanding Threads, Processes, and Connections

Before tuning your server, you should understand how Proxy Server handles connections. Request processing threads handle Proxy Server connections. You can configure these threads from the Admin console or by editing the configuration file.

Connection-Handling Overview

In Proxy Server, acceptor threads on a listen socket accept connections and put them into a connection queue. Request processing threads in a thread pool then pick up connections from the queue and service the requests.

Figure 2–1 Proxy Server Connection Handling

Connection handling in Proxy Server, showing how a request is transmitted to a request processing thread.

A request is not thread-safe if processing it requires interaction between a number of threads. The part of a request that is not thread-safe is transferred to the NativePool, a collection of threads that can interact with each other. The NativePool processes that part of the request and communicates the result back to the request processing thread.

At startup, the server creates only the number of threads defined by the thread pool's minimum threads setting, which by default is set to the number of processors. As the load increases, the server creates more threads. The policy for adding new threads is based on the state of the connection queue.

Each time a new request arrives, the number of requests waiting in the queue, often considered the backlog of connections, is compared to the number of request processing threads already created. If the number of waiting requests is greater than the number of threads, more threads are created.

The process of adding new session threads is strictly limited by the maximum threads value. For more information on maximum threads, see Maximum Threads (Maximum Simultaneous Requests).

You can change the settings that affect the number and timeout of threads, processes, and connections in the Admin console.

Low Latency and High Concurrency Modes

The server can run in one of two modes, depending upon the load. It changes modes to accommodate the load most efficiently.

When the server is started, it starts in low latency mode. When the load increases, the server moves to high concurrency mode. The decision to move from low latency mode to high concurrency mode and back again is made by the server, based on connection queue length, average total sessions, average idle sessions, and currently active and idle sessions.

Disabled Thread Pools

If a thread pool is disabled, no threads are created in the pool, no connection queue is created, and no keep-alive threads are created. When the thread pool is disabled, the acceptor threads themselves process the request.

Connection-Handling magnus.conf Directives for NSAPI

In addition to the settings discussed above, you can edit directives in the magnus.conf file to configure additional request-processing settings for NSAPI plug-ins.
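
For example, a connection-handling fragment of magnus.conf might look like the following sketch. The directive names shown here (RqThrottleMin, RqThrottle, ConnQueueSize, and KeepAliveTimeout) and their values are illustrative and assume the directives documented for this server family; verify the exact names and defaults in the Configuration File Reference before using them.

# Minimum and maximum request processing threads
RqThrottleMin 48
RqThrottle 128
# Maximum number of connections held in the connection queue
ConnQueueSize 4096
# Seconds an idle keep-alive connection is held open
KeepAliveTimeout 30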

For detailed information about these directives, see the Sun Java System Web Proxy Server 4.0.11 Configuration File Reference.

Custom Thread Pools

By default, the connection queue sends requests to the default thread pool. However, you can also create your own thread pools in magnus.conf using a thread pool Init function. These custom thread pools are used for executing NSAPI Service Application Functions (SAFs), not entire requests.

If a SAF requires a custom thread pool, the current request processing thread queues the request, waits until a thread from the custom pool completes the SAF, and then completes the rest of the request itself.

For example, the obj.conf file contains the following:

NameTrans fn="assign-name" from="/testmod" name="testmod" pool="my-custom-pool"
...
<Object name="testmod">
ObjectType fn="force-type" type="magnus-internal/testmod"
Service method=(GET|HEAD|POST) type="magnus-internal/testmod" fn="testmod_service" pool="my-custom-pool2"
</Object>

In this example, the request is processed as follows:

  1. The request processing thread, referred to as A1 in this example, picks up the request and executes the steps before the NameTrans directive.

  2. If the URI starts with /testmod, the A1 thread queues the request to the my-custom-pool queue. The A1 thread waits.

  3. A different thread in my-custom-pool, called the B1 thread in this example, picks up the request queued by A1. B1 completes the NameTrans SAF and returns to the wait stage.

  4. The A1 thread wakes up and continues processing the request. It executes the ObjectType SAF and moves on to the Service function.

  5. Because the Service function must be processed by a thread in my-custom-pool2, the A1 thread queues the request to my-custom-pool2.

  6. A different thread in my-custom-pool2, called C1 in this example, picks up the queued request. C1 executes the Service SAF and returns to the wait stage.

  7. The A1 thread wakes up and continues processing the request.

In this example, three threads, A1, B1, and C1, work together to complete the request.

Additional thread pools are a way to run thread-unsafe plug-ins. If you define a pool with a maximum of one thread, only one request at a time is allowed into the specified service function. In the previous example, if testmod_service is not thread-safe, it must be executed by a single thread. If my-custom-pool2 is defined with a single thread, the SAF works in a multi-threaded Proxy Server.
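
For example, a single-threaded pool for testmod_service could be defined in magnus.conf with the thread-pool-init function. This is a minimal sketch; the minthreads and queueSize values, and the exact parameter spelling, should be checked against the thread-pool-init documentation:

Init fn="thread-pool-init" name="my-custom-pool2" maxthreads=1 minthreads=1 queueSize=200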

For more information on defining thread pools, see thread-pool-init in Sun Java System Web Proxy Server 4.0.11 Configuration File Reference.

Native Thread Pool

On Windows, the native thread pool (NativePool) is used internally by the server to execute NSAPI functions that require a native thread for execution.

Proxy Server uses Netscape Portable Runtime (NSPR), which is an underlying portability layer providing access to the host OS services. This layer provides abstractions for threads that are not always the same as those for the OS-provided threads. These non-native threads have lower scheduling overhead, so their use improves performance. However, these threads are sensitive to blocking calls to the OS, such as I/O calls. To make it easier to write NSAPI extensions that can make use of blocking calls, the server keeps a pool of threads that safely support blocking calls. These threads are usually native OS threads. During request processing, any NSAPI function that is not marked as being safe for execution on a non-native thread is scheduled for execution on one of the threads in the native thread pool.
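
The size of the native thread pool can be tuned in magnus.conf. The following is a sketch that assumes the NativePool directives documented for this server family (NativePoolMinThreads, NativePoolMaxThreads, and NativePoolQueueSize); the values are illustrative only.

NativePoolMinThreads 2
NativePoolMaxThreads 128
NativePoolQueueSize 128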

If you have written your own NSAPI plug-ins such as NameTrans, Service, or PathCheck functions, these execute by default on a thread from the native thread pool. If your plug-in makes use of the NSAPI functions for I/O exclusively or does not use the NSAPI I/O functions at all, then it can execute on a non-native thread. For this to happen, the function must be loaded with a NativeThread="no" option, indicating that it does not require a native thread.

For example, add the following to the load-modules Init line in the obj.conf file:

Init funcs="pcheck_uri_clean_fixed_init" shlib="C:/Sun/proxyserver40/lib/custom.dll" fn="load-modules" NativeThread="no"

The NativeThread flag affects all functions in the funcs list, so if you have more than one function in a library, but only some of them use native threads, use separate Init lines. If you set NativeThread to yes, the thread maps directly to an OS thread.
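
For example, if custom.dll also exported a hypothetical function, blocking_io_init, that makes blocking OS calls, you could load the two functions on separate Init lines so that only the blocking function is scheduled on the native thread pool:

Init funcs="pcheck_uri_clean_fixed_init" shlib="C:/Sun/proxyserver40/lib/custom.dll" fn="load-modules" NativeThread="no"
Init funcs="blocking_io_init" shlib="C:/Sun/proxyserver40/lib/custom.dll" fn="load-modules" NativeThread="yes"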

For information on the load-modules function, see load-modules in Sun Java System Web Proxy Server 4.0.11 Configuration File Reference.

Process Modes

You can run Sun Java System Web Proxy Server in one of the following modes: single-process mode or multi-process mode.


Note –

Multi-process mode is deprecated for Java technology-enabled servers. Most applications are now multi-threaded, and multi-process mode is usually not needed. However, multi-process mode can significantly improve overall server throughput for NSAPI applications that do not implement fine-grained locking.


Single-Process Mode

In single-process mode, the server receives requests from web clients in a single process. Inside that process, acceptor threads wait for new requests to arrive. When a request arrives, an acceptor thread accepts the connection and puts the request into the connection queue. A request processing thread picks up the request from the connection queue and handles it.

Because the server is multi-threaded, all NSAPI extensions written for it must be thread-safe. This means that if an NSAPI extension uses a global resource, such as a shared reference to a file or a global variable, access to that resource must be synchronized so that only one thread uses it at a time. All plug-ins provided with the Proxy Server are thread-safe and thread-aware, providing good scalability and concurrency. However, your legacy applications might be single-threaded. When the server runs such an application, it can execute only one request at a time. This leads to performance problems when the server is put under load. Unfortunately, in the single-process design there is no real workaround.

Multi-Process Mode

You can configure the server to handle requests using multiple processes with multiple threads in each process. This flexibility provides optimal performance for sites using threads, and also provides backward compatibility for sites running legacy applications that are not ready to run in a threaded environment. Because applications on Windows generally already take advantage of multi-threading, this feature applies to UNIX and Linux platforms.

The advantage of multiple processes is that legacy applications that are not thread-aware or thread-safe can be run more effectively in Sun Java System Web Proxy Server. However, because all of the Sun Java System extensions are built to support a single-process threaded environment, they might not run in the multi-process mode. The Search plug-ins fail on startup if the server is in multi-process mode, and if session replication is enabled, the server will fail to start in multi-process mode.

In multi-process mode, the server spawns multiple server processes at startup. Depending on the configuration, each process contains one or more threads that receive incoming requests. Because each process is completely independent, each one has its own copies of global variables, caches, and other resources. Using multiple processes requires more resources from your system. Also, if you install an application that requires shared state, the application has to synchronize that state across multiple processes. NSAPI provides no helper functions for implementing cross-process synchronization.

When you specify a MaxProcs value greater than 1, the server relies on the operating system to distribute connections among multiple server processes (see MaxProcs (UNIX/Linux) for information about the MaxProcs directive). However, many modern operating systems do not distribute connections evenly, particularly when there are a small number of concurrent connections.

Because Sun Java System Web Proxy Server cannot guarantee that load is distributed evenly among server processes, you might encounter performance problems if you set Maximum Threads to 1 and MaxProcs greater than 1 to accommodate a legacy application that is not thread-safe. The problem is especially pronounced if the legacy application takes a long time to respond to requests, for example, when the legacy application contacts a back-end database. In this scenario, it might be preferable to use the default value for Maximum Threads and serialize access to the legacy application using thread pools. For more information about creating a thread pool, see thread-pool-init in Sun Java System Web Proxy Server 4.0.11 Configuration File Reference.

If you are not running any NSAPI in your server, you should use the default settings: one process and many threads. If you are running an application that is not scalable in a threaded environment, you should use a few processes and many threads, for example, 4 or 8 processes and 128 or 512 threads per process.

MaxProcs (UNIX/Linux)

To run a UNIX or Linux server in multi-process mode, set the MaxProcs directive to a value that is greater than 1. Multi-process mode might provide higher scalability on multi-processor machines and improve the overall server throughput on large systems such as the Sun Fire T2000 server. If you set the value to less than 1, it is ignored and the default value of 1 is used.

You can set the value for MaxProcs by editing the MaxProcs parameter in magnus.conf.
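
For example, to run the server with a few processes and many threads per process, as recommended earlier, magnus.conf might contain an entry such as the following; the value is illustrative:

MaxProcs 4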


Note –

You will receive duplicate startup messages when running your server in MaxProcs mode.