This chapter describes specific adjustments you can make that may improve Sun Java System Web Server performance. The chapter includes the following topics:
As you tune your server it is important to remember that your specific environment is unique. The impacts of the suggestions provided in this guide will vary, depending on your specific environment. Ultimately you must rely on your own judgement and observations to select the adjustments that are best for you.
Be very careful when tuning your server. Always back up your configuration files before making any changes.
As you work to optimize performance, keep the following guidelines in mind:
Work Methodically
As much as possible, make one adjustment at a time. Measure your performance before and after each change, and rescind any change that does not produce a measurable improvement.
Adjust Gradually
When adjusting a quantitative parameter, make several stepwise changes in succession, rather than trying to make a drastic change all at once. Different systems face different circumstances, and you may leap right past your system’s best setting if you change the value too rapidly.
Start Fresh
At each major system change, be it a hardware or software upgrade or deployment of a major new application, review all previous adjustments to see whether they still apply. After a Solaris upgrade, it is strongly recommended that you start over with an unmodified /etc/system file.
Stay Informed
Read the Sun Java System Web Server and Solaris release notes whenever you upgrade your system. The release notes often provide updated information about specific adjustments.
This section describes the information available through the perfdump utility, and discusses how to tune some parameters to improve your server’s performance.
The default tuning parameters are appropriate for all sites except those with very high volume. The only parameters that large sites may regularly need to change are RqThrottle, MaxKeepAliveConnections, and KeepAliveTimeout, which are tunable from magnus.conf and the Server Manager.
The perfdump utility monitors statistics in the following categories, which are described in this section:
For general information about perfdump, see Monitoring Current Activity Using the perfdump Utility.
Once you have viewed the statistics you need, you can tune various aspects of your server’s performance using:
The magnus.conf file
The Server Manager Preferences tab
The Server Manager Preferences tab includes many interfaces for setting values for server performance, including the Performance Tuning page and the File Cache Configuration page.
The Magnus Editor allows you to set values for numerous directives in the following categories, which are accessible from the drop-down list:
DNS Settings
SSL Settings
Performance Settings
CGI Settings
Keep-Alive Settings
Logging Settings
Language Settings
Connection queue information shows the number of sessions in the queue, and the average delay before the connection is accepted.
Following is an example of how these statistics are displayed in perfdump:
ConnectionQueue:
----------------------------------
Current/Peak/Limit Queue Length    0/0/4096
Total Connections Queued           0
Average Queueing Delay             0.00 milliseconds
Current/Peak/Limit queue length shows, in order:
The number of connections currently in the queue
The largest number of connections that have been in the queue simultaneously
The maximum size of the connection queue
If the peak queue length is close to the limit, you may wish to increase the maximum connection queue size to avoid dropping connections under heavy load.
You can increase the connection queue size by:
Setting or changing the value of ConnQueueSize in the Magnus Editor of the Server Manager
Editing the ConnQueueSize directive in magnus.conf
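For example, the direct magnus.conf edit might look like the following (the value shown is illustrative, not a recommendation for every site):

```
# magnus.conf -- raise the connection queue limit above the
# 4096 shown in the perfdump example; 8192 is illustrative
ConnQueueSize 8192
```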
Total Connections Queued is the total number of times a connection has been queued. This includes newly accepted connections and connections from the keep-alive system.
This setting is not tunable.
Average Queueing Delay is the average amount of time a connection spends in the connection queue. This represents the delay between when a request connection is accepted by the server and when a request processing thread (also known as a session) begins servicing the request.
This setting is not tunable.
The following listen socket information includes the IP address, port number, number of acceptor threads, and the default virtual server for the listen socket. For tuning purposes, the most important field in the listen socket information is the number of acceptor threads.
You can have many listen sockets enabled for virtual servers, but at least one will be enabled for your default server instance (usually http://0.0.0.0:80).
ListenSocket ls1:
------------------------
Address                   http://0.0.0.0:8080
Acceptor Threads          1
Default Virtual Server    https-iws-files2.red.iplanet.com
You can create listen sockets through the Server Manager, and edit much of a listen socket’s information. For more information about adding and editing listen sockets, see the Sun Java System Web Server 6.1 SP9 Administrator’s Guide.
If you have created multiple listen sockets, perfdump displays all of them.
Set the TCP/IP listen queue size for all listen sockets by:
Setting or changing the ListenQ value in the Magnus Editor of the Server Manager
Entering the value in the Listen Queue Size field of the Performance Tuning page of the Server Manager
The Address field contains the base address that this listen socket is listening on. It contains the IP address and the port number.
If your listen socket listens on all IP addresses for the machine, the IP part of the address is 0.0.0.0.
This setting is tunable when you edit a listen socket. If you specify an IP address other than 0.0.0.0, the server makes one fewer system call per connection. For best possible performance, specify an IP address other than 0.0.0.0.
For more information about adding and editing listen sockets, see the Sun Java System Web Server 6.1 SP9 Administrator’s Guide.
Acceptor threads are threads that wait for connections. The threads accept connections and put them in a queue where they are then picked up by worker threads. Ideally, you want to have enough acceptor threads so that there is always one available when a user needs one, but few enough so that they do not provide too much of a burden on the system. A good rule is to have one acceptor thread per CPU on your system. You can increase this value to about double the number of CPUs if you find indications of TCP/IP listen queue overruns.
You can tune this number through the user interface when you edit a listen socket.
For more information about adding and editing listen sockets, see the Sun Java System Web Server 6.1 SP9 Administrator’s Guide.
Software virtual servers work using the HTTP/1.1 Host header. If the end user’s browser does not send the Host header, or if the server cannot find the virtual server specified by the Host header, Sun Java System Web Server handles the request using a default virtual server. Also, for hardware virtual servers, if Sun Java System Web Server cannot find the virtual server corresponding to the IP address, it displays the default virtual server. You can configure the default virtual server to send an error message or serve pages from a special document root.
You can specify a default virtual server for an individual listen socket and for the server instance. If a given listen socket does not have a default virtual server, the server instance’s default virtual server is used.
You can specify a default virtual server for a listen socket by:
Setting or changing the default virtual server information using the Edit Listen Sockets page on the Preferences tab of the Server Manager.
Editing the defaultvs attribute of the CONNECTIONGROUP element in the server.xml file. For more information about server.xml, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
This section provides information about the server’s HTTP-level keep-alive system. For additional tuning information, see Monitoring Current Activity Using the perfdump Utility.
The following example shows the keep-alive statistics displayed by perfdump:
KeepAliveInfo:
--------------------
KeepAliveCount        0/256
KeepAliveHits         0
KeepAliveFlushes      0
KeepAliveRefusals     0
KeepAliveTimeouts     0
KeepAliveTimeout      30 seconds
The name "keep-alive" should not be confused with TCP "keep-alives." Also, note that "keep-alive" was renamed "Persistent Connections" in HTTP/1.1, but perfdump continues to refer to them as "KeepAlive" connections.
Both HTTP/1.0 and HTTP/1.1 support the ability to send multiple requests across a single HTTP session. A web server can receive hundreds of new HTTP requests per second. If every request was allowed to keep the connection open indefinitely, the server could become overloaded with connections. On UNIX/Linux systems this could lead to a file table overflow very easily.
To deal with this problem, the server maintains a "Maximum number of waiting keep-alive connections" counter. A "waiting" keep-alive connection has fully completed processing the previous request, and is now waiting for a new request to arrive on the same connection. If the server has more than the maximum waiting connections open when a new connection waits for a keep-alive request, the server closes the oldest connection. This algorithm keeps an upper bound on the number of open waiting keep-alive connections that the server can maintain.
Sun Java System Web Server does not always honor a keep-alive request from a client. The following conditions cause the server to close a connection, even if the client has requested a keep-alive connection:
Dynamic content, such as a CGI, does not have an HTTP content-length header set. This applies only to HTTP/1.0 requests. If the request is HTTP/1.1, the server honors keep-alive requests even if the content-length is not set. The server can use chunked encoding for these requests if the client can handle them (indicated by the request header transfer-encoding: chunked). For more information about chunked encoding, see the Sun Java System Web Server 6.1 SP9 NSAPI Programmer’s Guide.
Request is not HTTP GET or HEAD.
The request was determined to be bad. For example, if the client sends only headers with no content.
You can configure the number of threads used in the keep-alive system by:
Setting or changing the KeepAliveThreads value in the Magnus Editor of the Server Manager
This setting has two numbers:
Number of connections in keep-alive mode
Maximum number of connections allowed in keep-alive mode simultaneously
You can tune the maximum number of sessions that the server allows to wait at one time before closing the oldest connection by:
Editing the MaxKeepAliveConnections parameter in the magnus.conf file
Setting or changing the MaxKeepAliveConnections value in the Magnus Editor of the Server Manager
The number of connections specified by MaxKeepAliveConnections is divided equally among the keep-alive threads. If MaxKeepAliveConnections is not equally divisible by KeepAliveThreads, the server may allow slightly more than MaxKeepAliveConnections simultaneous keep-alive connections.
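The effect of this division can be sketched as follows. This is a hedged model, not server code: it assumes each keep-alive thread's quota is the rounded-up share of the total, which is one way the server could end up allowing slightly more than the configured maximum.

```python
# Hedged sketch: dividing MaxKeepAliveConnections among KeepAliveThreads.
# When the division is not exact, each thread's quota is rounded up,
# so the effective total can slightly exceed the configured limit.
import math

def effective_keepalive_limit(max_connections, threads):
    per_thread = math.ceil(max_connections / threads)  # each thread's quota
    return per_thread * threads                        # total the server may allow

# 256 connections over 8 threads divide evenly: the limit is honored exactly.
print(effective_keepalive_limit(256, 8))   # 256
# 200 over 7 do not: each thread gets 29, so up to 203 may be allowed.
print(effective_keepalive_limit(200, 7))   # 203
```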
The number of times a request was successfully received from a connection that had been kept alive.
This setting is not tunable.
The number of times the server had to close a connection because the KeepAliveCount exceeded the MaxKeepAliveConnections. In the current version of the server, the server does not close existing connections when the KeepAliveCount exceeds the MaxKeepAliveConnections. Instead, new keep-alive connections are refused and the KeepAliveRefusals count is incremented.
The number of times the server could not hand off the connection to a keep-alive thread, possibly due to too many persistent connections (or when KeepAliveCount exceeds MaxKeepAliveConnections). Suggested tuning would be to increase MaxKeepAliveConnections.
The time (in seconds) before idle keep-alive connections are closed.
The number of times the server terminated keep-alive connections as the client connections timed out, without any activity. This is a useful statistic to monitor; no specific tuning is advised.
This option is not displayed in perfdump or Server Manager statistics. However, for UNIX/Linux users, it should be enabled for maximum performance.
Go to the Server Manager Preferences tab and select the Magnus Editor.
From the drop-down list, choose Keep-Alive Settings and click Manage.
Use the drop-down list to set UseNativePoll to On.
Click OK, and then click Apply.
Select Apply Changes to restart the server for your changes to take effect.
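The steps above ultimately set a magnus.conf directive; if you prefer to edit the file by hand, the equivalent entry is roughly the following (the on/off value is assumed from the Magnus Editor's Keep-Alive Settings):

```
# magnus.conf (UNIX/Linux) -- use the native poll interface
# for the keep-alive subsystem
UseNativePoll on
```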
Session creation statistics are only displayed in perfdump. Following is an example of the statistics displayed:
SessionCreationInfo:
------------------------
Active Sessions           1
Total Sessions Created    48/128
Active Sessions shows the number of sessions (request processing threads) currently servicing requests.
Total Sessions Created shows both the number of sessions that have been created and the maximum number of sessions allowed.
Reaching the maximum number of configured threads is not necessarily undesirable, and you do not need to increase the number of threads automatically. Reaching this limit means that the server needed this many threads at peak load; as long as it was able to serve requests in a timely manner, the server is adequately tuned. However, once the limit is reached, incoming connections wait in the connection queue and may overflow it. If you check your perfdump output regularly and notice that the total sessions created is often near the RqThrottle maximum, consider increasing your thread limits.
You can increase your thread limits by:
Setting or changing the RqThrottle value in the Magnus Editor of the Server Manager
Entering the value in the Maximum Simultaneous Requests field of the Performance Tuning page in the Server Manager
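As a sketch, the direct magnus.conf edit might be the following (256 is an illustrative value for a site that regularly peaks near the default of 128):

```
# magnus.conf -- allow more simultaneous request-processing threads
RqThrottle 256
```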
The cache information section provides statistics on how your file cache is being used. The file cache caches static content so that the server handles requests for static content quickly. For tuning information, see Tuning the File Cache.
Following is an example of how the cache statistics are displayed in perfdump:
CacheInfo:
------------------
enabled         yes
CacheEntries    0/1024
Hit Ratio       0/0 ( 0.00%)
Maximum Age     30
If the cache is disabled, the rest of this section is not displayed.
The cache is enabled by default. You can disable it by:
Unselecting it from the File Cache Configuration page under Preferences in the Server Manager.
Editing the FileCacheEnable parameter in the nsfc.conf file. For more information about this file, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
The number of current cache entries and the maximum number of cache entries are both displayed. A single cache entry represents a single URI.
You can set the maximum number of cached entries by:
Entering a value in the Maximum # of Files field on the File Cache Configuration page under Preferences in the Server Manager
Creating or editing the MaxFiles parameter in the nsfc.conf file. For more information about this file, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
The hit ratio gives you the number of file cache hits versus cache lookups. Numbers approaching 100% indicate the file cache is operating effectively, while numbers approaching 0% could indicate that the file cache is not serving many requests.
This setting is not tunable.
This displays the maximum age of a valid cache entry. The parameter controls how long cached information is used after a file has been cached. An entry older than the maximum age is replaced by a new entry for the same file.
If your web site’s content changes infrequently, you may want to increase this value for improved performance. You can set the maximum age by:
Entering or changing the value in the Maximum Age field of the File Cache Configuration page in the Server Manager.
Editing the MaxAge parameter in the nsfc.conf file. For more information about this file, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
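Taken together, the nsfc.conf parameters discussed above might be set as follows for a site whose content changes infrequently (the values are illustrative, not defaults):

```
# nsfc.conf -- file cache settings (illustrative values)
FileCacheEnable=true
MaxFiles=4096
MaxAge=3600
```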
Three types of thread pools can be configured through the Server Manager:
Thread Pools (UNIX/Linux)
Native Thread Pools (Windows)
Generic Thread Pools (Windows)
Since threads on UNIX/Linux are always operating system (OS)-scheduled, as opposed to user-scheduled, UNIX/Linux users do not need to use native thread pools, and this option is not offered in the user interface for these platforms. However, you can edit the OS-scheduled thread pools and add new thread pools if needed, using the Server Manager.
On Windows, the native thread pool (NativePool) is used internally by the server to execute NSAPI functions that require a native thread for execution. Windows users can edit native thread pool settings using the Server Manager.
Sun Java System Web Server uses NSPR, which is an underlying portability layer providing access to the host OS services. This layer provides abstractions for threads that are not always the same as those for the OS-provided threads. These non-native threads have lower scheduling overhead so their use improves performance. However, these threads are sensitive to blocking calls to the OS, such as I/O calls. To make it easier to write NSAPI extensions that can make use of blocking calls, the server keeps a pool of threads that safely support blocking calls. This usually means it is a native OS thread. During request processing, any NSAPI function that is not marked as being safe for execution on a non-native thread is scheduled for execution on one of the threads in the native thread pool.
If you have written your own NSAPI plugins such as NameTrans, Service, or PathCheck functions, these execute by default on a thread from the native thread pool. If your plugin makes use of the NSAPI functions for I/O exclusively or does not use the NSAPI I/O functions at all, then it can execute on a non-native thread. For this to happen, the function must be loaded with a NativeThread=”no” option, indicating that it does not require a native thread.
To do this, add the following to the "load-modules" Init line in the magnus.conf file:
Init funcs="pcheck_uri_clean_fixed_init" shlib="C:/Netscape/p186244/P186244.dll" fn="load-modules" NativeThread="no"
The NativeThread flag affects all functions in the funcs list, so if you have more than one function in a library, but only some of them use native threads, use separate Init lines.
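For example, functions that avoid blocking calls can be loaded on their own Init line with NativeThread="no", while blocking functions stay on a default line. The library paths and function names below are hypothetical:

```
Init fn="load-modules" shlib="/plugins/nonblocking.so" funcs="fast-ntrans,fast-service" NativeThread="no"
Init fn="load-modules" shlib="/plugins/blocking.so" funcs="legacy-service"
```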
On Windows, you can set up additional thread pools using the Server Manager. Use thread pools to put a limit on the maximum number of requests answered by a service function at any moment. Additional thread pools are a way to run thread-unsafe plugins. By defining a pool with a maximum number of threads set to 1, only one request is allowed into the specified service function.
Idle indicates the number of threads that are currently idle. Peak indicates the peak number in the pool. Limit indicates the maximum number of native threads allowed in the thread pool, and is determined by the setting of NativePoolMaxThreads.
You can modify the NativePoolMaxThreads by:
Editing the NativePoolMaxThreads parameter in magnus.conf
Entering or changing the value in the Maximum Threads field of the Native Thread Pool page in the Server Manager
These numbers refer to a queue of server requests that are waiting for the use of a native thread from the pool. The Work Queue Length is the current number of requests waiting for a native thread.
Peak is the highest number of requests that were ever queued up simultaneously for the use of a native thread since the server was started. This value can be viewed as the maximum concurrency for requests requiring a native thread.
Limit is the maximum number of requests that can be queued at one time to wait for a native thread, and is determined by the setting of NativePoolQueueSize.
You can modify the NativePoolQueueSize by:
Editing the NativePoolQueueSize parameter in magnus.conf
Entering or changing the value in the Queue Size field of the Native Thread Pool page in the Server Manager
The NativePoolStackSize determines the stack size in bytes of each thread in the native (kernel) thread pool.
You can modify the NativePoolStackSize by:
Editing the NativePoolStackSize parameter in magnus.conf
Setting or changing the NativePoolStackSize value in the Magnus Editor of the Server Manager
Entering or changing the value in the Stack Size field of the Native Thread Pool page in the Server Manager
The NativePoolQueueSize determines the number of threads that can wait in the queue for the thread pool. If all threads in the pool are busy, then the next request-handling thread that needs to use a thread in the native pool must wait in the queue. If the queue is full, the next request-handling thread that tries to get in the queue is rejected, with the result that it returns a busy response to the client. It is then free to handle another incoming request instead of being tied up waiting in the queue.
Setting the NativePoolQueueSize lower than the RqThrottle value causes the server to execute a busy function instead of the intended NSAPI function whenever the number of requests waiting for service by pool threads exceeds this value. The default returns a "503 Service Unavailable" response and logs a message if LogVerbose is enabled. Setting the NativePoolQueueSize higher than RqThrottle causes the server to reject connections before a busy function can execute.
This value represents the maximum number of concurrent requests for service that require a native thread. If your system is unable to fulfill requests due to load, letting more requests queue up increases the latency for requests, and could result in all available request threads waiting for a native thread. In general, set this value to be high enough to avoid rejecting requests by anticipating the maximum number of concurrent users who would execute requests requiring a native thread.
The difference between this value and RqThrottle is the number of requests reserved for non-native thread requests, such as static HTML and image files. Keeping a reserve and rejecting requests ensures that your server continues to fill requests for static files, which prevents it from becoming unresponsive during periods of very heavy dynamic content load. If your server consistently rejects connections, this value is either set too low, or your server hardware is overloaded.
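The reserve described above is simple arithmetic; the following sketch (not server code) shows the relationship between the two directives:

```python
# Hedged sketch of the static-file reserve described above: with
# NativePoolQueueSize lower than RqThrottle, the difference is the number
# of request threads that can never be tied up waiting for a native thread.
def static_reserve(rq_throttle, native_pool_queue_size):
    return max(rq_throttle - native_pool_queue_size, 0)

# With the default RqThrottle of 128 and a queue size of 96,
# 32 threads remain free for static HTML and image files.
print(static_reserve(128, 96))  # 32
```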
You can modify the NativePoolQueueSize by:
Editing the NativePoolQueueSize parameter in magnus.conf
Entering or changing the value in the Queue Size field of the Native Thread Pool page in the Server Manager
NativePoolMaxThreads determines the maximum number of threads in the native (kernel) thread pool.
A higher value allows more requests to execute concurrently, but has more overhead due to context switching, so bigger is not always better. Typically, you will not need to increase this number, but if you are not saturating your CPU and you are seeing requests queue up, then you should increase this number.
You can modify the NativePoolMaxThreads by:
Editing the NativePoolMaxThreads parameter in magnus.conf
Entering or changing the value in the Maximum Threads field of the Native Thread Pool page in the Server Manager
Determines the minimum number of threads in the native (kernel) thread pool.
You can modify the NativePoolMinThreads by:
Editing the NativePoolMinThreads parameter in magnus.conf
Setting or changing the NativePoolMinThreads value in the Magnus Editor of the Server Manager
Entering or changing the value in the Minimum Threads field of the Native Thread Pool page in the Server Manager
The DNS cache caches IP addresses and DNS names. Your server’s DNS cache is disabled by default. Statistics are displayed in the DNS Statistics for Process ID page under Monitor in the Server Manager.
If the DNS cache is disabled, the rest of this section is not displayed.
By default, the DNS cache is off. You can enable DNS caching by:
Adding the following line to magnus.conf:
Init fn=dns-cache-init
Setting the DNS value to "on" in the Magnus Editor of the Server Manager
Selecting DNS Enabled from the Performance Tuning page under Preferences in the Server Manager
The number of current cache entries and the maximum number of cache entries. A single cache entry represents a single IP address or DNS name lookup. The cache should be as large as the maximum number of clients that will access your web site concurrently. Note that setting the cache size too high will waste memory and degrade performance.
You can set the maximum size of the DNS cache by:
Adding the following line to the magnus.conf file:
Init fn=dns-cache-init cache-size=1024
The default cache size is 1024.
Entering or changing the value in the Size of DNS cache field of the Performance Tuning page in the Server Manager
The hit ratio displays the number of cache hits versus the number of cache lookups.
This setting is not tunable.
The default busy function returns a "503 Service Unavailable" response and logs a message if LogVerbose is enabled. You may wish to modify this behavior for your application. You can specify your own busy functions for any NSAPI function in the obj.conf file by including a service function in the configuration file in this format:
busy="<my-busy-function>"
For example, you could use this sample service function:
Service fn="send-cgi" busy="service-toobusy"
This allows different responses if the server becomes too busy in the course of processing a request that includes a number of types (such as Service, AddLog, and PathCheck). Note that your busy function will apply to all functions that require a native thread to execute when the default thread type is non-native.
To use your own busy function instead of the default busy function for the entire server, you can write an NSAPI init function that includes a func_insert call as shown below:
extern "C" NSAPI_PUBLIC int my_custom_busy_function(pblock *pb, Session *sn, Request *rq);

my_init(pblock *pb, Session *, Request *)
{
    func_insert("service-toobusy", my_custom_busy_function);
}
Busy functions are never executed on a pool thread, so you must be careful to avoid using function calls that could cause the thread to block.
This section includes the following topics:
In Sun Java System Web Server, acceptor threads on a listen socket accept connections and put them into a connection queue. Session threads then pick up connections from the queue and service the requests. The session threads post more session threads if required at the end of the request. The policy for adding new threads is based on the connection queue state:
Each time a new connection is returned, the number of connections waiting in the queue (the backlog of connections) is compared to the number of session threads already created. If it is greater than the number of threads, more threads are scheduled to be added the next time a request completes.
The previous backlog is tracked, so that if it is seen to be increasing over time, and if the increase is greater than the ThreadIncrement value, and the number of session threads minus the backlog is less than the ThreadIncrement value, then another ThreadIncrement number of threads are scheduled to be added.
The process of adding new session threads is strictly limited by the RqThrottle value.
To avoid creating too many threads when the backlog increases suddenly (such as the startup of benchmark loads), the decision as to whether more threads are needed is made only once every 16 or 32 times a connection is made based on how many session threads already exist.
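The policy above can be sketched as follows. This is a hedged illustration of the comparisons the text describes, not the server's actual implementation (in particular, the real server also makes this decision only once every 16 or 32 connections):

```python
# Hedged simulation of the session-thread growth policy described above:
# compare the connection backlog to the current thread count, apply the
# ThreadIncrement rules, and cap growth at RqThrottle.
def threads_to_add(backlog, prev_backlog, threads, thread_increment, rq_throttle):
    add = 0
    if backlog > threads:                       # queue deeper than thread count
        add = backlog - threads
    elif (backlog > prev_backlog and            # backlog increasing over time
          backlog - prev_backlog > thread_increment and
          threads - backlog < thread_increment):
        add = thread_increment
    return min(add, rq_throttle - threads)      # never exceed RqThrottle

# Backlog of 40 against 32 existing threads: schedule 8 more.
print(threads_to_add(40, 10, 32, 10, 128))  # 8
```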
The following directives that affect the number and timeout of threads, processes, and connections can be tuned in the Magnus Editor or magnus.conf:
AcceptTimeout
ConnQueueSize
HeaderBufferSize
KeepAliveThreads
KeepAliveTimeout
KernelThreads
ListenQ
MaxKeepAliveConnections
MaxProcs (UNIX Only)
PostThreadsEarly
RcvBufSize
RqThrottle
RqThrottleMin
SndBufSize
StackSize
StrictHttpHeaders
TerminateTimeout
ThreadIncrement
UseNativePoll (UNIX only)
For detailed information about these directives, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
You can run Sun Java System Web Server in one of the following two modes:
In the single-process mode the server receives requests from web clients to a single process. Inside the single server process many threads are running that are waiting for new requests to arrive. When a request arrives, it is handled by the thread receiving the request. Because the server is multi-threaded, all NSAPI extensions written to the server must be thread-safe. This means that if the NSAPI extension uses a global resource, like a shared reference to a file or global variable, then the use of that resource must be synchronized, so that only one thread accesses it at a time. All plugins provided by Netscape/Sun Java System are thread-safe and thread-aware, providing good scalability and concurrency. However, your legacy applications may be single-threaded. When the server runs the application, it can only execute one at a time. This leads to server performance problems when put under load. Unfortunately, in the single-process design, there is no real workaround.
You can configure the server to handle requests using multiple processes with multiple threads in each process. This flexibility provides optimal performance for sites using threads, and also provides backward compatibility to sites running legacy applications that are not ready to run in a threaded environment. Because applications on Windows generally already take advantage of multi-thread considerations, this feature applies to UNIX/Linux platforms.
The advantage of multiple processes is that legacy applications that are not thread-aware or thread-safe can be run more effectively in Sun Java System Web Server. However, because all of the Netscape/Sun ONE extensions are built to support a single-process threaded environment, they may not run in the multi-process mode, and the Search plugins will fail on startup if the server is in multi-process mode.
In the multi-process mode, the server spawns multiple server processes at startup. Each process contains one or more threads (depending on the configuration) that receive incoming requests. Since each process is completely independent, each one has its own copies of global variables, caches, and other resources. Using multiple processes requires more resources from your system. Also, if you try to install an application that requires shared state, it has to synchronize that state across multiple processes. NSAPI provides no helper functions for implementing cross-process synchronization.
When you specify a MaxProcs value greater than 1, the server relies on the operating system to distribute connections among multiple server processes (see MaxProcs (UNIX/Linux) for information about the MaxProcs directive). However, many modern operating systems will not distribute connections evenly, particularly when there are a small number of concurrent connections.
Because Sun Java System Web Server cannot guarantee that load is distributed evenly among server processes, you may encounter performance problems if you specify RqThrottle 1 and MaxProcs greater than 1 to accommodate a legacy application that is not thread-safe. The problem will be especially pronounced if the legacy application takes a long time to respond to requests (for example, if the legacy application contacts a backend database). In this scenario, it may be preferable to use the default value for RqThrottle and serialize access to the legacy application using thread pools. For more information about creating a thread pool, refer to the description of the thread-pool-init SAF in the Sun Java System Web Server 6.1 NSAPI Programmer's Guide.
If you are not running any NSAPI in your server, you should use the default settings: one process and many threads. If you are running an application that is not scalable in a threaded environment, you should use a few processes and many threads, for example, 4 or 8 processes and 128 or 512 threads per process.
Use this directive to set your UNIX/Linux server in multi-process mode, which may allow for higher scalability on multi-processor machines. If you set the value to less than 1, it will be ignored and the default value of 1 will be used. See Multi-Process Mode for a discussion of the performance implications of setting this to a value greater than 1.
You can set the value for MaxProcs by:
Editing the MaxProcs parameter in magnus.conf
Setting or changing the MaxProcs value in the Magnus Editor of the Server Manager
You will receive duplicate startup messages when running your server in multi-process mode (MaxProcs greater than 1).
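For example, a magnus.conf fragment for the multi-process configuration suggested above might read as follows. The specific values are illustrative, not recommendations:

```conf
# magnus.conf -- multi-process mode for an application that does not
# scale well in a threaded environment (illustrative values)
MaxProcs 4
RqThrottle 128
```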
You can specify how many threads you want in accept mode on a listen socket at any time. It’s a good practice to set this to less than or equal to the number of CPUs in your system.
You can set the number of listen socket acceptor threads by:
Editing the server.xml file
Entering the number of acceptor threads you want in the Number of Acceptor Threads field of the Edit Listen Socket page of the Server Manager
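As a sketch, the corresponding server.xml fragment for a 2-CPU system might look like the following; the id, ip, and port values are placeholders:

```xml
<!-- server.xml fragment: two acceptor threads on one listen socket -->
<LS id="ls1" ip="0.0.0.0" port="80" security="off" acceptorthreads="2"/>
```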
The RqThrottle parameter in the magnus.conf file specifies the maximum number of simultaneous transactions the Web Server can handle. The default value is 128. Changes to this value can be used to throttle the server, minimizing latencies for the transactions that are performed. The RqThrottle value acts across multiple virtual servers, but does not attempt to load balance.
To compute the number of simultaneous requests, the server counts the number of active requests, adding one to the number when a new request arrives, subtracting one when it finishes the request. When a new request arrives, the server checks to see if it is already processing the maximum number of requests. If it has reached the limit, it defers processing new requests until the number of active requests drops below the maximum amount.
In theory, you could set the maximum simultaneous requests to 1 and still have a functional server. Setting this value to 1 would mean that the server could only handle one request at a time, but since HTTP requests for static files generally have a very short duration (response time can be as low as 5 milliseconds), processing one request at a time would still allow you to process up to 200 requests per second.
However, in actuality, Internet clients frequently connect to the server and then do not complete their requests. In these cases, the server waits 30 seconds or more for the data before timing out. You can define this timeout period using the AcceptTimeout directive in magnus.conf. The default value is 30 seconds. By setting it to less than the default you can free up threads sooner, but you might also disconnect users with slower connections. Also, some sites perform heavyweight transactions that take minutes to complete. Both of these factors add to the maximum simultaneous requests that are required. If your site is processing many requests that take many seconds, you may need to increase the number of maximum simultaneous requests. For more information about AcceptTimeout, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
Suitable RqThrottle values range from 100-500, depending on the load.
RqThrottleMin is the minimum number of threads the server initiates upon startup. The default value is 48. RqThrottle represents a hard limit for the maximum number of active threads that can run simultaneously, which can become a bottleneck for performance. The default value is 128.
If you are using older NSAPI plugins that are not reentrant, they will not work with the multi-threading model described in this document. To continue using them, you should revise them so that they are reentrant. If this is not possible, you can configure your server to work with them by setting RqThrottle to 1, and then using a high value for MaxProcs, such as 48 or greater, but this will adversely impact your server’s performance.
When configuring Sun Java System Web Server to be used with SNCA (the Solaris Network Cache and Accelerator), setting the RqThrottle and ConnQueueSize parameters to 0 provides better performance. Because SNCA manages the client connections, it is not necessary to set these parameters. These parameters can also be set to 0 with non-SNCA configurations, especially for cases in which short latency responses with no keep-alives must be delivered. It is important to note that RqThrottle and ConnQueueSize must both be set to 0.
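For an SNCA configuration, the relevant magnus.conf lines would therefore be:

```conf
# magnus.conf -- SNCA-managed connections
# Both directives must be set to 0 together.
RqThrottle 0
ConnQueueSize 0
```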
For more information about RqThrottle and ConnQueueSize, see the chapter pertaining to magnus.conf in the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference. Also consult the RqThrottle and ConnQueueSize entries in the index in this book. For information about using SNCA, see Using the Solaris Network Cache and Accelerator (SNCA).
You can tune the number of simultaneous requests by:
Editing RqThrottleMin and RqThrottle in the magnus.conf file
Entering or changing values for the RqThrottleMin and RqThrottle fields in the Magnus Editor of the Server Manager
Entering the desired value in the Maximum Simultaneous Requests field from the Performance Tuning page under Preferences in the Server Manager
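As a sketch, a magnus.conf fragment for a moderately loaded site might look like the following; the values are illustrative, chosen within the 100-500 range suggested above:

```conf
# magnus.conf -- simultaneous request tuning (illustrative values)
RqThrottleMin 48
RqThrottle 256
```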
The keep-alive (or HTTP/1.1 persistent connection handling) subsystem in Sun Java System Web Server 6.1 is designed to be massively scalable. The out-of-the-box configuration can be less than optimal if the workload is non-persistent (that is, HTTP/1.0 without the KeepAlive header), or for a lightly loaded system that’s primarily servicing keep-alive connections.
There are several tuning parameters that can help improve performance. Those parameters are listed below:
acceptorthreads: Number of threads waiting to accept incoming connections on a given network port. This is specified per the listen socket (LS) element in server.xml.
ConnQueueSize: Size of the queue of active, ready-to-process connections.
RqThrottle: Number of worker threads in the server. Each thread parses and services a request from an active connection. Worker threads, in contrast with acceptor threads, service requests. The maximum number of worker threads is configured using RqThrottle. For more information, see Maximum Simultaneous Requests.
MaxKeepAliveConnections: This controls the maximum number of keep-alive connections the Web Server can maintain at any time. The default is 256. The range is 0 to 32768.
KeepAliveTimeout: This directive determines the maximum time (in seconds) that the server holds open an HTTP keep-alive connection or a persistent connection between the client and the server. The default is 30 seconds. The connection will timeout if idle for more than 30 seconds. The maximum is 300 seconds (5 minutes).
KeepAliveThreads: This directive determines the number of threads in the keep-alive subsystem. It is recommended that this number be a small multiple of the number of processors on the system (for example, a 2 CPU system should have 2 or 4 keep-alive threads). The default is 1.
KeepAliveQueryMaxSleepTime: Specifies an upper limit to the time slept (in milliseconds) after polling keep-alive connections for further requests. The default is 100. On lightly loaded systems that primarily service keep-alive connections, you can lower this number to enhance performance. Doing so can increase CPU usage, however.
KeepAliveQueryMeanTime: Specifies the desired keep-alive latency in milliseconds. The default value of 100 is appropriate for almost all installations. Note that CPU usage will increase with lower KeepAliveQueryMeanTime values.
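Taken together, a magnus.conf sketch of these keep-alive directives for a 2-CPU system, using mostly default values, might read:

```conf
# magnus.conf -- keep-alive subsystem (illustrative; mostly defaults)
MaxKeepAliveConnections 256
KeepAliveTimeout 30
# A small multiple of the CPU count:
KeepAliveThreads 2
# Lower these on lightly loaded systems to reduce latency:
KeepAliveQueryMaxSleepTime 100
KeepAliveQueryMeanTime 100
```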
For more information about the Web Server’s keep-alive subsystem, see Keep-Alive/Persistent Connection Information.
For information about connection queue sizing, see Connection Queue Information.
Since HTTP/1.0 results in a large number of new incoming connections, the default acceptor threads of 1 per listen socket would be suboptimal. Increasing this to a higher number should improve performance for HTTP/1.0-style workloads. For instance, for a system with 2 CPUs, you may want to set it to 2.
In the following example, acceptor threads are increased, and keep-alive connections are reduced:
In magnus.conf:

MaxKeepAliveConnections 0
RqThrottle 128
RcvBufSize 8192

In server.xml:

<SERVER legacyls="ls1">
<LS id="ls1" ip="0.0.0.0" port="8080" security="off" blocking="no" acceptorthreads="2"/>
</SERVER>
HTTP/1.0-style workloads would have many connections established and terminated.
If users are experiencing connection timeouts from a browser to Sun Java System Web Server when the server is heavily loaded, you can increase the size of the HTTP listener backlog queue by setting the ListenQ parameter in the magnus.conf file to:
ListenQ 8192
The ListenQ parameter specifies the maximum number of pending connections on a listen socket. Connections that time out on a listen socket whose backlog queue is full will fail.
In general, tuning the server’s persistent connection handling is a tradeoff between throughput and latency. The KeepAliveQuery* directives (KeepAliveQueryMeanTime and KeepAliveQueryMaxSleepTime) control latency. Lowering the values of these directives is intended to lower latency on lightly loaded systems (for example, reduce page load times). Raising the values of these directives is intended to raise aggregate throughput on heavily loaded systems (for example, increase the number of requests per second the server can handle). However, if there's too much latency and too few clients, aggregate throughput will suffer as the server sits idle unnecessarily. As a result, the general keep-alive subsystem tuning rules at a particular load are as follows:
If there's idle CPU time, decrease KeepAliveQueryMeanTime and/or KeepAliveQueryMaxSleepTime.
If there's no idle CPU time, increase KeepAliveQueryMeanTime and/or KeepAliveQueryMaxSleepTime.
For more information about these directives, see Keep-Alive Subsystem Tuning.
Chunked encoding can also affect performance for HTTP/1.1 workloads. Tuning the response buffer size can improve performance: a higher UseOutputStreamSize value for a plugin causes the server to send a Content-length header instead of chunking the response.
In the following example, MaxKeepAliveConnections is increased, as is UseOutputStreamSize for the nsapi_test Service function:
In magnus.conf:

MaxKeepAliveConnections 8192
KeepAliveThreads 2
UseNativePoll 1
RqThrottle 128
RcvBufSize 8192

In obj.conf:

<Object name="nsapitest">
ObjectType fn="force-type" type="magnus-internal/nsapitest"
Service method=(GET) type="magnus-internal/nsapitest" fn="nsapi_test" UseOutputStreamSize=8192
</Object>
Sun Java System Web Server uses a file cache to serve static information faster. In previous versions of the server, there was also an accelerator cache that routed requests to the file cache, but the accelerator cache is no longer used. The file cache contains information about files and static file content. The file cache also caches information that is used to speed up processing of server-parsed HTML.
This section includes the following topics:
The file cache is turned on by default. The file cache settings are contained in a file called nsfc.conf. You can use the Server Manager to change the file cache settings. For more information about nsfc.conf, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
From the Server Manager, select the Preferences tab.
Select File Cache Configuration.
Select Enable File Cache, if not already selected.
Choose whether to transmit files.
When you enable Transmit File, the server caches open file descriptors for files in the file cache, rather than the file contents, and PR_TransmitFile is used to send the file contents to a client. When Transmit File is enabled, the distinction normally made by the file cache between small, medium, and large files no longer applies, since only the open file descriptor is being cached. By default, Transmit File is enabled on Windows, and disabled on UNIX. On UNIX, only enable Transmit File for platforms that have native OS support for PR_TransmitFile, which currently includes HP-UX and AIX. It is not recommended for other UNIX/Linux platforms.
Enter a size for the hash table.
The default size is twice the maximum number of files plus 1. For example, if your maximum number of files is set to 1024, the default hash table size is 2049.
Enter a maximum age in seconds for a valid cache entry.
By default, this is set to 30.
This setting controls how long cached information will continue to be used once a file has been cached. An entry older than MaxAge is replaced by a new entry for the same file, if the same file is referenced through the cache.
Set the maximum age based on whether the content is updated (existing files are modified) on a regular schedule. For example, if content is updated four times a day at regular intervals, you could set the maximum age to 21600 seconds (6 hours). Otherwise, consider setting the maximum age to the longest time you are willing to serve the previous version of a content file after the file has been modified.
Enter the Maximum Number of Files to be cached.
By default, this is set to 1024.
(UNIX/Linux only) Enter medium and small file size limits in bytes.
By default, the Medium File Size Limit is set to 537600.
By default, the Small File Size Limit is set to 2048.
The cache treats small, medium, and large files differently. The contents of medium files are cached by mapping the file into virtual memory (currently only on UNIX/Linux platforms). The contents of small files are cached by allocating heap space and reading the file into it. The contents of large files (larger than medium) are not cached, although information about large files is cached.
The advantage of distinguishing between small files and medium files is to avoid wasting part of many pages of virtual memory when there are lots of small files. So the Small File Size Limit is typically a slightly lower value than the VM page size.
(UNIX/Linux only) Set the medium and small file space.
The medium file space is the size in bytes of the virtual memory used to map all medium sized files. By default, this is set to 10485760.
The small file space is the size of heap space in bytes used for the cache, including heap space used to cache small files. By default, this is set to 1048576 for UNIX/Linux.
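The Server Manager stores these settings in nsfc.conf as name=value pairs. The following is a sketch of a file reflecting the defaults discussed above; the parameter spellings shown here should be verified against the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference:

```conf
FileCacheEnable=true
TransmitFile=false
HashInitSize=2049
MaxAge=30
MaxFiles=1024
SmallFileSizeLimit=2048
MediumFileSizeLimit=537600
SmallFileSpace=1048576
MediumFileSpace=10485760
```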
Click OK, and then click Apply.
Select Apply Changes to restart your server and put your changes into effect.
You can use the parameter nocache for the Service function send-file to specify that files in a certain directory should not be cached. For example, if you have a set of files that changes too rapidly for caching to be useful, you can put them into a directory and instruct the server not to cache files in that directory by editing obj.conf.
<Object name=default>
...
NameTrans fn="pfx2dir" from="/myurl" dir="/export/mydir" name="myname"
...
Service method=(GET|HEAD|POST) type=*~magnus-internal/* fn=send-file
...
</Object>

<Object name="myname">
Service method=(GET|HEAD) type=*~magnus-internal/* fn=send-file nocache=""
</Object>
In the above example, the server does not cache static files from /export/mydir/ when requested by the URL prefix /myurl.
From the Server Manager, select Monitor.
Select Monitor Current Activity.
If you have not yet activated statistics, the Enable Statistics/Profiling page displays. Activate statistics, click OK, restart the server, and then return to this page.
Select a refresh interval from the drop-down list.
From the drop-down list of statistics to be displayed, choose Cache and then click Submit.
The cache statistics display and are refreshed every 5-15 seconds, depending on the refresh interval.
The statistics include information on your cache settings, how many hits the cache is getting, and so on.
You can add an object to obj.conf to dynamically monitor and control the nsfc.conf file cache while the server is running.
Add a NameTrans directive to the default object:
NameTrans fn="assign-name" from="/nsfc" name="nsfc"
Add an nsfc object definition:
<Object name="nsfc">
Service fn=service-nsfc-dump
</Object>
This enables the file cache control and monitoring function (nsfc-dump) to be accessed through the URI "/nsfc". By changing the "from" parameter in the NameTrans directive, a different URI can be used.
The following is an example of the information you receive when you access the URI:
Sun Java System Web Server File Cache Status (pid 7960)

The file cache is enabled.

Cache resource utilization

Number of cached file entries = 1039 (112 bytes each, 116368 total bytes)
Heap space used for cache = 237641/1204228 bytes
Mapped memory used for medium file contents = 5742797/10485760 bytes
Number of cache lookup hits = 435877/720427 ( 60.50 %)
Number of hits/misses on cached file info = 212125/128556
Number of hits/misses on cached file content = 19426/502284
Number of outdated cache entries deleted = 0
Number of cache entry replacements = 127405
Total number of cache entries deleted = 127407
Number of busy deleted cache entries = 17

Parameter settings

HitOrder: false
CacheFileInfo: true
CacheFileContent: true
TransmitFile: false
MaxAge: 30 seconds
MaxFiles: 1024 files
SmallFileSizeLimit: 2048 bytes
MediumFileSizeLimit: 537600 bytes
CopyFiles: false
Directory for temporary files: /tmp/netscape/https-axilla.mcom.com
Hash table size: 2049 buckets
You can include a query string when you access the "/nsfc" URI. The following values are recognized:
?list: Lists the files in the cache.
?refresh=n: Causes the client to reload the page every n seconds.
?restart: Causes the cache to be shut down and then restarted.
?start: Starts the cache.
?stop: Shuts down the cache.
If you choose the ?list option, the file listing includes the file name, a set of flags, the current number of references to the cache entry, the size of the file, and an internal file ID value. The flags are as follows:
C: File contents are cached.
D: Cache entry is marked for delete.
I: File information (size, modify date, and so on) is cached.
M: File contents are mapped into virtual memory.
O: File descriptor is cached (when TransmitFile is set to true).
P: File has associated private data (should appear on shtml files).
T: Cache entry has a temporary file.
W: Cache entry is locked for write access.
For sites with scheduled updates to content, consider shutting down the cache while the content is being updated, and starting it again after the update is complete. Although performance will slow down, the server operates normally when the cache is off.
The ACL user cache is on by default. Because of the default size of the cache (200 entries), the ACL user cache can be a bottleneck, or can simply not serve its purpose on a site with heavy traffic. On a busy site, more than 200 users can hit ACL-protected resources in less time than the lifetime of the cache entries. When this situation occurs, Sun Java System Web Server must query the LDAP server more often to validate users, which impacts performance.
This bottleneck can be avoided by increasing the size of the ACL cache with the ACLUserCacheSize directive in magnus.conf. Note that increasing the cache size will use more resources; the larger you make the cache, the more RAM you'll need to hold it.
There can also be a potential (but much harder to hit) bottleneck with the number of groups stored in a cache entry (4 by default). If a user belongs to 5 groups and hits 5 ACLs that check for these different groups within the ACL cache lifetime, an additional cache entry is created to hold the additional group entry. When there are 2 cache entries, the entry with the original group information is ignored.
While it would be extremely unusual to hit this possible performance problem, the number of groups cached in a single ACL cache entry can be tuned with the ACLGroupCacheSize directive.
This section includes the following topics:
To adjust the ACL user cache values you must manually add the following directives to your magnus.conf file:
ACLCacheLifetime
ACLUserCacheSize
ACLGroupCacheSize
Set this directive to a number that determines the number of seconds before the cache entries expire. Each time an entry in the cache is referenced, its age is calculated and checked against ACLCacheLifetime. The entry is not used if its age is greater than or equal to the ACLCacheLifetime. The default value is 120 seconds. If this value is set to 0, the cache is turned off. If you use a large number for this value, you may need to restart Sun Java System Web Server when you make changes to the LDAP entries. For example, if this value is set to 120 seconds, Sun Java System Web Server might be out of sync with the LDAP server for as long as two minutes. If your LDAP is not likely to change often, use a large number.
Set this directive to a number that determines the size of the User Cache (default is 200).
Set this directive to a number that determines how many group IDs can be cached for a single UID/cache entry (default is 4).
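For example, a busy site might add the following to magnus.conf; the values are illustrative, and should be sized to your own traffic and group membership patterns:

```conf
# magnus.conf -- ACL user cache tuning (illustrative values)
ACLCacheLifetime 120
ACLUserCacheSize 400
ACLGroupCacheSize 8
```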
With LogVerbose you can verify that the ACL user cache settings are being used. When LogVerbose is turned on, you should see the following messages in your errors log when the server starts:
User authentication cache entries expire in ### seconds.
User authentication cache holds ### users.
Up to ### groups are cached for each cached user.
You can turn LogVerbose on by:
Editing the LogVerbose parameter in magnus.conf
Setting or changing the LogVerbose value to "on" in the Magnus Editor of the Server Manager
Do not turn on LogVerbose on a production server. Doing so degrades performance and greatly increases the size of your error logs.
This section includes the following topics:
As with all Java programs, the performance of the web applications in the Sun Java System Web Server is dependent on the heap management performed by the virtual machine (VM). There is a trade-off between pause times and throughput. A good place to start is by reading the performance documentation for the Java HotSpot virtual machine, which can be found at the following location:
http://java.sun.com/docs/hotspot/index.html
Java VM options are specified using the JVMOPTIONS subelement of the JAVA element in server.xml. For more information, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference.
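As a sketch, heap-related HotSpot options could be passed as follows. The JAVA element normally carries additional attributes, omitted here, and the heap sizes are illustrative rather than recommendations:

```xml
<JAVA>
  <!-- One JVMOPTIONS subelement per VM option -->
  <JVMOPTIONS>-Xms128m</JVMOPTIONS>
  <JVMOPTIONS>-Xmx128m</JVMOPTIONS>
</JAVA>
```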
Compiling JSPs is a resource-intensive and relatively time-consuming process. By default, the Web Server periodically checks to see if your JSPs have been modified and dynamically reloads them; this allows you to deploy modifications without restarting the server. The reload-interval property of the jsp-config element in sun-web.xml controls how often the server checks JSPs for modifications. However, there is a small performance penalty for that checking.
When the server detects a change in a .jsp file, only that JSP is recompiled and reloaded; the entire web application is not reloaded. If your JSPs do not change, you can improve performance by precompiling your JSPs before deploying them onto your server. For more information about jsp-config and about precompiling JSPs for Sun Java System Web Server, see the Sun Java System Web Server 6.1 SP9 Programmer’s Guide to Web Applications. Also see the following section, Configuring Class Reloading.
If you spend a lot of time re-running the same servlet/JSP, you can cache its results and return results out of the cache the next time it is run. For example, this is useful for common queries that all visitors to your site run: you want the results of the query to be dynamic because it might change day to day, but you do not need to run the logic for every user.
To enable caching, you configure the caching parameters in the sun-web.xml file of your application. For more details, see information about caching servlet results in the Sun Java System Web Server 6.1 SP9 Programmer’s Guide to Web Applications.
Sun Java System Web Server 6.1 supports the Java Security Manager. The main drawback of running with the Security Manager is that it negatively impacts performance. The Java Security Manager is disabled by default when you install the product. Running without the Security Manager may improve performance significantly for some types of applications. Based on your application and deployment needs, you should evaluate whether to run with or without the Security Manager. For more information, see the Sun Java System Web Server 6.1 SP9 Programmer’s Guide to Web Applications.
The dynamicreloadinterval attribute of the JAVA element in server.xml and the dynamic-reload-interval property of the class-loader element in sun-web.xml control the frequency at which the server checks for changes in servlet classes. When dynamic reloading is enabled and the server detects that a .class file has changed, the entire web application is reloaded. In a production environment where changes are made in a scheduled manner, set this value to -1 to prevent the server from constantly checking for updates. The default value is -1 (that is, class reloading is disabled). For more information about elements in server.xml, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference. For more information about elements in sun-web.xml, see the Sun Java System Web Server 6.1 SP9 Programmer’s Guide to Web Applications. Also see the previous section in this guide, Using Precompiled JSPs.
For certain applications (especially if the Java Security Manager is enabled), you can improve the performance by ensuring that there are no directories in the classpath. To do so, ensure that there are no directories in the classpath elements in server.xml (serverclasspath, classpathprefix, classpathsuffix). For more information about these elements, see the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference. Also, package the web application's .class files in a .jar archive in WEB-INF/lib instead of packaging the .class files as is in WEB-INF/classes, and ensure that the .war archive does not contain a WEB-INF/classes directory.
If you have relatively short-lived sessions, try decreasing the session timeout by configuring the value of the timeOutSeconds property under the session-properties element in sun-web.xml from the default value of 10 minutes.
If you have relatively long-lived sessions, you can try decreasing the frequency at which the session reaper runs by increasing the value of the reapIntervalSeconds property from the default value of once every minute.
For more information about these settings, and about session managers, see the Sun Java System Web Server 6.1 SP9 Programmer’s Guide to Web Applications.
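A hedged sun-web.xml sketch of the two properties discussed above follows. The placement of reapIntervalSeconds under manager-properties is an assumption that should be verified against the Programmer’s Guide, and the values are illustrative:

```xml
<session-config>
  <session-properties>
    <!-- 300 seconds for relatively short-lived sessions -->
    <property name="timeOutSeconds" value="300"/>
  </session-properties>
  <session-manager>
    <manager-properties>
      <!-- Run the session reaper every 2 minutes -->
      <property name="reapIntervalSeconds" value="120"/>
    </manager-properties>
  </session-manager>
</session-config>
```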
In multi-process mode when the persistence-type in sun-web.xml is configured to be either s1ws60 or mmap, the session manager uses cross-process locks to ensure session data integrity. These can be configured to improve performance as described below.
The implication of the number specified in the maxLocks property can be gauged by dividing the value of maxSessions by maxLocks. For example, if maxSessions = 1000 and you set maxLocks = 10, then approximately 100 sessions (1000/10) will contend for the same lock. Increasing maxLocks reduces the number of sessions that contend for the same lock and may improve performance and reduce latency. However, increasing the number of locks also increases the number of open file descriptors, and reduces the number of available descriptors that would otherwise be assigned to incoming connection requests.
For more information about these settings, see the "Session Managers" chapter in the Sun Java System Web Server 6.1 SP9 Programmer’s Guide to Web Applications.
The following example describes the effect on process size when configuring the persistence-type="mmap" using the manager-properties properties (documented for the MMapSessionManager in the Sun Java System Web Server 6.1 Programmer’s Guide to Web Applications):
maxSessions = 1000
maxValuesPerSession = 10
maxValueSize = 4096
This example would create a memory mapped file of size 1000 X 10 X 4096 bytes, or ~40 MB. As this is a memory mapped file, the process size will increase by 40 MB upon startup. The larger the values you set for these parameters, the greater will be the increase in process size.
A JDBC connection pool is a named group of JDBC connections to a database. These connections are created when the first request for a connection is made on the pool after you start Sun Java System Web Server.
The JDBC connection pool definition specifies the properties used to create the pool. Each connection pool uses a JDBC driver to establish connections to a physical database.
A JDBC-based application or resource draws a connection from the pool, uses it, and when no longer needed, returns it to the connection pool by closing the connection. If two or more JDBC resources point to the same pool definition, they will be using the same pool of connections at run time.
The use of connection pooling improves application performance by doing the following:
Creating connections in advance. The cost of establishing connections is moved outside of the code that is critical for performance.
Reusing connections. The number of times connections are created is significantly lowered.
Controlling the amount of resources a single application can use at any moment.
JDBC connection pools can be created and edited using the Administration interface, or by editing the attributes of the JDBCCONNECTIONPOOL element in the server.xml file. For more information, see the Sun Java System Web Server 6.1 SP9 Administrator’s Guide and the Sun Java System Web Server 6.1 SP9 Administrator’s Configuration File Reference, respectively.
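The following is a sketch of a pool definition in server.xml. The pool name, data source class, and values are placeholders, and the attribute spellings should be verified against the Configuration File Reference:

```xml
<JDBCCONNECTIONPOOL poolname="examplePool"
    datasourceclassname="oracle.jdbc.pool.OracleDataSource"
    steadypoolsize="8"
    maxpoolsize="32"
    idletimeout="300"
    connectionvalidationrequired="false"/>
```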
Each defined pool is instantiated during web server startup. However, the connections are only created the first time the pool is accessed. It is recommended that you jump-start a pool before putting it under heavy load.
Depending on your application’s database activity, you may need to size connection pool attributes. Attributes of a JDBC connection pool are listed below, along with considerations relating to performance.
The pool name.
datasourceclassname
The JDBC driver class that implements javax.sql.DataSource.
The size the pool will tend to keep during the life of the server instance. Also the initial size of the pool. Defaults to 8.
This number should be as close as possible to the expected average size of the pool. Use a high number for a pool that is expected to be under heavy load. This will minimize creation of connections during the life of the application, and will minimize pool resizing. Use a lower number if the pool load is expected to be small. This will minimize resource consumption.
The maximum number of connections that a pool can have at any given time. Defaults to 32.
Use this parameter to enforce a limit in the amount of connection resources that a pool or application can have. This limit is also beneficial to avoid application failures due to excessive resource consumption.
Number of connections to be removed when the idletimeout timer expires. Connections that have been idle longer than the timeout are candidates for removal. When the pool size reaches steady-pool-size, the connection removal stops. Defaults to 2.
Keep this number low for pools that expect regular and steady changes in demand. A higher number is recommended for pools that expect infrequent and pronounced changes in the load.
The maximum amount of time in seconds that a connection can remain unused in the pool. Also the interval at which the resizer task is scheduled.
Note that this does not control connection timeouts enforced at the database server side. Defaults to 300.
Setting this attribute to 0 prevents the connections from being closed and causes the resizing task not to be scheduled. This is recommended for pools that expect continuous high demand. Otherwise, administrators are advised to keep this timeout shorter than the database server-side timeout (if such timeouts are configured on the specific vendor's database), to prevent accumulation of unusable connections in the pool.
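The resizer behavior described above can be sketched in a few lines of Java. This is a toy illustration with invented names, not the server’s code: each time the idle timer fires, at most pool-resize-quantity idle connections are removed, and removal stops once the pool is back at the steady size.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of the idle-timeout resizer.
public class ToyResizer {
    final Deque<Object> idle = new ArrayDeque<>();
    final int steadyPoolSize = 8;       // documented default
    final int poolResizeQuantity = 2;   // documented default

    ToyResizer(int current) {
        for (int i = 0; i < current; i++) idle.push(new Object());
    }

    // Invoked when the idle timer expires.
    void onIdleTimeout() {
        int removed = 0;
        while (removed < poolResizeQuantity && idle.size() > steadyPoolSize) {
            idle.pop();                 // a connection idle longer than the timeout
            removed++;
        }
    }

    public static void main(String[] args) {
        ToyResizer r = new ToyResizer(11);  // pool grew past steady size under load
        r.onIdleTimeout();
        System.out.println(r.idle.size()); // prints 9  (2 removed)
        r.onIdleTimeout();
        System.out.println(r.idle.size()); // prints 8  (only 1 removed: steady size reached)
    }
}
```

This is why a small pool-resize-quantity suits steadily varying demand (gradual shrinking), while a large one drains a burst-inflated pool in fewer timer ticks.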
The amount of time in milliseconds that a request waits for a connection in the queue before timing out. Defaults to 60000.
Setting this attribute to 0 causes a request for a connection to wait indefinitely. This could also improve performance by keeping the pool from having to account for connection timers.
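The wait-or-time-out semantics can be illustrated with a standard java.util.concurrent queue. This is an analogy for the behavior described above, not the server’s code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// A queue of placeholder Objects stands in for the pool's free connections.
public class WaitDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Object> pool = new ArrayBlockingQueue<>(1);
        pool.put(new Object());                                 // one free connection

        Object first  = pool.poll(100, TimeUnit.MILLISECONDS);  // granted immediately
        Object second = pool.poll(100, TimeUnit.MILLISECONDS);  // pool empty: null after 100 ms

        System.out.println(first != null);                      // prints true
        System.out.println(second == null);                     // prints true (request timed out)

        // With max-wait-time-in-millis set to 0, the analogous call is
        // pool.take(), which blocks indefinitely and needs no timer.
    }
}
```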
If set to true, the pool validates each connection with a call before handing it to the application. Defaults to false.
The overhead of this call can be avoided by leaving the parameter set to false.
The method used for validation. Defaults to auto-commit.
If validation is needed, the auto-commit and meta-data methods are less costly than the table method. The first two require only a method call, but they may be ineffective if the JDBC driver caches the result of the call. The third method is almost always effective, but it requires executing a SQL statement and is therefore the most expensive.
The user-defined table to be used for validation. Defaults to test.
If this method is used, it is strongly recommended that the table used be dedicated only to validation, and the number of rows in the table be kept to a minimum.
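In plain JDBC terms, the three validation styles amount to the following calls. The helper below is an illustration of their relative costs, not the server’s implementation; its main method exercises the cheap path against a do-nothing dynamic proxy standing in for a real connection.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ValidationDemo {
    // Returns true if the connection survives the chosen validation call.
    static boolean isValid(Connection conn, String method, String table) {
        try {
            switch (method) {
                case "auto-commit":
                    conn.getAutoCommit();           // a single cheap method call
                    return true;
                case "meta-data":
                    conn.getMetaData();             // also one call; drivers may cache the result
                    return true;
                case "table":
                    try (Statement s = conn.createStatement()) {
                        s.execute("SELECT 1 FROM " + table); // full SQL round trip: costliest
                    }
                    return true;
                default:
                    return false;
            }
        } catch (SQLException e) {
            return false;                           // any failure marks the connection invalid
        }
    }

    public static void main(String[] args) {
        // A stand-in Connection that answers every call with a default value.
        Connection fake = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (p, m, a) -> m.getReturnType() == boolean.class ? Boolean.FALSE : null);
        System.out.println(isValid(fake, "auto-commit", null)); // prints true
    }
}
```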
Indicates whether all connections in the pool are re-created when one is found to be invalid, or only the invalid one. Only applicable if connectionvalidationrequired is set to true. Defaults to false.
If set to true, all of the re-creation work is done in one step, and the thread requesting the connection bears the full cost. If set to false, the load of re-creating connections is distributed among the threads requesting each connection.
Specifies the Transaction Isolation Level on the pooled database connections. This setting is optional and has no default.
If left empty, the default isolation level of the connection is left intact. Setting it to any value incurs the small performance penalty caused by the extra method call.
Only applicable if a transactionisolationlevel has been specified. Defaults to false.
Leaving this set to false causes the isolation level to be set only when the connection is created. Setting it to true sets the level every time the connection is leased to an application. It is recommended that you leave this set to false.
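For reference, the isolation level names correspond to the standard java.sql.Connection constants that a pool passes to setTransactionIsolation(). The name-to-constant mapping below is standard JDBC; the comment on the guarantee option restates the trade-off described above.

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("read-uncommitted = " + Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println("read-committed   = " + Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println("repeatable-read  = " + Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println("serializable     = " + Connection.TRANSACTION_SERIALIZABLE);     // 8
        // Guaranteeing the isolation level means re-issuing
        // setTransactionIsolation() on every lease: one extra JDBC call per request.
    }
}
```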