Sun Java System Web Server 7.0 Update 7 Performance Tuning, Sizing, and Scaling Guide

Chapter 3 Common Performance Problems

This chapter discusses common web site performance problems, and includes the following topics:

check-acl Server Application Functions

Low-Memory Situations

Too Few Threads

Cache Not Utilized

Keep-Alive Connections Flushed

Large Memory Footprint

Log File Modes

Tuning of File Descriptors


Note –

For platform-specific issues, see Chapter 4, Platform-Specific Issues and Tips.


check-acl Server Application Functions

For optimal server performance, use ACLs only when required.

The server is configured with an ACL file containing the default ACL, which allows write access to the server only to all (that is, authenticated users), and an es-internal ACL, which denies write access to everybody. The latter protects the manuals, icons, and search UI files in the server.

The default obj.conf file has NameTrans lines mapping the directories that need to be read-only to the es-internal object, which in turn has a check-acl SAF for the es-internal ACL.

The default object also contains a check-acl SAF for the default ACL.

You can improve performance by removing the check-acl SAF from the default object for URIs that are not protected by ACLs.
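For example, in the default obj.conf the check appears as a PathCheck directive inside the default object; a minimal sketch (your obj.conf may differ) of the line to remove when none of the content served by the default object is protected by ACLs:

  <Object name="default">
  ...
  PathCheck fn="check-acl" acl="default"
  ...
  </Object>

Remove only the check-acl line from the default object; leave the es-internal object and its ACL in place so the server's internal files stay protected.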

Low-Memory Situations

If Web Server must run in low-memory situations, reduce the thread limit to a bare minimum by lowering the value of the Maximum Threads setting on the configuration's Performance Tab ⇒ HTTP sub tab. You can also set it with the wadm set-thread-pool-prop command's max-threads property.
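For example, the following wadm commands lower the thread limit and deploy the change (the configuration name config1 and the credentials are placeholders; adjust them for your installation):

  wadm set-thread-pool-prop --user=admin --password-file=admin.pwd --config=config1 max-threads=16
  wadm deploy-config --user=admin --password-file=admin.pwd config1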

For optimal performance, the server automatically selects many defaults based on the system's resources. However, if the server's chosen defaults are not suited to your configuration, you can override them. For more information about how to tune the server to obtain a smaller memory footprint, see Large Memory Footprint.

Web applications running under stress can sometimes cause the server to run out of Java VM runtime heap space, as indicated by java.lang.OutOfMemoryError messages in the server log file. There can be several reasons for this, including excessive allocation of objects, and such behavior can affect performance. To address this problem, profile the application. Refer to the following HotSpot VM performance FAQ for tips on profiling the allocations (objects and their sizes) of your application:

http://java.sun.com/docs/hotspot/index.html
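While you profile, one hedged stopgap, assuming the default server.xml layout with a jvm element, is to raise the Java heap ceiling and capture a heap dump when the error occurs by adding JVM options to the configuration (the values below are illustrative, not recommendations):

  <jvm>
  ...
  <jvm-options>-Xmx1024m</jvm-options>
  <jvm-options>-XX:+HeapDumpOnOutOfMemoryError</jvm-options>
  </jvm>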

If your application hits its maximum number of sessions, as evidenced by a “too many active sessions” message in the server log file and by the container throwing exceptions, application performance is impacted. To address the situation, review the session manager properties and the session idle time. Note that JSPs have sessions enabled by default.
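The relevant settings live in the web application's session manager configuration. The following is a minimal sketch, assuming the sun-web.xml session-manager properties maxSessions, reapIntervalSeconds, and timeoutSeconds; the values are illustrative:

  <sun-web-app>
    <session-config>
      <session-manager>
        <manager-properties>
          <property name="maxSessions" value="10000"/>
          <property name="reapIntervalSeconds" value="60"/>
        </manager-properties>
      </session-manager>
      <session-properties>
        <property name="timeoutSeconds" value="600"/>
      </session-properties>
    </session-config>
  </sun-web-app>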

Too Few Threads

The server does not allow the number of active threads to exceed the thread limit value. If the number of simultaneous requests reaches that limit, the server stops servicing new connections until the old connections are freed up. This can lead to increased response time.

In Web Server, the server’s default maximum threads setting is the greater of 128 or the number of processors in the system. If you want your server to process more requests concurrently, you need to increase the maximum number of threads.

The symptom of a server with too few threads is a long response time. Making a request from a browser establishes a connection fairly quickly to the server, but if there are too few threads on the server it can take a long time before the response comes back to the client.

The best way to tell if your server is being throttled by too few threads is to see if the number of active sessions is close to, or equal to, the maximum number of threads. To do this, see Session Creation and Thread Information.
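If the active session count is pinned at the thread limit, raise max-threads; for example (the configuration name and credentials are placeholders):

  wadm get-thread-pool-prop --user=admin --password-file=admin.pwd --config=config1
  wadm set-thread-pool-prop --user=admin --password-file=admin.pwd --config=config1 max-threads=256
  wadm deploy-config --user=admin --password-file=admin.pwd config1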

Cache Not Utilized

If the file cache is not utilized, your server is not performing optimally. Since most sites have lots of GIF or JPEG files that are intended to always be cacheable, you need to use your cache effectively.

Some sites, however, do almost everything through CGIs, SHTML, or other dynamic sources. Dynamic content is generally not cacheable, and inherently yields a low cache hit rate. Don’t be alarmed if your site has a low cache hit rate. The most important thing is that your response time is low. You can have a very low cache hit rate and still have very good response time. As long as your response time is good, it is less important that the cache hit rate is low.

Check your hit ratio using statistics from perfdump, the Admin Console Monitoring tab, or wadm stats commands. The hit ratio is the percentage of all hits to your server that were served from the file cache. A good cache hit rate is anything above 50%. Some sites can even achieve 98% or higher. For more information, see File Cache Statistics Information.

In addition, if you are doing a lot of CGI or NSAPI calls, or if you have custom NSAPI functions, you can have a low cache hit rate.
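If the hit rate is low even though most of your content is static, confirm that the file cache is enabled and large enough to hold your working set of files. One way to check and adjust it from the CLI, assuming the wadm file-cache commands and their enabled and max-entries properties (values are illustrative):

  wadm get-file-cache-prop --user=admin --password-file=admin.pwd --config=config1
  wadm set-file-cache-prop --user=admin --password-file=admin.pwd --config=config1 enabled=true max-entries=8192
  wadm deploy-config --user=admin --password-file=admin.pwd config1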

Keep-Alive Connections Flushed

A web site that can service 75 requests per second without keep-alive connections might be able to do 200-300 requests per second when keep-alive is enabled. Therefore, as a client requests various items from a single page, it is important that keep-alive connections are used effectively. If the KeepAliveCount shown in perfdump (Total Number of Connections Added, as displayed in the Admin Console) exceeds the keep-alive maximum connections, subsequent keep-alive connections are closed, or “flushed,” instead of being honored and kept alive.

Check the KeepAliveFlushes and KeepAliveHits values using statistics from perfdump or the Number of Connections Flushed and Number of Connections Processed under Keep Alive Statistics on the Monitoring Statistics page. For more information, see Keep-Alive Information.

On a site where keep-alive connections are running well, the ratio of KeepAliveFlushes to KeepAliveHits is very low. If the ratio is high (greater than 1:1), your site is probably not utilizing keep-alive connections as well as it can.

To reduce keep-alive flushes, increase the keep-alive maximum connections. You can do this on the configuration's Performance Tab ⇒ HTTP sub tab or by using the wadm set-keep-alive-prop command. The default is based on the number of available file descriptors in the system. By raising the keep-alive maximum connections value, you keep more waiting keep-alive connections open.
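For example, to raise the keep-alive maximum connections from the CLI (the configuration name and credentials are placeholders; keep the caution below in mind when choosing a value):

  wadm set-keep-alive-prop --user=admin --password-file=admin.pwd --config=config1 max-connections=400
  wadm deploy-config --user=admin --password-file=admin.pwd config1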


Caution –

On UNIX/Linux systems, if the keep-alive maximum connections value is too high, the server can run out of open file descriptors. Typically 1024 is the limit for open files on UNIX/Linux, so increasing this value above 500 is not recommended.


Large Memory Footprint

Web Server automatically configures the connection queue size based on the number of available file descriptors in the system. The connection queue size is determined by the sum of the thread-pool/max-threads, thread-pool/queue-size, and keep-alive/max-connections elements in the server.xml file.

For more information about the server.xml file, see the Administrator's Configuration File Reference.

In certain cases, the server's chosen defaults lead to a larger memory footprint than is required to run your applications. If the defaults the server selects do not suit your needs, you can change the server's memory usage by specifying the values in server.xml. The thread-pool/max-threads value is the greater of 128 or the number of processors in the system, unless explicitly specified in server.xml. The thread-pool/queue-size can be obtained from perfdump by examining the Connection Queue Information. For more information, see Connection Queue Information. The keep-alive/max-connections can be obtained from Keep-Alive Information and the Keep-Alive Count. Logging at level fine prints these values in the error log file.
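For example, you can pin these settings explicitly in server.xml instead of letting the server auto-tune them; a minimal sketch with illustrative values:

  <thread-pool>
    <max-threads>64</max-threads>
    <queue-size>1024</queue-size>
  </thread-pool>
  <keep-alive>
    <max-connections>200</max-connections>
  </keep-alive>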

Log File Modes

Keeping the log files at a high level of verbosity can have a significant impact on performance. On the configuration's General Tab ⇒ Log Settings page, choose the appropriate log level, and use levels such as Fine, Finer, and Finest with care. To set the log level using the CLI, use the wadm set-log-prop command and set the log-level property.
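For example, to set the log verbosity to the less verbose info level from the CLI (the configuration name and credentials are placeholders):

  wadm set-log-prop --user=admin --password-file=admin.pwd --config=config1 log-level=info
  wadm deploy-config --user=admin --password-file=admin.pwd config1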

Tuning of File Descriptors

Web Server 7.0 uses an algorithm to divide file descriptors among the various needs on UNIX systems.

The following items require file descriptors:

  1. Web Applications.

    Web Server leaves 80% of the file descriptors for web applications.

  2. Daemon session thread connections.

    For each daemon session thread, Web Server 7.0 expects an average of four file descriptors: one client socket connection, the requested file, an included file, and a back-end connection.

  3. JDBC pools.

  4. Access log counts for all virtual servers.

  5. Listener counts.

  6. File descriptors for file cache.

  7. Keep alive file descriptors.

  8. Thread pool queue size.

In the above list, items (6), (7), and (8) are auto-tuned, which means that if you specify them in server.xml, the server uses those values; otherwise, it divides the remaining, or available, file descriptors among (6), (7), and (8). Available file descriptors = total descriptors - descriptors used by items (1) through (5).

If more than 1024 file descriptors are available, Web Server uses a 1:16:16 ratio for items (6), (7), and (8). If fewer than 1024 file descriptors are available, it uses a 1:16:8 ratio. The file cache is given the least importance and keep-alive is given the highest importance. The server also rounds each value off to a power of 2.
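As a rough, purely illustrative calculation (the numbers are hypothetical and the exact rounding behavior may differ): if about 3300 descriptors remain available after items (1) through (5), the 1:16:16 split gives the file cache about one part in 33 of those descriptors (roughly 100) and keep-alive and the thread pool queue about 16 parts each (roughly 1600), with each figure then rounded to a power of 2 (for example, 128, 1024, and 1024).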

Note that Web Server does not use the above algorithm on Windows systems. On Windows, it uses 64K descriptors for keep-alive and 16K descriptors for the thread pool queue size.