Sun Java System Web Server 7.0 Update 2 Performance Tuning, Sizing, and Scaling Guide

Chapter 3 Common Performance Problems

This chapter discusses common web site performance problems, and includes the following topics:

    check-acl Server Application Functions
    Low-Memory Situations
    Too Few Threads
    Cache Not Utilized
    Keep-Alive Connections Flushed
    Large Memory Footprint
    Log File Modes


Note –

For platform-specific issues, see Chapter 4, Platform-Specific Issues and Tips.


check-acl Server Application Functions

For optimal performance of your server, use ACLs only when required.

The server is configured with an ACL file containing a default ACL that allows write access to the server only to authenticated users (all), and an es-internal ACL that denies write access to everyone. The latter protects the manuals, icons, and search UI files in the server.

The default obj.conf file has NameTrans lines mapping the directories that need to be read-only to the es-internal object, which in turn has a check-acl SAF for the es-internal ACL.

The default object also contains a check-acl SAF for the default ACL.

You can improve performance by removing the check-acl SAF from the default object for URIs that are not protected by ACLs.
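For illustration, the relevant fragment of a default obj.conf might look similar to the following (directory paths and the surrounding directives vary by installation and are abbreviated here). Removing or commenting out the check-acl PathCheck line in the default object turns off ACL evaluation for URIs that are not protected by ACLs, while the es-internal object keeps its own check:

    <Object name="default">
    NameTrans fn="pfx2dir" from="/mc-icons" dir="/install-dir/lib/icons" name="es-internal"
    ...
    # Remove this line if no ACLs protect URIs handled by the default object
    PathCheck fn="check-acl" acl="default"
    ...
    </Object>

    <Object name="es-internal">
    PathCheck fn="check-acl" acl="es-internal"
    </Object>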

Low-Memory Situations

If Web Server must run in low-memory situations, reduce the thread limit to a bare minimum by lowering the value of the Maximum Threads setting on the configuration's Performance Tab ⇒ HTTP sub tab. You can also set it with the max-threads property of the wadm set-thread-pool-prop command.
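For example, the following wadm commands lower the thread limit and then redeploy the configuration (the configuration name, user, and password file shown here are placeholders for your own values):

    wadm set-thread-pool-prop --user=admin --password-file=admin.pwd --config=myconfig max-threads=64
    wadm deploy-config --user=admin --password-file=admin.pwd myconfig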

Web applications running under stress might cause the server to run out of Java VM runtime heap space, as indicated by java.lang.OutOfMemoryError messages in the server log file. There can be several reasons for this (such as excessive allocation of objects), and such behavior degrades performance. To address this problem, profile the application. Refer to the following HotSpot VM performance FAQ for tips on profiling the allocations (objects and their sizes) of your application:

http://java.sun.com/docs/hotspot/index.html

At times your application could be running out of maximum sessions (as evidenced by a “too many active sessions” message in the server log file), which would result in the container throwing exceptions, which in turn impacts application performance. Consideration of session manager properties, session creation activity (note that JSPs have sessions enabled by default), and session idle time is needed to address this situation.
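Because JSPs create sessions by default, one common way to reduce unnecessary session creation is to disable it in JSP pages that do not need a session, using the standard page directive:

    <%@ page session="false" %>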

Too Few Threads

The server does not allow the number of active threads to exceed the thread limit value. If the number of simultaneous requests reaches that limit, the server stops servicing new connections until the old connections are freed up. This can lead to increased response time.

The server's default maximum threads setting is 128. If you want your server to process more requests concurrently, you need to increase the maximum number of threads.

The symptom of a server with too few threads is a long response time. Making a request from a browser establishes a connection fairly quickly to the server, but if there are too few threads on the server it might take a long time before the response comes back to the client.

The best way to tell if your server is being throttled by too few threads is to see if the number of active sessions is close to, or equal to, the maximum number of threads. To do this, see Session Creation (Thread) Information.
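For example, perfdump output similar to the following (abbreviated here, with hypothetical numbers) would indicate a thread-limited server, because the number of active sessions has reached the configured maximum of 128:

    SessionCreationInfo:
    ------------------------
    Active Sessions           128
    Total Sessions Created    128/128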

Cache Not Utilized

If the file cache is not utilized, your server is not performing optimally. Since most sites have lots of GIF or JPEG files that should always be cacheable, you need to use your cache effectively.

Some sites, however, do almost everything through CGIs, SHTML, or other dynamic sources. Dynamic content is generally not cacheable, and inherently yields a low cache hit rate. Don’t be too alarmed if your site has a low cache hit rate. The most important thing is that your response time is low. You can have a very low cache hit rate and still have very good response time. As long as your response time is good, you might not care that the cache hit rate is low.

Check your hit ratio using statistics from perfdump, the Admin Console Monitoring tab, or wadm stats commands. The hit ratio is the percentage of all requests to your server that were served from the file cache. A good cache hit rate is anything above 50%. Some sites might even achieve 98% or higher. For more information, see File Cache Information (Static Content).
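For example, a perfdump file cache section similar to the following (abbreviated, with hypothetical numbers) shows a healthy hit ratio of about 98%:

    CacheInfo:
    ------------------
    enabled             yes
    CacheEntries        141/1024
    Hit Ratio           16995/17302 ( 98.23%)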

In addition, if you are doing a lot of CGI or NSAPI calls, or if you have custom NSAPI functions, you might have a low cache hit rate.

Keep-Alive Connections Flushed

A web site that might be able to service 75 requests per second without keep-alive connections might be able to do 200-300 requests per second when keep-alive is enabled. Therefore, as a client requests various items from a single page, it is important that keep-alive connections are being used effectively. If the KeepAliveCount shown in perfdump (Total Number of Connections Added, as displayed in the Admin Console) exceeds the keep-alive maximum connections, subsequent keep-alive connections are closed, or “flushed,” instead of being honored and kept alive.

Check the KeepAliveFlushes and KeepAliveHits values using statistics from perfdump or the Number of Connections Flushed and Number of Connections Processed under Keep Alive Statistics on the Monitoring Statistics page. For more information, see Keep-Alive Information.

On a site where keep-alive connections are running well, the ratio of KeepAliveFlushes to KeepAliveHits is very low. If the ratio is high (greater than 1:1), your site is probably not utilizing keep-alive connections as well as it could.
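For example, in perfdump output similar to the following (abbreviated, with hypothetical numbers), the number of flushes is negligible compared to the number of hits, which indicates that keep-alive connections are being reused effectively:

    KeepAliveInfo:
    --------------------
    KeepAliveCount        198/200
    KeepAliveHits         723974
    KeepAliveFlushes      0
    KeepAliveTimeout      30 seconds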

To reduce keep-alive flushes, increase the keep-alive maximum connections (as configured on the configuration's Performance Tab ⇒ HTTP sub tab or with the wadm set-keep-alive-prop command). The default is based on the number of available file descriptors in the system. By raising the keep-alive maximum connections value, you keep more waiting keep-alive connections open.
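For example (the configuration name and value shown are placeholders; verify the property name for your release, and deploy the modified configuration afterward as in the earlier wadm example):

    wadm set-keep-alive-prop --user=admin --password-file=admin.pwd --config=myconfig max-connections=400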


Caution –

On UNIX/Linux systems, if the keep-alive maximum connections value is too high, the server can run out of open file descriptors. Typically 1024 is the limit for open files on UNIX/Linux, so increasing this value above 500 is not recommended.


Large Memory Footprint

Web Server automatically configures the connection queue size based on the number of available file descriptors in the system. The connection queue size is the sum of the thread-pool/max-threads, thread-pool/queue-size, and keep-alive/max-connections elements in the server.xml file.
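For example, with thread-pool/max-threads set to 128, thread-pool/queue-size set to 1024, and keep-alive/max-connections set to 200 (illustrative values), the connection queue would be sized for 128 + 1024 + 200 = 1352 entries.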

For more information about the server.xml file, see the Administrator's Configuration File Reference.

In certain cases, the defaults chosen by the server can lead to a larger memory footprint than your applications require. If the defaults selected by the server do not suit your needs, you can change the server's memory usage by specifying the values in server.xml. The thread-pool/max-threads value is 128 unless explicitly specified in server.xml. The thread-pool/queue-size value can be obtained from perfdump by examining the Connection Queue Information; for more information, see Connection Queue Information. The keep-alive/max-connections value can be obtained from Keep-Alive Information and Keep-Alive Count. Logging at the fine level prints these values in the error log file.
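For illustration, a server.xml fragment similar to the following (the values shown are hypothetical and should be sized for your workload) sets these elements explicitly to reduce the footprint:

    <thread-pool>
      <max-threads>64</max-threads>
      <queue-size>512</queue-size>
    </thread-pool>
    <keep-alive>
      <max-connections>100</max-connections>
    </keep-alive>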

Log File Modes

Keeping the log files at a high level of verbosity can have a significant impact on performance. On the configuration's General Tab ⇒ Log Settings page, choose the appropriate log level, and use levels such as Fine, Finer, and Finest with care. To set the log level using the CLI, use the wadm set-log-prop command and set the log-level property.
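For example, the following command sets a moderate log level (the configuration name is a placeholder; the modified configuration must then be deployed):

    wadm set-log-prop --user=admin --password-file=admin.pwd --config=myconfig log-level=info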