Oracle9i Application Server Oracle HTTP Server powered by Apache Performance Guide
Release 1.0.2 for Sun SPARC Solaris

Part Number A86059-01


4
Optimizing HTTP Server Performance

This chapter provides information on improving the Oracle HTTP Server's performance, including tuning TCP parameters, the effects of changing the MaxClients parameter, SSL caching, and logging.


TCP Tuning

Correctly tuned TCP parameters can improve performance dramatically. This section contains recommendations for TCP tuning and a brief explanation of each parameter. A comprehensive discussion of TCP tuning can be found in Sun Performance and Tuning: Java and the Internet by Adrian Cockcroft and Richard Pettit, Sun Microsystems Press, 1998.

The table below contains recommended TCP parameter settings.

Table 4-1 Recommended TCP parameter settings

Parameter                Setting  Comments
tcp_conn_hash_size       32768    See "Increasing TCP Connection Table Access Speed".
tcp_close_wait_interval  60000    Parameter name in Solaris 2.6. See "Specifying Retention Time for Connection Table Entries".
tcp_time_wait_interval   60000    Parameter name in Solaris 2.7. See "Specifying Retention Time for Connection Table Entries".
tcp_conn_req_max_q       1024     See "Increasing the Handshake Queue Length".
tcp_conn_req_max_q0      1024     See "Increasing the Handshake Queue Length".
tcp_slow_start_initial   2        See "Changing the Data Transmission Rate".
tcp_xmit_hiwat           32768    See "Changing the Data Transfer Window Size".
tcp_recv_hiwat           32768    See "Changing the Data Transfer Window Size".

Setting TCP Parameters

To set the connection table hash parameter, you must add the following line to your /etc/system file, and then restart the system:

set tcp:tcp_conn_hash_size=32768

A sample script, tcpset.sh, that changes the TCP parameters to the settings recommended here is included in the $ORACLE_HOME/Apache/Apache/bin/ directory.

If your system is restarted after you run the script, the default settings will be restored and you will have to run the script again. To make the settings permanent, enter them in your system startup file.
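
For example, a minimal sketch of such a startup script (the file name /etc/rc2.d/S99tcptune is illustrative, not part of the product); tcp_conn_hash_size is omitted because it can be set only in /etc/system:

#!/bin/sh
# Illustrative boot-time script: reapplies the TCP settings
# recommended in Table 4-1. On Solaris 2.6, use
# tcp_close_wait_interval instead of tcp_time_wait_interval.
/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 1024
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 1024
/usr/sbin/ndd -set /dev/tcp tcp_slow_start_initial 2
/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 32768
/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 32768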

Increasing TCP Connection Table Access Speed

If you have a large user population, you should increase the hash size for the TCP connection table. The hash size is the number of hash buckets used to store the connection data. If the buckets are very full, it takes more time to find a connection. Increasing the hash size reduces the connection lookup time, but increases memory consumption.

Suppose your system performs 100 connections per second and tcp_close_wait_interval is set to 60000 (that is, entries are retained for 60 seconds after a connection closes). There will then be about 100 x 60 = 6000 entries in your TCP connection table at any time. Increasing your hash size to 2048 or 4096 will improve performance significantly.

On a system servicing 300 connections per second, changing the hash size from the default of 256 to a number close to the number of connection table entries decreases the average round trip time by three to four seconds. The maximum hash size is 262144. Ensure that you increase memory as needed.
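
You can check the hash size currently in effect with ndd (the parameter is read-only at run time, which is why changing it requires an /etc/system entry and a restart); on a system still at the default, this returns 256:

prompt>/usr/sbin/ndd -get /dev/tcp tcp_conn_hash_size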

To set the tcp_conn_hash_size, add the line shown below to your /etc/system file. The parameter will take effect when the system is restarted.

set tcp:tcp_conn_hash_size=32768

Specifying Retention Time for Connection Table Entries

The TCP connection table maintains data associated with connections. The server maintains a TCP connection table entry for some time after a connection is closed, so that it can identify and properly dispose of any leftover incoming packets from the client.

Access speed to this table impacts performance; the access speed depends on the number of entries in the table, and on its hash size. The number of entries in the table depends on the rate of incoming requests, and the lifetime of each connection.

You can control the length of time that TCP connection table entries are maintained with the tcp_close_wait_interval parameter (renamed tcp_time_wait_interval in Solaris 2.7). This parameter is commonly set to 60,000 ms. Use the appropriate command below to set it; note that the parameter name differs between Solaris 2.6 and 2.7.

In Solaris 2.6:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_close_wait_interval 60000

In Solaris 2.7:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000
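
To estimate how many connection table entries the retention interval is contributing on your system, you can count the connections currently in the TIME_WAIT state; a quick sketch using standard tools:

prompt>netstat -an | grep TIME_WAIT | wc -l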


Note:

If your user population is widely dispersed (with respect to Internet topology), you may want to set this parameter to a higher value. You can improve access time to the TCP connection table with the tcp_conn_hash_size parameter.  


Increasing the Handshake Queue Length

During the TCP connection handshake, the server, after receiving a request (SYN) from a client, sends a reply and waits to hear back from the client. The client responds to the server's message, and the handshake is complete. Upon receiving the first request from the client, the server makes an entry in the listen queue. After the client responds to the server's message, the entry is moved to a second queue, for connections with completed handshakes. This second queue makes it possible for the server to continue servicing requests for which the handshake has been completed.

The maximum length of the queue for incomplete handshakes is governed by tcp_conn_req_max_q0, which by default is 1024. The maximum length of the queue for requests with completed handshakes is defined by tcp_conn_req_max_q (default is 128).

On most web servers, the defaults will be sufficient, but if you have more than 1024 concurrent users, these settings may be too low. In that case, connections will be dropped in the handshake state because the queues are full. You can determine whether this is a problem on your system by inspecting the values for tcpListenDrop, tcpListenDropQ0, and tcpHalfOpenDrop with netstat -s. If either of the first two values is nonzero, you should increase the maximums.
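
For example, the following command extracts those counters from the TCP statistics (counter names can vary slightly across Solaris releases):

prompt>netstat -s -P tcp | egrep 'ListenDrop|HalfOpenDrop'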

Although the defaults are often sufficient, Oracle recommends that you increase the value of tcp_conn_req_max_q to 1024. You can set both parameters with:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 1024
prompt>/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 1024

Changing the Data Transmission Rate

To prevent overloading a busy segment of the Internet, TCP does not send all the packets in a data transfer at once; instead, it implements a slow-start algorithm. With slow start, one packet is sent, an acknowledgment is received, and then two packets are sent. The number of packets sent continues to double after each acknowledgment, until the TCP transfer window limits are reached.

Some versions of Microsoft Windows (including NT 4.0 and 95) do not acknowledge receipt of a single packet when a connection is initiated, but acknowledge immediately if two packets are received. Because Solaris sends only one packet when initiating a connection (per the TCP standard), this can increase the connection startup time. The effect is especially apparent on fast local networks, where latency is expected to be low.

You can configure Solaris to start with two packets when initiating a data transfer:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_slow_start_initial 2

Changing the Data Transfer Window Size

The sizes of the TCP transfer windows for sending and receiving data determine how much data can be sent without waiting for an acknowledgment. The default window size is 8192 bytes. Unless your system is memory constrained, increase these windows to the maximum size of 32768 bytes; this can speed up large data transfers significantly. Use these commands to enlarge the windows:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 32768

prompt>/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 32768

Because the client typically receives the bulk of the data, it would help to enlarge the TCP receive windows on end users' systems.

MaxClients

The MaxClients directive limits the number of clients that can simultaneously connect to your web server, and thus the number of httpd processes. You can configure this parameter in the httpd.conf file up to a maximum of 1024 in Oracle9i Application Server v. 1.0.2 (in the previous version, the maximum was 256). The default is 150, which should be adequate for most uses. If the MaxClients setting is too low, and the limit is reached, clients will be unable to connect.

Our tests of static page requests (average size 20K) on a 2-processor, 168 MHz Sun UltraSPARC on a 100 Mbps network, and on 4- and 6-processor, 336 MHz systems, showed no significant performance improvement from increasing the MaxClients setting from 150 to 256, based on static page and servlet tests with up to 1000 users.

Increasing MaxClients when system resources are saturated does not improve performance. When no httpd processes are available, connection requests are queued in the TCP/IP system until a process becomes available; eventually, clients time out and terminate their connections.


Note:

If you are using persistent connections, you may require more concurrent httpd server processes. See "httpd Process Availability" for a discussion of the relationship between persistent connections and the number of server processes.  


For dynamic requests, if the system is heavily loaded, it might be better to allow the requests to queue in the network (thereby keeping the load on the system manageable). The question for the system administrator is whether a timeout error and retry is better than a long response time. In this case, the MaxClients setting could be reduced, to act as a throttle on the number of concurrent requests on the server.
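
For example, a sketch of such a throttle in httpd.conf (the value shown is illustrative; choose one based on your own load tests):

# Illustrative throttle for a heavily loaded dynamic site
MaxClients 100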

SSL Session Caching

The Oracle HTTP Server caches a client's SSL session information by default. With session caching, only the first connection to the server incurs high latency. For example, in a simple test that connected to and disconnected from an SSL-enabled server, the elapsed time for 5 connections was 11.4 seconds without SSL session caching; with caching enabled, the elapsed time for 5 round trips was 1.9 seconds.

The SSLSessionCacheTimeout directive in httpd.conf determines how long the server keeps a session alive (the default is 300 seconds). The session information is kept in a file. You can specify where to keep the session information using the SSLSessionCache directive; the default location is the $ORACLE_HOME/Apache/Apache/logs/ directory. The file can be used by multiple Oracle HTTP Server processes.
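
The corresponding httpd.conf entries might look like the following sketch, which assumes the standard mod_ssl directive syntax (the cache file name is illustrative):

# Illustrative: file-based (dbm) session cache with a 5-minute lifetime
SSLSessionCache dbm:logs/ssl_scache
SSLSessionCacheTimeout 300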

The duration of an SSL session is unrelated to the use of HTTP persistent connections.

Impact of Logging

This section discusses types of logging, log levels, and the performance implications for using them.

Access Logging

For static page requests, access logging of the default fields results in a 2-3% performance cost.

HostNameLookups

By default, the HostNameLookups directive is set to off, and the server writes the IP addresses of incoming requests to the log files. When HostNameLookups is set to on, the server queries the DNS system to find the host name associated with the IP address of each request, and writes the host names to the log.

Performance degraded by about 3% (best case) in Oracle in-house tests with HostNameLookups set to on. Depending on the server load and the network connectivity to your DNS server, the performance cost of the DNS lookup could be high. Unless you really need to have host names in your logs in real time, it is best to log IP addresses. You can resolve IP addresses to host names off-line, with the logresolve utility (found in the $ORACLE_HOME/Apache/Apache/bin/ directory).
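
For example (the output file name is illustrative), logresolve reads an access log on standard input and writes the same log, with IP addresses replaced by host names where possible, to standard output:

prompt>$ORACLE_HOME/Apache/Apache/bin/logresolve < access_log > access_log.resolved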

For more information, see Dean Gaudet's Apache Performance Notes at:

http://www.apache.org/docs/misc/perf-tuning.html

Error Logging

The server notes unusual activity in an error log. The ErrorLog and LogLevel directives identify the log file and the level of detail of the messages recorded. The default level is warn. There was no difference in static page performance on a loaded system among the warn, info, and debug levels.
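
For example, the corresponding httpd.conf entries (the log file path shown is Apache's conventional default):

ErrorLog logs/error_log
LogLevel warn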

For more information on the LogLevel directive, see:

http://www.apache.org/docs/mod/core.html#loglevel

HTTP/1.1

The Oracle HTTP Server can use HTTP/1.1. Netscape Navigator 4.0 still uses HTTP/1.0, with some 1.1 features, such as persistent connections; Internet Explorer uses HTTP/1.1. The performance benefit of persistent connections comes from reducing the overhead of repeatedly establishing and tearing down connections (one per request): a persistent connection can carry multiple requests from a user.

For a small static page request, the connection latency can equal or exceed the response latency (the time to fulfill the request after the connection is established), so using persistent connections can result in major performance gains.

For more information about performance and the HTTP/1.1 protocol, see:

http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html

Persistent Connections

If your users' browsers support persistent connections (the default behavior of HTTP/1.1), you can support them on the server using the KeepAlive directives in Apache. (Some browsers that do not support all HTTP/1.1 features do support persistent connections; for example, recent versions of Netscape.)

Shorter Response Times

Persistent connections can improve total response time for a web interaction that involves multiple HTTP requests, because the delay of setting up a connection only happens once.

Consider the total time required, without persistent connections, for a client to retrieve a web page with three images from the server.

Activity                                       Seconds
Establish connection                           1
Produce and send the text portion of the page  5
Establish connection                           1
Transfer first image file                      2
Establish connection                           1
Transfer second image file                     2
Establish connection                           1
Transfer third image file                      2
Total                                          15

With persistent connections, the response time for the same request is reduced:

Activity                                       Seconds
Establish connection                           1
Produce and send the text portion of the page  5
Transfer first image file                      2
Transfer second image file                     2
Transfer third image file                      2
Total                                          12

This is a 20% reduction in service time. When the system is under load, the benefit of reducing connection time with persistent connections is even greater, due to the corresponding reduction of the TCP queue.

Reduction in Server Workload

Another benefit of persistent connections is reduction of the work load on the server. Because the server need not repeat the work to set up the connection with a client, it is free to perform other work. For a very inexpensive servlet (Hello World), the CPU ms per request was reduced by approximately 10% when the same client made 4 requests per connection. (The impact would be far less significant for a realistic servlet application that does more work.)

httpd Process Availability

There are some serious drawbacks to using persistent connections with Apache. In particular, because httpd processes are single threaded, one client can keep a process tied up for a significant period of time (how long depends on your KeepAlive settings). If you have a large user population and you set your KeepAlive limits too high, clients could be turned away because too few httpd daemons are available.

The default settings for the KeepAlive directives are:

KeepAlive on
MaxKeepAliveRequests 100
KeepAliveTimeout 15

These settings allow enough requests per connection and time between requests to reap the benefits of the persistent connections, while minimizing the drawbacks. You should consider the size and behavior of your own user population in setting these values on your system. For example, if you have a large user population and the users make small infrequent requests, you may want to reduce the above settings, or even set KeepAlive to off. If you have a small population of users that return to your site frequently, you may want to increase the settings.
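
For example, a sketch of reduced settings for a large population of users making small, infrequent requests (the values are illustrative):

KeepAlive on
MaxKeepAliveRequests 30
KeepAliveTimeout 5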

FIN_WAIT_2

There is a known problem with some browsers that leaves the server with a TCP connection in the FIN_WAIT_2 state. If too many connections are left in this state, the system will run out of the memory allocated for storing TCP connections and stop responding.

The problem occurs when a connection becomes idle and the server closes it because the KeepAlive time limit has expired: the client host may not perform the TCP protocol steps required to complete the closure. The server, having sent the close request, is left with the connection in the FIN_WAIT_2 state, taking up memory until it gets the appropriate packets back from the client or until an internal flush occurs. The httpd process with which the connection is associated is freed to service other requests, however, so this problem does not tie up web server processes.

On Solaris, the parameter tcp_fin_wait_2_flush_interval dictates the frequency with which these connections will be cleaned up. In general, the default setting is sufficient, and should not be modified unless the system is failing. For more information on FIN_WAIT_2, see:

http://apache.put.poznan.pl/misc/fin_wait_2.html
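
To see whether connections are accumulating in this state, and what flush interval (in milliseconds) is in effect, you can use:

prompt>netstat -an | grep FIN_WAIT_2 | wc -l
prompt>/usr/sbin/ndd -get /dev/tcp tcp_fin_wait_2_flush_interval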


Note:

The FIN_WAIT_2 state can also occur due to a system bug unrelated to use of KeepAlive. The bug is fixed by the Solaris cluster patch 105181-20. 


Apache Versions

The differences between Apache versions 1.3.9 and 1.3.12 are primarily bug fixes. In static page and servlet performance measurements, no difference was measured between the versions.


Go to previous page Go to next page
Oracle
Copyright © 2000 Oracle Corporation.

All Rights Reserved.

Library

Contents

Index