
Sun ONE Application Server 7 Performance Tuning Guide

Chapter 6
Tuning the Operating System

Tuning the Solaris TCP/IP settings benefits programs that open and close many sockets. Because the Sun ONE Application Server operates with a small, fixed set of connections, the performance gain may not be as significant on the Application Server node. However, tuning the Web Server, when it is configured as a Web front end to the Sun ONE Application Server, can yield significant benefits. The following topics are discussed:

    Tuning Parameters
    Solaris File Descriptor Setting
    Linux Configuration

Tuning Parameters

The following table shows the Solaris operating system tuning used when benchmarking for performance and scalability. These values are an example of how you might tune your system to achieve the desired result.

Table 6-1  Tuning the Solaris Operating System

rlim_fd_max
    Scope: /etc/system     Default: 1024      Tuned: 8192
    Limit on open file descriptors per process; should account for the
    expected load (for the associated sockets, files, and pipes, if any).

rlim_fd_cur
    Scope: /etc/system     Default: 1024      Tuned: 8192
    Default ("soft") limit on open file descriptors per process.

sq_max_size
    Scope: /etc/system     Default: 2         Tuned: 0
    Controls the streams driver queue size; setting it to 0 makes it
    unlimited, so performance runs won't be hit by a lack of buffer space.
    Set on clients too.

tcp_close_wait_interval
    Scope: ndd /dev/tcp    Default: 240000    Tuned: 60000
    Set on clients as well. (On Solaris 7 and later, this parameter is
    named tcp_time_wait_interval.)

tcp_time_wait_interval
    Scope: ndd /dev/tcp    Default: 240000    Tuned: 60000

tcp_conn_req_max_q
    Scope: ndd /dev/tcp    Default: 128       Tuned: 1024

tcp_conn_req_max_q0
    Scope: ndd /dev/tcp    Default: 1024      Tuned: 4096

tcp_ip_abort_interval
    Scope: ndd /dev/tcp    Default: 480000    Tuned: 60000

tcp_keepalive_interval
    Scope: ndd /dev/tcp    Default: 7200000   Tuned: 900000
    For high-traffic web sites, lower this value.

tcp_rexmit_interval_initial
    Scope: ndd /dev/tcp    Default: 3000      Tuned: 3000
    If the retransmission rate is greater than 30-40%, increase this value.

tcp_rexmit_interval_max
    Scope: ndd /dev/tcp    Default: 240000    Tuned: 10000

tcp_rexmit_interval_min
    Scope: ndd /dev/tcp    Default: 200       Tuned: 3000

tcp_smallest_anon_port
    Scope: ndd /dev/tcp    Default: 32768     Tuned: 1024
    Set on clients too.

tcp_slow_start_initial
    Scope: ndd /dev/tcp    Default: 1         Tuned: 2
    Slightly faster transmission of small amounts of data.

tcp_xmit_hiwat
    Scope: ndd /dev/tcp    Default: 8129      Tuned: 32768
    Increases the transmit buffer.

tcp_recv_hiwat
    Scope: ndd /dev/tcp    Default: 8129      Tuned: 32768
    Increases the receive buffer.

tcp_conn_hash_size
    Scope: ndd /dev/tcp    Default: 512       Tuned: 8192
    The connection hash table keeps all the information for active TCP
    connections (ndd -get /dev/tcp tcp_conn_hash). This value does not
    limit the number of connections, but too small a table can make
    connection lookups take longer. To make lookups more efficient, set
    the value to half the number of concurrent TCP connections you expect
    on the server (netstat -nP tcp | wc -l gives an approximate count).
    It defaults to 512. This parameter can only be set in /etc/system,
    and it takes effect at boot time.
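Values set with ndd do not persist across reboots, so they are typically reapplied from a boot-time script. The following is a sketch only; the script name /etc/init.d/nddconfig is a hypothetical example you would create yourself, and the values simply restate the tuned column of Table 6-1:

```shell
#!/bin/sh
# Sketch of a boot-time script (e.g. a hypothetical /etc/init.d/nddconfig)
# that reapplies the ndd-scoped tuned values from Table 6-1.
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 900000
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 32768
ndd -set /dev/tcp tcp_recv_hiwat 32768
```

The /etc/system-scoped parameters (rlim_fd_max, rlim_fd_cur, sq_max_size, tcp_conn_hash_size) cannot be set this way; they take effect only at boot.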


Solaris File Descriptor Setting

On Solaris, setting the maximum number of open files with ulimit has the biggest impact on your efforts to support the maximum number of RMI/IIOP clients.

To increase the hard limit, add the following line to /etc/system and reboot once:

set rlim_fd_max = 8192

You can verify this hard limit by using the following command:

ulimit -a -H

Once the hard limit is set, you can raise the soft limit explicitly (up to the hard limit) using the following command:

ulimit -n 8192

You can verify this limit by using the following command:

ulimit -a
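As a quick sanity check, the two limits can be compared against the tuned target in a small script. This is a sketch only; the 8192 target simply restates the tuned value above:

```shell
#!/bin/sh
# Sketch: warn when the soft file descriptor limit is below the tuned target.
TARGET=8192
soft=`ulimit -Sn`
if [ "$soft" = "unlimited" ]; then
    echo "soft limit: unlimited"
elif [ "$soft" -ge "$TARGET" ]; then
    echo "soft limit OK: $soft"
else
    echo "soft limit too low: $soft (raise with: ulimit -n $TARGET)"
fi
```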

For example, with the default ulimit of 64, a simple test driver could support only 25 concurrent clients, but with ulimit set to 8192 the same test driver supported 120 concurrent clients. The test driver spawned multiple threads, each of which performed a JNDI lookup and repeatedly called the same business method with a think (delay) time of 500 ms between business method calls, exchanging data of about 100 KB.

These settings apply to RMI/IIOP clients on Solaris. Refer to the Solaris documentation on the Sun Microsystems documentation web site (www.docs.sun.com) for more information on setting file descriptor limits.


Linux Configuration

The following parameters must be added to the /etc/rc.d/rc.local file, which is executed during system start-up.

# Maximum file handle count: roughly 256 descriptors per 4 MB of RAM.
# Based on the amount of RAM on the system, specify the number of
# file descriptors.
echo 65536 > /proc/sys/fs/file-max

# inode-max should be 3-4 times file-max, but /proc/sys/fs/inode-max
# is not present on this kernel, so the line is commented out.
#echo 262144 > /proc/sys/fs/inode-max

# Make more local ports available.
echo 1024 25000 > /proc/sys/net/ipv4/ip_local_port_range

# Increase the memory available for socket buffers.
echo 262143 > /proc/sys/net/core/rmem_max
echo 262143 > /proc/sys/net/core/rmem_default

# On 2.4.x kernels, also set the TCP-specific buffer ranges
# (min default max).
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_wmem

# Disable RFC 2018 TCP selective acknowledgements and RFC 1323
# TCP timestamps.
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps

# Double the maximum amount of memory allocated to shared memory
# at run time.
echo 67108864 > /proc/sys/kernel/shmmax

# Tune the virtual memory (VM) subsystem.
echo 100 1200 128 512 15 5000 500 1884 2 > /proc/sys/vm/bdflush

# Also apply the settings in /etc/sysctl.conf.
sysctl -p /etc/sysctl.conf

Additionally, create an /etc/sysctl.conf file and append the following values to it:

# Disables packet forwarding
net.ipv4.ip_forward = 0
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0

fs.file-max = 65536

vm.bdflush = 100 1200 128 512 15 5000 500 1884 2

net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262143
net.core.rmem_default = 262143

net.ipv4.tcp_rmem = 4096 131072 262143
net.ipv4.tcp_wmem = 4096 131072 262143
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

kernel.shmmax = 67108864
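After the system boots (or after running sysctl -p), you can spot-check that the values took effect by reading them back from /proc. A minimal sketch; the two files read here exist on 2.4.x and later kernels:

```shell
#!/bin/sh
# Spot-check tuned values by reading them back from /proc.
fm=`cat /proc/sys/fs/file-max`
ports=`cat /proc/sys/net/ipv4/ip_local_port_range`
echo "fs.file-max = $fm"
echo "ip_local_port_range = $ports"
```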





Copyright 2003 Sun Microsystems, Inc. All rights reserved.