Sun ONE Application Server 7 Performance Tuning Guide
Chapter 6
Tuning Operating System

Tuning Solaris TCP/IP settings benefits programs that open and close many sockets. Because the Sun ONE Application Server operates with a small, fixed set of connections, the performance gain may not be as significant on the Application Server node. However, for the Web Server, configured as a Web front-end to Sun ONE Application Server, the improvements can be significant. The following topics are discussed:

- Tuning Parameters
- Solaris File Descriptor Setting
- Linux Configuration
Tuning Parameters

The following table shows the operating system tuning for Solaris used when benchmarking for performance and scalability. These values are an example of how you may tune your system to achieve the desired result.
Solaris File Descriptor Setting

On Solaris, setting the maximum number of open files property using ulimit has the biggest impact on your efforts to support the maximum number of RMI/IIOP clients.
To increase the hard limit, add the following command to the /etc/system file and reboot the machine once:
set rlim_fd_max = 8192
You can verify this hard limit by using the following command:
ulimit -a -H
Once the above hard limit is set, you can increase the value of this property explicitly (up to this limit) using the following command:
ulimit -n 8192
You can verify this limit by using the following command:
ulimit -a
For example, with the default ulimit of 64, a simple test driver can support only 25 concurrent clients, but with ulimit set to 8192, the same test driver can support 120 concurrent clients. The test driver spawned multiple threads, each of which performed a JNDI lookup and repeatedly called the same business method with a think (delay) time of 500ms between business method calls, exchanging data of about 100KB.
These settings apply to RMI/IIOP clients (on Solaris). Refer to Solaris documentation on the Sun Microsystems documentation web site (www.docs.sun.com) for more information on setting the file descriptor limits.
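As a sketch tying the commands above together, a launch wrapper for an RMI/IIOP client might look like the following (the client launch line is hypothetical; 8192 matches the rlim_fd_max example above):

```shell
#!/bin/sh
# Raise the soft file-descriptor limit before starting the client.
# 8192 matches the rlim_fd_max example above; the call fails if it
# exceeds the hard limit reported by `ulimit -a -H`.
ulimit -n 8192 2>/dev/null || echo "cannot raise fd limit; check rlim_fd_max in /etc/system"
echo "soft fd limit: $(ulimit -n)"
# java MyRmiIiopClient    # hypothetical client launch command
```

The raised soft limit applies only to this shell and the processes it starts, so the wrapper must exec or invoke the client itself.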
Linux Configuration

The following parameters must be added to the /etc/rc.d/rc.local file, which is executed during system start-up.
<-- begin
# max file count: ~256 descriptors per 4 MB. Based on the amount of
# RAM you have on the system, specify the number of file descriptors.
echo "65536" > /proc/sys/fs/file-max

# inode-max: 3-4 times the file-max
# file not present
#echo "262144" > /proc/sys/fs/inode-max

# make more local ports available
echo 1024 25000 > /proc/sys/net/ipv4/ip_local_port_range

# increase the memory available for socket buffers
echo 262143 > /proc/sys/net/core/rmem_max
echo 262143 > /proc/sys/net/core/rmem_default

# the configuration below is for 2.4.x kernels
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_wmem

# disable "RFC2018 TCP Selective Acknowledgements" and "RFC1323 TCP timestamps"
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps

# double the maximum amount of memory allocated to shm at runtime
echo "67108864" > /proc/sys/kernel/shmmax

# improve the virtual memory (VM) subsystem
echo "100 1200 128 512 15 5000 500 1884 2" > /proc/sys/vm/bdflush

# also apply the settings in /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
-- end -->
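The file-max comment above amounts to a rule of thumb of roughly 256 descriptors per 4 MB of RAM. A quick sketch of that calculation (ram_mb is a placeholder for your system's installed RAM in MB, not a value from the guide):

```shell
# Rule of thumb from the comment above: ~256 file descriptors per 4 MB of RAM.
# ram_mb is a hypothetical placeholder; substitute your installed RAM in MB.
ram_mb=1024
file_max=$(( ram_mb / 4 * 256 ))
echo "$file_max"   # 1 GB of RAM suggests 65536, the value used above
```

On systems with more RAM, scale the value written to /proc/sys/fs/file-max accordingly.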
Additionally, create an /etc/sysctl.conf file and append the following values to it:
<-- begin
# Disables packet forwarding
net.ipv4.ip_forward = 0
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
fs.file-max = 65536
vm.bdflush = 100 1200 128 512 15 5000 500 1884 2
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262143
net.core.rmem_default = 262143
net.ipv4.tcp_rmem = 4096 131072 262143
net.ipv4.tcp_wmem = 4096 131072 262143
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
kernel.shmmax = 67108864
-- end -->
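The shmmax comment above describes 67108864 as a doubling of the runtime maximum; as a sanity check, that value is exactly twice the common 2.4.x kernel default of 33554432 bytes (32 MB) — the default figure is an assumption about typical kernels of that era, not a value stated in this guide:

```shell
# Sanity check: 67108864 is exactly double 33554432 (32 MB), the shmmax
# default commonly shipped with 2.4.x kernels (assumed, not from this guide).
default_shmmax=33554432
echo $(( default_shmmax * 2 ))   # prints 67108864
```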