This chapter discusses tuning the operating system (OS) for optimum performance. It covers the following topics:
This section provides recommendations for scaling the server for optimal performance, covering the following server subsystems:
The Enterprise Server automatically takes advantage of multiple CPUs. In general, the effectiveness of multiple CPUs varies with the operating system and the workload, but more processors will generally improve dynamic content performance.
Static content involves mostly input/output (I/O) rather than CPU activity. If the server is tuned properly, increasing primary memory will increase its content caching and thus increase the relative amount of time it spends in I/O versus CPU activity. Studies have shown that doubling the number of CPUs increases servlet performance by 50 to 80 percent.
See the section Hardware and Software Requirements in the Sun Java System Application Server Release Notes for specific memory recommendations for each supported operating system.
It is best to have enough disk space for the OS, document tree, and log files. In most cases 2GB total is sufficient.
Put the OS, swap/paging file, Enterprise Server logs, and document tree each on separate hard drives. That way, if the log files fill up the log drive, the OS does not suffer. It is also easy to tell, for example, whether the OS paging file is causing drive activity.
OS vendors generally provide specific recommendations for how much swap or paging space to allocate. Based on Sun testing, Enterprise Server performs best with swap space equal to RAM, plus enough to map the document tree.
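To see how much swap space is currently configured on the Solaris OS, for example, you can use the swap utility:

# List swap devices, sizes, and free space (in 512-byte blocks).
/usr/sbin/swap -l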
To determine the bandwidth the application needs, determine the following values:
The number of peak concurrent users (Npeak) the server needs to handle.
The average request size on your site, r. The average request can include multiple documents. When in doubt, use the home page and all its associated files and graphics.
The length of time, t, that the average user is willing to wait for a document at peak utilization.
Then, the bandwidth required is:
Npeak × r / t
For example, to support a peak of 50 users with an average document size of 24 Kbytes, transferring each document in an average of 5 seconds, the site requires a bandwidth of 240 Kbytes per second (1920 Kbit/s). So the site needs two T1 lines (each 1544 Kbit/s). This bandwidth also allows some overhead for growth.
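As an illustration only, the arithmetic above can be scripted in the shell; the variable names are arbitrary and the numbers are taken from the example:

#!/bin/sh
# Bandwidth estimate: Npeak * r / t, using the example values above.
NPEAK=50     # peak concurrent users
R=24         # average request size, in Kbytes
T=5          # acceptable transfer time, in seconds
KBYTES=`expr $NPEAK \* $R / $T`
KBITS=`expr $KBYTES \* 8`
echo "Required bandwidth: $KBYTES Kbytes/s ($KBITS Kbit/s)"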
The server’s network interface card must support more bandwidth than the WAN to which it is connected. For example, if you have up to three T1 lines, you can get by with a 10BaseT interface. Up to a T3 line (45 Mbit/s), you can use 100BaseT. But if you have more than 50 Mbit/s of WAN bandwidth, consider configuring multiple 100BaseT interfaces, or look at Gigabit Ethernet technology.
Solaris Dynamic Tracing (DTrace) is a comprehensive dynamic tracing framework for the Solaris Operating System (OS). You can use the DTrace Toolkit to monitor the system. The DTrace Toolkit is available through the OpenSolaris project from the DTraceToolkit page.
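For example, the following generic DTrace one-liner (an illustration, not part of the DTrace Toolkit) counts system calls by process name until you press Control-C:

# Count system calls by process name; run as root on the Solaris OS.
dtrace -n 'syscall:::entry { @[execname] = count(); }'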
Tuning Solaris TCP/IP settings benefits programs that open and close many sockets. Since the Enterprise Server operates with a small fixed set of connections, the performance gain might not be significant.
The following table shows Solaris tuning parameters that affect performance and scalability benchmarking. These values are examples of how to tune your system for best performance.
Table 5–1 Tuning Parameters for Solaris
| Parameter | Scope | Default | Tuned Value | Comments |
|---|---|---|---|---|
| rlim_fd_max | /etc/system | 65536 | 65536 | Limit of process open file descriptors. Set to account for the expected load (for the associated sockets, files, and pipes, if any). |
| rlim_fd_cur | /etc/system | 1024 | 8192 | |
| sq_max_size | /etc/system | 2 | 0 | Controls the streams driver queue size; setting it to 0 makes it infinite so that performance runs are not hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0 might not be optimal for production systems with high network traffic. |
| tcp_close_wait_interval | ndd /dev/tcp | 240000 | 60000 | Set on clients too. |
| tcp_time_wait_interval | ndd /dev/tcp | 240000 | 60000 | Set on clients too. |
| tcp_conn_req_max_q | ndd /dev/tcp | 128 | 1024 | |
| tcp_conn_req_max_q0 | ndd /dev/tcp | 1024 | 4096 | |
| tcp_ip_abort_interval | ndd /dev/tcp | 480000 | 60000 | |
| tcp_keepalive_interval | ndd /dev/tcp | 7200000 | 900000 | For high-traffic web sites, lower this value. |
| tcp_rexmit_interval_initial | ndd /dev/tcp | 3000 | 3000 | If the retransmission rate is greater than 30-40%, increase this value. |
| tcp_rexmit_interval_max | ndd /dev/tcp | 240000 | 10000 | |
| tcp_rexmit_interval_min | ndd /dev/tcp | 200 | 3000 | |
| tcp_smallest_anon_port | ndd /dev/tcp | 32768 | 1024 | Set on clients too. |
| tcp_slow_start_initial | ndd /dev/tcp | 1 | 2 | Slightly faster transmission of small amounts of data. |
| tcp_xmit_hiwat | ndd /dev/tcp | 8129 | 32768 | Size of the transmit buffer. |
| tcp_recv_hiwat | ndd /dev/tcp | 8129 | 32768 | Size of the receive buffer. |
| tcp_conn_hash_size | ndd /dev/tcp | 512 | 8192 | Size of the connection hash table. See Sizing the Connection Hash Table. |
The connection hash table keeps all the information for active TCP connections. Use the following command to get the size of the connection hash table:
ndd -get /dev/tcp tcp_conn_hash_size
This value does not limit the number of connections, but a table that is too small can make connection hashing take longer. The default size is 512.
To make lookups more efficient, set the value to half of the number of concurrent TCP connections that are expected on the server. You can set this value only in /etc/system, and it becomes effective at boot time.
Use the following command to get the current number of TCP connections.
netstat -nP tcp | wc -l
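For example, if the command above reports roughly 16,000 concurrent connections at peak, half of that is 8,192. You would then add a line such as the following to /etc/system (the same tcp:tcp_conn_hash_size syntax appears in the x86 tuning settings later in this chapter) and reboot:

* Sized to half of roughly 16,000 expected concurrent TCP connections.
set tcp:tcp_conn_hash_size=8192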
On the Solaris OS, setting the maximum number of open files with ulimit has the biggest impact on efforts to support the maximum number of RMI/IIOP clients.
To increase the hard limit, add the following line to /etc/system and reboot the system once:
set rlim_fd_max = 8192
Verify this hard limit by using the following command:
ulimit -a -H
Once the above hard limit is set, increase the soft limit explicitly (up to the hard limit) using the following command:
ulimit -n 8192
Verify this limit by using the following command:
ulimit -a
For example, with the default ulimit of 64, a simple test driver can support only 25 concurrent clients, but with ulimit set to 8192, the same test driver can support 120 concurrent clients. The test driver spawned multiple threads, each of which performed a JNDI lookup and repeatedly called the same business method with a think (delay) time of 500 ms between business method calls, exchanging data of about 100 KB. These settings apply to RMI/IIOP clients on the Solaris OS.
On Linux systems, the following parameters must be added to the /etc/rc.d/rc.local file, which is executed during system startup.
# max file count: roughly 256 descriptors per 4 MB of system RAM.
# Specify the number of file descriptors based on the amount of system RAM.
echo "65536" > /proc/sys/fs/file-max
# inode-max: 3-4 times file-max (this file is not present on newer kernels)
#echo "262144" > /proc/sys/fs/inode-max
# make more local ports available
echo 1024 25000 > /proc/sys/net/ipv4/ip_local_port_range
# increase the memory available for socket buffers
echo 262143 > /proc/sys/net/core/rmem_max
echo 262143 > /proc/sys/net/core/rmem_default
# the settings below are for 2.4.x kernels
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_wmem
# disable RFC 2018 TCP selective acknowledgements and RFC 1323 TCP timestamps
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# double the maximum amount of memory allocated to shared memory at runtime
echo "67108864" > /proc/sys/kernel/shmmax
# improve the virtual memory (VM) subsystem
echo "100 1200 128 512 15 5000 500 1884 2" > /proc/sys/vm/bdflush
# also load the settings from /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
Additionally, create an /etc/sysctl.conf file and add the following values to it:
# Disables packet forwarding
net.ipv4.ip_forward = 0
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
fs.file-max = 65536
vm.bdflush = 100 1200 128 512 15 5000 500 1884 2
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262143
net.core.rmem_default = 262143
net.ipv4.tcp_rmem = 4096 131072 262143
net.ipv4.tcp_wmem = 4096 131072 262143
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
kernel.shmmax = 67108864
For further information on tuning the Solaris OS, see the Solaris Tunable Parameters Reference Manual.
The following are some options to consider when tuning Solaris on x86 for Application Server and HADB:
Some of the values depend on the system resources available. After making any changes to /etc/system, reboot the machines.
Add (or edit) the following lines in the /etc/system file:
set rlim_fd_max=65536
set rlim_fd_cur=65536
set sq_max_size=0
set tcp:tcp_conn_hash_size=8192
set autoup=60
set pcisch:pci_stream_buf_enable=0
These settings affect the file descriptors.
Add (or edit) the following lines in the /etc/system file:
set ip:tcp_squeue_wput=1
set ip:tcp_squeue_close=1
set ip:ip_squeue_bind=1
set ip:ip_squeue_worker_wait=10
set ip:ip_squeue_profile=0
These settings tune the IP stack.
Changes made with ndd do not persist across reboots. To preserve them, place the following changes to the default TCP variables in a startup script that runs when the system reboots:
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 7200000
ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_smallest_anon_port 32768
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 32768
ndd -set /dev/tcp tcp_recv_hiwat 32768
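One way to run these commands at boot time is to wrap them in an init script; the file name and run-level link below are assumptions, not documented requirements:

#!/bin/sh
# Hypothetical script, for example /etc/init.d/nddconfig,
# linked as /etc/rc2.d/S99nddconfig so that it runs at boot.
case "$1" in
start)
    ndd -set /dev/tcp tcp_time_wait_interval 60000
    ndd -set /dev/tcp tcp_conn_req_max_q 16384
    # ...repeat for the remaining ndd -set commands listed above...
    ;;
esac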
To tune for maximum performance on Linux, you need to make adjustments to the following:
You may need to increase the number of file descriptors from the default. Having a higher number of file descriptors ensures that the server can open sockets under high load and not abort requests coming in from clients.
Start by checking system limits for file descriptors with this command:
cat /proc/sys/fs/file-max
8192
The current limit shown is 8192. To increase it to 65535, use the following command (as root):
echo "65535" > /proc/sys/fs/file-max
To make this value survive a system reboot, add it to /etc/sysctl.conf and specify the maximum number of open files permitted:
fs.file-max = 65535
Note: The parameter is not proc.sys.fs.file-max, as one might expect.
To list the available parameters that can be modified using sysctl:
sysctl -a
To load new values from the sysctl.conf file:
sysctl -p /etc/sysctl.conf
To check and modify limits in the current shell, use the following command (limit is a C shell built-in; in bash, use ulimit -a):
limit
The output will look something like this:
cputime         unlimited
filesize        unlimited
datasize        unlimited
stacksize       8192 kbytes
coredumpsize    0 kbytes
memoryuse       unlimited
descriptors     1024
memorylocked    unlimited
maxproc         8146
openfiles       1024
The openfiles and descriptors entries show a limit of 1024. To increase the limit to 65535 for all users, edit /etc/security/limits.conf as root, and modify or add the nofile (number of files) entries:
*    soft    nofile    65535
*    hard    nofile    65535
The character “*” is a wildcard that identifies all users. You could also specify a user ID instead.
Then edit /etc/pam.d/login and add the line:
session required /lib/security/pam_limits.so
On Red Hat, you also need to edit /etc/pam.d/sshd and add the following line:
session required /lib/security/pam_limits.so
On many systems, this procedure will be sufficient. Log in as a regular user and try it before doing the remaining steps. The remaining steps might not be required, depending on how pluggable authentication modules (PAM) and secure shell (SSH) are configured.
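A quick way to verify the new limits is to open a fresh login session (so that PAM applies limits.conf) and check the soft limit; this assumes a Bourne-compatible login shell:

# In a new login session as a regular user:
ulimit -n
# Expected output after the changes above: 65535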
To change virtual memory settings, add the following to /etc/rc.local:
echo 100 1200 128 512 15 5000 500 1884 2 > /proc/sys/vm/bdflush
For more information, view the man pages for bdflush.
For HADB settings, refer to Chapter 6, Tuning for High-Availability.
To ensure that the network interface is operating in full duplex mode, add the following entry into /etc/rc.local:
mii-tool -F 100baseTx-FD eth0
where eth0 is the name of the network interface card (NIC).
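Before forcing the mode, you can check the card's current negotiation status by running mii-tool in verbose mode:

# Show link status and advertised capabilities for eth0.
mii-tool -v eth0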
Test the disk speed.
Use this command:
/sbin/hdparm -t /dev/hdX
Enable direct memory access (DMA).
Use this command:
/sbin/hdparm -d1 /dev/hdX
Check the speed again using the hdparm command.
If DMA was not already enabled, the transfer rate might improve considerably. To enable DMA at every reboot, add the /sbin/hdparm -d1 /dev/hdX line to /etc/conf.d/local.start, /etc/init.d/rc.local, or whatever the startup script is called.
For information on SCSI disks, see: System Tuning for Linux Servers — SCSI.
Add the following entry to /etc/rc.local
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 60000 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 15000 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
Add the following to /etc/sysctl.conf
# Disables packet forwarding
net.ipv4.ip_forward = 0
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262140
net.core.rmem_default = 262140
net.ipv4.tcp_rmem = 4096 131072 262140
net.ipv4.tcp_wmem = 4096 131072 262140
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_keepalive_time = 60000
net.ipv4.tcp_keepalive_intvl = 15000
net.ipv4.tcp_fin_timeout = 30
Add the following as the last entry in /etc/rc.local
sysctl -p /etc/sysctl.conf
Reboot the system.
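After the reboot, you can spot-check that the settings took effect; sysctl accepts a list of variable names:

# The values reported should match /etc/sysctl.conf.
sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_keepalive_time net.ipv4.tcp_window_scaling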
Use the following command to increase the size of the receive buffer, tcp_recv_hiwat, from its default of 8129:

ndd -set /dev/tcp tcp_recv_hiwat 32768
Use a combination of tunable parameters and other parameters to tune UltraSPARC T1–based systems. These values are an example of how you might tune your system to achieve the desired result.
The following table shows the operating system tuning for Solaris 10 used when benchmarking for performance and scalability on UltraSPARC T1–based systems (64-bit systems).
Table 5–2 Tuning 64–bit Systems for Performance Benchmarking
| Parameter | Scope | Default Value | Tuned Value | Comments |
|---|---|---|---|---|
| rlim_fd_max | /etc/system | 65536 | 260000 | Process open file descriptor limit; should account for the expected load (for the associated sockets, files, and pipes, if any). |
| hires_tick | /etc/system | | 1 | |
| sq_max_size | /etc/system | 2 | 0 | Controls the streams driver queue size; setting it to 0 makes it infinite so that performance runs are not hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0 might not be optimal for production systems with high network traffic. |
| ip:ip_squeue_bind | /etc/system | | 0 | |
| ip:ip_squeue_fanout | /etc/system | | 1 | |
| ipge:ipge_taskq_disable | /etc/system | | 0 | |
| ipge:ipge_tx_ring_size | /etc/system | | 2048 | |
| ipge:ipge_srv_fifo_depth | /etc/system | | 2048 | |
| ipge:ipge_bcopy_thresh | /etc/system | | 384 | |
| ipge:ipge_dvma_thresh | /etc/system | | 384 | |
| ipge:ipge_tx_syncq | /etc/system | | 1 | |
| tcp_conn_req_max_q | ndd /dev/tcp | 128 | 3000 | |
| tcp_conn_req_max_q0 | ndd /dev/tcp | 1024 | 3000 | |
| tcp_max_buf | ndd /dev/tcp | | 4194304 | |
| tcp_cwnd_max | ndd /dev/tcp | | 2097152 | |
| tcp_xmit_hiwat | ndd /dev/tcp | 8129 | 400000 | To increase the transmit buffer. |
| tcp_recv_hiwat | ndd /dev/tcp | 8129 | 400000 | To increase the receive buffer. |
Note that the IPGE driver version is 1.25.25.
If HTTP access is logged, follow these guidelines for the disk:
Write access logs on faster disks or attached storage.
If running multiple instances, move the logs for each instance onto separate disks as much as possible.
Enable the disk read/write cache. Note that if you enable write cache on the disk, some writes might be lost if the disk fails.
Consider mounting the disks with the following options, which might yield better disk performance: nologging, directio, noatime. (An example mount entry follows this list.)
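As an illustration, an /etc/vfstab entry for a UFS file system dedicated to logs might look like the following; the device names are hypothetical, and note that UFS spells the direct I/O mount option forcedirectio:

#device to mount   device to fsck      mount point  FS type  fsck pass  mount at boot  mount options
/dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /logs        ufs      2          yes            nologging,noatime,forcedirectio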
If more than one network interface card is used, make sure the network interrupts are not all going to the same core. Run the following script to disable interrupts:
allpsr=`/usr/sbin/psrinfo | grep -v off-line | awk '{ print $1 }'`
set $allpsr
numpsr=$#
while [ $numpsr -gt 0 ]; do
    shift
    numpsr=`expr $numpsr - 1`
    tmp=1
    while [ $tmp -ne 4 ]; do
        /usr/sbin/psradm -i $1
        shift
        numpsr=`expr $numpsr - 1`
        tmp=`expr $tmp + 1`
    done
done
Put all network interfaces into a single group. For example:
ifconfig ipge0 group webserver
ifconfig ipge1 group webserver
In some situations, performance can be improved by using large page sizes. The start options to use depend on your processor architecture. The following examples show the options to start the 32–bit Enterprise Server and the 64–bit Enterprise Server with 4–Mbyte pages.
To start the 32–bit Enterprise Server with 4–Mbyte pages, use the following options:
LD_PRELOAD_32=/usr/lib/mpss.so.1
export LD_PRELOAD_32
export MPSSHEAP=4M
./bin/startserv
unset LD_PRELOAD_32
unset MPSSHEAP
To start the 64–bit Enterprise Server with 4–Mbyte pages, use the following options:
LD_PRELOAD_64=/usr/lib/64/mpss.so.1
export LD_PRELOAD_64
export MPSSHEAP=4M
./bin/startserv
unset LD_PRELOAD_64
unset MPSSHEAP
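To confirm that the heap is actually being mapped with 4-Mbyte pages, you can inspect the running server process with pmap; replace <pid> with the server's process ID:

# The Pgsz column shows the page size backing each mapping (Solaris).
pmap -s <pid>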