Oracle GlassFish Server 3.1 Performance Tuning Guide


Tuning UltraSPARC CMT–Based Systems

Use a combination of tunable parameters and other settings to tune UltraSPARC CMT–based systems. The values below are an example of how you might tune your system to achieve the desired result.

Tuning Operating System and TCP Settings

The following table shows the operating system tuning for Solaris 10 used when benchmarking for performance and scalability on UltraSPARC CMT–based systems (64-bit systems).

Table 5-2 Tuning 64-bit Systems for Performance Benchmarking

rlim_fd_max
    Scope: /etc/system
    Default Value: 65536
    Tuned Value: 260000
    Comments: Process open file descriptor limit; should account for the expected load (the associated sockets, files, and pipes, if any).

hires_tick
    Scope: /etc/system
    Tuned Value: 1

sq_max_size
    Scope: /etc/system
    Default Value: 2
    Tuned Value: 0
    Comments: Controls the streams driver queue size; setting it to 0 makes it infinite, so the performance runs are not hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0 might not be optimal for production systems with high network traffic.

ip:ip_squeue_bind
    Tuned Value: 0

ip:ip_squeue_fanout
    Tuned Value: 1

ipge:ipge_taskq_disable
    Scope: /etc/system
    Tuned Value: 0

ipge:ipge_tx_ring_size
    Scope: /etc/system
    Tuned Value: 2048

ipge:ipge_srv_fifo_depth
    Scope: /etc/system
    Tuned Value: 2048

ipge:ipge_bcopy_thresh
    Scope: /etc/system
    Tuned Value: 384

ipge:ipge_dvma_thresh
    Scope: /etc/system
    Tuned Value: 384

ipge:ipge_tx_syncq
    Scope: /etc/system
    Tuned Value: 1

tcp_conn_req_max_q
    Scope: ndd /dev/tcp
    Default Value: 128
    Tuned Value: 3000

tcp_conn_req_max_q0
    Scope: ndd /dev/tcp
    Default Value: 1024
    Tuned Value: 3000

tcp_max_buf
    Scope: ndd /dev/tcp
    Tuned Value: 4194304

tcp_cwnd_max
    Scope: ndd /dev/tcp
    Tuned Value: 2097152

tcp_xmit_hiwat
    Scope: ndd /dev/tcp
    Default Value: 8129
    Tuned Value: 400000
    Comments: Increases the transmit buffer.

tcp_recv_hiwat
    Scope: ndd /dev/tcp
    Default Value: 8129
    Tuned Value: 400000
    Comments: Increases the receive buffer.

Note that the IPGE driver version is 1.25.25.
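The /etc/system parameters above take the form set parameter=value (for example, set rlim_fd_max=260000) and take effect only after a reboot, while the ndd settings are applied at runtime and do not persist across reboots, so they are typically reapplied from a startup script. The following is a minimal sketch of such a script using the tuned values from Table 5-2; the RUN dry-run switch is an illustration device, not part of the guide (RUN defaults to echo so the sketch only prints the commands; on a Solaris system you would run it as root with RUN set to the empty string):

```shell
#!/bin/sh
# Sketch: reapply the runtime ndd /dev/tcp tunings from Table 5-2.
# RUN=echo (the default) prints each command instead of executing it;
# set RUN="" and run as root on Solaris to actually apply the settings.
RUN=${RUN:-echo}

apply_tcp_tunings() {
    $RUN ndd -set /dev/tcp tcp_conn_req_max_q 3000
    $RUN ndd -set /dev/tcp tcp_conn_req_max_q0 3000
    $RUN ndd -set /dev/tcp tcp_max_buf 4194304
    $RUN ndd -set /dev/tcp tcp_cwnd_max 2097152
    $RUN ndd -set /dev/tcp tcp_xmit_hiwat 400000
    $RUN ndd -set /dev/tcp tcp_recv_hiwat 400000
}

apply_tcp_tunings
```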

Disk Configuration

If HTTP access is logged, follow these guidelines for the disk:

Network Configuration

If more than one network interface card is used, make sure the network interrupts are not all going to the same core. Run the following script to disable interrupt handling on selected processors:

#!/bin/sh
# Walk the list of online processors in groups of four: the first
# processor in each group keeps handling interrupts, and interrupt
# handling is disabled (psradm -i) on the next three.
allpsr=`/usr/sbin/psrinfo | grep -v off-line | awk '{ print $1 }'`
set $allpsr
numpsr=$#
while [ $numpsr -gt 0 ];
do
    shift
    numpsr=`expr $numpsr - 1`
    tmp=1
    while [ $tmp -ne 4 ];
    do
        /usr/sbin/psradm -i $1
        shift
        numpsr=`expr $numpsr - 1`
        tmp=`expr $tmp + 1`
    done
done
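The selection logic above can be sketched portably, without the Solaris-only psrinfo and psradm commands: given a list of CPU ids, print the ids that would be passed to psradm -i. The function name select_intr_disabled and the guard for id counts that are not a multiple of four are additions for illustration, not part of the original script:

```shell
#!/bin/sh
# Portable sketch of the grouping logic: the first CPU id in each
# group of four keeps interrupts; the rest are printed as the ids
# that would have interrupt handling disabled via psradm -i.
select_intr_disabled() {
    numpsr=$#
    while [ "$numpsr" -gt 0 ]; do
        shift                            # first CPU of the group keeps interrupts
        numpsr=$((numpsr - 1))
        tmp=1
        while [ "$tmp" -ne 4 ] && [ "$numpsr" -gt 0 ]; do
            echo "$1"                    # this CPU would get psradm -i
            shift
            numpsr=$((numpsr - 1))
            tmp=$((tmp + 1))
        done
    done
}

# For eight CPUs, ids 0 and 4 keep interrupts; 1 2 3 5 6 7 are disabled.
select_intr_disabled 0 1 2 3 4 5 6 7
```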

Put all network interfaces into a single group. For example:

$ ifconfig ipge0 group webserver
$ ifconfig ipge1 group webserver