This section provides recommendations for optimally scaling server performance for the following server subsystems:
The GlassFish Server automatically takes advantage of multiple CPUs. In general, the effectiveness of multiple CPUs varies with the operating system and the workload, but more processors will generally improve dynamic content performance.
Static content involves mostly input/output (I/O) rather than CPU activity. If the server is tuned properly, increasing primary memory will increase its content caching and thus increase the relative amount of time it spends in I/O versus CPU activity. Studies have shown that doubling the number of CPUs increases servlet performance by 50 to 80 percent.
Put the OS, swap/paging file, GlassFish Server logs, and document tree each on separate hard drives. This way, if the log files fill up the log drive, the OS does not suffer. Also, it's easy to tell if, for example, the OS paging file is causing drive activity.
OS vendors generally provide specific recommendations for how much swap or paging space to allocate. Based on Oracle testing, GlassFish Server performs best with swap space equal to RAM, plus enough to map the document tree.
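As a quick arithmetic check of this guidance, the following sketch computes the recommended swap allocation; the RAM and document tree sizes are hypothetical, so substitute your host's actual values.

```shell
#!/bin/sh
# Hypothetical sizes in GB; replace with the host's actual RAM and document tree size
RAM_GB=16
DOC_TREE_GB=2

# Guidance above: swap space = RAM + enough to map the document tree
SWAP_GB=$(( RAM_GB + DOC_TREE_GB ))
echo "Recommended swap: ${SWAP_GB} GB"
```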
The number of peak concurrent users (Npeak) the server needs to handle.
The average request size on your site, r. The average request can include multiple documents. When in doubt, use the home page and all its associated files and graphics.
The time, t, that the average user is willing to wait for a document at peak utilization.
Then, the bandwidth required is:
Npeak × r / t
For example, supporting a peak of 50 users with an average document size of 24 Kbytes, and transferring each document in an average of 5 seconds, requires 240 Kbytes per second (1920 Kbit/s). So the site needs two T1 lines (each 1544 Kbit/s). This bandwidth also allows some overhead for growth.
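The example above can be reproduced with simple shell arithmetic; the figures are the hypothetical ones from the text.

```shell
#!/bin/sh
# Hypothetical peak load from the example: 50 users, 24-Kbyte requests, 5-second wait
NPEAK=50
R_KB=24
T_SEC=5

# Required bandwidth: Npeak * r / t, then converted from Kbytes/s to Kbit/s
KBYTES_PER_SEC=$(( NPEAK * R_KB / T_SEC ))
KBITS_PER_SEC=$(( KBYTES_PER_SEC * 8 ))
echo "${KBYTES_PER_SEC} Kbytes/s (${KBITS_PER_SEC} Kbit/s)"
```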
The server’s network interface card must support more than the WAN to which it is connected. For example, if you have up to three T1 lines, you can get by with a 10BaseT interface. Up to a T3 line (45 Mbit/s), you can use 100BaseT. But if you have more than 50 Mbit/s of WAN bandwidth, consider configuring multiple 100BaseT interfaces, or look at Gigabit Ethernet technology.
GlassFish Server uses User Datagram Protocol (UDP) to transmit multicast messages to GlassFish Server instances in a cluster. For peak performance from a cluster that uses UDP multicast, limit the need to retransmit UDP messages by setting the UDP buffer size large enough to avoid excessive UDP datagram loss.
The size of UDP buffer that is required to prevent excessive UDP datagram loss depends on many factors, such as:
The number of instances in the cluster
The number of instances on each host
The number of processors
The amount of memory
The speed of the hard disk for virtual memory
If only one instance is running on each host in your cluster, the default UDP buffer size should suffice. If several instances are running on each host, determine whether the UDP buffer is large enough by testing for the loss of UDP packets.
Note - On Linux systems, the default UDP buffer size might be insufficient even if only one instance is running on each host. In this situation, set the UDP buffer size as explained in To Set the UDP Buffer Size on Linux Systems.
If necessary, stop any running clusters as explained in To Stop a Cluster in Oracle GlassFish Server 3.1-3.1.1 High Availability Administration Guide.
How you determine the number of lost packets depends on the operating system. For example:
On Linux systems, use the netstat -su command and look for the packet receive errors count in the Udp section.
On AIX systems, use the netstat -s command and look for the fragments dropped (dup or out of space) count in the ip section.
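As a sketch of the Linux check described above, the following extracts the error counter from `netstat -su` output. A hypothetical captured sample is used here so the parsing is self-contained; in practice, pipe `netstat -su` directly and compare the counts before and after a test run.

```shell
#!/bin/sh
# Hypothetical excerpt of `netstat -su` output; in practice use: netstat -su
sample='Udp:
    1234567 packets received
    42 packet receive errors
    7654320 packets sent'

# The first field on the matching line is the cumulative error count
errors=$(printf '%s\n' "$sample" | awk '/packet receive errors/ {print $1}')
echo "UDP packet receive errors: $errors"
```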
Start each cluster as explained in To Start a Cluster in Oracle GlassFish Server 3.1-3.1.1 High Availability Administration Guide.
On Linux systems, a default UDP buffer size is set for the client, but not for the server. Therefore, on Linux systems, the UDP buffer size might have to be increased. Setting the UDP buffer size involves setting the following kernel parameters:
If you set the parameters in the /etc/sysctl.conf file, the settings are preserved when the system is rebooted. If you set the parameters at runtime, the settings are not preserved when the system is rebooted.
net.core.rmem_max=rmem-max
net.core.wmem_max=wmem-max
net.core.rmem_default=rmem-default
net.core.wmem_default=wmem-default
$ /sbin/sysctl -w net.core.rmem_max=rmem-max \
net.core.wmem_max=wmem-max \
net.core.rmem_default=rmem-default \
net.core.wmem_default=wmem-default
Example 5-1 Setting the UDP Buffer Size in the /etc/sysctl.conf File
This example shows the lines in the /etc/sysctl.conf file that set the kernel parameters controlling the UDP buffer size to 524288.
net.core.rmem_max=524288
net.core.wmem_max=524288
net.core.rmem_default=524288
net.core.wmem_default=524288
Example 5-2 Setting the UDP Buffer Size at Runtime
This example sets the kernel parameters for controlling the UDP buffer size to 524288 at runtime.
$ /sbin/sysctl -w net.core.rmem_max=524288 \
net.core.wmem_max=524288 \
net.core.rmem_default=524288 \
net.core.wmem_default=524288
net.core.rmem_max = 524288
net.core.wmem_max = 524288
net.core.rmem_default = 524288
net.core.wmem_default = 524288
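To confirm that the runtime change took effect, one way (assuming a Linux host; reading /proc/sys requires no root privileges) is to read the values back directly:

```shell
#!/bin/sh
# Read back the effective kernel parameters; reading /proc/sys needs no root
for p in rmem_max wmem_max rmem_default wmem_default; do
  printf 'net.core.%s = %s\n' "$p" "$(cat /proc/sys/net/core/$p)"
done
```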