Sun Java System Application Server Enterprise Edition 8.1 2004Q4 Beta Installation Guide
Chapter 2
Preparing for HADB Setup

After the high-availability components are installed on the servers that will be part of a cluster, you are ready to set up high availability.
The following topics are addressed here:

- Configuring Shared Memory and Semaphores
- Time Synchronization
- File System Support
- Starting the HADB Management Agent
After you have completed these tasks, see the Sun Java System Application Server Administration Guide for comprehensive instructions on configuring and managing the cluster, the load balancer plug-in, and the high-availability database (HADB).
Note
For up-to-date information on requirements for this topic, refer to the HADB information in the Sun Java System Application Server Release Notes.
Configuring Shared Memory and Semaphores

Information on high-availability topologies is available in the Sun Java System Application Server Deployment Planning Guide.
This section contains instructions for configuring shared memory for the HADB host machines. You must configure the shared memory before working with the HADB.
Configuring Shared Memory on Solaris
- Log in as root.
- Add the following to the /etc/system file for shared memory:
set shmsys:shminfo_shmmax=0x80000000
set shmsys:shminfo_shmseg=20
This example sets the maximum shared memory, shmmax, to 2 GB (hexadecimal 0x80000000), which is sufficient for most configurations.
The shmsys:shminfo_shmmax setting is calculated as 10,000,000 per 256 MB of memory, so it should be sized according to the amount of memory on the host. To determine your host's memory, run this command:
prtconf | grep Memory
Then plug the value into the following formula:
((host MB / 256 MBytes) * 10,000,000)
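The calculation can be sketched in shell. The 4096 MB figure below is an assumed example; on a real host, substitute the "Memory size" value reported by prtconf:

```shell
# Sketch of the shmmax formula above: (host MB / 256 MB) * 10,000,000.
# host_mb is an assumed example value, not read from a live system.
host_mb=4096
shmmax=$(( (host_mb / 256) * 10000000 ))
echo "set shmsys:shminfo_shmmax=$shmmax"
```

For a 4096 MB host this yields 160,000,000.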
For semaphores:
Your /etc/system file may already contain semmni, semmns, and semmnu entries. For example:
set semsys:seminfo_semmni=10
set semsys:seminfo_semmns=60
set semsys:seminfo_semmnu=30
If the entries are present, increment the values by adding 16, 128, and 1000 respectively, as follows:
set semsys:seminfo_semmni=26
set semsys:seminfo_semmns=188
set semsys:seminfo_semmnu=1030
If your /etc/system file does not contain these entries, add the following at the end of the file:
set semsys:seminfo_semmni=16
set semsys:seminfo_semmns=128
set semsys:seminfo_semmnu=1000
This is sufficient to run up to 16 HADB nodes on the computer.
Consult the HADB chapter in the Sun Java System Application Server Enterprise Edition 8 2004Q2 Performance Tuning Guide if more than 16 nodes are involved.
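The increments described above (16, 128, and 1000) can be sketched as arithmetic on the existing values; the starting values here are the example entries shown earlier:

```shell
# Apply the documented increments to the example existing /etc/system values.
old_semmni=10; old_semmns=60; old_semmnu=30
echo "set semsys:seminfo_semmni=$(( old_semmni + 16 ))"
echo "set semsys:seminfo_semmns=$(( old_semmns + 128 ))"
echo "set semsys:seminfo_semmnu=$(( old_semmnu + 1000 ))"
```

This prints the incremented entries 26, 188, and 1030, matching the example above.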
- Reboot the machine. Changes to the /etc/system file take effect only after a reboot.
Configuring Shared Memory on Linux
- To increase the shared memory to 512 MB, run the following:
echo 536870912 > /proc/sys/kernel/shmmax
echo 536870912 > /proc/sys/kernel/shmall
shmmax is the maximum size in bytes of a single shared memory segment, and shmall is the total amount of shared memory to be made available (on Linux, shmall is counted in pages, not bytes).
These values are sufficient for a standard HADB node that uses default values; for larger configurations, increase them accordingly.
- The shmmax limit can be changed in the proc file system without rebooting the machine, or with the sysctl(8) utility. To make the change permanent, append it to the /etc/sysctl.conf file, which is read during the boot process:
echo kernel.shmmax=536870912 >> /etc/sysctl.conf
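As a quick check, the current Linux limits can be read back through the same proc interface (a sketch; the values vary by host):

```shell
# Read the current shared-memory limits from the proc file system.
cat /proc/sys/kernel/shmmax   # maximum bytes in a single segment
cat /proc/sys/kernel/shmall   # system-wide total (counted in pages on Linux)
```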
For an explanation of HADB nodes, see Configuring the HADB in the Administering the High-Availability Database (Enterprise Edition) chapter of the Sun Java System Application Server Administration Guide. Also consult the Sun Java System Application Server Performance Tuning Guide to learn about stress and performance testing.
Time Synchronization

It is strongly recommended to synchronize the clocks on the hosts running HADB, because HADB uses time stamps based on the system clock for debugging purposes as well as for controlling internal events. The events are written to history files prefixed by time stamps. Since HADB is a distributed system, history files from all HADB nodes are analyzed together during troubleshooting. HADB also uses the system clock internally to manage time-dependent events such as timeouts.
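One common way to keep the clocks synchronized (not prescribed by this guide) is to run NTP on every HADB host. A minimal sketch of a Solaris /etc/inet/ntp.conf, where timehost.example.com is a placeholder for your site's NTP server:

```
# /etc/inet/ntp.conf -- minimal example; the server name is a placeholder
server timehost.example.com
driftfile /var/ntp/ntp.drift
```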
File System Support

Before configuring HADB, note the following if you use one of these file systems:
- HADB supports the ext2 and ext3 file systems on Red Hat Enterprise Linux 3.0. On Red Hat Enterprise Linux 2.1, HADB supports only the ext2 file system.
- When Veritas File System is used on the Solaris platform, the message WRN: Direct disk I/O mapping failed is written to the history files. This message indicates that HADB cannot turn on direct I/O for the data and log devices. Direct I/O is a performance enhancement that reduces the CPU cost of writing disk pages. It also reduces the overhead of administering dirty data pages in the operating system.
To use direct I/O with Veritas File System, use one of the following:
- Create the data and log devices on a file system that is mounted with the option mincache=direct. This option applies to all files created on the file system. See the mount_vxfs(1M) man page for details.
- Use the Veritas Quick I/O facility to perform raw I/O to file system files. See the VERITAS File System 4.0 Administrator's Guide for Solaris for details.
Starting the HADB Management Agent

HADB requires the HADB management agent to be running for all operations. To start the agent, go to the install_dir/hadb/4/bin directory and run this command:
ma [--javahome j2se_1.4.x_dir] ma.cfg
Use the --javahome option to specify the J2SE 1.4.x home directory if HADB cannot find a valid J2SE in the path.
For additional information, see the Sun Java System Application Server Enterprise Edition 8 2004Q4 Administration Guide.