Chapter 2
Preparing for HADB Setup
After the high-availability components are installed on the servers that will be part of a cluster, you are ready to set up high availability.
The following topics are addressed here:
- Configuring Shared Memory and Semaphores
- Network Configuration Requirements
- Time Synchronization
- File System Support
- Running HADB Node Supervisor Processes with Real-time Priority
- Starting the HADB Management Agent
- Setting Up the User Environment
- Setting Up Administration for Non-Root
- Using clsetup
After you have completed the tasks here, proceed to the Sun Java System Application Server Administration Guide for comprehensive instructions on configuring and managing the cluster, the load balancer plug-in, and the high-availability database (HADB).
Information on high-availability topologies is available in the Sun Java System Application Server System Deployment Guide.
Configuring Shared Memory and Semaphores
This section contains instructions for configuring shared memory and System V semaphores for the HADB host machines. You must configure the shared memory before working with HADB.
Configuring Shared Memory on Solaris
- Log in as root.
- Set the shmmax value to the size of the physical memory on the HADB host machine. The maximum shared memory segment size must be larger than the size of the HADB database buffer pool. For a machine with 2 Gbytes (0x80000000 hexadecimal) of main memory, add the following to the /etc/system file:
set shmsys:shminfo_shmmax=0x80000000
set shmsys:shminfo_shmseg=20
- Check your /etc/system file for semaphore configuration entries (a command for listing the existing entries is shown after these steps). This file may already contain semmni, semmns, and semmnu entries. For example:
set semsys:seminfo_semmni=10
set semsys:seminfo_semmns=60
set semsys:seminfo_semmnu=30
If the entries are present, increment the values by 16, 128, and 1000, respectively. The entries in the example above would change to:
set semsys:seminfo_semmni=26
set semsys:seminfo_semmns=188
set semsys:seminfo_semmnu=1030
If your /etc/system file does not contain the above-mentioned entries, add the following entries at the end of the file:
set semsys:seminfo_semmni=16
set semsys:seminfo_semmns=128
set semsys:seminfo_semmnu=1000
This is sufficient to run up to 16 HADB nodes on the computer. Consult the HADB chapter in the Sun Java System Performance Tuning Guide if there will be more than 16 nodes.
- To make these changes take effect, restart the machine.
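Before and after editing, you can list the shared memory and semaphore tunables already present in /etc/system; this is a convenience check, not part of the required procedure:
# List any existing shared memory and semaphore entries
egrep 'shminfo|seminfo' /etc/system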
Configuring Shared Memory on Linux
- Log in as root.
- Set the size of the maximum shared memory segment to the same value as the physical memory on the machine by adding the following entry to the /etc/sysctl.conf file:
kernel.shmmax=536870912
This value is given as a number of bytes in decimal notation. This example is for a machine with 512 Mbytes of physical memory. (A way to verify the setting is shown after these steps.)
- Reboot the machine.
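If you want to verify the new value, or load it into the running kernel without waiting for the reboot, the following sketch uses standard Linux tools (the reboot in the step above remains the documented procedure):
# Load the settings from /etc/sysctl.conf into the running kernel
sysctl -p /etc/sysctl.conf
# Confirm the maximum shared memory segment size, in bytes
cat /proc/sys/kernel/shmmax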
Network Configuration Requirements
Configuration requirements for the network include the following:
- HADB supports Internet Protocol version 4 (IPv4) only. Addresses given for HADB management agents (configuration variable ma.server.mainternal.interfaces) and HADB hosts (given as the --hosts option to hadbm create) must be IPv4 addresses.
- The network (routers, switches and network interfaces on the hosts) must be configured for User Datagram Protocol (UDP) multicast. If HADB hosts span multiple subnets, routers between the subnets must be configured to forward UDP multicast messages between the subnets.
- Firewalls can be set up between two HADB hosts, or between HADB hosts and Application Server hosts when an Application Server instance is not co-located with an HADB node on the same host. Firewalls must be set up to allow all UDP traffic, both ordinary UDP traffic and UDP multicast traffic.
- Do not use dynamic IP addresses (DHCP) for hosts used in createdomain, extenddomain, or hadbm commands.
To ensure HADB availability in spite of single network failures, use one of the following:
- Network multipathing.
Network multipathing is available on Solaris only and tested on Solaris 9.
Before you adapt the multipathing setup for HADB, set up multipathing as described in the Administering Network Multipathing section of the IP Network Multipathing Administration Guide at http://docs.sun.com/doc/816-5249.
If the hosts to be used for HADB already use IP multipathing, reconfigure them to suit the HADB requirements:
For HADB to properly support multipathing failover, the network interface failure detection time must not exceed 1000 milliseconds as specified by the FAILURE_DETECTION_TIME parameter in /etc/default/mpathd. Edit the file and change the value of this parameter to 1000 if the original value is higher:
FAILURE_DETECTION_TIME=1000
To make in.mpathd re-read the modified file and apply the change, use the following command:
pkill -HUP in.mpathd
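To confirm the setting and that the multipathing daemon is running, you can use the following commands (a convenience check only):
# Show the configured failure detection time (should be 1000 or lower)
grep FAILURE_DETECTION_TIME /etc/default/mpathd
# Verify that in.mpathd is running
pgrep -l in.mpathd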
As described in the IP Network Multipathing Administration Guide, multipathing involves grouping physical network interfaces into multipath interface groups. Each physical interface in such a group has two IP addresses associated with it:
- a physical interface address, and
- a test address.
Only the physical interface address can be used for transmitting data, while the test address is only for Solaris internal use.
When hadbm create --hosts is run, each host should be specified with only one physical interface address from the multipath group. For example, assume that Host 1 and Host 2 have two physical network interfaces each. On each host, these two interfaces are set up as a multipath group, and running ifconfig -a yields the following:
Host1:
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5 inet 129.159.115.10 netmask ffffff00 broadcast 129.159.115.255 groupname mp0
bge0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5 inet 129.159.115.11 netmask ffffff00 broadcast 129.159.115.255
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6 inet 129.159.115.12 netmask ffffff00 broadcast 129.159.115.255 groupname mp0
bge1:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 6 inet 129.159.115.13 netmask ff000000 broadcast 129.159.115.255
Host2:
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3 inet 129.159.115.20 netmask ffffff00 broadcast 129.159.115.255 groupname mp0
bge0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3 inet 129.159.115.21 netmask ff000000 broadcast 129.159.115.255
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4 inet 129.159.115.22 netmask ffffff00 broadcast 129.159.115.255 groupname mp0
bge1:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4 inet 129.159.115.23 netmask ff000000 broadcast 129.159.115.255
In this example, the physical network interfaces on both hosts are those listed as bge0 and bge1. The interfaces listed as bge0:1 and bge1:1 are multipath test interfaces (marked DEPRECATED in the ifconfig output), as described in the IP Network Multipathing Administration Guide.
To set up HADB in this environment, select one physical interface address from each host. In this example, the IP addresses 129.159.115.10 from Host 1 and 129.159.115.20 from Host 2 are selected for use by HADB. To create a database with one database node per host, use the --hosts argument to hadbm create. For example:
--hosts 129.159.115.10,129.159.115.20
To create a database with two database nodes on each host, use the following argument:
--hosts 129.159.115.10,129.159.115.20,129.159.115.10,129.159.115.20
In both cases, you must configure the agents on Host 1 and Host 2 with separate configuration parameters to specify which interface on the machines the agents should use:
Host 1: ma.server.mainternal.interfaces=129.159.115.10
Host 2: ma.server.mainternal.interfaces=129.159.115.20
For information on the ma.server.mainternal.interfaces variable, see the section Starting Management Agents in the chapter Administering the High-Availability Database (Enterprise Edition) of the Sun Java System Application Server Administration Guide.
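Putting the pieces of this example together, a hadbm create invocation for the one-node-per-host case might look like the following sketch. The database name hadb is illustrative only, and in practice additional options (device size, passwords, management agent URL, and so on) are required or prompted for, as described in the Sun Java System Application Server Administration Guide:
hadbm create --hosts 129.159.115.10,129.159.115.20 hadb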
Time Synchronization
Synchronize the clocks on the hosts running HADB, because HADB uses time stamps based on the system clock for debugging purposes as well as for controlling internal events. The events are written to history files prefixed by time stamps. Since HADB is a distributed system, history files from all HADB nodes are analyzed together in troubleshooting. HADB also uses the system clock internally to manage time-dependent events such as timeouts.
Adjusting system clocks on running HADB systems is not recommended. HADB is designed to tolerate clock adjustments in general, but note the following points:
- Problems in the operating system or other software components on the hosts might cause problems for the whole system when the clock is adjusted. Typical problems include hangs or restarts of nodes.
- Adjusting the clock backward might cause some HADB server processes to hang for a period of time equal to the adjustment. Adjusting the clock forward does not cause processes to hang.
To synchronize clocks, use a time synchronization service such as NTP on all hosts running HADB.
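For example, one common approach on Solaris 9 is to run the bundled NTP daemon (xntpd) against a reference time server; the time server name below is a placeholder, and the file locations should be verified on your system:
# Create an NTP configuration from the bundled client template
cp /etc/inet/ntp.client /etc/inet/ntp.conf
# Edit /etc/inet/ntp.conf and add a line such as: server ntp.example.com
# Start the NTP daemon
/etc/init.d/xntpd start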
If a clock adjustment of more than 1 second is detected, this line is output into the history file of the HADB nodes:
NSUP INF 2003-08-26 17:46:47.975 Clock adjusted.
Leap is +195.075046 seconds.
File System Support
If you use one of the following file systems, note these points before configuring HADB:
- HADB supports the file systems ext2 and ext3 on Red Hat Enterprise Linux 3.0. On Red Hat Enterprise Linux 2.1, HADB supports only the file system ext2.
- When Veritas File System is used on the Solaris platform, the message WRN: Direct disk I/O mapping failed is written to the history files. This message indicates that HADB cannot turn on direct I/O for the data and log devices. Direct I/O is a performance enhancement that reduces the CPU cost of writing disk pages. It also reduces the overhead of administering dirty data pages in the operating system.
To use direct I/O with Veritas File System, use one of the following:
- Create the data and log devices on a file system that is mounted with the option mincache=direct (a mount example follows this list). This option applies to all files created on the file system. Check the mount_vxfs(1M) command for details.
- Use the Veritas Quick I/O facility to perform raw I/O to file system files. Check the document VERITAS File System 4.0 Administrator’s Guide for Solaris for details.
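As an illustration of the first option, a Veritas file system can be mounted with the mincache=direct option; the device and mount point names below are placeholders for your own configuration:
# Mount a VxFS file system with the mincache=direct option (Solaris)
mount -F vxfs -o mincache=direct /dev/vx/dsk/hadbdg/hadbvol /hadb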
Running HADB Node Supervisor Processes with Real-time Priority
Node supervisor processes (NSUP) ensure the high availability of HADB by exchanging I'm alive messages with each other in a timely manner. Timing is disturbed when an NSUP process is co-located with other processes and resource starvation results. False network partitioning and node restarts (preceded by the warning Process blocked for .. seconds in the HADB history files) can then occur, resulting in aborted transactions and other exceptions.
To prevent this, the NSUP executable clu_nsup_srv in the install_dir/lib/server directory should have the setuid root bit set. To limit the security impact of setuid, the real-time priority is set immediately after the process is started, and the process falls back to the effective UID once the priority has been changed. Other HADB processes run with normal priority.
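To check that the setuid root bit is in place on clu_nsup_srv, list the file; install_dir is the placeholder used above, and an s in the owner execute position together with root ownership indicates setuid root:
# Expect owner root and permissions similar to -rwsr-xr-x
ls -l install_dir/lib/server/clu_nsup_srv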
If NSUP cannot set the real-time priority, it issues the warning Could not set realtime priority on Windows; on Solaris and Linux, errno is set to EPERM. The error is written to the ma.log file, and the process continues without real-time priority.
Setting real-time priorities is not possible when:
- HADB is installed in Solaris 10 non-global zones
- PRIV_PROC_LOCK_MEMORY (allow a process to lock pages in physical memory) and/or PRIV_PROC_PRIOCNTL privileges are revoked in Solaris 10
- Users turn off setuid permission
- Users install the software as tar files (the non-root installation option for Application Server)
NSUP consumes little CPU, its memory footprint is small, and running it with real-time priority does not affect performance.
Starting the HADB Management Agent
HADB requires that a management agent be running on every HADB host before you issue any hadbm management commands, including the creation of the database instance.
Chapter 3, “Administering the High Availability Database,” of the Sun Java System Application Server Administration Guide describes how the management agents are started.
Setting Up the User Environment
After you have set up host communication, you can run the hadbm command directly from the install_dir/SUNWhadb/4/bin directory by specifying its full path.
However, it is much more convenient to set up your local environment to use the high-availability management client commands from anywhere. To set this up, perform the following steps.
Note
The examples in this section apply to using csh. If you are using another shell, refer to the man page for your shell for instructions on setting variables.
- Set the PATH variable:
setenv PATH ${PATH}:install_dir/bin:install_dir/SUNWhadb/4/bin
- Verify that the PATH settings are correct by running the following commands:
which asadmin
which hadbm
- If multiple Java versions are installed, ensure that the JAVA_HOME environment variable points to JDK version 1.4.2_06 for Enterprise Edition.
setenv JAVA_HOME java_install_dir
setenv PATH ${PATH}:${JAVA_HOME}/bin
- If your HADB device files and log files are not in the default location (appserver_install_dir/SUNWhadb/4), use the following hadbm commands to locate these important files:
hadbm get configpath
hadbm get devicepath
hadbm get historypath
hadbm get installpath
Back up the locations listed by these commands (an example follows these steps).
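For example, the directories reported by these commands can be archived with tar; the paths below are illustrative placeholders, not defaults:
# Archive the HADB configuration, device, and history directories (example paths)
tar cvf /backup/hadb-paths.tar /data/hadb/config /data/hadb/devices /data/hadb/history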
Setting Up Administration for Non-Root
By default, during the initial installation or setup of the Sun Java System Application Server, write permissions on the files and paths created for Sun Java System Application Server are given to root only. For a user other than root to create or manage the Sun Java System Application Server, write permissions on the associated files must be given to that specific user, or to a group to which the user belongs. The affected files include the cluster configuration files (install_config_dir/cl*.conf) and the cluster commands (install_dir/bin/cl*) used in the procedure below.
You can create a user group for managing the Sun Java System Application Server as described in the following procedure. (An alternate approach is to set permissions and ownership for the specific user.)
To create a Sun Java System Application Server user group and set permissions on the affected files, follow these steps:
- Log in as root.
- From the command prompt, create the Sun Java System Application Server user group. For example:
# groupadd sjsasuser
You can type groupadd at the command line to see appropriate usage.
- Change the group ownership for each affected file to the newly-created group. For example:
chgrp -R sjsasuser install_config_dir/cl*.conf
- Set the write permission for the newly-created group:
chmod -R g+rw install_config_dir/cl*.conf
- Repeat steps 3 and 4 for each affected file.
- Make the clsetup and cladmin commands executable by the newly-created group. For example:
chmod -R g+x install_dir/bin/cl*
- Delete and recreate the default domain, domain1, using the --sysuser option. The sysuser must also belong to the newly-created group (a usermod example follows these steps). For example:
asadmin delete-domain domain1
asadmin create-domain --sysuser bleonard --adminport 4848 --adminuser admin --adminpassword password domain1
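One way to ensure the sysuser belongs to the new group is usermod, shown here as a minimal sketch with the user and group names from the examples above. Note that on Solaris, usermod -G replaces the user's supplementary group list, so include any groups the user already belongs to:
# Add user bleonard to the sjsasuser group
usermod -G sjsasuser bleonard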
Using clsetup
The purpose of the clsetup utility is to automate the process of setting up a basic cluster in a typical configuration. The utility is located in install_dir/bin, where install_dir is the directory where the Sun Java System Application Server software is installed.
The clsetup utility, as well as the cladmin utility, is bundled with Sun Java System Application Server.
The following topics are addressed:
- How clsetup Works
- Requirements and Limitations of clsetup
- Editing the clsetup Input Files
- Running clsetup
- Cleanup Procedures for clsetup
How clsetup Works
The clsetup utility is a set of Sun Java System Application Server commands that allow a cluster to be configured automatically, based on prepopulated input files. As part of cluster setup, an HADB is created. However, you must set up your working cluster using the hadbm commands as described in the Sun Java System Application Server Administration Guide.
The following topics are addressed in this section:
- How the Input Files Work
- What clsetup Accomplishes
- Commands Used by clsetup
How the Input Files Work
Three input files are used by the clsetup utility to configure the cluster:
- clinstance.conf—This file is pre-populated with information about application server instances server1 and server2. Refer to The clinstance.conf File for information on the contents of this file.
- clpassword.conf—This file is pre-populated with the Admin Server password for domain1, which you provided when you installed Sun Java System Application Server 7.1 Enterprise Edition. Refer to The clpassword.conf File for information on the contents of this file.
- clresource.conf—This file is pre-populated with information about the cluster resources: HADB, JDBC connection pool, JDBC resource, and session store and persistence. Refer to The clresource.conf File for information on the contents of this file.
Use the clsetup configuration parameters as they are preconfigured to set up a typical cluster configuration. To support a different configuration, make edits to any or all of the configuration files.
What clsetup Accomplishes
Using the pre-populated values in the clsetup input files, the clsetup utility command:
- Creates a new server instance named server2 in the default domain named domain1. The HTTP port number for server2 is the next sequential number after the HTTP port number specified for server1 during installation (for example, if port number 80 is provided for server1 during installation, the port number for server2 is 81).
- Creates the HADB named hadb with two nodes on the local machine. The port base is 15200, and the database password is password.
- Creates the HADB tables required to store session information in the HADB.
- Creates a connection pool named appservCPL in all the instances listed in the clinstance.conf file (server1, server2).
- Creates a JDBC resource named jdbc/hastore in all the instances listed in the clinstance.conf file (server1, server2).
- Configures the session persistence information in all the instances listed in the clinstance.conf file (server1, server2).
- Configures an RMI/IIOP cluster in all the instances listed in the clinstance.conf file (server1, server2); thereby enabling RMI/IIOP failover.
- Configures SFSB failover in all the instances listed in the clinstance.conf file (server1, server2).
- Enables high availability in all the instances listed in the clinstance.conf file (server1, server2).
Commands Used by clsetup
The clsetup utility uses a number of hadbm and asadmin commands to set up the cluster. Table 2-1 lists the clsetup tasks and the commands used to accomplish these tasks.
Requirements and Limitations of clsetup
The following requirements and limitations apply to the clsetup utility:
- The install paths, device paths, configuration paths, and so on must be the same on all machines that are part of the cluster.
- Before you can use the clsetup utility, the asadmin and hadbm utilities must be available on the local machine; clsetup can only be run on a machine where both are installed.
- Before you can use clsetup, configure shared memory for UNIX platforms as described in Configuring Shared Memory and Semaphores. The clsetup utility does not set any shared memory values.
- Before running clsetup, start the Admin Servers of all the Sun Java System Application Server instances that are part of the cluster.
- The administrator password must be the same for all domains that are part of the cluster.
- If the entities to be handled (HADB nodes and Application Server instances) already exist, clsetup does not delete or reconfigure them, and the respective configuration steps are skipped.
- The values specified in the input files will be the same for all the instances in a cluster. The clsetup utility is not designed to set up instances with different values. For example, clsetup cannot create a JDBC connection pool with different settings for each instance.
- Output from shell initialization files: if your .cshrc or .login files print prompts or other output during remote command invocations, clsetup may appear to hang. Remove any prompts and excess output from remote command invocations. For example, running the hostname command on hostB should print only hostB, without a prompt.
- To run clsetup as a user other than root, follow the steps described in Setting Up Administration for Non-Root.
Editing the clsetup Input Files
The input files needed by the clsetup utility are installed under the configuration installation directory (by default, /etc/opt/SUNWappserver7 on UNIX and C:\Sun\AppServer7\config\ on Windows) as part of the installation procedure. These input files are pre-populated with the values needed to set up a typical configuration; you can edit them as needed for your configuration.
This section addresses:
- The clinstance.conf File
- The clpassword.conf File
- The clresource.conf File
The clinstance.conf File
For clsetup to work, all application server instances that are part of a cluster must be defined in the clinstance.conf file. During installation, a clinstance.conf file is created with entries for two instances. If you add more instances to the cluster, you must add information about these additional instances as follows:
One set of entries is required for each instance that is part of the cluster. Any line that starts with a hash mark (#) is treated as a comment.
Table 2-2 provides information about the entries in the clinstance.conf file. The left column contains the parameter name, the middle column defines the parameter, and the right column contains the default value.
Example clinstance.conf File
This clinstance.conf file contains information about two instances.
#Instance 1
instancename server1
user admin
host localhost
port 4848
domain domain1
instanceport 80

#Instance 2
instancename server2
user admin
host localhost
port 4848
domain domain1
instanceport 81
The clpassword.conf File
When clsetup runs, it launches the asadmin command, which needs the Admin Server password that was specified during installation and is stored in the clpassword.conf file.
The format of the clpassword.conf file is as follows:
where password is the Admin Server password.
Permissions 0600 are preset on the clpassword.conf file, which can only be accessed by the root user.
The clresource.conf File
During installation, the clresource.conf file is created to set up a typical configuration. The clresource.conf file contains information about the following resources that are part of the cluster:
On UNIX platforms, permissions 0600 are preset on the clresource.conf file, which can only be accessed by the root user.
The parameters of the clresource.conf file are described in the following tables. The left column contains the parameter name, the middle column defines the parameter, and the right column contains the default value.
Table 2-3 describes the HADB parameters in the clresource.conf File.
The database name is specified at the end of the [HADBINFO] section in the clresource.conf file.
Table 2-4 lists the HADB Agent parameters in the clresource.conf file.
Table 2-4 HADB Agent Information
Table 2-5 describes the session store parameters in the clresource.conf file.
Table 2-5 Session Store Parameters in the clresource.conf File

| Parameter | Definition | Default Value |
| --- | --- | --- |
| storeurl | URL of the HADB store | REPLACEURL (NOTE: Value is replaced by actual URL at runtime.) |
| storeuser | User who has access to the session store | appservusr (NOTE: Must match the username property in Table 2-6.) |
| storepassword | Password for the storeuser | password (NOTE: Must match the password property in Table 2-6.) |
| dbsystempassword | Password for the HADB system user | password |
Table 2-6 describes the JDBC connection pool parameters in the clresource.conf file.
The connection pool name is specified at the end of the [JDBC_CONNECTION_POOL] section in the clresource.conf file.
Table 2-7 describes the JDBC resource parameters in the clresource.conf file.
Table 2-7 JDBC Resource Parameters in the clresource.conf File

| Parameter | Definition | Default Value |
| --- | --- | --- |
| connectionpoolid | Name of the connection pool | appservCPL (NOTE: Connection pool name is specified in Table 2-6.) |
The JDBC resource name is defined at the end of the [JDBC_RESOURCE] section in the clresource.conf file.
Table 2-8 describes the session persistence parameters in the clresource.conf file.
Table 2-9 describes the stateful session bean parameter in the clresource.conf file.
Table 2-9 Stateful Session Bean Parameters in the clresource.conf File

| Parameter | Definition | Default Value |
| --- | --- | --- |
| sfsb | Stateful session bean failover | false |
Table 2-10 describes the RMI/IIOP failover parameter in the clresource.conf file.
Table 2-10 RMI/IIOP Failover Parameters in the clresource.conf File

| Parameter | Definition | Default Value |
| --- | --- | --- |
| rmi_iiop | RMI/IIOP cluster configuration | false |
Table 2-11 describes the cluster identification parameter in the clresource.conf file.
Table 2-11 Cluster Identification Parameters in the clresource.conf File

| Parameter | Definition | Default Value |
| --- | --- | --- |
| cluster_id | Cluster ID | cluster1 |
Example clresource.conf File on UNIX
[HADBINFO]
agent localhost:1862
historypath /var/tmp
devicepath /opt/SUNWappserver7/SUNWhadb/4
datadevices 1
portbase 15200
spares 0
devicesize 512
dbpassword password
hosts machine1,machine1
hadb
[AGENTINFO]
agent localhost 1862
adminpassword password
[SESSION_STORE]
storeurl REPLACEURL
storeuser appservusr
storepassword password
dbsystempassword password
[JDBC_CONNECTION_POOL]
steadypoolsize 8
maxpoolsize 32
datasourceclassname com.sun.hadb.jdbc.ds.HadbDataSource
isolationlevel repeatable-read
--isisolationguaranteed=true
validationmethod meta-data
property username=appservusr:password=password:cacheDataBaseMetaData=false:eliminateRedundantEndTransaction=true:serverList=REPLACEURL
appservCPL
[JDBC_RESOURCE]
connectionpoolid appservCPL
jdbc/hastore
[SESSION_PERSISTENCE]
type ha
frequency web-method
scope session
store jdbc/hastore
[EJB_FAILOVER]
sfsb true
[RMI_IIOP_FAILOVER]
rmi_iiop true
[CLUSTER_ID]
cluster_id cluster1
Example clresource.conf file on Windows
[HADBINFO]
agent localhost:1862
package V4.4.1-x
historypath REPLACEDIR
devicepath C:\Sun\AppServer7\SUNWhadb\4.4.1-x
datadevices 1
portbase 15200
spares 0
#set LogbufferSize=32,DataBufferPoolSize=128
devicesize 208
dbpassword password
hosts machine1,machine2
adminpassword password
hadb
[SESSION_STORE]
storeurl REPLACEURL
storeuser appservusr
storepassword password
dbsystempassword password
[JDBC_CONNECTION_POOL]
steadypoolsize 8
maxpoolsize 32
datasourceclassname com.sun.hadb.jdbc.ds.HadbDataSource
isolationlevel repeatable-read
--isisolationguaranteed=true
validationmethod meta-data
property username=appservusr:password=password:cacheDataBaseMetaData=false:eliminateRedundantEndTransaction=true:serverList=REPLACEURL
appservCPL
[JDBC_RESOURCE]
connectionpoolid appservCPL
jdbc/hastore
[SESSION_PERSISTENCE]
type ha
frequency web-method
scope session
store jdbc/hastore
[EJB_FAILOVER]
sfsb true
[RMI_IIOP_FAILOVER]
rmi_iiop true
[CLUSTER_ID]
cluster_id cluster1
Running clsetup
The syntax for running clsetup is as follows:
If no arguments are specified, clsetup uses the default input files clinstance.conf, clpassword.conf, and clresource.conf in the configuration installation directory (by default, /etc/opt/SUNWappserver7 on UNIX and C:\Sun\AppServer7\config\ on Windows).
You can override these defaults by providing custom input file locations on the command line.
When providing custom input files, follow the required format found in the input files. For information on doing this, see Editing the clsetup Input Files.
To run clsetup:
- Verify that the requirements have been met as described in Requirements and Limitations of clsetup.
Note
If you want to run clsetup as a user other than root, see Setting Up Administration for Non-Root to set this up.
- Verify that the input files have the required information to set up the cluster. If necessary, edit the input files following the guidelines in Editing the clsetup Input Files.
- Go to the bin directory of the Sun Java System Application Server installation (install_dir/bin).
- Invoke the clsetup command:
On UNIX: ./clsetup
On Windows: clsetup.bat
The clsetup command runs in verbose mode. The various commands are displayed on the screen as they run, and the output is redirected to the log file, /var/tmp/clsetup.log (on UNIX) or the default Windows temp directory (on Windows).
If a vital error occurs, the configuration stops and the error is recorded in the log file. If the log file already exists, the output is appended to the existing log file.
If the entities to be handled (HADB nodes and Application Server instances) already exist, clsetup does not delete or reconfigure them, and the respective configuration steps are skipped. This type of event is recorded in the log file.
- When clsetup completes the configuration, scan the log file to review the setup (an example check is shown after these steps).
- Upon completion, clsetup returns one of the exit codes described in Table 2-12.
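On UNIX, a quick way to review a run is to search the log for problems; a convenience sketch using the default log location:
# Show errors and warnings recorded by clsetup
egrep -i 'error|warning' /var/tmp/clsetup.log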
Cleanup Procedures for clsetup
After running clsetup, errors that have occurred are logged in the log file /var/tmp/clsetup.log. Examine the log file after every run of the clsetup command and correct any significant errors that are reported (for example, failure to create a non-existing instance).
You can undo all or part of the configuration as follows:
- To delete an Application Server instance: asadmin delete-instance instance_name
- To delete the HADB:
- To clear the session store: cladmin clear-session-store --storeurl URL_information --storeuser store_user_name --storepassword store_password
- To delete the JDBC connection pool: asadmin delete-jdbc-connection-pool connectionpool_name
- To delete the JDBC resource: cladmin delete-jdbc-resource JDBCresource_Name
See the man pages for detailed examples of each of these commands. You are now ready to proceed to the Sun Java System Application Server Administration Guide for instructions on configuring and managing the cluster, the load balancer plug-in, and the HADB.