This chapter describes procedures for setting up and administering the Sun Cluster HA for Tivoli data service on your Sun Cluster servers.
This chapter includes the following procedures:
The Sun Cluster HA for Tivoli product consists of a Tivoli Management Environment (TME) server, Tivoli managed nodes, and other components that become highly available when run in the Sun Cluster environment.
You can place Tivoli components inside or outside the Sun Cluster configuration; any components you place inside the cluster are protected by failover. For example, if a Tivoli object dispatcher configured in the cluster fails, it is restarted automatically or fails over to another host.
Each Tivoli server or managed node that you place inside the cluster must reside on its own logical host.
After you have installed and configured the Sun Cluster product, install the Tivoli server and managed nodes. You can use either the Tivoli desktop utility or shell commands to install the Tivoli product. See your Tivoli documentation for detailed Tivoli installation procedures.
Before starting this procedure, you should have already installed and configured Sun Cluster and set up file systems and logical hosts.
Start Sun Cluster and make sure the logical host is mastered by the physical host on which you will install Tivoli.
In this example, the physical host is phys-hahost1 and the logical hosts are hahost1 and hahost2:
phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
Run the Tivoli preinstallation script, WPREINST.SH.
The WPREINST.SH script is located on the Tivoli media. The script creates links from an installation directory you specify back to the Tivoli media.
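As a sketch only (the mount point /cdrom/cdrom0 and the installation directory are illustrative assumptions; substitute your own paths), running the preinstallation script might look like this:

```
phys-hahost1# cd /hahost1/d1/Tivoli
phys-hahost1# sh /cdrom/cdrom0/WPREINST.SH
```

Run the script from inside the installation directory on the logical host's multihost disk so that the links it creates point from that directory back to the media.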
Install the Tivoli server and specify directory locations on the logical host for Tivoli components.
Install the Tivoli server on the multihost disk associated with the logical host.
You can use the Tivoli GUI or Tivoli commands to install the Tivoli server and managed nodes. If you use the Tivoli command line, you must set the environment variable DOGUI=no.
The following example specifies directory locations on the logical host for the TME binaries and libraries, TME server database, man pages, message catalogs, and X11 resource files:
phys-hahost1# ./wserver -c cdrom_path -a $WLOCALHOST -p \
/hahost1/d1/Tivoli! BIN=/hahost1/d1/Tivoli/bin! \
LIB=/hahost1/d1/Tivoli/lib! ALIDB=/hahost1/d1/Tivoli! \
MAN=/hahost1/d1/Tivoli/man! \
APPD=/hahost1/d1/Tivoli/X11/app-defaults! \
CAT=/hahost1/d1/Tivoli/msg_cat! CreatePaths=1
Install Tivoli patches.
See your Tivoli documentation or service provider for applicable patches, and install them using instructions in your Tivoli documentation.
(Optional) Rename the Tivoli environment directory and copy the directory to all other possible masters of the logical host.
Rename the Tivoli environment directory to prevent it from being overwritten by another installation. Then copy the directory to all other possible masters of the logical host on which the Tivoli server is installed.
phys-hahost1# mv /etc/Tivoli /etc/Tivoli.hahost1
phys-hahost1# tar cvf /tmp/tiv.tar /etc/Tivoli.hahost1
phys-hahost1# rcp /tmp/tiv.tar phys-hahost2:/tmp
phys-hahost2# tar xvf /tmp/tiv.tar
Set up paths and stop and restart the Tivoli daemon.
Use the setup_env.sh script to set up paths. The default port number is 94.
phys-hahost1# . /etc/Tivoli.hahost1/setup_env.sh
phys-hahost1# odadmin shutdown
phys-hahost1# oserv -H hahost1 -p port_number -k $DBDIR
(Optional) Install the Tivoli managed node instance on the second logical host.
For example:
phys-hahost1# wclient -c cdrom_path -I -p hahost1-region \
BIN=/hahost2/d1/Tivoli/bin! LIB=/hahost2/d1/Tivoli/lib! \
DB=/hahost2/d1/Tivoli! MAN=/hahost2/d1/Tivoli/man! \
APPD=/hahost2/d1/Tivoli/X11/app-defaults! \
CAT=/hahost2/d1/Tivoli/msg_cat! CreatePaths=1 hahost2
(Optional) Rename the Tivoli environment directory and copy the directory to all other possible masters.
Rename the Tivoli environment directory to prevent it from being overwritten by another installation. Then copy the directory to all other possible masters of the logical host on which the Tivoli managed node is installed.
phys-hahost1# mv /etc/Tivoli /etc/Tivoli.hahost2
phys-hahost1# tar cvf /tmp/tiv.tar /etc/Tivoli.hahost2
phys-hahost1# rcp /tmp/tiv.tar phys-hahost2:/tmp
phys-hahost2# tar xvf /tmp/tiv.tar
Modify the /etc/services file.
Add the following entry to the /etc/services file on each physical host that is a possible master of a Tivoli instance. The default port number for Tivoli is 94.
objcall port_number/tcp
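As a minimal sketch of this step (the port value 94 and the working copy ./services.work are illustrative assumptions; on a real host you would edit /etc/services itself, as root, on every potential master), the entry can be added idempotently:

```shell
# Work on a copy for illustration; edit /etc/services directly on a real host.
cp /etc/services ./services.work 2>/dev/null || : > ./services.work
PORT=94   # default Tivoli object-dispatcher port

# Append the objcall entry only if no objcall service is defined yet.
grep -q '^objcall[[:space:]]' ./services.work || \
    echo "objcall ${PORT}/tcp" >> ./services.work

grep '^objcall' ./services.work
```

Guarding the append with grep keeps the script safe to rerun without duplicating the entry.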
Verify the Tivoli installation.
Before configuring Sun Cluster HA for Tivoli, verify correct installation of the Tivoli server, Tivoli managed node instance, and Tivoli managed nodes used for probing.
phys-hahost1# . /etc/Tivoli.hahost1/setup_env.sh
phys-hahost1# odadmin odlist
phys-hahost1# wping hahost1
phys-hahost1# wping hahost2
Execute the setup_env.sh file from only the first logical host. If you execute the setup_env.sh file from the second logical host, the odadmin and wping commands will fail.
Create an administrative user and set permissions correctly on the Tivoli server.
Use the Tivoli user interface to create an administrator with user ID root and group ID root, and give it user, admin, senior, and super authorization. This enables the fault probe to run the wping command.
Stop the Tivoli servers or server daemons.
The daemons are restarted automatically by Sun Cluster when you start the cluster, or when the logical host is switched between masters. The first invocation of odadmin shuts down the TMR server; the second shuts down the managed node.
phys-hahost1# odadmin shutdown
phys-hahost1# . /etc/Tivoli.hahost2/setup_env.sh
phys-hahost1# odadmin shutdown
Proceed to "9.3 Installing and Configuring Sun Cluster HA for Tivoli", to register and install the Sun Cluster HA for Tivoli data service.
This section describes the steps to install, configure, register, and start Sun Cluster HA for Tivoli. You must install and set up Sun Cluster and the Tivoli product before configuring Sun Cluster HA for Tivoli.
You will configure Sun Cluster HA for Tivoli by using the hadsconfig(1M) command. See the hadsconfig(1M) man page for details.
On each Sun Cluster server, install the Tivoli package, SUNWsctiv, in the default location, if it is not installed already.
If the Tivoli package is not installed already, use the scinstall(1M) command to install it on each Sun Cluster server that is a potential master of the logical host on which Tivoli is installed.
Run the hadsconfig(1M) command on one node to configure Sun Cluster HA for Tivoli for both the server and managed node.
Use the hadsconfig(1M) command to create, edit, and delete instances of the Sun Cluster HA for Tivoli data service for both the server and managed node. Refer to "9.3.2 Configuration Parameters for Sun Cluster HA for Tivoli", for information on input to supply to hadsconfig(1M). Run the command on one node only.
phys-hahost1# hadsconfig
Only the Tivoli server and Tivoli managed node should be configured as instances under the control of Sun Cluster. The Tivoli managed nodes used for probing need not be controlled by Sun Cluster.
Register the Sun Cluster HA for Tivoli data service by running the hareg(1M) command.
Run the command on only one node:
phys-hahost1# hareg -s -r tivoli
Use the hareg(1M) command to enable Sun Cluster HA for Tivoli and perform a cluster reconfiguration.
Run the command on only one node:
phys-hahost1# hareg -y tivoli
The configuration is complete.
This section describes the information you supply to the hadsconfig(1M) command to create configuration files for Sun Cluster HA for Tivoli. The hadsconfig(1M) command uses templates to create these configuration files. The templates contain some default, some hard-coded, and some unspecified parameters. You must provide values for those parameters that are unspecified.
The fault probe parameters, in particular, can affect the performance of Sun Cluster HA for Tivoli. Setting the probe interval too low (increasing the frequency of fault probes) might degrade system performance, and might also cause false takeovers or attempted restarts when the system is merely slow.
Configure Sun Cluster HA for Tivoli by supplying the hadsconfig(1M) command with parameters listed in Table 9-1.
Table 9-1 Configuration Parameters for Sun Cluster HA for Tivoli

| Parameter | Description |
|---|---|
| Name of the instance | Nametag used as an identifier for the instance. The log messages generated by Sun Cluster HA for Tivoli refer to this nametag. The hadsconfig(1M) command prefixes the package name to the value you supply here. For example, if you specify "tivoli," the hadsconfig(1M) command produces "SUNWsctiv_tivoli." |
| Logical host | Name of the logical host that provides service for this instance of Sun Cluster HA for Tivoli. |
| Port number | Unique port for Sun Cluster HA for Tivoli. The default port number is 94. |
| Configuration directory | The directory of the database, that is, the full path of $DBDIR. For example, /hahost1/d1/Tivoli/<database>.db. |
| Local probe flag | Specifies whether the local probe is started automatically at cluster reconfiguration or when the Tivoli service is activated. Possible values are -y or -n. |
| Probe interval | Time in seconds between successive fault probes. The default is 60 seconds. |
| Probe timeout | Timeout value in seconds for the probe. If the probe has not completed within this amount of time, Sun Cluster HA for Tivoli considers it to have failed. The default is 60 seconds. |
| Takeover flag | Specifies whether a failure of this instance causes a takeover or failover of the logical host associated with the Tivoli instance. Possible values are -y or -n. |
| TIV_OSERV_TYPE | The TME type. Possible values are server or client. |
| TIV_BIN | The path to the TME binaries specified during installation of the instance. This is equivalent to $BINDIR without the "Solaris2" suffix. For example, /hahost1/d1/Tivoli/bin. |
| TIV_LIB | The path to the TME libraries specified during installation of the instance. This is equivalent to $LIBDIR without the "Solaris2" suffix. For example, /hahost1/d1/Tivoli/lib. |
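Pulling these parameters together, a filled-in worksheet for the server instance of the example installation might look like the following. All values here are illustrative, taken from the example paths used earlier in this chapter; in particular, the database directory name under $DBDIR is an assumption, so substitute the actual path reported by your installation.

```
Name of the instance:     tivoli      (stored as SUNWsctiv_tivoli)
Logical host:             hahost1
Port number:              94
Configuration directory:  /hahost1/d1/Tivoli/<database>.db
Local probe flag:         -y
Probe interval:           60
Probe timeout:            60
Takeover flag:            -y
TIV_OSERV_TYPE:           server
TIV_BIN:                  /hahost1/d1/Tivoli/bin
TIV_LIB:                  /hahost1/d1/Tivoli/lib
```

A second worksheet, with TIV_OSERV_TYPE set to client and the hahost2 paths, would cover the managed node instance.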