After you have installed and configured the Sun Cluster product, install the Tivoli server and managed nodes. You can use either the Tivoli desktop utility or shell commands to install the Tivoli product. See your Tivoli documentation for detailed Tivoli installation procedures.
The Tivoli probe derives the name of the logical host (the host on which the TME server or managed node runs) from the name of the Tivoli database, by checking an environment variable. If the Tivoli database is not named after the logical host, the probe cannot detect that the Tivoli server or managed node is running correctly, and it initiates a failover of the logical host. Therefore, make sure the Tivoli database and the logical host have the same name.
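For example, if the Tivoli server will run on logical host hahost1, you can set the environment before installing so that the database is named after the logical host. The following is a sketch that assumes WLOCALHOST is the variable your release checks; the installation example later in this procedure passes $WLOCALHOST to wserver.

phys-hahost1# WLOCALHOST=hahost1; export WLOCALHOST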
Install all Tivoli binaries onto the shared (multihost) disk, for ease of administration and future updates.
Before starting this procedure, you should have already installed and configured Sun Cluster and set up file systems and logical hosts.
Start Sun Cluster and make sure the logical host is mastered by the physical host on which you will install Tivoli.
In this example, the physical host is phys-hahost1 and the logical hosts are hahost1 and hahost2:
phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
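To confirm the switchover before you install, you can query the current master of each logical host with haget (the same calls appear later in this procedure); in this example, both commands should report phys-hahost1:

phys-hahost1# haget -f master -h hahost1
phys-hahost1
phys-hahost1# haget -f master -h hahost2
phys-hahost1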
Run the Tivoli preinstallation script, WPREINST.SH.
The WPREINST.SH script is located on the Tivoli media. The script creates links from an installation directory you specify back to the Tivoli media.
Install the Tivoli server and specify directory locations on the logical host for Tivoli components.
Install the Tivoli server on the multihost disk associated with the logical host.
You can use the Tivoli GUI or Tivoli commands to install the Tivoli server and managed nodes. If you use the Tivoli command line, you must set the environment variable DOGUI=no. If you use the GUI, do not select the "start at boot time" option.
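For a command-line installation, one way to set the variable in the installing shell before running wserver is the following (sh or ksh syntax; adjust for your shell):

phys-hahost1# DOGUI=no; export DOGUI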
The following example specifies directory locations on the logical host for the TME binaries and libraries, TME server database, man pages, message catalogs, and X11 resource files:
phys-hahost1# ./wserver -c cdrom_path -a $WLOCALHOST -p \
/hahost1/d1/Tivoli! BIN=/hahost1/d1/Tivoli/bin! \
LIB=/hahost1/d1/Tivoli/lib! ALIDB=/hahost1/d1/Tivoli! \
MAN=/hahost1/d1/Tivoli/man! \
APPD=/hahost1/d1/Tivoli/X11/app-defaults! \
CAT=/hahost1/d1/Tivoli/msg_cat! CreatePaths=1
Install Tivoli patches.
See your Tivoli documentation or service provider for applicable patches, and install them using instructions in your Tivoli documentation.
Rename the Tivoli environment directory and copy the directory to all other possible masters of the logical host.
Rename the Tivoli environment directory to prevent it from being overwritten by another installation. Then copy the directory to all other possible masters of the logical host on which the Tivoli server is installed.
phys-hahost1# mv /etc/Tivoli /etc/Tivoli.hahost1
phys-hahost1# tar cvf /tmp/tiv.tar /etc/Tivoli.hahost1
phys-hahost1# rcp /tmp/tiv.tar phys-hahost2:/tmp
phys-hahost2# tar xvf /tmp/tiv.tar
Set up paths and stop and restart the Tivoli daemon.
Use the setup_env.sh script to set up paths. The default port number is 94.
phys-hahost1# . /etc/Tivoli.hahost1/setup_env.sh
phys-hahost1# odadmin shutdown
phys-hahost1# oserv -H hahost1 -p port_number -k $DBDIR
(Tivoli 3.6 only) Switch over the other logical host to the second physical host.
The Tivoli 3.6 oserv does not listen for requests on a specific configured IP address; instead, it listens on every IP address (INADDR_ANY) configured on the system. The Tivoli server oserv and the managed node oserv use the same default port (94), so while the Tivoli server is running, a managed node oserv cannot start on the same physical host. To prevent this problem, make sure the two logical hosts are mastered by different physical hosts.
phys-hahost1# haswitch phys-hahost2 hahost2
...
phys-hahost1# haget -f master -h hahost1
phys-hahost1
...
phys-hahost1# haget -f master -h hahost2
phys-hahost2
(Optional) Install the Tivoli managed node instance on the second logical host.
For example:
phys-hahost1# wclient -c cdrom_path -I -p hahost1-region \
BIN=/hahost2/d1/Tivoli/bin! LIB=/hahost2/d1/Tivoli/lib! \
DB=/hahost2/d1/Tivoli! MAN=/hahost2/d1/Tivoli/man! \
APPD=/hahost2/d1/Tivoli/X11/app-defaults! \
CAT=/hahost2/d1/Tivoli/msg_cat! CreatePaths=1 hahost2
(Tivoli 3.6 only) Configure the managed node server to listen for requests on the IP address of the logical host instead of the physical host.
phys-hahost1# odadmin odlist
Verify that the host of the managed node is the logical host. If it is not, use the following commands, in which odadmin is the Tivoli server's odadmin, to associate the logical host with the managed node object dispatcher and to disassociate the physical host. Determine the dispatcher_id from the Disp field in the output of the odadmin odlist command.
phys-hahost1# odadmin odlist add_ip_alias dispatcher_id logical_hostname
phys-hahost1# odadmin odlist delete_ip_alias dispatcher_id physical_hostname
(Tivoli 3.6 only) Configure the Tivoli server and managed node to listen to requests on a specific IP address.
Use the following command, in which odadmin is the Tivoli server's odadmin. Both the Tivoli server oserv and the managed node oserv must be running before you use this command.
phys-hahost1# odadmin set_force_bind TRUE all
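If you want to confirm the setting, odadmin odinfo prints information about the object dispatchers; whether and how it reports the bind mode depends on your Tivoli release, so treat this as a sketch:

phys-hahost1# odadmin odinfo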
(Optional) Rename the Tivoli environment directory and copy the directory to all other possible masters.
Rename the Tivoli environment directory to prevent it from being overwritten by another installation. Then copy the directory to all other possible masters of the logical host on which the Tivoli managed node is installed.
phys-hahost1# mv /etc/Tivoli /etc/Tivoli.hahost2
phys-hahost1# tar cvf /tmp/tiv.tar /etc/Tivoli.hahost2
phys-hahost1# rcp /tmp/tiv.tar phys-hahost2:/tmp
phys-hahost2# tar xvf /tmp/tiv.tar
Modify the /etc/services file.
Add the following entry to the /etc/services file on each physical host that is a possible master of a Tivoli instance. The default port number for Tivoli is 94.
objcall port_number/tcp
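For example, if you have not changed the default Tivoli port, the entry would read:

objcall 94/tcp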
Verify the Tivoli installation.
Before configuring Sun Cluster HA for Tivoli, verify correct installation of the Tivoli server, Tivoli managed node instance, and Tivoli managed nodes used for probing.
phys-hahost1# . /etc/Tivoli.hahost1/setup_env.sh
phys-hahost1# odadmin odlist
phys-hahost1# wping hahost1
phys-hahost1# wping hahost2
Source the setup_env.sh file for the first logical host only. If you source the setup_env.sh file for the second logical host, the odadmin and wping commands fail.
Create an administrative user and set permissions correctly on the Tivoli server.
Use the Tivoli user interface to create an administrator with user ID root and group ID root, and give it user, admin, senior, and super authorization. This enables the probe to run the wping command.
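After you create the administrator, you can confirm as root that probing works by reusing the commands from the verification step (a sketch; source the first logical host's environment only, as noted above):

phys-hahost1# . /etc/Tivoli.hahost1/setup_env.sh
phys-hahost1# wping hahost1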
Stop the Tivoli server and managed node daemons.
Sun Cluster restarts the daemons automatically when you start the cluster or when the logical host is switched between masters. The first invocation of odadmin shuts down the TMR server; the second shuts down the managed node.
phys-hahost1# odadmin shutdown
phys-hahost1# . /etc/Tivoli.hahost2/setup_env.sh
phys-hahost1# odadmin shutdown
Proceed to "Installing and Configuring Sun Cluster HA for Tivoli", to register and install the Sun Cluster HA for Tivoli data service.