This chapter explains how to install and configure the Sun Cluster data service for SWIFTAlliance Access.
This chapter contains the following sections:
Planning the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration
Verifying the Installation and Configuration of SWIFTAlliance Access
Registering and Configuring Sun Cluster HA for SWIFTAlliance Access
Verifying the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration
Understanding the Sun Cluster HA for SWIFTAlliance Access Fault Monitor
Table 1 lists the tasks for installing and configuring Sun Cluster HA for SWIFTAlliance Access. Perform these tasks in the order in which they are listed.
Table 1 Task Map: Installing and Configuring Sun Cluster HA for SWIFTAlliance Access
Task | For Instructions, Go To
---|---
1. Plan the installation. | Planning the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration
2. Install the Sun Cluster HA for SWIFTAlliance Access packages. |
3. Verify the installation and configuration. | Verifying the Installation and Configuration of SWIFTAlliance Access
4. Register and configure Sun Cluster HA for SWIFTAlliance Access. | Registering and Configuring Sun Cluster HA for SWIFTAlliance Access
5. Verify the Sun Cluster HA for SWIFTAlliance Access installation and configuration. | Verifying the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration
6. Understand the Sun Cluster HA for SWIFTAlliance Access fault monitor. | Understanding the Sun Cluster HA for SWIFTAlliance Access Fault Monitor
7. Debug Sun Cluster HA for SWIFTAlliance Access. |
The HA agent works with SWIFTAlliance Access versions 5.5, 5.9, and 6.0. IBM DCE version 3.2 is no longer used by SWIFTAlliance Access 5.9 and later; it must be installed only for SWIFTAlliance Access 5.5. SWIFTAlliance Access™ is a trademark of SWIFT.
The Sun Cluster HA for SWIFTAlliance Access data service provides a mechanism for the orderly startup, shutdown, fault monitoring, and automatic takeover of the SWIFTAlliance Access service. The components protected by the Sun Cluster HA for SWIFTAlliance Access data service are listed in the following table.
Table 2 Protection of Components
Component | Protected by
---|---
DCE daemon | Sun Cluster HA for SWIFTAlliance Access (version 5.5 only)
SWIFTAlliance Access | Sun Cluster HA for SWIFTAlliance Access
By default, the HA agent provides a fault monitor for the DCE component, and only when using SWIFTAlliance Access 5.5. Fault monitoring for SWIFTAlliance Access itself is switched off by default: if the SWIFTAlliance Access application fails, the agent does not restart it automatically. This behavior was explicitly requested by SWIFT. It allows you to operate the application so that the probe does not interfere with the normal behavior of SWIFTAlliance Access features such as:
an operator manually triggering the SWIFTAlliance Access restart function, for example, to run SWIFTAlliance Access in housekeeping mode.
an automatic or scheduled SWIFTAlliance Access restart, for example, to run database backups and other maintenance or end-of-day processes.
any graceful SWIFTAlliance Access restart or recovery in the case of a transient local error.
The HA agent provides start, stop, takeover, and switchover functionality. This means that when a node fails, the other node automatically starts the SWIFTAlliance Access application. The HA agent also provides an option to turn on fault monitoring for SWIFTAlliance Access at registration time. However, SWIFT does not recommend this option.
This section contains the information you need to plan your Sun Cluster HA for SWIFTAlliance Access installation and configuration.
This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for SWIFTAlliance Access only.
Your data service configuration might not be supported if you do not observe these restrictions.
You can configure Sun Cluster HA for SWIFTAlliance Access only as a failover (HA) agent, not as a scalable agent.
You can install the SWIFTAlliance Access software on a global file system. Best practice is to use a failover file system. For SWIFTAlliance Access 5.5, you must install the IBM DCE software on local storage.
Only one SWIFTAlliance Access instance is supported by this agent.
For restrictions that apply to all data services, see the Sun Cluster Release Notes.
These requirements apply to Sun Cluster HA for SWIFTAlliance Access only. You must meet these requirements before you proceed with your Sun Cluster HA for SWIFTAlliance Access installation and configuration. Follow the SWIFTAlliance Access installation guide for the installation of the mandatory patch levels and the installation of the software itself.
Your data service configuration might not be supported if you do not adhere to these requirements.
Sun Cluster components and their dependencies – Configure the Sun Cluster HA for SWIFTAlliance Access data service to protect a SWIFTAlliance Access instance and its respective components. These components and their dependencies are briefly described in the following table.
Component | Description
---|---
DCE daemon | -> SUNW.LogicalHost resource
SWIFTAlliance Access | -> SUNW.LogicalHost resource; -> SUNW.HAStoragePlus resource (manages the SWIFTAlliance Access system mount points and ensures that SWIFTAlliance Access is not started until these are mounted); -> DCE daemon (version 5.5 only)
The Sun Cluster HA for SWIFTAlliance Access data service has two configuration and registration files under /opt/SUNWscsaa/util. These files allow you to register the SWIFTAlliance Access component with Sun Cluster.
Within these files, the appropriate dependencies have already been defined. You must update the saa_config file before you run the saa_register script.
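As an illustration only, the kind of values you adjust before registration might look like the following. These variable names are assumptions, not the actual contents of saa_config; use the names defined in your installed file:

```shell
# Hypothetical saa_config-style settings (names are illustrative only)
RS=swift-saa-rs        # SWIFTAlliance Access resource name
RG=swift-rg            # failover resource group created earlier
LH=swift-lh            # logical hostname resource
HAS_RS=swift-ds        # HAStoragePlus resource
```

Verify each value against the resources you create in the procedures below before running saa_register.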
This section contains the procedures you need to install and configure SWIFTAlliance Access.
Throughout the following sections, references are made to certain directories for SWIFTAlliance Access; these locations can be selected by the user.
Use this procedure to install and configure SWIFTAlliance Access.
Create the resources for SWIFTAlliance Access.
Create a resource group for SWIFTAlliance Access:
# scrgadm -a -g swift-rg
Create a logical host – Add the hostname and IP address in the /etc/inet/hosts file on both cluster nodes. Register the logical host and add it to the resource group.
# scrgadm -a -L -g swift-rg -j swift-saa-lh-rs -l swift-lh
Create the device group and file system – See the Sun Cluster 3.1 Software Installation Guide for instructions on how to create global file systems.
Create an HAStoragePlus resource – Although you can use global storage, it is recommended to create an HAStoragePlus failover resource to contain the SWIFTAlliance Access application and configuration data.
In the example, we use /global/saadg/alliance as the path, but you can choose the location.
# scrgadm -a -g swift-rg \
  -j swift-ds \
  -t SUNW.HAStoragePlus \
  -x FilesystemMountPoints=/global/saadg/alliance
Bring the resource group online.
# scswitch -Z -g swift-rg
Create a configuration directory to hold SWIFTAlliance Access information, and create a link from /usr.
# cd /global/saadg/alliance
# mkdir swa
# ln -s /global/saadg/alliance/swa /usr/swa
Install IBM DCE client software on all the nodes.
This software is required only for SWIFTAlliance Access versions below 5.9 and should be installed only when needed.
Skip this step if you are using SWIFTAlliance Access version 5.9 or 6.0.
IBM DCE client software is a prerequisite for SWIFTAlliance Access 5.5. It must be installed and configured before the SWIFTAlliance Access application.
Install IBM DCE client software. Use local disks to install this software. The software comes in Sun package format (IDCEclnt). Because the installed files will reside at various locations on your system, it is not practical to have this installed on global file systems. Install this software on both cluster nodes.
# pkgadd -d ./IDCEclnt.pkg
# /opt/dcelocal/tcl/config.dce -cell_name swift -dce_hostname swift-lh RPC
Run the following tests on both nodes.
# /opt/dcelocal/tcl/start.dce
Verify that the dced daemon is running.
# /opt/dcelocal/tcl/stop.dce
Install SWIFTAlliance Access software.
Create the users all_adm and all_usr and the group alliance up front on all cluster nodes, using the same user ID and group ID on each node.
On Solaris 10: Create a project called swift and assign the users all_adm and all_usr to it.
# projadd -U all_adm,all_usr swift
On Solaris 10: Set the values of the resource controls for the project swift:
# projmod -s -K "project.max-sem-ids=(privileged,128,deny)" swift
# projmod -s -K "project.max-sem-nsems=(privileged,512,deny)" swift
# projmod -s -K "project.max-sem-ops=(privileged,512,deny)" swift
# projmod -s -K "project.max-shm-memory=(privileged,4294967295,deny)" swift
# projmod -s -K "project.max-shm-ids=(privileged,128,deny)" swift
# projmod -s -K "project.max-msg-qbytes=(privileged,4194304,deny)" swift
# projmod -s -K "project.max-msg-ids=(privileged,500,deny)" swift
# projmod -s -K "project.max-sem-messages=(privileged,8192,deny)" swift
The above values are examples only. For accurate values, refer to the latest SWIFT documentation and release notes.
On Solaris 10: Assign the project swift as the default project for all_adm and all_usr by editing the file /etc/user_attr and adding the following two lines at the end of the file:
all_adm::::project=swift
all_usr::::project=swift
For versions prior to Solaris 10, refer to the latest SWIFT documentation and release notes to determine the necessary settings in /etc/system.
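For illustration only, a hypothetical /etc/system fragment that mirrors the Solaris 10 resource controls shown above; the tunable names and values here are assumptions, so take the authoritative settings from the SWIFT documentation for your release:

```
set semsys:seminfo_semmni=128
set semsys:seminfo_semmsl=512
set semsys:seminfo_semopm=512
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=128
set msgsys:msginfo_msgmnb=4194304
set msgsys:msginfo_msgmni=500
```

A reboot is required for /etc/system changes to take effect.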
Use shared storage for the installation of this software. The installation procedure will modify system files and will also reboot the system. After the reboot, you must continue with the installation on the same node. Repeat the installation of the software on the second node, but you must end the installation before the SWIFTAlliance Access software licensing step.
Additional configuration for SWIFTAlliance Access
To enable clients to connect to the failover IP address, create a file named .alliance_ip_name (interfaces.rpc in versions 5.9 and 6.0) in the data subdirectory of the SWIFTAlliance Access software.
When you are using the same file system as shown in the examples, this directory will be /global/saadg/alliance/data. This file must contain the IP address of the logical host as configured within the SWIFTAlliance Access resource.
# cd /global/saadg/alliance/data
# chown all_adm:alliance interfaces.rpc
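The creation of the file itself can be sketched as follows. The IP address shown is an assumption for illustration; use the logical-host address you actually configured. The sketch writes to a local demo directory, whereas on the cluster node the directory would be /global/saadg/alliance/data and the chown above must be run as root.

```shell
# Demo sketch: write the logical-host IP into interfaces.rpc.
# DATA_DIR here is a local stand-in for /global/saadg/alliance/data,
# and 192.168.1.100 is a placeholder for your logical-host address.
DATA_DIR=./alliance-data
mkdir -p "$DATA_DIR"
echo "192.168.1.100" > "$DATA_DIR/interfaces.rpc"
cat "$DATA_DIR/interfaces.rpc"    # prints 192.168.1.100
```

The file must contain exactly the IP address of the logical host, as stated above.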
If MESSENGER is licensed, create a file called interfaces.mas and add the cluster logical IP address used to communicate with SAM.
# cd /global/saadg/alliance/data
# chown all_adm:alliance interfaces.mas
Additional steps
Add the symbolic link /usr/swa on all cluster nodes that are part of the cluster (see the last bullet of Step 1).
Entries must be added to /etc/services on all nodes. This can be done as root by running the /usr/swa/apply_alliance_ports script.
The rc.alliance and rc.swa_boot scripts (swa_rpcd in SWIFTAlliance Access versions earlier than 5.9) in /etc/init.d must remain in place. Any references to these files in /etc/rc?.d must be removed, and the access rights must be set as follows:
# cd /etc/init.d
# chmod 750 rc.alliance rc.swa_boot
# chown root:sys rc.alliance rc.swa_boot
If the SWIFTAlliance Access Installer displays “Start this SWIFTAlliance at Boottime”, select No.
SWIFTAlliance Access Remote API (RA)
Install RA after SWIFTAlliance Access on shared storage using the following options:
Instance RA1 (default), user all_adm
Copy all files in the home directories of the all_adm and all_usr users to all nodes.
This section contains the procedure you need to verify the installation and configuration.
This procedure does not verify that your application is highly available because you have not yet installed your data service.
Start the SWIFTAlliance Access application:
# su - all_adm
The application GUI should start. With version 5.5, if DCE is not yet started, start it first from the GUI (OS Configuration -> DCE RPC). Then select the menu Alliance -> Start SWIFTAlliance Servers.
Test the application
Start the GUI, then select the menu item: Alliance -> Start User Interface.
Stop the SWIFTAlliance Access application:
Start the GUI:
# su - all_adm
Select the menu: Alliance -> Stop SWIFTAlliance Servers.
If you did not install the Sun Cluster data service for SWIFTAlliance Access packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster data service for SWIFTAlliance Access packages. To complete this procedure, you need the Sun Cluster Agents CD-ROM.
The SUNWscsaa package is available on the CD-ROM only in the Solaris 8 directory. On Solaris 8 and 9, this package must be installed using the pkgadd command; on Solaris 10, it must be installed using the pkgadd -G command.
Patch 118050-05 or a later version must be installed. On Solaris 8 and 9, the patch must be installed using the patchadd command; on Solaris 10, it must be installed using the patchadd -G command.
If you are installing more than one data service simultaneously, perform the procedure in Installing the Software in Sun Cluster Software Installation Guide for Solaris OS.
Install the Sun Cluster data service for SWIFTAlliance Access packages by using one of the following installation tools:
Web Start program
scinstall utility
The Web Start program is not available in releases earlier than Sun Cluster 3.1 Data Services 10/03.
You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.
On the cluster node where you are installing the Sun Cluster data service for SWIFTAlliance Access packages, become superuser.
(Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.
Insert the Sun Cluster Agents CD-ROM into the CD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.
Change to the Sun Cluster data service for SWIFTAlliance Access component directory of the CD-ROM.
The Web Start program for the Sun Cluster data service for SWIFTAlliance Access data service resides in this directory.
# cd /cdrom/cdrom0/components/SunCluster_HA_SWIFT_3.1
Start the Web Start program.
# ./installer
When you are prompted, select the type of installation.
Follow the instructions on the screen to install the Sun Cluster data service for SWIFTAlliance Access packages on the node.
After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.
Exit the Web Start program.
Remove the Sun Cluster Agents CD-ROM from the CD-ROM drive.
This section contains the procedures you need to configure Sun Cluster HA for SWIFTAlliance Access.
This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.
Steps 1 through 6 are normally already complete from preparing for the installation of the IBM DCE and SWIFTAlliance Access software. See How to Install and Configure SWIFTAlliance Access. Typically, you should go directly to Step 7.
Become superuser on one of the nodes in the cluster that will host SWIFTAlliance Access.
Register the SUNW.gds resource type.
# scrgadm -a -t SUNW.gds
Register the SUNW.HAStoragePlus resource type.
# scrgadm -a -t SUNW.HAStoragePlus
Create a failover resource group.
# scrgadm -a -g swift-rg
Create a resource for the Sun Cluster Disk Storage.
# scrgadm -a -j swift-ds \
  -g swift-rg \
  -t SUNW.HAStoragePlus \
  -x FilesystemMountPoints=/global/saadg/alliance
Create a resource for the Sun Cluster Logical Hostname.
# scrgadm -a -L -j swift-lh-rs \
  -g swift-rg \
  -l swift-lh
Create a resource for SWIFTAlliance Access.
Run the registration script provided as part of the SWIFTAlliance Access HA agent. Before running this script, check that the names of the resources match what is configured in /opt/SUNWscsaa/util/saa_config.
# /opt/SUNWscsaa/util/saa_register
Enable the failover resource group that now includes the Sun Cluster Disk Storage and Logical Hostname resources.
# scswitch -Z -g swift-rg
Start the SWIFTAlliance Access instance manually.
# su - all_adm

The GUI opens. From within the GUI, select the menu Alliance -> Start SWIFTAlliance Servers.
Stop the SWIFTAlliance Access manually.
# su - all_adm

The GUI opens. Stop the application from within the GUI.
Enable each Sun Cluster resource.
# scstat -g
# scswitch -e -j <resource-name>
This section contains the procedure you need to verify that you installed and configured your data service correctly.
Become superuser on one of the nodes in the cluster that hosts SWIFTAlliance Access.
Ensure all the Sun Cluster resources are online with scstat.
# scstat
For each Sun Cluster resource that is not online, use the scswitch command as follows.
# scswitch -e -j <resource-name>
Run the scswitch command to switch the Sun Cluster resource group to another cluster node, such as node2.
# scswitch -z -g swift-rg -h node2
Check that SWIFTAlliance Access is stopped on the first node and that the application is restarted on the second node.
When using a failover file system, the file system is unmounted from the first node and mounted on the second node.
This section describes the Sun Cluster HA for SWIFTAlliance Access fault monitor's probing algorithm or functionality, and states the conditions, messages, and recovery actions associated with unsuccessful probing.
For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.
The Sun Cluster HA for SWIFTAlliance Access fault monitor uses the same resource properties as the SUNW.gds resource type. Refer to the SUNW.gds(5) man page for a complete list of the resource properties used.
By default, the HA agent provides a fault monitor for the DCE component, and only when using SWIFTAlliance Access 5.5. Fault monitoring for SWIFTAlliance Access itself is switched off by default: if the SWIFTAlliance Access application fails, the agent does not restart it automatically. This behavior was explicitly requested by SWIFT. It allows you to operate the application so that the probe does not interfere with the normal behavior of SWIFTAlliance Access features such as:
an operator manually triggering the SWIFTAlliance Access restart function, for example, to run SWIFTAlliance Access in housekeeping mode.
an automatic or scheduled SWIFTAlliance Access restart, for example, to run database backups and other maintenance or end-of-day processes.
any graceful SWIFTAlliance Access restart or recovery in the case of a transient local error.
The HA agent updates the resource status message to report "Degraded - SAA Instance offline".
If an automatic failover occurs with the default settings, the cause was most likely a DCE problem. The SWIFTAlliance Access application causes a failover only when it fails to start on the current node.
The HA agent provides an option to turn on fault monitoring for SWIFTAlliance Access at registration time. However, SWIFT does not recommend this option. The optional probing checks for the existence of the SWIFTAlliance Access instance by calling the alliance command, which is part of the application, and evaluating its return code. If the SWIFTAlliance Access instance is not running, return code 100 is sent to SUNW.gds, which in turn performs an automatic restart, depending on the configuration of the resource properties.
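The probe behavior described above can be sketched as follows. This is a hypothetical illustration, not the agent's actual code: SAA_CHECK_CMD stands in for the alliance status check, whose real invocation and arguments may differ.

```shell
# Hypothetical sketch of the optional probe logic.
# SAA_CHECK_CMD is an assumed placeholder for the alliance status check.
probe_saa() {
    if ${SAA_CHECK_CMD} >/dev/null 2>&1; then
        return 0     # instance responds: probe reports success
    else
        return 100   # instance not running: SUNW.gds may restart it
    fi
}
```

With fault monitoring disabled (the default), no probe runs, so SUNW.gds never receives return code 100 and the instance is left alone.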
The Sun Cluster HA for SWIFTAlliance Access component has a DEBUG file under /opt/SUNWscsaa/etc, where saa is the three-character abbreviation for the SWIFTAlliance Access component.
This file allows you to turn on debugging for all SWIFTAlliance Access instances, or for a specific instance on a particular node. If you require debugging to be turned on for Sun Cluster HA for SWIFTAlliance Access across the whole cluster, repeat this step on all nodes within the cluster.
Edit /etc/syslog.conf
Change daemon.notice to daemon.debug
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit    /var/adm/messages
*.alert;kern.err;daemon.err    operator
#
After changing daemon.notice to daemon.debug, restart syslogd. The output below, from the command grep daemon /etc/syslog.conf, shows that daemon.debug has been set.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit    /var/adm/messages
*.alert;kern.err;daemon.err    operator
#
# pkill -1 syslogd
#
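For illustration, the same edit can be scripted with sed. The sketch below runs on a scratch file rather than /etc/syslog.conf itself; on the cluster node you would edit the real file and then send syslogd a SIGHUP (pkill -1 syslogd), as shown above.

```shell
# Demo on a scratch file, not /etc/syslog.conf itself.
printf '*.err;kern.debug;daemon.notice;mail.crit\t/var/adm/messages\n' > syslog.conf.demo
sed 's/daemon\.notice/daemon.debug/' syslog.conf.demo > syslog.conf.new
grep -c 'daemon\.debug' syslog.conf.new    # prints 1
```

Review syslog.conf.new before replacing the real file.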
Edit /opt/SUNWscsaa/etc/config
Change DEBUG= to DEBUG=ALL or DEBUG=resource
# cat /opt/SUNWscsaa/etc/config
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
#
To turn off debugging, reverse the steps above.