Sun Cluster Data Service for SWIFTAlliance Access Guide for Solaris OS

Installing and Configuring Sun Cluster HA for SWIFTAlliance Access

This chapter explains how to install and configure the Sun Cluster data service for SWIFTAlliance Access.

This chapter contains the following sections.

Overview of the tasks needed to install and configure Sun Cluster data service for SWIFTAlliance Access.

Table 1 lists the tasks for installing and configuring Sun Cluster HA for SWIFTAlliance Access. Perform these tasks in the order that they are listed.

Table 1 Task Map: Installing and Configuring Sun Cluster HA for SWIFTAlliance Access

Task 

For Instructions, Go To 

1. Plan the installation. 

Planning the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration

2. Install Sun Cluster HA for SWIFTAlliance Access Packages. 

How to Install and Configure SWIFTAlliance Access

3. Verify installation and configuration. 

Verifying the Installation and Configuration of SWIFTAlliance Access

4. Register and Configure Sun Cluster HA for SWIFTAlliance Access. 

Registering and Configuring Sun Cluster HA for SWIFTAlliance Access

5. Verify Sun Cluster HA for SWIFTAlliance Access Installation and Configuration. 

Verifying the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration

6. Understand Sun Cluster HA for SWIFTAlliance Access fault monitor. 

Understanding the Sun Cluster HA for SWIFTAlliance Access Fault Monitor

7. Debug Sun Cluster HA for SWIFTAlliance Access. 

Debug Sun Cluster HA for SWIFTAlliance Access

Sun Cluster HA for SWIFTAlliance Access Overview

The HA agent is written to work with SWIFTAlliance Access versions 5.5, 5.9, and 6.0. IBM DCE version 3.2 is no longer used by SWIFTAlliance Access 5.9 and later, and must be installed only for SWIFTAlliance Access 5.5. SWIFTAlliance Access™ is a trademark of SWIFT.

The Sun Cluster HA for SWIFTAlliance Access data service provides a mechanism for orderly startup, shutdown, fault monitoring, and automatic takeover of the SWIFTAlliance Access service. The components protected by the Sun Cluster HA for SWIFTAlliance Access data service are the following.

Table 2 Protection of Components

Component 

Protected by 

DCE daemon 

Sun Cluster HA for SWIFTAlliance Access (version 5.5 only) 

SWIFTAlliance Access 

Sun Cluster HA for SWIFTAlliance Access 


Note –

By default, the HA agent provides a fault monitor only for the DCE component, and only when SWIFTAlliance Access 5.5 is used. Fault monitoring for SWIFTAlliance Access itself is switched off by default. If the SWIFTAlliance Access application fails, the agent will not restart it automatically. This behavior was explicitly requested by SWIFT. It allows you to operate the application in a way that the probe does not interfere with the normal behavior of certain SWIFTAlliance Access features.

The HA agent provides the start, stop, takeover, and switchover functionality. This means that when a node fails, the other node automatically starts the SWIFTAlliance Access application. The HA agent also provides an option to turn on fault monitoring for SWIFTAlliance Access at registration time. However, SWIFT does not recommend this option.


Planning the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for SWIFTAlliance Access installation and configuration.

Configuration Restrictions

This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for SWIFTAlliance Access only.


Caution –

Your data service configuration might not be supported if you do not observe these restrictions.


For restrictions that apply to all data services, see the Sun Cluster Release Notes.

Configuration Requirements

These requirements apply to Sun Cluster HA for SWIFTAlliance Access only. You must meet these requirements before you proceed with your Sun Cluster HA for SWIFTAlliance Access installation and configuration. Follow the SWIFTAlliance Access installation guide for the installation of the mandatory patch levels and the installation of the software itself.


Caution –

Your data service configuration might not be supported if you do not adhere to these requirements.


Installing and Configuring SWIFTAlliance Access

This section contains the procedures you need to install and configure SWIFTAlliance Access.

Throughout the following sections, references are made to certain SWIFTAlliance Access directories whose locations you can choose.

How to Install and Configure SWIFTAlliance Access

Use this procedure to install and configure SWIFTAlliance Access.

  1. Create the resources for SWIFTAlliance Access.

    • Create a resource group for SWIFTAlliance Access:


      # scrgadm -a -g swift-rg
      
    • Create a logical host – Add the hostname and IP address to the /etc/inet/hosts file on both cluster nodes. Register the logical host, and add it to the resource group.


      # scrgadm -a -L -g swift-rg -j swift-saa-lh-rs -l swift-lh
      
    • Create the device group and file system – See the Sun Cluster 3.1 Software Installation Guide for instructions on how to create global file systems.

    • Create an HAStoragePlus resource – Although you can use global storage, it is recommended that you create an HAStoragePlus failover resource to contain the SWIFTAlliance Access application and configuration data.

      In the example, we use /global/saadg/alliance as the path, but you can choose the location.


      # scrgadm -a -g swift-rg \
      -j swift-ds \
      -t SUNW.HAStoragePlus \
      -x FilesystemMountPoints=/global/saadg/alliance
      
    • Bring the resource group online


      # scswitch -Z -g swift-rg
      
    • Create a configuration directory – Create a directory to hold the SWIFTAlliance Access information, and create a link from /usr.


      # cd /global/saadg/alliance
      

      # mkdir swa
      

      # ln -s /global/saadg/alliance/swa /usr/swa
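The hosts-file edit mentioned in the logical-host bullet above can be sketched as follows. This is a hedged illustration: the address 192.0.2.10 is a documentation-range example, and a scratch file stands in for /etc/inet/hosts so the sketch can run anywhere. On a real cluster you would edit /etc/inet/hosts on both nodes.

```shell
# Hedged sketch of the /etc/inet/hosts entry for the logical host swift-lh.
# 192.0.2.10 is an example address only; use the address of your logical host.
HOSTS_FILE=$(mktemp)                      # stands in for /etc/inet/hosts
echo "192.0.2.10   swift-lh" >> "$HOSTS_FILE"
grep swift-lh "$HOSTS_FILE"               # verify the entry is present
```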
      
  2. Install IBM DCE client software on all the nodes.


    Caution –

    This software is required only for SWIFTAlliance Access versions earlier than 5.9 and should be installed only when needed.

    Skip this step if you are using SWIFTAlliance Access version 5.9 or 6.0.


    IBM DCE client software is a prerequisite for SWIFTAlliance Access 5.5. It must be installed and configured before the SWIFTAlliance Access application.

    • Install IBM DCE client software. Use local disks to install this software on both cluster nodes. The software comes in Sun package format (IDCEclnt). Because the installed files reside at various locations on the system, it is not practical to install them on global file systems.


      # pkgadd -d ./IDCEclnt.pkg
      
    • Configure DCE client RPC.


      # /opt/dcelocal/tcl/config.dce -cell_name swift -dce_hostname swift-lh RPC
      
    • Test DCE.

      Run the tests on both nodes.


      # /opt/dcelocal/tcl/start.dce
      

      Verify that the dced daemon is running.


      # /opt/dcelocal/tcl/stop.dce
      
  3. Install SWIFTAlliance Access software.

    • Create the users all_adm and all_usr and the group alliance up front on all cluster nodes, with the same user ID and group ID on each node.

    • On Solaris 10: Create a project called swift and assign the users all_adm and all_usr to it.


      # projadd -U all_adm,all_usr swift
      
    • On Solaris 10: Set the values of the resource controls for the project swift:


      # projmod -s -K "project.max-sem-ids=(privileged,128,deny)" swift
      

      # projmod -s -K "process.max-sem-nsems=(privileged,512,deny)" swift
      

      # projmod -s -K "process.max-sem-ops=(privileged,512,deny)" swift
      

      # projmod -s -K "project.max-shm-memory=(privileged,4294967295,deny)" swift
      

      # projmod -s -K "project.max-shm-ids=(privileged,128,deny)" swift
      

      # projmod -s -K "process.max-msg-qbytes=(privileged,4194304,deny)" swift
      

      # projmod -s -K "project.max-msg-ids=(privileged,500,deny)" swift
      

      # projmod -s -K "process.max-msg-messages=(privileged,8192,deny)" swift
      

      The above values are examples only. For more accurate values, refer to the latest SWIFT documentation and release notes.

    • On Solaris 10: Assign the project swift as the default project for all_adm and all_usr by editing the file /etc/user_attr and adding the following two lines at the end of the file:


      all_adm::::project=swift

      all_usr::::project=swift
    • For versions prior to Solaris 10, refer to the latest SWIFT documentation and release notes to determine the necessary settings for /etc/system.

    Use shared storage for the installation of this software. The installation procedure modifies system files and also reboots the system. After the reboot, you must continue the installation on the same node. Repeat the installation of the software on the second node, but end the installation before the SWIFTAlliance Access software licensing step.
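The /etc/user_attr additions described above can be sketched as follows. A scratch file stands in for /etc/user_attr so the sketch can run anywhere; on a real node you would append the two lines to /etc/user_attr itself.

```shell
# Hedged sketch: append the default-project lines for all_adm and all_usr.
# A scratch file stands in for /etc/user_attr.
USER_ATTR=$(mktemp)
for u in all_adm all_usr; do
    printf '%s::::project=swift\n' "$u" >> "$USER_ATTR"
done
cat "$USER_ATTR"
```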

  4. Additional configuration for SWIFTAlliance Access

    To enable clients to connect to the failover IP address, create a file named .alliance_ip_name (interfaces.rpc in versions 5.9 and 6.0) in the data subdirectory of the SWIFTAlliance Access software.

    When you use the same file system as shown in the examples, this directory is /global/saadg/alliance/data. The file must contain the IP address of the logical host as configured within the SWIFTAlliance Access resource.


    # cd /global/saadg/alliance/data
    

    # chown all_adm:alliance interfaces.rpc
    

    If MESSENGER is licensed, create a file called interfaces.mas and add the cluster logical IP address used to communicate with SAM.


    # cd /global/saadg/alliance/data
    

    # chown all_adm:alliance interfaces.mas
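Step 4 shows the chown commands but not the creation of the files themselves. A minimal sketch of writing the logical-host address into interfaces.rpc follows; the scratch directory and the 192.0.2.10 address are illustrative stand-ins for /global/saadg/alliance/data and your logical-host IP.

```shell
# Hedged sketch: create interfaces.rpc containing the logical-host IP.
DATA_DIR=$(mktemp -d)          # stands in for /global/saadg/alliance/data
echo "192.0.2.10" > "$DATA_DIR/interfaces.rpc"
cat "$DATA_DIR/interfaces.rpc"
```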
    
  5. Additional steps

    • Add the symbolic link /usr/swa on all cluster nodes (see the last bullet of Step 1).

    • Entries must be added to /etc/services on all nodes. This can be done as root by running the /usr/swa/apply_alliance_ports script.

    • The rc.alliance and rc.swa_boot scripts (swa_rpcd in SWIFTAlliance Access versions earlier than 5.9) in /etc/init.d must remain in place. Any references to these files in /etc/rc?.d must be removed. Set the access rights as follows:


      # cd /etc/init.d
      

      # chmod 750 rc.alliance rc.swa_boot
      

      # chown root:sys rc.alliance rc.swa_boot
      

      If the SWIFTAlliance Access Installer displays “Start this SWIFTAlliance at Boottime”, select No.
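Removing the /etc/rc?.d references can be sketched as follows. This is a hedged illustration: the scratch tree and link names such as S99rc.alliance are examples only, so inspect the real link names on your system before deleting anything.

```shell
# Hedged sketch: find and remove run-control links that reference the
# rc.alliance and rc.swa_boot scripts. A scratch tree stands in for /etc.
RC_ROOT=$(mktemp -d)
mkdir -p "$RC_ROOT/rc2.d" "$RC_ROOT/rc3.d"
touch "$RC_ROOT/rc2.d/S99rc.alliance" "$RC_ROOT/rc3.d/K01rc.swa_boot"    # example names
find "$RC_ROOT" \( -name '*alliance*' -o -name '*swa_boot*' \) -type f   # inspect first
find "$RC_ROOT" \( -name '*alliance*' -o -name '*swa_boot*' \) -type f -delete
```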

  6. SWIFTAlliance Access Remote API (RA)

    • Install RA after SWIFTAlliance Access on shared storage using the following options:

      Instance RA1 (default), user all_adm

    • Copy all files in the home directories of the all_adm and all_usr users to all nodes.

Verifying the Installation and Configuration of SWIFTAlliance Access

This section contains the procedure you need to verify the installation and configuration.

How to Verify the Installation and Configuration of SWIFTAlliance Access

This procedure does not verify that your application is highly available because you have not yet installed your data service.

  1. Start the SWIFTAlliance Access application


    # su - all_adm
    

    The application GUI should start. Select the menu Alliance —> Start SWIFTAlliance Servers. With version 5.5, if DCE has not yet been started, start it first from the GUI (OS Configuration —> DCE RPC), and then select Alliance —> Start SWIFTAlliance Servers.

  2. Test the application

    Start the GUI, then select the menu item: Alliance —> Start User Interface.

  3. Stop the SWIFTAlliance Access application

    Start the GUI:


    # su - all_adm
    

    Select the menu: Alliance —> Stop SWIFTAlliance Servers.

Installing the Sun Cluster HA for SWIFTAlliance Access Packages

If you did not install the Sun Cluster data service for SWIFTAlliance Access packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster data service for SWIFTAlliance Access packages. To complete this procedure, you need the Sun Cluster Agents CD-ROM.


Note –

The SUNWscsaa package is available on the CD-ROM only in the Solaris 8 directory. On Solaris 8 and 9, this package must be installed by using the pkgadd command; on Solaris 10, it must be installed by using the pkgadd -G command.



Note –

Patch 118050-05 or a later patch must be installed. On Solaris 8 and 9, the patch must be installed by using the patchadd command; on Solaris 10, it must be installed by using the patchadd -G command.


If you are installing more than one data service simultaneously, perform the procedure in Installing the Software in Sun Cluster Software Installation Guide for Solaris OS.

Install the Sun Cluster data service for SWIFTAlliance Access packages by using one of the following installation tools:


Note –

The Web Start program is not available in releases earlier than Sun Cluster 3.1 Data Services 10/03.


How to Install Sun Cluster HA for SWIFTAlliance Access Packages Using the Web Start Program

You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.

  1. On the cluster node where you are installing the Sun Cluster data service for SWIFTAlliance Access packages, become superuser.

  2. (Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.

  3. Insert the Sun Cluster Agents CD-ROM into the CD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.

  4. Change to the Sun Cluster data service for SWIFTAlliance Access component directory of the CD-ROM.

    The Web Start program for the Sun Cluster data service for SWIFTAlliance Access data service resides in this directory.


    # cd /cdrom/cdrom0/components/SunCluster_HA_SWIFT_3.1
    
  5. Start the Web Start program.


    # ./installer
    
  6. When you are prompted, select the type of installation.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  7. Follow the instructions on the screen to install the Sun Cluster data service for SWIFTAlliance Access packages on the node.

    After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.

  8. Exit the Web Start program.

  9. Remove the Sun Cluster Agents CD-ROM from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      

Registering and Configuring Sun Cluster HA for SWIFTAlliance Access

This section contains the procedures you need to configure Sun Cluster HA for SWIFTAlliance Access.

How to Register and Configure Sun Cluster HA for SWIFTAlliance Access as a Failover Service

This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.

Steps 1 through 6 will normally already have been done to prepare for the installation of the IBM DCE and SWIFTAlliance Access software. See How to Install and Configure SWIFTAlliance Access. Typically, you should go directly to Step 7.

  1. Become superuser on one of the nodes in the cluster that will host SWIFTAlliance Access.

  2. Register the SUNW.gds resource type.


    # scrgadm -a -t SUNW.gds
    
  3. Register the SUNW.HAStoragePlus resource type.


    # scrgadm -a -t SUNW.HAStoragePlus
    
  4. Create a failover resource group.


    # scrgadm -a -g swift-rg
    
  5. Create a resource for the SWIFTAlliance Access disk storage.


    # scrgadm -a -j swift-ds \
    -g swift-rg \
    -t SUNW.HAStoragePlus  \
    -x FilesystemMountPoints=/global/saadg/alliance
    
  6. Create a resource for the SWIFTAlliance Access logical hostname.


    # scrgadm -a -L -j swift-lh-rs \
    -g swift-rg  \
    -l swift-lh
    
  7. Create a resource for SWIFTAlliance Access.

    Run the registration script provided as part of the SWIFTAlliance Access HA agent. Before running this script, check that the names of the resources match what is configured in /opt/SUNWscsaa/util/saa_config.


    # /opt/SUNWscsaa/util/saa_register 
    
  8. Enable the failover resource group that now includes the SWIFTAlliance Access disk storage and logical hostname resources.


    # scswitch -Z -g swift-rg
    
  9. Start the SWIFTAlliance Access instance manually.


    # su - all_adm

    The GUI opens. From within the GUI, select the menu Alliance —> Start SWIFTAlliance Servers.
  10. Stop SWIFTAlliance Access manually.


    # su - all_adm

    The GUI opens. Stop the application from within the GUI by selecting the menu Alliance —> Stop SWIFTAlliance Servers.
  11. Enable each SWIFTAlliance Access resource.


    # scstat -g
    # scswitch -e -j resource
    

Verifying the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data service correctly.

ProcedureHow to Verify the Sun Cluster HA for SWIFTAlliance Access Installation and Configuration

  1. Become superuser on one of the nodes in the cluster that will host SWIFTAlliance Access.

  2. Ensure that all the SWIFTAlliance Access resources are online with scstat.


    # scstat 
    

    For each SWIFTAlliance Access resource that is not online, use the scswitch command as follows.


    # scswitch -e -j resource
    
  3. Run the scswitch command to switch the SWIFTAlliance Access resource group to another cluster node, such as node2.


    # scswitch -z -g swift-rg  -h node2
    
  4. Check that SWIFTAlliance Access is stopped on the first node and that the application is restarted on the second node.

    When using a failover file system, the file system is unmounted from the first node and mounted on the second node.

Understanding the Sun Cluster HA for SWIFTAlliance Access Fault Monitor

This section describes the Sun Cluster HA for SWIFTAlliance Access fault monitor's probing algorithm and functionality, and states the conditions, messages, and recovery actions associated with unsuccessful probing.

For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.

Resource Properties

The Sun Cluster HA for SWIFTAlliance Access fault monitor uses the same resource properties as the resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of the resource properties used.

Probing Algorithm and Functionality

By default, the HA agent provides a fault monitor only for the DCE component, and only when SWIFTAlliance Access 5.5 is used. Fault monitoring for SWIFTAlliance Access itself is switched off by default. If the SWIFTAlliance Access application fails, the agent will not restart it automatically. This behavior was explicitly requested by SWIFT. It allows you to operate the application in a way that the probe does not interfere with the normal behavior of certain SWIFTAlliance Access features.

In this case, the HA agent updates the resource status message to Degraded - SAA Instance offline.

If an automatic failover occurs with the default settings, the cause was most likely a DCE problem. The SWIFTAlliance Access application causes a failover only when it fails to start on the current node.

The HA agent provides an option to turn on fault monitoring for SWIFTAlliance Access at registration time. However, SWIFT does not recommend this option. The optional probing checks for the existence of the SWIFTAlliance Access instance by calling the alliance command that is part of the application and by evaluating its return code. If the SWIFTAlliance Access instance is not running, return code 100 is sent to SUNW.gds, which in turn performs an automatic restart, depending on the configuration of the resource properties.
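The probe's return-code convention can be sketched as a small wrapper. This is a hedged illustration: check_instance is a hypothetical stand-in for calling the alliance command and evaluating its return code, while the 0 and 100 exit codes follow the SUNW.gds convention described above.

```shell
# Hedged sketch of a GDS-style probe: map an instance check to the
# SUNW.gds convention (0 = healthy, 100 = request a restart).
check_instance() {
    # Hypothetical stand-in for calling the alliance command and
    # evaluating its return code.
    return 1                  # pretend the instance is not running
}

if check_instance; then
    exit_code=0               # instance healthy
else
    exit_code=100             # tells SUNW.gds to restart the resource
fi
echo "$exit_code"
```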

Debug Sun Cluster HA for SWIFTAlliance Access

How to Turn On Debugging for Sun Cluster HA for SWIFTAlliance Access

Sun Cluster HA for SWIFTAlliance Access has a DEBUG file under /opt/SUNWscsaa/etc, where saa is the three-character abbreviation for the SWIFTAlliance Access data service component.

This file allows you to turn on debugging for all SWIFTAlliance Access instances or for a specific instance on a particular node. If you require debugging to be turned on for Sun Cluster HA for SWIFTAlliance Access across the whole cluster, repeat this step on all nodes of the cluster.

  1. Edit /etc/syslog.conf

    Change daemon.notice to daemon.debug


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                     operator
    #

    Change the daemon.notice to daemon.debug and restart syslogd. The output below, from the command grep daemon /etc/syslog.conf, shows that daemon.debug has been set.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.debug;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                    operator
    #
    # pkill -1 syslogd
    #
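The edit in this step can be sketched with sed on a scratch copy of the file. On a real node you would edit /etc/syslog.conf itself and then run pkill -1 syslogd as shown above.

```shell
# Hedged sketch: switch daemon.notice to daemon.debug on a scratch copy
# of /etc/syslog.conf.
SYSLOG_CONF=$(mktemp)
printf '*.err;kern.debug;daemon.notice;mail.crit\t/var/adm/messages\n' > "$SYSLOG_CONF"
sed 's/daemon\.notice/daemon.debug/' "$SYSLOG_CONF" > "$SYSLOG_CONF.new"
mv "$SYSLOG_CONF.new" "$SYSLOG_CONF"
grep daemon "$SYSLOG_CONF"
```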
  2. Edit /opt/SUNWscsaa/etc/config

    Change DEBUG= to DEBUG=ALL or DEBUG=resource


    # cat /opt/SUNWscsaa/etc/config
    #
    # Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    #
    # Usage:
    #       DEBUG=<RESOURCE_NAME> or ALL
    #
    DEBUG=ALL
    #
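The DEBUG setting change can likewise be sketched with sed on a scratch copy of the configuration file; the scratch file stands in for /opt/SUNWscsaa/etc/config.

```shell
# Hedged sketch: set DEBUG=ALL in a scratch copy of the agent config file.
CONFIG=$(mktemp)              # stands in for /opt/SUNWscsaa/etc/config
echo 'DEBUG=' > "$CONFIG"
sed 's/^DEBUG=.*/DEBUG=ALL/' "$CONFIG" > "$CONFIG.new"
mv "$CONFIG.new" "$CONFIG"
cat "$CONFIG"
```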

    Note –

    To turn off debugging, reverse the steps above.