Sun Cluster Data Service for SWIFTAlliance Access Guide for Solaris OS

How to Install and Configure SWIFTAlliance Access

Use this procedure to install and configure SWIFTAlliance Access.

  1. Create the resources for SWIFTAlliance Access.

    • Create a resource group for SWIFTAlliance Access:

      # scrgadm -a -g swift-rg
    • Create a logical host – Add the hostname and IP address in the /etc/inet/hosts file on both cluster nodes. Register the logical host and add it to the resource group.

      # scrgadm -a -L -g swift-rg -j swift-saa-lh-rs -l swift-lh
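      As an illustration, the /etc/inet/hosts entry on both nodes might look like the following. The address 192.0.2.50 is a placeholder; use the address actually assigned to your logical host.

```
192.0.2.50    swift-lh
```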
    • Create the device group and file system – See the Sun Cluster 3.1 Software Installation Guide for instructions on how to create global file systems.

    • Create an HAStoragePlus resource – Although you can use global storage, it is recommended to create an HAStoragePlus failover resource to contain the SWIFTAlliance Access application and configuration data.

      In the example, we use /global/saadg/alliance as the path, but you can choose a different location.

      # scrgadm -a -g swift-rg \
      -j swift-ds \
      -t SUNW.HAStoragePlus \
      -x FilesystemMountPoints=/global/saadg/alliance
    • Bring the resource group online

      # scswitch -Z -g swift-rg
    • Create a configuration directory – This directory holds SWIFTAlliance Access information. Create a link to it from /usr.

      # cd /global/saadg/alliance

      # mkdir swa

      # ln -s /global/saadg/alliance/swa /usr/swa
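The directory and link layout created above can be sketched as follows. This version uses a scratch root so it can be tried safely outside the cluster; on the cluster nodes the root is simply /.

```shell
# Recreate the configuration-directory layout under a scratch root.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/global/saadg/alliance/swa" "$ROOT/usr"

# The /usr/swa link lets the application find its data on shared storage.
ln -s "$ROOT/global/saadg/alliance/swa" "$ROOT/usr/swa"

# The link resolves to the directory on the shared file system.
readlink "$ROOT/usr/swa"
```

Note that the link itself lives on local storage, which is why Step 5 repeats it on every node.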
  2. Install IBM DCE client software on all the nodes.

    Caution –

    This step applies only to SWIFTAlliance Access versions below 5.9; install the IBM DCE client software only when it is needed.

    Skip this step if you are using SWIFTAlliance Access version 5.9 or 6.0.

    IBM DCE client software is a prerequisite for SWIFTAlliance Access 5.5. It must be installed and configured before the SWIFTAlliance Access application.

    • Install the IBM DCE client software on local disks. The software comes in Sun package format (IDCEclnt). Because the installed files reside at various locations on the system, it is not practical to install it on global file systems. Install this software on both cluster nodes.

      # pkgadd -d ./IDCEclnt.pkg
    • Configure DCE client RPC.

      # /opt/dcelocal/tcl/config.dce -cell_name swift -dce_hostname swift-lh RPC
    • Test DCE.

      Run the tests on both nodes.

      # /opt/dcelocal/tcl/start.dce

      Verify that the dced daemon is running.

      # /opt/dcelocal/tcl/stop.dce
  3. Install SWIFTAlliance Access software.

    • Create the users all_adm and all_usr and the group alliance in advance on all cluster nodes, using the same user ID and group ID on each node.

    • On Solaris 10: Create a project called swift and assign the users all_adm and all_usr to it.

      # projadd -U all_adm,all_usr swift
    • On Solaris 10: Set the values of the resource controls for the project swift:

      # projmod -s -K "project.max-sem-ids=(privileged,128,deny)" swift

      # projmod -s -K "process.max-sem-nsems=(privileged,512,deny)" swift

      # projmod -s -K "process.max-sem-ops=(privileged,512,deny)" swift

      # projmod -s -K "project.max-shm-memory=(privileged,4294967295,deny)" swift

      # projmod -s -K "project.max-shm-ids=(privileged,128,deny)" swift

      # projmod -s -K "process.max-msg-qbytes=(privileged,4194304,deny)" swift

      # projmod -s -K "project.max-msg-ids=(privileged,500,deny)" swift

      # projmod -s -K "process.max-msg-messages=(privileged,8192,deny)" swift

      The above values are examples only. For accurate values, refer to the latest SWIFT documentation and release notes.

    • On Solaris 10: Assign the project swift as the default project for all_adm and all_usr by editing the file /etc/user_attr and adding the following two lines at the end of the file:

      all_adm::::project=swift

      all_usr::::project=swift

    • For Solaris versions prior to 10, refer to the latest SWIFT documentation and release notes to determine the necessary settings for /etc/system.

    Use shared storage for the installation of this software. The installation procedure modifies system files and reboots the system. After the reboot, you must continue the installation on the same node. Repeat the installation on the second node, but end that installation before the SWIFTAlliance Access software licensing step.

  4. Additional configuration for SWIFTAlliance Access

    To enable clients to connect to the failover IP address, create a file named .alliance_ip_name (interfaces.rpc in versions 5.9 and 6.0) in the data subdirectory of the SWIFTAlliance Access software.

    When you use the same file system as shown in the examples, this directory is /global/saadg/alliance/data. The file must contain the IP address of the logical host as configured within the SWIFTAlliance Access resource.

    # cd /global/saadg/alliance/data

    # chown all_adm:alliance interfaces.rpc
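    A minimal sketch of creating the file follows. The address 192.0.2.50 is a placeholder for your logical-host IP, and DATA defaults to a scratch directory here so the sketch is safe to try; on the cluster it would be /global/saadg/alliance/data.

```shell
# Write the logical-host IP address into the interfaces file.
DATA=${DATA:-$(mktemp -d)}
echo "192.0.2.50" > "$DATA/interfaces.rpc"

# The file should contain exactly the logical-host address.
cat "$DATA/interfaces.rpc"
```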

    If MESSENGER is licensed, create a file called interfaces.mas and add the cluster logical IP address used to communicate with SAM.

    # cd /global/saadg/alliance/data

    # chown all_adm:alliance interfaces.mas
  5. Additional steps

    • Create the symbolic link /usr/swa on all cluster nodes (see the last bullet of Step 1).

    • Entries must be added to /etc/services on all nodes. As root, run the /usr/swa/apply_alliance_ports script to add them.
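      Conceptually, the script appends the Alliance port entries to /etc/services when they are not already present. A sketch of that pattern follows; the service name and port below are invented placeholders, not the real Alliance ports, and a temporary file stands in for /etc/services.

```shell
# Append a service entry only if it is not already present (idempotent).
SERVICES=$(mktemp)   # stand-in for /etc/services in this sketch
add_service() {
  grep -q "^$1[[:space:]]" "$SERVICES" || printf '%s\t%s\n' "$1" "$2" >> "$SERVICES"
}

add_service example-svc 12345/tcp
add_service example-svc 12345/tcp   # second call is a no-op

grep -c '^example-svc' "$SERVICES"   # prints 1
```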

    • The rc.alliance and rc.swa_boot scripts (swa_rpcd in SWIFTAlliance Access versions earlier than 5.9) in /etc/init.d must remain in place. Remove any references to these files in /etc/rc?.d, and set the access rights as follows:

      # cd /etc/init.d

      # chmod 750 rc.alliance rc.swa_boot

      # chown root:sys rc.alliance rc.swa_boot

      If the SWIFTAlliance Access Installer displays “Start this SWIFTAlliance at Boottime”, select No.

  6. SWIFTAlliance Access Remote API (RA)

    • Install RA after SWIFTAlliance Access on shared storage using the following options:

      Instance RA1 (default), user all_adm

    • Copy all files in the home directories of the all_adm and all_usr users to all nodes.
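      This copy can be scripted. The sketch below only prints the copy commands (a dry run); the node name node2 and the /export/home path are assumptions to adapt to your cluster.

```shell
# Dry run: print the commands that would copy each home directory to the
# other node. Replace node2 and /export/home with your actual values.
NODES="node2"
for node in $NODES; do
  for user in all_adm all_usr; do
    echo scp -rp "/export/home/$user" "$node:/export/home/"
  done
done
```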