Oracle Solaris Cluster Data Service for SWIFTAlliance Access Guide SPARC Platform Edition

Installing and Configuring Alliance Access

This section describes the procedure to install and configure Alliance Access.

The following sections refer to several user-accessible directories for Alliance Access.


Note - HA for Alliance Access can be configured to run in a whole root or a sparse root non-global zone for Alliance Access versions 6.0, 6.2, and 6.3 if required.


How to Install and Configure Alliance Access

Use this procedure to install and configure Alliance Access.


Note - IBM DCE client software is a prerequisite for Alliance Access version 5.5. The client software must be installed and configured before you install the Alliance Access application.


  1. Create the resources for Alliance Access.
    • Create a resource group for Alliance Access.

      # clresourcegroup create [-n node-zone-list] swift-rg
      -n node-zone-list

      Specifies a comma-separated, ordered list of nodes or zones that can master the resource group. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a non-global Solaris zone. To specify the global zone, or to specify a node without non-global zones, specify only node. This list is optional. If you omit this list, the global zone of each cluster node can master the resource group.
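
      For example, assuming two hypothetical cluster nodes named node1 and node2, each with a non-global zone named zone1, the list entries take the node:zone form:

      # clresourcegroup create -n node1:zone1,node2:zone1 swift-rg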

    • Create a logical host.

      Add the hostname and IP address to the /etc/inet/hosts file on all cluster nodes or zones that can master the resource group. Register the logical host and add it to the resource group.

      # clreslogicalhostname create -g swift-rg -h swift-lh swift-saa-lh-rs
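
      The /etc/inet/hosts entry for the logical host might look like the following; the address shown is a hypothetical example:

      192.168.10.10    swift-lh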
    • Create the device group and file system.

      See Solaris Cluster Data Services Installation and Configuration Guide for instructions on how to create global file systems.

    • Create an HAStoragePlus resource.

      It is recommended that you create an HAStoragePlus failover resource to contain the Alliance Access application and configuration data instead of using the global file system.

      # clresource create -g swift-rg \
      -t SUNW.HAStoragePlus \
      -x FilesystemMountPoints=/global/saadg/alliance swift-ds
    • Bring the resource group online.

      # clresourcegroup online -M swift-rg
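
      To confirm that the resource group and its resources are online, you can, for example, check the status:

      # clresourcegroup status swift-rg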
    • Create the configuration directory.

      This directory contains Alliance Access information, and a symbolic link to it is created from the /usr directory.

      # cd /global/saadg/alliance
      # mkdir swa
      # ln -s /global/saadg/alliance/swa /usr/swa

      Note - For Solaris 10 only: If you install Alliance Access in a sparse root zone, that is, if the /usr directory is inherited in read-only mode through a loopback mount, the link must be created within the global zone.


  2. Install IBM DCE client software on all cluster nodes or zones that can master the resource group.

    Caution - This step applies only to Alliance Access versions prior to 5.9 and should be performed only when needed.

    Skip this step if you are using Alliance Access version 5.9 or later.


    • Install the IBM DCE client software on all cluster nodes or zones that can master the resource group, using local disks. The software is shipped in Sun package format (IDCEclnt). Because the installed files reside at various locations on the system, it is not practical to install them on a global file system.

      # pkgadd -d ./IDCEclnt.pkg
    • Configure DCE client RPC.

      # /opt/dcelocal/tcl/config.dce --cell_name swift --dce_hostname swift-lh RPC
    • Test DCE.

      Run the tests on all cluster nodes or zones that can master the resource group.

      # /opt/dcelocal/tcl/start.dce

      Verify that the dced daemon is running (see the example at the end of this step).

      # /opt/dcelocal/tcl/stop.dce
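
      For example, you can check for the dced process in the process table; the output varies by system:

      # ps -ef | grep dced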
  3. Install Alliance Access software.

    Perform the following steps on all cluster nodes or zones that can master the resource group. The steps vary between different versions of Alliance Access. You must perform the steps corresponding to the version of Alliance Access you are using.

    • For Alliance Access 6.0 and earlier only: Create the users all_adm and all_usr and the group alliance, with the same user IDs and group ID, on all cluster nodes or zones that can master the resource group.

      # groupadd -g groupid alliance
      # useradd -m -g alliance -d /export/home/all_adm -s /usr/bin/ksh all_adm
      # useradd -m -g alliance -d /export/home/all_usr -s /usr/bin/ksh all_usr

      On Solaris 10: Create a project called swift and assign the users all_adm and all_usr to it.

      # projadd -U all_adm,all_usr swift
    • For Alliance Access 6.2 and 6.3: Create the user all_adm and the groups alliance and sagsnlg, with the same user ID and group IDs, on all cluster nodes or zones that can master the resource group. Also, create a project called swift and assign the user all_adm to it.

      # groupadd -g groupid alliance
      # groupadd -g groupid sagsnlg
      # useradd -m -g alliance -G sagsnlg -d /export/home/all_adm -s /usr/bin/ksh all_adm
      # projadd -U all_adm swift
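
      To check that the user and group IDs match on every node or zone, you can, for example, compare the id output on each of them:

      # id -a all_adm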
    • On Solaris 10: Set the values of the resource controls for the project swift.

      For Alliance Access 6.0 and earlier versions only:

      # projmod -s -K "project.max-sem-ids=(privileged,128,deny)" swift
      # projmod -s -K "process.max-sem-nsems=(privileged,512,deny)" swift
      # projmod -s -K "process.max-sem-ops=(privileged,512,deny)" swift
      # projmod -s -K "project.max-shm-memory=(privileged,4294967295,deny)" swift
      # projmod -s -K "project.max-shm-ids=(privileged,128,deny)" swift
      # projmod -s -K "process.max-msg-qbytes=(privileged,4194304,deny)" swift
      # projmod -s -K "project.max-msg-ids=(privileged,500,deny)" swift
      # projmod -s -K "process.max-sem-messages=(privileged,8192,deny)" swift

      For Alliance Access 6.2 and 6.3 only:

      # projmod -s -K "project.max-sem-ids=(privileged,1320,deny)" swift
      # projmod -s -K "project.max-shm-ids=(privileged,1500,deny)" swift
      # projmod -s -K "project.max-shm-memory=(privileged,4294967295,deny)" swift
      # projmod -s -K "project.max-msg-ids=(privileged,800,deny)" swift
      # projmod -s -K "process.max-sem-nsems=(privileged,512,deny)" swift
      # projmod -s -K "process.max-sem-ops=(privileged,512,deny)" swift
      # projmod -s -K "process.max-msg-qbytess=(privileged,10485760,deny)" swift
      # projmod -s -K "process.max-msg-messages=(privileged,8192,deny)" swift
      # projmod -s -K "process.max-stack-size=(basic,33554432,deny)" swift
      # projmod -s -K "process.max-data-size=(basic,8.0EB,deny)" swift
      # projmod -s -K "process.max-file-descriptor=(basic,1000,deny)" swift

      The previous values are examples. For more accurate values, refer to the latest SWIFT documentation and release notes for the corresponding version.
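
      To review the resource controls recorded for the project, you can, for example, inspect the project database:

      # grep swift /etc/project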

    • On Solaris 10:

      For Alliance Access 6.0 and earlier versions only:

      Assign the project swift as the default project for all_adm and all_usr by editing the file /etc/user_attr and typing the following two lines at the end of the file.

      all_adm::::project=swift
      all_usr::::project=swift

      For Alliance Access 6.2 and 6.3 only:

      Assign the project swift as the default project for all_adm by editing the file /etc/user_attr and typing the following line at the end of the file.

      all_adm::::project=swift
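
      To verify the default project assignment, you can, for example, check the project reported for the user (id -p prints the current project):

      # su - all_adm -c "id -p"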
    • For versions prior to Solaris 10, refer to the latest SWIFT documentation and release notes to determine the necessary setup for /etc/system.

    • Install the Alliance Access software on the shared storage configured in Step 1. The installation procedure modifies system files and might reboot the system. After rebooting, you must continue the installation on the same node or zone. Ensure that the resource group is online on this node or zone.

      For Alliance Access 6.0 and earlier versions only:

      Repeat the installation of the software on the other node or zone that can master the resource group, but you must end the installation before the Alliance Access software licensing step.

  4. For Alliance Access 6.0 and earlier versions only: Continue configuring the Alliance Access application.

    To enable clients to connect to the failover IP address, create a file named .alliance_ip_name (interfaces.rpc in versions 5.9 and 6.0) in the data subdirectory of the Alliance Access software.

    If you are using the same file system as shown in the examples, this directory is /global/saadg/alliance/data. This file must contain the IP address of the logical host as configured within the Alliance Access resource group.

    # cd /global/saadg/alliance/data
    # chown all_adm:alliance interfaces.rpc
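
    The file contains the logical host IP address; for example, assuming a hypothetical address of 192.168.10.10 and that the file holds just the address on a single line (check the SWIFT documentation for the exact format), it could be created as follows:

    # echo "192.168.10.10" > interfaces.rpc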

    If Alliance Messenger is licensed, create a file called interfaces.mas and add the cluster logical IP address used to communicate with SAM.

    # cd /global/saadg/alliance/data
    # chown all_adm:alliance interfaces.mas
  5. Add a symbolic link and entries.
    • Add the symbolic link /usr/swa on all cluster nodes or zones that can master the resource group, as described in the last bullet of Step 1.

    • Add the required entries to /etc/services on all cluster nodes or zones that can master the resource group. You can do this as root by running the /usr/swa/apply_alliance_ports script.
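
      To confirm that the entries were added, you can, for example, search /etc/services; the pattern shown is only an assumption, because the exact service names depend on the Alliance Access release:

      # grep -i alliance /etc/services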

    • For Alliance Access 6.0 and earlier versions only:

      The rc.alliance and rc.swa_boot scripts (swa_rpcd in Alliance Access versions prior to 5.9) in /etc/init.d must remain in place. Remove any references to these files from the /etc/rc?.d directories (see the example at the end of this bullet) and set the access rights as follows:

      # cd /etc/init.d
      # chmod 750 rc.alliance rc.swa_boot
      # chown root:sys rc.alliance rc.swa_boot

      If the Alliance Access Installer displays “Start this SWIFTAlliance at Boottime”, choose No.

      You must copy the rc.alliance and rc.swa_boot scripts to all other cluster nodes or zones that can master the resource group:

      # scp rc.alliance rc.swa_boot node2:/etc/init.d
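
      To locate any remaining references in the rc directories, you can, for example, run the following search; the patterns are assumptions, because the actual link names are installation-specific:

      # ls /etc/rc?.d | egrep -i 'alliance|swa'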

    Note - For Alliance Access 6.2 and 6.3, do not configure the application to start automatically at boot time through the saa_configbootstrap command.


  6. Install Alliance Access Remote API (RA).
    • Install RA after Alliance Access on shared storage using the following options:

      Instance RA1 (default), user all_adm

    • Alliance Access 6.0 and earlier versions only:

      Copy the files in the home directories of the all_adm and all_usr users to all cluster nodes or zones that can master the resource group.

    • Alliance Access 6.2 and 6.3 only:

      Copy the files in the home directory of the all_adm user to all cluster nodes or zones that can master the resource group. Copy the root/InstallShield directory to all cluster nodes or zones that can master the resource group.
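
      For example, assuming a second node named node2 as in the earlier scp example, the home directory contents could be copied as follows; the root/InstallShield directory can be copied the same way:

      # scp -rp /export/home/all_adm node2:/export/home/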