StorageTek Automated Cartridge System Library Software High Availability 8.3 Cluster Installation, Configuration, and Operation
Release 8.3
E51939-02

7 ACSLS HA 8.3 Installation and Startup

The SUNWscacsls package contains ACSLS agent software that communicates with Oracle Solaris Cluster. It includes special configuration files and patches that ensure proper operation between ACSLS and Solaris Cluster.

Basic Installation Procedure

  1. Unzip the downloaded SUNWscacsls.zip file in /opt.

    # cd /opt
    # unzip SUNWscacsls.zip
    
  2. Install the SUNWscacsls package.

    # pkgadd -d .
    
  3. Repeat steps 1 and 2 on the adjacent node.

  4. Verify that the acslspool remains mounted on one of the two nodes.

    # zpool status acslspool
    

    If the acslspool is not mounted, check the other node.

    If the acslspool is not mounted on either node, import it to the current node as follows:

    # zpool import -f acslspool
    

    Then verify with zpool status.
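
    To confirm where the pool's datasets are mounted after an import, you can also list the pool and its datasets (a quick check; dataset names and mount points vary by site):

    # zpool list acslspool
    # zfs list -r acslspool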

  5. Go into the /opt/ACSLSHA/util directory on either node and run the copyUtils.sh script. This operation updates or copies essential files to appropriate locations on both nodes. There is no need to repeat this operation on the adjacent node.

    # cd /opt/ACSLSHA/util
    # ./copyUtils.sh
    
  6. On the node where the acslspool is active, start the ACSLS application and verify that it is operational. Resolve any issues you encounter. Major issues may be resolved by removing and reinstalling the STKacsls package on the node.

    If you must re-install the STKacsls package, run the /opt/ACSLSHA/util/copyUtils.sh script after installing the package.
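
    A minimal start-and-verify sequence, assuming the standard acsss command set from the base ACSLS installation:

    # su - acsss
    $ acsss enable
    $ acsss status
    $ exit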

  7. Shut down ACSLS.

    # su - acsss
    $ acsss shutdown
    $ exit
    #
    
  8. Export the acslspool from the active node.

    # zpool export acslspool
    

    Note:

    This operation fails if user acsss is logged in, if a user shell is active anywhere in the acslspool, or if any acsss service remains enabled.
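
    Before exporting, you can check for conditions that would block the export (a quick sketch; substitute the mount point reported by zfs list for your site):

    # who | grep acsss
    # zfs list -o name,mountpoint -r acslspool
    # fuser -cu <mount point>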

  9. Import the acslspool from the adjacent node.

    # zpool import acslspool
    
  10. Start the ACSLS application on this node and verify successful library operation. Resolve any issues you encounter. Major issues may be resolved by removing and reinstalling the STKacsls package on the node.

    If you must re-install the STKacsls package, run the /opt/ACSLSHA/util/copyUtils.sh script after installing the package.

Starting ACSLS HA

The ACSLS HA start script is found in the /opt/ACSLSHA/util directory. This utility registers the ACSLS agent with Solaris Cluster, passing three arguments: the logical hostname (-h), the IPMP group (-g), and the ZFS pool name (-z).

To start ACSLS HA:

# cd /opt/ACSLSHA/util
# ./start_acslsha.sh -h <logical hostname> -g <IPMP group> -z acslspool

This operation may take a few minutes.
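
A hypothetical invocation, using example values (acsls-logical for the logical hostname and ipmp0 for the IPMP group) in place of your site's actual names:

# cd /opt/ACSLSHA/util
# ./start_acslsha.sh -h acsls-logical -g ipmp0 -z acslspool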

Verifying Cluster Operation

  1. Once ACSLS HA has started and is registered with Solaris Cluster, use cluster commands to check the status of the ACSLS resource group and its associated resources.

    # clrg status
       === Cluster Resource Groups ===
       Group Name       Node Name       Suspended      Status
       ----------       ---------       ---------      ------
       acsls-rg         node1           No             Online
                        node2           No             Offline
    
    # clrs status
       === Cluster Resources ===
       Resource Name       Node Name      State        Status Message
       -------------       ---------      -----        --------------
       acsls-rs            node1          Online       Online
                           node2          Offline      Offline
       acsls-storage       node1          Online       Online
                           node2          Offline      Offline
       <logical host>      node1          Online       Online
                           node2          Offline      Offline
    
  2. Temporarily suspend cluster failover readiness to facilitate initial testing.

    # clrg suspend acsls-rg
    # clrg status
    
  3. Test cluster switch operation from the active node to the standby.

    # clrg switch -n <standby hostname> acsls-rg
    

    As the switch operation proceeds, monitor the activity from each of the two system consoles.

    Using tail -f file_name, monitor activity on each node from the following viewpoints:

    a) /var/adm/messages
    b) /var/cluster/logs/DS/acsls-rg/acsls-rs/start_stop_log.txt
    

    Resolve any issues that may be revealed during the switch-over event.
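
    For example, to follow both logs on a node during the switch:

    # tail -f /var/adm/messages
    # tail -f /var/cluster/logs/DS/acsls-rg/acsls-rs/start_stop_log.txt

    When the switch completes, clrg status should report acsls-rg Online on the standby node.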

  4. Verify network connectivity from an ACSLS client system using the logical hostname of the ACSLS server.

    $ ping acsls_logical_host
    $ ssh root@acsls_logical_host hostname
    Password:
    

    This operation should return the hostname of the active node.
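
    If the ping fails, confirm that the logical hostname resolves on the client (a basic check; the name service consulted varies by site):

    $ getent hosts acsls_logical_host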

  5. Verify ACSLS operation.

    $ acsss status
    
  6. Repeat steps 3, 4, and 5 from the opposite node.

  7. Resume cluster failover readiness.

    # clrg resume acsls-rg
    # clrg status
    
  8. Reboot the active node and monitor the operation from the two system consoles and from the viewpoints suggested in step 3 above. Verify automatic failover operation to the standby node.
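
    For example, from the active node (init 6 performs an orderly reboot on Solaris):

    # init 6

    Then, from the surviving node, watch the failover with clrg status until acsls-rg reports Online there.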

  9. Verify network access to the logical host from a client system as suggested in step 4.

  10. Once ACSLS operation is active on the new node, reboot this node and observe failover action to the opposite node.

  11. Repeat network verification as suggested in step 4.