StorageTek Automated Cartridge System Library Software High Availability 8.3 Cluster Installation, Configuration, and Operation Release 8.3 E51939-02
The SUNWscacsls package contains the ACSLS agent software that communicates with Oracle Solaris Cluster. It includes special configuration files and patches that ensure proper operation between ACSLS and Solaris Cluster.
Unzip the downloaded SUNWscacsls.zip file in /opt.
# cd /opt
# unzip SUNWscacsls.zip
Install the SUNWscacsls package.
# pkgadd -d .
Repeat steps 1 and 2 on the adjacent node.
Verify that the acslspool remains mounted on one of the two nodes.
# zpool status acslspool
If the acslspool is not mounted, check the other node. If the acslspool is not mounted on either node, import it to the current node as follows:
# zpool import -f acslspool
Then verify with zpool status.
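The check-and-import sequence above can be wrapped in a small helper. This is a sketch, not part of ACSLS: the function name ensure_pool_imported is hypothetical, and it assumes a root shell with zpool in the PATH. Confirm that the adjacent node does not still have the pool mounted before letting the forced import run.

```shell
# Hypothetical helper (not an ACSLS utility): import the named pool only
# when it is not already visible on this node.
ensure_pool_imported() {
  pool="$1"
  if zpool list "$pool" >/dev/null 2>&1; then
    echo "$pool already imported"
  else
    # -f forces the import when the pool was not cleanly exported on the
    # other node; check the other node first, as described above.
    zpool import -f "$pool" || return 1
    echo "$pool imported"
  fi
}

# Usage (on the cluster node): ensure_pool_imported acslspool
```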
Go into the /opt/ACSLSHA/util directory on either node and run the copyUtils.sh script. This operation updates or copies essential files to the appropriate locations on both nodes. There is no need to repeat this operation on the adjacent node.
# cd /opt/ACSLSHA/util
# ./copyUtils.sh
On the node where the acslspool is active, start the ACSLS application and verify that it is operational. Resolve any issues you encounter. Major issues may be resolved by removing and reinstalling the STKacsls package on the node.
If you must reinstall the STKacsls package, run the /opt/ACSLSHA/util/copyUtils.sh script after installing the package.
Shut down ACSLS.
# su - acsss
$ acsss shutdown
$ exit
#
Export the acslspool from the active node.
# zpool export acslspool
Note: This operation fails if user acsss is logged in, if a user shell is active anywhere in the acslspool, or if any acsss service remains enabled.
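One of the conditions in the note, open files under the pool, can be checked before attempting the export. The sketch below is not an ACSLS utility and covers only the busy-file condition (not logged-in users or enabled acsss services); the function name preexport_check is hypothetical, and it uses the Solaris form fuser -c, which reports PIDs holding files under a mount point.

```shell
# Hypothetical pre-export check (not an ACSLS utility): `zpool export`
# fails while any process still holds files under the pool's mount point.
# `fuser -c mnt` (Solaris form) prints the PIDs of such processes.
preexport_check() {
  mnt="$1"
  if fuser -c "$mnt" 2>/dev/null | grep -q '[0-9]'; then
    echo "processes still hold files under $mnt" >&2
    return 1
  fi
  echo "ok to export $mnt"
}

# Usage (on the cluster node): preexport_check /export/home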
Import the acslspool on the adjacent node.
# zpool import acslspool
Start the ACSLS application on this node and verify successful library operation. Resolve any issues you encounter. Major issues may be resolved by removing and reinstalling the STKacsls package on the node.
If you must reinstall the STKacsls package, run the /opt/ACSLSHA/util/copyUtils.sh script after installing the package.
The ACSLS HA start script is found in the /opt/ACSLSHA/util directory. This utility registers the ACSLS agent with Solaris Cluster, passing three arguments:
The ACSLS server logical hostname (see "The scinstall Routine", step 4).
The ipmp group (see "The Public Interface and IPMP").
The ACSLS application zpool (see "File System Configuration with ZFS").
To start ACSLSHA:
# cd /opt/ACSLSHA/util
# ./start_acslsha.sh -h logical_hostname -g IPMP_group -z acslspool
This operation may take a few minutes.
Once acslsha has started and is registered with Solaris Cluster, use cluster commands to check the status of the ACSLS resource group and its associated resources.
# clrg status

=== Cluster Resource Groups ===

Group Name       Node Name    Suspended    Status
----------       ---------    ---------    ------
acsls-rg         node1        No           Online
                 node2        No           Offline

# clrs status

=== Cluster Resources ===

Resource Name    Node Name    State        Status Message
-------------    ---------    -----        --------------
acsls-rs         node1        Online       Online
                 node2        Offline      Offline
acsls-storage    node1        Online       Online
                 node2        Offline      Offline
<logical host>   node1        Online       Online
                 node2        Offline      Offline
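When scripting around these checks, the active node can be extracted from the clrg status text. The helper below is a sketch, not part of ACSLS or Solaris Cluster: the function name active_node is hypothetical, and the parsing assumes the column layout shown in the sample output above (a first row naming the group, continuation rows for the remaining nodes).

```shell
# Hypothetical helper: given `clrg status` text on stdin, print the node
# on which the named resource group is Online. Assumes the column layout
# shown in the sample output above.
active_node() {
  awk -v rg="$1" '
    $1 == rg         { in_rg = 1; if ($4 == "Online") print $2; next }
    in_rg && NF == 3 { if ($3 == "Online") print $1; next }
    in_rg            { in_rg = 0 }
  '
}

# Usage (on the cluster node): clrg status | active_node acsls-rg
```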
Temporarily suspend cluster failover readiness to facilitate initial testing.
# clrg suspend acsls-rg
# clrg status
Test cluster switch operation from the active node to the standby.
# clrg switch -n standby_hostname acsls-rg
As the switch operation proceeds, monitor the activity from each of the two system consoles. Using tail -f file_name, monitor activity on each node in the following files:
a) /var/adm/messages
b) /var/cluster/logs/DS/acsls-rg/acsls-rs/start_stop_log.txt
Resolve any issues that may be revealed during the switch-over event.
Verify network connectivity from an ACSLS client system using the logical hostname of the ACSLS server.
$ ping acsls_logical_host
$ ssh root@acsls_logical_host hostname
passwd:
This operation should return the hostname of the active node.
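Immediately after a switch-over, the logical host may take some seconds to answer while the address moves between nodes, so a retry loop is convenient for scripted verification. The sketch below is hypothetical (wait_for_host is not an ACSLS or Solaris Cluster utility); it assumes the Solaris ping syntax of host followed by a timeout in seconds.

```shell
# Hypothetical helper: wait up to tries * 2 seconds for the logical host
# to answer ping after a switch-over. Solaris ping syntax: ping host timeout.
wait_for_host() {
  host="$1"; tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    if ping "$host" 2 >/dev/null 2>&1; then
      echo "$host is reachable"
      return 0
    fi
    tries=$((tries - 1))
    sleep 2
  done
  echo "$host did not respond" >&2
  return 1
}

# Usage (from a client system): wait_for_host acsls_logical_host
```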
Verify ACSLS operation.
$ acsss status
Repeat steps 3, 4, and 5 from the opposite node.
Resume cluster failover readiness.
# clrg resume acsls-rg
# clrg status
Reboot the active node and monitor the operation from the two system consoles and from the viewpoints suggested in step 3 above. Verify automatic failover operation to the standby node.
Verify network access to the logical host from a client system as suggested in step 4.
Once ACSLS operation is active on the new node, reboot this node and observe failover action to the opposite node.
Repeat network verification as suggested in step 4.