StorageTek Automated Cartridge System Library Software High Availability 8.3 Cluster Installation, Configuration, and Operation, Release 8.3, E51939-02
This chapter discusses installing, upgrading, and removing software components.
To install patches for the STKacsls package:

1. Suspend cluster control.

   # clrg suspend acsls-rg

2. Download the patch to your /opt directory and unzip the package.

3. Go into the /opt/ACSLS_8.x.x directory and follow the instructions in the patch README.txt file.

4. Switch control to the adjacent node and repeat the patch installation on that node.
If the patch instructions direct you to shut down ACSLS during installation, use the following procedure instead:

1. Disable cluster control:

   # clrg suspend acsls-rg

2. Stop ACSLS operation.

   # su - acsss
   $ acsss shutdown

3. Switch control to the adjacent node.

   # clrg switch -n <other node> acsls-rg

4. Install the ACSLS patch on this node.

5. Go to the /opt/ACSLSHA/util directory and run copyUtils.sh.

   # cd /opt/ACSLSHA/util
   # ./copyUtils.sh

6. Start up ACSLS library control.

7. Resume cluster control of the acsls resource group.

   # clrg resume acsls-rg
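The failover patch sequence above can be sketched as a small driver script. This is a minimal sketch, not an ACSLS tool: the DRY_RUN flag, the run helper, and the node name passed to patch_sequence are illustrative, and every cluster command is only echoed unless DRY_RUN is cleared on a real cluster node.

```shell
#!/bin/sh
# Dry-run sketch of the patch sequence. DRY_RUN=1 (the default)
# prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Echo the command; execute it only when DRY_RUN is cleared.
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

patch_sequence() {
    other_node=$1                       # illustrative parameter
    run clrg suspend acsls-rg           # disable cluster control
    run su - acsss -c "acsss shutdown"  # stop ACSLS operation
    run clrg switch -n "$other_node" acsls-rg
    # ...install the ACSLS patch here, per its README.txt...
    run ./copyUtils.sh                  # run from /opt/ACSLSHA/util
    run su - acsss -c "acsss enable"    # start ACSLS library control
    run clrg resume acsls-rg            # resume cluster control
}

patch_sequence node2
```

Because the helper echoes each step, the script doubles as a checklist: run it once in dry-run mode to review the order of operations before touching the cluster.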
Removal of the ACSLS package may be necessary in the case of an ACSLS upgrade. Doing so requires disabling cluster control, halting ACSLS services on both nodes, and then removing the package on each node. Use the following procedure:
1. Suspend cluster control.

   node1:# clrg suspend acsls-rg

2. On the active node, shut down ACSLS.

   node1:# su - acsss
   node1:$ acsss shutdown
   node1:$ exit

3. Export the file system on the shared disk array.

   node1:# cd /
   node1:# zpool export acslspool

   This operation fails if you are logged in as user acsss.

4. Log in to the alternate node and import the shared disk array.

   node1:# ssh <alternate node>
   node2:# zpool import acslspool

5. Shut down ACSLS on the alternate node.

   node2:# su - acsss
   node2:$ acsss shutdown
   node2:$ exit

6. Remove the STKacsls package.

   node2:# pkgrm STKacsls

7. Return to the original node and remove the STKacsls package there.

   node2:# exit
   node1:# pkgrm STKacsls
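The export in step 3 fails if you are still logged in as user acsss, so a guard on the effective user is worth scripting. This is an illustrative sketch: safe_export is not an ACSLS utility, and the zpool command is only echoed here rather than executed.

```shell
#!/bin/sh
# Guard for the zpool export step: refuse to export while running as
# the acsss user, and leave any directory that may live on the pool.
safe_export() {
    pool=$1
    if [ "$(id -un)" = "acsss" ]; then
        echo "refusing: exit the acsss shell before exporting $pool" >&2
        return 1
    fi
    cd / || return 1
    # On a real node, replace the echo with the actual command.
    echo "+ zpool export $pool"
}

safe_export acslspool
```

The `cd /` matters as much as the user check: an export also fails if any shell's working directory is on a file system in the pool.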
It is necessary to remove the STKacsls package on both nodes before installing a new release of ACSLS. Refer to the procedure detailed in the section above. To install a new package, follow this procedure:
1. Download the STKacsls package to your /opt directory and unzip the package. Repeat this step on the alternate node.

2. With Solaris Cluster suspended, ensure that the shared disk array (acslspool) is mounted to the current node.

   node1:# zpool list

   If the acslspool is not mounted, log in to the alternate node. If it is not mounted to either node, import the acslspool.

3. Go into the /opt/ACSLS_8.x.x directory and follow the instructions in the README.txt file.

4. Export the acslspool.

   node1:# zpool export acslspool

   This operation fails if you are logged in as user acsss.

5. Log in to the alternate node and repeat steps 1 through 3.

6. Go to the /opt/ACSLSHA/util directory and run copyUtils.sh.

   node2:# cd /opt/ACSLSHA/util
   node2:# ./copyUtils.sh

7. Start up ACSLS library control.

   node2:# su - acsss
   node2:$ acsss enable
   node2:$ exit

8. Resume cluster control of the acsls resource group.

   node2:# clrg resume acsls-rg
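The "is acslspool imported on this node?" check in step 2 above comes down to scanning `zpool list` output for the pool name. A minimal sketch, with a hard-coded sample standing in for the output of a real Solaris node:

```shell
#!/bin/sh
# pool_present reads `zpool list` style output on stdin and succeeds
# only if the named pool appears in the NAME column.
pool_present() {
    pool=$1
    awk -v p="$pool" '$1 == p { found = 1 } END { exit !found }'
}

# Sample output; on a real node, pipe `zpool list` in instead.
sample="NAME       SIZE  ALLOC   FREE
acslspool  928G   212G   716G"

if echo "$sample" | pool_present acslspool; then
    echo "acslspool is imported on this node"
else
    echo "import acslspool before installing"
fi
```

Matching on the first column only (rather than grepping the whole line) avoids false positives from pool names that merely contain the string acslspool.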
Upgrades to the SUNWscacsls package can be made without halting ACSLS library operation. However, it is advisable to suspend cluster operation during the upgrade. To do this:

1. Save the contents of $ACS_HOME/acslsha/ha_acs_list.txt and $ACS_HOME/acslsha/pingpong_interval.

2. Remove the original HA package from each node:

   # pkgrm SUNWscacsls

3. Download and unzip the new SUNWscacsls.zip file to the /opt directory on each node.

4. In the /opt directory on each node, run pkgadd -d . to install the unzipped SUNWscacsls package.

5. Suspend cluster operation from either node.

   # clrg suspend acsls-rg

6. On either node, go to /opt/ACSLSHA/util and run the copy utility:

   # ./copyUtils.sh

7. Restore the data in ha_acs_list.txt and pingpong_interval that you saved in step 1.

8. Resume cluster operation.

   # clrg resume acsls-rg
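Steps 1 and 7 above (saving and restoring the two ACSLSHA control files around the package swap) can be sketched as a pair of shell helpers. This is a minimal sketch: the backup directory /var/tmp/acslsha.save is illustrative, and the ACS_HOME default is an assumption about a typical installation.

```shell
#!/bin/sh
# Helpers for preserving ha_acs_list.txt and pingpong_interval across
# a SUNWscacsls remove/reinstall. Call save_ha_files before pkgrm and
# restore_ha_files after pkgadd.
ACS_HOME=${ACS_HOME:-/export/home/ACSSS}   # assumed typical value
SAVE=/var/tmp/acslsha.save                 # illustrative backup dir

save_ha_files() {
    mkdir -p "$SAVE"
    cp "$ACS_HOME/acslsha/ha_acs_list.txt" \
       "$ACS_HOME/acslsha/pingpong_interval" "$SAVE"/
}

restore_ha_files() {
    cp "$SAVE/ha_acs_list.txt" "$SAVE/pingpong_interval" \
       "$ACS_HOME/acslsha/"
}
```

Keeping the backup outside $ACS_HOME means it survives even if the package removal cleans up the acslsha directory itself.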
Consult the current Solaris Cluster documentation for specific upgrade procedures. The general command to upgrade Solaris Cluster is:

   # scinstall -u
To remove the Solaris Cluster configuration:

1. Get a list of configured resources.

   # clrs list

2. Disable and then delete each of the listed resources.

   # clrs disable acsls-rs
   # clrs disable acsls-storage
   # clrs disable <Logical Host Name>
   # clrs delete acsls-rs
   # clrs delete acsls-storage
   # clrs delete <Logical Host Name>

3. Get the name of the resource group and delete it by name.

   # clrg list
   # clrg delete <Group Name>

4. Reboot both nodes into non-cluster mode.

   # reboot -- -x

5. When both nodes are up, log in from either node and remove the cluster configuration.

   # scinstall -r
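Rather than naming each resource by hand, the disable-then-delete pass above can be driven from the output of `clrs list`. A dry-run sketch, where the run helper, DRY_RUN flag, and the resource name acsls-lh (standing in for the logical host resource) are all illustrative:

```shell
#!/bin/sh
# Dry-run sketch: disable every resource that `clrs list` reports,
# then delete them, mirroring the manual sequence above.
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = "1" ] || "$@"; }

teardown_resources() {
    names=$(cat)                 # one resource name per line on stdin
    for rs in $names; do
        run clrs disable "$rs"
    done
    for rs in $names; do
        run clrs delete "$rs"
    done
}

# Sample input; on a real node, pipe `clrs list` in instead.
sample_resources="acsls-rs
acsls-storage
acsls-lh"

echo "$sample_resources" | teardown_resources
```

Disabling everything before deleting anything matches the manual procedure and avoids deleting a resource that another still-enabled resource depends on.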