This appendix provides common ACSLS HA operational procedures.
Topics include determining the active node, forcing a failover, performing a graceful shutdown, performing maintenance, patching or upgrading, and recovering from a corrupt database.

To determine the active node, run /opt/oracle/acslsha/setup.py and select Action 2.
Note that this is not necessarily the "primary" node running ACSLS. To check which node is the primary, switch users to acsss and run the acsss status command. The node running ACSLS is the primary. Also see the section on logging to determine which node is currently the primary.
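As a sketch of the check above, the following helper decides from captured "acsss status" output whether a node is the primary. This is not the product's own tooling, and the substring matched ("running") is an assumption about the status text; adjust it to what acsss status actually prints in your release.

```shell
# Hypothetical helper: classify a node as primary based on the text
# that "acsss status" printed. The "running" match is an assumption.
is_primary() {
    # $1: captured output of: su - acsss -c "acsss status"
    case "$1" in
        *running*) return 0 ;;
        *)         return 1 ;;
    esac
}

# Example with stubbed status text instead of a live acsss call:
if is_primary "acsls daemon: running"; then
    echo "this node is the primary"
fi
```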
To force a failover, stop the acslsha service on the primary (active) node. ACSLS fails over to the secondary, which becomes the new primary. Note that the secondary reboots the original primary to ensure that no acslsha services remain active. Once the reboot completes, you can start acslsha on the inactive node so it rejoins as the new secondary.
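The failover sequence can be sketched as below. PRIMARY is a placeholder hostname (not from this document), and with DRY_RUN=1 each command is printed rather than executed, so nothing is stopped by accident.

```shell
# Hedged dry-run sketch of the failover sequence; PRIMARY is a
# placeholder hostname for the current primary node.
PRIMARY=node1
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

# 1. Stop acslsha on the primary; ACSLS fails over to the secondary,
#    and the secondary reboots this node.
run ssh "$PRIMARY" systemctl stop acslsha
# 2. After the reboot, start acslsha here so the old primary rejoins
#    the cluster as the new secondary.
run ssh "$PRIMARY" systemctl start acslsha
```

Set DRY_RUN=0 only after verifying the hostname and the service name against your installation.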
To gracefully shut down acslsha and all of its resources (ACSLS, the storage monitor, and the LogicalHostIP), stop the secondary node first. You can then stop the primary as follows; both nodes will remain idle and will not be rebooted:
# systemctl stop acslsha
To perform maintenance:
Perform a graceful shutdown on node 2:
systemctl stop acslsha
Perform a graceful shutdown on node 1:
systemctl stop acslsha
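The ordered shutdown above (secondary first, so it cannot reboot the primary) can be sketched as a small dry-run helper. Hostnames are placeholders, and with DRY_RUN=1 the commands are printed rather than executed.

```shell
# Hedged sketch of the ordered shutdown: stop the secondary before
# the primary so both nodes end up idle and not rebooted.
DRY_RUN=1

graceful_shutdown() {
    # $1 = secondary node, $2 = primary node -- the order matters
    for node in "$1" "$2"; do
        cmd="ssh $node systemctl stop acslsha"
        if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else $cmd; fi
    done
}

graceful_shutdown node2 node1
```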
To patch or upgrade:
Stop ACSLS HA on both nodes (see Performing a Graceful Shutdown above).
Follow the ACSLS patching or upgrade procedures outlined in the ACSLS Administrator's Guide. Ensure that the NFS file system is mounted to /export/home
on the node that you are currently updating.
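Before patching, it may help to confirm that /export/home really is NFS-mounted on the node being updated. The helper below is a sketch, not part of ACSLS; it inspects mount-table text (e.g. the contents of /proc/mounts on Linux) rather than calling any ACSLS tooling.

```shell
# Hedged sketch: check that /export/home appears as an NFS mount in
# mount-table text such as "$(cat /proc/mounts)".
nfs_mounted() {
    # $1: mount-table text, one mount per line
    printf '%s\n' "$1" | grep -qE '[[:space:]]/export/home[[:space:]]+nfs'
}

# Example with a stubbed mount line (filer:/vol/acsls is a placeholder):
if nfs_mounted "filer:/vol/acsls /export/home nfs rw 0 0"; then
    echo "/export/home is NFS-mounted; safe to proceed"
fi
```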
To recover from a corrupt database:
Ensure that ACSLS HA is stopped on both nodes (see Performing a Graceful Shutdown above).
Manually mount the NFS file system to /export/home
on one node only.
Start ACSLS on the node from which you mounted /export/home.
ACSLS automatically enters recovery mode and rebuilds the database.
Shut down ACSLS when the recovery operation is complete.
Unmount the NFS file system.
Start ACSLS HA on both nodes.
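The recovery steps above can be sketched as a dry-run script. NFS_SOURCE is a placeholder for your NFS export (the document does not give one), and the acsss enable/disable subcommands are an assumption about how "start" and "shut down" ACSLS map onto the acsss utility; with DRY_RUN=1 every step is printed, not executed.

```shell
# Hedged dry-run sketch of the corrupt-database recovery sequence.
DRY_RUN=1
NFS_SOURCE="filer:/export/acsls"   # placeholder NFS export

step() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

step mount -t nfs "$NFS_SOURCE" /export/home   # one node only
step su - acsss -c "acsss enable"    # ACSLS enters recovery mode
step su - acsss -c "acsss disable"   # after the rebuild completes
step umount /export/home
# then start ACSLS HA (systemctl start acslsha) on both nodes
```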
Do not run setup.py while either node is running ACSLS HA. Doing so causes ACSLS HA to stop and fail over.