6 Installing and Configuring ACSLS HA

This chapter describes how to install and configure ACSLS HA for Linux.

Topics include:

  • Installing ACSLS HA

  • Running setup.py

  • Starting, Stopping and Statusing the acslsha Service

  • ACSLS HA Logging

Installing ACSLS HA

Ensure that /export/home is mounted to the NFS file system from both nodes while performing the installation and configuration of ACSLS HA. When you are completely finished installing and configuring ACSLS HA on both nodes, you will be instructed to unmount /export/home from both nodes. ACSLS HA will mount and dismount /export/home as needed during startup and shutdown.

The ACSLS HA Linux rpm is named ACSLS-HA-8.5.1-X.XXX.x86_64.rpm where X indicates version levels. Earlier, you were instructed to download this file to the /opt directory. The following examples will use this directory.

Perform the following steps:

  1. From Node 1, cd to the /opt directory:

    # cd /opt
    
  2. Install the rpm:

    # rpm -ivh ACSLS-HA-8.5.1-0.00X.x86_64.rpm
    
  3. To ensure that ACSLS HA is registered with the Linux system services, use the following command to reload the system daemon:

    # systemctl daemon-reload
    
  4. From Node 2, repeat the above steps to install the ACSLS HA rpm on the other node.

  5. Prepare to run the ACSLS HA setup.py command on Node 1.

Running setup.py

The following example illustrates all of the setup.py options. You must run setup.py on both nodes, one at a time, starting with Node 1. Note that when you run setup.py on Node 1, it writes the same response data to Node 2, with the exception of setting up the SSH keys. When you run setup.py on Node 2, you may choose to run only option 1 (Configure SSH keys between the nodes), followed by option 2 to verify that the configuration was correctly written to Node 2 when Node 1 was configured.

If you select Action 2 (Display current configuration) while running setup.py for the first time, the configuration entries will be displayed as None.

  1. On Node 1, run setup.py:

    [root@axid ~]# /opt/oracle/acslsha/setup.py
    
    Validating local node.
    Validating remote node.
    Reading config file.
    It is highly recommended that you execute each menu item in order starting with 1. If you choose not to set up the ssh keys between the nodes, you will need to enter the password for the remote node when prompted.
    
    Building the menu
    1) Configure SSH keys between the nodes
    2) Display current configuration
    3) Configure Logical Host for connecting to ACSLS
    4) Configure FileSystem
    q) Quit
    
  2. Select Action 1.

    Select action: 1
    Please enter root password for remote node when prompted.
    root@remotenode's password:
    

    Respond with the root password of the remote node.

  3. Select Action 2.

    Select action: 2
    LogicalHostDevice : None
    StorageFilesystemType : None
    LogicalHostIp : None
    NodeId : None
    StorageFilesystem : None
    StorageMountPoint : None
    StorageOptions : None
    
  4. Select Action 3.

    ACSLS HA must know the logical host address and device used to access ACSLS. ACSLS HA will move this IP address between the nodes as necessary. The first step is to enter the IP address for the logical host used to access ACSLS. The format of the address is a dotted quad, a slash, and the subnet prefix length. For example, 10.80.25.81/23.

    Enter the IP Address: 10.80.25.81/23 (enter your IP address)
    Enter the device: eno1 (enter your device)
    
    Successfully configured the logical host.
    
  5. Select Action 4.

    ACSLS HA must know the location of the file system containing the ACSLS installation. ACSLS HA will move this file system between the nodes as necessary.

    The file system is currently set to None
    Would you like to change this filesystem (y/Y/n/N/yes/no): Y
    Enter filesystem: 10.0.0.123:/export/home (Enter the IP and name of your NFS file system)
    
    The Mount Point is currently set to None
    Would you like to change this mount point (y/Y/n/N/yes/no): Y
    Enter mount point: /export/home (Enter your local mount point directory)
    
    The filesystem type is currently set to None
    Would you like to change this type (y/Y/n/N/yes/no): Y
    
    Enter filesystem type: nfs
    The file system options are currently set to None
    Would you like to change the options (y/Y/n/N/yes/no): Y
    
    Enter options: rw,suid,soft (Note that no spaces are allowed in this response)
    
    Successfully configured the file system.
    
  6. To display the current configuration, select Action 2:

    1) Configure SSH keys between the nodes
    2) Display current configuration
    3) Configure Logical Host for connecting to ACSLS
    4) Configure FileSystem
    q) Quit
    
    Select action: 2
    
    LogicalHostDevice : eno1
    StorageFilesystemType : nfs
    LogicalHostIp : 10.80.25.81/23
    NodeId : 1
    StorageFilesystem : 10.0.0.123:/export/home
    StorageMountPoint : /export/home
    StorageOptions : rw,suid,soft
    
  7. Repeat this entire procedure to run setup.py on Node 2.

    When setup.py runs, it mounts and unmounts the NFS file system. After completing setup on both nodes, ensure that the file system is unmounted from both nodes. First, check with the mount command; if the NFS file system is shown as mounted, unmount it with the umount command. For example:

    # mount
    

    If the NFS file system is shown, then enter the following command:

    # umount /export/home
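
The check-and-unmount step above can be scripted. The following is a minimal sketch; the check_unmount helper name is an assumption, and /export/home is the mount point configured in setup.py:

```shell
# Sketch: report whether a mount point appears in the mount table read
# from stdin. If it reports "mounted", unmount it with: umount /export/home
check_unmount() {
    # $1: mount point to look for
    if grep -q " on $1 " ; then
        echo "mounted"
    else
        echo "not mounted"
    fi
}

# Check the live mount table on this node:
mount | check_unmount /export/home
```

Run the same check on each node; only proceed once both report "not mounted".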
    

Starting, Stopping and Statusing the acslsha Service

ACSLS HA is controlled through a systemd service on Linux 7.3, 7.6, or 7.8. This service is called acslsha.

To start the service, issue the following command on both nodes:

# systemctl start acslsha

The node that you start first becomes the primary node. It starts ACSLS and brings up the Logical Host IP to which ACSLS clients will attach.

The node that you start second becomes the secondary node. This node monitors the primary node and remains in standby until a failover occurs.

To stop acslsha, first ensure that there is no activity or outstanding operations in ACSLS. Then enter the following command:

# systemctl stop acslsha

Note:

  • If you stop the primary node, the product will fail over to the secondary. If you wish to shut down acslsha gracefully, stop the secondary node first.

  • It may take several minutes before acslsha completely stops, as it must first shut down ACSLS.

To check status, enter the following command:

# systemctl status acslsha

Typical acslsha status from a node that is running:

# systemctl status acslsha
● acslsha.service - The Oracle ACSLSHA Service
Loaded: loaded (/usr/lib/systemd/system/acslsha.service; disabled; vendor preset:   disabled)
Active: active (running) since Wed 2020-01-29 14:17:34 MST; 2 days ago
Main PID: 7244 (bash)
CGroup: /system.slice/acslsha.service
       7244 /bin/bash -c TERM=xterm /opt/oracle/acslsha/bin/AcslsHa.py >&1 |
/opt/oracle/acslsha/bin...
       7246 /usr/bin/python -u /opt/oracle/acslsha/bin/AcslsHa.py
       7247 /usr/bin/python -u /opt/oracle/acslsha/bin/logger.py -l 100000 -g 10 -f /var/log/acslsha...
       63487 /usr/bin/python -u /opt/oracle/acslsha/bin/AcslsHa.py
       63488 /usr/bin/python -u /opt/oracle/acslsha/bin/AcslsHa.py
       63490 /usr/bin/python -u /opt/oracle/acslsha/bin/AcslsHa.py
       63492 /usr/bin/python -u /opt/oracle/acslsha/bin/AcslsHa.py

Typical acslsha status from a node that is not running:

acslsha.service - The Oracle ACSLSHA Service
Loaded: loaded (/usr/lib/systemd/system/acslsha.service; disabled; vendor preset: disabled)
Active: inactive (dead)
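
The service state can also be extracted in a script. The following is a minimal sketch; the ha_state helper is illustrative, not part of the product:

```shell
# Sketch: extract the service state ("active", "inactive", ...) from the
# "Active:" line of `systemctl status` output.
ha_state() {
    sed -n 's/.*Active: \([a-z]*\).*/\1/p' | head -n 1
}

# On a live node you would run:  systemctl status acslsha | ha_state
# Demonstrated here on the sample lines shown above:
echo "Active: active (running) since Wed 2020-01-29 14:17:34 MST" | ha_state   # -> active
echo "Active: inactive (dead)" | ha_state                                      # -> inactive
```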

ACSLS HA Logging

The state of each node can be determined by following the current AcslsHa.log on each node. Pay attention to the time stamps, because a node currently running as the primary may previously have been a secondary.

A node in Primary state (running ACSLS) will repeatedly log the following:

2020/01/07 07:31:48.611329 INFO - Monitoring with primary = True:
2020/01/07 07:31:48.611375 DEBUG - System state changed to : MONITORING PRIMARY
2020/01/07 07:31:48.611454 DEBUG - AcslsHa: Updating node status with primary = True and status = MONITORING PRIMARY

A node in Secondary state will repeatedly log the following:

2020/01/06 13:36:37.383299 INFO - Monitoring with primary = False:
2020/01/06 13:36:37.383333 DEBUG - System state changed to : MONITORING SECONDARY
2020/01/06 13:36:37.383364 DEBUG - AcslsHa: Updating node status with primary = False and status = MONITORING SECONDARY
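
A node's current role can be read programmatically from the most recent monitoring entry in its log. The following is a minimal sketch; the ha_role helper name is an assumption:

```shell
# Sketch: report a node's current HA role by taking the last
# "MONITORING PRIMARY" / "MONITORING SECONDARY" entry in AcslsHa.log.
ha_role() {
    # $1: path to the AcslsHa.log to inspect
    grep -Eo 'MONITORING (PRIMARY|SECONDARY)' "$1" | tail -n 1
}

# On a live node:  ha_role /var/log/acslsha/AcslsHa.log
```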

Directory /var/log/acslsha – Contains the current logs.

Logs are restarted and archived when they reach 100,000 lines or when acslsha is restarted on that node (whichever happens first). Archived logs reside in directories located under /var/log/ named acslsha.0, acslsha.1, and so on up to acslsha.9, where acslsha.0 is the most recent archive. The acslsha directory (with no "dot" number) is always the current running set of logs.

A running primary node that has not yet been a secondary node will contain all of the following log files in the /var/log/acslsha directory (or any acslsha.# archive directory):

  • AcslsHa.log:

    The main currently running ACSLS HA log.

  • acslshaResourceAcsls.log:

    Contains Information about ACSLS HA's starting and stopping of ACSLS.

  • acslsResource.log:

    Contains information about the current status of ACSLS.

  • acslshaResourceLogicalHost.log:

    Contains information about the Logical Host IP. Initially, this log will indicate that the Logical host IP has been started.

  • acslshaResourceRemoteNode.log:

    Contains information that the primary node logs about the remote node (the secondary). When viewed on the secondary, this log contains information that the secondary logs about its remote node (the primary).

  • acslshaResourceStorage.log:

    Contains the startup and the name of the storage resource (NFS file system mount). NFS errors or a loss of network connection to the NFS file server are logged here.

  • storageResource.log:

    This log remains empty until a storage resource issue occurs.

  • setup.log:

    Contains the responses to the questions asked when setup.py was run. Note that you may need to review any archived set of logs in order to locate the most recently updated version of this file as setup is typically only run once and will move with the archives during a restart of ACSLS HA.
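
Because setup.log moves into the archives over time, finding the newest copy can be scripted. The following is a minimal sketch; the latest_setup_log helper is illustrative:

```shell
# Sketch: locate the most recently modified setup.log across the current
# acslsha directory and the acslsha.0 .. acslsha.9 archives.
latest_setup_log() {
    # $1: base directory holding the acslsha and acslsha.# directories
    ls -t "$1"/acslsha*/setup.log 2>/dev/null | head -n 1
}

# On a live node:  latest_setup_log /var/log
```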

A node running as the secondary that has never been a primary will contain only the following logs:

  • AcslsHa.log

  • acslshaResourceRemoteNode.log