Installing, Configuring, and Running ACSLS on the Feature Card
Once you have initialized and configured the feature card, you must install and configure the ACSLS software on the card.
Step 1: Installing ACSLS
Installation of ACSLS 8.5 and later differs significantly from previous ACSLS releases.
Before installing ACSLS 8.5 and later on the feature card, ensure that you have completed all initialization and configuration tasks described earlier in this chapter.
Additionally, ensure that you have completed the pre-installation tasks described in Installing ACSLS on Linux. Refer to the sections on configuring YUM, creating user accounts and groups, and installing ACSLS.
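For orientation, the following is a minimal sketch of those pre-installation tasks on a Linux host. The repository name, baseurl, account names, group name, and home directories shown here are illustrative assumptions; use the exact values documented in Installing ACSLS on Linux.

# Sketch only: names and paths below are assumptions; take the exact
# values from "Installing ACSLS on Linux".

# Create a YUM repository definition (contents per the guide):
sudo tee /etc/yum.repos.d/acsls.repo > /dev/null <<'EOF'
[acsls]
name=ACSLS packages
baseurl=file:///mnt/acsls_repo
enabled=1
gpgcheck=0
EOF

# Create the ACSLS group and user accounts:
sudo groupadd acsls
sudo useradd -g acsls -m -d /export/home/ACSSS acsss
sudo useradd -g acsls -m -d /export/home/ACSSA acssa
sudo useradd -g acsls -m -d /export/home/acsdb acsdb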
Note:
- ACSLS must be installed in the default directories on the feature card. User-defined directories are not supported.
- ACSLS 8.5 and later uses the StorageTek Library Control Interface (SCI) protocol to connect with and operate the library. It does not use direct SCSI communications with the SL4000.
  Note: ACSLS SCI connection to an SL4000 library requires an SL4000 user credential with a user role at the User level. The SL4000 Administrator role can also be used for this credential.
- ACSLS 8.5 and later on a feature card does not support the ACSLS High Availability package on Linux. Instead, use the ACSLS Feature Card Availability Toolkit (FCAT).
- ACSLS 8.5 and later on a feature card does not support the ACSLS SNMP Agent.
- You must follow all steps for configuring the SL4000 to ACSLS as described in the ACSLS Administrator's Guide.
Once installation is complete, ACSLS resides on a RAID-1 disk pair under the /export file system.
Three redundant backup directories, /bkupa, /bkupb, and /bkupc, store the downloaded ACSLS package and the customized Linux system files. Copies of unexpired ACSLS database backup files are also maintained in these locations.
You may need to manage this RAID-1 disk pair and these backup directories when troubleshooting or addressing system faults associated with the feature card.
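Standard Linux commands are sufficient for a quick inspection of these resources. The sketch below assumes the RAID-1 pair is managed by Linux software RAID (md); adjust the first check if the card uses a hardware RAID controller.

cat /proc/mdstat                      # health of the RAID-1 mirror, if Linux md is used
df -h /export /bkupa /bkupb /bkupc    # confirm the file systems are mounted with free space
ls -lt /bkupa /bkupb /bkupc           # review the redundant backup copies by date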
Step 2: Configuring and Running ACSLS
Follow the instructions in the ACSLS Administrator's Guide to use acsss_config to configure ACSLS and create a database image of your library. Although local backups of the database are created on the feature card, it is highly recommended that you also establish periodic backups of your database to tape media or to a storage server outside the library, as part of your organization's disaster recovery processes.
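As one example of such an off-board backup, the following crontab entry copies the local database backup directory to a remote host nightly. This is a sketch, not the documented procedure: the backup path, remote host name, and destination directory are placeholders for your own infrastructure.

# crontab entry, run as the acsss user; "dr-server" and paths are hypothetical
0 2 * * * /usr/bin/rsync -a /export/backup/ dr-server:/backups/acsls/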
Before running acsss_config, ensure that you have completed the following library configuration tasks using the SL4000 GUI:
- Define an SL4000 library certificate, including the Library Name (CN). This name must match the name used in acsss_config and config new acs. If using a host name (DN), not an IP address, it must also resolve to the same exact name.
- Define an SL4000 user that the ACSLS SCI interface can use to connect to the SL4000 library.
  Note: ACSLS SCI connection to an SL4000 library requires an SL4000 user credential with a user role at the User level. The SL4000 Administrator role can also be used for this credential.
- Ensure that the SL4000 library is SCI capable, or has an SCI-capable partition.
- Ensure that the ACSLS server time and the SL4000 library time are synchronized to within a couple of minutes of each other (a verification sketch follows this list).
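You can sanity-check the name-resolution and time requirements from the ACSLS server before running acsss_config. In this brief sketch, the library host name is a placeholder, and the library's own clock must be read from the SL4000 GUI for comparison.

getent hosts sl4000.example.com           # host name must resolve to the exact name used in acsss_config
date -u                                   # compare against the library time shown in the SL4000 GUI
chronyc tracking 2>/dev/null || ntpstat   # confirm NTP sync, whichever time daemon is in use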
Refer to the ACSLS Administrator's Guide for more information about these tasks.
Note:
ACSLS on the feature card does not support multiple library connections. There is a one-to-one correspondence between an instance of ACSLS running on the feature card and the SL4000 library that it supports. Accordingly, ACSLS running on the feature card should be used only to manage the SL4000 within which the feature card is installed. It should not be used to manage other libraries within your organization.
Once configured, enter the following command to enable acsss and begin operations:
acsss enable
This command is only valid if at least one ACS is configured.
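After enabling, you can confirm that the ACSLS services started cleanly with the acsss status subcommand, for example:

acsss enable    # start the ACSLS services (valid once an ACS is configured)
acsss status    # verify that each ACSLS service reports as running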
For any additional operations, refer to your ACSLS publications; ACSLS operations run the same on the feature card as they do on a standalone server.