This chapter provides the command-line procedures for installing and configuring the Open Telecommunications Platform 1.1.
The following topics are discussed:
This section discusses the following topics:
This section provides summaries of the high-level tasks that you will perform as part of the Open Telecommunications Platform site preparation, installation, configuration, and run time processes.
The following diagram illustrates the sequence of the high-level tasks for site planning, installation, and configuration of the Open Telecommunications Platform software.
The following prerequisites must be met before you can install the Open Telecommunications Platform using the command line.
The external OTP installation server must be set up and configured as described in Setting Up the External OTP Installation Server.
Solaris 10 Update 2 must be installed and configured on each OTP system server as described in Installing Solaris 10 Update 2 and the Remote Agent on the OTP Hosts.
A naming service such as NIS, NIS+, or /etc/hosts must be set up, and all host names and IP addresses must be registered in that naming service (see the example after this list).
All OTP system servers and storage devices must meet the minimum patch and firmware requirements as described in OTP System Hardware and Firmware Requirements.
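For example, if you use /etc/hosts, every system needs entries for each OTP host and the external OTP installation server. A minimal sketch with hypothetical host names and IP addresses:

192.168.1.5    otpinstall    # external OTP installation server
192.168.1.10   otpclient1    # OTP host
192.168.1.11   otpclient2    # OTP host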
This section provides the procedures for using the command line to install the Open Telecommunications Platform on a standalone OTP host.
Before you begin, review the OTP Plan settings described in Open Telecommunications Platform Plan Worksheets, then print out the Standalone OTP Host Plan Worksheet and fill in the values for the standalone OTP host on which you will install OTP.
Log in as root (su - root) on the external OTP installation server.
Copy the inputOTPSingleNode.dat file to /var/tmp.
Type the command cp /opt/SUNWotp10/CLI/templates/inputOTPSingleNode.dat /var/tmp
Edit the /var/tmp/inputOTPSingleNode.dat file.
Specify the values for each keyword as described in Open Telecommunications Platform Plan Worksheets and the Standalone OTP Host Plan Worksheet.
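The file consists of keyword entries. A minimal sketch, assuming a keyword=value format; the keyword names and values below are hypothetical, for illustration only:

# Hypothetical keywords for illustration only; use the actual keywords
# and values from your completed Standalone OTP Host Plan Worksheet.
OTP_HOST_NAME=otphost1
OTP_HOST_IP=192.168.1.10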
Run the deployOTPSingleNode script on the external OTP installation server to install OTP on the standalone OTP host.
# /opt/SUNWotp10/CLI/deployOTPSingleNode /var/tmp/inputOTPSingleNode.dat
The deployOTPSingleNode script does the following tasks:
Sets up the OTP High Availability Framework
Sets up the OTP System Management and Application Provisioning Services
Enables High Availability for the OTP Provisioning Service
The installation log files and input files generated for the plans are stored on the external OTP installation server in the directory /var/tmp/OTP_INSTALL.
Log in as root to the standalone OTP host and restart the remote agent.
Type /etc/init.d/n1spsagent restart to restart the remote agent. If the remote agent is not restarted, the provisioning service on the standalone OTP host will not work properly.
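For example:

# /etc/init.d/n1spsagent restart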
This completes installation of the Open Telecommunications Platform on the standalone OTP host.
This section provides the procedures for using the command line to install the Open Telecommunications Platform on the OTP hosts in a clustered OTP system.
Installing and configuring OTP on a clustered OTP system consists of the following tasks:
Installing the Open Telecommunications Platform On a Clustered OTP System
Completing and Validating Open Telecommunications Platform Installation
Before you begin, review the OTP Plan settings described in Open Telecommunications Platform Plan Worksheets, then print out the Clustered OTP Host Plan Worksheet and fill in the values for the clustered OTP system on which you will install OTP.
Log in as root (su - root) on the external OTP installation server.
Copy the inputOTPMultiNode.dat file to /var/tmp.
Type the command cp /opt/SUNWotp10/CLI/templates/inputOTPMultiNode.dat /var/tmp
Edit the /var/tmp/inputOTPMultiNode.dat file.
Specify the values for each keyword as described in Open Telecommunications Platform Plan Worksheets and the Clustered OTP Host Plan Worksheet.
Run the deployOTPMultiNode script on the external OTP installation server to install OTP on the OTP hosts in the clustered OTP system.
# /opt/SUNWotp10/CLI/deployOTPMultiNode /var/tmp/inputOTPMultiNode.dat
The deployOTPMultiNode script does the following tasks:
Sets up the OTP High Availability Framework on the first OTP host
Adds additional OTP hosts to the clustered OTP system
Sets up the OTP High Availability Framework on the additional OTP hosts
The installation log files for this and subsequent procedures, and the input files generated for the plans, are stored on the external OTP installation server in the directory /var/tmp/OTP_INSTALL.
If you chose no for Quorum Auto Configuration on a two-host cluster, you must manually select and configure the quorum disk. Go to To Configure the Quorum Disk on a Two-Host Cluster.
If you are setting up a clustered OTP system of three or more hosts, quorum disk configuration is optional.
If you chose no for Quorum Auto Configuration on a two-host cluster, you must manually select and configure the quorum disk as described in this procedure.
The following sub-steps apply only to a two-host cluster. If you are setting up a clustered OTP system of three or more hosts, this procedure is optional.
Open a separate terminal window and log in as root to the first OTP host.
Type /usr/cluster/bin/scdidadm -L to display the cluster disk information. For example:
# /usr/cluster/bin/scdidadm -L
1    otpclient1:/dev/rdsk/c0t8d0    /dev/did/rdsk/d1
1    otpclient2:/dev/rdsk/c0t8d0    /dev/did/rdsk/d1
2    otpclient1:/dev/rdsk/c0t9d0    /dev/did/rdsk/d2
2    otpclient2:/dev/rdsk/c0t9d0    /dev/did/rdsk/d2
3    otpclient1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d3
4    otpclient1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d4
5    otpclient2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d5
6    otpclient2:/dev/rdsk/c1t1d0    /dev/did/rdsk/d6
In the above example, disks d1 and d2 are shared by both hosts of the two-host cluster. The quorum disk must be a shared disk.
Configure a quorum disk.
Type /usr/cluster/bin/scconf -a -q globaldev=shared-disk-ID, where shared-disk-ID is the device ID of a shared disk. For example:
# /usr/cluster/bin/scconf -a -q globaldev=d1
Type /usr/cluster/bin/scconf -c -q reset to reset the two-host cluster to normal mode.
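For example:

# /usr/cluster/bin/scconf -c -q reset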
Create the system shared storage as described in the next procedure.
Set the hard drive variables according to your cluster settings. If you do not, installation of the OTP High Availability Framework will fail. The following steps must be performed on each host in your clustered OTP system, including the first OTP host.
Create the shared storage meta database on all hosts in the clustered OTP system.
The following steps must be performed for each host in the clustered OTP system.
Log in as root (su - root) on the clustered OTP host.
Determine the drive on which root is mounted and the available free space.
Type prtvtoc `mount | awk '/^\/ / { print $3 }'` to list the hard drive slices and available space.
For example:
# prtvtoc `mount | awk '/^\/ / { print $3 }'`
* /dev/rdsk/c0t0d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     424 sectors/track
*      24 tracks/cylinder
*   10176 sectors/cylinder
*   14089 cylinders
*   14087 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First      Sector     Last
*       Sector     Count      Sector
*     63620352   79728960  143349311
*
*                          First      Sector     Last
* Partition  Tag  Flags    Sector     Count      Sector     Mount Directory
       0      2    00      8201856   51205632   59407487    /
       1      3    01            0    8201856    8201855
       2      5    00            0  143349312  143349311
       3      0    00     59407488    2106432   61513919    /globaldevices
       7      0    00     61513920    2106432   63620351
Create the database.
Type metadb -a -f -c 6 disk-slice, where disk-slice is an unused disk slice with available space.
For example, based on the example in the previous step:
# metadb -a -f -c 6 c0t0d0s7
Create the shared storage files on the first OTP host only.
The first OTP host must be connected to the shared storage.
Log in to the first OTP host as root (su - root).
Type scdidadm -L to determine which disks are visible to all nodes of the clustered OTP system, and choose one as the shared disk for the metaset.
In the following example, d4, d5, d6, and d7 are shared disks; they are displayed as connected to more than one node in the listing.
# /usr/cluster/bin/scdidadm -L
1    otpclient1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d1
2    otpclient1:/dev/rdsk/c2t0d0    /dev/did/rdsk/d2
3    otpclient1:/dev/rdsk/c2t1d0    /dev/did/rdsk/d3
4    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0    /dev/did/rdsk/d4
4    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0    /dev/did/rdsk/d4
5    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0    /dev/did/rdsk/d5
5    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0    /dev/did/rdsk/d5
6    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0    /dev/did/rdsk/d6
6    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0    /dev/did/rdsk/d6
7    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0    /dev/did/rdsk/d7
7    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0    /dev/did/rdsk/d7
8    otpclient2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d8
9    otpclient2:/dev/rdsk/c2t0d0    /dev/did/rdsk/d9
10   otpclient2:/dev/rdsk/c2t1d0    /dev/did/rdsk/d10
Add the OTP hosts to the sps-dg metaset.
Type metaset -s sps-dg -a -h otpclient-1 otpclient-n, where otpclient-1 otpclient-n is the list of OTP hosts, separated by spaces. For example:
# metaset -s sps-dg -a -h otpclient1 otpclient2 otpclient3 \
otpclient4 otpclient5 otpclient6 otpclient7 otpclient8
Only the nodes connected to the shared storage (displayed as such in the scdidadm -L output) should be added to the metaset.
Type metaset -s sps-dg -a shared-disk to add the shared disk to the metaset.
In the following example, the d7 disk is assigned as the shared disk:
# metaset -s sps-dg -a /dev/did/rdsk/d7
Type metainit -s sps-dg d0 1 1 /dev/did/rdsk/d7s0 to create the d0 metadevice on the shared disk.
Type newfs /dev/md/sps-dg/rdsk/d0 to create a file system on the metadevice.
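For example:

# metainit -s sps-dg d0 1 1 /dev/did/rdsk/d7s0
# newfs /dev/md/sps-dg/rdsk/d0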
On a two-host cluster only, set up the mediator hosts for the sps-dg disk group.
Type metaset -s sps-dg -a -m otpclient1 otpclient2, where otpclient1 and otpclient2 are the two OTP hosts, separated by a space.
Only nodes connected to the shared storage (displayed as such in the scdidadm -L output) should be added as mediator hosts.
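For example:

# metaset -s sps-dg -a -m otpclient1 otpclient2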
Type metaset -s sps-dg to verify the mediator host setup.
The following example shows hosts otpclient1 and otpclient2 set up as mediator hosts in a two-host OTP system or pair+N topology cluster:
# metaset

Set name = sps-dg

Host                Owner
  otpclient1        Yes
  otpclient2

Mediator Host(s)    Aliases
  otpclient1
  otpclient2

Drive               Dbase
  d7                Yes
Update the /etc/vfstab file on all OTP hosts.
The following steps must be performed on each clustered OTP host.
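A representative /etc/vfstab entry for the shared metadevice created in the previous procedure, assuming /otp-shared as a hypothetical mount point (substitute the mount point from your plan worksheet); mount at boot is set to no so that the cluster's otp-sps-hastorage-plus resource controls mounting:

/dev/md/sps-dg/dsk/d0   /dev/md/sps-dg/rdsk/d0   /otp-shared   ufs   2   no   -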
If you are performing a command line installation, complete and validate the Open Telecommunications Platform installation as described in the next procedure.
If you are performing a graphical user interface installation, set up the system management and provisioning services as described in To Set Up OTP System Management and Provisioning Services on the First OTP Host.
Log in as root (su - root) on the external OTP installation server.
Rerun the deployOTPMultiNode script with the -cont option.
# /opt/SUNWotp10/CLI/deployOTPMultiNode -cont /var/tmp/inputOTPMultiNode.dat
The deployOTPMultiNode script does the following tasks:
Verifies the OTP High Availability Framework installation and configuration
Sets up the OTP System Management and Application Provisioning Services on the first OTP host
Sets up the OTP System Management and Application Provisioning Services on the additional OTP hosts
Enables High Availability for the OTP Provisioning Service on the first OTP host
Log in as root on the first OTP host and restart the remote agent.
Type /etc/init.d/n1spsagent restart to restart the remote agent. If the remote agent is not restarted, the provisioning service on the first OTP host will not work properly.
Configure and enable failover.
Type /usr/cluster/bin/scrgadm -c -g otp-system-rg -y RG_system=false to set the system property for the otp-system-rg resource group to false.
Type /usr/cluster/bin/scswitch -F -g otp-system-rg to take the resource group offline.
Type the following commands in the sequence shown to disable cluster resources.
/usr/cluster/bin/scswitch -n -j otp-spsms-rs
/usr/cluster/bin/scswitch -n -j otp-spsra-rs
/usr/cluster/bin/scswitch -n -j otp-sps-hastorage-plus
/usr/cluster/bin/scswitch -n -j otp-lhn
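Before continuing, you can verify that the resources are offline; a quick check, assuming the standard Sun Cluster scstat utility is available on the OTP hosts:

# /usr/cluster/bin/scstat -g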
Type /usr/cluster/bin/scswitch -u -g otp-system-rg to put the resource group into the unmanaged state.
Type /usr/cluster/bin/scrgadm -c -j otp-spsra-rs -x Stop_signal="15" to change the Stop_signal property of the remote agent resource to 15.
Type /usr/cluster/bin/scrgadm -c -j otp-spsms-rs -x Stop_signal="15" to change the Stop_signal property of the management service resource to 15.
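For example:

# /usr/cluster/bin/scrgadm -c -j otp-spsra-rs -x Stop_signal="15"
# /usr/cluster/bin/scrgadm -c -j otp-spsms-rs -x Stop_signal="15"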
Type /usr/cluster/bin/scswitch -o -g otp-system-rg to put the resource group into the managed state.
Type /usr/cluster/bin/scswitch -Z -g otp-system-rg to bring the resource group back online.
Type /usr/cluster/bin/scrgadm -c -g otp-system-rg -y RG_system=true to set the system property for the otp-system-rg resource group to true.
This completes the command line installation of the Open Telecommunications Platform on a clustered OTP system.