Sun Open Telecommunications Platform 1.1 Installation and Administration Guide

Chapter 4 Installing the Open Telecommunications Platform For the First Time Using the Command Line

This chapter provides the command-line procedures for installing and configuring the Open Telecommunications Platform 1.1.

The following topics are discussed:

Command-line Installation and Configuration Overview

This section summarizes the high-level tasks that you perform as part of the Open Telecommunications Platform site preparation, installation, configuration, and runtime processes.

The following diagram illustrates the sequence of high-level tasks for site planning, installation, and configuration of the Open Telecommunications Platform software.

Figure 4–1 Open Telecommunications Platform Site Preparation Task Flow

Diagram: Open Telecommunications Platform Site Preparation Task Flow

Open Telecommunications Platform Installation Prerequisites

The following prerequisites must be met before you can install the Open Telecommunications Platform using the command line.

Installing the Open Telecommunications Platform on a Standalone OTP Host

This section provides the procedures for using the command line to install the Open Telecommunications Platform on a standalone OTP host.

Procedure: To Install the Open Telecommunications Platform on a Standalone OTP Host

Before You Begin

Before you begin, review the OTP Plan settings described in Open Telecommunications Platform Plan Worksheets, then print out the Standalone OTP Host Plan Worksheet and fill in the values for the standalone OTP host on which you will install OTP.

  1. Log in as root (su - root) on the external OTP installation server.

  2. Copy the inputOTPSingleNode.dat file to /var/tmp.

    Type the command cp /opt/SUNWotp10/CLI/templates/inputOTPSingleNode.dat /var/tmp
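
    For example:


    # cp /opt/SUNWotp10/CLI/templates/inputOTPSingleNode.dat /var/tmp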

  3. Edit the /var/tmp/inputOTPSingleNode.dat file.

    Specify the values for each keyword as described in Open Telecommunications Platform Plan Worksheets and in the Standalone OTP Host Plan Worksheet.

  4. Run the deployOTPSingleNode script on the external OTP installation server to install OTP on the standalone OTP host.


    # /opt/SUNWotp10/CLI/deployOTPSingleNode /var/tmp/inputOTPSingleNode.dat
    

    The deployOTPSingleNode script does the following tasks:

    • Sets up the OTP High Availability Framework

    • Sets up the OTP System Management and Application Provisioning Services

    • Enables High Availability for the OTP Provisioning Service


    Note –

    The installation log files and input files generated for the plans are stored on the external OTP installation server in the directory /var/tmp/OTP_INSTALL.


  5. Log in as root to the standalone OTP host and restart the remote agent.

    Type /etc/init.d/n1spsagent restart to restart the remote agent. If the remote agent is not restarted, the provisioning service on the standalone OTP host will not work properly.
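
    For example:


    # /etc/init.d/n1spsagent restart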

    This completes installation of the Open Telecommunications Platform on the standalone OTP host.

Installing and Setting Up the Open Telecommunications Platform on a Clustered OTP System

This section provides the procedures for using the command line to install the Open Telecommunications Platform on the OTP hosts in a clustered OTP system.

Installing and configuring OTP on a clustered OTP system consists of the following tasks:

Procedure: To Install the Open Telecommunications Platform on a Clustered OTP System

Before You Begin

Before you begin, review the OTP Plan settings described in Open Telecommunications Platform Plan Worksheets, then print out the Clustered OTP Host Plan Worksheet and fill in the values for the clustered OTP system on which you will install OTP.

  1. Log in as root (su - root) on the external OTP installation server.

  2. Copy the inputOTPMultiNode.dat file to /var/tmp.

    Type the command cp /opt/SUNWotp10/CLI/templates/inputOTPMultiNode.dat /var/tmp
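
    For example:


    # cp /opt/SUNWotp10/CLI/templates/inputOTPMultiNode.dat /var/tmp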

  3. Edit the /var/tmp/inputOTPMultiNode.dat file.

    Specify the values for each keyword as described in Open Telecommunications Platform Plan Worksheets and in the Clustered OTP Host Plan Worksheet.

  4. Run the deployOTPMultiNode script on the external OTP installation server to install OTP on the OTP hosts in the clustered OTP system.


    # /opt/SUNWotp10/CLI/deployOTPMultiNode /var/tmp/inputOTPMultiNode.dat
    

    The deployOTPMultiNode script does the following tasks:

    • Sets up the OTP High Availability Framework on the first OTP host

    • Adds additional OTP hosts to the clustered OTP system

    • Sets up the OTP High Availability Framework on the additional OTP hosts


    Note –

    The installation log files for this procedure and subsequent procedures, and the input files generated for the plans by the procedures are stored on the external OTP installation server in the directory /var/tmp/OTP_INSTALL.


Next Steps

Procedure: To Configure the Quorum Disk on a Two-Host Cluster

If you chose no for Quorum Auto Configuration on a two-host cluster, you must manually select and configure the quorum disk as described in this procedure.


Note –

The following steps apply only to a two-host cluster. If you are setting up a clustered OTP system with three or more hosts, this procedure is optional.


  1. Open a separate terminal window and log in as root to the first OTP host.

  2. Type /usr/cluster/bin/scdidadm -L to display the cluster disk information. For example:


    # /usr/cluster/bin/scdidadm -L
                    1        otpclient1:/dev/rdsk/c0t8d0     /dev/did/rdsk/d1
                    1        otpclient2:/dev/rdsk/c0t8d0     /dev/did/rdsk/d1
                    2        otpclient1:/dev/rdsk/c0t9d0     /dev/did/rdsk/d2
                    2        otpclient2:/dev/rdsk/c0t9d0     /dev/did/rdsk/d2
                    3        otpclient1:/dev/rdsk/c1t0d0     /dev/did/rdsk/d3
                    4        otpclient1:/dev/rdsk/c1t1d0     /dev/did/rdsk/d4
                    5        otpclient2:/dev/rdsk/c1t0d0     /dev/did/rdsk/d5
                    6        otpclient2:/dev/rdsk/c1t1d0     /dev/did/rdsk/d6

    In the above example, disks d1 and d2 are shared by both hosts of the two-host cluster. The quorum disk must be a shared disk.

  3. Configure a quorum disk.

    Type /usr/cluster/bin/scconf -a -q globaldev=shared-disk-ID, where shared-disk-ID is the device ID of a shared disk. For example:


    # /usr/cluster/bin/scconf -a -q globaldev=d1
    
  4. Type /usr/cluster/bin/scconf -c -q reset to reset the two-host cluster to normal mode.
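
    For example:


    # /usr/cluster/bin/scconf -c -q reset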

Next Steps

Create the system shared storage as described in the next procedure.

Procedure: To Create Shared Storage on the Clustered OTP System


Caution –

Set the hard drive variables according to your cluster settings. Failure to do so will result in OTP high availability framework installation failure. The following steps must be performed on each host in your clustered OTP system, including the first OTP host.


  1. Create the shared storage meta database on all hosts in the clustered OTP system.

    The following steps must be performed for each host in the clustered OTP system.

    1. Log in as root (su - root) on the clustered OTP host.

    2. Determine the drive on which root is mounted and the available free space.

      Type prtvtoc `mount | awk '/^\/ / { print $3 }'` to list the hard drive slices and available space.

      For example:


      # prtvtoc `mount | awk '/^\/ / { print $3 }'` 
      * /dev/rdsk/c0t0d0s0 partition map
      *
      * Dimensions:
      *     512 bytes/sector
      *     424 sectors/track
      *      24 tracks/cylinder
      *   10176 sectors/cylinder
      *   14089 cylinders
      *   14087 accessible cylinders
      *
      * Flags:
      *   1: unmountable
      *  10: read-only
      *
      * Unallocated space:
      *       First     Sector    Last
      *       Sector     Count    Sector 
      *    63620352  79728960 143349311
      *
      *                          First     Sector    Last
      * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
             0      2    00    8201856  51205632  59407487   /
             1      3    01          0   8201856   8201855
             2      5    00          0 143349312 143349311
             3      0    00   59407488   2106432  61513919   /globaldevices
             7      0    00   61513920   2106432  63620351
    3. Create the database.

      Type metadb -a -f -c 6 disk-slice, where disk-slice is an available disk slice.

      For example, based on the example in the previous step:


      # metadb -a -f -c 6 c0t0d0s7
      
  2. Create the shared storage files on the first OTP host only.

    The first OTP host must be connected to the shared storage.

    1. Log in to the first OTP host as root (su - root).

    2. Type /usr/cluster/bin/scdidadm -L to determine which disks are visible to all nodes of the clustered OTP system, and choose one to be the shared disk for the metaset.

      In the following example d4, d5, d6, and d7 are shared disks. They are displayed as connected to more than one node in the listing.


      # /usr/cluster/bin/scdidadm -L
      1   otpclient1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d1
      2   otpclient1:/dev/rdsk/c2t0d0    /dev/did/rdsk/d2
      3   otpclient1:/dev/rdsk/c2t1d0    /dev/did/rdsk/d3
      4   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0 /dev/did/rdsk/d4
      4   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0 /dev/did/rdsk/d4
      5   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0 /dev/did/rdsk/d5
      5   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0 /dev/did/rdsk/d5
      6   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0 /dev/did/rdsk/d6
      6   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0 /dev/did/rdsk/d6
      7   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0 /dev/did/rdsk/d7
      7   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0 /dev/did/rdsk/d7
      8   otpclient2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d8
      9   otpclient2:/dev/rdsk/c2t0d0    /dev/did/rdsk/d9
      10  otpclient2:/dev/rdsk/c2t1d0    /dev/did/rdsk/d10
    3. Add the additional OTP hosts.

      Type metaset -s sps-dg -a -h otpclient-1 otpclient-n, where otpclient-1 otpclient-n is a space-separated list of the OTP hosts. For example:


      # metaset -s sps-dg -a -h otpclient1 otpclient2 otpclient3 \
        otpclient4 otpclient5 otpclient6 otpclient7 otpclient8
      

      Caution –

      Only the nodes connected to the shared storage (displayed as such in the scdidadm -L output) should be added to the metaset.


    4. Type metaset -s sps-dg -a shared-disk to add the shared disk to the metaset.

      In the following example, the d7 disk is assigned as the shared disk:


      # metaset -s sps-dg -a /dev/did/rdsk/d7
      
    5. Type metainit -s sps-dg d0 1 1 /dev/did/rdsk/d7s0 to create the d0 metadevice on the shared disk.
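
      For example:


      # metainit -s sps-dg d0 1 1 /dev/did/rdsk/d7s0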

    6. Type newfs /dev/md/sps-dg/rdsk/d0 to create a file system on the d0 metadevice.
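
      For example:


      # newfs /dev/md/sps-dg/rdsk/d0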

    7. On a two-host cluster only, set up the mediator hosts for the sps-dg disk group.

      Type metaset -s sps-dg -a -m otpclient1 otpclient2, where otpclient1 and otpclient2 are the two OTP hosts.
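
      For example:


      # metaset -s sps-dg -a -m otpclient1 otpclient2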


      Caution –

      Only the nodes connected to the shared storage (displayed as such in the scdidadm -L output) should be added to the metaset.


    8. Type metaset -s sps-dg to verify the mediator host setup.

      The following example shows hosts otpclient1 and otpclient2 set up as mediator hosts in a two-host OTP system or a pair + N topology cluster:


      # metaset -s sps-dg
      Set name = sps-dg
      Host                Owner
        otpclient1         Yes
        otpclient2
      Mediator Host(s)    Aliases
        otpclient1
        otpclient2
      Drive               Dbase
        d7                 Yes
  3. Update the /etc/vfstab file on all OTP hosts.

    The following steps must be performed on each clustered OTP host.

    1. Log in to the OTP host as root (su - root).

    2. Update the /etc/vfstab file.

      Type echo /dev/md/sps-dg/dsk/d0 /dev/md/sps-dg/rdsk/d0 /var/otp ufs 2 no global,logging >>/etc/vfstab
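
      For example:


      # echo /dev/md/sps-dg/dsk/d0 /dev/md/sps-dg/rdsk/d0 /var/otp ufs 2 no global,logging >> /etc/vfstab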

    3. Type mkdir -p /var/otp to create the /var/otp mount point.
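
      For example:


      # mkdir -p /var/otp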

Next Steps

Procedure: To Complete and Validate Open Telecommunications Platform Installation

  1. Log in as root (su - root) on the external OTP installation server.

  2. Rerun the deployOTPMultiNode script with the -cont option.


    # /opt/SUNWotp10/CLI/deployOTPMultiNode -cont /var/tmp/inputOTPMultiNode.dat
    

    The deployOTPMultiNode script does the following tasks:

    • Verifies the OTP high availability framework installation and configuration

    • Sets up OTP System Management and Application Provisioning Services on the first OTP host

    • Sets up System Management and Application Provisioning Services on the additional OTP hosts

    • Enables High Availability for the OTP Provisioning Service on the first OTP host

  3. Log in as root on the first OTP host and restart the remote agent.

    Type /etc/init.d/n1spsagent restart to restart the remote agent. If the remote agent is not restarted, the provisioning service on the first OTP host will not work properly.
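
    For example:


    # /etc/init.d/n1spsagent restart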

  4. Configure and enable fail-over.

    1. Type /usr/cluster/bin/scrgadm -c -g otp-system-rg -y RG_system=false to set the system property for the otp-system-rg resource group to false.

    2. Type /usr/cluster/bin/scswitch -F -g otp-system-rg to take the resource group offline.

    3. Type the following commands in the sequence shown to disable cluster resources.

      /usr/cluster/bin/scswitch -n -j otp-spsms-rs

      /usr/cluster/bin/scswitch -n -j otp-spsra-rs

      /usr/cluster/bin/scswitch -n -j otp-sps-hastorage-plus

      /usr/cluster/bin/scswitch -n -j otp-lhn

    4. Type /usr/cluster/bin/scswitch -u -g otp-system-rg to put the resource group into the unmanaged state.

    5. Type /usr/cluster/bin/scrgadm -c -j otp-spsra-rs -x Stop_signal="15" to change the Stop_signal property of the remote agent resource to 15.

    6. Type /usr/cluster/bin/scrgadm -c -j otp-spsms-rs -x Stop_signal="15" to change the Stop_signal property of the management service resource to 15.

    7. Type /usr/cluster/bin/scswitch -o -g otp-system-rg to put the resource group into the managed state.

    8. Type /usr/cluster/bin/scswitch -Z -g otp-system-rg to bring the resource group back online.

    9. Type /usr/cluster/bin/scrgadm -c -g otp-system-rg -y RG_system=true to set the system property for the otp-system-rg resource group to true.
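
    For reference, the complete sequence of commands for this step, in the order shown in the preceding sub-steps, is:


    # /usr/cluster/bin/scrgadm -c -g otp-system-rg -y RG_system=false
    # /usr/cluster/bin/scswitch -F -g otp-system-rg
    # /usr/cluster/bin/scswitch -n -j otp-spsms-rs
    # /usr/cluster/bin/scswitch -n -j otp-spsra-rs
    # /usr/cluster/bin/scswitch -n -j otp-sps-hastorage-plus
    # /usr/cluster/bin/scswitch -n -j otp-lhn
    # /usr/cluster/bin/scswitch -u -g otp-system-rg
    # /usr/cluster/bin/scrgadm -c -j otp-spsra-rs -x Stop_signal="15"
    # /usr/cluster/bin/scrgadm -c -j otp-spsms-rs -x Stop_signal="15"
    # /usr/cluster/bin/scswitch -o -g otp-system-rg
    # /usr/cluster/bin/scswitch -Z -g otp-system-rg
    # /usr/cluster/bin/scrgadm -c -g otp-system-rg -y RG_system=true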

    This completes the command line installation of the Open Telecommunications Platform on a clustered OTP system.