Sun Open Telecommunications Platform 1.0 Installation and Administration Guide

Installing the Open Telecommunications Platform On A Clustered OTP System

Graphical user interface installation and setup of the Open Telecommunications Platform on a clustered OTP system consists of the following steps:


Note –

Refer to the OTP System Plan Settings Descriptions and the Clustered OTP Host Plan Worksheet for information needed during installation.


Procedure: To Set Up the OTP High Availability Framework on the First OTP Host

Availability services must first be set up on the first OTP host in your clustered OTP system.

Before You Begin
  1. Open a Web browser and log in to the external OTP installation server service provisioning service.

    Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified name of the external OTP installation server.

  2. Click OEM OTP to display the Open Telecommunications Platform home page.

  3. Click Step 1. OTP High Availability Framework on First Host: Install and Configure.

    The edit availability plan page appears.

    Figure 5–13 Clustered OTP Host Edit Availability Plan Page: System Management Server


  4. Click run.

    The Availability Plan Variables page appears. Scroll the page down to view the variables:

    Figure 5–14 Clustered OTP Host Availability Plan Variables Page: System Management Server Variables


    Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.


    Caution –

    Set the limit overall running time of plan and limit running time of native calls fields to 2 hours each.


  5. Click run plan (includes preflight).

    The page refreshes, and a progress bar is displayed during the provisioning process.

    The provisioning process:

    • Installs required Solaris OS patches

    • Installs the OTP high availability framework

    • Configures the first OTP host

    • Reboots the first OTP host

    • Verifies the first OTP host configuration

Next Steps

Set up availability services on the additional OTP hosts as described in the next procedure.

Procedure: To Set Up the OTP High Availability Framework on the Additional OTP Hosts

The OTP high availability framework must be set up on each host in your clustered OTP system. Perform the following steps on each host.

Before You Begin

The OTP high availability framework must be set up on the first OTP host as described in the previous procedure.

  1. Open a Web browser and log in to the external OTP installation server service provisioning service.

    Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified name of the external OTP installation server.

  2. Click OEM OTP to display the Open Telecommunications Platform home page.

  3. Click Step 2. OTP High Availability Framework on Additional Hosts: Install and Configure.

    The edit availability plan page appears.

    Figure 5–15 Clustered OTP Hosts Edit Availability Plan Page


  4. Click run.

    The Availability Plan Variables page appears. Scroll the page down to view the variables:

    Figure 5–16 Clustered OTP Hosts Availability Plan Variables Page


    Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.


    Caution –

    Set the limit overall running time of plan and limit running time of native calls fields to 2 hours each.


  5. Click run plan (includes preflight).

    The page refreshes, and a progress bar is displayed during the provisioning process.

    The provisioning process:

    • Installs required Solaris OS patches

    • Installs the OTP high availability framework

    • Configures the clustered OTP host

    • Reboots the clustered OTP host

    • Verifies the clustered OTP host configuration

  6. If you chose no for Quorum Auto Configuration on a two-host cluster, you must manually select and configure the quorum disk as follows.


    Note –

    The following sub-steps apply only to a two-host cluster. If you are setting up the OTP high availability framework on a clustered OTP system with three or more hosts, this step is optional.


    1. Open a separate terminal window and log in as root to the first OTP host.

    2. Type /usr/cluster/bin/scdidadm -L to display the cluster disk information. For example:


      # /usr/cluster/bin/scdidadm -L
                      1        otpclient1:/dev/rdsk/c0t8d0     /dev/did/rdsk/d1
                      1        otpclient2:/dev/rdsk/c0t8d0     /dev/did/rdsk/d1
                      2        otpclient1:/dev/rdsk/c0t9d0     /dev/did/rdsk/d2
                      2        otpclient2:/dev/rdsk/c0t9d0     /dev/did/rdsk/d2
                      3        otpclient1:/dev/rdsk/c1t0d0     /dev/did/rdsk/d3
                      4        otpclient1:/dev/rdsk/c1t1d0     /dev/did/rdsk/d4
                      5        otpclient2:/dev/rdsk/c1t0d0     /dev/did/rdsk/d5
                      6        otpclient2:/dev/rdsk/c1t1d0     /dev/did/rdsk/d6

      In the above example, disks d1 and d2 are shared by both hosts of the two-host cluster. The quorum disk must be a shared disk.

    3. Configure a quorum disk.

      Type /usr/cluster/bin/scconf -a -q globaldev=shared-disk-ID, where shared-disk-ID is the DID name of a shared disk. For example:


      # /usr/cluster/bin/scconf -a -q globaldev=d1
      
    4. Type /usr/cluster/bin/scconf -c -q reset to reset the two-host cluster to normal mode.
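The quorum-disk choice in the sub-steps above hinges on spotting which DID devices appear on more than one host. As a rough illustration (not part of the documented procedure), the selection can be automated with awk. The here-document below stands in for live scdidadm output and is abbreviated from the example above; on a clustered OTP host you would pipe in /usr/cluster/bin/scdidadm -L instead.

```shell
#!/bin/sh
# Sketch: list DID devices reported on more than one host. Only such
# shared devices are valid quorum-disk candidates.
shared=$(awk '
    {
        split($2, path, ":")                 # $2 is host:/dev/rdsk/...
        hosts[$3] = hosts[$3] " " path[1]    # $3 is /dev/did/rdsk/dN
        count[$3]++
    }
    END {
        for (d in count)
            if (count[d] > 1)
                print d " shared by" hosts[d]
    }' <<'EOF'
1   otpclient1:/dev/rdsk/c0t8d0   /dev/did/rdsk/d1
1   otpclient2:/dev/rdsk/c0t8d0   /dev/did/rdsk/d1
2   otpclient1:/dev/rdsk/c0t9d0   /dev/did/rdsk/d2
2   otpclient2:/dev/rdsk/c0t9d0   /dev/did/rdsk/d2
3   otpclient1:/dev/rdsk/c1t0d0   /dev/did/rdsk/d3
EOF
)
echo "$shared"
```

For the sample input this reports d1 and d2 as shared, and omits the local disk d3, matching the reading given in the sub-steps above.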

Procedure: To Create Shared Storage on the Clustered OTP System


Caution –

Set the hard drive variables according to your cluster settings. Failure to do so will result in OTP high availability framework installation failure.


Before You Begin

The OTP high availability framework must be set up on all OTP hosts in the clustered OTP system.

  1. Create the shared storage meta database on all clustered OTP hosts.

    The following steps must be performed for each clustered OTP host.

    1. Log in to the clustered OTP host as root (su - root).

    2. Determine the drive on which root is mounted and the available free space.

      Type prtvtoc `mount | awk '/^\/ / { print $3 }'` to list the hard drive slices and available space.

      For example:


      # prtvtoc `mount | awk '/^\/ / { print $3 }'` 
      * /dev/rdsk/c0t0d0s0 partition map
      *
      * Dimensions:
      *     512 bytes/sector
      *     424 sectors/track
      *      24 tracks/cylinder
      *   10176 sectors/cylinder
      *   14089 cylinders
      *   14087 accessible cylinders
      *
      * Flags:
      *   1: unmountable
      *  10: read-only
      *
      * Unallocated space:
      *       First     Sector    Last
      *       Sector     Count    Sector 
      *    63620352  79728960 143349311
      *
      *                          First     Sector    Last
      * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
             0      2    00    8201856  51205632  59407487   /
             1      3    01          0   8201856   8201855
             2      5    00          0 143349312 143349311
             3      0    00   59407488   2106432  61513919   /globaldevices
             7      0    00   61513920   2106432  63620351
    3. Create the database.

      Type metadb -a -f -c 6 disk-slice, where disk-slice is an available slice.

      For example, based on the example in the previous step:


      # metadb -a -f -c 6 c0t0d0s7
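The Unallocated space section of the prtvtoc output is what tells you whether a spare slice can hold the state database replicas. As an illustrative sketch (fed the sample output above through a here-document rather than a live prtvtoc run), the free sector count can be extracted and converted to megabytes:

```shell
#!/bin/sh
# Sketch: extract the unallocated sector count from prtvtoc output and
# convert it to megabytes (512-byte sectors). On a live host, pipe in:
# prtvtoc `mount | awk '/^\/ / { print $3 }'`
free=$(awk '
    /Unallocated space/ { in_unalloc = 1; next }
    in_unalloc && $2 ~ /^[0-9]+$/ {
        printf "%d sectors (%d MB) unallocated\n", $3, $3 * 512 / 1048576
        in_unalloc = 0
    }' <<'EOF'
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*    63620352  79728960 143349311
EOF
)
echo "$free"
```

The two header lines after "Unallocated space" are skipped because their second field is not numeric; only the data line is converted.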
      
  2. Create the shared storage file system on the first OTP host only.

    The first OTP host must be connected to the shared storage.

    1. Log in to the first OTP host as root (su - root).

    2. Type the scdidadm command to determine which disks are visible to all nodes of the cluster, and choose one as the shared disk for the metaset.

      In the following example d4, d5, d6, and d7 are shared disks.


      # /usr/cluster/bin/scdidadm -L
      1   otpclient1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d1
      2   otpclient1:/dev/rdsk/c2t0d0    /dev/did/rdsk/d2
      3   otpclient1:/dev/rdsk/c2t1d0    /dev/did/rdsk/d3
      4   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0 /dev/did/rdsk/d4
      4   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0 /dev/did/rdsk/d4
      5   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0 /dev/did/rdsk/d5
      5   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0 /dev/did/rdsk/d5
      6   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0 /dev/did/rdsk/d6
      6   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0 /dev/did/rdsk/d6
      7   otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0 /dev/did/rdsk/d7
      7   otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0 /dev/did/rdsk/d7
      8   otpclient2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d8
      9   otpclient2:/dev/rdsk/c2t0d0    /dev/did/rdsk/d9
      10  otpclient2:/dev/rdsk/c2t1d0    /dev/did/rdsk/d10
    3. Add the additional OTP hosts.

      Type metaset -s sps-dg -a -h otpclient1 otpclientN, where otpclient1 otpclientN is the space-separated list of OTP hosts. For example, assuming that otpclient1 is the first OTP host:


      # metaset -s sps-dg -a -h otpclient2 otpclient3 otpclient4 otpclient5 \
      otpclient6 otpclient7 otpclient8
      
    4. Type metaset -s sps-dg -a shared-disk to add the shared disk to the metaset.

      In the following example, the d7 shared disk is added:


      # metaset -s sps-dg -a /dev/did/rdsk/d7
      
    5. Type metainit -s sps-dg d0 1 1 /dev/did/rdsk/d7s0 to create the d0 metadevice on the shared disk.

    6. Type newfs /dev/md/sps-dg/rdsk/d0 to create a file system on the d0 metadevice.

    7. On a two-host cluster only, set up the mediator strings for the sps-dg disk group.

      Type metaset -s sps-dg -a -m otpclient1 otpclientN, where otpclient1 otpclientN is the space-separated list of mediator hosts. For example:


      # metaset -s sps-dg -a -m otpclient1 otpclient2 otpclient3 otpclient4 otpclient5 \
      otpclient6 otpclient7 otpclient8
      
    8. Type metaset to verify the mediator host setup.

      The following example shows hosts otpclient1 and otpclient2 set up as mediator hosts.


      # metaset
      Set name = sps-dg, Set number = 1
      Host                Owner
        otpclient1         Yes
        otpclient2
      Mediator Host(s)    Aliases
        otpclient1
        otpclient2
      Driv Dbase
      d4   Yes
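The mediator list in the metaset report can also be checked mechanically. A small sketch, fed a canned two-host report through a here-document (on a live host you would pipe in the output of the metaset command itself):

```shell
#!/bin/sh
# Sketch: extract the mediator hosts from `metaset` output. The section
# runs from the "Mediator Host(s)" header to the "Driv" table header.
mediators=$(awk '
    /^Mediator Host/ { in_med = 1; next }
    /^Driv/          { in_med = 0 }
    in_med           { print $1 }
' <<'EOF'
Set name = sps-dg, Set number = 1
Host                Owner
  otpclient1         Yes
  otpclient2
Mediator Host(s)    Aliases
  otpclient1
  otpclient2
Driv Dbase
d4   Yes
EOF
)
echo "$mediators"
```

For the sample report this prints otpclient1 and otpclient2, one per line, confirming both hosts are registered as mediators.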
  3. Update the /etc/vfstab file on all clustered OTP hosts.

    The following steps must be performed for each host.

    1. Log in to the clustered OTP host as root (su - root).

    2. Update the /etc/vfstab file.

      Type echo "/dev/md/sps-dg/dsk/d0 /dev/md/sps-dg/rdsk/d0 /var/otp ufs 2 no global,logging" >> /etc/vfstab to append the shared storage entry to the file.

    3. Type mkdir -p /var/otp to create the /var/otp mount point.
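Because step 3 is repeated on every clustered OTP host, the vfstab update is easy to apply twice by accident. A defensive variant is sketched below against a scratch file; the helper name add_otp_entry is illustrative, and on a clustered OTP host VFSTAB would be /etc/vfstab.

```shell
#!/bin/sh
# Sketch: append the /var/otp shared-storage entry to vfstab only if no
# /var/otp line is present yet. VFSTAB points at a scratch file here so
# the sketch is safe to run; on a clustered OTP host it is /etc/vfstab.
VFSTAB=./vfstab.test
: > "$VFSTAB"
ENTRY='/dev/md/sps-dg/dsk/d0 /dev/md/sps-dg/rdsk/d0 /var/otp ufs 2 no global,logging'

add_otp_entry() {
    grep -q '/var/otp' "$VFSTAB" || printf '%s\n' "$ENTRY" >> "$VFSTAB"
}

add_otp_entry
add_otp_entry    # second call is a no-op, so reruns are harmless
```

After both calls the scratch file still contains exactly one /var/otp line.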

Next Steps

Set up the system management and provisioning services on the first OTP host as described in the next procedure.

Procedure: To Set Up OTP System Management and Provisioning Services on the First OTP Host

Before You Begin

Shared storage must be set up on the first OTP host as described in the previous procedure.

  1. Open a Web browser and log in to the external OTP installation server service provisioning service.

    Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified name of the external OTP installation server.

  2. Click OEM OTP to display the Open Telecommunications Platform home page.

  3. Click Step 3. OTP System Management and Provisioning Services on First Host: Install and Configure.

    The edit System Management and Application Provisioning plan page appears.

  4. Click run.

    The System Management and Application Provisioning Plan Variables page appears. Scroll the page down to display the variables:

    Figure 5–17 Clustered OTP Host System Management and Application Provisioning Plan Variables Page: First OTP Host


    Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.


    Caution –

    Set the limit overall running time of plan and limit running time of native calls fields to 2 hours each.


  5. Click run plan (includes preflight).

    The page refreshes, and a progress bar is displayed during the provisioning process.

    The provisioning process:

    • Installs the Web console

    • Applies patches required by the Open Telecommunications Platform

    • Installs the system management agent

    • Installs the system management service

    • Installs the service provisioning service

    • Installs Java patches

    When the provisioning process completes, click done.

Procedure: To Set Up OTP System Management and Provisioning Services on the Additional OTP Hosts

Before You Begin

System management and provisioning services must be set up on the first OTP host as described in the previous procedure.

  1. Open a Web browser and log in to the external OTP installation server service provisioning service.

    Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified name of the external OTP installation server.

  2. Click OEM OTP to display the Open Telecommunications Platform home page.

  3. Click Step 4. OTP System Management and Provisioning Service on Additional Hosts: Install and Configure.

    The edit System Management and Application Provisioning plan page appears.

  4. Click run.

    The System Management and Application Provisioning Plan Variables page appears. Scroll the page down to display the variables:

    Figure 5–18 Clustered OTP Host System Management and Application Provisioning Plan Variables Page: Additional OTP Host


    Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet for this OTP host. Refer to the OTP System Plan Settings Descriptions for information about each variable.


    Caution –

    Set the limit overall running time of plan and limit running time of native calls fields to 2 hours each.


  5. Click run plan (includes preflight).

    The page refreshes, and a progress bar is displayed during the provisioning process.

    The provisioning process:

    • Installs the Web console

    • Applies patches required by the Open Telecommunications Platform

    • Installs the system management agent

    • Installs the system management service

    • Installs the service provisioning service

    • Installs Java patches

    When the provisioning process completes, click done.

Next Steps

Repeat this procedure for the next OTP host in your clustered OTP system.

When you have finished setting up system management and provisioning services on all OTP hosts, enable high availability on the first OTP host as described in the next procedure.

Procedure: To Enable High Availability for the OTP Provisioning Service on the First OTP Host

Before You Begin

System management and provisioning services must be set up on the additional OTP hosts as described in the previous procedure.

  1. Open a Web browser and log in to the external OTP installation server service provisioning service.

    Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified name of the external OTP installation server.

  2. Click OEM OTP to display the Open Telecommunications Platform home page.

  3. Click Step 5. OTP High Availability for Provisioning Service on First Host: Enable beneath Multi Cluster Setup in the central menu.

    The edit High Availability plan page appears.

  4. Click run.

    The High Availability Plan Variables page appears. Scroll the page down to display the variables:

    Figure 5–19 Clustered OTP Host High Availability Plan Variables Page: First OTP Host


    Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.


    Caution –

    Set the limit overall running time of plan and limit running time of native calls fields to 2 hours each.


  5. Click run plan (includes preflight).

    The page refreshes, and a progress bar is displayed during the provisioning process.

    The provisioning process installs and enables the application provisioning service high availability agent.

    When the provisioning process completes, click done.

  6. Log in as root on the first OTP host and restart the remote agent.

    Type /etc/init.d/n1spsagent restart. If the remote agent is not restarted, the service provisioning service on the first OTP host will not work properly.

  7. Configure and enable failover.

    1. Type /usr/cluster/bin/scswitch -F -g otp-system-rg to take the resource group offline.

    2. Type the following commands in the sequence shown to disable cluster resources.

      /usr/cluster/bin/scswitch -n -j otp-spsms-rs

      /usr/cluster/bin/scswitch -n -j otp-spsra-rs

      /usr/cluster/bin/scswitch -n -j otp-sps-hastorage-plus

      /usr/cluster/bin/scswitch -n -j otp-lhn

    3. Type /usr/cluster/bin/scswitch -u -g otp-system-rg to put the resource group into the unmanaged state.

    4. Type /usr/cluster/bin/scrgadm -c -j otp-spsra-rs -x Stop_signal="15" to change the Stop_signal property of the remote agent resource to 15.

    5. Type /usr/cluster/bin/scrgadm -c -j otp-spsms-rs -x Stop_signal="15" to change the Stop_signal property of the management service resource to 15.

    6. Type /usr/cluster/bin/scswitch -o -g otp-system-rg to put the resource group into the managed state.

    7. Type /usr/cluster/bin/scswitch -Z -g otp-system-rg to bring the resource group back online.
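For reference, the Step 7 sequence can be collected into one script. This is a sketch, not part of the documented procedure: the resource and group names are those used above, and the CLUSTER_BIN variable is introduced here only so the sequence can be dry-run against stub commands instead of the real /usr/cluster/bin tools.

```shell
#!/bin/sh
# Sketch of the failover reconfiguration sequence from Step 7.
# CLUSTER_BIN defaults to the real Sun Cluster tools; point it at a
# directory of stub scripts for a dry run.
reconfigure_failover() {
    set -e
    bin=${CLUSTER_BIN:-/usr/cluster/bin}

    "$bin/scswitch" -F -g otp-system-rg    # take the resource group offline
    for rs in otp-spsms-rs otp-spsra-rs otp-sps-hastorage-plus otp-lhn
    do
        "$bin/scswitch" -n -j "$rs"        # disable resources in the documented order
    done
    "$bin/scswitch" -u -g otp-system-rg    # move the group to the unmanaged state
    "$bin/scrgadm" -c -j otp-spsra-rs -x Stop_signal="15"
    "$bin/scrgadm" -c -j otp-spsms-rs -x Stop_signal="15"
    "$bin/scswitch" -o -g otp-system-rg    # return the group to the managed state
    "$bin/scswitch" -Z -g otp-system-rg    # bring the group back online
}
```

On the first OTP host you would run reconfigure_failover as root; set -e makes the function stop at the first command that fails rather than continuing out of order.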

    This completes the Open Telecommunications Platform graphical user interface installation process for a clustered OTP system.