Sun Open Telecommunications Platform 2.0 Developer's Guide

Procedure: To Install Sun OTP on a Clustered System

Before You Begin

Make sure that you complete the tasks described in Prerequisite Tasks for Sun OTP Installation.

  1. Log in as root (su - root) to the self-contained Sun OTP provisioning server.

  2. Copy the input_otp.dat file to a local non-temporary directory.

    cp /opt/SUNWotp/cli/templates/input_otp.dat /export/

  3. Edit the /export/input_otp.dat file.

    Type the values for the appropriate plan variables. Refer to Appendix A, Sun OTP Plan Worksheet, to determine the values for these variables.
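    For orientation, a plan-variable entry in this file can be assumed to follow a key=value form; the excerpt below is a hypothetical sketch, and the only variable name taken from this guide is quorumAutoConfiguration (see the See Also section). Always take the real variable names and values from Appendix A, Sun OTP Plan Worksheet.

    ```shell
    # Hypothetical excerpt from /export/input_otp.dat, assuming a
    # key=value format. Consult Appendix A for the actual variables.
    quorumAutoConfiguration=no
    ```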

  4. Set up the Sun OTP configuration on all the Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i S -f /export/input_otp.dat -o "-P passwordfile"

    This command sets the Sun OTP deployment parameters and validates the parameters provided in the input_otp.dat file.

  5. Install the OS patches on all the Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i P -f /export/input_otp.dat

    When the command completes, wait for the Sun OTP hosts to boot into multi-user mode.
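    One way to confirm that a host has reached multi-user mode is to check its run level with the standard Solaris who command; this verification step is a suggestion, not part of the documented procedure.

    ```shell
    # On each Sun OTP host, display the current run level.
    # Run level 3 is multi-user mode with network services enabled.
    who -r
    ```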

  6. Install and configure the Sun OTP high availability service in the global zone on the first Sun OTP host.

    /opt/SUNWotp/cli/deploy_otp -i a -f /export/input_otp.dat -o "-N first"

    When the command completes, wait for the Sun OTP host to reboot completely and then type the following command:

    /opt/SUNWotp/cli/deploy_otp -c a -f /export/input_otp.dat -o "-N first"

  7. Install and configure the Sun OTP high availability service in the global zone on the additional Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i a -f /export/input_otp.dat -o "-N additional"

    When the command completes, wait for the Sun OTP hosts to reboot completely and then type the following command:

    /opt/SUNWotp/cli/deploy_otp -c a -f /export/input_otp.dat -o "-N additional"

  8. (Applicable for zones) Create and configure a non-global zone on all the Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i z -f /export/input_otp.dat

    This command reads the zone-related parameters from the input_otp.dat file and creates a non-global zone. It also installs the remote agent in the non-global zone on all the Sun OTP hosts.

  9. (Applicable for zones) Configure SSH for the remote agent in the newly created non-global zone on all the Sun OTP hosts.

    1. Log in to the non-global zone on the Sun OTP host.

      zlogin zonename

      Where zonename is the name of the non-global zone.

    2. Set a password for the n1spsotp user in the global zone on the Sun OTP host.

    3. Log in as the spsotp user (su - spsotp) on the Sun OTP provisioning server.

    4. Append the SSH public key of the spsotp user from the provisioning server to the authorized SSH keys file located in the home directory of the n1spsotp user on each Sun OTP host.

      cat /var/otp/.ssh/id_rsa.pub | ssh n1spsotp@zonehostname "tee >> /export/home/n1spsotp/.ssh/authorized_keys2"

      where zonehostname is the host name of the non-global zone on the Sun OTP host.
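      After appending the key, you can confirm that the spsotp user reaches the zone without a password prompt. This check is a suggested addition, not part of the documented procedure.

      ```shell
      # From the provisioning server, as the spsotp user. Replace
      # zonehostname with the zone host name used above. If key-based
      # authentication is set up correctly, this prints the zone's host
      # name without prompting for a password.
      ssh n1spsotp@zonehostname hostname
      ```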

  10. Create shared storage on the clustered Sun OTP system.

    The shared storage is used for high availability of the Sun OTP system management service and the Sun OTP application provisioning service. These services are installed on the shared storage. If one host fails, the shared storage is mounted on the other host and the services are restarted there.

    The shared storage will contain the otp-system-rg resource group.

    /opt/SUNWotp/cli/deploy_otp -i d -f /export/input_otp.dat -o "-D did -L hostlist -G diskgroup -M mountpoint"

    Where did is a valid shared DID (device identifier). For example, d5.

    hostlist is the list of all Sun OTP hosts connected to the shared storage. Separate the host names with a colon. For example, hostname1:hostname2.

    diskgroup is the disk group. For example, sm-dg.

    mountpoint is the mount point.

    To determine the shared DID, type the following command:

    /usr/cluster/bin/cldev list -v

    Choose a DID that is shared among the Sun OTP hosts.
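    Putting the example values from this step together, a complete invocation might look like the following sketch. The mount point /otp is a hypothetical placeholder; substitute the DID, host names, disk group, and mount point for your own cluster.

    ```shell
    # Example only: d5, hostname1:hostname2, and sm-dg are the sample
    # values from this step; /otp is a hypothetical mount point.
    /opt/SUNWotp/cli/deploy_otp -i d -f /export/input_otp.dat \
        -o "-D d5 -L hostname1:hostname2 -G sm-dg -M /otp"
    ```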

  11. Install and configure the Sun OTP system management service in the global zone on all the Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i m -f /export/input_otp.dat

  12. Install and configure the Sun OTP application provisioning service in the global zone on all the Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i p -f /export/input_otp.dat

  13. Install and configure the Sun OTP security service on all the Sun OTP hosts.

    /opt/SUNWotp/cli/deploy_otp -i s -f /export/input_otp.dat

  14. Configure and enable high availability for Sun OTP services on the first Sun OTP host.

    /opt/SUNWotp/cli/deploy_otp -c h -f /export/input_otp.dat

    This command creates and starts resource groups for Sun OTP system management service, Sun OTP application provisioning service, and Sun OTP security service. This command also configures and starts master-to-master replication (MMR).


    Note –

    The self-contained Sun OTP provisioning server uses a specific logical host name and IP address that are defined at the beginning of the Sun OTP installation. To make the Sun OTP application provisioning service highly available, that logical host name is released when the Configure and Enable HA service plan completes successfully. From that point on, the Sun OTP application provisioning service is accessible through the Management and Provisioning logical host name and IP address.


  15. Install Web SSO.

    /opt/SUNWotp/cli/deploy_otp -i o -f /export/input_otp.dat


    Note –

    Monitor the /var/OTP/SUNWotp-debug.log file to check whether the otp-system-rg resource group has been restarted. If it has not, bring it online manually by typing the following command on any host in the cluster.

    /usr/cluster/bin/clrg online otp-system-rg
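    To confirm that the resource group came back online, you can use the standard Sun Cluster status subcommand; this check is a suggestion beyond the documented steps.

    ```shell
    # Show the current state of the otp-system-rg resource group
    # on each cluster node.
    /usr/cluster/bin/clrg status otp-system-rg

    # Optionally, follow the debug log while waiting for the restart.
    tail -f /var/OTP/SUNWotp-debug.log
    ```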


    The installation log files, input files generated for the plans, installation registry information, and the debug log files are stored in the /var/OTP directory.

See Also

On a two-host cluster, if you chose no for quorumAutoConfiguration during variable set creation, you must manually select and configure the quorum disk as described in To Configure the Quorum Disk on a Two-Host Cluster.