Graphical user interface installation and setup of the Open Telecommunications Platform on a clustered OTP system consists of the following steps:
To Set Up the OTP High Availability Framework on the First OTP Host
To Set Up the OTP High Availability Framework on the Additional OTP Hosts
To Set Up OTP System Management and Provisioning Services on the First OTP Host
To Set Up OTP System Management and Provisioning Services on the Additional OTP Hosts
To Enable High Availability for the OTP Provisioning Service on the First OTP Host
Refer to the OTP System Plan Settings Descriptions and the Clustered OTP Host Plan Worksheet for information needed during installation.
Availability services must first be set up on the first OTP host in your clustered OTP system.
The first OTP host must be connected to shared storage.
The external OTP installation server must be set up and verified as described in To Set Up and Verify the External OTP Installation Server.
The Service Provisioning remote agent must be set up on all hosts in the clustered OTP system as described in To Set Up the Service Provisioning Remote Agent on the Clustered OTP Systems.
All hosts in the clustered OTP system must be added to the external OTP installation server hosts list as described in To Add Hosts to the External OTP Installation Server.
Open a Web browser and log in to the external OTP installation server service provisioning service.
Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified host name of the external OTP installation server.
Click OEM OTP to display the Open Telecommunications Platform home page.
Click Step 1. OTP High Availability Framework on First Host: Install and Configure.
The edit availability plan page appears.
Click run.
The Availability Plan Variables page appears. Scroll the page down to view the variables:
Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.
Set limit overall running time of plan and limit running time of native calls to 2 hours each.
Click run plan (includes preflight).
The page refreshes, and a progress bar is displayed during the provisioning process.
The provisioning process:
Installs required Solaris OS patches
Installs the OTP high availability framework
Configures the first OTP host
Reboots the first OTP host
Verifies the first OTP host configuration
Set up availability services on the additional OTP hosts as described in the next procedure.
The OTP high availability framework must be set up on each host in your clustered OTP system. Perform the following steps on each host.
The OTP high availability framework must be set up on the First OTP Host as described in the previous procedure.
Open a Web browser and log in to the external OTP installation server service provisioning service.
Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified host name of the external OTP installation server.
Click OEM OTP to display the Open Telecommunications Platform home page.
Click Step 2. OTP High Availability Framework on Additional Hosts: Install and Configure.
The edit availability plan page appears.
Click run.
The Availability Plan Variables page appears. Scroll the page down to view the variables:
Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.
Set limit overall running time of plan and limit running time of native calls to 2 hours each.
Click run plan (includes preflight).
The page refreshes, and a progress bar is displayed during the provisioning process.
The provisioning process:
Installs required Solaris OS patches
Installs the OTP high availability framework
Configures the clustered OTP host
Reboots the clustered OTP host
Verifies the clustered OTP host configuration
If you chose no for Quorum Auto Configuration on a two-host cluster, you must manually select and configure the quorum disk as follows.
The following sub-steps apply only to a two-host cluster. If you are setting up the OTP high availability framework on a clustered OTP system with three or more hosts, this step is optional.
Open a separate terminal window and log in as root to the first OTP host.
Type /usr/cluster/bin/scdidadm -L to display the cluster disk information. For example:
# /usr/cluster/bin/scdidadm -L
1    otpclient1:/dev/rdsk/c0t8d0    /dev/did/rdsk/d1
1    otpclient2:/dev/rdsk/c0t8d0    /dev/did/rdsk/d1
2    otpclient1:/dev/rdsk/c0t9d0    /dev/did/rdsk/d2
2    otpclient2:/dev/rdsk/c0t9d0    /dev/did/rdsk/d2
3    otpclient1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d3
4    otpclient1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d4
5    otpclient2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d5
6    otpclient2:/dev/rdsk/c1t1d0    /dev/did/rdsk/d6
In the above example, disks d1 and d2 are shared by both hosts of the two-host cluster. The quorum disk must be a shared disk.
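The shared-disk check can also be scripted. The following sketch is an assumption, not part of the OTP tooling: it parses scdidadm -L style output and prints the DID devices that appear on more than one host. The here-document reproduces the sample output above; on a live cluster you would pipe the real output of /usr/cluster/bin/scdidadm -L into the awk program instead.

```shell
# Hypothetical helper: list DID devices visible from more than one host.
# On a live cluster, replace the here-document with:
#   /usr/cluster/bin/scdidadm -L | awk '...' | sort
shared_disks=$(awk '
  { split($2, path, ":")                 # host:device -> host name
    hosts[$3] = hosts[$3] " " path[1] }  # collect hosts per DID device
  END {
    for (d in hosts)
      if (split(hosts[d], h, " ") > 1)   # visible from more than one host
        print d }
' <<'EOF' | sort
1 otpclient1:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
1 otpclient2:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 otpclient1:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
2 otpclient2:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 otpclient1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d3
4 otpclient1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d4
5 otpclient2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 otpclient2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d6
EOF
)
echo "$shared_disks"
```

With the sample data, only d1 and d2 qualify, matching the discussion above.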
Configure a quorum disk.
Type /usr/cluster/bin/scconf -a -q globaldev=shared-disk-ID, where shared-disk-ID is the ID of a shared disk. For example:
# /usr/cluster/bin/scconf -a -q globaldev=d1
Type /usr/cluster/bin/scconf -c -q reset to reset the two-host cluster to normal mode.
Set the hard drive variables according to your cluster settings. If you do not, the OTP high availability framework installation fails.
The OTP high availability framework must be set up on all OTP hosts in the clustered OTP system.
Create the shared storage meta database on all clustered OTP hosts.
The following steps must be performed for each clustered OTP host.
Log in to the clustered OTP host as root (su - root).
Determine the drive on which root is mounted and the available free space.
Type prtvtoc `mount | awk '/^\/ / { print $3 }'` to list the hard drive slices and available space.
For example:
# prtvtoc `mount | awk '/^\/ / { print $3 }'`
* /dev/rdsk/c0t0d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     424 sectors/track
*      24 tracks/cylinder
*   10176 sectors/cylinder
*   14089 cylinders
*   14087 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*     63620352  79728960  143349311
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00    8201856  51205632  59407487   /
       1      3    01          0   8201856   8201855
       2      5    00          0 143349312 143349311
       3      0    00   59407488   2106432  61513919   /globaldevices
       7      0    00   61513920   2106432  63620351
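Picking the metadb slice from this table can be checked mechanically. The sketch below is hypothetical, not part of OTP: it scans prtvtoc-style partition lines for slices that have no mount directory and are neither the swap slice (tag 3) nor the whole-disk backup slice (tag 5). With the sample table it reports slice 7. On a live host, feed it the real prtvtoc output instead of the here-document.

```shell
# Hypothetical helper: find slices with no mount directory that are
# neither swap (tag 3) nor the backup slice (tag 5).
# On a live host:  prtvtoc `mount | awk '/^\/ / { print $3 }'` | awk '...'
candidates=$(awk '
  /^\*/ { next }                              # skip prtvtoc comment lines
  NF == 6 && $2 != 3 && $2 != 5 { print $1 }  # 6 fields => no mount directory
' <<'EOF'
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00    8201856  51205632  59407487   /
       1      3    01          0   8201856   8201855
       2      5    00          0 143349312 143349311
       3      0    00   59407488   2106432  61513919   /globaldevices
       7      0    00   61513920   2106432  63620351
EOF
)
echo "candidate slice(s) for the metadb: $candidates"
```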
Create the database.
Type metadb -a -f -c 6 disk-slice, where disk-slice is an available disk slice.
For example, based on the example in the previous step:
# metadb -a -f -c 6 c0t0d0s7
Create the shared storage files only on the first OTP host.
The first OTP host must be connected to the shared storage.
Log in to the first OTP host as root (su - root).
Type the scdidadm command to determine which disks are visible to all nodes of the cluster, and choose one to add to the metaset as the shared disk.
In the following example d4, d5, d6, and d7 are shared disks.
# /usr/cluster/bin/scdidadm -L
1    otpclient1:/dev/rdsk/c1t0d0                               /dev/did/rdsk/d1
2    otpclient1:/dev/rdsk/c2t0d0                               /dev/did/rdsk/d2
3    otpclient1:/dev/rdsk/c2t1d0                               /dev/did/rdsk/d3
4    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0 /dev/did/rdsk/d4
4    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE14d0 /dev/did/rdsk/d4
5    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0 /dev/did/rdsk/d5
5    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE13d0 /dev/did/rdsk/d5
6    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0 /dev/did/rdsk/d6
6    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE12d0 /dev/did/rdsk/d6
7    otpclient1:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0 /dev/did/rdsk/d7
7    otpclient2:/dev/rdsk/c3t600C0FF000000000092C187A9755BE11d0 /dev/did/rdsk/d7
8    otpclient2:/dev/rdsk/c1t0d0                               /dev/did/rdsk/d8
9    otpclient2:/dev/rdsk/c2t0d0                               /dev/did/rdsk/d9
10   otpclient2:/dev/rdsk/c2t1d0                               /dev/did/rdsk/d10
Add the additional OTP hosts.
Type metaset -s sps-dg -a -h otpclient1 otpclientN, where otpclient1 otpclientN is the list of OTP hosts separated by spaces. For example, assuming that otpclient1 is the first OTP host:
# metaset -s sps-dg -a -h otpclient2 otpclient3 otpclient4 otpclient5 \
otpclient6 otpclient7 otpclient8
Type metaset -s sps-dg -a shared-disk to add the shared disk to the metaset.
In the following example, the d7 shared disk is added:
# metaset -s sps-dg -a /dev/did/rdsk/d7
Type metainit -s sps-dg d0 1 1 /dev/did/rdsk/d7s0 to create the d0 metadevice on the shared disk.
Type newfs /dev/md/sps-dg/rdsk/d0 to create a file system on the d0 metadevice.
On a two-host cluster only, set up the mediator strings for the sps-dg disk group.
Type metaset -s sps-dg -a -m otpclient1 otpclientN, where otpclient1 otpclientN is the list of OTP hosts separated by spaces. For example:
# metaset -s sps-dg -a -m otpclient1 otpclient2 otpclient3 otpclient4 otpclient5 \
otpclient6 otpclient7 otpclient8
Type metaset to verify the mediator host setup.
The following example shows hosts otpclient1 and otpclient2 set up as mediator hosts.
# metaset

Set name = sps-dg, Set number = 1

Host                Owner
  otpclient1        Yes
  otpclient2

Mediator Host(s)    Aliases
  otpclient1
  otpclient2

Drive    Dbase
d4       Yes
Update the /etc/vfstab file on all clustered OTP hosts.
The following steps must be performed for each host.
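The procedure does not show the /etc/vfstab entry itself. The fragment below is a hedged example of the kind of line such a step might add, assuming the metadevice created above (d0 in diskset sps-dg) and a hypothetical mount point /otp/sps; mount at boot is no because the file system is brought online by the cluster framework. Verify the actual mount point and options against your Clustered OTP Host Plan Worksheet.

```
# Hypothetical /etc/vfstab entry -- the /otp/sps mount point is an assumption
#device to mount        device to fsck           mount point  FS type  fsck pass  mount at boot  options
/dev/md/sps-dg/dsk/d0   /dev/md/sps-dg/rdsk/d0   /otp/sps     ufs      2          no             logging
```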
Set up the system management and provisioning services on the first OTP host as described in the next procedure.
Shared storage must be set up on the first OTP host as described in the previous procedure.
Open a Web browser and log in to the external OTP installation server service provisioning service.
Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified host name of the external OTP installation server.
Click OEM OTP to display the Open Telecommunications Platform home page.
Click Step 3. OTP System Management and Provisioning Services on First Host: Install and Configure.
The edit System Management and Application Provisioning plan page appears.
Click run.
The System Management and Application Provisioning Plan Variables page appears. Scroll the page down to display the variables.
Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.
Set limit overall running time of plan and limit running time of native calls to 2 hours each.
Click run plan (includes preflight).
The page refreshes, and a progress bar is displayed during the provisioning process.
The provisioning process:
Installs the Web console
Applies patches required by the Open Telecommunications Platform
Installs the system management agent
Installs the system management service
Installs the service provisioning service
Installs Java patches
When the provisioning process completes, click done.
System management and provisioning services must be set up on the first OTP host as described in the previous procedure.
Open a Web browser and log in to the external OTP installation server service provisioning service.
Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified host name of the external OTP installation server.
Click OEM OTP to display the Open Telecommunications Platform home page.
Click Step 4. OTP System Management and Provisioning Service on Additional Hosts: Install and Configure.
The edit System Management and Application Provisioning plan page appears.
Click run.
The System Management and Application Provisioning Plan Variables page appears. Scroll the page down to display the variables.
Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet for this OTP host. Refer to the OTP System Plan Settings Descriptions for information about each variable.
Set limit overall running time of plan and limit running time of native calls to 2 hours each.
Click run plan (includes preflight).
The page refreshes, and a progress bar is displayed during the provisioning process.
The provisioning process:
Installs the Web console
Applies patches required by the Open Telecommunications Platform
Installs the system management agent
Installs the system management service
Installs the service provisioning service
Installs Java patches
When the provisioning process completes, click done.
Repeat this procedure for the next OTP host in your clustered OTP system.
When you have finished setting up system management and provisioning services on all OTP hosts, enable high availability on the first OTP host as described in the next procedure.
System management and provisioning services must be set up on the additional OTP hosts as described in the previous procedure.
Open a Web browser and log in to the external OTP installation server service provisioning service.
Go to the URL http://install-server:9090, where install-server is either the IP address or the fully qualified host name of the external OTP installation server.
Click OEM OTP to display the Open Telecommunications Platform home page.
Click Step 5. OTP High Availability for Provisioning Service on First Host: Enable beneath Multi Cluster Setup in the central menu.
The edit High Availability plan page appears.
Click run.
The High Availability Plan Variables page appears. Scroll the page down to display the variables.
Type the information in the plan variables fields according to your Clustered OTP Host Plan Worksheet. Refer to the OTP System Plan Settings Descriptions for information about each variable.
Set limit overall running time of plan and limit running time of native calls to 2 hours each.
Click run plan (includes preflight).
The page refreshes, and a progress bar is displayed during the provisioning process.
The provisioning process installs and enables the application provisioning service high availability agent.
When the provisioning process completes, click done.
Log in as root on the first OTP host and restart the remote agent.
Type /etc/init.d/n1spsagent restart to restart the remote agent. If the remote agent is not restarted, the service provisioning service on the first OTP host will not work properly.
Configure and enable fail-over.
Type /usr/cluster/bin/scswitch -F -g otp-system-rg to take the resource group offline.
Type the following commands in the sequence shown to disable cluster resources.
/usr/cluster/bin/scswitch -n -j otp-spsms-rs
/usr/cluster/bin/scswitch -n -j otp-spsra-rs
/usr/cluster/bin/scswitch -n -j otp-sps-hastorage-plus
/usr/cluster/bin/scswitch -n -j otp-lhn
Type /usr/cluster/bin/scswitch -u -g otp-system-rg to put the resource group into the unmanaged state.
Type /usr/cluster/bin/scrgadm -c -j otp-spsra-rs -x Stop_signal="15" to change the Stop_signal property of the remote agent resource to 15.
Type /usr/cluster/bin/scrgadm -c -j otp-spsms-rs -x Stop_signal="15" to change the Stop_signal property of the management service resource to 15.
Type /usr/cluster/bin/scswitch -o -g otp-system-rg to put the resource group into the managed state.
Type /usr/cluster/bin/scswitch -Z -g otp-system-rg to bring the resource group back online.
This completes the Open Telecommunications Platform graphical user interface installation process for a clustered OTP system.