Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS

Installing and Configuring N1 Grid Service Provisioning System Master Server on Shared Storage in the Global Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Master Server in the global zone are as follows:

Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

  2. Beginning on the node that owns the file system, add the sps group and user on both nodes.


    phys-schost-1# groupadd -g 1000 sps
    phys-schost-2# groupadd -g 1000 sps
    phys-schost-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    phys-schost-2# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
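
    Because useradd chooses the UID automatically in this example, confirm that the sps user received the same numeric UID on both nodes; the files on the shared storage are owned by a single numeric ID.


    phys-schost-1# id sps
    phys-schost-2# id sps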
    

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
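
    Optionally, confirm that both resource types are now registered:


    phys-schost-1# clresourcetype list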
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create RG-SPSMA
    
  3. Create the logical hostname resource.


    phys-schost-1# clreslogicalhostname create -g RG-SPSMA ha-host-1
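
    The logical hostname ha-host-1 must be resolvable on all cluster nodes, for example through /etc/hosts, before this resource can be created. You can confirm the mapping with getent:


    phys-schost-1# getent hosts ha-host-1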
    
  4. Create the HAStoragePlus resource in the RG-SPSMA resource group.


    phys-schost-1# clresource create -g RG-SPSMA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSMA-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSMA
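
    To verify that the resource group and the storage resource are online, check their status:


    phys-schost-1# clresourcegroup status RG-SPSMA
    phys-schost-1# clresource status RS-SPSMA-HAS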
    

Example: Installing the N1 Grid Service Provisioning System Master Server Software on Shared Storage

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever only one node is mentioned, it must be the node on which the resource group is online.

  1. Set the maximum shared-memory limit of the default project on both nodes.


    phys-schost-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    phys-schost-2# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    

    Note –

    This example is valid for Solaris 10 only. Use appropriate methods on Solaris 9.
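
    On Solaris 9, a comparable limit can be set through /etc/system instead of project resource controls. A minimal sketch, assuming the same 512-Mbyte limit and that both nodes can be rebooted for the change to take effect:


    phys-schost-1# echo "set shmsys:shminfo_shmmax=536870912" >> /etc/system
    phys-schost-2# echo "set shmsys:shminfo_shmmax=536870912" >> /etc/system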


  2. Install the N1 Grid Service Provisioning System binaries on one node.


    phys-schost-1# cd /installation_directory
    phys-schost-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
    

    Answer the cluster-relevant questions as follows:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Master Server distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Master Server distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address for this Master Server?
      (default: phys-schost-1) ha-host-1
      

    For all other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  3. Start the master server as user sps.


    phys-schost-1# su - sps
    phys-schost-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    phys-schost-1$ ./cr_server start
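
    Before continuing, you can check that the master server answers on the logical host. This sketch assumes wget is installed at /usr/sfw/bin/wget, its usual Solaris 10 location; the request and the expected pattern mirror the TestCmd and ReturnString probe values set later in the parameter file.


    phys-schost-1$ /usr/sfw/bin/wget -q -O - http://ha-host-1:8080/index.jsp | egrep 'SSL|Service'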
    
  4. Prepare the PostgreSQL database for monitoring.


    phys-schost-1$ cd /opt/SUNWscsps/master/util
    phys-schost-1$ ksh ./db_prep_postgres \
    > /global/mnt3/sps/N1_Service_Provisioning_System_5.2
    
  5. Stop the master server and exit the sps user shell.


    phys-schost-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    phys-schost-1$ ./cr_server stop
    phys-schost-1$ exit
    

Example: Modifying the N1 Grid Service Provisioning System Master Server Configuration and Parameter Files

  1. Copy the N1 Grid Service Provisioning System configuration and parameter files from the agent directory to their deployment location.


    phys-schost-1# cp /opt/SUNWscsps/master/util/spsma_config /global/mnt3
    phys-schost-1# cp /opt/SUNWscsps/master/bin/pfile /global/mnt3
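
    Confirm that both files now reside on the shared storage:


    phys-schost-1# ls -l /global/mnt3/spsma_config /global/mnt3/pfile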
    
  2. Add this cluster's information to the spsma_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSMA
    RG=RG-SPSMA
    PORT=8080
    LH=ha-host-1
    PFILE=/global/mnt3/pfile
    HAS_RS=RS-SPSMA-HAS
    
  3. Save and close the file.

  4. Add this cluster's information to the parameter file pfile.

    The following listing shows the relevant file entries and the values to assign to each entry.
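
    Assuming the data service's standard wget-based probe, the values below cause the fault monitor to request /index.jsp from ha-host-1 on port 8080 and to expect the response to match SSL|Service; an empty WgetPath typically selects the default wget location.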


    .
    .
    .
    User=sps
    Basepath=/global/mnt3/sps/N1_Service_Provisioning_System_5.2
    Host=ha-host-1
    Tport=8080
    TestCmd="get /index.jsp"
    ReturnString="SSL|Service"
    Startwait=20
    WgetPath=
    
  5. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Master Server Software to Run in the Cluster

  1. Run the spsma_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/master/util/spsma_register \
    -f /global/mnt3/spsma_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSMA
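
    After enabling the resource, verify that it is online. Optionally, test a switchover of the resource group to the second node and back:


    phys-schost-1# clresource status RS-SPSMA
    phys-schost-1# clresourcegroup switch -n phys-schost-2 RG-SPSMA
    phys-schost-1# clresourcegroup switch -n phys-schost-1 RG-SPSMA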