Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS

Installing and Configuring N1 Grid Service Provisioning System Master Server on Shared Storage in the Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Master Server in the Zone are as follows:

Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server

    Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

Example: Configuring the Zone

In this task, you install a Solaris Container on both phys-schost-1 and phys-schost-2. Perform this procedure on both hosts.

  1. On the local cluster storage of each node, create a directory for the zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/zones/clu1
    set autoboot=true
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=zone-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="SPS cluster zone"
    end

    On the second node, set a different address (for example, zone-2). Set the value of the comment attribute to any description you choose.
  3. Configure the zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
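
Before you continue, you can optionally verify on each node that the zone is installed and running. The zoneadm command below is a standard Solaris command; the exact listing depends on your configuration.


    phys-schost-1# zoneadm list -cv

    The zone clu1 should appear in the list with the status running.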
    

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-SPSMA
    
  3. Create the logical host.


    phys-schost-1# clreslogicalhostname create -g RG-SPSMA ha-host-1
    
  4. Create the HAStoragePlus resource in the RG-SPSMA resource group.


    phys-schost-1# clresource create -g RG-SPSMA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSMA-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSMA
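
    You can optionally confirm that the resource group and its resources came online by using the status subcommand, which is part of the Sun Cluster command set:


    phys-schost-1# clresourcegroup status RG-SPSMA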
    

Example: Installing the N1 Grid Service Provisioning System Master Server Software on Shared Storage

These steps illustrate how to install the N1 Grid Service Provisioning System software. Whenever only one node is mentioned, it must be the node on which the resource group is online.

  1. Log into the zone on both nodes.


    phys-schost-1# zlogin clu1
    phys-schost-2# zlogin clu1
    
  2. Beginning on the node that owns the file system, add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-2# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    zone-2# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Prepare the shared memory of the default project on both nodes.


    zone-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    zone-2# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    

    Note –

    This example is valid for Solaris 10 only. Use appropriate methods on Solaris 9.
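
    You can optionally check the resulting attribute with the standard Solaris projects command:


    zone-1# projects -l default

    The output should include the project.max-shm-memory attribute with the value you set.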


  4. Install the N1 Grid Service Provisioning System binaries on one node.


    zone-1# cd /installation_directory
    zone-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
    

    Answer the cluster-relevant questions as follows:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Master Server distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Master Server distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address for this Master Server?
      (default: phys-schost-1) ha-host-1
      

    For all other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  5. Start the master server as user sps.


    zone-1# su - sps
    zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    zone-1$ ./cr_server start
    
  6. Prepare the PostgreSQL database for monitoring.


    zone-1$ cd /opt/SUNWscsps/master/util
    zone-1$ ksh ./db_prep_postgres /global/mnt3/sps/N1_Service_Provisioning_System_5.2
    
  7. Stop the master server and leave the user sps.


    zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    zone-1$ ./cr_server stop
    

Example: Modifying the N1 Grid Service Provisioning System Master Server Configuration and Parameter Files

  1. Copy the N1 Grid Service Provisioning System parameter file from the master server's bin directory to its deployment location.


    zone-1# cp /opt/SUNWscsps/master/bin/pfile /global/mnt3
    
  2. Add this cluster's information to the parameter file pfile.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    User=sps
    Basepath=/global/mnt3/sps/N1_Service_Provisioning_System_5.2
    Host=ha-host-1
    Tport=8080
    TestCmd="get /index.jsp"
    ReturnString="SSL|Service"
    Startwait=20
    WgetPath=
    
  3. Save and close the file.

  4. Leave the zone.

  5. Copy the N1 Grid Service Provisioning System configuration file from the master server's util directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/master/util/spsma_config /global/mnt3
    
  6. Add this cluster's information to the spsma_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSMA
    RG=RG-SPSMA
    PORT=8080
    LH=ha-host-1
    PFILE=/global/mnt3/pfile
    HAS_RS=RS-SPSMA-HAS
    
  7. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Master Server Software to Run in the Cluster

  1. Run the spsma_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/master/util/spsma_register \
    > -f /global/mnt3/spsma_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSMA
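
    You can optionally verify that the resource is online by using the status subcommand of the Sun Cluster command set:


    phys-schost-1# clresource status RS-SPSMA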