Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS

Appendix C Deployment Example: Installing N1 Grid Service Provisioning System in the Zone

This appendix presents a complete example of how to install and configure the N1 Grid Service Provisioning System application and data service in a zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of N1 Grid Service Provisioning System installation in a global zone, see Appendix A, Deployment Example: Installing N1 Grid Service Provisioning System in the Global Zone. For an example of installation in a non-global failover zone, see Appendix B, Deployment Example: Installing N1 Grid Service Provisioning System in the Failover Zone.

Target Cluster Configuration

This example uses a two-node cluster with the following node names:

    • phys-schost-1 (hosts the zone clu1, with the zone host name zone-1)

    • phys-schost-2 (hosts the zone clu1, with the zone host name zone-2)

This configuration also uses the logical host name ha-host-1.

Software Configuration

This deployment example uses the following software products and versions:

    • Solaris 10 OS

    • Sun Cluster software

    • Sun Cluster data service for N1 Grid Service Provisioning System

    • N1 Grid Service Provisioning System 5.2

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.

Assumptions

The instructions in this example were developed with the following assumptions:

Installing and Configuring N1 Grid Service Provisioning System Master Server on Shared Storage in the Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Master Server in the zone are as follows:

    • Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server

    • Example: Configuring the Zone

    • Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server

    • Example: Installing the N1 Grid Service Provisioning System Master Server Software on Shared Storage

    • Example: Modifying the N1 Grid Service Provisioning System Master Server Configuration and Parameter Files

    • Example: Enabling the N1 Grid Service Provisioning System Master Server Software to Run in the Cluster

Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

  2. Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

Example: Configuring the Zone

In this task you install a Solaris Container (zone) on phys-schost-1 and phys-schost-2. Therefore, perform this procedure on both hosts.

  1. On the local cluster storage of each node, create a directory for the zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/zones/clu1
    set autoboot=true
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=zone-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="SPS cluster zone"
    end

    On the second node, specify a different address (zone-2). Put your desired comment for the zone between the quotes of the value entry.
  3. Configure the zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
    
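As an optional check before you continue, verify on each node that the zone is configured and running. This sketch uses standard Solaris commands; the zone name clu1 matches the zonecfg and zoneadm commands above.


    phys-schost-1# zonecfg -z clu1 info zonepath
    phys-schost-1# zoneadm list -cv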

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-SPSMA
    
  3. Create the logical host.


    phys-schost-1# clreslogicalhostname create -g RG-SPSMA ha-host-1
    
  4. Create the HAStoragePlus resource in the RG-SPSMA resource group.


    phys-schost-1# clresource create -g RG-SPSMA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSMA-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSMA
    
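As an optional check, confirm that the resource group, the logical host name, and the HAStoragePlus resource are online before you install the application. Both commands are standard Sun Cluster status queries; the names match the resources created above.


    phys-schost-1# clresourcegroup status RG-SPSMA
    phys-schost-1# clresource status -g RG-SPSMA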

Example: Installing the N1 Grid Service Provisioning System Master Server Software on Shared Storage

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever a step mentions only one node, perform it on the node where your resource group is online.

  1. Log in to the zone on both nodes.


    phys-schost-1# zlogin clu1
    phys-schost-2# zlogin clu1
    
  2. Beginning on the node that owns the file system, add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-2# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    zone-2# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Prepare the shared memory of the default project on both nodes.


    zone-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    zone-2# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    

    Note –

    This example applies to Solaris 10 only; Solaris zones are not available on earlier Solaris releases. (An optional check of the resulting setting appears after this procedure.)


  4. Install the N1 Grid Service Provisioning System binaries on one node.


    zone-1# cd /installation_directory
    zone-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
    

    Answer the cluster-relevant questions as follows:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Master Server distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Master Server distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address for this Master Server?
      (default: phys-schost-1) ha-host-1
      

    For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  5. Start the master server as user sps.


    zone-1# su - sps
    zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    zone-1$ ./cr_server start
    
  6. Prepare the PostgreSQL database for monitoring.


    zone-1$ cd /opt/SUNWscsps/master/util
    zone-1$ ksh ./db_prep_postgres /global/mnt3/sps/N1_Service_Provisioning_System_5.2
    
  7. Stop the master server and exit the sps user shell.


    zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    zone-1$ ./cr_server stop
    
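The shared-memory setting from Step 3 can be verified against the project database in each zone. This optional check uses the standard Solaris 10 projects command; the output should include the 536870912-byte (512 MB) limit set above.


    zone-1# projects -l default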

Example: Modifying the N1 Grid Service Provisioning System Master Server Configuration and Parameter Files

  1. Copy the N1 Grid Service Provisioning System parameter file from the data service's master directory to its deployment location.


    zone-1# cp /opt/SUNWscsps/master/bin/pfile /global/mnt3
    
  2. Add this cluster's information to the parameter file pfile.

    The following listing shows the relevant file entries and the values to assign to each entry. (A manual check of the probe entries appears after this procedure.)


    .
    .
    .
    User=sps
    Basepath=/global/mnt3/sps/N1_Service_Provisioning_System_5.2
    Host=ha-host-1
    Tport=8080
    TestCmd="get /index.jsp"
    ReturnString="SSL|Service"
    Startwait=20
    WgetPath=
    
  3. Save and close the file.

  4. Leave the zone.

  5. Copy the N1 Grid Service Provisioning System configuration file from the data service's utility directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/master/util/spsma_config /global/mnt3
    
  6. Add this cluster's information to the spsma_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSMA
    RG=RG-SPSMA
    PORT=8080
    LH=ha-host-1
    PFILE=/global/mnt3/pfile
    HAS_RS=RS-SPSMA-HAS
    
  7. Save and close the file.
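
The TestCmd and ReturnString entries in pfile tell the fault monitor to fetch /index.jsp through the logical host and to match the reply against the given pattern. While the master server is running, you can approximate that probe by hand. This is a rough sketch only: it assumes wget is available in the zone (for example under /usr/sfw/bin) and uses the default port 8080 from the parameter file.


    zone-1$ # assumes wget under /usr/sfw/bin; adjust the path to your installation
    zone-1$ /usr/sfw/bin/wget -qO - http://ha-host-1:8080/index.jsp | egrep "SSL|Service"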

Example: Enabling the N1 Grid Service Provisioning System Master Server Software to Run in the Cluster

  1. Run the spsma_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/master/util/spsma_register \
    > -f /global/mnt3/spsma_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSMA
    
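To confirm that the master server is now under cluster control, check the resource status and, optionally, switch the resource group to the second node and back. The node:zone syntax matches the node list used when the resource group was created; run the switchover only when the zone on both nodes is running.


    phys-schost-1# clresource status RS-SPSMA
    phys-schost-1# # optional switchover test
    phys-schost-1# clresourcegroup switch -n phys-schost-2:clu1 RG-SPSMA
    phys-schost-1# clresourcegroup switch -n phys-schost-1:clu1 RG-SPSMA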

Installing and Configuring N1 Grid Service Provisioning System Remote Agent on Shared Storage in the Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Remote Agent in the zone are as follows:

    • Example: Preparing the Cluster for N1 Grid Service Provisioning System Remote Agent

    • Example: Configuring the Zone

    • Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Remote Agent

    • Example: Installing the N1 Grid Service Provisioning System Remote Agent Software on Shared Storage

    • Example: Modifying the N1 Grid Service Provisioning System Remote Agent Configuration File

    • Example: Enabling the N1 Grid Service Provisioning System Remote Agent Software to Run in the Cluster

Example: Preparing the Cluster for N1 Grid Service Provisioning System Remote Agent

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

  2. Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

Example: Configuring the Zone

In this task you install a Solaris Container (zone) on phys-schost-1 and phys-schost-2. Therefore, perform this procedure on both hosts.

  1. On the local cluster storage of each node, create a directory for the zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/zones/clu1
    set autoboot=true
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=zone-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="SPS cluster zone"
    end

    On the second node, specify a different address (zone-2). Put your desired comment for the zone between the quotes of the value entry.
  3. Configure the zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
    

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Remote Agent

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-SPSRA
    
  3. Create the logical host.


    phys-schost-1# clreslogicalhostname create -g RG-SPSRA ha-host-1
    
  4. Create the HAStoragePlus resource in the RG-SPSRA resource group.


    phys-schost-1# clresource create -g RG-SPSRA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSRA-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSRA
    

Example: Installing the N1 Grid Service Provisioning System Remote Agent Software on Shared Storage

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever a step mentions only one node, perform it on the node where your resource group is online.

  1. Log in to the zone on both nodes.


    phys-schost-1# zlogin clu1
    phys-schost-2# zlogin clu1
    
  2. Beginning on the node that owns the file system, add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-2# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    zone-2# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Install the N1 Grid Service Provisioning System binaries on one node.


    zone-1# cd /installation_directory
    zone-1# ./cr_ra_solaris_sparc_5.2.sh
    

    Answer the cluster-relevant questions as follows:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Remote Agent distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Remote Agent distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address of the interface on which the
      Agent will run?
      (default: phys-schost-1) ha-host-1
      

    For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  4. Leave the zone.

Example: Modifying the N1 Grid Service Provisioning System Remote Agent Configuration File

  1. Copy the N1 Grid Service Provisioning System configuration file from the remote agent utility directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/remoteagent/util/spsra_config /global/mnt3
    
  2. Add this cluster's information to the spsra_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSRA
    RG=RG-SPSRA
    PORT=8080
    LH=ha-host-1
    USER=sps
    BASE=/global/mnt3/sps/N1_Service_Provisioning_System
    HAS_RS=RS-SPSRA-HAS
    
  3. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Remote Agent Software to Run in the Cluster

  1. Run the spsra_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/remoteagent/util/spsra_register \
    > -f /global/mnt3/spsra_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSRA
    
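As with the master server, an optional status check confirms that the remote agent resource came online in the zone.


    phys-schost-1# clresource status RS-SPSRA
    phys-schost-1# clresourcegroup status RG-SPSRA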

Installing and Configuring N1 Grid Service Provisioning System Local Distributor on Shared Storage in the Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Local Distributor in the zone are as follows:

    • Example: Preparing the Cluster for N1 Grid Service Provisioning System Local Distributor

    • Example: Configuring the Zone

    • Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Local Distributor

    • Example: Installing the N1 Grid Service Provisioning System Local Distributor Software on Shared Storage

    • Example: Modifying the N1 Grid Service Provisioning System Local Distributor Configuration File

    • Example: Enabling the N1 Grid Service Provisioning System Local Distributor Software to Run in the Cluster

Example: Preparing the Cluster for N1 Grid Service Provisioning System Local Distributor

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

  2. Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

Example: Configuring the Zone

In this task you install a Solaris Container (zone) on phys-schost-1 and phys-schost-2. Therefore, perform this procedure on both hosts.

  1. On the local cluster storage of each node, create a directory for the zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/zones/clu1
    set autoboot=true
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=zone-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="SPS cluster zone"
    end

    On the second node, specify a different address (zone-2). Put your desired comment for the zone between the quotes of the value entry.
  3. Configure the zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
    

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Local Distributor

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-SPSLD
    
  3. Create the logical host.


    phys-schost-1# clreslogicalhostname create -g RG-SPSLD ha-host-1
    
  4. Create the HAStoragePlus resource in the RG-SPSLD resource group.


    phys-schost-1# clresource create -g RG-SPSLD -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSLD-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSLD
    

Example: Installing the N1 Grid Service Provisioning System Local Distributor Software on Shared Storage

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever a step mentions only one node, perform it on the node where your resource group is online.

  1. Log in to the zone on both nodes.


    phys-schost-1# zlogin clu1
    phys-schost-2# zlogin clu1
    
  2. Beginning on the node that owns the file system, add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-2# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    zone-2# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Install the N1 Grid Service Provisioning System binaries on one node.


    zone-1# cd /installation_directory
    zone-1# ./cr_ld_solaris_sparc_5.2.sh
    

    Answer the cluster-relevant questions as follows:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Local Distributor distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Local Distributor distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address of this machine?
      (default: phys-schost-1) ha-host-1
      

    For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  4. Leave the zone.

Example: Modifying the N1 Grid Service Provisioning System Local Distributor Configuration File

  1. Copy the N1 Grid Service Provisioning System configuration file from the local distributor utility directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/localdist/util/spsld_config /global/mnt3
    
  2. Add this cluster's information to the spsld_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSLD
    RG=RG-SPSLD
    PORT=8080
    LH=ha-host-1
    USER=sps
    BASE=/global/mnt3/sps/N1_Service_Provisioning_System
    HAS_RS=RS-SPSLD-HAS
    
  3. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Local Distributor Software to Run in the Cluster

  1. Run the spsld_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/localdist/util/spsld_register \
    > -f /global/mnt3/spsld_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSLD
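
As a final optional check, verify that the local distributor resource and its resource group are online.


    phys-schost-1# clresource status RS-SPSLD
    phys-schost-1# clresourcegroup status RG-SPSLD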