Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS

Appendix B Deployment Example: Installing N1 Grid Service Provisioning System in the Failover Zone

This appendix presents a complete example of how to install and configure the N1 Grid Service Provisioning System application and data service in the failover zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of installing N1 Grid Service Provisioning System in a global zone, see Appendix A, Deployment Example: Installing N1 Grid Service Provisioning System in the Global Zone. For a non-global zone, see Appendix C, Deployment Example: Installing N1 Grid Service Provisioning System in the Zone.

Target Cluster Configuration

This example uses a two-node cluster with the following node names:

    • phys-schost-1

    • phys-schost-2

This configuration also uses the logical host name ha-host-1.

Software Configuration

This deployment example uses the following software products and versions:

    • Solaris 10 OS

    • Sun Cluster software

    • N1 Grid Service Provisioning System 5.2

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.

Assumptions

The instructions in this example were developed with the following assumptions:

Installing and Configuring N1 Grid Service Provisioning System Master Server on Shared Storage in the Failover Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Master Server in the failover zone are as follows:

Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

  2. Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

    • Sun Cluster HA for Solaris Container

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1,phys-schost-2 RG-SPSMA
    
  3. Create the HAStoragePlus resource in the RG-SPSMA resource group.


    phys-schost-1# clresource create -g RG-SPSMA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSMA-HAS
    
  4. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSMA
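
    As an optional check that is not part of the original procedure, you can verify that the resource group came online:


    phys-schost-1# clresourcegroup status RG-SPSMA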
    

Example: Configuring the Failover Zone

  1. On shared cluster storage, create a directory for the failover zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /global/mnt3/zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/global/mnt3/zones/clu1
    set autoboot=false
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=ha-host-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="N1 Grid Service Provisioning System cluster zone"
    end

    Replace the text between the quotes with a comment that describes your zone.
  3. Configure the failover zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
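
    Optionally, you can review the resulting zone configuration before installing the zone:


    phys-schost-1# zonecfg -z clu1 info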
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1a# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
    
  8. Copy the Solaris Container configuration file to a temporary location.


    phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
    
  9. Edit the /tmp/sczbt_config file and set variable values as shown:


    RS=RS-SPSMA-ZONE
    RG=RG-SPSMA
    PARAMETERDIR=/global/mnt3/zonepar
    SC_NETWORK=false
    SC_LH=
    FAILOVER=true
    HAS_RS=RS-SPSMA-HAS
    
    
    Zonename=clu1
    Zonebootopt=
    Milestone=multi-user-server
    Mounts=
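
    The sczbt scripts keep zone state files in the directory that PARAMETERDIR names. If /global/mnt3/zonepar does not exist yet, create it on the shared storage before registering the resource, for example:


    phys-schost-1# mkdir -p /global/mnt3/zonepar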
  10. Create the zone on phys-schost-2 according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.

  11. Register the zone resource.


    phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
    
  12. Enable the zone resource.


    phys-schost-1# clresource enable RS-SPSMA-ZONE
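
    You can confirm that the zone resource is online before continuing, for example:


    phys-schost-1# clresource status RS-SPSMA-ZONE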
    

Example: Installing the N1 Grid Service Provisioning System Master Server Software

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever only one node is mentioned, it must be the node on which the resource group is online.

  1. Log in to the zone.


    phys-schost-1# zlogin clu1
    
  2. Add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Prepare the shared memory of the default project on both nodes.


    zone-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
    

    Note –

    This example is valid for Solaris 10 only. Use appropriate methods on Solaris 9.


  4. Install the N1 Grid Service Provisioning System binaries.


    zone-1# cd /installation_directory
    zone-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
    

    Answer the following cluster-relevant questions as shown:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Master Server distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Master Server distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address for this Master Server?
      (default: phys-schost-1) ha-host-1
      

    For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  5. Start the master server as user sps.


    zone-1# su - sps
    zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    zone-1$ ./cr_server start
    
  6. Prepare the PostgreSQL database for monitoring.


    zone-1$ cd /opt/SUNWscsps/master/util
    zone-1$ ksh ./db_prep_postgres /global/mnt3/sps/N1_Service_Provisioning_System_5.2
    
  7. Stop the master server and exit the sps user shell.


    zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
    zone-1$ ./cr_server stop
    zone-1$ exit
    

Example: Modifying the N1 Grid Service Provisioning System Master Server Configuration and Parameter Files

  1. Copy the N1 Grid Service Provisioning System parameter file from the data service directory to its deployment location.


    zone-1# cp /opt/SUNWscsps/master/bin/pfile /global/mnt3
    
  2. Add this cluster's information to the parameter file pfile.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    User=sps
    Basepath=/global/mnt3/sps/N1_Service_Provisioning_System_5.2
    Host=ha-host-1
    Tport=8080
    TestCmd="get /index.jsp"
    ReturnString="SSL|Service"
    Startwait=20
    WgetPath=
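
    With these entries, the fault monitor issues an HTTP get for /index.jsp on ha-host-1 port 8080 and expects output matching "SSL|Service". While the master server is running, you can approximate this probe manually; the path below assumes the wget binary that Solaris 10 bundles in /usr/sfw/bin:


    zone-1$ /usr/sfw/bin/wget -q -O - http://ha-host-1:8080/index.jsp | egrep "SSL|Service"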
    
  3. Save and close the file.

  4. Leave the zone.

  5. Copy the N1 Grid Service Provisioning System configuration file from the data service's util directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/master/util/spsma_config /global/mnt3
    
  6. Add this cluster's information to the spsma_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSMA
    RG=RG-SPSMA
    PORT=
    LH=
    PFILE=/global/mnt3/pfile
    HAS_RS=RS-SPSMA-HAS
    .
    .
    .
    ZONE=clu1
    ZONE_BT=RS-SPSMA-ZONE
    PROJECT=
    
  7. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Master Server Software to Run in the Cluster

  1. Run the spsma_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/master/util/spsma_register \
    > -f /global/mnt3/spsma_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSMA
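
    You can verify that the resource is online, and optionally test a switchover of the resource group to the second node, for example:


    phys-schost-1# clresource status RS-SPSMA
    phys-schost-1# clresourcegroup switch -n phys-schost-2 RG-SPSMA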
    

Installing and Configuring N1 Grid Service Provisioning System Remote Agent on Shared Storage in the Failover Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Remote Agent in the failover zone are as follows:

Example: Preparing the Cluster for N1 Grid Service Provisioning System Remote Agent

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

  2. Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

    • Sun Cluster HA for Solaris Container

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Remote Agent

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1,phys-schost-2 RG-SPSRA
    
  3. Create the logical host.


    phys-schost-1# clreslogicalhostname create -g RG-SPSRA ha-host-1
    
  4. Create the HAStoragePlus resource in the RG-SPSRA resource group.


    phys-schost-1# clresource create -g RG-SPSRA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSRA-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSRA
    

Example: Configuring the Failover Zone

  1. On shared cluster storage, create a directory for the failover zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /global/mnt3/zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/global/mnt3/zones/clu1
    set autoboot=false
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=ha-host-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="N1 Grid Service Provisioning System cluster zone"
    end

    Replace the text between the quotes with a comment that describes your zone.
  3. Configure the failover zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1a# zoneadm -z clu1 boot
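
    You can confirm that the zone reaches the running state, for example:


    phys-schost-1a# zoneadm list -v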
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
    
  8. Copy the Solaris Container configuration file to a temporary location.


    phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
    
  9. Edit the /tmp/sczbt_config file and set variable values as shown:


    RS=RS-SPSRA-ZONE
    RG=RG-SPSRA
    PARAMETERDIR=/global/mnt3/zonepar
    SC_NETWORK=false
    SC_LH=
    FAILOVER=true
    HAS_RS=RS-SPSRA-HAS
    
    
    Zonename=clu1
    Zonebootopt=
    Milestone=multi-user-server
    Mounts=
  10. Create the zone according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.

  11. Register the zone resource.


    phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
    
  12. Enable the zone resource.


    phys-schost-1# clresource enable RS-SPSRA-ZONE
    

Example: Installing the N1 Grid Service Provisioning System Remote Agent Software on Shared Storage

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever only one node is mentioned, it must be the node on which the resource group is online.

  1. Log in to the zone.


    phys-schost-1# zlogin clu1
    
  2. Add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Install the N1 Grid Service Provisioning System binaries.


    zone-1# cd /installation_directory
    zone-1# ./cr_ra_solaris_sparc_5.2.sh
    

    Answer the following cluster-relevant questions as shown:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Remote Agent distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Remote Agent distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address of the interface on which the
      Agent will run?
      (default: phys-schost-1) ha-host-1
      

    For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  4. Leave the zone.

Example: Modifying the N1 Grid Service Provisioning System Remote Agent Configuration File

  1. Copy the N1 Grid Service Provisioning System configuration file from the agent directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/remoteagent/util/spsra_config /global/mnt3
    
  2. Add this cluster's information to the spsra_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSRA
    RG=RG-SPSRA
    PORT=
    LH=
    USER=sps
    BASE=/global/mnt3/sps/N1_Service_Provisioning_System
    HAS_RS=RS-SPSRA-HAS
    .
    .
    .
    ZONE=clu1
    ZONE_BT=RS-SPSRA-ZONE
    PROJECT=
    
  3. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Remote Agent Software to Run in the Cluster

  1. Run the spsra_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/remoteagent/util/spsra_register \
    > -f /global/mnt3/spsra_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSRA
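
    You can verify that the remote agent resource is online, for example:


    phys-schost-1# clresource status RS-SPSRA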
    

Installing and Configuring N1 Grid Service Provisioning System Local Distributor on Shared Storage in the Failover Zone

The tasks you must perform to install and configure N1 Grid Service Provisioning System Local Distributor in the failover zone are as follows:

Example: Preparing the Cluster for N1 Grid Service Provisioning System Local Distributor

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

  2. Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for N1 Grid Service Provisioning System

    • Sun Cluster HA for Solaris Container

Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Local Distributor

  1. Register the necessary resource types on one node.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the N1 Grid Service Provisioning System resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1,phys-schost-2 RG-SPSLD
    
  3. Create the logical host.


    phys-schost-1# clreslogicalhostname create -g RG-SPSLD ha-host-1
    
  4. Create the HAStoragePlus resource in the RG-SPSLD resource group.


    phys-schost-1# clresource create -g RG-SPSLD -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    > -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSLD-HAS
    
  5. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-SPSLD
    

Example: Configuring the Failover Zone

  1. On shared cluster storage, create a directory for the failover zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /global/mnt3/zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/global/mnt3/zones/clu1
    set autoboot=false
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=ha-host-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="N1 Grid Service Provisioning System cluster zone"
    end

    Replace the text between the quotes with a comment that describes your zone.
  3. Configure the failover zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1a# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
    
  8. Copy the Solaris Container configuration file to a temporary location.


    phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
    
  9. Edit the /tmp/sczbt_config file and set variable values as shown:


    RS=RS-SPSLD-ZONE
    RG=RG-SPSLD
    PARAMETERDIR=/global/mnt3/zonepar
    SC_NETWORK=false
    SC_LH=
    FAILOVER=true
    HAS_RS=RS-SPSLD-HAS
    
    
    Zonename=clu1
    Zonebootopt=
    Milestone=multi-user-server
    Mounts=
  10. Create the zone according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.

  11. Register the zone resource.


    phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
    
  12. Enable the zone resource.


    phys-schost-1# clresource enable RS-SPSLD-ZONE
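
    You can confirm that the zone resource is online before installing the software, for example:


    phys-schost-1# clresource status RS-SPSLD-ZONE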
    

Example: Installing the N1 Grid Service Provisioning System Local Distributor Software

These steps illustrate how to install the N1 Grid Service Provisioning System software. Wherever only one node is mentioned, it must be the node on which the resource group is online.

  1. Log in to the zone.


    phys-schost-1# zlogin clu1
    
  2. Add the sps user.


    zone-1# groupadd -g 1000 sps
    zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
    
  3. Install the N1 Grid Service Provisioning System binaries.


    zone-1# cd /installation_directory
    zone-1# ./cr_ld_solaris_sparc_5.2.sh
    

    Answer the following cluster-relevant questions as shown:


    • What base directory ...
      (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
      

    • Which user will own the N1 SPS Local Distributor distribution?
      (default: n1sps) [<valid username>] sps
      

    • Which group on this machine will own the
      N1 SPS Local Distributor distribution?
      (default: n1sps) [<valid groupname>] sps
      

    • What is the hostname or IP address of this machine?
      (default: phys-schost-1) ha-host-1
      

    For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.

  4. Leave the zone.

Example: Modifying the N1 Grid Service Provisioning System Local Distributor Configuration File

  1. Copy the N1 Grid Service Provisioning System configuration file from the data service's util directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscsps/localdist/util/spsld_config /global/mnt3
    
  2. Add this cluster's information to the spsld_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-SPSLD
    RG=RG-SPSLD
    PORT=
    LH=
    USER=sps
    BASE=/global/mnt3/sps/N1_Service_Provisioning_System
    HAS_RS=RS-SPSLD-HAS
    
  3. Save and close the file.

Example: Enabling the N1 Grid Service Provisioning System Local Distributor Software to Run in the Cluster

  1. Run the spsld_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscsps/localdist/util/spsld_register \
    > -f /global/mnt3/spsld_config
    
  2. Enable the resource.


    phys-schost-1# clresource enable RS-SPSLD
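
    As a final check, you can verify that the resource is online and test a switchover of the complete resource group to the second node, for example:


    phys-schost-1# clresource status RS-SPSLD
    phys-schost-1# clresourcegroup switch -n phys-schost-2 RG-SPSLD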