This appendix presents a complete example of how to install and configure the N1 Grid Service Provisioning System application and data service in the failover zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of installing N1 Grid Service Provisioning System in the global zone, see Appendix A, Deployment Example: Installing N1 Grid Service Provisioning System in the Global Zone. For an example in a non-global zone, see Appendix C, Deployment Example: Installing N1 Grid Service Provisioning System in the Zone.
This example uses a two-node cluster with the following node names:
phys-schost-1 (a physical node, which owns the file system)
phys-schost-2 (a physical node)
clu1 (the zone to be failed over)
This configuration also uses the logical host name ha-host-1.
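The logical host name must be resolvable on both nodes. A minimal sketch of an /etc/hosts entry, assuming the hypothetical address 192.168.10.10:
192.168.10.10   ha-host-1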
This deployment example uses the following software products and versions:
Solaris 10 6/06 software for SPARC or x86 platforms
Sun Cluster 3.2 core software
Sun Cluster HA for Solaris Container
Sun Cluster HA for N1 Service Provisioning System
N1 Grid Service Provisioning System 5.2
Your preferred text editor
This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.
The instructions in this example were developed with the following assumptions:
Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment.
User login: Unless otherwise specified, perform all procedures as superuser or assume a role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read RBAC authorization.
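To confirm that your current role provides these RBAC authorizations, you can, for example, list them with the auths command:
phys-schost-1# auths | grep solaris.cluster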
The tasks you must perform to install and configure N1 Grid Service Provisioning System Master Server in the failover zone are as follows:
Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server
Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server
Example: Installing the N1 Grid Service Provisioning System Master Server Software
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on both nodes.
Sun Cluster core software
Sun Cluster data service for N1 Grid Service Provisioning System
Sun Cluster HA for Solaris Container
Register the necessary resource types on one node.
phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
Create the N1 Grid Service Provisioning System resource group.
phys-schost-1# clresourcegroup create -n phys-schost-1,phys-schost-2 RG-SPSMA
Create the HAStoragePlus resource in the RG-SPSMA resource group.
phys-schost-1# clresource create -g RG-SPSMA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
> -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSMA-HAS
Enable the resource group.
phys-schost-1# clresourcegroup online -M RG-SPSMA
On shared cluster storage, create a directory for the failover zone root path.
This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.
phys-schost-1# mkdir /global/mnt3/zones
Create a temporary file, for example /tmp/x, and include the following entries:
create -b
set zonepath=/global/mnt3/zones/clu1
set autoboot=false
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=ha-host-1
set physical=hme0
end
add attr
set name=comment
set type=string
set value="N1 Grid Service Provisioning System cluster zone"
end
Put your desired zone comment between the quotes.
Configure the failover zone, using the file you created.
phys-schost-1# zonecfg -z clu1 -f /tmp/x
Install the zone.
phys-schost-1# zoneadm -z clu1 install
Log in to the zone.
phys-schost-1# zlogin -C clu1
Open a new window to the same node and boot the zone.
phys-schost-1a# zoneadm -z clu1 boot
Close this terminal window and disconnect from the zone console.
phys-schost-1# ~~.
Copy the Solaris Container configuration file to a temporary location.
phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
Edit the /tmp/sczbt_config file and set variable values as shown:
RS=RS-SPSMA-ZONE
RG=RG-SPSMA
PARAMETERDIR=/global/mnt3/zonepar
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=RS-SPSMA-HAS
Zonename=clu1
Zonebootopt=
Milestone=multi-user-server
Mounts=
Create the zone on phys-schost-2 according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.
Register the zone resource.
phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
Enable the zone resource.
phys-schost-1# clresource enable RS-SPSMA-ZONE
These steps illustrate how to install the N1 Grid Service Provisioning System software. Unless stated otherwise, perform each step that names only one node on the node where your resource group is online.
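If you are unsure which node that is, you can check the resource group status, for example:
phys-schost-1# clresourcegroup status RG-SPSMA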
Log in to the zone.
phys-schost-1# zlogin clu1
Add the sps user.
zone-1# groupadd -g 1000 sps
zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
Prepare the shared memory of the default project on both nodes.
zone-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
This example is valid for Solaris 10 only. Use appropriate methods on Solaris 9.
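To verify the new resource control on the default project, you can, for example, list the project attributes:
zone-1# projects -l default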
Install the N1 Grid Service Provisioning System binaries.
zone-1# cd /installation_directory
zone-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
Answer the cluster-relevant questions as follows:
What base directory ... (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
Which user will own the N1 SPS Master Server distribution? (default: n1sps) [<valid username>] sps
Which group on this machine will own the N1 SPS Master Server distribution? (default: n1sps) [<valid groupname>] sps
What is the hostname or IP address for this Master Server? (default: phys-schost-1) ha-host-1
For all other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.
Start the master server as user sps.
zone-1# su - sps
zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
zone-1$ ./cr_server start
Prepare the PostgreSQL database for monitoring.
zone-1$ cd /opt/SUNWscsps/master/util
zone-1$ ksh ./db_prep_postgres /global/mnt3/sps/N1_Service_Provisioning_System_5.2
Stop the master server and leave the user sps.
zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
zone-1$ ./cr_server stop
zone-1$ exit
Copy the N1 Grid Service Provisioning System parameter file from the agent directory to its deployment location.
zone-1# cp /opt/SUNWscsps/master/bin/pfile /global/mnt3
Add this cluster's information to the parameter file pfile.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
User=sps
Basepath=/global/mnt3/sps/N1_Service_Provisioning_System_5.2
Host=ha-host-1
Tport=8080
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=20
WgetPath=
Save and close the file.
Leave the zone.
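For example, because you logged in with the zlogin command, exiting the shell returns you to the global zone:
zone-1# exit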
Copy the N1 Grid Service Provisioning System configuration file from the agent directory to its deployment location.
phys-schost-1# cp /opt/SUNWscsps/master/util/spsma_config /global/mnt3
Add this cluster's information to the spsma_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
RS=RS-SPSMA
RG=RG-SPSMA
PORT=
LH=
PFILE=/global/mnt3/pfile
HAS_RS=RS-SPSMA-HAS
. . .
ZONE=clu1
ZONE_BT=RS-SPSMA-ZONE
PROJECT=
Save and close the file.
Run the spsma_register script to register the resource.
phys-schost-1# ksh /opt/SUNWscsps/master/util/spsma_register \
> -f /global/mnt3/spsma_config
Enable the resource.
phys-schost-1# clresource enable RS-SPSMA
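You can verify that the resource is online, for example:
phys-schost-1# clresource status RS-SPSMA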
The tasks you must perform to install and configure N1 Grid Service Provisioning System Remote Agent in the failover zone are as follows:
Example: Preparing the Cluster for N1 Grid Service Provisioning System Remote Agent
Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Remote Agent
Example: Installing the N1 Grid Service Provisioning System Remote Agent Software on Shared Storage
Example: Modifying the N1 Grid Service Provisioning System Remote Agent Configuration File
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on both nodes.
Sun Cluster core software
Sun Cluster data service for N1 Grid Service Provisioning System
Sun Cluster HA for Solaris Container
Register the necessary resource types on one node.
phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
Create the N1 Grid Service Provisioning System resource group.
phys-schost-1# clresourcegroup create -n phys-schost-1,phys-schost-2 RG-SPSRA
Create the logical host.
phys-schost-1# clreslogicalhostname create -g RG-SPSRA ha-host-1
Create the HAStoragePlus resource in the RG-SPSRA resource group.
phys-schost-1# clresource create -g RG-SPSRA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
> -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSRA-HAS
Enable the resource group.
phys-schost-1# clresourcegroup online -M RG-SPSRA
On shared cluster storage, create a directory for the failover zone root path.
This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.
phys-schost-1# mkdir /global/mnt3/zones
Create a temporary file, for example /tmp/x, and include the following entries:
create -b
set zonepath=/global/mnt3/zones/clu1
set autoboot=false
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=ha-host-1
set physical=hme0
end
add attr
set name=comment
set type=string
set value="N1 Grid Service Provisioning System cluster zone"
end
Put your desired zone comment between the quotes.
Configure the failover zone, using the file you created.
phys-schost-1# zonecfg -z clu1 -f /tmp/x
Install the zone.
phys-schost-1# zoneadm -z clu1 install
Log in to the zone.
phys-schost-1# zlogin -C clu1
Open a new window to the same node and boot the zone.
phys-schost-1a# zoneadm -z clu1 boot
Close this terminal window and disconnect from the zone console.
phys-schost-1# ~~.
Copy the Solaris Container configuration file to a temporary location.
phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
Edit the /tmp/sczbt_config file and set variable values as shown:
RS=RS-SPSRA-ZONE
RG=RG-SPSRA
PARAMETERDIR=/global/mnt3/zonepar
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=RS-SPSRA-HAS
Zonename=clu1
Zonebootopt=
Milestone=multi-user-server
Mounts=
Create the zone according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.
Register the zone resource.
phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
Enable the zone resource.
phys-schost-1# clresource enable RS-SPSRA-ZONE
These steps illustrate how to install the N1 Grid Service Provisioning System software. Unless stated otherwise, perform each step that names only one node on the node where your resource group is online.
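If you are unsure which node that is, you can check the resource group status, for example:
phys-schost-1# clresourcegroup status RG-SPSRA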
Log in to the zone.
phys-schost-1# zlogin clu1
Add the sps user.
zone-1# groupadd -g 1000 sps
zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
Install the N1 Grid Service Provisioning System binaries.
zone-1# cd /installation_directory
zone-1# ./cr_ra_solaris_sparc_5.2.sh
Answer the cluster-relevant questions as follows:
What base directory ... (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
Which user will own the N1 SPS Remote Agent distribution? (default: n1sps) [<valid username>] sps
Which group on this machine will own the N1 SPS Remote Agent distribution? (default: n1sps) [<valid groupname>] sps
What is the hostname or IP address of the interface on which the Agent will run? (default: phys-schost-1) ha-host-1
For all other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.
Leave the zone.
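For example, because you logged in with the zlogin command, exiting the shell returns you to the global zone:
zone-1# exit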
Copy the N1 Grid Service Provisioning System configuration file from the agent directory to its deployment location.
phys-schost-1# cp /opt/SUNWscsps/remoteagent/util/spsra_config /global/mnt3
Add this cluster's information to the spsra_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
RS=RS-SPSRA
RG=RG-SPSRA
PORT=
LH=
USER=sps
BASE=/global/mnt3/sps/N1_Service_Provisioning_System
HAS_RS=RS-SPSRA-HAS
. . .
ZONE=clu1
ZONE_BT=RS-SPSRA-ZONE
PROJECT=
Save and close the file.
Run the spsra_register script to register the resource.
phys-schost-1# ksh /opt/SUNWscsps/remoteagent/util/spsra_register \
> -f /global/mnt3/spsra_config
Enable the resource.
phys-schost-1# clresource enable RS-SPSRA
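You can verify that the resource is online, for example:
phys-schost-1# clresource status RS-SPSRA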
The tasks you must perform to install and configure N1 Grid Service Provisioning System Local Distributor in the failover zone are as follows:
Example: Preparing the Cluster for N1 Grid Service Provisioning System Local Distributor
Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Local Distributor
Example: Installing the N1 Grid Service Provisioning System Local Distributor Software
Example: Modifying the N1 Grid Service Provisioning System Local Distributor Configuration File
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on both nodes.
Sun Cluster core software
Sun Cluster data service for N1 Grid Service Provisioning System
Sun Cluster HA for Solaris Container
Register the necessary resource types on one node.
phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
Create the N1 Grid Service Provisioning System resource group.
phys-schost-1# clresourcegroup create -n phys-schost-1,phys-schost-2 RG-SPSLD
Create the logical host.
phys-schost-1# clreslogicalhostname create -g RG-SPSLD ha-host-1
Create the HAStoragePlus resource in the RG-SPSLD resource group.
phys-schost-1# clresource create -g RG-SPSLD -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
> -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSLD-HAS
Enable the resource group.
phys-schost-1# clresourcegroup online -M RG-SPSLD
On shared cluster storage, create a directory for the failover zone root path.
This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.
phys-schost-1# mkdir /global/mnt3/zones
Create a temporary file, for example /tmp/x, and include the following entries:
create -b
set zonepath=/global/mnt3/zones/clu1
set autoboot=false
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=ha-host-1
set physical=hme0
end
add attr
set name=comment
set type=string
set value="N1 Grid Service Provisioning System cluster zone"
end
Put your desired zone comment between the quotes.
Configure the failover zone, using the file you created.
phys-schost-1# zonecfg -z clu1 -f /tmp/x
Install the zone.
phys-schost-1# zoneadm -z clu1 install
Log in to the zone.
phys-schost-1# zlogin -C clu1
Open a new window to the same node and boot the zone.
phys-schost-1a# zoneadm -z clu1 boot
Close this terminal window and disconnect from the zone console.
phys-schost-1# ~~.
Copy the Solaris Container configuration file to a temporary location.
phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
Edit the /tmp/sczbt_config file and set variable values as shown:
RS=RS-SPSLD-ZONE
RG=RG-SPSLD
PARAMETERDIR=/global/mnt3/zonepar
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=RS-SPSLD-HAS
Zonename=clu1
Zonebootopt=
Milestone=multi-user-server
Mounts=
Create the zone according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.
Register the zone resource.
phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
Enable the zone resource.
phys-schost-1# clresource enable RS-SPSLD-ZONE
These steps illustrate how to install the N1 Grid Service Provisioning System software. Unless stated otherwise, perform each step that names only one node on the node where your resource group is online.
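If you are unsure which node that is, you can check the resource group status, for example:
phys-schost-1# clresourcegroup status RG-SPSLD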
Log in to the zone.
phys-schost-1# zlogin clu1
Add the sps user.
zone-1# groupadd -g 1000 sps
zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
Install the N1 Grid Service Provisioning System binaries on one node.
zone-1# cd /installation_directory
zone-1# ./cr_ld_solaris_sparc_5.2.sh
Answer the cluster-relevant questions as follows:
What base directory ... (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
Which user will own the N1 SPS Local Distributor distribution? (default: n1sps) [<valid username>] sps
Which group on this machine will own the N1 SPS Local Distributor distribution? (default: n1sps) [<valid groupname>] sps
What is the hostname or IP address of this machine? (default: phys-schost-1) ha-host-1
For all other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.
Leave the zone.
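For example, because you logged in with the zlogin command, exiting the shell returns you to the global zone:
zone-1# exit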
Copy the N1 Grid Service Provisioning System configuration file from the agent directory to its deployment location.
phys-schost-1# cp /opt/SUNWscsps/localdist/util/spsld_config /global/mnt3
Add this cluster's information to the spsld_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
RS=RS-SPSLD
RG=RG-SPSLD
PORT=
LH=
USER=sps
BASE=/global/mnt3/sps/N1_Service_Provisioning_System
HAS_RS=RS-SPSLD-HAS
. . .
ZONE=clu1
ZONE_BT=RS-SPSLD-ZONE
PROJECT=
Save and close the file.