The tasks you must perform to install and configure the N1 Grid Service Provisioning System Master Server in the zone are as follows:
Example: Preparing the Cluster for N1 Grid Service Provisioning System Master Server
Example: Configuring Cluster Resources for N1 Grid Service Provisioning System Master Server
Example: Installing the N1 Grid Service Provisioning System Master Server Software on Shared Storage
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on both nodes.
Sun Cluster core software
Sun Cluster data service for N1 Grid Service Provisioning System
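You can optionally confirm that both components are present by querying the package database on each node. This check assumes the data service is delivered in the SUNWscsps package, which supplies the /opt/SUNWscsps paths used later in this example.

phys-schost-1# pkginfo | grep -i cluster
phys-schost-1# pkginfo SUNWscsps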
In this task you install the Solaris Container on phys-schost-1 and phys-schost-2. Therefore, perform this procedure on both hosts.
On the local cluster storage of each node, create a directory for the zone root path.
This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.
phys-schost-1# mkdir /zones
Create a temporary file, for example /tmp/x, and include the following entries:
create -b
set zonepath=/zones/clu1
set autoboot=true
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=zone-1
set physical=hme0
end
add attr
set name=comment
set type=string
set value="SPS cluster zone"
end

On the second node, choose a different address (zone-2). Put your desired zone comment between the quotes of the value entry.
Configure the zone, using the file you created.
phys-schost-1# zonecfg -z clu1 -f /tmp/x
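Optionally, verify that the configuration was read correctly before you install the zone.

phys-schost-1# zonecfg -z clu1 info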
Install the zone.
phys-schost-1# zoneadm -z clu1 install
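Optionally, verify the state of the zone before you continue. The clu1 zone should be listed with the status installed.

phys-schost-1# zoneadm list -cv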
Log in to the zone.
phys-schost-1# zlogin -C clu1
Open a new window to the same node and boot the zone.
phys-schost-1# zoneadm -z clu1 boot
Close this terminal window and disconnect from the zone console.
phys-schost-1# ~~.
Register the necessary resource types on one node.
phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
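Optionally, verify that both resource types are registered.

phys-schost-1# clresourcetype list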
Create the N1 Grid Service Provisioning System resource group.
phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-SPSMA
Create the logical host.
phys-schost-1# clreslogicalhostname create -g RG-SPSMA ha-host-1
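The logical hostname ha-host-1 must be resolvable on both nodes before this command succeeds. If it is not in your naming service, add an entry of the following form to /etc/hosts on both nodes. The address shown here is a placeholder, not a value from this example.

192.168.1.10   ha-host-1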
Create the HAStoragePlus resource in the RG-SPSMA resource group.
phys-schost-1# clresource create -g RG-SPSMA -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
> -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-SPSMA-HAS
Enable the resource group.
phys-schost-1# clresourcegroup online -M RG-SPSMA
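Optionally, confirm that the resource group and its resources are online on one node.

phys-schost-1# clresourcegroup status RG-SPSMA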
These steps illustrate how to install the N1 Grid Service Provisioning System software. Whenever only one node is mentioned, it must be the node where your resource group is online.
Log in to the zone on both nodes.
phys-schost-1# zlogin clu1
phys-schost-2# zlogin clu1
Beginning on the node that owns the file system, add the sps user.
zone-1# groupadd -g 1000 sps
zone-2# groupadd -g 1000 sps
zone-1# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
zone-2# useradd -g 1000 -d /global/mnt3/sps -m -s /bin/ksh sps
Prepare the shared memory of the default project on both nodes.
zone-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
zone-2# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
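Optionally, verify that the resource control is now part of the default project.

zone-1# projects -l default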
This example is valid for Solaris 10 only. Use appropriate methods on Solaris 9.
Install the N1 Grid Service Provisioning System binaries on one node.
zone-1# cd /installation_directory
zone-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
Answer the following cluster-relevant questions as follows:
What base directory ... (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
Which user will own the N1 SPS Master Server distribution? (default: n1sps) [<valid username>] sps
Which group on this machine will own the N1 SPS Master Server distribution? (default: n1sps) [<valid groupname>] sps
What is the hostname or IP address for this Master Server? (default: phys-schost-1) ha-host-1
For all the other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the default values for all ports.
Start the master server as user sps.
zone-1# su - sps
zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
zone-1$ ./cr_server start
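Optionally, confirm that the master server is listening. This check assumes the default port 8080 that was accepted during the installation.

zone-1$ netstat -an | grep 8080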
Prepare the PostgreSQL database for monitoring.
zone-1$ cd /opt/SUNWscsps/master/util
zone-1$ ksh ./db_prep_postgres /global/mnt3/sps/N1_Service_Provisioning_System_5.2
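While the master server is still running, you can manually exercise the same HTTP check that the fault monitor performs later through the TestCmd and ReturnString entries of the parameter file. This sketch assumes that wget is installed in /usr/sfw/bin, its usual location on Solaris 10.

zone-1$ /usr/sfw/bin/wget -O - http://ha-host-1:8080/index.jsp | egrep "SSL|Service"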
Stop the master server and leave the user sps.
zone-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
zone-1$ ./cr_server stop
zone-1$ exit
Copy the N1 Grid Service Provisioning System parameter file from the master directory to its deployment location.
zone-1# cp /opt/SUNWscsps/master/bin/pfile /global/mnt3
Add this cluster's information to the parameter file pfile.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
User=sps
Basepath=/global/mnt3/sps/N1_Service_Provisioning_System_5.2
Host=ha-host-1
Tport=8080
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=20
WgetPath=
Save and close the file.
Leave the zone.
Copy the N1 Grid Service Provisioning System configuration file from the master directory to its deployment location.
phys-schost-1# cp /opt/SUNWscsps/master/util/spsma_config /global/mnt3
Add this cluster's information to the spsma_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
RS=RS-SPSMA
RG=RG-SPSMA
PORT=8080
LH=ha-host-1
PFILE=/global/mnt3/pfile
HAS_RS=RS-SPSMA-HAS
Save and close the file.