These steps show how to install the N1 Grid Service Provisioning System software. Wherever only one node is mentioned, it must be the node on which your resource group is currently online.
Prepare the shared memory of the default project on both nodes.
phys-schost-1# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
phys-schost-2# projmod -a -K "project.max-shm-memory=(priv,536870912,deny)" default
This example is valid for Solaris 10 only. On Solaris 9, use the appropriate method for that release.
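The resource-control value used above, 536870912 bytes, is 512 MB. The sketch below double-checks that arithmetic and notes, as an assumption to verify against your release documentation, the traditional Solaris 9 alternative.

```shell
# project.max-shm-memory is specified in bytes; 536870912 is 512 MB:
bytes=$((512 * 1024 * 1024))
echo "$bytes"

# Solaris 9 has no project.max-shm-memory resource control. The
# traditional tunable (assumption -- verify for your release) is an
# /etc/system entry followed by a reboot:
#   set shmsys:shminfo_shmmax=536870912
```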
Install the N1 Grid Service Provisioning System binaries on one node.
phys-schost-1# cd /installation_directory
phys-schost-1# ./cr_ms_solaris_sparc_pkg_5.2.sh
Answer the cluster-relevant questions as follows:
What base directory ... (default: /opt/SUNWn1sps) [<directory>] /global/mnt3/sps
Which user will own the N1 SPS Master Server distribution? (default: n1sps) [<valid username>] sps
Which group on this machine will own the N1 SPS Master Server distribution? (default: n1sps) [<valid groupname>] sps
What is the hostname or IP address for this Master Server? (default: phys-schost-1) ha-host-1
For all other values, you can accept the defaults or choose appropriate values. For simplicity, this example assumes the defaults for all port values.
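Note that the base directory answered above lies on the shared (global) file system, so the standby node sees the same installation after a failover. A hypothetical sanity check along these lines (the check itself is not part of the installer):

```shell
# Hypothetical sanity check: the SPS base directory must live on the
# shared (global) file system so the standby node sees the same files.
SPS_HOME=/global/mnt3/sps
case "$SPS_HOME" in
  /global/*) echo "OK: $SPS_HOME is on a global file system" ;;
  *)         echo "WARNING: $SPS_HOME is local to this node" ;;
esac
```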
Start the master server as user sps.
phys-schost-1# su - sps
phys-schost-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
phys-schost-1$ ./cr_server start
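Before continuing with the database preparation, it can help to confirm that the master server is actually accepting connections. A minimal sketch, assuming bash (for its /dev/tcp redirection) and a web port of 8080; both the port value and the helper name are assumptions, so substitute the port you chose during installation:

```shell
# Sketch only: wait up to N seconds for a TCP port to accept connections.
wait_for_port() {
  host=$1; port=$2; tries=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    # the subshell opens fd 3 to the port and closes it again on exit
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
# Example (logical host and port are this procedure's assumptions):
#   wait_for_port ha-host-1 8080 30 && echo "master server is up"
```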
Prepare the PostgreSQL database for monitoring.
phys-schost-1$ cd /opt/SUNWscsps/master/util
phys-schost-1$ ksh ./db_prep_postgres \
> /global/mnt3/sps/N1_Service_Provisioning_System_5.2
Stop the master server and exit the sps user session.
phys-schost-1$ cd /global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin
phys-schost-1$ ./cr_server stop
phys-schost-1$ exit
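Once under cluster control, the start/stop pair above is typically run from root rather than from an interactive sps session. A minimal sketch of such a wrapper, using the paths chosen in this procedure; the `sps_ctl` name and the `DRY_RUN` switch are illustrative assumptions, not part of the product:

```shell
# Sketch: wrap cr_server start/stop so root (e.g. a cluster method)
# can invoke it as the sps owner. Paths match the choices made above.
SPS_BIN=/global/mnt3/sps/N1_Service_Provisioning_System_5.2/server/bin

sps_ctl() {
  action=$1                       # "start" or "stop"
  cmd="$SPS_BIN/cr_server $action"
  if [ -n "$DRY_RUN" ]; then
    echo "su - sps -c \"$cmd\""   # show what would be run
  else
    su - sps -c "$cmd"            # run as the sps owner
  fi
}

DRY_RUN=1 sps_ctl stop
```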