This chapter explains how to install and configure Sun Cluster HA for N1 Service Provisioning System.
This chapter contains the following sections.
Installing and Configuring Sun Cluster HA for N1 Service Provisioning System
Planning the Sun Cluster HA for N1 Service Provisioning System Installation and Configuration
Installing and Configuring N1 Grid Service Provisioning System
Verifying the Installation and Configuration of N1 Grid Service Provisioning System
Installing the Sun Cluster HA for N1 Service Provisioning System Packages
Preparation of the N1 Grid Service Provisioning System Master Server's Database
Registering and Configuring Sun Cluster HA for N1 Service Provisioning System
Verifying the Sun Cluster HA for N1 Service Provisioning System Installation and Configuration
Understanding the Sun Cluster HA for N1 Service Provisioning System Master Server Parameter File
Understanding the fault monitor of the Sun Cluster HA for N1 Service Provisioning System
How to debug Sun Cluster HA for N1 Service Provisioning System
Table 1 lists the tasks for installing and configuring Sun Cluster HA for N1 Service Provisioning System. Perform these tasks in the order in which they are listed.
Table 1 Task Map: Installing and Configuring Sun Cluster HA for N1 Service Provisioning System
| Task | For Instructions, Go To |
|---|---|
| 1 Plan the installation. | Planning the Sun Cluster HA for N1 Service Provisioning System Installation and Configuration |
| 2 Install and configure the N1 Grid Service Provisioning System. | Installing and Configuring N1 Grid Service Provisioning System |
| 3 Verify installation and configuration. | Verifying the Installation and Configuration of N1 Grid Service Provisioning System |
| 4 Install Sun Cluster HA for N1 Service Provisioning System Packages. | Installing the Sun Cluster HA for N1 Service Provisioning System Packages |
| 5 Register and configure Sun Cluster HA for N1 Service Provisioning System components. | Registering and Configuring Sun Cluster HA for N1 Service Provisioning System |
| 5.1 Register and configure Sun Cluster HA for N1 Service Provisioning System Master Server as a failover data service. | |
| 5.2 Register and configure Sun Cluster HA for N1 Service Provisioning System Remote Agent as a failover data service. | |
| 5.3 Register and configure Sun Cluster HA for N1 Service Provisioning System Local Distributor as a failover data service. | |
| 6 Verify Sun Cluster HA for N1 Service Provisioning System installation and configuration. | How to Verify the Sun Cluster HA for N1 Service Provisioning System Installation and Configuration |
| 7 Understanding the Sun Cluster HA for N1 Service Provisioning System parameter file. | Understanding the Sun Cluster HA for N1 Service Provisioning System Master Server Parameter File |
| 8 Understanding the Sun Cluster HA for N1 Service Provisioning System fault monitor. | Understanding the fault monitor of the Sun Cluster HA for N1 Service Provisioning System |
| 9 How to debug Sun Cluster HA for N1 Service Provisioning System. | How to turn debug on for a Sun Cluster HA for N1 Service Provisioning System component |
The N1 Grid Service Provisioning System is Sun Microsystems' product for service (software) distribution in the N1 environment. It consists of four components:

- The Master Server, which is the core component for service distribution.
- The Remote Agent, the client component, which has to run on every target host.
- The Local Distributor, a proxy component that is used to minimize data transfer between data centers.
- A command-line interface, which can be installed on any host.

The Master Server is built upon Apache Tomcat and the PostgreSQL database. All other components are pure Java.
The Sun Cluster HA for N1 Service Provisioning System data service provides mechanisms for orderly startup and shutdown, fault monitoring, and automatic failover of the Master Server, the Remote Agent and the Local Distributor.
The following table describes the relationship between the application components and the related Sun Cluster data service.
Table 2 Protection of Components
| Component | Protected by |
|---|---|
| Master Server | Sun Cluster HA for N1 Service Provisioning System |
| Remote Agent | Sun Cluster HA for N1 Service Provisioning System |
| Local Distributor | Sun Cluster HA for N1 Service Provisioning System |
This section contains the information you need to plan your Sun Cluster HA for N1 Service Provisioning System installation and configuration.
Sun Cluster HA for N1 Service Provisioning System is supported in Solaris Containers. Sun Cluster offers two concepts for Solaris Containers:

- Zones, which are containers that boot when the node boots. These containers are combined with resource groups by specifying nodename:zonename as a valid “nodename” in the resource group's node list.
- Failover zones, which are containers managed by the Sun Cluster HA for Solaris Containers agent and represented by a resource in a resource group.
This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for N1 Service Provisioning System only.
For restrictions that apply to all data services, see the Sun Cluster Release Notes.
Your data service configuration might not be supported if you do not adhere to these restrictions.
Sun Cluster HA for N1 Service Provisioning System can be configured only as a failover data service, because each component of N1 Grid Service Provisioning System can operate as a failover data service only.
Install the N1 Grid Service Provisioning System components on shared storage. The Master Server and the Local Distributor must be installed on shared storage. Remote Agents that are configured to bind to the logical host must be installed on shared storage as well.
This restriction is met automatically in failover zone configurations.
Configure a Sun Cluster resource for the N1 Grid Service Provisioning System Remote Agent only for raw and ssl communication. As long as the Remote Agent is configured for ssh communication, the Master Server starts and stops the Remote Agent on every connection, so no Sun Cluster resource is needed. In the ssh scenario, you still have to install the N1 Grid Service Provisioning System Remote Agent on the shared storage and copy the ssh keys from one node to the remaining nodes of the cluster. This ensures that all the cluster nodes have the same ssh personality.
There is no need to copy ssh keys in failover zone configurations.
The N1 Grid Service Provisioning System configuration in a failover zone uses the smf component of Sun Cluster HA for Solaris Containers. The registration of the N1 Grid Service Provisioning System data service in a failover zone defines an smf service to control the N1 Grid Service Provisioning System database. The name of this smf service is generated according to the following naming scheme: svc:/application/sczone-agents:resource-name. No other smf service with exactly this name can exist.
The associated smf manifest is automatically created during the registration process, in the following location and naming scheme: /var/svc/manifest/application/sczone-agents/resource-name.xml. No other manifest can coexist with this name.
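The derivation of both names from a resource name can be sketched in shell. The resource name N1sps-ma-rs below is hypothetical and serves only to illustrate the naming schemes described above:

```shell
# Hypothetical resource name, for illustration only.
resource_name="N1sps-ma-rs"

# The registration derives the smf service name and the manifest
# path from the resource name, following the naming schemes above.
smf_service="svc:/application/sczone-agents:${resource_name}"
manifest="/var/svc/manifest/application/sczone-agents/${resource_name}.xml"

echo "$smf_service"
echo "$manifest"
```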
These requirements apply to Sun Cluster HA for N1 Service Provisioning System only. You must meet these requirements before you proceed with your Sun Cluster HA for N1 Service Provisioning System installation and configuration.
Your data service configuration might not be supported if you do not adhere to these requirements.
Create the N1 Grid Service Provisioning System base directory on the shared storage. The base directory can reside on a Global File System (GFS) or on a Failover File System (FFS) with an HAStoragePlus resource. It is best practice to store it on an FFS.
Shared storage is required because the Master Server uses the directory structure to store its configuration, logs, deployed applications, database, and so on. The Remote Agent and the Local Distributor store their caches below the base directory. Storing the binaries on local storage and the dynamic parts of the data on shared storage is not recommended.
It is best practice to mount Global File Systems with the /global prefix and to mount Failover File Systems with the /local prefix.
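As an illustration, an /etc/vfstab entry for a Failover File System mounted under the /local prefix might look like the following. The metadevice and mount point are hypothetical, and the mount-at-boot field is set to no because the HAStoragePlus resource mounts the file system:

```
/dev/md/spsds/dsk/d100   /dev/md/spsds/rdsk/d100   /local/sps   ufs   2   no   logging
```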
You can configure the Sun Cluster HA for N1 Service Provisioning System data service to protect one or more N1 Grid Service Provisioning System instances or components. Each instance or component needs to be covered by one Sun Cluster HA for N1 Service Provisioning System resource. The dependencies between the Sun Cluster HA for N1 Service Provisioning System resource and other necessary resources are described in the following table.
Table 3 Dependencies Between Sun Cluster HA for N1 Service Provisioning System Components in Failover Configurations
| Component | Dependency |
|---|---|
| N1 Grid Service Provisioning System resource in a Solaris 10 global zone or zone, or in Solaris 9 | SUNW.HAStoragePlus (required only if the configuration uses a failover file system or file systems in a zone); SUNW.LogicalHostName |
| N1 Grid Service Provisioning System resource in a Solaris 10 failover zone | Sun Cluster HA for Solaris Containers boot resource; SUNW.HAStoragePlus; SUNW.LogicalHostName (required only if the zone's boot resource does not manage the zone's IP address) |
For more detailed information about N1 Grid Service Provisioning System, refer to the product documentation on the docs.sun.com webpage or the documentation delivered with the product.
Each component of Sun Cluster HA for N1 Service Provisioning System has configuration and registration files in the directory /opt/SUNWscsps/component-dir/util, where component-dir stands for one of the directory names master, localdist, or remoteagent. These files let you register the N1 Grid Service Provisioning System component with Sun Cluster.
Within these files, you apply the appropriate dependencies.
```
# cd /opt/SUNWscsps/master
# ls -l util
total 38
-r-xr-xr-x   1 root   bin    913 Jun  6 13:54 db_prep_postgres
-r-xr-xr-x   1 root   bin   1271 Jun  6 13:54 spsma_config
-r-xr-xr-x   1 root   bin   7709 Jun  6 13:54 spsma_register
-r-xr-xr-x   1 root   bin   5276 Jun  6 13:54 spsma_smf_register
-r-xr-xr-x   1 root   bin   1348 Jun  6 13:54 spsma_smf_remove
# more util/spsma_config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident   "@(#)spsma_config.ksh 1.2     06/03/17 SMI"

# This file will be sourced in by spsma_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#      RS - name of the resource for the application
#      RG - name of the resource group containing RS
#    PORT - name of the port number to satisfy GDS registration
#      LH - name of the LogicalHostname SC resource
#   PFILE - name of the parameter file for additional variables
#  HAS_RS - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
#    ZONE - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials.
#           Optional

RS=
RG=
PORT=8080
LH=
PFILE=
HAS_RS=

# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
```
The spsma_register script validates the variables of the spsma_config script and registers the resource for the master server.
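For example, a Master Server registration using a failover file system might set the following values in spsma_config. All names are hypothetical and must match the resources you created earlier in your own cluster:

```shell
# Hypothetical example values for spsma_config; adjust every name
# to match your own resource group, logical host, and storage resource.
RS=N1sps-ma-rs
RG=N1sps-ma-rg
PORT=8080
LH=N1sps-ma-lh
PFILE=/local/sps/pfile
HAS_RS=N1sps-ma-has-rs
```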
The Master Server component has an additional script, db_prep_postgres, which prepares the PostgreSQL database of the Master Server for monitoring.
```
# cd /opt/SUNWscsps/remoteagent
# ls -l util
total 34
-r-xr-xr-x   1 root   bin   1363 Jun  6 13:54 spsra_config
-r-xr-xr-x   1 root   bin   7556 Jun  6 13:54 spsra_register
-r-xr-xr-x   1 root   bin   4478 Jun  6 13:54 spsra_smf_register
-r-xr-xr-x   1 root   bin   1347 Jun  6 13:54 spsra_smf_remove
# more util/spsra_config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident   "@(#)spsra_config.ksh 1.2     06/03/17 SMI"

# This file will be sourced in by spsra_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#      RS - name of the resource for the application
#      RG - name of the resource group containing RS
#    PORT - name of the port number to satisfy GDS registration
#      LH - name of the LogicalHostname SC resource
#    USER - name of the owner of the remote agent
#    BASE - name of the directory where the N1 Service Provisioning Server
#           is installed
#  HAS_RS - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
#    ZONE - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials.
#           Optional

RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=

# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
```
The spsra_register script validates the variables of the spsra_config script and registers the resource for the remote agent.
```
# cd /opt/SUNWscsps/localdist
# ls -l util
total 34
-r-xr-xr-x   1 root   bin   1369 Jun  6 13:54 spsld_config
-r-xr-xr-x   1 root   bin   7550 Jun  6 13:54 spsld_register
-r-xr-xr-x   1 root   bin   4501 Jun  6 13:54 spsld_smf_register
-r-xr-xr-x   1 root   bin   1347 Jun  6 13:54 spsld_smf_remove
# more util/spsld_config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident   "@(#)spsld_config.ksh 1.2     06/03/17 SMI"

# This file will be sourced in by spsld_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#      RS - name of the resource for the application
#      RG - name of the resource group containing RS
#    PORT - name of the port number to satisfy GDS registration
#      LH - name of the LogicalHostname SC resource
#    USER - name of the owner of the local distributor
#    BASE - name of the directory where the N1 Service Provisioning Server
#           is installed
#  HAS_RS - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
#    ZONE - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials.
#           Optional

RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=

# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
```
The spsld_register script validates the variables of the spsld_config script and registers the resource for the local distributor.
This section contains the procedures for installing and configuring N1 Grid Service Provisioning System components. The components are the Master Server, the Remote Agent and the Local Distributor.
Determine how N1 Grid Service Provisioning System will be deployed within Sun Cluster.
Determine which component of the N1 Grid Service Provisioning System you will use.
Determine which user name will run the N1 Grid Service Provisioning System component.
Determine how many N1 Grid Service Provisioning System component versions and instances will be deployed.
Determine which Cluster File System will be used by each N1 Grid Service Provisioning System component instance.
Determine the type of the target zone where you will install N1 Grid Service Provisioning System. Valid zone types are the global zone, a failover zone, or a zone.
To deploy N1 Grid Service Provisioning System complete one of the following tasks:
To install and configure N1 Grid Service Provisioning System in a global zone configuration, complete the following tasks:
How to enable the N1 Grid Service Provisioning System Components to run in the Global Zone
How to Install the N1 Grid Service Provisioning System Components in a Global Zone
To install and configure N1 Grid Service Provisioning System in a zone configuration, complete the following tasks:
How to enable the N1 Grid Service Provisioning System Components to run in a Zone
How to Install the N1 Grid Service Provisioning System Components in a Zone
To install and configure N1 Grid Service Provisioning System in a failover zone configuration, complete the following tasks:
How to enable the N1 Grid Service Provisioning System Components to run in a Failover Zone
How to Install the N1 Grid Service Provisioning System Components in a Failover Zone
You will find installation examples for each zone type in:
Appendix A, Deployment Example: Installing N1 Grid Service Provisioning System in the Global Zone
Appendix B, Deployment Example: Installing N1 Grid Service Provisioning System in the Failover Zone
Appendix C, Deployment Example: Installing N1 Grid Service Provisioning System in the Zone
Perform these steps on one node only.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System.
Register the SUNW.HAStoragePlus and SUNW.gds resource types.
It is assumed that the file system of the N1 Grid Service Provisioning System component will be mounted as a failover file system.
```
# clresourcetype register SUNW.gds SUNW.HAStoragePlus
```
Create a failover resource group.
```
# clresourcegroup create N1sps-component-resource-group
```
Create a resource for the N1 Grid Service Provisioning System component Disk Storage.
```
# clresource create \
> -g N1sps-component-resource-group \
> -t SUNW.HAStoragePlus \
> -p FilesystemMountPoints=N1sps-component-instance-mount-points \
> N1sps-component-has-resource
```
Create a resource for the N1 Grid Service Provisioning System Master component Logical Host name.
```
# clreslogicalhostname create \
> -g N1sps-component-resource-group \
> -h N1 Grid Service Provisioning System-logical-hostname \
> N1sps-component-logical-hostname
```
Enable the failover resource group, which now includes the N1 Grid Service Provisioning System Disk Storage and Logical Hostname resources.
```
# clresourcegroup online -M N1sps-component-resource-group
```
Create user and group if required. If the N1 Grid Service Provisioning System is to run under a non-root user, you have to create the appropriate user and the appropriate group. For these tasks use the following commands on every node.

```
# groupadd -g 1000 sps
# useradd -u 1000 -g 1000 -d /global/sps -s /bin/ksh sps
```
Install the N1 Grid Service Provisioning System components — Install the appropriate N1 Grid Service Provisioning System components on one node. Use a shared file system within Sun Cluster for the installation location.
It is recommended that you install N1 Grid Service Provisioning System onto shared disks. For a discussion of the advantages and disadvantages of installing the software on a local versus a cluster file system, see “Determining the Location of the Application Binaries” in the Sun Cluster Data Services Installation and Configuration Guide.
Refer to the N1 Grid Service Provisioning System product documentation on http://docs.sun.com for instructions about installing N1 Grid Service Provisioning System.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System.
Create and boot your zone N1 Grid Service Provisioning System-zone on all the nodes that will host your N1 Grid Service Provisioning System database.
Register the SUNW.HAStoragePlus and SUNW.gds resource types.
It is assumed that the file system of the N1 Grid Service Provisioning System component will be mounted as a failover file system.
```
# clresourcetype register SUNW.gds SUNW.HAStoragePlus
```
Create a failover resource group.
```
# clresourcegroup create \
> -n node1:N1 Grid Service Provisioning System-zone,node2:N1 Grid Service Provisioning System-zone \
> N1sps-component-resource-group
```
Create a resource for the N1 Grid Service Provisioning System component Disk Storage.
```
# clresource create \
> -g N1sps-component-resource-group \
> -t SUNW.HAStoragePlus \
> -p FilesystemMountPoints=N1sps-component-instance-mount-points \
> N1sps-component-has-resource
```
Create a resource for the N1 Grid Service Provisioning System Master component Logical Host name.
```
# clreslogicalhostname create \
> -g N1sps-component-resource-group \
> -h N1 Grid Service Provisioning System-logical-hostname \
> N1sps-component-logical-hostname
```
Enable the failover resource group, which now includes the N1 Grid Service Provisioning System Disk Storage and Logical Hostname resources.
```
# clresourcegroup online -M N1sps-component-resource-group
```
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.
Enter the target zone.

```
# zlogin sps-zone
```
Create user and group if required. If the N1 Grid Service Provisioning System is to run under a non-root user, you have to create the appropriate user and the appropriate group. For these tasks use the following commands on every node.

```
# groupadd -g 1000 sps
# useradd -u 1000 -g 1000 -d /global/sps -s /bin/ksh sps
```
Install the N1 Grid Service Provisioning System components — Install the appropriate N1 Grid Service Provisioning System components on one node. Use a shared file system within Sun Cluster for the installation location.
It is recommended that you install N1 Grid Service Provisioning System onto shared disks. For a discussion of the advantages and disadvantages of installing the software on a local versus a cluster file system, see “Determining the Location of the Application Binaries” in the Sun Cluster Data Services Installation and Configuration Guide.
Refer to the N1 Grid Service Provisioning System product documentation on http://docs.sun.com for instructions about installing N1 Grid Service Provisioning System.
You installed the N1 Grid Service Provisioning System onto shared storage, so installing the software on one node is sufficient.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System.
As superuser register the SUNW.HAStoragePlus and the SUNW.gds resource types.
```
# clresourcetype register SUNW.HAStoragePlus SUNW.gds
```
Create a failover resource group.
```
# clresourcegroup create N1 Grid Service Provisioning System-resource-group
```
Create a resource for the N1 Grid Service Provisioning System zone's disk storage.
```
# clresource create -g N1 Grid Service Provisioning System-resource-group \
-t SUNW.HAStoragePlus \
-p FilesystemMountPoints=N1 Grid Service Provisioning System-instance-mount-points \
N1 Grid Service Provisioning System-has-resource
```
(Optional) If you want protection against a total adapter failure for your public network, create a resource for the N1 Grid Service Provisioning System's logical hostname.
```
# clreslogicalhostname create -g N1 Grid Service Provisioning System-resource-group \
-h logical-hostname \
N1 Grid Service Provisioning System-logical-hostname-resource-name
```
Place the resource group in the managed state.
```
# clresourcegroup online -M N1 Grid Service Provisioning System-resource-group
```
Install the zone.
Install the zone according to the Sun Cluster HA for Solaris Containers agent documentation, assuming that the resource name is N1 Grid Service Provisioning System-zone-rs and that the zone name is N1 Grid Service Provisioning System-zone.
Verify the zone's installation.
```
# zoneadm -z N1 Grid Service Provisioning System-zone boot
# zoneadm -z N1 Grid Service Provisioning System-zone halt
```
Register the zone's boot component.
Copy the container resource boot component configuration file.
```
# cp /opt/SUNWsczone/sczbt/util/sczbt_config zones-target-configuration-file
```
Use a plain text editor to set the following variables:
```
RS=N1 Grid Service Provisioning System-zone-rs
RG=N1 Grid Service Provisioning System-resource-group
PARAMETERDIR=N1 Grid Service Provisioning System-zone-parameter-directory
SC_NETWORK=true|false
SC_LH=N1 Grid Service Provisioning System-logical-hostname-resource-name
FAILOVER=true|false
HAS_RS=N1 Grid Service Provisioning System-has-resource
Zonename=N1 Grid Service Provisioning System-zone
Zonebootopt=zone-boot-options
Milestone=zone-boot-milestone
Mounts=
```
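A hypothetical set of values for a failover zone named sps-zone could look like the following. Every name here is an example only and must be replaced with the resource and zone names from your own configuration:

```shell
# Hypothetical example values for the sczbt configuration file;
# all resource, group, and zone names are placeholders.
RS=sps-zone-rs
RG=sps-rg
PARAMETERDIR=/global/sps/params
SC_NETWORK=true
SC_LH=sps-lh-rs
FAILOVER=true
HAS_RS=sps-has-rs
Zonename=sps-zone
Zonebootopt=
Milestone=multi-user-server
Mounts=
```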
Create the parameter directory for your zone's resource.
```
# mkdir N1 Grid Service Provisioning System-zone-parameter-directory
```
Execute the Sun Cluster HA for Solaris Containers registration script.
```
# /opt/SUNWsczone/sczbt/util/sczbt_register -f zones-target-configuration-file
```
Enable the Solaris Container resource and bring the resource group online.
```
# clresource enable N1 Grid Service Provisioning System-zone-rs
# clresourcegroup online N1 Grid Service Provisioning System-resource-group
```
Ensure that you are on the node where you enabled your resource group.
Enter the target zone.

```
# zlogin sps-zone
```
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.
Create user and group if required. If the N1 Grid Service Provisioning System is to run under a non-root user, you have to create the appropriate user and the appropriate group. For these tasks use the following commands on every node.

```
# groupadd -g 1000 sps
# useradd -u 1000 -g 1000 -d /global/sps -s /bin/ksh sps
```
Install the N1 Grid Service Provisioning System components — Install the appropriate N1 Grid Service Provisioning System components on one node. Use a shared file system within Sun Cluster for the installation location.
It is recommended that you install N1 Grid Service Provisioning System onto shared disks. For a discussion of the advantages and disadvantages of installing the software on a local versus a cluster file system, see “Determining the Location of the Application Binaries” in the Sun Cluster Data Services Installation and Configuration Guide. Because the root file system of a failover zone is installed on shared storage, any directory of the root file system is sufficient.
Refer to the N1 Grid Service Provisioning System product documentation on http://docs.sun.com for instructions about installing N1 Grid Service Provisioning System.
You installed the N1 Grid Service Provisioning System in a failover zone on shared storage, so installing the software on one node is sufficient.
This section contains the procedure you need for verifying the installation and configuration of N1 Grid Service Provisioning System.
This procedure does not verify that your applications are highly available because you have not installed your data service yet. Select the appropriate procedure for the N1 Grid Service Provisioning System application you installed.
This procedure is for the installation verification of the master server.
(Optional) Log in to the target zone if the Master Server is installed in a non-global zone.

```
# zlogin sps-zone
```
Start the N1 Grid Service Provisioning System Master Server.
Switch to the N1 Grid Service Provisioning System Master Server's user name (in the following example, it is sps) and change to the directory where the software is located. In the following example, the software version is 4.1.
The output messages of the start and shutdown commands are highly version dependent.
```
# su - sps
$ cd N1_Service_Provisioning_System_4.1
$ cd server/bin
$ ./cr_server start
*** Starting database
*** Starting cr_server
```
Check the installation.
Start a web browser and connect to the cluster node at http://logical-hostname:port, where port is the web administration port configured during the installation of the Master Server. If you see the default N1 Grid Service Provisioning System login page, everything is working correctly.
Stop the N1 Grid Service Provisioning System Master Server.
```
$ ./cr_server stop
*** Stopping cr_server
Waiting for CR to complete shutdown...
*** Stopping database
waiting for postmaster to shut down.......done
postmaster successfully shut down
```
(Optional) Leave the target zone.
This procedure is for the installation verification of the N1 Grid Service Provisioning System Remote Agent.
(Optional) Log in to the target zone if the Remote Agent is installed in a non-global zone.

```
# zlogin sps-zone
```
Start the N1 Grid Service Provisioning System Remote Agent.
Switch to the N1 Grid Service Provisioning System Remote Agent's user name (in the following example, it is sps) and change to the directory where the software is located. In the following example, the software version is 4.1.
The output messages of the start and shutdown commands are highly version dependent.
```
# su - sps
$ cd N1_Service_Provisioning_System
$ cd agent/bin
$ ./cr_agent start
*** Starting cr_agent
```
Check the installation.
Check the process table with the following command:
```
$ /usr/ucb/ps -auxww |grep java|grep agent >/dev/null;echo $?
0
```
If the response is 0, everything is working correctly. You can omit the |grep agent >/dev/null;echo $? portion; in that case, you have to see a java process with agent in the process string.
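The same check can be wrapped in a small portable helper function. This is a sketch, not part of the product; it uses ps -ef instead of the BSD-style /usr/ucb/ps, and the background sleep process below is only a stand-in for the java agent process:

```shell
# Portable sketch of the process check above: proc_running returns
# 0 (success) when a process matching the pattern is currently running.
proc_running() {
  pattern="$1"
  ps -ef | grep "$pattern" | grep -v grep > /dev/null
}

# Demonstration with a hypothetical background process.
sleep 30 &
sleep_pid=$!
if proc_running "sleep 30"; then
  echo running
fi
kill "$sleep_pid"
```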
Stop the N1 Grid Service Provisioning System Remote Agent.
```
$ ./cr_agent stop
*** Stopping cr_agent
```
(Optional) Leave the target zone.
This procedure is for the installation verification of the N1 Grid Service Provisioning System Local Distributor.
(Optional) Log in to the target zone if the Local Distributor is installed in a non-global zone.

```
# zlogin sps-zone
```
Start the N1 Grid Service Provisioning System Local Distributor.
Switch to the N1 Grid Service Provisioning System Local Distributor's user name (in the following example, it is sps) and change to the directory where the software is located. In the following example, the software version is 4.1.
The output messages of the start and shutdown commands are highly version dependent.
```
# su - sps
$ cd N1_Service_Provisioning_System
$ cd ld/bin
$ ./cr_ld start
*** Starting cr_ld
```
Check the installation.
Check the process table with the following command:
```
$ /usr/ucb/ps -auxww |grep java|grep ld >/dev/null;echo $?
0
```
If the response is 0, everything is working correctly. You can omit the |grep ld >/dev/null;echo $? portion; in that case, you have to see a java process with ld in the process string.
Stop the N1 Grid Service Provisioning System Local Distributor.
```
$ ./cr_ld stop
*** Stopping cr_ld
```
(Optional) Leave the target zone.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java Enterprise System Installation Wizard.
Perform this procedure on each cluster node where you are installing the Sun Cluster HA for N1 Service Provisioning System packages.
You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.
Even if you plan to configure this data service to run in non-global zones, install the packages for this data service in the global zone. The packages are propagated to any existing non-global zones and to any non-global zones that are created after you install the packages.
Ensure that you have the Sun Java Availability Suite DVD-ROM.
If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.
On the cluster node where you are installing the data service packages, become superuser.
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.
Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.
Start the Sun Java Enterprise System Installation Wizard.
# ./installer
When you are prompted, accept the license agreement.
If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.
From the list of Sun Cluster agents under Availability Services, select the data service for N1 Grid Service Provisioning System.
If you require support for languages other than English, select the option to install multilingual packages.
English language support is always installed.
When prompted whether to configure the data service now or later, choose Configure Later.
Choose Configure Later to perform the configuration after the installation.
Follow the instructions on the screen to install the data service packages on the node.
The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.
(GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.
The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.
Exit the Sun Java Enterprise System Installation Wizard.
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
See Preparation of the N1 Grid Service Provisioning System Master Servers database to prepare the N1 Grid Service Provisioning System Master Servers database.
In this section, you prepare the database of the N1 Grid Service Provisioning System Master Server. The database needs to contain the user sc_test and the table sc_test. The user and the table are needed to monitor the PostgreSQL database. The script db_prep_postgres is provided to create the user and the table.
Start the N1 Grid Service Provisioning System Master Server as described in How to Verify the Installation and Configuration of N1 Grid Service Provisioning System Master Server.
Remain logged in as the Master Server user and prepare the database.
For the preparation of the database, you need the N1 Grid Service Provisioning System Master Server's base directory. It is the directory that contains the server/bin directory. You prepare the database with the following command:
$ /opt/SUNWscsps/master/util/db_prep_postgres <Base Directory of the Master Server> CREATE USER CREATE
An example for the Base Directory is /global/sps/N1_Service_Provisioning_System_4.1.
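Before running db_prep_postgres, you can confirm that the path you intend to pass really is a base directory, that is, that it contains server/bin. The following is a hedged sketch, not part of the product; the check_basedir function is illustrative, and the path is the example from above.

```shell
# check_basedir: succeeds only if the given directory contains server/bin,
# which is what identifies a Master Server base directory in this section.
check_basedir() {
    [ -d "$1/server/bin" ]
}

# Illustrative path from the example above; substitute your own.
BASEDIR=/global/sps/N1_Service_Provisioning_System_4.1

if check_basedir "$BASEDIR"; then
    echo "OK: $BASEDIR contains server/bin"
else
    echo "NOT a base directory: $BASEDIR/server/bin is missing"
fi
```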
Stop the N1 Grid Service Provisioning System Master Server as described in How to Verify the Installation and Configuration of N1 Grid Service Provisioning System Master Server.
This section contains the procedures that you need to configure the Master Server, the Remote Agent, or the Local Distributor of Sun Cluster HA for N1 Service Provisioning System. Sun Cluster supports the configuration of the N1 Grid Service Provisioning System in the global zone, in a failover zone, and in a zone. If you install it on Solaris 9, use the global zone procedures.
If you want to install the Master Server, complete one of the tasks:
If you want to install the Remote Agent, complete one of the tasks:
If you want to install the Local Distributor, complete one of the tasks:
This procedure assumes that you installed the data service packages. Perform this procedure if you want to install the N1 Grid Service Provisioning System master server in the global zone.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Master Server.
Prepare the parameter file, which is required by the Sun Cluster HA for N1 Service Provisioning System Master Server.
The parameter files need to be available on every node that can host the N1 Grid Service Provisioning System Master Server data service. For a failover configuration, store them on the shared storage. The parameter files for a specific instance of N1 Grid Service Provisioning System Master Server must not differ between the nodes. For a zone, you must install them on the shared storage of that zone.
# cd /opt/SUNWscsps/master/bin
# cp pfile desired place
Choose a location on the shared storage for the pfile. Edit the parameter file pfile and follow the comments within that file. For example:
#!/usr/bin/ksh
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)pfile.ksh 1.2 06/03/17 SMI"
#
# Set the Centerrun specific environment variables which the start, stop
# and check functions will use
#
# User          Centerrun User
# Basepath      Absolute path to N1 Grid Service Provisioning System's Apache Basedir directory
# Host          Hostname to test Apache Tomcat
# Tport         Port where the N1 Grid Service Provisioning System's Apache Tomcat instance
#               is configured to listen
# TestCmd       Apache Tomcat test command; this variable needs different contents, depending on
#               your master server configuration.
#               Your master server answers http requests, configure:
#               TestCmd="get /index.jsp"
#               Your master server answers https requests, configure:
#               TestCmd="/index.jsp"
# ReturnString  Use one of the strings below according to your N1 Grid Service Provisioning System
#               Server Version.
#               Version 4.1 and 5.x = SSL|Service
# Startwait     Sleeping $Startwait seconds after completion of the
#               start command
# WgetPath      If the Master Server is configured to answer https requests only, the absolute path
#               to the wget command is needed here. Omit this variable if your master server answers
#               http requests.
#               example: WgetPath=/usr/sfw/bin/wget # Optional
User=
Basepath=
Host=
Tport=
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=
WgetPath=
If you configured your master server to answer https requests, you need to install an https-capable wget. Follow the comments in the parameter file to configure the TestCmd variable.
The following is an example for an N1 Grid Service Provisioning System 4.1 Master Server.
User=sps
Basepath=/global/sps/N1_Service_Provisioning_System_4.1
Host=N1spsma-lh
Tport=8080
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=20
WgetPath=
This example is from an N1 Grid Service Provisioning System 4.1 Master Server. Apache Tomcat is configured to listen on port 8080. The default start page contains the string Service, or the string SSL if you configured it to respond on the SSL port.
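The fault monitor's sanity check amounts to matching ReturnString, which is an egrep pattern, against the page returned from Host:Tport. The following hedged sketch reproduces only that matching logic against a canned response so that it can run anywhere; in a live check you would obtain the response from the master server instead, for example with /usr/sfw/bin/wget against http://N1spsma-lh:8080/index.jsp (names taken from the example above). The check_response function is illustrative, not part of the agent.

```shell
# check_response: succeeds when the given text matches the ReturnString
# egrep pattern, mirroring the string match the fault monitor performs.
check_response() {
    printf '%s\n' "$1" | grep -E -q "$2"
}

ReturnString="SSL|Service"

# Canned stand-in for the master server's start page.
response="<html><title>N1 Grid Service Provisioning System</title></html>"

if check_response "$response" "$ReturnString"; then
    echo "probe OK: response matches $ReturnString"
else
    echo "probe FAILED: no match for $ReturnString"
fi
```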
Configure the registration scripts for each required N1 Grid Service Provisioning System Master Server instance.
# cd /opt/SUNWscsps/master/util
# cp spsma_config desired place
Edit the spsma_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsma_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsma_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# PFILE   - name of the parameter file for additional variables
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=8080
LH=
PFILE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for the Sun Cluster HA for N1 Service Provisioning System Master Server.
RS=N1spsma-res
RG=N1spsma-rg
PORT=8080
LH=N1spsma-lh
PFILE=/global/mnt1/N1spsma-pfile
HAS_RS=N1spsma-hastplus-res
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsma_config register the resource.
# ksh ./spsma_register -f desired_place/spsma_config
Registration of resource N1spsma-rs succeeded
Validate resource N1spsma-rs in resourcegroup spsma-rg
Validation of resource spsma-rs succeeded
#
Enable each N1 Grid Service Provisioning System Master Server resource.
# clresource status
# clresource enable N1spsma-resource
(Optional) Repeat Step 2 to Step 5 for each N1 Grid Service Provisioning System Master Server instance you need.
This procedure assumes that you installed the data service packages. Perform this procedure if you want to install the N1 Grid Service Provisioning System Master Server in a failover zone.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Master Server.
Log in to your failover zone.
# zlogin sps-zone
Prepare the parameter file, which is required by the Sun Cluster HA for N1 Service Provisioning System Master Server.
The parameter files need to be available on every node that can host the N1 Grid Service Provisioning System Master Server data service. For a failover configuration, store them on the shared storage. The parameter files for a specific instance of N1 Grid Service Provisioning System Master Server must not differ between the nodes.
# cd /opt/SUNWscsps/master/bin
# cp pfile desired place
Choose a location on the shared storage for the pfile. Edit the parameter file pfile and follow the comments within that file. For example:
#!/usr/bin/ksh
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)pfile.ksh 1.2 06/03/17 SMI"
#
# Set the Centerrun specific environment variables which the start, stop
# and check functions will use
#
# User          Centerrun User
# Basepath      Absolute path to N1 Grid Service Provisioning System's Apache Basedir directory
# Host          Hostname to test Apache Tomcat
# Tport         Port where the N1 Grid Service Provisioning System's Apache Tomcat instance
#               is configured to listen
# TestCmd       Apache Tomcat test command; this variable needs different contents, depending on
#               your master server configuration.
#               Your master server answers http requests, configure:
#               TestCmd="get /index.jsp"
#               Your master server answers https requests, configure:
#               TestCmd="/index.jsp"
# ReturnString  Use one of the strings below according to your N1 Grid Service Provisioning System
#               Server Version.
#               Version 4.1 and 5.x = SSL|Service
# Startwait     Sleeping $Startwait seconds after completion of the
#               start command
# WgetPath      If the Master Server is configured to answer https requests only, the absolute path
#               to the wget command is needed here. Omit this variable if your master server answers
#               http requests.
#               example: WgetPath=/usr/sfw/bin/wget # Optional
User=
Basepath=
Host=
Tport=
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=
WgetPath=
If you configured your master server to answer https requests, you need to install an https-capable wget. Follow the comments in the parameter file to configure the TestCmd variable.
The following is an example for an N1 Grid Service Provisioning System 4.1 Master Server.
User=sps
Basepath=/global/sps/N1_Service_Provisioning_System_4.1
Host=N1spsma-lh
Tport=8080
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=20
WgetPath=
This example is from an N1 Grid Service Provisioning System 4.1 Master Server. Apache Tomcat is configured to listen on port 8080. The default start page contains the string Service, or the string SSL if you configured it to respond on the SSL port.
Leave the failover zone.
Configure the registration scripts for each required N1 Grid Service Provisioning System Master Server instance.
# cd /opt/SUNWscsps/master/util
# cp spsma_config desired place
Edit the spsma_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsma_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsma_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# PFILE   - name of the parameter file for additional variables
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=8080
LH=
PFILE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for the Sun Cluster HA for N1 Service Provisioning System Master Server.
RS=N1spsma-res
RG=N1spsma-rg
PORT=8080
LH=N1spsma-lh
PFILE=/global/mnt1/N1spsma-pfile
HAS_RS=N1spsma-hastplus-res
ZONE=sps-zone
ZONE_BT=sps-zone-rs
PROJECT=
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsma_config register the resource.
# ksh ./spsma_register -f desired_place/spsma_config
Enable each N1 Grid Service Provisioning System Master Server resource.
# clresource status
# clresource enable N1spsma-resource
(Optional) Repeat Step 2 to Step 7 for each N1 Grid Service Provisioning System Master Server instance you need.
This procedure assumes that you installed the data service packages. Perform this procedure if you want to install the N1 Grid Service Provisioning System Master Server in a zone.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Master Server.
Log in to your zone.
# zlogin sps-zone
Prepare the parameter file, which is required by the Sun Cluster HA for N1 Service Provisioning System Master Server.
The parameter files need to be available on every node that can host the N1 Grid Service Provisioning System Master Server data service. For a failover configuration, store them on the shared storage. The parameter files for a specific instance of N1 Grid Service Provisioning System Master Server must not differ between the nodes.
# cd /opt/SUNWscsps/master/bin
# cp pfile desired place
Choose a location on the shared storage for the pfile. Edit the parameter file pfile and follow the comments within that file. For example:
#!/usr/bin/ksh
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)pfile.ksh 1.2 06/03/17 SMI"
#
# Set the Centerrun specific environment variables which the start, stop
# and check functions will use
#
# User          Centerrun User
# Basepath      Absolute path to N1 Grid Service Provisioning System's Apache Basedir directory
# Host          Hostname to test Apache Tomcat
# Tport         Port where the N1 Grid Service Provisioning System's Apache Tomcat instance
#               is configured to listen
# TestCmd       Apache Tomcat test command; this variable needs different contents, depending on
#               your master server configuration.
#               Your master server answers http requests, configure:
#               TestCmd="get /index.jsp"
#               Your master server answers https requests, configure:
#               TestCmd="/index.jsp"
# ReturnString  Use one of the strings below according to your N1 Grid Service Provisioning System
#               Server Version.
#               Version 4.1 and 5.x = SSL|Service
# Startwait     Sleeping $Startwait seconds after completion of the
#               start command
# WgetPath      If the Master Server is configured to answer https requests only, the absolute path
#               to the wget command is needed here. Omit this variable if your master server answers
#               http requests.
#               example: WgetPath=/usr/sfw/bin/wget # Optional
User=
Basepath=
Host=
Tport=
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=
WgetPath=
If you configured your master server to answer https requests, you need to install an https-capable wget. Follow the comments in the parameter file to configure the TestCmd variable.
The following is an example for an N1 Grid Service Provisioning System 4.1 Master Server.
User=sps
Basepath=/global/sps/N1_Service_Provisioning_System_4.1
Host=N1spsma-lh
Tport=8080
TestCmd="get /index.jsp"
ReturnString="SSL|Service"
Startwait=20
WgetPath=
This example is from an N1 Grid Service Provisioning System 4.1 Master Server. Apache Tomcat is configured to listen on port 8080. The default start page contains the string Service, or the string SSL if you configured it to respond on the SSL port.
Leave the zone.
Configure the registration scripts for each required N1 Grid Service Provisioning System Master Server instance.
# cd /opt/SUNWscsps/master/util
# cp spsma_config desired place
Edit the spsma_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsma_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsma_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# PFILE   - name of the parameter file for additional variables
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=8080
LH=
PFILE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for the Sun Cluster HA for N1 Service Provisioning System Master Server.
RS=N1spsma-res
RG=N1spsma-rg
PORT=8080
LH=N1spsma-lh
PFILE=/global/mnt1/N1spsma-pfile
HAS_RS=N1spsma-hastplus-res
ZONE=
ZONE_BT=
PROJECT=
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsma_config register the resource.
# ksh ./spsma_register -f desired_place/spsma_config
Enable each N1 Grid Service Provisioning System Master Server resource.
# clresource status
# clresource enable N1spsma-resource
(Optional) Repeat Step 2 to Step 7 for each N1 Grid Service Provisioning System Master Server instance you need.
Perform this procedure if you want to install the N1 Grid Service Provisioning System Remote Agent in the global zone, or in a zone. This procedure assumes that you installed the data service packages.
The procedure is identical in the global zone and in a zone.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Remote Agent.
Configure the registration scripts for each required N1 Grid Service Provisioning System Remote Agent instance.
# cd /opt/SUNWscsps/remoteagent/util
# cp spsra_config desired place
Edit the spsra_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsra_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsra_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# USER    - name of the owner of the remote agent
# BASE    - name of the directory where the N1 Service Provisioning Server
#           is installed
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for the N1 Grid Service Provisioning System 4.1 Remote Agent.
RS=N1spsra-res
RG=N1spsra-rg
PORT=22
LH=N1spsra-lh
USER=sps
BASE=/global/sps/N1_Service_Provisioning_System
HAS_RS=N1spsra-hastplus-res
ZONE=
ZONE_BT=
PROJECT=
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsra_config register the resource.
# ksh ./spsra_register -f desired_place/spsra_config
Enable each N1 Grid Service Provisioning System Remote Agent resource.
# clresource status
# clresource enable N1spsra-resource
(Optional) Repeat Step 2 to Step 5 for each N1 Grid Service Provisioning System Remote Agent instance you need.
This procedure assumes that you installed the data service packages.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Remote Agent.
Configure the registration scripts for each required N1 Grid Service Provisioning System Remote Agent instance.
# cd /opt/SUNWscsps/remoteagent/util
# cp spsra_config desired place
Edit the spsra_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsra_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsra_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# USER    - name of the owner of the remote agent
# BASE    - name of the directory where the N1 Service Provisioning Server
#           is installed
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for the N1 Grid Service Provisioning System 4.1 Remote Agent.
RS=N1spsra-res
RG=N1spsra-rg
PORT=22
LH=N1spsra-lh
USER=sps
BASE=/global/sps/N1_Service_Provisioning_System
HAS_RS=N1spsra-hastplus-res
ZONE=sps-zone
ZONE_BT=sps-zone-rs
PROJECT=
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsra_config register the resource.
# ksh ./spsra_register -f desired_place/spsra_config
Enable each N1 Grid Service Provisioning System Remote Agent resource.
# clresource status
# clresource enable N1spsra-resource
(Optional) Repeat Step 2 to Step 4 for each N1 Grid Service Provisioning System Remote Agent instance you need.
Perform this procedure if you want to install the N1 Grid Service Provisioning System Local Distributor in the global zone, or in a zone. This procedure assumes that you installed the data service packages.
The procedure is identical in the global zone and in a zone.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Local Distributor.
Configure the registration scripts for each required N1 Grid Service Provisioning System Local Distributor instance.
# cd /opt/SUNWscsps/localdist/util
# cp spsld_config desired place
Edit the spsld_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsld_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsld_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# USER    - name of the owner of the local distributor
# BASE    - name of the directory where the N1 Service Provisioning Server
#           is installed
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for N1 Grid Service Provisioning System 4.1 Local Distributor.
RS=N1spsld-res
RG=N1spsld-rg
PORT=22
LH=N1spsld-lh
USER=sps
BASE=/global/sps/N1_Service_Provisioning_System_4.1
HAS_RS=N1spsld-hastplus-res
ZONE=
ZONE_BT=
PROJECT=
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsld_config register the resource.
# ksh ./spsld_register -f desired_place/spsld_config
Enable each N1 Grid Service Provisioning System Local Distributor resource.
# clresource status
# clresource enable N1spsld-resource
(Optional) Repeat Step 2 to Step 4 for each N1 Grid Service Provisioning System Local Distributor instance you need.
This procedure assumes that you installed the data service packages.
If you did not install the Sun Cluster HA for N1 Service Provisioning System packages, go to Installing the Sun Cluster HA for N1 Service Provisioning System Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host N1 Grid Service Provisioning System Local Distributor.
Configure the registration scripts for each required N1 Grid Service Provisioning System Local Distributor instance.
# cd /opt/SUNWscsps/localdist/util
# cp spsld_config desired place
Edit the spsld_config file and follow the comments within that file. For example:
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsld_config.ksh 1.2 06/03/17 SMI"
#
# This file will be sourced in by spsld_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS      - name of the resource for the application
# RG      - name of the resource group containing RS
# PORT    - name of the port number to satisfy GDS registration
# LH      - name of the LogicalHostname SC resource
# USER    - name of the owner of the local distributor
# BASE    - name of the directory where the N1 Service Provisioning Server
#           is installed
# HAS_RS  - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE    - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the
#           smf service.
#           If the variable is not set it will be translated as :default for
#           the smf credentials. # Optional
#
RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=
The following is an example for N1 Grid Service Provisioning System 4.1 Local Distributor.
RS=N1spsld-res
RG=N1spsld-rg
PORT=22
LH=N1spsld-lh
USER=sps
BASE=/global/sps/N1_Service_Provisioning_System_4.1
HAS_RS=N1spsld-hastplus-res
ZONE=sps-zone
ZONE_BT=sps-zone-rs
PROJECT=
The PORT variable is needed to satisfy the requirements of the generic data service.
After editing spsld_config register the resource.
# ksh ./spsld_register -f desired_place/spsld_config
Enable each N1 Grid Service Provisioning System Local Distributor resource.
# clresource status
# clresource enable N1spsld-resource
(Optional) Repeat Step 2 to Step 4 for each N1 Grid Service Provisioning System Local Distributor instance you need.
This section contains the procedure you need to verify that you installed and configured your data service correctly.
Become superuser on one of the nodes in the cluster that will host the N1 Grid Service Provisioning System component. A component can be the Master Server, the Remote Agent, or the Local Distributor.
Ensure that all the N1 Grid Service Provisioning System resources are online with the clresource command.
# clresource status
For each N1 Grid Service Provisioning System resource that is not online, use the clresource command as follows.
# clresource enable N1sps-resource
Run the clresourcegroup command to switch the N1 Grid Service Provisioning System resource group to another cluster node, such as node2, with the command described below. Use the alternative form with :zone for zone installations only.
# clresourcegroup online -h node2 N1sps-resource-group
# clresourcegroup online -h node2:zone N1sps-resource-group
This section describes the structure and the content of the Sun Cluster HA for N1 Service Provisioning System Master Server parameter file, as well as the strategy for choosing some of its variables.
Sun Cluster HA for N1 Service Provisioning System for the Master Server uses a parameter file to pass parameters to the start, stop, and probe commands. This parameter file needs to be a valid Korn shell script which sets several variables. The structure of this file appears in Table 4. For examples of the parameter file refer to Registering and Configuring Sun Cluster HA for N1 Service Provisioning System.
Table 4 Structure of the Sun Cluster HA for N1 Service Provisioning System Master Servers parameter file
Variable |
Explanation |
---|---|
User |
The owner of the N1 Grid Service Provisioning System Master Server instance. |
Basepath |
Basepath is the absolute path to the directory where the N1 Grid Service Provisioning System server/bin directory resides. It is the directory you specified at installation time. |
Host |
The Host variable specifies the host used to test the functionality of the Apache Tomcat component of the N1 Grid Service Provisioning System Master Server. The test is done via a connection to Host:Tport. |
Tport |
The port on which the N1 Grid Service Provisioning System's Apache Tomcat component is listening. This port is used together with the Host to test the functionality of the Apache Tomcat server process of the N1 Grid Service Provisioning System Master Server. |
TestCmd |
This variable represents the command which is passed to the N1 Grid Service Provisioning System's Apache Tomcat server process to do a sanity check. If your N1 Grid Service Provisioning System Master Server is configured to use https, provide a web page that can be retrieved by wget. |
ReturnString |
The variable ReturnString represents the string which must be present in the answer to the TestCmd. It cannot be “Connection refused” because this string is in the answer when the N1 Grid Service Provisioning System's Apache Tomcat server process is not running. |
Startwait |
This variable specifies the number of seconds to wait after the N1 Grid Service Provisioning System Master Server start command completes, until the Apache Tomcat server process of the N1 Grid Service Provisioning System Master Server is fully operational. The appropriate number of seconds depends on the speed and the load of the hardware. A good strategy is to start with 60 seconds. |
WgetPath |
Provide the absolute path to an https-capable wget command. This variable is needed only if you configured your N1 Grid Service Provisioning System Master Server for https. |
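As a sketch, a Master Server parameter file setting the variables in Table 4 might look like the following. Every value shown is an illustrative assumption and must be adapted to your installation.

```shell
# Hypothetical Master Server parameter file (a Korn shell fragment that is
# sourced by the data service); all values are illustrative assumptions.
User=sps
Basepath=/global/sps/N1_Service_Provisioning_System_4.1
Host=N1spsma-lh
Tport=8080
TestCmd="get /index.jsp"
ReturnString="Service"
Startwait=60
WgetPath=
```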
The parameters in Table 4 can be changed at any time. The only difference is when changes take effect.
The following parameters of the Sun Cluster HA for N1 Service Provisioning System parameter file are used for starting and stopping the Master Server. Changes to these parameters take effect at every restart or disabling and enabling of a N1 Grid Service Provisioning System Master Server resource.
User
Basepath
Startwait
The following parameters of the Sun Cluster HA for N1 Service Provisioning System Master Server parameter file are used within the fault monitor. Changes to these parameters take effect at every Thorough_probe_interval.
Host
Tport
TestCmd
ReturnString
WgetPath
The ReturnString has to be present on the page you query with the test command TestCmd.
Take the start page of your application and set the TestCmd to get /index.jsp, or to https://start_page if you use wget to monitor the Master Server. Set the ReturnString to a string contained in the start page. With this strategy, you are monitoring that the Apache Tomcat process of the N1 Grid Service Provisioning System Master Server is operational.
If the N1 Grid Service Provisioning System Master Server is configured for SSL on the administrative port, the only answer on the http port is a page containing the string SSL. In this case, configure the ReturnString to SSL and the TestCmd to get /index.jsp.
If you expect changes in the configuration, configure the test command to get /index.jsp and the ReturnString to SSL|Service. This expression is true if the start page contains SSL or Service.
If none of the above is appropriate, set the TestCmd to get /a-page-which-does-not-exist. In this case, set the ReturnString to a string contained in the error page. With this strategy, you are monitoring that the Apache Tomcat process of the N1 Grid Service Provisioning System Master Server is operational, because the server registers that it must deliver a page that does not exist.
You can evaluate the different pages by connecting to hostname:port with a browser and requesting each page.
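The ReturnString check described above behaves, in this sketch, like an extended regular expression match, so an alternation such as SSL|Service accepts either answer. The helper name below is hypothetical and not part of the data service.

```shell
# Sketch of the ReturnString check; matches_return_string is a hypothetical
# helper, not part of the data service.
matches_return_string() {
    # $1 = answer returned by the TestCmd, $2 = ReturnString (egrep pattern)
    printf '%s\n' "$1" | grep -E "$2" > /dev/null
}

matches_return_string "This page requires SSL" "SSL|Service" && echo "probe OK"
# prints "probe OK"
```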
This section describes the Sun Cluster HA for N1 Service Provisioning System fault monitor's probing algorithm and functionality, and states the conditions, messages, and recovery actions associated with unsuccessful probing.
For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.
The Sun Cluster HA for N1 Service Provisioning System fault monitor uses the same resource properties as the resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
The probing of the Master Server consists of two parts. One to probe the Apache Tomcat and a second part to probe the database.
The following steps are executed to monitor the sanity of the N1 Grid Service Provisioning System Master Server.
Sleeps for Thorough_probe_interval.
Pings the Host, which is configured in the Sun Cluster HA for N1 Service Provisioning System Master Server parameter file.
Connects to the Apache Tomcat server via Host and Tport. If the connection is successful, the probe sends the TestCmd and tests whether the ReturnString comes back. If this fails, the probe is rescheduled after 5 seconds. If it fails again, the probe restarts the N1 Grid Service Provisioning System Master Server.
The ReturnString cannot be Connection refused because this string will be returned if no connection is possible.
If the Apache Tomcat is operational, the probe manipulates the database table sc_test. If the connection to the database or the table manipulation is unsuccessful, the N1 Grid Service Provisioning System Master server will be restarted.
If the Apache Tomcat process and all the database processes have died, pmf interrupts the probe to restart the N1 Grid Service Provisioning System Master Server immediately.
If the N1 Grid Service Provisioning System Master Server is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, then a failover is initiated for the resource group onto another node. This is done if the resource property Failover_enabled is set to TRUE.
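The restart logic described in the steps above can be sketched as follows. probe_with_retry and its check_cmd argument are hypothetical stand-ins for the probe's internal TestCmd/ReturnString check, not the actual implementation.

```shell
# Sketch of the probe's retry behavior; names are illustrative assumptions.
probe_with_retry() {
    check_cmd="$1"
    if $check_cmd; then
        echo "healthy"
    elif sleep 5 && $check_cmd; then    # rescheduled after 5 seconds
        echo "healthy"
    else
        echo "restart"                  # second failure: restart the resource
    fi
}

probe_with_retry true    # prints "healthy"
```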
The probing of the Remote Agent is done by pmf only.
The following steps are executed to monitor the N1 Grid Service Provisioning System Remote Agent.
If the process of the Remote Agent has died, pmf will immediately restart the N1 Grid Service Provisioning System Remote Agent.
If the N1 Grid Service Provisioning System Remote Agent is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, then a failover is initiated for the resource group onto another node. This is done if the resource property Failover_enabled is set to TRUE.
The probing of the Local Distributor is done by pmf only.
The following steps are executed to monitor the N1 Grid Service Provisioning System Local Distributor.
If the process of the Local Distributor has died, pmf will immediately restart the N1 Grid Service Provisioning System Local Distributor.
If the N1 Grid Service Provisioning System Local Distributor is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, then a failover is initiated for the resource group onto another node. This is done if the resource property Failover_enabled is set to TRUE.
Sun Cluster HA for N1 Service Provisioning System can be used by multiple N1 Grid Service Provisioning System instances. You can turn on debugging for all N1 Grid Service Provisioning System instances or for a particular instance. Debugging must be enabled separately for each component (Master Server, Remote Agent, or Local Distributor).
The Sun Cluster HA for N1 Service Provisioning System component has a DEBUG file under /opt/SUNWscsps/component-dir/etc. The directories of these components are master for the Master Server, remoteagent for the Remote Agent, and localdist for the Local Distributor.
This file allows you to switch on debugging for all instances of a N1 Grid Service Provisioning System component, or for a specific instance of a component on a particular node in a Sun Cluster. If you require debugging to be switched on for Sun Cluster HA for N1 Service Provisioning System across the whole Sun Cluster, repeat this procedure on all nodes of the Sun Cluster.
Perform these steps for the Sun Cluster HA for N1 Service Provisioning System component that requires debug output, on each node of Sun Cluster as required.
Determine whether you are in a global zone or in a failover zone configuration.
If your operating system is Solaris 10 and your N1 Grid Service Provisioning System resource depends on a Solaris Container boot component resource, you are in a failover zone configuration. In any other case, especially on a Solaris 9 system, you are in a global zone configuration.
Determine whether debugging for Sun Cluster HA for N1 Service Provisioning System is active.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
If debugging is inactive, daemon.notice is set in the file /etc/syslog.conf of the appropriate zone.
If debugging is inactive, edit the /etc/syslog.conf file in the appropriate zone to change daemon.notice to daemon.debug.
Confirm that debugging for Sun Cluster HA for N1 Service Provisioning System is active.
If debugging is active, daemon.debug is set in the file /etc/syslog.conf.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Restart the syslogd daemon in the appropriate zone.
If your operating system is Solaris 9, type:
# pkill -1 syslogd |
If your operating system is Solaris 10, type:
# svcadm refresh svc:/system/system-log:default |
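The manual edit of /etc/syslog.conf described in the steps above can also be scripted. The helper name and the .orig backup suffix below are illustrative assumptions, not part of the product.

```shell
# Sketch: switch daemon.notice to daemon.debug in a syslog.conf-style file.
# enable_daemon_debug and the .orig backup suffix are illustrative assumptions.
enable_daemon_debug() {
    conf="$1"
    cp "$conf" "$conf.orig" &&
    sed 's/daemon\.notice/daemon.debug/' "$conf.orig" > "$conf"
}
```

Run the helper against /etc/syslog.conf in the appropriate zone, then restart or refresh syslogd as described above.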
If you are in a failover zone configuration, edit the /opt/SUNWsczone/sczbt/etc/config file to change the DEBUG= variable according to one of the following examples:
DEBUG=ALL
DEBUG=resource name
DEBUG=resource name,resource name, ...
If you are in a global zone configuration, edit /opt/SUNWscsps/component-dir/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource name.
# cat /opt/SUNWscsps/component-dir/etc/config
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
To deactivate debugging, repeat step 1 to 6, changing daemon.debug to daemon.notice and changing the DEBUG variable to DEBUG=.