This chapter explains how to install and configure Sun Cluster HA for Solaris Containers.
This chapter contains the following sections.
Overview of Installing and Configuring Sun Cluster HA for Solaris Containers
Planning the Sun Cluster HA for Solaris Containers Installation and Configuration
Installing the Sun Cluster HA for Solaris Containers Packages
Registering and Configuring Sun Cluster HA for Solaris Containers
Verifying the Sun Cluster HA for Solaris Containers Installation and Configuration
Tuning the Sun Cluster HA for Solaris Containers Fault Monitors
A Solaris Container is a complete runtime environment for applications. Solaris 10 Resource Manager and Solaris Zones software partitioning technology are both parts of the container. These components address different qualities the container can deliver and work together to create a complete container. The zones portion of the container provides a virtual mapping from the application to the platform resources. Zones allow application components to be isolated from one another even though the zones share a single instance of the Solaris Operating System. Resource management features permit you to allocate the quantity of resources that a workload receives.
The Solaris Zones facility in the Solaris Operating System provides an isolated and secure environment in which to run applications on your system. When you create a zone, you produce an application execution environment in which processes are isolated from the rest of the system.
This isolation prevents processes that are running in one zone from monitoring or affecting processes that are running in other zones. Even a process that is running with superuser credentials cannot view or affect activity in other zones. A zone also provides an abstract layer that separates applications from the physical attributes of the machine on which they are deployed. Examples of these attributes include physical device paths.
Every Solaris system contains a global zone. The global zone has a dual function. The global zone is both the default zone for the system and the zone that is used for system-wide administrative control. Non-global zones are referred to as zones and are created by the global administrator.
Sun Cluster HA for Solaris Containers enables Sun Cluster to manage Solaris Zones by providing components to perform the following operations:
The orderly booting and shutdown of a zone
The orderly startup, shutdown, and fault monitoring of an application within the zone via scripts or commands
The orderly startup, shutdown, and fault monitoring of a Solaris Service Management Facility (SMF) service within the zone
You can configure Sun Cluster HA for Solaris Containers as a failover service or a multiple-masters service. You cannot configure Sun Cluster HA for Solaris Containers as a scalable service.
When a Solaris Zone is managed by the Sun Cluster HA for Solaris Containers data service, the Solaris Zone becomes a failover Solaris Zone, or multiple-masters Solaris Zone, across the Sun Cluster nodes. The failover is managed by the Sun Cluster HA for Solaris Containers data service, which runs only within the global zone.
For conceptual information about failover data services, multiple-masters data services, and scalable data services, see Sun Cluster Concepts Guide for Solaris OS.
The following table summarizes the tasks for installing and configuring Sun Cluster HA for Solaris Containers and provides cross-references to detailed instructions for performing these tasks. Perform the tasks in the order that they are listed in the table.
Table 1 Tasks for Installing and Configuring Sun Cluster HA for Solaris Containers
Task | Instructions
---|---
Plan the installation | Planning the Sun Cluster HA for Solaris Containers Installation and Configuration
Install and configure the Solaris Zones | Installing and Configuring Zones
Verify the installation and configuration of the zones | Verifying the Installation and Configuration of a Zone
Install the Sun Cluster HA for Solaris Containers packages | Installing the Sun Cluster HA for Solaris Containers Packages
Register and configure Sun Cluster HA for Solaris Containers components | Registering and Configuring Sun Cluster HA for Solaris Containers
Verify the Sun Cluster HA for Solaris Containers installation and configuration | Verifying the Sun Cluster HA for Solaris Containers Installation and Configuration
Tune the Sun Cluster HA for Solaris Containers fault monitors | Tuning the Sun Cluster HA for Solaris Containers Fault Monitors
Debug Sun Cluster HA for Solaris Containers | Debugging Sun Cluster HA for Solaris Containers
This section contains the information you need to plan your Sun Cluster HA for Solaris Containers installation and configuration.
The configuration restrictions in the subsections that follow apply only to Sun Cluster HA for Solaris Containers.
Your data service configuration might not be supported if you do not observe these restrictions.
The configuration of a zone's network addresses depends on the level of high availability you require. You can choose between no HA, HA through the use of IPMP, or HA through the use of IPMP and SUNW.LogicalHostName.
Your choice of a zone's network address configuration affects some configuration parameters for the zone boot resource. For more information, see Registering and Configuring Sun Cluster HA for Solaris Containers.
If HA for the zone's addresses is not required, configure the zone's addresses by using the zonecfg utility, as shown in the example after this list.
If HA through IPMP protection is required, configure the zone's addresses by using the zonecfg utility and then place the zone's addresses on an adapter within an IPMP group.
If HA through IPMP protection and protection against the failure of all physical interfaces is required, choose one option from the following list:
If you require the SUNW.LogicalHostName resource type to manage one or a subset of the zone's addresses, configure a SUNW.LogicalHostName resource for those addresses and do not configure them by using the zonecfg utility. Use the zonecfg utility only to configure the zone's addresses that are not required to be under the control of the SUNW.LogicalHostName resource type.
If you require the SUNW.LogicalHostName resource type to manage all the zone's addresses, configure a SUNW.LogicalHostName resource with a list of the zone's addresses and do not configure them by using the zonecfg utility.
Otherwise, configure the zone's addresses by using the zonecfg utility and configure a separate redundant IP address for use by a SUNW.LogicalHostName resource, which must not be configured using the zonecfg utility.
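The following sketch shows the zonecfg steps for adding an address to a zone. The zone name zone1, the address 192.168.1.100, and the adapter bge0 are hypothetical placeholders:

# zonecfg -z zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.1.100
zonecfg:zone1:net> set physical=bge0
zonecfg:zone1:net> end
zonecfg:zone1> commit
zonecfg:zone1> exit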
The zone path of a zone in a failover configuration must reside on a highly available local file system. The zone must be configured on each cluster node where the zone can reside.
The zone is active on only one node at a time, and the zone's address is plumbed on only one node at a time. Application clients can then reach the zone through the zone's address, wherever that zone resides within the cluster.
The zone path of a zone in a multiple-masters configuration must reside on the local disks of each node. The zone must be configured with the same name on each node that can master the zone.
Each zone that is configured to run within a multiple-masters configuration must also have a zone-specific address. Load balancing for applications in these configurations is typically provided by an external load balancer. You must configure this load balancer for the address of each zone. Application clients can then reach the zone through the load balancer's address.
The zone path of a zone that Sun Cluster HA for Solaris Containers manages cannot reside on a global file system.
If the zone is in a failover configuration the zone path must reside on a highly available local file system.
If the zone is in a multiple-masters configuration, the zone path must reside on the local disks of each node.
For shared devices, Sun Cluster requires that the major and minor device numbers are identical on all nodes in the cluster. If the device is required for a zone, ensure that the major device number is the same in /etc/name_to_major on all nodes in the cluster that will host the zone.
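For example, you can compare the major number of a driver across nodes as follows. The driver name sd is a hypothetical placeholder:

# grep '^sd ' /etc/name_to_major

Run the command on every node that can host the zone and confirm that the output is identical.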
The configuration requirements in this section apply only to Sun Cluster HA for Solaris Containers.
If your data service configuration does not conform to these requirements, the data service configuration might not be supported.
The dependencies between the Sun Cluster HA for Solaris Containers components are described in the following table:
Table 2 Dependencies Between Sun Cluster HA for Solaris Containers Components
Component | Dependency
---|---
Zone boot resource | SUNW.HAStoragePlus (in a failover configuration, the zone's zone path must be on a highly available file system managed by a SUNW.HAStoragePlus resource); SUNW.LogicalHostName (required only if the zone's address is managed by a SUNW.LogicalHostName resource)
Zone script resource | Zone boot resource
Zone SMF resource | Zone boot resource
These dependencies are set when you register and configure Sun Cluster HA for Solaris Containers. For more information, see Registering and Configuring Sun Cluster HA for Solaris Containers.
The zone script resource and SMF resource are optional. If used, multiple instances of the zone script resource and SMF resource can be deployed within the same resource group as the zone boot resource. Furthermore, if more elaborate dependencies are required, refer to the r_properties(5) and rg_properties(5) man pages for further dependency and affinity settings.
The boot component and script component of Sun Cluster HA for Solaris Containers require a parameter file to pass configuration information to the data service. You must create a directory for these files. The directory location must be available on the node that is to host the zone and must not be in the zone's zone path. The directory must be accessible only from the global zone. The parameter file for each component is created automatically when the resource for the component is registered.
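For example, assuming the parameter directory is /global/zones/pfiles on a highly available local file system, as in the configuration examples later in this chapter, you would create it from the global zone as follows:

# mkdir -p /global/zones/pfiles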
Installing and configuring Solaris Zones involves the following tasks:
Enabling a zone to run in your chosen data service configuration, as explained in the following sections:
Installing and configuring a zone, as explained in:
Perform this task for each zone that you are installing and configuring. This section explains only the special requirements for installing Solaris Zones for use with Sun Cluster HA for Solaris Containers. For complete information about installing and configuring Solaris Zones, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Register the SUNW.HAStoragePlus resource type.
# scrgadm -a -t SUNW.HAStoragePlus
Create a failover resource group.
# scrgadm -a -g solaris-zone-resource-group
Create a resource for the zone's disk storage.
# scrgadm -a -j solaris-zone-has-resource \
-g solaris-zone-resource-group \
-t SUNW.HAStoragePlus \
-x FilesystemMountPoints=solaris-zone-instance-mount-points
(Optional) Create a resource for the zone's logical hostname.
# scrgadm -a -L -j solaris-zone-logical-hostname-resource-name \
-g solaris-zone-resource-group \
-l solaris-zone-logical-hostname
Enable the failover resource group.
# scswitch -Z -g solaris-zone-resource-group
Create a scalable resource group.
# scrgadm -a -g solaris-zone-resource-group \
-y Maximum_primaries=max-number \
-y Desired_primaries=desired-number
Enable the scalable resource group.
# scswitch -Z -g solaris-zone-resource-group
For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Determine the following requirements for the deployment of the zone with Sun Cluster:
The number of Solaris Zone instances that are to be deployed.
The cluster file system that each Solaris Zone instance will use.
Install the zone.
If the zone that you are installing is to become a failover zone, the zone's zone path must specify a highly available local file system. The file system must be managed by the SUNW.HAStoragePlus resource that you created in Step 3.
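A minimal sketch of creating and installing such a zone follows. The zone name zone1 and zone path /global/zones/zone1 are hypothetical, and setting autoboot=false is an assumption based on the data service, rather than the system, booting the zone:

# zonecfg -z zone1
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/global/zones/zone1
zonecfg:zone1> set autoboot=false
zonecfg:zone1> commit
zonecfg:zone1> exit
# zoneadm -z zone1 install
# zoneadm -z zone1 boot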
# zlogin -C zone
You are prompted to configure the zone.
Follow the prompts to configure the zone.
After the zone has been configured, an entry for the zone exists in the /etc/zones/index file.
Disconnect from the zone's console.
Use the escape sequence that you defined for the zone. If you did not define an escape sequence, use the default escape sequence as follows:
# ~.
Determine the new zone's index entry by listing the contents of the /etc/zones/index file.
You need the new zone's index entry for How to Enable a Zone to Run in a Failover Configuration.
# cat /etc/zones/index
Make the zone available to all nodes in the cluster.
Perform the following steps on each cluster node.
Log in to each cluster node.
To prevent a loss of data, create a backup copy of the /etc/zones/index file.
# cd /etc/zones
# cp index index_backup
Using a plain text editor, add the entry for the zone to the /etc/zones/index file on the node.
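An index entry generally takes the form zonename:state:zonepath (newer Solaris 10 updates append a UUID field). For example, with the hypothetical zone used earlier, the entry might read:

zone1:installed:/global/zones/zone1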
Copy the zone's XML file, zone.xml, to the /etc/zones directory on the node.
# rcp zone-install-node:/etc/zones/zone.xml .
Before you install the Sun Cluster HA for Solaris Containers packages, verify that the zones that you created are correctly configured to run in a cluster. This verification does not verify that the zones are highly available because the Sun Cluster HA for Solaris Containers data service is not yet installed.
Perform this procedure for each zone that you created in Installing and Configuring Zones.
Boot the zone.

# zoneadm -z zone boot
Log in to the zone.

# zlogin zone
Confirm that the zone has reached the svc:/milestone/multi-user-server:default milestone.
# svcs -a | grep milestone
online  Apr_10  svc:/milestone/network:default
online  Apr_10  svc:/milestone/devices:default
online  Apr_10  svc:/milestone/single-user:default
online  Apr_10  svc:/milestone/sysconfig:default
online  Apr_10  svc:/milestone/name-services:default
online  Apr_10  svc:/milestone/multi-user:default
online  Apr_10  svc:/milestone/multi-user-server:default
online  Apr_10  svc:/system/cluster/cl-svc-cluster-milestone:default
Halt the zone.

# zoneadm -z zone halt
If you did not install the Sun Cluster HA for Solaris Containers packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster HA for Solaris Containers packages. To complete this procedure, you need the Sun Cluster Agents CD-ROM.
If you are installing more than one data service simultaneously, perform the procedure in Installing the Software in Sun Cluster Software Installation Guide for Solaris OS.
Install these packages only in the global zone. To ensure that these packages are not propagated to any local zones that are created after you install the packages, use the scinstall utility to install these packages.
Perform this procedure on all nodes that can run Sun Cluster HA for Solaris Containers.
Load the Sun Cluster Agents CD-ROM into the CD-ROM drive.
Run the scinstall utility with no options.
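For example:

# scinstall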
This step starts the scinstall utility in interactive mode.
Choose the menu option, Add Support for New Data Service to This Cluster Node.
The scinstall utility prompts you for additional information.
Provide the pathname to the Sun Cluster Agents CD-ROM.
The utility refers to the CD as “data services cd.”
Choose the menu option, q) done.
Type yes for the question, Do you want to see more data services?
Specify the data service to install.
The scinstall utility lists the data service that you selected and asks you to confirm your choice.
Exit the scinstall utility.
Unload the CD from the CD-ROM drive.
Before you perform this procedure, ensure that the Sun Cluster HA for Solaris Containers data service packages are installed.
Use the configuration and registration files in the following directories to register the Sun Cluster HA for Solaris Containers resources:
/opt/SUNWsczone/sczbt/util
/opt/SUNWsczone/sczsh/util
/opt/SUNWsczone/sczsmf/util
The files define the dependencies that are required between the Sun Cluster HA for Solaris Containers components. For information about these dependencies, see Dependencies Between Sun Cluster HA for Solaris Containers Components.
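For example, the directory for the zone boot component contains the configuration file that you edit and the registration script that you run later in this chapter. A listing would be similar to the following:

# ls /opt/SUNWsczone/sczbt/util
sczbt_config     sczbt_register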
Registering and configuring Sun Cluster HA for Solaris Containers involves the tasks that are explained in the following sections:
Specifying Configuration Parameters for the Zone Boot Resource
Specifying Configuration Parameters for the Zone Script Resource
Specifying Configuration Parameters for the Zone SMF Resource
How to Create and Enable Resources for the Zone Boot Component
How to Create and Enable Resources for the Zone Script Component
How to Create and Enable Resources for the Zone SMF Component
Sun Cluster HA for Solaris Containers provides a script that automates the process of configuring the zone boot resource. This script obtains configuration parameters from the sczbt_config file in the /opt/SUNWsczone/sczbt/util directory. To specify configuration parameters for the zone boot resource, edit the sczbt_config file.
Each configuration parameter in the sczbt_config file is defined as a keyword-value pair. The sczbt_config file already contains the required keywords and equals signs. For more information, see Listing of sczbt_config. When you edit the sczbt_config file, add the required value to each keyword.
The keyword-value pairs in the sczbt_config file are as follows:
RS=sczbt-rs
RG=sczbt-rg
PARAMETERDIR=sczbt-parameter-directory
SC_NETWORK=true|false
SC_LH=sczbt-lh-rs
FAILOVER=true|false
HAS_RS=sczbt-has-rs
Zonename=zone-name
Zonebootopt=zone-boot-options
Milestone=zone-boot-milestone
The meaning and permitted values of the keywords in the sczbt_config file are as follows:
RS: Specifies the name that you are assigning to the zone boot resource. You must specify a value for this keyword.

RG: Specifies the name of the resource group in which the zone boot resource will reside. You must specify a value for this keyword.

PARAMETERDIR: Specifies the name that you are assigning to the parameter directory where some variables and their values will be stored. You must specify a value for this keyword.

SC_NETWORK: Specifies whether the zone boot resource is network aware with a SUNW.LogicalHostName resource. You must specify a value for this keyword.
If HA for the zone's addresses is not required, configure the zone's addresses by using the zonecfg utility.
SC_NETWORK=false
SC_LH=
If HA through IPMP protection is required, configure the zone's addresses by using the zonecfg utility and then place the zone's addresses on an adapter within an IPMP group.
SC_NETWORK=false
SC_LH=
If HA through IPMP protection and protection against the failure of all physical interfaces is required, choose one option from the following list:
If you require the SUNW.LogicalHostName resource type to manage one or a subset of the zone's addresses, configure a SUNW.LogicalHostName resource for those addresses and do not configure them by using the zonecfg utility. Use the zonecfg utility to configure only the zone's addresses that are not to be under the control of the SUNW.LogicalHostName resource type.
SC_NETWORK=true
SC_LH=Name of the SUNW.LogicalHostName resource
If you require the SUNW.LogicalHostName resource type to manage all the zone's addresses, configure a SUNW.LogicalHostName resource with a list of the zone's addresses and do not configure them by using the zonecfg utility.
SC_NETWORK=true
SC_LH=Name of the SUNW.LogicalHostName resource
Otherwise, configure the zone's addresses by using the zonecfg utility and configure a separate redundant IP address for use by a SUNW.LogicalHostName resource, which must not be configured using the zonecfg utility.
SC_NETWORK=false
SC_LH=Name of the SUNW.LogicalHostName resource
SC_LH: Specifies the name of the SUNW.LogicalHostName resource for the zone boot resource. Refer to Restrictions for Zone Network Addresses for a description of when to set this variable. This name must be the SUNW.LogicalHostname resource name you assigned when you created the resource in Step 4.

FAILOVER: Specifies whether the zone's zone path is on a highly available file system.

HAS_RS: Specifies the name of the SUNW.HAStoragePlus resource for the zone boot resource. This name must be the SUNW.HAStoragePlus resource name you assigned when you created the resource in How to Enable a Zone to Run in a Failover Configuration. You must specify a value for this keyword if FAILOVER=true is set.

Zonename: Specifies the zone name. You must specify a value for this keyword.

Zonebootopt: Specifies the zone boot option to use. Only -s is supported. Leaving this variable blank causes the zone to boot to the multi-user-server milestone.

Milestone: Specifies the milestone the zone must reach to be considered successfully booted. You must specify a value for this keyword.
This example shows an sczbt_config file in which configuration parameters are set as follows:
The name of the zone boot resource is zone1-rs.
The name of the resource group for the zone boot resource is zone1-rg.
The name of the parameter file directory for the zone boot resource is /global/zones/pfiles.
The zone's addresses are managed by a SUNW.LogicalHostName resource, so SC_NETWORK is set to true.

The name of the SUNW.LogicalHostName resource for the zone boot resource is zone1-lh.

The zone's zone path is on a highly available file system managed by a SUNW.HAStoragePlus resource, so FAILOVER is set to true.

The name of the SUNW.HAStoragePlus resource for the zone boot resource is zone1-has.
The name of the zone is zone1.
The zone boot resource's boot option is left blank.

The zone boot resource's milestone is multi-user-server.
RS=zone1-rs
RG=zone1-rg
PARAMETERDIR=/global/zones/pfiles
SC_NETWORK=true
SC_LH=zone1-lh
FAILOVER=true
HAS_RS=zone1-has
Zonename=zone1
Zonebootopt=
Milestone=multi-user-server
The zone script resource provides the ability to run commands or scripts to start, stop, and probe an application within a zone. The zone script resource depends on the zone boot resource. The command or script names are passed to the zone script resource when the resource is registered and must meet the following requirements.
The command or script must contain the fully qualified path within the zone.
The command or script must be executable by root.
The command or script must return one of the following return codes.
Return code | Meaning
---|---
0 | Successful completion
>0 | An error has occurred
201 | (Probe only) An error has occurred that requires an immediate failover of the resource group
>0 & !=201 | (Probe only) An error has occurred that requires a resource restart
For an immediate failover of the zone script resource, you must configure the resource properties Failover_mode and Failover_enabled to meet the required behavior. Refer to the r_properties(5) man page when setting the Failover_mode property and SUNW.gds(5) man page when setting the Failover_enabled property.
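As a hedged sketch, the following command sets these properties on a zone script resource, using the placeholder resource name sczsh-rs from the registration steps later in this chapter. Failover_mode is a standard resource property and Failover_enabled is a SUNW.gds extension property; the values shown are illustrative:

# scrgadm -c -j sczsh-rs -y Failover_mode=SOFT -x Failover_enabled=TRUE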
This example shows a simple script that tests whether the Apache2 service is running, beyond simply checking that its process tree exists. The script /var/tmp/probe-apache2 must exist within the zone.
# cat /var/tmp/probe-apache2
#!/usr/bin/ksh
# Probe the web server by sending a request to port 80.
if echo "GET; exit" | mconnect -p 80
then
    exit 0
else
    exit 100
fi
Sun Cluster HA for Solaris Containers provides a script that automates the process of configuring the zone script resource. This script obtains configuration parameters from the sczsh_config file in the /opt/SUNWsczone/sczsh/util directory. To specify configuration parameters for the zone script resource, edit the sczsh_config file.
Each configuration parameter in the sczsh_config file is defined as a keyword-value pair. The sczsh_config file already contains the required keywords and equals signs. For more information, see Listing of sczsh_config. When you edit the sczsh_config file, add the required value to each keyword.
The keyword-value pairs in the sczsh_config file are as follows:
RS=sczsh-rs
RG=sczbt-rg
SCZBT_RS=sczbt-rs
PARAMETERDIR=sczsh-parameter-directory
Zonename=sczbt-zone-name
ServiceStartCommand=sczsh-start-command
ServiceStopCommand=sczsh-stop-command
ServiceProbeCommand=sczsh-probe-command
The meaning and permitted values of the keywords in the sczsh_config file are as follows:
RS: Specifies the name that you are assigning to the zone script resource. You must specify a value for this keyword.

RG: Specifies the name of the resource group in which the zone boot resource resides. You must specify a value for this keyword.

SCZBT_RS: Specifies the name of the zone boot resource. You must specify a value for this keyword.

PARAMETERDIR: Specifies the directory name that you are assigning to the parameter directory where the following variables and their values will be stored. You must specify a value for this keyword.

Zonename: Specifies the zone name. You must specify a value for this keyword.

ServiceStartCommand: Specifies the zone start command or script to run. You must specify a value for this keyword.

ServiceStopCommand: Specifies the zone stop command or script to run. You must specify a value for this keyword.

ServiceProbeCommand: Specifies the zone probe command or script to run. You must specify a value for this keyword.
In this example the zone script resource uses the Apache2 scripts that are available in Solaris 10. Before this example can be used, the Apache2 configuration file httpd.conf needs to be configured. For the purpose of this example, the delivered httpd.conf-example file can be used. Copy the file as follows:
# zlogin zone1
# cd /etc/apache2
# cp httpd.conf-example httpd.conf
# exit
This example shows an sczsh_config file in which configuration parameters are set as follows:
The name of the zone script resource is zone1-script-rs.
The name of the resource group for the zone script resource is zone1-rg.
The name of the zone boot resource is zone1-rs.
The name of the parameter file directory for the zone script resource is /global/zones/pfiles.
The name of the zone is zone1.
The zone script resource's start command, including its parameter, is "/lib/svc/method/http-apache2 start".

The zone script resource's stop command, including its parameter, is "/lib/svc/method/http-apache2 stop".
The name of the zone script resource probe command is "/var/tmp/probe-apache2". This script is shown in Example 2 and must exist in zone1.
RS="zone1-script-rs" RG="zone1-rg" SCZBT_RS="zone1-rs" PARAMETERDIR="/global/zones/pfiles" Zonename="zone1" ServiceStartCommand="/lib/svc/method/http-apache2 start" ServiceStopCommand="/lib/svc/method/http-apache2 stop" ServiceProbeCommand="/var/tmp/probe-apache2"
The zone SMF resource provides the ability to enable, disable, and probe an SMF service within a zone. The zone SMF resource depends on the zone boot resource. Probing the SMF service is performed by running a command or script against the SMF service. The SMF service name and the probe command or script name are passed to the zone SMF resource when the resource is registered. The probe command or script must meet the following requirements.
The probe command or script must contain the fully qualified path within the zone.
The probe command or script must be executable by root.
The probe command or script must return one of the following return codes.
Return code | Meaning
---|---
0 | Successful completion
100 | An error has occurred that requires a resource restart
201 | An error has occurred that requires an immediate failover of the resource group
For an immediate failover of the zone SMF resource, you must configure the resource properties Failover_mode and Failover_enabled to meet the required behavior. Refer to the r_properties(5) man page when setting the Failover_mode property and SUNW.gds(5) man page when setting the Failover_enabled property.
This example shows a simple script that tests whether the SMF Apache2 service is running, beyond simply checking that its process tree exists. The script /var/tmp/probe-apache2 must exist within the zone.
# cat /var/tmp/probe-apache2
#!/usr/bin/ksh
# Probe the web server by sending a request to port 80.
if echo "GET; exit" | mconnect -p 80
then
    exit 0
else
    exit 100
fi
Sun Cluster HA for Solaris Containers provides a script that automates the process of configuring the zone SMF resource. This script obtains configuration parameters from the sczsmf_config file in the /opt/SUNWsczone/sczsmf/util directory. To specify configuration parameters for the zone SMF resource, edit the sczsmf_config file.
Each configuration parameter in the sczsmf_config file is defined as a keyword-value pair. The sczsmf_config file already contains the required keywords and equals signs. For more information, see Listing of sczsmf_config. When you edit the sczsmf_config file, add the required value to each keyword.
The keyword-value pairs in the sczsmf_config file are as follows:
RS=sczsmf-rs
RG=sczbt-rg
SCZBT_RS=sczbt-rs
ZONE=sczbt-zone-name
SERVICE=smf-service
RECURSIVE=true|false
STATE=true|false
SERVICE_PROBE=sczsmf-service-probe
The meaning and permitted values of the keywords in the sczsmf_config file are as follows:
RS: Specifies the name that you are assigning to the zone SMF resource. This must be defined.

RG: Specifies the name of the resource group in which the zone boot resource resides. This must be defined.

SCZBT_RS: Specifies the name of the zone boot resource. You must specify a value for this keyword.

ZONE: Specifies the zone name. This must be defined.

SERVICE: Specifies the SMF service to enable or disable. This must be defined.

RECURSIVE: Specifies true to enable the service recursively or false to enable just the service and no dependents. This must be defined.

STATE: Specifies true to wait until the service state is reached or false to not wait until the service state is reached. This must be defined.

SERVICE_PROBE: Specifies the script to check the SMF service.
In this example the zone SMF resource uses the Apache2 SMF service that is available in Solaris 10. Before this example can be used, the Apache2 configuration file httpd.conf needs to be configured. For the purpose of this example, the delivered httpd.conf-example file can be used. Copy the file as follows:
# zlogin zone1
# cd /etc/apache2
# cp httpd.conf-example httpd.conf
# exit
This example shows an sczsmf_config file in which configuration parameters are set as follows:
The name of the zone SMF resource is zone1-smf-rs.
The name of the resource group for the zone SMF resource is zone1-rg.
The name of the zone boot resource is zone1-rs.
The name of the zone is zone1.
The name of the zone SMF service is apache2.
The zone SMF resource's RECURSIVE option is set to true.

The zone SMF resource's STATE option is set to true.

The zone SMF service probe is /var/tmp/probe-apache2. This script is shown in Example 4 and must exist in zone1.
RS=zone1-smf-rs
RG=zone1-rg
SCZBT_RS=zone1-rs
ZONE=zone1
SERVICE=apache2
RECURSIVE=true
STATE=true
SERVICE_PROBE=/var/tmp/probe-apache2
Ensure you have edited the sczbt_config file to specify configuration parameters for the Sun Cluster HA for Solaris Containers zone boot component. For more information, see Specifying Configuration Parameters for the Zone Boot Resource.
Become superuser on one of the nodes in the cluster that will host the zone.
Register the SUNW.gds resource type.
# scrgadm -a -t SUNW.gds
Go to the directory that contains the script for creating the Sun Cluster HA for Solaris Containers boot resource.
# cd /opt/SUNWsczone/sczbt/util
Run the script that creates the zone boot resource.
# ./sczbt_register
Bring online the zone boot resource.
# scswitch -e -j sczbt-rs
Ensure you have edited the sczsh_config file to specify configuration parameters for the Sun Cluster HA for Solaris Containers zone script component. For more information, see Specifying Configuration Parameters for the Zone Script Resource.
Go to the directory that contains the script for creating the Sun Cluster HA for Solaris Containers script resource.
# cd /opt/SUNWsczone/sczsh/util
Run the script that creates the zone script resource.
# ./sczsh_register
Bring online the zone script resource.
# scswitch -e -j sczsh-rs
Ensure you have edited the sczsmf_config file to specify configuration parameters for the Sun Cluster HA for Solaris Containers zone SMF component. For more information, see Specifying Configuration Parameters for the Zone SMF Resource.
Go to the directory that contains the script for creating the Sun Cluster HA for Solaris Containers SMF resource.
# cd /opt/SUNWsczone/sczsmf/util
Run the script that creates the zone SMF resource.
# ./sczsmf_register
Bring online the zone SMF resource.
# scswitch -e -j sczsmf-rs
After you install, register, and configure Sun Cluster HA for Solaris Containers, verify the Sun Cluster HA for Solaris Containers installation and configuration. Verifying the Sun Cluster HA for Solaris Containers installation and configuration determines if the Sun Cluster HA for Solaris Containers data service makes your zones highly available.
Become superuser on a cluster node that is to host the Solaris Zones component.
Ensure all the Solaris Zone resources are online.
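For example, you can list the status of the resources and, if a resource group is offline, bring it online:

# scstat -g
# scswitch -Z -g solaris-zone-resource-group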
For each resource, perform the following steps.
Switch the zone resource group to another cluster node, such as node2.
# scswitch -z -g solaris-zone-resource-group -h node2
Confirm that the resource is now online on node2.
# scstat -g
The procedures that follow are required only if you are applying the patch to the global zone and to local zones. If you are applying a patch to only the global zone, follow the instructions in Chapter 8, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.
Before you begin, consult the patch README file to determine whether the patch is a nonrebooting patch or a rebooting patch.
A nonrebooting patch does not require you to reboot a node after you apply the patch on the node. You can apply the patch to a live system.
From one node, disable monitoring of every resource in the resource group that contains the zone resource.
# scswitch -n -M -j resource-list
On each node where the zone is not booted, comment out the entry for the zone in the /etc/zones/index file.
To comment out an entry, add the # character to the start of the line that contains the entry.
Apply the patch on all nodes where the zone is configured.
Remove the comment from each entry that you edited in Step 2.
Enable monitoring of the resources for which you disabled monitoring in Step 1.
# scswitch -e -M -j resource-list
A rebooting patch requires you to reboot a node after you apply the patch to the node.
Disable the resources that depend on the zones to which you are applying the patch.
# scswitch -n -j zdepend-rs-list
Disable monitoring of the zone resource.
# scswitch -n -M -j zone-rs
Bring the resource groups that contain zone resources online on a node.
# scswitch -z -g zone-rg -h node
On each node where the zone is not booted, comment out the entry for the zone in the /etc/zones/index file.
To comment out an entry, add the # character to the start of the line that contains the entry.
For each node where the zone is not booted, perform the following sequence of operations:
Apply the patch on the node where the zone is booted.
Remove the comment from each entry that you edited in Step 4.
Enable monitoring of the resource for which you disabled monitoring in Step 2.
# scswitch -e -M -j zone-rs
Reboot the node where the zone is booted.
Enable the resources that you disabled in Step 1.
# scswitch -e -j zdepend-rs-list
To verify that the patch is correctly applied, switch each resource group that contains zone resources to each node in the resource group's node list. To switch a resource group to another node, type the command:
# scswitch -z -g zone-rg -h node
The Sun Cluster HA for Solaris Containers fault monitors verify that the following components are running correctly:
Zone boot resource
Zone script resource
Zone SMF resource
Each Sun Cluster HA for Solaris Containers fault monitor is contained in the resource that represents the Solaris Zones component. You create these resources when you register and configure Sun Cluster HA for Solaris Containers. For more information, see Registering and Configuring Sun Cluster HA for Solaris Containers.
System properties and extension properties of these resources control the behavior of the fault monitor. The default values of these properties determine the preset behavior of the fault monitor. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster HA for Solaris Containers fault monitor only if you need to modify this preset behavior.
Tuning the Sun Cluster HA for Solaris Containers fault monitors involves the following tasks:
Setting the interval between fault monitor probes
Setting the time-out for fault monitor probes
Defining the criteria for persistent faults
Specifying the failover behavior of a resource
For more information, see Tuning Fault Monitors for Sun Cluster Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
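For example, the following command lengthens the probe interval and probe time-out of the zone boot resource from the earlier example. The resource name zone1-rs and the values are illustrative; Thorough_probe_interval is a standard resource property and Probe_timeout is a SUNW.gds extension property:

# scrgadm -c -j zone1-rs -y Thorough_probe_interval=120 -x Probe_timeout=300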
The Sun Cluster HA for Solaris Containers zone boot and script resources use a parameter file to pass parameters to the start, stop, and probe commands. Changes to these parameters take effect every time the resource is restarted, enabled, or disabled.
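For example, after editing parameters for the zone boot resource from the earlier example (resource name zone1-rs assumed), you can put the changes into effect by disabling and re-enabling the resource:

# scswitch -n -j zone1-rs
# scswitch -e -j zone1-rs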
The fault monitor for the zone boot component ensures that all of the following requirements for the zone boot component to run are met:
The Sun Cluster HA for Solaris Containers zsched process is running.
If this process is not running, the fault monitor restarts the zone. If this fault persists, the fault monitor fails over the resource group that contains the resource for the zone boot component.
Every host that is managed by a SUNW.LogicalHostname resource is operational.
If the host is not operational, the fault monitor fails over the resource group that contains the resource for the zone boot component.
The specified milestone is either online or degraded.

If the milestone is neither online nor degraded, the fault monitor restarts the zone. If this fault persists, the fault monitor fails over the resource group that contains the resource for the zone boot component.
To verify the state of the milestone, the fault monitor connects to the zone. If the fault monitor cannot connect to the zone, the fault monitor retries every five seconds for approximately 60% of the probe time-out. If the attempt to connect still fails, then the fault monitor restarts the zone.
The fault monitor for the zone script component runs the script that you specify for the component. The value that this script returns to the fault monitor determines the action that the fault monitor performs. For more information, see Table 3.
The fault monitor for the zone SMF component verifies that the SMF service is not disabled. If the service is disabled, the fault monitor restarts the SMF service. If this fault persists, the fault monitor fails over the resource group that contains the resource for the zone SMF component.
If the service is not disabled, the fault monitor runs the SMF service probe that you specify for the component. The value that this probe returns to the fault monitor determines the action that the fault monitor performs. For more information, see Table 4.
Each component of Sun Cluster HA for Solaris Containers has a config file that enables you to activate debugging for Solaris Zone resources. The location of this file for each component is as follows:
For the zone boot component, this file is contained in the /opt/SUNWsczone/sczbt/etc directory.
For the zone script component, this file is contained in the /opt/SUNWsczone/sczsh/etc directory.
For the zone SMF component, this file is contained in the /opt/SUNWsczone/sczsmf/etc directory.
Determine whether debugging for Sun Cluster HA for Solaris Containers is active.
If debugging is inactive, daemon.notice is set in the file /etc/syslog.conf.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
If debugging is inactive, edit the /etc/syslog.conf file to change daemon.notice to daemon.debug.
Confirm that debugging for Sun Cluster HA for Solaris Containers is active.
If debugging is active, daemon.debug is set in the file /etc/syslog.conf.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Restart the syslogd daemon.
# pkill -1 syslogd
Edit the /opt/SUNWsczone/sczbt/etc/config file to change DEBUG= to DEBUG=ALL or DEBUG=sczbt-rs.
# cat /opt/SUNWsczone/sczbt/etc/config
#
# Copyright 2005 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
To deactivate debugging, reverse the preceding steps.