Sun Cluster Data Service for Solaris Containers Guide for Solaris OS

Installing and Configuring Sun Cluster HA for Solaris Containers

This chapter explains how to install and configure Sun Cluster HA for Solaris Containers.

This chapter contains the following sections.

Sun Cluster HA for Solaris Containers Overview

A Solaris Container is a complete runtime environment for applications. Solaris 10 Resource Manager and Solaris Zones software partitioning technology are both parts of the container. These components address different qualities the container can deliver and work together to create a complete container. The zones portion of the container provides a virtual mapping from the application to the platform resources. Zones allow application components to be isolated from one another even though the zones share a single instance of the Solaris Operating System. Resource management features permit you to allocate the quantity of resources that a workload receives.

The Solaris Zones facility in the Solaris Operating System provides an isolated and secure environment in which to run applications on your system. When you create a zone, you produce an application execution environment in which processes are isolated from the rest of the system.

This isolation prevents processes that are running in one zone from monitoring or affecting processes that are running in other zones. Even a process that is running with superuser credentials cannot view or affect activity in other zones. A zone also provides an abstract layer that separates applications from the physical attributes of the machine on which they are deployed. Examples of these attributes include physical device paths.

Every Solaris system contains a global zone. The global zone is both the default zone for the system and the zone that is used for system-wide administrative control. Non-global zones are referred to as zones and are created by the administrator of the global zone.

Sun Cluster HA for Solaris Containers enables Sun Cluster to manage Solaris Zones by providing components to perform the following operations:

You can configure Sun Cluster HA for Solaris Containers as a failover service or a multiple-masters service. You cannot configure Sun Cluster HA for Solaris Containers as a scalable service.

When a Solaris Zone is managed by the Sun Cluster HA for Solaris Containers data service, the Solaris Zone becomes a Solaris HA container or a multiple-masters Solaris Zone across the Sun Cluster nodes. Failover of a Solaris HA container is managed by the Sun Cluster HA for Solaris Containers data service, which runs only within the global zone.

For conceptual information about failover data services, multiple-masters data services, and scalable data services, see Sun Cluster Concepts Guide for Solaris OS.

Overview of Installing and Configuring Sun Cluster HA for Solaris Containers

The following table summarizes the tasks for installing and configuring Sun Cluster HA for Solaris Containers and provides cross-references to detailed instructions for performing these tasks. Perform the tasks in the order that they are listed in the table.

Table 1 Tasks for Installing and Configuring Sun Cluster HA for Solaris Containers

Task 

Instructions 

Plan the installation 

Planning the Sun Cluster HA for Solaris Containers Installation and Configuration

Install and configure the Solaris Zones 

Installing and Configuring Zones

Verify installation and configuration 

How to Verify the Installation and Configuration of a Zone

Install Sun Cluster HA for Solaris Containers Packages 

Installing the Sun Cluster HA for Solaris Containers Packages

Register and configure Sun Cluster HA for Solaris Containers components 

Registering and Configuring Sun Cluster HA for Solaris Containers

Verify Sun Cluster HA for Solaris Containers Installation and Configuration 

Verifying the Sun Cluster HA for Solaris Containers Installation and Configuration

Apply patches to the global zone and non-global zones 

Patching the Global Zone and Non-Global Zones

Tune the Sun Cluster HA for Solaris Containers fault monitors 

Tuning the Sun Cluster HA for Solaris Containers Fault Monitors

Tune the Sun Cluster HA for Solaris Containers Stop_timeout property

Tuning the Sun Cluster HA for Solaris Containers Stop_timeout property

Debug Sun Cluster HA for Solaris Containers 

Debugging Sun Cluster HA for Solaris Containers

Planning the Sun Cluster HA for Solaris Containers Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for Solaris Containers installation and configuration.

Configuration Restrictions

The configuration restrictions in the subsections that follow apply only to Sun Cluster HA for Solaris Containers.


Caution –

Your data service configuration might not be supported if you do not observe these restrictions.


Restrictions for Zone Network Addresses

The configuration of a zone's network addresses depends on the level of high availability (HA) you require. You can choose between no HA, HA through the use of only IPMP, or HA through the use of IPMP and SUNW.LogicalHostName.

Your choice of a zone's network addresses configuration affects some configuration parameters for the zone boot resource. For more information, see Registering and Configuring Sun Cluster HA for Solaris Containers.

If the ip-type=exclusive option is set with the zonecfg utility in the zone configuration for the configured sczbt resource, the SC_NETWORK variable in the sczbt_config file must be set to false for the sczbt resource to register successfully. If the ip-type=exclusive option is set for the non-global zone, do not configure a resource dependency on the SUNW.LogicalHostname resource from the sczbt resource.
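
For example, you can confirm the zone's IP type from the global zone and check the corresponding setting in your copy of the sczbt_config file. In the following sketch the zone name zone1 and the file /mypath/sczbt_config are illustrative assumptions:

# zonecfg -z zone1 "info ip-type"        # zone1 is an assumed zone name
ip-type: exclusive
# grep SC_NETWORK /mypath/sczbt_config   # /mypath/sczbt_config is an assumed copy of sczbt_config
SC_NETWORK=false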

Restrictions for an HA Container

The zone path of a zone in an HA container configuration must reside on a highly available local file system. The zone must be configured on each cluster node where the zone can reside.

The zone is active on only one node at a time, and the zone's address is plumbed on only one node at a time. Application clients can then reach the zone through the zone's address, wherever that zone resides within the cluster.

Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The Sun Cluster HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
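
For example, the following commands set and confirm the autoboot property from the global zone; the zone name zone1 is an assumption:

# zonecfg -z zone1 "set autoboot=false"   # zone1 is an assumed zone name
# zonecfg -z zone1 "info autoboot"
autoboot: false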

Restrictions for a Multiple-Masters Zone

The zone path of a zone in a multiple-masters configuration must reside on the local disks of each node. The zone must be configured with the same name on each node that can master the zone.

Each zone that is configured to run within a multiple-masters configuration must also have a zone-specific address. Load balancing for applications in these configurations is typically provided by an external load balancer. You must configure this load balancer for the address of each zone. Application clients can then reach the zone through the load balancer's address.

Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The Sun Cluster HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.

Restrictions for the Zone Path of a Zone

The zone path of a zone that Sun Cluster HA for Solaris Containers manages cannot reside on a global file system.

Restrictions on Major Device Numbers in /etc/name_to_major

For shared devices, Sun Cluster requires that the major and minor device numbers are identical on all nodes in the cluster. If the device is required for a zone, ensure that the major device number is the same in /etc/name_to_major on all nodes in the cluster that will host the zone.
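
For example, the following check compares a driver's major number across nodes; the lofi driver and the number shown are purely illustrative, and the output must be identical on every node that will host the zone:

# grep '^lofi ' /etc/name_to_major   # run this on each cluster node; lofi is an example driver
lofi 147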

Configuration Requirements

The configuration requirements in this section apply only to Sun Cluster HA for Solaris Containers.


Caution –

If your data service configuration does not conform to these requirements, the data service configuration might not be supported.


Dependencies Between Sun Cluster HA for Solaris Containers Components

The dependencies between the Sun Cluster HA for Solaris Containers components are described in the following table:

Table 2 Dependencies Between Sun Cluster HA for Solaris Containers Components

Component 

Dependency 

Zone boot resource (sczbt) 

SUNW.HAStoragePlus - In a failover configuration, the zone's zone path must be on a highly available file system that is managed by a SUNW.HAStoragePlus resource.

SUNW.LogicalHostName - This dependency is required only if the zone's address is managed by a SUNW.LogicalHostName resource.

Zone script resource (sczsh) 

Zone boot resource 

Zone SMF resource (sczsmf) 

Zone boot resource 

These dependencies are set when you register and configure Sun Cluster HA for Solaris Containers. For more information, see Registering and Configuring Sun Cluster HA for Solaris Containers.

The zone script resource and SMF resource are optional. If used, multiple instances of the zone script resource and SMF resource can be deployed within the same resource group as the zone boot resource. If more elaborate dependencies are required, refer to the r_properties(5) and rg_properties(5) man pages for additional dependency and affinity settings.

Parameter File Directory for Sun Cluster HA for Solaris Containers

The boot component and script component of Sun Cluster HA for Solaris Containers require a parameter file to pass configuration information to the data service. You must create a directory for these files. The directory location must be available on the node that is to host the zone and must not be in the zone's zone path. The directory must be accessible only from the global zone. The parameter file for each component is created automatically when the resource for the component is registered.
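
For example, the following command creates a parameter file directory that matches the sample configuration files later in this chapter; the path /global/zones/pfiles is an assumption and must be reachable from the global zone of every node that can host the zone:

# mkdir -p /global/zones/pfiles   # assumed path; must not reside in the zone's zone path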

Installing and Configuring Zones

Installing and configuring Solaris Zones involves the following tasks:

  1. Enabling a zone to run in your chosen data service configuration, as explained in the following sections:

  2. Installing and configuring a zone, as explained in:

Perform this task for each zone that you are installing and configuring. This section explains only the special requirements for installing Solaris Zones for use with Sun Cluster HA for Solaris Containers. For complete information about installing and configuring Solaris Zones, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

How to Enable a Zone to Run in a Failover Configuration

  1. Register the SUNW.HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  2. Create a failover resource group.


    # clresourcegroup create solaris-zone-resource-group
    
  2. Create a resource for the zone's disk storage.


    # clresource create \
    -g solaris-zone-resource-group \
    -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=solaris-zone-instance-mount-points \
    solaris-zone-has-resource-name
    
  4. (Optional) Create a resource for the zone's logical hostname.


    # clreslogicalhostname create \
    -g solaris-zone-resource-group \
    -h solaris-zone-logical-hostname \
    solaris-zone-logical-hostname-resource-name
    
  5. Enable the failover resource group.


    # clresourcegroup online -M solaris-zone-resource-group
    

How to Enable a Zone to Run in a Multiple-Masters Configuration

  1. Create a scalable resource group.


    # clresourcegroup create \
    -p Maximum_primaries=max-number \
    -p Desired_primaries=desired-number \
    solaris-zone-resource-group
    
  2. Enable the scalable resource group.


    # clresourcegroup online -M solaris-zone-resource-group
    

How to Install a Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the zone.


Note –

For complete information about installing a zone, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.


Before You Begin

Determine the following requirements for the deployment of the zone with Sun Cluster:

Ensure that the zone is configured.

If the zone that you are installing is to run in a failover configuration, configure the zone's zone path to specify a highly available local file system. The file system must be managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration.
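
A minimal zonecfg session for such a failover zone might look like the following sketch; the zone name zone1 and the zone path /zones/zone1 are assumptions, and /zones/zone1 must reside on the highly available file system:

# zonecfg -z zone1        # zone1 and /zones/zone1 below are assumed names
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set autoboot=false
zonecfg:zone1> commit
zonecfg:zone1> exit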

For detailed information about configuring a zone before installation of the zone, see the following documentation:

  1. If the zone is to run in a failover configuration, ensure that the zone's zone path can be created on the zone's disk storage.

    If the zone is to run in a multiple-masters configuration, omit this step.

    1. On the node where you are installing the zone, bring online the resource group that contains the resource for the zone's disk storage.


      # clresourcegroup switch -n node solaris-zone-resource-group
      
    2. If the zone's zone path already exists on the zone's disk storage, remove the zone path.

      The zone's zone path already exists on the zone's disk storage if you have previously installed the zone on another node.


      Caution –

      If the zone is to run in a failover configuration, every node that can host the zone must have an identical configuration for that zone. After you install the zone on the first node, the zone's zone path already exists on the zone's disk storage. You must therefore remove the zone path on each subsequent node before you create and install the zone there; otherwise, the next two steps fail. Only the zone path that is created on the last node is kept as the final zone path for the HA container. For that reason, perform any configuration and customization within the HA container only after the zone is known to all nodes that can host it.


  2. Create the zone.


    # zonecfg -z zone
    

    For more detailed information about creating a zone, see Configuring, Verifying, and Committing a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

  3. Install the zone.


    # zoneadm -z zone install
    

    For more detailed information about installing a zone, see How to Install a Configured Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

  4. Perform the initial internal zone configuration. If the zone is to run in a failover configuration, perform this step on the last node.

    1. Log in to the zone's console.


      # zlogin -C zone
      

      You are prompted to configure the zone.

    2. Follow the prompts to configure the zone.

    3. Disconnect from the zone's console.

      Use the escape sequence that you defined for the zone. If you did not define an escape sequence, use the default escape sequence as follows:


      # ~.
      

Verifying the Installation and Configuration of a Zone

Before you install the Sun Cluster HA for Solaris Containers packages, verify that the zones that you created are correctly configured to run in a cluster. This verification does not confirm that the zones are highly available, because the Sun Cluster HA for Solaris Containers data service is not yet installed.

How to Verify the Installation and Configuration of a Zone

Perform this procedure for each zone that you created in Installing and Configuring Zones.

  1. Start the zone.


    # zoneadm -z zone boot
    
  2. Log in to the zone.


    # zlogin zone
    
  3. Perform the required task depending upon the brand type of the zone.

    • For a native brand type zone, confirm that the zone has reached the svc:/milestone/multi-user-server:default milestone.


      # svcs -a | grep milestone
      online         Apr_10   svc:/milestone/network:default
      online         Apr_10   svc:/milestone/devices:default
      online         Apr_10   svc:/milestone/single-user:default
      online         Apr_10   svc:/milestone/sysconfig:default
      online         Apr_10   svc:/milestone/name-services:default
      online         Apr_10   svc:/milestone/multi-user:default
      online         Apr_10   svc:/milestone/multi-user-server:default
    • For a lx brand type zone, confirm that the runlevel is 3.


      # runlevel
      N 3
    • For a solaris8 or solaris9 brand type zone, confirm that the legacy runlevel is 3.


      # who -r
              run-level 3  Sep 10 23:49     3      0  S
  4. Stop the zone.


    # zoneadm -z zone halt
    

Installing the Sun Cluster HA for Solaris Containers Packages

If you did not install the Sun Cluster HA for Solaris Containers packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java™ Enterprise System Installation Wizard.


Note –

You need to install the Sun Cluster HA for Solaris Containers packages in the global cluster and not in the zone cluster.


How to Install the Sun Cluster HA for Solaris Containers Packages

Perform this procedure on each cluster node where you are installing the Sun Cluster HA for Solaris Containers packages.

You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.

Before You Begin

Ensure that you have the Sun Java Availability Suite DVD-ROM.

If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.

  1. On the cluster node where you are installing the data service packages, become superuser.

  2. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.

  3. Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.

    • If you are installing the data service packages on the SPARC® platform, type the following command:


      # cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the data service packages on the x86 platform, type the following command:


      # cd /cdrom/cdrom0/Solaris_x86
      
  4. Start the Sun Java Enterprise System Installation Wizard.


    # ./installer
    
  5. When you are prompted, accept the license agreement.

    If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.

  6. From the list of Sun Cluster agents under Availability Services, select the data service for Solaris Zones.

  7. If you require support for languages other than English, select the option to install multilingual packages.

    English language support is always installed.

  8. When prompted whether to configure the data service now or later, choose Configure Later.

    Choose Configure Later to perform the configuration after the installation.

  9. Follow the instructions on the screen to install the data service packages on the node.

    The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.

  10. (GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.

    The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.

  11. Exit the Sun Java Enterprise System Installation Wizard.

  12. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      # eject cdrom
      
Next Steps

See Registering and Configuring Sun Cluster HA for Solaris Containers to register Sun Cluster HA for Solaris Containers and to configure the cluster for the data service.

Registering and Configuring Sun Cluster HA for Solaris Containers

Before you perform this procedure, ensure that the Sun Cluster HA for Solaris Containers data service packages are installed.

Use the configuration and registration files in the following directories to register the Sun Cluster HA for Solaris Containers resources:

The files define the dependencies that are required between the Sun Cluster HA for Solaris Containers components. For information about these dependencies, see Dependencies Between Sun Cluster HA for Solaris Containers Components.

Registering and configuring Sun Cluster HA for Solaris Containers involves the tasks that are explained in the following sections:

  1. Specifying Configuration Parameters for the Zone Boot Resource

  2. Writing Scripts for the Zone Script Resource

  3. Specifying Configuration Parameters for the Zone Script Resource

  4. Writing a Service Probe for the Zone SMF Resource

  5. Specifying Configuration Parameters for the Zone SMF Resource

  6. How to Create and Enable Resources for the Zone Boot Component

  7. How to Create and Enable Resources for the Zone Script Component

  8. How to Create and Enable Resources for the Zone SMF Component

Specifying Configuration Parameters for the Zone Boot Resource

Sun Cluster HA for Solaris Containers provides the script sczbt_register, which automates the process of configuring the zone boot resource. By default this script obtains configuration parameters from the sczbt_config file in the /opt/SUNWsczone/sczbt/util directory. To specify configuration parameters for the zone boot resource, copy the sczbt_config file to a different filename and amend it as described below. Keep the edited file for future reference. The register script provides the -f option to specify the fully qualified filename of the copied configuration file.
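
For example, assuming you keep your edited copy under /mypath (an illustrative location that is also used in the registration procedure later in this chapter):

# cp /opt/SUNWsczone/sczbt/util/sczbt_config /mypath/sczbt_config   # /mypath is an assumed directory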

Each configuration parameter in the sczbt_config file is defined as a keyword-value pair. The sczbt_config file already contains the required keywords and equals signs. For more information, see Listing of sczbt_config. When you edit the sczbt_config file, add the required value to each keyword.

The keyword-value pairs in the sczbt_config file are as follows:

RS=sczbt-rs
RG=sczbt-rg
PARAMETERDIR=sczbt-parameter-directory
SC_NETWORK=true|false
SC_LH=sczbt-lh-rs
FAILOVER=true|false
HAS_RS=sczbt-has-rs
Zonename=zone-name
Zonebrand=zone-brand-type
Zonebootopt=zone-boot-options
Milestone=zone-boot-milestone
LXrunlevel=linux-runlevel
SLrunlevel=solaris-legacy-runlevel
Mounts=list-of-mountpoints

The meaning and permitted values of the keywords in the sczbt_config file are as follows:

RS=sczbt-rs

Specifies the name that you are assigning to the zone boot resource. You must specify a value for this keyword.

RG=sczbt-rg

Specifies the name of the resource group the zone boot resource will reside in. You must specify a value for this keyword.

PARAMETERDIR=sczbt-parameter-directory

Specifies the directory name that you are assigning to the parameter directory where some variables and their values will be stored. You must specify a value for this keyword.

SC_NETWORK=true|false

Specifies whether the zone boot resource is network aware with a SUNW.LogicalHostName resource. You must specify a value for this keyword.

  • If HA for the zone's addresses is not required, then configure the zone's addresses by using the zonecfg utility.


    SC_NETWORK=false
    SC_LH=
  • If only HA through IPMP protection is required, then configure the zone's addresses by using the zonecfg utility and then place the zone's addresses on an adapter within an IPMP group.


    SC_NETWORK=false
    SC_LH=
  • If HA through IPMP protection and protection against the failure of all physical interfaces by triggering a failover is required, choose one option from the following list:

    • If you require the SUNW.LogicalHostName resource type to manage one or a subset of the zone's addresses, configure a SUNW.LogicalHostName resource for those addresses and do not configure them by using the zonecfg utility. Use the zonecfg utility to configure only the zone's addresses that are not to be under the control of the SUNW.LogicalHostName resource.


      SC_NETWORK=true
      SC_LH=Name of the SUNW.LogicalHostName resource
      
    • If you require the SUNW.LogicalHostName resource type to manage all the zone's addresses, configure a SUNW.LogicalHostName resource with a list of the zone's addresses and do not configure them by using the zonecfg utility.


      SC_NETWORK=true
      SC_LH=Name of the SUNW.LogicalHostName resources
      
    • Otherwise, configure the zone's addresses by using the zonecfg utility and configure a separate redundant IP address for use by a SUNW.LogicalHostName resource, which must not be configured using the zonecfg utility.


      SC_NETWORK=false
      SC_LH=Name of the SUNW.LogicalHostName resource
      
SC_LH=sczbt-lh-rs

Specifies the name of the SUNW.LogicalHostName resource for the zone boot resource. Refer to Restrictions for Zone Network Addresses for a description of when to set this variable. This name must be the SUNW.LogicalHostname resource name that you assigned when you created the resource in Step 4 of How to Enable a Zone to Run in a Failover Configuration.

FAILOVER=true|false

Specifies whether the zone's zone path is on a highly available file system.

HAS_RS=sczbt-has-rs

Specifies the name of the SUNW.HAStoragePlus resource for the zone boot resource. This name must be the SUNW.HAStoragePlus resource name you assigned when you created the resource in How to Enable a Zone to Run in a Failover Configuration. You must specify a value for this keyword if FAILOVER=true is set.

Zonename=zone-name

Specifies the zone name. You must specify a value for this keyword.

Zonebrand=zone-brand-type

Specifies the brand type of the zone. The options that are currently supported are native (default), lx, solaris8, or solaris9. You must specify a value for this keyword.

Zonebootopt=zone-boot-options

Specifies the zone boot option to use. Only -s is supported. Leaving this variable blank will cause the zone to boot to the multi-user-server milestone.

Milestone=zone-boot-milestone

Specifies the milestone the zone must reach to be considered successfully booted. This option is only used for the native brand type. You must specify a value for this keyword if you set the Zonebrand option to native.

LXrunlevel=linux-runlevel

Specifies the runlevel that needs to be attained before the zone is considered booted. This option is used only for the lx brand type. You must specify a value for this keyword if you set the Zonebrand option to lx.

SLrunlevel=solaris-legacy-runlevel

Specifies the legacy runlevel that needs to be attained before the zone is considered booted. This option is used only for the solaris8 and solaris9 brand types. You must specify a value for this keyword if you set the Zonebrand option to solaris8 or solaris9.

Mounts=list-of-mountpoints

Specifies a space-separated list of directories with their mount options, which are automatically lofs-mounted from the global zone into the booted zone. The mount point that is used in the global zone can be different from the mount point in the booted zone. Specifying a value for this keyword is optional.

The Mounts keyword format is as follows:


Mounts="/global-zone-dir:/local-zone-dir:mount-options <next entry>"

The mount-options part can be a comma-separated list of file system mount options.

The only required entry when setting this keyword is the /global-zone-dir part of the colon-separated value. The /local-zone-dir and mount-options parts can be omitted.

If you omit the /local-zone-dir part, the zone's mount point is the same as the global zone directory.

If you omit the mount-options part, no mount options are provided except the default options from the mount command.
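
The following sketch shows the three forms, using the entries from Example 1 below as separate illustrations:

Mounts="/global/app/bin:/app/bin:ro"   # lofs mount /global/app/bin on /app/bin in the zone, read-only
Mounts="/app/data:rw"                  # same mount point inside the zone, mounted read-write
Mounts="/logs"                         # same mount point, default mount options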


Note –

If you omit the /local-zone-dir or the mount-options part, you must also omit the corresponding ":" delimiter.



Note –

Before registering this resource within Sun Cluster, you must manually create within the booted zone any mount point directories that are used within the Mounts keyword.



Note –

If the file system of the source mount point in the global zone is mounted by a SUNW.HAStoragePlus resource, you must specify a strong resource dependency from the sczbt resource to this SUNW.HAStoragePlus resource.
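
A hedged example of setting such a dependency, assuming the resource names from Example 1 below (zone1-rs for the sczbt resource and zone1-has for the SUNW.HAStoragePlus resource):

# clresource set -p Resource_dependencies=zone1-has zone1-rs   # resource names are taken from Example 1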



Example 1 Sample sczbt_config File

This example shows an sczbt_config file in which configuration parameters are set as follows:

RS=zone1-rs
RG=zone1-rg
PARAMETERDIR=/global/zones/pfiles
SC_NETWORK=true
SC_LH=zone1-lh
FAILOVER=true
HAS_RS=zone1-has
Zonename=zone1
Zonebrand=native
Zonebootopt=
Milestone=multi-user-server
Mounts="/global/app/bin:/app/bin:ro /app/data:rw /logs"

Writing Scripts for the Zone Script Resource

The zone script resource provides the ability to run commands or scripts to start, stop, and probe an application within a zone. The zone script resource depends on the zone boot resource. The command or script names are passed to the zone script resource when the resource is registered and must meet the following requirements.

Table 3 Return codes

0: Successful completion

>0: An error has occurred

201: (Probe only) An error has occurred that requires an immediate failover of the resource group

>0 & !=201: (Probe only) An error has occurred that requires a resource restart


Note –

For an immediate failover of the zone script resource, you must configure the resource properties Failover_mode and Failover_enabled to meet the required behavior. Refer to the r_properties(5) man page when setting the Failover_mode property and the SUNW.gds(5) man page when setting the Failover_enabled property.
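
For example, assuming the zone script resource name zone1-script-rs from Example 3 below, the properties might be set as follows; the chosen values are illustrative only:

# clresource set -p Failover_mode=SOFT zone1-script-rs    # resource name and SOFT value are illustrative
# clresource set -p Failover_enabled=TRUE zone1-script-rs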



Example 2 Zone Probe Script for Apache2

This example shows a simple script that tests whether the Apache2 service is actually serving requests, rather than merely checking that its process tree exists. The script /var/tmp/probe-apache2 must exist and be executable within the zone.


# cat /var/tmp/probe-apache2
#!/usr/bin/ksh
if echo "GET; exit" | mconnect -p 80 > /dev/null 2>&1
then
	exit 0
else
	exit 100
fi

# chmod 755 /var/tmp/probe-apache2

Specifying Configuration Parameters for the Zone Script Resource

Sun Cluster HA for Solaris Containers provides the script sczsh_register, which automates the process of configuring the zone script resource. By default this script obtains configuration parameters from the sczsh_config file in the /opt/SUNWsczone/sczsh/util directory. To specify configuration parameters for the zone script resource, copy the sczsh_config file to a different filename and amend it as described below. Keep the edited file for future reference. The register script provides the -f option to specify the fully qualified filename of the copied configuration file.

Each configuration parameter in the sczsh_config file is defined as a keyword-value pair. The sczsh_config file already contains the required keywords and equals signs. For more information, see Listing of sczsh_config. When you edit the sczsh_config file, add the required value to each keyword.

The keyword-value pairs in the sczsh_config file are as follows:

RS=sczsh-rs
RG=sczbt-rg
SCZBT_RS=sczbt-rs
PARAMETERDIR=sczsh-parameter-directory
Zonename=sczbt-zone-name
ServiceStartCommand=sczsh-start-command
ServiceStopCommand=sczsh-stop-command
ServiceProbeCommand=sczsh-probe-command

The meaning and permitted values of the keywords in the sczsh_config file are as follows:

RS=sczsh-rs

Specifies the name that you are assigning to the zone script resource. You must specify a value for this keyword.

RG=sczbt-rg

Specifies the name of the resource group the zone boot resource resides in. You must specify a value for this keyword.

SCZBT_RS=sczbt-rs

Specifies the name of the zone boot resource. You must specify a value for this keyword.

PARAMETERDIR=sczsh parameter directory

Specifies the directory name that you are assigning to the parameter directory where the following variables and their values will be stored. You must specify a value for this keyword.

Zonename=sczbt-zone-name

Specifies the zone name. You must specify a value for this keyword.

ServiceStartCommand=sczsh-start-command

Specifies the zone start command or script to run. You must specify a value for this keyword.

ServiceStopCommand=sczsh-stop-command

Specifies the zone stop command or script to run. You must specify a value for this keyword.

ServiceProbeCommand=sczsh-probe-command

Specifies the zone probe command or script to run. You must specify a value for this keyword.


Example 3 Sample sczsh_config File

In this example the zone script resource uses the Apache2 scripts that are available in Solaris 10. Before this example can be used, the Apache2 configuration file httpd.conf must be configured. For the purpose of this example, the delivered httpd.conf-example file can be used. Copy the file as follows:


# zlogin zone1
# cd /etc/apache2
# cp httpd.conf-example httpd.conf
# exit

This example shows an sczsh_config file in which configuration parameters are set as follows:

RS="zone1-script-rs"
RG="zone1-rg"
SCZBT_RS="zone1-rs"
PARAMETERDIR="/global/zones/pfiles"
Zonename="zone1"
ServiceStartCommand="/lib/svc/method/http-apache2 start"
ServiceStopCommand="/lib/svc/method/http-apache2 stop"
ServiceProbeCommand="/var/tmp/probe-apache2"

Writing a Service Probe for the Zone SMF Resource

The zone SMF resource provides the ability to enable, disable, and probe an SMF service within a zone that is of brand type native. The zone SMF resource depends on the zone boot resource. Probing the SMF service is performed by running a command or script against the SMF service. The SMF service and probe command or script names are passed to the zone SMF resource when the resource is registered. The probe command or script must meet the following requirements.

Table 4 Return codes

0: Successful completion

100: An error occurred that requires a resource restart

201: An error has occurred that requires an immediate failover of the resource group


Note –

For an immediate failover of the zone SMF resource, you must configure the resource properties Failover_mode and Failover_enabled to meet the required behavior. Refer to the r_properties(5) man page when setting the Failover_mode property and the SUNW.gds(5) man page when setting the Failover_enabled property.



Example 4 Zone SMF Probe Script for Apache2

This example shows a simple script that tests whether the SMF Apache2 service is actually serving requests, rather than merely checking that its process tree exists. The script /var/tmp/probe-apache2 must exist and be executable within the zone.


# cat /var/tmp/probe-apache2
#!/usr/bin/ksh
if echo "GET; exit" | mconnect -p 80 > /dev/null 2>&1
then
	exit 0
else
	exit 100
fi

# chmod 755 /var/tmp/probe-apache2

Specifying Configuration Parameters for the Zone SMF Resource

Sun Cluster HA for Solaris Containers provides the script sczsmf_register, which automates the process of configuring the zone SMF resource. By default this script obtains configuration parameters from the sczsmf_config file in the /opt/SUNWsczone/sczsmf/util directory. To specify configuration parameters for the zone SMF resource, copy the sczsmf_config file to a different filename and amend it as described below. Keep the edited file for future reference. The register script provides the -f option to specify the fully qualified filename of the copied configuration file.

Each configuration parameter in the sczsmf_config file is defined as a keyword-value pair. The sczsmf_config file already contains the required keywords and equals signs. For more information, see Listing of sczsmf_config. When you edit the sczsmf_config file, add the required value to each keyword.

The keyword-value pairs in the sczsmf_config file are as follows:

RS=sczsmf-rs
RG=sczbt-rg
SCZBT_RS=sczbt-rs
ZONE=sczbt-zone-name
SERVICE=smf-service
RECURSIVE=true|false
STATE=true|false
SERVICE_PROBE=sczsmf-service-probe

The meaning and permitted values of the keywords in the sczsmf_config file are as follows:

RS=sczsmf-rs

Specifies the name that you are assigning to the zone SMF resource. This must be defined.

RG=sczbt-rg

Specifies the name of the resource group the zone boot resource resides in. This must be defined.

SCZBT_RS=sczbt-rs

Specifies the name of the zone boot resource. You must specify a value for this keyword.

ZONE=sczbt-zone-name

Specifies the zone name. This must be defined.

SERVICE=smf-service

Specifies the SMF service to enable/disable. This must be defined.

RECURSIVE=true|false

Specifies true to enable the service recursively or false to enable only the service and not its dependents. This must be defined.

STATE=true|false

Specifies true to wait until the service state is reached or false to not wait until the service state is reached. This must be defined.

SERVICE_PROBE=sczsmf-service-probe

Specifies the script that checks the SMF service. Specifying a value for this keyword is optional.


Example 5 Sample sczsmf_config File

In this example the zone SMF resource uses the Apache2 SMF service that is available in Solaris 10. Before this example can be used, the Apache2 configuration file httpd.conf must be configured. For the purpose of this example, the delivered httpd.conf-example file can be used. Copy the file as follows:


# zlogin zone1
# cd /etc/apache2
# cp httpd.conf-example httpd.conf
# exit

This example shows an sczsmf_config file in which configuration parameters are set as follows:

RS=zone1-smf-rs
RG=zone1-rg
SCZBT_RS=zone1-rs
ZONE=zone1
SERVICE=apache2
RECURSIVE=true
STATE=true
SERVICE_PROBE=/var/tmp/probe-apache2

How to Create and Enable Resources for the Zone Boot Component

Before You Begin

Ensure you have edited the sczbt_config file or a copy of it to specify configuration parameters for the Sun Cluster HA for Solaris Containers zone boot component. For more information, see Specifying Configuration Parameters for the Zone Boot Resource.

  1. Become superuser on one of the nodes in the cluster that will host the zone.

  2. Register the SUNW.gds resource type.


    # clresourcetype register SUNW.gds
    
  3. Go to the directory that contains the script for creating the Sun Cluster HA for Solaris Containers boot resource.


    # cd /opt/SUNWsczone/sczbt/util
    
  4. Run the script that creates the zone boot resource.


    # ./sczbt_register -f /mypath/sczbt_config
    
  5. Bring online the zone boot resource.


    # clresource enable sczbt-rs
    

How to Create and Enable Resources for the Zone Script Component

Before You Begin

Ensure you have edited the sczsh_config file or a copy of it to specify configuration parameters for the Sun Cluster HA for Solaris Containers zone script component. For more information, see Specifying Configuration Parameters for the Zone Script Resource.

  1. Go to the directory that contains the script for creating the Sun Cluster HA for Solaris Containers script resource.


    # cd /opt/SUNWsczone/sczsh/util
    
  2. Run the script that creates the zone script resource.


    # ./sczsh_register -f /mypath/sczsh_config
    
  3. Bring online the zone script resource.


    # clresource enable sczsh-rs
    

How to Create and Enable Resources for the Zone SMF Component

Before You Begin

Ensure you have edited the sczsmf_config file or a copy of it to specify configuration parameters for the Sun Cluster HA for Solaris Containers zone SMF component. For more information, see Specifying Configuration Parameters for the Zone SMF Resource.

  1. Go to the directory that contains the script for creating the Sun Cluster HA for Solaris Containers SMF resource.


    # cd /opt/SUNWsczone/sczsmf/util
    
  2. Run the script that creates the zone SMF resource.


    # ./sczsmf_register -f /mypath/sczsmf_config
    
  3. Bring online the zone SMF resource.


    # clresource enable sczsmf-rs
    

Verifying the Sun Cluster HA for Solaris Containers Installation and Configuration

After you install, register, and configure Sun Cluster HA for Solaris Containers, verify the installation and configuration. This verification determines whether the Sun Cluster HA for Solaris Containers data service makes your zones highly available.

How to Verify the Sun Cluster HA for Solaris Containers Installation and Configuration

  1. Become superuser on a cluster node that is to host the Solaris Zones component.

  2. Ensure all the Solaris Zone resources are online.

    For each resource, perform the following steps.

    1. Determine whether the resource is online.


      # cluster status -t rg,rs
      
    2. If the resource is not online, bring online the resource.


      # clresource enable solaris-zone-resource
      
  3. For a failover service configuration, switch the zone resource group to another cluster node, such as node2.


    # clresourcegroup switch -n node2 solaris-zone-resource-group
    
  4. Confirm that the resource is now online on node2.


    # cluster status -t rg,rs
    

Patching the Global Zone and Non-Global Zones

The procedure that follows is required only if you are applying a patch to the global zone and to non-global zones. If you are applying a patch to only the global zone, follow the instructions in Chapter 10, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.

How to Patch the Global Zone and Non-Global Zones

This task applies to both nonrebooting patches and rebooting patches.

Perform this task on all nodes in the cluster.

  1. Ensure that the node that you are patching can access the zone paths of all zones that are configured on the node.

    Some zones might be configured to run in a failover configuration. In this situation, on the node that you are patching, bring online the resource group that contains the resources for the zones' disk storage.


    # clresourcegroup switch -n node solaris-zone-resource-group
    

    Note –

    This step might also start any applications that are managed within the resource group solaris-zone-resource-group. Verify whether you need to stop any applications before installing the patches. If applications need to be stopped, disable the corresponding resources before proceeding to the next step.



    Note –

    If the patches must be applied in single-user mode, it is not possible to start the resource group as described. Instead, mount the corresponding zone paths manually.


  2. Apply the patch(es) to the node.

    For detailed instructions, see Chapter 10, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.

Tuning the Sun Cluster HA for Solaris Containers Fault Monitors

The Sun Cluster HA for Solaris Containers fault monitors verify that the following components are running correctly:

Each Sun Cluster HA for Solaris Containers fault monitor is contained in the resource that represents the Solaris Zones component. You create these resources when you register and configure Sun Cluster HA for Solaris Containers. For more information, see Registering and Configuring Sun Cluster HA for Solaris Containers.

System properties and extension properties of these resources control the behavior of the fault monitor. The default values of these properties determine the preset behavior of the fault monitor. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster HA for Solaris Containers fault monitor only if you need to modify this preset behavior.

Tuning the Sun Cluster HA for Solaris Containers fault monitors involves the following tasks:

For more information, see Tuning Fault Monitors for Sun Cluster Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Operation of the Sun Cluster HA for Solaris Containers Parameter File

The Sun Cluster HA for Solaris Containers zone boot and zone script resources use a parameter file to pass parameters to the start, stop, and probe commands. Changes to these parameters take effect every time the resource is restarted, enabled, or disabled.
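
For example, to change a parameter for the zone boot resource zone1-rs from Example 1, you might disable the resource, edit its parameter file in the parameter directory, and enable the resource again. The parameter file name shown here is an assumption:

# clresource disable zone1-rs
# vi /global/zones/pfiles/sczbt_zone1-rs   # assumed file name in the PARAMETERDIR from Example 1
# clresource enable zone1-rs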

Operation of the Fault Monitor for the Zone Boot Component

The fault monitor for the zone boot component ensures that all requirements for the zone boot component to run are met:

Operation of the Fault Monitor for the Zone Script Component

The fault monitor for the zone script component runs the script that you specify for the component. The value that this script returns to the fault monitor determines the action that the fault monitor performs. For more information, see Table 3.

Operation of the Fault Monitor for the Zone SMF Component

The fault monitor for the zone SMF component verifies that the SMF service is not disabled. If the service is disabled, the fault monitor restarts the SMF service. If this fault persists, the fault monitor fails over the resource group that contains the resource for the zone SMF component.

If the service is not disabled, the fault monitor runs the SMF service probe that you can specify for the component. The value that this probe returns to the fault monitor determines the action that the fault monitor performs. For more information, see Table 4.

Tuning the Sun Cluster HA for Solaris Containers Stop_timeout property

The Sun Cluster HA for Solaris Containers components are all of the resource type SUNW.gds(5). As described in Stop_command Property in Sun Cluster Data Services Developer’s Guide for Solaris OS, the value for the Stop_timeout property should be chosen so that the Stop_command can successfully return within 80% of its value.

Choosing the Stop_timeout value for the Zone Boot Component

The stop method for the zone boot component spends 60% of the Stop_timeout value performing a complete "shutdown -y -g0 -i0" within the zone. If that fails, the next 20% of the Stop_timeout value is spent halting the zone with "zoneadm -z zonename halt" and performing some additional cleanup steps to force the zone into the installed state. Therefore, compute the Stop_timeout value for the zone boot component so that 60% of it is enough to successfully shut down the zone.
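
For example, if a complete shutdown of the zone can take up to 6 minutes (360 seconds), a Stop_timeout of at least 600 seconds (360 / 0.6) leaves the required headroom; the resource name zone1-rs and the timing are assumptions:

# clresource set -p Stop_timeout=600 zone1-rs   # 360 seconds / 0.6 = 600 seconds; values are illustrative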

Choosing the Stop_timeout value for the Zone Script Component

The stop method for the zone script component calls the command or script that is configured for the ServiceStopCommand keyword. Therefore, compute the Stop_timeout value for the zone script component so that 80% of it is enough for the configured ServiceStopCommand to succeed.

Choosing the Stop_timeout value for the Zone SMF Component

The stop method for the zone SMF component spends 60% of the Stop_timeout value using svcadm to disable the configured SMF service in the zone. If that fails, the next 20% of the Stop_timeout value is spent sending first SIGTERM and then SIGKILL to the processes that are associated with this SMF service. Therefore, compute the Stop_timeout value for the zone SMF component so that 60% of it is enough to successfully disable the configured SMF service in the zone.

Debugging Sun Cluster HA for Solaris Containers

Each component of Sun Cluster HA for Solaris Containers has a config file in the /opt/SUNWsczone/zone-component/etc directory that enables you to activate debugging for Solaris Zone resources, where zone-component is sczbt for the boot component, sczsh for the script component, and sczsmf for the SMF component. The location of this file for each component is as follows:

How to Activate Debugging for Sun Cluster HA for Solaris Containers

  1. Determine whether debugging for Sun Cluster HA for Solaris Containers is active.

    If debugging is inactive, daemon.notice is set in the file /etc/syslog.conf.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                     operator
    #
  2. If debugging is inactive, edit the /etc/syslog.conf file to change daemon.notice to daemon.debug.

  3. Confirm that debugging for Sun Cluster HA for Solaris Containers is active.

    If debugging is active, daemon.debug is set in the file /etc/syslog.conf.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.debug;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                    operator
    #
  4. Restart the syslogd daemon.


    # svcadm restart system-log
    
  5. Edit the /opt/SUNWsczone/sczbt/etc/config file to change DEBUG= to DEBUG=ALL or DEBUG=sczbt-rs.


    # cat /opt/SUNWsczone/sczbt/etc/config
    #
    # Copyright 2006 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    #
    # ident "@(#)config     1.1     06/02/22 SMI"
    #
    # Usage:
    #       DEBUG=<RESOURCE_NAME> or ALL
    #
    DEBUG=ALL
    #

    Note –

    To deactivate debugging, reverse the preceding steps.