Oracle Solaris Cluster Data Service for Solaris Containers Guide


Planning the HA for Solaris Containers Installation and Configuration

This section contains the information you need to plan your HA for Solaris Containers installation and configuration.

Configuration Restrictions

The configuration restrictions in the subsections that follow apply only to HA for Solaris Containers.



Caution - Your data service configuration might not be supported if you do not observe these restrictions.


Restrictions for Zone Network Addresses

The configuration of a zone's network addresses depends on the level of high availability (HA) you require. You can choose between no HA, HA through the use of only IPMP, or HA through the use of IPMP and SUNW.LogicalHostName.

Your choice of network address configuration for a zone affects some configuration parameters for the zone boot resource. For more information, see Registering and Configuring HA for Solaris Containers.

If the ip-type=exclusive option is set with zonecfg in the zone configuration for the configured sczbt resource, the SC_NETWORK variable in the sczbt_config file must be set to false for the sczbt resource to register successfully. If the ip-type=exclusive option is set for the non-global zone, do not configure a resource dependency on a SUNW.LogicalHostname resource from the sczbt resource.
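
For example, with a zone named zone1 (a placeholder name), you might confirm the zone's IP type from the global zone and then disable network management by the sczbt resource in its sczbt_config file before registering the resource. This is only a sketch of the relevant fragment:

# zonecfg -z zone1 info ip-type
ip-type: exclusive

In the sczbt_config file for the resource:

SC_NETWORK=false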

Restrictions for an HA Container

The zone path of a zone in an HA container configuration must reside on a highly available local file system. The zone must be configured on each cluster node where the zone can reside.

The zone is active on only one node at a time, and the zone's address is plumbed on only one node at a time. Application clients can then reach the zone through the zone's address, wherever that zone resides within the cluster.

Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
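
The following zonecfg fragment illustrates both restrictions when you configure the zone. The zone name zone1 and the zone path /ha-zones/zone1 are placeholders, and the file system that contains /ha-zones is assumed to be a highly available local file system managed by a SUNW.HAStoragePlus resource:

# zonecfg -z zone1
zonecfg:zone1> set zonepath=/ha-zones/zone1
zonecfg:zone1> set autoboot=false
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit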

Restrictions for a Multiple-Masters Zone

The zone path of a zone in a multiple-masters configuration must reside on the local disks of each node. The zone must be configured with the same name on each node that can master the zone.

Each zone that is configured to run within a multiple-masters configuration must also have a zone-specific address. Load balancing for applications in these configurations is typically provided by an external load balancer. You must configure this load balancer for the address of each zone. Application clients can then reach the zone through the load balancer's address.

Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
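
One way to confirm that the zone is configured identically, with autoboot set to false, on every node that can master it is to check the property and export the configuration on each node. The zone name zone1 is a placeholder:

# zonecfg -z zone1 info autoboot
autoboot: false
# zonecfg -z zone1 export > /var/tmp/zone1.cfg

Compare the exported /var/tmp/zone1.cfg files from all nodes, for example with the diff command.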

Restrictions for the Zone Path of a Zone

The zone path of a zone that HA for Solaris Containers manages cannot reside on a global file system.

Restrictions on Major Device Numbers in /etc/name_to_major

For shared devices, Solaris Cluster requires that the major and minor device numbers are identical on all nodes in the cluster. If the device is required for a zone, ensure that the major device number is the same in /etc/name_to_major on all nodes in the cluster that will host the zone.
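
For example, if the zone needs a device whose driver is named xyzdrv (a placeholder), check the driver's entry on each node that will host the zone. The major number shown is only illustrative and must be the same on every node:

# grep '^xyzdrv ' /etc/name_to_major
xyzdrv 245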

Configuration Requirements

The configuration requirements in this section apply only to HA for Solaris Containers.



Caution - If your data service configuration does not conform to these requirements, the data service configuration might not be supported.


Dependencies Between HA for Solaris Containers Components

The dependencies between the HA for Solaris Containers components are described in the following table:

Table 1-2 Dependencies Between HA for Solaris Containers Components

Component: Zone boot resource (sczbt)
Dependencies:
SUNW.HAStoragePlus - In a failover configuration, the zone's zone path must be on a highly available file system that is managed by a SUNW.HAStoragePlus resource.
SUNW.LogicalHostName - This dependency is required only if the zone's address is managed by a SUNW.LogicalHostName resource.

Component: Zone script resource (sczsh)
Dependency: Zone boot resource

Component: Zone SMF resource (sczsmf)
Dependency: Zone boot resource

These dependencies are set when you register and configure HA for Solaris Containers. For more information, see Registering and Configuring HA for Solaris Containers.

The zone script resource and the zone SMF resource are optional. If used, multiple instances of the zone script resource and the zone SMF resource can be deployed within the same resource group as the zone boot resource. If more elaborate dependencies are required, see the r_properties(5) and rg_properties(5) man pages for additional dependency and affinity settings.
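
After registration, you can inspect or adjust these dependencies with the standard Solaris Cluster commands. The resource names zone1-rs (zone boot resource), zone1-hasp-rs, and zone1-lh-rs used below are placeholders:

# clresource show -p Resource_dependencies zone1-rs
# clresource set -p Resource_dependencies=zone1-hasp-rs,zone1-lh-rs zone1-rs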

Parameter File Directory for HA for Solaris Containers

The boot component and script component of HA for Solaris Containers require a parameter file to pass configuration information to the data service. You must create a directory for these files. The directory location must be available on the node that is to host the zone and must not be in the zone's zone path. The directory must be accessible only from the global zone. The parameter file for each component is created automatically when the resource for the component is registered.
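
For example, you might create the parameter file directory in the global zone as follows. The path /zones/params is only an illustration and must lie outside the zone's zone path:

# mkdir -p /zones/params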


Note - If a multiple-masters zone configuration is being deployed, you must ensure that the parameter directory exists on all nodes that will host the zone. You can either place separate copies of the parameter directory on local storage on each node or place the parameter directory on a cluster file system. The advantage of a cluster file system is that only one copy of the parameter directory exists. If you keep separate copies on local storage, you must ensure that their contents are kept up to date after you register an sczbt or sczsh resource and before you enable the resource.
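
If you keep separate local copies, one simple way to propagate updates to another node before enabling the resource is to copy the directory contents from the node where the resource was registered. The node name phys-node2 and the path are placeholders:

# scp -p /zones/params/* phys-node2:/zones/params/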