Oracle Solaris Cluster Data Service for Solaris Containers Guide
Installing and Configuring HA for Solaris Containers
HA for Solaris Containers Overview
Overview of Installing and Configuring HA for Solaris Containers
Planning the HA for Solaris Containers Installation and Configuration
Restrictions for Zone Network Addresses
Restrictions for an HA Container
Restrictions for a Multiple-Masters Zone
Restrictions for the Zone Path of a Zone
Restrictions on Major Device Numbers in /etc/name_to_major
Installing and Configuring Zones
How to Enable a Zone to Run in Failover Configuration
How to Enable a Zone to Run in a Multiple-Masters Configuration
How to Install a Zone and Perform the Initial Internal Zone Configuration
Verifying the Installation and Configuration of a Zone
How to Verify the Installation and Configuration of a Zone
Installing the HA for Solaris Containers Packages
How to Install the HA for Solaris Containers Packages
Registering and Configuring HA for Solaris Containers
Specifying Configuration Parameters for the Zone Boot Resource
Writing Scripts for the Zone Script Resource
Specifying Configuration Parameters for the Zone Script Resource
Writing a Service Probe for the Zone SMF Resource
Specifying Configuration Parameters for the Zone SMF Resource
How to Create and Enable Resources for the Zone Boot Component
How to Create and Enable Resources for the Zone Script Component
How to Create and Enable Resources for the Zone SMF Component
Verifying the HA for Solaris Containers Installation and Configuration
How to Verify the HA for Solaris Containers Installation and Configuration
Patching the Global Zone and Non-Global Zones
How to Patch the Global Zone and Non-Global Zones
Tuning the HA for Solaris Containers Fault Monitors
Operation of the HA for Solaris Containers Parameter File
Operation of the Fault Monitor for the Zone Boot Component
Operation of the Fault Monitor for the Zone Script Component
Operation of the Fault Monitor for the Zone SMF Component
Tuning the HA for Solaris Containers Stop_timeout property
Choosing the Stop_timeout value for the Zone Boot Component
Choosing the Stop_timeout value for the Zone Script Component
Choosing the Stop_timeout value for the Zone SMF Component
Denying Cluster Services for a Non-Global Zone
Debugging HA for Solaris Containers
How to Activate Debugging for HA for Solaris Containers
A. Files for Configuring HA for Solaris Containers Resources
This section contains the information you need to plan your HA for Solaris Containers installation and configuration.
The configuration restrictions in the subsections that follow apply only to HA for Solaris Containers.
Caution - Your data service configuration might not be supported if you do not observe these restrictions.
The configuration of a zone's network addresses depends on the level of high availability (HA) you require. You can choose between no HA, HA through the use of only IPMP, or HA through the use of IPMP and SUNW.LogicalHostName.
Your choice of network address configuration for a zone affects some configuration parameters for the zone boot resource. For more information, see Registering and Configuring HA for Solaris Containers.
If HA for the zone's addresses is not required, then configure the zone's addresses by using the zonecfg utility.
If only HA through IPMP protection is required, then configure the zone's addresses by using the zonecfg utility and place the zone's addresses on an adapter within an IPMP group.
If you require HA through IPMP protection plus a failover that is triggered when all physical interfaces fail, choose one of the following options:
If you require the SUNW.LogicalHostName resource type to manage one or a subset of the zone's addresses, configure a SUNW.LogicalHostName resource for those addresses rather than configuring them with the zonecfg utility. Use the zonecfg utility only to configure the zone's addresses that do not need to be under the control of the SUNW.LogicalHostName resource.
If you require the SUNW.LogicalHostName resource type to manage all the zone's addresses, configure a SUNW.LogicalHostName resource with a list of the zone's addresses and do not configure them by using the zonecfg utility.
Otherwise, configure the zone's addresses by using the zonecfg utility and configure a separate redundant IP address for use by a SUNW.LogicalHostName resource; this redundant address must not be configured by using the zonecfg utility.
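For the IPMP-only case above, the zonecfg steps might look like the following sketch. The zone name, address, and adapter name are assumptions; sky0 stands for an adapter that is already a member of an IPMP group on the hosting node.

```
# Hypothetical example: zone "myzone", address 192.168.10.50.
zonecfg -z myzone
zonecfg:myzone> add net
zonecfg:myzone:net> set address=192.168.10.50
zonecfg:myzone:net> set physical=sky0
zonecfg:myzone:net> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```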
If the ip-type=exclusive option is set with zonecfg in the zone configuration for the configured sczbt resource, the SC_NETWORK variable in the sczbt_config file must be set to false for the sczbt resource to register successfully. If the ip-type=exclusive option is set for the non-global zone, do not configure a dependency on the SUNW.LogicalHostname resource from the sczbt resource.
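As an illustration of the rule above, the relevant entry in the sczbt_config file for an exclusive-IP zone would be the following (an excerpt only, not a complete parameter file):

```
# Excerpt from a hypothetical sczbt_config file for a zone
# configured with ip-type=exclusive: network addresses are
# managed inside the zone itself, so sczbt must not manage them.
SC_NETWORK=false
```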
The zone path of a zone in an HA container configuration must reside on a highly available local file system. The zone must be configured on each cluster node where the zone can reside.
The zone is active on only one node at a time, and the zone's address is plumbed on only one node at a time. Application clients can then reach the zone through the zone's address, wherever that zone resides within the cluster.
Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
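Disabling and verifying the autoboot property might be done as follows; the zone name is an assumption.

```
# Prevent the zone from booting when the global zone boots,
# so that HA for Solaris Containers controls zone startup.
zonecfg -z myzone set autoboot=false

# Verify the setting.
zonecfg -z myzone info autoboot
```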
The zone path of a zone in a multiple-masters configuration must reside on the local disks of each node. The zone must be configured with the same name on each node that can master the zone.
Each zone that is configured to run within a multiple-masters configuration must also have a zone-specific address. Load balancing for applications in these configurations is typically provided by an external load balancer. You must configure this load balancer for the address of each zone. Application clients can then reach the zone through the load balancer's address.
Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
The zone path of a zone that HA for Solaris Containers manages cannot reside on a global file system.
If the zone is in a failover configuration, the zone path must reside on a highly available local file system.
If the zone is in a multiple-masters configuration, the zone path must reside on the local disks of each node.
For shared devices, Solaris Cluster requires that the major and minor device numbers are identical on all nodes in the cluster. If the device is required for a zone, ensure that the major device number is the same in /etc/name_to_major on all nodes in the cluster that will host the zone.
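One way to check this requirement is to copy /etc/name_to_major from each node and compare the major numbers of the devices the zone needs. The sketch below uses sample data standing in for two nodes' files; the file names and device names are assumptions.

```shell
# Print the major number recorded for device $2 in file $1.
major_of() {
    awk -v dev="$2" '$1 == dev { print $2 }' "$1"
}

# Sample data standing in for /etc/name_to_major copied from two nodes.
printf 'sd 32\nmd 85\n' > node1_name_to_major
printf 'sd 32\nmd 86\n' > node2_name_to_major

# Compare each required device's major number across the nodes.
for dev in sd md; do
    m1=$(major_of node1_name_to_major "$dev")
    m2=$(major_of node2_name_to_major "$dev")
    if [ "$m1" = "$m2" ]; then
        echo "$dev: major number $m1 matches on both nodes"
    else
        echo "$dev: MISMATCH ($m1 vs $m2)"
    fi
done
```

A mismatch means the device must be reconfigured so that the same major number appears in /etc/name_to_major on every node that will host the zone.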
The configuration requirements in this section apply only to HA for Solaris Containers.
Caution - If your data service configuration does not conform to these requirements, the data service configuration might not be supported.
The dependencies between the HA for Solaris Containers components are described in the following table:
Table 2 Dependencies Between HA for Solaris Containers Components
These dependencies are set when you register and configure HA for Solaris Containers. For more information, see Registering and Configuring HA for Solaris Containers.
The zone script resource and SMF resource are optional. If used, multiple instances of the zone script resource and SMF resource can be deployed within the same resource group as the zone boot resource. Furthermore, if more elaborate dependencies are required then refer to the r_properties(5) and rg_properties(5) man pages for further dependencies and affinities settings.
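As a sketch of such a dependency, making a zone script resource depend on the zone boot resource could be expressed with the clresource command; the resource names here are assumptions.

```
# Hypothetical resources: zone-script-rs depends on zone-boot-rs,
# so the script resource starts only after the zone has booted.
clresource set -p Resource_dependencies=zone-boot-rs zone-script-rs
```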
The boot component and script component of HA for Solaris Containers require a parameter file to pass configuration information to the data service. You must create a directory for these files. The directory location must be available on the node that is to host the zone and must not be in the zone's zone path. The directory must be accessible only from the global zone. The parameter file for each component is created automatically when the resource for the component is registered.
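Creating that directory might look like the following sketch. The directory name is hypothetical; in production you would choose an absolute path in the global zone, outside every zone's zone path.

```shell
# Hypothetical directory for the sczbt and sczsh parameter files.
PARAMDIR=sczbt-params

mkdir -p "$PARAMDIR"
chmod 700 "$PARAMDIR"   # accessible only from the global zone's root user
ls -ld "$PARAMDIR"
```

The parameter file for each component is then created in this directory automatically when the component's resource is registered.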