
Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide


Updated: September 2015
 
 

Planning the HA for Solaris Zones Installation and Configuration

This section contains the information you need to plan your HA for Solaris Zones installation and configuration. The configuration restrictions and requirements in the subsections that follow apply only to HA for Solaris Zones.


Caution  -  Your data service configuration might not be supported if you do not observe these restrictions.


Requirements and Restrictions for Zone Network Addresses

  • The configuration of a zone's network addresses depends on the level of high availability (HA) that you require and on the configured ip-type option. You can choose no HA, HA through public network management (PNM) objects only, or HA through PNM objects and SUNW.LogicalHostName (ip-type=shared only). PNM objects include Internet Protocol network multipathing (IPMP) groups, trunk and datalink multipathing (DLMP) link aggregations, and VNICs that are directly backed by link aggregations.

  • Your choice of a zone's network addresses configuration affects some configuration parameters for the zone boot resource. For more information, see Registering and Configuring HA for Solaris Zones.

  • The following restrictions apply if ip-type is set to shared:

    • If HA for the zone's addresses is not required, configure the zone's addresses by using the zonecfg utility.

    • If only HA through PNM protection in the global zone is required, configure the zone's addresses by using the zonecfg utility and place the zone's addresses on an adapter within a PNM object.

    • If HA through PNM protection in the global zone is required, as well as protection against the failure of all physical interfaces by triggering a failover, choose one of the following options:

      • If you require the SUNW.LogicalHostName resource type to manage one or a subset of the zone's addresses, configure a SUNW.LogicalHostName resource for those addresses and do not configure them by using the zonecfg utility. Use the zonecfg utility only to configure the zone's addresses that are not required to be under the control of the SUNW.LogicalHostName resource.

      • If you require the SUNW.LogicalHostName resource type to manage all the zone's addresses, configure a SUNW.LogicalHostName resource with a list of the zone's addresses and do not configure them by using the zonecfg utility.

      • Otherwise, configure the zone's addresses by using the zonecfg utility, and configure a separate redundant IP address in the same subnet for use by a SUNW.LogicalHostName resource. Do not configure that redundant address by using the zonecfg utility.

  • The following restrictions apply if ip-type is set to exclusive:

    • The SC_NETWORK variable in the sczbt_config file must be set to false to successfully register the sczbt resource, as shown in the example after this list.

    • Do not configure a resource dependency on a SUNW.LogicalHostname resource from the sczbt resource.

    • A linkname is required for anet resources within zonecfg. Set the linkname value explicitly instead of using the auto option.

  • The zone network addresses that are managed by a SUNW.LogicalHostname resource are configured for the zone and unconfigured from the zone asynchronously during the boot and shutdown of the zone. An application that uses these network addresses must be managed by either the sczsh component or the sczsmf component, to ensure that the application is started and stopped in the correct order relative to the corresponding network addresses.

    If the application is started by runlevel or SMF services within the zone, without using the sczsh or sczsmf component, then the network addresses used by that application must be configured using the zonecfg utility and must not be managed by a SUNW.LogicalHostname resource.
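
The following commands sketch these options. This is an illustrative sketch only: the zone name myzone, resource group zone-rg, logical hostname zone-lh, resource name zone-lh-rs, IP address, IPMP group sc_ipmp0, and link aggregation aggr0 are assumed placeholder values, not names from your configuration.

    Shared-IP zone, address protected only by PNM in the global zone (configured with the zonecfg utility on an adapter within a PNM object):

    # zonecfg -z myzone 'add net; set address=192.168.10.50/24; set physical=sc_ipmp0; end'

    Shared-IP zone, address managed by a SUNW.LogicalHostName resource (not configured with the zonecfg utility):

    # clreslogicalhostname create -g zone-rg -h zone-lh zone-lh-rs

    Exclusive-IP zone, with SC_NETWORK=false set in the sczbt_config file and an explicit linkname for the anet resource:

    # zonecfg -z myzone 'add anet; set linkname=net1; set lower-link=aggr0; end'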

Requirements and Restrictions for an HA Zone

  • If the rootzpool zone property is not set, the zone path of a non-global zone in an HA Zone configuration must reside on a highly available local file system. The zone must be configured on each cluster node where the zone can reside.

  • The zone is active on only one node at a time, and the zone's address is plumbed on only one node at a time. Application clients can then reach the zone through the zone's address, wherever that zone resides within the cluster.

  • Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the host global zone is booted. The HA for Solaris Zones data service can manage a zone only if the zone is booted under the control of the data service.

  • Ensure that the zone configuration defines a generic attribute named osc-ha-zone, of type boolean and value true, as shown in the example after this list. This attribute is used by the svc:/system/cluster/osc-ha-zone-state-cleanup SMF service on each node to identify a zone that is controlled by the sczbt component. This SMF service must be enabled.

  • For a solaris branded zone, failover behavior differs depending on the version of Oracle Solaris.

    • On Oracle Solaris 11.2, the last zone boot environment that was booted is first cloned and then activated on the node.

    • On Oracle Solaris 11.3, the zone is attached using the -x deny-zbe-clone option of the zoneadm attach command. For more information about this option, see the zoneadm(1M) man page.

  • For a solaris-kz branded zone, observe the following restrictions:

    • You cannot specify the Mounts variable within the sczbt configuration file.

    • You cannot set the SC_NETWORK variable to true within the sczbt configuration file.

  • For a solaris-kz branded zone set with Migrationtype=live, a live migration of a kernel zone is performed over the cluster private interconnect. The migration uses the ssh protocol that is specified in the RAD URI using the default RAD port. A passwordless ssh login for the root user is used between the cluster nodes over the cluster interconnect.

    To support this behavior, the following SMF services must be enabled on all cluster nodes:

    • svc:/system/rad:local

    • svc:/system/rad:remote

    • svc:/network/kz-migr:stream

  • In some cases where the cluster cannot determine the target node to which the HA for Solaris Zones resource group is live migrating, it uses an ordinary resource group switchover instead of using live migration. In such cases, the kernel zone shuts down on its current node and then boots on its new node.

    To achieve live migration in such cases, relocate the HA for Solaris Zones resource group by using the clresourcegroup switch command explicitly on the resource group, rather than depending on node evacuation or strong resource group affinities to move the resource group.

  • For a solaris-kz branded zone that is set with either Migrationtype=warm or Migrationtype=live, you must set the cpu-arch zone property to successfully migrate a kernel zone between different CPU types. For more information about the cpu-arch property, see solaris-kz SPARC Only: Cross-CPU Migration in Oracle Solaris Zones Configuration Resources.
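
The following sketch summarizes these requirements with example commands. The zone name myzone, resource group zone-rg, and node name phys-node-2 are assumed placeholders; adapt them to your configuration.

    Disable autoboot and define the required generic attribute:

    # zonecfg -z myzone 'set autoboot=false'
    # zonecfg -z myzone 'add attr; set name=osc-ha-zone; set type=boolean; set value=true; end'

    Enable the state-cleanup SMF service on each cluster node:

    # svcadm enable svc:/system/cluster/osc-ha-zone-state-cleanup

    For live migration of a solaris-kz branded zone, also enable the RAD and kernel zone migration services on all cluster nodes:

    # svcadm enable svc:/system/rad:local svc:/system/rad:remote svc:/network/kz-migr:stream

    To trigger live migration explicitly, switch the resource group rather than relying on node evacuation or affinities:

    # clresourcegroup switch -n phys-node-2 zone-rg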

Requirements and Restrictions for a Multiple-Masters Zone

  • The zone path of a zone in a multiple-masters configuration must reside on the local disks of each node. The zone must be configured with the same name on each node that can master the zone.

  • Each zone that is configured to run within a multiple-masters configuration must also have a zone-specific address. Load balancing for applications in these configurations is typically provided by an external load balancer. You must configure this load balancer for the address of each zone. Application clients can then reach the zone through the load balancer's address.

  • Ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The HA for Solaris Zones data service can manage a zone only if the zone is booted under the control of the data service.

Requirements and Restrictions for the Zone Path of a Zone

  • The zone path of a zone that HA for Solaris Zones manages cannot reside on a global file system.

  • If the non-global zone is in a failover configuration, either the zone path must reside on a highly available local file system, or the rootzpool zone property must be set to point to shared-storage devices. If the storage URI points to a logical unit or iSCSI device, you can use the SUNW.HAStoragePlus resource to monitor the corresponding DID device.

  • For an Oracle Solaris kernel zone, the boot storage is specified as described in the suri(5) man page. If the storage URI points to a zvol, the corresponding zpool must be managed by a SUNW.HAStoragePlus resource. If the storage URI points to a logical unit or iSCSI device, the SUNW.HAStoragePlus resource can be used to monitor the corresponding DID device. See the example after this list.

  • If the zone is in a multiple-masters configuration, the zone path must reside on the local disks of each node.
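
As an illustration of these zone-path options, the following sketch creates a SUNW.HAStoragePlus resource for a zone path on a highly available local file system and, alternatively, points the rootzpool zone property at shared storage through a storage URI. The zone name myzone, resource group zone-rg, resource name zone-hasp-rs, mount point /zones/myzone, and the iSCSI URI are assumed placeholder values; see the suri(5) man page for the exact URI syntax for your storage.

    Zone path on a highly available local file system that is managed by a SUNW.HAStoragePlus resource:

    # clresource create -g zone-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/zones/myzone zone-hasp-rs

    rootzpool zone property pointing to shared-storage devices by means of a storage URI:

    # zonecfg -z myzone 'add rootzpool; add storage iscsi://strg1/luname.naa.600144f0dbf8af19000056790001; end'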

Dependencies Between HA for Solaris Zones Components

The dependencies between the HA for Solaris Zones components are described in the following table:

Table 2  Dependencies Between HA for Solaris Zones Components
Component: Zone boot resource (sczbt)
Dependency:
  SUNW.HAStoragePlus - In a failover configuration for a non-global zone, if the rootzpool zone property is not set, the zone's zone path must be on a highly available file system managed by a SUNW.HAStoragePlus resource. If either the rootzpool or zpool zone property is set and the storage URI points to a logical unit or to an iSCSI device, you can use the SUNW.HAStoragePlus resource to monitor the storage devices that are configured for those zone properties.
  SUNW.HAStoragePlus - In a failover configuration for a kernel zone, if the storage URI points to a logical unit or to an iSCSI device, the SUNW.HAStoragePlus resource can be used to monitor the storage devices configured as a boot device or as a suspend device. If the boot device points to a zvol, the corresponding zpool is managed by SUNW.HAStoragePlus. Similarly, if the suspend device is specified to point to a path, the storage resource that manages the corresponding highly available file system is specified as the resource dependency.
  SUNW.LogicalHostName - This dependency is required only if the zone's address is managed by a SUNW.LogicalHostName resource and ip-type is set to shared.

Component: Zone script resource (sczsh)
Dependency: Zone boot resource

Component: Zone SMF resource (sczsmf)
Dependency: Zone boot resource

These dependencies are set when you register and configure HA for Solaris Zones. For more information, see Registering and Configuring HA for Solaris Zones.

The sczbt_register script defines a resource dependency of type Resource_dependencies_offline_restart as follows:

  • If you set the SC_LH variable within the sczbt_config file, then the Resource_dependencies_offline_restart property of the sczbt component will contain the SUNW.LogicalHostname resource name as set with the SC_LH variable.

  • If you set the HAS_RS variable within the sczbt_config file, then the Resource_dependencies_offline_restart property of the sczbt component will contain the storage resource name as set with the HAS_RS variable.

When you configure a solaris-kz branded zone for warm migration, where the suspend image is hosted on a file system managed by HAStoragePlus or on any other cluster resource managing that file system, you need to set the HAS_RS variable to the corresponding resource name. This ensures that the resource dependency to the storage resource is set up when the sczbt resource is registered.
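
For illustration, an sczbt_config excerpt that results in both offline-restart dependencies might look like the following. The resource names zone-lh-rs and zone-hasp-rs are assumed placeholders, and the register script is assumed to be run from its utility directory with the -f option pointing to the edited configuration file.

    SC_LH=zone-lh-rs
    HAS_RS=zone-hasp-rs

    # ./sczbt_register -f ./sczbt_config

After registration, you can verify the resulting dependency by checking the Resource_dependencies_offline_restart property of the sczbt resource with the clresource show command.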

The zone script resource and SMF resource are optional. If used, multiple instances of the zone script resource and SMF resource can be deployed within the same resource group as the zone boot resource. If more elaborate dependencies are required, refer to the r_properties(5) and rg_properties(5) man pages for additional dependency and affinity settings.

For a kernel zone, if the sczbt component is configured with Migrationtype=warm or Migrationtype=live, it will still perform the start and stop operations on the corresponding services that are managed by the sczsh or the sczsmf component. If you need to have all the services running within the kernel zone during warm or live migration, do not configure the sczsh or the sczsmf component for those services.