Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide

Updated: September 2015

How to Enable a Zone to Run in a Failover Configuration

Before You Begin

If the zone is set with ip-type=shared and will use a logical hostname, ensure that the /etc/inet/netmasks file contains a subnet and netmask entry for the logical hostname's IP address. If necessary, edit the /etc/inet/netmasks file to add any missing entries.
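For example, if the logical hostname resolves to an address on a 192.168.10.0/24 subnet, the /etc/inet/netmasks entry would look similar to the following. The subnet and netmask shown here are placeholders; substitute the values for your network:

  # Example /etc/inet/netmasks entry (placeholder subnet and netmask)
  192.168.10.0    255.255.255.0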

  1. Register the SUNW.HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  2. Create a failover resource group.
    # clresourcegroup create solaris-zone-resource-group
  3. Create a resource for the zone's disk storage.
    • If the zone is one of the following, this step is required:
      • A solaris or solaris10 branded zone that is not set with the rootzpool zone property.

      • A kernel zone with one of the following conditions:

        • The boot device points to a zvol.

        • The suspend device points to a path.

      This HAStoragePlus resource manages the storage for the zonepath. The file system must be a failover file system.

      # clresource create \
      -g solaris-zone-resource-group \
      -t SUNW.HAStoragePlus \
      -p Zpools=solaris-zone-instance-zpool \
      solaris-zone-has-resource-name
    • If the zone is one of the following, this step is optional:
      • A kernel zone that has a storage URI that points to a logical unit or to an iSCSI device.

      • A solaris or solaris10 non-global zone with both of the following conditions:

        • The rootzpool or zpool zone property is set.

        • The storage URI points to a logical unit or to an iSCSI device.

      For any other zone, this step does not apply.

      1. Identify the devices to be used as boot storage and suspend storage for the kernel zone or the devices that are set for the rootzpool or zpool zone property.
        node-1# cldev list -v d2
        DID Device     Full Device Path
        d2             node-1:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
        d2             node-2:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
        
        node-1# suriadm lookup-uri /dev/did/dsk/d2
        dev:did/dsk/d2
        
        node-1# cldev list -v d3
        DID Device       Full Device Path
        d3               node-1:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
        d3               node-2:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
        
        node-1# suriadm lookup-uri /dev/did/dsk/d3
        dev:did/dsk/d3
        
        d2 (suri=dev:did/dsk/d2) will be used for the kernel zone rpool as boot
        device or for a non-global zone within the rootzpool zone property setting.
        
        d3 (suri=dev:did/dsk/d3) will be used as suspend device or additional
        delegated zpool for a non-global zone within the zpool zone property setting.
      2. If you require device monitoring for the storage devices that the zone is configured to use, configure a SUNW.HAStoragePlus resource.

        In the GlobalDevicePaths property, specify the global device groups that correspond to the DID devices identified in Step a.

        1. Create the SUNW.HAStoragePlus resource.
          node-1# clrs create -t SUNW.HAStoragePlus -g solaris-zone-resource-group \
          -p GlobalDevicePaths=dsk/d2,dsk/d3 ha-zones-hasp-rs
        2. In the sczbt_config file, set the HAS_RS variable to the name of that SUNW.HAStoragePlus resource.

          This setting ensures that the required resource dependency is set up for the sczbt component. For example:

          HAS_RS=ha-zones-hasp-rs
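    After the storage resources are created, you can optionally confirm that they exist in the resource group. This check is not part of the procedure; it uses the standard clresource list command with the placeholder resource-group name from the preceding steps:

      # clresource list -g solaris-zone-resource-group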
  4. If the zone is set with ip-type=shared and uses a logical hostname, create a resource for that logical hostname.
    # clreslogicalhostname create \
    -g solaris-zone-resource-group \
    -h solaris-zone-logical-hostname \
    solaris-zone-logical-hostname-resource-name
  5. Bring the failover resource group online, enabling its resources and their monitoring.
    # clresourcegroup online -eM solaris-zone-resource-group
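
    To confirm that the resource group and its resources came online, you can optionally check their status. These are standard Oracle Solaris Cluster status commands, shown here with the placeholder resource-group name used above:

      # clresourcegroup status solaris-zone-resource-group
      # clresource status -g solaris-zone-resource-group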