Oracle Solaris Cluster Data Services Planning and Administration Guide (Oracle Solaris Cluster 3.3 3/13)


Enabling Highly Available Local File Systems

Using a highly available local file system improves the performance of I/O-intensive data services. To make a local file system highly available in an Oracle Solaris Cluster environment, use the HAStoragePlus resource type.

You can specify global or local file systems. Global file systems are accessible from all nodes in a cluster. Local file systems are accessible from a single cluster node. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node. These local file systems require the underlying devices to be Oracle Solaris Cluster global devices.

The file system mount points are defined in the FileSystemMountPoints extension property, in the format paths[,...]. You can specify both the path in a global-cluster non-voting node and the path in a global-cluster voting node, in this format:

Non-GlobalZonePath:GlobalZonePath

The global-cluster voting node path is optional. If you do not specify a global-cluster voting node path, Oracle Solaris Cluster assumes that the paths in the global-cluster non-voting node and in the global-cluster voting node are the same. If you specify the path as Non-GlobalZonePath:GlobalZonePath, you must specify GlobalZonePath in the global-cluster voting node's /etc/vfstab.

The default setting for this property is an empty list.
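
For example, if the file system is mounted at /local/app-fs inside the global-cluster non-voting node and at /global/app-fs in the global-cluster voting node, the property value would be /local/app-fs:/global/app-fs. The following is a sketch of the corresponding resource creation; the resource group name app-rg and resource name hasp-rs are hypothetical, for illustration only:

# clresource create -g app-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/local/app-fs:/global/app-fs hasp-rs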

When the SUNW.HAStoragePlus resource is in a local zone, it performs a loopback mount to make the file system that is mounted in the global zone available in the local zone. Do not configure the zone itself to perform a loopback mount of a file system that is managed by a SUNW.HAStoragePlus resource. Configuring both can cause unexpected behavior, such as being unable to stop the SUNW.HAStoragePlus resource.

You can use the SUNW.HAStoragePlus resource type to make a file system available to a global-cluster non-voting node. To enable the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global-cluster voting node and in the global-cluster non-voting node. The SUNW.HAStoragePlus resource type makes the file system available to the global-cluster non-voting node by mounting the file system in the global cluster. The resource type then performs a loopback mount in the global-cluster non-voting node. Each file system mount point should have an equivalent entry in /etc/vfstab in the global-cluster voting node on all cluster nodes. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in global-cluster non-voting nodes.
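
For example, if the mount point is /global/app-fs and the non-voting node is a hypothetical zone named appzone, you would create the directory in both locations before configuring the resource (a sketch only, not the complete configuration for any specific data service):

phys-schost-1# mkdir -p /global/app-fs
phys-schost-1# zlogin appzone mkdir -p /global/app-fs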

You can use the SUNW.HAStoragePlus resource type to make a file system available to zone cluster nodes. The file systems configured in the SUNW.HAStoragePlus resource type for zone clusters should be authorized for use in zone clusters using the clzonecluster command. For more information, see the clzonecluster(1CL) man page and Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.


Note - Local file systems include the UNIX File System (UFS), Quick File System (QFS), and Solaris ZFS (Zettabyte File System). Solaris ZFS file systems are mounted directly into the non-global zone only.


The instructions for each Oracle Solaris Cluster data service that is I/O intensive explain how to configure the data service to operate with the HAStoragePlus resource type. For more information, see the individual Oracle Solaris Cluster data service guides.


Note - Do not use the HAStoragePlus resource type to make a root file system highly available.


Oracle Solaris Cluster provides the following tools for setting up the HAStoragePlus resource type to make local file systems highly available:

Oracle Solaris Cluster Manager and the clsetup utility enable you to add resources to the resource group interactively. Configuring these resources interactively reduces the possibility of configuration errors that might result from command syntax errors or omissions. Oracle Solaris Cluster Manager and the clsetup utility ensure that all required resources are created and that all required dependencies between resources are set.

Configuration Requirements for Highly Available Local File Systems

Any file system on multihost disks must be accessible from any host that is directly connected to those multihost disks. To meet this requirement, configure the highly available local file system as follows:


Note - The use of a volume manager with the global devices for a highly available local file system is optional.


Format of Device Names for Devices Without a Volume Manager

If you are not using a volume manager, use the appropriate format for the name of the underlying storage device. The format to use depends on the type of storage device as follows:

The replaceable elements in these device names are as follows:
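
For reference, DID-based global-device names generally take the following forms, where N is the DID device number and X is the slice number (the values shown are placeholders, consistent with the /etc/vfstab samples that follow):

Block device:            /dev/global/dsk/dNsX    (for example, /dev/global/dsk/d1s0)
Character (raw) device:  /dev/global/rdsk/dNsX   (for example, /dev/global/rdsk/d1s0)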

Sample Entries in /etc/vfstab for Highly Available Local File Systems

The following examples show entries in the /etc/vfstab file for global devices that are to be used for highly available local file systems.


Note - Solaris ZFS (Zettabyte File System) does not use the /etc/vfstab file.


Example 2-36 Entries in /etc/vfstab for a Global Device Without a Volume Manager

This example shows entries in the /etc/vfstab file for a global device on a physical disk without a volume manager.

/dev/global/dsk/d1s0       /dev/global/rdsk/d1s0
/global/local-fs/nfs  ufs     5  no     logging

Example 2-37 Entries in /etc/vfstab for a Global Device With Solaris Volume Manager

This example shows entries in the /etc/vfstab file for a global device that uses Solaris Volume Manager.

/dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0
/global/local-fs/nfs ufs     5  no     logging

Note - The same file system entries must be added to the zone cluster configuration when you configure the file system for a zone cluster using the SUNW.HAStoragePlus resource type.


How to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility

The following instructions explain how to set up the HAStoragePlus resource type by using the clsetup utility. Perform this procedure from any global-cluster voting node.

This procedure provides the long forms of the Oracle Solaris Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
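
For example, the following command pairs are equivalent; clrt and clrg are the short forms of clresourcetype and clresourcegroup, and hasp-rg is an example resource group name used later in this chapter:

# clresourcetype register SUNW.HAStoragePlus
# clrt register SUNW.HAStoragePlus

# clresourcegroup online -M hasp-rg
# clrg online -M hasp-rg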

Before You Begin

Ensure that the following prerequisites are met:

  1. Become superuser on any cluster voting node.
  2. Start the clsetup utility.
    # clsetup

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring highly available storage and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  5. Verify that the prerequisites are met, and press Return to continue.

    The clsetup utility displays a list of the cluster nodes that can master the highly available HAStoragePlus resource.

  6. Select the nodes that can master the highly available HAStoragePlus resource.
    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.
    • To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes. Then press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the HAStoragePlus resource group's node list. The first node in the list is the primary node of this resource group.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.
  7. To confirm your selection of nodes, type d and press Return.

    The clsetup utility displays the list of types of shared storage where the data is to be stored.

  8. Type the numbers that correspond to the types of shared storage that you are using to store the data and press Return.

    The clsetup utility displays the file system mount points that are configured in the cluster. If there are no existing mount points, the clsetup utility allows you to define a new mount point.

  9. Specify the default mount directory, the raw device path, the Global Mount option, and the Check File System Periodically option, and press Return.

    The clsetup utility displays the properties of the mount point that the utility will create.

  10. To create the mount point, type d and press Return.

    The clsetup utility displays the available file system mount points.


    Note - You can use the c option to define another new mount point.


  11. Select the file system mount points.
    • To accept the default selection of all listed file system mount points in an arbitrary order, type a and press Return.
    • To select a subset of the listed file system mount points, type a comma-separated or space-separated list of the numbers that correspond to the file system mount points and press Return.
  12. To confirm your selection of file system mount points, type d and press Return.

    The clsetup utility displays the global disk sets and device groups that are configured in the cluster.

  13. Select the global device groups.
    • To accept the default selection of all listed device groups in an arbitrary order, type a and press Return.
    • To select a subset of the listed device groups, type a comma-separated or space-separated list of the numbers that correspond to the device groups and press Return.
  14. To confirm your selection of device groups, type d and press Return.

    The clsetup utility displays the names of the Oracle Solaris Cluster objects that the utility will create.

  15. If you require a different name for any Oracle Solaris Cluster object, change the name as follows.
    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Oracle Solaris Cluster objects that the utility will create.

  16. To confirm your selection of Oracle Solaris Cluster object names, type d and press Return.

    The clsetup utility displays information about the Oracle Solaris Cluster configuration that the utility will create.

  17. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  18. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing resource group when you restart the utility.

  19. Verify that the HAStoragePlus resource has been created.

    Use the clresource(1CL) utility for this purpose.

    # clresource show name_of_rg
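
    For example, if the resource and resource group use the names from the examples later in this section (hasp-rs and hasp-rg), the verification might look like the following:

    # clresource show hasp-rs
    # clresource status hasp-rs
    # clresourcegroup status hasp-rg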

How to Set Up the HAStoragePlus Resource Type to Make File Systems Highly Available Other Than Solaris ZFS

The following procedure explains how to set up the HAStoragePlus resource type to make file systems other than Solaris ZFS highly available.

  1. On any node in the global cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
  2. Create a failover resource group.
    # clresourcegroup create resource-group
  3. Register the HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  4. Create the HAStoragePlus resource and define the file system mount points.
    # clresource create -g resource-group \
    -t SUNW.HAStoragePlus -p FileSystemMountPoints=mount-point-list hasp-resource
  5. Bring the resource group that contains the HAStoragePlus resource online and into a managed state.
    # clresourcegroup online -M resource-group

Example 2-38 Setting Up the HAStoragePlus Resource Type to Make a UFS File System Highly Available for the Global Cluster

This example assumes that the file system /web-1 is configured in the HAStoragePlus resource to make the file system highly available for the global cluster.

phys-schost-1# vi /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
/dev/md/apachedg/dsk/d0 /dev/md/apachedg/rdsk/d0 /web-1 ufs 2 no logging
# clresourcegroup create hasp-rg 
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus -p FileSystemMountPoints=/web-1 hasp-rs
# clresourcegroup online -M hasp-rg

Example 2-39 Setting Up the HAStoragePlus Resource Type to Make a UFS File System Highly Available for a Zone Cluster

This example assumes that the file system /web-1 is configured in the HAStoragePlus resource to make the file system highly available for the zone cluster sczone. When a local file system is configured as a highly available file system for a zone cluster by using the SUNW.HAStoragePlus resource type, the HAStoragePlus resource reads the file system information in the zone cluster configuration.

# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/web-1
clzc:sczone:fs> set special=/dev/md/apachedg/dsk/d0
clzc:sczone:fs> set raw=/dev/md/apachedg/rdsk/d0
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg \
-t SUNW.HAStoragePlus -p FileSystemMountPoints=/web-1 hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg

How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available

To make a local Solaris ZFS (Zettabyte File System) highly available, you perform two primary tasks: you create a ZFS storage pool with its file systems, and you create an HAStoragePlus resource that manages that pool. This section describes how to complete both tasks.

If you are planning to manually import a ZFS pool that is already managed by the cluster, ensure that the pool is not imported on multiple nodes. Importing a pool on multiple nodes can lead to problems. For more information, see Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource.
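
As a sketch only, using the hasp-rs, hasp-rg, and HAzpool names from the examples later in this section, you might confirm that the resource is offline on every node before importing the pool on a single node:

# clresource status hasp-rs
# clresourcegroup status hasp-rg
# zpool import HAzpool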

  1. Create a ZFS storage pool.

    Caution

    Caution - Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.


    Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Oracle Solaris device identifier that you specify to the zpool(1M) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.


    Note - The ZFS storage pool can be created on a full disk or on a disk slice. Creating the pool on a full disk by specifying an Oracle Solaris logical device is preferred, because the ZFS file system performs better when it can enable the disk write cache. ZFS labels the disk with an EFI label when a full disk is provided.


    See Creating a Basic ZFS Storage Pool in Oracle Solaris ZFS Administration Guide for information about how to create a ZFS storage pool.

  2. In the ZFS storage pool that you just created, create a ZFS file system.

    You can create more than one ZFS file system in the same ZFS storage pool.


    Note - HAStoragePlus does not support file systems created on ZFS volumes.

    Do not place a ZFS file system in the FileSystemMountPoints extension property.


    See Creating a ZFS File System Hierarchy in Oracle Solaris ZFS Administration Guide for information about how to create a ZFS file system in a ZFS storage pool.

  3. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
  4. Create a failover resource group.
    # clresourcegroup create resource-group
  5. Register the HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  6. Create an HAStoragePlus resource for the local ZFS file system.
    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
    resource

    The default location to search for devices of ZFS storage pools is /dev/dsk. It can be overridden by using the ZpoolsSearchDir extension property.

    The resource is created in the enabled state.

  7. Bring the resource group that contains the HAStoragePlus resource online and into a managed state.
    # clresourcegroup online -M resource-group

Example 2-40 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS Highly Available for the Global Cluster

The following example shows the commands to make a local ZFS file system highly available.

phys-schost-1% su
Password: 
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0
You can create a ZFS storage pool on a disk slice by specifying an Oracle Solaris device identifier:
# zpool create HAzpool c1t8d0s2
Alternatively, you can create a ZFS storage pool on a disk slice by specifying a logical device identifier:
# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg

Example 2-41 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS Highly Available for a Zone Cluster

The following example shows the steps to make a local ZFS file system highly available in a zone cluster sczone.

phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0 
# zfs create HAzpool/export 
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg

How to Delete an HAStoragePlus Resource That Makes a Local Solaris ZFS Highly Available