Oracle Solaris Cluster Data Services Planning and Administration Guide     Oracle Solaris Cluster 3.3 3/13
Sharing a Highly Available Local File System Across Zone Clusters

You can use the SUNW.HAStoragePlus resource type to share a highly available file system directory managed by a global cluster resource to a zone cluster. This method consolidates the storage and shares a highly available local file system with different applications running on different zone clusters. For information on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

This section explains the requirements and procedures for sharing a highly available local file system directory across zone clusters.

Configuration Requirements for Sharing a Highly Available Local File System Directory to a Zone Cluster

The directory of a highly available local file system managed by a global cluster resource can be shared to a zone cluster. To share a highly available local file system directory, the configuration must meet the following requirements:


Note - Applications that share a highly available local file system experience an availability impact because they are collocated. If an application on one node fails and attempts to fail over, the failover can cascade: the other applications that share the file system are also forced to fail over to another node. You can mitigate this problem by reducing the number of applications that share the file system. If the shared file system is UFS, you can instead configure it as a cluster file system for the zone cluster. See How to Set Up the HAStoragePlus Resource for Cluster File Systems.


How to Set Up the HAStoragePlus Resource Type to Share a Failover File System Directory to a Zone Cluster

The following procedure explains how to set up the HAStoragePlus resource type to share a failover file system (for example, UFS, QFS, or a ZFS pool directory) to a zone cluster called zoneclustername.

  1. On any node in the global cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization. Perform the steps from a node in the global cluster, because the dependencies and affinities from a zone cluster to a global cluster can only be set by an authorized global cluster node administrator.
  2. Create a failover resource group in the global cluster.
    # clresourcegroup create gc-hasp-resource-group
  3. Register the HAStoragePlus resource type in the global cluster.
    # clresourcetype register SUNW.HAStoragePlus
  4. Create an HAStoragePlus resource in a failover resource group of the global cluster with the failover file system that contains the directory that you want to share to a zone cluster. Set the FileSystemMountPoints property for a file system such as UFS or QFS, or the Zpools property for a ZFS pool; specify both properties only if the resource manages both types of storage.
    # clresource create -g gc-hasp-resource-group -t HAStoragePlus \
    -p FileSystemMountPoints=mount-point \
    -p Zpools=pool gc-hasp-resource
  5. Bring the global cluster failover resource group online in a managed state.
    # clresourcegroup online -M gc-hasp-resource-group
  6. Configure the directory of the failover file system that is being shared to the zone cluster as an lofs file system.
    # clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir = shared-dir-mount-point-in-zc
    clzc:zoneclustername:fs> set special = shared-directory
    clzc:zoneclustername:fs> set type = lofs
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> exit
    #
  7. Create a failover resource group in the zone cluster that has a strong positive affinity or strong positive affinity with failover delegation on the failover resource group of the global cluster.
    # clresourcegroup create -Z zoneclustername \
    -p RG_affinities=++global:gc-hasp-resource-group \
    zc-hasp-resource-group
    OR
    # clresourcegroup create -Z zoneclustername \
    -p RG_affinities=+++global:gc-hasp-resource-group zc-hasp-resource-group
  8. Register the HAStoragePlus resource type in the zone cluster.
    # clresourcetype register -Z zoneclustername SUNW.HAStoragePlus
  9. Create an HAStoragePlus resource in the failover resource group of the zone cluster. Configure the resource with the lofs mount point of the shared directory and with an offline-restart dependency on the global cluster resource whose directory you are sharing to the zone cluster.
     # clresource create -Z zoneclustername -t SUNW.HAStoragePlus -g zc-hasp-resource-group \
    -p FileSystemMountPoints=shared-dir-mount-point-in-zc \
    -p Resource_dependencies_offline_restart=global:gc-hasp-resource zc-hasp-resource
  10. Bring the zone cluster failover resource group online.
    # clresourcegroup online -Z zoneclustername -M zc-hasp-resource-group
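
After step 10, you can verify that both resource groups are online. The following is a sketch using the standard status subcommands, with the resource and group names from this procedure; because of the strong positive affinity set in step 7, both groups should report an Online state on the same node.

```shell
# clresourcegroup status gc-hasp-resource-group
# clresourcegroup status -Z zoneclustername zc-hasp-resource-group
# clresource status -Z zoneclustername zc-hasp-resource
```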

Example 2-42 Setting Up the HAStoragePlus Resource Type to Share a UFS Failover File System Directory to a Zone Cluster

The following example shows how to share the /local/fs/home directory of a UFS failover file system (/local/fs) to a zone cluster called sczone.

# clresourcegroup create gc-hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# vi /etc/vfstab
/dev/md/dg1/dsk/d0 /dev/md/dg1/rdsk/d0 /local/fs ufs 2 no logging
# clresource create -g gc-hasp-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/local/fs gc-hasp-rs
# clresourcegroup online -M gc-hasp-rg

The steps above ensure that the gc-hasp-rs resource running in the global cluster manages the failover file system /local/fs.
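
The /etc/vfstab entry above follows the standard seven-field format: block device, raw device, mount point, file system type, fsck pass, mount-at-boot, and mount options. As a quick sketch, assuming a POSIX shell with awk, you can count the fields of a candidate entry before adding it to /etc/vfstab:

```shell
# Count the whitespace-separated fields of a candidate vfstab entry.
# A well-formed entry has exactly seven fields.
entry='/dev/md/dg1/dsk/d0 /dev/md/dg1/rdsk/d0 /local/fs ufs 2 no logging'
printf '%s\n' "$entry" | awk '{print NF}'   # prints 7
```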

# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir = /share/local/fs/home
clzc:sczone:fs> set special = /local/fs/home
clzc:sczone:fs> set type = lofs
clzc:sczone:fs> end
clzc:sczone> exit

The configuration above makes the failover file system's directory /local/fs/home available in the zone cluster sczone at mount point /share/local/fs/home.

# clresourcegroup create -Z sczone \
-p RG_affinities=++global:gc-hasp-rg zc-hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -t HAStoragePlus -g zc-hasp-rg \
-p FileSystemMountPoints=/share/local/fs/home \
-p Resource_dependencies_offline_restart=global:gc-hasp-rs zc-hasp-rs
# clresourcegroup online -Z sczone -M zc-hasp-rg

The steps above create a zone cluster resource that manages the shared directory as an lofs file system. The steps in this example also apply to VxFS and QFS file systems.

Example 2-43 Setting Up the HAStoragePlus Resource Type to Share a ZFS Pool Directory to a Zone Cluster

The following example shows how to share the ZFS pool "tank" directory /tank/home to a zone cluster called sczone.

# clresourcegroup create gc-hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g gc-hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=tank gc-hasp-rs
# clresourcegroup online -M gc-hasp-rg

The steps above ensure that the ZFS failover file system is managed by gc-hasp-rs running in the global cluster.

# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir = /share/tank/home
clzc:sczone:fs> set special = /tank/home
clzc:sczone:fs> set type = lofs
clzc:sczone:fs> end
clzc:sczone> exit
#

The configuration above makes the ZFS pool "tank" directory /tank/home available in the zone cluster sczone at mount point /share/tank/home.

# clresourcegroup create -Z sczone \
-p RG_affinities=++global:gc-hasp-rg zc-hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -t HAStoragePlus -g zc-hasp-rg \
-p FileSystemMountPoints=/share/tank/home \
-p Resource_dependencies_offline_restart=global:gc-hasp-rs zc-hasp-rs
# clresourcegroup online -Z sczone -M zc-hasp-rg

The steps above create a zone cluster resource that manages the shared directory as an lofs file system.
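
After zc-hasp-rg is brought online, you can confirm the lofs mount from the global cluster. The following is a sketch using the standard zlogin and df commands; run it on the node that currently hosts the resource group.

```shell
# zlogin sczone df -h /share/tank/home
```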