Oracle Solaris Cluster Data Services Planning and Administration Guide, Oracle Solaris Cluster 4.1
You can use the SUNW.HAStoragePlus resource type to share a directory of a highly available local file system that is managed by a global cluster resource with a zone cluster. This method consolidates storage and shares a highly available local file system with different applications that run in different zone clusters. For information about adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in the Oracle Solaris Cluster Software Installation Guide.
This section explains the requirements and procedures for sharing a highly available local file system directory across zone clusters.
A directory of a highly available local file system that is managed by a global cluster resource can be shared with a zone cluster. To share such a directory, the configuration must meet the following requirements:

- An HAStoragePlus resource must be created in a failover resource group in the global cluster for the file system that contains the directory to be shared.
- The directory of the highly available local file system that you want to share must be configured in the zone cluster as an lofs file system.
- An HAStoragePlus resource must be created in a failover resource group in the zone cluster for the lofs file system.
- The zone cluster resource must have an offline restart dependency on the global cluster resource.
- The zone cluster resource's resource group must have a strong positive affinity, or a strong positive affinity with failover delegation, with the global cluster resource's resource group.
Note - Applications that share a highly available local file system experience an availability impact because the applications are collocated. The failure of one application on a node, and its resulting failover, can have a cascading effect on the other applications, which are then forced to fail over to another node as well. Mitigate this problem by reducing the number of applications that share the file system. If the file system that is being shared is UFS, you can instead configure a cluster file system for the zone cluster. See How to Set Up the HAStoragePlus Resource for Cluster File Systems.
The following procedure explains how to set up the HAStoragePlus resource type to share a highly available local file system (for example, UFS) or a ZFS pool directory with a zone cluster called zoneclustername.

Perform these steps from a node in the global cluster, because dependencies and affinities from a zone cluster to the global cluster can be set only by an authorized cluster node administrator.
1. In the global cluster, create a failover resource group to manage the file system to be shared.

# clresourcegroup create gc-hasp-resource-group

2. Register the SUNW.HAStoragePlus resource type in the global cluster.

# clresourcetype register SUNW.HAStoragePlus

3. Create an HAStoragePlus resource in the resource group. Specify the FilesystemMountPoints property for a highly available local file system, or the Zpools property for a ZFS pool.

# clresource create -g gc-hasp-resource-group -t HAStoragePlus \
-p FilesystemMountPoints=mount-point \
-p Zpools=pool gc-hasp-resource

4. Bring the global cluster resource group online in a managed state.

# clresourcegroup online -M gc-hasp-resource-group

5. Configure the directory to be shared in the zone cluster as an lofs file system.

# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir = shared-dir-mount-point-in-zc
clzc:zoneclustername:fs> set special = shared-directory
clzc:zoneclustername:fs> set type = lofs
clzc:zoneclustername:fs> end
clzc:zoneclustername> exit
#

6. Create a resource group in the zone cluster with a strong positive affinity (++), or a strong positive affinity with failover delegation (+++), on the global cluster resource group.

# clresourcegroup create -Z zoneclustername \
-p RG_affinities=++global:gc-hasp-resource-group \
zc-hasp-resource-group

or

# clresourcegroup create -Z zoneclustername \
-p RG_affinities=+++global:gc-hasp-resource-group \
zc-hasp-resource-group

7. Register the SUNW.HAStoragePlus resource type in the zone cluster.

# clresourcetype register -Z zoneclustername SUNW.HAStoragePlus

8. Create an HAStoragePlus resource in the zone cluster resource group, with an offline restart dependency on the global cluster resource.

# clresource create -Z zoneclustername -t SUNW.HAStoragePlus \
-g zc-hasp-resource-group \
-p FilesystemMountPoints=shared-dir-mount-point-in-zc \
-p Resource_dependencies_offline_restart=global:gc-hasp-resource \
zc-hasp-resource

9. Bring the zone cluster resource group online in a managed state.

# clresourcegroup online -Z zoneclustername -M zc-hasp-resource-group
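Once both resource groups are online, you can check the configuration from the global cluster node. The following is a minimal sketch that reuses the resource and group names from the steps above; clresourcegroup status, clresource status, and clresource show are the standard Oracle Solaris Cluster status and display commands.

```shell
# Check that the global cluster resource group and its
# HAStoragePlus resource are online on the primary node.
clresourcegroup status gc-hasp-resource-group
clresource status gc-hasp-resource

# Check the zone cluster resource group and resource from
# the global cluster by using the -Z option.
clresourcegroup status -Z zoneclustername zc-hasp-resource-group
clresource status -Z zoneclustername zc-hasp-resource

# Confirm that the offline restart dependency is set on the
# zone cluster resource.
clresource show -Z zoneclustername \
    -p Resource_dependencies_offline_restart zc-hasp-resource
```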
Example 2-41 Setting Up the HAStoragePlus Resource Type to Share a UFS Highly Available Local File System Directory to a Zone Cluster
The following example shows how to share the /local/fs/home directory of a UFS highly available local file system (/local/fs) with a zone cluster called sczone.
# clresourcegroup create gc-hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# vi /etc/vfstab
/dev/md/dg1/dsk/d0 /dev/md/dg1/rdsk/d0 /local/fs ufs 2 no logging
# clresource create -g gc-hasp-rg -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/local/fs gc-hasp-rs
# clresourcegroup online -M gc-hasp-rg
The steps above ensure that the gc-hasp-rs resource running in the global cluster manages the highly available local file system /local/fs.
# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir = /share/local/fs/home
clzc:sczone:fs> set special = /local/fs/home
clzc:sczone:fs> set type = lofs
clzc:sczone:fs> end
clzc:sczone> exit
The configuration above makes the highly available local file system's directory /local/fs/home available in the zone cluster sczone at mount point /share/local/fs/home.
# clresourcegroup create -Z sczone \
-p RG_affinities=++global:gc-hasp-rg zc-hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -t HAStoragePlus -g zc-hasp-rg \
-p FilesystemMountPoints=/share/local/fs/home \
-p Resource_dependencies_offline_restart=global:gc-hasp-rs zc-hasp-rs
# clresourcegroup online -Z sczone -M zc-hasp-rg
The steps above create a zone cluster resource that manages the shared directory as an lofs file system.
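You can also confirm from a global cluster node that the shared directory is mounted and visible inside the zone cluster, for example with the standard Oracle Solaris zlogin command. This is a sketch; sczone and the paths follow the example above, and the command must be run on a node that hosts a zone of the zone cluster.

```shell
# Verify the lofs mount of the shared directory inside the
# zone cluster.
zlogin sczone df -h /share/local/fs/home

# List the contents of the shared directory from inside the zone.
zlogin sczone ls /share/local/fs/home
```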
Example 2-42 Setting Up the HAStoragePlus Resource Type to Share a ZFS Pool Directory to a Zone Cluster
The following example shows how to share the /tank/home directory of the ZFS pool "tank" with a zone cluster called sczone.
# clresourcegroup create gc-hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g gc-hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=tank gc-hasp-rs
# clresourcegroup online -M gc-hasp-rg
The steps above ensure that the ZFS highly available local file system is managed by gc-hasp-rs running in the global cluster.
# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir = /share/tank/home
clzc:sczone:fs> set special = /tank/home
clzc:sczone:fs> set type = lofs
clzc:sczone:fs> end
clzc:sczone> exit
#
The configuration above makes the ZFS pool "tank" directory /tank/home available in the zone cluster sczone at mount point /share/tank/home.
# clresourcegroup create -Z sczone \
-p RG_affinities=++global:gc-hasp-rg zc-hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -t HAStoragePlus -g zc-hasp-rg \
-p FilesystemMountPoints=/share/tank/home \
-p Resource_dependencies_offline_restart=global:gc-hasp-rs zc-hasp-rs
# clresourcegroup online -Z sczone -M zc-hasp-rg
The steps above create a zone cluster resource that manages the shared directory as an lofs file system.
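The same kind of check applies to the ZFS example: the pool must be imported on the node where gc-hasp-rg is online, and the lofs mount must be visible inside the zone cluster. The following sketch uses the names from this example; zpool list and zlogin are standard Oracle Solaris commands.

```shell
# On the node where gc-hasp-rg is currently online, confirm
# that the pool is imported.
zpool list tank

# Confirm that the shared pool directory is visible inside the
# zone cluster.
zlogin sczone df -h /share/tank/home
```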