Oracle Solaris Cluster Data Services Planning and Administration Guide (Oracle Solaris Cluster 4.1)
1. Planning for Oracle Solaris Cluster Data Services
2. Administering Data Service Resources
Overview of Tasks for Administering Data Service Resources
Configuring and Administering Oracle Solaris Cluster Data Services
How to Register a Resource Type
How to Install and Register an Upgrade of a Resource Type
How to Migrate Existing Resources to a New Version of the Resource Type
How to Unregister Older Unused Versions of the Resource Type
How to Downgrade a Resource to an Older Version of Its Resource Type
How to Create a Failover Resource Group
How to Create a Scalable Resource Group
Configuring Failover and Scalable Data Services on Shared File Systems
How to Configure a Failover Application Using the ScalMountPoint Resource
How to Configure a Scalable Application Using the ScalMountPoint Resource
Tools for Adding Resources to Resource Groups
How to Add a Logical Hostname Resource to a Resource Group by Using the clsetup Utility
How to Add a Logical Hostname Resource to a Resource Group Using the Command-Line Interface
How to Add a Shared Address Resource to a Resource Group by Using the clsetup Utility
How to Add a Shared Address Resource to a Resource Group Using the Command-Line Interface
How to Add a Failover Application Resource to a Resource Group
How to Add a Scalable Application Resource to a Resource Group
Bringing Resource Groups Online
How to Bring Resource Groups Online
Switching Resource Groups to Preferred Primaries
How to Switch Resource Groups to Preferred Primaries
How to Quiesce a Resource Group
How to Quiesce a Resource Group Immediately
Suspending and Resuming the Automatic Recovery Actions of Resource Groups
Immediately Suspending Automatic Recovery by Killing Methods
How to Suspend the Automatic Recovery Actions of a Resource Group
How to Suspend the Automatic Recovery Actions of a Resource Group Immediately
How to Resume the Automatic Recovery Actions of a Resource Group
Disabling and Enabling Resource Monitors
How to Disable a Resource Fault Monitor
How to Enable a Resource Fault Monitor
How to Remove a Resource Group
Switching the Current Primary of a Resource Group
How to Switch the Current Primary of a Resource Group
Disabling Resources and Moving Their Resource Group Into the UNMANAGED State
How to Disable a Resource and Move Its Resource Group Into the UNMANAGED State
Displaying Resource Type, Resource Group, and Resource Configuration Information
Changing Resource Type, Resource Group, and Resource Properties
How to Change Resource Type Properties
How to Change Resource Group Properties
How to Change Resource Properties
How to Change Resource Dependency Properties
How to Modify a Logical Hostname Resource or a Shared Address Resource
Clearing the STOP_FAILED Error Flag on Resources
How to Clear the STOP_FAILED Error Flag on Resources
Clearing the Start_failed Resource State
How to Clear a Start_failed Resource State by Switching Over a Resource Group
How to Clear a Start_failed Resource State by Restarting a Resource Group
How to Clear a Start_failed Resource State by Disabling and Enabling a Resource
Upgrading a Preregistered Resource Type
Information for Registering the New Resource Type Version
Information for Migrating Existing Instances of the Resource Type
Reregistering Preregistered Resource Types After Inadvertent Deletion
How to Reregister Preregistered Resource Types After Inadvertent Deletion
Adding or Removing a Node to or From a Resource Group
Adding a Node to a Resource Group
How to Add a Node to a Scalable Resource Group
How to Add a Node to a Failover Resource Group
Removing a Node From a Resource Group
How to Remove a Node From a Scalable Resource Group
How to Remove a Node From a Failover Resource Group
How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources
Example - Removing a Node From a Resource Group
Synchronizing the Startups Between Resource Groups and Device Groups
Managed Entity Monitoring by HAStoragePlus
Troubleshooting Monitoring for Managed Entities
Additional Administrative Tasks to Configure HAStoragePlus Resources for a Zone Cluster
How to Set Up the HAStoragePlus Resource Type for New Resources
How to Set Up the HAStoragePlus Resource Type for Existing Resources
Configuring an HAStoragePlus Resource for Cluster File Systems
Sample Entries in /etc/vfstab for Cluster File Systems
How to Set Up the HAStoragePlus Resource for Cluster File Systems
How to Delete an HAStoragePlus Resource Type for Cluster File Systems
Enabling Highly Available Local File Systems
Configuration Requirements for Highly Available Local File Systems
Format of Device Names for Devices Without a Volume Manager
Sample Entries in /etc/vfstab for Highly Available Local File Systems
How to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility
How to Delete an HAStoragePlus Resource That Makes a Local Solaris ZFS Highly Available
Sharing a Highly Available Local File System Across Zone Clusters
Modifying Online the Resource for a Highly Available Local File System
How to Add File Systems Other Than Solaris ZFS to an Online HAStoragePlus Resource
How to Remove File Systems Other Than Solaris ZFS From an Online HAStoragePlus Resource
How to Add a Solaris ZFS Storage Pool to an Online HAStoragePlus Resource
How to Remove a Solaris ZFS Storage Pool From an Online HAStoragePlus Resource
Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource
How to Change a ZFS Pool Configuration That is Managed by an Online HAStoragePlus Resource
How to Recover From a Fault After Modifying the Zpools Property of an HAStoragePlus Resource
Changing the Cluster File System to a Local File System in an HAStoragePlus Resource
How to Change the Cluster File System to Local File System in an HAStoragePlus Resource
Upgrading the HAStoragePlus Resource Type
Information for Registering the New Resource Type Version
Information for Migrating Existing Instances of the Resource Type
Distributing Online Resource Groups Among Cluster Nodes
Enforcing Collocation of a Resource Group With Another Resource Group
Specifying a Preferred Collocation of a Resource Group With Another Resource Group
Distributing a Set of Resource Groups Evenly Among Cluster Nodes
Specifying That a Critical Service Has Precedence
Delegating the Failover or Switchover of a Resource Group
Combining Affinities Between Resource Groups
Zone Cluster Resource Group Affinities
Configuring the Distribution of Resource Group Load Across Nodes
How to Configure Load Limits for a Node
How to Set Priority for a Resource Group
How to Set Load Factors for a Resource Group
How to Set Preemption Mode for a Resource Group
How to Concentrate Load Onto Fewer Nodes in the Cluster
Enabling Oracle Solaris SMF Services to Run With Oracle Solaris Cluster
Encapsulating an SMF Service Into a Failover Proxy Resource Configuration
Encapsulating an SMF Service Into a Multi-Master Proxy Resource Configuration
Encapsulating an SMF Service Into a Scalable Proxy Resource Configuration
Tuning Fault Monitors for Oracle Solaris Cluster Data Services
Setting the Interval Between Fault Monitor Probes
Setting the Timeout for Fault Monitor Probes
Defining the Criteria for Persistent Faults
Complete Failures and Partial Failures of a Resource
Dependencies of the Threshold and the Retry Interval on Other Properties
System Properties for Setting the Threshold and the Retry Interval
You might need a highly available local file system to remain available while you are modifying the resource that represents the file system. For example, you might need the file system to remain available because storage is being provisioned dynamically. In this situation, modify the resource that represents the highly available local file system while the resource is online.
In the Oracle Solaris Cluster environment, a highly available local file system is represented by an HAStoragePlus resource. Oracle Solaris Cluster enables you to modify an online HAStoragePlus resource as follows:
Adding file systems to the HAStoragePlus resource
Removing file systems from the HAStoragePlus resource
Oracle Solaris Cluster software does not enable you to rename a file system while the file system is online.
Note - When you remove the file systems configured in the HAStoragePlus resources for a zone cluster, you also need to remove the file system configuration from the zone cluster. For information about removing a file system from a zone cluster, see How to Remove a File System From a Zone Cluster in Oracle Solaris Cluster System Administration Guide.
When you add a local or cluster file system to an HAStoragePlus resource, the HAStoragePlus resource automatically mounts the file system.
For each entry, set the mount at boot field and the mount options field as follows:
For local file systems
Set the mount at boot field to no.
Remove the global flag.
For cluster file systems
Set the mount options field to contain the global option.
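As an illustration, /etc/vfstab entries that match these settings might look like the following. The device names and mount points here are hypothetical; see Sample Entries in /etc/vfstab for Highly Available Local File Systems and Sample Entries in /etc/vfstab for Cluster File Systems for authoritative examples.

```
# Local file system: mount-at-boot is "no", no global option
/dev/md/ds1/dsk/d0  /dev/md/ds1/rdsk/d0  /local-fs/mnt  ufs  2  no   logging

# Cluster file system: mount options contain "global"
/dev/md/ds1/dsk/d1  /dev/md/ds1/rdsk/d1  /global/fs     ufs  2  yes  global,logging
```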
# scha_resource_get -O extension -R hasp-resource -G hasp-rg FileSystemMountPoints
-R hasp-resource
Specifies the HAStoragePlus resource to which you are adding file systems.
-G hasp-rg
Specifies the resource group that contains the HAStoragePlus resource.
The new mount-point list must contain both of the following:
The mount points of the file systems that the HAStoragePlus resource already manages
The mount points of the file systems that you are adding to the HAStoragePlus resource
# clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
-p FileSystemMountPoints="mount-point-list"
Specifies a comma-separated list of mount points of the file systems that the HAStoragePlus resource already manages and the mount points of the file systems that you are adding. The format of each entry in the list is LocalZonePath:GlobalZonePath. In this format, the global path is optional. If the global path is not specified, the global path is the same as the local path.
hasp-resource
Specifies the HAStoragePlus resource to which you are adding file systems.
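The value of mount-point-list can be assembled with ordinary shell string handling. The following sketch uses hypothetical mount points, not values from this procedure, to show that the new property value is simply the existing list plus the entries being added:

```shell
# Mount points that the HAStoragePlus resource already manages,
# as reported by scha_resource_get (hypothetical example values):
current="/global/global-fs/fs"

# Mount point of the file system being added:
new="/global/local-fs/fs"

# The new property value is a comma-separated list of both:
mount_point_list="${current},${new}"
echo "$mount_point_list"
```

The resulting value would then be supplied as -p FileSystemMountPoints="mount-point-list" to clresource set.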
# scha_resource_get -O extension -R hasp-resource -G hasp-rg \
FileSystemMountPoints
-R hasp-resource
Specifies the HAStoragePlus resource to which you are adding file systems.
-G hasp-rg
Specifies the resource group that contains the HAStoragePlus resource.
If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to mount a file system failed.
# clresource status hasp-resource
Example 2-43 Adding a File System to an Online HAStoragePlus Resource
This example shows how to add a file system to an online HAStoragePlus resource.
The HAStoragePlus resource is named rshasp and is contained in the resource group rghasp.
The HAStoragePlus resource named rshasp already manages the file system whose mount point is /global/global-fs/fs.
The mount point of the file system that is to be added is /global/local-fs/fs.
The example assumes that the /etc/vfstab file on each cluster node already contains an entry for the file system that is to be added.
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource set \
-p FileSystemMountPoints="/global/global-fs/fs,/global/local-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource status rshasp

=== Cluster Resources ===

Resource Name    Node Name    Status     Status Message
-------------    ---------    ------     --------------
rshasp           node46       Offline    Offline
                 node47       Online     Online
When you remove a file system from an HAStoragePlus resource, the HAStoragePlus resource treats a local file system differently from a cluster file system.
The HAStoragePlus resource automatically unmounts a local file system.
The HAStoragePlus resource does not unmount the cluster file system.
# scha_resource_get -O extension -R hasp-resource -G hasp-rg FileSystemMountPoints
-R hasp-resource
Specifies the HAStoragePlus resource from which you are removing file systems.
-G hasp-rg
Specifies the resource group that contains the HAStoragePlus resource.
# clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
-p FileSystemMountPoints="mount-point-list"
Specifies a comma-separated list of mount points of the file systems that are to remain in the HAStoragePlus resource. This list must not include the mount points of the file systems that you are removing.
hasp-resource
Specifies the HAStoragePlus resource from which you are removing file systems.
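Conversely, the reduced list can be produced by filtering the entry being removed out of the current list. This sketch uses hypothetical mount points:

```shell
# Mount points that the resource currently manages (hypothetical values):
current="/global/global-fs/fs,/global/local-fs/fs"

# Mount point of the file system being removed:
remove="/global/local-fs/fs"

# Keep every entry except the one being removed:
remaining=$(printf '%s\n' "$current" | tr ',' '\n' | grep -Fxv "$remove" | paste -s -d, -)
echo "$remaining"
```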
# scha_resource_get -O extension -R hasp-resource -G hasp-rg \
FileSystemMountPoints
-R hasp-resource
Specifies the HAStoragePlus resource from which you are removing file systems.
-G hasp-rg
Specifies the resource group that contains the HAStoragePlus resource.
If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to unmount a file system failed.
# clresource status hasp-resource
Example 2-44 Removing a File System From an Online HAStoragePlus Resource
This example shows how to remove a file system from an online HAStoragePlus resource.
The HAStoragePlus resource is named rshasp and is contained in the resource group rghasp.
The HAStoragePlus resource named rshasp already manages the file systems whose mount points are as follows:
/global/global-fs/fs
/global/local-fs/fs
The mount point of the file system that is to be removed is /global/local-fs/fs.
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource set -p FileSystemMountPoints="/global/global-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource status rshasp

=== Cluster Resources ===

Resource Name    Node Name    Status     Status Message
-------------    ---------    ------     --------------
rshasp           node46       Offline    Offline
                 node47       Online     Online
When you add a Solaris ZFS storage pool to an online HAStoragePlus resource, the HAStoragePlus resource does the following:
Imports the ZFS storage pool.
Mounts all file systems in the ZFS storage pool.
Caution - If you are planning to manually import a pool that is already managed by the cluster, ensure that the pool is not imported on multiple nodes. Importing a pool on multiple nodes can lead to problems.
If you want to make configuration changes to a ZFS pool that is managed by the cluster through an HAStoragePlus resource, see Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.
# clresource set -p Zpools="zpools-list" hasp-resource
-p Zpools="zpools-list"
Specifies a comma-separated list of the ZFS storage pool names that the HAStoragePlus resource already manages, plus the new ZFS storage pool name that you want to add.
hasp-resource
Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.
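As with FileSystemMountPoints, the new Zpools value is simply the current list with the new pool name appended. The pool names in this sketch are hypothetical:

```shell
# Pools that the HAStoragePlus resource already manages
# (hypothetical example values, as shown by clresource show -p Zpools):
current="hapool1,hapool2"

# Pool being added:
new="hapool3"

# New value for the Zpools extension property:
zpools_list="${current},${new}"
echo "$zpools_list"
```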
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
Specifies the HAStoragePlus resource to which you added the ZFS storage pool.
If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to import and mount the ZFS file system failed. In this case, you need to repeat the preceding set of steps.
# clresource status hasp-resource
When you remove a Solaris ZFS storage pool from an online HAStoragePlus resource, the HAStoragePlus resource does the following:
Unmounts the file systems in the ZFS storage pool.
Exports the ZFS storage pool from the node.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.
# clresource set -p Zpools="zpools-list" hasp-resource
-p Zpools="zpools-list"
Specifies a comma-separated list of the ZFS storage pool names that the HAStoragePlus resource currently manages, minus the ZFS storage pool name that you want to remove.
hasp-resource
Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
Specifies the HAStoragePlus resource from which you removed the ZFS storage pool.
If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to unmount and export the ZFS file system failed. In this case, you need to repeat the preceding set of steps.
# clresource status -t SUNW.HAStoragePlus +
To change the configuration of a ZFS pool that is managed by an HAStoragePlus resource, you must ensure that the pool is never imported on multiple nodes at the same time. Importing a pool on multiple nodes can have severe consequences, including ZFS pool corruption.
The following procedures help you avoid multiple imports when performing pool configuration changes.
# zpool list zfs-pool-name
Run this command on all cluster nodes that have a physical connection to the ZFS pool.
# zpool import zfs-pool-name
If the import succeeds, proceed to Step 3. If the import fails, the cluster node that previously accessed the pool might have shut down without exporting the pool. Follow the substeps below to ensure that the cluster node is not using the ZFS pool and then import the pool forcefully:
Cannot import 'zfs-pool-name': pool may be in use from other system, it was last accessed by hostname (hostid: hostid) on accessed-date.
hostname# zpool list zfs-pool-name
# zpool import -f zfs-pool-name
# zpool export zfs-pool-name
# zpool list zfs-pool-name
The pool is imported on the node where the HAStoragePlus resource is online.
# clresource status hasp-rs-managing-pool

=== Cluster Resources ===

Resource Name            Node Name      Status     Status Message
-------------            ---------      ------     --------------
hasp-rs-managing-pool    phys-node-1    Offline    Offline
                         phys-node-2    Online     Online

phys-node-2# zpool list zfs-pool-name
If a fault occurs during a modification of the FileSystemMountPoints extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.
# clresource status hasp-resource
The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:
The device on which the file system should reside does not exist.
An attempt by the fsck command to repair a file system failed.
The mount point of a file system that you attempted to add does not exist.
A file system that you attempted to add cannot be mounted.
A file system that you attempted to remove cannot be unmounted.
# clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
-p FileSystemMountPoints="mount-point-list"
Specifies a comma-separated list of the mount points that you specified in the unsuccessful attempt to modify the highly available local file system.
hasp-resource
Specifies the HAStoragePlus resource that you are modifying.
# clresource status
Example 2-45 Status of a Faulty HAStoragePlus Resource
This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because an attempt by the fsck command to repair a file system failed.
# clresource status

=== Cluster Resources ===

Resource Name    Node Name    Status     Status Message
-------------    ---------    ------     --------------
rshasp           node46       Offline    Offline
                 node47       Online     Faulted - Failed to fsck: /mnt.
If a fault occurs during a modification of the Zpools extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.
# clresource status hasp-resource
The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:
The ZFS pool zpool failed to import.
The ZFS pool zpool failed to export.
Note - If you import a corrupt ZFS pool, the best option is to choose Continue, which displays an error message. The other choices are Wait (which hangs until the operation succeeds or the node panics) and Panic (which panics the node).
# clresource set -p Zpools="zpools-list" hasp-resource
-p Zpools="zpools-list"
Specifies a comma-separated list of the ZFS storage pool names that you specified in the unsuccessful attempt to modify the HAStoragePlus resource.
hasp-resource
Specifies the HAStoragePlus resource that you are modifying.
# clresource status
Example 2-46 Status of a Faulty HAStoragePlus Resource
This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because the ZFS pool zpool failed to import.
# clresource status hasp-resource

=== Cluster Resources ===

Resource Name    Node Name    Status     Status Message
-------------    ---------    ------     --------------
hasp-resource    node46       Online     Faulted - Failed to import:hazpool
                 node47       Offline    Offline