Enabling Highly Available Local File Systems
Using a highly available local file system improves the performance of I/O intensive data services. To make a local file system highly available in an Oracle Solaris Cluster environment, use the HAStoragePlus resource type.
You can specify cluster file systems or local file systems. Cluster file systems are accessible from all nodes in a cluster. Local file systems are accessible from a single cluster node. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node. These local file systems require the underlying devices to be Oracle Solaris Cluster global devices.
You specify these file system mount points in the FileSystemMountPoints extension property, in the format paths[,…]. The default setting for this property is an empty list.
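For example, a resource that manages two highly available local file systems might set the property as follows (mount points illustrative):

-p FileSystemMountPoints=/global/fs1,/global/fs2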
You can use the SUNW.HAStoragePlus resource type to make a file system available to zone-cluster nodes. The file systems configured in the SUNW.HAStoragePlus resource type for zone clusters should be authorized for use in zone clusters by using the clzonecluster command. For more information, see the clzonecluster(1CL) man page and Adding File Systems to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
The instructions for each Oracle Solaris Cluster data service that is I/O intensive explain how to configure the data service to operate with the HAStoragePlus resource type. For more information, see the individual Oracle Solaris Cluster data service guides.
Note - Do not use the HAStoragePlus resource type to make a root file system highly available.
Oracle Solaris Cluster provides the following tools for setting up the HAStoragePlus resource type to make local file systems highly available:
The clsetup utility.
Oracle Solaris Cluster maintenance commands.
The clsetup utility enables you to add resources to the resource group interactively. Configuring these resources interactively reduces the possibility of configuration errors that might result from command syntax errors or omissions. The clsetup utility ensures that all required resources are created and that all required dependencies between resources are set.
Configuration Requirements for Highly Available Local File Systems

Any file system on multihost disks must be accessible from any host that is directly connected to those multihost disks. To meet this requirement, configure the highly available local file system as follows:
Ensure that the disk partitions of the local file system reside on global devices.
Set the AffinityOn extension property of the HAStoragePlus resource that specifies these global devices to True.
The Zpools extension property of the HAStoragePlus resource ignores the AffinityOn extension property.
Create the HAStoragePlus resource in a failover resource group.
Ensure that the failback settings for the device groups and the resource group that contains the HAStoragePlus resource are identical.
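The following minimal sketch combines these requirements; the resource group, resource, and mount point names are illustrative, and the commands assume that the mount point is already defined in /etc/vfstab on all nodes:

# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/local-fs/nfs -p AffinityOn=True hasp-rs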
Note - The use of a volume manager with the global devices for a highly available local file system is optional.
Format of Device Names for Devices Without a Volume Manager

If you are not using a volume manager, use the appropriate format for the name of the underlying storage device. The format to use depends on the type of storage device, as follows:
For block devices: /dev/global/dsk/dDsS
For raw devices: /dev/global/rdsk/dDsS
The replaceable elements in these device names are as follows:
D is an integer that specifies the device ID (DID) instance number.
S is an integer that specifies the slice number.
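For example, for DID instance number 2 and slice 0 (values illustrative), the block device name is /dev/global/dsk/d2s0 and the raw device name is /dev/global/rdsk/d2s0.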
Sample Entries in /etc/vfstab for Highly Available Local File Systems

The following examples show entries in the /etc/vfstab file for global devices that are to be used for highly available local file systems.
Note - Solaris ZFS does not use the /etc/vfstab file.
Example 2-35 Entries in /etc/vfstab for a Global Device Without a Volume Manager
This example shows entries in the /etc/vfstab file for a global device on a physical disk without a volume manager.
/dev/global/dsk/d1s0 /dev/global/rdsk/d1s0 /global/local-fs/nfs ufs 5 no logging
Example 2-36 Entries in /etc/vfstab for a Global Device With Solaris Volume Manager
This example shows entries in the /etc/vfstab file for a global device that uses Solaris Volume Manager.
/dev/md/kappa-1/dsk/d0 /dev/md/kappa-1/rdsk/d0 /global/local-fs/nfs ufs 5 no logging
Note - The same file system entries must be added to the zone cluster configuration when you configure the file system for a zone cluster using the SUNW.HAStoragePlus resource type.
How to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility

The following instructions explain how to set up the HAStoragePlus resource type by using the clsetup utility. Perform this procedure from any cluster node.
This procedure provides the long forms of the Oracle Solaris Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
Ensure that the required volumes, disk groups, and file systems are created.
1. Start the clsetup utility:

# clsetup

The clsetup main menu is displayed.

2. Choose the menu item for data services.

The Data Services menu is displayed.

3. Choose the menu item for configuring highly available storage.

The clsetup utility displays the list of prerequisites for performing this task.

4. Verify that the prerequisites are met, and continue.

The clsetup utility displays a list of the cluster nodes that can master the highly available HAStoragePlus resource.

5. Select the nodes that can master the HAStoragePlus resource.

Ensure that the nodes are listed in the order in which the nodes are to appear in the HAStoragePlus resource group's node list. The first node in the list is the primary node of this resource group.

6. Confirm your selection of nodes.

The clsetup utility displays a list of the types of shared storage where data is to be stored.

7. Select the type of shared storage that you are using for storing the data.

The clsetup utility displays the file system mount points that are configured in the cluster. If there are no existing mount points, the clsetup utility allows you to define a new mount point.

8. Specify the mount point to use, or define a new mount point.

The clsetup utility displays the properties of the mount point that the utility will create.

9. Confirm the mount-point properties.

The clsetup utility displays the available file system mount points.

Note - You can use the c option to define another new mount point.

10. Select the file system mount points to make highly available, and confirm your selection.

The clsetup utility displays a list of the global disk sets and device groups that are configured in the cluster.

11. Select the device groups where the data is to be stored, and confirm your selection.

The clsetup utility displays the names of the Oracle Solaris Cluster objects that the utility will create.

12. If you require a different name for an Oracle Solaris Cluster object, change the name.

The clsetup utility displays a screen where you can specify the new name. After you specify the name, the utility returns you to the list of the names of the Oracle Solaris Cluster objects that the utility will create.

13. Confirm your selection of Oracle Solaris Cluster object names.

The clsetup utility displays information about the Oracle Solaris Cluster configuration that the utility will create.

14. Confirm the configuration.

The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When the configuration is complete, the clsetup utility lists the commands that the utility ran to create the configuration.

15. (Optional) Quit the clsetup utility.

If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing resource group when you restart the utility.

16. Verify that the HAStoragePlus resource has been created.

Use the clresource(1CL) utility for this purpose:

# clresource show name_of_rg
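For example, assuming that the utility created a resource named hasp-rs (name illustrative), you can display the full properties of that resource:

# clresource show -v hasp-rs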
How to Set Up the HAStoragePlus Resource Type to Make File Systems Other Than Solaris ZFS Highly Available

The following procedure explains how to set up the HAStoragePlus resource type to make file systems other than Solaris ZFS highly available.
1. On any cluster node, create a failover resource group:

# clresourcegroup create resource-group

2. Register the HAStoragePlus resource type:

# clresourcetype register SUNW.HAStoragePlus

3. Create the HAStoragePlus resource and specify the file system mount points:

# clresource create -g resource-group \
-t SUNW.HAStoragePlus -p FileSystemMountPoints=mount-point-list hasp-resource

4. Bring the resource group online in a managed state:

# clresourcegroup online -M resource-group
Example 2-37 Setting Up the HAStoragePlus Resource Type to Make a UFS File System Highly Available for the Global Cluster
This example assumes that the file system /web-1 is configured for the HAStoragePlus resource to make the file system highly available for the global cluster.
phys-schost-1# vi /etc/vfstab
#device                    device                     mount    FS    fsck  mount    mount
#to mount                  to fsck                    point    type  pass  at boot  options
#
/dev/md/apachedg/dsk/d0    /dev/md/apachedg/rdsk/d0   /web-1   ufs   2     no       logging

# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/web-1 hasp-rs
# clresourcegroup online -M hasp-rg
Example 2-38 Setting Up the HAStoragePlus Resource Type to Make a UFS File System Highly Available for a Zone Cluster
This example assumes that the file system /web-1 is configured for the HAStoragePlus resource to make the file system highly available for the zone cluster sczone. When a local file system is configured as a highly available local file system for a zone cluster by using the SUNW.HAStoragePlus resource type, the HAStoragePlus resource reads the file system information from the zone cluster configuration.
# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/web-1
clzc:sczone:fs> set special=/dev/md/apachedg/dsk/d0
clzc:sczone:fs> set raw=/dev/md/apachedg/rdsk/d0
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> exit

# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg \
-t SUNW.HAStoragePlus -p FileSystemMountPoints=/web-1 hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg
How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly Available

You perform the following primary tasks to make a local Solaris ZFS file system highly available:
Create a ZFS storage pool.
Create a ZFS file system in that ZFS storage pool.
Set up the HAStoragePlus resource that manages the ZFS storage pool.
This section describes how to complete these tasks.
If you are planning to manually import a ZFS pool that is already managed by the cluster, ensure that the pool is not imported on multiple nodes; importing a pool on multiple nodes at the same time can corrupt the pool. For more information, see Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource.
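A minimal sketch of a safe manual inspection, assuming a pool named HAzpool that is managed by a resource hasp-rs (both names illustrative): disable the resource so that the cluster releases the pool, import and inspect the pool on one node only, then export it and re-enable the resource.

# clresource disable hasp-rs
# zpool import HAzpool
# zpool status HAzpool
# zpool export HAzpool
# clresource enable hasp-rs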
Caution - Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.
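A sketch of that alternative unconfigure, add, reconfigure sequence, assuming that DID device d3 is the current quorum device and that the pool is named HAzpool (names illustrative):

# clquorum remove d3
# zpool create HAzpool /dev/did/dsk/d3s2
# clquorum add d3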
Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:
Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.
Ensure that the Oracle Solaris device identifier that you specify to the zpool(1M) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.
Note - A ZFS storage pool can be created on a full disk or on a disk slice. Creating the pool on a full disk, by specifying an Oracle Solaris logical device, is preferred because it enables ZFS to turn on the disk write cache, which improves performance. When a full disk is provided, ZFS labels the disk with an EFI label.
See Creating a Basic ZFS Storage Pool in Oracle Solaris 11.1 Administration: ZFS File Systems for information about how to create a ZFS storage pool.
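As a quick check of these requirements, you might confirm that a device is visible to the cluster before creating the pool on the full disk (device and pool names illustrative):

# cldevice list -v | grep c0t0d0
# zpool create HAzpool c0t0d0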
Observe the following requirements when you create a ZFS file system in the ZFS pool:
You can create more than one ZFS file system in the same ZFS storage pool.
HAStoragePlus does not support file systems created on ZFS volumes.
Do not place a ZFS file system in the FileSystemMountPoints extension property.
If necessary, change the ZFS failmode property setting to either continue or panic, whichever best fits your requirements.
Note - The ZFS pool failmode property is set to wait by default. This setting can result in the HAStoragePlus resource blocking, which might prevent a failover of the resource group. See the zpool(1M) man page to understand the possible values for the failmode property and decide which value fits your requirements.
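For example, to keep a blocked pool from stalling failover, you might set the failmode property to continue and verify the setting (pool name HAzpool illustrative):

# zpool set failmode=continue HAzpool
# zpool get failmode HAzpool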
You can choose to encrypt a ZFS file system when you create it. The HAStoragePlus resource automatically mounts all the file systems in the pool when the resource is brought online. An encrypted file system that requires interactive entry of a key or a passphrase during mount cannot be mounted automatically, so the resource has a problem coming online. To avoid problems, do not use a keysource value of raw,prompt, hex,prompt, passphrase,prompt, or pkcs11: for the encrypted file systems of a ZFS storage pool that a cluster manages by using an HAStoragePlus resource. You can use a keysource value of raw, hex, or passphrase with a file:// or https:// locator, where the key or passphrase location is accessible to all cluster nodes on which the HAStoragePlus resource can go online.
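A minimal sketch of an encrypted file system whose wrapping key is read from a file instead of being prompted for, assuming pool HAzpool and a key file on storage that every cluster node can reach (names and paths illustrative):

# pktool genkey keystore=file outkey=/keys/web.key keytype=aes keylen=128
# zfs create -o encryption=on -o keysource=raw,file:///keys/web.key HAzpool/secure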
See Creating a ZFS File System Hierarchy in Solaris ZFS Administration Guide for information about how to create a ZFS file system in a ZFS storage pool.
To set up the HAStoragePlus resource that manages the ZFS storage pool, perform the following steps:

1. Create a failover resource group:

# clresourcegroup create resource-group

2. Register the SUNW.HAStoragePlus resource type:

# clresourcetype register SUNW.HAStoragePlus

3. Create the HAStoragePlus resource for the ZFS storage pool:

# clresource create -g resource-group -t SUNW.HAStoragePlus \
-p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
resource

The default location that is searched for the devices of ZFS storage pools is /dev/dsk. You can override it by using the ZpoolsSearchDir extension property.

The resource is created in the enabled state.

4. Bring the resource group online in a managed state:

# clresourcegroup online -M resource-group
Example 2-39 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Global Cluster
The following example shows the commands to make a local ZFS file system highly available.
phys-schost-1% su
Password:
# cldevice list -v

DID Device     Full Device Path
----------     ----------------
d1             phys-schost-1:/dev/rdsk/c0t0d0
d2             phys-schost-1:/dev/rdsk/c0t1d0
d3             phys-schost-1:/dev/rdsk/c1t8d0
d3             phys-schost-2:/dev/rdsk/c1t8d0
d4             phys-schost-1:/dev/rdsk/c1t9d0
d4             phys-schost-2:/dev/rdsk/c1t9d0
d5             phys-schost-1:/dev/rdsk/c1t10d0
d5             phys-schost-2:/dev/rdsk/c1t10d0
d6             phys-schost-1:/dev/rdsk/c1t11d0
d6             phys-schost-2:/dev/rdsk/c1t11d0
d7             phys-schost-2:/dev/rdsk/c0t0d0
d8             phys-schost-2:/dev/rdsk/c0t1d0

You can create a ZFS storage pool on a disk slice by specifying a Solaris device identifier:

# zpool create HAzpool c1t8d0s2

Or you can create a ZFS storage pool on a disk slice by specifying a DID logical device identifier:

# zpool create HAzpool /dev/did/dsk/d3s2

# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg
Example 2-40 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Zone Cluster
The following example shows the steps to make a local ZFS file system highly available in the zone cluster sczone.

phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit

# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg
How to Delete an HAStoragePlus Resource That Makes a Local Solaris ZFS Highly Available

To delete an HAStoragePlus resource that makes a local Solaris ZFS file system highly available, use the clresource delete command with the -F option, which also disables the resource if it is enabled:

# clresource delete -F -g resource-group -t SUNW.HAStoragePlus resource
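For example, to delete the resource created in Example 2-39 (names taken from that example):

# clresource delete -F -g hasp-rg -t SUNW.HAStoragePlus hasp-rs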