Oracle Solaris Cluster Data Services Planning and Administration Guide, Oracle Solaris Cluster 4.1
Modifying Online the Resource for a Highly Available Local File System

You might need a highly available local file system to remain available while you are modifying the resource that represents the file system. For example, you might need the file system to remain available because storage is being provisioned dynamically. In this situation, modify the resource that represents the highly available local file system while the resource is online.

In the Oracle Solaris Cluster environment, a highly available local file system is represented by an HAStoragePlus resource. Oracle Solaris Cluster enables you to modify an online HAStoragePlus resource as follows:

• Add file systems other than Solaris ZFS to the resource

• Remove file systems other than Solaris ZFS from the resource

• Add a Solaris ZFS storage pool to the resource

• Remove a Solaris ZFS storage pool from the resource

Oracle Solaris Cluster software does not enable you to rename a file system while the file system is online.


Note - When you remove the file systems configured in the HAStoragePlus resources for a zone cluster, you also need to remove the file system configuration from the zone cluster. For information about removing a file system from a zone cluster, see How to Remove a File System From a Zone Cluster in Oracle Solaris Cluster System Administration Guide.


How to Add File Systems Other Than Solaris ZFS to an Online HAStoragePlus Resource

When you add a local or cluster file system to an HAStoragePlus resource, the HAStoragePlus resource automatically mounts the file system.

  1. On one node of the cluster, assume the root role that provides solaris.cluster.modify RBAC authorization.
  2. In the /etc/vfstab file on each node of the cluster, add an entry for the mount point of each file system that you are adding.

    For each entry, set the mount at boot field and the mount options field as follows (see the sample entries after this list):

    • For local file systems

      • Set the mount at boot field to no.

      • Remove the global flag from the mount options field.

    • For cluster file systems

      • Set the mount options field to contain the global option.
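    The following entries are a hypothetical illustration; the device names and mount points are placeholders, and the mount at boot value of no for the cluster entry assumes that the HAStoragePlus resource, not the boot process, mounts the file system. See the Sample Entries in /etc/vfstab sections of this guide for authoritative examples.

    #device               device                 mount                  FS    fsck  mount    mount
    #to mount             to fsck                point                  type  pass  at boot  options
    /dev/md/dg1/dsk/d1    /dev/md/dg1/rdsk/d1    /global/local-fs/fs    ufs   2     no       logging
    /dev/global/dsk/d2s0  /dev/global/rdsk/d2s0  /global/global-fs/fs   ufs   2     no       global,logging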

  3. Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages.
    # scha_resource_get -O extension -R hasp-resource -G hasp-rg FileSystemMountPoints
    -R hasp-resource

    Specifies the HAStoragePlus resource to which you are adding file systems

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource

  4. Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain the following mount points:
    • The mount points of the file systems that the HAStoragePlus resource already manages

    • The mount points of the file systems that you are adding to the HAStoragePlus resource

    # clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
    -p FileSystemMountPoints="mount-point-list"

    Specifies a comma-separated list of mount points of the file systems that the HAStoragePlus resource already manages and the mount points of the file systems that you are adding. The format of each entry in the list is LocalZonePath:GlobalZonePath. In this format, the global path is optional. If the global path is not specified, the global path is the same as the local path.

    hasp-resource

    Specifies the HAStoragePlus resource to which you are adding file systems.
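    For example, a hypothetical invocation that keeps the managed mount point /global/fs1 and adds /zone/fs2, which uses the LocalZonePath:GlobalZonePath form described above, might look like the following (the resource name and paths are placeholders):

    # clresource set -p FileSystemMountPoints="/global/fs1,/zone/fs2:/global/fs2" hasp-resource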

  5. Confirm that you have a match between the mount point list of the HAStoragePlus resource and the list that you specified in Step 4.
    # scha_resource_get -O extension -R hasp-resource -G hasp-rg \
     FileSystemMountPoints
    -R hasp-resource

    Specifies the HAStoragePlus resource to which you are adding file systems.

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource.

  6. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to mount a file system failed.

    # clresource status hasp-resource

Example 2-43 Adding a File System to an Online HAStoragePlus Resource

This example shows how to add a file system to an online HAStoragePlus resource.

The example assumes that the /etc/vfstab file on each cluster node already contains an entry for the file system that is to be added.

# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource set \
-p FileSystemMountPoints="/global/global-fs/fs,/global/local-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource status rshasp


=== Cluster Resources ===

Resource Name           Node Name       Status        Status Message
-------------           ---------       ------        --------------
rshasp                  node46          Offline       Offline
                        node47          Online        Online

How to Remove File Systems Other Than Solaris ZFS From an Online HAStoragePlus Resource

When you remove a file system from an HAStoragePlus resource, the HAStoragePlus resource treats a local file system differently from a cluster file system.


Caution - Before removing a file system from an online HAStoragePlus resource, ensure that no applications are using the file system. When you remove a file system from an online HAStoragePlus resource, the file system might be forcibly unmounted. If a file system that an application is using is forcibly unmounted, the application might fail or hang.
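One way to verify that no processes are still using a file system before you remove it is the fuser command; the mount point in this sketch is a placeholder:

# fuser -c /global/local-fs/fs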


  1. On one node of the cluster, assume the root role that provides solaris.cluster.modify RBAC authorization.
  2. Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages.
    # scha_resource_get -O extension -R hasp-resource -G hasp-rg FileSystemMountPoints
    -R hasp-resource

    Specifies the HAStoragePlus resource from which you are removing file systems.

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource.

  3. Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain only the mount points of the file systems that are to remain in the HAStoragePlus resource.
    # clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
    -p FileSystemMountPoints="mount-point-list"

    Specifies a comma-separated list of mount points of the file systems that are to remain in the HAStoragePlus resource. This list must not include the mount points of the file systems that you are removing.

    hasp-resource

    Specifies the HAStoragePlus resource from which you are removing file systems.

  4. Confirm that you have a match between the mount point list of the HAStoragePlus resource and the list that you specified in Step 3.
    # scha_resource_get -O extension -R hasp-resource -G hasp-rg \
    FileSystemMountPoints
    -R hasp-resource

    Specifies the HAStoragePlus resource from which you are removing file systems.

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource.

  5. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to unmount a file system failed.

    # clresource status hasp-resource
  6. (Optional) From the /etc/vfstab file on each node of the cluster, remove the entry for the mount point of each file system that you are removing.

Example 2-44 Removing a File System From an Online HAStoragePlus Resource

This example shows how to remove a file system from an online HAStoragePlus resource.

# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource set -p FileSystemMountPoints="/global/global-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource status rshasp


=== Cluster Resources ===

Resource Name           Node Name       Status        Status Message
-------------           ---------       ------        --------------
rshasp                  node46          Offline       Offline
                        node47          Online        Online

How to Add a Solaris ZFS Storage Pool to an Online HAStoragePlus Resource

When you add a Solaris ZFS storage pool to an online HAStoragePlus resource, the HAStoragePlus resource does the following:

• Imports the ZFS storage pool

• Mounts all file systems in the ZFS storage pool


Caution - If you are planning to manually import a pool that is already managed by the cluster, ensure that the pool is not imported on multiple nodes. Importing a pool on multiple nodes can lead to problems.


If you want to make configuration changes to a ZFS pool that is managed by the cluster through an HAStoragePlus resource, see Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource.

  1. On any node in the cluster, assume the root role that provides solaris.cluster.modify RBAC authorization.
  2. Determine the ZFS storage pools that the HAStoragePlus resource already manages.
    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.

  3. Add the new ZFS storage pool to the existing list of ZFS storage pools that the HAStoragePlus resource already manages.
    # clresource set -p Zpools="zpools-list" hasp-resource
    -p Zpools="zpools-list"

    Specifies a comma-separated list of existing ZFS storage pool names that the HAStoragePlus resource already manages and the new ZFS storage pool name that you want to add.

    hasp-resource

    Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.
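    For example, if the resource already manages a pool named hapool1 and you are adding hapool2 (both names hypothetical):

    # clresource set -p Zpools="hapool1,hapool2" hasp-resource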

  4. Compare the new list of ZFS storage pools that the HAStoragePlus resource manages with the list that you generated in Step 2.
    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource to which you added the ZFS storage pool.

  5. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to import and mount the ZFS file system failed. In this case, you need to repeat the preceding set of steps.

    # clresource status hasp-resource

How to Remove a Solaris ZFS Storage Pool From an Online HAStoragePlus Resource

When you remove a Solaris ZFS storage pool from an online HAStoragePlus resource, the HAStoragePlus resource does the following:

• Unmounts all file systems in the ZFS storage pool

• Exports the ZFS storage pool

  1. On any node in the cluster, assume the root role that provides solaris.cluster.modify RBAC authorization.
  2. Determine the ZFS storage pools that the HAStoragePlus resource already manages.
    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.

  3. Remove the ZFS storage pool from the list of ZFS storage pools that the HAStoragePlus resource currently manages.
    # clresource set -p Zpools="zpools-list" hasp-resource
    -p Zpools="zpools-list"

    Specifies a comma-separated list of ZFS storage pool names that the HAStoragePlus resource currently manages, minus the ZFS storage pool name that you want to remove.

    hasp-resource

    Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.
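    For example, if the resource currently manages the hypothetical pools hapool1 and hapool2 and you are removing hapool2:

    # clresource set -p Zpools="hapool1" hasp-resource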

  4. Compare the new list of ZFS storage pools that the HAStoragePlus resource now manages with the list that you generated in Step 2.
    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource from which you removed the ZFS storage pool.

  5. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to unmount and export the ZFS file system failed. In this case, you need to repeat the preceding set of steps.

    # clresource status -t SUNW.HAStoragePlus +

Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource

To change the configuration of a ZFS pool that is managed by an HAStoragePlus resource, you must ensure that the pool is never imported on multiple nodes at the same time. Simultaneous imports can have severe consequences, including ZFS pool corruption.

The following procedures help you avoid multiple imports when performing pool configuration changes.

How to Change a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource in an Offline State

  1. Ensure that the ZFS pool that requires configuration changes is not imported on any node.
    # zpool list zfs-pool-name

    Run this command on all cluster nodes that have a physical connection to the ZFS pool.

  2. On a cluster node that has a physical connection to the ZFS pool, import the pool on an alternate root without using the force option.
    # zpool import -R altroot zfs-pool-name

    In this command, altroot is a placeholder for the alternate root path that the -R option requires.

    If the import succeeds, proceed to Step 3. If the import fails, the cluster node that previously accessed the pool might have shut down without exporting the pool. Follow the substeps below to ensure that the cluster node is not using the ZFS pool and then import the pool forcefully:

    a. Check whether the import failed with an error message similar to the following. If it did, proceed to Step b and Step c:

      Cannot import 'zfs-pool-name': pool may be in use from other system, it was last accessed by hostname (hostid: hostid) on accessed-date.

    b. Verify that the pool is not in use on the machine that last accessed it.
      hostname# zpool list zfs-pool-name
    c. If the ZFS pool is not in use on that node, import the pool forcefully.
      # zpool import -f zfs-pool-name
  3. Perform the ZFS pool configuration changes.
  4. Export the ZFS pool and check that the pool is not in use.
    # zpool export zfs-pool-name
    # zpool list zfs-pool-name
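As an illustration of Step 3, a configuration change might attach a mirror to an existing device in the imported pool; the pool and device names below are placeholders:

# zpool attach zfs-pool-name c1t3d0 c1t4d0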

How to Change a ZFS Pool Configuration That is Managed by an Online HAStoragePlus Resource

  1. Find the cluster node where the ZFS pool is imported.

    It will be the node where the HAStoragePlus resource is online.

    # clresource status hasp-rs-managing-pool

    === Cluster Resources ===
    Resource Name            Node Name        Status       Status Message
    -------------            ---------        ------       --------------
    hasp-rs-managing-pool    phys-node-1      Offline      Offline
                             phys-node-2      Online       Online

    phys-node-2# zpool list zfs-pool-name
  2. Perform the ZFS pool configuration changes.
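    For example, a hypothetical change run on the node where the pool is imported (phys-node-2 in the preceding output) might add a hot spare to the pool; the device name is a placeholder:

    phys-node-2# zpool add zfs-pool-name spare c2t0d0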

How to Recover From a Fault After Modifying the FileSystemMountPoints Property of an HAStoragePlus Resource

If a fault occurs during a modification of the FileSystemMountPoints extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.

  1. Determine the fault that caused the attempted modification to fail.
    # clresource status hasp-resource

    The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:

    • The device on which the file system should reside does not exist.

    • An attempt by the fsck command to repair a file system failed.

    • The mount point of a file system that you attempted to add does not exist.

    • A file system that you attempted to add cannot be mounted.

    • A file system that you attempted to remove cannot be unmounted.

  2. Correct the fault that caused the attempted modification to fail.
  3. Repeat the step to modify the FileSystemMountPoints extension property of the HAStoragePlus resource.
    # clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
    -p FileSystemMountPoints="mount-point-list"

    Specifies a comma-separated list of mount points that you specified in the unsuccessful attempt to modify the highly available local file system

    hasp-resource

    Specifies the HAStoragePlus resource that you are modifying

  4. Confirm that the HAStoragePlus resource is online and not faulted.
    # clresource status

Example 2-45 Status of a Faulty HAStoragePlus Resource

This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because an attempt by the fsck command to repair a file system failed.

# clresource status

  === Cluster Resources ===

  Resource Name     Node Name     Status       Status Message
  -------------     ---------     ------       --------------
  rshasp            node46        Offline      Offline
                    node47        Online       Online Faulted - Failed to fsck: /mnt.

How to Recover From a Fault After Modifying the Zpools Property of an HAStoragePlus Resource

If a fault occurs during a modification of the Zpools extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.

  1. Determine the fault that caused the attempted modification to fail.
    # clresource status hasp-resource

    The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:

    • The ZFS pool zpool failed to import.

    • The ZFS pool zpool failed to export.


    Note - If you import a corrupt ZFS pool, the best option is to choose Continue to display an error message. Other choices are Wait (which hangs until success occurs or the node panics) or Panic (which panics the node).


  2. Correct the fault that caused the attempted modification to fail.
  3. Repeat the step to modify the Zpools extension property of the HAStoragePlus resource.
    # clresource set -p Zpools="zpools-list" hasp-resource
    -p Zpools="zpools-list"

    Specifies a comma-separated list of ZFS storage pool names that you specified in the unsuccessful attempt to modify the HAStoragePlus resource

    hasp-resource

    Specifies the HAStoragePlus resource that you are modifying

  4. Confirm that the HAStoragePlus resource is online and not faulted.
    # clresource status

Example 2-46 Status of a Faulty HAStoragePlus Resource

This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because the ZFS pool zpool failed to import.

# clresource status hasp-resource

  === Cluster Resources ===

  Resource Name     Node Name     Status       Status Message
  -------------     ---------     ------       --------------
  hasp-resource     node46        Online       Faulted - Failed to import: hazpool
                    node47        Offline      Offline