
Administering an Oracle® Solaris Cluster 4.4 Configuration


Updated: November 2019
 
 

How to Remove a File System From a Zone Cluster

A file system can be exported to a zone cluster using either a direct mount or a loopback mount.

Zone clusters support direct mounts for the following:

  • UFS local file system

  • Oracle HSM standalone file system

  • Oracle HSM shared file system, when used to support Oracle RAC

  • Oracle Solaris ZFS (exported as a data set)

  • NFS from supported NAS devices

Zone clusters can manage loopback mounts for the following:

  • UFS local file system

  • ZFS cluster file system

  • Oracle HSM standalone file system

  • Oracle HSM shared file system, only when used to support Oracle RAC

  • UFS cluster file system

You configure an HAStoragePlus or ScalMountPoint resource to manage the mounting of the file system. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.
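
For reference, the following is a minimal sketch of how such a resource might be created for a ZFS pool; the resource group hasp-rg is a hypothetical name, the pool name HAzpool is taken from Example 79, and your configuration can differ:

phys-schost# clresourcetype register -Z zone-cluster-name SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z zone-cluster-name hasp-rg
phys-schost# clresource create -Z zone-cluster-name -g hasp-rg \
-t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs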

An HAStoragePlus resource does not monitor a ZFS file system if the file system has its mountpoint property set to none or legacy, or its canmount property set to off. For all other ZFS file systems, the HAStoragePlus resource fault monitor checks whether the file system is mounted. If the file system is mounted, the fault monitor then probes the file system's accessibility by reading from it and, if the IOOption property is set to ReadWrite rather than ReadOnly, also writing to it.
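
To check whether a ZFS file system falls into this unmonitored category, you can inspect these properties from the global zone. For example, for a hypothetical dataset named HAzpool/data:

phys-schost# zfs get mountpoint,canmount HAzpool/data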

If the ZFS file system is not mounted or the probe of the file system fails, the fault monitor reports a failure and the resource is set to the Faulted state. The RGM then attempts to restart the resource, as determined by its retry_count and retry_interval properties. Restarting the resource remounts the file system, provided that the mountpoint and canmount property settings described above do not apply. If the fault monitor continues to fail and exceeds retry_count within retry_interval, the RGM fails the resource over to another node.
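
You can review or adjust this restart behavior through the standard resource properties; a sketch, using the hypothetical resource name hasp-rs:

phys-schost# clresource show -p Retry_count,Retry_interval -Z zone-cluster-name hasp-rs
phys-schost# clresource set -p Retry_count=5 -Z zone-cluster-name hasp-rs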

The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
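
For example, clresource can be shortened to clrs and clzonecluster to clzc, so the following two commands are equivalent:

phys-schost# clresource delete -F -Z zone-cluster-name fs_zone_resources
phys-schost# clrs delete -F -Z zone-cluster-name fs_zone_resources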


Note -  You can also use the Oracle Solaris Cluster Manager browser interface to remove a file system from a zone cluster. Click Zone Clusters, click the name of the zone cluster to go to its page, then click the Solaris Resources tab to administer zone-cluster components. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager.
  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.

  2. Delete the resources related to the file system being removed.
    1. Identify and remove the Oracle Solaris Cluster resources, such as those of type SUNW.HAStoragePlus and SUNW.ScalMountPoint, that are configured for the zone cluster's file system that you are removing.
      phys-schost# clresource delete -F -Z zone-cluster-name fs_zone_resources
    2. If applicable, identify and remove the Oracle Solaris Cluster resources of type SUNW.qfs that are configured in the global cluster for the file system that you are removing.
      phys-schost# clresource delete -F fs_global_resources

      Use the -F option carefully because it forces the deletion of all the resources that you specify, even if you did not disable them first. All the specified resources are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or an error state. For more information, see the clresource(8CL) man page.
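
      If you prefer not to force the deletion, a more conservative sequence is to disable the resource first and then delete it without the -F option; the deletion then fails safely if other resources still depend on it. A sketch, using a hypothetical resource name:

      phys-schost# clresource disable -Z zone-cluster-name fs_zone_resource
      phys-schost# clresource delete -Z zone-cluster-name fs_zone_resource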


    Tip  -  If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
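
    For example, you might confirm that the group is empty and then remove it; the group name fs-rg is hypothetical:

    phys-schost# clresource list -Z zone-cluster-name -g fs-rg
    phys-schost# clresourcegroup delete -Z zone-cluster-name fs-rg
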
  3. Determine the path to the file-system mount point directory.

    For example, you can list the file systems that are configured in the zone cluster:

    phys-schost# clzonecluster show -v zone-cluster-name
  4. Remove the file system from the zone-cluster configuration.
    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> remove fs dir=filesystemdirectory
    clzc:zone-cluster-name> commit

    The file system mount point is specified by dir=.
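
    If the file system was exported to the zone cluster as a ZFS data set rather than as a mounted file system, remove the dataset resource instead, as shown in Example 79:

    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> remove dataset name=poolname
    clzc:zone-cluster-name> commit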

  5. Verify the removal of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name
Example 78  Removing a Highly Available Local File System in a Zone Cluster

This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of type SUNW.HAStoragePlus.

phys-schost# clzonecluster show -v sczone
...
Resource Name:                           fs
dir:                                     /local/ufs-1
special:                                 /dev/md/ds1/dsk/d0
raw:                                     /dev/md/ds1/rdsk/d0
type:                                    ufs
options:                                 [logging]
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
Example 79  Removing a Highly Available ZFS File System in a Zone Cluster

This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in the resource hasp-rs of type SUNW.HAStoragePlus.

phys-schost# clzonecluster show -v sczone
...
Resource Name:                           dataset
name:                                     HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone
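
Removing the data set from the zone-cluster configuration does not destroy the pool or its data; the pool remains accessible from the global zone. A quick check from a global-cluster node might look like the following:

phys-schost# zpool status HAzpool
phys-schost# zfs list -r HAzpool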