A file system can be exported to a zone cluster using either a direct mount or a loopback mount.
Zone clusters support direct mounts for the following:
UFS local file system
StorageTek QFS standalone file system
StorageTek QFS shared file system, when used to support Oracle RAC
Oracle Solaris ZFS (exported as a data set)
NFS from supported NAS devices
Zone clusters can manage loopback mounts for the following:
UFS local file system
StorageTek QFS standalone file system
StorageTek QFS shared file system, only when used to support Oracle RAC
UFS cluster file system
You configure an HAStoragePlus or ScalMountPoint resource to manage the mounting of the file system. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Oracle Solaris Cluster 4.3 Software Installation Guide.
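For illustration, a minimal sketch of that configuration follows, assuming a hypothetical zone cluster named zcname and a hypothetical resource group named zc-rg, and reusing the /local/ufs-1 mount point and device paths from the example later in this section; adjust the names and property values to your environment.

phys-schost# clzonecluster configure zcname
clzc:zcname> add fs
clzc:zcname:fs> set dir=/local/ufs-1
clzc:zcname:fs> set special=/dev/md/ds1/dsk/d0
clzc:zcname:fs> set raw=/dev/md/ds1/rdsk/d0
clzc:zcname:fs> set type=ufs
clzc:zcname:fs> add options [logging]
clzc:zcname:fs> end
clzc:zcname> commit
clzc:zcname> exit
phys-schost# clresourcegroup create -Z zcname zc-rg
phys-schost# clresource create -Z zcname -g zc-rg -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/local/ufs-1 hasp-rs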
An HAStoragePlus resource does not monitor a ZFS file system if the file system's mountpoint property is set to none or legacy, or if its canmount property is set to off. For all other ZFS file systems, the HAStoragePlus resource fault monitor checks whether the file system is mounted. If the file system is mounted, the fault monitor then probes the file system's accessibility by reading from it or writing to it, as determined by the value of the resource's IOOption property (ReadOnly or ReadWrite).
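For example, you can check those properties on the dataset before relying on the fault monitor; the dataset name HAzpool/fs1 below is hypothetical.

phys-schost# zfs get mountpoint,canmount HAzpool/fs1

If mountpoint reports none or legacy, or canmount reports off, the HAStoragePlus fault monitor does not monitor that file system.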
If the ZFS file system is not mounted or the probe of the file system fails, the fault monitor reports a failure and the resource status is set to Faulted. The RGM then attempts to restart the resource, as determined by the resource's retry_count and retry_interval properties; a successful restart remounts the file system, provided the mountpoint and canmount settings described above do not prevent monitoring. If the fault monitor continues to fail and exceeds retry_count within retry_interval, the RGM fails the resource over to another node.
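As an illustrative sketch, you could review or tune these properties on the resource; hasp-rs matches the examples later in this section, and the values shown are placeholders (some resource types restrict when these properties can be changed).

phys-schost# clresource show -p Retry_count,Retry_interval hasp-rs
phys-schost# clresource set -p Retry_count=2 -p Retry_interval=370 hasp-rs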
The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.
phys-schost# clresource delete -F -Z zone-cluster-name fs_zone_resources
phys-schost# clresource delete -F fs_global_resources
Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.
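A gentler alternative, sketched here with the placeholder names from the commands above, is to disable the resources first and then delete them without forcing:

phys-schost# clresource disable -Z zone-cluster-name fs_zone_resources
phys-schost# clresource delete -Z zone-cluster-name fs_zone_resources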
For example:
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> remove fs dir=filesystemdirectory
clzc:zone-cluster-name> commit
The file system mount point is specified by dir=.
Verify the removal of the file system:
phys-schost# clzonecluster show -v zone-cluster-name
This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of type HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:                            fs
   dir:                                       /local/ufs-1
   special:                                   /dev/md/ds1/dsk/d0
   raw:                                       /dev/md/ds1/rdsk/d0
   type:                                      ufs
   options:                                   [logging]
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone

Example 86 Removing a Highly Available ZFS File System in a Zone Cluster
This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in the resource hasp-rs of type SUNW.HAStoragePlus.
phys-schost# clzonecluster show -v sczone
...
 Resource Name:                            dataset
   name:                                       HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone