You might need a highly available file system to remain available while you modify the resource that represents it, for example, because storage for the file system is being provisioned dynamically. In this situation, modify the resource that represents the highly available file system while the resource is online.
In the Sun Cluster environment, a highly available file system is represented by an HAStoragePlus resource. Sun Cluster enables you to modify an online HAStoragePlus resource as follows:
Adding file systems to the HAStoragePlus resource
Removing file systems from the HAStoragePlus resource
Sun Cluster software does not enable you to rename a file system while the file system is online.
When you remove file systems that are configured in the HAStoragePlus resources for a zone cluster, you must also remove the file system configuration from the zone cluster. For information about removing a file system from a zone cluster, see How to Remove a File System from a Zone Cluster in Sun Cluster System Administration Guide for Solaris OS.
When you add a local or global file system to an HAStoragePlus resource, the HAStoragePlus resource automatically mounts the file system.
On one node of the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
In the /etc/vfstab file on each node of the cluster, add an entry for the mount point of each file system that you are adding.
For each entry, set the mount at boot field and the mount options field as follows:
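As a sketch, a vfstab entry for a file system managed by HAStoragePlus might look like the following. The device and mount-point names here are hypothetical; the key point is that the mount-at-boot field is set to no, because HAStoragePlus, not the boot process, mounts the file system.

```
#device to mount       device to fsck          mount point          FS type  fsck pass  mount at boot  mount options
/dev/md/dg1/dsk/d100   /dev/md/dg1/rdsk/d100   /global/local-fs/fs  ufs      2          no             logging
```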
Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages.
# scha_resource_get -O extension -R hasp-resource -G hasp-rg \
FileSystemMountPoints
-R hasp-resource
    Specifies the HAStoragePlus resource to which you are adding file systems.
-G hasp-rg
    Specifies the resource group that contains the HAStoragePlus resource.
Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain the following mount points:
The mount points of the file systems that the HAStoragePlus resource already manages
The mount points of the file systems that you are adding to the HAStoragePlus resource
# clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
mount-point-list
    Specifies a comma-separated list of mount points of the file systems that the HAStoragePlus resource already manages and the mount points of the file systems that you are adding. The format of each entry in the list is LocalZonePath:GlobalZonePath. In this format, the global path is optional. If the global path is not specified, the global path is the same as the local path.
hasp-resource
    Specifies the HAStoragePlus resource to which you are adding file systems.
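The new property value is simply the old list with the additions appended. As a sketch (the mount-point values here are hypothetical), the value could be composed in the shell before being passed to clresource set:

```shell
# Mount points the resource already manages, as reported by
# scha_resource_get (hypothetical example values).
current="/global/global-fs/fs"
# Mount point of the file system being added.
adding="/global/local-fs/fs"
# Compose the new comma-separated value for FileSystemMountPoints.
new_list="${current},${adding}"
echo "${new_list}"
# The value would then be applied with:
#   clresource set -p FileSystemMountPoints="${new_list}" hasp-resource
```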
Confirm that you have a match between the mount point list of the HAStoragePlus resource and the list that you specified in Step 4.
# scha_resource_get -O extension -R hasp-resource -G hasp-rg \
FileSystemMountPoints
-R hasp-resource
    Specifies the HAStoragePlus resource to which you are adding file systems.
-G hasp-rg
    Specifies the resource group that contains the HAStoragePlus resource.
Confirm that the HAStoragePlus resource is online and not faulted.
If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to mount a file system failed.
# clresource status hasp-resource
This example shows how to add a file system to an online HAStoragePlus resource.
The HAStoragePlus resource is named rshasp and is contained in the resource group rghasp.
The HAStoragePlus resource named rshasp already manages the file system whose mount point is /global/global-fs/fs.
The mount point of the file system that is to be added is /global/local-fs/fs.
The example assumes that the /etc/vfstab file on each cluster node already contains an entry for the file system that is to be added.
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource set \
-p FileSystemMountPoints="/global/global-fs/fs,/global/local-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource status rshasp

=== Cluster Resources ===

Resource Name    Node Name    Status     Message
-------------    ---------    ------     -------
rshasp           node46       Offline    Offline
                 node47       Online     Online
When you remove a file system from an HAStoragePlus resource, the HAStoragePlus resource treats a local file system differently from a global file system.
The HAStoragePlus resource automatically unmounts a local file system.
The HAStoragePlus resource does not unmount the global file system.
Before removing a file system from an online HAStoragePlus resource, ensure that no applications are using the file system. When you remove a file system from an online HAStoragePlus resource, the file system might be forcibly unmounted. If a file system that an application is using is forcibly unmounted, the application might fail or hang.
On one node of the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages.
# scha_resource_get -O extension -R hasp-resource -G hasp-rg \
FileSystemMountPoints
-R hasp-resource
    Specifies the HAStoragePlus resource from which you are removing file systems.
-G hasp-rg
    Specifies the resource group that contains the HAStoragePlus resource.
Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain only the mount points of the file systems that are to remain in the HAStoragePlus resource.
# clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
mount-point-list
    Specifies a comma-separated list of mount points of the file systems that are to remain in the HAStoragePlus resource. This list must not include the mount points of the file systems that you are removing.
hasp-resource
    Specifies the HAStoragePlus resource from which you are removing file systems.
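The new value is the current list with the removed mount points filtered out. A minimal sketch of building that value (the mount points are hypothetical):

```shell
# Mount points the resource currently manages (hypothetical values).
current="/global/global-fs/fs,/global/local-fs/fs"
# Mount point being removed.
remove="/global/local-fs/fs"
# Filter the removed entry out of the comma-separated list.
new_list=$(printf '%s\n' "$current" | tr ',' '\n' | grep -v -x "$remove" | paste -sd, -)
echo "$new_list"
# The value would then be applied with:
#   clresource set -p FileSystemMountPoints="${new_list}" hasp-resource
```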
Confirm that you have a match between the mount point list of the HAStoragePlus resource and the list that you specified in Step 3.
# scha_resource_get -O extension -R hasp-resource -G hasp-rg \
FileSystemMountPoints
-R hasp-resource
    Specifies the HAStoragePlus resource from which you are removing file systems.
-G hasp-rg
    Specifies the resource group that contains the HAStoragePlus resource.
Confirm that the HAStoragePlus resource is online and not faulted.
If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to unmount a file system failed.
# clresource status hasp-resource
(Optional) From the /etc/vfstab file on each node of the cluster, remove the entry for the mount point of each file system that you are removing.
This example shows how to remove a file system from an online HAStoragePlus resource.
The HAStoragePlus resource is named rshasp and is contained in the resource group rghasp.
The HAStoragePlus resource named rshasp already manages the file systems whose mount points are as follows:
/global/global-fs/fs
/global/local-fs/fs
The mount point of the file system that is to be removed is /global/local-fs/fs.
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource set -p FileSystemMountPoints="/global/global-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource status rshasp

=== Cluster Resources ===

Resource Name    Node Name    Status     Message
-------------    ---------    ------     -------
rshasp           node46       Offline    Offline
                 node47       Online     Online
When you add a Solaris ZFS (Zettabyte File System) storage pool to an online HAStoragePlus resource, the HAStoragePlus resource does the following:
Imports the ZFS storage pool.
Mounts all file systems in the ZFS storage pool.
On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Determine the ZFS storage pools that the HAStoragePlus resource already manages.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
    Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
    Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.
Add the new ZFS storage pool to the existing list of ZFS storage pools that the HAStoragePlus resource already manages.
# clresource set -p Zpools="zpools-list" hasp-resource
zpools-list
    Specifies a comma-separated list of existing ZFS storage pool names that the HAStoragePlus resource already manages and the new ZFS storage pool name that you want to add.
hasp-resource
    Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.
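As with file-system mount points, the new Zpools value is the current list with the new pool name appended. A sketch with hypothetical pool names:

```shell
# Pools the resource already manages, as reported by
# clresource show -p Zpools (hypothetical names).
current_pools="hazpool1"
# Pool being added to the resource.
new_pool="hazpool2"
# Compose the new comma-separated value for the Zpools property.
zpools_list="${current_pools},${new_pool}"
echo "${zpools_list}"
# The value would then be applied with:
#   clresource set -p Zpools="${zpools_list}" hasp-resource
```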
Compare the new list of ZFS storage pools that the HAStoragePlus resource manages with the list that you generated in Step 2.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
    Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
    Specifies the HAStoragePlus resource to which you added the ZFS storage pool.
Confirm that the HAStoragePlus resource is online and not faulted.
If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to import and mount the ZFS file system failed. In this case, you need to repeat the preceding set of steps.
# clresource status hasp-resource
When you remove a Solaris ZFS (Zettabyte File System) storage pool from an online HAStoragePlus resource, the HAStoragePlus resource does the following:
Unmounts the file systems in the ZFS storage pool.
Exports the ZFS storage pool from the node.
On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Determine the ZFS storage pools that the HAStoragePlus resource already manages.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
    Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
    Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.
Remove the ZFS storage pool from the list of ZFS storage pools that the HAStoragePlus resource currently manages.
# clresource set -p Zpools="zpools-list" hasp-resource
zpools-list
    Specifies a comma-separated list of ZFS storage pool names that the HAStoragePlus resource currently manages, minus the ZFS storage pool name that you want to remove.
hasp-resource
    Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.
Compare the new list of ZFS storage pools that the HAStoragePlus resource now manages with the list that you generated in Step 2.
# clresource show -g hasp-resource-group -p Zpools hasp-resource
-g hasp-resource-group
    Specifies the resource group that contains the HAStoragePlus resource.
hasp-resource
    Specifies the HAStoragePlus resource from which you removed the ZFS storage pool.
Confirm that the HAStoragePlus resource is online and not faulted.
If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to unmount and export the ZFS file system failed. In this case, you need to repeat the preceding set of steps.
# clresource status -t SUNW.HAStoragePlus +
If a fault occurs during a modification of the FileSystemMountPoints extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.
Determine the fault that caused the attempted modification to fail.
# clresource status hasp-resource
The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:
The device on which the file system should reside does not exist.
An attempt by the fsck command to repair a file system failed.
The mount point of a file system that you attempted to add does not exist.
A file system that you attempted to add cannot be mounted.
A file system that you attempted to remove cannot be unmounted.
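Some of these faults can be checked for outside the cluster framework before retrying. For example, a quick sanity check that each mount point directory exists on the local node (the mount points below are hypothetical):

```shell
# Mount points about to be configured (hypothetical examples).
mount_points="/global/global-fs/fs /global/local-fs/fs"
# Report any mount point whose directory is missing on this node.
for mp in $mount_points; do
    if [ ! -d "$mp" ]; then
        echo "missing mount point: $mp"
    fi
done
```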
Correct the fault that caused the attempted modification to fail.
Repeat the step to modify the FileSystemMountPoints extension property of the HAStoragePlus resource.
# clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
mount-point-list
    Specifies the comma-separated list of mount points that you specified in the unsuccessful attempt to modify the highly available file system.
hasp-resource
    Specifies the HAStoragePlus resource that you are modifying.
Confirm that the HAStoragePlus resource is online and not faulted.
# clresource status
This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because an attempt by the fsck command to repair a file system failed.
# clresource status

=== Cluster Resources ===

Resource Name    Node Name    Status     Status Message
-------------    ---------    ------     --------------
rshasp           node46       Offline    Offline
                 node47       Online     Faulted - Failed to fsck: /mnt.
If a fault occurs during a modification of the Zpools extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.
Determine the fault that caused the attempted modification to fail.
# clresource status hasp-resource
The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:
The ZFS pool zpool failed to import.
The ZFS pool zpool failed to export.
Correct the fault that caused the attempted modification to fail.
Repeat the step to modify the Zpools extension property of the HAStoragePlus resource.
# clresource set -p Zpools="zpools-list" hasp-resource
zpools-list
    Specifies the comma-separated list of ZFS storage pool names that you specified in the unsuccessful attempt to modify the Zpools extension property.
hasp-resource
    Specifies the HAStoragePlus resource that you are modifying.
Confirm that the HAStoragePlus resource is online and not faulted.
# clresource status
This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because the ZFS pool zpool failed to import.
# clresource status hasp-resource

=== Cluster Resources ===

Resource Name    Node Name    Status     Status Message
-------------    ---------    ------     --------------
hasp-resource    node46       Online     Faulted - Failed to import:hazpool
                 node47       Offline    Offline