Oracle Solaris Cluster System Administration Guide
No special Oracle Solaris Cluster commands are necessary for cluster file system administration. Administer a cluster file system as you would any other Oracle Solaris file system, using standard Oracle Solaris file system commands such as mount and newfs. Mount a cluster file system by specifying the -g option to the mount command; cluster file systems can also be mounted automatically at boot.

Cluster file systems are visible only from the voting node in a global cluster. If you require the cluster file system data to be accessible from a non-voting node, map the data to the non-voting node with zoneadm(1M) or HAStoragePlus.
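As an illustration of the commands above, the following sketch creates and globally mounts a cluster file system. The DID device path and mount point are hypothetical; substitute the values for your configuration.

```shell
# Create a UFS file system on a global device (device path is hypothetical).
newfs /dev/global/rdsk/d12s0

# Mount it on all cluster nodes by passing -g (global) to mount.
mkdir -p /global/app-data
mount -g /dev/global/dsk/d12s0 /global/app-data

# To mount it automatically at boot instead, add a vfstab entry that
# carries the "global" mount option, for example:
#
# device to mount        device to fsck          mount point       FS  fsck boot options
# /dev/global/dsk/d12s0  /dev/global/rdsk/d12s0  /global/app-data  ufs 2    yes  global,logging
```

These commands must be run on a cluster node; the vfstab line is a config fragment, not output of the commands above.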
Note - When the cluster file system reads files, the file system does not update the access time on those files.
The following restrictions apply to cluster file system administration:
The unlink(1M) command is not supported on directories that are not empty.
The lockfs -d command is not supported. Use lockfs -n as a workaround.
You cannot remount a cluster file system with the directio mount option added at remount time.
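For example, because the delete lock is not supported, the restriction on lockfs -d can be worked around with a name lock as noted above. The mount point in this sketch is hypothetical.

```shell
# Not supported on a cluster file system:
#   lockfs -d /global/app-data      # delete lock -- fails on a cluster file system

# Documented workaround: apply a name lock instead.
lockfs -n /global/app-data

# Display the lock currently held on the file system.
lockfs /global/app-data

# Release the lock when finished.
lockfs -u /global/app-data
```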
ZFS for root file systems is supported, with one significant exception. If you use a dedicated partition of the boot disk for the global-devices file system, you must use only UFS as its file system. The global-devices namespace requires the proxy file system (PxFS) running on a UFS file system. However, a UFS file system for the global-devices namespace can coexist with a ZFS file system for the root (/) file system and other root file systems, for example, /var or /home. Alternatively, if you instead use a lofi device to host the global-devices namespace, there is no limitation on the use of ZFS for root file systems.
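To make the coexistence rule concrete, the vfstab fragment below sketches a system whose root (/), /var, and /home reside in a ZFS root pool while the global-devices namespace stays on a dedicated UFS slice of the boot disk. The disk slice and node number are hypothetical.

```shell
# Root (/), /var, and /home live in a ZFS root pool and need no vfstab entries.
# The global-devices file system on its dedicated boot-disk slice must be UFS,
# because the global-devices namespace requires PxFS on UFS
# (slice c0t0d0s3 and node number 1 are hypothetical):
#
# device to mount     device to fsck        mount point              FS  fsck boot options
/dev/dsk/c0t0d0s3     /dev/rdsk/c0t0d0s3    /global/.devices/node@1  ufs 2    no   global
```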
The following VxFS features are not supported in an Oracle Solaris Cluster cluster file system. They are, however, supported in a local file system.
VxFS-specific mount options:
convosync (Convert O_SYNC)
qlog, delaylog, tmplog
Veritas cluster file system (which requires the VxVM cluster feature and Veritas Cluster Server). In addition, the VxVM cluster feature is not supported on x86-based systems.
Cache advisories can be used, but the effect is observed only on the given node.
All other VxFS features and options that are supported in a cluster file system are supported by Oracle Solaris Cluster software. See VxFS documentation for details about VxFS options that are supported in a cluster configuration.
The following guidelines for using VxFS to create highly available cluster file systems are specific to an Oracle Solaris Cluster 3.3 configuration.
Create a VxFS file system by following the procedures in the VxFS documentation.
Mount and unmount a VxFS file system from the primary node. The primary node masters the disk on which the VxFS file system resides. A VxFS file system mount or unmount operation that is performed from a secondary node might fail.
Perform all VxFS administration commands from the primary node of the VxFS cluster file system.
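Following the primary-node guideline above, first identify which node currently masters the device group that holds the VxFS file system, then mount from that node. The device-group name, volume name, and mount point below are hypothetical.

```shell
# Show the current primary (master) node of the device group
# (device-group name vxfs-dg is hypothetical).
cldevicegroup status vxfs-dg

# On the node reported as Primary, mount the VxFS file system.
# A mount attempted from a secondary node might fail.
mount -F vxfs /dev/vx/dsk/vxfs-dg/vol01 /global/vxfs-data
```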
The following guidelines for administering VxFS cluster file systems are not specific to Oracle Solaris Cluster 3.3 software. However, the guidelines differ from the way you administer UFS cluster file systems.
You can administer files on a VxFS cluster file system from any node in the cluster. The exception is ioctls, which you must issue only from the primary node. If you do not know whether an administration command involves ioctls, issue the command from the primary node.
If a VxFS cluster file system fails over to a secondary node, all standard system-call operations that were in progress during failover are reissued transparently on the new primary. However, any ioctl-related operation in progress during the failover will fail. After a VxFS cluster file system failover, check the state of the cluster file system. Administrative commands that were issued on the old primary before failover might require corrective measures. See VxFS documentation for more information.