Planning Cluster File Systems

For information about the purpose and function of cluster file systems, see Cluster File Systems in Concepts for Oracle Solaris Cluster 4.4.

Note:

You can alternatively configure highly available local file systems. This can provide better performance to support a data service with high I/O, or to permit use of certain file system features that are not supported in a cluster file system. For more information, see Enabling Highly Available Local File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.

Consider the following points when you plan cluster file systems:

  • Quotas – Quotas are not supported on cluster file systems. However, quotas are supported on highly available local file systems.

  • Zone clusters – You cannot configure ZFS or UFS cluster file systems directly in a zone cluster. However, you can configure them in the global cluster and loopback-mount them into the zone cluster, or you can use highly available local file systems. For more information, see Adding File Systems to a Zone Cluster.
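
    As a sketch of the loopback approach, the global-cluster file system can be added to the zone cluster configuration with the clzonecluster command. The zone cluster name zc1 and mount point /global/appfs below are hypothetical stand-ins:

    ```shell
    # Add a loopback (lofs) mount of a global-cluster file system to the
    # zone cluster zc1. All names are illustrative; substitute your own.
    clzonecluster configure zc1 <<'EOF'
    add fs
    set dir=/global/appfs
    set special=/global/appfs
    set type=lofs
    end
    commit
    exit
    EOF
    ```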

  • Zone file systems – Zones cannot be installed on a global ZFS file system. Placing the zone root path under a file system of a zpool that is configured as a globally mounted ZFS file system is not supported.

  • Loopback file system (LOFS) – During cluster creation, LOFS is enabled by default. You must manually disable LOFS on each cluster node if the cluster meets both of the following conditions:

    • HA for NFS is configured on a highly available local file system.

    • The automountd daemon is running.

    If the cluster meets both of these conditions, you must disable LOFS to avoid switchover problems or other failures. If the cluster meets only one of these conditions, you can safely leave LOFS enabled.

    If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS.
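
    As a sketch only: LOFS can be disabled by excluding the lofs module at boot time, or, if both LOFS and automountd must stay enabled, the HA for NFS file system can be kept out of the automounter maps. The mount point /global/hanfs below is a hypothetical stand-in:

    ```shell
    # Option 1: disable LOFS by excluding the lofs module at boot time.
    # Run on each cluster node; a reboot is required for the change to
    # take effect.
    echo 'exclude:lofs' >> /etc/system

    # Option 2: keep LOFS and automountd enabled, but ensure that no
    # automounter map entry points into the HA for NFS file system
    # (hypothetical mount point /global/hanfs). After editing
    # /etc/auto_master and the related maps, restart the automounter:
    svcadm restart svc:/system/filesystem/autofs:default
    ```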

  • Process accounting log files – Do not locate process accounting log files on a cluster file system or on a highly available local file system. A switchover would be blocked by writes to the log file, which would cause the node to hang. Use only a local file system to contain process accounting log files.

  • Communication endpoints – The cluster file system does not support any of the Oracle Solaris file system features that place a communication endpoint in the file system namespace. Therefore, do not attempt to use the fattach command from any node other than the local node.

    • Although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover.

    • Any FIFOs or named pipes that you create on a cluster file system would not be globally accessible.

  • Device special files – Neither block special files nor character special files are supported in a cluster file system. To specify a path name to a device node in a cluster file system, create a symbolic link to the device name in the /dev directory. Do not use the mknod command for this purpose.
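
    The symbolic-link approach can be sketched as follows. The device and mount point variables are stand-ins for real names such as /dev/dsk/c0t0d0s6 and a cluster file system mount point like /global/shared:

    ```shell
    # Hypothetical stand-ins: substitute your actual device node and
    # cluster file system mount point.
    DEVICE=/dev/null
    CLUSTER_FS=$(mktemp -d)

    # Create a symbolic link in the file system that points at the
    # device node in /dev, instead of creating a device special file
    # there with mknod.
    ln -s "$DEVICE" "$CLUSTER_FS/mydisk"
    ls -l "$CLUSTER_FS/mydisk"
    ```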

  • atime – Cluster file systems do not maintain atime.

  • ctime – When a file on a cluster file system is accessed, the update of the file's ctime might be delayed.

  • Installing applications – If you want the binaries of a highly available application to reside on a cluster file system, wait to install the application until after the cluster file system is configured.

  • Using chmod to change setuid permissions – The chmod command might fail to change setuid permissions on a file in a cluster file system. Specifically, if chmod is run in a non-global zone that is not on the PxFS primary server, it fails to change the setuid permission.

    Use one of these methods to successfully change setuid permissions:

    • Perform the operation on any global-cluster node that accesses the cluster file system.

    • Perform the operation on any non-global zone that runs on the PxFS primary node that has a loopback mount to the cluster file system.

    • Switch the PxFS primary to the global-cluster node where the non-global zone that encountered the error is running.
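
    The first method above can be sketched as follows, with a temporary file standing in for a binary on the cluster file system (for example, a hypothetical /global/shared/bin/app):

    ```shell
    # Set the setuid bit from a global-cluster node that accesses the
    # cluster file system. FILE is a stand-in for the real path.
    FILE=$(mktemp)
    chmod u+s "$FILE"
    ls -l "$FILE"   # the permission string now includes the setuid bit

    # If the chmod failed from a non-global zone, another option is to
    # switch the PxFS primary to the node hosting that zone. The device
    # group and node names below are hypothetical:
    #   cldevicegroup switch -n phys-node-2 dg1
    ```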