Oracle Solaris Cluster software provides a cluster file system based on the Oracle Solaris Cluster Proxy File System (PxFS). The cluster file system has the following features:
File access locations are transparent. A process can open a file that is located anywhere in the system. Processes on all cluster nodes can use the same path name to locate a file.
Note - When the cluster file system reads files, it does not update the access time on those files.
Coherency protocols are used to preserve the UNIX file access semantics even if the file is accessed concurrently from multiple nodes.
Extensive caching is used along with zero-copy bulk I/O movement to move file data efficiently.
The cluster file system provides highly available, advisory file-locking functionality by using the fcntl(2) interfaces. Applications that run on multiple cluster nodes can synchronize access to data by using advisory file locking on a cluster file system. File locks are recovered immediately from nodes that leave the cluster, and from applications that fail while holding locks.
Continuous access to data is ensured, even when failures occur. Applications are not affected by failures if a path to disks is still operational. This guarantee is maintained for raw disk access and all file system operations.
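The location transparency and the access-time note above can be observed from any cluster node. In this illustrative sketch, /global/oracle/data/example.dat is a hypothetical file on a cluster file system:

# ls -lu /global/oracle/data/example.dat
# cat /global/oracle/data/example.dat > /dev/null
# ls -lu /global/oracle/data/example.dat

The same path name resolves on every node, and the access time reported by ls -lu does not change after the read.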
Cluster file systems are independent of the underlying file system and volume management software.
You can mount a file system on a global device globally with mount -g or locally with mount.
Programs can access a file in a cluster file system from any node in the cluster through the same file name (for example, /global/foo).
A cluster file system is mounted on all cluster members. You cannot mount a cluster file system on a subset of cluster members.
A cluster file system is not a distinct file system type. Clients see the underlying file system (for example, UFS).
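As a rough check of these characteristics, you can confirm from any node that a cluster file system is mounted with the global option and that the underlying file system type is visible. The mount point /global/oracle/data is an assumed example:

# mount -v | grep /global/oracle/data
# df -n /global/oracle/data

The mount options shown by mount -v include global, and df -n reports the underlying file system type, such as ufs.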
In the Oracle Solaris Cluster software, all multihost disks are placed into device groups, which can be Solaris Volume Manager disk sets, raw-disk groups, or individual disks that are not under control of a software-based volume manager.
For a cluster file system to be highly available, the underlying disk storage must be connected to more than one cluster node. Therefore, a local file system (a file system that is stored on a node's local disk) that is made into a cluster file system is not highly available.
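To see which device groups exist and which nodes are connected to them, you can use the cldevicegroup command. This is a general sketch; the output depends on your configuration:

# cldevicegroup list -v
# cldevicegroup status

A device group whose storage is connected to only one node cannot provide a highly available cluster file system.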
You can mount cluster file systems as you would mount other file systems:
Manually. Use the mount command and the -g or -o global mount options to mount the cluster file system from the command line, for example:
# mount -g /dev/global/dsk/d0s0 /global/oracle/data
Automatically. Create an entry in the /etc/vfstab file with a global mount option to mount the cluster file system at boot. You then create a mount point under the /global directory on all nodes. The directory /global is a recommended location, not a requirement. Here's a sample line for a cluster file system from an /etc/vfstab file:
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/data ufs 2 yes global,logging
Note - While Oracle Solaris Cluster software does not impose a naming policy for cluster file systems, you can ease administration by creating a mount point for all cluster file systems under the same directory, such as /global/disk-group. See the Oracle Solaris Cluster System Administration Guide for more information.
The HAStoragePlus resource type is designed to make local and global file system configurations highly available. You can use the HAStoragePlus resource type to integrate your shared local or global file system into the Oracle Solaris Cluster environment and make the file system highly available.
Oracle Solaris Cluster systems support the following cluster file system:
UNIX® File System (UFS) – Uses Oracle Solaris Cluster Proxy File System (PxFS)
Oracle Solaris Cluster software supports the following as highly available failover local file systems:
UFS
Solaris ZFS (default file system)
The HAStoragePlus resource type provides additional file system capabilities such as checks, mounts, and forced unmounts. These capabilities enable Oracle Solaris Cluster to fail over local file systems. In order to fail over, the local file system must reside on global disk groups with affinity switchovers enabled.
See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide for information about how to use the HAStoragePlus resource type.
You can also use the HAStoragePlus resource type to synchronize the startup of resources and device groups on which the resources depend. For more information, see Resources, Resource Groups, and Resource Types.
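The following is a minimal sketch of making a file system highly available with HAStoragePlus. The resource group name hasp-rg, the resource name hasp-rs, and the reuse of the /global/oracle/data mount point are assumptions for illustration; adjust them for your configuration:

# clresourcetype register SUNW.HAStoragePlus
# clresourcegroup create hasp-rg
# clresource create -g hasp-rg -t SUNW.HAStoragePlus -p FileSystemMountPoints=/global/oracle/data hasp-rs
# clresourcegroup online -M hasp-rg

The FileSystemMountPoints property names the file systems that the resource manages. See the guide referenced above for the complete procedure and the supported properties.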
You can use the syncdir mount option for cluster file systems that use UFS as the underlying file system. However, performance significantly improves if you do not specify syncdir. If you specify syncdir, the writes are guaranteed to be POSIX compliant. If you do not specify syncdir, you experience the same behavior as in NFS file systems. For example, without syncdir, you might not discover an out-of-space condition until you close a file. With syncdir (and POSIX behavior), the out-of-space condition would have been discovered during the write operation. The cases in which you might have problems if you do not specify syncdir are rare.
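If you do choose syncdir, you add it to the mount options in the /etc/vfstab entry. The following variant of the earlier sample line is an illustration, not a recommendation:

/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/data ufs 2 yes global,logging,syncdir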