5. Administering Global Devices, Disk-Path Monitoring, and Cluster File Systems
Overview of Administering Global Devices and the Global Namespace
Global Device Permissions for Solaris Volume Manager
Dynamic Reconfiguration With Global Devices
Veritas Volume Manager Administration Considerations
Administering Storage-Based Replicated Devices
Administering Hitachi TrueCopy Replicated Devices
How to Configure a Hitachi TrueCopy Replication Group
How to Configure DID Devices for Replication Using Hitachi TrueCopy
How to Verify a Hitachi TrueCopy Replicated Global Device Group Configuration
Example: Configuring a TrueCopy Replication Group for Oracle Solaris Cluster
Administering EMC Symmetrix Remote Data Facility Replicated Devices
How to Configure an EMC SRDF Replication Group
How to Configure DID Devices for Replication Using EMC SRDF
How to Verify EMC SRDF Replicated Global Device Group Configuration
Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster
How to Update the Global-Devices Namespace
How to Change the Size of a lofi Device That Is Used for the Global-Devices Namespace
Migrating the Global-Devices Namespace
How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device
How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition
Adding and Registering Device Groups
How to Add and Register a Device Group (Solaris Volume Manager)
How to Add and Register a Device Group (Raw-Disk)
How to Add and Register a Replicated Device Group (ZFS)
How to Create a New Disk Group When Initializing Disks (Veritas Volume Manager)
How to Remove and Unregister a Device Group (Solaris Volume Manager)
How to Remove a Node From All Device Groups
How to Remove a Node From a Device Group (Solaris Volume Manager)
How to Create a New Disk Group When Encapsulating Disks (Veritas Volume Manager)
How to Add a New Volume to an Existing Device Group (Veritas Volume Manager)
How to Convert an Existing Disk Group to a Device Group (Veritas Volume Manager)
How to Assign a New Minor Number to a Device Group (Veritas Volume Manager)
How to Register a Disk Group as a Device Group (Veritas Volume Manager)
How to Register Disk Group Configuration Changes (Veritas Volume Manager)
How to Convert a Local Disk Group to a Device Group (VxVM)
How to Convert a Device Group to a Local Disk Group (VxVM)
How to Remove a Volume From a Device Group (Veritas Volume Manager)
How to Remove and Unregister a Device Group (Veritas Volume Manager)
How to Add a Node to a Device Group (Veritas Volume Manager)
How to Remove a Node From a Device Group (Veritas Volume Manager)
How to Remove a Node From a Raw-Disk Device Group
How to Change Device Group Properties
How to Set the Desired Number of Secondaries for a Device Group
How to List a Device Group Configuration
How to Switch the Primary for a Device Group
How to Put a Device Group in Maintenance State
Administering the SCSI Protocol Settings for Storage Devices
How to Display the Default Global SCSI Protocol Settings for All Storage Devices
How to Display the SCSI Protocol of a Single Storage Device
How to Change the Default Global Fencing Protocol Settings for All Storage Devices
How to Change the Fencing Protocol for a Single Storage Device
Administering Cluster File Systems
How to Add a Cluster File System
How to Remove a Cluster File System
How to Check Global Mounts in a Cluster
Administering Disk-Path Monitoring
How to Print Failed Disk Paths
How to Resolve a Disk-Path Status Error
How to Monitor Disk Paths From a File
How to Enable the Automatic Rebooting of a Node When All Monitored Shared-Disk Paths Fail
How to Disable the Automatic Rebooting of a Node When All Monitored Shared-Disk Paths Fail
No special Oracle Solaris Cluster commands are necessary for cluster file system administration. Administer a cluster file system as you would any other Oracle Solaris file system, using standard Oracle Solaris file system commands, such as mount and newfs. Mount cluster file systems by specifying the -g option to the mount command. Cluster file systems can also be automatically mounted at boot. Cluster file systems are only visible from the voting node in a global cluster. If you require the cluster file system data to be accessible from a non-voting node, map the data to the non-voting node with zoneadm(1M) or HAStoragePlus.
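For illustration only, assume a hypothetical global device /dev/global/dsk/d11s0 and mount point /global/apps (neither name comes from a specific configuration in this guide). A manual mount might look like the following:

# mount -g /dev/global/dsk/d11s0 /global/apps

The -g option is equivalent to specifying mount -o global. To have the cluster file system mounted automatically at boot, each node would need an /etc/vfstab entry similar to this one:

/dev/global/dsk/d11s0 /dev/global/rdsk/d11s0 /global/apps ufs 2 yes global,logging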
Note - When the cluster file system reads files, the file system does not update the access time on those files.
The following restrictions apply to cluster file system administration:
The unlink(1M) command is not supported on directories that are not empty.
The lockfs -d command is not supported. Use lockfs -n as a workaround.
You cannot remount a cluster file system with the directio mount option added at remount time.
ZFS for root file systems is supported, with one significant exception. If you use a dedicated partition of the boot disk for the global-devices file system, you must use only UFS as its file system. The global-devices namespace requires the proxy file system (PxFS) running on a UFS file system. However, a UFS file system for the global-devices namespace can coexist with a ZFS file system for the root (/) file system and other root file systems, for example, /var or /home. Alternatively, if you instead use a lofi device to host the global-devices namespace, there is no limitation on the use of ZFS for root file systems.
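To see which form a node is using for the global-devices namespace, one quick check is to look for the /global/.devices/node@nodeID entry in /etc/vfstab. The output below is only a sketch; the disk slice and node ID are illustrative:

# grep /global/.devices /etc/vfstab
/dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /global/.devices/node@1 ufs 2 no global

A dedicated-partition configuration shows a UFS entry like the one above. A lofi-based configuration typically has no such vfstab entry, because the namespace is backed by a lofi file (normally /.globaldevices) instead.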
The following VxFS features are not supported in an Oracle Solaris Cluster cluster file system. They are, however, supported in a local file system.
Quick I/O
Snapshots
Storage checkpoints
VxFS-specific mount options:
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
Veritas cluster file system (requires the VxVM cluster feature and Veritas Cluster Server). The VxVM cluster feature is not supported on x86 based systems.
Cache advisories can be used, but the effect is observed on the given node only.
All other VxFS features and options that are supported in a cluster file system are supported by Oracle Solaris Cluster software. See VxFS documentation for details about VxFS options that are supported in a cluster configuration.
The following guidelines for using VxFS to create highly available cluster file systems are specific to an Oracle Solaris Cluster 3.3 configuration.
Create a VxFS file system by following the procedures in the VxFS documentation.
Mount and unmount a VxFS file system from the primary node. The primary node masters the disk on which the VxFS file system resides. A VxFS file system mount or unmount operation that is performed from a secondary node might fail. (A sketch of this workflow follows this list.)
Perform all VxFS administration commands from the primary node of the VxFS cluster file system.
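The following sketch ties these guidelines together. It assumes a hypothetical VxVM device group appdg containing a volume appvol that is mounted globally at /global/vxapps; all of these names are illustrative. First identify the current primary node of the device group, then perform the mount from that node:

# cldevicegroup status appdg
# mount -F vxfs -o global /dev/vx/dsk/appdg/appvol /global/vxapps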
The following guidelines for administering VxFS cluster file systems are not specific to Oracle Solaris Cluster 3.3 software. However, the guidelines are different from the way you administer UFS cluster file systems.
You can administer files on a VxFS cluster file system from any node in the cluster. The exception is ioctls, which you must issue only from the primary node. If you do not know whether an administration command involves ioctls, issue the command from the primary node.
If a VxFS cluster file system fails over to a secondary node, all standard system-call operations that were in progress during failover are reissued transparently on the new primary. However, any ioctl-related operation in progress during the failover will fail. After a VxFS cluster file system failover, check the state of the cluster file system. Administrative commands that were issued on the old primary before failover might require corrective measures. See VxFS documentation for more information.
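As a minimal post-failover check, again using the illustrative appdg and /global/vxapps names from the earlier sketch, confirm which node is now the primary for the device group and verify that the cluster file system is still mounted there before issuing further administrative commands:

# cldevicegroup status appdg
# df -k /global/vxapps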