Oracle Solaris Cluster Concepts Guide (Oracle Solaris Cluster 4.1)
The current release of Oracle Solaris Cluster software supports disk path monitoring (DPM). This section provides conceptual information about DPM, the DPM daemon, and administration tools that you use to monitor disk paths. Refer to Oracle Solaris Cluster System Administration Guide for procedural information about how to monitor, unmonitor, and check the status of disk paths.
DPM improves the overall reliability of failover and switchover by monitoring secondary disk path availability. Use the cldevice command to verify the availability of the disk path that is used by a resource before the resource is switched. Options that are provided with the cldevice command enable you to monitor disk paths to a single node or to all nodes in the cluster. See the cldevice(1CL) man page for more information about command-line options.
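As a sketch of this check, the following commands query disk path status before a switchover. The node name phys-schost-1 and DID device d4 are hypothetical examples; substitute names from your own cluster.

```shell
# Check the status of the disk path to DID device d4 as seen
# from node phys-schost-1 (both names are hypothetical examples).
cldevice status -n phys-schost-1 d4

# Check the status of all monitored disk paths in the cluster.
cldevice status
```

A path reported as Ok from the intended secondary node indicates that the resource can reach its storage after the switchover.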
The following table describes the default location for installation of DPM components.
A multithreaded DPM daemon runs on each node. The DPM daemon (scdpmd) is started by the SMF service system/cluster/scdpm when a node boots. If the daemon encounters a problem, the SMF service restarts it automatically. The following list describes how the scdpmd works on initial startup.
Note - At startup, the status for each disk path is initialized to UNKNOWN.
The DPM daemon gathers disk path and node name information from the previous status file or from the CCR database. See Cluster Configuration Repository (CCR) for more information about the CCR. After a DPM daemon is started, you can force the daemon to read the list of monitored disks from a specified file name.
The DPM daemon initializes the communication interface to respond to requests from components that are external to the daemon, such as the command-line interface.
The DPM daemon pings each disk path in the monitored list every 10 minutes by using scsi_inquiry commands. Each entry is locked to prevent the communication interface from accessing an entry while it is being modified.
The DPM daemon notifies the Oracle Solaris Cluster Event Framework and logs the new status of the path through the UNIX syslogd command. See the syslogd(1M) man page.
Note - All errors that are related to the daemon are reported by pmfd. All the functions from the API return 0 on success and -1 for any failure.
The DPM daemon monitors the availability of the logical path that is visible through multipath drivers such as Oracle Solaris I/O multipathing (MPxIO), formerly named Sun StorEdge Traffic Manager, and EMC PowerPath. The individual physical paths are not monitored because the multipath driver masks individual failures from the DPM daemon.
You can monitor disk paths in your cluster by using the cldevice command. Use this command to monitor, unmonitor, or display the status of disk paths in your cluster. You can also use this command to print a list of faulted disks and to monitor disk paths from a file. See the cldevice(1CL) man page.
The cldevice command enables you to perform the following tasks:
Monitor a new disk path
Unmonitor a disk path
Reread the configuration data from the CCR database
Read the disks to monitor or unmonitor from a specified file
Report the status of a disk path or all disk paths in the cluster
Print all the disk paths that are accessible from a node
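The tasks above map to cldevice subcommands roughly as follows. The node names, device names, and file path in this sketch are hypothetical examples; see the cldevice(1CL) man page for the authoritative syntax.

```shell
# All node names, DID device names, and file paths below are
# hypothetical examples.

# Monitor a new disk path from node phys-schost-2.
cldevice monitor -n phys-schost-2 /dev/did/dsk/d1

# Stop monitoring a disk path on all nodes.
cldevice unmonitor d5

# Read the list of disks to monitor from a specified file.
cldevice monitor -i /var/tmp/monitored-disks

# Report the status of all disk paths, or only the failed ones.
cldevice status
cldevice status -s fail

# Print all disk paths that are accessible from one node.
cldevice list -n phys-schost-1 -v
```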
Use the clnode set command to enable or disable automatic rebooting of a node when all monitored shared-disk paths fail. When the reboot_on_path_failure property is enabled, the states of local-disk paths are not considered when determining whether a node reboot is necessary. Only monitored shared disks are affected.
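A minimal sketch of enabling this behavior follows; the + operand applies the property to every node. Verify the exact property handling against the clnode(1CL) man page for your release.

```shell
# Enable automatic reboot of a node when all of its monitored
# shared-disk paths fail (+ applies the setting to all nodes).
clnode set -p reboot_on_path_failure=enabled +

# Display the current value of the property.
clnode show -p reboot_on_path_failure
```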