Disk Path Monitoring

The current release of Oracle Solaris Cluster software supports disk path monitoring (DPM). This section provides conceptual information about DPM, the DPM daemon, and administration tools that you use to monitor disk paths. Refer to Oracle Solaris Cluster System Administration Guide for procedural information about how to monitor, unmonitor, and check the status of disk paths.

DPM Overview

DPM improves the overall reliability of failover and switchover by monitoring secondary disk path availability. Use the cldevice command to verify the availability of the disk path that is used by a resource before the resource is switched. Options that are provided with the cldevice command enable you to monitor disk paths to a single node or to all nodes in the cluster. See the cldevice(1CL) man page for more information about command-line options.
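
For example, the following commands (an illustrative sketch; the node name schost-1 and the DID device d1 are placeholders) check disk path status for a single node and for all nodes:

    # Show the status of the monitored disk paths on node schost-1 only.
    cldevice status -n schost-1

    # Show the status of disk path d1 as seen from every node in the cluster.
    cldevice status d1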

The following table describes the default installation location of each DPM component.

Component                                   Location
Daemon                                      /usr/cluster/lib/sc/scdpmd
Command-line interface                      /usr/cluster/bin/cldevice
Daemon status file (created at runtime)     /var/run/cluster/scdpm.status

A multithreaded DPM daemon runs on each node. The DPM daemon (scdpmd) is started by an rc.d script when a node boots. If a problem occurs, the daemon is managed by pmfd and restarts automatically.
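
To confirm on a node that the daemon has started and created its status file, a generic check such as the following is enough (nothing here is specific to DPM beyond the process and file names given above):

    # Verify that the multithreaded DPM daemon is running.
    pgrep -lx scdpmd

    # The daemon status file is created at runtime.
    ls -l /var/run/cluster/scdpm.status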


Note - At startup, the status for each disk path is initialized to UNKNOWN.


The following list describes how the scdpmd works on initial startup:

  1. The DPM daemon gathers disk path and node name information from the previous status file or from the CCR database. See Cluster Configuration Repository (CCR) for more information about the CCR. After a DPM daemon is started, you can force the daemon to read the list of monitored disks from a specified file name, as shown in the sketch after this list.

  2. The DPM daemon initializes the communication interface to respond to requests from components that are external to the daemon, such as the command-line interface.

  3. The DPM daemon pings each disk path in the monitored list every 10 minutes by using scsi_inquiry commands. Each entry is locked to prevent the communication interface from accessing the content of an entry that is being modified.

  4. The DPM daemon notifies the Oracle Solaris Cluster Event Framework and logs the new status of the path through the UNIX syslogd command. See the syslogd(1M) man page.
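
A file-based monitoring pass, as mentioned in step 1, might look like the following sketch (myconfig is a placeholder file name; see the cldevice(1CL) man page for the exact export and import options):

    # Export the current device configuration to a file.
    cldevice export -o myconfig

    # After editing myconfig to list only the disk paths of interest,
    # have DPM monitor the disk paths that the file specifies.
    cldevice monitor -i myconfig

    # Verify the new monitoring state.
    cldevice status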


Note - All errors that are related to the daemon are reported by pmfd. All the functions from the API return 0 on success and -1 for any failure.


The DPM daemon monitors the availability of the logical path that is visible through multipath drivers such as Oracle Solaris I/O multipathing (formerly named Sun StorEdge Traffic Manager) and EMC PowerPath. The individual physical paths that are managed by these drivers are not monitored because the multipath driver masks individual failures from the DPM daemon.

Monitoring Disk Paths

This section describes two methods for monitoring disk paths in your cluster. The first method uses the cldevice command to monitor, unmonitor, or display the status of disk paths in your cluster. You can also use this command to print a list of faulted disks and to monitor disk paths from a file. See the cldevice(1CL) man page.
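
For example, the following commands (using the sample names from Table 3-3) monitor, unmonitor, and report on disk paths:

    # Begin monitoring disk path d1 on every node.
    cldevice monitor all:d1

    # Stop monitoring the path to d1 from node schost-1 only.
    cldevice unmonitor -n schost-1 d1

    # Print only the disk paths that are currently faulted.
    cldevice status -s fail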

The second method for monitoring disk paths in your cluster is provided by the Oracle Solaris Cluster Manager graphical user interface (GUI). Oracle Solaris Cluster Manager provides a topological view of the monitored disk paths in your cluster. The view is updated every 10 minutes to provide information about the number of failed pings. Use the information that is provided by the Oracle Solaris Cluster Manager GUI in conjunction with the cldevice command to administer disk paths. See Chapter 13, Administering Oracle Solaris Cluster With the Graphical User Interfaces, in Oracle Solaris Cluster System Administration Guide for information about Oracle Solaris Cluster Manager.

Using the cldevice Command to Monitor and Administer Disk Paths

The cldevice command enables you to perform the following tasks:

  - Monitor a new disk path

  - Unmonitor a disk path

  - Display the status of disk paths on a single node or on all nodes in the cluster

  - Print a list of faulted disk paths

  - Monitor disk paths from a file


Note - Always specify a global disk path name rather than a UNIX disk path name because a global disk path name is consistent throughout a cluster. A UNIX disk path name is not. For example, the disk path name can be c1t0d0 on one node and c2t0d0 on another node. To determine a global disk path name for a device that is connected to a node, use the cldevice list command before issuing DPM commands. See the cldevice(1CL) man page.
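
For example, to list the DID instances and the full local device paths that map to them on each node, as the first step before monitoring a path:

    # Map local names such as c1t0d0 to global DID names such as d1.
    cldevice list -v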


Table 3-3 Sample Disk Path Names

Name Type          Sample Disk Path Name          Description
Global disk path   schost-1:/dev/did/dsk/d1       Disk path d1 on the schost-1 node
                   all:d1                         Disk path d1 on all nodes in the cluster
UNIX disk path     schost-1:/dev/rdsk/c0t0d0s0    Disk path c0t0d0s0 on the schost-1 node
                   schost-1:all                   All disk paths on the schost-1 node
All disk paths     all:all                        All disk paths on all nodes of the cluster

Using Oracle Solaris Cluster Manager to Monitor Disk Paths

Oracle Solaris Cluster Manager enables you to perform basic DPM administration tasks, such as monitoring a disk path, unmonitoring a disk path, and viewing the status of all monitored disk paths in the cluster. The Oracle Solaris Cluster Manager online help provides procedural information about how to administer disk paths.

Using the clnode set Command to Manage Disk Path Failure

You use the clnode set command to enable and disable the automatic rebooting of a node when all monitored shared-disk paths fail. When you enable the reboot_on_path_failure property, the states of local-disk paths are not considered when determining whether a node reboot is necessary. Only monitored shared disks are affected. You can also use Oracle Solaris Cluster Manager to perform these tasks.
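
For example, the following commands (a minimal sketch; the plus sign applies the setting to all nodes) enable automatic reboot on path failure and then verify the setting:

    # Reboot a node automatically when all of its monitored
    # shared-disk paths fail.
    clnode set -p reboot_on_path_failure=enabled +

    # Verify the property value.
    clnode show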