Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

Tuning the Sun Cluster Support for Oracle RAC Fault Monitors

Fault monitoring for the Sun Cluster Support for Oracle RAC data service is provided by fault monitors for the following resources: scalable device groups, scalable file-system mount points, the Oracle 9i RAC server, and the Oracle 9i RAC listener.

Each fault monitor is contained in a resource whose resource type is shown in the following table.

Table 4–4 Resource Types for Sun Cluster Support for Oracle RAC Fault Monitors

Fault Monitor                           Resource Type
Scalable device group                   SUNW.ScalDeviceGroup
Scalable file-system mount point        SUNW.ScalMountPoint
Oracle 9i RAC server                    SUNW.scalable_rac_server
Oracle 9i RAC listener                  SUNW.scalable_rac_listener

System properties and extension properties of these resources control the behavior of the fault monitors. The default values of these properties determine the preset behavior of the fault monitors. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster Support for Oracle RAC fault monitors only if you need to modify this preset behavior.

Tuning the Sun Cluster Support for Oracle RAC fault monitors involves the following tasks:

For more information, see Tuning Fault Monitors for Sun Cluster Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS. Information about the Sun Cluster Support for Oracle RAC fault monitors that you need to perform these tasks is provided in the subsections that follow.

Operation of the Fault Monitor for a Scalable Device Group

By default, the fault monitor monitors all logical volumes in the device group that the resource represents. If you require only a subset of the logical volumes in a device group to be monitored, set the LogicalDeviceList extension property.

The status of the device group is derived from the statuses of the individual logical volumes that are monitored. If all monitored logical volumes are healthy, the device group is healthy. If any monitored logical volume is faulty, the device group is faulty. If a device group is discovered to be faulty, monitoring of the resource that represents the group is stopped and the resource is put into the disabled state.

The status of an individual logical volume is obtained by querying the volume's volume manager. If the status of a Solaris Volume Manager for Sun Cluster volume cannot be determined from a query, the fault monitor performs file input/output (I/O) operations to determine the status.
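
The aggregation that the preceding paragraphs describe can be pictured with the following minimal Python sketch. It is purely illustrative: the fault monitor is not implemented in Python, and the helper inputs and the "Okay" status label are assumptions, not actual volume-manager output.

    # Illustrative sketch only: the status labels and helper inputs are
    # assumptions, not real Sun Cluster or Solaris Volume Manager interfaces.

    def volume_is_healthy(volume_status, io_probe_ok):
        """volume_status: status reported by the volume manager, or None if the
        query could not determine a status; io_probe_ok: result of the fallback
        file I/O probe."""
        if volume_status is None:
            return io_probe_ok           # query failed: fall back to file I/O
        return volume_status == "Okay"   # "Okay" is an assumed healthy label

    def device_group_is_healthy(volume_states):
        # The device group is healthy only if every monitored volume is healthy;
        # a single faulty volume makes the whole group faulty.
        return all(volume_is_healthy(s, io) for s, io in volume_states)

    # Example: two volumes report "Okay"; one query fails but its I/O probe succeeds.
    print(device_group_is_healthy([("Okay", True), ("Okay", True), (None, True)]))  # True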


Note –

For mirrored disks, if one submirror is faulty, the device group is still considered to be healthy.


Because a reconfiguration of userland cluster membership might cause I/O errors, fault monitoring of device group resources is suspended while userland cluster membership monitor (UCMM) reconfigurations are in progress.

Operation of the Fault Monitor for Scalable File-System Mount Points

To determine if the mounted file system is available, the fault monitor performs I/O operations such as opening, reading, and writing to a test file on the file system. If an I/O operation is not completed within the timeout period, the fault monitor reports an error. To specify the timeout for I/O operations, set the IOTimeout extension property.
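
A minimal sketch of such a probe follows. It assumes a hypothetical test-file name and a thread-based timeout; it illustrates the role of the IOTimeout value but is not the monitor's actual implementation.

    # Sketch of an I/O probe bounded by a timeout, in the spirit of IOTimeout.
    # The test-file name and the thread-based timeout are assumptions.
    import concurrent.futures
    import os

    def exercise_test_file(path):
        with open(path, "w") as f:       # open and write a test file
            f.write("probe\n")
        with open(path, "r") as f:       # read the test file back
            f.read()
        os.remove(path)

    def probe_mount_point(mount_point, io_timeout):
        test_file = os.path.join(mount_point, ".probe_testfile")  # assumed name
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(exercise_test_file, test_file)
        try:
            future.result(timeout=io_timeout)
            ok = True                    # I/O completed within the timeout
        except concurrent.futures.TimeoutError:
            ok = False                   # report an error: I/O did not complete in time
        except OSError:
            ok = False                   # report an error: the I/O operation failed
        pool.shutdown(wait=False)        # do not block on a hung I/O thread
        return ok

    print(probe_mount_point("/tmp", io_timeout=30))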

The response to an error depends on the type of the file system, as follows:

Operation of the Oracle 9i RAC Server Fault Monitor

The fault monitor for the Oracle 9i RAC server queries the health of the server by sending a request to the server.

The server fault monitor is started through pmfadm to make the monitor highly available. If the monitor is killed for any reason, the Process Monitor Facility (PMF) automatically restarts the monitor.

The server fault monitor consists of the following processes: a main fault monitor and a database client fault probe.

Operation of the Main Fault Monitor

The main fault monitor determines that an operation is successful if the database is online and no errors are returned during the transaction.

Operation of the Database Client Fault Probe

The database client fault probe performs the following operations:

  1. Monitoring the partition for archived redo logs

  2. If the partition is healthy, determining whether the database is operational

The probe uses the timeout value that is set in the resource property Probe_timeout to determine how much time to allocate to successfully probe Oracle.

Operations to Monitor the Partition for Archived Redo Logs

The database client fault probe queries the dynamic performance view v$archive_dest to determine all possible destinations for archived redo logs. For every active destination, the probe determines whether the destination is healthy and has sufficient free space for storing archived redo logs.
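
The following sketch shows the kind of check this implies. It assumes the python-oracledb driver, a local file-system destination, and an arbitrary free-space threshold; the real probe is not written in Python, and its exact queries are not documented here.

    # Sketch of an archived-redo-log destination check. The driver, the
    # free-space threshold, and the status handling are assumptions.
    import os
    import oracledb

    MIN_FREE_BYTES = 100 * 1024 * 1024   # assumed "sufficient free space" threshold

    def archive_destinations_healthy(connection):
        cursor = connection.cursor()
        cursor.execute(
            "SELECT dest_name, destination, status "
            "FROM v$archive_dest WHERE status <> 'INACTIVE'"
        )
        for dest_name, destination, status in cursor:
            if status == 'ERROR':
                return False             # the destination itself reports a fault
            if destination and os.path.isdir(destination):
                st = os.statvfs(destination)                  # local destination only
                if st.f_bavail * st.f_frsize < MIN_FREE_BYTES:
                    return False         # not enough room for archived redo logs
        return True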

Operations to Determine Whether the Database is Operational

If the partition for archived redo logs is healthy, the database client fault probe queries the dynamic performance view v$sysstat to obtain database performance statistics. Changes to these statistics indicate that the database is operational. If these statistics remain unchanged between consecutive queries, the fault probe performs database transactions to determine if the database is operational. These transactions involve the creation, updating, and dropping of a table in the user table space.
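
As an illustration of this logic, the sketch below compares v$sysstat snapshots between consecutive probes and, when nothing has changed, runs a small create/insert/update/drop cycle. The driver, the temporary table name, and the exact SQL are assumptions; a failure in these statements is the kind of transaction failure that a later subsection describes.

    # Sketch only: the temporary table name and SQL are illustrative, not the
    # monitor's actual statements.
    import oracledb

    def snapshot_sysstat(connection):
        cursor = connection.cursor()
        cursor.execute("SELECT name, value FROM v$sysstat")
        return dict(cursor.fetchall())

    def database_is_operational(connection, previous_snapshot):
        current = snapshot_sysstat(connection)
        if previous_snapshot is not None and current != previous_snapshot:
            return True, current         # statistics changed: the database is working
        # Statistics unchanged: exercise the database with a small transaction
        # in the user table space (create, update, and drop a table).
        cursor = connection.cursor()
        cursor.execute("CREATE TABLE probe_tmp (c NUMBER)")   # assumed table name
        cursor.execute("INSERT INTO probe_tmp VALUES (1)")
        cursor.execute("UPDATE probe_tmp SET c = 2")
        connection.commit()
        cursor.execute("DROP TABLE probe_tmp")
        return True, current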

The database client fault probe performs all its transactions as the Oracle user. The ID of this user is specified during the preparation of the nodes or zones as explained in How to Create the DBA Group and the DBA User Accounts.

Actions by the Server Fault Monitor in Response to a Database Transaction Failure

If a database transaction fails, the server fault monitor performs an action that is determined by the error that caused the failure. To change the action that the server fault monitor performs, customize the server fault monitor as explained in Customizing the Oracle 9i RAC Server Fault Monitor.

If the action requires an external program to be run, the program is run as a separate process in the background.

Possible actions are as follows:

Scanning of Logged Alerts by the Server Fault Monitor

The Oracle software logs alerts in an alert log file. The absolute path of this file is specified by the alert_log_file extension property of the SUNW.scalable_rac_server resource. The server fault monitor scans the alert log file for new alerts at the following times:

If an action is defined for a logged alert that the server fault monitor detects, the server fault monitor performs the action in response to the alert.

Preset actions for logged alerts are listed in Table B–2. To change the action that the server fault monitor performs, customize the server fault monitor as explained in Customizing the Oracle 9i RAC Server Fault Monitor.
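
The following sketch illustrates incremental scanning of an alert log. The alert patterns and the action mapping are placeholders only; the preset alert and action pairs are those listed in Table B–2, and alert_log_file is the extension property described above.

    # Sketch of incremental alert-log scanning. The patterns and actions below
    # are placeholders; see Table B-2 for the preset alerts and actions.
    import re

    ALERT_ACTIONS = {
        r"ORA-00600": "restart",     # illustrative mapping only
        r"ORA-01157": "restart",
    }

    def scan_alert_log(alert_log_file, last_offset):
        """Read only the lines appended since the previous scan; return the
        (alert line, action) pairs that were found and the new file offset."""
        actions = []
        with open(alert_log_file, "r", errors="replace") as log:
            log.seek(last_offset)
            while True:
                line = log.readline()
                if not line:
                    break
                for pattern, action in ALERT_ACTIONS.items():
                    if re.search(pattern, line):
                        actions.append((line.rstrip(), action))
            new_offset = log.tell()
        return actions, new_offset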

Operation of the Oracle 9i RAC Listener Fault Monitor

The Oracle 9i RAC listener fault monitor checks the status of an Oracle listener.

If the listener is running, the Oracle 9i RAC listener fault monitor considers a probe successful. If the fault monitor detects an error, the listener is restarted.


Note –

The listener resource does not provide a mechanism for setting the listener password. If Oracle listener security is enabled, a probe by the listener fault monitor might return Oracle error TNS-01169. Because the listener is able to respond, the listener fault monitor treats the probe as a success. This behavior does not cause a failure of the listener to go undetected: a failure of the listener returns a different error or causes the probe to time out.


The listener probe is started through pmfadm to make the probe highly available. If the probe is killed, PMF automatically restarts the probe.

If a problem occurs with the listener during a probe, the probe tries to restart the listener. The value that is set for the resource property retry_count determines the maximum number of times that the probe attempts the restart. If, after trying for the maximum number of times, the probe is still unsuccessful, the probe stops the fault monitor.
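
The restart logic amounts to the bounded loop in the sketch below. The two helper functions are placeholders for the real checks, not actual Sun Cluster or Oracle interfaces, and retry_count is the resource property mentioned above.

    # Sketch of the retry logic described above; the helpers are placeholders.

    def listener_is_running():
        return False     # placeholder: the real probe checks the Oracle listener

    def restart_listener():
        pass             # placeholder: the real probe restarts the listener

    def handle_listener_failure(retry_count):
        """Try to restart the listener up to retry_count times; if it still is
        not running, give up and stop the fault monitor."""
        for _ in range(retry_count):
            restart_listener()
            if listener_is_running():
                return "restarted"
        return "stop fault monitor"

    print(handle_listener_failure(retry_count=2))   # "stop fault monitor" with these stubs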

Obtaining Core Files for Troubleshooting DBMS Timeouts

To facilitate troubleshooting of unexplained DBMS timeouts, you can enable the fault monitor to create a core file when a probe timeout occurs. The contents of the core file relate to the fault monitor process. The fault monitor creates the core file in the / directory. To enable the fault monitor to create a core file, use the coreadm command to enable set-id core dumps. For more information, see the coreadm(1M) man page.
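
For example, enabling set-id core dumps might look like the following sketch, which simply wraps the coreadm command (normally you would run it directly as root); the proc-setid option is described in the coreadm(1M) man page.

    # Sketch only: wraps the coreadm(1M) command; assumes root privileges.
    import subprocess

    # Enable per-process core dumps for set-id processes.
    subprocess.run(["coreadm", "-e", "proc-setid"], check=True)

    # Display the current core-file configuration for verification.
    subprocess.run(["coreadm"], check=True)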