Sun Cluster 2.2 Software Installation Guide

Cluster Configuration Database

The Cluster Configuration Database (CCD) is a highly available, replicated database used to store internal Sun Cluster configuration data. The CCD is for Sun Cluster internal use--it is not a public interface, and you should not attempt to update it directly.

The CCD relies on the Cluster Membership Monitor (CMM) service to determine the current cluster membership and its consistency domain, that is, the set of nodes that must hold a consistent copy of the database and that propagate updates. The CCD database is divided into an Initial (Init) database and a Dynamic database.

The Init CCD database stores non-modifiable boot configuration parameters whose values are set during the CCD package installation (scinstall). The Dynamic CCD contains the remaining database entries. Unlike the Init CCD, entries in the Dynamic CCD can be updated at any time, with the restrictions that the CCD database has been recovered (that is, the cluster is up) and the CCD has quorum. (See "CCD Operation" for the definition of quorum.)

The Init CCD (/etc/opt/SUNWcluster/conf/ccd.database.init) is also used to store data for components that are started before the CCD is up. This means that queries to the Init CCD can occur before the CCD database has recovered and its global consistency has been checked.

The Dynamic CCD (/etc/opt/SUNWcluster/conf/ccd.database) contains the remaining database entries. The CCD guarantees the consistent replication of the Dynamic CCD across all of the nodes of its consistency domain.

The CCD database is replicated on all the nodes to guarantee its availability in case of a node failure. CCD daemons establish communications among themselves to synchronize and serialize database operations within the CCD consistency domain. Database updates and query operations can be issued from any node--the CCD does not have a single point of control.

In addition, the CCD offers:

CCD Operation

The CCD guarantees a consistent replication of the database across all the nodes of the elected consistency domain. Only nodes that are found to have a valid copy of the CCD are allowed to be in the cluster. Consistency checks are performed at two levels: local and global. Locally, each replicated database copy has a self-contained consistency record that stores the checksum and length of the database. This consistency record validates the local database copy during an update or database recovery, and timestamps the last update of the database.

The CCD also performs a global consistency check to verify that every node has an identical copy of the database. The CCD daemons exchange and verify their consistency record. During a cluster restart, a quorum voting scheme is used for recovering the database. The recovery process determines how many nodes have a valid copy of the CCD (the local consistency is checked through the consistency record), and how many copies are identical (have the same checksum and length).
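The two-level check described above can be sketched as follows. This is an illustrative model only, not the actual CCD implementation: the record fields (checksum and length) come from the text, but the function names and the use of an MD5 digest are assumptions for the example.

```python
import hashlib

def consistency_record(db_bytes):
    """Local level: build a consistency record holding the checksum
    and length of this node's database copy (hypothetical sketch)."""
    return (hashlib.md5(db_bytes).hexdigest(), len(db_bytes))

def globally_consistent(records):
    """Global level: every node must report an identical record."""
    return len(set(records)) == 1

# Two nodes with identical copies, one node with a stale copy.
node_a = consistency_record(b"logicalhost:hahost1\n")
node_b = consistency_record(b"logicalhost:hahost1\n")
node_c = consistency_record(b"logicalhost:stale\n")

print(globally_consistent([node_a, node_b]))          # identical copies
print(globally_consistent([node_a, node_b, node_c]))  # one copy differs
```

Comparing only (checksum, length) pairs is cheap: the daemons exchange small records rather than whole database copies.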

A quorum majority (when more than half the nodes are up) must be found within the default consistency domain to guarantee that the CCD copy is current.


Note -

A quorum majority is required to perform updates to the CCD.


The equation Q = [Na/2] + 1 specifies the number of nodes required to perform updates to the CCD, where Na is the number of nodes physically present in the cluster and [Na/2] is Na/2 rounded down. Nodes count toward Na even when they are physically present but not running the cluster software.
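The formula can be checked with a short worked example (the function name is illustrative, not part of the product):

```python
def ccd_quorum(na):
    """Nodes required to update the CCD: Q = [Na/2] + 1,
    using integer (floor) division for [Na/2]."""
    return na // 2 + 1

for na in (2, 3, 4):
    print(na, ccd_quorum(na))
# A two-node cluster needs both nodes, a three-node cluster
# needs two, and a four-node cluster needs three.
```

Note that for a two-node cluster the formula requires both nodes, which is why the shared-CCD arrangement described below is needed to keep quorum with a single node up.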

In the case of a two-node cluster with VERITAS Volume Manager, quorum can be maintained with only one node up by using a shared CCD volume. In a shared-CCD configuration, one copy of the CCD is kept on the local disk of each node and another copy is kept in a special disk group that can be shared between the nodes. In normal operation, only the copies on the local disks are used, but if one node fails, the shared CCD is used to maintain CCD quorum with only one node in the cluster. When the failed node rejoins the cluster, it is updated with the current copy of the shared CCD. Refer to Chapter 3, Installing and Configuring Sun Cluster Software, for details on setting up a shared CCD volume in a two-node cluster.

If one node stays up, its valid CCD can be propagated to the newly joining nodes. The CCD recovery algorithm guarantees that the CCD database is brought up only if a valid copy is found and is correctly replicated on all the nodes. If the recovery fails, you must intervene and decide which CCD copy is the valid one. The elected copy can then be used to restore the database via the ccdadm -r command. See the Sun Cluster 2.2 System Administration Guide for the procedures used to administer the CCD.
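The recovery decision described above can be sketched as a simple vote. This is a hypothetical model of the behavior the text describes, not the actual recovery algorithm: it counts how many locally valid copies are identical and succeeds only when a quorum majority of the physically present nodes agree.

```python
from collections import Counter

def recover(records, na):
    """records: consistency records (checksum, length) from the
    locally valid copies; na: nodes physically present.
    Return the winning record, or None if no copy is held by a
    quorum majority (manual intervention is then required)."""
    quorum = na // 2 + 1                       # Q = [Na/2] + 1
    record, count = Counter(records).most_common(1)[0]
    return record if count >= quorum else None

good = ("abc123", 512)    # illustrative (checksum, length) records
stale = ("def456", 480)

print(recover([good, good, stale], na=3))  # majority agrees: recovered
print(recover([good, stale], na=2))        # split vote: recovery fails
```

When the vote fails (the second call), the real system likewise stops and leaves the choice of the valid copy to the administrator, who restores it with ccdadm -r.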


Note -

The CCD provides a backup facility, ccdadm(1M), to checkpoint the current content of the database. The backup copy can subsequently be used to restore the database. Refer to the ccdadm(1M) man page for details.