Solaris Volume Manager Administration Guide

Chapter 6 State Database (Overview)

This chapter provides conceptual information about state database replicas. For information about performing related tasks, see Chapter 7, State Database (Tasks).

This chapter contains the following information:

- About the Solaris Volume Manager State Database and Replicas
- Understanding the Majority Consensus Algorithm
- Administering State Database Replicas
- Scenario—State Database Replicas

About the Solaris Volume Manager State Database and Replicas

The Solaris Volume Manager state database contains configuration and status information for all volumes, hot spares, and disk sets. Solaris Volume Manager maintains multiple copies (replicas) of the state database to provide redundancy and to prevent the database from being corrupted during a system crash (at most, only one database copy will be corrupted).

The state database replicas ensure that the data in the state database is always valid. When the state database is updated, each state database replica is also updated. The updates occur one at a time (to protect against corrupting all updates if the system crashes).

If your system loses a state database replica, Solaris Volume Manager must figure out which state database replicas still contain valid data. Solaris Volume Manager determines this information by using a majority consensus algorithm. This algorithm requires that a majority (half + 1) of the state database replicas be available and in agreement before any of them are considered valid. Because of the requirements of the majority consensus algorithm, you must create at least three state database replicas when you set up your disk configuration. A consensus can be reached as long as at least two of the three state database replicas are available.

During booting, Solaris Volume Manager ignores corrupted state database replicas. In some cases, Solaris Volume Manager tries to rewrite state database replicas that are corrupted. Otherwise, they are ignored until you repair them. If a state database replica becomes corrupted because its underlying slice encountered an error, you need to repair or replace the slice and then enable the replica.


Caution –

Do not place state database replicas on fabric-attached storage, SANs, or other storage that is not directly attached to the system. You might not be able to boot Solaris Volume Manager. Replicas must be on storage devices that are available at the same point in the boot process as traditional SCSI or IDE drives.


If all state database replicas are lost, you could, in theory, lose all data that is stored on your Solaris Volume Manager volumes. For this reason, it is good practice to create enough state database replicas on separate drives and across controllers to prevent catastrophic failure. It is also wise to save your initial Solaris Volume Manager configuration information, as well as your disk partition information.

See Chapter 7, State Database (Tasks) for information on adding additional state database replicas to the system. See Recovering From State Database Replica Failures for information on recovering when state database replicas are lost.

State database replicas are also used for RAID-1 volume resynchronization regions. Too few state database replicas relative to the number of mirrors might cause replica I/O to impact RAID-1 volume performance. If you have a large number of mirrors, make sure that you have at least two state database replicas per RAID-1 volume, up to the maximum of 50 replicas per disk set.

By default, each state database replica for volumes in the local set and for disk sets occupies 4 Mbytes (8192 disk sectors) of disk storage. The default size of a state database replica for a multi-owner disk set is 16 Mbytes.
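If the default size does not suit your configuration, you can specify the replica size when you create the replica. The following sketch uses the metadb command's -l option, which sets the length of each replica in blocks; the slice name c1t0d0s7 is a placeholder, not a slice from the scenario in this guide.

    # Add two replicas of 8192 blocks each; the slice name is a placeholder.
    metadb -a -l 8192 -c 2 c1t0d0s7

Run the command as root. The -c option sets the number of replicas to add to the slice, and -l sets the size of each replica in blocks.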

Replicas can be stored on the following devices:

- A dedicated local disk partition
- A local partition that will be part of a volume
- A local partition that will be part of a UFS logging device

Replicas cannot be stored on the root (/), swap, or /usr slices. Nor can replicas be stored on slices that contain existing file systems or data. After the replicas have been stored, volumes or file systems can be placed on the same slice.
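For example, a minimal sketch of creating the initial replicas on three separate slices follows. The slice names are placeholders; substitute slices from your own configuration.

    # Create the first replicas; -f forces creation when no state database exists yet.
    # The slice names below are examples only.
    metadb -a -f c0t0d0s7 c1t0d0s7 c2t0d0s7
    # Verify the result by listing the status of all replicas.
    metadb -i

Run these commands as root. After the first replicas exist, subsequent metadb -a commands do not need the -f option.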

Understanding the Majority Consensus Algorithm

An inherent problem with replicated databases is that it can be difficult to determine which database has valid and correct data. To solve this problem, Solaris Volume Manager uses a majority consensus algorithm. This algorithm requires that a majority of the database replicas agree with each other before any of them are declared valid. This algorithm requires the presence of at least three initial replicas, which you create. A consensus can then be reached as long as at least two of the three replicas are available. If only one replica exists and the system crashes, it is possible that all volume configuration data will be lost.

To protect data, Solaris Volume Manager does not function unless half of all state database replicas are available. The algorithm, therefore, ensures against corrupt data.

The majority consensus algorithm provides the following:

- The system stays running if at least half of the state database replicas are available.
- The system panics if fewer than half of the state database replicas are available.
- The system cannot reboot into multiuser mode unless a majority (half + 1) of the total number of state database replicas is available.

If insufficient state database replicas are available, you must boot into single-user mode and delete enough of the corrupted or missing replicas to achieve a quorum. See How to Recover From Insufficient State Database Replicas.
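A sketch of that recovery follows, with a placeholder slice name standing in for whichever slice actually holds the failed replicas:

    # From single-user mode, list replica status to identify replicas with errors.
    metadb -i
    # Delete all replicas on the failed slice (placeholder name shown).
    metadb -d c1t1d0s7

Once the remaining replicas form a majority of the new total, the system can be rebooted into multiuser mode.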


Note –

When the total number of state database replicas is an odd number, Solaris Volume Manager computes the majority by dividing the number in half, rounding down to the nearest integer, then adding 1 (one). For example, on a system with seven replicas, the majority would be four (seven divided by two is three and one-half, rounded down is three, plus one is four).


Administering State Database Replicas

Handling State Database Replica Errors

If a state database replica fails, the system continues to run if at least half of the remaining replicas are available. The system panics when fewer than half of the replicas are available.

The system can reboot into multiuser mode when at least one more than half of the replicas are available. If fewer than a majority of replicas are available, you must reboot into single-user mode and delete the unavailable replicas (by using the metadb command).

For example, assume you have four replicas. The system continues to run as long as two replicas (half the total number) are available. However, to reboot the system, three replicas (half the total + 1) must be available.

In a two-disk configuration, you should always create at least two replicas on each disk. For example, assume you have a configuration with two disks, and you only create three replicas (two replicas on the first disk and one replica on the second disk). If the disk with two replicas fails, the system panics because the remaining disk only has one replica. This is less than half the total number of replicas.
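A sketch of the recommended two-disk layout follows, assuming slice 7 of each disk is reserved for replicas; the disk names are placeholders.

    # Place two replicas on each of the two disks, four replicas in total.
    metadb -a -f -c 2 c0t0d0s7 c0t1d0s7

With this layout, the loss of either disk still leaves half of the replicas available, so the system keeps running (but see the note that follows).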


Note –

If you create two replicas on each disk in a two-disk configuration, Solaris Volume Manager still functions if one disk fails. But because you must have one more than half of the total replicas available for the system to reboot, you cannot reboot into multiuser mode until you delete the replicas on the failed disk or repair the disk.


If a slice that contains a state database replica fails, the rest of your configuration should remain in operation. Solaris Volume Manager finds a valid state database during boot (as long as at least half + 1 valid state database replicas are available).

When you manually repair or enable state database replicas, Solaris Volume Manager updates them with valid data.

Scenario—State Database Replicas

State database replicas provide redundant data about the overall Solaris Volume Manager configuration. The following example is based on the sample system in the scenario provided in Chapter 5, Configuring and Using Solaris Volume Manager (Scenario). This example describes how state database replicas can be distributed to provide adequate redundancy.

The sample system has one internal IDE controller and drive, plus two SCSI controllers. Each SCSI controller has six disks attached. With three controllers, the system can be configured to avoid any single point-of-failure. Any system with only two controllers cannot avoid a single point-of-failure relative to Solaris Volume Manager. By distributing replicas evenly across all three controllers and across at least one disk on each controller (across two disks, if possible), the system can withstand any single hardware failure.

In a minimal configuration, you could put a single state database replica on slice 7 of the root disk, then an additional replica on slice 7 of one disk on each of the other two controllers. To help protect against the admittedly remote possibility of media failure, adding another replica to the root disk and then two replicas on two different disks on each controller, for a total of six replicas, provides more than adequate security.
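As an illustrative sketch only (the device names are assumptions, not taken from the scenario), the minimal configuration might be created as follows, with c0t0d0 as the root disk and c1t3d0 and c2t3d0 as one disk on each SCSI controller:

    # One replica on slice 7 of the root disk; -f is needed for the first replica.
    metadb -a -f c0t0d0s7
    # One replica on slice 7 of one disk on each of the other two controllers.
    metadb -a c1t3d0s7 c2t3d0s7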

To provide even more security, add 12 additional replicas spread evenly across the 6 disks on each side of the two mirrors. This configuration results in a total of 18 replicas with 2 on the root disk and 8 on each of the SCSI controllers, distributed across the disks on each controller.