Solaris Volume Manager Administration Guide


Handling State Database Replica Errors

If a state database replica fails, the system continues to run if at least half of the state database replicas are available. The system panics when fewer than half of the replicas are available.
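
You can check how many replicas you have, and whether any are in error, with the metadb command; the -i option also prints a legend that explains the status flags:

# metadb -i

In the output, lowercase status flags indicate a healthy replica, while uppercase flags (for example, W for write errors) indicate a replica that has failed.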

The system can reboot into multiuser mode when at least one more than half of the replicas are available. If fewer than a majority of replicas are available, you must reboot into single-user mode and delete the unavailable replicas (by using the metadb command), as shown in the sketch below.
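
A minimal sketch of that recovery, assuming the unavailable replicas live on a slice named c1t1d0s4 (a placeholder; substitute your own device) and a SPARC system booted from the OpenBoot prompt:

ok boot -s
# metadb -i
# metadb -d c1t1d0s4
# reboot

The metadb -i command identifies the replicas that are marked in error, metadb -d deletes all replicas on the failed slice, and the reboot then succeeds because the remaining replicas form a majority of the new total.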

For example, assume you have four replicas. The system continues to run as long as two replicas (half the total number) are available. However, to reboot the system, three replicas (half the total + 1) must be available.

In a two-disk configuration, you should always create at least two replicas on each disk. For example, assume you have a configuration with two disks, and you only create three replicas (two replicas on the first disk and one replica on the second disk). If the disk with two replicas fails, the system panics because the remaining disk only has one replica. This is less than half the total number of replicas.
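
To set up the recommended layout on a new two-disk configuration, you can create two replicas on each disk with a single metadb command. The slice names here are placeholders for dedicated slices on your own disks:

# metadb -a -f -c 2 c0t0d0s7 c0t1d0s7

The -a option adds replicas, -c 2 creates two replicas on each slice that is listed, and -f forces creation of the initial state database.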


Note - If you create two replicas on each disk in a two-disk configuration, Solaris Volume Manager still functions if one disk fails. However, because one more than half of the total replicas must be available for the system to reboot into multiuser mode, you cannot reboot until you delete the replicas on the failed disk.


If a slice that contains a state database replica fails, the rest of your configuration should remain in operation. Solaris Volume Manager finds a valid state database during boot (as long as at least half + 1 valid state database replicas are available).

When you manually repair or enable state database replicas, Solaris Volume Manager updates them with valid data.
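
For example, after you replace a failed disk and re-create its slices, one common sequence (again with placeholder device names) is to delete the errored replicas and then add new ones in their place:

# metadb -d c0t1d0s7
# metadb -a -c 2 c0t1d0s7

Solaris Volume Manager writes current, valid state database data to the newly added replicas.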