Solaris Volume Manager Administration Guide

Chapter 10 RAID 1 (Mirror) Volumes (Overview)

This chapter explains essential Solaris Volume Manager concepts related to mirrors and submirrors. For information about performing related tasks, see Chapter 11, RAID 1 (Mirror) Volumes (Tasks).

This chapter contains the following information:

Overview of RAID 1 (Mirror) Volumes

RAID 1 Volume (Mirror) Resynchronization

Background Information for RAID 1 Volumes

How Booting Into Single-User Mode Affects RAID 1 Volumes

Scenario—RAID 1 Volumes (Mirrors)

Overview of RAID 1 (Mirror) Volumes

A RAID 1 volume, or mirror, is a volume that maintains identical copies of the data in RAID 0 (stripe or concatenation) volumes. Mirroring requires an investment in disks. You need at least twice as much disk space as the amount of data you have to mirror. Because Solaris Volume Manager must write to all submirrors, mirroring can also increase the amount of time it takes for write requests to be written to disk.

After you configure a mirror, it can be used just as if it were a physical slice.
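For example, once a mirror such as d2 exists, you can create and mount a UFS file system on it just as you would on a slice. This is a minimal sketch; the volume name d2 and the mount point /export are illustrative:

# newfs /dev/md/rdsk/d2
# mount /dev/md/dsk/d2 /export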

You can mirror any file system, including existing file systems. You can also use a mirror for any application, such as a database.


Tip –

Use Solaris Volume Manager's hot spare feature with mirrors to keep data safe and available. For information on hot spares, see Chapter 16, Hot Spare Pools (Overview) and Chapter 17, Hot Spare Pools (Tasks).


If you have no existing data that you are mirroring and you are comfortable destroying all data on all submirrors, you can speed the creation process by creating all submirrors with a single command.
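For example, the following sketch creates two single-slice submirrors and then creates the mirror from both submirrors in one metainit command. The device names are illustrative, and because no resynchronization is performed in this case, the slices must contain no data that you need:

# metainit d21 1 1 c0t1d0s0
# metainit d22 1 1 c1t1d0s0
# metainit d2 -m d21 d22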

Overview of Submirrors

The RAID 0 volumes that are mirrored are called submirrors. A mirror is made of one or more RAID 0 volumes (stripes or concatenations).

A mirror can consist of up to four submirrors. In practice, a two-way mirror is usually sufficient. A third submirror enables you to make online backups without losing data redundancy while one submirror is offline for the backup.

If you take a submirror “offline,” the mirror stops reading and writing to the submirror. At this point, you could access the submirror itself, for example, to perform a backup. However, the submirror is in a read-only state. While a submirror is offline, Solaris Volume Manager keeps track of all writes to the mirror. When the submirror is brought back online, only the portions of the mirror that were written while the submirror was offline (resynchronization regions) are resynchronized. Submirrors can also be taken offline to troubleshoot or repair physical devices which have errors.
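For example, the following sketch takes submirror d22 of mirror d2 offline, backs it up with ufsdump from its raw device, and brings it back online. The volume names and the tape device /dev/rmt/0 are illustrative; only the regions written while d22 was offline are resynchronized when it returns:

# metaoffline d2 d22
# ufsdump 0f /dev/rmt/0 /dev/md/rdsk/d22
# metaonline d2 d22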

A submirror can be attached to or detached from a mirror at any time, though at least one submirror must remain attached at all times.

Normally, you create a mirror with only a single submirror. Then, you attach a second submirror after you create the mirror.
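A sketch of this usual sequence, assuming the submirror volumes d21 and d22 have already been created (as in the earlier example) and that d21 holds any data to be preserved. The metattach step triggers a full resynchronization from d21 to the new submirror:

# metainit d2 -m d21
# metattach d2 d22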

Scenario—RAID 1 (Mirror) Volume

Figure 10–1 illustrates a mirror, d2, that is made of two volumes (submirrors) d21 and d22.

Solaris Volume Manager software makes duplicate copies of the data on multiple physical disks, and presents one virtual disk to the application. All disk writes are duplicated; disk reads come from one of the underlying submirrors. If the submirrors are not of equal size, the total capacity of mirror d2 is the size of the smallest submirror.

Figure 10–1 RAID 1 (Mirror) Example

Diagram shows how two RAID 0 volumes are used together as a RAID 1 (mirror) volume to provide redundant storage.

Providing RAID 1+0 and RAID 0+1

Solaris Volume Manager supports both RAID 1+0 (which is like having mirrors that are then striped) and RAID 0+1 (stripes that are then mirrored) redundancy, depending on the context. The Solaris Volume Manager interface makes it appear that all RAID 1 devices are strictly RAID 0+1, but Solaris Volume Manager recognizes the underlying components and mirrors each individually, when possible.


Note –

Solaris Volume Manager cannot always provide RAID 1+0 functionality. However, in a best practices environment, where both submirrors are identical and are made up of disk slices (and not soft partitions), RAID 1+0 is possible.


For example, with a pure RAID 0+1 implementation and a two-way mirror that consists of three striped slices, a single slice failure could fail one side of the mirror. Assuming that no hot spares were in use, a second slice failure would then fail the entire mirror. Using Solaris Volume Manager, up to three slices could potentially fail without failing the mirror, because each of the three striped slices is individually mirrored to its counterpart on the other half of the mirror.

Consider this example:

Figure 10–2 RAID 1+0 Example

Diagram shows how three of six total slices in a RAID 1 volume can potentially fail without data loss because of the RAID 1+0 implementation.

Mirror d1 consists of two submirrors, each of which is built from three identical physical disks that use the same interlace value. A failure of three disks (A, B, and F) can be tolerated, because the entire logical block range of the mirror is still contained on at least one good disk.

If, however, disks A and D fail, a portion of the mirror's data is no longer available on any disk and access to these logical blocks will fail.

When a portion of a mirror's data is unavailable due to multiple slice errors, access to the portions of the mirror where data is still available will succeed. In this situation, the mirror acts like a single disk that has developed bad blocks. The damaged portions are unavailable, but the rest is available.

Configuration Guidelines for RAID 1 Volumes


Note –

If you have a mirrored file system in which the first submirror attached does not start on cylinder 0, all additional submirrors you attach must also not start on cylinder 0. If you attempt to attach a submirror starting on cylinder 0 to a mirror in which the original submirror does not start on cylinder 0, the following error message displays:


can't attach labeled submirror to an unlabeled mirror 

You must ensure that either all of the submirrors intended for use within a specific mirror start on cylinder 0, or that none of them do. The starting cylinders do not otherwise have to be identical across submirrors, but every submirror must consistently include or exclude cylinder 0.


RAID 1 Volume Options

The following options are available to optimize mirror performance:

Mirror read policy

Mirror write policy

The order in which mirrors are resynchronized during reboot (pass number)

You can define mirror options when you initially create the mirror, or after a mirror has been set up. For tasks related to changing these options, see How to Change RAID 1 Volume Options.

RAID 1 Volume Read and Write Policies

Solaris Volume Manager enables different read and write policies to be configured for a RAID 1 volume. Properly set read and write policies can improve performance for a given configuration.

Table 10–1 RAID 1 Volume Read Policies

Round Robin (Default)

Attempts to balance the load across the submirrors. All reads are made in a round-robin order (one after another) from all submirrors in a mirror.

Geometric

Enables reads to be divided among submirrors on the basis of a logical disk block address. For instance, with a two-way mirror, the disk space on the mirror is divided into two equally sized logical address ranges. Reads from one submirror are restricted to one half of the logical range, and reads from the other submirror are restricted to the other half. The geometric read policy effectively reduces the seek time necessary for reads. The performance gained by this mode depends on the system I/O load and the access patterns of the applications.

First

Directs all reads to the first submirror. This policy should be used only when the device or devices that comprise the first submirror are substantially faster than those of the second submirror.

Table 10–2 RAID 1 Volume Write Policies

Parallel (Default)

A write to a mirror is replicated and dispatched to all of the submirrors simultaneously.

Serial

Writes to the submirrors are performed serially: the write to one submirror must complete before the write to the next submirror is initiated. The serial option is provided in case a submirror becomes unreadable, for example, due to a power failure.
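For example, to switch an existing mirror d2 (an illustrative name) to the geometric read policy and the serial write policy, and then verify the settings, you might run:

# metaparam -r geometric d2
# metaparam -w serial d2
# metastat d2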

RAID 1 Volume (Mirror) Resynchronization

RAID 1 volume (mirror) resynchronization is the process of copying data from one submirror to another after a submirror failure or system crash, when a submirror has been taken offline and brought back online, or when a new submirror is added.

While the resynchronization takes place, the mirror remains readable and writable by users.

A mirror resynchronization ensures proper mirror operation by maintaining all submirrors with identical data, with the exception of writes in progress.


Note –

A mirror resynchronization is mandatory, and cannot be omitted. You do not need to manually initiate a mirror resynchronization. This process occurs automatically.


Full Resynchronization

When a new submirror is attached (added) to a mirror, all the data from another submirror in the mirror is automatically written to the newly attached submirror. Once the mirror resynchronization is done, the new submirror is readable. A submirror remains attached to a mirror until it is explicitly detached.

If the system crashes while a resynchronization is in progress, the resynchronization is restarted when the system finishes rebooting.

Optimized Resynchronization

During a reboot following a system failure, or when a submirror that was offline is brought back online, Solaris Volume Manager performs an optimized mirror resynchronization. The metadisk driver tracks submirror regions and knows which submirror regions might be out-of-sync after a failure. An optimized mirror resynchronization is performed only on the out-of-sync regions. You can specify the order in which mirrors are resynchronized during reboot, and you can omit a mirror resynchronization by setting submirror pass numbers to 0 (zero). (See Pass Number for information.)


Caution –

A pass number of 0 (zero) should only be used on mirrors that are mounted as read-only.


Partial Resynchronization

Following a replacement of a slice within a submirror, Solaris Volume Manager performs a partial mirror resynchronization of data. Solaris Volume Manager copies the data from the remaining good slices of another submirror to the replaced slice.

Pass Number

The pass number, a number in the range 0–9, determines the order in which a particular mirror is resynchronized during a system reboot. The default pass number is 1. Smaller pass numbers are resynchronized first. If 0 is used, the mirror resynchronization is skipped. A pass number of 0 should be used only for mirrors that are mounted as read-only. Mirrors with the same pass number are resynchronized at the same time.
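For example, you can change a mirror's pass number with the metaparam -p option. Here d2 is an illustrative name, and pass number 0 is appropriate only because the mirror is assumed to be mounted read-only:

# metaparam -p 0 d2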

Background Information for RAID 1 Volumes

Background Information for Creating RAID 1 Volumes

Background Information for Changing RAID 1 Volume Options

How Booting Into Single-User Mode Affects RAID 1 Volumes

If a system with mirrors for root (/), /usr, and swap (the so-called “boot” file systems) is booted into single-user mode (by using the boot -s command), these mirrors, and possibly all mirrors on the system, appear in the “Needing Maintenance” state when viewed with the metastat command. Furthermore, if writes occur to these slices, the metastat command shows an increase in dirty regions on the mirrors.

Though this situation appears to be potentially dangerous, there is no need for concern. The metasync -r command, which normally runs during boot to resynchronize mirrors, is interrupted when the system is booted into single-user mode. Once the system is rebooted, the metasync -r command runs and resynchronizes all mirrors.

If this is a concern, run the metasync -r command manually.
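A sketch of that check-and-repair sequence; metastat reports the mirror state, and metasync -r resynchronizes all mirrors in pass-number order:

# metastat
# metasync -r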

Scenario—RAID 1 Volumes (Mirrors)

RAID 1 volumes provide a means of constructing redundant volumes, in which a partial or complete failure of one of the underlying RAID 0 volumes does not cause data loss or interruption of access to the file systems. The following example, drawing on the sample system explained in Chapter 5, Configuring and Using Solaris Volume Manager (Scenario), describes how RAID 1 volumes can provide redundant storage.

As described in Interlace Values for Stripes, the sample system has two RAID 0 volumes, each of which is approximately 27 Gbytes in size and spans three disks. By creating a RAID 1 volume to mirror these two RAID 0 volumes, you gain a fully redundant storage space that provides resilient data storage.

Within this RAID 1 volume, the failure of either of the disk controllers will not interrupt access to the volume. Similarly, failure of up to three individual disks might be tolerated without access interruption.

To provide additional protection against problems that could interrupt access, use hot spares, as described in Chapter 16, Hot Spare Pools (Overview) and specifically in How Hot Spares Work.