This section provides tips for working with mirrors.
The following two tasks show how to change the interlace value of submirrors without destroying a mirror, and how to use a mirror for an online backup.
Use this task to change the interlace value of a mirror's underlying submirrors, which are composed of striped metadevices. This method eliminates the need to re-create the mirror and submirrors and then restore data.
To perform this task from the command line, refer to the metadetach(1M), metainit(1M), and metattach(1M) man pages.
The high-level steps in this task are:
Detaching submirror1
Clearing submirror1
Creating a new stripe, with the new interlace value, to be used as submirror1
Attaching submirror1 to the mirror
Waiting for the mirror resync to finish
Repeating the above steps for submirror2
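From the command line, the steps above might be scripted as the following sketch. The metadevice names (d0, d1), slice names, and the 64-Kbyte interlace are examples only, not part of this manual; by default the function only prints each command (dry run) so you can review the sequence before running it for real.

```shell
#!/bin/sh
# Sketch: change a submirror's interlace value from the command line.
# Metadevice names, slices, and the 64k interlace below are examples only.
change_interlace() {
    mirror=$1; submirror=$2; run=${3:-echo}   # default: dry run (echo commands)
    $run metadetach "$mirror" "$submirror"    # detach the submirror
    $run metaclear "$submirror"               # clear the old stripe
    # Re-create the submirror as a three-slice stripe with a 64-Kbyte interlace
    $run metainit "$submirror" 1 3 c0t0d0s1 c0t1d0s1 c0t2d0s1 -i 64k
    $run metattach "$mirror" "$submirror"     # reattach; a mirror resync begins
}

change_interlace d0 d1        # dry run: prints the four commands
```

Run the function once per submirror, waiting for each resync to finish before detaching the next submirror.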
Make sure DiskSuite Tool is started.
Double-click the Mirror object in the Objects list.
The object appears on the canvas.
Click inside the submirror to be detached.
Drag the submirror out of the Mirror object to the canvas.
If this is a two-way mirror, the mirror's status changes to "Urgent."
Click the top rectangle of the Mirror object then click Commit.
Create a new submirror with the desired interlace value.
Refer to "How to Create a Striped Metadevice (DiskSuite Tool)".
Drag the new Submirror object to the Mirror object. Then click Commit to commit the mirror.
A mirror resync begins.
The Configuration Log shows that the mirror was committed.
Repeat Step 3 through Step 7 for the second (and possibly third) submirror in the mirror.
Although DiskSuite is not meant to be a "backup product," it does provide a means for backing up mirrored data without unmounting the mirror or taking the entire mirror offline, and without halting the system or denying users access to data. The procedure works as follows: one submirror is taken offline (temporarily losing the mirroring) and backed up; when the backup is complete, that submirror is placed back online and resynced.
You can use this procedure on any file system except root (/). Be aware that this type of backup creates a "snapshot" of an active file system. Depending on how the file system is being used when it is write-locked, some files and file content on the backup may not correspond to the actual files on disk.
If you use this procedure on a two-way mirror, be aware that data redundancy is lost while one submirror is offline for backup. A three-way mirror does not have this problem.
There is some overhead on the system when the offlined submirror is brought back online after the backup is complete.
If you use these procedures regularly, put them into a script for ease of use.
The high-level steps in this procedure are:
Write-locking the file system (UFS only). Do not lock root (/).
Using the metaoffline(1M) command to take one submirror offline from the mirror
Unlocking the file system
Backing up the data on the offlined submirror
Using the metaonline(1M) command to place the offlined submirror back online
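As suggested earlier, these steps can be collected into a script for regular use. The following sketch assumes a UFS mount point; the mirror (d1), submirror (d3), mount point (/home1), and tape device (/dev/rmt/0) are examples only, and by default the function only prints each command (dry run).

```shell
#!/bin/sh
# Sketch of the online-backup procedure. The mirror (d1), submirror (d3),
# mount point (/home1), and tape device (/dev/rmt/0) are examples only.
online_backup() {
    mirror=$1; submirror=$2; mountpt=$3
    run=${4:-echo}                            # default: dry run (echo commands)
    $run /usr/sbin/lockfs -w "$mountpt"       # write-lock the UFS (never root)
    $run metaoffline "$mirror" "$submirror"   # take one submirror offline
    $run /usr/sbin/lockfs -u "$mountpt"       # unlock; writes resume
    # Back up from the raw metadevice of the offlined submirror
    $run ufsdump 0f /dev/rmt/0 "/dev/md/rdsk/$submirror"
    $run metaonline "$mirror" "$submirror"    # bring it back; resync begins
}

online_backup d1 d3 /home1    # dry run: prints the five commands
```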
Before beginning, run the metastat(1M) command to make sure the mirror is in the "Okay" state.
A mirror that is in the "Maintenance" state should be repaired first.
For all file systems except root (/), lock the file system from writes.
# /usr/sbin/lockfs -w mount-point
Only a UFS needs to be write-locked. If the metadevice is set up as a raw device for database management software or some other specific application, running lockfs(1M) is not necessary. (You may, however, want to run the appropriate vendor-supplied utility to flush any buffers and lock access.)
Write-locking root (/) causes the system to hang, so it should never be performed.
Take one submirror offline from the mirror.
# metaoffline mirror submirror
In this command, mirror is the metadevice name of the mirror, and submirror is the metadevice name of the submirror being taken offline.
Reads will continue to be made from the other submirror. The mirror will be out of sync as soon as the first write is made. This inconsistency is corrected when the offlined submirror is brought back online in Step 6.
There is no need to run fsck(1M) on the offlined file system.
Unlock the file system and allow writes to continue.
# /usr/sbin/lockfs -u mount-point
If you used a vendor-supplied utility in Step 2, perform the corresponding unlocking procedure here.
Perform a backup of the offlined submirror. Use ufsdump(1M) or your usual backup utility.
To ensure a proper backup, use the raw metadevice, for example, /dev/md/rdsk/d4. Using the raw ("rdsk") device allows access to devices greater than 2 Gbytes.
Place the submirror back online.
# metaonline mirror submirror
DiskSuite automatically begins resyncing the submirror with the mirror.
This example uses a mirror named d1, consisting of submirrors d2 and d3. d3 is taken offline and backed up while d2 stays online. The file system on the mirror is /home1.
# /usr/sbin/lockfs -w /home1
# metaoffline d1 d3
d1: submirror d3 is offlined
# /usr/sbin/lockfs -u /home1
(Perform backup using /dev/md/rdsk/d3)
# metaonline d1 d3
d1: submirror d3 is onlined
If a system with mirrors for root (/), /usr, and swap (the so-called "boot" file systems) is booted into single-user mode (boot -s), these mirrors, and possibly all mirrors on the system, appear in the "Needing Maintenance" state when viewed with the metastat command. Furthermore, if writes occur to these slices, metastat shows an increase in dirty regions on the mirrors.
Though this appears potentially dangerous, there is no need for concern. The metasync -r command, which normally occurs during boot to resync mirrors, is interrupted when the system is booted into single-user mode. Once the system is rebooted, metasync -r will run and resync all mirrors.
If this is a concern, run metasync -r manually.