The following Solaris Volume Manager bugs apply to the Solaris 10 release.
If you have a Solaris Volume Manager mirrored root (/) file system that does not start on cylinder 0, then none of the submirrors you attach can start on cylinder 0 either.
If you attempt to attach a submirror starting on cylinder 0 to a mirror in which the original submirror does not start on cylinder 0, the following error message is displayed:
can't attach labeled submirror to an unlabeled mirror
Workaround: Choose one of the following:
Ensure that both the root file system and the volume for the other submirror start on cylinder 0.
Ensure that both the root file system and the volume for the other submirror do not start on cylinder 0.
By default, the JumpStart installation process starts swap at cylinder 0 and the root (/) file system somewhere else on the disk. Common system administration practice is to start slice 0 at cylinder 0. Mirroring a default JumpStart installation with root on slice 0, but not cylinder 0, to a typical secondary disk with slice 0 that starts at cylinder 0, can cause problems. This mirroring results in an error message when you attempt to attach the second submirror. For more information about the default behavior of Solaris installation programs, see the Solaris 10 Installation Guides.
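For example, before you attach a submirror, you can compare the partition maps of the two disks to verify where each root slice starts. The following is a sketch with placeholder device names, assuming c0t0d0 holds the existing root submirror and c1t0d0 holds the slice you intend to attach:
# prtvtoc /dev/rdsk/c0t0d0s2
# prtvtoc /dev/rdsk/c1t0d0s2
Compare the First Sector column for the root slices in the two outputs. If the existing root slice does not start at sector 0, repartition the second disk so that its slice also does not start at cylinder 0 before you run the metattach command.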
In non-English locales, the Solaris Volume Manager metassist command might fail to create volumes. For example, if LANG is set to ja (Japanese), the following error message is displayed:
xmlEncodeEntitiesReentrant : input not UTF-8
Syntax of value for attribute read on mirror is not valid
Value "XXXXXX" (unknown word) for attribute read on mirror is not among the enumerated set
Syntax of value for attribute write on mirror is not valid
Value "XXXXXX" (Parallel in Japanese) for attribute write on mirror is not among the enumerated set
metassist: XXXXXX (invalid in Japanese) volume-config
Workaround: As superuser, set the LANG environment variable to C.
For the Bourne, Korn, and Bash shells, use the following command:
# LANG=C; export LANG
For the C shell, use the following command:
# setenv LANG C
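After you set the locale, run the metassist command in the same shell session. The following is a sketch with placeholder values, assuming the metassist create subcommand with a disk set named myset and a requested volume size of 10 Gbytes:
# LANG=C; export LANG
# metassist create -s myset -S 10gb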
Creating Solaris Volume Manager volume configurations with the metassist command might fail if an unformatted disk is in the system. The following error message is displayed:
metassist: failed to repartition disk
Workaround: Manually format any unformatted disks before you issue the metassist command.
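For example, you can label an unformatted disk with the format utility before you run metassist. The following is a sketch with a placeholder disk name:
# format -d c1t1d0
format> label
format> quit
The label subcommand writes a label to the disk so that the metassist command can repartition it.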
If you create a Solaris Volume Manager RAID-1 (mirror) or RAID-5 volume in a disk set that is built on top of a soft partition, hot spare devices do not work correctly.
Problems that you might encounter include, but are not limited to, the following:
A hot spare device might not activate.
A hot spare device status might change, indicating the device is broken.
A hot spare device is used but is resynchronized from the wrong drive.
A hot spare device in use encounters a failure, but the broken status is not reported.
Workaround: Do not build Solaris Volume Manager RAID-1 or RAID-5 volumes in a disk set on top of soft partitions.
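For example, instead of layering the volume on soft partitions, you can build the submirrors directly on slices in the disk set. The following is a sketch with placeholder names, assuming a disk set named myset and two slices on separate disks:
# metainit -s myset d11 1 1 c1t0d0s0
# metainit -s myset d12 1 1 c2t0d0s0
# metainit -s myset d10 -m d11
# metattach -s myset d10 d12
With this layout, the mirror components are slices rather than soft partitions, which avoids the hot spare problems described above.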
You cannot replace a failed drive with a drive that has been configured with the Solaris Volume Manager software. The replacement drive must be new to Solaris Volume Manager software. If you physically move a disk from one slot to another slot on a Sun StorEdge A5x00, the metadevadm command fails. This failure occurs when the logical device name for the slice no longer exists. However, the device ID for the disk remains present in the metadevice replica. The following message is displayed:
Unnamed device detected. Please run 'devfsadm && metadevadm -r' to resolve.
You can access the disk at the new location during this time. However, you might need to use the old logical device name to access the slice.
Workaround: Physically move the drive back to its original slot.
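After you return the drive to its original slot, you can rebuild the device links and update the device relocation information as the error message suggests. A sketch:
# devfsadm
# metadevadm -r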
If you remove a physical disk from the system and replace it, and then use the metarecover -p -d command to write the appropriate soft partition information to the new disk, an open failure results. The command does not update the metadevice database namespace to reflect the change in the disk's device ID. As a result, each soft partition that is built on top of the disk encounters an open failure. The following message is displayed:
Open Error
Workaround: Create a soft partition on the new disk instead of using the metarecover command to recover the soft partition.
If the soft partition is part of a mirror or RAID-5 volume, use the metareplace command without the -e option to replace the old soft partition with the new soft partition, where dx is the mirror or RAID-5 volume:
# metareplace dx old_soft_partition new_soft_partition
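For example, the following is a sketch of the full recovery sequence with placeholder names, assuming d5 is the mirror, d10 is the old soft partition, d20 is the new soft partition, c1t1d0s0 is a slice on the replacement disk, and 2g matches the size of the old soft partition:
# metainit d20 -p c1t1d0s0 2g
# metareplace d5 d10 d20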