The following file system bugs apply to the Solaris 10 release.
The attach operation fails on branded zones, although it succeeds on native (Solaris 10) zones. The following error message is displayed:
zone mount operation is invalid for branded zones. Cannot generate the information needed to attach this zone.
Workaround: Use the attach -F command to force the attach of non-native branded zones without validation. For more information about this procedure, see System Administration Guide: Solaris Containers--Resource Management and Solaris Zones.
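For example, assuming a branded zone named myzone (the zone name here is hypothetical), the forced attach is:

# zoneadm -z myzone attach -F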
Do not take the primary disk offline in a mirrored ZFS root configuration. The system will not boot from a disk that has been taken offline in a mirrored root-pool configuration.
Workaround: To detach a mirrored root disk for replacement or take it offline, boot from another mirrored disk in the pool. Choose one of the following methods:
Bring the primary disk in the mirrored ZFS root pool back online. For example:
# zpool online rpool c0t1d0s0
If the primary disk has failed or needs to be replaced, boot from another disk in the pool.
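For example, on a SPARC system you might boot from the other side of the mirror at the OpenBoot PROM prompt. The device alias disk1 is hypothetical and depends on your configuration:

ok boot disk1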
If you use the lucreate command to create a ZFS root file system and the LOCALE is set to a non-English locale, the creation of the ZFS dump volume fails. The following error message is displayed:
ERROR: Unable to determine dump device for boot environment <{c1t1d0s0}>.
ERROR: Unable to create all required file systems for boot environment <zfsUp6>.
ERROR: Cannot make file systems for boot environment <zfsUp6>.
Workaround: Choose one of the following workarounds:
Include the locale setting with the lucreate command. For example:
# LC_ALL=C lucreate -n zfsUp6 -p rpool
If you receive the dump device failure messages during an lucreate operation in a non-English locale, create the ZFS dump volume manually. For example:
# zfs create -V 2G -b 128k rpool/dump
When Solaris Live Upgrade is used to convert a UFS root file system to a ZFS root file system, the bootlst command is not copied to the correct location. This error prevents the boot -L command from working. The following error message is displayed:
Evaluating: boot -L
The file just loaded does not appear to be executable.
Boot device: /pci@1f,0/pci@1/scsi@8/disk@1,0:a File and args:
Can't mount root
Error in Fcode execution !!!
Evaluating: boot
The file just loaded does not appear to be executable.
Workaround: Copy the bootlst command from /platform/`uname -m`/bootlst to the same path in the root pool's dataset. For example, if the root pool is rpool, type the following command:
# cp -p /platform/`uname -m`/bootlst /rpool/platform/`uname -m`/bootlst
The bootadm command fails to construct a properly formatted GRUB menu entry when you boot a system in the 32-bit mode by using the following commands:
reboot kernel/unix
reboot -- -r
As a result, the system boots in the 64-bit mode. The faulty menu.lst file might appear as follows:
findroot rootfs0
kernel /platform/i86pc/kernel/unix
module /platform/i86pc/boot_archive
In the previous example, the kernel line does not contain the multiboot information and is therefore incorrect. No error message is displayed.
Workaround: Edit the /boot/grub/menu.lst file manually and add the following information:
title Solaris 10 10/08
findroot rootfs0
kernel /platform/i86pc/multiboot kernel/unix
module /platform/i86pc/boot_archive
After making these changes, the system boots in the 32-bit mode.
The changes you make to the menu.lst file persist over system reboots.
Alternatively, you can edit the GRUB menu at boot time, adding the kernel/unix boot argument, as shown in the following example:
grub edit> kernel /platform/i86pc/multiboot kernel/unix
Changes made by editing the GRUB menu at boot time do not persist over system reboots.
For more information, see Modifying Boot Behavior on x86 Based Systems in System Administration Guide: Basic Administration.
When you attach a device to a root pool to create a mirrored root pool, zpool attach might create an illegal root pool if a whole disk is added to the pool. A ZFS root pool must be created with disk slices, not whole disks. If you attempt to boot from the whole disk that was added to the mirrored root pool, the system will not boot.
Workaround: Perform the following steps:
Detach the disk from the pool. For example:
# zpool detach rpool c0t2d0
Change the disk label to a VTOC (SMI) label. For example:
# format -e
.
.
.
Select disk c0t2d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
format> quit
Add a disk slice back to the pool to create a mirrored root pool. For example:
# zpool attach rpool c0t1d0s0 c0t2d0s0
See also zpool attach Command Does Not Copy bootblock Information (6668666).
On the SPARC platform, a menu.lst file must exist in the root pool's dataset for the boot -L command to list the available ZFS boot environments, but the file is not created automatically. No error message is displayed.
Workaround: Create the menu.lst file manually. For example, if you have two ZFS boot environments, zfs1008BE and zfs10082BE, in the ZFS root pool, rpool, type the following commands:
# mkdir -p /rpool/boot
# cd /rpool/boot
# vi menu.lst
Add the following entries to the menu.lst file:
title zfs1008BE
bootfs rpool/ROOT/zfs1008BE
title zfs10082BE
bootfs rpool/ROOT/zfs10082BE
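With these entries in place, the boot -L command can list the boot environments at the OpenBoot PROM prompt. The output shown here is illustrative:

ok boot -L
1 zfs1008BE
2 zfs10082BE
Select environment to boot: [ 1 - 2 ]: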
If you use the zpool attach command to add a disk to a ZFS root pool, the bootblock information is not copied to the newly added disk. As a result, the system does not boot from the alternate disk in the mirrored root pool. This problem does not affect mirrored ZFS root pools that are created with an initial installation.
Workaround: Choose one of the following workarounds:
On a SPARC system, identify the alternate disk device and install the boot information. For example:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
On an x86 system, identify the alternate disk device and install the boot information. For example:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
ata driver timeouts might occur during system boot on Intel multiprocessor systems. These timeouts occur when the root device is on a drive whose HBA controller is bound to the legacy ata driver. The timeouts lead to a momentary hang, a hard hang, or a panic during system boot, with console messages similar to the following:
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: reset bus, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: early timeout, target=0 lun=0
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0): Error for command 'read sector' Error Level: Informational
gda: [ID 107833 kern.notice] Sense Key: aborted command
gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0): Error for command 'read sector' Error Level: Informational
gda: [ID 107833 kern.notice] Sense Key: aborted command
gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: abort request, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: abort device, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: reset target, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: reset bus, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0): timeout: early timeout, target=0 lun=0
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0): Error for command 'read sector' Error Level: Informational
gda: [ID 107833 kern.notice] Sense Key: aborted command
gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0):
Workaround: Choose one of the following workarounds:
To avoid performance degradation, use workaround 3 or workaround 4 only temporarily, until you can apply workaround 5.
Workaround 1: Enable AHCI in the BIOS, if it is available on the system. Enabling this setting requires reinstalling the Solaris OS.
Workaround 2: Install Solaris on a disk on a controller that does not use the ata driver.
Workaround 3: Disable MP in the BIOS setup so that a single processor is active.
Workaround 4: Disable MP in Solaris so that a single processor is active. Perform the following steps from the GRand Unified Bootloader (GRUB) menu:
1. Type e to edit your selected Solaris entry.
2. Navigate to the line that begins with kernel.
3. Type e to switch to the GRUB edit mode.
4. Append -kd to the line.
5. Press Enter to accept the change.
6. Type b to boot the selected Solaris entry.
7. At the kmdb prompt, type the following commands:
use_mp/W 0
:c
8. If you are performing a system boot, proceed to Step 10. Otherwise, install the Solaris 10 5/09 software.
9. At the end of the installation, reboot the system, then repeat steps 1 through 7.
10. To make this change permanent, so that steps 1 through 7 do not need to be repeated on subsequent boots, do the following when the system boot is complete:
a. Become superuser.
b. Open the /etc/system file.
c. Add the following line:
set use_mp = 0
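The /etc/system change described above can be sketched as a short script. This sketch uses /tmp/system as a hypothetical stand-in for /etc/system so that it is safe to run anywhere; on a real system you would edit /etc/system itself as superuser:

```shell
# Stand-in scratch copy of /etc/system (hypothetical path, for safety).
cp /etc/system /tmp/system 2>/dev/null || touch /tmp/system

# Append the tunable that limits the system to a single processor.
echo 'set use_mp = 0' >> /tmp/system

# Confirm the line is present.
grep 'use_mp' /tmp/system
```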
Workaround 5: Disable microcode update. Type the following command:
# mv /platform/i86pc/ucode /platform/i86pc/ucode.disabled
The microcode update can be invoked manually after the system is up. Type the following command:
# ucodeadm -u /platform/i86pc/ucode.disabled/intel-ucode.txt
When more than one ZFS BE exists in a ZFS root pool, a recursive snapshot of the root pool might fail. This issue is caused by a problem with the way synchronous I/O is handled when a file system is unmounted. The following error message is displayed:
cannot create snapshot 'rpool@today': dataset is busy
Workaround: Choose one of the following workarounds:
Workaround 1: Mount and unmount the file systems that are indicated as busy in the error messages.
Workaround 2: Remove any additional ZFS BEs before creating a recursive snapshot of a ZFS root pool.
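For example, after removing the additional BEs, the recursive snapshot of the root pool is taken as follows. The snapshot name today matches the error message above and is otherwise arbitrary:

# zfs snapshot -r rpool@today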
If a non-global zone is initially configured with a ZFS file system to be mounted by using the add fs subcommand with mountpoint=legacy specified, the subsequent zone installation fails. The following error message is displayed:
ERROR: No such file or directory: cannot mount </zones/path/root/usr/local> in non-global zone to install: the source block device or directory </path/local> cannot be accessed
Workaround: Add access to a ZFS file system after installing the non-global zone.
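For example, assuming a zone named myzone and a dataset named rpool/local (both names are hypothetical), the file system can be added after the zone is installed:

# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/usr/local
zonecfg:myzone:fs> set special=rpool/local
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit

Then reboot the zone so that the file system is mounted.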
ZFS is designed to be a POSIX compliant file system and in most situations, ZFS is POSIX compliant. However, two edge case conditions exist when ZFS does not meet the POSIX compliance tests:
Updating ZFS file system capacity statistics.
Modifying existing data when the file system is 100 percent full.
Related CRs:
6362314
6362156
6361650
6343113
6343039
6742203
If you use the fdisk -E command to modify a disk that is used by a ZFS storage pool, the pool becomes unusable and might cause an I/O failure or system panic.
Workaround:
Do not use the fdisk command to modify a disk that is used by a ZFS storage pool. If you need to access a disk that is used by a ZFS storage pool, use the format utility. In general, disks that are in use by file systems should not be modified.
The following issue applies to BrightStor ARCserve Backup products.
The BrightStor ARCserve Backup (BAB) Client Agent for UNIX (Solaris) can be used to back up and restore ZFS files.
However, ZFS NFSv4-style ACLs are not preserved during backup. Traditional UNIX file permissions and attributes are preserved.
Workaround: To preserve ZFS files with NFSv4-style ACLs, use the tar command with the -p option or the cpio command with the -P option to write the ZFS files to a file. Then, use BAB to back up the tar or cpio archive.
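The tar variant can be sketched as follows. The paths and file names are hypothetical; on Solaris, tar -p also records and restores the NFSv4-style ACLs, while this sketch demonstrates only the permission round trip:

```shell
# Create a sample file with specific permissions (hypothetical paths).
mkdir -p /tmp/zfsdemo
echo data > /tmp/zfsdemo/file.txt
chmod 640 /tmp/zfsdemo/file.txt

# Archive with -p so permission/attribute information is recorded;
# the resulting archive is what BAB would then back up.
tar cpf /tmp/zfsdemo.tar -C /tmp zfsdemo

# Restoring with -p reapplies the recorded permissions.
mkdir -p /tmp/zfsrestore
tar xpf /tmp/zfsdemo.tar -C /tmp/zfsrestore
ls -l /tmp/zfsrestore/zfsdemo/file.txt
```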
If you add the SUNWzfsg package from a Solaris 10 5/09 release to a system that runs a pre-Solaris 10 6/06 release, which does not have the embedded_su patch, the ZFS Administration application wizards are not fully functional.
If you attempt to run the ZFS Administration application on a system without the embedded_su patch, you will only be able to browse your ZFS configuration. The following error message is displayed:
/usr/lib/embedded_su: not found
Workaround:
Add the embedded_su patch (119574-02) to the system that runs a pre-Solaris 10 6/06 release.
If a host panics while file system I/O is occurring to a target that is connected by using the Solaris iSCSI software initiator, the I/O might not be flushed or synced to the target device. This inability to flush or sync might cause file system corruption. No error message is displayed.
Workaround:
Use a journaling file system, such as UFS. Starting with Solaris 10, UFS logging is enabled by default. For more information about UFS, see What's New in File Systems? in System Administration Guide: Devices and File Systems.
After you upgrade an NFSv4 server from Solaris Express 6/05 to Solaris Express 7/05 or later (including all Solaris 10 updates), your programs might encounter EACCES errors. Furthermore, directories might erroneously appear to be empty.
Workaround: To prevent these errors, unmount and then remount the client file systems. If unmounting fails, you might need to forcibly unmount the file system by using umount -f. Alternatively, you can reboot the client.
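For example, assuming the client file system is mounted at /mnt/data from server:/export/data (both names are hypothetical):

# umount -f /mnt/data
# mount server:/export/data /mnt/data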
NFSv4 Access Control List (ACL) functions might work improperly if clients and servers in the network are installed with different previous Solaris 10 releases. The affected ACL functions and command-line utilities that use these functions are the following:
acl()
facl()
getfacl
setfacl
For more information about these functions and utilities, see their respective man pages.
For example, errors might be observed in a network that includes the following configuration:
A client that is running Solaris 10 Beta software
A server that is running Solaris 10 software
The following table illustrates the results of the ACL functions in client-server configurations with different Solaris 10 releases.
Operation | Client S10 OS | Server S10 OS | Result
---|---|---|---
get ACL | S10 Beta | S10 OS | fabricated ACL *
get ACL | S10 OS | S10 Beta | works ok
set ACL | S10 Beta | S10 OS | works ok
set ACL | S10 OS | S10 Beta | Error: EOPNOTSUP
Workaround: For the NFSv4 ACL functionality to work properly, perform a full installation of the Solaris 10 OS on both the server and the client.
In the current Solaris 10 release, the Solaris implementation of NFSv4 Access Control Lists (ACLs) is compliant with the RFC 3530 specification. However, errors occur for NFSv4 clients that use the Solaris 10 Beta 2 or Beta 1 versions. These clients cannot create files on NFSv4 servers that run the current Solaris 10 release. The following error message is displayed:
NFS getacl failed for server_name: error 9 (RPC: Program/version mismatch)
Workaround: None.
The mkfs command might be unable to create a file system on disks with certain disk geometries whose sizes are greater than 8 Gbytes. The derived cylinder group size is too large for the 1-Kbyte fragment size: the cylinder group is so large that the excess metadata cannot be accommodated in a block.
The following error message is displayed:
With 15625 sectors per cylinder, minimum cylinders per group is 16.
This requires the fragment size to be changed from 1024 to 4096.
Please re-run mkfs with corrected parameters.
Workaround: Use the newfs command instead, or assign a larger fragment size, such as 4096, when you use the mkfs command.
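For example, to create the file system with newfs, optionally forcing a 4096-byte fragment size with the -f option (the device name is hypothetical):

# newfs /dev/rdsk/c0t1d0s6
# newfs -f 4096 /dev/rdsk/c0t1d0s6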
The system cannot generate a dump on a partition that is equal to or greater than 1 Tbyte in size. If such a device is on a system, the following might occur after the system boots subsequent to a system panic:
The system does not save the dump.
The following message is displayed:
0% done: 0 pages dumped, compression ratio 0.00, dump failed: error 6
Workaround: Configure the size of your system's dump device to less than 1 Tbyte.
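For example, a dedicated dump device smaller than 1 Tbyte can be configured with the dumpadm command (the slice name is hypothetical):

# dumpadm -d /dev/dsk/c0t1d0s1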