Solaris 10 10/08 Release Notes

File Systems

The following file system bugs apply to the Solaris 10 release.

Taking the Primary Disk Offline in a Mirrored ZFS Root Pool

Do not take the primary disk offline in a mirrored ZFS root configuration. The system will not boot from a disk that has been taken offline in a mirrored root pool configuration.

Workaround: To detach a mirrored root disk for replacement or take it offline, boot from another mirrored disk in the pool. Choose one of the following methods:
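For illustration only, on a SPARC system the sequence might look like the following sketch. The OBP device alias disk1, the pool name rpool, and the device name c0t0d0s0 are assumptions:


ok boot disk1
# zpool detach rpool c0t0d0s0

Use the zpool offline command instead of zpool detach if you only want to take the disk offline rather than replace it.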

lucreate Fails When Destination File System Is ZFS and Locale Is Japanese EUC (6750725)

If you use the lucreate command to create a ZFS root file system and a non-English locale, such as Japanese EUC, is set, the creation of the ZFS dump volume fails. The following error message is displayed:


ERROR: Unable to determine dump device for boot environment <{c1t1d0s0}>.
ERROR: Unable to create all required file systems for boot environment <zfsUp6>.
ERROR: Cannot make file systems for boot environment <zfsUp6>.

Workaround: Choose one of the following workarounds:
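For example, one possible workaround is to run lucreate with the locale forced to C for the duration of the command. This is a sketch only; the boot environment name zfsUp6 and the pool name rpool are assumptions:


# env LANG=C LC_ALL=C lucreate -n zfsUp6 -p rpool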

boot -L Does Not Work After Converting UFS to ZFS (6741743)

When Solaris Live Upgrade is used to convert a UFS root file system to ZFS, the bootlst command is not copied to the correct location. This error prevents the boot -L command from working. The following error message is displayed:


Evaluating: boot -L
The file just loaded does not appear to be executable.
Boot device: /pci@1f,0/pci@1/scsi@8/disk@1,0:a  File and args: 

Can't mount root

Error in Fcode execution !!!
Evaluating: boot
The file just loaded does not appear to be executable.

Workaround: Copy the bootlst command from /platform/`uname -m`/bootlst to /root_pool/platform/`uname -m`/bootlst, where root_pool is the name of the root pool. For example, if the root pool is rpool, type the following command:


# cp -p /platform/`uname -m`/bootlst /rpool/platform/`uname -m`/bootlst

x86: Unable to Use reboot Command to Boot 32-Bit Kernel (6741682)

The bootadm command fails to construct a properly formatted GRUB menu entry when you boot a system in the 32-bit mode by using the following commands:

As a result, the system boots in the 64-bit mode. The faulty menu.lst file might appear as follows:


findroot rootfs0
kernel /platform/i86pc/kernel/unix
module /platform/i86pc/boot_archive

In the previous example, the kernel line does not contain the multiboot information and is therefore incorrect. No error message is displayed.

Workaround: Edit the /boot/grub/menu.lst file manually and add the following information:


title Solaris 10 10/08
findroot rootfs0
kernel /platform/i86pc/multiboot kernel/unix
module /platform/i86pc/boot_archive

After making these changes, the system boots in the 32-bit mode.


Note –

The changes you make to the menu.lst file persist over system reboots.


Alternatively, you can edit the GRUB menu at boot time, adding the kernel/unix boot argument, as shown in the following example:


grub edit> kernel /platform/i86pc/multiboot kernel/unix

Note –

Changes made by editing the GRUB menu at boot time do not persist over system reboots.


For more information, see Modifying Boot Behavior on x86 Based Systems in System Administration Guide: Basic Administration.

zpool attach Might Create an Illegal Root Pool (6740164)

When you attach a device to a root pool to create a mirrored root pool, zpool attach might create an illegal root pool if a whole disk is added to the pool. A ZFS root pool must be created with disk slices, not whole disks. If you attempt to boot from the whole disk that was added to the mirrored root pool, the system will not boot.

Workaround: Perform the following steps:

  1. Detach the disk from the pool. For example:


    # zpool detach rpool c0t2d0
  2. Change the disk label to a VTOC (SMI) label. For example:


    # format -e
    .
    .
    .
    Select disk c0t2d0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]:0
    Ready to label disk, continue? yes
    format> quit
  3. Attach the disk slice back to the pool to create a mirrored root pool. The zpool attach syntax requires the existing pool device followed by the new device. For example, if the existing root pool device is c0t1d0s0:


    # zpool attach rpool c0t1d0s0 c0t2d0s0

See also zpool attach Command Does Not Copy bootblock Information (6668666).

SPARC: Solaris Live Upgrade Does Not Create a menu.lst File (6696226)

On the SPARC platform, Solaris Live Upgrade does not create the menu.lst file that must exist in the root pool's dataset. No error message is displayed.

Workaround: Create the menu.lst file manually. For example, if you have two ZFS boot environments, zfs1008BE and zfs10082BE, in the ZFS root pool, rpool, type the following commands:


# mkdir -p /rpool/boot
# cd /rpool/boot
# vi menu.lst

Add the following entries to the menu.lst file:


title zfs1008BE
bootfs rpool/ROOT/zfs1008BE
title zfs10082BE
bootfs rpool/ROOT/zfs10082BE

zpool attach Command Does Not Copy bootblock Information (6668666)

If you use the zpool attach command to add a disk to a ZFS root pool, the bootblock information is not copied to the newly added disk. This problem does not affect mirrored ZFS root pools that are created during an initial installation. As a result, the system does not boot from the alternate disk in the mirrored root pool.

Workaround: Choose one of the following workarounds:
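For example, the boot blocks can be installed manually on the newly attached disk. This is a sketch that assumes the new disk slice is c0t1d0s0. On a SPARC system:


# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

On an x86 system:


# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0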

x86: ata Timeouts During Boot (6586621)

ata driver timeouts might occur during system boot on Intel multiprocessor systems. These timeouts occur when the root device is on a drive with the HBA controller bound to the legacy ata driver. These timeouts lead to a momentary hang, hard hang, or a panic during system boot with console messages similar to the following:


scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: reset bus, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: early timeout, target=0 lun=0
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0):
        Error for command 'read sector'   Error Level: Informational
gda: [ID 107833 kern.notice]           Sense Key: aborted command
gda: [ID 107833 kern.notice]           Vendor 'Gen-ATA ' error code: 0x3
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0):
        Error for command 'read sector'   Error Level: Informational
gda: [ID 107833 kern.notice]           Sense Key: aborted command
gda: [ID 107833 kern.notice]           Vendor 'Gen-ATA ' error code: 0x3
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: abort request, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: abort device, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: reset target, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: reset bus, target=0 lun=0
scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0 (ata0):
        timeout: early timeout, target=0 lun=0
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0):
        Error for command 'read sector'   Error Level: Informational
gda: [ID 107833 kern.notice]           Sense Key: aborted command
gda: [ID 107833 kern.notice]           Vendor 'Gen-ATA ' error code: 0x3
gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0 (Disk0):

Workaround: Choose one of the following workarounds:


Note –

To avoid performance degradation, use workaround 3 or workaround 4 only temporarily, until workaround 5 can be applied.


zoneadm install Fails With a ZFS Legacy Mount (6449301)

If a non-global zone is initially configured with a ZFS file system to be mounted by using the add fs subcommand with mountpoint=legacy specified, the subsequent zone installation fails. The following error message is displayed:


ERROR: No such file or directory:
cannot mount </zones/path/root/usr/local> in non-global zone to install:
the source block device or directory </path/local> cannot be accessed

Workaround: Add access to a ZFS file system after installing the non-global zone.
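For example, the legacy-mounted file system can be added after installation by using zonecfg. This is a sketch; the zone name myzone, the dataset tank/local, and the mount point /usr/local are assumptions:


# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/usr/local
zonecfg:myzone:fs> set special=tank/local
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit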

ZFS and UNIX/POSIX Compliance Issues

ZFS is designed to be a POSIX-compliant file system and, in most situations, it is. However, two edge cases exist in which ZFS does not meet the POSIX compliance tests:

  1. Updating ZFS file system capacity statistics.

  2. Modifying existing data on a 100 percent full file system.

Related CRs:

fdisk -E Can Sweep Disk Used by ZFS Without Warning (6412771)

If you use the fdisk -E command to modify a disk that is used by a ZFS storage pool, the pool becomes unusable and might cause an I/O failure or system panic.

Workaround:

Do not use the fdisk command to modify a disk that is used by a ZFS storage pool. If you need to access a disk that is used by a ZFS storage pool, use the format utility. In general, disks that are in use by file systems should not be modified.

ZFS and Third-Party Backup Product Issues

The following are issues with BrightStor ARCserve Backup products.

BrightStor ARCserve Backup Client Agent for UNIX (Solaris) and ZFS Support

The BrightStor ARCserve Backup (BAB) Client Agent for UNIX (Solaris) can be used to back up and restore ZFS files.

However, ZFS NFSv4-style ACLs are not preserved during backup. Traditional UNIX file permissions and attributes are preserved.

Workaround: If you want to preserve ZFS files with NFSv4-style ACLs, use the tar command with the -p option or the cpio command with the -P option to write the ZFS files to an archive. Then, use BAB to back up the tar or cpio archive.
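For example, the following sketch assumes the files to preserve are under the hypothetical dataset mount point /tank/data:


# cd /tank
# tar cpf /var/tmp/zfsdata.tar data

Then use BAB to back up the /var/tmp/zfsdata.tar archive.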

ZFS GUI Should Check For /usr/lib/embedded_su at the Beginning of Each Wizard (6326334)

If you add the SUNWzfsg package from a Solaris 10 10/08 release to a system that runs a pre-Solaris 10 6/06 release, which does not have the embedded_su patch, the ZFS Administration application wizards are not fully functional.

If you attempt to run the ZFS Administration application on a system without the embedded_su patch, you will only be able to browse your ZFS configuration. The following error message is displayed:


/usr/lib/embedded_su: not found

Workaround:

Add the embedded_su patch (119574-02) to the system that runs a pre-Solaris 10 6/06 release.
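For example, assuming the patch has been downloaded and unpacked into /var/tmp:


# patchadd /var/tmp/119574-02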

Fails to Sync File System on Panic (6250422)

If a host panics while file system I/O is occurring to a target that is connected by using the Solaris iSCSI software initiator, the I/O might not flush or sync to the target device. This inability to flush or sync might cause file system corruption. No error message is displayed.

Workaround:

Use a journaling file system such as UFS. Starting with Solaris 10, UFS logging is enabled by default. For more information about UFS, see What’s New in File Systems? in System Administration Guide: Devices and File Systems.
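For example, UFS logging can also be enabled explicitly at mount time. This is a sketch with a hypothetical device and mount point:


# mount -F ufs -o logging /dev/dsk/c0t0d0s7 /data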

Upgrading From Some Solaris Express or Solaris 10 Releases Requires Remounting of File Systems

After you upgrade an NFSv4 server from Solaris Express 6/05 to Solaris Express 7/05 or later (including all Solaris 10 updates), your programs might encounter EACCES errors. Furthermore, directories might erroneously appear to be empty.

To prevent these errors, unmount and then remount the client file systems. If unmounting fails, you might need to forcibly unmount the file system by using umount -f. Alternatively, you can reboot the client.
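For example, the following sketch uses a hypothetical server name and mount point:


# umount /mnt/data
# umount -f /mnt/data        (only if the plain unmount fails)
# mount -F nfs server1:/export/data /mnt/data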

NFSv4 Access Control List Functions Might Work Incorrectly

NFSv4 Access Control List (ACL) functions might work improperly if clients and servers in the network are installed with different previous Solaris 10 releases. The affected ACL functions and command-line utilities that use these functions are the following:

For more information about these functions and utilities, see their respective man pages.

For example, errors might be observed in a network that includes the following configuration:

The following table illustrates the results of the ACL functions in client-server configurations with different Solaris 10 releases.

Operation    Client S10 OS    Server S10 OS    Result
get ACL      S10 Beta         S10 OS           fabricated ACL *
get ACL      S10 OS           S10 Beta         works ok
set ACL      S10 Beta         S10 OS           works ok
set ACL      S10 OS           S10 Beta         Error: EOPNOTSUP

Workaround: For the NFSv4 ACL functionality to work properly, perform a full installation of the Solaris 10 OS on both the server and the client.

Access Problems Between Solaris NFSv4 Clients and NFSv4 Servers

In the current Solaris 10 release, the Solaris implementation of NFSv4 Access Control Lists (ACLs) is compliant with the RFC 3530 specification. However, errors occur for NFSv4 clients that use the Solaris 10 Beta 2 or Beta 1 versions. These clients cannot create files on NFSv4 servers that are running the current Solaris 10 release. The following error message is displayed:


NFS getacl failed for server_name: error 9 (RPC: Program/version mismatch)

Workaround: None.

Using mkfs Command to Create File System Might Fail on Very Large Disks (6352813)

The mkfs command might be unable to create a file system on disks that have a certain disk geometry and are larger than 8 Gbytes. With such a geometry, the derived cylinder group size is too large for a 1-Kbyte fragment size, so the excess cylinder group metadata cannot be accommodated in a block.

The following error message is displayed:


With 15625 sectors per cylinder, minimum cylinders
per group is 16. This requires the fragment size to be
changed from 1024 to 4096.
Please re-run mkfs with corrected parameters.

Workaround: Use the newfs command instead, or assign a larger fragment size, such as 4096, when you use the mkfs command.
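For example, the following sketch assumes the hypothetical slice /dev/rdsk/c0t1d0s6. The size operand passed to mkfs is an illustrative sector count:


# newfs /dev/rdsk/c0t1d0s6

or


# mkfs -F ufs -o fragsize=4096 /dev/rdsk/c0t1d0s6 141164544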

System Crash Dump Fails on Devices Greater Than 1 TByte (6214480)

The system cannot generate a dump on a partition that is equal to or greater than 1 Tbyte in size. If such a device is present on a system, the following might occur after the system boots following a system panic:

Workaround: Configure the size of your system's dump device to less than 1 Tbyte.
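For example, the dump device can be reassigned to a smaller slice by using dumpadm. The slice name in this sketch is an assumption:


# dumpadm -d /dev/dsk/c0t1d0s1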

Using smosservice Command to Add OS Services Results in Insufficient Disk Space Message (5073840)

If you use the smosservice command to add OS services to a UFS file system, a message is displayed stating that insufficient disk space is available. This error is specific to UFS file systems on EFI-labeled disks.

Workaround: Complete the following steps. An example sketch follows the list.

  1. Apply the SMI VTOC disk label.

  2. Re-create the file system.

  3. Rerun the smosservice command.
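For example, steps 1 and 2 might look like the following sketch, which assumes the hypothetical disk c0t1d0 and slice s0. The SMI label is applied in an interactive format session like the one shown for bug 6740164:


# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
format> quit
# newfs /dev/rdsk/c0t1d0s0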