The following file system bugs apply to the Solaris 10 release.
ZFS is designed to be a POSIX-compliant file system, and in most situations it is. However, ZFS fails the POSIX compliance tests in two edge-case conditions:
Updating ZFS file system capacity statistics.
Modifying existing data on a 100 percent full file system.
Adding ZFS patches to a Solaris 10 6/06 system causes spurious warning messages from the patchadd command because the ZFS packages are being added to the system for the first time. The following error message is displayed:
The following requested patches have packages not installed on the system:

Package SUNWzfskr from directory SUNWzfskr in patch 122641-03 is not installed on the system. Changes for package SUNWzfskr will not be applied to the system.
Workaround: Ignore the spurious messages from the patchadd command.
The ufsrestore utility generates errors if a UFS archive with POSIX-draft ACLs is restored in a ZFS file system. The files are restored correctly, but the ACL information is ignored.
During the ufsrestore operation, the following error message is generated:
setacl failed: Invalid argument
Use an ACL-aware command, such as the tar command with the -p option or the cpio command with the -P option, to transfer UFS files with ACLs to a ZFS file system.
The POSIX-draft ACLs are translated into the equivalent NFSv4-style ACLs.
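For example, a cpio pass-mode copy preserves the ACLs during the transfer. The following is a minimal sketch that assumes a hypothetical UFS source directory /ufs/data and a ZFS file system mounted at /tank/data:

# cd /ufs/data
# find . -depth -print | cpio -pdmP /tank/data

The -P option preserves the ACLs, while the -d and -m options create directories as needed and preserve modification times.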
If you use the fdisk -E command to modify a disk that is used by a ZFS storage pool, the pool becomes unusable and might cause an I/O failure or system panic.
Workaround: Do not use the fdisk command to modify a disk that is in use by a ZFS storage pool. If you need to access such a disk, use the format utility instead. In general, do not modify disks that are in use by any file system.
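Before modifying any disk, you can verify whether it belongs to a ZFS storage pool by listing the pool configuration. A minimal sketch, assuming a hypothetical pool named tank:

# zpool status tank

The configuration section of the output lists every disk that the pool uses.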
A Sun Ultra 20 workstation or Sun Fire X2100 server might hang on reboot if the disks connected to these systems contain a ZFS storage pool.
BIOS versions that might exhibit this problem are:
Sun Ultra 20 workstations with a BIOS version lower than 2.2.3
Sun Fire X2100 servers with a BIOS version lower than 1.1.1
Workaround: Until a BIOS version that supports ZFS can be installed on these systems, disconnect the disks that are used by ZFS before you reboot the system.
From the release 1.4 Supplemental CD, install the minimum BIOS version that supports ZFS.
The supported BIOS versions are:
BIOS version 2.2.3 for Sun Ultra 20 workstations.
BIOS version 1.1.1 for Sun Fire X2100 servers.
You can also download the supplemental CD image for release 1.4 from the following locations:
The following issues apply to the Veritas NetBackup product and the Sun StorEdge Enterprise Backup Software (EMC Legato NetWorker) product.
The Veritas NetBackup product can be used to back up ZFS files, and this configuration is supported. However, this product does not currently support backing up or restoring NFSv4-style ACL information from ZFS files. Traditional permission bits and other file attributes are correctly backed up and restored.
If a user tries to back up or restore ZFS files, the NFSv4-style ACL information is silently dropped. No error message indicates that the ACL information has been dropped.
Support for ZFS/NFSv4 ACLs is under development and is expected to be available in the next Veritas NetBackup release.
As of the Solaris 10 6/06 release, both the tar and cpio commands correctly handle ZFS files with NFSv4-style ACLs.
Use the tar command with the -p option or the cpio command with the -P option to write the ZFS files to an archive file. Then, use Veritas NetBackup to back up the tar or cpio archive.
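For example, the following sketch writes the contents of a hypothetical ZFS file system mounted at /tank/home to an archive file, which Veritas NetBackup can then back up as an ordinary file:

# cd /tank/home
# tar cpf /var/tmp/zfsfiles.tar .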
As an alternative to using Veritas NetBackup, use the ZFS send and receive commands to back up ZFS files. These commands correctly handle all attributes of ZFS files.
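A minimal sketch, assuming a hypothetical pool named tank with a file system named home:

# zfs snapshot tank/home@backup
# zfs send tank/home@backup > /var/tmp/tank-home.zfs
# zfs receive tank/home_restored < /var/tmp/tank-home.zfs

The send stream preserves all file attributes, including the NFSv4-style ACLs.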
Currently, the Sun StorEdge Enterprise Backup Software product cannot be used to back up or restore ZFS files.
If a user tries to back up or restore ZFS files, the following error message is displayed:
save: Unable to read ACL information for '/path': Operation not applicable
Support for ZFS/NFSv4 ACLs is expected to be available in the upcoming Sun StorEdge EBS 7.3 Service Update 1 release.
Workaround: Mount the ZFS file system by using NFSv4 on another system.
Back up or restore the ZFS files from the NFSv4-mounted directory.
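A minimal sketch, assuming a hypothetical server named zfs-host that shares the ZFS file system tank/home over NFS:

# mount -F nfs -o vers=4 zfs-host:/tank/home /mnt

You can then back up or restore the files under /mnt with the backup product.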
If you add the SUNWzfsg package from a Solaris 10 6/06 release to a system that runs a pre-Solaris 10 6/06 release and that does not have the embedded_su patch, the ZFS Administration application wizards are not fully functional.
If you attempt to run the ZFS Administration application on a system without the embedded_su patch, you will only be able to browse your ZFS configuration. The following error message is displayed:
/usr/lib/embedded_su: not found
Workaround: Add the embedded_su patch (119574-02) to the system that runs the pre-Solaris 10 6/06 release.
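For example, assuming the patch has been downloaded and unpacked in /var/tmp:

# patchadd /var/tmp/119574-02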
For a RAID-Z virtual device, the following commands report inflated “space used” and “space available” size information:
The reported space information includes the space used to store the parity data.
If a host panics while file system I/O is occurring to a target that is connected by using the Solaris iSCSI software initiator, the I/O might not be flushed or synced to the target device. This failure to flush or sync might cause file system corruption. No error message is displayed.
Workaround: Use a journaling file system, such as UFS. Starting with Solaris 10, UFS logging is enabled by default. For more information about UFS, see What's New in File Systems? in System Administration Guide: Devices and File Systems.
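Although UFS logging is enabled by default starting with Solaris 10, you can also request it explicitly at mount time. A minimal sketch, assuming a hypothetical device and mount point:

# mount -F ufs -o logging /dev/dsk/c0t1d0s0 /mnt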
If a ZFS snapshot is created while a data scrub or resilver operation is in progress, the scrub or resilver operation will restart from the beginning. If snapshots are taken frequently, the scrub or resilver operation might never complete.
Workaround: Do not take snapshots while a scrub or resilver operation is in progress.
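Before you take a snapshot, you can confirm that no scrub or resilver operation is running. A minimal sketch, assuming a hypothetical pool named tank:

# zpool status tank

The scrub line of the output indicates whether a scrub or resilver operation is in progress or has completed.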
After you upgrade an NFSv4 server from the Solaris Express 6/05 release to the Solaris Express 7/05 release or later (including all Solaris 10 updates), your programs might encounter EACCES errors. Furthermore, directories might erroneously appear to be empty.
Workaround: To prevent these errors, unmount and then remount the client file systems. If unmounting fails, you might need to forcibly unmount the file system by using umount -f. Alternatively, you can reboot the client.
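For example, assuming a hypothetical client mount point /mnt that is served by server:/export:

# umount -f /mnt
# mount server:/export /mnt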
NFSv4 Access Control List (ACL) functions might work improperly if clients and servers in the network run different earlier Solaris 10 releases. The affected ACL functions, and the command-line utilities that use these functions, are the following:
For more information about these functions and utilities, see their respective man pages.
For example, errors might be observed in a network that includes the following configuration:
A client that is running Solaris 10 Beta software
A server that is running Solaris 10 software
The following table illustrates the results of the ACL functions in client-server configurations with different Solaris 10 releases.
Workaround: For the NFSv4 ACL functionality to work properly, perform a full installation of the Solaris 10 OS on both the server and the client.
In the current Solaris 10 release, the Solaris implementation of NFSv4 Access Control Lists (ACLs) is compliant with the RFC 3530 specification. However, errors occur for NFSv4 clients that run the Solaris 10 Beta 1 or Beta 2 software. These clients cannot create files on NFSv4 servers that run the current Solaris 10 release. The following error message is displayed:
NFS getacl failed for server_name: error 9 (RPC: Program/version mismatch)
The mkfs command might be unable to create a file system on a disk that has a certain disk geometry and is larger than 8 Gbytes. The derived cylinder group size is too large for a file system with a 1-Kbyte fragment size: the cylinder group is so large that the excess metadata cannot be accommodated in a block.
The following error message is displayed:
With 15625 sectors per cylinder, minimum cylinders per group is 16. This requires the fragment size to be changed from 1024 to 4096. Please re-run mkfs with corrected parameters.
Workaround: Use the newfs command instead. Or, assign a larger fragment size, such as 4096, when you use the mkfs command.
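For example, either of the following commands assigns a 4096-byte fragment size. This is a minimal sketch that assumes a hypothetical device /dev/rdsk/c0t0d0s6; the trailing mkfs operand is the size of the slice in sectors:

# newfs -f 4096 /dev/rdsk/c0t0d0s6
# mkfs -F ufs -o fragsize=4096 /dev/rdsk/c0t0d0s6 size-in-sectors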
Creating a UFS file system with the newfs command might fail under the following conditions:
The slice is small, less than approximately 4 Mbytes.
The size of the disk exceeds 8 Gbytes.
The error is caused by the file system's large metadata requirement. The following warning message is displayed:
Warning: inode blocks/cyl group (295) >= data blocks (294) in last cylinder group.
This implies 4712 sector(s) cannot be allocated.
/dev/rdsk/c0t0d0s6: 0 sectors in 0 cylinders of 48 tracks, 128 sectors
        0.0MB in 0 cyl groups (13 c/g, 39.00MB/g, 18624 i/g)
super-block backups (for fsck -F ufs -o b=#) at: #
Workaround: As superuser, perform one of the following workarounds:
Workaround 1: Specify the number of tracks when you use the newfs command. Follow these steps.
Use the format command to find out the number of tracks to assign. For example:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
Specify disk (enter its number):
In this example, the hd value shows that the number of tracks is 19.
Assign that number of tracks when you create the file system with the newfs command. For example:
# newfs -v -t 19 /dev/dsk/c0t0d0s6
newfs: construct a new file system /dev/rdsk/c0t0d0s6: (y/n)? y
mkfs -F ufs /dev/rdsk/c0t0d0s6 4712 -1 19 8192 1024 16 10 167 2048 t 0 -1 8 128 n
mkfs: bad value for nsect: -1 must be between 1 and 32768
mkfs: nsect reset to default 32
Warning: 152 sector(s) in last cylinder unallocated
/dev/rdsk/c0t0d0s6: 4712 sectors in 8 cylinders of 19 tracks, 32 sectors
        2.3MB in 1 cyl groups (16 c/g, 4.75MB/g, 2304 i/g)
super-block backups (for fsck -F ufs -o b=#) at: 32, #
Workaround 2: Specify the number of bytes per inode (nbpi) in the newfs command to reduce the inode density in the file system. For example:
# newfs -i 4096 /dev/dsk/c0t0d0s6
newfs: construct a new file system /dev/rdsk/c0t0d0s6: (y/n)? y
Warning: 1432 sector(s) in last cylinder unallocated
/dev/rdsk/c0t0d0s6: 4712 sectors in 1 cylinders of 48 tracks, 128 sectors
        2.3MB in 1 cyl groups (16 c/g, 48.00MB/g, 11648 i/g)
super-block backups (for fsck -F ufs -o b=#) at: 32, #
An NFSv4 client whose file system is near full capacity mishandles error codes that are returned from the server. The client receives the correct error code (NFS4ERR_NOSPC) from the server but fails to propagate the corresponding ENOSPC error code to the application. The application therefore does not receive the error through the normal system functions such as write(), close(), or fsync(). Consequently, the application's continued attempts to write or modify data can cause data loss or corruption.
The following error message is recorded in /var/adm/messages:
nfs: [ID 174370 kern.notice] NFS write error on host hostname: No space left on device.
nfs: [ID 942943 kern.notice] File: userid=uid, groupid=gid
nfs: [ID 983240 kern.notice] User: userid=uid, groupid=gid
nfs: [ID 702911 kern.notice] (file handle: 86007000 2000000 a000000 6000000 32362e48 a000000 2000000 5c8fa257)
Workaround: Do not perform work on client systems whose file systems are near full capacity.
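To check how close a client file system is to full capacity, you can examine its usage. A minimal sketch, assuming a hypothetical NFS mount point /mnt:

# df -k /mnt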
The system cannot generate a dump on a partition that is 1 Tbyte or larger. If such a device is on a system, the following might occur when the system boots after a system panic:
The system does not save the dump.
The following message is displayed:
0% done: 0 pages dumped, compression ratio 0.00, dump failed: error 6
Workaround: Configure the size of your system's dump device to less than 1 Tbyte.
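For example, the following sketch directs the dump to a hypothetical dedicated slice that is smaller than 1 Tbyte:

# dumpadm -d /dev/dsk/c0t1d0s1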
If you use the smosservice command to add OS services to a UFS file system, a message is displayed stating that insufficient disk space is available. This error is specific to UFS file systems on EFI-labeled disks.
Workaround: Complete the following steps. An example command sequence is shown after the steps.
Apply the SMI VTOC disk label.
Re-create the file system.
Rerun the smosservice command.
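A minimal command sketch, assuming a hypothetical disk c0t1d0; the label command in the format utility's expert mode lets you choose between the SMI and EFI label types:

# format -e
(select the disk, run the label command, and choose the SMI label)
# newfs /dev/rdsk/c0t1d0s0

Then rerun your original smosservice command.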