This document summarizes all features in the Solaris 10 Operating System that are new or have been enhanced since the Solaris 9 OS was originally distributed in May 2002. This chapter summarizes new features in the current release, the Solaris 10 10/09 release. Chapter 2, What's New in the Solaris 10 5/09 Release summarizes new features in the previous Solaris 10 5/09 release. Chapter 3, What's New in the Solaris 10 10/08 Release summarizes new features in the Solaris 10 10/08 release. Chapter 4, What's New in the Solaris 10 5/08 Release summarizes new features in the Solaris 10 5/08 release. Chapter 5, What's New in the Solaris 10 8/07 Release summarizes new features in the Solaris 10 8/07 release. Chapter 6, What's New in the Solaris 10 11/06 Release summarizes new features in the Solaris 10 11/06 release. Chapter 7, What's New in the Solaris 10 6/06 Release summarizes new features in the Solaris 10 6/06 release. Chapter 8, What's New in the Solaris 10 1/06 Release summarizes new features in the Solaris 10 1/06 release. Chapter 9, What's New in the Solaris 10 3/05 Release summarizes new features in the Solaris 10 3/05 release. Chapter 9, What's New in the Solaris 10 3/05 Release also summarizes all features, sorted by the Software Express release in which each feature was introduced.
The following system administration features and enhancements have been added to the Solaris 10 10/09 release.
Starting with the Solaris 10 10/09 release, you can install and boot the Solaris OS from a disk that is up to 2 Tbytes in size. In previous Solaris releases, you could not install and boot the Solaris OS from a disk that was greater than 1 Tbyte in size.
In this Solaris release, you can use the VTOC label on a disk of any size. However, the space addressable by the VTOC is limited to 2 Tbytes. This feature enables disks that are larger than 2 Tbytes to be used as boot devices, although the usable space defined by the label is limited to 2 Tbytes.
This feature is available only on systems that run a 64-bit kernel. A minimum of 1 Gbyte of memory is required for x86–based systems.
For more information about the Solaris disk drivers and disk utilities that have been updated to support boot on disks greater than 1 Tbyte, see System Administration Guide: Devices and File Systems.
The pcitool utility enables system administrators to bind interrupts to specific hardware strands for enhanced performance. This utility exists in the public SUNWio-tools package. For more information about using pcitool, see the pcitool man page.
The following section summarizes new features in the ZFS file system.
ZFS and Flash installation support – In the Solaris 10 10/09 release, you can set up a JumpStart profile to identify a flash archive of a ZFS root pool. For more information, see the Solaris ZFS Administration Guide.
Setting ZFS user and group quotas – In previous Solaris releases, you could apply quotas and reservations to ZFS file systems to manage and reserve space. In this Solaris release, you can set a quota on the amount of space consumed by files that are owned by a particular user or group. You might consider setting user and group quotas in an environment with a large number of users or groups. You can set user or group quotas by using the userquota and groupquota properties as follows:
# zfs set userquota@user1=5G tank/data
# zfs set groupquota@staff=10G tank/staff/admins
You can display a user's or group's current quota setting as follows:
# zfs get userquota@user1 tank/data
NAME       PROPERTY         VALUE  SOURCE
tank/data  userquota@user1  5G     local
# zfs get groupquota@staff tank/staff/admins
NAME               PROPERTY          VALUE  SOURCE
tank/staff/admins  groupquota@staff  10G    local
Using ZFS ACL pass-through inheritance for execute permission – In previous Solaris releases, you could apply ACL inheritance so that all files are created with 0664 or 0666 permissions. If you want to optionally include the execute bit from the file creation mode in the inherited ACL, you can use pass-through inheritance for execute permission in this release.
If aclinherit=passthrough-x is enabled on a ZFS dataset, you can include execute permission for an output file that is generated from the cc or gcc tools. If the inherited ACL does not include execute permission, the executable output from the compiler is not executable until you use the chmod command to change the file's permissions.
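For example, you might enable this mode on a dataset as follows (the dataset name tank/data here is illustrative):
# zfs set aclinherit=passthrough-x tank/data
# zfs get aclinherit tank/data
NAME       PROPERTY    VALUE          SOURCE
tank/data  aclinherit  passthrough-x  local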
Using cache devices in your ZFS storage pool – In the Solaris 10 10/09 release, you can create a pool and specify cache devices, which are used to cache storage pool data. Cache devices provide an additional layer of caching between main memory and disk. Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
One or more cache devices can be specified when the pool is created. For example:
# zpool create pool mirror c0t2d0 c0t4d0 cache c0t0d0
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
        cache
          c0t0d0    ONLINE       0     0     0

errors: No known data errors
For information about determining whether using cache devices is appropriate for your environment, see the Solaris ZFS Administration Guide.
ZFS property enhancements – The Solaris 10 10/09 release provides the following ZFS property enhancements:
You can set ZFS file system properties at pool creation time. In the following example, compression is enabled on the ZFS file system that is created when the pool is created.
# zpool create -O compression=on pool mirror c0t1d0 c0t2d0
You can set two cache properties on a ZFS file system that allow you to control what is cached in the primary cache (ARC) or the secondary cache (L2ARC). The cache properties are set as follows:
primarycache – Controls what is cached in the ARC.
secondarycache – Controls what is cached in the L2ARC.
You can set these properties on an existing file system or when the file system is created. For example:
# zfs set primarycache=metadata tank/datab
# zfs create -o primarycache=metadata tank/newdatab
Some database environments might benefit from not caching user data. You will have to determine whether setting cache properties is appropriate for your environment.
For more information, see the Solaris ZFS Administration Guide.
You can use the space usage properties to identify space usage for clones, file systems, and volumes, but not snapshots. The properties are as follows:
usedbychildren – Identifies the amount of space that is used by children of this dataset, which would be freed if all the dataset's children were destroyed. The property abbreviation is usedchild.
usedbydataset – Identifies the amount of space that is used by this dataset itself, which would be freed if the dataset was destroyed, after first destroying any snapshots and removing any refreservation. The property abbreviation is usedds.
usedbyrefreservation – Identifies the amount of space that is used by a refreservation set on this dataset, which would be freed if the refreservation was removed. The property abbreviation is usedrefreserv.
usedbysnapshots – Identifies the amount of space that is consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties, because space can be shared by multiple snapshots. The property abbreviation is usedsnap.
These new properties break down the value of the used property into the various elements that consume space. In particular, the value of the used property breaks down as follows:
used property = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots
You can view these properties by using the zfs list -o space command. For example:
# zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool               33.2G    72K         0     21K              0        51K
rpool              27.0G  6.27G     20.5K     97K              0      6.27G
rpool/ROOT         27.0G  4.73G         0     21K              0      4.73G
rpool/ROOT/zfsBE   27.0G  4.73G     97.5M   4.63G              0          0
rpool/dump         27.0G  1.00G       16K   1.00G              0          0
rpool/export       27.0G    60K       16K     23K              0        21K
rpool/export/home  27.0G    21K         0     21K              0          0
rpool/swap         27.5G   553M         0   41.5M           512M          0
In this release, snapshots are omitted from zfs list output. The listsnaps pool property controls whether snapshot information is displayed by the zfs list command. If you use the zfs list -t snapshots command, snapshot information is displayed. The default value is off, which means snapshot information is not displayed by default.
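For example, you might enable snapshot listing on a hypothetical pool named pool as follows:
# zpool set listsnaps=on pool
# zpool get listsnaps pool
NAME  PROPERTY   VALUE  SOURCE
pool  listsnaps  on     local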
ZFS log device recovery – In the Solaris 10 10/09 release, ZFS identifies intent log failures in the zpool status command. FMA reports these errors as well. Both ZFS and FMA describe how to recover from an intent log failure.
For example, if the system shuts down abruptly before synchronous write operations are committed to a pool with a separate log device, you will see intent-log related error messages in the zpool status output. For information about resolving log device failures, see the Solaris ZFS Administration Guide.
Using ZFS ACL Sets – The Solaris 10 10/09 release provides the ability to apply NFSv4–style ACLs in sets, rather than apply different ACL permissions individually. The following ACL sets are provided:
full_set = all permissions
modify_set = all permissions except write_acl and write_owner
read_set = read_data, read_attributes, read_xattr, and read_acl
write_set = write_data, append_data, write_attributes, and write_xattr
These ACL sets are pre-defined and cannot be modified.
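For example, the following command grants the read_set permissions to a user (the user and file names here are illustrative):
# chmod A+user:otto:read_set:allow file.1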
Zone migration in a ZFS environment – In the Solaris 10 5/09 release, support for migrating zones in a ZFS environment with Live Upgrade is extended. For more information, see the Solaris ZFS Administration Guide.
ZFS installation and boot support – In the Solaris 10 10/08 release, you can install and boot a ZFS root file system. The initial installation option or the JumpStart feature is available to install a ZFS root file system. Or, you can use the Solaris Live Upgrade feature to migrate a UFS root file system to a ZFS root file system. ZFS support for swap and dump devices is also provided.
Rolling back a ZFS dataset without unmounting – In the Solaris 10 10/08 release, you can roll back a dataset without unmounting it first. This feature means that the zfs rollback -f option is no longer needed to force an unmount operation. The -f option is no longer supported, and is ignored if specified.
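For example, you can roll back a mounted file system directly (the dataset and snapshot names here are illustrative):
# zfs rollback tank/home/user1@tuesday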
Enhancements to the zfs send command – The Solaris 10 10/08 release includes the following enhancements to the zfs send command:
You can send all incremental streams from one snapshot to a cumulative snapshot. For example:
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool           428K  16.5G    20K  /pool
pool/fs         71K  16.5G    21K  /pool/fs
pool/fs@snapA   16K      -  18.5K  -
pool/fs@snapB   17K      -    20K  -
pool/fs@snapC   17K      -  20.5K  -
pool/fs@snapD     0      -    21K  -
# zfs send -I pool/fs@snapA pool/fs@snapD > /snaps/fs@combo
This syntax sends all incremental snapshots between fs@snapA and fs@snapD to fs@combo.
You can send an incremental stream from the origin snapshot to create a clone. The original snapshot must already exist on the receiving side to accept the incremental stream. For example:
# zfs send -I pool/fs@snap1 pool/clone@snapA > /snaps/fsclonesnap-I
.
.
# zfs receive -F pool/clone < /snaps/fsclonesnap-I
You can send a replication stream of all descendent file systems, up to the named snapshots. When received, all properties, snapshots, descendent file systems, and clones are preserved. For example:
# zfs send -R pool/fs@snap > snaps/fs-R
You can send an incremental replication stream. For example:
# zfs send -R -[iI] @snapA pool/fs@snapD
For extended examples, see the Solaris ZFS Administration Guide.
ZFS quotas and reservations for file system data only – In the Solaris 10 10/08 release, dataset quotas and reservations are provided that do not include descendents, such as snapshots and clones, in the space consumption accounting. The existing ZFS quota and reservation features remain as in previous Solaris releases.
The refquota property limits the amount of space a dataset can consume. This property enforces a hard limit on the amount of space that can be used. This hard limit does not include space used by descendents, such as snapshots and clones.
The refreservation property sets the minimum amount of space that is guaranteed to a dataset, not including its descendents.
For example, you can set a 10-Gbyte refquota for studentA that sets a 10-Gbyte hard limit of referenced space. For additional flexibility, you can set a 20-Gbyte quota that allows you to manage studentA's snapshots.
# zfs set refquota=10g tank/studentA
# zfs set quota=20g tank/studentA
ZFS storage pool properties – New ZFS storage pool property information is provided in the Solaris 10 10/08 release.
Display all pool attributes – You can use the zpool get all pool command to display all pool property information. For example:
# zpool get all users
NAME   PROPERTY     VALUE                 SOURCE
users  size         16.8G                 -
users  used         194K                  -
users  available    16.7G                 -
users  capacity     0%                    -
users  altroot      -                     default
users  health       ONLINE                -
users  guid         14526624140147884971  -
users  version      10                    default
users  bootfs       -                     default
users  delegation   on                    default
users  autoreplace  off                   default
users  cachefile    -                     default
users  failmode     wait                  default
The cachefile property – This release provides the cachefile property, which controls where pool configuration information is cached. All pools in the cache are automatically imported when the system boots. However, installation and clustering environments might need to cache this information in a different location so that pools are not automatically imported.
You can set this property to cache pool configuration in a different location that can be imported later by using the zpool import -c command. For most ZFS configurations, this property would not be used.
The cachefile property is not persistent and is not stored on disk. This property replaces the temporary property that was used to indicate that pool information should not be cached in previous Solaris releases.
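For example, a clustering or installation environment might direct the pool configuration to an alternate cache file at creation time and import the pool from that file later (the path and pool name here are illustrative):
# zpool create -o cachefile=/etc/zfs/altpool.cache pool mirror c0t1d0 c0t2d0
# zpool import -c /etc/zfs/altpool.cache pool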
The failmode property – This release provides the failmode property for determining the behavior of a catastrophic pool failure due to a loss of device connectivity or the failure of all devices in the pool. The failmode property can be set to these values: wait, continue, or panic. The default value is wait, which means you must reconnect the device or replace a failed device and clear the error with the zpool clear command.
The failmode property is set like other settable ZFS properties, which can be set either before or after the pool is created. For example:
# zpool set failmode=continue tank
# zpool get failmode tank
NAME  PROPERTY  VALUE     SOURCE
tank  failmode  continue  local
# zpool create -o failmode=continue users mirror c0t1d0 c1t1d0
ZFS and file system mirror mounts – In the Solaris 10 10/08 release, NFSv4 mount enhancements are provided to make ZFS file systems more accessible to NFS clients.
When file systems are created on the NFS server, the NFS client can automatically discover these newly created file systems within their existing mount of a parent file system.
For example, if the server neo already shares the tank file system and client zee has it mounted, /tank/baz is automatically visible on the client after it is created on the server.
zee# mount neo:/tank /mnt
zee# ls /mnt
baa     bar

neo# zfs create tank/baz

zee% ls /mnt
baa     bar     baz
zee% ls /mnt/baz
file1   file2
ZFS command history enhancements (zpool history) – In the Solaris 10 10/08 release, the zpool history command provides the following new features:
ZFS file system event information is displayed. For example:
# zpool history users
History for 'users':
2008-07-10.09:43:05 zpool create users mirror c1t1d0 c1t2d0
2008-07-10.09:43:48 zfs create users/home
2008-07-10.09:43:56 zfs create users/home/markm
2008-07-10.09:44:02 zfs create users/home/marks
2008-07-10.09:44:19 zfs snapshot -r users/home@yesterday
A -l option for displaying a long format that includes the user name, the host name, and the zone in which the operation was performed. For example:
# zpool history -l users
History for 'users':
2008-07-10.09:43:05 zpool create users mirror c1t1d0 c1t2d0 [user root on corona:global]
2008-07-10.09:43:13 zfs create users/marks [user root on corona:global]
2008-07-10.09:43:44 zfs destroy users/marks [user root on corona:global]
2008-07-10.09:43:48 zfs create users/home [user root on corona:global]
2008-07-10.09:43:56 zfs create users/home/markm [user root on corona:global]
2008-07-10.09:44:02 zfs create users/home/marks [user root on corona:global]
2008-07-11.10:44:19 zfs snapshot -r users/home@yesterday [user root on corona:global]
A -i option for displaying internal event information that can be used for diagnostic purposes. For example:
# zpool history -i users
History for 'users':
2008-07-10.09:43:05 zpool create users mirror c1t1d0 c1t2d0
2008-07-10.09:43:13 [internal create txg:6] dataset = 21
2008-07-10.09:43:13 zfs create users/marks
2008-07-10.09:43:48 [internal create txg:12] dataset = 27
2008-07-10.09:43:48 zfs create users/home
2008-07-10.09:43:55 [internal create txg:14] dataset = 33
2008-07-10.09:43:56 zfs create users/home/markm
2008-07-10.09:44:02 [internal create txg:16] dataset = 39
2008-07-10.09:44:02 zfs create users/home/marks
2008-07-10.09:44:19 [internal snapshot txg:21] dataset = 42
2008-07-10.09:44:19 [internal snapshot txg:21] dataset = 44
2008-07-10.09:44:19 [internal snapshot txg:21] dataset = 46
2008-07-10.09:44:19 zfs snapshot -r users/home@yesterday
Upgrading ZFS file systems (zfs upgrade) – In the Solaris 10 10/08 release, you can use the zfs upgrade command to upgrade your existing ZFS file systems with new file system enhancements. ZFS storage pools have a similar upgrade feature to provide pool enhancements to existing storage pools.
For example:
# zfs upgrade
This system is currently running ZFS filesystem version 2.

The following filesystems are out of date, and can be upgraded.  After being
upgraded, these filesystems (and any 'zfs send' streams generated from
subsequent snapshots) will no longer be accessible by older software versions.

VER  FILESYSTEM
---  ------------
 1   datab
 1   datab/users
 1   datab/users/area51
File systems that are upgraded and any streams that are created from those upgraded file systems by the zfs send command are not accessible on systems that are running older software releases.
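When you are ready to upgrade, you can upgrade a specific file system or, as in this sketch, all eligible file systems at once:
# zfs upgrade -a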
ZFS delegated administration – In the Solaris 10 10/08 release, you can delegate granular permissions to perform ZFS administration tasks to non-privileged users.
You can use the zfs allow and zfs unallow commands to grant and remove permissions.
You can modify the ability to use delegated administration with the pool's delegation property. For example:
# zpool get delegation users
NAME   PROPERTY    VALUE  SOURCE
users  delegation  on     default
# zpool set delegation=off users
# zpool get delegation users
NAME   PROPERTY    VALUE  SOURCE
users  delegation  off    local
By default, the delegation property is enabled.
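For example, the following commands delegate the ability to create, destroy, mount, and snapshot datasets within a file system to a user, and then display the delegated permissions (the user and dataset names here are illustrative):
# zfs allow marks create,destroy,mount,snapshot tank/home
# zfs allow tank/home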
Setting up separate ZFS logging devices – The ZFS intent log (ZIL) is provided to satisfy POSIX requirements for synchronous transactions. For example, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the ZIL is allocated from blocks within the main storage pool. However, better performance in the Solaris 10 10/08 release might be possible by using separate ZIL devices in your ZFS storage pool, such as with NVRAM or a dedicated disk.
Log devices for the ZIL are not related to database log files.
You can set up a ZFS logging device when the storage pool is created or after the pool is created. For examples of setting up log devices, see Solaris ZFS Administration Guide.
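As a brief illustration, a separate log device might be specified when a pool is created (the device names here are hypothetical):
# zpool create pool mirror c0t0d0 c0t1d0 log c0t2d0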
Creating intermediate ZFS datasets – In the Solaris 10 10/08 release, you can use the -p option with the zfs create, zfs clone, and zfs rename commands to quickly create intermediate datasets that do not already exist.
For example, create ZFS datasets (users/area51) in the datab storage pool.
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
datab  106K  16.5G    18K  /datab
# zfs create -p -o compression=on datab/users/area51
If the intermediate dataset exists during the create operation, the operation completes successfully.
Properties specified apply to the target dataset, not to the intermediate datasets. For example:
# zfs get mountpoint,compression datab/users/area51
NAME                PROPERTY     VALUE                SOURCE
datab/users/area51  mountpoint   /datab/users/area51  default
datab/users/area51  compression  on                   local
The intermediate dataset is created with the default mount point. Any additional properties are disabled for the intermediate dataset. For example:
# zfs get mountpoint,compression datab/users
NAME         PROPERTY     VALUE         SOURCE
datab/users  mountpoint   /datab/users  default
datab/users  compression  off           default
For more information, see zfs(1M).
ZFS hot-plugging enhancements – In the Solaris 10 10/08 release, ZFS more effectively responds to removed devices and provides a mechanism to automatically identify devices that are inserted:
You can replace an existing device with an equivalent device without having to use the zpool replace command.
The autoreplace property controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device that is found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The default behavior is off. An example of enabling this property follows this list.
The storage pool state REMOVED is provided when a device or hot spare has been removed, if the device was physically removed while the system was running. A hot spare device is substituted for the removed device, if available.
If a device is removed and then inserted, the device is placed online. If a hot spare was activated when the device is re-inserted, the hot spare is removed when the online operation completes.
Automatic detection when devices are removed or inserted is hardware-dependent and might not be supported on all platforms. For example, USB devices are automatically configured upon insertion. However, you might have to use the cfgadm -c configure command to configure a SATA drive.
Hot spares are checked periodically to make sure that they are online and available.
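For example, automatic replacement can be enabled on an existing pool as follows (the pool name here is illustrative):
# zpool set autoreplace=on pool
# zpool get autoreplace pool
NAME  PROPERTY     VALUE  SOURCE
pool  autoreplace  on     local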
For more information, see zpool(1M).
Recursively renaming ZFS snapshots (zfs rename -r) – In the Solaris 10 10/08 release, you can recursively rename all descendent ZFS snapshots by using the zfs rename -r command.
For example, snapshot a set of ZFS file systems.
# zfs snapshot -r users/home@today
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
users                    216K  16.5G    20K  /users
users/home                76K  16.5G    22K  /users/home
users/home@today            0      -    22K  -
users/home/markm          18K  16.5G    18K  /users/home/markm
users/home/markm@today      0      -    18K  -
users/home/marks          18K  16.5G    18K  /users/home/marks
users/home/marks@today      0      -    18K  -
users/home/neil           18K  16.5G    18K  /users/home/neil
users/home/neil@today       0      -    18K  -
Then, rename the snapshots the following day.
# zfs rename -r users/home@today @yesterday
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
users                        216K  16.5G    20K  /users
users/home                    76K  16.5G    22K  /users/home
users/home@yesterday            0      -    22K  -
users/home/markm              18K  16.5G    18K  /users/home/markm
users/home/markm@yesterday      0      -    18K  -
users/home/marks              18K  16.5G    18K  /users/home/marks
users/home/marks@yesterday      0      -    18K  -
users/home/neil               18K  16.5G    18K  /users/home/neil
users/home/neil@yesterday       0      -    18K  -
Snapshots are the only datasets that can be renamed recursively.
GZIP compression now available for ZFS – In the Solaris 10 10/08 release, you can set gzip compression on ZFS file systems in addition to lzjb compression. You can specify compression as gzip, which uses the default gzip compression level, or as gzip-N, where N equals 1 through 9. For example:
# zfs create -o compression=gzip users/home/snapshots
# zfs get compression users/home/snapshots
NAME                  PROPERTY     VALUE  SOURCE
users/home/snapshots  compression  gzip   local
# zfs create -o compression=gzip-9 users/home/oldfiles
# zfs get compression users/home/oldfiles
NAME                 PROPERTY     VALUE   SOURCE
users/home/oldfiles  compression  gzip-9  local
Storing multiple copies of ZFS user data – A ZFS file system automatically stores metadata multiple times across different disks, if possible, as a reliability feature. This feature is known as ditto blocks. In the Solaris 10 10/08 release, you can specify that multiple copies of user data are also stored per file system by using the zfs set copies command. For example:
# zfs set copies=2 users/home
# zfs get copies users/home
NAME        PROPERTY  VALUE  SOURCE
users/home  copies    2      local
Available values are 1, 2, or 3. The default value is 1. These copies are in addition to any pool-level redundancy, such as in a mirrored or RAID-Z configuration.
For more information about using this property, see the Solaris ZFS Administration Guide.
ZFS command history (zpool history) – In the Solaris 10 8/07 release, ZFS automatically logs successful zfs and zpool commands that modify pool state information. This feature enables you or Sun support personnel to identify the exact ZFS commands that were executed to troubleshoot an error scenario.
Improved storage pool status information (zpool status) – In the Solaris 10 8/07 release, you can use the zpool status -v command to display a list of files with persistent errors. Previously, you had to use the find -inum command to identify the file names from the list of displayed inodes.
ZFS and Solaris iSCSI improvements – In the Solaris 10 8/07 release, you can create a ZFS volume as a Solaris iSCSI target device by setting the shareiscsi property on the ZFS volume. This method is a convenient way to quickly set up a Solaris iSCSI target. For example:
# zfs create -V 2g tank/volumes/v2
# zfs set shareiscsi=on tank/volumes/v2
# iscsitadm list target
Target: tank/volumes/v2
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
After the iSCSI target is created, you would set up the iSCSI initiator. For information about setting up a Solaris iSCSI initiator, see Chapter 14, Configuring Solaris iSCSI Targets and Initiators (Tasks), in System Administration Guide: Devices and File Systems.
For more information about managing a ZFS volume as an iSCSI target, see the Solaris ZFS Administration Guide.
ZFS property improvements
ZFS xattr property – In the Solaris 10 8/07 release, you can use the xattr property to disable or enable extended attributes for a specific ZFS file system. The default value is on.
ZFS canmount property – In the Solaris 10 8/07 release, you can use the canmount property to specify whether a dataset can be mounted by using the zfs mount command.
ZFS user properties – In the Solaris 10 8/07 release, ZFS supports user properties, in addition to the standard native properties that can either export internal statistics or control ZFS file system behavior. User properties have no effect on ZFS behavior, but you can use them to annotate datasets with information that is meaningful in your environment.
Setting properties when creating ZFS file systems – In the Solaris 10 8/07 release, you can set properties when you create a file system, in addition to setting properties after the file system is created.
The following examples illustrate equivalent syntax:
# zfs create tank/home
# zfs set mountpoint=/export/zfs tank/home
# zfs set sharenfs=on tank/home
# zfs set compression=on tank/home
Or, set the properties when the file system is created.
# zfs create -o mountpoint=/export/zfs -o sharenfs=on -o compression=on tank/home
Display all ZFS file system information – In the Solaris 10 8/07 release, you can use various forms of the zfs get command to display information about all datasets if you do not specify a dataset. In previous releases, you could not retrieve information for all datasets with the zfs get command.
For example:
# zfs get -s local all
tank/home          atime  off  local
tank/home/bonwick  atime  off  local
tank/home/marks    quota  50G  local
New zfs receive -F option – In the Solaris 10 8/07 release, you can use the new -F option to the zfs receive command to force a rollback of the file system to the most recent snapshot before doing the receive operation. Using this option might be necessary when the file system is modified between the time a rollback occurs and the receive operation is initiated.
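For example, a forced receive of a hypothetical incremental stream might look like this:
# zfs receive -F tank/data < /snaps/data-incremental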
Recursive ZFS snapshots – In the Solaris 10 11/06 release, recursive snapshots are available. When you use the zfs snapshot command to create a file system snapshot, you can use the -r option to recursively create snapshots for all descendant file systems. In addition, using the -r option recursively destroys all descendant snapshots when a snapshot is destroyed.
Double Parity RAID-Z (raidz2) – In the Solaris 10 11/06 release, a replicated RAID-Z configuration can now have either single parity or double parity, which means that one or two device failures, respectively, can be sustained without any data loss. You can specify the raidz2 keyword for a double-parity RAID-Z configuration. Or, you can specify the raidz or raidz1 keyword for a single-parity RAID-Z configuration.
Hot spares for ZFS storage pool devices – Starting in the Solaris 10 11/06 release, the ZFS hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that if an active device in the pool fails, the hot spare automatically replaces the failed device. Or, you can manually replace a device in a storage pool with a hot spare.
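For example, a hot spare might be designated when a pool is created, or added afterward (the device names here are hypothetical):
# zpool create tank mirror c1t1d0 c2t1d0 spare c1t2d0
# zpool add tank spare c2t2d0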
Replacing a ZFS file system with a ZFS clone (zfs promote) – In the Solaris 10 11/06 release, the zfs promote command enables you to replace an existing ZFS file system with a clone of that file system. This feature is helpful when you want to run tests on an alternative version of a file system and then make that alternative version the active file system.
ZFS and zones improvements – In the Solaris 10 11/06 release, the ZFS and zones interaction is improved. On a Solaris system with zones installed, you can use the zoneadm clone feature to copy the data from an existing source ZFS zonepath to a target ZFS zonepath on your system. You cannot use the ZFS clone feature to clone the non-global zone. You must use the zoneadm clone command. For more information, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Upgrading ZFS storage pools (zpool upgrade) – Starting in the Solaris 10 6/06 release, you can upgrade your storage pools to a newer version to take advantage of the latest features by using the zpool upgrade command. In addition, the zpool status command has been modified to notify you when your pools are running older versions.
Clearing device errors – Starting in the Solaris 10 6/06 release, you can use the zpool clear command to clear error counts that are associated with a device or the pool. Previously, error counts were cleared when a device in a pool was brought online with the zpool online command.
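For example, the following commands clear the error counts for a single device and then for the entire pool (the pool and device names here are illustrative):
# zpool clear tank c1t1d0
# zpool clear tank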
Recovering destroyed pools – In the Solaris 10 6/06 release, the zpool import -D command enables you to recover pools that were previously destroyed with the zpool destroy command.
ZFS backup and restore commands renamed – In the Solaris 10 6/06 release, the zfs backup and zfs restore commands are renamed to zfs send and zfs receive to more accurately describe their function. Their function is to save and restore ZFS data stream representations.
Compact NFSv4 ACL format – Starting in the Solaris 10 6/06 release, three NFSv4 ACL formats are available: verbose, positional, and compact. The new compact and positional ACL formats are available to set and display ACLs. You can use the chmod command to set all three ACL formats. Use the ls -V command to display compact and positional ACL formats. Use the ls -v command to display verbose ACL formats.
Temporarily take a device offline – Starting in the Solaris 10 6/06 release, you can use the zpool offline -t command to take a device offline temporarily. When the system is rebooted, the device is automatically returned to the ONLINE state.
ZFS is integrated with Fault Manager – Starting in the Solaris 10 6/06 release, a ZFS diagnostic engine that is capable of diagnosing and reporting pool failures and device failures is included. Checksum, I/O, and device errors associated with pool or device failures are also reported. Diagnostic error information is written to the console and the /var/adm/messages file. In addition, detailed information about recovering from a reported error can be displayed by using the zpool status command.
For more information about these improvements and changes, see the Solaris ZFS Administration Guide.
See the following What's New sections for related ZFS feature information:
The LDAP name service is enhanced to support account locking and password aging functionality by using the data in the shadow database stored on a configured LDAP server. This support enables the passwd(1) utility and the pam_unix_*(5) PAM modules to behave almost identically when handling account locking and password aging for local accounts and remote LDAP user accounts. Therefore, using the pam_ldap(5) module is no longer the only way to implement password policy and account control for the LDAP name service. pam_unix_*(5) can be used to obtain the same consistent results as with the files and nisplus name services.
For more information, see System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).
SunVTS™ 7.0 Patch Set 6 is integrated in the Solaris 10 10/09 release. SunVTS 7.0 Patch Set 6 follows a conventional three-tier architecture model. The patch set includes a browser-based user interface (BUI), a Java technology-based middle server, and a diagnostic agent. Enhancements to the SunVTS infrastructure include:
Support for solid-state drives (SSD) added to vtsk
Default level of logical tests enhanced to adapt to system configuration size
Minimum and maximum values or hard limit for reserve swap in vtsk
Ability to change the sequence of Logical Test execution
The Solaris 10 10/09 release includes the following enhancements to memory and CPU diagnostics:
Coverage is added for X86-L3$ in l3sramtest
Enhanced vmemtest, fputest, and l2sramtest provide callbacks to return swap requirements
Tuned logical tests for x86 systems and UltraSPARC® T2 Processor-based systems
The Solaris 10 10/09 release also includes the following enhancements to the I/O diagnostics:
disktest is enhanced to run in Read only mode if Write or Read option is not applicable
Disk Logical Test is tuned for x86, UltraSPARC T2 Processor, and UltraSPARC IV systems
disktest options are automated to run solid-state drive (SSD) and hard disk drive (HDD) Tasks in Disk LT
Selection of test-options is automated in netlbtest
Support in disktest and iobustest for safe and unsafe test options
The following installation feature has been added to the Solaris 10 10/09 release.
Starting with the Solaris 10 10/09 release, SVR4 package commands run faster. This enhancement means that the Solaris installation technologies, such as initial installations, upgrades, live upgrades, and zone installations, perform significantly faster.
The following system resources feature has been added to the Solaris 10 10/09 release.
The zones parallel patching enhancement to the standard Solaris 10 patch utilities improves patching performance on systems with multiple zones by allowing non-global zones to be patched in parallel. In releases prior to Solaris 10 10/09, this feature is delivered through the patch utilities patch, 119254-66 or later revision for SPARC and 119255-66 or later revision for x86. The global zone is still patched before the non-global zones are patched.
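As a sketch, assuming the patch utilities read the level of parallelism from the num_proc setting in the /etc/patch/pdo.conf file (verify the file and keyword against your patch utilities documentation), the number of non-global zones patched in parallel might be controlled as follows:
# grep num_proc= /etc/patch/pdo.conf
num_proc=4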
For more information, see the following:
When using the Sun xVM hypervisor in a Solaris OS, fully virtualized guest domains are referred to as hardware-assisted virtual machines (HVMs). HVM guests that also use paravirtualized (PV) drivers, referred to as HVM + PVIO guests, provide better performance.
Releases starting with the Solaris 10 10/08 release are shipped with the Solaris PV drivers. A patch is available for Solaris 10 5/08.
For more information, see “Solaris 10 releases” in Guests That Are Known to Work in System Administration Guide: Virtualization Using the Solaris Operating System. This guide also discusses HVM-capable machines.
The following device management feature has been added to the Solaris 10 10/09 release.
A new SMF service, svc:/network/iscsi/initiator:default, is introduced to control the availability of iSCSI devices. The SMF service also controls when discovery and enumeration of iSCSI devices start during OS startup.
Other services that rely on the availability of iSCSI devices can customize their dependency on this new iSCSI initiator service. For more information, see the iscsi(7D) man page.
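For example, you can check and enable the service with the standard SMF commands (the output shown is illustrative):
# svcs svc:/network/iscsi/initiator:default
STATE          STIME    FMRI
online         10:29:45 svc:/network/iscsi/initiator:default
# svcadm enable svc:/network/iscsi/initiator:default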
Starting with the Solaris 10 10/09 release, Solaris MPxIO supports the LSI 6180 controller-based storage arrays.
The following system performance feature has been added to the Solaris 10 10/09 release.
The callout subsystem is redesigned to include the following features:
Performance and scalability improvements:
Per-CPU data structures to minimize mutex contention
Per-CPU callout processing to improve scalability
Event-based implementation that avoids polling overhead
High-resolution timers for improved functionality. Many API calls now use high-resolution timers and therefore do not experience the latency caused by the system rounding off the specified intervals. These calls include commonly used interfaces such as poll() and nanosleep().
Observability improvements:
Comprehensive set of options for the MDB dcmd callout (an example invocation follows this list)
New MDB dcmd calloutid
New callout kstats
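For example, the callout tables can be inspected from the kernel debugger. A minimal invocation of the callout dcmd, with its options omitted, might look like this:
# echo "::callout" | mdb -k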
The following driver features and enhancements have been added to the Solaris 10 10/09 release.
The Solaris 10 5/09 release includes many enhancements to the Solaris 10GbE drivers. The nxge 10GbE driver includes the following enhancements:
TCP receive throughput is improved by 40% at 8 connections and by over 90% for 32, 100, 400, and 1000 connections
TCP transmit throughput is improved by almost 80% at 8 connections and by over 100% for the higher connection counts
UDP transmit throughput is improved by 80% for 64-byte messages and by over 160% for 8-Kbyte messages
The ixgbe driver on x86 systems includes the following enhancements:
TCP transmit throughput is improved by almost 100% for 8 or more connections
TCP receive rates reach the 10-Gb line rate for 8, 32, 100, 400, and 1000 connections
UDP transmit maximum throughput doubles to the 10-Gb line rate
Ping-pong data rates improve from 2x to 3x as the message size increases from 64 bytes to 512 bytes
Solaris 10GbE drivers can now deliver close to line data rates, providing optimal performance on 10-Gigabit networks.
The Solaris 10 5/09 release includes the following InfiniBand-related enhancements:
InfiniBand Host Channel Adapter (HCA) – The Solaris 10 5/09 release includes a significantly enhanced InfiniBand driver for the Mellanox ConnectX HCA. The InfiniBand driver enables InfiniBand protocols to operate over both Double Data Rate (DDR) and Quad Data Rate (QDR) InfiniBand fabrics. The driver is also integrated into the Solaris FMA framework for fault management and supports relaxed ordering on SPARC systems.
InfiniBand Transport Framework (IBTF) – The Solaris 10 5/09 release includes a significantly improved IBTF implementation that provides enhanced support for running RDMA-based InfiniBand protocols in Solaris. InfiniBand for SPARC now also supports PCI Dynamic Reconfiguration (DR).
Internet Protocol over InfiniBand (IPoIB) – The Solaris 10 5/09 release includes a significantly improved IPoIB driver (ibd) supporting Internet RFCs 4391 and 4392. The IPoIB driver in the Solaris 10 5/09 release supports the User Datagram (UD) mode of operation, IPv4 and IPv6 addressing, and takes advantage of hardware offloads in the ConnectX HCA for improved throughput at lower CPU utilization. IPoIB-UD enables the use of any TCP/IP application protocol such as SSH, HTTP, FTP, NFS, and iSCSI over both Double Data Rate (DDR) and Quad Data Rate (QDR) InfiniBand fabrics. The new IPoIB driver for both SPARC and x86 platforms offers a significant performance boost over the previously available driver.
Sockets Direct Protocol (SDP) – The Solaris 10 5/09 release includes a significantly improved SDP driver and sockfs implementation. SDP is a transport protocol layered over the InfiniBand Transport Framework (IBTF). SDP is a standard implementation based on Annex 4 of the InfiniBand Architecture Specification Vol 1. The SDP protocol provides reliable byte-stream, flow-controlled, two-way data transmission that is similar to the Transmission Control Protocol (TCP). InfiniBand programmers use SDP through the libsdp C library that supports a sockets-based SOCK_STREAM interface to application programs. The SDP protocol supports graceful close, IPv4 and IPv6 addressing, the connecting/accepting connection model, out-of-band (OOB) data, and common socket options. The SDP protocol also supports kernel bypass data transfers and data transfers from send-upper-layer-protocol (ULP) buffers to receive ULP buffers.
Reliable Datagram Sockets (RDS) – The Solaris 10 5/09 release includes an improved RDSv1 driver certified for use with Oracle RAC (Real Application Clusters) 10gR2.
User-Level Direct Access Programming Library (uDAPL) – The Solaris 10 5/09 release includes an updated uDAPL over InfiniBand API that conforms to the latest Direct Access Transport (DAT) Collaborative uDAPL 1.2 specification.
The mpt_sas(7D) driver supports SAS, SATA, and SMP physical devices, as well as virtual devices created by using the Integrated RAID feature. The new architecture for SAS drivers supports the following features:
SAS initiator ports (iports)
Dynamic reconfiguration of SAS, SATA, and SMP targets
FWARC 2008/013-compliant device representation
Multipathing
For more information, see the mpt_sas(7D) man page.
The Solaris 10 10/09 release includes support for new chipsets such as the bcm5716c and bcm5716s.
The Solaris 10 10/09 release provides an interrupt-remapping table that isolates interrupts on at least the Intel Nehalem platform and ensures that devices can only use authorized interrupts and that the interrupts are properly targeted. This feature improves system reliability, availability, and serviceability (RAS).
SATA tape devices are now supported by the AHCI driver. Users can connect or hot-plug a SATA tape drive to the AHCI controller through a SATA or eSATA cable. The error-handling mechanism is also enhanced for SATA ATAPI devices, including CD, DVD, and tape devices.
For more information, see the ahci(7D) man page.
The mr_sas MegaRAID SAS2.0 controller host bus adapter driver is a SCSA-compliant nexus driver that supports the LSI MegaRAID SAS 92xx series of controllers and the StorageTek 6Gb/s SAS RAID HBA series of controllers.
Some of the supported RAID features include:
RAID levels 0, 1, 5, and 6, and RAID spans 10, 50 and 60
Online capacity expansion (OCE)
Online RAID level migration (RLM)
Auto resume after loss of system power during array rebuild or reconstruction (OCE or RLM)
Configurable stripe size up to 1 Mbyte
Capability to check consistency for background data integrity
Patrol read for media scanning and repairing
64 logical drive support
Up to 64 TB LUN support
Automatic rebuild, and global and dedicated hot-spare support
Starting with the Solaris 10 10/09 release, the ixgbe driver supports the Intel 82599 10Gb PCI Express Ethernet Controller chipset.
Starting with the Solaris 10 10/09 release, the ixgbe driver supports the Intel 82598 10Gb PCI Express Ethernet Controller chipset.
The following freeware features and enhancements have been added to the Solaris 10 10/09 release.
The Solaris 10 10/09 release includes the latest version of the Network Time Protocol, which supports enhanced authentication, IPv6, and greater performance. For more information, see the ntpdate(1M) man page.
The Solaris 10 10/09 release supports PostgreSQL versions 8.1.17, 8.2.13, and 8.3.7.
The Solaris 10 10/09 release supports Samba 3.0.35.