The following system administration features and enhancements have been added to the Solaris 10 10/09 release.
Starting with the Solaris 10 10/09 release, you can install and boot the Solaris OS from a disk that is up to 2 Tbytes in size. In previous Solaris releases, you could not install and boot the Solaris OS from a disk that was greater than 1 Tbyte in size.
In this Solaris release, you can use the VTOC label on a disk of any size. However, the space that is addressable by the VTOC label is limited to 2 Tbytes. This feature enables disks that are larger than 2 Tbytes to be used as boot drives, although the usable space under the VTOC label remains limited to 2 Tbytes.
This feature is available only on systems that run a 64-bit kernel. A minimum of 1 Gbyte of memory is required for x86–based systems.
For more information about the Solaris disk drivers and disk utilities that have been updated to support boot on disks greater than 1 Tbyte, see System Administration Guide: Devices and File Systems.
The pcitool utility enables system administrators to bind interrupts to specific hardware strands for enhanced performance. This utility exists in the public SUNWio-tools package. For more information about using pcitool, see the pcitool man page.
The following section summarizes new features in the ZFS file system.
ZFS and Flash installation support – In the Solaris 10 10/09 release, you can set up a JumpStart profile to identify a flash archive of a ZFS root pool. For more information, see the Solaris ZFS Administration Guide.
Setting ZFS user and group quotas – In previous Solaris releases, you could apply quotas and reservations to ZFS file systems to manage and reserve space. In this Solaris release, you can set a quota on the amount of space consumed by files that are owned by a particular user or group. You might consider setting user and group quotas in an environment with a large number of users or groups. You can set user or group quotas by using the zfs userquota and zfs groupquota properties as follows:
# zfs set userquota@user1=5G tank/data
# zfs set groupquota@staff=10G tank/staff/admins
You can display a user's or group's current quota setting as follows:
# zfs get userquota@user1 tank/data
NAME               PROPERTY          VALUE  SOURCE
tank/data          userquota@user1   5G     local
# zfs get groupquota@staff tank/staff/admins
NAME               PROPERTY          VALUE  SOURCE
tank/staff/admins  groupquota@staff  10G    local
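The per-user and per-group space consumption that these quotas limit can be inspected with the zfs userspace and zfs groupspace subcommands. A brief sketch, using the dataset names from the example above:

```
# zfs userspace tank/data            # per-user space used and quota in this file system
# zfs groupspace tank/staff/admins   # per-group space used and quota in this file system
```

Each command reports one row per user or group, showing the space used and the applicable quota, if any.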
Using ZFS ACL passthrough inheritance for execute permission – In previous Solaris releases, you could apply ACL inheritance so that all files are created with 0664 or 0666 permissions. In this release, if you want to optionally include the execute bit from the file creation mode in the inherited ACL, you can use passthrough inheritance for execute permission.
If aclinherit=passthrough-x is enabled on a ZFS dataset, you can include execute permission for an output file that is generated from the cc or gcc tools. If the inherited ACL does not include execute permission, the executable output from the compiler will not be executable until you use the chmod command to change the file's permissions.
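For example, passthrough-x inheritance might be enabled as follows. This is a sketch; the dataset name tank/src and the source file prog.c are illustrative:

```
# zfs set aclinherit=passthrough-x tank/src
# cd /tank/src
# cc -o prog prog.c
```

Because the compiler creates prog with execute permission in its creation mode, the inherited ACL now carries that execute permission through, and prog can be run without a subsequent chmod.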
Using cache devices in your ZFS storage pool – In the Solaris 10 10/09 release, you can create a pool and specify cache devices, which are used to cache storage pool data. Cache devices provide an additional layer of caching between main memory and disk. Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
One or more cache devices can be specified when the pool is created. For example:
# zpool create pool mirror c0t2d0 c0t4d0 cache c0t0d0
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
        cache
          c0t0d0    ONLINE       0     0     0

errors: No known data errors
For information about determining whether using cache devices is appropriate for your environment, see the Solaris ZFS Administration Guide.
ZFS property enhancements – The Solaris 10 10/09 release provides the following ZFS property enhancements:
You can set ZFS file system properties at pool creation time. In the following example, compression is enabled on the ZFS file system that is created when the pool is created.
# zpool create -O compression=on pool mirror c0t1d0 c0t2d0
You can set two cache properties on a ZFS file system that allow you to control what is cached in the primary cache (ARC) or the secondary cache (L2ARC). The cache properties are set as follows:
primarycache – Controls what is cached in the ARC.
secondarycache – Controls what is cached in the L2ARC.
You can set these properties on an existing file system or when the file system is created. For example:
# zfs set primarycache=metadata tank/datab
# zfs create -o primarycache=metadata tank/newdatab
Some database environments might benefit from not caching user data. You will have to determine whether setting cache properties is appropriate for your environment.
For more information, see the Solaris ZFS Administration Guide.
You can use the space usage properties to identify space usage for clones, file systems, and volumes, but not snapshots. The properties are as follows:
usedbychildren – Identifies the amount of space that is used by children of this dataset, which would be freed if all the dataset's children were destroyed. The property abbreviation is usedchild.
usedbydataset – Identifies the amount of space that is used by this dataset itself, which would be freed if the dataset was destroyed, after first destroying any snapshots and removing any refreservation. The property abbreviation is usedds.
usedbyrefreservation – Identifies the amount of space that is used by a refreservation set on this dataset, which would be freed if the refreservation was removed. The property abbreviation is usedrefreserv.
usedbysnapshots – Identifies the amount of space that is consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties, because space can be shared by multiple snapshots. The property abbreviation is usedsnap.
These new properties break down the value of the used property into the various elements that consume space. In particular, the value of the used property breaks down as follows:
used property = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots
You can view these properties by using the zfs list -o space command. For example:
# zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool               33.2G    72K         0     21K              0        51K
rpool              27.0G  6.27G     20.5K     97K              0      6.27G
rpool/ROOT         27.0G  4.73G         0     21K              0      4.73G
rpool/ROOT/zfsBE   27.0G  4.73G     97.5M   4.63G              0          0
rpool/dump         27.0G  1.00G       16K   1.00G              0          0
rpool/export       27.0G    60K       16K     23K              0        21K
rpool/export/home  27.0G    21K         0     21K              0          0
rpool/swap         27.5G   553M         0   41.5M           512M          0
In this release, snapshots are omitted from the default zfs list output. The listsnaps pool property controls whether snapshot information is displayed by the zfs list command; its default value is off, which means snapshot information is not displayed by default. You can still display snapshot information by using the zfs list -t snapshot command.
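To restore the previous behavior of listing snapshots by default, you can enable the listsnaps property on the pool. A sketch, with the pool name tank assumed:

```
# zpool set listsnaps=on tank
# zfs list
```

With listsnaps enabled, the zfs list command again includes snapshots without requiring the -t snapshot option.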
ZFS log device recovery – In the Solaris 10 10/09 release, ZFS identifies intent log failures in the zpool status command. FMA reports these errors as well. Both ZFS and FMA describe how to recover from an intent log failure.
For example, if the system shuts down abruptly before synchronous write operations are committed to a pool with a separate log device, you will see intent-log related error messages in the zpool status output. For information about resolving log device failures, see the Solaris ZFS Administration Guide.
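In many cases, after the failed log device is replaced or the loss of the unreplayed log records is accepted, the pool can be returned to service with the zpool clear command. A sketch, with the pool name assumed to be pool:

```
# zpool status -x     # identify the pool that reports the intent log failure
# zpool clear pool    # clear the error so that the pool resumes normal operation
```

See the Solaris ZFS Administration Guide for the complete recovery procedure, including when a log device must be replaced rather than cleared.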
Using ZFS ACL Sets – The Solaris 10 10/09 release provides the ability to apply NFSv4-style ACLs in sets, rather than applying different ACL permissions individually. The following ACL sets are provided:
full_set = all permissions
modify_set = all permissions except write_acl and write_owner
read_set = read_data, read_attributes, read_xattr, and read_acl
write_set = write_data, append_data, write_attributes, and write_xattr
These ACL sets are pre-defined and cannot be modified.
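An ACL set is granted with the same chmod syntax that is used for individual ACL permissions. A sketch, in which the user name user1 and the file name file.1 are illustrative:

```
# chmod A+user:user1:read_set:allow file.1
# ls -v file.1
```

The ls -v output shows that the single read_set entry granted user1 the read_data, read_attributes, read_xattr, and read_acl permissions in one step.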
For more information about these improvements and changes, see the Solaris ZFS Administration Guide.
See the following What's New sections for related ZFS feature information:
The LDAP name service is enhanced to support account locking and password aging by using the data in the shadow database that is stored on a configured LDAP server. This support enables the passwd(1) utility and the pam_unix_*(5) PAM modules to handle account locking and password aging in almost the same way for local accounts and for remote LDAP user accounts. Therefore, using the pam_ldap(5) module is no longer the only way to implement password policy and account control for the LDAP name service. The pam_unix_*(5) modules can be used to obtain the same consistent results as with the files and nisplus name services.
For more information, see System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).
The SunVTS™ 7.0 Patch Set 6 is integrated in the Solaris 10 10/09 release. SunVTS 7.0 Patch Set 6 follows a conventional three-tier architecture model. The patch set includes a browser-based user interface (BUI), a Java technology-based middle server, and a diagnostic agent. Enhancements to the SunVTS infrastructure include:
Support for solid-state drives (SSD) added to vtsk
Default level of logical tests enhanced to adapt to system configuration size
Minimum and maximum values or hard limit for reserve swap in vtsk
Ability to change the sequence of Logical Test execution
The Solaris 10 10/09 release includes the following enhancements to memory and CPU diagnostics:
Coverage is added for X86-L3$ in l3sramtest
Enhanced vmemtest, fputest, and l2sramtest provide callbacks to return swap requirements
Tuned logical tests for x86 systems and UltraSPARC® T2 Processor-based systems
The Solaris 10 10/09 release also includes the following enhancements to the I/O diagnostics:
disktest is enhanced to run in read-only mode if the Write or Read option is not applicable
Disk Logical Test is tuned for x86, UltraSPARC T2 Processor, and UltraSPARC IV systems
disktest options are automated to run solid-state drive (SSD) and hard disk drive (HDD) Tasks in Disk LT
Selection of test-options is automated in netlbtest
Support in disktest and iobustest for safe and unsafe test options