1. What's New in the Oracle Solaris 10 9/10 Release
Oracle Solaris Auto Registration
SPARC: Support for ITU Construction Tools on SPARC Platforms
Oracle Solaris Upgrade Enhancement for Oracle Solaris Zone-Cluster Nodes
Virtualization Enhancements for Oracle Solaris Zones
Migrating a Physical Oracle Solaris 10 System Into a Zone
Updating Packages by Using the New zoneadm attach -U Option
Virtualization Enhancements for Oracle VM Server for SPARC
Memory Dynamic Reconfiguration Capability
Virtual Disk Multipathing Enhancements
Virtual Domain Information Command and API
System Administration Enhancements
Oracle Solaris ZFS Features and Enhancements
x86: Support for the IA32_ENERGY_PERF_BIAS MSR
Support for Multiple Disk Sector Size
Sparse File Support in the cpio Command
x86: 64-Bit libc String Functions Improvements With SSE
x86: Intel AES-NI Optimization
New Oracle Solaris Unicode Locales
Device Management Enhancements
x86: HP Smart Array HBA Driver
x86: Support for Broadcom NetXtreme II 10 Gigabit Ethernet NIC Driver
x86: New SATA HBA Driver, bcm_sata, for Broadcom HT1000 SATA Controllers
Support for SATA/AHCI Port Multiplier
Support for Netlogic NLP2020 PHY in the nxge Driver
BIND 9.6.1 for the Oracle Solaris 10 OS
Open Fabrics User Verbs Primary Kernel Components
InfiniBand Infrastructure Enhancements
Support for the setxkbmap Command
ixgbe Driver to Integrate Intel Shared Code Version 3.1.9
Broadcom Support to bge Networking Driver
x86: Fully Buffered DIMM Idle Power Enhancement
Fault Management Architecture Enhancements
FMA Support for AMD's Istanbul Based Systems
Oracle Solaris FMA Enhancement
Sun Validation Test Suite 7.0ps9
Enhancements to the mdb Command to Improve the Debugging Capability of kmem and libumem
The following system administration features and enhancements have been added to the Oracle Solaris 10 9/10 release.
The following list summarizes new features in the ZFS file system. For more information about these new features, see the Oracle Solaris ZFS Administration Guide.
ZFS device replacement enhancements – In this release, a system event, or sysevent, is provided when an underlying device is expanded. ZFS has been enhanced to recognize these events and to adjust the storage pool based on the new size of the expanded LUN, depending on the setting of the autoexpand property. You can use the autoexpand property to enable or disable automatic pool expansion when a dynamic LUN expansion event is received.
This feature enables you to expand a LUN, and the resulting pool can access the expanded disk space without your having to export and import the pool or reboot the system. The autoexpand property is disabled by default, so you can decide whether you want the LUN expanded. Alternatively, you can use the zpool online -e command to expand a LUN to its full size.
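A short sketch of both approaches (the pool and device names here are illustrative):

```
# zpool set autoexpand=on tank     # expand automatically when a LUN expansion event arrives
# zpool get autoexpand tank
NAME  PROPERTY    VALUE  SOURCE
tank  autoexpand  on     local
# zpool online -e tank c1t6d0      # or manually expand a specific device to its full size
```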
Changes to the zpool list command – In this release, the zpool list output provides better space allocation information. For example:

# zpool list tank
NAME   SIZE   ALLOC   FREE    CAP  HEALTH  ALTROOT
tank   136G   55.2G   80.8G   40%  ONLINE  -
The previous USED and AVAIL fields have been replaced with ALLOC and FREE.
The ALLOC field identifies the amount of physical space that is allocated to all datasets and internal metadata. The FREE field identifies the amount of unallocated disk space in the storage pool.
Holding ZFS snapshots – If your automatic snapshot policies cause older snapshots to be inadvertently destroyed by the zfs receive command because they no longer exist on the sending side, consider using the snapshot hold feature that is new in this release.
Holding a snapshot prevents it from being destroyed. In addition, this feature allows a snapshot with clones to be deleted pending the removal of the last clone by using the zfs destroy -d command.
You can apply a hold tag, for example keep, with the zfs hold command to hold a snapshot or a set of snapshots.
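As an illustrative sequence (the dataset and tag names are examples only), a held snapshot cannot be destroyed until the hold is released:

```
# zfs hold keep tank/home/cindys@snap1      # place a hold with the tag "keep"
# zfs destroy tank/home/cindys@snap1
cannot destroy 'tank/home/cindys@snap1': dataset is busy
# zfs release keep tank/home/cindys@snap1   # release the hold; the snapshot can now be destroyed
```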
Triple-parity RAID-Z (raidz3) – In this release, a redundant RAID-Z configuration can have single-parity, double-parity, or triple-parity, which means that one, two, or three device failures, respectively, can be sustained without any data loss. You can specify the raidz3 keyword for a triple-parity RAID-Z configuration when the storage pool is created.
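For example, the following command creates a pool whose single top-level virtual device can survive the loss of any three of its five disks (the pool and device names are illustrative):

```
# zpool create tank raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
```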
ZFS log device enhancements – The following log device enhancements are available in this release:
The logbias property – You can use this property to instruct ZFS about how to handle synchronous requests for a specific dataset. If logbias is set to latency, ZFS uses the storage pool's separate log devices, if any, to handle the requests at low latency. If logbias is set to throughput, ZFS does not use the pool's separate log devices. Instead, ZFS optimizes synchronous operations for global pool throughput and for the efficient use of resources. The default value is latency. For most configurations, the default value is optimal. However, the logbias=throughput value might improve performance for writing database files.
Log device removal – You can now remove a log device from a storage pool by using the zpool remove command. A single log device can be removed by specifying the device name. A mirrored log device can be removed by specifying the top-level mirror for the log device. When a separate log device is removed from the system, ZFS intent log (ZIL) transaction records are written to the main pool.
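The following commands sketch both enhancements (the pool, dataset, and device names are examples):

```
# zfs set logbias=throughput tank/db    # bias synchronous writes toward overall pool throughput
# zpool remove tank c0t5d0              # remove a single log device by name
# zpool remove tank mirror-1            # remove a mirrored log device by its top-level name
```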
Redundant top-level virtual devices are now identified with a numeric identifier. For example, in a mirrored storage pool of two disks, the top-level virtual device is mirror-0.
ZFS storage pool recovery – A storage pool can become damaged if underlying devices become unavailable, if a power failure occurs, or if more than the supported number of devices fail in a redundant ZFS configuration. This release provides new command features for recovering your damaged pool. However, using this recovery feature means that the last few transactions that occurred prior to the pool outage might be lost.
Both the zpool clear and zpool import commands support the -F option to possibly recover a damaged pool. In addition, the zpool status, zpool clear, and zpool import commands automatically report a damaged pool. These commands also describe how to recover the pool.
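A sketch of both recovery paths (the pool name is illustrative):

```
# zpool clear -F tank     # attempt recovery by discarding the last few transactions
# zpool import -F tank    # or recover a damaged pool while importing it
```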
New ZFS system process – In this release, each storage pool has an associated process, zpool-poolname. The threads in this process are the pool's I/O processing threads that are used to handle I/O tasks, such as compression and checksum validation. The purpose of this process is to provide visibility into each storage pool's CPU utilization. Information about these processes can be reviewed by using the ps and prstat commands. These processes are only available in the global zone. For more information, see the SDC(7) man page.
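For example (the pool name is illustrative):

```
# ps -ef | grep zpool-    # one zpool-poolname process per imported pool, such as zpool-tank
# prstat                  # the pool's I/O threads accumulate CPU time under that process
```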
Splitting a mirrored ZFS storage pool (zpool split) – In this release, you can use the zpool split command to split a mirrored storage pool, which detaches a disk or disks in the original mirrored pool to create another identical pool.
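An illustrative sequence (the pool names are examples); the new pool is created in the exported state and must be imported before use:

```
# zpool split tank tank2    # detach one side of each mirror into a new pool named tank2
# zpool import tank2        # import the new pool to use it
```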
The fast crash dump facility enables the system to save crash dumps in less time, while using less space. The time that is required for a crash dump to complete is now 2 to 10 times faster, depending on the platform. The amount of disk space that is required to save crash dumps in the savecore directory is reduced by the same factors.
To accelerate the creation and compression of a crash dump file, the new crash dump facility utilizes lightly used CPUs on large systems. A new crash dump file, vmdump.n, is a compressed version of the vmcore.n and unix.n files. Compressed crash dumps can be moved over the network more quickly and then analyzed offsite. Note that you must uncompress the dump file before it can be used with tools such as the mdb utility. You can use the savecore command, either locally or remotely, to uncompress the dump file.
In addition, a new -z option has been added to the dumpadm command. This option enables you to specify whether to save dumps in a compressed or an uncompressed format. Note that the default format is compressed.
For more information, see the dumpadm(1M) and savecore(1M) man pages. Also, see Managing System Crash Dump Information in System Administration Guide: Advanced Administration.
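The new option and the uncompression step can be sketched as follows (the dump file name is illustrative):

```
# dumpadm -z on             # save future crash dumps in the compressed vmdump.n format
# savecore -f vmdump.0      # uncompress vmdump.0 into vmcore.0 and unix.0 for use with mdb
```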
The Intel Xeon processor 5600 series supports the IA32_ENERGY_PERF_BIAS Model-Specific Register (MSR). You can set the MSR to the desired energy and performance bias on the hardware. In this release, you can set the register at boot time. To set the register, add the following line to the /etc/system file and reboot the system:
set cpupm_iepb_policy = value
where value is a number from 0 to 15.
For more information, see Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A: System Programming Guide, Part 1.
The multiple disk sector size enables the Oracle Solaris OS to run on a disk where the sector size is 512 bytes, 1024 bytes, 2048 bytes, or 4096 bytes.
In addition, this feature supports the following:
Correct labeling on large sector size disks
Raw and block I/O
Support for a ZFS non-root disk
Support for Xen and Oracle VM Server for SPARC to identify large sector size disks
iSCSI initiator tunables enable you to tune several parameters that are specific to an iSCSI initiator's access to a given iSCSI target. This feature greatly improves the iSCSI initiator connection response time for various network scenarios. In particular, this feature is effective when the network between the iSCSI initiator and the target is slow or unstable. These tunable parameters can be managed by using the iscsiadm command or the libima library interface.
The cpio command in pass mode preserves holes in sparse files. In this release, administrative tools that utilize cpio in pass mode, such as Oracle Solaris Live Upgrade, will no longer fill holes. Instead these tools will precisely copy holes in sparse files.
For more information, see the lseek(2) and cpio(1) man pages.
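A sketch of a pass-mode copy (the paths are illustrative; the hole-preserving behavior described above is that of the Solaris cpio):

```
# dd if=/dev/zero of=/src/sparse bs=1k seek=10240 count=1   # a ~10-Mbyte file that is mostly holes
# cd /src; find . -depth -print | cpio -pdm /dest           # pass mode now copies holes as holes
# du -k /src/sparse /dest/sparse                            # both copies report only allocated blocks
```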
64-bit libc string functions have been enhanced with streaming SIMD extensions (SSE) instructions that provide significant performance improvements in the common strcmp(), strcpy(), and strlen() functions for 64-bit applications running on x86 platforms. However, note that applications that copy or compare strings of 2 Mbytes or more should use the memcpy() and memmove() functions instead.
In this release, new properties have been added to the sendmail service to provide for the automatic rebuilding of the sendmail.cf and submit.mc configuration files. In addition, the sendmail instance is split into two instances to provide better management of the traditional daemon and the client queue runner.
For more information about these enhancements, see What’s New With Mail Services in System Administration Guide: Network Services.
Starting in this release, boot archive recovery on the SPARC platform is automatic.
To support automatic recovery of the boot archives on the x86 platform, a new auto-reboot-safe property has been added to the boot configuration service, svc:/system/boot-config:default. By default, the property's value is set to false to ensure that the system does not automatically reboot to an unknown boot device. However, if your system is configured to point to the BIOS boot device and the default GRUB menu entry on which the Oracle Solaris 10 OS is installed, you can set the property's value to true. This value enables an automatic reboot of the system for the purpose of recovering an out-of-date boot archive.
To set or change this property's value, use the svccfg and svcadm commands. See the svccfg(1M) and svcadm(1M) man pages for more information about configuring SMF services.
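For example, the following sketch enables automatic reboot for boot archive recovery (the config property group shown here is an assumption):

```
# svccfg -s svc:/system/boot-config:default setprop config/auto-reboot-safe=true
# svcadm refresh svc:/system/boot-config:default
```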
For more information about automatic boot archive recovery, see the boot(1M) man page.
For instructions on clearing a boot archive failure by using automatic boot archive recovery, see Automatic Boot Archive Recovery in System Administration Guide: Basic Administration.