The following system administration features and enhancements have been added to the Solaris 10 7/07 HW release.
The fault management feature introduces error-handling and fault-management support for CPUs and memory in systems that use AMD™ Opteron™ and Athlon™ 64 revision F processors. These processors are used in Sun's “M2” products, such as the Sun Fire X2200 M2 and the Ultra 20 M2. Releases prior to Solaris 10 7/07 HW provided fault management support for Opteron and Athlon 64 revisions B through E.
Fault management support is enabled by default. The fault management service detects correctable CPU and memory errors, the resulting telemetry is analyzed by diagnosis engines, and errors and faults are corrected whenever possible. When errors cannot be corrected by the system, the extended telemetry provides greater assistance to the system administrator.
For more information see http://www.opensolaris.org/os/community/fm/.
Enhancements have been made to the name service switch (nss) and to the Name Switch Cache Daemon (nscd(1M)) in order to deliver new functionality. These enhancements include the following:
Improved caching in nscd(1M) and improved management of connections within the updated framework
Name service lookups that are access controlled at the naming service on a per-user basis. The updated switch framework adds support for this style of lookup by using SASL/GSS/Kerberos in a manner that is compatible with the authentication model used by Microsoft Active Directory.
A framework for the future addition of putXbyY interfaces.
The -Y option of the iostat command provides new performance information for machines that use Solaris I/O multipathing.
For more information, see the iostat(1M) man page.
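For example, the following sketch reports multipathing performance statistics, refreshing every 5 seconds; actual output depends on the multipathed devices in your configuration:

```shell
# iostat -Y 5
```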
The following section summarizes new features in the Solaris™ ZFS file system. For more information about these new features, see the Solaris ZFS Administration Guide.
ZFS command history (zpool history) – Starting with the Solaris 10 7/07 HW release, ZFS automatically logs successful zfs and zpool commands that modify pool state information. This feature enables you or Sun support personnel to identify the exact set of ZFS commands that was executed, which is useful when troubleshooting an error scenario.
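For example, the command history can be displayed for all pools or for a single pool by name (the pool name tank is hypothetical):

```shell
# zpool history
# zpool history tank
```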
Improved storage pool status information (zpool status) – Starting with the Solaris 10 7/07 HW release, you can use the zpool status -v command to display a list of files with persistent errors. Previously, you had to use the find -inum command to identify the filenames from the list of displayed inodes.
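For example, the following sketch displays the verbose status for a hypothetical pool named tank, including the names of any files with persistent errors:

```shell
# zpool status -v tank
```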
ZFS and Solaris iSCSI improvements – Starting with the Solaris 10 7/07 HW release, you can create a ZFS volume as a Solaris iSCSI target device by setting the shareiscsi property on the ZFS volume. This method is a convenient way to quickly set up a Solaris iSCSI target. For example:
# zfs create -V 2g tank/volumes/v2
# zfs set shareiscsi=on tank/volumes/v2
# iscsitadm list target
Target: tank/volumes/v2
    iSCSI Name:  iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
After the iSCSI target is created, set up the iSCSI initiator. For information about setting up a Solaris iSCSI initiator, see Chapter 14, Configuring Solaris iSCSI Targets and Initiators (Tasks), in System Administration Guide: Devices and File Systems.
For more information about managing a ZFS volume as an iSCSI target, see the Solaris ZFS Administration Guide.
ZFS property improvements
ZFS xattr property – Starting with the Solaris 10 7/07 HW release, you can use the xattr property to disable or enable extended attributes for a specific ZFS file system. The default value is on.
ZFS canmount property – Starting with the Solaris 10 7/07 HW release, you use the canmount property to specify whether a dataset can be mounted by using the zfs mount command.
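For example, the xattr and canmount properties can be set with the zfs set command (the dataset names tank/home and tank/temp are hypothetical):

```shell
# zfs set xattr=off tank/home
# zfs set canmount=off tank/temp
```

With canmount=off, the dataset itself cannot be mounted, although properties can still be inherited by its descendant file systems.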
ZFS user properties – Starting with the Solaris 10 7/07 HW release, ZFS supports user properties, in addition to the standard native properties that can either export internal statistics or control ZFS file system behavior. User properties have no effect on ZFS behavior, but you can use them to annotate datasets with information that is meaningful in your environment.
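User property names contain a colon to distinguish them from native properties. For example, in the following sketch the property name dept:owner and the dataset name are hypothetical annotations:

```shell
# zfs set dept:owner=jeff tank/home
# zfs get dept:owner tank/home
```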
Setting properties when creating ZFS file systems – Starting with the Solaris 10 7/07 HW release, you can set properties when you create a file system, in addition to setting properties after the file system is created.
The following examples illustrate equivalent syntax:
# zfs create tank/home
# zfs set mountpoint=/export/zfs tank/home
# zfs set sharenfs=on tank/home
# zfs set compression=on tank/home

# zfs create -o mountpoint=/export/zfs -o sharenfs=on -o compression=on tank/home
Display all ZFS file system information – Starting with the Solaris 10 7/07 HW release, you can use various forms of the zfs get command to display information about all datasets if you do not specify a dataset. In releases prior to the current release, not all dataset information was retrievable with the zfs get command. For example:
# zfs get -s local all
NAME               PROPERTY  VALUE   SOURCE
tank/home          atime     off     local
tank/home/bonwick  atime     off     local
tank/home/marks    quota     50G     local
New zfs receive -F option – Starting with the Solaris 10 7/07 HW release, you can use the new -F option to the zfs receive command to force a rollback of the file system to its most recent snapshot before the receive is performed. Using this option might be necessary when the file system is modified between the time a rollback occurs and the time the receive is initiated.
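For example, in the following sketch a stream is received into a hypothetical dataset tank/restore, rolling it back to its most recent snapshot first (the stream file path is also hypothetical):

```shell
# zfs receive -F tank/restore < /backups/home.snap
```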
Recursive ZFS snapshots – Starting with the Solaris 10 11/06 release, recursive snapshots are available. When you use the zfs snapshot command to create a file system snapshot, you can use the -r option to recursively create snapshots for all descendant file systems. In addition, using the -r option recursively destroys all descendant snapshots when a snapshot is destroyed.
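For example, the following sketch creates a snapshot named today for a hypothetical tank/home file system and all of its descendants, and then destroys all of those snapshots recursively:

```shell
# zfs snapshot -r tank/home@today
# zfs destroy -r tank/home@today
```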
Double Parity RAID-Z (raidz2) – Starting with the Solaris 10 11/06 release, a replicated RAID-Z configuration can have either single or double parity, which means that it can sustain one or two device failures, respectively, without any data loss. You can specify the raidz2 keyword for a double-parity RAID-Z configuration. Or, you can specify the raidz or raidz1 keyword for a single-parity RAID-Z configuration.
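For example, the following sketch creates a double-parity RAID-Z pool from four disks (the pool name and device names are hypothetical):

```shell
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
```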
Hot spares for ZFS storage pool devices – Starting with the Solaris 10 11/06 release, the ZFS hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that if an active device in the pool fails, the hot spare automatically replaces the failed device. Or, you can manually replace a device in a storage pool with a hot spare.
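For example, a hot spare can be designated when a pool is created, or added to an existing pool (pool and device names are hypothetical):

```shell
# zpool create tank mirror c1t0d0 c1t1d0 spare c1t2d0
# zpool add tank spare c1t3d0
```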
Replacing a ZFS file system with a ZFS clone (zfs promote) – Starting with the Solaris 10 11/06 release, the zfs promote command enables you to replace an existing ZFS file system with a clone of that file system. This feature is helpful when you want to run tests on an alternative version of a file system and then, make that alternative version of the file system the active file system.
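For example, the following sketch snapshots a hypothetical file system, clones the snapshot, and then promotes the clone so that it replaces the original file system:

```shell
# zfs snapshot tank/test@today
# zfs clone tank/test@today tank/testclone
# zfs promote tank/testclone
```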
ZFS and zones improvements – Starting with the Solaris 10 11/06 release, the ZFS and zones interaction is improved. On a Solaris system with zones installed, you can use the zoneadm clone feature to copy the data from an existing source ZFS zonepath to a target ZFS zonepath on your system. You cannot use the ZFS clone feature to clone the non-global zone. You must use the zoneadm clone command. For more information, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Upgrading ZFS storage pools (zpool upgrade) – Starting with the Solaris 10 6/06 release, you can upgrade your storage pools to a newer version to take advantage of the latest features by using the zpool upgrade command. In addition, the zpool status command has been modified to notify you when your pools are running older versions.
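For example, the first command below reports whether any pools are running older on-disk versions, and the second upgrades all pools to the latest version:

```shell
# zpool upgrade
# zpool upgrade -a
```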
Clearing device errors – Starting with the Solaris 10 6/06 release, you can use the zpool clear command to clear error counts associated with a device or the pool. Previously, error counts were cleared when a device in a pool was brought online with the zpool online command.
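For example, error counts can be cleared for an entire pool or for a single device within it (pool and device names are hypothetical):

```shell
# zpool clear tank
# zpool clear tank c1t0d0
```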
Recovering destroyed pools – Starting with the Solaris 10 6/06 release, the zpool import -D command enables you to recover pools that were previously destroyed with the zpool destroy command.
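For example, the first command below lists destroyed pools that are still available for recovery, and the second recovers a hypothetical pool named tank:

```shell
# zpool import -D
# zpool import -D tank
```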
ZFS backup and restore commands are renamed – Starting with the Solaris 10 6/06 release, the zfs backup and zfs restore commands are renamed to zfs send and zfs receive to more accurately describe their function. The function of these commands is to save and restore ZFS data stream representations.
Compact NFSv4 ACL format – Starting with the Solaris 10 6/06 release, three NFSv4 ACL formats are available: verbose, positional, and compact. The new compact and positional ACL formats are available to set and display ACLs. You can use the chmod command to set all three ACL formats. Use the ls -V command to display compact and positional ACL formats and the ls -v command to display verbose ACL formats.
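For example, the following sketch grants read and execute permissions to a hypothetical user on a hypothetical file by using the compact ACL format, and then displays the ACL in compact form:

```shell
# chmod A+user:gozer:rx:allow file.1
# ls -V file.1
```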
Temporarily take a device offline – Starting with the Solaris 10 6/06 release, you can use the zpool offline -t command to take a device offline temporarily. When the system is rebooted, the device is automatically returned to the ONLINE state.
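For example, the following sketch takes a hypothetical device offline until the next reboot:

```shell
# zpool offline -t tank c1t2d0
```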
ZFS is integrated with Fault Manager – Starting with the Solaris 10 6/06 release, a ZFS diagnostic engine is included that is capable of diagnosing and reporting pool failures and device failures. Checksum, I/O, and device errors associated with pool or device failures are also reported. Diagnostic error information is written to the console and the /var/adm/messages file. In addition, detailed information about recovering from a reported error can be displayed by using the zpool status command.
For more information about these improvements and changes, see the Solaris ZFS Administration Guide.
See the following What's New sections for related ZFS feature information:
Starting with this release, you can register the Solaris OS by using one of the following methods:
Basic Registration 1.1 – Use this method if you want to use Sun Connection's hosted deployment architecture or Update Manager.
Solaris Registration – Use this method if you want to use Sun Connection to maintain an inventory of systems that you have registered.
Basic Registration 1.1 is a system administration feature that was introduced in the Solaris 10 6/06 release. The Basic Registration feature enables you to create a registration profile and ID to automate your Solaris 10 software registrations for the Update Manager. The Update Manager is the single system update client that is used by Sun Connection. Sun Connection was formerly known as Sun Update Connection System Edition. The Basic Registration wizard appears on system reboot. For information about the Basic Registration 1.1 feature, see Basic Registration 1.1. For information about how to register with the wizard, see the Sun Connection Information Hub at http://www.sun.com/bigadmin/hubs/connection/.
Solaris Registration enables you to register one or more instances of your Solaris software at the same time by providing a Sun Online Account user name and password. To register, go to https://sunconnection.sun.com.
For information about Sun Connection's product portfolio, see the Sun Connection Information Hub at http://www.sun.com/bigadmin/hubs/connection/.
The MPxIO path steering feature includes a mechanism for issuing SCSI commands to an MPxIO LU so that the commands are delivered down a specified path to the LU. To provide this functionality, a new ioctl command, MP_SEND_SCSI_CMD, has been added and is accessed through the existing scsi_vhci ioctl interface. An extension to the multipath management library (MP-API) provides access to this new ioctl command. This extension allows administrators to run diagnostic commands through a specified path.
Starting with this release, the Solaris OS includes a set of predictive self-healing features to automatically capture and diagnose hardware errors detected on your system.
The Solaris Fault Manager automatically diagnoses failures in x64 hardware. Diagnostic messages are reported by the fmd daemon.
For more information about Fault Management in Solaris, see the following:
A Sun Service Tag is a product identifier that is designed to automatically discover your Sun systems, software, and services for quick and easy registration. A service tag uniquely identifies each tagged asset, and allows the asset information to be shared over a local network in a standard XML format.
Service tags are enabled as part of the Service Management Facility (SMF) and the SMF generic_open.xml profile. If you select the SMF generic_limited_net.xml profile, service tags are not enabled.
For more information about SMF, see the System Administration Guide: Basic Administration. For more information about service tags, the types of information collected, and automatic registration, see Sun Connection on BigAdmin at http://www.sun.com/bigadmin/hubs/connection/tasks/register.jsp.
Starting with this release, the stmsboot utility is available on x86 based systems. stmsboot is a utility that is used to enable or disable MPxIO for Fibre Channel devices. The stmsboot utility was already available on SPARC based systems.
You can use this utility to enable or disable MPxIO automatically. Previously, MPxIO had to be enabled or disabled manually, which was difficult, especially when booting from a SAN.
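For example, MPxIO can be enabled for all supported Fibre Channel devices as follows; the -d option disables MPxIO, and the -L option lists the resulting device name mappings:

```shell
# stmsboot -e
```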
For more information, see the following:
stmsboot(1M) man page
Section about Enabling or Disabling Multipathing on x86 Based Systems in Solaris Fibre Channel Storage Configuration and Multipathing Support Guide at http://docs.sun.com.
raidctl is a utility that performs RAID configuration with multiple types of RAID controllers. The raidctl utility provides more detailed information about RAID components, including the controller, volumes, and physical disks. The utility enables you to track the RAID system more closely and reduces the effort of learning to manage diverse RAID controllers.
For more information, see:
Starting with this release, concurrent READ/WRITE FPDMA QUEUED commands are supported. This support provides a considerable performance enhancement for I/O operations that use the Solaris marvell88sx driver under specific workload conditions. Other workloads benefit to a lesser degree. There is also a significant performance enhancement under many workloads for drives that support this optional portion of the SATA specification.
Tagged queuing enables SATA disks to optimize head motion and performance.
The zoneadm(1M) command is modified to call an external program that performs validation checks against a specific zoneadm operation on a branded zone. The checks are performed before the specified zoneadm subcommand is executed. The external brand-specific handler program for zoneadm(1M) is specified in the brand's configuration file, /usr/lib/brand/<brand_name>/config.xml, by using the <verify_adm> tag.
To introduce a new type of branded zone, and list brand-specific handlers for the zoneadm(1M) subcommand, add the following line to the brand's config.xml file:
<verify_adm><absolute path to external program> %z %* %*</verify_adm>
In this line, %z is the zone name, the first %* is the zoneadm subcommand, and the second %* is the subcommand's arguments.
This feature is useful when a given branded zone might not support all possible zoneadm(1M) operations. Brand-specific handlers provide a way to fail unsupported zoneadm commands gracefully.
Ensure that the handler program that you specify recognizes all zoneadm(1M) subcommands.