What's New in Oracle® Solaris 11.3


Updated: November 2016

Data Management Features

This section describes the data management features in this release. These features enable you to scale out your storage design with capacity for future growth, and they also provide enhanced data integrity.

Review ZFS Snapshot Differences Recursively

In Oracle Solaris 11.3, you can recursively display ZFS snapshot differences across descendant file systems. For example, in the following command output, one snapshot is compared to another snapshot. You can see that multiple files were added in the second snapshot, and that one descendant file system has no matching first snapshot.

# zfs diff -r west@snap1 west@snap4
D /west/users/ (west/users)
+ /west/users/file.a
+ /west/users/reptar
west/users/reptar@snap1: snapshot does not exist
D /west/data/ (west/data)
+ /west/data/file.1
+ /west/data/file.2
+ /west/data/file.3  

In the output, the + sign indicates a file that was added in the later snapshot, and D identifies a descendant dataset that is included in the recursive comparison.
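The snapshots being compared can themselves be created recursively. A minimal sketch, assuming the west pool and its descendant file systems from the example above:

```shell
# Create a recursive snapshot of west and all of its descendants
# (west/users, west/data); repeat over time to create snap2..snap4.
zfs snapshot -r west@snap1

# ... files are added and changed between snapshots ...
zfs snapshot -r west@snap4

# Compare the two snapshots, including descendant file systems.
zfs diff -r west@snap1 west@snap4
```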

For more information about ZFS snapshots, see Managing ZFS File Systems in Oracle Solaris 11.3.

ZFS LZ4 Compression

Enabling LZ4 compression on your ZFS file systems can reduce storage, power, and cooling requirements by a factor of 2x to 5x. Oracle Solaris 11.3 adds support for the LZ4 compression algorithm, which generally provides a 2x compression ratio with reduced CPU overhead.

For example, to set LZ4 compression on a ZFS file system:

# zfs set compression=lz4 east/data
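After setting the property, you can confirm the algorithm in use and observe the ratio actually achieved on stored data. A sketch, using the east/data file system from the example:

```shell
# Confirm the compression algorithm and check the achieved
# compression ratio on the east/data file system.
zfs get compression,compressratio east/data
```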

For more information about ZFS compression, see Managing ZFS File Systems in Oracle Solaris 11.3.

SMB 2.1

Previous Oracle Solaris 11 releases provided server message block (SMB) protocol support, which allows you to share data between Microsoft Windows and Oracle Solaris systems. Oracle Solaris 11.3 adds support for SMB 2.1, which offers the following enhancements:

  • Reduces the number of commands and subcommands from over a hundred in SMB 1.0 to just 19.

  • Supports a new caching model called leasing. A lease enables the SMB client to keep multiple opens on a single file while retaining its local cache.

  • Provides more scalable performance for high-speed networks and includes the following performance benefits:

    • SMB payload requests can scale up to 1 MB instead of 64 KB.

    • Reduces CPU utilization on the SMB server and the SMB client.

    • SMB clients gain the performance benefit of not losing local caching when the same file is opened multiple times.
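The protocol level that the SMB server negotiates can be inspected and adjusted with the sharectl command. A minimal sketch, assuming the max_protocol service property is available on your release (see the smb(4) man page for the properties your system actually supports):

```shell
# Check the highest SMB protocol version the server will negotiate.
sharectl get -p max_protocol smb

# Allow the server to negotiate up to SMB 2.1 (property name and
# value are assumptions; verify against the smb(4) man page).
sharectl set -p max_protocol=2.1 smb
```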

For more information about the commands and subcommands, see the smb(4), smbd(1M), and smbfs(7FS) man pages. For more information, see Managing SMB File Sharing and Windows Interoperability in Oracle Solaris 11.3.

ZFS Default User or Group Quotas

You can simplify the management of large user deployments and more easily allocate storage resources by setting a default user or group quota.

If a large ZFS file system has a default quota of 25 GB for all users, you can still set a larger quota of 50 GB for an individual user, if required. For example:

# zfs set defaultuserquota=25gb sandbox/bigfs
# zfs set userquota@marks=50gb sandbox/bigfs
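You can then verify the resulting quotas and per-user space consumption. A sketch, using the sandbox/bigfs file system from the example:

```shell
# Display the default user quota set on the file system.
zfs get defaultuserquota sandbox/bigfs

# Show space used per user along with each user's effective quota.
zfs userspace sandbox/bigfs
```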

For more information, see Managing ZFS File Systems in Oracle Solaris 11.3.

ZFS Scalable Performance Improvements

ZFS performance scales to enterprise-class systems with large amounts of memory and includes the following enhancements in the Oracle Solaris 11.3 release:

  • ZFS adaptive replacement cache (ARC) has been redesigned to provide scalability for large memory systems.

  • Persistent L2ARC means that important data remains cached after the system reboots, avoiding long cache warm-up times. In addition, compressed data remains compressed in the L2ARC cache, which reduces processing time.

  • Local directory access lock performance now scales with an increasing number of threads or CPUs.

  • Improved block allocation means that pool capacity can reach 90% and more.
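The persistent L2ARC applies to cache devices configured in a pool. A sketch, assuming a pool named west and a hypothetical disk c1t5d0 used as the cache device:

```shell
# Add a cache (L2ARC) device to the west pool; with persistent
# L2ARC, its cached contents survive a system reboot.
zpool add west cache c1t5d0

# Verify that the cache device is part of the pool configuration.
zpool status west
```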

For more information, see Managing ZFS File Systems in Oracle Solaris 11.3.

Monitoring ZFS Operations

Oracle Solaris 11.3 provides improved visibility into ongoing ZFS file system and pool operations.

You can monitor ongoing pool and file system operations by using the zpool monitor command. For example, time estimates are provided for all in-progress ZFS send stream operations.

# zpool monitor -t send west 5 5

pool                    provider  pctdone  total speed  timeleft  other
west                    send      36.3     17.2G 74.1M  2m31s     west/fs1@snap1
west                    send      38.7     17.2G 74.7M  2m24s     west/fs1@snap1
west                    send      41.3     17.2G 75.5M  2m16s     west/fs1@snap1
west                    send      43.8     17.2G 76.2M  2m09s     west/fs1@snap1

For more information about using the zpool monitor command, see Managing ZFS File Systems in Oracle Solaris 11.3.

Better Handling of ZFS Spare Devices

Configuring hot spares for your ZFS storage pool is a best practice and you should continue to do so. Starting with Oracle Solaris 11.3, configured but unused spare disks are checked automatically to confirm that they are still operational. ZFS reports when a spare disk fails, and the fault management architecture (FMA) generates a fault report if ZFS cannot open the spare device.
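Hot spares are configured per pool. A minimal sketch, assuming a pool named west and a hypothetical disk c1t3d0:

```shell
# Configure c1t3d0 as a hot spare for the west pool; starting with
# Oracle Solaris 11.3, the unused spare is checked automatically to
# confirm that it is still operational.
zpool add west spare c1t3d0

# Review pool status, including the AVAIL or FAULTED state of spares.
zpool status west
```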

For more information about using spares, see Managing ZFS File Systems in Oracle Solaris 11.3.