What's New in Oracle® Solaris 11.4

Updated: August 2018

Data Management Features

This section describes the data management features that are new in this release. These features enable you to scale out your storage design for future growth and also provide enhanced data integrity.

See also Configure Immutable Zones by Running in the Trusted Path.

ZFS Top-Level Device Removal

The zpool remove command enables you to remove top-level data devices. Removing a top-level data device migrates the data from the device to be removed to the remaining data devices in the pool. The zpool status command reports the progress of the remove operation until the resilvering completes.
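For example, a minimal removal sequence might look like the following, where the pool name tank and the device c1t2d0 are illustrative:

# zpool remove tank c1t2d0
# zpool status tank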

See Oracle Solaris ZFS Device Removal, Removing Devices From a Storage Pool in Managing ZFS File Systems in Oracle Solaris 11.4, and the zpool(8) man page for information about removing top-level data devices.

ZFS Scheduled Scrub

By default, ZFS pool scrub runs in the background every 30 days with an automatically tuned priority. The priority of the scrub is low by default but is automatically increased if the system is idle. Scrub priority is adjusted based on the specified scrub interval, the progress, and the system load. The start time of the last successful scrub is reported by the zpool status command.

You can customize the scheduling of the pool scrub, including disabling it, by setting the scrubinterval property. See Scheduled Data Scrubbing in Managing ZFS File Systems in Oracle Solaris 11.4 and see the zpool(8) man page for information about the scrubinterval and lastscrub properties.
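For example, assuming the property accepts an interval expressed in days (confirm the exact value syntax in the zpool(8) man page), you might shorten the scrub interval on a pool named tank:

# zpool set scrubinterval=14d tank
# zpool get scrubinterval,lastscrub tank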

Fast ZFS-Based File Copying

The reflink() and reflinkat() functions enable you to copy files very quickly using the underlying ZFS technology. The reflink() function creates a new file with the content of an existing file without reading or writing the underlying data blocks. The existing file and the file to be created must be in the same ZFS pool.

For more information, see the reflink(3C) man page.

The –z option (fast copy) of the cp command uses reflink. See the cp(1) man page.
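For example, a fast copy of a large file within the same pool might look like the following, where the paths are illustrative:

# cp -z /pond/cdata/large.db /pond/cdata/large.db.copy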

ZFS Raw Send Streams

In Oracle Solaris 11.4, you can optimize ZFS send stream transmissions of compressed file systems and reduce network transmission traffic by using raw ZFS send streams.

In previous releases, the blocks of a compressed ZFS file system were decompressed before transmission and then recompressed on the receiving end if compression was enabled there. In the Oracle Solaris 11.4 release, both steps are avoided because the compressed file system blocks remain compressed in the stream, which also reduces network transmission traffic. You can optimize a ZFS send stream by sending it in raw mode with the new zfs send –w option. This option encodes the presence of raw blocks in the stream so that a receiving system knows to process them without recompressing them.

For example, create a ZFS file system with compression enabled, send the snapshot stream both with and without the –w option, and compare the resulting stream sizes:

# zfs create -o compression=on pond/cdata
# cp -r somefiles /pond/cdata
# zfs snapshot pond/cdata@snap1
# zfs get compressratio pond/cdata@snap1
NAME              PROPERTY       VALUE  SOURCE
pond/cdata@snap1  compressratio  1.79x  -

# zfs send pond/cdata@snap1 > /tmp/stream
# zfs send -w compress pond/cdata@snap1 > /tmp/cstream
# ls -lh /tmp/*stream*
-rw-r--r--   1 root     root        126M Feb 15 14:35 /tmp/cstream
-rw-r--r--   1 root     root        219M Feb 15 14:35 /tmp/stream

Systems that run previous versions of Oracle Solaris cannot receive raw send streams; attempting to send one to such a system generates an error message.

For more information, see Managing ZFS File Systems in Oracle Solaris 11.4.

Resumable ZFS Send Streams

In Oracle Solaris 11.4, if a network transmission is interrupted or an error occurs, ZFS send streams can be restarted where they left off.

Using ZFS send and receive to transfer ZFS snapshots across systems is a convenient way to replicate ZFS file system data, but in previous releases this process suffered from the following issues:

  • A ZFS send operation could take many hours or days to complete. During that time, the send operation could be interrupted by a network outage or a system failure.

  • If the send operation fails to complete, even if it is almost complete, it must be restarted from the beginning.

  • The ZFS send operation might be unable to transfer large streams in the window between interruptions.

  • A ZFS recv operation might be unable to detect and report transmission errors until the entire stream has been processed.

The Oracle Solaris 11.4 release provides a way for ZFS send streams to be resumed at the point they were interrupted, using the following new options (a usage sketch follows the list):

  • zfs receive –C – Writes a receive checkpoint to stdout.

  • zfs send –C – Reads a receive checkpoint from stdin.

  • zfs send –s (nocheck) – Disables the new on-the-wire format.

  • zfs list –I (state) – Recursively displays incomplete datasets, which are not displayed by default.
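For example, a hypothetical resume sequence might look like the following. The dataset, file, and host names are illustrative, and you should confirm the exact invocation in the zfs(8) man page. On the receiving system, write a checkpoint for the interrupted receive:

# zfs receive -C pond/backup > /var/tmp/ckpt

After transferring the checkpoint file back to the sending system, resume the stream from that point:

# zfs send -C tank/data@snap1 < /var/tmp/ckpt | ssh recv-host zfs receive pond/backup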

For more information, see Managing ZFS File Systems in Oracle Solaris 11.4.

Configurable ZFS Read and Write Throughput Limits

The Oracle Solaris 11.4 release provides the ability to limit a ZFS file system's reads from and writes to disk. You can enable a read or write limit on a ZFS file system by setting the readlimit and writelimit properties, in units of bytes per second. Using these features allows you to optimize ZFS I/O resources in a multitenant environment.

The defaultwritelimit and defaultreadlimit properties are added to increase the manageability of a large number of ZFS file systems. If the defaultwritelimit and defaultreadlimit properties are set, all file system descendants inherit the assigned value. If you apply a default read or write limit to a ZFS file system, the limit applies only to its descendant file systems, not to the file system itself. The read-only effectivereadlimit and effectivewritelimit properties are added to provide a view of the effective limit on a file system. The reported effective limit is the lowest limit at any point between the parent and the indicated file system.
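A default write limit might be applied to a subtree as follows, where the dataset name and value are illustrative:

# zfs set defaultwritelimit=1gb pond/apps

The limit then applies to descendants such as pond/apps/web, but not to pond/apps itself.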

For example, you would set read and write limits as follows:

# zfs set writelimit=500mb pond/apps/web
# zfs set readlimit=200mb pond/apps/logdata

The following example shows how to display read and write limits:

# zfs get -r writelimit,readlimit pond/apps
NAME                 PROPERTY    VALUE    SOURCE
pond/apps            writelimit  default  default
pond/apps            readlimit   default  default
pond/apps/logdata    writelimit  default  default
pond/apps/logdata    readlimit   200M     local
pond/apps/web        writelimit  500M     local
pond/apps/web        readlimit   default  default
pond/apps/web/tier1  writelimit  default  default
pond/apps/web/tier1  readlimit   default  default

You can display the effective write limit as follows:

# zfs get effectivewritelimit pond/apps/web
NAME           PROPERTY             VALUE  SOURCE
pond/apps/web  effectivewritelimit  500M   local

For more information, see Managing ZFS File Systems in Oracle Solaris 11.4.

Monitor and Manage ZFS Shadow Migration

The Oracle Solaris 11.4 release improves ZFS shadow migration with better visibility into migration errors and more control over in-progress migrations. The following new options are introduced:

  • shadowstat –E and –e – Monitor migration errors for all migrations or for a single migration.

  • shadowadm – Control in-progress migrations.

For example, you can identify shadow migration errors across multiple migration operations:

# shadowstat
                                        EST             
                                BYTES   BYTES           ELAPSED
DATASET                         XFRD    LEFT    ERRORS  TIME
tank/logarchive                 16.4M   195M    1       00:01:20
pond/dbarchive                  4.49M   248M    -       00:00:51
tank/logarchive                 16.6M   194M    1       00:01:21
pond/dbarchive                  4.66M   248M    -       00:00:52
tank/logarchive                 16.7M   194M    1       00:01:22
pond/dbarchive                  4.80M   248M    -       00:00:53
tank/logarchive                 17.1M   194M    1       00:01:23
pond/dbarchive                  5.00M   248M    -       00:00:54
tank/logarchive                 17.3M   194M    1       00:01:24
pond/dbarchive                  5.16M   247M    -       00:00:55

You can identify the specific migration error as follows:

# shadowstat -E
tank/logarchive:
PATH                                            ERROR
e-dir/socket                                    Operation not supported
pond/dbarchive:
No errors encountered.

For example, because the open socket cannot be migrated, you can cancel the migration:

# shadowadm cancel tank/logarchive

For more information, see Managing ZFS File Systems in Oracle Solaris 11.4.

Preserving ZFS ACL Inheritance

In Oracle Solaris 11.4, a new ZFS ACL feature provides a better experience when sharing a ZFS file system over both the NFS and Server Message Block (SMB) protocols. A new inheritance value for the aclinherit property, passthrough-mode-preserve, allows passthrough semantics but overrides the permissions set in the inherited owner@, group@, and everyone@ ACEs with the values requested in the open, create, or mkdir system call. When this value is set, any inheritable ACEs have their inherit bits preserved, which allows SMB and NFS sharing to inherit ACLs in a natural way. The aclmode property is unchanged, but a chmod operation now takes the inheritance behavior of the aclinherit property into account; in particular, it preserves the inheritance bits during a chmod operation.
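For example, you might enable the new value on a shared file system, where the dataset name is illustrative:

# zfs set aclinherit=passthrough-mode-preserve pond/smbshare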

For more information, see Managing ZFS File Systems in Oracle Solaris 11.4.

NFS Version 4.1 Server Support

Oracle Solaris 11.4 includes server support for NFS version 4.1. The protocol provides the following new features and considerations:

  • Exactly Once Semantics (EOS) – Provides a reliable duplicate request cache for the NFS version 4.1 protocol. This duplicate request cache guarantees that non-idempotent requests, such as a remove request, execute only once, even in cases of transient network failures and retransmissions. This feature eliminates long-standing duplicate-request problems with NFS version 3 and NFS version 4.

  • reclaim_complete – A new protocol feature that enables the server to resume NFS service quickly after a server restart. Unlike NFS version 4, the server does not need to wait for a fixed amount of time, known as the grace period, before returning to service. With reclaim_complete, the server can end the grace period as soon as all clients have recovered. This feature is particularly important for high-availability environments.

  • Planned GRACE-less Recovery (PGR) – Allows an Oracle Solaris NFS version 4 or NFS version 4.1 server to preserve the NFS version 4 state across NFS service restarts or a graceful system reboot, so that the server does not enter the GRACE period to recover NFS version 4 state. As a result, NFS client applications can avoid a data outage of potentially 90 seconds across NFS service restarts and graceful system reboots.

  • Consider the following interoperability issues:

    • The Oracle Solaris NFS version 4.1 server supports both Linux and VMware clients. However, delegation should be disabled on the server for Linux clients.

    • There are known issues with locking and in-state recovery when using delegation with the Linux client. You can disable delegation on the server as follows:

      # sharectl set -p server_delegation=off nfs

      You can disable NFS version 4.1 support on the server as follows:

      # sharectl set -p server_versmax=4.0 nfs

For more information, see Managing Network File Systems in Oracle Solaris 11.4.

NFSv3 Mount Using TCP

When a file system is mounted using NFS Version 3 and TCP is the selected transport, the initial mount setup will also use TCP as the transport. In previous Oracle Solaris releases, UDP would be used for the mount setup and TCP would be used only after the mount had been established.

If you allow NFS mounts through a firewall, this feature might enable you to simplify your firewall configuration.

This feature also enables you to use NFS Version 3 at sites where UDP traffic is blocked.
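For example, an NFS version 3 mount that uses TCP for both the mount setup and data traffic might look like the following, where the server name and paths are illustrative:

# mount -F nfs -o vers=3,proto=tcp server1:/export/data /mnt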

For more information, see Managing Network File Systems in Oracle Solaris 11.4 and the mount_nfs(8) man page.

Extended File System Attributes in tmpfs

tmpfs file systems support extended system attributes. See the tmpfs(4FS) and fgetattr(3C) man pages.
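For example, you might set and display a system attribute on a file in /tmp, which is a tmpfs mount by default; the nounlink attribute chosen here is illustrative:

# touch /tmp/testfile
# chmod S+vnounlink /tmp/testfile
# ls -/ c /tmp/testfile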

SMB 3.1.1 Support

Oracle Solaris 11.4 provides SMB 3.1.1 protocol support on the Oracle Solaris SMB server, which includes the following SMB features:

  • Continuously Available Shares – This feature enables an Oracle Solaris SMB server to make shares continuously available in the event of a server crash or reboot.

  • Multichannel – This feature enables an Oracle Solaris SMB file server to use multiple network connections per SMB session to provide increased throughput and fault tolerance.

  • Encryption – This feature enables an Oracle Solaris SMB server to encrypt SMB network traffic between clients and the server. SMB encryption secures SMB sessions and protects against tampering and eavesdropping attacks.
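For example, you might display the highest SMB protocol version that the server negotiates; the max_protocol property name is an assumption, so confirm it in the sharectl(8) man page before relying on it:

# sharectl get -p max_protocol smb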

For more information, see Managing SMB File Sharing and Windows Interoperability in Oracle Solaris 11.4 and the smbstat(8) man page.