This chapter describes how to create and manage Oracle Solaris ZFS snapshots and clones. Information about saving snapshots is also provided.
A snapshot is a read-only copy of a file system or volume. Snapshots can be created almost instantly, and they initially consume no additional disk space within the pool. However, as data within the active dataset changes, the snapshot consumes disk space by continuing to reference the old data, thus preventing the disk space from being freed.
ZFS snapshots include the following features:
They persist across system reboots.
The theoretical maximum number of snapshots is 2^64.
Snapshots use no separate backing store. Snapshots consume disk space directly from the same storage pool as the file system or volume from which they were created.
Recursive snapshots are created quickly as one atomic operation. The snapshots are created together (all at once) or not created at all. The benefit of atomic snapshot operations is that the snapshot data is always taken at one consistent time, even across descendent file systems.
Snapshots of volumes cannot be accessed directly, but they can be cloned, backed up, rolled back to, and so on. For information about backing up a ZFS snapshot, see Sending and Receiving ZFS Data.
Snapshots are created by using the zfs snapshot command, which takes as its only argument the name of the snapshot to create. The snapshot name is specified as follows:
filesystem@snapname
volume@snapname
The snapshot name must satisfy the naming requirements in ZFS Component Naming Requirements.
In the following example, a snapshot of tank/home/ahrens that is named friday is created.
# zfs snapshot tank/home/ahrens@friday
You can create snapshots for all descendent file systems by using the -r option. For example:
# zfs snapshot -r tank/home@now
# zfs list -t snapshot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/zfs2BE@zfs2BE  78.3M     -  4.53G  -
tank/home@now                 0     -    26K  -
tank/home/ahrens@now          0     -   259M  -
tank/home/anne@now            0     -   156M  -
tank/home/bob@now             0     -   156M  -
tank/home/cindys@now          0     -   104M  -
Snapshots have no modifiable properties, nor can dataset properties be applied to a snapshot. For example:
# zfs set compression=on tank/home/ahrens@now
cannot set compression property for 'tank/home/ahrens@now': snapshot properties cannot be modified
Snapshots are destroyed by using the zfs destroy command. For example:
# zfs destroy tank/home/ahrens@now
A dataset cannot be destroyed if snapshots of the dataset exist. For example:
# zfs destroy tank/home/ahrens
cannot destroy 'tank/home/ahrens': filesystem has children
use '-r' to destroy the following datasets:
tank/home/ahrens@tuesday
tank/home/ahrens@wednesday
tank/home/ahrens@thursday
In addition, if clones have been created from a snapshot, then they must be destroyed before the snapshot can be destroyed.
For more information about the destroy subcommand, see Destroying a ZFS File System.
If you have different automatic snapshot policies such that older snapshots are being inadvertently destroyed by zfs receive because they no longer exist on the sending side, consider using the snapshot hold feature.
Holding a snapshot prevents it from being destroyed. In addition, this feature allows a snapshot with clones to be deleted pending the removal of the last clone by using the zfs destroy -d command. Each snapshot has an associated user-reference count, which is initialized to zero. This count increases by one whenever a hold is put on a snapshot and decreases by one whenever a hold is released.
In the previous Solaris release, a snapshot could only be destroyed by using the zfs destroy command if it had no clones. In this Solaris release, the snapshot must also have a zero user-reference count.
You can hold a snapshot or set of snapshots. For example, the following syntax puts a hold tag, keep, on tank/home/cindys@snap1.
# zfs hold keep tank/home/cindys@snap1
You can use the -r option to recursively hold the snapshots of all descendent file systems. For example:
# zfs snapshot -r tank/home@now
# zfs hold -r keep tank/home@now
This syntax adds a single reference, keep, to the given snapshot or set of snapshots. Each snapshot has its own tag namespace and hold tags must be unique within that space. If a hold exists on a snapshot, attempts to destroy that held snapshot by using the zfs destroy command will fail. For example:
# zfs destroy tank/home/cindys@snap1
cannot destroy 'tank/home/cindys@snap1': dataset is busy
If you want to destroy a held snapshot, use the -d option. For example:
# zfs destroy -d tank/home/cindys@snap1
Use the zfs holds command to display a list of held snapshots. For example:
# zfs holds tank/home@now
NAME           TAG   TIMESTAMP
tank/home@now  keep  Thu Jul 15 11:25:39 2010
# zfs holds -r tank/home@now
NAME                  TAG   TIMESTAMP
tank/home/cindys@now  keep  Thu Jul 15 11:25:39 2010
tank/home/mark@now    keep  Thu Jul 15 11:25:39 2010
tank/home@now         keep  Thu Jul 15 11:25:39 2010
You can use the zfs release command to release a hold on a snapshot or set of snapshots. For example:
# zfs release -r keep tank/home@now
After the hold is released, the snapshot can be destroyed by using the zfs destroy command. For example:
# zfs destroy -r tank/home@now
Two new properties identify snapshot hold information:
The defer_destroy property is on if the snapshot has been marked for deferred destruction by using the zfs destroy -d command. Otherwise, the property is off.
The userrefs property is set to the number of holds on this snapshot, also referred to as the user-reference count.
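You can display both values with the zfs get command. The following is a sketch that assumes the tank/home/cindys@snap1 snapshot from the earlier hold examples still carries the keep tag; the exact values and output formatting might differ on your system:

# zfs get defer_destroy,userrefs tank/home/cindys@snap1
NAME                    PROPERTY       VALUE  SOURCE
tank/home/cindys@snap1  defer_destroy  off    -
tank/home/cindys@snap1  userrefs       1      -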
You can rename snapshots, but they must be renamed within the same pool and dataset from which they were created. For example:
# zfs rename tank/home/cindys@083006 tank/home/cindys@today
In addition, the following shortcut syntax is equivalent to the preceding syntax:
# zfs rename tank/home/cindys@083006 today
The following snapshot rename operation is not supported because the target pool and file system name are different from the pool and file system where the snapshot was created:
# zfs rename tank/home/cindys@today pool/home/cindys@saturday
cannot rename to 'pool/home/cindys@today': snapshots must be part of same dataset
You can recursively rename snapshots by using the zfs rename -r command. For example:
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
users                        270K  16.5G    22K  /users
users/home                    76K  16.5G    22K  /users/home
users/home@yesterday            0      -    22K  -
users/home/markm              18K  16.5G    18K  /users/home/markm
users/home/markm@yesterday      0      -    18K  -
users/home/marks              18K  16.5G    18K  /users/home/marks
users/home/marks@yesterday      0      -    18K  -
users/home/neil               18K  16.5G    18K  /users/home/neil
users/home/neil@yesterday       0      -    18K  -
# zfs rename -r users/home@yesterday @2daysago
# zfs list -r users/home
NAME                        USED  AVAIL  REFER  MOUNTPOINT
users/home                   76K  16.5G    22K  /users/home
users/home@2daysago             0      -    22K  -
users/home/markm             18K  16.5G    18K  /users/home/markm
users/home/markm@2daysago       0      -    18K  -
users/home/marks             18K  16.5G    18K  /users/home/marks
users/home/marks@2daysago       0      -    18K  -
users/home/neil              18K  16.5G    18K  /users/home/neil
users/home/neil@2daysago        0      -    18K  -
You can enable or disable the display of snapshot listings in the zfs list output by using the listsnapshots pool property. This property is enabled by default.
If you disable this property, you can still display snapshot information by using the zfs list -t snapshot command, or you can re-enable the listsnapshots pool property. For example:
# zpool get listsnapshots tank
NAME  PROPERTY       VALUE  SOURCE
tank  listsnapshots  on     default
# zpool set listsnapshots=off tank
# zpool get listsnapshots tank
NAME  PROPERTY       VALUE  SOURCE
tank  listsnapshots  off    local
Snapshots of file systems are accessible in the .zfs/snapshot directory within the root of the file system. For example, if tank/home/ahrens is mounted on /home/ahrens, then the tank/home/ahrens@thursday snapshot data is accessible in the /home/ahrens/.zfs/snapshot/thursday directory.
# ls /tank/home/ahrens/.zfs/snapshot
tuesday wednesday thursday
You can list snapshots as follows:
# zfs list -t snapshot
NAME                         USED  AVAIL  REFER  MOUNTPOINT
pool/home/anne@monday           0      -   780K  -
pool/home/bob@monday            0      -  1.01M  -
tank/home/ahrens@tuesday    8.50K      -   780K  -
tank/home/ahrens@wednesday  8.50K      -  1.01M  -
tank/home/ahrens@thursday       0      -  1.77M  -
tank/home/cindys@today      8.50K      -   524K  -
You can list snapshots that were created for a particular file system as follows:
# zfs list -r -t snapshot -o name,creation tank/home
NAME                  CREATION
tank/home@now         Wed Jun 30 16:16 2010
tank/home/ahrens@now  Wed Jun 30 16:16 2010
tank/home/anne@now    Wed Jun 30 16:16 2010
tank/home/bob@now     Wed Jun 30 16:16 2010
tank/home/cindys@now  Wed Jun 30 16:16 2010
When a snapshot is created, its disk space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, disk space that was previously shared becomes unique to the snapshot, and thus is counted in the snapshot's used property. Additionally, deleting snapshots can increase the amount of disk space unique to (and thus used by) other snapshots.
A snapshot's referenced property value is the amount of disk space that the file system referenced when the snapshot was created.
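You can read this value directly with the zfs get command. The following sketch reuses the hypothetical tank/home/ahrens@friday snapshot from the earlier example; the value reported is whatever the file system referenced at the time the snapshot was taken:

# zfs get referenced tank/home/ahrens@friday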
You can identify additional information about how the values of the used property are consumed. New read-only file system properties describe disk space usage for clones, file systems, and volumes. For example:
# zfs list -ro space tank/home
NAME                  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank/home             66.3G  675M         0     26K              0       675M
tank/home@now             -     0         -       -              -          -
tank/home/ahrens      66.3G  259M         0    259M              0          0
tank/home/ahrens@now      -     0         -       -              -          -
tank/home/anne        66.3G  156M         0    156M              0          0
tank/home/anne@now        -     0         -       -              -          -
tank/home/bob         66.3G  156M         0    156M              0          0
tank/home/bob@now         -     0         -       -              -          -
tank/home/cindys      66.3G  104M         0    104M              0          0
tank/home/cindys@now      -     0         -       -              -          -
For a description of these properties, see Table 6–1.
You can use the zfs rollback command to discard all changes made to a file system since a specific snapshot was created. The file system reverts to its state at the time the snapshot was taken. By default, the command cannot roll back to a snapshot other than the most recent snapshot.
To roll back to an earlier snapshot, all intermediate snapshots must be destroyed. You can destroy the intermediate snapshots by specifying the -r option.
If clones of any intermediate snapshots exist, the -R option must be specified to destroy the clones as well.
The file system that you want to roll back is unmounted and remounted, if it is currently mounted. If the file system cannot be unmounted, the rollback fails. The -f option forces the file system to be unmounted, if necessary.
In the following example, the tank/home/ahrens file system is rolled back to the tuesday snapshot:
# zfs rollback tank/home/ahrens@tuesday
cannot rollback to 'tank/home/ahrens@tuesday': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
tank/home/ahrens@wednesday
tank/home/ahrens@thursday
# zfs rollback -r tank/home/ahrens@tuesday
In this example, the wednesday and thursday snapshots are destroyed because you rolled back to the earlier tuesday snapshot.
# zfs list -r -t snapshot -o name,creation tank/home/ahrens
NAME                  CREATION
tank/home/ahrens@now  Wed Jun 30 16:16 2010
A clone is a writable volume or file system whose initial contents are the same as the dataset from which it was created. As with snapshots, creating a clone is nearly instantaneous and initially consumes no additional disk space. In addition, you can snapshot a clone.
Clones can only be created from a snapshot. When a snapshot is cloned, an implicit dependency is created between the clone and snapshot. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as the clone exists. The origin property exposes this dependency, and the zfs destroy command lists any such dependencies, if they exist.
Clones do not inherit the properties of the dataset from which they were created. Use the zfs get and zfs set commands to view and change the properties of a cloned dataset. For more information about setting ZFS dataset properties, see Setting ZFS Properties.
Because a clone initially shares all its disk space with the original snapshot, its used property value is initially zero. As changes are made to the clone, it uses more disk space. The used property of the original snapshot does not include the disk space consumed by the clone.
To create a clone, use the zfs clone command, specifying the snapshot from which to create the clone, and the name of the new file system or volume. The new file system or volume can be located anywhere in the ZFS hierarchy. The new dataset is the same type (for example, file system or volume) as the snapshot from which the clone was created. You cannot create a clone of a file system in a pool that is different from where the original file system snapshot resides.
In the following example, a new clone named tank/home/ahrens/bug123 with the same initial contents as the snapshot tank/ws/gate@yesterday is created:
# zfs snapshot tank/ws/gate@yesterday
# zfs clone tank/ws/gate@yesterday tank/home/ahrens/bug123
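If you want to confirm the dependency between the new clone and its origin snapshot, and verify that the clone initially consumes no additional disk space, you can query the origin and used properties. This is a sketch based on the dataset names above; actual values will vary:

# zfs get origin,used tank/home/ahrens/bug123
NAME                     PROPERTY  VALUE                   SOURCE
tank/home/ahrens/bug123  origin    tank/ws/gate@yesterday  -
tank/home/ahrens/bug123  used      0                       -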
In the following example, a cloned workspace is created from the projects/newproject@today snapshot for a temporary user as projects/teamA/tempuser. Then, properties are set on the cloned workspace.
# zfs snapshot projects/newproject@today
# zfs clone projects/newproject@today projects/teamA/tempuser
# zfs set sharenfs=on projects/teamA/tempuser
# zfs set quota=5G projects/teamA/tempuser
ZFS clones are destroyed by using the zfs destroy command. For example:
# zfs destroy tank/home/ahrens/bug123
Clones must be destroyed before the parent snapshot can be destroyed.
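For example, assuming a hypothetical clone tank/home/bugfix was created from the tank/home@today snapshot, the clone must be removed before the snapshot:

# zfs destroy tank/home/bugfix
# zfs destroy tank/home@today

Alternatively, the zfs destroy -d command defers destruction of the snapshot until the last clone is removed, as described earlier in this chapter.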
You can use the zfs promote command to replace an active ZFS file system with a clone of that file system. This feature enables you to clone and replace file systems so that the original file system becomes the clone of the specified file system. In addition, this feature makes it possible to destroy the file system from which the clone was originally created. Without clone promotion, you cannot destroy an original file system of active clones. For more information about destroying clones, see Destroying a ZFS Clone.
In the following example, the tank/test/productA file system is cloned and then the clone file system, tank/test/productAbeta, becomes the original tank/test/productA file system.
# zfs create tank/test
# zfs create tank/test/productA
# zfs snapshot tank/test/productA@today
# zfs clone tank/test/productA@today tank/test/productAbeta
# zfs list -r tank/test
NAME                      USED  AVAIL  REFER  MOUNTPOINT
tank/test                 104M  66.2G    23K  /tank/test
tank/test/productA        104M  66.2G   104M  /tank/test/productA
tank/test/productA@today     0      -   104M  -
tank/test/productAbeta       0  66.2G   104M  /tank/test/productAbeta
# zfs promote tank/test/productAbeta
# zfs list -r tank/test
NAME                          USED  AVAIL  REFER  MOUNTPOINT
tank/test                     104M  66.2G    24K  /tank/test
tank/test/productA               0  66.2G   104M  /tank/test/productA
tank/test/productAbeta        104M  66.2G   104M  /tank/test/productAbeta
tank/test/productAbeta@today     0      -   104M  -
In this zfs list output, note that the disk space accounting information for the original productA file system has been replaced with the productAbeta file system.
You can complete the clone replacement process by renaming the file systems. For example:
# zfs rename tank/test/productA tank/test/productAlegacy
# zfs rename tank/test/productAbeta tank/test/productA
# zfs list -r tank/test
Optionally, you can remove the legacy file system. For example:
# zfs destroy tank/test/productAlegacy
The zfs send command creates a stream representation of a snapshot that is written to standard output. By default, a full stream is generated. You can redirect the output to a file or to a different system. The zfs receive command creates a snapshot whose contents are specified in the stream that is provided on standard input. If a full stream is received, a new file system is created as well. You can send ZFS snapshot data and receive ZFS snapshot data and file systems with these commands. See the examples in the next section.
The following backup solutions for saving ZFS data are available:
Enterprise backup products – If you need the following features, then consider an enterprise backup solution:
Per-file restoration
Backup media verification
Media management
File system snapshots and rolling back snapshots – Use the zfs snapshot and zfs rollback commands if you want to easily create a copy of a file system and revert to a previous file system version, if necessary. For example, to restore a file or files from a previous version of a file system, you could use this solution.
For more information about creating and rolling back to a snapshot, see Overview of ZFS Snapshots.
Saving snapshots – Use the zfs send and zfs receive commands to send and receive a ZFS snapshot. You can save incremental changes between snapshots, but you cannot restore files individually. You must restore the entire file system snapshot. These commands do not provide a complete backup solution for saving your ZFS data.
Remote replication – Use the zfs send and zfs receive commands to copy a file system from one system to another system. This process is different from a traditional volume management product that might mirror devices across a WAN. No special configuration or hardware is required. The advantage of replicating a ZFS file system is that you can re-create a file system on a storage pool on another system, and specify different levels of configuration for the newly created pool, such as RAID-Z, but with identical file system data.
Archive utilities – Save ZFS data with archive utilities such as tar, cpio, and pax or third-party backup products. Currently, both tar and cpio translate NFSv4-style ACLs correctly, but pax does not.
In addition to the zfs send and zfs receive commands, you can also use archive utilities, such as the tar and cpio commands, to save ZFS files. These utilities save and restore ZFS file attributes and ACLs. Check the appropriate options for both the tar and cpio commands.
For up-to-date information about issues with ZFS and third-party backup products, see the Solaris 10 Release Notes or the ZFS FAQ, available here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq/#backupsoftware
You can use the zfs send command to send a copy of a snapshot stream and receive the snapshot stream in another pool on the same system or in another pool on a different system that is used to store backup data. For example, to send the snapshot stream to a different pool on the same system, use syntax similar to the following:
# zfs send tank/data@snap1 | zfs recv spool/ds01
You can use zfs recv as an alias for the zfs receive command.
If you are sending the snapshot stream to a different system, pipe the zfs send output through the ssh command. For example:
host1# zfs send tank/dana@snap1 | ssh host2 zfs recv newtank/dana
When you send a full stream, the destination file system must not exist.
You can send incremental data by using the zfs send -i option. For example:
host1# zfs send -i tank/dana@snap1 tank/dana@snap2 | ssh host2 zfs recv newtank/dana
Note that the first argument (snap1) is the earlier snapshot and the second argument (snap2) is the later snapshot. In this case, the newtank/dana file system must already exist for the incremental receive to be successful.
The incremental snap1 source can be specified as the last component of the snapshot name. This shortcut means you only have to specify the name after the @ sign for snap1, which is assumed to be from the same file system as snap2. For example:
host1# zfs send -i snap1 tank/dana@snap2 | ssh host2 zfs recv newtank/dana
This shortcut syntax is equivalent to the incremental syntax in the preceding example.
The following message is displayed if you attempt to generate an incremental stream from a snapshot of a different file system:
cannot send 'pool/fs@name': not an earlier snapshot from the same fs
If you need to store many copies, consider compressing a ZFS snapshot stream representation with the gzip command. For example:
# zfs send pool/fs@snap | gzip > backupfile.gz
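To restore from the compressed file, decompress the stream and pipe it back into the zfs receive command. The following is a sketch with hypothetical names; when a full stream is received, the destination file system must not already exist:

# gzcat backupfile.gz | zfs receive pool/fs-restore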
Keep the following key points in mind when you receive a file system snapshot:
Both the snapshot and the file system are received.
The file system and all descendent file systems are unmounted.
The file systems are inaccessible while they are being received.
The original file system to be received must not exist while it is being transferred.
If the file system name already exists, you can use the zfs rename command to rename the file system.
For example:
# zfs send tank/gozer@0830 > /bkups/gozer.083006
# zfs receive tank/gozer2@today < /bkups/gozer.083006
# zfs rename tank/gozer tank/gozer.old
# zfs rename tank/gozer2 tank/gozer
If you make a change to the destination file system and you want to perform another incremental send of a snapshot, you must first roll back the receiving file system.
Consider the following example. First, make a change to the file system as follows:
host2# rm newtank/dana/file.1
Then, perform an incremental send of tank/dana@snap3. However, you must first roll back the receiving file system to receive the new incremental snapshot. Or, you can eliminate the rollback step by using the -F option. For example:
host1# zfs send -i tank/dana@snap2 tank/dana@snap3 | ssh host2 zfs recv -F newtank/dana
When you receive an incremental snapshot, the destination file system must already exist.
If you make changes to the file system and you do not roll back the receiving file system to receive the new incremental snapshot or you do not use the -F option, you see a message similar to the following:
host1# zfs send -i tank/dana@snap4 tank/dana@snap5 | ssh host2 zfs recv newtank/dana
cannot receive: destination has been modified since most recent snapshot
The following checks are performed before the -F option is successful:
If the most recent snapshot doesn't match the incremental source, neither the rollback nor the receive is completed, and an error message is returned.
If you accidentally provide the name of a different file system that doesn't match the incremental source specified in the zfs receive command, neither the rollback nor the receive is completed, and the following error message is returned:
cannot send 'pool/fs@name': not an earlier snapshot from the same fs
This section describes how to use the zfs send -I and -R options to send and receive more complex snapshot streams.
Keep the following points in mind when sending and receiving complex ZFS snapshot streams:
Use the zfs send -I option to send all incremental streams from one snapshot to a cumulative snapshot. Or, use this option to send an incremental stream from the original snapshot to create a clone. The original snapshot must already exist on the receiving side to accept the incremental stream.
Use the zfs send -R option to send a replication stream of all descendent file systems. When the replication stream is received, all properties, snapshots, descendent file systems, and clones are preserved.
Use both options to send an incremental replication stream (a brief sketch follows this list).
Changes to properties are preserved, as are snapshot and file system rename and destroy operations.
If zfs recv -F is not specified when receiving the replication stream, dataset destroy operations are ignored. In this case, the zfs recv -F syntax also retains its meaning of rolling back the file system, if necessary.
As with other (non zfs send -R) -i or -I cases, if -I is used, all snapshots between snapA and snapD are sent. If -i is used, only snapD (for all descendents) is sent.
To receive any of these new types of zfs send streams, the receiving system must be running a software version capable of sending them. The stream version is incremented.
However, you can access streams from older pool versions by using a newer software version. For example, you can send and receive streams created with the newer options to and from a version 3 pool. But, you must be running recent software to receive a stream sent with the newer options.
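As a sketch of combining both options, an incremental replication stream between two hypothetical recursive snapshots of the users file system, users@snapA and users@snapD, could be generated and later received as follows. The names are illustrative only, and the receiving side must already have the snapA snapshots:

# zfs send -R -I users@snapA users@snapD > /snaps/users-incrementalR
# zfs receive -d -F users2 < /snaps/users-incrementalR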
A group of incremental snapshots can be combined into one snapshot by using the zfs send -I option. For example:
# zfs send -I pool/fs@snapA pool/fs@snapD > /snaps/fs@all-I
Then, you would remove snapB, snapC, and snapD.
# zfs destroy pool/fs@snapB
# zfs destroy pool/fs@snapC
# zfs destroy pool/fs@snapD
To receive the combined snapshot, you would use the following command.
# zfs receive -d -F pool/fs < /snaps/fs@all-I
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool           428K  16.5G    20K  /pool
pool/fs         71K  16.5G    21K  /pool/fs
pool/fs@snapA   16K      -  18.5K  -
pool/fs@snapB   17K      -    20K  -
pool/fs@snapC   17K      -  20.5K  -
pool/fs@snapD     0      -    21K  -
You can also use the zfs send -I command to combine a snapshot and a clone snapshot to create a combined dataset. For example:
# zfs create pool/fs
# zfs snapshot pool/fs@snap1
# zfs clone pool/fs@snap1 pool/clone
# zfs snapshot pool/clone@snapA
# zfs send -I pool/fs@snap1 pool/clone@snapA > /snaps/fsclonesnap-I
# zfs destroy pool/clone@snapA
# zfs destroy pool/clone
# zfs receive -F pool/clone < /snaps/fsclonesnap-I
You can use the zfs send -R command to replicate a ZFS file system and all descendent file systems, up to the named snapshot. When this stream is received, all properties, snapshots, descendent file systems, and clones are preserved.
In the following example, snapshots are created for user file systems. One replication stream is created for all user snapshots. Next, the original file systems and snapshots are destroyed and then recovered.
# zfs snapshot -r users@today
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
users              187K  33.2G    22K  /users
users@today           0      -    22K  -
users/user1         18K  33.2G    18K  /users/user1
users/user1@today     0      -    18K  -
users/user2         18K  33.2G    18K  /users/user2
users/user2@today     0      -    18K  -
users/user3         18K  33.2G    18K  /users/user3
users/user3@today     0      -    18K  -
# zfs send -R users@today > /snaps/users-R
# zfs destroy -r users
# zfs receive -F -d users < /snaps/users-R
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
users              196K  33.2G    22K  /users
users@today           0      -    22K  -
users/user1         18K  33.2G    18K  /users/user1
users/user1@today     0      -    18K  -
users/user2         18K  33.2G    18K  /users/user2
users/user2@today     0      -    18K  -
users/user3         18K  33.2G    18K  /users/user3
users/user3@today     0      -    18K  -
In the following example, the zfs send -R command was used to replicate the users dataset and its descendents, and to send the replicated stream to another pool, users2.
# zpool create users2 mirror c0t1d0 c1t1d0
# zfs receive -F -d users2 < /snaps/users-R
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
users               224K  33.2G    22K  /users
users@today            0      -    22K  -
users/user1          33K  33.2G    18K  /users/user1
users/user1@today    15K      -    18K  -
users/user2          18K  33.2G    18K  /users/user2
users/user2@today      0      -    18K  -
users/user3          18K  33.2G    18K  /users/user3
users/user3@today      0      -    18K  -
users2              188K  16.5G    22K  /users2
users2@today           0      -    22K  -
users2/user1         18K  16.5G    18K  /users2/user1
users2/user1@today     0      -    18K  -
users2/user2         18K  16.5G    18K  /users2/user2
users2/user2@today     0      -    18K  -
users2/user3         18K  16.5G    18K  /users2/user3
users2/user3@today     0      -    18K  -
You can use the zfs send and zfs recv commands to remotely copy a snapshot stream representation from one system to another system. For example:
# zfs send tank/cindy@today | ssh newsys zfs recv sandbox/restfs@today
This command sends the tank/cindy@today snapshot data and receives it into the sandbox/restfs file system. The command also creates a restfs@today snapshot on the newsys system. In this example, the user has been configured to use ssh on the remote system.