Chapter 1 Oracle ZFS Storage Appliance Overview
Chapter 3 Initial Configuration
Chapter 4 Network Configuration
Chapter 5 Storage Configuration
Chapter 6 Storage Area Network Configuration
Chapter 8 Setting ZFSSA Preferences
Chapter 10 Cluster Configuration
Chapter 12 Shares, Projects, and Schema
Project Replication Actions and Packages
Project Replication Storage Pools
Project-level vs. Share-level Replication
Configuring Project Replication
Creating and Editing Targets in the BUI
Creating and Editing Targets in the CLI
Creating and Editing Actions in the BUI
Creating and Editing Actions in the CLI
Replication Modes: Scheduled or Continuous
Replication - Including Intermediate Snapshots
Replication - Sending and Canceling Updates
Managing Replication Packages in the BUI
Managing Replication Packages in the CLI
Cloning a Package or Individual Shares
Exporting Replicated Filesystems
Reversing the Direction of Replication
Destroying a Replication Package
Reversing Replication - Establish Replication
Reversing Replication - Simulate Recovery from a Disaster
Reversing Replication - Resume Replication from Production System
Forcing Replication to use a Static Route
Cloning a Received Replication Project
Snapshots and Data Consistency
Replicating iSCSI Configuration
Upgrading From 2009.Q3 and Earlier
Individual replication updates can fail for a number of reasons. Where possible, the ZFSSA reports the reason for the failure in alerts posted on the source or target ZFSSA, or on the Replication screen for the action that failed. You may be able to get details on the failure by clicking the orange alert icon representing the action's status. The following are the most common types of failures:
A replication update fails if any part of the update fails. The current implementation replicates the shares inside a project serially and does not roll back changes from failed updates. As a result, when an update fails, some shares on the target may be up-to-date while others are not. See "Snapshots and Data Consistency" above for details.
Although some data may have been successfully replicated as part of a failed update, the current implementation resends all data that was sent as part of the previous (failed) update. In other words, a retry does not resume where the failed update left off; it restarts from the point where the failed update began.
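The serial, no-rollback behavior described above can be modeled with a short sketch. This is an illustrative model only, not ZFSSA code; the function and parameter names (`replicate_update`, `send`) are invented for the example.

```python
def replicate_update(shares, send):
    """Model of a serial, no-rollback replication update.

    Shares are sent one at a time; on the first failure the update stops,
    leaving earlier shares up-to-date on the target and later ones stale.
    A retry calls this again with the FULL share list: it restarts from
    the beginning rather than resuming after the last successful share.
    """
    done = []
    for share in shares:
        if not send(share):
            return done, False  # remaining shares left stale on the target
        done.append(share)
    return done, True
```

For example, if sending share "b" fails, the update reports only "a" as replicated, and the next attempt resends "a" before retrying "b".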
When a manual or scheduled update fails, the system does not automatically try again until the next scheduled update (if any). When continuous replication fails, the system waits several minutes and tries again, and will continue retrying failed continuous replications indefinitely.
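The retry policy above can be summarized in a small sketch. This is a hypothetical model for illustration; the `RETRY_DELAY_MINUTES` value stands in for the unspecified "several minutes" and is not a documented ZFSSA constant.

```python
RETRY_DELAY_MINUTES = 5  # placeholder for "several minutes"; not a documented value

def next_attempt(mode, failed_at, next_scheduled=None):
    """Model of when a failed update is tried again (times in minutes).

    mode: 'manual', 'scheduled', or 'continuous'.
    Manual and scheduled updates are not retried automatically; the next
    attempt is the next scheduled update, if any. Continuous replication
    retries after a short delay, indefinitely.
    """
    if mode == "continuous":
        return failed_at + RETRY_DELAY_MINUTES
    return next_scheduled  # None when no further update is scheduled
```

A manual update with no schedule therefore yields no automatic retry at all, while a continuous action always produces a retry time shortly after the failure.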
When a replication update is in progress and another update is scheduled to occur, the latter update is skipped entirely rather than started immediately after the previous update completes; the next update is sent only at its next scheduled time. The system posts an alert when an update is skipped for this reason.
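The skip-rather-than-queue behavior can be sketched as follows. Again, this is an illustrative model, not ZFSSA code; `busy_intervals` is an invented representation of in-progress updates.

```python
def due_updates(schedule, busy_intervals):
    """Model which scheduled update times start, given in-progress updates.

    schedule: sorted list of scheduled start times.
    busy_intervals: list of (start, end) times during which an update
    is already running. An update whose scheduled time falls inside a
    busy interval is skipped outright (triggering an alert); it does not
    queue behind the running update.
    """
    started, skipped = [], []
    for t in schedule:
        if any(start <= t < end for start, end in busy_intervals):
            skipped.append(t)
        else:
            started.append(t)
    return started, skipped
```

So with updates scheduled every 10 minutes and one update running from minute 5 to minute 15, the minute-10 update is skipped and replication next runs at minute 20.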