Replication Failures
Individual replication updates can fail for a number of reasons. The appliance reports the
reason for the failure in alerts posted on the source appliance or replication target, or on
the Replication screen for the action that failed. You may be able to get details on the
failure by clicking the orange alert icon representing the action's status.
The following are some common replication failures:

Cancelled
    The replication update was cancelled by an administrator. Replication can be
    cancelled on the source or the target.

Network connectivity failure
    The appliance was unable to connect to the replication target due to a
    network problem. Check for a misconfiguration on the source, the target, or
    the network.

Peer verification failed
    The appliance failed to verify the identity of the target. This occurs most
    commonly when the target has been reinstalled or factory reset. To generate
    a new set of authentication keys, configure a new replication target on the
    source appliance for any target that has been reinstalled or factory reset.
    See Replication Targets.

Peer RPC failed
    A remote procedure call to the replication peer failed.

Name collision
    Replication of <project/share> from <source> failed due to a name collision
    with @<snapname> being held on the target for NDMP. To recover, rename (or
    remove) the snapshot on the replication source that has the same name as
    the snapshot held by NDMP on the target (the one named in the alert),
    unless the name starts with .rr. Then either perform a manual sync or allow
    the replication source to retry the replication update automatically.

No package
    Replication failed because no package exists on the target to contain the
    replicated data. Because the package is created when the action is
    configured, this error typically occurs after an administrator has
    destroyed the package on the target. It can also occur if the storage pool
    containing the package is not imported on the target system, which may
    happen if the pool is faulted or if storage or networking has been
    reconfigured on the replication target.

Disabled
    Replication failed because it is disabled on the target. Either the
    replication service is disabled on the target or replication has been
    disabled for the specific package being replicated.

Target busy
    Replication failed because the target system has reached the maximum number
    of concurrent replication updates. The system limits the number of ongoing
    replication operations to avoid resource exhaustion. When this limit is
    reached, subsequent attempts to receive updates fail with this error, while
    subsequent attempts to send updates queue up until resources become
    available.

Target is missing
    The most recent replication update failed because the target is missing. If
    the target is no longer configured on the source, the action is permanently
    disabled. If this error occurs, destroy the replication action, then
    reconfigure the replication target and the action.

Out of space
    Replication failed because the source system had insufficient space to
    create a new snapshot. Either no physical space is available in the storage
    pool, or the project or one of its shares would exceed its quota because of
    reservations that do not include snapshots.

Key unavailability
    Replication failed because the encryption key used by the share is not
    available on the source or the target system. Review the alerts on both the
    source and the replication target to ensure the key is available on both
    systems. See Replicating an Encrypted Share for information about
    replicating encrypted shares and projects.

Incompatible target
    Replication failed because the target system is unable to receive the
    source system's data stream format. This can happen after upgrading a
    source system and applying deferred updates without upgrading the target
    and applying the same updates there. For deferred updates that have remote
    replication implications, see Oracle ZFS Storage Appliance: Remote
    Replication Compatibility [Doc ID 1958039.1]
    (https://support.oracle.com/epmos/faces/DocumentDisplay?id=1958039.1).

iSCSI initiator/target missing
    A replication clone, sever, or reverse operation failed because the
    initiator group or target group does not exist for the LUNs included in the
    replication package. The initiator or target group name was either deleted
    or renamed on the replication target.

Misc
    Replication failed, but no additional information is available on the
    source. Check the alert log on the target system and, if necessary, contact
    support for assistance. Some failure modes that currently fall into this
    category include insufficient disk space on the target to receive the
    update and attempting to replicate a clone whose origin snapshot does not
    exist on the target system.
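The "Target busy" behavior above, where incoming receives fail fast while outgoing sends queue, can be sketched with a counting semaphore. This is an illustrative model only, not appliance code; the class name, method names, and the limit value are all assumptions for the sketch:

```python
import threading

class ReplicationLimiter:
    """Toy model of the target-busy behavior: once the concurrency
    limit is reached, a receive fails immediately with an error,
    while a send blocks (queues) until a slot frees up."""

    def __init__(self, limit):
        # One semaphore slot per allowed concurrent replication update.
        self._slots = threading.Semaphore(limit)

    def try_receive(self):
        # Target side: a receive beyond the limit does not wait;
        # it fails right away with a "target busy" error.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("target busy")
        return True

    def send(self):
        # Source side: a send waits for a free slot instead of failing.
        self._slots.acquire()
        return True

    def release(self):
        # Called when a replication update completes.
        self._slots.release()
```

With a limit of two, a third concurrent receive raises "target busy", but completing one update frees a slot and a subsequent receive succeeds again.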
A replication update fails if any part of the update fails. The
shares inside a project are replicated serially and changes are not rolled back from a
failed update. As a result, when an update fails, some shares on the target may be up to
date while others are not. For more information, see Replication Snapshots and Data Consistency.
When a scheduled or continuous replication fails, the system waits several minutes and
tries again. The system will continue retrying failed scheduled or continuous replications
indefinitely. At any point during the retry procedure, initiating a manual update will
immediately begin a retry, circumventing the usual delay between successive retries. If the
manual update completes successfully, it terminates the retry sequence and the replication
action reverts to its normal scheduled or continuous updates.
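The retry behavior just described, periodic retries with a manual update cutting the remaining wait short and a successful update ending the sequence, can be modeled with a small loop. This is an illustrative sketch under assumed names and an assumed delay value, not appliance code:

```python
import threading

RETRY_DELAY = 300  # seconds; stands in for the "several minutes" delay

def retry_replication(attempt_update, manual_trigger, delay=RETRY_DELAY):
    """Toy model of the retry sequence.

    attempt_update() returns True when an update succeeds.
    manual_trigger is a threading.Event; setting it (a manual update
    request) skips the rest of the delay and retries immediately.
    Failed updates are retried indefinitely until one succeeds.
    """
    while not attempt_update():
        # Wait out the delay between retries, but wake up at once
        # if a manual update is requested in the meantime.
        manual_trigger.wait(timeout=delay)
        manual_trigger.clear()
    # A successful update terminates the retry sequence; the action
    # returns to its normal scheduled or continuous updates.
```

For example, an update that fails twice and then succeeds makes exactly three attempts before the loop exits, regardless of whether the waits ended by timeout or by a manual trigger.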
For more information about failed or interrupted replication
updates, see Resumable Replication.
When a replication update is in progress and another update is scheduled, the scheduled
replication is deferred until the previous update completes, and an alert is posted.
Related Topics