With Sun Cluster 3.2 11/09 software, the Sun Cluster Geographic Edition module supports Hitachi TrueCopy and Universal Replicator device groups in asynchronous mode replication. Routine operations for both Hitachi TrueCopy and Universal Replicator provide data consistency in asynchronous mode. However, in the event of a temporary loss of communications or of a “rolling disaster,” where different parts of the system fail at different times, only Hitachi Universal Replicator software can prevent loss of consistency of replicated data in asynchronous mode. In addition, Hitachi Universal Replicator software can ensure data consistency only with the configuration described in this section and in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster and Configuring the /etc/horcm.conf File on the Nodes of the Secondary Cluster.
In Hitachi Universal Replicator software, the Hitachi storage arrays replicate data from primary storage to secondary storage. The application that produced the data is not involved. Even so, to guarantee data consistency, replication must preserve the application's I/O write ordering, regardless of how many disk devices the application writes to.
During routine operations, Hitachi Universal Replicator software on the secondary storage array pulls data from cache on the primary storage array. If data is produced faster than it can be transferred, Hitachi Universal Replicator writes the backlogged I/O, along with a sequence number for each write, to a journal volume on the primary storage array. The secondary storage array pulls that data from primary storage and commits it to its own journal volumes, from which it is transferred to application storage. If communications fail and are later restored, the secondary storage array begins to resynchronize the two sites by continuing to pull backlogged data and sequence numbers from the journal volume. Sequence numbers control the order in which data blocks are committed to disk, so write ordering is maintained at the secondary site despite the interruption. As long as the journal volumes have enough disk space to record all of the data that is generated by the application running on the primary cluster during the period of failure, consistency is guaranteed.
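The journaling mechanism can be pictured as follows. This is a conceptual sketch only, not Hitachi's implementation: all class names, device names, and data values are hypothetical, and the real arrays operate on cached blocks rather than Python objects. The point it illustrates is that each write is journaled with a monotonically increasing sequence number on the primary, and the secondary applies pulled entries strictly in sequence order, which preserves the application's write ordering.

```python
# Conceptual model (illustrative names only) of journal-based
# asynchronous replication with sequence numbers.

class JournalVolume:
    """Ordered log of (sequence number, device, data) entries on the primary."""
    def __init__(self):
        self.entries = []
        self.next_seq = 1

    def append(self, device, data):
        # Primary side: record each application write with its sequence number.
        self.entries.append((self.next_seq, device, data))
        self.next_seq += 1

class SecondaryArray:
    """Pulls journal entries and commits them in sequence order."""
    def __init__(self):
        self.storage = {}       # device -> last committed data
        self.applied_seq = 0    # highest sequence number committed so far

    def pull_and_commit(self, journal):
        # Apply entries strictly in sequence order so that write ordering
        # at the secondary site matches the primary site.
        for seq, device, data in sorted(journal.entries):
            if seq == self.applied_seq + 1:
                self.storage[device] = data
                self.applied_seq = seq

primary_journal = JournalVolume()
primary_journal.append("lun0", "write-A")   # application writes, in order
primary_journal.append("lun1", "write-B")
primary_journal.append("lun0", "write-C")

secondary = SecondaryArray()
secondary.pull_and_commit(primary_journal)
# lun0 ends at "write-C", which was journaled after "write-A"
```

Because the secondary commits entry N only after entry N-1, a backlog that builds up during a communications failure is drained in the original write order once the link is restored.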
In the event of a rolling disaster, where only some of the backlogged data and sequence numbers reach the secondary storage array after failures begin, sequence numbers determine which data should be committed to data LUNs to preserve consistency.
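The role of sequence numbers in a rolling disaster can be sketched as below. Again this is an illustrative model, not Hitachi's algorithm: the function name and data are hypothetical. It shows the key rule that keeps the secondary consistent: commit only the contiguous prefix of sequence numbers that arrived, and discard everything after the first gap, so no write is ever applied ahead of an earlier write that was lost.

```python
# Illustrative rule for preserving consistency when only part of the
# journal backlog reaches the secondary site during a rolling disaster.

def consistent_prefix(received):
    """Return the entries that can be committed without breaking write order.

    received: iterable of (seq, device, data) tuples, possibly with gaps
    in the sequence numbers because later entries never arrived.
    """
    committable = []
    expected = 1
    for seq, device, data in sorted(received):
        if seq != expected:      # first gap: everything after it is unsafe
            break
        committable.append((seq, device, data))
        expected += 1
    return committable

# Entries 1, 2, and 4 arrived at the secondary; entry 3 was lost.
arrived = [(1, "lun0", "A"), (2, "lun1", "B"), (4, "lun0", "D")]
safe = consistent_prefix(arrived)
# Only entries 1 and 2 are committed; entry 4 is discarded, because
# committing it would apply a later write without the earlier write 3.
```

The data discarded this way is lost, but the secondary copy remains a consistent point-in-time image of the application data, which is what allows a takeover to succeed.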
In the Sun Cluster Geographic Edition module with Hitachi Universal Replicator, journal volumes are associated with application storage in the /etc/horcm.conf file. That configuration is described in Journal Volumes and Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster. For information about how to configure journal volumes on a storage array, see the Hitachi documentation for that array.
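For orientation, a fragment of an /etc/horcm.conf file might look like the following. All values here (group and pair names, serial number, LDEV IDs, host name) are placeholders, and the exact fields your configuration requires are defined in the sections referenced above and in the Hitachi documentation for your array.

```
HORCM_LDEV
# dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
VG01          pair1      10136     00:12            h1
VG01          pair2      10136     00:13            h1

HORCM_INST
# dev_group   ip_address              service
VG01          node-sec.example.com    horcm
```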