This appendix describes the data replication approaches available with Sun Cluster. Host-based data replication, which runs on the cluster nodes themselves, applies to all clusters. Storage-based data replication, which uses specialized software on the storage arrays, provides additional data redundancy for any type of cluster.
The examples in this appendix illustrate general campus cluster configurations and are not intended to indicate required or recommended setups. For simplicity, the diagrams and explanations concentrate only on features unique to understanding campus clustering. For example, public-network Ethernet connections are not shown.
A two-room configuration with host-based data replication is defined as follows:
Two separate rooms
One node and one or more disk subsystems in each room
Data replicated across disk subsystems in these rooms
At least one disk subsystem that is attached to both hosts, used as a quorum device, and located in one of the rooms
If the room containing the quorum disk is lost, the system cannot recover automatically. Recovery requires intervention from your Sun service provider.
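You can verify the quorum configuration and vote counts from either node. The following is a minimal sketch, assuming the `clquorum`(1CL) command set introduced in Sun Cluster 3.2; earlier releases use `scstat -q` and `scconf` instead.

```shell
# List the configured quorum devices.
clquorum list

# Show current quorum status: votes present, votes required,
# and the state of each quorum device and node.
clquorum status
```

Checking this output before a planned room outage confirms whether the surviving room would retain enough quorum votes to stay up.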
Figure A–1 shows a sample two-room configuration with storage arrays in each room.
Figure A–1 is similar to a standard noncampus configuration. The most obvious difference is that FC switches have been added to switch from multimode to single-mode fibers.
Figure A–2 shows a sample two-room configuration where data is replicated between two storage arrays. In this configuration, the primary storage array is contained in the first room, where it provides data to the nodes in both rooms. The primary storage array also provides the secondary storage array with replicated data. During normal cluster operation, the secondary storage array is not visible to the cluster. However, if the primary storage array becomes unavailable, the secondary storage array can be manually configured into the cluster by a Sun service provider.
As shown in Figure A–2, the quorum device is on an unreplicated volume.
Storage-based data replication can be performed synchronously or asynchronously in the Sun Cluster environment, depending on the type of application that is used.
To ensure data integrity, use multipathing and the proper in-box RAID. The following list includes considerations for implementing a campus cluster configuration that uses storage-based data replication. These considerations are not specific to Sun Cluster.
Node-to-node distance is limited by Sun Cluster's Fibre Channel and interconnect infrastructure. Contact your Sun service provider for more information about current limitations and supported technologies.
Do not configure a replicated volume as a quorum device. Locate any quorum devices on an unreplicated volume.
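The sketch below shows how an unreplicated shared device might be selected and configured as the quorum device. It assumes the Sun Cluster 3.2 `cldevice`(1CL) and `clquorum`(1CL) commands, and the DID device names `d4` and `d5` are hypothetical examples.

```shell
# Identify the shared DID devices; choose one that is NOT part
# of a replicated volume (d4 is a hypothetical example).
cldevice list -v

# Configure the unreplicated device as the quorum device.
clquorum add d4

# If a replicated volume was configured as a quorum device by
# mistake, remove it after the correct device is in place.
# clquorum remove d5
```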
Ensure that only the primary copy of the data is visible to cluster nodes. Otherwise, the volume manager might try to access both primary and secondary copies of the data, which could result in data corruption because the secondary copy is read-only.
Refer to the documentation that came with your storage array for information about controlling the visibility of your data copies.
Some application-specific data might not be suitable for asynchronous data replication. Use your understanding of your application's behavior to determine how best to replicate application-specific data across the storage devices.
As with all campus clusters, those that use storage-based data replication generally do not need intervention when they experience a single failure. However, if you lose the room that holds your primary storage device (as shown in Figure A–2), problems arise in a two-node cluster. The remaining node cannot reserve the quorum device and cannot boot as a cluster member. In this situation, your cluster requires the following manual intervention:
Your Sun service provider must reconfigure the remaining node to boot as a cluster member.
You (or your Sun service provider) must configure an unreplicated volume of your secondary storage device as a quorum device.
You (or your Sun service provider) must reconfigure the remaining node to use the secondary storage device as primary storage. This reconfiguration might involve rebuilding volume manager volumes, restoring data, or changing application associations with storage volumes.
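Once your Sun service provider has made the surviving node bootable as a cluster member, the quorum portion of the recovery might look like the following sketch. It assumes the Sun Cluster 3.2 `clquorum`(1CL) command set, and the device names `d8` and `d4` are hypothetical examples.

```shell
# Run on the surviving node.

# 1. Add an unreplicated volume on the secondary storage array
#    as the new quorum device.
clquorum add d8

# 2. Remove the old quorum device, which was lost along with
#    the room that held the primary storage array.
clquorum remove d4

# 3. Verify the quorum votes before rebuilding volume manager
#    volumes and restoring data on the secondary array.
clquorum status
```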