Oracle Solaris Cluster System Administration Guide     Oracle Solaris Cluster 3.3 3/13

How to Provoke a Switchover

  1. Access nodeA and nodeC as superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
  2. Change the primary cluster to logging mode.

    Run the following command for Availability Suite software:

    nodeA# /usr/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devgrp/vol01 \
    /dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devgrp/vol01 \
    /dev/vx/rdsk/devgrp/vol04 ip sync

    While the primary cluster is in logging mode, writes to the data volume update the bitmap volume in the same device group, but no replication occurs.

  3. Confirm that the primary cluster and the secondary cluster are in logging mode, with autosynchronization off.
    1. On nodeA, confirm the mode and setting:

      Run the following command for Availability Suite software:

      nodeA# /usr/sbin/sndradm -P

      The output should resemble the following:

      /dev/vx/rdsk/devgrp/vol01 ->
      lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
      autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
      devgrp, state: logging
    2. On nodeC, confirm the mode and setting:

      Run the following command for Availability Suite software:

      nodeC# /usr/sbin/sndradm -P

      The output should resemble the following:

      /dev/vx/rdsk/devgrp/vol01 <-
      lhost-reprg-prim:/dev/vx/rdsk/devgrp/vol01
      autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
      devgrp, state: logging

    On both nodeA and nodeC, the state should be logging and autosynchronization should be off.
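    The checks in this step can also be scripted. The helper below is a sketch and is not part of the Availability Suite tools: it scans captured sndradm -P output for a logging state with autosynchronization off.

    ```shell
    # Hypothetical helper (not an Availability Suite command): succeeds only
    # when the captured output of /usr/sbin/sndradm -P shows the volume set
    # in logging mode with autosynchronization off.
    is_ready_for_switchover() {
        # $1: output of /usr/sbin/sndradm -P
        printf '%s\n' "$1" | grep -q 'autosync: *off' &&
            printf '%s\n' "$1" | grep -q 'state: logging'
    }

    # Example use on each node:
    #   out=$(/usr/sbin/sndradm -P)
    #   is_ready_for_switchover "$out" || echo "not in logging mode" >&2
    ```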

  4. Confirm that the secondary cluster is ready to take over from the primary cluster.
    nodeC# fsck -y /dev/vx/rdsk/devgrp/vol01
  5. Switch over to the secondary cluster.
    nodeC# clresourcegroup switch -n nodeC nfs-rg
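Steps 2 through 5 can be collected into one reviewable sequence. The sketch below assumes the example names used throughout this appendix (lhost-reprg-prim, lhost-reprg-sec, devgrp, nfs-rg); with DRYRUN=1, the default here, it only prints each command, which is useful for checking the order before running the commands on the nodes noted in the comments.

```shell
# Sketch of the switchover sequence, assuming the example names from this
# appendix. DRYRUN=1 prints each command instead of executing it.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Step 2 (on nodeA): place the primary cluster in logging mode.
run /usr/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol04 \
    lhost-reprg-sec \
    /dev/vx/rdsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol04 ip sync

# Step 3 is the manual check with /usr/sbin/sndradm -P described above.

# Step 4 (on nodeC): verify the file system on the secondary volume.
run fsck -y /dev/vx/rdsk/devgrp/vol01

# Step 5 (on nodeC): switch the application resource group to nodeC.
run clresourcegroup switch -n nodeC nfs-rg
```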

Next Steps

Go to How to Update the DNS Entry.