Oracle Solaris Cluster System Administration Guide, Oracle Solaris Cluster 4.1


How to Verify That Replication Is Configured Correctly

Before You Begin

Complete the procedure How to Perform a Point-in-Time Snapshot.

  1. Access nodeA and nodeC as the role that provides solaris.cluster.admin RBAC authorization.
  2. Verify that the primary cluster is in replicating mode, with autosynchronization on.

    Use the following command for Availability Suite software:

    nodeA# /usr/sbin/sndradm -P

    The output should resemble the following:

    /dev/md/nfsset/rdsk/d200 ->
    lhost-reprg-sec:/dev/md/nfsset/rdsk/d200
    autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devgrp, state: replicating

    In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Availability Suite software.

  3. If the primary cluster is not in replicating mode, put it into replicating mode.

    Use the following command for Availability Suite software:

    nodeA# /usr/sbin/sndradm -n -u lhost-reprg-prim \
    /dev/md/nfsset/rdsk/d200 \
    /dev/md/nfsset/rdsk/d203 lhost-reprg-sec \
    /dev/md/nfsset/rdsk/d200 \
    /dev/md/nfsset/rdsk/d203 ip sync
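
    To confirm that the change took effect, you can rerun the status command from Step 2; the queue and mode values in the output depend on your configuration, but the state field should now read replicating:

    nodeA# /usr/sbin/sndradm -P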
  4. Create a directory on a client machine.
    1. Log in to a client machine as the root role.

      You see a prompt that resembles the following:

      client-machine#
    2. Create a directory on the client machine.
      client-machine# mkdir /dir
  5. Mount the primary volume on the application directory and display the mounted directory.
    1. Mount the primary volume on the application directory.
      client-machine# mount -o rw lhost-nfsrg-prim:/global/mountpoint /dir
    2. Display the mounted directory.
      client-machine# ls /dir
  6. Unmount the primary volume from the application directory, and mount the secondary volume on the application directory.
    1. Unmount the primary volume from the application directory.
      client-machine# umount /dir
    2. Take the application resource group offline on the primary cluster.
      nodeA# clresource disable -g nfs-rg +
      nodeA# clresourcegroup offline nfs-rg
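
      One way to confirm that the resource group is offline before you continue is to check its status:

      nodeA# clresourcegroup status nfs-rg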
    3. Change the primary cluster to logging mode.

      Run the following command for Availability Suite software:

      nodeA# /usr/sbin/sndradm -n -l lhost-reprg-prim \
      /dev/md/nfsset/rdsk/d200  \
      /dev/md/nfsset/rdsk/d203 lhost-reprg-sec \
      /dev/md/nfsset/rdsk/d200 \
      /dev/md/nfsset/rdsk/d203 ip sync

      When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
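
      You can confirm that the primary cluster is now in logging mode by rerunning the status command from Step 2; the state field in the output should read logging instead of replicating:

      nodeA# /usr/sbin/sndradm -P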

    4. Ensure that the PathPrefix directory is available.
      nodeC# mount | grep /global/etc
    5. Confirm that the file system is fit to be mounted on the secondary cluster.
      nodeC# fsck -y /dev/md/nfsset/rdsk/d200
    6. Bring the application resource group into a managed state, and bring it online on the secondary cluster.
      nodeC# clresourcegroup online -eM nodeC nfs-rg
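
      You can verify that the resource group is now online on the secondary cluster by checking its status on nodeC:

      nodeC# clresourcegroup status nfs-rg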
    7. Access the client machine as the root role.

      You see a prompt that resembles the following:

      client-machine#
    8. Mount the secondary volume on the application directory that was created in Step 4.
      client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /dir
    9. Display the mounted directory.
      client-machine# ls /dir
  7. Ensure that the directory listing displayed in Step 5 is the same as the directory listing displayed in Step 6.
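
    One way to make this check explicit is to capture each listing to a file when you run the ls commands in Step 5 and Step 6 (the file names here are illustrative only, for example ls /dir > /tmp/primary-listing and ls /dir > /tmp/secondary-listing), and then compare the two files. If the listings match, the diff command produces no output:

    client-machine# diff /tmp/primary-listing /tmp/secondary-listing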
  8. Return the application to the primary cluster and resume replication.
    1. Take the application resource group offline on the secondary cluster.
      nodeC# clresource disable -g nfs-rg +
      nodeC# clresourcegroup offline nfs-rg
    2. Ensure that the global volume is unmounted from the secondary cluster.
      nodeC# umount /global/mountpoint
    3. Bring the application resource group into a managed state, and bring it online on the primary cluster.
      nodeA# clresourcegroup online -eM nodeA nfs-rg
    4. Change the primary cluster to replicating mode.

      Run the following command for Availability Suite software:

      nodeA# /usr/sbin/sndradm -n -u lhost-reprg-prim \
      /dev/md/nfsset/rdsk/d200 \
      /dev/md/nfsset/rdsk/d203 lhost-reprg-sec \
      /dev/md/nfsset/rdsk/d200 \
      /dev/md/nfsset/rdsk/d203 ip sync

      When the primary volume is written to, the secondary volume is updated by Availability Suite software.
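
      As a final check, you can rerun the status command from Step 2 on nodeA; the output should again show autosync: on and state: replicating:

      nodeA# /usr/sbin/sndradm -P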

See Also

Example of How to Manage a Takeover

Example of How to Manage a Takeover

This section describes how to update the DNS entries. For additional information, see Guidelines for Managing a Takeover.

This section contains the following procedure:

How to Update the DNS Entry