
Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring

Some servers support the mirroring of internal hard drives (internal hardware disk mirroring or integrated mirroring) to provide redundancy for node data. To use this feature in a cluster environment, follow the steps in this section.

The best way to set up hardware disk mirroring is to perform RAID configuration during cluster installation, before you configure multipathing. For instructions on performing this configuration, see the Oracle Solaris Cluster Software Installation Guide. If you need to change your mirroring configuration after you have established the cluster, you must perform some cluster-specific steps to clean up the device IDs, as described in the procedure that follows.


Note - Specific servers might have additional restrictions. See the documentation that shipped with your server hardware.


For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.
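
Before you change the configuration, it can be useful to record the controller's current view of the internal disks. The following is a minimal sketch only; the exact raidctl options and output depend on your controller and Oracle Solaris release, so confirm them against the raidctl(1M) man page on your system.

  # raidctl

Run with no options, raidctl reports the RAID volumes that the controller currently presents, which tells you whether an internal mirror already exists before you create or delete one.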

How to Configure Internal Disk Mirroring After the Cluster Is Established

Before You Begin

This procedure assumes that you have already installed your hardware and software and have established the cluster. To configure an internal disk mirror during cluster installation, see the Oracle Solaris Cluster Software Installation Guide.


Caution - If there are state database replicas on the disk that you are mirroring, you must recreate them during this procedure.
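
If you are not sure whether any state database replicas reside on the disks involved in the mirror, you can list them before you begin. This sketch assumes that you use Solaris Volume Manager; metadb run with no options prints the existing replicas and the devices that hold them.

  # metadb

If the output shows replicas on a slice of the primary disk (for example, a slice of c1t0d0 in the examples that follow), plan to recreate them in Step 7.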


  1. If necessary, prepare the node for establishing the mirror.
    1. Determine the resource groups and device groups that are running on the node.

      Record this information because you use it later in this procedure to return resource groups and device groups to the node.

      Use the following command:

      # clresourcegroup status -n nodename
      # cldevicegroup status -n nodename
    2. If necessary, move all resource groups and device groups off the node.
      # clnode evacuate fromnode
  2. Configure the internal mirror.
    # raidctl -c c1t0d0 c1t1d0
    -c c1t0d0 c1t1d0

    Creates a mirror of the primary disk onto the mirror disk. Enter the name of your primary disk as the first argument and the name of the mirror disk as the second argument. A consolidated example session for this procedure is sketched after Step 9.

  3. Boot the node into single-user mode.
    # reboot -- -s
  4. Clean up the device IDs.

    Use the following command:

    # cldevice repair /dev/rdsk/c1t0d0
    /dev/rdsk/c1t0d0

    Updates the cluster's record of the device IDs for the primary disk. Enter the name of your primary disk as the argument.

  5. Confirm that the mirror has been created and only the primary disk is visible to the cluster.
    # cldevice list

    The command lists only the primary disk, and not the mirror disk, as visible to the cluster.

  6. Boot the node back into cluster mode.
    # reboot
  7. If you are using Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.
    # metadb -a /dev/rdsk/c1t0d0s4
  8. If you moved device groups off the node in Step 1, restore device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  9. If you moved resource groups off the node in Step 1, move all resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
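
The following consolidated sketch shows the entire procedure on a hypothetical node named phys-node-1 with a hypothetical device group nfs-dg and resource group nfs-rg. The node, group, and slice names are placeholders; substitute the names from your own configuration. The commands between the two reboot commands run while the node is in single-user mode.

  # clresourcegroup status -n phys-node-1
  # cldevicegroup status -n phys-node-1
  # clnode evacuate phys-node-1
  # raidctl -c c1t0d0 c1t1d0
  # reboot -- -s
  # cldevice repair /dev/rdsk/c1t0d0
  # cldevice list
  # reboot
  # metadb -a /dev/rdsk/c1t0d0s4
  # cldevicegroup switch -n phys-node-1 nfs-dg
  # clresourcegroup switch -n phys-node-1 nfs-rg

The metadb command applies only if you use Solaris Volume Manager and the state database replicas were on the primary disk; slice 4 (s4) is simply the slice used in the example in Step 7.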

How to Remove an Internal Disk Mirror

  1. If necessary, prepare the node for removing the mirror.
    1. Determine the resource groups and device groups that are running on the node.

      Record this information because you use it later in this procedure to return resource groups and device groups to the node.

      Use the following command:

      # clresourcegroup status -n nodename
      # cldevicegroup status -n nodename
    2. If necessary, move all resource groups and device groups off the node.
      # clnode evacuate fromnode
  2. Remove the internal mirror.
    # raidctl -d c1t0d0
    -d c1t0d0

    Deletes the mirror between the primary disk and the mirror disk. Enter the name of your primary disk as the argument. A consolidated example session for this procedure is sketched after Step 9.

  3. Boot the node into single-user mode.
    # reboot -- -s
  4. Clean up the device IDs.

    Use the following command:

    # cldevice repair /dev/rdsk/c1t0d0 /dev/rdsk/c1t1d0
    /dev/rdsk/c1t0d0 /dev/rdsk/c1t1d0

    Updates the cluster's record of the device IDs. Enter the names of your disks separated by spaces.

  5. Confirm that the mirror has been deleted and that both disks are visible.
    # cldevice list

    The command lists both disks as visible to the cluster.

  6. Boot the node back into cluster mode.
    # reboot
  7. If you are using Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.
    # metadb -c 3 -ag /dev/rdsk/c1t0d0s4
  8. If you moved device groups off the node in Step 1, restore the device groups to the original node.
    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  9. If you moved resource groups off the node in Step 1, restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are restored. For scalable resource groups, the node list to which the groups are restored.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are restoring to the node or nodes.
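
As with the configuration procedure, the following consolidated sketch uses a hypothetical node named phys-node-1 and hypothetical group names nfs-dg and nfs-rg; substitute your own node, group, and disk names. The commands between the two reboot commands run while the node is in single-user mode.

  # clresourcegroup status -n phys-node-1
  # cldevicegroup status -n phys-node-1
  # clnode evacuate phys-node-1
  # raidctl -d c1t0d0
  # reboot -- -s
  # cldevice repair /dev/rdsk/c1t0d0 /dev/rdsk/c1t1d0
  # cldevice list
  # reboot
  # metadb -c 3 -ag /dev/rdsk/c1t0d0s4
  # cldevicegroup switch -n phys-node-1 nfs-dg
  # clresourcegroup switch -n phys-node-1 nfs-rg

Here too, the metadb command applies only if you use Solaris Volume Manager and the state database replicas were on the primary disk.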