Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS

Chapter 6 Maintaining Platform Hardware

This chapter contains information about node hardware in a cluster environment. It contains the following topics:

- Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring
- Configuring Cluster Nodes With a Single, Dual-Port HBA

Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring

Some servers support the mirroring of internal hard drives (internal hardware disk mirroring or integrated mirroring) to provide redundancy for node data. To use this feature in a cluster environment, follow the steps in this section.

Depending on the version of the Solaris operating system you use, you might need to install a patch to correct change request 5023670 and ensure the proper operation of internal mirroring. Check the PatchPro site to find the patch for your server.

The best way to set up hardware disk mirroring is to perform RAID configuration during cluster installation, before you configure multipathing. (For instructions on performing this configuration, see Chapter 1, Sun Cluster 3.1 8/05 Release Notes Supplement, in Sun Cluster 3.0-3.1 Release Notes Supplement.) If you need to change your mirroring configuration after you have established the cluster, you must perform some cluster-specific steps to clean up the device IDs.


Note –

Specific servers might have additional restrictions. See the documentation that shipped with your server hardware.


For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.

This section contains the following procedures:

- How to Configure Internal Disk Mirroring After the Cluster Is Established
- How to Remove an Internal Disk Mirror

How to Configure Internal Disk Mirroring After the Cluster Is Established

Before You Begin

This procedure assumes that you have already installed your hardware and software and have established the cluster. To configure an internal disk mirror during cluster installation, see the Sun Cluster Installation Guide.

Check the PatchPro site for any patches required for using internal disk mirroring.

PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster, which makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

For third-party firmware patches, see the SunSolve(SM) Online site at http://sunsolve.ebay.sun.com.

  1. If necessary, prepare the node for establishing the mirror.

    1. Determine the resource groups and device groups that are running on the node.

      Record this information because you will use it later in this procedure to return resource groups and device groups to the node.


      # scstat
      
    2. If necessary, move all resource groups and device groups off the node.


      # scswitch -S -h fromnode
      
  2. Configure the internal mirror.


    # raidctl -c c1t0d0 c1t1d0
    
    -c c1t0d0 c1t1d0

    Creates a mirror of the primary disk onto the mirror disk. Enter the name of your primary disk as the first argument and the name of the mirror disk as the second argument.
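    Before rebooting, you can confirm that the new volume exists and watch its resynchronization state. On many versions of raidctl(1M), running the command with no arguments prints the status of each RAID volume; check your man page, because the exact syntax and output vary by Solaris release:

```
# Display RAID volume status; exact syntax and output vary by
# raidctl(1M) version. Wait for the new volume to finish
# resynchronizing before you proceed to the next step.
raidctl
```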

  3. Boot the node into single-user mode.


    # reboot -- -s
    
  4. Clean up the device IDs.


    # scdidadm -R /dev/rdsk/c1t0d0
    
    -R /dev/rdsk/c1t0d0

    Updates the cluster's record of the device IDs for the primary disk. Enter the name of your primary disk as the argument.

  5. Confirm that the mirror has been created and only the primary disk is visible to the cluster.


    # scdidadm -l  
    

    The command lists only the primary disk as visible to the cluster.
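    To make this check scriptable, you can search the captured listing for the mirror disk's name. A minimal sketch, assuming the hypothetical scdidadm -l output below was captured on a node named phys-schost-1 (all device names and DID numbers are illustrative):

```shell
# Hypothetical scdidadm -l output captured after mirroring;
# after DID cleanup, only the primary disk c1t0d0 should appear.
listing='1        phys-schost-1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d1'

# If the mirror disk c1t1d0 still appears, repeat the scdidadm -R cleanup.
if echo "$listing" | grep -q 'c1t1d0'; then
    echo 'mirror disk still visible'
else
    echo 'only primary disk visible'
fi
```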

  6. Boot the node back into cluster mode.


    # reboot 
    
  7. If you are using Solstice DiskSuite or Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.


    # metadb -afc 3 /dev/rdsk/c1t0d0s4
    
  8. If you moved device groups off the node in Step 1, move all device groups back to the node.

    Perform the following step for each device group you want to return to the original node.


    # scswitch -z -D devicegroup -h nodename
    

    In this command, devicegroup is the name of the device group that is returned to the node.
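    If you recorded several device groups in Step 1, the switchback can be scripted as a simple loop. A sketch in which the device-group names and the node name phys-schost-1 are hypothetical placeholders; the loop only prints each command, so remove the echo to actually run them on a cluster node:

```shell
# Hypothetical device-group names recorded in Step 1
groups='dg-schost-1 dg-schost-2'

# Print one scswitch switchback command per device group.
# Remove 'echo' to execute the commands on a cluster node.
for dg in $groups; do
    echo scswitch -z -D "$dg" -h phys-schost-1
done
```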

  9. If you moved resource groups off the node in Step 1, move all resource groups back to the node.


    # scswitch -z -g resourcegroup -h nodename
    

How to Remove an Internal Disk Mirror

  1. If necessary, prepare the node for removing the mirror.

    1. Determine the resource groups and device groups that are running on the node.

      Record this information because you will use it later in this procedure to return resource groups and device groups to the node.


      # scstat
      
    2. If necessary, move all resource groups and device groups off the node.


      # scswitch -S -h fromnode
      
  2. Remove the internal mirror.


    # raidctl -d c1t0d0
    
    -d c1t0d0

    Deletes the mirror volume that contains the primary disk. Enter the name of your primary disk as the argument.

  3. Boot the node into single-user mode.


    # reboot -- -s
    
  4. Clean up the device IDs.


    # scdidadm -R /dev/rdsk/c1t0d0
    # scdidadm -R /dev/rdsk/c1t1d0
    
    -R /dev/rdsk/c1t0d0
    -R /dev/rdsk/c1t1d0

    Updates the cluster's record of the device IDs. Run the command once for each disk, entering the disk's name as the argument.

    Confirm that the mirror has been deleted and that both disks are visible.


    # scdidadm -l  
    

    The command lists both disks as visible to the cluster.

  5. Boot the node back into cluster mode.


    # reboot 
    
  6. If you are using Solstice DiskSuite or Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.


    # metadb -afc 3 /dev/rdsk/c1t0d0s4
    
  7. If you moved device groups off the node in Step 1, return the device groups to the original node.


    # scswitch -z -D devicegroup -h nodename
    
  8. If you moved resource groups off the node in Step 1, return the resource groups to the original node.


    # scswitch -z -g resourcegroup -h nodename
    

Configuring Cluster Nodes With a Single, Dual-Port HBA

This section explains how to use dual-port host bus adapters (HBAs) to provide both cluster connections to shared storage. While Sun Cluster supports this configuration, it is less redundant than the recommended configuration. If you choose to use this configuration, you must understand the risks that a dual-port HBA poses to the availability of your application.

This section contains the following topics:

- Risks and Trade-offs When Using One Dual-Port HBA
- Supported Configurations When Using a Single, Dual-Port HBA

Risks and Trade-offs When Using One Dual-Port HBA

Sun recommends that you strive for as much separation and hardware redundancy as possible when connecting each cluster node to shared data storage. This approach provides the following advantages to your cluster:

Sun Cluster is usually layered on top of a volume manager, mirrored data with independent I/O paths, or a multipathed I/O link to a hardware RAID arrangement. Therefore, the cluster software does not expect a node to ever lose access to shared data. These redundant paths to storage ensure that the cluster can survive any single failure.

Sun Cluster does support certain configurations that use a single, dual-port HBA to provide the required two paths to the shared data. However, using a single, dual-port HBA for connecting to shared data increases the vulnerability of your cluster. If this single HBA fails and takes down both ports connected to the storage device, the node is unable to reach the stored data. How the cluster handles such a dual-port failure depends on several factors, including the volume manager configuration and whether the affected node is the primary node for its device groups.

If you choose one of these configurations for your cluster, you must understand that the supported configurations mitigate, but do not eliminate, the risks to high availability and the other advantages described earlier in this section.

Supported Configurations When Using a Single, Dual-Port HBA

Sun Cluster supports the following volume manager configurations when you use a single, dual-port HBA for connecting to shared data:

- Solaris Volume Manager, with at least two disks per diskset and without dual-string mediators
- Solaris Volume Manager for Sun Cluster

Cluster Configuration When Using Solaris Volume Manager and a Single Dual-Port HBA

If the Solaris Volume Manager metadbs lose replica quorum for a diskset on a cluster node, the volume manager panics the cluster node. Sun Cluster then takes over the diskset on a surviving node and your application fails over to a secondary node.

To ensure that the node panics and is fenced off if it loses its connection to shared storage, configure each metaset with at least two disks. In this configuration, the metadbs stored on the disks create their own replica quorum for each diskset.

Dual-string mediators are not supported in Solaris Volume Manager configurations that use a single dual-port HBA. In this configuration, dual-string mediators can prevent the service from failing over to a new node.

Configuration Requirements

When configuring Solaris Volume Manager metasets, ensure that each metaset contains at least two disks. Do not configure dual-string mediators.
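As an illustration of the two-disk requirement, the following metaset(1M) commands create a diskset that spans two shared disks, so that the diskset's state database replicas span two spindles. The diskset name, node names, and DID device names are hypothetical:

```
# Create the diskset and register both cluster nodes as hosts
# (diskset, node, and DID names are illustrative).
metaset -s ds1 -a -h phys-schost-1 phys-schost-2

# Add two shared disks so that the diskset's replicas can form
# their own replica quorum across two spindles.
metaset -s ds1 -a /dev/did/rdsk/d3 /dev/did/rdsk/d4
```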

Expected Failure Behavior

When a dual-port HBA fails with both ports in this configuration, the cluster behavior depends on whether the affected node is primary for the diskset.

Failure Recovery

Follow the instructions for replacing an HBA in your storage device documentation.

Cluster Configuration When Using Solaris Volume Manager for Sun Cluster and a Single Dual-Port HBA

Because Solaris Volume Manager for Sun Cluster uses raw disks only and is specific to Oracle RAC, no special configuration is required.

Expected Failure Behavior

When a dual-port HBA fails and takes down both ports in this configuration, the cluster behavior depends on whether the affected node is the current master for the multi-owner diskset.

Failure Recovery

Follow the instructions for replacing an HBA in your storage device documentation.