Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

Procedure: How to Configure Internal Disk Mirroring After the Cluster Is Established

Before You Begin

This procedure assumes that you have already installed your hardware and software and have established the cluster. To configure an internal disk mirror during cluster installation, see the Sun Cluster Software Installation Guide for Solaris OS.

If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

If you must apply a patch when a node is in noncluster mode, you can apply patches in a rolling fashion, one node at a time, unless a patch's instructions require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it into noncluster mode. For ease of installation, consider applying all patches at the same time to the node that you place in noncluster mode.
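Booting a node into noncluster mode is typically done from the OpenBoot PROM on SPARC systems. The following transcript is an illustrative sketch; the exact prompts and shutdown options for your system might differ:

```
# Shut the node down cleanly, then boot it outside the cluster:
# shutdown -g0 -y -i0
...
ok boot -x
```

When booted with the -x option, the node starts in noncluster mode and does not rejoin cluster membership until its next normal boot.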

For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

For required firmware, see the Sun System Handbook.


Caution –

If there are state database replicas on the disk that you are mirroring, you must recreate them during this procedure.
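To see whether the disk that you intend to mirror holds any replicas, you can list them with the Solaris Volume Manager metadb command before you begin. The output below is an illustrative sketch that assumes a replica on slice 4 of the example primary disk c1t0d0 used later in this procedure:

```
# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t0d0s4
```

If any replica lines name the primary disk, plan to re-create those replicas in Step 7.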


  1. If necessary, prepare the node for establishing the mirror.

    1. Determine the resource groups and device groups that are running on the node.

      Record this information because you use it later in this procedure to return resource groups and device groups to the node.

      • If you are using Sun Cluster 3.2, use the following commands:


        # clresourcegroup status -n nodename
        # cldevicegroup status -n nodename
        
      • If you are using Sun Cluster 3.1, use the following command:


        # scstat
        
    2. If necessary, move all resource groups and device groups off the node.

      • If you are using Sun Cluster 3.2, use the following command:


        # clnode evacuate fromnode
        
      • If you are using Sun Cluster 3.1, use the following command:


        # scswitch -S -h fromnode
        
  2. Configure the internal mirror.


    # raidctl -c c1t0d0 c1t1d0
    
    -c c1t0d0 c1t1d0

    Creates a mirror of the primary disk onto the mirror disk. Enter the name of your primary disk as the first argument and the name of the mirror disk as the second argument.
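After the raidctl command returns, the controller synchronizes the two disks in the background. You can check progress by running raidctl with no options; the output below is an illustrative sketch, and the exact format varies by controller and Solaris release:

```
# raidctl
RAID    RAID            RAID    Disk
Volume  Status          Disk    Status
----------------------------------------
c1t0d0  RESYNCING       c1t0d0  OK
                        c1t1d0  OK
```

The volume status changes from RESYNCING to OK when synchronization is complete.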

  3. Boot the node into single user mode.


    # reboot -- -s
    
  4. Clean up the device IDs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice repair /dev/rdsk/c1t0d0
      
      /dev/rdsk/c1t0d0

      Updates the cluster's record of the device IDs for the primary disk. Enter the name of your primary disk as the argument.

    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -R /dev/rdsk/c1t0d0
      
      -R /dev/rdsk/c1t0d0

      Updates the cluster's record of the device IDs for the primary disk. Enter the name of your primary disk as the argument.

  5. Confirm that the mirror has been created and only the primary disk is visible to the cluster.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      

    The output lists only the primary disk; the mirror disk is no longer visible to the cluster.

  6. Boot the node back into cluster mode.


    # reboot
    
  7. If you are using Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.


    # metadb -a /dev/rdsk/c1t0d0s4
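To confirm that the replicas were re-created, you can list them again. This sketch assumes the example slice used in this procedure:

```
# metadb -i
```

The -i option appends a legend that explains the status flags. Check that the output includes an entry for slice 4 of the primary disk and that no replica is flagged with errors.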
    
  8. If you moved device groups off the node in Step 1, restore device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  9. If you moved resource groups off the node in Step 1, move all resource groups back to the node.

    • If you are using Sun Cluster 3.2, use the following command:

      Perform the following step for each resource group you want to return to the original node.


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename