Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Procedure: How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Use this procedure to migrate your storage arrays from a single-controller (noninterconnected) configuration to a partner-group (interconnected) configuration. This procedure assumes that the two storage arrays in the partner-group configuration are correctly isolated from each other on separate FC switches. Do not disconnect the cables from the FC switches or the nodes.


Caution –

You must be a Sun service provider to perform this procedure. If you need to migrate from a single-controller configuration to a partner-group configuration, contact your Sun service provider.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Back up all data on the storage arrays before you remove the storage arrays from the Sun Cluster configuration.

  2. Remove from the cluster configuration the noninterconnected storage arrays that are to form your partner group.

    Follow the procedure in How to Remove a Storage Array in a Single-Controller Configuration.

  3. Connect and configure the single storage arrays to form a partner group.

    Follow the procedure in the Sun StorEdge T3 and T3+ Array Field Service Manual.

  4. Ensure that each storage array has a unique target address.

    For the procedure about how to verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
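As an informal sketch of the verification in Step 4, you can open each storage array's CLI over telnet and list the port target IDs. The array hostname, port name, and target ID values below are illustrative; see the Sun StorEdge T3 and T3+ Array Configuration Guide for the authoritative procedure.

```shell
# Hypothetical session; array name, port name, and target IDs are examples only.
telnet t3-array-1            # open the array's command-line interface
# At the array prompt, list the ports and their target IDs:
#   t3-array-1:/:<1> port list
#   port    targetid  addr_type  status  host  wwn
#   u1p1    1         hard       online  sun   ...
# If two arrays in the partner group share a target ID, assign a unique one
# and reset the array for the change to take effect:
#   t3-array-1:/:<2> port set u1p1 targetid 2
#   t3-array-1:/:<3> reset
```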

  5. Ensure that the cache and mirror settings for each storage array are set to auto.

  6. Ensure that the mp_support parameter for each storage array is set to mpxio.
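The settings in Steps 5 and 6 can be checked, and changed if necessary, from each array's CLI. The following is a sketch based on the T3/T3+ `sys` commands; the prompt and the abbreviated output are illustrative.

```shell
# On each array's CLI (reached via telnet); the values shown are the
# required ones for a partner-group configuration.
#   t3:/:<1> sys list            # displays cache, mirror, and mp_support
#   cache       auto
#   mirror      auto
#   mp_support  mpxio
# If any value differs, set it:
#   t3:/:<2> sys cache auto
#   t3:/:<3> sys mirror auto
#   t3:/:<4> sys mp_support mpxio
```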

  7. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.
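If a patch must be applied with the node out of the cluster, the sequence described above might look like the following sketch. The patch ID and its location are placeholders; substitute the actual patch that the firmware or Solaris support requires.

```shell
# From the OpenBoot ok prompt, boot the node in noncluster mode:
#   ok boot -x
# Apply the patch; 123456-01 and its path are placeholder values:
patchadd /var/tmp/123456-01
# Reboot the node so that it rejoins the cluster:
init 6
```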

  8. If necessary, install the required Solaris patches for storage array support on Node A.

    For patch management information, see Step 7. For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  9. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  10. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    # boot -r
    
  11. On Node A, update the /devices and /dev entries.


    # devfsadm -C 
    
  12. On Node A, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
  13. Label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
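Labeling is performed from the host with the Solaris format utility. The following interactive sketch is illustrative only; the disk selection number and device name depend on your configuration, and the Sun StorEdge T3 and T3+ Array Administrator's Guide remains the authoritative reference.

```shell
# Interactive sketch; the disk number and device name are examples.
format
#   Searching for disks...done
#   AVAILABLE DISK SELECTIONS:
#     2. c2t1d0 <SUN-T300-0200 ...>
#   Specify disk (enter its number): 2
#   format> label
#   Ready to label disk, continue? y
#   format> quit
```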

  14. If necessary, upgrade the host adapter firmware on Node B.

    For patch management information, see Step 7.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  15. If necessary, install the required Solaris patches for storage array support on Node B.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  16. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  17. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    # boot -r
    
  18. On Node B, update the /devices and /dev entries.


    # devfsadm -C 
    
  19. On Node B, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
  20. (Optional) On Node B, verify that the DIDs are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeB -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  21. On one node that is attached to the new storage arrays, reset the SCSI reservation state.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice repair
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -R
      

    Note –

    Repeat this command on the same node for each storage array LUN that you are adding to the cluster.
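For example, assuming the new LUNs appeared as DID devices d5 and d6 (the device names are illustrative), the repeated repair commands might look like the following:

```shell
# Sun Cluster 3.2: repair the reservation state of each new DID device
cldevice repair /dev/did/rdsk/d5
cldevice repair /dev/did/rdsk/d6
# Sun Cluster 3.1 equivalent:
#   scdidadm -R d5
#   scdidadm -R d6
```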


  22. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
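As one illustration of Step 22, with Solstice DiskSuite/Solaris Volume Manager you might add the new DID devices to a disk set and build a mirror over them. All set, device, and metadevice names below are placeholders; consult your volume manager documentation for the procedure that matches your layout.

```shell
# Add the new DID devices to an existing disk set (names are examples):
metaset -s demo-set -a /dev/did/rdsk/d5 /dev/did/rdsk/d6
# Create a submirror on each new device, then mirror them:
metainit -s demo-set d10 1 1 /dev/did/rdsk/d5s0
metainit -s demo-set d20 1 1 /dev/did/rdsk/d6s0
metainit -s demo-set d30 -m d10
metattach -s demo-set d30 d20
```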

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before you install the Sun Cluster software and establish the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, follow that same procedure and perform its troubleshooting steps to clean up the device IDs.