Sun Cluster 3.0-3.1 With Sun StorEdge 6120 Array Manual for Solaris OS

How to Add a Dual-Controller Configuration to an Existing Cluster

Use this procedure to add a dual-controller configuration to a running cluster. Other array-installation situations are covered by separate procedures in this manual.

This procedure defines Node N as the node with which you begin working.

Procedure: How to Perform Initial Configuration Tasks on the Storage Array

Steps
  1. Power on the storage arrays.


    Note –

    The storage arrays might require several minutes to boot.


    For the procedure about how to power on storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. These settings include the following:

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card.

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
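    As an illustrative sketch only (the array name and all address values below are examples, not values from this manual), these settings are typically entered at the master controller unit's command line over a serial or telnet session:


    array00:/:<1> set ip 192.168.0.40
    array00:/:<2> set gateway 192.168.0.1
    array00:/:<3> set netmask 255.255.255.0
    array00:/:<4> set hostname array00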

  3. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
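    For example (the unit, port, and target ID values below are illustrative), you can display the current target assignments and set a controller port's target ID from the array's command line:


    array00:/:<1> port list
    array00:/:<2> port set u1p1 targetid 2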

  4. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
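    As a sketch (the array prompt is illustrative), you can verify and set these values from the array's command line:


    array00:/:<1> sys list
    array00:/:<2> sys cache auto
    array00:/:<3> sys mirror auto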

  5. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
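    For example (a sketch, not output from this manual), the parameter can be changed and then confirmed from the array's command line:


    array00:/:<1> sys mp_support mpxio
    array00:/:<2> sys list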

  6. Install any required controller firmware for the storage arrays you are adding.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

  7. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
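    As an illustrative sketch, the reset is issued from the array's command line; if your firmware supports it, the -y option suppresses the confirmation prompt:


    array00:/:<1> reset -y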

Procedure: How to Connect the Storage Array to FC Switches

Steps
  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage arrays and the local area network (LAN).

  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Steps
  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information; you use it in Step 18 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  2. Move all resource groups and device groups off Node N.


    # scswitch -S -h from-node
    
  3. Do you need to install host adapters in Node N?

    • If no, skip to Step 10.

    • If yes, proceed to Step 4.

  4. Is the host adapter that you are installing the first host adapter on Node N?

    • If no, skip to Step 6.

    • If yes, determine whether the required drivers for the host adapter are already installed on this node. For the required packages, see the documentation that shipped with your host adapters.

  5. Are the required support packages already installed?

    • If yes, skip to Step 6.

    • If no, install the packages.

    The support packages are located in the Product directory of the Solaris CD-ROM.
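    As a hedged example (the package name SUNWsan and the CD-ROM path below are illustrative; use the package names listed in your host adapter documentation), packages are added with the pkgadd utility:


    # pkgadd -d /cdrom/cdrom0/Product SUNWsan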

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  7. Install the host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.
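    For example, on SPARC based systems you can typically boot into noncluster mode from the OpenBoot PROM prompt:


    ok boot -x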

  9. If necessary, upgrade the host adapter firmware on Node N.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. Install the required Solaris patches for storage array support on Node N.

    Use the PatchPro tool, as described in Step 9, to identify and download the required patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  13. Perform a reconfiguration boot on Node N to create the new Solaris device files and links.


    # reboot -- -r
    
  14. On Node N, update the paths to the DID instances.


    # scgdevs
    
  15. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  16. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.


    # scdidadm -C
    # scdidadm -l
    
  17. Repeat Step 2 through Step 16 for each remaining node that you plan to connect to the storage array.

  18. (Optional) Return the resource groups and device groups that you identified in Step 1 to the original nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.

  19. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
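    As an illustrative Solstice DiskSuite/Solaris Volume Manager sketch (the disk set name, node names, and DID device below are hypothetical), you might create a disk set, which Sun Cluster registers as a device group, and add a new LUN to it:


    # metaset -s newset -a -h node1 node2
    # metaset -s newset -a /dev/did/rdsk/d10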

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.