Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array Manual

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Use this procedure when you add a storage array to a storage area network (SAN) or direct-attached storage (DAS) configuration. In SAN configurations, you connect the node to the FC switches. In DAS configurations, you connect the node directly to the storage array.

Steps
  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 12 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
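If the full scstat listing is more than you need, you can limit the output to the two categories that this step records. The -g and -D options report resource group status and device group status, respectively:


    # scstat -g
    # scstat -D
    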
  2. Move all resource groups and device groups off the node that you plan to connect.


    # scswitch -S -h from-node
    
  3. Determine whether you need to install host adapters in the node.

    • If no, skip to Step 4.

    • If yes, see the documentation that shipped with your host adapters.

  4. If necessary, install GBICs or SFPs in the FC switches or the storage array.

    For the procedure on installing a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware.

    For the procedure on installing a GBIC or an SFP in a storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  5. Connect fiber-optic cables between the node and the FC switches or the storage array.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  6. If necessary, install the required Solaris patches for storage array support on the node.

    See the Sun Cluster release notes documentation for information about accessing Sun's EarlyNotifier web pages, which list the required patches and firmware levels that are available for download. For the procedure on applying a host adapter firmware patch, see the firmware patch README file.

  7. On the node, update the /devices and /dev entries.


    # devfsadm -C
    
  8. On the node, update the paths to the device ID instances.


    # scgdevs
    
  9. If necessary, label the LUNs on the new storage array.


    # format
    
  10. (Optional) On the node, verify that the device IDs are assigned to the new LUNs.


    # scdidadm -C
    # scdidadm -l
    
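To compare device ID mappings across all cluster nodes rather than only the local node, you can also use the -L option of scdidadm, which lists the mappings for every node:


    # scdidadm -L
    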
  11. Repeat Step 2 through Step 10 for each remaining node that you plan to connect to the storage array.

  12. Return the resource groups and device groups that you identified in Step 1 to the original nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster system administration documentation.

  13. Perform volume management administration to incorporate the new logical drives into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
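    As one illustration of this step, a minimal Solstice DiskSuite sequence places the new logical drives in a shared disk set. The disk set name new-set, the node names phys-node-1 and phys-node-2, and the DID device d10 shown here are hypothetical; substitute the names and the device IDs from your own configuration (for example, as reported by scdidadm -l):


    # metaset -s new-set -a -h phys-node-1 phys-node-2
    # metaset -s new-set -a /dev/did/rdsk/d10
    

    For VERITAS Volume Manager, the equivalent task is to initialize the new disks and add them to a disk group; see your VERITAS Volume Manager documentation for the corresponding commands.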

Next Steps

The best way to enable multipathing for a cluster is to install and enable the multipathing software before you install the Sun Cluster software and establish the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.