Sun Cluster 3.1 - 3.2 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS

Adding a Storage Array to a Running Cluster

Use this procedure to add a new storage array to a running cluster. To install a storage array in a new cluster that is not yet running, use the procedure in How to Install a Storage Array.

If you need to connect the storage array to more than two nodes, repeat the steps for each additional node that connects to the storage array.


Note –

This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality.

If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation and skip steps that instruct you to shut down the node.


Procedure: How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage array.

  2. Set up and configure the storage array.

    For the procedures on configuring the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. If necessary, upgrade the storage array's controller firmware.

    Sun Cluster software requires patch version 113723-03 or later for each Sun StorEdge 3510 array in the cluster.

    See the Sun Cluster release notes documentation for information about accessing Sun's EarlyNotifier web pages. The EarlyNotifier web pages list information about any required patches or firmware levels that are available for download. For the procedure on applying the controller firmware patch, see the firmware patch README file.
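
    As a minimal sketch, assuming you have downloaded and unpacked the patch in /var/tmp (a staging directory used here only as an example), you could check whether the patch is already installed and apply it if it is not:


    # showrev -p | grep 113723
    # patchadd /var/tmp/113723-03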

  4. Configure the new storage array. Map the LUNs to the host channels.

    For the procedures on setting up logical drives and LUNs, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual or Sun StorEdge 3000 Family RAID Firmware 3.27 User's Guide.
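
    For a quick check of the resulting configuration, the Sun StorEdge CLI (sccli) can display logical drives and LUN mappings from an administrative host. This is only an illustration; the device path below is an example, and subcommand names can vary by sccli version:


    # sccli /dev/rdsk/c1t0d0s2 show logical-drives
    # sccli /dev/rdsk/c1t0d0s2 show lun-maps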

  5. To continue adding the storage array, proceed to How to Connect the Storage Array to FC Switches.

Procedure: How to Connect the Storage Array to FC Switches

Use this procedure if you plan to add a storage array to a SAN environment. If you do not plan to add the storage array to a SAN environment, go to How to Connect the Node to the FC Switches or the Storage Array.

  1. Install the SFPs in the storage array that you plan to add.

    For the procedure on installing an SFP, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure on installing a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  3. Install a fiber-optic cable between the new storage array and each FC switch.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  4. To finish adding your storage array, see How to Connect the Node to the FC Switches or the Storage Array.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Use this procedure when you add a storage array to a SAN or DAS configuration. In SAN configurations, you connect the node to the FC switches. In DAS configurations, you connect the node directly to the storage array.

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
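
For example, you can confirm that your current login carries the needed authorizations with a quick check (profile contents vary by site):


    $ auths | grep solaris.cluster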

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 12 and Step 13 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status + 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  2. Move all resource groups and device groups off the node that you plan to connect.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename 
      
  3. If you need to install host adapters in the node, install them as described in the documentation that shipped with your host adapters.

  4. If necessary, install GBICs or SFPs to the FC switches or the storage array.

    For the procedure on installing a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

    For the procedure on installing a GBIC or an SFP to a storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  5. Connect fiber-optic cables between the node and the FC switches or the storage array.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
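
    To confirm that the node sees the new connections before you proceed, you can run a non-destructive check with the standard Solaris utilities, shown here as an illustration (output formats vary by Solaris release and HBA driver):


    # cfgadm -al
    # luxadm -e port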

  6. If necessary, install the required Solaris patches for storage array support on the node.

    See the Sun Cluster release notes documentation for information about accessing Sun's EarlyNotifier web pages. The EarlyNotifier web pages list information about any required patches or firmware levels that are available for download. For the procedure on applying a required patch, see the patch README file.

  7. On the node, update the /devices and /dev entries.


    # devfsadm -C 
    
  8. On the node, update the paths to the device ID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  9. If necessary, label the LUNs on the new storage array.


    # format
    
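
    If you only want to list the disks that the node now sees before labeling them, you can let format read end-of-file from standard input so that it prints the disk selection menu and exits:


    # format </dev/null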
  10. (Optional) On the node, verify that the device IDs are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -v 
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # scdidadm -C
      # scdidadm -l
      
  11. Repeat Step 2 to Step 10 for each remaining node that you plan to connect to the storage array.

  12. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]

      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  13. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  14. Perform volume management administration to incorporate the new logical drives into the cluster.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
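
    As a minimal sketch with Solstice DiskSuite, assuming a disk set named nfs-set and a new DID device d10 (both hypothetical names), you might bring the new LUN under volume-manager control as follows:


    # metaset -s nfs-set -a -h node1 node2
    # metaset -s nfs-set -a /dev/did/rdsk/d10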

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable it before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see the same procedure and follow the troubleshooting steps to clean up the device IDs.
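
For illustration only, on Solaris 10 you can enable the Sun multipathing software (MPxIO) with the stmsboot utility, which updates the multipathing configuration and requires a reboot; on earlier Solaris releases, multipathing is enabled through driver configuration files instead:


    # stmsboot -e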