Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Procedure: How to Install a Storage Array in a New Cluster, Using a Partner-Group Configuration

Use this procedure to install and configure the first storage array partner group in a new cluster. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster software installation documentation and your server hardware manual.

Make certain that you are using the correct procedure. This procedure contains instructions about how to install a partner group in a new cluster, before the cluster is operational. For other array-installation situations, see the related procedures in this manual.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you check the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if a device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
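
For example, on Sun Cluster 3.2 a check-and-repair sequence might look like the following, where the device name is a hypothetical example. On Sun Cluster 3.1, substitute the scdidadm -c and scdidadm -R commands.


# cldevice check
# cldevice repair /dev/rdsk/c1t3d0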


  1. Install the host adapters in the nodes to be connected to the storage arrays.

    To install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    To install an FC switch, see the documentation that shipped with your switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


  3. If you are installing Sun StorEdge T3 arrays, install the media interface adapters (MIAs) in the Sun StorEdge T3 arrays that you are installing, as shown in Figure 1–2. Sun StorEdge T3+ arrays do not require MIAs; if you are installing only T3+ arrays, skip this step.

    Figure 1–2 Installing a Partner-Group Configuration


    To install a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  4. If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–2.

    To install a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  5. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage arrays are to reside.

    This RARP server enables you to assign an IP address to the new storage arrays. Assign an IP address by using the storage array's unique MAC address. For the procedure about how to set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
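
    For illustration only, a minimal RARP setup on a Solaris administrative host on the same subnet might resemble the following sketch. The host name, IP address, and MAC address here are hypothetical; use the MAC address that is printed on your storage array.


    # echo "192.168.1.10   t3-array-1" >> /etc/hosts
    # echo "0:20:f2:0:3e:4d   t3-array-1" >> /etc/ethers
    # /usr/sbin/in.rarpd -ad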

  6. Cable the storage arrays, as shown in Figure 1–2.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.

    2. Connect the Ethernet cables from each storage array to the LAN.

    3. Connect interconnect cables between the two storage arrays of each partner group.

    4. Connect power cords to each storage array.

    To install fiber-optic, Ethernet, and interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  7. Power on the storage arrays. Verify that all components are powered on and functional.

    To power on the storage arrays and verify the hardware configuration, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Administer the storage arrays' network settings.

    Use the telnet command to access the master controller unit and administer the storage arrays. To administer the storage array network addresses and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays. For example, Figure 1–2 shows the master controller unit of the partner group as the lower storage array. In this diagram, the interconnect cables are connected to the second port of each interconnect card on the master controller unit.
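
    For illustration only, such a session might resemble the following sketch. The host name and addresses are hypothetical; the exact command set is described in the manual cited above.


    # telnet t3-array-1
    t3-array-1:/:<1> set ip 192.168.1.10
    t3-array-1:/:<2> set netmask 255.255.255.0
    t3-array-1:/:<3> set gateway 192.168.1.1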

  9. Install any required storage array controller firmware.

    For partner-group configurations, use the telnet command to access the master controller unit. Install the required controller firmware.
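
    For example, you can confirm the currently installed controller firmware level from a telnet session to the master controller unit by running the array's ver command; the host name is a hypothetical example.


    # telnet t3-array-1
    t3-array-1:/:<1> ver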

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
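
    For example, a rebooting patch might be applied to one node in noncluster mode as follows; the patch ID is a hypothetical example.


    ok boot -x
    # patchadd /var/tmp/123456-01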

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. Ensure that each storage array has a unique target address.

    To verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  11. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  12. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  13. Ensure that both storage array controllers are online.

    If one or both controllers are not online, see the Sun StorEdge T3 and T3+ Array Administrator's Guide for instructions about how to bring them online.
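
    Assuming the command set that is described in the Sun StorEdge T3 and T3+ Array Administrator's Guide, the settings in Steps 11 through 13 can be verified and adjusted from a telnet session to the master controller unit, as in this hypothetical sketch. The sys list output shows the current cache, mirror, and mp_support values, and sys stat reports the state of each controller.


    t3-array-1:/:<1> sys list
    t3-array-1:/:<2> sys cache auto
    t3-array-1:/:<3> sys mirror auto
    t3-array-1:/:<4> sys mp_support mpxio
    t3-array-1:/:<5> sys stat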

  14. (Optional) Configure the storage arrays with the desired logical volumes.

    To create and initialize a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. To mount a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
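
    For illustration only, creating, initializing, and mounting a single RAID 5 volume might resemble the following hypothetical session; the volume name, drive range, and standby drive are examples.


    t3-array-1:/:<1> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3-array-1:/:<2> vol init v0 data
    t3-array-1:/:<3> vol mount v0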

  15. Reset the storage arrays.

    To reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
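
    For example, from a telnet session to the master controller unit, the following hypothetical command resets the array:


    t3-array-1:/:<1> reset -y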

  16. On all nodes, install the Solaris operating environment. Apply the required Solaris patches for Sun Cluster software and storage array support.

    To install the Solaris operating environment, see How to Install Solaris Software in Sun Cluster Software Installation Guide for Solaris OS.

  17. On all nodes, install any required patches or software for Solaris I/O multipathing support.

    To install the Solaris I/O multipathing software, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.
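
    On the Solaris 10 OS, for example, the stmsboot utility can enable Solaris I/O multipathing; on earlier Solaris releases with Sun StorEdge Traffic Manager software, you enable it by setting mpxio-disable to "no" in /kernel/drv/scsi_vhci.conf. A hypothetical Solaris 10 sketch:


    # stmsboot -e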

  18. On all nodes, update the /devices and /dev entries.


    # devfsadm -C 
    
  19. On all nodes, confirm that all storage arrays that you installed are visible.


    # luxadm probe
    
See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.