Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Installing Sun StorEdge T3 and T3+ Arrays

This section describes how to install storage arrays in a cluster. The following table lists the installation procedures.

Table 1–1 Task Map: Installing a Storage Array

Task: Installing arrays in a new cluster, using a single-controller configuration
Information: How to Install a Storage Array in a New Cluster Using a Single-Controller Configuration

Task: Installing arrays in a new cluster, using a partner-group configuration
Information: How to Install a Storage Array in a New Cluster, Using a Partner-Group Configuration

Task: Adding arrays to an existing cluster, using a single-controller configuration
Information: How to Add a Storage Array to an Existing Cluster, Using a Single-Controller Configuration

Task: Adding arrays to an existing cluster, using a partner-group configuration
Information: How to Add a Storage Array to an Existing Cluster, Using a Partner-Group Configuration

Task: Upgrading a T3 storage array to a T3+ array
Information: How to Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller

Task: Migrating a single-controller array configuration to a partner-group configuration
Information: How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Procedure: How to Install a Storage Array in a New Cluster Using a Single-Controller Configuration

Use this procedure to install and configure the first storage array in a new cluster, using a single-controller configuration. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster software installation documentation and your server hardware manual.

The remaining procedures in this chapter contain instructions for other array-installation situations.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
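
For example, on one cluster node you might check the device ID configuration and then repair a device that reports the error. The device name in this sketch is hypothetical; substitute the device that appears in the error message. On Sun Cluster 3.1, the equivalent commands are scdidadm -c and scdidadm -R.


# cldevice check
# cldevice repair /dev/rdsk/c1t5d0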


  1. Install the host adapters in the nodes that are to be connected to the storage array.

    To install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the FC hubs/switches.

    To install FC hubs/switches, see the documentation that shipped with your FC hub/switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage arrays are to reside.

    This RARP server enables you to assign an IP address to the new storage array by using each storage array's unique MAC address.

    To set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
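
    As an illustration only, a minimal RARP setup on a Solaris host on the same subnet might look like the following. The MAC address, host name, and IP address are hypothetical; substitute your storage array's values. On Solaris 10 the RARP service can typically be enabled through SMF, as shown below; on Solaris 8 or 9, start /usr/sbin/in.rarpd -a instead.


    # echo "8:0:20:7d:93:7e  t3-array-1" >> /etc/ethers
    # echo "192.168.1.10  t3-array-1" >> /etc/hosts
    # svcadm enable svc:/network/rarp:default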

  4. If you are not adding a StorEdge T3+ array, install the media interface adapters (MIAs) in the storage array that you are installing, as shown in Figure 1–1.

    To install a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  5. If you are adding a StorEdge T3+ array, and if necessary, install gigabit interface converters (GBICs) or Small Form-Factor Pluggables (SFPs) in the FC hubs/switches, as shown in Figure 1–1.

    The GBICs or SFPs let you connect the FC hubs/switches to the storage array that you are installing. To install an FC hub/switch GBIC or an SFP, see the documentation that shipped with your FC hub/switch hardware.

  6. Install fiber-optic cables between the FC hubs/switches and the storage array, as shown in Figure 1–1.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  7. Install fiber-optic cables between the FC hubs/switches and the nodes, as shown in Figure 1–1.

  8. Install the Ethernet cables between the storage array and the Local Area Network (LAN), as shown in Figure 1–1.

    Figure 1–1 Installing a Single-Controller Configuration



    Note –

    Figure 1–1 shows how to cable two storage arrays to enable data sharing and host-based mirroring. This configuration prevents a single point of failure.


  9. Install power cords to each storage array that you are installing.

  10. Power on the storage array and confirm that all components are powered on and functional.


    Note –

    The storage array might require a few minutes to boot.


    To power on a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  11. (Optional) Configure the storage array with logical volumes.

    To configure the storage array with logical volumes, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
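
    For illustration, creating and mounting a RAID 5 volume from the array's telnet session might look like the following sketch. The prompt, volume name, drive grouping, and RAID level shown here are hypothetical; verify the exact vol command syntax in the Administrator's Guide before you use it.


    t3:/:<1> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3:/:<2> vol init v0 data
    t3:/:<3> vol mount v0
    t3:/:<4> vol list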

  12. Access each storage array that you are adding. Install the required controller firmware for the storage array.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
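
    For example, applying a hypothetical patch 123456-01 to a node in noncluster mode might look like the following sketch. The patch ID and location are placeholders; a SPARC OpenBoot prompt is shown.


    ok boot -x
    # patchadd /var/tmp/123456-01
    # init 6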

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  13. Ensure that this new storage array has a unique target address.

    To verify and assign a target address, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  14. Reset the storage array.

    To reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. On the nodes, install the Solaris operating environment. Apply any required Solaris patches for Sun Cluster software and storage array support.

    To install the Solaris operating environment, see your Sun Cluster software installation documentation. For the location of required Solaris patches and installation instructions for Sun Cluster software support, see your Sun Cluster release notes documentation. For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

Procedure: How to Install a Storage Array in a New Cluster, Using a Partner-Group Configuration

Use this procedure to install and configure the first storage array partner group in a new cluster. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster software installation documentation and your server hardware manual.

Make certain that you are using the correct procedure. This procedure contains instructions about how to install a partner group into a new cluster, before the cluster is operational. The remaining procedures in this chapter contain instructions for other array-installation situations.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.


  1. Install the host adapters in the nodes to be connected to the storage arrays.

    To install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    To install an FC switch, see the documentation that shipped with your switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


  3. If you are installing Sun StorEdge T3 arrays, install the media interface adapters (MIAs) in the Sun StorEdge T3 arrays that you are installing, as shown in Figure 1–2.

    Figure 1–2 Installing a Partner-Group Configuration


    To install a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  4. If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–2.

    To install a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  5. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage arrays are to reside.

    This RARP server enables you to assign an IP address to the new storage arrays. Assign an IP address by using the storage array's unique MAC address. For the procedure about how to set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Cable the storage arrays, as shown in Figure 1–2.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.

    2. Connect the Ethernet cables from each storage array to the LAN.

    3. Connect interconnect cables between the two storage arrays of each partner group.

    4. Connect power cords to each storage array.

    To install fiber-optic, Ethernet, and interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  7. Power on the storage arrays. Verify that all components are powered on and functional.

    To power on the storage arrays and verify the hardware configuration, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Administer the storage arrays' network settings.

    Use the telnet command to access the master controller unit and administer the storage arrays. To administer the storage array network addresses and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays. For example, Figure 1–2 shows the master controller unit of the partner group as the lower storage array. In this diagram, the interconnect cables are connected to the second port of each interconnect card on the master controller unit.
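
    As a rough sketch, assigning the network settings from the master controller unit's telnet session might resemble the following. The addresses and host name are hypothetical and the prompt is illustrative; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for the exact set command syntax.


    t3:/:<1> set ip 192.168.1.10
    t3:/:<2> set netmask 255.255.255.0
    t3:/:<3> set gateway 192.168.1.1
    t3:/:<4> set hostname t3-master
    t3:/:<5> reset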

  9. Install any required storage array controller firmware.

    For partner-group configurations, use the telnet command to access the master controller unit. Install the required controller firmware.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. Ensure that each storage array has a unique target address.

    To verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  11. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  12. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  13. Ensure that both storage array controllers are online.

    For more information about how to correct the situation if both controllers are not online, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
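
    The following telnet-session sketch illustrates how Step 11 through Step 13 might look on the master controller unit. The output is abbreviated and illustrative; confirm the sys command syntax and output format in the Administrator's Guide.


    t3:/:<1> sys list
    cache            : auto
    mirror           : auto
    mp_support       : mpxio
    ...
    t3:/:<2> sys cache auto
    t3:/:<3> sys mirror auto
    t3:/:<4> sys mp_support mpxio
    t3:/:<5> sys stat
    Unit   State      Role    Partner
     1     ONLINE     Master    2
     2     ONLINE     AlterM    1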

  14. (Optional) Configure the storage arrays with the desired logical volumes.

    To create and initialize a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. To mount a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. Reset the storage arrays.

    To reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  16. On all nodes, install the Solaris operating environment. Apply the required Solaris patches for Sun Cluster software and storage array support.

    To install the Solaris operating environment, see How to Install Solaris Software in Sun Cluster Software Installation Guide for Solaris OS.

  17. Install any required patches or software for Solaris I/O multipathing software support on the nodes.

    To install the Solaris I/O multipathing software, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.
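
    For example, on Solaris 10 the multipathing software is typically enabled with the stmsboot command, which prompts for a reboot. On Solaris 8 or 9 with Sun StorEdge Traffic Manager software, you instead set mpxio-disable="no" in /kernel/drv/scsi_vhci.conf and perform a reconfiguration reboot. Treat this as a sketch and follow the referenced guide for your release.


    # stmsboot -e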

  18. On all nodes, update the /devices and /dev entries.


    # devfsadm -C 
    
  19. On all nodes, confirm that all storage arrays that you installed are visible.


    # luxadm display 
    
See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

Procedure: How to Add a Storage Array to an Existing Cluster, Using a Single-Controller Configuration

This procedure contains instructions about how to add a new storage array to a running cluster in a single-controller configuration. The remaining procedures in this chapter contain instructions for other array-installation situations.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  2. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which you want the new storage arrays to be located.

  3. Assign an IP address to the new storage arrays.

    This RARP server enables you to assign an IP address to the new storage array by using the storage array's unique MAC address.

    To set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  4. If you are adding a StorEdge T3 array, install the media interface adapter (MIA) in the storage array that you are adding, as shown in Figure 1–3.

    If you are adding a StorEdge T3+ array, skip this step.

    To install a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  5. If necessary, install gigabit interface converters (GBICs) or Small Form-Factor Pluggables (SFPs) in the FC hub/switch, as shown in Figure 1–3.

    The GBICs or SFPs enable you to connect the FC hubs/switches to the storage array that you are adding.

    To install an FC hub/switch GBIC or an SFP, see the documentation that shipped with your FC hub/switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  6. Install the Ethernet cable between the storage array and the Local Area Network (LAN), as shown in Figure 1–3.

  7. Power on the storage array.


    Note –

    The storage array might require a few minutes to boot.


    To power on a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Access the storage array that you are adding. If necessary, install the required controller firmware for the storage array.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. If this new storage array does not yet have a unique target address, change the target address for this new storage array.

    If the target address for this array is already unique, skip this step.

    To verify and assign a target address, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  10. Install a fiber-optic cable between the FC hub/switch and the storage array, as shown in Figure 1–3.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 1–3 Adding a Single-Controller Configuration: Part I



    Note –

    Figure 1–3 shows how to cable two storage arrays to enable data sharing and host-based mirroring. This configuration prevents a single point of failure.


  11. Configure the new storage array.

    To create a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  12. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you use it in Step 40 and Step 41 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  13. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate from-node
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h from-node
      
  14. If you need to install a host adapter in Node A, and if it is the first FC host adapter on Node A, determine whether the Fibre Channel support packages are already installed on these nodes.

    This product requires the following packages.


    # pkginfo | egrep Wlux
    system	SUNWluxd   Sun Enterprise Network Array sf Device Driver
    system	SUNWluxdx  Sun Enterprise Network Array sf Device Driver
    									(64-bit)
    system	SUNWluxl   Sun Enterprise Network Array socal Device Driver
    system	SUNWluxlx  Sun Enterprise Network Array socal Device Driver
    									(64-bit)
    system	SUNWluxop  Sun Enterprise Network Array firmware and utilities

    If this is not the first FC host adapter on Node A, skip to Step 16. If you do not need to install a host adapter in Node A, skip to Step 35.

  15. If the Fibre Channel support packages are not installed, install the required support packages that are missing.

    The storage array packages are located in the Product directory of the Solaris DVD. Add any necessary packages.
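
    For example, assuming the Solaris media is mounted at /cdrom/cdrom0, you might add the missing packages with a command similar to the following. The Product directory path varies with your Solaris release and media.


    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop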

  16. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  17. Power off Node A.

  18. Install the host adapter in Node A.

    To install a host adapter, see the documentation that shipped with your host adapter and node.

  19. If necessary, power on and boot Node A into noncluster mode by adding -x to your boot instruction.

    To boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
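
    For example, on a SPARC based system you might boot into noncluster mode from the OpenBoot PROM prompt:


    ok boot -x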

  20. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  21. Connect a fiber-optic cable between the FC hub/switch and Node A, as shown in Figure 1–4.

    To install an FC host adapter GBIC or an SFP, see your host adapter documentation. To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 1–4 Adding a Single-Controller Configuration: Part II


  22. If necessary, install the required Solaris patches for array support on Node A.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  23. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  24. To create the new Solaris device files and links on Node A, perform a reconfiguration boot.


    # boot -r
    
  25. Label the new logical volume.

    To label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
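
    On the Solaris side, a new LUN is typically labeled with the format utility. The following interactive sketch is illustrative only; the disk selection number depends on your configuration.


    # format
    Searching for disks...done
    ...
    Specify disk (enter its number): 2
    format> label
    Ready to label disk, continue? y
    format> quit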

  26. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeA -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  27. If you need to install a host adapter in Node B, and if the host adapter that you are installing is the first FC host adapter on Node B, determine whether the Fibre Channel support packages are already installed on these nodes.

    This product requires the following packages.


    # pkginfo | egrep Wlux
    system	SUNWluxd	   Sun Enterprise Network Array sf Device Driver
    system	SUNWluxdx	   Sun Enterprise Network Array sf Device Driver 
    									(64-bit)
    system	SUNWluxl	   Sun Enterprise Network Array socal Device Driver
    system	SUNWluxlx	   Sun Enterprise Network Array socal Device Driver 
    									(64-bit)
    system	SUNWluxop	   Sun Enterprise Network Array firmware and utilities

    If this is not the first FC host adapter on Node B, skip to Step 29. If you do not need to install a host adapter, skip to Step 34.

  28. If the Fibre Channel support packages are not installed, install the required support packages that are missing.

    The storage array packages are located in the Product directory of the Solaris DVD. Add any necessary packages.

  29. Shut down Node B.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  30. Power off Node B.

    For more information, see your Sun Cluster system administration documentation.

  31. Install the host adapter in Node B.

    To install a host adapter, see the documentation that shipped with your host adapter and node.

  32. If necessary, power on and boot Node B into noncluster mode by adding -x to your boot instruction.

    To boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  33. If necessary, upgrade the host adapter firmware on Node B.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  34. If necessary, install a GBIC or an SFP, as shown in Figure 1–5.

    To install an FC hub/switch GBIC or an SFP, see the documentation that shipped with your FC hub/switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  35. If necessary, connect a fiber-optic cable between the FC hub/switch and Node B, as shown in Figure 1–5.

    To install an FC host adapter GBIC or an SFP, see your host adapter documentation. To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 1–5 Adding a Single-Controller Configuration: Part III


  36. If necessary, install the required Solaris patches for storage array support on Node B.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  37. Shut down Node B.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  38. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    # boot -r
    
  39. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeB -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  40. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  41. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  42. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
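
    As one example, with Solaris Volume Manager you might place a new LUN's DID device into an existing shared disk set. The disk set name and DID device number below are hypothetical; use cldevice list -v (or scdidadm -L on Sun Cluster 3.1) to identify the DID device for the new LUN.


    # metaset -s setname -a /dev/did/rdsk/d20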

Procedure: How to Add a Storage Array to an Existing Cluster, Using a Partner-Group Configuration

This procedure contains instructions for adding new storage array partner groups to a running cluster. The remaining procedures in this chapter contain instructions for other array-installation situations.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which you want the new storage arrays to reside. Then assign an IP address to the new storage arrays.


    Note –

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, as shown in Figure 1–6.


    This RARP server enables you to assign an IP address to the new storage arrays. Assign an IP address by using the storage array's unique MAC address. To set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Install the Ethernet cable between the storage arrays and the local area network (LAN), as shown in Figure 1–6.

  4. If they are not already installed, install interconnect cables between the two storage arrays of each partner group, as shown in Figure 1–6.

    To install interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 1–6 Adding a Partner-Group Configuration: Part I


  5. Power on the storage arrays.


    Note –

    The storage arrays might require several minutes to boot.


    To power on storage arrays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Administer the storage arrays' network addresses and settings.

    Use the telnet command to access the master controller unit and to administer the storage arrays.

    To administer the network address and the settings of a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  7. Install any required storage array controller firmware upgrades.

    For partner-group configurations, use the telnet command to access the master controller unit. If necessary, install the required controller firmware for the storage array.

    For the required revision number of the storage array controller firmware, see the Sun StorEdge T3 Disk Tray Release Notes.

  8. Ensure that each storage array has a unique target address.

    To verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  9. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  10. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  11. Configure the new storage arrays with the desired logical volumes.

    To create and initialize a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. To mount a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  12. Reset the storage arrays.

    To reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  13. If you are adding Sun StorEdge T3 arrays, install the media interface adapter (MIA) in the Sun StorEdge T3 arrays that you are adding, as shown in Figure 1–6.

    To install an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  14. If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–6.

    To install a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  15. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group, as shown in Figure 1–6.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


  16. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 30 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status +
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  17. Move all resource groups and device groups off each node in the cluster.

    • If you are using Sun Cluster 3.2, on each node use the following command:


      # clnode evacuate from-node
      
    • If you are using Sun Cluster 3.1, on each node use the following command:


      # scswitch -S -h from-node
      
  18. If you need to install host adapters in the node, and if the host adapter you are installing is the first adapter on the node, determine whether the required support packages are already installed on this node.

    The following packages are required.


    # pkginfo | egrep Wlux
    system 	SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system 	SUNWluxdx    Sun Enterprise Network Array sf Device Driver
    								(64-bit)
    system 	SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system 	SUNWluxlx    Sun Enterprise Network Array socal Device Driver
    								(64-bit)
    system 	SUNWluxop    Sun Enterprise Network Array firmware and utilities
    system 	SUNWluxox    Sun Enterprise Network Array libraries (64-bit)

    If this is not the first host adapter on the node, skip to Step 20.

  19. If the required support packages are not present, install them.

    The support packages are located in the Product directory of the Solaris DVD. Add any missing packages.

  20. If you need to install host adapters in the node, shut down and power off the node, and then install them in the node.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

    To install host adapters, see the documentation that shipped with your host adapters and nodes.

  21. If you installed host adapters in the node, power on and boot the node into noncluster mode.

    To boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  22. If you installed host adapters in the node, and if necessary, upgrade the host adapter firmware on the node.

  23. If necessary, install the required Solaris patches for storage array support on the node.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  24. If you installed host adapters in the node, reboot the node in cluster mode.

  25. Connect fiber-optic cables between the node and the FC switches, as shown in Figure 1–7.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


    Figure 1–7 Adding a Partner-Group Configuration: Part II


  26. On the current node, update the /devices and /dev entries.


    # devfsadm
    
  27. From any node in the cluster, update the global device namespace.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  28. Label the new storage array logical volume.

    To label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  29. (Optional) On the current node, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n CurrentNode -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  30. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  31. For each of the other nodes in the cluster, repeat Step 17 through Step 30.

  32. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.