Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS

Installing Storage Arrays

This section contains the procedures for installing single-controller and dual-controller storage array configurations in new and existing Sun Cluster configurations. The following table lists these procedures.

Table 1–1 Task Map: Installing a Storage Array

Task: Installing arrays in a new cluster, using a single-controller configuration
Information: How to Install a Single-Controller Configuration in a New Cluster

Task: Installing arrays in a new cluster, using a dual-controller configuration
Information: How to Install a Dual-Controller Configuration in a New Cluster

Task: Adding arrays to an existing cluster, using a single-controller configuration
Information: How to Add a Single-Controller Configuration to an Existing Cluster

Task: Adding arrays to an existing cluster, using a dual-controller configuration
Information: How to Add a Dual-Controller Configuration to an Existing Cluster

Storage Array Cabling Configurations

You can install your storage array in several different configurations. Use the Sun StorEdge 6120 Array Installation Guide to evaluate your needs and determine which configuration is best for your situation.

The following figures illustrate example configurations.

Figure 1–1 shows two storage arrays, both of which have controllers. The storage arrays connect to a two-node cluster through two switches. Single-controller configurations require software RAID-1 (host-based mirroring).

Figure 1–1 Installing a 1x1 Configuration With Software RAID-1

Illustration: The preceding context describes the graphic.

Figure 1–2 shows four storage arrays, two of which have controllers. The first storage array without a controller connects to the second storage array, which has a controller. The third storage array without a controller connects to the fourth storage array, which has a controller. The two storage arrays with controllers connect to a two-node cluster through two switches. Single-controller configurations require software RAID-1 (host-based mirroring).

Figure 1–2 Installing a 1x2 Configuration With Software RAID-1

Illustration: The preceding context describes the graphic.

Figure 1–3 shows two storage arrays, both of which have controllers. The two storage arrays are daisy-chained and connect to a two-node cluster through two switches.

Figure 1–3 Installing a 2x2 Configuration

Illustration: The preceding context describes the graphic.

Figure 1–4 shows four storage arrays, two of which have controllers. All storage arrays are daisy-chained in the following order: alternate master, master, alternate master, and master. The two storage arrays with controllers connect to a two-node cluster through two switches.

Figure 1–4 Installing a 2x4 Configuration

Illustration: The preceding context describes the graphic.

Procedure: How to Install a Single-Controller Configuration in a New Cluster

Use this procedure to install a storage array in a single-controller configuration before you install the Solaris operating environment and Sun Cluster software on your nodes. For other array-installation situations, use the appropriate procedure listed in Table 1–1.

  1. Install the host adapters in the nodes that are to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure about how to install FC switches, see the documentation that shipped with your FC switch hardware.

  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage array is to reside.

    Use the RARP server to set up the following network settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up a RARP server, see the Sun StorEdge 6120 Array Installation Guide.
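
    The following is a minimal sketch of RARP setup on a Solaris host that acts as the RARP server. The MAC address, IP address, and host name shown here are hypothetical examples; substitute the values for your storage array and network.


      # echo "0:3:ba:12:34:56 array1" >> /etc/ethers      (array MAC address and host name)
      # echo "192.168.100.50 array1" >> /etc/hosts        (array IP address and host name)
      # /usr/sbin/in.rarpd -a                             (start the RARP daemon on all interfaces)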

  4. Cable the storage arrays.

    For the procedures on how to connect your storage array, see the Sun StorEdge 6120 Array Installation Guide.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.

    2. Connect the Ethernet cables from each storage array to the Local Area Network (LAN).

    3. If necessary, install the interconnect cables between storage arrays.

    4. Connect the power cords to each storage array.

  5. Power on the storage array.

    Verify that all components are powered on and functional.


    Note –

    The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  6. Install any required controller firmware for the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  7. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  8. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
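
    The following is a hedged sketch of how the cache, mirror, and mp_support settings are typically displayed and set from the storage array's Telnet command-line interface, assuming the sys command syntax that the Sun StorEdge 6020 and 6120 Array System Manual documents. Confirm the exact syntax against that manual.


      sys list                (display the current settings, including cache, mirror, and mp_support)
      sys cache auto
      sys mirror auto
      sys mp_support none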

  9. Ensure that all storage array controllers are ONLINE.

    For more information about how to bring controllers online, see the Sun StorEdge 6020 and 6120 Array System Manual.
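
    As a hedged example, the sys stat command on the array's Telnet command-line interface reports the state of each controller unit; every unit should report a state of ONLINE. See the System Manual for the authoritative procedure.


      sys stat                (each controller unit should report ONLINE)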

  10. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  11. On all nodes, install the Solaris operating environment. Apply any required Solaris patches for Sun Cluster software and storage array support.

    For the procedure about how to install the Solaris operating environment, see your Sun Cluster software installation documentation.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  12. On each node, ensure that the mpxio-disable parameter is set to yes in the /kernel/drv/scsi_vhci.conf file.
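
    For reference, the relevant entry in the /kernel/drv/scsi_vhci.conf file looks like the following. A change to this file does not take effect until the node is rebooted.


      mpxio-disable="yes";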

See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

Procedure: How to Install a Dual-Controller Configuration in a New Cluster

Use this procedure to install a storage array in a dual-controller configuration before you install the Solaris operating environment and Sun Cluster software on your nodes. For other array-installation situations, use the appropriate procedure listed in Table 1–1.

  1. Install the host adapters in the nodes to be connected to the storage arrays.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure about how to install FC switches, see the documentation that shipped with your FC switch hardware.

  3. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  4. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage array is to reside.

    Use the RARP server to set up the following network settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up a RARP server, see the Sun StorEdge 6120 Array Installation Guide.

  5. Cable the storage arrays.

    For the procedures on how to connect your storage array, see the Sun StorEdge 6120 Array Installation Guide.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.

    2. Connect the Ethernet cables from each storage array to the LAN.

    3. If necessary, install the interconnect cables between storage arrays.

    4. Connect the power cords to each storage array.

    For the procedure about how to install fiber-optic, Ethernet, and interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  6. Power on the storage arrays.

    Verify that all components are powered on and functional.

    For the procedure about how to power on the storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  7. Install any required controller firmware for the storage arrays.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  8. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  9. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  10. On each node, ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  11. Ensure that all storage array controllers are ONLINE.

    For more information about how to bring controllers online, see the Sun StorEdge 6020 and 6120 Array System Manual.

  12. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  13. On all nodes, install the Solaris operating system and apply the required Solaris patches for Sun Cluster software and storage array support.

    For the procedure about how to install the Solaris operating environment, see How to Install Solaris Software in Sun Cluster Software Installation Guide for Solaris OS.

  14. On all nodes, install any required patches or software for Solaris I/O multipathing support, and enable multipathing.

    For the procedure about how to install the Solaris I/O multipathing software, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.
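
    As a minimal sketch, on the Solaris 10 OS you can enable Solaris I/O multipathing with the stmsboot utility, which prompts you to reboot the node. On the Solaris 8 or Solaris 9 OS with the Sun StorEdge SAN Foundation software, you instead set mpxio-disable="no" in the /kernel/drv/scsi_vhci.conf file and reboot.


      # stmsboot -e           (enable multipathing; the command prompts for a reboot)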

  15. Confirm that all storage arrays that you installed are visible to all nodes.


    # luxadm probe 
    
See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

How to Add a Single-Controller Configuration to an Existing Cluster

Use this procedure to add a single-controller configuration to a running cluster. For other array-installation situations, use the appropriate procedure listed in Table 1–1.

This procedure defines Node N as the node with which you begin working.

Procedure: How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage array.


    Note –

    The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. Network settings include the following settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
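
    The following is a hedged sketch of the corresponding commands on the storage array's Telnet command-line interface, assuming the set command syntax that the System Manual documents. The addresses and host name are hypothetical examples.


      set ip 192.168.100.50
      set netmask 255.255.255.0
      set gateway 192.168.100.1
      set hostname array1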

  3. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  5. Install any required controller firmware for the storage arrays you are adding.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  6. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. Confirm that all storage arrays that you upgraded are visible to all nodes.


    # luxadm probe 
    

Procedure: How to Connect the Storage Array to FC Switches

  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage array and the Local Area Network (LAN).

  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between the FC switch and the storage array.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status +
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  3. If you need to install a host adapter in Node N, proceed to Step 4.

    If you do not need to install host adapters, skip to Step 10.

  4. If the host adapter that you are installing is the first FC host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters. If the host adapter that you are installing is not the first FC host adapter on Node N, skip to Step 6.

  5. If the Fibre Channel support packages are not installed, install them.

    The support packages are located in the Product directory of the Solaris CD-ROM. Add any necessary packages.
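
    As a hedged example, you might add the packages with the pkgadd utility. The mount point, Product directory path, and package name shown here are placeholders; substitute the path for your Solaris release and the package names that your host adapter documentation lists.


      # cd /cdrom/cdrom0/Solaris_9/Product        (path varies by Solaris release)
      # pkgadd -d . SUNWpackage                   (placeholder package name)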

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  7. Install the host adapter in Node N.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter and node.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
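
    For example, on a SPARC based node you might boot into noncluster mode from the OpenBoot PROM prompt as follows.


      ok boot -x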

  9. If necessary, upgrade the host adapter firmware on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. If necessary, install the required Solaris patches for storage array support on Node N.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge 6120 Array Release Notes.

  13. On the node, update the /devices and /dev entries.


    # devfsadm -C 
    
  14. Boot the node into cluster mode.

    For the procedure about how to boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  15. On the node, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  16. If necessary, label the new logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  17. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following commands:


      # cldevice clear
      # cldevice list -v 
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # scdidadm -C
      # scdidadm -l
      
  18. Repeat Step 2 through Step 17 for each remaining node that you plan to connect to the storage array.

  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename  resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  21. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
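
    As a hedged illustration with Solaris Volume Manager, you might place the DID device for a new LUN into a disk set that the connected nodes share and then build your volumes in that disk set. The disk set name, node names, and DID device number are hypothetical examples.


      # metaset -s newset -a -h phys-node-1 phys-node-2     (create the disk set)
      # metaset -s newset -a /dev/did/rdsk/d10              (add the new DID device to the disk set)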

How to Add a Dual-Controller Configuration to an Existing Cluster

Use this procedure to add a dual-controller configuration to a running cluster. For other array-installation situations, use the appropriate procedure listed in Table 1–1.

This procedure defines Node N as the node with which you begin working.

Procedure: How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage arrays.


    Note –

    The storage arrays might require several minutes to boot.


    For the procedure about how to power on storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. Network settings include the following settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card.

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  3. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  5. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  6. Install any required controller firmware for the storage arrays you are adding.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

  7. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

Procedure: How to Connect the Storage Array to FC Switches

  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage arrays and the local area network (LAN).

  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 18 and Step 19 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status +
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  3. If you need to install host adapters in Node N, proceed to Step 4.

    If you do not need to install host adapters, skip to Step 10.

  4. If the host adapter that you are installing is the first host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters.

    If the host adapter that you are installing is not the first host adapter on Node N, skip to Step 6.

  5. If the required support packages are not already installed, install them.

    The support packages are located in the Product directory of the Solaris CD-ROM.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  7. Install the host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  9. If necessary, upgrade the host adapter firmware on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. Install the required Solaris patches for storage array support on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  13. Perform a reconfiguration boot on Node N by adding -r to your boot instruction, to create the new Solaris device files and links.

    For the procedure about how to boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
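
    For example, on a SPARC based node you might perform the reconfiguration boot from the OpenBoot PROM prompt as follows.


      ok boot -r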

  14. On Node N, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  15. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  16. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following commands:


      # cldevice clear
      # cldevice list -v 
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # scdidadm -C
      # scdidadm -l
      
  17. Repeat Step 2 through Step 16 for each remaining node that you plan to connect to the storage array.

  18. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename  devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  19. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename  resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  20. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.