Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster

This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays in a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster.

Adding a storage array enables you to alter your storage pool. You might want to perform this procedure, for example, when you need to increase the storage capacity that is available to the cluster.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
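
For example, you can list the RBAC roles that are assigned to your account and then assume one of them. The role name rolename in the following example is only a placeholder; use a role at your site that carries the required authorizations.


    $ roles
    rolename
    $ su - rolename
    Password: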

  1. Install all software that is specific to the storage array or to any new host adapters.

    Install the software and patches on all nodes that will connect to the new storage array.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
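
    As an illustration only, applying a patch to a node in noncluster mode on a SPARC based system typically resembles the following sequence. The patch ID 123456-01 is a placeholder, and the exact steps are described in the procedure referenced above.


      ok boot -x                         Boot the node in noncluster mode.
      # patchadd /var/tmp/123456-01      Apply the patch.
      # shutdown -g0 -y -i6              Reboot the node into the cluster.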

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  2. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
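
    Before you continue, you can confirm that Node A no longer hosts any resource groups or device groups. The following status commands are one way to verify this.


      # clresourcegroup status           Sun Cluster 3.2
      # cldevicegroup status

      # scstat -g                        Sun Cluster 3.1
      # scstat -D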
      
  3. If you need to install host adapters in Node A, perform the following steps.

    1. Shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install host adapters in Node A.

      For the procedure about how to install host adapters, see your host adapter and server documentation.

  4. Connect the storage array to the host adapters on Node A.

    • If necessary, terminate the ports that will connect to Node B.

      • If you have a Netra D130 array, always terminate the ports that connect to Node B.

      • If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.

    • If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  5. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note –

    A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.

      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.


      Note –

      If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


    2. If necessary, power on Node A, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for Node A to 6.


      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
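
      The show-disks output resembles the following example. The device paths shown here are examples only; the paths on your system will differ.


      a) /pci@1f,4000/scsi@2/disk
      b) /pci@1f,4000/scsi@3/disk
      q) NO SELECTION
      Enter Selection, q to quit: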

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner [Control-C]
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:


        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            cd /pci@1f,4000/scsi@3
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  6. To create the new Solaris device files and links, perform a reconfiguration boot on Node A by adding -r to your boot instruction.
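
    For example, from the OpenBoot PROM ok prompt:


      {0} ok boot -r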

  7. If necessary, format and label the disks.
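
    You can use the Solaris format utility to label the new disks. The following invocation is only a starting point; select each new disk from the menu that format displays, then use the label command to write a label to the disk.


      # format
      Searching for disks...done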

  8. On Node A, verify that the device IDs have been assigned to the disk drives in the storage array.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeA -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  9. Move all resource groups and device groups off Node B.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeB
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeB
      
  10. If you need to install host adapters in Node B, perform the following steps.

    1. Shut down Node B.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install the host adapters in Node B.

      For the procedure about how to install a host adapter, see your host adapter and server documentation.

    3. Power on and boot Node B.

      For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  11. Connect the storage array to the host adapters on Node B.

    If you terminated the ports that connect to Node B in Step 4, remove the terminators and connect the storage array to Node B.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  12. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  13. To create the new Solaris device files and links, perform a reconfiguration boot by adding -r to your boot instruction.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  14. On Node B, verify that the device IDs have been assigned to the disk drives in the storage array.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeB -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  15. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
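
    For example, with Solaris Volume Manager in a cluster configuration, disks from the new storage array are typically added to a shared disk set. The disk set name setname and the DID device d10 in the following example are placeholders only; see your volume manager documentation for the exact procedure.


      # metaset -s setname -a -h NodeA NodeB       Create the disk set, if it does not already exist.
      # metaset -s setname -a /dev/did/rdsk/d10    Add a disk from the new storage array.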

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solstice DiskSuite/Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution –

Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.
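
For example, you can save the partition table of each new disk with the prtvtoc command. The device name c1t0d0s2 is a placeholder; substitute the device names of the disks in your storage array.


    # prtvtoc /dev/rdsk/c1t0d0s2 > /usr/tmp/c1t0d0.vtoc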