Sun Cluster 3.0-3.1 With SCSI JBOD Storage Device Manual for Solaris OS

x86: How to Add a Storage Array to an Existing x86 Based Cluster

This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays in a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster.

Adding a storage array enables you to alter your storage pool, for example, when you need to expand the storage capacity that is available to the cluster.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Install all software that is specific to the storage array or to any new host adapters.

    Install the software and patches on all nodes that will connect to the new storage array.

    PatchPro is a patch-management tool that simplifies the selection and download of the patches that are required to install or maintain Sun Cluster software. PatchPro provides an Interactive Mode tool, tailored to Sun Cluster, that makes the installation of patches easier, and an Expert Mode tool that helps you maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
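
    In this command, from-node is the name of Node A. For example, if Node A were named phys-schost-1 (a hypothetical node name), you would type the following:


    # scswitch -S -h phys-schost-1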
    
  3. If you need to install host adapters in Node A, perform the following steps.

    1. Shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install host adapters in Node A.

      For the procedure about how to install host adapters, see your host adapter and server documentation.

  4. If you installed host adapters in Step 3, or if you intend to use previously unconfigured host adapters, ensure that each device in the SCSI chain has a unique SCSI address by configuring the scsi-initiator-id in the BIOS.


    Note –

    Perform these steps on one cluster node, the node on which you have configured SCSI initiator IDs for the cluster in the past.


    1. Access your host adapter's BIOS settings.

      To access the BIOS on the V40z server with X4422A Sun Dual Gigabit Ethernet and Dual SCSI Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the internal controller is set to the default value of 7.

    3. Select a unique value for each port on the new host adapter.

    4. Set each port's scsi-initiator-id to the value that you selected.

  5. Connect the storage array to the host adapters on Node A.

    • If necessary, terminate the ports that will connect to Node B.

      • If you have a Netra D130 array, always terminate the ports that connect to Node B.

      • If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.

    • If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  6. If you installed host adapters in Step 3, or if you intend to use previously unconfigured host adapters, finish configuring the SCSI initiator IDs on the same node on which you configured the BIOS in Step 4.

    1. Get the information required for the mpt.conf file.

      To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.

      To find this information on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:


      # echo | format
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
      		0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
      			/pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
    2. Create or edit your /kernel/drv/mpt.conf file.

    3. Include the following entries:


      scsi-initiator-id=6;
      name="mpt" parent="/pci@0,0/pci1022,7450@a"
             unit-address="4"
             scsi-initiator-id=7;

      Note –

      These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.


      The entries in this example have the following meanings:

      scsi-initiator-id=6;

      Matches your setting in the BIOS for the host adapter ports.

      name="mpt"

      Indicates that these settings are for the mpt driver.

      parent

      Is set to the path to your local drive.

      unit-address

      Specifies the unit address of the local drive. In the example output in Step 1, this information derives from the pci17c2,10@4 portion of the device path.

      scsi-initiator-id=7;

      Sets your node's local drive back to the default SCSI setting of 7.
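
    After you reboot Node A in Step 7, you can optionally confirm that the new settings are in place. The following commands are illustrative only; the prtconf check assumes that the driver exposes the scsi-initiator-id property in the device tree:


    # grep scsi-initiator-id /kernel/drv/mpt.conf
    # prtconf -v | grep -i scsi-initiator-id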

  7. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.
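
    One common way to perform a reconfiguration boot is to create the /reconfigure file and then reboot the node. This is a sketch only; your site's boot procedure might differ:


    # touch /reconfigure
    # shutdown -y -g0 -i6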

  8. If necessary, format and label the disks.
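
    For example, you might label a new disk with the interactive format utility. The following is a sketch only; the disk that you select depends on your configuration:


    # format
    (select the new disk from the AVAILABLE DISK SELECTIONS list)
    format> label
    format> quit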

  9. On Node A, verify that the device IDs have been assigned to the disk drives in the storage array.

    # scdidadm -l
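
    If device IDs were assigned, each disk drive in the storage array is listed with a DID instance. The output format resembles the following; the node name, device paths, and instance numbers are illustrative only:


    2        phys-schost-1:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
    3        phys-schost-1:/dev/rdsk/c1t2d0   /dev/did/rdsk/d3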

  10. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    
  11. If you need to install host adapters in Node B, perform the following steps.

    1. Shut down and power off Node B.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install the host adapters in Node B.

      For the procedure about how to install a host adapter, see your host adapter and server documentation.

    3. Power on and boot Node B.

  12. Connect the storage array to the host adapters on Node B.

    If you added port terminators in Step 5, remove the terminators and connect the storage array to Node B.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  13. Verify that the scsi-initiator-id is set correctly on the second node.

    1. Access your BIOS settings.

      To access the BIOS on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

  14. Perform a reconfiguration boot on Node B to create the new Solaris device files and links.

  15. On Node B, verify that the device IDs have been assigned to the disk drives in the storage array.


    # scdidadm -L
    
  16. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
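
    For example, if you use Solstice DiskSuite/Solaris Volume Manager disk sets, you might add a new DID device to an existing disk set. The following command is a sketch only; the disk set name and DID device are hypothetical:


    # metaset -s dg-schost-1 -a /dev/did/rdsk/d5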

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solstice DiskSuite/Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution –

Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.
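
For example, one way to save the partition table for a disk is with the prtvtoc command. The device name in this example is hypothetical:


    # prtvtoc /dev/rdsk/c1t1d0s2 > /usr/tmp/c1t1d0s2.vtoc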