Sun Cluster 3.0-3.1 With SCSI JBOD Storage Device Manual for Solaris OS

Procedure: SPARC: How to Install a Storage Array in a New SPARC Based Cluster

This procedure assumes that you are installing one or more storage arrays at initial installation of a SPARC based cluster. If you are adding arrays to a running cluster, use the procedure in SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster.

Multihost storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Sun Cluster Concepts Guide for Solaris OS.
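
For example, on a node that already runs the Solaris OS, you can display that node's current global scsi-initiator-id with the eeprom command. This check is illustrative only; the value shown, 7, is the usual default.

  # eeprom scsi-initiator-id
  scsi-initiator-id=7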


Note –

This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage, you must update your nvramrc script and set the scsi-initiator-id by following the steps in this procedure.


Before You Begin

Before you perform this procedure, ensure that your configuration meets the following prerequisites and assumptions.

Steps
  1. Verify that the storage arrays are set up correctly for your planned configuration.

  2. If necessary, install the host adapters in the nodes that you plan to connect to the storage array.

    If possible, put each host adapter on a separate I/O board to ensure maximum redundancy.

  3. Cable the storage arrays.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the SCSI bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  4. Connect the AC or DC power cords for each storage array to a different power source.

    If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.

  5. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note –

    Even if a slot in the storage array is not in use, do not set the scsi-initiator-id to the SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.


      Note –

      If necessary, halt the nodes so that you can perform OpenBoot™ PROM (OBP) Monitor tasks at the ok prompt.


      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

    2. If necessary, power on a node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for one node to 6.


      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:


        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2 
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            cd /pci@1f,4000/scsi@3
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
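
      If you want to confirm the settings before you proceed, you can display both values from the OBP Monitor, as in the following illustrative check.


      {0} ok printenv use-nvramrc?
      use-nvramrc? = true
      {0} ok printenv scsi-initiator-id
      scsi-initiator-id = 6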
  6. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  7. Install the operating system software.

    1. Install the Solaris operating system.

      See your Sun Cluster installation documentation for instructions.
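
      After the installation completes, you can optionally confirm the installed release on each node. The following commands are illustrative; the output depends on your Solaris version.


      # cat /etc/release
      # uname -r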

    2. Install any unbundled drivers required by your cluster configuration.

      See the host adapter documentation for driver installation procedures.
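
      If you want to confirm that an unbundled driver is loaded, you can check the kernel module list. This example uses the mpt driver that is referenced later in this procedure; the driver name for your host adapter might differ.


      # modinfo | grep mpt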

    3. Apply any required Solaris patches.

      PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster, which makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

      To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

      For third-party firmware patches, see the SunSolve℠ Online site at http://sunsolve.ebay.sun.com.
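
      Patches are applied with the standard Solaris patch tools. The following sketch uses a hypothetical patch ID, 123456-01, to show the general form; substitute the patch IDs that PatchPro reports for your configuration.


      # patchadd /var/tmp/123456-01
      # showrev -p | grep 123456-01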

  8. If you are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter, you must throttle down the speed of the adapter to U160. Add the following entry to your /kernel/drv/mpt.conf file:


    scsi-options=0x1ff8;
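
    The change to the /kernel/drv/mpt.conf file does not take effect until the mpt driver is reloaded. One way to reload it, assuming your site procedures allow it, is to perform a reconfiguration reboot of the node:


    # touch /reconfigure
    # init 6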
  9. Install the Sun Cluster software and volume management software.

    For software installation procedures, see the Sun Cluster installation documentation.

  10. If you are using Solstice DiskSuite™/Solaris Volume Manager as your volume manager, save the disk-partitioning information.


    Caution –

    Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


    You might need disk-partitioning information if you replace a failed disk drive in the future.
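
    A minimal sketch of saving the partition table for one disk with the prtvtoc command follows. The device name c1t1d0 is only an example; repeat the command for each disk that the volume manager uses.


    # prtvtoc /dev/rdsk/c1t1d0s2 > /usr/tmp/c1t1d0.vtoc

    If you later replace the failed disk, a command such as the following, run against the replacement disk, restores the saved table:


    # fmthard -s /usr/tmp/c1t1d0.vtoc /dev/rdsk/c1t1d0s2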