Sun Cluster 3.0-3.1 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual

Installing Storage Arrays

This section contains instructions on installing storage arrays both in new clusters and in existing clusters.

Table 1–1 Task Map: Installing Storage Arrays

Task 

Information 

Install a storage array in a new cluster, before the OS and Sun Cluster software are installed. 

How to Install a Storage Array in a New Cluster

Add a storage array to an operational cluster. 

How to Add a Storage Array to an Existing Cluster

Procedure: How to Install a Storage Array in a New Cluster

Use this procedure to install and configure RAID storage arrays before you install the Solaris operating environment and Sun Cluster software on your nodes. To add storage arrays to an operational cluster, use the procedure How to Add a Storage Array to an Existing Cluster.

Before You Begin

This procedure assumes that the hardware is not connected.


SPARC only –

To attach a JBOD storage array to a RAID storage array as an expansion unit, attach the JBOD storage array before connecting the RAID storage array to the nodes. For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.


Steps
  1. Install the host adapters in the nodes that connect to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the storage array to the nodes.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. Connect each power cord from the storage array to a different power source. RAID storage arrays have redundant power inputs.

    Different RAID storage arrays can share power sources.

  4. Install the Solaris operating environment, then apply any required Solaris patches.

    For software installation procedures, see your Sun Cluster software installation documentation.


    Note –

    For the current list of patches that are required for the Solaris operating environment, refer to SunSolve. SunSolve is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.
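
    As a sketch, patches downloaded from SunSolve can be applied with the standard Solaris patch tools. The patch ID and download location below are placeholders; use the IDs that SunSolve lists for your Solaris release.

    ```shell
    # List the patches that are already applied (last few entries).
    showrev -p | tail -5

    # Apply a downloaded patch. The patch ID 123456-01 and the
    # /var/tmp location are placeholders.
    patchadd /var/tmp/123456-01
    ```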


  5. If necessary, install the qus driver and appropriate driver patches.

    For driver installation procedures, see the Sun StorEdge PCI Dual Ultra 3 SCSI Host Adapter Release Notes.
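
    Assuming the driver ships as the SUNWqus package (confirm the actual package name in the release notes), the installation can be sketched as:

    ```shell
    # Check whether the qus driver package is already installed.
    pkginfo SUNWqus

    # If it is not, add the package from the distribution media, then
    # apply the driver patch. The media path and patch ID are
    # placeholders; see the host adapter release notes.
    pkgadd -d /cdrom/cdrom0/Product SUNWqus
    patchadd /var/tmp/112697-01
    ```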

  6. If necessary, upgrade the controller firmware.

  7. Set up and configure the storage arrays with logical units (LUNs).

    For the procedure about how to set up the storage array with LUNs, see How to Create and Map a LUN.


    Note –

    If you want to use the Configuration Service Console, perform this step after Step 8.


  8. (Optional) Install the Configuration Service.

    For the procedure about how to install the Configuration Service, see the Sun StorEdge 3000 Family Configuration Service 2.1 User's Guide.

  9. Install the Sun Cluster software and volume management software.

    For software installation procedures, see your Sun Cluster software installation documentation.

See Also

To continue with Sun Cluster software and data services installation tasks, see your Sun Cluster software installation documentation and your Sun Cluster data services collection.

Procedure: How to Add a Storage Array to an Existing Cluster

Use this procedure to add RAID storage arrays to a running cluster. If you need to install a storage array in a new cluster, use the procedure in How to Install a Storage Array in a New Cluster.

Before You Begin

This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality.

If your nodes are configured for dynamic reconfiguration, see Dynamic Reconfiguration Operations For Sun Cluster Nodes in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

Steps
  1. Install any storage array packages and patches on nodes.


    Note –

    For the most current list of software, firmware, and patches that are required for the RAID storage array, refer to SunSolve. SunSolve is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.


  2. Power on the storage array.

    For procedures about how to power on the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. Configure the storage array.

    For the procedure about how to create LUNs, see How to Create and Map a LUN.

  4. On each node that is connected to the storage array, ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
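
    For illustration, sd.conf entries take the following form. The target number must match the SCSI ID of the RAID controller channel, and there must be one entry per mapped LUN; the target and lun values shown here are examples only.

    ```
    # /kernel/drv/sd.conf fragment: one entry per mapped LUN.
    name="sd" class="scsi" target=2 lun=0;
    name="sd" class="scsi" target=2 lun=1;
    name="sd" class="scsi" target=2 lun=2;
    ```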

  5. If you need to install host adapters in the node, perform the following steps.

    1. Shut down the node.

      For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

    2. Power off the node.

      For the procedure about how to power off a node, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

    3. Install the host adapters in the node.

      For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  6. Cable the storage array to the node.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  7. Boot the node.

    For the procedure about how to boot nodes, see your Sun Cluster system administration documentation.

  8. Verify that the node recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 5.

    • SPARC:


      {0} ok show-disks
      a) /pci@1f,4000/pci@2/scsi@5/sd
      b) /pci@1f,4000/pci@2/scsi@4/sd
      ...
       
    • x86:


      Adaptec AIC-7899 SCSI BIOS v2.57S4
      (c) 2000 Adaptec, Inc. All Rights Reserved.
          Press <Ctrl><A> for SCSISelect(TM) Utility!
      
      Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
             SCSI ID: 1 SEAGATE  ST336605LC        160
             SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
      Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
             SCSI ID: 3 SUN      StorEdge 3310     160
      
      AMIBIOS (C)1985-2002 American Megatrends Inc.,
      Copyright 1996-2002 Intel Corporation
      SCB20.86B.1064.P18.0208191106
      SCB2 Production BIOS Version 2.08
      BIOS Build 1064
      
      2 X Intel(R) Pentium(R) III CPU family      1400MHz
      Testing system memory, memory size=2048MB
      2048MB Extended Memory Passed
      512K L2 Cache SRAM Passed
      ATAPI CD-ROM SAMSUNG CD-ROM SN-124    
      
      SunOS - Intel Platform Edition     Primary Boot Subsystem, vsn 2.0
      
                              Current Disk Partition Information
      
                       Part#   Status    Type      Start       Length
                      ================================================
                         1     Active   X86 BOOT     2428       21852
                         2              SOLARIS     24280     71662420
                         3              <unused>
                         4              <unused>
                    Please select the partition you wish to boot: *   *
      
      Solaris DCB
      
      			       loading /solaris/boot.bin
      
      SunOS Secondary Boot version 3.00
      
                        Solaris Intel Platform Edition Booting System
      
      Autobooting from bootpath: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      
      If the system hardware has changed, or to boot from a different
      device, interrupt the autoboot process by pressing ESC.
      Press ESCape to interrupt autoboot in 2 seconds.
      Initializing system
      Please wait...
      Warning: Resource Conflict - both devices are added
      
      NON-ACPI device: ISY0050
           Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
      ACPI device: ISY0050
           Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2
      
                           <<< Current Boot Parameters >>>
      Boot path: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      Boot args: 
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      
      Select (b)oot or (i)nterpreter:
  9. If necessary, perform a reconfiguration boot on the node to create the new Solaris device files and links.
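
    The reconfiguration boot can be performed from the OpenBoot PROM on SPARC or, as a sketch, the device files can be rebuilt on a running node:

    ```shell
    # On SPARC, from the OpenBoot PROM prompt:
    #   ok boot -r

    # Alternatively, on a running node, rebuild the /dev and /devices
    # entries for the new LUNs without a reboot:
    devfsadm

    # List the disks that the node now sees (format exits at its prompt):
    format < /dev/null
    ```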

  10. Perform Step 1 through Step 9 on each additional node connected to the new array.

  11. For all nodes that are attached to the storage array, verify that device IDs (DIDs) have been assigned to the LUNs.


    # scdidadm -L
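
    Each LUN should be listed with the same DID instance from every node that is connected to the array. A hypothetical excerpt follows; node names and controller numbers will differ on your cluster.

    ```shell
    # List all DID instances and their full device paths.
    scdidadm -L

    # Hypothetical output excerpt: LUN 0 of the array appears as DID
    # instance d2 from both attached nodes.
    # 2   pnode1:/dev/rdsk/c1t2d0   /dev/did/rdsk/d2
    # 2   pnode2:/dev/rdsk/c1t2d0   /dev/did/rdsk/d2
    ```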