Sun Cluster 3.1 - 3.2 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual for Solaris OS

Installing RAID Storage Arrays

This section contains instructions on installing storage arrays both in new clusters and in existing clusters.

Table 1–1 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Sun Cluster software are installed.
Information: How to Install a RAID Storage Array in a New Cluster

Task: Add a storage array to an operational cluster.
Information: How to Add a RAID Storage Array to an Existing Cluster

How to Install a RAID Storage Array in a New Cluster

Use this procedure to install and configure RAID storage arrays before installing the Solaris operating environment and Sun Cluster software on your nodes. To add storage arrays to an operating cluster, use the procedure, How to Add a RAID Storage Array to an Existing Cluster.


Note –

The storage array must be mirrored with another storage array to ensure high availability.


Before You Begin

This procedure assumes that the hardware is not connected.


SPARC only –

To attach a JBOD storage array to a RAID storage array as an expansion unit, attach the JBOD storage array before connecting the RAID storage array to the nodes. For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.


  1. Install the host adapters in the nodes that connect to the RAID storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the RAID storage array to the nodes.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. Connect each power cord from the RAID storage array to a different power source.

    RAID storage arrays have redundant power inputs. Different RAID storage arrays can share power sources.

  4. Install the Solaris operating environment, then apply any required Solaris patches.

    For software installation procedures, see your Sun Cluster software installation documentation.


    Note –

    For the current list of patches that are required for the Solaris operating environment, refer to SunSolve, which is available online to Sun service providers and to customers with SunSolve service contracts at http://sunsolve.sun.com.
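
    For example, you can confirm which patches are already installed and apply a downloaded patch with the standard Solaris patch tools. The patch ID and path in this sketch are placeholders; use the patch IDs that SunSolve lists for your Solaris release.

      # showrev -p | grep 112233
      # patchadd /var/tmp/112233-05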


  5. If necessary, install the qus driver and appropriate driver patches.

    For driver installation procedures, see the Sun StorEdge PCI Dual Ultra 3 SCSI Host Adapter Release Notes.
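
    As a quick check, the following commands show whether a qus driver package is installed and whether the qus module is loaded. This is a sketch only; confirm the exact package name in the host adapter release notes.

      # pkginfo | grep -i qus
      # modinfo | grep qus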

  6. If necessary, upgrade the controller firmware.
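
    One way to check the current controller firmware revision is with the Sun StorEdge 3000 Family CLI (sccli), if it is installed. The device path below is a placeholder, and the available subcommands depend on your sccli version, so treat this as a sketch and see the CLI documentation for details.

      # sccli /dev/rdsk/c1t0d0s2 show inquiry-data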

  7. Set up and configure the RAID storage arrays with logical units (LUNs).

    For the procedure about how to set up the storage array with LUNs, see How to Create and Map a LUN.
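
    After the LUNs are created and mapped, you can confirm from a node that Solaris sees them. The following is a minimal sketch; the controller, target, and LUN numbers that format reports depend on your configuration.

      # devfsadm
      # format < /dev/null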


    Note –

    If you want to use the Configuration Service Console, perform this step after Step 8.


  8. (Optional) Install the Configuration Service.

    For the procedure about how to install the Configuration Service, see the Sun StorEdge 3000 Family Configuration Service 1.5 User’s Guide for the Sun StorEdge 3310 SCSI Array and the Sun StorEdge 3510 FC Array.

  9. Install the Sun Cluster software and volume management software.

    For software installation procedures, see your Sun Cluster software installation documentation.

See Also

To continue with Sun Cluster software and data services installation tasks, see your Sun Cluster software installation documentation and your Sun Cluster data services collection.

How to Add a RAID Storage Array to an Existing Cluster

Use this procedure to add RAID storage arrays to a running cluster. If you need to install a storage array in a new cluster, use the procedure in How to Install a RAID Storage Array in a New Cluster.

Before You Begin

This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality.

If your nodes are configured for dynamic reconfiguration, see your Sun Cluster Hardware Administration Manual for Solaris OS.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read RBAC (role-based access control) authorization.

  1. Install any RAID storage array packages and patches on nodes.


    Note –

    For the most current list of software, firmware, and patches that are required for the RAID storage array, refer to SunSolve, which is available online to Sun service providers and to customers with SunSolve service contracts at http://sunsolve.sun.com.
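
    For example, on each node you can confirm that the required software is present; the patch ID in this sketch is a placeholder for the items that SunSolve lists for your array.

      # pkginfo | grep -i stor
      # patchadd -p | grep 113722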


  2. Power on the RAID storage array.

    For procedures about how to power on the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. Configure the RAID storage array.

    For the procedure about how to create LUNs, see How to Create and Map a LUN.

  4. On each node that is connected to the RAID storage array, ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
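
    The following sd.conf lines are an example only; use the SCSI target IDs and LUN numbers that you assigned when you mapped the LUNs, and apply the change with a reconfiguration boot or devfsadm.

      name="sd" class="scsi" target=2 lun=0;
      name="sd" class="scsi" target=2 lun=1;
      name="sd" class="scsi" target=3 lun=0;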

  5. Shut down the first node.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
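
    A typical sequence is to move resource groups and device groups off the node and then shut it down. The node name in this sketch is a placeholder.

    • If you are using Sun Cluster 3.2:

      # clnode evacuate node1
      # shutdown -g0 -y -i0

    • If you are using Sun Cluster 3.1:

      # scswitch -S -h node1
      # shutdown -g0 -y -i0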

  6. If you are installing new host adapters, power off the first node.

    For the procedure about how to power off a node, see your Sun Cluster system administration documentation.

  7. Install the host adapters in the first node.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Cable the RAID storage array to the first node.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  9. Boot the first node.

    For the procedure about how to boot cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. Verify that the first node recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 7.

    • SPARC: At the OpenBoot PROM ok prompt, use the show-disks command to list the devices that the node sees, as in the following example:


      {0} ok show-disks
      a) /pci@1f,4000/pci@2/scsi@5/sd
      b) /pci@1f,4000/pci@2/scsi@4/sd
      ...
    • x86: Watch the BIOS and Solaris boot messages for the new SCSI devices and adapters, as in the following example:


      Adaptec AIC-7899 SCSI BIOS v2.57S4
      (c) 2000 Adaptec, Inc. All Rights Reserved.
          Press <Ctrl><A> for SCSISelect(TM) Utility!
      
      Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
             SCSI ID: 1 SEAGATE  ST336605LC        160
             SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
      Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
             SCSI ID: 3 SUN      StorEdge 3310     160
      
      AMIBIOS (C)1985-2002 American Megatrends Inc.,
      Copyright 1996-2002 Intel Corporation
      SCB20.86B.1064.P18.0208191106
      SCB2 Production BIOS Version 2.08
      BIOS Build 1064
      
      2 X Intel(R) Pentium(R) III CPU family      1400MHz
      Testing system memory, memory size=2048MB
      2048MB Extended Memory Passed
      512K L2 Cache SRAM Passed
      ATAPI CD-ROM SAMSUNG CD-ROM SN-124    
      
      SunOS - Intel Platform Edition     Primary Boot Subsystem, vsn 2.0
      
                              Current Disk Partition Information
      
                       Part#   Status    Type      Start       Length
                      ================================================
                         1     Active   X86 BOOT     2428       21852
                         2              SOLARIS     24280     71662420
                         3              <unused>
                         4              <unused>
                    Please select the partition you wish to boot: *   *
      
      Solaris DCB
      
      			       loading /solaris/boot.bin
      
      SunOS Secondary Boot version 3.00
      
                        Solaris Intel Platform Edition Booting System
      
      Autobooting from bootpath: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      
      If the system hardware has changed, or to boot from a different
      device, interrupt the autoboot process by pressing ESC.
      Press ESCape to interrupt autoboot in 2 seconds.
      Initializing system
      Please wait...
      Warning: Resource Conflict - both devices are added
      
      NON-ACPI device: ISY0050
           Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
      ACPI device: ISY0050
           Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2
      
                           <<< Current Boot Parameters >>>
      Boot path: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      Boot args: 
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      
      Select (b)oot or (i)nterpreter:
  11. If necessary, perform a reconfiguration boot on the first node to create the new Solaris device files and links.
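
    On a SPARC node, you can boot with the reconfiguration flag from the OpenBoot PROM prompt:

      ok boot -r

    Alternatively, from a running system, you can request a reconfiguration boot:

      # reboot -- -r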

  12. Shut down the second node.

    For the procedure about how to shut down a node, see your Sun Cluster system administration documentation.

  13. If you are installing new host adapters, power off the second node.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  14. Install the host adapters in the second node.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  15. Cable the RAID storage array to the second node.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  16. Boot the second node.

    For the procedure about how to boot cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  17. Verify that the second node recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 14.

    • SPARC: At the OpenBoot PROM ok prompt, use the show-disks command to list the devices that the node sees, as in the following example:


      {0} ok show-disks
      a) /pci@1f,4000/pci@2/scsi@5/sd
      b) /pci@1f,4000/pci@2/scsi@4/sd
      ...
    • x86: Watch the BIOS and Solaris boot messages for the new SCSI devices and adapters, as in the following example:


      Adaptec AIC-7899 SCSI BIOS v2.57S4
      (c) 2000 Adaptec, Inc. All Rights Reserved.
          Press <Ctrl><A> for SCSISelect(TM) Utility!
      
      Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
             SCSI ID: 1 SEAGATE  ST336605LC        160
             SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
      Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
             SCSI ID: 3 SUN      StorEdge 3310     160
      
      AMIBIOS (C)1985-2002 American Megatrends Inc.,
      Copyright 1996-2002 Intel Corporation
      SCB20.86B.1064.P18.0208191106
      SCB2 Production BIOS Version 2.08
      BIOS Build 1064
      
      2 X Intel(R) Pentium(R) III CPU family      1400MHz
      Testing system memory, memory size=2048MB
      2048MB Extended Memory Passed
      512K L2 Cache SRAM Passed
      ATAPI CD-ROM SAMSUNG CD-ROM SN-124    
      
      SunOS - Intel Platform Edition     Primary Boot Subsystem, vsn 2.0
      
                              Current Disk Partition Information
      
                       Part#   Status    Type      Start       Length
                      ================================================
                         1     Active   X86 BOOT     2428       21852
                         2              SOLARIS     24280     71662420
                         3              <unused>
                         4              <unused>
                    Please select the partition you wish to boot: *   *
      
      Solaris DCB
      
      			       loading /solaris/boot.bin
      
      SunOS Secondary Boot version 3.00
      
                        Solaris Intel Platform Edition Booting System
      
      Autobooting from bootpath: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      
      If the system hardware has changed, or to boot from a different
      device, interrupt the autoboot process by pressing ESC.
      Press ESCape to interrupt autoboot in 2 seconds.
      Initializing system
      Please wait...
      Warning: Resource Conflict - both devices are added
      
      NON-ACPI device: ISY0050
           Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
      ACPI device: ISY0050
           Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2
      
                           <<< Current Boot Parameters >>>
      Boot path: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      Boot args: 
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      
      Select (b)oot or (i)nterpreter:
  18. If necessary, perform a reconfiguration boot on the second node to create the new Solaris device files and links.

  19. For all nodes that are attached to the RAID storage array, verify that the DIDs have been assigned to the LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -v
      
    • If you are using Sun Cluster 3.1, use the following command:


       # scdidadm -l
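
    In both cases, each LUN should appear with the same DID instance on every node that is connected to the RAID storage array. The following output is a hypothetical, abbreviated example for Sun Cluster 3.2; the device paths and node names on your cluster will differ.

      DID Device          Full Device Path
      ----------          ----------------
      d4                  phys-node-1:/dev/rdsk/c1t2d0
      d4                  phys-node-2:/dev/rdsk/c1t2d0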