Sun Cluster 3.0-3.1 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual

Chapter 1 Installing and Configuring a Sun StorEdge 3310 or 3320 SCSI RAID Array

This chapter describes the procedures for installing and configuring Sun StorEdge™ 3310 and 3320 SCSI RAID arrays in a Sun™ Cluster environment.

Read each procedure in this chapter in its entirety before you perform any of its steps. If you are not reading an online version of this document, ensure that you have the books listed in Related Documentation available.

This chapter contains the following major topics:

  • Installing Storage Arrays

  • Configuring Storage Arrays

Installing Storage Arrays

This section contains instructions on installing storage arrays both in new clusters and in existing clusters.

Table 1–1 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Sun Cluster software are installed.
Information: How to Install a Storage Array in a New Cluster

Task: Add a storage array to an operational cluster.
Information: How to Add a Storage Array to an Existing Cluster

Procedure: How to Install a Storage Array in a New Cluster

Use this procedure to install and configure RAID storage arrays before you install the Solaris operating environment and Sun Cluster software on your nodes. To add storage arrays to an operational cluster, use the procedure How to Add a Storage Array to an Existing Cluster.

Before You Begin

This procedure assumes that the hardware is not connected.


SPARC only –

To attach a JBOD storage array to a RAID storage array as an expansion unit, attach the JBOD storage array before connecting the RAID storage array to the nodes. For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.


Steps
  1. Install the host adapters in the nodes that connect to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the storage array to the nodes.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. Connect each power cord from the storage array to a different power source.

    RAID storage arrays have redundant power inputs. Different RAID storage arrays can share power sources.

  4. Install the Solaris operating environment, then apply any required Solaris patches.

    For software installation procedures, see your Sun Cluster software installation documentation.


    Note –

    For the current list of patches that are required for the Solaris operating environment, refer to SunSolve. SunSolve is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.


  5. If necessary, install the qus driver and appropriate driver patches.

    For driver installation procedures, see the Sun StorEdge PCI Dual Ultra 3 SCSI Host Adapter Release Notes.

  6. If necessary, upgrade the controller firmware.

  7. Set up and configure the storage arrays with logical units (LUNs).

    For the procedure about how to set up the storage array with LUNs, see How to Create and Map a LUN.


    Note –

    If you want to use the Configuration Service Console, perform this step after Step 8.


  8. (Optional) Install the Configuration Service.

    For the procedure about how to install the Configuration Service, see the Sun StorEdge 3000 Family Configuration Service 2.1 User's Guide.

  9. Install the Sun Cluster software and volume management software.

    For software installation procedures, see your Sun Cluster software installation documentation.

See Also

To continue with Sun Cluster software and data services installation tasks, see your Sun Cluster software installation documentation and your Sun Cluster data services collection.

Procedure: How to Add a Storage Array to an Existing Cluster

Use this procedure to add RAID storage arrays to a running cluster. If you need to install a storage array in a new cluster, use the procedure in How to Install a Storage Array in a New Cluster.

Before You Begin

This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality.

If your nodes are configured for dynamic reconfiguration, see Dynamic Reconfiguration Operations For Sun Cluster Nodes in the Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

Steps
  1. Install any storage array packages and patches on nodes.


    Note –

    For the most current list of software, firmware, and patches that are required for the RAID storage array, refer to SunSolve. SunSolve is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.


  2. Power on the storage array.

    For procedures about how to power on the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. Configure the storage array.

    For the procedure about how to create LUNs, see How to Create and Map a LUN.

  4. On each node that is connected to the storage array, ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  5. If you need to install host adapters in the node, perform the following steps.

    1. Shut down the node.

      For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

    2. Power off the node.

      For the procedure about how to power off a node, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

    3. Install the host adapters in the node.

      For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  6. Cable the storage array to the node.

    Ensure the cable does not exceed bus length limitations. For more information on bus length limitations, see the documentation that shipped with your hardware.

    For the procedure about how to cable the storage arrays, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  7. Boot the node.

    For the procedure about how to boot nodes, see your Sun Cluster system administration documentation.

  8. Verify that the node recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 5.

    • SPARC: At the OpenBoot PROM prompt, use the show-disks command to verify that the new host adapters are visible, as in the following example:


      {0} ok show-disks
      a) /pci@1f,4000/pci@2/scsi@5/sd
      b) /pci@1f,4000/pci@2/scsi@4/sd
      ...
       
    • x86: Review the BIOS and boot messages to verify that the new host adapters and disk drives are visible, as in the following example:


      Adaptec AIC-7899 SCSI BIOS v2.57S4
      (c) 2000 Adaptec, Inc. All Rights Reserved.
          Press <Ctrl><A> for SCSISelect(TM) Utility!
      
      Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
             SCSI ID: 1 SEAGATE  ST336605LC        160
             SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
      Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
             SCSI ID: 3 SUN      StorEdge 3310     160
      
      AMIBIOS (C)1985-2002 American Megatrends Inc.,
      Copyright 1996-2002 Intel Corporation
      SCB20.86B.1064.P18.0208191106
      SCB2 Production BIOS Version 2.08
      BIOS Build 1064
      
      2 X Intel(R) Pentium(R) III CPU family      1400MHz
      Testing system memory, memory size=2048MB
      2048MB Extended Memory Passed
      512K L2 Cache SRAM Passed
      ATAPI CD-ROM SAMSUNG CD-ROM SN-124    
      
      SunOS - Intel Platform Edition     Primary Boot Subsystem, vsn 2.0
      
                              Current Disk Partition Information
      
                       Part#   Status    Type      Start       Length
                      ================================================
                         1     Active   X86 BOOT     2428       21852
                         2              SOLARIS     24280     71662420
                         3              <unused>
                         4              <unused>
                    Please select the partition you wish to boot: *   *
      
      Solaris DCB
      
      			       loading /solaris/boot.bin
      
      SunOS Secondary Boot version 3.00
      
                        Solaris Intel Platform Edition Booting System
      
      Autobooting from bootpath: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      
      If the system hardware has changed, or to boot from a different
      device, interrupt the autoboot process by pressing ESC.
      Press ESCape to interrupt autoboot in 2 seconds.
      Initializing system
      Please wait...
      Warning: Resource Conflict - both devices are added
      
      NON-ACPI device: ISY0050
           Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
      ACPI device: ISY0050
           Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2
      
                           <<< Current Boot Parameters >>>
      Boot path: /pci@1,0/pci8086,340f@7,1/sd@0,0:a
      Boot args: 
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      
      Select (b)oot or (i)nterpreter:
  9. If necessary, perform a reconfiguration boot on the node to create the new Solaris device files and links.
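
    A reconfiguration boot can be performed in more than one way. The following is a minimal sketch of two common Solaris methods; use the method that is appropriate for your environment.

    From the OpenBoot PROM prompt on SPARC based systems:

    ok boot -r

    From a node that is already running:

    # reboot -- -r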

  10. Perform Step 1 through Step 9 on each additional node connected to the new array.

  11. For all nodes that are attached to the storage array, verify that the DIDs have been assigned to the LUNs.


    # scdidadm -L
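
    DID instances are listed in output similar to the following. The node and device names in this example are hypothetical.

    1    phys-schost-1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1
    4    phys-schost-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d4
    4    phys-schost-2:/dev/rdsk/c1t1d0    /dev/did/rdsk/d4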
    

Configuring Storage Arrays

This product supports the use of hardware RAID and host-based software RAID. For host-based software RAID, this product supports RAID levels 0+1 and 1+0.


Note –

When you use host-based software RAID with hardware RAID, the hardware RAID level that you use affects the hardware maintenance procedures because of the volume management administration that is required.

If you use hardware RAID level 1, 3, or 5, you can perform most maintenance procedures in Maintaining RAID Storage Arrays without volume management disruptions. If you use hardware RAID level 0, some maintenance procedures in Maintaining RAID Storage Arrays require additional volume management administration because the availability of the LUNs is impacted.



Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.
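
For example, if the reported DID instance is d5, the device ID can be updated as follows. The instance number d5 is hypothetical; substitute the instance that is reported in the error message.


# scdidadm -R d5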


This section describes the procedures about how to configure a RAID storage array after installing Sun Cluster software. Table 1–2 lists these procedures.

To configure a RAID storage array before you install Sun Cluster software, follow the same procedure that you use in a noncluster environment. For procedures about how to configure RAID storage arrays before you install Sun Cluster, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

Table 1–2 Task Map: Configuring Disk Drives

Task: Create a logical unit (LUN).
Information: How to Create and Map a LUN

Task: Remove a LUN.
Information: How to Unmap and Delete a LUN

Procedure: How to Create and Map a LUN

Use this procedure to create a logical unit (LUN) from unassigned disk drives or remaining capacity. See the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide for the latest information about LUN administration.

Steps
  1. Create and partition the logical device(s).

    For more information on creating a LUN, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.

  2. Map the LUNs to the host channels that are cabled to the nodes.

    For more information on mapping LUNs to host channels, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.


    Note –

    You can have a maximum of 64 shared LUNs.


  3. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
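
    The following sd.conf fragment is a sketch only. The target and LUN numbers are hypothetical; use the values that match how you mapped the LUNs to the host channels.

    name="sd" class="scsi" target=2 lun=0;
    name="sd" class="scsi" target=2 lun=1;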

  4. To activate the changes to the /kernel/drv/sd.conf file, perform one of the following actions.

    • On systems that run Solaris 8 Update 7 or earlier, perform a reconfiguration boot.

    • On systems that run Solaris 9 or later, run the update_drv -f sd command and then the devfsadm command, as shown in the example after this list.
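
    For example, on a Solaris 9 or later node, the commands named in this step are run in the following order:

    # update_drv -f sd
    # devfsadm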

  5. If necessary, label the LUNs.
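
    Labeling is typically done with the format utility. The following is a sketch only: start format, select the disk that corresponds to the new LUN, and then run the label subcommand at the format> prompt.

    # format
    format> label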

  6. If the cluster is online and active, update the global device namespace.


    # scgdevs
    
  7. If you want a volume manager to manage the new LUN, run the appropriate Solstice DiskSuite/Solaris Volume Manager commands or VERITAS Volume Manager commands. Use these commands to incorporate the new LUN into a diskset or disk group.

    For information on administering LUNs, see your Sun Cluster system administration documentation.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
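
    For example, with Solstice DiskSuite/Solaris Volume Manager, a new LUN might be added to an existing diskset with a command like the following. This is a sketch only; demoset is a hypothetical diskset name and d4 is a hypothetical DID instance.

    # metaset -s demoset -a /dev/did/rdsk/d4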

  8. If you want the new LUN to be a quorum device, add the quorum device.

    For the procedure about how to add a quorum device, see your Sun Cluster system administration documentation.
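
    In Sun Cluster 3.0-3.1, a quorum device is typically added with the scsetup utility or the scconf command. The following is a sketch only; d20 is a hypothetical DID instance for the new LUN.

    # scconf -a -q globaldev=d20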

Procedure: How to Unmap and Delete a LUN

Use this procedure to unmap and delete one or more LUNs. See the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide for the latest information about LUN administration.


Caution –

When you delete the LUN, you remove all data on that LUN.


Before You Begin

This procedure assumes that the cluster is online. A cluster is online if the RAID storage array is connected to the nodes and all nodes are powered on. This procedure also assumes that you plan to telnet to the RAID storage array to perform this procedure.

Steps
  1. Identify the LUNs that you need to remove.


    # cfgadm -al
    
  2. Determine whether the LUN that you are removing is configured as a quorum device.


    # scstat -q
    
    • If no, proceed to Step 3.

    • If yes, relocate that quorum device to another suitable RAID storage array.

      For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.

  3. Remove the LUN from disksets or disk groups.

    Run the appropriate Solstice DiskSuite/Solaris Volume Manager commands or VERITAS Volume Manager commands to remove the LUN from any diskset or disk group. For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. See the note that follows for additional VERITAS Volume Manager commands that are required.
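
    For example, with Solstice DiskSuite/Solaris Volume Manager, a LUN might be removed from a diskset with a command like the following. This is a sketch only; demoset is a hypothetical diskset name and d4 is a hypothetical DID instance.

    # metaset -s demoset -d /dev/did/rdsk/d4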


    Note –

    LUNs that were managed by VERITAS Volume Manager must be completely removed from VERITAS Volume Manager control before you can delete the LUNs from the Sun Cluster environment. After you delete the LUN from any disk group, use the following commands on both nodes to remove the LUN from VERITAS Volume Manager control.



    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    
  4. On both nodes, unconfigure the device that is associated with the LUN.


    # cfgadm -c unconfigure cX::dsk/cXtYdZ
    
  5. Unmap the LUN from both host channels.

    For the procedure about how to unmap a LUN, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.

  6. Delete the logical drive.

    For more information, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.

  7. On both nodes, remove the paths to the LUN that you are deleting.


    # devfsadm -C
    
  8. On both nodes, remove all obsolete device IDs (DIDs).


    # scdidadm -C
    
  9. If no other LUN is assigned to the target and LUN ID, remove the LUN entries from the /kernel/drv/sd.conf file.

    Perform this step on both nodes to prevent extended boot time caused by unassigned LUN entries.


    Note –

    Do not remove the default cXtXdX entries.