Sun Cluster 3.0 U1 Hardware Guide

Installing a Sun StorEdge A3500/A3500FC System

This section describes the procedure for an initial installation of a StorEdge A3500/A3500FC system.

How to Install a StorEdge A3500/A3500FC System

Use this procedure for an initial installation and configuration, before installing the Solaris operating environment and Sun Cluster software.

  1. Install the host adapters in the nodes that are to be connected to the StorEdge A3500/A3500FC system.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the StorEdge A3500/A3500FC system:

    • See Figure 7-1 for a sample StorEdge A3500 system cabling.

    • See Figure 7-2 for a sample StorEdge A3500FC system cabling.

    For more sample configurations, see the Sun StorEdge A3500/A3500FC Hardware Configuration Guide.

    For the procedure on installing the cables, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

    Figure 7-1 Sample StorEdge A3500 System Cabling


    Figure 7-2 Sample StorEdge A3500FC System Cabling


  3. Depending on which type of controller module you are installing:

    • If you are installing a StorEdge A3500 controller module, go to Step 4.

    • If you are installing a StorEdge A3500FC controller module, set the loop ID of the controller module by installing jumpers to the appropriate pins on the rear of the controller module.

      For diagrams and information about setting FC-AL ID settings, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

  4. Power on the StorEdge A3500/A3500FC system and cluster nodes.


    Note -

    For StorEdge A3500 controller modules only: When you power on the nodes, do not allow them to boot. If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


    For the procedure on powering on the StorEdge A3500/A3500FC system, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

  5. Depending on which type of controller module you are installing:

    • For a StorEdge A3500FC controller module, go to Step 13.

    • For a StorEdge A3500 controller module, find the paths to the host adapters in the first node:


      {0} ok show-disks
      ...
      b) /sbus@6,0/QLGC,isp@2,10000/sd...
      d) /sbus@2,0/QLGC,isp@2,10000/sd...
      ...


    Note -

    Use this information to change the SCSI addresses of the host adapters in the nvramrc script in Step 6, but do not include the sd directories in the device paths.


  6. Edit the nvramrc script to change the scsi-initiator-id for the host adapters on the first node.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the node that has a host adapter with SCSI address 7 as the "second node."

    To avoid conflicts, you must change the scsi-initiator-id of the remaining host adapter in the SCSI chain to an available SCSI address. This procedure refers to the node that has a host adapter with an available SCSI address as the "first node."

    For a partial list of nvramrc editor and nvedit keystroke commands, see Appendix B, NVRAMRC Editor and NVEDIT Keystroke Commands of this guide. For a full list of commands, see the OpenBoot 3.x Command Reference Manual.

    The following example sets the scsi-initiator-id of the host adapter on the first node to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on).


    Note -

    Insert exactly one space after the first quotation mark and before scsi-initiator-id.


    {0} ok nvedit 
    0: probe-all
    1: cd /sbus@6,0/QLGC,isp@2,10000
    2: 6 " scsi-initiator-id" integer-property 
    3: device-end 
    4: cd /sbus@2,0/QLGC,isp@2,10000
    5: 6 " scsi-initiator-id" integer-property 
    6: device-end 
    7: install-console 
    8: banner <Control C>
    {0} ok


  7. Store the changes.

    The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

    • To store the changes, type:


      {0} ok nvstore
      

    • To discard the changes, type:


      {0} ok nvquit
      

  8. Verify the contents of the nvramrc script you created in Step 6, as shown in the following example.

    If the contents of the nvramrc script are incorrect, use the nvedit command again to make corrections.


    {0} ok printenv nvramrc
    nvramrc =             probe-all
                          cd /sbus@6,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          cd /sbus@2,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          install-console
                          banner

  9. Set the parameter to instruct the OpenBoot PROM Monitor to use the nvramrc script:


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true

  10. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

    Use the show-disks command to find the paths to the host adapters. Select each host adapter's device tree node, then display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7:


    {0} ok show-disks
    ...
    b) /sbus@6,0/QLGC,isp@2,10000/sd...
    d) /sbus@2,0/QLGC,isp@2,10000/sd...
    ...
    {0} ok cd /sbus@6,0/QLGC,isp@2,10000
    {0} ok .properties
    scsi-initiator-id        00000007

  11. Install the Solaris operating environment, then apply any required Solaris patches.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 U1 Installation Guide. For the location of patches and installation instructions, see the Sun Cluster 3.0 U1 Release Notes.
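
    After the patches are applied, you can confirm that a particular patch is installed on a node by listing the installed patches with the showrev command. In the following sketch, patch-id is a placeholder for a patch ID listed in the release notes:


    # showrev -p | grep patch-id
    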

  12. Read the following two conditions carefully to determine whether you must reboot the cluster nodes now:

    • If you are using a version of RAID Manager later than 6.22, or a version of the Solaris operating environment earlier than Solaris 8 Update 4, go to Step 13.

    • If you are using RAID Manager 6.22 and the Solaris 8 Update 4 or later operating environment, reboot both cluster nodes now.


      # reboot
      

  13. Install the RAID Manager software.

    For the procedure on installing the RAID Manager software, see the Sun StorEdge RAID Manager Installation and Support Guide.
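
    After the installation completes, you can confirm that the RAID Manager packages are present on each node with the pkginfo command. The SUNWosa package prefix used here is an assumption about the RAID Manager 6.x package names; see the installation guide for the exact names:


    # pkginfo | grep SUNWosa
    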


    Note -

    RAID Manager 6.22 or a compatible version is required for clustering with Sun Cluster 3.0.



    Note -

    For the most current list of software, firmware, and patches that are required for the StorEdge A3x00/A3500FC controller module, refer to EarlyNotifier 20029, "A1000/A3x00/A3500FC Software/Firmware Configuration Matrix." This document is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.


  14. Install any StorEdge A3500/A3500FC controller module or RAID Manager patches.

    For more information, see the Sun StorEdge RAID Manager Release Notes.
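
    Patches are typically applied with the Solaris patchadd command. The following sketch assumes that the patch has already been downloaded and unpacked under /var/tmp, and patch-id is a placeholder for the actual patch directory name:


    # patchadd /var/tmp/patch-id
    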

  15. Check the StorEdge A3500/A3500FC controller module NVSRAM file revision, and if necessary, install the most recent revision.

    For the NVSRAM file revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure on upgrading the NVSRAM file, see the Sun StorEdge RAID Manager User's Guide.

  16. Check the StorEdge A3500/A3500FC controller module firmware revision, and if necessary, install the most recent revision.

    For the firmware revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure on upgrading the firmware, see the Sun StorEdge RAID Manager User's Guide.

  17. Set the Rdac parameters in the /etc/osa/rmparams file:


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
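
    A quick way to confirm that both parameters are set as shown is to search the file for them, for example:


    # egrep "Rdac_RetryCount|Rdac_NoAltOffline" /etc/osa/rmparams
    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE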
    

  18. Verify that the controller module is set to active/active mode (if it is not, set it to active/active).

    For more information on controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User's Guide.

  19. Set up the StorEdge A3500/A3500FC controller module with logical unit numbers (LUNs) and hot spares.

    For the procedure on setting up the StorEdge A3500/A3500FC controller module with LUNs and hot spares, see the Sun StorEdge RAID Manager User's Guide.


    Note -

    Use the format command to verify Solaris logical device names.
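
    For example, you can list the disk devices that Solaris sees, without entering the interactive format menu, by redirecting standard input from /dev/null; each LUN you created should appear in the list as a cXtYdZ logical device:


    # format < /dev/null
    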


  20. Copy the /etc/raid/rdac_address file from the node on which you created the LUNs to the other node to ensure consistency across both nodes.
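
    A minimal way to perform this copy, assuming that remote shell access (rcp) is enabled between the nodes and that the second node is reachable as node2 (a hypothetical host name):


    # rcp /etc/raid/rdac_address node2:/etc/raid/rdac_address
    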

  21. On both nodes, run the hot_add command to ensure that the new logical name for each LUN you created in Step 19 appears in the /dev/rdsk directory:


    # /etc/raid/bin/hot_add
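
    You can then confirm on each node that the new logical device name is present in /dev/rdsk. The device name used below is hypothetical; substitute the name that format reports for your LUN:


    # ls -l /dev/rdsk/c1t5d0*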
    

Where to Go From Here

To continue with Sun Cluster software and data services installation tasks, see the Sun Cluster 3.0 U1 Installation Guide and the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.