Sun Cluster 3.0 12/01 Release Notes Supplement

Installing a StorEdge/Netra st A1000 Array

This section provides the procedure for an initial installation of a pair of StorEdge/Netra st A1000 arrays in a new, unconfigured cluster. To add StorEdge/Netra st A1000 arrays to an operating cluster, use the procedure "How to Add a Pair of StorEdge/Netra st A1000 Arrays to a Running Cluster".

How to Install a Pair of StorEdge/Netra st A1000 Arrays

Use this procedure to install and configure a pair of StorEdge/Netra st A1000 arrays before you install the Solaris operating environment and Sun Cluster software on your cluster nodes.

  1. Install the host adapters in the nodes that connect to the arrays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the arrays.

    The StorEdge/Netra st A1000 arrays must be configured in pairs for the Sun Cluster environment. Figure F-1 illustrates a pair of StorEdge/Netra st A1000 arrays cabled in a Sun Cluster environment.

    Figure F-1 StorEdge/Netra st A1000 Array Cabling


  3. Power on the arrays and then the cluster nodes.


    Note -

    When you power on the nodes, do not allow them to boot. If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


    To power on a StorEdge/Netra st A1000 array, push the power switch to the momentary on position (right side) and then release it.

  4. Find the paths to the host adapters in the first node:


    {0} ok show-disks
    ...b) /sbus@6,0/QLGC,isp@2,10000/sd...d) /sbus@2,0/QLGC,isp@2,10000/sd...


    Note -

    Use this information to change the SCSI addresses of the host adapters in the nvramrc script in Step 5, but do not include the sd directories in the device paths.


  5. Edit the nvramrc script to change the scsi-initiator-id for the host adapters on the first node.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the node that has a host adapter with SCSI address 7 as the "second node."

    To avoid conflicts, you must change the scsi-initiator-id of the remaining host adapter in the SCSI chain to an available SCSI address. This procedure refers to the node that has a host adapter with an available SCSI address as the "first node."

    For a partial list of nvramrc editor and nvedit keystroke commands, see Appendix B of the Sun Cluster 3.0 12/01 Hardware Guide. For a full list of commands, see the OpenBoot 3.x Command Reference Manual.

    The following example sets the scsi-initiator-id of the host adapter on the first node to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on).


    Note -

    Insert exactly one space after the first quotation mark and before scsi-initiator-id.


    {0} ok nvedit 
    0: probe-all
    1: cd /sbus@6,0/QLGC,isp@2,10000
    2: 6 " scsi-initiator-id" integer-property 
    3: device-end 
    4: cd /sbus@2,0/QLGC,isp@2,10000
    5: 6 " scsi-initiator-id" integer-property 
    6: device-end 
    7: install-console 
    8: banner <Control C>
    {0} ok


  6. Store the changes.

    The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

    • To store the changes, type:


      {0} ok nvstore
      

    • To discard the changes, type:


      {0} ok nvquit
      

  7. Verify the contents of the nvramrc script you created in Step 5, as shown in the following example.

    If the contents of the nvramrc script are incorrect, use the nvedit command again to make corrections.


    {0} ok printenv nvramrc
    nvramrc =             probe-all
                          cd /sbus@6,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          cd /sbus@2,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          install-console
                          banner

  8. Set the parameter to instruct the OpenBoot PROM Monitor to use the nvramrc script:


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true

  9. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

    Use the show-disks command to find the paths to the host adapters. Select each host adapter's device tree node, then display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


    {0} ok show-disks
    ...b) /sbus@6,0/QLGC,isp@2,10000/sd...d) /sbus@2,0/QLGC,isp@2,10000/sd...
    {0} ok cd /sbus@6,0/QLGC,isp@2,10000
    {0} ok .properties
    scsi-initiator-id        00000007

  10. Install the Solaris operating environment, then apply any required Solaris patches.


    Note -

    For the current list of patches that are required for the Solaris operating environment, refer to SunSolve. SunSolve is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.
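

    For example, after installing the Solaris operating environment you can confirm the installed release and then apply each downloaded patch with the patchadd command. The patch-id shown here is a placeholder; substitute the patch IDs that SunSolve lists for your configuration.


    # cat /etc/release
    # patchadd /var/tmp/patch-id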


  11. Install the RAID Manager software.

    For the procedure on installing the RAID Manager software, see the Sun StorEdge RAID Manager 6.22.1 Release Notes.


    Note -

    RAID Manager 6.22.1 is required for clustering the Sun StorEdge/Netra st A1000 array with Sun Cluster 3.0.
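

    As an illustration only, if the RAID Manager 6.22.1 packages are available in a local directory, you can add them with the pkgadd command. The package names shown here are examples of typical RAID Manager package names; confirm the exact names and installation order in the Sun StorEdge RAID Manager 6.22.1 Release Notes.


    # cd location_of_packages
    # pkgadd -d . SUNWosar SUNWosafw SUNWosamn SUNWosau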


  12. Install any StorEdge/Netra st A1000 array or RAID Manager patches.


    Note -

    For the most current list of software, firmware, and patches that are required for the StorEdge/Netra st A1000 Array, refer to EarlyNotifier 20029, "A1000/A3x00/A1000FC Software/Firmware Configuration Matrix." This document is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site, http://sunsolve.sun.com, under advanced search.
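

    After you add the patches, you can confirm that each patch is installed by using the showrev command on each node. The patch-id shown here is a placeholder.


    # showrev -p | grep patch-id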


  13. Check the StorEdge/Netra st A1000 array NVSRAM file revision, and if necessary, install the most recent revision.

    For the NVSRAM file revision number, boot level, and procedure on upgrading the NVSRAM file, see the Sun StorEdge RAID Manager 6.22.1 Release Notes.
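
    As a sketch only, assuming RAID Manager is installed in its default /usr/lib/osa/bin directory, the raidutil inquiry option displays firmware revision information for a controller device. The device name shown is an example; see the Sun StorEdge RAID Manager 6.22.1 Release Notes for the authoritative procedure.


    # /usr/lib/osa/bin/raidutil -c c1t5d0 -i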

  14. Set the following Rdac parameters in the /etc/osa/rmparams file on both nodes:


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
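
    To confirm the settings on each node, you can display the modified lines, for example:


    # egrep "Rdac_RetryCount|Rdac_NoAltOffline" /etc/osa/rmparams
    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE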
    

  15. Set up the arrays with logical units (LUNs) and hot spares.

    For the procedure on setting up the StorEdge/Netra st A1000 array with LUNs and hot spares, see the Sun StorEdge RAID Manager User's Guide.


    Note -

    Use the format command to verify Solaris logical device names.
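

    For example, on each node you can list the array controllers and LUNs with the RAID Manager lad command (assuming the default /usr/lib/osa/bin installation path) and then run format to confirm the corresponding Solaris logical device names. This is a sketch only; the output depends on your configuration.


    # /usr/lib/osa/bin/lad
    # format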


  16. Ensure that the new logical names for the LUNs you created in Step 15 appear in the /dev/rdsk directory on both nodes by running the hot_add command on both nodes:


    # /etc/raid/bin/hot_add
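
    You can then confirm that the device nodes were created, for example by listing the /dev/rdsk directory on each node. The device names that appear depend on your configuration.


    # ls /dev/rdsk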
    

Where to Go From Here

To continue with Sun Cluster software and data services installation tasks, see the Sun Cluster 3.0 12/01 Software Installation Guide and the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.