Sun Cluster 3.0 12/01 Release Notes Supplement

How to Add a Pair of StorEdge/Netra st A1000 Arrays to a Running Cluster

Use this procedure to add a pair of StorEdge/Netra st A1000 arrays to a running cluster.

  1. Install the RAID Manager software on cluster nodes.

    For the procedure on installing RAID Manager software, see the Sun StorEdge RAID Manager Installation and Support Guide.


    Note -

    RAID Manager 6.22 or a compatible version is required for clustering with Sun Cluster 3.0.
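
    For example, the RAID Manager 6.22 packages are typically added with the pkgadd command. The media path and package names shown here are typical but may differ for your distribution; use the names documented in the installation guide:


    # pkgadd -d /cdrom/cdrom0/Product SUNWosar SUNWosafw SUNWosamn SUNWosau
    # pkginfo | grep SUNWosa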


  2. Install any StorEdge/Netra st A1000 array patches on cluster nodes.


    Note -

    For the most current list of software, firmware, and patches that are required for the StorEdge/Netra st A1000 array, refer to EarlyNotifier 20029, "A1000/A3x00/A1000FC Software/Firmware Configuration Matrix." This document is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.
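
    If you downloaded the required patches from SunSolve, you can apply each one with the patchadd command and confirm the installation with showrev. The patch ID shown here is a placeholder only; use the IDs listed in the EarlyNotifier document:


    # patchadd /var/tmp/123456-01
    # showrev -p | grep 123456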


  3. Set the Rdac parameters in the /etc/osa/rmparams file on both nodes.


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
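
    After you edit the file, you can confirm that both parameters are set, for example:


    # egrep 'Rdac_RetryCount|Rdac_NoAltOffline' /etc/osa/rmparams
    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE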
    

  4. Power on the StorEdge/Netra st A1000 array.

    To power on the StorEdge/Netra st A1000 array, push the power switch to the momentary on position (right side) and then release it.

  5. Shut down the first node.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    
  6. If you are installing new host adapters, power off the first node.

    For the full procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Install the host adapters in the first node.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Cable the StorEdge/Netra st A1000 array to the first node.

    Connect the differential SCSI cable between the node and the array. Verify that the entire SCSI bus length to each enclosure is less than 25 m. This measurement includes the cables to both nodes, as well as the bus length internal to each enclosure, node, and host adapter.

    Figure F-2 StorEdge/Netra st A1000 Array Cabling


  9. Did you power off the first node to install a host adapter?

    • If not, go to Step 10.

    • If you did power off the first node, power on the node and the StorEdge/Netra st A1000 array, but do not allow the node to boot. If necessary, halt the node so that you can continue with OpenBoot PROM (OBP) Monitor tasks.

  10. Find the paths to the SCSI host adapters.


    {0} ok show-disks
    ...
    b) /sbus@6,0/QLGC,isp@2,10000/sd
    ...
    d) /sbus@2,0/QLGC,isp@2,10000/sd
    ...

    Identify the two controllers that are to be connected to the disk arrays, and record their paths. Use this information to change the SCSI addresses of these controllers in the nvramrc script in Step 11. Do not include the sd directories in the device paths.

  11. Edit the nvramrc script to change the scsi-initiator-id for the host adapters on the first node.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the host adapter that has SCSI address 7 as the host adapter on the "second node."

    To avoid conflicts, change the scsi-initiator-id of the remaining host adapter in the SCSI chain to an available SCSI address. This procedure refers to the host adapter that has an available SCSI address as the host adapter on the "first node."

    For a partial list of nvramrc editor and nvedit keystroke commands, see Appendix B of the Sun Cluster 3.0 12/01 Hardware Guide. For a full list of commands, see the OpenBoot 3.x Command Reference Manual.

    The following example sets the scsi-initiator-id to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on).


    Note -

    Insert exactly one space after the quotation mark and before scsi-initiator-id.


    {0} ok nvedit 
    0: probe-all
    1: cd /sbus@6,0/QLGC,isp@2,10000 
    2: 6 " scsi-initiator-id" integer-property 
    3: device-end 
    4: cd /sbus@2,0/QLGC,isp@2,10000
    5: 6 " scsi-initiator-id" integer-property 
    6: device-end 
    7: install-console 
    8: banner <Control C>
    {0} ok


  12. Store the changes.

    The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you have completed your edits, save the changes. If you are not sure about the changes, discard them.

    • To store the changes, type:


      {0} ok nvstore
      {0} ok 

    • To discard the changes, type:


      {0} ok nvquit
      {0} ok 

  13. Verify the contents of the nvramrc script you created in Step 11, as shown in the following example.

    If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


    {0} ok printenv nvramrc
    nvramrc =             probe-all
                          cd /sbus@6,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          cd /sbus@2,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          install-console
                          banner
    {0} ok

  14. Instruct the OpenBoot PROM Monitor to use the nvramrc script:


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true
    {0} ok 

  15. Boot the first node.


    {0} ok boot -r
    

    For more information on booting nodes, see the Sun Cluster 3.0 12/01 System Administration Guide.
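
    After the node boots and rejoins the cluster, you can confirm cluster membership with the scstat command, for example:


    # scstat -n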

  16. Check the StorEdge/Netra st A1000 array NVSRAM file and firmware revisions, and if necessary, install the most recent revision.

    To verify that you have the current revision, see the Sun StorEdge RAID Manager Release Notes. For the procedure on upgrading the NVSRAM file and firmware, see the Sun StorEdge RAID Manager User's Guide.
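
    With the RAID Manager software installed, one way to display the controller firmware level is the raidutil command. The controller device name shown here is a placeholder; substitute one of your array's controller device names:


    # /usr/lib/osa/bin/raidutil -c c1t5d0 -i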

  17. Shut down the second node.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    
  18. If you are installing new host adapters, power off the second node.

    For the full procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  19. Install the host adapters in the second node.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  20. Cable the StorEdge/Netra st A1000 array to the second node.

    Connect the differential SCSI cable between the node and the array. Make sure that the entire SCSI bus length to each enclosure is less than 25 m. This measurement includes the cables to both nodes, as well as the bus length internal to each enclosure, node, and host adapter.

    Figure F-3 StorEdge/Netra st A1000 Array Cabling


  21. Did you power off the second node to install a host adapter?

    • If not, go to Step 23.

    • If you did power off the second node, power on the node and the StorEdge/Netra st A1000 array, but do not allow the node to boot. If necessary, halt the node so that you can continue with OpenBoot PROM (OBP) Monitor tasks.

  22. Verify that the second node recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 19.


    {0} ok show-disks
    ...
    b) /sbus@6,0/QLGC,isp@2,10000/sd
    ...
    d) /sbus@2,0/QLGC,isp@2,10000/sd
    ...
    {0} ok

  23. Verify that the scsi-initiator-id for the host adapters on the second node is set to 7.

    Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


    {0} ok cd /sbus@6,0/QLGC,isp@2,10000
    {0} ok .properties
    scsi-initiator-id        00000007 
    ...

  24. Boot the second node.


    {0} ok boot -r
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  25. On one node, verify that device IDs (DIDs) have been assigned to the StorEdge/Netra st A1000 LUNs for all nodes that are attached to the StorEdge/Netra st A1000 array:


    # scdidadm -L
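
    The output lists each DID instance with its path from every attached node. The node names, device names, and instance numbers shown here are illustrative only:


    11       phys-node-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d11
    11       phys-node-2:/dev/rdsk/c1t5d0   /dev/did/rdsk/d11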
    

Where to Go From Here

To create a LUN from disk drives that are unassigned, see "How to Create a LUN".

To upgrade StorEdge/Netra st A1000 array firmware, see "How to Upgrade Disk Drive Firmware in a Running Cluster".