Sun Cluster 3.0 U1 Hardware Guide

How to Add a StorEdge A3500/A3500FC System to a Running Cluster

Use this procedure to add a StorEdge A3500/A3500FC system to a running cluster.

  1. Install the RAID Manager software.

    For the procedure on installing RAID Manager software, see the Sun StorEdge RAID Manager Installation and Support Guide.


    Note -

    RAID Manager 6.22 or a compatible version is required for clustering with Sun Cluster 3.0.



    Note -

    For the most current list of software, firmware, and patches that are required for the StorEdge A3x00/A3500FC controller module, refer to EarlyNotifier 20029, "A1000/A3x00/A3500FC Software/Firmware Configuration Matrix." This document is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.
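
    If you are installing the packages from the command line, the following sketch shows a typical sequence. The distribution directory is only an example, and the package list assumes the standard RAID Manager 6.22 packages (SUNWosar, SUNWosafw, SUNWosamn, and SUNWosau); confirm both against the Sun StorEdge RAID Manager Installation and Support Guide.


    # cd /cdrom/cdrom0/Product
    # pkgadd -d . SUNWosar SUNWosafw SUNWosamn SUNWosau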


  2. Install any StorEdge A3500/A3500FC system patches.

    For the location of patches and installation instructions, see the Sun Cluster 3.0 U1 Release Notes.
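
    For example, after you identify the required patch IDs in the release notes, you can check whether a patch is already installed and add it if it is not. The patch ID and staging directory in this sketch are placeholders only:


    # showrev -p | grep patch-id
    # patchadd /var/tmp/patch-id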

  3. Set the Rdac parameters in the /etc/osa/rmparams file:


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
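
    After you edit the file, you can confirm that both entries are in place. The following check is only a sketch that uses the standard Solaris egrep utility:


    # egrep "Rdac_RetryCount|Rdac_NoAltOffline" /etc/osa/rmparams
    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE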
    

  4. Power on the StorEdge A3500/A3500FC system.

    For the procedure on powering on the StorEdge A3500/A3500FC system, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

  5. Depending on which type of system you are adding:

    • If you are adding a StorEdge A3500 system, go to Step 6.

    • If you are adding a StorEdge A3500FC system, set the loop ID of the controller module by installing jumpers to the appropriate pins on the rear of the controller module.

      For diagrams and information about setting FC-AL ID settings, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

  6. Are you installing new host adapters in the first node for connection to the StorEdge A3500/A3500FC system?

    • If not, go to Step 8.

    • If you are installing new host adapters, shut down and power off the first node.


      # scswitch -S -h nodename
      # shutdown -y -g0 -i0
      

    For the full procedure on shutting down and powering off a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Install the host adapters in the first node.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Cable the StorEdge A3500/A3500FC system to the first node. Depending on which type of system you are adding:

    • If you are adding a StorEdge A3500 system, connect the differential SCSI cable between the node and the controller module as shown in Figure 7-3. Make sure that the entire SCSI bus length to each enclosure is less than 25 m. This measurement includes the cables to both nodes, as well as the bus length internal to each enclosure, node, and host adapter.

    • If you are installing a StorEdge A3500FC system, see Figure 7-4 for a sample StorEdge A3500FC cabling connection. The example shows the first node that is connected to a StorEdge A3500FC controller module.

      For more sample configurations, see the Sun StorEdge A3500/A3500FC Hardware Configuration Guide.

      For the procedure on installing the cables, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

    Figure 7-3 Sample StorEdge A3500 Cabling


    Figure 7-4 Sample StorEdge A3500FC Cabling (1st Node Attached)


  9. Did you power off the first node to install a host adapter?

    • If not, go to Step 10.

    • If you did power off the first node, power on the node and the StorEdge A3500/A3500FC system, but do not allow the node to boot. If necessary, halt the system so that you can continue with OpenBoot PROM (OBP) Monitor tasks.
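
      If the node starts to boot, you can usually return it to the OpenBoot PROM prompt by sending a break (or Stop-A) from the console. Optionally, the following sketch keeps the node at the ok prompt across any resets until you are ready to boot it in Step 15; restore the setting with setenv auto-boot? true before booting.


      {0} ok setenv auto-boot? false
      auto-boot? = false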

  10. Depending on which type of controller module you are adding, do the following:

    • If you are installing a StorEdge A3500FC controller module, go to Step 15.

    • If you are adding a StorEdge A3500 controller module, find the paths to the SCSI host adapters.


      {0} ok show-disks
      ...
      b) /sbus@6,0/QLGC,isp@2,10000/sd...
      d) /sbus@2,0/QLGC,isp@2,10000/sd...

      Identify the two controllers that are to be connected to the disk arrays, and record their paths. Use this information to change the SCSI addresses of these controllers in the nvramrc script in Step 11. Do not include the sd directories in the device paths.

  11. Edit the nvramrc script to change the scsi-initiator-id for the host adapters on the first node.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the host adapter that has SCSI address 7 as the host adapter on the "second node."

    To avoid conflicts, change the scsi-initiator-id of the remaining host adapter in the SCSI chain to an available SCSI address. This procedure refers to the host adapter that has an available SCSI address as the host adapter on the "first node."

    For a partial list of nvramrc editor and nvedit keystroke commands, see Appendix B, NVRAMRC Editor and NVEDIT Keystroke Commands of this guide. For a full list of commands, see the OpenBoot 3.x Command Reference Manual.

    The following example sets the scsi-initiator-id to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on).


    Note -

    Insert exactly one space after the quotation mark and before scsi-initiator-id.


    {0} ok nvedit 
    0: probe-all
    1: cd /sbus@6,0/QLGC,isp@2,10000 
    2: 6 " scsi-initiator-id" integer-property 
    3: device-end 
    4: cd /sbus@2,0/QLGC,isp@2,10000
    5: 6 " scsi-initiator-id" integer-property 
    6: device-end 
    7: install-console 
    8: banner <Control C>
    {0} ok


  12. Store the changes.

    The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you have completed your edits, save the changes. If you are not sure about the changes, discard them.

    • To store the changes, type:


      {0} ok nvstore
      {0} ok 

    • To discard the changes, type:


      {0} ok nvquit
      {0} ok 

  13. Verify the contents of the nvramrc script you created in Step 11, as shown in the following example.

    If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


    {0} ok printenv nvramrc
    nvramrc =             probe-all
                          cd /sbus@6,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          cd /sbus@2,0/QLGC,isp@2,10000
                          6 " scsi-initiator-id" integer-property
                          device-end 
                          install-console
                          banner
    {0} ok

  14. Instruct the OpenBoot PROM Monitor to use the nvramrc script:


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true
    {0} ok 

  15. Did you power off the first node to install a host adapter?

    • If not, go to Step 21.

    • If you powered off the first node, boot it now and wait for it to join the cluster.


      {0} ok boot -r
      

    For more information on booting nodes, see the Sun Cluster 3.0 U1 System Administration Guide.
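
    To confirm that the node has rejoined the cluster, you can check node status from any cluster member; the following sketch uses the Sun Cluster scstat command with the -n option to show node status only.


    # scstat -n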

  16. Are you installing new host adapters in the second node for connection to the StorEdge A3500/A3500FC system?

    • If not, go to Step 21.

    • If you are installing new host adapters, shut down and power off the second node.


      # scswitch -S -h nodename
      # shutdown -y -g0 -i0
      

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  17. Install the host adapters in the second node.

    For the procedure on installing host adapters, see the documentation that shipped with your nodes.

  18. Cable the StorEdge A3500/A3500FC system to the second node. Depending on which type of controller module you are adding, do the following:

    • If you are adding a StorEdge A3500 controller module, connect the differential SCSI cable between the node and the controller module as shown in Figure 7-3. Make sure that the entire SCSI bus length to each enclosure is less than 25 m. This measurement includes the cables to both nodes, as well as the bus length internal to each enclosure, node, and host adapter.

    • If you are installing a StorEdge A3500FC controller module, see Figure 7-5 for a sample StorEdge A3500FC cabling connection. The example shows two nodes that are connected to a StorEdge A3500FC controller module.

      For more sample configurations, see the Sun StorEdge A3500/A3500FC Hardware Configuration Guide.

      For the procedure on installing the cables, see the Sun StorEdge A3500/A3500FC Controller Module Guide.

    Figure 7-5 Sample StorEdge A3500FC Cabling (2nd Node Attached)


  19. Did you power off the second node to install a host adapter?

    • If not, go to Step 21.

    • If you did power off the second node, power on the node and the StorEdge A3500/A3500FC system, but do not allow the node to boot. If necessary, halt the system so that you can continue with OpenBoot PROM (OBP) Monitor tasks.

  20. Verify that the second node recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps that you performed in Step 17.


    {0} ok show-disks
    ...
    b) /sbus@6,0/QLGC,isp@2,10000/sd...
    d) /sbus@2,0/QLGC,isp@2,10000/sd...
    {0} ok

  21. Depending on which type of controller module you are adding, do the following:

    • If you are installing a StorEdge A3500FC controller module, go to Step 26.

    • If you are adding a StorEdge A3500 controller module, verify that the scsi-initiator-id for the host adapters on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /sbus@6,0/QLGC,isp@2,10000
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...

  22. Did you power off the second node to install a host adapter?

    • If not, go to Step 26.

    • If you powered off the second node, boot it now and wait for it to join the cluster.


      {0} ok boot -r
      

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  23. Check the StorEdge A3500/A3500FC controller module NVSRAM file revision, and if necessary, install the most recent revision.

    For the NVSRAM file revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure on upgrading the NVSRAM file, see the Sun StorEdge RAID Manager User's Guide.

  24. Check the StorEdge A3500/A3500FC controller module firmware revision, and, if necessary, install the most recent firmware revision.

    For the revision number and boot level of the StorEdge A3500/A3500FC controller module firmware, see the Sun StorEdge RAID Manager Release Notes. For the procedure on upgrading the StorEdge A3500/A3500FC controller firmware, see "How to Upgrade Controller Module Firmware in a Running Cluster".
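
    If you prefer a command-line check to the RAID Manager GUI for these revision levels, the raidutil utility that ships with RAID Manager can report controller information, including firmware levels. The installation path and device name in this sketch are examples; adjust them for your configuration, and treat the RAID Manager documentation as the authoritative reference.


    # /usr/lib/osa/bin/raidutil -c c1t5d0 -i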

  25. One at a time, boot each node into cluster mode.


    # reboot
    

  26. On one node, verify that the DIDs have been assigned to the StorEdge A3500/A3500FC LUNs for all nodes that are attached to the StorEdge A3500/A3500FC system:


    # scdidadm -L
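
    In the scdidadm output, each StorEdge A3500/A3500FC LUN should appear with the same DID instance number from every attached node. The following fragment is only an illustration; the node names, device paths, and instance numbers are hypothetical.


    2        phys-schost-1:/dev/rdsk/c1t5d0 /dev/did/rdsk/d2
    2        phys-schost-2:/dev/rdsk/c1t5d0 /dev/did/rdsk/d2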
    

  27. Verify that the controller module is set to active/active mode (if it is not, set it to active/active).

    For more information on controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User's Guide.

Where to Go From Here

To create a LUN from disk drives that are unassigned, see "How to Create a LUN".

To upgrade StorEdge A3500/A3500FC controller module firmware, see "How to Upgrade Controller Module Firmware in a Running Cluster".