Sun Cluster 3.0 12/01 Hardware Guide

How to Add a StorEdge MultiPack Enclosure to a Running Cluster

Use this procedure to install a StorEdge MultiPack enclosure in a running cluster. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and your server hardware manual.

For conceptual information on multi-initiator SCSI and device IDs, see the Sun Cluster 3.0 12/01 Concepts document.


Caution -

Quorum failures have been observed when clustering StorEdge MultiPack enclosures that contain a particular model of Quantum disk drive: SUN4.2G VK4550J. Avoid using this model of Quantum disk drive for clustering with StorEdge MultiPack enclosures. If you do use this model of disk drive, you must set the scsi-initiator-id of the "first node" to 6. If you are using a six-slot StorEdge MultiPack enclosure, you must also set the enclosure to the 9-through-14 SCSI target address range (for more information, see the Sun StorEdge MultiPack Storage Guide).


  1. Ensure that each device in the SCSI chain has a unique SCSI address.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the node with SCSI address 7 as the "second node." You can confirm a node's current setting from the OpenBoot PROM Monitor, as shown in the example at the end of this step.

    To avoid conflicts, in Step 9 you change the scsi-initiator-id of the remaining host adapter in the SCSI chain to an available SCSI address. This procedure refers to the node with an available SCSI address as the "first node."

    For a partial list of nvramrc editor and nvedit keystroke commands, see Appendix B, NVRAMRC Editor and NVEDIT Keystroke Commands of this guide. For a full list of commands, see the OpenBoot 3.x Command Reference Manual.


    Note -

    Even though a slot in the StorEdge MultiPack enclosure might not be in use, do not set the scsi-initiator-id for the first node to the SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.

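    To confirm the current setting, you can display the scsi-initiator-id of a node from the OpenBoot PROM Monitor, as in the following example. The value shown, 7, is the OpenBoot default.


    {0} ok printenv scsi-initiator-id
    scsi-initiator-id = 7
    {0} ok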

  2. Shut down and power off the first node.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  3. Install the host adapters in the first node.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  4. Connect the single-ended SCSI cables between the node and the StorEdge MultiPack enclosures, as shown in Figure 4-2.

    Make sure that the entire SCSI bus length to each StorEdge MultiPack enclosure is less than 6 m. This measurement includes the cables to both nodes, as well as the bus length internal to each StorEdge MultiPack enclosure, node, and host adapter. Refer to the documentation that shipped with the StorEdge MultiPack enclosure for other restrictions on SCSI operation.

    Figure 4-2 Example of a StorEdge MultiPack Enclosure Mirrored Pair

    Graphic

  5. Temporarily install a single-ended terminator on the SCSI IN port of the second StorEdge MultiPack enclosure, as shown in Figure 4-2.

  6. Connect each StorEdge MultiPack enclosure of the mirrored pair to different power sources.

  7. Power on the first node and the StorEdge MultiPack enclosures.

  8. Find the paths to the host adapters.


    {0} ok show-disks
    a) /pci@1f,4000/pci@4/SUNW,isptwo@4/sd
    b) /pci@1f,4000/pci@2/SUNW,isptwo@4/sd

    Identify and record the paths to the two controllers that are to be connected to the storage devices. You will use this information to change the SCSI addresses of these controllers in the nvramrc script in Step 9. Do not include the /sd directories in the device paths.

  9. Edit the nvramrc script to set the scsi-initiator-id for the host adapters on the first node.

    For a partial list of nvramrc editor and nvedit keystroke commands, see Appendix B, NVRAMRC Editor and NVEDIT Keystroke Commands. For a full list of commands, see the OpenBoot 3.x Command Reference Manual.

    The following example sets the scsi-initiator-id to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on).


    Caution -

    Insert exactly one space after the first quotation mark and before scsi-initiator-id.



    {0} ok nvedit 
    0: probe-all
    1: cd /pci@1f,4000/pci@4/SUNW,isptwo@4
    2: 6 " scsi-initiator-id" integer-property 
    3: device-end 
    4: cd /pci@1f,4000/pci@2/SUNW,isptwo@4 
    5: 6 " scsi-initiator-id" integer-property 
    6: device-end 
    7: install-console 
    8: banner <Control C> 
    {0} ok
  10. Store the changes.

    The changes you make through the nvedit command are recorded in a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

    • To store the changes, type:


      {0} ok nvstore
      {0} ok 

    • To discard the changes, type:


      {0} ok nvquit
      {0} ok 
  11. Verify the contents of the nvramrc script you created in Step 9, as shown in the following example.

    If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


    {0} ok printenv nvramrc 
    nvramrc =             probe-all
                          cd /pci@1f,4000/pci@4/SUNW,isptwo@4
                          6 " scsi-initiator-id" integer-property 
                          device-end 
                          cd /pci@1f,4000/pci@2/SUNW,isptwo@4
                          6 " scsi-initiator-id" integer-property  
                          device-end  
                          install-console
                          banner
    {0} ok
  12. Instruct the OpenBoot PROM Monitor to use the nvramrc script, as shown in the following example.


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true
    {0} ok 
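
    If you want to double-check the setting, you can display the variable again, as in the following example.


    {0} ok printenv use-nvramrc?
    use-nvramrc? = true
    {0} ok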

  13. Boot the first node and wait for it to join the cluster.


    {0} ok boot -r
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
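
    To confirm that the node has rejoined the cluster, you can check cluster membership from any cluster node. The following output is illustrative only; phys-schost-1 and phys-schost-2 are hypothetical node names.


    # scstat -n
    -- Cluster Nodes --
                        Node name           Status
                        ---------           ------
      Cluster node:     phys-schost-1       Online
      Cluster node:     phys-schost-2       Online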

  14. On all nodes, verify that device IDs (DIDs) have been assigned to the disk drives in the StorEdge MultiPack enclosure.


    # scdidadm -l
    
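    The scdidadm output maps each DID instance to a physical device path and its DID pseudo-device path. It looks similar to the following example; the node name and controller numbers are illustrative only.


    1        phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2        phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d2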

  15. Shut down and power off the second node.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

  16. Install the host adapters in the second node.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  17. Remove the SCSI terminator you installed in Step 5.

  18. Connect the StorEdge MultiPack enclosures to the host adapters by using single-ended SCSI cables, as shown in Figure 4-3.

    Figure 4-3 Example of a StorEdge MultiPack Enclosure Mirrored Pair

    Graphic

  19. Power on the second node but do not allow it to boot. If necessary, halt the node to continue with OpenBoot PROM Monitor tasks.
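
    If the node starts to boot the operating system, you can reach the OpenBoot PROM Monitor by pressing Stop-A on the console keyboard (or by sending a break from a tty console). To keep the node at the ok prompt across resets while you work, one option is to disable automatic booting temporarily, then restore the setting when you finish this procedure.


    {0} ok setenv auto-boot? false
    auto-boot? = false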

  20. Verify that the second node recognizes the new host adapters and disk drives.


    {0} ok show-disks
    
  21. Verify that the scsi-initiator-id for the host adapter on the second node is set to 7.

    Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


    {0} ok cd /pci@1f,4000/pci@4/SUNW,isptwo@4
    {0} ok .properties
    ...
    scsi-initiator-id        00000007
    ...
    {0} ok cd /pci@1f,4000/pci@2/SUNW,isptwo@4
    {0} ok .properties
    ...
    scsi-initiator-id        00000007
  22. Boot the second node and wait for it to join the cluster.


    {0} ok boot -r
    
  23. On all nodes, verify that the DIDs have been assigned to the disk drives in the StorEdge MultiPack enclosure.


    # scdidadm -l
    
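    To cross-check the assignments from every node at once, you can also list the cluster-wide DID mapping.


    # scdidadm -L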

  24. Perform volume management administration to add the disk drives in the StorEdge MultiPack enclosure to the volume management configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
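
    As an illustrative sketch only, the following Solstice DiskSuite commands create a shared diskset, add both nodes as diskset hosts, and place two of the new drives in the diskset by their DID names. The diskset name, node names, and DID numbers are hypothetical; see your volume manager documentation for the authoritative procedure.


    # metaset -s demo-set -a -h phys-schost-1 phys-schost-2
    # metaset -s demo-set -a /dev/did/rdsk/d1 /dev/did/rdsk/d2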