Sun Cluster 3.0 Hardware Guide

Installing a StorEdge A3500

This section provides the procedure for an initial installation of a StorEdge A3500 disk array. The following table lists the tasks involved in an initial installation of a StorEdge A3500 disk array. Perform these tasks in the order in which they are listed.

Table 7-1 Task Map: Installing a StorEdge A3500

Task                                              For Instructions, Go To...

Install the host adapters                         The documentation that shipped with your
                                                  host adapters and nodes

Ensure that each device in the SCSI chain has     "How to Install a StorEdge A3500"
a unique SCSI address

Cable, power on, and configure the disk array     Sun StorEdge A3500/A3500FC Hardware
                                                  Configuration Guide

                                                  Sun StorEdge A3500/A3500FC Controller
                                                  Module Guide

Install the Solaris operating environment         Sun Cluster 3.0 Installation Guide

Apply the required Solaris patches                Sun Cluster 3.0 Release Notes

Install the RAID Manager                          Sun StorEdge RAID Manager Installation and
                                                  Support Guide

Install StorEdge A3500 patch(es)                  Sun Cluster 3.0 Release Notes

Upgrade the StorEdge A3500 controller firmware    Sun StorEdge RAID Manager User's Guide

Set up the StorEdge A3500 with the desired        Sun StorEdge RAID Manager User's Guide
LUNs and configuration

Continue with Sun Cluster software and data       Sun Cluster 3.0 Installation Guide
services installation tasks
                                                  Sun Cluster 3.0 Data Services Installation
                                                  and Configuration Guide

How to Install a StorEdge A3500

Use this procedure for an initial installation and configuration of a StorEdge A3500 disk array, prior to installing the Solaris operating environment and Sun Cluster software. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 Installation Guide and your server hardware manual.

  1. Ensure that each device in the SCSI chain has a unique SCSI address.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the host adapter that you choose for SCSI address 7 as the host adapter on the second node.

    To avoid conflicts, in Step 5 you will change the scsi-initiator-id of the remaining host adapter in the SCSI chain to an available SCSI address. This procedure refers to the host adapter with an available SCSI address as the host adapter on the first node. Depending on the device and its configuration settings, either SCSI address 6 or 8 is usually available.


    Caution -

    Even though a slot in the enclosure might not be in use, you should avoid setting the scsi-initiator-id for the first node to the SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    For more information, see the OpenBoot 3.x Command Reference Manual and the labels inside the storage device.
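
    If a node is already at the OpenBoot PROM Monitor, one way to confirm that a host adapter still uses the default initiator address is to print the scsi-initiator-id NVRAM variable. The following response is a sketch only; the exact output format depends on your OpenBoot PROM version.


    {0} ok printenv scsi-initiator-id
    scsi-initiator-id =       7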

  2. Install the host adapters in the nodes that will be connected to the disk array.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  3. Cable, power on, and configure the disk array as shown in Figure 7-1.

    For the procedure on installing the SCSI cables, see Sun StorEdge A3500/A3500FC Hardware Configuration Guide. For the procedure on powering on the disk array, see Sun StorEdge A3500/A3500FC Controller Module Guide.

    Figure 7-1 Example of a StorEdge A3500 disk array


  4. Find the paths to the host adapters.


    {0} ok show-disks
    

    Identify the two controllers that will be connected to the storage devices, and record their paths. You will use this information to change the SCSI addresses of these controllers in the nvramrc script. Do not include the /sd directories in the device paths.
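
    The following output is a sketch only; the lettered entries and device paths are illustrative and will differ on your nodes.


    {0} ok show-disks
    a) /sbus@1f,0/QLGC,isp@3,10000/sd
    b) /sbus@1f,0/SUNW,fas@e,8800000/sd
    q) NO SELECTION
    Enter Selection, q to quit: q
    {0} ok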

  5. Edit the nvramrc script to change the scsi-initiator-id for the host adapters on the first node.

    For a list of nvramrc editor and nvedit keystroke commands, see Appendix B, NVRAMRC Editor and NVEDIT Keystroke Commands.

    The following example sets the scsi-initiator-id to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on). Substitute the device paths that you recorded in Step 4 for the paths that are shown in this example.


    Caution -

    Insert exactly one space after the first double quote and before scsi-initiator-id.



    {0} ok nvedit 
    0: probe-all
    1: cd /sbus@1f,0/QLGC,isp@3,10000 
    2: 6 encode-int " scsi-initiator-id" property 
    3: device-end 
    4: cd /sbus@1f,0/ 
    5: 6 encode-int " scsi-initiator-id" property 
    6: device-end 
    7: install-console 
    8: banner [Control C] 
    {0} ok
  6. Store the changes.

    The changes you make through the nvedit command are done on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

    • To store the changes, type:


      {0} ok nvstore
      {0} ok 

    • To discard the changes, type:


      {0} ok nvquit
      {0} ok 

  7. Verify the contents of the nvramrc script you created in Step 5.

    If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


    {0} ok printenv nvramrc
    nvramrc =             probe-all
                          cd /sbus@1f,0/QLGC,isp@3,10000
                          6 encode-int " scsi-initiator-id" property
                          device-end 
                          cd /sbus@1f,0/
                          6 encode-int " scsi-initiator-id" property
                          device-end  
                          install-console
                          banner
    {0} ok
  8. Instruct the OpenBoot PROM Monitor to use the nvramrc script.


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true
    {0} ok 

  9. Without allowing the node to boot, power on the second node. If necessary, abort the system to continue with OpenBoot PROM Monitor tasks.

  10. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

    Use the show-disks command to find the paths to the host adapters that are connected to the disk array. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


    {0} ok cd /sbus@1f,0/QLGC,isp@3,10000
    {0} ok .properties
    scsi-initiator-id        00000007 
  11. Install the Solaris operating environment, and apply the required Solaris patches.

    For the procedure on installing the Solaris operating environment, see Sun Cluster 3.0 Installation Guide. For the location of patches and installation instructions, see Sun Cluster 3.0 Release Notes.

  12. Install the RAID Manager.

    For the procedure on installing the RAID Manager, see Sun StorEdge RAID Manager Installation and Support Guide.

  13. Install StorEdge A3500 disk array patches.

    For the location of patches and installation instructions, see Sun Cluster 3.0 Release Notes.
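
    Patches are typically applied with the Solaris patchadd(1M) command. The following commands are a sketch only; the patch ID 123456-01 and the /var/tmp directory are placeholders for the patch IDs and locations that are listed in the release notes.


    # patchadd /var/tmp/123456-01
    # showrev -p | grep 123456-01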

  14. Upgrade the StorEdge A3500 disk array controller firmware.

    For the StorEdge A3500 disk array controller firmware version number and boot level, see Sun Cluster 3.0 Release Notes. For the procedure on upgrading the StorEdge A3500 disk array controller firmware, see Sun StorEdge RAID Manager User's Guide.

  15. Set up the StorEdge A3500 disk array with the desired LUNs and hot spares.

    For the procedure on setting up the StorEdge A3500 with LUNs and hot spares, see Sun StorEdge RAID Manager User's Guide.


    Note -

    The RAID Manager 6.x graphical user interface does not consistently display Solaris logical device names. Use the format command to verify Solaris logical device names.
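
    For example, the following listing shows the general form of the format command output; the logical device name, drive description, and physical device path are placeholders only.


    # format
    Searching for disks...done

    AVAILABLE DISK SELECTIONS:
           0. c1t5d0 <drive type and geometry>
              /physical/device/path
    Specify disk (enter its number):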


Where to Go From Here

To continue with Sun Cluster software and data services installation tasks, see Sun Cluster 3.0 Installation Guide and Sun Cluster 3.0 Data Services Installation and Configuration Guide.