Sun Cluster 3.0-3.1 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual

Procedure: How to Install a Storage Array in a New Cluster

This procedure assumes you are installing one or more storage arrays at initial installation of a cluster.

This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage, you need to update your nvramrc script and set the scsi-initiator-id by following this procedure.

Before You Begin

This procedure relies on the following prerequisites and assumptions; ensure that you have met them before you begin.

Steps
  1. Install the host adapters in the nodes that connect to the storage arrays.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the storage arrays.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  3. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note –

    A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.


      Note –

      If necessary, halt the nodes so that you can perform OpenBoot™ PROM (OBP) Monitor tasks at the ok prompt.


      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

    2. If necessary, power on a node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for one node to 6.


      {0} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
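
      If the node is already running the Solaris OS, you can first check the current setting with the eeprom command, which reads the same OBP variable. This optional check displays the value that the OBP Monitor will use:


      # eeprom scsi-initiator-id
      scsi-initiator-id=6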
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
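
      For reference, show-disks displays a lettered menu of disk device paths. The following sample is hypothetical (the paths shown match the nvramrc example in the next step); your device paths will differ:


      {0} ok show-disks
      a) /pci@1f,4000/scsi@2/disk
      b) /pci@1f,4000/scsi@3/disk
      q) NO SELECTION
      Enter Selection, q to quit: q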

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner [Control C]
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {0} ok
      • To discard the changes, type the following command:


        {0} ok nvquit
        {0} ok
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            cd /pci@1f,4000/scsi@3
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            install-console
                            banner
      {0} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {0} ok
  4. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
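
      Alternatively, if the second node is already running the Solaris OS, you can verify the same property without halting the node. The prtconf command prints the OBP device tree with its properties; expect one line of output per SCSI host adapter:


      # prtconf -pv | grep scsi-initiator-id
          scsi-initiator-id:  00000007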
  5. Install the Solaris Operating System, then apply any required Solaris patches.

    For the most current list of patches, see http://sunsolve.sun.com.
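
    To see which patches are already installed on a node before you apply new ones, list the installed patch revisions:


    # showrev -p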

  6. Read the following conditions carefully to determine whether you must reboot the nodes.

    • If you are using a version of RAID Manager later than 6.22, proceed to Step 7.

    • If you are using a version of the Solaris Operating System earlier than Solaris 8 Update 4, proceed to Step 7.

    • If you are using RAID Manager 6.22 and the Solaris 8 Update 4 or later operating environment, reboot both nodes.


      # reboot
      
  7. Install the RAID Manager software.

    For the procedure about how to install the RAID Manager software, see the Sun StorEdge RAID Manager User’s Guide.

    For the required version of the RAID Manager software that Sun Cluster software supports, see Restrictions and Requirements.
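
    After the installation, you can confirm that the RAID Manager packages are present on each node. The SUNWosa package-name prefix used here is an assumption based on RAID Manager 6.x naming; verify the exact package names against your release notes:


    # pkginfo | grep SUNWosa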

  8. Install patches for the controller modules and RAID Manager software.

    For the most current list of patches, see http://sunsolve.sun.com.

  9. Check the NVSRAM file revision for the storage arrays. If necessary, install the most recent revision.

    For the NVSRAM file revision number, boot level, and procedure about how to upgrade the NVSRAM file, see the Sun StorEdge RAID Manager Release Notes.

  10. Check the controller module firmware revision for the storage arrays. If necessary, install the most recent revision.

    For the firmware revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the firmware, see the Sun StorEdge RAID Manager User’s Guide.
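
    If you prefer the command line to the RAID Manager GUI for this check, the raidutil utility can display inquiry data for a controller, which includes the firmware revision. This is a sketch only: the controller device name c1t5d0 is hypothetical, and the supported options are documented in the Sun StorEdge RAID Manager User's Guide:


    # raidutil -c c1t5d0 -i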

  11. Set the Rdac parameters in the /etc/osa/rmparams file on both nodes.


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
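
    After you edit the file, you can verify on each node that both parameters are set as shown:


    # egrep 'Rdac_RetryCount|Rdac_NoAltOffline' /etc/osa/rmparams
    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE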
    
  12. Ensure that the controller module is set to active/active mode.

    For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.
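
    You can also check the controller mode from the command line with the rdacutil utility. This sketch assumes that rdacutil -i reports the module's active/active or active/passive status in RAID Manager 6.x; the controller name c1t5d0 is hypothetical:


    # rdacutil -i c1t5d0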

  13. Set up the storage arrays with logical unit numbers (LUNs) and hot spares.

    For the procedure about how to set up the storage array with LUNs and hot spares, see the Sun StorEdge RAID Manager User’s Guide.


    Note –

    Use the format command to verify Solaris logical device names.
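
    For example, running format noninteractively lists the available disks, including the new LUNs, and then exits; the redirection from /dev/null prevents format from prompting for input:


    # format < /dev/null
    Searching for disks...done
    ...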


  14. Copy the /etc/raid/rdac_address file from the node on which you created the LUNs to the other node. Copying this file ensures that the LUN configuration is consistent across both nodes.
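
    For example, if you created the LUNs on node1, a hypothetical copy to node2 looks like the following. This sketch assumes that remote copy between the cluster nodes is permitted; use another transfer method if it is not:


    node1# rcp /etc/raid/rdac_address node2:/etc/raid/rdac_address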

  15. Ensure that the new logical name for the LUN that you created in Step 13 appears in the /dev/rdsk directory on both nodes.


    # /etc/raid/bin/hot_add
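
    After hot_add completes, list the device nodes on each node to confirm that the new logical name appears. The device name c1t5d0 in this check is hypothetical:


    # ls /dev/rdsk | grep c1t5d0
    c1t5d0s0
    ...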
    
See Also

To continue with Sun Cluster software and data services installation tasks, see your Sun Cluster software installation documentation and the Sun Cluster data services developer's documentation. For a list of Sun Cluster documentation, see Related Documentation.