Sun Cluster 3.0-3.1 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual

Installing Storage Arrays

This section contains instructions for installing storage arrays in both new and existing clusters.

Table 1–1 Task Map: Installing Storage Arrays

Task 

Information 

Install an array in a new cluster, before the OS and Sun Cluster software are installed.  

How to Install a Storage Array in a New Cluster

Add an array to an operational cluster.  

How to Add a Storage Array to an Existing Cluster

Procedure: How to Install a Storage Array in a New Cluster

This procedure assumes you are installing one or more storage arrays at initial installation of a cluster.

This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage, you need to update your nvramrc script and set the scsi-initiator-id by following this procedure.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Install the host adapters in the nodes that connect to the storage arrays.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Cable the storage arrays.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  3. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note –

    A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.


      Note –

      If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

    2. If necessary, power on a node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for one node to 6.


      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
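
      The following output is illustrative only; your device paths will differ. For example, if show-disks lists a host adapter path such as /pci@1f,4000/scsi@2/sd, enter /pci@1f,4000/scsi@2 (without the trailing /sd directory) in the nvramrc script.


      {0} ok show-disks
      ...
      b) /pci@1f,4000/scsi@2/sd...
      c) /pci@1f,4000/scsi@3/sd...
      {0} ok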

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:


        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            cd /pci@1f,4000/scsi@3
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  4. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  5. Install the Solaris Operating System, then apply any required Solaris patches.

    For the most current list of patches, see http://sunsolve.sun.com.
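
    For example, you can confirm the Solaris release and check whether a particular patch is already applied before you install it. The patch ID in this example is a placeholder; substitute the patch IDs that SunSolve lists for your release.


    # uname -r
    5.8
    # showrev -p | grep 108528   # 108528 is a placeholder patch ID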

  6. Read the following conditions carefully to determine whether you must reboot the nodes.

    • If you are using a version of RAID Manager later than 6.22, proceed to Step 7.

    • If you are using a version of the Solaris Operating System earlier than Solaris 8 Update 4, proceed to Step 7.

    • If you are using RAID Manager 6.22 and the Solaris 8 Update 4 or later operating environment, reboot both nodes.


      # reboot
      
  7. Install the RAID Manager software.

    For the procedure about how to install the RAID Manager software, see the Sun StorEdge RAID Manager User’s Guide.

    For the required version of the RAID Manager software that Sun Cluster software supports, see Restrictions and Requirements.
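
    To check which RAID Manager packages and versions are already installed, you can query the Solaris package database. This sketch assumes that the RAID Manager 6.x packages use the SUNWosa package-name prefix; adjust the names to match the packages on your installation media.


    # pkginfo | grep SUNWosa                # assumed package-name prefix
    # pkginfo -l SUNWosau | grep VERSION    # SUNWosau is an assumed package name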

  8. Install patches for the controller modules and RAID Manager software.

    For the most current list of patches, see http://sunsolve.sun.com.

  9. Check the NVSRAM file revision for the storage arrays. If necessary, install the most recent revision.

    For the NVSRAM file revision number, boot level, and procedure about how to upgrade the NVSRAM file, see the Sun StorEdge RAID Manager Release Notes.

  10. Check the controller module firmware revision for the storage arrays. If necessary, install the most recent revision.

    For the firmware revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the firmware, see the Sun StorEdge RAID Manager User’s Guide.

  11. Set the Rdac parameters in the /etc/osa/rmparams file on both nodes.


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
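
    After you edit the /etc/osa/rmparams file on both nodes, you can confirm that both parameters are set as expected. For example:


    # grep Rdac_RetryCount /etc/osa/rmparams
    Rdac_RetryCount=1
    # grep Rdac_NoAltOffline /etc/osa/rmparams
    Rdac_NoAltOffline=TRUE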
    
  12. Ensure that the controller module is set to active/active mode.

    For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.

  13. Set up the storage arrays with logical unit numbers (LUNs) and hot spares.

    For the procedure about how to set up the storage array with LUNs and hot spares, see the Sun StorEdge RAID Manager User’s Guide.


    Note –

    Use the format command to verify Solaris logical device names.
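
    For example, a check with the format command might resemble the following. The output is abridged and illustrative; the controller and target numbers on your nodes will differ. Look for one entry for each LUN that you created.


    # format
    Searching for disks...done

    AVAILABLE DISK SELECTIONS:
         0. c0t0d0 <local boot disk>
         1. c1t5d0 <LUN 0 on the storage array>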


  14. Copy the /etc/raid/rdac_address file from the node on which you created the LUNs to the other node. Copying this file to the other node ensures consistency across both nodes.
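
    For example, assuming remote shell access between the cluster nodes and a second node named node2 (a placeholder; substitute your node name):


    # rcp /etc/raid/rdac_address node2:/etc/raid/rdac_address   # node2 is a placeholder host name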

  15. Ensure that the new logical name for the LUN that you created in Step 13 appears in the /dev/rdsk directory on both nodes.


    # /etc/raid/bin/hot_add
    
See Also

To continue with Sun Cluster software and data services installation tasks, see your Sun Cluster software installation documentation and the Sun Cluster data services developer's documentation. For a list of Sun Cluster documentation, see Related Documentation.

Procedure: How to Add a Storage Array to an Existing Cluster

Use this procedure to add a storage device to an existing cluster. If you need to install a storage device in a new cluster, use the procedure in How to Install a Storage Array in a New Cluster.

You might want to perform this procedure in the following scenarios.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Install the RAID Manager software.

    • For the required version of the RAID Manager software that Sun Cluster software supports, see Restrictions and Requirements.

    • For the procedure about how to install RAID Manager software, see the Sun StorEdge RAID Manager Installation and Support Guide.

    • For the most current list of software, firmware, and patches that your storage array or storage system requires, refer to the appropriate EarlyNotifier that is outlined in Related Documentation. This document is available online to Sun service providers and to customers with SunSolve service contracts at the SunSolve site: http://sunsolve.sun.com.

  2. Install the storage array or storage system patches.

    For the location of patches and installation instructions, see your Sun Cluster release notes documentation. For a list of Sun Cluster documentation, see Related Documentation.

  3. Set the Rdac parameters in the /etc/osa/rmparams file.


    Rdac_RetryCount=1
    Rdac_NoAltOffline=TRUE
    
  4. Power on the storage array or storage system.

    For the procedure about how to power on the storage array or storage system, see your storage documentation. For a list of storage documentation, see Related Documentation.

  5. Are you installing new host adapters in Node A?

    • If no, skip to Step 7.

    • If yes, shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation. For a list of Sun Cluster documentation, see Related Documentation.

  6. Install the host adapters in Node A.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  7. Cable the storage array or storage system to Node A.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  8. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note –

    A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.


      Note –

      If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

    2. If necessary, power on a node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for one node to 6.


      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:


        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            cd /pci@1f,4000/scsi@3
                            7 encode-int " scsi-initiator-id" property
                            device-end
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  9. Are you installing new host adapters in Node B to connect Node B to the storage array or storage system?

    • If no, skip to Step 11.

    • If yes, shut down and power off the node.

      For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation. For a list of Sun Cluster documentation, see Related Documentation.

  10. Install the host adapters in Node B.

    For the procedure about how to install host adapters, see the documentation that shipped with your nodes.

  11. Cable the storage array or storage system to Node B.

    For cabling diagrams, see Adding a Sun StorEdge A3500 Storage System.

  12. Did you power off Node B to install a host adapter?

    • If no, skip to Step 14.

    • If yes, power on Node B and the storage array or storage system, but do not allow the node to boot. If necessary, halt the system to continue with OpenBoot PROM (OBP) Monitor tasks.

  13. Verify that Node B recognizes the new host adapters and disk drives.

    If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps you performed in Step 10.


    {0} ok show-disks
    ...
    b) /sbus@6,0/QLGC,isp@2,10000/sd...
    d) /sbus@2,0/QLGC,isp@2,10000/sd...
    {0} ok
  14. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  15. Did you power off Node B to install a host adapter?

    • If no, skip to Step 19.

    • If yes, perform a reconfiguration boot to create the new Solaris device files and links, as shown in the following example.
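
      The following example shows a reconfiguration boot from the OpenBoot PROM ok prompt.


      {0} ok boot -r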

  16. Check the controller module NVSRAM file revision. If necessary, install the most recent revision.

    For the NVSRAM file revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the NVSRAM file, see the Sun StorEdge RAID Manager User’s Guide.

  17. Verify the controller module firmware revision. If necessary, install the most recent firmware revision.

    For the revision number and boot level of the controller module firmware, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the controller firmware, see How to Upgrade Controller Module Firmware.

  18. One node at a time, boot each node into cluster mode.


    # reboot
    
  19. On one node, verify that device IDs have been assigned to the LUNs on all nodes that are attached to the storage array or storage system.


    # scdidadm -L
    
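    The output resembles the following example. A LUN that is shared by both nodes appears once for each attached node with the same DID instance number. The node and device names in this example are illustrative.


    1        node1:/dev/rdsk/c0t0d0         /dev/did/rdsk/d1
    2        node1:/dev/rdsk/c1t5d0         /dev/did/rdsk/d2
    2        node2:/dev/rdsk/c1t5d0         /dev/did/rdsk/d2
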
  20. (StorEdge A3500 Only) Verify that the controller module is set to active/active mode.

    For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.

See Also

To create a LUN from disk drives that are unassigned, see How to Create a LUN.

To upgrade controller module firmware, see How to Upgrade Controller Module Firmware.