Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual

Installing Storage Arrays

This section contains instructions on installing storage arrays in both new clusters and operational clusters.

Table 1–1 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Sun Cluster software are installed.
Information: How to Install a Storage Array in a New Cluster

Task: Add a storage array to an operational cluster.
Information: How to Add the First Storage Array to an Existing Cluster
             How to Add a Subsequent Storage Array to an Existing Cluster

How to Install a Storage Array in a New Cluster

This procedure assumes that you are installing one or more storage arrays at the initial installation of a cluster.

Steps
  1. Install host adapters in the nodes that are to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.


    Note –

    To ensure maximum redundancy, put each host adapter on a separate I/O board, if possible.


  2. Cable the storage arrays to the nodes.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  3. Check the revision number for the storage array's controller firmware. If necessary, install the most recent firmware.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.
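
    One common way to check the controller firmware revision on Solaris is with the luxadm utility. The following is a sketch only; the enclosure name array0 is a placeholder for your storage array's actual enclosure name.

    # luxadm display array0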

How to Add the First Storage Array to an Existing Cluster

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Determine if the storage array packages need to be installed on the nodes to which you are connecting the storage array. This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop   Sun Enterprise Network Array firmware and utilities
    system  SUNWluxox   Sun Enterprise Network Array libraries (64-bit)
  2. On each node, install any necessary packages for the Solaris Operating System.

    The storage array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.


    Note –

    The -G option applies only if you are using the Solaris 10 OS. Omit this option if you are using the Solaris 8 or 9 OS.



    # pkgadd -G -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
    -G

    Adds the package to the current zone only. When used in the global zone, the package is added to the global zone only and is not propagated to any existing or yet-to-be-created non-global zone. When used in a non-global zone, the package is added to the non-global zone only.

    path_to_Solaris

    Path to the Solaris Operating System

    Pkg1 Pkg2

    The packages to be added
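
    For example, if the Solaris 9 media were mounted at /cdrom/cdrom0, the command might look like the following. The mount point is a placeholder, and the -G option is omitted because it applies only to the Solaris 10 OS.

    # pkgadd -d /cdrom/cdrom0/Solaris_9/Product SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop SUNWluxox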

  3. Shut down and power off any node that is connected to the storage array.

    For the procedure about how to shut down and power off a node, see Sun Cluster system administration documentation.
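
    On Sun Cluster 3.x, a typical sequence evacuates resource groups and device groups from the node before shutting it down. The following is a sketch only; the node name phys-schost-1 is a placeholder, and your Sun Cluster system administration documentation remains the authoritative reference.

    # scswitch -S -h phys-schost-1
    # shutdown -y -g0 -i0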

  4. Install host adapters in the node that is to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Cable, configure, and power on the storage array.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  6. Perform a reconfiguration boot to create the new Solaris device files and links.


    ok boot -r
    
  7. Determine if any patches need to be installed on nodes that are to be connected to the storage array.

    For a list of patches specific to Sun Cluster, see your Sun Cluster release notes documentation.

  8. Obtain and install any necessary patches on the nodes that are to be connected to the storage array.

    For procedures about how to apply patches, see your Sun Cluster system administration documentation.


    Note –

    Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.
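
    For example, a patch that has been downloaded and extracted to /var/tmp might be applied as follows. The patch ID 123456-01 is a placeholder, not an actual Sun Cluster patch.

    # patchadd /var/tmp/123456-01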


  9. If required by the patch README instructions, shut down and reboot the node.

    For the procedure about how to shut down and power off a node, see Sun Cluster system administration documentation.

  10. Perform Step 3 through Step 9 for each node that is attached to the storage array.

  11. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
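
    As one illustration, with Solstice DiskSuite/Solaris Volume Manager you might place the new disk drives in a shared diskset. The diskset name and the DID device name below are placeholders.

    # metaset -s setname -a -h phys-schost-1 phys-schost-2
    # metaset -s setname -a /dev/did/rdsk/d5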

How to Add a Subsequent Storage Array to an Existing Cluster

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Configure the new storage array.


    Note –

    Each storage array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new storage array that you are adding. For more information about loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.


  2. On both nodes, use the luxadm insert_device command to insert the new storage array into the cluster and to add paths to its disk drives.


    # luxadm insert_device
    Please hit <RETURN> when you have finished adding
    Fibre Channel Enclosure(s)/Device(s):
    

    Note –

    Do not press the Return key until you complete Step 3.


  3. Cable the new storage array to a spare port in the existing hub, switch, or host adapter in your cluster.

    For cabling diagrams, see Appendix A, Cabling Diagrams.


    Note –

    You must use FC switches when installing storage arrays in a partner-group configuration. If you want to create a storage area network (SAN) by using two FC switches and Sun SAN software, see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.


  4. After you cable the new storage array, press the Return key to complete the luxadm insert_device operation.


    Waiting for Loop Initialization to complete...
    New Logical Nodes under /dev/dsk and /dev/rdsk :
    c4t98d0s0
    c4t98d0s1
    c4t98d0s2
    c4t98d0s3
    c4t98d0s4
    c4t98d0s5
    c4t98d0s6
    ...
    New Logical Nodes under /dev/es:
    ses12
    ses13
    
  5. On both nodes, verify that the new storage array is visible.


    # luxadm probe
    
  6. On one node, use the scgdevs command to update the DID database.


    # scgdevs
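
    As an optional check, you can list the DID device mappings to confirm that the new disk drives have been assigned DID numbers.

    # scdidadm -L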