Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual

Procedure: How to Add a Subsequent Storage Array to an Existing Cluster

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Configure the new storage array.


    Note –

    Each storage array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new storage array that you are adding. For more information about loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.
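
    For example, before you assign the box ID you can list the enclosures that the node already recognizes, so that you choose a box ID and enclosure name that are not already in use. This optional check uses the luxadm probe command, which reports each enclosure that the node sees along with its name and logical paths.

    # luxadm probe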


  2. On both nodes, use the luxadm insert_device command to insert the new storage array into the cluster and to add paths to its disk drives.


    # luxadm insert_device
    Please hit <RETURN> when you have finished adding
    Fibre Channel Enclosure(s)/Device(s):
    

    Note –

    Do not press the Return key until you complete Step 3.


  3. Cable the new storage array to a spare port in the existing hub, switch, or host adapter in your cluster.

    For cabling diagrams, see Appendix A, Cabling Diagrams.


    Note –

    You must use FC switches when installing storage arrays in a partner-group configuration. If you want to create a storage area network (SAN) by using two FC switches and Sun SAN software, see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
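
    As an optional, hedged check after cabling, you can verify that the new link is active by listing the state of each host adapter port with the luxadm expert-mode port subcommand. The device paths in the output depend on your host adapters; ports that are cabled to a live loop are typically reported as CONNECTED.

    # luxadm -e port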


  4. After you cable the new storage array, press the Return key to complete the luxadm insert_device operation.


    Waiting for Loop Initialization to complete...
    New Logical Nodes under /dev/dsk and /dev/rdsk :
    c4t98d0s0
    c4t98d0s1
    c4t98d0s2
    c4t98d0s3
    c4t98d0s4
    c4t98d0s5
    c4t98d0s6
    ...
    New Logical Nodes under /dev/es:
    ses12
    ses13
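
    If you want an additional check at this point, the following hedged example displays the state of the new enclosure and its disk drives. BOX_NAME is a placeholder; replace it with the enclosure name of the new storage array as reported by luxadm probe.

    # luxadm display BOX_NAME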
    
  5. On both nodes, verify that the new storage array is visible.


    # luxadm probe
    
  6. On one node, use the scgdevs command to update the DID database.


    # scgdevs
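
    As an optional, hedged follow-up, you can confirm that DID devices were created for the new disk drives by listing the DID device mappings for all cluster nodes.

    # scdidadm -L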