Sun Cluster 3.0-3.1 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual

Configuring Storage Arrays

This product supports the use of hardware RAID and host-based software RAID. For host-based software RAID, this product supports RAID levels 0+1 and 1+0.


Note –

When you use host-based software RAID with hardware RAID, the hardware RAID levels that you use affect the hardware maintenance procedures because of the volume management administration that is required.

If you use hardware RAID level 1, 3, or 5, you can perform most maintenance procedures in Maintaining RAID Storage Arrays without volume management disruptions. If you use hardware RAID level 0, some maintenance procedures in Maintaining RAID Storage Arrays require additional volume management administration because the availability of the LUNs is impacted.



Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you check the device ID configuration by running the scdidadm -c command, the following error message appears on your console if a device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.
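
For example, if /dev/rdsk/c1t3d0 is an affected device (the path here is a hypothetical example), you might run:


# scdidadm -R /dev/rdsk/c1t3d0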


This section describes how to configure a RAID storage array after you install Sun Cluster software. Table 1–2 lists these procedures.

To configure a RAID storage array before you install Sun Cluster software, follow the same procedure that you use in a noncluster environment. For procedures about how to configure RAID storage arrays before you install Sun Cluster, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

Table 1–2 Task Map: Configuring Disk Drives

Task                              Information
Create a logical unit (LUN).      How to Create and Map a LUN
Remove a LUN.                     How to Unmap and Delete a LUN

Procedure: How to Create and Map a LUN

Use this procedure to create a logical unit (LUN) from unassigned disk drives or remaining capacity. See the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide for the latest information about LUN administration.

Steps
  1. Create and partition the logical device(s).

    For more information on creating a LUN, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.

  2. Map the LUNs to the host channels that are cabled to the nodes.

    For more information on mapping LUNs to host channels, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.


    Note –

    You can have a maximum of 64 shared LUNs.


  3. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
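
    For example, an sd.conf entry for a LUN that is presented at SCSI target 2, LUN 1 takes the following form. The target and LUN values here are hypothetical; use the values from your own LUN mappings.


    name="sd" class="scsi" target=2 lun=1;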

  4. To make the changes to the /kernel/drv/sd.conf file active, perform one of the following actions.

    • On systems that run Solaris 8 Update 7 or earlier, perform a reconfiguration boot.

    • On systems that run Solaris 9 or later, run the update_drv -f sd command and then the devfsadm command, as shown in the example that follows this list.
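
    For example, on systems that run Solaris 8 Update 7 or earlier, you might perform the reconfiguration boot as follows:


    # reboot -- -r

    On systems that run Solaris 9 or later, you might instead run:


    # update_drv -f sd
    # devfsadm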

  5. If necessary, label the LUNs.
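
    You can use the format utility to label a LUN. The format session is interactive; the following invocation is only a starting point. Within format, select the disk that corresponds to the new LUN, then use the label command.


    # format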

  6. If the cluster is online and active, update the global device namespace.


    # scgdevs
    
  7. If you want a volume manager to manage the new LUN, run the appropriate Solstice DiskSuite/Solaris Volume Manager commands or VERITAS Volume Manager commands to incorporate the new LUN into a diskset or disk group, as in the example after this step.

    For information on administering LUNs, see your Sun Cluster system administration documentation.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
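
    For example, to add the new LUN to an existing Solstice DiskSuite/Solaris Volume Manager diskset, you might run a command similar to the following. The diskset name and DID device name are hypothetical; substitute your own values.


    # metaset -s oradiskset -a /dev/did/rdsk/d13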

  8. If you want the new LUN to be a quorum device, add the quorum device.

    For the procedure about how to add a quorum device, see your Sun Cluster system administration documentation.
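
    For example, you might add the DID device that corresponds to the new LUN as a quorum device by using the scconf command, or interactively by using the scsetup utility. The device name in the following sketch is hypothetical:


    # scconf -a -q globaldev=d13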

Procedure: How to Unmap and Delete a LUN

Use this procedure to delete one or more LUNs. See the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide for the latest information about LUN administration.


Caution –

When you delete the LUN, you remove all data on that LUN.


Before You Begin

This procedure assumes that the cluster is online. A cluster is online if the RAID storage array is connected to the nodes and all nodes are powered on. This procedure also assumes that you plan to telnet to the RAID storage array to perform this procedure.

Steps
  1. Identify the LUNs that you need to remove.


    # cfgadm -al
    
  2. Determine whether the LUN that you are removing is configured as a quorum device.


    # scstat -q
    
    • If no, proceed to Step 3.

    • If yes, relocate that quorum device to another suitable RAID storage array.

      For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.
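
      For example, you might first add another shared device as a quorum device and then remove the affected LUN from the quorum configuration. The device names in the following sketch are hypothetical:


      # scconf -a -q globaldev=d20
      # scconf -r -q globaldev=d13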

  3. Remove the LUN from disksets or disk groups.

    Run the appropriate Solstice DiskSuite/Solaris Volume Manager commands or VERITAS Volume Manager commands to remove the LUN from any diskset or disk group, as in the example that follows. For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. See the note that follows for additional VERITAS Volume Manager commands that are required.
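
    For example, to remove the DID device that corresponds to the LUN from a Solstice DiskSuite/Solaris Volume Manager diskset, you might run a command similar to the following. The diskset name and DID device name are hypothetical:


    # metaset -s oradiskset -d /dev/did/rdsk/d13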


    Note –

    LUNs that were managed by VERITAS Volume Manager must be completely removed from VERITAS Volume Manager control before you can delete the LUNs from the Sun Cluster environment. After you delete the LUN from any disk group, use the following commands on both nodes to remove the LUN from VERITAS Volume Manager control.



    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    
  4. On both nodes, unconfigure the device that is associated with the LUN.


    # cfgadm -c unconfigure cx::dsk/cxtydz
    
  5. Unmap the LUN from both host channels.

    For the procedure about how to unmap a LUN, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.

  6. Delete the logical drive.

    For more information, see the Sun StorEdge 3000 Family RAID Firmware 4.1x User's Guide.

  7. On both nodes, remove the paths to the LUN that you are deleting.


    # devfsadm -C
    
  8. On both nodes, remove all obsolete device IDs (DIDs).


    # scdidadm -C
    
  9. If no other LUN is assigned to the same target and LUN ID, remove the corresponding LUN entries from the /kernel/drv/sd.conf file.

    Perform this step on both nodes to prevent extended boot time caused by unassigned LUN entries.


    Note –

    Do not remove the default cXtXdX entries.