Sun Cluster 3.0 U1 Hardware Guide

Configuring a StorEdge T3 Disk Tray

This section provides the procedures for configuring a StorEdge T3 disk tray in a running cluster. The following table lists these procedures.

Table 8-1 Task Map: Configuring a StorEdge T3 Disk Tray

  Task                                   For Instructions, Go To

  Create a disk tray logical volume      "How to Create a Sun StorEdge T3 Disk Tray Logical Volume"

  Remove a disk tray logical volume      "How to Remove a Sun StorEdge T3 Disk Tray Logical Volume"

How to Create a Sun StorEdge T3 Disk Tray Logical Volume

Use this procedure to create a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3 disk tray that is to host the logical volume you are creating.

  1. Telnet to the StorEdge T3 disk tray that is to host the logical volume you are creating.
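
    A session might begin as follows; t3-tray is a placeholder for your disk tray's hostname or IP address:


    # telnet t3-tray
    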

  2. Create the logical volume.

    Creating a logical volume involves adding, initializing, and then mounting the logical volume on the disk tray.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 Installation, Operation, and Service Manual.
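
    As a sketch only, the sequence on the disk tray resembles the following; the volume name v1, the drive range u1d1-9, and the RAID level are example values, and the authoritative syntax is in the manuals cited above:


    t3:/:<1> vol add v1 data u1d1-9 raid 5
    t3:/:<2> vol init v1 data
    t3:/:<3> vol mount v1
    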

  3. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    

    After devfsadm completes, a Solaris logical device name for the new logical volume appears in the /dev/rdsk and /dev/dsk directories on all cluster nodes that are attached to the StorEdge T3 disk tray.

  4. If you are running VERITAS Volume Manager, update VERITAS Volume Manager's device tables on all cluster nodes that are attached to the logical volume you created in Step 2. Otherwise, proceed to Step 5.
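
    For example, the following command, run on each attached node, rebuilds the VERITAS Volume Manager device list so that the new volume is visible to the volume manager:


    # vxdctl enable
    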

  5. If necessary, partition the logical volume.
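
    For example, you can run the format(1M) utility on one node and select the new logical volume from the disk list to label and partition it:


    # format
    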

  6. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge T3 disk tray logical volume, see the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

To configure a logical volume as a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide for the procedure on adding a quorum device.

How to Remove a Sun StorEdge T3 Disk Tray Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3 disk tray that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

This procedure removes all data on the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing. Otherwise, proceed to Step 2.

  2. Is the logical volume you are removing a quorum device?


    # scstat -q
    
    • If yes, remove the quorum device before you proceed.

    • If no, proceed to Step 3.

    For the procedure on removing a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide.

  3. If you are running VERITAS Volume Manager, update VERITAS Volume Manager's device tables on all cluster nodes that are attached to the logical volume you are removing. Otherwise, proceed to Step 4.

  4. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
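
    For example, one of the following commands applies, depending on your volume manager; the diskset name, disk group name, and device names shown are placeholders:


    # metaset -s setname -d /dev/did/dsk/dN
    # vxdg -g diskgroup rmdisk diskname
    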

  5. Remove the logical volume.

    For the procedure on deleting a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.
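
    As a sketch only, the sequence on the disk tray resembles the following; v1 is an example volume name, and the authoritative syntax is in the manual cited above:


    t3:/:<1> vol unmount v1
    t3:/:<2> vol remove v1
    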

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 15 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  8. Shut down Node A.


    # shutdown -y -g0 -i0
    
  9. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  10. On Node A, remove the obsolete device IDs (DIDs).


    # devfsadm -C
    # scdidadm -C
    
  11. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  12. Shut down Node B.


    # shutdown -y -g0 -i0
    
  13. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  14. On Node B, remove the obsolete DIDs.


    # devfsadm -C
    # scdidadm -C
    
  15. Return the resource groups and device groups you identified in Step 6 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Create a Sun StorEdge T3 Disk Tray Logical Volume".