Sun Cluster 3.0 U1 Release Notes Supplement

Configuring StorEdge T3/T3+ Disk Trays in a Running Cluster

This section contains the procedures for configuring a StorEdge T3 or StorEdge T3+ disk tray in a running cluster. Table B-1 lists these procedures.

Table B-1 Task Map: Configuring a StorEdge T3/T3+ Disk Tray 

Task                         For Instructions, Go To...

Create a logical volume      "How to Create a Logical Volume"

Remove a logical volume      "How to Remove a Logical Volume"

How to Create a Logical Volume

Use this procedure to create a StorEdge T3/T3+ disk tray logical volume. This procedure assumes all cluster nodes are booted and attached to the disk tray that will host the logical volume you are creating.

  1. Telnet to the disk tray that is the master controller unit of your partner-group.

    The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the disk trays). For example, Figure B-1 shows the master controller unit of the partner-group as the lower disk tray. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.
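
    For example, if the master controller unit is reachable at the hypothetical hostname t3-mcu, you would start a telnet session from a cluster node as follows, then log in when prompted:


    # telnet t3-mcu
    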

  2. Create the logical volume.

    Creating a logical volume involves adding, initializing, and mounting the logical volume.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
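
    As a sketch only, the following disk tray commands, entered at the disk tray's prompt in the telnet session, would create a RAID 5 volume named v1 (a hypothetical name) on drives 1 through 8 of unit 1 with drive 9 as a hot spare, then initialize and mount it. Verify the exact syntax, and choose a layout appropriate to your configuration, in the manuals cited above:


    vol add v1 data u1d1-8 raid 5 standby u1d9
    vol init v1 data
    vol mount v1
    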

  3. On all cluster nodes, update the /devices and /dev entries:


    # devfsadm
    
  4. On one node connected to the partner-group, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  5. Are you running VERITAS Volume Manager?

    • If not, go to Step 6.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.
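
    For example, run the following on each cluster node that is attached to the new logical volume; the vxdisk list command is an optional check that the new device now appears:


    # vxdctl enable
    # vxdisk list
    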

  6. If needed, partition the logical volume.
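
    You can partition the volume with the format utility's partition menu. Alternatively, as a sketch under the assumption that another disk already carries the label you want, you can copy that label (c2t1d0 and c3t1d0 are hypothetical device names):


    # prtvtoc /dev/rdsk/c2t1d0s2 | fmthard -s - /dev/rdsk/c3t1d0s2
    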

  7. From any node in the cluster, update the global device namespace by using the scgdevs command.


    # scgdevs
    

    Note -

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.



Note -

Do not configure StorEdge T3/T3+ logical volumes as quorum devices in partner-group configurations; this use is not supported.


Where to Go From Here

To create a new resource or reconfigure a running resource to use the new logical volume, see the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

How to Remove a Logical Volume

Use this procedure to remove a StorEdge T3/T3+ disk tray logical volume. This procedure assumes all cluster nodes are booted and attached to the disk tray that hosts the logical volume you are removing.

This procedure defines "Node A" as the node you begin working with, and "Node B" as the other node.


Caution -

This procedure removes all data from the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing.

  2. Are you running VERITAS Volume Manager?

    • If not, go to Step 3.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.
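
    For example, where c2t1d0 is the hypothetical VERITAS Volume Manager device name for the logical volume, run the following on each attached node:


    # vxdisk rm c2t1d0
    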

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
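
    As a sketch only, assuming a Solstice DiskSuite diskset named demo-set that references the LUN through the hypothetical DID device d4, or a VERITAS Volume Manager disk group named demo-dg that contains the hypothetical disk t3disk1, the commands would resemble the following:


    # metaset -s demo-set -d /dev/did/rdsk/d4
    # vxdg -g demo-dg rmdisk t3disk1
    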

  4. Telnet to the disk tray that is the master controller unit of your partner-group.

    The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the disk trays). For example, Figure B-1 shows the master controller unit of the partner-group as the lower disk tray. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.

  5. Remove the logical volume.

    For the procedure on removing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
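
    As a sketch, assuming the volume is named v1 (a hypothetical name), the disk tray commands, entered at the disk tray's prompt in the telnet session, would resemble the following:


    vol unmount v1
    vol remove v1
    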

  6. Use the scstat command to identify the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 15 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Move all resource groups and device groups off Node A:


    # scswitch -S -h nodename
    
  8. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    The -i6 option causes the node to reboot after it shuts down, rather than stopping at the ok prompt.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  9. On Node A, remove the obsolete device IDs (DIDs):


    # devfsadm -C
    # scdidadm -C
    
  10. On Node A, use the format command to verify that the logical volume you removed is no longer visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  11. Move all resource groups and device groups off Node B:


    # scswitch -S -h nodename
    
  12. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    The -i6 option causes the node to reboot after it shuts down, rather than stopping at the ok prompt.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  13. On Node B, remove the obsolete DIDs:


    # devfsadm -C
    # scdidadm -C
    
  14. On Node B, use the format command to verify that the logical volume you removed is no longer visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  15. Return the resource groups and device groups you identified in Step 6 to Node A and Node B:


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Create a Logical Volume".