Sun Cluster 3.0 12/01 Release Notes Supplement

Configuring a StorEdge 9910/9960 Array

This section contains the procedures for configuring a StorEdge 9910/9960 array in a running cluster. The following table lists these procedures.

Table D-1 Task Map: Configuring a StorEdge 9910/9960 Array

Task: Add a StorEdge 9910/9960 array logical volume
Instructions: See "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster".

Task: Remove a StorEdge 9910/9960 array logical volume
Instructions: See "How to Remove a StorEdge 9910/9960 Array Logical Volume".

How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster

Use this procedure to add a logical volume to a cluster. This procedure assumes that your service provider has created your logical volume and that all cluster nodes are booted and attached to the StorEdge 9910/9960 array.

  1. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    
  2. On one node connected to the array, use the format(1M) command to verify that the new logical volume is visible to the system, then label and partition it.


    # format
    

    See the format command man page for more information about using the command.
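
    As an illustration only, a newly presented 9910/9960 logical volume typically appears in the format disk list as an additional cXtYdZ entry. The controller number, target number, and disk-type strings below are hypothetical examples and will differ on your system.

    # format
    Searching for disks...done

    AVAILABLE DISK SELECTIONS:
           0. c0t0d0 <SUN36G cyl ...>
           1. c4t0d0 <HITACHI-OPEN-9 ...>   <- new array logical volume (example)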

  3. Are you running VERITAS Volume Manager?

    • If not, go to Step 4.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume that you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to add newly created devices (volumes) to your VERITAS Volume Manager device list.
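
    For example, the following sketch updates the VERITAS Volume Manager device list and then displays it so that you can confirm the new volume is listed. Run it on each node that is attached to the new volume; command behavior can vary by VxVM version, so defer to your VERITAS documentation.

    # vxdctl enable
    # vxdisk list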

  4. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.
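
    Optionally, you can confirm that a DID device was created for the new volume by searching the DID mappings for its cXtYdZ name. The c4t0d0 name below is an example only.

    # scdidadm -L | grep c4t0d0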

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge 9910/9960 array logical volume, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

How to Remove a StorEdge 9910/9960 Array Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and connected to the StorEdge 9910/9960 array that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as another node in the cluster. If you are removing the logical volume from a cluster that has more than two nodes, repeat Step 9 through Step 11 for each additional node that connects to the logical volume.


Caution - This procedure removes all data on the logical volume that you are removing.


  1. If necessary, back up all data and migrate all resource groups and disk device groups to another node.

  2. Determine if the logical volume that you plan to remove is configured as a quorum device.


    # scstat -q
    
    • If the logical volume is not a quorum device, go to Step 3.

    • If the logical volume is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    To add and remove a quorum device in your configuration, see the Sun Cluster 3.0 12/01 System Administration Guide.
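
    As a sketch only, the following scconf commands add a replacement quorum device and then remove the old one. The DID device names d20 (new) and d10 (old) are examples; see the Sun Cluster 3.0 12/01 System Administration Guide for the complete procedure.

    # scconf -a -q globaldev=d20
    # scconf -r -q globaldev=d10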

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical volume from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
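
    For example, with Solstice DiskSuite you might clear any metadevices built on the volume and then remove the underlying DID disk from its diskset; with VERITAS Volume Manager you might remove the disk from its disk group. The diskset, metadevice, disk group, and disk names below are examples only.

    # metaclear -s example-set d30
    # metaset -s example-set -d /dev/did/rdsk/d10
    # vxdg -g example-dg rmdisk example-disk01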

  4. If the cluster is running VERITAS Volume Manager, update the list of devices on all cluster nodes attached to the logical volume that you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) from your VERITAS Volume Manager device list.
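
    A minimal sketch, assuming the volume is seen by VxVM as c4t0d0 (an example name); run the command on each attached node.

    # vxdisk rm c4t0d0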

  5. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 11 of this procedure to return resource groups and device groups to these nodes.


    # scstat
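    # scstat -g     # optional: resource group status only
    # scstat -D     # optional: device group status only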
    
  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.


    # shutdown -y -g0 -i6
    

    For procedures on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. On Node A, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  9. Shut down and reboot Node B by using the shutdown command with the -i6 option.


    # shutdown -y -g0 -i6
    

    For procedures on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  10. On Node B, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  11. Return the resource groups and device groups you identified in Step 6 to Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
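
    For example, the following commands return a hypothetical resource group named nfs-rg and a device group named nfs-dg to a node named phys-schost-1; substitute the names you recorded in Step 6.

    # scswitch -z -g nfs-rg -h phys-schost-1
    # scswitch -z -D nfs-dg -h phys-schost-1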

  12. Repeat Step 9 through Step 11 for each additional node that connects to the logical volume.

Where to Go From Here

To create a logical volume, see "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster".