Oracle Solaris Cluster 3.3 With Sun StorEdge 9900 Series Storage Device Manual


Configuring Storage Arrays

This section describes how to configure a storage array in an Oracle Solaris Cluster environment. The following table lists these procedures. For configuration tasks that are not cluster-specific, see the documentation that shipped with your storage array.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. If a device ID has changed, the following error message appears on your console when you check the device ID configuration by running the cldevice check command.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
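
For example, the following commands check the device ID configuration and then repair one affected device. The DID instance name d20 used here is hypothetical; substitute the device that is named in the error message.

# cldevice check
# cldevice repair d20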


Table 1-1 Task Map: Configuring a Storage Array

Task                         Information
Add a logical volume.        How to Add a Logical Volume
Remove a logical volume.     How to Remove a Logical Volume

How to Add a Logical Volume

Use this procedure to add a logical volume to a cluster. This procedure assumes that your service provider created your logical volume. This procedure also assumes that all nodes are booted and are attached to the storage array.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. On all nodes, update the /devices and /dev entries.
    # devfsadm
  2. On each node connected to the storage array, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.
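    For example, if the nodes use Solaris I/O multipathing (MPxIO), you might list the multipathed logical units on each node and confirm that the new LUN is reported with the expected number of paths. Other multipathing software, such as Hitachi Dynamic Link Manager, provides its own equivalent commands.

    # mpathadm list lu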
  3. If you are running Veritas Volume Manager, update the list of devices on all nodes that are attached to the new logical volume.

    See your Veritas Volume Manager documentation for information about how to use the vxdctl enable command. Use this command to update new devices (volumes) in your Veritas Volume Manager list of devices.
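
    For example, run the following command as superuser on each node that is attached to the new volume:

    # vxdctl enable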


    Note - You might need to install the Veritas Array Support Library (ASL) package that corresponds to the array. For more information, see your Veritas Volume Manager documentation.


    If you are not running Veritas Volume Manager, proceed to Step 4.

  4. From any node in the cluster, update the global device namespace.
    # cldevice populate

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

See Also

To create a new resource or reconfigure a running resource to use the new logical volume, see your Oracle Solaris Cluster data services collection.

How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure assumes that all nodes are booted and are connected to the storage array that hosts the logical volume that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

If the cluster has more than two nodes, repeat Step 9 through Step 12 for each additional node that connects to the logical volume.



Caution - During this procedure, you lose access to the data that resides on the logical volume that you are removing.


This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. If necessary, back up all data. Migrate all resource groups and disk device groups to another node.
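    One way to migrate all resource groups and device groups off a node is to evacuate the node. The node name phys-schost-1 in this example is hypothetical.

    # clnode evacuate phys-schost-1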
  2. If the logical volume that you plan to remove is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    To determine whether this logical volume is configured as a quorum device, use the following command.

    # clquorum show

    For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation.
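
    For example, the following hypothetical sequence adds a new quorum device d5 and then removes the old quorum device d4. Substitute the DID instances that apply to your configuration.

    # clquorum show
    # clquorum add d5
    # clquorum remove d4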

  3. Run the appropriate Solaris Volume Manager commands or Veritas Volume Manager commands to remove the reference to the logical volume from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
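
    For example, the following hypothetical commands remove a drive from a Solaris Volume Manager diskset and remove a disk from a Veritas Volume Manager disk group. The diskset name dg-schost-1, disk group name dg1, DID instance d13, and disk media name disk01 are placeholders, and any volumes that use the device must be removed first.

    # metaset -s dg-schost-1 -d /dev/did/rdsk/d13
    # vxdg -g dg1 rmdisk disk01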

  4. If the cluster is running Veritas Volume Manager, update the list of devices on all nodes that are attached to the logical volume that you are removing.

    See your Veritas Volume Manager documentation for information about how to use the vxdisk rm command to remove devices (volumes) in your Veritas Volume Manager device list.
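
    For example, where c1t4d0 is a hypothetical device name for the volume that you are removing:

    # vxdisk rm c1t4d0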

  5. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you use it in Step 11 and Step 12 of this procedure to return resource groups and device groups to these nodes.

    Use the following command:

    # clresourcegroup status -n NodeA[ NodeB ...] 
    # cldevicegroup status -n NodeA[ NodeB ...]
    -n NodeA[ NodeB…]

    The node or nodes for which you are determining resource groups and device groups.

  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
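
    For example, run the following command as superuser on Node A. The -y and -g0 options answer the confirmation prompt and set a zero-second grace period.

    # shutdown -y -g0 -i6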

  8. On Node A, update the /devices and /dev entries.
    # devfsadm -C 
    # cldevice clear
  9. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  10. On Node B, update the /devices and /dev entries.
    # devfsadm -C 
    # cldevice clear
  11. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  12. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    -n nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are restored.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are restoring to the node or nodes.

  13. Repeat Step 9 through Step 12 for each additional node that connects to the logical volume.

See Also

To create a logical volume, see How to Add a Logical Volume.