Oracle Solaris Cluster 3.3 With StorageTek 2540 RAID Arrays Manual

Configuring Storage Arrays

This section describes the procedures for configuring a storage array in a running cluster. Table 2-2 lists these procedures.

Table 2-2 Task Map: Configuring a Storage Array

Task                          Information
Create a logical volume.     How to Create a Logical Volume
Remove a logical volume.     How to Remove a Logical Volume

Some administrative tasks require no cluster-specific procedures. For those tasks, see the storage array's online help.

How to Create a Logical Volume

Use this procedure to create a logical volume from unassigned storage capacity.


Note - Oracle's Sun storage documentation uses several different terms for the same kind of logical construct. This manual uses logical volume to refer to all such constructs.


Before You Begin

This procedure assumes that your cluster meets the requirements and restrictions described in Chapter 1, Requirements and Restrictions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.
  2. Follow the instructions in your storage device's documentation to create and map the logical volume. For a URL to this storage documentation, see Related Documentation.
    • Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.

    • If necessary, partition the volume.

    • To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.

  3. If you are not using multipathing, skip to Step 5.
  4. If you are using multipathing, and if any devices that are associated with the volume you created are at an unconfigured state, configure the multipathing paths on each node that is connected to the storage device.

    To determine whether any devices that are associated with the volume you created are at an unconfigured state, use the following command.

    # cfgadm -al | grep disk
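    In the output, a device that is not yet configured shows unconfigured in the Occupant column. The lines below are an illustrative sketch only (attachment-point names vary by system); the controller instance, c2 in this sketch, is the value that you pass to the cfgadm command in the note that follows.

    c0::dsk/c0t0d0                 disk         connected    configured   unknown
    c2::500a0b80002fb1d2           disk         connected    unconfigured unknown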

    Note - To configure the Oracle Solaris I/O multipathing paths on each node that is connected to the storage device, use the following command.

    # cfgadm -o force_update -c configure controllerinstance

    To configure multipathing, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.

  5. On one node that is connected to the storage device, use the format command to label the new logical volume.
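    The format utility is interactive. The following abbreviated session is an illustrative sketch, not literal output; the disk number and menu entries are assumptions for the example. Select the new volume from the disk menu, then label it.

    # format
    Searching for disks...done
    ...
    Specify disk (enter its number): 2
    ...
    format> label
    Ready to label disk, continue? y
    format> quit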
  6. From any node in the cluster, update the global device namespace.
    # cldevice populate

    Note - You might have a volume management daemon such as vold running on your node, and have a DVD drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is inserted in the drive. This error is expected behavior. You can safely ignore this error message.
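    As an optional check (not part of the original procedure), you can confirm that a device ID (DID) was assigned to the new volume by listing the DID devices from any cluster node:

    # cldevice list -v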


  7. To manage this volume with volume management software, use Solaris Volume Manager or Veritas Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
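    For example, with Veritas Volume Manager you would typically rescan the device list on each attached node:

    # vxdctl enable

    With Solaris Volume Manager, you might instead add the new DID device to a diskset. The diskset name and DID device number here are illustrative assumptions:

    # metaset -s oradata -a /dev/did/rdsk/d12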

See Also

How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.


Note - Sun storage documentation uses several different terms for the same kind of logical construct. This manual uses logical volume to refer to all such constructs.


Before You Begin

This procedure assumes that your cluster meets the requirements and restrictions described in Chapter 1, Requirements and Restrictions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. Identify the logical volume that you are removing.

    Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for more information.

  3. (Optional) Migrate all data off the logical volume that you are removing. Alternatively, back up that data.
  4. If the LUN that you are removing is configured as a quorum device, choose and configure another device as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use the following command.

    # clquorum show

    For procedures about how to add and remove quorum devices, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.
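    For example, assuming that DID device d5 is the current quorum device and that d10 is a suitable replacement (both device names are illustrative), you could add the new quorum device and then remove the old one:

    # clquorum add d10
    # clquorum remove d5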

  5. If you are using volume management software, use that software to update the list of devices on all nodes that are attached to the logical volume that you are removing.

    For instructions about how to update the list of devices, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. If you are using volume management software, run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the logical volume from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
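    For example, the following commands remove a volume from a Solaris Volume Manager diskset or remove a disk from a Veritas Volume Manager disk group. The diskset, disk group, and device names are illustrative assumptions:

    # metaset -s oradata -d /dev/did/rdsk/d12
    # vxdg -g datadg rmdisk disk01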


    Note - Volumes that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Oracle Solaris Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from Veritas Volume Manager control.

    # vxdisk offline Accessname
    # vxdisk rm Accessname
    Accessname

    Disk access name


  7. If you are using multipathing, unconfigure the volume in Solaris I/O multipathing.
    # cfgadm -o force_update -c unconfigure Logical_Volume
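    For example, if cfgadm -al reports the volume's attachment point as c2::500a0b80002fb1d2 (an illustrative value; substitute the Ap_Id shown on your system):

    # cfgadm -o force_update -c unconfigure c2::500a0b80002fb1d2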
  8. Access the storage device and remove the logical volume.

    To remove the volume, see your storage documentation. For a list of storage documentation, see Related Documentation.

  9. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 14 and Step 15 of this procedure to return resource groups and device groups to these nodes.

    Use the following command:

    # clresourcegroup status + 
    # cldevicegroup status +
  10. Move all resource groups and device groups off Node A.
    # clnode evacuate nodename
  11. Shut down and reboot Node A.

    To shut down and boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  12. On Node A, remove the paths to the logical volume that you removed. Remove obsolete device IDs.
    # devfsadm -C
    # cldevice clear
  13. For each additional node that is connected to the shared storage that hosted the logical volume, repeat Step 9 to Step 12.
  14. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.
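    For example, to return the device group dg-schost-1 to the node phys-schost-1 (both names are illustrative, following the naming conventions used in Oracle Solaris Cluster documentation):

    # cldevicegroup switch -n phys-schost-1 dg-schost-1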

  15. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    -n nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.