Oracle Solaris Cluster 3.3 With StorageTek 2540 RAID Arrays Manual
How to Add Storage Arrays to an Existing Cluster
This section contains the procedures for configuring a storage array in a running cluster. Table 2-2 lists these procedures.
Table 2-2 Task Map: Configuring a Storage Array
The following list outlines administrative tasks that require no cluster-specific procedures. See the storage array's online help for these tasks.
How to Create a Logical Volume

Use this procedure to create a logical volume from unassigned storage capacity.
Note - Oracle's Sun storage documentation uses the following terms:
Logical volume
Logical device
Logical unit number (LUN)
This manual uses logical volume to refer to all such logical constructs.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
All nodes are booted in cluster mode and attached to the storage device.
The storage device is installed and configured. If you are using multipathing, the storage device is configured as described in the installation procedure.
If you are using Solaris I/O multipathing (MPxIO) for the Oracle Solaris 10 OS (previously called Sun StorEdge Traffic Manager in the Solaris 9 OS), verify that the paths to the storage device are functioning. To configure multipathing, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.
If necessary, partition the volume.
To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.
To determine whether any devices that are associated with the volume you created are in an unconfigured state, use the following command.
# cfgadm -al | grep disk
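The cfgadm listing can be filtered for attachment points whose occupant state is still unconfigured. The following sketch runs that filter against a hypothetical captured listing; the controller IDs and WWNs are illustrative only, and on a live node you would pipe the cfgadm output directly instead of using a variable.

```shell
# Hypothetical listing captured from: cfgadm -al | grep disk
# (controller IDs and WWNs are illustrative only; on a live node,
# pipe the cfgadm output directly instead of using a variable)
cfgadm_out='c4::50020f23000043a2,0  disk  connected  configured    unknown
c5::50020f23000045f1,0  disk  connected  unconfigured  unknown'

# Print attachment points whose occupant state (column 4) is still
# unconfigured; these are the devices that need configuring.
unconfigured=$(printf '%s\n' "$cfgadm_out" | awk '$4 == "unconfigured" {print $1}')
printf '%s\n' "$unconfigured"
```

Any attachment points that this filter prints are the ones to configure in the next step.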
Note - To configure the Oracle Solaris I/O multipathing paths on each node that is connected to the storage device, use the following command.
# cfgadm -o force_update -c configure controllerinstance
To configure multipathing, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
To update the global device namespace, use the following command.
# cldevice populate
Note - You might have a volume management daemon such as vold running on your node, and have a DVD drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is inserted in the drive. This error is expected behavior. You can safely ignore this error message.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
See Also
To configure a logical volume as a quorum device, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.
To create a new resource or configure a running resource to use the new logical volume, see Chapter 2, Administering Data Service Resources, in Oracle Solaris Cluster Data Services Planning and Administration Guide.
How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.
Note - Oracle's Sun storage documentation uses the following terms:
Logical volume
Logical device
Logical unit number (LUN)
This manual uses logical volume to refer to all such logical constructs.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
All nodes are booted in cluster mode and attached to the storage device.
The logical volume and the path between the nodes and the storage device are both operational.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for more information.
To determine whether the LUN is configured as a quorum device, use the following command.
# clquorum show
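As a sketch, this check can be scripted by matching the LUN's DID device name against the quorum device names that clquorum show reports. The output excerpt and the device name d4 below are hypothetical; on a live cluster you would capture the real clquorum show output.

```shell
# Hypothetical excerpt of clquorum show output; on a live cluster,
# capture it with: quorum_out=$(clquorum show)
quorum_out='=== Quorum Devices ===
Quorum Device Name:        d4
  Enabled:                 yes
  Votes:                   1
  Type:                    shared_disk'

# DID device that backs the LUN you plan to remove (illustrative name).
lun_did=d4

# Extract the reported quorum device names and compare.
qnames=$(printf '%s\n' "$quorum_out" | awk -F':[ \t]+' '/^Quorum Device Name/ {print $2}')
if printf '%s\n' "$qnames" | grep -qx "$lun_did"; then
    verdict='quorum device: remove or replace it first'
else
    verdict='not a quorum device'
fi
echo "$verdict"
```

If the LUN is a quorum device, remove it from the quorum configuration (or replace it with another device) before you remove the volume.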
For procedures about how to add and remove quorum devices, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.
For instructions about how to update the list of devices, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Note - Volumes that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Oracle Solaris Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from Veritas Volume Manager control.
# vxdisk offline Accessname
# vxdisk rm Accessname
Accessname
Disk access name
# cfgadm -o force_update -c unconfigure Logical_Volume
To remove the volume, see your storage documentation. For a list of storage documentation, see Related Documentation.
Record this information because you use it in Step 14 and Step 15 of this procedure to return resource groups and device groups to these nodes.
Use the following commands:
# clresourcegroup status +
# cldevicegroup status +
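One way to record this information is to capture the status listing and pull out each device group's current primary node. The data rows below are hypothetical (the column order follows the cldevicegroup status listing: group name, primary, secondary, status); on a live cluster you would capture the real command output instead.

```shell
# Hypothetical data rows from the cldevicegroup status listing
# (Device Group Name, Primary, Secondary, Status); on a live cluster,
# capture the real output of: cldevicegroup status +
dg_rows='dg-schost-1  phys-schost-2  phys-schost-1  Online
dg-schost-2  phys-schost-2  phys-schost-1  Online'

# Record each device group with its current primary node so that the
# groups can be switched back in the later restore steps.
primaries=$(printf '%s\n' "$dg_rows" | awk '{print $1, $2}')
printf '%s\n' "$primaries"
```

The same pattern applies to the resource group listing; save the recorded pairs somewhere durable before you evacuate the node.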
To move all resource groups and device groups off the node, use the following command.
# clnode evacuate nodename
To shut down and boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# devfsadm -C
# cldevice clear
Do the following for each device group that you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 …]
The device group or groups that you are restoring to the node.
Do the following for each resource group that you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
The resource group or groups that you are returning to the node or nodes.