Oracle Solaris Cluster 3.3 With Sun StorEdge 3510 or 3511 FC RAID Array Manual
1. Installing and Maintaining Sun StorEdge 3510 and 3511 Fibre Channel RAID Arrays
This section contains the procedures for configuring a storage array in a running cluster. Table 1-2 lists these procedures.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
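The check-and-repair step above can be scripted. The sketch below extracts each affected device path from the mismatch message and prints the corresponding cldevice repair command; the two console lines in the here-document are hypothetical sample output, and the commands are printed rather than executed so the fragment runs safely off-cluster (pipe the result to sh on a live node).

```shell
#!/bin/sh
# Sketch: turn each 'cldevice check' mismatch message into the matching
# repair command. The two console lines below are hypothetical samples;
# on a live node, feed the real console output instead.
sed -n 's|^device id for [^:]*:\(/dev/rdsk/[^ ]*\).*|cldevice repair \1|p' <<'EOF'
device id for phys-node1:/dev/rdsk/c1t3d0s2 does not match physical device's id for d4, device may have been replaced.
device id for phys-node1:/dev/rdsk/c1t5d0s2 does not match physical device's id for d6, device may have been replaced.
EOF
```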
Note - Logical volumes are not supported in an Oracle Solaris Cluster environment. Use logical drives as an alternative.
Table 1-2 Task Map: Configuring a Fibre-Channel Storage Array
How to Create and Map a LUN
Use this procedure to create a LUN from unassigned storage capacity.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
All nodes are booted in cluster mode and attached to the storage device.
The storage device is installed and configured. If you are using multipathing, it is configured as described in the installation procedure.
If you are using Solaris I/O multipathing (MPxIO) for the Oracle Solaris 10 OS, previously called Sun StorEdge Traffic Manager in the Solaris 9 OS, verify that the paths to the storage device are functioning. To configure multipathing for the Oracle Solaris 10 OS, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
To allow multiple clusters and nonclustered systems to access the storage device, create initiator groups by using LUN filtering or masking.
To determine if any devices are in an unconfigured state, use the following command:
# cfgadm -al | grep disk
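To show what that check looks for, the sketch below filters a listing for attachment points whose occupant state is still unconfigured. The two-line listing is hypothetical sample output; on a live node you would pipe the real output of cfgadm -al | grep disk instead.

```shell
#!/bin/sh
# Sketch: list the attachment points that are still unconfigured.
# The listing below is hypothetical sample output; replace it with the
# real output of:  cfgadm -al | grep disk
cfgadm_output='c1::50020f2300003b55,0  disk  connected  configured    unknown
c2::50020f2300003b56,0  disk  connected  unconfigured  unknown'

printf '%s\n' "$cfgadm_output" | awk '$4 == "unconfigured" {print $1}'
```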
To configure the STMS paths on each node, use the following command:
# cfgadm -o force_update -c configure controllerinstance
See the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide for information on Solaris I/O multipathing.
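Because the configure step is repeated once per controller instance, it can be wrapped in a loop. The instance names c1 and c2 below are hypothetical, and the commands are printed rather than executed so the sketch runs off-cluster; drop the echo on a real node.

```shell
#!/bin/sh
# Sketch: build the STMS path-configuration command for each controller
# instance on a node. The instance names c1 and c2 are hypothetical;
# the commands are printed, not executed.
for ctrl in c1 c2; do
  echo cfgadm -o force_update -c configure "$ctrl"
done
```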
# cldevice populate
Note - You might have a volume management daemon such as vold running on your node, and have a CD-ROM drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is in the drive. This error is expected behavior. You can safely ignore this error message.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
See Also
To configure a LUN as a quorum device, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.
To create a new resource or configure a running resource to use the new LUN, see Chapter 2, Administering Data Service Resources, in Oracle Solaris Cluster Data Services Planning and Administration Guide.
How to Remove and Unmap a LUN
Use this procedure to remove one or more LUNs. See the Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide for the latest information about LUN administration.
This procedure assumes that all nodes are booted in cluster mode and attached to the storage device.
Caution - When you delete a LUN, you remove all data on that LUN.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for the appropriate commands.
Use the following pairs of commands.
# luxadm probe
# cldevice show
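One way to pair the two listings is to take the logical path that luxadm probe reports for the LUN and look up the DID device that cldevice show maps to that path. Both listings below are hypothetical, abbreviated samples; on a cluster node you would capture the real command output instead.

```shell
#!/bin/sh
# Sketch: match a logical path from 'luxadm probe' output to its DID
# device in 'cldevice show' output. Both listings are hypothetical samples.
luxadm_out='Logical Path:/dev/rdsk/c1t3d0s2'
cldevice_out='DID Device Name: /dev/did/rdsk/d4
  Full Device Path: phys-node1:/dev/rdsk/c1t3d0'

path=$(printf '%s\n' "$luxadm_out" |
  sed -n 's|^Logical Path:\(/dev/rdsk/c[0-9]*t[0-9]*d[0-9]*\).*|\1|p')
printf '%s\n' "$cldevice_out" | grep -B1 "$path" |
  sed -n 's/^DID Device Name: *//p'
```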
To determine whether the LUN is configured as a quorum device, use the following command.
# clquorum show
For procedures about how to add and remove quorum devices, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.
Run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the LUN from any diskset or disk group. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation. See the following note for additional Veritas Volume Manager commands that are required.
Note - LUNs that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Oracle Solaris Cluster environment. After you delete the LUN from any disk group, use the following commands on both nodes to remove the LUN from Veritas Volume Manager control.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
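When several LUNs are being removed, the two-command sequence can be looped over each disk. The disk names below are hypothetical, and the commands are printed rather than executed so the sketch runs anywhere; drop the echo and run the loop on both nodes to actually take the LUNs out of Veritas Volume Manager control.

```shell
#!/bin/sh
# Sketch: take each deleted LUN out of Veritas Volume Manager control.
# The disk names are hypothetical; the commands are printed, not executed.
for disk in c1t3d0 c1t5d0; do
  echo vxdisk offline "$disk"
  echo vxdisk rm "$disk"
done
```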
For the procedure on unmapping a LUN, see the Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide.
For more information, see the Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide.
# devfsadm -C
# cldevice clear