Oracle Solaris Cluster 3.3 With Sun StorEdge 3510 or 3511 FC RAID Array Manual
Configuring Storage Arrays in a Running Cluster

This section contains the procedures for configuring a storage array in a running cluster. Table 1-2 lists these procedures.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
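As a sketch, the repair step can be scripted once the affected DID instances are known. The device names below (d4, d7) are hypothetical examples, and echo makes this a dry run so nothing executes outside a cluster node; remove echo to perform the repairs for real.

```shell
# Hypothetical DID instances reported by 'cldevice check' (examples only):
affected_devices='d4 d7'

# Dry run: echo prints each repair command instead of executing it.
# On a cluster node, remove 'echo' to run 'cldevice repair' for real.
for dev in $affected_devices; do
  echo cldevice repair "$dev"
done
```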



Note - Logical volumes are not supported in an Oracle Solaris Cluster environment. Use logical drives as an alternative.


Table 1-2 Task Map: Configuring a Fibre-Channel Storage Array

    Task              Information
    Create a LUN      How to Create and Map a LUN
    Remove a LUN      How to Unmap and Remove a LUN

How to Create and Map a LUN

Use this procedure to create a LUN from unassigned storage capacity.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. Follow the instructions in your storage device's documentation to create and map the LUN.

    To allow multiple clusters and nonclustered systems to access the storage device, create initiator groups by using LUN filtering or masking.

  2. If you are using multipathing, and if any devices that are associated with the LUN you created are in an unconfigured state, configure the STMS paths on each node that is connected to the storage device.

    To determine whether any devices are in an unconfigured state, use the following command:

    # cfgadm -al | grep disk

    To configure the STMS paths on each node, use the following command:

    # cfgadm -o force_update -c configure controllerinstance

    See the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide for information on Solaris I/O multipathing.
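The check in this step can be narrowed so that only unconfigured disk attachment points are listed. The cfgadm output below is a fabricated sample for illustration; real output depends on your controllers and device WWNs.

```shell
# Fabricated sample of 'cfgadm -al' output (real output varies by host):
cfgadm_sample='c2                             fc-fabric    connected    configured   unknown
c2::50020f2300004921,0         disk         connected    unconfigured unknown
c3::50020f2300004922,1         disk         connected    configured   unknown'

# Print only disk attachment points that are still unconfigured.
# Against a live system, pipe 'cfgadm -al' in instead of the sample text.
printf '%s\n' "$cfgadm_sample" |
  awk '$2 == "disk" && $4 == "unconfigured" {print $1}'
```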

  3. On one node that is connected to the storage device, use the format command to label the new LUN.
  4. From any node in the cluster, update the global device namespace.
    # cldevice populate

    Note - You might have a volume management daemon such as vold running on your node, and have a CD-ROM drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is in the drive. This error is expected behavior. You can safely ignore this error message.


  5. If you will manage this LUN with volume management software, use the appropriate Solaris Volume Manager or Veritas Volume Manager commands to update the list of devices on all nodes that are attached to the new LUN.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Unmap and Remove a LUN

Use this procedure to remove one or more LUNs. See the Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide for the latest information about LUN administration.

This procedure assumes that all nodes are booted in cluster mode and attached to the storage device.


Caution - When you delete a LUN, you remove all the data on that LUN.


Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Identify the LUN or LUNs that you will remove.

    Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for the appropriate commands.

    Use the following pair of commands.

    # luxadm probe
    # cldevice show 
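To pull just the DID device names out of the cldevice show listing, output like the excerpt below can be filtered with awk. The excerpt is fabricated for illustration; only the "DID Device Name:" label is assumed to match your cluster's output.

```shell
# Fabricated excerpt of 'cldevice show' output (illustration only):
cld_sample='=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                             node1:/dev/rdsk/c2t0d0s2'

# Extract only the DID device names; on a live node you would
# pipe 'cldevice show' in instead of the sample text.
printf '%s\n' "$cld_sample" |
  awk -F':[ \t]+' '/^DID Device Name:/ {print $2}'
```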
  2. If the LUN that you will remove is configured as a quorum device, choose and configure another device as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use the following command.

    # clquorum show 

    For procedures about how to add and remove quorum devices, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.

  3. Remove the LUN from disksets or disk groups.

    Run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the LUN from any diskset or disk group. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation. See the following note for additional Veritas Volume Manager commands that are required.


    Note - LUNs that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Oracle Solaris Cluster environment. After you delete the LUN from any disk group, use the following commands on both nodes to remove the LUN from Veritas Volume Manager control.

    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
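When several LUNs are being removed, the two vxdisk commands can be looped over the affected device names on each node. The device names below are hypothetical placeholders for the cNtXdY values, and echo makes this a dry run; remove echo to execute, and run the loop on both nodes.

```shell
# Hypothetical device names for the LUNs being removed (examples only):
luns='c1t2d0 c1t3d0'

# Dry run: print the Veritas Volume Manager removal commands per LUN.
# Remove 'echo' on a real node; repeat the loop on both nodes.
for lun in $luns; do
  echo vxdisk offline "$lun"
  echo vxdisk rm "$lun"
done
```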

  4. Unmap the LUN from both host channels.

    For the procedure on unmapping a LUN, see the Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide.

  5. (Optional) Delete the logical drive.

    For more information, see Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide.

  6. On both nodes, remove the paths to the LUN that you are deleting.
    # devfsadm -C
  7. On both nodes, remove all obsolete device IDs.
    # cldevice clear