Oracle Solaris Cluster 3.3 With Sun StorEdge 3510 or 3511 FC RAID Array Manual


Installing Storage Arrays

This section contains the procedures listed in Table 1-1.

Table 1-1 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Oracle Solaris Cluster software are installed.
Information: How to Install a Storage Array

Task: Add a storage array to an existing cluster.
Information: Adding a Storage Array to a Running Cluster

Storage Array Cabling Configurations

You can install the StorEdge 3510 and 3511 FC RAID arrays in several different configurations. Use the Sun StorEdge 3000 Family Best Practices Manual to help evaluate your needs and determine which configuration is best for your situation. See your Oracle service provider for currently supported Oracle Solaris Cluster configurations.

The following figures provide examples of configurations with multipathing solutions. In direct attached storage (DAS) configurations with multipathing, you map each LUN to each host channel. All nodes can see all 256 LUNs.

Figure 1-1 Sun StorEdge 3510 DAS Configuration With Multipathing and Two Controllers


Figure 1-2 Sun StorEdge 3511 DAS Configuration With Multipathing and Two Controllers


The two-controller SAN configurations allow 32 LUNs to be mapped to each pair of host channels. Since these configurations use multipathing, each node sees a total of 64 LUNs.

Figure 1-3 Sun StorEdge 3510 SAN Configuration With Multipathing and Two Controllers


Figure 1-4 Sun StorEdge 3511 SAN Configuration With Multipathing and Two Controllers

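After the storage array is cabled and its LUNs are mapped, you can confirm from each node that the expected LUNs are visible. The following commands are one way to check; the exact output depends on your configuration. The luxadm probe command lists the Fibre Channel devices that the node can reach, and redirecting input from /dev/null makes format print the available disks without starting an interactive session.

    # luxadm probe
    # format < /dev/null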

How to Install a Storage Array

Use this procedure to install and configure storage arrays before installing the Oracle Solaris Operating System and Oracle Solaris Cluster software on your cluster nodes. If you need to add a storage array to an existing cluster, use the procedure in Adding a Storage Array to a Running Cluster.

Before You Begin

This procedure assumes that the hardware is not connected.


Note - If you plan to attach a StorEdge 3510 or 3511 FC expansion storage array to a StorEdge 3510 or 3511 FC RAID storage array, attach the expansion storage array before connecting the RAID storage array to the cluster nodes. See the Sun StorEdge 3000 Family Installation, Operation, and Service Manual for more information.


  1. Install host adapters in the nodes that connect to the storage array.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. If necessary, install the Fibre Channel (FC) switches.

    For the procedure on installing an FC switch, see the documentation that shipped with your switch hardware.


    Note - You must use FC switches when installing storage arrays in a SAN configuration.


  3. If necessary, install gigabit interface converters (GBICs) or Small Form-Factor Pluggables (SFPs) in the FC switches.

    For the procedures on installing a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  4. Cable the storage array.

    For the procedures on connecting your FC storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

    • If you plan to create a storage area network (SAN), connect the storage array to the FC switches using fiber-optic cables.

    • If you plan to have a DAS configuration, connect the storage array to the nodes.

  5. Power on the storage arrays.

    Verify that all components are powered on and functional.

    For the procedure on powering up the storage arrays and checking LEDs, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  6. Set up and configure the storage array.

    For procedures on setting up logical drives and LUNs, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual or the Sun StorEdge 3000 Family RAID Firmware 3.27 User's Guide.

    For the procedure on configuring the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  7. On all nodes, install the Solaris operating system and apply the required Solaris patches for Oracle Solaris Cluster software and storage array support.

    For the procedure on installing the Oracle Solaris operating system, see How to Install Solaris Software in the Oracle Solaris Cluster Software Installation Guide.

  8. Install any required storage array controller firmware.

    Oracle Solaris Cluster software requires patch version 113723-03 or later for each Sun StorEdge 3510 array in the cluster.

    See the Oracle Solaris Cluster release notes documentation for information about accessing Oracle's EarlyNotifier web pages. The EarlyNotifier web pages list information about any required patches or firmware levels that are available for download.
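
    For example, you can check whether the controller firmware patch is already installed on a node before applying it, and then add the patch with patchadd. The /var/tmp path shown here is only an example of where you might have unpacked the patch; see the patch README for how to load the firmware onto the controllers.

    # showrev -p | grep 113723
    # patchadd /var/tmp/113723-03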

  9. Install any required patches or software for Solaris I/O multipathing support on the nodes, and enable multipathing.

    When using these arrays, Oracle Solaris Cluster software requires Sun StorEdge SAN Foundation software:

    • SPARC: For the Sun StorEdge 3510 storage array, at least Sun StorEdge SAN Foundation software version 4.2.
    • SPARC: For the Sun StorEdge 3511 storage array, at least Sun StorEdge SAN Foundation software version 4.4.
    • x86: For x86 based clusters, at least the Sun StorEdge SAN Foundation software that is bundled with Oracle Solaris 10.
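
    For example, on Oracle Solaris 10 one common way to enable Solaris I/O multipathing (MPxIO) is with the stmsboot command, which prompts you to reboot the node. On releases that control multipathing through the /kernel/drv/scsi_vhci.conf file, you instead set mpxio-disable to "no". Follow the procedure that applies to your Solaris release and SAN Foundation software version.

    # stmsboot -e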
  10. On all nodes, update the /devices and /dev entries.
    # devfsadm -C 
  11. On all nodes, confirm that the storage arrays that you installed are visible.
    # luxadm probe 
  12. If necessary, label the LUNs.
    # format
  13. Install the Oracle Solaris Cluster software and volume management software.

    For software installation procedures, see the Oracle Solaris Cluster software installation documentation.

See Also

To continue with Oracle Solaris Cluster software installation tasks, see the Oracle Solaris Cluster software installation documentation.

Adding a Storage Array to a Running Cluster

Use this procedure to add a new storage array to a running cluster. To install a storage array in a new cluster that is not yet running, use the procedure in How to Install a Storage Array.

If you need to add a storage array to more than two nodes, repeat the steps for each additional node that connects to the storage array.


Note - This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality.

If your nodes are configured for dynamic reconfiguration, see the Oracle Solaris Cluster system administration documentation and skip steps that instruct you to shut down the node.


How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage array.
  2. Set up and configure the storage array.

    For the procedures on configuring the storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  3. If necessary, upgrade the storage array's controller firmware.

    Oracle Solaris Cluster software requires patch version 113723-03 or later for each Sun StorEdge 3510 array in the cluster.

    See the Oracle Solaris Cluster release notes documentation for information about accessing Oracle's EarlyNotifier web pages. The EarlyNotifier web pages list information about any required patches or firmware levels that are available for download. For the procedure on applying a firmware patch, see the firmware patch README file.

  4. Configure the new storage array. Map the LUNs to the host channels.

    For the procedures on setting up logical drives and LUNs, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual or the Sun StorEdge 3000 Family RAID Firmware 3.27 User's Guide.

  5. To continue adding the storage array, proceed to How to Connect the Storage Array to FC Switches.

How to Connect the Storage Array to FC Switches

Use this procedure if you plan to add a storage array to a SAN environment. If you do not plan to add the storage array to a SAN environment, go to How to Connect the Node to the FC Switches or the Storage Array.

  1. Install the SFPs in the storage array that you plan to add.

    For the procedure on installing an SFP, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure on installing a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  3. Install a fiber-optic cable between the new storage array and each FC switch.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  4. To finish adding your storage array, see How to Connect the Node to the FC Switches or the Storage Array.

How to Connect the Node to the FC Switches or the Storage Array

Use this procedure when you add a storage array to a SAN or DAS configuration. In SAN configurations, you connect the node to the FC switches. In DAS configurations, you connect the node directly to the storage array.

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 12 and Step 13 of this procedure to return resource groups and device groups to these nodes.

    # clresourcegroup status + 
    # cldevicegroup status + 
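
    For example, you can capture the output in temporary files so that the information is available when you restore the groups in Step 12 and Step 13. The file names shown here are arbitrary.

    # clresourcegroup status + > /var/tmp/resource-groups.before
    # cldevicegroup status + > /var/tmp/device-groups.before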
  2. Move all resource groups and device groups off the node that you plan to connect.
    # clnode evacuate nodename 
  3. If you need to install host adapters in the node, see the documentation that shipped with your host adapters and install the adapters.
  4. If necessary, install GBICs or SFPs to the FC switches or the storage array.

    For the procedure on installing a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

    For the procedure on installing a GBIC or an SFP to a storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  5. Connect fiber-optic cables between the node and the FC switches or the storage array.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  6. If necessary, install the required Solaris patches for storage array support on the node.

    See the Oracle Solaris Cluster release notes documentation for information about accessing Oracle's EarlyNotifier web pages. The EarlyNotifier web pages list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter's firmware patch, see the firmware patch README file.

  7. On the node, update the /devices and /dev entries.
    # devfsadm -C 
  8. On the node, update the paths to the device ID instances.
    # cldevice populate
  9. If necessary, label the LUNs on the new storage array.
    # format
  10. (Optional) On the node, verify that the device IDs are assigned to the new LUNs.
    # cldevice list -v 
  11. Repeat Step 2 to Step 10 for each remaining node that you plan to connect to the storage array.
  12. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.
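
    For example, to return a device group named dg-schost-1 to the node phys-schost-1 (both names are placeholders for your own device group and node names), you would type:

    # cldevicegroup switch -n phys-schost-1 dg-schost-1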

  13. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
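
    For example, to return a resource group named rg-schost-1 to the node phys-schost-1 (both names are placeholders for your own resource group and node names), you would type:

    # clresourcegroup switch -n phys-schost-1 rg-schost-1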

  14. Perform volume management administration to incorporate the new logical drives into the cluster.

    For more information, see your Veritas Volume Manager documentation.
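
    For example, if you use Veritas Volume Manager, the following commands sketch one way to make the new logical drives visible to VxVM and add one of them to an existing disk group. The device name c2t0d0 and the disk group name dg-schost-1 are placeholders; follow the procedure in your Veritas Volume Manager documentation for your configuration.

    # vxdctl enable
    # vxdisk list
    # /etc/vx/bin/vxdisksetup -i c2t0d0
    # vxdg -g dg-schost-1 adddisk dg-schost-1-02=c2t0d0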