Oracle Solaris Cluster 3.3 With Sun StorEdge 6320 System Manual

Installing Storage Systems

This section contains the procedures listed in Table 1-1.

Table 1-1 Task Map: Installing Storage Systems

Task: Install storage systems in a new cluster, before the OS and Oracle Solaris Cluster software are installed.
Information: How to Install Storage Systems in a New Cluster

Task: Add storage systems to an existing cluster.
Information: Adding Storage Systems to an Existing Cluster

You can install your storage system in several different configurations. Evaluate your needs and determine which configuration is best for your situation. For more information, see the Sun StorEdge 6320 System Installation Guide and Installing Storage Arrays in Oracle Solaris Cluster 3.3 With Sun StorEdge 6120 Array Manual.

How to Install Storage Systems in a New Cluster

Use this procedure to install a storage system before you install the Oracle Solaris operating environment and Oracle Solaris Cluster software on your nodes. To add a storage system to an existing cluster, use the procedure in Adding Storage Systems to an Existing Cluster.

  1. If necessary, install host adapters in the nodes to be connected to the storage system.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. (StorEdge 6320SL storage system ONLY) Install the Fibre Channel (FC) switch for the storage system if you do not have a switch installed.

    Note - In a StorEdge 6320SL storage system, the customer provides the switch.


    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  3. Unpack, place, and level the storage system.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  4. Install the system power cord and the system grounding strap.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  5. (StorEdge 6320SL storage system ONLY) Connect the storage arrays to the FC switches by using fiber-optic cables.

    Caution - Do not connect the switch's Ethernet port to the storage system's private LAN.


    For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.

  6. Power on the storage system and the nodes.

    For instructions about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide. For instructions about how to power on a node, see the documentation that shipped with your node hardware.

  7. Configure the service processor.

    For more information, see the Sun StorEdge 6320 System Installation Guide.

  8. Create a volume.

    For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  9. (Optional) Specify initiator groups for the volume.

    For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.

  10. If necessary, reconfigure the storage system's FC switches to ensure that all nodes can access each storage array.

    The following configurations might prevent some nodes from accessing each storage array in the cluster.

    • Zone configuration

    • Multiple clusters that use the same switch

    • Unconfigured ports or misconfigured ports

  11. On all nodes, install the Oracle Solaris operating system and apply the required Oracle Solaris patches for Oracle Solaris Cluster software and storage array support.

    For the procedure about how to install the Oracle Solaris operating environment, see How to Install Solaris Software in Oracle Solaris Cluster Software Installation Guide.

    If you are using Solaris I/O multipathing (MPxIO) for the Oracle Solaris 10 OS (previously called Sun StorEdge Traffic Manager in the Solaris 9 OS), verify that the paths to the storage device are functioning. To configure multipathing for the Oracle Solaris 10 OS, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
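
    For example, on a node that runs the Oracle Solaris 10 OS, you might confirm that the multipathed paths are available with commands such as the following; the exact output depends on your configuration:
    # stmsboot -L
    # mpathadm list lu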

  12. Update the Oracle Solaris device files and links.
    # devfsadm

    Note - You can wait for the devfsadm daemon to automatically update the Oracle Solaris device files and links, or you can run the devfsadm command to immediately update the Oracle Solaris device files and links.


  13. Confirm that all storage arrays that you installed are visible to all nodes.
    # luxadm probe 

See Also

To continue with Oracle Solaris Cluster software installation tasks, see your Oracle Solaris Cluster software installation documentation.

Adding Storage Systems to an Existing Cluster

Use this procedure to add a new storage system to a running cluster. To install systems to a new Oracle Solaris Cluster configuration that is not running, use the procedure in How to Install Storage Systems in a New Cluster.

This procedure defines Node N as the node to be connected to the storage system you are adding and the node with which you begin working.

How to Perform Initial Configuration Tasks on the Storage Array

  1. (StorEdge 6320SL storage system ONLY) Install the Fibre Channel (FC) switch for the storage system if you do not have a switch installed.

    Note - In a StorEdge 6320SL storage system, the customer provides the switch.


    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  2. Configure the service processor.

    For more information, see the Sun StorEdge 6320 System Installation Guide.

  3. Create a volume.

    For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  4. (Optional) Specify initiator groups for the volume.

    For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.

  5. Unpack, place, and level the storage system.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  6. Install the system power cord and the system grounding strap.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  7. (StorEdge 6320SL storage system ONLY) Connect the storage arrays to the FC switches by using fiber-optic cables.

    Caution - Do not connect the switch's Ethernet port to the storage system's private LAN.


    For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.

  8. Power on the new storage system.

    Note - The storage arrays in your system might require several minutes to boot.


    For the procedure about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide.

  9. If necessary, reconfigure the storage system's FC switches to ensure that all nodes can access each storage array.

    The following configurations might prevent some nodes from accessing each storage array in the cluster:

    • Zone configuration

    • Multiple clusters that use the same switch

    • Unconfigured ports or misconfigured ports

How to Connect the Node to the FC Switches

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.
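
For example, to check whether your current user or role already includes this authorization, you might list your authorizations; the grep pattern shown here is only an illustration:

    # auths | grep solaris.cluster.modify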

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.

    Use the following commands:

    # clresourcegroup status + 
    # cldevicegroup status +
  2. Move all resource groups and device groups off Node N.
    # clnode evacuate nodename
  3. If you do not need to install one or more host adapters in Node N, skip to Step 10.

    To install host adapters, proceed to Step 4.

  4. If the host adapter that you are installing is the first host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters.

    If this is not the first host adapter, skip to Step 6.

  5. If the required support packages are not already installed, install them.

    The support packages are located in the Product directory of the Oracle Solaris CD-ROM.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  7. Install one or more host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
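
    On a SPARC based system, for example, the boot command for noncluster mode might look like the following; use the form of the boot command that applies to your platform:
    ok boot -x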

  9. If necessary, upgrade the host adapter firmware on Node N.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.
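
    As a sketch only, a host adapter firmware patch that you have downloaded and unpacked on the node is typically applied with the patchadd command; the patch ID shown here is a placeholder, not an actual patch number:
    # patchadd /var/tmp/123456-78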

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware or the Sun StorEdge 6320 System Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6320 System Installation Guide.

  12. Install the required Oracle Solaris patches for storage array support on Node N.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  13. To create the new Oracle Solaris device files and links, perform a reconfiguration boot on Node N by adding -r to your boot instruction.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
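
    On a SPARC based system, for example, the reconfiguration boot might look like the following; use the form of the boot command that applies to your platform:
    ok boot -r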

  14. Update the Oracle Solaris device files and links.
    # devfsadm

    Note - You can wait for the devfsadm daemon to automatically update the Oracle Solaris device files and links, or you can run the devfsadm command to immediately update the Oracle Solaris device files and links.


  15. On Node N, update the paths to the DID instances.
    # cldevice populate
  16. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  17. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new storage array.
    # cldevice list -n NodeN -v
  18. Repeat Step 2 through Step 17 for each remaining node that you plan to connect to the storage array.
  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.
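
    For example, with a hypothetical node name and device group name, the command might look like the following:
    # cldevicegroup switch -n phys-schost-1 devgrp1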

  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    Use the following command:

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    -n nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.

  21. Perform volume management administration to incorporate the new volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
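
    As a minimal sketch, if you use Solaris Volume Manager you might add the new DID device to an existing shared disk set; the disk set name and DID device shown here are hypothetical:
    # metaset -s datadg -a /dev/did/rdsk/d10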