Oracle Solaris Cluster 3.3 With Sun StorEdge 6120 Array Manual

Document Information

Preface

1.  Installing and Maintaining a Sun StorEdge 6120 Array

Installing Storage Arrays

Storage Array Cabling Configurations

How to Install a Single-Controller Configuration in a New Cluster

How to Install a Dual-Controller Configuration in a New Cluster

How to Add a Single-Controller Configuration to an Existing Cluster

How to Perform Initial Configuration Tasks on the Storage Array

How to Connect the Storage Array to FC Switches

How to Connect the Node to the FC Switches or the Storage Array

How to Add a Dual-Controller Configuration to an Existing Cluster

How to Perform Initial Configuration Tasks on the Storage Array

How to Connect the Storage Array to FC Switches

How to Connect the Node to the FC Switches or the Storage Array

Configuring Storage Arrays

How to Create a Logical Volume

How to Remove a Logical Volume

Maintaining Storage Arrays

StorEdge 6120 Array FRUs

How to Upgrade Storage Array Firmware

How to Remove a Single-Controller Configuration

How to Remove a Dual-Controller Configuration

Replacing a Node-to-Switch Component

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

How to Replace a Host Adapter

Index

Installing Storage Arrays

This section contains the procedures for installing single-controller and dual-controller storage array configurations in new and existing Oracle Solaris Cluster configurations. The following table lists these procedures.

Table 1-1 Task Map: Installing a Storage Array

Task: Installing arrays in a new cluster, using a single-controller configuration
Information: How to Install a Single-Controller Configuration in a New Cluster

Task: Installing arrays in a new cluster, using a dual-controller configuration
Information: How to Install a Dual-Controller Configuration in a New Cluster

Task: Adding arrays to an existing cluster, using a single-controller configuration
Information: How to Add a Single-Controller Configuration to an Existing Cluster

Task: Adding arrays to an existing cluster, using a dual-controller configuration
Information: How to Add a Dual-Controller Configuration to an Existing Cluster

Storage Array Cabling Configurations

You can install your storage array in several different configurations. Use the Sun StorEdge 6120 Array Installation Guide to evaluate your needs and determine which configuration is best for your situation.

The following figures illustrate example configurations.

Figure 1-1 shows two storage arrays, each of which has a controller. The storage arrays connect to a two-node cluster through two switches. Single-controller configurations require software RAID-1 (host-based mirroring).

Figure 1-1 Installing a 1x1 Configuration With Software RAID-1

Figure 1-2 shows four storage arrays, two of which have controllers. The first storage array, which does not have a controller, connects to the second storage array, which has a controller. The third storage array, which does not have a controller, connects to the fourth storage array, which has a controller. The two storage arrays with controllers connect to a two-node cluster through two switches. Single-controller configurations require software RAID-1 (host-based mirroring).

Figure 1-2 Installing a 1x2 Configuration With Software RAID-1

Figure 1-3 shows two storage arrays, each of which has a controller. The two storage arrays are daisy-chained and connect to a two-node cluster through two switches.

Figure 1-3 Installing a 2x2 Configuration

Figure 1-4 shows four storage arrays, two of which have controllers. All storage arrays are daisy-chained in the following order: alternate master, master, alternate master, and master. The two storage arrays with controllers connect to a two-node cluster through two switches.

Figure 1-4 Installing a 2x4 Configuration

How to Install a Single-Controller Configuration in a New Cluster

Use this procedure to install a storage array in a single-controller configuration before you install the Oracle Solaris operating environment and Oracle Solaris Cluster software on your nodes. To install a storage array in other situations, use one of the other procedures in this chapter.

  1. Install the host adapters in the nodes that are to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure about how to install FC switches, see the documentation that shipped with your FC switch hardware.

  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage array is to reside.

    Use the RARP server to set up the following network settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up a RARP server, see the Sun StorEdge 6120 Array Installation Guide.
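    A minimal sketch of the RARP server setup on an existing Solaris host, assuming the standard /etc/ethers and /etc/hosts mechanism; the MAC address, hostname, and IP address shown are hypothetical, and the installation guide remains the authoritative reference:

```
# /etc/ethers -- map the array controller's MAC address to a hostname
8:0:20:11:22:33    array1

# /etc/hosts -- map that hostname to the IP address the array is to use
192.168.1.10       array1

# Start the RARP daemon so that the array can obtain its address at boot:
# /usr/sbin/in.rarpd -a
```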

  4. Cable the storage arrays.

    For the procedures on how to connect your storage array, see the Sun StorEdge 6120 Array Installation Guide.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.
    2. Connect the Ethernet cables from each storage array to the Local Area Network (LAN).
    3. If necessary, install the interconnect cables between storage arrays.
    4. Connect the power cords to each storage array.
  5. Power on the storage array.

    Verify that all components are powered on and functional.


    Note - The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  6. Install any required controller firmware for the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  7. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  8. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  9. Ensure that all storage array controllers are ONLINE.

    For more information about how to bring controllers online, see the Sun StorEdge 6020 and 6120 Array System Manual.

  10. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
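    Steps 7 through 10 can be performed from an administrative session on the storage array. The following transcript is a hedged sketch that assumes the 6x20 command-line interface's sys, fru, and reset commands; verify the exact syntax against the Sun StorEdge 6020 and 6120 Array System Manual:

```
6120:/:<1> sys list             # review the current cache, mirror, and mp_support settings
6120:/:<2> sys cache auto       # set the cache setting to auto
6120:/:<3> sys mirror auto      # set the mirror setting to auto
6120:/:<4> sys mp_support none  # single-controller: disable multipathing support
6120:/:<5> fru stat             # confirm that the controllers are online
6120:/:<6> reset -y             # reset the array to apply the changed settings
```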

  11. On all nodes, install the Oracle Solaris operating environment. Apply any required Oracle Solaris patches for Oracle Solaris Cluster software and storage array support.

    For the procedure about how to install the Oracle Solaris operating environment, see your Oracle Solaris Cluster software installation documentation.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  12. On each node, ensure that the mpxio-disable parameter is set to yes in the /kernel/drv/scsi_vhci.conf file.
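    For reference, the relevant entry in the /kernel/drv/scsi_vhci.conf file is a single line; setting it to yes disables Solaris I/O multipathing (MPxIO), as the single-controller configuration requires:

```
# /kernel/drv/scsi_vhci.conf (excerpt)
mpxio-disable="yes";
```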

See Also

To continue with Oracle Solaris Cluster software installation tasks, see your Oracle Solaris Cluster software installation documentation.

How to Install a Dual-Controller Configuration in a New Cluster

Use this procedure to install a storage array in a dual-controller configuration before you install the Oracle Solaris operating environment and Oracle Solaris Cluster software on your nodes. To install a storage array in other situations, use one of the other procedures in this chapter.

  1. Install the host adapters in the nodes to be connected to the storage arrays.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure about how to install FC switches, see the documentation that shipped with your FC switch hardware.

  3. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  4. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage array is to reside.

    Use the RARP server to set up the following network settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up a RARP server, see the Sun StorEdge 6120 Array Installation Guide.

  5. Cable the storage arrays.

    For the procedures on how to connect your storage array, see the Sun StorEdge 6120 Array Installation Guide.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.
    2. Connect the Ethernet cables from each storage array to the LAN.
    3. If necessary, install the interconnect cables between storage arrays.
    4. Connect the power cords to each storage array.

    For the procedure about how to install fiber-optic, Ethernet, and interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  6. Power on the storage arrays.

    Verify that all components are powered on and functional.

    For the procedure about how to power on the storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  7. Install any required controller firmware for the storage arrays.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  8. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  9. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  10. On each node, ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  11. Ensure that all storage array controllers are ONLINE.

    For more information about how to bring controllers online, see the Sun StorEdge 6020 and 6120 Array System Manual.

  12. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  13. On all nodes, install the Oracle Solaris operating system and apply the required Oracle Solaris patches for Oracle Solaris Cluster software and storage array support.

    For the procedure about how to install the Oracle Solaris operating environment, see How to Install Solaris Software in Oracle Solaris Cluster Software Installation Guide.

  14. Confirm that all storage arrays that you installed are visible to all nodes.
    # luxadm probe 

See Also

To continue with Oracle Solaris Cluster software installation tasks, see your Oracle Solaris Cluster software installation documentation.

How to Add a Single-Controller Configuration to an Existing Cluster

Use this procedure to add a single-controller configuration to a running cluster. To install a storage array in other situations, use one of the other procedures in this chapter.

This procedure defines Node N as the node with which you begin working.

How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage array.

    Note - The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. Network settings include the following.
    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
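    The network settings can be entered from a console session on the storage array. The following sketch assumes the 6x20 command-line interface's set command; the address and hostname values are hypothetical, so substitute your own and verify the syntax against the system manual:

```
6120:/:<1> set ip 192.168.1.10        # hypothetical IP address
6120:/:<2> set gateway 192.168.1.1    # if necessary
6120:/:<3> set netmask 255.255.255.0  # if necessary
6120:/:<4> set hostname array1        # hypothetical hostname
```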

  3. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  5. Install any required controller firmware for the storage arrays you are adding.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  6. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. Confirm that all storage arrays that you upgraded are visible to all nodes.
    # luxadm probe 

How to Connect the Storage Array to FC Switches

  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage array and the Local Area Network (LAN).
  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between the FC switch and the storage array.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

How to Connect the Node to the FC Switches or the Storage Array

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.

    Use the following command:

    # clresourcegroup status +
    # cldevicegroup status +
  2. Move all resource groups and device groups off Node N.
    # clnode evacuate nodename
  3. If you need to install a host adapter in Node N, proceed to Step 4.

    If you do not need to install host adapters, skip to Step 10.

  4. If the host adapter that you are installing is the first FC host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters. If the host adapter that you are installing is not the first FC host adapter on Node N, skip to Step 6.

  5. If the Fibre Channel support packages are not installed, install them.

    The storage array packages are located in the Product directory of the Oracle Solaris CD-ROM. Add any necessary packages.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.

  7. Install the host adapter in Node N.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter and node.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
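    For example, from the OpenBoot PROM prompt on a SPARC node:

```
ok boot -x
```

    From a running node, the equivalent is reboot -- -x.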

  9. If necessary, upgrade the host adapter firmware on Node N.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. If necessary, install the required Oracle Solaris patches for storage array support on Node N.

    For a list of required Oracle Solaris patches for storage array support, see the Sun StorEdge 6120 Array Release Notes.

  13. On the node, update the /devices and /dev entries.
    # devfsadm -C 
  14. Boot the node into cluster mode.

    For the procedure about how to boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  15. On the node, update the paths to the DID instances.
    # cldevice populate 
  16. If necessary, label the new logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  17. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.
    # cldevice clear
    # cldevice list -v 
  18. Repeat Step 2 through Step 17 for each remaining node that you plan to connect to the storage array.
  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
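    For example, with a hypothetical node phys-schost-1, device group dg-schost-1, and resource group rg-schost-1, Step 19 and Step 20 would be:

```
# cldevicegroup switch -n phys-schost-1 dg-schost-1
# clresourcegroup switch -n phys-schost-1 rg-schost-1
```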

  21. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Add a Dual-Controller Configuration to an Existing Cluster

Use this procedure to add a dual-controller configuration to a running cluster. To install a storage array in other situations, use one of the other procedures in this chapter.

This procedure defines Node N as the node with which you begin working.

How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage arrays.

    Note - The storage arrays might require several minutes to boot.


    For the procedure about how to power on storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. Network settings include the following.
    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card.

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  3. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  5. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
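    As in the single-controller procedure, these settings can be made from an administrative session on the master controller unit. A hedged sketch, assuming the 6x20 command-line interface's sys command (verify the exact syntax against the Sun StorEdge 6020 and 6120 Array System Manual):

```
6120:/:<1> sys cache auto        # set the cache setting to auto
6120:/:<2> sys mirror auto       # set the mirror setting to auto
6120:/:<3> sys mp_support mpxio  # dual-controller: enable multipathing support
6120:/:<4> sys list              # verify the settings
```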

  6. Install any required controller firmware for the storage arrays you are adding.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

  7. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

How to Connect the Storage Array to FC Switches

  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage arrays and the local area network (LAN).
  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

How to Connect the Node to the FC Switches or the Storage Array

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 18 and Step 19 of this procedure to return resource groups and device groups to these nodes.

    Use the following command:

    # clresourcegroup status +
    # cldevicegroup status +
  2. Move all resource groups and device groups off Node N.
    # clnode evacuate nodename
  3. If you need to install host adapters in Node N, proceed to Step 4.

    If you do not need to install host adapters, skip to Step 10.

  4. If the host adapter that you are installing is the first host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters.

    If the host adapter that you are installing is not the first host adapter on Node N, skip to Step 6.

  5. If the required support packages are not already installed, install them.

    The support packages are located in the Product directory of the Solaris CD-ROM.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  7. Install the host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  9. If necessary, upgrade the host adapter firmware on Node N.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. Install the required Solaris patches for storage array support on Node N.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  13. Perform a reconfiguration boot on Node N by adding -r to your boot instruction to create the new Solaris device files and links.

    For the procedure about how to boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
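    For example, from the OpenBoot PROM prompt on a SPARC node:

```
ok boot -r
```

    From a running node, the equivalent is reboot -- -r.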

  14. On Node N, update the paths to the DID instances.
    # cldevice populate 
  15. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  16. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.
    # cldevice clear
    # cldevice list -v 
  17. Repeat Step 2 through Step 16 for each remaining node that you plan to connect to the storage array.
  18. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  19. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.

  20. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.