Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

Replacing a Host Adapter

You need to replace a host adapter if your host adapter fails, if it becomes unstable, or if you want to upgrade to a newer version. These procedures define Node A as the node with the host adapter that you plan to replace.

Choose the procedure that corresponds to your cluster configuration.


x86 only –

If your cluster is x86 based, Oracle RAC services are not supported. Follow the instructions in How to Replace a Host Adapter When Using Failover and Scalable Data Services Only.


• Sun Cluster failover and scalable data services only, using the recommended HBA configuration: see How to Replace a Host Adapter When Using Failover and Scalable Data Services Only.

• Oracle Parallel Server/Real Application Clusters (OPS/RAC) only, using the recommended HBA configuration: see How to Replace a Host Adapter When Using Oracle Real Application Clusters Only.

• Both failover and scalable data services and OPS/RAC, using the recommended HBA configuration: see How to Replace a Host Adapter When Using Both Failover and Scalable Data Services and Oracle Real Application Clusters.

• All clusters using a single, dual-port HBA to provide both paths to shared data: see How to Replace a Host Adapter When Using a Single, Dual-Port HBA to Provide Both Paths to Shared Data.


Note –

The first three procedures in this section assume that you are using the recommended HBA configuration: two redundant hardware paths to shared data. If you choose to use a single HBA configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for the risks and restrictions of that configuration and use How to Replace a Host Adapter When Using a Single, Dual-Port HBA to Provide Both Paths to Shared Data.


How to Replace a Host Adapter When Using Failover and Scalable Data Services Only

Before You Begin

This procedure relies on the following prerequisites and assumptions.

• If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation, and skip steps that instruct you to shut down the node.

• This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use it later in this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. (Optional) If necessary, move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
      
  4. If the storage device that is attached to the failed host adapter is configured as a quorum device, add a new quorum device on a storage device that is not affected by this procedure. Then remove the old quorum device.

    To determine whether the affected device contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.
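
    For example, assuming the old quorum device is d10 on the affected array and d20 is a shared device on an unaffected array (both device names are hypothetical), the swap might look like the following in Sun Cluster 3.2:

    # clquorum add d20
    # clquorum remove d10

    In Sun Cluster 3.1, a comparable sketch uses scconf:

    # scconf -a -q globaldev=d20
    # scconf -r -q globaldev=d10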

  5. Detach the Solaris Volume Manager submirrors or Veritas Volume Manager plexes on the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
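
    For example, detaching might look like the following, assuming a Solaris Volume Manager mirror d10 with submirror d11, or a Veritas Volume Manager disk group tpcc with plex control_001-02 (all names are hypothetical):

    # metadetach d10 d11
    # vxplex -g tpcc det control_001-02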

  6. Record the details of disk groups and volumes affected by the failed host adapter.

    Record this information because you use it in Step 16 of this procedure to reattach submirrors on the storage array. To determine which submirrors or plexes are affected, see your Solaris Volume Manager or Veritas Volume Manager documentation.
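
    For example, you might capture the current layout with metastat for Solaris Volume Manager or vxprint for Veritas Volume Manager (the diskset name tpccds and disk group name tpcc are hypothetical):

    # metastat -s tpccds
    # vxprint -ht -g tpcc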

  7. If Node A is enabled with the Solaris DR feature, perform any DR-specific steps and skip to Step 10.

    For more information on DR, see your Sun Cluster system administration documentation.

  8. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.
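
    On a Solaris host, the shutdown itself is typically a variation of the following:

    # shutdown -g0 -y -i0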

  9. Power off Node A.

  10. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  11. If Node A is enabled with the Solaris DR feature, perform any DR-specific steps and skip to Step 15.

    For more information on DR, see your Sun Cluster system administration documentation.

  12. Power on Node A.

  13. x86: Set the HBA ports so that each SCSI initiator has a unique SCSI address on the shared bus.

    For instructions on setting SCSI initiator IDs in x86 based systems, see x86: How to Install a Storage Array in a New x86 Based Cluster.

  14. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  15. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  16. Reattach the Solaris Volume Manager submirrors or Veritas Volume Manager plexes on the storage array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or Veritas Volume Manager documentation.
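
    For example, reattaching might look like the following, assuming the hypothetical mirror d10, submirror d11, disk group tpcc, volume control_001, and plex control_001-02 from Step 5:

    # metattach d10 d11
    # vxplex -g tpcc att control_001 control_001-02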

  17. (Optional) If you moved device groups off the node in Step 3, move all device groups back to the node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup1[ devicegroup2 ...]
      
      -n NodeA

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 ...]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  18. (Optional) If you moved resource groups off the node in Step 3, move all resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup1[ resourcegroup2 ...]
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 ...]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
      
  19. If you relocated a quorum device in Step 4, and if you want the cluster configured as it was before replacing the host adapter, relocate the quorum device function back to this storage array.

    To add and remove quorum devices, see your Sun Cluster system administration documentation.

How to Replace a Host Adapter When Using Oracle Real Application Clusters Only

Before You Begin

This procedure relies on the following prerequisites and assumptions.

• If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation, and skip steps that instruct you to shut down the node.

• This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the Oracle instance that is running on Node A.


    # ps -ef | grep oracle
    
  3. Shut down the Oracle Real Application Clusters instance and any other process on Node A that should be stopped before shutting down the node.

    To shut down and restart an Oracle instance in the RAC environment, refer to your Oracle documentation.
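
    For example, with Oracle9i RAC you might stop the instance as the oracle user with srvctl (the database name tpcc and instance name tpcc1 are hypothetical):

    $ srvctl stop instance -d tpcc -i tpcc1

    Example 2–1 at the end of this section shows an equivalent interactive sqlplus session.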

  4. If the storage devices that are attached to the failed host adapter contain a quorum device, add a new quorum device on a different storage device. Then remove the old quorum device.

    To determine whether the affected device contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  5. Detach the Veritas Volume Manager plexes on the storage array attached to the failed host adapter.

    For more information, see your Veritas Volume Manager documentation.
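
    For example (the disk group and plex names are hypothetical):

    # vxplex -g tpcc det control_001-02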

  6. Record the details of disk groups and volumes affected by the failed host adapter.

    Record this information because you use it in Step 15 of this procedure to reattach plexes on the storage array. To determine which plexes are affected, see your Veritas Volume Manager documentation.

  7. If Node A is enabled with the Solaris dynamic reconfiguration (DR) feature, perform any DR-specific steps and skip to Step 10.

    For more information on DR, see your Sun Cluster system administration documentation.

  8. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  9. Power off Node A.

  10. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  11. If Node A is enabled with the Solaris dynamic reconfiguration feature, perform any DR-specific steps and skip to Step 14.

    For more information on DR, see your Sun Cluster system administration documentation.

  12. Power on Node A.

  13. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  14. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  15. Reattach the Veritas Volume Manager plexes on the storage array to their respective volumes.

    For more information, see your Veritas Volume Manager documentation.
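
    For example (the disk group, volume, and plex names are hypothetical):

    # vxplex -g tpcc att control_001 control_001-02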

  16. (Optional) Bring your Oracle Real Application Clusters instance online. This is the instance you identified in Step 2.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.
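
    For example, with srvctl (the database and instance names are hypothetical):

    $ srvctl start instance -d tpcc -i tpcc1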

  17. (Optional) If you relocated a quorum device in Step 4, and if you want your configuration to use the same quorum structure after the host adapter replacement, relocate the quorum device function back to this storage array.

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

How to Replace a Host Adapter When Using Both Failover and Scalable Data Services and Oracle Real Application Clusters

Before You Begin

This procedure relies on the following prerequisites and assumptions.

• If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation, and skip steps that instruct you to shut down the node.

• This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the Oracle instance that is running on Node A.


    # ps -ef | grep oracle
    
  3. Shut down the Oracle Real Application Clusters instance identified in Step 2.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  4. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use it later in this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  5. (Optional) If necessary, move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
      
  6. If the storage device that is connected to the failed host adapter contains a quorum device, add a new quorum device on a different storage device. Then remove the old quorum device.

    To determine whether the affected device contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  7. Detach the Veritas Volume Manager plexes on the storage array that is attached to the failed host adapter.

    For more information, see your Veritas Volume Manager documentation.

  8. Record the details of plexes that are affected by the failed host adapter.

    Record this information because you use it in Step 17 of this procedure to reattach plexes on the storage array. To determine which plexes are affected, see your Veritas Volume Manager documentation.

  9. If Node A is enabled with the Solaris dynamic reconfiguration (DR) feature, perform any DR-specific steps and skip to Step 12.

    For more information on DR, see your Sun Cluster system administration documentation.

  10. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  11. Power off Node A.

  12. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  13. If Node A is enabled with the Solaris DR feature, perform any DR-specific steps and skip to Step 16.

    For more information, see your Sun Cluster system administration documentation.

  14. Power on Node A.

  15. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  16. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  17. Reattach the Veritas Volume Manager plexes on the storage array to their respective volumes.

    For more information, see your Veritas Volume Manager documentation.

  18. (Optional) If you moved device groups off the node in Step 5, move all device groups back to the node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup1[ devicegroup2 ...]
      
      -n NodeA

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 ...]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  19. (Optional) If you moved resource groups off the node in Step 5, move all resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup1[ resourcegroup2 ...]
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 ...]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
      
  20. (Optional) Bring your Oracle Real Application Clusters instance online. This is the instance you identified in Step 2.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  21. (Optional) If you relocated a quorum device in Step 6 and you want your configuration to use the same quorum structure after the host adapter replacement, relocate the quorum device function back to this storage array.

    To add and remove quorum devices, see the Sun Cluster system administration documentation.


Example 2–1 SPARC: Replacing a Host Adapter in a Running Cluster

In the following example, a two-node cluster is running Oracle Real Application Clusters and Veritas Volume Manager. In this situation, you begin the host adapter replacement by determining the Oracle instance name.


# ps -ef | grep oracle
oracle 14716 14414  0 14:05:47 console  0:00 grep oracle
oracle 14438     1  0 13:05:44 ?        0:02 ora_lmon_tpcc1
.
.
.
oracle 14434     1  0 13:05:43 ?        0:00 ora_pmon_tpcc1
oracle 14458     1  0 13:05:50 ?        0:00 ora_d000_tpcc1

This output identifies the Oracle Real Application Clusters instance as tpcc1.

Shutting down the Oracle Real Application Clusters instance on Node A involves several steps, as shown in the following example.


# su - oracle 
Sun Microsystems Inc.   SunOS 5.9       Generic May 2002
$ ksh 
$ ORACLE_SID=tpcc1
$ ORACLE_HOME=/export/home/oracle/OraHome1
$ export ORACLE_SID ORACLE_HOME 
$ sqlplus " /as sysdba " 

SQL*Plus: Release 9.2.0.4.0 - Production on Mon Jan 5 14:12:28 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data 
Mining options
JServer Release 9.2.0.4.0 - Production

SQL> shutdown immediate ;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit 
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit 
Production With the Partitioning, Real Application Clusters, OLAP and 
Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
$ lsnrctl 

LSNRCTL for Solaris: Version 9.2.0.4.0 - Production on 05-JAN-2004 14:15:09

Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> stop 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
The command completed successfully
LSNRCTL> 
After you have stopped the Oracle Real Application Clusters instance, check for a quorum device and, if necessary, reconfigure the quorum device.

When you are certain that the storage attached to the failed adapter does not contain a quorum device, proceed to determine the affected plexes.

    Record this information for use in reestablishing the original storage configuration. In the following example, c2 is the controller with the failed host adapter.


 # vxprint -ht -g tpcc | grep c2
 dm tpcc01    c2t0d0s2     sliced   4063     8374320  -
 dm tpcc02    c2t1d0s2     sliced   4063     8374320  -
 dm tpcc03    c2t2d0s2     sliced   4063     8374320  -
 dm tpcc04    c2t3d0s2     sliced   4063     8374320  -
 dm tpcc09    c2t8d0s2     sliced   4063     8374320  -
 dm tpcc10    c2t9d0s2     sliced   4063     8374320  -
 sd tpcc02-01 control_001-01 tpcc02 0        41040    0       c2t1d0 ENA
.
.
.
 sd tpcc03-06 temp_0_0-02  tpcc03   2967840  276480   0       c2t2d0 ENA
 sd tpcc03-04 ware_0_0-02  tpcc03   2751840  95040    0       c2t2d0 ENA

From this output, you can easily determine which plexes and subdisks are affected by the failed adapter. These are the plexes you detach from the storage array.


# /usr/sbin/vxplex -g tpcc  det  control_001-02
# /usr/sbin/vxplex -g tpcc  det  temp_0_0-02
After the plexes are detached, you can safely shut down the node, if necessary.

Proceed with replacing the failed host adapter, following the instructions that accompanied that device.

After you replace the failed host adapter and Node A is in cluster mode, reattach the plexes and replace any quorum device to reestablish your original cluster configuration.


How to Replace a Host Adapter When Using a Single, Dual-Port HBA to Provide Both Paths to Shared Data

Before You Begin

This procedure relies on the following assumption.

• This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If you are using scalable and failover services, determine the resource groups and device groups that are running on Node A.

    Record this information because you use it in Step 12 and Step 13 of this procedure to return device groups and resource groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Record the details of metadevices that are affected by the failed host adapter.
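
    For example, you might save the metastat output for later comparison (the diskset name tpccds is hypothetical):

    # metastat -s tpccds > /var/tmp/metastat.before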

  4. SPARC: If you are using Oracle Real Application Clusters, shut down all RAC instances running in your cluster.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  5. Shut down the cluster.

    To shut down a cluster, see your Sun Cluster system administration documentation.
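
    For example, in Sun Cluster 3.2:

    # cluster shutdown -g0 -y

    In Sun Cluster 3.1:

    # scshutdown -g0 -y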

  6. Power off Node A.

  7. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  8. Power on Node A.

  9. Boot all nodes into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  11. Perform any volume management maintenance procedures that are necessary to fix any metadevices affected by this procedure.

    For more information, see your volume manager software documentation.
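
    For example, with Solaris Volume Manager you might re-enable errored components and allow the mirrors to resynchronize (the diskset, mirror, and component names are hypothetical):

    # metareplace -s tpccds -e d10 c2t0d0s0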

  12. (Optional) If necessary, move the device groups back to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  13. (Optional) If necessary, move the resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup

      The resource group that is returned to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
      
  14. (Optional) Bring all Oracle Real Application Clusters instances online.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.