Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

Maintaining Storage Arrays

The maintenance procedures in FRUs That Do Not Require Sun Cluster Maintenance Procedures are performed the same as in a noncluster environment. Table 2–1 lists the procedures that require cluster-specific steps.

Table 2–1 Task Map: Maintaining a Storage Array

Task: Remove a storage array
    Information: How to Remove a Storage Array

Task: Replace a storage array
    Information: To replace a storage array, remove the storage array, then add a new storage array to the configuration. See SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster and How to Remove a Storage Array.

Task: Add a JBOD as an expansion unit
    Information: Follow the same procedure that you use in a noncluster environment. See the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

Task: Remove a JBOD expansion unit
    Information: Follow the same procedure that you use in a noncluster environment. See the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

Task: Replace a chassis
    Information: How to Replace the Chassis

Task: Replace a SCSI cable
    Information: How to Replace a SCSI Cable

Task: Replace a host adapter
    Information: Replacing a Host Adapter

Task: Add a disk drive
    Information: How to Add a Disk Drive

Task: Remove a disk drive
    Information: How to Remove a Disk Drive

Task: Replace a disk drive
    Information: How to Replace a Disk Drive Without Oracle Real Application Clusters, or SPARC: How to Replace a Disk Drive With Oracle Real Application Clusters

Task: Upgrade disk drive firmware
    Information: How to Upgrade Disk Drive Firmware

Task: Upgrade host adapter firmware
    Information: How to Upgrade Host Adapter Firmware

FRUs That Do Not Require Sun Cluster Maintenance Procedures

Each storage device has a different set of FRUs that do not require cluster-specific procedures. Choose among the following storage devices:

Sun StorEdge 3120 Storage Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge 3000 Family FRU Installation Guide for the procedures for the following FRUs. For a URL to this storage documentation, see Related Documentation.

Sun StorEdge 3310 and 3320 SCSI Storage Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge 3000 Family FRU Installation Guide for instructions on replacing the following FRUs. For a URL to this storage documentation, see Related Documentation.

SPARC: Sun StorEdge D1000 Storage Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge A1000 and D1000 Installation, Operations, and Service Manual for the procedures for the following FRUs. For a URL to this storage documentation, see Related Documentation.

SPARC: Sun StorEdge Netra D130/S1 Storage Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Netra st D130 Installation and Maintenance Manual for the procedures for the following FRUs. For a URL to this storage documentation, see Related Documentation.

SPARC: Sun StorEdge D2 Storage Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge D2 Array Installation, Operation, and Service Manual for the procedures for the following FRUs. For a URL to this storage documentation, see Related Documentation.

SPARC: Sun StorEdge Multipack Storage Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the SPARCstorage MultiPack Service Manual for the procedures for the following FRUs. For a URL to this storage documentation, see Related Documentation.

How to Remove a Storage Array

Removing a storage array enables you to downsize or reallocate your existing storage pool.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. If the storage array that you want to remove contains a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether the affected array contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.
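
    For example, to move the quorum function to a different shared disk before removing the array, commands similar to the following might be used. The DID device names d4 and d20 are hypothetical; substitute devices from your own configuration.

    # clquorum add d20
    # clquorum remove d4

    On Sun Cluster 3.1, the equivalent commands take the form scconf -a -q globaldev=d20 and scconf -r -q globaldev=d4.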

  2. If necessary, back up the metadevice or volume.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. On each node that is connected to the storage array, perform volume management administration to remove the storage array from the configuration.

    If a volume manager does manage the disk drives, run the appropriate volume manager commands to remove the disk drives from any diskset or disk group. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation. See the following paragraph for additional Veritas Volume Manager commands that are required.


    Note –

    Disk drives that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can remove the disk drives from the Sun Cluster environment. After you delete the disk drives from any disk group, use the following commands on both nodes to remove the disk drives from Veritas Volume Manager control.



    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
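
    If Solaris Volume Manager manages the disk drives, a comparable cleanup, sketched here with a hypothetical diskset name and DID device, is to delete each drive from its diskset before you remove the array.

    # metaset -s dg-schost-1 -d /dev/did/rdsk/d4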
    
  4. Identify the disk drives that you plan to remove.


    # cfgadm -al
    
  5. On all nodes, remove references to the disk drives in the storage array that you plan to remove.


    # cfgadm -c unconfigure cN::dsk/cNtXdY
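
    As a hypothetical example, if the cfgadm -al output from Step 4 shows the array's disks on controller c1, the commands on each node would look similar to the following.

    # cfgadm -c unconfigure c1::dsk/c1t2d0
    # cfgadm -c unconfigure c1::dsk/c1t3d0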
    
  6. Disconnect the SCSI cables from the storage array.

  7. On all nodes, update device namespaces.


    # devfsadm -C
    
  8. On all nodes, remove all obsolete device IDs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
  9. Power off the storage array. Disconnect the storage array from the power source.

    For the procedure about how to power off a storage array, see your storage documentation. For a list of storage documentation, see Related Documentation.

  10. Remove the storage array.

    For the procedure about how to remove a storage array, see your storage documentation. For a list of storage documentation, see Related Documentation.

  11. If you plan to remove a host adapter that has an entry in the nvramrc script, delete the references to the host adapters in the nvramrc script.


    Note –

    If no other parallel SCSI devices are connected to the nodes, you can delete the contents of the nvramrc script. Then, at the OpenBoot PROM, set use-nvramrc? to false. Afterward, reset the scsi-initiator-id to 7 as outlined in Installing a Storage Array.
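
    For example, at the OpenBoot PROM ok prompt, the variables might be reset as follows. This is a sketch only; verify the values against Installing a Storage Array before you change them.

    ok setenv use-nvramrc? false
    ok setenv scsi-initiator-id 7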


  12. If necessary, remove any unused host adapters from the nodes.

    For the procedure about how to remove a host adapter, see your host adapter and server documentation.

  13. From any node, verify that the configuration is correct.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      

How to Replace the Chassis

You might need to replace a chassis if the chassis fails. With this procedure, you are able to replace the chassis and retain the storage array's disk drives and the references to those disk drives. By retaining the storage array's disk drives, you save time because you no longer need to resynchronize your mirrors or restore your data.

If you need to replace the entire storage array, see How to Remove a Storage Array.

Before You Begin

This procedure relies on the following assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If the chassis you are replacing contains a drive that is configured as a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether a quorum device will be affected by this procedure, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.

  2. If possible, back up the metadevices or volumes that reside in the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. On each node that is connected to the storage array, perform volume management administration to remove the storage array from the configuration.


    Note –

    Disk drives that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can remove the disk drives from the Sun Cluster environment. After you delete the disk drives from any disk group, use the following commands on both nodes to remove the disk drives from Veritas Volume Manager control.


    If a volume manager manages the disk drives, run the appropriate volume manager commands to remove the disk drives from any diskset or disk group. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.


    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    
  4. Disconnect the SCSI cables from the storage array.

    You can disconnect the cables in any order.

  5. Power off and disconnect the storage array from the power source.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.

  6. Connect the new storage array to the power sources.

  7. Connect the SCSI cables to the new storage array.

    You can connect the cables in any order.

    Ensure the cable does not exceed bus-length limitations. For more information on bus-length limitations, see your hardware documentation.

  8. One disk drive at a time, remove the disk drives from the old storage array. Insert the disk drives into the same slots in the new storage array.

    Move your other components to the new chassis as well. For the procedures about how to replace your storage array's components, see your storage documentation. For a list of storage documentation, see Related Documentation.

  9. Power on the storage array.

    For the procedure about how to power on the storage array, see the documentation that came with your array.

  10. On each node that is attached to the storage array, run the devfsadm(1M) command.


    # devfsadm
    
  11. From one node, attach the new storage array to the global device namespace.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  12. One at a time, shut down and reboot the nodes that are connected to the storage array.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
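
    As a rough sketch on Sun Cluster 3.2 (the node name is hypothetical), you can evacuate a node and then reboot it with an init-state 6 shutdown; on Sun Cluster 3.1, use scswitch -S -h NodeA before the shutdown.

    # clnode evacuate NodeA
    # shutdown -y -g0 -i6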

  13. Perform volume management administration to add the storage array back into the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

Disconnecting and Reconnecting a Node from Shared Storage

Use this procedure to temporarily disconnect a node from shared storage. You need to temporarily disconnect a node from shared storage if you intend to replace an HBA.

This procedure relies on the following assumptions.

These procedures provide the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

How to Disconnect the Node from Shared Storage

You must maintain proper SCSI-bus termination during this procedure. The process by which you disconnect the node from shared storage depends on whether you have host adapters available on Node B (see Figure 2–1). If you do not have host adapters available on Node B and your storage device does not have auto-termination, you must use terminators (see Figure 2–2).


Note –

To determine the specific terminator that your storage array supports, see your storage documentation. For a list of storage documentation, see Related Documentation.


  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use this information in How to Reconnect the Node to Shared Storage to return resource groups and device groups to this node.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Identify the submirrors on the storage array that is connected to Node A.

  4. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
      
  5. (Optional) If necessary, detach the submirrors on the storage array that is connected to Node A.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  7. To maintain proper SCSI-bus termination during this procedure, perform one of the following steps. The approach you choose depends on whether you have an available host adapter on Node B.

    • Disconnect the SCSI cable between Node A and the storage array. Attach this SCSI cable to Node B on the storage array. For an illustration, see Figure 2–1.

      Figure 2–1 Disconnecting the Node from Shared Storage by Using Host Adapters on Node B

      Illustration: The following context describes the graphic.

    • Disconnect the SCSI cable between Node A and the storage array. Install an appropriate SCSI terminator to this SCSI connector on the storage array. For an illustration, see Figure 2–2.

      Figure 2–2 Disconnecting the Node from Shared Storage by Using Terminators

      Illustration: The following context describes the graphic.

  8. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  9. If you detached the submirrors in Step 5, reattach the submirrors. Wait for the submirrors to resynchronize.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  10. Repeat Step 5 through Step 9 for each remaining storage array that is connected to Node A.

  11. Reconnect the node to the shared storage as outlined in How to Reconnect the Node to Shared Storage.

How to Reconnect the Node to Shared Storage

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. (Optional) If necessary, detach the submirrors on the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Disconnect the SCSI terminator. Reattach the SCSI cable between the storage array and Node A.


    Caution –

    Connect this storage array to the same host adapter to which the storage array was connected before you disconnected the SCSI cable.


  4. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  5. If you detached submirrors in Step 2, reattach the submirrors on the storage array. Wait for the submirrors to resynchronize.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. Repeat Step 2 through Step 5 for each remaining storage array that you want to reconnect to Node A.

  7. (Optional) If you moved device groups off the node in Step 4 of the disconnect procedure, move all device groups back to the node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup1[ devicegroup2 ...]
      
      -n NodeA

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 ...]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  8. (Optional) If you moved resource groups off the node in Step 4 of the disconnect procedure, move all resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup1[ resourcegroup2 ...]
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 ...]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
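
    For example, on Sun Cluster 3.2 you would return a hypothetical device group dg-schost-1 and resource group rg-schost-1 to Node A as follows. On Sun Cluster 3.1, the equivalents are scswitch -z -D dg-schost-1 -h NodeA and scswitch -z -g rg-schost-1 -h NodeA.

    # cldevicegroup switch -n NodeA dg-schost-1
    # clresourcegroup switch -n NodeA rg-schost-1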
      

How to Replace a SCSI Cable

Use this procedure to replace a failed SCSI cable in a running cluster.

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If one of the disks in the storage array whose cable you are replacing is configured as a quorum device, configure a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether a quorum device will be affected by this procedure, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  3. Detach the submirror or submirrors on the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
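
    For example, with Solaris Volume Manager you might detach submirror d11 from mirror d10 before replacing the cable, and then reattach it in Step 5 with metattach d10 d11. The metadevice names are hypothetical.

    # metadetach d10 d11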

  4. Replace the SCSI cable.

  5. Reattach the submirror or submirrors on the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. If you relocated a quorum device in Step 2, relocate the quorum device function to this storage array.

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

Replacing a Host Adapter

You need to replace a host adapter if your host adapter fails, if it becomes unstable, or if you want to upgrade to a newer version. These procedures define Node A as the node with the host adapter that you plan to replace.

Choose the procedure that corresponds to your cluster configuration.


x86 only –

If your cluster is x86 based, Oracle RAC services are not supported. Follow the instructions in How to Replace a Host Adapter When Using Failover and Scalable Data Services Only.


Cluster Configuration 

Instructions 

Sun Cluster failover and scalable data services only, using the recommended HBA configuration 

How to Replace a Host Adapter When Using Failover and Scalable Data Services Only

Oracle Parallel Server/Real Application Clusters (OPS/RAC) only, using the recommended HBA configuration 

How to Replace a Host Adapter When Using Oracle Real Application Clusters Only

Both failover and scalable data services and OPS/RAC, using the recommended HBA configuration 

How to Replace a Host Adapter When Using Both Failover and Scalable Data Services and Oracle Real Application Clusters

All clusters using a single, dual-port HBA to provide both paths to shared data 

How to Replace a Host Adapter When Using a Single, Dual-Port HBA to Provide Both Paths to Shared Data


Note –

The first three procedures in this section assume that you are using the recommended HBA configuration: two redundant hardware paths to shared data. If you choose to use a single HBA configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for the risks and restrictions of that configuration and use How to Replace a Host Adapter When Using a Single, Dual-Port HBA to Provide Both Paths to Shared Data.


How to Replace a Host Adapter When Using Failover and Scalable Data Services Only

Before You Begin

This procedure relies on the following prerequisites and assumptions.

If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation, and skip steps that instruct you to shut down the node.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use this information later in this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. (Optional) If necessary, move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
      
  4. If the storage device that is attached to the failed host adapter is configured as a quorum device, add a new quorum device on a storage device that is not affected by this procedure. Then remove the old quorum device.

    To determine whether the affected device contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  5. Detach the Solaris Volume Manager submirrors or Veritas Volume Manager plexes on the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. Record the details of disk groups and volumes affected by the failed host adapter.

    Record this information because you use it in Step 16 of this procedure to reattach submirrors on the storage array. To determine which submirrors or plexes are affected, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  7. If Node A is enabled with the Solaris DR feature, perform any DR-specific steps and skip to Step 10.

    For more information on DR, see your Sun Cluster system administration documentation.

  8. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  9. Power off Node A.

  10. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  11. If Node A is enabled with the Solaris DR feature, perform any DR-specific steps and skip to Step 15.

    For more information on DR, see your Sun Cluster system administration documentation.

  12. Power on Node A.

  13. x86: Set the HBA ports to ensure that each array has a unique SCSI address.

    For instructions on setting SCSI initiator IDs in x86 based systems, see x86: How to Install a Storage Array in a New x86 Based Cluster.

  14. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  15. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.
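
    As an illustration only, applying a downloaded patch on a node that you have booted in noncluster mode typically looks like the following; the patch ID is hypothetical.

    # patchadd /var/tmp/123456-07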

  16. Reattach the Solaris Volume Manager submirrors or Veritas Volume Manager plexes on the storage array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or Veritas Volume Manager documentation.

  17. (Optional) If you moved device groups off the node in Step 3 of this procedure, move all device groups back to the node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup1[ devicegroup2 ...]
      
      -n NodeA

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 ...]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  18. (Optional) If you moved resource groups off the node in Step 3 of this procedure, move all resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup1[ resourcegroup2 ...]
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 ...]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
      
  19. If you relocated a quorum device in Step 4, and if you want the cluster configured as it was before replacing the host adapter, relocate the quorum device function back to this storage array.

    To add and remove quorum devices, see your Sun Cluster system administration documentation.

How to Replace a Host Adapter When Using Oracle Real Application Clusters Only

Before You Begin

This procedure relies on the following prerequisites and assumptions.

If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation, and skip steps that instruct you to shut down the node.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the Oracle instance that is running on Node A.


    # ps -ef | grep oracle
    
  3. Shut down the Oracle Real Application Clusters instance and any other process on Node A that should be stopped before shutting down the node.

    To shut down and restart an Oracle instance in the RAC environment, refer to your Oracle documentation.

  4. If the storage devices that are attached to the failed host adapter contain a quorum device, add a new quorum device on a different storage device. Then remove the old quorum device.

    To determine whether the affected device contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  5. Detach the Veritas Volume Manager plexes on the storage array attached to the failed host adapter.

    For more information, see your Veritas Volume Manager documentation.

  6. Record the details of disk groups and volumes affected by the failed host adapter.

    Record this information because you use it in Step 15 of this procedure to reattach plexes on the storage array. To determine which plexes are affected, see your Veritas Volume Manager documentation.

  7. If Node A is enabled with the Solaris dynamic reconfiguration (DR) feature, perform any DR-specific steps and skip to Step 10.

    For more information on DR, see your Sun Cluster system administration documentation.

  8. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  9. Power off Node A.

  10. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  11. If Node A is enabled with the Solaris dynamic reconfiguration feature, perform any DR-specific steps and skip to Step 14.

    For more information on DR, see your Sun Cluster system administration documentation.

  12. Power on Node A.

  13. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  14. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  15. Reattach the Veritas Volume Manager plexes on the storage array to their respective volumes.

    For more information, see your Veritas Volume Manager documentation.

  16. (Optional) Bring your Oracle Real Application Clusters instance online. This is the instance you identified in Step 2.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  17. (Optional) If you relocated a quorum device in Step 4, and if you want your configuration to use the same quorum structure after the host adapter replacement, relocate the quorum device function back to this storage array.

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

How to Replace a Host Adapter When Using Both Failover and Scalable Data Services and Oracle Real Application Clusters

Before You Begin

This procedure relies on the following prerequisites and assumptions.

If your nodes are configured for dynamic reconfiguration, see the Sun Cluster system administration documentation, and skip steps that instruct you to shut down the node.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the Oracle instance that is running on Node A.


    # ps -ef | grep oracle
    
  3. Shut down the Oracle Real Application Clusters instance identified in Step 2.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  4. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use it later in this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  5. (Optional) If necessary, move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
      
  6. If the storage device that is connected to the failed host adapter contains a quorum device, add a new quorum device on a different storage device. Then remove the old quorum device.

    To determine whether the affected device contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  7. Detach the Veritas Volume Manager plexes on the storage array that is attached to the failed host adapter.

    For more information, see your Veritas Volume Manager documentation.

  8. Record the details of plexes that are affected by the failed host adapter.

    Record this information because you use it in Step 17 of this procedure to reattach plexes on the storage array. To determine which plexes are affected, see your Veritas Volume Manager documentation.

  9. If Node A is enabled with the Solaris dynamic reconfiguration (DR) feature, perform any DR-specific steps and skip to Step 12.

    For more information on DR, see your Sun Cluster system administration documentation.

  10. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  11. Power off Node A.

  12. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  13. If Node A is enabled with the Solaris DR feature, perform any DR-specific steps and skip to Step 16.

    For more information, see your Sun Cluster system administration documentation.

  14. Power on Node A.

  15. Boot Node A into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  16. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  17. Reattach the Veritas Volume Manager plexes on the storage array to their respective volumes.

    For more information, see your Veritas Volume Manager documentation.

  18. (Optional) If you moved device groups off the node in Step 5 of this procedure, move all device groups back to the node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup1[ devicegroup2 ...]
      
      -n NodeA

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 ...]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  19. (Optional) If you moved resource groups off the node in Step 5 of this procedure, move all resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup1[ resourcegroup2 ...]
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 ...]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
      
  20. (Optional) Bring your Oracle Real Application Clusters instance online. This is the instance you identified in Step 2.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  21. (Optional) If you relocated a quorum device in Step 6 and you want your configuration to use the same quorum structure after the host adapter replacement, relocate the quorum device function back to this storage array.

    To add and remove quorum devices, see the Sun Cluster system administration documentation.


Example 2–1 SPARC: Replacing a Host Adapter in a Running Cluster

In the following example, a two-node cluster is running Oracle Real Application Clusters and Veritas Volume Manager. In this situation, you begin the host adapter replacement by determining the Oracle instance name.


# ps -ef | grep oracle
oracle 14716 14414  0 14:05:47 console  0:00 grep oracle
oracle 14438     1  0 13:05:44 ?        0:02 ora_lmon_tpcc1
.
.
.
oracle 14434     1  0 13:05:43 ?        0:00 ora_pmon_tpcc1
oracle 14458     1  0 13:05:50 ?        0:00 ora_d000_tpcc1

This output identifies the Oracle Real Application Clusters instance as tpcc1.

Shutting down the Oracle Real Application Clusters instance on Node A involves several steps, as shown in the following example.


# su - oracle 
Sun Microsystems Inc.   SunOS 5.9       Generic May 2002
$ ksh 
$ ORACLE_SID=tpcc1
$ ORACLE_HOME=/export/home/oracle/OraHome1
$ export ORACLE_SID ORACLE_HOME 
$ sqlplus " /as sysdba " 

SQL*Plus: Release 9.2.0.4.0 - Production on Mon Jan 5 14:12:28 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data 
Mining options
JServer Release 9.2.0.4.0 - Production

SQL> shutdown immediate ;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit 
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit 
Production With the Partitioning, Real Application Clusters, OLAP and 
Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
$ lsnrctl 

LSNRCTL for Solaris: Version 9.2.0.4.0 - Production on 05-JAN-2004 14:15:09

Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> stop 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
The command completed successfully
LSNRCTL> 
After you have stopped the Oracle Real Application Clusters instance, check for a quorum device and, if necessary, reconfigure the quorum device.

When you are certain that the node with the failed adapter does not contain a quorum device, determine the affected plexes. Record this information for use in reestablishing the original storage configuration. In the following example, c2 is the controller with the failed host adapter.


 # vxprint -ht -g tpcc | grep c2
 dm tpcc01    c2t0d0s2     sliced   4063     8374320  -
 dm tpcc02    c2t1d0s2     sliced   4063     8374320  -
 dm tpcc03    c2t2d0s2     sliced   4063     8374320  -
 dm tpcc04    c2t3d0s2     sliced   4063     8374320  -
 dm tpcc09    c2t8d0s2     sliced   4063     8374320  -
 dm tpcc10    c2t9d0s2     sliced   4063     8374320  -
 sd tpcc02-01 control_001-01 tpcc02 0        41040    0       c2t1d0 ENA
.
.
.
 sd tpcc03-06 temp_0_0-02  tpcc03   2967840  276480   0       c2t2d0 ENA
 sd tpcc03-04 ware_0_0-02  tpcc03   2751840  95040    0       c2t2d0 ENA

From this output, you can easily determine which plexes and subdisks are affected by the failed adapter. These are the plexes you detach from the storage array.


/usr/sbin/vxplex -g tpcc  det  control_001-02
/usr/sbin/vxplex -g tpcc  det  temp_0_0-02
After the plexes are detached, you can safely shut down the node, if necessary.

Proceed with replacing the failed host adapter, following the instructions that accompanied that device.

After you replace the failed host adapter and Node A is in cluster mode, reattach the plexes and replace any quorum device to reestablish your original cluster configuration.


How to Replace a Host Adapter When Using a Single, Dual-Port HBA to Provide Both Paths to Shared Data

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If you are using scalable and failover services, determine the resource groups and device groups that are running on Node A.

    Record this information because you use it in Step 12 and Step 13 of this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Record the details of metadevices that are affected by the failed host adapter.
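
    To capture the current state, you might run a command like the following with your own diskset name (dg-schost-1 is hypothetical) and note which components live on the failed controller.

    # metastat -s dg-schost-1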

  4. SPARC: If you are using Oracle Real Application Clusters, shut down all RAC instances running in your cluster.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

  5. Shut down the cluster.

    To shut down a cluster, see your Sun Cluster system administration documentation.

  6. Power off Node A.

  7. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  8. Power on Node A.

  9. Boot all nodes into cluster mode.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  11. Perform any volume management maintenance procedures that are necessary to fix any metadevices affected by this procedure.

    For more information, see your volume manager software documentation.

  12. (Optional) If necessary, move the device groups back to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  13. (Optional) If necessary, move the resource groups back to the node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup
      
      -n NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup

      The resource group that is returned to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA
      
  14. (Optional) Bring all Oracle Real Application Clusters instances online.

    To shut down and restart an Oracle instance in the RAC environment, refer to the Oracle 9iRAC Administration Guide.

How to Add a Disk Drive

Adding a disk drive enables you to increase your storage space after a storage array has been added to your cluster.


Caution –

(Sun StorEdge Multipack Enclosures Only) SCSI-reservation failures have been observed when clustering storage arrays that contain a particular model of Quantum disk drive: SUN4.2G VK4550J. Do not use this particular model of Quantum disk drive for clustering with storage arrays.

If you must use this model of disk drive, you must set the scsi-initiator-id of Node A to 6. If you use a six-slot storage array, you must set the storage array for the 9-through-14 SCSI target address range. For more information, see the Sun StorEdge MultiPack Storage Guide.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Locate an empty disk slot in the storage array for the disk drive you are adding.

    Identify the empty disk slot and note the target number. Refer to your storage array documentation.

  2. Install the disk drive.

    For the procedure about how to install a disk drive, see your storage array documentation.

  3. On all nodes that are attached to the storage array, configure the disk drive.


    # cfgadm -c configure cN
    
  4. On all nodes that are attached to the storage device, probe all devices and write the new disk drive to the /dev/rdsk directory.


    # devfsadm
    

    Depending on the number of devices that are connected to the node, the devfsadm command can require at least five minutes to complete.

  5. On all nodes, verify that the entries for the disk drive have been added to the /dev/rdsk directory.


    # ls -l /dev/rdsk
    
  6. If necessary, use the format(1M) command or the fmthard(1M) command to partition and label the disk drive.
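
    For example, one common way to label the new drive to match another drive in the same array is to copy its VTOC with prtvtoc and fmthard. The device names are hypothetical; adjust them for your configuration.

    # prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2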

  7. From any node, update the global device namespace.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is an expected behavior.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  8. On one node, verify that a device ID has been assigned to the disk drive.

    The new device ID that is assigned to the new disk drive might not be in sequential order in the storage array.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -L
      
  9. Perform volume management administration to add the new disk drive to the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  10. If you want this new disk drive to be a quorum device, add the quorum device.

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.

How to Remove a Disk Drive

Removing a disk drive can allow you to downsize or reallocate your existing storage pool. You might want to perform this procedure in the following scenarios.

For conceptual information about quorum, quorum devices, global devices, and device IDs, see the Sun Cluster concepts documentation.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Identify the disk drive that you are removing. Also identify the slot from which you need to remove the disk drive.

    If the disk error message reports the drive problem by device ID, use the cldevice list or scdidadm -l command to determine the Solaris device name. To list all configurable hardware information, use the cfgadm -al command.

    • If you are using Sun Cluster 3.2, use the following commands:


      # cldevice list -v
      # cfgadm -al
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # scdidadm -l deviceID
      # cfgadm -al
      
  2. If the disk you are removing is configured as a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether a quorum device will be affected by this procedure, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.

  3. If possible, back up the metadevice or volume.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
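
    For example, if the metadevice contains a UFS file system, one possible approach is to back up the file system with the ufsdump command. The metadevice name d10 and the tape device in this sketch are hypothetical.


    # ufsdump 0ucf /dev/rmt/0 /dev/md/rdsk/d10
    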

  4. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
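
    For example, with Solaris Volume Manager you might remove the disk drive from its diskset by its DID name. The diskset name dg-schost-1 and the DID name d13 in this sketch are hypothetical.


    # metaset -s dg-schost-1 -d /dev/did/rdsk/d13
    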

  5. On all nodes, unconfigure the disk drive. Remove references to the disk drive from the operating system and clustering environment.


    # cfgadm -c unconfigure cN::dsk/cNtXdY
    
  6. Remove the disk drive.

    For the procedure about how to remove a disk drive, see your storage documentation. For a list of storage documentation, see Related Documentation.

  7. On all nodes, delete the paths to the disk drive that you removed.


    # devfsadm -C
    
  8. On all nodes, remove all obsolete device IDs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      

How to Replace a Disk Drive Without Oracle Real Application Clusters

Replace a disk drive when the disk drive fails or when you want to upgrade to a higher-quality or larger disk.

For conceptual information about quorum, quorum devices, global devices, and device IDs, see the Sun Cluster concepts documentation.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Identify the failed disk drive.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice show -v cNtNdN
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -o diskid -l cNtNdN
      
  3. Record the device identifier (DID) of the failed disk drive. You use this DID when you repair the device instance for the replacement disk drive later in this procedure.

  4. If the disk drive you are removing is configured as a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether a quorum device will be affected by this procedure, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

  5. If possible, back up the metadevice or volume.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. (Solaris Volume Manager Only) If your disk drive failure prevents Solaris Volume Manager from reading the disk label, use the disk partitioning information that you saved.

    You saved this information when you installed the storage array or when you added the storage array to the cluster.

  7. (Solaris Volume Manager Only) If your disk drive failure does not prevent Solaris Volume Manager from reading the disk label, save the disk partitioning information now if you have not already done so.


    Caution –

    Do not save disk partitioning information under /tmp because you will lose this file when you reboot. Instead, save this file under /usr/tmp.



    # prtvtoc /dev/rdsk/cNtNdNsN > filename
    

    Use this information when you partition the new disk drive.

  8. Replace the failed disk.

    1. Determine which node owns the device group.

      • If you are using Sun Cluster 3.2, use the following command:


        # cldevicegroup status devgroup1
        
      • If you are using Sun Cluster 3.1, use the following command:


        # scstat -D
        
    2. If you are using Veritas Volume Manager, remove the disk drive from Veritas Volume Manager control on a node that does not have ownership of the device group.


      # vxdisk offline cNtNdN
      # vxdisk rm cNtNdN
      
    3. On a node that does not have ownership of the device group, suspend activity on the SCSI bus.


      # cfgadm -x replace_device cN::disk/cNtNdN
      

      When prompted, type y to suspend activity on the SCSI bus.

    4. If the message cfgadm: Component system is busy, try again: failed to offline is displayed, follow these steps:

      1. Become superuser.

      2. Temporarily rename the file named es_rcm.pl.

        • If you are using Version 4.1 of Veritas Volume Manager or a version of Veritas Volume Manager that was released after 4.1, type:


          # mv /usr/lib/rcm/scripts/es_rcm.pl /usr/lib/rcm/scripts/DONTUSE
          
        • If you are using a version of Veritas Volume Manager that was released before Version 4.1, type:


          # mv /etc/rcm/scripts/es_rcm.pl /etc/rcm/scripts/DONTUSE
          
      3. Reissue the cfgadm command that you tried to issue previously.


        # cfgadm -x replace_device cN::disk/cNtNdN
        
      4. Rename the DONTUSE file to its original name.

        • If you are using Version 4.1 of Veritas Volume Manager or a version of Veritas Volume Manager that was released after 4.1, type:


          # mv /usr/lib/rcm/scripts/DONTUSE /usr/lib/rcm/scripts/es_rcm.pl
          
        • If you are using a version of Veritas Volume Manager that was released before Version 4.1, type:


          # mv /etc/rcm/scripts/DONTUSE /etc/rcm/scripts/es_rcm.pl
          
    5. After SCSI bus activity stops, replace the disk and type y at the prompt.

      After replacing the disk, warning messages might be displayed. Ignore these messages.

  9. On all nodes that are attached to the device, run the devfsadm command to probe all devices and to update the device tree.


    # devfsadm
    

    Depending on the number of devices that are connected to the node, the devfsadm(1M) command can require at least five minutes to complete.

  10. Label the new disk drive by using the format command.
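
    For example, you might select the new disk drive and write a default label from the format menu. The device name c1t3d0 in this sketch is hypothetical, and the prompts can vary with your configuration.


    # format -d c1t3d0
    format> label
    format> quit
    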

  11. (Solaris Volume Manager Only) If you successfully saved the disk partitioning information in Step 7, from any node that is connected to the device, partition the new disk drive by using the partitioning you saved when you installed or added the storage array.


    # fmthard -s filename /dev/rdsk/cNtNdNsN
    
  12. On all nodes, repair the device instance for the replaced disk drive.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice repair DID_number
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -R DID_number
      

      DID_number is the DID of the failed disk drive that you recorded earlier in this procedure.

  13. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
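
    For example, if the replaced disk drive was a submirror component under Solaris Volume Manager, one possible approach is to re-enable the component with the metareplace command. The mirror name d20 and the slice name in this sketch are hypothetical.


    # metareplace -e d20 c1t3d0s0
    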

  14. If you want this new disk drive to be a quorum device, add the quorum device.

    To add and remove quorum devices, see the Sun Cluster system administration documentation.

SPARC: How to Replace a Disk Drive With Oracle Real Application Clusters

Replace a disk drive when the disk drive fails or when you want to upgrade to a higher-quality or larger disk.

For conceptual information about quorum, quorum devices, global devices, and device IDs, see the Sun Cluster concepts documentation.

While performing this procedure, ensure that you use the correct controller number. Controller numbers can be different on each node.
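
For example, you can run the cfgadm -al command on each node and compare the controller (cN) numbers that appear in the attachment-point IDs. The controller that corresponds to the storage array on one node might have a different number on another node.


# cfgadm -al
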


Note –

If the following warning message is displayed, ignore the message. Continue with the next step.


vxvm:vxconfigd: WARNING: no transactions on slave
vxvm:vxassist: ERROR:  Operation must be executed on master

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. On each node, check the status of the failed disk drive.


    # vxdisk list
    
  2. On each node, identify the failed disk's instance number.

    You use this instance number in Step 12.


    # ls -l /dev/dsk/cWtXdYsZ
    # cat /etc/path_to_inst | grep "device_path"
    

    Note –

    Ensure that you do not use the ls command output as the device path. The following example demonstrates how to find the device path and how to identify the sd instance number by using the device path.


    # ls -l /dev/dsk/c4t0d0s2
    lrwxrwxrwx 1 root root 40 Jul 31 12:02 /dev/dsk/c4t0d0s2 ->
    ../../devices/pci@4,2000/scsi@1/sd@0,0:c
    # cat /etc/path_to_inst | grep "/pci@4,2000/scsi@1/sd@0,0"
    "/node@2/pci@4,2000/scsi@1/sd@0,0" 60 "sd"

    If you are using Solaris 10, the node@2 portion of this output is not present. Solaris 10 does not add this prefix for cluster nodes.


  3. On one node, identify the failed disk's disk ID number.

    You will use the disk ID number in Step 15 and Step 16.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice show -v cNtXdY
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -o diskid -l cNtXdY
      
  4. If the disk drive you are removing is configured as a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether a quorum device will be affected by this procedure, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.

  5. On each node, remove the failed disk from Veritas Volume Manager control.


    # vxdisk offline cXtYdZ
    # vxdisk rm cXtYdZ
    
  6. On each node, verify that you removed the disk entry.


    # vxdisk list
    
  7. Remove the failed disk from the storage array.

    For the procedure about how to remove a disk drive, see your storage documentation. For a list of storage documentation, see Related Documentation.

  8. On each node, unconfigure the failed disk.


    # cfgadm -c unconfigure cX::dsk/cXtYdZ
    
  9. On each node, remove the paths to the disk drive that you removed.


    # devfsadm -C
    
  10. On each node, verify that you removed the disk.


    # cfgadm -al | grep cXtYdZ
    # ls /dev/dsk/cXtYdZ
    
  11. Add the new disk to the storage array.

    For the procedure about how to add a disk drive, see your storage documentation. For a list of storage documentation, see Related Documentation.

  12. On each node, configure the new disk.

    Use the instance number that you identified in Step 2.


    # cfgadm -c configure cX::sd_instance_Number
    # devfsadm
    
  13. Verify that you added the disk.


    # ls /dev/dsk/cXtYdZ
    
  14. On one node, update the device ID numbers.

    Use the device ID number that you identified in Step 3.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice repair
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -R deviceID
      

      Note –

      After running scdidadm -R on the first node, each subsequent node that you run the command on might display a warning. Ignore this warning.


  15. Verify that you updated the disk's device ID number.

    Use the device ID number that you identified in Step 3.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice show | grep DID_number
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -L | grep DID_number
      
  16. Verify that the disk ID number is different from the disk ID number that you identified in Step 3. A different disk ID confirms that the cluster recognizes the replacement disk drive.

  17. On each node, add the new disk to the Veritas Volume Manager database.


    # vxdctl enable
    
  18. Verify that you added the new disk.


      # vxdisk list | grep cXtYdZ
    
  19. Determine the master node.


    # vxdctl -c mode
    
  20. Perform disk recovery tasks on the master node.

    Depending on your configuration and volume layout, select the appropriate Veritas Volume Manager menu item to recover the failed disk.


    # vxdiskadm
    # vxtask list
    

How to Upgrade Disk Drive Firmware

Upgrade your disk drive firmware if you want to apply bug fixes or enable new functionality.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If the disk drive on which you want to upgrade the firmware is configured as a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether a quorum device will be affected by this procedure, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.

  2. If possible, back up the metadevices or volumes that reside in the disk drive.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  4. Install the firmware.

    For more information about how to install firmware, see the patch installation instructions.
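
    For example, firmware patches are typically applied with the patchadd command. The patch directory in this sketch is hypothetical; use the patch ID from your patch installation instructions.


    # patchadd /var/tmp/111111-01
    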

  5. Perform volume management administration to add the disk drive back into the configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Upgrade Host Adapter Firmware

Upgrade your host adapter firmware if you want to apply bug fixes or enable new functionality.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Determine the resource groups and device groups that are running on the node on which you plan to upgrade the host adapter firmware.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  2. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeA
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h NodeA
      
  3. Perform the firmware upgrade.

    For the procedure about how to upgrade your host adapter firmware, see the patch documentation.

  4. (Optional) Return the device groups back to Node A.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n NodeA devicegroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h NodeA
      
  5. (Optional) Return the resource groups back to Node A.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n NodeA resourcegroup
      
      NodeA

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup

      The resource group that is returned to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h NodeA