Sun Cluster 3.1 - 3.2 With Sun StorEdge A3500FC System Manual for Solaris OS

Maintaining Storage Systems

This section contains the procedures about how to maintain a storage system in a Sun Cluster environment.

Table 1–3 lists the procedures about how to maintain a storage system. Some of these procedures are performed in the same way as in a noncluster environment; this section references those procedures but does not contain them.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. If a device ID has changed, the following error message appears on your console when you check the device ID configuration by running the cldevice check or scdidadm -c command.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
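For example, if the affected device were c1t3d0 (a hypothetical path; substitute the device reported in the error message), the check and repair might look like the following:


# cldevice check
# cldevice repair /dev/dsk/c1t3d0

On Sun Cluster 3.1, the equivalent repair command is scdidadm -R /dev/dsk/c1t3d0.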


Table 1–3 Tasks: Maintaining a Storage System

Task: Remove a storage system.
Information: How to Remove a Storage System

Task: Replace a failed controller module, or restore an offline controller module.
Information: How to Replace a Failed Controller or Restore an Offline Controller

Task: Upgrade controller module firmware and the NVSRAM file.
Information: How to Upgrade Controller Module Firmware in a Running Cluster

Task: Add a disk drive.
Information: How to Add a Disk Drive in a Running Cluster

Task: Replace a disk drive.
Information: How to Replace a Failed Disk Drive in a Running Cluster

Task: Remove a disk drive.
Information: How to Remove a Disk Drive From a Running Cluster

Task: Upgrade disk drive firmware.
Information: How to Upgrade Disk Drive Firmware in a Running Cluster

Task: Replace a host adapter in a node.
Information: How to Replace a Host Adapter

StorEdge A3500FC System FRUs

The following administrative tasks require no cluster-specific procedures, with one exception: you must shut down the cluster first. Shut down the cluster, and then see the Sun StorEdge A3500/A3500FC Controller Module Guide, the Sun StorEdge A1000 and D1000 Installation, Operations, and Service Manual, and the Sun StorEdge Expansion Cabinet Installation and Service Manual for these procedures. See the Sun Cluster system administration documentation for procedures about how to shut down a cluster.

The following administrative tasks require no cluster-specific procedures. For these procedures, see the Sun StorEdge A3500/A3500FC Controller Module Guide, the Sun StorEdge RAID Manager User’s Guide, the Sun StorEdge RAID Manager Release Notes, the Sun StorEdge FC-100 Hub Installation and Service Manual, and the documentation that shipped with your FC hub or FC switch.

How to Remove a Storage System

Use this procedure to remove a storage system from a running cluster.


Caution –

This procedure removes all data that is on the storage system you remove.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. Migrate any Oracle Real Application Clusters tables, data services, or volumes off the storage system.

  2. Halt all activity to the controller module.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide and your operating system documentation.

  3. If a volume manager manages any of the LUNs on the controller module you are removing, remove the LUNs from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

    You must completely remove LUNs that were managed by Veritas Volume Manager from Veritas Volume Manager control before you can delete the LUNs.


    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    
  4. Disconnect all cables from the storage system. Remove the hardware from your cluster.

  5. From one node, delete the LUN.

    For the procedure about how to delete a LUN, see the Sun StorEdge RAID Manager User’s Guide.

  6. Remove the paths to the LUNs you are deleting.


    # rm /dev/rdsk/cNtXdY*
    # rm /dev/dsk/cNtXdY*
    
    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
    
  7. Use the lad command to determine the alternate paths to the LUNs you are deleting.

    The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.

    For example, consider the following configuration:


    # lad
    c0t5d0 1T93600714 LUNS: 0 1
    c1t4d0 1T93500595 LUNS: 2

    The alternate paths would be the following.


    /dev/osa/dev/dsk/c1t4d1*
    /dev/osa/dev/rdsk/c1t4d1*
  8. Remove the alternate paths to the LUNs you are deleting.


    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
    
  9. On all nodes, remove references to the storage system.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
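    On either release, you can optionally confirm that no stale DID instances remain. A minimal check might look like the following; use the command that matches your release:


      # cldevice list -v
      # scdidadm -l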
      
  10. If you are removing the last StorEdge A3500FC controller module from a hub or FC switch in your cluster, remove the hub or FC switch hardware and cables from your cluster.


    Note –

    If you are using your StorEdge A3500FC storage array in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


    If you are not removing the last controller module, skip to Step 12.

  11. If you plan to remove a host adapter that has an entry in the nvramrc script, delete the references to the host adapters in the nvramrc script.


    Note –

    If there are no other parallel SCSI devices connected to the nodes, you can delete the contents of the nvramrc script and, at the OpenBoot PROM, set use-nvramrc? to false.
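    The following is a minimal sketch of disabling the script at the OpenBoot PROM; it assumes the node is at the ok prompt and that no other devices depend on nvramrc:


    ok printenv use-nvramrc?
    ok setenv use-nvramrc? false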


  12. Remove any unused host adapter from nodes that were attached to the storage system.

    1. Shut down and power off Node A, the node from which you are removing a host adapter.

      For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

    2. Remove the host adapter from Node A.

      For the procedure about how to remove a host adapter, see the documentation that shipped with your node hardware.

    3. Perform a reconfiguration boot to create the new Solaris device files and links, as shown in the example after these steps.

    4. Repeat Step a through Step c for Node B that was attached to the storage system.
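    The reconfiguration boot in Step c might look like the following minimal sketch from the OpenBoot PROM; your boot device and arguments might differ:


    ok boot -r

    Alternatively, from a running Solaris system you can schedule device reconfiguration for the next boot:


    # touch /reconfigure
    # init 6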

  13. Switch the cluster back online.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup online +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -Z
      
  14. If you are removing the last storage system from your cluster, remove the software packages.

    For the procedure about how to remove software packages, see the documentation that shipped with your storage system.

How to Replace a Failed Controller or Restore an Offline Controller

Use this procedure to replace a controller, or to restore an offline controller.

For conceptual information on SCSI reservations and failure fencing, see your Sun Cluster concepts documentation.


Note –

If you want to create a SAN by using two FC switches and Sun SAN software, see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  1. On both nodes, to prevent LUNs from being automatically assigned to the controller that is being brought online, set the System_LunReDistribution parameter in the /etc/raid/rmparams file to false.


    Caution –

    You must set the System_LunReDistribution parameter in the /etc/raid/rmparams file to false so that no LUNs are assigned to the controller being brought online. After you verify in Step 5 that the controller has the correct SCSI reservation state, you can balance LUNs between both controllers.


    For the procedure about how to modify the rmparams file, see the Sun StorEdge RAID Manager Installation and Support Guide.
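    To confirm the current setting, you can search the file directly. This is an optional check; the exact line format can vary by RAID Manager version:


    # grep System_LunReDistribution /etc/raid/rmparams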

  2. Restart the RAID Manager daemon.


    # /etc/init.d/amdemon stop
    # /etc/init.d/amdemon start
    
  3. If you have a failed controller, replace the failed controller with a new controller.


    Note –

    Do not bring the controller online.


    For the procedure about how to replace controllers, see the Sun StorEdge A3500/A3500FC Controller Module Guide and the Sun StorEdge RAID Manager Installation and Support Guide for additional considerations.

    If your controller module is offline, but does not have a failed controller, proceed to Step 4.

  4. On one node, use the RAID Manager GUI's Recovery application to bring the controller back online.


    Note –

    You must use the RAID Manager GUI's Recovery application to bring the controller online. Do not use the Redundant Disk Array Controller Utility (rdacutil) because this utility ignores the value of the System_LunReDistribution parameter in the /etc/raid/rmparams file.


    For information on the Recovery application, see the Sun StorEdge RAID Manager User’s Guide. If you have problems with bringing the controller online, see the Sun StorEdge RAID Manager Installation and Support Guide.

  5. On one node that is connected to the storage system, verify that the controller has the correct SCSI reservation state.

    Run the repair device command on LUN 0 of the controller you want to bring online.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice repair /dev/dsk/cNtXdY
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -R /dev/dsk/cNtXdY
      
  6. Set the controller to active/active mode. Assign LUNs to the controller.

    For more information on controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.

  7. Reset the System_LunReDistribution parameter in the /etc/raid/rmparams file to true.

    For the procedure about how to change the rmparams file, see the Sun StorEdge RAID Manager Installation and Support Guide.

  8. Restart the RAID Manager daemon.


    # /etc/init.d/amdemon stop
    # /etc/init.d/amdemon start
    

How to Upgrade Controller Module Firmware in a Running Cluster

Use this procedure to upgrade firmware in a controller module in a running cluster. Use either the online method or the offline method to upgrade the firmware. The method that you choose depends on whether you are upgrading the NVSRAM file.

  1. Determine the correct procedure for your upgrade.

    • If you are not upgrading the NVSRAM file, you can use the online method.

      Upgrade the firmware by using the online method, as described in the Sun StorEdge RAID Manager User’s Guide. No special steps are required for a cluster environment.

    • If you are upgrading the NVSRAM file, you must use an offline method. Use one of the following procedures.

      • If the data on your controller module is mirrored on another controller module, use the procedure in Step 2.

      • If the data on your controller module is not mirrored on another controller module, use the procedure in Step 3.

  2. Use this step if you are upgrading the NVSRAM and other firmware files on a controller module. This controller module must have mirrored data.

    1. Halt all activity to the controller module.

      For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

    2. Update the firmware files by using the offline method, as described in the Sun StorEdge RAID Manager User’s Guide.

    3. Restore all activity to the controller module.

      For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

      This step completes the firmware upgrade.

  3. Use this step if you are upgrading the NVSRAM and other firmware files on a controller module. This controller module must not have mirrored data.

    1. Shut down the entire cluster.

      For the procedure about how to shut down a cluster, see your Sun Cluster system administration documentation.

    2. Boot one node that is attached to the controller module into noncluster mode.

      For the procedure about how to boot a node in noncluster mode, see your Sun Cluster system administration documentation.

    3. Update the firmware files by using the offline method, as described in the Sun StorEdge RAID Manager User’s Guide.

    4. Boot both nodes into cluster mode.

      For more information about how to boot nodes, see your Sun Cluster system administration documentation.

      This step completes the firmware upgrade.

How to Add a Disk Drive in a Running Cluster

Use this procedure to add a disk drive to a storage array that is in a running cluster.


Caution –

If the disk drive that you are adding was previously owned by another controller module, reformat the disk drive to wipe clean the old DacStore information before you add the disk drive to this storage array.


  1. Install the new disk drive to the storage array.

    For the procedure about how to install a disk drive, see the Sun StorEdge D1000 Storage Guide.

  2. Allow the disk drive to spin up, which takes approximately 30 seconds.

  3. Run Health Check to ensure that the new disk drive is not defective.

    For instructions about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID Manager User’s Guide.

  4. Fail the new drive, then revive the drive to update DacStore on the drive.

    For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User’s Guide.

  5. Repeat Step 1 through Step 4 for each disk drive you are adding.

See Also

To create LUNs for the new drives, see How to Create a LUN for more information.

How to Replace a Failed Disk Drive in a Running Cluster

Use this procedure to replace a failed disk drive in a running cluster.

  1. If replacing the disk drive affects the availability of any LUNs, remove those LUNs from volume management control.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
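    For example, with Solaris Volume Manager you might remove the LUN's DID device from its diskset, assuming no volumes still use the device, and add it back in Step 7. The diskset name oradg and the device /dev/did/rdsk/d4 below are hypothetical placeholders:


    # metaset -s oradg -d /dev/did/rdsk/d4

    In Step 7, the corresponding command to return the device would be metaset -s oradg -a /dev/did/rdsk/d4.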

  2. Replace the disk drive in the storage array.

    For the procedure about how to replace a disk drive, see the Sun StorEdge D1000 Storage Guide.

  3. Run Health Check to ensure that the new disk drive is not defective.

    For the procedure about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID Manager User’s Guide.

  4. If the failed drive does not belong to a device group, skip to Step 6.

  5. If the failed drive belongs to a device group, reconstruction is started automatically.

    If reconstruction does not start automatically for any reason, then select Reconstruct from the Manual Recovery application. Do not select Revive. When reconstruction is complete, skip to Step 7.

  6. Fail the new drive, then revive the drive to update DacStore on the drive.

    For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User’s Guide.

  7. If you removed LUNs from volume management control in Step 1, return the LUNs to volume management control.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Remove a Disk Drive From a Running Cluster

Use this procedure to remove a disk drive from a running cluster.

  1. Remove the logical unit number (LUN) that is associated with the disk drive you are removing.

    For the procedure about how to remove a LUN, see How to Delete a LUN.

  2. Remove the disk drive from the storage array.

    For the procedure about how to remove a disk drive, see the Sun StorEdge D1000 Storage Guide.


    Caution –

    After you remove the disk drive, install a dummy drive to maintain proper cooling.


How to Upgrade Disk Drive Firmware in a Running Cluster


Caution –

You must be a Sun service provider to perform disk drive firmware updates. If you need to upgrade drive firmware, contact your Sun service provider.


How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you will use it in Step 13 and Step 14 of this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA 
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  4. Halt all I/O activity on the affected controller module.

    For instructions, see the Sun StorEdge RAID Manager User's Guide.

  5. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  6. Power off Node A.

  7. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  8. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 9.

    If you do not need to upgrade firmware, skip to Step 10.
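    For example, on a SPARC based node you might boot into noncluster mode from the OpenBoot PROM. This is a minimal sketch; your boot device and arguments might differ:


    ok boot -x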

  9. Upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
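    For example, applying a single patch from a local directory might look like the following sketch. The patch ID 123456-78 and the download location are hypothetical:


    # patchadd /var/tmp/123456-78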

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  11. Restart I/O activity on the affected controller module.

    For instructions, see the Sun StorEdge RAID Manager User's Guide and your operating system documentation.

  12. Rebalance LUNs that are running on the affected controller module.

    For instructions, see the Sun StorEdge RAID Manager User's Guide.

  13. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  14. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      -n nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename