Sun Cluster 3.1 - 3.2 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual

How to Replace a Host Adapter


Note –

Several steps in this procedure require you to halt I/O activity. To halt I/O activity, take the controller module offline by using the RAID Manager GUI, as described in the manual recovery procedure in the Sun StorEdge RAID Manager User’s Guide.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Determine the resource groups and device groups that are running on Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n nodename
      # cldevicegroup status -n nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      

    Note the device groups, the resource groups, and the node list for the resource groups. You will need this information to restore the cluster to its original configuration in Step 25 and Step 26 of this procedure.
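
    For illustration, the Step 1 commands can be scripted. The node name phys-schost-1 below is a hypothetical example, not a name from this manual, and the guard lets the sketch run even on a machine where the Sun Cluster framework is not installed:

```shell
# Hypothetical node name; substitute the node being serviced.
NODE=phys-schost-1

# Sun Cluster 3.2 status commands for the node (Step 1).
RG_CMD="clresourcegroup status -n $NODE"
DG_CMD="cldevicegroup status -n $NODE"

# Run each command only where the cluster framework is installed;
# otherwise just display what would be run.
for cmd in "$RG_CMD" "$DG_CMD"; do
  if command -v "${cmd%% *}" >/dev/null 2>&1; then
    eval "$cmd"
  else
    echo "would run: $cmd"
  fi
done
```

    Keep the output of both commands; it supplies the group names and node list needed later in the procedure.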

  2. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate fromnode
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h fromnode
      
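    On a live cluster you would run the evacuation command directly. The following sketch, with a hypothetical node name phys-schost-1, shows both the 3.2 and 3.1 forms and guards the call so it is harmless on a non-cluster machine:

```shell
# Hypothetical node name; substitute the node being serviced.
NODE=phys-schost-1

# Sun Cluster 3.2 form, and the equivalent Sun Cluster 3.1 form.
EVAC_CMD="clnode evacuate $NODE"
SW_CMD="scswitch -S -h $NODE"

# Execute the 3.2 command only if the cluster framework is present;
# otherwise display both forms for reference.
if command -v clnode >/dev/null 2>&1; then
  $EVAC_CMD
else
  echo "3.2: $EVAC_CMD"
  echo "3.1: $SW_CMD"
fi
```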
  3. Without powering off the node, shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation. For a list of Sun Cluster documentation, see Related Documentation.

  4. From Node B, halt I/O activity to SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  5. From the controller module end of the SCSI cable, disconnect the SCSI bus A cable. This cable connects the controller module to Node A. Afterward, replace this cable with a differential SCSI terminator.

  6. Restart I/O activity on SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  7. If servicing the failed host adapter affects SCSI bus B, proceed to Step 9.

  8. If servicing the failed host adapter does not affect SCSI bus B, skip to Step 12.

  9. From Node B, halt I/O activity to the controller module on SCSI bus B.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  10. From the controller module end of the SCSI cable, disconnect the SCSI bus B cable. This cable connects the controller module to Node A. Afterward, replace this cable with a differential SCSI terminator.

  11. Restart I/O activity on SCSI bus B.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  12. Power off Node A.

  13. Replace Node A's host adapter.

    For the procedure about how to replace a host adapter, see the documentation that shipped with your node hardware.

  14. Power on Node A, but do not allow the node to boot. If necessary, halt the system so that the node stops at the OpenBoot PROM prompt.

  15. From Node B, halt I/O activity to the controller module on SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  16. Remove the differential SCSI terminator from SCSI bus A. Afterward, reinstall the SCSI cable to connect the controller module to Node A.

  17. Restart I/O activity on SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  18. Did you install a differential SCSI terminator to SCSI bus B in Step 10?

    • If no, skip to Step 21.

    • If yes, halt I/O activity on SCSI bus B, then continue with Step 19.

  19. Remove the differential SCSI terminator from SCSI bus B. Afterward, reinstall the SCSI cable to connect the controller module to Node A.

  20. Restart I/O activity on SCSI bus B.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  21. Bring the controller module online.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  22. Rebalance all logical unit numbers (LUNs).

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  23. Boot Node A into cluster mode.

  24. (Optional) Return resource groups and device groups to Node A, as described in the remaining steps.

  25. If you moved device groups off their original node in Step 2, restore the device groups that you identified in Step 1 to their original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      

    In these commands, devicegroup is one or more device groups that are returned to the node.
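
    As an illustration of this step, assuming a hypothetical node phys-schost-1 and hypothetical device groups dg-a1000-1 and dg-a1000-2 (names not from this manual), the two command forms can be built as follows:

```shell
# Hypothetical names; substitute the node and device groups you
# recorded in Step 1.
NODE=phys-schost-1
DGS="dg-a1000-1 dg-a1000-2"

# Sun Cluster 3.2 accepts several device groups in one invocation.
DG_RESTORE="cldevicegroup switch -n $NODE $DGS"
echo "3.2: $DG_RESTORE"

# Sun Cluster 3.1 form, shown here as one scswitch call per group
# for clarity.
for dg in $DGS; do
  echo "3.1: scswitch -z -D $dg -h $NODE"
done
```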

  26. If you moved resource groups off their original node in Step 2, restore the resource groups that you identified in Step 1 to their original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:



      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      

    In these commands, resourcegroup is one or more resource groups that are returned to the node.
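
    As an illustration of this final step, assuming a hypothetical node phys-schost-1 and hypothetical resource groups rg-nfs and rg-oracle (names not from this manual), the two command forms can be built as follows:

```shell
# Hypothetical names; substitute the node and resource groups you
# recorded in Step 1.
NODE=phys-schost-1
RGS="rg-nfs rg-oracle"

# Sun Cluster 3.2 accepts several resource groups in one invocation.
RG_RESTORE="clresourcegroup switch -n $NODE $RGS"
echo "3.2: $RG_RESTORE"

# Sun Cluster 3.1 form, shown here as one scswitch call per group
# for clarity.
for rg in $RGS; do
  echo "3.1: scswitch -z -g $rg -h $NODE"
done
```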