Sun Cluster 3.0-3.1 With Sun StorEdge 6120 Array Manual for Solaris OS

Maintaining Storage Arrays

This section contains the procedures about how to maintain a storage array in a running cluster. Table 1–3 lists these procedures.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.
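
For example, for a single affected device (the device path in this example is hypothetical):

# scdidadm -R /dev/rdsk/c1t3d0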


Table 1–3 Task Map: Maintaining a Storage Array

Task: Upgrade storage array firmware.
Information: How to Upgrade Storage Array Firmware

Task: Remove a storage array or partner group.
Information: How to Remove a Single-Controller Configuration; How to Remove a Dual-Controller Configuration

Task: Replace a node-to-switch component.

  • Node-to-switch fiber-optic cable

  • FC host adapter

  • FC switch

  • GBIC or SFP

Information: Replacing a Node-to-Switch Component

Task: Replace a node's host adapter.
Information: How to Replace a Host Adapter

Task: Add a node to the storage array.
Information: Sun Cluster system administration documentation

Task: Remove a node from the storage device.
Information: Sun Cluster system administration documentation

StorEdge 6120 Array FRUs

Administrative tasks for storage array FRUs require no cluster-specific procedures. See the Sun StorEdge 6020 and 6120 Array System Manual for these procedures.

How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.


Steps
  1. Stop all I/O to the storage arrays you are upgrading.

  2. Apply the controller, disk drive, and loop-card firmware patches.

    For the list of required patches, see the Sun StorEdge 6120 Array Release Notes. For the procedure about how to apply firmware patches, see the firmware patch README file. For the procedure about how to verify the firmware level, see the Sun StorEdge 6020 and 6120 Array System Manual.

  3. Confirm that all storage arrays that you upgraded are visible to all nodes.


    # luxadm probe
    
  4. Restart all I/O to the storage arrays.

    You stopped I/O to these storage arrays in Step 1.

How to Remove a Single-Controller Configuration

Use this procedure to permanently remove a storage array that resides in a single-controller configuration from a running cluster. This procedure also gives you the option to remove the host adapters from the nodes that are connected to the storage array that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on the storage array that you are removing.


Steps
  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Detach the submirrors from the storage array that you are removing in order to stop all I/O activity to the storage array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you use this information in Step 18 of this procedure to return resource groups and device groups to these nodes.


    # scstat
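
    To limit the output, you can list the resource groups and the device groups separately:

    # scstat -g
    # scstat -D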
    
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
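
    For example, from the console of Node A (a minimal sketch; options can vary with your configuration):

    # shutdown -y -g0 -i0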

  6. Is the storage array that you are removing the last storage array that is connected to Node A?

    • If yes, disconnect the fiber-optic cable between Node A and the FC switch that is connected to this storage array. Afterward, disconnect the fiber-optic cable between the FC switch and this storage array.

    • If no, proceed to Step 7.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. Do you want to remove the host adapter from Node A?

    • If yes, power off Node A.

    • If no, skip to Step 10.

  8. Remove the host adapter from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  9. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.
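
    On a SPARC node, one approach is to send a break to reach the OpenBoot PROM ok prompt and then disable automatic booting (a sketch; console access methods vary):

    ok setenv auto-boot? false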

  10. Boot Node A into cluster mode.

    For the procedure about how to boot nodes, see your Sun Cluster system administration documentation.
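
    For example, on a SPARC node at the OpenBoot PROM prompt, a configured cluster node boots into cluster mode by default:

    ok boot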

  11. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  12. Is the storage array that you are removing the last storage array that is connected to the FC switch?

    • If yes, disconnect the fiber-optic cable that connects this FC switch and Node B.

    • If no, proceed to Step 13.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  13. Do you want to remove the host adapter from Node B?

    • If yes, power off Node B.

    • If no, skip to Step 16.

  14. Remove the host adapter from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  15. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  16. Boot Node B into cluster mode.

    For the procedure about how to boot nodes, see your Sun Cluster system administration documentation.

  17. On all nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  18. Return the resource groups and device groups that you identified in Step 4 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
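
    For example, with a hypothetical resource group and device group:

    # scswitch -z -g oracle-rg -h phys-schost-1
    # scswitch -z -D dg-schost-1 -h phys-schost-1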
    

How to Remove a Dual-Controller Configuration

Use this procedure to remove a dual-controller configuration from a running cluster. This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on each partner group that you are removing.


Steps
  1. If necessary, back up all database tables, data services, and volumes that are associated with each partner group that you are removing.

  2. If necessary, detach the submirrors from each storage array or partner group that you are removing in order to stop all I/O activity to the storage array or partner group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Remove references to each LUN that belongs to the storage array or partner group that you are removing.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 19 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  6. Disconnect the fiber-optic cables that connect both storage arrays to the FC switches. Then disconnect the Ethernet cables.

  7. Is any storage array that you are removing the last storage array that is connected to an FC switch that is attached to Node A?

    • If no, skip to Step 11.

    • If yes, disconnect the fiber-optic cable between Node A and the FC switch that was connected to this storage array.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  8. Do you want to remove the host adapters from Node A?

    • If no, skip to Step 11.

    • If yes, power off Node A.

  9. Remove the host adapters from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter and nodes.

  10. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.

  11. Boot Node A into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  12. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  13. Is any storage array that you are removing the last storage array that is connected to an FC switch that is attached to Node B?

    • If no, proceed to Step 14.

    • If yes, disconnect the fiber-optic cable that connects this FC switch to Node B.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  14. Do you want to remove the host adapters from Node B?

    • If no, skip to Step 17.

    • If yes, power off Node B.

  15. Remove the host adapters from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  16. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  17. Boot Node B into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  18. On all nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  19. Return the resource groups and device groups that you identified in Step 4 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note –

Node-to-switch components that are covered by this procedure include the following components:

  • Node-to-switch fiber-optic cable

  • FC switch

  • GBIC or SFP

For the procedure about how to replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

  • If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.

  • If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

Steps
  1. Is your configuration active-passive?

    If yes, and the active path is the path that needs a component replaced, make that path passive.

  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Steps
  1. Check if the physical data path failed.

    If no, proceed to Step 2.

    If yes:

    1. Replace the component.

      Refer to your hardware documentation for any component-specific instructions.

    2. Fix the volume manager error that was caused by the failed data path.

    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  2. Determine the resource groups and device groups that are running on Node A.


    # scstat
    
  3. Move all resource groups and device groups to another node.


    # scswitch -S -h from-node
    
  4. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  5. (Optional) If necessary, return the resource groups and device groups that you identified in Step 2 to Node A.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use this information in Step 9 of this procedure to return resource groups and device groups to Node A.


    # scstat
    
  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  3. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  4. Power off Node A.

  5. Replace the failed host adapter.

    For the procedure about how to remove and add host adapters, see the documentation that shipped with your nodes.

  6. Do you need to upgrade the node's host adapter firmware?

    • If yes, boot Node A into noncluster mode, as shown in the example after this step. Proceed to Step 7.

      For more information about how to boot nodes, see your Sun Cluster system administration documentation.

    • If no, proceed to Step 8.
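
    For example, on a SPARC node, the -x option at the OpenBoot PROM prompt boots Solaris outside of cluster mode:

    ok boot -x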

  7. Upgrade the host adapter firmware on Node A.

    PatchPro is a patch-management tool that eases the selection and download of the patches that are required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool, especially for Sun Cluster, that makes the installation of patches easier, and an Expert Mode tool that helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolveSM Online site at http://sunsolve.ebay.sun.com.
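
    After you download a patch, you typically apply it with the standard Solaris patchadd utility (the patch ID in this example is hypothetical):

    # patchadd /var/tmp/111111-01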

  8. Boot Node A into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  9. Return the resource groups and device groups you identified in Step 1 to Node A.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.