Sun Cluster 3.1 - 3.2 With Sun StorEdge 6320 System Manual for Solaris OS

Chapter 1 Installing and Maintaining a Sun StorEdge 6320 System

This chapter contains the procedures for installing, configuring, and maintaining a Sun StorEdge 6320 system. These procedures are specific to a Sun Cluster environment.

This chapter contains the following main topics:

• Installing Storage Systems

• Configuring Storage Systems

• Maintaining Storage Systems

For detailed information about storage system architecture, features, and configuration utilities, see the Sun StorEdge 6320 System Reference and Service Manual and the Sun StorEdge 6320 System Installation Guide.

Installing Storage Systems

This section contains the procedures listed in Table 1–1.

Table 1–1 Task Map: Installing Storage Systems

Task: Install storage systems in a new cluster, before the OS and Sun Cluster software are installed.
Information: How to Install Storage Systems in a New Cluster

Task: Add storage systems to an existing cluster.
Information: Adding Storage Systems to an Existing Cluster

You can install your storage system in several different configurations. Evaluate your needs and determine which configuration is best for your situation. See the Sun StorEdge 6320 System Installation Guide and Installing Storage Arrays in Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS.

How to Install Storage Systems in a New Cluster

Use this procedure to install a storage system before you install the Solaris operating environment and Sun Cluster software on your nodes. To add a storage system to an existing cluster, use the procedure in Adding Storage Systems to an Existing Cluster.

  1. If necessary, install host adapters in the nodes to be connected to the storage system.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. (StorEdge 6320SL storage system ONLY) Install the Fibre Channel (FC) switch for the storage system if you do not have a switch installed.


    Note –

    In a StorEdge 6320SL storage system, the customer provides the switch.


    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  3. Unpack, place, and level the storage system.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  4. Install the system power cord and the system grounding strap.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  5. (StorEdge 6320SL storage system ONLY) Connect the storage arrays to the FC switches by using fiber-optic cables.


    Caution –

    Do not connect the switch's Ethernet port to the storage system's private LAN.


    For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.

  6. Power on the storage system and the nodes.

    For instructions about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide. For instructions about how to power on a node, see the documentation that shipped with your node hardware.

  7. Configure the service processor.

    For more information, see the Sun StorEdge 6320 System Installation Guide.

  8. Create a volume.

    For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  9. (Optional) Specify initiator groups for the volume.

    For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.

  10. If necessary, reconfigure the storage system's FC switches to ensure that all nodes can access each storage array.

    The following configurations might prevent some nodes from accessing each storage array in the cluster.

    • Zone configuration

    • Multiple clusters that use the same switch

    • Unconfigured ports or misconfigured ports

  11. On all nodes, install the Solaris operating system and apply the required Solaris patches for Sun Cluster software and storage array support.

    For the procedure about how to install the Solaris operating environment, see How to Install Solaris Software in Sun Cluster Software Installation Guide for Solaris OS.

  12. On all nodes, install any patches or software that are required for Solaris I/O multipathing support, and enable multipathing.

    For the procedure about how to install the Solaris I/O multipathing software, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.

  13. Configure the STMS paths.


    # cfgadm -c configure controllerinstance
    

    For the procedure about how to configure STMS paths for the Solaris 9 OS, see the Sun StorEdge Traffic Manager Installation and Configuration Guide. To configure multipathing for the Solaris 10 OS, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
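
    For example, you might first list the Fibre Channel fabric attachment points and then configure each controller that connects to the storage system. The controller number in the following sketch (c3) is hypothetical; use the controller numbers that appear on your node.

    # cfgadm -al | grep fc-fabric
    c3                     fc-fabric     connected    unconfigured  unknown
    # cfgadm -c configure c3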

  14. Update the Solaris device files and links.


    # devfsadm
    

    Note –

    You can wait for the devfsadm daemon to automatically update the Solaris device files and links, or you can run the devfsadm command to immediately update the Solaris device files and links.


  15. Confirm that all storage arrays that you installed are visible to all nodes.


    # luxadm probe 
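
    The output lists each Fibre Channel device that the node can see. The following output is only an illustration; the node WWN and the logical device path are hypothetical.

    Found Fibre Channel device(s):
      Node WWN:50020f2300001234  Device Type:Disk device
        Logical Path:/dev/rdsk/c4t50020F2300001234d0s2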
    
See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

Adding Storage Systems to an Existing Cluster

Use the procedures in this section to add a new storage system to a running cluster. To install a storage system in a new Sun Cluster configuration that is not running, use the procedure in How to Install Storage Systems in a New Cluster.

This procedure defines Node N as the node to be connected to the storage system you are adding and the node with which you begin working.

How to Perform Initial Configuration Tasks on the Storage Array

  1. (StorEdge 6320SL storage system ONLY) Install the Fibre Channel (FC) switch for the storage system if you do not have a switch installed.


    Note –

    In a StorEdge 6320SL storage system, the customer provides the switch.


    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  2. Configure the service processor.

    For more information, see the Sun StorEdge 6320 System Installation Guide.

  3. Create a volume.

    For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  4. (Optional) Specify initiator groups for the volume.

    For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.

  5. Unpack, place, and level the storage system.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  6. Install the system power cord and the system grounding strap.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  7. (StorEdge 6320SL storage system ONLY) Connect the storage arrays to the FC switches by using fiber-optic cables.


    Caution –

    Do not connect the switch's Ethernet port to the storage system's private LAN.


    For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.

  8. Power on the new storage system.


    Note –

    The storage arrays in your system might require several minutes to boot.


    For the procedure about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide.

  9. If necessary, reconfigure the storage system's FC switches to ensure that all nodes can access each storage array.

    The following configurations might prevent some nodes from accessing each storage array in the cluster:

    • Zone configuration

    • Multiple clusters that use the same switch

    • Unconfigured ports or misconfigured ports

How to Connect the Node to the FC Switches

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 20 and Step 21 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +  
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -S -h nodename
      
  3. If you do not need to install one or more host adapters in Node N, skip to Step 10.

    To install host adapters, proceed to Step 4.

  4. If the host adapter that you are installing is the first host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters.

    If this is not the first host adapter, skip to Step 6.

  5. If the required support packages are not already installed, install them.

    The support packages are located in the Product directory of the Solaris CD-ROM.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  7. Install one or more host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
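
    For example, on a SPARC based node you can boot into noncluster mode from the OpenBoot PROM prompt, as shown in the following sketch. On an x86 based node that runs the Solaris 10 OS, you add the -x option to the kernel boot line in the GRUB menu instead.

    ok boot -x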

  9. If necessary, upgrade the host adapter firmware on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware or the Sun StorEdge 6320 System Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6320 System Installation Guide.

  12. Install the required Solaris patches for storage array support on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  13. To create the new Solaris device files and links, perform a reconfiguration boot on Node N by adding -r to your boot instruction.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  14. Configure any unconfigured STMS paths.

    1. Determine whether any devices are in an unconfigured state.


      # cfgadm -al | grep disk
      
    2. If any devices are in an unconfigured state, configure the STMS paths.


      # cfgadm -c configure controllerinstance
      

      For the procedure about how to configure STMS paths, see the Sun StorEdge Traffic Manager Installation and Configuration Guide.


    Note –

    You need to reboot if the cfgadm command does not configure the unconfigured devices that are associated with the volume you are creating. See the Sun StorEdge Traffic Manager Installation and Configuration Guide for more information.


  15. Update the Solaris device files and links.


    # devfsadm
    

    Note –

    You can wait for the devfsadm daemon to automatically update the Solaris device files and links, or you can run the devfsadm command to immediately update the Solaris device files and links.


  16. On Node N, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  17. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  18. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new storage array.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeN -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
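
    With either command, the output maps each DID instance to its device path on the node. The following Sun Cluster 3.2 output is only a sketch; the node name phys-schost-1 and the device path are hypothetical. Verify that a DID instance appears for each new storage array LUN.

      # cldevice list -n phys-schost-1 -v
      DID Device          Full Device Path
      ----------          ----------------
      d4                  phys-schost-1:/dev/rdsk/c4t50020F2300001234d0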
      
  19. Repeat Step 2 through Step 18 for each remaining node that you plan to connect to the storage array.

  20. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
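
    For example, to return a device group named dg-schost-1 to a node named phys-schost-1 (both names are hypothetical), you would run one of the following commands.

      # cldevicegroup switch -n phys-schost-1 dg-schost-1

    or, on Sun Cluster 3.1:

      # scswitch -z -D dg-schost-1 -h phys-schost-1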
      
  21. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  22. Perform volume management administration to incorporate the new volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.

Configuring Storage Systems

This section contains the procedures for configuring a storage system in a running cluster. Table 1–2 lists these procedures.

Table 1–2 Task Map: Configuring Storage Arrays

Task: Create a volume.
Information: How to Create a Logical Volume

Task: Remove a volume.
Information: How to Remove a Logical Volume

Some administrative tasks require no cluster-specific procedures. For those tasks, see the Sun StorEdge 6320 System Reference and Service Manual.

How to Create a Logical Volume

Use this procedure to create a logical volume from unassigned storage capacity.


Note –

Sun storage documentation uses several different terms for the logical constructs that you can create on a storage device. This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.

  2. Follow the instructions in your storage device's documentation to create and map the logical volume. For a URL to this storage documentation, see Related Documentation.

    • Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.

    • If necessary, partition the volume.

    • To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.

  3. If you are not using multipathing, skip to Step 5.

  4. If you are using multipathing, and if any devices that are associated with the volume you created are in an unconfigured state, configure the multipathing paths on each node that is connected to the storage device.

    To determine whether any devices that are associated with the volume you created are in an unconfigured state, use the following command.


    # cfgadm -al | grep disk
    

    Note –

    To configure the Solaris I/O multipathing paths on each node that is connected to the storage device, use the following command.


    # cfgadm -o force_update -c configure controllerinstance
    

    To configure the Traffic Manager for the Solaris 9 OS, see the Sun StorEdge Traffic Manager Installation and Configuration Guide. To configure multipathing for the Solaris 10 OS, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.

  5. On one node that is connected to the storage device, use the format command to label the new logical volume.
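
    The format command is interactive. The following sketch shows the general flow of the session; the disk number and the device name are hypothetical and will differ on your node.

    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c4t50020F2300001234d0 <drive type and geometry>
    Specify disk (enter its number): 0
    format> label
    Ready to label disk, continue? y
    format> quit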

  6. From any node in the cluster, update the global device namespace.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      

    Note –

    You might have a volume management daemon such as vold running on your node, and have a DVD drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is inserted in the drive. This error is expected behavior. You can safely ignore this error message.


  7. To manage this volume with volume management software, use Solaris Volume Manager or Veritas Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.


How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.


Note –

Sun storage documentation uses several different terms for the logical constructs that you can create on a storage device. This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Identify the logical volume that you are removing.

    Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for more information.

  3. (Optional) Migrate all data off the logical volume that you are removing. Alternatively, back up that data.

  4. If the LUN that you are removing is configured as a quorum device, choose and configure another device as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


       # scstat -q
      

    For procedures about how to add and remove quorum devices, see Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
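
    For example, if the old quorum device is DID device d4 and you choose DID device d20 as its replacement (both device names are hypothetical), the sequence might look like the following sketch.

      # clquorum add d20
      # clquorum remove d4

    On Sun Cluster 3.1, the equivalent scconf commands are:

      # scconf -a -q globaldev=d20
      # scconf -r -q globaldev=d4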

  5. If you are using volume management software, use that software to update the list of devices on all nodes that are attached to the logical volume that you are removing.

    For instructions about how to update the list of devices, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. If you are using volume management software, run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the logical volume from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.


    Note –

    Volumes that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Sun Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from Veritas Volume Manager control.


    # vxdisk offline Accessname
    # vxdisk rm Accessname
    
    Accessname

    Disk access name


  7. If you are using multipathing, unconfigure the volume in Solaris I/O multipathing.


    # cfgadm -o force_update -c unconfigure Logical_Volume
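
    The Logical_Volume operand is the cfgadm attachment point for the volume, which you can identify in the cfgadm -al output. The attachment point in the following sketch (c4::50020f2300001234) is hypothetical.

    # cfgadm -al | grep 50020f2300001234
    c4::50020f2300001234           disk         connected    configured   unknown
    # cfgadm -o force_update -c unconfigure c4::50020f2300001234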
    
  8. Access the storage device and remove the logical volume.

    To remove the volume, see your storage documentation. For a list of storage documentation, see Related Documentation.

  9. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 14 and Step 15 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  10. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  11. Shut down and reboot Node A.

    To shut down and boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  12. On Node A, remove the paths to the logical volume that you removed. Remove obsolete device IDs.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  13. For each additional node that is connected to the shared storage that hosted the logical volume, repeat Step 9 to Step 12.

  14. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  15. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      

Maintaining Storage Systems

This section contains the procedures for maintaining a storage system in a running cluster. Table 1–3 lists these procedures.

Table 1–3 Task Map: Maintaining Storage Systems

Task: Remove a storage system.
Information: How to Remove a Storage System

Task: Upgrade storage array firmware.
Information: How to Upgrade Storage Array Firmware

Task: Replace a node-to-switch component.

  • Node-to-switch fiber-optic cable

  • FC host adapter

  • FC switch

  • GBIC or SFP

Information: How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Task: Replace a node's host adapter.
Information: How to Replace a Host Adapter

Task: Add a node to the storage array.
Information: Sun Cluster system administration documentation

Task: Remove a node from the storage array.
Information: Sun Cluster system administration documentation

StorEdge 6320 System FRUs

Some administrative tasks require no cluster-specific procedures. For those tasks, see the Sun StorEdge 6320 System Reference and Service Manual.

How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
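
For example, if the affected device is /dev/rdsk/c1t4d0 (a hypothetical path), you might update its device ID with the following command.

# cldevice repair /dev/rdsk/c1t4d0

On Sun Cluster 3.1, the equivalent command is:

# scdidadm -R /dev/rdsk/c1t4d0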


  1. Stop all I/O to the storage arrays you are upgrading.

  2. Apply the controller, disk drive, and loop-card firmware patches by using the arrays' GUI tools.

    For the list of required patches, see the Sun StorEdge 6320 System Reference and Service Manual. For the procedure about how to apply firmware patches, see the firmware patch README file. For the procedure about how to verify the firmware level, see the Sun StorEdge 6320 System Reference and Service Manual.

    For specific instructions, see your storage array's documentation.

  3. Confirm that all storage arrays that you upgraded are visible to all nodes.


    # luxadm probe
    
  4. Restart all I/O to the storage arrays.

    You stopped I/O to these storage arrays in Step 1.

How to Remove a Storage System

Use this procedure to permanently remove a storage system from a running cluster.

This procedure defines Node N as the node that is connected to the storage system you are removing and the node with which you begin working.


Caution –

During this procedure, you lose access to the data that resides on the storage system that you are removing.


  1. If necessary, back up all database tables, data services, and volumes that are associated with each partner group that is affected.

  2. Remove references to the volumes that reside on the storage system that you are removing.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Disconnect the cables that connected Node N to the FC switches in your storage system.

  4. On all nodes, remove the obsolete Solaris links and device IDs (DIDs).

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  5. Repeat Step 3 through Step 4 for each node that is connected to the storage system.

Replacing a Node-to-Switch Component

Use the procedures in this section to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note –

Node-to-switch components that are covered by these procedures include the following components:

• Node-to-switch fiber-optic cables

• FC switches

• GBICs or SFPs

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

• If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.

• If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive.

  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If the physical data path has failed, do the following:

    1. Replace the component.

    2. Fix the volume manager error that was caused by the failed data path.

    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  4. Move all resource groups and device groups to another node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename
      

How to Replace a Chassis

Use this procedure to replace a storage array chassis in a running cluster. This procedure assumes that you want to retain all FRUs other than the chassis and the midplane. To replace the chassis, you must replace both the chassis and the midplane. These components are manufactured as one part.


Caution –

You must be a Sun service provider to perform this procedure. If you need to replace a storage array chassis or a storage array midplane, contact your Sun service provider.


  1. Detach the submirrors on the storage array that is connected to the chassis and the midplane that you are replacing. Detaching the submirrors stops all I/O activity to this storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Are the storage arrays in your partner–group configuration redundant as a result of host-based mirroring?

    • If yes, proceed to Step 3.

    • If no, shut down the cluster.

      For the procedure about how to shut down a cluster, see your Sun Cluster system administration documentation.

  3. Replace the chassis and the midplane.

    For the procedure about how to replace a storage array chassis and a storage array midplane, see the Sun StorEdge 6320 System Reference and Service Manual.

  4. Did you shut down the cluster in Step 2?

    • If no, proceed to Step 5.

    • If yes, boot the cluster back into cluster mode.

      For the procedure about how to boot a cluster, see your Sun Cluster system administration documentation.

  5. Reattach the submirrors that you detached in Step 1 to resynchronize the submirrors.


    Caution –

    The world wide names (WWNs) change as a result of this procedure. You must reconfigure your volume manager software to recognize the new WWNs.


    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA 
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
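
    As an illustration, the following command shuts a Solaris node down to the firmware prompt; follow the referenced chapter for the procedure that applies to your configuration.

    # shutdown -y -g0 -i0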

  5. Power off Node A.

  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8.

    If you do not need to upgrade firmware, skip to Step 9.

  8. Upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename