Sun Cluster 3.0 U1 Hardware Guide

Chapter 8 Installing and Maintaining a Sun StorEdge T3 Disk Tray Single-Controller Configuration

This chapter provides the procedures for installing, configuring, and maintaining Sun StorEdge T3 disk trays in a single-controller (non-interconnected) configuration.

This chapter contains the following procedures:

  • "How to Install a StorEdge T3 Disk Tray"

  • "How to Create a Sun StorEdge T3 Disk Tray Logical Volume"

  • "How to Remove a Sun StorEdge T3 Disk Tray Logical Volume"

  • "How to Upgrade StorEdge T3 Disk Tray Firmware"

  • "How to Replace a Disk Drive"

  • "How to Add a StorEdge T3 Disk Tray"

  • "How to Remove a StorEdge T3 Disk Tray"

  • "How to Replace a Host-to-Hub Component"

  • "How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component"

  • "How to Replace a StorEdge T3 Disk Tray Controller"

  • "How to Replace a StorEdge T3 Disk Tray Chassis"

  • "How to Replace a Host Adapter"

For conceptual information on multihost disks, see the Sun Cluster 3.0 U1 Concepts document.

Installing a StorEdge T3 Disk Tray

This section provides the procedure for an initial installation of a new StorEdge T3 disk tray.

How to Install a StorEdge T3 Disk Tray

Use this procedure to install and configure a new StorEdge T3 disk tray in a cluster that is not running. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 Installation Guide and your server hardware manual.

  1. Install the host adapters in the nodes that are to be connected to the StorEdge T3 disk trays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Sun StorEdge FC-100 hubs.

    For the procedure on installing a Sun StorEdge FC-100 hub, see the FC-100 Hub Installation and Service Manual.

  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network where you want the new StorEdge T3 disk tray to reside.

    This RARP server enables you to assign an IP address to the new StorEdge T3 disk tray by using the StorEdge T3 disk tray's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 Installation, Operation, and Service Manual.
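    The RARP bookkeeping itself is simple. As a sketch (the hostname, MAC address, and IP address below are hypothetical example values; the tray's real MAC address is printed on its pull-out tab), the RARP server needs an /etc/ethers entry that maps the tray's MAC address to a hostname, and an /etc/hosts entry that maps that hostname to the IP address the tray should receive:

```shell
# All values are example placeholders; substitute your own.
T3_NAME=t3-1                  # hostname chosen for the new disk tray
T3_MAC="0:20:f2:0:3e:d6"      # tray's MAC address (from its pull-out tab)
T3_IP="192.168.1.20"          # IP address on the RARP server's subnet

# The two entries the RARP server needs:
ETHERS_ENTRY="$T3_MAC $T3_NAME"   # append this line to /etc/ethers
HOSTS_ENTRY="$T3_IP $T3_NAME"     # append this line to /etc/hosts
printf '%s\n' "$ETHERS_ENTRY" "$HOSTS_ENTRY"

# With the entries in place, verify that in.rarpd is running (for example,
# /usr/sbin/in.rarpd -a) before powering on the tray.
```

    When the tray powers on, it broadcasts a RARP request and picks up the IP address mapped to its MAC address.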

  4. Cable and power on the StorEdge T3 disk trays as shown in Figure 8-1.


    Note -

    No restrictions are placed on the hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing fiber-optic cables, see the Sun StorEdge T3 Configuration Guide. For the procedure on powering on the StorEdge T3 disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

    Figure 8-1 Cabling a StorEdge T3 Disk Tray in a Single-Controller Configuration



    Note -

    Although Figure 8-1 shows a single-controller configuration, two disk trays are shown to illustrate how two non-interconnected disk trays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  5. (Optional) Configure the StorEdge T3 disk tray with logical volumes.

    For the procedure on configuring the StorEdge T3 disk tray with logical volumes, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  6. Telnet to the StorEdge T3 disk tray you are adding, and install the necessary StorEdge T3 disk tray controller firmware.

    Revision 1.16a firmware is required for the StorEdge T3 disk tray controller. For the procedure on upgrading firmware, see the firmware patch README.
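    One way to confirm the controller firmware level is the tray's ver command in the telnet session. The sketch below extracts the revision field from a sample output line; the line itself is fabricated for illustration, and the real output format may differ slightly:

```shell
# Fabricated example of a T3 "ver" output line; check your tray's actual output.
VER_LINE="T300 Release 1.16a 2001/04/02 15:21:29"

# Pull out the third whitespace-separated field, the firmware revision.
FW_REV=$(echo "$VER_LINE" | awk '{print $3}')
echo "$FW_REV"

# Proceed only if the revision matches the required 1.16a; otherwise apply
# the firmware patch per its README.
```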

  7. Ensure that this new StorEdge T3 disk tray has a unique target address.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 Configuration Guide.

  8. Reset the StorEdge T3 disk tray.

    For the procedure on rebooting or resetting a StorEdge T3 disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  9. Install the Solaris operating environment, and apply the required Solaris patches for Sun Cluster software and StorEdge T3 disk tray support.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 U1 Installation Guide. For the location of required Solaris patches and installation instructions for Sun Cluster software support, see the Sun Cluster 3.0 U1 Release Notes. For a list of required Solaris patches for StorEdge T3 disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

Where to Go From Here

To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 U1 Installation Guide.

Configuring a StorEdge T3 Disk Tray

This section provides the procedures for configuring a StorEdge T3 disk tray in a running cluster. The following table lists these procedures.

Table 8-1 Task Map: Configuring a StorEdge T3 Disk Tray

Task 

For Instructions, Go To 

Create a disk tray logical volume 

"How to Create a Sun StorEdge T3 Disk Tray Logical Volume"

Remove a disk tray logical volume 

"How to Remove a Sun StorEdge T3 Disk Tray Logical Volume"

How to Create a Sun StorEdge T3 Disk Tray Logical Volume

Use this procedure to create a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3 disk tray that is to host the logical volume you are creating.

  1. Telnet to the StorEdge T3 disk tray that is to host the logical volume you are creating.

  2. Create the logical volume.

    Creating a logical volume involves adding, initializing, and then mounting the logical volume.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 Installation, Operation, and Service Manual.
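    At the disk tray's telnet prompt, the sequence is typically a vol add, vol init, and vol mount, sketched below with hypothetical values (v0 is an example volume name and u1d1-9 an example drive range; the Administrator's Guide gives the authoritative syntax):

    vol add v0 data u1d1-9 raid 5
    vol init v0 data
    vol mount v0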

  3. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    

    After this process, a Solaris logical device name for the new logical volume appears in the /dev/rdsk and /dev/dsk directories on all cluster nodes that are attached to the StorEdge T3 disk tray.
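    These names follow the Solaris cNtNdNsN convention. As a sketch with hypothetical numbers, the slice suffix can be stripped off to get the whole-disk name that format(1M) and the volume managers refer to:

```shell
# Hypothetical logical device name for the new volume; the real controller
# and target numbers depend on your hardware configuration.
DEV=c2t1d0s2        # c2 = controller, t1 = target (the T3), d0 = LUN, s2 = slice

# Strip the slice suffix to get the whole-disk name.
DISK=${DEV%s*}
echo "$DISK"

# Once devfsadm has run, the corresponding links on each node would be
# /dev/rdsk/$DEV and /dev/dsk/$DEV.
```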

  4. If you are running VERITAS Volume Manager, update VERITAS Volume Manager's device tables on all cluster nodes that are attached to the logical volume you created in Step 2. Otherwise, proceed to Step 5.
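    For VERITAS Volume Manager, the device-table rescan is typically a single command (shown as a sketch; see your VxVM documentation for your version's exact procedure):

    # vxdctl enable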

  5. If necessary, partition the logical volume.

  6. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge T3 disk tray logical volume, see the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

To configure a logical volume as a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide for the procedure on adding a quorum device.

How to Remove a Sun StorEdge T3 Disk Tray Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3 disk tray that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

This procedure removes all data on the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing. Otherwise, proceed to Step 2.

  2. Is the logical volume you are removing a quorum device?


    # scstat -q
    
    • If yes, remove the quorum device before you proceed.

    • If no, proceed to Step 3.

    For the procedure on removing a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide.
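    As a sketch, quorum-device removal in Sun Cluster 3.0 uses scconf; d12 below is a hypothetical DID instance name, which you would replace with the device reported by scstat -q:

    # scconf -r -q globaldev=d12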

  3. If you are running VERITAS Volume Manager, update VERITAS Volume Manager's device tables on all cluster nodes that are attached to the logical volume you are removing. Otherwise, proceed to Step 4.

  4. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  5. Remove the logical volume.

    For the procedure on deleting a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 15 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  8. Shut down Node A.


    # shutdown -y -g0 -i0
    
  9. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  10. On Node A, remove the obsolete device IDs (DIDs).


    # devfsadm -C
    # scdidadm -C
    
  11. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  12. Shut down Node B.


    # shutdown -y -g0 -i0
    
  13. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  14. On Node B, remove the obsolete DIDs.


    # devfsadm -C
    # scdidadm -C
    
  15. Return the resource groups and device groups you identified in Step 6 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Create a Sun StorEdge T3 Disk Tray Logical Volume".

Maintaining a StorEdge T3 Disk Tray

This section provides the procedures for maintaining a StorEdge T3 disk tray. The following table lists these procedures. This section does not include procedures for adding or removing disk drives because a StorEdge T3 disk tray operates only when it is fully configured.


Caution -

If you remove any field-replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent overheating, the StorEdge T3 disk tray is designed to shut down in an orderly fashion when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start a FRU replacement procedure. You must replace a FRU within 30 minutes, or the StorEdge T3 disk tray, and all attached StorEdge T3 disk trays, will shut down and power off.


Table 8-2 Task Map: Maintaining a StorEdge T3 Disk Tray

Task 

For Instructions, Go To 

Upgrade StorEdge T3 disk tray firmware. 

"How to Upgrade StorEdge T3 Disk Tray Firmware"

Replace a disk drive. 

"How to Replace a Disk Drive"

Add a StorEdge T3 disk tray. 

"How to Add a StorEdge T3 Disk Tray"

Remove a StorEdge T3 disk tray. 

"How to Remove a StorEdge T3 Disk Tray"

Replace a host-to-hub fiber-optic cable. 

"How to Replace a Host-to-Hub Component"

Replace an FC-100/S host adapter GBIC. 

"How to Replace a Host-to-Hub Component"

Replace an FC-100 hub GBIC that connects an FC-100 hub to a host. 

"How to Replace a Host-to-Hub Component"

Replace a hub-to-disk tray fiber-optic cable. 

"How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component"

Replace an FC-100 hub GBIC that connects the FC-100 hub to a StorEdge T3 disk tray. 

"How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component"

Replace a Sun StorEdge FC-100 hub. 

"How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component"

Replace a Sun StorEdge FC-100 hub power cord. 

"How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component"

Replace a media interface adapter (MIA). 

"How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component"

Replace a StorEdge T3 disk tray controller. 

"How to Replace a StorEdge T3 Disk Tray Controller"

Replace a StorEdge T3 disk tray chassis. 

"How to Replace a StorEdge T3 Disk Tray Chassis"

Replace a host adapter. 

"How to Replace a Host Adapter"

Replace a Power and Cooling Unit (PCU). 

Follow the same procedure used in a non-cluster environment. See the Sun StorEdge T3 Installation, Operation, and Service Manual.

Replace a unit interconnect card (UIC). 

Follow the same procedure used in a non-cluster environment. See the Sun StorEdge T3 Installation, Operation, and Service Manual.

Replace a StorEdge T3 disk tray power cable. 

Follow the same procedure used in a non-cluster environment. See the Sun StorEdge T3 Installation, Operation, and Service Manual.

Replace an Ethernet cable. 

Follow the same procedure used in a non-cluster environment. See the Sun StorEdge T3 Installation, Operation, and Service Manual.

How to Upgrade StorEdge T3 Disk Tray Firmware

Use this procedure to upgrade StorEdge T3 disk tray firmware in a running cluster. StorEdge T3 disk tray firmware includes controller firmware, unit interconnect card (UIC) firmware, and disk drive firmware.


Caution -

Perform this procedure on one StorEdge T3 disk tray at a time. This procedure requires that you reset the StorEdge T3 disk tray you want to upgrade. If you reset more than one StorEdge T3 disk tray, your cluster will lose access to data if the StorEdge T3 disk trays are submirrors of each other.


  1. On one node attached to the StorEdge T3 disk tray you are upgrading, detach that StorEdge T3 disk tray's submirrors.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required StorEdge T3 disk tray patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README. For the procedure on verifying the firmware level, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. Reset the StorEdge T3 disk tray, if you have not already done so.

    For the procedure on rebooting a StorEdge T3 disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  4. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Disk Drive

Use this procedure to replace one failed disk drive in a StorEdge T3 disk tray in a running cluster.


Caution -

If you remove any field-replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent overheating, the StorEdge T3 disk tray is designed to shut down in an orderly fashion when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start a FRU replacement procedure. You must replace a FRU within 30 minutes, or the StorEdge T3 disk tray, and all attached StorEdge T3 disk trays, will shut down and power off.


  1. If the failed disk drive impacted the logical volume's availability, remove the logical volume from volume management control. Otherwise, proceed to Step 2.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the disk drive.

    For the procedure on replacing a disk drive, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. If you removed a LUN from volume management control in Step 1, return the LUN(s) to volume management control. Otherwise, this completes the procedure.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Add a StorEdge T3 Disk Tray

Use this procedure to add a new StorEdge T3 disk tray to a running cluster.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network where the new StorEdge T3 disk tray is to reside, and then assign an IP address to the new StorEdge T3 disk tray.

    This RARP server enables you to assign an IP address to the new StorEdge T3 disk tray by using the StorEdge T3 disk tray's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  2. Install the media interface adapter (MIA) in the StorEdge T3 disk tray you want to add as shown in Figure 8-2.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 Configuration Guide.

  3. If necessary, install a gigabit interface converter (GBIC) in the Sun StorEdge FC-100 hub as shown in Figure 8-2.

    This GBIC enables you to connect the Sun StorEdge FC-100 hub to the StorEdge T3 disk tray you want to add.


    Note -

    No restrictions are placed on the hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  4. Install the 10Base-T Ethernet cable between the StorEdge T3 disk tray and the Local Area Network (LAN), as shown in Figure 8-2.

  5. Power on the StorEdge T3 disk tray.


    Note -

    The StorEdge T3 disk tray might require a few minutes to boot.


    For the procedure on powering on a StorEdge T3 disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  6. Telnet to the StorEdge T3 disk tray you are adding, and, if necessary, install the required StorEdge T3 disk tray controller firmware.

    Revision 1.16a firmware is required for the StorEdge T3 disk tray controller. For the procedure on upgrading firmware, see the firmware patch README.

  7. Does this new StorEdge T3 disk tray have a unique target address?

    • If yes, proceed to Step 8.

    • If no, change the target address for this new StorEdge T3 disk tray.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 Configuration Guide.

  8. Install a fiber-optic cable between the Sun StorEdge FC-100 hub and the StorEdge T3 disk tray as shown in Figure 8-2.


    Note -

    No restrictions are placed on the hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure 8-2 Adding a StorEdge T3 Disk Tray in a Single-Controller Configuration



    Note -

    Although Figure 8-2 shows a single-controller configuration, two disk trays are shown to illustrate how two non-interconnected disk trays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  9. Configure the new StorEdge T3 disk tray.

    For the procedure on creating a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  10. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 42 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  11. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  12. Do you need to install a host adapter in Node A?

  13. Is the host adapter you are installing the first FC-100/S host adapter on Node A?

    • If no, skip to Step 15.

    • If yes, determine whether the Fibre Channel support packages are already installed on these nodes. This product requires the following packages.


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
  14. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 15.

    • If no, install them.

    The StorEdge T3 disk tray packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
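    For example, assuming the Solaris 8 CD is mounted at /cdrom/cdrom0 (an assumed mount point; adjust the path to your media), adding all five packages might look like this:

    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop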
    
  15. Stop the Sun Cluster software on Node A and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  16. Power off Node A.

  17. Install the host adapter in Node A.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  18. If necessary, power on and boot Node A.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  19. If necessary, upgrade the host adapter firmware on Node A.

    For the required host adapter firmware patch, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying the host adapter firmware patch, see the firmware patch README.

  20. If necessary, install gigabit interface converters (GBIC), as shown in Figure 8-3.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  21. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node A as shown in Figure 8-3.


    Note -

    No restrictions are placed on hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure 8-3 Adding a StorEdge T3 Disk Tray in a Single-Controller Configuration


  22. If necessary, install the required Solaris patches for StorEdge T3 disk tray support on Node A.

    For a list of required Solaris patches for StorEdge T3 disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

  23. Shut down Node A.


    # shutdown -y -g0 -i0
    
  24. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    {0} ok boot -r
    
  25. Label the new logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  26. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new StorEdge T3 disk tray.


    # scdidadm -l
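    Each line of scdidadm -l output pairs a DID instance with its local device path; an entry for the new tray's volume would look similar to the following fabricated example:

    14       phys-node-a:/dev/rdsk/c2t1d0   /dev/did/rdsk/d14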
    

  27. Do you need to install a host adapter in Node B?

  28. Is the host adapter you want to install the first FC-100/S host adapter on Node B?

    • If no, skip to Step 30.

    • If yes, determine whether the Fibre Channel support packages are already installed on these nodes. This product requires the following packages.


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
  29. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 30.

    • If no, install them.

    The StorEdge T3 disk tray packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
  30. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  31. Stop the Sun Cluster software on Node B, and shut down the node.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  32. Power off Node B.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  33. Install the host adapter in Node B.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  34. If necessary, power on and boot Node B.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  35. If necessary, upgrade the host adapter firmware on Node B.

    For the required host adapter firmware patch, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying the host adapter firmware patch, see the firmware patch README.

  36. If necessary, install gigabit interface converters (GBIC) as shown in Figure 8-4.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  37. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node B as shown in Figure 8-4.

    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure 8-4 Adding a StorEdge T3 Disk Tray in a Single-Controller Configuration


  38. If necessary, install the required Solaris patches for StorEdge T3 disk tray support on Node B.

    For a list of required Solaris patches for StorEdge T3 disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

  39. Shut down Node B.


    # shutdown -y -g0 -i0
    
  40. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    
  41. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new StorEdge T3 disk tray.


    # scdidadm -l
    

  42. Return the resource groups and device groups you identified in Step 10 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  43. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Remove a StorEdge T3 Disk Tray

Use this procedure to permanently remove a StorEdge T3 disk tray and its submirrors from a running cluster. This procedure also gives you the option of removing the host adapters from the nodes that were connected to the StorEdge T3 disk tray you are removing.

This procedure defines Node A as the node you want to begin working with, and Node B as the remaining node.


Caution -

During this procedure, you will lose access to the data that resides on the StorEdge T3 disk tray you are removing.


  1. If necessary, back up all database tables, data services, and volumes that are associated with the StorEdge T3 disk tray that you are removing.

  2. Detach the submirrors from the StorEdge T3 disk tray you are removing in order to stop all I/O activity to the StorEdge T3 disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A and Node B.


    # scstat
    
  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  6. Stop the Sun Cluster software on Node A, and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Is the StorEdge T3 disk tray you are removing the last StorEdge T3 disk tray that is connected to Node A?

    • If yes, disconnect the fiber-optic cable between Node A and the Sun StorEdge FC-100 hub that is connected to this StorEdge T3 disk tray, then disconnect the fiber-optic cable between the Sun StorEdge FC-100 hub and this StorEdge T3 disk tray.

    • If no, proceed to Step 8.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

  8. Do you want to remove the host adapter from Node A?

    • If yes, power off Node A.

    • If no, skip to Step 10.

  9. Remove the host adapter from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  10. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  11. Boot Node A into cluster mode.


    {0} ok boot
    
  12. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  13. Stop the Sun Cluster software on Node B, and shut down Node B.


    # shutdown -y -g0 -i0
    
  14. Is the StorEdge T3 disk tray you are removing the last StorEdge T3 disk tray that is connected to the Sun StorEdge FC-100 hub?

    • If yes, disconnect the fiber-optic cable that connects this Sun StorEdge FC-100 hub and Node B.

    • If no, proceed to Step 15.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

  15. Do you want to remove the host adapter from Node B?

    • If yes, power off Node B.

    • If no, skip to Step 18.

  16. Remove the host adapter from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  17. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  18. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  19. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  20. Return the resource groups and device groups you identified in Step 4 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Host-to-Hub Component

Use this procedure to replace the following host-to-hub components:

  • Host-to-hub fiber-optic cable

  • FC-100/S host adapter GBIC

  • FC-100 hub GBIC that connects an FC-100 hub to a host

  1. Determine the resource groups and device groups that are running on the node that is connected to the host-to-hub connection you are replacing.


    # scstat
    
  2. Move all resource groups and device groups to another node.


    # scswitch -S -h nodename
    
  3. Replace the host-to-hub component.

    For the procedure on replacing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide. For the procedure on replacing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual. For the procedure on replacing an FC-100/S host adapter GBIC, see your host adapter documentation.

  4. Return the resource groups and device groups you identified in Step 1 to the node that is connected to the host-to-hub connection you replaced.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component

Use this procedure to replace the following hub-to-disk tray components:

  • Hub-to-disk tray fiber-optic cable

  • FC-100 hub GBIC that connects the FC-100 hub to a StorEdge T3 disk tray

  • Sun StorEdge FC-100 hub

  • Sun StorEdge FC-100 hub power cord

  • Media interface adapter (MIA)

  1. Detach the submirrors on the StorEdge T3 disk tray that is connected to the hub-to-disk tray fiber-optic cable you are replacing in order to stop all I/O activity to this StorEdge T3 disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the hub-to-disk tray component.

    For the procedure on replacing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide. For the procedure on replacing an FC-100 hub GBIC, a Sun StorEdge FC-100 hub, or a Sun StorEdge FC-100 hub power cord, see the FC-100 Hub Installation and Service Manual. For the procedure on replacing an MIA, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge T3 Disk Tray Controller

Use this procedure to replace a StorEdge T3 disk tray controller.

  1. Detach the submirrors on the StorEdge T3 disk tray that is connected to the controller you are replacing in order to stop all I/O activity to this StorEdge T3 disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the controller.

    For the procedure on replacing a StorEdge T3 controller, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge T3 Disk Tray Chassis

Use this procedure to replace a StorEdge T3 disk tray chassis. This procedure assumes that you are retaining all FRUs other than the chassis and the backplane. To replace the chassis, you must replace both the chassis and the backplane because these components are manufactured as one part.


Note -

Only trained, qualified service providers should use this procedure to replace a StorEdge T3 disk tray chassis.


  1. Detach the submirrors on the StorEdge T3 disk tray that is connected to the chassis you are replacing in order to stop all I/O activity to this StorEdge T3 disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the chassis/backplane.

    For the procedure on replacing a StorEdge T3 chassis, see the Sun StorEdge T3 Field Service Manual.

  3. Reattach the submirrors to resynchronize them.


    Note -

    Account for the change in the World Wide Name (WWN).


    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. Node A in this procedure refers to the node with the failed host adapter you are replacing. Node B is a backup node.

  1. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 8 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  3. Shut down Node A.


    # shutdown -y -g0 -i0
    
  4. Power off Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  5. Replace the failed host adapter.

    For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.

  6. Power on Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  8. Return the resource groups and device groups you identified in Step 1 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.