Sun Cluster 3.0 U1 Release Notes Supplement

Appendix C Installing and Maintaining a Sun StorEdge T3 or T3+ Disk Tray Single-Controller Configuration

This chapter contains the procedures for installing, configuring, and maintaining Sun StorEdge(TM) T3 and Sun StorEdge T3+ disk trays in a single-controller (non-interconnected) configuration. Differences between the StorEdge T3 and StorEdge T3+ procedures are noted where appropriate.

This chapter contains the following procedures:

For conceptual information on multihost disks, see the Sun Cluster 3.0 U1 Concepts document.

Installing StorEdge T3/T3+ Disk Trays

This section contains the procedure for an initial installation of new StorEdge T3 or StorEdge T3+ disk trays.

How to Install StorEdge T3/T3+ Disk Trays

Use this procedure to install and configure new StorEdge T3 or StorEdge T3+ disk trays in a cluster that is not running. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 Installation Guide and your server hardware manual.

  1. Install the host adapters in the nodes that are to be connected to the StorEdge T3/T3+ disk trays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Sun StorEdge FC-100 hubs.

    For the procedure on installing Sun StorEdge FC-100 hubs, see the FC-100 Hub Installation and Service Manual.

  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network where you want the new StorEdge T3/T3+ disk trays to reside.

    This RARP server enables you to assign an IP address to the new StorEdge T3/T3+ disk trays by using each StorEdge T3/T3+ disk tray's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 Installation, Operation, and Service Manual.
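    As an illustrative sketch, the RARP server entries described in this step look like the following. Demo files under /tmp stand in for the real /etc files, and the MAC address, hostname, and IP address are made-up examples, not values from your cluster.

```shell
DEMO=/tmp/rarp-demo
mkdir -p "$DEMO"

# /etc/ethers on the RARP server maps the tray's MAC address (printed on
# the tray's pull-out tab) to a hostname; a demo file stands in for it here:
echo '0:20:f2:0:3e:4f t3-tray-1' >> "$DEMO/ethers"

# /etc/hosts maps that hostname to the IP address the tray should receive:
echo '192.168.1.10 t3-tray-1' >> "$DEMO/hosts"

# On the real RARP server, start the daemon so it answers the tray's
# RARP request at power-on:
#   /usr/sbin/in.rarpd -a
grep 't3-tray-1' "$DEMO/ethers" "$DEMO/hosts"
```

    With both entries in place, the tray acquires its IP address over RARP the next time it boots.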

  4. (Skip this step if you are installing a StorEdge T3+ disk tray) Install the media interface adapters (MIAs) in the StorEdge T3 disk trays you are installing, as shown in Figure C-1.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 Configuration Guide.

  5. If necessary, install gigabit interface converters (GBICs) in the Sun StorEdge FC-100 hubs, as shown in Figure C-1.

    The GBICs let you connect the Sun StorEdge FC-100 hubs to the StorEdge T3/T3+ disk trays you are installing. For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  6. Install fiber-optic cables between the Sun StorEdge FC-100 hubs and the StorEdge T3/T3+ disk trays as shown in Figure C-1.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

  7. Install fiber-optic cables between the Sun StorEdge FC-100 hubs and the cluster nodes as shown in Figure C-1.

  8. Install the Ethernet cables between the StorEdge T3/T3+ disk trays and the Local Area Network (LAN), as shown in Figure C-1.

  9. Install power cords to each disk tray you are installing.

  10. Power on the StorEdge T3/T3+ disk trays and confirm that all components are powered on and functional.


    Note -

    The StorEdge T3/T3+ disk trays might require a few minutes to boot.


    For the procedure on powering on a StorEdge T3/T3+ disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

    Figure C-1 Cabling StorEdge T3/T3+ Disk Trays in a Single-Controller Configuration

    Graphic


    Note -

    Although Figure C-1 shows a single-controller configuration, two disk trays are shown to illustrate how two non-interconnected disk trays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  11. (Optional) Configure the StorEdge T3/T3+ disk trays with logical volumes.

    For the procedure on configuring the StorEdge T3/T3+ disk tray with logical volumes, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  12. Telnet to each StorEdge T3/T3+ disk tray you are adding and install the required StorEdge T3/T3+ disk tray controller firmware.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any firmware patch, see the firmware patch README file.

  13. Ensure that this new StorEdge T3/T3+ disk tray has a unique target address.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 Configuration Guide.

  14. Reset the StorEdge T3/T3+ disk tray.

    For the procedure on rebooting or resetting a StorEdge T3/T3+ disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  15. Install the Solaris operating environment on the cluster nodes, and apply any required Solaris patches for Sun Cluster software and StorEdge T3/T3+ disk tray support.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 U1 Installation Guide. For the location of required Solaris patches and installation instructions for Sun Cluster software support, see the Sun Cluster 3.0 U1 Release Notes. For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

Where to Go From Here

To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 U1 Installation Guide.

Configuring a StorEdge T3/T3+ Disk Tray

This section contains the procedures for configuring a StorEdge T3 or StorEdge T3+ disk tray in a running cluster. The following table lists these procedures.

Table C-1 Task Map: Configuring a StorEdge T3/T3+ Disk Tray

  • Create a disk tray logical volume. See "How to Create a Sun StorEdge T3/T3+ Disk Tray Logical Volume".

  • Remove a disk tray logical volume. See "How to Remove a Sun StorEdge T3/T3+ Disk Tray Logical Volume".

How to Create a Sun StorEdge T3/T3+ Disk Tray Logical Volume

Use this procedure to create a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3/T3+ disk tray that is to host the logical volume you are creating.

  1. Telnet to the StorEdge T3/T3+ disk tray that is to host the logical volume you are creating.

  2. Create the logical volume.

    The creation of a logical volume involves adding, mounting, and initializing the logical volume.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    
  4. On one node connected to the StorEdge T3/T3+ disk tray, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  5. Are you running VERITAS Volume Manager?

    • If not, go to Step 6.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.

  6. If necessary, partition the logical volume.

  7. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge T3/T3+ disk tray logical volume, see the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

To configure a logical volume as a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide for the procedure on adding a quorum device.

How to Remove a Sun StorEdge T3/T3+ Disk Tray Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3/T3+ disk tray that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

This procedure removes all data on the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing. Otherwise, proceed to Step 2.

  2. Is the logical volume you are removing a quorum device?


    # scstat -q
    
    • If yes, remove the quorum device before you proceed.

    • If no, go to Step 3.

    For the procedure on removing a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide.
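    As a sketch, the quorum check in this step can be scripted against saved scstat -q output. The sample output and the DID device name d4s2 below are hypothetical; on a live node, pipe scstat -q directly instead of using a saved file.

```shell
# Simulated `scstat -q` output saved to a file (illustrative only):
cat > /tmp/scstat-q.txt <<'EOF'
-- Quorum Votes by Device --
                  Device Name          Present Possible Status
                  -----------          ------- -------- ------
  Device votes:   /dev/did/rdsk/d4s2   1       1        Online
EOF

# If the volume's DID device appears in the quorum list, remove the
# quorum device before deleting the logical volume:
if grep -q '/dev/did/rdsk/d4s2' /tmp/scstat-q.txt; then
  echo 'd4 is a quorum device: remove it before deleting the volume'
fi
```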

  3. Are you running VERITAS Volume Manager?

    • If not, go to Step 4.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.

  4. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  5. Telnet to the disk tray and remove the logical volume.

    For the procedure on deleting a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 13 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
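    As a sketch, the placement recorded in this step can be captured and filtered from saved scstat output. The group and node names below are hypothetical, and real scstat output has additional columns, so adjust the awk field positions to match your release.

```shell
# Simulated `scstat` resource-group output (illustrative only):
cat > /tmp/scstat-sample.txt <<'EOF'
Group: oracle-rg nodeA Online
Group: nfs-rg nodeB Online
Group: web-rg nodeA Offline
EOF

# Save the groups that are online on nodeA so Step 13 can return them:
awk '$1 == "Group:" && $3 == "nodeA" && $4 == "Online" {print $2}' \
    /tmp/scstat-sample.txt > /tmp/nodeA-groups.txt
cat /tmp/nodeA-groups.txt
```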
  7. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  8. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    The -i6 option causes the node to reboot after it shuts down to the ok prompt.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  9. On Node A, remove the obsolete device IDs (DIDs).


    # devfsadm -C
    # scdidadm -C
    
  10. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  11. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    The -i6 option causes the node to reboot after it shuts down to the ok prompt.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  12. On Node B, remove the obsolete DIDs.


    # devfsadm -C
    # scdidadm -C
    
  13. Return the resource groups and device groups you identified in Step 6 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.
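    The restore in this step can be scripted as a dry run. The placement file format and the group, device-group, and node names below are hypothetical; the leading echo prints each scswitch command instead of running it, so remove the echo to execute the commands on a real cluster node.

```shell
# Hypothetical placement recorded earlier: kind, group name, home node.
cat > /tmp/placement.txt <<'EOF'
resource oracle-rg nodeA
device oradg nodeA
resource nfs-rg nodeB
EOF

# Print (dry run) the scswitch command that returns each group home:
while read kind name node; do
  case $kind in
    resource) echo scswitch -z -g "$name" -h "$node" ;;
    device)   echo scswitch -z -D "$name" -h "$node" ;;
  esac
done < /tmp/placement.txt
```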

Where to Go From Here

To create a logical volume, see "How to Create a Sun StorEdge T3/T3+ Disk Tray Logical Volume".

Maintaining a StorEdge T3/T3+ Disk Tray

This section contains the procedures for maintaining a StorEdge T3 or StorEdge T3+ disk tray. The following table lists these procedures. This section does not include a procedure for adding a disk drive and a procedure for removing a disk drive because a StorEdge T3/T3+ disk tray only operates when fully configured.


Caution -

If you remove any field-replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ disk tray is designed so an orderly shutdown occurs when you remove a component for longer than 30 minutes. A replacement part must be immediately available before starting a FRU replacement procedure. You must replace a FRU within 30 minutes or the StorEdge T3/T3+ disk tray, and all attached StorEdge T3/T3+ disk trays, will shut down and power off.


Table C-2 Task Map: Maintaining a StorEdge T3/T3+ Disk Tray

  • Upgrade StorEdge T3/T3+ disk tray firmware. See "How to Upgrade StorEdge T3/T3+ Disk Tray Firmware".

  • Replace a disk drive. See "How to Replace a Disk Drive".

  • Add a StorEdge T3/T3+ disk tray. See "How to Add a StorEdge T3/T3+ Disk Tray".

  • Remove a StorEdge T3/T3+ disk tray. See "How to Remove a StorEdge T3/T3+ Disk Tray".

  • Replace a host-to-hub fiber-optic cable. See "How to Replace a Host-to-Hub Component".

  • Replace an FC-100/S host adapter GBIC. See "How to Replace a Host-to-Hub Component".

  • Replace an FC-100 hub GBIC that connects an FC-100 hub to a host. See "How to Replace a Host-to-Hub Component".

  • Replace a hub-to-disk tray fiber-optic cable. See "How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component".

  • Replace an FC-100 hub GBIC that connects the FC-100 hub to a StorEdge T3 disk tray. See "How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component".

  • Replace a Sun StorEdge FC-100 hub. See "How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component".

  • Replace a Sun StorEdge FC-100 hub power cord. See "How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component".

  • Replace a media interface adapter (MIA) on a StorEdge T3 disk tray (not applicable for StorEdge T3+ disk trays). See "How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component".

  • Replace a StorEdge T3 disk tray controller. See "How to Replace a StorEdge T3/T3+ Disk Tray Chassis".

  • Replace a StorEdge T3 disk tray chassis. See "How to Replace a StorEdge T3/T3+ Disk Tray Chassis".

  • Replace a host adapter. See "How to Replace a Host Adapter".

  • Upgrade a StorEdge T3 disk tray controller to a StorEdge T3+ disk tray controller. See the Sun StorEdge T3 Array Controller Upgrade Manual.

  • Replace a power and cooling unit (PCU). Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  • Replace a unit interconnect card (UIC). Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  • Replace a StorEdge T3 disk tray power cable. Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  • Replace an Ethernet cable. Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 Installation, Operation, and Service Manual.

How to Upgrade StorEdge T3/T3+ Disk Tray Firmware

Use this procedure to upgrade StorEdge T3/T3+ disk tray firmware in a running cluster. StorEdge T3/T3+ disk tray firmware includes controller firmware, unit interconnect card (UIC) firmware, and disk drive firmware.


Caution -

Perform this procedure on one StorEdge T3/T3+ disk tray at a time. This procedure requires that you reset the StorEdge T3/T3+ disk tray you are upgrading. If you reset more than one StorEdge T3/T3+ disk tray, your cluster will lose access to data if the StorEdge T3/T3+ disk trays are submirrors of each other.


  1. On one node attached to the StorEdge T3/T3+ disk tray you are upgrading, detach that StorEdge T3/T3+ disk tray's submirrors.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Apply the controller, disk drive, and UIC firmware patches.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any firmware patch, see the firmware patch README file.

  3. Reset the StorEdge T3/T3+ disk tray, if you have not already done so.

    For the procedure on rebooting a StorEdge T3/T3+ disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  4. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
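    Assuming Solstice DiskSuite with hypothetical metadevice names d0 (the mirror) and d2 (the submirror on the tray being upgraded), Steps 1 through 4 can be sketched as a dry run. DRYRUN=echo prints each command instead of running it; unset DRYRUN on a real cluster node.

```shell
# DRYRUN=echo prints the commands; unset it on a real cluster node.
DRYRUN=echo

$DRYRUN metadetach d0 d2   # Step 1: detach the submirror on this tray
# Steps 2-3: apply the controller, disk drive, and UIC firmware patches,
# then reset the tray from its telnet console (see the patch README and
# the Sun StorEdge T3 Installation, Operation, and Service Manual).
$DRYRUN metattach d0 d2    # Step 4: reattach; the submirror resynchronizes
```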

How to Replace a Disk Drive

Use this procedure to replace one failed disk drive in a StorEdge T3/T3+ disk tray in a running cluster.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ disk tray is designed so an orderly shutdown occurs when you remove a component for longer than 30 minutes. A replacement part must be immediately available before starting a FRU replacement procedure. You must replace a FRU within 30 minutes or the StorEdge T3/T3+ disk tray, and all attached StorEdge T3/T3+ disk trays, will shut down and power off.


  1. If the failed disk drive impacted the logical volume's availability, remove the logical volume from volume management control. Otherwise, proceed to Step 2.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the disk drive.

    For the procedure on replacing a disk drive, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. If you removed a LUN from volume management control in Step 1, return the LUN(s) to volume management control. Otherwise, you have completed this procedure.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Add a StorEdge T3/T3+ Disk Tray

Use this procedure to add a new StorEdge T3/T3+ disk tray to a running cluster.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network the new StorEdge T3/T3+ disk tray is to reside on, and then assign an IP address to the new StorEdge T3/T3+ disk tray.

    This RARP server enables you to assign an IP address to the new StorEdge T3/T3+ disk tray by using the StorEdge T3/T3+ disk tray's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  2. (Skip this step if you are adding a StorEdge T3+ disk tray) Install the media interface adapter (MIA) in the StorEdge T3 disk tray you are adding as shown in Figure C-2.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 Configuration Guide.

  3. If necessary, install a gigabit interface converter (GBIC) in the Sun StorEdge FC-100 hub as shown in Figure C-2.

    This GBIC enables you to connect the Sun StorEdge FC-100 hub to the StorEdge T3/T3+ disk tray you are adding.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  4. Install the Ethernet cable between the StorEdge T3/T3+ disk tray and the Local Area Network (LAN), as shown in Figure C-2.

  5. Power on the StorEdge T3/T3+ disk tray.


    Note -

    The StorEdge T3/T3+ disk tray might require a few minutes to boot.


    For the procedure on powering on a StorEdge T3/T3+ disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  6. Telnet to the StorEdge T3/T3+ disk tray you are adding, and, if necessary, install the required StorEdge T3/T3+ disk tray controller firmware.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any firmware patch, see the firmware patch README file.

  7. Does this new StorEdge T3/T3+ disk tray have a unique target address?

    • If yes, proceed to Step 8.

    • If no, change the target address for this new StorEdge T3/T3+ disk tray.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 Configuration Guide.

  8. Install a fiber-optic cable between the Sun StorEdge FC-100 hub and the StorEdge T3/T3+ disk tray as shown in Figure C-2.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure C-2 Adding a StorEdge T3/T3+ Disk Tray in a Single-Controller Configuration

    Graphic


    Note -

    Although Figure C-2 shows a single-controller configuration, two disk trays are shown to illustrate how two non-interconnected disk trays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  9. Configure the new StorEdge T3/T3+ disk tray.

    For the procedure on creating a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  10. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 42 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  11. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  12. Do you need to install a host adapter in Node A?

    • If no, skip to Step 20.

    • If yes, proceed to Step 13.

  13. Is the host adapter you are installing the first FC-100/S host adapter on Node A?

    • If no, skip to Step 15.

    • If yes, determine whether the Fibre Channel support packages are already installed on these nodes. This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
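    The package check in this step can be scripted. The pkginfo output below is simulated (two packages present, three missing) rather than taken from a real node; on a live node, pipe pkginfo itself instead of using a saved file.

```shell
# Simulated `pkginfo` output (illustrative only):
cat > /tmp/pkginfo.txt <<'EOF'
system SUNWluxd Sun Enterprise Network Array sf Device Driver
system SUNWluxl Sun Enterprise Network Array socal Device Driver
EOF

# Report any of the five required Fibre Channel packages not installed:
for pkg in SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop; do
  grep -qw "$pkg" /tmp/pkginfo.txt || echo "missing: $pkg"
done
```

    Any package reported missing would then be added with pkgadd as shown in the next step.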
  14. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 15.

    • If no, install them.

    The StorEdge T3/T3+ disk tray packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
  15. Stop the Sun Cluster software on Node A and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  16. Power off Node A.

  17. Install the host adapter in Node A.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  18. If necessary, power on and boot Node A.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  19. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  20. If necessary, install gigabit interface converters (GBICs), as shown in Figure C-3.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  21. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node A as shown in Figure C-3.

    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure C-3 Adding a StorEdge T3/T3+ Disk Tray in a Single-Controller Configuration

    Graphic

  22. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node A.

    For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

  23. Shut down Node A.


    # shutdown -y -g0 -i0
    
  24. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    {0} ok boot -r
    
  25. Label the new logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.

  26. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new StorEdge T3/T3+ disk tray.


    # scdidadm -l
    

  27. Do you need to install a host adapter in Node B?

    • If no, skip to Step 36.

    • If yes, proceed to Step 28.

  28. Is the host adapter you are installing the first FC-100/S host adapter on Node B?

    • If no, skip to Step 30.

    • If yes, determine whether the Fibre Channel support packages are already installed on these nodes. This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
  29. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 30.

    • If no, install them.

    The StorEdge T3/T3+ disk tray packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
  30. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  31. Stop the Sun Cluster software on Node B, and shut down the node.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  32. Power off Node B.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  33. Install the host adapter in Node B.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  34. If necessary, power on and boot Node B.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  35. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  36. If necessary, install gigabit interface converters (GBICs), as shown in Figure C-4.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  37. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node B as shown in Figure C-4.

    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure C-4 Adding a StorEdge T3/T3+ Disk Tray in a Single-Controller Configuration

    Graphic

  38. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node B.

    For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

  39. Shut down Node B.


    # shutdown -y -g0 -i0
    
  40. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    
  41. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new StorEdge T3/T3+ disk tray.


    # scdidadm -l
    

  42. Return the resource groups and device groups you identified in Step 10 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  43. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Remove a StorEdge T3/T3+ Disk Tray

Use this procedure to permanently remove a StorEdge T3/T3+ disk tray and its submirrors from a running cluster. This procedure also gives you the option of removing the host adapters from the nodes that are connected to the StorEdge T3/T3+ disk tray you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

During this procedure, you will lose access to the data that resides on the StorEdge T3/T3+ disk tray you are removing.


  1. Back up all database tables, data services, and volumes that are associated with the StorEdge T3/T3+ disk tray that you are removing.

  2. Detach the submirrors from the StorEdge T3/T3+ disk tray you are removing in order to stop all I/O activity to the StorEdge T3/T3+ disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
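    As a dry-run sketch with hypothetical diskset, disk group, and device names, the commands for this step look like the following. The leading DRYRUN echo prints each command instead of running it; unset DRYRUN to execute them on a real cluster node.

```shell
# DRYRUN=echo prints the commands; unset it on a real cluster node.
DRYRUN=echo

# Solstice DiskSuite: remove the drive from its diskset
# (oracle-set and d4 are hypothetical names):
$DRYRUN metaset -s oracle-set -d /dev/did/rdsk/d4

# VERITAS Volume Manager: remove the disk from its disk group
# (oradg and oradg01 are hypothetical names):
$DRYRUN vxdg -g oradg rmdisk oradg01
```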

  4. Determine the resource groups and device groups that are running on Node A and Node B.


    # scstat
    
  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  6. Stop the Sun Cluster software on Node A, and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Is the StorEdge T3/T3+ disk tray you are removing the last StorEdge T3/T3+ disk tray that is connected to Node A?

    • If yes, disconnect the fiber-optic cable between Node A and the Sun StorEdge FC-100 hub that is connected to this StorEdge T3/T3+ disk tray, then disconnect the fiber-optic cable between the Sun StorEdge FC-100 hub and this StorEdge T3/T3+ disk tray.

    • If no, proceed to Step 8.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

  8. Do you want to remove the host adapter from Node A?

    • If yes, power off Node A.

    • If no, skip to Step 11.

  9. Remove the host adapter from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  10. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  11. Boot Node A into cluster mode.


    {0} ok boot
    
  12. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  13. Stop the Sun Cluster software on Node B, and shut down Node B.


    # shutdown -y -g0 -i0
    
  14. Is the StorEdge T3/T3+ disk tray you are removing the last StorEdge T3/T3+ disk tray that is connected to the Sun StorEdge FC-100 hub?

    • If yes, disconnect the fiber-optic cable that connects this Sun StorEdge FC-100 hub and Node B.

    • If no, proceed to Step 15.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

  15. Do you want to remove the host adapter from Node B?

    • If yes, power off Node B.

    • If no, skip to Step 18.

  16. Remove the host adapter from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  17. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  18. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  19. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  20. Return the resource groups and device groups you identified in Step 4 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Host-to-Hub Component

Use this procedure to replace the following host-to-hub components:

  • Host-to-hub fiber-optic cable

  • FC-100 hub gigabit interface converter (GBIC) that connects the hub to the host

  • FC-100/S host adapter GBIC

  1. On the node that is connected to the host-to-hub connection you are replacing, determine the resource groups and device groups that are running on this node.


    # scstat
    
  2. Move all resource groups and device groups to another node.


    # scswitch -S -h nodename
    
  3. Replace the host-to-hub component.

    For the procedure on replacing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide. For the procedure on replacing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual. For the procedure on replacing an FC-100/S host adapter GBIC, see your host adapter documentation.

  4. Return the resource groups and device groups you identified in Step 1 to the node that is connected to the host-to-hub connection you replaced.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Sun StorEdge FC-100 Hub and Hub-to-Disk Tray Component

Use this procedure to replace the following hub-to-disk tray components:

  • Hub-to-disk tray fiber-optic cable

  • FC-100 hub GBIC that connects the hub to the disk tray

  • Sun StorEdge FC-100 hub

  • Sun StorEdge FC-100 hub power cord

  • Media interface adapter (MIA) on a StorEdge T3 disk tray (not applicable for StorEdge T3+ disk trays)

  1. Detach the submirrors on the StorEdge T3/T3+ disk tray that is connected to the hub-to-disk tray component you are replacing in order to stop all I/O activity to this StorEdge T3/T3+ disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the hub-to-disk tray component.

    For the procedure on replacing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide. For the procedure on replacing an FC-100 hub GBIC, a Sun StorEdge FC-100 hub, or a Sun StorEdge FC-100 hub power cord, see the FC-100 Hub Installation and Service Manual. For the procedure on replacing an MIA, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge T3/T3+ Disk Tray Controller

Use this procedure to replace a StorEdge T3/T3+ disk tray controller.

  1. Detach the submirrors on the StorEdge T3/T3+ disk tray that is connected to the controller you are replacing in order to stop all I/O activity to this StorEdge T3/T3+ disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the controller.

    For the procedure on replacing a StorEdge T3/T3+ controller, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge T3/T3+ Disk Tray Chassis

Use this procedure to replace a StorEdge T3/T3+ disk tray chassis. This procedure assumes that you are retaining all FRUs other than the chassis and the backplane. To replace the chassis, you must replace both the chassis and the backplane because these components are manufactured as one part.


Note -

Only trained, qualified service providers should use this procedure to replace a StorEdge T3/T3+ disk tray chassis.


  1. Detach the submirrors on the StorEdge T3/T3+ disk tray that is connected to the chassis you are replacing in order to stop all I/O activity to this StorEdge T3/T3+ disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the chassis/backplane.

    For the procedure on replacing a StorEdge T3/T3+ chassis, see the Sun StorEdge T3 Field Service Manual.

  3. Reattach the submirrors to resynchronize them.


    Note -

    Account for the change in the World Wide Name (WWN).


    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. Node A in this procedure refers to the node with the failed host adapter you are replacing. Node B is a backup node.

  1. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 9 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  3. Shut down Node A.


    # shutdown -y -g0 -i0
    
  4. Power off Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  5. Replace the failed host adapter.

    For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.

  6. Power on Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  8. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  9. Return the resource groups and device groups you identified in Step 1 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.