Sun Cluster 3.0 12/01 Hardware Guide

Chapter 8 Installing and Maintaining a Sun StorEdge T3 or T3+ Array Single-Controller Configuration

This chapter contains the procedures for installing, configuring, and maintaining Sun StorEdge™ T3 and Sun StorEdge T3+ arrays in a single-controller (non-interconnected) configuration. Differences between the StorEdge T3 and StorEdge T3+ procedures are noted where appropriate.

This chapter contains the following procedures:

• "How to Install StorEdge T3/T3+ Arrays"

• "How to Create a Sun StorEdge T3/T3+ Array Logical Volume"

• "How to Remove a Sun StorEdge T3/T3+ Array Logical Volume"

• "How to Upgrade StorEdge T3/T3+ Array Firmware"

• "How to Replace a Disk Drive"

• "How to Add a StorEdge T3/T3+ Array"

• "How to Remove a StorEdge T3/T3+ Array"

• "How to Replace a Host-to-Hub/Switch Component"

• "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component"

• "How to Replace a StorEdge T3/T3+ Array Controller"

• "How to Replace a StorEdge T3/T3+ Array Chassis"

• "How to Replace a Host Adapter"

For conceptual information on multihost disks, see the Sun Cluster 3.0 12/01 Concepts document.

For information about using a StorEdge T3 or StorEdge T3+ array as a storage device in a storage area network (SAN), see "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations".

Installing StorEdge T3/T3+ Arrays

This section contains the procedure for an initial installation of new StorEdge T3 or StorEdge T3+ arrays.

How to Install StorEdge T3/T3+ Arrays

Use this procedure to install and configure new StorEdge T3 or StorEdge T3+ arrays in a cluster that is not running. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and your server hardware manual.

  1. Install the host adapters in the nodes that are to be connected to the StorEdge T3/T3+ arrays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Sun StorEdge FC-100 hubs.

    For the procedure on installing Sun StorEdge FC-100 hubs, see the FC-100 Hub Installation and Service Manual.


    Note -

    Cabling procedures are different if you are using your StorEdge T3/T3+ arrays to create a storage area network (SAN) by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network where you want the new StorEdge T3/T3+ arrays to reside.

    This RARP server enables you to assign an IP address to the new StorEdge T3/T3+ arrays by using each StorEdge T3/T3+ array's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
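
    The referenced manual has the authoritative steps. As a rough sketch only, on a Solaris host you would publish the array's MAC-to-IP mapping in /etc/ethers and /etc/hosts and confirm that the RARP daemon is running. The MAC address, host name, and IP address below are placeholder values, not real ones.

    # echo "0:20:f2:1a:2b:3c t3-array-1" >> /etc/ethers   # placeholder MAC address and host name
    # echo "192.168.1.10 t3-array-1" >> /etc/hosts        # placeholder IP address
    # /usr/sbin/in.rarpd -a                               # answer RARP requests on all interfaces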

  4. (Skip this step if you are installing a StorEdge T3+ array) Install the media interface adapters (MIAs) in the StorEdge T3 arrays you are installing, as shown in Figure 8-1.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  5. If necessary, install gigabit interface converters (GBICs) in the Sun StorEdge FC-100 hubs, as shown in Figure 8-1.

    The GBICs let you connect the Sun StorEdge FC-100 hubs to the StorEdge T3/T3+ arrays you are installing. For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  6. Install fiber-optic cables between the Sun StorEdge FC-100 hubs and the StorEdge T3/T3+ arrays as shown in Figure 8-1.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  7. Install fiber-optic cables between the Sun StorEdge FC-100 hubs and the cluster nodes as shown in Figure 8-1.

  8. Install the Ethernet cables between the StorEdge T3/T3+ arrays and the Local Area Network (LAN), as shown in Figure 8-1.

  9. Install power cords to each array you are installing.

  10. Power on the StorEdge T3/T3+ arrays and confirm that all components are powered on and functional.


    Note -

    The StorEdge T3/T3+ arrays might require a few minutes to boot.


    For the procedure on powering on a StorEdge T3/T3+ array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 8-1 Cabling StorEdge T3/T3+ Arrays in a Single-Controller Configuration



    Note -

    Although Figure 8-1 shows a single-controller configuration, two arrays are shown to illustrate how two non-interconnected arrays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  11. (Optional) Configure the StorEdge T3/T3+ arrays with logical volumes.

    For the procedure on configuring the StorEdge T3/T3+ array with logical volumes, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  12. Telnet to each StorEdge T3/T3+ array you are adding and install the required StorEdge T3/T3+ array controller firmware.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
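
    The firmware installation steps themselves are in the patch README. As an illustrative sketch (the array host name and prompt shown here are placeholders), you might confirm the installed controller firmware level from the array's command line with the ver command:

    # telnet t3-array-1
    t3-array-1:/:<1> ver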

  13. Ensure that this new StorEdge T3/T3+ array has a unique target address.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
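
    As a quick check from the array's command line (the host name is again a placeholder), the port list command displays the target ID that is assigned to the array's host port:

    t3-array-1:/:<1> port list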

  14. Reset the StorEdge T3/T3+ array.

    For the procedure on rebooting or resetting a StorEdge T3/T3+ array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. Install the Solaris operating environment on the cluster nodes, and apply any required Solaris patches for Sun Cluster software and StorEdge T3/T3+ array support.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 12/01 Software Installation Guide. For the location of required Solaris patches and installation instructions for Sun Cluster software support, see the Sun Cluster 3.0 12/01 Release Notes. For a list of required Solaris patches for StorEdge T3/T3+ array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

Where to Go From Here

To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 12/01 Software Installation Guide.

Configuring a StorEdge T3/T3+ Array

This section contains the procedures for configuring a StorEdge T3 or StorEdge T3+ array in a running cluster. The following table lists these procedures.

Table 8-1 Task Map: Configuring a StorEdge T3/T3+ Array

Create an array logical volume.
    See "How to Create a Sun StorEdge T3/T3+ Array Logical Volume".

Remove an array logical volume.
    See "How to Remove a Sun StorEdge T3/T3+ Array Logical Volume".

How to Create a Sun StorEdge T3/T3+ Array Logical Volume

Use this procedure to create a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3/T3+ array that is to host the logical volume you are creating.

  1. Telnet to the StorEdge T3/T3+ array that is to host the logical volume you are creating.

  2. Create the logical volume.

    The creation of a logical volume involves adding, mounting, and initializing the logical volume.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
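
    As a hedged sketch of what this step involves (the volume name, drive range, and RAID level are illustrative assumptions, not recommendations), the add, initialize, and mount sequence on the array's command line might look like the following. Volume initialization can take a considerable amount of time.

    t3-array-1:/:<1> vol add v1 data u1d1-8 raid 5
    t3-array-1:/:<2> vol init v1 data
    t3-array-1:/:<3> vol mount v1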

  3. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    
  4. On one node that is connected to the StorEdge T3/T3+ array, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  5. Are you running VERITAS Volume Manager?

    • If not, go to Step 6.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.
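
    A typical invocation, run on each node that is attached to the new logical volume, is:

    # vxdctl enable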

  6. If necessary, partition the logical volume.

  7. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge T3/T3+ array logical volume, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

To configure a logical volume as a quorum device, see the Sun Cluster 3.0 12/01 System Administration Guide for the procedure on adding a quorum device.

How to Remove a Sun StorEdge T3/T3+ Array Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge T3/T3+ array that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

This procedure removes all data on the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing. Otherwise, proceed to Step 2.

  2. Is the logical volume you are removing a quorum device?


    # scstat -q
    
    • If yes, remove the quorum device before you proceed.

    • If no, go to Step 3.

    For the procedure on removing a quorum device, see the Sun Cluster 3.0 12/01 System Administration Guide.

  3. Are you running VERITAS Volume Manager?

    • If not, go to Step 4.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.
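
    A typical invocation (the disk access name is a placeholder) is:

    # vxdisk rm c1t3d0   # placeholder disk access name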

  4. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
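
    As a hedged sketch (the diskset, disk group, DID device, and disk names are placeholders), the command for your volume manager might resemble one of the following:

    # metaset -s setname -d /dev/did/rdsk/d4   # Solstice DiskSuite: remove the DID device from the diskset
    # vxdg -g dgname rmdisk diskname           # VERITAS Volume Manager: remove the disk from the disk group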

  5. Telnet to the array and remove the logical volume.

    For the procedure on deleting a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
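
    As an illustrative sketch (the array host name and volume name are placeholders), unmounting and then removing the volume from the array's command line might look like:

    t3-array-1:/:<1> vol unmount v1
    t3-array-1:/:<2> vol remove v1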

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 13 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  8. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to reboot after it shuts down to the ok prompt.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  9. On Node A, remove the obsolete device IDs (DIDs).


    # devfsadm -C
    # scdidadm -C
    
  10. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  11. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to reboot after it shuts down to the ok prompt.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. On Node B, remove the obsolete DIDs.


    # devfsadm -C
    # scdidadm -C
    
  13. Return the resource groups and device groups you identified in Step 6 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Create a Sun StorEdge T3/T3+ Array Logical Volume".

Maintaining a StorEdge T3/T3+ Array

This section contains the procedures for maintaining a StorEdge T3 or StorEdge T3+ array. The following table lists these procedures. This section does not include procedures for adding or removing disk drives because a StorEdge T3/T3+ array operates only when fully configured.


Caution -

If you remove any field-replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ array is designed so an orderly shutdown occurs when you remove a component for longer than 30 minutes. A replacement part must be immediately available before starting a FRU replacement procedure. You must replace a FRU within 30 minutes or the StorEdge T3/T3+ array, and all attached StorEdge T3/T3+ arrays, will shut down and power off.


Table 8-2 Task Map: Maintaining a StorEdge T3/T3+ Array

Upgrade StorEdge T3/T3+ array firmware.
    See "How to Upgrade StorEdge T3/T3+ Array Firmware".

Replace a disk drive.
    See "How to Replace a Disk Drive".

Add a StorEdge T3/T3+ array.
    See "How to Add a StorEdge T3/T3+ Array".

Remove a StorEdge T3/T3+ array.
    See "How to Remove a StorEdge T3/T3+ Array".

Replace a host-to-hub fiber-optic cable.
    See "How to Replace a Host-to-Hub/Switch Component".

Replace an FC-100/S host adapter GBIC.
    See "How to Replace a Host-to-Hub/Switch Component".

Replace an FC-100 hub GBIC that connects an FC-100 hub to a host.
    See "How to Replace a Host-to-Hub/Switch Component".

Replace a hub-to-array fiber-optic cable.
    See "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component".

Replace an FC-100 hub GBIC that connects the FC-100 hub to a StorEdge T3 array.
    See "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component".

Replace a Sun StorEdge FC-100 hub.
    See "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component".

Replace a StorEdge Network FC Switch-8 or Switch-16 (applies to SAN-configured clusters only).
    See "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component".

Replace a Sun StorEdge FC-100 hub power cord.
    See "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component".

Replace a media interface adapter (MIA) on a StorEdge T3 array (not applicable for StorEdge T3+ arrays).
    See "How to Replace a Hub, Switch, or Hub/Switch-to-Array Component".

Replace a StorEdge T3 array controller.
    See "How to Replace a StorEdge T3/T3+ Array Controller".

Replace a StorEdge T3 array chassis.
    See "How to Replace a StorEdge T3/T3+ Array Chassis".

Replace a host adapter.
    See "How to Replace a Host Adapter".

Upgrade a StorEdge T3 array controller to a StorEdge T3+ array controller.
    See the Sun StorEdge T3 Array Controller Upgrade Manual.

Replace a Power and Cooling Unit (PCU). Follow the same procedure used in a non-cluster environment.
    See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

Replace a unit interconnect card (UIC). Follow the same procedure used in a non-cluster environment.
    See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

Replace a StorEdge T3 array power cable. Follow the same procedure used in a non-cluster environment.
    See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

Replace an Ethernet cable. Follow the same procedure used in a non-cluster environment.
    See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

How to Upgrade StorEdge T3/T3+ Array Firmware

Use this procedure to upgrade StorEdge T3/T3+ array firmware in a running cluster. StorEdge T3/T3+ array firmware includes controller firmware, unit interconnect card (UIC) firmware, and disk drive firmware.


Caution -

Perform this procedure on one StorEdge T3/T3+ array at a time. This procedure requires that you reset the StorEdge T3/T3+ array you are upgrading. If you reset more than one StorEdge T3/T3+ array, your cluster will lose access to data if the StorEdge T3/T3+ arrays are submirrors of each other.


  1. On one node attached to the StorEdge T3/T3+ array you are upgrading, detach that StorEdge T3/T3+ array's submirrors.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
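
    As a hedged sketch (the mirror, submirror, disk group, and plex names are placeholders), the detach command for your volume manager might resemble one of the following; the matching reattach operation in Step 4 would use metattach or vxplex att.

    # metadetach d0 d10              # Solstice DiskSuite: detach submirror d10 from mirror d0
    # vxplex -g dgname det plexname  # VERITAS Volume Manager: detach the plex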

  2. Apply the controller, disk drive, and UIC firmware patches.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  3. Reset the StorEdge T3/T3+ array, if you have not already done so.

    For the procedure on rebooting a StorEdge T3/T3+ array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  4. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Disk Drive

Use this procedure to replace one failed disk drive in a StorEdge T3/T3+ array in a running cluster.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ array is designed so an orderly shutdown occurs when you remove a component for longer than 30 minutes. A replacement part must be immediately available before starting a FRU replacement procedure. You must replace a FRU within 30 minutes or the StorEdge T3/T3+ array, and all attached StorEdge T3/T3+ arrays, will shut down and power off.


  1. If the failed disk drive impacted the logical volume's availability, remove the logical volume from volume management control. Otherwise, proceed to Step 2.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the disk drive.

    For the procedure on replacing a disk drive, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. If you removed a logical volume from volume management control in Step 1, return it to volume management control. Otherwise, Step 2 completes this procedure.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Add a StorEdge T3/T3+ Array

Use this procedure to add a new StorEdge T3/T3+ array to a running cluster.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network the new StorEdge T3/T3+ array is to reside on, and then assign an IP address to the new StorEdge T3/T3+ array.

    This RARP server enables you to assign an IP address to the new StorEdge T3/T3+ array by using the StorEdge T3/T3+ array's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  2. (Skip this step if you are adding a StorEdge T3+ array) Install the media interface adapter (MIA) in the StorEdge T3 array you are adding as shown in Figure 8-2.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  3. If necessary, install gigabit interface converters (GBICs) in the Sun StorEdge FC-100 hub as shown in Figure 8-2.

    The GBICs let you connect the Sun StorEdge FC-100 hubs to the StorEdge T3/T3+ arrays you are adding.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.


    Note -

    Cabling procedures are different if you are using your StorEdge T3/T3+ arrays to create a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  4. Install the Ethernet cable between the StorEdge T3/T3+ array and the Local Area Network (LAN), as shown in Figure 8-2.

  5. Power on the StorEdge T3/T3+ array.


    Note -

    The StorEdge T3/T3+ array might require a few minutes to boot.


    For the procedure on powering on a StorEdge T3/T3+ array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Telnet to the StorEdge T3/T3+ array you are adding, and, if necessary, install the required StorEdge T3/T3+ array controller firmware.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  7. Does this new StorEdge T3/T3+ array have a unique target address?

    • If yes, proceed to Step 8.

    • If no, change the target address for this new StorEdge T3/T3+ array.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  8. Install a fiber-optic cable between the Sun StorEdge FC-100 hub and the StorEdge T3/T3+ array as shown in Figure 8-2.

    For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 8-2 Adding a StorEdge T3/T3+ Array in a Single-Controller Configuration



    Note -

    Although Figure 8-2 shows a single-controller configuration, two arrays are shown to illustrate how two non-interconnected arrays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  9. Configure the new StorEdge T3/T3+ array.

    For the procedure on creating a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  10. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 42 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  11. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  12. Do you need to install a host adapter in Node A?

    • If yes, proceed to Step 13.

    • If no, skip to Step 20.

  13. Is the host adapter you are installing the first FC-100/S host adapter on Node A?

    • If no, skip to Step 15.

    • If yes, determine whether the Fibre Channel support packages are already installed on these nodes. This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
  14. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 15.

    • If no, install them.

    The StorEdge T3/T3+ array packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
  15. Stop the Sun Cluster software on Node A and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  16. Power off Node A.

  17. Install the host adapter in Node A.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  18. If necessary, power on and boot Node A.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  19. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  20. If necessary, install a GBIC in the Sun StorEdge FC-100 hub, as shown in Figure 8-3.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.


    Note -

    Cabling procedures are different if you are using your StorEdge T3/T3+ arrays to create a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  21. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node A as shown in Figure 8-3.

    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 8-3 Adding a StorEdge T3/T3+ Array in a Single-Controller Configuration


  22. If necessary, install the required Solaris patches for StorEdge T3/T3+ array support on Node A.

    For a list of required Solaris patches for StorEdge T3/T3+ array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  23. Shut down Node A.


    # shutdown -y -g0 -i0
    
  24. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    {0} ok boot -r
    
  25. Label the new logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  26. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new StorEdge T3/T3+ array.


    # scdidadm -l
    

  27. Do you need to install a host adapter in Node B?

    • If yes, proceed to Step 28.

    • If no, skip to Step 30.

  28. Is the host adapter you are installing the first FC-100/S host adapter on Node B?

    • If no, skip to Step 30.

    • If yes, determine whether the Fibre Channel support packages are already installed on these nodes. This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
  29. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 30.

    • If no, install them.

    The StorEdge T3/T3+ array packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
  30. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  31. Stop the Sun Cluster software on Node B, and shut down the node.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  32. Power off Node B.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  33. Install the host adapter in Node B.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  34. If necessary, power on and boot Node B.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  35. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  36. If necessary, install a GBIC as shown in Figure 8-4.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.


    Note -

    Cabling procedures are different if you are using your StorEdge T3/T3+ arrays to create a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  37. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node B as shown in Figure 8-4.

    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 8-4 Adding a StorEdge T3/T3+ Array in a Single-Controller Configuration


  38. If necessary, install the required Solaris patches for StorEdge T3/T3+ array support on Node B.

    For a list of required Solaris patches for StorEdge T3/T3+ array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  39. Shut down Node B.


    # shutdown -y -g0 -i0
    
  40. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    
  41. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new StorEdge T3/T3+ array.


    # scdidadm -l
    

  42. Return the resource groups and device groups you identified in Step 10 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  43. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
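
    As a hedged sketch (the diskset, disk group, disk media, and device names are placeholders), incorporating the new logical volume might resemble one of the following:

    # metaset -s setname -a /dev/did/rdsk/d12   # Solstice DiskSuite: add the DID device to a diskset
    # vxdg -g dgname adddisk disk01=c2t1d0      # VERITAS Volume Manager: add an initialized disk to a disk group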

How to Remove a StorEdge T3/T3+ Array

Use this procedure to permanently remove a StorEdge T3/T3+ array and its submirrors from a running cluster. This procedure provides the flexibility to remove the host adapters from the nodes for the StorEdge T3/T3+ array you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

During this procedure, you will lose access to the data that resides on the StorEdge T3/T3+ array you are removing.


  1. Back up all database tables, data services, and volumes that are associated with the StorEdge T3/T3+ array that you are removing.

  2. Detach the submirrors from the StorEdge T3/T3+ array you are removing in order to stop all I/O activity to the StorEdge T3/T3+ array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 20 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  6. Stop the Sun Cluster software on Node A, and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Is the StorEdge T3/T3+ array you are removing the last StorEdge T3/T3+ array that is connected to Node A?

    • If yes, disconnect the fiber-optic cable between Node A and the Sun StorEdge FC-100 hub that is connected to this StorEdge T3/T3+ array, then disconnect the fiber-optic cable between the Sun StorEdge FC-100 hub and this StorEdge T3/T3+ array.

    • If no, proceed to Step 8.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note -

    If you are using your StorEdge T3/T3+ arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  8. Do you want to remove the host adapter from Node A?

    • If yes, power off Node A.

    • If no, skip to Step 11.

  9. Remove the host adapter from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  10. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  11. Boot Node A into cluster mode.


    {0} ok boot
    
  12. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  13. Stop the Sun Cluster software on Node B, and shut down Node B.


    # shutdown -y -g0 -i0
    
  14. Is the StorEdge T3/T3+ array you are removing the last StorEdge T3/T3+ array that is connected to the Sun StorEdge FC-100 hub?

    • If yes, disconnect the fiber-optic cable that connects this Sun StorEdge FC-100 hub and Node B.

    • If no, proceed to Step 15.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note -

    If you are using your StorEdge T3/T3+ arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  15. Do you want to remove the host adapter from Node B?

    • If yes, power off Node B.

    • If no, skip to Step 18.

  16. Remove the host adapter from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  17. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  18. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  19. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  20. Return the resource groups and device groups you identified in Step 4 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Host-to-Hub/Switch Component

Use this procedure to replace the following host-to-hub/switch components. (StorEdge T3/T3+ arrays in a single-controller configuration can be used with Sun StorEdge Network FC Switch-8 or Switch-16 switches when creating a SAN.)

• Host-to-hub/switch fiber-optic cable

• FC-100/S host adapter GBIC

• FC-100 hub GBIC that connects an FC-100 hub to a host

  1. On the node that is connected to the host-to-hub/switch connection you are replacing, determine the resource groups and device groups that are running on this node.


    # scstat
    
  2. Move all resource groups and device groups to another node.


    # scswitch -S -h nodename
    
  3. Replace the host-to-hub/switch component.

    • For the procedure on replacing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    • For the procedure on replacing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

    • For the procedure on replacing an FC-100/S host adapter GBIC, see your host adapter documentation.

  4. Return the resource groups and device groups you identified in Step 1 to the node that is connected to the host-to-hub/switch connection you replaced.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Hub, Switch, or Hub/Switch-to-Array Component

Use this procedure to replace a hub or switch, or one of the following hub/switch-to-array components. (StorEdge T3/T3+ arrays in a single-controller configuration can be used with StorEdge Network FC Switch-8 or Switch-16 switches when creating a SAN.)

• Hub/switch-to-array fiber-optic cable

• FC-100 hub GBIC that connects an FC-100 hub to a StorEdge T3/T3+ array

• Sun StorEdge FC-100 hub

• Sun StorEdge FC-100 hub power cord

• Media interface adapter (MIA) on a StorEdge T3 array (not applicable for StorEdge T3+ arrays)

• StorEdge Network FC Switch-8 or Switch-16 (applies to SAN-configured clusters only)

  1. Detach the submirrors on the StorEdge T3/T3+ array that is connected to the hub/switch-to-array fiber-optic cable you are replacing in order to stop all I/O activity to this StorEdge T3/T3+ array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the hub, switch, or hub/switch-to-array component.

    • For the procedure on replacing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    • For the procedure on replacing an FC-100 hub GBIC, a Sun StorEdge FC-100 hub, or a Sun StorEdge FC-100 hub power cord, see the FC-100 Hub Installation and Service Manual.

    • For the procedure on replacing an MIA, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • If you are replacing FC switches in a SAN, follow the hardware installation and SAN configuration instructions in the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.


      Note -

      If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about saving and recalling switch configurations see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.



      Note -

      Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. Increasing the value of the probe_timeout parameter to more than 90 seconds avoids unnecessary resource group restarts when one of the FC switches is powered off.


  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge T3/T3+ Array Controller

Use this procedure to replace a StorEdge T3/T3+ array controller.

  1. Detach the submirrors on the StorEdge T3/T3+ array that is connected to the controller you are replacing in order to stop all I/O activity to this StorEdge T3/T3+ array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the controller.

    For the procedure on replacing a StorEdge T3/T3+ controller, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge T3/T3+ Array Chassis

Use this procedure to replace a StorEdge T3/T3+ array chassis. This procedure assumes that you are retaining all FRUs other than the chassis and the backplane. To replace the chassis, you must replace both the chassis and the backplane because these components are manufactured as one part.


Note -

Only trained, qualified service providers should use this procedure to replace a StorEdge T3/T3+ array chassis.


  1. Detach the submirrors on the StorEdge T3/T3+ array that is connected to the chassis you are replacing in order to stop all I/O activity to this StorEdge T3/T3+ array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the chassis/backplane.

    For the procedure on replacing a StorEdge T3/T3+ chassis, see the Sun StorEdge T3 and T3+ Array Field Service Manual.

  3. Reattach the submirrors to resynchronize them.


    Note -

    Account for the change in the World Wide Name (WWN).


    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. Node A in this procedure refers to the node with the failed host adapter you are replacing. Node B is a backup node.

  1. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 9 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  3. Shut down Node A.


    # shutdown -y -g0 -i0
    
  4. Power off Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  5. Replace the failed host adapter.

    For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.

  6. Power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  9. Return the resource groups and device groups you identified in Step 1 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations

This section contains information for using StorEdge T3/T3+ arrays in a single-controller configuration as the storage devices in a SAN that is in a Sun Cluster environment.

Full, detailed hardware and software installation and configuration instructions for creating and maintaining a SAN are described in the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 that is shipped with your switch hardware. Use the cluster-specific procedures in this chapter for installing and maintaining StorEdge T3/T3+ arrays in your cluster. Refer to the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 for switch and SAN instructions and information on such topics as switch ports and zoning, and required software and firmware.

Hardware components of a SAN include Fibre Channel switches, Fibre Channel host adapters, and storage devices and enclosures. The software components include drivers bundled with the operating system, firmware for the switches, management tools for the switches and storage devices, volume managers, if needed, and other administration tools.

StorEdge T3/T3+ Array (Single-Controller) Supported SAN Features

Table 8-3 lists the SAN features that are supported with the StorEdge T3/T3+ array in a single-controller configuration. See the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 for details about these features.

Table 8-3 StorEdge T3/T3+ Array (Single-Controller) Supported SAN Features

Cascading: Yes

Zone type: SL zone, nameserver zone*

*When using nameserver zones, the host must be connected to the F-port on the switch; the StorEdge T3/T3+ array must be connected to the TL port of the switch.

For the maximum number of arrays per SL zone, the maximum number of initiators per LUN, and the maximum number of initiators per zone, see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.

Sample StorEdge T3/T3+ Array (Single-Controller) SAN Configuration

Figure 8-5 shows a sample SAN hardware configuration when using two hosts and four StorEdge T3 arrays that are in a single-controller configuration. See the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 for details.

Figure 8-5 Sample StorEdge T3/T3+ Array (Single-Controller) SAN Configuration


StorEdge T3/T3+ Array (Single-Controller) SAN Clustering Considerations

If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about saving and recalling switch configurations see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.