This chapter contains the procedures for installing, configuring, and maintaining Sun StorEdge 3900 Series and Sun StorEdge 6900 Series systems.
This chapter contains the following procedures:
"How to Configure LUNs on the Arrays in Your StorEdge 3900 or 6900 Series System"
"How to Add StorEdge 3900 or 6900 Series Systems to a Running Cluster"
"How to Remove StorEdge 3900 or 6900 Series Systems From a Running Cluster"
"How to Replace a Node-to-Switch Component in a Running Cluster"
"How to Replace a Virtualization Engine in a Running Cluster (StorEdge 6900 Series Only)"
"How to Remove StorEdge T3/T3+ Arrays From a Running Cluster"
The StorEdge 3900 and 6900 Series configuration utilities can be run from a menu-driven interface or a command line interface (this chapter describes the menu-driven interface). For detailed information about Sun StorEdge 3900 and 6900 Series architecture, features, and configuration utilities, see the Sun StorEdge 3900 and 6900 Series Reference Manual and the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.
This section contains the procedure for an initial installation of StorEdge 3900 or 6900 Series systems in a new Sun Cluster that is not running. If you are adding systems to an existing cluster, use the procedure in "How to Add StorEdge 3900 or 6900 Series Systems to a Running Cluster".
Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and your server hardware manual.
Install host adapters in the nodes that will be connected to the system.
For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.
Unpack, place, and level the system cabinet.
For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.
Install cables in the following order.
Install the system power cord.
Install the system grounding strap.
Install the cables from the FC switches to the cluster nodes (see Figure E-1 for an example).
Install the Ethernet cable to the local area network (LAN).
For instructions on cabling, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.
Power on the StorEdge 3900 or 6900 Series system and the cluster nodes.
For instructions on powering on the system, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual. For instructions on powering on a node, see the documentation that came with your node hardware.
Set the host name, IP address, date, and timezone for the system's storage service processor.
For detailed instructions, see the initial field installation instructions in the Sun StorEdge 3900 and 6900 Series Reference Manual.
For StorEdge 3900 Series systems only: Remove the preconfigured, default hard zoning from the system's FC switches.
For StorEdge 3900 Series only: To configure the StorEdge 3900 Series system for use with Sun Cluster host-based mirroring, the default hard zones must be removed from the system's FC switches. See the SANbox-8/16 Switch Management User's Manual for instructions on using the installed SANsurfer interface for removing the preconfigured hard zones from all Sun StorEdge Network FC Switch-8 and Switch-16 switches.
Install the Solaris operating environment to the cluster nodes and apply the required Solaris patches for Sun Cluster software and StorEdge T3+ array support.
For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 12/01 Software Installation Guide.
See the Sun Cluster 3.0 5/02 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
Install any required patches or software for Sun StorEdge Traffic Manager software support to the cluster nodes from the Sun Download Center Web site, http://www.sun.com/storage/san/.
For instructions on installing the software, see the information on the web site.
Activate the Sun StorEdge Traffic Manager software functionality in the software you installed to the cluster nodes in Step 8.
To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed on the node to change the mpxio-disable parameter to no:
mpxio-disable="no"
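The edit itself can be scripted. The sketch below works on a throwaway copy in /tmp rather than the live /kernel/drv/scsi_vhci.conf, and the sample file contents (including the trailing semicolons) are an assumption about a stock configuration file:

```shell
# Work on a throwaway copy; on a cluster node you would edit
# /kernel/drv/scsi_vhci.conf itself (sample contents are an assumption).
conf=/tmp/scsi_vhci.conf.sample
printf 'name="scsi_vhci" class="root";\nmpxio-disable="yes";\n' > "$conf"

# Flip mpxio-disable from "yes" to "no" to enable multipathing.
sed 's/^mpxio-disable="yes";/mpxio-disable="no";/' "$conf" > "$conf.new"

grep '^mpxio-disable' "$conf.new"
# prints: mpxio-disable="no";
```

After verifying the result, the edited copy would replace the original file before the reconfiguration boot.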
Shut down the entire cluster.
# scshutdown -y -g0
For the full procedure on shutting down a cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.
Perform a reconfiguration boot on all nodes to create the new Solaris device files and links.
{0} ok boot -r
On all nodes, update the /devices and /dev entries.
# devfsadm -C
# devfsadm

On all nodes, update the paths to the DID instances.

# scdidadm -C
# scdidadm -r
For StorEdge 6900 Series systems only: On any cluster node, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.
# cfgadm -al
# cfgadm -c configure <c::controller id>
See the cfgadm(1M) man page for more information about the command and its options.
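As a sketch of how you might pick out the attachment points that still need `cfgadm -c configure`, the excerpt below filters a saved `cfgadm -al` listing for unconfigured entries. The Ap_Ids and states shown are assumptions for illustration, not output from a real VE controller:

```shell
# Hypothetical `cfgadm -al` excerpt (Ap_Ids and states are made up);
# on a real node you would pipe the live command output instead.
cat > /tmp/cfgadm.out <<'EOF'
Ap_Id                Type         Receptacle   Occupant     Condition
c3                   fc-fabric    connected    unconfigured unknown
c4                   fc-fabric    connected    configured   unknown
EOF

# List attachment points still to be enabled with `cfgadm -c configure`.
awk '$4 == "unconfigured" { print $1 }' /tmp/cfgadm.out
# prints: c3
```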
On all nodes, use the luxadm probe command to confirm that all arrays you installed are now visible.
# luxadm probe
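If you want to check the probe results mechanically, you can count the logical paths in saved output; one should appear for each array volume you expect to see. The probe output below is a made-up sample (the WWNs and device paths are invented for illustration); on a real node you would capture the live `luxadm probe` output instead:

```shell
# Hypothetical saved output of `luxadm probe` (WWNs and device paths
# are made up); on a real node run `luxadm probe > /tmp/probe.out`.
cat > /tmp/probe.out <<'EOF'
Found Fibre Channel device(s):
  Node WWN:50020f2300003a4b  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50020F2300003A4Bd0s2
  Node WWN:50020f2300003c1d  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50020F2300003C1Dd0s2
EOF

# Count one logical path per visible array volume.
grep -c 'Logical Path:' /tmp/probe.out
# prints: 2
```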
To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 12/01 Software Installation Guide.
You can customize the configuration of the logical unit numbers (LUNs) on the StorEdge T3+ arrays in your StorEdge 3900 or 6900 Series system, using the system's menu-driven or command-line interface. The Sun StorEdge 3900 and 6900 Series Reference Manual describes the factory default settings for the StorEdge T3+ arrays in the StorEdge 3900 and 6900 Series systems.
For StorEdge 6900 Series systems only, you can also customize the configuration of the virtual LUNs (VLUNs) on the virtualization engines in the system, using the system's menu-driven or command-line interface. The Sun StorEdge 3900 and 6900 Series Reference Manual describes the factory default settings for the virtualization engines in the StorEdge 6900 Series systems.
A summary of the steps used to configure the LUNs, or logical volumes, on the StorEdge T3+ arrays in your StorEdge 3900 or 6900 system is listed below.
If you are using the menu-driven interface, perform the following steps.
On the storage service processor (SSP), start the Configuration Utilities using the runsecfg command.
Select T3+ Configuration Utility from the main menu.
Enter your StorEdge T3+ array password when you are prompted.
Step through the submenus to configure the StorEdge T3+ arrays. See the Sun StorEdge 3900 and 6900 Series Reference Manual for more information about the submenus.
If you are using the command-line interface, perform the following steps.
On the SSP, enter your StorEdge T3+ array password (if prompted to do so).
Use the SSP StorEdge T3+ array commands to configure the arrays in your system. See the Sun StorEdge 3900 and 6900 Series Reference Manual and the man pages for these commands for more information.
The Sun StorEdge 6900 Series systems' virtualization engines allow you to divide StorEdge T3+ array LUNs into small virtual LUNs (VLUNs) for more customized storage usage, such as in a storage area network (SAN).
A summary of the steps used to configure the VLUNs on the virtualization engines in your StorEdge 6900 system is listed below.
If you are using the menu-driven interface, perform the following steps.
On the SSP, start the Configuration Utilities using the runsecfg command.
Select VE Configuration Utility from the main menu.
Step through the submenus to configure the virtualization engines. See the Sun StorEdge 3900 and 6900 Series Reference Manual for more information about the submenus.
After you configure the VLUNs, from any cluster node use the scgdevs command to update the global device IDs.
# scgdevs
On one node connected to the partner-group, use the format command to label the new VLUNs.
# format
See the format command man page for more information about using the command.
If you are using the command-line interface, perform the following steps.
Use the SSP virtualization engine commands to configure the virtualization engines in your system. See the Sun StorEdge 3900 and 6900 Series Reference Manual and the man pages for these commands for more information.
After you configure the VLUNs, from any cluster node use the scgdevs command to update the global device IDs.
# scgdevs
On one node connected to the partner-group, use the format command to label the new VLUNs.
# format
See the format command man page for more information about using the command.
This section contains the procedures for maintaining StorEdge 3900 and 6900 Series systems. Table E-1 lists these procedures. This section does not include procedures for adding or removing disk drives because the StorEdge T3+ arrays in your StorEdge 3900 or 6900 Series system operate only when fully configured with disk drives.
If you remove any field replaceable unit (FRU) from the StorEdge T3+ arrays for an extended period of time, thermal complications might result. To prevent these complications, the StorEdge T3+ array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3+ array, and all attached StorEdge T3+ arrays, will shut down and power off.
Task | For Instructions, Go To...
---|---
Add a StorEdge 3900 or 6900 Series system. | "How to Add StorEdge 3900 or 6900 Series Systems to a Running Cluster"
Remove a StorEdge 3900 or 6900 Series system. | "How to Remove StorEdge 3900 or 6900 Series Systems From a Running Cluster"
Replace a virtualization engine (StorEdge 6900 Series only). | "How to Replace a Virtualization Engine in a Running Cluster (StorEdge 6900 Series Only)"
Replace a node-to-switch fiber optic cable. | "How to Replace a Node-to-Switch Component in a Running Cluster"
Replace a gigabit interface converter (GBIC) on a node's host adapter. | "How to Replace a Node-to-Switch Component in a Running Cluster"
Replace a GBIC on an FC switch, connecting to a node. | "How to Replace a Node-to-Switch Component in a Running Cluster"
Remove a T3+ array partner group from the system. | "How to Remove StorEdge T3/T3+ Arrays From a Running Cluster"
Upgrade StorEdge T3+ array firmware. | "Upgrading Firmware on Arrays That Support Submirrored Data" or "Upgrading Firmware on Arrays That Do Not Support Submirrored Data"
Replace a disk drive in an array. | |
Replace a host adapter in a node. | |
Replace a StorEdge network FC switch-8 or switch-16. Follow the same procedure used in a non-cluster environment. | Sun StorEdge 3900 and 6900 Series Reference Manual
Replace an array-to-switch fiber optic cable. Follow the same procedure used in a non-cluster environment. | Sun StorEdge 3900 and 6900 Series Reference Manual
Replace a GBIC on an FC switch, connecting to an array. Follow the same procedure used in a non-cluster environment. | Sun StorEdge 3900 and 6900 Series Reference Manual
Replace a StorEdge 3900 or 6900 Series storage service processor. Follow the same procedure used in a non-cluster environment. | Sun StorEdge 3900 and 6900 Series Reference Manual
Replace a StorEdge 3900 or 6900 Series Ethernet hub. Follow the same procedure used in a non-cluster environment. | Sun StorEdge 3900 and 6900 Series Reference Manual
Replace a StorEdge T3+ power and cooling unit (PCU). Follow the same procedure used in a non-cluster environment. | Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual
Replace a StorEdge T3+ unit interconnect card (UIC). Follow the same procedure used in a non-cluster environment. | Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual
Replace a StorEdge T3+ array power cable. Follow the same procedure used in a non-cluster environment. | Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual
Replace a StorEdge T3+ Ethernet cable. Follow the same procedure used in a non-cluster environment. | Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual
Use this procedure to add new StorEdge 3900 and 6900 Series systems to a running cluster. To install systems to a new Sun Cluster that is not running, use the procedure in "How to Install StorEdge 3900 and 6900 Series Systems".
This procedure defines "Node A" as the node you begin working with, and "Node B" as the second attached node.
Unpack, place, and level the system cabinet.
For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.
Install cables in the following order.
Install the system power cord.
Install the system grounding strap.
Install the cables from the FC switches to the cluster nodes (see Figure E-1 for an example).
Install the Ethernet cable to the LAN.
For instructions on cabling, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.
Power on the new system.
The StorEdge T3+ arrays in your system might take several minutes to boot.
For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.
Set the host name, IP address, date, and timezone for the system's Storage Service Processor.
For detailed instructions, see the initial field installation instructions in the Sun StorEdge 3900 and 6900 Series Reference Manual.
For StorEdge 3900 Series systems only: Remove the preconfigured, default hard zoning from the new system's FC switches.
For StorEdge 3900 Series only: To configure the StorEdge 3900 Series system for use with Sun Cluster host-based mirroring, the default hard zones must be removed from the system's FC switches. See the SANbox-8/16 Switch Management User's Manual for instructions on using the installed SANsurfer interface for removing the preconfigured hard zones from all Sun StorEdge Network FC Switch-8 and Switch-16 switches.
Determine the resource groups and device groups running on all nodes.
Record this information because you will use it in Step 48 of this procedure to return resource groups and device groups to these nodes.
# scstat
Move all resource groups and device groups off Node A.
# scswitch -S -h nodename
Do you need to install host adapters in Node A?
Is the host adapter you are installing the first host adapter on Node A?
If not, go to Step 11.
If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.
# pkginfo | egrep Wlux
system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)
Are the required support packages already installed?
If they are already installed, go to Step 11.
If not, install the required support packages that are missing.
The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.
# pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
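One way to see which of the required packages still need to be added is to compare the required list against the pkginfo output. The installed-package listing below is a hypothetical sample from a node that is missing two packages; on a real node you would capture the live `pkginfo | egrep Wlux` output:

```shell
# Required host adapter support packages (from the list above).
cat > /tmp/required.txt <<'EOF'
SUNWluxd
SUNWluxdx
SUNWluxl
SUNWluxlx
SUNWluxop
SUNWluxox
EOF

# Hypothetical `pkginfo | egrep Wlux` output from a node missing two
# packages; on a real node capture the live output instead.
cat > /tmp/installed.txt <<'EOF'
system SUNWluxd  Sun Enterprise Network Array sf Device Driver
system SUNWluxdx Sun Enterprise Network Array sf Device Driver (64-bit)
system SUNWluxl  Sun Enterprise Network Array socal Device Driver
system SUNWluxop Sun Enterprise Network Array firmware and utilities
EOF

# Print the packages still to be added with pkgadd.
while read pkg; do
  grep -qw "$pkg" /tmp/installed.txt || echo "$pkg"
done < /tmp/required.txt
# prints: SUNWluxlx and SUNWluxox, one per line
```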
Shut down and power off Node A.
# shutdown -y -g0 -i0
For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.
Install the host adapters in Node A.
For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.
Power on and boot Node A into non-cluster mode.
{0} ok boot -x
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
If necessary, upgrade the host adapter firmware on Node A.
See the Sun Cluster 3.0 5/02 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
If necessary, install GBICs in the FC switches.
For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.
Connect fiber optic cables between Node A and the FC switches in your StorEdge 3900 or 6900 Series system (see Figure E-1 for an example).
If necessary, install the required Solaris patches for StorEdge T3+ array support on Node A.
See the Sun Cluster 3.0 5/02 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any Solaris patch, see the patch README file.
Install any required patches or software for Sun StorEdge Traffic Manager software support to Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/.
For instructions on installing the software, see the information on the web site.
Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 18.
To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:
mpxio-disable="no"
Shut down Node A.
# shutdown -y -g0 -i0
Perform a reconfiguration boot on Node A to create the new Solaris device files and links.
{0} ok boot -r
On all nodes, update the /devices and /dev entries.

# devfsadm -C
# devfsadm

On all nodes, update the paths to the DID instances.

# scdidadm -C
# scdidadm -r
Are you adding a StorEdge 3900 Series or StorEdge 6900 Series system?
If you are adding a StorEdge 3900 Series system, go to Step 26.
If you are adding StorEdge 6900 Series systems: On Node A, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.
# cfgadm -al
# cfgadm -c configure <c::controller id>
See the cfgadm(1M) man page for more information about the command and its options.
(Optional) Configure VLUNs on the VEs in the new StorEdge 6900 Series system.
For instructions on configuring VLUNs in a cluster, see "How to Configure VLUNs on the Virtualization Engines in Your StorEdge 6900 Series System".
(Optional) On Node A, verify that the device IDs (DIDs) were assigned to the new array.
# scdidadm -l
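The listing below sketches what `scdidadm -l` output looks like and how the DID instances for a new array's controller path can be pulled out of it. The instance numbers, host name, and controller number (c6) are assumptions for illustration only:

```shell
# Hypothetical `scdidadm -l` excerpt (instance numbers and device paths
# are made up); run the live command on the node instead.
cat > /tmp/did.out <<'EOF'
1    phys-node1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1
5    phys-node1:/dev/rdsk/c6t1d0    /dev/did/rdsk/d5
6    phys-node1:/dev/rdsk/c6t1d1    /dev/did/rdsk/d6
EOF

# Show the DID instances assigned to the new array's controller (c6 here).
awk '$2 ~ /c6t/ { print $1, $3 }' /tmp/did.out
# prints the two c6 entries: instances 5 and 6 with their DID paths
```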
Do you need to install host adapters in Node B?
Is the host adapter you are installing the first host adapter on Node B?
If not, go to Step 30.
If it is the first host adapter, determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.
# pkginfo | egrep Wlux
system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)
Are the required support packages already installed?
If they are already installed, go to Step 30.
If not, install the missing support packages.
The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.
# pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
Move all resource groups and device groups off Node B.
# scswitch -S -h nodename
Shut down and power off Node B.
# shutdown -y -g0 -i0
For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.
Install the host adapters in Node B.
For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.
Power on and boot Node B into non-cluster mode.

{0} ok boot -x
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
If necessary, upgrade the host adapter firmware on Node B.
See the Sun Cluster 3.0 5/02 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
If necessary, install GBICs in the FC switches.
For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.
Connect fiber optic cables between the FC switches in your StorEdge 3900 or 6900 Series system and Node B (see Figure E-1 for an example).
If necessary, install the required Solaris patches for StorEdge T3+ array support on Node B.
For a list of required Solaris patches for StorEdge T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.
Install any required patches or software for Sun StorEdge Traffic Manager software support to Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/.
For instructions on installing the software, see the information on the web site.
Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 38.
To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:
mpxio-disable="no"
Shut down Node B.
# shutdown -y -g0 -i0
Perform a reconfiguration boot to create the new Solaris device files and links on Node B.
{0} ok boot -r
On all nodes, update the /devices and /dev entries.

# devfsadm -C
# devfsadm

On all nodes, update the paths to the DID instances.

# scdidadm -C
# scdidadm -r
Are you adding a StorEdge 3900 Series or StorEdge 6900 Series system?
If you are adding a StorEdge 3900 Series system, go to Step 46.
If you are adding StorEdge 6900 Series systems: On Node B, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.

# cfgadm -al
# cfgadm -c configure <c::controller id>
See the cfgadm(1M) man page for more information about the command and its options.
(Optional) Configure VLUNs on the VEs in the new StorEdge 6900 Series system.
For instructions on configuring VLUNs in a cluster, see "How to Configure VLUNs on the Virtualization Engines in Your StorEdge 6900 Series System".
(Optional) On Node B, verify that the DIDs are assigned to the new arrays:
# scdidadm -l
On one node attached to the new arrays, reset the SCSI reservation state:
# scdidadm -R n

Where n is the DID instance of an array LUN you are adding to the cluster.
Repeat this command on the same node for each array LUN you are adding to the cluster.
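The repetition can be scripted as a simple loop. The sketch below echoes the commands as a dry run (the instance numbers 5, 6, and 7 are placeholders for the DIDs reported by scdidadm -l); drop the echo to run the real command on a cluster node:

```shell
# Dry run: print one scdidadm -R invocation per new array LUN.
# The instance numbers are placeholders; substitute the DIDs that
# `scdidadm -l` reported for the LUNs you are adding.
for n in 5 6 7; do
  echo scdidadm -R "$n"
done
# prints: scdidadm -R 5 / -R 6 / -R 7, one per line
```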
Return the resource groups and device groups you identified in Step 6 to all nodes.
# scswitch -z -g resource-group -h nodename
# scswitch -z -D device-group-name -h nodename
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Perform volume management administration to incorporate the new logical volumes into the cluster.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Use this procedure to permanently remove a StorEdge 3900 or 6900 Series system and its associated submirrors from a running cluster.
This procedure defines "Node A" as the cluster node you begin working with, and "Node B" as the other node.
During this procedure, you lose access to the data that resides on each StorEdge T3+ array partner-group in the StorEdge 3900 or 6900 Series system you are removing.
If necessary, back up all database tables, data services, and volumes associated with each StorEdge T3+ partner-group in the StorEdge 3900 or 6900 Series system you are removing.
If necessary, run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to detach the submirrors from each StorEdge T3+ partner-group in the StorEdge 3900 or 6900 Series system that you are removing to stop all I/O activity to the partner-groups.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove references to each LUN or VLUN in the StorEdge 3900 or 6900 Series system that you are removing.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Determine the resource groups and device groups running on all nodes.
Record this information because you will use it in Step 17 of this procedure to return resource groups and device groups to these nodes.
# scstat
Move all resource groups and device groups off Node A.
# scswitch -S -h nodename
Shut down Node A.
# shutdown -y -g0 -i0
For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.
Disconnect the cables that connected Node A to the FC switches in your StorEdge 3900 or 6900 Series system.
Without allowing the node to boot, power on Node A.
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Boot Node A into cluster mode.
{0} ok boot
Move all resource groups and device groups off Node B.
# scswitch -S -h nodename
Shut down Node B.
# shutdown -y -g0 -i0
Disconnect the cables that connected Node B to the FC switches in your StorEdge 3900 or 6900 Series system.
Without allowing the node to boot, power on Node B.
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Boot Node B into cluster mode.
{0} ok boot
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
On all nodes, update the /devices and /dev entries.

# devfsadm -C
# devfsadm

On all nodes, update the paths to the DID instances.

# scdidadm -C
# scdidadm -r
Return the resource groups and device groups you identified in Step 4 to all nodes.
# scswitch -z -g resource-group -h nodename
# scswitch -z -D device-group-name -h nodename
Use this procedure to replace a virtualization engine (VE) in a StorEdge 6900 Series system in a running cluster.
Replace the VE hardware.
Follow the instructions in the Sun StorEdge 3900 and 6900 Series Reference Manual.
On any cluster node, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.
# cfgadm -al
# cfgadm -c configure <c::controller id>
See the cfgadm(1M) man page for more information about the command and its options.
Use this procedure to replace the following node-to-switch components in a running cluster:
Node-to-switch fiber optic cable
GBIC on an FC switch, connecting to a node
On the node connected to the component you are replacing, determine the resource groups and device groups running on the node.
Record this information because you will use it in Step 4 of this procedure to return resource groups and device groups to these nodes.
# scstat
Move all resource groups and device groups to another node.
# scswitch -S -h nodename
Replace the node-to-switch component.
For the procedure on replacing a fiber optic cable between a node and an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.
For the procedure on replacing a GBIC on an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.
Return the resource groups and device groups you identified in Step 1 to the node that is connected to the component you replaced.
# scswitch -z -g resource-group -h nodename
# scswitch -z -D device-group-name -h nodename
Use this procedure to permanently remove StorEdge T3/T3+ array partner groups and their submirrors from a StorEdge 3900 or 6900 Series system in a running cluster.
This procedure defines "Node A" as the cluster node you begin working with, and "Node B" as the other node.
During this procedure, you lose access to the data that resides on each StorEdge T3+ array partner-group you are removing.
If necessary, back up all database tables, data services, and volumes associated with each partner-group you are removing.
If necessary, run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to detach the submirrors from each array or partner-group that you are removing to stop all I/O activity to the array or partner-group.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove references to each LUN that belongs to the array or partner-group that you are removing.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Determine the resource groups and device groups running on all nodes.
Record this information because you will use it in Step 22 of this procedure to return resource groups and device groups to these nodes.
# scstat
Move all resource groups and device groups off Node A.
# scswitch -S -h nodename
Shut down Node A.
# shutdown -y -g0 -i0
For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.
Disconnect the fiber optic cables that connect both arrays to the FC switches, and then disconnect the Ethernet cable(s).
Is any array you are removing the last array connected to an FC switch on Node A?
If not, go to Step 12.
If it is the last array, disconnect the fiber optic cable between Node A and the FC switch that was connected to this array.
For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Do you want to remove the host adapters from Node A?
If not, go to Step 12.
If yes, power off Node A.
Remove the host adapters from Node A.
For the procedure on removing host adapters, see the documentation that shipped with your host adapter and nodes.
Without allowing the node to boot, power on Node A.
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Boot Node A into cluster mode.
{0} ok boot
Move all resource groups and device groups off Node B.
# scswitch -S -h nodename
Shut down Node B.
# shutdown -y -g0 -i0
Is any array you are removing the last array connected to an FC switch on Node B?
If not, go to Step 19.
If it is the last array, disconnect the fiber optic cable connecting this FC switch to Node B.
For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Do you want to remove the host adapters from Node B?
If not, go to Step 19.
If yes, power off Node B.
Remove the host adapters from Node B.
For the procedure on removing host adapters, see the documentation that shipped with your nodes.
Without allowing the node to boot, power on Node B.
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Boot Node B into cluster mode.
{0} ok boot
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
On all nodes, update the /devices and /dev entries.

# devfsadm -C
# devfsadm

On all nodes, update the paths to the DID instances.

# scdidadm -C
# scdidadm -r
Return the resource groups and device groups you identified in Step 4 to all nodes.
# scswitch -z -g resource-group -h nodename
# scswitch -z -D device-group-name -h nodename
Use one of the following procedures to upgrade the firmware on the StorEdge T3+ arrays in your StorEdge 3900 or 6900 Series system, depending on whether your array partner-group has been configured to support submirrors of a cluster node's volumes. StorEdge T3+ array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.
"Upgrading Firmware on Arrays That Support Submirrored Data"
"Upgrading Firmware on Arrays That Do Not Support Submirrored Data"
For all firmware, always read any README files that accompany the firmware for the latest information and special notes.
Perform this procedure on one array at a time. This procedure requires that you reset the arrays you are upgrading. If you reset more than one array at a time, your cluster will lose access to data.
On the node that currently owns the disk group or disk set to which the submirror belongs, detach the submirrors of the array on which you are upgrading firmware. (This procedure refers to this node as Node A and to the remaining node as Node B.)
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
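As a hedged illustration of the detach step under Solstice DiskSuite, assuming a hypothetical mirror d20 whose submirror d22 is built on LUNs in the array being upgraded:

```shell
# Hypothetical Solstice DiskSuite example: mirror d20 has submirrors
# d21 and d22, and d22 resides on the array being upgraded.
# Detach d22 so the firmware upgrade does not interrupt the mirror.
metadetach d20 d22

# Confirm that d22 is no longer attached to the mirror.
metastat d20
```

With VERITAS Volume Manager, the roughly equivalent operation is to dissociate the plex that resides on the array, for example `vxplex -g diskgroup dis plexname`; see your volume manager documentation for the authoritative syntax.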
Disconnect both array-to-switch fiber optic cables from the two arrays of the partner-group.
Apply the controller, disk drive, and UIC firmware patches.
For the list of required StorEdge T3+ array patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Reset the arrays.
For the procedure on resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
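As a hedged illustration, the reset is typically issued from the array's command line over a telnet session; the prompt below is a placeholder:

```
t3:/:<#> reset
```

The array prompts for confirmation before restarting; consult the manual cited above for the authoritative procedure and any partner-group considerations.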
Use the StorEdge T3+ disable command to disable the array controller that is attached to Node B so that all logical volumes come under the control of the remaining controller.
t3:/:<#> disable uencidctr
See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the disable command.
Reconnect both array-to-switch fiber optic cables to the two arrays of the partner-group.
On one node connected to the partner-group, use the format command to verify that the array controllers are rediscovered by the node.
# format
Use the StorEdge T3+ enable command to enable the array controller that you disabled in Step 5.
t3:/:<#> enable uencidctr
Reattach the submirrors that you detached in Step 1 to resynchronize them.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
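Continuing the hypothetical Solstice DiskSuite example from the detach step, the reattach might look like this:

```shell
# Hypothetical Solstice DiskSuite example: reattach submirror d22 to
# mirror d20. Resynchronization starts automatically and runs in the
# background.
metattach d20 d22

# Monitor resynchronization progress.
metastat d20
```

With VERITAS Volume Manager, the roughly equivalent operation is to reattach the plex, for example `vxplex -g diskgroup att volname plexname`; see your volume manager documentation for the authoritative syntax.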
In a partner-pair configuration, it is possible to have non-mirrored data; however, this requires that you shut down the cluster when upgrading firmware, as described in this procedure.
Shut down the entire cluster.
# scshutdown -y -g0
For the full procedure on shutting down a cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.
Apply the controller, disk drive, and UIC firmware patches.
For the list of required StorEdge T3+ array patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Reset the arrays.
For the procedure on resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Boot all nodes back into the cluster.
ok boot
For the full procedure on booting nodes into the cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.
On one node connected to the partner-group, use the format command to verify that the array controllers are rediscovered by the node.
# format
Use this procedure to replace a single failed disk drive in a StorEdge T3+ array in your StorEdge 3900 or 6900 Series system while the cluster remains running.
If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this, the StorEdge T3+ array is designed to shut down in an orderly fashion when a component is removed for longer than 30 minutes. Therefore, a replacement part must be on hand before you start an FRU replacement procedure. You must replace an FRU within 30 minutes, or the StorEdge T3+ array, and all attached StorEdge T3+ arrays, will shut down and power off.
Did the failed disk drive impact the array LUN's availability?
If not, go to Step 2.
If it did impact LUN availability, remove the LUN from volume management control.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Replace the disk drive in the array.
For the procedure on replacing a disk drive, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Did you remove a LUN from volume management control in Step 1?
If not, you are finished with this procedure.
If you did remove a LUN from volume management control, return the LUN to volume management control now.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
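As a hedged illustration under Solstice DiskSuite, assuming a hypothetical mirror d20 with a replaced component c1t1d0s2, re-enabling the component in place might look like this:

```shell
# Hypothetical Solstice DiskSuite example: after the new drive is in
# place and visible to the host, re-enable the errored component in
# mirror d20. metareplace -e resynchronizes the component in place.
metareplace -e d20 c1t1d0s2

# Confirm that the component is resynchronizing or back in the Okay state.
metastat d20
```

The device name and metadevice above are placeholders; substitute the names reported by `metastat` for your configuration.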
Use this procedure to replace a failed host adapter in a running cluster. As defined in this procedure, "Node A" is the node with the failed host adapter you are replacing and "Node B" is the other node.
Determine the resource groups and device groups running on all nodes.
Record this information because you will use it in Step 8 of this procedure to return resource groups and device groups to these nodes.
# scstat
Move all resource groups and device groups off Node A.
# scswitch -S -h nodename
Shut down Node A.
# shutdown -y -g0 -i0
Power off Node A.
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Replace the failed host adapter.
For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.
Power on Node A.
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Boot Node A into cluster mode.
{0} ok boot
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
Return the resource groups and device groups you identified in Step 1 to all nodes.
# scswitch -z -g resource-group -h nodename
# scswitch -z -D device-group-name -h nodename
For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.