This chapter provides overview information on cluster hardware. The chapter also provides overviews of the tasks that are involved in installing and maintaining this hardware specifically in a Sun™ Cluster environment.
This chapter contains the following information:
The following task map lists the tasks for installing a cluster and the sources for instructions.
Table 1–1 Task Map: Installing Cluster Hardware
Task | For Instructions
---|---
Plan for cluster hardware capacity, space, and power requirements. | The site planning documentation that shipped with your nodes and other hardware
Install the nodes. | The documentation that shipped with your nodes
Install the administrative console. | The documentation that shipped with your administrative console
Install a console access device. Use the procedure that is indicated for your type of console access device. For example, Sun Enterprise E10000 servers use a System Service Processor (SSP) as a console access device, rather than a terminal concentrator. | Installing the Terminal Concentrator, or the documentation that shipped with your Sun Enterprise E10000 hardware
Install the cluster interconnect hardware. | Chapter 3, Installing Cluster Interconnect Hardware and Configuring VLANs
Install the public network hardware. | Chapter 5, Installing and Maintaining Public Network Hardware
Install and configure the shared disk storage arrays. | The Sun Cluster manual that pertains to your storage device, as well as the device's own documentation
Install the Solaris Operating System and Sun Cluster software. | Sun Cluster software installation documentation
Configure the cluster interconnects. | Sun Cluster software installation documentation
1. Plan for cluster hardware capacity, space, and power requirements.

   For more information, see the site planning documentation that shipped with your servers and other hardware. See Hardware Restrictions for critical information about hardware restrictions with Sun Cluster.

2. Install the nodes.

   For server installation instructions, see the documentation that shipped with your servers.

3. Install the administrative console.

   For more information, see the documentation that shipped with your administrative console.

4. Install a console access device.

   Use the procedure that is indicated for your type of console access device. For example, Sun Enterprise E10000 servers use a System Service Processor (SSP) as a console access device, rather than a terminal concentrator.

   For installation instructions, see Installing the Terminal Concentrator or the documentation that shipped with your Sun Enterprise E10000 hardware.

5. Install the cluster interconnect and public network hardware.

   For installation instructions, see Chapter 3, Installing Cluster Interconnect Hardware and Configuring VLANs.

6. Install and configure the storage arrays.

   Perform the service procedures that are indicated for your type of storage hardware.

7. Install the Solaris Operating System and Sun Cluster software.

   For more information, see the Sun Cluster software installation documentation.

8. Plan, install, and configure resource groups and data services.

   For more information, see the Sun Cluster data services collection.
Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS augments documentation that ships with your hardware components by providing information on maintaining the hardware specifically in a Sun Cluster environment. Table 1–2 describes some of the differences between maintaining cluster hardware and maintaining standalone hardware.
Table 1–2 Sample Differences Between Servicing Standalone and Cluster Hardware
Task | Standalone Hardware | Cluster Hardware
---|---|---
Shutting down a node | Use the shutdown(1M) command. | To perform an orderly node shutdown, first use the scswitch(1M) command to switch device groups and resource groups to another node. Then shut down the node by running the shutdown(1M) command.
Adding a disk | Run boot -r on a SPARC based system, b -r on an x86 based system, or devfsadm(1M) to assign a logical device name to the disk. You also need to run volume manager commands to configure the new disk if the disks are under volume management control. | Use the devfsadm(1M), scgdevs(1M), and scdidadm(1M) commands. You also need to run volume manager commands to configure the new disk if the disks are under volume management control.
Adding a transport adapter or public network adapter | Perform an orderly node shutdown, then install the public network adapter. After you install the network adapter, update the /etc/hostname.adapter and /etc/inet/hosts files. | Perform an orderly node shutdown, then install the public network adapter. After you install the public network adapter, update the /etc/hostname.adapter and /etc/inet/hosts files. Finally, add this public network adapter to an IP Network Multipathing group.
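As a concrete illustration of the cluster-hardware tasks in Table 1–2, the following sketch shows an orderly node shutdown and a disk addition on a cluster node. The node name phys-schost-1 is a placeholder, and exact command options vary by Sun Cluster release; check the man pages on your system before use.

```shell
# Orderly shutdown of a cluster node (run as root on that node).
# scswitch -S evacuates all resource groups and device groups
# from the node so that services fail over before the shutdown.
scswitch -S -h phys-schost-1    # phys-schost-1 is a placeholder node name
shutdown -g0 -y -i0

# Adding a disk on a cluster node: create the Solaris device files,
# then update the global device namespace and DID mappings.
devfsadm      # build /dev and /devices entries for the new disk
scgdevs       # add the disk to the global device namespace
scdidadm -l   # verify the new DID device mappings on this node
```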
Consider the following when powering on and powering off cluster hardware:

- Use the shutdown and boot procedures in your Sun Cluster system administration documentation for nodes in a running cluster.
- Use the power-on and power-off procedures in the manuals that shipped with the hardware only for systems that are newly installed or are in the process of being installed.
- After the cluster is online and a user application is accessing data on the cluster, do not use the power-on and power-off procedures in the manuals that came with the hardware.
The Sun Cluster environment supports Solaris dynamic reconfiguration (DR) operations on qualified servers. Throughout the Sun Cluster Hardware Administration Collection for Solaris OS, there are procedures that require that the user add or remove transport adapters or public network adapters in a cluster node. Contact your service provider for a list of storage arrays that are qualified for use with DR-enabled servers.
Review the documentation for the Solaris DR feature on your hardware platform before you use the DR feature with Sun Cluster software. All of the requirements, procedures, and restrictions that are documented for the Solaris DR feature also apply to Sun Cluster DR support (except for the operating environment quiescence operation).
Documentation for DR on currently qualified server platforms is listed here:

- Sun Enterprise 10000 Dynamic Reconfiguration User Guide
- Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual
- Sun Fire 6800, 4810, 4800, and 3800 Systems Dynamic Reconfiguration User Guide
- Sun Fire 6800, 4810, 4800, and 3800 Systems Dynamic Reconfiguration Release Notes
- Sun Fire 15K Dynamic Reconfiguration (DR) User Guide
- Sun Fire 15K Dynamic Reconfiguration Release Notes
- Sun Fire 880 Dynamic Reconfiguration User's Guide
- Sun Fire V1280/Netra 1280 System Administration Guide
Some procedures within the Sun Cluster Hardware Administration Collection for Solaris OS instruct you to shut down and power off a cluster node before you add, remove, or replace a transport adapter or a public network adapter (PNA).

However, if the node is a server that is enabled with the DR feature, you do not have to power off the node before you add, remove, or replace the transport adapter or PNA. Instead, do the following:
1. Follow the procedure steps in the Sun Cluster Hardware Administration Collection for Solaris OS, including any steps for disabling and removing the transport adapter or PNA from the active cluster interconnect.

   See the Sun Cluster system administration documentation for instructions about how to remove transport adapters or PNAs from the cluster configuration.

2. Skip any step that instructs you to power off the node, where the purpose of the power-off is to add, remove, or replace a transport adapter or PNA.

3. Perform the DR operation (add, remove, or replace) on the transport adapter or PNA.

4. Continue with the next step of the procedure in the Sun Cluster Hardware Administration Collection for Solaris OS.
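On Solaris, the DR operation in step 3 is typically driven with the cfgadm(1M) command. The attachment point pcisch0:hpc1_slot4 below is purely hypothetical; list your system's attachment points first and substitute the one for the adapter's slot.

```shell
# List attachment points and their states to identify the adapter's slot.
cfgadm -a

# Unconfigure and disconnect the slot before physically removing the
# adapter. (pcisch0:hpc1_slot4 is a hypothetical attachment point.)
cfgadm -c unconfigure pcisch0:hpc1_slot4
cfgadm -c disconnect pcisch0:hpc1_slot4

# After installing the replacement adapter, reverse the sequence.
cfgadm -c connect pcisch0:hpc1_slot4
cfgadm -c configure pcisch0:hpc1_slot4
```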
For conceptual information about Sun Cluster support of the DR feature, see your Sun Cluster concepts documentation.
Two sets of disks reside within a cluster: local disks and multihost disks.

- Local disks are directly connected to a single node and hold the Solaris Operating System for each node.
- Multihost disks are connected to more than one node and hold client application data and other files that need to be accessed from multiple nodes.
For more conceptual information on multihost disks and local disks, see the Sun Cluster concepts documentation.
Removable media include tape and CD-ROM drives, which are local devices. Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS does not contain procedures for adding, removing, or replacing removable media as highly available storage arrays. Although tape and CD-ROM drives are global devices, these drives do not have more than one port and do not have the multi-initiator firmware support that would enable them to be highly available. Thus, Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS focuses on disk drives as global devices.

Although tape and CD-ROM drives cannot be highly available at this time, in a cluster environment you can access tape and CD-ROM drives that are not local to your system. All the various density extensions (such as h, b, l, n, and u) are mapped so that the tape drive can be accessed from any node in the cluster.
Install, remove, replace, and use tape and CD-ROM drives as you would in a noncluster environment. For procedures about how to install, remove, and replace tape and CD-ROM drives, see the documentation that shipped with your hardware.
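For example, because tape drives are global devices, a node can reach a tape drive that is physically attached to another node through the global devices namespace. The drive number (0) and density extension (n) below are assumptions for illustration; list /dev/global/rmt on your cluster to find the actual device names.

```shell
# From any cluster node, write a backup to a tape drive that is
# attached to a different node, via the global devices namespace.
# Drive number (0) is illustrative.
tar cvf /dev/global/rmt/0 /export/home

# Check drive status through the same global path, here using the
# no-rewind (n) density extension.
mt -f /dev/global/rmt/0n status
```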
A SAN configuration in a Sun Cluster environment must not contain a single point of failure. For information on how to install and configure a SAN configuration, see your SAN documentation.
The following restrictions apply to hardware in all Sun Cluster configurations.
- Multihost tape, CD-ROM, and DVD-ROM drives are not supported.
- Alternate pathing (AP) is not supported.
- Storage devices with more than a single path from a given cluster node to the enclosure are not supported, except for the following storage devices:
  - Sun StorEdge™ A3500, for which two paths are supported to each of two nodes.
  - Devices that use Sun StorEdge Traffic Manager.
  - EMC storage devices that use EMC PowerPath software.
  - Sun StorEdge 9900 storage devices that use HDLM.
- If you are using a Sun Enterprise™ 420R server with a PCI card in slot J4701, the motherboard must be at dash level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
- System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server; do not install UDWIS I/O cards in slot 0 of this server.
- Sun VTS™ software is not supported.