Sun Cluster 3.0 Hardware Guide

Chapter 1 Introduction to Sun Cluster Hardware

This chapter provides overview information on cluster hardware, including the terminal concentrator, storage devices, and network components.

This chapter contains the following conceptual information:

"Installing Cluster Hardware"

"Maintaining Sun Cluster Hardware"

"Powering On and Powering Off Cluster Hardware"

"Local and Multihost Disks"

"Removable Media"

Installing Cluster Hardware

The following table lists the tasks for building a cluster.

Table 1-1 Task Map: Installing Cluster Hardware

Task: Plan for cluster hardware capacity, space, and power requirements
For instructions, go to: The site planning documentation that shipped with your nodes and other hardware

Task: Install the nodes
For instructions, go to: The documentation that shipped with your nodes

Task: Install and configure the terminal concentrator
For instructions, go to: "Installing and Configuring the Terminal Concentrator"

Task: Install the cluster transport adapters
For instructions, go to: "How to Install Cluster Transport Adapters"

Task: Install the cluster transport junction
For instructions, go to: "Installing Cluster Hardware"

Task: Install the cluster transport cables
For instructions, go to: "How to Install Cluster Transport Cables"

Task: Install public network hardware
For instructions, go to: The documentation that shipped with your network adapters and nodes

Task: Install and configure the storage
For instructions, go to: "Installing a StorEdge MultiPack", "Installing a StorEdge D1000", "Installing a StorEdge A5x00", or "Installing a StorEdge A3500"

Task: Install the Solaris operating environment and Sun Cluster software
For instructions, go to: Sun Cluster 3.0 Installation Guide

Task: Configure the cluster interconnects
For instructions, go to: Sun Cluster 3.0 System Administration Guide

Maintaining Sun Cluster Hardware

Sun Cluster 3.0 Hardware Guide augments the documentation that ships with your hardware components by providing information on maintaining this hardware in a cluster environment. The following table describes some of the differences between maintaining standalone hardware and maintaining cluster hardware.

Table 1-2 Sample Differences Between Servicing Standalone and Cluster Hardware

Task: Shutting down a node
Standalone hardware: Use the shutdown(1M) command.
Cluster hardware: To perform an orderly node shutdown, first use the scswitch(1M) command to switch device groups and resource groups to another node. Then shut down the node by running the shutdown(1M) command. (See the first example after this table.)

Task: Adding a disk
Standalone hardware: Run boot -r or devfsadm(1M) to assign a logical device name to the disk. If the disk is under volume management control, you also need to run volume manager commands to configure it.
Cluster hardware: Use the devfsadm(1M), scgdevs(1M), and scdidadm(1M) commands. If the disk is under volume management control, you also need to run volume manager commands to configure it. (See the second example after this table.)

Task: Adding a public network connection
Standalone hardware: To install the network adapter, perform an orderly node shutdown. After you install the network adapter, update the /etc/hostname.adapter and /etc/inet/hosts files.
Cluster hardware: To install the network adapter, perform an orderly node shutdown. After you install the network adapter, update the /etc/hostname.adapter and /etc/inet/hosts files. Finally, configure the adapter as part of a NAFO group. (See the third example after this table.)
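For example, an orderly shutdown of a cluster node might look like the following sketch. The node name phys-schost-1 is hypothetical; run the commands as superuser.

    # scswitch -S -h phys-schost-1     (switch all device groups and resource groups off the node)
    # shutdown -g0 -y -i0              (shut down the node)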
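Similarly, the cluster-side steps for adding a disk might look like the following sketch. The volume manager commands, which depend on whether the disk is under Solstice DiskSuite or VERITAS Volume Manager control, are not shown.

    # devfsadm                         (assign a logical device name to the new disk)
    # scgdevs                          (add the new disk to the global devices namespace)
    # scdidadm -l                      (verify that the disk received a device ID)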
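Finally, a sketch of placing a newly installed adapter in a NAFO group. The group name nafo0 and the adapter names qfe0 and qfe1 are hypothetical, and the pnmset(1M) options shown are an assumption; see the pnmset(1M) man page for the exact syntax.

    # pnmset -c nafo0 -o create qfe0 qfe1     (create NAFO group nafo0 with a primary and a backup adapter)
    # pnmstat -l                              (verify the status of NAFO groups on the node)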

Powering On and Powering Off Cluster Hardware

Consider the following when powering on and powering off cluster hardware:


Caution -

After the cluster is online and a user application is accessing data on the cluster, do not use the power-on and power-off procedures listed in the manuals that came with the hardware.


Local and Multihost Disks

There are two sets of storage devices within a cluster: local disks and multihost disks. Local disks are directly connected to a single node and hold the Solaris operating environment for each node. Multihost disks are connected to more than one node and hold client application data and other files that need to be accessed from multiple nodes.

For more conceptual information on multihost disks, local disks, and global devices, see Sun Cluster 3.0 Concepts.

Removable Media

Removable media includes tape and CD-ROM drives, which are local devices. This guide does not contain procedures for adding, removing, or replacing removable media as highly available storage devices. Although tape and CD-ROM drives are global devices, these drives do not have more than one port and do not have multi-initiator firmware support. Because dual ports and multi-initiator firmware support are what enable devices to be highly available, this guide focuses on disk drives as global devices.

Although tape and CD-ROM drives cannot be highly available at this time, in a cluster environment you can access tape and CD-ROM drives that are not local to your system. All the various density extensions (such as h, b, l, n, and u) are mapped so that the tape drive can be accessed from any node in the cluster.
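For example, a tape operation that selects a specific density through the device name might look like the following sketch. The drive number 0 and the file argument are hypothetical.

    # mt -f /dev/rmt/0h status         (query tape drive 0 through its high-density variant)
    # tar cvf /dev/rmt/0h files        (write an archive through the same variant)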

Install, add, remove, replace, and use tape and CD-ROM drives as you would in a non-cluster environment. For procedures on adding, removing, and replacing tape and CD-ROM drives, see the documentation that shipped with your hardware.