Sun HPC ClusterTools 3.0 Administrator's Guide: With CRE

Sun HPC System Hardware

A Sun HPC cluster configuration can range from a single Sun SMP (symmetric multiprocessor) server to a cluster of SMPs connected via any Sun-supported, TCP/IP-capable interconnect.


Note -

An individual SMP server within a Sun HPC cluster is referred to as a node.


The recommended interconnect technology for clustering Sun HPC servers is the Scalable Coherent Interface (SCI). SCI offers higher bandwidth and lower latency than the other supported interconnects, making it the preferred choice for the cluster's primary network. An SCI network can be used to create Sun HPC clusters with up to four nodes.

Larger Sun HPC clusters can be built using a Sun-supported, TCP/IP interconnect, such as 100BaseT Ethernet or ATM. The CRE supports parallel jobs running on clusters of up to 64 nodes containing up to 256 CPUs.
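For context, parallel jobs are launched under the CRE with the mprun command. The following invocation is illustrative only; the process count and the executable name a.out are placeholders:

    % mprun -np 16 a.out

Here -np requests 16 processes, which the CRE distributes across the CPUs of the cluster's nodes.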

Any Sun HPC node that is connected to a disk storage system can be configured as a Parallel File System (PFS) I/O server. PFS file systems are configured by editing the appropriate sections of the system configuration file, hpc.conf. See Chapter 7, hpc.conf: Detailed Description for details.
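As an illustration only, a PFS file system definition in hpc.conf follows the general pattern sketched below. The section keywords, file system name, node names, device paths, and thread counts shown here are placeholders; consult Chapter 7 for the exact syntax and supported fields.

    # Illustrative sketch only -- names, devices, and thread counts are
    # placeholders; see Chapter 7 for the authoritative hpc.conf syntax.
    Begin PFSFileSystem=pfs-data
    NODE        DEVICE                  THREADS
    node0       /dev/rdsk/c0t1d0s2      1
    node1       /dev/rdsk/c0t1d0s2      1
    End PFSFileSystem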