Sun HPC ClusterTools 3.0 Administrator's Guide: With LSF

Sun HPC System Hardware

A Sun HPC system configuration can range from a single Sun SMP (symmetric multiprocessor) server to a cluster of SMPs connected by any Sun-supported, TCP/IP-capable interconnect.


Note -

An individual SMP server within a Sun HPC cluster is referred to as a node.


The recommended interconnect technology for clustering Sun HPC servers is the Scalable Coherent Interface (SCI). SCI's high bandwidth and low latency make it the preferred choice for the cluster's primary network. An SCI network can be used to create Sun HPC clusters of up to four nodes.
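
As a quick check that the SCI network is in place, the plumbed interfaces on each node can be listed with standard Solaris tools, as sketched below. The interface name scid0 is an assumption for illustration only; the actual name depends on the SCI driver release installed on the cluster.

    # List all plumbed network interfaces on this node; the SCI
    # interface (shown here under the hypothetical name scid0)
    # should appear alongside the standard Ethernet interface.
    ifconfig -a

    # Display the status of the assumed SCI interface only.
    ifconfig scid0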

Larger Sun HPC clusters can be built using a Sun-supported TCP/IP interconnect, such as 100BaseT Ethernet or ATM. Individual parallel Sun HPC jobs can have up to 1024 processes running on as many as 64 nodes.
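
For example, a 1024-process job spread evenly across 64 nodes runs 16 processes per node. Under LSF, such a job might be submitted as sketched below; bsub -n and the span[ptile=...] resource string are standard LSF options, while the executable name is hypothetical.

    # Request 1024 job slots, placed 16 per node so that the
    # job spans 64 nodes. The name a.out stands in for the
    # user's parallel program.
    bsub -n 1024 -R "span[ptile=16]" ./a.out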

Any Sun HPC node that is connected to a disk storage system can be configured as a Parallel File System (PFS) I/O server. See Chapter 4, PFS Configuration Notes, and Chapter 5, Starting and Stopping PFS Daemons, for additional information about PFS I/O servers and PFS file systems.
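
As a hedged sketch of what such a configuration can look like, the excerpt below declares a two-server parallel file system in an hpc.conf-style configuration file. The node names, device paths, and exact field layout are illustrative assumptions; Chapter 4, PFS Configuration Notes, gives the authoritative syntax.

    Begin PFSFileSystem=pfs-demo
    NODE          DEVICE                   THREADS
    hpc-node0     /dev/rdsk/c0t1d0s2       1
    hpc-node1     /dev/rdsk/c0t1d0s2       1
    End PFSFileSystem

The PFS daemons must be running on each listed I/O server node before the file system can be used; Chapter 5 describes starting and stopping them.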