A Sun Cluster configuration is an integrated hardware and Sun Cluster software solution that is used to create highly available and scalable services. This chapter provides a high-level overview of Sun Cluster features.
A cluster is a collection of loosely coupled computing nodes that provides a single client view of network services or applications, including databases, web services, and file services.
In a clustered environment, the nodes are connected by an interconnect and work together as a single entity to provide increased availability and performance.
Highly available clusters provide nearly continuous access to data and applications by keeping the cluster running through failures that would normally bring down a single server system. No single failure, whether of hardware, software, or the network, can cause a cluster to fail. By contrast, fault-tolerant hardware systems provide constant access to data and applications, but at a higher cost because of specialized hardware. Fault-tolerant systems usually have no provision for software failures.
Each Sun Cluster system is a collection of tightly coupled nodes that provide a single administration view of network services and applications. The Sun Cluster system achieves high availability through a combination of the following hardware and software:
Redundant disk systems provide storage. These disk systems are generally mirrored to permit uninterrupted operation if a disk or subsystem fails. Redundant connections to the disk systems ensure that data is not isolated if a server, controller, or cable fails. A high-speed interconnect among Solaris hosts provides access to resources. All hosts in the cluster are also connected to a public network, enabling clients on multiple networks to access the cluster.
Redundant hot-swappable components, such as power supplies and cooling systems, improve availability by enabling systems to continue operation after a hardware failure. Hot-swappable components provide the ability to add or remove hardware components in a functioning system without bringing it down.
Sun Cluster software's high-availability framework detects a node failure quickly and migrates the application or service to another node that runs in an identical environment. At no time are all applications unavailable. Applications unaffected by a down node are fully available during recovery. Furthermore, applications of the failed node become available as soon as they are recovered. A recovered application does not have to wait for all other applications to complete their recovery.
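For example, you can observe this behavior from the command line. The following sketch assumes the Sun Cluster 3.1 scstat utility; it shows where each resource group is currently online, and running the same command after a node failure shows the affected groups online on a surviving node.

    # Show the state and current primary of every resource group.
    scstat -g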
An application is highly available if it survives any single software or hardware failure in the system. Failures that are caused by bugs or data corruption within the application itself are excluded. The following apply to highly available applications:
Recovery is transparent to the applications that use a resource.
Resource access is fully preserved across node failure.
Applications cannot detect that the hosting node has failed and that they have been moved to another node.
Failure of a single node is completely transparent to programs on remaining nodes that use the files, devices, and disk volumes that are attached to this node.
Failover and scalable services and parallel applications enable you to make your applications highly available and to improve an application's performance on a cluster.
A failover service provides high availability through redundancy. When a failure occurs, you can configure a running application either to restart on the same node or to be moved to another node in the cluster, without user intervention.
To increase performance, a scalable service leverages the multiple nodes in a cluster to concurrently run an application. In a scalable configuration, each node in the cluster can provide data and process client requests.
Parallel databases enable multiple instances of the database server to do the following:
Participate in the cluster
Handle different queries on the same database simultaneously
Provide parallel query capability on large queries
For more information about failover and scalable services and parallel applications, see Data Service Types.
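As a concrete illustration of a failover service, the following hedged sketch creates a two-node failover resource group with a logical hostname and then switches it between nodes. The group, host, and node names are hypothetical; the scrgadm and scswitch utilities are part of the Sun Cluster 3.1 CLI.

    # Create a failover resource group that can run on two nodes.
    scrgadm -a -g demo-rg -h phys-schost-1,phys-schost-2

    # Add a logical hostname resource; clients always connect to this
    # address, whichever node currently hosts the group.
    scrgadm -a -L -g demo-rg -l demo-lh

    # Bring the group online, then move it to the second node.
    scswitch -Z -g demo-rg
    scswitch -z -g demo-rg -h phys-schost-2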
Clients make data requests to the cluster through the public network. Each Solaris host is connected to at least one public network through one or more public network adapters.
IP network multipathing enables a server to have multiple network ports connected to the same subnet. First, IP network multipathing software provides resilience from network adapter failure by detecting the failure or repair of a network adapter. On failure, the software switches the network address to an alternate adapter; on repair, it switches the address back. When more than one network adapter is functional, IP network multipathing increases data throughput by spreading outbound packets across adapters.
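On Solaris 10, for example, you can place two adapters in the same IP network multipathing group through their /etc/hostname.* files. This is a hedged sketch: the adapter names, host names, and test addresses are hypothetical, and your naming service must resolve the test host names.

    /etc/hostname.qfe0 (hypothetical primary adapter):
        node1 netmask + broadcast + group sc_ipmp0 up
        addif node1-test0 deprecated -failover netmask + broadcast + up

    /etc/hostname.qfe1 (hypothetical second adapter in the same group):
        node1-test1 deprecated -failover netmask + broadcast + group sc_ipmp0 up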
Multihost storage makes disks highly available by connecting the disks to multiple Solaris hosts. Multiple hosts enable multiple paths to access the data. If one path fails, another one is available to take its place.
Multihost disks enable the following cluster processes:
Tolerating single-host failures.
Centralizing application data, application binaries, and configuration files.
Protecting against host failures. If client requests are accessing the data through a host that fails, the requests are switched over to use another host that has a direct connection to the same disks.
Providing access either globally through a primary host that “masters” the disks, or by direct concurrent access through local paths.
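Because every multihost disk is reachable from more than one Solaris host, Sun Cluster software identifies each disk by a single device ID (DID) that is common across the cluster. The following sketch lists those mappings; the paths and node names in the output depend on your configuration.

    # List each device ID instance together with the local path that
    # each node uses to reach the same shared disk.
    scdidadm -L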
A volume manager enables you to manage large numbers of disks and the data on those disks. Volume managers can increase storage capacity and data availability by offering the following features:
Disk-drive striping and concatenation
Disk-drive hot spares
Disk-failure handling and disk replacements
Sun Cluster systems support the following volume managers:
Solaris Volume Manager
Multi-owner Solaris Volume Manager for Sun Cluster
Veritas Volume Manager
Solaris I/O multipathing (MPxIO), which was formerly named Sun StorEdge Traffic Manager, is fully integrated in the Solaris Operating System I/O framework. Solaris I/O multipathing enables you to represent and manage devices that are accessible through multiple I/O controller interfaces within a single instance of the Solaris operating system.
The Solaris I/O multipathing architecture provides the following features:
Protection against I/O outages due to I/O controller failures
Automatic switches to an alternate controller upon an I/O controller failure
Increased I/O performance by load balancing across multiple I/O channels
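On Solaris 10, for example, Solaris I/O multipathing can be enabled with the stmsboot utility. This is a sketch only; the utility prompts for a reboot, after which device names change to their multipathed forms.

    # Enable Solaris I/O multipathing on all supported controllers.
    stmsboot -e

    # After the reboot, show how the old device names map to the new
    # multipathed device names.
    stmsboot -L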
Sun Cluster systems support the use of hardware Redundant Array of Independent Disks (RAID) and host-based software RAID. Hardware RAID uses the storage array's or storage system's hardware redundancy to ensure that independent hardware failures do not impact data availability. If you mirror across separate storage arrays, host-based software RAID ensures that independent hardware failures do not impact data availability when an entire storage array is offline. Although you can use hardware RAID and host-based software RAID concurrently, you need only one RAID solution to maintain a high degree of data availability.
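As a hedged sketch of host-based software RAID, the following Solaris Volume Manager commands mirror a slice on one storage array with a slice on a second array. The metadevice and disk names are hypothetical.

    # Create one submirror on each storage array.
    metainit d11 1 1 c1t0d0s0
    metainit d12 1 1 c2t0d0s0

    # Create the mirror from the first submirror, then attach the second.
    metainit d10 -m d11
    metattach d10 d12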
Because one of the inherent properties of clustered systems is shared resources, a cluster requires a file system that addresses the need for files to be shared coherently. In a Sun Cluster system, the cluster file system enables users or applications to access any file on any node of the cluster by using remote or local standard UNIX APIs.
Sun Cluster systems support the following cluster file systems:
UNIX® File System (UFS) – Uses Sun Cluster Proxy File System (PxFS)
Veritas File System (VxFS) – Uses PxFS
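A cluster file system is mounted with the global mount option, either manually or through /etc/vfstab. The following entry is a hedged example; the global device and mount point are hypothetical.

    Hypothetical /etc/vfstab entry for a global UFS file system:
        /dev/global/dsk/d4s0 /dev/global/rdsk/d4s0 /global/data ufs 2 yes global,logging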
Sun Cluster software supports the following as highly available failover local file systems:
UNIX File System (UFS)
Sun StorEdge QFS file system
Veritas File System (VxFS)
Solaris ZFS
If an application is moved from one node to another node, no change is required for the application to access the same files. No changes need to be made to existing applications to fully utilize the cluster file system.
Standard Sun Cluster systems provide high availability and reliability from a single location. If your application must remain available after unpredictable disasters such as an earthquake, flood, or power outage, you can configure your cluster as a campus cluster.
Campus clusters enable you to locate cluster components, such as Solaris hosts and shared storage, in separate rooms that are several kilometers apart. You can separate your hosts and shared storage and locate them in different facilities around your corporate campus or elsewhere within several kilometers. When a disaster strikes one location, the surviving hosts can take over service for the failed host. This enables applications and data to remain available for your users. For additional information about campus cluster configurations, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
The Sun Cluster system makes the path between users and data highly available by using multihost disks, multipathing, and a cluster file system. The Sun Cluster system monitors failures for the following:
Applications – Most of the Sun Cluster data services supply a fault monitor that periodically probes the data service to determine its health. A fault monitor verifies that the application daemon or daemons are running and that clients are being served. Based on the information that is returned by probes, a predefined action such as restarting daemons or causing a failover can be initiated.
Disk Paths – Sun Cluster software supports disk-path monitoring (DPM). DPM improves the overall reliability of failover and switchover by reporting the failure of a secondary disk path. A command-line sketch follows this list.
Internet Protocol (IP) Multipath – Solaris IP network multipathing software on Sun Cluster systems provides the basic mechanism for monitoring public network adapters. IP multipathing also enables failover of IP addresses from one adapter to another adapter when a fault is detected.
Quorum Devices – Sun Cluster software supports quorum device monitoring by periodically testing that quorum works on quorum devices. When Sun Cluster software detects a failure, the Sun Cluster system reports the failure and marks the quorum device that is not working correctly. When the Sun Cluster system detects that a previously failed quorum device now operates correctly, the system automatically brings the quorum device back into service. Bringing the quorum device back into service includes placing the correct quorum reservation information on the device. The Sun Cluster system automatically monitors any configured quorum device that is not in maintenance mode, regardless of type.
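The following sketch shows disk-path monitoring from the command line with the scdpm utility. The all:all operand covers every disk path from every node; you can substitute specific node and disk names.

    # Monitor all disk paths from all nodes, then print their status.
    scdpm -m all:all
    scdpm -p all:all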
You can install, configure, and administer the Sun Cluster system either through the Sun Cluster Manager GUI or through the command-line interface (CLI).
The Sun Cluster system also has a module that runs as part of Sun Management Center software that provides a GUI to certain cluster tasks.
Sun Cluster Manager is a browser-based tool for administering Sun Cluster systems. The Sun Cluster Manager software enables administrators to perform system management and monitoring, software installation, and system configuration.
The Sun Cluster Manager software includes the following features:
Built-in security and authorization mechanisms
Secure Sockets Layer (SSL) support
Role-based access control (RBAC)
Pluggable Authentication Module (PAM)
NAFO and IP network multipathing group administration facilities
Quorum devices, transports, shared storage device, and resource group administration
Sophisticated error checking and autodetection of private interconnects
The Sun Cluster command-line interface (CLI) is a set of utilities that you can use to install and administer Sun Cluster systems, and to administer the volume manager portion of Sun Cluster software.
You can perform the following Sun Cluster administration tasks through the Sun Cluster CLI (representative commands follow this list):
Validating a Sun Cluster configuration
Installing and configuring Sun Cluster software
Updating a Sun Cluster configuration
Managing the registration of resource types, the creation of resource groups, and the activation of resources within a resource group
Changing node mastery and states for resource groups and device groups
Controlling access with role-based access control (RBAC)
Shutting down the entire cluster
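The following hedged examples map a few of these tasks to Sun Cluster 3.1 CLI utilities. The resource type is a standard Sun Cluster type, but the group and node names are hypothetical.

    # Validate the cluster configuration.
    sccheck

    # Register a resource type and create a resource group.
    scrgadm -a -t SUNW.HAStoragePlus
    scrgadm -a -g data-rg -h phys-schost-1,phys-schost-2

    # Change which node masters the resource group.
    scswitch -z -g data-rg -h phys-schost-2

    # Shut down the entire cluster after a 60-second grace period.
    scshutdown -y -g 60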
The Sun Cluster system also has a module that runs as part of Sun Management Center software. Sun Management Center software serves as the cluster's base for administrative and monitoring operations and enables system administrators to perform the following tasks through a GUI or CLI:
Configuring a remote system
Detecting and isolating hardware and software faults
Sun Management Center software can also be used as the interface to manage dynamic reconfiguration within Sun Cluster servers. Dynamic reconfiguration includes domain creation, dynamic board attach, and dynamic detach.
In conventional UNIX systems, the root user, also referred to as superuser, is omnipotent, with the ability to read and write to any file, run all programs, and send kill signals to any process. Solaris role-based access control (RBAC) is an alternative to the all-or-nothing superuser model. RBAC uses the security principle of least privilege, which is that no user should be given more privilege than necessary for performing his or her job.
RBAC enables an organization to separate superuser capabilities and package them into special user accounts or roles for assignment to specific individuals. This separation and packaging enables a variety of security policies. Accounts can be set up for special-purpose administrators in such areas as security, networking, firewall, backups, and system operation.
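As a minimal sketch of this separation on Solaris, the following commands create a role that carries an administration rights profile and assign that role to a user. The role name, home directory, and user name are hypothetical.

    # Create a role and grant it a rights profile.
    roleadd -m -d /export/home/opsadmin -P "System Administrator" opsadmin
    passwd opsadmin

    # Allow an existing user to assume the role.
    usermod -R opsadmin jdoe

    # The user then assumes the role when needed.
    su opsadmin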