Sun Cluster 2.2 System Administration Guide

Appendix D Glossary

Active server

A node in the Sun Cluster configuration that is providing highly available data services.

Administrative workstation

A workstation, either outside the cluster or one of the cluster nodes, that is used to run cluster administrative software.

Backup group

Used by network adapter failover (NAFO). A set of network adapters on the same subnet. Adapters within a set provide backup for each other.

CCD quorum

The set of Cluster Configuration Databases needed to elect a valid and consistent copy of the Cluster Configuration Database.

Cluster

Two to four nodes configured together to run either parallel database software or highly available data services.

Cluster Configuration Database (CCD)

A highly-available, replicated database that can be used to store data for HA data services and other Sun Cluster configuration needs.

Cluster interconnect

The private network interface between cluster nodes.

Cluster Membership Monitor (CMM)

The software that maintains a consistent cluster membership roster to avoid database corruption and subsequent transmission of corrupted or inconsistent data to clients. When nodes join or leave the cluster, thus changing the membership, CMM processes on the nodes coordinate global reconfiguration of various system services.

Cluster node

A physical machine that is part of a Sun cluster. Also referred to as a cluster host or cluster server.

Cluster quorum

The set of cluster nodes that can participate in the cluster membership.

Cluster reconfiguration

An ordered multistep process that is invoked whenever there is a significant change in cluster state, such as takeover, switchover, or a physical host reboot. During cluster reconfiguration, the Sun Cluster software coordinates all of the physical hosts that are up and communicating. Those hosts agree on which logical host(s) should be mastered by which physical hosts.

Cluster pair topology

Two pairs of Sun Cluster nodes operating under a single cluster administrative framework.

Cluster SNMP agent

A Simple Network Management Protocol (SNMP) agent used to monitor several clusters (a maximum of 32) at the same time.

CMM quorum

See cluster quorum.

Concatenation

A metadevice created by sequentially mapping blocks on several physical slices (partitions) to a logical device. Two or more physical components can be concatenated. The slices are accessed sequentially rather than interlaced (as with stripes).
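
The sequential mapping described above can be sketched as a small lookup function. This is an illustration of the concept only, not Sun Cluster or Solstice DiskSuite code; the function name and slice sizes are hypothetical.

```python
def concat_lookup(block, slice_sizes):
    """Map a logical block number to (slice_index, offset) under concatenation.

    Slices are filled sequentially: the logical device uses all of slice 0,
    then all of slice 1, and so on.
    """
    for i, size in enumerate(slice_sizes):
        if block < size:
            return (i, block)
        block -= size
    raise ValueError("block beyond end of metadevice")

# Three slices of 100, 200, and 50 blocks concatenated into one
# 350-block logical device.
print(concat_lookup(0, [100, 200, 50]))    # lands in slice 0
print(concat_lookup(150, [100, 200, 50]))  # lands in slice 1 at offset 50
```

Contrast this with the interlaced mapping under Stripe, where consecutive chunks rotate across the slices instead of filling them one at a time.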

Data service

A network service that implements read-write access to disk-based data from clients on a network. NFS is an example of a data service. The data service may be composed of multiple processes that work together.

Default master

The node that is configured to master a disk group when the logical hosts are configured.

Direct attached device

A disk storage unit that is physically connected to all nodes in the cluster.

Distributed Lock Manager (DLM)

Locking software used in a shared disk Oracle7 or Oracle8 Parallel Server (OPS) environment. The DLM enables Oracle processes running on different nodes to synchronize database access. The DLM is designed for high availability; if a process or node crashes, the remaining nodes do not have to be shut down and restarted. A quick reconfiguration of the DLM is performed to recover from such a failure.

Disk expansion unit

The physical storage enclosure that holds the multihost disks. For example, SPARCstorage Arrays, Sun StorEdge MultiPacks, Sun StorEdge A3000s and Sun StorEdge A5000s.

Disk group

A well-defined group of multihost disks that moves as a unit between two servers in an HA configuration. This can be either a Solstice DiskSuite diskset or a VERITAS Volume Manager disk group.

Diskset

See disk group.

DiskSuite state database

A replicated database that is used to store the configuration of metadevices and the state of these metadevices.

Fault detection

Sun Cluster programs that detect two types of failures. The first type includes low-level failures such as system panics and hardware faults (that is, failures that cause the entire server to be inoperable). These failures can be detected quickly. The second type comprises failures related to a data service; these take longer to detect.

Fault monitor

A fault daemon and the programs used to probe various parts of data services.

Fibre channel connections

The fibre-optic connections between the cluster nodes and the SPARCstorage Arrays.

Golden mediator

In Solstice DiskSuite configurations, the in-core state of a mediator host set if specific conditions were met when the mediator data was last updated. The state permits take operations to proceed even when a quorum of mediator hosts is not available.

HA Administrative file system

A special file system created on each logical host when Sun Cluster is first installed. It is used by Sun Cluster and by layered data services to store copies of their administrative data.

Heartbeat

A periodic message exchanged among the membership monitors on the cluster nodes. Lack of a heartbeat after a specified interval and number of retries may trigger a takeover.
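
The interval-and-retries rule can be sketched as a simple timeout check. This is a conceptual illustration, not the actual membership monitor; the interval and retry values are hypothetical.

```python
import time

HEARTBEAT_INTERVAL = 2.0   # assumed seconds between expected heartbeats
MAX_RETRIES = 3            # assumed missed intervals before declaring failure

def peer_is_down(last_heartbeat, now=None):
    """Return True if no heartbeat arrived within the retry window.

    A takeover would only be *considered* at this point; the real
    membership monitor coordinates a cluster-wide reconfiguration.
    """
    if now is None:
        now = time.time()
    return (now - last_heartbeat) > HEARTBEAT_INTERVAL * MAX_RETRIES
```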

Highly available data service

A data service that appears to remain continuously available, despite single-point failures of server hardware or software components.

Host

A physical machine that can be part of a Sun cluster. In Sun Cluster documentation, host is synonymous with node.

Hot standby server

In an N+1 configuration, the node that is connected to all multihost disks in the cluster. The hot standby is also the administrative node. If one or more active nodes fail, the data services move from the failed node to the hot standby. However, the +1 node can also run data services during normal operation.

Local disks

Disks attached to an HA server but not included in a diskset. The local disks contain the Solaris distribution and the Sun Cluster and volume management software packages. Local disks must not contain data exported by the Sun Cluster data service.

Logical host

A set of resources that moves as a unit between HA servers. In the current product, the resources include a collection of network host names and their associated IP addresses plus a group of disks (a diskset). Each logical host is mastered by one physical host at a time.

Logical host name

The name assigned to one of the logical network interfaces. A logical host name is used by clients on the network to refer to the location of data and data services. The logical host name is the name for a path to the logical host. Because a host may be on multiple networks, there may be multiple logical host names for a single logical host.

Logical network interface

In the Internet architecture, a host may have one or more IP addresses. HA configures additional logical network interfaces to establish a mapping between several logical network interfaces and a single physical network interface. This allows a single physical network interface to respond to multiple logical network interfaces. It also enables the IP address to move from one HA server to the other in the event of a takeover or a switchover performed with haswitch(1M), without requiring additional hardware interfaces.

Master

The server with exclusive read and write access to a diskset. The current master host for the diskset runs the data service and has the logical IP addresses mapped to its Ethernet address.

Mediator

In a dual-string configuration, provides a "third vote" in determining whether access to the metadevice state database replicas can be granted or must be denied. Used only when exactly half of the metadevice state database replicas are accessible.

Mediator host

A host that is acting in the capacity of a "third vote" by running the rpc.metamed(1M) daemon and that has been added to a diskset.

Mediator quorum

The condition achieved when half + 1 of the mediator hosts are accessible.
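
The half + 1 condition is a one-line integer test. The sketch below is illustrative only (the function name is hypothetical); the same arithmetic also underlies the replica quorum rule defined later in this glossary.

```python
def have_mediator_quorum(accessible, total):
    """Half + 1 of the mediator hosts must be accessible.

    Integer division gives "half" rounded down, so for two mediator
    hosts both must be accessible (2 // 2 + 1 == 2).
    """
    return accessible >= total // 2 + 1

print(have_mediator_quorum(2, 2))  # both of two mediators reachable
print(have_mediator_quorum(1, 2))  # only one of two: no quorum
```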

Membership monitor

A process running on all HA servers that monitors the servers. The membership monitor exchanges heartbeats with its sibling hosts. The monitor can initiate a takeover if the heartbeat stops. It also keeps track of which servers are active.

Metadevice

A group of components accessed as a single logical device by concatenating, striping, mirroring, or logging the physical devices. Metadevices are sometimes called pseudo devices.

Metadevice state database

Information kept in nonvolatile storage (on disk) for preserving the state and configuration of metadevices.

Metadevice state database replica

A copy of the state database. Keeping multiple copies of the state database protects against the loss of state and configuration information. This information is critical to all metadevice operations.

Mirroring

Replicating all writes made to a single logical device (the mirror) to multiple devices (the submirrors), while distributing read operations. This provides data redundancy in the event of a failure.
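
The write-to-all, read-from-any behavior can be sketched in a few lines. This is a conceptual model only, not volume manager code; the class and its in-memory "submirrors" are hypothetical stand-ins for real devices.

```python
import itertools

class Mirror:
    """Toy mirror: writes replicate to every submirror, reads rotate."""

    def __init__(self, n_submirrors, size):
        self.submirrors = [[None] * size for _ in range(n_submirrors)]
        self._next = itertools.cycle(range(n_submirrors))  # round-robin reads

    def write(self, block, data):
        for sub in self.submirrors:   # every write goes to all submirrors
            sub[block] = data

    def read(self, block):
        # Distribute reads across submirrors; any copy is valid.
        return self.submirrors[next(self._next)][block]
```

Because every submirror holds a full copy, a single submirror failure costs no data, which is the redundancy the definition refers to.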

Multihomed host

A host that is on more than one public network.

Multihost disk

A disk configured for potential accessibility from multiple servers. Sun Cluster software enables data on a multihost disk to be exported to network clients via a highly available data service.

Multihost disk expansion unit

See Disk expansion unit.

N to N topology

All nodes are directly connected to a set of shared disks.

N+1 topology

Some number (N) active servers and one (+1) hot-standby server. The active servers provide on-going data services and the hot-standby server takes over data service processing if one or more of the active servers fail.

Node

A physical machine that can be part of a Sun cluster. In Sun Cluster documentation, it is synonymous with host or server.

Nodelock

The mechanism used in clusters of more than two nodes running VERITAS Volume Manager to fence off failed nodes.

Parallel database

A single database image that can be accessed concurrently through multiple hosts by multiple users.

Partial failover

Failing over a subset of logical hosts mastered by a single physical host.

Potential master

Any physical host that is capable of mastering a particular logical host.

Primary logical host name

The name by which a logical host is known on the primary public network.

Primary physical host name

The name by which a physical host is known on the primary public network.

Primary public network

A name used to identify the first public network.

Private links

The private network between nodes used to send and receive heartbeats between members of a server set.

Quorum device

In VxVM configurations, the system votes by majority quorum to prevent network partitioning. Because neither node in a two-node cluster can form a majority on its own, a quorum device is included in the voting. This device could be either a controller or a disk.
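
The tie-breaking role of the quorum device can be shown with simple vote arithmetic. This is an illustrative sketch of majority voting, not VxVM code; the vote counts are hypothetical.

```python
def partition_survives(node_votes, quorum_device_votes, total_votes):
    """A partition continues only with a strict majority of all votes."""
    return (node_votes + quorum_device_votes) * 2 > total_votes

# Two-node cluster: each node holds 1 vote, the quorum device holds 1,
# so the total is 3. After a split, the node that claims the quorum
# device has 2 of 3 votes and survives; the other node does not.
print(partition_survives(1, 1, 3))  # node plus quorum device
print(partition_survives(1, 0, 3))  # lone node without the device
```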

Replica

See metadevice state database replica.

Replica quorum

A Solstice DiskSuite concept; the condition achieved when HALF + 1 of the metadevice state database replicas are accessible.

Ring topology

One primary and one backup server are specified for each set of data services.

Scalable Coherent Interface

A high speed interconnect used as a private network interface.

Scalable topology

See N to N topology.

Secondary logical host name

The name by which a logical host is known on a secondary public network.

Secondary physical host name

The name by which a physical host is known on a secondary public network.

Secondary public network

A name used to identify the second or subsequent public networks.

Server

A physical machine that can be part of a Sun cluster. In Sun Cluster documentation, it is synonymous with host or node.

Sibling host

One of the physical servers in a symmetric HA configuration.

Solstice DiskSuite

A software product that provides data reliability through disk striping, concatenation, mirroring, UFS logging, dynamic growth of metadevices and file systems, and metadevice state database replicas.

Stripe

Similar to concatenation, except the addressing of the component blocks is non-overlapped and interlaced on the slices (partitions), rather than placed sequentially. Striping is used to gain performance. By striping data across disks on separate controllers, multiple controllers can access data simultaneously.
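
The interlaced mapping can be sketched as address arithmetic: consecutive interlace-sized chunks rotate across the slices. Illustrative only; the function name and parameters are hypothetical.

```python
def stripe_lookup(block, n_slices, interlace):
    """Map a logical block to (slice_index, offset) under striping."""
    stripe_unit = block // interlace        # which interlace-sized chunk
    within = block % interlace              # offset inside that chunk
    slice_index = stripe_unit % n_slices    # chunks rotate across slices
    row = stripe_unit // n_slices           # complete rows of chunks so far
    return (slice_index, row * interlace + within)

# With 3 slices and a 16-block interlace, blocks 0-15 land on slice 0,
# blocks 16-31 on slice 1, blocks 32-47 on slice 2, then back to slice 0.
print(stripe_lookup(0, 3, 16))
print(stripe_lookup(16, 3, 16))
print(stripe_lookup(48, 3, 16))
```

Because adjacent chunks sit on different slices (and ideally different controllers), a large sequential I/O is spread over several disks at once, which is the performance gain the definition mentions.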

Submirror

A metadevice that is part of a mirror. See also mirroring.

Sun Cluster

Software and hardware that enable several machines to act as read-write data servers while serving as backups for each other.

Switch Management Agent (SMA)

The software component that manages sessions for the SCI and Ethernet links and switches.

Switchover

The coordinated moving of a logical host from one operational HA server to the other. A switchover is initiated by an administrator using the haswitch(1M) command.

Symmetric configuration

A two-node configuration where one server operates as the hot-standby server for the other.

Takeover

The automatic moving of a logical host from one HA server to another after a failure has been detected. The HA server that has the failure is forced to give up mastery of the logical host.

Terminal Concentrator

A device used to enable an administrative workstation to securely communicate with all nodes in the Sun Cluster.

Trans device

In Solstice DiskSuite configurations, a pseudo device responsible for managing the contents of a UFS log.

UFS

An acronym for the UNIX® file system.

UFS logging

Recording UFS updates to a log (the logging device) before the updates are applied to the UFS (the master device).
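
The log-before-apply ordering is the essence of the technique and can be sketched as follows. This is a conceptual model of write-ahead logging, not DiskSuite code; the class and its dictionary "master device" are hypothetical.

```python
class LoggedFS:
    """Toy write-ahead log: record each update before applying it."""

    def __init__(self):
        self.log = []        # stands in for the logging device
        self.master = {}     # stands in for the master device (the UFS)

    def update(self, key, value):
        self.log.append((key, value))   # 1. record intent in the log
        self.master[key] = value        # 2. then apply to the master device

    def replay(self):
        """After a crash, reapply logged updates to the master device."""
        for key, value in self.log:
            self.master[key] = value
```

Because the log entry lands first, an update interrupted between steps 1 and 2 can be completed by replaying the log, so the file system stays consistent without a full check.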

UFS logging device

In Solstice DiskSuite configurations, the component of a trans device that contains the UFS log.

UFS master device

In Solstice DiskSuite configurations, the component of a trans device that contains the UFS file system.