A node in the Sun Cluster configuration that is providing highly available data services.
A workstation that is either outside the cluster or one of the cluster nodes that is used to run cluster administrative software.
A set of network adapters on the same subnet, used by network adapter failover (NAFO). Adapters within a set provide backup for one another.
The set of Cluster Configuration Databases needed to elect a valid and consistent copy of the Cluster Configuration Database.
Two to four nodes configured together to run either parallel database software or highly available data services.
A highly available, replicated database that can be used to store data for HA data services and other Sun Cluster configuration needs.
The private network interface between cluster nodes.
The software that maintains a consistent cluster membership roster to avoid database corruption and subsequent transmission of corrupted or inconsistent data to clients. When nodes join or leave the cluster, thus changing the membership, CMM processes on the nodes coordinate global reconfiguration of various system services.
A physical machine that is part of a Sun cluster. Also referred to as a cluster host or cluster server.
The set of cluster nodes that can participate in the cluster membership.
An ordered multistep process that is invoked whenever there is a significant change in cluster state, such as takeover, switchover, or a physical host reboot. During cluster reconfiguration, the Sun Cluster software coordinates all of the physical hosts that are up and communicating. Those hosts agree on which logical host(s) should be mastered by which physical hosts.
Two pairs of Sun Cluster nodes operating under a single cluster administrative framework.
The cluster Simple Network Management Protocol (SNMP) agent is used to monitor several clusters (a maximum of 32) at the same time.
See cluster quorum.
A metadevice created by sequentially mapping blocks on several physical slices (partitions) to a logical device. Two or more physical components can be concatenated. The slices are accessed sequentially rather than interlaced (as with stripes).
A network service that implements read-write access to disk-based data from clients on a network. NFS is an example of a data service. The data service may be composed of multiple processes that work together.
The node that is configured to master a disk group when the logical hosts are configured.
A disk storage unit that is physically connected to all nodes in the cluster.
Locking software used in a shared disk Oracle7 or Oracle8 Parallel Server (OPS) environment. The DLM enables Oracle processes running on different nodes to synchronize database access. The DLM is designed for high availability; if a process or node crashes, the remaining nodes do not have to be shut down and restarted. A quick reconfiguration of the DLM is performed to recover from such a failure.
The physical storage enclosure that holds the multihost disks. For example, SPARCstorage Arrays, Sun StorEdge MultiPacks, Sun StorEdge A3000s and Sun StorEdge A5000s.
A well-defined group of multihost disks that move as a unit between two servers in an HA configuration. This can be either a Solstice DiskSuite diskset or a VERITAS Volume Manager disk group.
See disk group.
A replicated database that is used to store the configuration of metadevices and the state of these metadevices.
Sun Cluster programs that detect two types of failures. The first type includes low-level failures such as system panics and hardware faults (that is, failures that cause the entire server to be inoperable); these can be detected quickly. The second type includes failures related to a data service; these take longer to detect.
A fault daemon and the programs used to probe various parts of data services.
Fibre-optic connections that link the nodes with the SPARCstorage Arrays.
In Solstice DiskSuite configurations, the in-core state of a mediator host set if specific conditions were met when the mediator data was last updated. The state permits take operations to proceed even when a quorum of mediator hosts is not available.
A special file system created on each logical host when Sun Cluster is first installed. It is used by Sun Cluster and by layered data services to store copies of their administrative data.
A periodic message that the membership monitors send to one another. The lack of a heartbeat after a specified interval and number of retries may trigger a takeover.
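The timeout rule above can be sketched as follows. This is a minimal illustration with assumed parameter names, not Sun Cluster's implementation: a takeover becomes possible only once the configured number of heartbeat intervals has elapsed with no heartbeat received.

```python
import time

def missed_heartbeats(last_heartbeat, interval, retries, now=None):
    """Return True when `retries` heartbeat intervals have passed
    since the last heartbeat was received (takeover may be triggered)."""
    now = time.monotonic() if now is None else now
    return (now - last_heartbeat) >= interval * retries
```

For example, with a 2-second interval and 3 retries, a sibling host is considered unresponsive only after 6 seconds of silence.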
A data service that appears to remain continuously available, despite single-point failures of server hardware or software components.
A physical machine that can be part of a Sun cluster. In Sun Cluster documentation, host is synonymous with node.
In an N+1 configuration, the node that is connected to all multihost disks in the cluster. The hot standby is also the administrative node. If one or more active nodes fail, the data services move from the failed node to the hot standby. However, the +1 node is not prevented from running data services in normal operation.
Disks attached to an HA server but not included in a diskset. The local disks contain the Solaris distribution and the Sun Cluster and volume management software packages. Local disks must not contain data exported by the Sun Cluster data service.
A set of resources that moves as a unit between HA servers. In the current product, the resources include a collection of network host names and their associated IP addresses plus a group of disks (a diskset). Each logical host is mastered by one physical host at a time.
The name assigned to one of the logical network interfaces. A logical host name is used by clients on the network to refer to the location of data and data services. The logical host name is the name for a path to the logical host. Because a host may be on multiple networks, there may be multiple logical host names for a single logical host.
In the Internet architecture, a host may have one or more IP addresses. HA configures additional logical network interfaces to map several logical network interfaces to a single physical network interface, allowing the single physical interface to respond to multiple logical interfaces. This also enables an IP address to move from one HA server to the other in the event of a takeover or haswitch(1M), without requiring additional hardware interfaces.
The server with exclusive read and write access to a diskset. The current master host for the diskset runs the data service and has the logical IP addresses mapped to its Ethernet address.
In a dual-string configuration, provides a "third vote" in determining whether access to the metadevice state database replicas can be granted or must be denied. Used only when exactly half of the metadevice state database replicas are accessible.
A host that is acting in the capacity of a "third vote" by running the rpc.metamedd(1M) daemon and that has been added to a diskset.
The condition achieved when half + 1 of the mediator hosts are accessible.
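The "half + 1" rule can be expressed as a one-line check. This is a minimal sketch of the arithmetic only, not Sun Cluster code, and it assumes integer (floor) division for odd host counts:

```python
def has_mediator_quorum(accessible: int, total: int) -> bool:
    """Return True when the number of accessible mediator hosts
    reaches half the total, plus one (the "half + 1" rule)."""
    return accessible >= total // 2 + 1
```

In the common dual-string case with two mediator hosts, both must be accessible (2 // 2 + 1 == 2) unless the golden mediator exception applies.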
A process running on all HA servers that monitors the servers. The membership monitor sends heartbeats to, and receives heartbeats from, its sibling hosts. The monitor can initiate a takeover if the heartbeat stops. It also keeps track of which servers are active.
A group of components accessed as a single logical device by concatenating, striping, mirroring, or logging the physical devices. Metadevices are sometimes called pseudo devices.
Information kept in nonvolatile storage (on disk) for preserving the state and configuration of metadevices.
A copy of the state database. Keeping multiple copies of the state database protects against the loss of state and configuration information. This information is critical to all metadevice operations.
Replicating all writes made to a single logical device (the mirror) to multiple devices (the submirrors), while distributing read operations. This provides data redundancy in the event of a failure.
A host that is on more than one public network.
A disk configured for potential accessibility from multiple servers. Sun Cluster software enables data on a multihost disk to be exported to network clients via a highly available data service.
See Disk expansion unit.
All nodes are directly connected to a set of shared disks.
Some number (N) of active servers and one (+1) hot-standby server. The active servers provide ongoing data services, and the hot-standby server takes over data service processing if one or more of the active servers fail.
A physical machine that can be part of a Sun cluster. In Sun Cluster documentation, it is synonymous with host or node.
The mechanism, used in clusters of more than two nodes running VERITAS Volume Manager, to fence off failed nodes.
A single database image that can be accessed concurrently through multiple hosts by multiple users.
Failing over a subset of logical hosts mastered by a single physical host.
Any physical host that is capable of mastering a particular logical host.
The name by which a logical host is known on the primary public network.
The name by which a physical host is known on the primary public network.
A name used to identify the first public network.
The private network between nodes used to send and receive heartbeats between members of a server set.
In VxVM configurations, the system votes by majority quorum to prevent network partitioning. Because two nodes alone cannot establish a majority quorum, a quorum device is included in the voting. This device can be either a controller or a disk.
See metadevice state database replica.
A Solstice DiskSuite concept; the condition achieved when half + 1 of the metadevice state database replicas are accessible.
One primary and one backup server are specified for each set of data services.
A high speed interconnect used as a private network interface.
See N to N topology.
The name by which a logical host is known on a secondary public network.
The name by which a physical host is known on a secondary public network.
A name used to identify the second or subsequent public networks.
A physical machine that can be part of a Sun cluster. In Sun Cluster documentation, it is synonymous with host or node.
One of the physical servers in a symmetric HA configuration.
A software product that provides data reliability through disk striping, concatenation, mirroring, UFS logging, dynamic growth of metadevices and file systems, and metadevice state database replicas.
Similar to concatenation, except the addressing of the component blocks is non-overlapped and interlaced on the slices (partitions), rather than placed sequentially. Striping is used to gain performance. By striping data across disks on separate controllers, multiple controllers can access data simultaneously.
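The addressing difference between concatenation and striping can be sketched as a block-to-slice mapping. This is an illustrative model with assumed function names, not Solstice DiskSuite code:

```python
def concat_map(block, slice_sizes):
    """Concatenation: fill each slice completely before moving on
    to the next. Returns (slice_index, offset_within_slice)."""
    for i, size in enumerate(slice_sizes):
        if block < size:
            return i, block
        block -= size
    raise ValueError("block beyond end of metadevice")

def stripe_map(block, n_slices, interlace):
    """Striping: interleave fixed-size chunks of `interlace` blocks
    round-robin across the slices, rather than sequentially."""
    chunk, offset = divmod(block, interlace)
    slice_index = chunk % n_slices
    offset_in_slice = (chunk // n_slices) * interlace + offset
    return slice_index, offset_in_slice
```

With three slices and an interlace of 16 blocks, for example, blocks 0-15 land on slice 0, blocks 16-31 on slice 1, and blocks 32-47 on slice 2, before the pattern wraps back to slice 0; this round-robin placement is what lets multiple controllers service a large I/O simultaneously.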
A metadevice that is part of a mirror. See also mirroring.
Software and hardware that enables several machines to act as read-write data servers while acting as backups for each other.
The software component that manages sessions for the SCI and Ethernet links and switches.
The coordinated moving of a logical host from one operational HA server to the other. A switchover is initiated by an administrator using the haswitch(1M) command.
A two-node configuration where one server operates as the hot-standby server for the other.
The automatic moving of a logical host from one HA server to another after a failure has been detected. The HA server that has the failure is forced to give up mastery of the logical host.
A device used to enable an administrative workstation to securely communicate with all nodes in the Sun Cluster.
In Solstice DiskSuite configurations, a pseudo device responsible for managing the contents of a UFS log.
An acronym for the UNIX® file system.
Recording UFS updates to a log (the logging device) before the updates are applied to the UFS (the master device).
In Solstice DiskSuite configurations, the component of a trans metadevice that contains the UFS log.
In Solstice DiskSuite configurations, the component of a trans metadevice that contains the UFS file system.