Using the Cluster Interconnect for Data Service Traffic

A cluster must usually have multiple network connections between cluster nodes, forming the cluster interconnect.

Oracle Solaris Cluster software uses multiple interconnects to achieve two goals: high availability and improved performance.

For both internal and external traffic, such as file system data or scalable services data, messages are striped across all available interconnects. The cluster interconnect is also available to applications for highly available communication between nodes. For example, a distributed application might have components, running on different nodes, that need to communicate with each other. By using the cluster interconnect rather than the public transport, these connections can withstand the failure of an individual link.

To use the cluster interconnect for communication between nodes, an application must use the private host names that you configured during the Oracle Solaris Cluster installation. For example, if the private host name for host1 is clusternode1-priv, use this name to communicate with host1 over the cluster interconnect. TCP sockets that are opened by using this name are routed over the cluster interconnect and can be transparently rerouted if a private network adapter fails. Application communication between any two nodes is striped over all interconnects. The traffic for a given TCP connection flows on one interconnect at any given time, while different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.
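
The following short C sketch, which is not part of the guide, illustrates the idea: it resolves the private host name clusternode1-priv from the example above and opens an ordinary TCP socket to it, so the connection is carried over the cluster interconnect. The port number (7001) and the payload are placeholders chosen only for this sketch.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* Connect a TCP socket to a peer node over the cluster interconnect. */
    static int
    connect_private(const char *priv_hostname, const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        (void) memset(&hints, 0, sizeof (hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        /* Resolving the private host name yields the interconnect address. */
        if (getaddrinfo(priv_hostname, port, &hints, &res) != 0)
            return (-1);

        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd == -1)
                continue;
            /*
             * Traffic on this socket is carried over the cluster
             * interconnect and is transparently rerouted if a private
             * network adapter fails.
             */
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;
            (void) close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return (fd);
    }

    int
    main(void)
    {
        /* "clusternode1-priv" is the private host name from the text;
           port 7001 is an arbitrary placeholder for this sketch. */
        int fd = connect_private("clusternode1-priv", "7001");

        if (fd == -1) {
            (void) fprintf(stderr, "could not reach peer over interconnect\n");
            return (1);
        }
        (void) write(fd, "hello\n", 6);
        (void) close(fd);
        return (0);
    }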

An application can optionally use a zone's private host name to communicate over the cluster interconnect between zones. However, you must first set each zone's private host name before the application can begin communicating. Each zone cluster node must have its own private host name for this communication. An application that runs in one zone must use the private host name of that zone to communicate with private host names in other zones.

You can change the private host names after your Oracle Solaris Cluster installation. To determine the actual name, use the scha_cluster_get command with the scha_privatelink_hostname_node argument. See the scha_cluster_get(1HA) man page.
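
The same query is also available to C programs through the libscha interface. The sketch below is illustrative only: it assumes the scha_cluster_open, scha_cluster_get, and scha_cluster_close calling pattern described for scha_cluster_get(3HA), and in particular that the SCHA_PRIVATELINK_HOSTNAME_NODE query takes a node name as an extra argument and returns the private host name through a trailing string pointer. Verify the exact interface against the man pages on your system.

    #include <stdio.h>
    #include <scha.h>

    int
    main(int argc, char **argv)
    {
        scha_cluster_t handle;
        scha_err_t err;
        char *priv_hostname = NULL;
        /* Node name to look up; "host1" matches the example in the text. */
        const char *node = (argc > 1) ? argv[1] : "host1";

        err = scha_cluster_open(&handle);
        if (err != SCHA_ERR_NOERR) {
            (void) fprintf(stderr, "scha_cluster_open failed: %d\n", err);
            return (1);
        }

        /* Assumed argument order for the per-node query (see above). */
        err = scha_cluster_get(handle, SCHA_PRIVATELINK_HOSTNAME_NODE,
            node, &priv_hostname);
        if (err == SCHA_ERR_NOERR)
            (void) printf("%s -> %s\n", node, priv_hostname);
        else
            (void) fprintf(stderr, "scha_cluster_get failed: %d\n", err);

        (void) scha_cluster_close(handle);
        return (err == SCHA_ERR_NOERR ? 0 : 1);
    }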

Each host is also assigned a fixed per-host address. This per-host address is plumbed on the clprivnet driver. The IP address maps to the private host name for the host: clusternode1-priv. See the clprivnet(7) man page.
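
To observe the per-host address in practice, an application can simply resolve the private host name; the numeric address that is returned is the one plumbed on the clprivnet driver. The short sketch below is illustrative only and reuses the clusternode1-priv name from the text.

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int
    main(void)
    {
        struct addrinfo hints, *res, *rp;
        char addr[NI_MAXHOST];
        int rc;

        (void) memset(&hints, 0, sizeof (hints));
        hints.ai_socktype = SOCK_STREAM;

        rc = getaddrinfo("clusternode1-priv", NULL, &hints, &res);
        if (rc != 0) {
            (void) fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return (1);
        }
        for (rp = res; rp != NULL; rp = rp->ai_next) {
            /* Print the numeric per-host address carried by clprivnet. */
            if (getnameinfo(rp->ai_addr, rp->ai_addrlen, addr, sizeof (addr),
                NULL, 0, NI_NUMERICHOST) == 0)
                (void) printf("clusternode1-priv -> %s\n", addr);
        }
        freeaddrinfo(res);
        return (0);
    }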