Using the Cluster Interconnect for Data Service Traffic

In most configurations, a cluster must have multiple network connections between Oracle Solaris hosts; these connections form the cluster interconnect.

Oracle Solaris Cluster software uses these multiple interconnects both to ensure high availability and to improve performance.

For both internal and external traffic, such as file system data or scalable services data, messages are striped across all available interconnects. The cluster interconnect is also available to applications for highly available communication between hosts. For example, a distributed application might have components running on different hosts that need to communicate. By using the cluster interconnect rather than the public network, these connections can withstand the failure of an individual link.

To use the cluster interconnect for communication between hosts, an application must use the private host names that you configured during the Oracle Solaris Cluster installation. For example, if the private host name for host1 is clusternode1-priv, use this name to communicate with host1 over the cluster interconnect. TCP sockets that are opened by using this name are routed over the cluster interconnect and can be transparently rerouted if a private network adapter fails. Application communication between any two hosts is striped over all interconnects: the traffic for a given TCP connection flows on only one interconnect at any given time, but different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.
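The following minimal C sketch shows this pattern: it resolves a private host name with the standard sockets interface and opens a TCP connection that is then carried over the cluster interconnect. The host name clusternode1-priv and port 5000 are placeholder values for illustration, not part of any particular cluster configuration.

/*
 * Minimal sketch: connect to a peer host over the cluster interconnect by
 * resolving its private host name.  "clusternode1-priv" and port "5000"
 * are placeholder values, not part of any actual cluster configuration.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static int
connect_private(const char *priv_hostname, const char *port)
{
    struct addrinfo hints, *res, *rp;
    int sock = -1;
    int gai;

    (void) memset(&hints, 0, sizeof (hints));
    hints.ai_family = AF_UNSPEC;        /* let the resolver choose */
    hints.ai_socktype = SOCK_STREAM;    /* TCP */

    /*
     * Resolving the private host name yields an address that the cluster
     * routes over the interconnect rather than the public network.
     */
    gai = getaddrinfo(priv_hostname, port, &hints, &res);
    if (gai != 0) {
        (void) fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(gai));
        return (-1);
    }

    for (rp = res; rp != NULL; rp = rp->ai_next) {
        sock = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (sock == -1)
            continue;
        if (connect(sock, rp->ai_addr, rp->ai_addrlen) == 0)
            break;                      /* connected over the interconnect */
        (void) close(sock);
        sock = -1;
    }
    freeaddrinfo(res);
    return (sock);
}

int
main(void)
{
    int sock = connect_private("clusternode1-priv", "5000");

    if (sock == -1)
        return (1);
    /* ... exchange application data; loss of one link is handled transparently ... */
    (void) close(sock);
    return (0);
}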

An application can optionally use a zone's private host name to communicate over the cluster interconnect between zones. However, you must set a private host name for each zone before the application can begin communicating; each zone must have its own private host name. An application that is running in one zone must use the private host name of that same zone to communicate with private host names in other zones. An application in one zone cannot communicate through the private host name of another zone.

Because you configure the private host names during Oracle Solaris Cluster installation, the cluster interconnect uses whatever names you chose at that time. To determine the actual name, use the scha_cluster_get command with the scha_privatelink_hostname_node argument. See the scha_cluster_get(1HA) man page.
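The scha_cluster_get command also has a C library counterpart, which is described in the scha_cluster_get(3HA) man page. The following is a minimal sketch of that interface, offered only as an illustration: the node name host1 is a placeholder, and the exact arguments expected by the SCHA_PRIVATELINK_HOSTNAME_NODE query tag should be confirmed against the man page. The program is typically linked with -lscha.

/*
 * Sketch of the library counterpart to the scha_cluster_get command, per
 * the scha_cluster_get(3HA) man page.  The node name "host1" is a
 * placeholder; verify the argument list for the
 * SCHA_PRIVATELINK_HOSTNAME_NODE query tag against the man page.
 */
#include <stdio.h>
#include <scha.h>

int
main(void)
{
    scha_cluster_t handle;
    scha_err_t err;
    char *priv_hostname = NULL;

    err = scha_cluster_open(&handle);
    if (err != SCHA_ERR_NOERR) {
        (void) fprintf(stderr, "scha_cluster_open: error %d\n", err);
        return (1);
    }

    /* Query the private host name that was configured for node host1. */
    err = scha_cluster_get(handle, SCHA_PRIVATELINK_HOSTNAME_NODE,
        "host1", &priv_hostname);
    if (err == SCHA_ERR_NOERR)
        (void) printf("private host name: %s\n", priv_hostname);

    (void) scha_cluster_close(handle);
    return (err != SCHA_ERR_NOERR);
}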

Each host is also assigned a fixed per-host address, which is plumbed on the clprivnet driver. This IP address maps to the private host name for the host, for example clusternode1-priv. See the clprivnet(7) man page.

If your application requires consistent IP addresses at all points, configure the application to bind to the per-host address on both the client and the server. All connections then appear to originate from and return to the per-host address.
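For example, a client might bind its socket to the per-host address of its own node, resolved from that node's private host name, before connecting to a peer, as in the following sketch. A server would similarly bind its listening socket to the per-host address. The private host names clusternode1-priv and clusternode2-priv and port 5000 are placeholder values.

/*
 * Minimal sketch: bind the client end of a connection to this host's
 * per-host address (resolved from its own private host name) before
 * connecting, so that the peer always sees the same source address.
 * "clusternode1-priv", "clusternode2-priv", and port "5000" are
 * placeholder values.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static int
resolve(const char *host, const char *port, struct addrinfo **res)
{
    struct addrinfo hints;

    (void) memset(&hints, 0, sizeof (hints));
    hints.ai_family = AF_INET;       /* the private network typically uses IPv4 */
    hints.ai_socktype = SOCK_STREAM;
    return (getaddrinfo(host, port, &hints, res));
}

int
main(void)
{
    struct addrinfo *local = NULL, *remote = NULL;
    int sock;
    int rc = 1;

    /* Local per-host address; a NULL service lets the system pick a source port. */
    if (resolve("clusternode1-priv", NULL, &local) != 0 ||
        resolve("clusternode2-priv", "5000", &remote) != 0) {
        (void) fprintf(stderr, "cannot resolve private host names\n");
        if (local != NULL)
            freeaddrinfo(local);
        return (1);
    }

    sock = socket(remote->ai_family, remote->ai_socktype, remote->ai_protocol);
    if (sock != -1) {
        if (bind(sock, local->ai_addr, local->ai_addrlen) == 0 &&
            connect(sock, remote->ai_addr, remote->ai_addrlen) == 0) {
            /*
             * The peer now sees this host's fixed per-host address as the
             * source of the connection.
             */
            rc = 0;
        }
        (void) close(sock);
    }

    freeaddrinfo(local);
    freeaddrinfo(remote);
    return (rc);
}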