Oracle Solaris Cluster Concepts Guide (Oracle Solaris Cluster 4.1)
A cluster must usually have multiple network connections between cluster nodes, forming the cluster interconnect.
Oracle Solaris Cluster software uses multiple interconnects to achieve the following goals:
Ensure high availability
Improve performance
For both internal and external traffic such as file system data or scalable services data, messages are striped across all available interconnects. The cluster interconnect is also available to applications, for highly available communication between nodes. For example, a distributed application might have components running on different nodes that need to communicate with each other. By using the cluster interconnect rather than the public transport, these connections can withstand the failure of an individual link.
To use the cluster interconnect for communication between nodes, an application must use the private host names that you configured during the Oracle Solaris Cluster installation. For example, if the private host name for host1 is clusternode1-priv, use this name to communicate with host1 over the cluster interconnect. TCP sockets that are opened by using this name are routed over the cluster interconnect and can be transparently rerouted if a private network adapter fails. Application communication between any two nodes is striped over all interconnects. The traffic for a given TCP connection flows on one interconnect at any point. Different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.
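The following C sketch is for illustration only and is not taken from this guide. It shows an application opening a TCP socket to the private host name clusternode1-priv so that its traffic travels over the cluster interconnect; the port number 12345 is an arbitrary assumption, and real application logic and error reporting are omitted.

    /*
     * Illustrative sketch only: connect to a peer component over the
     * cluster interconnect by resolving the private host name.
     * The port number (12345) is an arbitrary assumption.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int
    main(void)
    {
        struct addrinfo hints, *res, *ai;
        int sock = -1;

        (void) memset(&hints, 0, sizeof (hints));
        hints.ai_socktype = SOCK_STREAM;    /* TCP connection */

        /* Resolving the private host name yields the interconnect address. */
        if (getaddrinfo("clusternode1-priv", "12345", &hints, &res) != 0) {
            (void) fprintf(stderr, "cannot resolve private host name\n");
            return (1);
        }

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            sock = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (sock == -1)
                continue;
            /*
             * Traffic on this connection is carried by the cluster
             * interconnect and is transparently rerouted if a private
             * network adapter fails.
             */
            if (connect(sock, ai->ai_addr, ai->ai_addrlen) == 0)
                break;
            (void) close(sock);
            sock = -1;
        }
        freeaddrinfo(res);

        if (sock == -1)
            return (1);

        /* ... exchange application data over sock ... */
        (void) close(sock);
        return (0);
    }

Because the private host name resolves to the per-host address that is plumbed on the clprivnet driver, the same code works regardless of which physical private adapters are currently carrying the traffic.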
An application can optionally use a zone's private host name to communicate over the cluster interconnect between zones. However, each zone must have its own private host name, and you must set these private host names before the application can begin communicating. An application that is running in one zone must use the private host name in the same zone to communicate with private host names in other zones.
You can change the private host names after your Oracle Solaris Cluster installation. To determine the actual name, use the scha_cluster_get command with the scha_privatelink_hostname_node argument. See the scha_cluster_get(1HA) man page.
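The same query can also be made from a program through the libscha interface that is described in the scha_cluster_get(3HA) man page. The following C sketch is illustrative only, not an example from this guide: the node name phys-schost-1 is a hypothetical placeholder, and the program is assumed to be linked with -lscha.

    /*
     * Illustrative sketch only: retrieve a node's private host name
     * through the libscha interface (see scha_cluster_get(3HA)).
     * The node name "phys-schost-1" is a hypothetical example.
     */
    #include <stdio.h>
    #include <scha.h>

    int
    main(void)
    {
        scha_cluster_t handle;
        scha_err_t err;
        char *privhost = NULL;

        err = scha_cluster_open(&handle);
        if (err != SCHA_ERR_NOERR) {
            (void) fprintf(stderr, "scha_cluster_open failed: %d\n", err);
            return (1);
        }

        /* SCHA_PRIVATELINK_HOSTNAME_NODE takes a node name argument. */
        err = scha_cluster_get(handle, SCHA_PRIVATELINK_HOSTNAME_NODE,
            "phys-schost-1", &privhost);
        if (err == SCHA_ERR_NOERR && privhost != NULL)
            (void) printf("private host name: %s\n", privhost);

        (void) scha_cluster_close(handle);
        return (0);
    }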
Each host is also assigned a fixed per-host address. This per-host address is plumbed on the clprivnet driver. The IP address maps to the private host name for the host: clusternode1-priv. See the clprivnet(7) man page.