A cluster typically requires multiple network connections between cluster nodes; together, these connections form the cluster interconnect.
Oracle Solaris Cluster software uses multiple interconnects to achieve the following goals:
Ensure high availability
For both internal and external traffic, such as file system data or scalable services data, messages are striped across all available interconnects. The cluster interconnect is also available to applications, for highly available communication between nodes. For example, a distributed application might have components running on different nodes that need to communicate. By using the cluster interconnect rather than the public network, these connections can withstand the failure of an individual link.
To use the cluster interconnect for communication between nodes, an application must use the private host names that you configured during the Oracle Solaris Cluster installation. For example, if the private host name for host1 is clusternode1-priv, use this name to communicate with host1 over the cluster interconnect. TCP sockets that are opened by using this name are routed over the cluster interconnect and can be transparently rerouted if a private network adapter fails. Application communication between any two nodes is striped over all interconnects. The traffic for a given TCP connection flows on one interconnect at any point. Different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.
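The pattern above can be sketched in a few lines. This is a hedged illustration, not cluster API code: it assumes the default private host name form `clusternodeN-priv` chosen at installation (use `scha_cluster_get` to confirm the actual name), and the port number and helper names are hypothetical.

```python
import socket

def private_hostname(node_id):
    """Default private host name for cluster node N (clusternodeN-priv).

    Assumes the default naming chosen during installation; if the
    private host names were changed, look them up with scha_cluster_get
    instead of constructing them.
    """
    return "clusternode%d-priv" % node_id

def connect_over_interconnect(node_id, port):
    """Open a TCP socket that is routed over the cluster interconnect.

    Because the socket is opened by private host name, the cluster can
    transparently reroute the connection if a private network adapter
    fails. The port is application-chosen (illustrative here).
    """
    return socket.create_connection((private_hostname(node_id), port))

# Example: a component on node 2 reaching its peer on node 1.
# sock = connect_over_interconnect(1, 5000)   # port 5000 is illustrative
```

The key point is only the address family of the name: any TCP socket opened to `clusternode1-priv` behaves as described in the text; no special API is needed.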
An application can optionally use a zone's private host name to communicate over the cluster interconnect between zones. However, you must first set each zone's private host name before the application can begin communicating. Each zone cluster node must have its own private host name in order to communicate. An application that is running in one zone must use the private host name of that zone to communicate with the private host names of other zones.
You can change the private host names after your Oracle Solaris Cluster installation. To determine the actual name, use the scha_cluster_get command with the scha_privatelink_hostname_node argument. See the scha_cluster_get(1HA) man page.
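An application can query the actual private host name at runtime instead of hard-coding it. The sketch below wraps the `scha_cluster_get` lookup described above; the exact optag spelling follows the scha_cluster_get(1HA) man page as quoted in the text, the node name `host1` is illustrative, and the command only exists on a node running Oracle Solaris Cluster.

```python
import subprocess

def privatelink_hostname_cmd(node):
    """Build the scha_cluster_get command line for a node's private host name.

    Separated from the invocation so the construction can be checked
    without cluster software installed.
    """
    return ["scha_cluster_get", "-O", "privatelink_hostname_node", node]

def get_private_hostname(node):
    """Run the query and return the private host name (cluster nodes only)."""
    result = subprocess.run(privatelink_hostname_cmd(node),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# On a cluster node, this might return e.g. "clusternode1-priv":
# name = get_private_hostname("host1")   # "host1" is illustrative
```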
Each host is also assigned a fixed per-host address. This per-host address is plumbed on the clprivnet driver. The IP address maps to the node's private host name, for example clusternode1-priv. See the clprivnet(7) man page.
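Because the per-host clprivnet address maps to the private host name, an application can obtain it through ordinary name resolution; no driver-specific call is needed. A minimal sketch, assuming the name `clusternode1-priv` resolves on the node:

```python
import socket

def private_address(private_name):
    """Resolve a private host name to its fixed per-host IP address.

    On a cluster node, resolving a name such as "clusternode1-priv"
    yields the address plumbed on the clprivnet driver for that node.
    Off-cluster, the name simply will not resolve.
    """
    return socket.gethostbyname(private_name)

# On a cluster node:
# addr = private_address("clusternode1-priv")
```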