Oracle® Solaris Cluster 4.3 System Administration Guide


Updated: June 2017

Administering the Public Network

Oracle Solaris Cluster software supports the Oracle Solaris software implementation of IPMP, link aggregations, and VNICs for public networks. Basic public network administration is the same for both cluster and noncluster environments.

Multipathing software is installed automatically with the Oracle Solaris 11 OS, but you must enable it before you can use it. Multipathing administration is covered in the appropriate Oracle Solaris OS documentation. However, review the following guidelines before administering IPMP, link aggregations, and VNICs in an Oracle Solaris Cluster environment.

For information about IPMP, see Chapter 3, Administering IPMP in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.3. For information about link aggregations, see Chapter 2, Configuring High Availability by Using Link Aggregations in Managing Network Datalinks in Oracle Solaris 11.3.

How to Administer IP Network Multipathing Groups in a Cluster

Before performing IPMP procedures on a cluster, consider the following guidelines.

  • When configuring a scalable service resource (SCALABLE=TRUE in the resource type registration file for the resource type) that uses the SUNW.SharedAddress network resource, you can configure PNM to monitor the status of all IPMP groups on the cluster nodes, not just the group that the SUNW.SharedAddress resource uses. With this configuration, the service is restarted and failed over if any IPMP group on a cluster node fails, which maximizes service availability for network clients that are co-located on the same subnets as the cluster nodes. For example:

    # echo ssm_monitor_all > /etc/cluster/pnm/pnm.conf

    Reboot the node.

  • The local-mac-address? variable must have a value of true for Ethernet adapters.

  • You can use probe-based IPMP groups or link-based IPMP groups in a cluster. A probe-based IPMP group tests the target IP address and provides the most protection by recognizing more conditions that might compromise availability.

    If you are using iSCSI storage as a quorum device, ensure that the probe-based IPMP device is configured correctly. If the iSCSI network is a private network that contains only the cluster nodes and the iSCSI storage device, with no other hosts present, the probe-based IPMP mechanism can break when all but one of the cluster nodes goes down. The problem occurs because there are no other hosts on the iSCSI network for IPMP to probe, so IPMP treats this as a network failure when only one node remains in the cluster. IPMP takes the iSCSI network adapter offline, and the remaining node then loses access to the iSCSI storage and thus to the quorum device. To resolve this problem, you could add a router to the iSCSI network so that hosts outside the cluster respond to the probes and prevent IPMP from taking the network adapter offline. Alternatively, you could configure IPMP with link-based failover instead of probe-based failover.
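As a sketch of the two failure-detection modes, the following Oracle Solaris 11 commands create an IPMP group (link-based detection by default) and then add a test address to an underlying interface to enable probe-based detection. The group name, interface names, and addresses (ipmp0, net0, net1, 192.168.10.x) are placeholders for illustration only.

```shell
# Create an IPMP group and place two interfaces under it
# (link-based failure detection is used by default).
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0

# Assign the data address to the IPMP group interface.
ipadm create-addr -T static -a 192.168.10.20/24 ipmp0/v4

# Optionally assign a test address to an underlying interface
# to enable probe-based failure detection (example address).
ipadm create-addr -T static -a 192.168.10.21/24 net0/test
```

Remember that test addresses are consumed by IPMP probes and must not be used by applications.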

  • Unless one or more non-link-local IPv6 public network interfaces exist in the public network configuration, the scinstall utility automatically configures a multiple-adapter IPMP group for each set of public-network adapters in the cluster that uses the same subnet. These groups are link-based with transitive probes. Test addresses can be added if probe-based failure detection is required.

  • Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.

  • Test IP addresses must not be used by normal applications because they are not highly available.

  • No restrictions are placed on multipathing group naming. However, when configuring a resource group, the netiflist naming convention is the multipathing group name followed by either the node ID number or the node name. For example, given a multipathing group named sc_ipmp0, the netiflist entry could be either sc_ipmp0@1 or sc_ipmp0@phys-schost-1, where the adapter is on the node phys-schost-1, which has the node ID 1.
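As an illustration of the netiflist convention, the following command creates a logical hostname resource and specifies the IPMP group to use on each node. The resource group, hostname, and resource names (test-rg, schost-lh, lh-rs) are hypothetical; only the sc_ipmp0@nodeID form follows from the convention above.

```shell
# Create a logical hostname resource, naming the IPMP group
# to use on nodes 1 and 2 via the -N (netiflist) option.
clreslogicalhostname create -g test-rg \
    -h schost-lh \
    -N sc_ipmp0@1,sc_ipmp0@2 \
    lh-rs
```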

  • Do not unconfigure (unplumb) or bring down an adapter of an IP Network Multipathing group without first switching over the IP addresses from the adapter to be removed to an alternate adapter in the group, using the if_mpadm(1M) command.
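A minimal sketch of that switchover, assuming an adapter named net1 that belongs to an IPMP group:

```shell
# Move all IP addresses off net1 to another adapter in its
# IPMP group before unplumbing or removing the adapter.
if_mpadm -d net1

# After maintenance, reattach the adapter; its addresses
# are moved back according to the group configuration.
if_mpadm -r net1
```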

  • Do not unplumb or remove a network interface from the IPMP group where the Oracle Solaris Cluster HA IP address is plumbed. This IP address can belong to a logical hostname resource or a shared address resource. However, if you unplumb the active interface by using the ifconfig command, Oracle Solaris Cluster recognizes this event and fails over the resource group to another healthy node if the IPMP group has become unusable in the process. Oracle Solaris Cluster can also restart the resource group on the same node if the IPMP group is valid but an HA IP address is missing. An IPMP group can become unusable through loss of IPv4 connectivity, loss of IPv6 connectivity, or both. For more information, see the if_mpadm(1M) man page.

  • Avoid rewiring adapters to different subnets without first removing them from their respective multipathing groups.

  • Logical adapter operations can be done on an adapter even if monitoring is on for the multipathing group.

  • You must maintain at least one public network connection for each node in the cluster. The cluster is inaccessible without a public network connection.

  • To view the status of IP Network Multipathing groups on a cluster, use the ipmpstat -g command. For more information, see Chapter 3, Administering IPMP in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.3.
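Beyond ipmpstat -g, a few related views of the same command can help when diagnosing a group. Output fields vary by configuration, so none is shown here:

```shell
# Show IPMP group state: group name, failure-detection
# mode, and the underlying interfaces.
ipmpstat -g

# Show per-interface state within each group
# (active, standby, failed).
ipmpstat -i

# Show probe targets, when probe-based failure
# detection is in use.
ipmpstat -t
```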

For cluster software installation procedures, see the Oracle Solaris Cluster 4.3 Software Installation Guide. For procedures about servicing public networking hardware components, see the Oracle Solaris Cluster Hardware Administration Manual.

Dynamic Reconfiguration With Public Network Interfaces

You must consider a few issues when completing dynamic reconfiguration (DR) operations on public network interfaces in a cluster.

  • All of the requirements, procedures, and restrictions that are documented for the Oracle Solaris dynamic reconfiguration feature also apply to Oracle Solaris Cluster dynamic reconfiguration support (except for the operating system quiescence operation). Therefore, review the documentation for the Oracle Solaris dynamic reconfiguration feature before using the dynamic reconfiguration feature with Oracle Solaris Cluster software. You should review in particular the issues that affect non-network IO devices during a dynamic reconfiguration detach operation.

  • Dynamic reconfiguration remove-board operations can succeed only when public network interfaces are not active. Before removing an active public network interface, switch the IP addresses from the adapter to be removed to another adapter in the multipathing group, using the if_mpadm command. For more information, see the if_mpadm(1M) man page.

  • If you try to remove a public network interface card without having properly disabled it as an active network interface, Oracle Solaris Cluster rejects the operation and identifies the interface that would be affected by the operation.


Caution  -  For multipathing groups with two adapters, if the remaining network adapter fails while you are performing the dynamic reconfiguration remove operation on the disabled network adapter, availability is impacted. The failed adapter has no other adapter in the group to fail over to for the duration of the dynamic reconfiguration operation.


Complete the following procedures in the order indicated when performing dynamic reconfiguration operations on public network interfaces.

Table 14  Task Map: Dynamic Reconfiguration With Public Network Interfaces

  1. Switch the IP addresses from the adapter to be removed to another adapter in the multipathing group, using the if_mpadm command. For instructions, see the if_mpadm(1M) man page.

  2. Remove the adapter from the multipathing group by using the ipadm command. For instructions, see the ipadm(1M) man page.

  3. Perform the dynamic reconfiguration operation on the public network interface.
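The three tasks above can be sketched as follows. The adapter name, group name, and attachment point (net1, ipmp0, pci_pci0) are placeholders; the actual attachment point for the dynamic reconfiguration operation is system-specific.

```shell
# 1. Move IP addresses off the adapter to be removed.
if_mpadm -d net1

# 2. Remove the adapter from its multipathing group.
ipadm remove-ipmp -i net1 ipmp0

# 3. Perform the DR operation on the board that hosts the
#    adapter (example attachment point; see cfgadm(1M)).
cfgadm -c disconnect pci_pci0
```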