
Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle® Solaris 11.4


Updated: November 2020

IPMP Support in Oracle Solaris

    IPMP support in Oracle Solaris includes the following features:

  • IPMP enables you to configure multiple IP interfaces into a single group, called an IPMP group. As a whole, the IPMP group with its multiple underlying IP interfaces is represented as a single IPMP interface. This interface is treated just like any other interface on the IP layer of the network stack. All IP administrative tasks, routing tables, Address Resolution Protocol (ARP) tables, firewall rules, and other IP-related procedures work with an IPMP group by referring to the IPMP interface.

    Note -  Although Oracle Solaris supports the use of iSCSI devices with IPMP, a server that boots from an iSCSI device cannot be part of an IPMP group.
  • The system handles the distribution of data addresses amongst the underlying active interfaces. When the IPMP group is created, data addresses belong to the IPMP interface as an address pool. The kernel then automatically and randomly binds the data addresses to the underlying active interfaces of the group.

  • You primarily use the ipmpstat command to obtain information about IPMP groups. This command provides information about all aspects of the IPMP configuration, such as the underlying IP interfaces of the group, test and data addresses, the types of failure detection that are being used, and which interfaces have failed. See Monitoring IPMP Information.

  • You can assign a custom name to an IPMP interface to identify the IPMP group more easily. See Configuring IPMP Groups.
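As a minimal sketch of the points above, the following commands create an IPMP interface with a custom name and add two underlying IP interfaces to it. The names itops0, net0, and net1 are assumptions for illustration; substitute the names on your system.

```shell
# Create an IPMP interface with a custom (illustrative) name.
ipadm create-ipmp itops0

# Place two existing underlying IP interfaces (assumed names) in the group.
ipadm add-ipmp -i net0 -i net1 itops0

# Display a summary of the new group.
ipmpstat -g
```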

Functions of an IPMP Configuration

IPMP combines multiple IP interfaces into an IPMP group. The group functions like an IP interface, with data addresses to send or receive network traffic. The multiple underlying interfaces of the IPMP group ensure continuous network availability. If one underlying interface fails, the data addresses are redistributed amongst the remaining active interfaces in the group. Thus, with IPMP, network connectivity is always available, provided that at least one interface in the group is usable.

IPMP also improves overall network performance by automatically spreading outbound network traffic across the set of interfaces within the IPMP group. This process is called outbound load spreading. The system also indirectly controls inbound load spreading by performing source address selection for packets whose IP source address was not specified by the application. However, if an application has explicitly chosen an IP source address, then the system does not vary that source address.

In this release, outbound load spreading occurs on a per-connection basis, rather than on a per-next-hop basis as in previous releases. This change greatly improves IPMP capabilities by enabling two different connections to the same off-link destination to use different outbound interfaces.

Link aggregations perform functions that are similar to IPMP for improving network performance and availability. For a comparison of these two technologies, see Appendix B, Link Aggregations and IPMP: Feature Comparison, in Managing Network Datalinks in Oracle Solaris 11.4.

Rules for Using IPMP

IPMP group configuration is determined by your specific system configuration.

    Observe the following rules for IPMP configuration:

  1. Multiple IP interfaces that are on the same LAN must be configured into an IPMP group. A LAN broadly refers to a variety of local network configurations, including VLANs and both wired and wireless local networks with nodes that belong to the same link-layer broadcast domain.

    Note -  Multiple IPMP groups on the same link layer (L2) broadcast domain are unsupported. An L2 broadcast domain typically maps to a specific subnet. Therefore, you must configure only one IPMP group per subnet. Note also that some exceptions to this rule apply, for example, in the case of certain engineered systems that are provided by Oracle. For further clarification, contact your Oracle support representative.
  2. Underlying IP interfaces of an IPMP group must not span different LANs.

    For example, suppose that a system with three interfaces is connected to two separate LANs. Two IP interfaces connect to one LAN while a single IP interface connects to the other LAN. In this case, the two IP interfaces connecting to the first LAN must be configured as an IPMP group, as required by the first rule. In compliance with the second rule, the single IP interface that connects to the second LAN cannot become a member of that IPMP group. No IPMP configuration is required for the single IP interface. However, you can configure the single interface into an IPMP group to monitor the availability of the interface. See Types of IPMP Interface Configurations.

    Consider another case where the link to the first LAN consists of three IP interfaces while the other link consists of two interfaces. This setup requires the configuration of two IPMP groups: a three-interface group that connects to the first LAN, and a two-interface group that connects to the second LAN.

  3. All interfaces in the same group must have the same STREAMS modules configured in the same order. When planning an IPMP group, first check the order of STREAMS modules on all interfaces in the prospective IPMP group, then push the modules of each interface in the standard order for the IPMP group. To print a list of STREAMS modules, use the ifconfig interface modlist command. For example, here is the ifconfig output for a net0 interface:

    $ ifconfig net0 modlist
    0 arp
    1 ip
    2 e1000g

    As the previous output shows, interfaces normally exist as network drivers directly below the IP module. These interfaces do not require additional configuration. However, certain technologies are pushed as STREAMS modules between the IP module and the network driver. If a STREAMS module is stateful, then unexpected behavior can occur on failover, even if you push the same module to all of the interfaces in a group. However, you can use stateless STREAMS modules, provided that you push them in the same order on all interfaces in the IPMP group.

    For example, use the following command to push the modules of each interface in the standard order for the IPMP group:

    $ ifconfig net0 modinsert vpnmod@3

    To plan an IPMP group, see How to Plan an IPMP Group.
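The two-LAN scenario described in rule 2 above can be sketched as follows. The interface and group names are hypothetical; rule 1 requires one group per LAN, and rule 2 keeps the groups separate.

```shell
# First LAN: three underlying interfaces in one IPMP group.
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 -i net2 ipmp0

# Second LAN: two underlying interfaces in a separate IPMP group.
ipadm create-ipmp ipmp1
ipadm add-ipmp -i net3 -i net4 ipmp1
```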

IPMP Components

    The IPMP software components are as follows:

  • Multipathing daemon (in.mpathd) – Detects interface failures and repairs. The daemon performs both link-based failure detection and probe-based failure detection if test addresses are configured for the underlying interfaces. Depending on the type of failure detection method that is used, the daemon sets or clears the appropriate flags on the interface to indicate whether the interface has failed or has been repaired. As an option, you can also configure the daemon to monitor the availability of all interfaces, including interfaces that are not configured to belong to an IPMP group. See Failure Detection in IPMP.

    The in.mpathd daemon also controls the designation of active interfaces in the IPMP group. The daemon attempts to maintain the same number of active interfaces that were originally configured when the IPMP group was created. Thus, in.mpathd activates or deactivates underlying interfaces as needed to be consistent with the administrator's configured policy. For more information about how the in.mpathd daemon manages the activation of underlying interfaces, see How IPMP Works and the in.mpathd(8) man page.

  • IP kernel module – Manages outbound load spreading by distributing the connection over the IPMP group interface across the set of available underlying IP interfaces within the group. The module also performs source address selection to manage inbound load spreading. Both roles of the module improve network traffic performance.

  • IPMP configuration file (/etc/default/mpathd) – Defines the behavior of the in.mpathd daemon.

      You customize the file to set the following parameters:

    • Target interfaces to probe when running probe-based failure detection

    • Time duration to probe a target to detect failure

    • Status with which to flag a failed interface after that interface is repaired

    • Scope of IP interfaces to monitor, whether to also include IP interfaces in the system that are not configured to belong to IPMP groups

    For information about how to modify the configuration file, see How to Configure the Behavior of the IPMP Daemon.

  • ipmpstat command – Provides different types of information about the status of IPMP as a whole. The tool also displays other information about the underlying IP interfaces for each IPMP group, as well as data and test addresses that have been configured for the group. See Monitoring IPMP Information and the ipmpstat(8) man page.
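The /etc/default/mpathd file described above is a plain key=value file. The following sketch shows three commonly documented parameters with their usual default values; verify the exact parameter set on your system against the in.mpathd(8) man page.

```
# Time, in milliseconds, within which in.mpathd must detect an
# interface failure; probe spacing is derived from this target.
FAILURE_DETECTION_TIME=10000

# Whether a repaired interface automatically returns to active status.
FAILBACK=yes

# Whether in.mpathd monitors only interfaces that belong to IPMP groups.
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
```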

Types of IPMP Interface Configurations

An IPMP configuration typically consists of two or more physical interfaces on the same system that are attached to the same LAN.

    These interfaces can belong to an IPMP group in either of the following configurations:

  • Active-active configuration – An IPMP group in which all underlying interfaces are active. An active interface is an IP interface that is currently available for use by the IPMP group.

    Note -  By default, an underlying interface becomes active when you configure the interface to become part of an IPMP group.
  • Active-standby configuration – An IPMP group in which at least one interface is administratively configured as a standby interface. Although idle, the standby interface is monitored by the multipathing daemon to track the availability of the interface, depending on how the interface is configured. If link-failure notification is supported by the interface, link-based failure detection is used. If the interface is configured with a test address, probe-based failure detection is also used. If an active interface fails, the standby interface is automatically deployed as needed. You can configure as many standby interfaces as are needed for an IPMP group.
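As a sketch of the active-standby configuration above, an underlying interface is designated as a standby by setting its standby property with the ipadm command. The interface name net2 is an assumption for illustration.

```shell
# Mark an underlying IP interface as a standby for its IPMP group.
ipadm set-ifprop -p standby=on -m ip net2
```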

You can also configure a single interface in its own IPMP group. The single-interface IPMP group behaves the same as an IPMP group with multiple interfaces. However, this IPMP configuration does not provide high availability for network traffic. If the underlying interface fails, then the system loses all capability to send or receive traffic. The purpose of configuring a single-interface IPMP group is to monitor the availability of the interface by using failure detection. By configuring a test address on the interface, the multipathing daemon can track the interface by using probe-based failure detection.

Typically, a single-interface IPMP group configuration is used with other technologies that have broader failover capabilities, such as the Oracle Solaris Cluster software. The system can continue to monitor the status of the underlying interface, but the Oracle Solaris Cluster software provides the functionality to ensure availability of the network when a failure occurs. For more information about the Oracle Solaris Cluster software, see Concepts for Oracle Solaris Cluster in Oracle Help Center.

An IPMP group without underlying interfaces can also exist, such as a group whose underlying interfaces have been removed. The IPMP group is not destroyed, but it cannot be used to send and receive traffic. As underlying interfaces are brought online for the group, the data addresses of the IPMP interface are allocated to those interfaces, and the system resumes hosting network traffic.

How IPMP Works

IPMP maintains network availability by attempting to preserve the same number of active and standby interfaces that was originally configured when the IPMP group was created.

IPMP failure detection can be link-based, probe-based, or both to determine the availability of a specific underlying IP interface in the group. If IPMP determines that an underlying interface has failed, then that interface is flagged as failed and is no longer usable. The data IP address that is associated with the failed interface is then redistributed to another functioning interface in the group. If available, a standby interface is also deployed to maintain the original number of active interfaces.

Consider a three-interface IPMP group, itops0, with an active-standby configuration, as illustrated in the following figure.

Figure 1  IPMP Active-Standby Configuration

image:Graphic shows an active-standby configuration of itops0.

    The IPMP group itops0 is configured as follows:

  • Two data addresses are assigned to the group.

  • Two underlying interfaces are configured as active interfaces and are assigned flexible link names: net0 and net1.

  • The group has one standby interface, also with a flexible link name: net2.

  • Probe-based failure detection is used, and thus the active and standby interfaces, net0, net1, and net2, are each configured with a test address.
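The itops0 configuration above might be built as follows. Because the figure's actual data and test addresses are not reproduced here, the 192.0.2.x addresses below are hypothetical placeholders.

```shell
# Create the IPMP interface and add the three underlying interfaces.
ipadm create-ipmp itops0
ipadm add-ipmp -i net0 -i net1 -i net2 itops0

# Designate net2 as the standby interface.
ipadm set-ifprop -p standby=on -m ip net2

# Data addresses belong to the IPMP interface itself.
ipadm create-addr -T static -a 192.0.2.10/24 itops0/v4a
ipadm create-addr -T static -a 192.0.2.11/24 itops0/v4b

# Test addresses for probe-based failure detection go on the
# underlying interfaces.
ipadm create-addr -T static -a 192.0.2.20/24 net0/test
ipadm create-addr -T static -a 192.0.2.21/24 net1/test
ipadm create-addr -T static -a 192.0.2.22/24 net2/test
```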

Note -  The active, offline, standby, and failed areas in IPMP Active-Standby Configuration, Interface Failure in IPMP, Standby Interface Failure in IPMP, and IPMP Recovery Process indicate only the status of underlying interfaces and not physical locations. No physical movement of interfaces or addresses or any transfer of IP interfaces occurs within this IPMP implementation. The areas only serve to show how an underlying interface changes status as a result of either failure or repair.

You can use the ipmpstat command with different options to display specific types of information about existing IPMP groups. See Monitoring IPMP Information.

For example, the following command displays information about the IPMP configuration, as shown in IPMP Active-Standby Configuration:

$ ipmpstat -g
GROUP     GROUPNAME     STATE     FDT        INTERFACES
itops0    itops0        ok        10.00s     net1 net0 (net2)

The following command displays the underlying interfaces in a group:

$ ipmpstat -i
INTERFACE   ACTIVE     GROUP     FLAGS      LINK        PROBE     STATE
net0        yes        itops0    -------    up          ok        ok
net1        yes        itops0    --mb---    up          ok        ok
net2        no         itops0    is-----    up          ok        ok

IPMP maintains network availability by managing the underlying interfaces to preserve the original number of active interfaces. Thus, if net0 fails, then net2 is deployed to ensure that the IPMP group continues to have two active interfaces. The net2 activation is shown in the following figure.

Figure 2  Interface Failure in IPMP

image:Figure that shows failure of an active interface in the IPMP group.

For the previous figure, the ipmpstat command displays the following information:

$ ipmpstat -i
INTERFACE   ACTIVE     GROUP     FLAGS      LINK        PROBE     STATE
net0        no         itops0    -------    up          failed    failed
net1        yes        itops0    --mb---    up          ok        ok
net2        yes        itops0    -s-----    up          ok        ok

After net0 is repaired, it reverts to its status as an active interface. In turn, net2 is returned to its original standby status.

See Standby Interface Failure in IPMP for a different failure scenario, where the standby interface net2 fails (1). Later, one active interface, net1, is taken offline by the administrator (2). The result is that the IPMP group is left with a single functioning interface, net0.

Figure 3  Standby Interface Failure in IPMP

image:Figure that shows failure of a standby interface in the IPMP group.

For the previous figure, the ipmpstat command displays the following information:

$ ipmpstat -i
INTERFACE   ACTIVE     GROUP     FLAGS       LINK        PROBE     STATE
net0        yes        itops0    -------     up          ok        ok
net1        no         itops0    --mb-d-     up          ok        offline
net2        no         itops0    is-----     up          failed    failed

For this particular failure, the recovery process after the interface is repaired is different. The recovery process depends on the original number of active interfaces in the IPMP group compared with the configuration after the repair. The following figure represents the recovery process.

Figure 4  IPMP Recovery Process

image:Graphic shows the IPMP recovery process.

In this recovery process, when net2 is repaired, it normally reverts to its original status as a standby interface. However, the IPMP group still does not reflect the original number of two active interfaces because net1 continues to remain offline. Thus, IPMP instead deploys net2 as an active interface.

The ipmpstat command displays the following post-repair IPMP scenario:

$ ipmpstat -i
INTERFACE   ACTIVE     GROUP     FLAGS       LINK        PROBE     STATE
net0        yes        itops0    -------     up          ok        ok
net1        no         itops0    --mb-d-     up          ok        offline
net2        yes        itops0    -s-----     up          ok        ok

A similar recovery process occurs if the failure involves an active interface that is also configured in FAILBACK=no mode, where a failed active interface does not automatically revert to active status upon repair. Suppose that net0 in Interface Failure in IPMP is configured in FAILBACK=no mode. In that mode, a repaired net0 becomes a standby interface, even though it was originally an active interface. The interface net2 remains active to maintain the IPMP group's original number of two active interfaces.

The ipmpstat command displays the following recovery information:

$ ipmpstat -i
INTERFACE   ACTIVE     GROUP     FLAGS      LINK        PROBE     STATE
net0        no         itops0    i------    up          ok        ok
net1        yes        itops0    --mb---    up          ok        ok
net2        yes        itops0    -s-----    up          ok        ok

For more information, see FAILBACK=no Mode.
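As a sketch, FAILBACK=no mode is enabled by editing /etc/default/mpathd and then prompting the daemon to reread its configuration file:

```shell
# In /etc/default/mpathd, set:
#   FAILBACK=no
# Then signal in.mpathd to reread the file.
pkill -HUP in.mpathd
```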