System Administration Guide: IP Services

Chapter 30 Introducing IPMP (Overview)

IP network multipathing (IPMP) provides physical interface failure detection and transparent network access failover for a system with multiple interfaces on the same IP link. IPMP also provides load spreading of packets for systems with multiple interfaces.

This chapter contains the following information:

Why You Should Use IPMP
Oracle Solaris IPMP Components
IPMP Terminology and Concepts
Basic Requirements of IPMP
IPMP Addressing
IPMP Interface Configurations
IPMP Failure Detection and Recovery Features
IPMP and Dynamic Reconfiguration

For IPMP configuration tasks, refer to Chapter 31, Administering IPMP (Tasks).

Why You Should Use IPMP

IPMP provides increased reliability, availability, and network performance for systems with multiple physical interfaces. Occasionally, a physical interface or the networking hardware attached to that interface might fail or require maintenance. Traditionally, at that point, the system can no longer be contacted through any of the IP addresses that are associated with the failed interface. Additionally, any existing connections to the system using those IP addresses are disrupted.

By using IPMP, you can configure one or more physical interfaces into an IP multipathing group, or IPMP group. After configuring IPMP, the system automatically monitors the interfaces in the IPMP group for failure. If an interface in the group fails or is removed for maintenance, IPMP automatically migrates, or fails over, the failed interface's IP addresses. The recipient of these addresses is a functioning interface in the failed interface's IPMP group. The failover feature of IPMP preserves connectivity and prevents disruption of any existing connections. Additionally, IPMP improves overall network performance by automatically spreading out network traffic across the set of interfaces in the IPMP group. This process is called load spreading.

Oracle Solaris IPMP Components

Oracle Solaris IPMP involves the following software:

Multipathing Daemon, in.mpathd

The in.mpathd daemon detects interface failures, and then implements various procedures for failover and failback. After in.mpathd detects a failure or a repair, the daemon sends an ioctl to perform the failover or failback. The ip kernel module, which implements the ioctl, does the network access failover transparently and automatically.


Note –

Do not use Alternate Pathing while using IPMP on the same set of network interface cards. Likewise, you should not use IPMP while you are using Alternate Pathing. You can use Alternate Pathing and IPMP at the same time on different sets of interfaces. For more information about Alternate Pathing, refer to the Sun Enterprise Server Alternate Pathing 2.3.1 User Guide.


The in.mpathd daemon detects failures and repairs by sending out probes on all the interfaces that are part of an IPMP group. The in.mpathd daemon also detects failures and repairs by monitoring the RUNNING flag on each interface in the group. Refer to the in.mpathd(1M) man page for more information.


Note –

Using DHCP to manage IPMP data addresses is not supported. If you attempt to manage these addresses with DHCP, DHCP eventually abandons control of them. Do not use DHCP on data addresses.


IPMP Terminology and Concepts

This section introduces terms and concepts that are used throughout the IPMP chapters in this book.

IP Link

In IPMP terminology, an IP link is a communication facility or medium over which nodes can communicate at the data-link layer of the Internet protocol suite. Types of IP links might include simple Ethernets, bridged Ethernets, hubs, or Asynchronous Transfer Mode (ATM) networks. An IP link can have one or more IPv4 subnet numbers and, if applicable, one or more IPv6 subnet prefixes. A subnet number or prefix cannot be assigned to more than one IP link. In ATM LANE, an IP link is a single emulated local area network (LAN). The scope of the Address Resolution Protocol (ARP) is a single IP link.


Note –

Other IP-related documents, such as RFC 2460, Internet Protocol, Version 6 (IPv6) Specification, use the term link instead of IP link. Part VI uses the term IP link to avoid confusion with IEEE 802. In IEEE 802, link refers to a single wire from an Ethernet network interface card (NIC) to an Ethernet switch.


Physical Interface

The physical interface provides a system's attachment to an IP link. This attachment is often implemented as a device driver and a NIC. If a system has multiple interfaces attached to the same link, you can configure IPMP to perform failover if one of the interfaces fails. For more information on physical interfaces, refer to IPMP Interface Configurations.

Network Interface Card

A network interface card is a network adapter that can be built in to the system. Or, the NIC can be a separate card that serves as an interface from the system to an IP link. Some NICs can have multiple physical interfaces. For example, a qfe NIC has four interfaces, qfe0 through qfe3.

IPMP Group

An IP multipathing group, or IPMP group, consists of one or more physical interfaces on the same system that are configured with the same IPMP group name. All interfaces in the IPMP group must be connected to the same IP link. The same (non-null) character string IPMP group name identifies all interfaces in the group. You can place interfaces from NICs of different speeds within the same IPMP group, as long as the NICs are of the same type. For example, you can configure the interfaces of 100-megabit Ethernet NICs and the interfaces of 1-gigabit Ethernet NICs in the same group. As another example, suppose you have two 100-megabit Ethernet NICs. You can configure one of the interfaces down to 10 megabits and still place the two interfaces into the same IPMP group.
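
For illustration only, the following sketch shows how two interfaces might be placed into the same IPMP group by assigning them the same group name with the ifconfig command. The interface names hme0 and hme1 and the group name testgroup are placeholders for your own configuration.

# ifconfig hme0 group testgroup
# ifconfig hme1 group testgroup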

You cannot place two interfaces of different media types into an IPMP group. For example, you cannot place an ATM interface in the same group as an Ethernet interface.

Failure Detection and Failover

Failure detection is the process of detecting when an interface or the path from an interface to an Internet layer device no longer works. IPMP provides systems with the ability to detect when an interface has failed. IPMP detects the following types of communication failures:

The transmit or receive path of the interface has failed.
The attachment of the interface to the IP link is down.
The port on the switch does not transmit or receive packets.
The physical interface in an IPMP group is not present at system boot.

After detecting a failure, IPMP begins failover. Failover is the automatic process of switching the network access from a failed interface to a functioning physical interface in the same group. Network access includes IPv4 unicast, multicast, and broadcast traffic, as well as IPv6 unicast and multicast traffic. Failover can only occur when you have configured more than one interface in the IPMP group. The failover process ensures uninterrupted access to the network.

Repair Detection and Failback

Repair detection is the process of detecting when a NIC or the path from a NIC to an Internet layer device starts operating correctly after a failure. After detecting that a NIC has been repaired, IPMP performs failback, the process of switching network access back to the repaired interface. Repair detection assumes that you have enabled failbacks. See Detecting Physical Interface Repairs for more information.

Target Systems

Probe-based failure detection uses target systems to determine the condition of an interface. Each target system must be attached to the same IP link as the members of the IPMP group. The in.mpathd daemon on the local system sends ICMP probe messages to each target system. The probe messages help to determine the health of each interface in the IPMP group.

For more information about target system use in probe-based failure detection, refer to Probe-Based Failure Detection.

Outbound Load Spreading

With IPMP configured, outbound network packets are spread across multiple NICs without affecting the ordering of packets. This process is known as load spreading. As a result of load spreading, higher throughput is achieved. Load spreading occurs only when the network traffic is flowing to multiple destinations that use multiple connections.

Dynamic Reconfiguration

Dynamic reconfiguration (DR) is the ability to reconfigure a system while the system is running, with little or no impact on existing operations. Not all Sun platforms support DR. Some Sun platforms might only support DR of certain types of hardware. On platforms that support DR of NICs, IPMP can be used to fail over network access transparently, providing uninterrupted network access to the system.

For more information on how IPMP supports DR, refer to IPMP and Dynamic Reconfiguration.

Basic Requirements of IPMP

IPMP is built into Oracle Solaris and does not require any special hardware. Any interface that is supported by Oracle Solaris can be used with IPMP. However, IPMP does impose the following requirements on your network configuration and topology:

IPMP Addressing

You can configure IPMP failure detection on both IPv4 networks and dual-stack, IPv4 and IPv6 networks. Interfaces that are configured with IPMP support two types of addresses: data addresses and test addresses.

Data Addresses

Data addresses are the conventional IPv4 and IPv6 addresses that are assigned to an interface of a NIC at boot time or manually, through the ifconfig command. The standard IPv4 and, if applicable, IPv6 packet traffic through an interface is considered to be data traffic.

Test Addresses

Test addresses are IPMP-specific addresses that are used by the in.mpathd daemon. For an interface to use probe-based failure and repair detection, that interface must be configured with at least one test address.


Note –

You need to configure test addresses only if you want to use probe-based failure detection.


The in.mpathd daemon uses test addresses to exchange ICMP probes, also called probe traffic, with other targets on the IP link. Probe traffic helps to determine the status of the interface and its NIC, including whether an interface has failed. The probes verify that the send and receive path to the interface is working correctly.

Each interface can be configured with an IP test address. For an interface on a dual-stack network, you can configure an IPv4 test address, an IPv6 test address, or both IPv4 and IPv6 test addresses.

After an interface fails, the test addresses remain on the failed interface so that in.mpathd can continue to send probes to check for subsequent repair. You must specifically configure test addresses so that applications do not accidentally use them. For more information, refer to Preventing Applications From Using Test Addresses.

For more information on probe-based failure detection, refer to Probe-Based Failure Detection.

IPv4 Test Addresses

In general, you can use any IPv4 address on your subnet as a test address. IPv4 test addresses do not need to be routable. Because IPv4 addresses are a limited resource for many sites, you might want to use non-routable RFC 1918 private addresses as test addresses. Note that the in.mpathd daemon exchanges ICMP probes only with other hosts on the same subnet as the test address. If you do use RFC 1918-style test addresses, be sure to configure other systems, preferably routers, on the IP link with addresses on the appropriate RFC 1918 subnet. The in.mpathd daemon can then successfully exchange probes with target systems.

The IPMP examples use RFC 1918 addresses from the 192.168.0/24 network as IPv4 test addresses. For more information about RFC 1918 private addresses, refer to RFC 1918, Address Allocation for Private Internets.

To configure IPv4 test addresses, refer to the task How to Configure an IPMP Group With Multiple Interfaces.
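
As an illustration, the following sketch adds an RFC 1918 address to an interface as an IPv4 test address. The deprecated and -failover modifiers mark the address so that it is used only for probe traffic and does not fail over. The interface name and address are placeholders.

# ifconfig hme0 addif 192.168.0.3 netmask + broadcast + deprecated -failover up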

IPv6 Test Addresses

The only valid IPv6 test address is the link-local address of a physical interface. You do not need a separate IPv6 address to serve as an IPMP test address. The IPv6 link-local address is based on the Media Access Control (MAC) address of the interface. Link-local addresses are automatically configured when the interface becomes IPv6-enabled at boot time or when the interface is manually configured through ifconfig.

To identify the link-local address of an interface, run the ifconfig interface command on an IPv6-enabled node. Check the output for the address that begins with the prefix fe80, the link-local prefix. The NOFAILOVER flag in the following ifconfig output indicates that the link-local address fe80::a00:20ff:feb9:17fa/10 of the hme0 interface is used as the test address.


hme0: flags=a000841<UP,RUNNING,MULTICAST,IPv6,NOFAILOVER> mtu 1500 index 2
        	inet6 fe80::a00:20ff:feb9:17fa/10 

For more information on link-local addresses, refer to Link-Local Unicast Address.

When an IPMP group has both IPv4 and IPv6 plumbed on all the group's interfaces, you do not need to configure separate IPv4 test addresses. The in.mpathd daemon can use the IPv6 link-local addresses as test addresses.

To create an IPv6 test address, refer to the task How to Configure an IPMP Group With Multiple Interfaces.
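
For illustration, one possible way to bring up IPv6 on an interface so that its link-local address can serve as the test address is sketched below. The -failover modifier produces the NOFAILOVER flag shown in the previous output; the interface and group names are placeholders.

# ifconfig hme0 inet6 plumb -failover up group test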

Preventing Applications From Using Test Addresses

After you have configured a test address, you need to ensure that this address is not used by applications. Otherwise, if the interface fails, the application is no longer reachable because test addresses do not fail over during the failover operation. To ensure that IP does not choose the test address for normal applications, mark the test address as deprecated.

IPv4 does not use a deprecated address as a source address for any communication, unless an application explicitly binds to the address. The in.mpathd daemon explicitly binds to such an address in order to send and receive probe traffic.

Because IPv6 link-local addresses are usually not present in a name service, DNS and NIS applications do not use link-local addresses for communication. Consequently, you must not mark IPv6 link-local addresses as deprecated.

IPv4 test addresses should not be placed in the DNS and NIS name service tables. In IPv6, link-local addresses are not normally placed in the name service tables.

IPMP Interface Configurations

An IPMP configuration typically consists of two or more physical interfaces on the same system that are attached to the same IP link. These physical interfaces might or might not be on the same NIC. The interfaces are configured as members of the same IPMP group. If the system has additional interfaces on a second IP link, you must configure these interfaces as another IPMP group.

A single interface can be configured in its own IPMP group. The single interface IPMP group has the same behavior as an IPMP group with multiple interfaces. However, failover and failback cannot occur for an IPMP group with only one interface.
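
For example, a single interface might be placed in its own IPMP group simply by assigning it a group name, as in the following sketch. The interface and group names are placeholders, and a test address would be configured in the same way as shown earlier if probe-based failure detection is wanted.

# ifconfig hme0 group v4test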

You can also configure VLANs into an IPMP group by using the same steps that you use to configure a group of IP interfaces. For the procedures, see Configuring IPMP Groups. The requirements that are listed in Basic Requirements of IPMP also apply when you configure VLANs into an IPMP group.


Caution –

The convention that is used to name VLANs might lead to errors when you configure VLANs as an IPMP group. For more details about VLAN names, see VLAN Tags and Physical Points of Attachment in System Administration Guide: IP Services. Consider the example of four VLANs: bge1000, bge1001, bge2000, and bge2001. The IPMP implementation requires these VLANs to be grouped as follows: bge1000 and bge1001 belong to one group on VLAN 1, while bge2000 and bge2001 belong to another group on VLAN 2. Because of the VLAN naming convention, errors such as mixing VLANs that belong to different links into the same IPMP group, for example bge1000 and bge2000, can easily occur.


Standby Interfaces in an IPMP Group

The standby interface in an IPMP group is not used for data traffic unless some other interface in the group fails. When a failure occurs, the data addresses on the failed interface migrate to the standby interface. Then, the standby interface is treated the same as other active interfaces until the failed interface is repaired. A failover does not always choose the standby interface. Instead, the failover might choose an active interface that has fewer UP data addresses configured on it than the standby interface has.

You should configure only test addresses on a standby interface. IPMP does not permit you to add a data address to an interface that is configured through the ifconfig command as standby. Any attempt to create this type of configuration will fail. Similarly, if you configure as standby an interface that already has data addresses, these addresses automatically fail over to another interface in the IPMP group. Due to these restrictions, you must use the ifconfig command to mark any test addresses as deprecated and -failover prior to setting the interface as standby. To configure standby interfaces, refer to How to Configure a Standby Interface for an IPMP Group.
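
As an illustration only, the following sketch configures an interface as a standby with a single test address. The test address is marked deprecated and -failover before the standby modifier takes effect; the interface name, addresses, and group name are placeholders.

# ifconfig hme1 plumb 192.168.85.22 netmask + broadcast + deprecated group test -failover standby up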

Common IPMP Interface Configurations

As mentioned in IPMP Addressing, interfaces in an IPMP group handle regular data traffic and probe traffic, depending on the interfaces' configuration. You use IPMP options of the ifconfig command to create the configuration.

An active interface is a physical interface that transmits both data traffic and probe traffic. You configure the interface as “active” by performing either the task How to Configure an IPMP Group With Multiple Interfaces or the task How to Configure a Single Interface IPMP Group.

The following are two common types of IPMP configurations:

Active-active configuration

A two-interface IPMP group in which both interfaces are “active,” that is, they might transmit both probe and data traffic at all times.

Active-standby configuration

A two-interface IPMP group in which one interface is configured as “standby.”

Checking the Status of an Interface

You can check the status of an interface by issuing the ifconfig interface command. For general information on ifconfig status reporting, refer to How to Get Information About a Specific Interface.

For example, you can use the ifconfig command to obtain the status of a standby interface. When the standby interface is not hosting any data address, the interface has the INACTIVE flag for its status. You can observe this flag in the status lines for the interface in the ifconfig output.
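
For example, the status of a standby interface that is not currently hosting data addresses might look similar to the following trimmed output. The interface name, addresses, and the elided flags value are illustrative only.

# ifconfig hme1
hme1: flags=...<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 3
        inet 192.168.85.22 netmask ffffff00 broadcast 192.168.85.255
        groupname test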

IPMP Failure Detection and Recovery Features

The in.mpathd daemon handles the following types of failure detection:

Link-based failure detection, if supported by the NIC driver
Probe-based failure detection, when test addresses are configured

The in.mpathd(1M) man page completely describes how the in.mpathd daemon handles the detection of interface failures.

Link-Based Failure Detection

Link-based failure detection is always enabled, provided that the interface supports this type of failure detection. The following Sun network drivers are supported in the current release of Oracle Solaris:

To determine whether a third-party interface supports link-based failure detection, refer to the manufacturer's documentation.

These network interface drivers monitor the interface's link state and notify the networking subsystem when that link state changes. When notified of a change, the networking subsystem either sets or clears the RUNNING flag for that interface, as appropriate. When the daemon detects that the interface's RUNNING flag has been cleared, the daemon immediately fails the interface.

Probe-Based Failure Detection

The in.mpathd daemon performs probe-based failure detection on each interface in the IPMP group that has a test address. Probe-based failure detection involves the sending and receiving of ICMP probe messages that use test addresses. These messages go out over the interface to one or more target systems on the same IP link. For an introduction to test addresses, refer to Test Addresses. For information on configuring test addresses, refer to How to Configure an IPMP Group With Multiple Interfaces.

The in.mpathd daemon determines which target systems to probe dynamically. Routers that are connected to the IP link are automatically selected as targets for probing. If no routers exist on the link, in.mpathd sends probes to neighbor hosts on the link. A multicast packet that is sent to the all-hosts multicast address, 224.0.0.1 in IPv4 and ff02::1 in IPv6, determines which hosts to use as target systems. The first few hosts that respond to the echo packets are chosen as targets for probing. If in.mpathd cannot find routers or hosts that respond to the ICMP echo packets, in.mpathd cannot detect probe-based failures.

You can use host routes to explicitly configure a list of target systems to be used by in.mpathd. For instructions, refer to Configuring Target Systems.
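
For illustration, a target system might be added explicitly with a static host route similar to the following sketch, where the address is a placeholder for a system on the same IP link as the IPMP group.

# route add -host 192.168.85.137 192.168.85.137 -static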

To ensure that each interface in the IPMP group functions properly, in.mpathd probes all the targets separately through all the interfaces in the IPMP group. If no replies are made in response to five consecutive probes, in.mpathd considers the interface to have failed. The probing rate depends on the failure detection time (FDT). The default value for failure detection time is 10 seconds. However, you can tune the failure detection time in the /etc/default/mpathd file. For instructions, go to How to Configure the /etc/default/mpathd File.

For a failure detection time of 10 seconds, the probing rate is approximately one probe every two seconds. The minimum repair detection time is twice the failure detection time, 20 seconds by default, because replies to 10 consecutive probes must be received. The failure and repair detection times apply only to probe-based failure detection.
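
The failure detection time is specified in milliseconds in /etc/default/mpathd. As a sketch, the file's default settings resemble the following; consult the in.mpathd(1M) man page for the authoritative list of parameters. After you edit the file, in.mpathd must reread its configuration, for example by sending the daemon a HUP signal.

FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes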


Note –

In an IPMP group that is composed of VLANs, link-based failure detection is implemented per physical link and thus affects all VLANs on that link. Probe-based failure detection is performed per VLAN link. For example, suppose bge0 and bge1 form one group, and bge1000 and bge1001 form another group. If the cable for bge0 is unplugged, then link-based failure detection reports both bge0 and bge1000 as having instantly failed. However, if all of the probe targets on bge0 become unreachable, only bge0 is reported as failed, because bge1000 has its own probe targets on its own VLAN.


Group Failures

A group failure occurs when all interfaces in an IPMP group appear to fail at the same time. The in.mpathd daemon does not perform failovers for a group failure. Also, no failover occurs when all the target systems fail at the same time. In this instance, in.mpathd flushes all of its current target systems and discovers new target systems.

Detecting Physical Interface Repairs

For the in.mpathd daemon to consider an interface to be repaired, the RUNNING flag must be set for the interface. If probe-based failure detection is used, the in.mpathd daemon must receive responses to 10 consecutive probe packets from the interface before that interface is considered repaired. When an interface is considered repaired, any addresses that failed over to another interface then fail back to the repaired interface. If the interface was configured as “active” before it failed, after repair that interface can resume sending and receiving traffic.

What Happens During Interface Failover

The following two examples show a typical configuration and how that configuration automatically changes when an interface fails. When the hme0 interface fails, notice that all data addresses move from hme0 to hme1.


Example 30–1 Interface Configuration Before an Interface Failure


hme0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> 
     mtu 1500 index 2
     inet 192.168.85.19 netmask ffffff00 broadcast 192.168.85.255
     groupname test
hme0:1: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> 
     mtu 1500 
     index 2 inet 192.168.85.21 netmask ffffff00 broadcast 192.168.85.255
hme1: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
     inet 192.168.85.20 netmask ffffff00 broadcast 192.168.85.255
     groupname test
hme1:1: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> 
     mtu 1500 
     index 2 inet 192.168.85.22 netmask ffffff00 broadcast 192.168.85.255
hme0: flags=a000841<UP,RUNNING,MULTICAST,IPv6,NOFAILOVER> mtu 1500 index 2
     inet6 fe80::a00:20ff:feb9:19fa/10
     groupname test
hme1: flags=a000841<UP,RUNNING,MULTICAST,IPv6,NOFAILOVER> mtu 1500 index 2
     inet6 fe80::a00:20ff:feb9:1bfc/10
     groupname test


Example 30–2 Interface Configuration After an Interface Failure


hme0: flags=19000842<BROADCAST,RUNNING,MULTICAST,IPv4,
      NOFAILOVER,FAILED> mtu 0 index 2
      inet 0.0.0.0 netmask 0 
      groupname test
hme0:1: flags=19040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,
      NOFAILOVER,FAILED> mtu 1500 index 2 
      inet 192.168.85.21 netmask ffffff00 broadcast 192.168.85.255
hme1: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
      inet 192.168.85.20 netmask ffffff00 broadcast 192.168.85.255
      groupname test
hme1:1: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,
      NOFAILOVER> mtu 1500 
      index 2 inet 192.168.85.22 netmask ffffff00 broadcast 10.0.0.255
hme1:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
      inet 192.168.85.19 netmask ffffff00 broadcast 192.168.85.255
hme0: flags=a000841<UP,RUNNING,MULTICAST,IPv6,NOFAILOVER,FAILED> mtu 1500 index 2
      inet6 fe80::a00:20ff:feb9:19fa/10 
      groupname test
hme1: flags=a000841<UP,RUNNING,MULTICAST,IPv6,NOFAILOVER> mtu 1500 index 2
      inet6 fe80::a00:20ff:feb9:1bfc/10 
      groupname test

You can see that the FAILED flag is set on hme0 to indicate that this interface has failed. You can also see that the logical interface hme1:2 has been created. hme1:2 now hosts the data address that was originally configured on hme0. The address 192.168.85.19 thus remains accessible, now through hme1.

Multicast memberships that are associated with 192.168.85.19 can still receive packets, but they now receive packets through hme1. When the failover of address 192.168.85.19 from hme0 to hme1 occurred, a dummy address 0.0.0.0 was created on hme0. The dummy address was created so that hme0 can still be accessed. hme0:1 cannot exist without hme0. The dummy address is removed when a subsequent failback takes place.

Similarly, failover of the IPv6 address from hme0 to hme1 occurred. In IPv6, multicast memberships are associated with interface indexes. Multicast memberships also fail over from hme0 to hme1. All the addresses that in.ndpd configured also moved. This action is not shown in the examples.

The in.mpathd daemon continues to probe through the failed interface hme0. After the daemon receives 10 consecutive replies for a default repair detection time of 20 seconds, the daemon determines that the interface is repaired. Because the RUNNING flag is also set on hme0, the daemon invokes the failback. After failback, the original configuration is restored.

For a description of all error messages that are logged on the console during failures and repairs, see the in.mpathd(1M) man page.

IPMP and Dynamic Reconfiguration

The dynamic reconfiguration (DR) feature enables you to reconfigure system hardware, such as interfaces, while the system is running. This section explains how DR interoperates with IPMP.

On a system that supports DR of NICs, IPMP can be used to preserve connectivity and prevent disruption of existing connections. You can safely attach, detach, or reattach NICs on a system that supports DR and uses IPMP. This is possible because IPMP is integrated into the Reconfiguration Coordination Manager (RCM) framework. RCM manages the dynamic reconfiguration of system components.

You typically use the cfgadm command to perform DR operations. However, some platforms provide other methods. Consult your platform's documentation for details. You can find specific documentation about DR from the following resources.

Table 30–1 Documentation Resources for Dynamic Reconfiguration

Description: Detailed information on the cfgadm command
For information: cfgadm(1M) man page

Description: Specific information about DR in the Sun Cluster environment
For information: Sun Cluster 3.1 System Administration Guide

Description: Specific information about DR in the Sun Fire environment
For information: Sun Fire 880 Dynamic Reconfiguration Guide

Description: Introductory information about DR and the cfgadm command
For information: Chapter 6, Dynamically Configuring Devices (Tasks), in System Administration Guide: Devices and File Systems

Description: Tasks for administering IPMP groups on a system that supports DR
For information: Replacing a Failed Physical Interface on Systems That Support Dynamic Reconfiguration

Attaching NICs

You can add interfaces to an IPMP group at any time by using the ifconfig command, as explained in How to Configure an IPMP Group With Multiple Interfaces. Thus, any interfaces on system components that you attach after system boot can be plumbed and added to an existing IPMP group. Or, if appropriate, you can configure the newly added interfaces into their own IPMP group.

These interfaces and the data addresses that are configured on them are immediately available for use by the IPMP group. However, for the system to automatically configure and use the interfaces after a reboot, you must create an /etc/hostname.interface file for each new interface. For instructions, refer to How to Configure a Physical Interface After System Installation.

If an /etc/hostname.interface file already exists when the interface is attached, then RCM automatically configures the interface according to the contents of this file. Thus, the interface receives the same configuration that it would have received after system boot.
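
As a sketch, an /etc/hostname.hme0 file for an interface that is configured with both a data address and a test address in an IPMP group might contain lines similar to the following. The addresses and group name are placeholders.

192.168.85.19 netmask + broadcast + group test up \
     addif 192.168.85.21 deprecated -failover netmask + broadcast + up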

Detaching NICs

All requests to detach system components that contain NICs are first checked to ensure that connectivity can be preserved. For instance, by default you cannot detach a NIC that is not in an IPMP group. You also cannot detach a NIC that contains the only functioning interfaces in an IPMP group. However, if you must remove the system component, you can override this behavior by using the -f option of cfgadm, as explained in the cfgadm(1M) man page.

If the checks are successful, the data addresses associated with the detached NIC fail over to a functioning NIC in the same group, as if the NIC being detached had failed. When the NIC is detached, all test addresses on the NIC's interfaces are unconfigured. Then, the NIC is unplumbed from the system. If any of these steps fail, or if the DR of other hardware on the same system component fails, then the previous configuration is restored to its original state. You should receive a status message regarding this event. Otherwise, the detach request completes successfully. You can remove the component from the system. No existing connections are disrupted.
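
For example, a DR detach is typically initiated with the cfgadm command, as in the following sketch. The attachment point ID is hypothetical and platform specific; add the -f option only if you must override the connectivity checks described above.

# cfgadm -c unconfigure sysctrl0:slot1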

Reattaching NICs

RCM records the configuration information associated with any NICs that are detached from a running system. As a result, RCM treats the reattachment of a previously detached NIC in the same way as the attachment of a new NIC. That is, RCM performs only plumbing.

However, reattached NICs typically have an existing /etc/hostname.interface file. In this case, RCM automatically configures the interface according to the contents of the existing /etc/hostname.interface file. Additionally, RCM informs the in.mpathd daemon of each data address that was originally hosted on the reattached interface. Thus, once the reattached interface is functioning properly, all of its data addresses are failed back to the reattached interface as if it had been repaired.

If the NIC being reattached does not have an /etc/hostname.interface file, then no configuration information is available. RCM has no information regarding how to configure the interface. One consequence of this situation is that addresses that were previously failed over to another interface are not failed back.

NICs That Were Missing at System Boot

NICs that are not present at system boot represent a special instance of failure detection. At boot time, the startup scripts track any interfaces with /etc/hostname.interface files that cannot be plumbed. Any data addresses in such an interface's /etc/hostname.interface file are automatically hosted on an alternative interface in the IPMP group.

In such an event, you receive error messages similar to the following:


moving addresses from failed IPv4 interfaces: hme0 (moved to hme1)
moving addresses from failed IPv6 interfaces: hme0 (moved to hme1)

If no alternative interface exists, you receive error messages similar to the following:


moving addresses from failed IPv4 interfaces: hme0 (couldn't move; 
   no alternative interface) 
 moving addresses from failed IPv6 interfaces: hme0 (couldn't move; 
   no alternative interface) 

Note –

In this instance of failure detection, only data addresses that are explicitly specified in the missing interface's /etc/hostname.interface file move to an alternative interface. Any addresses that are usually acquired through other means, such as through RARP or DHCP, are not acquired or moved.


If an interface with the same name as another interface that was missing at system boot is reattached using DR, RCM automatically plumbs the interface. Then, RCM configures the interface according to the contents of the interface's /etc/hostname.interface file. Finally, RCM fails back any data addresses, just as if the interface had been repaired. Thus, the final network configuration is identical to the configuration that would have been made if the system had been booted with the interface present.