System Administration Guide: Network Interfaces and Network Virtualization

Part II Administering Interface Groups

This part discusses administration of other types of configurations such as virtual local area networks (VLANs), link aggregations, and IP multipathing (IPMP) groups.

Chapter 5 Administering VLANs

This chapter describes procedures to configure and maintain virtual local area networks (VLANs). The procedures include steps that take advantage of features such as support for flexible link names.

Administering Virtual Local Area Networks

A virtual local area network (VLAN) is a subdivision of a local area network at the data link layer of the TCP/IP protocol stack. You can create VLANs for local area networks that use switch technology. By assigning groups of users to VLANs, you can improve network administration and security for the entire local network. You can also assign interfaces on the same system to different VLANs.

Consider dividing your local network into VLANs if you need to do the following:

  • Improve network administration and security by isolating the traffic of different workgroups from each other.

  • Assign interfaces on the same system to different logical networks.

Overview of VLAN Topology

Switched LAN technology enables you to organize the systems on a local network into VLANs. Before you can divide a local network into VLANs, you must obtain switches that support VLAN technology. You can configure all ports on a switch to serve a single VLAN or multiple VLANs, depending on the VLAN topology design. Each switch manufacturer has different procedures for configuring the ports of a switch.

The following figure shows a local area network that has the subnet address 192.168.84.0. This LAN is subdivided into three VLANs, Red, Yellow, and Blue.

Figure 5–1 Local Area Network With Three VLANs


Connectivity on LAN 192.168.84.0 is handled by Switches 1 and 2. The Red VLAN contains systems in the Accounting workgroup. The Human Resources workgroup's systems are on the Yellow VLAN. Systems of the Information Technologies workgroup are assigned to the Blue VLAN.

VLAN Tags and Physical Points of Attachment

Each VLAN in a local area network is identified by a VLAN tag, or VLAN ID (VID). The VID is assigned during VLAN configuration. The VID is a 12-bit identifier between 1 and 4094 that provides a unique identity for each VLAN. In Figure 5–1, the Red VLAN has the VID 789, the Yellow VLAN has the VID 456, and the Blue VLAN has the VID 123.

When you configure switches to support VLANs, you need to assign a VID to each port. The VID on the port must be the same as the VID assigned to the interface that connects to the port, as shown in the following figure.

Figure 5–2 Switch Configuration for a Network with VLANs


Figure 5–2 shows multiple hosts that are connected to different VLANs. Two hosts belong to the same VLAN. In this figure, the primary network interfaces of the three hosts connect to Switch 1. Host A is a member of the Blue VLAN. Therefore, Host A's interface is configured with the VID 123. This interface connects to Port 1 on Switch 1, which is then configured with the VID 123. Host B is a member of the Yellow VLAN with the VID 456. Host B's interface connects to Port 5 on Switch 1, which is configured with the VID 456. Finally, Host C is also a member of the Blue VLAN. Host C's interface connects to Port 9 on Switch 1, so both the interface and Port 9 are configured with the VID 123.

The figure also shows that a single host can also belong to more than one VLAN. For example, Host A has two interfaces. The second interface is configured with the VID 456 and is connected to Port 3 which is also configured with the VID 456. Thus, Host A is a member of both the Blue VLAN and the Yellow VLAN.

Meaningful Names for VLANs

In this Solaris release, you can assign meaningful names to VLAN interfaces. VLAN names consist of a link name and the VLAN ID number (VID), such as sales0. You should assign customized names when you create VLANs. For more information about customized names, see Assigning Names to Data Links. For more information about valid customized names, see Rules for Valid Link Names.

Planning for VLANs on a Network

Use the following procedure to plan for VLANs on your network.

How to Plan a VLAN Configuration

  1. Examine the local network topology and determine where subdivision into VLANs is appropriate.

    For a basic example of such a topology, refer to Figure 5–1.

  2. Create a numbering scheme for the VIDs, and assign a VID to each VLAN.


    Note –

    A VLAN numbering scheme might already exist on the network. If so, you must create VIDs within the existing VLAN numbering scheme.


  3. On each system, determine which interfaces will be members of a particular VLAN.

    1. Determine which interfaces are configured on a system.


      # dladm show-link
      
    2. Identify which VID will be associated with each data link on the system.

    3. Create the VLAN by using the dladm create-vlan command.

  4. Check the connections of the interfaces to the network's switches.

    Note the VID of each interface and the switch port where each interface is connected.

  5. Configure each port of the switch with the same VID as the interface to which it is connected.

    Refer to the switch manufacturer's documentation for configuration instructions.

Configuring VLANs

The following procedure shows how to create and configure a VLAN. In this Solaris release, all Ethernet devices can support VLANs. However, some restrictions exist with certain devices. For these exceptions, refer to VLANs on Legacy Devices.

ProcedureHow to Configure a VLAN

Before You Begin

Data links must already be configured on your system before you can create VLANs. See How to Configure an IP Interface After System Installation.

  1. On the system where you will configure VLANs, assume the Primary Administrator role, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Determine the types of links that are in use in your system.


    # dladm show-link
    
  3. Create a VLAN link over a data-link.


    # dladm create-vlan -l link -v VID vlan-link
    
    link

    Specifies the link on which the VLAN interface is being created.

    VID

    Indicates the VLAN ID number.

    vlan-link

    Specifies the name of the VLAN, which can be an administratively chosen name.

  4. Verify the VLAN configuration.


    # dladm show-vlan
    
  5. Configure an IP interface over the VLAN.


    # ifconfig interface plumb IP-address up
    

    where interface takes the same name as the VLAN.


    Note –

    You can assign IPv4 or IPv6 addresses to the VLAN's IP interface.


  6. (Optional) To make the IP configuration for the VLAN persist across reboots, create an /etc/hostname.interface file to contain the interface's IP address.

    The interface takes the name that you assign to the VLAN.


Example 5–1 Configuring a VLAN

This example configures the VLAN sales over the link subitops0. The VLAN is configured to persist across reboots.


# dladm show-link
LINK        CLASS     MTU     STATE     OVER
subitops0   phys      1500    up        --
ce1         phys      1500    up        --

# dladm create-vlan -l subitops0 -v 7 sales
# dladm show-vlan
LINK       VID     OVER        FLAGS
sales      7       subitops0   ----

When link information is displayed, the VLAN link is included in the list.


# dladm show-link
LINK          CLASS    MTU      STATE     OVER
subitops0     phys     1500     up        --
ce1           phys     1500     up        --
sales         vlan     1500     up        subitops0

# ifconfig sales plumb 10.0.0.3/24 up
# echo 10.0.0.3/24 > /etc/hostname.sales

VLANs on Legacy Devices

Certain legacy devices handle only packets whose maximum frame size is 1514 bytes. Packets whose frame sizes exceed the maximum limit are dropped. For such cases, follow the same procedure listed in How to Configure a VLAN. However, when creating the VLAN, use the -f option to force the creation of the VLAN.

The general steps to perform are as follows, with a combined example sketch after the steps:

  1. Create the VLAN with the -f option.


    # dladm create-vlan -f -l link -v VID [vlan-link]
    
  2. Set a lower size for the maximum transmission unit (MTU), such as 1496 bytes.


    # dladm set-linkprop -p default_mtu=1496 vlan-link
    

    The lower MTU value allows space for the link layer to insert the VLAN header prior to transmission.

  3. Repeat the previous step to set the same lower MTU value on each node in the VLAN.

    For more information about changing link property values, refer to Administering NIC Driver Properties.
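
The following sequence puts these steps together as a minimal sketch. The legacy device name ce1, the VID 10, and the VLAN name legacy0 are illustrative assumptions only; substitute the values from your own configuration.


# dladm create-vlan -f -l ce1 -v 10 legacy0
# dladm set-linkprop -p default_mtu=1496 legacy0

After both commands complete, dladm show-linkprop -p default_mtu legacy0 can confirm the lower MTU value.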

Performing Other Administrative Tasks on VLANs

This section describes the usage of new dladm subcommands for other VLAN tasks. These dladm commands also work with link names.

How to Display VLAN Information

  1. Assume the System Administrator role or become superuser.

    The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.

  2. Display VLAN information.


    # dladm show-vlan [vlan-link]
    

    If you do not specify a VLAN link, the command displays information about all configured VLANs.


Example 5–2 Displaying VLAN Information

The following example shows the available VLANs in a system.


# dladm show-vlan
LINK          VID     OVER        FLAGS
sales         7       subitops0   ----
managers      5       net0        ----

Configured VLANs also appear when you issue the dladm show-link command. In the command output, the VLANs are appropriately identified in the CLASS column.


# dladm show-link
LINK           CLASS     MTU     STATE     OVER
subitops0      phys      1500    up        --
sales          vlan      1500    up        subitops0
net0           phys      1500    up        --
managers       vlan      1500    up        net0

How to Remove a VLAN

  1. Assume the System Administrator role or become superuser.

    The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.

  2. Determine which VLAN you want to remove.


    # dladm show-vlan
    
  3. Unplumb the VLAN's IP interface.


    # ifconfig vlan-interface unplumb
    

    where vlan-interface is the IP interface that is configured over the VLAN.


    Note –

    You cannot remove a VLAN that is currently in use.


  4. Remove the VLAN by performing one of the following steps:

    • To delete the VLAN temporarily, use the -t option as follows:


      # dladm delete-vlan -t vlan
      
    • To make the deletion persist, perform the following:

      1. Remove the VLAN.


        # dladm delete-vlan vlan
        
      2. Remove the /etc/hostname.vlan-interface file.


Example 5–3 Removing a VLAN


# dladm show-vlan
LINK       VID     OVER          FLAGS
sales      5       subitops0     ----
managers   7       net0          ----

# ifconfig managers unplumb
# dladm delete-vlan managers
# rm /etc/hostname.managers

Chapter 6 Administering Link Aggregations

This chapter describes procedures to configure and maintain link aggregations. The procedures include steps that take advantage of new features such as support for flexible link names.

Overview of Link Aggregations

The Solaris OS supports the organization of network interfaces into link aggregations. A link aggregation consists of several interfaces on a system that are configured together as a single, logical unit. Link aggregation, also referred to as trunking, is defined in the IEEE 802.3ad Link Aggregation Standard.

The IEEE 802.3ad Link Aggregation Standard provides a method to combine the capacity of multiple full-duplex Ethernet links into a single logical link. This link aggregation group is then treated as though it were, in fact, a single link.

The following are features of link aggregations:

  • Increased bandwidth: the capacity of multiple full-duplex links is combined into a single logical link.

  • Automatic failover and failback of traffic among the links in the aggregation, which supports high availability.

  • Simplified administration: the aggregation is configured and addressed as a single unit.

Link Aggregation Basics

The basic link aggregation topology involves a single aggregation that contains a set of physical interfaces. You might use a basic link aggregation in situations such as the examples that follow.

Figure 6–1 shows an aggregation for a server that hosts a popular web site. The site requires increased bandwidth for query traffic between Internet customers and the site's database server. For security purposes, the existence of the individual interfaces on the server must be hidden from external applications. The solution is the aggregation aggr1 with the IP address 192.168.50.32. This aggregation consists of three interfaces, bge0 through bge2. These interfaces are dedicated to sending out traffic in response to customer queries. The outgoing address on packet traffic from all the interfaces is the IP address of aggr1, 192.168.50.32.

Figure 6–1 Basic Link Aggregation Topology

The figure shows a block for the link aggr1. Three physical
interfaces, bge0–bge2, descend from the link block.

Figure 6–2 depicts a local network with two systems, and each system has an aggregation configured. The two systems are connected by a switch. If you need to run an aggregation through a switch, that switch must support aggregation technology. This type of configuration is particularly useful for high availability and redundant systems.

In the figure, System A has an aggregation that consists of two interfaces, bge0 and bge1. These interfaces are connected to the switch through aggregated ports. System B has an aggregation of four interfaces, e1000g0 through e1000g3. These interfaces are also connected to aggregated ports on the switch.

Figure 6–2 Link Aggregation Topology With a Switch


Back-to-Back Link Aggregations

The back-to-back link aggregation topology involves two separate systems that are cabled directly to each other, as shown in the following figure. The systems run parallel aggregations.

Figure 6–3 Basic Back-to-Back Aggregation Topology


In this figure, device bge0 on System A is directly linked to bge0 on System B, and so on. In this way, Systems A and B can support redundancy and high availability, as well as high-speed communications between both systems. Each system also has interface ce0 configured for traffic flow within the local network.

The most common application for back-to-back link aggregations, typically found in data centers, is mirrored database servers. Both servers need to be updated together and therefore require significant bandwidth, high-speed traffic flow, and reliability.

Policies and Load Balancing

If you plan to use a link aggregation, consider defining a policy for outgoing traffic. This policy can specify how you want packets to be distributed across the available links of an aggregation, thus establishing load balancing. The following are the possible layer specifiers and their significance for the aggregation policy:

L2

  Determines the outgoing link by hashing the MAC (L2) header of each packet.

L3

  Determines the outgoing link by hashing the IP (L3) header of each packet.

L4

  Determines the outgoing link by hashing the TCP, UDP, or other ULP (L4) header of each packet.

Any combination of these policies is also valid. The default policy is L4. For more information, refer to the dladm(1M) man page.
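
For example, the following hypothetical command creates an aggregation whose outgoing traffic is distributed by hashing both the IP and transport headers. The link names net0 and net1 and the aggregation name aggr0 are placeholders:


# dladm create-aggr -P L3,L4 -l net0 -l net1 aggr0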

Aggregation Mode and Switches

If your aggregation topology involves connection through a switch, you must note whether the switch supports the link aggregation control protocol (LACP). If the switch supports LACP, you must configure LACP for the switch and the aggregation. However, you can define one of the following modes in which LACP is to operate:

off mode

  The default mode for aggregations. LACP packets, which are called LACPDUs, are not generated.

active mode

  The system generates LACPDUs at regular intervals, which you can specify.

passive mode

  The system generates an LACPDU only when it receives an LACPDU from the switch. When both the aggregation and the switch are configured in passive mode, they cannot exchange LACPDUs.

See the dladm(1M) man page and the switch manufacturer's documentation for syntax information.
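
As an illustration, the following hedged sketch creates an aggregation that runs LACP in active mode with a short timer, which is appropriate when the switch runs LACP in passive mode. The link and aggregation names are placeholders:


# dladm create-aggr -L active -T short -l net0 -l net1 aggr0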

Requirements for Link Aggregations

Your link aggregation configuration is bound by the following requirements:

  • You must use the dladm command to configure aggregations.

  • A link that has an IP interface plumbed over it cannot become a member of an aggregation.

  • All links in the aggregation must run at the same speed and in full-duplex mode.

  • The devices in the aggregation must use the GLDv3 driver framework.

Certain devices do not fulfill the requirement of the IEEE 802.3ad Link Aggregation Standard to support link state notification. This support must exist in order for a port to attach to an aggregation or to detach from an aggregation. Devices that do not support link state notification can be aggregated only by using the -f option of the dladm create-aggr command. For such devices, the link state is always reported as UP. For information about the use of the -f option, see How to Create a Link Aggregation.

Flexible Names for Link Aggregations

You can assign any meaningful, flexible name to a link aggregation. For more information about flexible or customized names, see Assigning Names to Data Links. Previous Solaris releases identify a link aggregation by the value of a key that you assign to the aggregation. For an explanation of this method, see Overview of Link Aggregations. Although that method continues to be valid, using customized names to identify link aggregations is preferred.

Similar to all other data-link configurations, link aggregations are administered with the dladm command.

How to Create a Link Aggregation

Before You Begin

Note –

Link aggregation only works on full-duplex, point-to-point links that operate at identical speeds. Make sure that the interfaces in your aggregation conform to this requirement.


If you are using a switch in your aggregation topology, make sure that you have done the following on the switch:

  • Configured the switch ports that are to be used as an aggregation

  • If the switch supports LACP, configured LACP in either active mode or passive mode

  1. Assume the Primary Administrator role, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Display the network data-link information.


    # dladm show-link
    
  3. Make sure that the link over which you are creating the aggregation is not opened by any application.

    For example, if the IP interface over the link is plumbed, then unplumb the interface.


    # ifconfig interface unplumb
    

    where interface refers to the IP interface that is plumbed and using the link.

  4. Create a link aggregation.


    # dladm create-aggr [-f] -l link1 -l link2 [...] aggr
    
    -f

    Forces the creation of the aggregation. Use this option when you are attempting to aggregate devices that do not support link state notification.

    linkn

    Specifies the data links that you want to aggregate.

    aggr

    Specifies the name that you want to assign to the aggregation.

  5. Plumb and configure an IP interface over the newly created aggregation.


    # ifconfig interface plumb IP-address up
    

    where interface takes the name of the aggregation.

  6. Check the status of the aggregation you just created.

    The aggregation's state should be UP.


    # dladm show-aggr
    
  7. (Optional) Make the IP configuration of the link aggregation persist across reboots.

    1. Create the /etc/hostname file for the aggregation's interface.

      If the aggregation contains IPv4 addresses, the corresponding hostname file is /etc/hostname.aggr. For IPv6-based link aggregations, the corresponding hostname file is /etc/hostname6.aggr.

    2. Type the IPv4 or IPv6 address of the link aggregation into the file.

    3. Perform a reconfiguration boot.


      # reboot -- -r
      

Example 6–1 Creating a Link Aggregation

This example shows the commands that are used to create a link aggregation with two data links, subvideo0 and subvideo1. The configuration is persistent across system reboots.


# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----

# dladm create-aggr -l subvideo0 -l subvideo1 video0
# ifconfig video0 plumb 10.8.57.50/24 up
# dladm show-aggr
LINK      POLICY  ADDRPOLICY      LACPACTIVITY   LACPTIMER  FLAGS
video0    L4      auto            off            short      -----

# echo 10.8.57.50/24 > /etc/hostname.video0

# reboot -- -r

When you display link information, the link aggregation is included in the list.


# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----
video0        aggr      1500    up        subvideo0, subvideo1

How to Modify an Aggregation

This procedure shows how to make the following changes to an aggregation definition:

  • Modifying the policy for the aggregation

  • Changing the LACP mode and the LACP timer value for the aggregation

  1. Assume the System Administrator role.

    The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.

  2. Modify the policy of the aggregation.


    # dladm modify-aggr -P policy-key aggr
    
    policy-key

    Represents one or more of the policies L2, L3, and L4, as explained in Policies and Load Balancing.

    aggr

    Specifies the aggregation whose policy you want to modify.

  3. Modify the LACP mode of the aggregation.


    # dladm modify-aggr -L LACP-mode -T timer-value aggr
    
    -L LACP-mode

    Indicates the LACP mode in which the aggregation is to run. The values are active, passive, and off. If the switch runs LACP in passive mode, be sure to configure active mode for your aggregation.

    -T timer-value

    Indicates the LACP timer value, either short or long.


Example 6–2 Modifying a Link Aggregation

This example shows how to modify the policy of aggregation video0 to L2 and then turn on active LACP mode.


# dladm modify-aggr -P L2 video0
# dladm modify-aggr -L active -T short video0
# dladm show-aggr
LINK      POLICY  ADDRPOLICY      LACPACTIVITY   LACPTIMER  FLAGS
video0    L2      auto            active         short      -----

How to Add a Link to an Aggregation

  1. Assume the System Administrator role or become superuser.

    The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.

  2. Ensure that the link you want to add has no IP interface that is plumbed over the link.


    # ifconfig interface unplumb
    
  3. Add the link to the aggregation.


    # dladm add-aggr -l link [-l link] [...] aggr
    

    where link represents a data link that you are adding to the aggregation.

  4. After more data links are added, perform any other tasks that are needed to update the link aggregation configuration.

    For example, in the case of a configuration that is illustrated in Figure 6–3, you might need to add or modify cable connections and reconfigure switches to accommodate the additional data links. Refer to the switch documentation to perform any reconfiguration tasks on the switch.


Example 6–3 Adding a Link to an Aggregation

This example shows how to add a link to the aggregation video0.


# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----
video0        aggr      1500    up        subvideo0, subvideo1
net3          phys      1500    unknown   ----

# dladm add-aggr -l net3 video0
# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----
video0        aggr      1500    up        subvideo0, subvideo1, net3
net3          phys      1500    up        ----

How to Remove a Link From an Aggregation

  1. Assume the System Administrator role.

    The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.

  2. Remove a link from the aggregation.


    # dladm remove-aggr -l link aggr-link
    

Example 6–4 Removing a Link From an Aggregation

This example shows how to remove a link from the aggregation video0.


# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----
video0        aggr      1500    up        subvideo0, subvideo1, net3
net3          phys      1500    up        ----

# dladm remove-aggr -l net3 video0
# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----
video0        aggr      1500    up        subvideo0, subvideo1
net3          phys      1500    unknown   ----

How to Delete an Aggregation

  1. Assume the System Administrator role.

    The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.

  2. Unplumb the aggregation.


    # ifconfig aggr unplumb
    
  3. Delete the aggregation.


    # dladm delete-aggr aggr
    
  4. To make the deletion persistent, remove the /etc/hostname.interface file that contains the IP configuration for the link aggregation.


    # rm /etc/hostname.interface
    

Example 6–5 Deleting an Aggregation

This example deletes the aggregation video0. The deletion is persistent.


# ifconfig video0 unplumb
# dladm delete-aggr video0
# rm /etc/hostname.video0

How to Configure VLANs Over a Link Aggregation

In the same manner as configuring VLANs over an interface, you can also create VLANs on a link aggregation. VLANs are described in Chapter 5, Administering VLANs. This section combines configuring VLANs and link aggregations.

Before You Begin

Create the link aggregation first and configure it with a valid IP address. To create link aggregations, refer to How to Create a Link Aggregation.

  1. List the aggregations that are configured in the system.


    # dladm show-link
    
  2. Create a VLAN over the link aggregation.


    # dladm create-vlan -l link -v VID vlan-link
    

    where

    link

    Specifies the link on which the VLAN interface is being created. In this specific case, the link refers to the link aggregation.

    VID

    Indicates the VLAN ID number.

    vlan-link

    Specifies the name of the VLAN, which can be an administratively chosen name.

  3. Repeat Step 2 to create other VLANs over the aggregation.

  4. Configure IP interfaces over the VLANs with valid IP addresses.

  5. To create persistent VLAN configurations, add the IP address information to the corresponding /etc/hostname.interface configuration files.

    The interface takes the name of the VLAN that you assigned.


Example 6–6 Configuring Multiple VLANs Over a Link Aggregation

In this example, two VLANs are configured on a link aggregation. The VLANs are assigned VIDs 193 and 194, respectively.


# dladm show-link
LINK          CLASS     MTU     STATE     OVER
subvideo0     phys      1500    up        ----
subvideo1     phys      1500    up        ----
video0        aggr      1500    up        subvideo0, subvideo1

# dladm create-vlan -l video0 -v 193 salesregion1
# dladm create-vlan -l video0 -v 194 salesregion2

# ifconfig salesregion1 192.168.10.5/24 plumb up
# ifconfig salesregion2 192.168.10.25/24 plumb up

# vi /etc/hostname.salesregion1
192.168.10.5/24

# vi /etc/hostname.salesregion2
192.168.10.25/24

Combining Network Configuration Tasks While Using Customized Names

This section provides an example that combines all the procedures in the previous chapters about configuring links, VLANs, and link aggregations while using customized names. For a description of other networking scenarios that use customized names, see the article at http://www.sun.com/bigadmin/sundocs/articles/vnamingsol.jsp.


Example 6–7 Configuring Links, Aggregations, and VLANs

In this example, a system with four NICs must be configured as a router for eight separate subnets. To attain this objective, eight links are configured, one for each subnet. First, a link aggregation is created on all four NICs. This untagged link serves the default untagged subnet, to which the default route points.

Then VLAN interfaces are configured over the link aggregation for the other subnets. The subnets are named according to a color-coded scheme, and the VLANs are named to correspond to their respective subnets. The final configuration consists of eight links for the eight subnets: one untagged link, and seven tagged VLAN links.

To make the configurations persist across reboots, the same procedures apply as in previous Solaris releases. For example, IP addresses need to be added to configuration files like /etc/inet/ndpd.conf or /etc/hostname.interface. Or, filter rules for the interfaces need to be included in a rules file. These final steps are not included in the example. For these steps, refer to the appropriate chapters in System Administration Guide: IP Services, particularly TCP/IP Administration and DHCP.


# dladm show-link
LINK        CLASS      MTU  STATE    OVER
nge0        phys      1500  up       --
nge1        phys      1500  up       --
e1000g0     phys      1500  up       --
e1000g1     phys      1500  up       --

# dladm show-phys
LINK        MEDIA               STATE      SPEED  DUPLEX   DEVICE
nge0        Ethernet            up        1000Mb  full     nge0
nge1        Ethernet            up        1000Mb  full     nge1
e1000g0     Ethernet            up        1000Mb  full     e1000g0
e1000g1     Ethernet            up        1000Mb  full     e1000g1

# ifconfig nge0 unplumb
# ifconfig nge1 unplumb
# ifconfig e1000g0 unplumb
# ifconfig e1000g1 unplumb

# dladm rename-link nge0 net0
# dladm rename-link nge1 net1
# dladm rename-link e1000g0 net2
# dladm rename-link e1000g1 net3

# dladm show-link
LINK        CLASS      MTU  STATE    OVER
net0        phys      1500  up       --
net1        phys      1500  up       --
net2        phys      1500  up       --
net3        phys      1500  up       --

# dladm show-phys
LINK        MEDIA               STATE      SPEED  DUPLEX   DEVICE
net0        Ethernet            up        1000Mb  full     nge0
net1        Ethernet            up        1000Mb  full     nge1
net2        Ethernet            up        1000Mb  full     e1000g0
net3        Ethernet            up        1000Mb  full     e1000g1

# dladm create-aggr -P L2,L3 -l net0 -l net1 -l net2 -l net3 default0

# dladm show-link
LINK        CLASS      MTU  STATE    OVER
net0        phys      1500  up       --
net1        phys      1500  up       --
net2        phys      1500  up       --
net3        phys      1500  up       --
default0    aggr      1500  up       net0 net1 net2 net3

# dladm create-vlan -v 2 -l default0 orange0
# dladm create-vlan -v 3 -l default0 green0
# dladm create-vlan -v 4 -l default0 blue0
# dladm create-vlan -v 5 -l default0 white0
# dladm create-vlan -v 6 -l default0 yellow0
# dladm create-vlan -v 7 -l default0 red0
# dladm create-vlan -v 8 -l default0 cyan0

# dladm show-link
LINK        CLASS      MTU  STATE    OVER
net0        phys      1500  up       --
net1        phys      1500  up       --
net2        phys      1500  up       --
net3        phys      1500  up       --
default0    aggr      1500  up       net0 net1 net2 net3
orange0     vlan      1500  up       default0
green0      vlan      1500  up       default0
blue0       vlan      1500  up       default0
white0      vlan      1500  up       default0
yellow0     vlan      1500  up       default0
red0        vlan      1500  up       default0
cyan0       vlan      1500  up       default0

# dladm show-vlan
LINK          VID   OVER        FLAGS
orange0         2   default0    -----
green0          3   default0    -----
blue0           4   default0    -----
white0          5   default0    -----
yellow0         6   default0    -----
red0            7   default0    -----
cyan0           8   default0    -----

# ifconfig orange0 plumb ...
# ifconfig green0 plumb ...
# ifconfig blue0 plumb ...
# ifconfig white0 plumb ...
# ifconfig yellow0 plumb ...
# ifconfig red0 plumb ...
# ifconfig cyan0 plumb ...

Chapter 7 Introducing IPMP

IP network multipathing (IPMP) provides physical interface failure detection, transparent network access failover, and packet load spreading for systems with multiple interfaces that are connected to a particular local area network or LAN.

This chapter contains the following information:

  • What's New With IPMP

  • Deploying IPMP

  • How IPMP Works

  • Solaris IPMP Components

  • Types of IPMP Interface Configurations

  • IPMP Addressing

  • Failure and Repair Detection in IPMP


Note –

Throughout the description of IPMP in this chapter and in Chapter 8, Administering IPMP, all references to the term interface specifically mean IP interface. Unless a qualification explicitly indicates a different use of the term, such as a network interface card (NIC), the term always refers to the interface that is configured on the IP layer.


What's New With IPMP

The following features differentiate the current IPMP implementation from the previous implementation:

  • An IPMP IP interface now represents the IPMP group and hosts the group's data addresses.

  • The new ipmpstat utility displays detailed information about IPMP groups, their underlying interfaces, and their data and test addresses.

  • Flexible, customized link names can be used for the IPMP interface and its underlying interfaces.

Deploying IPMP

This section describes various topics about the use of IPMP groups.

Why You Should Use IPMP

Various factors can cause an interface to become unusable: the IP interface can fail, or the interface might be switched offline for hardware maintenance. In such cases, without an IPMP group, the system can no longer be contacted by using any of the IP addresses that are associated with that unusable interface. Additionally, existing connections that use those IP addresses are disrupted.

With IPMP, one or more IP interfaces can be configured into an IPMP group. The group functions like an IP interface with data addresses to send or receive network traffic. If an underlying interface in the group fails, the data addresses are redistributed among the remaining active interfaces in the group. Thus, the group maintains network connectivity despite an interface failure. With IPMP, network connectivity is always available, provided that at least one interface in the group is usable.

Additionally, IPMP improves overall network performance by automatically spreading out outbound network traffic across the set of interfaces in the IPMP group. This process is called outbound load spreading. The system also indirectly controls inbound load spreading by performing source address selection for packets whose IP source address was not specified by the application. However, if an application has explicitly chosen an IP source address, then the system does not vary that source address.

When You Must Use IPMP

The configuration of an IPMP group is determined by your system configurations. Observe the following rules:

  • Multiple IP interfaces on the same system that are attached to the same LAN must be configured into a single IPMP group.

  • The underlying IP interfaces of an IPMP group must not be attached to different LANs.

For example, suppose that a system with three interfaces is connected to two separate LANs. Two IP interfaces link to one LAN while a single IP interface connects to the other. In this case, the two IP interfaces connecting to the first LAN must be configured as an IPMP group, as required by the first rule. In compliance with the second rule, the single IP interface that connects to the second LAN cannot become a member of that IPMP group. No IPMP configuration is required of the single IP interface. However, you can configure the single interface into an IPMP group to monitor the interface's availability. The single-interface IPMP configuration is discussed further in Types of IPMP Interface Configurations.

Consider another case where the link to the first LAN consists of three IP interfaces while the other link consists of two interfaces. This setup requires the configuration of two IPMP groups: a three-interface group that links to the first LAN, and a two-interface group to connect to the second.
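
The following hedged sketch outlines how the two groups in this second case might be created. The interface names, group names, and addresses are placeholders; see Chapter 8, Administering IPMP for the authoritative procedures:


# ifconfig itops0 ipmp
# ifconfig net0 group itops0
# ifconfig net1 group itops0
# ifconfig net2 group itops0
# ifconfig itops1 ipmp
# ifconfig net3 group itops1
# ifconfig net4 group itops1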

Comparing IPMP and Link Aggregation

IPMP and link aggregation are different technologies to achieve improved network performance as well as maintain network availability. In general, you deploy link aggregation to obtain better network performance, while you use IPMP to ensure high availability.

The following table presents a general comparison between IPMP and link aggregation.

Network technology type

  IPMP: Layer 3 (IP layer).
  Link aggregation: Layer 2 (link layer).

Configuration tool

  IPMP: ifconfig.
  Link aggregation: dladm.

Link-based failure detection

  IPMP: Supported.
  Link aggregation: Supported.

Probe-based failure detection

  IPMP: ICMP-based, targeting any defined system in the same IP subnet as the test addresses, across multiple levels of intervening layer-2 switches.
  Link aggregation: Based on the Link Aggregation Control Protocol (LACP), targeting the immediate peer host or switch.

Use of standby interfaces

  IPMP: Supported.
  Link aggregation: Not supported.

Span multiple switches

  IPMP: Supported.
  Link aggregation: Generally not supported; some vendors provide proprietary and non-interoperable solutions to span multiple switches.

Hardware support

  IPMP: Not required.
  Link aggregation: Required. For example, a link aggregation on a system that runs the Solaris OS requires that the corresponding ports on the switches also be aggregated.

Link layer requirements

  IPMP: Broadcast-capable.
  Link aggregation: Ethernet-specific.

Driver framework requirements

  IPMP: None.
  Link aggregation: Must use the GLDv3 framework.

Load spreading support

  IPMP: Present, controlled by the kernel. Inbound load spreading is indirectly affected by source address selection.
  Link aggregation: Finer-grained administrator control over outbound load spreading by using the dladm command. Inbound load spreading is supported.

In link aggregations, incoming traffic is spread over the multiple links that comprise the aggregation. Thus, networking performance is enhanced as more NICs are installed to add links to the aggregation. IPMP's traffic uses the IPMP interface's data addresses as they are bound to the available active interfaces. Thus, for example, if all the data traffic is flowing between only two IP addresses but not necessarily over the same connection, then adding more NICs will not improve performance with IPMP because only two IP addresses remain usable.

The two technologies complement each other and can be deployed together to provide the combined benefits of network performance and availability. For example, except where proprietary solutions are provided by certain vendors, link aggregations currently cannot span multiple switches. Thus, a switch becomes a single point of failure for a link aggregation between the switch and a host. If the switch fails, the link aggregation is likewise lost, and network performance declines. IPMP groups do not face this switch limitation. Thus, in the scenario of a LAN using multiple switches, link aggregations that connect to their respective switches can be combined into an IPMP group on the host. With this configuration, both enhanced network performance as well as high availability are obtained. If a switch fails, the data addresses of the link aggregation to that failed switch are redistributed among the remaining link aggregations in the group.
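
As a hedged sketch of the combined configuration, two hypothetical aggregations, each connected to its own switch, could be placed into one IPMP group. All link, aggregation, group names, and addresses below are placeholders:


# dladm create-aggr -l net0 -l net1 aggr0
# dladm create-aggr -l net2 -l net3 aggr1
# ifconfig aggr0 plumb
# ifconfig aggr1 plumb
# ifconfig ipmp0 ipmp 192.168.10.10/24 up
# ifconfig aggr0 group ipmp0
# ifconfig aggr1 group ipmp0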

For other information about link aggregations, see Chapter 6, Administering Link Aggregations.

Using Flexible Link Names on IPMP Configuration

With support for customized link names, link configuration is no longer bound to the physical NIC with which the link is associated. Using customized link names gives you greater flexibility in administering IP interfaces, and this flexibility extends to IPMP administration as well. In certain cases, the failure of an underlying interface in an IPMP group can be resolved only by replacing the physical NIC. The replacement NIC, provided it is the same type as the failed NIC, can be renamed to inherit the configuration of the failed NIC. You do not have to create new configurations for the new NIC before you can add it to the IPMP group. After you rename the new NIC's link with the link name of the replaced NIC, the new NIC automatically becomes a member of the IPMP group when you bring that NIC online. The multipathing daemon then deploys the interface according to the IPMP configuration of active and standby interfaces.

Therefore, to optimize your networking configuration and facilitate IPMP administration, you must employ flexible link names for your interfaces by assigning them generic names. In the following section How IPMP Works, all the examples use flexible link names for the IPMP group and its underlying interfaces. For details about the processes behind NIC replacements in a networking environment that uses customized link names, refer to IPMP and Dynamic Reconfiguration. For an overview of the networking stack and the use of customized link names, refer to Overview of the Networking Stack.
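
For example, suppose the failed NIC's link was named net0. After you physically replace the NIC with one of the same type, the new device might appear with a hypothetical default link name such as nge2. Renaming its link lets it inherit the failed NIC's IPMP configuration:


# dladm rename-link nge2 net0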

How IPMP Works

IPMP maintains network availability by attempting to preserve the number of active and standby interfaces that was originally configured when the group was created.

IPMP uses link-based failure detection, probe-based failure detection, or both to determine the availability of a specific underlying IP interface in the group. If IPMP determines that an underlying interface has failed, then that interface is flagged as failed and is no longer usable. The data IP address that was associated with the failed interface is then redistributed to another functioning interface in the group. If available, a standby interface is also deployed to maintain the original number of active interfaces.

Consider a three-interface IPMP group itops0 with an active-standby configuration, as illustrated in Figure 7–1.

Figure 7–1 IPMP Active–Standby Configuration

An active-standby configuration of itops0

The group itops0 is configured as follows:

  • Two underlying interfaces, subitops0 and subitops1, are configured as active interfaces.

  • One underlying interface, subitops2, is configured as a standby interface.

  • The group's data addresses are spread across the active interfaces.


Note –

The Active, Offline, Reserve, and Failed areas in the figures indicate only the status of underlying interfaces, not physical locations. No physical movement of interfaces or addresses, nor any transfer of IP interfaces, occurs within this IPMP implementation. The areas serve only to show how an underlying interface changes status as a result of either failure or repair.


You can use the ipmpstat command with different options to display specific types of information about existing IPMP groups. For additional examples, see Monitoring IPMP Information.

The IPMP configuration in Figure 7–1 can be displayed by using the following ipmpstat command:


# ipmpstat -g
GROUP     GROUPNAME     STATE     FDT        INTERFACES
itops0    itops0        ok        10.00s     subitops1 subitops0 (subitops2)

To display information about the group's underlying interfaces, you would type the following:


# ipmpstat -i
INTERFACE        ACTIVE     GROUP     FLAGS      LINK        PROBE     STATE
subitops0        yes        itops0    -------    up          ok        ok
subitops1        yes        itops0    --mb---    up          ok        ok
subitops2        no         itops0    is-----    up          ok        ok

IPMP maintains network availability by managing the underlying interfaces to preserve the original number of active interfaces. Thus, if subitops0 fails, then subitops2 is deployed to ensure that the group continues to have two active interfaces. The activation of subitops2 is shown in Figure 7–2.

Figure 7–2 Interface Failure in IPMP

Failure of an active interface in the IPMP group


Note –

The one-to-one mapping of data addresses to active interfaces in Figure 7–2 serves only to simplify the illustration. The IP kernel module can assign data addresses randomly, without necessarily adhering to a one-to-one relationship between data addresses and interfaces.


The ipmpstat utility displays the information in Figure 7–2 as follows:


# ipmpstat -i
INTERFACE        ACTIVE     GROUP     FLAGS      LINK        PROBE     STATE
subitops0        no         itops0    -------    up          failed    failed
subitops1        yes        itops0    --mb---    up          ok        ok
subitops2        yes        itops0    -s-----    up          ok        ok

After subitops0 is repaired, it reverts to its status as an active interface. In turn, subitops2 returns to its original standby status.

A different failure scenario is shown in Figure 7–3, where the standby interface subitops2 fails (1), and later, one active interface, subitops1, is switched offline by the administrator (2). The result is that the IPMP group is left with a single functioning interface, subitops0.

Figure 7–3 Standby Interface Failure in IPMP

Failure of a standby interface in the IPMP group

The ipmpstat utility would display the information illustrated by Figure 7–3 as follows:


# ipmpstat -i
INTERFACE        ACTIVE     GROUP     FLAGS       LINK        PROBE     STATE
subitops0        yes        itops0    -------     up          ok        ok
subitops1        no         itops0    --mb-d-     up          ok        offline
subitops2        no         itops0    is-----     up          failed    failed

For this particular failure, the recovery after an interface is repaired behaves differently. The restoration depends on the IPMP group's original number of active interfaces compared with the configuration after the repair. The recovery process is represented graphically in Figure 7–4.

Figure 7–4 IPMP Recovery Process

IPMP Recovery Process

In Figure 7–4, when subitops2 is repaired, it would normally revert to its original status as a standby interface (1). However, the IPMP group still would not reflect the original number of two active interfaces, because subitops1 continues to remain offline (2). Thus, IPMP deploys subitops2 as an active interface instead (3).

The ipmpstat utility would display the post-repair IPMP scenario as follows:


# ipmpstat -i
INTERFACE        ACTIVE     GROUP     FLAGS       LINK        PROBE     STATE
subitops0        yes        itops0    -------     up          ok        ok
subitops1        no         itops0    --mb-d-     up          ok        offline
subitops2        yes        itops0    -s-----     up          ok        ok

A similar restore sequence occurs if the failure involves an active interface that is also configured in FAILBACK=no mode, where a failed active interface does not automatically revert to active status upon repair. Suppose subitops0 in Figure 7–2 is configured in FAILBACK=no mode. In that mode, a repaired subitops0 is switched to a reserve status as a standby interface, even though it was originally an active interface. The interface subitops2 would remain active to maintain the IPMP group's original number of two active interfaces. The ipmpstat utility would display the recovery information as follows:


# ipmpstat -i
INTERFACE        ACTIVE     GROUP     FLAGS      LINK        PROBE     STATE
subitops0        no         itops0    i------    up          ok        ok
subitops1        yes        itops0    --mb---    up          ok        ok
subitops2        yes        itops0    -s-----    up          ok        ok

For more information about this type of configuration, see The FAILBACK=no Mode.

Solaris IPMP Components

Solaris IPMP involves the following software:

  • The multipathing daemon in.mpathd

  • The IP kernel module

  • The IPMP configuration file /etc/default/mpathd

  • The ipmpstat utility

The multipathing daemon in.mpathd detects interface failures and repairs. The daemon performs both link-based failure detection and probe-based failure detection if test addresses are configured for the underlying interfaces. Depending on the type of failure detection method that is employed, the daemon sets or clears the appropriate flags on the interface to indicate whether the interface failed or has been repaired. As an option, the daemon can also be configured to monitor the availability of all interfaces, including those that are not configured to belong to an IPMP group. For a description of failure detection, see Failure and Repair Detection in IPMP.

The in.mpathd daemon also controls the designation of active interfaces in the IPMP group. The daemon attempts to maintain the same number of active interfaces that was originally configured when the IPMP group was created. Thus in.mpathd activates or deactivates underlying interfaces as needed to be consistent with the administrator's configured policy. For more information about the manner by which the in.mpathd daemon manages activation of underlying interfaces, refer to How IPMP Works. For more information about the daemon, refer to the in.mpathd(1M) man page.

The IP kernel module manages outbound load-spreading by distributing the set of available IP data addresses in the group across the set of available underlying IP interfaces in the group. The module also performs source address selection to manage inbound load-spreading. Both roles of the IP module improve network traffic performance.

The IPMP configuration file /etc/default/mpathd is used to configure the daemon's behavior. For example, you can specify how the daemon performs probe-based failure detection by setting the failure detection time or by designating which interfaces to probe. You can also specify what the status of a failed interface should be after that interface is repaired. In addition, you can set parameters in this file to specify whether the daemon should monitor all IP interfaces in the system, not only those that are configured to belong to IPMP groups. For procedures to modify the configuration file, refer to How to Configure the Behavior of the IPMP Daemon.
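
For reference, a typical /etc/default/mpathd file resembles the following sketch. The values shown are the documented defaults; FAILURE_DETECTION_TIME is expressed in milliseconds:


FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes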

The ipmpstat utility provides different types of information about the status of IPMP as a whole. The tool also displays other specific information about the underlying IP interfaces for each group, as well as data and test addresses that have been configured for the group. For more information about the use of this command, see Monitoring IPMP Information and the ipmpstat(1M) man page.

Types of IPMP Interface Configurations

An IPMP configuration typically consists of two or more physical interfaces on the same system that are attached to the same LAN. These interfaces can belong to an IPMP group in either of the following configurations:

  • Active-active configuration, in which all underlying interfaces in the group are active.

  • Active-standby configuration, in which at least one underlying interface is administratively reserved as a standby, as illustrated in Figure 7–1.

A single interface can also be configured in its own IPMP group. The single-interface IPMP group has the same behavior as an IPMP group with multiple interfaces. However, this IPMP configuration does not provide high availability for network traffic. If the underlying interface fails, then the system loses all capability to send or receive traffic. The purpose of configuring a single-interface IPMP group is to monitor the availability of the interface by using failure detection. By configuring a test address on the interface, you can set the daemon to track the interface by using probe-based failure detection. Typically, a single-interface IPMP group configuration is used in conjunction with other technologies that have broader failover capabilities, such as the Sun Cluster software. The system can continue to monitor the status of the underlying interface, but the Sun Cluster software provides the functionality to ensure availability of the network when a failure occurs. For more information about the Sun Cluster software, see Sun Cluster Overview for Solaris OS.

An IPMP group without underlying interfaces can also exist, such as a group whose underlying interfaces have been removed. The IPMP group is not destroyed, but the group cannot be used to send and receive traffic. As underlying IP interfaces are brought online for the group, then the data addresses of the IPMP interface are allocated to these interfaces and the system resumes hosting network traffic.

IPMP Addressing

You can configure IPMP failure detection on both IPv4 networks and dual-stack, IPv4 and IPv6 networks. Interfaces that are configured with IPMP support two types of addresses:

  • Data addresses, which carry the application traffic that is sent and received through the IPMP interface.

  • Test addresses, which the in.mpathd daemon uses to exchange ICMP probes for probe-based failure detection. Test addresses are marked with the NOFAILOVER flag.

IPv4 Test Addresses

In general, you can use any IPv4 address on your subnet as a test address. IPv4 test addresses do not need to be routeable. Because IPv4 addresses are a limited resource for many sites, you might want to use non-routeable RFC 1918 private addresses as test addresses. Note that the in.mpathd daemon exchanges only ICMP probes with other hosts on the same subnet as the test address. If you do use RFC 1918-style test addresses, be sure to configure other systems, preferably routers, on the network with addresses on the appropriate RFC 1918 subnet. The in.mpathd daemon can then successfully exchange probes with target systems. For more information about RFC 1918 private addresses, refer to RFC 1918, Address Allocation for Private Internets.
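
As a hedged illustration, an RFC 1918 test address could be added to a hypothetical underlying interface net0 as a logical interface. The -failover option marks the address with the NOFAILOVER flag so that the address is used only for probes:


# ifconfig net0 addif 192.168.85.21/24 -failover up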

IPv6 Test Addresses

The only valid IPv6 test address is the link-local address of a physical interface. You do not need a separate IPv6 address to serve as an IPMP test address. The IPv6 link-local address is based on the Media Access Control (MAC) address of the interface. Link-local addresses are automatically configured when the interface becomes IPv6-enabled at boot time or when the interface is manually configured through ifconfig. Just like IPv4 test addresses, IPv6 test addresses must be configured with the NOFAILOVER flag.

For more information on link-local addresses, refer to Link-Local Unicast Address in System Administration Guide: IP Services.

When an IPMP group has both IPv4 and IPv6 plumbed on all the group's interfaces, you do not need to configure separate IPv4 test addresses. The in.mpathd daemon can use the IPv6 link-local addresses with the NOFAILOVER flag as test addresses.

Failure and Repair Detection in IPMP

To ensure continuous availability of the network to send or receive traffic, IPMP performs failure detection on the IPMP group's underlying IP interfaces. Failed interfaces remain unusable until they are repaired. The remaining active interfaces continue to function while any existing standby interfaces are deployed as needed.

A group failure occurs when all interfaces in an IPMP group appear to fail at the same time. In this case, no underlying interface is usable. Also, when all the target systems fail at the same time and probe-based failure detection is enabled, the in.mpathd daemon flushes all of its current target systems and probes for new target systems.

Types of Failure Detection in IPMP

The in.mpathd daemon handles the following types of failure detection:

  • Link-based failure detection

  • Probe-based failure detection

Link-Based Failure Detection

Link-based failure detection is always enabled, provided that the interface supports this type of failure detection.

To determine whether a third-party interface supports link-based failure detection, use the ipmpstat -i command. If the output for a given interface includes an unknown status for its LINK column, then that interface does not support link-based failure detection. Refer to the manufacturer's documentation for more specific information about the device.

Network drivers that support link-based failure detection monitor the interface's link state and notify the networking subsystem when that link state changes. When notified of a change, the networking subsystem either sets or clears the RUNNING flag for that interface, as appropriate. If the in.mpathd daemon detects that the interface's RUNNING flag has been cleared, the daemon immediately fails the interface.

Probe-Based Failure Detection

The multipathing daemon performs probe-based failure detection on each interface in the IPMP group that has a test address. Probe-based failure detection involves sending and receiving ICMP probe messages that use test addresses. These messages, also called probe traffic or test traffic, go out over the interface to one or more target systems on the same local network. The daemon probes all the targets separately through all the interfaces that have been configured for probe-based failure detection. If no replies are made in response to five consecutive probes on a given interface, in.mpathd considers the interface to have failed.

The probing rate depends on the failure detection time (FDT). The default value for failure detection time is 10 seconds. However, you can tune the failure detection time in the IPMP configuration file. For instructions, go to How to Configure the Behavior of the IPMP Daemon.

To optimize probe-based failure detection, you must set multiple target systems to receive the probes from the multipathing daemon. By having multiple target systems, you can better determine the nature of a reported failure. For example, the absence of a response from the only defined target system can indicate a failure either in the target system or in one of the IPMP group's interfaces. By contrast, if only one system among several target systems does not respond to a probe, then the failure is likely in the target system rather than in the IPMP group itself.

Repair detection time is twice the failure detection time. Because the default time for failure detection is 10 seconds, the default time for repair detection is 20 seconds. After a failed interface has been repaired and the interface's RUNNING flag is once more detected, in.mpathd clears the interface's FAILED flag. The repaired interface is then redeployed, depending on the number of active interfaces that the administrator originally configured.

The in.mpathd daemon determines which target systems to probe dynamically. First, the daemon searches the routing table for target systems on the same subnet as the test addresses that are associated with the IPMP group's interfaces. If such targets are found, then the daemon uses them as targets for probing. If no target systems are found on the same subnet, then in.mpathd sends multicast packets to probe neighbor hosts on the link. The multicast packet is sent to the all-hosts multicast address, 224.0.0.1 in IPv4 and ff02::1 in IPv6, to determine which hosts to use as target systems. The first five hosts that respond to the echo packets are chosen as targets for probing. If in.mpathd cannot find routers or hosts that respond to the ICMP echo packets, then in.mpathd cannot detect probe-based failures. In this case, the ipmpstat -i utility reports the probe state as unknown.

You can use host routes to explicitly configure a list of target systems to be used by in.mpathd. For instructions, refer to Configuring for Probe-Based Failure Detection.
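For example, to add a persistent host route that designates a router at the hypothetical address 192.168.10.1 as a probe target, you might type:


# route -p add -host 192.168.10.1 192.168.10.1 -static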

NICs That Are Missing at Boot

NICs that are not present at system boot represent a special instance of failure detection. At boot time, the startup scripts track any interfaces with /etc/hostname.interface files. Any data addresses in such an interface's /etc/hostname.interface file are automatically configured on the corresponding IPMP interface for the group. However, if the interfaces themselves cannot be plumbed because they are missing, then error messages similar to the following are displayed:


moving addresses from missing IPv4 interfaces: hme0 (moved to ipmp0)
moving addresses from missing IPv6 interfaces: hme0 (moved to ipmp0)

Note –

In this instance of failure detection, only data addresses that are explicitly specified in the missing interface's /etc/hostname.interface file are moved to the IPMP interface.


If an interface with the same name as another interface that was missing at system boot is reattached using DR, the Reconfiguration Coordination Manager (RCM) automatically plumbs the interface. Then, RCM configures the interface according to the contents of the interface's /etc/hostname.interface file. However, data addresses, which are addresses without the NOFAILOVER flag, in the /etc/hostname.interface file are ignored. This mechanism adheres to the rule that data addresses should be in the /etc/hostname.ipmp-interface file and that only test addresses should be in the underlying interface's /etc/hostname.interface file. Issuing the ifconfig group command causes that interface to again become part of the group. Thus, the final network configuration is identical to the configuration that would have been made if the system had been booted with the interface present.

For more information about missing interfaces, see About Missing Interfaces at System Boot.

Failure Detection and the Anonymous Group Feature

IPMP supports failure detection in an anonymous group. By default, IPMP monitors the status only of interfaces that belong to IPMP groups. However, the IPMP daemon can be configured to also track the status of interfaces that do not belong to any IPMP group. These interfaces are considered to be part of an “anonymous group.” When you issue the ipmpstat -g command, the anonymous group is displayed as double dashes (--). In an anonymous group, an interface's data addresses also function as test addresses. Because these interfaces do not belong to a named IPMP group, these addresses are visible to applications. To enable tracking of interfaces that are not part of an IPMP group, see How to Configure the Behavior of the IPMP Daemon.
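For example, with anonymous group support enabled, hypothetical ipmpstat output might resemble the following, where the interface hme0 does not belong to any named group:


# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
itops0      itops0      ok        10.00s    subitops0 subitops1
--          --          ok        --        hme0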

Detecting Physical Interface Repairs

When an underlying interface fails and probe-based failure detection is used, the in.mpathd daemon continues to probe target systems by using the failed interface's test address. During an interface repair, the restoration proceeds according to the original configuration of the failed interface.

To see a graphical presentation of how IPMP behaves during interface failure and repair, see How IPMP Works.

The FAILBACK=no Mode

By default, active interfaces that have failed and then been repaired automatically return to being active interfaces in the group. This behavior is controlled by the setting of the FAILBACK parameter in the daemon's configuration file. However, even the minor disruption that occurs as data addresses are remapped to repaired interfaces might not be acceptable to some administrators. Such administrators might prefer to allow an activated standby interface to continue as an active interface. IPMP allows administrators to override the default behavior so that an interface does not automatically become active upon repair. These interfaces must be configured in the FAILBACK=no mode. For related procedures, see How to Configure the Behavior of the IPMP Daemon.

When an active interface in FAILBACK=no mode fails and is subsequently repaired, the in.mpathd daemon clears the interface's FAILED flag but also sets the INACTIVE flag. The repaired interface is held in reserve and becomes eligible to take over if another interface in the group fails.


Note –

The FAILBACK=no mode is set for the whole IPMP group. It is not a per-interface tunable parameter.


IPMP and Dynamic Reconfiguration

The dynamic reconfiguration (DR) feature allows you to reconfigure system hardware, such as interfaces, while the system is running. DR can be used only on systems that support this feature.

You typically use the cfgadm command to perform DR operations. However, some platforms provide other methods. Make sure to consult your platform's documentation for details about how to perform DR. For systems that use the Solaris OS, you can find specific documentation about DR in the resources that are listed in Table 7–1. Current information about DR is also available at http://docs.sun.com and can be obtained by searching for the topic “dynamic reconfiguration.”

Table 7–1 Documentation Resources for Dynamic Reconfiguration

  • Detailed information on the cfgadm command: cfgadm(1M) man page

  • Specific information about DR in the Sun Cluster environment: Sun Cluster 3.1 System Administration Guide

  • Specific information about DR in the Sun Fire environment: Sun Fire 880 Dynamic Reconfiguration Guide

  • Introductory information about DR and the cfgadm command: Chapter 6, Dynamically Configuring Devices (Tasks), in System Administration Guide: Devices and File Systems

  • Tasks for administering IPMP groups on a system that supports DR: Recovering an IPMP Configuration With Dynamic Reconfiguration

The sections that follow explain how DR interoperates with IPMP.

On a system that supports DR of NICs, IPMP can be used to preserve connectivity and prevent disruption of existing connections. IPMP is integrated into the Reconfiguration Coordination Manager (RCM) framework. Thus, you can safely attach, detach, or reattach NICs while RCM manages the dynamic reconfiguration of system components.

Attaching New NICs

With DR support, you can attach, plumb, and then add new interfaces to existing IPMP groups. Or, if appropriate, you can configure the newly added interfaces into their own IPMP group. For procedures to configure IPMP groups, refer to Configuring IPMP Groups. After these interfaces have been configured, they are immediately available for use by IPMP. However, to benefit from the advantages of using customized link names, you must assign generic link names to replace the interfaces' hardware-based link names. Then you create corresponding configuration files by using the generic names that you just assigned. For procedures to configure a single interface by using customized link names, refer to How to Configure an IP Interface After System Installation. After you assign a generic link name to an interface, make sure that you always refer to the generic name when you perform any additional configuration on the interface, such as using the interface for IPMP.
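For example, on a system whose dladm command supports link renaming, you might assign a generic name to a newly attached interface whose hardware-based link name is bge1 (the names shown here are illustrative):


# dladm rename-link bge1 net1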

Detaching NICs

All requests to detach system components that contain NICs are first checked to ensure that connectivity can be preserved. For instance, by default you cannot detach a NIC that is not in an IPMP group. You also cannot detach a NIC that contains the only functioning interfaces in an IPMP group. However, if you must remove the system component, you can override this behavior by using the -f option of cfgadm, as explained in the cfgadm(1M) man page.

If the checks are successful, the daemon sets the OFFLINE flag for the interface. All test addresses on the interfaces are unconfigured. Then, the NIC is unplumbed from the system. If any of these steps fail, or if the DR of other hardware on the same system component fails, then the previous configuration is restored to its original state. A status message about this event will be displayed. Otherwise, the detach request completes successfully. You can remove the component from the system. No existing connections are disrupted.
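For example, before detaching a NIC whose underlying interface ce0 belongs to an IPMP group, you might first offline the interface with the if_mpadm command and then unconfigure its attachment point (the attachment point ID shown is hypothetical):


# if_mpadm -d ce0
# cfgadm -c unconfigure pci1:hpc1_slot1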

Replacing NICs

When an underlying interface of an IPMP group fails, a typical solution would be to replace the failed interface by attaching a new NIC. RCM records the configuration information associated with any NIC that is detached from a running system. If you replace a failed NIC with an identical NIC, then RCM automatically configures the interface according to the contents of the existing /etc/hostname.interface file.

For example, suppose you replace a failed bge0 interface with another bge0 interface. The failed bge0 already has a corresponding /etc/hostname.bge0 file. After you attach the replacement bge NIC, RCM plumbs and then configures the bge0 interface by using the information in the /etc/hostname.bge0 file. Thus the interface is properly configured with the test address and is added to the IPMP group according to the contents of the configuration file.

You can replace a failed NIC with a different NIC, provided that both are the same type, such as Ethernet. In this case, RCM plumbs the new interface after it is attached. If you did not use customized link names when you first configured your interfaces, and no corresponding configuration file for the new interface exists, then you must perform additional configuration steps. You need to create a corresponding configuration file for the new NIC and add the correct information to the file before you can add the interface to the IPMP group.

However, if you used customized link names, these additional configuration steps are unnecessary. If you reassign the failed interface's link name to the new interface, the new interface acquires the configuration that is specified in the removed interface's configuration file. RCM then configures the interface by using the information in that file. For procedures to recover your IPMP configuration by using DR when an interface fails, refer to Recovering an IPMP Configuration With Dynamic Reconfiguration.

IPMP Terminology and Concepts

This section introduces terms and concepts that are used throughout the IPMP chapters in this book.

active interface

Refers to an underlying interface that can be used by the system to send or receive data traffic. An interface is active if the following conditions are met:

  • At least one IP address is UP in the interface. See UP address.

  • The FAILED, INACTIVE, or OFFLINE flag is not set on the interface.

  • The interface has not been flagged as having a duplicate hardware address.

Compare to unusable interface, INACTIVE interface.

data address

Refers to an IP address that can be used as the source or destination address for data. Data addresses are part of an IPMP group and can be used to send and receive traffic on any interface in the group. Moreover, the set of data addresses in an IPMP group can be used continuously, provided that one interface in the group is functioning. In previous IPMP implementations, data addresses were hosted on the underlying interfaces of an IPMP group. In the current implementation, data addresses are hosted on the IPMP interface.

DEPRECATED address

Refers to an IP address that cannot be used as the source address for data. Typically, IPMP test addresses are DEPRECATED. However, any address can be marked DEPRECATED to prevent the address from being used as a source address.

dynamic reconfiguration

Refers to a feature that allows you to reconfigure a system while the system is running, with little or no impact on ongoing operations. Not all Sun platforms support DR. Some Sun platforms might only support DR of certain types of hardware. On platforms that support DR of NICs, IPMP can be used for uninterrupted network access to the system during DR.

For more information about how IPMP supports DR, refer to IPMP and Dynamic Reconfiguration.

explicit IPMP interface creation

Applies only to the current IPMP implementation. The term refers to the method of creating an IPMP interface by using the ifconfig ipmp command. Explicit IPMP interface creation is the preferred method for creating IPMP groups. This method allows the IPMP interface name and IPMP group name to be set by the administrator.

Compare to implicit IPMP interface creation.

FAILBACK=no mode

Refers to a setting of an underlying interface that minimizes rebinding of incoming addresses to interfaces by avoiding redistribution during interface repair. Specifically, when an interface repair is detected, the interface's FAILED flag is cleared. However, if the mode of the repaired interface is FAILBACK=no, then the INACTIVE flag is also set to prevent use of the interface, provided that a second functioning interface also exists. If the second interface in the IPMP group fails, then the INACTIVE interface is eligible to take over. While the concept of failback no longer applies in the current IPMP implementation, the name of this mode is preserved for administrative compatibility.

FAILED interface

Indicates an interface that the in.mpathd daemon has determined to be malfunctioning. The determination is achieved by either link-based or probe-based failure detection. The FAILED flag is set on any failed interface.

failure detection

Refers to the process of detecting when a physical interface or the path from an interface to an Internet layer device no longer works. Two forms of failure detection are implemented: link-based failure detection, and probe-based failure detection.

implicit IPMP interface creation

Refers to the method of creating an IPMP interface by using the ifconfig command to place an underlying interface in a nonexistent IPMP group. Implicit IPMP interface creation is supported for backward compatibility with the previous IPMP implementation. Thus, this method does not provide the ability to set the IPMP interface name or IPMP group name.

Compare to explicit IPMP interface creation.

INACTIVE interface

Refers to an interface that is functioning but is not being used according to administrative policy. The INACTIVE flag is set on any INACTIVE interface.

Compare to active interface, unusable interface.

IPMP anonymous group support

Indicates an IPMP feature in which the IPMP daemon tracks the status of all network interfaces in the system, regardless of whether they belong to an IPMP group. However, if the interfaces are not actually in an IPMP group, then the addresses on these interfaces are not available in case of interface failure.

IPMP group

Refers to a set of network interfaces that are treated as interchangeable by the system in order to improve network availability and utilization. Each IPMP group has a set of data addresses that the system can associate with any set of active interfaces in the group. Use of this set of data addresses maintains network availability and improves network utilization. The administrator can select which interfaces to place into an IPMP group. However, all interfaces in the same group must share a common set of properties, such as being attached to the same link and configured with the same set of protocols (for example, IPv4 and IPv6).

IPMP group interface

See IPMP interface.

IPMP group name

Refers to the name of an IPMP group, which can be assigned with the ifconfig group subcommand. All underlying interfaces with the same IPMP group name are defined as part of the same IPMP group. In the current implementation, IPMP group names are de-emphasized in favor of IPMP interface names. Administrators are encouraged to use the same name for both the IPMP interface and the group.

IPMP interface

Applies only to the current IPMP implementation. The term refers to the IP interface that represents a given IPMP group, any or all of the interface's underlying interfaces, and all of the data addresses. In the current IPMP implementation, the IPMP interface is the core component for administering an IPMP group, and is used in routing tables, ARP tables, firewall rules, and so forth.

IPMP interface name

Indicates the name of an IPMP interface. This document uses the naming convention of ipmpN. The system also uses the same naming convention in implicit IPMP interface creation. However, the administrator can choose any name by using explicit IPMP interface creation.

IPMP singleton

Refers to an IPMP configuration that is used by Sun Cluster software that allows a data address to also act as a test address. This configuration applies, for instance, when only one interface belongs to an IPMP group.

link-based failure detection

Specifies a passive form of failure detection, in which the link status of the network card is monitored to determine an interface's status. Link-based failure detection only tests whether the link is up. This type of failure detection is not supported by all network card drivers. Link-based failure detection requires no explicit configuration and provides instantaneous detection of link failures.

Compare to probe-based failure detection.

load spreading

Refers to the process of distributing inbound or outbound traffic over a set of interfaces. Unlike load balancing, load spreading does not guarantee that the load is evenly distributed. With load spreading, higher throughput is achieved. Load spreading occurs only when the network traffic is flowing to multiple destinations that use multiple connections.

Inbound load spreading indicates the process of distributing inbound traffic across the set of interfaces in an IPMP group. Inbound load spreading cannot be controlled directly with IPMP. The process is indirectly manipulated by the source address selection algorithm.

Outbound load spreading refers to the process of distributing outbound traffic across the set of interfaces in an IPMP group. Outbound load spreading is performed on a per-destination basis by the IP module, and is adjusted as necessary depending on the status and members of the interfaces in the IPMP group.

NOFAILOVER address

Applies only to the previous IPMP implementation. Refers to an address that is associated with an underlying interface and thus remains unavailable if the underlying interface fails. All NOFAILOVER addresses have the NOFAILOVER flag set. IPMP test addresses must be designated as NOFAILOVER, while IPMP data addresses must never be designated as NOFAILOVER. The concept of failover does not exist in the current IPMP implementation. However, the term NOFAILOVER remains for administrative compatibility.

OFFLINE interface

Indicates an interface that has been administratively disabled from system use, usually in preparation for being removed from the system. Such interfaces have the OFFLINE flag set. The if_mpadm command can be used to switch an interface to an offline status.

physical interface

See underlying interface.

probe

Refers to an ICMP packet, similar to the packets that are used by the ping command. This probe is used to test the send and receive paths of a given interface. Probe packets are sent by the in.mpathd daemon, if probe-based failure detection is enabled. A probe packet uses an IPMP test address as its source address.

probe-based failure detection

Indicates an active form of failure detection, in which probes are exchanged with probe targets to determine an interface's status. When enabled, probe-based failure detection tests the entire send and receive path of each interface. However, this type of detection requires the administrator to explicitly configure each interface with a test address.

Compare to link-based failure detection.

probe target

Refers to a system on the same link as an interface in an IPMP group. The target is selected by the in.mpathd daemon to help check the status of a given interface by using probe-based failure detection. The probe target can be any host on the link that is capable of sending and receiving ICMP probes. Probe targets are usually routers. Several probe targets are usually used to insulate the failure detection logic from failures of the probe targets themselves.

source address selection

Refers to the process of selecting a data address in the IPMP group as the source address for a particular packet. Source address selection is performed by the system whenever an application has not specifically selected a source address to use. Because each data address is associated with only one hardware address, source address selection indirectly controls inbound load spreading.

STANDBY interface

Indicates an interface that has been administratively configured to be used only when another interface in the group has failed. All STANDBY interfaces will have the STANDBY flag set.

test address

Refers to an IP address that must be used as the source or destination address for probes, and that must not be used as a source or destination address for data traffic. Test addresses are associated with an underlying interface. These addresses are designated as NOFAILOVER so that they remain on the underlying interface even if the interface fails, which facilitates repair detection. Because test addresses are not available upon interface failure, all test addresses must also be designated as DEPRECATED to keep the system from using them as source addresses for data packets.

underlying interface

Specifies an IP interface that is part of an IPMP group and is directly associated with an actual network device. For example, if ce0 and ce1 are placed into IPMP group ipmp0, then ce0 and ce1 comprise the underlying interfaces of ipmp0. In the previous implementation, IPMP groups consist solely of underlying interfaces. However, in the current implementation, these interfaces underlie the IPMP interface (for example, ipmp0) that represents the group, hence the name.

undo-offline operation

Refers to the act of administratively enabling a previously offlined interface to be used by the system. The if_mpadm command can be used to perform an undo-offline operation.

unusable interface

Refers to an underlying interface that cannot be used to send or receive data traffic at all in its current configuration. An unusable interface differs from an INACTIVE interface, which is not currently being used but can be used if an active interface in the group becomes unusable. An interface is unusable if one of the following conditions exists:

  • The interface has no UP address.

  • The FAILED or OFFLINE flag has been set for the interface.

  • The interface has been flagged as having the same hardware address as another interface in the group.

target systems

See probe target.

UP address

Refers to an address that has been made administratively available to the system by setting the UP flag. An address that is not UP is treated as not belonging to the system, and thus is never considered during source address selection.

Chapter 8 Administering IPMP

This chapter provides tasks for administering interface groups with IP network multipathing (IPMP). The following major topics are discussed:

IPMP Administration Task Maps

In this Solaris release, the ipmpstat command is the preferred tool for obtaining information about IPMP groups. In this chapter, the ipmpstat command replaces certain functions of the ifconfig command that were used in previous Solaris releases to provide IPMP information.

For information about the different options for the ipmpstat command, see Monitoring IPMP Information.

The following sections provide links to the tasks in this chapter.

IPMP Group Creation and Configuration (Task Map)

  • Plan an IPMP group. Lists all ancillary information and required tasks before you can configure an IPMP group. For instructions, see How to Plan an IPMP Group.

  • Configure an IPMP group by using DHCP. Provides an alternative method to configure IPMP groups by using DHCP. For instructions, see How to Configure an IPMP Group by Using DHCP.

  • Configure an active-active IPMP group. Configures an IPMP group in which all underlying interfaces are deployed to host network traffic. For instructions, see How to Manually Configure an Active-Active IPMP Group.

  • Configure an active-standby IPMP group. Configures an IPMP group in which one underlying interface is kept inactive as a reserve. For instructions, see How to Manually Configure an Active-Standby IPMP Group.

IPMP Group Maintenance (Task Map)

  • Add an interface to an IPMP group. Configures a new interface as a member of an existing IPMP group. For instructions, see How to Add an Interface to an IPMP Group.

  • Remove an interface from an IPMP group. Removes an interface from an IPMP group. For instructions, see How to Remove an Interface From an IPMP Group.

  • Add IP addresses to or remove IP addresses from an IPMP group. Adds or removes addresses for an IPMP group. For instructions, see How to Add or Remove IP Addresses.

  • Change an interface's IPMP membership. Moves interfaces among IPMP groups. For instructions, see How to Move an Interface From One IPMP Group to Another Group.

  • Delete an IPMP group. Deletes an IPMP group that is no longer needed. For instructions, see How to Delete an IPMP Group.

  • Replace cards that failed. Removes or replaces failed NICs of an IPMP group. For instructions, see How to Replace a Physical Card That Has Failed.

Probe-Based Failure Detection Configuration (Task Map)

  • Manually specify target systems. Identifies and adds systems to be targeted for probe-based failure detection. For instructions, see How to Manually Specify Target Systems for Probe-Based Failure Detection.

  • Configure the behavior of probe-based failure detection. Modifies parameters to determine the behavior of probe-based failure detection. For instructions, see How to Configure the Behavior of the IPMP Daemon.

IPMP Group Monitoring (Task Map)

  • Obtain group information. Displays information about an IPMP group. For instructions, see How to Obtain IPMP Group Information.

  • Obtain data address information. Displays information about the data addresses that are used by an IPMP group. For instructions, see How to Obtain IPMP Data Address Information.

  • Obtain IPMP interface information. Displays information about the underlying interfaces of IPMP interfaces or groups. For instructions, see How to Obtain Information About Underlying IP Interfaces of a Group.

  • Obtain probe target information. Displays information about targets of probe-based failure detection. For instructions, see How to Obtain IPMP Probe Target Information.

  • Obtain probe information. Displays real-time information about ongoing probes in the system. For instructions, see How to Observe IPMP Probes.

  • Customize the information display for monitoring IPMP groups. Determines the IPMP information that is displayed. For instructions, see How to Customize the Output of the ipmpstat Command in a Script.

Configuring IPMP Groups

This section provides procedures that are used to plan and configure IPMP groups.

ProcedureHow to Plan an IPMP Group

The following procedure includes the required planning tasks and information to be gathered prior to configuring an IPMP group. The tasks do not have to be performed in sequence.

  1. Determine the general IPMP configuration that would suit your needs.

    Your IPMP configuration depends on the network requirements for handling the type of traffic that is hosted on your system. IPMP spreads outbound network packets across the IPMP group's interfaces, and thus improves network throughput. However, for a given TCP connection, inbound traffic normally follows only one physical path to minimize the risk of processing out-of-order packets.

    Thus, if your network handles a large volume of outbound traffic, configuring multiple interfaces into an IPMP group can improve network performance. If instead the system hosts heavy inbound traffic, then adding interfaces to the group does not necessarily improve performance through load spreading. However, having multiple interfaces helps to guarantee network availability during interface failure.

  2. For SPARC based systems, verify that each interface in the group has a unique MAC address.

    To configure a unique MAC address for each interface in the system, see SPARC: How to Ensure That the MAC Address of an Interface Is Unique.
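    On many SPARC based systems, the typical approach is to set the local-mac-address? EEPROM variable so that each interface uses its own factory-installed MAC address, as sketched here; see the referenced procedure for the supported steps on your platform:


    # eeprom local-mac-address?=true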

  3. Ensure that the same set of STREAMS modules is pushed and configured on all interfaces in the IPMP group.

    All interfaces in the same group must have the same STREAMS modules configured in the same order.

    1. Check the order of STREAMS modules on all interfaces in the prospective IPMP group.

      You can print a list of STREAMS modules by using the ifconfig interface modlist command. For example, here is the ifconfig output for an hme0 interface:


      # ifconfig hme0 modlist
      	0 arp
      	1 ip
      	2 hme

      Interfaces normally exist as network drivers directly below the IP module, as shown in the output from ifconfig hme0 modlist. They should not require additional configuration.

      However, certain technologies insert themselves as a STREAMS module between the IP module and the network driver. If a STREAMS module is stateful, then unexpected behavior can occur on failover, even if you push the same module onto all of the interfaces in a group. However, you can use stateless STREAMS modules, provided that you push them in the same order on all interfaces in the IPMP group.

    2. Push the modules of an interface in the standard order for the IPMP group.


      ifconfig interface modinsert module-name@position

      For example:

      ifconfig hme0 modinsert vpnmod@3
  4. Use the same IP addressing format on all interfaces of the IPMP group.

    If one interface is configured for IPv4, then all interfaces of the group must be configured for IPv4. For example, if you add IPv6 addressing to one interface, then all interfaces in the IPMP group must be configured for IPv6 support.

  5. Determine the type of failure detection that you want to implement.

    For example, if you want to implement probe-based failure detection, then you must configure test addresses on the underlying interfaces. For related information, see Types of Failure Detection in IPMP.

  6. Ensure that all interfaces in the IPMP group are connected to the same local network.

    For example, you can configure Ethernet switches on the same IP subnet into an IPMP group. You can configure any number of interfaces into an IPMP group.


    Note –

    You can also configure a single interface IPMP group, for example, if your system has only one physical interface. For related information, see Types of IPMP Interface Configurations.


  7. Ensure that the IPMP group does not contain interfaces with different network media types.

    The interfaces that are grouped together should be of the same interface type, as defined in /usr/include/net/if_types.h. For example, you cannot combine Ethernet and Token ring interfaces in an IPMP group. As another example, you cannot combine a Token bus interface with asynchronous transfer mode (ATM) interfaces in the same IPMP group.

  8. For IPMP with ATM interfaces, configure the ATM interfaces in LAN emulation mode.

    IPMP is not supported for interfaces using Classical IP over ATM.

ProcedureHow to Configure an IPMP Group by Using DHCP

In the current IPMP implementation, IPMP groups can be configured with Dynamic Host Configuration Protocol (DHCP) support.

An IPMP group with multiple interfaces can be configured with active-active interfaces or active-standby interfaces. For related information, see Types of IPMP Interface Configurations. The following procedure describes steps to configure an active-standby IPMP group by using DHCP.

Before You Begin

Make sure that IP interfaces that will be in the prospective IPMP group have been correctly configured over the system's network data links. For procedures to configure links and IP interfaces, see Data Link and IP Interface Configuration (Tasks). For information about configuring IPv6 interfaces, see Configuring an IPv6 Interface in System Administration Guide: IP Services.

Additionally, if you are using a SPARC system, configure a unique MAC address for each interface. For procedures, see SPARC: How to Ensure That the MAC Address of an Interface Is Unique.

Finally, if you are using DHCP, make sure that the underlying interfaces have infinite leases. Otherwise, in case of a group failure, the test addresses will expire and the IPMP daemon will then revert to link-based failure detection. Such circumstances would cause the group's failure detection to behave incorrectly during interface recovery. For more information about configuring DHCP, refer to Chapter 12, Planning for DHCP Service (Tasks), in System Administration Guide: IP Services.

  1. On the system on which you want to configure the IPMP group, assume the Primary Administrator role, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Create an IPMP interface.


    # ifconfig ipmp-interface ipmp [group group-name]
    

    Note –

    To configure IPv6 IPMP interfaces, use the same command syntax for configuring IPv6 interfaces by specifying inet6 in the ifconfig command, for example:


    # ifconfig ipmp-interface inet6 ipmp [group group-name]
    

    This note applies to all configuration procedures that involve IPv6 IPMP interfaces.


    ipmp-interface

    Specifies the name of the IPMP interface. You can assign any meaningful name to the IPMP interface. As with any IP interface, the name consists of a string and a number, such as ipmp0.

    group-name

    Specifies the name of the IPMP group. The name can be any name of your choice. Assigning a group name is optional. By default, the name of the IPMP interface also becomes the name of the IPMP group. Preferably, retain this default setting by not using the group-name option.


    Note –

    The syntax in this step uses the preferred explicit method of creating an IPMP group by creating the IPMP interface.

    An alternative method to create an IPMP group is implicit creation, in which you use the syntax ifconfig interface group group-name. In this case, the system creates the lowest available ipmpN to become the group's IPMP interface. For example, if ipmp0 already exists for group acctg, then the syntax ifconfig ce0 group fieldops causes the system to create ipmp1 for group fieldops. All UP data addresses of ce0 are then assigned to ipmp1.

    However, implicit creation of IPMP groups is not encouraged. Support for implicit creation is provided only for compatibility with previous Solaris releases. Explicit creation provides optimal control over the configuration of IPMP interfaces.


  3. Add underlying IP interfaces that will contain test addresses to the IPMP group, including the standby interface.


    # ifconfig interface group group-name -failover [standby] up
    
  4. Have DHCP configure and manage the data addresses on the IPMP interface.

    You need to plumb as many logical IPMP interfaces as there are data addresses, and then have DHCP configure and manage the addresses on these interfaces as well.


    # ifconfig ipmp-interface dhcp start primary
    # ifconfig ipmp-interface:n plumb
    # ifconfig ipmp-interface:n dhcp start
    
  5. Have DHCP manage the test addresses in the underlying interfaces.

    You need to issue the following command for each underlying interface of the IPMP group.


    # ifconfig interface dhcp start
    

Example 8–1 Configuring an IPMP Group With DHCP

This example shows how to configure an active-standby IPMP group with DHCP. The example is based on the configuration that is shown in Figure 7–1.


# ifconfig itops0 ipmp

# ifconfig subitops0 plumb group itops0 -failover up
# ifconfig subitops1 plumb group itops0 -failover up
# ifconfig subitops2 plumb group itops0 -failover standby up

# ifconfig itops0 dhcp start primary
# ifconfig itops0:1 plumb
# ifconfig itops0:1 dhcp start

# ifconfig subitops0 dhcp start
# ifconfig subitops1 dhcp start
# ifconfig subitops2 dhcp start

To make the test address configuration persistent, you would need to type the following commands:


# touch /etc/dhcp.itops0 /etc/dhcp.itops0:1
# touch /etc/dhcp.subitops0 /etc/dhcp.subitops1 /etc/dhcp.subitops2
	
# echo group itops0 -failover up > /etc/hostname.subitops0
# echo group itops0 -failover up > /etc/hostname.subitops1
# echo group itops0 -failover standby up > /etc/hostname.subitops2
# echo ipmp > /etc/hostname.itops0

ProcedureHow to Manually Configure an Active-Active IPMP Group

The following procedure describes steps to manually configure an active-active IPMP group.

Before You Begin

Make sure that IP interfaces that will be in the prospective IPMP group have been correctly configured over the system's network data links. For procedures to configure links and IP interfaces, see Data Link and IP Interface Configuration (Tasks). For information about configuring IPv6 interfaces, see Configuring an IPv6 Interface in System Administration Guide: IP Services.

Additionally, if you are using a SPARC system, configure a unique MAC address for each interface. For procedures, see SPARC: How to Ensure That the MAC Address of an Interface Is Unique.

  1. On the system on which you want to configure the IPMP group, assume the Primary Administrator role, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Create an IPMP interface.


    # ifconfig ipmp-interface ipmp [group group-name]
    
    ipmp-interface

    Specifies the name of the IPMP interface. You can assign any meaningful name to the IPMP interface. As with any IP interface, the name consists of a string and a number, such as ipmp0.

    group-name

    Specifies the name of the IPMP group. The name can be any name of your choice. Any non-null name is valid, provided that the name does not exceed 31 characters. Assigning a group name is optional. By default, the name of the IPMP interface also becomes the name of the IPMP group. Preferably, retain this default setting by not using the group-name option.


    Note –

    The syntax in this step uses the preferred explicit method of creating an IPMP group by creating the IPMP interface.

    An alternative method to create an IPMP group is implicit creation, in which you use the syntax ifconfig interface group group-name. In this case, the system creates the lowest available ipmpN to become the group's IPMP interface. For example, if ipmp0 already exists for group acctg, then the syntax ifconfig ce0 group fieldops causes the system to create ipmp1 for group fieldops. All UP data addresses of ce0 are then assigned to ipmp1.

    However, implicit creation of IPMP groups is not encouraged. Support for implicit creation is provided only for compatibility with previous Solaris releases. Explicit creation provides optimal control over the configuration of IPMP interfaces.


  3. Add underlying IP interfaces to the group.


    # ifconfig ip-interface group group-name
    

    Note –

    In a dual-stack environment, placing the IPv4 instance of an interface under a particular group automatically places the IPv6 instance under the same group as well.


  4. Add data addresses to the IPMP interface.


    # ifconfig ipmp-interface ip-address up
    # ifconfig ipmp-interface addif ip-address up
    

    For additional options that you can use with the ifconfig command while adding addresses, refer to the ifconfig(1M) man page.

  5. Configure test addresses on the underlying interfaces.


    # ifconfig interface -failover ip-address up
    

    Note –

    You need to configure a test address only if you want to use probe-based failure detection on a particular interface.

    All test IP addresses in an IPMP group must use the same network prefix. The test IP addresses must belong to a single IP subnet.


  6. (Optional) Preserve the IPMP group configuration across reboots.

    To configure an IPMP group that persists across system reboots, you would edit the hostname configuration file of the IPMP interface to add data addresses. Then, if you want to use test addresses, you would edit the hostname configuration file of each of the group's underlying IP interfaces that will host a test address. Note that data and test addresses can be IPv4 addresses, IPv6 addresses, or both. Perform the following steps:

    1. Edit the /etc/hostname.ipmp-interface file by adding the following lines:


      ipmp group group-name data-address up
      addif data-address
      ...

      You can add more data addresses on separate addif lines in this file.

    2. Edit the /etc/hostname.interface file of the underlying IP interfaces that contain the test address by adding the following line:


      group group-name -failover test-address up

      Follow this same step to add test addresses to other underlying interfaces of the IPMP group.


      Caution – Caution –

      When adding test address information to the /etc/hostname.interface file, make sure to specify the -failover option before the up keyword. Otherwise, the test IP addresses are treated as data addresses, which would cause problems for system administration. Preferably, set the -failover option before specifying the IP address.


ProcedureHow to Manually Configure an Active-Standby IPMP Group

For more information about standby interfaces, see Types of IPMP Interface Configurations. The following procedure configures an IPMP group where one interface is kept as a reserve. This interface is deployed only when an active interface in the group fails.

  1. On the system on which you want to configure the IPMP group, assume the Primary Administrator role, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Create an IPMP interface.


    # ifconfig ipmp-interface ipmp [group group-name]
    
    ipmp-interface

    Specifies the name of the IPMP interface. You can assign any meaningful name to the IPMP interface. As with any IP interface, the name consists of a string and a number, such as ipmp0.

    group-name

    Specifies the name of the IPMP group. The name can be any name of your choice. Any non-null name is valid, provided that the name does not exceed 31 characters. Assigning a group name is optional. By default, the name of the IPMP interface also becomes the name of the IPMP group. Preferably, retain this default setting by not using the group-name option.


    Note –

    The syntax in this step uses the preferred explicit method of creating an IPMP group by creating the IPMP interface.

    An alternative method to create an IPMP group is implicit creation, in which you use the syntax ifconfig interface group group-name. In this case, the system creates the lowest available ipmpN to become the group's IPMP interface. For example, if ipmp0 already exists for group acctg, then the syntax ifconfig ce0 group fieldops causes the system to create ipmp1 for group fieldops. All UP data addresses of ce0 are then assigned to ipmp1.

    However, implicit creation of IPMP groups is not encouraged. Support for implicit creation is provided only for compatibility with previous Solaris releases. Explicit creation provides optimal control over the configuration of IPMP interfaces.


  3. Add underlying IP interfaces to the group.


    # ifconfig ip-interface group group-name
    

    Note –

    In a dual-stack environment, placing the IPv4 instance of an interface under a particular group automatically places the IPv6 instance under the same group as well.


  4. Add data addresses to the IPMP interface.


    # ifconfig ipmp-interface ip-address up
    # ifconfig ipmp-interface addif ip-address up
    

    For additional options that you can use with the ifconfig command while adding addresses, refer to the ifconfig(1M) man page.

  5. Configure test addresses on the underlying interfaces.

    • To configure a test address on an active interface, use the following command:


      # ifconfig interface -failover ip-address up
      
    • To configure a test address on a designated standby interface, use the following command:


      # ifconfig interface -failover ip-address standby up
      

    Note –

    You need to configure a test address only if you want to use probe-based failure detection on a particular interface.

    All test IP addresses in an IPMP group must use the same network prefix. The test IP addresses must belong to a single IP subnet.


  6. (Optional) Preserve the IPMP group configuration across reboots.

    To configure an IPMP group that persists across system reboots, you would edit the hostname configuration file of the IPMP interface to add data addresses. Then, if you want to use test addresses, you would edit the hostname configuration file of each of the group's underlying IP interfaces that will host a test address. Note that data and test addresses can be IPv4 addresses, IPv6 addresses, or both. Perform the following steps:

    1. Edit the /etc/hostname.ipmp-interface file by adding the following lines:


      ipmp group group-name data-address up
      addif data-address
      ...

      You can add more data addresses on separate addif lines in this file.

    2. Edit the /etc/hostname.interface file of the underlying IP interfaces that contain the test address by adding the following line:


      group group-name -failover test-address up

      Follow this same step to add test addresses to other underlying interfaces of the IPMP group. For a designated standby interface, the line must be as follows:


      group group-name -failover test-address standby up

      Caution – Caution –

      When adding test address information to the /etc/hostname.interface file, make sure to specify the -failover option before the up keyword. Otherwise, the test IP addresses are treated as data addresses, which would cause problems for system administration. Preferably, set the -failover option before specifying the IP address.



Example 8–2 Configuring an Active-Standby IPMP Group

This example shows how to manually create the same persistent active-standby IPMP configuration that is provided in Example 8–1.


# ifconfig itops0 ipmp

# ifconfig subitops0 group itops0
# ifconfig subitops1 group itops0
# ifconfig subitops2 group itops0

# ifconfig itops0 192.168.10.10/24 up
# ifconfig itops0 addif 192.168.10.15/24 up

# ifconfig subitops0 -failover 192.168.10.30/24 up
# ifconfig subitops1 -failover 192.168.10.32/24 up
# ifconfig subitops2 -failover 192.168.10.34/24 standby up

# ipmpstat -g
GROUP     GROUPNAME   STATE      FDT        INTERFACES
itops0    itops0      ok         10.00s     subitops0 subitops1 (subitops2)

# ipmpstat -t
INTERFACE      MODE     TESTADDR        TARGETS
subitops0      routes   192.168.10.30   192.168.10.1
subitops1      routes   192.168.10.32   192.168.10.1
subitops2      routes   192.168.10.34   192.168.10.5

# vi /etc/hostname.itops0
ipmp group itops0 192.168.10.10/24 up
addif 192.168.10.15/24 up

# vi /etc/hostname.subitops0
group itops0 -failover 192.168.10.30/24 up

# vi /etc/hostname.subitops1
group itops0 -failover 192.168.10.32/24 up

# vi /etc/hostname.subitops2
group itops0 -failover 192.168.10.34/24 standby up

Maintaining IPMP Groups

This section contains tasks for maintaining existing IPMP groups and the interfaces within those groups. The tasks presume that you have already configured an IPMP group, as explained in Configuring IPMP Groups.

ProcedureHow to Add an Interface to an IPMP Group

Before You Begin

Make sure that the interface that you add to the group matches all the constraints to be in the group. For a list of the requirements of an IPMP group, see How to Plan an IPMP Group.

  1. On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add the IP interface to the IPMP group.


    # ifconfig interface group group-name
    

    The interface specified in interface becomes a member of IPMP group group-name.


Example 8–3 Adding an Interface to an IPMP Group

To add the interface hme0 to the IPMP group itops0, you would type the following command:


# ifconfig hme0 group itops0
# ipmpstat -g
GROUP   GROUPNAME   STATE      FDT       INTERFACES
itops0  itops0      ok         10.00s    subitops0 subitops1 hme0

ProcedureHow to Remove an Interface From an IPMP Group

  1. On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Remove the interface from the IPMP group.


    # ifconfig interface group ""
    

    The quotation marks indicate a null string.


Example 8–4 Removing an Interface From a Group

To remove the interface hme0 from the IPMP group itops0, you would type the following command:


# ifconfig hme0 group ""
# ipmpstat -g
GROUP   GROUPNAME   STATE      FDT       INTERFACES
itops0  itops0      ok         10.00s    subitops0 subitops1

ProcedureHow to Add or Remove IP Addresses

You use the ifconfig command's addif syntax to add addresses to interfaces and the removeif syntax to remove addresses from interfaces. In the current IPMP implementation, test addresses are hosted on the underlying IP interfaces, while data addresses are assigned to the IPMP interface. The following procedure describes how to add or remove IP addresses that are either test addresses or data addresses.

  1. Assume the role of Primary Administrator, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add or remove data addresses.

    • To add data addresses to the IPMP group, type the following command:


      # ifconfig ipmp-interface addif ip-address up
      
    • To remove an address from the IPMP group, type the following command:


      # ifconfig ipmp-interface removeif ip-address
      
  3. Add or remove test addresses.

    • To assign a test address to an underlying interface of the IPMP group, type the following command:


      # ifconfig interface addif -failover ip-address up
      
    • To remove a test address from an underlying interface of the IPMP group, type the following command:


      # ifconfig interface removeif ip-address
      

Example 8–5 Removing a Test Address From an Interface

The following example uses the configuration of itops0 in Example 8–2. The step removes the test address from the interface subitops0.


# ipmpstat -t
INTERFACE      MODE     TESTADDR        TARGETS
subitops0      routes   192.168.10.30   192.168.10.1

# ifconfig subitops0 removeif 192.168.10.30

ProcedureHow to Move an Interface From One IPMP Group to Another Group

You can place an interface in a new IPMP group when the interface belongs to an existing IPMP group. You do not need to remove the interface from the current IPMP group. When you place the interface in a new group, the interface is automatically removed from any existing IPMP group.

  1. On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Move the interface to a new IPMP group.


    # ifconfig interface group group-name
    

    Placing the interface in a new group automatically removes the interface from any existing group.


Example 8–6 Moving an Interface to a Different IPMP Group

This example assumes that the underlying interfaces of your group are subitops0, subitops1, subitops2, and hme0. To change the IPMP group of interface hme0 to the group cs-link1, you would type the following:


# ifconfig hme0 group cs-link1

This command removes the hme0 interface from IPMP group itops0 and then puts the interface in the group cs-link1.


ProcedureHow to Delete an IPMP Group

Use this procedure if you no longer need a specific IPMP group.

  1. Assume the role of Primary Administrator, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Identify the IPMP group and the underlying IP interfaces.


    # ipmpstat -g
    
  3. Delete all IP interfaces that currently belong to the IPMP group.


    # ifconfig ip-interface group ""
    

    Repeat this step for all the IP interfaces that belong to the group.


    Note –

    To successfully delete an IPMP interface, the IPMP group must no longer contain any IP interfaces.


  4. Delete the IPMP interface.


    # ifconfig ipmp-interface unplumb
    

    After you unplumb the IPMP interface, any IP address that is associated with the interface is deleted from the system.

  5. To make the deletion persistent, perform the following additional steps:

    1. Delete the IPMP interface's corresponding hostname file.


      # rm /etc/hostname.ipmp-interface
      
    2. Remove the “group” keywords in the hostname files of the underlying interfaces.


Example 8–7 Deleting an IPMP Interface

To delete the interface itops0 that has the underlying IP interfaces subitops0 and subitops1, you would type the following commands:


# ipmpstat -g
GROUP   GROUPNAME   STATE      FDT        INTERFACES
itops0  itops0      ok         10.00s     subitops0 subitops1

# ifconfig subitops0 group ""
# ifconfig subitops1 group ""
# ifconfig itops0 unplumb
# rm /etc/hostname.itops0

You would then edit the files /etc/hostname.subitops0 and /etc/hostname.subitops1 to remove “group” entries in those files.


Configuring for Probe-Based Failure Detection

Probe-based failure detection involves the use of target systems, as explained in Probe-Based Failure Detection. In identifying targets for probe-based failure detection, the in.mpathd daemon operates in two modes: router target mode or multicast target mode. In the router target mode, the multipathing daemon probes targets that are defined in the routing table. If no targets are defined, then the daemon operates in multicast target mode, where multicast packets are sent out to probe neighbor hosts on the LAN.

Preferably, you should set up host targets for the in.mpathd daemon to probe. For some IPMP groups, the default router is sufficient as a target. However, for some IPMP groups, you might want to configure specific targets for probe-based failure detection. To specify the targets, set up host routes in the routing table as probe targets. Any host routes that are configured in the routing table are listed before the default router. IPMP uses the explicitly defined host routes for target selection. Thus, you should set up host routes to configure specific probe targets rather than use the default router.

To set up host routes in the routing table, you use the route command. Use the -p option with this command to add persistent routes. For example, route -p add adds a route that remains in the routing table even after you reboot the system. The -p option thus allows you to add persistent routes without needing special scripts to re-create the routes at every system startup. To use probe-based failure detection optimally, make sure that you set up multiple targets to receive probes.
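For example, after you add a persistent host route, you can confirm that the route is present by listing the routing table. The following invocation and trimmed sample output are illustrative only; the exact output varies by system:


$ netstat -rn
Routing Table: IPv4
  Destination           Gateway           Flags  Ref   Use   Interface
-------------------- -------------------- ----- ----- ----- ---------
192.168.10.137       192.168.10.137       UGHS      1     0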

The sample procedure that follows shows the exact syntax to add persistent routes to targets for probe-based failure detection. For more information about the options for the route command, refer to the route(1M) man page.

Consider the following criteria when evaluating which hosts on your network might make good targets:

  • Make sure that the prospective targets are available and running.

  • Make sure that the targets are on the same subnet as the interfaces in the IPMP group that you are configuring.

  • Make sure that the targets can answer the ICMP probes that the in.mpathd daemon sends.

Procedure How to Manually Specify Target Systems for Probe-Based Failure Detection

  1. Log in with your user account to the system where you are configuring probe-based failure detection.

  2. Add a route to a particular host to be used as a target in probe-based failure detection.


    $ route -p add -host destination-IP gateway-IP -static
    

    where destination-IP and gateway-IP are IPv4 addresses of the host to be used as a target. For example, you would type the following to specify the target system 192.168.10.137, which is on the same subnet as the interfaces in IPMP group itops0:


    $ route -p add -host 192.168.10.137 192.168.10.137 -static
    

    This new route will be automatically configured every time the system is restarted. If you want to define only a temporary route to a target system for probe-based failure detection, then do not use the -p option.
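    For example, the equivalent temporary route to the same target, which does not survive a reboot, would be added as follows:


    $ route add -host 192.168.10.137 192.168.10.137 -static
    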

  3. Add routes to additional hosts on the network to be used as target systems.
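    For example, to add a route to a second target such as the hypothetical host 192.168.10.148, which is assumed to be on the same subnet as the group's interfaces, you would repeat the command:


    $ route -p add -host 192.168.10.148 192.168.10.148 -static
    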

Procedure How to Configure the Behavior of the IPMP Daemon

Use the IPMP configuration file /etc/default/mpathd to configure the following system-wide parameters for IPMP groups.

  1. On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Edit the /etc/default/mpathd file.

    Change the default value of one or more of the three parameters.

    1. Type the new value for the FAILURE_DETECTION_TIME parameter.


      FAILURE_DETECTION_TIME=n
      

      where n is the amount of time in seconds for ICMP probes to detect whether an interface failure has occurred. The default is 10 seconds.

    2. Type the new value for the FAILBACK parameter.


      FAILBACK=[yes | no]
      • yes – The yes value is the default for the failback behavior of IPMP. When the repair of a failed interface is detected, network access fails back to the repaired interface, as described in Detecting Physical Interface Repairs.

      • no – The no value indicates that data traffic does not move back to a repaired interface. When a failed interface is detected as repaired, the INACTIVE flag is set for that interface. This flag indicates that the interface is currently not to be used for data traffic. The interface can still be used for probe traffic.

        For example, the IPMP group ipmp0 consists of two interfaces, ce0 and ce1. In the /etc/default/mpathd file, the FAILBACK=no parameter is set. If ce0 fails, then it is flagged as FAILED and becomes unusable. After repair, the interface is flagged as INACTIVE and remains unusable because of the FAILBACK=no setting.

        If ce1 fails and only ce0 is in the INACTIVE state, then ce0's INACTIVE flag is cleared and the interface becomes usable. If the IPMP group has other interfaces that are also in the INACTIVE state, then any one of these INACTIVE interfaces, and not necessarily ce0, can be cleared and become usable when ce1 fails.

    3. Type the new value for the TRACK_INTERFACES_ONLY_WITH_GROUPS parameter.


      TRACK_INTERFACES_ONLY_WITH_GROUPS=[yes | no]

      Note –

      For information about this parameter and the anonymous group feature, see Failure Detection and the Anonymous Group Feature.


      • yes – The yes value is the default behavior of IPMP. This value causes IPMP to ignore network interfaces that are not configured into an IPMP group.

      • no – The no value sets failure and repair detection for all network interfaces, regardless of whether they are configured into an IPMP group. However, when a failure or repair is detected on an interface that is not configured into an IPMP group, no action is triggered in IPMP to maintain the networking functions of that interface. Therefore, the no value is only useful for reporting failures and does not directly improve network availability.

  3. Restart the in.mpathd daemon.


    # pkill -HUP in.mpathd
    

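For reference, an /etc/default/mpathd file that explicitly sets all three parameters to their default values would contain the following entries. This is a minimal sketch; the file that ships with the system also contains comment lines, which are omitted here.


FAILURE_DETECTION_TIME=10
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes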
Recovering an IPMP Configuration With Dynamic Reconfiguration

This section contains procedures that relate to administering systems that support dynamic reconfiguration (DR).

Procedure How to Replace a Physical Card That Has Failed

This procedure explains how to replace a physical card on a system that supports DR. The procedure assumes the following conditions:

  • The failed NIC is an underlying interface of an IPMP group. In this procedure, the failed interface uses the flexible link name subitops0.

  • The failed NIC is replaced with a NIC that can be of a different card type. In this procedure, the replacement is a bge card.

Before You Begin

The procedures for performing DR vary with the type of system. Therefore, before you proceed, make sure that you consult the DR documentation for your specific system and complete any preparatory steps that this documentation requires.

  1. On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Perform the appropriate DR steps to remove the failed NIC from the system.

    • If you are removing the card without intending to insert a replacement, then skip the rest of the steps after you remove the card.

    • If you are replacing a card, then proceed to the subsequent steps.

  3. Make sure that the replacement NIC is not being referenced by other configurations in the system.

    For example, suppose that the replacement NIC you install is bge0. If an /etc/hostname.bge0 file exists on the system, remove that file.


    # rm /etc/hostname.bge0
    
  4. Replace the default link name of the replacement NIC with the link name of the failed card.

    By default, the link name of the replacement bge card is bgen, where n is the instance number, such as bge0.


    # dladm rename-link bge0 subitops0
    

    This step transfers the network configuration that is associated with the link name subitops0 to the replacement bge0 device.

  5. Attach the replacement NIC to the system.

  6. Complete the DR process by enabling the new NIC's resources to become available for use.

    For example, you use the cfgadm command to perform this step. For more information, see the cfgadm(1M) man page.
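    A sketch of such an invocation follows, where PCI0 is a hypothetical attachment point ID. You can run cfgadm without arguments first to list the attachment points on your system:


    # cfgadm
    # cfgadm -c configure PCI0
    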

    After this step, the new interface is configured with the test address, added as an underlying interface of the IPMP group, and deployed either as an active or a standby interface, all depending on the configurations that are specified in /etc/hostname.subitops0. The kernel can then allocate data addresses to this new interface according to the contents of the /etc/hostname.ipmp-interface configuration file.

About Missing Interfaces at System Boot

Certain systems are configured such that an underlying interface of an IPMP group is missing at system boot, for example, because the interface's NIC was removed through DR and has not yet been replaced.

With the new IPMP implementation where data addresses belong to the IPMP interface, recovering the missing interface becomes automatic. During system boot, the boot script constructs a list of failed interfaces, including interfaces that are missing. Based on the /etc/hostname file of the IPMP interface as well as the hostname files of the underlying IP interfaces, the boot script can determine to which IPMP group an interface belongs. When the missing interface is subsequently dynamically reconfigured on the system, the script then automatically adds that interface to the appropriate IPMP group and the interface becomes immediately available for use.

Monitoring IPMP Information

The following procedures use the ipmpstat command, enabling you to monitor different aspects of IPMP groups on the system. You can observe the status of the IPMP group as a whole or its underlying IP interfaces. You can also verify the configuration of data and test addresses for the group. Information about failure detection is also obtained by using the ipmpstat command. For more details about the ipmpstat command and its options, see the ipmpstat(1M) man page.

By default, host names are displayed in the output instead of numeric IP addresses, provided that host names exist for the addresses. To list numeric IP addresses in the output, use the -n option together with the option that displays the specific IPMP group information.


Note –

In the following procedures, use of the ipmpstat command does not require system administrator privileges, unless stated otherwise.


Procedure How to Obtain IPMP Group Information

Use this procedure to list the status of the various IPMP groups on the system, including the status of their underlying interfaces. If probe-based failure detection is enabled for a specific group, the command also includes the failure detection time for that group.

  1. Display the IPMP group information.


    $ ipmpstat -g
    GROUP   GROUPNAME   STATE      FDT        INTERFACES
    itops0  itops0      ok         10.00s     subitops0 subitops1
    acctg1  acctg1      failed     --         [hme0 hme1]
    field2  field2      degraded   20.00s     fops0 fops3 (fops2) [fops1]
    GROUP

    Specifies the IPMP interface name. In the case of an anonymous group, this field will be empty. For more information about anonymous groups, see the in.mpathd(1M) man page.

    GROUPNAME

    Specifies the name of the IPMP group. In the case of an anonymous group, this field will be empty.

    STATE

    Indicates a group's current status, which can be one of the following:

    • ok indicates that all underlying interfaces of the IPMP group are usable.

    • degraded indicates that some of the underlying interfaces in the group are unusable.

    • failed indicates that all of the group's interfaces are unusable.

    FDT

    Specifies the failure detection time, if failure detection is enabled. If failure detection is disabled, this field will be empty.

    INTERFACES

    Specifies the underlying interfaces that belong to the group. In this field, active interfaces are listed first, then inactive interfaces, and finally unusable interfaces. The status of the interface is indicated by the manner in which it is listed:

    • interface (without parentheses or brackets) indicates an active interface. Active interfaces are those interfaces that are being used by the system to send or receive data traffic.

    • (interface) (with parentheses) indicates a functioning but inactive interface. The interface is not in use as defined by administrative policy.

    • [interface] (with brackets) indicates that the interface is unusable because it has either failed or been taken offline.

Procedure How to Obtain IPMP Data Address Information

Use this procedure to display data addresses and the group to which each address belongs. The displayed information also includes which addresses are available for use, depending on whether an address has been marked up or down by using the ifconfig command. You can also determine the inbound or outbound interface on which an address can be used.

  1. Display the IPMP address information.


    $ ipmpstat -an
    ADDRESS         STATE    GROUP      INBOUND     OUTBOUND
    192.168.10.10   up       itops0     subitops0   subitops0 subitops1
    192.168.10.15   up       itops0     subitops1   subitops0 subitops1
    192.0.0.100     up       acctg1     --          --
    192.0.0.101     up       acctg1     --          --
    128.0.0.100     up       field2     fops0       fops0 fops3
    128.0.0.101     up       field2     fops3       fops0 fops3
    128.0.0.102     down     field2     --          --
    ADDRESS

    Specifies the hostname or the data address, if the -n option is used in conjunction with the -a option.

    STATE

    Indicates whether the address on the IPMP interface is up, and therefore usable, or down, and therefore unusable.

    GROUP

    Specifies the IPMP IP interface that hosts a specific data address.

    INBOUND

    Identifies the interface that receives packets for a given address. The field information might change depending on external events. For example, if a data address is down, or if no active IP interfaces remain in the IPMP group, this field will be empty. The empty field indicates that the system is not accepting IP packets that are destined for the given address.

    OUTBOUND

    Identifies the interface that sends packets that are using a given address as a source address. As with the INBOUND field, the OUTBOUND field information might also change depending on external events. An empty field indicates that the system is not sending out packets with the given source address. The field might be empty either because the address is down, or because no active IP interfaces remain in the group.
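For example, the down state of the address 128.0.0.102 in the sample output could have been set administratively by using the ifconfig command. A hypothetical invocation, assuming that the address is hosted on the logical interface field2:2, would be:


# ifconfig field2:2 down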

Procedure How to Obtain Information About Underlying IP Interfaces of a Group

Use this procedure to display information about an IPMP group's underlying IP interfaces. For a description of the corresponding relationship between the NIC, data link, and IP interface, see Overview of the Networking Stack.

  1. Display the IPMP interface information.


    $ ipmpstat -i
    INTERFACE   ACTIVE   GROUP      FLAGS      LINK       PROBE      STATE
    subitops0   yes      itops0     --mb---    up         ok         ok
    subitops1   yes      itops0     -------    up         disabled   ok
    hme0        no       acctg1     -------    unknown    disabled   offline
    hme1        no       acctg1     is-----    down       unknown    failed
    fops0       yes      field2     --mb---    unknown    ok         ok
    fops1       no       field2     -i-----    up         ok         ok
    fops2       no       field2     -------    up         failed     failed
    fops3       yes      field2     --mb---    up         ok         ok
    INTERFACE

    Specifies each underlying interface of each IPMP group.

    ACTIVE

    Indicates whether the interface is functioning and is in use (yes) or not (no).

    GROUP

    Specifies the IPMP interface name. In the case of anonymous groups, this field will be empty. For more information about anonymous groups, see the in.mpathd(1M) man page.

    FLAGS

    Indicates the status of the underlying interface, which can be one or any combination of the following:

    • i indicates that the INACTIVE flag is set for the interface and therefore the interface is not used to send or receive data traffic.

    • s indicates that the interface is configured to be a standby interface.

    • m indicates that the interface is designated by the system to send and receive IPv4 multicast traffic for the IPMP group.

    • b indicates that the interface is designated by the system to receive broadcast traffic for the IPMP group.

    • M indicates that the interface is designated by the system to send and receive IPv6 multicast traffic for the IPMP group.

    • d indicates that the interface is down and therefore unusable.

    • h indicates that the interface shares a duplicate physical hardware address with another interface and has been taken offline. The h flag indicates that the interface is unusable.

    LINK

    Indicates the state of link-based failure detection, which is one of the following states:

    • up or down indicates the availability or unavailability of a link.

    • unknown indicates that the driver does not support notification of whether a link is up or down and therefore does not detect link state changes.

    PROBE

    Specifies the state of the probe-based failure detection for interfaces that have been configured with a test address, as follows:

    • ok indicates that the probe is functional and active.

    • failed indicates that probe-based failure detection has detected that the interface is not working.

    • unknown indicates that no suitable probe targets could be found, and therefore probes cannot be sent.

    • disabled indicates that no IPMP test address is configured on the interface. Therefore probe-based failure detection is disabled.

    STATE

    Specifies the overall state of the interface, as follows:

    • ok indicates that the interface is online and working normally based on the configuration of failure detection methods.

    • failed indicates that the interface is not working because either the interface's link is down, or the probe detection has determined that the interface cannot send or receive traffic.

    • offline indicates that the interface is not available for use. Typically, the interface is switched offline under the following circumstances:

      • The interface is being tested.

      • Dynamic reconfiguration is being performed.

      • The interface shares a duplicate hardware address with another interface.

    • unknown indicates that the interface's state cannot be determined because no probe targets can be found for probe-based failure detection.

Procedure How to Obtain IPMP Probe Target Information

Use this procedure to monitor the probe targets that are associated with each IP interface in an IPMP group.

  1. Display the IPMP probe targets.


    $ ipmpstat -nt
    INTERFACE   MODE          TESTADDR        TARGETS
    subitops0   routes        192.168.85.30   192.168.85.1 192.168.85.3
    subitops1   disabled      --              --
    hme0        disabled      --              --
    hme1        routes        192.1.2.200     192.1.2.1
    fops0       multicast     128.9.0.200     128.0.0.1 128.0.0.2
    fops1       multicast     128.9.0.201     128.0.0.2 128.0.0.1
    fops2       multicast     128.9.0.202     128.0.0.1 128.0.0.2
    fops3       multicast     128.9.0.203     128.0.0.1 128.0.0.2
    INTERFACE

    Specifies the underlying interfaces of the IPMP group.

    MODE

    Specifies the method for obtaining the probe targets.

    • routes indicates that the system routing table is used to find probe targets.

    • multicast indicates that multicast ICMP probes are used to find targets.

    • disabled indicates that probe-based failure detection has been disabled for the interface.

    TESTADDR

    Specifies the hostname or, if the -n option is used in conjunction with the -t option, the IP address that is assigned to the interface to send and receive probes. This field will be empty if a test address has not been configured.


    Note –

    If an IP interface is configured with both IPv4 and IPv6 test addresses, the probe target information is displayed separately for each test address.


    TARGETS

    Lists the current probe targets in a space-separated list. The probe targets are displayed either as hostnames or, if the -n option is used in conjunction with the -t option, as IP addresses.

Procedure How to Observe IPMP Probes

Use this procedure to observe ongoing probes. When you issue the command to observe probes, information about probe activity on the system is continuously displayed until you terminate the command with Ctrl-C. You must have Primary Administrator privileges to run this command.

  1. Assume the role of Primary Administrator, or become superuser.

    The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Display the information about ongoing probes.


    # ipmpstat -pn
    TIME    INTERFACE   PROBE   TARGET        NETRTT   RTT      RTTAVG     RTTDEV
    0.11s   subitops0   589     192.168.85.1  0.51ms   0.76ms   0.76ms     --
    0.17s   hme1        612     192.1.2.1     --       --       --         --
    0.25s   fops0       602     128.0.0.1     0.61ms   1.10ms   1.10ms     --
    0.26s   fops1       602     128.0.0.2     --       --       --         --
    0.25s   fops2       601     128.0.0.1     0.62ms   1.20ms   1.00ms     --
    0.26s   fops3       603     128.0.0.1     0.79ms   1.11ms   1.10ms     --
    1.66s   hme1        613     192.1.2.1     --       --       --         --
    1.70s   subitops0   603     192.168.85.3  0.63ms   1.10ms   1.10ms     --
    ^C
    TIME

    Specifies the time a probe was sent relative to when the ipmpstat command was issued. If a probe was initiated prior to ipmpstat being started, then the time is displayed with a negative value, relative to when the command was issued.

    PROBE

    Specifies the identifier that represents the probe.

    INTERFACE

    Specifies the interface on which the probe is sent.

    TARGET

    Specifies the hostname or, if the -n option is used in conjunction with -p, the target address to which the probe is sent.

    NETRTT

    Specifies the total network round-trip time of the probe and is measured in milliseconds. NETRTT covers the time between the moment when the IP module sends the probe and the moment the IP module receives the ack packets from the target. If the in.mpathd daemon has determined that the probe is lost, then the field will be empty.

    RTT

    Specifies the total round-trip time for the probe and is measured in milliseconds. RTT covers the time between the moment the daemon executes the code to send the probe and the moment the daemon completes processing the ack packets from the target. If the in.mpathd daemon has determined that the probe is lost, then the field will be empty. Spikes that occur in the RTT which are not present in the NETRTT might indicate that the local system is overloaded.

    RTTAVG

    Specifies the probe's average round-trip time over the interface between local system and target. The average round-trip time helps identify slow targets. If data is insufficient to calculate the average, this field will be empty.

    RTTDEV

    Specifies the standard deviation for the round-trip time to the target over the interface. The standard deviation helps identify jittery targets whose ack packets are being sent erratically. For jittery targets, the in.mpathd daemon is forced to increase the failure detection time. Consequently, the daemon would take a longer time before it can detect such a target's outage. If data is insufficient to calculate the standard deviation, this field will be empty.

Procedure How to Customize the Output of the ipmpstat Command in a Script

When you use the ipmpstat command, by default, the most meaningful fields that fit in 80 columns are displayed. In the output, all the fields that are specific to the option that you use with the ipmpstat command are displayed, except in the case of the ipmpstat -p syntax. If you want to specify the fields to be displayed, use the -o option in conjunction with the options that determine the output mode of the command. The -o option is particularly useful when you issue the command from a script or by using a command alias, as shown in the example that follows this procedure.

  1. To customize the output, issue one of the following commands:

    • To display selected fields of the ipmpstat command, use the -o option in combination with the specific output option. For example, to display only the GROUPNAME and the STATE fields of the group output mode, you would type the following:


      $ ipmpstat -g -o groupname,state
      GROUPNAME  STATE
      itops0     ok
      acctg1     failed
      field2     degraded
    • To display all the fields of a given output mode, use -o all. For example, to display all the fields of the group output mode, you would type the following:


      $ ipmpstat -g -o all
      

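As mentioned earlier, the -o option also combines well with a command alias. For example, the following hypothetical ksh alias displays only the group name and state of each IPMP group:


$ alias ipmpsummary='ipmpstat -g -o groupname,state'
$ ipmpsummary
GROUPNAME  STATE
itops0     ok
acctg1     failed
field2     degraded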
Procedure How to Generate Machine-Parseable Output of the ipmpstat Command

You can generate machine-parseable information by using the ipmpstat -P syntax. The -P option is intended particularly for use in scripts. Machine-parseable output differs from the normal output in the following ways:

  • Column headers are omitted.

  • Fields are separated by colons (:).

  • Empty fields are displayed as empty strings rather than as the double dash (--).

To correctly use the ipmpstat -P syntax, observe the following rules:

  • Always use the -o option together with the -P option to specify an explicit list of fields.

  • Do not use the -o all option together with the -P option.

Ignoring either one of these rules will cause ipmpstat -P to fail.

  1. To display the group name, the failure detection time, and the underlying interfaces in machine-parseable format, you would type the following:


    $ ipmpstat -gP -o groupname,fdt,interfaces
    itops0:10.00s:subitops0 subitops1
    acctg1::[hme0 hme1]
    field2:20.00s:fops0 fops3 (fops2) [fops1]

    The group name, failure detection time, and underlying interfaces are all fields of the group output mode. Thus, you use the -g and -o options together with the -P option.


Example 8–8 Using ipmpstat -P in a Script

This sample script displays the failure detection time of a particular IPMP group.

getfdt() {
    # Print the failure detection time (FDT) of the IPMP group that
    # is named in $1, using the machine-parseable output of ipmpstat.
    ipmpstat -gP -o group,fdt | while IFS=: read group fdt; do
        [[ "$group" = "$1" ]] && { echo "$fdt"; return; }
    done
}
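For example, with this function defined in a ksh session on the system whose group information was shown earlier, a call might produce the following output:


$ getfdt itops0
10.00s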