This part discusses administration of other types of configurations such as virtual local area networks (VLANs), link aggregations, and IP multipathing (IPMP) groups.
This chapter describes procedures to configure and maintain virtual local area networks (VLANs). The procedures include steps that take advantage of features such as support for flexible link names.
A virtual local area network (VLAN) is a subdivision of a local area network at the data link layer of the TCP/IP protocol stack. You can create VLANs for local area networks that use switch technology. By assigning groups of users to VLANs, you can improve network administration and security for the entire local network. You can also assign interfaces on the same system to different VLANs.
Consider dividing your local network into VLANs if you need to do the following:
Create a logical division of workgroups.
For example, suppose all hosts on a floor of a building are connected on one switched-based local network. You could create a separate VLAN for each workgroup on the floor.
Enforce differing security policies for the workgroups.
For example, the security needs of a Finance department and an Information Technologies department are quite different. If systems for both departments share the same local network, you could create a separate VLAN for each department. Then, you could enforce the appropriate security policy on a per-VLAN basis.
Split workgroups into manageable broadcast domains.
The use of VLANs reduces the size of broadcast domains and improves network efficiency.
Switched LAN technology enables you to organize the systems on a local network into VLANs. Before you can divide a local network into VLANs, you must obtain switches that support VLAN technology. You can configure all ports on a switch to serve a single VLAN or multiple VLANs, depending on the VLAN topology design. Each switch manufacturer has different procedures for configuring the ports of a switch.
The following figure shows a local area network that has the subnet address 192.168.84.0. This LAN is subdivided into three VLANs, Red, Yellow, and Blue.
Connectivity on LAN 192.168.84.0 is handled by Switches 1 and 2. The Red VLAN contains systems in the Accounting workgroup. The Human Resources workgroup's systems are on the Yellow VLAN. Systems of the Information Technologies workgroup are assigned to the Blue VLAN.
Each VLAN in a local area network is identified by a VLAN tag, or VLAN ID (VID). The VID is assigned during VLAN configuration. The VID is a 12-bit identifier between 1 and 4094 that provides a unique identity for each VLAN. In Figure 5–1, the Red VLAN has the VID 789, the Yellow VLAN has the VID 456, and the Blue VLAN has the VID 123.
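The 1–4094 range described above can be expressed as a quick check. The following sketch is illustrative only and is not part of any Solaris tool; it simply encodes the rule that a VID must be a number in the usable 12-bit range (VIDs 0 and 4095 are reserved by IEEE 802.1Q).

```shell
# Hypothetical helper: accept a VID only if it is a number in the
# usable 12-bit range 1-4094 (0 and 4095 are reserved by 802.1Q).
is_valid_vid() {
  case $1 in
    ''|*[!0-9]*) return 1 ;;   # reject empty or non-numeric input
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 4094 ]
}

for vid in 123 456 789 0 4095; do
  if is_valid_vid "$vid"; then
    echo "$vid: valid"
  else
    echo "$vid: invalid"
  fi
done
```

The example VIDs 123, 456, and 789 are the ones assigned to the Blue, Yellow, and Red VLANs in this chapter.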
When you configure switches to support VLANs, you need to assign a VID to each port. The VID on the port must be the same as the VID assigned to the interface that connects to the port, as shown in the following figure.
Figure 5–2 shows multiple hosts that are connected to different VLANs. Two hosts belong to the same VLAN. In this figure, the primary network interfaces of the three hosts connect to Switch 1. Host A is a member of the Blue VLAN. Therefore, Host A's interface is configured with the VID 123. This interface connects to Port 1 on Switch 1, which is then configured with the VID 123. Host B is a member of the Yellow VLAN with the VID 456. Host B's interface connects to Port 5 on Switch 1, which is configured with the VID 456. Finally, Host C is also a member of the Blue VLAN. Host C's interface connects to Port 9 on Switch 1, which is configured with the VID 123.
The figure also shows that a single host can also belong to more than one VLAN. For example, Host A has two interfaces. The second interface is configured with the VID 456 and is connected to Port 3 which is also configured with the VID 456. Thus, Host A is a member of both the Blue VLAN and the Yellow VLAN.
In this Solaris release, you can assign meaningful names to VLAN interfaces. VLAN names consist of a link name and the VLAN ID number (VID), such as sales0. You should assign customized names when you create VLANs. For more information about customized names, see Assigning Names to Data Links. For more information about valid customized names, see Rules for Valid Link Names.
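As a rough illustration of link naming, the following sketch checks a candidate name against commonly documented constraints. The constraints assumed here (starts with a letter, uses only letters, digits, and underscores, ends in the PPA digit, and is at most 31 characters long) are assumptions for the example; see Rules for Valid Link Names for the authoritative rules.

```shell
# Illustrative check only -- see "Rules for Valid Link Names" for the
# authoritative rules. Assumed constraints: begins with a letter,
# contains only letters, digits, and underscores, ends with a digit
# (the PPA number), and is at most 31 characters long.
is_valid_link_name() {
  name=$1
  [ ${#name} -le 31 ] || return 1      # assumed maximum length
  case $name in
    [a-z]*[0-9]) ;;                    # starts with a letter, ends with a digit
    *) return 1 ;;
  esac
  case $name in
    *[!a-z0-9_]*) return 1 ;;          # assumed character set
  esac
  return 0
}

for name in sales0 net0 0sales sales; do
  if is_valid_link_name "$name"; then
    echo "$name: valid"
  else
    echo "$name: invalid"
  fi
done
```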
Use the following procedure to plan for VLANs on your network.
Examine the local network topology and determine where subdivision into VLANs is appropriate.
For a basic example of such a topology, refer to Figure 5–1.
Create a numbering scheme for the VIDs, and assign a VID to each VLAN.
A VLAN numbering scheme might already exist on the network. If so, you must create VIDs within the existing VLAN numbering scheme.
On each system, determine which interfaces will be members of a particular VLAN.
Check the connections of the interfaces to the network's switches.
Note the VID of each interface and the switch port where each interface is connected.
Configure each port of the switch with the same VID as the interface to which it is connected.
Refer to the switch manufacturer's documentation for configuration instructions.
The following procedure shows how to create and configure a VLAN. In this Solaris release, all Ethernet devices can support VLANs. However, some restrictions exist with certain devices. For these exceptions, refer to VLANs on Legacy Devices.
Data links must already be configured on your system before you can create VLANs. See How to Configure an IP Interface After System Installation.
On the system in which you configure VLANs, assume the Primary Administrator role, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Determine the types of links that are in use in your system.
# dladm show-link
Create a VLAN link over a data-link.
# dladm create-vlan -l link -v VID vlan-link

link
    Specifies the link on which the VLAN interface is being created.

VID
    Indicates the VLAN ID number.

vlan-link
    Specifies the name of the VLAN, which can also be an administratively chosen name.
Verify the VLAN configuration.
# dladm show-vlan
Configure an IP interface over the VLAN.
# ifconfig interface plumb IP-address up
where interface uses the same name as the VLAN.
You can assign IPv4 or IPv6 addresses to the VLAN's IP interface.
(Optional) To make the IP configuration for the VLAN persist across reboots, create an /etc/hostname.interface file to contain the interface's IP address.
The interface takes the name that you assign to the VLAN.
This example configures the VLAN sales over the link subitops0. The VLAN is configured to persist across reboots.
# dladm show-link
LINK        CLASS  MTU   STATE  OVER
subitops0   phys   1500  up     --
ce1         phys   1500  up     --
# dladm create-vlan -l subitops0 -v 7 sales
# dladm show-vlan
LINK   VID  OVER       FLAGS
sales  7    subitops0  ----
When link information is displayed, the VLAN link is included in the list.
# dladm show-link
LINK        CLASS  MTU   STATE  OVER
subitops0   phys   1500  up     --
ce1         phys   1500  up     --
sales       vlan   1500  up     subitops0
# ifconfig sales plumb 10.0.0.3/24 up
# echo 10.0.0.3/24 > /etc/hostname.sales
Certain legacy devices handle only packets whose maximum frame size is 1514 bytes. Packets whose frame sizes exceed the maximum limit are dropped. For such cases, follow the same procedure listed in How to Configure a VLAN. However, when creating the VLAN, use the -f option to force the creation of the VLAN.
The general steps to perform are as follows:
Create the VLAN with the -f option.
# dladm create-vlan -f -l link -v VID [vlan-link]
Set a lower size for the maximum transmission unit (MTU), such as 1496 bytes.
# dladm set-linkprop -p default_mtu=1496 vlan-link
The lower MTU value allows space for the link layer to insert the VLAN header prior to transmission.
Repeat this step to set the same lower MTU value on every node in the VLAN.
For more information about changing link property values, refer to Administering NIC Driver Properties.
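The arithmetic behind the 1496-byte value, together with a sketch that emits the per-link commands, is shown below. The 802.1Q tag adds 4 bytes to each frame, so a device capped at 1514-byte frames needs the MTU lowered from 1500 by at least those 4 bytes. The VLAN link names in the sketch are examples only.

```shell
# A tagged frame is 4 bytes longer than an untagged one (the 802.1Q
# header), so lower the default 1500-byte MTU by 4 bytes.
vlan_hdr=4
legacy_mtu=$((1500 - vlan_hdr))   # 1496

# Emit the dladm invocation for each VLAN link that must be adjusted;
# the link names passed in are illustrative.
emit_mtu_cmds() {
  for vlan in "$@"; do
    echo "dladm set-linkprop -p default_mtu=$legacy_mtu $vlan"
  done
}
emit_mtu_cmds sales managers
```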
This section describes the usage of new dladm subcommands for other VLAN tasks. These dladm commands also work with link names.
Assume the System Administrator role or become superuser.
The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.
Display VLAN information.
# dladm show-vlan [vlan-link]
If you do not specify a VLAN link, the command displays information about all configured VLANs.
The following example shows the available VLANs in a system.
# dladm show-vlan
LINK      VID  OVER       FLAGS
sales     7    subitops0  ----
managers  5    net0       ----
Configured VLANs also appear when you issue the dladm show-link command. In the command output, the VLANs are appropriately identified in the CLASS column.
# dladm show-link
LINK        CLASS  MTU   STATE  OVER
subitops0   phys   1500  up     --
sales       vlan   1500  up     subitops0
net0        phys   1500  up     --
managers    vlan   1500  up     net0
Assume the System Administrator role or become superuser.
The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.
Determine which VLAN you want to remove.
# dladm show-vlan
Unplumb the VLAN's IP interface.
# ifconfig vlan-interface unplumb
where vlan-interface is the IP interface that is configured over the VLAN.
You cannot remove a VLAN that is currently in use.
Remove the VLAN. To make the removal persistent, also delete the VLAN's /etc/hostname file, as the following example shows.
# dladm show-vlan
LINK      VID  OVER       FLAGS
sales     5    subitops0  ----
managers  7    net0       ----
# ifconfig managers unplumb
# dladm delete-vlan managers
# rm /etc/hostname.managers
This chapter describes procedures to configure and maintain link aggregations. The procedures include steps that take advantage of new features such as support for flexible link names.
The Solaris OS supports the organization of network interfaces into link aggregations. A link aggregation consists of several interfaces on a system that are configured together as a single, logical unit. Link aggregation, also referred to as trunking, is defined in the IEEE 802.3ad Link Aggregation Standard.
The IEEE 802.3ad Link Aggregation Standard provides a method to combine the capacity of multiple full-duplex Ethernet links into a single logical link. This link aggregation group is then treated as though it were, in fact, a single link.
The following are features of link aggregations:
Increased bandwidth – The capacity of multiple links is combined into one logical link.
Automatic failover/failback – Traffic from a failed link is failed over to working links in the aggregation.
Load balancing – Both inbound and outbound traffic is distributed according to user-selected load-balancing policies, such as source and destination MAC or IP addresses.
Support for redundancy – Two systems can be configured with parallel aggregations.
Improved administration – All interfaces are administered as a single unit.
Less drain on the network address pool – The entire aggregation can be assigned one IP address.
The basic link aggregation topology involves a single aggregation that contains a set of physical interfaces. You might use the basic link aggregation in the following situations:
For systems that run an application with distributed heavy traffic, you can dedicate an aggregation to that application's traffic.
For sites with limited IP address space that nevertheless require large amounts of bandwidth, you need only one IP address for a large aggregation of interfaces.
For sites that need to hide the existence of internal interfaces, the IP address of the aggregation hides its interfaces from external applications.
Figure 6–1 shows an aggregation for a server that hosts a popular web site. The site requires increased bandwidth for query traffic between Internet customers and the site's database server. For security purposes, the existence of the individual interfaces on the server must be hidden from external applications. The solution is the aggregation aggr1 with the IP address 192.168.50.32. This aggregation consists of three interfaces, bge0 through bge2. These interfaces are dedicated to sending out traffic in response to customer queries. The outgoing address on packet traffic from all the interfaces is the IP address of aggr1, 192.168.50.32.
Figure 6–2 depicts a local network with two systems, and each system has an aggregation configured. The two systems are connected by a switch. If you need to run an aggregation through a switch, that switch must support aggregation technology. This type of configuration is particularly useful for high availability and redundant systems.
In the figure, System A has an aggregation that consists of two interfaces, bge0 and bge1. These interfaces are connected to the switch through aggregated ports. System B has an aggregation of four interfaces, e1000g0 through e1000g3. These interfaces are also connected to aggregated ports on the switch.
The back-to-back link aggregation topology involves two separate systems that are cabled directly to each other, as shown in the following figure. The systems run parallel aggregations.
In this figure, device bge0 on System A is directly linked to bge0 on System B, and so on. In this way, Systems A and B can support redundancy and high availability, as well as high-speed communications between both systems. Each system also has interface ce0 configured for traffic flow within the local network.
The most common application for back-to-back link aggregations is mirrored database servers. Both servers need to be updated together and therefore require significant bandwidth, high-speed traffic flow, and reliability. Such configurations are most often found in data centers.
If you plan to use a link aggregation, consider defining a policy for outgoing traffic. This policy can specify how you want packets to be distributed across the available links of an aggregation, thus establishing load balancing. The following are the possible layer specifiers and their significance for the aggregation policy:
L2 – Determines the outgoing link by hashing the MAC (L2) header of each packet
L3 – Determines the outgoing link by hashing the IP (L3) header of each packet
L4 – Determines the outgoing link by hashing the TCP, UDP, or other ULP (L4) header of each packet
Any combination of these policies is also valid. The default policy is L4. For more information, refer to the dladm(1M) man page.
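The following toy model shows the idea behind the layer specifiers: the selected header fields are hashed, and the hash picks one outbound link from the aggregation. This is a conceptual sketch only, not the actual dladm hashing algorithm, and the MAC and IP addresses used are invented.

```shell
# Conceptual sketch: hash the header fields the policy names, then
# take the result modulo the number of links in the aggregation.
pick_link() {
  nlinks=$1
  shift
  hash=$(printf '%s' "$*" | cksum | cut -d ' ' -f1)
  echo $((hash % nlinks))
}

# L2 policy: hash source and destination MAC addresses (made up).
pick_link 3 00:14:4f:aa:bb:cc 00:14:4f:dd:ee:ff
# L3 policy: hash source and destination IP addresses (made up).
pick_link 3 192.168.84.2 192.168.84.7
```

Because the hash is deterministic, all packets of one flow map to the same link, while different flows spread across the aggregation.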
If your aggregation topology involves connection through a switch, you must note whether the switch supports the link aggregation control protocol (LACP). If the switch supports LACP, you must configure LACP for the switch and the aggregation. However, you can define one of the following modes in which LACP is to operate:
Off mode – The default mode for aggregations. LACP packets, which are called LACPDUs, are not generated.
Active mode – The system generates LACPDUs at regular intervals, which you can specify.
Passive mode – The system generates an LACPDU only when it receives an LACPDU from the switch. When both the aggregation and the switch are configured in passive mode, they cannot exchange LACPDUs.
See the dladm(1M) man page and the switch manufacturer's documentation for syntax information.
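The mode interaction above can be summarized in a small predicate: LACPDUs are exchanged only when neither end is off and at least one end is active. The sketch below merely restates those rules; it is not part of any Solaris tool.

```shell
# Returns success when two LACP endpoints with the given modes
# (active, passive, or off) would exchange LACPDUs.
lacp_exchanges() {
  a=$1
  b=$2
  # No exchange if either end has LACP switched off.
  [ "$a" != off ] && [ "$b" != off ] || return 1
  # Two passive ends never initiate; at least one must be active.
  [ "$a" = active ] || [ "$b" = active ]
}

lacp_exchanges active passive && echo "active/passive: LACPDUs flow"
lacp_exchanges passive passive || echo "passive/passive: no LACPDUs"
```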
Your link aggregation configuration is bound by the following requirements:
You must use the dladm command to configure aggregations.
An interface that has been plumbed cannot become a member of an aggregation.
All interfaces in the aggregation must run at the same speed and in full-duplex mode.
You must set the EEPROM parameter local-mac-address? to true. For instructions, refer to How to Ensure That the MAC Address of an Interface Is Unique.
Certain devices do not fulfill the requirement of the IEEE 802.3ad Link Aggregation Standard to support link state notification. This support must exist in order for a port to attach to an aggregation or to detach from an aggregation. Devices that do not support link state notification can be aggregated only by using the -f option of the dladm create-aggr command. For such devices, the link state is always reported as UP. For information about the use of the -f option, see How to Create a Link Aggregation.
You can assign any meaningful, flexible name to a link aggregation. For more information about flexible or customized names, see Assigning Names to Data Links. Previous Solaris releases identify a link aggregation by the value of a key that you assign to the aggregation. For an explanation of this method, see Overview of Link Aggregations. Although that method continues to be valid, you should preferably use customized names to identify link aggregations.
Similar to all other data-link configurations, link aggregations are administered with the dladm command.
Link aggregation only works on full-duplex, point-to-point links that operate at identical speeds. Make sure that the interfaces in your aggregation conform to this requirement.
If you are using a switch in your aggregation topology, make sure that you have done the following on the switch:
Configured the ports to be used as an aggregation
If the switch supports LACP, configured LACP in either active mode or passive mode
Assume the Primary Administrator role, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Display the network data-link information.
# dladm show-link
Make sure that the link over which you are creating the aggregation is not opened by any application.
For example, if the IP interface over the link is plumbed, then unplumb the interface.
# ifconfig interface unplumb
where interface refers to the IP interface that is plumbed and using the link.
Create a link aggregation.
# dladm create-aggr [-f] -l link1 -l link2 [...] aggr

-f
    Forces the creation of the aggregation. Use this option when you are attempting to aggregate devices that do not support link state notification.

link1, link2
    Specifies the data links that you want to aggregate.

aggr
    Specifies the name that you want to assign to the aggregation.
Plumb and configure an IP interface over the newly created aggregation.
# ifconfig interface plumb IP-address up
where interface takes the name of the aggregation.
Check the status of the aggregation you just created.
The aggregation's state should be UP.
# dladm show-aggr
(Optional) Make the IP configuration of the link aggregation persist across reboots.
Create the /etc/hostname file for the aggregation's interface.
If the aggregation contains IPv4 addresses, the corresponding hostname file is /etc/hostname.aggr. For IPv6–based link aggregations, the corresponding hostname file is /etc/hostname6.aggr.
Type the IPv4 or IPv6 address of the link aggregation into the file.
Perform a reconfiguration boot.
# reboot -- -r
This example shows the commands that are used to create a link aggregation with two data links, subvideo0 and subvideo1. The configuration is persistent across system reboots.
# dladm show-link
LINK       CLASS  MTU   STATE  OVER
subvideo0  phys   1500  up     ----
subvideo1  phys   1500  up     ----
# dladm create-aggr -l subvideo0 -l subvideo1 video0
# ifconfig video0 plumb 10.8.57.50/24 up
# dladm show-aggr
LINK    POLICY  ADDRPOLICY  LACPACTIVITY  LACPTIMER  FLAGS
video0  L4      auto        off           short      -----
# echo 10.8.57.50/24 > /etc/hostname.video0
# reboot -- -r
When you display link information, the link aggregation is included in the list.
# dladm show-link
LINK       CLASS  MTU   STATE  OVER
subvideo0  phys   1500  up     ----
subvideo1  phys   1500  up     ----
video0     aggr   1500  up     subvideo0, subvideo1
This procedure shows how to make the following changes to an aggregation definition:
Modifying the policy for the aggregation
Changing the mode for the aggregation
Assume the System Administrator role.
The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.
Modify the policy of the aggregation.
# dladm modify-aggr -P policy-key aggr

policy-key
    Represents one or more of the policies L2, L3, and L4, as explained in Policies and Load Balancing.

aggr
    Specifies the aggregation whose policy you want to modify.
Modify the LACP mode of the aggregation.
# dladm modify-aggr -L LACP-mode -T timer-value aggr

LACP-mode
    Indicates the LACP mode in which the aggregation is to run. The values are active, passive, and off. If the switch runs LACP in passive mode, be sure to configure active mode for your aggregation.

timer-value
    Indicates the LACP timer value, either short or long.
This example shows how to modify the policy of aggregation video0 to L2 and then turn on active LACP mode.
# dladm modify-aggr -P L2 video0
# dladm modify-aggr -L active -T short video0
# dladm show-aggr
LINK    POLICY  ADDRPOLICY  LACPACTIVITY  LACPTIMER  FLAGS
video0  L2      auto        active        short      -----
Assume the System Administrator role or become superuser.
The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.
Ensure that the link you want to add has no IP interface that is plumbed over the link.
# ifconfig interface unplumb
Add the link to the aggregation.
# dladm add-aggr -l link [-l link] [...] aggr
where link represents a data link that you are adding to the aggregation.
After the additional data links are added, perform any other tasks that are needed to update the link aggregation configuration.
For example, in the case of a configuration that is illustrated in Figure 6–3, you might need to add or modify cable connections and reconfigure switches to accommodate the additional data links. Refer to the switch documentation to perform any reconfiguration tasks on the switch.
This example shows how to add a link to the aggregation video0.
# dladm show-link
LINK       CLASS  MTU   STATE    OVER
subvideo0  phys   1500  up       ----
subvideo1  phys   1500  up       ----
video0     aggr   1500  up       subvideo0, subvideo1
net3       phys   1500  unknown  ----
# dladm add-aggr -l net3 video0
# dladm show-link
LINK       CLASS  MTU   STATE  OVER
subvideo0  phys   1500  up     ----
subvideo1  phys   1500  up     ----
video0     aggr   1500  up     subvideo0, subvideo1, net3
net3       phys   1500  up     ----
Assume the System Administrator role.
The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.
Remove a link from the aggregation.
# dladm remove-aggr -l link aggr-link
This example shows how to remove a link from the aggregation video0.
# dladm show-link
LINK       CLASS  MTU   STATE  OVER
subvideo0  phys   1500  up     ----
subvideo1  phys   1500  up     ----
video0     aggr   1500  up     subvideo0, subvideo1, net3
net3       phys   1500  up     ----
# dladm remove-aggr -l net3 video0
# dladm show-link
LINK       CLASS  MTU   STATE    OVER
subvideo0  phys   1500  up       ----
subvideo1  phys   1500  up       ----
video0     aggr   1500  up       subvideo0, subvideo1
net3       phys   1500  unknown  ----
Assume the System Administrator role.
The System Administrator role includes the Network Management profile. To create the role and assign the role to a user, see Chapter 9, Using Role-Based Access Control (Tasks), in System Administration Guide: Security Services.
Unplumb the aggregation.
# ifconfig aggr unplumb
Delete the aggregation.
# dladm delete-aggr aggr
To make the deletion persistent, remove the link aggregation's /etc/hostname.interface file.

# rm /etc/hostname.interface
This example deletes the aggregation video0. The deletion is persistent.
# ifconfig video0 unplumb
# dladm delete-aggr video0
# rm /etc/hostname.video0
In the same manner as configuring VLANs over an interface, you can also create VLANs on a link aggregation. VLANs are described in Chapter 5, Administering VLANs. This section combines configuring VLANs and link aggregations.
Create the link aggregation first and configure it with a valid IP address. To create link aggregations, refer to How to Create a Link Aggregation.
List the aggregations that are configured in the system.
# dladm show-link
Create a VLAN over the link aggregation.
# dladm create-vlan -l link -v VID vlan-link

where

link
    Specifies the link on which the VLAN interface is being created. In this case, the link refers to the link aggregation.

VID
    Indicates the VLAN ID number.

vlan-link
    Specifies the name of the VLAN, which can also be an administratively chosen name.
Repeat Step 2 to create other VLANs over the aggregation.
Configure IP interfaces over the VLANs with valid IP addresses.
To create persistent VLAN configurations, add the IP address information to the corresponding /etc/hostname.interface configuration files.
The interface takes the name of the VLAN that you assigned.
In this example, two VLANs are configured on a link aggregation. The VLANs are assigned VIDs 193 and 194, respectively.
# dladm show-link
LINK       CLASS  MTU   STATE  OVER
subvideo0  phys   1500  up     ----
subvideo1  phys   1500  up     ----
video0     aggr   1500  up     subvideo0, subvideo1
# dladm create-vlan -l video0 -v 193 salesregion1
# dladm create-vlan -l video0 -v 194 salesregion2
# ifconfig salesregion1 plumb 192.168.10.5/24 up
# ifconfig salesregion2 plumb 192.168.10.25/24 up
# vi /etc/hostname.salesregion1
192.168.10.5/24
# vi /etc/hostname.salesregion2
192.168.10.25/24
This section provides an example that combines all the procedures in the previous chapters about configuring links, VLANs, and link aggregations while using customized names. For a description of other networking scenarios that use customized names, see the article at http://www.sun.com/bigadmin/sundocs/articles/vnamingsol.jsp.
In this example, a system with 4 NICs must be configured as a router for 8 separate subnets. To meet this objective, 8 links are configured, one for each subnet. First, a link aggregation is created over all 4 NICs. This untagged link becomes the default untagged subnet, to which the default route points.
Then VLAN interfaces are configured over the link aggregation for the other subnets. The subnets are named according to a color-coded scheme, and the VLANs are named to correspond to their respective subnets. The final configuration consists of 8 links for the 8 subnets: 1 untagged link and 7 tagged VLAN links.
To make the configurations persist across reboots, the same procedures apply as in previous Solaris releases. For example, IP addresses need to be added to configuration files like /etc/inet/ndpd.conf or /etc/hostname.interface. Or, filter rules for the interfaces need to be included in a rules file. These final steps are not included in the example. For these steps, refer to the appropriate chapters in System Administration Guide: IP Services, particularly TCP/IP Administration and DHCP.
# dladm show-link
LINK     CLASS  MTU   STATE  OVER
nge0     phys   1500  up     --
nge1     phys   1500  up     --
e1000g0  phys   1500  up     --
e1000g1  phys   1500  up     --
# dladm show-phys
LINK     MEDIA     STATE  SPEED   DUPLEX  DEVICE
nge0     Ethernet  up     1000Mb  full    nge0
nge1     Ethernet  up     1000Mb  full    nge1
e1000g0  Ethernet  up     1000Mb  full    e1000g0
e1000g1  Ethernet  up     1000Mb  full    e1000g1
# ifconfig nge0 unplumb
# ifconfig nge1 unplumb
# ifconfig e1000g0 unplumb
# ifconfig e1000g1 unplumb
# dladm rename-link nge0 net0
# dladm rename-link nge1 net1
# dladm rename-link e1000g0 net2
# dladm rename-link e1000g1 net3
# dladm show-link
LINK  CLASS  MTU   STATE  OVER
net0  phys   1500  up     --
net1  phys   1500  up     --
net2  phys   1500  up     --
net3  phys   1500  up     --
# dladm show-phys
LINK  MEDIA     STATE  SPEED   DUPLEX  DEVICE
net0  Ethernet  up     1000Mb  full    nge0
net1  Ethernet  up     1000Mb  full    nge1
net2  Ethernet  up     1000Mb  full    e1000g0
net3  Ethernet  up     1000Mb  full    e1000g1
# dladm create-aggr -P L2,L3 -l net0 -l net1 -l net2 -l net3 default0
# dladm show-link
LINK      CLASS  MTU   STATE  OVER
net0      phys   1500  up     --
net1      phys   1500  up     --
net2      phys   1500  up     --
net3      phys   1500  up     --
default0  aggr   1500  up     net0 net1 net2 net3
# dladm create-vlan -v 2 -l default0 orange0
# dladm create-vlan -v 3 -l default0 green0
# dladm create-vlan -v 4 -l default0 blue0
# dladm create-vlan -v 5 -l default0 white0
# dladm create-vlan -v 6 -l default0 yellow0
# dladm create-vlan -v 7 -l default0 red0
# dladm create-vlan -v 8 -l default0 cyan0
# dladm show-link
LINK      CLASS  MTU   STATE  OVER
net0      phys   1500  up     --
net1      phys   1500  up     --
net2      phys   1500  up     --
net3      phys   1500  up     --
default0  aggr   1500  up     net0 net1 net2 net3
orange0   vlan   1500  up     default0
green0    vlan   1500  up     default0
blue0     vlan   1500  up     default0
white0    vlan   1500  up     default0
yellow0   vlan   1500  up     default0
red0      vlan   1500  up     default0
cyan0     vlan   1500  up     default0
# dladm show-vlan
LINK     VID  OVER      FLAGS
orange0  2    default0  -----
green0   3    default0  -----
blue0    4    default0  -----
white0   5    default0  -----
yellow0  6    default0  -----
red0     7    default0  -----
cyan0    8    default0  -----
# ifconfig orange0 plumb ...
# ifconfig green0 plumb ...
# ifconfig blue0 plumb ...
# ifconfig white0 plumb ...
# ifconfig yellow0 plumb ...
# ifconfig red0 plumb ...
# ifconfig cyan0 plumb ...
IP network multipathing (IPMP) provides physical interface failure detection, transparent network access failover, and packet load spreading for systems with multiple interfaces that are connected to a particular local area network (LAN).
This chapter contains the following information:
Throughout the description of IPMP in this chapter and in Chapter 8, Administering IPMP, all references to the term interface specifically mean IP interface. Unless a qualification explicitly indicates a different use of the term, such as a network interface card (NIC), the term always refers to the interface that is configured on the IP layer.
The following features differentiate the current IPMP implementation from the previous implementation:
An IPMP group is represented as an IPMP IP interface. This interface is treated just like any other interface on the IP layer of the networking stack. All IP administrative tasks, routing tables, Address Resolution Protocol (ARP) tables, firewall rules, and other IP-related procedures work with an IPMP group by referring to the IPMP interface.
The system becomes responsible for the distribution of data addresses among underlying active interfaces. In the previous IPMP implementation, the administrator determined the binding of data addresses to corresponding interfaces when the IPMP group was created. In the current implementation, when the IPMP group is created, data addresses belong to the IPMP interface as an address pool. The kernel then automatically and randomly binds the data addresses to the underlying active interfaces of the group.
The ipmpstat tool is introduced as the principal tool to obtain information about IPMP groups. This command provides information about all aspects of the IPMP configuration, such as the underlying IP interfaces of the group, test and data addresses, types of failure detection being used, and which interfaces have failed. The ipmpstat functions, the options you can use, and the output each option generates are all described in Monitoring IPMP Information.
The IPMP interface can be assigned a customized name to identify the IPMP group more easily within your network setup. For the procedures to configure IPMP groups with customized names, see any procedure that describes the creation of an IPMP group in Configuring IPMP Groups.
This section describes various topics about the use of IPMP groups.
Different factors can cause an interface to become unusable: the interface itself can fail, or it might be switched offline for hardware maintenance. In such cases, without an IPMP group, the system can no longer be contacted by using any of the IP addresses that are associated with that unusable interface. Additionally, existing connections that use those IP addresses are disrupted.
With IPMP, one or more IP interfaces can be configured into an IPMP group. The group functions like an IP interface with data addresses to send or receive network traffic. If an underlying interface in the group fails, the data addresses are redistributed among the remaining underlying active interfaces in the group. Thus, the group maintains network connectivity despite an interface failure. With IPMP, network connectivity is always available, provided that a minimum of one interface is usable for the group.
Additionally, IPMP improves overall network performance by automatically spreading out outbound network traffic across the set of interfaces in the IPMP group. This process is called outbound load spreading. The system also indirectly controls inbound load spreading by performing source address selection for packets whose IP source address was not specified by the application. However, if an application has explicitly chosen an IP source address, then the system does not vary that source address.
The configuration of an IPMP group is determined by your system configurations. Observe the following rules:
Multiple IP interfaces on the same local area network or LAN must be configured into an IPMP group. LAN broadly refers to a variety of local network configurations including VLANs and both wired and wireless local networks whose nodes belong to the same link-layer broadcast domain.
Underlying IP interfaces of an IPMP group must not span different LANs.
For example, suppose that a system with three interfaces is connected to two separate LANs. Two IP interfaces link to one LAN while a single IP interface connects to the other. In this case, the two IP interfaces connecting to the first LAN must be configured as an IPMP group, as required by the first rule. In compliance with the second rule, the single IP interface that connects to the second LAN cannot become a member of that IPMP group. No IPMP configuration is required of the single IP interface. However, you can configure the single interface into an IPMP group to monitor the interface's availability. The single-interface IPMP configuration is discussed further in Types of IPMP Interface Configurations.
Consider another case where the link to the first LAN consists of three IP interfaces while the other link consists of two interfaces. This setup requires the configuration of two IPMP groups: a three-interface group that links to the first LAN, and a two-interface group to connect to the second.
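The two-group layout above might be configured as follows. This is an illustrative sketch only: the link names (netA0 through netB1) and group names (groupa, groupb) are invented for this example, and exact ifconfig usage can vary by Solaris release.

```shell
# Hypothetical sketch: two IPMP groups, one per LAN.
# All link and group names below are invented for illustration.

# Three interfaces on the first LAN form one IPMP group.
ifconfig groupa ipmp              # create the IPMP interface for LAN A
ifconfig netA0 group groupa
ifconfig netA1 group groupa
ifconfig netA2 group groupa

# Two interfaces on the second LAN form a separate group.
ifconfig groupb ipmp              # create the IPMP interface for LAN B
ifconfig netB0 group groupb
ifconfig netB1 group groupb
```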
IPMP and link aggregation are different technologies to achieve improved network performance as well as maintain network availability. In general, you deploy link aggregation to obtain better network performance, while you use IPMP to ensure high availability.
The following table presents a general comparison between link aggregation and IPMP.
| | IPMP | Link Aggregation |
|---|---|---|
| Network technology type | Layer 3 (IP layer) | Layer 2 (link layer) |
| Configuration tool | ifconfig | dladm |
| Link-based failure detection | Supported | Supported |
| Probe-based failure detection | ICMP-based, targeting any defined system in the same IP subnet as test addresses, across multiple levels of intervening layer-2 switches | Based on Link Aggregation Control Protocol (LACP), targeting the immediate peer host or switch |
| Use of standby interfaces | Supported | Not supported |
| Span multiple switches | Supported | Generally not supported; some vendors provide proprietary and non-interoperable solutions to span multiple switches |
| Hardware support | Not required | Required. For example, a link aggregation on a system that is running the Solaris OS requires that corresponding ports on the switches also be aggregated |
| Link layer requirements | Broadcast-capable | Ethernet-specific |
| Driver framework requirements | None | Must use GLDv3 framework |
| Load spreading support | Present, controlled by kernel. Inbound load spreading is indirectly affected by source address selection | Finer-grained administrator control over load spreading of outbound traffic by using the dladm command. Inbound load spreading supported |
In link aggregations, incoming traffic is spread over the multiple links that comprise the aggregation. Thus, networking performance is enhanced as more NICs are installed to add links to the aggregation. IPMP's traffic uses the IPMP interface's data addresses as they are bound to the available active interfaces. Thus, for example, if all the data traffic is flowing between only two IP addresses but not necessarily over the same connection, then adding more NICs will not improve performance with IPMP because only two IP addresses remain usable.
The two technologies complement each other and can be deployed together to provide the combined benefits of network performance and availability. For example, except where proprietary solutions are provided by certain vendors, link aggregations currently cannot span multiple switches. Thus, a switch becomes a single point of failure for a link aggregation between the switch and a host. If the switch fails, the link aggregation is likewise lost, and network performance declines. IPMP groups do not face this switch limitation. Thus, in the scenario of a LAN using multiple switches, link aggregations that connect to their respective switches can be combined into an IPMP group on the host. With this configuration, both enhanced network performance as well as high availability are obtained. If a switch fails, the data addresses of the link aggregation to that failed switch are redistributed among the remaining link aggregations in the group.
For other information about link aggregations, see Chapter 6, Administering Link Aggregations.
With support for customized link names, link configuration is no longer bound to the physical NIC with which the link is associated. Using customized link names gives you greater flexibility in administering IP interfaces. This flexibility extends to IPMP administration as well. When an underlying interface of an IPMP group fails, resolving the failure might require replacing the physical hardware. The replacement NIC, provided it is the same type as the failed NIC, can be renamed to inherit the configuration of the failed NIC. You do not have to create new configurations for the new NIC before you can add it to the IPMP group. After you rename the new NIC's link with the link name of the replaced NIC, the new NIC automatically becomes a member of the IPMP group when you bring that NIC online. The multipathing daemon then deploys the interface according to the IPMP configuration of active and standby interfaces.
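The replacement flow might look like the following sketch. The specific link names (net5 as the replacement NIC's default name, subitops0 as the inherited name) are invented for illustration:

```shell
# Hypothetical sketch: a replacement NIC inherits a failed NIC's link name.
# Suppose the failed NIC's customized link name was subitops0 and the
# replacement hardware initially appears as net5 (both names invented here).
dladm rename-link net5 subitops0

# When the renamed link is brought online, the interface automatically
# rejoins the IPMP group that subitops0 belonged to; the multipathing
# daemon then deploys it as an active or standby interface as configured.
```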
Therefore, to optimize your networking configuration and facilitate IPMP administration, you should employ flexible link names for your interfaces by assigning them generic names. In the section How IPMP Works, all the examples use flexible link names for the IPMP group and its underlying interfaces. For details about the processes behind NIC replacements in a networking environment that uses customized link names, refer to IPMP and Dynamic Reconfiguration. For an overview of the networking stack and the use of customized link names, refer to Overview of the Networking Stack.
IPMP maintains network availability by attempting to preserve the original number of active and standby interfaces when the group was created.
IPMP failure detection can be link-based or probe-based or both to determine the availability of a specific underlying IP interface in the group. If IPMP determines that an underlying interface has failed, then that interface is flagged as failed and is no longer usable. The data IP address that was associated with the failed interface is then redistributed to another functioning interface in the group. If available, a standby interface is also deployed to maintain the original number of active interfaces.
Consider a three-interface IPMP group itops0 with an active-standby configuration, as illustrated in Figure 7–1.
The group itops0 is configured as follows:
Two data addresses are assigned to the group: 192.168.10.10 and 192.168.10.15.
Two underlying interfaces are configured as active interfaces and are assigned flexible link names: subitops0 and subitops1.
The group has one standby interface, also with a flexible link name: subitops2.
Probe-based failure detection is used, and thus the active and standby interfaces are configured with test addresses, as follows:
subitops0: 192.168.10.30
subitops1: 192.168.10.32
subitops2: 192.168.10.34
The Active, Offline, Reserve, and Failed areas in the figures indicate only the status of underlying interfaces, not physical locations. Neither physical movement of interfaces or addresses nor transfer of IP interfaces occurs within this IPMP implementation. The areas only serve to show how an underlying interface changes status as a result of either failure or repair.
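A configuration like itops0 might be built with commands along the following lines. This is a sketch only, using the addresses and link names from the example; exact ifconfig option usage can vary by Solaris release:

```shell
# Hypothetical sketch of the commands behind the itops0 example group.
# Addresses and link names are taken from Figure 7-1's description.

# Create the IPMP interface and assign its two data addresses.
ifconfig itops0 ipmp
ifconfig itops0 192.168.10.10 up addif 192.168.10.15 up

# Add the two active underlying interfaces, each with a test address.
# The -failover option marks an address as a NOFAILOVER test address.
ifconfig subitops0 group itops0 192.168.10.30 -failover up
ifconfig subitops1 group itops0 192.168.10.32 -failover up

# Add the standby interface with its test address.
ifconfig subitops2 group itops0 192.168.10.34 -failover standby up
```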
You can use the ipmpstat command with different options to display specific types of information about existing IPMP groups. For additional examples, see Monitoring IPMP Information.
The IPMP configuration in Figure 7–1 can be displayed by using the following ipmpstat command:
# ipmpstat -g
GROUP       GROUPNAME   STATE   FDT       INTERFACES
itops0      itops0      ok      10.00s    subitops1 subitops0 (subitops2)
To display information about the group's underlying interfaces, you would type the following:
# ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE  STATE
subitops0   yes     itops0  -------   up     ok     ok
subitops1   yes     itops0  --mb---   up     ok     ok
subitops2   no      itops0  is-----   up     ok     ok
IPMP maintains network availability by managing the underlying interfaces to preserve the original number of active interfaces. Thus, if subitops0 fails, then subitops2 is deployed to ensure that the group continues to have two active interfaces. The activation of subitops2 is shown in Figure 7–2.
The one-to-one mapping of data addresses to active interfaces in Figure 7–2 serves only to simplify the illustration. The IP kernel module can assign data addresses randomly without necessarily adhering to a one-to-one relationship between data addresses and interfaces.
The ipmpstat utility displays the information in Figure 7–2 as follows:
# ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE   STATE
subitops0   no      itops0  -------   up     failed  failed
subitops1   yes     itops0  --mb---   up     ok      ok
subitops2   yes     itops0  -s-----   up     ok      ok
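Output in this form can also be filtered mechanically, for example when scripting health checks. The following self-contained sketch embeds sample output in a here-document so it can run anywhere; on a live system you would pipe the output of ipmpstat -i instead:

```shell
# Sketch: extract the names of failed underlying interfaces from
# ipmpstat -i output. STATE is the seventh whitespace-separated field.
awk 'NR > 1 && $7 == "failed" { print $1 }' <<'EOF'
INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE   STATE
subitops0   no      itops0  -------   up     failed  failed
subitops1   yes     itops0  --mb---   up     ok      ok
subitops2   yes     itops0  -s-----   up     ok      ok
EOF
```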
After subitops0 is repaired, it reverts to its status as an active interface. In turn, subitops2 is returned to its original standby status.
A different failure scenario is shown in Figure 7–3, where the standby interface subitops2 fails (1), and later, one active interface, subitops1, is switched offline by the administrator (2). The result is that the IPMP group is left with a single functioning interface, subitops0.
The ipmpstat utility would display the information illustrated by Figure 7–3 as follows:
# ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE   STATE
subitops0   yes     itops0  -------   up     ok      ok
subitops1   no      itops0  --mb-d-   up     ok      offline
subitops2   no      itops0  is-----   up     failed  failed
For this particular failure, the recovery after an interface is repaired behaves differently. The restoration depends on the IPMP group's original number of active interfaces compared with the configuration after the repair. The recovery process is represented graphically in Figure 7–4.
In Figure 7–4, when subitops2 is repaired, it would normally revert to its original status as a standby interface (1). However, the IPMP group still would not reflect the original number of two active interfaces, because subitops1 continues to remain offline (2). Thus, IPMP deploys subitops2 as an active interface instead (3).
The ipmpstat utility would display the post-repair IPMP scenario as follows:
# ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE   STATE
subitops0   yes     itops0  -------   up     ok      ok
subitops1   no      itops0  --mb-d-   up     ok      offline
subitops2   yes     itops0  -s-----   up     ok      ok
A similar restore sequence occurs if the failure involves an active interface that is also configured in FAILBACK=no mode, where a failed active interface does not automatically revert to active status upon repair. Suppose subitops0 in Figure 7–2 is configured in FAILBACK=no mode. In that mode, a repaired subitops0 is switched to a reserve status as a standby interface, even though it was originally an active interface. The interface subitops2 would remain active to maintain the IPMP group's original number of two active interfaces. The ipmpstat utility would display the recovery information as follows:
# ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE   STATE
subitops0   no      itops0  i------   up     ok      ok
subitops1   yes     itops0  --mb---   up     ok      ok
subitops2   yes     itops0  -s-----   up     ok      ok
For more information about this type of configuration, see The FAILBACK=no Mode.
Solaris IPMP involves the following software:
The multipathing daemon in.mpathd detects interface failures and repairs. The daemon performs both link-based failure detection and probe-based failure detection if test addresses are configured for the underlying interfaces. Depending on the type of failure detection method that is employed, the daemon sets or clears the appropriate flags on the interface to indicate whether the interface failed or has been repaired. As an option, the daemon can also be configured to monitor the availability of all interfaces, including those that are not configured to belong to an IPMP group. For a description of failure detection, see Failure and Repair Detection in IPMP.
The in.mpathd daemon also controls the designation of active interfaces in the IPMP group. The daemon attempts to maintain the same number of active interfaces that was originally configured when the IPMP group was created. Thus in.mpathd activates or deactivates underlying interfaces as needed to be consistent with the administrator's configured policy. For more information about the manner by which the in.mpathd daemon manages activation of underlying interfaces, refer to How IPMP Works. For more information about the daemon, refer to the in.mpathd(1M) man page.
The IP kernel module manages outbound load-spreading by distributing the set of available IP data addresses in the group across the set of available underlying IP interfaces in the group. The module also performs source address selection to manage inbound load-spreading. Both roles of the IP module improve network traffic performance.
The IPMP configuration file /etc/default/mpathd is used to configure the daemon's behavior. For example, you can specify how the daemon performs probe-based failure detection by setting the time duration to probe a target to detect failure, or which interfaces to probe. You can also specify what the status of a failed interface should be after that interface is repaired. You also set the parameters in this file to specify whether the daemon should monitor all IP interfaces in the system, not only those that are configured to belong to IPMP groups. For procedures to modify the configuration file, refer to How to Configure the Behavior of the IPMP Daemon.
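A fragment of /etc/default/mpathd might look like the following. The parameter names mirror the Solaris defaults; the values here are illustrative, not recommendations:

```shell
# Sample /etc/default/mpathd fragment (illustrative values).

# Probe-based failure detection time, in milliseconds (default 10 seconds).
FAILURE_DETECTION_TIME=10000

# Whether repaired interfaces automatically return to active status.
# Set to no for the FAILBACK=no mode described later in this chapter.
FAILBACK=yes

# Set to no to have in.mpathd also monitor interfaces that do not
# belong to any IPMP group (the "anonymous group").
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
```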
The ipmpstat utility provides different types of information about the status of IPMP as a whole. The tool also displays other specific information about the underlying IP interfaces for each group, as well as data and test addresses that have been configured for the group. For more information about the use of this command, see Monitoring IPMP Information and the ipmpstat(1M) man page.
An IPMP configuration typically consists of two or more physical interfaces on the same system that are attached to the same LAN. These interfaces can belong to an IPMP group in either of the following configurations:
active-active configuration – an IPMP group in which all underlying interfaces are active. An active interface is an IP interface that is currently available for use by the IPMP group. By default, an underlying interface becomes active when you configure the interface to become part of an IPMP group. For additional information about active interfaces and other IPMP terms, see also IPMP Terminology and Concepts.
active-standby configuration – an IPMP group in which at least one interface is administratively configured as a reserve. The reserve interface is called the standby interface. Although idle, the standby IP interface is monitored by the multipathing daemon to track the interface's availability, depending on how the interface is configured. If link-failure notification is supported by the interface, link-based failure detection is used. If the interface is configured with a test address, probe-based failure detection is also used. If an active interface fails, the standby interface is automatically deployed as needed. You can configure as many standby interfaces as you want for an IPMP group.
A single interface can also be configured in its own IPMP group. The single-interface IPMP group has the same behavior as an IPMP group with multiple interfaces. However, this IPMP configuration does not provide high availability for network traffic. If the underlying interface fails, then the system loses all capability to send or receive traffic. The purpose of configuring a single-interface IPMP group is to monitor the availability of the interface by using failure detection. By configuring a test address on the interface, you can set the daemon to track the interface by using probe-based failure detection. Typically, a single-interface IPMP group configuration is used in conjunction with other technologies that have broader failover capabilities, such as Sun Cluster software. The system can continue to monitor the status of the underlying interface, but the Sun Cluster software provides the functionality to ensure availability of the network when failure occurs. For more information about the Sun Cluster software, see Sun Cluster Overview for Solaris OS.
An IPMP group without underlying interfaces can also exist, such as a group whose underlying interfaces have been removed. The IPMP group is not destroyed, but the group cannot be used to send and receive traffic. As underlying IP interfaces are brought online for the group, then the data addresses of the IPMP interface are allocated to these interfaces and the system resumes hosting network traffic.
You can configure IPMP failure detection on both IPv4 networks and dual-stack, IPv4 and IPv6 networks. Interfaces that are configured with IPMP support two types of addresses:
Data Addresses are the conventional IPv4 and IPv6 addresses that are assigned to an IP interface dynamically at boot time by the DHCP server, or manually by using the ifconfig command. Data addresses are assigned to the IPMP interface. The standard IPv4 packet traffic and, if applicable, IPv6 packet traffic are considered data traffic. Data traffic flows use the data addresses that are hosted on the IPMP interface and flow through the active interfaces of that group.
Test Addresses are IPMP-specific addresses that are used by the in.mpathd daemon to perform probe-based failure and repair detection. Test addresses can also be assigned dynamically by the DHCP server, or manually by using the ifconfig command. These addresses are configured with the NOFAILOVER flag that identifies them as test addresses. While data addresses are assigned to the IPMP interface, only test addresses are assigned to the underlying interfaces of the group. For an underlying interface on a dual-stack network, you can configure an IPv4 test address or an IPv6 test address or both. When an underlying interface fails, the interface's test address continues to be used by the in.mpathd daemon for probe-based failure detection to check for the interface's subsequent repair.
You need to configure test addresses only if you specifically want to use probe-based failure detection. For more information about probe-based failure detection and the use of test addresses, refer to Probe-Based Failure Detection.
In previous IPMP implementations, test addresses needed to be marked as DEPRECATED to avoid being used by applications especially during interface failures. In the current implementation, test addresses reside in the underlying interfaces. Thus, these addresses can no longer be accidentally used by applications that are unaware of IPMP. Consequently, marking test addresses as DEPRECATED is no longer required.
In general, you can use any IPv4 address on your subnet as a test address. IPv4 test addresses do not need to be routeable. Because IPv4 addresses are a limited resource for many sites, you might want to use non-routeable RFC 1918 private addresses as test addresses. Note that the in.mpathd daemon exchanges only ICMP probes with other hosts on the same subnet as the test address. If you do use RFC 1918-style test addresses, be sure to configure other systems, preferably routers, on the network with addresses on the appropriate RFC 1918 subnet. The in.mpathd daemon can then successfully exchange probes with target systems. For more information about RFC 1918 private addresses, refer to RFC 1918, Address Allocation for Private Internets.
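The RFC 1918 private ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The following portable shell sketch (a hypothetical helper, not part of Solaris) checks whether a candidate test address falls in one of these ranges:

```shell
# Sketch: report whether an IPv4 address is in an RFC 1918 private range.
# Hypothetical helper for choosing non-routeable IPMP test addresses.
is_rfc1918() {
  case "$1" in
    10.*)                                   echo yes ;;  # 10.0.0.0/8
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  echo yes ;;  # 172.16.0.0/12
    192.168.*)                              echo yes ;;  # 192.168.0.0/16
    *)                                      echo no  ;;
  esac
}

is_rfc1918 192.168.10.30   # a private address, suitable as a test address
is_rfc1918 8.8.8.8         # a public address
```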
The only valid IPv6 test address is the link-local address of a physical interface. You do not need a separate IPv6 address to serve as an IPMP test address. The IPv6 link-local address is based on the Media Access Control (MAC) address of the interface. Link-local addresses are automatically configured when the interface becomes IPv6-enabled at boot time or when the interface is manually configured through ifconfig. Just like IPv4 test addresses, IPv6 test addresses must be configured with the NOFAILOVER flag.
For more information on link-local addresses, refer to Link-Local Unicast Address in System Administration Guide: IP Services.
When an IPMP group has both IPv4 and IPv6 plumbed on all the group's interfaces, you do not need to configure separate IPv4 test addresses. The in.mpathd daemon can use the IPv6 link-local addresses with the NOFAILOVER flag as test addresses.
To ensure continuous availability of the network to send or receive traffic, IPMP performs failure detection on the IPMP group's underlying IP interfaces. Failed interfaces remain unusable until these are repaired. Remaining active interfaces continue to function while any existing standby interfaces are deployed as needed.
A group failure occurs when all interfaces in an IPMP group appear to fail at the same time. In this case, no underlying interface is usable. Also, when all the target systems fail at the same time and probe-based failure detection is enabled, the in.mpathd daemon flushes all of its current target systems and probes for new target systems.
The in.mpathd daemon handles the following types of failure detection:
Link-based failure detection, if supported by the NIC driver
Probe-based failure detection, when test addresses are configured
Detection of interfaces that were missing at boot time
Link-based failure detection is always enabled, provided that the interface supports this type of failure detection.
To determine whether a third-party interface supports link-based failure detection, use the ipmpstat -i command. If the output for a given interface includes an unknown status for its LINK column, then that interface does not support link-based failure detection. Refer to the manufacturer's documentation for more specific information about the device.
Network drivers that support link-based failure detection monitor the interface's link state and notify the networking subsystem when that link state changes. When notified of a change, the networking subsystem either sets or clears the RUNNING flag for that interface, as appropriate. If the in.mpathd daemon detects that the interface's RUNNING flag has been cleared, the daemon immediately fails the interface.
The multipathing daemon performs probe-based failure detection on each interface in the IPMP group that has a test address. Probe-based failure detection involves sending and receiving ICMP probe messages that use test addresses. These messages, also called probe traffic or test traffic, go out over the interface to one or more target systems on the same local network. The daemon probes all the targets separately through all the interfaces that have been configured for probe-based failure detection. If no replies are made in response to five consecutive probes on a given interface, in.mpathd considers the interface to have failed. The probing rate depends on the failure detection time (FDT). The default value for failure detection time is 10 seconds. However, you can tune the failure detection time in the IPMP configuration file. For instructions, go to How to Configure the Behavior of the IPMP Daemon.

To optimize probe-based failure detection, you must set multiple target systems to receive the probes from the multipathing daemon. By having multiple target systems, you can better determine the nature of a reported failure. For example, the absence of a response from the only defined target system can indicate a failure either in the target system or in one of the IPMP group's interfaces. By contrast, if only one system among several target systems does not respond to a probe, then the failure is likely in the target system rather than in the IPMP group itself.
Repair detection time is twice the failure detection time. The default time for failure detection is 10 seconds. Accordingly, the default time for repair detection is 20 seconds. After a failed interface has been repaired and the interface's RUNNING flag is once more detected, in.mpathd clears the interface's FAILED flag. The repaired interface is redeployed depending on the number of active interfaces that the administrator has originally set.
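The relationship between the two times is a simple doubling. As a small sketch, assuming the /etc/default/mpathd convention of expressing FAILURE_DETECTION_TIME in milliseconds:

```shell
# Sketch: derive repair detection time from the failure detection time.
# 10000 ms is the documented default; repair detection is twice the FDT.
fdt_ms=10000
repair_ms=$((fdt_ms * 2))
echo "failure detection: $((fdt_ms / 1000))s, repair detection: $((repair_ms / 1000))s"
```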
The in.mpathd daemon determines which target systems to probe dynamically. First the daemon searches the routing table for target systems that are on the same subnet as the test addresses that are associated with the IPMP group's interfaces. If such targets are found, then the daemon uses them as targets for probing. If no target systems are found on the same subnet, then in.mpathd sends multicast packets to probe neighbor hosts on the link. The multicast packet is sent to the all-hosts multicast address, 224.0.0.1 in IPv4 and ff02::1 in IPv6, to determine which hosts to use as target systems. The first five hosts that respond to the echo packets are chosen as targets for probing. If in.mpathd cannot find routers or hosts that responded to the ICMP echo packets, then in.mpathd cannot detect probe-based failures. In this case, the ipmpstat -i utility will report the probe state as unknown.
You can use host routes to explicitly configure a list of target systems to be used by in.mpathd. For instructions, refer to Configuring for Probe-Based Failure Detection.
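Explicit probe targets are commonly pinned with static host routes, along the following lines. This is a sketch only: the target addresses are invented for illustration, and they must be on the same subnet as the group's test addresses:

```shell
# Hypothetical sketch: configure explicit probe targets for in.mpathd
# by adding static host routes. The addresses 192.168.10.1 and
# 192.168.10.2 are invented for this example.
route add -host 192.168.10.1 192.168.10.1 -static
route add -host 192.168.10.2 192.168.10.2 -static
```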
NICs that are not present at system boot represent a special instance of failure detection. At boot time, the startup scripts track any interfaces with /etc/hostname.interface files. Any data addresses in such an interface's /etc/hostname.interface file are automatically configured on the corresponding IPMP interface for the group. However, if the interfaces themselves cannot be plumbed because they are missing, then error messages similar to the following are displayed:
moving addresses from missing IPv4 interfaces: hme0 (moved to ipmp0)
moving addresses from missing IPv6 interfaces: hme0 (moved to ipmp0)
In this instance of failure detection, only data addresses that are explicitly specified in the missing interface's /etc/hostname.interface file are moved to the IPMP interface.
If an interface with the same name as another interface that was missing at system boot is reattached using DR, the Reconfiguration Coordination Manager (RCM) automatically plumbs the interface. Then, RCM configures the interface according to the contents of the interface's /etc/hostname.interface file. However, data addresses, which are addresses without the NOFAILOVER flag, that are in the /etc/hostname.interface file are ignored. This mechanism adheres to the rule that data addresses should be in the /etc/hostname.ipmp-interface file, and only test addresses should be in the underlying interface's /etc/hostname.interface file. Issuing the ifconfig group command causes that interface to again become part of the group. Thus, the final network configuration is identical to the configuration that would have been made if the system had been booted with the interface present.
For more information about missing interfaces, see About Missing Interfaces at System Boot.
IPMP supports failure detection in an anonymous group. By default, IPMP monitors the status only of interfaces that belong to IPMP groups. However, the IPMP daemon can be configured to also track the status of interfaces that do not belong to any IPMP group. Thus, these interfaces are considered to be part of an “anonymous group.” When you issue the command ipmpstat -g, the anonymous group is displayed as double-dashes (--). In anonymous groups, the data addresses of the interfaces also function as test addresses. Because these interfaces do not belong to a named IPMP group, these addresses are visible to applications. To enable tracking of interfaces that are not part of an IPMP group, see How to Configure the Behavior of the IPMP Daemon.
When an underlying interface fails and probe-based failure detection is used, the in.mpathd daemon continues to probe target systems by using the failed interface's test address. During an interface repair, the restoration proceeds depending on the original configuration of the failed interface:
Failed interface was originally an active interface – the repaired interface reverts to its original active status. The standby interface that functioned as a replacement during the failure is switched back to standby status if enough interfaces are active for the group as defined by the system administrator.
An exception occurs when the repaired active interface is also configured in the FAILBACK=no mode. For more information, see The FAILBACK=no Mode.
Failed interface was originally a standby interface – the repaired interface reverts to its original standby status, provided that the IPMP group reflects the original number of active interfaces. Otherwise, the standby interface is switched to become an active interface.
To see a graphical presentation of how IPMP behaves during interface failure and repair, see How IPMP Works.
By default, active interfaces that have failed and then been repaired automatically return to being active interfaces in the group. This behavior is controlled by the setting of the FAILBACK parameter in the daemon's configuration file. However, even the brief disruption that occurs as data addresses are remapped to repaired interfaces might not be acceptable to some administrators. Such administrators might prefer to allow an activated standby interface to continue as an active interface. IPMP allows administrators to override the default behavior to prevent an interface from automatically becoming active upon repair. These interfaces must be configured in the FAILBACK=no mode. For related procedures, see How to Configure the Behavior of the IPMP Daemon.
When an active interface in FAILBACK=no mode fails and is subsequently repaired, the IPMP daemon restores the IPMP configuration as follows:
The daemon retains the interface's INACTIVE status, provided that the IPMP group reflects the original configuration of active interfaces.
If the IPMP configuration at the moment of repair does not reflect the group's original configuration of active interfaces, then the repaired interface is redeployed as an active interface, notwithstanding the FAILBACK=no status.
The FAILBACK=no mode is set for the whole IPMP group. It is not a per-interface tunable parameter.
The dynamic reconfiguration (DR) feature allows you to reconfigure system hardware, such as interfaces, while the system is running. DR can be used only on systems that support this feature.
You typically use the cfgadm command to perform DR operations. However, some platforms provide other methods. Make sure to consult your platform's documentation for details about how to perform DR. For systems that use the Solaris OS, you can find specific documentation about DR in the resources that are listed in Table 7–1. Current information about DR is also available at http://docs.sun.com and can be obtained by searching for the topic “dynamic reconfiguration.”
Table 7–1 Documentation Resources for Dynamic Reconfiguration
Description | For Information
---|---
Detailed information on the cfgadm command | cfgadm(1M) man page
Specific information about DR in the Sun Cluster environment | Sun Cluster 3.1 System Administration Guide
Specific information about DR in the Sun Fire environment | Sun Fire 880 Dynamic Reconfiguration Guide
Introductory information about DR and the cfgadm command | 
Tasks for administering IPMP groups on a system that supports DR | Recovering an IPMP Configuration With Dynamic Reconfiguration
The sections that follow explain how DR interoperates with IPMP.
On a system that supports DR of NICs, IPMP can be used to preserve connectivity and prevent disruption of existing connections. IPMP is integrated into the Reconfiguration Coordination Manager (RCM) framework. Thus, you can safely attach, detach, or reattach NICs and RCM manages the dynamic reconfiguration of system components.
With DR support, you can attach, plumb, and then add new interfaces to existing IPMP groups. Or, if appropriate, you can configure the newly added interfaces into their own IPMP group. For procedures to configure IPMP groups, refer to Configuring IPMP Groups. After these interfaces have been configured, they are immediately available for use by IPMP. However, to benefit from the advantages of using customized link names, you must assign generic link names to replace the interfaces' hardware-based link names. Then you create corresponding configuration files by using the generic names that you just assigned. For procedures to configure a single interface by using customized link names, refer to How to Configure an IP Interface After System Installation. After you assign a generic link name to an interface, make sure that you always refer to the generic name when you perform any additional configuration on the interface, such as using the interface for IPMP.
All requests to detach system components that contain NICs are first checked to ensure that connectivity can be preserved. For instance, by default you cannot detach a NIC that is not in an IPMP group. You also cannot detach a NIC that contains the only functioning interfaces in an IPMP group. However, if you must remove the system component, you can override this behavior by using the -f option of cfgadm, as explained in the cfgadm(1M) man page.
If the checks are successful, the daemon sets the OFFLINE flag for the interface. All test addresses on the interfaces are unconfigured. Then, the NIC is unplumbed from the system. If any of these steps fail, or if the DR of other hardware on the same system component fails, then the previous configuration is restored to its original state. A status message about this event will be displayed. Otherwise, the detach request completes successfully. You can remove the component from the system. No existing connections are disrupted.
When an underlying interface of an IPMP group fails, a typical solution would be to replace the failed interface by attaching a new NIC. RCM records the configuration information associated with any NIC that is detached from a running system. If you replace a failed NIC with an identical NIC, then RCM automatically configures the interface according to the contents of the existing /etc/hostname.interface file.
For example, suppose you replace a failed bge0 interface with another bge0 interface. The failed bge0 already has a corresponding /etc/hostname.bge0 file. After you attach the replacement bge NIC, RCM plumbs and then configures the bge0 interface by using the information in the /etc/hostname.bge0 file. Thus the interface is properly configured with the test address and is added to the IPMP group according to the contents of the configuration file.
You can replace a failed NIC with a different NIC, provided that both are of the same type, such as Ethernet. In this case, RCM plumbs the new interface after it is attached. If you did not use customized link names when you first configured your interfaces, and no corresponding configuration file for the new interface exists, then you must perform additional configuration steps. You need to create a corresponding configuration file for the new NIC and add the correct information to that file before you can add the interface to the IPMP group.
However, if you used customized link names, the additional configuration steps are unnecessary. If you reassign the failed interface's link name to the new interface, the new interface acquires the configuration that is specified in the removed interface's configuration file. RCM then configures the interface by using the information in that file. For procedures to recover your IPMP configuration by using DR when an interface fails, refer to Recovering an IPMP Configuration With Dynamic Reconfiguration.
This section introduces terms and concepts that are used throughout the IPMP chapters in this book.
Refers to an underlying interface that can be used by the system to send or receive data traffic. An interface is active if the following conditions are met:
At least one IP address is UP in the interface. See UP address.
The FAILED, INACTIVE, or OFFLINE flag is not set on the interface.
The interface has not been flagged as having a duplicate hardware address.
Compare to unusable interface, INACTIVE interface.
Refers to an IP address that can be used as the source or destination address for data. Data addresses are part of an IPMP group and can be used to send and receive traffic on any interface in the group. Moreover, the set of data addresses in an IPMP group can be used continuously, provided that one interface in the group is functioning. In previous IPMP implementations, data addresses were hosted on the underlying interfaces of an IPMP group. In the current implementation, data addresses are hosted on the IPMP interface.
Refers to an IP address that cannot be used as the source address for data. Typically, IPMP test addresses are DEPRECATED. However, any address can be marked DEPRECATED to prevent the address from being used as a source address.
Refers to a feature that allows you to reconfigure a system while the system is running, with little or no impact on ongoing operations. Not all Sun platforms support DR. Some Sun platforms might only support DR of certain types of hardware. On platforms that support DR of NICs, IPMP can be used for uninterrupted network access to the system during DR.
For more information about how IPMP supports DR, refer to IPMP and Dynamic Reconfiguration.
Applies only to the current IPMP implementation. The term refers to the method of creating an IPMP interface by using the ifconfig ipmp command. Explicit IPMP interface creation is the preferred method for creating IPMP groups. This method allows the IPMP interface name and IPMP group name to be set by the administrator.
Compare to implicit IPMP interface creation.
Refers to a setting of an underlying interface that minimizes rebinding of incoming addresses to interfaces by avoiding redistribution during interface repair. Specifically, when an interface repair is detected, the interface's FAILED flag is cleared. However, if the mode of the repaired interface is FAILBACK=no, then the INACTIVE flag is also set to prevent use of the interface, provided that a second functioning interface also exists. If the second interface in the IPMP group fails, then the INACTIVE interface is eligible to take over. While the concept of failback no longer applies in the current IPMP implementation, the name of this mode is preserved for administrative compatibility.
Indicates an interface that the in.mpathd daemon has determined to be malfunctioning. The determination is achieved by either link-based or probe-based failure detection. The FAILED flag is set on any failed interface.
Refers to the process of detecting when a physical interface or the path from an interface to an Internet layer device no longer works. Two forms of failure detection are implemented: link-based failure detection, and probe-based failure detection.
Refers to the method of creating an IPMP interface by using the ifconfig command to place an underlying interface in a nonexistent IPMP group. Implicit IPMP interface creation is supported for backward compatibility with the previous IPMP implementation. Thus, this method does not provide the ability to set the IPMP interface name or IPMP group name.
Compare to explicit IPMP interface creation.
Refers to an interface that is functioning but is not being used according to administrative policy. The INACTIVE flag is set on any INACTIVE interface.
Compare to active interface, unusable interface.
Indicates an IPMP feature in which the IPMP daemon tracks the status of all network interfaces in the system, regardless of whether they belong to an IPMP group. However, if the interfaces are not actually in an IPMP group, then the addresses on these interfaces are not available in case of interface failure.
Refers to a set of network interfaces that are treated as interchangeable by the system in order to improve network availability and utilization. Each IPMP group has a set of data addresses that the system can associate with any set of active interfaces in the group. Use of this set of data addresses maintains network availability and improves network utilization. The administrator can select which interfaces to place into an IPMP group. However, all interfaces in the same group must share a common set of properties, such as being attached to the same link and configured with the same set of protocols (for example, IPv4 and IPv6).
See IPMP interface.
Refers to the name of an IPMP group, which can be assigned with the ifconfig group subcommand. All underlying interfaces with the same IPMP group name are defined as part of the same IPMP group. In the current implementation, IPMP group names are de-emphasized in favor of IPMP interface names. Administrators are encouraged to use the same name for both the IPMP interface and the group.
Applies only to the current IPMP implementation. The term refers to the IP interface that represents a given IPMP group, any or all of the interface's underlying interfaces, and all of the data addresses. In the current IPMP implementation, the IPMP interface is the core component for administering an IPMP group, and is used in routing tables, ARP tables, firewall rules, and so forth.
Indicates the name of an IPMP interface. This document uses the naming convention of ipmpN. The system also uses the same naming convention in implicit IPMP interface creation. However, the administrator can choose any name by using explicit IPMP interface creation.
Refers to an IPMP configuration that is used by Sun Cluster software that allows a data address to also act as a test address. This configuration applies, for instance, when only one interface belongs to an IPMP group.
Specifies a passive form of failure detection, in which the link status of the network card is monitored to determine an interface's status. Link-based failure detection only tests whether the link is up. This type of failure detection is not supported by all network card drivers. Link-based failure detection requires no explicit configuration and provides instantaneous detection of link failures.
Compare to probe-based failure detection.
Refers to the process of distributing inbound or outbound traffic over a set of interfaces. Unlike load balancing, load spreading does not guarantee that the load is evenly distributed. With load spreading, higher throughput is achieved. Load spreading occurs only when the network traffic is flowing to multiple destinations that use multiple connections.
Inbound load spreading indicates the process of distributing inbound traffic across the set of interfaces in an IPMP group. Inbound load spreading cannot be controlled directly with IPMP. The process is indirectly manipulated by the source address selection algorithm.
Outbound load spreading refers to the process of distributing outbound traffic across the set of interfaces in an IPMP group. Outbound load spreading is performed on a per-destination basis by the IP module, and is adjusted as necessary depending on the status and members of the interfaces in the IPMP group.
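Conceptually, per-destination outbound spreading maps each destination onto one of the group's active interfaces. The following sketch uses a toy hash of the destination address; the IP module's real selection algorithm is internal and not specified here:

```shell
#!/bin/sh
# Conceptual sketch of per-destination outbound load spreading: each
# destination maps consistently to one active interface, so traffic to
# many destinations spreads across the group. The hash below is a toy;
# the IP module's real selection algorithm is internal.
#
# pick_interface destination-ip active-interface...
pick_interface() {
    dest=$1
    shift                      # remaining arguments: active interfaces
    n=$#
    # Toy hash: sum the four octets of the destination address.
    sum=$(echo "$dest" | tr '.' ' ' | awk '{print $1 + $2 + $3 + $4}')
    idx=$((sum % n + 1))
    eval echo "\${$idx}"       # select the idx-th interface
}

pick_interface 192.168.84.3 ce0 ce1    # -> ce1
pick_interface 192.168.84.4 ce0 ce1    # -> ce0
```

Because the mapping is per destination, a single TCP connection always leaves through the same interface, which matches the per-destination behavior described above.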
Applies only to the previous IPMP implementation. Refers to an address that is associated with an underlying interface and thus remains unavailable if the underlying interface fails. All NOFAILOVER addresses have the NOFAILOVER flag set. IPMP test addresses must be designated as NOFAILOVER, while IPMP data addresses must never be designated as NOFAILOVER. The concept of failover does not exist in the current IPMP implementation. However, the term NOFAILOVER remains for administrative compatibility.
Indicates an interface that has been administratively disabled from system use, usually in preparation for being removed from the system. Such interfaces have the OFFLINE flag set. The if_mpadm command can be used to switch an interface to an offline status.
See underlying interface.
Refers to an ICMP packet, similar to the packets that are used by the ping command. This probe is used to test the send and receive paths of a given interface. Probe packets are sent by the in.mpathd daemon, if probe-based failure detection is enabled. A probe packet uses an IPMP test address as its source address.
Indicates an active form of failure detection, in which probes are exchanged with probe targets to determine an interface's status. When enabled, probe-based failure detection tests the entire send and receive path of each interface. However, this type of detection requires the administrator to explicitly configure each interface with a test address.
Compare to link-based failure detection.
Refers to a system on the same link as an interface in an IPMP group. The target is selected by the in.mpathd daemon to help check the status of a given interface by using probe-based failure detection. The probe target can be any host on the link that is capable of sending and receiving ICMP probes. Probe targets are usually routers. Several probe targets are usually used to insulate the failure detection logic from failures of the probe targets themselves.
Refers to the process of selecting a data address in the IPMP group as the source address for a particular packet. Source address selection is performed by the system whenever an application has not specifically selected a source address to use. Because each data address is associated with only one hardware address, source address selection indirectly controls inbound load spreading.
Indicates an interface that has been administratively configured to be used only when another interface in the group has failed. All STANDBY interfaces will have the STANDBY flag set.
Refers to an IP address that must be used as the source or destination address for probes, and must not be used as a source or destination address for data traffic. Test addresses are associated with an underlying interface. These addresses are designated as NOFAILOVER so that they remain on the underlying interface even if the interface fails, which facilitates repair detection. Because test addresses are not available upon interface failure, all test addresses must be designated as DEPRECATED to keep the system from using them as source addresses for data packets.
Specifies an IP interface that is part of an IPMP group and is directly associated with an actual network device. For example, if ce0 and ce1 are placed into IPMP group ipmp0, then ce0 and ce1 comprise the underlying interfaces of ipmp0. In the previous implementation, IPMP groups consist solely of underlying interfaces. However, in the current implementation, these interfaces underlie the IPMP interface (for example, ipmp0) that represents the group, hence the name.
Refers to the act of administratively enabling a previously offlined interface to be used by the system. The if_mpadm command can be used to perform an undo-offline operation.
Refers to an underlying interface that cannot be used to send or receive data traffic at all in its current configuration. An unusable interface differs from an INACTIVE interface, which is not currently being used but can be used if an active interface in the group becomes unusable. An interface is unusable if one of the following conditions exists:
The interface has no UP address.
The FAILED or OFFLINE flag has been set for the interface.
The interface has been flagged as having the same hardware address as another interface in the group.
See probe target.
Refers to an address that has been made administratively available to the system by setting the UP flag. An address that is not UP is treated as not belonging to the system, and thus is never considered during source address selection.
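The interface states defined above (active, INACTIVE, unusable) can be summarized in a small classification sketch. The function and its inputs are hypothetical; in.mpathd performs the real checks:

```shell
#!/bin/sh
# Classification sketch for an underlying interface, following the
# glossary rules above (illustrative only; in.mpathd does the real work).
#
# classify HAS_UP_ADDRESS FLAGS
#   HAS_UP_ADDRESS: yes | no
#   FLAGS: comma-separated flag list, or "-" for none
classify() {
    has_up=$1
    flags=$2

    case ",$flags," in
        *,FAILED,* | *,OFFLINE,*)
            echo unusable; return ;;   # cannot carry data at all
    esac
    if [ "$has_up" = "no" ]; then
        echo unusable; return          # no UP address
    fi
    case ",$flags," in
        *,INACTIVE,*) echo INACTIVE ;; # functioning, held in reserve
        *)            echo active ;;
    esac
}

classify yes -          # -> active
classify yes INACTIVE   # -> INACTIVE
classify yes FAILED     # -> unusable
classify no -           # -> unusable
```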
This chapter provides tasks for administering interface groups with IP network multipathing (IPMP). The following major topics are discussed:
In this Solaris release, the ipmpstat command is the preferred tool for obtaining information about IPMP groups. In this chapter, the ipmpstat command replaces certain functions of the ifconfig command that were used in previous Solaris releases to provide IPMP information.
For information about the different options for the ipmpstat command, see Monitoring IPMP Information.
The following sections provide links to the tasks in this chapter.
Task | Description | For Instructions
---|---|---
Plan an IPMP group. | Lists all ancillary information and required tasks before you can configure an IPMP group. | 
Configure an IPMP group by using DHCP. | Provides an alternative method to configure IPMP groups by using DHCP. | 
Configure an active-active IPMP group. | Configures an IPMP group in which all underlying interfaces are deployed to host network traffic. | 
Configure an active-standby IPMP group. | Configures an IPMP group in which one underlying interface is kept inactive as a reserve. | 
Task | Description | For Instructions
---|---|---
Add an interface to an IPMP group. | Configures a new interface as a member of an existing IPMP group. | 
Remove an interface from an IPMP group. | Removes an interface from an IPMP group. | 
Add IP addresses to or remove IP addresses from an IPMP group. | Adds or removes addresses for an IPMP group. | 
Change an interface's IPMP membership. | Moves interfaces among IPMP groups. | How to Move an Interface From One IPMP Group to Another Group
Delete an IPMP group. | Deletes an IPMP group that is no longer needed. | 
Replace cards that failed. | Removes or replaces failed NICs of an IPMP group. | 
Task | Description | For Instructions
---|---|---
Manually specify target systems. | Identifies and adds systems to be targeted for probe-based failure detection. | How to Manually Specify Target Systems for Probe-Based Failure Detection
Configure the behavior of probe-based failure detection. | Modifies parameters to determine the behavior of probe-based failure detection. | 
Task | Description | For Instructions
---|---|---
Obtain group information. | Displays information about an IPMP group. | 
Obtain data address information. | Displays information about the data addresses that are used by an IPMP group. | 
Obtain IPMP interface information. | Displays information about the underlying interfaces of IPMP interfaces or groups. | How to Obtain Information About Underlying IP Interfaces of a Group
Obtain probe target information. | Displays information about targets of probe-based failure detection. | 
Obtain probe information. | Displays real-time information about ongoing probes in the system. | 
Customize the information display for monitoring IPMP groups. | Determines the IPMP information that is displayed. | How to Customize the Output of the ipmpstat Command in a Script
This section provides procedures that are used to plan and configure IPMP groups.
The following procedure includes the required planning tasks and information to be gathered prior to configuring an IPMP group. The tasks do not have to be performed in sequence.
Determine the general IPMP configuration that would suit your needs.
Your IPMP configuration depends on the type of traffic that your network needs to handle on your system. IPMP spreads outbound network packets across the IPMP group's interfaces, and thus improves network throughput. However, for a given TCP connection, inbound traffic normally follows only one physical path to minimize the risk of processing out-of-order packets.
Thus, if your network handles a large volume of outbound traffic, configuring multiple interfaces into an IPMP group can improve network performance. If, instead, the system hosts heavy inbound traffic, then increasing the number of interfaces in the group does not necessarily improve performance through load spreading. However, having multiple interfaces helps to guarantee network availability during interface failure.
For SPARC based systems, verify that each interface in the group has a unique MAC address.
To configure a unique MAC address for each interface in the system, see SPARC: How to Ensure That the MAC Address of an Interface Is Unique.
Ensure that the same set of STREAMS modules is pushed and configured on all interfaces in the IPMP group.
All interfaces in the same group must have the same STREAMS modules configured in the same order.
Check the order of STREAMS modules on all interfaces in the prospective IPMP group.
You can print a list of STREAMS modules by using the ifconfig interface modlist command. For example, here is the ifconfig output for an hme0 interface:
# ifconfig hme0 modlist
0 arp
1 ip
2 hme
Interfaces normally exist as network drivers directly below the IP module, as shown in the output from ifconfig hme0 modlist. They should not require additional configuration.
However, certain technologies insert themselves as a STREAMS module between the IP module and the network driver. If a STREAMS module is stateful, then unexpected behavior can occur on failover, even if you push the same module onto all of the interfaces in a group. However, you can use stateless STREAMS modules, provided that you push them in the same order on all interfaces in the IPMP group.
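One way to check that two prospective group members carry the same module stack in the same order is to compare their modlist output with standard text tools. In this sketch the sample outputs are inlined; on a live system they would come from ifconfig interface modlist:

```shell
#!/bin/sh
# Sketch: compare the STREAMS module stacks of two interfaces.
# On a live system the input would come from `ifconfig interface modlist`;
# the sample outputs below are inlined for illustration.
modlist_hme0="0 arp
1 ip
2 hme"
modlist_qfe0="0 arp
1 ip
2 qfe"

# Strip the position numbers and drop the bottom entry (the network
# driver itself, which legitimately differs), then compare.
stack() {
    echo "$1" | awk '{print $2}' | sed '$d'
}

if [ "$(stack "$modlist_hme0")" = "$(stack "$modlist_qfe0")" ]; then
    echo "module stacks match"
else
    echo "module stacks differ"
fi
```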
Push the modules of an interface in the standard order for the IPMP group.
ifconfig interface modinsert module-name@position
ifconfig hme0 modinsert vpnmod@3
Use the same IP addressing format on all interfaces of the IPMP group.
If one interface is configured for IPv4, then all interfaces of the group must be configured for IPv4. For example, if you add IPv6 addressing to one interface, then all interfaces in the IPMP group must be configured for IPv6 support.
Determine the type of failure detection that you want to implement.
For example, if you want to implement probe-based failure detection, then you must configure test addresses on the underlying interfaces. For related information, see Types of Failure Detection in IPMP.
Ensure that all interfaces in the IPMP group are connected to the same local network.
For example, you can configure Ethernet interfaces that are attached to the same IP subnet into an IPMP group. You can configure any number of interfaces into an IPMP group.
You can also configure a single interface IPMP group, for example, if your system has only one physical interface. For related information, see Types of IPMP Interface Configurations.
Ensure that the IPMP group does not contain interfaces with different network media types.
The interfaces that are grouped together should be of the same interface type, as defined in /usr/include/net/if_types.h. For example, you cannot combine Ethernet and Token ring interfaces in an IPMP group. As another example, you cannot combine a Token bus interface with asynchronous transfer mode (ATM) interfaces in the same IPMP group.
For IPMP with ATM interfaces, configure the ATM interfaces in LAN emulation mode.
IPMP is not supported for interfaces using Classical IP over ATM.
In the current IPMP implementation, IPMP groups can be configured with Dynamic Host Configuration Protocol (DHCP) support.
A multiple-interfaced IPMP group can be configured with active-active interfaces or active-standby interfaces. For related information, see Types of IPMP Interface Configurations. The following procedure describes steps to configure an active-standby IPMP group by using DHCP.
Make sure that IP interfaces that will be in the prospective IPMP group have been correctly configured over the system's network data links. For procedures to configure links and IP interfaces, see Data Link and IP Interface Configuration (Tasks). For information about configuring IPv6 interfaces, see Configuring an IPv6 Interface in System Administration Guide: IP Services.
Additionally, if you are using a SPARC system, configure a unique MAC address for each interface. For procedures, see SPARC: How to Ensure That the MAC Address of an Interface Is Unique.
Finally, if you are using DHCP, make sure that the underlying interfaces have infinite leases. Otherwise, in case of a group failure, the test addresses will expire and the IPMP daemon will then revert to link-based failure detection. Such a situation can cause failure detection to behave erroneously during interface repair. For more information about configuring DHCP, refer to Chapter 12, Planning for DHCP Service (Tasks), in System Administration Guide: IP Services.
On the system on which you want to configure the IPMP group, assume the Primary Administrator role, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Create an IPMP interface.
# ifconfig ipmp-interface ipmp [group group-name]
To configure IPv6 IPMP interfaces, use the same command syntax for configuring IPv6 interfaces by specifying inet6 in the ifconfig command, for example:
# ifconfig ipmp-interface inet6 ipmp [group group-name]
This note applies to all configuration procedures that involve IPv6 IPMP interfaces.
Specifies the name of the IPMP interface. You can assign any meaningful name to the IPMP interface. As with any IP interface, the name consists of a string and a number, such as ipmp0.
Specifies the name of the IPMP group. The name can be any name of your choice. Assigning a group name is optional. By default, the name of the IPMP interface also becomes the name of the IPMP group. Preferably, retain this default setting by not using the group-name option.
The syntax in this step uses the preferred explicit method of creating an IPMP group by creating the IPMP interface.
An alternative method to create an IPMP group is implicit creation, in which you use the syntax ifconfig interface group group-name. In this case, the system creates the lowest available ipmpN to become the group's IPMP interface. For example, if ipmp0 already exists for group acctg, then the syntax ifconfig ce0 group fieldops causes the system to create ipmp1 for group fieldops. All UP data addresses of ce0 are then assigned to ipmp1.
However, implicit creation of IPMP groups is not encouraged. Support for implicit creation is provided only to have compatible implementation with previous Solaris releases. Explicit creation provides optimal control over the configuration of IPMP interfaces.
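The "lowest available ipmpN" rule used during implicit creation can be sketched as follows. The helper function is hypothetical; the system applies this rule internally:

```shell
#!/bin/sh
# Sketch of the "lowest available ipmpN" rule used during implicit
# IPMP group creation (illustrative only; the system applies this
# rule internally).
#
# next_ipmp_name existing-ipmp-interface...
next_ipmp_name() {
    n=0
    while :; do
        candidate="ipmp$n"
        case " $* " in
            *" $candidate "*)
                n=$((n + 1))      # name already taken, try the next one
                ;;
            *)
                echo "$candidate"
                return
                ;;
        esac
    done
}

next_ipmp_name ipmp0          # ipmp0 exists for group acctg -> ipmp1
next_ipmp_name ipmp0 ipmp1    # -> ipmp2
next_ipmp_name                # no IPMP interfaces yet -> ipmp0
```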
Add underlying IP interfaces that will contain test addresses to the IPMP group, including the standby interface.
# ifconfig interface group group-name -failover [standby] up
Have DHCP configure and manage the data addresses on the IPMP interface.
You need to plumb as many logical IPMP interfaces as data addresses, and then have DHCP configure and manage the addresses on these interfaces as well.
# ifconfig ipmp-interface dhcp start primary
# ifconfig ipmp-interface:n plumb
# ifconfig ipmp-interface:n dhcp start
Have DHCP manage the test addresses in the underlying interfaces.
You need to issue the following command for each underlying interface of the IPMP group.
# ifconfig interface dhcp start
This example shows how to configure an active-standby IPMP group with DHCP. This example is based on Figure 7–1, which contains the following information:
Three underlying interfaces, subitops0, subitops1, and subitops2 are designated members of the IPMP group.
The IPMP interface itops0 shares the same name with the IPMP group.
subitops2 is the designated standby interface.
To use probe-based failure detection, all the underlying interfaces are assigned test addresses.
# ifconfig itops0 ipmp
# ifconfig subitops0 plumb group itops0 -failover up
# ifconfig subitops1 plumb group itops0 -failover up
# ifconfig subitops2 plumb group itops0 -failover standby up
# ifconfig itops0 dhcp start primary
# ifconfig itops0:1 plumb
# ifconfig itops0:1 dhcp start
# ifconfig subitops0 dhcp start
# ifconfig subitops1 dhcp start
# ifconfig subitops2 dhcp start
To make the test address configuration persistent, you would need to type the following commands:
# touch /etc/dhcp.itops0 /etc/dhcp.itops0:1
# touch /etc/dhcp.subitops0 /etc/dhcp.subitops1 /etc/dhcp.subitops2
# echo group itops0 -failover up > /etc/hostname.subitops0
# echo group itops0 -failover up > /etc/hostname.subitops1
# echo group itops0 -failover standby up > /etc/hostname.subitops2
# echo ipmp > /etc/hostname.itops0
The following procedure describes steps to manually configure an active-active IPMP group.
Make sure that IP interfaces that will be in the prospective IPMP group have been correctly configured over the system's network data links. For procedures to configure links and IP interfaces, see Data Link and IP Interface Configuration (Tasks). For information about configuring IPv6 interfaces, see Configuring an IPv6 Interface in System Administration Guide: IP Services.
Additionally, if you are using a SPARC system, configure a unique MAC address for each interface. For procedures, see SPARC: How to Ensure That the MAC Address of an Interface Is Unique.
On the system on which you want to configure the IPMP group, assume the Primary Administrator role, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
# ifconfig ipmp-interface ipmp [group group-name]
Specifies the name of the IPMP interface. You can assign any meaningful name to the IPMP interface. As with any IP interface, the name consists of a string and a number, such as ipmp0.
Specifies the name of the IPMP group. The name can be any name of your choice. Any non-null name is valid, provided that the name does not exceed 31 characters. Assigning a group name is optional. By default, the name of the IPMP interface also becomes the name of the IPMP group. Preferably, retain this default setting by not using the group-name option.
The syntax in this step uses the preferred explicit method of creating an IPMP group by creating the IPMP interface.
An alternative method to create an IPMP group is implicit creation, in which you use the syntax ifconfig interface group group-name. In this case, the system creates the lowest available ipmpN to become the group's IPMP interface. For example, if ipmp0 already exists for group acctg, then the syntax ifconfig ce0 group fieldops causes the system to create ipmp1 for group fieldops. All UP data addresses of ce0 are then assigned to ipmp1.
However, implicit creation of IPMP groups is discouraged. Support for implicit creation is provided only for compatibility with previous Solaris releases. Explicit creation gives you full control over the configuration of IPMP interfaces.
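For example, the following hypothetical commands contrast the two methods. The first command explicitly creates the IPMP interface ipmp0 for group acctg. The second implicitly creates an IPMP interface by placing ce0 into the as-yet-nonexistent group fieldops, causing the system to create the lowest available ipmpN interface for that group.

# ifconfig ipmp0 ipmp group acctg
# ifconfig ce0 group fieldops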
Add underlying IP interfaces to the group.
# ifconfig ip-interface group group-name
In a dual-stack environment, placing the IPv4 instance of an interface under a particular group automatically places the IPv6 instance under the same group as well.
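For example, with a hypothetical interface ce0, the following single command places both the IPv4 and the IPv6 instances of ce0 under group itops0. No separate ifconfig inet6 command is required.

# ifconfig ce0 group itops0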
Add data addresses to the IPMP interface.
# ifconfig ipmp-interface ip-address up
# ifconfig ipmp-interface addif ip-address up
For additional options that you can use with the ifconfig command while adding addresses, refer to the ifconfig(1M) man page.
Configure test addresses on the underlying interfaces.
# ifconfig interface -failover ip-address up
You need to configure a test address only if you want to use probe-based failure detection on a particular interface.
All test IP addresses in an IPMP group must use the same network prefix; that is, they must belong to a single IP subnet.
(Optional) Preserve the IPMP group configuration across reboots.
To configure an IPMP group that persists across system reboots, edit the hostname configuration file of the IPMP interface to add the data addresses. Then, if you want to use test addresses, edit the hostname configuration file of each of the group's underlying IP interfaces that requires one. Note that data and test addresses can be either IPv4 or IPv6 addresses. Perform the following steps:
Edit the /etc/hostname.ipmp-interface file by adding the following lines:
ipmp group group-name data-address up
addif data-address up
...
You can add more data addresses on separate addif lines in this file.
Edit the /etc/hostname.interface file of the underlying IP interfaces that contain the test address by adding the following line:
group group-name -failover test-address up
Follow this same step to add test addresses to other underlying interfaces of the IPMP group.
When you add test address information to the /etc/hostname.interface file, make sure to specify the -failover option before the up keyword. Otherwise, the test IP address is treated as a data address, which causes problems for system administration. Preferably, place the -failover option before the IP address.
The following procedure configures an IPMP group in which one interface is kept as a reserve. This interface is deployed only when an active interface in the group fails. For more information about standby interfaces, see Types of IPMP Interface Configurations.
On the system on which you want to configure the IPMP group, assume the Primary Administrator role, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
# ifconfig ipmp-interface ipmp [group group-name]
Specifies the name of the IPMP interface. You can assign any meaningful name to the IPMP interface. As with any IP interface, the name consists of a string and a number, such as ipmp0.
Specifies the name of the IPMP group. Any non-null name of your choice is valid, provided that the name does not exceed 31 characters. Assigning a group name is optional. By default, the name of the IPMP interface also becomes the name of the IPMP group. Preferably, retain this default setting by not using the group-name option.
The syntax in this step uses the preferred explicit method of creating an IPMP group by creating the IPMP interface.
An alternative method to create an IPMP group is implicit creation, in which you use the syntax ifconfig interface group group-name. In this case, the system creates the lowest available ipmpN interface to become the group's IPMP interface. For example, if ipmp0 already exists for group acctg, then the syntax ifconfig ce0 group fieldops causes the system to create ipmp1 for group fieldops. All of ce0's data addresses that are up are then assigned to ipmp1.
However, implicit creation of IPMP groups is discouraged. Support for implicit creation is provided only for compatibility with previous Solaris releases. Explicit creation gives you full control over the configuration of IPMP interfaces.
Add underlying IP interfaces to the group.
# ifconfig ip-interface group group-name
In a dual-stack environment, placing the IPv4 instance of an interface under a particular group automatically places the IPv6 instance under the same group as well.
Add data addresses to the IPMP interface.
# ifconfig ipmp-interface ip-address up
# ifconfig ipmp-interface addif ip-address up
For additional options that you can use with the ifconfig command while adding addresses, refer to the ifconfig(1M) man page.
Configure test addresses on the underlying interfaces.
To configure a test address on an active interface, use the following command:
# ifconfig interface -failover ip-address up
To configure a test address on a designated standby interface, use the following command:
# ifconfig interface -failover ip-address standby up
You need to configure a test address only if you want to use probe-based failure detection on a particular interface.
All test IP addresses in an IPMP group must use the same network prefix; that is, they must belong to a single IP subnet.
(Optional) Preserve the IPMP group configuration across reboots.
To configure an IPMP group that persists across system reboots, edit the hostname configuration file of the IPMP interface to add the data addresses. Then, if you want to use test addresses, edit the hostname configuration file of each of the group's underlying IP interfaces that requires one. Note that data and test addresses can be either IPv4 or IPv6 addresses. Perform the following steps:
Edit the /etc/hostname.ipmp-interface file by adding the following lines:
ipmp group group-name data-address up
addif data-address up
...
You can add more data addresses on separate addif lines in this file.
Edit the /etc/hostname.interface file of the underlying IP interfaces that contain the test address by adding the following line:
group group-name -failover test-address up
Follow this same step to add test addresses to other underlying interfaces of the IPMP group. For a designated standby interface, the line must be as follows:
group group-name -failover test-address standby up
When you add test address information to the /etc/hostname.interface file, make sure to specify the -failover option before the up keyword. Otherwise, the test IP address is treated as a data address, which causes problems for system administration. Preferably, place the -failover option before the IP address.
This example shows how to manually create the same persistent active-standby IPMP configuration that is provided in Example 8–1.
# ifconfig itops0 ipmp
# ifconfig subitops0 group itops0
# ifconfig subitops1 group itops0
# ifconfig subitops2 group itops0
# ifconfig itops0 192.168.10.10/24 up
# ifconfig itops0 addif 192.168.10.15/24 up
# ifconfig subitops0 -failover 192.168.10.30/24 up
# ifconfig subitops1 -failover 192.168.10.32/24 up
# ifconfig subitops2 -failover 192.168.10.34/24 standby up
# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
itops0      itops0      ok        10.00s    subitops0 subitops1 (subitops2)
# ipmpstat -t
INTERFACE   MODE      TESTADDR        TARGETS
subitops0   routes    192.168.10.30   192.168.10.1
subitops1   routes    192.168.10.32   192.168.10.1
subitops2   routes    192.168.10.34   192.168.10.5
# vi /etc/hostname.itops0
ipmp group itops0 192.168.10.10/24 up
addif 192.168.10.15/24 up
# vi /etc/hostname.subitops0
group itops0 -failover 192.168.10.30/24 up
# vi /etc/hostname.subitops1
group itops0 -failover 192.168.10.32/24 up
# vi /etc/hostname.subitops2
group itops0 -failover 192.168.10.34/24 standby up
This section contains tasks for maintaining existing IPMP groups and the interfaces within those groups. The tasks presume that you have already configured an IPMP group, as explained in Configuring IPMP Groups.
Make sure that the interface that you add to the group meets all the requirements for membership in the group. For a list of the requirements of an IPMP group, see How to Plan an IPMP Group.
On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Add the IP interface to the IPMP group.
# ifconfig interface group group-name
The interface specified in interface becomes a member of IPMP group group-name.
To add the interface hme0 to the IPMP group itops0, you would type the following command:
# ifconfig hme0 group itops0
# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
itops0      itops0      ok        10.00s    subitops0 subitops1 hme0
On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Remove the interface from the IPMP group.
# ifconfig interface group ""
The quotation marks indicate a null string.
To remove the interface hme0 from the IPMP group itops0, you would type the following command:
# ifconfig hme0 group ""
# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
itops0      itops0      ok        10.00s    subitops0 subitops1
You use the ifconfig addif syntax to add addresses to interfaces and the ifconfig removeif command to remove them. In the current IPMP implementation, test addresses are hosted on the underlying IP interfaces, while data addresses are assigned to the IPMP interface. The following procedure describes how to add or remove IP addresses, whether they are test addresses or data addresses.
Assume the role of Primary Administrator, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Add or remove data addresses.
To add data addresses to the IPMP group, type the following command:
# ifconfig ipmp-interface addif ip-address up
To remove an address from the IPMP group, type the following command:
# ifconfig ipmp-interface removeif ip-address
Add or remove test addresses.
To assign a test address to an underlying interface of the IPMP group, type the following command:
# ifconfig interface addif -failover ip-address up
To remove a test address from an underlying interface of the IPMP group, type the following command:
# ifconfig interface removeif ip-address
The following example uses the configuration of itops0 in Example 8–2. The step removes the test address from the interface subitops0.
# ipmpstat -t
INTERFACE   MODE      TESTADDR        TARGETS
subitops0   routes    192.168.10.30   192.168.10.1
# ifconfig subitops0 removeif 192.168.10.30
You can place an interface in a new IPMP group when the interface belongs to an existing IPMP group. You do not need to remove the interface from the current IPMP group. When you place the interface in a new group, the interface is automatically removed from any existing IPMP group.
On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Move the interface to a new IPMP group.
# ifconfig interface group group-name
Placing the interface in a new group automatically removes the interface from any existing group.
This example assumes that the underlying interfaces of your group are subitops0, subitops1, subitops2, and hme0. To change the IPMP group of interface hme0 to the group cs-link1, you would type the following:
# ifconfig hme0 group cs-link1
This command removes the hme0 interface from IPMP group itops0 and then puts the interface in the group cs-link1.
Use this procedure if you no longer need a specific IPMP group.
Assume the role of Primary Administrator, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Identify the IPMP group and the underlying IP interfaces.
# ipmpstat -g
Delete all IP interfaces that currently belong to the IPMP group.
# ifconfig ip-interface group ""
Repeat this step for all the IP interfaces that belong to the group.
To successfully delete an IPMP interface, the IPMP group must no longer contain any IP interfaces.
Delete the IPMP interface.
# ifconfig ipmp-interface unplumb
After you unplumb the IPMP interface, any IP address that is associated with the interface is deleted from the system.
To make the deletion persistent, also remove the /etc/hostname file of the IPMP interface and delete any group entries from the /etc/hostname files of the underlying IP interfaces.
To delete the interface itops0 that has the underlying IP interfaces subitops0 and subitops1, you would type the following commands:
# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
itops0      itops0      ok        10.00s    subitops0 subitops1
# ifconfig subitops0 group ""
# ifconfig subitops1 group ""
# ifconfig itops0 unplumb
# rm /etc/hostname.itops0
You would then edit the files /etc/hostname.subitops0 and /etc/hostname.subitops1 to remove “group” entries in those files.
Probe-based failure detection involves the use of target systems, as explained in Probe-Based Failure Detection. In identifying targets for probe-based failure detection, the in.mpathd daemon operates in two modes: router target mode or multicast target mode. In the router target mode, the multipathing daemon probes targets that are defined in the routing table. If no targets are defined, then the daemon operates in multicast target mode, where multicast packets are sent out to probe neighbor hosts on the LAN.
Preferably, you should set up host targets for the in.mpathd daemon to probe. For some IPMP groups, the default router is a sufficient target. However, for other groups you might want to configure specific targets for probe-based failure detection. To specify targets, set up host routes in the routing table as probe targets. Any host routes that are configured in the routing table are listed before the default router, and IPMP uses explicitly defined host routes for target selection. Thus, you should set up host routes to configure specific probe targets rather than rely on the default router.
To set up host routes in the routing table, you use the route command. You can use the -p option with this command to add persistent routes. For example, route -p add adds a route that remains in the routing table even after you reboot the system. The -p option thus lets you add persistent routes without any special scripts to re-create the routes at each system startup. To make optimal use of probe-based failure detection, make sure that you set up multiple targets to receive probes.
The sample procedure that follows shows the exact syntax to add persistent routes to targets for probe-based failure detection. For more information about the options for the route command, refer to the route(1M) man page.
Consider the following criteria when evaluating which hosts on your network might make good targets.
Make sure that the prospective targets are available and running. Make a list of their IP addresses.
Ensure that the target interfaces are on the same network as the IPMP group that you are configuring.
The netmask and broadcast address of the target systems must be the same as the addresses in the IPMP group.
The target host must be able to answer ICMP requests from the interface that is using probe-based failure detection.
Log in with your user account to the system where you are configuring probe-based failure detection.
Add a route to a particular host to be used as a target in probe-based failure detection.
$ route -p add -host destination-IP gateway-IP -static
where destination-IP and gateway-IP are IPv4 addresses of the host to be used as a target. For example, you would type the following to specify the target system 192.168.10.137, which is on the same subnet as the interfaces in IPMP group itops0:
$ route -p add -host 192.168.10.137 192.168.10.137 -static
This new route will be automatically configured every time the system is restarted. If you want to define only a temporary route to a target system for probe-based failure detection, then do not use the -p option.
Add routes to additional hosts on the network to be used as target systems.
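For example, the following commands add two more hypothetical target systems on the same subnet, each serving as its own gateway, as in the previous step:

$ route -p add -host 192.168.10.140 192.168.10.140 -static
$ route -p add -host 192.168.10.145 192.168.10.145 -static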
Use the IPMP configuration file /etc/default/mpathd to configure the following system-wide parameters for IPMP groups.
FAILURE_DETECTION_TIME
TRACK_INTERFACES_ONLY_WITH_GROUPS
FAILBACK
On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Edit the /etc/default/mpathd file.
Change the default value of one or more of the three parameters.
Type the new value for the FAILURE_DETECTION_TIME parameter.
FAILURE_DETECTION_TIME=n
where n is the amount of time in seconds for ICMP probes to detect whether an interface failure has occurred. The default is 10 seconds.
Type the new value for the FAILBACK parameter.
FAILBACK=[yes | no]
yes – The yes value is the default failback behavior of IPMP. When the repair of a failed interface is detected, network access fails back to the repaired interface, as described in Detecting Physical Interface Repairs.
no – The no value indicates that data traffic does not move back to a repaired interface. When a failed interface is detected as repaired, the INACTIVE flag is set for that interface. This flag indicates that the interface is currently not to be used for data traffic. The interface can still be used for probe traffic.
For example, the IPMP group ipmp0 consists of two interfaces, ce0 and ce1. In the /etc/default/mpathd file, the FAILBACK=no parameter is set. If ce0 fails, then it is flagged as FAILED and becomes unusable. After repair, the interface is flagged as INACTIVE and remains unusable because of the FAILBACK=no setting.
If ce1 fails and only ce0 is in the INACTIVE state, then ce0's INACTIVE flag is cleared and the interface becomes usable. If the IPMP group has other interfaces that are also in the INACTIVE state, then any one of these INACTIVE interfaces, and not necessarily ce0, can be cleared and become usable when ce1 fails.
Type the new value for the TRACK_INTERFACES_ONLY_WITH_GROUPS parameter.
TRACK_INTERFACES_ONLY_WITH_GROUPS=[yes | no]
For information about this parameter and the anonymous group feature, see Failure Detection and the Anonymous Group Feature.
yes – The yes value is the default behavior of IPMP. This value causes IPMP to ignore network interfaces that are not configured into an IPMP group.
no – The no value sets failure and repair detection for all network interfaces, regardless of whether they are configured into an IPMP group. However, when a failure or repair is detected on an interface that is not configured into an IPMP group, no action is triggered in IPMP to maintain the networking functions of that interface. Therefore, the no value is useful only for reporting failures and does not directly improve network availability.
Restart the in.mpathd daemon.
# pkill -HUP in.mpathd
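For reference, an /etc/default/mpathd file that sets all three parameters might contain lines such as the following. The failure detection time shown here is a hypothetical value; the other two settings are the defaults:

FAILURE_DETECTION_TIME=20
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes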
This section contains procedures that relate to administering systems that support dynamic reconfiguration (DR).
This procedure explains how to replace a physical card on a system that supports DR. The procedure assumes the following conditions:
You assigned administratively chosen names to the data links over which you configured the IP interfaces. These interfaces are subitops0 and subitops1.
Both interfaces belong to the IPMP group, itops0.
The interface subitops0 contains a test address.
The interface subitops0 has failed, and you need to remove subitops0's underlying card, ce.
You are replacing the ce card with a bge card.
The configuration files correspond to the interfaces and use the interfaces' customized link names: /etc/hostname.subitops0 and /etc/hostname.subitops1.
The procedures for performing DR vary with the type of system. Therefore, make sure that you complete the following:
Ensure that your system supports DR.
Consult the appropriate manual that describes DR procedures on your system. For Sun hardware, all systems that support DR are servers. To locate current DR documentation on Sun systems, search for “dynamic reconfiguration” on http://docs.sun.com.
The steps in the following procedure refer only to aspects of DR that are specifically related to IPMP and the use of link names. The procedure does not contain the complete steps to perform DR. For example, some layers beyond the IP layer require manual configuration steps, such as for ATM and other services, if the configuration is not automated. Follow the appropriate DR documentation for your system.
On the system with the IPMP group configuration, assume the Primary Administrator role or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Perform the appropriate DR steps to remove the failed NIC from the system.
If you are removing the card without intending to insert a replacement, then skip the rest of the steps after you remove the card.
If you are replacing a card, then proceed to the subsequent steps.
Make sure that the replacement NIC is not being referenced by other configurations in the system.
For example, suppose that the replacement NIC you install is bge0. If a /etc/hostname.bge0 file exists on the system, remove that file.
# rm /etc/hostname.bge0
Replace the default link name of the replacement NIC with the link name of the failed card.
By default, the link name of the bge card that replaces the failed ce card is bgen, where n is the instance number, such as bge0.
# dladm rename-link bge0 subitops0
This step transfers the network configuration of subitops0 to bge0.
Attach the replacement NIC to the system.
Complete the DR process by enabling the new NIC's resources to become available for use.
For example, you use the cfgadm command to perform this step. For more information, see the cfgadm(1M) man page.
After this step, the new interface is configured with the test address, added as an underlying interface of the IPMP group, and deployed either as an active or a standby interface, all depending on the configurations that are specified in /etc/hostname.subitops0. The kernel can then allocate data addresses to this new interface according to the contents of the /etc/hostname.ipmp-interface configuration file.
Certain systems might have the following configurations:
An IPMP group is configured with underlying IP interfaces.
A /etc/hostname.interface file exists for one underlying IP interface.
The physical hardware that is associated with the /etc/hostname file is missing.
With the new IPMP implementation where data addresses belong to the IPMP interface, recovering the missing interface becomes automatic. During system boot, the boot script constructs a list of failed interfaces, including interfaces that are missing. Based on the /etc/hostname file of the IPMP interface as well as the hostname files of the underlying IP interfaces, the boot script can determine to which IPMP group an interface belongs. When the missing interface is subsequently dynamically reconfigured on the system, the script then automatically adds that interface to the appropriate IPMP group and the interface becomes immediately available for use.
The following procedures use the ipmpstat command, enabling you to monitor different aspects of IPMP groups on the system. You can observe the status of the IPMP group as a whole or its underlying IP interfaces. You can also verify the configuration of data and test addresses for the group. Information about failure detection is also obtained by using the ipmpstat command. For more details about the ipmpstat command and its options, see the ipmpstat(1M) man page.
By default, host names are displayed in the output instead of numeric IP addresses, provided that host names exist. To list numeric IP addresses in the output, use the -n option together with other options that display specific IPMP group information.
In the following procedures, use of the ipmpstat command does not require system administrator privileges, unless stated otherwise.
Use this procedure to list the status of the various IPMP groups on the system, including the status of their underlying interfaces. If probe-based failure detection is enabled for a specific group, the command also includes the failure detection time for that group.
Display the IPMP group information.
$ ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
itops0      itops0      ok        10.00s    subitops0 subitops1
acctg1      acctg1      failed    --        [hme0 hme1]
field2      field2      degraded  20.00s    fops0 fops3 (fops2) [fops1]
Specifies the IPMP interface name. In the case of an anonymous group, this field will be empty. For more information about anonymous groups, see the in.mpathd(1M) man page.
Specifies the name of the IPMP group. In the case of an anonymous group, this field will be empty.
Indicates a group's current status, which can be one of the following:
ok indicates that all underlying interfaces of the IPMP group are usable.
degraded indicates that some of the underlying interfaces in the group are unusable.
failed indicates that all of the group's interfaces are unusable.
Specifies the failure detection time, if failure detection is enabled. If failure detection is disabled, this field will be empty.
Specifies the underlying interfaces that belong to the group. In this field, active interfaces are listed first, then inactive interfaces, and finally unusable interfaces. The status of each interface is indicated by the manner in which it is listed:
interface (without parentheses or brackets) indicates an active interface. Active interfaces are those that are being used by the system to send or receive data traffic.
(interface) (with parentheses) indicates a functioning but inactive interface. The interface is not in use as defined by administrative policy.
[interface] (with brackets) indicates that the interface is unusable because it has either failed or been taken offline.
Use this procedure to display data addresses and the group to which each address belongs. The displayed information also includes which addresses are available for use, depending on whether each address has been marked up or down by the ifconfig command. You can also determine the inbound or outbound interface on which an address can be used.
Display the IPMP address information.
$ ipmpstat -an
ADDRESS         STATE  GROUP    INBOUND     OUTBOUND
192.168.10.10   up     itops0   subitops0   subitops0 subitops1
192.168.10.15   up     itops0   subitops1   subitops0 subitops1
192.0.0.100     up     acctg1   --          --
192.0.0.101     up     acctg1   --          --
128.0.0.100     up     field2   fops0       fops0 fops3
128.0.0.101     up     field2   fops3       fops0 fops3
128.0.0.102     down   field2   --          --
Specifies the hostname or the data address, if the -n option is used in conjunction with the -a option.
Indicates whether the address on the IPMP interface is up, and therefore usable, or down, and therefore unusable.
Specifies the IPMP IP interface that hosts a specific data address.
Identifies the interface that receives packets for a given address. The field information might change depending on external events. For example, if a data address is down, or if no active IP interfaces remain in the IPMP group, this field will be empty. The empty field indicates that the system is not accepting IP packets that are destined for the given address.
Identifies the interface that sends packets that are using a given address as a source address. As with the INBOUND field, the OUTBOUND field information might also change depending on external events. An empty field indicates that the system is not sending out packets with the given source address. The field might be empty either because the address is down, or because no active IP interfaces remain in the group.
Use this procedure to display information about an IPMP group's underlying IP interfaces. For a description of the corresponding relationship between the NIC, data link, and IP interface, see Overview of the Networking Stack.
Display the IPMP interface information.
$ ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK      PROBE     STATE
subitops0   yes     itops0  --mb---   up        ok        ok
subitops1   yes     itops0  -------   up        disabled  ok
hme0        no      acctg1  -------   unknown   disabled  offline
hme1        no      acctg1  is-----   down      unknown   failed
fops0       yes     field2  --mb---   unknown   ok        ok
fops1       no      field2  -i-----   up        ok        ok
fops2       no      field2  -------   up        failed    failed
fops3       yes     field2  --mb---   up        ok        ok
Specifies each underlying interface of each IPMP group.
Indicates whether the interface is functioning and is in use (yes) or not (no).
Specifies the IPMP interface name. In the case of anonymous groups, this field will be empty. For more information about anonymous groups, see the in.mpathd(1M) man page.
Indicates the status of the underlying interface, which can be one or any combination of the following:
i indicates that the INACTIVE flag is set for the interface and therefore the interface is not used to send or receive data traffic.
s indicates that the interface is configured to be a standby interface.
m indicates that the interface is designated by the system to send and receive IPv4 multicast traffic for the IPMP group.
b indicates that the interface is designated by the system to receive broadcast traffic for the IPMP group.
M indicates that the interface is designated by the system to send and receive IPv6 multicast traffic for the IPMP group.
d indicates that the interface is down and therefore unusable.
h indicates that the interface shares a duplicate physical hardware address with another interface and has been taken offline. The h flag indicates that the interface is unusable.
Indicates the state of link-based failure detection, which is one of the following states:
up or down indicates the availability or unavailability of a link.
unknown indicates that the driver does not support notification of whether a link is up or down and therefore does not detect link state changes.
Specifies the state of the probe–based failure detection for interfaces that have been configured with a test address, as follows:
ok indicates that the probe is functional and active.
failed indicates that probe-based failure detection has detected that the interface is not working.
unknown indicates that no suitable probe targets could be found, and therefore probes cannot be sent.
disabled indicates that no IPMP test address is configured on the interface. Therefore probe-based failure detection is disabled.
Specifies the overall state of the interface, as follows:
ok indicates that the interface is online and working normally based on the configuration of failure detection methods.
failed indicates that the interface is not working because either the interface's link is down, or the probe detection has determined that the interface cannot send or receive traffic.
offline indicates that the interface is not available for use. Typically, the interface is switched offline under the following circumstances:
The interface is being tested.
Dynamic reconfiguration is being performed.
The interface shares a duplicate hardware address with another interface.
unknown indicates the IPMP interface's status cannot be determined because no probe targets can be found for probe-based failure detection.
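The state fields described above can also be checked from a script. The following is a minimal sketch, not part of the guide: it parses a captured sample of ipmpstat -i output (the interface and group names are taken from the examples in this chapter) and reports any underlying interface whose overall STATE is not ok. On a live system you would pipe the output of ipmpstat -i directly instead of embedding a sample.

```shell
# Sketch: flag underlying interfaces whose overall STATE is not "ok".
# A captured sample of `ipmpstat -i` output is embedded so the fragment
# is self-contained; on a live system, pipe `ipmpstat -i` instead.
sample='INTERFACE   ACTIVE  GROUP   FLAGS     LINK   PROBE     STATE
subitops0   yes     itops0  --mb---   up     ok        ok
subitops1   no      itops0  is-----   up     disabled  failed'

# Skip the header line and print any interface whose STATE column
# (field 7) is not "ok".
result=$(echo "$sample" | awk 'NR > 1 && $7 != "ok" { print $1 " is " $7 }')
echo "$result"
```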
Use this procedure to monitor the probe targets that are associated with each IP interface in an IPMP group.
Display the IPMP probe targets.
$ ipmpstat -nt
INTERFACE   MODE        TESTADDR        TARGETS
subitops0   routes      192.168.85.30   192.168.85.1 192.168.85.3
subitops1   disabled    --              --
hme0        disabled    --              --
hme1        routes      192.1.2.200     192.1.2.1
fops0       multicast   128.9.0.200     128.0.0.1 128.0.0.2
fops1       multicast   128.9.0.201     128.0.0.2 128.0.0.1
fops2       multicast   128.9.0.202     128.0.0.1 128.0.0.2
fops3       multicast   128.9.0.203     128.0.0.1 128.0.0.2
Specifies the underlying interfaces of the IPMP group.
Specifies the method for obtaining the probe targets.
routes indicates that the system routing table is used to find probe targets.
mcast indicates that multicast ICMP probes are used to find targets.
disabled indicates that probe-based failure detection has been disabled for the interface.
Specifies the hostname or, if the -n option is used in conjunction with the -t option, the IP address that is assigned to the interface to send and receive probes. This field will be empty if a test address has not been configured.
If an IP interface is configured with both IPv4 and IPv6 test addresses, the probe target information is displayed separately for each test address.
Lists the current probe targets in a space-separated list. The probe targets are displayed either as hostnames or IP addresses, if the -n is used in conjunction with the -t option.
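As an illustration of how the MODE field can be used from a script, the following sketch (not part of the guide) lists the interfaces on which probe-based failure detection is disabled. A captured sample of the ipmpstat -nt output shown above is embedded so that the fragment runs without a live IPMP configuration.

```shell
# Sketch: list interfaces whose target discovery MODE (field 2) is
# "disabled", using captured `ipmpstat -nt` data lines instead of a
# live system.
sample='subitops0 routes 192.168.85.30 192.168.85.1 192.168.85.3
subitops1 disabled -- --
hme0 disabled -- --
hme1 routes 192.1.2.200 192.1.2.1'

result=$(echo "$sample" | awk '$2 == "disabled" { print $1 }')
echo "$result"
```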
Use this procedure to observe ongoing probes. When you issue the command to observe probes, information about probe activity on the system is continuously displayed until you terminate the command with Ctrl-C. You must have Primary Administrator privileges to run this command.
Assume the role of Primary Administrator, or become superuser.
The Primary Administrator role includes the Primary Administrator profile. To create the role and assign the role to a user, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.
Display the information about ongoing probes.
# ipmpstat -pn
TIME     INTERFACE   PROBE   TARGET         NETRTT   RTT      RTTAVG   RTTDEV
0.11s    subitops0   589     192.168.85.1   0.51ms   0.76ms   0.76ms   --
0.17s    hme1        612     192.1.2.1      --       --       --       --
0.25s    fops0       602     128.0.0.1      0.61ms   1.10ms   1.10ms   --
0.26s    fops1       602     128.0.0.2      --       --       --       --
0.25s    fops2       601     128.0.0.1      0.62ms   1.20ms   1.00ms   --
0.26s    fops3       603     128.0.0.1      0.79ms   1.11ms   1.10ms   --
1.66s    hme1        613     192.1.2.1      --       --       --       --
1.70s    subitops0   603     192.168.85.3   0.63ms   1.10ms   1.10ms   --
^C
Specifies the time at which a probe was sent, relative to when the ipmpstat command was issued. If a probe was initiated before ipmpstat was started, the time is displayed as a negative value.
Specifies the identifier that represents the probe.
Specifies the interface on which the probe is sent.
Specifies the hostname or, if the -n option is used in conjunction with -p, the target address to which the probe is sent.
Specifies the total network round-trip time of the probe and is measured in milliseconds. NETRTT covers the time between the moment when the IP module sends the probe and the moment the IP module receives the ack packets from the target. If the in.mpathd daemon has determined that the probe is lost, then the field will be empty.
Specifies the total round-trip time for the probe and is measured in milliseconds. RTT covers the time between the moment the daemon executes the code to send the probe and the moment the daemon completes processing the ack packets from the target. If the in.mpathd daemon has determined that the probe is lost, then the field will be empty. Spikes that occur in the RTT which are not present in the NETRTT might indicate that the local system is overloaded.
Specifies the probe's average round-trip time over the interface between local system and target. The average round-trip time helps identify slow targets. If data is insufficient to calculate the average, this field will be empty.
Specifies the standard deviation for the round-trip time to the target over the interface. The standard deviation helps identify jittery targets whose ack packets are being sent erratically. For jittery targets, the in.mpathd daemon is forced to increase the failure detection time. Consequently, the daemon would take a longer time before it can detect such a target's outage. If data is insufficient to calculate the standard deviation, this field will be empty.
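The round-trip statistics can also be aggregated offline. The following sketch is an illustration rather than part of the guide: it computes the mean RTT per probe target from a captured sample of ipmpstat -pn lines. On a live system you would capture a bounded run of the command instead of embedding sample data.

```shell
# Sketch: average the RTT column (field 6) per probe target (field 4)
# from captured `ipmpstat -pn` data lines. Columns are:
# TIME INTERFACE PROBE TARGET NETRTT RTT RTTAVG RTTDEV
sample='0.11s subitops0 589 192.168.85.1 0.51ms 0.76ms 0.76ms --
0.25s fops0 602 128.0.0.1 0.61ms 1.10ms 1.10ms --
0.25s fops2 601 128.0.0.1 0.62ms 1.20ms 1.00ms --'

# Strip the "ms" suffix, accumulate per-target sums, then print the
# mean; `sort` gives a deterministic output order.
result=$(echo "$sample" | awk '{ sub(/ms$/, "", $6); sum[$4] += $6; n[$4]++ }
    END { for (t in sum) printf "%s %.2fms\n", t, sum[t] / n[t] }' | sort)
echo "$result"
```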
By default, the ipmpstat command displays the most meaningful fields that fit in 80 columns. In the output, all of the fields that are specific to the option that you use with the ipmpstat command are displayed, except in the case of the ipmpstat -p syntax. To specify which fields are displayed, use the -o option in conjunction with the other options that determine the output mode of the command. The -o option is particularly useful when you issue the command from a script or by using a command alias.
To customize the output, issue one of the following commands:
To display selected fields of the ipmpstat command, use the -o option in combination with the specific output option. For example, to display only the GROUPNAME and the STATE fields of the group output mode, you would type the following:
$ ipmpstat -g -o groupname,state
GROUPNAME   STATE
itops0      ok
acctg1      failed
field2      degraded
To display all the fields of a given ipmpstat command, use the following syntax:
# ipmpstat -o all
You can generate machine parseable information by using the ipmpstat -P syntax. The -P option is intended to be used particularly in scripts. Machine-parseable output differs from the normal output in the following ways:
Headers are omitted.
Fields are separated by colons (:).
Fields with empty values are empty rather than being filled with the double dash (--).
When multiple fields are requested, a field that contains a literal colon (:) or backslash (\) has those characters escaped by prefixing them with a backslash (\).
To correctly use the ipmpstat -P syntax, observe the following rules:
Use the -o option fields together with the -P option.
Never use -o all with the -P option.
Ignoring either of these rules causes ipmpstat -P to fail.
To display in machine parseable format the group name, the failure detection time, and the underlying interfaces, you would type the following:
$ ipmpstat -gP -o groupname,fdt,interfaces
itops0:10.00s:subitops0 subitops1
acctg1::[hme0 hme1]
field2:20.00s:fops0 fops3 (fops2) [fops1]
The group name, failure detection time, and underlying interfaces are group information fields. Thus, you use the -g and -o options together with the -P option.
This sample script displays the failure detection time of a particular IPMP group.
getfdt() {
    ipmpstat -gP -o group,fdt | while IFS=: read group fdt; do
        [[ "$group" = "$1" ]] && { echo "$fdt"; return; }
    done
}
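The parsing logic of the script above can be exercised without a live IPMP configuration. The hypothetical variant below (not part of the guide) substitutes a captured sample of ipmpstat -gP -o group,fdt output, with group names taken from the earlier examples, for the live command:

```shell
# Hypothetical harness for the getfdt parsing loop: the live
# `ipmpstat -gP -o group,fdt` call is replaced by captured sample output.
sample='itops0:10.00s
acctg1:
field2:20.00s'

getfdt_sample() {
    echo "$sample" | while IFS=: read group fdt; do
        [ "$group" = "$1" ] && { echo "$fdt"; return; }
    done
}

result=$(getfdt_sample field2)
echo "$result"
```

Given the sample data, getfdt_sample field2 prints 20.00s, the failure detection time recorded for the field2 group.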