2 Optimizing Network Servers for High Availability

For systems that provide network services to clients on the network, network availability is a priority: the services must remain continuous and interruptions must be prevented. With virtual local area networks (VLANs), you can also organize the network so that systems with similar functions are grouped together as though they belong to their own virtual networks, which simplifies network management and administration.

To take advantage of these features, a system must have several NICs. The more NICs, the better the assurance of network availability that a server can provide.

Working With Network Bonding

A system's physical network interfaces that are connected to a network switch can be grouped together into a single logical interface to provide better throughput or availability. This grouping, or aggregation, of physical network interfaces is known as a network bond.

A bonded network interface can increase data throughput by load balancing or can provide redundancy by activating failover from one component device to another. By default, a bonded interface appears to the kernel as a normal network device, but it sends out network packets over the available secondary devices by using a round-robin scheduler. You can configure bonding module parameters in the bonded interface's configuration file to alter the behavior of load balancing and device failover.
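
For example, on systems that use ifcfg-style files under /etc/sysconfig/network-scripts, the module parameters are commonly set with the BONDING_OPTS key. The following is a minimal sketch, assuming a bond device named bond0 and illustrative option values:

# Illustrative ifcfg-bond0 file
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
ONBOOT=yes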

The network bonding driver within the kernel can be used to configure the network bond in different modes to take advantage of different bonding features, depending on the requirements and the available network infrastructure. For example, the balance-rr mode provides basic round-robin load balancing and fault tolerance across a set of physical network interfaces, while the active-backup mode provides basic fault tolerance for high availability configurations. Some bonding modes, such as 802.3ad, or dynamic link aggregation, require particular hardware features and configuration on the switch that the physical interfaces connect to. Basic load-balancing modes (balance-rr and balance-xor) work with any switch that supports EtherChannel or trunking. Advanced load-balancing modes (balance-tlb and balance-alb) don't impose requirements on the switching hardware, but do require that the device driver for each component interface implement certain features, such as support for ethtool or the ability to change the hardware address while the device is active.
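
For example, assuming switch ports that are configured for LACP, you might create a dynamic link aggregation bond with nmcli as follows (a sketch; the connection name, interface name, and option values are illustrative):

sudo nmcli connection add type bond con-name bond1 ifname bond1 bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"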

For more information on the kernel bonding driver, see the upstream documentation at https://www.kernel.org/doc/Documentation/networking/bonding.txt or the copy that's included at /usr/share/doc/iputils-*/README.bonding.

Note:

Certain network interface bonding features, such as automatic failover, depend on support from a network switch. In configurations where systems are directly cabled together without a switch, such mechanisms might not work.

Configuring Network Bonding

You can configure network bonding either by using the command line or the Network Connections Editor.

Using the Command Line

For a tutorial on configuring network bonds that includes a hands-on lab environment, see Create Network Bonds using Network Manager CLI.

  1. Add a bond interface using the nmcli connection add command.

    sudo nmcli connection add type bond con-name "Bond Connection 1" ifname bond0 bond.options "mode=active-backup"

    Be sure to set the bond connection name, the bond interface name, and, importantly, the bond mode option. In this example, the mode is set to active-backup. If you don't set the bond connection name, the bond interface name is also used as the connection name. You can also change the bond options after you create the connection, as shown in the example that follows this procedure.

  2. Optionally configure the IP address for the bond interface using the nmcli connection modify command. By default the interface is configured to use DHCP, but if you require static IP addressing, manually configure the address. For example, to configure IPv4 settings for the bond, type:

    sudo nmcli connection modify "Bond Connection 1" ipv4.addresses '192.0.2.2/24'
    sudo nmcli connection modify "Bond Connection 1" ipv4.gateway '192.0.2.1'
    sudo nmcli connection modify "Bond Connection 1" ipv4.dns '192.0.2.254'
    sudo nmcli connection modify "Bond Connection 1" ipv4.method manual
  3. Add the physical network interfaces to the bond as secondary-type interfaces using the nmcli connection add command. For example:

    sudo nmcli connection add type ethernet slave-type bond con-name bond0-if1 ifname enp1s0 master bond0
    sudo nmcli connection add type ethernet slave-type bond con-name bond0-if2 ifname enp2s0 master bond0

    Give each secondary a connection name, and select the interface name for each interface that you want to add. You can get a list of available interfaces by running the nmcli device command. Specify the interface name of the bond to which you want to attach the secondary network interfaces.

  4. Start the bond interface.

    sudo nmcli connection up "Bond Connection 1"
  5. Verify that the network interfaces have been added to the bond correctly. You can check this by looking at the device list again.

    sudo nmcli device
    ...
    enp1s0   ethernet  connected  bond0-if1
    enp2s0   ethernet  connected  bond0-if2
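
You can also change the bond options on the existing connection and then re-activate it. The following is a sketch, using an illustrative MII link-monitoring interval of 100 milliseconds:

sudo nmcli connection modify "Bond Connection 1" bond.options "mode=active-backup,miimon=100"
sudo nmcli connection up "Bond Connection 1"
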
Using the Network Connections Editor
  1. Start the editor:

    sudo nm-connection-editor

    The Network Connections window opens.

  2. To add a connection, use the plus (+) button at the bottom of the window.

    This step opens another window that prompts you for the type of connection to create.

  3. In the window's drop-down list, under the Virtual section, select Bond, then click Create.

    The Network Bond Editor window opens.

    Figure: Network Bond Editor. The Network Connections editor is shown open, ready for configuring a new bonded network interface.
  4. Optionally configure a connection name for the bond.

  5. Add physical network interfaces to the network bond by clicking on the Add button.

    1. A new window opens in which you can select the type of physical interface to add to the network bond. For example, you can select the Ethernet type to add an Ethernet interface to the network bond. Click the Create button to configure the secondary interface.

    2. Optionally configure a name for the secondary interface.

    3. In the Device field, select the physical network interface to add as a secondary to the bond. Note that if a device is already configured for networking, it's not listed as available to configure within the bond.

    4. Click Save to add the secondary device to the network bond.

    Repeat these steps for all the physical network interfaces that make up the bonded interface.

  6. Configure the bonding mode that you want to use for the network bond.

    Select the bonding mode from the Mode drop-down list. Note that some modes might require more configuration on the network switch.

  7. Configure other bond parameters such as link monitoring as required if you don't want to use the default settings.

    If you don't intend to use DHCP for network bond IP configuration, set the IP addressing by clicking on the IPv4 and IPv6 tabs.

  8. Click the Save button to save the configuration and to create the network bond.

Verifying the Network Bond Status

  1. Run the following command to obtain information about the network bond with device name bond0:

    cat /proc/net/bonding/bond0

    The output shows the bond configuration and status, including which bond secondaries are active, together with the status of each secondary interface. Illustrative output is shown after this procedure.

  2. Temporarily disconnect the physical cable that's connected to one of the secondary interfaces; disconnecting the cable is the only reliable way to test link failure.

  3. Check the status of the bond link as shown in the initial step of this procedure. The status of the secondary interface should now indicate that the interface is down and that a link failure has occurred.
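
For reference, the output for an active-backup bond resembles the following. The values are illustrative and vary with the kernel version and hardware:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: enp1s0
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: enp1s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
...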

Working With Network Interface Teaming

Network interface teaming is similar to network interface bonding and provides an alternative way of implementing link aggregation that's relatively maintenance-free and easier to reconfigure, expand, and debug than bonding.

A lightweight kernel driver implements teaming, while the user-space teamd daemon implements the load-balancing and failover schemes, which are termed runners.

The following standard runners are defined:

activebackup

Monitors the link for changes and selects the active port that's used to send packets.

broadcast

Sends packets on all member ports.

lacp

Provides load balancing by implementing the Link Aggregation Control Protocol 802.3ad on the member ports.

loadbalance

In passive mode, uses the Berkeley Packet Filter (BPF) hash function to select the port that's used to send packets.

In active mode, uses a balancing algorithm to distribute outgoing packets over the available ports.

random

Selects a port at random to send each outgoing packet.

roundrobin

Sends packets over the available ports in a round-robin fashion.
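
A runner is selected by name in the team's JSON configuration, which the next section describes in detail. A minimal sketch, assuming a team device named team0:

{
    "device": "team0",
    "runner": {"name": "loadbalance"}
}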

For specialized applications, you can create customized runners that teamd can interpret. Use the teamdctl command to control the operation of teamd.

For more information, see the teamd.conf(5) manual page.

Configuring Network Interface Teaming

You can configure a teamed interface by creating JSON-format definitions that specify the properties of the team and each of its component interfaces. The teamd daemon then interprets these definitions. You can use the JSON-format definitions to create a team interface by starting the teamd daemon manually, by editing interface definition files in /etc/sysconfig/network-scripts, by using the nmcli command, or by using the Network Connections editor (nm-connection-editor). The following task describes the first of these methods.

  1. Create a JSON-format definition file for the team and its component ports. For sample configurations, see the files under /usr/share/doc/teamd/example_configs/.

    The following example from activebackup_ethtool_1.conf defines an active-backup configuration where eth1 is the primary port, eth2 is the backup port, and both ports are monitored by using ethtool.

    {
            "device":       "team0",
            "runner":       {"name": "activebackup"},
            "link_watch":   {"name": "ethtool"},
            "ports":        {
                    "eth1": {
                            "prio": -10,
                            "sticky": true
                    },
                    "eth2": {
                            "prio": 100
                    }
            }
    }
  2. Bring down the component ports.

    sudo ip link set eth1 down
    sudo ip link set eth2 down

    Note:

    Active interfaces can't be added to a team.

  3. Start an instance of the teamd daemon and have it create the teamed interface by reading the configuration file.

    In the following example, /root/team_config/team0.conf is used.

    sudo teamd -g -f /root/team_config/team0.conf -d
    Using team device "team0".
    Using PID file "/var/run/teamd/team0.pid"
    Using config file "/root/team_config/team0.conf"

    where the -g option displays debugging messages and can be omitted.

  4. Set the IP address and network mask prefix length of the teamed interface.

    sudo ip addr add 10.0.0.5/24 dev team0
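
If the teamed interface remains down at this point, bring it up by using the standard ip command:

sudo ip link set team0 up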

For more information, see the teamd(8) manual page.

Adding Ports to and Removing Ports from a Team

To add a port to a team, use the teamdctl command:

sudo teamdctl team0 port add eth3 
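
As noted in the configuration procedure, active interfaces can't be added to a team. If the port is up, bring it down before adding it:

sudo ip link set eth3 down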

To remove a port from a team:

sudo teamdctl team0 port remove eth3 

For more information, see the teamdctl(8) manual page.

Changing the Configuration of a Port in a Team

Use the teamdctl command to update the configuration of a constituent port of a team, for example:

sudo teamdctl team0 port config update eth1 '{"prio": -10, "sticky": false}'

Enclose the JSON-format definition in single quotes and don't split it over several lines.
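
To verify a change, dump the port's current configuration. The following sketch assumes the port config dump subcommand that the teamdctl(8) manual page describes:

sudo teamdctl team0 port config dump eth1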

For more information, see the teamdctl(8) manual page.

Removing a Team

Use the following command to halt the teamd daemon:

sudo teamd -t team0 -k

For more information, see the teamd(8) manual page.

Displaying Information About Teams

Display the network state of the teamed interface as follows:

sudo ip addr show dev team0
7: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 08:00:27:15:7a:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 scope global team0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe15:7af1/64 scope link 
       valid_lft forever preferred_lft forever

Use the teamnl command to display information about the component ports of the team:

sudo teamnl team0 ports
 5: eth2: up 1000Mbit FD 
 4: eth1: up 1000Mbit FD 

To display the current state of the team, use the teamdctl command:

sudo teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: down
      instance[link_watch_0]:
        name: ethtool
        link: down
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eth2

You can also use the teamdctl command to display the JSON configuration of the team and each of its constituent ports:

sudo teamdctl team0 config dump
{
    "device": "team0",
    "link_watch": {
        "name": "ethtool"
    },
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "eth1": {
            "prio": -10,
            "sticky": true
        },
        "eth2": {
            "prio": 100
        }
    },
    "runner": {
        "name": "activebackup"
    }
}

For more information, see the teamdctl(8) and teamnl(8) manual pages.

Configuring VLANs With Untagged Data Frames

A VLAN is a group of machines that can communicate as though they're attached to the same physical network. With a VLAN, you can group systems regardless of their actual physical location on a LAN. In a VLAN that uses untagged data frames, you create the broadcast domain by assigning the ports of network switches to the same port VLAN ID, or PVID (a value other than 1, which is the default VLAN ID). All the ports that you assign with this PVID are in a single broadcast domain. Broadcasts between devices in the same VLAN aren't visible to ports in a different VLAN, even if they exist on the same switch.

You can use the Network Settings editor or the nmcli command to create a VLAN device for an Ethernet interface.

To create a VLAN device from the command line:

sudo nmcli con add type vlan con-name bond0-pvid10 ifname bond0-pvid10 dev bond0 id 10

Running the previous command sets up the VLAN device bond0-pvid10 with a PVID of 10 for the bonded interface bond0. In addition to the regular interface, bond0, which uses the physical LAN, you now have a VLAN device, bond0-pvid10, which can use untagged frames to access the virtual LAN.

Note:

You don't need to create virtual interfaces for the component interfaces of a bonded interface. However, you must set the PVID on each switch port to which they connect.

You can also use the command to set up a VLAN device for a non-bonded interface, for example:

sudo nmcli con add type vlan con-name en1-pvid5 ifname en1-pvid5 dev en1 id 5

To obtain information about the configured VLAN interfaces, view the files in the /proc/net/vlan directory.
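
For example, assuming that the 8021q module is loaded, the config file in that directory lists each VLAN device with its VLAN ID and parent interface:

cat /proc/net/vlan/config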

You can also use the ip command to create VLAN devices. However, such devices don't persist across system reboots.

For example, you would create a VLAN interface en1.5 for en1 with a PVID of 5 as follows:

sudo ip link add link en1 name en1.5 type vlan id 5
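
The device is created in the down state and without an address. Assuming static addressing, you might then configure and activate it as follows (the address is illustrative):

sudo ip addr add 192.0.2.5/24 dev en1.5
sudo ip link set en1.5 up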

For more information, see the ip(8) manual page.