Introduction

Network bonding, also known as link aggregation or NIC teaming, allows combining multiple network interfaces into a single logical "bonded" interface. This provides performance improvements and redundancy.

There are several main benefits to bonding network interfaces on Linux:

  • Increased bandwidth – Aggregating multiple physical links raises total available throughput. With a suitable mode and multiple traffic flows, the bond can carry close to the sum of the member link speeds, though a single flow is typically limited to one link.

  • Redundancy – If one physical interface fails, traffic seamlessly transfers to the remaining interfaces. This provides fault tolerance.

  • Load balancing – Traffic can be distributed across component interfaces based on different algorithms to utilize all available links.

Linux bonding supports seven different modes to provide aggregation, fault tolerance, and load balancing. This guide covers configuring and managing bonding on Linux, including troubleshooting tips.

Bonding Modes Overview

The mode defines the policy for distributing packets across the bonded interfaces. Here is a brief overview of the seven bonding modes available in Linux:

  • balance-rr: Packets are sequentially cycled through each interface in a round-robin fashion. Provides load balancing and fault tolerance.

  • active-backup: Only one interface handles traffic while the remaining are on standby. If the primary fails, the next active interface takes over. Provides fault tolerance.

  • balance-xor: Transmissions are based on the selected hash policy to determine the outbound interface. Provides load balancing and fault tolerance.

  • broadcast: Transmits everything on all interfaces. Provides fault tolerance.

  • 802.3ad: Creates aggregation groups dynamically and distributes traffic based on transmit hash policy. Requires support on the switch. Provides load balancing and fault tolerance.

  • balance-tlb: Adaptive transmit load balancing for channel bonding that does not require support from the switch. Provides load balancing and fault tolerance.

  • balance-alb: Adaptive load balancing using ARP negotiation to verify forwarding paths and select the best interface for each connection based on current load. Provides load balancing and fault tolerance.

The mode you choose depends on your specific goals, whether optimizing for maximum throughput, redundancy, or adapter utilization. Active-backup is commonly used because it offers fault tolerance without needing switch support.

For load balancing, 802.3ad is a solid option if your switches allow Link Aggregation Control Protocol (LACP). Otherwise, balance-rr provides a simple round-robin approach. Adaptive modes like balance-tlb and balance-alb are also very effective.
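The kernel accepts each mode either by its name or by its numeric index (0–6). As a quick reference, here is a small shell sketch mapping the numbers to the names used above:

```shell
# Bonding mode numbers as understood by the kernel's bonding driver
bond_mode_name() {
  case "$1" in
    0) echo "balance-rr" ;;
    1) echo "active-backup" ;;
    2) echo "balance-xor" ;;
    3) echo "broadcast" ;;
    4) echo "802.3ad" ;;
    5) echo "balance-tlb" ;;
    6) echo "balance-alb" ;;
    *) echo "unknown" >&2; return 1 ;;
  esac
}

bond_mode_name 1   # active-backup
```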

Now let's see how to configure bonding on Linux!

Installing Bonding Utilities

Most Linux distributions ship with the bonding driver built into the kernel, so it can be activated using modprobe.

However, on Debian-based systems you also need the userspace helper for managing bond interfaces:

# Debian/Ubuntu
sudo apt install ifenslave

This provides the ifenslave tool used to attach slave interfaces to a bond. On RHEL/CentOS there is no separate package to install: bonds are configured through NetworkManager (nmcli) or ifcfg network scripts. The bonding.ko kernel module itself ships with the kernel on all major distributions.

You can verify bonding is properly installed:

$ modinfo bonding
$ lsmod | grep bonding

The first command confirms the module is available; the second shows whether it is currently loaded.

With the utilities ready, we can move on to setting up bonding.

Creating a Bonded Interface

I'll demonstrate configuring an active-backup bond between two Ethernet interfaces – eth0 and eth1. Here are the steps:

  1. Identify the interfaces to bond:

    $ ip link
    
    1: lo: <LOOPBACK> [...]
    2: eth0: <BROADCAST,MULTICAST> [...]
    3: eth1: <BROADCAST,MULTICAST> [...]
  2. Take down the interfaces getting bonded to avoid issues:

    # ifdown eth0 eth1
  3. Create the bond interface bond0:

    # modprobe bonding
    # ip link add bond0 type bond

    Using modprobe bonding loads the bonding driver module if not already enabled in the running kernel.

  4. Configure bond properties via sysfs (the mode must be set while bond0 is down and before any slaves are attached):

    # echo active-backup > /sys/class/net/bond0/bonding/mode
    # echo 100 > /sys/class/net/bond0/bonding/miimon
    # echo 200 > /sys/class/net/bond0/bonding/downdelay
    # echo 200 > /sys/class/net/bond0/bonding/updelay

    This sets active-backup mode, enables MII link monitoring at 100 millisecond intervals, and adds short delays before a slave is disabled or re-enabled so that brief link flaps do not trigger failover. Both delays must be multiples of miimon. ARP monitoring (arp_interval with one or more arp_ip_target addresses) is an alternative to MII monitoring; the two should not be enabled together.

  5. Attach slave interfaces to the bond:

    # ifenslave bond0 eth0
    # ifenslave bond0 eth1

  6. Assign an IP address to bond0:

    # ip addr add 192.168.10.100/24 dev bond0
  7. Bring bond0 up:

    # ip link set bond0 up

And we now have an active-backup bond ready to use at 192.168.10.100!

The slave interfaces eth0 and eth1 do not alternate: one is active and carries all traffic while the other stands by, and the bond fails over to the standby only when the active link goes down.
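For reference, the manual steps above can be collected into one function. This is a sketch only – it assumes root privileges and that your NICs really are named eth0 and eth1:

```shell
# Sketch: create an active-backup bond of eth0 and eth1 (run as root).
# Mirrors the manual steps above; not a substitute for persistent config.
create_bond0() {
  modprobe bonding
  ip link set eth0 down
  ip link set eth1 down
  ip link add bond0 type bond
  # Mode must be set while bond0 is down and has no slaves
  echo active-backup > /sys/class/net/bond0/bonding/mode
  echo 100 > /sys/class/net/bond0/bonding/miimon
  ifenslave bond0 eth0 eth1
  ip addr add 192.168.10.100/24 dev bond0
  ip link set bond0 up
}
```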

Setting a Static Bond Configuration

For reliability, you generally want bonding enabled automatically on boot instead of needing to run the commands each time.

Here is how to configure a static bond0 interface via /etc/network/interfaces:

auto bond0
iface bond0 inet static
        address 192.168.10.100
        netmask 255.255.255.0
        gateway 192.168.10.1
        bond-mode active-backup
        bond-miimon 100
        bond-downdelay 200 
        bond-updelay 200
        bond-slaves none

auto eth0 
iface eth0 inet manual
        bond-master bond0
        bond-primary eth0 eth1

auto eth1
iface eth1 inet manual 
       bond-master bond0
       bond-primary eth0 eth1

This creates the bond when the interfaces start. The slaves are attached using bond-master and will join according to the bond-* options set under bond0.

The IP configuration lives on bond0 rather than on the individual slaves; the slaves are declared manual and carry no addresses of their own, since traffic only flows through the bond.

After changing /etc/network/interfaces you need to restart the networking service:

$ sudo systemctl restart networking

Now bond0 comes up automatically on each boot!
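One constraint worth knowing: bond-downdelay and bond-updelay must be multiples of bond-miimon, or the driver rounds them down to the nearest multiple. A quick arithmetic check (sketch):

```shell
# Check that a failover delay is a multiple of the miimon interval;
# the bonding driver rounds non-multiples down to the nearest multiple.
check_delay() {
  miimon=$1; delay=$2
  if [ $((delay % miimon)) -eq 0 ]; then
    echo "ok"
  else
    echo "rounded down to $(( delay / miimon * miimon ))"
  fi
}

check_delay 100 200   # ok
check_delay 100 250   # rounded down to 200
```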

Using Channel Bonding for Increased Speed

When the goal is maximum throughput, combining interfaces with channel/NIC bonding is useful. This aggregates bandwidth to enhance speed beyond the limits of one card.

Here is an example using the balance-rr mode to balance traffic across a 2 x 1 Gb/s NIC bond:

auto bond0 
iface bond0 inet static
        address 10.10.10.100
        netmask 255.255.255.0
        bond-mode balance-rr
        bond-miimon 100
        bond-downdelay 200 
        bond-updelay 200
        bond-slaves none

auto eth0
iface eth0 inet manual 
       bond-master bond0

auto eth1
iface eth1 inet manual
        bond-master bond0

This will alternate packets across eth0 and eth1 in a round-robin manner. With multiple flows, aggregate throughput can approach 2 Gb/s across the two bonded 1 GbE links. Note, however, that balance-rr can reorder packets within a single TCP stream, which may reduce single-flow performance.
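To verify the aggregate throughput you are actually getting, a parallel iperf3 run is a reasonable check. This is a sketch: it assumes iperf3 is installed on both hosts and uses a hypothetical peer address for illustration:

```shell
# Measure bond throughput with iperf3 (10.10.10.1 is a hypothetical peer).
# Multiple parallel streams (-P) are needed to exercise both links,
# since a single stream is hashed or pinned to one path in most modes.
measure_bond() {
  # On the peer, first run:  iperf3 -s
  iperf3 -c 10.10.10.1 -P 4 -t 30
}
```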


There are also aggregation modes like 802.3ad/LACP that provide more advanced load distribution algorithms. However, the above setup is straightforward and still offers excellent flexibility and performance.

Adding WiFi Redundancy via Bonding

In addition to physical interfaces, bonding can also combine virtual and wireless adapters to add redundancy.

Here is an example adding WiFi failover to a wired desktop using an active-backup bond:

/etc/network/interfaces:

[...] 

auto bond0
[...]

auto eth0
iface eth0 inet manual
       bond-master bond0
       bond-primary eth0 wlp2s0 

auto wlp2s0
iface wlp2s0 inet manual
       wpa-ssid "HomeWiFi"
       wpa-psk "wifi_password"
       bond-master bond0
       bond-primary eth0 wlp2s0

[...]

This uses the wired Ethernet card as the primary; if it loses link, the WiFi takes over for redundancy.

With wpa-ssid and wpa-psk set, the wireless interface can associate with your network automatically, so it is ready to carry traffic the moment failover occurs.

The wired interface returns to primary if it regains connectivity thanks to bond-primary preference order. This offers backup connectivity without sacrificing hardwired speed when available.

Monitoring Bonds

There are a few handy commands for checking on bond status and performance:

  • ip link show – See the bond and its slave interfaces with their current link states.
  • cat /proc/net/bonding/[bond] – Check bond properties, MII status, slave details.
  • ethtool [interface] – Query specific slave NIC stats and health metrics.

Some examples:

# See all bond data
$ cat /proc/net/bonding/bond0

# Check slave counters   
$ ethtool eth0

# Verify master & slave status
$ ip link show bond0

This info helps monitor interface loads, response times, error counters, and other useful telemetry.
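As an example of pulling a single value out of /proc/net/bonding, this sketch extracts the currently active slave. It runs against a captured sample of the file's contents; on a live system you would pipe in /proc/net/bonding/bond0 itself:

```shell
# Extract "Currently Active Slave" from bonding status output.
# $sample mimics /proc/net/bonding/bond0 in active-backup mode.
sample='Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100'

active_slave() {
  awk -F': ' '/^Currently Active Slave:/ { print $2 }'
}

printf '%s\n' "$sample" | active_slave   # eth0
```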

Logs are also invaluable when dealing with bonding anomalies:

$ dmesg | grep -i bond
$ journalctl -u networking

Review logs to help diagnose bonding communications issues and get visibility into any alarms or failures.
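Failover events appear in the kernel log with wording similar to the sample lines below (sample text mimicking typical bonding driver messages); counting them gives a quick feel for how often a link is flapping:

```shell
# Count link-down events in captured kernel log output.
# $log is sample text mimicking typical bonding driver messages.
log='bond0: (slave eth0): link status definitely down, disabling slave
bond0: (slave eth1): making interface the new active one
bond0: (slave eth0): link status definitely up'

printf '%s\n' "$log" | grep -c 'definitely down'   # 1
```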

Troubleshooting Bonds

Here are some common bonding problems and potential fixes:

Bond goes down frequently

  • Tune miimon to detect dropped links sooner or less aggressively
  • Increase failover delays (downdelay/updelay) if bonded interfaces flap
  • Check cables, ports, and switch settings for intermittent physical disconnects

High latency/slow throughput

  • Inspect slave interface loads and errors with ethtool
  • Balance traffic better with other modes like balance-rr
  • Disable auto-negotiation and force link speed/duplex if needed

One slave missing from bond

  • Verify switch config allows aggregated links
  • Check if interface was disabled/disconnected
  • Try forcing specific speed/duplex to match peers

ARP problems after failover

  • Configure interface ARP monitoring (arp_interval, arp_ip_target)
  • Increase delays slightly (downdelay/updelay) before status change
  • Consider VRRP to assign virtual MAC and ARP handling

Thoroughly reviewing metrics and system logs will provide context to narrow down the root cause when you encounter bonding problems.

Closing Thoughts

Implementing link aggregation, failover, and load balancing with bonded NICs has many benefits for networking resilience and performance.

Linux offers very flexible options to meet requirements with its bonding driver framework and tools like ifenslave.

I hope this guide provided greater insight into configuring interface bonding on Linux! Let me know if you have any other questions.
