iftop Command in Linux with Examples

The first time I chased a “slow network” alert on a production box, I wasted an hour staring at CPU charts and disk queues. The real culprit was a single host streaming backups over the wrong interface. That experience taught me a simple lesson: when packets are the problem, you need a tool that speaks in packets. iftop is the fastest way I know to see what is actually flowing across a network interface in real time. It behaves like the familiar top command, but instead of processes, it shows conversations between hosts and how much bandwidth each one is consuming.

If you’re responsible for servers, containers, or even a laptop on a busy Wi‑Fi network, iftop gives you instant clarity. I’ll walk you through installation, the core options, and the real output you can expect. Then I’ll move beyond basics with workflows I use in 2026: pairing iftop with modern observability stacks, containerized deployments, and AI‑assisted triage. Along the way, I’ll call out common mistakes, when not to use iftop, and how to read its output without guessing. By the end, you’ll be able to diagnose bandwidth spikes in minutes, not hours.

Why iftop still matters in 2026

I love modern dashboards, but there are times when you want direct, local, zero‑latency visibility. iftop sits in that sweet spot. It gives you an immediate, high‑resolution view of bandwidth usage at the interface level. Unlike flow‑export tools that require infrastructure setup, iftop is a single command that works anywhere you can run it.

Here’s why I still reach for it:

  • It shows live traffic per connection, not just totals.
  • It runs locally, so you can use it even when remote telemetry is down.
  • It is fast to install and trivial to remove.
  • It doesn’t require packet capture files or long analysis pipelines.

Think of it like a live scoreboard for your network interface. Each line is a “match” between two hosts, and the bandwidth is the score. When one host dominates the scoreboard, you know where to start investigating.

Installation across popular distributions

I’ve installed iftop on everything from bare‑metal servers to ephemeral containers. The package name is consistent, but the repository setup changes with the distribution.

Red Hat‑based systems (version 8 or below)

sudo yum install epel-release
sudo yum install iftop

Red Hat‑based systems (version 9)

sudo dnf install epel-release
sudo dnf install iftop

Debian or Ubuntu

sudo apt install iftop

I recommend verifying the binary afterward with:

iftop -h

You should see the help output and the list of flags.
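If you script provisioning, the distribution logic above can be captured in a tiny helper. This is just a sketch: it maps the ID field from /etc/os-release to the install commands shown above, and the distro list is an assumption you should extend for your own fleet.

```shell
#!/bin/sh
# Map a distro ID (the ID= field in /etc/os-release) to the iftop
# install command(s). The distro list is illustrative, not exhaustive.
iftop_install_cmd() {
  case "$1" in
    debian|ubuntu)
      echo "sudo apt install iftop" ;;
    rhel|centos|rocky|almalinux)
      echo "sudo dnf install epel-release && sudo dnf install iftop" ;;
    *)
      echo "unsupported distro: $1" >&2
      return 1 ;;
  esac
}

# On a live host you would feed it the real ID:
#   . /etc/os-release && iftop_install_cmd "$ID"
```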

Core concepts: what you are seeing on screen

When iftop starts, it opens a real‑time display. It aggregates traffic by “conversation,” meaning a source and destination pair. Each row shows bandwidth for different time windows. The default window set is usually 2s, 10s, and 40s, so you can see short spikes and longer trends at the same time.

Here’s the mental model I use:

  • Interface: the network card or virtual interface you are observing.
  • Rows: each row is a pair of hosts exchanging traffic.
  • Columns: each column is a bandwidth average over a time slice.
  • Bars: a visual hint of how busy a conversation is.

A quick analogy: if you think of your interface as a highway, each row is a road segment between two cities. The columns tell you whether traffic is only spiking right now or staying busy for longer intervals.

Essential commands with practical examples

These are the commands I run most often. I’ll show the exact flags and explain when I choose each one.

Basic bandwidth view on the default interface

iftop

I start here when I just need “what’s happening right now.” It chooses the default interface, often eth0 or ens160. This is the fastest path to insight.

Specify a particular interface

sudo iftop -i wlo1

I use this when I know the traffic is on a specific interface, like Wi‑Fi (wlo1) or a VLAN. Always confirm the interface name with ip link before you run it.

Disable hostname resolution for speed

sudo iftop -n -i wlo1

When DNS is slow, hostname resolution makes the display lag. The -n flag keeps it fast and shows raw IPs. I use this for busy servers or when DNS timeouts are expected.

Disable service name conversion for clarity

sudo iftop -N -i wlo1

If you don’t want port numbers converted into service names, use -N. This is handy when you are correlating with firewall logs or packet captures that list raw ports.

Hide the bar graph

sudo iftop -b

The bar graph can be visually noisy in terminals with limited space. I turn it off when I’m working in split panes or remote consoles.

Text output without ncurses

sudo iftop -t

This is the mode I use for logging or piping. It produces plain text and works well in scripts.

Sort by source address

sudo iftop -o source

I use this when I suspect a single host is flooding the network and I want to group flows by their source IP.

Sort by destination address

sudo iftop -o destination

This is the opposite view. I pick it when a service is receiving more traffic than expected and I want to see who is sending it.

Limit the number of lines

sudo iftop -L 2 -i wlo1

This restricts output to the busiest two lines. It’s great for quick triage when the list is huge.

Help and options

iftop -h

I use the help output as a quick reminder of lesser‑known flags when I’m working on a new system.

Reading the display like a pro

New users often misread the output. The most common mistake is focusing on the bar graph while ignoring the time‑averaged columns. I recommend the opposite: start with the columns and use bars only as a quick sanity check.

A typical display shows:

  • Source and destination IPs
  • Bandwidth over 2s, 10s, 40s windows
  • Totals for sent and received traffic

If the 2s column spikes but the 40s column stays low, you’re seeing short bursts, not sustained load. If all columns are high, then you’re in a true congestion scenario. This is important when you decide whether to rate‑limit, reroute, or ignore.

I also watch the bottom totals:

  • Total send rate: outbound traffic from the interface
  • Total receive rate: inbound traffic to the interface

If the totals are high but no single conversation stands out, you might be dealing with many small flows, which is a different mitigation strategy than a single massive transfer.
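The burst-versus-sustained reasoning above is mechanical enough to sketch in a few lines. The thresholds here are illustrative placeholders (rates in Kb), not anything iftop defines; tune them to your links.

```shell
#!/bin/sh
# Classify a conversation from its 2s and 40s averages (same unit,
# e.g. Kb). Threshold values are illustrative, not iftop defaults.
classify_flow() {
  awk -v r2="$1" -v r40="$2" 'BEGIN {
    if (r2 > 1000 && r40 < 100)       print "burst"      # spiking now, calm trend
    else if (r2 > 1000 && r40 > 1000) print "sustained"  # high across all windows
    else if (r2 < 100 && r40 > 1000)  print "just-ended" # the storm just passed
    else                              print "quiet"
  }'
}
```

For example, `classify_flow 5000 50` prints `burst`, while `classify_flow 5000 5000` prints `sustained`.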

Common mistakes and how I avoid them

I see the same pitfalls again and again. Here’s how I sidestep them.

  • Running without sudo: iftop needs elevated permissions to capture traffic. If it runs but shows nothing, check permissions first.
  • Watching the wrong interface: servers often have multiple interfaces. Always verify with ip link or ip route.
  • Using hostname lookups on busy systems: that leads to stalls and incorrect conclusions. Use -n when in doubt.
  • Misreading short spikes: a single burst isn’t always a problem. Look at the longer window columns before reacting.
  • Ignoring container or VM contexts: on hosts with containers, the traffic might be on a bridge interface like docker0 or a CNI interface. Pick the correct one.

When I use iftop vs when I do not

iftop is great, but it’s not the right tool for every scenario. I follow simple rules:

Use iftop when:

  • You need an immediate, local view of bandwidth usage.
  • You want to identify top talkers fast.
  • You are debugging a live incident and can’t wait for dashboards.

Avoid iftop when:

  • You need historical trends over hours or days.
  • You need packet‑level detail or payload inspection.
  • You are analyzing encrypted traffic and need application context.

In those cases, I use flow logs, packet captures, or a metrics pipeline. iftop is a live microscope, not a long‑term historian.

Practical workflows I use in 2026

Modern ops isn’t only about a single command. Here’s how I integrate iftop into real workflows.

Pairing with systemd and journald

When I need a quick text snapshot, I run:

sudo iftop -t -s 10 > /tmp/iftop-snapshot.txt

I then attach that snapshot to an incident ticket. It’s short, easy to scan, and captures a 10‑second view. If I need longer monitoring, I loop it in a shell script and timestamp the output.
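Here is the looping variant I mentioned, as a sketch. SNAPSHOT_CMD defaults to a placeholder echo so the script runs anywhere; on a real host you would set it to the actual iftop invocation (the interface name in the comment is an assumption).

```shell
#!/bin/sh
# Capture N timestamped snapshots into a directory for incident notes.
# SNAPSHOT_CMD is a placeholder by default; in real use set it to
# something like: sudo iftop -t -s 10 -n -i ens192
snapshot_loop() {
  out_dir=$1
  count=$2
  cmd=${SNAPSHOT_CMD:-"echo placeholder-snapshot"}
  i=0
  while [ "$i" -lt "$count" ]; do
    stamp="$(date +%F-%H%M%S)-$i"   # suffix keeps names unique within a second
    $cmd > "$out_dir/iftop-$stamp.log"
    i=$((i + 1))
  done
}
```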

Using with containers

On container hosts, I check the interface list first:

ip link

Then I watch the relevant bridge or CNI interface:

sudo iftop -i cni0 -n

This tells me if the traffic is inside the container network or leaving the host.
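Picking the right interface is the whole game on container hosts, so I sometimes pre-filter the ip -o link output. The sample below is made up for illustration; on a real host you would pipe ip -o link into the same awk.

```shell
#!/bin/sh
# Extract likely container-side interfaces (bridges, veths) from
# `ip -o link` output. The prefix list is an assumption; adjust it
# for your CNI plugin.
container_ifaces() {
  awk -F': ' '{
    split($2, a, "@")          # veth names look like veth1a2b3c@if2
    if (a[1] ~ /^(docker|cni|veth|br-)/) print a[1]
  }'
}

# Illustrative (fake) `ip -o link` output:
sample='1: lo: <LOOPBACK,UP> mtu 65536
2: ens192: <BROADCAST,UP> mtu 1500
3: docker0: <BROADCAST,UP> mtu 1500
4: veth1a2b3c@if2: <BROADCAST,UP> mtu 1500'

echo "$sample" | container_ifaces
```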

Correlating with firewall logs

I often combine iftop with firewall data. I take the top source or destination IPs from iftop and then search for them in firewall logs. This correlation helps me confirm whether the traffic is allowed, blocked, or rate‑limited. It’s a fast path from observation to action.

AI‑assisted incident triage

I sometimes paste a short iftop text output into an internal AI tool to generate hypotheses. The AI can quickly suggest likely causes, like backups, replication traffic, or unexpected outbound bursts. I still verify manually, but it speeds up the first guess.

Performance considerations and overhead

iftop is lightweight, but it does inspect traffic at the interface, so on a busy server there is some overhead. In my experience the CPU cost is small, but it grows with traffic volume and with hostname resolution.

Here’s what I do to keep it light:

  • Use -n to skip DNS lookups.
  • Avoid running it for long periods on the busiest nodes.
  • Consider short snapshots rather than continuous monitoring.

If you need continuous, long‑term analysis, then flow export or eBPF tooling might be the better fit. I treat iftop as a tactical tool, not a permanent sensor.

Key options summary

I keep this table in my notes. It’s a fast reference when I’m under pressure.

  • iftop: show live bandwidth on the default interface.
  • -i wlo1: observe a specific interface.
  • -n: skip hostname lookup for speed.
  • -N: keep raw port numbers.
  • -b: hide the bar graph.
  • -t: text output for logs or scripts.
  • -o source: sort by source address.
  • -o destination: sort by destination address.
  • -L 2: show only the top 2 lines.
  • -h: help output.

Real‑world scenario: finding the backup flood

Here’s a scenario I’ve handled more than once: a database replica suddenly falls behind, and the monitoring says “network latency.” I log into the database host, run:

sudo iftop -n -i ens192

In the output, I see a single host saturating outbound bandwidth to an IP that is clearly a backup target. The 2s and 10s windows are high, and the 40s window is climbing. That tells me this isn’t a burst; it’s sustained load. I take the source IP, check the backup schedule, and realize the job started at the wrong time.

In minutes, I pause that job and the replica catches up. No complex profiling, no packet captures, just a live view that showed the truth.

Troubleshooting checklist I follow

When traffic looks strange, I run through this fast checklist:

  • Confirm interface name with ip link.
  • Run sudo iftop -n -i <interface>.
  • Check if totals are asymmetric (send vs receive).
  • Identify top source or destination.
  • Cross‑check with firewall or application logs.
  • Decide whether this is a burst or sustained flow.

This checklist keeps me focused and prevents random guesswork.

Practical do’s and don’ts

To close out the core guidance, here’s the distilled version I give to teams:

Do:

  • Use -n for speed and accuracy under load.
  • Capture short text snapshots with -t when you need evidence.
  • Watch the longer time window columns to avoid overreacting.

Don’t:

  • Assume the default interface is the right one.
  • Confuse a brief spike with a real incident.
  • Leave it running forever on production nodes.

Where iftop fits in a modern toolchain

In 2026, I still rely on full observability stacks: time‑series metrics, flow logs, and AI‑assisted anomaly detection. iftop doesn’t replace those. It complements them by giving immediate, local insight when you are on the box and need an answer now.

If you’re building a modern workflow, here’s how I suggest combining tools:

  • Traditional: manual checks with iftop. Modern: automated alerts that point you to iftop for verification.
  • Traditional: packet capture for every issue. Modern: short iftop snapshots plus targeted captures only when needed.
  • Traditional: long on‑box troubleshooting. Modern: rapid triage with iftop, then off‑box analysis in dashboards.

This hybrid model keeps you fast without sacrificing depth.

You now have everything you need to make iftop a reliable part of your network‑debugging toolkit. The immediate steps I recommend are simple: install it, run it on the interface that matters, and spend a few minutes learning how the time windows behave. Once you’ve done that, try a short text snapshot and compare it with your normal monitoring graphs. You’ll quickly see where it shines.

If you’re dealing with a persistent bandwidth issue, I would capture a short iftop snapshot, identify the top talker, and then pivot into your logs or flow data. That path gives you a quick answer and a reliable trail for deeper analysis. And if you’re building new operational runbooks, add a section that calls for iftop during live incidents. It’s the sort of simple tool that, used at the right moment, saves real time and real money.

Deeper flags and interactive controls you should know

iftop is deceptively simple. The flags are only half the story; the interactive keys are the other half. I keep a short mental map of the most useful controls so I can explore without quitting the session.

Here are the controls I use most:

  • h: toggles the help screen. I use this if I blank on a key combination.
  • n: toggles DNS resolution in the live display. Great when a screen is laggy or when you want quick IP‑only views.
  • N: toggles service name resolution (ports vs names). Useful when correlating with firewall logs.
  • S and D: toggle display of source and destination ports. Good for quick directionality checks.
  • t: cycles the line display mode (two‑line, sent only, received only, combined).
  • P: pauses updating. This is crucial when you want to take a snapshot or read a specific line.
  • q: quits. I put this here because muscle memory matters under stress.

I also use T to toggle cumulative totals per connection. That helps differentiate “this has moved a lot this session” from “this is big right now.”

If you’re new to the tool, it’s worth a single five‑minute session where you just press keys and observe. That muscle memory pays off in real incidents.

Understanding time windows like a signals engineer

I’ve seen smart engineers misinterpret the 2s/10s/40s columns, so I’ll make it concrete. Think of each window as a different lens:

  • 2s: zoomed‑in, best for momentary bursts.
  • 10s: balances short spikes with trend.
  • 40s: zoomed‑out, best for sustained behavior.

If the 2s column is high and the 40s is low, you’re in a bursty or cyclical workload. That could be normal if it aligns with cron jobs, cache refreshes, or batch events.

If all three are high, you’re looking at sustained throughput. That’s when you check if the interface is at or near its capacity, whether latency is rising, or whether packet drops are appearing in other tools.

If the 2s is low and the 40s is high, that usually means the storm just passed. I treat that as a clue to correlate with what just ran.
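To make the lens metaphor concrete, here is a scaled-down numeric analogue: the same made-up series averaged over a short and a long window. The numbers are invented; the point is that the long window dilutes a burst.

```shell
#!/bin/sh
# Average the last N samples of a series, a toy analogue of iftop's
# 2s/10s/40s windows. Series values are invented for illustration.
avg_window() {
  n=$1; shift
  echo "$@" | tr ' ' '\n' | tail -n "$n" |
    awk '{sum += $1} END {printf "%d\n", sum / NR}'
}

series="0 0 0 0 0 0 100 100"   # a burst in the last two samples
avg_window 2 $series           # the short window sees the burst: 100
avg_window 8 $series           # the long window dilutes it: 25
```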

Filtering traffic: focus on what matters

iftop supports filtering so you can isolate traffic, which is huge in noisy environments: the lowercase -f flag takes a pcap‑style filter expression, while the uppercase -F flag restricts analysis to a given IPv4 network. I use filters constantly in production because raw traffic on a busy node can be overwhelming.

Filter by host

sudo iftop -i ens192 -n -f "host 10.0.1.5"

I use this when a single host is suspected. It reduces noise and helps confirm whether the host is actually the top talker.

Filter by network

sudo iftop -i ens192 -n -F 10.0.0.0/16

This is useful when isolating internal traffic, for example when I want to ignore public traffic and focus on internal east‑west flows.

Filter by port

sudo iftop -i ens192 -n -f "port 443"

This helps when I want to watch HTTPS traffic only, or isolate a specific service port. It’s especially useful when you want to observe a single backend service without drowning in other flows.

Combine filters

sudo iftop -i ens192 -n -f "host 10.0.1.5 and port 5432"

This is how I zero in on database traffic from a single app host. In environments where a single node hosts multiple services, this is the fastest way to isolate the noise.

Filters reduce cognitive load. In my experience, it’s better to narrow too much and relax the filter than to start with a noisy view.
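When I'm assembling filters under pressure, I like a tiny helper that composes the -f expression for me. This is a sketch with a hypothetical helper name; iftop itself just receives the resulting pcap string.

```shell
#!/bin/sh
# Compose a pcap-style expression for iftop's -f flag from optional
# host and port arguments. build_filter is my own helper name.
build_filter() {
  host=$1
  port=$2
  filter=""
  [ -n "$host" ] && filter="host $host"
  if [ -n "$port" ]; then
    [ -n "$filter" ] && filter="$filter and "
    filter="${filter}port $port"
  fi
  echo "$filter"
}

# Real use (interface name is an assumption):
#   sudo iftop -n -i ens192 -f "$(build_filter 10.0.1.5 5432)"
```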

Edge cases you should expect

iftop is reliable, but real systems are messy. I’ve hit edge cases that made me question the output until I understood the limits.

Encrypted overlays and tunnels

If you’re using VPNs, tunnels, or service meshes, you might see only the encrypted outer traffic. That means you’ll see flows between node‑to‑node endpoints rather than service‑to‑service endpoints. In this case, iftop tells you which nodes are heavy, but not which internal services are responsible. I treat that as a prompt to correlate with service‑mesh metrics or eBPF tools that can see inside the tunnels.

Offloaded or accelerated NICs

Some high‑performance NICs offload work from the CPU. iftop still sees traffic, but the timing and rates can look a bit different than expected, especially at very high throughput. I take the numbers as directional rather than absolute in those environments and correlate with switch or router counters.

Virtualized interfaces and veth pairs

On container hosts, traffic might appear both on a bridge (cni0, docker0) and on veth pairs. That can make totals look “double counted” if you watch the wrong interface. My rule: watch the interface where traffic leaves the host if you care about external bandwidth, and watch the bridge if you care about internal traffic.

Asymmetric routing

Sometimes inbound and outbound traffic are on different paths. If you only watch one interface, you’ll miss half the picture. I check routing tables (ip route) and then watch both interfaces if needed. When I see odd asymmetry in the totals, I assume routing is the issue until proven otherwise.

Practical scenarios beyond the obvious

It’s easy to say “use iftop for spikes,” but the tool shines in less obvious situations too. Here are some real‑world problems where it delivered fast answers for me.

1) Diagnosing unexpected egress costs

Cloud bills often spike from outbound data transfer. When I see a cost anomaly, I jump onto a node and use iftop to find which destination is receiving the bulk of traffic. Even a 10‑minute snapshot can reveal whether a service is misconfigured and sending data to the wrong region.

2) Tracking replication storms

During data replication incidents, iftop shows whether traffic is one‑way or bi‑directional, and which replica pairs are hot. That helps decide whether to throttle a job or reroute traffic to another window.

3) Proving a DDoS is not the culprit

Not every traffic spike is a DDoS. iftop helps differentiate “lots of different sources” from “one big internal transfer.” If I only see one or two internal IPs dominating the traffic, I stop chasing external threats and look inward.

4) Debugging cross‑zone latency

When application latency is high, I’ve used iftop to confirm that traffic is unexpectedly crossing zones or regions. A single cross‑zone link with sustained traffic is often enough to explain the lag and the cost.

5) Finding leaky backups

Misconfigured backups can run far too often. iftop makes those patterns obvious because you can see the same host pushing data repeatedly. When I correlate with cron or scheduler logs, it usually becomes clear.

iftop vs other tools: the short, practical comparison

I love specialized tools, but I also want to know when to switch. Here’s how I think about common alternatives.

iftop vs nload

nload shows total inbound and outbound bandwidth, but not per‑connection detail. It’s good for quick totals but not for finding the culprit. I use nload when I just need to know “is this interface busy,” and iftop when I need “who is doing it.”

iftop vs iptraf-ng

iptraf‑ng gives a more detailed breakdown and can capture more protocol‑specific info. It’s powerful but heavier. I reach for iptraf‑ng when I need protocol stats or more historical visibility, and iftop when I need the simplest live view.

iftop vs tcpdump

tcpdump is a packet‑level scalpel. It’s great for deep analysis but too verbose for quick triage. iftop is a higher‑level overview. I often start with iftop, then use tcpdump once I know which host or port to target.

iftop vs eBPF tools

eBPF tools are incredible for deep observability, but they require setup and are not always available in production. iftop is a “just run it” tool. I use eBPF when I need kernel‑level insight, and iftop when I need fast answers without infrastructure changes.

Using iftop in scripts and automation

While iftop is interactive, it’s surprisingly useful in automation when you use -t and -s.

Generate a timed snapshot for runbooks

sudo iftop -t -s 15 -i ens192 -n > /var/log/iftop-$(date +%F-%H%M%S).log

This gives me a log file I can attach to tickets or store for incident review. It’s a snapshot of reality, not just a graph.

Quick top‑talker check in a loop

while true; do
  sudo iftop -t -s 5 -n -i ens192 | tail -n +3 | head -n 10
  sleep 10
  echo "----"
done

I use this when I want a rolling view without opening the interactive UI. It’s also good when I’m on a limited terminal or when a UI session is too heavy.

Triggering an alert on sustained high traffic

I don’t recommend relying solely on iftop for alerting, but for a quick on‑box guardrail it can work:

sudo iftop -t -s 5 -n -i ens192 | awk '/Total send rate/{print $4}'

This extracts total send rate. You can compare it to a threshold and notify yourself in a pinch. It’s not enterprise‑grade, but it can save time during a live debugging session.
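To compare rates against a threshold, you first need to normalize iftop's human-readable units. This converter assumes 1024-based b/Kb/Mb/Gb suffixes, which matches the output I usually see, but verify against your build.

```shell
#!/bin/sh
# Convert an iftop-style rate string (640b, 1.25Kb, 2Mb, ...) to plain
# bits per second. Assumes 1024-based units; check your iftop build.
rate_to_bits() {
  echo "$1" | awk '{
    mult = 1
    if ($0 ~ /Kb$/)      mult = 1024
    else if ($0 ~ /Mb$/) mult = 1024 * 1024
    else if ($0 ~ /Gb$/) mult = 1024 * 1024 * 1024
    gsub(/[KMG]?b$/, "")       # strip the unit suffix
    printf "%d\n", $0 * mult
  }'
}

# e.g. a crude on-box guardrail against a 50Mb threshold:
#   [ "$(rate_to_bits "$rate")" -gt "$(rate_to_bits 50Mb)" ] && echo "high egress"
```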

Deeper example: isolating a noisy microservice

Here’s a real‑style workflow I use when a microservice suddenly saturates a link.

1) Identify the busy interface

ip route | head -n 5

2) Start iftop on the suspected interface

sudo iftop -n -i ens192

3) Filter by service port

sudo iftop -n -i ens192 -f "port 8080"

4) Find top destination

If the destination is a load balancer, I then pivot to that host. If the destination is a single IP, I go straight to that node.

5) Correlate with app logs

I search for “large responses” or “bulk export” activity in the app logs. Often it’s an accidental batch job or a stuck retry loop.

The key is that iftop quickly narrows the search space. It doesn’t solve the root cause, but it points me where to look.

Reading totals and understanding directionality

The bottom totals can be confusing, especially if you’re comparing to external monitoring. Here’s how I reason about it:

  • Total send rate reflects outbound traffic from the interface.
  • Total receive rate reflects inbound traffic to the interface.
  • Total cumulative (if enabled) reflects the sum over the session, not a historical view.

If outbound is far higher than inbound, I suspect a data export, backup, or replication. If inbound is higher, I suspect downloads, syncs, or external clients hitting a service. When both are high, it could be a full duplex transfer or a hot internal node talking to multiple peers.
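That reasoning is easy to encode as a first-guess helper. The 3x ratio here is my personal rule of thumb, not anything iftop defines.

```shell
#!/bin/sh
# Suggest where to look first based on send (tx) and receive (rx)
# totals in the same unit. The 3x ratio is a personal rule of thumb.
direction_hint() {
  awk -v tx="$1" -v rx="$2" 'BEGIN {
    if (tx == 0 && rx == 0)  print "idle"
    else if (tx >= 3 * rx)   print "outbound-heavy: check backups, exports, replication"
    else if (rx >= 3 * tx)   print "inbound-heavy: check downloads, syncs, clients"
    else                     print "balanced"
  }'
}
```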

Security and privacy considerations

iftop is powerful, and with power comes risk. It can expose which hosts are talking and how much data is moving. In sensitive environments, I follow a few rules:

  • Only run it on systems where I have explicit authorization.
  • Avoid capturing or storing logs longer than needed.
  • Use filters to avoid revealing unrelated traffic.
  • Keep snapshots in restricted directories.

This might sound obvious, but it’s easy to forget in the middle of a live incident.

Performance considerations: realistic expectations

I avoid hard numbers because the overhead depends on hardware, traffic, and environment. Still, I can give a practical sense of ranges:

  • On lightly loaded systems, iftop is usually negligible.
  • On very busy interfaces, you might see a small but noticeable CPU bump.
  • DNS resolution can be more expensive than the packet capture itself.

If I suspect iftop is adding too much overhead, I immediately turn on -n, reduce the number of lines (-L), or drop to a short snapshot (-t -s). Those changes typically bring it back to a safe level.

Common pitfalls in production and how I sidestep them

Here are some additional mistakes I’ve seen in real environments, plus how I avoid them:

  • Confusing NAT endpoints with real sources: On NATed networks, the source IP might be the NAT gateway. I cross‑check with NAT logs to find the real client.
  • Assuming interface speed from iftop: iftop shows usage, not capacity. I still check interface speed and errors with tools like ethtool or ip -s link.
  • Ignoring packet drops: iftop doesn’t show drops. If I suspect congestion, I check interface error counters.
  • Believing a single snapshot is truth: a 5‑second view is useful, but I take multiple snapshots when the situation is unclear.

Advanced filters and syntax tricks

iftop’s -f flag accepts standard pcap filter expressions, which are powerful if you’re familiar with tcpdump syntax. Here are a few patterns I use regularly:

Only show traffic to a specific subnet

sudo iftop -n -i ens192 -f "dst net 10.10.0.0/16"

Exclude a noisy backup host

sudo iftop -n -i ens192 -f "not host 10.0.2.99"

Only show TCP traffic

sudo iftop -n -i ens192 -f "tcp"

Only show UDP traffic

sudo iftop -n -i ens192 -f "udp"

These filters can be combined and layered. When the output feels chaotic, a filter is almost always the best move.

Using iftop during incident response

In real incidents, time matters. I’ve developed a tiny playbook that keeps me grounded:

1) Start with a broad view (sudo iftop -n -i <interface>).

2) Identify the top talker and apply a filter.

3) Pause the screen and take a snapshot.

4) Cross‑check with logs or monitoring to confirm the event.

5) Decide whether to throttle, reroute, or stop the offending job.

The goal is not perfect data. The goal is a reliable direction within minutes.

Coordinating with observability stacks

iftop gives me a tactical view. Observability tools give me the strategic context. Here’s how I stitch them together:

  • Metrics dashboards tell me when the spike started.
  • iftop tells me who caused it.
  • Logs and traces explain why the traffic happened.

This is a simple loop, but when it’s followed consistently, incident resolution becomes much faster and less stressful.

Training teammates to use iftop effectively

If you’re introducing iftop to a team, I recommend a 15‑minute exercise:

1) Generate traffic (download a large file or run a test transfer).

2) Start iftop and point out the time windows.

3) Toggle DNS resolution and watch the difference.

4) Apply a filter and see how it changes the signal.

This makes the tool feel intuitive and reduces the risk of misinterpretation later.

Building an iftop runbook section

I’ve added an iftop section to every ops runbook I’ve touched in the last few years. It includes:

  • The default interface to check for each host type.
  • Common filters for the most important services.
  • A short command list with -n, -t, and -f filter examples.
  • A reminder to capture snapshots for incident notes.

This turns iftop from a personal habit into a team habit, which matters more than any single command.

Final checklist: a disciplined approach to iftop

When I’m tired or under pressure, I fall back to this condensed checklist:

1) Confirm interface.

2) Run sudo iftop -n -i <interface>.

3) Filter if needed.

4) Read 2s/10s/40s columns for burst vs sustained.

5) Capture a short snapshot for documentation.

6) Correlate with logs or metrics.

This structure keeps me from chasing ghosts.

Closing thoughts

iftop isn’t flashy, but it is honest. It shows you what the interface sees, right now, without guesswork. That’s invaluable when you’re in the middle of an incident or when monitoring graphs are too coarse to point to a specific culprit.

If you take only one thing from this guide, take this: use iftop as a fast, local truth source. Let it guide your next step, then pivot into deeper tools when you need them. That one habit has saved me more hours than any dashboard or ticketing system ever has.

The next time you see a bandwidth spike, don’t guess. Open iftop, pick the right interface, and let the traffic speak for itself.
