Tail Command in Linux with Examples: Practical, Real‑World Usage

I still remember the first time a production error hit and I opened a multi‑gigabyte log file in a text editor. The editor froze, my CPU spiked, and I wasted precious minutes just trying to find the newest lines. That moment taught me a simple truth: when you need the freshest data at the end of a file, you don’t open the whole file—you ask for the end directly. That’s exactly what tail does, and it does it so well that it has become one of my daily tools.

You should read this if you work with logs, data pipelines, or any system that appends new lines over time. I’ll show you how tail behaves by default, how to use it with options like -n, -c, -f, and -q, and how to combine it with modern 2026 workflows (think CI logs, container output, and AI‑assisted incident response). Along the way, I’ll call out common mistakes and give you concrete, runnable examples you can paste into your terminal. Think of tail as a focused flashlight: it doesn’t light the whole room, but it reveals exactly what just happened.

Why tail is the fastest way to see “what just happened”

When a file grows over time, the most valuable lines are usually at the end: new log entries, appended transaction records, or the last lines of a report. tail reads from the end of a file, so it avoids scanning everything. That makes it efficient and reliable even on large files.

I like to describe tail as a “rear‑view mirror” for files. You’re not reading the story from the beginning; you’re checking the latest events. This mindset is perfect for debugging production incidents, monitoring batch jobs, or verifying that a script is still writing data.

Here’s the baseline behavior you should memorize:

# Shows the last 10 lines by default

tail /var/log/system.log

If you remember only one thing: tail is the fastest way to confirm the most recent output without loading the entire file.

Syntax and default behavior you can rely on

The syntax is straightforward:

tail [OPTION]... [FILE]...

If you pass a file with no options, you get the last 10 lines. That’s consistent across Linux distributions and is safe to rely on in scripts. When I write quick diagnostics, I usually start with this command and adjust if I need more context.

Let’s set up a small, real‑world dataset so you can run the examples locally:

# Create two files with realistic data

cat > state.txt <<'EOF'
Andhra Pradesh
Arunachal Pradesh
Assam
Bihar
Chhattisgarh
Goa
Gujarat
Haryana
Himachal Pradesh
Jammu and Kashmir
Jharkhand
Karnataka
Kerala
Madhya Pradesh
Maharashtra
Manipur
Meghalaya
Mizoram
Nagaland
Odisha
Punjab
Rajasthan
Sikkim
Tamil Nadu
Telangana
Tripura
Uttar Pradesh
Uttarakhand
West Bengal
EOF

cat > capital.txt <<'EOF'
Amaravati
Itanagar
Dispur
Patna
Raipur
Panaji
Gandhinagar
Chandigarh
Shimla
Srinagar (summer), Jammu (winter)
Ranchi
Bengaluru
Thiruvananthapuram
Bhopal
Mumbai
Imphal
Shillong
Aizawl
Kohima
Bhubaneswar
Chandigarh
Jaipur
Gangtok
Chennai
Hyderabad
Agartala
Lucknow
Dehradun
Kolkata
EOF

Now run:

tail state.txt

You’ll see the last 10 states. This default behavior is both predictable and widely used in tutorials and scripts.
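If you want to convince yourself of the default count, here is a quick self-contained check using a throwaway file built with seq (the /tmp path is just an example):

```shell
# Build a 100-line file, then confirm tail's default output is 10 lines
seq 1 100 > /tmp/demo.txt
tail /tmp/demo.txt | wc -l   # prints 10
```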

The -n option: control the number of lines

If you need more or fewer lines, use -n. I use -n all the time when I need just enough context around an error.

# Last 3 lines

tail -n 3 state.txt

Equivalent shorthand

tail -3 state.txt

The -n value is mandatory. If you write tail -n state.txt without a number, you’ll get an error because tail expects a value. That error can be confusing in scripts, so I always recommend being explicit.
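You can see that failure mode safely. In this sketch the filename is consumed as the -n argument, so tail rejects it and exits non-zero before reading anything:

```shell
# tail parses "state.txt" as the line count, fails, and exits non-zero
if ! tail -n state.txt </dev/null 2>/dev/null; then
  echo "tail rejected the missing count"
fi
```

In a script, that non-zero exit status is what you want to catch early.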

There’s a lesser‑known variation: a + prefix, which starts output at a given line number instead of from the end. This is useful if you need “everything after line 25.”

# Print from line 25 to the end

tail -n +25 state.txt

I often use this when I have a large output file and only want the end portion after a known header section.
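Here is a small self-contained sketch of that header-skipping pattern, using a made-up three-line header:

```shell
# Build a file with a known 3-line header, then keep everything from line 4 on
printf 'HEADER A\nHEADER B\nHEADER C\nrow-1\nrow-2\n' > /tmp/report.txt
tail -n +4 /tmp/report.txt   # prints row-1 and row-2
```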

Practical example: showing the last 50 lines of a build log

Imagine a CI job produced a 10,000‑line log. You want the last 50 lines to see the failure:

# Last 50 lines of a log file

tail -n 50 /tmp/build.log

This gives you quick context without overwhelming your terminal.

The -c option: read by bytes instead of lines

Sometimes line boundaries don’t matter. Binary logs, compressed output previews, or raw payloads might require byte‑based slicing. That’s where -c shines. It tells tail to count bytes instead of lines.

# Last 7 bytes

tail -c 7 state.txt

You can also use a negative count explicitly:

# Same result as above

tail -c -7 state.txt

To start output at a given byte offset, use a + prefix. Watch the off‑by‑one: tail -c +N starts at byte N, which skips the first N‑1 bytes.

# Print from byte 263 to the end (skips the first 262 bytes)

tail -c +263 state.txt

In practice, I use -c when I have a log that includes long JSON blobs or binary payloads and I just need the tail end of a record.
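One detail worth internalizing: -c counts every byte, including newline characters. A quick self-contained check:

```shell
# "alpha\nbeta\n" is 11 bytes; the last 5 bytes are "beta" plus its newline
printf 'alpha\nbeta\n' > /tmp/bytes.txt
tail -c 5 /tmp/bytes.txt   # prints beta
```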

Byte‑level debugging example

Suppose you’re writing to a file that stores JSON snapshots and the file ends with a partial payload. You want to see the last 200 bytes to inspect corruption:

# Inspect the last 200 bytes for truncated JSON

tail -c 200 snapshots.log

This is a quick way to verify whether the file ended cleanly without loading the full file into a JSON parser.

The -f option: live log monitoring

This is the feature that turns tail into a real‑time monitoring tool. -f means “follow,” so the command keeps running and prints new lines as they are appended.

# Follow a file in real time

tail -f /var/log/system.log

I use -f when I’m testing services or watching an ingestion pipeline. For example, if a service is supposed to emit logs when it receives a request, I’ll run tail -f in one terminal and trigger requests in another. The moment the log line appears, I know the request hit the service.

Practical example: watching a web server access log

# Watch the most recent access entries

tail -f /var/log/nginx/access.log

You can combine this with filters to focus on specific patterns. This is one of the most effective ways to debug traffic or detect unexpected requests:

# Follow access log, show only 500 errors

tail -f /var/log/nginx/access.log | grep " 500 "

-f vs -F in real operations

Some systems rotate logs. When a log rotates, the file you were following may be renamed and a new file is created. In that case, plain -f stops showing new lines. -F (capital F) is a safer choice because it follows by filename and tries to reopen when the file changes.

# Robust follow that handles log rotation

tail -F /var/log/nginx/access.log

If you work in production, I recommend -F by default unless you know rotation won’t happen.

The -q option: quiet output with multiple files

When you pass multiple files, tail prints file headers by default to label the output. This is often helpful, but not always. The -q option suppresses those headers.

# Show file labels by default

tail state.txt capital.txt

You’ll see something like:

==> state.txt <==

...last 10 lines...

==> capital.txt <==

...last 10 lines...

To remove the labels and get clean output, add -q:

# Quiet mode, no file headers

tail -q state.txt capital.txt

I use -q when I’m piping results into another command and I don’t want the extra header lines to interfere with parsing.
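The difference matters as soon as you count or parse lines. In this sketch (with small throwaway files), the headers plus the separator blank line inflate the count; -q leaves only the data:

```shell
printf 'a\nb\nc\n' > /tmp/one.txt
printf 'x\ny\nz\n' > /tmp/two.txt
tail -n 2 /tmp/one.txt /tmp/two.txt | wc -l     # 7: headers and blank separator included
tail -q -n 2 /tmp/one.txt /tmp/two.txt | wc -l  # 4: just the data lines
```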

Combining options with real‑world workflows

The real power of tail shows up when you blend options and pipelines. Here are a few patterns I use weekly.

1) Extract the last 100 lines, then search

If I’m investigating a spike in errors, I’ll narrow the data first, then search:

# Last 100 lines, then filter by keyword

tail -n 100 /var/log/app.log | grep "ERROR"

This reduces noise and speeds up the search.

2) Follow a file, but only show a pattern

You can stream output and filter as it arrives:

# Follow, only show lines with a request ID

tail -f /var/log/app.log | grep "request_id="

This is particularly helpful when debugging distributed systems where you want to trace a specific request path.

3) Follow a file, then colorize

If your log lines are dense, adding color makes it easier to read. I often pair tail with a coloring tool or a small script. Here’s a simple example using sed and ANSI color codes:

# Highlight ERROR in red (works in most terminals)

tail -f /var/log/app.log | sed -e 's/ERROR/\x1b[31mERROR\x1b[0m/g'

If you don’t want ANSI codes in shared logs, avoid this in scripts that persist output.

4) Combine with awk for structured logs

If your logs are space‑delimited, you can parse fields on the fly:

# Print timestamp and message from a log

tail -n 20 /var/log/app.log | awk '{print $1, $2, $NF}'

I use this to isolate time and error codes when triaging incidents.

Common mistakes and how to avoid them

I’ve seen the same pitfalls come up for years. Here’s how to avoid them.

Mistake 1: Forgetting -F on rotated logs

If you follow a file that rotates, you might miss new events. Use -F when the file is managed by log rotation tools.

# Safer for log rotation

tail -F /var/log/app.log

Mistake 2: Using -n without a value

This causes errors in scripts. Always specify a number.

# Correct usage

tail -n 20 /var/log/app.log

Mistake 3: Assuming tail reads only lines

With -c, it reads bytes. That’s useful, but you should make sure you understand the difference, especially with multi‑byte characters or binary data.

Mistake 4: Assuming tail -f is the only way to watch logs

For rotation‑safe watching, prefer -F. Also consider journalctl -f for systemd logs. I pick the tool that matches the log source rather than forcing everything through one command.

When to use tail vs when not to

tail is great for recent data, but not always the right choice. Here’s how I decide.

Use tail when:

  • You want the most recent lines in a growing file.
  • You’re monitoring logs in real time.
  • You need fast feedback during debugging.
  • The file is large and you only need the end.

Avoid tail when:

  • You need to search across the entire file for a historical event.
  • You must parse structured data at scale; use a dedicated log processor.
  • You need context that spans the beginning and end of the file.

In those cases, I switch to tools like rg, awk, or a dedicated log aggregation platform. tail is a flashlight, not a full‑room lamp.

Performance considerations in practice

tail is efficient because it reads from the end of the file, not the beginning. On most systems, showing the last 10–100 lines is fast enough to feel instant. In my experience, grabbing the last chunk of a multi‑gigabyte file completes in a few milliseconds in a local terminal, with somewhat higher latency on network filesystems.

If you add -f, you’re not re‑reading the whole file; the command waits for new data, so CPU use is generally low. If you pipe to heavy filters or run regex matches on every line, you can add overhead. That’s why I keep filters simple when following high‑volume logs.

Tip: Reduce overhead on busy logs

If the log is extremely hot, you can reduce CPU usage by piping to grep --line-buffered or filtering on the server side where possible.

# Use line buffering to keep real-time output smooth

tail -f /var/log/app.log | grep --line-buffered "WARN"

Real‑world scenarios that show why tail matters

To make this more concrete, here are three real scenarios where tail saves time.

1) Debugging a deployment

You deploy a service and want to ensure it boots correctly. I run:

# Watch the service log as it starts

tail -f /var/log/myservice/startup.log

I keep this running while I deploy. If I see a missing configuration value or crash loop, I can act immediately.

2) Monitoring ETL pipelines

When a batch pipeline writes to a log file on each batch, I do:

# See the last few batch summaries

tail -n 20 /data/pipeline/logs/etl.log

If a batch fails, I’ll follow the log live until the next run finishes.

3) Inspecting API gateway traffic

If I need to see whether a client is hitting a specific endpoint, I follow the access log and filter by route:

# Follow only requests to /v1/orders

tail -f /var/log/gateway/access.log | grep " /v1/orders "

This is faster than opening a GUI or waiting for dashboard metrics to update.

Table: traditional vs modern workflows with tail

I often see teams stuck in older habits, so here’s a short comparison that I use in onboarding.

Approach | Traditional workflow | Modern workflow (2026) | My recommendation
Log inspection | Open file in editor | tail -F + structured filters | Use tail -F for live triage
Error discovery | Search full log with grep | tail -n + AI‑assisted pattern summaries | Start with tail -n to get the newest context
Incident response | Manually scroll logs | tail -f + targeted filters + runbook annotations | Combine tail -f with filters and a runbook
Rotation handling | Reopen files manually | tail -F or journalctl -f | Prefer -F for rotating files

Notice I’m not telling you to abandon tail. I’m telling you to embed it in a larger workflow that respects modern tooling and automation.

AI‑assisted workflows with tail in 2026

In 2026, I often pair tail with AI‑assisted incident response. The flow looks like this:

1) Use tail -n to capture the last 200 lines.

2) Pipe the output into a summarizer or a local model for quick triage.

3) Extract error signatures and suggest likely causes.

Here’s a simple pattern using a local script that you can adapt:

# Capture last 200 lines into a temp file

tail -n 200 /var/log/app.log > /tmp/latest.log

Example: pass to a local analysis script

python3 analyze_log.py /tmp/latest.log

tail remains the first step because it provides clean, recent data without the overhead of full‑file processing.

Edge cases and how I handle them

Edge case: file doesn’t exist yet

If you run tail -f on a file that doesn’t exist, it reports an error and exits. But you can use -F to wait for it to appear:

# Wait for the file to exist and then follow it

tail -F /var/log/new_service.log

This is helpful during service startup or when a process creates logs lazily.

Edge case: very long lines

Some logs have extremely long lines (think JSON payloads). tail will still show them, but your terminal may wrap them awkwardly. In this case, I’ll pipe into jq or a formatter if it’s JSON:

# Pretty-print JSON log lines

tail -n 20 /var/log/app.jsonl | jq '.'

Edge case: binary files

If you run tail on a binary file, you might get unreadable output or control characters. For a quick sanity check, I stick to byte‑based output and pipe to a safe viewer:

# View last 128 bytes in a readable hex+ASCII form

tail -c 128 /path/to/binary.dat | hexdump -C

If I see corrupted data or unexpected patterns in the hex output, I know the file ended incorrectly without trying to open it in a full parser.

Edge case: permission errors

If you get a “permission denied” error, it doesn’t mean tail is broken; it means the file is protected. I switch to sudo when I have access:

# Read protected log file

sudo tail -n 50 /var/log/secure

I only do this when I’m on a machine I’m allowed to administer.

Reading specific ranges with tail + head

Sometimes I don’t need the very last lines; I need a slice near the end. The pattern I use is tail -n +X to skip the first X‑1 lines, then head -n Y to limit the output.

# Print lines 200 to 260

tail -n +200 big.log | head -n 61

This is a handy way to inspect a chunk without loading the full file into an editor. I treat it like a quick window into a giant file.
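To convince yourself of the arithmetic, generate a numbered file and check the slice boundaries:

```shell
# Lines 200 through 260 inclusive is 61 lines
seq 1 1000 > /tmp/big.log
tail -n +200 /tmp/big.log | head -n 61 | sed -n '1p;$p'   # prints 200 and 260
```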

Range example: show the last 300 lines, but only the middle 50

# Last 300 lines, then take 50 from the middle

tail -n 300 big.log | head -n 50

I use this when the very last lines are too noisy and I want the “lead‑up” portion right before a failure.

Starting fresh with -n 0 -f

When I want only new lines and none of the existing backlog, I start tail with -n 0. This is perfect for services that already have huge logs.

# Follow only new lines from now on

tail -n 0 -f /var/log/app.log

I’ll use this during a deploy so I can see only the events triggered by that deployment, not the hours before it.
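You can watch this behavior in one terminal; this sketch uses timeout to stop the follow automatically, with a background writer standing in for the service:

```shell
# Existing backlog is skipped; only the line appended after tail starts is shown
printf 'old line 1\nold line 2\n' > /tmp/follow.log
( sleep 0.3; echo "new event" >> /tmp/follow.log ) &
timeout 1 tail -n 0 -f /tmp/follow.log || true   # prints only: new event
wait
```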

Slowing the follow loop with -s

On very busy files, tail -f can spit out hundreds of lines per second. GNU tail lets you control the polling interval using -s (sleep seconds). One caveat: when tail follows via inotify (the default on most Linux systems), new lines arrive event‑driven and -s has little effect; it matters where tail falls back to polling, such as on some network filesystems.

# Check for new lines every 2 seconds

tail -f -s 2 /var/log/app.log

It’s a small change, but on slow terminals or remote sessions, it can make output more manageable.

Following until a process exits with --pid

When I’m debugging a process, I want tail to stop when the process dies. On GNU systems, tail supports --pid so it exits once that PID ends.

# Follow until process 12345 exits

tail -f --pid=12345 /var/log/app.log

This is clean for short‑lived tasks, like a migration script or a batch job that should shut down after completion.
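Here is a self-contained way to see it on a GNU system: follow an empty file while a short-lived background process runs; tail exits on its own shortly after the process does (-s shortens the PID check interval):

```shell
: > /tmp/pid.log
sleep 0.5 &                            # stand-in for the batch job
tail -f -s 0.2 --pid=$! /tmp/pid.log   # returns once the sleep exits
echo "job finished, tail exited"
```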

Multiple files and consistent labels

When I tail multiple files, I often want consistent labels so I can see which file produced which line. I use -v (verbose) to force headers even if only one file is passed, which is helpful when I swap files in and out of a command.

# Always show file headers

tail -v -n 5 state.txt capital.txt

This keeps my output readable when I’m comparing similar logs side by side.

Practical example: tailing app and error logs together

# View the last 20 lines from both logs

tail -n 20 /var/log/app.log /var/log/app.error.log

I’ll scan for matching timestamps to correlate normal activity with errors.

tail in scripts and automation

tail is safe to embed in scripts, but I always keep these guidelines in mind:

  • Be explicit with -n or -c to avoid ambiguous behavior.
  • Use -F if there’s any chance of log rotation.
  • Add -n 0 when you want to ignore historical output.

Here’s a simple bash function I reuse for quick diagnostics:

# Put this in your shell profile

latest() {
  local file="$1"
  local lines="${2:-50}"
  tail -n "$lines" "$file"
}

Usage

latest /var/log/app.log 100

It saves time and reduces copy‑paste errors under pressure.

Example: CI pipeline snippet

# In a CI job, show last 200 lines of failing test log

if [ -f /tmp/test.log ]; then
  tail -n 200 /tmp/test.log
fi

I do this because most CI logs are huge, and the bottom is where failure summaries tend to appear.

Using tail with less for interactive follow

Sometimes I want to follow logs but still be able to scroll back. less has a follow mode that I use for that. It’s not the same as tail, but it pairs nicely:

# Follow with the ability to scroll

tail -f /var/log/app.log | less +F

When I press Ctrl+C, I can scroll up; pressing F resumes following. (You can also run less +F /var/log/app.log directly on the file, without the pipe.) This gives me the best of both worlds: live updates and history.

tail in container and cloud workflows

Modern deployments often run in containers or orchestrators, but tail still fits perfectly because the logs are usually plain files or streams.

Example: Tail a container log file mounted on the host

# Host-mounted container logs (substitute the real container ID)

tail -F /var/lib/docker/containers/<container-id>/<container-id>-json.log

I don’t always recommend this in production because it can be noisy, but it’s very effective for quick debugging when centralized logs aren’t set up yet.

Example: Tail a Kubernetes log file on a node

# Node-level log follow

sudo tail -F /var/log/containers/myapp-*.log

I use this when I’m on a node and need immediate visibility without waiting for log shipping.

Debugging log rotation with tail

Understanding rotation is the difference between catching a failure and missing it. When rotation happens, the file you are following is renamed and a new file takes its place. tail -F follows the filename, so it reopens the new file automatically.

Quick rotation test

# Follow and rotate by renaming the file

tail -F /tmp/rotate.log

In another terminal:

mv /tmp/rotate.log /tmp/rotate.log.1

printf "new line\n" > /tmp/rotate.log

With -F, I still see the new line. With -f, I don’t. That simple difference is why I default to -F in production.

Filtering for speed and clarity

I use filters carefully because they can introduce latency or change line buffering. These patterns keep output responsive:

# Keep grep line-buffered for real-time output

tail -f /var/log/app.log | grep --line-buffered "ERROR"

If output lags, I’ll switch to awk with line buffering or use tools like stdbuf to force line‑by‑line output:

# Force line buffering through the pipeline

stdbuf -oL tail -f /var/log/app.log | stdbuf -oL grep "WARN"

Handling Unicode and multi‑byte characters

Most logs are ASCII, but some apps write Unicode. tail -c counts bytes, not characters, which means you can split a multi‑byte character in half and get garbled output. When I suspect Unicode, I stick to -n (lines) or pipe into a tool that handles encoding:

# Safer for Unicode logs

tail -n 50 /var/log/app.log

If I truly need bytes, I accept that the output might look odd and I recheck with a proper parser.
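Here is a concrete look at the failure mode: the two-byte UTF-8 character é (bytes c3 a9, written below in octal for portability) gets cut in half when you slice by bytes:

```shell
# "café\n" is 6 bytes: 63 61 66 c3 a9 0a
printf 'caf\303\251\n' > /tmp/utf8.txt
tail -c 2 /tmp/utf8.txt | od -An -tx1   # shows a9 0a: a dangling continuation byte
```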

Getting the last line safely

Sometimes I want only the last line, not the last 10. That’s a small change, but it’s incredibly useful for state files or progress markers.

# Last line only

tail -n 1 /var/log/app.log

I use this to capture the most recent status update or last successful checkpoint from a file.

Tail with file growth validation

A simple sanity check I run is to compare file size or record count over time. tail can show me whether a file is still growing:

# Check last 5 lines twice

tail -n 5 /var/log/app.log

sleep 2

tail -n 5 /var/log/app.log

If the two outputs are identical during expected activity, that’s a red flag that logging may have stalled.
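The same idea can be scripted by comparing byte counts instead of eyeballing output. In this sketch the second printf stands in for the application appending between the two checks:

```shell
printf 'start\n' > /tmp/growth.log
size1=$(wc -c < /tmp/growth.log)
printf 'more\n' >> /tmp/growth.log   # simulates the app writing
size2=$(wc -c < /tmp/growth.log)
if [ "$size2" -gt "$size1" ]; then
  echo "still growing"
else
  echo "possibly stalled: investigate the writer"
fi
```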

Alternatives and complements to tail

tail is not the only tool for the job. I use it as the default, but I switch when the situation calls for it:

  • journalctl -f when logs live in systemd’s journal
  • docker logs --tail 100 -f when the log stream isn’t in a file
  • less +F when I need follow mode with scrollback
  • multitail when I want multi‑pane monitoring (if installed)

I still start with tail because it’s fast, universal, and easy to reason about. Then I reach for specialized tools when the log source or UX demands it.

Troubleshooting checklist I use under pressure

When tail isn’t showing what I expect, I run through this quick list:

1) Is the file still being written to? (I check with ls -lh or a second tail later.)

2) Do I have permissions? (If not, I try sudo.)

3) Did the log rotate? (I switch to -F.)

4) Am I filtering out lines accidentally? (I remove grep for a moment.)

5) Is the file a symlink? (I check with ls -l to confirm the real target.)

This checklist has saved me from chasing phantom bugs more times than I can count.

A simple mental model that keeps me sane

I think of tail as a tiny camera pointed at the end of a file. -n tells it how wide the camera frame is. -f tells it to keep filming. -F tells it to keep filming even if the camera is moved to a new file. That mental model makes the options feel obvious under pressure.

Minimal examples you can memorize

If you only remember these, you can solve most real‑world tasks:

# Default: last 10 lines

tail file.log

Last N lines

tail -n 50 file.log

Follow with rotation handling

tail -F file.log

Follow only new lines

tail -n 0 -f file.log

Last N bytes

tail -c 200 file.log

These five commands cover the majority of my day‑to‑day log work.

Extended example walkthrough (end‑to‑end troubleshooting)

Let me show you a realistic, end‑to‑end flow I might use during an outage:

1) I start by grabbing recent context:

tail -n 200 /var/log/app.log

2) I filter for errors to see the pattern:

tail -n 200 /var/log/app.log | grep "ERROR"

3) I follow the log live with rotation safety:

tail -F /var/log/app.log

4) I open another terminal and trigger a reproduction request. If I need to isolate a request ID, I narrow it down:

tail -F /var/log/app.log | grep "request_id=abc123"

5) If the logs are too noisy, I slow the follow interval:

tail -F -s 2 /var/log/app.log

This flow is simple, but it keeps me grounded: I start with context, narrow the view, then go live.

A quick word on portability

Linux tail is usually GNU coreutils. Most options above work there. If you ever work on macOS or BSD systems, some flags behave differently or are missing. When portability matters, I stick to the most standard flags (-n, -c, -f, -q, -F) and avoid obscure extensions.

Wrap‑up and key takeaways

If you work with logs or files that grow over time, tail should be in your muscle memory. It’s fast, reliable, and easy to compose with other tools. I still use it every week because it does one job extremely well: it shows me what just happened.

Here’s the core mental checklist I keep:

  • Start with tail to get the latest context fast.
  • Use -n for precise line control and -c for bytes.
  • Use -f for live updates, -F for rotation safety.
  • Pair with filters, but keep pipelines simple when performance matters.

If you practice these patterns, you’ll respond to incidents faster, debug with less friction, and avoid the trap of opening giant files in the wrong tools. That’s a small win that adds up over time—and it all starts with a single command: tail.
