Large files punish impatient workflows. I learned that early while troubleshooting production incidents over SSH on unstable links. If I open a 2 GB log in a full editor, I wait, my terminal stutters, and my context breaks. If I cat that same file, my scrollback turns into noise. less solves that with a simple idea: read only what I need, one screen at a time, with instant movement and search. It feels like reading a book where I can jump to any chapter without carrying the whole library in my hands.
When I use less well, I move from looking at output to interrogating output. I jump to failures, trace related lines, scan for repeated events, and quit in seconds once I have an answer. I rely on it for kernel messages, app logs, JSON snapshots, config audits, and long command output from tools like git, kubectl, journalctl, and docker.
In this guide, I walk through practical less command examples you can run right now, including pipelines, search patterns, must-know options, mistakes I still see in teams, edge cases, and a 2026-friendly workflow that fits modern cloud and AI-assisted debugging.
Why less still matters every day
People sometimes think of less as a beginner pager, but in daily engineering work it is one of the highest-return terminal habits I can build.
Three reasons I keep teaching it to junior and senior developers:
- It reads lazily instead of loading everything first. On large files, the difference is immediate.
- It supports forward and backward movement naturally, so I can investigate context around a match.
- It turns any command output into a navigable view through pipelines.
Baseline syntax:
less [options] filename
First habit I recommend:
less /var/log/syslog
Compare that with dumping everything:
cat /var/log/syslog
With cat, I get firehose output. With less, I get control.
I think of less as a temporary read-only workspace. I am not editing content, I am inspecting it with precision.
Core navigation: the keys that save the most time
Most people only know Space and q. That is enough to survive, but not enough to work fast. I memorize a small movement set first.
Page and line movement
- Space or f moves one page forward.
- b moves one page backward.
- Enter moves one line down.
- k or y moves one line up.
- d moves half a page down.
- u moves half a page up.
Jump movement
- g jumps to the start of the file.
- G jumps to the end of the file.
- 50g jumps to line 50 when line numbers are meaningful.
- % jumps by percentage in many contexts.
Exit and help
- q quits.
- h opens the built-in help.
Practice drill I use with teams:
less -N /etc/services
Then I run g, G, b, d, u, and quit. Repeat twice. By the third run, my hands stop hunting.
The -N option displays line numbers. During incident response, line numbers are useful when I quote exact positions in chat, ticket comments, or postmortems.
Searching inside less: where it becomes a diagnostic tool
Search is where less shifts from pager to problem-solving instrument.
Forward search
less /var/log/auth.log
Inside less:
- /Failed password starts the search.
- n jumps to the next match.
- N jumps to the previous match.
That flow is perfect for scanning suspicious auth events while preserving nearby context.
Backward search
I use ?pattern to search upward from current position.
Inside less:
?timeout
I use this when I start at the bottom with G and want the most recent related event first.
Start at first match with -p
dmesg | less -p fail
This starts the view at the first fail match instead of the top. It is excellent when I already know the signal I care about.
Case sensitivity control
Case can trip me up in real logs because event text is inconsistent.
journalctl -u nginx | less -i
-i makes searches case-insensitive, unless the pattern itself contains uppercase letters; -I ignores case unconditionally.
Highlight behavior
Two options matter when highlight noise gets in the way:
- -g highlights only the current match.
- -G disables search highlighting.
Examples:
less -g /var/log/kern.log
less -G /var/log/kern.log
I prefer -g on dense logs because it keeps focus sharp while still showing where I am.
Search refinement patterns I actually use
Generic terms like /error are often too broad. I usually search with slightly richer patterns:
/ERROR.*payment
/timeout.*upstream
/status=503
/connection reset
/retrying in [0-9]
Even without full regex complexity, small specificity upgrades reduce noise fast.
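The same specificity upgrade works one stage earlier, filtering with grep -E before paging. A minimal sketch; app.log here is a tiny hypothetical stand-in for a real log:

```shell
# Pre-filter with the richer patterns so less only pages high-signal lines.
# app.log is a hypothetical stand-in created just for this demo.
printf 'INFO boot ok\nERROR payment declined id=7\nWARN status=503 upstream\n' > app.log
grep -E 'ERROR.*payment|timeout.*upstream|status=503' app.log | less -N
```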
Using less with pipelines: my default pattern
Pipelines are where less earns its keep in modern workflows. Instead of writing temporary files, I pipe output straight into a pager.
Kernel messages
dmesg | less
Often the first command I run after boot issues or device events.
Filter first, then page
dmesg | grep -Ei 'usb|nvme|fail' | less -N
Now I get targeted output with line numbers. This pattern reduces cognitive load.
Systemd journal
journalctl -xe | less
For one service:
journalctl -u docker --since '1 hour ago' | less
For a bounded incident window:
journalctl -u payments-api --since '2026-02-06 08:00:00' --until '2026-02-06 10:00:00' | less -N
Git history and diffs
git log --oneline --decorate --graph | less
git show HEAD~3 | less
git diff main...feature-branch | less
For large diffs, paging is easier than a raw flood of output.
Docker and Kubernetes output
docker logs api-service --since 2h | less
kubectl describe pod payment-worker-7dbf9 | less
kubectl logs deploy/web --since=30m | less -i
Why pipeline plus less works
- I avoid intermediate files.
- I keep command composition flexible.
- I can still search quickly with /pattern.
- I keep terminal history clean.
If I remember one workflow, it is this: generate output, narrow if needed, page with less.
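That habit can be sketched end to end with a deterministic stand-in; seq plays the role of a noisy command, and the pattern 4242 is arbitrary:

```shell
# generate -> narrow -> page: seq stands in for any long-output command.
seq 1 10000 | grep -E '4242' | less -N
```

In real work, the first stage is journalctl, kubectl, or git, but the shape stays the same.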
Options that matter in real usage
less has many flags, but a short list covers most daily work.
-N show line numbers
less -N /etc/nginx/nginx.conf
Useful for reviews and precise references.
-n hide line numbers
less -n /var/log/syslog
Useful when I want a cleaner view.
-F quit if content fits one screen
less -F /home/dev/app/README.md
For short files, less exits immediately. Great in scripts where paging is only needed for long output.
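In scripts, I can pair -F with a terminal check so paging only engages for humans. A minimal sketch; maybe_page is a hypothetical helper name, not a standard command:

```shell
# Page interactively when stdout is a terminal; stream plainly otherwise.
# "maybe_page" is a hypothetical helper defined for this sketch.
maybe_page() {
  if [ -t 1 ]; then
    less -F        # humans get a pager that exits on its own for short output
  else
    cat            # scripts and pipes never block on a pager
  fi
}
printf 'deploy finished: 0 errors\n' | maybe_page
```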
-E exit at end of file
less -E /var/log/bootstrap.log
Once I hit EOF, less exits instead of waiting for q.
-s squeeze blank lines
less -s generated_report.txt
Helpful when generated output has too much vertical whitespace.
-f force opening unusual files
less -f /proc/cpuinfo
Useful when dealing with pseudo-files and special inputs.
-p pattern jump to first match on open
less -p server_name /etc/nginx/sites-available/default
Great for quick config inspections.
Option combos I use weekly
journalctl -u api --since today | less -iN
less -Fs deployment_notes.txt
kubectl logs job/batch-import | less -g
Practical scenarios with complete command flows
This section is intentionally concrete. These are realistic terminal sessions I run when things break.
Scenario 1: investigate failed SSH logins
Command:
sudo journalctl -u ssh --since '6 hours ago' | less -iN
Inside less flow:
- /failed password
- n repeatedly
- /invalid user
- N to move back and compare context
What I gain: fast triage without exporting logs.
Scenario 2: check boot warnings after kernel update
Command:
dmesg | less -p warn
Then search these terms:
/nvme
/thermal
/acpi
I move with n and N until I find clusters worth action.
Scenario 3: inspect JSON lines logs
Command:
less app-events.jsonl
Useful searches:
/"level":"error"
/"requestId":"a9f2
/"latency_ms":
For wide lines, I rely on horizontal movement plus focused search terms.
Scenario 4: config audit before deployment
Command:
less -N /etc/nginx/nginx.conf
Directive searches:
/worker_processes
/keepalive_timeout
/gzip
I capture exact line numbers for change notes and approvals.
Scenario 5: command output that may be short or long
Command:
systemctl list-units --type=service --state=failed | less -F
If output is tiny, it returns immediately. If it grows, paging engages automatically.
Scenario 6: postmortem window review
Command:
journalctl -u payments-api --since '2026-02-06 08:00:00' --until '2026-02-06 10:00:00' | grep -E 'ERROR|timeout|retry|503' | less -N
This works well when I need fast evidence extraction in a noisy outage window.
Scenario 7: review huge SQL migration output
Command:
psql -f migration.sql 2>&1 | less -i
Then search:
/ERROR
/duplicate key
/constraint
I can move backward to see the exact statement before each failure line.
Scenario 8: compare two log windows manually
I run first window in one terminal, second window in another:
journalctl -u api --since '09:00' --until '09:15' | less -N
journalctl -u api --since '10:00' --until '10:15' | less -N
Then I search both for the same pattern and compare event density and order.
Scenario 9: diagnose intermittent API spikes
I usually run a two-stage flow:
kubectl logs deploy/api --since=2h | rg -i 'latency|timeout|upstream|503' | less -N
Inside less, I jump with /requestid= and /traceid= to correlate related lines. If I need full context for one request, I rerun with a narrower request ID filter and keep paging.
Scenario 10: inspect huge CSV exports quickly
If a data team gives me a massive CSV and asks if a field exists or a delimiter changed, I avoid opening spreadsheets first:
less -N export_2026_02_06.csv
Then I search with /customer_id or /"unexpected delimiter" and jump to exact rows. I can validate format assumptions in seconds.
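One pre-check I sometimes add before paging the body: split the header row into one column per line, so a missing or renamed field is obvious at a glance. The file contents here are a tiny hypothetical stand-in:

```shell
# List column names one per line; a missing field stands out immediately.
printf 'customer_id,amount,ts\n42,9.99,2026-02-06\n' > export.csv
head -n 1 export.csv | tr ',' '\n' | less -N
```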
Edge cases that break momentum and how I handle them
Even experienced engineers trip on pager edge cases. These are the ones I see most.
Edge case 1: piped input is not a regular file
When input comes from a pipe, random jumping feels different than with a normal file. I can still search and scroll, but deep random seeks may be limited.
How I adapt:
- I narrow upstream output first with rg or grep.
- I use stronger searches inside less.
- If I need true random access later, I persist output once to a file and open that file with less.
Edge case 2: log lines are extremely long
Massive JSON or stack trace lines can be painful.
How I adapt:
- I filter fields upstream with jq when possible.
- I search by unique keys, IDs, or error codes.
- I toggle options to reduce visual clutter before deep navigation.
Edge case 3: ANSI colors pollute search readability
Some producers emit color sequences that make matching awkward.
How I adapt:
- I disable color at the source command when possible.
- I keep searches specific and short.
- I use cleaner command combinations for incident runs.
Edge case 4: interactive command hangs unexpectedly
Usually this is not a hang; it is less waiting for interaction.
How I adapt:
- I use -F for optional paging in scripts.
- I use -E for auto-exit at EOF in one-pass checks.
- I document pager behavior in team runbooks.
Edge case 5: binary or compressed content
Teams sometimes accidentally page binary dumps and think the file is corrupted.
How I adapt:
- For compressed logs, I use zless or xzless when available.
- For unknown files, I run file filename first.
- If content is truly binary, I switch to tools like hexdump -C and then page that output with less.
Edge case 6: locale and encoding surprises
I have seen UTF-8 logs appear broken when locale settings differ between jump hosts.
How I adapt:
- I confirm locale with locale.
- I test with a simple UTF-8 search term.
- I normalize environment variables in shared server profiles so teams get consistent rendering.
Performance considerations: what changes in practice
I avoid fake precision here because systems differ, but real patterns are stable.
- For multi-hundred-MB to multi-GB files, opening directly in a full editor often feels noticeably slower than less for the first interaction.
- On remote sessions with variable latency, less usually feels smoother because I consume output incrementally.
- In triage workflows, I often cut investigation loops from many reruns to one interactive pass by using /pattern, n, and N effectively.
- Pre-filtering before paging usually gives a meaningful productivity bump, often in the range of modest to substantial depending on noise level.
A practical before and after pattern:
- Before: cat logfile | grep error, rerun with different grep, lose context each time.
- After: rg -i 'error|timeout|503' logfile | less -N, then refine in-place with pager search.
The exact speedup varies, but decision quality improves because context stays visible.
Where the gain actually comes from
In my experience, improvement is less about raw command speed and more about fewer context resets:
- I do fewer full-command reruns.
- I do less copy-paste into scratch notes.
- I preserve neighboring lines around every match.
- I quit sooner once evidence is clear.
That combination is why less consistently pays off in production work.
Common mistakes I still see in teams
Most frustration with less comes from tiny workflow errors, not from the tool itself.
Mistake 1: using cat for huge files
Problem: terminal flood, poor search context, hard backtracking.
Fix:
less /var/log/application.log
rg -i error /var/log/application.log | less
Mistake 2: forgetting reverse movement
Problem: overshoot and restart command repeatedly.
Fix: use b, u, and N. Restarting should be rare.
Mistake 3: searching with generic terms only
Problem: /error yields too many hits.
Fix: use stronger search terms tied to symptom plus component.
Mistake 4: missing line numbers during handoff
Problem: weak evidence in tickets and chat.
Fix: run less -N file during audits.
Mistake 5: blocking scripts with an always-on pager
Problem: automation appears stuck.
Fix: some_command | less -F or disable pager for non-interactive contexts.
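A few non-blocking patterns, sketched with stand-ins; the printf line just simulates short output, while the PAGER override and git's flag are real mechanisms:

```shell
printf 'short status\n' | less -F    # -F exits on its own for short output
PAGER=cat git log --oneline -n 3     # neutralize the pager for one invocation
git --no-pager log --oneline -n 3    # or use git's explicit no-pager flag
```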
Mistake 6: treating less as file-only
Problem: missing the best capability, command output paging.
Fix: make pipeline paging your default for long outputs.
Mistake 7: no team-standard search vocabulary
Problem: every engineer searches differently, so debugging quality varies.
Fix: I keep a short team list of high-signal patterns per service, such as timeout, upstream, rate limit, connection reset, and key error codes. Then we all start from the same baseline.
When to use less and when not to
I use less when:
- Output is longer than one screen.
- I need repeated searching and context around matches.
- I inspect logs during debugging.
- I work on remote servers and want low-friction reading.
I skip less when:
- I need to edit content in place.
- I need structured querying better handled by jq, awk, SQL, or dedicated log tools.
- I run non-interactive CI pipelines where pagers can block.
My practical pairing is:
- rg for precise filtering
- jq for JSON shaping
- less for human inspection and context
That keeps each tool in its strongest role.
Traditional terminal reading vs modern 2026 workflow
Terminal habits changed a lot in the AI-assisted era, but less still sits in the center of fast debugging. The tool is the same; composition changed.
- cat logfile or grep error logfile: raw dumps with no interaction.
- journalctl ... | less: full context plus interaction.
- journalctl ... filtered through rg into less: high signal with backtracking.
- AI-assisted: AI proposes patterns, I verify in less.
In daily work, I often ask an assistant for candidate search strings from a stack trace, then validate those patterns in less. AI proposes. Terminal evidence decides.
Using less with follow mode for live incidents
A lot of people use tail -f and stop there. I prefer a hybrid when I need both live updates and history navigation.
Typical pattern:
less +F /var/log/nginx/error.log
What this does for me:
- Starts in follow mode, similar to tail -f.
- Lets me break out (Ctrl-C stops following), search backward, inspect context, and resume follow with F.
Common live-debug loop:
- Start follow mode with +F.
- Trigger a request from another terminal.
- Stop follow temporarily.
- Search for request ID or status code.
- Resume follow.
This is one of the most practical less workflows for production systems.
tail -f vs less +F
I use both, but for different goals:
- Pure live streaming with no backtracking: tail -f.
- Live follow plus backward search and context: less +F.
- Incident follow where I pause, inspect, and resume: less +F.
Environment-level defaults that improve daily usage
I rarely run less in a vacuum. I set sane defaults once, then benefit every day.
Set the default pager
Many tools honor PAGER. I usually set:
export PAGER='less -iFR'
Why this combination:
- -i gives case-insensitive search.
- -F auto-exits if output fits one screen.
- -R keeps readable color sequences from supported tools.
This single line improves output from git, man, and other commands that respect pager settings.
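As a sketch, the shell-profile lines I would add; the exact flags are a matter of taste, and the MANPAGER line assumes your man honors that variable (most modern builds do):

```shell
# ~/.bashrc or ~/.zshrc
export PAGER='less -iFR'     # case-insensitive search, auto-exit, keep colors
export MANPAGER="$PAGER"     # assumption: man on this system honors MANPAGER
```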
Tool-specific pager settings I use
For git, I often do:
git config --global core.pager 'less -iFR'
For Kubernetes-heavy workflows:
export KUBECTL_EXTERNAL_DIFF='diff -u'
export KUBE_PAGER='less -iFR'
If a command should never page in scripts, I disable pager explicitly in automation contexts.
Keep aliases practical
I avoid clever aliases and stick to two simple ones:
alias l='less -iN'
alias lf='less -iFR'
Now I can choose either evidence-first line numbers (l) or smooth pager defaults (lf).
Horizontal scrolling, wrapping, and readability tricks
Long lines are where people give up too early. I use a few tricks to keep control.
Chop long lines instead of wrapping
less -S huge.jsonl
With -S, long lines are truncated on screen instead of wrapped. I can move horizontally and keep each record on one visual row.
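A quick way to feel the difference is to generate one long synthetic record; the printf repetition idiom below just emits 500 x characters:

```shell
# One long JSON-ish record; with -S it stays on a single screen row.
printf '{"id":1,"blob":"%s"}\n' "$(printf 'x%.0s' $(seq 1 500))" | less -S
```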
Navigate sideways when needed
Inside less, horizontal movement keys (depending on terminal and config) let me inspect hidden parts of long lines. I only do this after searching to a narrow location.
Use targeted preprocessing
For JSON logs, I prefer:
jq -c '{ts, level, requestId, msg}' app-events.jsonl | less -S
For stack traces mixed with noisy metadata:
rg -n 'ERROR|Exception|Traceback' app.log | less -N
I do not treat less as a replacement for shaping tools. I shape first, then inspect deeply.
Real-world command bundles I reuse
I keep a small personal snippet list. These are high-value and easy to remember.
Incident bundle
journalctl -u api --since '30 min ago' | rg -i 'error|timeout|503|reset' | less -iN
kubectl logs deploy/api --since=30m | rg -i 'error|panic|timeout' | less -N
dmesg | rg -i 'oom|segfault|i/o error|nvme' | less -N
Deployment bundle
git log --oneline --decorate --graph -n 200 | less
git diff --stat main...HEAD | less -F
kubectl rollout status deploy/web && kubectl logs deploy/web --since=10m | less
Database bundle
psql -f migration.sql 2>&1 | less -i
mysql -u app -p -e 'SHOW ENGINE INNODB STATUS\G' | less
sqlite3 prod.db '.schema' | less -N
Alternatives and when I choose them instead
less is great, but not always the best first tool.
- less: interactive reading with search and context.
- more: extremely simple paging.
- tail -f: pure live streams.
- bat: pretty file viewing.
- vim/nvim: deep navigation plus edits.
- dedicated log tools: cross-host analytics.
My rule: if I need interactive reading with search and context, I choose less. If I need transformation, aggregation, or editing, I switch quickly.
Security and operational hygiene
less is read-oriented, but incident workflows still need guardrails.
- I never paste full sensitive logs into shared chat; I quote minimal lines and line numbers.
- I sanitize secrets before sharing command output, even in internal channels.
- I avoid storing temporary log extracts with broad permissions.
- I use least-privilege reads (sudo only when required).
A simple habit: inspect locally with less, extract only the smallest evidence slice for collaboration.
Team handoff workflow with less
Good debugging is not just finding answers; it is handing them off clearly. My usual handoff checklist:
- Re-run with less -N to pin evidence lines.
- Capture 3-8 surrounding lines around each important match.
- Note exact command and time window used.
- Share one hypothesis and one next action.
Example evidence note format I use:
- Command: journalctl -u payments-api --since '2026-02-06 08:00:00' --until '2026-02-06 10:00:00' | less -N
- Matches: /timeout.*upstream
- Evidence: repeated timeouts near line references around request spikes
- Next action: verify upstream saturation and retry policy
This keeps postmortems factual, reproducible, and easy for others to validate.
Mini practice plan: build muscle memory in 20 minutes
If I am training someone new, this short drill works consistently.
Block 1 (5 min): movement only
- Open less -N /etc/services
- Practice: g, G, b, d, u, %, q
Block 2 (5 min): search and backtracking
- Open journalctl -n 500 | less -i
- Practice: /error, n, N, ?warn
Block 3 (5 min): pipelines
- Run dmesg | rg -i 'usb|nvme|fail' | less -N
- Practice narrowing and searching inside pager
Block 4 (5 min): live follow mode
- Open less +F /var/log/system.log (or equivalent)
- Pause, search, resume
After this, most people stop reaching for cat by default.
Troubleshooting weird pager behavior quickly
When less feels wrong, I run this quick checklist.
- Output instantly closes: remove -F temporarily.
- Search is case-sensitive unexpectedly: add -i.
- Colors look broken: prefer -R and disable source over-colorization.
- Script appears stuck: pager likely waiting for input; add -F or disable paging.
- Random jumps feel limited: output came from a pipe; save it to a file for full random access.
This usually resolves confusion in under a minute.
Final cheat sheet I keep in my head
- Open file: less file.log
- Show line numbers: less -N file.log
- Start at end: less +G file.log
- Start follow mode: less +F file.log
- Case-insensitive search: less -i file.log
- Jump to first match: less -p timeout file.log
- Useful inside the pager: /pattern, ?pattern, n, N, g, G, b, d, u, q
If I had to compress everything into one practical default for long output, it would be this:
some_command | less -iN
That single habit makes terminal work calmer, faster, and much more evidence-driven.
Closing thought
I still use modern observability stacks, structured logging, and AI-assisted triage. None of that reduced the value of less; it increased it. When incidents are noisy and time is tight, I need a local, reliable, low-friction way to interrogate text directly. less remains that tool.
When I teach it well, teams stop treating terminal output as a flood to survive and start treating it as a dataset to investigate. That shift is small in mechanics and huge in outcome.


