# for Loop in Linux (Bash): Practical Examples, Pitfalls, and Safer Patterns

I still see teams lose hours to repetitive terminal work: renaming batches of files by hand, re-running the same curl command across environments, or checking a list of hosts one at a time. The Bash for loop is the small, sharp tool that turns those chores into a repeatable script. The catch is that for is also where beginners (and tired seniors) accidentally break filenames with spaces, split data on the wrong delimiter, or trigger a slow process explosion by calling external commands in a tight loop.

If you write shell scripts in 2026, you want two things at the same time: simple loops you can read in 10 seconds, and guardrails that keep the loop correct under messy real-world inputs. I’ll walk you through how for works, what the shell expands before the loop runs, and the practical patterns I reach for (and the ones I avoid). You’ll see runnable examples for iterating lists, ranges, files, and command output, plus alternatives like while read and find -print0 when for isn’t the right fit.

## The core mental model: for NAME in WORDS; do ...; done
The shell for loop has a deceptively simple shape:

```bash
for NAME in WORDS...; do
  COMMANDS
done
```

The important detail is this: WORDS... is expanded by the shell before the loop body runs.
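A quick way to convince yourself that the word list is materialized up front: mutate the source variable inside the body and watch the iteration count stay fixed. This is a minimal sketch (the variable names are mine):

```bash
#!/usr/bin/env bash
set -euo pipefail

# The word list is expanded once, before the first iteration:
# appending to `items` inside the body adds no iterations.
items="a b"
for x in $items; do   # unquoted on purpose: simple single-word tokens
  items="$items extra"
  echo "iteration: $x"
done
# Prints "iteration: a" and "iteration: b" -- never "extra".
```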
That expansion can include:

- Globbing (like `*` and `*.log`)
- Brace expansion (like {1..5})
- Parameter expansion (like $HOME)
- Command substitution (like $(date))
- Word splitting (based on IFS) and pathname expansion

That’s why I treat for loops as “iterate over already-materialized items.” If the input is fundamentally a stream (like lines coming from find, JSON from an API, or arbitrary text that may contain spaces/newlines), I switch to a streaming pattern instead of trying to force it into for.

A minimal, correct example:

```bash
#!/usr/bin/env bash
set -euo pipefail

for i in 0 1 2 3 4; do
  echo "$i"
done
```

I’m using set -euo pipefail here because it catches a lot of script mistakes early:

- -e: exit on unhandled errors (with some caveats)
- -u: error on unset variables
- pipefail: propagate pipeline failures

You don’t have to use it for every one-liner, but for scripts you commit, I do.

### A quick note on the Linux “for command”
People often say “for command in Linux,” but for isn’t a standalone binary like ls or cp. It’s a shell keyword (built into shells like Bash). That matters because:

- Syntax and features vary by shell (bash, dash, ksh, zsh)
- Expansion rules are the shell’s rules
- The shebang (#!/bin/sh vs #!/usr/bin/env bash) controls what’s legal

So whenever you copy a loop, first check which shell it assumes.
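You can ask the shell directly what kind of thing a name is. In Bash, type distinguishes keywords, builtins, and external binaries:

```bash
# `type` shows how Bash will interpret a name before running it.
type for    # -> "for is a shell keyword"
type cd     # -> "cd is a shell builtin"
type ls     # -> "ls is /usr/bin/ls" (path varies by distro)
```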
If you want Bash features, declare Bash.

## Iterating over fixed lists and arrays (the “safe default”)
If you already have a fixed list of items (or you can represent it as an array), for is perfect.

### Fixed list of environments

```bash
#!/usr/bin/env bash
set -euo pipefail

envs=(dev staging prod)

for env in "${envs[@]}"; do
  echo "Deploying to: $env"
  # deploy "$env"
done
```

Two details matter:

1) I store values in an array (envs=(...)) rather than a single string.
2) I expand with "${envs[@]}" so each element stays intact even if it contains spaces.

If you accidentally write for env in ${envs[@]}; do ..., unquoted expansion can split elements on spaces and tabs. That’s a classic foot-gun.

### Iterating over key/value pairs
When I need pairs, I often store them as key=value strings and split inside the loop.

```bash
#!/usr/bin/env bash
set -euo pipefail

services=(
  "api=https://api.internal.example"
  "docs=https://docs.internal.example"
  "status=https://status.internal.example"
)

for entry in "${services[@]}"; do
  name=${entry%%=*}
  url=${entry#*=}

  echo "Checking $name -> $url"
  curl -fsS --max-time 5 "$url" >/dev/null
  echo "OK: $name"
done
```

This avoids calling awk or cut per iteration, which helps performance and keeps dependencies down.

### Iterating over script arguments ($@)
A lot of Linux automation is “do X to whatever the user passes in.” For that, "$@" is the gold standard.

```bash
#!/usr/bin/env bash
set -euo pipefail

if (($# == 0)); then
  echo "Usage: $0 FILE..." >&2
  exit 2
fi

for path in "$@"; do
  echo "Got argument: $path"
  test -e "$path" || { echo "Missing: $path" >&2; exit 1; }
done
```

The reason I insist on "$@" (not $@, not $*) is that it preserves each argument as a distinct element, including spaces.

### Associative arrays (Bash) for “real” maps
If you’re in Bash and you want a true key->value map, associative arrays are clearer than key=value strings.

```bash
#!/usr/bin/env bash
set -euo pipefail

declare -A ports=(
  [nginx]=80
  [ssh]=22
  [postgres]=5432
)

for svc in "${!ports[@]}"; do
  echo "$svc -> ${ports[$svc]}"
done
```

Two things to remember:

- "${!ports[@]}" iterates keys
- The order is not guaranteed (so don’t build “ordered output” logic on it unless you sort the keys)

## Ranges and counting: brace expansion, C-style loops, and seq
Counting loops are where a lot of scripts start, and you have three main options.

### 1) Brace expansion (Bash/Zsh): {1..5}

```bash
#!/usr/bin/env bash
set -euo pipefail

for i in {1..5}; do
  echo "Number $i"
done
```

Brace expansion happens before the loop starts. It’s fast and readable. It’s also shell-specific (works in Bash and Zsh; not guaranteed in strict POSIX sh).

You can also count with a step:

```bash
#!/usr/bin/env bash
set -euo pipefail

for i in {0..20..5}; do
  echo "Batch index: $i"
done
```

### 2) C-style arithmetic loop: for ((i=...; ...; i++))
This is Bash/Ksh style and great when your bounds are variables.

```bash
#!/usr/bin/env bash
set -euo pipefail

start=3
end=7

for ((i=start; i<=end; i++)); do
  echo "Index: $i"
done
```

I prefer this over seq when I’m already in Bash because it avoids spawning an external command.

### 3) seq (external command)
seq is handy, especially in minimal shells that don’t support brace expansion or C-style loops.

```bash
#!/usr/bin/env bash
set -euo pipefail

for i in $(seq 1 5); do
  echo "Number $i"
done
```

That said, $(seq ...) still goes through command substitution and word splitting.
It’s okay for simple numbers, but I wouldn’t use this for arbitrary strings.

### Counting with padding (common in filenames)
If you need zero-padded counters (001, 002, …), I usually reach for printf rather than seq -w (both work; printf keeps the formatting logic close).

```bash
#!/usr/bin/env bash
set -euo pipefail

for ((i=1; i<=5; i++)); do
  printf 'frame%03d.png\n' "$i"
done
```

## Iterating over files: globs, dotfiles, and names with spaces
File iteration is the most common for use case—and the one where correctness issues show up quickly.

### The basic glob loop

```bash
#!/usr/bin/env bash
set -euo pipefail

for file in *; do
  echo "Found: $file"
done
```

This iterates over entries in the current directory. A few gotchas you should keep in mind:

- `*` does not match dotfiles by default (like .env or .gitignore).
- If a filename contains spaces, your loop variable still receives the full filename correctly in this form (because globbing yields separate words already).
The bigger risk is what you do next—always quote "$file".

### Including dotfiles
If you really need dotfiles too, be explicit:

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s dotglob

for file in *; do
  echo "Found (including dotfiles): $file"
done
```

I rarely enable dotglob globally in a long script without resetting it afterwards, because it can surprise you later.

### Handling “no matches”: nullglob
By default in Bash, if *.log matches nothing, the pattern stays literal (*.log) and your loop runs once with that literal string.

If you want “no files means zero iterations,” enable nullglob.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for log in *.log; do
  echo "Archiving $log"
  gzip -9 -- "$log"
done
```

The -- is a habit I keep when passing filenames to commands, so a filename like -nasty doesn’t get interpreted as an option.

### Recursing directories: don’t use for + ls
A pattern I recommend you avoid:

```bash
# Bad: breaks on spaces/newlines, and ls is for humans
for f in $(ls *.txt); do
  echo "$f"
done
```

Instead, use either a glob for non-recursive work, or find for recursive work.

#### Recursive, safe with any filename: find -print0 + while read -d ''

```bash
#!/usr/bin/env bash
set -euo pipefail

root="${1:-.}"

find "$root" -type f -name '*.txt' -print0 |
  while IFS= read -r -d '' path; do
    echo "Text file: $path"
  done
```

This handles spaces, tabs, and newlines in filenames correctly. If you deal with user-generated files, this is the pattern that keeps you out of trouble.

### A practical “bulk action” template for files
When I’m about to touch many files, I use the same skeleton every time: enable safe glob behavior, check existence, print what I’m doing, and keep the transformation logic obvious.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

action="${ACTION:-dry}"  # ACTION=run to execute

for f in *.log; do
  target="$f.gz"
  echo "Plan: gzip -- $f -> $target"

  if [[ "$action" == run ]]; then
    gzip -9 -- "$f"
  fi
done
```

I like the ACTION=run pattern because it turns scary scripts into safe scripts by default. It’s not fancy, but it prevents the “oops, I ran it from the wrong directory” class of outages.

### Globs across directories (and why quoting matters)
A common mistake is quoting the glob itself, which prevents expansion.

```bash
# Wrong: the "*.txt" is quoted, so it will not expand
for f in "*.txt"; do
  echo "$f"
done
```

Correct is to leave the glob unquoted, but quote the variable later:

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for f in *.txt; do
  wc -l -- "$f"
done
```

If you want a variable directory with a glob, build the path carefully:

```bash
#!/usr/bin/env bash
set -euo pipefail

dir="${1:-.}"
shopt -s nullglob

for f in "$dir"/*.txt; do
  echo "File: $f"
done
```

Here, the directory part is quoted (safe), while the glob part still expands.

### Recursion in Bash with globstar (when you want it)
Bash can do recursion, but it’s opt-in and still has caveats.
I use it for quick scripts, not for “must be perfectly portable.”

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s globstar nullglob

for f in **/*.md; do
  echo "Markdown: $f"
done
```

If you need maximum correctness and portability, find still wins for recursion.

## Iterating over command output: when for x in $(...) is fine (and when it isn’t)
You’ll often see examples like this:

```bash
for process in $(ps -e | awk '{print $1}'); do
  echo "Process ID: $process"
done
```

This works for simple whitespace-separated values (like numeric PIDs). But the core limitation is: command substitution returns text, then the shell splits it on whitespace. If any item can contain spaces, tabs, or newlines, you’re no longer iterating “items,” you’re iterating “words.”

Here’s how I choose:

- If output is guaranteed to be simple tokens (IDs, single-word names), for x in $(...) is acceptable.
- If output is lines, paths, or anything human text-like, switch to a line-safe read loop.

### Better: read lines safely with mapfile (Bash)
mapfile (aka readarray) reads lines into an array without mangling spaces.

```bash
#!/usr/bin/env bash
set -euo pipefail

mapfile -t users < <(cut -d: -f1 /etc/passwd)

for user in "${users[@]}"; do
  echo "User: $user"
done
```

Process substitution (< <(...)) keeps things readable and avoids temporary files.

### Best for streaming: while IFS= read -r
If the command output could be large, I prefer streaming:

```bash
#!/usr/bin/env bash
set -euo pipefail

journalctl -u ssh --since "1 hour ago" --no-pager |
  while IFS= read -r line; do
    # Keep the raw line; -r avoids backslash escaping
    echo "SSH log: $line"
  done
```

Note: in Bash, a while ...; do ...; done that’s fed by a pipe runs in a subshell in many configurations, so variables set inside might not be visible outside. If you need to accumulate state, prefer process substitution:

```bash
#!/usr/bin/env bash
set -euo pipefail

count=0
while IFS= read -r user; do
  count=$((count + 1))
done < <(cut -d: -f1 /etc/passwd)

echo "Total users: $count"
```

### “Lines vs words”: a concrete failure example
This is the fastest way to internalize the rule. Imagine output contains spaces:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Pretend these are filenames or human names
items=$(printf '%s\n' "alpha" "two words" "three words here")

# Bad: splits into words
for x in $items; do
  echo "[$x]"
done
```

You’ll get six iterations, not three. Fix by using a line-safe loop:

```bash
#!/usr/bin/env bash
set -euo pipefail

printf '%s\n' "alpha" "two words" "three words here" |
  while IFS= read -r x; do
    echo "[$x]"
  done
```

## Real-world for patterns I actually ship
This section is where for pays rent: small scripts you can commit and reuse.

### 1) Batch checksum verification
You download a directory of artifacts and want quick integrity checks.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for artifact in dist/*.tar.gz; do
  echo "Verifying: $artifact"
  sha256sum -- "$artifact" | tee -a checksums.txt
done
```

This is intentionally simple. If you need to compare against a known checksum list, you can add sha256sum -c.

### 2) Rotate and compress logs by date

```bash
#!/usr/bin/env bash
set -euo pipefail

logdir="/var/log/myapp"

# Compress yesterday's log files named like myapp-YYYY-MM-DD.log
yesterday=$(date -d 'yesterday' +%F)

shopt -s nullglob

for log in "$logdir"/myapp-"$yesterday"*.log; do
  echo "Compressing $log"
  gzip -9 -- "$log"
done
```

I keep it predictable: fixed naming scheme, explicit date, quoted variables.

### 3) Health-check multiple endpoints with retries
This is the kind of loop you run in CI or a deploy hook.

```bash
#!/usr/bin/env bash
set -euo pipefail

endpoints=(
  "https://service-a.example/health"
  "https://service-b.example/health"
  "https://service-c.example/health"
)

for url in "${endpoints[@]}"; do
  echo "Checking: $url"

  ok=false
  for attempt in {1..5}; do
    if curl -fsS --max-time 3 "$url" >/dev/null; then
      ok=true
      break
    fi
    echo "  attempt $attempt failed; waiting..."
    sleep 1
  done

  if [[ "$ok" != true ]]; then
    echo "ERROR: health check failed for $url" >&2
    exit 1
  fi
done
```

Nested loops are fine when the logic is clear.
Here the inner for is bounded and small.

### 4) Safe bulk rename (predictable, reversible)
When renaming files, I always print the plan first.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for path in *.jpeg; do
  base=${path%.jpeg}
  target="$base.jpg"

  if [[ -e "$target" ]]; then
    echo "Skip (target exists): $path -> $target" >&2
    continue
  fi

  echo "Rename: $path -> $target"
  mv -- "$path" "$target"
done
```

You can also add a DRYRUN=1 mode if you want to preview without executing.

### 5) Apply the same command across many repos
If you have a “mono-folder” of repos, for makes repetitive maintenance easy.

```bash
#!/usr/bin/env bash
set -euo pipefail

root="${1:-$HOME/src}"

for repo in "$root"/*/.git; do
  repodir=${repo%/.git}
  echo "Updating: $repodir"

  (cd "$repodir" && git pull --ff-only)
done
```

I wrap cd in parentheses to avoid changing the working directory of the parent script.

### 6) Run a command on many remote hosts (ssh)
This is where for shines: a simple inventory list and a repeatable action. The key is to keep hostnames as array elements (or line-based input) so you don’t split on spaces accidentally.

```bash
#!/usr/bin/env bash
set -euo pipefail

hosts=(
  "web-01"
  "web-02"
  "db-01"
)

for host in "${hosts[@]}"; do
  echo "== $host =="
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" 'uptime; df -h /'
done
```

If one host being down should not kill the whole run, I make that explicit and continue:

```bash
#!/usr/bin/env bash
set -euo pipefail

hosts=("web-01" "web-02" "db-01")

for host in "${hosts[@]}"; do
  echo "== $host =="
  if ! ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" 'uptime'; then
    echo "WARN: ssh failed: $host" >&2
    continue
  fi
done
```

### 7) Convert a batch of media files safely
External commands in loops are normal, but I keep guardrails: nullglob, --, and a “skip if output exists” check.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for inpath in *.wav; do
  outpath="${inpath%.wav}.mp3"

  if [[ -e "$outpath" ]]; then
    echo "Skip (exists): $outpath" >&2
    continue
  fi

  echo "Convert: $inpath -> $outpath"
  ffmpeg -hide_banner -loglevel error -i "$inpath" -codec:a libmp3lame -qscale:a 2 "$outpath"
done
```

If you need speed, that’s when I switch to xargs -P (covered later), because concurrency is not something I want to hand-roll inside for unless I’m forced to.

## Loop control you should actually use: break, continue, and case
Bash loops are readable when you use the control structures intentionally.

### continue for “skip this item”
I use continue for guard clauses (missing file, bad value, already processed). This keeps the “happy path” less indented.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for f in *.csv; do
  [[ -s "$f" ]] || { echo "Skip empty: $f" >&2; continue; }
  echo "Process: $f"
  # process_csv "$f"
done
```

### break when the goal is “find the first match”
If you’re searching, stopping early is both faster and clearer.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

found=""
for f in *.log; do
  if grep -q "FATAL" "$f"; then
    found="$f"
    break
  fi
done

if [[ -n "$found" ]]; then
  echo "First file with FATAL: $found"
else
  echo "No FATAL found"
fi
```

### case inside a for loop for clean branching
When “what I do” depends on extension or environment, case reads better than a chain of if statements.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

for f in *; do
  [[ -f "$f" ]] || continue

  case "$f" in
    *.jpg|*.jpeg)   echo "Image: $f" ;;
    *.mp4|*.mov)    echo "Video: $f" ;;
    *.zip|*.tar.gz) echo "Archive: $f" ;;
    *)              echo "Other: $f" ;;
  esac
done
```

## When not to use for: better tools for big data, weird filenames, and parallelism
I like for for small, clear iterations. When the data is large or messy, other patterns are safer.

### If you’re parsing lines: use while read
If each item is a line, don’t force word splitting.

```bash
#!/usr/bin/env bash
set -euo pipefail

while IFS= read -r hostname; do
  [[ -z "$hostname" ]] && continue
  echo "Pinging $hostname"
  ping -c 1 -W 1 "$hostname" >/dev/null || echo "Down: $hostname"
done < hosts.txt
```

### If you’re walking the filesystem: use find
find already understands recursion and can filter by file type, size, mtime, and more.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Delete build outputs older than 14 days
find ./build -type f -name '*.tmp' -mtime +14 -print -delete
```

(Notice I’m printing what I delete. I do that by default when the action is destructive.)

### If you need concurrency: consider xargs -P or GNU parallel
A for loop runs sequentially. That’s often what you want for correctness, but sometimes it’s too slow.

Example with xargs -P:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fetch multiple URLs concurrently (up to 8 at a time)
cat urls.txt | xargs -n 1 -P 8 -I {} curl -fsS --max-time 5 {} >/dev/null
```

If concurrency matters, I prefer these tools over backgrounding inside for unless I’m also adding a proper job limiter.

### A simple decision table: which looping tool should I pick?
Here’s the mental shortcut I use when I’m choosing the primitive, not the syntax.

| Input type | Contains spaces/newlines? | Size | Recommended pattern | Why |
| --- | --- | --- | --- | --- |
| Fixed items you control | Maybe | Small/medium | `for x in "${arr[@]}"` | Clear and safe |
| Filenames (non-recursive) | Yes | Small/medium | `for f in *.ext; do ...` | Globs iterate filenames cleanly |
| Filenames (recursive) | Yes (including newlines) | Any | `find ... -print0 \| while read -d ''` | Correct for all filenames |
| Lines of text | Yes | Any | `while IFS= read -r line` | No word splitting |
| Tokens (IDs, numbers) | No | Any | `for id in $(cmd)` (carefully) | Convenient when splitting is safe |
| “Run this per item fast” | Any | Large | `xargs -P` / `parallel` | Parallelism without DIY jobs |

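If you do want to stay in pure Bash instead of reaching for xargs -P, a bounded background-job loop is possible, but it needs an explicit limiter — this is exactly the DIY complexity the table warns about. A sketch, assuming Bash 4.3+ for wait -n; the work function is a hypothetical stand-in for your real per-item command:

```bash
#!/usr/bin/env bash
set -euo pipefail

work() {            # stand-in for the real per-item command
  sleep 0.1
  echo "done: $1"
}

max_jobs=4
for i in {1..20}; do
  # Throttle: once max_jobs are running, wait for any one to finish.
  while (( $(jobs -rp | wc -l) >= max_jobs )); do
    wait -n
  done
  work "$i" &
done
wait   # let the stragglers finish
```

I’d still reach for xargs -P first; this pattern earns its keep only on boxes where you can’t install anything.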
## Common mistakes (and the exact fixes I use)
Most shell loop bugs come from expansions and quoting. Here are the mistakes I see most, plus what I do instead.

### Mistake 1: forgetting quotes around the loop variable

```bash
# Bad
for f in *; do
  rm $f
done
```

Correct:

```bash
#!/usr/bin/env bash
set -euo pipefail

for f in *; do
  rm -- "$f"
done
```

This protects spaces and leading dashes.

### Mistake 2: iterating $(ls ...)
Use globbing (*.txt) for simple cases, or find ... -print0 for recursive/safe cases.

### Mistake 3: assuming *.ext yields zero items when nothing matches
Enable nullglob if you want “no matches => zero iterations.”

```bash
shopt -s nullglob
for f in *.csv; do
  ...
done
```

### Mistake 4: using for x in $(command) for paths or lines
If it’s line-based data:

```bash
while IFS= read -r line; do
  ...
done < <(command)
```

If it’s path-based data, add -print0 and read -d ''.

### Mistake 5: treating for as POSIX when you’re using Bash-only features
These are Bash-only (or not reliably POSIX):

- for ((i=0; i<10; i++))
- arrays arr=(...)
- mapfile
- [[ ... ]]
- shopt options

If your script runs as #!/bin/sh on a distro where sh is dash, those features will break. My rule: if I’m writing modern Bash, I declare it:

```bash
#!/usr/bin/env bash
```

…and I run ShellCheck on it.

### Mistake 6: accidentally splitting on commas, colons, or newlines (IFS surprises)
When people set IFS and forget to restore it, later loops break in non-obvious ways.
My approach is to keep IFS changes local to a single read, not global.

Good pattern (local IFS for a single split):

```bash
#!/usr/bin/env bash
set -euo pipefail

entry="alpha,beta,gamma"
IFS=, read -r a b c <<< "$entry"

echo "$a"
echo "$b"
echo "$c"
```

If I truly need IFS changed for a whole loop, I save/restore it explicitly:

```bash
#!/usr/bin/env bash
set -euo pipefail

oldifs=$IFS
IFS=$'\n'

for line in $(printf '%s\n' "one" "two" "three"); do
  echo "[$line]"
done

IFS=$oldifs
```

I still don’t love for with newline-only splitting for general text (because command substitution strips trailing newlines and still has edge cases), but this illustrates how to avoid an “IFS leak.”

### Mistake 7: relying on echo for exact output
This isn’t for-specific, but it shows up in loops constantly. echo can interpret -n or backslash escapes depending on shell/options. For predictable output in scripts, I default to printf.

```bash
#!/usr/bin/env bash
set -euo pipefail

for x in "a" "b" "c"; do
  printf 'Item: %s\n' "$x"
done
```

## Performance notes: what makes for slow (and how to keep it fast enough)
For loops themselves are cheap. The slowdown is usually from what you do inside them.

In practice, on a typical developer laptop or CI runner, you’ll see:

- Shell builtins (printf, parameter expansion) are usually negligible per iteration.
- Spawning external commands (awk, sed, cut, curl, git) can cost roughly in the low-milliseconds per call range (sometimes less, sometimes much more depending on I/O and network).

So if your loop calls two external commands per file across 10,000 files, you can easily add seconds to minutes.

My fixes are boring but effective:

- Replace per-iteration awk/sed/cut with Bash parameter expansion when possible.
- Batch work: one tar or one rsync instead of thousands of cp calls.
- If you truly need concurrency, use xargs -P or parallel rather than DIY background jobs.

### A before/after style optimization (parameter expansion vs external tools)
If you’re doing string slicing in a loop, external tools add overhead.

Less ideal (spawns cut each time):

```bash
#!/usr/bin/env bash
set -euo pipefail

items=("api=https://a" "docs=https://b")

for entry in "${items[@]}"; do
  name=$(printf '%s' "$entry" | cut -d= -f1)
  echo "$name"
done
```

Better (pure Bash, no extra processes):

```bash
#!/usr/bin/env bash
set -euo pipefail

items=("api=https://a" "docs=https://b")

for entry in "${items[@]}"; do
  name=${entry%%=*}
  echo "$name"
done
```

On small loops you won’t notice, but at scale this is the difference between “fine” and “why is this job taking so long?”

## Debugging and hardening for loops (the guardrails I use)
When a loop misbehaves, it’s usually expansion or environment. These are the tools I reach for.

### 1) Print what you’re about to do (especially for destructive actions)
Before I rm, mv, chmod, or sed -i, I print the exact command (or a readable plan line). This is the cheapest safety net you can add.

### 2) Turn on tracing temporarily: set -x
Bash can show you exactly what expansions produced.

```bash
#!/usr/bin/env bash
set -euo pipefail

set -x
shopt -s nullglob
for f in *.log; do
  printf 'File=%q\n' "$f"
done
set +x
```

The %q format is a neat trick: it prints the value in a shell-escaped form, which makes weird whitespace visible.

### 3) Use trap for cleanup in loops that create temp files
If you create temporary files inside loops, make cleanup automatic.

```bash
#!/usr/bin/env bash
set -euo pipefail

tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

for i in {1..3}; do
  printf 'data %s\n' "$i" > "$tmpdir/$i.txt"
done

echo "Wrote files to $tmpdir"
```

### 4) Prefer explicit error handling when partial failure is okay
set -e is not a “make everything safe” switch. Sometimes you want to attempt all items and report failures at the end. In that case, track errors manually.

```bash
#!/usr/bin/env bash
set -euo pipefail

shopt -s nullglob

fail=0
for f in *.gz; do
  echo "Testing: $f"
  if ! gzip -t -- "$f"; then
    echo "Corrupt: $f" >&2
    fail=1
  fi
done

exit "$fail"
```

This pattern is great for CI checks: it tries everything, prints all the problems, and still fails the job.

### A final rule of thumb I trust
If you can clearly describe your input as “a list of items the shell can expand safely,” for is the cleanest solution. If your input is “text that humans produce” or “paths that might contain anything,” treat it as a stream and move to while read with the right delimiter (\n or NUL). That single decision prevents most of the bugs I see in Linux automation.
