I keep meeting teams who are automating everything but still arguing about which tool to reach for first. If you build pipelines, ship services, or keep developer environments running, you probably bounce between Python and Bash daily. I do too. They solve overlapping problems, yet their strengths sit in very different places. I’m going to show you where each one shines, where it breaks down, and how I choose in real projects. You’ll get concrete guidance, code you can run today, and a few modern patterns that weren’t on the table a few years ago.
## Mental model: script vs language vs environment
When I think about Bash, I think about the shell as a living environment: it sits between you and the operating system. It is a command interpreter and a glue layer for existing tools. Bash doesn’t just run commands; it inherits environment variables, file descriptors, pipelines, exit codes, and a rich set of POSIX tools that are already present.
Python, on the other hand, is a general-purpose programming language. It gives you a complete runtime, a standard library that feels like a small operating system, and third‑party packages that can replace entire categories of shell commands. It is not a shell environment; it’s a language runtime that can call shells when needed.
A simple analogy I use: Bash is a Swiss Army knife you already carry in your pocket, while Python is a full toolbox that you open when the job gets bigger than a pocket tool. If you need to twist a screw, Bash is perfect. If you need to build a shelf, Python is the right choice.
## Readability and maintainability in real teams
I see people assume Bash is “shorter,” therefore “simpler.” That is often true in a five‑line script. It stops being true at 40 or 60 lines. Bash is dense, full of quoting rules, and it has a lot of subtle behavior that only shows up at scale. You can write clean Bash, but it requires discipline and a shared style guide.
Python’s readability is usually better for teams. The syntax is regular, the error messages are clearer, and code review is easier because your teammates don’t need to keep a giant mental model of quoting and word splitting. If I expect a script to live for more than a month, I lean toward Python.
Two common maintainability traps:
- Shell scripts that start tiny and grow into “mini apps” with ad‑hoc parsing, file manipulation, and custom state handling.
- Python scripts that try to replicate shell pipelines instead of using Python’s strengths (data structures, libraries, and structured errors).
My rule: if the script will be run by the same person within the same day, Bash is often fine. If it will be run by a team for months, Python is usually the safer bet.
## Error handling and reliability
Reliability matters more than syntax, especially in automation. Bash can be reliable, but you must opt in to the safer defaults and follow strict patterns.
Here’s a Bash pattern I treat as non‑negotiable for anything that modifies systems:
```bash
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

# Fail fast when required variables are missing
: "${RELEASE_TAG:?Missing RELEASE_TAG}"

build_dir="/tmp/build-${RELEASE_TAG}"
mkdir -p "$build_dir"

# Example command with a clear error message
if ! tar -czf "$build_dir/app.tar.gz" ./app; then
  echo "Failed to create archive" >&2
  exit 1
fi
```
Python’s error handling is explicit and structured by default. That makes it easier to build safe behaviors without ceremony:
```python
#!/usr/bin/env python3
from pathlib import Path
import tarfile
import sys

release_tag = (Path.cwd() / ".release_tag").read_text().strip()
if not release_tag:
    print("Missing release tag", file=sys.stderr)
    raise SystemExit(1)

build_dir = Path("/tmp") / f"build-{release_tag}"
build_dir.mkdir(parents=True, exist_ok=True)

archive_path = build_dir / "app.tar.gz"
try:
    with tarfile.open(archive_path, "w:gz") as archive:
        archive.add("./app", arcname="app")
except Exception as exc:
    print(f"Failed to create archive: {exc}", file=sys.stderr)
    raise SystemExit(1)
```
In my experience, Python wins for reliability once you have branching logic, structured data, or multiple error paths. Bash wins for short, linear pipelines that glue existing tools together.
## Performance: startup, process cost, and I/O
Performance is more nuanced than “Bash is faster” or “Python is faster.” Bash itself starts quickly and shells out to other programs. Python has a heavier interpreter startup and import time. The real cost is usually process spawning and I/O.
Practical guidance I use:
- If you are running many tiny commands (10–100 per second), Bash’s overhead and process spawning can dominate. But Python won’t fix that if it still calls external processes for each step.
- If you can keep work inside a Python process, Python usually wins, because you avoid spawning dozens of child processes. For I/O‑heavy tasks, that can save tens of milliseconds per loop, and over thousands of iterations you can see seconds or minutes of difference.
- For network calls, both languages spend most time waiting. The choice becomes about clarity and correctness, not raw speed.
When performance matters, I measure in ranges. For example, a single subprocess spawn can often cost around 5–20ms on a typical workstation. If your script spawns 500 processes, you might spend 2.5–10 seconds just on process overhead. Python’s interpreter startup might cost 20–80ms, but if it replaces hundreds of subprocesses, you come out ahead.
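If you want numbers from your own machine instead of my ranges, a few lines of Python will measure the spawn cost directly. This is a rough sketch: `true` is the standard do-nothing Unix command, so the loop's cost is almost pure process overhead.

```python
import subprocess
import time

def time_spawns(n: int) -> float:
    """Total seconds spent spawning n trivial child processes."""
    start = time.perf_counter()
    for _ in range(n):
        # "true" does nothing, so what we measure here is spawn overhead
        subprocess.run(["true"], check=True)
    return time.perf_counter() - start

total = time_spawns(50)
print(f"50 spawns: {total:.3f}s total, {total / 50 * 1000:.1f}ms each")
```

Run it on the machine where your automation actually executes; container and CI hosts often spawn noticeably slower than laptops.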
## Data handling and text processing
Bash is excellent for quick text transformations, especially when you can pipe in awk, sed, grep, or jq. But once the data shape becomes more complex than line‑oriented text, Python is easier and safer.
Here’s a real‑world example: parse a JSON log file, count errors by service, and write a CSV report.
Python version:
```python
#!/usr/bin/env python3
import json
import csv
from collections import Counter
from pathlib import Path

log_path = Path("./logs/requests.jsonl")
counts = Counter()

with log_path.open() as f:
    for line in f:
        record = json.loads(line)
        if record.get("level") == "error":
            counts[record.get("service", "unknown")] += 1

with open("error_report.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["service", "error_count"])
    for service, count in sorted(counts.items()):
        writer.writerow([service, count])
```
A Bash version is possible, but it quickly becomes a brittle pipeline. You need jq for JSON, and the composition gets complex:
```bash
#!/usr/bin/env bash
set -euo pipefail

jq -r 'select(.level=="error") | .service // "unknown"' logs/requests.jsonl \
  | sort \
  | uniq -c \
  | awk '{print $2 "," $1}' \
  > error_report.csv
```
Both are valid, but I’d reach for Python the moment the transformation needs more than two or three stages or when I need to handle malformed data.
## Environment integration and system access
Bash is tightly integrated with the system environment. It naturally handles environment variables, file permissions, process IDs, pipelines, and shell expansions. Python can do all of this, but it’s not the default mode of thought.
If you’re orchestrating existing tools, Bash is perfect:
```bash
#!/usr/bin/env bash
set -euo pipefail

export APP_ENV=staging

echo "Building..."
npm ci
npm run build

echo "Deploying..."
rsync -avz dist/ deploy@server:/var/www/app
```
If you need to inspect system state, Bash often reads more cleanly in short scripts. But if the orchestration grows, Python’s explicitness helps with debugging.
For example, using Python’s subprocess to capture output, parse it, and make decisions is more reliable than string‑based shell parsing. I use Bash when I want straightforward execution and Python when I need structured interactions.
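As a sketch of that pattern (the `uname` probe and the platform-specific flag below are illustrative, not from any particular project), here is Python capturing a command's output as a real value and branching on it, instead of splicing strings in shell:

```python
import subprocess

def host_kernel() -> str:
    """Capture stdout as a structured value; stderr and the exit code stay separate."""
    result = subprocess.run(
        ["uname", "-s"], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

def tar_command(archive: str) -> list:
    # Branch on parsed output with real logic instead of nested shell conditionals.
    # (The macOS-specific flag here is an illustrative assumption.)
    cmd = ["tar", "-czf", archive, "./app"]
    if host_kernel() == "Darwin":
        cmd.insert(1, "--no-xattrs")
    return cmd
```

The payoff is that every decision point is explicit and testable, and a failed probe raises a clear exception instead of silently producing an empty string.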
## Tooling and 2026‑era workflows
Modern workflows blur the lines. Here’s what I see in 2026:
- Python benefits from fast environment managers and dependency tools (like `uv`, `pipx`, and locked environments) that cut friction for automation and developer tools. I also rely on type checkers and linters to keep scripts stable in big codebases.
- Bash benefits from better shell linters (`shellcheck`) and safer default templates. But it still lacks a robust package ecosystem. When I need libraries, I move to Python.
I also see AI‑assisted coding in both languages. I use assistants to draft Bash templates or rewrite pipelines into Python. The difference is that Python has a richer static analysis ecosystem, so automated refactors are safer.
## Traditional vs Modern methods

Here's a quick comparison for common tasks:

| Traditional | My choice in 2026 |
| --- | --- |
| Bash with pipes | `shellcheck`-checked Bash |
| Ad‑hoc Bash script | `typer` or `argparse` Python |
| awk/sed + files | `pandas` or streaming JSON in Python |
| Bash install script | brew/apt + small Python helpers (mixed) |
| Bash in CI | Python |

## Common mistakes and how I avoid them
I see similar mistakes over and over. Here’s how I steer around them.
### Mistake 1: Using Bash for complex logic
Bash looks simple until you need nested conditionals, arrays, or complex data. The result is unreadable and fragile. I keep Bash for simple orchestration and move to Python for real logic.
### Mistake 2: Using Python when a shell pipeline is enough
If the task is “find files, filter, and print,” Bash is clearer. Don’t force Python just because it’s your main language. Let the task decide.
### Mistake 3: Ignoring exit codes in Bash

A pipeline can hide failures. I always use `set -euo pipefail` in scripts that matter, and I check command exits when I need custom handling.
### Mistake 4: Writing Python that shells out for everything
If your Python script just calls external commands and parses text output, you’re getting the worst of both worlds. Either keep it in Bash or pull the logic into Python and call fewer subprocesses.
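A small illustration of the fix, assuming the task is counting matching log lines: instead of spawning `grep` and parsing its text output, do the whole thing in-process.

```python
from pathlib import Path

def count_error_lines(path: Path) -> int:
    """In-process equivalent of `grep -c -i error file`, with no child process."""
    with path.open(encoding="utf-8", errors="replace") as f:
        return sum(1 for line in f if "error" in line.lower())
```

One subprocess replaced is a small win; a loop that spawned one per file becomes a single process that streams everything.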
### Mistake 5: Forgetting portability
Bash behavior differs across systems. If you rely on GNU‑specific flags, your script may fail on macOS. I either pin the environment or use Python for portability.
## When I choose Python (and when I don't)
I use Python when:
- The script will live for months or years.
- I need structured data handling or parsing.
- There is a chance the logic will grow.
- I want tests or type hints for safety.
- I need a reusable CLI tool for a team.
I avoid Python when:
- The task is a short pipeline or a few simple commands.
- The runtime environment is unknown and might not have Python.
- I need minimal startup time and no dependencies.
A good example: provisioning a local dev machine. I often start with Bash to check dependencies, then call a Python helper that handles configuration logic and file generation. This keeps the bootstrap lightweight and the complexity in a language that is easier to maintain.
## When I choose Bash (and when I don't)
I use Bash when:
- I’m orchestrating existing system tools.
- I need fast, simple glue for CI or local workflows.
- I expect to run the script once or a few times.
- The task is mostly about files and processes, not data structures.
I avoid Bash when:
- The script will be owned by a team long‑term.
- The logic involves complex branching or state.
- I need unit tests or structured error handling.
A good example: a CI step that builds and uploads artifacts. If it’s 10 lines, Bash is perfect. If it grows to handle multiple build targets and custom metadata, I move to Python and keep Bash as a thin wrapper.
## Practical side‑by‑side example: deploying a service
Let’s compare a realistic task: build a Docker image, tag it, and push to a registry, then update a deployment file.
Bash version:
```bash
#!/usr/bin/env bash
set -euo pipefail

service_name="orders"
version_tag="$(git rev-parse --short HEAD)"
image="registry.example.com/${service_name}:${version_tag}"

# Build and push
podman build -t "$image" .
podman push "$image"

# Update deployment file
sed -i.bak "s|image: .*|image: ${image}|" k8s/deployment.yaml
```
Python version:
```python
#!/usr/bin/env python3
import subprocess
from pathlib import Path

service_name = "orders"
version_tag = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"], text=True
).strip()
image = f"registry.example.com/{service_name}:{version_tag}"

subprocess.run(["podman", "build", "-t", image, "."], check=True)
subprocess.run(["podman", "push", image], check=True)

deployment_path = Path("k8s/deployment.yaml")
content = deployment_path.read_text()
updated = []
for line in content.splitlines():
    if line.strip().startswith("image:"):
        updated.append(f"    image: {image}")
    else:
        updated.append(line)
deployment_path.write_text("\n".join(updated) + "\n")
```
Which would I choose? If this is a quick script in a deployment repo, the Bash version is fine. If this is part of a broader release tool, I go with Python so I can add config, tests, and safer parsing later.
## A pragmatic comparison chart

Here's a concise view I use when mentoring teams. Python is a general‑purpose language whose sweet spot is structured logic, data, and long‑term scripts. Its data handling is strong, its portability is high with a packaged runtime, its error handling uses structured exceptions, its package ecosystem is rich, and its readability is high for teams. The tipping point is roughly 50+ lines.
## Final guidance you can act on
If you need a single, clear recommendation: use Bash for short, linear automation and Python for everything that might grow. That is the most reliable rule I’ve seen across teams.
Here’s the quick decision tree I keep in my head:
- One‑off or tiny pipeline? Bash.
- More than 40–50 lines, or shared across a team? Python.
- Structured data or complex logic? Python.
- Only glue between trusted tools? Bash.
I also strongly recommend blending them. A Bash script can set up environment variables and call a Python module. A Python script can run a single shell command when it makes sense. The best teams aren’t ideological; they are practical.
If you choose Python, keep it small and explicit. If you choose Bash, keep it short and safe. That’s how you avoid the slow build‑up of fragile automation, and that’s how you keep your tooling reliable in 2026 and beyond.
## The real difference: controlling state vs controlling processes
This is a deeper distinction that often gets missed. Bash excels at controlling processes. You chain commands, redirect their output, and wire the exit codes together. It’s all about orchestration. Python excels at controlling state. You load data into memory, transform it with explicit structures, and make decisions based on those structures.
That difference is why a complex Bash script feels brittle: it tries to manage state through strings, files, and environment variables. That works until it doesn’t. Python lets you model the world more accurately. You can validate, normalize, and cache data in memory without shelling out.
If you’re doing process control, Bash wins. If you’re doing state control, Python wins. When I get stuck, I ask: “Am I manipulating the world through files and commands, or am I modeling data and applying rules?” That question tends to answer itself.
## Practical scenario: a log retention job
Let’s turn a common task into two implementations. You have logs in multiple directories. You want to archive anything older than 14 days, compress it, and keep a report of what happened.
Bash can do it in a few lines with standard tools, which is great for quick wins:
```bash
#!/usr/bin/env bash
set -euo pipefail

log_root="/var/log/myapp"
archive_dir="/var/log/myapp-archive"
mkdir -p "$archive_dir"

find "$log_root" -type f -name "*.log" -mtime +14 -print0 \
  | tar --null -czf "$archive_dir/logs-$(date +%F).tar.gz" --files-from=-

# Remove archived files
find "$log_root" -type f -name "*.log" -mtime +14 -delete
```
But now consider edge cases:
- The archive step fails, but the delete step still runs.
- A directory contains spaces, and the tar pipeline breaks if you forget `-print0`.
- You want a report of which files were archived and which failed.
A Python version can handle those details with less fragility:
```python
#!/usr/bin/env python3
from pathlib import Path
import tarfile
import time
import sys

log_root = Path("/var/log/myapp")
archive_dir = Path("/var/log/myapp-archive")
archive_dir.mkdir(parents=True, exist_ok=True)

cutoff = time.time() - (14 * 24 * 60 * 60)
old_logs = [p for p in log_root.rglob("*.log") if p.stat().st_mtime < cutoff]

if not old_logs:
    print("No logs to archive")
    raise SystemExit(0)

archive_path = archive_dir / f"logs-{time.strftime('%Y-%m-%d')}.tar.gz"
try:
    with tarfile.open(archive_path, "w:gz") as tar:
        for p in old_logs:
            tar.add(p, arcname=p.relative_to(log_root))
except Exception as exc:
    print(f"Archive failed: {exc}", file=sys.stderr)
    raise SystemExit(1)

failed = []
for p in old_logs:
    try:
        p.unlink()
    except Exception as exc:
        failed.append((p, exc))

print(f"Archived {len(old_logs)} files to {archive_path}")
if failed:
    print("Failed to delete:")
    for p, exc in failed:
        print(f"  {p}: {exc}")
```
In Bash you can get speed and simplicity, but once you care about auditability and safety, Python tends to feel more natural.
## Edge cases that matter in production
A big difference between hobby scripts and production scripts is the number of edge cases you must handle. Here are the ones that come up most often for me, and how the choice between Bash and Python changes the outcome.
### 1) Filenames with spaces or newlines

Bash can handle these only if you are very careful (`-print0`, a safe `IFS`, quoting). Python's `Path` objects and list processing are much more resilient by default.
If filenames are under your control, Bash is fine. If you’re processing arbitrary user input, Python is safer.
### 2) International text and encodings
Bash tools are often locale‑dependent and can behave differently depending on the environment. Python can be explicit about encoding and errors. If your automation touches Unicode filenames or logs, Python is usually the path of least regret.
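A minimal sketch of being explicit: decode with a declared encoding and a deliberate error policy, so one stray byte in a log cannot crash the whole job.

```python
from pathlib import Path

def read_log_safely(path: Path) -> str:
    """Decode as UTF-8; undecodable bytes become U+FFFD instead of raising."""
    return path.read_text(encoding="utf-8", errors="replace")
```

Whether you want `errors="replace"`, `"ignore"`, or a hard failure is a judgment call per job; the point is that Python lets you state the policy instead of inheriting whatever the locale happens to be.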
### 3) Partial failure and rollback
A Bash pipeline often fails fast. That’s good, but it doesn’t make rollback easy. Python gives you a place to record state, so you can implement compensating actions or detailed error reports.
### 4) Concurrency and parallelism

Bash can do simple parallelism (`xargs -P`, background jobs). Python offers more precise concurrency control (`concurrent.futures`, `asyncio`). If you need to throttle, prioritize, or coordinate tasks, Python scales better.
### 5) Portability across Linux and macOS
Bash scripts can be surprisingly brittle if you rely on GNU flags that are missing on macOS. Python’s standard library makes cross‑platform behavior more consistent. If you must support mixed environments, Python is usually safer.
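To make the concurrency point concrete, here is a sketch of a bounded worker pool with `concurrent.futures`, roughly the Python analogue of `xargs -P 4`, but with per-task results you can inspect afterwards:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_all(commands, max_workers=4):
    """Run commands in parallel, at most max_workers at a time.

    Returns the exit codes in the same order as the input commands."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(subprocess.run, cmd) for cmd in commands]
        return [f.result().returncode for f in futures]

codes = run_all([["true"], ["false"], ["true"]])
print(codes)
```

Because you get real result objects back, throttling, retrying a failed subset, or prioritizing certain tasks are small code changes rather than pipeline surgery.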
## Argument parsing and user experience
If your script is used by other people, the command‑line interface matters. Bash can parse args, but it’s error‑prone. Python’s argparse and third‑party libraries provide help text, validation, and defaults with minimal work.
Here’s a minimal Bash parser pattern I see a lot:
```bash
#!/usr/bin/env bash
set -euo pipefail

usage() {
  echo "Usage: $0 --env ENV --dry-run" >&2
  exit 1
}

env=""
dry_run=false

while [[ $# -gt 0 ]]; do
  case "$1" in
    --env)
      env="$2"; shift 2;;
    --dry-run)
      dry_run=true; shift;;
    *)
      usage;;
  esac
done

[[ -z "$env" ]] && usage
```
And the Python equivalent:
```python
#!/usr/bin/env python3
import argparse

parser = argparse.ArgumentParser(description="Deploy a service")
parser.add_argument("--env", required=True, choices=["dev", "staging", "prod"])
parser.add_argument("--dry-run", action="store_true")
args = parser.parse_args()

print(f"env={args.env} dry_run={args.dry_run}")
```
For internal scripts, Bash parsing is okay. For anything your team relies on, Python gives you better UX with less effort.
## Testing and validation
Testing is one of the clearest dividing lines. Bash has tools for tests, but they’re less common in typical workflows. Python makes tests easier to adopt and scale.
In Bash, you might use a lightweight tool or custom test scripts. It works, but most teams don’t do it. In Python, test tools are everywhere and easy to integrate. That means higher reliability over time.
If your script touches production data or infrastructure, the existence of tests should push you toward Python. Not because Bash can’t be tested, but because Python makes it far easier to do right.
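Here is what that looks like in practice, with hypothetical names: pull the decision logic out of the script into a pure function, and a `pytest`-style test needs no disk, no fixtures, and no mocks.

```python
def rotation_candidates(names, keep):
    """Return the oldest log names to delete, keeping the newest `keep`.

    Assumes names sort chronologically, e.g. app-2026-01-02.log."""
    ordered = sorted(names)
    return ordered[: max(0, len(ordered) - keep)]

def test_rotation_candidates():
    logs = ["app-03.log", "app-01.log", "app-02.log"]
    assert rotation_candidates(logs, keep=1) == ["app-01.log", "app-02.log"]
    assert rotation_candidates(logs, keep=5) == []
```

The script itself shrinks to gluing `rotation_candidates` to the filesystem, and the part most likely to have an off-by-one bug is now covered.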
## Security and safety considerations
Both languages can be safe or unsafe. The risks are different.
### Bash risks I watch for
- Word splitting and globbing: a missing quote can turn a safe command into a dangerous one.
- Command injection: interpolating variables into command strings is a classic foot‑gun.
- Invisible failures: without `pipefail`, your script may succeed even if a key step failed.
### Python risks I watch for
- Shelling out with `shell=True`: this is equivalent to writing raw shell strings in Bash.
- Trusting unvalidated inputs: Python makes it easy to build complex tools quickly, so validation is often forgotten.
- Dependency sprawl: a script that pulls in too many libraries becomes fragile in locked‑down environments.
Safety isn’t about the language alone; it’s about the choices you make inside it. But Python makes it harder to accidentally shoot yourself in the foot when you’re doing complex logic.
## A deeper look at pipelines and streaming
Bash shines when your data naturally flows through a pipeline. That’s its home turf. If you’re reading line‑based logs, filtering, and summarizing, Bash can be unbeatable for speed of writing and ease of debugging.
Python can stream just as well, but it requires a deliberate style. A common mistake is to read everything into memory when a stream would work fine. When you use generators, you can mimic the clarity of pipelines without leaving Python.
Here’s a streaming Python pattern for large files:
```python
#!/usr/bin/env python3
from pathlib import Path

log_path = Path("/var/log/syslog")

with log_path.open() as f:
    for line in f:
        if "error" in line.lower():
            print(line.strip())
```
The difference is that Bash can do this with a one‑liner, while Python needs a script. That’s why I keep both tools close. If a pipeline is all you need, Bash saves time.
## Interoperability: using Bash and Python together
In real projects I rarely use one or the other in isolation. The best results often come from small Bash wrappers that call Python for the heavy lifting.
Here’s a practical pattern I use for tooling:
- A Bash script in `bin/` sets up environment variables, discovers the repo root, and checks system dependencies.
- The Bash script then calls a Python module with those parameters.
- The Python module handles parsing, structured logic, and error reporting.
Example wrapper:
```bash
#!/usr/bin/env bash
set -euo pipefail

repo_root="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
export REPO_ROOT="$repo_root"

if ! command -v python3 >/dev/null; then
  echo "python3 is required" >&2
  exit 1
fi

python3 "$repo_root/tools/deploy.py" "$@"
```
This keeps your runtime dependencies light while still enabling maintainable logic in Python. It’s also easier to swap out Python internals later without changing the entrypoint.
## Practical scenario: generating a config file safely
Configuration generation is a classic problem where Bash is tempting but Python is safer. Let’s say you need to generate a .env file based on user input and defaults.
Bash version, quick but brittle:
```bash
#!/usr/bin/env bash
set -euo pipefail

env_file=".env"
app_env="${APP_ENV:-dev}"
api_url="${API_URL:-https://api.example.com}"

cat > "$env_file" <<EOF
APP_ENV=$app_env
API_URL=$api_url
EOF
```
Python version, with validation:
```python
#!/usr/bin/env python3
from pathlib import Path
import os

app_env = os.getenv("APP_ENV", "dev")
api_url = os.getenv("API_URL", "https://api.example.com")

if app_env not in {"dev", "staging", "prod"}:
    raise SystemExit("APP_ENV must be dev, staging, or prod")

env_file = Path(".env")
content = f"APP_ENV={app_env}\nAPI_URL={api_url}\n"
env_file.write_text(content)
```
If validation or schema checks matter, Python wins. If this is a quick disposable helper, Bash is fine.
## Performance tuning: where each language surprises you
Some performance assumptions are wrong in practice:
- Bash is not always faster: if you chain many small commands, process spawn dominates. Python can be faster by keeping logic in one process.
- Python is not always slower: if you use streaming, avoid heavy imports, and minimize subprocess calls, Python can be surprisingly quick.
- Disk and network I/O dominate: both languages are usually waiting on I/O. That means correctness and clarity are more valuable than micro‑optimizations.
If you really care about speed, benchmark your actual workflow. It’s common to find that the bottleneck is not the script language at all, but the external commands or network calls.
## Portability and runtime availability
One of Bash’s biggest advantages is that it exists everywhere. It is present on almost every Unix‑like system. Python is also common, but versions differ, and some minimal environments omit it entirely.
Here’s how I think about portability:
- For base images or minimal containers, assume only shell utilities are available.
- For developer machines or CI systems, Python is usually installed.
- For enterprise environments, Python might be locked down or version‑pinned.
If you need to guarantee that a script runs in a minimal environment, Bash is often the only safe choice. If you can control the runtime, Python is more future‑proof.
## Collaboration and onboarding costs
Team scale matters. Bash is easy to learn for small scripts, but harder to reason about in large systems. Python has a bigger upfront runtime, but most developers can read it comfortably.
In onboarding, I’ve found that:
- A 15‑line Bash script is easy for anyone to understand.
- A 200‑line Bash script is a bottleneck for most teams.
- A 200‑line Python script is usually approachable.
That’s why I bias toward Python once scripts become shared infrastructure. It reduces friction for the next person who has to maintain it.
## How I structure scripts for long‑term use
This is a pattern that keeps me sane over time:
### For Bash
- Keep it under ~50 lines when possible.
- Use a strict header: `set -euo pipefail` and a safe `IFS`.
- Avoid complicated parsing; delegate to Python if needed.
- Avoid `eval` and unquoted expansions.
### For Python
- Use a small main function with explicit argument parsing.
- Keep the dependency list minimal.
- Fail fast on invalid inputs.
- Keep subprocess use isolated in small helper functions.
This discipline makes the difference between “works today” and “reliable next year.”
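The Python side of that discipline fits in a small skeleton (the tool name and flag are placeholders):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Example deploy tool")
    parser.add_argument("--env", required=True, choices=["dev", "staging", "prod"])
    return parser

def main(argv=None):
    # argparse fails fast on invalid input; real work goes in small helpers
    # called from here, so each piece stays testable on its own.
    args = build_parser().parse_args(argv)
    print(f"running against {args.env}")
    return 0
```

In the real script you would wire this up with `sys.exit(main())` under an `if __name__ == "__main__":` guard; tests just call `main([...])` directly with a crafted argv.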
## Decision checklist you can actually use
Here’s a checklist I run mentally before choosing:
1) Is the task mostly orchestration of existing commands? If yes, Bash.
2) Will this script be shared, reused, or maintained for more than a month? If yes, Python.
3) Do I need complex parsing, nested logic, or data structures? If yes, Python.
4) Is the runtime environment guaranteed to have Python? If no, Bash.
5) Is error reporting and observability critical? If yes, Python.
If you answer “yes” to both 1 and 2, consider a hybrid: Bash wrapper + Python core.
## Deeper pitfalls and how to avoid them
These are the mistakes I only learned the hard way.
### Pitfall: assuming pipefail solves everything

`set -euo pipefail` is great, but it doesn't make logic correct. You still need to handle partial failures, retries, and idempotency. If your automation needs those, Python makes it easier to encode those rules explicitly.
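For instance, a retry with a pause between attempts is a few honest lines in Python, where the equivalent Bash loop tends to obscure which step failed. This is a sketch; tune the attempt count and delay to your job, and only retry actions that are idempotent.

```python
import time

def retry(action, attempts=3, delay=0.5):
    """Call action(); on exception, wait and try again, re-raising the last failure."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception as exc:
            last_exc = exc
            if attempt < attempts:
                time.sleep(delay)
    raise last_exc
```

Because the helper returns the action's value and re-raises its last exception, callers keep normal Python error handling instead of inspecting exit codes.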
### Pitfall: letting scripts accrete silently
Small scripts grow into large tools without anyone noticing. The biggest warning sign is when you add a “config file” or “state directory” to a Bash script. That’s usually your moment to move to Python.
### Pitfall: shelling out for core logic
In Python, don’t wrap everything in subprocess calls. If you can parse files or transform data directly, do it. Subprocess calls should be the exception, not the rule.
### Pitfall: ignoring visibility and logs
Bash scripts often print to stdout only. Python makes it easy to add structured logging. If you’re running automation in CI or production, consider even a minimal logging approach so you can debug later.
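A minimal version costs almost nothing, assuming the stdlib `logging` module is enough for your needs:

```python
import logging
import sys

logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("automation")

log.info("starting run")
try:
    raise OSError("disk full")  # stand-in for a real failure
except OSError:
    # log.exception records the full traceback, which a bare print() drops
    log.exception("step failed")
```

Even this much gives you timestamps and levels in CI logs, and it is a short step from here to JSON-formatted records if your log aggregator wants them.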
## A production‑style example: batch image processing
This example shows the turning point. You want to resize a directory of images, generate thumbnails, and report failures.
Bash version might look like this:
```bash
#!/usr/bin/env bash
set -euo pipefail

src="./images"
out="./thumbs"
mkdir -p "$out"

for img in "$src"/*.{jpg,png}; do
  [ -e "$img" ] || continue
  convert "$img" -resize 256x256 "$out/$(basename "$img")"
done
```
This is fine if all images are clean and you’re okay with a stop‑on‑error run. But if you want to collect failures and keep going, Python provides a better foundation:
```python
#!/usr/bin/env python3
from pathlib import Path
import subprocess

src = Path("./images")
out = Path("./thumbs")
out.mkdir(parents=True, exist_ok=True)

failures = []
for img in list(src.glob("*.jpg")) + list(src.glob("*.png")):
    try:
        subprocess.run(
            ["convert", str(img), "-resize", "256x256", str(out / img.name)],
            check=True,
        )
    except Exception as exc:
        failures.append((img, exc))

print(f"Processed {len(list(out.iterdir()))} thumbnails")
if failures:
    print("Failures:")
    for img, exc in failures:
        print(f"  {img}: {exc}")
```
This isn’t about speed; it’s about resilience and user feedback. Python’s explicit control helps when you need a more polished tool.
## Where Bash still dominates
Even after all this praise for Python, Bash holds specific territory:
- Ad‑hoc debugging: when I’m SSH’d into a box, I’m not going to spin up a Python project. Bash lets me find issues quickly.
- One‑liners and quick fixes: grep and awk are still king for quick inspections.
- Minimal environments: init containers, rescue shells, or locked‑down systems often have nothing but Bash.
- Composing existing utilities: if you already trust the tools, Bash is the cleanest glue.
That’s why I don’t see Bash going away. It remains the default language of the terminal.
## Where Python clearly wins
On the other side, Python tends to dominate when:
- You need readable, structured logic for long‑term maintenance.
- Your script needs tests, validation, and schema checks.
- You are working with structured data like JSON, YAML, or APIs.
- You need to package a tool for a team or an org.
- Your automation is part of a larger system with multiple moving parts.
Python becomes more valuable the more you care about correctness and clarity over raw brevity.
## A final comparison table: decision by workload type

Here's how I explain it to teams: for the workloads where Bash rates excellent or strong (orchestrating processes and gluing trusted tools), the recommendation is Bash. For the workloads where Bash is weak, inconsistent, or at best mixed (structured data, complex logic, cross‑platform work), the recommendation is Python.

## Closing thoughts: pragmatism beats ideology
If there’s one idea I want you to take away, it’s this: the best teams are pragmatic. They choose the language that fits the job, not the language they personally prefer.
Bash is unbeatable for short, linear workflows that stitch together existing tools. Python is the better choice when scripts become products: long‑lived, shared, tested, and expected to handle edge cases. If you adopt that mental model, you’ll make better decisions consistently.
In 2026, the most effective approach is often hybrid: Bash for environment setup and process orchestration, Python for logic and data handling. That combination is powerful, practical, and easy to scale.
If you need a rule you can remember: use Bash for glue and Python for structure. And when in doubt, prototype in Bash and graduate to Python as soon as the script starts to grow. That’s how you keep your automation reliable, maintainable, and friendly to the next person who has to touch it.


