You’ve probably had this moment: a Python script runs perfectly on your laptop, then fails on a teammate’s machine (different Python minor version), in CI (missing system libs), or on a server (wrong OS packages). I’ve spent enough hours chasing those “small” environment differences that I now treat reproducibility as part of writing the script—not something I bolt on later.
Docker is the most practical way I know to package a Python script with the exact runtime and dependencies it needs, then run it the same way on macOS, Windows, Linux, CI runners, and cloud hosts. Under the hood, Docker uses OS-level virtualization (containers share the host kernel), so you get isolation without the heavier overhead of a full virtual machine. The result is a single build artifact (an image) that I can run anywhere I have Docker installed.
I’m going to show you the patterns I actually use: a quick “just run this script” workflow, a production-style Dockerfile that handles dependencies cleanly, and the run flags that matter day-to-day. I’ll also cover the common failure modes I see, and how I fix them fast.
What Docker changes for Python scripts
When you run a Python script normally, your execution environment is a loose collection of things:
- Your Python interpreter (version, build options)
- Installed Python packages (and how they were built)
- System libraries (OpenSSL, libpq, libc variants, etc.)
- Environment variables
- Your working directory and filesystem layout
Docker turns that into something you define explicitly. You describe the environment in a Dockerfile, build it into an image, and then run that image in a container.
A few mental models help:
- Image: a read-only snapshot of a filesystem + metadata (entrypoint, default command). It’s your packaged app.
- Container: a running instance of an image. You can start/stop it; it gets its own isolated process namespace.
- Dockerfile: the recipe. Each instruction creates a new layer; layer caching is why small rebuilds can be fast.
If you’ve ever used a virtualenv, think of an image as “virtualenv + Python + OS packages + filesystem structure,” bundled together.
When I do (and don’t) reach for Docker
I reach for Docker when:
- I need the script to run in CI exactly like it runs locally.
- The script depends on native extensions (`cryptography`, `lxml`, `numpy`, `psycopg`) and I don’t want platform surprises.
- I’m shipping the script to someone else (or future me) and I want it runnable with one command.
- I want to lock down side effects (no random writing to my laptop except where I mount a volume).
I usually skip Docker when:
- It’s a tiny one-off on my machine and I’m not sharing it.
- I need direct access to GPU drivers or special hardware and the Docker setup time exceeds the benefit (though this can still work; it’s just more setup).
Quick-start: run a single script with a one-file Dockerfile
Let’s start with the simplest case: you have a script.py and you want it to run the same way everywhere.
1) Create a small script
Create script.py:
```python
# script.py
def add(a: int, b: int) -> int:
    return a + b

if __name__ == "__main__":
    result = add(3, 5)
    print(f"The result is: {result}")
```
Two notes from experience:
- The `if __name__ == "__main__"` guard matters if you later import this file from tests.
- I use an f-string for output; `printf` isn’t a Python function.
2) Write a minimal Dockerfile
Create Dockerfile:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY script.py /app/script.py
CMD ["python", "/app/script.py"]
```
Why I like python:3.12-slim here:
- It’s small enough for quick iteration.
- It’s Debian-based, so you have standard package availability if you later need `apt-get`.
- I pin a major/minor (`3.12`) instead of `latest` to avoid surprises when upstream updates.
3) Build the image
From the directory containing the Dockerfile:
```shell
docker build -t python-script:0.1 .
```
4) Run it
```shell
docker run --rm python-script:0.1
```
You should see:
```
The result is: 8
```
I included --rm because I rarely want stopped containers hanging around for scripts.
The even faster option: run without building your own image
When I’m in a hurry and the script has no dependencies, I sometimes run the official Python image directly and mount my working directory:
```shell
docker run --rm -it \
  -v "$PWD":/work \
  -w /work \
  python:3.12-slim \
  python script.py
```
This is a great “smoke test” pattern:
- `-v "$PWD":/work` mounts your current folder into the container.
- `-w /work` sets the working directory.
If the script grows into something you share, I graduate it to a real Dockerfile so dependencies are pinned and repeatable.
Building a proper image: dependencies, lockfiles, and repeatable installs
Most real scripts depend on third-party packages. The core Docker pattern is:
1) Copy dependency manifest(s)
2) Install dependencies
3) Copy the rest of your source
That order matters because it makes rebuilds fast: changing your app code won’t invalidate the cached dependency install layer.
Example project layout
Here’s a layout I use often:
```
python-docker-script/
  Dockerfile
  .dockerignore
  requirements.txt
  run_report.py
```
Create run_report.py:
```python
# run_report.py
import argparse
import csv
from pathlib import Path

import requests

def fetch_status(url: str, timeout_s: float) -> int:
    r = requests.get(url, timeout=timeout_s)
    return r.status_code

def main() -> None:
    p = argparse.ArgumentParser()
    p.add_argument("--url", required=True)
    p.add_argument("--out", default="/data/report.csv")
    p.add_argument("--timeout", type=float, default=5.0)
    args = p.parse_args()

    status = fetch_status(args.url, args.timeout)

    out_path = Path(args.out)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    with out_path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "status"])
        writer.writerow([args.url, status])
    print(f"Wrote {out_path} with status {status}")

if __name__ == "__main__":
    main()
```
Create requirements.txt:
```
requests==2.32.3
```
A Dockerfile I’d ship with confidence
```dockerfile
FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

# 1) Copy only dependency file(s) first
COPY requirements.txt /app/requirements.txt

# 2) Install dependencies
RUN pip install --no-cache-dir -r /app/requirements.txt

# 3) Copy the script
COPY run_report.py /app/run_report.py

# Default command
CMD ["python", "/app/run_report.py"]
```
A few choices I’m making on purpose:
- `PYTHONDONTWRITEBYTECODE=1` stops `__pycache__` noise in many scenarios.
- `PYTHONUNBUFFERED=1` makes logs show up immediately in container output (important in CI).
- `pip install --no-cache-dir` avoids keeping pip’s download cache inside the final image.
Add a .dockerignore (it’s more important than it looks)
Create .dockerignore:
```
__pycache__/
*.pyc
*.pyo
*.pyd
.venv/
.env
.git/
.DS_Store
```
If you forget .dockerignore, you may accidentally copy secrets or huge folders into the build context, and your builds will feel mysteriously slow.
Build and run with mounted output
Build:
```shell
docker build -t report-runner:0.1 .
```
Run it and write the report into a local data/ directory:
```shell
mkdir -p data
docker run --rm \
  -v "$PWD/data":/data \
  report-runner:0.1 \
  python /app/run_report.py --url https://example.com --out /data/report.csv
```
Notice what I’m doing:
- I spell out the full `python /app/run_report.py ...` command after the image name; anything you pass there replaces the default `CMD` entirely (it isn’t appended to it).
- I mount a local folder so the container can write output without baking it into the image.
Running the container: the flags you’ll use every day
Most “Docker confusion” comes from docker run because it has a lot of switches. Here are the ones I use constantly for Python scripts.
--rm: keep your machine clean
For scripts, I almost always run:
```shell
docker run --rm my-image:tag
```
This removes the container after it exits.
-it: interactive mode (useful for REPLs and prompts)
If your script prompts for input, or you want to run a Python REPL:
```shell
docker run --rm -it python:3.12-slim python
```
-v and -w: mount code and set working directory
For local development, mounting your project beats rebuilds for every code change:
```shell
docker run --rm -it \
  -v "$PWD":/app \
  -w /app \
  report-runner:0.1 \
  python run_report.py --url https://example.com
```
Here I mount fresh code over `/app` in the image that already has the dependencies installed, so edits take effect without a rebuild.
I still build an image once the script needs repeatable dependency installs or I’m sharing it.
-e: configuration via environment variables
If your script reads config from env vars:
```shell
docker run --rm \
  -e API_TOKEN=your_token_here \
  my-image:tag
```
In practice, I usually keep secrets out of shell history:
- Use an `.env` file and `--env-file .env`
- Or use your CI secret store
One habit that saves me pain: I decide early whether my script’s configuration comes from (a) CLI args, (b) env vars, or (c) a config file. Docker works with any of them, but I try not to mix all three unless there’s a reason.
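When env vars win, I like making the precedence explicit in code. A minimal sketch (the `API_URL` variable name is just an example, not from the script above): the CLI flag beats the environment variable, which beats the hard-coded default.

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # Precedence: CLI flag > environment variable > hard-coded default.
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--api-url",
        default=os.environ.get("API_URL", "https://example.com"),
    )
    return parser

# A CLI flag always wins over the environment.
args = build_parser().parse_args(["--api-url", "https://override.example"])
print(args.api_url)  # → https://override.example
```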
--name: stable container names when you need them
For one-shot scripts I skip this, but it’s handy for debugging:
```shell
docker run --name report_debug --rm report-runner:0.1 \
  python /app/run_report.py --url https://example.com
```
--entrypoint: override the default command
If your image’s CMD is your script but you want a shell inside the same environment:
```shell
docker run --rm -it --entrypoint sh report-runner:0.1
```
That’s my favorite way to debug missing files, wrong paths, and dependency issues.
A quick note on CMD vs ENTRYPOINT
This trips people up, so here’s my mental model:
- `CMD` is the default command (or the default arguments to an `ENTRYPOINT`). It’s easy to override.
- `ENTRYPOINT` is the “main executable.” It’s harder to override, and arguments you pass at `docker run` time are appended to it.
For scripts, I usually stick with a simple `CMD ["python", "/app/run_report.py"]` and remember that anything after the image name at `docker run` time replaces `CMD` wholesale. If I want appended-argument behavior instead, I switch to `ENTRYPOINT`.
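If you do want appended-argument behavior, the common pattern is an `ENTRYPOINT` for the executable plus a `CMD` holding only the default arguments. A Dockerfile sketch:

```dockerfile
# ENTRYPOINT fixes the executable; CMD supplies default arguments.
# `docker run image --url https://other.example` replaces only the CMD part,
# so your flags are effectively appended to the entrypoint.
ENTRYPOINT ["python", "/app/run_report.py"]
CMD ["--url", "https://example.com"]
```

With this shape, `docker run report-runner:0.1 --url https://other.example` does what most people expect.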
Developer workflow in 2026: fast rebuilds, cache mounts, Compose, and devcontainers
In 2026, I care about two things when I use Docker for Python:
- Rebuild speed (I don’t want a 60-second rebuild for a one-line change)
- Repeatability (the environment should match CI and teammates)
Here are the patterns I keep coming back to.
Traditional vs modern Docker workflow (what I actually recommend)
| Traditional approach | Modern approach |
| --- | --- |
| Install Python + pip locally | Run the official image: `docker run --rm -v "$PWD":/work -w /work python:3.12-slim python script.py` |
| `pip install -r requirements.txt` on the host | Pinned `pip install -r requirements.txt` inside a Docker layer |
| Rebuild image every edit | Bind-mount code; rebuild only when dependencies change |
| Hand-run local services | `docker compose` with service dependencies |
| “Works on my machine” tooling | Reproducible images shared across the team |
Speeding up dependency installs with BuildKit cache mounts
If you build a lot, pip downloads can dominate. With BuildKit, you can cache pip’s downloads between builds.
A Dockerfile snippet:
```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r /app/requirements.txt
COPY run_report.py /app/run_report.py
CMD ["python", "/app/run_report.py"]
```
This doesn’t change runtime behavior; it mainly makes repeated builds faster.
Compose when your script needs another service
If your script talks to Postgres, Redis, or a local API, I reach for docker compose so everything starts with one command.
Example compose.yaml:
```yaml
services:
  report:
    build: .
    command: ["python", "/app/run_report.py", "--url", "https://example.com"]
  redis:
    image: redis:7-alpine
```
Then run:
```shell
docker compose run --rm report
```
Even if your script doesn’t need Redis today, this becomes valuable the moment you add “just one more dependency service.”
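When the script actually talks to Redis, I add the wiring explicitly. A sketch assuming the script reads a `REDIS_URL` environment variable (that variable name is my convention, not something Compose provides):

```yaml
services:
  report:
    build: .
    depends_on:
      - redis
    environment:
      # Inside the Compose network, the service name doubles as the hostname.
      REDIS_URL: redis://redis:6379/0
  redis:
    image: redis:7-alpine
```

`depends_on` controls start order; your script should still retry connections, since "started" isn’t the same as "ready".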
Dev Containers: consistent tooling across a team
If you use an editor that supports dev containers, you can pin:
- Python version
- CLI tools
- linters/formatters
- system packages
This reduces “my linter behaves differently” problems. I treat it as part of the project setup when the script is long-lived or shared.
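A minimal `.devcontainer/devcontainer.json` sketch (the extension ID is illustrative; pick whatever your team standardizes on):

```json
{
  "name": "report-runner",
  "image": "python:3.12-slim",
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```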
AI-assisted workflows that fit well with Docker
I’m not talking about magic; I’m talking about boring, useful automation:
- Generate a first-pass Dockerfile from `requirements.txt`, then I review it for security and caching.
- Generate a `.dockerignore` based on common Python artifacts.
- Add a `make run` / `make build` wrapper so humans don’t memorize long commands.
I still review every generated Dockerfile because small mistakes (like copying secrets into the image) can become real problems.
Choosing the right base image (and why it matters)
A Dockerfile starts with FROM ..., and that choice affects size, compatibility, and how painful native dependencies become.
python:slim vs python:alpine
I default to python:3.12-slim for scripts I want to “just work.” Alpine looks attractive because it’s tiny, but it uses musl instead of glibc, which can complicate native wheels and system libraries. In practice:
- If you depend on packages with compiled extensions, Debian/Ubuntu-based images (`slim`) are often smoother.
- If you truly need minimum size, Alpine can be great, but expect to install build tools and troubleshoot more.
Pinning versions: predictable now, safer later
I try to avoid python:latest for the same reason I avoid unconstrained pip install in production: it’s a surprise generator.
My typical approach:
- Pin `python:3.12-slim` (or `3.11-slim`) for stability.
- If the script is critical, consider pinning the image by digest (that’s as reproducible as it gets).
Multi-arch reality (Apple Silicon, ARM servers, CI)
If you’re on an ARM machine (like Apple Silicon) and you build images locally, they may be ARM images by default. That’s usually fine, until you run them somewhere that expects x86_64. If you hit weird “exec format error” failures, this is often the cause.
A practical habit: if you know your deployment target (like x86_64 CI runners), build with an explicit platform.
```shell
docker build --platform linux/amd64 -t report-runner:0.1 .
```
I don’t do this for every script, but I do it when “build here, run there” matters.
Handling inputs and outputs like an adult (files, volumes, and reproducibility)
Python scripts rarely live in a vacuum. They read inputs (files, URLs, databases) and produce outputs (reports, artifacts). Docker changes how you should think about that.
The golden rule: treat the container filesystem as ephemeral
If you write output to /app/output.csv inside the container and don’t mount a volume, that output is going to disappear when the container is removed. That’s not a bug—that’s a feature.
So I ask myself:
- Do I want output persisted? If yes, mount a host directory (`-v`) or use a named volume.
- Do I want output baked into an image? Almost never for one-off scripts.
Simple pattern: mount /data for both input and output
I use /data as a convention.
- Inputs: mount a folder that contains input files
- Outputs: write into that same mounted folder
Example:
```shell
docker run --rm \
  -v "$PWD/data":/data \
  report-runner:0.1 \
  python /app/run_report.py --url https://example.com --out /data/report.csv
```
If the script reads input files, I prefer absolute container paths (/data/input.csv) over relative paths. Relative paths work too, but absolute paths eliminate a whole class of “where am I?” bugs.
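A small guard I sometimes add so path mistakes fail loudly at startup rather than deep inside the script (the `resolve_input` helper is my own sketch, not part of the article’s script):

```python
from pathlib import Path

def resolve_input(path_str: str) -> Path:
    # Insist on absolute container paths so "where am I?" bugs surface immediately.
    path = Path(path_str)
    if not path.is_absolute():
        raise ValueError(f"expected an absolute container path, got {path_str!r}")
    return path

print(resolve_input("/data/input.csv"))  # → /data/input.csv
```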
When to use bind mounts vs named volumes
- Bind mount (`-v "$PWD/data":/data`): best for local dev and when humans need the files.
- Named volume (`-v report_data:/data`): best when you don’t care where Docker stores it, or when Compose is orchestrating the workflow.
For most script workflows, bind mounts are simplest and most transparent.
Installing dependencies the repeatable way (and avoiding dependency drift)
requirements.txt works, and it’s still common. But reproducibility improves a lot when you treat dependencies as a first-class artifact.
Option A: requirements.txt (good baseline)
If you use requirements.txt, pin versions (requests==2.32.3, not requests). This prevents “it worked yesterday” updates.
If you have many dependencies, I like splitting:
- `requirements.in` (top-level direct deps)
- `requirements.txt` (fully pinned, resolved deps)
Then I can regenerate pins when I choose, not when the internet chooses.
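A sketch of the split, assuming you use a pin-compiling tool such as pip-tools (other resolvers follow the same in/out shape):

```
# requirements.in — direct dependencies only, loosely constrained
requests>=2.32

# requirements.txt is then generated and fully pinned, e.g. with:
#   pip-compile requirements.in -o requirements.txt
```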
Option B: pyproject.toml (modern packaging)
If your script grows into a small tool, pyproject.toml can be cleaner. It also integrates well with lockfiles.
The Docker pattern stays the same:
1) Copy pyproject.toml and lockfile
2) Install
3) Copy source
The key isn’t the tool—it’s the layer caching and pinning.
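A minimal `pyproject.toml` sketch for a script-sized tool (the project name and version are illustrative):

```toml
[project]
name = "report-runner"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
  "requests==2.32.3",
]
```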
One more tip: keep build tools out of runtime when possible
If your dependency install requires compilers and headers, it can bloat your image. For scripts, it’s not always worth optimizing, but when I care about size/security, I use a multi-stage build (more on that below).
Production-style Dockerfiles: non-root, smaller images, and fewer surprises
For a quick script, a single-stage Dockerfile is fine. But if you’re going to share it widely (or run it regularly in CI), a few hardening steps pay off.
Run as a non-root user
Many official images run as root by default. That’s convenient, but I prefer a non-root user for scripts that process untrusted inputs or might be run in shared environments.
A simple pattern:
```dockerfile
FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

# Create an unprivileged user
RUN useradd -m -u 10001 appuser

COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
COPY run_report.py /app/run_report.py

USER appuser
CMD ["python", "/app/run_report.py"]
```
If you bind-mount a host folder and get permissions errors, that’s the trade-off. In dev, you can temporarily run as root or align UIDs. In shared/CI environments, non-root is usually the right default.
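In dev, one way to align IDs is overriding the container user at run time. A sketch (this works here because running the script needs no root privileges inside the container):

```shell
# Run the container as your host UID/GID so bind-mounted output stays writable by you.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/data":/data \
  report-runner:0.1 \
  python /app/run_report.py --url https://example.com --out /data/report.csv
```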
Multi-stage builds (when native deps are heavy)
If you install build-essential tools to compile dependencies, your final image can get large. A multi-stage build lets you compile/build in one stage and copy only what you need into the final stage.
Here’s a conceptual template:
```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim AS builder
WORKDIR /app
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt /app/requirements.txt
RUN pip wheel --wheel-dir /wheels -r /app/requirements.txt

FROM python:3.12-slim
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
COPY --from=builder /wheels /wheels
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --no-index --find-links=/wheels -r /app/requirements.txt \
    && rm -rf /wheels
COPY run_report.py /app/run_report.py
CMD ["python", "/app/run_report.py"]
```
Why I like this:
- The final image doesn’t need compilers.
- Builds can be more deterministic because wheels are built once.
- You reduce the attack surface of the runtime image.
Do you need this for every script? No. But if you’re repeatedly running a script in CI or distributing it, it’s a strong pattern.
Make it easy to run: thin wrappers around Docker commands
One reason Docker “feels hard” is that humans don’t want to memorize long commands. I don’t either.
A tiny Makefile (or just a few documented commands) makes a script feel like a real tool.
Example:
```makefile
IMAGE := report-runner:0.1

build:
	docker build -t $(IMAGE) .

run:
	mkdir -p data
	docker run --rm -v "$(PWD)/data":/data $(IMAGE) \
		python /app/run_report.py --url https://example.com --out /data/report.csv

shell:
	docker run --rm -it --entrypoint sh $(IMAGE)
```
The practical value here is huge: you can tell someone “run make build and make run” instead of giving them a paragraph.
When things break: the errors I see most and how I fix them
Docker is predictable, but the failure modes are different from local Python. These are the issues I see constantly.
1) “ModuleNotFoundError” inside the container
Symptom:
```
ModuleNotFoundError: No module named 'requests'
```
What happened: You didn’t install dependencies in the image.
Fix:
- Add
requirements.txt(orpyproject.toml) to your project. - Copy it into the image before the rest of the code.
- Install it with pip.
Also check you’re running the script in the same image you built (tags matter).
2) Dockerfile typos and invalid instructions
Symptom:
```
unknown instruction: RUNNN
```
What happened: Dockerfile instructions are strict.
Fix:
- Re-check the instruction spelling.
- Keep Dockerfile lines short and readable.
- If your editor supports Dockerfile linting, turn it on.
3) Your script can’t find files (path issues)
Symptom:
```
FileNotFoundError: [Errno 2] No such file or directory: 'input.csv'
```
What happened: The container working directory isn’t what you assumed, or you didn’t copy the file.
Fix:
- Set `WORKDIR` in the Dockerfile.
- Use absolute paths inside the container (`/data/input.csv`).
- Mount input files with `-v`.
Example:
```shell
docker run --rm \
  -v "$PWD/input.csv":/data/input.csv \
  my-image:tag \
  --input /data/input.csv
```
4) “Permission denied” when writing output
Symptom:
```
PermissionError: [Errno 13] Permission denied: '/data/report.csv'
```
What happened: The container user can’t write to the mounted directory.
Fix options (pick one):
- Run as a user that matches your host UID/GID (common in dev environments).
- Change permissions on the host folder.
- Write to a different mounted location.
For scripts I share broadly, I often keep it simple and document “mount a writable folder at /data.”
5) No output appears until the container exits
Symptom: You expect logs in real time, but they show up late.
What happened: Python stdout buffering.
Fix:
- Set `ENV PYTHONUNBUFFERED=1` in the Dockerfile, or
- Run with `python -u script.py`.
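A third option is flushing explicitly at the call sites you care about. A sketch (the `log` helper is mine):

```python
import sys

def log(msg: str) -> None:
    # flush=True pushes each line out immediately, even when stdout
    # is block-buffered because it's a pipe rather than a TTY.
    print(msg, file=sys.stdout, flush=True)

log("fetching https://example.com ...")
log("wrote /data/report.csv")
```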
6) Builds are slow even for tiny changes
What happened: The Dockerfile copy order defeats caching, or your build context is huge.
Fix:
- Copy dependency manifests first, install, then copy your code.
- Add a `.dockerignore` so you’re not sending `.git/`, `data/`, or large artifacts as build context.
- Use BuildKit cache mounts for pip downloads.
As a sanity check, I sometimes run a build after touching only the script file. If it still reinstalls dependencies, my Dockerfile layering is wrong.
7) “exec format error” (architecture mismatch)
Symptom:
```
exec /usr/local/bin/python: exec format error
```
What happened: You built an image for one architecture (like ARM) and tried to run it on another (like x86_64).
Fix:
- Build with `--platform` targeting where you’ll run it.
- If you use CI, build in CI for the same target you deploy.
8) HTTPS or certificates fail in containers
Symptom: Requests fail with SSL errors.
What happened: Some minimal images can have missing CA certificates, or you’re intercepting TLS with a corporate proxy.
Fix:
- Ensure CA certs exist in the image (Debian slim images usually include them, but not always in custom bases).
- If your environment uses custom root CAs, mount them and configure the runtime accordingly.
I treat this as an environment concern, not a Python concern, but it shows up as Python errors.
9) Timezone / locale surprises
Symptom: Timestamps look “wrong,” CSV output differs, sorting behaves oddly.
What happened: Containers default to UTC and minimal locale settings.
Fix:
- Use timezone-aware datetimes in Python.
- If you truly need a specific timezone, set it intentionally via env vars or install tzdata.
My bias: keep the container in UTC, and convert at the edges.
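Keeping UTC inside and converting at the edges looks like this. A sketch using a fixed offset for brevity; real applications would use `zoneinfo` (which may need the `tzdata` package inside slim containers):

```python
from datetime import datetime, timedelta, timezone

# Store, compute, and log in UTC...
now_utc = datetime.now(timezone.utc)

# ...convert only at the edges, when presenting to a user.
cet = timezone(timedelta(hours=1))  # fixed-offset stand-in for a real zone
local_view = now_utc.astimezone(cet)

print(now_utc.isoformat())
print(local_view.isoformat())
```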
Practical scenarios (how I decide which Docker approach to use)
Here’s how I choose, in plain terms.
Scenario 1: I’m prototyping
- Run the official Python image.
- Bind-mount my folder.
- Don’t build a custom image yet.
Command style:
```shell
docker run --rm -it -v "$PWD":/work -w /work python:3.12-slim python script.py
```
Scenario 2: I’m sharing the script with a teammate
- Add `Dockerfile` and `.dockerignore`.
- Pin Python minor version.
- Pin package versions.
- Add a documented run command (or a Makefile target).
Scenario 3: The script is part of CI
- Build an image in CI.
- Run it with explicit inputs/outputs.
- Use `PYTHONUNBUFFERED=1` so logs stream.
Scenario 4: The script depends on Postgres/Redis
- Use Compose.
- Put all services in `compose.yaml`.
- Use `docker compose run --rm` for the one-shot job.
Scenario 5: The script runs regularly (cron-like)
Docker can still work well, but I like to design for idempotency:
- If the script is run twice, results shouldn’t corrupt.
- Use stable output locations.
- Ensure exit codes reflect success/failure.
In other words, Docker solves the environment; you still need operational discipline.
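The exit-code discipline is cheap to add. A sketch of the shape I use (the `run` wrapper is my convention; in the real script the last two lines live under the `if __name__ == "__main__":` guard as `sys.exit(run())`):

```python
import sys

def run() -> int:
    # Return 0 on success, non-zero on failure, so cron/CI can react to it.
    try:
        ...  # the actual work goes here
        return 0
    except Exception as exc:  # top-level catch is deliberate for a script entry point
        print(f"error: {exc}", file=sys.stderr)
        return 1

exit_code = run()
print(f"exit code: {exit_code}")  # → exit code: 0
```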
Performance considerations (realistic expectations)
I like to be honest about where Docker helps and where it doesn’t.
What gets faster
- “Works everywhere” debugging time goes down a lot.
- Rebuilds can be fast if your Dockerfile caches dependencies properly.
- CI becomes more predictable.
What can get slower
- The first build is usually slower than running locally.
- Pulling large base images can be slow on a fresh machine.
- On some systems, file mount performance can be slower than native (especially with huge codebases).
When performance matters, I optimize in this order:
1) .dockerignore (cheap win)
2) Copy/install order (cache correctness)
3) BuildKit pip cache mounts
4) Multi-stage builds (smaller runtime)
Alternative approaches (and why I still like Docker)
Docker isn’t the only route.
Virtual environments (venv)
Great for local work, but they don’t package system libraries or OS dependencies, and they don’t travel as cleanly across platforms.
Python zipapp / single-file packaging
Cool for distribution, but it’s not a full environment solution. If your script needs system libs, you’re back in dependency land.
Containers without Docker
There are other container runtimes and orchestration options. The reason I stick with Docker for scripts is the ecosystem: it’s widely installed, well-documented, and integrates nicely with CI.
A reusable template I start from
When I begin a new “real” script that I expect to share, I usually start with this minimal checklist:
- `Dockerfile` with pinned `python:3.12-slim`
- `.dockerignore`
- `requirements.txt` with pinned versions (or `pyproject.toml` + lockfile)
- Script uses `argparse` with explicit input/output paths
- Default output goes to `/data/...`
- Documented `docker build` and `docker run` commands
That set of habits turns “a script” into “a tool.”