Managing Python Dependencies

The first time a Python deployment failed on me, it wasn’t because of my code. It was because two libraries quietly pulled in different versions of the same dependency, and my production image was missing a native wheel that existed on my laptop. I learned the hard way that dependency management isn’t a housekeeping chore—it is part of your system design. If you treat it like a checklist item, you’ll pay for it later in broken builds, fragile CI, and confusing “works on my machine” conversations.

I’ll walk you through the practical workflow I use today: how I install and pin packages with pip, when I isolate environments, and how I pick tools like pipenv, Poetry, or Conda based on project risk. I’ll also show how I think about lockfiles, supply‑chain safety, and install performance, with simple examples you can run immediately. If you keep your dependencies boring and predictable, you get to move faster everywhere else.

Why Dependencies Fail in Real Projects

Dependencies fail for the same reasons supply chains fail in the physical world: missing parts, mismatched parts, or parts that arrive too late. Your Python app is an assembly line. A dependency is a part, and your runtime is the factory. A small change in any part can cause a big change in the final product.

In my experience, these are the most common failure modes:

  • Hidden transitive drift: You pin requests==2.31.0, but a transitive dependency upgrades under you and changes behavior. The bug shows up weeks later when you rebuild in CI.
  • Environment skew: You use Python 3.12 locally, the server is still on 3.10, and a dependency chooses a different wheel. The same code now imports different compiled extensions.
  • Platform differences: A dependency with native code works on macOS but needs a different build on Alpine Linux. If your lockfile only encodes versions, not platforms, you still get a failure.
  • Overlapping global installs: A global pip install shadows a project dependency, and your app behaves differently depending on shell or PATH.
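
Before chasing any of these, I run a cheap consistency check. This sketch assumes nothing beyond a working pip: pip check walks the installed packages and reports conflicting or missing requirements.

```shell
# Confirm which pip/interpreter pair you are using.
python -m pip --version
# pip check exits nonzero and prints details if any installed package
# has conflicting or missing requirements.
python -m pip check || echo "conflicts found; inspect the lines above"
```

Run it in every environment you care about; a clean result on your laptop says nothing about CI.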

I treat dependency management like packing for a trip. If you keep a shared suitcase for every trip, you’ll forget what belongs to which destination. Isolated environments are separate suitcases; lockfiles are the packing list; hashes are the luggage tags that tell you it’s the exact same bag.

Start with pip the Right Way

pip is the foundation. It is also easy to misuse. I always begin by making pip work for me, not against me.

First, I confirm the pip that is tied to the Python I intend to use. On machines with multiple Python versions, pip can silently point at the wrong interpreter.

python -m pip --version

When I install packages, I prefer python -m pip so I don’t accidentally call a different pip on PATH:

python -m pip install requests

For simple scripts, I might stop there. But the moment a project has more than one package or needs to run on another machine, I create a requirements file and pin versions:

python -m pip freeze > requirements.txt

A basic requirements.txt should look like this:

requests==2.31.0

urllib3==2.2.1

charset-normalizer==3.3.2

idna==3.7

I keep one rule: pin direct dependencies, and decide explicitly how to treat transitive ones. If you want tight reproducibility, you can pin everything. If you want less churn, pin only direct dependencies and use a lockfile tool to manage the rest. I’ll show both paths later.

When I remove a dependency, I do not trust manual cleanup. I regenerate the file and compare. This avoids stale packages that linger for months and get dragged into production images.
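
A sketch of that regenerate-and-compare step (paths here are illustrative, and the install step needs network access):

```shell
# Rebuild a clean environment from requirements.txt, then compare the
# resulting freeze output against the committed file. Any difference
# means a stale or missing package.
python -m venv /tmp/clean-venv
/tmp/clean-venv/bin/python -m pip install -r requirements.txt
/tmp/clean-venv/bin/python -m pip freeze > /tmp/actual.txt
diff requirements.txt /tmp/actual.txt && echo "environment matches the file"
```

If the diff is noisy, that noise is exactly the drift you want to eliminate.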

Finally, I avoid global installs for app dependencies. They are fine for tools like pipx, linters, or formatter CLIs, but not for libraries that your project imports.

Virtual Environments as Contracts

I treat a virtual environment as a contract: “this project runs with exactly these packages.” The contract is local and explicit. That means I can have two projects with conflicting needs without global side effects.

For a lightweight workflow, Python’s built‑in venv is enough:

python -m venv .venv

source .venv/bin/activate

python -m pip install -r requirements.txt

The .venv directory is a per‑project boundary. I keep it out of version control, but I commit the dependency files that describe it. If you’re on Windows, run .venv\Scripts\activate instead of the source command, but the idea is the same.

I also set a small contract in pyproject.toml or setup.cfg describing the Python version range. That way, automated tooling can reject an environment that is too old or too new.

Here’s a minimal pyproject.toml snippet that establishes a Python version and a dependency list:

[project]

name = "inventory-api"

version = "0.1.0"

requires-python = ">=3.11,<3.13"

dependencies = [

"requests>=2.31,<3.0",

"pydantic>=2.6,<3.0"

]

Why call this a contract? Because it tells every developer, every CI runner, and every deployment image what is allowed. If you violate the contract, the project should fail fast. That is a good failure.

Lockfiles and Reproducibility: Pipenv and Poetry

When a project has real risk—production services, data pipelines, client deliverables—I move beyond a basic requirements.txt and adopt a lockfile workflow. The lockfile is where reproducibility lives. It is the difference between “these are the kinds of packages we need” and “these are the exact packages that built the app.”

pipenv is a practical bridge if your team already uses pip and venv. It gives you a Pipfile for intent and a Pipfile.lock for precision:

pipenv install requests

pipenv install --dev pytest

pipenv lock

A Pipfile expresses intent:

[packages]

requests = ">=2.31,<3.0"

[dev-packages]

pytest = ">=8.0,<9.0"

And the lockfile pins versions and hashes. I treat the lockfile as sacred: if it changes, I review it the way I review source code.

Poetry is the next step when packaging matters or when I want a single tool to manage build, publish, and dependency resolution. It uses pyproject.toml as the source of truth and generates poetry.lock.

poetry init

poetry add fastapi

poetry add --group dev pytest

poetry lock

If you’re deciding between pipenv and Poetry, I use this rule: if the project is a library or needs distribution, I choose Poetry; if it is an internal app with minimal packaging needs, pipenv is enough. The key is consistency. A mixed environment with two tools is a recipe for confusion.

Here is a simple script you can run under any of these tools:

import numpy as np

# Simple sanity check for a dependency

arr = np.array([1, 2, 3, 4, 5])

print('Sum:', np.sum(arr))

If you can run that script on your laptop and on a clean CI runner with the same output, you have the beginnings of reproducibility.

Conda and Mixed‑Language Stacks

There are scenarios where pip is not enough. If you are dealing with compiled scientific packages, CUDA, or mixed languages, Conda is often the safer path. I use Conda when I need to manage non‑Python dependencies like BLAS libraries, system‑level compilers, or GPU drivers.

A typical Conda workflow for a data project looks like this:

conda create -n analytics python=3.11

conda activate analytics

conda install numpy pandas scikit-learn

I still use pip inside Conda environments, but with caution. I install as much as possible through Conda channels first, then use pip for what Conda doesn’t provide. That reduces the chances of ABI mismatches.

When I need portability, I export the environment:

conda env export > environment.yml

That file becomes the contract, similar to requirements.txt or a lockfile. It captures both Python and non‑Python dependencies. If your project needs consistent numerical behavior or GPU compatibility, Conda is usually the correct choice.
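
For reference, an exported environment.yml looks roughly like this. Package names and versions here are illustrative; note the nested pip section for anything Conda channels didn’t provide:

```yaml
name: analytics
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy=1.26.4
  - pandas=2.2.1
  - pip
  - pip:
      - some-internal-client==1.4.0   # hypothetical pip-only package
```

Anyone on the team can then recreate the environment with conda env create -f environment.yml.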

Choosing Tools and Workflows in 2026

I’ve watched teams struggle because they tried to apply the same dependency approach to every project. Here is how I frame the tradeoffs today, with the kind of metrics I see in practice on medium‑sized services and data pipelines:

Approach                          | Typical cold install | Rebuild stability | Best fit                       | Tooling overhead
pip + requirements.txt            | 20–60s               | Medium            | small scripts, simple apps     | Low
pip + lockfile tool (pip‑tools)   | 40–90s               | High              | services, CI‑heavy repos       | Medium
pipenv                            | 45–120s              | High              | app teams, moderate packaging  | Medium
Poetry                            | 60–150s              | Very high         | libraries, multi‑package repos | Medium‑High
Conda                             | 90–240s              | High              | data science, GPU stacks       | High

I recommend pip plus a lockfile tool for most backend services, Poetry for libraries, and Conda for heavy numerical work. That is not a neutral answer; it is a biased one based on where I see failures most often. When you need a single sentence: default to pip with a lockfile, unless packaging or non‑Python deps are first‑class concerns.

One more tool I rely on is constraints files. If I have a shared platform team, I define a constraints.txt that pins organization‑approved versions, then allow each repo to list its own requirements. This gives you a stable floor across dozens of services without forcing every team into the same lockfile. It also helps when you need to roll out a security fix across many apps at once. A constraints file is like a company policy: it doesn’t tell you exactly what to buy, but it tells you what you’re allowed to buy. If you combine constraints with hash‑checked installs, you get both guardrails and determinism. The learning curve is small, and the payoff is a lot of stability for very little process.
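
In pip terms, the policy is applied with the -c flag. The constraints file caps versions but never installs anything by itself; a minimal sketch, assuming a repo-level requirements.txt and an org-level constraints.txt:

```shell
# constraints.txt pins org-approved versions; requirements.txt lists what
# this service actually needs. pip installs only what requirements asks
# for, but every resolved version must satisfy the constraints.
python -m pip install -r requirements.txt -c constraints.txt
```

A package that appears only in constraints.txt is never installed, which is exactly the "allowed to buy, not required to buy" behavior.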

Here is the workflow I follow for most service‑oriented Python repos today. It is a balance of speed, safety, and team sanity:

1) Create a project environment

python -m venv .venv

source .venv/bin/activate

python -m pip install -U pip

2) Declare dependencies with intent

I add direct dependencies with version ranges in pyproject.toml or requirements.in (if I use a lockfile tool). Example requirements.in:

fastapi>=0.110,<1.0

uvicorn>=0.29,<1.0

pydantic>=2.6,<3.0

3) Resolve and lock

With pip-tools, I lock everything into requirements.txt:

pip-compile requirements.in

pip-sync requirements.txt

This gives me a fully pinned set of dependencies and keeps the environment aligned. If a team already uses Poetry, I let Poetry own the lockfile and avoid mixing tools.

4) Automate updates

I schedule dependency updates weekly or monthly depending on risk. Automated tooling creates a PR with an updated lockfile, and I run the test suite before merging. I also ask an AI assistant to draft the release notes for the dependency bump, which keeps review time low without hiding the changes.

5) CI enforcement

The CI job installs from the lockfile only. If the lockfile is missing, the build fails. I want failures early and loud.
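
A hedged sketch of that guard as a CI step (filenames and step layout are illustrative; this example uses GitHub Actions syntax):

```yaml
- name: Install locked dependencies
  run: |
    # Fail fast if the lockfile is missing; never fall back to a loose install.
    test -f requirements.txt || { echo "lockfile missing; refusing to install" >&2; exit 1; }
    python -m pip install --require-hashes -r requirements.txt
```

The point is that the failure happens at install time, with a clear message, not halfway through a deploy.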

6) Production install

Production images install only runtime dependencies. If I have dev tools like linters or test frameworks, they live in separate groups or files.

This is not the only workflow, but it is the one that reliably keeps my builds reproducible without adding too much friction.

How I Think About Version Constraints

When I read a dependency file, I want to know the intent and the safety margin. I use version ranges to convey intent and lockfiles to guarantee what actually gets installed. The combination lets me move forward without roulette‑style upgrades.

Here are the patterns I use most:

  • Exact pin: requests==2.31.0 for lockfiles and production installs.
  • Compatible release: requests~=2.31 when I want bug fixes but not breaking changes.
  • Safe range: pydantic>=2.6,<3.0 when I know major versions break.
  • Lower bound only: fastapi>=0.110, for internal tools where reproducibility is not required.

The mistake I see often is treating “minor” as “safe.” Some ecosystems are stable; some are not. When I am unsure, I pin and update on purpose. That sounds boring, and boring is good in dependency management.

I also avoid “caret” specs in plain requirements files because they mean different things across tools. What is crystal clear in Poetry (^2.31) is easy to misread in pip constraints. My rule is simple: if I’m using pip, I stick to explicit lower and upper bounds or exact pins.

Requirements.in + pip-tools: My Default Workhorse

If I could only keep one dependency workflow for services, I would keep pip-tools. It gives me an intentional list and a deterministic lock with low ceremony.

I keep two files:

  • requirements.in for direct dependencies
  • requirements.txt for a fully resolved lock

Example requirements.in:

fastapi>=0.110,<1.0

uvicorn>=0.29,<1.0

pydantic>=2.6,<3.0

orjson>=3.10,<4.0

Then I compile:

pip-compile --generate-hashes --resolver=backtracking requirements.in

The generated requirements.txt includes every transitive dependency and hashes for integrity. I like hashes because they force a deterministic install and block surprise package swaps. When I install:

python -m pip install --require-hashes -r requirements.txt

That command is strict. If a package version changes or a file doesn’t match the hash, the install fails. I want that failure, because it means something changed outside my control.

If I need development dependencies, I keep a separate file like requirements-dev.in that includes -r requirements.in and then dev‑only tools. That keeps prod installs clean while letting devs and CI pull testing tools.
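
A sketch of that dev file, with hypothetical pins:

```
# requirements-dev.in
-r requirements.in          # everything production needs
pytest>=8.0,<9.0
mypy>=1.8,<2.0
ruff>=0.3,<1.0
```

Compiling it with pip-compile produces a requirements-dev.txt that CI installs; production never sees it.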

Lockfiles Across Platforms: The Real Edge Case

The hardest edge case I deal with is platform‑specific resolution. A lockfile created on macOS can be subtly wrong for Linux. Wheels, C‑extensions, and optional dependencies create different dependency trees depending on platform and Python version.

Here is how I handle it:

  • For small services, I compile locks inside a Linux container that matches production.
  • For multi‑platform libraries, I use multiple lockfiles, one per target environment (requirements-linux.txt, requirements-macos.txt).
  • For Poetry, I test installation on each platform I claim to support because poetry.lock is universal but installs can diverge with markers.

If you ignore this, you get the classic “works on my machine” failure because your lockfile is not actually the lockfile your server needs. I’ve seen this bite teams that developed on macOS and deployed to Alpine Linux, which often requires musllinux wheels or source builds.

Dependency Groups: Keeping Runtime Lean

Separating runtime from development tools is a quiet but huge win. It reduces image size, speeds install, and limits security surface.

I do this in every tool:

  • pip-tools: a requirements-dev.in that includes the base requirements.
  • pipenv: [dev-packages] section.
  • Poetry: dependency groups like --group dev or --without dev for production.
  • Conda: separate environment files or optional features.

When I build production images, I install only runtime dependencies. When I run CI, I install both. This seems obvious, but I’ve seen teams ship pytest, black, and mypy in production images for years without noticing.

Editable Installs and Local Packages

Editable installs (pip install -e .) are convenient during development, but they hide issues in packaging and distribution. I use them for local development only, and I always verify that a clean install works.

My quick check looks like this:

1) Build a fresh virtual environment.

2) Install the package normally (pip install . or poetry build then pip install dist/*.whl).

3) Run a minimal import test.

If that fails, I fix packaging metadata early, not the day before release. Most packaging bugs are simple: missing package data, misconfigured pyproject.toml, or an incorrect src/ layout.

Private Indexes and Internal Dependencies

As soon as a team starts sharing internal libraries, I avoid pip install ../path in production. I want a real package registry, even if it’s private. A private index gives me versioned releases, and it allows dependency resolution to behave like it does for public packages.

I typically do one of these:

  • Push internal packages to a private PyPI (Artifactory, Nexus, or a managed index).
  • Use a Git repository dependency with pinned tags or commit hashes (only if releases are small and rare).
  • Vendor a tiny internal package directly inside the service if it’s truly small and stable.

My rule: if multiple services depend on it, it should be versioned and published. Otherwise, dependency updates become a coordination problem instead of a simple version bump.

Deterministic Installs in CI and Docker

Reproducibility is not only a local developer concern; it’s a build pipeline concern. I keep CI installs deterministic and fast with a few habits:

  • Use the lockfile only. If the lockfile is missing, CI fails.
  • Cache wheels. A wheel cache turns minute‑long installs into seconds after the first run.
  • Pin Python versions. I avoid “latest” Python in CI to prevent surprise incompatibilities.

A minimal Docker flow I use for services:

1) Build in a container that matches production.

2) Install dependencies before copying source to leverage Docker layer caching.

3) Install only runtime dependencies in the final image.

Here is a simplified example of how I order steps in a Dockerfile (expressed as a sequence, not a full file):

  • Copy requirements.txt and install dependencies.
  • Copy application code.
  • Run the app.

That order keeps dependency layers cached when code changes, which often saves tens of seconds per rebuild.
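
As a sketch, that sequence maps onto a Dockerfile like this (image tag, paths, and the module name in CMD are illustrative):

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer stays cached until the
# lockfile itself changes.
COPY requirements.txt .
RUN python -m pip install --no-cache-dir -r requirements.txt

# Copy application code last; code edits no longer invalidate the
# dependency layer above.
COPY . .

CMD ["python", "-m", "app"]
```

If you edit only application code, the RUN layer is reused and rebuilds skip the install entirely.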

When Not to Over‑Engineer

I like solid workflows, but I don’t like adding process for its own sake. There are times when simple is better:

  • A one‑off script that you will run once.
  • A quick prototype that is explicitly disposable.
  • A teaching example where simplicity matters more than reproducibility.

In those cases, I still use a virtual environment, but I don’t force a full lockfile workflow. The key is to decide intentionally. I avoid “accidental simplicity,” where a script grows into a product without any dependency discipline.

Edge Cases That Bite Hard

Here are the edge cases I plan for because I’ve been burned by them:

1) Optional dependencies

Some packages change behavior if optional dependencies are present (for example, extra JSON libraries or speedups). This can create inconsistent behavior across machines. If I rely on an optional extra, I install it explicitly (e.g., uvicorn[standard] or requests[socks]). I never assume it will appear indirectly.

2) Build isolation and PEP 517

Modern packaging builds happen in isolated environments. That’s good, but it means build dependencies are not the same as runtime dependencies. If a package builds on your laptop but fails in CI, it often means a missing build dependency or a system package. I treat build failures as a sign that I need to document system requirements or select a different wheel‑friendly dependency.

3) Compiled extensions

Anything that compiles (cryptography, lxml, pandas, etc.) can be fragile across platforms. I prefer wheels and avoid source builds in CI. If a project requires source builds, I bake compilers and headers into the build image so the install is deterministic.

4) Resolver conflicts

Sometimes two packages simply want incompatible versions of a third dependency. The fix is usually to upgrade one or both packages, or to pin a version that both accept. I keep a small notebook of “resolver conflict fixes” per project so I don’t rediscover the same conflict every few months.

5) Python version drift

A dependency might drop support for an older Python version, which can silently break your build if your requires-python range is too wide. I pin Python versions in CI and declare the supported range clearly in metadata. When I change the Python version, I change it in one place and update tooling to match.

Performance: Build Time, Caching, and Wheels

Dependency performance is mostly a function of caching and compiled packages. I optimize by addressing these in order:

1) Use wheels when possible. Wheels install much faster than source builds. If a dependency is frequently building from source, I reconsider it.

2) Cache wheels in CI. A wheel cache usually cuts repeat installs by 30–50%.

3) Avoid reinstalling unchanged dependencies. Docker layer caching is a cheap win when I separate dependency installation from source copy.

4) Limit dependency count. Fewer packages means faster resolution and fewer updates.

I treat dependency count like a budget. Every package has a cost in build time, security risk, and maintenance. If I add a package, I want to know what I’m buying and why.

Supply‑Chain Safety: Simple, Strong Guardrails

I treat dependencies like third‑party code I didn’t review, because that is exactly what they are. My minimum guardrails for production projects look like this:

  • Lockfile with hashes. I want deterministic installs and integrity checks.
  • Dependency audit in CI. I run a vulnerability scan on every PR.
  • SBOM generation. I generate a Software Bill of Materials so security questions become searchable, not guesswork.
  • Allowlist for critical packages. For highly sensitive apps, I only allow dependencies from a vetted list.

I also keep an eye on dependency provenance. If a package is abandoned or rarely maintained, I either pin tightly or replace it. Maintenance activity is a signal; it doesn’t guarantee safety, but it informs risk.

Dependency Hygiene: How I Keep Things Clean

Dependency bloat and stale packages are silent killers. I keep things clean with a few low‑effort practices:

  • Quarterly cleanup. I review dependencies and remove anything unused.
  • Import audits. I scan for unused imports and remove the underlying packages.
  • Update cadence. I update dependencies on a schedule, not randomly.
  • Changelog awareness. For critical dependencies, I read release notes before bumping.

My goal is to make updates boring. When updates are boring, they are regular. When updates are regular, security risk drops.

Dependency Management in Monorepos

Monorepos add complexity because multiple services or packages share dependencies. I’ve seen three workable patterns:

1) Per‑package locks. Each package has its own lockfile. This isolates risk but creates more files.

2) Shared constraints. A central constraints.txt defines organization‑wide pins, while each package declares its own requirements.

3) Unified tool. Use Poetry workspaces or a build system that can manage multiple packages with one lockfile.

I pick based on team size and release cadence. If services ship independently, I prefer per‑package locks with shared constraints. If releases are coordinated, a unified lock can be workable.

Modern Tooling and AI‑Assisted Workflows

I use AI to reduce the friction of dependency management, not to replace it. Here is what I actually automate:

  • Release note summaries. I paste a dependency diff and ask for a concise summary.
  • Migration checklists. For major version upgrades, I ask for a step‑by‑step risk list.
  • PR descriptions. I auto‑draft dependency bump PRs to speed review.

I do not let AI decide what to upgrade or blindly accept changes. Humans still choose the timing and risk tolerance. AI just helps me move faster with the documentation overhead.

Traditional vs Modern Approaches

I still see teams using global installs and ad‑hoc requirements. Here’s how I compare the old way to the modern workflow I trust:

Dimension          | Traditional   | Modern
Reproducibility    | Low           | High
Update safety      | Reactive      | Scheduled
Build speed        | Unpredictable | Cached and stable
Environment parity | Weak          | Strong
Security posture   | Minimal       | Audited and tracked

I don’t romanticize modern tooling. It’s just a higher‑quality baseline. The time you spend setting it up is paid back every time a build is deterministic and boring.

Failure Modes, Performance, and Safety

The mistakes I see most are subtle, not dramatic. I prevent them with repeatable checks and a few low‑effort guardrails.

1) Floating versions in production

If your requirements.txt says requests>=2.0, your app is not reproducible. I always pin, then update on purpose. The update process is a separate step, not an incidental side effect of installing.

2) Silent dependency bloat

Packages accumulate. When I refactor a module away, I also remove its dependency. I run an import scan and compare to my dependency list. If a dependency is unused, it is removed.

A small script to find unused imports in your package can help, but even a simple grep can save you:

rg "import requests" -g "*.py"

3) Installing in the wrong environment

If the virtual environment isn’t activated, pip can install globally. I use shell prompts that show the active environment, and I run python -m pip to be safe.
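
A quick way to see exactly where an install would land, assuming nothing beyond a working interpreter:

```shell
# sys.prefix points at the active environment; outside a venv it points
# at the system Python. If these disagree with your expectations, stop
# before installing anything.
python -c "import sys; print(sys.prefix)"
python -m pip --version
```

I run this before any install that follows a shell or terminal switch.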

4) Missing OS‑level dependencies

If a package depends on system headers or libraries, your install will fail on fresh machines. I codify OS packages in Dockerfiles or provisioning scripts, not in human memory.

5) Lockfile drift

When the lockfile and the declared dependencies disagree, your build isn’t deterministic. I make dependency updates a single PR that updates both the intent file and the lockfile.

Dependency management is not only about correctness. It affects build time, image size, and risk.

Install performance: In my experience, the biggest wins come from caching wheels and avoiding repeated source builds. If your CI doesn’t cache wheels, you can cut build times by 30–50% just by enabling a wheel cache. I also build a slim base image with preinstalled heavy dependencies for services that rebuild frequently. For projects with many pure‑Python packages, I often see cold installs drop from around a minute to under 30 seconds once caches are in place.
Image size: Extra dependencies add both size and security surface. I keep prod images minimal by separating dev tools from runtime packages. With Poetry and pipenv, I install only production groups in deployment images. Even a modest service can lose 100–300 MB by trimming unused scientific packages or build tools from the runtime image.
Supply‑chain safety: I treat dependencies like third‑party code I didn’t review. In 2026, I expect automated scanning to be part of CI. I run dependency audits, check for known CVEs, and require hashes or lockfiles for production builds. A lockfile without hashes is like a shipping label without a barcode; you still need to verify the content.

If you handle sensitive data, I recommend generating a Software Bill of Materials (SBOM) during CI and storing it with build artifacts. It’s not glamorous, but it turns a security question into a searchable inventory.

A Practical Dependency Upgrade Playbook

Here is the exact upgrade playbook I follow for production services:

1) Create a branch and update the intent file (requirements.in or pyproject.toml).

2) Regenerate the lockfile and inspect the diff, not just the top‑level changes.

3) Run the full test suite and at least one integration test.

4) Scan for breaking changes in release notes or migration guides.

5) Deploy to a staging environment that mirrors production.

6) Roll out during a low‑traffic window.

This is boring, and boring is what you want when you’re touching a foundation layer.

Debugging Dependency Weirdness

When dependency issues show up, I debug in a consistent order:

  • Confirm Python version and environment (python --version, which python).
  • Confirm installed versions (python -m pip list).
  • Check resolution sources (pip config list, custom indexes, or constraints).
  • Recreate in a clean environment.
  • If it’s a compiled package, inspect build logs or wheel tags.

Most dependency bugs are environmental. The best fix is usually to reproduce in a clean environment and then make that environment reproducible for everyone else.

Closing Thoughts and Next Steps

Dependency management is a promise you make to every future build: “I can recreate this environment when I need it.” That promise matters more in 2026 than ever, because teams move faster, CI is more automated, and supply‑chain risk is real. When you keep dependencies stable, you get more predictable tests, fewer deployment surprises, and clearer debugging. The work is not glamorous, but it compounds.

If you’re starting today, I’d do three things this week. First, move every project into an isolated environment and stop installing app libraries globally. Second, pick a tool that produces a lockfile and use it consistently. Third, add a simple dependency policy to your team’s README: how to add packages, how to update them, and how often to audit them. That tiny policy eliminates 80% of confusion.

Finally, I remind myself that dependency management is not a one‑time setup; it is a continuous habit. The habit is what keeps the factory running. When you make it routine, everything else in your workflow becomes more predictable, faster, and safer.
