Last quarter I inherited a Python service that ran fine on my laptop but failed in CI because another developer had installed a newer transitive library. The bug was not in our code; it was in our assumptions. That kind of drift is why I treat dependency management as a core engineering task, not a tidy-up step. If you let versions float, two machines can build two different apps, and debugging turns into archaeology.
In this guide I share the workflow I use in 2026 to keep dependencies predictable across laptops, CI, and production. You will see the baseline pip commands, when isolated environments matter, how lockfiles keep installs deterministic, and what modern tools like uv, Poetry, and Pipenv do well. I will also cover extras, platform markers, security checks, and deployment habits that keep builds repeatable. Think of dependencies like ingredients: you can follow the same recipe, but if each cook uses a different brand, the meal changes. My goal is to help you make every kitchen produce the same dish.
Why dependency control makes or breaks delivery
Dependencies are not just the packages you type in pip install. Every package can pull in a tree of other packages, each with its own version rules. That tree is where most surprises happen. If one library says urllib3>=1.26 and another says urllib3<2, the resolver picks something that may differ across machines. I treat my direct requirements as a contract and my transitive requirements as a snapshot. The contract is the version range I accept; the snapshot is the lockfile that records the exact set I shipped.
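The intersection described above can be sketched in plain Python. This is a hypothetical illustration, not a real resolver: versions are modeled as integer tuples and only the `>=` and `<` operators are supported.

```python
# Hypothetical sketch of how overlapping version ranges intersect. Real
# resolvers implement the full specifier grammar; this models versions as
# integer tuples and supports only '>=' and '<'.

def parse(spec):
    """Split one spec like '>=1.26' or '<2' into (operator, bound tuple)."""
    op = ">=" if spec.startswith(">=") else "<"
    bound = tuple(int(part) for part in spec[len(op):].split("."))
    return op, bound

def acceptable(version, specs):
    """True if a version tuple satisfies every spec in the list."""
    for spec in specs:
        op, bound = parse(spec)
        if op == ">=" and not version >= bound:
            return False
        if op == "<" and not version < bound:
            return False
    return True

# One library wants urllib3>=1.26, another wants urllib3<2:
specs = [">=1.26", "<2"]
print(acceptable((1, 26, 18), specs))  # True: inside both ranges
print(acceptable((2, 0, 0), specs))    # False: excluded by '<2'
```

Any version in the overlap satisfies both libraries; which one you actually get on a given machine is exactly what the lockfile pins down.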
Another failure mode is mixing runtime and tooling. I see teams install linters, test runners, and build helpers into the same list that production uses. That can lead to unused packages in containers or, worse, import conflicts when a dev-only tool pins a dependency to an older version. I separate runtime, dev, and optional feature groups from day one. It keeps your install footprint smaller and it makes security scanning more precise.
Finally, updates must be planned. If you let dependencies update only when something breaks, you accumulate a huge backlog. I schedule small, regular bumps and use automated checks to keep drift visible. Think of it like changing oil: the car runs either way, but the bill later is bigger. Consistent, small updates are also easier to review because each change has a smaller blast radius.
I also view dependencies as part of product reliability. When a dependency changes behavior, the functional surface of your system changes too. If a build is reproducible, then the incident timeline is simpler: you can rehydrate the exact environment and confirm whether the root cause is in code, configuration, or a third‑party update. That shortens recovery time and protects your team from avoidable stress.
Baseline workflow with pip and requirements files
pip is still the baseline in 2026, and I recommend treating it as the common denominator. I always call it through the active Python so I do not hit the wrong interpreter:
python -m pip --version
python -m pip install requests
python -m pip install 'fastapi>=0.110,<1.0'
To understand what is installed, these commands are quick and reliable:
python -m pip list
python -m pip show requests
python -m pip uninstall requests
When a project is small, a single requirements.txt can be enough. I usually create it from a clean environment and keep it pinned:
python -m pip install -r requirements.txt
python -m pip freeze > requirements.txt
A pinned requirements.txt is simple and portable, but it can grow messy if you hand‑edit it. For small apps I still do it, but I always regenerate it from a clean venv to avoid stale entries. Here is a simple example of a runtime requirements file that I would keep in version control:
fastapi==0.110.0
uvicorn==0.27.1
pydantic==2.6.4
httpx==0.27.0
If I need dev tooling, I keep it separate instead of stuffing it in the runtime file:
# requirements-dev.txt
pytest==8.1.1
ruff==0.2.2
mypy==1.9.0
Then I install with two commands or a small script:
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
This might look old‑school, but the clarity is a feature. Anyone can open the file and see what is pinned. For a lot of internal services that ship as containers, this is still enough. I only move to more complex tooling when the dependency graph grows or I need more automation.
Virtual environments are non‑negotiable
I never install project dependencies into the global interpreter. Global installs hide issues because they leak packages between projects. A new hire can have a clean machine while an older laptop has a global package that silently satisfies an import. That makes the project look stable when it is not.
For most projects I use the built‑in venv module and keep it in a .venv folder inside the repo. It is explicit, it works everywhere, and it can be recreated at any time:
python -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt
I also add .venv to .gitignore to avoid committing the environment. If a project uses multiple Python versions, I name the environment by version, such as .venv311, and I note the Python version in documentation.
When I work with multiple microservices, I set a shell prompt that shows the active venv. It sounds minor, but it prevents me from installing into the wrong environment. I also keep my base Python installations clean and let the project environment do the heavy lifting.
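A small guard like the following can back up that discipline, for example at the top of a hypothetical setup script: environments created by venv point sys.prefix away from sys.base_prefix, so the check is one comparison.

```python
# Guard sketch: refuse to proceed unless a virtual environment is active.
# venv-created environments set sys.prefix different from sys.base_prefix.
import sys

def in_virtualenv():
    return sys.prefix != sys.base_prefix

if not in_virtualenv():
    print("Refusing to install into the global interpreter; activate .venv first.")
```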
Requirements vs constraints files
A subtle but powerful pattern is separating requirements from constraints. I use requirements.in for top‑level dependencies and a constraints file to pin transitive dependencies. This makes the contract explicit and makes updates safer.
For example, requirements.in might look like this:
fastapi
uvicorn
httpx
Then a constraints file, generated by a resolver, pins every exact version that makes the set work:
# constraints.txt
fastapi==0.110.0
uvicorn==0.27.1
httpx==0.27.0
anyio==4.3.0
sniffio==1.3.1
I install with:
python -m pip install -r requirements.in -c constraints.txt
The value here is control. When I add a new top‑level dependency, I regenerate the constraints file. When I want to bump versions, I bump them in one place and re‑resolve. It also makes it clear which packages are direct and which are transitive, which is invaluable in code reviews and incident response.
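That direct-versus-transitive split is easy to compute mechanically. A hypothetical helper, assuming the simple `name==version` file shapes shown above (a real parser would handle extras, markers, and more operators):

```python
# Hypothetical helper: classify pins in constraints.txt as direct or
# transitive by comparing against the top-level names in requirements.in.

def parse_names(lines):
    names = set()
    for line in lines:
        line = line.split("#", 1)[0].strip()
        if line:
            # take the package name before any marker or version specifier
            name = line.split(";")[0].split("==")[0].split(">=")[0].strip()
            names.add(name.lower())
    return names

direct = parse_names(["fastapi", "uvicorn", "httpx"])
pinned = parse_names(["fastapi==0.110.0", "anyio==4.3.0", "sniffio==1.3.1"])
transitive = pinned - direct
print(sorted(transitive))  # ['anyio', 'sniffio']
```

During an incident, a list like this tells you immediately whether a suspect package is something you chose or something you inherited.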
Lockfiles and deterministic installs
A lockfile is a snapshot of the entire dependency tree at a point in time. If you want deterministic installs, you need one. I treat lockfiles as part of the production artifact. If the lockfile changes, the release changes.
There are several ways to get lockfiles in the Python ecosystem. pip-tools generates a pinned requirements.txt with hashes, Poetry generates poetry.lock, and uv generates uv.lock. The core idea is the same: capture every version so the installer has no decisions to make.
I also like hash‑checking mode because it catches supply chain surprises. With hash‑checking, pip will only install a file whose hash matches what is recorded. That means a silently replaced wheel will be rejected. With pip-tools I can generate hashes like this:
pip-compile --generate-hashes requirements.in
python -m pip install --require-hashes -r requirements.txt
A lockfile is not a silver bullet. If you check in a lockfile but allow floating ranges in your resolver input, the lockfile can change when re‑generated. I always treat the resolver input as policy and the lockfile as an artifact. Both need review.
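Conceptually, hash-checking is nothing more exotic than this sketch: recompute the artifact's SHA-256 and compare it to the recorded value before trusting the file. The temp file below stands in for a downloaded wheel.

```python
# What hash-checking verifies conceptually: the installer recomputes the
# artifact's SHA-256 and rejects the file if it differs from the lockfile.
import hashlib
import tempfile

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a throwaway file standing in for a downloaded wheel:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
print(sha256_of(f.name))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```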
Version specifiers and compatibility ranges
One of the biggest sources of chaos is vague version specifiers. I prefer explicit ranges where I can test compatibility and explicit pins where I cannot. A few patterns I use often:
- == for exact pins when reproducibility matters
- >=,< for API‑stable libraries that follow SemVer
- ~= for compatible releases within a minor version
For example, I might accept fastapi>=0.110,<0.120 if I can run tests against that range in CI, but I pin pydantic==2.6.4 if I know a minor bump changes behavior I rely on. I never use unbounded specifiers like >=0 in production services. They feel flexible, but they only push failure into the future.
I also avoid pre‑releases unless I am explicitly testing them. If I do need a pre‑release, I put it behind a feature branch and document why I am doing it. Stability comes from intention, not luck.
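The `~=` operator deserves a concrete illustration: per the version-specifier spec, `~=2.6.4` is equivalent to `>=2.6.4, ==2.6.*`. A simplified sketch of that rule (real comparisons handle pre-releases and more segments):

```python
# Simplified model of the compatible-release operator:
# '~=2.6.4' means '>=2.6.4' AND 'same 2.6 series'.

def compatible(version, base):
    v = tuple(int(p) for p in version.split("."))
    b = tuple(int(p) for p in base.split("."))
    return v >= b and v[: len(b) - 1] == b[:-1]

print(compatible("2.6.9", "2.6.4"))  # True: same minor series, newer patch
print(compatible("2.7.0", "2.6.4"))  # False: minor bump falls outside '~='
```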
Dependency groups and extras
I structure dependencies around how they are used. Runtime dependencies are for the running service. Dev dependencies are for linting, testing, and local tooling. Optional dependencies are for features that only some deployments need.
If I am using a pyproject.toml, I define optional dependencies as extras. For example:
[project]
dependencies = ["fastapi", "httpx"]
[project.optional-dependencies]
dev = ["pytest", "ruff", "mypy"]
postgres = ["asyncpg"]
Then I can install only what I need:
python -m pip install '.[dev]'
python -m pip install '.[postgres]'
For requirements files, I use separate files or folders. The key is to keep environments lean. When production only needs runtime dependencies, the container is smaller, install time is faster, and the vulnerability surface is smaller.
Platform markers and optional binaries
Python runs everywhere, but not every package does. Platform markers let me specify where a dependency applies. I use them for OS‑specific packages, acceleration libraries, and optional performance improvements.
A common pattern for me is adding uvloop only on Linux:
uvloop; platform_system == 'Linux'
I also use markers for Python versions when I am straddling a transition:
typing_extensions; python_version < '3.12'
Markers make a dependency list more complex, but they avoid runtime crashes on platforms that cannot build a package. The key is to keep the list readable and to document why a marker exists. If I add a marker, I add a note in the PR summary explaining the compatibility issue it resolves.
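What a marker does at install time can be sketched in a few lines: the requirement simply does not apply when the marker evaluates to false. Real installers follow the full environment-marker spec; this hypothetical version checks one field per marker.

```python
# Sketch of marker behavior: "uvloop; platform_system == 'Linux'" means
# the dependency is skipped entirely on non-Linux machines.
import platform
import sys

def marker_applies(required_system=None, max_python=None):
    if required_system is not None and platform.system() != required_system:
        return False
    if max_python is not None and sys.version_info[:2] >= max_python:
        return False
    return True

# uvloop; platform_system == 'Linux'
print(marker_applies(required_system="Linux"))
# typing_extensions; python_version < '3.12'
print(marker_applies(max_python=(3, 12)))
```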
Modern tooling overview: pip, uv, Poetry, Pipenv
The ecosystem now has multiple good choices. I pick tools based on the constraints of the project, not the novelty. Here is a quick summary of how I think about them:
pip is universal but requires extra tooling for lockfiles. uv is fast and gives you lockfiles while staying close to pip semantics. Poetry is excellent for libraries and provides a cohesive workflow, though it adds its own model. Pipenv still works but feels less active in some teams I see.
Using uv in 2026
When I want a fast, pip‑compatible workflow with a lockfile, I reach for uv. It is especially good for large monorepos and CI pipelines where install speed matters.
A basic workflow looks like this:
uv venv .venv
source .venv/bin/activate
uv pip install -r requirements.in
uv pip compile -o requirements.txt requirements.in
If I am using uv as the primary tool, I prefer the native lockfile, which uv resolves from pyproject.toml into uv.lock:
uv lock
uv sync --locked
The payoff I see is consistency. The resolver is fast, and installs are noticeably quicker, often in the 20–50 percent range in my CI runs. I still keep the mental model of pip, which makes onboarding easy. I also like that uv can manage Python versions, which simplifies local setup for teams that span multiple services.
Poetry workflow in practice
Poetry shines when I am building a library that will be published, or when I want a single tool to manage packaging, scripts, and environments. It keeps everything in pyproject.toml and generates poetry.lock for deterministic installs.
A typical flow for me is:
poetry init
poetry add fastapi httpx
poetry add --group dev pytest ruff mypy
poetry lock
Poetry has a learning curve because it hides some pip behaviors, but it is consistent. I like that dependency groups are first‑class and that it enforces a clean separation between runtime and dev. I also export to requirements.txt when I need to integrate with legacy tooling:
poetry export -f requirements.txt -o requirements.txt
Where I am cautious is when a production environment expects plain pip commands. Poetry can work there, but it adds an additional runtime tool that must be installed. If I cannot control the environment, I stick to pip and lockfiles.
Pipenv notes and when I still use it
Pipenv is still usable, and some teams are comfortable with it. It provides a lockfile and handles dev dependencies cleanly. However, I see less momentum around it, and onboarding new developers can be a bit slower if they are not familiar with it.
I reach for Pipenv when I inherit a project that already uses it and is stable. In those cases I do not rewrite the tooling unless there is pain. I focus instead on tightening the lockfile discipline and making sure CI uses pipenv sync to install exactly what is locked.
Handling local, editable, and git dependencies
Editable installs are useful for local development, but they can create accidental coupling. I keep them out of production and out of lockfiles whenever possible. If I need them in a dev environment, I document them clearly in the setup guide.
For example:
python -m pip install -e ./libs/core-utils
Git dependencies are another sharp edge. They are convenient for quick experiments but fragile for reproducibility. A branch name is not a version. If I must use a git dependency, I pin to a commit SHA and explain why:
git+https://example.com/repo.git@9f2c9c3
I avoid local path dependencies for production builds because they do not travel well between machines. If a dependency is important enough to be used in production, I package it properly or include it as a submodule with a fixed version. Predictability wins.
Update strategy and automation
My dependency update strategy is simple: small, regular updates with automated feedback. I schedule a weekly or bi‑weekly update window and I keep the batch small. It is easier to test and easier to roll back if needed.
I use tools like Dependabot or Renovate to create PRs, but I never merge them blindly. I run the full test suite and I scan the changelog for breaking changes. If a dependency is central, I create a quick integration test that exercises the key path. The goal is not perfection; it is to make change deliberate.
When I do a larger update, I split it. I update one layer at a time: framework first, then ecosystem packages, then dev tools. This gives me a clean revert path. I also measure the time it takes to install dependencies before and after updates. If install time grows significantly, I investigate why, because it affects CI cost and developer experience.
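When reviewing an update PR, I want the diff in terms of packages, not lines. A hypothetical review helper, assuming the simple `name==version` pinned-file format used earlier:

```python
# Hypothetical review helper: diff two pinned snapshots so an update PR
# shows exactly which packages moved and in which direction.

def pins(text):
    """Parse 'name==version' lines into a dict, ignoring comments."""
    result = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            result[name.strip().lower()] = version.strip()
    return result

def changed(before, after):
    names = sorted(before.keys() | after.keys())
    return {n: (before.get(n), after.get(n)) for n in names
            if before.get(n) != after.get(n)}

old = pins("fastapi==0.110.0\nhttpx==0.27.0")
new = pins("fastapi==0.110.3\nhttpx==0.27.0")
for name, (was, now) in changed(old, new).items():
    print(f"{name}: {was} -> {now}")
# fastapi: 0.110.0 -> 0.110.3
```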
Security checks, SBOM, and supply chain hygiene
Security is now a first‑class part of dependency management. I run a vulnerability scan on every dependency update and at least weekly for stable branches. I also capture an SBOM for production releases so I can answer questions quickly if a CVE is announced.
I typically use pip-audit or safety for scanning. The exact tool is less important than consistency. I also include hash checking in my installation flow when possible, which protects against tampered packages:
python -m pip install --require-hashes -r requirements.txt
For SBOMs, I use CycloneDX tooling. The output becomes part of the release artifact. If I am in a regulated environment, I attach the SBOM to the release metadata so it is easy to retrieve later. This discipline is useful even in small teams because it reduces uncertainty when an incident hits the news.
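Even without dedicated tooling, the raw inventory behind an SBOM is reachable from the stdlib. This is a minimal sketch, not a full CycloneDX document: it records what is actually installed so a release artifact can answer "did we ship package X?".

```python
# Minimal inventory sketch (not a full CycloneDX document): list installed
# distributions with stdlib importlib.metadata as a release artifact.
import json
from importlib import metadata

def installed_components():
    items = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:
            items.append({"name": name, "version": dist.version})
    return sorted(items, key=lambda c: c["name"].lower())

inventory = installed_components()
print(json.dumps(inventory[:3], indent=2))
```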
CI installation patterns and caching
CI is where dependency drift shows up first. I always make CI install from the lockfile or from a pinned requirements file. If CI uses a different command than local development, you will eventually see a mismatch.
My typical CI steps are:
- Create a clean venv
- Install from the lockfile or pinned requirements
- Run tests and linters
I also set deterministic flags where possible:
export PIP_DISABLE_PIP_VERSION_CHECK=1
export PIP_NO_INPUT=1
Caching is helpful, but it must be safe. I cache the package downloads, not the venv. Caching the venv can hide dependency changes. Download caching gives the speed benefits while still respecting the lockfile. I also include the lockfile hash in the cache key so dependency changes create a clean cache namespace.
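Deriving the cache key from the lockfile hash is a one-liner in any language; here is a Python sketch with an illustrative key prefix. Any change to the lockfile produces a new key, so stale downloads cannot mask an update. The temp file stands in for a committed lockfile.

```python
# Cache key derived from lockfile contents: dependency changes bust the
# download cache automatically (the prefix name is illustrative).
import hashlib
import tempfile
from pathlib import Path

def cache_key(lockfile, prefix="pip-downloads"):
    digest = hashlib.sha256(Path(lockfile).read_bytes()).hexdigest()[:16]
    return f"{prefix}-{digest}"

# Demo against a throwaway lockfile:
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"fastapi==0.110.0\n")
print(cache_key(f.name))
```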
Docker and production deployments
In containers I keep the build as simple as possible. I copy the lockfile early to leverage layer caching, then install, then copy the source. This makes rebuilds fast when only code changes.
A pattern I like is:
COPY requirements.txt /app/requirements.txt
RUN python -m pip install --no-cache-dir -r /app/requirements.txt
COPY . /app
If I use pyproject.toml and Poetry or uv, I still export a pinned requirements.txt for production. It keeps the production image lightweight and avoids installing extra tools inside the container. I also avoid editable installs in production containers. The code is already in the image; there is no need for editables.
Native dependencies and build isolation
Some packages compile native extensions. That can be painful in CI or on minimal images. When I know a package has native extensions, I prefer prebuilt wheels. I use --only-binary=:all: for packages where wheels exist, and I fall back to source builds only when I must.
I also document the system dependencies required for native builds. If a package needs libpq or openssl, I list those clearly in the Dockerfile or CI setup. The dependency is not just Python; it is the full system stack. Making that explicit prevents brittle builds.
When I target multiple platforms, I test the dependency set on each platform. A package that installs on macOS may fail on Linux if wheels are missing. I try to catch that before the first production deploy.
Performance considerations
Dependency choices affect performance in two ways: install time and runtime overhead. Install time matters for developer productivity and CI costs. Runtime overhead matters for latency and memory. I track both.
For install time, I compare tools and caching strategies. uv often reduces install time by 20–50 percent in my experience. Hash‑checking can add a small overhead, but it is usually worth it. I also measure the size of the dependency tree; if it grows without reason, I investigate and prune.
For runtime, I avoid heavy dependencies unless they deliver clear value. If I only need one function from a large library, I consider a lighter alternative. I also avoid unnecessary extras in production. Fewer packages means less import time, less memory, and fewer potential vulnerabilities.
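Import cost is measurable before you commit to a dependency. A quick sketch using only the stdlib; the json module here is just a stand-in for whatever candidate library you are evaluating.

```python
# Quick check of a module's cold-import cost; heavy dependencies show up
# directly in service start-up time and in every CI run.
import importlib
import sys
import time

def import_cost(module_name):
    sys.modules.pop(module_name, None)  # force a fresh top-level import
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

print(f"json: {import_cost('json') * 1000:.2f} ms")
```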
Common pitfalls I still see
- Floating versions in production requirements, leading to unpredictable installs
- Dev dependencies bundled into runtime images, increasing size and risk
- Editable installs used in production, causing subtle path issues
- Lockfiles generated on one platform and reused on another without review
- Git dependencies pinned to branches instead of immutable SHAs
- Big version jumps without tests, causing avoidable outages
When I see these, I fix the root cause rather than patching the symptoms. That usually means introducing a lockfile, splitting dependency groups, or enforcing a consistent install command in CI.
Practical scenarios and how I decide
When I join a small project, I start with pip and a pinned requirements.txt. It is the lowest friction path and it keeps things readable. As soon as the dependency graph grows or I see frequent conflicts, I introduce pip-tools or uv to generate a lockfile.
For a team that ships libraries, I default to Poetry because it aligns packaging metadata and dependency management. I can still export to requirements.txt for downstream consumers, but the source of truth remains pyproject.toml and poetry.lock.
For large monorepos with multiple services, I like uv because it is fast, scriptable, and close to pip. It lets me standardize one install command across services and still get deterministic behavior. The key is to pick one tool and document it clearly so the team is not split across workflows.
AI‑assisted workflows in dependency management
I sometimes use AI tools to summarize changelogs and propose safe upgrade paths. For example, when a framework releases a new minor version, I ask for a summary of breaking changes and a checklist of what to verify. This saves time, but I always verify against official release notes before changing anything.
I also use AI to scan dependency trees for unused packages or to suggest smaller alternatives, but I treat those suggestions as hypotheses. The final decision still comes from profiling and testing. AI is a good assistant, not an authority.
Final checklist I follow before release
- Lockfile or pinned requirements updated and committed
- Runtime and dev dependencies separated cleanly
- CI installs only from the lockfile or pinned list
- Vulnerability scan run and reviewed
- SBOM generated for the release
- Docker image rebuilds cleanly from scratch
- Platform markers tested on each target OS
Dependency management is not glamorous, but it is one of the highest leverage habits I know. When I keep dependencies predictable, everything else gets easier: onboarding, debugging, incident response, and long‑term maintenance. If you adopt even a few of the practices here, you will feel the difference in the next release cycle.


