How to List Installed Python Packages (Practical, Real‑World Guide)

I run into the same problem on nearly every project: “What’s actually installed in this Python environment right now?” Maybe I’m debugging a bug that only shows up on a coworker’s machine. Maybe a CI job fails because a dependency is missing. Or maybe I’m about to ship a container and want to lock versions before the next deploy. Listing installed packages sounds basic, but the choice of method matters. Some commands give you a clean version snapshot for requirements files, others reveal dependency trees, and some are best when you’re already inside a notebook or a virtual environment.

In this post I’ll walk you through the tools I use day‑to‑day to list installed Python packages, why I choose each one, and the gotchas that trip people up. I’ll stay practical: runnable commands, clear outputs, and real scenarios. By the end, you’ll know which method to use for quick sanity checks, which one produces deployment‑ready pins, and how to inspect the packages from inside Python code when you need programmatic access.

The Fastest Sanity Check: pip list

When I want a quick snapshot, I reach for pip list. It’s simple, consistent, and works in any environment that has pip. The command is fast enough to run during debugging and friendly enough to show to teammates who don’t live in terminals all day.

pip list

What you’ll see is a table with package names and installed versions. I treat it as a “roll call.” If a dependency is missing or a version looks odd, I can spot it immediately. This is my go‑to during live debugging sessions.

A few practical tips:

  • Use python -m pip list when you suspect you’re calling the wrong pip. This binds pip to the exact interpreter you care about.
  • Combine with --format=columns (default) or --format=freeze if you want a different layout.
  • For large environments, piping into rg helps you find a specific package quickly.
python -m pip list | rg "requests"

When I’m pairing with someone who just installed a package and says “it should be there,” this command gives me quick proof. It’s the quickest “is it installed?” answer you can get.

Why I don’t always stop at pip list

pip list is fast and clean, but it’s not a full dependency story. It tells me what’s installed, not why it’s installed. It doesn’t show dependency relationships, and it doesn’t guarantee a deterministic reproduction snapshot. I treat it as the first check, not the final answer.

Reproducible Snapshots: pip freeze

pip freeze is my preferred method when I need a dependency lock that can be shared, archived, or copied into a requirements file. It prints the packages in name==version format, which is exactly what requirements.txt expects.

pip freeze

This output is a flat list, one package per line. That means it’s easy to pipe into a file:

pip freeze > requirements.txt

I use pip freeze when:

  • I’m shipping a bug repro to a teammate.
  • I’m pinning versions before a deploy.
  • I need to compare two environments and spot version drift.

The most common mistake I see is running pip freeze from the wrong interpreter. If you have multiple Python versions installed, always use:

python -m pip freeze

That guarantees the snapshot is tied to the interpreter you’re actually using. If you only remember one best practice from this post, make it that one.

A practical workflow I use with pip freeze

When I’m troubleshooting a bug that only happens in production, I’ll ask for a pip freeze snapshot from the server and a snapshot from my local dev environment. Then I run a quick diff:

python -m pip freeze > local.txt

From the server, capture the same snapshot into production.txt. Then sort both files and diff them:

sort local.txt > local_sorted.txt

sort production.txt > production_sorted.txt

diff -u local_sorted.txt production_sorted.txt

Any mismatch is a clue. It’s not always the root cause, but it’s nearly always relevant. This workflow doesn’t require fancy tools and catches 80% of “works on my machine” problems.

When I don’t use pip freeze

If I just need a readable overview, pip freeze is too noisy. It includes transitive dependencies, which can be dozens or hundreds of packages in real projects. For high‑level reporting, I’ll use pip list or an environment manager’s native command.

Programmatic Listing Inside Python (No Shell Required)

Sometimes you can’t rely on shell commands. Maybe you’re inside a script running on a server, or you need to embed package listing into a diagnostics endpoint. In those cases, I list installed packages directly from Python.

Option A: importlib.metadata (modern standard)

In recent Python versions, importlib.metadata is built‑in and works well for most environments. It’s the first option I try.

import importlib.metadata as md

for dist in md.distributions():
    name = dist.metadata["Name"]
    version = dist.version
    print(f"{name}=={version}")

This is clean, dependency‑free, and fast. It’s also friendly for tooling: I can filter, sort, or export to JSON without leaving Python.

Option B: pkg_resources (legacy but still common)

You’ll still see pkg_resources in many codebases because it has been around for a long time. It’s part of setuptools and works across platforms.

import pkg_resources

installed_packages = pkg_resources.working_set

for package in installed_packages:
    print(f"{package.key}=={package.version}")

This is stable and widely supported, but it’s a bit heavier than importlib.metadata. I only use it when I’m working in older environments or when a project already depends on it.

A practical programmatic example: JSON output for diagnostics

When I build services, I often need structured output. Here’s a version that returns JSON, suitable for a diagnostics endpoint:

import importlib.metadata as md
import json

def list_packages_json():
    packages = []
    for dist in md.distributions():
        packages.append({
            "name": dist.metadata.get("Name", ""),
            "version": dist.version,
        })
    packages.sort(key=lambda x: x["name"].lower())
    return json.dumps(packages, indent=2)

if __name__ == "__main__":
    print(list_packages_json())

This is a low‑friction way to capture version info in logs or attach it to bug reports automatically. It’s one of the easiest high‑leverage diagnostics you can add to a tool.

When I use programmatic listing

  • Health checks in backend services.
  • Diagnostics in desktop apps.
  • Tooling that prints environment info before running a pipeline.

Think of it like checking the label on a jar before you cook with it. It’s safer to verify what’s inside rather than assume the pantry hasn’t changed.

Seeing Dependency Trees: pipdeptree

If pip list is the roll call, pipdeptree is the family tree. It shows which packages depend on which, which is essential when a single upgrade breaks your app and you need to find the root cause.

First, install it:

pip install pipdeptree

Then run:

pipdeptree

You’ll see a tree‑like output that makes dependency relationships obvious. I use this when:

  • A transitive dependency is pulling in an old version.
  • I want to check why a package is installed at all.
  • I’m reducing environment size and want to remove unused libs.

Common mistake: people assume pipdeptree only lists direct dependencies. It lists everything, so you may see a package you didn’t install directly. That’s normal and often helpful.

My favorite pipdeptree workflow

When I’m trying to remove a package, I run pipdeptree and search for it:

pipdeptree | rg "some-package"

This shows me which top‑level dependencies pull it in. I can decide whether to remove that root dependency, replace it, or ignore the risk. Without this tree, you’re just guessing.
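If you want this reverse lookup without installing pipdeptree, you can approximate it in Python with importlib.metadata. This is a minimal sketch, not a replacement for pipdeptree's full graph: it only finds direct dependents, and the requirement-string parsing is deliberately simple. The package name "idna" is just an example:

```python
import importlib.metadata as md
import re

def reverse_dependencies(target: str) -> list[str]:
    """Names of installed distributions that declare a direct dependency on `target`."""
    target = target.lower()
    dependents = set()
    for dist in md.distributions():
        for req in dist.requires or []:
            # Requirement strings look like "idna<4,>=2.5" or
            # "colorama; sys_platform == 'win32'"; strip everything after the bare name.
            name = re.split(r"[\s;<>=!~\[\(]", req)[0]
            if name.lower() == target:
                dependents.add(dist.metadata["Name"])
    return sorted(dependents)

print(reverse_dependencies("idna"))
```

In most environments with requests installed, this prints the packages that pull in idna directly.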

Environment‑Aware Listing: pipenv, poetry, and uv

In 2026, most teams I work with use some kind of environment manager. That means “list installed packages” can be different depending on the tool you’re using.

Pipenv

If you use Pipenv, you can either activate its virtual environment first with pipenv shell and then run pip list, or skip activation entirely:

pipenv shell

pipenv run pip list

The key is pipenv run: it executes pip inside the Pipenv environment, not your global Python. I use it when I’m onboarding to a project that already has a Pipfile.

Poetry

Poetry ships with its own commands for listing dependencies. This is a good fit when you want to see dependency groups.

poetry show

For a full tree:

poetry show --tree

I like this when I’m trying to understand why a dependency exists or when I’m keeping runtime and dev packages separate.

UV (modern, fast package manager)

uv has become popular for fast installs and clean environments. You can list packages either by running pip inside the environment or by using uv‑native commands if your workflow supports it.

uv pip list

The pattern is the same: target the right environment and run the list command there. The environment is the story; the command is just how you read it.

Why I still use python -m pip list with tools

Even when I’m in Poetry or Pipenv, I sometimes run:

python -m pip list

It’s a sanity check. If the environment is active, it should show the same packages. When it doesn’t, that’s a sign I’m not in the environment I think I am.
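The same check works from inside Python. This small sketch prints the interpreter facts that explain most "wrong environment" surprises, using only the standard library:

```python
import sys

def environment_summary() -> dict:
    """Collect the interpreter facts that explain most 'wrong environment' surprises."""
    return {
        "executable": sys.executable,    # the Python binary actually running
        "prefix": sys.prefix,            # the active environment's root
        "base_prefix": sys.base_prefix,  # the interpreter the venv was created from
        "in_virtualenv": sys.prefix != sys.base_prefix,
        "version": sys.version.split()[0],
    }

for key, value in environment_summary().items():
    print(f"{key}: {value}")
```

If "in_virtualenv" is False when you expected a venv, that's your answer right there.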

Conda Environments: conda list

If you’re in data science or ML, you’ll likely bump into Conda. In that world, pip list is sometimes incomplete because Conda can manage packages that pip doesn’t know about.

conda list

This gives you a full package list for the Conda environment, including versions, build numbers, and channels. It’s my default when I’m working inside a Conda environment.

If you also install packages with pip inside Conda (very common), I recommend running both:

conda list

python -m pip list

This is a quick way to spot inconsistencies or pip‑only installs that aren’t visible from Conda’s point of view.

Why Conda can confuse people

Conda environments can include non‑Python dependencies like MKL, CUDA, or compiled system libraries. That means conda list can look “noisy” compared to pip list. It’s still important. If a NumPy error happens in a Conda env, the compiled libraries in that list can be the clue you need.

Inside Notebooks: !pip list and safer alternatives

Jupyter notebooks are their own world. The notebook kernel might not match your terminal interpreter, so the safest approach is to run pip from inside the notebook.

!pip list

That works, but I personally prefer the Python approach for accuracy and testability:

import importlib.metadata as md

for dist in md.distributions():
    print(f"{dist.metadata['Name']}=={dist.version}")

This avoids shell escape quirks and works even if shell access is restricted. If you’re teaching or sharing notebooks, that’s a nice side effect.

Notebook tip: verify the kernel interpreter

If you’re uncertain about the kernel’s Python, run this inside the notebook:

import sys

print(sys.executable)

Then compare that to your terminal interpreter. If they’re different, that explains most “it works in the terminal but not in the notebook” issues.

Traditional vs Modern Approaches (Quick Comparison)

Here’s how I think about the main approaches in 2026. The “modern” side isn’t magic; it’s just more environment‑aware and automation‑friendly.

Traditional approach → modern approach, and when I choose the modern one:

  • pip list → python -m pip list: when I suspect path issues or multiple Python installs.
  • pip freeze → pip freeze + a lock file (Poetry/UV): when I want reproducibility with tooling support.
  • pkg_resources → importlib.metadata: when I can rely on newer Python versions.
  • Manual terminal checks → programmatic listing in code: when I need diagnostics in CI or production.
  • Global pip → environment manager commands: when the project defines its own environment.

If you’re unsure, start with python -m pip list. It’s the least surprising option in mixed setups.

Common Mistakes I See (and How to Avoid Them)

1) Running the wrong pip

This is the classic issue. You run pip list, but it prints packages from a different interpreter.

Fix:

python -m pip list

I do this by default now. It takes two extra characters and saves hours of confusion.

2) Forgetting virtual environments

If you’re inside a venv, pip list should show only packages for that venv. If it doesn’t, you’re probably not actually in the venv.

Fix: activate the venv and re‑run. On macOS/Linux:

source .venv/bin/activate

python -m pip list

On Windows (PowerShell):

.\.venv\Scripts\Activate.ps1

python -m pip list

3) Trusting global packages in a project

Global packages are fine for quick experiments, but I don’t use them for production. They hide missing dependencies and break reproducibility.

Fix: use a project‑level environment and list from there (Pipenv, Poetry, UV, or plain venv).

4) Using pip freeze as a “nice list”

pip freeze outputs everything, including transitive dependencies. That’s great for exact reproduction, but noisy for high‑level reporting.

Fix: use pip list for a readable overview, or tools like Poetry to see direct dependencies.

5) Ignoring system packages in Conda

Conda manages binaries too, not just Python packages. If your project uses NumPy, SciPy, or GPU‑related packages, conda list is the source of truth.

Fix: check Conda first, then pip.

6) Assuming packages are installed because imports work

Sometimes a package import succeeds because it’s in a global environment, not in the project environment. That’s fragile and can fail in CI.

Fix: verify using python -m pip list or environment manager commands from the same interpreter running your app.

7) Forgetting editable installs

Editable installs (pip install -e .) don’t always show up the way you expect in pip list or pip freeze, especially if you’re looking for a version tag.

Fix: check for editable entries in pip freeze, or use pip list --editable to filter:

python -m pip list --editable

When to Use Each Method (My Practical Rules)

I keep a simple mental map:

  • Quick check: python -m pip list
  • Reproducible snapshot: python -m pip freeze > requirements.txt
  • Dependency analysis: pipdeptree
  • Inside Python code: importlib.metadata
  • Inside Pipenv: pipenv run pip list
  • Inside Poetry: poetry show
  • Inside Conda: conda list
  • Inside notebooks: importlib.metadata (or !pip list if you must)

This isn’t theory; it’s what I do in real projects. I’ll even add these commands to project docs so new contributors can get environment info instantly.

Edge Cases and Real‑World Scenarios

Scenario 1: Debugging a CI failure

The build passes on your laptop but fails in CI. I add a quick listing step:

python -m pip list

This tells me whether CI has a different version of a dependency. It’s usually a pin mismatch or a missing optional package.

Scenario 2: Reproducing a bug from a teammate

I ask them to send pip freeze output. Then I spin up a clean venv and install from that snapshot. If the bug reproduces, I know I have the same environment.

Scenario 3: Shrinking a container image

I run pipdeptree to see unused packages and then look for dependency chains I can remove. If a package only exists because another optional dependency pulled it in, I can remove it and reduce size.

Scenario 4: Notebook drift

A notebook suddenly breaks after weeks of inactivity. I list installed packages in the notebook kernel to see if the environment changed.

Scenario 5: “It works on my machine”

I run python -m pip list on both machines and compare. The differences are almost always the cause.

Scenario 6: Mixed pip and Conda installs

If a Conda environment has some packages installed with pip, it can create subtle conflicts. I list both and look for duplicates with different versions.

Scenario 7: Investigating a security alert

A security scan warns about a vulnerable library. I first check if it’s installed:

python -m pip list | rg "vulnerable-lib"

If it is, I use pipdeptree to see which package pulls it in. That tells me whether I can upgrade a root dependency to fix it.

Performance Considerations (What I Actually Notice)

Listing packages is usually fast, but there are differences:

  • pip list typically finishes in 100–500ms for standard environments.
  • pip freeze can be similar, sometimes slightly slower for very large environments.
  • pipdeptree can take 1–3s on big ML stacks because it builds a graph.
  • importlib.metadata is often in the 50–200ms range, which is great for runtime diagnostics.

These are rough ranges from my own machines. The real point is: don’t be afraid to run these commands in CI or during a local debug session. They’re fast enough for everyday use.

Performance tip: cache for repeated diagnostics

If you’re listing packages repeatedly in a server process (for example, diagnostics endpoints), you can cache the results at startup and serve them from memory. Packages don’t usually change during runtime, so this can cut overhead to almost zero.
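A minimal way to do that in Python is to compute the snapshot once and reuse it; functools.lru_cache keeps the sketch short:

```python
import functools
import importlib.metadata as md
import json

@functools.lru_cache(maxsize=1)
def package_snapshot() -> str:
    """Build the package list once; later calls return the cached JSON string."""
    packages = sorted(
        (dist.metadata["Name"] or "", dist.version) for dist in md.distributions()
    )
    return json.dumps([{"name": n, "version": v} for n, v in packages], indent=2)

# First call does the work; subsequent calls are effectively free.
snapshot = package_snapshot()
assert package_snapshot() is snapshot  # same cached object
```

If your service supports hot-reloading packages (rare), clear the cache with package_snapshot.cache_clear() instead of caching forever.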

Alternative Approaches You Might See

pip show

pip show is a targeted tool. It won’t list everything, but it’s great for one package:

python -m pip show requests

It shows version, location, dependencies, and metadata. When I need to confirm where a package is installed on disk, this is my first command.

pip list --outdated

If you want to list installed packages that have newer versions, this is a helpful view:

python -m pip list --outdated

I use it when I’m planning upgrades, but I avoid using it as a “must update everything” list. Outdated doesn’t always mean unsafe or urgent.

pip list --not-required

This shows packages that are not dependencies of other packages (often your direct installs):

python -m pip list --not-required

It’s a rough way to approximate top‑level dependencies. It’s not perfect, but it can be a quick sanity check.

pip check

While it doesn’t list packages, pip check can catch conflicts and broken requirements:

python -m pip check

If I’m listing packages because something feels “off,” I’ll run pip check too. It’s a quick way to detect dependency mismatches.

Comparing Environments Side‑by‑Side

When I really need clarity, I compare environments directly. Here’s a quick method I use:

python -m pip freeze | sort > env_a.txt

In the other environment:

python -m pip freeze | sort > env_b.txt

diff -u env_a.txt env_b.txt

This gives a line‑by‑line comparison of versions. It’s simple, fast, and highly effective. If you want something fancier, you can parse the files into a Python script and generate a table, but in practice the diff is enough.

A tiny Python comparer I reuse

from pathlib import Path

def read_freeze(path):
    data = {}
    for line in Path(path).read_text().splitlines():
        if "==" in line:
            name, version = line.split("==", 1)
            data[name.lower()] = version
    return data

left = read_freeze("env_a.txt")
right = read_freeze("env_b.txt")

for name in sorted(set(left) | set(right)):
    lv = left.get(name)
    rv = right.get(name)
    if lv != rv:
        print(f"{name}: {lv} -> {rv}")

This is great when you want a “change log” style output, not a raw diff.

Production Considerations

Listing packages in production is usually safe, but I treat it like any diagnostic. It should be intentional and controlled.

  • If it’s a public endpoint, avoid exposing full package lists. That can leak implementation details.
  • For internal diagnostics, I keep it behind authentication.
  • I store the list in logs or a debug endpoint to help with incident response.

A common pattern I like is “diagnostics only in debug mode.” If an environment variable like DEBUG=true is set, the service prints its package list at startup. Otherwise, it stays quiet.
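A minimal sketch of that pattern, assuming the service reads a DEBUG environment variable at startup:

```python
import importlib.metadata as md
import os

def maybe_dump_packages() -> bool:
    """Print the installed package list at startup, but only when DEBUG=true is set."""
    if os.environ.get("DEBUG", "").lower() != "true":
        return False
    for dist in sorted(md.distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
        print(f"{dist.metadata['Name']}=={dist.version}")
    return True

# Quiet by default; flip DEBUG=true in the environment to see the dump.
os.environ["DEBUG"] = "true"
maybe_dump_packages()
```

Call maybe_dump_packages() once at startup; in normal operation it returns immediately and logs nothing.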

Modern Workflow Tips (2026 Style)

  • I keep a small script in each repo that prints environment details, including python -m pip list or importlib.metadata output. This makes bug reports far more useful.
  • AI‑assisted tooling works better when you include exact package versions in prompts. I paste pip freeze output into bug reports so the assistant has context.
  • I store pipdeptree output during dependency upgrades. When something breaks, I can compare trees before and after.
  • I include environment dumps in CI artifacts for failed builds. It makes post‑mortems faster.

Frequently Asked Questions

“Why does pip list show packages I didn’t install?”

Because many packages are installed as dependencies of other packages. It’s normal. Use pipdeptree to see who depends on them.

“Why does pip freeze include more packages than my requirements.txt?”

pip freeze lists everything installed, including transitive dependencies. requirements.txt usually lists only the top‑level packages you care about.

“Why does pip list look different in my terminal vs my notebook?”

Because the notebook kernel may use a different interpreter. Check sys.executable inside the notebook to confirm.

“What’s the best command to list packages inside a Docker container?”

Run python -m pip list inside the container. If it’s Conda‑based, run conda list as well.

“Do I need to list packages for every deploy?”

You don’t have to, but I find it helpful to capture the versions that actually shipped. It’s invaluable during incident response.

Closing Thoughts and Next Steps

Listing installed packages isn’t just a checklist item; it’s a core debugging tool. When I’m careful about which method I use, I get faster answers and fewer surprises. If you only need a quick glance, python -m pip list is hard to beat. If you need reproducibility, pip freeze gives you a precise snapshot. If you’re hunting down dependency issues, pipdeptree shows you the full story. And when I’m inside a script or notebook, importlib.metadata keeps the whole process in Python, which is often safer and easier to automate.

The next step is to make this routine. Add one listing command to your project docs. Create a small script that prints versions when your app starts in debug mode. When a teammate sends you a bug report, ask for pip freeze output. It takes seconds, and it can save hours.

If you want to go further, try comparing two environments side‑by‑side with a simple diff. Use it to confirm upgrades, catch mismatched dependencies, or validate a container image before you ship. That’s what I do, and it keeps surprises to a minimum. You don’t need a fancy tool for this; you just need the right listing method at the right moment.
