When I see the “pg_config executable not found” error during a psycopg2 install, I read it as a signal: pip is trying to compile a C extension, and the PostgreSQL build tools aren’t where the build expects them. The fix is usually straightforward, but the right fix depends on your OS, how you installed PostgreSQL, and whether you want source builds at all. I’ll walk you through the exact steps I use on Windows, Linux, and macOS, plus the decision points I make in 2026 when I’m setting up a new project or CI pipeline. You’ll also get a repeatable diagnosis checklist, practical code examples, and guidance on when to reach for prebuilt wheels versus a source build. The goal is not just to squash the error today, but to make sure you don’t trip over it again when you rebuild a venv, switch Python versions, or deploy in a container.
Why the pg_config error happens (and what pip is really doing)
pg_config is a small executable shipped with PostgreSQL. Its job is to tell build tools where the PostgreSQL headers and libraries live on your system. When you run:
pip install psycopg2
pip tries to build the package if a compatible wheel isn’t available. That build step calls pg_config to locate libpq and the headers it needs. If pg_config can’t be found in your PATH, the build stops and you see the error.
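Before touching anything, I sometimes run a tiny preflight check that mirrors what the build does: search PATH for pg_config. A minimal sketch using only the standard library:

```python
import shutil

def pg_config_available() -> bool:
    # shutil.which performs the same PATH search the build step relies on.
    return shutil.which("pg_config") is not None

if __name__ == "__main__":
    if pg_config_available():
        print("pg_config found; a source build can proceed")
    else:
        print("pg_config missing; fix PATH or install the dev package")
```

If this prints "missing," no amount of pip flags will help until PATH or the install is fixed.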
In practice, the error usually means one of these is true:
- PostgreSQL isn’t installed at all.
- PostgreSQL is installed, but the bin directory isn’t on your PATH.
- You installed PostgreSQL without the development headers (common on Linux).
- You’re on Windows and the “Development Tools” feature wasn’t installed.
- You’re on macOS and installed Postgres via an app (like Postgres.app) that doesn’t expose pg_config in your shell.
I treat the error as a signal to choose between two paths:
1) Fix pg_config so a source build works.
2) Skip compilation and install a binary wheel.
Both are valid, but the choice affects maintainability, security, and portability, so I always decide intentionally.
Fastest fixes by operating system
These are the exact steps I use most often. The goal is to make pg_config visible and then install psycopg2. I’ll also include the “skip compilation” option, because it’s often the shortest path to productivity.
Windows
Step 1: Install PostgreSQL with dev tools
Download PostgreSQL from the official site and make sure Development Tools is selected during setup. That’s the easiest way to ensure pg_config.exe exists.
Step 2: Add pg_config to PATH
By default, pg_config.exe lives in a versioned PostgreSQL bin folder, for example:
C:\Program Files\PostgreSQL\15\bin\pg_config.exe
Add that bin folder to your system PATH:
setx PATH "%PATH%;C:\Program Files\PostgreSQL\15\bin"
Be aware that setx truncates values longer than 1024 characters; if your PATH is long, edit it through the Environment Variables dialog in System Properties instead.
Close and reopen your terminal, then verify:
where pg_config
Step 3: Install psycopg2
pip install psycopg2
Step 4: Skip compilation if you want
If you don’t need a source build, install the binary wheel:
pip install psycopg2-binary
I use the binary wheel for local development or CI smoke tests. For production, I usually prefer a source build that links against the system libpq.
Linux (Ubuntu/Debian as a baseline)
Step 1: Install PostgreSQL and development headers
sudo apt update
sudo apt install postgresql postgresql-contrib libpq-dev python3-dev
libpq-dev is the key package that brings in headers and pg_config.
Step 2: Install psycopg2
pip install psycopg2
Step 3: Skip compilation if you want
pip install psycopg2-binary
If you’re on Fedora/RHEL, the package names change. I check the distro docs, but the idea is the same: install PostgreSQL plus its dev package and Python dev headers.
macOS (Homebrew)
Step 1: Install PostgreSQL
brew install postgresql
Step 2: Verify pg_config
which pg_config
Typical output on Apple Silicon:
/opt/homebrew/bin/pg_config
Step 3: Add pg_config to PATH if needed
If which pg_config returns nothing, add the PostgreSQL bin directory to your shell profile:
export PATH="/opt/homebrew/opt/postgresql/bin:$PATH"
Reload your shell:
source ~/.zshrc
Step 4: Install psycopg2
pip install psycopg2
Step 5: Skip compilation if you want
pip install psycopg2-binary
If you installed Postgres via Postgres.app, I usually set PATH to the app’s bin directory (or install via Homebrew to keep things standard).
Modern packaging choices in 2026: binary wheels vs source builds
In 2026, I choose between a binary wheel and a source build based on where the code runs and how much control I need over libpq. Here’s the way I explain it to teams.
For each scenario, here is the traditional approach (source build) next to my recommendation:
- Local development: traditionally, install PostgreSQL + headers and build; I recommend the psycopg2-binary wheel for speed, switching to source only if needed.
- CI: traditionally, build from source for full parity; I recommend wheels in early stages and source builds in release pipelines.
- Production: traditionally, compile and link against system libpq; I prefer source builds so security updates track OS packages.
- Containers: traditionally, compile inside the image; I recommend source builds if the image already includes Postgres tooling.
The binary wheel is the best way to get unblocked fast, but it embeds a libpq version. For production, I’m usually more comfortable linking to the OS-provided libpq so security updates flow naturally. That’s why I treat the binary wheel as a dev and CI accelerator, not a final runtime choice.
If you’re using modern Python tooling like uv or pdm, the decision point is the same. The tool picks the wheel when it can, but if it can’t, it builds. If you want a guaranteed wheel path, you can pin it explicitly:
pip install psycopg2-binary==2.9.9
Diagnose with a repeatable checklist
When I’m on a new machine or a fresh container, I run a short checklist before I touch any build flags. It saves time and avoids guesswork.
1) Do I have pg_config at all?
which pg_config
If this returns nothing, I install PostgreSQL (and dev headers on Linux). If it returns a path, I move on.
2) Is pg_config on PATH for the active shell?
Sometimes the executable exists, but not in the shell that pip is running from. That can happen if your IDE uses a different shell environment. I verify with:
pg_config --version
3) Which Python and pip am I actually using?
I check for mismatched venvs:
python -c "import sys; print(sys.executable)"
pip -V
If pip points to a different Python than the one I expect, I fix the environment first.
4) Are headers and libpq visible?
On Linux, I confirm libpq-dev is installed. On macOS, I make sure Homebrew’s PostgreSQL is the active one. On Windows, I confirm the bin path is on PATH.
5) Do I want a source build at all?
If the answer is no, I install the binary wheel and move on. This is the fastest path when I don’t need fine-grained control.
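The checklist above can be collapsed into one small script. This is a sketch that gathers the same facts in one place; the function name is my own:

```python
import shutil
import subprocess
import sys

def environment_report() -> dict:
    """Collect the checklist facts: pg_config location/version and the active Python."""
    report = {
        "pg_config_path": shutil.which("pg_config"),
        "pg_config_version": None,
        "python_executable": sys.executable,
    }
    if report["pg_config_path"]:
        # Same command as step 2 of the checklist, captured instead of printed.
        out = subprocess.run(
            [report["pg_config_path"], "--version"],
            capture_output=True, text=True,
        )
        report["pg_config_version"] = out.stdout.strip()
    return report

if __name__ == "__main__":
    for key, value in environment_report().items():
        print(f"{key}: {value}")
```

Running it in the exact shell (or IDE terminal) where pip fails tells you immediately whether the problem is PATH or the interpreter.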
Edge cases: virtual environments, multiple PostgreSQL versions, and CI
Most of the “mystery” cases come down to mismatched environments or multiple PostgreSQL installs. Here’s how I handle the ones I see most often.
Virtual environments and PATH clashes
A venv only isolates Python packages, not system PATH. If your shell PATH points to an older Postgres version, pg_config might be for PostgreSQL 12 even if you think you’re on 15. I check:
pg_config --version
If I need a specific version, I add the correct bin path ahead of the old one:
export PATH="/usr/local/opt/postgresql@15/bin:$PATH"
Multiple PostgreSQL installs
On macOS, it’s common to have Postgres.app and Homebrew Postgres both installed. I pick one and make it the default. In practice, I uninstall the unused version or set PATH to the preferred one and lock it in a shell profile.
CI pipelines
In CI, I choose one of two strategies:
- Wheel path for speed: install psycopg2-binary and run tests immediately.
- Source path for parity: install PostgreSQL dev packages in the container and build from source.
Here’s a simple Linux CI snippet that uses the source path:
apt-get update \
&& apt-get install -y postgresql libpq-dev python3-dev build-essential
pip install psycopg2
If you’re building in a minimal base image, don’t forget build-essential (or its distro equivalent) or the build will fail even if pg_config is present.
A working Python example (and why I keep it tiny)
Once psycopg2 installs, I validate with a minimal, runnable script. Keeping it small helps me confirm the environment is correct without adding more moving parts.
import os
import psycopg2

# Read connection details from environment for portability.
DB_NAME = os.getenv("DB_NAME", "billing_db")
DB_USER = os.getenv("DB_USER", "analytics_user")
DB_PASS = os.getenv("DB_PASS", "change_me")
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_PORT = int(os.getenv("DB_PORT", "5432"))

conn = psycopg2.connect(
    dbname=DB_NAME,
    user=DB_USER,
    password=DB_PASS,
    host=DB_HOST,
    port=DB_PORT,
)

with conn:
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        version = cur.fetchone()[0]
        print("PostgreSQL version:", version)
If that script runs, I know the install and the client library are working. If it fails with a connection error, the issue is likely networking or credentials, not pg_config.
Preventing the error in new projects
I avoid the error by setting up projects in a way that doesn’t leave room for ambiguity. Here’s the baseline I recommend when starting fresh.
1) Decide on a default install path
If you’re on macOS or Linux, use Homebrew or the system package manager. On Windows, use the PostgreSQL installer and keep the version pinned. I avoid mixing app-based installs and package manager installs because PATH tends to get messy.
2) Put the PATH update in one place
If you need a PATH entry, add it once in your shell profile and document it in your project README so teammates don’t guess.
3) Use a modern environment tool
Tools like uv and pdm make it trivial to rebuild environments consistently. If your team uses them, you can encode the choice of psycopg2 vs psycopg2-binary directly in pyproject.toml so it doesn’t drift.
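As a sketch, here is a hypothetical pyproject.toml fragment that makes the choice explicit; the version pins and extra names are illustrative, and each environment should install only one of the two extras (never both drivers at once):

```toml
[project.optional-dependencies]
# Wheel for fast dev installs; source build for production parity.
dev = ["psycopg2-binary==2.9.9"]
prod = ["psycopg2==2.9.9"]
```

With this in place, onboarding is `pip install -e ".[dev]"` and release pipelines use `.[prod]`, so the driver choice can’t drift silently.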
4) Pin the driver intentionally
In production, I typically pin psycopg2 and build from source so it links to system libpq. In development, I allow psycopg2-binary for speed. That dual approach keeps day-to-day work fast while still giving production a clean upgrade path.
5) Teach your CI the same choices
If CI uses the binary wheel but production uses source builds, I keep at least one pipeline that mimics production. It catches missing headers and build issues long before release.
Deeper diagnosis: what I check when a “simple” fix fails
Sometimes I do all the obvious steps and the error still shows up. In those cases, I treat it like a build-system problem rather than a Python package problem. Here’s the exact workflow I run through.
1) I print the effective PATH in the same shell running pip
This sounds basic, but I’ve been burned by IDEs that spawn a shell with a stripped PATH.
python -c "import os; print(os.getenv('PATH'))"
If I don’t see the PostgreSQL bin directory in the PATH printed here, I stop and fix that first.
2) I locate pg_config directly
I don’t trust PATH until I confirm the executable exists.
- On Linux:
sudo find / -type f -name pg_config 2>/dev/null | head -n 5
- On macOS:
mdfind -name pg_config | head -n 5
If I find it in a different directory than expected, I add that path to PATH or remove the unexpected install.
3) I read pg_config output to confirm headers and libs
pg_config --includedir
pg_config --libdir
If those paths are missing or don’t exist, I reinstall the dev package or PostgreSQL itself.
4) I look for “shadow” pg_config from old installs
Sometimes there’s a stale pg_config earlier in PATH. I confirm which binary is being used:
which -a pg_config
If the wrong one comes first, I update PATH order.
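When I want the same answer from inside Python (for example, in a diagnostic script), I walk PATH myself. A rough equivalent of `which -a`, as a sketch:

```python
import os
import pathlib

def find_all_on_path(name: str) -> list:
    """List every executable match for `name`, in PATH order (first wins)."""
    hits = []
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = pathlib.Path(directory) / name
        if candidate.is_file() and os.access(candidate, os.X_OK):
            hits.append(str(candidate))
    return hits

if __name__ == "__main__":
    # The first entry is the one a source build would pick up.
    print(find_all_on_path("pg_config"))
```

If the list has more than one entry, I have a shadow install to clean up or reorder.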
5) I test a build environment in isolation
I create a clean venv and try to build there. If it works in the clean venv, the issue is probably my existing environment.
python -m venv /tmp/pg-test
source /tmp/pg-test/bin/activate
pip install --upgrade pip
pip install psycopg2
Alternative driver options and when I use them
Sometimes I don’t actually need psycopg2. If I’m hitting pg_config problems repeatedly, or I’m in a minimal container, I consider alternatives.
1) psycopg (psycopg3)
The newer psycopg (v3) can use a binary package that bundles libpq. It also supports an optional pure-Python mode (slower but sometimes simpler). I like it for new projects and for cases where I want async-friendly patterns or better typing.
Example (psycopg3, synchronous):
import os
import psycopg

conn = psycopg.connect(
    dbname=os.getenv("DB_NAME", "billing_db"),
    user=os.getenv("DB_USER", "analytics_user"),
    password=os.getenv("DB_PASS", "change_me"),
    host=os.getenv("DB_HOST", "localhost"),
    port=int(os.getenv("DB_PORT", "5432")),
)

with conn:
    with conn.cursor() as cur:
        cur.execute("select version()")
        print(cur.fetchone()[0])
2) asyncpg
If my app is async-first and I don’t need libpq compatibility, I consider asyncpg. It’s fast and avoids the C-extension build step. But it’s not a drop-in replacement for psycopg2, so I only use it when I’m fine with driver-specific APIs.
3) SQLAlchemy dialects
If I’m using SQLAlchemy, I can choose the driver in the connection URL. That gives me flexibility to swap psycopg2 for psycopg or asyncpg later without rewriting the whole data layer.
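As a sketch, these are the SQLAlchemy dialect suffixes for the three drivers; the host, credentials, and database names are placeholders:

```python
# The driver is selected by the "+driver" suffix in the URL; swapping it
# leaves the rest of the data layer untouched.
URLS = {
    "psycopg2": "postgresql+psycopg2://analytics_user:change_me@localhost:5432/billing_db",
    "psycopg3": "postgresql+psycopg://analytics_user:change_me@localhost:5432/billing_db",
    "asyncpg": "postgresql+asyncpg://analytics_user:change_me@localhost:5432/billing_db",
}

for driver, url in URLS.items():
    print(driver, "->", url)
```

Keeping the URL in configuration rather than code makes the eventual driver swap a one-line change.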
The key idea: If I can avoid source builds in environments where I don’t control system packages, I reduce build friction. But in production, I still like source builds or system-managed libpq for security updates.
Practical scenarios: when I pick source builds on purpose
Even though binary wheels are convenient, I still choose source builds for specific reasons.
1) I need to match a system libpq version
Some teams require a certain libpq version for compatibility or compliance reasons. If the system libpq is a must, I compile against it so I can track updates from the OS package manager.
2) I’m on a locked-down base image
In hardened environments, it’s common to allow only system packages and block downloaded binaries. A source build that links to OS packages fits that policy better.
3) I need to reduce risk from bundled libraries
Bundled libraries are great for convenience, but in regulated environments I prefer to control the dependency chain, especially for C libraries with security fixes.
4) I want consistent behavior across hosts
By linking to the same system libpq on all hosts, I reduce the risk that different embedded libpq versions behave differently. That matters most in production.
Practical scenarios: when I skip source builds on purpose
Sometimes the “correct” answer is to avoid compilation entirely.
1) I’m onboarding a new developer
For someone just joining a team, I want them productive in minutes. Installing a wheel is faster than installing PostgreSQL and headers.
2) I’m in a transient CI job
CI runners are ephemeral. I want them fast. Wheels reduce install time and remove the dependency on apt/yum being available or stable.
3) I’m doing a quick prototype
If I’m testing an idea, I avoid the overhead of system packages. I can always move to source builds later.
4) I’m in a tiny container
Minimal containers lack build toolchains. Installing a wheel avoids pulling in compilers and headers that inflate the image.
Common pitfalls I see (and how I avoid them)
A big part of avoiding the error long-term is knowing the mistakes that cause it.
Pitfall 1: Confusing psycopg2 and psycopg2-binary
These are different packages. I explicitly choose one in my dependency list instead of leaving it ambiguous. I also avoid installing both at the same time in a single environment.
Pitfall 2: PATH set in one shell, but not in another
I always put PATH changes in a shell profile that applies to all shells I use. On macOS, that’s usually ~/.zshrc. On Linux, it depends on the shell. On Windows, I set system PATH and restart the terminal.
Pitfall 3: Using a different Python than I think
This is the classic “pip points at a different interpreter” issue. I use python -m pip whenever I want to be absolutely sure:
python -m pip install psycopg2
Pitfall 4: Missing compiler toolchains on Linux
If I see errors about missing gcc or python3-dev, I install a build toolchain, not just PostgreSQL dev headers.
Pitfall 5: Mixing Homebrew and system packages on macOS
If I install Postgres from Homebrew, I keep everything else Homebrew-based too. Mixing sources often results in confusing PATH conflicts and mismatch between headers and libs.
Performance considerations: the practical impact of each choice
Performance isn’t the first thing I think about when fixing pg_config, but it does matter.
- Binary wheels: Start faster and install faster. Runtime performance is generally stable and “good enough” for most apps.
- Source builds: Take longer to install but can be tuned to system libraries and compiler optimizations. Runtime performance can be slightly better or more consistent in some environments.
In practice, the difference is usually small, but it can matter in high-throughput services. I prioritize correctness and operational stability first, then performance. If I need more speed, I measure with a realistic workload before assuming source builds will help.
More complete code example: connection pooling and robust cleanup
Once the install is fixed, I like to validate with a slightly more realistic snippet that uses a connection pool and handles errors cleanly. This catches subtle issues like incorrect SSL defaults or invalid credentials.
import os
import sys
import psycopg2
from psycopg2.pool import SimpleConnectionPool

DB_NAME = os.getenv("DB_NAME", "billing_db")
DB_USER = os.getenv("DB_USER", "analytics_user")
DB_PASS = os.getenv("DB_PASS", "change_me")
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_PORT = int(os.getenv("DB_PORT", "5432"))

pool = None
try:
    pool = SimpleConnectionPool(
        minconn=1,
        maxconn=5,
        dbname=DB_NAME,
        user=DB_USER,
        password=DB_PASS,
        host=DB_HOST,
        port=DB_PORT,
    )
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT current_database(), current_user, version();")
            db, user, version = cur.fetchone()
            print("DB:", db)
            print("User:", user)
            print("Version:", version)
    finally:
        pool.putconn(conn)
except Exception as exc:
    print("Connection test failed:", exc, file=sys.stderr)
    sys.exit(1)
finally:
    if pool is not None:
        pool.closeall()
If this script works, I know more than just pg_config worked. I know the runtime library can connect, authenticate, and query. That’s a solid end-to-end validation.
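When I embed a check like this in tooling, I guard the import first so a health check can report “driver missing” instead of crashing with a traceback. A small sketch:

```python
import importlib.util

def driver_available(name: str = "psycopg2") -> bool:
    """True if the module is importable, without actually importing it."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    if driver_available():
        print("psycopg2 is installed; proceed to the connection test")
    else:
        print("psycopg2 is not installed in this environment")
```

This distinguishes an install problem (driver missing) from a connectivity problem (driver present, connection fails) in one line of output.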
Practical container guidance: Docker examples I actually use
When I’m building container images, I want consistency and small images. I use one of two patterns depending on the target.
Pattern A: Source build in a builder stage
This is useful when I need to compile psycopg2 and link against system libpq, but I don’t want to leave a compiler in the final image.
FROM python:3.12-slim AS builder
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt /app/
RUN pip install --upgrade pip \
&& pip wheel --no-cache-dir --no-deps -r requirements.txt -w /wheels
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*
COPY . /app
Pattern B: Binary wheel only
This is the fastest, smallest approach for many development containers.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt /app/
RUN pip install --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
COPY . /app
If I don’t control the runtime OS packages, Pattern B is usually safer. If I do control the runtime OS and want consistent libpq updates, Pattern A is my choice.
Troubleshooting matrix: symptom → likely cause → fix
I keep a mental map of the most common errors that look like pg_config problems but aren’t exactly the same thing.
- Symptom: pg_config executable not found. Likely cause: PATH missing the bin directory or dev headers absent. Fix: add the PostgreSQL bin directory to PATH or install the dev package.
- Symptom: fatal error: libpq-fe.h: No such file or directory. Likely cause: libpq headers missing. Fix: install libpq-dev or equivalent.
- Symptom: error: command 'gcc' failed. Likely cause: compiler toolchain missing. Fix: install build-essential or your distro’s equivalent.
- Symptom: ImportError: libpq.so not found. Likely cause: runtime library missing. Fix: install the libpq runtime package.
- Symptom: wrong version of pg_config. Likely cause: multiple Postgres installs. Fix: update PATH order so the intended install comes first.
This matrix lets me move quickly from symptom to action without overthinking.
When I document fixes for a team
If I’m supporting a team, I document the fix in a way that prevents it from recurring. My baseline team guidance looks like this:
- We use psycopg2-binary in local dev for fast onboarding.
- We use psycopg2 source builds in production so libpq updates flow from the OS.
- We set a standard Postgres install method (Homebrew on macOS, apt on Debian, installer on Windows).
- We keep one CI pipeline that builds from source to catch missing headers early.
This reduces randomness. It also gives new team members a clear playbook so they don’t guess or copy half-fixes from older projects.
AI-assisted workflows I actually use
When I’m fixing this problem in a new environment, I sometimes use AI tools in a narrow, practical way:
- Generate a minimal Dockerfile that includes libpq-dev for a target base image.
- Draft a quick checklist for team onboarding, then I edit it for accuracy.
- Convert a shell-based fix into PowerShell for Windows teammates.
I still validate every command, but these tools speed up the “translation” between environments.
A quick FAQ I get from teammates
Q: Should I always use psycopg2-binary?
I don’t. I use it in dev and quick CI runs, but I avoid it in production when I want system-level libpq updates.
Q: Why does pip try to build from source?
Because it can’t find a wheel that matches your platform and Python version. In that case, it falls back to compiling a C extension.
Q: Do I need PostgreSQL installed just to use psycopg2?
If you use the binary wheel, not necessarily. But if you build from source, yes—you need the headers and libpq.
Q: What about Windows Subsystem for Linux (WSL)?
I treat WSL like a Linux environment. Install libpq-dev and the Python dev headers, or use the binary wheel.
Practical takeaways and next steps
When I hit “pg_config executable not found,” I treat it as a quick environment issue, not a mysterious Python problem. The fix is usually one of three actions: install PostgreSQL and its dev headers, put pg_config on PATH, or install the binary wheel to skip compilation. If you’re just trying to get unblocked on a laptop, the wheel path is often the fastest and most practical. If you’re building for production, I strongly prefer a source build that links against system libpq so security updates flow through normal OS patching. The best results come from being explicit about which path you want rather than letting pip guess.
I also recommend keeping a tiny validation script around, the kind that just prints SELECT version(). It’s a quick health check for your Python client, database server, and network path. In teams, I document the PATH expectation and the chosen install method, because most “mystery” failures come from a teammate installing a different PostgreSQL build. Finally, I make CI mirror production at least once per day, even if faster pipelines use wheels. That single parity pipeline is usually enough to prevent the next “pg_config not found” surprise.
Extra guidance: if you still get the error after “fixing” PATH
If I’ve already updated PATH and the error persists, I assume that pip isn’t seeing the same PATH I see in my terminal. That’s especially common when I install from within an IDE, a task runner, or a CI script. My fix is simple: I make the PATH explicit for the command itself.
On macOS or Linux:
PATH="/opt/homebrew/opt/postgresql/bin:$PATH" python -m pip install psycopg2
On Windows (PowerShell):
$env:Path = "C:\Program Files\PostgreSQL\15\bin;" + $env:Path
python -m pip install psycopg2
Once I verify the install works, I move the PATH change into the right shell profile or CI configuration so it doesn’t depend on a one-off command.
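The same idea works from Python, say in a setup script: prepend the bin directory to PATH for just the child process. The Homebrew path below is an example for Apple Silicon; adjust it for your install. A sketch:

```python
import os
import sys

def pip_install_command(extra_bin: str, package: str):
    """Build the command and environment for a pip install that can see pg_config."""
    env = dict(os.environ)
    # Prepend so this bin directory wins over any stale installs on PATH.
    env["PATH"] = extra_bin + os.pathsep + env.get("PATH", "")
    cmd = [sys.executable, "-m", "pip", "install", package]
    return cmd, env

if __name__ == "__main__":
    cmd, env = pip_install_command("/opt/homebrew/opt/postgresql/bin", "psycopg2")
    # To actually run it: subprocess.run(cmd, env=env, check=True)
    print(" ".join(cmd))
```

Using `sys.executable -m pip` here also sidesteps the “wrong pip” pitfall from earlier in the checklist.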
A tiny decision tree I use in real life
If I want a quick mental shortcut, I use this:
- Am I in production? If yes, I aim for source builds and system libpq.
- Am I in a quick dev or CI environment? If yes, I use the binary wheel.
- Is pg_config on PATH? If not, I fix PATH or install dev headers.
- Is the build toolchain installed? If not, I install it.
That decision tree gets me from error to working install with minimal back-and-forth.
Summary in one paragraph
The “pg_config executable not found” error is almost always a signal that a source build can’t locate PostgreSQL’s headers and libraries. My fix is to either install the PostgreSQL dev tools and ensure pg_config is on PATH, or skip compilation by installing the binary wheel. I decide based on where the code runs: wheels for speed in dev and CI, source builds for production when I want OS-managed libpq updates. The best long-term outcome comes from documenting the choice, keeping a reliable PATH setup, and validating installs with a small Python script so the error doesn’t return when environments change.


