I still remember the first time a small script grew into a multi-file project and suddenly everything broke. The functions that worked perfectly yesterday were now “invisible,” and Python complained it couldn’t find my own code. That moment is where most developers learn that imports aren’t just syntax; they’re the glue that holds a codebase together. If you want maintainable Python in 2026, you need to understand local imports at a deeper level than “just add the file to the folder.”
This guide walks you through how I structure local modules, what Python’s import machinery actually does, and how to avoid the error traps that show up the moment your project stops being a single file. I’ll show simple examples, then build up to realistic package layouts with relative imports, editable installs, and tooling-friendly patterns you can use in modern workflows.
What “Local Module” Means in Practice
A module is just a Python file you can import. A local module is any module that lives inside your project instead of the global site-packages. That sounds obvious, but the behavior depends on how you run the code. When you run a file, Python builds a search path from the script’s location, environment variables like PYTHONPATH, and a few runtime rules. That path determines where imports resolve.
I treat “local modules” as one of three things:
- A sibling file in the same directory as the entry script
- A file inside a package directory (a folder with an __init__.py)
- A module in a project you’ve installed in editable mode
If you internalize those three categories, you can predict how imports behave before you run the program. You’ll also make better choices about project structure, which matters more than the import syntax itself.
How Python Finds Your Code
Python’s import system scans sys.path in order. This is the list of directories it will search for modules and packages. When you run a script, the directory containing that script is inserted at the front of the list (in the interactive interpreter, it’s the current working directory instead). That’s why import module1 works when module1.py sits beside your main script.
You can inspect the search path any time:
import sys

for p in sys.path:
    print(p)
If you’re debugging an import issue, this is the first printout I look at. It tells you where Python is even willing to search.
Simple analogy: sys.path is like the set of shelves in a library. An import is a request for a book. If the shelf isn’t listed, Python won’t even check whether the book exists.
The Deeper Model: Finders, Loaders, and Caches
If you want to understand “why” an import behaves a certain way, it helps to know the import pipeline:
1) Python checks sys.modules (import cache). If a module is already loaded, it returns it immediately.
2) Python walks the sys.meta_path finders in order. Each finder decides whether it can locate the module.
3) A finder returns a “loader,” which is responsible for actually creating the module object and executing its code.
That’s why “delete the file and re-run” often doesn’t fix odd behavior in a REPL: the module may still live in sys.modules. In a long-lived process (server, notebook, REPL), that cache matters. When I want a clean import state, I restart the process. If I absolutely must reload, I use importlib.reload(module) and I do it with caution.
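You can see the cache in action with any already-imported module. A minimal sketch using the standard library json module:

```python
import importlib
import sys

import json

# After the first import, the module lives in the cache.
assert "json" in sys.modules

# A second import statement returns the exact same cached object.
import json as json_again
assert json_again is sys.modules["json"]

# importlib.reload re-executes the module's code in place and
# returns the same (now refreshed) module object.
reloaded = importlib.reload(json)
assert reloaded is json
```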
This is also why file names matter: if you have a file named json.py in your project, Python may import your file instead of the standard library json. The search order is not neutral; it’s deterministic and local-first.
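One way to check which file a name will resolve to — before importing it — is importlib.util.find_spec. This is a sketch of how I verify that a stdlib name isn’t being shadowed (resolve_origin is a hypothetical helper name):

```python
import importlib.util

def resolve_origin(name):
    """Return the file Python would load for `name`, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# The stdlib json should resolve to a path inside the interpreter's
# lib directory, not inside your project.
print(resolve_origin("json"))
```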
The Simplest Case: Same Directory Imports
If your project is tiny, you can put a few modules next to your entry script and import them directly.
Directory:
my_project/
├── main.py
└── reports.py
reports.py:
def summarize_sales(sales):
    return {
        "count": len(sales),
        "total": sum(sales),
    }
main.py:
import reports
sales = [120, 80, 240]
summary = reports.summarize_sales(sales)
print(summary)
That works because the directory containing main.py is added to sys.path, and Python sees reports.py right there.
This is fine for small scripts, but it doesn’t scale. The moment you add folders or start testing, you need packages.
Turning Folders into Packages
A package is just a directory Python can treat as a namespace. In modern Python, a folder with no __init__.py can still be a namespace package, but I almost always include __init__.py in local projects. It’s explicit, consistent, and works with older tooling.
A clean starter layout looks like this:
my_project/
├── app/
│   ├── __init__.py
│   ├── services.py
│   └── metrics.py
└── main.py
metrics.py:
def percentile(values, p):
    values = sorted(values)
    index = int((len(values) - 1) * p)
    return values[index]
services.py:
from .metrics import percentile

def summarize(latencies):
    return {
        "p50": percentile(latencies, 0.50),
        "p95": percentile(latencies, 0.95),
    }
main.py:
from app.services import summarize
latencies = [12, 20, 35, 40, 18, 25]
print(summarize(latencies))
The key difference is the app package and the dot import in services.py. When you use a relative import like from .metrics import percentile, you’re saying “import from the same package.” This is safer than relying on the working directory, especially when you run tests or invoke the module with python -m.
What __init__.py Actually Does
I treat __init__.py as two things:
- A signal to Python: “this directory is a package.”
- A place to define the public API of the package.
You can keep it empty, but in bigger projects I often re-export the most important functions so imports look cleaner:
# app/__init__.py
from .services import summarize

__all__ = ["summarize"]
Then other modules can just do from app import summarize. I only do this when the package has a well-defined, stable interface. If the API is messy or changing quickly, I leave __init__.py empty and import from the specific submodule.
Running Modules the Right Way
One of the most common errors I see is people running a file directly inside a package. That breaks relative imports because the module is no longer part of a package at runtime.
Bad:
python app/services.py
Good:
python -m app.services
Or better, run the project entry point:
python main.py
The -m flag tells Python to import the module as part of its package and then run it, so relative imports keep working. I recommend this approach when you’re testing internal modules directly.
The __main__.py Pattern (My Favorite for CLIs)
If you want python -m app to run your program, create app/__main__.py:
my_project/
├── app/
│   ├── __init__.py
│   ├── __main__.py
│   └── cli.py
└── pyproject.toml
__main__.py:
from .cli import main

if __name__ == "__main__":
    main()
This gives you a clean execution model and avoids the “run a file directly” trap. In real projects, I always prefer python -m package over python path/to/script.py because it preserves import context.
Importing from Subpackages
As your project grows, you’ll likely split into subpackages. Here’s a realistic layout:
my_project/
├── app/
│   ├── __init__.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── utils.py
│   │   └── validators.py
│   ├── features/
│   │   ├── __init__.py
│   │   ├── analytics.py
│   │   └── billing.py
│   └── shared/
│       ├── __init__.py
│       └── logging.py
└── main.py
From analytics.py, you can import shared utilities using relative imports:
from ..shared.logging import get_logger
Or you can use absolute imports from the project root package:
from app.shared.logging import get_logger
Both work, but I prefer absolute imports for clarity once the package is stable. Relative imports are great for internal connections inside a package, but they can get messy when you move files around. If I expect heavy refactoring, I lean on relative imports for nearby modules; if I want long-term readability, I use absolute imports with the top-level package name.
Relative vs Absolute: How I Decide
I keep this rule in my head:
- If a module is clearly internal and the package name might change, I go relative.
- If a module is core, shared, and used across the project, I go absolute.
This also helps IDEs and static analysis. Absolute imports make it obvious which package you’re referencing, and they’re easier for linters to resolve when your working directory changes.
Import Styles and When I Use Each One
You have several ways to import. Each has a trade-off between clarity, convenience, and namespace control.
Import the module
import app.core.utils
value = app.core.utils.slugify("Monthly Report")
Use this when you want full clarity about where something comes from, or when you’re importing multiple functions and don’t want to list them out.
Import a name directly
from app.core.utils import slugify
value = slugify("Monthly Report")
I use this most often in clean, stable modules. The code reads well and avoids repeated module prefixes.
Rename with as
from app.features.analytics import generate_report as build_report
This is perfect when names collide or when you want a simpler local alias. I also use it when a function name is correct but verbose.
Import multiple symbols
from app.core.validators import validate_email, validate_phone
Great when you need a short list. If the list gets long, I fall back to module imports to keep lines readable.
Avoid from module import *
I never use this in production code. It makes it hard to see what you actually imported, and it breaks static analysis tools. In 2026, most teams rely on language servers and type checkers; wildcard imports confuse them.
Relative Imports Without Headaches
Relative imports use dots to indicate package levels:
- from . import helpers means “this package”
- from ..shared import logging means “one level up”
Example inside app/core/utils.py:
from .validators import validate_slug
from ..shared.logging import get_logger

log = get_logger(__name__)

def slugify(name):
    if not validate_slug(name):
        log.warning("Invalid slug input")
    return name.lower().replace(" ", "-")
Relative imports are concise and reduce dependency on the root package name. But they require that the module is executed as part of a package. If you often run files directly, absolute imports with the top-level package name are safer.
A Subtle Rule: Only Use Relative Imports Within Packages
A file that lives outside any package should not use relative imports. Python will raise:
ImportError: attempted relative import with no known parent package
This is why I avoid placing executable scripts inside package directories unless they’re intended to run with python -m. If I need a one-off script, I put it in a scripts/ folder at the root and keep imports absolute.
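You can also check at runtime whether a module is executing with a parent package. A minimal sketch — __package__ is a non-empty string under python -m, and empty (or missing) when a file is run directly; running_inside_package is a hypothetical helper name:

```python
def running_inside_package():
    # Non-empty under `python -m pkg.mod`; empty or absent when run directly.
    return bool(globals().get("__package__"))

print(running_inside_package())
```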
sys.path Hacks: When I Allow Them and When I Don’t
Yes, you can modify sys.path on the fly:
import sys
sys.path.append("/path/to/project")
from app.core.utils import slugify
This is a quick fix, but I treat it like duct tape. It’s okay for short-lived scripts or notebooks, but it’s fragile for real applications. The moment the path changes or a teammate runs the code from a different directory, you get import failures.
When I need a stable import path, I prefer one of these:
- Use a package layout and run with python -m
- Install the project in editable mode
- Use a .env file and a runner that sets PYTHONPATH
I’ll show that last option in a later section.
Editable Installs: The Modern Way to Make Local Imports Reliable
For medium or large projects, I recommend installing your project in editable mode. This is standard in 2026, and it keeps imports clean without manual sys.path tweaks.
Assuming you have a pyproject.toml, you can do:
pip install -e .
That installs your package as a live link. Now you can import app (or whatever your package name is) from anywhere on your machine. It’s especially helpful for tests, CLI scripts, and notebooks.
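If you don’t have a pyproject.toml yet, a minimal one is enough for an editable install. This is a sketch that assumes setuptools as the build backend (version 64+ supports editable installs from pyproject.toml) and a top-level app/ package; the project name and version are placeholders:

```toml
[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.9"
```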
When I’m working on local tools, I often create a package root like this:
my_project/
├── pyproject.toml
└── app/
    ├── __init__.py
    └── ...
Then import app works reliably no matter where I run the code. This is also the pattern most modern IDEs expect.
Editable Installs + src/ Layout (My Default)
In larger codebases, I prefer a src/ layout so “accidental imports” don’t pass just because the root directory is on sys.path. It forces you to install the package, which makes import behavior more realistic.
my_project/
├── pyproject.toml
└── src/
    └── app/
        ├── __init__.py
        └── core/
            └── utils.py
This layout stops “it works on my machine” bugs where the code only runs because you executed it from the project root. If the import works in a src/ layout, it usually works in production too.
Common Errors and How I Diagnose Them
1) ModuleNotFoundError
This means Python didn’t find the module. I check three things immediately:
- The current working directory and sys.path
- The spelling of the module or package name
- Whether the directory has __init__.py when it should
If you run from app.core import utils but app isn’t on the path, you’ll see this error. Installing in editable mode fixes it instantly.
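You can see which piece of a dotted path failed by catching the exception. A small sketch using a package name that presumably doesn’t exist in your environment:

```python
missing = None
try:
    import app_that_is_not_installed.core.utils  # hypothetical missing package
except ModuleNotFoundError as exc:
    missing = exc.name  # the first segment Python failed to find
print(missing)
```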
2) ImportError
This usually means the module was found, but the name you asked for doesn’t exist. For example:
from app.core.utils import slugifiy
That typo gives you an ImportError even though utils.py is importable. I check the actual file contents and confirm the symbol names.
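The distinction is easy to demonstrate with the standard library — the module imports fine, but a misspelled name does not:

```python
# json itself is importable, so this raises a plain ImportError,
# not ModuleNotFoundError.
error_type = None
try:
    from json import dumpz  # typo for `dumps`
except ImportError as exc:
    error_type = type(exc).__name__
print(error_type)
```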
3) Circular Import
This happens when two modules import each other at top level. Example:
billing.py:
from app.features.analytics import generate_report
analytics.py:
from app.features.billing import calculate_invoice
This can break at runtime because one module is only partially initialized when the other tries to import it. I fix this by:
- Moving shared functions to a third module
- Importing inside functions instead of at top level
- Refactoring responsibilities so the dependency graph is one-directional
Local imports inside functions are a valid short-term fix:
def generate_report(data):
    from app.features.billing import calculate_invoice
    return calculate_invoice(data)
I only use this as a pressure release valve. The long-term fix is to reorganize modules so they don’t depend on each other both ways.
4) Shadowing Standard Library Modules
If you name a local file logging.py or random.py, Python may import your file instead of the built-in. The failure might look like a random AttributeError or missing function. I avoid these names entirely and lint for them in code review. When I must use a similar name, I add a project prefix like app_logging.py.
5) Partial Imports in Long-Lived Processes
In a server or notebook, Python won’t re-import a module unless it’s removed from sys.modules. That can lead to “I fixed the code, why is the old bug still here?” If I’m not sure, I restart the process or reload explicitly. This is especially important in Jupyter and fast-reload dev servers.
Real Project Structure That Scales
Here’s a structure I’ve used repeatedly for production services and internal tools:
my_project/
├── pyproject.toml
├── app/
│   ├── __init__.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── config.py
│   │   └── logging.py
│   ├── domain/
│   │   ├── __init__.py
│   │   ├── invoices.py
│   │   └── users.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── billing.py
│   │   └── analytics.py
│   └── cli/
│       ├── __init__.py
│       └── main.py
└── tests/
    ├── __init__.py
    └── test_billing.py
With this layout, you can:
- Run the CLI with python -m app.cli.main
- Run tests without hacking sys.path
- Import cleanly using from app.services.billing import calculate_total
This is more predictable than keeping a loose collection of files in the root directory.
How I Keep Imports Clean in Tests
In tests, I want imports to behave exactly like production. That’s why I avoid adding the project root to sys.path inside tests. Instead, I install in editable mode and import normally:
from app.services.billing import calculate_total
If I need fixtures or helpers, I put them in tests/ and import them with a tests package. I don’t reach into application internals from tests unless the internal API is explicitly supported.
A Runnable Example with Subpackages
Let’s build a tiny analytics tool with a realistic import chain.
Directory:
my_project/
├── app/
│   ├── __init__.py
│   ├── core/
│   │   ├── __init__.py
│   │   └── math_tools.py
│   ├── reports/
│   │   ├── __init__.py
│   │   └── monthly.py
│   └── cli/
│       ├── __init__.py
│       └── main.py
└── pyproject.toml
math_tools.py:
def moving_average(values, window):
    if window <= 0:
        raise ValueError("window must be > 0")
    if len(values) < window:
        return []
    averages = []
    for i in range(len(values) - window + 1):
        segment = values[i : i + window]
        averages.append(sum(segment) / window)
    return averages
monthly.py:
from app.core.math_tools import moving_average

# Non-obvious logic: basic smoothing before report output
def build_report(daily_sales):
    smoothed = moving_average(daily_sales, window=3)
    return {
        "raw": daily_sales,
        "smoothed": smoothed,
        "trend": "up" if sum(smoothed) > sum(daily_sales) else "stable",
    }
main.py:
from app.reports.monthly import build_report

def main():
    sales = [120, 130, 110, 140, 150, 160]
    report = build_report(sales)
    print(report)

if __name__ == "__main__":
    main()
Run it with:
python -m app.cli.main
This works consistently because app is a package and we run through the module path. No sys.path hacks required.
If You Want a Script Entry Point
Sometimes you want my-report as a command instead of python -m. Add this to pyproject.toml:
[project.scripts]
my-report = "app.cli.main:main"
Now when you install (even editable), you can run my-report from anywhere and your imports still work. This is the cleanest approach for CLIs and internal tools.
PYTHONPATH and Environment-Based Imports
Sometimes you can’t install in editable mode. In that case, you can set PYTHONPATH. This environment variable adds directories to sys.path at runtime.
Example shell command:
PYTHONPATH=/path/to/my_project python -m app.cli.main
I use this in CI pipelines and quick scripts, but I don’t rely on it for day-to-day development. It’s easy to forget, and it’s invisible to others unless they check your environment.
If you use a tool like Poetry or uv, you can also configure scripts that set PYTHONPATH automatically. That’s fine, but I still prefer editable installs when possible.
A Safer Alternative: .pth Files
Another way to add paths is via .pth files in your site-packages directory. These are plain text files that list directories to add to sys.path on startup. I rarely use them, but they can be helpful in controlled environments where you need a global path without editable installs.
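A .pth file is just a newline-separated list of directories. For example, a hypothetical my_project.pth dropped into site-packages would add one project root to sys.path at every interpreter startup:

```
/home/me/code/my_project
```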
When NOT to Use Local Imports
Local imports are great inside a project, but there are times you should package and publish instead:
- You want multiple repositories to share the same code
- You need versioning and release control
- You’re building a library for internal teams
In those cases, I create a separate package and install it like any other dependency. That keeps the main project clean and avoids cross-repo import hacks.
The “Vendor Copy” Trap
I’ve seen teams copy a helper folder across multiple repos. It works at first, then diverges, then becomes a maintenance nightmare. If code needs to be shared across repos, make a proper package and version it. That’s the clean boundary Python’s import system is built to support.
Performance and Import Cost
Imports are usually fast, but the cost can add up in large applications. A big module with heavy top-level computation can add noticeable startup time. I’ve seen delays in the 10–50ms range just from imports that do too much work on load.
My rule: keep top-level code light. If you need to load a config file, initialize a database connection, or perform expensive computation, do it inside a function. That makes imports predictable and avoids slow CLI startup.
Example:
# app/core/config.py
import json
from pathlib import Path

CONFIG_PATH = Path(__file__).resolve().parent / "settings.json"

# Avoid loading at import time
def load_settings():
    with open(CONFIG_PATH) as f:
        return json.load(f)
If I need caching, I keep it explicit:
cached_settings = None

def get_settings():
    global cached_settings
    if cached_settings is None:
        cached_settings = load_settings()
    return cached_settings
This avoids unexpected import-time work and makes performance more predictable.
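A standard-library alternative to the hand-rolled global is functools.lru_cache, which gives you the same import-time laziness with less bookkeeping. A sketch with a constant standing in for the real loader:

```python
import functools

@functools.lru_cache(maxsize=1)
def get_settings():
    # load_settings() would be called here; a constant stands in for the sketch.
    return {"debug": False}

# Every call after the first returns the same cached object.
assert get_settings() is get_settings()
```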
Lazy Imports for Optional Dependencies
Sometimes I only need a heavy dependency in one code path. I’ll import inside the function:
def render_chart(data):
    import matplotlib.pyplot as plt
    # render here
This makes module import faster and avoids optional dependency errors when the code path isn’t used. I keep this pattern contained and documented so it doesn’t become a habit for everything.
Practical Scenarios: How I Choose an Import Strategy
Scenario 1: A One-Off Script With Helpers
If I have a single script plus two helpers, I keep everything in one folder and import directly. I don’t add a package unless I expect the script to grow. It’s not worth the overhead.
Scenario 2: A Growing Tool With Tests
Once I add tests or multiple entry points, I switch to a package layout and install editable. This avoids sys.path hacks and makes tests stable. I almost always use python -m for module execution.
Scenario 3: A CLI + Library
If the same code powers a CLI and is imported elsewhere, I create a package with a clean public API in __init__.py and a script entry point. This makes imports clean for both internal and external usage.
Scenario 4: Notebooks and Data Science
In notebooks, the working directory can be unpredictable. I often install my project in editable mode and restart the kernel. It’s the fastest way to make imports reliable. If I can’t install, I use PYTHONPATH for the session, but I keep it explicit so I don’t forget what I changed.
Edge Cases That Bite Experienced Developers
Namespace Packages (No __init__.py)
Namespace packages let you split a single package across multiple directories. This is useful for plugin systems but can be confusing. If you accidentally omit __init__.py, you may create a namespace package without meaning to. That can lead to weird import behavior where Python merges directories from different locations. Unless I’m intentionally building a plugin architecture, I keep __init__.py in every package folder.
Multiple Modules with the Same Name
If you have app/utils.py and scripts/utils.py, and your sys.path order changes, you might import the wrong one. This usually happens when scripts insert their own directories into sys.path. I avoid repeated generic names and keep utilities in the package itself.
Compiled Artifacts and __pycache__
Python will happily import from a .pyc file if the source is missing. In weird deployments, that can create confusion. If you’re debugging a missing module, make sure the source file exists, not just the cache. I typically clean caches in CI to avoid stale behavior.
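Cache cleaning can be scripted. A small hypothetical helper that removes every __pycache__ directory under a root and reports how many it deleted:

```python
import pathlib
import shutil

def clean_bytecode(root="."):
    """Remove every __pycache__ directory under root; return how many were deleted."""
    # Materialize the list first so we don't delete directories mid-walk.
    caches = sorted(pathlib.Path(root).rglob("__pycache__"))
    removed = 0
    for cache_dir in caches:
        if cache_dir.is_dir():
            shutil.rmtree(cache_dir)
            removed += 1
    return removed
```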
Tooling and Modern Workflows (2026 Reality)
If you rely on language servers, type checkers, or fast test runners, consistent imports are not optional. These tools expect a clean package layout and often assume editable installs.
- Type checkers like mypy and pyright resolve imports like Python does, but they can be stricter. A file that only works because of a sys.path hack will fail analysis.
- Test runners like pytest treat the project root as special, which can mask import issues. I always validate by running tests from a different directory or in CI.
- IDEs cache import graphs. If imports behave strangely, I often invalidate caches and restart the IDE before rewriting code.
Modern workflows are less forgiving. If you want reliable tooling, design your imports as if the project will be installed.
A Quick Checklist I Use Before Shipping
- The package has __init__.py where it should.
- The entry points run via python -m or script entry points.
- The code runs from outside the project root (import realism test).
- No sys.path edits in application code.
- No shadowing standard library names.
- Imports are stable under pytest and a clean virtual environment.
If all of those are true, I’m confident the import story is solid.
Putting It All Together: A Mental Model You Can Reuse
When I’m faced with an import problem, I walk through this sequence:
1) What is the current working directory?
2) What is in sys.path?
3) Is the target module in a package (does the directory have __init__.py)?
4) Am I running as a module (python -m) or as a file?
5) Are there naming conflicts (local file vs stdlib)?
It’s simple, but it catches 90% of problems without guesswork.
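That sequence is mechanical enough to script. A hypothetical debug helper that gathers the first few answers for a given module name:

```python
import importlib.util
import os
import sys

def import_diagnostics(module_name):
    """Collect the facts I check first when an import misbehaves."""
    try:
        spec = importlib.util.find_spec(module_name)
    except ModuleNotFoundError:
        spec = None  # a parent package in the dotted path is missing
    return {
        "cwd": os.getcwd(),
        "sys_path_head": sys.path[:3],
        "found": spec is not None,
        "origin": getattr(spec, "origin", None),
    }

print(import_diagnostics("json"))
```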
Final Thoughts
Local imports in Python aren’t hard, but they are subtle. Most issues come from mixing execution styles or relying on incidental path behavior. The moment you scale beyond a single script, it pays to structure your code like a real package, install it in editable mode, and run modules properly.
If you take one thing from this guide, let it be this: imports reflect project structure. If the structure is clear, imports become simple. If the structure is messy, imports become a daily tax.
I’ve been burned enough times to make this non-negotiable. When I set up a project now, I treat import strategy as a first-class design decision, not an afterthought.