How to Import Local Modules with Python

I still remember the first time a medium-sized Python script turned into a sprawl of files: a data parser here, a reporting helper there, and suddenly every change felt like rummaging through a junk drawer. The fix wasn’t a new framework; it was learning how Python actually finds and loads local modules. When you get imports right, your codebase stays readable, tests stay focused, and teammates can jump in without map-reading. In this guide, I’ll show you how I structure local modules, how I choose between absolute and relative imports, how I prevent circular import failures, and how I debug the usual import errors. You’ll get runnable examples, a few project layouts that scale, and modern 2026-era workflows that keep imports predictable in editors, tests, and CI. If you’ve ever moved a file and watched everything explode, you’re in the right place.

How Python Finds Local Modules (and Why It Surprises People)

Python’s import system looks simple on the surface—import something—but it’s powered by a search path and a loader pipeline that can trip you up. The key list is sys.path, which is built at runtime from:

  • the directory containing the script you run
  • standard library locations
  • installed site-packages
  • any additions from environment variables like PYTHONPATH
  • any paths you append in code

That means “local” is relative to the execution context, not to the file doing the importing. If you run a module directly, its directory becomes the anchor. If you run a package with python -m package.module, the package root becomes the anchor. This is why from . import helpers works in one case and fails in another.

I recommend you treat the project root as the import root. That keeps the mental model steady across local development, tests, and CI. It also helps your editor and language server resolve symbols quickly.

A quick, runnable proof of what Python sees:

# file: tools/show_path.py
import sys
from pprint import pprint

print("sys.path at runtime:")
pprint(sys.path)

Run it from different places and you’ll see how “local” changes:

  • python tools/show_path.py anchors on tools/
  • python -m tools.show_path anchors on the project root

When imports act weird, I start here. It tells me whether I have an import bug or a path bug.

Project Layouts That Keep Imports Predictable

I like layouts that make the import root obvious. These two patterns cover most projects I work on.

Pattern 1: Simple script with a modules/ package

my_project/
├── modules/
│   ├── __init__.py
│   ├── io_tools.py
│   └── math_tools.py
└── main.py

main.py imports from modules:

# file: main.py
from modules import io_tools
from modules.math_tools import average

records = io_tools.load_csv("sales_2025.csv")
print(average(records))

Pattern 2: “src” layout for larger apps

my_project/
├── src/
│   └── inventory/
│       ├── __init__.py
│       ├── app.py
│       └── pricing/
│           ├── __init__.py
│           ├── discounts.py
│           └── taxes.py
└── tests/

Run like this:

python -m inventory.app

Then import like this:

# file: src/inventory/app.py
from inventory.pricing.discounts import seasonal_discount
from inventory.pricing.taxes import vat_rate

This layout makes your project behave like an installed package, even during local dev. It also reduces “works on my machine” errors because tests and scripts share the same import root. I use it whenever a project goes beyond a few files.
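The src layout works best when the package is installable. A minimal packaging sketch, assuming setuptools as the build backend (the project name matches the example above; adjust it to yours):

```toml
# file: pyproject.toml (sketch; setuptools assumed as the build backend)
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "inventory"
version = "0.1.0"

[tool.setuptools.packages.find]
where = ["src"]
```

With this in place, pip install -e . makes python -m inventory.app work from any directory.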

Import Techniques I Use Daily (and When to Avoid Them)

Here’s how I choose between common import styles, with a bias toward clarity and stability.

Import a whole module (good default)

# file: modules/io_tools.py
def load_csv(path: str) -> list[dict]:
    # Example logic only
    return []

# file: main.py
import modules.io_tools

rows = modules.io_tools.load_csv("customers_2025.csv")

This keeps namespaces explicit and prevents name collisions. It also makes it easier to spot unused imports in reviews.

Import specific names (use with care)

from modules.math_tools import average, median

print(average([10, 12, 9]))

I do this when the names are very clear and short. If I have to scroll to remember where something came from, I prefer module imports.

Alias to reduce noise

from modules.reporting.charts import create_report as build_report

build_report(data)

Aliasing is great when a function name is long or clashes with a local variable. I keep aliases readable so future me doesn’t need a key.

Avoid wildcard imports

# Please avoid this

from modules.math_tools import *

Wildcard imports make code reviews harder and can silently overwrite names. They also confuse static analyzers and slow down autocomplete.

Importing Across Packages and Subpackages

As soon as you add subpackages, you need to be clear about absolute vs relative imports. I prefer absolute imports inside a package when it’s a real project, and relative imports when I’m writing a small library with tight internal cohesion.

Absolute imports in a package

my_project/
└── modules/
    ├── __init__.py
    ├── core/
    │   ├── __init__.py
    │   ├── utils.py
    │   └── validators.py
    └── extensions/
        ├── __init__.py
        └── data_processing.py

# file: modules/extensions/data_processing.py
from modules.core.utils import normalize_text
from modules.core.validators import ensure_columns

def preprocess(rows: list[dict]) -> list[dict]:
    ensure_columns(rows, required=["name", "region"])  # non-obvious logic
    return [normalize_text(row) for row in rows]

Relative imports for internal cohesion

# file: modules/core/utils.py
from .validators import ensure_columns

# file: modules/extensions/data_processing.py
from ..core.utils import normalize_text

Relative imports are concise, but they require package execution context. If you run utils.py directly, from .validators fails. That’s why I almost never run internal modules as scripts. I add an entrypoint instead.
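A tiny probe makes the execution-context rule concrete: Python records the context in __name__ and __package__, and relative imports only work when __package__ is non-empty.

```python
# Drop this probe into any module to see how Python classified it at runtime.
# When a file is run directly (python path/to/file.py), __package__ is empty
# or None, which is exactly why "from . import helpers" fails in that context.
def import_context() -> dict:
    return {"name": __name__, "package": __package__}

if __name__ == "__main__":
    print(import_context())
```

Run it both ways (directly and with python -m) and compare the package field.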

Entry points prevent surprises

# file: main.py
from modules.extensions.data_processing import preprocess

# ... use preprocess

This keeps the internal modules import-safe and avoids any need for awkward if __name__ == "__main__": blocks in leaf modules.

When Imports Go Wrong: Circular Dependencies and Fixes

Circular imports happen when module A imports module B and module B imports module A. Python can partially initialize modules, so you end up with AttributeError or ImportError that feels random.

Here’s a minimal example:

# file: billing/invoice.py
from billing.tax import compute_tax

def total_with_tax(amount: float) -> float:
    return amount + compute_tax(amount)

# file: billing/tax.py
# This top-level import completes the cycle and fails at import time,
# because billing.invoice is only partially initialized at that point
from billing.invoice import total_with_tax

def compute_tax(amount: float) -> float:
    base = amount * 0.2
    return base if total_with_tax(amount) > 100 else base * 0.9

Two fixes I use often:

1) Split shared logic into a third module

# file: billing/common.py
def base_tax(amount: float) -> float:
    return amount * 0.2

# file: billing/tax.py
from billing.common import base_tax

def compute_tax(amount: float) -> float:
    return base_tax(amount)

# file: billing/invoice.py
from billing.tax import compute_tax

def total_with_tax(amount: float) -> float:
    return amount + compute_tax(amount)

2) Move imports inside functions (last resort)

# file: billing/tax.py
def compute_tax(amount: float) -> float:
    from billing.invoice import total_with_tax  # deferred to call time
    base = amount * 0.2
    return base if total_with_tax(amount) > 100 else base * 0.9

Local imports solve startup cycles but can hide dependencies. I use them only when refactoring is not immediately practical.

Modern 2026 Workflows: Tooling, Tests, and Packaging Choices

Imports are not just a runtime concern anymore. In 2026, your editor, tests, and CI are all import-aware. If they disagree about the import root, you’ll see lint noise and broken tests.

Here’s how I keep everything aligned:

1) Use python -m for module execution

Instead of running python app.py, run:

python -m inventory.app

This makes the package root unambiguous and fixes most relative import failures.

2) Configure the test runner to use the same root

With pytest, you can set the import path in pyproject.toml:

[tool.pytest.ini_options]
pythonpath = ["src"]

That makes tests behave like your application.

3) Let the editor use the same root

Most language servers now read pyproject.toml. In VS Code or Zed, you can point the workspace to the project root and set python.analysis.extraPaths to src when using the src layout. This prevents spurious “unresolved import” warnings.

4) Prefer editable installs for multi-package workspaces

When I’m working with two local packages, I install them in editable mode:

pip install -e ./libs/payments

Then imports resolve exactly as they will in production, but I still edit locally.

5) AI assistants and import hygiene

If you use AI-assisted refactoring, make it rewrite imports in bulk after file moves. I always verify with a quick rg "from .* import" and run tests. These assistants are fast, but they can miss dynamic import paths or __all__ exports.

Traditional vs Modern import habits

Area            | Traditional approach   | Modern 2026 approach
Running scripts | python file.py         | python -m package.module
Project layout  | flat folder            | src/ package layout
Path config     | ad-hoc sys.path.append | pyproject.toml + test config
Local packages  | manual path hacks      | editable installs
Editor setup    | guessy paths           | workspace + config for root

I recommend the modern side almost always. It’s more consistent across teams and automation.

Debugging Import Errors Like a Pro

Two errors show up more than any others: ModuleNotFoundError and ImportError. Here’s how I attack them quickly.

ModuleNotFoundError

Symptoms: Python can’t find your module at all.

Checklist I use:

  • Did I run the module with python -m from the project root?
  • Is the directory actually a package (missing __init__.py)?
  • Is the file name spelled the same as the import?
  • Is there a shadowing file? (example: email.py shadowing the stdlib email module)
  • Does sys.path include the root I expect?

Quick diagnostic snippet:

# file: tools/debug_imports.py
import sys
from pathlib import Path

print("cwd:", Path.cwd())
print("import root candidate:", Path(__file__).resolve().parents[1])
print("sys.path:", sys.path)

ImportError or AttributeError on a name

Symptoms: the module is found but doesn’t contain what you expect.

Common causes:

  • Circular import partial initialization
  • The name moved, but your import didn’t
  • The file you’re importing is not the one you think (shadowing)

Debug tip:

import modules.math_tools as mt

print(mt.__file__)  # Confirms which file loaded

When I temporarily use sys.path.append

I avoid it in production code, but I will use it for quick experiments or scripts in a notebooks folder.

# file: notebooks/load_experiment.py
import sys
from pathlib import Path

project_root = Path(__file__).resolve().parents[1]
sys.path.append(str(project_root))  # short-lived experimentation

from modules.io_tools import load_csv

If this script becomes important, I move it into the package and run it with python -m instead.

Best Practices I Follow (and When I Break Them)

Here’s the rule set I keep in my head. It’s not academic; it’s what keeps real projects stable.

1) Keep entrypoints thin

  • Your entry script should orchestrate, not hold business logic. This keeps imports stable and makes refactoring safe.

2) Prefer absolute imports in apps

  • They are more readable and robust to execution context.

3) Keep package roots explicit

  • src/ layout + python -m is my default for new work.

4) Avoid circular dependencies early

  • If two modules need each other, extract shared logic to a third module.

5) Avoid import side effects

  • Keep module-level code small. If importing a module writes a file or hits a network, you’ll regret it. I keep side effects behind functions or if __name__ == "__main__": blocks.

6) Use meaningful names

  • If a module is called helpers.py, it will grow into a junk drawer. I prefer names like pricing_rules.py or geo_normalization.py.

When I break these rules

If I’m debugging a production incident and I need a temporary local import, I’ll use sys.path.append for speed. But I always follow up with a clean refactor. In teams, I treat sys.path.append in commits as a red flag.

What “Local Module” Actually Means (Context, Not Geography)

One of the most confusing things about local modules is that “local” is not about folder proximity. It’s about what Python considers importable at runtime. Two files can sit right next to each other and still be “far apart” to the import system if the execution context doesn’t include their parent package.

I think of “local” in three tiers:

1) Same package, same import root

– Files under the same package directory that share the same root on sys.path.

– You can import with absolute or relative imports.

2) Same repo, different package

– Multiple packages in a mono-repo layout.

– Requires editable installs or PYTHONPATH configuration to import cleanly.

3) Same repo, not a package

– A scripts/ or notebooks/ folder outside the package.

– Best handled with python -m entrypoints or a tools/ package.

This model helps me decide whether I should treat a module like part of the app or a standalone script. If it’s going to be reused, I make it a package; if it’s one-off, I still keep it close to the root and run it with -m.

The Subtle Role of init.py

__init__.py files do two jobs: they mark a directory as a regular package and optionally define package-level behavior. In older Python versions, __init__.py was mandatory; since Python 3.3, namespace packages can exist without it. But for local modules in most apps, I still recommend adding it consistently.

Why I keep __init__.py files:

  • It prevents confusing package shadowing in mixed environments.
  • It clarifies intent: “this is a package, not just a folder.”
  • It allows controlled re-exports through __all__.

A clean example of re-exporting:

# file: inventory/pricing/__init__.py
from .discounts import seasonal_discount
from .taxes import vat_rate

__all__ = ["seasonal_discount", "vat_rate"]

Now a user can write:

from inventory.pricing import seasonal_discount

That’s not mandatory, but it can make imports more expressive. I use it for small, stable surfaces that are safe to expose to the rest of the app.

Relative Imports: The Good, the Bad, and the Unavoidable

I avoid relative imports across package boundaries because they’re fragile when the package is executed differently. But there are legitimate uses:

  • Deep internal modules that should never be imported outside the package.
  • Libraries where you want internal cohesion and strict boundaries.
  • Highly modular packages where relative imports reduce verbosity and make code easier to move internally.

A guideline I follow:

  • If the module is part of the public API of an app, use absolute imports.
  • If the module is internal-only and deep inside the package, relative is okay.

What I avoid:

# Avoid this pattern for app code

from ..core.utils import normalize_text

It’s not wrong, but in a large app it makes readers do mental gymnastics. I’d rather see the package root explicitly:

from inventory.core.utils import normalize_text

That said, relative imports can be a lifesaver during refactors because you can move a whole subpackage without rewriting every import statement. I’ve used this tactic to migrate a large internal library while minimizing changes to call sites.

Running Modules Directly vs Running Packages

A huge percentage of import errors come from running a module directly. When you do python path/to/file.py, the script’s directory becomes the first sys.path entry. That often breaks relative imports.

The safer pattern is to run modules as part of a package:

python -m inventory.cli

Then set up an entrypoint file inside your package:

# file: src/inventory/cli.py
from inventory.app import run

if __name__ == "__main__":
    run()

This makes cli.py a reliable entrypoint and lets you keep internal imports stable. When I onboard someone new to a repo, I show them the python -m command first. It prevents 80% of “works on my machine” messages.

A Practical Import Checklist for File Moves

File moves are the number one cause of broken imports in teams. Here’s the checklist I use in a refactor:

1) Move the file (or folder) physically.

2) Update absolute imports to point to the new path.

3) Run tests or at least python -m entrypoints.

4) Search for stale references with rg "old.module.path".

5) Check for shadowing if a filename now matches a stdlib or installed package.

I keep refactors small and frequent because big “move everything” changes create hidden import errors that tests might miss. If you’re stuck, start with the files that import the moved module directly, then work outward.
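The search in step 4 can be backed by a script that imports every submodule of a package, so stale references fail loudly. A sketch using pkgutil, demonstrated here against the stdlib json package as a stand-in for your own:

```python
# Import a package and all of its submodules; any stale import raises.
import importlib
import pkgutil

def import_all(package_name: str) -> list[str]:
    package = importlib.import_module(package_name)
    loaded = [package_name]
    for info in pkgutil.walk_packages(package.__path__, prefix=package_name + "."):
        importlib.import_module(info.name)  # raises on broken imports
        loaded.append(info.name)
    return loaded

if __name__ == "__main__":
    print(import_all("json"))
```

Run it against your package after a move; a clean pass means no module-level import is stale.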

Local Modules vs Installed Packages (and Why It Matters)

A local module is any .py file or package that’s resolved via sys.path at runtime. An installed package is just a local module that’s been placed on sys.path by installation. The distinction matters because installed packages are discovered consistently regardless of your current working directory.

That’s why I like editable installs for local packages. It turns a “local module” into an installed package without copying files. You edit once, and imports resolve everywhere.

If you have a repo with multiple packages, set up a libs/ folder and install them in editable mode:

pip install -e ./libs/analytics

pip install -e ./libs/payments

Now both are importable anywhere, and you avoid custom path hacks. This becomes essential when you have notebooks, scripts, or shared tools that live outside the main app package.

When to Use PYTHONPATH (and When Not To)

PYTHONPATH lets you add directories to sys.path globally. It’s powerful, but also risky because it’s invisible to your teammates unless it’s documented.

I use PYTHONPATH only in these cases:

  • quick experiments
  • legacy systems where I can’t change the execution model
  • temporary CI fixes during migrations

I avoid it for production or shared dev setups because it creates invisible dependencies. If you must use it, set it explicitly in scripts or in pyproject.toml for test tools. That way, it’s discoverable and reproducible.

Common Pitfalls (and How I Avoid Them)

Here’s a list of mistakes I’ve made or seen repeatedly, and how I prevent them.

1) Shadowing a standard library module

– Example: naming a file json.py or random.py.

– Fix: rename your file, or move it into a package with a unique name.

2) Running a submodule directly

– Example: python inventory/pricing/taxes.py will break relative imports.

– Fix: use python -m inventory.pricing.taxes or create a dedicated entrypoint.

3) Mixing relative and absolute imports inconsistently

– Leads to confusing circular imports.

– Fix: pick a standard and enforce it with lint rules.

4) Missing __init__.py in package directories

– Causes ModuleNotFoundError depending on environment.

– Fix: add __init__.py files in all package dirs.

5) Overusing sys.path.append

– Makes code fragile and hard to replicate.

– Fix: switch to python -m or editable installs.

6) Importing from a sibling file without a package

– Example: from helpers import parse in a directory that isn’t a package.

– Fix: create a package directory or move the helper into the same file if it’s trivial.

Edge Cases That Bite in Real Projects

Imports can fail in subtle ways when you introduce non-obvious edges. Here are a few that matter in 2026 workflows.

1) Namespace packages in multi-repo environments

If two different directories contain the same top-level package name, Python may combine them into one namespace package. This can be useful, but it can also lead to confusing import resolution if you accidentally duplicate a package name.

If you see inconsistent behavior between environments, check for duplicate package names across editable installs or PYTHONPATH entries.
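A quick way to spot an accidental merge: a package's __path__ attribute lists every directory that contributed to it, so more than one entry is a signal worth investigating.

```python
# List the directories a package was assembled from.
import importlib

def package_paths(name: str) -> list[str]:
    module = importlib.import_module(name)
    # Plain modules have no __path__; packages have one or more entries
    return list(getattr(module, "__path__", []))

if __name__ == "__main__":
    print(package_paths("json"))
```

A regular package shows one entry; a namespace package spread across installs shows several.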

2) Dynamic imports with importlib

Dynamic imports are sometimes needed for plugin systems. But they bypass static analysis and can cause runtime failures if module paths change.

A safe pattern:

# file: plugins/loader.py
import importlib

def load_plugin(path: str):
    module = importlib.import_module(path)
    return module

Then keep plugin paths in config:

PLUGINS = [
    "inventory.plugins.sales",
    "inventory.plugins.audit",
]

If you use dynamic imports, add a small smoke test that imports all configured modules at startup. It’s a cheap way to catch broken paths early.
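The smoke test can be as small as a loop over the configured paths. A sketch, with stdlib module names standing in for real plugin paths like inventory.plugins.sales:

```python
# Try to import each configured plugin path; collect failures instead of crashing.
import importlib

def check_plugins(paths: list[str]) -> list[str]:
    failures = []
    for path in paths:
        try:
            importlib.import_module(path)
        except ImportError as exc:  # ModuleNotFoundError is a subclass
            failures.append(f"{path}: {exc}")
    return failures

if __name__ == "__main__":
    print(check_plugins(["json.tool", "csv"]))
```

Wire it into a startup check or a test so a renamed plugin fails CI, not production.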

3) Test discovery vs runtime imports

Some test runners collect tests by importing modules. If your test modules execute heavy import side effects, tests can fail before they even run.

I keep tests lightweight at import time and push heavy logic into functions or fixtures. This avoids flaky behavior in CI.

4) Packaging with zipapp or single-file bundles

If you package your app into a single file or zip, relative imports inside the package still work, but path-based hacks like sys.path.append can break. That’s another reason to keep imports clean and package-oriented.

Performance Considerations (What Actually Matters)

Import performance is usually not a big deal, but it can become noticeable in CLI tools or serverless functions that boot frequently. Here’s what I pay attention to:

  • Import time scales with module size: giant modules slow down imports.
  • Heavy top-level code: avoid expensive computations at module import time.
  • Lazy imports: moving imports inside functions can reduce startup cost but may hide dependencies.

I consider lazy imports acceptable when:

  • the module is expensive to import
  • the function is rarely called
  • there is no circular dependency alternative

Example of a lazy import for a heavy dependency:

# file: inventory/reporting/export.py
def export_to_pdf(data):
    import reportlab  # heavy dependency
    # ... generate PDF

This can reduce startup time in apps where PDF export is rarely used. But I document it so future me doesn’t wonder why the dependency is hidden.

Practical Scenarios: What I Do in Real Life

Let me show how I apply these rules in common situations.

Scenario 1: A CLI tool with subcommands

I structure it like this:

cli_tool/
├── src/
│   └── cli_tool/
│       ├── __init__.py
│       ├── cli.py
│       └── commands/
│           ├── __init__.py
│           ├── sync.py
│           └── report.py
└── pyproject.toml

cli.py acts as the entrypoint:

# file: src/cli_tool/cli.py
from cli_tool.commands.sync import run_sync
from cli_tool.commands.report import run_report

def main():
    # parse args (omitted)
    run_sync()

if __name__ == "__main__":
    main()

Run with:

python -m cli_tool.cli

This keeps all commands importable without path hacks.

Scenario 2: A data science project with notebooks

Notebooks often live outside the package, which makes imports weird. I use one of two approaches:

Approach A: make a tools/ package

project/
├── src/
│   └── analysis/
├── notebooks/
└── tools/
    ├── __init__.py
    └── paths.py
tools/paths.py can expose a stable project root helper.
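A sketch of what that helper can look like; it assumes tools/ sits directly under the project root:

```python
# file: tools/paths.py (sketch)
from pathlib import Path

def project_root() -> Path:
    # parents[0] is tools/, parents[1] is the project root above it
    return Path(__file__).resolve().parents[1]
```

Notebooks can then call project_root() instead of hard-coding relative paths.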
Approach B: install the package in editable mode

pip install -e .

Then imports inside notebooks work like production imports. This is the approach I prefer because it mirrors real usage and makes notebooks portable.

Scenario 3: A web app with background jobs

I keep jobs in a package and import from the app package root. This avoids hacks in job runners or task queues. For example:

# file: src/webapp/jobs/cleanup.py
from webapp.db.session import get_session

def run_cleanup():
    session = get_session()
    # ... cleanup

Then run jobs with python -m webapp.jobs.cleanup or a worker that respects the package root.

Alternative Approaches (and Why I Rarely Use Them)

There are a few other ways to handle local imports. They can work, but I treat them as exceptions.

1) Modifying sys.path at runtime

Works fast, but it’s fragile and hard to debug in teams. I only use it in notebooks or quick experiments.

2) Using sitecustomize.py

You can add a sitecustomize.py to automatically modify sys.path at Python startup. This is powerful, but it’s also magic and hard to discover. I avoid it unless I’m working in a constrained environment where I can’t control execution commands.

3) Relying on implicit namespace packages

Namespace packages can be useful for plugins, but they complicate debugging and packaging. I prefer explicit packages with init.py unless I have a strong reason.

Linting and Formatting Rules That Keep Imports Clean

Imports become easier to manage when you have consistent tooling. My minimal setup:

  • Formatter: keep import sections separate from other code.
  • Linter: detect unused imports and import cycles.
  • Import sorter: keep import ordering consistent.

In practice, I configure tools to:

  • group stdlib, third-party, and local imports
  • sort alphabetically within each group
  • warn on unused imports

This reduces noise in reviews and prevents messy import blocks. It’s not about style; it’s about avoiding hidden bugs.
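As one concrete setup, here is a sketch using Ruff, assuming its import-sorting rules ("I") and the unused-import check ("F401"); adjust known-first-party to your package name:

```toml
# pyproject.toml (sketch; Ruff assumed as linter and import sorter)
[tool.ruff.lint]
select = ["I", "F401"]

[tool.ruff.lint.isort]
known-first-party = ["inventory"]
```

Equivalent configurations exist for isort and Flake8 if that is what your team uses.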

Circular Imports: A Deeper Dive

Circular imports are often a design smell. When they happen, it’s usually because two modules are doing too much or have poorly defined boundaries.

Here’s a real-world example with a service layer and model layer:

# file: app/services/user_service.py
from app.models.user import User

def create_user(name: str) -> User:
    user = User(name=name)
    user.save()
    return user

# file: app/models/user.py
from app.services.user_service import create_user

class User:
    def __init__(self, name: str):
        self.name = name

    def save(self):
        # imagine persistence
        pass

User should not depend on create_user. This is a layering violation. The fix is to move shared logic into a third module or re-architect the layers so dependencies go one way.

My rule: imports should flow from higher-level modules to lower-level modules, not both ways. When that rule breaks, circular imports emerge.

How I Migrate a Flat Script to a Package (Step by Step)

If you have a single folder with multiple .py files and imports are breaking, here’s how I migrate it safely:

1) Create a package folder, e.g. src/myapp/.

2) Move core modules into the package.

3) Add __init__.py to every package directory.

4) Update imports to use the package root.

5) Add an entrypoint module (e.g. app.py).

6) Run with python -m myapp.app.

This is the most reliable path I’ve found for turning a messy script into a maintainable project.

Testing Strategies for Import Stability

Imports can be verified with tests just like behavior. I use a few small tests that act as guardrails.

1) Smoke test for key imports

# file: tests/test_imports.py
def test_imports_smoke():
    import inventory.app
    import inventory.pricing.discounts

This catches missing dependencies and import-time errors early.

2) Test plugins or dynamic imports

If you have a plugin list, iterate and import each plugin in a test. This catches typos before runtime.

3) Run CLI entrypoints in CI

A simple python -m inventory.app --help can catch import issues in CI without running full workflows.

Production Considerations: Deployment, Containers, and CI

Imports can fail in deployment even if they work locally. The usual culprit is the working directory or missing package installs.

Here’s my deployment checklist:

  • Container working directory: ensure your container runs in the project root.
  • Package installation: install your package, don’t just copy files.
  • Entrypoints: run with python -m or use a console script entrypoint.
  • Environment: keep PYTHONPATH minimal and explicit.

In CI, I run a test step that imports the main package and runs a minimal command. It’s a cheap safety net.

Practical Rules of Thumb (Short Version)

If you only remember a few things, here’s what I want you to keep:

  • Treat the project root as the import root.
  • Use python -m to run modules.
  • Prefer absolute imports in app code.
  • Keep init.py in packages.
  • Avoid sys.path.append in committed code.
  • Fix circular imports by refactoring, not by hiding them.

Closing Thoughts and Next Steps

When you treat imports as part of your architecture, everything else becomes easier. I encourage you to settle on a clear project root, choose absolute imports for application code, and rely on python -m to make execution context predictable. If you’ve been fighting imports, start by running a sys.path check and compare it to the structure you expect. That quick check solves most of the weirdness before you start rewriting files.

If you’re planning a new project, pick a src/ layout and wire your test runner to it from day one. That single decision reduces path hacks later and keeps your editor, tests, and CI in sync. If you already have a working project, you can migrate gradually: add a package root, move one module at a time, and run tests after each move. In my experience, import refactors go smoothly when you move in small steps and keep entrypoints thin.

Your next practical move: choose one import pattern from this guide and apply it to a small slice of your codebase. I’d start with replacing one relative import chain with an absolute import from the package root. Run your tests, check the editor warnings, and verify the runtime behavior. Once that works, you’ll have a reliable template for the rest of the project.
