I still remember the first time a teammate dropped a `from utilities import *` into a production service and then overwrote a helper name in a later file. Nothing broke immediately, but a week later a refactor mysteriously changed behavior. That incident pushed me to treat module boundaries as real contracts, not just a convenience. The `__all__` symbol is one of the few Python-native tools that lets you make those contracts explicit and dependable. It's not just about hiding names; it's about communicating what your module promises to expose and preventing accidental namespace collisions.
If you've ever wondered why some imports feel "clean" and others feel risky, `__all__` is the missing piece. In this post I'll show you exactly how it behaves, what it does and doesn't do, and how to use it in modern Python projects. I'll also cover common mistakes, package-level patterns, and when you should avoid it altogether. My aim is to help you build modules that are easy to consume, easy to maintain, and safe to evolve.
The Contract: What `__all__` Actually Controls
At its simplest, `__all__` is a list of strings that tells Python which names should be exported when someone writes `from your_module import *`. That sounds narrow, but it's an important contract surface. The moment you allow `import *`, you've created a wildcard boundary, and `__all__` is the only official switch that governs that boundary.
Here’s the key behavior to remember:
- If `__all__` is defined, `from module import *` imports only the names in that list.
- If `__all__` is not defined, `import *` brings in all names that do not start with an underscore.
- `__all__` does not block access to other names via direct imports or attribute access.
So `__all__` is not a security tool. It does not prevent `import module` followed by `module._internal_name` (or any other attribute access) from working. It only defines what the wildcard import exposes. That's still extremely valuable, because wildcard imports are the most likely place for accidental name collisions.
A simple example makes this concrete:
```python
# email_tools.py

def send_email():
    return "sent"

def validate_address(address: str) -> bool:
    return "@" in address

__all__ = ["send_email"]
```
```python
# app.py
from email_tools import *

print(send_email())
print(validate_address("user@example.com"))
```
The second print in app.py will raise a NameError because `validate_address` isn't in `__all__` and therefore was never bound by the wildcard import. But if you change the import to `import email_tools`, you can still call `email_tools.validate_address`. So the contract is about export intent, not access control.
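To see both halves of the contract in one runnable sketch, the snippet below writes a throwaway copy of the `email_tools` module to a temporary directory, then shows that `__all__` filters the wildcard import while direct attribute access still works:

```python
# Self-contained demo: __all__ governs wildcard imports, not attribute access.
import sys
import tempfile
import textwrap
from pathlib import Path

tmp = tempfile.mkdtemp()
Path(tmp, "email_tools.py").write_text(textwrap.dedent("""
    def send_email():
        return "sent"

    def validate_address(address):
        return "@" in address

    __all__ = ["send_email"]
"""))
sys.path.insert(0, tmp)

import email_tools

# Direct access works regardless of __all__:
print(email_tools.validate_address("user@example.com"))  # True

# A wildcard import only binds the names listed in __all__:
namespace = {}
exec("from email_tools import *", namespace)
print("send_email" in namespace)        # True
print("validate_address" in namespace)  # False
```

The temp-dir scaffolding is just there to keep the sketch self-contained; in a real project the module would simply live on your import path.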
That's why I treat `__all__` as a documentation artifact first and a wildcard-control tool second. It communicates the "public API" of a module in a way both humans and tooling can interpret.
A Minimal, Runnable Example With Realistic Names
When I teach `__all__`, I always use a realistic module rather than generic names. Here's a complete example you can run locally:
```python
# pricing.py
from decimal import Decimal

def calculate_total(price: Decimal, quantity: int) -> Decimal:
    return price * quantity

def round_currency(amount: Decimal) -> Decimal:
    return amount.quantize(Decimal("0.01"))

DEFAULT_CURRENCY = "USD"

__all__ = ["calculate_total", "DEFAULT_CURRENCY"]
```
```python
# checkout.py
from decimal import Decimal  # needed explicitly: pricing's __all__ doesn't re-export it
from pricing import *

print(calculate_total(Decimal("19.99"), 3))
print(DEFAULT_CURRENCY)
print(round_currency(Decimal("19.999")))
```
The last print will raise a NameError, because `round_currency` isn't in `__all__` and so was never bound by the wildcard import. That's exactly what you want: you can still keep the helper around for internal use, but you don't expose it to wildcard consumers.
This pattern scales well in larger projects because it keeps your public API small and stable while giving you freedom to refactor internals. If you later rename `round_currency` or change its signature, you don't accidentally break wildcard users, because they were never supposed to see it in the first place.
Packages: `__all__` in `__init__.py` Is a Top-Level API Map
Most of the time, `__all__` earns its keep inside packages rather than single modules. In a package, the `__init__.py` file can define `__all__` to declare which names should be available when someone writes `from package import *`.
Consider this structure:
```
commerce/
    __init__.py
    pricing.py
    inventory.py
    analytics.py
```
Here's a clean `__init__.py`:
```python
# commerce/__init__.py
from .pricing import calculate_total
from .inventory import get_stock_level

__all__ = ["calculate_total", "get_stock_level"]
```
Now, `from commerce import *` exposes only `calculate_total` and `get_stock_level`. You can still access `commerce.analytics` with explicit imports, but your package presents a curated API surface by default.
I like to think of `__init__.py` plus `__all__` as a package's front door. It tells consumers where to start and discourages them from rummaging around in internal modules.
A subtle detail: modules vs names
You can list submodules in `__all__` too. When someone runs `from package import *`, Python will import submodules named in `__all__` even if `__init__.py` doesn't import them, but importing them explicitly is clearer and also makes them available as attributes after a plain `import commerce`. Here's how you can expose full modules instead of individual functions:
```python
# commerce/__init__.py
from . import pricing
from . import inventory

__all__ = ["pricing", "inventory"]
```
This makes `from commerce import *` bring in the modules, not their contents. It's a clean option when you want namespacing rather than direct access.
When `__all__` Helps (And When It Hurts)
In my experience, `__all__` is most valuable in three scenarios:
- You ship a library or shared utility package.
- You maintain a large internal codebase with many contributors.
- You want tooling (like IDEs and static analysis) to reflect a “public API.”
But there are also times when `__all__` adds friction rather than clarity. If you're writing a tiny single-module script or a fast prototype, don't bother. If you never use `from module import *` anywhere, `__all__` becomes pure documentation, and you might not want that maintenance overhead.
Here’s a decision guideline I use:
- Use `__all__` when the module is a building block others consume.
- Skip it when the module is a one-off or internal-only script.
- Always use it in packages meant for reuse or distribution.

If you're unsure, ask yourself: "If someone only looked at this module's `__all__`, would they understand what I expect them to use?" If yes, you're using it correctly.
Common Mistakes That Make `__all__` Useless
I've seen `__all__` used incorrectly in a few predictable ways. Here are the ones worth avoiding.
1) Treating `__all__` as a security tool
I mentioned this earlier, but it's worth repeating. `__all__` does not prevent access to names. It only affects wildcard imports. If you need real encapsulation, rely on naming conventions (leading underscores), documentation, and packaging discipline. If you need security boundaries, use process boundaries or permission systems, not Python import tricks.
2) Forgetting to import names in `__init__.py`
When you add names to a package's `__all__`, they must be available in the package namespace. This is a frequent oversight.
```python
# ecommerce/__init__.py
__all__ = ["calculate_total"]
```
On its own, this just makes `from ecommerce import *` fail. You need to actually bring `calculate_total` into the package namespace:
```python
# ecommerce/__init__.py
from .pricing import calculate_total

__all__ = ["calculate_total"]
```
3) Listing string names that don’t exist
`__all__` isn't validated when you define it. But if you list a string that doesn't match any name, the error only surfaces later: `from module import *` raises an AttributeError at the import site, which can surprise downstream users. I recommend adding quick tests or running a linter that checks this.
A simple sanity check:
```python
# tests/test_public_api.py
import ecommerce

for name in ecommerce.__all__:
    assert hasattr(ecommerce, name)
```
4) Using `__all__` to hide conflicts instead of fixing them
I've seen teams use `__all__` to hide accidental collisions caused by `import *` usage. That's a sign you should stop using wildcard imports entirely. `__all__` helps with explicit design; it's not a band-aid.
`__all__` and Modern Tooling (2026 Context)
In 2026, the ecosystem around Python packaging and tooling has matured. That makes `__all__` more useful than ever. Modern IDEs, language servers, and AI-assisted coding tools often use static analysis to infer module APIs. When `__all__` is present, they can present cleaner auto-complete lists, reduce suggestion noise, and generate better summaries.
I’ve noticed three practical benefits in my day-to-day work:
- IDE symbol search is faster because exported names are explicit.
- AI coding assistants generate more accurate imports when they can infer public APIs.
- Static analysis tools can flag unused exports or missing names with simple rules.
This makes `__all__` a maintenance tool, not just a runtime feature. In a large codebase, it's the difference between "everything is public by accident" and "the API is what we say it is."
Traditional vs modern module boundaries
Here’s a quick comparison of how teams used to treat module boundaries versus how I recommend doing it now:
| Traditional approach | Modern approach |
| --- | --- |
| Public API is implicit: anything not underscored | `__all__` lists public names explicitly |
| `from module import *` tolerated everywhere | Wildcard imports limited, with `__all__` clarifying what they expose |
| Risky when helpers are used externally | Internals stay hidden and free to change |
| IDE guesses exported symbols | IDE reads `__all__` and trims suggestions |

The modern approach keeps your codebase easier to refactor and more predictable for users.
Real-World Scenarios Where `__all__` Pays Off
Let me walk through a few situations where I've found `__all__` to be a lifesaver.
Scenario 1: Shared utilities
Imagine a shared module called string_utils.py used across multiple services. It contains 20 helper functions, but only 6 are actually stable and intended for external use. If you expose everything, downstream teams might start using the internal helpers. That locks you in.
By defining `__all__` with just the 6 supported functions, you set a clear expectation. Over time, you can safely refactor the other 14 helpers without worrying about breakage. If someone still imports a hidden helper explicitly, that's a conscious decision and a code review conversation.
Scenario 2: Plugin registries
In plugin systems, modules are loaded dynamically and the host system looks for specific names. `__all__` can become a whitelist that indicates which symbols should be registered.
For example:
```python
# plugins/monitoring.py

class HealthCheck:
    def run(self) -> str:
        return "ok"

class _InternalProbe:
    pass

__all__ = ["HealthCheck"]
```
If the plugin loader uses wildcard imports, it will only register HealthCheck. This keeps internal classes invisible, preventing accidental registration of helpers.
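Here's what such a loader might look like. This is a hedged sketch, not a real framework API: `load_plugins` and the registry dict are illustrative names of my own, and the fallback branch mirrors wildcard-import behavior for plugin modules that don't define `__all__`:

```python
# Hypothetical plugin loader that treats __all__ as a registration whitelist.
import importlib

def load_plugins(module_names):
    registry = {}
    for mod_name in module_names:
        module = importlib.import_module(mod_name)
        # Prefer the explicit __all__ contract; otherwise fall back to
        # non-underscore names, just like a wildcard import would.
        exported = getattr(module, "__all__", None)
        if exported is None:
            exported = [n for n in vars(module) if not n.startswith("_")]
        for name in exported:
            registry[name] = getattr(module, name)
    return registry

# Demo with a stdlib module that defines __all__:
registry = load_plugins(["json"])
print("dumps" in registry)             # True: listed in json.__all__
print("_default_encoder" in registry)  # False: underscored internal stays out
```

The same whitelist idea works whether plugins are discovered from entry points, a config file, or a directory scan.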
Scenario 3: Public SDKs
If you maintain a client SDK, `__all__` lets you define the supported interface clearly. Consumers often use wildcard imports in notebooks or quick scripts, and you don't want them to accidentally depend on internals. With `__all__`, you can make your intended surface obvious.
How `__all__` Interacts With Name Shadowing
One underrated benefit of `__all__` is controlling collisions. Consider this:
```python
# logging_helpers.py

def info(msg: str) -> None:
    print(f"INFO: {msg}")

__all__ = ["info"]
```
```python
# app.py
from logging_helpers import *

info = "this shadows the function"
info("hello")  # TypeError: 'str' object is not callable
```
The last line raises a TypeError because you reassigned `info` to a string. This is not an `__all__` issue, but it highlights why wildcard imports are fragile. If you still decide to use them, at least `__all__` controls which names are exposed, so you reduce the surface for accidental collisions.
If you want safety, prefer explicit imports:
```python
from logging_helpers import info as log_info
```
This way you avoid shadowing and you make your intent clear. `__all__` still helps because it signals that `info` is a supported export.
Performance Considerations
People sometimes ask whether `__all__` improves performance. The answer is "not meaningfully." The difference in module import time is usually negligible, often in the low single-digit milliseconds for typical modules. If you're importing hundreds of symbols across many modules, you might see a small difference, but it's not the primary benefit.
Where performance can improve is in developer productivity: smaller namespaces and explicit exports reduce cognitive overhead and make refactors faster. If you want runtime speed, focus on avoiding heavy top-level imports in `__init__.py`, not on `__all__` itself.
Avoid heavy imports at package import time
Be careful with this pattern:
```python
# package/__init__.py
from .data_pipeline import DataPipeline

__all__ = ["DataPipeline"]
```
If `data_pipeline` imports big dependencies at import time, then `import package` becomes heavy. A lightweight alternative is to expose names lazily or shift imports into functions. But that's a separate design decision, and `__all__` itself is not the cause.
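If you do want the package to stay light, module-level `__getattr__` (PEP 562) lets you keep the name in `__all__` while deferring the heavy import until first use. The sketch below builds a throwaway package in a temporary directory so it runs as-is; `heavy_pkg` and `DataPipeline` are illustrative names, not a real library:

```python
# Lazy exports via module-level __getattr__ (PEP 562).
import sys
import tempfile
import textwrap
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
pkg = tmp / "heavy_pkg"
pkg.mkdir()
(pkg / "data_pipeline.py").write_text(textwrap.dedent("""
    class DataPipeline:
        def run(self):
            return "done"
"""))
(pkg / "__init__.py").write_text(textwrap.dedent("""
    __all__ = ["DataPipeline"]

    def __getattr__(name):
        # Import the heavy submodule only when the name is first accessed.
        if name == "DataPipeline":
            from .data_pipeline import DataPipeline
            return DataPipeline
        raise AttributeError(f"module 'heavy_pkg' has no attribute {name!r}")
"""))
sys.path.insert(0, str(tmp))

import heavy_pkg

# The submodule is not imported until the attribute is touched:
print("heavy_pkg.data_pipeline" in sys.modules)  # False
pipeline = heavy_pkg.DataPipeline()
print("heavy_pkg.data_pipeline" in sys.modules)  # True
print(pipeline.run())  # done
```

The trade-off is that typos in attribute access also fail lazily, so the public-API test shown later becomes even more valuable with this pattern.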
Testing Your Public API
I recommend adding a small test that ensures your `__all__` list is valid. This is especially important if you refactor frequently. Here's a simple pattern I've used:
```python
# tests/test_public_api.py
import mypackage

def test_public_api_names_exist():
    for name in mypackage.__all__:
        assert hasattr(mypackage, name)
```
You can also test that `__all__` excludes private names:
```python
# tests/test_public_api.py
import mypackage

def test_no_private_names_exported():
    assert all(not name.startswith("_") for name in mypackage.__all__)
```
These tests are cheap but they prevent a lot of subtle regressions. I've seen them catch mistakes during refactors when module exports were renamed but `__all__` wasn't updated.
`__all__` in Type Checking and Static Analysis
With modern type checkers, `__all__` can affect what tools consider public. This is useful when you want to restrict which symbols appear in documentation or in auto-complete results. If you're using doc generation tools, they often respect `__all__` by default.
The key point: `__all__` is a convention that tooling understands. It's not just a runtime switch. That makes it a great place to encode your intent.
If you use type hints, consider using `__all__` to keep the visible surface small. It keeps your type-checking surface clean and reduces the number of names you're committed to supporting long-term.
When NOT to Use `__all__`
There are legitimate cases where `__all__` is more trouble than it's worth.
- You never use wildcard imports and you don't ship the module to others.
- Your codebase is extremely small and you value speed of iteration over API clarity.
- You're working in exploratory notebooks where explicit export control just slows you down.
- You expect the module contents to shift daily and you don't want to update `__all__` every time.
In those cases, skip it. `__all__` is a tool, not a requirement. I only use it when I want a stable, explicit public API.
Deeper Example: A Mini SDK With Stable Surface
To show how all works in a more realistic environment, let’s build a tiny SDK-like package with a stable public surface but an internal layout that can evolve.
Directory structure:
```
acme_sdk/
    __init__.py
    client.py
    auth.py
    _http.py
```
Here are the modules:
```python
# acme_sdk/_http.py
import json
import urllib.request

class _HttpClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url
        self.token = token

    def get(self, path: str) -> dict:
        req = urllib.request.Request(
            self.base_url + path,
            headers={"Authorization": f"Bearer {self.token}"},
        )
        with urllib.request.urlopen(req) as response:
            return json.load(response)
```
```python
# acme_sdk/auth.py

def parse_token(raw: str) -> str:
    # pretend to validate and normalize
    return raw.strip()
```
```python
# acme_sdk/client.py
from ._http import _HttpClient
from .auth import parse_token

class Client:
    def __init__(self, base_url: str, token: str):
        self._http = _HttpClient(base_url, parse_token(token))

    def get_user(self, user_id: str) -> dict:
        return self._http.get(f"/users/{user_id}")
```
```python
# acme_sdk/__init__.py
from .client import Client

__all__ = ["Client"]
```
Consumers can do:
```python
from acme_sdk import Client
```
And only `Client` is part of the public API. Internals like `_HttpClient` are kept out of wildcard exports and are clearly private by naming convention. If later I replace `_http.py` with an async version or a requests-based client, the public API remains stable.
This pattern is common in real libraries, and `__all__` is what makes it clean. The package tells users, "Here is the supported way to use us," while keeping implementation details flexible.
Edge Cases and Gotchas You’ll Actually Hit
1) Conditional exports
Sometimes you want to export different symbols based on environment (for example, optional dependencies). That’s legal but can get tricky.
```python
# optional_feature.py
try:
    import orjson
    HAS_ORJSON = True
except ImportError:
    import json
    HAS_ORJSON = False

def dumps(data: dict) -> str:
    if HAS_ORJSON:
        # orjson returns bytes, so decode to keep the str contract
        return orjson.dumps(data).decode()
    return json.dumps(data)

__all__ = ["dumps"]
```
In this case, `__all__` is stable even though the underlying implementation changes. That's fine. But avoid changing the export list itself based on conditions, because that becomes unpredictable for users and for tooling.
2) Late-binding `__all__`
You might be tempted to generate `__all__` dynamically, like this:
```python
__all__ = [name for name in globals() if not name.startswith("_")]
```
It works, but it makes the API implicit again, and tools can’t always interpret it. I recommend explicit lists unless you have a very good reason.
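There's a second, subtler problem with the dynamic recipe: `globals()` includes everything in the module namespace, including names you imported purely for internal use. A small simulation (the function and import here are illustrative):

```python
# Dynamic __all__ sweeps imported names into the "public" list.
from decimal import Decimal  # internal dependency, never meant to be exported

def calculate_total(price, quantity):
    return price * quantity

dynamic_all = [name for name in dict(globals()) if not name.startswith("_")]

print("calculate_total" in dynamic_all)  # True, as intended
print("Decimal" in dynamic_all)          # True as well: the import leaked in
```

With an explicit list, `Decimal` could never sneak into the export contract by accident.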
3) Re-export loops
In packages, you may re-export names from submodules, but be careful to avoid circular import traps.
Bad pattern:
```python
# package/__init__.py
from .a import Foo
from .b import Bar

__all__ = ["Foo", "Bar"]
```
If `a.py` imports `package`, or `b.py` imports `package` during module import, you can hit circular import errors. To avoid this, keep `__init__.py` minimal, or move shared imports into a third module. This isn't about `__all__` itself, but re-exporting tends to amplify circular import risks.
4) Star imports inside packages
Using `from .module import *` inside your own package is usually a smell. It can obscure the dependency graph and makes it harder to track what's in `__all__`. Prefer explicit imports in internal code and use `__all__` only at the boundary.
Alternative Approaches to API Control
`__all__` is not your only option. Here are some alternatives that sometimes fit better.
1) Explicit imports everywhere
You can skip wildcard imports entirely and always use explicit imports:
```python
from pricing import calculate_total, DEFAULT_CURRENCY
```
This approach is clean and minimizes accidental namespace pollution. If your team enforces this style, `__all__` becomes optional, though it still helps as documentation.
2) Explicit “public” modules
Another strategy is to create a dedicated public.py or api.py module that aggregates what you want to expose:
```python
# package/api.py
from .pricing import calculate_total
from .inventory import get_stock_level

__all__ = ["calculate_total", "get_stock_level"]
```
Consumers import from `package.api`, and internal modules stay hidden. This can reduce circular imports in `__init__.py` and make the public surface more explicit.
3) Custom export conventions
Some teams use a public constant or structured metadata to track API surfaces. That's not standard, but it can integrate with tooling in internal ecosystems. I still prefer `__all__` because it's built-in and widely understood.
4) Module docstring conventions
If your module starts with a docstring that lists public functions, that's documentation but not enforceable at runtime. It's better than nothing, but `__all__` makes the intent executable.
Practical Guidelines for Day-to-Day Use
Here's a quick checklist I use when adding `__all__` to a new module or package:
- Keep the list short: only exports you want to support long-term.
- Use explicit lists, not dynamic generation.
- Mirror your public docs: if it's in docs, it should be in `__all__`.
- Add a small test to validate names.
- Avoid re-exporting heavy modules in `__init__.py` unless you need to.
If you follow these rules, `__all__` stays a helpful contract rather than a maintenance burden.
A Deeper Package Example: Versioned API Surfaces
In larger libraries, you might need versioned APIs (v1, v2). Here's how `__all__` can help keep those boundaries explicit.
```
myapi/
    __init__.py
    v1.py
    v2.py
    _shared.py
```
```python
# myapi/v1.py
from ._shared import _request

def get_user(user_id: str) -> dict:
    return _request(f"/v1/users/{user_id}")

__all__ = ["get_user"]
```
```python
# myapi/v2.py
from ._shared import _request

def get_user(user_id: str, include_meta: bool = False) -> dict:
    path = f"/v2/users/{user_id}"
    if include_meta:
        path += "?meta=1"
    return _request(path)

__all__ = ["get_user"]
```
```python
# myapi/__init__.py
from . import v1
from . import v2

__all__ = ["v1", "v2"]
```
Now users can do `from myapi import v1, v2` and choose the API surface explicitly. The internal `_shared` module stays private. This is a clean pattern for evolving APIs without breaking old consumers.
`__all__` and Documentation Generation
If you use documentation generators, you can align `__all__` with your public docs. Many doc tools will skip private names (underscored) and prefer names listed in `__all__`. That means your docs can stay focused on the intended API.
I've used this approach to keep documentation stable while allowing internal refactors. As long as the `__all__` list stays consistent, the docs stay consistent. This is a small but meaningful boost to maintainability.
Mitigating Circular Imports With `__all__`
One of the most frustrating issues in Python packages is circular imports. `__all__` doesn't solve this directly, but it can influence how you structure your package. I've had success with these patterns:
- Keep `__init__.py` as thin as possible.
- Avoid importing large modules just to re-export names.
- Use a dedicated `api.py` module for exports.
- Move shared utilities into `_shared.py` or similar.

In other words, use `__all__` for the interface, but keep the implementation import graph simple.
Modern Workflow: How I Use `__all__` In Practice
In my daily workflow, I treat `__all__` like a public contract checklist:
- I write the module or package normally.
- I decide which names are stable, intended, and documented.
- I add those names to `__all__`.
- I add a quick test that asserts the names exist.
- I run a linter or static analysis pass to catch mistakes.
This takes a few minutes, but it pays dividends in refactor safety. I’ve had fewer API regressions and fewer accidental dependencies on internals.
Common Pitfalls in Team Environments
In team settings, a few patterns cause repeated pain. If you want to avoid friction, watch for these:
- A dev adds a new helper and forgets to update `__all__`, causing confusion for users.
- A refactor renames a public function but doesn't update `__all__` or docs.
- A package re-exports too many names, making the public API huge and unstable.
- Someone uses `from package import *` internally, masking a namespace conflict that later blows up.
Most of these are easy to prevent with a small test and clear code review expectations.
`__all__` and Notebooks / Data Science Code
In notebooks, people often use wildcard imports to reduce typing. That's a high-risk practice in production code but can be acceptable in exploratory work. If your package is likely to be used in notebooks, `__all__` can improve the experience by keeping the namespace clean and predictable.
I've seen notebooks with dozens of helper imports where a single name collision caused silent errors. `__all__` doesn't solve that fully, but it reduces the number of symbols that get pulled into the notebook's global scope. That's a win for reliability.
Balancing Explicitness and Ergonomics
There's a tension between explicitness (clear APIs, careful imports) and ergonomics (fast iteration, fewer keystrokes). `__all__` helps balance that tension. It allows you to support wildcard imports for convenience while still controlling what those imports expose.
If your team values ergonomics in internal tools, you can allow `import *` in specific modules, but use `__all__` to keep that safe. If your team values explicitness, `__all__` becomes a way to reinforce what counts as public even when people are not using wildcard imports.
A Small Checklist for Code Reviews
When reviewing code that introduces or changes `__all__`, I look for these things:
- Does the list include only truly public names?
- Are the names imported into the namespace (for packages)?
- Are there any missing updates to docs or type hints?
- Is the list stable enough for users to depend on?
- Do tests cover basic validity of `__all__`?

This keeps API boundaries clean without turning `__all__` into a bureaucratic chore.
Summary: Why `__all__` Is Still Worth It
The main value of `__all__` isn't about wildcard imports at all. It's about intentional design. It forces you to make a decision: "Which names am I willing to support and maintain?" That question is the heart of sustainable Python code.
If you treat `__all__` as a contract, not a trick, it becomes one of the most useful tools in the language for building clean modules and stable packages. It clarifies your intent, improves tooling, and makes refactors safer. And when it doesn't add value, you can simply skip it.
In short: `__all__` is small, but it's powerful. Use it to define what your module truly promises, and your future self (and your teammates) will thank you.


