Most Python code I review treats generators as a one-way street: you pull values out with a for loop, and the generator quietly produces the next item. That mental model is fine until you hit a real system problem: you want the producer to react to runtime feedback. Maybe your log shipper needs to change verbosity without restarting. Maybe a parser needs more bytes right now, not later. Maybe a metrics pipeline should slow down when an upstream service reports overload.
That is where send() earns its keep. It turns a generator from a passive iterator into a paused function that can accept input mid-flight. The key shift is simple: yield is not just a statement that emits a value; it is an expression that can evaluate to a value provided from the outside.
In this post I show you how I think about send() in modern Python (2026), how it behaves at runtime, where it fits nicely (coroutine-style state machines and data pipelines), and where I prefer other approaches (async iterators, queues, or plain functions). I will also cover common mistakes I see in production code, plus practical debugging and testing tactics.
Generators as Suspended Functions (Not Just Fancy Iterators)
A generator function looks like a normal function, but the presence of yield changes the execution model. Instead of running from top to bottom and returning once, it can pause and resume while keeping its local state.
Here is a small generator that emits batched IDs:
```python
def batch_ids(start_id: int, batch_size: int):
    current = start_id
    while True:
        batch = list(range(current, current + batch_size))
        current += batch_size
        yield batch

ids = batch_ids(start_id=1000, batch_size=3)
print(next(ids))  # [1000, 1001, 1002]
print(next(ids))  # [1003, 1004, 1005]
```
A few things matter for understanding send():
- Each `yield` pauses execution and hands a value back to the caller.
- When you resume the generator (via `next()` or `send()`), execution continues right after the last `yield`.
- Local variables (`current`, `batch`) persist across pauses.
That persistence is why generators are a great fit for streaming large datasets: you pay memory for the current state, not for the whole result.
But that same persistence also enables something more interesting: the paused generator can wait for instructions.
A Mental Model That Actually Helps
When I teach send(), I avoid the “iterator” metaphor after the first five minutes. The mental model I use in code reviews is:
- A generator is a function whose stack frame can be paused.
- Every `yield` is a checkpoint.
- `send(x)` resumes the paused function and injects `x` into the paused `yield` expression.
That’s why generator-based coroutines feel like “objects with methods” even though they’re just one function.
yield Is an Expression: Where the Sent Value Goes
The most important distinction I teach engineers is this:
- `yield` produces a value outward.
- The `yield` expression evaluates to the value sent back in.
If you never call send(), the value that comes back in is always None (because next(gen) is defined as gen.send(None) under the hood).
This generator prints whatever you send it:
```python
def console_echo():
    while True:
        incoming = yield
        print(f'received: {incoming!r}')

listener = console_echo()
next(listener)  # prime
listener.send('ping')
listener.send({'event': 'deploy', 'status': 'ok'})
```
What is happening step-by-step:
1. `listener = console_echo()` creates the generator object. No code inside runs yet.
2. `next(listener)` runs until the first `yield`. That `yield` produces `None` to the caller and pauses.
3. `listener.send('ping')` resumes at the suspended `yield`. The `yield` expression evaluates to `'ping'`, so `incoming` becomes `'ping'`. The loop prints it, then hits `yield` again and pauses.
Two details that are easy to miss:
- In `incoming = yield`, the generator is not yielding a value outward; it is yielding `None` implicitly. That is fine because the point is to receive.
- `send(value)` returns the next value the generator yields outward (or raises `StopIteration` if it finishes).
That second point matters for patterns like calculators and state machines, where each send() call can return a computed response.
One More Subtlety: There Are Two Directions
In message = yield balance, there are two separate “moments” packed into one line:
- Outbound: the generator yields `balance`.
- Inbound: later, the generator resumes and the `yield` expression becomes whatever the caller sent.
If you keep those as two separate phases in your head, a lot of confusing control flow becomes straightforward.
The Rules of send(): Priming, None, and Generator States
send() is strict about generator state. I keep these rules in my head:
- You cannot send a non-`None` value into a generator that has not started.
- You can always call `send(None)` on a new generator, and it behaves like `next()`.
- After a generator finishes, any further `send()` raises `StopIteration`.
Priming: Why the First Step Is Special
When a generator is newly created, it is paused before the first line of the function body. There is no suspended yield expression waiting to receive a value yet.
So this fails:
```python
def needs_first_value():
    first = yield
    yield f'got {first}'

g = needs_first_value()
g.send('hello')  # TypeError
```
The fix is to prime it:
```python
g = needs_first_value()
next(g)  # advance to the first yield
print(g.send('hello'))
```
I recommend a helper when you have many such coroutines, to make the prime step obvious and consistent:
```python
def start(gen):
    next(gen)
    return gen

worker = start(needs_first_value())
print(worker.send('hello'))
```
#### A More Defensive start()
In real code, I often make start() defensive so it fails loudly when a generator yields a value during priming (which usually signals a mismatch in protocol):
```python
def start(gen):
    first = gen.send(None)
    if first is not None:
        raise RuntimeError(
            f'expected coroutine to yield None on prime, got {first!r}'
        )
    return gen
```
This catches bugs early when someone modifies the generator and changes what the first yield produces.
send(None) Is the Same as next()
This is not trivia; it helps you reason about control flow.
- `next(g)` means: resume the generator, and treat the `yield` expression as `None`.
- `g.send(x)` means: resume the generator, and treat the `yield` expression as `x`.
What About StopIteration?
If the generator returns (explicitly or implicitly), Python raises StopIteration to the caller. That includes send().
In my experience, production bugs often come from forgetting that send() can raise StopIteration just like next().
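A minimal sketch of that failure mode, using a hypothetical two-step coroutine: the third `send()` runs the body to completion, so the caller sees `StopIteration` instead of a value.

```python
def two_steps():
    first = yield 'ready'
    yield f'got {first}'
    # falling off the end raises StopIteration in the caller

g = two_steps()
print(next(g))       # 'ready' (prime: runs to the first yield)
print(g.send('a'))   # 'got a'
try:
    g.send('b')      # the generator body ends here
except StopIteration:
    print('coroutine finished')
```

Wrapping every `send()` in a `try`/`except StopIteration` (or using a driver helper that does) keeps this failure explicit.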
Generator States You Should Know
Generators have a few important states that affect send() behavior:
- Created (not started)
- Suspended (paused at a `yield`)
- Running (currently executing)
- Closed (finished or explicitly closed)
Why do I care? Because “ValueError: generator already executing” is what you get when you accidentally re-enter a generator while it is running (usually via a callback that indirectly calls send() again).
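You can provoke the error deliberately, which makes it easier to recognize in the wild. A contrived sketch: the coroutine invokes a callback that calls `send()` on the same generator while it is still running.

```python
def reenter():
    while True:
        handle = yield
        handle()  # the callback calls back into us -> ValueError

g = reenter()
next(g)
try:
    # The lambda closes over g and re-enters it mid-step.
    g.send(lambda: g.send('again'))
except ValueError as e:
    print(e)  # generator already executing
```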
Practical Pattern 1: A Stateful Accumulator That Responds Immediately
A classic send() pattern is a calculator-like coroutine: each time you send a value, the generator updates state and yields a response.
Here is a running balance tracker that supports deposits, withdrawals, and a reset command:
```python
def account_balance(starting_balance: int = 0):
    balance = starting_balance
    while True:
        message = yield balance  # yield current balance and wait for a message
        if message is None:
            # Treat a bare next() like a no-op
            continue
        if message == 'reset':
            balance = 0
            continue
        action = message.get('action')
        amount = message.get('amount', 0)
        if action == 'deposit':
            balance += amount
        elif action == 'withdraw':
            balance -= amount
        else:
            raise ValueError(f'unknown action: {action!r}')

ledger = account_balance(starting_balance=100)
print(next(ledger))                                       # 100
print(ledger.send({'action': 'deposit', 'amount': 25}))   # 125
print(ledger.send({'action': 'withdraw', 'amount': 60}))  # 65
print(ledger.send('reset'))                               # 0
```
Why I like this example:
- The generator yields a value outward (`balance`) and also receives commands inward (`message`).
- The caller gets a response synchronously from `send()`. That makes this pattern feel like calling a method, but the state is held inside the suspended frame.
When I reach for this in real code:
- Command-driven state machines (parsers, protocol handlers)
- Incremental aggregations (metrics, counters, rolling windows)
- Long-running workers that need runtime control without threads
When I do not reach for it:
- Anything that should be cancellable and awaitable across I/O boundaries: I go straight to `async def` and `await`.
Make It Safer with Typed Commands
The dictionary approach is nice for a blog post, but it’s easy to create a “stringly typed” protocol that becomes fragile.
In production, I prefer explicit command objects:
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposit:
    amount: int

@dataclass(frozen=True)
class Withdraw:
    amount: int

@dataclass(frozen=True)
class Reset:
    pass

def account_balance(starting_balance: int = 0):
    balance = starting_balance
    while True:
        message = yield balance
        if message is None:
            continue
        if isinstance(message, Reset):
            balance = 0
        elif isinstance(message, Deposit):
            balance += message.amount
        elif isinstance(message, Withdraw):
            balance -= message.amount
        else:
            raise TypeError(f'unknown message type: {type(message)!r}')
```
This buys you:
- Easier refactors (rename fields safely)
- Better IDE support
- Cleaner tests (construct explicit commands)
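For instance, a driver for the typed version reads like a script of commands. This is a condensed, self-contained sketch (Deposit and Reset only) of the same idea:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposit:
    amount: int

@dataclass(frozen=True)
class Reset:
    pass

def account_balance(starting_balance: int = 0):
    # Trimmed-down typed coroutine: Deposit and Reset only.
    balance = starting_balance
    while True:
        message = yield balance
        if isinstance(message, Reset):
            balance = 0
        elif isinstance(message, Deposit):
            balance += message.amount

ledger = account_balance(starting_balance=100)
print(next(ledger))                     # 100
print(ledger.send(Deposit(amount=25)))  # 125
print(ledger.send(Reset()))             # 0
```

If a field is renamed, every constructor call site fails loudly instead of silently passing the wrong dictionary key.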
Practical Pattern 2: Pipelines with Backpressure and Control Messages
Many teams first meet send() while building streaming pipelines. The naive pipeline is pull-based: each stage yields values, the next stage iterates them.
send() makes a push-style stage possible, where the stage accepts items, buffers them, and yields processed output on demand.
A Line Framer That Accepts Chunks
Suppose you read from a socket (or file) in chunks, but you need complete newline-delimited records. A generator can keep a buffer and accept more bytes via send().
```python
def line_framer(encoding: str = 'utf-8'):
    buffer = b''
    chunk = yield  # prime point
    while True:
        if chunk is not None:
            buffer += chunk
        lines = []
        while True:
            newline_index = buffer.find(b'\n')
            if newline_index == -1:
                break
            line_bytes = buffer[:newline_index]
            buffer = buffer[newline_index + 1:]
            lines.append(line_bytes.decode(encoding))
        chunk = yield lines  # reply with the completed lines, wait for more data

framer = line_framer()
next(framer)

# Send partial data
print(framer.send(b'alpha\nbe'))    # ['alpha']
print(framer.send(b'ta\ngamma\n'))  # ['beta', 'gamma']
```
A subtle point: every `yield` in a coroutine is also a receive point. If this generator yielded each line individually from the inner loop, the bytes sent while it was still emitting lines would be silently discarded. Collecting the completed lines into a list keeps the exchange request/response: each `send()` of a chunk returns exactly one reply, and the caller can feed more bytes as they arrive.
#### Handling End-of-Stream Cleanly
Real streams end. When the socket closes, you usually have one last partial line in the buffer. Decide your policy:
- Drop it (common for strict line protocols)
- Yield it (common when last line doesn’t require newline)
- Treat it as an error
Here is a version that supports an explicit EOF signal and flushes the last line:
```python
class EOF:
    pass

def line_framer(encoding: str = 'utf-8'):
    buffer = b''
    chunk = yield None
    while True:
        if isinstance(chunk, EOF):
            # Flush the trailing partial line, if any, then finish.
            if buffer:
                yield [buffer.decode(encoding)]
            return
        if chunk:
            buffer += chunk
        lines = []
        while True:
            i = buffer.find(b'\n')
            if i == -1:
                break
            line, buffer = buffer[:i], buffer[i + 1:]
            lines.append(line.decode(encoding))
        chunk = yield lines
```
Now the calling side can do `framer.send(EOF())` to finish. If a partial line was buffered, that call returns it; the next advance (or the `EOF` send itself, when the buffer is empty) raises `StopIteration`.
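Driving the end-of-stream path looks like this. The sketch below is self-contained and condensed (it batches completed lines into a list so each `send()` has exactly one reply):

```python
class EOF:
    pass

def line_framer(encoding='utf-8'):
    # Condensed framer: each send() of bytes returns the lines it completed.
    buffer = b''
    chunk = yield None
    while True:
        if isinstance(chunk, EOF):
            if buffer:  # flush the trailing partial line
                yield [buffer.decode(encoding)]
            return
        buffer += chunk
        lines = []
        while b'\n' in buffer:
            line, buffer = buffer.split(b'\n', 1)
            lines.append(line.decode(encoding))
        chunk = yield lines

framer = line_framer()
next(framer)
print(framer.send(b'alpha\nbe'))  # ['alpha']
print(framer.send(b'ta'))         # [] -- no complete line yet
print(framer.send(EOF()))         # ['beta'] -- flushed partial line
```

After the flush reply, one more advance would raise `StopIteration`, which is the caller's signal that the stream is fully drained.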
Adding Backpressure as a Simple Signal
Backpressure means the consumer can tell the producer to slow down. With send(), you can represent that as a control message.
Here is a processor that accepts events and a `pause` command. In real systems I usually combine this with a loop that yields status, so monitoring code can introspect what is happening.
```python
def event_processor():
    paused = False
    processed = 0
    message = yield {'status': 'ready', 'processed': processed}
    while True:
        if message == 'pause':
            paused = True
            message = yield {'status': 'paused', 'processed': processed}
            continue
        if message == 'resume':
            paused = False
            message = yield {'status': 'ready', 'processed': processed}
            continue
        if paused:
            # Ignore events while paused
            message = yield {'status': 'paused', 'processed': processed}
            continue
        if message is not None:
            processed += 1
        message = yield {'status': 'ready', 'processed': processed}

p = event_processor()
print(next(p))                                       # {'status': 'ready', 'processed': 0}
print(p.send({'event': 'login', 'user_id': 12}))     # processed: 1
print(p.send('pause'))                               # status: 'paused'
print(p.send({'event': 'purchase', 'user_id': 12}))  # ignored while paused
print(p.send('resume'))
print(p.send({'event': 'purchase', 'user_id': 12}))  # processed: 2
```
Is this the only way to express backpressure? No. In modern Python codebases I more often use queue.Queue (threads) or asyncio.Queue (async) because those integrate cleanly with cancellation and timeouts.
But send() is still a great fit when:
- Everything is synchronous
- You want a tiny state machine with zero extra objects
- You need very explicit control over the exact point where the worker pauses
A More Realistic Backpressure Pattern: Token Bucket
When people say “backpressure,” they often really mean “rate limiting.” A generator coroutine can implement a token bucket that you can tune at runtime.
The core idea:
- The caller sends timestamps (or “tick” events).
- The generator yields whether the next event is allowed.
- The caller can send control messages to adjust the rate.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tick:
    now: float

@dataclass(frozen=True)
class SetRate:
    tokens_per_sec: float

@dataclass(frozen=True)
class Allow:
    ok: bool
    tokens: float

def token_bucket(tokens_per_sec: float, capacity: float):
    rate = tokens_per_sec
    cap = capacity
    tokens = capacity
    last = None
    msg = yield Allow(ok=True, tokens=tokens)
    while True:
        if isinstance(msg, SetRate):
            rate = msg.tokens_per_sec
            msg = yield Allow(ok=True, tokens=tokens)
            continue
        if not isinstance(msg, Tick):
            msg = yield Allow(ok=False, tokens=tokens)
            continue
        if last is None:
            last = msg.now
        elapsed = msg.now - last
        last = msg.now
        tokens = min(cap, tokens + elapsed * rate)
        if tokens >= 1.0:
            tokens -= 1.0
            msg = yield Allow(ok=True, tokens=tokens)
        else:
            msg = yield Allow(ok=False, tokens=tokens)
```
This is still synchronous and deterministic, which makes it easy to test. But it also gives you a knob (SetRate) you can turn without restarting anything.
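To illustrate how deterministic the testing gets, here is a trimmed-down bucket (plain timestamps instead of `Tick`/`SetRate` messages, to keep it short) driven with synthetic clock values:

```python
def token_bucket(rate: float, cap: float):
    # Condensed sketch: the caller sends plain timestamps,
    # the bucket replies True (allowed) or False (throttled).
    tokens = cap
    last = None
    msg = yield None
    while True:
        now = msg
        if last is None:
            last = now
        tokens = min(cap, tokens + (now - last) * rate)
        last = now
        if tokens >= 1.0:
            tokens -= 1.0
            msg = yield True
        else:
            msg = yield False

bucket = token_bucket(rate=1.0, cap=2.0)
next(bucket)
print(bucket.send(0.0))  # True  (starts full: 2 tokens -> 1)
print(bucket.send(0.0))  # True  (1 token -> 0)
print(bucket.send(0.0))  # False (empty)
print(bucket.send(1.5))  # True  (1.5 s at 1 token/s refills 1.5 tokens)
```

No sleeps, no wall clock: the test controls time completely by choosing the timestamps it sends.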
send() vs next(), throw(), and close()
The generator protocol is larger than send(). In production code I think of these methods as a small control surface:
- `next(gen)` / `gen.__next__()` resumes and injects `None`
- `gen.send(value)` resumes and injects `value`
- `gen.throw(exc)` resumes by raising an exception at the suspension point
- `gen.close()` requests shutdown by raising `GeneratorExit` inside
Here is a quick comparison I give teammates when we are choosing an approach:
| Traditional approach | Where send() fits |
| --- | --- |
| Call a function repeatedly | Sync-only pipelines and tight loops (`async def` consumer + `asyncio.Queue` when I/O is involved) |
| Shared mutable state, globals | `send({'command': ...})` is clean and local |
| Return error codes | `throw()` is often clearer than encoding errors as values |
| Flags checked in loops | `close()` works, but I usually wrap it in a context manager |

### A Note on throw()
throw() is the sibling you should remember. When you need to abort the generator and let it run cleanup code at a known suspension point, throw() is often the best tool.
Example: a worker that ensures final metrics flush on shutdown.
```python
def batching_worker(batch_size: int = 3):
    batch = []
    try:
        item = yield None
        while True:
            if item is not None:
                batch.append(item)
            if len(batch) >= batch_size:
                item = yield list(batch)  # emit a full batch; also a receive point
                batch.clear()
            else:
                item = yield None
    except GeneratorExit:
        # cleanup path triggered by close()
        if batch:
            print(f'flushing {len(batch)} pending items on close')
        raise

w = batching_worker(batch_size=2)
next(w)
w.send('a')
print(w.send('b'))  # ['a', 'b']
w.send('c')
w.close()           # flushing 1 pending items on close
```
Note that the `yield list(batch)` line must capture the sent value too; otherwise the item sent while the worker is emitting a batch would be silently dropped.
In real services I keep this pattern but replace print with logging and counters.
close() vs “Send a Stop Message”
You can shut down a coroutine in two common ways:
- Protocol shutdown: send a message like `Stop()` or `EOF()`.
- External shutdown: call `gen.close()`.
I choose based on who owns the lifecycle:
- If the generator is an internal component and the caller is “in charge,” `close()` is simple.
- If the generator is modeling a stream/protocol, an explicit message is often clearer and more testable.
A useful compromise is: accept an explicit stop message and also handle GeneratorExit for safety.
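A sketch of that compromise, with a hypothetical `Stop` message and a `GeneratorExit` handler side by side:

```python
class Stop:
    pass

def resilient_worker():
    processed = 0
    try:
        msg = yield None
        while True:
            if isinstance(msg, Stop):
                # Protocol shutdown: finish cleanly with a summary.
                return {'processed': processed}
            processed += 1
            msg = yield processed
    except GeneratorExit:
        # External shutdown via close(): still a well-defined exit path.
        raise

w = resilient_worker()
next(w)
print(w.send('job'))  # 1
try:
    w.send(Stop())
except StopIteration as e:
    print(e.value)    # {'processed': 1}
```

Tests can exercise the `Stop` path and assert on the returned summary, while `close()` remains a safety net for callers that forget the protocol.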
Common Mistakes I See (and How I Fix Them)
send() is small, but it is easy to create confusing control flow. These are the problems I actually see in code reviews.
Mistake 1: Forgetting to Prime
Symptom:
`TypeError: can't send non-None value to a just-started generator`
Fix:
- Prime with `next(gen)` or `gen.send(None)`.
- If the pattern is repeated, wrap it with a helper (`start()`), or hide it behind a class that primes in `__init__`.
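One way such a wrapper class might look (a sketch; the `Coroutine` name is illustrative):

```python
class Coroutine:
    """Thin wrapper that hides the priming step behind __init__."""

    def __init__(self, gen):
        self._gen = gen
        next(self._gen)  # prime: advance to the first yield

    def send(self, value):
        return self._gen.send(value)

    def close(self):
        self._gen.close()

def echo():
    while True:
        msg = yield
        # ... handle msg ...

worker = Coroutine(echo())
worker.send('ping')  # no manual next() needed
```

Callers now cannot forget to prime, because construction and priming are the same step.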
Mistake 2: Mixing Yielded Outputs and Control Inputs Without Clear Types
If you send dictionaries sometimes, strings other times, and None at other times, the code becomes hard to reason about.
Fix:
- Define a small command schema.
- In 2026 I usually define `TypedDict` or `dataclass` types for command messages, even in sync code.
Example with dataclasses:
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposit:
    amount: int

@dataclass(frozen=True)
class Withdraw:
    amount: int

@dataclass(frozen=True)
class Reset:
    pass
```
Then your generator can do isinstance(message, Deposit) instead of guessing keys.
Mistake 3: Treating send() Like a Thread-Safe Message Bus
Generators are not thread-safe, and send() is not a queue. If multiple threads might call send() at the same time, you will corrupt state.
Fix:
- Keep generator usage confined to one thread.
- If you need cross-thread messaging, move to `queue.Queue`.
Mistake 4: Losing Track of What send() Returns
Remember: send() returns the next value yielded outward.
A common bug is writing a generator that yields multiple times between receives, then assuming a single send() corresponds to a single output.
Fix:
- Design your coroutine so each input produces exactly one output (request/response style), or
- Make the caller drain outputs until the next receive point, and write that draining logic explicitly.
Here is a “drain until ready” pattern that makes the control flow explicit:
```python
from collections.abc import Iterator

def drain_until_none(gen: Iterator) -> list:
    out = []
    while True:
        try:
            value = next(gen)
        except StopIteration:
            return out
        if value is None:
            return out
        out.append(value)
```
This is not universally applicable, but it shows the principle: if your generator can yield multiple outputs for one input, the caller must own the draining behavior.
Mistake 5: Accidental Infinite Loops on None
Because next(gen) injects None, your coroutine must decide what None means.
Fix:
- Treat `None` as “no-op” (common), or
- Treat `None` as a real message, but then never call `next()` after priming.
I personally treat `None` as “no-op” and keep that convention consistent.
Mistake 6: Forgetting Cleanup in finally
If your coroutine manages resources (files, buffers, temporary state), you want deterministic cleanup.
Fix:
- Use `try`/`finally` inside the generator.
- Ensure the caller calls `close()` (or use a context manager wrapper).
close()(or use a context manager wrapper).
```python
def managed_worker():
    resource = {'open': True}
    try:
        msg = yield None
        while True:
            # ... do work ...
            msg = yield None
    finally:
        resource['open'] = False
```
Then wrap usage so cleanup always happens.
Debugging and Testing send()-Driven Code
When a generator is used as a coroutine, bugs can feel slippery because the stack frame is suspended between calls. These are the tactics that save time.
Make the Protocol Observable
The single biggest improvement you can make is: make the coroutine’s “waiting state” explicit.
Instead of “sometimes yields data, sometimes yields nothing,” consider yielding a structured status:
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Waiting:
    for_message: bool = True

@dataclass(frozen=True)
class Output:
    value: object

def simple_worker():
    msg = yield Waiting()
    while True:
        if msg is None:
            msg = yield Waiting()
            continue
        msg = yield Output(value=msg)
```
Now your logs and tests can assert on Waiting() vs Output(...) instead of guessing.
Inspect Generator State When You’re Stuck
When a coroutine is “hung,” you want to know whether it’s waiting for input, closed, or accidentally still running.
You can inspect internal state with the inspect module:
```python
import inspect

state = inspect.getgeneratorstate(gen)  # 'GEN_CREATED', 'GEN_SUSPENDED', ...
frame = gen.gi_frame      # the suspended frame (None once closed)
running = gen.gi_running  # True while the generator body is executing
```
What I do in practice:
- If `GEN_CREATED`, you forgot to prime.
- If `GEN_SUSPENDED`, it’s waiting at a `yield` (good).
- If `GEN_CLOSED`, someone closed it or it returned.
- If `gi_running` is `True`, you have a re-entrancy problem.
Add a Tiny Tracer Wrapper
For debugging (not long-term production), I sometimes wrap send() calls so I can see the exchange:
```python
def traced(gen, name='gen'):
    def _send(x):
        print(f'[{name}] -> {x!r}')
        y = gen.send(x)
        print(f'[{name}] <- {y!r}')
        return y
    return gen, _send
```
Then:
```python
w = account_balance(100)
print(next(w))
(w, send) = traced(w, 'ledger')
send({'action': 'deposit', 'amount': 10})
```
This makes it obvious when your coroutine yields twice or when the caller assumes the wrong return value.
Testing: Treat It Like a State Machine
The testing style that works best is: drive it step-by-step and assert both outputs and internal state transitions.
A minimal pytest-style test looks like this:
```python
def test_account_balance():
    g = account_balance(100)
    assert next(g) == 100
    assert g.send({'action': 'deposit', 'amount': 25}) == 125
    assert g.send({'action': 'withdraw', 'amount': 60}) == 65
    assert g.send('reset') == 0
```
For more complex coroutines, I like to write a tiny driver helper so test cases read like scripts:
```python
def drive(gen, steps):
    out = []
    out.append(next(gen))
    for s in steps:
        out.append(gen.send(s))
    return out
```
Example use:
```python
outputs = drive(event_processor(), [{'event': 1}, 'pause', {'event': 2}, 'resume'])
```
Property-Based Tests for Protocol Robustness
If your coroutine processes many message types, you can get a lot of confidence by generating random sequences of messages and asserting invariants (like “processed count never decreases” or “balance is always an int”). Even if you don’t adopt full property-based testing, you can simulate it with a few random runs.
Key invariants I look for:
- The coroutine never throws on valid messages.
- Invalid messages fail with deterministic exceptions.
- Shutdown behavior is consistent (EOF vs close).
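A simulated property test along those lines, with a fixed seed so failures are reproducible (the `counter` coroutine here is a minimal stand-in for a real processor):

```python
import random

def counter():
    # Stand-in coroutine: None is a no-op, anything else is an event.
    processed = 0
    msg = yield processed
    while True:
        if msg is not None:
            processed += 1
        msg = yield processed

def test_processed_never_decreases(runs=100):
    rng = random.Random(42)  # deterministic seed for reproducibility
    g = counter()
    last = next(g)
    for _ in range(runs):
        msg = rng.choice([None, 'event'])
        current = g.send(msg)
        assert current >= last, 'processed count decreased'
        last = current

test_processed_never_decreases()
```

The same shape scales up: generate random sequences of valid commands, and assert the invariant after every step rather than only at the end.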
Debugging Re-Entrancy (“generator already executing”)
This error usually means you accidentally called send() while the generator was already running.
How it happens in real code:
- A coroutine calls a callback.
- The callback calls back into the coroutine (directly or indirectly).
Fix patterns that work:
- Don’t call external callbacks while “inside” the generator step.
- Instead, yield an “action request” outward, let the caller perform it, then send the result back in.
That turns implicit recursion into an explicit protocol, which is easier to reason about.
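A sketch of that inversion: the coroutine yields a request tuple describing the work, and the caller performs the side effect and sends the result back (the `fetch` action and names are illustrative):

```python
def worker():
    # Yield an action request outward instead of invoking a callback,
    # so nothing can re-enter us while we are running.
    result = yield ('fetch', 'user:42')
    yield ('done', result.upper())

def fetch(key):
    return f'data-for-{key}'

w = worker()
action, arg = next(w)  # ('fetch', 'user:42')
assert action == 'fetch'
reply = fetch(arg)     # the CALLER performs the side effect
print(w.send(reply))   # ('done', 'DATA-FOR-USER:42')
```

Because the generator is suspended while the caller runs `fetch`, there is no window in which a callback can call `send()` re-entrantly.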
Performance Considerations (What Matters and What Doesn’t)
I avoid pretending send() is a performance feature. Most of the time it’s a control-flow feature. That said, there are a few performance realities worth knowing.
Why Generators Can Be Efficient
- They keep only local state, not whole collections.
- They avoid allocating intermediate lists in pipelines.
- They let you process streaming data incrementally.
Where send() Can Help (and Where It Won’t)
send() itself is not “faster” than calling a function. But it can reduce overhead by:
- Avoiding repeated object construction (the generator frame holds state).
- Avoiding repeated parsing/initialization between calls.
Where it won’t help:
- If you are I/O bound, switching to `asyncio` usually matters more.
- If you are CPU bound, algorithm choice and data structures dominate.
A Practical Rule I Use
If your coroutine is doing small, predictable work per message (parsing, transforming, counting), send() is fine.
If it starts doing blocking I/O, sleeps, or has to integrate with cancellation/timeouts, I stop and redesign around async def.
Alternatives to send() (And How I Choose)
send() is powerful, but it’s not always the best fit. Here’s how I decide.
1) Plain Functions + Explicit State
If the state is simple, a function that takes state and returns new state is often the cleanest.
Pros:
- Easy to test
- Easy to serialize/inspect
- No priming, no generator protocol
Cons:
- You must thread state through every call
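A sketch of the explicit-state style, mirroring the earlier balance example with a frozen dataclass as the state (names are illustrative):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Balance:
    amount: int = 0

def apply(state: Balance, action: str, amount: int = 0) -> Balance:
    # Pure function: same inputs, same outputs; state is fully explicit.
    if action == 'deposit':
        return replace(state, amount=state.amount + amount)
    if action == 'withdraw':
        return replace(state, amount=state.amount - amount)
    if action == 'reset':
        return Balance()
    raise ValueError(f'unknown action: {action!r}')

s = Balance(100)
s = apply(s, 'deposit', 25)
s = apply(s, 'withdraw', 60)
print(s.amount)  # 65
```

The cost is visible in the usage: every call threads `s` through. The benefit is that the state can be logged, serialized, or snapshotted at any point.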
2) Classes
If you want “methods” and clear boundaries, a class is sometimes better than a generator coroutine.
Pros:
- Familiar API (`obj.step(msg)`)
- Easy to attach debugging, metrics, and invariants
Cons:
- Slightly more boilerplate
If my generator starts accumulating too many message types, I often refactor into a class.
3) queue.Queue / asyncio.Queue
If you need real concurrency, buffering, backpressure, and cancellation semantics, queues win.
Pros:
- Thread-safe (`queue.Queue`)
- Cancellation/timeouts (`asyncio.Queue`)
- Multiple producers/consumers
Cons:
- More moving parts
- More scheduling overhead
4) Async Generators and Async Iterators
When your producer/consumer crosses I/O boundaries, async iterators are usually the modern answer.
Pros:
- Natural `await` integration
- Cancellation support
- Plays well with async ecosystems
Cons:
- More complexity if the rest of your program is sync
A Quick Decision Table
| Situation | Best first choice |
| --- | --- |
| Tiny synchronous state machine with runtime control | generator coroutine (`send()`) |
| Cross-thread messaging, buffering, cancellation | queue |
| Producer/consumer across I/O boundaries | async |
| Many message types, rich API surface | class |
| Simple streaming transform, no inbound messages | generator (pull-based) |
A “Production-Ready” Pattern: Context-Managed Coroutines
One weakness of generator coroutines is lifecycle hygiene: people forget to close them.
I often wrap them in a context manager so cleanup is deterministic:
```python
from contextlib import contextmanager

@contextmanager
def coroutine(gen):
    # Prime and ensure close on exit.
    gen.send(None)
    try:
        yield gen
    finally:
        gen.close()
```
Example:
```python
with coroutine(event_processor()) as p:
    p.send({'event': 'login'})
```
This makes the “resource-like” nature of long-lived generators obvious.
Edge Cases You Should Handle Explicitly
This is the stuff that bites people later.
Edge Case 1: Return Values From Generators
A generator can `return some_value`. The caller sees that as `StopIteration(some_value)`, with the value exposed on the exception’s `.value` attribute.
If you’re using generators as coroutines, this matters when you want a final summary.
```python
def count_until(stop: int):
    n = 0
    while n < stop:
        msg = yield n
        n += 1
    return {'counted': n}

g = count_until(3)
print(next(g))  # 0
try:
    while True:
        print(g.send(None))
except StopIteration as e:
    print('final:', e.value)
```
If you don’t care about a final value, don’t use return for signaling; use explicit messages.
Edge Case 2: Exceptions at the Suspension Point
When you call throw(), the exception is raised where the generator is paused. That means you can write cleanup or rollback logic around the yield.
This is useful, but also means: exceptions can land in surprising places if your yield is inside a large try block.
My rule: keep try blocks tight around the code that truly needs them.
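A small sketch of `throw()` landing at a tightly scoped `yield`, used here for a rollback (the names are illustrative):

```python
def rollback_demo():
    staged = []
    while True:
        try:
            item = yield len(staged)  # keep the try tight around the yield
        except ValueError:
            staged.clear()            # rollback on the injected error
            continue
        staged.append(item)

g = rollback_demo()
next(g)
print(g.send('a'))          # 1
print(g.send('b'))          # 2
print(g.throw(ValueError))  # 0 -- the exception landed at the yield, staged cleared
```

Because the `try` covers only the `yield`, an unrelated `ValueError` from the processing code would propagate instead of being silently treated as a rollback.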
Edge Case 3: yield from and Delegation
If you use yield from subgen, send() and throw() can be delegated to the subgenerator.
This enables clean composition, but it also means your protocol might span multiple generator layers.
In practice:
- Great for splitting a big parser into smaller parsers.
- Potentially confusing if the boundary isn’t documented.
If you adopt yield from, I recommend documenting the input/output protocol right at the top of each generator.
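A minimal sketch of that delegation: `send()` on the outer generator is routed to the inner one transparently.

```python
def inner():
    # Accumulates sent numbers; yields the running total.
    total = 0
    while True:
        value = yield total
        if value is None:
            continue
        total += value

def outer():
    # yield from forwards next(), send(), and throw() to inner().
    yield from inner()

g = outer()
next(g)
print(g.send(5))  # 5
print(g.send(7))  # 12 -- values were injected into inner()'s yield
```

The caller cannot tell the layers apart, which is exactly why the protocol boundary deserves a docstring at the top of each generator.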
Final Checklist (What I Look For in Code Review)
When I review send()-driven code, I check for these:
- A clear protocol: what messages can be sent, and what values are yielded.
- A consistent meaning for
None. - A reliable lifecycle: explicit stop message or
close()via context manager. - No hidden re-entrancy: generator isn’t called from callbacks during execution.
- Tests that drive the coroutine step-by-step and cover shutdown.
Wrap-Up
send() is not something I sprinkle everywhere. But when I need a tiny, synchronous, explicit state machine with runtime control, it’s one of the cleanest tools Python offers.
If you remember only one idea, make it this: yield is an expression. The value you send becomes the value that expression evaluates to. Once that clicks, send() stops feeling magical and starts feeling like a practical, sharp tool you can use responsibly.