[this was originally #79, but that thread got taken over by the discussion of whether we can provide better diagnostics on missing await, so I'm moving the umbrella issue here]
This is currently very rough and needs expanding into something we can actually bring to the Python core team, but so I don't forget:
- `__iterclose__` (PEP 533) – probably the single most important from a user perspective. Unfortunately appears to be stalled; definitely won't happen for 3.7.
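
  For context, a minimal sketch of the problem PEP 533 is aimed at (the generator and filename here are made up for illustration): iterators that wrap resources only get cleaned up when the loop runs to completion or when the iterator happens to be garbage-collected.

  ```python
  def lines(path):
      with open(path) as f:
          for line in f:
              yield line

  for line in lines("data.txt"):   # hypothetical file, for illustration only
      break
  # Abandoning the loop early leaves the generator suspended, so the file stays
  # open until the generator is garbage-collected. CPython's refcounting usually
  # does that promptly, but it's an implementation detail (PyPy, reference
  # cycles, async generators). PEP 533's __iterclose__ would have the for loop
  # close the iterator deterministically at the break.
  ```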
- can we do anything about how easy it is to forget an `await`? In retrospect async functions shouldn't implement `__call__` IMO... but probably too late to fix that. Still, it kinda sucks that one of the first things in our tutorial is a giant warning about this wart, so worth at least checking if anyone has any ideas... [see Can we make forgetting an await be an error? #79 for discussion]
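
  To make the wart concrete, a minimal sketch (function names are made up) of what forgetting the `await` looks like today:

  ```python
  import trio

  async def fetch():
      await trio.sleep(1)
      return "done"

  async def main():
      result = fetch()   # oops, forgot the await: fetch()'s body never runs,
                         # and result is a coroutine object, not "done"
      print(result)      # <coroutine object fetch at 0x...>

  trio.run(main)
  # The only hint is a "RuntimeWarning: coroutine 'fetch' was never awaited"
  # emitted when the abandoned coroutine object is garbage-collected.
  ```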
- possibly `Result` should actually be builtin? I think it would actually really simplify CPython's generator API and implementation. (in particular, unifying `send` and `throw` could dramatically simplify the implementation of `yield from` while fixing some of the weird intractable `throw` bugs that trio currently has to work around)
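
  Roughly, the idea – sketched here from scratch, not trio's actual `Result` API – is a wrapper that is either a value or an exception, so that delivering it into a generator/coroutine is a single operation instead of a choice between `send` and `throw`:

  ```python
  class Value:
      def __init__(self, value):
          self.value = value
      def send(self, gen):
          # deliver a normal value
          return gen.send(self.value)

  class Error:
      def __init__(self, error):
          self.error = error
      def send(self, gen):
          # deliver an exception
          return gen.throw(self.error)

  def consumer():
      while True:
          try:
              item = yield
              print("value:", item)
          except KeyError as exc:
              print("error:", exc)

  gen = consumer()
  gen.send(None)                       # prime the generator
  Value(1).send(gen)                   # goes in via gen.send()
  Error(KeyError("oops")).send(gen)    # goes in via gen.throw()
  ```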
- speaking of which, straightforward bugs that affect us:
  - ~~issue29600 (this is the worst; it actually blocks us from implementing some useful API, like an async version of `MultiError.catch`)~~ (now fixed, thanks @1st1!)
  - ~~issue29590~~ – fully worked around inside Trio; still affects other libraries like asyncio, but crossing it off our list
  - ~~issue29587~~ – fully worked around inside Trio; still affects other libraries like asyncio, but crossing it off our list
  - ~~issue29728~~ (fix merged, thanks @Mariatta!)
  - ~~issue29515~~ – worked around inside Trio
  - issue29988
  - ~~issue30038 (makes our test suite flaky on windows – see control-C sometimes getting missed on Windows #119. PR submitted and awaiting review)~~ merged, thanks @Haypo!
  - ~~issue30039 (PR submitted and awaiting review)~~ merged, thanks @1st1!
  - ~~issue30050 – PR submitted~~ merged, thanks @1st1!
  - ~~bpo-30579 – PR submitted for the most important part~~ merged!
  - bpo-32359
  - bpo-32561
- better ergonomics for MultiErrors (catching, printing, rethrowing...). Fundamentally the issue here is that in trio it's effectively possible to call multiple functions in parallel, so we need a way to handle multiple errors raised in parallel. Some clever hacks let us get a long way, but currently this is really stretching the limits of the assumptions baked into Python's exception handling machinery. There are a number of pieces to this, and I'm not sure what all of them are. But:
  - no-brainer: making traceback objects instantiable or mutable (both we and jinja2 are carrying disgusting code to work around this) – this is bpo-30579
  - would be nice: attaching attributes to tracebacks (probably: subclassing them)
    - one use case here is to hide/de-emphasize parts of the traceback that are in trio's guts; I think Nick had a similar use case for wanting a way to hide tracebacks inside the import machinery guts?
  - better control over implicit exception chaining. Here's an example where implicit exception chaining corrupts our exception reporting and there's currently nothing we can do about it:

    ```python
    import pytest
    from trio import MultiError

    v = ValueError()
    v.__context__ = KeyError()

    def discard_NameError(exc):
        if isinstance(exc, NameError):
            return None
        return exc

    with pytest.raises(ValueError) as excinfo:
        with MultiError.catch(discard_NameError):
            raise MultiError([v, NameError()])

    assert isinstance(excinfo.value.__context__, KeyError)
    assert not excinfo.value.__suppress_context__
    ```

    because Python will overwrite the ValueError's `__context__` in the catch's `__exit__`, even though it's already set. There's no way to stop it. [Well... I guess we could exploit issue29587. But that seems a bit evil?] [Update! I actually did find an effective countermeasure – see Correctly preserve exception __context__ in MultiError.catch #165. It's a little bit gross but it's certainly not the worst thing in the multierror code; not sure whether it's worth trying to get a better solution upstream or not.]
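
    A standalone repro of the underlying behavior, with no MultiError involved at all: raising an exception while another one is being handled overwrites any `__context__` that was already set on it.

    ```python
    v = ValueError()
    v.__context__ = KeyError()

    try:
        try:
            raise NameError
        except NameError:
            raise v   # raised while the NameError is being handled...
    except ValueError as exc:
        # ...so the interpreter replaces the pre-set KeyError context:
        print(type(exc.__context__))   # <class 'NameError'>
    ```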
  - better hooks to control exception printing? This is the biggie – overriding `sys.excepthook` is really tacky, and has major limitations (e.g. ipython and pytest have their own exception printing code). But it's also the vaguest, because I'm not sure what this would look like.
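
    For reference, the tacky approach available today is just replacing the global hook (the handler name here is made up for illustration), which third-party reporters bypass entirely:

    ```python
    import sys

    def multierror_aware_excepthook(etype, value, tb):
        # hypothetical: MultiError-aware formatting would go here
        sys.__excepthook__(etype, value, tb)

    # Only affects the default interpreter-level reporting; IPython and pytest
    # format exceptions through their own code paths and never consult this.
    sys.excepthook = multierror_aware_excepthook
    ```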
- better support for safe `KeyboardInterrupt` management (in particular for `__(a)exit__` blocks, with their currently unfixable race condition). This is complicated and gets into the code of the bytecode interpreter. Some notes:
  - for sync context managers, `SETUP_WITH` atomically calls `__enter__` + sets up the finally block
  - for async context managers, the async setup stuff is split over multiple bytecodes, so we lose this guarantee -- it's possible for a KI to arrive in between calling `__aenter__` and the `SETUP_ASYNC_WITH` that pushes the finally block
  - for sync context managers, `WITH_CLEANUP_START` atomically calls `__exit__`
  - for async context managers, there again are multiple bytecodes (`WITH_CLEANUP_START` calls `__aexit__` (i.e., instantiates the coroutine object), then `GET_AWAITABLE` calls `__await__`, then `LOAD_CONST` to get the `None` to use as the initial `send`, then `YIELD_FROM` to actually run the `__aexit__` body)
  - and, of course, even once we enter the `__(a)exit__`, there's currently no way for that entry to atomically enable KI protection.
  - so, proposals, I think?:
    - make entering an `__aenter__` or `__aexit__` a single bytecode, or otherwise atomic WRT `KeyboardInterrupt` delivery (now filed as issue29988)
    - add a field to the stack frame that points to the function object (if any). This avoids circular references (unlike putting it on the code object), would make entry/exit atomic, is much more elegant than the way we currently have to hack everyone's locals dicts, would help other projects like pytest with its `__tracebackhide__` hack, and would enable better introspection in general (right now you can't even print `__qualname__` in tracebacks!)
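
  A rough way to see the sync/async asymmetry described in the notes above (just a sketch – the class and function names are made up, and exact opcode names vary across CPython versions; `SETUP_WITH`/`SETUP_ASYNC_WITH` are the 3.6-era names):

  ```python
  import dis

  class CM:
      def __enter__(self): return self
      def __exit__(self, *exc): return False
      async def __aenter__(self): return self
      async def __aexit__(self, *exc): return False

  def sync_user():
      with CM():
          pass

  async def async_user():
      async with CM():
          pass

  # In the 3.6-era bytecode, a single SETUP_WITH opcode both calls __enter__
  # and installs the cleanup block for the sync case; in the async case the
  # same work is spread over several opcodes, leaving windows where a
  # KeyboardInterrupt can land between them.
  dis.dis(sync_user)
  dis.dis(async_user)
  ```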