gh-139109: A new tracing JIT compiler frontend for CPython #140310
base: main
Conversation
I'm deliberately not merging main in yet: if we do, I can't benchmark this on the Meta runners, as those only have clang 19 to build the JIT with. Main has clang 20 (soon 21), which the Meta runners don't have.
savannahostrowski
left a comment
As I said in the group chat...KJ 4 PREZ ⭐ . This is very, very cool and I'm really excited about it. Just a couple comments to clarify my understanding and ask you to inline document some things 😄
- i686-pc-windows-msvc/msvc
- x86_64-pc-windows-msvc/msvc
- aarch64-pc-windows-msvc/msvc
# To re-enable later when we support these.
Mostly because I don't know the time horizon for getting this support, can we add a link in the comments to the issue where we're tracking this? I think it's #139922, right?
Mark says he will add normal switch-case support back by building on top of this PR, I think. So we should get it nearly immediately after this lands.
# define IS_JIT_TRACING_MAKING_PROGRESS() (IS_JIT_TRACING() && ((_PyThreadStateImpl *)tstate)->jit_state.prev_state.specialize_counter < MAX_SPECIALIZATION_TRIES)
# define ENTER_TRACING() \
    DISPATCH_TABLE_VAR = TRACING_DISPATCH_TABLE;
# define LEAVE_TRACING() \
I do think it might be worth inline documenting some of how the dual dispatch architecture works with regards to ENTER_TRACING/LEAVE_TRACING, since it's pretty novel, IIUC. Correct me if I'm wrong, but I think we're using the same interpreter loop for both normal and tracing dispatch.
You're right. I should probably document this in the internal docs. I'll do it after this PR so we can discuss how to document it properly :)
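The dual-dispatch idea can be modeled in Python. This is a hypothetical sketch, not CPython's actual code: one interpreter loop, two dispatch tables, where ENTER_TRACING/LEAVE_TRACING amount to swapping which table the loop indexes, and the tracing table records each instruction before running the normal handler. All names here are illustrative.

```python
# Toy model of a single interpreter loop with two dispatch tables.
# (Illustrative only; CPython's real loop is C with computed gotos.)

def make_tables():
    def load_const(state, arg):
        state["stack"].append(arg)

    def add(state, arg):
        b, a = state["stack"].pop(), state["stack"].pop()
        state["stack"].append(a + b)

    normal = {"LOAD_CONST": load_const, "ADD": add}

    def traced(name, handler):
        def wrapper(state, arg):
            state["trace"].append((name, arg))  # record, then execute as normal
            handler(state, arg)
        return wrapper

    # Same handlers, wrapped with recording: the "tracing dispatch table".
    tracing = {name: traced(name, h) for name, h in normal.items()}
    return normal, tracing

def run(code, trace_from=None):
    normal, tracing = make_tables()
    state = {"stack": [], "trace": []}
    table = normal
    for i, (op, arg) in enumerate(code):
        if i == trace_from:   # "ENTER_TRACING": just swap the table
            table = tracing
        table[op](state, arg)
    return state

state = run([("LOAD_CONST", 1), ("LOAD_CONST", 2), ("ADD", None)], trace_from=1)
print(state["stack"])   # [3]
print(state["trace"])   # [('LOAD_CONST', 2), ('ADD', None)]
```

The loop body never changes; only the table variable does, which is the appeal of the design being discussed.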
// This is the length of the trace we project initially.
#define UOP_MAX_TRACE_LENGTH 1200
// This is the length of the trace we translate initially.
#define UOP_MAX_TRACE_LENGTH 3000
Based on what you said in the PR comment about seeing a lot of "trace too long" aborts even with the higher limit, I'm wondering if you've benchmarked even higher limits like 5k or 10k?
Oops, the "trace too long" aborts are with the old limit (1200). I have not benchmarked the stats for the newer limit. We should definitely gather stats for this in the future and fine-tune it, though.
Include/internal/pycore_backoff.h
// 2. Re-specialized from deopt, in which case the counter will be 1.
// 3. Deopt -> Specialize -> Deopt -> Specialize, in which case the counter will be 2.
// We do not want the 3rd case.
#define MAX_SPECIALIZATION_TRIES 2
I wonder if it's at all worth instrumenting stats for this? Would it be worth understanding how often we're running up against this?
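As a toy illustration of the cutoff and of the kind of stat being suggested here, the following hypothetical Python sketch assumes the counter increments on each re-specialization after a deopt and that attempts are refused once it reaches MAX_SPECIALIZATION_TRIES. None of these names are CPython's.

```python
# Toy model of the MAX_SPECIALIZATION_TRIES cutoff from the diff above,
# with a counter instrumenting how often we run up against the cap.

MAX_SPECIALIZATION_TRIES = 2          # value from the diff
stats = {"respecialize_limit_hit": 0}  # hypothetical pystats-style counter

def try_respecialize(counter):
    """Return (new_counter, allowed). Stops the deopt->specialize ping-pong."""
    if counter >= MAX_SPECIALIZATION_TRIES:
        stats["respecialize_limit_hit"] += 1  # record that we hit the cap
        return counter, False
    return counter + 1, True

c = 0
history = []
for _ in range(4):  # simulate four deopt events on one instruction
    c, ok = try_respecialize(c)
    history.append(ok)
print(history)                          # [True, True, False, False]
print(stats["respecialize_limit_hit"])  # 2
```

The ratio of cap hits to total attempts is exactly the number this review comment is asking to measure.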
REPLACE_OP((this_instr + 1), _NOP, 0, 0);
}
}
if (frame_pop(ctx, returning_code, returning_stacklevel)) {
Should we add stats here to understand how often frame_pop failures (primarily underflow) are causing trace optimization to fail and the trace to be discarded?
There's both overflow and underflow, but yes, we should add stats. I'm mainly too lazy to add the pystats because it's currently a little complicated to add a stat to pystats (it's not documented anywhere, and I also forgot how to do it).
markshannon
left a comment
Three things remaining:

1. I think we should handle instructions that stop tracing (INTERPRETER_EXIT, RAISE_VARARGS, etc.) when dispatching. Add a stop_tracing label to the record_previous_inst label in the dispatch table.
2. There are still a few places that check for INTERPRETER_EXIT that shouldn't need to. It indicates a bug (at least it does to me) if we need to check for INTERPRETER_EXIT. Fixing 1. above should fix this.
3. There is no need for specialize_counter in the JIT state. Assuming there are no infinite loops in the current interpreter, we need make no changes to the base interpreter to ensure that there are no infinite loops in it with tracing. Respecialization can be triggered by adding this to record_previous_inst:

   if (_PyOpcode_Caches[opcode]) set_backoff_counter(&this_instr[1], 0);
#define WITHIN_STACK_BOUNDS() \
    (STACK_LEVEL() >= 0 && STACK_LEVEL() <= STACK_SIZE())
    (CURRENT_FRAME_IS_INIT_SHIM() || (STACK_LEVEL() >= 0 && STACK_LEVEL() <= STACK_SIZE()))
The __init__ shim should be a normal frame, so this shouldn't need changing.
What happens if you leave this unchanged?
The problem is not the code object or frame. The problem is that the stack pointer does not correspond to the stack base between the time the init frame is pushed and the actual _PUSH_FRAME. A reminder:
_CREATE_INIT_FRAME
_PUSH_FRAME
Between _CREATE_INIT_FRAME and _PUSH_FRAME, the stack pointer is the old frame's stack pointer but the stack base is that of the init shim frame, thus leading to a misleading segfault.
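The mismatch can be illustrated with a small hypothetical model (not CPython code): the stack level is the distance between the stack pointer and the current frame's stack base, and in the in-between state those two belong to different frames.

```python
# Illustrative model of why WITHIN_STACK_BOUNDS misfires between
# _CREATE_INIT_FRAME and _PUSH_FRAME. Addresses are made-up integers.

def stack_level(stack_pointer, frame_base):
    # STACK_LEVEL() is roughly "stack pointer minus current frame's base".
    return stack_pointer - frame_base

caller_base, caller_sp = 0, 5   # caller frame: 5 items on its eval stack
shim_base = 100                 # the __init__ shim frame's stack lives elsewhere

# Checked against the matching base, the level is sane (>= 0):
print(stack_level(caller_sp, caller_base))  # 5

# In the in-between state, the stack pointer is still the caller's but
# the "current frame" is already the shim, so the level goes negative
# and the unmodified bounds assertion fires:
print(stack_level(caller_sp, shim_base))    # -95
```

This is why the diff above special-cases CURRENT_FRAME_IS_INIT_SHIM() rather than asserting the usual bounds.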
tier2 op(_DEOPT, (--)) {
    GOTO_TIER_ONE(_PyFrame_GetBytecode(frame) + CURRENT_TARGET());
    GOTO_TIER_ONE((frame->owner == FRAME_OWNED_BY_INTERPRETER)
I don't think that _DEOPT should ever see INTERPRETER_EXIT; it is only after a dynamic exit that the next instruction can be INTERPRETER_EXIT.
You asked me to end the trace with a _DEOPT if I hit an INTERPRETER_EXIT in a previous review. Which I did. Thus it now _DEOPT exits to INTERPRETER_EXIT.
OK. I think I meant "return to tier 1" more generally, rather than _DEOPT specifically, but this works.
We can tweak it in another PR.
That isn't true for the FT interpreter. It is possible to enter an infinite specialization loop if we are unlucky enough that LOCK_OBJECT fails every time.
So I tried this, and we are now hitting bottom in the abstract interpreter, indicating the trace itself is inconsistent and there's a bug somewhere (a trace-recording interpreter should never hit bottom). I've narrowed it down to LOAD_ATTR_INSTANCE_VALUE. Probably the specialization and the guard don't agree now. I already found one more here #141367 (comment). I would really like to make the JIT more fault-tolerant and not so dependent on the specializing interpreter. We should definitely fix those bugs, but we should also make the JIT optimizer as resilient as possible.
Nevermind, I decided to do
If it fails every time, that's an infinite loop regardless. Setting the backoff counter to zero just sets it to a state that it could be in anyway without tracing.
markshannon
left a comment
This is starting to look good. I've a few more minor suggestions, but nothing deep.
Hopefully we can get this merged very soon.
Lib/test/test_capi/test_opt.py
self.assertIn("_JUMP_TO_TOP", uops)
self.assertIn("_LOAD_FAST_BORROW_0", uops)

@unittest.skip("gh-139109 WIP")
How do these skipped tests fail, and how do we get them back again?
I'm OK leaving it to another PR to restore them, but only if there is a plan.
Whoops, this test works. I accidentally left them out.
All the stack space optimization is broken. I think a possible fix is: if the returning code doesn't match, just bail out of the optimization. That's a conservative and safe approach. We'd need to handle YIELD_VALUE too.
_tstate->jit_tracer_state.initial_state.exit = exit;
_tstate->jit_tracer_state.initial_state.stack_depth = curr_stackdepth;
_tstate->jit_tracer_state.initial_state.chain_depth = chain_depth;
_tstate->jit_tracer_state.prev_state.instr_frame = frame;
Why are we setting prev_state here? There is no previous instruction.
Could we either set instr == NULL or point it to some NOP, so that record_previous_inst doesn't record anything?
I now see why you are using the no-tracing form of dispatch after starting a trace.
It would be simpler to set instr == NULL here and use normal dispatch. It would save some code duplication as well.
If we did that it would complicate the side exit tracing case. Right now that is very clean. I don't think we should change that.
When you say "side exit", you mean from _COLD_EXIT?
How would it complicate it?
Say the target of the _COLD_EXIT is at instruction target. We record target at the initialization. When we DISPATCH() at INSTR(target), it records target as the instruction, which is right.
If we set it to NULL, the first DISPATCH() at INSTR(target) is invalid, as there is no previous instruction for the target.
We also cannot do a normal DISPATCH() when returning from tier 1, as we have not yet actually executed the instruction, so it would double-count it.
If we don't record target at initialization, and DISPATCH() normally, everything works fine I think.
target is recorded at the start of the next instruction as normal.
In tracing we do the following:

- Add the prev instruction to the trace
- Set prev to the instruction about to be executed, for the next dispatch to record
- Execute the instruction as normal

If we set prev = NOP at a side exit, this should work fine.
Likewise, instead of doing this when we start tracing:

- Set prev to the instruction about to be executed, for the next dispatch to record
- Do non-tracing dispatch

we can do this:

- Set prev to NOP/NULL
- Do tracing dispatch

This might be a tiny bit slower, but it saves a significant amount of code duplication.
Having said that, I think it would be fine to make these changes after this PR is merged.
I'll add a task to the issue.
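The record-one-behind dispatch recipe discussed in this thread can be sketched in Python (hypothetical names, not CPython's): each dispatch appends the previous instruction to the trace, then remembers the current one for the next dispatch. Seeding prev with a NOP means the first dispatch records nothing, which is the proposed way to start tracing or resume at a side exit.

```python
# Sketch of tracing dispatch that records the *previous* instruction.

NOP = ("NOP", None)

def trace_run(code):
    trace, prev = [], NOP
    for instr in code:
        if prev is not NOP:       # 1. add the prev instruction to the trace
            trace.append(prev)
        prev = instr              # 2. remember the instruction about to run
        # 3. execute instr as normal (execution itself elided in this sketch)
    if prev is not NOP:
        trace.append(prev)        # flush the final recorded instruction
    return trace

code = [("LOAD_FAST", 0), ("LOAD_CONST", 1), ("BINARY_ADD", None)]
print(trace_run(code))  # records all three, each one dispatch late
```

Because recording lags execution by one dispatch, seeding prev = NOP naturally suppresses the bogus first record without a separate "first dispatch" code path.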
The results after trusting the interpreter with specialization are slower: https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg This likely indicates bugs in the specializing interpreter. For example, deltablue is suffering from the CALL_LIST_APPEND bug. I say we just merge the "slower" version though. We should fix the bugs in the interpreter separately.
I'm not worried about the deltablue slowdown. It subclasses a list for no reason, which I don't think we care about.
Is the iOS failure anything to do with this PR, or is it failing elsewhere?
GOTO_TIER_ONE(target);
}

tier2 op(_GUARD_IP__PUSH_FRAME, (ip/4 --)) {
I thought the plan was to generate these exits, otherwise they get out of sync with the source instruction.
What is OFFSET_OF(x)? The offset of what?
If generating the whole uop is too cumbersome because of uop ids and such, then generate the body, leaving the declaration looking like this:
tier2 op(_GUARD_IP__PUSH_FRAME, (ip/4 --)) {
    IP_GUARD(_PUSH_FRAME);
}
It is automatically generating it from the offset of the IP.
I've changed the name to IP_OFFSET_OF to make it clearer.
DISPATCH();
}

label(record_previous_inst) {
NOTE: To support the switch/case interpreter this will need to be an instruction.
No need to do so in this PR.
It's been failing intermittently for a few weeks: #141358 (comment).
This PR changes the current JIT model from trace projection to trace recording. Benchmarking: about 1.7% better overall pyperformance geomean versus the current JIT (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251108-3.15.0a1%2B-7e2bc1d-JIT/bm-20251108-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-7e2bc1d-vs-base.svg), and 100% faster Richards on the most improved benchmark versus the current JIT. Slowdown of about 10-15% on the worst benchmark versus the current JIT. Note: the fastest version isn't the one merged, as it relies on fixing bugs in the specializing interpreter, which is left to another PR. The speedup in the merged version is about 1.1%: https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg
Stats: 50% more uops executed, 30% more traces entered the last time we ran them. The data also suggests our trace lengths for a real trace-recording JIT are too short, as there are a lot of "trace too long" aborts: https://github.com/facebookexperimental/free-threading-benchmarking/blob/main/results/bm-20251023-3.15.0a1%2B-eb73378-CLANG%2CJIT/bm-20251023-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-eb73378-pystats-vs-base.md
This new JIT frontend is already able to record/execute significantly more instructions than the previous JIT frontend. In this PR, we are now able to record through custom dunders, simple object creation, generators, etc. None of these were handled by the old JIT frontend. Some custom-dunder uops were discovered to be broken as part of this work: gh-140277
Some tests for the optimizers are disabled for now because their invariants have changed. The main one is the stack space check, as it's no longer valid to deal with underflow.
The full test suite passes on my system.
Not close enough to a holiday, so no poem this time!
Pros:

- optimizer.c is now significantly simpler, as we don't have to do strange things to recover the bytecode from a trace.

Cons:

Design:

- The record_previous_inst function/label is executed. This does as the name suggests.
- _DYNAMIC_EXIT.

Future ideas:

- _DYNAMIC_EXIT will cause a re-trace when hot enough, starting from the target instruction.
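The overall recording model described in this PR can be summarized with a small hypothetical sketch (illustrative names, not the actual frontend): instructions are recorded as they execute, recording stops at trace-ending instructions, and it aborts with "trace too long" once the buffer limit is hit.

```python
# Toy model of a trace-recording frontend with a length cap
# (cf. UOP_MAX_TRACE_LENGTH in the diff above).

UOP_MAX_TRACE_LENGTH = 3000  # value from the diff; per-trace buffer limit

def record_trace(executed_instrs, limit=UOP_MAX_TRACE_LENGTH):
    trace = []
    for instr in executed_instrs:
        if len(trace) >= limit:
            return None, "trace too long"   # abort: discard the recording
        if instr == "INTERPRETER_EXIT":
            return trace, "stop tracing"    # instructions that end a trace
        trace.append(instr)
    return trace, "complete"

trace, reason = record_trace(["LOAD_FAST", "CALL", "INTERPRETER_EXIT"])
print(trace, reason)   # ['LOAD_FAST', 'CALL'] stop tracing
```

Unlike trace projection, the recorded buffer here is exactly what ran, which is why the stats above show many more uops executed but also more "trace too long" aborts when the limit is too low.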