Conversation

@Fidget-Spinner
Member

@Fidget-Spinner Fidget-Spinner commented Oct 18, 2025

This PR changes the current JIT model from trace projection to trace recording. Benchmarking: about 1.7% better pyperformance geomean overall versus the current JIT (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251108-3.15.0a1%2B-7e2bc1d-JIT/bm-20251108-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-7e2bc1d-vs-base.svg), with Richards, the most improved benchmark, 100% faster than under the current JIT, and a slowdown of about 10-15% on the worst benchmark. Note: the fastest version isn't the one merged, as it relies on fixing bugs in the specializing interpreter, which is left to another PR. The speedup in the merged version is about 1.1% (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg).

Stats: 50% more uops executed and 30% more trace entries, as of the last stats run. The stats also suggest our trace lengths for a real trace-recording JIT are too short, as there are many "trace too long" aborts: https://github.com/facebookexperimental/free-threading-benchmarking/blob/main/results/bm-20251023-3.15.0a1%2B-eb73378-CLANG%2CJIT/bm-20251023-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-eb73378-pystats-vs-base.md

This new JIT frontend is already able to record/execute significantly more instructions than the previous JIT frontend. With this PR, we are now able to record through custom dunders, simple object creation, generators, etc. None of these were handled by the old JIT frontend. Some custom-dunder uops were discovered to be broken as part of this work: gh-140277

Some tests for the optimizers are disabled for now because their invariants have changed. The main one is the stack space check, which is no longer valid now that we have to deal with underflow.

The full test suite passes on my system.

Not close enough to a holiday, so no poem this time!

Pros:

  • Ignoring the generated tracer code, as it's automatically created, this is only about 1k additional lines of code. The maintenance burden is handled by the DSL and code generator.
  • optimizer.c is now significantly simpler, as we don't have to do strange things to recover the bytecode from a trace.
  • The new JIT frontend is able to handle a lot more control-flow than the old one.
  • Tracing is very low overhead. We use the tail calling interpreter/computed goto interpreter to switch between tracing mode and non-tracing mode. I call this mechanism dual dispatch, as we have two dispatch tables dispatching to each other. Specialization is still enabled while tracing.
  • Better handling of polymorphism. We leverage the specializing interpreter for this.

Cons:

Design:

  • After each instruction, the record_previous_inst function/label is executed. This does as the name suggests.
  • The tracing interpreter lowers bytecode to uops directly so that it can obtain "fresh" values at the point of lowering.
  • The tracing version behaves nearly identically to the normal interpreter; in fact, it even has specialization! This allows it to run without much of a slowdown when tracing. The actual cost of tracing is only a function call and some writes to memory.
  • The tracing interpreter uses the specializing interpreter's deopt to naturally form the side-exit chains. This allows it to build side-exit chains effectively, without repeating much code. We force re-specialization when tracing a deopt.
  • The tracing interpreter can even handle goto errors/exceptions, but I chose to disable that for now as it's not tested.
  • Because we do not share interpreter dispatch, there should be no significant slowdown to the original specializing interpreter on the tail-calling and computed-goto builds with the JIT disabled. With the JIT enabled, there might be a slowdown in the form of the JIT trying to trace.
  • Things that could have dynamic instruction-pointer effects are guarded on. The guard deopts to a new instruction, _DYNAMIC_EXIT.

Future ideas:

  • _DYNAMIC_EXIT will cause a re-trace, when hot enough, starting from the target instruction.
  • As such, traces "grow" like a web/graph rather than a tree. A trace starts from a loop, then grows outwards, naturally consuming more of the code around it as the hotspot gets hotter.

@Fidget-Spinner
Member Author

I'm deliberately not merging main in yet, because if we do, I can't benchmark this on the Meta runners, as those only have Clang 19 to build the JIT with. Main has Clang 20 (soon 21), which the Meta runners don't have.

Member

@savannahostrowski savannahostrowski left a comment


As I said in the group chat...KJ 4 PREZ ⭐ . This is very, very cool and I'm really excited about it. Just a couple comments to clarify my understanding and ask you to inline document some things 😄

- i686-pc-windows-msvc/msvc
- x86_64-pc-windows-msvc/msvc
- aarch64-pc-windows-msvc/msvc
# To re-enable later when we support these.
Member


Mostly because I don't know the time horizon for getting this support, can we add a link in the comments to the issue where we're tracking this? I think it's #139922, right?

Member Author


Mark says he will add normal switch-case support back to this PR by building on top of it, I think. So we should get it nearly immediately after this lands.

# define IS_JIT_TRACING_MAKING_PROGRESS() (IS_JIT_TRACING() && ((_PyThreadStateImpl *)tstate)->jit_state.prev_state.specialize_counter < MAX_SPECIALIZATION_TRIES)
# define ENTER_TRACING() \
DISPATCH_TABLE_VAR = TRACING_DISPATCH_TABLE;
# define LEAVE_TRACING() \
Member


I do think it might be worth inline-documenting some of how the dual-dispatch architecture works with regard to ENTER_TRACING/LEAVE_TRACING, since it's pretty novel, IIUC. Correct me if I'm wrong here, but I think we're using the same interpreter loop for both normal and tracing modes.

Member Author


You're right. I should probably document this in the internal docs. I'll do it after this PR so we can discuss how to document it properly :)

// This is the length of the trace we project initially.
#define UOP_MAX_TRACE_LENGTH 1200
// This is the length of the trace we translate initially.
#define UOP_MAX_TRACE_LENGTH 3000
Member


Based on what you said in the PR comment about seeing a lot of "trace too long" aborts even with the higher limit, I'm wondering if you've benchmarked even higher limits like 5k or 10k?

Member Author


Oops, the "trace too long" aborts are with the old limit (1200). I have not benchmarked the stats for the newer limit. We should definitely gather stats for this in the future and fine-tune it, though.

// 2. Re-specialized from deopt, in which case the counter will be 1.
// 3. Deopt -> Specialize -> Deopt -> Specialize, in which case the counter will be 2.
// We do not want the 3rd case.
#define MAX_SPECIALIZATION_TRIES 2
Member


I wonder if it's at all worth instrumenting stats for this? Would it be worth understanding how often we're running up against this?

REPLACE_OP((this_instr + 1), _NOP, 0, 0);
}
}
if (frame_pop(ctx, returning_code, returning_stacklevel)) {
Member


Should we add stats here to understand how often frame_pop failures (primarily underflow) are causing trace optimization to fail and the trace to be discarded?

Member Author


There's both overflow and underflow, but yes, we should add stats. I'm mainly too lazy to add the pystats because it's currently a little complicated to add a stat to pystats (it's not documented anywhere and I also forgot how to do it).

Member

@markshannon markshannon left a comment


Three things remaining:

  1. I think we should handle instructions that stop tracing (INTERPRETER_EXIT, RAISE_VARARGS, etc.) when dispatching. Add a stop_tracing label to the record_previous_inst label in the dispatch table.

  2. There are still a few places that check for INTERPRETER_EXIT that shouldn't need to.
    It indicates a bug (at least it does to me) if we need to check for INTERPRETER_EXIT. Fixing 1. above should fix this.

  3. There is no need for specialize_counter in the JIT state:

Assuming there are no infinite loops in the current interpreter, we need to make no changes to the base interpreter to ensure that there are no infinite loops in it with tracing.
Respecialization can be triggered by adding this to record_previous_inst:

   if (_PyOpcode_Caches[opcode]) set_backoff_counter(&this_instr[1], 0);


#define WITHIN_STACK_BOUNDS() \
(STACK_LEVEL() >= 0 && STACK_LEVEL() <= STACK_SIZE())
(CURRENT_FRAME_IS_INIT_SHIM() || (STACK_LEVEL() >= 0 && STACK_LEVEL() <= STACK_SIZE()))
Member


The __init__ shim should be a normal frame, so this shouldn't need changing.
What happens if you leave this unchanged?

Member Author


The problem is not the code object or frame. The problem is that the stack pointer does not correspond to the stack base between the time the init frame is pushed and the actual _PUSH_FRAME. A reminder:

_CREATE_INIT_FRAME
_PUSH_FRAME

Between _CREATE_INIT_FRAME and _PUSH_FRAME, the stack pointer is the old frame's stack pointer but the stack base is the init shim frame's, thus leading to a misleading segfault.


tier2 op(_DEOPT, (--)) {
GOTO_TIER_ONE(_PyFrame_GetBytecode(frame) + CURRENT_TARGET());
GOTO_TIER_ONE((frame->owner == FRAME_OWNED_BY_INTERPRETER)
Member


I don't think that _DEOPT should ever see INTERPRETER_EXIT; it is only after a dynamic exit that the next instruction can be INTERPRETER_EXIT.

Member Author


You asked me to end the trace with a _DEOPT if I hit an INTERPRETER_EXIT in a previous review, which I did. Thus it now _DEOPT exits to INTERPRETER_EXIT.

Member


OK. I think I meant "return to tier 1" more generally, rather than _DEOPT specifically, but this works.
We can tweak it in another PR.

@Fidget-Spinner
Member Author

Assuming there are no infinite loops in the current interpreter, we need to make no changes to the base interpreter to ensure that there are no infinite loops in it with tracing. Respecialization can be triggered by adding this to record_previous_inst:

   if (_PyOpcode_Caches[opcode]) set_backoff_counter(&this_instr[1], 0);

That isn't true for the FT interpreter. It is possible to enter an infinite specialization loop if we are unlucky enough that LOCK_OBJECT fails every time.

@Fidget-Spinner
Member Author

Fidget-Spinner commented Nov 10, 2025

   if (_PyOpcode_Caches[opcode]) set_backoff_counter(&this_instr[1], 0);

So I tried this, and we are now hitting bottom in the abstract interpreter, indicating the trace itself is inconsistent and there's a bug somewhere (a trace-recording interpreter should never hit bottom). I've narrowed it down to LOAD_ATTR_INSTANCE_VALUE. Probably the specialization and the guard don't agree now. I already found one more here: #141367 (comment)

I would really like to make the JIT more fault-tolerant and not so dependent on the specializing interpreter. We should definitely fix those bugs, but we should also make the JIT optimizer as resilient as possible.

@Fidget-Spinner
Member Author

Never mind, I decided to do if (_PyOpcode_Caches[opcode]) set_backoff_counter(&this_instr[1], 0);. Thanks. We might lose a slight bit of performance from the bad specialization bugs. However, we can fix that in the future.

@markshannon
Member

Assuming there are no infinite loops in the current interpreter, we need to make no changes to the base interpreter to ensure that there are no infinite loops in it with tracing. Respecialization can be triggered by adding this to record_previous_inst:

   if (_PyOpcode_Caches[opcode]) set_backoff_counter(&this_instr[1], 0);

That isn't true for the FT interpreter. It is possible to enter an infinite specialization loop if we are unlucky enough that LOCK_OBJECT fails every time.

If it fails every time, that's an infinite loop regardless. Setting the backoff counter to zero just sets it to a state that it could be in anyway without tracing.

Member

@markshannon markshannon left a comment


This is starting to look good. I've a few more minor suggestions, but nothing deep.
Hopefully we can get this merged very soon.

self.assertIn("_JUMP_TO_TOP", uops)
self.assertIn("_LOAD_FAST_BORROW_0", uops)

@unittest.skip("gh-139109 WIP")
Member


How do these skipped tests fail, and how do we get them back again?
I'm OK leaving it to another PR to restore them, but only if there is a plan.

Member Author

@Fidget-Spinner Fidget-Spinner Nov 11, 2025


Whoops, this test works. I accidentally left it out.

All the stack-space optimization is broken. I think a possible fix is to check whether the returning code doesn't match and, if so, just bail out of the optimization. That's a conservative and safe approach. We'd need to handle YIELD_VALUE too.

_tstate->jit_tracer_state.initial_state.exit = exit;
_tstate->jit_tracer_state.initial_state.stack_depth = curr_stackdepth;
_tstate->jit_tracer_state.initial_state.chain_depth = chain_depth;
_tstate->jit_tracer_state.prev_state.instr_frame = frame;
Member


Why are we setting prev_state here? There is no previous instruction.
Could we either set instr == NULL or point it to some NOP so that record_previous_inst doesn't record anything?

I now see why you are using the no-tracing form of dispatch after starting a trace.
It would be simpler to set instr == NULL here and use normal dispatch. It would save some code duplication as well.

Member Author


If we did that it would complicate the side exit tracing case. Right now that is very clean. I don't think we should change that.

Member


When you say "side exit", you mean from _COLD_EXIT?
How would it complicate it?

Member Author

@Fidget-Spinner Fidget-Spinner Nov 11, 2025


Say the target of the _COLD_EXIT is at instruction target. We record target at initialization. When we DISPATCH() at INSTR(target), it records target as the instruction, which is right.

If we set it to NULL, the first DISPATCH() at INSTR(target) is invalid, as there is no previous instruction for the target.

We cannot do a normal DISPATCH() when returning from tier 1 either, as we have not yet actually executed the instruction. So it would double-count it.

Member


If we don't record target at initialization and DISPATCH() normally, everything works fine, I think.
target is recorded at the start of the next instruction as normal.

When tracing, we do the following:

  • Add the prev instruction to the trace
  • Set the prev to the instruction about to be executed, for the next dispatch to record
  • Execute the instruction as normal

If we set prev = NOP at a side exit, this should work fine.

Likewise, instead of doing this when we start tracing:

  • Set the prev to the instruction about to be executed, for the next dispatch to record
  • Do non-tracing dispatch

we can do this:

  • Set the prev to NOP/NULL
  • Do tracing dispatch

This might be a tiny bit slower, but it saves a significant amount of code duplication.

Member


Having said that, I think it would be fine to make these changes after this PR is merged.
I'll add a task to the issue.



@Fidget-Spinner
Member Author

Fidget-Spinner commented Nov 12, 2025

The results after trusting the interpreter with specialization are slower. https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg

This likely indicates bugs in the specializing interpreter. For example, deltablue is suffering from the CALL_LIST_APPEND bug.

I say we just merge the "slower" version though. We should fix the bugs in the interpreter separately.

@markshannon
Member

I'm not worried about the deltablue slowdown. It subclasses a list for no reason, which I don't think we care about.

@markshannon
Member

Is the iOS failure anything to do with this PR or is it failing elsewhere?

GOTO_TIER_ONE(target);
}

tier2 op(_GUARD_IP__PUSH_FRAME, (ip/4 --)) {
Member


I thought the plan was to generate these exits; otherwise, they get out of sync with the source instruction.

What is OFFSET_OF(x)? The offset of what?

If generating the whole uop is too cumbersome because of uop ids and such, then generate the body, leaving the declaration looking like this:

tier2 op(_GUARD_IP__PUSH_FRAME, (ip/4 --)) {
    IP_GUARD(_PUSH_FRAME);
}

Member Author


It automatically generates it from the offset of the IP.

I've changed the name to IP_OFFSET_OF to make it clearer.

DISPATCH();
}

label(record_previous_inst) {
Member


NOTE: To support the switch/case interpreter, this will need to be an instruction.
No need to do so in this PR.

@mhsmith
Member

mhsmith commented Nov 12, 2025

Is the iOS failure anything to do with this PR or is it failing elsewhere?

It's been failing intermittently for a few weeks: #141358 (comment).

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Projects

None yet

Development

Successfully merging this pull request may close these issues.

10 participants