
[fx] Move the rest of torch.fx.Node to C++ #170946

Closed
jansel wants to merge 11 commits into gh/jansel/586/base from gh/jansel/586/head

Conversation

@pytorch-bot

pytorch-bot bot commented Dec 20, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/170946

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit feb6b74 with merge base e81980e:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: fx release notes category label Dec 20, 2025
jansel added a commit that referenced this pull request Dec 20, 2025
@jansel jansel mentioned this pull request Dec 20, 2025
jansel added a commit that referenced this pull request Dec 20, 2025
@jansel jansel added the suppress-bc-linter Suppresses the failures of API backward-compatibility linter (Lint/bc_linter) label Dec 20, 2025
@jansel jansel requested a review from williamwen42 December 24, 2025 19:26
@jansel jansel marked this pull request as ready for review December 24, 2025 19:27
@jansel jansel requested a review from anijain2305 December 24, 2025 19:27

@oulgen oulgen left a comment


I remember the last time you did this, there was a memory leak. Is there anything we can do preemptively to make sure there's no leak? I stared through the memory allocation logic, but it's hard to be sure.

@jansel

jansel commented Dec 24, 2025

> I remember the last time you did this, there was a memory leak. Is there anything we can do preemptively to make sure there's no leak? I stared through the memory allocation logic, but it's hard to be sure.

That one was fixed and landed already.

I asked both Claude and Codex to check for leaks and semantics changes.
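For what it's worth, a basic leak check of this kind can also be scripted from plain Python: take a weakref to an object, drop all strong references, force a full GC cycle, and see whether it was actually freed. This is only an illustrative sketch with a stand-in `Node` class; it is not the check used on this PR and not `torch.fx.Node` itself:

```python
import gc
import weakref

class Node:
    """Stand-in for a graph node (hypothetical, not torch.fx.Node)."""
    def __init__(self, name):
        self.name = name
        self.users = []

def survives_gc(make_obj):
    """Return True if the object is still alive after a full collection."""
    ref = weakref.ref(make_obj())
    gc.collect()
    return ref() is not None

# A node with no outside references should be collected:
print(survives_gc(lambda: Node("x")))  # expected: False

# A reference cycle through .users is still reclaimed by the cycle collector:
def cyclic():
    a, b = Node("a"), Node("b")
    a.users.append(b)
    b.users.append(a)
    return a
print(survives_gc(cyclic))  # expected: False
```

The second case is the interesting one for C++-backed objects: if a C++ node participated in a Python reference cycle without implementing GC traversal, `survives_gc` would return `True` and flag a leak.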

@pytorchmergebot

Rebased gh/jansel/588/orig onto refs/remotes/origin/viable/strict because #170973 was rebased, please pull locally before adding more changes (for example, via ghstack checkout https://github.com/pytorch/pytorch/pull/170946)

@pytorchmergebot

Starting merge as part of PR stack under #170962

pytorchmergebot pushed a commit that referenced this pull request Dec 25, 2025
Pull Request resolved: #170962
Approved by: https://github.com/oulgen
ghstack dependencies: #170945, #170946
krastogi-in pushed a commit to krastogi-in/pytorch that referenced this pull request Jan 9, 2026
krastogi-in pushed a commit to krastogi-in/pytorch that referenced this pull request Jan 9, 2026
krastogi-in pushed a commit to krastogi-in/pytorch that referenced this pull request Jan 9, 2026
This reverts commit a50d65a.

Reverted pytorch#170962 on behalf of https://github.com/clee2000 due to: sorry, but I think this broke an inductor test, test/inductor/test_max_autotune.py::TestMaxAutotuneAsyncPipelined::test_empty_conv_input_search_space_EXHAUSTIVE_kernel_size_1 ([GH job link](https://github.com/pytorch/pytorch/actions/runs/20510247600/job/58931365661), [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/a50d65ad2f8c39a19be6ee3d7e0b37b0f3f47189)). It might be a landrace, since I see that it did succeed on this PR. It's also failing some tests internally (D89800849). ([comment](pytorch#170946 (comment)))
krastogi-in pushed a commit to krastogi-in/pytorch that referenced this pull request Jan 9, 2026
This reverts commit 84db132.

Reverted pytorch#170946 on behalf of https://github.com/clee2000 due to: sorry, but I think this broke an inductor test, test/inductor/test_max_autotune.py::TestMaxAutotuneAsyncPipelined::test_empty_conv_input_search_space_EXHAUSTIVE_kernel_size_1 ([GH job link](https://github.com/pytorch/pytorch/actions/runs/20510247600/job/58931365661), [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/a50d65ad2f8c39a19be6ee3d7e0b37b0f3f47189)). It might be a landrace, since I see that it did succeed on this PR. It's also failing some tests internally (D89800849). ([comment](pytorch#170946 (comment)))
SergeyTyshkevich pushed a commit to SergeyTyshkevich/chart2 that referenced this pull request Jan 19, 2026
@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Feb 24, 2026
@jansel jansel removed the Stale label Feb 24, 2026
@pytorchmergebot

Starting merge as part of PR stack under #170973

pytorchmergebot pushed a commit that referenced this pull request Feb 26, 2026
Pull Request resolved: #170962
Approved by: https://github.com/oulgen
ghstack dependencies: #170946
pytorchmergebot pushed a commit that referenced this pull request Feb 26, 2026
Cumulative Changes (PR 170946 → PR 170973)

  | Benchmark                             | PR 170946 | PR 170973 | Delta |
  |---------------------------------------|-----------|-----------|-------|
  | add_loop_eager                        | 3,308M    | 3,206M    | -3.1% |
  | aotdispatcher_partitioner_cpu2        | 1,944M    | 1,865M    | -4.1% |
  | aotdispatcher_partitioner_cpu         | 8,340M    | 8,062M    | -3.3% |
  | sum_floordiv_regression               | 3,700M    | 3,591M    | -3.0% |
  | aotdispatcher_training_nosubclass_cpu | 3,242M    | 3,168M    | -2.3% |
  | update_hint_regression                | 1,674M    | 1,641M    | -2.0% |
  | symint_sum_loop                       | 4,682M    | 4,590M    | -2.0% |
  | add_loop_inductor_gpu                 | 26,418M   | 26,031M   | -1.5% |
  | basic_NestedModule_eager              | 6,171M    | 6,084M    | -1.4% |
  | add_loop_inductor                     | 29,736M   | 29,342M   | -1.3% |

Pull Request resolved: #170973
Approved by: https://github.com/oulgen
ghstack dependencies: #170946, #170962
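For reference, the Delta column in the table above is just the relative change between the two counts (the values appear to be in millions). Checking the first two rows with a small, hypothetical helper:

```python
def pct_delta(before, after):
    """Relative change from `before` to `after`, in percent."""
    return (after - before) / before * 100

# add_loop_eager: 3,308M -> 3,206M
print(f"{pct_delta(3308, 3206):.1f}%")  # -3.1%

# aotdispatcher_partitioner_cpu2: 1,944M -> 1,865M
print(f"{pct_delta(1944, 1865):.1f}%")  # -4.1%
```

Both match the table, so the deltas are computed against the PR 170946 baseline, not the other way around.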
@wdvr

wdvr commented Mar 10, 2026

@pytorchmergebot revert -m "reverting as part of the #170962 stack" -c ghfirst

@pytorchmergebot

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot

Reverting PR 170946 failed

Reason: `git -C /home/runner/work/pytorch/pytorch revert --no-edit fa27d4f89f4d52bceabdfb18a3acbff4c96362b1` returned non-zero exit code 1

    Auto-merging test/fx/test_graph_pickler.py
    Auto-merging torch/_C/__init__.pyi.in
    Auto-merging torch/_inductor/fx_passes/joint_graph.py
    Auto-merging torch/fx/node.py
    CONFLICT (content): Merge conflict in torch/fx/node.py
    error: could not revert fa27d4f89f4... [fx] Move the rest of torch.fx.Node to C++ (#170946)
    hint: After resolving the conflicts, mark them with
    hint: "git add/rm <pathspec>", then run
    hint: "git revert --continue".
    hint: You can instead skip this commit with "git revert --skip".
    hint: To abort and get back to the state before "git revert",
    hint: run "git revert --abort".
    hint: Disable this message with "git config set advice.mergeConflict false"
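The hints in that output can be followed mechanically. A minimal sketch in a throwaway repository (not the actual pytorch checkout; the file name and contents are made up) showing a conflicted `git revert` being resolved by hand and continued:

```shell
# Reproduce and resolve a conflicted `git revert` in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
printf 'v1\n' > node.py; git add node.py; git commit -qm 'v1'
printf 'v2\n' > node.py; git commit -qam 'v2'
printf 'v3\n' > node.py; git commit -qam 'v3'
# Reverting the middle commit conflicts with the later change to node.py:
git revert --no-edit HEAD~1 || true
printf 'v1\n' > node.py                # resolve the conflict by hand
git add node.py                        # mark it resolved, per the hint
GIT_EDITOR=true git revert --continue  # finish the revert
cat node.py                            # back to v1
```

In the real case the conflict in `torch/fx/node.py` could not be auto-resolved by the bot, which is why the revert was later redone manually in #177047.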

wdvr added a commit to wdvr/pytorch that referenced this pull request Mar 10, 2026
pytorchmergebot pushed a commit that referenced this pull request Mar 11, 2026
## Summary

Reverts #170946

This reverts commit fa27d4f.

The two stacked PRs (#170962 and #170973) have already been reverted on main:
- `0a06050b` Revert "[fx] Move _FindNodesLookupTable and _node_list to C++ (#170973)"
- `2ee3377b` Revert "[fx] Move _Namespace to C++ (#170962)"
- `fb60dd58` Revert "[fx] Fix quadratic name generation in _NamespaceBase.create_name (#176515)"

The only merge conflict was in `torch/fx/node.py` due to a type annotation modernization PR (#176308, `Optional[X]` → `X | None`). The conflict was resolved by keeping the modern annotation style while restoring the original Python implementation.
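The annotation change behind that conflict is purely syntactic: PEP 604 lets `X | None` replace `typing.Optional[X]` with no behavioral difference. A minimal illustration (the function here is hypothetical, not the real `torch/fx/node.py` code):

```python
from __future__ import annotations  # makes `str | None` valid on pre-3.10 Pythons too

from typing import Optional

# Pre-#176308 annotation style:
def next_name_old(prefix: Optional[str]) -> str:
    return (prefix or "node") + "_0"

# Modernized PEP 604 style, the one kept when resolving the conflict:
def next_name_new(prefix: str | None) -> str:
    return (prefix or "node") + "_0"

print(next_name_new(None))   # node_0
print(next_name_new("add"))  # add_0
```

Since the two spellings are equivalent, the resolution could keep the new annotations while restoring the old Python implementation underneath them.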

## Test plan
CI

Pull Request resolved: #177047
Approved by: https://github.com/huydhn
pytorchmergebot added a commit that referenced this pull request Mar 11, 2026
This reverts commit 68bcdcb.

Reverted #177047 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](#177047 (comment)))
wdvr added a commit to wdvr/pytorch that referenced this pull request Mar 11, 2026
pytorchmergebot pushed a commit that referenced this pull request Mar 12, 2026
Reland of #177047 which was auto-reverted due to lint failures.

This reverts commit 4bc9d7f ("Reapply "[fx] Move the rest of torch.fx.Node to C++ (#170946)" (#177047)").

Authored with Claude.
Pull Request resolved: #177183
Approved by: https://github.com/huydhn
sandy-gags pushed a commit to sandy-gags/pytorch that referenced this pull request Mar 12, 2026

Labels

- ci-no-td (Do not run TD on this PR)
- ciflow/inductor
- fx
- Merged
- module: inductor
- release notes: fx (release notes category)
- Reverted
- skip-pr-sanity-checks
- suppress-bc-linter (Suppresses the failures of API backward-compatibility linter (Lint/bc_linter))
