
Separate provenance tracking to different levels#160383

Closed
yushangdi wants to merge 1 commit into pytorch:main from yushangdi:export-D80031559

Conversation

@yushangdi
Contributor

@yushangdi yushangdi commented Aug 12, 2025

Summary: as title. We've received requests from various parties interested in turning on provenance tracking by default. In this PR, we prepare to turn on by default the parts of provenance tracking that don't have too much overhead.

  • Change the provenance_tracking config to provenance_tracking_level
  • turn on the following provenance tracking by default when basic_provenance_tracking=True
    • set_kernel_post_grad_provenance_tracing for kernels; this adds a mapping between triton kernels and post_grad nodes
    • dump_inductor_provenance_info if we're dumping the tlparse log
    • get_graph_provenance_json and dump create_mapping_pre_post_grad_nodes. This creates a mapping between pre_grad and post_grad nodes. Since we're not turning on provenance tracking in GraphTransformObserver by default, the mapping here may be incomplete/limited.
    • add stack traces from post_grad nodes to inductor IR nodes
    • add exception swallowing for all functions above
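The "exception swallowing" in the last bullet can be sketched as a best-effort decorator: provenance tracking is debugging metadata, so a failure inside a tracking helper should be logged and ignored rather than failing compilation. The names below are illustrative only, not the actual PyTorch implementation:

```python
import functools
import logging

log = logging.getLogger(__name__)

def swallow_exceptions(default=None):
    """Hypothetical decorator: run fn, but log and return `default`
    on any exception so compilation is never broken by tracking."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                log.warning("provenance tracking failed in %s", fn.__name__, exc_info=True)
                return default
        return wrapper
    return decorator

@swallow_exceptions(default={})
def get_graph_provenance_json(graph):
    # Illustrative body only: raise to show the error is swallowed.
    raise RuntimeError("provenance extraction failed")
```

With this wrapper, a failing helper degrades to an empty result instead of propagating.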

Test Plan:
CI

Rollback Plan:

Differential Revision: D80031559

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @Lucaskabela

@pytorch-bot

pytorch-bot bot commented Aug 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160383

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit adc0047 with merge base 211c988:

BROKEN TRUNK - The following job failed but was also present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D80031559

yushangdi added a commit to yushangdi/pytorch that referenced this pull request Aug 12, 2025
Summary:

as title. We've received requests from various parties interested in turning on provenance tracking by default. In this PR, we turn on by default the parts of provenance tracking that don't have too much overhead.

Test Plan:
CI

Rollback Plan:

Differential Revision: D80031559
@yushangdi yushangdi marked this pull request as draft August 12, 2025 00:46
@kflu
Contributor

kflu commented Aug 12, 2025

Thanks @yushangdi! Is there a potential latency regression from turning it on by default? Can we study it? If so, I can also help study the latency implications for some production models.

@yushangdi
Contributor Author

Thanks @yushangdi! Is there a potential latency regression from turning it on by default? Can we study it? If so, I can also help study the latency implications for some production models.

@kflu I don't expect any significant latency regression, but it may increase slightly.

@pytorch-bot pytorch-bot bot added the release notes: fx release notes category label Aug 12, 2025
@yushangdi yushangdi marked this pull request as ready for review August 12, 2025 18:37
@facebook-github-bot
Contributor

@yushangdi has imported this pull request. If you are a Meta employee, you can view this in D80031559.

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 12, 2025
# Save mapping info from inductor generated kernel to post_grad fx nodes to pre_grad fx nodes
# Will be changed to default to True
# TODO: remove this flag once it's running stable
basic_provenance_tracking = os.environ.get("INDUCTOR_PROVENANCE_BASIC", "0") == "1"
Contributor


Instead of having a "basic" flag, would it be more generic to define it as the "level" of provenance tracking? That way, we can reuse the provenance_tracking config and the INDUCTOR_PROVENANCE env var, which is already an integer.

0: disabled
1: normal provenance
2: basic
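The reviewer's proposal could be sketched as follows, assuming the level semantics listed above (the helper name is hypothetical; the actual config plumbing in Inductor differs):

```python
import os

def provenance_level(environ=os.environ) -> int:
    """Hypothetical helper: reuse the integer-valued INDUCTOR_PROVENANCE
    env var as a tracking level instead of adding a separate boolean flag.
    Assumed semantics: 0 = disabled, 1 = normal provenance, 2 = basic."""
    raw = environ.get("INDUCTOR_PROVENANCE", "0")
    try:
        level = int(raw)
    except ValueError:
        # Malformed values fall back to disabled rather than crashing.
        level = 0
    return level if level in (0, 1, 2) else 0
```

Any unrecognized or malformed value degrades to 0 (disabled), keeping the env var backward compatible.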

if config.trace.provenance_tracking:

if config.trace.basic_provenance_tracking or config.trace.provenance_tracking:
for node in origins:
Contributor


This code block also needs to be exception handled, I think.

Contributor Author


I moved this code to be called lazily, so it's no longer in IR node initialization. Currently it's only called when we want to print an IR node.

)
# Dump provenance artifacts for debugging trace
if config.trace.basic_provenance_tracking or config.trace.provenance_tracking:
trace_structured(
Contributor


Is trace_structured exception safe?

Contributor Author

@yushangdi yushangdi Aug 13, 2025


We use trace_structured everywhere for tlparse already; it's safe as long as the payload_fn function passed to it is safe.
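The point above — that trace_structured is only as exception-safe as the payload_fn it is handed — suggests guarding the payload computation itself. A minimal sketch (the wrapper name is hypothetical, not part of PyTorch):

```python
import json

def safe_payload_fn(payload_fn):
    """Hypothetical wrapper: make any payload_fn safe to hand to a
    structured-trace call by catching failures during payload generation
    and substituting a sentinel payload instead of raising."""
    def wrapped():
        try:
            return json.dumps(payload_fn())
        except Exception:
            return json.dumps({"error": "payload generation failed"})
    return wrapped

# Usage sketch:
#   trace_structured("artifact", payload_fn=safe_payload_fn(build_payload))
```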

config.trace.basic_provenance_tracking
or config.trace.provenance_tracking
):
provenance_tracking_json = (
Contributor


This code block needs to be exception free

Contributor Author


This block should be exception free already. I added exception checks in get_graph_provenance_json and create_mapping_pre_post_grad_nodes.

yushangdi added a commit to yushangdi/pytorch that referenced this pull request Aug 13, 2025
Summary:
as title. We've received requests from various parties interested in turning on provenance tracking by default. In this PR, we prepare to turn on by default the parts of provenance tracking that don't have too much overhead.

- Change `provenance_tracking` config to `provenance_tracking_level`. This defaults to 1 (normal) now, but will later default to 2 to turn on basic provenance tracking
- turn on the following provenance tracking by default when `basic_provenance_tracking`=True
    - `set_kernel_post_grad_provenance_tracing` for kernels; this adds a mapping between triton kernels and post_grad nodes
    - `dump_inductor_provenance_info` if we're dumping the tlparse log
    - `get_graph_provenance_json` and dump `create_mapping_pre_post_grad_nodes`. This creates a mapping between pre_grad and post_grad nodes. Since we're not turning on provenance tracking in GraphTransformObserver by default, the mapping here may be incomplete/limited.
    - add exception swallowing for all functions above


Test Plan:
CI

Rollback Plan: cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben Lucaskabela

Differential Revision: D80031559

Pulled By: yushangdi
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D80031559

@yushangdi
Contributor Author

@kflu Would it be possible to also verify whether we can turn on "normal" mode by default in the lowering stack?

@yushangdi yushangdi requested a review from kflu August 13, 2025 20:28
@yushangdi yushangdi changed the title Turn on part of provenance tracking by default Separate provenance tracking to different levels Aug 13, 2025
@yushangdi yushangdi requested a review from angelayi August 14, 2025 16:53
@kflu
Contributor

kflu commented Aug 14, 2025

@kflu Would it be possible to also verify whether we can turn on "normal" mode by default in the lowering stack?

sure, once it's landed we can test both modes.

# Backward compatibility:
# If TORCH_COMPILE_DEBUG=1, level is set to at least 1.
# If INDUCTOR_PROVENANCE is set, use its integer value.
provenance_tracking_level: int = int(
Contributor


I think you can use something like `Literal[0, 1, 2]` here.

Contributor Author


I'll add this change to my next PR!

Summary:
as title. We've received requests from various parties interested in turning on provenance tracking by default. In this PR, we prepare to turn on by default the parts of provenance tracking that don't have too much overhead.

- Change `provenance_tracking` config to `provenance_tracking_level`. This defaults to 1 (normal) now, but will later default to 2 to turn on basic provenance tracking
- turn on the following provenance tracking by default when `basic_provenance_tracking`=True
    - `set_kernel_post_grad_provenance_tracing` for kernels; this adds a mapping between triton kernels and post_grad nodes
    - `dump_inductor_provenance_info` if we're dumping the tlparse log
    - `get_graph_provenance_json` and dump `create_mapping_pre_post_grad_nodes`. This creates a mapping between pre_grad and post_grad nodes. Since we're not turning on provenance tracking in GraphTransformObserver by default, the mapping here may be incomplete/limited.
    - add exception swallowing for all functions above

Pull Request resolved: pytorch#160383

Test Plan:
CI

Rollback Plan: cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben Lucaskabela

Reviewed By: angelayi

Differential Revision: D80031559

Pulled By: yushangdi
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D80031559

@facebook-github-bot
Contributor

@pytorchbot merge -i

(Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 1 checks: pull / linux-jammy-py3.9-clang12 / build

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@pytorchmergebot
Collaborator

This PR (#160383) was merged in aa99e09 but it is still open, likely due to a Github bug, so mergebot is closing it manually. If you think this is a mistake, please feel free to reopen and contact Dev Infra.
