
log ViewAndMutationMeta to trace_structured #133784

Closed
bdhirsh wants to merge 13 commits into gh/bdhirsh/609/base from gh/bdhirsh/609/head

Conversation

@bdhirsh
Collaborator

bdhirsh commented Aug 17, 2024

I ended up bundling it into the existing tlparse logs for the AOT forward graph, since it looked like registering it as a separate artifact would require changes to tlparse itself (maybe that is wrong though?)

Example new fw AOT graph tlparse output for the below code: https://interncache-all.fbcdn.net/manifold/tlparse_reports/tree/logs/.tmp70zKiO/0_0_0/aot_forward_graph_2.txt

```
import torch


@torch.compile
def f(x):
    out1 = torch.view_as_complex(x)
    out2 = torch.view_as_complex(x)
    return out1, out2, x * 2


x_ = torch.randn(4, 2, requires_grad=True, dtype=torch.float64)
out = f(x_)
```
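
For anyone wanting to regenerate a report like the one linked above, here is a minimal sketch (not part of the PR; it assumes a local `tlparse` install, e.g. from PyPI, and an arbitrary output directory) of driving the structured trace end to end:

```
# Hedged sketch: produce a structured trace and render it with tlparse.
# TORCH_TRACE is read when torch's structured logging initializes, so it
# is set before the torch import here.
import os
import subprocess

os.environ["TORCH_TRACE"] = "/tmp/torch_trace_demo"  # arbitrary output dir

import torch

@torch.compile
def f(x):
    out1 = torch.view_as_complex(x)
    out2 = torch.view_as_complex(x)
    return out1, out2, x * 2

f(torch.randn(4, 2, requires_grad=True, dtype=torch.float64))

# Render the trace into an HTML report plus per-artifact .txt files.
subprocess.run(["tlparse", "/tmp/torch_trace_demo"], check=True)
```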

Stack from ghstack (oldest at bottom):

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @rec

@pytorch-bot

pytorch-bot bot commented Aug 17, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/133784

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c1b7c35 with merge base 4470339:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

```
payload_fn=lambda: fw_module.print_readable(
    print_output=False, include_stride=True, include_device=True
),
payload_fn=lambda: fw_metadata_str,
```
Contributor


This doesn't do what you want, `payload_fn=fw_metadata_str`

Have you inspected the output tlparse yet?

Contributor


Oh it looks like you didn't exercise this one

Collaborator Author


whoops... thank you
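
(For readers following along: if the two `payload_fn=` keywords in the snippet above belong to a single `trace_structured` call, CPython rejects the repeated keyword at compile time, which is consistent with the path never having been exercised. A minimal, hypothetical illustration, not the PR's code:)

```
# A repeated keyword argument is a SyntaxError, raised when the source is
# compiled rather than when the call executes.
src = "trace_structured('aot_forward_graph', payload_fn=lambda: a, payload_fn=lambda: b)"
try:
    compile(src, "<example>", "eval")
except SyntaxError as err:
    print(err)  # keyword argument repeated
```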

```
    print_output=False, include_stride=True, include_device=True
),
]
)
```
Contributor


Why put these together? Just output them as separate log entries. If you don't want to have to modify the tlparse CLI, dump the fw metadata as an "artifact".

Collaborator Author


You're completely right - I didn't realize that dumping via "artifact" allows us to add new artifacts without needing to change the tlparse CLI. Updated.
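
(A minimal sketch of the "artifact" pattern being suggested, assuming the `trace_structured` API in `torch._logging`; the artifact name matches the filename in the follow-up tlparse link below, but the surrounding code is illustrative, not the PR's exact diff:)

```
# Hedged sketch: logging a string payload as a named tlparse "artifact".
# tlparse picks the artifact name up from the metadata, so no tlparse CLI
# change is needed to surface a new artifact type.
from torch._logging import trace_structured

def log_fw_metadata(fw_metadata_str: str) -> None:
    trace_structured(
        "artifact",
        metadata_fn=lambda: {
            "name": "aot_forward_graph_fw_metadata",  # assumed name
            "encoding": "string",
        },
        payload_fn=lambda: fw_metadata_str,
    )
```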

@ezyang
Contributor

ezyang commented Aug 19, 2024

nit: I think the dumped structures would be substantially easier to understand if they were indented. There are two ways you could do this: dump as JSON, or use torchgen's dataclass_repr utility, which dumps a dataclass with indentation.
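
(A quick sketch of the two options on a toy dataclass; ViewAndMutationMeta itself has many more fields, and the `torchgen.utils.dataclass_repr` import path is an assumption based on the `dataclass_repr()` reference in the follow-up below:)

```
# Hedged sketch of both formatting options mentioned above.
import dataclasses
import json

@dataclasses.dataclass
class ToyMeta:  # stand-in for ViewAndMutationMeta
    mutates_data: bool
    output_types: list

meta = ToyMeta(mutates_data=True, output_types=["alias_of_input"])

# Option 1: JSON round-trip for indentation (works only if every field is
# JSON-serializable, which is not true of the real metadata).
print(json.dumps(dataclasses.asdict(meta), indent=2))

# Option 2: torchgen's dataclass pretty-printer (the route the PR took).
from torchgen.utils import dataclass_repr
print(dataclass_repr(meta))
```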

@bdhirsh
Collaborator Author

bdhirsh commented Aug 22, 2024

Just updated to use dataclass_repr() for formatting and it is much nicer: https://interncache-all.fbcdn.net/manifold/tlparse_reports/tree/logs/.tmp2lEs4q/0_0_1/aot_forward_graph_fw_metadata_2.txt

(I also locally exercised all four paths of inference/training, subclasses/no_subclasses)

@albanD albanD removed their request for review August 22, 2024 15:27
@bdhirsh
Collaborator Author

bdhirsh commented Oct 14, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Oct 14, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: New commits were pushed while merging. Please rerun the merge command.

Details for Dev Infra team: raised by workflow job

@bdhirsh
Collaborator Author

bdhirsh commented Oct 14, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@bdhirsh
Collaborator Author

bdhirsh commented Oct 15, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

This PR (#133784) was merged in ed94725 but it is still open, likely due to a GitHub bug, so mergebot is closing it manually. If you think this is a mistake, please feel free to reopen and contact Dev Infra.

@github-actions github-actions bot deleted the gh/bdhirsh/609/head branch November 15, 2024 02:09