Dynamo AutocastModeVariable bug: missing with context on the graph break instruction #95837

@yanboliang

Description

🐛 Describe the bug

This bug was found when I ran the Dynamo benchmarks after #95416. The Dynamo benchmark AMP tests use torch.cuda.amp.autocast, which was not supported before #95416, so they fell back to eager mode and this bug was hidden.

Repro:

import torch
import torch._dynamo

def fn(x):
    torch._dynamo.graph_break()
    return x.sum() / x.numel()

class MyModule(torch.nn.Module):
    def forward(self, a, b):
        with torch.amp.autocast(device_type="cuda"):
            x = a + b
            y1 = fn(x)
        return y1

module = MyModule()
a = torch.rand((8, 8), dtype=torch.float16, device="cuda")
b = torch.rand((8, 8), dtype=torch.float16, device="cuda")
opt_m = torch._dynamo.optimize("eager")(module)
print(module(a, b).dtype)
print(opt_m(a, b).dtype)

Output

torch.float32
torch.float16

The root cause is exactly the same as in pytorch/torchdynamo#207, but for AutocastModeVariable: after a graph break, the generated resume code does not re-enter the autocast context, so the remaining ops run outside it.
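To see why dropping the with context changes the result dtype, here is a minimal CPU-only sketch (using CPU autocast with bfloat16 so it runs without a GPU; the repro above is the CUDA equivalent). Ops executed inside the autocast context run at the autocast dtype, while the same op outside the context runs at the input dtype. A resume function that fails to re-enter the context therefore behaves like the "outside" case:

```python
import torch

a = torch.randn(4, 4)  # float32 inputs
b = torch.randn(4, 4)

with torch.amp.autocast(device_type="cpu", dtype=torch.bfloat16):
    inside = torch.mm(a, b)   # matmul is autocast to bfloat16 inside the context

outside = torch.mm(a, b)      # same op after the context exits stays float32

print(inside.dtype)   # torch.bfloat16
print(outside.dtype)  # torch.float32
```

If Dynamo's resume function after a graph break executes the tail of the with block without restoring the autocast state, it effectively runs the "outside" variant, which is why the compiled module returns a different dtype than eager mode.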

Versions

N/A

cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire

Metadata

Labels

module: dynamo, oncall: pt2, triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)