
Batched grad coverage rollup #50743

@zou3519


🐛 Bug

This issue is a collection of bugs where batched grad computation doesn't work. Most of these were discovered while turning batched grad testing on. Some of them manifest as crashes (😞), some as silently incorrect results, and some as loud error messages.

  • test_linalg.py: test_cholesky_solve_autograd: The randomness in batched grad testing exposes unrelated silent correctness issues
  • test_nn.py: test_broadcast_double_backwards_gpu: segfault
  • test_nn.py: test_TransformerEncoderLayer_relu_activation: loud error message
  • test_nn.py: test_CrossMapLRN2d: loud error message
  • batched grad of torch.mm with a tensor of size (0, 0), as per @IvanKobzarev's comment
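The failures above come from running the batched grad check on existing tests. A minimal sketch of what that check looks like, assuming `torch.autograd.gradcheck`'s `check_batched_grad` flag (this compares a vmapped vector-Jacobian product against a per-cotangent loop):

```python
import torch

# A well-formed input for torch.mm; the (0, 0)-sized edge case from the
# bullet above is the kind of input where the batched grad path breaks.
x = torch.randn(3, 3, dtype=torch.double, requires_grad=True)

# gradcheck with check_batched_grad=True exercises the batched (vmapped)
# vector-Jacobian product in addition to the usual numerical checks.
assert torch.autograd.gradcheck(
    lambda t: torch.mm(t, t), (x,), check_batched_grad=True
)
```

When the batched path is broken for an op, this check raises an error (or worse, crashes) instead of returning `True`.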

cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @albanD @gqchen @pearu @nikitaved @soulitzer

Labels

  • module: autograd — Related to torch.autograd, and the autograd engine in general
  • module: vmap
  • triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
