❌ 37 new failures as of commit 44221ef (more details on the Dr. CI page).
🕵️ 35 new failures recognized by patterns; the following CI failures do not appear to be due to upstream breakages.
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Force-pushed from 9f7bc95 to 44221ef.
torch/_meta_registrations.py (outdated)

    # TODO: smarter way to copy? copy.deepcopy?
    # can I just use list(shape)?
    res = list(shape)
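As an aside on the TODO in this snippet: when a shape is a flat sequence of immutable elements (plain ints, or SymInts), `list(shape)` is a sufficient shallow copy; `copy.deepcopy` would do extra work for no benefit. A minimal illustration (the `shape` tuple here is a stand-in, not a real tensor shape):

```python
import copy

shape = (2, 3, 4)          # stand-in for a tensor's shape tuple
res = list(shape)          # shallow copy into a mutable list

res[0] = 5                 # mutating the copy...
assert shape == (2, 3, 4)  # ...leaves the original untouched
assert res == [5, 3, 4]

# deepcopy gives the same result for immutable elements, at extra cost
assert copy.deepcopy(list(shape)) == list(shape)
```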
torch/_meta_registrations.py (outdated)

    def reshape(self, proposed_shape):
        if self.is_sparse:
            # TODO: not sure what else to do here?
            raise RuntimeError("reshape is not implemented for sparse tensors")
This seems fine, but you will have to monkeypatch it in for it to do anything.
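The monkeypatching the reviewer describes (assigning the new function over the existing method, as the diff later does with `torch.Tensor.reshape = reshape`) follows a generic pattern. A minimal sketch with a dummy class standing in for `torch.Tensor`:

```python
# Hypothetical sketch of the monkeypatch pattern: replace a method on a
# class at runtime so later calls dispatch to the new implementation.
class Tensor:  # stand-in for torch.Tensor
    def reshape(self, shape):
        return ("original", tuple(shape))

def reshape(self, proposed_shape):  # replacement implementation
    return ("patched", tuple(proposed_shape))

Tensor.reshape = reshape  # monkeypatch: existing instances see it too

t = Tensor()
assert t.reshape([16, 1]) == ("patched", (16, 1))
```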
Force-pushed from 731fb9e to 3f5840d.
    a = torch.rand(4, 4)
    b = a.reshape((16, 1))
    # b = torch.reshape(a, (16, 1))
    print(b)
This test is bad; there should be an OpInfo `reshape` test you can use instead.
    _register_jit_decomposition_for_jvp(torch.ops.aten.log_sigmoid_forward.default)
    _register_jit_decomposition_for_jvp(torch.ops.aten.native_layer_norm_backward.default)
    _register_jit_decomposition_for_jvp(torch.ops.aten.native_batch_norm_backward.default)
    # _register_jit_decomposition_for_jvp(torch.ops.aten.native_batch_norm_backward.default)
    }
    return from_complex.get(dtype, dtype)
Don't wobble lines unnecessarily; especially in a long-lived branch, these changes lead to spurious merge conflicts. Big problem!
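For context on the `from_complex.get(dtype, dtype)` line in this diff: passing the key itself as the `dict.get` default makes the mapping an identity function for anything unmapped. A small sketch (the string dtypes here are illustrative stand-ins for real `torch.dtype` objects):

```python
# Hypothetical sketch of the dict.get fallback pattern: map complex
# dtypes to their real counterparts, returning non-complex dtypes as-is.
from_complex = {
    "complex64": "float32",
    "complex128": "float64",
}

def corresponding_real_dtype(dtype):
    # .get(dtype, dtype) falls back to the key itself when unmapped
    return from_complex.get(dtype, dtype)

assert corresponding_real_dtype("complex64") == "float32"
assert corresponding_real_dtype("float16") == "float16"  # identity fallback
```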
    return self.view(self.clone(memory_format=torch.contiguous_format), shape)

    torch.reshape = reshape
    torch.Tensor.reshape = reshape
This should ultimately live in torch._refs
Oh shoot, there's a reshape implementation in torch._refs.reshape
/easycla As part of the transition to the PyTorch Foundation, this project now requires contributions be covered under the new CLA. See #85559 for additional details. This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.
This PR gets `reflect @ R @ reflect` working, where R has unbacked batch size. This pattern occurred in CrystalDPR. The billing of changes:

* torch.broadcast_shapes avoids guarding on unbacked SymInts when testing for broadcastable dims. I extracted this to #95217 for separate review; it's repeated in this PR as it is necessary for the E2E test.
* I disable matrix multiply folding when there is an unbacked SymInt on any input. Folding is strictly a performance optimization and can be omitted. Also, I believe export would prefer to get matmul (rather than bmm/etc), so we should eventually actually get #91081 going.
* I add a direct Python transcription of the reshape composite, adapted from #84584. I cannot use the PrimTorch composite as it has problems when I register it pre-autograd. It has the same implementation as regular reshape, but at the beginning there is one more test for trivial reshapes, which is sufficient for the matmul example.
* I hand-write a meta function for expand, rather than using the PrimTorch decomposition. I couldn't really figure out how to make the PrimTorch decomposition guard-free, but with the hand-written meta it is clear where the divergence lies: we cannot easily choose the correct stride for the unbacked dim, as we need to know whether or not the size is one (in which case we give the predicted stride) versus non-one (in which case we MUST give zero). In composability sync, we agreed that changes to striding behavior are fair game with unbacked SymInts, so I just unconditionally give these zero stride.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

[ghstack-poisoned]
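The stride choice in the last bullet can be sketched at the shape level. This is not the actual PyTorch meta function, just an illustration of the rule described above; `None` stands in for an unbacked SymInt whose concrete value (one vs. non-one) is unknown at trace time, so testing it would create a guard:

```python
# Shape-level sketch (illustrative, not torch's implementation) of
# choosing output strides for an expand.
def expand_strides(sizes, strides, target_sizes):
    """Strides for expanding a tensor with `sizes`/`strides` to `target_sizes`."""
    out = []
    for size, stride, target in zip(sizes, strides, target_sizes):
        if size is None:
            # Unbacked dim: we cannot test size == 1 without guarding,
            # so unconditionally give stride 0 (per the PR description).
            out.append(0)
        elif size == target:
            out.append(stride)  # dim unchanged: keep its stride
        elif size == 1:
            out.append(0)       # broadcast dim: stride must be 0
        else:
            raise ValueError("cannot expand a non-singleton dimension")
    return out

# A (1, 3) tensor expanded to (4, 3): the broadcast dim gets stride 0.
assert expand_strides([1, 3], [3, 1], [4, 3]) == [0, 1]
# With an unbacked batch size, the stride is unconditionally 0.
assert expand_strides([None, 3], [3, 1], [4, 3]) == [0, 1]
```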
Fixes #ISSUE_NUMBER