Conversation
This is just copying over the PR from albanD/subclass_zoo#32, and pulling in `BaseTensor`, plus adding one test based on invariants about schemas.

From the code comments:

> Meta tensors give you the ability to run PyTorch code without having to actually do computation, through tensors allocated on a `meta` device. Because the device is `meta`, meta tensors do not model device propagation. FakeTensor extends MetaTensors to also carry an additional `fake_device` which tracks devices that would have been used.
```diff
@@ -0,0 +1,80 @@
+# Owner(s): ["module: unknown"]
```
well, could always chuck it in module: meta tensors
test_meta is under module: primTorch... I don't think there is a meta module.
torch/_subclasses/base_tensor.py (outdated)

```python
# To ensure constructors can cooperate with one another, must accept and
# ignore element tensor (TODO: is this right???)
def __init__(self, elem):
    super().__init__()
```
I'm pretty confident we can dump this constructor, it never actually got used for anything
torch/_subclasses/base_tensor.py (outdated)

```python
# typically must be disabled
__torch_function__ = torch._C._disabled_torch_function_impl


__all__ = ["BaseTensor"]
```
My preference is to not add this class, and inline the only thing you need (in this case, `__torch_function__ = torch._C._disabled_torch_function_impl`).
My understanding from talking to various folks was that the

```python
@staticmethod
def __new__(cls, elem, *, requires_grad=None):
    if requires_grad is None:
        return super().__new__(cls, elem)  # type: ignore
    else:
        return cls._make_subclass(cls, elem, requires_grad)
```

part was necessary. I think I'll file an issue for removal of this in a future PR and leave it for now.
```python
aten.is_pinned.default,
aten.to.device,
aten.to.prim_Device,
aten._pin_memory.default,
```
@anjali411 Would this be a good operator tag? (Or maybe we should be able to figure this out from schema?)
We can already infer this information from the `FunctionSchema` by iterating through the inputs:

```python
>>> torch.ops.aten.to.device._schema.arguments
[const Tensor& self, Device device, int dtype, bool non_blocking, bool copy, int? memory_format]
>>> torch.ops.aten.to.device._schema.arguments[1]
Device device
>>> torch.ops.aten.to.device._schema.arguments[1].kwarg_only
False
```
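To make the idea concrete without requiring torch, here is a minimal, torch-free sketch of the check being proposed: a toy model of `FunctionSchema` arguments, where detecting device-taking ops is done by argument type rather than by a hand-maintained list. The `Argument`/`FunctionSchema` dataclasses and `takes_device` helper are hypothetical stand-ins, not the real torch API.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    # simplified: torch stores real JIT types here, not strings
    name: str
    type: str
    kwarg_only: bool = False

@dataclass
class FunctionSchema:
    name: str
    arguments: list = field(default_factory=list)

def takes_device(schema):
    """True if any argument is Device-typed (mirrors the proposed schema check)."""
    return any(arg.type in ("Device", "Device?") for arg in schema.arguments)

# Modeled after torch.ops.aten.to.device._schema as shown above.
to_device = FunctionSchema(
    "aten::to.device",
    [
        Argument("self", "Tensor"),
        Argument("device", "Device"),
        Argument("dtype", "int"),
        Argument("non_blocking", "bool"),
        Argument("copy", "bool"),
        Argument("memory_format", "int?"),
    ],
)
add = FunctionSchema(
    "aten::add.Tensor",
    [Argument("self", "Tensor"), Argument("other", "Tensor")],
)

print(takes_device(to_device))  # True
print(takes_device(add))        # False
```

With a check like this as the single source of truth, the hard-coded `aten.to.device` / `aten._pin_memory.default` list above would no longer need manual maintenance.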
Yeaaa, I have a test for that. Maybe I should just get rid of the list and move over the checking logic.
Yeah, we should have one source of truth.
```python
# elem does not need to be recorded, because FakeTensor *is a* elem
assert elem.device.type == "meta"
device = device if isinstance(device, torch.device) else torch.device(device)
assert device.type != "meta"
```
You know... it might be ok for the inner device to be meta lol
That seems like something has gone wrong (we've lost what the actual device is)... maybe if it proves to be needed in the future we can change it.
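The two invariants under discussion can be sketched without torch at all. This is a hypothetical stand-in (none of `FakeDevice`, `MetaTensorStub`, or `FakeTensorSketch` is the real `torch._subclasses` code): the wrapped tensor must live on `meta`, while the separately tracked `fake_device` must name a real, non-meta device.

```python
class FakeDevice:
    """Toy stand-in for torch.device: just a device-type string."""
    def __init__(self, type_):
        self.type = type_

class MetaTensorStub:
    """Stand-in for a tensor allocated on the meta device."""
    device = FakeDevice("meta")

class FakeTensorSketch:
    def __init__(self, elem, device):
        # elem needs no further bookkeeping: conceptually FakeTensor *is* elem
        assert elem.device.type == "meta", "inner tensor must be a meta tensor"
        device = device if isinstance(device, FakeDevice) else FakeDevice(device)
        # fake_device exists to remember the real device; per the discussion
        # above, a meta fake_device would mean that information was lost
        assert device.type != "meta", "fake_device must not be meta"
        self.elem = elem
        self.fake_device = device

t = FakeTensorSketch(MetaTensorStub(), "cpu")
print(t.fake_device.type)  # cpu
```

Constructing `FakeTensorSketch(MetaTensorStub(), "meta")` trips the second assertion, which is exactly the "something has gone wrong" case the reviewers describe.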
@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Differential Revision: [D36618467](https://our.internmc.facebook.com/intern/diff/D36618467)
@pytorchbot merge this please
Hey @eellison.
Summary: Pull Request resolved: #77969

Approved by: https://github.com/ezyang

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/678213ead2fd6676344e29e927246c658c1dab5c

Reviewed By: seemethere

Differential Revision: D36784747

Pulled By: seemethere

fbshipit-source-id: 85a75483d4a0bf7247368cd1bca0576a31173c62
Stack from ghstack (oldest at bottom):

- `_make_subclass` (#77970)