[nativert] Move TensorMeta to pytorch core #152475
yiming0416 wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152475
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ You can merge normally! (1 Unrelated Failure) As of commit 004d31a with merge base 99dac70. BROKEN TRUNK: the following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D73820548
Summary: Pull Request resolved: pytorch#152475. Torch Native Runtime RFC: pytorch/rfcs#72. This diff moves `TensorMeta.cpp` and `TensorMeta.h` to PyTorch core.
Test Plan:
```
buck2 run mode/dev-nosan caffe2/test/cpp/nativert:nativert
```
Differential Revision: D73820548
Can you add some details in the description on how this is different from `export::TensorMeta` and why we need this separate one?
test/cpp/nativert/CMakeLists.txt
```cmake
)
# We also build with UBSAN flag in build_asan.h
if(USE_ASAN)
```
We actually run this under ASAN? Or is this CMake copied from somewhere else?
I copied this from test/cpp/jit/CMakeLists.txt. I guess we can remove the USE_ASAN branch if not needed?
Yes, let's remove everything we don't need from this cmake.
@albanD sure! Removed in the latest version.
```diff
 )
-libtorch_cmake_sources = libtorch_core_sources + libtorch_core_jit_sources
+libtorch_cmake_sources = libtorch_core_sources + libtorch_core_jit_sources + libtorch_nativert_sources
```
Should we make the runtime a separate .so, like other subsets of libtorch, to improve dependency management? In particular, do you expect your users to only link against your function (and transitively get the rest of libtorch), or do you expect them to use the whole of libtorch? cc @malfet
I think we expect the users to use the whole of libtorch. cc @zhxchen17
I think we expect users to use the whole of libtorch.
@albanD Added details in the PR summary
Summary: Pull Request resolved: pytorch#152475. Torch Native Runtime RFC: pytorch/rfcs#72. This diff moves `TensorMeta.cpp` and `TensorMeta.h` to PyTorch core.
Test Plan: Internal CI. GitHub CI with the newly added test under `test/cpp/nativert/test_tensor_meta.cpp`.
Differential Revision: D73820548
@albanD Can you please review again? I have made the APIs private and added the corresponding source file to the test cmake file. If there are any other suggested changes before merging, I'd be happy to make those. Thanks!
```cpp
c10::ScalarType convertJsonScalarType(
    const torch::_export::ScalarType& scalarType) {
  switch (scalarType) {
    case torch::_export::ScalarType::UNKNOWN:
      TORCH_CHECK(false, "scalar type is not properly set");
    case torch::_export::ScalarType::BYTE:
      return c10::ScalarType::Byte;
    case torch::_export::ScalarType::CHAR:
      return c10::ScalarType::Char;
    case torch::_export::ScalarType::SHORT:
      return c10::ScalarType::Short;
    case torch::_export::ScalarType::INT:
      return c10::ScalarType::Int;
    case torch::_export::ScalarType::LONG:
      return c10::ScalarType::Long;
    case torch::_export::ScalarType::HALF:
      return c10::ScalarType::Half;
    case torch::_export::ScalarType::FLOAT:
      return c10::ScalarType::Float;
    case torch::_export::ScalarType::DOUBLE:
      return c10::ScalarType::Double;
    case torch::_export::ScalarType::COMPLEXHALF:
      return c10::ScalarType::ComplexHalf;
    case torch::_export::ScalarType::COMPLEXFLOAT:
      return c10::ScalarType::ComplexFloat;
    case torch::_export::ScalarType::COMPLEXDOUBLE:
      return c10::ScalarType::ComplexDouble;
    case torch::_export::ScalarType::BOOL:
      return c10::ScalarType::Bool;
    case torch::_export::ScalarType::BFLOAT16:
      return c10::ScalarType::BFloat16;
    case torch::_export::ScalarType::UINT16:
      return c10::ScalarType::UInt16;
    case torch::_export::ScalarType::FLOAT8E4M3FN:
      return c10::ScalarType::Float8_e4m3fn;
    case torch::_export::ScalarType::FLOAT8E5M2:
      return c10::ScalarType::Float8_e5m2;
    default:
      TORCH_CHECK(false, "unknown scalar type", static_cast<int>(scalarType));
  }
}
```
```cpp
c10::MemoryFormat convertJsonMemoryFormat(
    const torch::_export::MemoryFormat& memoryFormat) {
  switch (memoryFormat) {
    case torch::_export::MemoryFormat::Unknown:
      // Error message fixed: this converter handles memory formats, not
      // scalar types.
      TORCH_CHECK(false, "got unknown memory format");
    case torch::_export::MemoryFormat::ContiguousFormat:
      return c10::MemoryFormat::Contiguous;
    case torch::_export::MemoryFormat::ChannelsLast:
      return c10::MemoryFormat::ChannelsLast;
    case torch::_export::MemoryFormat::ChannelsLast3d:
      return c10::MemoryFormat::ChannelsLast3d;
    case torch::_export::MemoryFormat::PreserveFormat:
      return c10::MemoryFormat::Preserve;
    default:
      TORCH_CHECK(
          false, "unknown memory format", static_cast<int>(memoryFormat));
  }
}
```
```cpp
c10::Layout convertJsonLayout(const torch::_export::Layout& layout) {
  switch (layout) {
    case torch::_export::Layout::Unknown:
      TORCH_CHECK(false, "got unknown layout");
    case torch::_export::Layout::SparseCoo:
      // TODO is this the right translation
      return c10::Layout::Sparse;
    case torch::_export::Layout::SparseCsr:
      return c10::Layout::SparseCsr;
    case torch::_export::Layout::SparseCsc:
      return c10::Layout::SparseCsc;
    case torch::_export::Layout::SparseBsr:
      return c10::Layout::SparseBsr;
    case torch::_export::Layout::SparseBsc:
      return c10::Layout::SparseBsc;
    case torch::_export::Layout::_mkldnn:
      return c10::Layout::Mkldnn;
    case torch::_export::Layout::Strided:
      return c10::Layout::Strided;
    default:
      TORCH_CHECK(false, "unknown layout", static_cast<int>(layout));
  }
}
```
```cpp
c10::Device convertJsonDevice(const torch::_export::Device& device) {
  c10::Device d(device.get_type());
  if (auto index = device.get_index()) {
    d.set_index(*index);
  }
  return d;
}
```
OK, as long as we don't make them public from here, that sounds like a good tradeoff for now.
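The converters reviewed above all share one pattern: an exhaustive switch over the serialized enum that fails loudly on unknown or unexpected values. Below is a minimal self-contained sketch of that pattern using hypothetical `JsonScalarType`/`CoreScalarType` enums as stand-ins; the real converters use `torch::_export` and `c10` types and `TORCH_CHECK` instead of a plain exception.

```cpp
#include <stdexcept>

// Hypothetical serialized-side enum, standing in for torch::_export::ScalarType.
enum class JsonScalarType { UNKNOWN, FLOAT, DOUBLE };

// Hypothetical in-memory enum, standing in for c10::ScalarType.
enum class CoreScalarType { Float, Double };

// Exhaustive switch: UNKNOWN and unexpected values fail loudly, playing the
// role that TORCH_CHECK(false, ...) plays in the PR's converters.
inline CoreScalarType convertScalarType(JsonScalarType t) {
  switch (t) {
    case JsonScalarType::FLOAT:
      return CoreScalarType::Float;
    case JsonScalarType::DOUBLE:
      return CoreScalarType::Double;
    case JsonScalarType::UNKNOWN:
    default:
      throw std::runtime_error("scalar type is not properly set");
  }
}
```

Centralizing each mapping in a single switch keeps the serialized schema decoupled from the runtime types: the serde enum can evolve with the schema while the runtime keeps consuming `c10`-style values.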
Summary: Pull Request resolved: pytorch#152475. Torch Native Runtime RFC: pytorch/rfcs#72. This diff moves `TensorMeta.cpp` and `TensorMeta.h` to PyTorch core.
Test Plan: Internal CI. GitHub CI with the newly added test under `test/cpp/nativert/test_tensor_meta.cpp`.
Reviewed By: zhxchen17
Differential Revision: D73820548
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
Torch Native Runtime RFC: pytorch/rfcs#72
This diff moves `TensorMeta.cpp` and `TensorMeta.h` to PyTorch core under `torch/nativert/graph/`.
The existing `torch::_export::TensorMeta` in `torch/csrc/utils/generated_serialization_types.h` is auto-generated from the export serde schema and therefore only contains the most basic serializable types. We need the newly added `TensorMeta.cpp` to deserialize that metadata into an in-memory class with c10 types so that it can be consumed by the runtime later.
Test Plan:
Added test under `test/cpp/nativert/test_tensor_meta.cpp`
Differential Revision: D73820548
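The motivation in the summary (basic serializable types in, richer runtime-friendly types out) can be sketched as a thin wrapper class that deserializes once and is then consumed read-only by the runtime. Everything below is a hand-written illustration with hypothetical stand-in types (`SerdeTensorMeta`, a two-value `ScalarType`), not the actual `torch/nativert` API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the serde-side struct: only basic serializable
// types, mirroring the shape of torch::_export::TensorMeta.
struct SerdeTensorMeta {
  std::vector<int64_t> sizes;
  std::vector<int64_t> strides;
  int scalar_type = 0;  // raw enum value from the serialized schema
};

// Hypothetical stand-in for a richer in-memory dtype (the c10::ScalarType role).
enum class ScalarType { Float, Double };

// In-memory view of the metadata, built once from the serde struct. This is an
// illustration of the design, not the real nativert TensorMeta class.
class TensorMeta {
 public:
  explicit TensorMeta(const SerdeTensorMeta& serde)
      : sizes_(serde.sizes),
        strides_(serde.strides),
        // Assumed mapping for this sketch: 0 -> Float, anything else -> Double.
        dtype_(serde.scalar_type == 0 ? ScalarType::Float
                                      : ScalarType::Double) {}

  const std::vector<int64_t>& sizes() const { return sizes_; }
  const std::vector<int64_t>& strides() const { return strides_; }
  ScalarType dtype() const { return dtype_; }

  // Number of elements implied by sizes (useful when allocating storage).
  int64_t numel() const {
    int64_t n = 1;
    for (int64_t s : sizes_) {
      n *= s;
    }
    return n;
  }

 private:
  std::vector<int64_t> sizes_;
  std::vector<int64_t> strides_;
  ScalarType dtype_;
};
```

Doing the conversion in the constructor means the runtime never touches raw serde values after load time, which is the tradeoff the reviewers accepted as long as these helpers stay private.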