Conversation
Differential Revision: D15476434 Differential Version: 83701491
Differential Revision: D15476433 Differential Version: 83738374
This is a re-export of #20871 because some technical error screwed up that PR.
zdevito left a comment:
After seeing this patch, I am still skeptical of the value add compared to the noise this change is adding. So many places look non-idiomatic for C++. The changes to register_prim and register_special also suggest that users will have to write quite a bit differently as well.
This pull request has been merged in b527e48.
Summary: Pull Request resolved: pytorch/pytorch#21177
- Integrate c10::ListPtr into IValue and the c10 dispatcher.
- Streamline conversion to/from IValue. Before, we had IValue::to<> and kernel_functor.h had its own ivalue_to_arg_type and return_type_to_ivalue. They are now unified. This also means that nested types like Dicts of Lists of Optionals of Dicts of ... now work as expected.

Differential Revision: D15476433
fbshipit-source-id: bde9df80df20091aa8e6ae17ba7e90abd149b954
```diff
- for (const auto& b_element : b->elements()) {
-   ret.push_back(b_element);
+ for (T b_element : b) {
+   ret.push_back(std::move(b_element));
```
This is a functional change that I don't think is correct without checking that the list being moved from has only a single use. See feedback from David and Michael on #21690
This isn't moving from the list. It's copying from the list into b_element and then moving from b_element.
Out of curiosity, why was this code changed from the way it was originally written?
I'd have expected something like:
```cpp
for (const T& b_element : b) {
  ret.push_back(b_element);
}
```
Some ListPtr<T> (e.g. ListPtr<string> or nested lists/dicts) store their elements not as T but as IValues. When iterating over them, they have to create instances of T on the fly. This means we're copying T.
If the loop is for (T e : list), then this instance is directly created as e and can be moved from there. If the loop is for (const T& e : list), then the instance is also created as e, but now we can't move from it, so we need a second copy.
Summary:
I have some test code in there as well, along with a script "test_libtorch" to run it. You'll need to modify `test_libtorch` to point to where you have `pytorch` built. I currently require that `pybind11` is included as a subdirectory of the test, but added it to the `.gitignore` to make this reviewable.
Currently, something like this works:
```cpp
#include <cstdint>
#include <iostream>

struct Foo {
  int x, y;
  Foo() : x(2), y(5) {}
  Foo(int x_, int y_) : x(x_), y(y_) {}
  void display() {
    std::cout << "x: " << x << ' ' << "y: " << y << std::endl;
  }
  int64_t add(int64_t z) {
    return (x + y) * z;
  }
  void combine(Foo other) {  // see issues below: by-value parameters aren't supported
    x += other.x;
    y += other.y;
  }
};

static auto test = torch::jit::class_<Foo>("Foo")
    .def(torch::jit::init<int64_t, int64_t>())
    .def("display", &Foo::display)
    .def("add", &Foo::add)
    .def("combine", &Foo::combine);
```
with
```py
@torch.jit.script
def f(x):
val = torch._C.Foo(5, 3)
val.display()
print(val.add(3))
```
results in
```
x: 5 y: 3
24
```
Current issues:
- [x] The Python class created by TorchScript doesn't interact properly with the surrounding code.
```py
@torch.jit.script
def f(x):
val = torch._C.Foo(5, 3)
return val
```
- [x] Doesn't properly take in non-pointer classes. This function signature can't be defined in C++ (I believe we don't want to support this anyway):
```cpp
void combine(Foo x) {
```
- [x] Has some issues with memory for blobs when constructing multiple objects (fix constant propagation pass to not treat capsules as the same object).
```py
@torch.jit.script
def f(x):
val = torch._C.Foo(5, 3)
val2 = torch._C.Foo(100, 0)
val.display()
print(val.add(3))
```
- [ ] Can't define multiple constructors (we'd need to define an overload string, which is currently not possible since we don't support overloaded methods).
- [x] `init` has slightly different syntax than `pybind11`: `.init<...>()` instead of `.def(py::init<>())`.
- [x] I couldn't figure out how to add some files into the build so they'd be copied to the `include/` directories, so I symlinked them manually.
- [ ] Currently, the conversion from Python into Torchscript doesn't work.
- [ ] Torchbind also currently requires Python/Pybind dependency. Fixing this would probably involve some kind of macro to bind into Python when possible.
- [ ] We pass back into Python by value, currently. There's no way of passing by reference.
- [x] Currently can only register one method with the same type signature. This is because we create a `static auto opRegistry`, and the function is templated on the type signature.
Somewhat blocked on #21177. We currently use some structures that will be refactored by that PR (namely `return_type_to_ivalue` and `ivalue_to_arg_type`).
Pull Request resolved: #21098
Differential Revision: D16634872
Pulled By: Chillee
fbshipit-source-id: 1408bb89ea649c27d560df59e2cf9920467fe1de
Stack:
- #21177 Use c10::List