[Onnx] - refactoring serialization of ONNX initializers to be name-based #17420
spandantiwari wants to merge 8 commits into pytorch:master
Conversation
houseroad
left a comment
Overall, the idea looks good to me. Could you fix the build issue?
@houseroad - yes, working on it.
Force-pushed from cbf0ff6 to ba538db
Submitted a fix for the build issue.
Seeing test failure(s). Investigating...
Force-pushed from ea85ca5 to e9ee672
A couple of ONNX export tests are failing. @houseroad - I also saw an unexpected SIGFPE-type failure in the last build (possibly similar to the one you saw).
@houseroad - I am seeing that SIGFPE-type failure again. Could you please take a look and suggest what can be done to fix it? Thanks.
Force-pushed from 55038e3 to 0acd0e3
Rebased to master.
houseroad
left a comment
If we would like to preserve the order, we may try using an ordered map (OrderedDict in Python).
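The reviewer's suggestion can be sketched as follows. This is a minimal, hypothetical example — the initializer names and placeholder values are invented, not taken from the actual exporter:

```python
from collections import OrderedDict

# Hypothetical initializer names mapped to placeholder tensor data.
initializers = OrderedDict()
initializers["conv1.weight"] = [[0.0, 0.0, 0.0]] * 3
initializers["conv1.bias"] = [0.0, 0.0, 0.0]

# Passes can look up (or replace) an initializer by name...
initializers["conv1.bias"] = [1.0, 1.0, 1.0]

# ...while iteration still follows the original insertion order,
# so the serialized order is unchanged.
names_in_order = list(initializers.keys())
```

(Since Python 3.7, plain dicts also preserve insertion order; at the time of this PR, `OrderedDict` was the portable way to get both name-based lookup and stable ordering.)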
Force-pushed from 0acd0e3 to 6c1a725
Updated the export method to use a map.
@pytorchbot retest this please
houseroad
left a comment
Looks good! This is a great change.
facebook-github-bot
left a comment
@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@houseroad - thanks for your help!
upstream/master: (24 commits)
* Automatic update of fbcode/onnx to 96c58ceeacf0f2b73d752e413e4fd78787a12da3 (pytorch#17676)
* Set the default ONNX opset to the latest stable opset (i.e., 9) (pytorch#17736)
* Add module attributes (pytorch#17309)
* - refactoring serialization of ONNX initializers to be name-based (pytorch#17420)
* ONNX Export for Max and Average Pooling in CEIL_MODE
* use flake8-mypy (pytorch#17721)
* use fp16<->fp32 intrinsic (pytorch#17496)
* Implement a Caffe2 standalone LSTM operator (pytorch#17726)
* caffe2:libtorch_cuda depends on caffe2:caffe2_gpu (pytorch#17729)
* add tensor and cost inference functions (pytorch#17684)
* ONNX Export Narrow op
* Keep the dim_type of hinted shape as BATCH if possible (pytorch#17734)
* fix different round behavior on CPU and GPU pytorch#16498 (pytorch#17443)
* Warn about memory overlaps on expanded tensors (pytorch#17576)
* fix exp fam. formula
* refactor caffe2 operator constructors - 10/9 (pytorch#17659)
* Improve ONNX symbolic for logsoftmax and softmax (pytorch#17672)
* Enable using CMD when building cpp extensions on Windows
* Do not rename net boundary inputs/outputs during ssaRewrite. (pytorch#17545)
* Reapply D14078519 (pytorch#17596)
* ...
Currently, serialization of model parameters in ONNX export depends on the order in which they are stored in a container (a `list` on the Python side and a `std::vector` on the C++ side). This has worked fine till now, but if we need to run any pass on the graph that mutates the parameter list, then strictly order-based serialization may not work. This PR is the first in a set to bring in more passes (such as constant folding) related to ONNX export. It lays the groundwork by moving serialization in ONNX export from an order-based to a name-based approach, which is more amenable to such passes.
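To illustrate the difference the description draws, here is a minimal, hypothetical sketch — the graph input names and parameter values are invented and this is not the actual exporter code:

```python
# Order-based pairing: trailing graph inputs are zipped against a parameter
# list, so any pass that reorders or drops a parameter mis-assigns tensors.
graph_input_names = ["input", "fc.weight", "fc.bias"]  # first entry: the real input
param_list = [[[1.0, 2.0]], [0.5]]                     # must match the order exactly
order_based = dict(zip(graph_input_names[1:], param_list))

# Name-based pairing: each initializer is looked up by name, which survives
# passes that mutate the parameter container.
params_by_name = {"fc.bias": [0.5], "fc.weight": [[1.0, 2.0]]}
name_based = {n: params_by_name[n] for n in graph_input_names[1:] if n in params_by_name}

# The two agree only while the list order happens to line up with the names.
assert order_based == name_based
```

A pass such as constant folding can remove or reorder parameters; the name-based dictionary comprehension still pairs each surviving initializer correctly, while the `zip` would silently shift every assignment.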
@houseroad - As discussed, this change uses a map for export and removes the code from `export.cpp` that relies on the order to compute initializer names.