Add support for save and load mkldnn modules#20799
Conversation
Differential Revision: D15447891 Differential Version: 82663074
dzhulgakov
left a comment
Btw, what happens if I just try to .save() a mkldnn tensor? Does it magically convert to_dense, or do we produce a nice error message?
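For context, a minimal sketch of the layout round-trip behind this question (assuming a PyTorch build with MKL-DNN support; this is illustrative, not the PR's implementation): mkldnn tensors use an opaque layout, and `to_dense()` is the escape hatch back to a regular strided tensor.

```python
import torch

# Sketch: mkldnn tensors use an opaque layout, so a common pattern is
# to convert to a dense (strided) tensor before serializing and back
# to mkldnn after loading.
x = torch.randn(4, 4)
y = x.to_mkldnn()         # opaque mkldnn layout; needs MKL-DNN support
z = y.to_dense()          # back to a regular strided tensor
assert torch.equal(x, z)  # the round-trip preserves values
```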
@pytorchbot retest this please
@pytorchbot retest this please
This pull request has been merged in 63585c3.
```python
class MkldnnLinear(torch.jit.ScriptModule):
    def __init__(self, dense_module):
        super(MkldnnLinear, self).__init__()
        self.register_buffer('weight', dense_module.weight.to_mkldnn())
```
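The snippet above holds the weight in mkldnn layout as a buffer. A hedged sketch of how save/load could round-trip such a module through dense tensors (the class name `MkldnnLinearSketch` and the plain-pickle flow are illustrative assumptions, not the PR's actual implementation):

```python
import pickle
import torch

class MkldnnLinearSketch(torch.nn.Module):
    """Hypothetical sketch: keep the weight in mkldnn layout at runtime,
    but serialize it as a plain dense tensor."""
    def __init__(self, dense_module):
        super().__init__()
        self.register_buffer('weight', dense_module.weight.detach().to_mkldnn())

    def __getstate__(self):
        # The opaque mkldnn layout is not directly picklable,
        # so export the weight as a dense tensor.
        return {'weight': self.weight.to_dense()}

    def __setstate__(self, state):
        super().__init__()  # re-create the empty Module bookkeeping
        self.register_buffer('weight', state['weight'].to_mkldnn())

lin = torch.nn.Linear(4, 3)
m = MkldnnLinearSketch(lin)
restored = pickle.loads(pickle.dumps(m))
assert torch.allclose(restored.weight.to_dense(), lin.weight.detach())
```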
@bddppq , I wonder why we register the weight as a buffer rather than a parameter; a mkldnn module cannot be trained if its weight is registered as a buffer. Can you tell me when the jit save path will be used?
@bddppq , I have tried registering the weight as a parameter so I could run backward. The backward operation runs, but jit save and load then have some problems. Can you give me some advice? Thanks!
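To illustrate the buffer-vs-parameter distinction raised above (a generic sketch, not this PR's code): buffers are serialized with the module's state dict, but they are invisible to `parameters()`, so an optimizer will never update a weight registered as a buffer.

```python
import torch

lin = torch.nn.Linear(3, 2)

m = torch.nn.Module()
# Register the (detached) weight as a buffer, as MkldnnLinear does.
m.register_buffer('weight', lin.weight.detach())

# Buffers are saved in the state dict...
assert 'weight' in m.state_dict()
# ...but they are not parameters, so optimizers skip them entirely.
assert len(list(m.parameters())) == 0
```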
Stack:
:white_circle: #20820 Add mkldnn sigmoid operator 💚
:white_circle: #20800 Enable torch.jit.trace for mkldnn modules 💚
:black_circle: #20799 Add support for save and load mkldnn modules 💛
Pull Request resolved: #20799
Differential Revision: D15447891