
Fix shape inference bug#9199

Closed
hlu1 wants to merge 1 commit intopytorch:masterfrom
hlu1:export-D8743346

Conversation

@hlu1
Contributor

@hlu1 hlu1 commented Jul 6, 2018

Summary:
The input shapes are not logged correctly in production because `PerfNetObserver::Stop()` only gets called after inference is done for the whole net, and in mobile models it's common practice to reuse blobs as much as possible to save memory. As a result, the shapes of the blobs keep changing during inference. By the time you query `InputTensorShapes()` in `PerfNetObserver::Stop()`, you only get the final shape of each blob.

To fix this bug, I moved the `InputTensorShapes()` query from `PerfNetObserver::Stop()` to `PerfOperatorObserver::Stop()`. The latter gets called at the end of each `operator->run()`, whereas `PerfNetObserver::Stop()` gets called at the end of `net->run()`.

Also remove `PerfOperatorObserver::getAnalyticalCost()`, since the analytical cost is now computed on the server side and is no longer needed on mobile.

Differential Revision: D8743346

@hlu1 hlu1 force-pushed the export-D8743346 branch from 2221962 to c6f8ca0 Compare July 6, 2018 01:03
@hlu1 hlu1 force-pushed the export-D8743346 branch from c6f8ca0 to 477d009 Compare July 6, 2018 01:06
@hlu1 hlu1 force-pushed the export-D8743346 branch from 477d009 to 1071fe2 Compare July 6, 2018 03:20
@hlu1 hlu1 force-pushed the export-D8743346 branch from 1071fe2 to 4a0911f Compare July 6, 2018 19:47
Summary:
Closes pytorch#9199

Differential Revision: D8743346

fbshipit-source-id: cf28493bbb3d1b48903353dc3a1d86f96f4a699d
@hlu1 hlu1 force-pushed the export-D8743346 branch from 4a0911f to e511aeb Compare July 6, 2018 20:40
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
Summary:
Closes pytorch#9199

Reviewed By: Maratyszcza

Differential Revision: D8743346

fbshipit-source-id: 5d2d0132e3f5e084be7d0173863e695e62a6b4a0