
[docs] Warn about memory overlaps on expanded tensors #17576

Closed
zou3519 wants to merge 2 commits into pytorch:master from zou3519:memwarning

Conversation

@zou3519 (Contributor) commented Feb 28, 2019

Eventually we should remove these when we're certain that all our ops
handle memory overlaps correctly.

Test Plan:
Build the docs
[screenshot of the rendered warning in the built docs]
(Ignore the font, there's something wrong with building the docs on my machine)
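The warning being added covers views like `expand`, which repeat elements without copying. A minimal sketch (assuming PyTorch is installed) of how the overlap shows up:

```python
import torch

# expand() repeats a size-1 dimension without allocating new memory:
# every position of the expanded view aliases the same storage element,
# which is visible as a stride of 0 along the expanded dimension.
base = torch.zeros(1)
expanded = base.expand(3)
print(expanded.stride())  # (0,)

# A write through the base tensor shows up at every overlapping position
# of the view; in-place ops on such views are what the doc warning
# cautions against.
base[0] = 7.0
print(expanded)  # tensor([7., 7., 7.])
```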

@zou3519 zou3519 requested a review from fmassa February 28, 2019 15:16
@fmassa (Member) left a comment

The message looks good to me.
But I think we might also want to add it to unfold, in which case maybe we could factor it out into its own string and reuse it?

@zou3519 (Contributor, Author) commented Feb 28, 2019

Good point. The warning becomes slightly different because, in the case of unfold, the tensor isn't "broadcast" or "expanded", so I think I'll just copy it.

@fmassa (Member) commented Feb 28, 2019

Well, the tensor is unfolded :-D But no worries, I'm OK with copying the text.
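For reference, `unfold` with a step smaller than the window size also yields overlapping views into the same storage rather than copies. A small sketch (assuming PyTorch is installed):

```python
import torch

# unfold() extracts sliding windows as views into the original storage.
# With size=3 and step=1 the windows overlap, so the same element is
# reachable from several positions of the result.
t = torch.arange(5.0)
windows = t.unfold(dimension=0, size=3, step=1)
print(windows.shape)  # torch.Size([3, 3])

# t[1] sits in both the first and second window; mutating the source
# is visible through both, which is why a similar warning applies.
t[1] = 99.0
print(windows[0, 1].item(), windows[1, 0].item())  # 99.0 99.0
```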

@facebook-github-bot (Contributor) left a comment

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

petrex pushed a commit to petrex/pytorch that referenced this pull request Mar 7, 2019
* upstream/master: (24 commits)
  Automatic update of fbcode/onnx to 96c58ceeacf0f2b73d752e413e4fd78787a12da3 (pytorch#17676)
  Set the default ONNX opset to the latest stable opset (i.e., 9) (pytorch#17736)
  Add module attributes (pytorch#17309)
  - refactoring serialization of ONNX initializers to be name-based (pytorch#17420)
  ONNX Export for Max and Average Pooling in CEIL_MODE
  use flake8-mypy (pytorch#17721)
  use fp16<->fp32 intrinsic (pytorch#17496)
  Implement a Caffe2 standalone LSTM operator (pytorch#17726)
  caffe2:libtorch_cuda depends on caffe2:caffe2_gpu (pytorch#17729)
  add tensor and cost inference functions (pytorch#17684)
  ONNX Export Narrow op
  Keep the dim_type of hinted shape as BATCH if possible (pytorch#17734)
  fix different round behavior on CPU and GPU pytorch#16498 (pytorch#17443)
  Warn about memory overlaps on expanded tensors (pytorch#17576)
  fix exp fam. formula
  refactor caffe2 operator constructors - 10/9 (pytorch#17659)
  Improve ONNX symbolic for logsoftmax and softmax (pytorch#17672)
  Enable using CMD when building cpp extensions on Windows
  Do not rename net boundary inputs/outputs during ssaRewrite. (pytorch#17545)
  Reapply D14078519 (pytorch#17596)
  ...