
Fixed a formatting issue in doc comments #17505

Closed
brianjo wants to merge 2 commits into pytorch:master from brianjo:master

Conversation

Contributor

@brianjo brianjo commented Feb 26, 2019

for torch.distributed.broadcast_multigpu per issue #17243

Contributor Author

brianjo commented Feb 26, 2019

Updated block: [screenshot: mgpufix]

Collaborator

@soumith soumith left a comment

lint is failing.

Collaborator

soumith commented Feb 27, 2019

$ flake8
./torch/distributed/distributed_c10d.py:668:81: W291 trailing whitespace
./torch/distributed/distributed_c10d.py:669:77: W291 trailing whitespace
./torch/distributed/distributed_c10d.py:670:82: W291 trailing whitespace
The command "flake8" exited with 1.
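W291 flags any line whose raw text differs from its right-stripped form, reporting the column of the first trailing character. A minimal sketch of the check and the fix (hypothetical helper names, not flake8's internals; the column convention is approximated from the report above):

```python
def find_trailing_whitespace(source: str):
    """Return (line_number, column) pairs where a W291-style check fires."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.rstrip()
        if stripped != line:
            # Column of the first trailing whitespace character, 1-based.
            hits.append((lineno, len(stripped) + 1))
    return hits


def strip_trailing_whitespace(source: str) -> str:
    """Right-strip every line, preserving a trailing newline if present."""
    ending = "\n" if source.endswith("\n") else ""
    return "\n".join(line.rstrip() for line in source.splitlines()) + ending
```

Running `strip_trailing_whitespace` over the offending file is the one-line version of the follow-up commit that made lint pass.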

Contributor

pietern commented Mar 4, 2019

@pytorchbot retest this please

Contributor

@facebook-github-bot facebook-github-bot left a comment

@pietern is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

petrex pushed a commit to petrex/pytorch that referenced this pull request Mar 14, 2019
* upstream/master: (87 commits)
  Make Variable::set_data non-const; cosmetic fixes.
  remove warning for upsample code (pytorch#17921)
  Optimize TileOp (pytorch#17290)
  Optimize channel_stats_op (pytorch#16243)
  enable shape inference for elementwise operators (pytorch#17885)
  Remove remaining test jit expects redux (pytorch#17924)
  Handle Scalars Better (pytorch#17875)
  Fixed a formatting issue in doc comments (pytorch#17505)
  Add nbytes, itemsize, element_size to at::Tensor. (pytorch#17810)
  Fix lint in test_distributions.py
  Fix lint in test_jit.py
  Fix lint errors in test_autograd
  Added a few extra python bindings to help with walking the IR graph from Python (pytorch#17822)
  kthvalue consistency with sort in the presence of NaN (pytorch#17824)
  Fix minor grammatical mistakes in torch/nn/modules/loss.py (pytorch#17892)
  Remove (almost all) TensorOptions from native_functions.yaml (pytorch#17385)
  Restore full Windows tests (pytorch#17102)
  Prevent VS2017 from emitting ambiguous symbol errors (second time)
  Fix windows test hang (pytorch#17778)
  torch.btrifact for tensors with greater than 3 dimensions (pytorch#14964)
  ...
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
for torch.distributed.broadcast_multigpu per issue pytorch#17243
Pull Request resolved: pytorch#17505

Reviewed By: janewangfb

Differential Revision: D14373865

Pulled By: pietern

fbshipit-source-id: 6d7e91a3da50a7c9ba417ad852f7746eb5200043

5 participants