[build] Cleanup dependency of distributed flags #11221
orionr wants to merge 2 commits into pytorch:master
Conversation
@pytorchbot retest this please
(Several review comments on cmake/Summary.cmake, now outdated, were marked as off-topic.)
Oh BTW, USE_MPI will not affect THD and C10D currently.
(force-pushed from 8bd0085 to a48fec8)
@teng-li in that case, should I keep it separate? As in, not under the
@pytorchbot retest this please
@orionr the thing is that THD's and C10D's cmake files will not check this flag. Maybe we should let them check USE_MPI, and only detect MPI and build with it if USE_MPI is set?
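The suggestion above can be sketched in CMake. This is a hypothetical illustration, not the actual THD/C10D build files: it shows a subproject guarding MPI detection behind the top-level USE_MPI flag, so MPI is only found and linked when the flag is set.

```cmake
# Sketch for a subproject's CMakeLists.txt (e.g. THD or C10D).
# Only look for MPI when the top-level USE_MPI option is enabled;
# otherwise the subproject builds without MPI support entirely.
if(USE_MPI)
  find_package(MPI)
  if(MPI_FOUND)
    # Hypothetical target name used purely for illustration.
    target_include_directories(my_distributed_lib PRIVATE ${MPI_CXX_INCLUDE_PATH})
    target_link_libraries(my_distributed_lib PRIVATE ${MPI_CXX_LIBRARIES})
    target_compile_definitions(my_distributed_lib PRIVATE WITH_MPI)
  else()
    message(WARNING "USE_MPI was set but MPI was not found; building without MPI.")
  endif()
endif()
```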
facebook-github-bot left a comment:
orionr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
(force-pushed from d8acf40 to 7360ce2)
@pytorchbot retest this please
Summary: Now that we're building everything together, make all distributed flags conditional on USE_DISTRIBUTED being set. cc pietern cpuhrsch
Pull Request resolved: pytorch/pytorch#11221
Reviewed By: Yangqing
Differential Revision: D9664267
Pulled By: orionr
fbshipit-source-id: a296cda5746ad150028c97160f8beacba955ff73
* upstream/master: (26 commits)
  - cudnn 7 upgrade with spatialBN fix (pytorch#11291)
  - Ignore FuseGraph Call on Windows (pytorch#11015)
  - defer resolution of mkl to a cmake wrapper library (pytorch#11298)
  - Cleanup dependency of distributed flags (pytorch#11221)
  - Move minimal wrapdim functionality to core, remove THTensor include i… (pytorch#11283)
  - Change includes from ATen/Storage.h to ATen/core/Storage.h (pytorch#11217)
  - Fix scalar tensor assert in fusion compiler (pytorch#10952)
  - Add dead code elimination pass (pytorch#10101)
  - Distributed Data Parallel CPU module for C10D (pytorch#11168)
  - Back out "[pt1][tensor] Add strides to caffe2::Tensor"
  - Fix conv gradient conversion (pytorch#11312)
  - Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (pytorch#11050)
  - Sparse tensor printing; add NotImplemented autograd fn (pytorch#10181)
  - Add convertToCaffe2Proto to python API
  - fix doc for functional.dropout* (pytorch#10417)
  - typo fix Tranpose2D -> Transpose2D (pytorch#11281)
  - Remove THFinalizer
  - Forward declarations of needed curand functions (pytorch#10911)
  - nomnigraph - simplify core graph API and test (pytorch#11256)
  - Small fixes to cppdocs for sync script (pytorch#11300)
  - ...
Now that we're building everything together, make all distributed flags conditional on USE_DISTRIBUTED being set.
cc @pietern @teng-li @cpuhrsch
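The gating described in this PR can be sketched in CMake using `cmake_dependent_option`, which turns a flag off automatically when its parent condition is false. This is a minimal illustration of the pattern, not the PR's actual diff; the specific sub-flag names besides USE_DISTRIBUTED and USE_MPI are assumptions.

```cmake
# Top-level CMakeLists.txt sketch: make distributed sub-flags
# depend on USE_DISTRIBUTED so they cannot be enabled on their own.
include(CMakeDependentOption)

option(USE_DISTRIBUTED "Build with distributed support" ON)

# Each sub-flag defaults to ON, but is forced OFF (and hidden from
# the cache UI) whenever USE_DISTRIBUTED is OFF.
cmake_dependent_option(
  USE_MPI "Build with MPI distributed backend" ON
  "USE_DISTRIBUTED" OFF)
cmake_dependent_option(
  USE_GLOO "Build with Gloo distributed backend" ON
  "USE_DISTRIBUTED" OFF)
```

With this layout, `cmake -DUSE_DISTRIBUTED=OFF ..` disables every distributed backend in one step, while `-DUSE_MPI=OFF` alone still allows the other backends to build.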