
[build] Cleanup dependency of distributed flags #11221

Closed
orionr wants to merge 2 commits into pytorch:master from orionr:cleanup-distributed-flags

Conversation

orionr (Contributor) commented Sep 4, 2018

Now that we're building everything together, this makes all distributed flags conditional on USE_DISTRIBUTED being set.

cc @pietern @teng-li @cpuhrsch
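The gating described here might look roughly like the following in CMake. This is a hypothetical sketch, not the actual PyTorch build code; the sub-flag names and help strings are illustrative:

```cmake
# Top-level switch for all distributed functionality.
option(USE_DISTRIBUTED "Build with distributed support" ON)

include(CMakeDependentOption)

# Sub-flags that are forced OFF whenever USE_DISTRIBUTED is OFF,
# so no distributed component is configured or built in that case.
cmake_dependent_option(USE_GLOO "Build the Gloo backend" ON
                       "USE_DISTRIBUTED" OFF)
cmake_dependent_option(USE_MPI "Build the MPI backend" ON
                       "USE_DISTRIBUTED" OFF)
```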

orionr (Contributor, Author) commented Sep 4, 2018

@pytorchbot retest this please


teng-li (Contributor) commented Sep 4, 2018

Oh BTW USE_MPI will not affect THD and C10D currently.

orionr force-pushed the cleanup-distributed-flags branch 2 times, most recently from 8bd0085 to a48fec8 on September 4, 2018 19:45
orionr (Contributor, Author) commented Sep 4, 2018

@teng-li in that case, should I keep it separate? As in, not under the USE_DISTRIBUTED check? Let me know.

orionr (Contributor, Author) commented Sep 4, 2018

@pytorchbot retest this please

teng-li (Contributor) commented Sep 5, 2018

@orionr the thing is that THD's and C10D's cmake files will not check this flag. Maybe we should have them check USE_MPI, and only detect MPI and build with it if USE_MPI is set?
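The check suggested here could be sketched like this inside the THD/C10D cmake files (a hypothetical sketch under the assumptions in this thread, not the actual diff; the `USE_C10D_MPI` define name is made up):

```cmake
# Only detect MPI, and only build against it, when USE_MPI is set.
if(USE_MPI)
  find_package(MPI)
  if(MPI_FOUND)
    add_definitions(-DUSE_C10D_MPI)   # hypothetical define name
    include_directories(${MPI_CXX_INCLUDE_PATH})
  else()
    message(WARNING "USE_MPI is set but no MPI installation was found; "
                    "building without MPI support.")
    set(USE_MPI OFF)
  endif()
endif()
```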

facebook-github-bot (Contributor) left a comment

orionr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

orionr force-pushed the cleanup-distributed-flags branch from d8acf40 to 7360ce2 on September 6, 2018 00:03
facebook-github-bot (Contributor) left a comment

orionr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

orionr (Contributor, Author) commented Sep 6, 2018

@pytorchbot retest this please

zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 6, 2018
Summary:
Now that we're building everything together, this makes all distributed flags conditional on USE_DISTRIBUTED being set.

cc pietern teng-li cpuhrsch
Pull Request resolved: pytorch/pytorch#11221

Reviewed By: Yangqing

Differential Revision: D9664267

Pulled By: orionr

fbshipit-source-id: a296cda5746ad150028c97160f8beacba955ff73
petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 6, 2018
* upstream/master: (26 commits)
  cudnn 7 upgrade with spatialBN fix (pytorch#11291)
  Ignore FuseGraph Call on Windows (pytorch#11015)
  defer resolution of mkl to a cmake wrapper library (pytorch#11298)
  Cleanup dependency of distributed flags (pytorch#11221)
  Move minimal wrapdim functionality to core, remove THTensor include i… (pytorch#11283)
  Change includes from ATen/Storage.h to ATen/core/Storage.h (pytorch#11217)
  Fix scalar tensor assert in fusion compiler (pytorch#10952)
  Add dead code elimination pass (pytorch#10101)
  Distributed Data Parallel CPU module for C10D (pytorch#11168)
  Back out "[pt1][tensor] Add strides to caffe2::Tensor"
  Fix conv gradient conversion (pytorch#11312)
  Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (pytorch#11050)
  Sparse tensor printing; add NotImplemented autograd fn (pytorch#10181)
  Add convertToCaffe2Proto to python API
  fix doc for functional.dropout* (pytorch#10417)
  typo fix Tranpose2D -> Transpose2D (pytorch#11281)
  Remove THFinalizer
  Forward declarations of needed curand functions (pytorch#10911)
  nomnigraph - simplify core graph API and test (pytorch#11256)
  Small fixes to cppdocs for sync script (pytorch#11300)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Summary:
Now that we're building everything together, this makes all distributed flags conditional on USE_DISTRIBUTED being set.

cc pietern teng-li cpuhrsch
Pull Request resolved: pytorch#11221

Reviewed By: Yangqing

Differential Revision: D9664267

Pulled By: orionr

fbshipit-source-id: a296cda5746ad150028c97160f8beacba955ff73
@ezyang ezyang added the merged label Jun 26, 2019