Adds __getattr__ to DataParallel to forward to self.module #1341

Closed
elistevens wants to merge 1 commit into pytorch:master from elistevens:feature/dataparallel_getattr_v0.11
Conversation

@elistevens
Contributor

Helps keep the use of DataParallel transparent to other code.

@elistevens
Contributor Author

I should note that the Travis builds will skip the relevant test, since it depends on CUDA.

@apaszke
Contributor

apaszke commented Apr 24, 2017

We've discussed that and decided that we'd rather keep attribute access explicit. If you'd like that behavior, you can always subclass DataParallel and implement __getattr__ there. Thanks!
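The subclassing approach suggested above can be sketched as follows. This is a minimal illustration, not code from the PR; the class name `TransparentDataParallel` is hypothetical:

```python
import torch.nn as nn

class TransparentDataParallel(nn.DataParallel):
    """Hypothetical subclass that forwards unknown attribute lookups
    to the wrapped module, keeping DataParallel mostly transparent."""

    def __getattr__(self, name):
        try:
            # nn.Module stores parameters, buffers, and submodules in
            # internal dicts, so defer to its __getattr__ first (this is
            # also how self.module itself is resolved).
            return super().__getattr__(name)
        except AttributeError:
            # Fall back to the wrapped module's own attributes.
            return getattr(self.module, name)
```

With this, an attribute set on the wrapped module (e.g. a custom hyperparameter field) remains reachable through the wrapper without spelling out `wrapper.module.attr`.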

@apaszke apaszke closed this Apr 24, 2017
@elistevens
Contributor Author

Given the somewhat odd/obscure spelling, would a PR with the explicit subclass be accepted?

@soumith
Collaborator

soumith commented Apr 25, 2017

No, the subclassed PR won't be accepted into the core.

@elistevens
Contributor Author

Ah, bummer. Thanks for letting me know.

@fmassa
Member

fmassa commented Apr 25, 2017

@elistevens Note that an implicit getattr there could be very misleading in some cases. For example, when serializing a model wrapped in DataParallel, the parameter names are saved with a `module.` prefix, so if you then try to load those parameters into a model without DataParallel, it will error out. With the implicit getattr, that mismatch would be harder to debug.
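The serialization pitfall described above is easy to reproduce; a minimal sketch (none of this code is part of the PR):

```python
import torch.nn as nn

model = nn.Linear(4, 2)
dp = nn.DataParallel(model)

# Under DataParallel the wrapped module is registered as "module",
# so every state_dict key gains a "module." prefix
# (e.g. "module.weight" instead of "weight").
wrapped_keys = list(dp.state_dict().keys())

# Loading that state_dict into a bare model fails on the key names
# unless the prefix is stripped first:
stripped = {k.replace("module.", "", 1): v
            for k, v in dp.state_dict().items()}
plain = nn.Linear(4, 2)
plain.load_state_dict(stripped)
```

An implicit `__getattr__` would hide the `module.` indirection during normal use while the saved keys still carried it, which is the debugging hazard being pointed out.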

eqy pushed a commit to eqy/pytorch that referenced this pull request Jan 20, 2022
* Refactor War Sync Insertion Pass (pytorch#1339)
* Remove kir::Expr::scope_ (pytorch#1341)
* Fusion IR Refactor (pytorch#1343)
* Refactor KIR Step 1 - Remove kir::Node (pytorch#1347)
* Refactor KIR Step 2 - TMP IrUtils change (pytorch#1348)
* Refactor KIR Step 3 - Remove kir::Expr and kir::Val. (pytorch#1349)
* Refactor KIR Step 4 - Remove kir::Bool,Double,Int,NamedScalar. (pytorch#1350)
* Refactor KIR Step 5 - Remove kir::IterDomain/TensorDomain/TensorView (pytorch#1351)
* Refactor KIR Step 6 - Remove kir::UnaryOp/BinaryOp/TernaryOp/ReductionOp/WelfordOp/BroadcastOp. (pytorch#1352)
* Refactor KIR Step 7 - Remove kir dispatch (pytorch#1353)
* Refactor KIR Step 8 - Clean up lower_utils (pytorch#1355)
* Refactor KIR Step 9 - lower_utils ir_utils::applyReplacements. (pytorch#1354)
* Refactor KIR Step 10 - Remove kir_printer in favor of io_stream (pytorch#1356)