Adds __getattr__ to DataParallel to forward to self.module #1341
elistevens wants to merge 1 commit into pytorch:master
Conversation
I should note that the Travis builds will skip the relevant test, since it depends on CUDA.
We've discussed that and decided that we'd rather keep attr access explicit. If you want that behavior, you can always subclass `DataParallel`.
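The subclassing idea can be sketched without a torch dependency. Below, a hypothetical `ForwardingWrapper` class stands in for a `DataParallel` subclass; the essential mechanism is a `__getattr__` override that falls back to the wrapped `self.module` when normal attribute lookup fails. All names here are illustrative, not the PR's actual code.

```python
class ForwardingWrapper:
    """Minimal sketch of the suggested subclass pattern.
    (Hypothetical name; plain Python stands in for nn.DataParallel.)"""

    def __init__(self, module):
        self.module = module  # the wrapped model

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails, so the
        # wrapper's own attributes (like self.module) take precedence.
        return getattr(self.module, name)


class TinyModel:
    """Stand-in for a model with a custom attribute and method."""

    def __init__(self):
        self.hidden_size = 8

    def describe(self):
        return f"hidden_size={self.hidden_size}"


wrapped = ForwardingWrapper(TinyModel())
print(wrapped.hidden_size)   # forwarded to the wrapped model
print(wrapped.describe())    # methods forward the same way
```

With a real `nn.DataParallel` subclass the same idea is usually written by first trying `super().__getattr__(name)` and only then falling back to `self.module`, since `nn.Module` defines its own `__getattr__`.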
Given the somewhat odd/obscure spelling, would a PR with the explicit subclass be accepted?
No, the subclassed PR won't be accepted into the core.
Ah, bummer. Thanks for letting me know.
@elistevens Note that an implicit getattr there could be very misleading in some cases. For example, when serializing a model wrapped in `DataParallel`, you save the wrapper's state dict, so every parameter key carries a `module.` prefix.
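The serialization point above can be illustrated with a small helper. A state dict saved from a `DataParallel`-wrapped model has keys like `module.fc.weight`, which won't load into the unwrapped model; a common workaround is to strip the prefix. The helper name below is hypothetical, and the dicts stand in for real tensor-valued state dicts:

```python
def strip_module_prefix(state_dict):
    """Remove the leading 'module.' that DataParallel adds to every
    parameter key. (Hypothetical helper; a common workaround sketch.)"""
    prefix = "module."
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }


# Keys as they would appear when saved under DataParallel:
saved = {"module.fc.weight": 1, "module.fc.bias": 2}
print(strip_module_prefix(saved))  # keys usable by the bare model
```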
Helps keep the use of `DataParallel` transparent to other code.