convert_sync_batchnorm should respect device affinity #37930
Closed
Labels: module: bootcamp, module: nn, module: regression, triaged
`torch.nn.SyncBatchNorm.convert_sync_batchnorm` by default places newly created parameters on CPU, even if all parameters of the input model are on GPU. It might be better to respect the input module's device affinity and place the new parameters on the same device where the original `_BatchNorm` module resides.

To reproduce:
The output is:
cc @albanD @mruberry
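Until this is fixed upstream, a simple user-side workaround is to move the converted model back to the input model's device. The wrapper below is a hypothetical sketch (the function name and the first-parameter heuristic are assumptions, not part of the PyTorch API):

```python
import torch
import torch.nn as nn

def convert_sync_batchnorm_on_device(model: nn.Module) -> nn.Module:
    """Hypothetical wrapper: convert BatchNorm layers to SyncBatchNorm,
    then move the result to the device of the input model's first
    parameter, restoring device affinity."""
    try:
        device = next(model.parameters()).device
    except StopIteration:
        # Parameter-free model: fall back to CPU.
        device = torch.device("cpu")
    return nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
```

For example, `convert_sync_batchnorm_on_device(model.cuda())` would leave the converted model's parameters on the GPU, matching the input.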