
Change default device to current accelerator #164399

Closed

drisspg wants to merge 3 commits into gh/drisspg/208/base from gh/drisspg/208/head

Conversation

@drisspg
Contributor

@drisspg drisspg commented Oct 1, 2025

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Oct 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164399

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 1512848 with merge base b9e73e6:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

drisspg added a commit that referenced this pull request Oct 1, 2025
@drisspg drisspg added the topic: not user facing (topic category) and module: accelerator (Issues related to the shared accelerator API) labels Oct 1, 2025
@drisspg
Contributor Author

drisspg commented Oct 1, 2025

cc @albanD

@Skylion007
Collaborator

We probably need to allow None to be passed in as the default, for auto-selection.
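The None-default pattern Skylion007 suggests can be sketched in plain Python. Everything here is an illustrative stand-in, not the actual PyTorch API: `resolve_device` and `detect_accelerator` are hypothetical names, and the real resolution would go through `torch.accelerator.current_accelerator()`.

```python
from typing import Optional

def detect_accelerator() -> Optional[str]:
    # Stand-in for torch.accelerator.current_accelerator(); returns None
    # when no accelerator is present (e.g. a CPU-only build).
    return None

def resolve_device(device: Optional[str] = None) -> str:
    # Instead of hard-coding a device default, accept None and resolve it
    # lazily: an explicit device wins, otherwise auto-select, falling back
    # to "cpu" when no accelerator is available.
    if device is None:
        device = detect_accelerator() or "cpu"
    return device

print(resolve_device())        # -> cpu (no accelerator in this sketch)
print(resolve_device("cuda"))  # -> cuda (explicit device wins)
```

The point of the None default is that auto-selection happens at call time, so the same signature works on accelerator and CPU-only builds.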

@drisspg
Contributor Author

drisspg commented Oct 1, 2025

What does torch.accelerator.current_accelerator() return on a device with no accelerator? Is it not cpu? That being said, in retrospect there should have been no default arg for this function.

@guangyey
Collaborator

guangyey commented Oct 2, 2025

What does torch.accelerator.current_accelerator() return on a device with no accelerator? Is it not cpu? That being said, in retrospect there should have been no default arg for this function.

Hi @drisspg, torch.accelerator.current_accelerator() will return None for CPU-only builds, since CPU is not considered an accelerator (see https://github.com/pytorch/pytorch/blob/main/docs/source/torch.rst#accelerators).

When an accelerator is available, torch.accelerator.current_accelerator() returns a torch.device without an explicit device index (e.g., torch.device("cuda")).
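The two behaviors guangyey describes can be captured in a tiny deterministic stand-in. The `Device` class and `current_accelerator` function below are illustrative mocks, not the real torch API; they only mirror the documented contract: None on CPU-only builds, and a device with no explicit index otherwise.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    # Mirrors the shape of torch.device: a type like "cuda" plus an
    # optional index, which stays None when no index is given.
    type: str
    index: Optional[int] = None

def current_accelerator(available: Optional[str]) -> Optional[Device]:
    # Stand-in for torch.accelerator.current_accelerator(): returns None
    # when no accelerator exists (CPU is not considered an accelerator),
    # and a Device without an explicit index when one does.
    return Device(available) if available else None

assert current_accelerator(None) is None       # CPU-only build
dev = current_accelerator("cuda")
assert dev.type == "cuda" and dev.index is None  # like torch.device("cuda")
```

This is why callers resolving a None default need an explicit CPU fallback: the accelerator query itself never answers "cpu".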

[ghstack-poisoned]
drisspg added a commit that referenced this pull request Oct 2, 2025
[ghstack-poisoned]
drisspg added a commit that referenced this pull request Oct 2, 2025
Collaborator

@albanD albanD left a comment


SGTM !

@drisspg drisspg added the suppress-bc-linter (Suppresses the failures of API backward-compatibility linter (Lint/bc_linter)) and ciflow/trunk (Trigger trunk jobs on your pull request) labels Oct 3, 2025
@drisspg
Contributor Author

drisspg commented Oct 3, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025
@github-actions github-actions bot deleted the gh/drisspg/208/head branch November 3, 2025 02:17

Labels

ciflow/trunk — Trigger trunk jobs on your pull request
Merged
module: accelerator — Issues related to the shared accelerator API
suppress-bc-linter — Suppresses the failures of API backward-compatibility linter (Lint/bc_linter)
topic: not user facing — topic category


5 participants