
Make torch.cuda.empty_cache() a no-op when cuda is not initialized#4936

Merged
soumith merged 1 commit into pytorch:master from albanD:empty_cache_fix on Jan 30, 2018
Conversation

@albanD (Collaborator) commented Jan 30, 2018

cc: @ngimel @apaszke

current_blas_handle() still calls _lazy_init(), as do all the current_*() functions that return something tied to the currently set device.



3 participants