
Lazy init order in set device, should not be called in getDevCount #4918

Merged: soumith merged 1 commit into pytorch:master from csarofeen:lazy_init on Jan 30, 2018
Conversation

@csarofeen
Contributor

No description provided.

@csarofeen
Contributor Author

#4903

Contributor

@apaszke left a comment


Can you please revert the pybind submodule update?

Comment thread on torch/cuda/__init__.py (outdated; comments marked as off-topic)

Comment thread on torch/cuda/__init__.py (outdated; comments marked as off-topic)

@csarofeen
Contributor Author

The biggest issue seems to be _lazy_init, as it allocates a sizable chunk of memory that, the way things are now, can easily end up on the wrong device. Calling device count in PyTorch was allocating about 1.5 GB on my first device, while querying the device count in pure CUDA was creating only ~10 MB on each device.

