
Concatenate directly into shared memory when constructing batches for numpy #14534

Closed
boeddeker wants to merge 5 commits into pytorch:master from boeddeker:master

Conversation

@boeddeker
Contributor

Since #1323, tensors are shared via shared memory, but this feature is not active for numpy arrays.
This PR fixes this.
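The idea can be sketched in plain Python (this is only an illustration, not PyTorch's actual implementation, which stacks into a tensor backed by a shared torch storage; `SharedMemory` here just stands in for that storage):

```python
import struct
from multiprocessing import shared_memory

def collate_into_shared(batch):
    """Concatenate a batch of float lists directly into one shared-memory
    block: allocate the shared buffer once, sized for the whole batch, then
    write each sample into it, instead of concatenating in private memory
    first and copying the result into shared memory afterwards."""
    itemsize = struct.calcsize("d")
    total = sum(len(sample) for sample in batch)
    shm = shared_memory.SharedMemory(create=True, size=total * itemsize)
    offset = 0
    for sample in batch:
        struct.pack_into("%dd" % len(sample), shm.buf, offset, *sample)
        offset += len(sample) * itemsize
    return shm, total

shm, total = collate_into_shared([[1.0, 2.0], [3.0, 4.0]])
values = struct.unpack_from("%dd" % total, shm.buf, 0)
shm.close()
shm.unlink()
```

The point of writing directly into the shared buffer is that worker processes in the DataLoader can hand the batch to the main process without an extra copy.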

@ezyang
Contributor

ezyang commented Dec 6, 2018

@boeddeker Thank you for this patch. Could you add a test for this?

@boeddeker
Contributor Author

@ezyang I am not familiar with the pytorch repository; where can I find the tests for default_collate with shared memory?
The only test that I found for default_collate is

def test_default_collate_bad_numpy_types(self):

but it only tests numpy types, not shared memory.

@ezyang
Contributor

ezyang commented Dec 11, 2018

It should be OK to put a shared memory test there. Or are you asking how to test the shared memory case?

@boeddeker
Contributor Author

Yes, I was asking about the location. I was confused that there is only one test for that function.

My next question is: how should I test shared memory? I pushed a suggestion to this PR.

@boeddeker
Contributor Author

According to https://stackoverflow.com/a/50684569/5766934, Python 2 has no unittest.mock. What should I do for Python 2?

@ezyang
Contributor

ezyang commented Dec 20, 2018

In this case I would just advise patching it by hand. Make a little context manager that toggles it, and sets it back when you're done.
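A hand-patched toggle along these lines could look as follows (a sketch only; `_use_shared_memory` stands in for the dataloader's internal flag, whose real name and location are internal to PyTorch):

```python
import contextlib

_use_shared_memory = False  # stand-in for the module-level flag being patched

@contextlib.contextmanager
def shared_memory_enabled():
    """Minimal hand-rolled replacement for unittest.mock.patch that also
    runs on Python 2: flip the flag, and set it back when the block exits."""
    global _use_shared_memory
    saved = _use_shared_memory
    _use_shared_memory = True
    try:
        yield
    finally:
        _use_shared_memory = saved

with shared_memory_enabled():
    inside = _use_shared_memory   # True while the context is active
after = _use_shared_memory        # restored to the saved value
```

The `finally` clause guarantees the flag is restored even if the test body raises, so one failing test cannot leak the toggled state into later tests.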

@boeddeker
Contributor Author

OK, I now use try/finally and the Python 2.7 tests pass.
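The bare try/finally variant of the same pattern can be sketched like this (names such as `fake_dataloader` and `_use_shared_memory` are illustrative stand-ins, not the actual identifiers in the repository):

```python
# Save the flag, flip it, run the test body, restore it in `finally`.
# This works on Python 2 and 3 alike, with no unittest.mock dependency.
class fake_dataloader(object):
    _use_shared_memory = False  # stand-in for the real module-level flag

def run_test_with_shared_memory():
    saved = fake_dataloader._use_shared_memory
    fake_dataloader._use_shared_memory = True
    try:
        # ... the actual test assertions would go here ...
        return fake_dataloader._use_shared_memory
    finally:
        fake_dataloader._use_shared_memory = saved

seen_inside = run_test_with_shared_memory()
restored = fake_dataloader._use_shared_memory
```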

@soumith
Collaborator

soumith commented Dec 28, 2018

@boeddeker thanks. This PR needs a rebase on master, as it conflicts with 9217bde, which went in.

@boeddeker
Contributor Author

OK, I rebased.

@facebook-github-bot left a comment

@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
… numpy (pytorch#14534)

Summary:
Since pytorch#1323 tensors are shared via shared memory, but this feature is not active for numpy arrays.
This PR fixes this.
Pull Request resolved: pytorch#14534

Differential Revision: D13561649

Pulled By: soumith

fbshipit-source-id: b6bc9e99fb91e8b675c2ef131fba9fa11c1647c0
