Concatenate directly into shared memory when constructing batches for numpy (#14534)
boeddeker wants to merge 5 commits into pytorch:master
Conversation
@boeddeker Thank you for this patch. Could you add a test for this?
@ezyang I am not familiar with the pytorch repository; where can I find the tests? (The inline reference points to pytorch/test/test_dataloader.py, line 786 at e747acb.)
It should be OK to put a shared-memory test there. Or are you asking how to test the shared-memory case?
Yes, I was asking for the location. I was confused that there is only one test for that function. My next question is: how should I test shared memory? I pushed a suggestion to this PR.
According to https://stackoverflow.com/a/50684569/5766934, Python 2 has no `unittest.mock`.
In this case I would just advise patching it by hand. Make a little context manager that toggles it, and sets it back when you're done. |
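A minimal sketch of such a hand-rolled context manager, assuming a hypothetical `Settings` object with a flag to toggle (neither name is from the PR):

```python
from contextlib import contextmanager

@contextmanager
def patched(obj, name, value):
    # Hand-rolled stand-in for mock.patch: swap the attribute in,
    # and always restore the original value on exit, even on failure.
    original = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, original)

class Settings:
    use_shared_memory = False  # hypothetical flag to toggle in a test

with patched(Settings, "use_shared_memory", True):
    assert Settings.use_shared_memory is True
# Outside the with-block the original value is restored.
assert Settings.use_shared_memory is False
```

Because the restore lives in a `finally` clause, the flag is reset even if the body of the `with` block raises.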
OK, I now used try/finally, and the Python 2.7 tests pass.
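The try/finally version of the same idea can be inlined in the test itself; again `Settings` and its flag are hypothetical placeholders, not names from the PR:

```python
class Settings:
    use_shared_memory = False  # hypothetical flag to toggle in a test

original = Settings.use_shared_memory
Settings.use_shared_memory = True
try:
    # ... run the code under test that should see the patched value ...
    result = Settings.use_shared_memory
finally:
    # restored even if the test body above raises
    Settings.use_shared_memory = original
```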
@boeddeker thanks. This PR needs a rebase on master, as it conflicts with 9217bde, which went in.
OK, I rebased.
facebook-github-bot left a comment:
@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Merge commit: Concatenate directly into shared memory when constructing batches for numpy (pytorch#14534)
Summary: Since pytorch#1323 tensors are shared via shared memory, but this feature is not active for numpy. This PR fixes that.
Pull Request resolved: pytorch#14534
Differential Revision: D13561649
Pulled By: soumith
fbshipit-source-id: b6bc9e99fb91e8b675c2ef131fba9fa11c1647c0
Since #1323, tensors are shared via shared memory, but this feature is not active for numpy arrays.
This PR fixes that.
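The idea behind the PR can be sketched in plain Python, without torch or numpy (this is a toy illustration of the allocate-then-fill pattern, not the PR's actual code): instead of concatenating batch elements into private memory and then copying the result into shared memory, allocate the shared buffer up front and write each element directly into its slot.

```python
import struct
from multiprocessing import shared_memory

# Toy batch of two "rows" of three float64 values each (stand-in for
# the numpy arrays the collate function would receive).
batch = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
row_len = len(batch[0])
itemsize = struct.calcsize("d")

# Allocate the shared buffer first, sized for the whole batch.
shm = shared_memory.SharedMemory(create=True,
                                 size=len(batch) * row_len * itemsize)
try:
    # Write each element directly into its slice of the shared buffer,
    # avoiding an intermediate private concatenation.
    for i, row in enumerate(batch):
        struct.pack_into("%dd" % row_len, shm.buf,
                         i * row_len * itemsize, *row)
    flat = struct.unpack_from("%dd" % (len(batch) * row_len), shm.buf, 0)
finally:
    shm.close()
    shm.unlink()
```

In the actual PR the buffer is a torch storage allocated in shared memory inside the DataLoader's collate path, so worker processes can hand batches to the main process without an extra copy.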