fix issue #1549, make `and` operator correct #1556
Conversation
Thanks, but it's not a long-term solution. A better fix for this issue is to expose bitwise operations from TH and THC (in our C extension).
OK, let me try implementing it in C modules (in …)
The only C file you should need to modify is …
Eh, I can't update the comment on my phone. You need to add new cwrap declarations (the parts in `[[ ]]`) for the bitwise functions, and call them …
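For readers unfamiliar with cwrap, the `[[ ]]` declarations mentioned above are YAML-like blocks that generate the C binding code. The following is an illustrative sketch only; the `cbitand` name and the exact fields shown are assumptions, not the actual declaration added in this PR:

```
[[
  name: cbitand
  return: argument 0
  arguments:
    - arg: THTensor* destination
      output: True
    - THTensor* self
    - THTensor* other
]]
```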
Got it.
THP is for objects that appear in Python. THTensor expands to THTensor when compiling CPU objects and to THCTensor when building CUDA objects (macro magic).
Fixed. I'm not sure if I used the correct method. I only found …
test/test_torch.py
I could add other bitwise operations. WIP
Done
apaszke
left a comment
One last thing that needs a fix and it's good to go
torch/tensor.py
@pytorchbot test this please

Thank you so much for the patch @stegben!

It's my honor to contribute to such a great library :) And many thanks to @apaszke for the detailed instructions.

You're right, I forgot that there's invert. BTW, how do bitwise operations work with floats/doubles?

I suspect they don't work in any logical way and should only be enabled for ByteTensor (or any other integer types we might have).
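Python's own semantics point the same way: the bitwise operators are defined for integers but raise TypeError for floats, which is consistent with enabling them only for integer tensor types. A quick illustration:

```python
# Bitwise AND is well-defined on integers...
print(0b1100 & 0b1010)  # 8

# ...but Python refuses it for floats, since there is no sensible
# bit-level interpretation of these operators for IEEE-754 values.
try:
    result = 1.5 & 2.5
except TypeError as exc:
    print("unsupported for floats:", exc)
```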
…1556) * Propagate root domain mappings from rfactor to root domains in ComputeAtRootDomainMap

The main purpose of ComputeAtRootDomainMap is to find unmappable domains for computeAt. This analysis is done by traversing a fusion in a backward direction. Currently the traversal only visits arithmetic expressions, so information is propagated from consumer tensors to producer tensors. This propagation is also required from rfactor domains to root domains. Previously it didn't really matter, as rfactor is limited to reduction domains, but that's not the case with view.

This change also means that ComputeAtRootDomainMap no longer guarantees one-to-one mappings. For example:

```
tv0: [I0, I1]
tv1 = view(tv0); // tv1: [I0*I1/N, N]
```

I.e., the view op first merges the two domains of `tv0` and then splits the result by N. Note that both of the two rfactor axes of `tv1` are now mapped to both axes of `tv0`.

Because of this change, `ComputeAtRootDomainMap::mapBestEffort` and other mapping functions between a producer and a consumer that are supposed to return a one-to-one map can fail. `ComputeAtRootDomainMap::getMappableDims` is fine, as it just grabs any domain that is mappable. `ComputeAtRootDomainMap::mapConsumerToProducer` and `ComputeAtRootDomainMap::mapProducerToConsumer` were used in `TransformReplay::replayPasC` and `TransformReplay::replayCasP`, but they don't really need `ComputeAtRootDomainMap`; `PairwiseRootDomainMap` is sufficient, so the usages were replaced with the pairwise variant.
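The non-one-to-one mapping described above can be seen with plain index arithmetic. The following is an illustrative sketch (not nvFuser code) of how the view's merge-then-split mixes both root axes:

```python
# Hypothetical illustration: tv0 of shape [I0, I1] is viewed as
# tv1 of shape [I0*I1/N, N]. Each output axis depends on BOTH input
# axes, so the root-domain mapping is no longer one-to-one.
I0, I1, N = 2, 6, 4

def view_index(i0, i1):
    """Where element (i0, i1) of tv0 lands in tv1: the flat index
    k = i0*I1 + i1 is split into (k // N, k % N)."""
    k = i0 * I1 + i1
    return (k // N, k % N)

# Changing the tv0 row moves both tv1 coordinates:
print(view_index(0, 3), view_index(1, 3))  # (0, 3) (2, 1)
# Changing the tv0 column can also move both tv1 coordinates:
print(view_index(1, 0), view_index(1, 3))  # (1, 2) (2, 1)
```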
pytorch#1556) This PR pins sympy==1.12.1 in the .ci/docker/requirements-ci.txt file. It also skips the pytorch-nightly installation in docker images.

Installation of pytorch-nightly is needed to prefetch the mobilenet_v2 and v3 models for some tests (came from ROCm@85bd6bc). Models are downloaded on first use to the folder /root/.cache/torch/hub. But the pytorch-nightly installation also overrides the .ci/docker/requirements-ci.txt settings and upgrades some Python packages (sympy from 1.12.0 to 1.13.0), which causes several 'dynamic_shapes' tests to fail.

Skipping model prefetching affects these tests without any errors (but **internet access required**):
- python test/mobile/model_test/gen_test_model.py mobilenet_v2
- python test/quantization/eager/test_numeric_suite_eager.py -k test_mobilenet_v3

Issue ROCm/frameworks-internal#8772

Also, in case of issues, these models can be prefetched after building pytorch and before testing. (cherry picked from commit b92b34d)
A quick attempt to fix #1549