
fix issue #1549, make and operator correct #1556

Merged
soumith merged 5 commits into pytorch:master from stegben:fix-and-operator
May 15, 2017

Conversation

@stegben
Contributor

@stegben stegben commented May 15, 2017

A quick attempt to fix #1549
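For context, #1549 is about the elementwise semantics of `&` on mask tensors. A minimal sketch of the intended behavior, using plain Python lists (a hypothetical helper standing in for `tensor.__and__`, not PyTorch code):

```python
def elementwise_and(a, b):
    """Elementwise bitwise AND over two equal-length 0/1 mask lists,
    i.e. the behavior __and__ should provide for ByteTensor masks."""
    return [x & y for x, y in zip(a, b)]

# e.g. combining two comparison masks, as in (t > 0) & (t < 5)
mask = elementwise_and([1, 0, 1, 1], [1, 1, 0, 1])
print(mask)  # [1, 0, 0, 1]
```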

@stegben stegben force-pushed the fix-and-operator branch from 258286d to 2c3c765 on May 15, 2017 07:18
@apaszke
Contributor

apaszke commented May 15, 2017

Thanks, but it's not a long-term solution. A better fix for this issue is to expose bitwise operations from TH and THC (in our C extension).

@stegben
Contributor Author

stegben commented May 15, 2017

OK, let me try implementing it in the C modules (in torch/csrc/generic/Tensor.cpp, right?)

@apaszke
Contributor

apaszke commented May 15, 2017

The only C file you should need to modify is torch/csrc/generic/methods/Tensor.cwrap

@apaszke
Contributor

apaszke commented May 15, 2017

Eh, I can't update the comment on my phone. You need to add new cwrap declarations (the parts in [[ ]]) for the bitwise functions, call them __and__, etc. (using python_name), and remove them from the Python file.
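As a rough illustration of the shape of such a declaration (field names and values here are modeled loosely on other entries in Tensor.cwrap and are not a verified diff):

```
[[
  name: bitand
  python_name: __and__
  return: argument 0
  arguments:
    - arg: THTensor* result
      output: True
    - THTensor* self
    - real value
]]
```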

@stegben
Contributor Author

stegben commented May 15, 2017

Got it.
By the way, is THPTensor_ an alias for both THTensor_ and THCTensor_?

@apaszke
Contributor

apaszke commented May 15, 2017

THP is for objects that appear in Python. THTensor_ expands to THTensor when compiling CPU objects and to THCTensor when building CUDA objects (macro magic).

@stegben stegben force-pushed the fix-and-operator branch from 3ddb25e to ffd19f2 on May 15, 2017 12:16
@stegben
Contributor Author

stegben commented May 15, 2017

Fixed. I'm not sure if I used the correct method; bitand in THTensorMath.h is the only one I found that matches what we need.

Contributor

@apaszke apaszke left a comment

That looks great!

If you exposed the other bit functions (or, xor, shifts), it would resolve #768 too. If you don't want to do it, I can merge this and add them later myself.

@stegben
Contributor Author

stegben commented May 15, 2017

I could add other bitwise operations. WIP

@stegben stegben force-pushed the fix-and-operator branch from ffd19f2 to 0925da6 on May 15, 2017 12:36
@stegben
Contributor Author

stegben commented May 15, 2017

Done

Contributor

@apaszke apaszke left a comment

One last thing needs a fix, and then it's good to go.

torch/tensor.py Outdated

@stegben stegben force-pushed the fix-and-operator branch from 183ee10 to 1549ccd on May 15, 2017 13:50
@soumith
Collaborator

soumith commented May 15, 2017

@pytorchbot test this please

@soumith soumith merged commit ba885a1 into pytorch:master May 15, 2017
@soumith
Collaborator

soumith commented May 15, 2017

Thank you so much for the patch, @stegben!

@stegben
Contributor Author

stegben commented May 16, 2017

It's an honor to contribute to such a great library :)

And many thanks to @apaszke for these detailed instructions.

@hmansell

This is awesome @stegben, thanks!

@soumith / @apaszke - isn't invert still incorrect? I don't think TH/THC has a bitwise NOT, but we can XOR with 255.
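For an 8-bit unsigned value, that identity does hold: a sketch in plain Python, with a hypothetical helper standing in for what ByteTensor's `__invert__` could do:

```python
def byte_invert(x: int) -> int:
    """Bitwise NOT on an 8-bit unsigned value, emulated as XOR with 0xFF."""
    assert 0 <= x <= 255
    return x ^ 0xFF

print(byte_invert(0))           # 255
print(byte_invert(0b10101010))  # 85, i.e. 0b01010101
```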

@apaszke
Contributor

apaszke commented May 16, 2017

You're right, I forgot that there's invert.

BTW how do bitwise operations work with floats/doubles?

@hmansell

I suspect they don't work in any logical way and should only be enabled for ByteTensor (or any other integer types we might have).
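One way to see why: a float "bitwise AND" would have to operate on the raw IEEE-754 bit patterns, which yields well-defined but arithmetically meaningless results. An illustrative sketch using Python's standard struct module (not PyTorch code):

```python
import struct

def float_bitand(a: float, b: float) -> float:
    """AND the raw 32-bit IEEE-754 patterns of two floats.

    The operation is well-defined on the bits, but the numeric result
    has no sensible arithmetic relationship to the inputs.
    """
    ai, = struct.unpack("<I", struct.pack("<f", a))
    bi, = struct.unpack("<I", struct.pack("<f", b))
    out, = struct.unpack("<f", struct.pack("<I", ai & bi))
    return out

# 1.0 is 0x3F800000 and 2.0 is 0x40000000: their exponent bits share
# nothing, so the bitwise AND of the patterns is all zeros.
print(float_bitand(1.0, 2.0))  # 0.0
```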

zasdfgbnm pushed a commit to zasdfgbnm/pytorch that referenced this pull request Apr 6, 2022
…1556)

* Propagate root domain mappings from rfactor to root domains in
ComputeAtRootDomainMap

The main purpose of ComputeAtRootDomainMap is to find unmappable domains
for computeAt. This analysis is done by traversing a fusion in a
backward direction. Currently, the traversal only visits arithmetic
expressions, so information propagation is done from consumer tensors to
producer tensors. This propagation is also required from rfactor domains
to root domains. Previously it didn't really matter, as rfactor was
limited to reduction domains, but that's not the case with view.

This change also means that ComputeAtRootDomainMap does not guarantee
one-to-one mappings. For example,

```
tv0: [I0, I1]
tv1 = view(tv0); // tv1: [I0*I1/N, N]
```

That is, the view op first merges the two domains of `tv0` and then
splits the result by N. Note that both of the two rfactor axes of `tv1`
are now mapped to the two axes of `tv0`.

Because of this change, `ComputeAtRootDomainMap::mapBestEffort` and other
mapping functions between a producer and a consumer that are supposed to
return a one-to-one map can fail.
`ComputeAtRootDomainMap::getMappableDims` is fine, as it just grabs any
domain that is mappable.

`ComputeAtRootDomainMap::mapConsumerToProducer` and
`ComputeAtRootDomainMap::mapProducerToConsumer` were used in
`TransformReplay::replayPasC` and `TransformReplay::replayCasP`, but
they don't really need `ComputeAtRootDomainMap`; `PairwiseRootDomainMap`
is sufficient, so the usages were replaced with the pairwise variant.
petrex pushed a commit to petrex/pytorch that referenced this pull request Aug 29, 2024
pytorch#1556)

This PR pins sympy==1.12.1 in the .ci/docker/requirements-ci.txt file.
It also skips the pytorch-nightly installation in Docker images.

Installation of pytorch-nightly is needed to prefetch the mobilenet_v2
and v3 models for some tests. This came from

ROCm@85bd6bc

Models are downloaded on first use to the folder /root/.cache/torch/hub.
However, the pytorch-nightly installation also overrides the
.ci/docker/requirements-ci.txt settings and upgrades some Python
packages (sympy from 1.12.0 to 1.13.0), which causes several
'dynamic_shapes' tests to fail.
Skipping the model prefetch affects these tests; they still pass without
errors, but **internet access is required**:

- python test/mobile/model_test/gen_test_model.py mobilenet_v2
- python test/quantization/eager/test_numeric_suite_eager.py -k
test_mobilenet_v3

Issue ROCm/frameworks-internal#8772

Also, if issues arise, these models can be prefetched after building
PyTorch and before testing.

(cherry picked from commit b92b34d)
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request Jan 14, 2025

(cherry picked from commit b92b34d)
(cherry picked from commit d5608f3)
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request May 8, 2025

(cherry picked from commit b92b34d)
(cherry picked from commit d5608f3)
(cherry picked from commit aaa3134)
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request May 20, 2025

(cherry picked from commit b92b34d)
(cherry picked from commit d5608f3)
(cherry picked from commit aaa3134)
(cherry picked from commit d33dd98)
Successfully merging this pull request may close these issues.

Unexpected logical AND behavior

5 participants