
min, max: check that operand and outputs are on the same device type#32862

Closed
Baranowski wants to merge 4 commits intopytorch:masterfrom
Quansight:wbaranowski-max_cuda-32001

Conversation

@Baranowski
Contributor

Fixes #32001

@kostmo
Member

kostmo commented Jan 31, 2020

💊 CircleCI build failures summary and remediations

As of commit 1061b8d:

None of the build failures appear to be your fault.

  • 1/1 broken upstream at merge base affd598 since Jan 30

    Please rebase on the viable/strict branch.

    Since your merge base is older than viable/strict, run these commands:

    git fetch origin viable/strict
    git rebase origin/viable/strict
    

    Check out the recency history of this "viable master" tracking branch.

Detailed failure analysis

One may explore the probable reasons each build failed interactively on the Dr. CI website.

🚧 1 upstream failure recognized by patterns:

These builds matched patterns, but were probably caused by upstream breakages:


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions on the GitHub issue tracker.

This comment has been revised 2 times.

TORCH_CHECK(self.layout() == Layout::Strided,
"max only supports strided layout, got: ", self.layout());
TORCH_CHECK(self.device().type() == max.device().type(),
"expected device type ", self.device().type(), " but got ",
max.device().type());
Contributor

You should just check that the device()s are equal, as mixed-device operations won't work in this kernel anyway
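The distinction the reviewer is drawing can be shown with a toy model (a hypothetical Device struct standing in for ATen's real device type): comparing only the device type treats cuda:0 and cuda:1 as matching, while full device equality also requires the same index.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for a torch device: a type ("cpu"/"cuda")
// plus a device index.
struct Device {
    std::string type;
    int index;
    bool operator==(const Device& other) const {
        return type == other.type && index == other.index;
    }
};

// Type-only comparison, as in the device().type() check in the diff above.
bool same_device_type(const Device& a, const Device& b) {
    return a.type == b.type;
}

// Full equality, as in the stricter device() comparison the review suggests:
// it additionally rejects outputs on a different index of the same device type.
bool same_device(const Device& a, const Device& b) {
    return a == b;
}
```

Under this model, `same_device_type({"cuda", 0}, {"cuda", 1})` is true but `same_device({"cuda", 0}, {"cuda", 1})` is false, which is why full device equality is the safer check for a kernel that cannot handle mixed-device operands at all.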

Contributor

@ezyang left a comment

LGTM, but let's test device()

Contributor

@facebook-github-bot left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ezyang merged this pull request in 7cddc30.

BowenBao pushed a commit to BowenBao/pytorch that referenced this pull request Feb 12, 2020
…ytorch#32862)

Summary:
Fixes pytorch#32001
Pull Request resolved: pytorch#32862

Differential Revision: D19695935

Pulled By: ezyang

fbshipit-source-id: bb37eb7a187214aa69259828024366f479a258d7
ttumiel pushed a commit to ttumiel/pytorch that referenced this pull request Mar 4, 2020
…ytorch#32862)

Summary:
Fixes pytorch#32001
Pull Request resolved: pytorch#32862

Differential Revision: D19695935

Pulled By: ezyang

fbshipit-source-id: bb37eb7a187214aa69259828024366f479a258d7


Development

Successfully merging this pull request may close these issues.

[Segmentation Fault] torch.max gives a seg fault for a CPU input tensor and CUDA out tensors

6 participants