
Move scalar_to_tensor_default_dtype out of ScalarOps.h because it's only useful for torch.where. #50111

Closed
gchanan wants to merge 7 commits into gh/gchanan/349/base from gh/gchanan/349/head

Conversation

@gchanan (Contributor) commented Jan 5, 2021

Stack from ghstack:

Differential Revision: D25789638

@facebook-github-bot (Contributor) commented Jan 5, 2021

💊 CI failures summary and remediations

As of commit 033a616 (more details on the Dr. CI page):


  • 2/3 failures possibly* introduced in this PR
    • 2/2 non-CircleCI failure(s)
  • 1/3 broken upstream at merge base 6ee0044 on Jan 05 from 9:20am to 6:28pm

1 job timed out:

  • pytorch_linux_bionic_py3_8_gcc9_coverage_test1

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@gchanan gchanan requested a review from mruberry January 5, 2021 22:37
Review comment (Collaborator) on:

    namespace {

    static Tensor wrapped_scalar_tensor(
    at::Tensor scalar_to_tensor_default_dtype(

This function must still deserve some kind of comment, right? Even if it's "DO NOT USE THIS".

Reply (gchanan, Contributor, Author):

Oh, I didn't do that because it's just an implementation detail of wrapped_scalar_tensor below -- I'll make a note of that.

Comment thread: aten/src/ATen/native/TensorCompare.cpp (Outdated)

        return at::scalar_tensor(
            s, at::device(device).dtype(at::get_default_complex_dtype()));
      } else {
        AT_ASSERT(s.isIntegral(false));
      }
    }

Review comment (Collaborator):

AT_ASSERT -> TORCH_INTERNAL_ASSERT

Comment thread (Outdated):

    // `use_default_dtype` is a bit of a hack because torch.where doesn't support type promotion, but

Review comment (Collaborator):

Wow, this comment really dives in!

Is this function only intended to be used by torch.where? A sentence up front about its intended use would be helpful. In particular, if torch.where supported type promotion properly, would you expect us to get rid of this function, and should this comment tell developers not to use this function?

Reply (gchanan, Contributor, Author):

Ya, I can add a sentence up front summarizing.

Review (@mruberry, Collaborator):

Cool

@facebook-github-bot (Contributor):

@gchanan merged this pull request in 0f412aa.

@facebook-github-bot facebook-github-bot deleted the gh/gchanan/349/head branch January 15, 2021 15:17
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…nly useful for torch.where. (pytorch#50111)

Summary: Pull Request resolved: pytorch#50111

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25789638

Pulled By: gchanan

fbshipit-source-id: 4254e11e08606b64e393433ef2c169889ff2ac07