restore generic IndexToScatterGatherOffset specialization#40349
ngimel wants to merge 2 commits into pytorch:master
Conversation
💊 CI failures summary: As of commit c577769 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚
facebook-github-bot
left a comment
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Whoops, thanks for the catch. CC @kurtamohler
@kurtamohler, you can get rid of THC gather if you wrap the TH tensor using THTensor_wrap and call the ATen gather on the wrapped tensor. Then the remaining kernel can be removed from THC.
Darn, sorry about that. I shouldn't have assumed that an existing test would fail if that template was actually needed.
np, it fails only under very specific conditions, and unfortunately we did not have a test to catch it. It could have gone unnoticed for a long time, but another PR exposed the bug.
Summary: pytorch#39963 erroneously removed the template specialization used to compute offsets, causing cases that rely on this specialization (topk for 4d+ tensors with a topk dimension size >= 1024/2048, depending on the type) to produce bogus results.

Pull Request resolved: pytorch#40349
Differential Revision: D22153756
Pulled By: ngimel
fbshipit-source-id: cac04969acb6d7733a7da2c1784df7d30fda1606
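For readers unfamiliar with what the restored specialization computes, here is a rough pure-Python sketch of the general idea behind an index-to-gather-offset mapping. All names here (`compute_offset`, `gather`) are illustrative, not PyTorch's actual CUDA template API; the real code is a templated device function in THC, and this sketch only models the generic (non-specialized) arithmetic under simple assumptions (flat buffer, explicit sizes/strides):

```python
# Illustrative sketch only: maps an element's linear index in the index
# tensor, plus the looked-up index along the gather dimension, to a linear
# offset into the source buffer. Not PyTorch's actual implementation.

def compute_offset(linear_index, index_along_dim, dim, sizes, strides):
    """Convert a linear element index to a memory offset, substituting
    `index_along_dim` for the coordinate along the gather dimension."""
    offset = 0
    # Peel off coordinates from innermost to outermost dimension.
    for d in range(len(sizes) - 1, -1, -1):
        coord = linear_index % sizes[d]
        linear_index //= sizes[d]
        # Along the gather dimension, use the looked-up index instead.
        offset += (index_along_dim if d == dim else coord) * strides[d]
    return offset

def gather(data, sizes, strides, dim, index):
    """Reference gather over a flat `data` buffer with the given
    sizes/strides; `index` is the flattened index tensor."""
    return [data[compute_offset(i, idx, dim, sizes, strides)]
            for i, idx in enumerate(index)]
```

For example, gathering along dim 1 of a row-major 2x3 buffer `[0, 1, 2, 3, 4, 5]` (sizes `[2, 3]`, strides `[3, 1]`) with index `[2, 0, 1, 1, 2, 0]` yields `[2, 0, 1, 4, 5, 3]`. The bug class described above comes from replacing this generic per-dimension arithmetic with a fast path that is only valid for some shapes, which is why only large-ish topk dimensions on 4d+ tensors were affected.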