
Fix topk work size computation #5053

Merged: soumith merged 2 commits into pytorch:master from albanD:fix_topk, Feb 6, 2018

Conversation

@albanD (Collaborator) commented Feb 5, 2018

Currently, when collapseDims actually collapses dimensions, inputSlices is multiplied by an undefined value (whatever was previously stored in the .sizes array beyond the collapsed dimension count).

Fixes #4490 and #4513.
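The failure mode can be illustrated with a minimal sketch (hypothetical names, not the actual THC code): after collapsing, the sizes array has fewer valid entries, so any loop that still iterates up to the pre-collapse dimension count multiplies in stale/undefined values. The safe pattern is to compute the slice count from the collapsed view and its updated dimension index only.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal stand-in for a tensor's size metadata (hypothetical, for
// illustration only; not the real THCTensor / TensorInfo API).
struct TensorInfo {
    std::vector<int64_t> sizes;

    int dims() const { return static_cast<int>(sizes.size()); }

    // Simulate collapseDims: merge every dimension except `keep` into a
    // single leading dimension, and return the kept dim's NEW index.
    // Entries of `sizes` beyond the new dim count no longer exist --
    // reading past them with a stale dim count is the bug this PR fixes.
    int collapse(int keep) {
        int64_t other = 1;
        for (int i = 0; i < dims(); ++i)
            if (i != keep) other *= sizes[i];
        int64_t kept = sizes[keep];
        sizes = {other, kept};
        return 1;  // index of the kept dimension in the collapsed view
    }
};

// Correct work-size computation: iterate over the dims of the *collapsed*
// view, skipping the kept dim via its updated index. The buggy variant
// looped up to the pre-collapse dims() and read undefined entries.
int64_t numInputSlices(TensorInfo t, int sliceDim) {
    int collapsedDim = t.collapse(sliceDim);
    int64_t slices = 1;
    for (int i = 0; i < t.dims(); ++i)
        if (i != collapsedDim) slices *= t.sizes[i];
    return slices;
}
```

For a 4x5x6 tensor with topk taken along dim 1, this yields 4*6 = 24 slices, each of length 5, which is the quantity the kernel's grid size should be derived from.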

@soumith soumith merged commit c9ee47b into pytorch:master Feb 6, 2018
@soumith soumith added the 0.3.1 label Feb 6, 2018
@albanD albanD deleted the fix_topk branch February 6, 2018 08:22
soumith pushed a commit that referenced this pull request Feb 7, 2018
* fix grid computation for topk kernel

* backslash alignment, no change in code


Development

Successfully merging this pull request may close these issues.

cuda topk dim=0 crash on small tensor in two lines of code

3 participants