Implement gcd, lcm #40651
Conversation
💊 CI failures summary and remediations: as of commit c8b9af0 (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚
❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm.
How do I change this so that gcd gets invoked only with integral arguments?
You don't need to add gcd or lcm to common_methods_invocations, since it's mostly intended for testing gradients, which you're not dealing with.
Okay. In that case, I'm ready for a review.
What does this return if a is zero? If b is zero? If both are zero?
(We should just add a comment explaining the behavior, since we can't return NaN for gcd(0, 0) here)
I suppose we could return an optional, but that seems like a lot.
Also you can use the same type traits trick as calc_erfinv (with is_integral instead of is_floating_point):
If a is 0, we return b (this is the case even when b is also 0). If b is 0 and a is nonzero, then the while loop executes once, b takes the value of a, and is returned. In short, this never returns NaN, with gcd(a, 0) = a, gcd(0, b) = b, and gcd(0, 0) = 0.
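The loop being described is the standard Euclidean algorithm. A minimal Python sketch of the same logic (not the PR's C++/CUDA kernel) makes the zero-input behavior easy to verify:

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via the Euclidean algorithm.

    Mirrors the loop discussed above: if a is 0 we fall straight through
    and return b; if b is 0 and a is nonzero, one iteration moves a into b.
    """
    a, b = abs(a), abs(b)  # result is defined to be non-negative
    while a != 0:
        a, b = b % a, a
    return b

# Edge cases from the discussion -- never NaN:
#   gcd(a, 0) = a,  gcd(0, b) = b,  gcd(0, 0) = 0
```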
Good. I just want to be clear about the (0, 0) behavior.
All other functions in that header (Math.cuh) file also don't use that macro. Should I change them too?
No. Old code that's working is best left undisturbed. Thank you for asking, though.
Is the accumulate type ever different for the integer types?
I thought accscalar_t stood for "accelerated scalar type" (as in accelerated with CUDA), my bad. I'll change that.
Let's remove this for now. Named tensor support is on the back burner and these functions don't directly support named tensors since they're implementing a new kernel.
For the tolerances here, can they not all be 0?
"value"->"other" to be consistent with the documentation below?
"-> Tensor", not "LongTensor"
"... of input and other."
Let's be pedantic and be clear the function defines gcd(0, 0)=0.
"Both input and other must have integer dtypes."
In particular, what is lcm(a, 0)?
Should there be a test in test_torch.py explicitly for the gcd and lcm edge cases where one of the two inputs is zero and when both of the inputs are zero?
Yeah, I think having explicit tests is a good idea (I tested these edge cases against the NumPy implementation manually). And lcm(a, 0) = 0, which is also the case in the NumPy implementation.
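For reference, lcm is commonly defined in terms of gcd, and the zero cases fall out once gcd(0, 0) = 0 is handled explicitly. A Python sketch of that convention (an illustration, not the PR's kernel):

```python
def gcd(a: int, b: int) -> int:
    # Euclidean algorithm; gcd(0, 0) = 0 by construction.
    a, b = abs(a), abs(b)
    while a != 0:
        a, b = b % a, a
    return b

def lcm(a: int, b: int) -> int:
    """Least common multiple via lcm(a, b) = |a * b| / gcd(a, b).

    When either input is 0, gcd may be 0, so we guard the division and
    return 0 -- matching the NumPy convention mentioned above.
    """
    g = gcd(a, b)
    if g == 0:
        return 0
    return abs(a * b) // g
```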
On second thought, doesn't this line already account for this?
You would think so, because our test suite is so confusing, but I think that function is for the Math tests and your tests are in the method test section. I'm actually about to overhaul all of this, so it's not super important where you test the extremal values. You can probably just add lcm to the math test portion, though some people have struggled to fit their functions in there. You would do that by adding to torch_op_tests on line 19546.
mruberry
left a comment
This is an excellent and well-written PR implementing two interesting functions. I've requested a few small changes for clarity.
Thanks! Do you know why the XLA tests are failing?
Looking at the log
You can't disable them because they're not even in master yet ;) It's not super surprising these new tests are failing on XLA. You can disable the test just on XLA by adding the
I've updated the PR: added edge-case tests for gcd and lcm, and incorporated the other suggestions.
Checks have passed! @mruberry
facebook-github-bot
left a comment
There was a problem hiding this comment.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Sorry @aayn, looks like we got a small merge conflict at the last moment. Would you please resolve it and I'll restart the land process?
@mruberry Done. And yeah, I'd love to work on another project.
Great! This should land soon. If you'd like to implement a similar function with a twist there's divmod, something a little different is vdot, and something completely different is broadcast_to. Would any of those be interesting?
Summary: Resolves pytorch#40018. Pull Request resolved: pytorch#40651 Reviewed By: ezyang Differential Revision: D22511828 Pulled By: mruberry fbshipit-source-id: 3ef251e45da4688b1b64c79f530fb6642feb63ab
Resolves #40018.