E2E SymInt example narrow_copy #75759
Conversation
Force-pushed 7e5bdfb to c5371ed
I am confused. SymInt is the thing that fits in int64_t (and is where all the bitmasking logic ought to live). SymbolicIntNode is supposed to be the boxed thing that you can put all of the random extra IR bits you need to identify the object in question!
We shouldn't expose data directly, since it isn't really an int64_t, it might also be some masked nonsense. Also, standard practice in C++ is when we're doing bit-masky stuff, we should represent it as a uint64_t, not a signed integer; signed integers are undefined with bitwise operators, see https://wiki.sei.cmu.edu/confluence/display/c/INT13-C.+Use+bitwise+operators+only+on+unsigned+operands
Sure, I can switch to uint64_t. I was trying to avoid conversions from int64_t to uint64_t, since int64_t is the type we use for dimension values, and we aren't doing any "dangerous" bit-wise operations such as shifting.
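For illustration, a minimal sketch of the kind of tagging scheme being discussed, with the bit-masking done on uint64_t as suggested above. The constant and function names here are hypothetical, not the actual SymInt implementation:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical tagging scheme: the top bit of the unsigned representation
// marks "symbolic"; the low bits then hold an index into a side table of
// SymbolicIntNode objects instead of a plain dimension value.
constexpr uint64_t kSymTag = uint64_t{1} << 63;

uint64_t pack_symbolic(uint64_t table_index) {
  return table_index | kSymTag;  // bitwise ops on unsigned: well-defined
}

bool is_symbolic(int64_t data) {
  // Reinterpret as unsigned before masking to sidestep signed-bitwise pitfalls.
  return (static_cast<uint64_t>(data) & kSymTag) != 0;
}

uint64_t unpack_index(int64_t data) {
  return static_cast<uint64_t>(data) & ~kSymTag;
}
```

Small positive dimension values never have the top bit set, so they are classified as plain integers; only packed table indices test as symbolic.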
Instead of doing this, make the vector store an owning pointer (either unique_ptr or shared_ptr) so that you don't need an explicit destructor.
Yeah, I was thinking we could save some overhead here, but it may not be worth it.
unique_ptr is supposed to be zero overhead on top of a regular raw pointer (due to ABI shenanigans this is not strictly true, but you are most definitely not thinking about that here).
I wouldn't call the transformation you need to do here memoization; instead, the invariant is that SymbolicIntNodes should only ever be allocated SIMULTANEOUSLY with assigning them in the table, so they know their index in that case.
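A minimal sketch of this invariant, with hypothetical names (not the actual SymbolicIntNode code): nodes are only ever created through the table, so allocation and index assignment happen in one step and the node knows its index from birth, with no memoization. Storing the elements as unique_ptr also means the vector's destructor releases them, so no explicit destructor is needed:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for SymbolicIntNode: the index is set at
// construction time and never recomputed.
struct SymNode {
  SymNode(int64_t v, size_t idx) : value(v), index(idx) {}
  int64_t value;
  size_t index;
};

struct SymNodeTable {
  // Owning pointers: ~vector destroys every node, no hand-written destructor.
  std::vector<std::unique_ptr<SymNode>> nodes;

  // Allocation and table registration happen SIMULTANEOUSLY, so the node
  // always knows its own slot.
  SymNode* create(int64_t v) {
    nodes.push_back(std::make_unique<SymNode>(v, nodes.size()));
    return nodes.back().get();
  }
};
```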
So this tests that a SymInt can behave correctly in a static condition, and serves as a smoke test. But I guess it doesn't really cover the dynamic aspects?
yeah, though I could extend this and see how it behaves with different values, but this wasn't the goal of the milestone :)
hm shouldn't you lock before you check?
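To illustrate the locking concern, a toy sketch with hypothetical names: the check and the initialization must happen under the same lock, otherwise two threads can both observe "empty" and both initialize.

```cpp
#include <cassert>
#include <memory>
#include <mutex>

// Toy lazily-initialized cache. Locking BEFORE the emptiness check makes
// check-then-initialize a single atomic step; checking first and locking
// afterwards would leave a race window between the two.
struct CachedValue {
  std::mutex mu;
  std::unique_ptr<int> value;

  int get_or_init(int fallback) {
    std::lock_guard<std::mutex> guard(mu);  // lock before checking
    if (!value) {
      value = std::make_unique<int>(fallback);
    }
    return *value;
  }
};
```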
does 'or_list' mean list of symint in this case?
is this generated code correct for both cases? (maybe, as it is a base class)
Yes, I'm thinking of reusing this flag for both symints and lists of symints; we don't quite have lists of symints yet. If it's confusing, please let me know and I'll rename the flag later.
i guess this works for all cases (where operand(0) is a constant, a list of constants, a Value, etc.?)
it's cool to see this E2E finally!
Err I think operand(0) should be a tensor otherwise JIT would burp.
miladm
left a comment
Thanks @Krovatkin!
I added a couple of comments and questions.
Looks like data_ is a private member? Do we need sci.data() here?
We are inside SymInt here, and sci is also a SymInt. I could switch it to sci_.data() if you think it looks better.
Trying to locate how exactly lmn->toSymInt() is propagated to the Lower function. Would be great to discuss it in our meeting tomorrow.
Assuming the SymInt parameter allows a call to the upstream SizeNode object via the toSymbolicIntNode() API.
Yeah, this is exactly how it works! Hopefully it's an intuitive enough API, but we can work on it.
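A toy sketch of the wrap/unwrap pattern being described (names are illustrative, not the actual torch::lazy API): the boxed node hands out a SymInt-like handle, and the lowering code follows the handle back to the node to reach the upstream SizeNode.

```cpp
#include <cassert>
#include <cstdint>

struct ToySymbolicIntNode;

// Handle type standing in for SymInt: just carries a pointer back to the
// boxed node it came from.
struct ToySymInt {
  ToySymbolicIntNode* node;
};

// Stands in for SymbolicIntNode; upstream_size stands in for the SizeNode IR.
struct ToySymbolicIntNode {
  explicit ToySymbolicIntNode(int64_t size) : upstream_size(size) {}
  int64_t upstream_size;

  ToySymInt toSymInt() { return ToySymInt{this}; }
};

// What a Lower function would do: recover the node from the handle
// (the toSymbolicIntNode() direction of the API).
ToySymbolicIntNode* toSymbolicIntNode(ToySymInt s) { return s.node; }
```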
nit:
We use int64_t and uint64_t for data_. Is it necessary to let size_t propagate as well?
yeah I need to do a better job here to reduce implicit conversions
Force-pushed c5b0f50 to c83bf7e
Force-pushed c83bf7e to 3199a18
wconstab
left a comment
lgtm pending addressing the rest of the review comments!
Force-pushed fe08e31 to 794c08a
For the future: could you avoid force pushing? It makes it hard for me to see what changed differentially in the revision. Since you're not using stacked diffs it doesn't make a material difference whether you rebase or merge, but merging preserves history.
Force-pushed 794c08a to 5129cf5
Force-pushed 0c85fce to 8d2468f
@pytorchbot merge this
Hey @Krovatkin.
Summary: This **roughly** corresponds to Goal 3.2 in https://docs.google.com/document/d/1iiLNwR5ohAsw_ymfnOpDsyF6L9RTUaHMpD8YLw-jxEw/edit

Namely, it adds the following:

* SymbolicIntNode interface
* LazySymbolicIntNode implementation
* Lazy `narrow_copy` implementation
* Support for SymInt in codegen
* Test (below)

```cpp
TEST(LazyDynamicOpsTest, NarrowCopy) {
  auto x = torch::rand({5, 10, 10}).to(kLazy);
  const size_t Y_DIM = 3;
  const size_t X_DIM_INDEX = 2;
  auto y = torch::rand({Y_DIM}).to(kLazy);
  auto ly = torch::lazy::TryGetLtcTensor(y);
  auto dim_node = MakeNode<SizeNode>(ly->GetIrValue(), 0);
  auto lmn = new torch::lazy::SymbolicIntNode(dim_node);
  auto z = x.narrow_copy(X_DIM_INDEX, 0, lmn->toSymInt());
  AllClose(z.cpu(), x.cpu().narrow_copy(X_DIM_INDEX, 0, Y_DIM));
}
```

Pull Request resolved: #75759
Approved by: https://github.com/wconstab

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/bb60cac25af1f5e616432350f0ff5eb600ddc5e8

Reviewed By: osalpekar
Differential Revision: D35938189
Pulled By: Krovatkin
fbshipit-source-id: 59e5371efd27a635b83b4311fa53bb0e11af332b