
ENH: Add 'out' in tensordot #15186

Closed
mproszewska wants to merge 6 commits into numpy:main from mproszewska:tensordot-out

Conversation

@mproszewska (Contributor)

Closes #15103

else:
out = out.reshape((newshape_a[0], newshape_b[1]))
else:
correct = False
Member

This should be a TypeError, I think

Member

Does it actually have to be a C order array? I think we can just keep those error details to the np.dot function.

Contributor Author

I think that checking only shape should be enough. np.dot will check other details.

Member

The danger is that reshape may create a copy, so F contiguous arrays may be accepted by dot, but not necessarily work here (at least not without a copy back into out). In any case, just make sure we have tests for these things, then it is easy to try around to see how many checks are needed.
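The copy pitfall described here is easy to observe directly; a quick sketch:

```python
import numpy as np

# A C-contiguous array can be flattened as a view, but reshaping an
# F-contiguous one forces a copy -- the danger described above when
# ``out`` is passed through reshape.
c = np.arange(6.0).reshape(2, 3)    # C order
f = np.asfortranarray(c)            # same values, F order

print(np.shares_memory(c, c.reshape(6)))  # True: a view
print(np.shares_memory(f, f.reshape(6)))  # False: a copy
```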

return res.reshape(olda + oldb)

out = dot(at, bt, out)
return out.reshape(olda + oldb)
Member

This will end up returning a view of out when out is specified (since it is a reshaped version of the reshaped version). I am not sure I mind that though.
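For illustration, a small sketch (with stand-in shapes, not the PR's actual variables) showing that the reshaped result aliases out without being the same object:

```python
import numpy as np

# Stand-in for the pattern in the diff: dot writes into ``out``, and
# the final reshape returns a view of it, not ``out`` itself.
at = np.arange(6.0).reshape(2, 3)
bt = np.arange(6.0).reshape(3, 2)
out = np.empty((2, 2))

res = np.dot(at, bt, out=out).reshape(4)
print(res is out)                   # False: a fresh view object
print(np.shares_memory(res, out))   # True: same underlying buffer
```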

Member

I think we promise that the return value of func(..., out=out) is out for functions that have an out kwarg, no?

@eric-wieser (Member) Jan 3, 2020

@mattip: Indeed, although that promise is hard to keep unless transpose does not return a view (something subclasses can change, and I think to a certain extent ma.array does). I think the logic would be:

  • (somehow) check if transpose returns a view
  • If it does, then return the original out array
  • If it does not, then do a copy from the transposed array back to the original array.

Member

Can we rewrite this function to use matmul instead of dot, using the new axes argument to matmul? Then the out would be handled for us without any reshaping.
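matmul has been a gufunc since NumPy 1.16, so it accepts out (and an axes keyword) natively; a minimal sketch of the idea for the plain 2-D case:

```python
import numpy as np

# matmul writes directly into ``out`` and returns that very object,
# so no reshape dance is needed for this (2-D) case.
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
out = np.empty((2, 4))

res = np.matmul(a, b, out=out)
print(res is out)                                    # True
print(np.allclose(res, np.tensordot(a, b, axes=1)))  # True
```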

Member

One small issue with such a rewrite is that it currently gives a different speed tradeoff. I would not actually mind the lazy way of checking type(out) is np.ndarray, leaving open the possibility that subclasses may be allowed in the future.
The whole __array_wrap__ dance is not fun, and the function currently uses np.asarray on the inputs in any case.

@seberg
Member

seberg commented Jan 28, 2020

I still have some small issues with this; sorry that this is trickier than it seems at first sight...

  1. You did not yet ensure that the returned value is out. That would be good IMO.
  2. If possible, let's avoid any subclass involvement. After checking that out has the correct shape, we can use out.view(np.ndarray) to make it a base-class array. That hopefully simplifies things a lot; in particular, it avoids trusting the subclass not to be broken. After that, I trust np.may_share_memory for sure (an alternative spelling would be new_out.base is out or new_out.base is out.base, but that may be a bit trickier to understand).
  3. We need tests for the above issues.

EDIT: In point 2, you could actually do that even before checking that the shape is correct... That bit was unnecessary.
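Point 2 in miniature, as an editorial sketch: out.view(np.ndarray) strips the subclass while aliasing the same buffer, which np.may_share_memory can then confirm.

```python
import numpy as np

out = np.ma.masked_array(np.zeros((2, 2)))  # an ndarray subclass
plain = out.view(np.ndarray)                # base-class view, same data

print(type(plain) is np.ndarray)            # True: subclass stripped
print(np.may_share_memory(plain, out))      # True: same buffer
```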

@mproszewska
Contributor Author

I'm not sure how to force creation of copies in tests.
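One possible answer, sketched editorially rather than taken from the PR: a non-contiguous out forces the copy path, since reshape of a non-contiguous array cannot return a view.

```python
import numpy as np

buf = np.empty((4, 4))
out = buf[::2, ::2]          # a non-contiguous 2x2 view of buf

print(out.flags['C_CONTIGUOUS'])              # False
print(np.shares_memory(out, out.reshape(4)))  # False: reshape copied
```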

Base automatically changed from master to main March 4, 2021 02:04
@mattip
Member

mattip commented Mar 28, 2023

I rebased this off main to see if the tests still pass. @mproszewska, would you want to continue with this? Did you have questions about how to progress?

@mattip
Member

mattip commented Mar 30, 2023

whoops, I meant to push to #18053, sorry. Fixing.

@mattip
Member

mattip commented Jun 27, 2023

Closing; I don't think there is enough interest in this to make it happen.

@mattip mattip closed this Jun 27, 2023


Development

Successfully merging this pull request may close these issues.

Will tensordot support an 'out' argument?

5 participants