BUG: Fix padding with large integers #11033
Conversation
The old way of creating the padded array padded with wrong values for large integers, because the new prepended / appended array was implicitly created with dtype float64:
>>> (np.zeros(1) + (2 ** 64 - 1)).astype(np.uint64)
array([0], dtype=uint64)
>>> (np.zeros(1) + (2 ** 63 - 1)).astype(np.int64)
array([-9223372036854775808])
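The failure mode, and the dtype-preserving alternative, can be sketched as follows (a minimal illustration; the variable names are mine, not from the patch):

```python
import numpy as np

# Going through float64 loses integer precision: float64 has only a
# 53-bit mantissa, so 2**64 - 1 rounds to 2**64 before the cast back.
promoted = (np.zeros(1) + (2 ** 64 - 1)).astype(np.uint64)

# Creating the pad values with the target dtype up front avoids the
# float64 detour entirely and keeps the exact integer.
exact = np.full(1, 2 ** 64 - 1, dtype=np.uint64)
print(exact)  # [18446744073709551615]
```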
mhvk
left a comment
Looks good; happy to see an even simpler solution than the one I suggested! I'll let it sit for the day just to let other people have a chance to chime in.
numpy/lib/arraypad.py
Outdated
@@ -138,8 +138,8 @@ def _append_const(arr, pad_amt, val, axis=-1):
    return np.concatenate((arr, np.zeros(padshape, dtype=arr.dtype)),
While you're here, you could remove this branch too
Sure. Although np.full(padshape, 0) seems to be a little slower than np.zeros(padshape)...
Just to make this comment more visible: Replacing
@lagru - given that
Sounds reasonable.
If you don't mind me asking, where exactly are unnecessary copies made? This statement sounds like there would be a faster option to achieve the same thing.
From just a quick look, copies are made implicitly by passing any input through
Some cleanup might be good... (but definitely in a separate PR!)
eric-wieser
left a comment
Looks great! Feel free to merge, @mhvk.
Regarding full being slower - that sounds like a possible optimization that could be made inside full.
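The speed difference mentioned above can be checked with timeit. A rough sketch (the numbers are machine-dependent; the gap likely comes from np.zeros being able to request already-zeroed memory, e.g. via calloc, while np.full allocates and then writes every element):

```python
import timeit

setup = "import numpy as np"
n = 200  # repetitions per measurement

# Compare allocating a zero-filled array via zeros vs. full.
t_zeros = timeit.timeit("np.zeros((1000, 1000))", setup=setup, number=n)
t_full = timeit.timeit("np.full((1000, 1000), 0)", setup=setup, number=n)
print(f"np.zeros: {t_zeros:.4f}s, np.full: {t_full:.4f}s over {n} runs")
```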
OK, merging. Thanks, @lagru!
Closes #11027
cc @mhvk