ENH: add support for nan-like null strings in string replace #26355
seberg merged 4 commits into numpy:main
Conversation
mhvk
left a comment
Looks good modulo some nitpicks...
| goto next_step; | ||
| } | ||
| else { | ||
| npy_gil_error(PyExc_ValueError, |
| } | ||
| else { | ||
| npy_gil_error(PyExc_ValueError, | ||
| "Only nan-like null values are not supported " |
Thanks, fixed the double-negative and tweaked the wording. Hopefully the version I just pushed reads better.
| Buffer<ENCODING::UTF8> buf2((char *)i2s.buf, i2s.size); | ||
| Buffer<ENCODING::UTF8> buf3((char *)i3s.buf, i3s.size); | ||
| Buffer<ENCODING::UTF8> outbuf(new_buf, max_size); | ||
| { |
Why the new indentation? It already is in the loop.
(And it makes reviewing harder...)
It's because of the new use of goto next_step: I either need a new lexical scope, or I'd have to declare a bunch of variables at the top of the for loop that are only used at the bottom of it; otherwise the compiler complains about jumping over variable declarations.
I'd probably have gone for top of the for-loop myself, but no big deal...
While I don't hate the while (N--) loop in general, I do think using goto for loop control flow isn't nice, and I'd much prefer a long for instead.
But this file already uses this pattern in a few places right now, so it doesn't matter much here.
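The scoping issue discussed above can be illustrated with a small standalone sketch (hypothetical code, not the NumPy loop): in C++ a goto may not jump over the initialization of a variable, so wrapping the declarations in a nested block keeps them out of the goto's path.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: the goto skips the body work, so any locals with
// initializers must live in a nested scope that the goto never jumps over.
int process(int n) {
    int handled = 0;
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) {
            goto next_step;  // skip the work below for even i
        }
        {   // new scope: `msg` is constructed and destroyed entirely inside,
            // so the goto above does not bypass its initialization
            std::string msg = "processing " + std::to_string(i);
            handled += (int)msg.size();
        }
next_step:
        ;  // loop bookkeeping shared by both paths would go here
    }
    return handled;
}
```

Without the inner braces, declaring `msg` directly in the loop body would be ill-formed, because the `goto next_step;` would jump past its initializer.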
| npy_int64 end = NPY_MAX_INT64; | ||
| PyMem_RawFree(new_buf); | ||
| npy_int64 found_count = string_count<ENCODING::UTF8>(buf1, buf2, start, end); |
I'd just hard code buf1, buf2, 0, NPY_MAX_INT64 - that seems clearer than defining variables that are only used here.
79ea50d to 4cc651b
| "as search strings for replace"); | ||
| npy_gil_error(PyExc_ValueError, | ||
| "Only NaN-like null strings can be used " | ||
| "as as search strings for replace"); |
Now clearer, but this has a double "as as"
seberg
left a comment
Looks good to me too, and if we are in a rush, we could put it in.
However, we are missing tests for the error paths; I think even the now-fixed nan-like null path is untested?
Unless I'm mistaken, I also think the size calculation is odd and should use count now?
| } | ||
| else { | ||
| // replace i2 with i3 | ||
| max_size = i1s.size * (i3s.size/i2s.size + 1); |
That didn't change, but now that you have count you should use it, I think.
Also, maybe I'm confused by the division? It seems correct, but a bit overly complicated, since you can use i1s.size + difference, giving:
change = i2s.size >= i3s.size ? 0 : i3s.size - i2s.size;
max_size = i1s.size + count * change;
I.e. we replace at most count items (it might be fewer, if string_count can find overlaps; if overlaps are impossible in string_count then I guess the count is exact).
Thanks. I agree this logic here is poorly motivated and using the count directly makes more sense.
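The bound being discussed can be sketched as a small helper (hypothetical code, not the NumPy implementation): given the match count, the output grows by at most count times the per-replacement size difference, so the sum is a tight upper bound on the output size.

```cpp
#include <cstddef>

// Hypothetical sketch of the suggested bound: each of the `count` matches
// can grow the string by at most (repl_size - search_size) bytes, and
// shrinking replacements never need extra space, so
//   max_size = in_size + count * change
// is an upper bound on the output size (exact when matches cannot overlap).
size_t replace_max_size(size_t in_size, size_t search_size,
                        size_t repl_size, size_t count) {
    size_t change = search_size >= repl_size ? 0 : repl_size - search_size;
    return in_size + count * change;
}
```

For example, replacing a 2-byte search string with a 3-byte replacement twice in a 10-byte input bounds the output at 12 bytes, whereas the original `i1s.size * (i3s.size/i2s.size + 1)` formula would reserve 20.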
| } | ||
| npy_int64 found_count = string_count<ENCODING::UTF8>( | ||
| buf1, buf2, 0, NPY_MAX_INT64); | ||
| if (found_count == -2) { |
| if (found_count == -2) { | |
| if (found_count < 0) { |
Yes, it returns -2 due to fastsearch, but let's clarify that it can't actually return -1.
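The point of the suggested change can be shown with a standalone sketch (hypothetical code, not NumPy's string_count): when a helper signals failure with a negative error code, checking `result < 0` covers every error value, so the caller stays correct even if another code such as -1 is ever introduced.

```cpp
#include <string>

// Hypothetical sketch: a counting helper that reports failure with a
// fastsearch-style negative error code.
long count_occurrences(const std::string &haystack, const std::string &needle) {
    if (needle.empty()) {
        return -2;  // error code, mirroring fastsearch's convention
    }
    long count = 0;
    // Count non-overlapping occurrences left to right.
    for (size_t pos = haystack.find(needle); pos != std::string::npos;
         pos = haystack.find(needle, pos + needle.size())) {
        count++;
    }
    return count;
}
```

A caller would then branch on `count_occurrences(...) < 0` rather than comparing against a specific error value.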
| else { | ||
| npy_gil_error(PyExc_ValueError, | ||
| "Only NaN-like null strings can be used " | ||
| "as search strings for replace"); |
(just a curious note for now)
I think default strings don't actually hit this, right? The only subtlety (which I don't care about) is that we probably don't mutate the default string stored on the dtype, but rather insert the same string every time.
Ah good point; this error message isn't quite right, using a string as a missing string is also supported. Will update the error to match this.
Not sure what you're getting at about mutating strings, but that's why they're static strings that store the string data in a const buffer. Anyone mutating it is going out of their way to do so.
I was thinking of:
dt1 = StringDType(na_value="spam")
replace(arr(..., dtype=dt1), "spam", "parrot")
doesn't give a StringDType(na_value="parrot"), I think, so it "bloats" memory.
I don't mind that enough to worry (at least for now, I think this is a niche feature).
EDIT: Sorry, the first edit didn't use the same replacement as the na_value... Also, to be clear, I am not sure that should happen!
| "strip", | ||
| "lstrip", | ||
| "rstrip", | ||
| "replace" |
@seberg this change makes sure the error paths are tested.
Thanks for following up on the count also!
This fixes an issue similar to the one fixed by #26353.
In particular, right now np.strings.replace calls the count ufunc to get the number of replacements. This is necessary for fixed-width strings, but it turns out to make it impossible to support null strings in replace. I went ahead and instead found the replacement counts inline in the ufunc loop. This lets me add support for nan-like null strings, which it turns out pandas needs.
I marked this one as a backport and issued it separately from the other PR because the ufuncs fixed by the other PR aren't going to be in numpy 2.0.