ARROW-9100: [C++] Add ascii_lower kernel #7357
Conversation
@maartenbreddels are you planning to do more work in this PR (it wasn't clear since it's marked as Draft)? We are going to need to create many child JIRAs of ARROW-555 so the work can proceed in parallel.
A few questions about the preferred workflow (I didn't see this in the contributing guidelines):
I still have to dive a bit into how the kernels work, but with the current implementation of ascii_lower and ascii_upper I see performance issues (we should be able to write into a pre-allocated buffer, since we know the byte length). I'm happy to get some basic functionality in first and optimize later; I think it will surface the issues around the framework a bit more. So would you prefer to get some basic string functions in first and optimize later? In that case, we can rename this PR and merge it, I'll open a few more PRs, and once a few functions are in, we have a baseline for benchmarking and can optimize.
At least one JIRA per PR; everything should have an issue number. If there isn't a need for more explanation, a title with correct tagging is enough. It would also be helpful to have such JIRAs to coordinate who is working on which task.
As many commits as it makes sense to review; we will squash on merge. One detail that has been annoying in the past (though it can also be helpful): reviewers are not notified of changes if you force-push.
I'm also quite concerned about this performance penalty, but I'm happy to merge this PR anyway (with a new JIRA id). We should work on writing into pre-allocated memory before writing more kernels that would greatly benefit from pre-allocation; for optimizing the pre-allocated case, this should be a sufficient baseline. I'm more concerned about performance once we start to implement UTF-8 based functionality, as we quickly get to the point where we need to look at multiple bytes at a time but don't know ahead whether a character is 1, 2, 3, or 4 bytes. There we should probably implement kernels with decent performance, and once we have a set of 5-10, we can look at how to improve performance there.
Yeah, it's a bit tricky, but there are lots of options that would require some benchmarking, I think. In most cases, starting with a buffer of the same length should work; a buffer of 4x the input length would always work, but might waste memory or require a realloc. The buffer can also grow dynamically (that's what I do in vaex: it grows by 2x when too small), but different algorithms may need different strategies. For instance, in upper/lower algorithms, growing by 20% each time the buffer is too small sounds reasonable. Making the algorithms SIMD-optimizable (by the compiler) is another thing to think about. So I agree, having a few (diverse) functions in first would make sense.
pitrou
left a comment
Our architecture for writing string kernels needs rethinking. We will probably need different approaches based on the kind of kernel.
```cpp
static std::string Call(KernelContext*, const util::string_view& val) {
  std::string result = val.to_string();
  std::transform(result.begin(), result.end(), result.begin(),
                 [](unsigned char c) { return std::tolower(c); });
```
Please don't use std::tolower as it has locale-dependent behaviour. Instead, just create a hardcoded lookup table.
Also, please fix AsciiUpper similarly.
Note that I used std::toupper just for illustration purposes, not because I thought it was the correct way to implement the kernel.
```cpp
struct AsciiLower {
  template <typename... Ignored>
  static std::string Call(KernelContext*, const util::string_view& val) {
    std::string result = val.to_string();
```
This is completely inefficient. We should write directly into the allocated array. The way this is architected needs rethinking.
Yes, @xhochy and I discussed this above. The question is whether we want to get a few functions in first and then start iterating on a strategy (for ASCII it's trivial, for UTF-8 less so). I'm happy to explore different strategies first as well.
I think we should start finding a sane strategy right now for the existing ascii_upper kernel. Indeed, a different strategy will be needed for utf8 upper.
In both cases, this probably means implementing those kernels as "vector" kernels, not "scalar" kernels (because the latter implies you only process one item at a time).
For ascii_upper, the strategy should be simple:
- reuse the same null bitmap buffer
- recompute a similar offsets buffer, but without the base offset (or reuse the same offsets buffer, if the base offset is 0)
- allocate a same-size values buffer, and convert it in one pass (no need to iterate over individual array elements)
Agreed. I'm happy to take a look at turning this into a 'vector' kernel; if you have some pointers, that would be great. This also answers a question I had about whether it's possible to operate on a full array/chunk in the existing framework; I guess that's a yes.
Any reason not to reuse the offsets buffer, even if the base offset is not 0?
What is the advantage of scalar kernels in Arrow, apart from avoiding boilerplate? I'm also wondering whether this is related to the Gandiva connection.
Avoiding boilerplate is the main motivation for scalar kernels. It really saves a lot of repetition in kernel implementations.
You can find a simple vector kernel example in compute/kernels/vector_sort.cc. Take a look especially at RegisterVectorSort, AddSortingKernels and SortIndices.
I'm afraid you are mistaking what is meant by "ScalarFunction" and "VectorFunction". "Scalar" and "Vector" refer to the semantics of the function, not the implementation.
The only requirement for these string functions is that you provide a std::function<void(KernelContext*, const ExecBatch&, Datum*)> that implements the kernel. What is inside can be anything; the function is still a ScalarFunction.
wesm
left a comment
+1. I'm going to go ahead and merge this so we aren't blocking on back-and-forth. Both upper and lower should be rewritten to simply process the data buffer bytes as a batch rather than iterating over the values (I could have done this originally, but I wanted to illustrate the value-iteration machinery, which is likely needed for some other kernel implementations). I'll work up a patch quickly that optimizes lower/upper and adds a benchmark.
…sing input data buffers in batch

Following on discussion in #7357. I added a simple benchmark also.

```
--------------------------------------------------
Benchmark        Time          CPU    Iterations
--------------------------------------------------
AsciiLower   4774004 ns   4773998 ns         149   3.24122GB/s   209.468M items/s
AsciiUpper   4708606 ns   4708590 ns         146   3.28625GB/s   212.378M items/s
```

Closes #7418 from wesm/ARROW-9115

Authored-by: Wes McKinney <wesm@apache.org>
Signed-off-by: Wes McKinney <wesm+git@apache.org>