Conversation

@maartenbreddels
Contributor

@maartenbreddels maartenbreddels commented Jun 5, 2020

No description provided.

@wesm
Member

wesm commented Jun 11, 2020

@maartenbreddels are you planning to do more work in this PR (it wasn't clear since it's marked as Draft)? We are going to need to create many child JIRAs of ARROW-555 so the work can proceed in parallel.

@maartenbreddels
Contributor Author

A few questions about the preferred workflow (I didn't see this in the contribution guidelines):

  • should JIRA issues and PRs be 1 to 1, or can I open multiple PRs referencing the same JIRA issue?
  • 1 commit per PR, or multiple commits per PR if related, with each commit covering one idea/topic/functionality?

I still have to dive a bit into how the kernels work, but with the current implementation of ascii_lower and ascii_upper I see performance issues (we should be able to write into a pre-allocated buffer, since we know the byte length). I'm happy to get some basic functionality in first and optimize later; I think it will surface the issues around the framework a bit more.

So would you prefer to get some basic string functions in first and optimize later? In that case, we can rename this PR and merge it; I'll open a few more PRs, and once a few functions are in, we have a baseline for benchmarking and can optimize.

@xhochy
Member

xhochy commented Jun 11, 2020

  • should JIRA issues and PRs be 1 to 1, or can I open multiple PRs referencing the same JIRA issue?

At least one JIRA per PR; everything should have an issue number. If there isn't a need for more explanation, a title with correct tagging is enough. It would also be helpful to have such JIRAs to coordinate who is working on which task.

  • 1 commit per PR, or multiple commits per PR if related, with each commit covering one idea/topic/functionality?

As many commits as makes sense to review; we will squash on merge. One detail that has been annoying in the past (though it can also be helpful) is that reviewers are not notified of changes if you force-push.

I still have to dive a bit into how the kernels work, but with the current implementation of ascii_lower and ascii_upper I see performance issues (we should be able to write into a pre-allocated buffer, since we know the byte length). I'm happy to get some basic functionality in first and optimize later; I think it will surface the issues around the framework a bit more.

I'm also quite concerned about this performance penalty. But for e.g. utf8_upper (let's just call it upper), I expect that we cannot pre-allocate, so those are the cases where we could continue as-is.

I'm happy to merge this PR (with a new JIRA id), though we should work on writing into pre-allocated memory before adding more kernels that would greatly benefit from pre-allocation. For optimization in the pre-allocated case, this should be a sufficient baseline. I'm more concerned about performance when we start to implement UTF-8 based functionality, as we quickly get to the point where we need to look at multiple bytes at a time but don't know ahead of time whether a character is 1, 2, 3, or 4 bytes. There we should probably implement kernels with decent performance, and once we have a set of 5-10, we can look at how to improve performance.
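
As a minimal illustration of the multi-byte issue (a sketch, not Arrow code): the lead byte of a UTF-8 sequence determines how many bytes the character occupies, so a kernel has to branch on it before it can even advance to the next character.

```cpp
#include <cstdint>

// Sketch only: determine the byte length of a UTF-8 sequence from its lead
// byte. Returns 0 for a continuation or otherwise invalid lead byte.
inline int Utf8SequenceLength(uint8_t lead) {
  if (lead < 0x80) return 1;            // 0xxxxxxx: ASCII
  if ((lead & 0xE0) == 0xC0) return 2;  // 110xxxxx
  if ((lead & 0xF0) == 0xE0) return 3;  // 1110xxxx
  if ((lead & 0xF8) == 0xF0) return 4;  // 11110xxx
  return 0;                             // 10xxxxxx or invalid
}
```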

@maartenbreddels
Contributor Author

I expect that we cannot pre-allocate, so those are the cases where we could continue as-is.

Yeah, it's a bit tricky, but there are lots of options, which would require some benchmarking I think. In most cases starting with a buffer of the same length should work; a buffer of 4x the input length would always work, but might waste memory or require a realloc. The buffer can also grow dynamically (that's what I do in vaex, where it grows by 2x when too small), but different algorithms may need different strategies. For instance, in upper/lower algorithms, growing by 20% each time the buffer is too small sounds reasonable.
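
A minimal sketch of the dynamic-growth idea described above (plain C++, not vaex or Arrow code; the growth factor is a tunable assumption):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Sketch only: append a transformed piece to a growing output buffer.
// grow_factor = 2.0 matches the vaex strategy mentioned above; ~1.2 would
// be the ~20% growth suggested for upper/lower kernels.
void AppendWithGrowth(std::string* out, const std::string& piece,
                      double grow_factor) {
  if (out->size() + piece.size() > out->capacity()) {
    // Grow by the chosen factor, but always by at least enough to fit the piece.
    std::size_t grown = static_cast<std::size_t>(out->capacity() * grow_factor);
    out->reserve(std::max(grown, out->size() + piece.size()));
  }
  out->append(piece);
}
```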

Also, making the algorithms SIMD-optimizable (by the compiler) is another thing to think about. So I agree, it would make sense to get a few (diverse) functions in first.

@maartenbreddels maartenbreddels changed the title ARROW-555: [C++] String algorithms ARROW-9100: [C++] Add ascii_lower kernel Jun 11, 2020
@maartenbreddels maartenbreddels marked this pull request as ready for review June 11, 2020 10:21
Member

@pitrou pitrou left a comment

Our architecture for writing string kernels needs rethinking. We will probably need different approaches depending on the kind of kernel.

```cpp
  static std::string Call(KernelContext*, const util::string_view& val) {
    std::string result = val.to_string();
    std::transform(result.begin(), result.end(), result.begin(),
                   [](unsigned char c) { return std::tolower(c); });
```
Member

Please don't use std::tolower as it has locale-dependent behaviour. Instead, just create a hardcoded lookup table.
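
A minimal sketch of the kind of hardcoded table being suggested here (illustration only, not the code that ended up in Arrow):

```cpp
#include <array>
#include <cstdint>

// Sketch only: bytes 'A'..'Z' map to 'a'..'z', every other byte maps to
// itself, independent of the current locale.
static std::array<uint8_t, 256> MakeAsciiLowerTable() {
  std::array<uint8_t, 256> table;
  for (int i = 0; i < 256; ++i) {
    table[i] = (i >= 'A' && i <= 'Z') ? static_cast<uint8_t>(i + 32)
                                      : static_cast<uint8_t>(i);
  }
  return table;
}

static const std::array<uint8_t, 256> kAsciiLowerTable = MakeAsciiLowerTable();

inline uint8_t AsciiToLower(uint8_t c) { return kAsciiLowerTable[c]; }
```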

Member

Also, please fix AsciiUpper similarly.

Member

Note that I used std::toupper just for illustration purposes, not because I thought it was the correct way to implement the kernel.

```cpp
struct AsciiLower {
  template <typename... Ignored>
  static std::string Call(KernelContext*, const util::string_view& val) {
    std::string result = val.to_string();
```
Member

This is completely inefficient. We should write directly into the allocated array. The way this is architected needs rethinking.

Contributor Author

Yes, @xhochy and I discussed this above. The question is whether we want to get a few functions in first and then start iterating on a strategy (for ASCII it's trivial, for UTF-8 less so). I'm happy to explore different strategies first as well.

Member

I think we should start finding a sane strategy right now for the existing ascii_upper kernel. Indeed, a different strategy will be needed for utf8 upper.

In both cases, this probably means implementing those kernels as "vector" kernels, not "scalar" kernels (because the latter implies you only process one item at a time).

Member

@pitrou pitrou Jun 11, 2020

For ascii_upper, the strategy should be simple:

  • reuse the same null bitmap buffer
  • recompute a similar offsets buffer, but without the base offset (or reuse the same offsets buffer, if the base offset is 0)
  • allocate a same-size values buffer, and convert it in one pass (no need to iterate over individual array elements)
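
A standalone sketch of that strategy (plain C++ with a hypothetical BinaryColumn struct standing in for Arrow's array data; none of these names are Arrow API):

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for a string/binary column: validity bitmap,
// offsets (length + 1 entries), and the concatenated value bytes.
struct BinaryColumn {
  std::shared_ptr<std::vector<uint8_t>> null_bitmap;
  std::shared_ptr<std::vector<int32_t>> offsets;
  std::shared_ptr<std::vector<uint8_t>> values;
};

// Sketch only: reuse the null bitmap and offsets, allocate a same-size
// values buffer, and convert the raw bytes in a single pass; there is no
// need to iterate over individual array elements. A 256-entry lookup table
// (as suggested above) could replace the branch.
BinaryColumn AsciiUpperOnePass(const BinaryColumn& input) {
  BinaryColumn output;
  output.null_bitmap = input.null_bitmap;  // shared, not copied
  output.offsets = input.offsets;          // assumes a zero base offset
  output.values = std::make_shared<std::vector<uint8_t>>(input.values->size());
  for (std::size_t i = 0; i < input.values->size(); ++i) {
    uint8_t c = (*input.values)[i];
    (*output.values)[i] =
        (c >= 'a' && c <= 'z') ? static_cast<uint8_t>(c - 32) : c;
  }
  return output;
}
```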

Contributor Author

Agreed. I'm happy to take a look at turning this into a 'vector' kernel; if you have some pointers, that would be great. This also answers a question I had about whether it is possible to operate on a full array/chunk in the existing framework, I guess the answer is yes.

Any reason not to reuse the offset buffer, even if the offset is not 0?

What is the advantage of scalar kernels in Arrow, apart from avoiding boilerplate? I'm also wondering whether this is related to the Gandiva connection.

Member

Avoiding boilerplate is the main motivation for scalar kernels. It really saves a lot of repetition in kernel implementations.

You can find a simple vector kernel example in compute/kernels/vector_sort.cc. Take a look especially at RegisterVectorSort, AddSortingKernels and SortIndices.

Member

I'm afraid you are misunderstanding what is meant by "ScalarFunction" and "VectorFunction". "Scalar" and "Vector" refer to the semantics of the function, not the implementation.

The only requirement for these string functions is that you provide a std::function<void(KernelContext*, const ExecBatch&, Datum*)> that implements the kernel. What is inside can be anything. The function is still a ScalarFunction, though.

Member

@wesm wesm left a comment

+1. I'm going to go ahead and merge this so we aren't blocking on back-and-forth. Both upper and lower should be rewritten to simply process the data buffer bytes as a batch rather than iterating over the values (I could have done this originally, but I wanted to illustrate the value-iteration machinery, which is likely needed for some other kernel implementations). I'll quickly work up a patch that optimizes lower/upper and adds a benchmark.

@wesm wesm closed this in 9bf8131 Jun 12, 2020
wesm added a commit that referenced this pull request Jun 12, 2020
…sing input data buffers in batch

Following on discussion in #7357. I added a simple benchmark also.

```
--------------------------------------------------
Benchmark           Time           CPU Iterations
--------------------------------------------------
AsciiLower    4774004 ns    4773998 ns        149   3.24122GB/s   209.468M items/s
AsciiUpper    4708606 ns    4708590 ns        146   3.28625GB/s   212.378M items/s
```

Closes #7418 from wesm/ARROW-9115

Authored-by: Wes McKinney <wesm@apache.org>
Signed-off-by: Wes McKinney <wesm+git@apache.org>
@maartenbreddels maartenbreddels deleted the ARROW-555 branch June 12, 2020 19:13