Description
For high-selectivity filters (most elements included), it may be wasteful and slow to copy large contiguous ranges of array chunks into the resulting ChunkedArray. Instead, we can scan the filter boolean array and slice off chunks of the source data rather than copying.
We will need to determine empirically how long a contiguous run of selected values must be for the slice-based approach to beat plain copy-based materialization. For example, in a filter array like
1 0 1 0 1 0 1 0 1
it would not make sense to slice 5 times because slicing carries some overhead. But if we had
1 ... 1 [100 1's] 0 1 ... 1 [100 1's] 0 1 ... 1 [100 1's] 0 1 ... 1 [100 1's]
then performing 4 slices may be faster than doing a copy materialization.
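A minimal sketch of the run-scanning idea is below. It operates on a single Array rather than a ChunkedArray for simplicity; the helper name `FilterBySlicing` and the `kMinRunForSlice` constant are hypothetical, and the threshold value is a placeholder standing in for whatever cutover length benchmarking would suggest.

```cpp
#include <memory>
#include <utility>

#include <arrow/api.h>
#include <arrow/array/concatenate.h>

// Runs of selected values at least this long become zero-copy slices;
// shorter runs fall back to a copying path. Placeholder value, not tuned.
constexpr int64_t kMinRunForSlice = 64;

// Hypothetical helper: scan the boolean filter for contiguous runs of
// selected values and emit one output chunk per run.
arrow::Result<std::shared_ptr<arrow::ChunkedArray>> FilterBySlicing(
    const std::shared_ptr<arrow::Array>& values,
    const arrow::BooleanArray& filter) {
  arrow::ArrayVector chunks;
  const int64_t n = filter.length();
  int64_t i = 0;
  while (i < n) {
    // Skip positions that are filtered out (false or null).
    if (!filter.IsValid(i) || !filter.Value(i)) {
      ++i;
      continue;
    }
    // Measure the contiguous run of selected positions starting at i.
    const int64_t run_start = i;
    while (i < n && filter.IsValid(i) && filter.Value(i)) {
      ++i;
    }
    const int64_t run_length = i - run_start;
    if (run_length >= kMinRunForSlice) {
      // Long run: reference the source buffers via an offset/length view.
      chunks.push_back(values->Slice(run_start, run_length));
    } else {
      // Short run: materialize a copy so the result is not dominated by
      // tiny zero-copy chunks. A real implementation would likely batch
      // adjacent short runs into one copied chunk rather than copy per run.
      ARROW_ASSIGN_OR_RAISE(
          auto copied,
          arrow::Concatenate({values->Slice(run_start, run_length)}));
      chunks.push_back(std::move(copied));
    }
  }
  return std::make_shared<arrow::ChunkedArray>(std::move(chunks),
                                               values->type());
}
```

With this shape, the sparse pattern `1 0 1 0 ...` never reaches the slicing branch, while the dense pattern with four runs of 100 ones produces just four zero-copy slices.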
Reporter: Wes McKinney / @wesm
Related issues:
- Is a child of: [C++] Arrow-native C++ Data Frame-style programming interface for analytics (umbrella issue)
Note: This issue was originally created as ARROW-7394. Please see the migration documentation for further details.