Big arrays sliced from netty buffers (long) #91641
elasticsearchmachine merged 7 commits into elastic:main from
Conversation
Based on elastic#90745 but for longs. This should allow aggregations down the road to read long values directly from the netty buffer, rather than copying them out of it. Relates to elastic#89437
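The core idea of the PR is easy to see with a stand-in type. This is a hedged sketch, not the Elasticsearch or netty API: `java.nio.ByteBuffer` stands in for the netty buffer, and `longAt` stands in for the big-array accessor that reads a long by index arithmetic instead of copying bytes out first.

```java
import java.nio.ByteBuffer;

// Sketch (not the actual Elasticsearch API): reading a long straight out of
// shared buffer storage by index arithmetic, instead of copying bytes first.
public class SliceReadSketch {
    // Return the i-th long stored in the buffer without copying any bytes.
    static long longAt(ByteBuffer buffer, int index) {
        return buffer.getLong(index * Long.BYTES);
    }

    public static void main(String[] args) {
        ByteBuffer backing = ByteBuffer.allocate(3 * Long.BYTES);
        backing.putLong(10L).putLong(20L).putLong(30L);
        System.out.println(longAt(backing, 1)); // prints 20
    }
}
```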
Pinging @elastic/es-analytics-geo (Team:Analytics)
not-napoleon
left a comment
This looks good to me. I think it's still good practice to get someone from Distributed to review these changes before merging though.
int i = getOffsetIndex(index);
int idx = index - offsets[i];
int end = idx + 8;
BytesReference wholeDoublesLivesHere = references[i];
Copy-paste? Should probably be wholeLongLivesHere, I would think?
Yes, a classic copy-paste error. I will change the variable name.
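The `getOffsetIndex` call in the hunk above locates which backing slice holds an absolute index before reading relative to that slice's start. A minimal sketch of that lookup pattern, assuming a sorted `offsets` array of slice starts (the surrounding class and the binary-search implementation are ours, not the PR's):

```java
import java.util.Arrays;

// Illustrative sketch of the lookup in the diff above: offsets[i] is the
// absolute start of slice i, sorted ascending. Find the slice containing
// the given absolute index.
public class OffsetLookupSketch {
    static int getOffsetIndex(long[] offsets, long index) {
        int i = Arrays.binarySearch(offsets, index);
        // An exact hit lands on a slice start; otherwise binarySearch returns
        // (-(insertionPoint) - 1), and the containing slice is insertionPoint - 1.
        return i >= 0 ? i : -i - 2;
    }

    public static void main(String[] args) {
        long[] offsets = {0, 16, 48}; // slice starts: 16 and 32 bytes wide
        System.out.println(getOffsetIndex(offsets, 20)); // prints 1
    }
}
```

With the slice index in hand, the relative position is just `index - offsets[i]`, matching the second line of the hunk.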
original-brownbear
left a comment
LGTM just a few trivial points that might be nice to clean up :)
@Override
public void writeTo(StreamOutput out) throws IOException {
    int size = (int) size();
    out.writeVInt(size * 8);
Maybe just cache size * Long.BYTES to a variable? :) Calculating it twice in two different ways back to back makes me unhappy :D
@Override
public void writeTo(StreamOutput out) throws IOException {
    int size = (int) size();
Maybe use Math.toIntExact(size()) here to make this obviously correct?
Thanks for the review! I pushed the changes via: ebe635c
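The suggestion is easy to motivate with a standalone example (the class is illustrative, not from the PR): a plain `(int)` cast silently truncates a `long` that does not fit, while `Math.toIntExact` throws, making an out-of-range size a loud failure instead of a wrong value.

```java
// Why Math.toIntExact reads as "obviously correct": (int) truncates
// silently, toIntExact throws ArithmeticException on overflow.
public class ToIntExactSketch {
    public static void main(String[] args) {
        long big = 1L << 33;                // does not fit in an int
        System.out.println((int) big);      // prints 0: silent truncation
        try {
            Math.toIntExact(big);
        } catch (ArithmeticException e) {
            System.out.println("overflow"); // prints overflow
        }
    }
}
```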
out.write(pages[i]);
}
out.write(pages[pages.length - 1], 0, lastPageEnd * Double.BYTES);
writePages(out, (int) size, pages, Double.BYTES, DOUBLE_PAGE_SIZE);
Maybe use Math.toIntExact(size()) here to make this obviously correct?
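The hunk above replaces a hand-rolled loop (write each full page, then only the used prefix of the last page) with a shared `writePages` helper. A hedged sketch of what such a helper could look like; the signature and class here are our illustration, not the actual Elasticsearch method:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative helper: write all full pages, then only the used prefix of
// the last page, given the total element count and bytes per element.
public class WritePagesSketch {
    static void writePages(OutputStream out, int size, byte[][] pages, int bytesPerValue) throws IOException {
        int remaining = size * bytesPerValue; // total payload bytes to write
        for (byte[] page : pages) {
            int toWrite = Math.min(remaining, page.length);
            out.write(page, 0, toWrite);
            remaining -= toWrite;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[][] pages = { new byte[16], new byte[16] };
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writePages(out, 3, pages, Long.BYTES); // 3 longs = 24 bytes
        System.out.println(out.size()); // prints 24
    }
}
```

Centralizing the loop means the full-page/partial-last-page logic lives in one place for both the double and long variants.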
@Override
public void writeTo(StreamOutput out) throws IOException {
    writePages(out, (int) size, pages, Long.BYTES, LONG_PAGE_SIZE);
Maybe use Math.toIntExact(size()) here to make this obviously correct?
@elasticmachine run elasticsearch-ci/part-2