Client-side chunks 2: introduce TransportChunk #6439
Merged
Conversation
This was referenced May 27, 2024
Force-pushed ced515c to 8342ebb
Force-pushed 7276a7c to 94a5e7f
Force-pushed 8342ebb to 4a9b5cd
jleibs approved these changes on May 30, 2024
crates/re_chunk/src/transport.rs (outdated), comment on lines +82 to +85:

```rust
/// The marker used to identify whether a column is sorted in field-level [`ArrowSchema`] metadata.
///
/// The associated value is irrelevant -- if this marker is present, then it is true.
pub const FIELD_METADATA_MARKER_IS_SORTED: &'static str = Self::CHUNK_METADATA_MARKER_IS_SORTED;
```
Contributor: I don't understand how this would be used in practice. All columns must have the same sort order within a chunk. This value seems like it would always have to match `CHUNK_METADATA_MARKER_IS_SORTED`.
Contributor: Ahh, I think I understand, but it could probably use an improved name/comment. It seems like:
- `CHUNK_METADATA_MARKER_IS_SORTED` => this chunk is sorted by row-id
- `FIELD_METADATA_MARKER_IS_SORTED` => this chunk is sorted on this timeline

If all timelines are monotonically increasing, they might all be set; but it's possible to be sorted by row-id yet not sorted on a timeline when out-of-order logging shows up.
Contributor (Author): Yes, that's exactly what it is. I'll see if I can improve the naming/docs.
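The semantics discussed above can be sketched with plain string-keyed maps, mirroring how Arrow stores schema metadata as untyped strings. This is an illustrative, std-only sketch: the marker key value and function names here are assumptions, not `re_chunk`'s actual API.

```rust
use std::collections::HashMap;

// Chunk-level marker: this chunk is sorted by row-id.
// The key string is illustrative, not the crate's real constant value.
const CHUNK_METADATA_MARKER_IS_SORTED: &str = "rerun.is_sorted";
// Field-level marker: the column for this timeline is sorted.
const FIELD_METADATA_MARKER_IS_SORTED: &str = CHUNK_METADATA_MARKER_IS_SORTED;

fn is_marked_sorted(metadata: &HashMap<String, String>, marker: &str) -> bool {
    // The associated value is irrelevant -- presence alone means "true".
    metadata.contains_key(marker)
}

fn main() {
    // Chunk-level metadata says: sorted by row-id.
    let mut chunk_meta = HashMap::new();
    chunk_meta.insert(CHUNK_METADATA_MARKER_IS_SORTED.to_owned(), String::new());

    // Out-of-order logging: a timeline column whose field-level metadata
    // carries no marker, i.e. NOT sorted on that timeline.
    let timeline_field_meta: HashMap<String, String> = HashMap::new();

    assert!(is_marked_sorted(&chunk_meta, CHUNK_METADATA_MARKER_IS_SORTED));
    assert!(!is_marked_sorted(&timeline_field_meta, FIELD_METADATA_MARKER_IS_SORTED));
}
```

The two markers share one string value; what distinguishes them is *where* they live (chunk-level vs. field-level metadata), which is exactly the distinction the review comment settles on.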
Force-pushed 4a9b5cd to 05cdde7
teh-cmc added a commit that referenced this pull request on May 31, 2024:
This new and improved `re_format_arrow` ™️ brings two major improvements:
- It is now designed to format standard Arrow dataframes (aka chunks or batches), i.e. a `Schema` and a `Chunk`. In particular: chunk-level and field-level schema metadata will now be rendered properly with the rest of the table.
- Tables larger than your terminal will now do their best to fit in, while making sure to still show just enough data.

E.g. here's an excerpt of a real-world Rerun dataframe from our `helix` example:
```
cargo r -p rerun-cli --no-default-features --features native_viewer -- print helix.rrd --verbose
```
Before (`main`) and after: (screenshots not reproduced here.)

Part of a PR series to implement our new chunk-based data model on the client-side (SDKs):
- #6437
- #6438
- #6439
- #6440
- #6441
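The "do their best to fit in" behavior amounts to eliding the middle of an oversized table while keeping its head and tail visible. A minimal, std-only sketch of that idea — the function name and elision layout are assumptions, not `re_format_arrow`'s actual implementation:

```rust
/// Keep the first and last few rows, eliding the middle, so large
/// tables still show "just enough" data in a small terminal.
fn elide_rows(rows: &[String], max_rows: usize) -> Vec<String> {
    if rows.len() <= max_rows {
        return rows.to_vec();
    }
    let head = max_rows / 2;
    let tail = max_rows - head - 1; // reserve one line for the ellipsis marker
    let mut out = rows[..head].to_vec();
    out.push(format!("… ({} rows elided) …", rows.len() - head - tail));
    out.extend_from_slice(&rows[rows.len() - tail..]);
    out
}

fn main() {
    let rows: Vec<String> = (0..100).map(|i| format!("row {i}")).collect();
    let shown = elide_rows(&rows, 7);
    assert_eq!(shown.len(), 7);
    assert_eq!(shown.first().unwrap(), "row 0");
    assert_eq!(shown.last().unwrap(), "row 99");
}
```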
Force-pushed 1981f31 to e22ff58
teh-cmc added a commit that referenced this pull request on May 31, 2024:
…6438) Introduces the new `re_chunk` crate:

> A chunk of Rerun data, encoded using Arrow. Used for logging, transport, storage and compute.

Specifically, it introduces the `Chunk` type itself, and all methods and helpers related to sorting.

A `Chunk` is self-describing: it contains all the data _and_ metadata needed to index it into storage.

There are a lot of things that need to be sorted within a `Chunk`, and as such we must make sure to keep track of what is or isn't sorted at all times, to avoid needlessly re-sorting things every time a chunk changes hands. This necessitates a bunch of sanity checking all over the place to make sure we never end up in undefined states.

`Chunk` is not about transport; it's about providing a nice-to-work-with representation when manipulating a chunk in memory. Transporting a `Chunk` happens in the next PR.

- Fixes #1981

Part of a PR series to implement our new chunk-based data model on the client-side (SDKs):
- #6437
- #6438
- #6439
- #6440
- #6441
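The "keep track of what is or isn't sorted" idea boils down to caching a sortedness flag alongside the data and updating it incrementally on every mutation, so readers never pay for a full re-scan. A minimal, std-only sketch under that assumption — the type and method names are illustrative, not `re_chunk`'s real API:

```rust
/// Illustrative stand-in for a chunk keyed by row-id.
struct Chunk {
    row_ids: Vec<u64>,
    /// Cached: are `row_ids` ascending? Maintained on every mutation,
    /// so the sortedness check never requires re-scanning the data.
    is_sorted: bool,
}

impl Chunk {
    fn new(row_ids: Vec<u64>) -> Self {
        // One full scan at construction time establishes the invariant.
        let is_sorted = row_ids.windows(2).all(|w| w[0] <= w[1]);
        Self { row_ids, is_sorted }
    }

    fn push(&mut self, row_id: u64) {
        // Cheap O(1) incremental update instead of a full re-scan.
        if let Some(&last) = self.row_ids.last() {
            self.is_sorted &= last <= row_id;
        }
        self.row_ids.push(row_id);
    }

    fn sort_if_needed(&mut self) {
        // Only re-sort when a mutation actually broke the ordering.
        if !self.is_sorted {
            self.row_ids.sort_unstable();
            self.is_sorted = true;
        }
    }
}

fn main() {
    let mut chunk = Chunk::new(vec![1, 2, 3]);
    assert!(chunk.is_sorted);
    chunk.push(2); // out-of-order row flips the cached flag
    assert!(!chunk.is_sorted);
    chunk.sort_if_needed();
    assert_eq!(chunk.row_ids, vec![1, 2, 2, 3]);
}
```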
Force-pushed 05cdde7 to 3be1f77
teh-cmc added a commit that referenced this pull request on May 31, 2024:
This is a fork of the old `DataTable` batcher, and works very similarly. Like before, this batcher will micro-batch using both space and time thresholds. There are two main differences:
- This batcher maintains a dataframe per-entity, as opposed to the old one which worked globally.
- Once a threshold is reached, this batcher further splits the incoming batch in order to fulfill these invariants:

```rust
/// In particular, a [`Chunk`] cannot:
/// * contain data for more than one entity path
/// * contain rows with different sets of timelines
/// * use more than one datatype for a given component
/// * contain more rows than a pre-configured threshold if one or more timelines are unsorted
```

Most of the code is the same; the real interesting piece is `PendingRow::many_into_chunks`, as well as the newly added tests.

- Fixes #4431

Part of a PR series to implement our new chunk-based data model on the client-side (SDKs):
- #6437
- #6438
- #6439
- #6440
- #6441
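The splitting step can be pictured as grouping pending rows by the invariant-defining keys: entity path and timeline set. A std-only sketch, loosely modeled on what `PendingRow::many_into_chunks` must do — the types and field names here are assumptions for illustration, not the crate's real definitions:

```rust
use std::collections::BTreeMap;

/// Illustrative stand-in for a row waiting in the batcher.
struct PendingRow {
    entity_path: String,
    timelines: Vec<&'static str>, // the set of timelines this row touches
}

/// Split rows so each resulting group satisfies two of the chunk invariants:
/// one entity path per group, and one timeline-set per group.
fn split_into_chunks(
    rows: Vec<PendingRow>,
) -> BTreeMap<(String, Vec<&'static str>), Vec<PendingRow>> {
    let mut chunks: BTreeMap<(String, Vec<&'static str>), Vec<PendingRow>> = BTreeMap::new();
    for row in rows {
        let mut timelines = row.timelines.clone();
        timelines.sort_unstable(); // canonicalize the timeline set for keying
        chunks
            .entry((row.entity_path.clone(), timelines))
            .or_default()
            .push(row);
    }
    chunks
}

fn main() {
    let rows = vec![
        PendingRow { entity_path: "points".into(), timelines: vec!["log_time", "frame"] },
        PendingRow { entity_path: "points".into(), timelines: vec!["log_time"] },
        PendingRow { entity_path: "camera".into(), timelines: vec!["log_time", "frame"] },
    ];
    // Three distinct (entity, timeline-set) keys -> three chunks.
    let chunks = split_into_chunks(rows);
    assert_eq!(chunks.len(), 3);
}
```

The real batcher additionally splits on per-component datatype and on the row-count threshold for unsorted timelines, but the grouping-by-key shape is the same.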
teh-cmc added a commit that referenced this pull request on May 31, 2024:
Integrate the new chunk batcher in all SDKs, and get rid of the old one.

On the backend, we make sure to deserialize incoming chunks into the old `DataTable`s, so business can continue as usual.

Although the new batcher has a much more complicated task with all these sub-splits to manage, it is somehow already more performant than the old one 🤷‍♂️:

```bash
# this branch
cargo b -p log_benchmark --release && hyperfine --runs 15 './target/release/log_benchmark --benchmarks points3d_many_individual'
Benchmark 1: ./target/release/log_benchmark --benchmarks points3d_many_individual
  Time (mean ± σ):      4.499 s ±  0.117 s    [User: 5.544 s, System: 1.836 s]
  Range (min … max):    4.226 s …  4.640 s    15 runs

# main
cargo b -p log_benchmark --release && hyperfine --runs 15 './target/release/log_benchmark --benchmarks points3d_many_individual'
Benchmark 1: ./target/release/log_benchmark --benchmarks points3d_many_individual
  Time (mean ± σ):      4.407 s ±  0.773 s    [User: 8.423 s, System: 0.880 s]
  Range (min … max):    2.997 s …  6.148 s    15 runs
```

Notice the massive difference in user time.

Part of a PR series to implement our new chunk-based data model on the client-side (SDKs):
- #6437
- #6438
- #6439
- #6440
- #6441
A `TransportChunk` is a `Chunk` that is ready for transport and/or storage. It is very cheap to go from a `Chunk` to a `TransportChunk` and vice-versa.

A `TransportChunk` maps 1:1 to a native Arrow `RecordBatch`: it has a stable ABI, and can be cheaply sent across process boundaries. (`arrow2` has no `RecordBatch` type; we will get one once we migrate to `arrow-rs`.)

A `TransportChunk` is self-describing: it contains all the data and metadata needed to index it into storage.

We rely heavily on chunk-level and field-level metadata to communicate Rerun-specific semantics over the wire, e.g. whether some columns are already properly sorted.

The Arrow metadata system is fairly limited -- it's all untyped strings -- but for now that seems good enough. It will be trivial to switch to something else later, if need be.
Related issues:
- `DataCell`'s size (& other metadata) over the wire (#1760)
- `Chunk` (#1692)
- `RERUN:component_name` (#3360)
- `Component`'s `DataType` should embed its metadata (#1696)

Part of a PR series to implement our new chunk-based data model on the client-side (SDKs):
- `Chunk` and its shuffle/sort routines (#6438)
- `TransportChunk` (#6439)

Checklist
- `main` build: rerun.io/viewer
- `nightly` build: rerun.io/viewer

To run all checks from `main`, comment on the PR with `@rerun-bot full-check`.