GH-36028: [Docs][Parquet] Detailed parquet format support and parquet integration status #36027
alippai wants to merge 1 commit into apache:main from alippai:parquet-advanced-details
Conversation
Thanks for opening a pull request! If this is not a minor PR, could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose Opening GitHub issues ahead of time contributes to the Openness of the Apache Arrow project. Then could you also rename the pull request title in the following format? In the case of PARQUET issues on JIRA the title also supports: See also:
I'm sure this is too detailed in some places, and there is a good chance it misses many useful features. My approach was to go through the great blog post, the parquet-format changelog, the Thrift file, and the parquet-mr, arrow, and arrow-rs issue queues. I've intentionally avoided the 2.4-2.10 parquet format version info, as it would imply that the 2.9 features include the 2.6 features, which might not reflect reality. Instead I've tried to focus on the end-user public API and provide a flat list of features. I'm open to different approaches as well. I feel particularly uncertain about the statistics and indices; I'm sure you can do that part better.

@tustvold @mapleFU @westonpace @wgtmac What do you think? Would this be useful?
Left some comments. I would personally restrict this table to features of the actual file readers and not query engine functionality like partitioning and concurrency - imo these are not features of a parquet implementation, but rather of a query system. IMO a parquet implementation should not be unilaterally making concurrency decisions, but rather exposing APIs that allow query engines to distribute the work however they deem fit. Similarly, partitions are a catalog detail.
I would also suggest having separate tables for supported types, encodings, compression and feature support.
+-------------------------------------------+-------+--------+--------+-------+-------+
| LZ4_RAW                                   |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Hive-style partitioning                   |       |        |        |       |       |
I'm not sure I'd consider this a feature of the parquet implementation, it is more a detail of the query engine imo?
While arrow-rs needs DataFusion for this functionality, arrow handles it without Acero. I don't have a strong opinion, though.
I agree with @tustvold, partitioning is more like a high-level use case on top of file format.
+-------------------------------------------+-------+--------+--------+-------+-------+
| ColumnIndex statistics                    |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page statistics                           |       |        |        |       |       |
What is this referring to?
Like I said, there is a good chance I made a mistake here. I saw this in the thrift spec: ColumnChunk->ColumnMetadata->Statistics
Could we organize these items in a layered fashion? Maybe this is a good start point: https://arrow.apache.org/docs/cpp/parquet.html#supported-parquet-features
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page CRC32 checksum                       |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Parallel partition processing             |       |        |        |       |       |
IMO this is a query engine detail, not a detail of the file format?
It's part of the Arrow API in Python.
+-------------------------------------------+-------+--------+--------+-------+-------+
| xxHash based bloom filter                 |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| bloom filter length                       |       |        |        |       |       |
OMG, they finally added it - amazing, will get that incorporated into the rust writer/reader
> OMG, they finally added it - amazing, will get that incorporated into the rust writer/reader

I just added it recently :) Please note that the latest format is not released yet, so parquet-mr does not know about bloom_filter_length now.
+-------------------------------------------+-------+--------+--------+-------+-------+
| BYTE_STREAM_SPLIT                         |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Partition pruning on the partition column |       |        |        |       |       |
Again this is a detail of the query engine not the parquet implementation imo
Same, it's part of the current API, but I agree it's not consistent across implementations.
+-------------------------------------------+-------+--------+--------+-------+-------+
| RowGroup append / delete                  |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page append / delete                      |       |        |        |       |       |
I don't think any implementation supports page appending; the semantics would be peculiar for things like dictionary pages. The rust implementation does support appending column chunks, though.
Yes, likely some or most of the Page references should be ColumnChunk. I'll read more about this.
Isn't Parquet itself a write-once format that can't be appended to? I'm not sure what these are supposed to indicate. The inability to append/delete without re-writing a Parquet file is why table formats like Iceberg and Delta have proliferated.
| Storage-aware defaults (1)                |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Adaptive concurrency (2)                  |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Adaptive IO when pruning used (3)         |       |        |        |       |       |
I'm not sure which parquet reader these features are based off, but my 2 cents is that they indicate a problematic IO abstraction that relies on prefetching heuristics instead of pushing vectored IO down into the IO subsystem (which the Rust and the proprietary Databricks implementations do).
I wanted to capture the IO pushdown section https://arrow.apache.org/blog/2022/12/26/querying-parquet-with-millisecond-latency/#io-pushdown but also added more. Likely out of scope, as none of the implementations go into detail or provide an API.
Perhaps just a "Vectorized IO Pushdown". I believe there are efforts to add such an API to parquet-mr
+-------------------------------------------+-------+--------+--------+-------+-------+
| RowGroup pruning using bloom filter       |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page pruning using projection pushdown    |       |        |        |       |       |
| Page pruning using projection pushdown    |       |        |        |       |       |
| Column Pruning using projection pushdown  |       |        |        |       |       |
Isn't this also a detail of the engine choosing what columns to read or not? Or is the intent here to indicate that rows/values can be pruned based on projection directly in the parquet lib?
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page pruning using statistics             |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page pruning using bloom filter           |       |        |        |       |       |
I don't think this is supported by the format, bloom filters are per column chunk
| Format                                    | C++   | Python | Java   | Go    | Rust  |
|                                           |       |        |        |       |       |
+===========================================+=======+========+========+=======+=======+
| Basic compression                         |       |        |        |       |       |
I wonder if we could have separate tables for supported physical types, encodings and compression
Thanks @tustvold. I'll address the Page vs ColumnChunk issues and the other improvement ideas. Also, it's a good insight that the parquet vs arrow vs dataset vs query engine API separation differs across languages.
+-------------------------------------------+-------+--------+--------+-------+-------+
| File metadata                             |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| RowGroup metadata                         |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Column metadata                           |       |        |        |       |       |
+-------------------------------------------+-------+--------+--------+-------+-------+
Are these intended to track the completeness of the fields defined in the metadata? If yes, they are probably worth a separate table indicating the state of each field. But that sounds too complicated.
+-------------------------------------------+-------+--------+--------+-------+-------+
| Format                                    | C++   | Python | Java   | Go    | Rust  |
The Java column could be misleading here. In the arrow repo, there is a Java dataset reader that supports reading from a parquet dataset. If this column is for parquet-mr, then it can easily get out of sync.
I'll repeat what the rest said about engine/format differences and maybe offer some clarification. In C++ the picture is pretty clear, as the APIs tend to be focused on implementation: there is a C++ parquet module which is purely a parquet reader. In pyarrow the picture is pretty muddled, as the APIs are more focused on user experience: there is a pyarrow.parquet module, but many of its features are powered by C++ datasets. For example, the pyarrow.parquet module can read from S3 even though the C++ parquet module has no concept of S3 (it just has an abstraction for input streams). So I agree with the others that we should probably not base the features on the Python API.
Although... to play devil's advocate... it might be odd when a feature is available in the parquet reader but not yet exposed in the query component. For example, there is some row skipping and bloom filter support in the C++ parquet reader, but we haven't integrated those into the datasets layer yet.
Also, do we think this table might belong at https://parquet.apache.org/docs/ (and we could link to it from Arrow's docs)? For example, the parquet-mr (Java) implementation and the parquet.net (C#) implementation are not involved with the Arrow project but are still standalone parquet readers.
Agreed with @westonpace.
Thanks, I can do another round on the weekend, on the correct website and with the suggestions included.
Moved it to the parquet-site repo: apache/parquet-site#34
This is a draft skeleton for: #35638 (comment)