[data] ignore metadata for pandas block #56402
Merged
alexeykudinkin merged 5 commits into ray-project:master (Sep 11, 2025)
Conversation
Signed-off-by: iamjustinhsu <jhsu@anyscale.com>
7298367 to 1f7dcec

iamjustinhsu commented Sep 10, 2025
```python
arrow_table = pa.Table.from_pandas(df_pandas)

# Convert back to pandas
df_roundtrip = arrow_table.to_pandas(ignore_metadata=True)
```
Contributor · Author
Confirmed this will fail without `ignore_metadata=True`. I wrote this test here to show that iamjustinhsu#3 will solve the issue.
52f2784 to f6fc010
alexeykudinkin
approved these changes
Sep 10, 2025
```python
DEFAULT_ENABLE_PANDAS_BLOCK = True

DEFAULT_PANDAS_BLOCK_IGNORE_METADATA = bool(
```
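The truncated constant above suggests an environment-variable-driven flag; a minimal sketch of that common pattern, where the env var name and default are assumptions (the PR's actual definition is cut off in this excerpt):

```python
import os

DEFAULT_ENABLE_PANDAS_BLOCK = True

# Hypothetical reconstruction of an env-driven boolean flag; the real
# variable name and default used by the PR are not shown here.
DEFAULT_PANDAS_BLOCK_IGNORE_METADATA = bool(
    int(os.environ.get("RAY_DATA_PANDAS_BLOCK_IGNORE_METADATA", "1"))
)
assert isinstance(DEFAULT_PANDAS_BLOCK_IGNORE_METADATA, bool)
```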
TimothySeah
approved these changes
Sep 10, 2025
ZacAttack pushed a commit to ZacAttack/ray that referenced this pull request (Sep 24, 2025)
dstrodtman pushed a commit to dstrodtman/ray that referenced this pull request (Oct 6, 2025)
justinyeh1995 pushed a commit to justinyeh1995/ray that referenced this pull request (Oct 20, 2025)
landscapepainter pushed a commit to landscapepainter/ray that referenced this pull request (Nov 17, 2025)
Why are these changes needed?
Consider the following code:

```python
import ray

# Read file (1)
source_path = "file_that_contains_tensor_strings.parquet"
ds = ray.data.read_parquet(source_path)

# Write file (2)
dest_path = "/tmp"
ds.map_batches(..., batch_format="pandas").write_parquet(dest_path)

# Read file again (3)
new_ds = ray.data.read_parquet(dest_path).map_batches(..., batch_format="pandas")
```

At a high level we read, write, read. At a lower level, we convert arrow blocks -> pandas -> arrow blocks -> pandas. We have connectors and registered extension types in `python/ray/air/util/tensor_extensions/`; however, we special-case tensor types by converting them to `TensorArray`s [here](https://github.com/iamjustinhsu/ray/blob/1f7dcec413bf9aba3ac39c0a14d7d4b734a1939f/python/ray/data/_internal/pandas_block.py#L238) when we convert pandas -> arrow. During this process, pyarrow stores metadata about the pandas block, which looks something like this:

```json
{
    "name": "feature1",
    "field_name": "feature1",
    "pandas_type": "object",
    "numpy_type": "numpy.ndarray(shape=(8, 2), dtype=<U38)",
    "metadata": null
},
{
    "name": "feature2",
    "field_name": "feature2",
    "pandas_type": "object",
    "numpy_type": "numpy.ndarray(shape=(8,), dtype=float32)",
    "metadata": null
}
```

For the most part this is fine. However, when converting back to pandas, arrow first searches the metadata ("numpy_type") to restore the schema. This can be troublesome because pandas/numpy doesn't know how to handle those custom types.
In pyarrow==14.0.0 this is an issue because pyarrow surrenders the special casing to numpy/pandas; in pyarrow==21.0.0 it's smarter and does handle that (I tested this).
NOTE
This has been tested with pyarrow==14.0.0 (ray version 2.45), and it works.
Solution
To handle older pyarrow versions, we can pass `ignore_metadata=True`.
Related issue number
Checks
- [ ] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
  - [ ] Unit tests
  - [ ] Release tests
  - [ ] This PR is not tested :(