Fix compression and serializer encodings #748
Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #748       +/-   ##
==========================================
+ Coverage   85.30%   88.30%    +2.99%
==========================================
  Files          46       90       +44
  Lines        2219     4952     +2733
  Branches      306      326       +20
==========================================
+ Hits         1893     4373     +2480
- Misses        281      504      +223
- Partials       45       75       +30
```
To append additional variables to the Dataset (e.g. a transposed volume) in Zarr V2, I had to add the following snippet right before calling … This should be less of an issue with this PR, and is something that should be addressed in the MDIO API.

```python
for var_name in list(ds.data_vars) + list(ds.coords):
    if "_FillValue" in ds[var_name].attrs:
        del ds[var_name].attrs["_FillValue"]
```
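For reference, the workaround above can be wrapped in a small helper. This is only a sketch: `drop_fill_value_attrs` and `FakeDataset` are hypothetical names introduced here for illustration, and the real code would operate on an actual `xarray.Dataset` rather than the stand-in object used to make the example self-contained.

```python
from types import SimpleNamespace


def drop_fill_value_attrs(ds):
    # Mirrors the workaround above: strip "_FillValue" from every variable
    # and coordinate so a Zarr V2 append does not hit attribute conflicts.
    # `ds` is assumed to quack like an xarray.Dataset (data_vars, coords,
    # and __getitem__ returning an object with an `.attrs` dict).
    for var_name in list(ds.data_vars) + list(ds.coords):
        if "_FillValue" in ds[var_name].attrs:
            del ds[var_name].attrs["_FillValue"]
    return ds


# Minimal stand-in for an xarray.Dataset, used only so the sketch runs
# without xarray installed.
class FakeDataset:
    def __init__(self):
        self._vars = {
            "amplitude": SimpleNamespace(attrs={"_FillValue": 0.0, "units": "m"}),
            "inline": SimpleNamespace(attrs={"_FillValue": -1}),
        }
        self.data_vars = ["amplitude"]
        self.coords = ["inline"]

    def __getitem__(self, name):
        return self._vars[name]


ds = drop_fill_value_attrs(FakeDataset())
```

Other attributes (e.g. `units` above) are left untouched; only the conflicting `_FillValue` entries are removed.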
```python
    from zarr.codecs.numcodecs import ZFPY as zarr_ZFPY  # noqa: N811
except ImportError:
    zfpy_ZFPY = None  # noqa: N816
    zarr_ZFPY = None  # noqa: N816
```
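For context, a minimal self-contained sketch of the guarded-import pattern the diff fragment uses. The `numcodecs.zfpy` import path and the `require_zfpy` helper are assumptions added for illustration (the fragment only shows the `zarr.codecs.numcodecs` import); the point is that both ZFPY codecs depend on the optional `zfpy` package, so the names fall back to `None` and callers check availability before enabling ZFP.

```python
try:
    # Both imports fail with ImportError when the optional zfpy package
    # (or a zarr build exposing the wrapped codec) is missing.
    from numcodecs.zfpy import ZFPY as zfpy_ZFPY  # noqa: N811
    from zarr.codecs.numcodecs import ZFPY as zarr_ZFPY  # noqa: N811
except ImportError:
    zfpy_ZFPY = None  # noqa: N816
    zarr_ZFPY = None  # noqa: N816

# Either both names are bound or both are None, so one flag suffices.
HAVE_ZFPY = zfpy_ZFPY is not None and zarr_ZFPY is not None


def require_zfpy() -> None:
    # Hypothetical helper: give a clear error when ZFP compression is
    # requested but the optional dependency is not installed.
    if not HAVE_ZFPY:
        raise ImportError("ZFP support requires the optional 'zfpy' package")
```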
I was looking through test_dataset_serializer.py and I only found imports for zfpy.ZFPY. Are there tests for zarr.codecs.numcodecs.ZFPY?
#747 will help with this. Lossy compression has not been a high priority.
ZFP is an array-to-bytes serializer and cannot be treated as a compressor directly. The changes here hide that distinction from the user. They also avoid creating an unnecessary default compressor on ZFP-compressed stores.
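In Zarr V3 terms, a serializer occupies the array-to-bytes stage of the codec pipeline, while a compressor is a bytes-to-bytes stage that runs after it. A toy sketch of why appending a default compressor after ZFP is redundant; the codec names and configurations below are illustrative assumptions, not the exact metadata this PR writes.

```python
# Default layout: "bytes" serializes the array, then a compressor shrinks it.
default_pipeline = [
    {"name": "bytes", "configuration": {"endian": "little"}},  # array -> bytes
    {"name": "zstd", "configuration": {"level": 3}},           # bytes -> bytes
]

# ZFP layout: ZFP itself is the array -> bytes stage; its output is already
# compressed, so no default compressor should be appended after it.
zfp_pipeline = [
    {"name": "numcodecs.zfpy", "configuration": {"mode": 4, "tolerance": 1e-4}},
]


def has_redundant_compressor(pipeline):
    # Toy check: flag any codec that appears after a compressing serializer.
    serializer_names = {"numcodecs.zfpy"}
    saw_serializer = False
    for codec in pipeline:
        if codec["name"] in serializer_names:
            saw_serializer = True
        elif saw_serializer:
            return True
    return False
```

With this distinction enforced in one place, callers configure ZFP like any other encoding and never see the serializer/compressor split.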