BUG: avoid memory segfault in from_ragged_array#2291
jorisvandenbossche merged 6 commits into shapely:main from
Conversation
Pull Request Test Coverage Report for Build 15110636867

Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch, so it includes changes from outside the original pull request, potentially including unrelated coverage changes.
Details
💛 - Coveralls
mwtoews
left a comment
I can reproduce the segfault and verify the fix. It looks like there was a mix-up between n_parts and n_geoms, which is now resolved. Only a small suggestion to apply for new NumPy code. Thanks for the nicely spotted fix!
shapely/tests/test_ragged_array.py
Outdated
```python
# https://github.com/shapely/shapely/discussions/2284
# one of the geometries has more rings than the total number of geometries
coords = np.random.randn(60, 2)
```
New code should use the modern RNG methods (docs); also keep the random coords the same-ish by specifying a seed.
```diff
- coords = np.random.randn(60, 2)
+ rng = np.random.default_rng(2284)
+ coords = rng.standard_normal(120).reshape((60, 2))
```
> also make the random coords stay the same-ish by specifying a seed.
For testing we don't necessarily want to use a fixed seed? It seems useful to me to use varying values. Of course, if you then run into an issue with certain values, it is not reproducible, so that is not necessarily very helpful, but personally I don't see the point of fixing the random values for this use case. I would then maybe rather use something like https://github.com/pytest-dev/pytest-randomly to ensure the seed is fixed per run of the test suite.
It probably doesn't make any difference to the randomness, as it's the data size that matters here. A non-seeded RNG is simply rng = np.random.default_rng(), if that's preferred.
Updated.
(Now, to be honest, I personally think this recommendation from NumPy is too strong; for testing purposes here, where we don't care about pseudo-random number generator theory (we are not doing statistical modelling), using np.random.default_rng().standard_normal(120).reshape((60, 2)) instead of np.random.randn(60, 2) is just unnecessary boilerplate and noise in the test code .. ;))
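For readers unfamiliar with the two NumPy APIs being debated above, here is a small side-by-side sketch. Note that `Generator.standard_normal` accepts a shape tuple directly, so the `reshape` step in the suggestion is not strictly needed; the seed value 2284 is just the one used in the suggested diff.

```python
import numpy as np

# Legacy API, as the test originally used (discouraged for new code):
legacy_coords = np.random.randn(60, 2)

# Modern Generator API, seeded for reproducibility
# (drop the seed argument for varying values per run, as discussed above).
rng = np.random.default_rng(2284)
coords = rng.standard_normal((60, 2))  # shape tuple avoids a reshape
```

With the same seed, `standard_normal((60, 2))` and `standard_normal(120).reshape((60, 2))` produce identical arrays, since both draw 120 values in the same order.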
Fixes the issue reported in #2284
We were allocating a temporary array for storing the intermediate rings that was too small.
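A minimal sketch of the scenario the PR describes (this is not the PR's actual test, and the specific coordinates are made up): a single polygon with two holes has 3 rings while the array holds only 1 geometry, the "more rings than geometries" shape that triggered the too-small temporary allocation. Assuming shapely >= 2.0, the round trip through `to_ragged_array` / `from_ragged_array` should now succeed instead of segfaulting.

```python
import shapely
from shapely import Polygon

# 1 geometry, 3 rings (shell + 2 holes)
shell = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
hole1 = [(1, 1), (2, 1), (1, 2), (1, 1)]
hole2 = [(5, 5), (6, 5), (5, 6), (5, 5)]
poly = Polygon(shell, [hole1, hole2])

# Round-trip through the ragged-array representation
geom_type, coords, offsets = shapely.to_ragged_array([poly])
result = shapely.from_ragged_array(geom_type, coords, offsets)
```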