API: Shrink MultiIterObject and make NPY_MAXARGS a runtime macro #25271
ngoldbaum merged 1 commit into numpy:main from
Conversation
Cython is OK with increased sizes, but doesn't like shrinking an object size. We should fix this (providing a size to compare with in the `pyd` maybe). But just not including the end of the object here publicly is totally fine... This makes `NPY_MAXARGS` also a *runtime* constant, since presumably the only user (`numexpr`) should be better off with that anyway. Yes, they need to change their code to hard-code the maximum, but they also need the NumPy cap from us anyway. This is needed because Cython breaks for sklearn, which uses `numpy.broadcast` AKA `PyArrayMultiIterObject`.
     * to be runtime dependent.
     */
    #if defined(NPY_INTERNAL_BUILD) && NPY_INTERNAL_BUILD
        PyArrayIterObject *iters[64];  /* 64 is NPY_MAXARGS */
The compat header is included later, so it could use these types in principle...
The sklearn issue and
The cygwin tests failed initially so I re-triggered them and it seems to have been a flake. I'll merge this once that test run passes.
Ah no, looks like a real failure. Multiarray doesn't import and then the diagnostic check finds: I know nothing about cygwin, so I don't know if this is related to this change...
Not sure; I was also hoping it was just a fluke, but if it is fluking, it is fluking on all PRs right now.
OK, it must have been some upstream change in a GitHub action or in Cygwin. Since CI is failing on other PRs in the same way, let's merge this to unblock scikit-learn's testing against numpy dev.
@ngoldbaum this is an unfortunate quick followup on the MAXDIMS bump, as it breaks sklearn compiling (and maybe some others).