CI: add pinning for scipy-openblas wheels#25697
Conversation
Hmm.
That is a known issue indeed, you cannot distinguish between 32-bit/64-bit Windows (or, more likely, 32-bit CPython on 64-bit Windows).
You could add separate requirements files for these specific packages, although I'm not sure that is any cleaner than direct pins where they are used in the CI config files.
It is 9 pins across 7 files. Updating that correctly will be painful.
Makes sense.
```
spin
scipy-openblas64<0.3.26
scipy-openblas32<0.3.26
```
Another aspect to consider here is third-party packages being used to sneak into CI and inherit any permissions of CI. For instance, if we install a wheel that's unpinned and without hash validation, there is the chance of an attacker being able to publish a package, which is then loaded into a CI job with permission to publish a NumPy wheel. This may seem a bit farfetched, but I've been involved in security incidents involving Bitcoin miners being installed into systems this way 😸 In the past, this was one of the motivators for poetry and lock files.
In this case, it's a remote possibility, given we also publish the scipy-openblas wheels, but given we can fix the versions, would it be worth adding a hash for best practice anyway?
> which is then loaded into a CI job with permission to publish a NumPy wheel.
This is not possible: our CI doesn't have any permissions beyond those needed to run CI jobs. And we should keep it that way, with the potential security issues you point out here being a main reason.
PyPI upload permissions are limited to a known, small set of maintainers, who must upload releases themselves while using 2FA.
> would it be worth adding a hash for best practice anyway?
Our current stance on hashes is:
- GitHub Actions: no for actions built into GitHub, yes for those from other, random publishers
- dependencies: no; this is too much churn and overhead for no real-world gain
Running CI jobs with everything pinned does have other advantages, primarily better reproducibility: a newly released version of a random dependency cannot break our CI. However, it would be a big overhaul, and at that point we should follow scikit-learn/pandas and move it all to conda envs.
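For reference, pip's hash-checking mode (which the discussion above decides against for dependencies) would look roughly like the sketch below. The version and digest are placeholders, not real values; a real digest can be produced with `pip hash <wheel-file>`:

```
# Hypothetical hash-pinned requirements entry (not what this PR does).
# <version> and <digest> are placeholders.
scipy-openblas64==<version> \
    --hash=sha256:<digest>
```

Such a file would then be installed with `pip install --require-hashes -r <file>`, which fails if any artifact's hash doesn't match.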
That is a bit beyond this PR. If you feel strongly about hashes, could you open an issue about pinning hashes of all dependencies? I am against it, I think it is overkill, but am willing to discuss.
Sure, if I feel enthusiastic I will, but I'm happy with the stance from @rgommers 😸
Force-pushed from a4193bc to ab60d6d
I moved all the requirements files to a requirements directory, and updated the documentation by looking at all the results of I also added a
I added `ci_requirements` to `all_requirements` in c9139e and checked locally that it works. Since this is a convenience file and not used in CI, I skipped all testing.
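As a sketch of how such a convenience file can aggregate the others (the included filenames here are illustrative, not necessarily the exact ones in the repo), a pip requirements file can pull in other files with `-r`:

```
# requirements/all_requirements.txt (illustrative sketch)
# pip requirements files may include other files via -r
-r ci_requirements.txt
-r test_requirements.txt
```

This keeps the CI pins in one authoritative file while letting developers install everything with a single `pip install -r requirements/all_requirements.txt`.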
Good to go then, thanks Matti!
Upgrading OpenBLAS should be done in a controlled fashion. Using the scipy-openblas wheels without pinning could break CI whenever a new wheel version is released and pip installs it.
This PR adds a `ci_requirements.txt` file to pin the version. To keep to a single file, it installs both the scipy-openblas64 and scipy-openblas32 wheels. While wasteful in terms of bandwidth/time, this causes no problems since the subsequent use of the wheel is variant-specific. Thoughts?
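A CI job would then consume the pin file with a single install step, along these lines (the step name and workflow layout below are an illustrative sketch, not the actual workflow files touched by this PR):

```yaml
# Hypothetical GitHub Actions step using the new pin file.
- name: Install pinned scipy-openblas wheels
  run: pip install -r requirements/ci_requirements.txt
```

With the `<0.3.26` pins in that file, a new scipy-openblas release on PyPI cannot silently change what CI builds against; bumping OpenBLAS becomes an explicit, reviewable change to one file.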