Add arm64 to Travis instructions #9407
Conversation
.travis.yml
Outdated
    - language-pack-de

    # This should allow building on arm64
    arch:
Remove this section; listing the arch in the jobs should be enough.
all right. I'm trying to follow https://blog.travis-ci.com/2019-10-07-multi-cpu-architecture-support
Still, the result of this section is to add jobs no. 3 and no. 4, neither of which is needed.
(we don't have any jobs in the default matrix, but it's just triggering that empty one nevertheless)
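For reference, a hedged sketch of what the reviewer's suggestion would look like (the job entry and env value are illustrative, not taken from this PR): declare the architecture inside each job in `jobs: include:` instead of a top-level `arch:` list, so no extra default-matrix jobs get generated.

```yaml
# Sketch only: put the architecture on the job entry itself rather
# than in a top-level `arch:` section that expands the build matrix.
jobs:
  include:
    - os: linux
      arch: arm64
      env: SETUP_CMD='test'   # illustrative; mirror an existing amd64 job
```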
miniconda doesn't seem to be available on arm, and I'm not sure about pip either, so we first need to figure out one working way to build. E.g. I definitely suggest trying a build without the optional dependencies, as I'm not sure how many of them are actually available as wheels, and we definitely don't want to build them from source. Also, while you're experimenting, I suggest adding the single arm job as
    # location during testing. We also use this build to make sure that the
    # dependencies get correctly installed with pip.
    - os: linux
      arch: amd64
I suppose even these are unnecessary, as amd64 is the default (the part I'm not sure about is whether the default might not stay at arm64 for the jobs after it's been changed). Anyway, this is not a real issue for now.
yeah, if the build works and passes tests I'll be astonished and then start figuring out the actual right way to do all this
It would probably be more sensible to do this on an actual arm64 at home but my Raspberry Pi 4 isn't here.
yes, I can confirm that these are unnecessary. I've also tried, but sadly the arm jobs still only support C as a language, so currently there's no way around installing Python ourselves.
Yeah, sadly this seems to be a no-go atm: https://travis-ci.org/astropy/astropy/jobs/599931761#L1193 and we're already very close to the timeout without even touching astropy.
One last try, if you want: list the absolute necessary dependencies, e.g. don't use
hah. We can do this old-school - apt has many of these dependencies, no need to compile if they're "system" packages.
.travis.yml
Outdated
    stage: Arm tests
    env: SETUP_CMD='test'
    install:
      - sudo apt-get -y install python3-venv python3-pip python3-numpy python3-matplotlib python3-kiwisolver python3-coverage python3-sgp4 python3-psutil python3-objgraph python3-ipython python3-graphviz
I meant pip-installing the bare minimum Python dependencies, not apt-get installing them. That should fit within the timeout, and hopefully still leave enough time for building astropy and testing it.
I thought we could get away with using the system numpy, but it wants to compile its own. Is that a version thing, or is it in some sort of virtualenv?
I'm not sure; we never suggest using system packages but stick with pip (or conda if one must).
oh certainly under any normal circumstances I've given up on anything outside a virtualenv. But we're in some kind of container here, and it seemed worth a try.
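A sketch of the "bare minimum via pip" install step being discussed (the exact package list is an assumption on my part, not taken from the PR): create a venv and pip-install only the hard dependency, leaving optional packages out so nothing without an arm64 wheel gets compiled from source.

```yaml
# Sketch of a minimal arm64 install step: venv + pip with only the
# hard build/test dependency (numpy); optional dependencies are
# skipped because most have no arm64 wheels.
install:
  - python3 -m venv ~/venv
  - source ~/venv/bin/activate
  - pip install --upgrade pip numpy
```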
Astonishingly, it seems to work. The general non-availability of wheels and other quick-to-install software limits the testing we can do without encroaching on the timeout. But when it comes to testing precision issues and binary portability, this is a substantially different platform we could reasonably want to support.
It doesn't help on other platforms and the cache doesn't work on arm64.
I'm sorry to say so, but I'm 👎 on merging this, simply because the arm run takes 37 minutes even when only installing the bare minimum, and that is 4-5 times what our other travis jobs take.
The question really is: what platforms does astropy support? Specifically, as we work on extended precision, for example in #9368 and #9361, does our code work when longdouble is binary128 rather than binary80? I agree the build process is agonizing as it stands. Others might build wheels, or Travis might get their cache working: both ccache to accelerate compilation and the pip cache to store built wheels would dramatically speed up the build process. I don't think, as it stands, that we should add this to the mainline build process. But I think it would be good to keep it around so that it can easily be merged when the build process does become more reasonable, and also so we can temporarily turn on arm64 builds in branches where we are worried about whether astropy behaves properly.
@aarchiba - see the slack thread on this. We're thinking about having this included in the release branches, but not on master, until the build time issues get sorted out one way or the other.
How much does it help? The first build will still be slow.
uh, which slack thread? I don't see anything relevant on any of the threads I know of... That sounds like a perfectly reasonable option (though I note that they now have a cache, and I'm poking around to see whether that helps and how much). I will say that I don't feel any special urgency about this; it's not something I'm trying to get in before the feature freeze. I'm just curious, and when at some point I start trying to hammer Time with hypothesis, I'll want to know what happens here too.
testing-related stuff is not bound to the feature freeze anyway; it can come at any time ;)
Uh, weird. The built wheel is not working out of the cache - it may be the case that the cache does not distinguish hardware architecture? Or wheels just can't cope with non-x86 hardware?
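One relevant detail here: wheel filenames embed a platform tag derived from the build machine's architecture, so even if the cache itself survives across architectures, a wheel built on x86_64 can never be installed on aarch64. A small stdlib sketch of where that tag comes from:

```python
import sysconfig

# Wheel platform tags are derived from values like this one, so a
# cache shared between amd64 and arm64 jobs cannot reuse built wheels.
print(sysconfig.get_platform())  # e.g. 'linux-x86_64' or 'linux-aarch64'
```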
Since this could have prevented the problems now happening, because Debian does support arm: is there any way to cut the time needed? Above, I read about installing via apt vs. pip - apt might well be faster, as everything is precompiled. It would also mean we automatically test whether people on Debian-derived distributions will be able to install astropy (since that includes me, it would make me happy!). If it is simply too long for regular testing, a cron job might still be an idea?
In principle, I think careful planning could get this working without heroic effort. On my arm64 SBC I installed essentially everything from pip, and what took forever was building the wheels for all the compiled code. Once built, pip-installing everything was fairly quick. In principle the Travis cache is capable of keeping all those wheels, but it's a bit flaky in general and on arm64. What packages are available from apt I don't know, but indeed that would help a lot.
In fact I believe there were headaches when using apt, because the apt-installed versions of (say) numpy were not being used; pip felt the need to replace them with pip-compiled versions.
Since Debian ships
superseded by #9668 |
This pull request is an experiment to evaluate whether astropy works correctly on the arm64 architecture. This architecture is notable for the fact that long doubles are actually binary128, that is, long double is quadruple precision. It is not totally clear whether the implementation is in hardware, but it is certainly valuable to see whether the assumption that long doubles are 80 bits or less is built into astropy somewhere. And many smartphones and single-board computers, including the Raspberry Pi 3 and 4, can run an arm64 operating system (though many use the 32-bit version).
Relevant to #9368 and #9361
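A quick way to see which long double a platform actually provides is to query NumPy (a sketch; values in the comments are the usual ones for 64-bit Linux, not measured in this PR's jobs):

```python
import numpy as np

# On x86-64 Linux, np.longdouble is the 80-bit x87 extended type
# (~18 significant decimal digits); on arm64 Linux it is IEEE
# binary128 (~33 digits). finfo reports the difference directly.
fi = np.finfo(np.longdouble)
print(fi.precision)                      # 18 on x86-64 Linux, 33 on arm64 Linux
print(np.dtype(np.longdouble).itemsize)  # 16 bytes on both 64-bit Linux targets
```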