Add Containerfile and build.sh to build it. #59868
Conversation
Force-pushed from fb92950 to 18a3ade
jenkins retest windows

jenkins test windows

jenkins test make check arm64

I don't know what's going on with the windows tests or the make check arm64 tests, but I can guarantee it's not related to this PR.
zmc
left a comment
I haven't done a full review yet, but I would like to see:

- `sudo` dropped from the `podman` invocations
- probably a `podman image pull` of the base image in `build.sh`
- nit: consistency with things like `>> packages.txt` vs `>>packages.txt`
I sympathize with the "no sudo", but the failures were so weird on the build slaves that I couldn't decode them. Of course I don't know now what they were exactly, but I was out of patience that day and just did what everything else does. It was never necessary on my test hosts. If you're adamant I can try to debug again. I don't get the 'podman image pull'; is the implicit pull suboptimal somehow? I'll fix spaces on redirects.
If you find you can't remove the sudo invocations because of the CI environment, could you replace them with a function call, something like the below, so that someone invoking the script manually can choose to disable sudo?
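A minimal sketch of what such an opt-in wrapper could look like; the `run` name and the `SUDO` environment variable are hypothetical, not from the PR:

```shell
# Hypothetical wrapper: prefix commands with sudo only when the caller
# opts in by exporting SUDO (e.g. in the Jenkins job environment).
run() {
    # ${SUDO:+sudo} expands to "sudo" when SUDO is set and non-empty,
    # and to nothing otherwise, so rootless users get plain podman.
    ${SUDO:+sudo} "$@"
}

# CI:        export SUDO=1; run podman build ...
# rootless:  run podman build ...
```

The `${VAR:+word}` expansion keeps the call sites identical in both modes.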
I can help with issues around rootless podman. I'd like to see sane defaults set for the inputs; currently the script refuses to run without them all set. I would actually argue that the script should be unaware of authentication, or at most re-export
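For illustration, one way the caller (e.g. a Jenkins snippet) could own authentication so that `build.sh` never sees credentials. The function and the `REPO_*` variable names are hypothetical, and `PODMAN` is overridable (e.g. `PODMAN=echo`) for dry runs:

```shell
PODMAN="${PODMAN:-podman}"   # override with PODMAN=echo to dry-run

# Hypothetical caller-side auth step, kept outside build.sh entirely.
registry_login() {
    # --password-stdin keeps the secret out of argv / the process list
    printf '%s' "$REPO_PASS" | $PODMAN login --username "$REPO_USER" --password-stdin "$REPO_HOST"
}

# Usage (Jenkins snippet): registry_login && ./build.sh
```

With this split, a Tekton (or manual) caller substitutes its own login step and `build.sh` stays build-only.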
Re: pulling before building, you can do it with
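The exact command was cut off in this transcript; one likely candidate is podman build's pull policy flag. A hedged sketch with hypothetical names (`--pull=always` needs a reasonably recent podman; `PODMAN=echo` allows a dry run):

```shell
PODMAN="${PODMAN:-podman}"   # override with PODMAN=echo to dry-run

# Hypothetical helper: re-pull the FROM image as part of the build,
# instead of a separate "podman image pull" step.
build_fresh() {
    $PODMAN build --pull=always -t "$1" .
}
```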
@Phlogiston: sure, if non-sudo doesn't work for the builders, we can make it optional for non-builders. I hadn't really spent that much time optimizing the script for non-CI use, as you might expect, so that informs a number of the choices I made and the flexibility I didn't add, but it won't hurt to have some more.

@zmc: so can you elaborate on why you want an image pull? Does FROM not check the image for "latest available"?

@ktdreyer: thanks for the tip if we need it.

@zmc: I will set up a test without sudo and see what happened (happens).

@zmc: defaults: I had them initially, but again, focusing on CI, I wanted primarily to make sure the job failed if it didn't have enough information, and to make sure I was consciously setting everything to exactly what I wanted. It's less convenient for outside-CI use, true. I'm torn on this; it's easy to add a wrapper to call build.sh with whatever defaults someone wishes, but harder to catch "oh shit I didn't specify a branch and this is going to build and push main". Undecided; more input welcome.

@zmc: auth is an interesting question. Of course when run from Jenkins the passwords will be elided, as usual, and it's sort of our pattern for all these tools/authentication mechs. I'm not opposed to the idea of external-to-build.sh auth, but Jenkins needs to do it somehow, so it would take another shell snippet somewhere. Maybe we could decompose it into pieces; Ken originally pushed toward "as simple as possible to invoke" for flexibility for other build systems, but maybe having a bit specific to Jenkins would allow the script to assume things like auth are taken care of somewhere else (and they would be different for, say, Tekton, one assumes).

Enough open questions here that maybe we should meet? Maybe invite Ken to our Wednesday noon meeting this week if he can make it?
A small change landed in ceph-container that we should copy here. We bumped nfs-ganesha from 5 to 6 for
After some offline discussion, the plan is:

Other refinements can wait for incremental changes here once we get this in-tree.
andrewschoen
left a comment
This is great, thank you for doing this work!
I agree with the points already made about not using sudo and keeping auth a responsibility of the caller and not build.sh.
Concerning the multi-arch support, have you looked into the --platform and --manifest flags of podman build? I did some work with them recently for the samba containers. Nothing this PR needs to address; mostly just a question/idea for future updates.
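For reference, a hedged sketch of the single-command approach those flags enable; image names are illustrative, and cross-arch RUN steps additionally require qemu-user-static/binfmt emulation on the build host:

```shell
PODMAN="${PODMAN:-podman}"   # override with PODMAN=echo to dry-run

# Hypothetical one-shot multi-arch build: --platform takes a
# comma-separated list and --manifest collects the per-arch images
# into a manifest list under the given name.
build_multiarch() {
    $PODMAN build --platform linux/amd64,linux/arm64 --manifest "$1" .
}
```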
Force-pushed from d15fe55 to 99eef10
@andrewschoen thanks for the look. I think --platform and --arch are of limited utility; I have to RUN a bunch of stuff inside the container, and I think that needs to run on the right host to work. The manifest-list image is easy enough to do; it's mostly about coordinating between the two separate builds (one for each arch) so that they're both done by the time the manifest-list build runs.
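A hedged sketch of that glue step, run once both per-arch images have been pushed by their native builders; all names are illustrative and `PODMAN=echo` allows a dry run:

```shell
PODMAN="${PODMAN:-podman}"   # override with PODMAN=echo to dry-run

# Hypothetical manifest-list glue: stitch already-built per-arch
# images into one list and push it. Assumes the arch builds finished.
make_manifest_list() {
    local list=$1; shift
    $PODMAN manifest create "$list"
    local img
    for img in "$@"; do
        $PODMAN manifest add "$list" "$img"
    done
    # --all pushes the list plus every image it references
    $PODMAN manifest push --all "$list" "docker://$list"
}

# Usage: make_manifest_list repo/ceph:main repo/ceph:main-amd64 repo/ceph:main-arm64
```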
@zmc @ktdreyer @andrewschoen @phlogistonjohn I think this is ready for re-review. I'll squash all the 'review' commits once everyone's happy. I managed to get a build done and test the resulting container with the original two tests (cephadm iscsi and nfs), so I'm confident the container is OK. There are a few changes to the ceph-build PR as well; that'll happen tomorrow.
The intent is to replace ceph-container.git, at first for ci containers only, and eventually production containers as well.

There is code present for production containers, including a separate "make-manifest-list.py" to scan for and glue the two arch-specific containers into a 'manifest-list' 'fat' container, but that code is not yet fully tested. This code will not be used until a corresponding change to the Jenkins jobs in ceph-build.git is pushed.

Note that this tooling does not authenticate to the container repo; it is assumed that will be done elsewhere. Authentication is verified by pushing a minimal image to the requested repo.

Signed-off-by: Dan Mick <dmick@redhat.com>
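As illustration of the "verify auth by pushing a minimal image" check mentioned in the commit message, a hedged sketch; the function and image names are hypothetical, and `PODMAN=echo` allows a dry run:

```shell
PODMAN="${PODMAN:-podman}"   # override with PODMAN=echo to dry-run

# Hypothetical auth probe: build a trivial image from a stdin
# Containerfile and push it, failing fast if credentials for the
# target repo are missing.
verify_push_access() {
    local repo=$1
    echo "FROM scratch" | $PODMAN build -t "$repo/auth-probe" -
    $PODMAN push "$repo/auth-probe"
}
```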
jenkins test api
jenkins test api
There is a follow-up fix in #60255.
Currently marked "draft" until the final CI build after cleanup is clean and tested. Previously tested with a copy of the ceph-dev-new family of Jenkins jobs modified to use this container build mechanism, and run through these teuthology suites to validate basic container functionality as well as iscsi and nfs-ganesha:
http://pulpito.front.sepia.ceph.com/dmick-2024-09-18_01:08:20-orch:cephadm-wip-container-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/dmick-2024-09-18_01:09:36-orch:cephadm-wip-container-distro-default-smithi/
Contribution Guidelines
To sign and title your commits, please refer to Submitting Patches to Ceph.
If you are submitting a fix for a stable branch (e.g. "quincy"), please refer to Submitting Patches to Ceph - Backports for the proper workflow.
When filling out the below checklist, you may click boxes directly in the GitHub web UI. When entering or editing the entire PR message in the GitHub web UI editor, you may also select a checklist item by adding an `x` between the brackets: `[x]`. Spaces and capitalization matter when checking off items this way.

Checklist
Show available Jenkins commands
- jenkins retest this please
- jenkins test classic perf
- jenkins test crimson perf
- jenkins test signed
- jenkins test make check
- jenkins test make check arm64
- jenkins test submodules
- jenkins test dashboard
- jenkins test dashboard cephadm
- jenkins test api
- jenkins test docs
- jenkins render docs
- jenkins test ceph-volume all
- jenkins test ceph-volume tox
- jenkins test windows
- jenkins test rook e2e