Use official checksums to verify Tini #55491
Merge Remote Tracking Branch
Pinging @elastic/es-core-infra (:Core/Infra/Build)
distribution/docker/build.gradle
Outdated
final String elasticsearch = oss ? "elasticsearch-oss-${VersionProperties.elasticsearch}-${classifier}.tar.gz" : "elasticsearch-${VersionProperties.elasticsearch}-${classifier}.tar.gz"
return [
    'base_image' : "aarch64".equals(architecture) ? "arm64v8/centos:7" : "centos:7",
    'base_image' : "centos:7",
@jasontedor Question for you: since CentOS uses manifests for their Docker builds, pulling from the specific arch is unneeded. Was there another reason to pull directly from arm64v8/centos:7? Or would pulling from centos:7 work? You can see the manifests/archs that CentOS supports here: https://hub.docker.com/_/centos?tab=tags.
The thinking here is that using an explicit base image makes it possible to cross-build the ARM image, for example using Docker for Mac. It's entirely possible that we'll revert to just specifying centos:7 in the future however.
> The thinking here is that using an explicit base image makes it possible to cross-build the ARM image, for example using Docker for Mac.
Makes perfect sense! I went ahead and made the amd64 image explicit too, in case you're not on an x86 system.
> It's entirely possible that we'll revert to just specifying centos:7 in the future however.
Using manifests for this would help simplify things, but it all depends on your CI/CD. How are you currently building and publishing your images? Does Elastic have access to arm64 hardware that you will build the arm Docker images on? Or are you planning on using qemu and binfmt_misc to emulate the architectures so you can build all the images on x86 hardware?
I am happy to lend a hand if you need one when it comes to building multi-arch containers and building out manifests. Being at IBM, I do that quite a lot! 😄
Are you planning on producing manifests for your arm64 and amd64 images? That would be really cool to see!!
One quick note on using qemu and binfmt_misc: there are some bugs you might run into. Images might seem like they built correctly, but when run on the actual hardware you will hit errors. Hence why I encourage people to build on-platform if they can. I am currently helping the Jenkins Docker team build on-platform for each of their images. AWS does offer arm64 resources, in case you were interested.
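For reference, stitching per-arch images into a single tag is essentially two `docker manifest` commands. A minimal sketch (the repository and tag names below are hypothetical, and the commands are built as strings and printed as a dry run, since actually running them needs a Docker daemon and push access to a registry):

```shell
#!/bin/sh
# Hypothetical repository and tag; substitute the real registry layout.
REPO="docker.elastic.co/elasticsearch/elasticsearch"
TAG="8.0.0-SNAPSHOT"

# A manifest list points one tag at several per-arch images, so
# `docker pull` resolves the right image for the host automatically.
CREATE_CMD="docker manifest create ${REPO}:${TAG} ${REPO}:${TAG}-amd64 ${REPO}:${TAG}-arm64"
PUSH_CMD="docker manifest push ${REPO}:${TAG}"

# Dry run: print the commands instead of executing them.
echo "$CREATE_CMD"
echo "$PUSH_CMD"
```

Swapping `echo` for actual execution requires the per-arch tags to already be pushed to the registry.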
RUN wget --no-check-certificate --no-cookies --quiet https://github.com/krallin/tini/releases/download/v0.19.0/tini-\$(dpkg --print-architecture) \
    && wget --no-check-certificate --no-cookies --quiet https://github.com/krallin/tini/releases/download/v0.19.0/tini-\$(dpkg --print-architecture).sha256sum \
    && echo "\$(cat tini-\$(dpkg --print-architecture).sha256sum)" | sha256sum -c \
    && rm -f tini-\$(dpkg --print-architecture).sha256sum \
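Outside the Dockerfile, the same verify-then-use pattern can be exercised with plain coreutils. A small standalone sketch (the file name and contents are made-up stand-ins for the real tini release artifacts):

```shell
#!/bin/sh
# Stand-in for the downloaded tini binary (hypothetical content).
printf 'fake tini binary\n' > tini-amd64

# Upstream publishes tini-<arch>.sha256sum next to each release asset;
# here we generate the checksum file ourselves for the demo.
sha256sum tini-amd64 > tini-amd64.sha256sum

# `sha256sum -c` re-hashes the file and compares it against the recorded
# digest; any mismatch exits non-zero, which fails the docker build.
sha256sum -c tini-amd64.sha256sum   # prints: tini-amd64: OK
```

If the binary is tampered with after the checksum file is written, the final step prints `FAILED` and returns a non-zero exit status.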
There's no need to remove the checksum, since this is just the builder image.
The reason I removed it was that we're still under the working dir of /usr/share/elasticsearch, which gets copied into the main image. Thus the checksum would be copied to the second stage.
What would be easier is to move the whole Tini "block" of code up before the WORKDIR changes to /usr/share/elasticsearch, around line 20. Do you mind if I move the Tini code "block" to line 20? I did not want to move too much stuff around, but if you think it is an okay solution I will!
In my latest commit, I moved the Tini code "block" up in the Dockerfile before you change WORKDIR, which eliminates the need to delete the checksum.
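The reordering described is roughly the following shape (a hedged sketch, not the exact committed Dockerfile; the tini URL and arch handling mirror the diff above, and the install path is illustrative):

```dockerfile
# Builder stage: fetch and verify tini *before* WORKDIR moves to
# /usr/share/elasticsearch, so the .sha256sum file never lands in the
# directory that gets copied into the final stage.
RUN wget --no-cookies --quiet \
      https://github.com/krallin/tini/releases/download/v0.19.0/tini-$(dpkg --print-architecture) && \
    wget --no-cookies --quiet \
      https://github.com/krallin/tini/releases/download/v0.19.0/tini-$(dpkg --print-architecture).sha256sum && \
    sha256sum -c tini-$(dpkg --print-architecture).sha256sum && \
    mv tini-$(dpkg --print-architecture) /bin/tini && \
    chmod +x /bin/tini

# Only now switch into the directory that is copied to stage two.
WORKDIR /usr/share/elasticsearch
```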
I like that we remove the checksum file even though it's currently unneeded, because if a later refactoring were to, for example, accidentally revert to executing this in /usr/share/elasticsearch, we wouldn't have to "remember" to now delete the checksum file. It can be done as part of this RUN command, so it doesn't introduce a new layer (which doesn't even matter, since it's in the builder image).
@jasontedor @pugnascotia I am flexible either way.
> It can be done as part of this RUN command, so not introducing a new layer (which doesn't even matter since it's in the builder image).
I think this is key. If there is no new layer, we're not adding to the build time or image size. Plus, like @jasontedor said, it's just the builder image.
@elasticmachine ok to test

@elasticmachine update branch

@james-crowley Thank you for the PR.
Closes elastic#55490. Use the official checksums when downloading `tini` for our Docker images.
This will be backported in #55717, which also backports using `tini` as the Docker entrypoint.
@pugnascotia Thanks for the review and backporting it to
Firstly, backport the use of tini as the Docker entrypoint. This was supposed to have been done following #50277, but was missed. It isn't a direct backport as this branch will continue using root as the initial Docker user. Secondly, backport #55491 to use the official checksums when downloading tini.
This PR switches to the official checksums published by Tini.
More information in issue #55490
Closes #55490