Release containerd for multiple architectures #3702

Closed
aojea wants to merge 1 commit into containerd:master from aojea:arch

Conversation

@aojea
Contributor

aojea commented Sep 27, 2019

Current distributions are not able to keep up with the pace at which
projects like containerd release, preventing users from using
new features until a new distribution is released, somebody creates
"unofficial" packages, or they build the project locally.

Containerd provides binaries in a tarball that are easy to use, but
currently there are only amd64 binaries.

This patch adds a script that makes it possible to cross-compile and
release containerd for multiple architectures.

Signed-off-by: Antonio Ojea <antonio.ojea.garcia@gmail.com>

@aojea
Contributor Author

aojea commented Sep 27, 2019

You can see the results of the script in this job

https://travis-ci.org/aojea/containerd/builds/590422937

The command "sudo PATH=$PATH GOPATH=$GOPATH script/release/release-containerd.sh" exited with 0.
$ ls releases/*.tar.gz
releases/containerd-1.3.0-rc.3-18-g4bfc235.linux-amd64.tar.gz
releases/containerd-1.3.0-rc.3-18-g4bfc235.linux-arm64.tar.gz
releases/containerd-1.3.0-rc.3-18-g4bfc235.linux-arm.tar.gz
releases/containerd-1.3.0-rc.3-18-g4bfc235.linux-ppc64le.tar.gz
releases/containerd-1.3.0-rc.3-18-g4bfc235.linux-s390x.tar.gz
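
The per-architecture loop the script runs can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual script: the function name, output layout, and the commented-out build step are assumptions; Go cross-compilation itself only needs `GOOS`/`GOARCH`.

```shell
# Illustrative sketch of the release loop (not the PR's actual script).
# Go cross-compiles with just GOOS/GOARCH, so each target architecture
# gets its own build and tarball under the releases/ directory.
build_all() {
  local version=$1 out=${2:-releases}
  mkdir -p "$out"
  for arch in amd64 arm64 arm ppc64le s390x; do
    # GOOS=linux GOARCH="$arch" make binaries    # real build step would go here
    echo "would package: $out/containerd-${version}.linux-${arch}.tar.gz"
  done
}
```

The architecture list and tarball naming mirror the job output above.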

/cc @BenTheElder

Fixes #3664, #2996

@theopenlab-ci

theopenlab-ci bot commented Sep 27, 2019

Build succeeded.

@aojea
Contributor Author

aojea commented Sep 27, 2019

the failure is unrelated to the patch 🤔

The command "DCO_VERBOSITY=-q ../project/script/validate/dco" exited with 1.

@estesp
Member

estesp commented Sep 27, 2019

It's a whitespace issue with your patch, usually some kind of trailing whitespace:

e61c5df - FAIL - has whitespace errors. See `git show --check e61c5df7401626e95861a55018f580e49ca54478`.

Run the git show command above and it will tell you exactly what it didn't like.

@theopenlab-ci

theopenlab-ci bot commented Sep 27, 2019

Build succeeded.

@theopenlab-ci

theopenlab-ci bot commented Sep 27, 2019

Build succeeded.

@theopenlab-ci

theopenlab-ci bot commented Sep 27, 2019

Build succeeded.

@Random-Liu
Member

The problem is that we don't test containerd on multiple architectures. Will that be a concern? Or are we sure that it will just work across different architectures?

@codecov-io

codecov-io commented Sep 27, 2019

Codecov Report

Merging #3702 into master will not change coverage.
The diff coverage is n/a.

@@          Coverage Diff           @@
##           master   #3702   +/-   ##
======================================
  Coverage    42.1%   42.1%           
======================================
  Files         129     129           
  Lines       14307   14307           
======================================
  Hits         6024    6024           
  Misses       7383    7383           
  Partials      900     900
Flag Coverage Δ
#linux 45.61% <ø> (ø) ⬆️
#windows 37.05% <ø> (ø) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 36cf5b6...ec2a107. Read the comment docs.

@BenTheElder
Contributor

Granted that kind is a relatively small user but we're already consuming containerd on these architectures, just from packaging / builds further downstream (from canonical). I'm not sure that they're testing much either, and these lag behind upstream (current stable Ubuntu has 1.2.6), so we'd rather just consume recent release tarballs.

Can we just put a disclaimer in the release notes?
Or maybe build and post these binaries somewhere without actively advertising them?
Publishing binaries makes it easier to try.

We can start hosting our own unofficial builds, but it seemed better to offer tooling upstream.

@BenTheElder
Contributor

Even just ensuring that it builds successfully is a helpful step. Right now it seems we only have that in some capacity for Windows amd64, Linux amd64, and Linux arm64?


# Add repositories with multiple architectures
source /etc/os-release
cat <<EOF > /etc/apt/sources.list
Member

I think the script should fail when $CI is not set.
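
A guard like that might look like this. It is only a sketch: the message and function name are illustrative, and it assumes the `CI` environment variable that Travis sets (`CI=true`) as the signal.

```shell
# Sketch: refuse to run outside CI, since the script rewrites
# /etc/apt/sources.list and installs packages. Travis sets CI=true.
require_ci() {
  if [ -z "${CI:-}" ]; then
    echo "error: refusing to run outside CI (\$CI is not set)" >&2
    return 1
  fi
}
```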

Member

Or I'm wondering whether we can use a Dockerfile here

@estesp
Member

estesp commented Sep 27, 2019

We've definitely wanted to expand the set of architectures, and even tried Travis's new ppc64le support briefly, but any kind of performance or stability issue with other architectures kills CI performance/quality, and then we end up going back to the "sure-fire" configuration so we aren't crippling development work.

I know internal to IBM there are those making sure ppc64le and s390x work, as well as distros (as noted Ubuntu) who support a variety of architectures. But it's hard to connect those dots to upstream validation without some CI "build farm" (like we had in Docker and then moby/moby).

Without that, I think we've been hesitant to actually offer binaries that someone might decide they depend on or believe that we support as an upstream community--in most cases without access to hardware/resources to test, validate, debug, etc.

If there was a way to mark them as "contrib"-like binaries and separate from the fully tested Windows/Linux amd64 binaries, I could be convinced it is generally valuable, and that it saves time/effort for a smaller community who don't have the resources to keep up with upstream but still want to try the latest/greatest containerd.

@thaJeztah
Member

perhaps "nightly builds" (to distinguish them from tested releases)?

@Random-Liu
Member

Random-Liu commented Sep 27, 2019

perhaps "nightly builds" (to distinguish them from tested releases)?

Yeah, I was thinking about a similar thing. Basically, we should distinguish them from tested release artifacts, either through documentation or the publishing channel.

I could be convinced it is generally valuable, and that it saves time/effort for a smaller community who don't have the resources to keep up with upstream but still want to try the latest/greatest containerd.

Agree.

@aojea
Contributor Author

aojea commented Sep 27, 2019

+1 for nightlies that can be identified by commit so users can pin to one version.

@thaJeztah
Member

+1 for nightlies that can be identified by commit so users can pin to one version.

if it's possible to publish nightlies somewhere, it might be good to have a documented retention policy to prevent the collection from growing indefinitely (e.g., nightly builds will be kept for 30 days, after which they are removed)

@aojea
Contributor Author

aojea commented Sep 28, 2019

Is it possible to create a new repo under containerd to store the nightlies? It seems this approach is used by other projects, like https://github.com/atom/atom-nightly-releases

It can be weekly if nightly is too fast

@aojea
Contributor Author

aojea commented Sep 30, 2019

I've created a repository with a job that creates nightlies using Travis; please take a look:

https://github.com/aojea/containerd-nightly-releases
https://github.com/aojea/containerd-nightly-releases/releases

The job can be triggered using a cron job in Travis.
I guess there is no need for retention policies on GitHub, but it shouldn't be difficult to implement another cron job that removes releases; I would suggest keeping all nightlies since the last stable release.

@estesp
Member

estesp commented Oct 1, 2019

This looks reasonable to me; to officially implement this we would want to create a new "nightlies" repo and copy in your travis config from your example releases repo. If we do that, is that sufficient for the use case @BenTheElder has? For example, it won't provide an (easy) way to get official release builds (like 1.3.0 GA for multiple architectures)

@aojea
Contributor Author

aojea commented Oct 1, 2019

This looks reasonable to me; to officially implement this we would want to create a new "nightlies" repo and copy in your travis config from your example releases repo. If we do that, is that sufficient for the use case @BenTheElder has? For example, it won't provide an (easy) way to get official release builds (like 1.3.0 GA for multiple architectures)

we can establish a retention policy to keep nightlies since the last stable release; it's just a matter of adding more logic to the script, and this will allow users to pin to the nightly commit that was released as GA.
Right now the ball has started rolling, but maybe it is feasible to do a nightly release "in the past" with the 1.3.0 GA commit; this way we can obtain those multi-arch binaries.

For previous GA releases distros are already providing packages, I think the interest is in having the latest code available to consume.

What do you think @BenTheElder ?

@thaJeztah
Member

Just a small suggestion/idea; for versioning the nightly builds, it might be an idea to use Golang's "pseudo-version", which is both valid SemVer, and captures all the information (commit date/time, and commit-sha); see docker/docker-ce-packaging#125
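
A Go pseudo-version has the shape `v0.0.0-yyyymmddhhmmss-abbreviatedsha`. Formatting one might look like the sketch below; the function name is illustrative, and in a real release script the timestamp and SHA would come from `git log` rather than being passed in.

```shell
# Sketch: format a Go-style pseudo-version from a base version,
# a UTC commit timestamp (yyyymmddhhmmss), and a commit SHA.
# In a release script these inputs would come from git, e.g.:
#   TZ=UTC git log -1 --date=format-local:'%Y%m%d%H%M%S' --format='%cd'
#   git log -1 --format='%H'
pseudo_version() {
  # $1: base version, $2: timestamp, $3: full SHA (truncated to 12 chars)
  printf '%s-%s-%s\n' "$1" "$2" "$(printf '%s' "$3" | cut -c1-12)"
}
```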

@BenTheElder
Contributor

This looks reasonable to me; to officially implement this we would want to create a new "nightlies" repo and copy in your travis config from your example releases repo. If we do that, is that sufficient for the use case @BenTheElder has? For example, it won't provide an (easy) way to get official release builds (like 1.3.0 GA for multiple architectures)

I think that's great!

If we decide we really need a GA version we could also invoke the build against a GA version on alternate (kubernetes project?) infrastructure, but hopefully without re-implementing much / purely doing this downstream.

we can establish a retention policy to keep nightlies since the last stable release; it's just a matter of adding more logic to the script, and this will allow users to pin to the nightly commit that was released as GA.
Right now the ball has started rolling, but maybe it is feasible to do a nightly release "in the past" with the 1.3.0 GA commit; this way we can obtain those multi-arch binaries.

We can also host a copy (EG in our node base image) if necessary to avoid concerns about how long these stay up.

Just a small suggestion/idea; for versioning the nightly builds, it might be an idea to use Golang's "pseudo-version", which is both valid SemVer, and captures all the information (commit date/time, and commit-sha); see docker/docker-ce-packaging#125

+1

@aojea
Contributor Author

aojea commented Oct 3, 2019

Just a small suggestion/idea; for versioning the nightly builds, it might be an idea to use Golang's "pseudo-version", which is both valid SemVer, and captures all the information (commit date/time, and commit-sha); see docker/docker-ce-packaging#125

I've updated the script to use Golang's pseudo-version; feel free to use it and improve it :-)

7 participants