Makefile: build proto files unconditionally #229
Conversation
(force-pushed cc8b948 to 840defb)
@marquiz Is this a workaround for a problem with the local development workflow, or a workaround for the occasionally recurring "protoc not found" problem in our GitHub CI?
This is a workaround (or, I would say, a fix) for the local development workflow, doing …
(force-pushed 840defb to 0ebefc6)
ping @klihub
@marquiz Would you be okay with a milder version of this, for instance something like klihub@486e315? That allows forcing a rebuild of the protobufs, and it always forces one in the dockerized rebuild, so in your workflow it will always rebuild.
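For context, the "milder" approach could look roughly like the following sketch (the names here are illustrative guesses, not taken from klihub@486e315): an optional phony prerequisite that makes the pattern-rule targets unconditionally stale when a flag is set.

```makefile
# Hypothetical sketch of a forced-rebuild knob; FORCE_PROTO, PROTOC and
# proto-force are assumed names, not taken from the actual commit.
ifneq ($(FORCE_PROTO),)
    proto-prereq := proto-force
endif

# Depending on a phony target makes every %.pb.go unconditionally stale.
%.pb.go: %.proto $(proto-prereq)
	$(PROTOC) --go_out=paths=source_relative:. $<

.PHONY: proto-force
proto-force:
```

A plain `make` then keeps the timestamp-based behaviour, while `make FORCE_PROTO=1` (and the dockerized rebuild, which would always set the flag) regenerates everything.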
I think that would work. TBH, I just don't get the insistence on these pattern rules that fundamentally don't work in this setup (when both the sources and the targets are stored in git). I kind of get it from a philosophical "this is the way make was designed to be used" point of view, but if it doesn't work, why bother? The amount of wasted build time should certainly not be an issue; I'd guess the rule doesn't get run very often even globally.
Both the sources and the targets are stored in git, but they are supposed to be committed consistently: if you change the protobuf definition, you should recompile and commit the compiled versions. Now, I understand that right after a fresh clone the timestamps might be off, but IMO (and in my experience) that should only cause unnecessary recompilation, not the other way around (since the protobuf definition and the compiled protobuf files should always be committed consistently in the repo). I have seen this happen occasionally in CI. But I don't remember ever running into a situation in my local workflow where I would have changed the protobuf definition, yet it would not have been recompiled. And to be completely honest, it might very well be that it has happened to me and I just shrugged it off with a …
@marquiz That's not my reason/primary concern here. I'm more wary of folks having random, different, and newer versions of protoc et al. installed natively instead of the (fairly old) version we use (in sync with containerd). That then guarantees with 100% probability that a recompile will not produce bit-for-bit identical output, so it will result in false changes. If folks are then not extra careful and commit all their changes and file PRs from that, we need to point out that parts of the PR are not real changes and should be reverted. We have seen this already in the past, even without a forced recompile by …
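One way to make that failure mode loud instead of silent, purely as a sketch (PROTOC_PINNED and check-protoc are invented names, not part of this PR), is to refuse native regeneration when the installed protoc does not match the pinned version:

```makefile
# Sketch only: fail fast if the native protoc drifts from the pinned
# version (the version actually pinned in the repo may differ).
PROTOC_PINNED := 3.20.1

.PHONY: check-protoc
check-protoc:
	@v=$$(protoc --version | awk '{print $$2}'); \
	if [ "$$v" != "$(PROTOC_PINNED)" ]; then \
	    echo "protoc $$v installed, but $(PROTOC_PINNED) is required"; \
	    exit 1; \
	fi
```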
In that case it doesn't matter how you commit, as git will mangle the timestamps anyway. This is particularly annoying and bites you when you have feature branches that you need to rebase (and, e.g., happen to have conflicts).
I think this is not the correct way to try to prevent that. If this is a concern, we should change the all:/build: targets to depend on build-proto-containerized to get reproducible builds.
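Spelled out, that suggestion is roughly the following (the all:/build: and build-proto-containerized names come from the comment above; how the helper target is wired up is an assumption):

```makefile
# Every top-level build regenerates the protobufs inside a container
# with the pinned protoc, so natively installed versions cannot leak in.
all build: build-proto-containerized

.PHONY: build-proto-containerized
build-proto-containerized: build-proto-dockerized
```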
If you have a conflict in a compiled protobuf file and you try to resolve it manually, you are really doing something wrong. The right way to resolve such a conflict is to resolve any conflicts you might have in the .proto source, remove all generated pb.go files with conflicts, then run make and let it regenerate the missing compiled proto files (and all the others that come from that same compilation).
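As a concrete example, that flow during a rebase would look roughly like this (the paths are hypothetical):

```sh
# Resolve the conflict in the source, never in the generated code.
$EDITOR pkg/api/api.proto    # fix the conflict markers by hand
rm pkg/api/*.pb.go           # drop the conflicted generated files
make                         # let make regenerate them
git add pkg/api
git rebase --continue
```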
@marquiz Well, this is also true. And if I am the only one who thinks that unconditionally regenerating and recompiling everything in the local development workflow is not the right way to go, then I realize I should simply give up and accept it. And if we go with that, then I agree that a dockerized proto build is friendlier.
I beg to differ here. I don't want (and shouldn't need) to manually resolve or remove anything. I resolve conflicts in the .proto file (if any), run "regenerate auto-generated stuff", and it should just work.
@marquiz Well, if in #232 we get to avoid dockerized proto builds, then I'm fine with this and with always rebuilding.
Ditch the Makefile pattern rule, which simply doesn't work in scenarios where both the sources and the build targets are stored in the git repo (as git operations mangle the timestamps). So much grey hair avoided when the stuff just force-generates everything instead of leaving you wondering why the tools didn't work as expected.
Signed-off-by: Markus Lehtonen <markus.lehtonen@intel.com>
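In effect, the commit replaces the timestamp-driven pattern rule with an unconditional generation step, roughly like this (a sketch of the idea, not the literal diff; PROTOC and PROTO_SOURCES are assumed names):

```makefile
# Before: regenerate only when make thinks the .proto is newer -- a
# check that git-mangled timestamps make unreliable:
#
#   %.pb.go: %.proto
#           $(PROTOC) --go_out=paths=source_relative:. $<
#
# After: always regenerate; correctness no longer hinges on timestamps.
.PHONY: build-proto
build-proto:
	$(PROTOC) --go_out=paths=source_relative:. $(PROTO_SOURCES)
```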
(force-pushed 0ebefc6 to 9623748)
(force-pushed 5db759b to f880e1c)
Sure, done. I don't think the dockerized build even worked anymore, as we mounted the source tree read-only (ro) inside the build...
mikebrow left a comment:
LGTM
```makefile
build-proto-dockerized:
	$(Q)docker build --build-arg ARTIFACTS="$(dir $(PROTO_GOFILES))" --target final \
		--output type=local,dest=$(RESOLVED_PWD) \
		-f hack/Dockerfile.buildproto .
	$(Q)tar xf artifacts.tgz && rm -f artifacts.tgz
```
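For readers skimming the diff: the target builds only the final stage of hack/Dockerfile.buildproto, exports that stage's filesystem back into the working tree through BuildKit's local output, and then unpacks the resulting artifacts tarball in place ($(Q) is presumably the usual quiet-mode prefix used elsewhere in the Makefile).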
Shouldn't we keep a target to allow building on non-Linux machines? I don't think the native target works on macOS currently:
Lines 33 to 51 in 25d9391
```sh
arm64)
    wget -O "$PROTOBUF_DIR/protobuf" "https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOBUF_VERSION/protoc-$PROTOBUF_VERSION-linux-aarch_64.zip"
    unzip "$PROTOBUF_DIR/protobuf" -d "$INSTALL_DIR"
    ;;
amd64|386)
    if [ "$GOOS" = windows ]; then
        wget -O "$PROTOBUF_DIR/protobuf" "https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOBUF_VERSION/protoc-$PROTOBUF_VERSION-win32.zip"
    elif [ "$GOOS" = linux ]; then
        wget -O "$PROTOBUF_DIR/protobuf" "https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOBUF_VERSION/protoc-$PROTOBUF_VERSION-linux-x86_64.zip"
    fi
    unzip -o "$PROTOBUF_DIR/protobuf" -d "$INSTALL_DIR"
    ;;
ppc64le)
    wget -O "$PROTOBUF_DIR/protobuf" "https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOBUF_VERSION/protoc-$PROTOBUF_VERSION-linux-ppcle_64.zip"
    unzip -o "$PROTOBUF_DIR/protobuf" -d "$INSTALL_DIR"
    ;;
*)
```
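The excerpt only downloads Windows and Linux archives, which is presumably why the native target fails on macOS. Upstream protoc releases do ship osx-x86_64 and osx-aarch_64 archives, so an untested sketch of a fix, following the pattern of the existing cases, could be:

```sh
# Untested sketch: handle darwin alongside the existing GOOS checks
# ($GOARCH is an assumption about the script's architecture variable).
if [ "$GOOS" = darwin ]; then
    case "$GOARCH" in
        arm64) suffix="osx-aarch_64" ;;
        *)     suffix="osx-x86_64" ;;
    esac
    wget -O "$PROTOBUF_DIR/protobuf" "https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOBUF_VERSION/protoc-$PROTOBUF_VERSION-$suffix.zip"
    unzip -o "$PROTOBUF_DIR/protobuf" -d "$INSTALL_DIR"
fi
```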
Mm, true. Maybe we should keep it. I just need to fix it (I think). WDYT @klihub?
Sorry, my comment was a bit short (it was late 😂)
Yeah, these are tricky; I think the changes to install the tools locally are at least a good step; there are various projects that require you either to "have the tools installed" already, or to "figure out how to install" them, which may result in having to install different (system-wide) versions of the tools, which is a big pain if you happen to be jumping between various such projects for contributions 😅
Then again, depending on the situation, having a containerized option available is good as well, as sometimes the (versions of) tools to install can differ between branches of the same project, so having an option to generate code without tools lingering around on your system is really great to have.
(force-pushed f880e1c to 5c7788e)
Updated. Did what @thaJeztah suggested: fixed the dockerized proto build instead of dropping it. Not too bad on my laptop (when protoc has been cached):

```console
$ find pkg/ -name '*pb.go' | xargs rm
$ time make build-proto-dockerized
[+] Building 8.5s (10/10) FINISHED                                 docker:default
 => [internal] load build definition from Dockerfile.buildproto             0.0s
 => => transferring dockerfile: 1.17kB                                      0.0s
 => [internal] load metadata for docker.io/library/golang:1.24-bookworm     0.2s
 => [internal] load .dockerignore                                           0.0s
 => => transferring context: 47B                                            0.0s
 => [builder 1/4] FROM docker.io/library/golang:1.24-bookworm@sha256:b8bae5bd9ba9b1f89b635c91c24cc75cea335a16  0.0s
 => [internal] load build context                                           0.1s
 => => transferring context: 130.40kB                                       0.1s
 => CACHED [builder 2/4] WORKDIR /go/src                                    0.0s
 => CACHED [builder 3/4] RUN apt-get update && apt-get install -y unzip     0.0s
 => [builder 4/4] RUN --mount=type=cache,target=/go/pkg/mod/ --mount=type=cache,target=/go/tools/,sharing  7.4s
 => [final 1/1] COPY --from=builder /artifacts.tgz .                        0.0s
 => exporting to client directory                                           0.0s
 => => copying files 69.99kB                                                0.0s

real	0m8.768s
user	0m0.240s
sys	0m0.173s
```
Signed-off-by: Markus Lehtonen <markus.lehtonen@intel.com>
(force-pushed 5c7788e to d99f960)
Ditch the Makefile pattern rule, which simply doesn't work in scenarios where both the sources and the build targets are stored in the git repo (as git operations mangle the timestamps).
So much grey hair avoided when the stuff just force-generates everything instead of leaving you wondering why the tools didn't work as expected.