Mark integration / e2e test files with a build tag #4565
SgtCoDFish wants to merge 1 commit into cert-manager:master from
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: SgtCoDFish. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing …
I think prow makes the test output confusing here. Taking an example:
Bazel does run the test locally. Yet in the prow overview the test isn't listed, while in the logs for this PR the test is shown as running. So the test is run, but isn't reported in the JUnit section on the prow UI.

I'm going to push to this PR with the following patch to confirm that:

```diff
diff --git a/internal/apis/acme/install/pruning_test.go b/internal/apis/acme/install/pruning_test.go
index 32d1a1984..b0c493857 100644
--- a/internal/apis/acme/install/pruning_test.go
+++ b/internal/apis/acme/install/pruning_test.go
@@ -31,4 +31,5 @@ import (
 func TestPruneTypes(t *testing.T) {
 	crdfuzz.SchemaFuzzTestForCRDWithPath(t, api.Scheme, apitesting.PathForCRD(t, "orders"), acmefuzzer.Funcs)
 	crdfuzz.SchemaFuzzTestForCRDWithPath(t, api.Scheme, apitesting.PathForCRD(t, "challenges"), acmefuzzer.Funcs)
+	panic("blork")
 }
```

If the bazel tests are run but not reported by prow, that seems acceptable to me given the bazel tests are on their deathbed. Alternatively it might be an easy fix to make them appear in the prow overview.
As expected, the test failure isn't shown in the prow overview (which reports 91/91 tests passed), but the test was run, which is visible at the bottom of the logs.

I'm fine with this, personally. I'll remove the test commit and remove the hold.
|
/test all
integration and e2e tests are broadly defined as tests which require some kind of _external_ setup before they can successfully execute, differentiated by the amount of setup required.

Signed-off-by: Ashley Davis <ashley.davis@jetstack.io>
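The build-tag approach this commit takes can be sketched with a minimal, hypothetical example (the `tagdemo` module, paths, tag name, and test name below are all illustrative, not taken from this PR): a test file guarded by a build tag is invisible to a plain `go test` and only compiles when the tag is supplied.

```shell
# Hypothetical sketch: a test file guarded by an "integration" build tag.
mkdir -p /tmp/tagdemo && cd /tmp/tagdemo
go mod init tagdemo >/dev/null 2>&1

# An untagged file so the package always builds.
cat > doc.go <<'EOF'
// Package tagdemo demonstrates build-tag-guarded tests.
package tagdemo
EOF

cat > integration_test.go <<'EOF'
//go:build integration

package tagdemo

import "testing"

// TestNeedsCluster stands in for a test requiring external setup.
func TestNeedsCluster(t *testing.T) {}
EOF

go test -v ./...                    # tag unset: the tagged file is excluded
go test -tags=integration -v ./...  # tag set: TestNeedsCluster is compiled and run
```

The key property is that exclusion happens at compile time, so a test with unbuildable dependencies never breaks the default `go test ./...` run.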
Thanks for all the great work on the make migration! Also, awesome PR description & comments ❤️
I am not sure what the generally preferred approach in the Go community is here.

In this case, the obvious risk seems to be forgetting to add a tag to a Bazel rule, and this will go away once we remove Bazel. But I would argue that it may be preferable to have the same approach everywhere and fall back to running all tests unless explicitly specified otherwise.

At the moment we already have a mix of approaches: for example, Venafi tests are skipped in most CI runs by passing a flag from the test config, but there is also some code within the tests themselves that skips them if a certain env var is not found. That introduces a risk, as we could change the test pod config and accidentally remove the env var, and it is not obvious to a person running the tests locally that they won't be run.

I had a brief look at alternative approaches and saw that it is possible to negate the build tag (i.e. run everything by default and opt out only when a tag is explicitly supplied).

Apologies if some of this was already discussed earlier!
I agree with what you're saying, broadly speaking. I'd love to be in a place where … My main concern ATM is that running …

Plus, as far as I know, build tags are the only approach that works everywhere, because of pkg/util/versionchecker/versionchecker_test.go: without prior setup it won't even compile properly. Admittedly though, that's the only test with that issue.

My view is pretty much:

[1] probably except the e2e tests
Discussed this in standup; consensus was to move all integration tests into …
Ah, I missed this conversation previously! 😄
I think this is an issue with both approaches though - moving into …

The drawback of not using tags is that …

It also means that we are able to store our test files wherever we want, alongside the source code they are testing, like normal (which plays nicer with hierarchical OWNERS files too).
This was initially part of cert-manager#4565, which was closed in favour of moving integration tests, but the consensus was that the e2e test is a special case.

The e2e test requires so much more ahead-of-time setup that our bazel build flow special-cases it by marking it manual. This is a `go test` equivalent to that, which enables the e2e test to remain under the test/ directory while still allowing `go test ./test/...` to work generally for all other tests.

We'll add make targets for the e2e tests down the road. For now, we add the build tag and define it in bazel, so this commit should be a no-op in effect.

Signed-off-by: Ashley Davis <ashley.davis@jetstack.io>
Notes for Reviewer
See my comments below about prow's reporting of tests.
This PR should be a no-op for all of our bazel-based tests. They should still be run in exactly the same way. Verifying that they do run the same way is IMO the most important part of reviewing this PR.
Overview
This PR adds `go:build` tags to every test I can find which has an external dependency - i.e. one which breaks or fails on a clean checkout of the repo when running `go test ./...` - or else everything under `test/integration` and `test/e2e`. The e2e tests are tagged differently.

This will enable us to use `go test` for all types of tests. Currently `go test` will panic due to missing dependencies if run without bazel magic happening.

This is part of the migration to make.
Unrelated notes, might be of interest but not needed for reviewing this PR
Mostly for interest: the following is useful for analysing test runtimes, and will list in `tests.txt` all tests ordered by time taken in seconds. Note that the output varies based on how the tests are specified.
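The exact command was elided above; one possible version (not necessarily the author's) builds on `go test -json` output piped through jq. Sample events stand in for real `go test` output here so the pipeline is runnable on its own.

```shell
# Hypothetical sketch: sort test events by per-test elapsed time.
# The three JSON lines simulate `go test -json ./...` output.
printf '%s\n' \
  '{"Action":"pass","Test":"TestFast","Elapsed":0.01}' \
  '{"Action":"pass","Test":"TestSlow","Elapsed":2.5}' \
  '{"Action":"pass","Elapsed":2.51}' \
  | jq -r 'select(.Test != null and .Action == "pass") | "\(.Elapsed) \(.Test)"' \
  | sort -rn > /tmp/tests.txt
cat /tmp/tests.txt

# In real use, replace the printf with:
#   go test -json ./... | jq -r '...' | sort -rn > tests.txt
```

The `select` drops package-level events (which have no `Test` field), so only individual tests are listed.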
/kind cleanup