In Kubernetes, we would like to have:
- an overall timeout for the entire suite (-ginkgo.timeout, with a very high value because the suite is large)
- a per-spec timeout (???, smaller, to avoid blocking for the entire -ginkgo.timeout duration on a single spec)
- timeouts for AfterEach specs (we try to clean up, but cleaning up itself may get stuck)
So far I have only found -ginkgo.timeout. One downside of it is that it also aborts cleanup operations as soon as those block, which is not useful for Kubernetes because the cleanup operation may involve communication with the apiserver to remove objects.
I tried with this:
var _ = ginkgo.Describe("test", func() {
	ginkgo.BeforeEach(func() {
		fmt.Fprint(ginkgo.GinkgoWriter, "hello\n")
	})
	ginkgo.AfterEach(func() {
		defer fmt.Fprint(ginkgo.GinkgoWriter, "done\n")
		fmt.Fprint(ginkgo.GinkgoWriter, "world\n")
		time.Sleep(time.Hour)
	})
	ginkgo.It("times out", func() {
		time.Sleep(time.Hour)
	})
})
When I run with -ginkgo.timeout=10s -ginkgo.v -ginkgo.progress, I get:
test
times out
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:177
[BeforeEach] test
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:167
hello
[It] times out
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:177
[AfterEach] test
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:171
world
------------------------------
•! [INTERRUPTED] [10.933 seconds]
test
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:166
[It] times out
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:177
Begin Captured GinkgoWriter Output >>
[BeforeEach] test
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:167
hello
[It] times out
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:177
[AfterEach] test
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/e2e.go:171
world
<< End Captured GinkgoWriter Output
Interrupted by Timeout
Here's a stack trace of all running goroutines:
...
Note that the cleanup spec didn't run its defer: "done" never appears in the output.
We could provide per-spec timeouts via ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute). We then need to be careful that the cleanup spec doesn't use the same context, because it wouldn't get any work done after a timeout. The downside of this is that we would have to touch a lot of code in Kubernetes, which is always a daunting prospect.
IMHO it would be simpler to have a default -ginkgo.it-timeout (for It specs), a default -ginkgo.after-timeout (for AfterEach), and perhaps a Timeout(time.Duration) decorator to override those defaults.
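With such a decorator, the example above might look like this (hypothetical API, not something Ginkgo offers today):

```
ginkgo.It("times out", ginkgo.Timeout(5*time.Minute), func() {
	// would be aborted by Ginkgo after 5 minutes instead of
	// blocking until the suite-wide -ginkgo.timeout fires
	time.Sleep(time.Hour)
})
```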