This repository was archived by the owner on Jun 28, 2024. It is now read-only.

ci: increase timeout in unit tests#226

Closed
devimc wants to merge 1 commit into kata-containers:master from devimc:ci/increaseTimeout

Conversation

@devimc

@devimc devimc commented Apr 9, 2018

Increase the timeout in `go test` to avoid panics while running the
virtcontainers unit tests.
The panic occurs randomly and can be reproduced easily on systems running
low on resources. This patch sets the timeout to 30 seconds; after
running all unit tests N times with the new timeout, the issue
could not be reproduced.

fixes #225

Signed-off-by: Julio Montes julio.montes@intel.com

@GabyCT
Contributor

GabyCT commented Apr 9, 2018

lgtm

Approved with PullApprove

@bergwolf
Member

lgtm.

@devimc any reason for the do-not-merge label?

Contributor

@grahamwhaley grahamwhaley left a comment


Can you add some details over on #225 about why we need to bump it over 10s? That is quite some time - what is taking/waiting >10s to complete?
Without any details it almost feels like we would continue to bump the timeout until the tests all passed ;-)

Increase the timeout in `go test` to avoid panics while running the
virtcontainers unit tests.
The panic occurs randomly and can be reproduced easily on systems running
low on resources. This patch sets the timeout to 30 seconds; after
running all unit tests N times with the new timeout, the issue
could not be reproduced.

fixes kata-containers#225

Signed-off-by: Julio Montes <julio.montes@intel.com>
@devimc devimc force-pushed the ci/increaseTimeout branch from 3552312 to 97fe3a4 on April 10, 2018 13:33
@devimc
Author

devimc commented Apr 10, 2018

@grahamwhaley

what is taking/waiting >10s to complete?

no idea, feel free to debug it

PR and issue's comment updated

@grahamwhaley
Contributor

So @devimc, this happens on VMs with 'low resources' - do we think we are running our VMs at their limits, then? And if so, maybe the correct solution is to bump the VM size etc.?

Fixing an issue without being able to show what was fixed and why generally, IMHO, means 'no idea' == 'NACK'... I'll leave the call to others.

@devimc
Author

devimc commented Apr 10, 2018

@grahamwhaley ok, closing PR to let "others" fix the issue

@devimc devimc closed this Apr 10, 2018
@devimc devimc removed the review label Apr 10, 2018
@grahamwhaley
Contributor

?
I didn't say you should close it - I was leaving it to 'others' to chip in and say if they thought the fix was good enough and appropriate?
@sboeuf @egernst @chavafg - what do you think? (and feel free to re-open if it's good enough etc.)

@chavafg
Contributor

chavafg commented Apr 10, 2018

I think this is still needed (at least temporarily) to unblock the CI; I see that some kata-containers/runtime PR jobs are being hit because 10s is not enough.
The current VM size is 4 vCPUs and 16 GB of RAM, which I think should be enough. Also, I think the issue is only seen on the Ubuntu 16.04 PR jobs, so this seems more like a race condition?

@bergwolf
Member

@devimc Can we re-open this PR since it happens again sometimes? I don't see what harm it can bring us by just increasing the timeout from 10s to 30s.

@sboeuf

sboeuf commented Apr 12, 2018

@bergwolf I'll try to investigate this today, but basically we would like to understand what's happening before forcibly bumping the timeout.

@devimc
Author

devimc commented Apr 13, 2018

@sboeuf any update on this?

@sboeuf

sboeuf commented Apr 13, 2018

@devimc didn't have enough time yesterday. Looking at it right now.

@devimc devimc deleted the ci/increaseTimeout branch May 28, 2018 18:44
sameo pushed a commit to sameo/runtime-1 that referenced this pull request Dec 5, 2018
In order to make log-parser happy, mockcontainer must always return
a valid process with a fake PID, since log-parser checks
the PID value in the logs and it must be different from zero.

Depends-on: github.com/kata-containers/tests#226

Signed-off-by: Julio Montes <julio.montes@intel.com>
jcvenegas added a commit to jcvenegas/kata-containers that referenced this pull request Mar 24, 2021
Split the qemu script to build experimental qemu using the
same dockerfile.

Fixes: kata-containers#3255

Depends-on: github.com/kata-containers/tests#226

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>


Development

Successfully merging this pull request may close these issues.

panic: test timed out after 10s

6 participants