Use case: I want to analyze my test results over time to find and address flaky tests. My test results are uploaded as JUnit XML to a data-analysis tool like Kibana.
Problem: Pytest's JUnit support does not capture retries. Only the last run of a test is recorded, and no properties are added to indicate that retries occurred.
Example JUnit:
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite errors="0" failures="1" hostname="epage-2328.local" name="pytest" skipped="0" tests="1" time="0.057" timestamp="2019-11-07T08:24:07.922770">
    <testcase classname="tests.test_foo" file="tests/test_foo.py" line="2" name="test_foo" time="0.003">
      <failure message="assert False">@pytest.mark.flaky(reruns=5)
def test_foo():
>       assert False
E       assert False

tests/test_foo.py:5: AssertionError</failure>
    </testcase>
  </testsuite>
</testsuites>
Suggested fix: Report execution_count using the record_property fixture so we know how many attempts were made.
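A minimal sketch of what the fix could look like from a conftest.py, assuming the rerun plugin stores the attempt number on the item as item.execution_count (an assumption; verify against the plugin version you use). Appending to report.user_properties is what makes the junitxml writer emit a property element on the test case.

```python
# conftest.py -- sketch, not the official fix.
import pytest


def attempts_for(item):
    """Number of attempts made for this item; pytest-rerunfailures is
    assumed to store it as `item.execution_count` (verify for your
    plugin version). Defaults to 1 when no rerun plugin is active."""
    return getattr(item, "execution_count", 1)


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Copy the attempt count into user_properties so the junitxml
    # writer emits <property name="execution_count" value="N"/>.
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        report.user_properties.append(("execution_count", attempts_for(item)))
```

With this in place, each testcase element in the JUnit XML carries the attempt count, so the downstream tool can filter on it.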
Workarounds:
- --result-log (deprecated): create a parser for the result-log output and merge in its results
- Wait until pytest 5.3 is released and use the new --report-log
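For the --report-log workaround, a sketch of the merge step: --report-log writes one JSON object per line, and counting the TestReport entries for the call phase per nodeid approximates the number of attempts. The field names below ($report_type, nodeid, when) match the report-log format as I understand it; verify against your pytest version.

```python
# Sketch: derive per-test attempt counts from a --report-log file.
import json
from collections import Counter


def attempts_from_report_log(lines):
    """Map nodeid -> number of call-phase reports (i.e. attempts)."""
    counts = Counter()
    for line in lines:
        entry = json.loads(line)
        if entry.get("$report_type") == "TestReport" and entry.get("when") == "call":
            counts[entry["nodeid"]] += 1
    return counts


# Hypothetical sample: one rerun followed by a final failure.
SAMPLE = [
    '{"$report_type": "TestReport", "nodeid": "tests/test_foo.py::test_foo", "when": "call", "outcome": "rerun"}',
    '{"$report_type": "TestReport", "nodeid": "tests/test_foo.py::test_foo", "when": "call", "outcome": "failed"}',
]
print(attempts_from_report_log(SAMPLE))
```

The resulting counts can then be merged into the JUnit-derived records before upload, since both identify tests by the same nodeid/classname pair.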