Description
We've lately added quite a few xfail tests as a sort of "to do", but I realized that this may make the codecov metrics less meaningful than one might hope. The reason is that xfail tests can fail for any reason, including going down some random buggy code path whose proper behavior isn't actually covered, and that buggy path hitting a real bug we should be aware of. As such, I'd rather code executed as the result of xfail tests not contribute to the code coverage. (They should still be run, because I'd like to know if they turn from xfail to xpass.)
If there's not just some standard pytest-coverage or coverage option that does this, I think it's simple enough to set up by running pytest twice: once with all xfail tests skipped and coverage on, then a second time with coverage off and only the xfail tests included (see the sketch below).
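
In case it does need custom plumbing, here is a minimal sketch of one way to do the first run, assuming the tests are marked with the @pytest.mark.xfail decorator so they are visible at collection time. The `--skip-xfail` option name is just a placeholder, not an existing pytest flag:

```python
# conftest.py -- sketch of deselecting xfail-marked tests for the coverage run.

import pytest


def pytest_addoption(parser):
    # Hypothetical option used only for the coverage run.
    parser.addoption(
        "--skip-xfail",
        action="store_true",
        default=False,
        help="Deselect tests marked xfail (use for the coverage run).",
    )


def pytest_collection_modifyitems(config, items):
    # When --skip-xfail is given, drop every test carrying an xfail marker
    # so code they execute does not count toward coverage.
    if config.getoption("--skip-xfail"):
        deselected = [item for item in items if item.get_closest_marker("xfail")]
        items[:] = [item for item in items if not item.get_closest_marker("xfail")]
        config.hook.pytest_deselected(items=deselected)
```

The second run would then be a plain `pytest -m xfail` (no coverage flags) so we still see whether anything turns from xfail to xpass; selecting by `-m xfail` only works for tests marked via the decorator, so dynamically-applied xfails would need separate handling.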