refactor: convert rerun tests to use fixtures #1756
Conversation
features/rerun.feature (comment on an outdated diff)
Given I run "behat features/apples.feature"
And I copy "features/apples-fixed.feature" to "features/apples.feature"
When I run "behat features/apples.feature"
I think perhaps these initial two "I run" steps should check the result and output (even just the "# scenarios (# failed)" line) to prove that the scenarios ran and failed as expected prior to the final assertion?
Otherwise, for example, we don't know for sure that the "fixed" apples feature actually fixed the scenario, rather than just removing it, or moving it to a different line number so it doesn't match the rerun cache.
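Concretely, the check being suggested might look something like the following sketch (the assertion step wording and the scenario counts are assumptions for illustration, not the PR's actual content):

```gherkin
Given I run "behat features/apples.feature"
Then it should fail with:
  """
  3 scenarios (2 failed)
  """
When I copy "features/apples-fixed.feature" to "features/apples.feature"
And I run "behat features/apples.feature"
Then it should fail with:
  """
  3 scenarios (1 failed)
  """
```

Asserting on the summary line after each run proves both that the first run failed as expected and that the copied file genuinely fixed (rather than removed or relocated) a scenario.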
Yeah, makes sense, updated
Thanks - this scenario still doesn't quite make sense to me.
- We run Behat once at the very beginning of the scenario, which fails (presumably with 2 failing the same as line 13).
- Then we copy over the file which only changes one example.
- Then we run without --rerun, and we see that now it fails on one scenario (but runs all the others).
- And then we run with rerun and see that it only re-runs the failed scenario.
So the only thing we're really proving is that --rerun only re-runs the failed example - but we've already proved that in the "Rerun only failed scenarios" above?
I wonder if this scenario is instead meant to prove:
- The first run runs all scenarios, 2 fail, and a rerun file is generated
- Then fix one failure and run with --rerun: now it only runs the 2 that failed the first time, one passes, and it overwrites the rerun file to remove it
- Then run with --rerun again and now it only runs the scenario that is still failing.
Unless I'm missing something?
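As a sketch, the scenario proposed above could read roughly as follows (step wording, file names, and counts are assumptions for illustration, not the PR's final version):

```gherkin
Scenario: Fixing one failure between reruns updates the rerun cache
  # First run: all scenarios execute, two fail, a rerun file is written
  Given I run "behat features/apples.feature"
  # Fix one of the two failing examples
  When I copy "features/apples-fixed.feature" to "features/apples.feature"
  # Second run with --rerun: only the two previous failures run,
  # one now passes, and the rerun file is rewritten to drop it
  And I run "behat --rerun features/apples.feature"
  # Third run with --rerun: only the still-failing scenario runs
  And I run "behat --rerun features/apples.feature"
```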
Makes sense, updated
Force-pushed from 13823c8 to 7cc6b22
Force-pushed from 1710467 to a314409
acoulton left a comment
Thanks @carlos-granados
- Add file copy step to FeatureContext with touch() to bypass cache
- Move rerun test files to tests/Fixtures/Rerun/
- Update rerun.feature to use fixture initialization
- Move common options to Background
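The "copy with touch() to bypass cache" step described above might be sketched like this (a hypothetical illustration, not the PR's actual code; the method name and mtime offset are assumptions):

```php
<?php

use Behat\Behat\Context\Context;

class FeatureContext implements Context
{
    /**
     * @Given /^I copy "([^"]+)" to "([^"]+)"$/
     */
    public function iCopyFileTo(string $source, string $destination): void
    {
        copy($source, $destination);
        // Bump the mtime one second into the future so that an overwrite
        // happening within the same second still registers as a change
        // to any cache keyed on modification time.
        touch($destination, time() + 1);
        // Also clear PHP's own stat cache for this path.
        clearstatcache(true, $destination);
    }
}
```

Without the touch() call, copying a fixture over the target in the same second as the previous write can leave an identical mtime, so a cached parse of the old file may be reused.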
Force-pushed from a314409 to b5bd19f
No description provided.