Conversation

@carlos-granados
Contributor

In this case we re-used the fixtures we had for the rerun.feature tests, as they were almost identical; we just added a new feature because the behaviour when fixing the tests was slightly different.

Comment on lines 13 to 17:

    Scenario: Run one feature with 2 failed and 3 passing scenarios
      When I run "behat --no-colors -f progress features/apples.feature"
      When I run "behat features/apples.feature"
      Then it should fail with:
        """
        ..F.............F....

        7 steps (5 passed, 2 failed)
        """

Contributor

@carlos-granados sorry to keep picking on these rerun features.

I think the process you're working through is very helpfully showing where the noise of the setup in the past has made it hard to reason about what the features are actually covering.

I've realised this first scenario is IMO not a scenario at all. It is not testing anything about the rerun feature, only that the fixture we have created fails as expected.

This is actually a precondition of the later scenarios (where we start with the same `Given I run "behat features/apples.feature"` but don't assert anything about the result).

The later scenarios assume that if we've run the same steps to set up the fixture, it will fail in the same way as this scenario did - but this is not actually guaranteed (there could have been some filesystem issue, for example).

I think the content of this scenario should either be inlined as the first steps of the actual scenarios, or moved to a Background if it is consistent for all scenarios (it isn't in this case).

This would also apply to the feature we looked at the other day, and to #1766

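The inlining suggestion above could look something like the following Gherkin sketch. This is illustrative only, not the fixture from this PR: the rerun scenario name, the use of the `--rerun` option, and the step wording are assumptions based on the quoted snippet.

```gherkin
# Illustrative sketch only: the scenario name, steps and expected
# output below are assumptions, not the actual fixture from this PR.
Scenario: Rerun executes only the previously failed scenarios
  # Precondition inlined with its own assertion, so the scenario
  # does not silently depend on the fixture failing as expected
  When I run "behat --no-colors -f progress features/apples.feature"
  Then it should fail with:
    """
    ..F.............F....

    7 steps (5 passed, 2 failed)
    """
  # Now the behaviour actually under test: rerun only the failures
  When I run "behat --no-colors -f progress --rerun features/apples.feature"
  Then it should fail
```

Inlining the assertion this way means each scenario proves its own precondition held, rather than trusting that a sibling scenario's run behaved identically.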
Scenario: Fixing scenario removes it from the rerun log
Contributor

I think this scenario isn't named quite right for what it covers. Something like "Run nothing if there were no previous failures" would make more sense?

Contributor

Or maybe this is actually two separate scenarios:

  • If Behat has never run before, nothing runs (even with the unchanged apples.feature - Behat passes because that failing scenario is never executed).
  • If Behat has run & failed, then was fixed, then --rerun-only passed, then the next run does not run any scenarios.
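The split proposed above might be sketched as two separate Gherkin scenarios. These are illustrative assumptions only: the scenario names, the "I fix …" helper step, and the "No scenarios" output are guesses for the sake of the example, not the scenarios that were actually merged.

```gherkin
# Illustrative sketch only: names, steps and outputs are assumptions.
Scenario: Run nothing if Behat has never run before
  When I run "behat --rerun-only features/apples.feature"
  Then it should pass with:
    """
    No scenarios
    """

Scenario: Run nothing if all previously failing scenarios were fixed
  Given I run "behat features/apples.feature"
  # Hypothetical helper step standing in for editing the fixture
  When I fix the failing scenarios in "features/apples.feature"
  And I run "behat --rerun-only features/apples.feature"
  Then it should pass with:
    """
    No scenarios
    """
```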

Contributor Author

Agreed, this is better defined as two separate scenarios; I have updated the code.

@carlos-granados force-pushed the refactor-rerun-only-tests branch from 21c2836 to a26568d on December 9, 2025 at 13:36
Contributor

@acoulton left a comment

Thanks @carlos-granados, that's a great improvement.

@carlos-granados merged commit 68e2242 into Behat:3.x on December 9, 2025
22 checks passed
@carlos-granados deleted the refactor-rerun-only-tests branch on December 9, 2025 at 14:04