
[ty] Avoid the mandatory "ecosystem-analyzer workflow run cancelled" notification every time you make a PR#23695

Merged
AlexWaygood merged 1 commit into main from claude/fix-ecosystem-analyzer-oDXv7 on Mar 3, 2026

Conversation

@AlexWaygood
Member

Summary

Currently I get an "ecosystem-analyzer workflow cancelled" notification every time I make a PR, regardless of whether the ecosystem-analyzer label is present on the PR when the PR is initially made. Claude's analysis is:

The problem is the cancel-in-progress: true concurrency setting on line 17. Here's what happens:

  1. You push to a PR (no ecosystem-analyzer label) → workflow triggers → job is immediately skipped (line 32 condition is false)
  2. You push again → new workflow run starts → concurrency group cancels the previous run → you get a "cancelled" notification

Even though the previous run's job was already skipped/done, the concurrency mechanism still sends a cancellation signal, which generates the notification.

The fix is to only cancel in-progress runs when the label is actually present — for unlabeled PRs, the runs are harmlessly skipped in milliseconds and don't need cancellation:
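The workflow file itself isn't shown in this thread, so as a sketch, the conditional described above might look something like this (the group key is hypothetical; the label name comes from the discussion):

```yaml
concurrency:
  group: ecosystem-analyzer-${{ github.event.pull_request.number }}
  # Only cancel previous runs when the ecosystem-analyzer label is present;
  # unlabeled PR runs are skipped in milliseconds and don't need cancellation.
  cancel-in-progress: ${{ contains(github.event.pull_request.labels.*.name, 'ecosystem-analyzer') }}
```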

Which seems plausible?

The `cancel-in-progress` concurrency setting was unconditionally
cancelling previous workflow runs on every push to a PR. For PRs
without the `ecosystem-analyzer` label, this generated a "cancelled"
notification even though the job would have been immediately skipped
anyway. Only cancel in-progress runs when the label is present.

https://claude.ai/code/session_01LH6Q6gjrb3aJqhEexrynjk
@AlexWaygood AlexWaygood added the ci (Related to internal CI tooling) and ty (Multi-file analysis & type inference) labels on Mar 3, 2026
@AlexWaygood
Member Author

Even if this isn't the cause, I don't see how this PR can do any harm.

@AlexWaygood AlexWaygood merged commit bb80aff into main Mar 3, 2026
44 checks passed
@AlexWaygood AlexWaygood deleted the claude/fix-ecosystem-analyzer-oDXv7 branch March 3, 2026 12:14
carljm added a commit that referenced this pull request Mar 3, 2026
* main:
  [ty] Apply narrowing to walrus values (#23687)
  [`perflint`] Extend `PERF102` to comprehensions and generators (#23473)
  [ty] Fix GitHub-annotations mdtest output format (#23694)
  [ty] Reduce the number of potentially-flaky projects (#23698)
  [`pydocstyle`] Fix numpy section ordering (`D420`) (#23685)
  [ty] Move method-related types to a submodule (#23691)
  [ty] Avoid the mandatory "ecosystem-analyzer workflow run cancelled" notification every time you make a PR (#23695)
  [ty] Move `Type::subtyping_is_always_reflexive` to `types::relation` (#23692)
  Update conformance suite commit hash (#23693)
  [ty] Add mdtest suite for `typing.Concatenate` (#23554)
  [ty] filter out pre-loop bindings from loop headers (#23536)
AlexWaygood pushed a commit that referenced this pull request Mar 4, 2026
PR #23695 only made `cancel-in-progress` conditional on the
`ecosystem-analyzer` label, but that wasn't sufficient. GitHub Actions
concurrency groups with `cancel-in-progress: false` still cancel
*pending/queued* runs in the same group — they just don't cancel
*in-progress* ones. So every push to any PR still entered the same
concurrency group, and queued runs got cancelled, generating
notifications.

Fix this by making the concurrency group name unique per run (using
`github.run_id`) when the `ecosystem-analyzer` label is absent. This
way, unlabeled PR runs never share a concurrency group, so no
collisions or cancellations occur. When the label is present, runs
share a group keyed by PR number so stale runs are properly cancelled.

https://claude.ai/code/session_01GNDqAXQ1boZHxv4H1tGQFy
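As a sketch of the follow-up approach described above (the exact expression isn't shown in the thread, and the group naming is an assumption), the concurrency group can be made unique per run when the label is absent:

```yaml
concurrency:
  # When the ecosystem-analyzer label is present, share a group keyed by PR
  # number so stale runs are cancelled. Otherwise, key the group on the
  # unique run ID so unlabeled runs never collide and nothing is cancelled.
  group: >-
    ${{ contains(github.event.pull_request.labels.*.name, 'ecosystem-analyzer')
        && format('ecosystem-analyzer-{0}', github.event.pull_request.number)
        || github.run_id }}
  cancel-in-progress: ${{ contains(github.event.pull_request.labels.*.name, 'ecosystem-analyzer') }}
```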
