fix(github): use correct name for failed checkruns on retest #2455
Conversation
Summary of Changes

Hello @zakisk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a bug that caused GitHub check runs for failed Tekton PipelineRuns to be incorrectly cleared after retest commands. The core problem stemmed from an inability to consistently identify PipelineRuns by their name during status updates. The fix ensures that the system correctly references PipelineRuns by their generated names when an explicit name is absent. This improves the reliability of CI/CD status reporting and is validated by a newly introduced end-to-end test, preventing future regressions.

Highlights
Changelog
Activity
Code Review
This pull request addresses an issue where failed check runs were being cleared on /retest commands. The fix in pkg/pipelineascode/pipelineascode.go correctly uses GetGenerateName as a fallback, which is a solid approach for handling pipeline run creation failures. The addition of an end-to-end test is a great way to prevent regressions. I've added a couple of suggestions to improve the new test file.
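The fallback the review describes can be sketched as follows. Note that the `PipelineRun` struct and `displayName` helper below are illustrative stand-ins, not the actual Pipelines-as-Code or Tekton API types; the real fix calls `GetGenerateName` on the Kubernetes object metadata.

```go
package main

import "fmt"

// PipelineRun mimics the relevant bits of a Tekton PipelineRun's
// ObjectMeta: Name may be empty when creation failed and only
// GenerateName was set. (Illustrative type, not the real API.)
type PipelineRun struct {
	Name         string
	GenerateName string
}

// displayName returns the name to report in the check run,
// falling back to GenerateName when Name is empty.
func displayName(pr PipelineRun) string {
	if pr.Name != "" {
		return pr.Name
	}
	return pr.GenerateName
}

func main() {
	// A successfully created run has a concrete Name.
	fmt.Println(displayName(PipelineRun{Name: "my-run-abc12"}))
	// A run that failed validation before creation may only
	// carry its GenerateName prefix.
	fmt.Println(displayName(PipelineRun{GenerateName: "my-run-"}))
}
```

Without such a fallback, reporting a failure for a run whose Name is empty produces a check run keyed on an empty string, which is why the failed check run could be wiped out on retest.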
```diff
@@ -1,11 +1,10 @@
-//go:build e2e
```
The //go:build e2e tag has been removed. This tag is important for separating end-to-end tests from unit tests. Without it, these potentially long-running tests will be executed with the standard test suite, which may be unintended. The new test TestGithubRetestWithMultipleFailedPipelineRuns doesn't have a conditional skip, so it will run every time. If these are e2e tests, please consider restoring the build tag to avoid slowing down the CI process for regular unit tests.
Ask ❓
@zakisk did anything change?
Answer:
Yes, something did change in this PR: the `//go:build e2e` build tag at the top of test/github_pullrequest_retest_test.go was removed.

Impact:
- That file now compiles/runs as part of the default `go test ./...` suite, instead of only running when built with `-tags=e2e`.
- The newly added test `TestGithubRetestWithMultipleFailedPipelineRuns` will also run unconditionally, which can slow down regular CI/unit test runs and may require e2e infra to be available.

Recommendation:
- If these are intended to remain e2e-only tests (they look like they are, since they interact with GitHub checks and Tekton PipelineRuns), re-add `//go:build e2e` (and optionally the legacy `// +build e2e`) to this file, or move e2e tests into files that retain the tag.
- If the removal was intentional, it would be good to clarify why and ensure CI is prepared to run these in the default test stage.
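For reference, a file gated by the e2e build tag looks like the fragment below (the package name here is hypothetical, shown only to illustrate the constraint):

```go
//go:build e2e

// This file is compiled only when tests run with the e2e tag,
// e.g. `go test -tags=e2e ./test/...`.
// A plain `go test ./...` ignores it entirely.
package test
```

The constraint must be the first line of the file, followed by a blank line before the package clause, or the Go toolchain treats it as an ordinary comment.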
```go
containsFailedPLRName := false
for _, checkRun := range res.CheckRuns {
	// check if the check run is for the validation failed pipelinerun
	if strings.Contains(checkRun.GetExternalID(), "pipelinerun-tekton-validation") {
		containsFailedPLRName = true
	}
}
assert.Equal(t, containsFailedPLRName, true)
```
This loop can be made more efficient by exiting as soon as a match is found. Adding a break statement will improve performance, especially if the list of check runs grows. Additionally, assert.Assert(t, ...) is more idiomatic for boolean checks in gotest.tools/v3/assert and allows for a more descriptive failure message.
```go
containsFailedPLRName := false
for _, checkRun := range res.CheckRuns {
	// check if the check run is for the validation failed pipelinerun
	if strings.Contains(checkRun.GetExternalID(), "pipelinerun-tekton-validation") {
		containsFailedPLRName = true
		break
	}
}
assert.Assert(t, containsFailedPLRName, "expected to find a check run for 'pipelinerun-tekton-validation'")
```
Yeah, I removed it so that ctrl+click (code exploration in the code editor) would work, but forgot to add it back. Updated.
Force-pushed from 218a446 to 3099527 (Compare)
After a /retest or /test command, the failed check run was being wiped out because we didn't use the correct pipelinerun name when reporting failure in the startPR func. Now this uses the generateName if a name is not available for a pipelinerun. This also adds an e2e test to ensure that the check run for a tekton-validation-failed pipelinerun is not wiped out. https://issues.redhat.com/browse/SRVKP-10741 Signed-off-by: Zaki Shaikh <zashaikh@redhat.com>
Force-pushed from 3099527 to 39a6b5b (Compare)
After a /retest or /test command, the failed check run was being wiped out because we didn't use the correct pipelinerun name when reporting failure in the startPR func. Now this uses the generateName if a name is not available for a pipelinerun.
This also adds an e2e test to ensure that the check run for a tekton-validation-failed pipelinerun is not wiped out.
https://issues.redhat.com/browse/SRVKP-10741
📝 Description of the Change
👨🏻 Linked Jira
🔗 Linked GitHub Issue
Fixes #
🚀 Type of Change
- `fix:`
- `feat:`
- `feat!:`, `fix!:`
- `docs:`
- `chore:`
- `refactor:`
- `enhance:`
- `deps:`

🧪 Testing Strategy
🤖 AI Assistance
If you have used AI assistance, please provide the following details:
Which LLM was used?
Extent of AI Assistance:
Important
If the majority of the code in this PR was generated by an AI, please add a `Co-authored-by` trailer to your commit message. For example:
Co-authored-by: Gemini gemini@google.com
Co-authored-by: ChatGPT noreply@chatgpt.com
Co-authored-by: Claude noreply@anthropic.com
Co-authored-by: Cursor noreply@cursor.com
Co-authored-by: Copilot Copilot@users.noreply.github.com
💡 You can use the script `./hack/add-llm-coauthor.sh` to automatically add these co-author trailers to your commits.
✅ Submitter Checklist
- My commit message type (e.g. `fix:`, `feat:`) matches the "Type of Change" I selected above.
- I have run `make test` and `make lint` locally to check for and fix any issues.
- For an efficient workflow, I have considered installing pre-commit and running `pre-commit install` to automate these checks.