
fix: report psutil errors in case benchmark fails (#3925)

Merged
johanneskoester merged 1 commit into main from fix/benchmark-report-errors on Jan 15, 2026

Conversation


@johanneskoester (Contributor) commented on Jan 15, 2026

QC

  • The PR contains a test case for the changes or the changes are already covered by an existing test case.
  • The documentation (docs/) is updated to reflect the changes, or this is not necessary (e.g. if the change neither modifies the language nor the behavior or functionalities of Snakemake).

Summary by CodeRabbit

  • Bug Fixes
    • Improved error handling in benchmark operations to capture and report issues encountered during data collection, providing better visibility when benchmarking encounters problems.



coderabbitai bot commented Jan 15, 2026

📝 Walkthrough

Walkthrough

Enhanced error handling in the BenchmarkRecord class by introducing a new errors attribute that collects error messages during benchmarking. When data collection is incomplete or a psutil.Error occurs, the exception messages are now aggregated and appended to the benchmark warning.
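The pattern described above can be sketched roughly as follows. This is a hypothetical, simplified reconstruction, not the actual Snakemake source; everything besides the `errors` attribute and the general warning wording is illustrative.

```python
# Hypothetical sketch of the errors-collection pattern from the walkthrough.
# Attribute and method names other than `errors` are illustrative, not
# Snakemake's actual code.
from typing import List


class BenchmarkRecord:
    def __init__(self):
        self.data_collected = False
        self.errors: List[str] = []  # psutil error messages gathered per run

    def warning_message(self):
        # Aggregate all collected error messages into the warning that is
        # logged when no cpu/memory statistics could be collected.
        err_msg = "\n".join(self.errors)
        return (
            "Benchmark: unable to collect cpu and memory benchmark "
            f"statistics: {err_msg}"
        )
```

With this in place, each failed measurement attempt leaves a trace in `errors`, so the final warning explains why statistics are missing instead of failing silently.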

Changes

Cohort: Error collection in BenchmarkRecord
File(s): src/snakemake/benchmark.py
Summary: Added an errors: List[str] public attribute to store error messages. Enhanced error reporting to aggregate messages from self.errors into the warning emitted when data collection is incomplete. Propagated psutil.Error exceptions by appending their messages to bench_record.errors during record updates.
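The propagation described in the summary can be sketched like this. To keep the example self-contained, psutil.Error is stood in by a local exception class, and all function names are hypothetical rather than taken from Snakemake's actual code.

```python
# Sketch of the error-propagation pattern described in the change summary.
# MeasurementError stands in for psutil.Error; all names are illustrative.
class MeasurementError(Exception):
    pass


class BenchRecord:  # minimal stand-in for the real BenchmarkRecord
    def __init__(self):
        self.errors = []


def read_process_stats():
    # Simulates a psutil call failing mid-measurement.
    raise MeasurementError("process no longer exists (pid=1234)")


def update_record(bench_record):
    try:
        read_process_stats()
    except MeasurementError as e:
        # Record the error message instead of silently dropping it, and
        # return early rather than updating the record with incomplete data.
        bench_record.errors.append(str(e))
        return
```

The key point is that the except clause converts a transient measurement failure into a recorded message, which the final warning can later surface.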

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Docstring Coverage — ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
Title check — ✅ Passed: The title clearly and concisely describes the main change: adding error reporting for psutil errors during benchmark operations.
Description check — ✅ Passed: The description includes the required QC checklist with both items marked as completed, confirming test coverage and documentation status.



🧹 Recent nitpick comments
src/snakemake/benchmark.py (1)

173-177: Consider handling the case where no errors were captured.

If data_collected is False but no psutil errors were recorded (e.g. the process completed before any measurement could be taken), err_msg will be empty, producing a warning that ends with a colon and nothing after it.

♻️ Suggested improvement
         if not self.data_collected:
             err_msg = "\n".join(self.errors)
+            if not err_msg:
+                err_msg = "process may have completed before measurement"
             logger.warning(
                 f"Benchmark: unable to collect cpu and memory benchmark statistics: {err_msg}"
             )

📜 Recent review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0130fc8 and d43d04b.

⛔ Files ignored due to path filters (1)
  • pyproject.toml is excluded by !pyproject.toml
📒 Files selected for processing (1)
  • src/snakemake/benchmark.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

**/*.py: Do not try to improve formatting.
Do not suggest type annotations for functions that are defined inside of functions or methods.
Do not suggest type annotation of the self argument of methods.
Do not suggest type annotation of the cls argument of classmethods.
Do not suggest return type annotation if a function or method does not contain a return statement.

Files:

  • src/snakemake/benchmark.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: leoschwarz
Repo: snakemake/snakemake PR: 3176
File: tests/test_output_index.py:99-157
Timestamp: 2025-01-17T12:00:09.368Z
Learning: pytest-benchmark is not currently part of the Snakemake project's test dependencies. Suggestions involving pytest-benchmark should be deferred until it's added to the project.
📚 Learning: 2026-01-08T11:13:20.523Z
Learnt from: johanneskoester
Repo: snakemake/snakemake PR: 3643
File: src/snakemake/sourcecache.py:474-492
Timestamp: 2026-01-08T11:13:20.523Z
Learning: In Python files under src/snakemake, ensure that the per-run cache_path for source files does not change within a single workflow run, so that HostingProviderFile._hosted_repos can be keyed reliably by repository name. This invariant should be documented and enforced (e.g., by freezing cache_path after initialization, avoiding mutations during a run, and ensuring any caching uses a run-scoped key). This pattern applies to all modules in the Snakemake source cache area where per-run caching and repository hosting are implemented.

Applied to files:

  • src/snakemake/benchmark.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (45)
  • GitHub Check: tests (6, ubuntu-latest, py312)
  • GitHub Check: tests (8, ubuntu-latest, py311)
  • GitHub Check: tests (6, ubuntu-latest, py311)
  • GitHub Check: tests (3, ubuntu-latest, py311)
  • GitHub Check: tests (2, ubuntu-latest, py312)
  • GitHub Check: tests (7, windows-2022, py313)
  • GitHub Check: tests (5, ubuntu-latest, py313)
  • GitHub Check: tests (9, ubuntu-latest, py311)
  • GitHub Check: tests (4, ubuntu-latest, py311)
  • GitHub Check: tests (7, ubuntu-latest, py312)
  • GitHub Check: tests (2, ubuntu-latest, py313)
  • GitHub Check: tests (10, ubuntu-latest, py313)
  • GitHub Check: tests (5, windows-2022, py313)
  • GitHub Check: tests (9, windows-2022, py313)
  • GitHub Check: tests (6, ubuntu-latest, py313)
  • GitHub Check: tests (6, windows-2022, py313)
  • GitHub Check: tests (10, ubuntu-latest, py311)
  • GitHub Check: tests (4, ubuntu-latest, py312)
  • GitHub Check: tests (5, macos-latest, py313)
  • GitHub Check: tests (3, ubuntu-latest, py313)
  • GitHub Check: tests (4, ubuntu-latest, py313)
  • GitHub Check: tests (10, ubuntu-latest, py312)
  • GitHub Check: tests (5, ubuntu-latest, py312)
  • GitHub Check: tests (3, macos-latest, py313)
  • GitHub Check: tests (5, ubuntu-latest, py311)
  • GitHub Check: tests (7, ubuntu-latest, py311)
  • GitHub Check: tests (9, ubuntu-latest, py312)
  • GitHub Check: tests (10, windows-2022, py313)
  • GitHub Check: tests (8, ubuntu-latest, py313)
  • GitHub Check: tests (8, windows-2022, py313)
  • GitHub Check: tests (3, windows-2022, py313)
  • GitHub Check: tests (4, windows-2022, py313)
  • GitHub Check: tests (8, ubuntu-latest, py312)
  • GitHub Check: tests (8, macos-latest, py313)
  • GitHub Check: tests (7, ubuntu-latest, py313)
  • GitHub Check: tests (9, ubuntu-latest, py313)
  • GitHub Check: tests (2, windows-2022, py313)
  • GitHub Check: tests (3, ubuntu-latest, py312)
  • GitHub Check: tests (2, ubuntu-latest, py311)
  • GitHub Check: tests (1, windows-2022, py313)
  • GitHub Check: tests (1, ubuntu-latest, py311)
  • GitHub Check: tests (1, ubuntu-latest, py313)
  • GitHub Check: tests (1, ubuntu-latest, py312)
  • GitHub Check: tests (1, macos-latest, py313)
  • GitHub Check: apidocs
🔇 Additional comments (3)
src/snakemake/benchmark.py (3)

13-13: LGTM!

The List import from typing is appropriate for maintaining compatibility with Python versions before 3.9.


117-117: LGTM!

The new errors attribute is properly initialized and typed. This aligns well with the existing attribute pattern in the class.


391-393: LGTM!

The error message is correctly captured and appended to the errors list. The early return appropriately prevents updating the benchmark record with incomplete data. The distinction between this handler (fatal for the measurement cycle) and the inner handler at line 349 (non-fatal, per-process) is well designed.
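The inner/outer distinction noted here can be illustrated with a minimal sketch. StatsError stands in for psutil.Error, and all function names are hypothetical, not Snakemake's actual code.

```python
# Illustrative two-level handling: an inner, per-process handler that is
# non-fatal (the failing process is skipped), and an outer handler that is
# fatal for the measurement cycle (the error is recorded and the update
# aborted). StatsError stands in for psutil.Error; names are hypothetical.
class StatsError(Exception):
    pass


def sum_cpu_times(readers):
    # Inner loop: a failure for one process is non-fatal; skip it and
    # keep measuring the others.
    total = 0.0
    for read in readers:
        try:
            total += read()
        except StatsError:
            continue
    return total


def update_benchmark(errors, readers, read_memory):
    # Outer handler: a failure outside the per-process loop aborts the
    # whole cycle; the message is recorded for the final warning.
    try:
        cpu = sum_cpu_times(readers)
        mem = read_memory()
        return cpu, mem
    except StatsError as e:
        errors.append(str(e))
        return None
```

Keeping the two handlers separate means one disappearing child process degrades the measurement gracefully, while a cycle-level failure is surfaced in the record's error list instead of being swallowed.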




@johanneskoester johanneskoester merged commit 293ec40 into main Jan 15, 2026
62 checks passed
@johanneskoester johanneskoester deleted the fix/benchmark-report-errors branch January 15, 2026 22:02
johanneskoester pushed a commit that referenced this pull request Jan 15, 2026
🤖 I have created a release *beep* *boop*
---


## [9.14.7](v9.14.6...v9.14.7) (2026-01-15)


### Bug Fixes

* report psutil errors in case benchmark fails ([#3925](#3925)) ([293ec40](293ec40))
* reuse async runner to prevent port exhaustion ([#3911](#3911)) ([385e0ca](385e0ca))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).