
feat: add validation statistics to track execution time, files checked, keys checked and validators run (#20)

Merged
konradmichalik merged 1 commit into main from statistics
Jul 8, 2025
Conversation

@konradmichalik (Contributor) commented Jul 8, 2025

Summary by CodeRabbit

  • New Features

    • Validation statistics (execution time, files checked, keys checked, validators run) are now displayed in verbose CLI output after validation runs.
    • Execution time is shown in milliseconds or seconds, depending on duration.
  • Bug Fixes

    • None.
  • Tests

    • Added comprehensive tests to ensure correct calculation, formatting, and display of validation statistics in various scenarios.
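
Based on the summary above, the new value object can be pictured roughly like this (a minimal sketch; the real class lives in src/Result/ValidationStatistics.php and its exact signatures and formatting details may differ):

```php
<?php

// Sketch of the new statistics value object: immutable, with one getter
// per metric and a human-friendly execution-time formatter.
final class ValidationStatistics
{
    public function __construct(
        private readonly float $executionTime, // seconds
        private readonly int $filesChecked,
        private readonly int $keysChecked,
        private readonly int $validatorsRun,
    ) {}

    public function getExecutionTime(): float
    {
        return $this->executionTime;
    }

    public function getFilesChecked(): int
    {
        return $this->filesChecked;
    }

    public function getKeysChecked(): int
    {
        return $this->keysChecked;
    }

    public function getValidatorsRun(): int
    {
        return $this->validatorsRun;
    }

    // Milliseconds for sub-second runs, seconds with two decimals otherwise.
    public function getExecutionTimeFormatted(): string
    {
        if ($this->executionTime < 1.0) {
            return round($this->executionTime * 1000) . 'ms';
        }

        return number_format($this->executionTime, 2) . 's';
    }
}

$stats = new ValidationStatistics(0.245, 12, 340, 3);
echo $stats->getExecutionTimeFormatted(); // "245ms" under this sketch's rounding
```

Under this sketch, 0.245s renders as "245ms" and 2.5s as "2.50s", matching the ms-vs-seconds behavior described in the summary.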

@coderabbitai bot commented Jul 8, 2025

Walkthrough

A new ValidationStatistics class was introduced to capture statistics such as execution time, files checked, keys checked, and validators run during validation. The ValidationResult and ValidationRun classes were updated to handle and propagate these statistics. CLI rendering and test suites were extended to support and verify the new statistics functionality.

Changes

  • src/Result/ValidationStatistics.php: Added new ValidationStatistics class with properties, getters, and a formatted-time method.
  • src/Result/ValidationResult.php: Extended the constructor and added a getter for the optional ValidationStatistics property.
  • src/Result/ValidationRun.php: Updated executeFor to compute and pass statistics; added a key-counting helper.
  • src/Result/ValidationResultCliRenderer.php: Enhanced CLI output to display statistics in verbose mode; added a private rendering method.
  • tests/src/Result/ValidationResultCliRendererTest.php: Added tests for CLI statistics rendering under various verbosity and result scenarios.
  • tests/src/Result/ValidationResultTest.php: Added tests for ValidationResult statistics handling and access.
  • tests/src/Result/ValidationRunTest.php: Added tests for statistics generation in validation runs; introduced a mock parser for controlled tests.
  • tests/src/Result/ValidationStatisticsTest.php: New test class verifying all aspects of ValidationStatistics, including formatted output.

Poem

In the validator’s warren, new numbers appear,
Time ticks and keys count, statistics are clear.
Files and validators, now tallied with pride,
In verbose mode, their secrets no longer hide.
With every test passing, the code leaps ahead—
A rabbit’s delight, as the stats are now spread!
🐇✨


@coderabbitai bot left a comment
Actionable comments posted: 2

🧹 Nitpick comments (2)
src/Result/ValidationRun.php (1)

47-47: Consider timing implications for key counting.

The execution time measurement includes the key counting process, which happens after the main validation loop. This may make the execution time less accurate as a measure of validation performance.

Consider either:

  1. Moving key counting inside the main validation loop
  2. Measuring validation time separately from statistics collection time
  3. Adding a comment explaining that execution time includes statistics collection

Also applies to: 51-57
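
Option 2 above could look roughly like the following sketch. All names here are illustrative stand-ins for the real logic inside ValidationRun::executeFor(); only the timing structure is the point:

```php
<?php

// Hypothetical sketch: time the validation loop separately from statistics
// collection, so key counting no longer skews the reported execution time.
function validateFileSets(array $fileSets): bool
{
    usleep(1000); // pretend validation work

    return true;
}

function collectKeyCount(array $fileSets): int
{
    return 0; // pretend statistics collection
}

$fileSets = [];

$validationStart = microtime(true);
$overallResult = validateFileSets($fileSets);
$validationTime = microtime(true) - $validationStart; // pure validation time

$statsStart = microtime(true);
$keysChecked = collectKeyCount($fileSets);
$statsTime = microtime(true) - $statsStart; // statistics overhead, tracked separately

// Only $validationTime would feed ValidationStatistics; $statsTime could be
// discarded or reported separately.
```

The trade-off is a second pair of microtime(true) calls, which is negligible compared to the key-counting pass itself.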

tests/src/Result/ValidationRunTest.php (1)

247-247: Fix unused parameter in mock implementation.

The $attribute parameter in getContentByKey method is unused, as flagged by static analysis.

     public function getContentByKey(string $key, string $attribute = 'source'): ?string
     {
+        unset($attribute); // Acknowledge unused parameter in mock
         return "Mock content for {$key}";
     }

Alternatively, if the parameter is truly not needed for the mock:

-    public function getContentByKey(string $key, string $attribute = 'source'): ?string
+    public function getContentByKey(string $key): ?string
     {
         return "Mock content for {$key}";
     }

Since this is a mock implementation, the unset($attribute) approach is more appropriate to acknowledge the interface requirement while silencing the static analysis warning.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d31e73a and 225bcc6.

📒 Files selected for processing (8)
  • src/Result/ValidationResult.php (2 hunks)
  • src/Result/ValidationResultCliRenderer.php (2 hunks)
  • src/Result/ValidationRun.php (3 hunks)
  • src/Result/ValidationStatistics.php (1 hunks)
  • tests/src/Result/ValidationResultCliRendererTest.php (1 hunks)
  • tests/src/Result/ValidationResultTest.php (1 hunks)
  • tests/src/Result/ValidationRunTest.php (1 hunks)
  • tests/src/Result/ValidationStatisticsTest.php (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
src/Result/ValidationResult.php (1)
src/Result/ValidationStatistics.php (1)
  • ValidationStatistics (7-45)
src/Result/ValidationRun.php (4)
src/FileDetector/FileSet.php (2)
  • getFiles (38-41)
  • getParser (20-23)
src/Result/ValidationStatistics.php (1)
  • ValidationStatistics (7-45)
src/Result/ValidationResult.php (1)
  • ValidationResult (11-69)
tests/src/Result/ValidationRunTest.php (1)
  • extractKeys (242-245)
tests/src/Result/ValidationStatisticsTest.php (1)
src/Result/ValidationStatistics.php (6)
  • ValidationStatistics (7-45)
  • getExecutionTime (17-20)
  • getFilesChecked (31-34)
  • getKeysChecked (36-39)
  • getValidatorsRun (41-44)
  • getExecutionTimeFormatted (22-29)
🪛 PHPMD (2.15.0)
tests/src/Result/ValidationRunTest.php

247-247: Avoid unused parameters such as '$attribute'. (Unused Code Rules)

(UnusedFormalParameter)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Report test coverage
  • GitHub Check: Test (PHP 8.1, Symfony 6.x, Composer 2.2)
  • GitHub Check: cgl
🔇 Additional comments (14)
src/Result/ValidationResult.php (2)

21-21: LGTM: Well-implemented optional statistics parameter.

The addition of the optional $statistics parameter with proper type hinting and default value maintains backward compatibility while enabling the new functionality.


65-68: LGTM: Clean getter implementation.

The getter method correctly returns the nullable ValidationStatistics type and follows the existing code patterns in the class.

src/Result/ValidationResultCliRenderer.php (2)

37-39: LGTM: Proper conditional rendering for statistics.

The statistics are appropriately rendered only in verbose mode, which is the correct UX approach for supplementary information.


263-276: LGTM: Well-implemented statistics display.

The method properly handles null statistics with early return and formats the output consistently using gray color for supplementary information. The use of formatted execution time from the statistics object is appropriate.

src/Result/ValidationStatistics.php (2)

7-15: LGTM: Well-designed immutable value object.

The class follows excellent PHP practices with readonly properties and proper type hints. The constructor signature is clean and the parameter names are descriptive.


22-29: LGTM: Intuitive execution time formatting.

The formatting logic appropriately shows milliseconds for sub-second executions and seconds with 2 decimal places for longer executions, providing good UX for performance metrics.

src/Result/ValidationRun.php (1)

25-25: LGTM: Proper timing and file counting implementation.

The execution time measurement and file counting logic are correctly implemented, providing accurate metrics for the validation process.

Also applies to: 29-29, 32-32

tests/src/Result/ValidationResultTest.php (3)

124-142: LGTM: Comprehensive test for statistics integration.

The test properly verifies that the constructor accepts statistics and all getters return the expected values, ensuring the integration works correctly.


144-149: LGTM: Proper null statistics handling test.

The test correctly verifies the behavior when no statistics are provided, ensuring the default null behavior is maintained.


151-167: LGTM: Thorough statistics property verification.

Excellent test that verifies all statistics properties are accessible and correct. The use of assertEqualsWithDelta() for float comparison is the proper approach for floating-point assertions.

tests/src/Result/ValidationStatisticsTest.php (1)

1-67: Excellent comprehensive test coverage!

This test class provides thorough coverage of the ValidationStatistics class with well-structured test cases that cover:

  • Basic constructor and getter functionality
  • Execution time formatting edge cases (milliseconds vs seconds)
  • Boundary conditions (exactly 1 second, zero, very small values)

The test scenarios effectively validate the formatting logic in getExecutionTimeFormatted() method.

tests/src/Result/ValidationRunTest.php (2)

181-230: Well-designed statistics testing coverage!

The new test methods provide excellent coverage for the statistics functionality:

  • testExecuteForCreatesStatistics: Validates basic statistics generation
  • testExecuteForWithMultipleFilesSetsStatistics: Tests statistics aggregation across multiple file sets
  • testExecuteForWithEmptyFileSetGeneratesZeroKeys: Covers edge case with empty input

The assertions properly verify execution time, file counts, validator counts, and key counts.


234-266: Good mock implementation for controlled testing.

The MockParserForTesting class provides a clean way to test key counting with predictable results (always returns 3 keys). This enables reliable testing of the statistics functionality.
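
A test double like the one described might look roughly like this (a sketch; the real MockParserForTesting implements the project's parser interface, whose name is not shown in this PR and is therefore omitted here):

```php
<?php

// Sketch of a parser test double that always reports three keys, giving the
// statistics tests a predictable key count per file.
final class MockParserForTesting
{
    public function __construct(private readonly string $file) {}

    /** @return list<string> Always three predictable keys. */
    public function extractKeys(): array
    {
        return ['key.one', 'key.two', 'key.three'];
    }

    public function getContentByKey(string $key, string $attribute = 'source'): ?string
    {
        unset($attribute); // unused in the mock, kept for signature compatibility

        return "Mock content for {$key}";
    }
}

$parser = new MockParserForTesting('labels.xlf');
// Three keys per file makes expected totals trivial: N files => 3 * N keys.
```

Returning a fixed key list decouples the statistics assertions from any real fixture files.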

tests/src/Result/ValidationResultCliRendererTest.php (1)

361-471: Comprehensive statistics rendering test coverage!

The new test methods provide excellent coverage for the statistics rendering functionality:

  • Verbose mode rendering: Tests both millisecond and second formatting
  • Compact mode behavior: Verifies statistics are hidden in normal verbosity
  • Null statistics handling: Ensures graceful handling when statistics are not available
  • Mixed scenarios: Tests statistics display with both success and failure outcomes

The tests properly verify the conditional rendering logic and string formatting, ensuring statistics are only shown in verbose mode as intended.

Comment on lines +85 to +106
private function countKeysChecked(array $fileSets): int
{
    $keysChecked = 0;

    foreach ($fileSets as $fileSet) {
        $parserClass = $fileSet->getParser();

        foreach ($fileSet->getFiles() as $file) {
            try {
                $parser = new $parserClass($file);
                $keys = $parser->extractKeys();
                if (is_array($keys)) {
                    $keysChecked += count($keys);
                }
            } catch (\Throwable) {
                // Skip files that can't be parsed
            }
        }
    }

    return $keysChecked;
}


🛠️ Refactor suggestion

Improve error handling and consider performance implications.

The key counting implementation has a few concerns:

  1. Creating new parser instances for each file may be inefficient
  2. The broad catch (\Throwable) might hide actual bugs
  3. The parser constructor signature assumption may not be consistent across all parsers

Consider more specific error handling and verify parser interface consistency:

-                } catch (\Throwable) {
-                    // Skip files that can't be parsed
-                }
+                } catch (\Exception $e) {
+                    $this->logger->debug('Failed to parse file for key counting: ' . $file, ['exception' => $e]);
+                    // Skip files that can't be parsed
+                }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/Result/ValidationRun.php around lines 85 to 106, the method
countKeysChecked creates a new parser instance for each file, which may be
inefficient, and uses a broad catch for \Throwable that could hide real bugs. To
fix this, first verify that all parser classes implement a consistent interface
or abstract class with a standard constructor and extractKeys method. Then,
instantiate the parser once per parser class if possible, or refactor to reuse
parser instances. Replace the broad catch with more specific exception types
related to parsing errors to avoid masking unexpected issues.
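
The narrower-catch idea can be sketched as follows. DemoParser and \RuntimeException are stand-ins (the actual exception types the project's parsers throw are not shown in this PR); the real method is ValidationRun::countKeysChecked():

```php
<?php

// Stand-in parser that fails on one specific file, mimicking an unparseable input.
final class DemoParser
{
    public function __construct(private readonly string $file) {}

    /** @return list<string> */
    public function extractKeys(): array
    {
        if ($this->file === 'broken.xlf') {
            throw new \RuntimeException('unparseable file');
        }

        return ['a', 'b'];
    }
}

// Catch only the parse-level exception instead of \Throwable, so programming
// errors (TypeError, etc.) still surface instead of being silently swallowed.
function countKeysChecked(string $parserClass, array $files): int
{
    $keysChecked = 0;

    foreach ($files as $file) {
        try {
            $parser = new $parserClass($file);
            $keysChecked += count($parser->extractKeys());
        } catch (\RuntimeException) {
            // Only parse failures are skipped.
        }
    }

    return $keysChecked;
}

echo countKeysChecked(DemoParser::class, ['a.xlf', 'broken.xlf', 'b.xlf']); // 4
```

The broken file contributes zero keys while the two parseable files contribute two each, and any non-parse exception would still propagate.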

        $keysChecked = $this->countKeysChecked($fileSets);

        $validatorsRun = count($validatorClasses);

        return new ValidationResult($validatorInstances, $overallResult, $validatorFileSetPairs);


🛠️ Refactor suggestion

Validator count may be misleading.

The current implementation counts validator classes rather than actual validator executions. If there are 3 validator classes and 2 file sets, 6 validators are actually executed, but this will report 3.

Consider updating the counting logic to reflect actual validator executions:

-        $validatorsRun = count($validatorClasses);
+        $validatorsRun = count($validatorClasses) * count($fileSets);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        $validatorsRun = count($validatorClasses);
+        $validatorsRun = count($validatorClasses) * count($fileSets);
🤖 Prompt for AI Agents
In src/Result/ValidationRun.php at line 49, the code counts the number of
validator classes instead of the total validator executions. To fix this, update
the counting logic to multiply the number of validator classes by the number of
file sets or the actual number of executions to accurately reflect the total
validators run.

@konradmichalik konradmichalik merged commit bd08ae2 into main Jul 8, 2025
24 checks passed
@konradmichalik konradmichalik deleted the statistics branch July 28, 2025 17:14
