feat: add validation statistics to track execution time, files checked, keys checked and validators run #20
Conversation
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/Result/ValidationRun.php (1)
47-47: Consider timing implications for key counting.

The execution time measurement includes the key counting process, which happens after the main validation loop. This may make the execution time less accurate as a measure of validation performance.
Consider either:
- Moving key counting inside the main validation loop
- Measuring validation time separately from statistics collection time
- Adding a comment explaining that execution time includes statistics collection
Also applies to: 51-57
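One way to realize the second bullet (a hypothetical sketch, not code from this PR; the variable names and the use of `hrtime()` are assumptions):

```php
$start = hrtime(true);

// ... main validation loop ...

$validationSeconds = (hrtime(true) - $start) / 1e9;

// Measure statistics collection separately so it doesn't skew the metric.
$statsStart = hrtime(true);
$keysChecked = $this->countKeysChecked($fileSets);
$statsSeconds = (hrtime(true) - $statsStart) / 1e9;

// Report only $validationSeconds as the execution time,
// or expose both figures side by side.
```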
tests/src/Result/ValidationRunTest.php (1)
247-247: Fix unused parameter in mock implementation.

The `$attribute` parameter in the `getContentByKey` method is unused, as flagged by static analysis.

```diff
  public function getContentByKey(string $key, string $attribute = 'source'): ?string
  {
+     unset($attribute); // Acknowledge unused parameter in mock
      return "Mock content for {$key}";
  }
```

Alternatively, if the parameter is truly not needed for the mock:

```diff
  public function getContentByKey(string $key, string $attribute = 'source'): ?string
  {
      return "Mock content for {$key}";
  }
```

Since this is a mock implementation, the `unset($attribute)` approach is more appropriate: it acknowledges the interface requirement while silencing the static analysis warning.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- `src/Result/ValidationResult.php` (2 hunks)
- `src/Result/ValidationResultCliRenderer.php` (2 hunks)
- `src/Result/ValidationRun.php` (3 hunks)
- `src/Result/ValidationStatistics.php` (1 hunks)
- `tests/src/Result/ValidationResultCliRendererTest.php` (1 hunks)
- `tests/src/Result/ValidationResultTest.php` (1 hunks)
- `tests/src/Result/ValidationRunTest.php` (1 hunks)
- `tests/src/Result/ValidationStatisticsTest.php` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
src/Result/ValidationResult.php (1)
- src/Result/ValidationStatistics.php (1): `ValidationStatistics` (7-45)

src/Result/ValidationRun.php (4)
- src/FileDetector/FileSet.php (2): `getFiles` (38-41), `getParser` (20-23)
- src/Result/ValidationStatistics.php (1): `ValidationStatistics` (7-45)
- src/Result/ValidationResult.php (1): `ValidationResult` (11-69)
- tests/src/Result/ValidationRunTest.php (1): `extractKeys` (242-245)

tests/src/Result/ValidationStatisticsTest.php (1)
- src/Result/ValidationStatistics.php (6): `ValidationStatistics` (7-45), `getExecutionTime` (17-20), `getFilesChecked` (31-34), `getKeysChecked` (36-39), `getValidatorsRun` (41-44), `getExecutionTimeFormatted` (22-29)
🪛 PHPMD (2.15.0)
tests/src/Result/ValidationRunTest.php
247-247: Avoid unused parameters such as '$attribute'. (Unused Code Rules)
(UnusedFormalParameter)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Report test coverage
- GitHub Check: Test (PHP 8.1, Symfony 6.x, Composer 2.2)
- GitHub Check: cgl
🔇 Additional comments (14)
src/Result/ValidationResult.php (2)
21-21: LGTM: Well-implemented optional statistics parameter.

The addition of the optional `$statistics` parameter with proper type hinting and default value maintains backward compatibility while enabling the new functionality.

65-68: LGTM: Clean getter implementation.

The getter method correctly returns the nullable `ValidationStatistics` type and follows the existing code patterns in the class.

src/Result/ValidationResultCliRenderer.php (2)
37-39: LGTM: Proper conditional rendering for statistics.

The statistics are appropriately rendered only in verbose mode, which is the correct UX approach for supplementary information.

263-276: LGTM: Well-implemented statistics display.

The method properly handles null statistics with an early return and formats the output consistently, using gray color for supplementary information. The use of formatted execution time from the statistics object is appropriate.
src/Result/ValidationStatistics.php (2)
7-15: LGTM: Well-designed immutable value object.

The class follows excellent PHP practices with readonly properties and proper type hints. The constructor signature is clean and the parameter names are descriptive.

22-29: LGTM: Intuitive execution time formatting.

The formatting logic appropriately shows milliseconds for sub-second executions and seconds with 2 decimal places for longer executions, providing good UX for performance metrics.
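As a rough illustration of the rule described above (a hypothetical sketch inferred from this review, not the actual `ValidationStatistics` source; the exact suffixes and rounding are assumptions):

```php
public function getExecutionTimeFormatted(): string
{
    // Sub-second runs read better in milliseconds...
    if ($this->executionTime < 1.0) {
        return round($this->executionTime * 1000) . 'ms';
    }

    // ...longer runs in seconds with two decimal places.
    return number_format($this->executionTime, 2) . 's';
}
```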
src/Result/ValidationRun.php (1)
25-25: LGTM: Proper timing and file counting implementation.

The execution time measurement and file counting logic are correctly implemented, providing accurate metrics for the validation process.
Also applies to: 29-29, 32-32
tests/src/Result/ValidationResultTest.php (3)
124-142: LGTM: Comprehensive test for statistics integration.

The test properly verifies that the constructor accepts statistics and that all getters return the expected values, ensuring the integration works correctly.

144-149: LGTM: Proper null statistics handling test.

The test correctly verifies the behavior when no statistics are provided, ensuring the default null behavior is maintained.

151-167: LGTM: Thorough statistics property verification.

Excellent test that verifies all statistics properties are accessible and correct. The use of `assertEqualsWithDelta()` for float comparison is the proper approach for floating-point assertions.

tests/src/Result/ValidationStatisticsTest.php (1)
1-67: Excellent comprehensive test coverage!

This test class provides thorough coverage of the `ValidationStatistics` class with well-structured test cases that cover:
- Basic constructor and getter functionality
- Execution time formatting edge cases (milliseconds vs seconds)
- Boundary conditions (exactly 1 second, zero, very small values)
The test scenarios effectively validate the formatting logic in the `getExecutionTimeFormatted()` method.
181-230: Well-designed statistics testing coverage!

The new test methods provide excellent coverage for the statistics functionality:
- `testExecuteForCreatesStatistics`: Validates basic statistics generation
- `testExecuteForWithMultipleFilesSetsStatistics`: Tests statistics aggregation across multiple file sets
- `testExecuteForWithEmptyFileSetGeneratesZeroKeys`: Covers the edge case with empty input

The assertions properly verify execution time, file counts, validator counts, and key counts.
234-266: Good mock implementation for controlled testing.

The `MockParserForTesting` class provides a clean way to test key counting with predictable results (it always returns 3 keys). This enables reliable testing of the statistics functionality.
361-471: Comprehensive statistics rendering test coverage!

The new test methods provide excellent coverage for the statistics rendering functionality:
- Verbose mode rendering: Tests both millisecond and second formatting
- Compact mode behavior: Verifies statistics are hidden in normal verbosity
- Null statistics handling: Ensures graceful handling when statistics are not available
- Mixed scenarios: Tests statistics display with both success and failure outcomes
The tests properly verify the conditional rendering logic and string formatting, ensuring statistics are only shown in verbose mode as intended.
```php
private function countKeysChecked(array $fileSets): int
{
    $keysChecked = 0;

    foreach ($fileSets as $fileSet) {
        $parserClass = $fileSet->getParser();

        foreach ($fileSet->getFiles() as $file) {
            try {
                $parser = new $parserClass($file);
                $keys = $parser->extractKeys();
                if (is_array($keys)) {
                    $keysChecked += count($keys);
                }
            } catch (\Throwable) {
                // Skip files that can't be parsed
            }
        }
    }

    return $keysChecked;
}
```
🛠️ Refactor suggestion
Improve error handling and consider performance implications.
The key counting implementation has a few concerns:
- Creating new parser instances for each file may be inefficient
- The broad `catch (\Throwable)` might hide actual bugs
- The parser constructor signature assumption may not be consistent across all parsers
Consider more specific error handling and verify parser interface consistency:
```diff
-            } catch (\Throwable) {
-                // Skip files that can't be parsed
-            }
+            } catch (\Exception $e) {
+                $this->logger->debug('Failed to parse file for key counting: ' . $file, ['exception' => $e]);
+                // Skip files that can't be parsed
+            }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/Result/ValidationRun.php around lines 85 to 106, the method
countKeysChecked creates a new parser instance for each file, which may be
inefficient, and uses a broad catch for \Throwable that could hide real bugs. To
fix this, first verify that all parser classes implement a consistent interface
or abstract class with a standard constructor and extractKeys method. Then,
instantiate the parser once per parser class if possible, or refactor to reuse
parser instances. Replace the broad catch with more specific exception types
related to parsing errors to avoid masking unexpected issues.
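A narrower catch along the lines the prompt suggests might look like this (hypothetical sketch; `ParserException` is an assumed parser-specific exception type, not confirmed by the PR):

```php
foreach ($fileSet->getFiles() as $file) {
    try {
        $parser = new $parserClass($file);
        $keys = $parser->extractKeys();
        if (is_array($keys)) {
            $keysChecked += count($keys);
        }
    } catch (ParserException $e) {
        // Skip only genuine parse failures; let programming errors surface.
    }
}
```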
```php
return new ValidationResult($validatorInstances, $overallResult, $validatorFileSetPairs);
```

```php
$keysChecked = $this->countKeysChecked($fileSets);

$validatorsRun = count($validatorClasses);
```
🛠️ Refactor suggestion
Validator count may be misleading.
The current implementation counts validator classes rather than actual validator executions. If there are 3 validator classes and 2 file sets, 6 validators are actually executed, but this will report 3.
Consider updating the counting logic to reflect actual validator executions:
```diff
- $validatorsRun = count($validatorClasses);
+ $validatorsRun = count($validatorClasses) * count($fileSets);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```php
$validatorsRun = count($validatorClasses) * count($fileSets);
```
🤖 Prompt for AI Agents
In src/Result/ValidationRun.php at line 49, the code counts the number of
validator classes instead of the total validator executions. To fix this, update
the counting logic to multiply the number of validator classes by the number of
file sets or the actual number of executions to accurately reflect the total
validators run.