FEAT-111 | Generate additional metrics for feature map documentation site#57
Conversation
hstrowd
left a comment
Overall I'm on board with these changes. I dropped in a handful of comments that seem worth considering/addressing before merging. Don't hesitate to reach out if any of my feedback, concerns, or suggestions are unclear. Thanks.
test_coverage_config = health_config['components']['test_coverage']

test_coverage_component = health_score_component(test_coverage_config['weight'], T.must(test_coverage['score']), test_coverage_config['score_threshold'])
cyclomatic_complexity_component = health_score_component(cyclomatic_complexity_config['weight'], T.must(cyclomatic_complexity['percentile']), cyclomatic_complexity_config['score_threshold'], cyclomatic_complexity['percent_of_max'], 100 - cyclomatic_complexity_config['minimum_variance'])
I think I follow what is going on with this minimum_variance and the associated close_to_maximum_score criteria, but I'm not clear on what the benefit for this "closeness" check is. It seems to be adding a fair bit of complexity to this logic and was a bit hard to follow. Can you help me understand what we get in return for this additional complexity?
Lots of the health score stuff is relative to other features in the codebase. Because of this, it becomes hard to get a "perfect" score as features start to converge. So if a given feature is "close" in a given metric to the top-scoring feature, it should be awarded full points. For example, if there are four features with test coverages of [97, 98, 98, 99], then 97% test coverage is technically in the bottom 25% of all features -- but it's still really good, so it should be awarded full points. That's the general idea behind this implementation, which was more or less copied over from the metrics.js file.
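The "closeness" rule described here can be sketched roughly as follows. This is a hedged illustration, not the actual gem code: the method name `health_score_component` and its parameter order come from the diff above, but the internals shown are an assumption about how the percentile/closeness interplay works.

```ruby
# Sketch of the "close to maximum" rule (assumed internals, not the
# real FeatureMap implementation). A feature whose value is within
# `minimum_variance` of the best feature's value earns the component's
# full weight, even if its percentile rank is low.
def health_score_component(weight, percentile, score_threshold, percent_of_max = nil, close_to_maximum_score = nil)
  # Full points when the feature is nearly tied with the top feature.
  return weight if percent_of_max && close_to_maximum_score && percent_of_max >= close_to_maximum_score

  # Full points when the percentile clears the configured threshold.
  return weight if percentile >= score_threshold

  # Otherwise award points proportionally to the percentile.
  weight * (percentile / score_threshold.to_f)
end

# The coverage example above: [97, 98, 98, 99]. The 97% feature sits in
# the bottom quartile, but 97/99 is roughly 98% of the max, so with a
# minimum_variance of 5 (close_to_maximum_score = 95) it earns full points.
health_score_component(10, 25, 90, (97.0 / 99) * 100, 95) # => 10
```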
That example is helpful for understanding the real-world scenario in which this logic provides more valuable feedback/insight to our users. I wonder if there is a place in the code where this or a similar example could be shared to highlight our intention behind this logic. It seems like something that would help future engineers keep this in mind as they modify this logic.
"lines_of_code": null,
"cyclomatic_complexity": null
},
"additional_metrics": {
If an engineer runs the docs CLI command without having first run the additional_metrics CLI command, it doesn't look like we raise any sort of error, but in this case it appears that this key will receive a value of null rather than a fully initialized payload of the sort shown here. It seems like this would likely cause the documentation site to raise an error.
Do we need to support this key being assigned a value of null? If so, should we make that the value of this feature to ensure engineers are prompted to test and consider this scenario when working on the documentation site in isolation? If not, should we raise an error in this case when the docs CLI command is triggered without additional test metrics having been generated?
I'm seeing the documentation site fail to load when it is generated without the additional_metrics CLI command being run:

It seems like we either need to make this content optional, aligning with the behavior of the test_pyramid and test_coverage commands, or execute these additional metrics calculations automatically as part of the docs command.
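The first option (making the content optional) could look roughly like this on the serialization side. This is a hedged sketch: the method name `feature_payload` and the shape shown are assumptions for illustration, not the gem's actual API; the point is simply to fall back to null the same way the test_pyramid and test_coverage payloads do when their commands haven't been run.

```ruby
# Sketch (assumed names, not the real FeatureMap code): tolerate a
# missing additional-metrics payload when building a feature's entry
# for the documentation site, instead of letting the site crash.
def feature_payload(feature, additional_metrics)
  {
    'name' => feature,
    # Fall back to nil so the documentation site can treat this
    # section as optional, mirroring test_pyramid/test_coverage.
    'additional_metrics' => additional_metrics&.fetch(feature, nil)
  }
end

feature_payload('billing', nil)
# => { 'name' => 'billing', 'additional_metrics' => nil }
```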
FeatureMap.generate_test_pyramid!(unit_path, integration_path, regression_path, regression_assignments_path)
end

def self.additional_metrics!(argv)
Is there a reason this operation is being performed as an independent CLI command rather than as part of the existing docs command?
I believe we were just trying to follow the same implementation pattern as other commands like: validate, test_pyramid, and test_coverage.
My understanding was that the test_pyramid and test_coverage commands were isolated because they have external dependencies (i.e. requiring test summary files and regression test details, or requiring CodeCov credentials/configurations to be provided). These are optional commands that can be run to inject additional insight into the documentation site but are not required.
This use case seems a bit different, since we're just repackaging existing information that was generated/collected as part of previous commands. For that reason, it seems more reasonable to consider performing these operations automatically as part of the docs operation. However, I'm not tied to that approach. If we want to keep it separate we should probably just allow the documentation site to be successfully generated when it has not been executed.
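The "run it automatically as part of docs" option could be as simple as invoking the same operation from within the docs command. This is a hedged sketch under assumed names (`FeatureMapCli`, `docs!`, and the stubbed bodies are illustrative, not the gem's actual CLI structure):

```ruby
# Sketch (assumed module/method names): folding the additional-metrics
# generation into the docs command, since it only repackages data that
# earlier commands already collected and has no external dependencies.
module FeatureMapCli
  def self.additional_metrics!(_argv)
    # ... compute and persist the extra metrics (omitted) ...
    :additional_metrics_generated
  end

  def self.docs!(argv)
    # Unlike test_pyramid/test_coverage, this step needs no credentials
    # or external files, so it can run unconditionally before the site
    # is generated.
    additional_metrics!(argv)
    # ... generate the documentation site (omitted) ...
    :docs_generated
  end
end
```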
Force-pushed from c52608f to 9831167
Linear: https://linear.app/bf-settlements/issue/FEAT-111
Refactors the way we generate health metrics for the documentation site. Previously, they were generated in JS here; however, we now generate them in Ruby alongside other metrics like `abc_size`, `cyclomatic_complexity`, `lines_of_code`, etc. Doing this makes all of the data we need to power key insights and metrics available in the `feature_map_config.js` file that gets created in each repo when running `bin/featuremap docs`.