Add more unit tests to improve test coverage consistency#9684
alecslupu merged 14 commits into decidim:develop from
Conversation
@ahukkanen can we retake this PR? If you don't have the time to allocate to this, let me know and either I or @alecslupu can take it off your hands 😄
@andreslucena This is kind of problematic due to the ongoing redesign process, which originally put this on hold. The problem is, for example, that if we need to add more specs to the cells, those cells are going to be changed by the redesign. So I'd keep this on hold for the time being, until the redesign is merged back to develop. Note that this is not the only reason the Codecov action is still occasionally failing; I have previously posted my findings on that issue at #9832. This would only help in those runs where the diff is some tenths of a percentage point, e.g. a 0.20% diff, but it would not solve the problem completely.
The time has come :) Since last week with the #11685 merge, we're not expecting new big PRs, and the folder/file structure should (finally!) be stable again, so this might be a good time to retake this. If you don't have the time, please let me know and I can take this over.
Go ahead and feel free to continue on this work. I fixed the merge conflicts and specs, so you can now review this as-is or create a new branch based on this to continue the work. To recap the idea of this PR:
🎩 What? Why?
UPDATE: I think the root cause is with the merging of the reports, and a workaround for that is provided at #9686. This is still relevant, as there can still be small fluctuations in the coverage between different runs depending on the execution path (some code may not be run in every test run due to the randomized run order).
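To make the randomized-order effect concrete, here is a minimal sketch using Ruby's stdlib `Coverage` API (the file and method names are made up for illustration; this is not Decidim code). A method body that happens not to be invoked in a given run reports zero hits, which is exactly the kind of run-to-run drift described above:

```ruby
require "coverage"
require "tmpdir"

# Write a tiny file with two methods: one we will call, one we will not.
path = File.join(Dir.tmpdir, "sample_methods.rb")
File.write(path, <<~RUBY)
  def rarely_called
    :sometimes_covered
  end

  def always_called
    :covered
  end
RUBY
path = File.realpath(path)

Coverage.start
require path
always_called
# rarely_called is never invoked in this run, so its body line reports
# zero executions -- in a run where some spec happens to call it, the
# same line would report a positive count instead.
lines = Coverage.result[path]
puts lines.inspect # per-line hit counts, e.g. [1, 0, nil, nil, 1, 1, nil]
```

Run twice, once with and once without the `always_called` invocation, the reported coverage for the same file differs even though no code changed.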
UPDATE 2: The Codecov report can still fail due to the GitHub action failing to upload the report of some of the workflow runs to Codecov. There are some workarounds suggested at #9832, as Codecov seems unwilling to add a retry option to their GitHub action.
It has been a consistent issue over the years that the results of the Codecov reports cannot be trusted, because they report seemingly random coverage changes for the diffs and whole PRs. This is rather bothersome for developers, and it can cause extra work for newcomers when they start to investigate why the coverage changed with their PRs.
I started to investigate this issue and found this very useful piece of documentation:
https://docs.codecov.com/docs/unexpected-coverage-changes
It says the following:
Those bold sections in particular are interesting regarding Decidim, as we have:
I started to investigate the coverage reports when they fail, and it seems that most of the time the reason is that some methods are simply not executed during a given test run, even though they were executed during previous runs.
So I believe we can fix this issue by increasing the unit test coverage, which would ensure that all methods are executed consistently on every test run.
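As a minimal sketch of that idea in plain Ruby (in Decidim this would be an RSpec spec; the `SlugNormalizer` class and its methods are hypothetical), a unit test that exercises every public method guarantees those lines are hit on every run, regardless of the randomized test order:

```ruby
# Hypothetical class standing in for any Decidim class whose methods are
# only sometimes reached via integration specs.
class SlugNormalizer
  def normalize(title)
    title.strip.downcase.gsub(/\s+/, "-")
  end

  # A method only some execution paths currently hit; an explicit unit
  # test makes its coverage deterministic.
  def valid?(slug)
    slug.match?(/\A[a-z0-9-]+\z/)
  end
end

# Unit-level assertions touching every public method.
normalizer = SlugNormalizer.new
raise unless normalizer.normalize("  Hello World ") == "hello-world"
raise unless normalizer.valid?("hello-world")
raise if normalizer.valid?("Hello World")
puts "all public methods exercised"
```

Once every public method has a spec like this, the coverage of these lines no longer depends on which integration paths a particular randomized run happens to take.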
Another thing I suspect can happen when only a subset of the tests is run: if we execute the `accountability` tests and they call some methods from the `core` during their run, but the rest of the methods in the `core` class were not called during that run, that may also produce weird coverage results.

This will be a long effort, but I will try to work on it occasionally as we bump into these issues, adding more unit specs for those methods that Codecov reports as not covered.
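On the cross-module point: the workaround in #9686 concerns how the per-suite reports are merged. As a rough, hypothetical illustration only (this is not Decidim's actual configuration; the suite name, env var, and timeout are assumptions), a SimpleCov setup that merges results from separate suite runs into one combined report looks roughly like this:

```ruby
# spec/spec_helper.rb (sketch; names are illustrative)
require "simplecov"

# Distinguish each suite's results so runs of e.g. the `accountability`
# and `core` specs merge into one combined report instead of one
# partial run overwriting the other.
SimpleCov.command_name ENV.fetch("SIMPLECOV_COMMAND_NAME", "core")
SimpleCov.merge_timeout 3600 # seconds; allow slow CI runs to still merge

SimpleCov.start do
  add_filter "/spec/"
end
```

Without per-command merging, a partial run covering only the `accountability` suite would report every untouched `core` method as uncovered.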
Testing
We just need to consistently add unit specs for the methods that show up as uncovered in the coverage reports.