This repository was archived by the owner on Nov 15, 2023. It is now read-only.
We regenerate benchmarks only from time to time: for releases, or when something big is coming to Cumulus. We don't regenerate them on PRs to master or for every companion; companions only check that the benchmarks compile.
The consequence is that master can contain non-working benchmarks.
A good example is #2712 (where we want all benchmarks to pass with the new bench bot), plus #2727, #2765, #2777.
Everything was fixed, and in the meantime another PR again broke the previously-working asset-hub-westend: https://gitlab.parity.io/parity/mirrors/cumulus/-/jobs/3099790
Issues/Solutions:
add a pipeline to the CI for every PR: run all benchmarks as (unit) tests
we don't want to measure performance here, only to verify that the benchmarking functions work and that a new PR does not break anything (--steps=1 and --repeat=1)
we don't want to regenerate weight files
we want the additional CI overhead to be as low as possible
ideally this would run fast, without needing to compile polkadot-parachain again, or it could reuse a previously built artifact from CI (with --features=runtime-benchmarks)
is there a way to run any/all benchmarking function(s) as unit tests, without needing to compile the whole polkadot-parachain binary?
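One existing mechanism along these lines is the impl_benchmark_test_suite! macro from the frame_benchmarking crate: it generates one #[test] per benchmark that executes the benchmark function a single time against the pallet's mock runtime, which is essentially the --steps=1/--repeat=1 smoke check, runnable via cargo test on the pallet crate instead of the full node. A minimal sketch (the module layout and the names Pallet, new_test_ext, and Test are the conventional mock-runtime names and are assumptions here, not taken from this issue):

```rust
// In the pallet's benchmarking module (sketch only, assuming the usual
// FRAME layout with a `mock` module providing a test runtime).
#![cfg(feature = "runtime-benchmarks")]

use frame_benchmarking::{benchmarks, impl_benchmark_test_suite};

benchmarks! {
	// ... the pallet's benchmark definitions ...
}

// Generates a #[test] for every benchmark above; each test runs the
// benchmarking function once against the mock runtime, so broken
// benchmarks fail `cargo test` without building polkadot-parachain.
impl_benchmark_test_suite!(
	Pallet,
	crate::mock::new_test_ext(),
	crate::mock::Test,
);
```

In CI this could then be exercised per pallet with something like `cargo test -p <pallet-crate> --features runtime-benchmarks`, reusing the normal test build rather than the release node binary.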