We currently support auditing from saved artifacts, but we haven't really generalized this to all artifacts, just the performance ones.
Splitting these two phases up would allow folks who want to run Lighthouse against 1000s of URLs to gather on one machine and audit on another (or farm the auditing out to many).
I think this involves saving the artifacts to disk as each gatherer finishes, and then allowing LH to pick up the disk artifacts later and run the remainder of the analysis.
As a side benefit, it'll be nice to run `lighthouse --process-last-run` (or whatever we call it) while developing an audit/formatter/report, rather than doing the entire round trip each iteration.
Off the top of my head, we probably have to figure out:
- What this looks like at the CLI
- How we save/store/delete the disk artifacts
- How the artifacts are retained in the devtools/extension case
- How we adapt the existing config-based approach to this
This is open for ideas, discussion, and anyone's help in moving this forward. :)