Collaborator: It would be nice if Tachometer could run Node benchmarks: google/tachometer#274
This PR adds a suite of performance benchmarks to Lit SSR to help developers analyze the performance impact of their changes. Currently, these need to be run manually on the developer's local machine before and after a change.
The benchmarks simulate Lit templates of several different sizes and complexities, representing something loosely similar to Reddit's comment tree. That is easily Reddit's most complex and performance-sensitive rendering scenario, due to its deep recursive nesting. A sketch of this kind of recursive template is shown below.
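To give a sense of the shape of these templates, here's a minimal sketch of a recursively nested comment tree rendered with Lit SSR. This is illustrative only: the `Comment` type and `commentTree` helper are invented here, and the actual benchmark templates are larger and more varied.

```ts
import {html, nothing, type TemplateResult} from 'lit';
import {render} from '@lit-labs/ssr';
import {collectResultSync} from '@lit-labs/ssr/lib/render-result.js';

// Hypothetical comment shape; the real benchmarks model more fields.
interface Comment {
  author: string;
  body: string;
  replies: Comment[];
}

// Recursive template: each comment renders its replies with the same
// template, producing the deep nesting the benchmarks exercise.
const commentTree = (comment: Comment): TemplateResult => html`
  <article>
    <header>${comment.author}</header>
    <p>${comment.body}</p>
    ${comment.replies.length > 0
      ? html`<section>${comment.replies.map(commentTree)}</section>`
      : nothing}
  </article>
`;

const root: Comment = {
  author: 'op',
  body: 'top-level comment',
  replies: [{author: 'replier', body: 'nested reply', replies: []}],
};

// Server-side render to a single string.
const output = collectResultSync(render(commentTree(root)));
console.log(output);
```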
I wasn't able to use Tachometer the way the existing benchmarks do, because we want these to run in Node rather than in the browser. I originally wanted to use k6, but it doesn't support loading packages from node_modules. I ended up having AI implement a sort of custom test runner that simulates what k6 does; the sketch after this paragraph shows the general idea.
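For context, a minimal runner of this kind times each iteration of a scenario, collects the samples, and reports simple statistics. The names below (`runScenario`, the iteration default) are invented for illustration and don't necessarily match the runner in this PR.

```ts
import {performance} from 'node:perf_hooks';

// Run one scenario for a fixed number of iterations and report timings.
async function runScenario(
  name: string,
  fn: () => void | Promise<void>,
  iterations = 100
): Promise<void> {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const mean = samples.reduce((sum, s) => sum + s, 0) / samples.length;
  const p95 =
    samples[Math.min(samples.length - 1, Math.floor(samples.length * 0.95))];
  console.log(`${name}: mean ${mean.toFixed(2)}ms, p95 ${p95.toFixed(2)}ms`);
}

// Usage: time a synchronous workload (a stand-in for an SSR render call).
await runScenario('json-stringify', () => {
  JSON.stringify({depth: 10, payload: 'x'.repeat(1024)});
});
```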
One thing that's still missing is the ability to run a scenario multiple times concurrently. That's somewhat outside the scope of what I currently need, so it's left as future work.
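If that were added, one simple approach would be to fan out N concurrent invocations of a scenario and await them all, roughly like the hypothetical helper below (nothing like this exists in the PR yet):

```ts
// Hypothetical: run `concurrency` copies of a scenario at once, to model
// a server handling overlapping SSR requests.
async function runConcurrently(
  fn: () => Promise<void>,
  concurrency: number
): Promise<void> {
  await Promise.all(Array.from({length: concurrency}, () => fn()));
}
```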
I'm happy to move this onto a proper benchmarking platform (e.g. Artillery or Gatling) if there's a strong preference.
Sample output:
