Reduce overhead of JavaScript calls #335
Merged
CryZe merged 1 commit into LiveSplit:master on Jun 1, 2020
Conversation
On the web, our layout state calculation is massively dominated by the calls to `performance.now()`. However, before we can even call `.now()` on the `performance` object, we first need to get hold of that object: you have to look up the `window` object and then access its `performance` field. So overall that's three calls into JavaScript. If we instead cache the `performance` object, we can reduce it to just a single call. That seems to improve layout state calculation performance by over 2x. Ideally we would take only a single timestamp per layout state calculation, as otherwise there might be slight disagreements between the individual values shown in the layout state. That's however a much bigger refactoring, so it's something we can look into in the future.
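As an illustration (a plain-JavaScript sketch, not the actual livesplit-core code, which performs this caching on the Rust side of the Wasm boundary), the difference looks roughly like this:

```javascript
// Cache the `performance` object once, so each timestamp afterwards
// needs only a single call into the host environment.
const cachedPerformance = globalThis.performance;

function nowCached() {
  // One host call: invoking `.now()` on the already-cached object.
  return cachedPerformance.now();
}

function nowUncached() {
  // Three host interactions every time: look up the global object,
  // read its `performance` field, then call `.now()`.
  return globalThis.performance.now();
}
```

From a WebAssembly module, each of those host interactions is a separate (and comparatively expensive) crossing of the Wasm/JS boundary, which is why collapsing three lookups into one cached call pays off so much.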
wooferzfg (Member) approved these changes Jun 1, 2020 and left a comment:
Could we potentially also cache the number of seconds in here and then add some kind of invalidation method that gets called once per frame?
CryZe (Author) replied:
Yeah, I thought about that too, but I think that's a bit too hacky tbh (especially because this is only a problem in WebAssembly). We'd rather have some kind of "TimerSnapshot" or so, which could also help in some tests and with the synchronization protocol.
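A timer snapshot mechanism along those lines later landed as LiveSplit/livesplit-core#339. A rough sketch of the idea (the class name and shape here are hypothetical, not the actual API):

```javascript
// Hypothetical sketch of a "TimerSnapshot": capture one timestamp up
// front and reuse it for every value derived during a single layout
// state calculation.
class TimerSnapshot {
  constructor(perf = globalThis.performance) {
    this.capturedAt = perf.now(); // the only `now()` call per calculation
  }

  // Every elapsed-time query reuses the same captured timestamp, so all
  // values shown in the layout state agree with each other.
  elapsedSince(startTime) {
    return this.capturedAt - startTime;
  }
}
```

Because every component reads from the same snapshot, the "slight disagreements between the individual values" mentioned in the PR description can't occur, and the per-frame `performance.now()` count drops to one.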
CryZe added a commit to CryZe/LiveSplitOne that referenced this pull request Jun 14, 2020:
This updates livesplit-core, which brings a variety of performance improvements:
- The Layout State is now being reused, so most frames don't require any heap allocations anymore. However we still serialize everything into a JSON string for now, which puts a lot of garbage on the JS heap. LiveSplit/livesplit-core#334
- The frequent performance.now() calls we do first looked up the window and performance objects every time. This tripled the amount of calls into JavaScript, with each call into JavaScript being quite expensive in Chrome. LiveSplit/livesplit-core#335
- By introducing a timer snapshot mechanism we further reduce the calls to performance.now() to a single one. LiveSplit/livesplit-core#339
- Rust 1.44 accidentally regressed the performance of 128-bit integer multiplications. Those are used for hashing the comparisons when looking up the times for a comparison, which is something we do very frequently. We don't have many comparisons, however, so a simple Vec that we loop through is a bit faster, even in native code, and quite a bit faster on the web because of the Rust 1.44 regression. LiveSplit/livesplit-core#338
- We delay registering the Gamepad Hook Interval until the first gamepad button is registered. Most people won't use a gamepad, so the interval just wastes CPU time for no reason. LiveSplit/livesplit-core#340
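The Vec-over-hashing point can be illustrated with a small sketch (hypothetical names; the real lookup lives in livesplit-core's Rust code, where the pairs are held in a Vec):

```javascript
// With only a handful of comparisons, a linear scan over an array of
// [name, time] pairs does a few cheap equality checks instead of
// hashing the key on every lookup.
function lookupComparisonTime(comparisons, name) {
  for (const [comparisonName, time] of comparisons) {
    if (comparisonName === name) return time;
  }
  return undefined;
}
```

For a small, mostly fixed set of entries the scan beats computing a hash per lookup, and the gap widened once the 128-bit multiply used by the hasher got slower in Rust 1.44.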