Speed up JSON load in slashing protection import #2347
Closed
michaelsproul wants to merge 1 commit into sigp:unstable from
Conversation
michaelsproul (Member, Author)

Damn, I ran this for an hour and it didn't complete. I think our processing is also too slow. I'm going to switch back to Altair for now, but will come back to this and try to add a mode that minifies the file upon import (shrinks it to just one relevant entry per validator).
michaelsproul (Member, Author)

Closing for now in favour of #2354
bors bot pushed a commit that referenced this pull request on Jun 21, 2021:
Issue Addressed

Closes #2354

Proposed Changes

Add a `minify` method to `slashing_protection::Interchange` that keeps only the maximum-epoch attestation and maximum-slot block for each validator. Specifically, `minify` constructs "synthetic" attestations (with no `signing_root`) containing the maximum source epoch _and_ the maximum target epoch from the input. This is equivalent to the `minify_synth` algorithm that I've formally verified in this repository: https://github.com/michaelsproul/slashing-proofs

Additional Info

Includes the JSON loading optimisation from #2347
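The commit message above describes the minification precisely enough to sketch it. Below is a minimal, hypothetical Rust sketch of that algorithm; the type and field names are illustrative assumptions and do not match Lighthouse's actual `slashing_protection` types:

```rust
// Illustrative stand-ins for the interchange types (NOT Lighthouse's API).
type Epoch = u64;
type Slot = u64;

struct SignedAttestation {
    source_epoch: Epoch,
    target_epoch: Epoch,
    signing_root: Option<[u8; 32]>,
}

struct SignedBlock {
    slot: Slot,
    signing_root: Option<[u8; 32]>,
}

struct ValidatorRecord {
    pubkey: String,
    signed_attestations: Vec<SignedAttestation>,
    signed_blocks: Vec<SignedBlock>,
}

/// Collapse each validator's history to at most one synthetic attestation
/// and one synthetic block, as described in the commit message above.
fn minify(records: Vec<ValidatorRecord>) -> Vec<ValidatorRecord> {
    records
        .into_iter()
        .map(|mut record| {
            // Synthetic attestation: the maximum source epoch AND the maximum
            // target epoch, which may come from different real attestations.
            // No signing root, since it isn't a message that was ever signed.
            let max_source = record.signed_attestations.iter().map(|a| a.source_epoch).max();
            let max_target = record.signed_attestations.iter().map(|a| a.target_epoch).max();
            record.signed_attestations = max_source
                .zip(max_target)
                .map(|(source_epoch, target_epoch)| {
                    vec![SignedAttestation { source_epoch, target_epoch, signing_root: None }]
                })
                .unwrap_or_default();

            // Synthetic block: just the maximum slot seen for this validator.
            let max_slot = record.signed_blocks.iter().map(|b| b.slot).max();
            record.signed_blocks = max_slot
                .map(|slot| vec![SignedBlock { slot, signing_root: None }])
                .unwrap_or_default();

            record
        })
        .collect()
}
```

Any history that would conflict with the minified record is already slashable against it, which is why discarding everything but the maxima is safe (the property verified in slashing-proofs).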
Issue Addressed
One of our users had trouble exporting slashing protection data from Prysm to Lighthouse, and it turns out that Lighthouse was very slow at loading the JSON (for the related Prysm issue see OffchainLabs/prysm#8893).
Proposed Changes
The reason for the slowness is that `serde_json::from_reader` isn't as heavily optimised as other deserialisation methods, as described here: serde-rs/json#160. Instead of `from_reader`, the import function now reads the entire JSON file into memory and deserialises that. I think this is a reasonable tradeoff, as import JSON files should be reasonably sized (less than system memory 🤞) due to pruning.

Marking this as WIP while I run the import on a modified version of the failing 1.2GB file (it's still taking a long time...).
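For illustration, here is a minimal sketch of the change; the function name and the `Interchange` stand-in are hypothetical, not the actual Lighthouse code:

```rust
use std::fs;
use std::path::Path;

use serde::Deserialize;

// Hypothetical stand-in for the interchange format; fields elided.
#[derive(Deserialize)]
struct Interchange {}

fn load_interchange(path: &Path) -> Result<Interchange, Box<dyn std::error::Error>> {
    // Before (slow): deserialise straight from the reader. `from_reader`
    // consumes bytes incrementally and can't borrow from a contiguous buffer.
    //
    // let interchange: Interchange =
    //     serde_json::from_reader(std::fs::File::open(path)?)?;

    // After (fast): read the whole file, then parse the in-memory string.
    let json = fs::read_to_string(path)?;
    let interchange: Interchange = serde_json::from_str(&json)?;
    Ok(interchange)
}
```

The trade-off is peak memory proportional to the file size, which the pruning mentioned above should keep bounded in practice.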