index: Force database compaction in coinstatsindex #33306
fjahr wants to merge 2 commits into bitcoin:master
Conversation
Do you know if the current state with many small files and one large file has actual downsides, or does it just look weird?
I am not an expert on LevelDB either, just mentioning that two AIs I consulted say that the benefits of regular compaction would only be of cosmetic value, and they suggest just letting LevelDB do its thing.
Good question, and I am definitely not a LevelDB expert either. I think that at the size and usage pattern of the coinstatsindex we are not really facing any significant problems on a modern system, since the amount of data we are storing is very small relative to what LevelDB is capable of handling. But I think the impact is probably also non-zero and might increase over time as the blockchain grows. Very high-level rationale: there must be a reason LevelDB usually does these compactions. As far as I can tell, the best resource on this topic is this docs page: https://chromium.googlesource.com/external/leveldb/%2B/HEAD/doc/impl.md#timing

From my understanding these small files are L0 files, and having many of them means we potentially have overlapping keys among them, meaning we are wasting disk space, and "This may significantly increase the cost of reads due to the overhead of merging more files together on every read." per the document.
FWIW, from the patterns I have observed so far it seems likely to me that the problem only occurs during sync; afterwards the data of new blocks is actually compacted as it should be, and just those 55kb files are left over from the sync phase. So I am thinking about changing the approach to only compact once when the index is being set to
src/index/coinstatsindex.cpp
Outdated
As others have also mentioned, compaction is already a background process that is triggered regularly; it should already take care of these files as far as I can tell.
Is it urgent to do the expensive compaction regularly? Doing more compactions should take more time overall; can you please share your benchmarks in this regard?
As I and others have mentioned (see the linked PR), this doesn't work for coinstatsindex, and we end up with hundreds of 55kb files that seem to stay around forever, or until we trigger a full compaction manually.
Benchmarking didn't seem to make sense to me because each compaction completes almost instantly, so comparing two runs, one with compaction and one without, may not have given an accurate answer. But I figured that if I time the runtime of each compaction to the millisecond and then add them up, that gives me a good indicator of how much time the compaction takes. I have been using Signet for this test, and the individual runtimes of the compactions range from 12ms to 220ms. Overall, the sync of the coinstatsindex (the time between the thread start and thread exit messages) took 5 minutes and 6 seconds. The cumulative time spent on compaction during that sync was 1,367ms, so the compactions made the full sync of the index ~0.45% slower.
Here is the patch I used for this, in case you want to reproduce the result: fjahr@817990e
I haven't checked the linked PR in detail (I have to have some stopping condition, and I think the PR should explain the important context), but until now I didn't get the impression that we weren't doing any compaction here before; I just thought you meant that "it's struggling to keep up".
Have a few questions here:
- could we rather add this to the other background compaction thread? Do we understand why there's no automatic compaction?
- does it make sense to do the compaction so often? There's a reason that's usually not done for every change; garbage collection is usually a rare event. Wouldn't it suffice to do it once at the very end only? Maybe 10k isn't that often, I'll see if there's a way to measure it.
- We're doing a compaction and logging for genesis (height == 0) as well, right? Is that deliberate?
- We seem to be writing each entry separately to LevelDB in a single-value batch and fsync-ing immediately. Could we use some kind of batching like we do with the UTXO set? TxIndex::CustomAppend and BaseIndex::Commit (Line 94 in e419b0e) already write in batches; wouldn't that fix the coinstatsindex small files better than doing compactions manually? (It should also be a lot faster.)
I want to see a global, macro benchmark as well, since the continuous compaction most likely slows it down, but I haven't worked with this area of the code yet, so it may take a while for me to set it up.
Here's what I did to benchmark the performance of only the indexing; I created 4 commits:
- One without any compaction, representing the before state: l0rinc@abab616
- One with compaction at every single block to see how much time it really takes: l0rinc@7d3fa1c
- Compaction at 10'000 blocks, as suggested in the PR: l0rinc@62d3ad1
- One which only does the compaction at the very end, or, since that's simpler, every 100k blocks: l0rinc@7779473
For simplicity, the application terminates after the indexes are successfully created, and we're measuring the time it took to iterate over the blocks and index them.
I've started the application with -connect=0 -coinstatsindex=1 to track the progress without any additional activity, for stability, and measured the output folder's size and number of files.
bash script comparing the 4 commits:

```bash
COMMITS="62d3ad137f70803b210623c51ea3b8a20996b39b 7779473a6e2f3b9fa5b8b97800e29d6fb3299aad abab616a2ba4701816c582dab24593f3f8200b0a 7d3fa1c8e99828751187b13fb1e2de11052c6215"
DATA_DIR="/mnt/my_storage/BitcoinData"
for COMMIT in $COMMITS; do
  killall bitcoind 2>&1; rm -rf "$DATA_DIR/indexes" "$DATA_DIR/debug.log" &&
  git fetch -q origin "$COMMIT" >/dev/null 2>&1 || true && git checkout -q "$COMMIT" && git log -1 --pretty='%h %s' "$COMMIT" &&
  cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release >/dev/null 2>&1 && ninja -C build >/dev/null 2>&1 &&
  ./build/bin/bitcoind -datadir="$DATA_DIR" -connect=0 -coinstatsindex=1 -printtoconsole=0 2>&1
  grep 'Indexing finished migrating' "$DATA_DIR/debug.log"
  du -h "$DATA_DIR/indexes/coinstatsindex/db"
  find "$DATA_DIR/indexes/coinstatsindex/db" -type f -name '*.ldb' | wc -l
done
```

It seems the indexing takes several hours; I'll post the results here after it finishes.
Edit:
- no compaction: 33869s, 211MB, 112 files
- compact at every 10k blocks: 33998s, 211MB, 16 files
- compact at every 100k blocks: 34055s, 211MB, 36 files
- compact at every block: 56403s, 301MB, 131 files (seems suspicious)
Instead of the current single-entry LevelDB batching (creating a batch, serializing the DBHeightKey and std::pair<uint256, DBVal>, and writing the batch containing a single entry), I have extracted the CDBBatch outside of the loops and am committing and writing it regularly; see https://github.com/l0rinc/bitcoin/pull/37/files for a draft (I will need to break it into commits and maybe adjust some tests if needed).
I will run a follow-up benchmark to see how well this performs, since the original 9.5 hours of sync time (with or without compaction) is at least an order of magnitude slower than I expected.
I expect this to solve the small-file fragmentation issue as well; I will get back on both.
Edit:
indexes: batch index writes: 34188s, 211M, 8 files
It solved the fragmentation, but didn't speed up anything.
I still think this is a better direction than adding manual compactions.
> could we rather add this to the other background compaction thread? Do we understand why there's no automatic compaction?

I am not aware of a compaction thread that we control; can you elaborate on what you mean? And no, as I have tried to explain, there seems to be no easy answer, but the evidence points to LevelDB not being able to keep up at a certain write speed.
> does it make sense to do the compaction so often? There's a reason that's usually not done for every change, garbage collection is usually a rare event. Wouldn't it suffice to do it once at the very end only?

I had thought about doing it once at the end of sync, and I am doing this now in the latest push. It just necessitates adding a hook for this.
> Instead of the current single-entry LevelDB batching (creating a batch, serializing the DBHeightKey and std::pair<uint256, DBVal> and writing the batch, containing a single entry), I have extracted the CDBBatch outside of the loops and committing and writing that regularly,

Seems like a pretty big refactor, and IMO not worth it just for the issue we are dealing with here, given the edge cases and inconsistencies we have fixed in the indexes over the last few years.
src/index/coinstatsindex.cpp
Outdated
shouldn't this be a debug log? Do we need 91 instances of this?
(I haven't checked, but will this be triggered for genesis?)
I am pretty indifferent, so I am making it debug. I had to add a new category since none of the existing ones seemed to fit.
src/dbwrapper.h
Outdated
is const the right thing to advertise here? And is it noexcept?
It doesn't modify the in-memory data, but it does change the storage on disk, so I think you are right and this shouldn't be const. I don't think it should be noexcept either: since we are doing work on disk, there are a lot of issues that could arise from that.
Force-pushed 137ea94 to 7bc77c5
l0rinc left a comment
I'm missing benchmark results to see how this change affects the already very slow performance. I will post my own results as soon as they finish.
I would also like to understand why database compaction isn't enabled for this LevelDB instance.
Force-pushed 7bc77c5 to cb7f981
I am giving this another chance to receive some concept ACKs and to see if the pain is big enough to warrant a fix. Addressed comments by @l0rinc, in particular by only compacting once upon completion of sync. This required adding a hook for this to the base index, but it's still a very minimal change. There is still virtually zero performance impact from this change. I synced the
I think we're just treating the symptoms here; the problem is that we write in batches of a single element. The solution would be to batch the writes instead of writing them one-by-one and compacting. Please see my comments:
How can this be the problem when we do the exact same thing in BlockFilterIndex and don't see the problem there? The problem is the write speed of coinstatsindex.
I will investigate if you don't think it works; I remembered that I had tried it before. Removed the NACK in the meantime.
Closing for now, since this will likely be resolved with #34489.
This PR forces a full database compaction of the coinstatsindex every 10,000 blocks. This does not seem to come with any considerable performance impact, though I did not test on many different systems.
Motivation: Currently, the coinstatsindex ends up with a lot of 55kb files when no compaction is forced. This is likely related to very fast writes occurring during sync, but so far I could not pin down where exactly the difference in behavior comes from. It seems to be a LevelDB-internal thing though, since I cannot make out any difference in configuration between coinstatsindex and the other indexes.
This was first reported here: #30469 (comment)