Bump dbcache to 1 GiB #34692

Merged

achow101 merged 4 commits into bitcoin:master from andrewtoth:bump_dbcache on Mar 6, 2026

Conversation

@andrewtoth
Contributor

@andrewtoth andrewtoth commented Feb 27, 2026

Alternative to #34641

This increases the default dbcache value from 450MiB to 1024MiB if:

  • dbcache is unset
  • The system is 64 bit
  • At least 4GiB of RAM is detected

Otherwise fallback to previous 450MiB default.

This should be simple enough to get into v31. The bump to 1 GiB shows significant performance increases in #34641. It also alleviates concerns about too high a default for steady state, and about lowering the current dbcache default for systems with less RAM.

This change only changes bitcoind behavior, while kernel still defaults to 450 MiB.
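The selection logic described above can be sketched as follows. All names and constants here are illustrative, not the PR's actual identifiers:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>

// Illustrative constants; the PR's real code uses its own names.
constexpr size_t MiB{1024 * 1024};
constexpr size_t OLD_DEFAULT_DB_CACHE{450 * MiB};
constexpr size_t NEW_DEFAULT_DB_CACHE{1024 * MiB};
constexpr uint64_t RAM_THRESHOLD{4096ULL * MiB};

// total_ram: detected physical RAM in bytes, or nullopt if detection failed.
size_t DefaultDBCacheSketch(std::optional<uint64_t> total_ram)
{
    if constexpr (sizeof(void*) >= 8) { // only bump on 64-bit systems
        if (total_ram && *total_ram >= RAM_THRESHOLD) return NEW_DEFAULT_DB_CACHE;
    }
    return OLD_DEFAULT_DB_CACHE; // previous 450 MiB default, also the 32-bit fallback
}
```

With 8 GiB detected this returns the bumped 1024 MiB; with 2 GiB, or when RAM detection fails, it falls back to 450 MiB.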

@DrahtBot
Contributor

DrahtBot commented Feb 27, 2026

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Reviews

See the guideline for information on the review process.

Type Reviewers
ACK sipa, ajtowns, kevkevinpal, svanstaa, achow101
Concept ACK yancyribbens, darosior

If your review is incorrectly listed, please copy-paste <!--meta-tag:bot-skip--> into the comment that the bot should ignore.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #34641 (node: scale default -dbcache with system RAM by l0rinc)
  • #34435 (refactor: use _MiB/_GiB consistently for byte conversions by l0rinc)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Task 32 bit ARM: https://github.com/bitcoin/bitcoin/actions/runs/22490076628/job/65149651058
LLM reason (✨ experimental): C++ compile error: constexpr evaluation fails due to throwing in a constexpr context (MiB value too large for size_t).

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

Contributor

@ajtowns ajtowns left a comment

Having a fairly simple bump to the dbcache seems better than adding scaling logic to me, fwiw (hi insta).

@kevkevinpal
Contributor

Concept ACK 6b0210e

I agree with Ajtowns, I think it makes more sense to have a simple bump in cache size instead of auto scaling.

@kevkevinpal
Contributor

We might want to add a release note, since this will affect any nodes that currently do not have -dbcache set running on systems with greater than 4 GB RAM.

If dbcache is unset, bump default from 450MB to 1024MB on 64-bit systems
that have at least 4GB of detected RAM.
@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Task test ancestor commits: https://github.com/bitcoin/bitcoin/actions/runs/22523590140/job/65252087815
LLM reason (✨ experimental): Build failed during the Qt target (src/qt/optionsmodel.cpp) with a non-zero exit status from the cmake/make step.

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

Member

@sipa sipa left a comment

ACK 4ae9a10

@andrewtoth
Contributor Author

Thanks @ajtowns I took all your suggestions.
Thanks @kevkevinpal added release notes.

@ajtowns
Contributor

ajtowns commented Feb 28, 2026

ACK 4ae9a10

Checked it gives 450 on a system with less than 4GB and 1024 on a system with more.

@DrahtBot DrahtBot removed the CI failed label Mar 2, 2026
@hodlinator
Contributor

hodlinator commented Mar 2, 2026

If you want to avoid changing kernel behavior in this PR, it would be good to state so in the PR-description. "This change only changes bitcoind behavior, while kernel still defaults to 450 MiB."

.cache_bytes = kernel::CacheSizes{DEFAULT_KERNEL_CACHE}.block_tree_db,

kernel::CacheSizes cache_sizes{DEFAULT_KERNEL_CACHE};


Please also use the correct "GiB" unit in PR title. (Edit: and correct all units in the PR description).

@andrewtoth andrewtoth changed the title Bump dbcache to 1GB Bump dbcache to 1 GiB Mar 2, 2026
@darosior
Member

darosior commented Mar 4, 2026

Concept ~0 because i think it's fine if this PR is merged as is, but i reject the premise that we should be supporting systems with very low memory through the default configuration.

I think the goal of supporting systems with low resources is better achieved with configuration presets (as was suggested in the meeting where this issue was discussed) than with complicated and surprising system resource detection baked into our code.

@fanquake fanquake added this to the 31.0 milestone Mar 5, 2026
@fanquake
Member

fanquake commented Mar 5, 2026

Put this on the milestone, to gather any more opinions/make a decision for v31.

@achow101
Member

achow101 commented Mar 5, 2026

Concept ACK

@yancyribbens
Contributor

Concept ACK - Prefer incremental improvement

@andrewtoth
Contributor Author

weigh the argument for making this something that's configurable?

@yancyribbens do you mean making -dbcache configurable? If so it is already a configuration option. This PR and discussion is about changing the default value for it.

@yancyribbens
Contributor

@yancyribbens do you mean making -dbcache configurable? If so it is already a configuration option. This PR and discussion is about changing the default value for it.

Oh, great! Then, I think if this breaks the default compatibility for some users, they would simply need to adjust their configuration.

@sipa
Member

sipa commented Mar 5, 2026

@yancyribbens It really helps to read PRs before commenting on them.

@yancyribbens
Contributor

@yancyribbens It really helps to read PRs before commenting on them.

Leave it up to @sipa to put me in my place, again. Fair point.

@kevkevinpal
Contributor

reACK 4ae9a10

This looks good to me, I think it makes sense to continue to support low RAM systems and not do a simple bump.

This is good to get into the v31.0 release without being too complicated and revisit the autoscaling RAM in a later release if desired.


changes from last ack:
- Release notes added
- ajtowns suggestions

@DrahtBot DrahtBot requested review from achow101 and darosior March 6, 2026 03:10
@ajtowns
Contributor

ajtowns commented Mar 6, 2026

Concept ~0 because i think it's fine if this PR is merged as is, but i reject the premise that we should be supporting systems with very low memory through the default configuration.

Been thinking about this some more. I still think it's preferable this PR be merged as-is for 31.0, but a different approach could be:

  • choose defaults that give respectable performance on easy to obtain hardware; eg 1GB dbcache on an 8GB RAM machine
  • on startup, if the configuration settings have been left as defaults, check if they're obviously unreasonable (dbcache is >25% of memory, prune is unset on mainnet and there's only 20GB of diskspace, eg) and provide a startup error if so.
  • you can avoid the startup error by explicitly configuring the settings, even just setting to the same values as the defaults. ie saying "I know what I'm doing, shut up about it"

That lets us set the defaults to a single value that achieve reasonable performance on reasonable hardware, and leaves achieving great performance or running on unreasonable hardware with poor performance up to people to configure.

I feel like simple good-enough defaults and flexible configuration options is a better approach than trying to make the defaults automatically optimise themselves for the hardware, a la #34641. Having a check and an early error when we can detect that the good-enough defaults are probably not appropriate for the hardware would be a win on top of that, I think, while also not making things much more complex. For people who specifically want automatic optimisation of their config, then a third party tool (lopp's bitcoin.conf generator?) or AI consultation would let them achieve that.
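The proposed "error unless explicitly configured" check might look roughly like this. All names here are hypothetical; Bitcoin Core has no such function, and the thresholds are only those mentioned above:

```cpp
#include <cstdint>
#include <string>

// Hypothetical startup sanity check: error out when an *unmodified*
// default is obviously unreasonable for the detected hardware.
struct SanityResult {
    bool ok;
    std::string error;
};

// dbcache_set: whether the user explicitly set -dbcache (even to the default value).
// dbcache and total_ram are in bytes.
SanityResult CheckDefaultsSketch(bool dbcache_set, uint64_t dbcache, uint64_t total_ram)
{
    // Explicit configuration means "I know what I'm doing, shut up about it".
    if (dbcache_set) return {true, ""};
    if (dbcache * 4 > total_ram) { // default dbcache exceeds 25% of memory
        return {false, "Default -dbcache exceeds 25% of detected RAM; set -dbcache explicitly to override"};
    }
    return {true, ""};
}
```

The key design point is the escape hatch: setting the option to any value, even the default itself, suppresses the error.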

namespace node {
size_t GetDefaultDBCache()
{
    if constexpr (sizeof(void*) >= 8) {
Contributor

is this check for a 64-bit system actually needed here? I believe it's possible to run PAE kernels that allow 32-bit systems to address 4+ GiB of memory.

Contributor

Also, along those lines, I think if you are running on x86 and have 4 GiB, it would be preferable to use the 1 GiB cache size...

Contributor Author

Maybe not necessary, but of course these users can always set the config themselves.

@DrahtBot DrahtBot requested a review from yancyribbens March 6, 2026 12:38
The size of some in-memory caches can be reduced. As caches trade off memory usage for performance, reducing these will usually have a negative effect on performance.

- `-dbcache=<n>` - the UTXO database cache size, this defaults to `450`. The unit is MiB (1024).
- `-dbcache=<n>` - the UTXO database cache size, this defaults to `1024` (or `450` if less than `4096` MiB system RAM is detected). The unit is MiB (1024).
Contributor

nit: technically this falls back to `450` if the system is not 64-bit or if less than `4096` MiB of system RAM is detected

@DrahtBot DrahtBot requested a review from yancyribbens March 6, 2026 12:40
## Updated settings

- The default `-dbcache` value has been increased to `1024` MiB from `450` MiB
on systems where at least `4096` MiB of RAM is detected.
Contributor

Same as previous nit (64 bit thing).

@DrahtBot DrahtBot requested a review from yancyribbens March 6, 2026 12:41
@ViniciusCestarii
Contributor

One edge case to consider is how GetTotalRAM() determines available memory. On Unix systems it uses sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGE_SIZE), which reports the host’s physical RAM, not the memory limit of the container.

For example, if a Docker container is started with --memory=1g on a host with 32 GB of RAM, this call will still return about 32 GB. As a result, the 4 GiB check passes and dbcache may be set to 1024 MiB, which can cause the container to run out of memory.

However, the container’s memory limit is exposed through cgroups. In this setup, /sys/fs/cgroup/memory.max correctly reports the container limit (for example, 1 GiB). Taking the minimum between GetTotalRAM() and the cgroup memory limit, when the latter is available, would address this issue for common container environments such as Docker.
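A minimal sketch of that idea for Linux with cgroup v2 (illustrative function names, not the PR's code; cgroup v1 and non-Linux platforms would need extra handling):

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <optional>
#include <string>
#include <unistd.h>

// Read the cgroup v2 memory limit, if any. The file contains either a
// byte count or the literal string "max" (meaning no limit).
std::optional<uint64_t> CgroupMemLimit(const std::string& path = "/sys/fs/cgroup/memory.max")
{
    std::ifstream f{path};
    std::string s;
    if (!(f >> s) || s == "max") return std::nullopt;
    try { return std::stoull(s); } catch (...) { return std::nullopt; }
}

std::optional<uint64_t> TotalRAMSketch()
{
    // Host physical RAM, as described in the comment above.
    const long pages{sysconf(_SC_PHYS_PAGES)};
    const long page_size{sysconf(_SC_PAGE_SIZE)};
    if (pages <= 0 || page_size <= 0) return std::nullopt;
    uint64_t ram{static_cast<uint64_t>(pages) * static_cast<uint64_t>(page_size)};
    // In a container, the cgroup limit may be far below host RAM.
    if (const auto limit{CgroupMemLimit()}) ram = std::min(ram, *limit);
    return ram;
}
```

Taking the minimum means a `--memory=1g` container on a 32 GB host would fail the 4 GiB check and keep the 450 MiB default.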

@svanstaa

svanstaa commented Mar 6, 2026

ACK 4ae9a10

Reviewed the code, built and ran it. Can attest that it defaults to 1024 without the dbcache set, and still to 450 with dbcache=450.

Since I have no machine with less than 4GB available, tried to patch the GetTotalRAM function to set it to 2GB, but it did not work, since the dynamic linker never resolves it.

Failed attempt to patch GetTotalRAM:

// fake_ram.cpp
#include <cstddef>
#include <optional>

std::optional<size_t> GetTotalRAM()
{
    return 2ULL * 1024 * 1024 * 1024; // pretend 2 GB
}

Compile:
g++ -shared -fPIC src/node/fake_ram.cpp -o fake_ram.so

Test with fake 2 GB RAM
LD_PRELOAD=./fake_ram.so ./build/bin/bitcoind -regtest | grep -E "Cache|MiB"

Test results

$ ./build/bin/bitcoind --version | grep version
Bitcoin Core daemon version v30.99.0-4ae9a10ada95 bitcoind

$ ./build/bin/bitcoind -regtest -dbcache=450 | grep -E "Cache|MiB"

2026-03-06T12:53:03Z Cache configuration:
2026-03-06T12:53:03Z * Using 2.0 MiB for block index database
2026-03-06T12:53:03Z * Using 8.0 MiB for chain state database
2026-03-06T12:53:03Z * Using 440.0 MiB for in-memory UTXO set (plus up to 286.1 MiB of unused mempool space)

$ ./build/bin/bitcoind -regtest | grep -E "Cache|MiB"
2026-03-06T12:53:26Z Cache configuration:
2026-03-06T12:53:26Z * Using 2.0 MiB for block index database
2026-03-06T12:53:26Z * Using 8.0 MiB for chain state database
2026-03-06T12:53:26Z * Using 1014.0 MiB for in-memory UTXO set (plus up to 286.1 MiB of unused mempool space)

@andrewtoth
Contributor Author

It seems I may be too conservative when trying not to break use cases, since most other reviewers don't seem to think it's a big issue. I suppose there are likely very few users running on <4GB that are not already configuring their defaults and reading release notes. The defaults are meant for newbies who aren't yet configuring, which are likely on higher RAM systems.

Nevertheless, this PR is meant to be a last minute sneak in to v31. There are already several ACKs, and those with suggestions explicitly said they do not mind it being merged as is. So, I'm going to leave it as is for now and hopefully a maintainer will merge before cutoff.

on startup, if the configuration settings have been left as defaults, check if they're obviously unreasonable (dbcache is >25% of memory, prune is unset on mainnet and there's only 20GB of diskspace, eg) and provide a startup error if so.

@ajtowns Since #33333 we warn on oversized dbcache. I think it would make sense to tighten that up to an error.

@achow101
Member

achow101 commented Mar 6, 2026

ACK 4ae9a10

@darosior
Member

darosior commented Mar 6, 2026

So, I'm going to leave it as is for now and hopefully a maintainer will merge before cutoff.

Wouldn't that be surprising to merge the detection if it doesn't work on containers:

As a result, the 4 GiB check passes and dbcache may be set to 1024 MiB, which can cause the container to run out of memory.

which are at least as common a setup as VMs are, which were used as the main argument for the detection in the first place:

Currently we can support running indefinitely on 2GB VMs with default configuration. This is a popular cheap format (https://aws.amazon.com/ec2/instance-types/t2/) for running (not syncing) bitcoind.

(In any case anybody technical enough to spin up a VM or a container is able to set a configuration option, it's not like we are breaking a use case, but hey.)


Edited for clarity since apparently the condescending comment that ensued was genuine.

@darosior
Member

darosior commented Mar 6, 2026

I opened #34763 as an alternative that only bumps the default without introducing the detection.

@achow101
Member

achow101 commented Mar 6, 2026

which are at least as common a setup, as the one which was used as the main argument for the detection in the first place:

Currently we can support running indefinitely on 2GB VMs with default configuration. This is a popular cheap format (https://aws.amazon.com/ec2/instance-types/t2/) for running (not syncing) bitcoind.

(In any case anybody technical enough to spin up a VM or a container is able to set a configuration option, it's not like we are breaking a use case, but hey.)

A VM is not the same as a container, and will present available memory differently. In the case with AWS VPSes, the machine will appear to have the specified amount of memory and not be affected by this issue.

@achow101 achow101 merged commit c7a3ea2 into bitcoin:master Mar 6, 2026
75 of 78 checks passed
@darosior
Member

darosior commented Mar 6, 2026

A VM is not the same as a container

Oh, really?

@andrewtoth andrewtoth deleted the bump_dbcache branch March 6, 2026 20:17
@yancyribbens
Contributor

Oh, really?

This actually goes back to when chroot was introduced in 1979 as a way to provide an isolated filesystem while still using the same host kernel. FreeBSD extended this idea to also include isolation for other devices in the form of Jails, which was sort of co-opted later in the form of Linux containers. Containers also brought along a whole ecosystem of reproducible install tools, e.g. Docker. I'm still pretty fond of jails though for their simplicity over containers, and of course less overhead than VMs. Anyway, there's a pretty interesting detail of this here: https://hypha.pub/back-to-freebsd-part-1.

@optout21
Contributor

optout21 commented Mar 8, 2026

I feel like simple good-enough defaults and flexible configuration options is a better approach than trying to make the defaults automatically optimise themselves for the hardware, a la #34641.

I take issue with that. A large number of bitcoin-core runners will give zero thought to RAM sizes and memory limits, and expect things to work; if things don't work optimally, they will blame the software. Even if there were concrete guidance offered, like "set dbcache to 20% of the available RAM" (there is no such guidance), most users would not comply. I think context-aware meaningful defaults are very useful. Context-aware estimated min/max values are also needed to be able to generate meaningful warnings in case user configuration is present but nonsensical. I think context-aware defaults that try to cover a wider range of different systems (like #34641) are better overall. But for sure user configuration should be able to override everything.

@optout21
Contributor

optout21 commented Mar 8, 2026

  • on startup, if the configuration settings have been left as defaults, check if they're obviously unreasonable (...) and provide a startup error if so.

Refusing to start on unreasonable values is an interesting idea. But I see an inconsistency here: if we trust the software-decided reasonable limits so much that the software can even refuse to start, why not trust a software-decided default to be used? The logic/reliability needed to derive the optimal default and the unreasonable limits are very similar.

To me it looks more consistent and useful to:

  • Derive meaningful context-aware default/min/max values, based on the available RAM (and possibly other parameters)
  • If no config is given, use the default, and state this in the logs to the user, along with all data points used (e.g. detected RAM amount).
  • If config is given, use that. Check it against the min/max values, and if it's outside, warn in the logs (along with all data points used, e.g. detected RAM amount). But attempt to run anyway.
  • React to symptoms of incorrect settings, e.g. excessive swapping, with appropriate diagnostic logs.

I think this gives the best combo for:

  • informed defaults for 'non-technical' users (e.g. users of node-runner solutions but on custom hardware)
  • control and relevant data points for 'technical' users
  • info in case of rare strange setups where the software misbehaves (extremely low or high memory, misdetected memory, etc.).
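As a sketch only, a context-aware default/min/max derivation along these lines might look like the following. The percentages and bounds are invented for illustration, not proposed values:

```cpp
#include <algorithm>
#include <cstdint>

constexpr uint64_t MiB{1024 * 1024};

// Hypothetical context-aware range: a default near 20% of RAM, clamped
// to sane bounds, plus a warning ceiling at 50% of RAM.
struct DBCacheRange {
    uint64_t min, def, max; // bytes
};

DBCacheRange DeriveDBCacheSketch(uint64_t total_ram)
{
    DBCacheRange r;
    r.def = std::clamp(total_ram / 5, 450 * MiB, 4096 * MiB); // ~20% of RAM
    r.min = 8 * MiB;       // arbitrary illustrative floor
    r.max = total_ram / 2; // warn if explicit config exceeds half of RAM
    return r;
}
```

A user-supplied value outside [min, max] would trigger a logged warning with the detected RAM amount, per the bullet points above.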

@optout21
Contributor

optout21 commented Mar 8, 2026

Post-merge-ACK 4ae9a10

A limited change of the single-value limit to a two-tier limit, benefiting IBD performance on (the majority?) non-low-memory systems, for v31.
For the future a more elaborate calculation of optimal default value and meaningful range, and more informative logging would be beneficial.
