reduce index memory usage #1988
Description
On a Raspberry Pi 3 Model B, which has 1 GB of memory, restic stopped fitting into RAM after a 3 TB backup was made from another machine to the same repository. I suspect the indexes have grown big enough to no longer fit into memory.
Output of restic version
hmage@phmd:~$ restic version
restic 0.9.2 compiled with go1.10.3 on linux/arm
How did you run restic exactly?
export AWS_ACCESS_KEY_ID=secret
export AWS_SECRET_ACCESS_KEY=secret
export RESTIC_PASSWORD=secret
export RESTIC_REPOSITORY=s3:https://s3.wasabisys.com/restic-hmage
source ./restic-excludes.sh
restic backup --exclude-file <(printf "%s\n" "${EXCLUDES[@]}") /
Contents of restic-excludes.sh:
EXCLUDES=(
/dev
/proc
/run
/sys
)
EXCLUDES+=(
$'Icon\r'
$HOME/.bundle
$HOME/.cache
$HOME/.cargo
$HOME/.ccache*
$HOME/.config/chromium
$HOME/.cpan
$HOME/.dropbox
$HOME/.local/share/akonadi
$HOME/.npm
$HOME/Library/Application\ Support/Google/Chrome
$HOME/Library/Application\ Support/Telegram\ Desktop
$HOME/Library/Arq/Cache.noindex
$HOME/norm.*
**/var/cache/apt
**/var/cache/man
**/var/lib/apt/lists
**/var/lib/mlocate
.DS_Store
.DocumentRevisions-V100
.Spotlight-V100
.Trashes
.bzvol
.cache
.dropbox.cache
.fseventsd
/Volumes/Time\ Machine
/media/psf
/private/var/vm/
/srv/piwik/tmp
/srv/www/data/cache
/tmp
/usr/lib/debug
/var/lib/lxcfs
/var/swap
/var/tmp
Cache
Caches
)
What backend/server/service did you use to store the repository?
Wasabi (S3 protocol)
Expected behavior
Restic should not run out of memory no matter how big the indexes are. They should be streamed from disk or the repository rather than loaded completely into RAM, since RAM is not infinite.
Actual behavior
Restic allocates a large amount of memory depending on index size. Before I backed up 3 TB of data on my Mac, restic on the Pi had no problems backing up; after that backup succeeded, restic gets killed by the kernel's oom-killer:
hmage@phmd:~$ dmesg|fgrep restic
[426681.565821] [15683] 1000 15683 1393 295 8 0 0 0 restic-backup.s
[426681.565827] [15709] 1000 15709 268547 174432 353 0 0 0 restic
[426681.565897] Out of memory: Kill process 15709 (restic) score 664 or sacrifice child
[426681.565959] Killed process 15709 (restic) total-vm:1074188kB, anon-rss:697728kB, file-rss:0kB, shmem-rss:0kB
[426681.766777] oom_reaper: reaped process 15709 (restic), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[511937.088184] [17227] 1000 17227 1393 295 6 0 0 0 restic-backup.s
[511937.088190] [17255] 1000 17255 267651 176005 357 0 0 0 restic
[511937.088205] Out of memory: Kill process 17255 (restic) score 670 or sacrifice child
[511937.088266] Killed process 17255 (restic) total-vm:1070604kB, anon-rss:704020kB, file-rss:0kB, shmem-rss:0kB
[511937.324251] oom_reaper: reaped process 17255 (restic), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[598842.337688] [25297] 1000 25297 1393 295 5 0 0 0 restic-backup.s
[598842.337695] [25324] 1000 25324 201281 129430 264 0 0 0 restic
[598842.337735] Out of memory: Kill process 25324 (restic) score 493 or sacrifice child
[598842.337793] Killed process 25324 (restic) total-vm:805124kB, anon-rss:517720kB, file-rss:0kB, shmem-rss:0kB
[598842.529990] oom_reaper: reaped process 25324 (restic), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[642004.263182] [25392] 1000 25392 1409 314 5 0 0 0 restic-backup.s
[642004.263188] [25412] 1000 25412 201122 123536 252 0 0 0 restic
[642004.263252] Out of memory: Kill process 25412 (restic) score 470 or sacrifice child
[642004.263305] Killed process 25412 (restic) total-vm:804488kB, anon-rss:494144kB, file-rss:0kB, shmem-rss:0kB
[642004.409938] oom_reaper: reaped process 25412 (restic), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Steps to reproduce the behavior
- Back up on the Pi into an empty repo anywhere: succeeds.
- Back up 3 TB of data with 2 million files from another machine, which took several attempts and about 20 hours.
- Back up on the Pi again into the same repo: gets OOM-killed.
Do you have any idea what may have caused this?
Restic tries to load all indexes into RAM instead of mmapping them.
Do you have an idea how to solve the issue?
mmap the indexes from the local disk cache.
Did restic help you or make you happy in any way?
So far the best backup solution I have used, and the only one that allows deduplicated backups from multiple machines into a single repo.