Handle large prune much more efficiently #2162

@neeral85

Description

Output of restic version

restic 0.9.3 compiled with go1.10.4 on linux/amd64

What should restic do differently? Which functionality do you think we should add?

In general, I like restic very much, and creating/restoring snapshots works perfectly fine.
But running restic with large repositories is almost impossible. I have a repository with 5 TB / 30 snapshots.
The intention was to use it like a circular buffer (remove the oldest snapshot, add the newest).

Adding and removing snapshots works perfectly, but when you eventually have to prune the repository, it can take WEEKS to free just 1 TB (because of pack rewriting).
This makes restic almost impossible to use, since you can't create new snapshots during that time.

As you already mentioned here,
you may be able to do something to improve this.

Example:
found 5967884 of 7336415 data blobs still in use, removing 1368531 blobs
will delete 144850 packs and rewrite 142751 packs, this frees 1.082 TiB (took 2 weeks!)
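The numbers above reflect prune's basic delete-vs-rewrite split: a pack can only be deleted outright if none of its blobs are still referenced; a pack that mixes live and dead blobs must be rewritten, i.e. its live blobs copied into new packs, which is what dominates the runtime. A minimal sketch of that decision (hypothetical illustration, not restic's actual code; all names here are made up):

```python
def plan_prune(packs, used_blobs):
    """Decide which packs to delete outright and which to rewrite.

    packs: mapping pack_id -> set of blob ids the pack contains.
    used_blobs: set of blob ids still referenced by any snapshot.
    """
    delete, rewrite = [], []
    for pack_id, blobs in packs.items():
        live = blobs & used_blobs
        if not live:
            delete.append(pack_id)    # no live data: cheap removal
        elif live != blobs:
            rewrite.append(pack_id)   # mixed pack: live blobs must be copied out
        # fully-live packs are kept untouched
    return delete, rewrite

packs = {
    "p1": {"a", "b"},   # fully dead
    "p2": {"c", "d"},   # mixed: "c" still live
    "p3": {"e"},        # fully live
}
delete, rewrite = plan_prune(packs, {"c", "e"})
print(delete, rewrite)   # ['p1'] ['p2']
```

In the example output above, roughly half the affected packs fall into the rewrite bucket, which is why freeing 1 TiB involves copying so much still-live data.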

Especially on remote repositories where you have simply bought storage (with SSH access) and CPU resources are limited, it is much faster to upload the whole repository again.
