Reduced fetch.unpackLimit to minimum #369
Merged
derrickstolee merged 1 commit into microsoft:master on Apr 20, 2020
Conversation
Scalar already manages the multiple-pack-file situation with the multi-pack-index, which means that adding packs to the Git repo is relatively low cost. Reducing `fetch.unpackLimit` to 1 (from its default value of 100) ensures that every fetch stores the transferred pack as-is on disk instead of spending computation unpacking the data into loose objects. This should make fetching a little faster for any large fetch that would otherwise be exploded into many loose objects (up to 99 objects under the default limit). Instead of being unpacked and written to disk individually, these objects are kept in the packfile and consolidated later by the incremental repack step using the multi-pack-index. On a busy repo, this might make the incremental repack with the multi-pack-index work a bit harder, but I think that is a worthwhile computation trade-off: the cost is paid during background maintenance, and it lets fetches run faster to keep the repo up-to-date.
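As a sketch, the equivalent plain-git configuration looks like this (the PR itself applies the setting through Scalar's own config machinery; the commands below only illustrate the knob being turned):

```shell
# Git's default is to unpack any fetched transfer containing fewer than
# 100 objects into loose objects (fetch.unpackLimit = 100). Setting the
# limit to 1 means even a single-object fetch is kept as a packfile on
# disk, to be consolidated later by maintenance.
git config fetch.unpackLimit 1

# Confirm the value that fetch will use.
git config fetch.unpackLimit
```

Note that `transfer.unpackLimit`, if set, would override this for both fetch and push; the PR targets the fetch-specific key.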
Contributor
Author
cc: @derrickstolee
dscho
approved these changes
Apr 18, 2020
Member
dscho
left a comment
The reasoning sounds good to me, and the patch looks obviously correct. Thank you!
derrickstolee
approved these changes
Apr 20, 2020
Contributor
derrickstolee
left a comment
Thanks! I did not know about this config option, so this is an excellent improvement. The loose object step would clean these up eventually, but better to save time and keep the pack from the server.
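For context, the incremental repack step that consolidates these kept packs corresponds, in upstream git terms, to the multi-pack-index maintenance verbs. A minimal sketch (assuming a repo whose packs are covered by a multi-pack-index; Scalar runs the equivalent of this during background maintenance):

```shell
# Write or refresh the multi-pack-index covering all packfiles,
# including packs kept from fetches under the lowered unpackLimit.
git multi-pack-index write

# Drop multi-pack-index entries for packs whose objects are all
# available elsewhere, so those packs can be deleted.
git multi-pack-index expire

# Roll small packs (those below the batch size) into one new
# consolidated pack; the 2g batch size here is illustrative.
git multi-pack-index repack --batch-size=2g
```

This is why keeping fetched packs is cheap here: the consolidation cost moves to maintenance rather than being paid on every fetch.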