This repository was archived by the owner on Aug 23, 2023. It is now read-only.
A node's priority may change in between the two checks, which with the previous code could cause:
* a node not being taken into account at all
* the list of nodes for a given priority being replaced by a single node with the same priority
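The fix for this class of race is to read the priority exactly once and reuse that snapshot for both the check and the insert, so the two can never disagree. A minimal sketch of the pattern (hypothetical names, not the PR's actual code):

```go
package main

import "fmt"

type node struct {
	name string
	prio int
}

// bucketize groups nodes into per-priority lists. Reading n.prio exactly
// once and reusing that snapshot for both the lookup and the append means
// a concurrent priority change can no longer make two reads disagree, so
// a node is never dropped and a bucket is never clobbered by a single
// node. (Hypothetical sketch for illustration only.)
func bucketize(nodes []*node) map[int][]*node {
	buckets := make(map[int][]*node)
	for _, n := range nodes {
		p := n.prio // single snapshot of the priority
		buckets[p] = append(buckets[p], n)
	}
	return buckets
}

func main() {
	b := bucketize([]*node{{"a", 1}, {"b", 1}, {"c", 2}})
	fmt.Println(len(b[1]), len(b[2])) // prints: 2 1
}
```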
Note that in practice this doesn't work well. Out of 200 runs:
```
$ ./analyze.sh
failures
197
Mentions of 'Line 85:' specifically
197
passes:
out.txt:3
```
```
$ sed -n "s/.*Expected '\(.*\)' to almost equal '500'.*/\1/p" out.txt | goplot hist
462.00 -> 469.90: ▇▇▇ 2
477.80: ▇▇▇▇▇▇▇▇▇ 6
485.70: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 29
493.60: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 30
501.50: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 34
509.40: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 38
517.30: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 30
525.20: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 23
533.10: ▇▇▇▇ 3
541.00: ▇▇▇ 2
```
So our distribution is off by up to 41/500 = 0.082 = 8.2%.
```
$ ./analyze.sh
failures
198
Mentions of 'Line 85:' specifically
198
passes:
2
```
```
$ sed -n "s/.*Expected '\(.*\)' to almost equal '5000'.*/\1/p" out.txt2 | goplot hist
4867.00 -> 4893.90: ▇ 1
4920.80: ▇▇▇▇▇▇▇▇▇▇▇▇▇ 9
4947.70: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 19
4974.60: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 41
5001.50: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 35
5028.40: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 40
5055.30: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 27
5082.20: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 16
5109.10: ▇▇▇▇▇▇▇▇▇▇▇ 8
5136.00: ▇▇ 2
```
Now the worst delta is 136/5000 = 0.0272 = 2.7%.
I suspect that if we crank up the number of runs significantly, we should get better convergence: a smaller delta that can ultimately pass the "almost equal" assertion consistently.
However, I would argue that these bounds are too loose. I don't want convergence to appear only after thousands of requests. See the next commit.
The randomness, both in memberlist order and in the selection of peers, was creating too much noise.

```
$ ./analyze.sh
failures
0
Mentions of 'Line 85:' specifically
0
passes:
200
```
For some reason #808 was never merged into master, though it was approved and merged into #555.
Seeing these failures again in CI, so I rebased the code on top of the latest master.
fix #857