Conversation
josef-widder commented Jun 3, 2019
- Updated all relevant documentation in docs
- Updated all code comments where relevant
- Wrote tests
- Updated CHANGELOG_PENDING.md
Codecov Report
@@ Coverage Diff @@
## develop #3710 +/- ##
===========================================
- Coverage 63.35% 63.33% -0.03%
===========================================
Files 218 218
Lines 18239 18239
===========================================
- Hits 11556 11552 -4
- Misses 5707 5710 +3
- Partials 976 977 +1
| powers of PA and UV is less than a third of the total voting power, then in
| $nh$, more than 2/3 of the voting power is held by correct processes. Formally,
| if for all PA
I am not sure where we take into account the power change of common validators. Should we consider that an increase of "bad" power (i.e., add it to the left-hand side of the `<`)?
Note that in the expression below, we fix PA from the old header, but we take into account only its voting power in the new header.
But the calculation on the old header only gives us the maximum possible bad voting power. It doesn't tell us that the validators in the PA attaining that maximum are in fact faulty. It can happen that actual bad validators outside PA have increased in power while good validators have decreased in power. I will try to find an example of this.
We don't do any calculation on the old header. Note that the sum is only over $p$, the voting power of validators from PA in the new header. We only intersect with the old header.
| vpUV := votingpower(UV,nh) // second sum in Theorem 1
| vpNH := votingpower(nh.V,nh) // right hand side of premise of Theorem 1
| vpMaxPA := maximumvotingpower(PAs,nh) // voting powers of all PA and big max
Not sure if you have looked into the complexity here yet. Maybe we can use the 0/1 knapsack algorithm (pseudo-polynomial), i.e., determine vpMaxPA as the maximum power that can fill a knapsack no bigger than 1/3 of the total voting power.
It would be O(n*vpNH), better than the O(2**n) for compute_all_PAs(oh), but still not great:
- n: number of validators (~100)
- vpNH: 1/3 of the total voting power. I would have to look into real numbers, but I think it is high... Maybe it can be optimized by normalizing the powers (the weights in the knapsack problem)?
I haven't been thinking about an efficient way of implementing it. We would need your help there :)
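To make the knapsack idea concrete, here is a minimal sketch under the formulation above: item weights are voting powers in the old header, item values are voting powers in the new header, and the capacity is one third of the old total. The function name `maxPAVotingPower` and the flat slice inputs are hypothetical; a real `maximumvotingpower` would also have to respect how PAs are defined in the spec.

```go
package main

import "fmt"

// maxPAVotingPower sketches the 0/1 knapsack approach discussed in
// this thread (hypothetical helper, not the spec's function): find
// the maximum new-header voting power over all validator subsets
// whose old-header voting power is at most 1/3 of the old total.
// Pseudo-polynomial: O(n * capacity).
func maxPAVotingPower(oldPowers, newPowers []int64) int64 {
	var oldTotal int64
	for _, p := range oldPowers {
		oldTotal += p
	}
	capacity := oldTotal / 3 // PAs have old voting power <= 1/3 of total
	dp := make([]int64, capacity+1)
	for i, w := range oldPowers {
		v := newPowers[i]
		for c := capacity; c >= w; c-- { // downward: each validator used at most once
			if dp[c-w]+v > dp[c] {
				dp[c] = dp[c-w] + v
			}
		}
	}
	return dp[capacity]
}

func main() {
	// four validators; old total power is 9, so the capacity is 3
	oldP := []int64{3, 3, 2, 1}
	newP := []int64{3, 3, 5, 4}
	fmt.Println(maxPAVotingPower(oldP, newP)) // {v3,v4}: old 2+1 <= 3, new 5+4 = 9
}
```

The example also shows why we cannot just pick the heaviest old-header subset: the best PA here is two small old-header validators whose power grew in the new header.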
| vpMaxPA := maximumvotingpower(PAs,nh) // voting powers of all PA and big max
|
| return vpMaxPA + vpUV < 1/3 * vpNH // Theorem 1. It must be smaller for all
| // so it must be smaller for the max
Maybe we can memoize (vpMaxPA + vpUV) as the vpMaxPA of the next iteration's old header, so we would only need to do the expensive computation once, for the initial header.
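One implementation detail worth flagging for the comparison above: written literally, `1/3 * vpNH` truncates to zero in integer arithmetic. A minimal integer-safe sketch of just the final check (the name `checkVS` and the flat three-argument signature are hypothetical simplifications):

```go
package main

import "fmt"

// checkVS sketches only the final comparison from the quoted
// pseudocode; the inputs are assumed to have been computed as
// vpMaxPA, vpUV, and vpNH above. Multiplying by 3 keeps the
// Theorem 1 premise in integer arithmetic and avoids the 1/3
// factor truncating to zero.
func checkVS(vpMaxPA, vpUV, vpNH int64) bool {
	return 3*(vpMaxPA+vpUV) < vpNH
}

func main() {
	fmt.Println(checkVS(2, 1, 10)) // 3*(2+1) = 9  < 10: true
	fmt.Println(checkVS(3, 1, 10)) // 3*(3+1) = 12 >= 10: false
}
```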
| ## Remarks
|
| **Remark.** Computing all PAs might be too expensive (all subsets of $oh.V$ that have a certain combined voting power in oh). Similarly, we then have to compute all voting powers of the PAs in nh to get the maximum. This is disturbing, as right now, based on the examples, I expect that CheckVS will mostly return false, assuming that there are frequent changes in the validator sets. However, $oh.V = nh.V$ might be the common case.
Should this spec also cover what the lite client should do if CheckVS returns false? It might just do bisection, or maybe a smarter strategy.
Yes, it should be a complete algorithm, which most probably includes bisection. We need to integrate this with the old spec so that it is complete. There are also some parts missing regarding counterfactual slashing.
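To illustrate what such a bisection step could look like, here is a rough skeleton. Everything here is a hypothetical stand-in (`Header`, `Bisect`, and the `checkVS`/`fetch` callbacks), not spec names, and it omits trusting-period checks entirely:

```go
package main

import "fmt"

// Header is a minimal stand-in for the spec's headers; only the
// height matters for this skeleton.
type Header struct{ Height int64 }

// Bisect sketches the strategy discussed here: if the direct check
// from the trusted header to the target fails, recursively verify a
// pivot header halfway in between. checkVS plays the role of the
// spec's CheckVS; fetch retrieves a header at a given height.
func Bisect(trusted, target Header,
	checkVS func(oh, nh Header) bool,
	fetch func(height int64) Header) bool {
	if target.Height <= trusted.Height {
		return true
	}
	if checkVS(trusted, target) {
		return true
	}
	if target.Height == trusted.Height+1 {
		return false // adjacent headers that fail CheckVS cannot be bisected
	}
	pivot := fetch((trusted.Height + target.Height) / 2)
	return Bisect(trusted, pivot, checkVS, fetch) &&
		Bisect(pivot, target, checkVS, fetch)
}

func main() {
	// toy check: succeed only when the headers are at most 2 heights apart
	near := func(oh, nh Header) bool { return nh.Height-oh.Height <= 2 }
	fetch := func(h int64) Header { return Header{Height: h} }
	fmt.Println(Bisect(Header{Height: 1}, Header{Height: 10}, near, fetch)) // true
}
```

A real algorithm would also bound the recursion by the trusting period and handle fetch failures; this only shows the control flow.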
| - $nextV$: next validators
| * $tp$: trusting period
| * for a time $t$, the predicate $correct(v,t)$ is true if the validator $v$
|   follows the protocol until time $t$ (we will see about recovery later).
What is t in reference to? A local clock or a global clock?
| ## LightClient Trusting Spec
|
| ### LightClient Invariant
I am not sure this is the best place to start the light client definition. Is this how we define full nodes? I think we just want the light client header acceptance algorithm to reflect what a full node would have done given the same set of messages (condensed into headers in the case of a light client).
In particular, this does not even hold for subsequent blocks (block 11 following block 10) if the validator set changes by more than 1/3, which is possible. It also doesn't capture fault-accountability (which we should be able to capture precisely, such that a light client could only diverge from a full node if a slashable signature event by at least a well-defined fraction of stake had occurred).
| **Question:** What should the precise assumption be here? Is the invariant on $h.V$, on $h.NextV$, or both?
|
| *Assumption:* If a header is properly generated, then the above equations hold.
What does this mean? No, I don't think they do, if the validator set changes by enough even in a single block.
| ### Liveness
|
| *Draft:* If a header $h$ has been properly generated by the blockchain (and its age is less than the trusting period), then a correct LightClient will eventually set $trust(h) = true$.
We need to assume synchronous data availability of headers, which is fine. Maybe we should emphasize that headers can be received out of order and the light client will determine which it needs to query for at what times?
| 1. **an uninterrupted sequence of proof.** If a block is appended to the chain, where the last block
|    is trusted (and properly committed by the old validator set in the next block),
|    and the new block contains a new validator set, the new block is trusted if the LightClient knows all headers in the prefix.
|    Intuitively, a trusted validator set is assumed to not choose a new validator set
OK, but it's also the case, critically, that headers contain the hash of the next validator set, so the light client can verify that the validator set of block n (which they trust) signed off on the validator set of block n + 1, which provides some level of assurance (in practice) on what has happened in the state machine, e.g. that the new validators have put up stake which could be slashed.
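A toy sketch of that linkage check follows. The flat SHA-256 over a string slice stands in for Tendermint's actual validator-set hashing (which is a Merkle root over the canonical encoding), and `Header` here carries only the two fields needed for the illustration:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Header carries only the fields needed for this illustration.
type Header struct {
	Validators         []string // validator identities, simplified
	NextValidatorsHash []byte   // commitment to the next block's validator set
}

// hashValidators stands in for Tendermint's validator-set hashing
// (the real one is a Merkle root, not a flat SHA-256).
func hashValidators(vals []string) []byte {
	h := sha256.New()
	for _, v := range vals {
		h.Write([]byte(v))
	}
	return h.Sum(nil)
}

// nextValsLinked checks that the trusted header committed to exactly
// the validator set appearing in the next header.
func nextValsLinked(trusted, next Header) bool {
	return bytes.Equal(trusted.NextValidatorsHash, hashValidators(next.Validators))
}

func main() {
	next := Header{Validators: []string{"v1", "v2"}}
	trusted := Header{NextValidatorsHash: hashValidators(next.Validators)}
	fmt.Println(nextValsLinked(trusted, next)) // true

	tampered := Header{Validators: []string{"v1", "mallory"}}
	fmt.Println(nextValsLinked(trusted, tampered)) // false
}
```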
| 2. **trusting period.** Based on a trusted block *h*, and the LightClient
|    Invariant, which ensures the fault assumption during the trusting period, we can
|    try to infer wether it is guaranteed that the validator set of the new block

Suggested change: "try to infer wether" → "try to infer whether".
| \]
| Validator 4 has more than a third voting power in nh.V. As trusting oh does not |
This could occur on subsequent blocks.
Closing this PR and #3796, as @milosevic & @josef-widder will open a new PR with these two documents combined.