3s consensus - With consensus time included into consensus intervals. #3622
Jim8y wants to merge 85 commits into neo-project:master
Conversation
Co-authored-by: Christopher Schuchardt <cschuchardt88@gmail.com>
```diff
-if (block_received_index + 1 == context.Block.Index)
+if (block_received_index + 1 == context.Block.Index && onPrepareBlockIndex + 1 == context.Block.Index)
 {
-    var diff = TimeProvider.Current.UtcNow - block_received_time;
+    // Include the consensus time into the consensus intervals.
+    var diff = TimeProvider.Current.UtcNow - onPrepareReceivedTime;
```
This is the major change in this PR.
This should be in another PR and not here, if that is the case.
And for your notice, as I said in our meetings, this configuration has been tested in the past.
I think that you need more experimental tests before publishing things too quickly, let alone writing new standards (that are not even necessary).
I am not saying that the change is terrible, @Jim8y, but there are some consequences; I think that you did not analyse the entire scenario before pushing this and creating standards with it.
I will surely be against this change here in this PR for a series of reasons.
Try this in a decentralized scenario, or with inserted delays (you can do that with Docker), or with nodes on AWS/Azure in different locations.
> this configuration has been tested in the past.

A lot has changed in the code since then. Would that affect anything like this? If so, we have to run these tests again to make sure it still holds.
> This should be in another PR and not here, if that is the case.
> And for your notice, as I said in our meetings, this configuration has been tested in the past.
> I think that you need more experimental tests before publishing things too quickly and even writing new standards (that are not even necessary). I am not saying that the change is terrible, @Jim8y, but there are some consequences; I think that you did not analyse the entire scenario before pushing this and creating standards with it.
> I will surely be against this change here in this PR for a series of reasons. Try this in a decentralized scenario or inserting delays (you can deal with that on Docker) or create on AWS/Azure in different locations.
We have set up a real testnet to carry out stress tests and stability tests for weeks. The network works fine, with no issues. As for the "should be in another PR" point: all the changes in this PR now serve, and only serve, the 3-second consensus. It is meaningless to open separate PRs and test PRs that contain only part of these changes, because none of the code in this PR means anything without the rest.
We encourage small PRs to make review and testing easier, but that doesn't mean everything must be as small as possible.
If there is no review issue and no real logic problem, we should not split a complete and already tested PR.
> Lots have changed in code since then. Would that affect anything like this? If so, we have to run these tests again.
You are right, @cschuchardt88; I was also saying that in my last message. Let's see.
```csharp
}

[ContractMethod(CpuFee = 1 << 15, RequiredCallFlags = CallFlags.ReadStates | CallFlags.WriteStates)]
public BigInteger GetGasPerBlock(ApplicationEngine engine, DataCache snapshot)
```
@shargon @roman-khimov please check this; I am trying to update the GasPerBlock without really updating it, because it should be a council thing.
You can change the arguments without a hardfork, because both arguments are native ones.
But we should not write into storage here... it should be done by the council.
Maybe CNs can detect when this is done by the council and start the new time?
Hi @shargon, I think my latest commit has addressed your concern:
- reuse the existing GetGasPerBlock;
- avoid writing to the store; directly calculate the value instead;
- ensure the council can still update GasPerBlock, and the new value can be used immediately.
UT updated accordingly.
I intentionally have two `return currentGasPerBlock` statements to make the logic clearer, as they cover different situations. The code can be optimized if the logic passes review, if necessary.
The ContractMethod cannot have `ApplicationEngine` and `DataCache` parameters at the same time.
roman-khimov left a comment:
- Most of the changes in this PR are just not needed at all (the only relevant one is in fact the timer shift).
- No hardfork is needed to change the block time in the current structure. It's just a matter of restarting consensus nodes with the appropriate configuration.
- No native contract changes are required to change the GAS-per-block value. It's just a matter of sending a committee transaction. This one is really serious, btw: we do have a governance model, and your changes just break it.
- Given 2 and 3, we can just schedule the 3s block time for some post-Echidna date, create and send a transaction, and restart CNs. This won't and can't be perfectly aligned, true, but I doubt it's a problem in practice. Changing governance assumptions behind the scenes is a much bigger problem.
- But this still doesn't answer the time-lock problem we can have with mainnet.
Let me remove the document first, then add another version after the release of v3.8. Looks like you already did, great.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Fix neo-project#3622 (comment). Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Added.
No functional changes, just remove useless changes and adjust documentation. Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Sure, give me until tomorrow.
https://github.com/neo-project/neo/pull/3622/files#r2037019862 (conversation is collapsed)
```csharp
/// For code that needs the block generation time based on blockchain state,
/// use NeoSystemExtensions.GetBlockGenTime extension method instead.
/// </summary>
public uint MillisecondsPerBlock { get; init; }
```
We can mark it `[Obsolete]` to prevent its use anywhere except in the new method.
But this value is actually not obsolete: it's still used until Echidna is enabled, and it's used to initialize Policy at the Echidna height, so this value will never be removed from the node's configuration.
```csharp
public uint GetMSPerBlock(IReadOnlyStore snapshot)
{
    // this should not be null since this method is only available after the HF_Echidna
    snapshot.TryGet(_blockGenTime, out var item);
```
If `TryGet` returns false, throwing an explicit exception instead of hitting a null-reference exception would be better for debugging.
We should just use a simple index expression here, like in any other Policy getter method. Will fix.
```csharp
{
    block_received_index = context.Block.Index;
    block_received_time = TimeProvider.Current.UtcNow;
    Block block = context.CreateBlock();
```
What happened with this variable?
It's removed because it's not used after we implement #3622 (comment).
```diff
-TimeSpan span = neoSystem.Settings.TimePerBlock;
-if (block_received_index + 1 == context.Block.Index)
+TimeSpan span = context.BlockGenTime;
+if (block_received_index + 1 == context.Block.Index && onPrepareBlockIndex + 1 == context.Block.Index)
```
I did not notice that! Good eyes @shargon.
I will dismiss my approval until this is clarified.
If you wish, I can split this PR into two PRs to decouple the Policy update from the dBFT timer logic change. That was my original intention in #3622 (comment).
OK, I see #3622 (comment), let's use two separate PRs. I'll create them soon.
@AnnaShaleva, in fact I knew about the change in how block time is processed.
Initially I was against changing it here directly, because it is of course not the focus of this PR, but @Jim8y insisted and said @superboyiii had been testing on testnet for a long time.
So we also ran our own experiments with this PR, and the behaviour was quite good: we introduced adversarial delays, blocked communication, etc. The results were better than before.
But when I say better than before, I mean the overall neo-core itself, not the change in isolation.
In this sense, I still believe it should be merged in another PR, say a minor patch after release. That is my opinion.
Indeed, I double-checked this line here again.
I was just confused... aheuaheuahea
In fact, in the past, when we studied this exact modification that @Jim8y is pushing forward, we did not add this line.
This line is important and makes the difference for avoiding some problems with the node's relative time under the new rule. It was a good catch by @Jim8y.
For me this PR can be merged.
On the other hand, feel free to split the PR and merge it as two different commits; that also seems to be a better way to proceed.
Let's use #3895: it's cleaner and contains a single logical change, so it's easier for everyone to review. So @vncoelho, @cschuchardt88, if you agree with this PR then please also approve #3895.
Changes were made.
As agreed in #3622 (comment), this PR will be replaced by two PRs to split the logic: native Policy update (#3895) and dBFT timer adjustment (will be created after the merge of #3895).
…ect#3861)

* Native: move MaxValidUntilBlockIncrement to native Policy

  The problem is described in nspcc-dev/neo-go#3841. In short words, the chain's state is dependent on the MaxValidUntilBlockIncrement setting, whereas this setting depends on the block time. Since block generation time is a part of native Policy (neo-project#3622), MaxValidUntilBlockIncrement should follow; we need the same value to be applied at all CNs simultaneously.

  Port nspcc-dev/neo-go#3849.

  Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>

* Extensions: improve formatting

  No functional changes.

  Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>

* Native: set maximum constraint for MaxValidUntilBlockIncrement

  It's set to be a day of 1-second blocks, which should be enough, ref. neo-project#3861 (comment).

  Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>

---------

Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Co-authored-by: Shargon <shargon@gmail.com>
Co-authored-by: Will <201105916+Wi1l-B0t@users.noreply.github.com>
Co-authored-by: Christopher Schuchardt <cschuchardt88@gmail.com>
Co-authored-by: Jimmy <jinghui@wayne.edu>
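The "day of 1-second blocks" cap mentioned in the commit above can be illustrated with a quick calculation (a Go sketch with assumed constants and a hypothetical helper name; the PR defines the actual values):

```go
package main

import "fmt"

// maxVUBIncrementCap computes the assumed upper bound for
// MaxValidUntilBlockIncrement: one day's worth of blocks at the
// fastest block time considered.
func maxVUBIncrementCap(msPerBlock uint32) uint32 {
	const secondsPerDay = 24 * 60 * 60
	return secondsPerDay * 1000 / msPerBlock
}

func main() {
	fmt.Println(maxVUBIncrementCap(1000)) // 1s blocks: 86400
	fmt.Println(maxVUBIncrementCap(3000)) // 3s blocks: 28800
}
```

This shows why the bound has to be stated in blocks rather than seconds: the same wall-clock validity window corresponds to a different block count once the block time changes, which is exactly why MaxValidUntilBlockIncrement follows the block time into native Policy.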
Description
Credits for this PR belong to @AnnaShaleva. Changes included in this PR:
Type of change
Checklist: