
Conversation

@CaveSpectre11

Initially, this fix is a rebase of PR #849, which is contained within wallet.cpp. More description is written in that PR, but the nutshell version is that there were certain situations where the wallet would continually try to combine the previous combination (below the threshold) with the change (taking you above the threshold).

Even with this fix, AutoCombineRewards was still hardly usable: on wallets with many active addresses, running AutoCombineDust() on every block consumed significant resources because of the wallet scans involved. I found an old change in many PIVX-based coins that implemented a concept of "AutoCombineRewardsThresholdTime", which allowed you to configure the number of minutes between dust collections.

That old algorithm was commented out and replaced with a 5-second wait. This concept was also flawed, in that it slept inside the ProcessNewBlock() code path. The first attempt was likely abandoned because it blocked ProcessNewBlock for 15 minutes at a time, with no way, if the feature was in use, to block for less than a minute. The 5-second workaround still wasn't desirable, as it still locked up ProcessNewBlock for those 5 seconds.

This new method changes the design from a threshold time to a block frequency, defaulting to 15 blocks. The value can be adjusted per implementation to approximate a desired number of minutes, based on the coin's block-spacing parameters. If AutoCombineDust is enabled, it will only check and attempt to combine dust when the block height is a multiple of the block frequency; e.g. if set to 10, it executes every 10th block; if set to 100, every 100th block.
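
In rough, illustrative C++ (names are placeholders, not the exact wallet.cpp change), the gating amounts to:

// Illustrative only: decide whether the auto-combine scan should run
// for a newly connected block.
bool ShouldAutoCombine(bool fEnabled, int nBlockFrequency, int nBlockHeight)
{
    if (!fEnabled)
        return false;                            // feature is off
    if (nBlockFrequency <= 0)
        return false;                            // the "0" one-shot case is handled separately (see below)
    return nBlockHeight % nBlockFrequency == 0;  // e.g. 10 -> every 10th block
}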

This can now be tailored by the user, based on their desired dust-cleanup threshold and how often they expect to need a cleanup.

After some testing, I made one other addition: the concept of a "one shot" dust cleanup, which lets you specify a block frequency of zero to run the autocombine while processing the next block. This functionality assumes that someone running their wallet infrequently will likely want the one shot to run again when they next start the wallet. As such, the "on" state and "frequency 0" are saved in the database, but after the sweep runs it is shut off for the remainder of that wallet run.
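
Roughly, the one-shot behaviour looks like the following (a sketch with made-up names, not the actual implementation; the real settings live in the wallet database):

// Illustrative only: the enabled flag and frequency are persisted, while the
// "already ran" flag is in-memory, so a one-shot sweep fires once per session.
struct AutoCombineSettings {
    bool fEnabled = false;        // persisted: autocombinerewards on/off
    int  nBlockFrequency = 15;    // persisted: 0 means "one shot"
    bool fRanThisSession = false; // in-memory only
};

void MaybeRunOneShot(AutoCombineSettings& settings)
{
    if (settings.fEnabled && settings.nBlockFrequency == 0 && !settings.fRanThisSession) {
        // AutoCombineDust();            // the actual sweep would run here
        settings.fRanThisSession = true; // off for the rest of this wallet run,
                                         // but still "on" in the database
    }
}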

If they do not want it running on startup, they can simply run "autocombinerewards false" after it has done its one shot.

The feature, in its entirety:

  • Defaults to false
  • When "true", requires a threshold (in coin count)
  • Block frequency defaults to every 15 blocks. This can be changed to every block (resource intensive) or to however many blocks one expects it to take to reach the threshold.
  • Block frequency of "0" will run the sweep on the next block, and then turn it off until they run it again, or restart their wallet.
  • To prevent the one shot from running on startup, simply turn it off after the sweep is complete (when getautocombineinfo returns "on startup" rather than "on next block")

For example:
Configuring One Shot

autocombinerewards true 5 0
{
  "autocombine set to <on/off>  ": 1,
  "autocombine threshold set to <Coin Amount>": 5,
  "autocombine block frequency set to ": "one time"
}

Before the run

getautocombineinfo
{
  "autocombine set to <on/off>  ": 1,
  "autocombine threshold set to <Coin Amount>": 5,
  "autocombine block frequency set to ": "on next block"
}

After the run

getautocombineinfo
{
  "autocombine set to <on/off>  ": 1,
  "autocombine threshold set to <Coin Amount>": 5,
  "autocombine block frequency set to ": "on startup"
}

Configuring a recurring frequency

autocombinerewards true 5 250
{
  "autocombine set to <on/off>  ": 1,
  "autocombine threshold set to <Coin Amount>": 5,
  "autocombine block frequency set to ":  250
}

The latter will run every time the block count is a multiple of 250.
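
For a rough sense of wall-clock time, assuming a 60-second target block spacing (PIVX's target; other coins will differ):

15 blocks (the default) × 60 s/block = 900 s ≈ 15 minutes
250 blocks × 60 s/block = 15,000 s ≈ 4 hours 10 minutes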

Please note: This code has been tested on two PIVX-based coins I have implemented it in. I am not a PIVX holder, however, so I was unable to test this code in PIVX itself; but I felt the changes make autocombinerewards a useful feature, and porting them back to PIVX is the right thing to do. Conceptually it should work fine, but a mistake may have been made when porting, so please test on my behalf.

CaveSpectre11 and others added 9 commits April 11, 2019 06:29
* Initial release notes for v3.2.0

* In progress multi-input witness generation.

* more progress

* More progress on caching spend data.

* Remove security level from zerocoin spending.

* prevent infinite loop in miner

* ensure override is set for GetSerialHash()

* Include dependent headers in stakeinput.h

* set the accumulator start height when initializing the witness

* remove spam log output in stakeTargetHit()

* add benchmarking to zPIV spend inner functions

* fully initialize member variables when setting the witness data

* don't duplicate function call

* benchmark fixup

* don't hold cd_spendcache for the entire loop

* Remove security level from UI

* Add clearspendcache RPC command

* re-add zpivchain.h (fixes merge conflict error)

* more work on pre-computing

* Make GetSerialHash return const again

* Fix RPC spendzerocoin

* Precompute zPIV spends in a dedicated thread

Moves the spend precomputation process into its own thread, instead of
being tied to the staking thread.

Loop currently has a 5000ms sleep at the end so as to not overconsume
locks.

* Qt: Add precomputed percentage indicator column to zPIV control UI

Adds a new column to the zPIV control UI showing the mint's completed
precompute percentage.

* Allow zPIV precomputing to be disabled

Adds a new command line (conf) option `-precompute` to disable
precomputing. Default is to enable.

* Add Databasing of zpiv precomputes, they are not encrypted yet

* Update precompute cache

* Clear database cache with rpc call

* Fix rebase conflicts from precomputing

* Convert precompute cache into LRU cache

* Add global boolean, that stop precompute cache lock

* reorg + refactoring + conflicts

* precomputation-upstream conflicts solved

* getaccumulatorwitness method fix

* re added LNZP removed files

* Remove duplicate functions from rebase

* Fix assertion failure in mining/staking

Need to explicitly set pindexPrev and nHeight within the locked scope.

* Fixup coinspend unit test header includes

* Fix wrapped serial functional test

The error message had changed

* Remove security level in `DoZpivSpend()` call

* Disable precompute for functional tests

The locking can interfere with block generation

* Fixup the interface_rest functional test

* Better logging for CheckCoinSpend

Also re-introduce the regtest specific conditional

* Prevent lock spamming when no stakable inputs are present

* Cleanup arbitrary sleep times in CreateCoinStake

* Fixup bad logprintf

* fixup makefile.am

* Add more release notes

Includes notes regarding Fake Stake, Wrapped Serials, Regtest, and
Precompute

* more notes for RPC commands and gitian build script

* [Net] Increment Protocol Version

Minor protocol version increment, with no enforcement rulings, used in
tracking upgrade deployment across the network.

* fix to display missing clock5.png tx image

* Don't error out when a range has already been computed

When a range of blocks has already been computed, use that existing data instead of erroring out. This is used in the zPoS pipeline where, after precomputation, the passed `pindexCheckpoint` is likely to be at a block height below the accumulator's tip/end.

* Fix irrelevant receipt status message

Since security level has been removed, this status message needed to be
changed.

Also removed some stale unused code that was left over from when
security level was used.

* Reduce log spam

New debug category "staking" added with some previous `LogPrintf`
messages recategorized to reduce log spam.

Also noted both "staking" and "precompute" in the init help message and
added them to the "pivx" umbrella category.

* [Qt] Update localizations from Transifex

* [Refactoring] [Move Only] Move wallet files into their own folder

* [macOS] Remove DS_Store WindowBounds bytes object

* [Build] Bump master to 3.2.99 (non-release)

Now that 3.2 has been branched off, the `master` branch can be bumped up to the 3.2.99 "dummy" version specifier, i.e. pre-3.3.
…peared to be a failed attempt at delaying the AutoCombineDust() to occur after X minutes. That old algorithm was commented out and replaced with a 5-second wait. This concept was also flawed, in that it slept inside the ProcessNewBlock() code path. The first attempt was likely abandoned because it blocked ProcessNewBlock for 15 minutes at a time, with no way, if the feature was in use, to block for less than a minute. The 5-second workaround still wasn't desirable, as it still locked up ProcessNewBlock for those 5 seconds.

This new method changes the design from a threshold time to a block frequency, defaulting to 15 blocks. The value can be adjusted per implementation to approximate a desired number of minutes, based on the coin's block-spacing parameters. If AutoCombineDust is enabled, it will only check and attempt to combine dust when the block height is a multiple of the block frequency; e.g. if set to 10, it executes every 10th block; if set to 100, every 100th block.

Additionally, logic was added to provide a "one time" sweep, for when you don't want a regularly scheduled sweep but do want one at that given moment. Specifying a block frequency of zero will run AutoCombineDust() on the next block received and then turn it off. It is saved in the database as a one-time sweep, so it will run each time you start up the wallet. If you do not wish it to run on startup, you can turn off autocombinerewards after the sweep has been performed.

This can now be tailored by the user, based on their desired dust-cleanup threshold and how often they expect to need a cleanup.
…e threshold takes the total combined coins above the threshold, or when the total amount of dust is less than 10% above the threshold.

What occurs is twofold. First, the nTotalRewardsValue > nAutoCombineThreshold check will break out of the for loop; but the "safety margin" will split the result into two UTXOs, one within 10% of the threshold and one for the change. When the wallet comes back through on its dust collection, it can pick up those two UTXOs and repeat until the fees whittle the two combined amounts down below the threshold.

If there is another UTXO to add that gets us far enough above the threshold that the 10% reduction still leaves us above it, then we're fine once the 10% is accounted for in the for-loop check. However, there is still one other case that slips through. If the total amount being collected falls into the "within 10% of the threshold" situation and the for loop can't make another pass, we exit the for loop normally and reach the zero-fee check. That check sees that we are over the threshold, but not that the transaction amount itself will be over the threshold. So we need to account for the 10% there as well, by using the actual amount (vecSend[0].second) rather than nTotalRewardsValue to determine whether we should continue only if free.
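
In code terms, the shape of the two checks described above is roughly the following (a simplified sketch, not the actual AutoCombineDust() diff; variable names mirror the description):

#include <cstdint>

using CAmount = std::int64_t;

// Loop guard: only stop collecting inputs once the running total clears the
// threshold *plus* the ~10% safety margin that will be split off, so the
// combined output itself still lands above the threshold.
bool EnoughCollected(CAmount nTotalRewardsValue, CAmount nAutoCombineThreshold)
{
    return nTotalRewardsValue > nAutoCombineThreshold + nAutoCombineThreshold / 10;
}

// Zero-fee guard: judge "continue only if free" by the amount actually being
// sent (the vecSend[0].second analogue), not by the pre-split running total,
// so a near-threshold sweep is not re-attempted on every pass.
bool ShouldSkipUnlessFree(CAmount nSendAmount, CAmount nAutoCombineThreshold, CAmount nFee)
{
    return nSendAmount < nAutoCombineThreshold && nFee > 0;
}
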
@Fuzzbawls (Collaborator) left a comment


Overall Concept ACK, but a few things need changing:

  • RPC JSON results should be machine readable
    • Member names shouldn't include whitespace
    • Member names shouldn't include special characters (<, >, /, etc)
  • JSON Results should be deterministic
    • Expected format needs to be detailed in the help message
    • IF conditionals are used to modify the output format, that also needs to be detailed
  • RPC help text must include the following
    • Command name, with arguments (if any, see below)
    • Brief description
    • Arguments detail (if any)
    • Output detail (if not null)
    • CLI Example
    • RPC Example

The first line returned in the RPC help text should use the following format:
<command_name> <required_arguments> ( <optional_arguments> )
Note that all optional arguments come at the end and are wrapped in a single set of parentheses ( ).
Also, arguments should be named, even boolean arguments; e.g. enable is the name of a boolean argument and should be used instead of true|false in the first help line and the Arguments detail.
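
Applied to this PR, the first help line would presumably look something like the following (the argument names other than enable, and which arguments end up optional, are only guesses at what the final command might use):

autocombinerewards enable ( threshold blockfrequency )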

I'm also going to request that, when the above is taken into consideration, the commits be squashed/rebased down to the minimum number of commits necessary, without any merge commits.

Also, as an aside for any future PRs you may do: you can forgo changing any copyright years in file headers, as we have a script that we periodically run for that purpose :)

@CaveSpectre11 (Author)

A lot of this was from the original ThresholdTime code, and I didn't really question the format (although there were some things I didn't like about the output). I'll get this all taken care of this weekend, when I have my Linux box handy and don't have to use GitHub to fix up Travis errors.

@CaveSpectre11 (Author)

Replaced with #873
