
Conversation

@sdaftuar (Member) commented Mar 9, 2017

I've been investigating performance regressions in CreateNewBlock.

addPackageTxs() is supposed to "fail fast" for transactions that don't fit in a block. However, while we generally skip the most expensive ancestor/descendant calculations for failing transactions, it turns out the other work we're doing (mostly map lookups, it seems) can still be pretty slow.

After trying various approaches to speed up those operations (replacing maps with unordered_maps, changing the sort order on the ancestor score function to avoid needless re-sorting, and even getting rid of the maps altogether in favor of storing set information directly in the mempool entry), the one optimization that dominates all these is to just return earlier when the block is nearly full.

So in this PR: when we're within 4000 weight of the block being full, if we consider and fail to add 1000 transactions in a row, then give up. I've benchmarked this as reducing the average run of CNB from ~~84ms to 63ms~~ 74ms to 61ms, with negligible difference in fees. Most of CNB's time is currently taken by the call to TestBlockValidity, which is unaffected by this PR; focusing just on package selection, this change reduces the average addPackageTxs() time from 19ms to 7ms over the time period I analyzed (the first week of December 2016).
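In code, the heuristic is roughly the following (a minimal sketch, not the PR's actual diff: ConsiderPackage and packageFits are hypothetical stand-ins for the selection loop and its package checks; nBlockWeight, nBlockMaxWeight, and MAX_CONSECUTIVE_FAILURES mirror the names in the diff hunks quoted below):

#include <cstdint>

// Sketch only: bail out of package selection when the block is nearly full
// AND many candidate packages in a row have failed to fit.
static const int64_t MAX_CONSECUTIVE_FAILURES = 1000;

// Returns false once selection should stop. packageFits stands in for the
// real package tests (weight, sigops, fees), which are not shown here.
bool ConsiderPackage(bool packageFits, int64_t packageWeight,
                     int64_t& nBlockWeight, int64_t nBlockMaxWeight,
                     int& nConsecutiveFailed)
{
    if (!packageFits) {
        ++nConsecutiveFailed;
        // The weight condition prevents giving up early in the block, where
        // a long run of failures is still worth pushing through.
        if (nConsecutiveFailed > MAX_CONSECUTIVE_FAILURES &&
            nBlockWeight > nBlockMaxWeight - 4000) {
            return false; // block is essentially full: stop selecting
        }
        return true; // keep trying other packages
    }
    nBlockWeight += packageWeight; // package added to the block
    nConsecutiveFailed = 0;        // a success resets the failure run
    return true;
}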

I also added some commits that provide benchmarking of CNB when running with -debug=bench, which I thought might be generally useful/interesting.

@fanquake added the Mining label Mar 9, 2017
@sdaftuar (Member, Author) commented Mar 9, 2017

Updated the OP with a correction and a more relevant statistic.

@luke-jr (Member) left a comment


utACK, certainly an improvement either way. (This behaviour used to exist before the CNB refactoring; maybe the old code is worth looking at.)


addPackageTxs();
int nPackagesSelected = 0;
int nDescendantsUpdated = 0;
Member:

Might make more sense to put these on the class, rather than pass by-reference?

Member Author:

I'm working on a refactor of the BlockAssembler members separately (as part of a patch to exclude recent transactions from new blocks, when the fee difference is negligible), so I'd prefer to defer the decision of whether to include this in the class until I'm ready to PR that patch.


// Limit the number of attempts to add transactions to the block when it is
// close to full.
const int64_t MAX_CONSECUTIVE_FAILURES = 1000;
Member:

The old code used 50 for this. Does 1000 work well enough in practice?

Member Author:

Yes; I also tried setting this to 100, and the performance was not really distinguishable.

++nConsecutiveFailed;

if (nConsecutiveFailed > MAX_CONSECUTIVE_FAILURES &&
    nBlockWeight > nBlockMaxWeight - 4000) {
Member:

Perhaps the additional condition(s) should come before the increment, or we could end up aborting immediately as we approach the max block weight, even if we could easily fit a few more txs in.

Member Author:

I'm not following; every time we add a new tx to the block we reset the counter... Can you clarify?

@laanwj added this to the 0.14.1 milestone Mar 9, 2017
@TheBlueMatt (Contributor):

utACK 03a4076, didn't bother to benchmark; should be obvious wins, even if minor.

@TheBlueMatt (Contributor) commented Mar 24, 2017

Is this worth backporting (my vote would be weak yes)?

@sdaftuar (Member, Author):

@TheBlueMatt I don't feel strongly about whether we backport this change, to be honest (though it's simple enough that I don't think there's any downside).

But it would be nice to merge this into master if there are no further comments, so I can continue work on additional mining changes (to support excluding recently received transactions if the fee difference from doing so is below some threshold).

@JeremyRubin (Contributor) left a comment


utACK. I've reviewed the changes and they seem reasonable. @sdaftuar I also left some ideas for a few complexity optimizations you may not yet have considered for the expensive loop, but those are for future work :).

src/miner.cpp Outdated
int BlockAssembler::UpdatePackagesForAdded(const CTxMemPool::setEntries& alreadyAdded,
                                           indexed_modified_transaction_set &mapModifiedTx)
{
    int nDescendants = 0;
Contributor:

Can you rename this to nDescendantsUpdated, to make clear that it should not be initialized to alreadyAdded.size()?

CTxMemPool::setEntries descendants;
mempool.CalculateDescendants(it, descendants);
// Insert all descendants (not yet in block) into the modified set
BOOST_FOREACH(CTxMemPool::txiter desc, descendants) {
Contributor:

This is a really interesting section. I got a bit nerd-sniped, so apologies for the long writeup.

Doing the following may be more efficient (I can code this up, but I don't want to step on your toes if you're operating here).

std::vector<CTxMemPool::txiter> diff(descendants.size());
auto end = std::set_difference(descendants.begin(), descendants.end(),
     alreadyAdded.begin(), alreadyAdded.end(), diff.begin()); // Linear time!
nDescendants += end - diff.begin();
for (auto it = diff.begin(); it != end; ++it) { // note: iterate to `end`, not diff.end()
  // ....
}

It requires an extra copy compared to the naive approach, but they're iterators so who cares (we can also reuse the vector for the entire call, allocated at the top of UpdatePackagesForAdded)...

Let N = alreadyAdded.size() and M = descendants.size().
This does O(M+N) work (I think most implementations actually do O(max(M, N)), but the standard specifies at most 2*(M+N)-1 comparisons), while what's currently happening would appear to be O(M*log(N)).

There is also a pre-loop-check one could do if it's likely they don't intersect.

// O(1), comparing with the set's ordering
if (*descendants.rbegin() < *alreadyAdded.begin() || *alreadyAdded.rbegin() < *descendants.begin())
    // skip set_difference; descendants - alreadyAdded == descendants

And a pre-loop narrowing one could do to make it O(min(M, N)).

// O(log(M) + log(N)): narrow each set to the region that can overlap the other
// (std::set has no range equal_range, so use lower_bound/upper_bound)
auto first1 = descendants.lower_bound(*alreadyAdded.begin());
auto last1 = descendants.upper_bound(*alreadyAdded.rbegin());
auto first2 = alreadyAdded.lower_bound(*descendants.begin());
auto last2 = alreadyAdded.upper_bound(*descendants.rbegin());
std::vector<CTxMemPool::txiter> diff(descendants.size());
auto end = std::set_difference(first1, last1, first2, last2, diff.begin()); // Linear time!
// (Descendants outside [first1, last1) can't be in alreadyAdded, so they
// belong to the difference unconditionally and still need handling.)
nDescendants += end - diff.begin();
for (auto it = diff.begin(); it != end; ++it) {
  // ....
}
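For concreteness, here's a self-contained toy of the set_difference pattern, with int keys standing in for CTxMemPool::txiter and default ordering standing in for the mempool's iterator comparator; note the loop runs only to the iterator returned by set_difference:

#include <algorithm>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    // Stand-ins for `descendants` and `alreadyAdded`.
    std::set<int> descendants{1, 3, 5, 7, 9};
    std::set<int> alreadyAdded{3, 4, 7, 8};

    // std::set iterates in sorted order, so set_difference applies directly.
    std::vector<int> diff(descendants.size());
    auto end = std::set_difference(descendants.begin(), descendants.end(),
                                   alreadyAdded.begin(), alreadyAdded.end(),
                                   diff.begin()); // at most 2*(M+N)-1 comparisons

    // Only [diff.begin(), end) holds results; the tail of `diff` is unused.
    for (auto it = diff.begin(); it != end; ++it) {
        std::cout << *it << '\n'; // prints 1, 5, 9
    }
    return 0;
}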

Member Author:

Thanks, will try some of these out in future optimization efforts.

}

// This transaction will make it in; reset the failed counter.
nConsecutiveFailed = 0;
Contributor:

Could you add more of a comment on why resetting is correct behavior here?

It seems to me at first glance that if we fail to add something earlier, we should continue to tally those as failures?
Perhaps:

Assuming we want to give up once the probability of adding something to the block falls below some threshold `T` (i.e., `T = P(success at M | failure at M-1, ..., 0)` where `M = 1000`), we expect that for all `0 < D < 1000`:

`P(success at N+D+1 | failure at N+D, ..., success at N, failure at N-1, ...) > P(success at N+D | failure at N, failure at N-1, ...)`?

Maybe not the most accurate framing, but it would be helpful for future reviewers trying to understand the intention.

Member Author:

This is incremented whenever we fail (not just for failures after we're above a certain block weight), so it needs to be reset when adding a new tx.
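For example (a toy trace with hypothetical outcomes, just to illustrate the reset):

// outcome:            F  F  F  S  F  F  ...  F
// nConsecutiveFailed: 1  2  3  0  1  2  ... 1001  -> early return possible
//
// A success anywhere restarts the count, so only an uninterrupted run of
// failures exceeding MAX_CONSECUTIVE_FAILURES (while within 4000 weight of
// nBlockMaxWeight) triggers the early return; isolated early failures never
// accumulate toward the limit.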

@sdaftuar (Member, Author):

Addressed @JeremyRubin's nits.

@JeremyRubin (Contributor):

re-utACK 4d1eb10

@sdaftuar force-pushed the 2017-03-cnb-optimizations branch from 4d1eb10 to 011124a on March 29, 2017
@sdaftuar (Member, Author):

Squashed 4d1eb10 -> 011124a

@TheBlueMatt (Contributor):

re-utACK 011124a

@gmaxwell (Contributor) left a comment


utACK

@laanwj merged commit 011124a into bitcoin:master Mar 30, 2017
laanwj added a commit that referenced this pull request Mar 30, 2017
…ools

011124a Update benchmarking with package statistics (Suhas Daftuar)
42cd8c8 Add benchmarking for CreateNewBlock (Suhas Daftuar)
eed816a Mining: return early when block is almost full (Suhas Daftuar)

Tree-SHA512: c0d8f71e4e0441acf3f4ca12f8705e413b59b323659346a447145653def71710537fb4c6d80cad8e36d68b0aabf19c92e9eab7135a8897b053ed58720856cdda
sdaftuar added a commit to sdaftuar/bitcoin that referenced this pull request Mar 30, 2017
@laanwj (Member) commented Mar 31, 2017

Removing 'needs backport' label as a backport exists (#10127)

@bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021
