
Fix ref count handling in Engine.failEngine #48639

Merged
original-brownbear merged 2 commits into elastic:master from original-brownbear:48625 on Oct 29, 2019

Conversation

@original-brownbear
Contributor

We can run into an already-closed store here and then throw when trying to
increment its ref count, so this switches to the guarded ref count increment.

closes #48625
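For context, the difference between the two increment styles is that the unguarded one throws once the resource is already closed, while the guarded one just returns `false`. A minimal sketch of that behavior (the `SimpleRefCounted` class below is illustrative, modeled loosely on ref-counted resources like the store, not the actual Elasticsearch implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative ref-counted resource; not the real Store class.
class SimpleRefCounted {
    private final AtomicInteger refCount = new AtomicInteger(1);

    // Unguarded increment: throws if the resource is already closed.
    void incRef() {
        if (!tryIncRef()) {
            throw new IllegalStateException("already closed, can't increment ref count");
        }
    }

    // Guarded increment: returns false instead of throwing when already closed.
    boolean tryIncRef() {
        while (true) {
            int count = refCount.get();
            if (count <= 0) {
                return false; // already closed
            }
            if (refCount.compareAndSet(count, count + 1)) {
                return true;
            }
        }
    }

    void decRef() {
        refCount.decrementAndGet();
    }

    void close() {
        decRef(); // release the initial reference
    }
}
```

With the guarded variant, a caller that races with a concurrent close can detect the closed state and skip its work instead of dying with an exception.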

@original-brownbear original-brownbear added >non-issue :Distributed/Recovery Anything around constructing a new shard, either from a local or a remote source. v8.0.0 v7.6.0 labels Oct 29, 2019
@elasticmachine
Collaborator

Pinging @elastic/es-distributed (:Distributed/Recovery)

```java
    docs[i] = client().prepareIndex("test").setId(id).setSource("field1", English.intToEnglish(numDocs + i));
}
indexRandom(true, docs);
numDocs *= 2;
```
Contributor Author


This line is just dead code; we never touch numDocs again in this test :)

Member

@dnhatn dnhatn left a comment


LGTM

@original-brownbear
Contributor Author

original-brownbear commented Oct 29, 2019

Thanks @dnhatn! Sorry for pushing
b60b437 before seeing your review. Can you take another look, though? I think that fix is better/safer, as it still goes through all the close steps even if the store got closed concurrently.
It also seems like this fix would give more accurate warnings and avoid needlessly messing with the ref count when we're not dealing with a corruption.
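The shape of the refined fix described above can be sketched as guarding only the corruption-marking step, so the remaining failure/close steps always run even when the store was closed concurrently. All names below (`StoreStub`, `failEngine`, the step strings) are illustrative stand-ins, not the actual Engine or Store API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for the store's ref counting; not the real Store class.
class StoreStub {
    private final AtomicInteger refCount = new AtomicInteger(1);

    boolean tryIncRef() {
        int count;
        do {
            count = refCount.get();
            if (count <= 0) {
                return false; // already closed
            }
        } while (!refCount.compareAndSet(count, count + 1));
        return true;
    }

    void decRef() { refCount.decrementAndGet(); }

    void close() { decRef(); }
}

class FailureHandler {
    // Returns the sequence of steps taken, for illustration.
    static String failEngine(StoreStub store, boolean corruption) {
        StringBuilder steps = new StringBuilder();
        if (corruption) {
            // Only the corruption-marking step is guarded: if the store is
            // already closed we log a warning and skip it instead of throwing.
            if (store.tryIncRef()) {
                try {
                    steps.append("markCorrupted;");
                } finally {
                    store.decRef();
                }
            } else {
                steps.append("warnStoreClosed;");
            }
        }
        // The close steps always run, even when the store closed concurrently.
        steps.append("closeEngine;");
        return steps.toString();
    }
}
```

The design point is that the early `tryIncRef` failure no longer aborts the whole failure handler, and the ref count is never touched at all on the non-corruption path.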

Member

@dnhatn dnhatn left a comment


LGTM. The new fix is better. Thanks @original-brownbear for an extra iteration :)

@original-brownbear original-brownbear merged commit 4b89171 into elastic:master Oct 29, 2019
@original-brownbear original-brownbear deleted the 48625 branch October 29, 2019 17:22
original-brownbear added a commit that referenced this pull request Oct 30, 2019
dnhatn pushed a commit that referenced this pull request Nov 3, 2019
@dnhatn dnhatn added the v7.5.0 label Nov 3, 2019
@dnhatn
Member

dnhatn commented Nov 3, 2019

I have backported this PR to 7.5 since #48414 needs it.

@mfussenegger mfussenegger mentioned this pull request Mar 26, 2020
37 tasks
mfussenegger added a commit to crate/crate that referenced this pull request Apr 30, 2020
mergify bot pushed a commit to crate/crate that referenced this pull request Apr 30, 2020
@original-brownbear original-brownbear restored the 48625 branch August 6, 2020 18:27

Labels

:Distributed/Recovery Anything around constructing a new shard, either from a local or a remote source. >non-issue v7.5.0 v7.6.0 v8.0.0-alpha1

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[CI] RelocationIT.testIndexAndRelocateConcurrently fails on master intake

4 participants