client channel: don't hold mutexes while calling the ConfigSelector or the LB picker #31973
Merged
markdroth merged 21 commits into grpc:master on Feb 7, 2023
Conversation
…w to drain queue safely
ctiller approved these changes on Jan 26, 2023
markdroth (Member, Author) commented on Jan 26, 2023:
I also changed another existing place where we were using std::set<> to use absl::flat_hash_set<> instead.
markdroth added a commit to markdroth/grpc that referenced this pull request on Feb 8, 2023
…ConfigSelector or the LB picker (grpc#31973)" (grpc#32324)" This reverts commit 90524b0.
markdroth added a commit that referenced this pull request on Feb 16, 2023
…ConfigSelector or the LB picker (#32326)

Original attempt was #31973, reverted in #32324 due to test flakiness. There were two problems causing test flakiness here.

The first problem was that, upon resolver error, we were dispatching an async callback to re-process each of the queued picks *before* we updated the channel's connectivity state. This meant that the queued picks might be re-processed in another thread before the new connectivity state was set, so tests that expected the state to be TRANSIENT_FAILURE once RPCs failed might not see the expected state.

The second problem affected the xDS ring hash tests, and it's a bit more involved to explain. We have an e2e test that simulates an aggregate cluster failover from a primary cluster using ring_hash at startup. The primary cluster has two addresses, both of which are unreachable when the client starts up, so the client should immediately fail over to the secondary cluster, which does have reachable endpoints. The test requires that no RPCs fail while this failover occurs. The original PR made this test flaky. The problem here was caused by a combination of two factors:

1. Prior to the original PR, when the picker was updated (which happens inside the WorkSerializer), we re-processed previously queued picks synchronously, so it was not possible for another subchannel connectivity state update (which also happens in the WorkSerializer) to be processed between the time that we updated the picker and the time that we re-processed the previously queued picks. The original PR changed this such that the queued picks are re-processed asynchronously (outside of the WorkSerializer), so it is now possible for a subchannel connectivity state update to be processed between when the picker is updated and when we re-process the previously queued picks.
2. Unlike most LB policies, where the picker does not see updated subchannel connectivity states until a new picker is created, the ring_hash picker gets the subchannel connectivity states from the LB policy via a lock, so it can wind up seeing the new states before it gets updated. This means that when a subchannel connectivity state update is processed by the ring_hash policy in the WorkSerializer, it will immediately be seen by the existing picker, even without a picker update.

With those two points in mind, the sequence of events in the failing test was as follows:

1. A pick is attempted in the ring_hash picker for the primary cluster. This causes the first subchannel to attempt to connect.
2. The subchannel transitions from IDLE to CONNECTING. A new picker is returned due to the subchannel connectivity state change, and the channel retries the queued pick. The retried pick is done asynchronously, but in this case it does not matter: the call will be re-queued.
3. The connection attempt fails, and the subchannel reports TRANSIENT_FAILURE. A new picker is again returned, and the channel retries the queued pick. The retried pick is done asynchronously, but in this case it does not matter: this causes the picker to trigger a connection attempt for the second subchannel.
4. The second subchannel transitions from IDLE to CONNECTING. A new picker is again returned, and the channel retries the queued pick. The retried pick is done asynchronously, and in this case it *does* matter.
5. The second subchannel now transitions to TRANSIENT_FAILURE. The ring_hash policy will now report TRANSIENT_FAILURE, but before it can finish that...
6. ...in another thread, the channel tries to re-process the queued pick using the CONNECTING picker from step 4. However, because the ring_hash policy has already seen the TRANSIENT_FAILURE report from the second subchannel, that picker will now fail the pick instead of queuing it.

After discussion with @ejona86 and @dfawley (since this bug actually exists in Java and Go as well), we agreed that the right solution is to change the ring_hash picker to contain its own copy of the subchannel connectivity state information, rather than sharing that information with the LB policy using synchronization.
XuanWang-Amos pushed a commit to XuanWang-Amos/grpc that referenced this pull request on May 1, 2023
…r the LB picker (grpc#31973)
* WIP
* use the same lock for queue and picker -- still need to figure out how to drain queue safely
* working!
* always unref pickers in WorkSerializer
* simplify tracking of queued calls
* fix sanity
* simplify resolver queueing code
* fix tsan failure
* simplify queued LB pick reprocessing
* minor cleanups
* remove unnecessary wrap-around checks
* clang-format
* generate_projects
* fix pollset bug
* use absl::flat_hash_set<> instead of std::set<>
* fix use-after-free in retry code
* add missing BUILD dep
XuanWang-Amos pushed a commit to XuanWang-Amos/grpc that referenced this pull request on May 1, 2023
…lector or the LB picker (grpc#31973)" (grpc#32324) This reverts commit e699e01.
XuanWang-Amos pushed a commit to XuanWang-Amos/grpc that referenced this pull request on May 1, 2023
…ConfigSelector or the LB picker (grpc#32326)
wanlin31 pushed a commit that referenced this pull request on May 18, 2023
…r the LB picker (#31973)
wanlin31 pushed a commit that referenced this pull request on May 18, 2023
…ConfigSelector or the LB picker (#32326)
markdroth added a commit that referenced this pull request on Aug 25, 2023
Specific changes:

* Don't hold mutexes while calling the ConfigSelector and the LB picker. Instead, each call holds the mutex for just long enough to grab a ref to the object. This has a bunch of side effects:
  * Because we are now releasing the refs to the ConfigSelector and LB picker outside of the WorkSerializer, we need to explicitly hop back into the WorkSerializer to release the refs, since ConfigSelectors and picker implementations should be able to expect that they are destroyed in the WorkSerializer. (A couple of LB policies actually had some code to take care of this themselves, but that shouldn't be necessary, so I've removed it.)
  * Use EventEngine::Run() instead of ExecCtx::Run() for re-processing queued calls. This avoids the need to store a grpc_closure for each of name resolution and load balancing, which reduces per-call memory.
  * Use absl::flat_hash_set<> to store queued calls instead of a bespoke linked-list approach. This allocates memory on the slow path, but it reduces per-call memory usage. (It also makes it more efficient to cancel an individual call when there are a large number of queued calls.)

Also a couple of tangential changes while I was here:

* Switched from rand(3) to the absl random library in a few places.
* …SubchannelWrapper.
* Use absl::flat_hash_set<> instead of std::set<> for the set of subchannel wrappers.