# Reliability fixes for the Thread Pool #121887
## Conversation
I think this is ready for review.
Pull request overview
This PR addresses critical reliability issues in three thread pool implementations by reverting problematic changes from #100506 that led to deadlocks in .NET 9. The fix restores a simpler and safer enqueuer/worker handshake protocol that guarantees every enqueued work item will eventually be observed and processed by a worker thread.
Key Changes:
- Simplified the `QueueProcessingStage` enum by removing the `Determining` state, leaving only `NotScheduled` and `Scheduled`
- Changed the worker thread protocol to reset the processing stage to `NotScheduled` before checking for work items (preventing a race condition window; see the sketch after this list)
- Removed complex retry/dequeue logic and the `_nextWorkItemToProcess` optimization in favor of always requesting an additional worker when processing an item
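
To make the restored protocol concrete, here is a minimal, hypothetical sketch of the two-state handshake. The stage names follow the PR, but the class, the queue, and the `RequestWorker` plumbing are invented simplifications, not the actual runtime code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

internal sealed class HandshakeQueueSketch
{
    private enum QueueProcessingStage { NotScheduled = 0, Scheduled = 1 }

    private readonly ConcurrentQueue<Action> _workItems = new();
    private int _stage = (int)QueueProcessingStage.NotScheduled;

    public void Enqueue(Action workItem)
    {
        _workItems.Enqueue(workItem);

        // Only the NotScheduled -> Scheduled transition requests a worker, so
        // enqueuers do not stampede, yet every item is guaranteed a worker
        // that will observe it.
        if (Interlocked.Exchange(ref _stage, (int)QueueProcessingStage.Scheduled) ==
            (int)QueueProcessingStage.NotScheduled)
        {
            RequestWorker();
        }
    }

    private void Dispatch()
    {
        // Reset the stage BEFORE checking the queue. Any enqueuer that adds an
        // item after this point observes NotScheduled and requests a fresh
        // worker, so no item can be stranded.
        Interlocked.Exchange(ref _stage, (int)QueueProcessingStage.NotScheduled);

        if (!_workItems.TryDequeue(out Action? workItem))
        {
            return; // queue observed empty after the reset; safe to leave
        }

        // We claimed an item: re-arm the flag and unconditionally request one
        // more worker, since further items may be (or may soon be) behind it.
        Interlocked.Exchange(ref _stage, (int)QueueProcessingStage.Scheduled);
        RequestWorker();

        // Drain; the extra worker requested above covers any racing enqueues.
        do
        {
            workItem();
        }
        while (_workItems.TryDequeue(out workItem));
    }

    // In the real thread pool this wakes/creates a worker thread; here we just
    // borrow the .NET pool for illustration.
    private void RequestWorker() => ThreadPool.UnsafeQueueUserWorkItem(_ => Dispatch(), null);
}
```

The essential property is the order of operations in `Dispatch`: the stage is reset to `NotScheduled` before the queue is checked, so a racing enqueuer either sees `NotScheduled` and requests a worker itself, or its item is already visible to the worker's dequeue. There is no window in which both sides assume the other will handle the item.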
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPoolWorkQueue.cs | Simplified general-purpose ThreadPool enqueuer/worker handshake by removing Determining state, _nextWorkItemToProcess field, and complex retry logic; streamlined Dispatch() to always request a worker after dequeuing an item |
| src/libraries/System.Net.Sockets/src/System/Net/Sockets/SocketAsyncEngine.Unix.cs | Applied same handshake simplification to Unix socket async engine; removed UpdateEventQueueProcessingStage() method; simplified Execute() to use consistent pattern of resetting state before checking queue |
…ToHighPriorityGlobalQueue.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
/benchmark plaintext,json,fortunes aspnet-citrine-lin runtime,libs
Benchmark started for plaintext, json, fortunes on aspnet-citrine-lin with runtime, libs. Logs: link
An error occurred, please check the logs
Is this just a revert of the earlier change? It looks like that change had some perf improvements, so we might notice some regressions due to the revert.
Yes, it is a partial revert.
There were several changes in the original PR, and the result was some improvements and also a couple of regressions. This PR reverts only the part that is important for correctness. Not sure how much that part was contributing to the perf results.
There are ways to make the thread pool less eager about introducing workers, but that would need to happen in the part that actually controls the introduction of threads - around the LIFO semaphore and the logic that controls it. In the airport parking lot shuttle analogy: the parking lot decides how many shuttles they have in rotation, but this flag is just calling them and telling them that a traveler has arrived and needs to be picked up, somehow, eventually...
The changes look good to me. Admittedly, the current logic to handle thread requests is quite complicated, so it was hard to be 100% sure that it worked correctly. However, since the current thread request handling was introduced to deal with some issues around using more CPU in certain scenarios (which we may want to address again after this PR is merged), should we add a test similar to the one in #121608, so that it's more likely to detect work items not getting picked up?
The repro involves two processes: a client and a server. They may run for quite a while before hanging - it could be minutes. And the time-to-hang appears to depend on the CPU manufacturer/model or the number of cores. It is a good real-world-like sample for running locally as a stress test, but hardly useful as a CI test. I think we have some existing tests that look for TP items not being picked up, but since this issue requires rare races, it could be that tests cannot detect it on the kind of hardware the lab runs them on (or catch it only very rarely). Maybe we should think of some kind of "stress suite" for the thread pool - at least keep a collection of apps known to have had stress issues in the past, like this example.
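
As a rough, hypothetical illustration of what such a detection test could look like (the names and structure below are invented, not taken from the repo), a watchdog-style loop can enqueue items and fail if one is not picked up within a generous timeout:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

internal static class StrandedWorkItemStress
{
    public static void Run(TimeSpan duration, TimeSpan itemTimeout)
    {
        DateTime deadline = DateTime.UtcNow + duration;
        while (DateTime.UtcNow < deadline)
        {
            var executed = new TaskCompletionSource();

            // Enqueue a trivial item and watch for it to be picked up.
            ThreadPool.UnsafeQueueUserWorkItem(_ => executed.SetResult(), null);

            if (!executed.Task.Wait(itemTimeout))
            {
                throw new Exception(
                    "Work item was not picked up in time - possibly stranded.");
            }
        }
    }
}
```

Because the underlying race is rare, a sequential loop like this would mostly pass; it would need long runs, many concurrent producers, and varied hardware to have a realistic chance of catching the hang - which is the argument for a dedicated stress suite.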
I was curious about what the perf effect here could be and tried the /benchmark command. It is not blocking, but it seems the right tool for this kind of query.
I'll try one more time with /benchmark. Maybe it was some transient infra issue.
/benchmark plaintext,json,fortunes aspnet-citrine-lin runtime,libs
Benchmark started for plaintext, json, fortunes on aspnet-citrine-lin with runtime, libs. Logs: link
I think I've addressed all the concerns/questions. Let me know if I missed something.
Thanks!!
Any plans for backporting to .NET 9 / .NET 10? Or waiting for performance reports like #122186?
Yes, it will be ported to net10 and net9. Just letting the fix run for a few days to be sure it does not cause some unintended effects.
Fixes: #121608
Backport of #121887

The ThreadPool in rare cases allows a scenario where an enqueued work item is not guaranteed to be executed unless more work items are enqueued. In some scenarios execution of a particular work item may be necessary before more work is enqueued, thus leading to a deadlock. This is a subtle regression introduced by a change in the enqueuer/worker handshake algorithm. The same pattern is used in 2 other ThreadPool-like internal features in addition to the ThreadPool.

## Customer Impact

- [ ] Customer reported
- [x] Found internally

## Regression

- [x] Yes
- [ ] No

## Testing

Standard test pass for deterministic regressions. A targeted stress application that demonstrates the issue. (Without the fix: hangs within 1-2 minutes; with the fix: runs for 20+ minutes on the same system.)

## Risk

Low. This is a revert to the preexisting algorithm for the enqueuer/worker handshake (in all 3 places).

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Andy Gocke <angocke@microsoft.com>
This is a follow-up on recommendations from the Scalability Experiments done some time ago. The Scalability Experiments resulted in many suggestions; in this part we look at the overheads of submitting and executing a work item on the thread pool from the thread-scheduling point of view. In particular, this PR tries to minimize changes to the work queue in order to scope the changes; the work-queue-related recommendations will be addressed separately. The thread pool parts are very interconnected though, and sometimes removing one bottleneck results in another one showing up, so some work queue changes had to be done just to avoid regressions. There are also a few "low hanging fruit" fixes for per-work-item overheads like unnecessary fences or too-frequent modifications of shared state. Hopefully this will negate some of the regressions from #121887 (as was reported in #122186).

In this change:

- Fewer operations per work item where possible, such as fewer/weaker fences where possible, reporting the heartbeat once per dispatch quantum vs. per work item, etc.
- Avoid spurious wakes of worker threads (except, unavoidably, when the thread goal is changed - by HillClimb and such). Only one thread is requested at a time, and requesting another thread is conditioned on evidence of work present in the queue (basically the minimum required for correctness). As a result, a thread that becomes active typically finds work; in particular this avoids a cascade of spurious wakes when the pool is running out of work items.
- Stop tracking the spinners count in the LIFO semaphore. We can keep track of spinners, but the informational value of knowing such an extremely transient count is close to zero, so we should not.
- No Sleep in the LIFO semaphore. Using spin-Sleep is questionable in a synchronization primitive that can block and ask the OS to wake a thread deterministically. (A sketch of the spin-then-block pattern follows this list.)
- Shorten spinning in the LIFO semaphore to a more affordable value. Since the LIFO semaphore can perform a blocking wait until the condition it wants to see happens, once spinning gets into the range of wait/wake latency it makes no sense to spin for much longer. It is also not uncommon that the work is introduced by non-pool threads, thus the pool threads may have to block just to allow more work to be scheduled.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
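
To make the last two bullets concrete, here is a hypothetical sketch of the spin-then-block pattern (the class and member names are invented, and the real LIFO semaphore is a custom primitive rather than `SemaphoreSlim`): spin for roughly as long as a wait/wake round trip would take, then fall back to a deterministic blocking wait rather than interleaving `Thread.Sleep` into the spin.

```csharp
using System;
using System.Threading;

internal sealed class SpinThenBlockWaiter
{
    private readonly SemaphoreSlim _semaphore = new(initialCount: 0);

    public void Release() => _semaphore.Release();

    // Spin only for roughly as long as a blocking wait/wake round trip would
    // take; past that point spinning just burns CPU that enqueuing (possibly
    // non-pool) threads may need.
    public void WaitForWork(int spinCount)
    {
        for (int i = 0; i < spinCount; i++)
        {
            if (_semaphore.Wait(0)) // non-blocking attempt
            {
                return;
            }
            Thread.SpinWait(1 << Math.Min(i, 10)); // brief backoff, no Thread.Sleep
        }

        // Deterministic OS wake via the semaphore, instead of a spin-Sleep loop.
        _semaphore.Wait();
    }
}
```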
When inserting work items into thread pool queues, we must always guarantee that for every work item there will be some worker at some point in the future that will certainly notice the presence of the work item and execute it.
There was an attempt to relax the requirements in #100506.
Sadly, it leads to occasional deadlocks when items are present in the work queues and no workers are coming to pick them up.
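
To make the failure mode concrete, here is a contrived, hypothetical example of the kind of dependency that turns a stranded work item into a deadlock (it is not a repro from the linked issues):

```csharp
using System.Threading;
using System.Threading.Tasks;

// The queued work item is the only thing that can unblock the producer, and
// the producer will not enqueue anything else until it runs.
var step = new TaskCompletionSource();

ThreadPool.UnsafeQueueUserWorkItem(_ => step.SetResult(), null);

// If the item above can be stranded until "more work is enqueued", and no
// more work will be enqueued until it runs, this wait never completes.
step.Task.Wait();
```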
The same change was made in all 3 threadpools - IO completion, Sockets and the general purpose ThreadPool. The fix is applied to all three threadpools.
We have seen reports about deadlocks when running on net9 or later releases.
The fix will need to be ported to net10 and net9, thus this PR tries to restore just the part that changed the enqueuer/worker handshake algorithm.
More changes have piled up in the thread pool since then, so doing the minimal fix without disturbing the rest is somewhat tricky.
Fixes: #121608 (definitely, I have tried it with the repro)
Fixes: #119043 (likely; I do not have a repro to try, but the symptoms look like the same issue)