
Conversation

@tsoos99dev
Contributor

Description

Async results create subscriptions that are stored in the backend's _pending_results store. The docs clearly warn that either get or forget must be called to clean up resources. Still, these subscriptions aren't cleaned up when, for example, only forget is called.
See #9797 for more details.

Changes (Fixes #9797)

Added a call to backend.remove_pending_result when forget is called to release any subscriptions and references stored in the backend.
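
Roughly, the change looks like this (a sketch of the idea, applied to AsyncResult.forget() in celery/result.py; the merged diff may differ slightly):

```python
# celery/result.py (sketch -- the merged diff may differ slightly)
class AsyncResult(ResultBase):
    ...

    def forget(self):
        """Forget the result of this task and its parents."""
        self._cache = None
        if self.parent:
            self.parent.forget()
        # Release the subscription/reference kept in the backend's
        # _pending_results store so it can be garbage collected.
        self.backend.remove_pending_result(self)
        self.backend.forget(self.id)
```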

@tsoos99dev
Contributor Author

tsoos99dev commented Jul 6, 2025

This was the smallest change I could come up with, but I believe the result backends could also be updated to take care of this on the call to forget. In that case I can create a new PR for that, though I'm fine with this approach.
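
For reference, a rough sketch of that backend-side alternative (hypothetical, not part of this PR; the method names assume celery's AsyncBackendMixin and the real hook may differ):

```python
# Hypothetical alternative (not part of this PR): let the async backend
# mixin clean up on forget() itself. Method names assume celery's
# AsyncBackendMixin; the real hook may differ.
class AsyncBackendMixin:
    def forget(self, task_id):
        # Drop any weak/concrete entries kept for this task in
        # _pending_results before forgetting the stored result.
        self._remove_pending_result(task_id)
        super().forget(task_id)
```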

@auvipy auvipy requested review from auvipy and Copilot July 7, 2025 02:49

This comment was marked as outdated.

@codecov

codecov bot commented Jul 7, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 78.61%. Comparing base (6dcecbe) to head (a761b28).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #9806      +/-   ##
==========================================
- Coverage   78.62%   78.61%   -0.01%     
==========================================
  Files         153      153              
  Lines       19199    19200       +1     
  Branches     2547     2547              
==========================================
  Hits        15095    15095              
+ Misses       3811     3809       -2     
- Partials      293      296       +3     
Flag Coverage Δ
unittests 78.59% <100.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


Member

@auvipy auvipy left a comment


Should we also add unit and integration tests, or are they already covered?

@auvipy auvipy added this to the 5.6.0 milestone Jul 7, 2025
@auvipy auvipy requested a review from Copilot July 7, 2025 06:32

This comment was marked as outdated.

@tsoos99dev
Contributor Author

tsoos99dev commented Jul 7, 2025

Should we also add unit and integration tests, or are they already covered?

I think so. A sanity check seems appropriate for this issue.

@auvipy
Member

auvipy commented Jul 7, 2025

Let's do it.

@tsoos99dev
Contributor Author

I added this unit test, but it feels a bit redundant.
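
Something along these lines (a sketch only; the mock-based style and names are assumptions, not necessarily the exact test added here):

```python
from unittest.mock import Mock

from celery.result import AsyncResult


def test_forget_removes_pending_result():
    # A mock backend is enough to check that forget() also releases the
    # pending-result subscription, not just the stored result.
    backend = Mock()
    result = AsyncResult("some-task-id", backend=backend)

    result.forget()

    backend.remove_pending_result.assert_called_once_with(result)
    backend.forget.assert_called_once_with("some-task-id")
```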

@tsoos99dev tsoos99dev requested a review from auvipy July 12, 2025 22:36
@auvipy auvipy requested a review from Copilot July 13, 2025 03:30

This comment was marked as resolved.

@auvipy
Member

auvipy commented Jul 13, 2025

self = <pytest_celery.api.worker.CeleryTestWorker object at 0x7f42339a6b10>
log = 'Starting long running task', message = '', timeout = 60

  def assert_log_exists(self, log: str, message: str = "", timeout: int = RESULT_TIMEOUT) -> None:
      """Assert that a log exists in the container.
  
      Args:
          log (str): Log to assert.
          message (str, optional): Message to display while waiting. Defaults to "".
          timeout (int, optional): Timeout in seconds. Defaults to RESULT_TIMEOUT.
      """
      try:
          self.wait_for_log(log, message, timeout)
      except pytest_docker_tools.exceptions.TimeoutError:
      assert False, f"Worker container '{self.name()}' did not log -> {log} within {timeout} seconds"

E AssertionError: Worker container 'peaceful_kowalevski' did not log -> Starting long running task within 60 seconds
E assert False

@auvipy
Member

auvipy commented Jul 13, 2025

t/integration/test_canvas.py::test_stamping_mechanism::test_all_tasks_of_canvas_are_stamped [Assert all tasks are stamped] SUBPASS::debug::Code: null
Error: Final attempt failed. Timeout of 3600000ms hit

@auvipy
Member

auvipy commented Jul 13, 2025

The test failures seem to be unrelated.

@auvipy
Member

auvipy commented Jul 14, 2025

So how should we move forward with this? #9797 (comment)

@auvipy auvipy self-requested a review July 15, 2025 08:00
@tsoos99dev
Contributor Author

So how should we move forward with this? #9797 (comment)

I don't see how it could be related at the moment, and I had the same issues with multiple previous releases. I'm hoping it's an issue with my local setup. We can go ahead.

@auvipy auvipy requested a review from Copilot July 17, 2025 04:35

This comment was marked as resolved.

@auvipy auvipy force-pushed the fix-pending-tasks-memory-leak branch from 9ddb04c to 306dad6 Compare August 7, 2025 08:39
@auvipy auvipy force-pushed the fix-pending-tasks-memory-leak branch from 306dad6 to a761b28 Compare August 10, 2025 06:33
@auvipy auvipy requested a review from Copilot August 10, 2025 06:34

This comment was marked as resolved.

@auvipy auvipy merged commit 7adc9e6 into celery:main Aug 11, 2025
0 of 2 checks passed
@tsoos99dev tsoos99dev deleted the fix-pending-tasks-memory-leak branch August 15, 2025 10:55


Development

Successfully merging this pull request may close these issues.

Group results aren't cleaned up on result.forget() - memory leak

2 participants