Description
Checklist
- I have verified that the issue exists against the main branch of Celery.
- This has already been asked to the discussions forum first.
- I have read the relevant section in the contribution guide on reporting bugs.
- I have checked the issues list for similar or identical bug reports.
- I have checked the pull requests list for existing proposed fixes.
- I have checked the commit log to find out if the bug was already fixed in the main branch.
- I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).
- I have tried to reproduce the issue with pytest-celery and added the reproduction script below.
Mandatory Debugging Information
- I have included the output of celery -A proj report in the issue (if you are not able to do this, then at least specify the Celery version affected).
- I have verified that the issue exists against the main branch of Celery.
- I have included the contents of pip freeze in the issue.
- I have included all the versions of all the external dependencies required to reproduce this bug.
Optional Debugging Information
- I have tried reproducing the issue on more than one Python version and/or implementation.
- I have tried reproducing the issue on more than one message broker and/or result backend.
- I have tried reproducing the issue on more than one version of the message broker and/or result backend.
- I have tried reproducing the issue on more than one operating system.
- I have tried reproducing the issue on more than one workers pool.
- I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled.
- I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.
Related Issues and Possible Duplicates
Related Issues
- Tasks silently fail with result None if Redis result backend goes away #8588
- Tasks are not retried when backend store fails #8029
Possible Duplicates
- None
Environment & Settings
Celery version: 5.4.0
celery report Output:
Steps to Reproduce
Required Dependencies
- Minimal Python Version: N/A or Unknown
- Minimal Celery Version: N/A or Unknown
- Minimal Kombu Version: N/A or Unknown
- Minimal Broker Version: N/A or Unknown
- Minimal Result Backend Version: N/A or Unknown
- Minimal OS and/or Kernel Version: N/A or Unknown
- Minimal Broker Client Version: N/A or Unknown
- Minimal Result Backend Client Version: N/A or Unknown
Python Packages
pip freeze Output:
argcomplete==3.2.2
attrs==21.2.0
Automat==20.2.0
Babel==2.8.0
bcrypt==3.2.0
blinker==1.4
build==1.2.1
certifi==2020.6.20
chardet==4.0.0
click==8.1.7
cloud-init==24.3.1
colorama==0.4.4
command-not-found==0.3
configobj==5.0.6
constantly==15.1.0
cryptography==3.4.8
dbus-python==1.2.18
distro==1.7.0
distro-info==1.1+ubuntu0.2
httplib2==0.20.2
hyperlink==21.0.0
idna==3.3
importlib-metadata==4.6.4
incremental==21.3.0
jeepney==0.7.1
Jinja2==3.0.3
jsonpatch==1.32
jsonpointer==2.0
jsonschema==3.2.0
keyring==23.5.0
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
MarkupSafe==2.0.1
more-itertools==8.10.0
netifaces==0.11.0
oauthlib==3.2.0
packaging==23.2
pipx==1.4.3
platformdirs==4.1.0
pyasn1==0.4.8
pyasn1-modules==0.2.1
pycurl==7.44.1
Pygments==2.11.2
PyGObject==3.42.1
PyHamcrest==2.0.2
PyJWT==2.3.0
pyOpenSSL==21.0.0
pyparsing==2.4.7
pyproject_hooks==1.1.0
pyrsistent==0.18.1
pyserial==3.5
python-apt==2.4.0+ubuntu4
pytz==2022.1
PyYAML==5.4.1
requests==2.25.1
SecretStorage==3.3.1
service-identity==18.1.0
six==1.16.0
systemd-python==234
tomli==2.0.1
Twisted==22.1.0
ubuntu-pro-client==8001
ufw==0.36.1
unattended-upgrades==0.1
urllib3==1.26.5
userpath==1.9.1
wadllib==1.3.6
zipp==1.0.0
zope.interface==5.4.0
Other Dependencies
N/A
Minimally Reproducible Test Case
This can be reproduced with the following test in test_redis.py, which mirrors the expected behavior already defined in test_base.py, but for the RedisBackend.
# Intended to be added to t/unit/backends/test_redis.py, where Mock, states
# and basetest_RedisBackend are already available; imports repeated here for clarity.
from unittest.mock import Mock

from celery import states

try:
    from redis import exceptions
except ImportError:
    exceptions = None


class test_RedisBackend(basetest_RedisBackend):

    def test_store_result_with_retries(self):
        # Enable backend retries for the duration of the test.
        self.app.conf.result_backend_always_retry, prev = True, self.app.conf.result_backend_always_retry
        try:
            b = self.Backend(app=self.app)
            # b.exception_safe_to_retry = lambda exc: True
            b._sleep = Mock()
            b._get_task_meta_for = Mock()
            b._get_task_meta_for.return_value = {
                'status': states.RETRY,
                'result': {
                    "exc_type": "Exception",
                    "exc_message": ["failed"],
                    "exc_module": "builtins",
                },
            }
            # The first store attempt fails with a connection error,
            # the second one succeeds.
            b._store_result = Mock()
            b._store_result.side_effect = [
                exceptions.ConnectionError("failed"),
                42
            ]
            res = b.store_result("testing", 42, states.SUCCESS)  # raises redis.exceptions.ConnectionError: failed
            assert res == 42
            assert b._sleep.call_count == 1
        finally:
            self.app.conf.result_backend_always_retry = prev
Expected Behavior
Based on result_backend_always_retry I would expect the store_result operation to be retried for connection errors, similar to what the ElasticsearchBackend already does.
While the two linked issues discuss a similar problem, they propose a full task requeue instead of a retry of the failed operation. For our use case a simple retry is enough, but I understand that might not be ideal for everyone, for example if you do not wish to set result_backend_always_retry=True. A possible approach is sketched below.
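For illustration, here is a minimal sketch of what such a change could look like, modelled on the exception_safe_to_retry hook that the base backend exposes and that the Elasticsearch backend overrides. The subclass name and the exact set of retryable exception classes are my assumptions, not a confirmed fix:

```python
# Hypothetical sketch, not the actual fix: let the base-class retry loop in
# store_result treat transient Redis errors as retryable, analogous to the
# ElasticsearchBackend. The exception classes chosen here are an assumption.
from redis import exceptions as redis_exceptions

from celery.backends.redis import RedisBackend


class RetryingRedisBackend(RedisBackend):
    """RedisBackend that marks Redis connection problems as safe to retry."""

    def exception_safe_to_retry(self, exc):
        # With result_backend_always_retry enabled, the base backend retries
        # backend operations only for exceptions this hook accepts.
        if isinstance(exc, (redis_exceptions.ConnectionError,
                            redis_exceptions.TimeoutError)):
            return True
        return super().exception_safe_to_retry(exc)
```

A real fix would presumably implement exception_safe_to_retry on RedisBackend itself so that result_backend_always_retry works out of the box; the subclass above only shows where the hook sits.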
Actual Behavior
Errors are not retried; result_backend_always_retry is ignored by the Redis result backend.
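For completeness, a minimal configuration sketch (placeholder URLs, local Redis assumed for both broker and result backend) under which I expected the retry settings to apply:

```python
# Placeholder app/config for illustration; with the Redis result backend these
# retry settings currently have no effect on a failed store_result call.
from celery import Celery

app = Celery('proj',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')

app.conf.result_backend_always_retry = True   # expected to enable backend retries
app.conf.result_backend_max_retries = 10
app.conf.result_backend_base_sleep_between_retries_ms = 100
```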