celery 4.2.1
gevent 1.3.5
redis-py 2.10.6
With gevent.monkey.patch_all(), calling result.get() on a task whose result is still pending (for example because the worker is shut down, or the task sits behind a long queue) causes very high CPU usage on the producer side.
Tested on Python 2.7 and Python 3.6; both have this issue.
Test script:
import gevent
import gevent.monkey
gevent.monkey.patch_all()

import time

from celery import Celery

app = Celery()
app.conf.update(BROKER_URL='redis://localhost:6379/1',
                CELERY_RESULT_BACKEND='redis://localhost:6379/2')


@app.task()
def dummy():
    return '1'


ret = dummy.delay()
ret.get()
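To reproduce without the process hanging forever, the last two lines can be bounded with a timeout (assuming no worker is consuming the queue, so the result stays PENDING); the producer still spins at high CPU for the whole wait:

from celery.exceptions import TimeoutError

ret = dummy.delay()
try:
    # no worker running -> result stays pending, get() spins until the timeout
    ret.get(timeout=10)
except TimeoutError:
    print('still pending after 10s')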
Output of strace -p {pid}:
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
epoll_wait(3, [], 64, 1) = 0
getpid() = 10484
The output scrolls by very quickly, so gevent seems to be polling something actively. After debugging, I found that ret.get() gets stuck at https://github.com/celery/celery/blob/master/celery/backends/asynchronous.py#L99
If I change that sleep(0) to sleep(0.1), the high CPU usage goes away. I'm not sure what sleep(0) is meant to accomplish here.
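For what it's worth, here is a minimal sketch (not Celery's code, just an illustration) of why a polling loop that yields with gevent.sleep(0) spins flat out, while one that waits with gevent.sleep(0.1) does not:

import gevent
import gevent.monkey
gevent.monkey.patch_all()

import time


def poll(interval, duration=1.0):
    # Spin until `duration` seconds pass, sleeping `interval` on each pass.
    iterations = 0
    deadline = time.time() + duration
    while time.time() < deadline:
        gevent.sleep(interval)  # 0 -> switch to the hub and come right back
        iterations += 1
    return iterations


# sleep(0) never parks the greenlet, so the loop spins as fast as it can and
# keeps driving the event loop (the epoll_wait storm in the strace output).
print('sleep(0)   iterations in 1s:', poll(0))

# sleep(0.1) parks the greenlet in the hub, so it only wakes ~10 times.
print('sleep(0.1) iterations in 1s:', poll(0.1))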