When I use celery + gevent for tasks that use the subprocess module, I get the following stack trace:
Traceback (most recent call last):
File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
return self.run(*args, **kwargs)
File "/home/webapp/admin/webadmin/apps/loggingquarantine/tasks.py", line 107, in release_mail_task
res = call_external_script(popen_obj.communicate)
File "/home/webapp/admin/webadmin/apps/core/helpers.py", line 42, in call_external_script
return func_to_call(*args, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 740, in communicate
return self._communicate(input)
File "/usr/lib64/python2.7/subprocess.py", line 1257, in _communicate
stdout, stderr = self._communicate_with_poll(input)
File "/usr/lib64/python2.7/subprocess.py", line 1287, in _communicate_with_poll
poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'
My manage.py looks like this (the monkey-patching happens there):
#!/usr/bin/env python
from gevent import monkey
import sys
import os

if __name__ == "__main__":
    if 'celery' not in sys.argv:
        monkey.patch_all()
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webadmin.settings")
    from django.core.management import execute_from_command_line
    sys.path.append(".")
    execute_from_command_line(sys.argv)
Is there a reason why the celery tasks act as if they weren't patched properly?
P.S. The strange thing is that my local setup on macOS works fine, while I get these exceptions under CentOS (all package versions are the same, and so are the init and config scripts).
gevent has no cooperative replacement for poll, so monkey.patch_all removes the polling mechanisms that gevent.select does not simulate: poll, epoll, kqueue and kevent. That is why the select.poll() call inside subprocess._communicate_with_poll raises AttributeError after patching. See gevent.monkey – Make the standard library cooperative.
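A possible workaround, sketched under the assumption that gevent >= 1.0 is in use (where gevent.subprocess and the subprocess flag to patch_all exist). The helper below is a simplified stand-in for the call_external_script from the question, not the original implementation:

# Option 1: also patch subprocess when monkey-patching (gevent >= 1.0)
from gevent import monkey
monkey.patch_all(subprocess=True)

# Option 2: use gevent's cooperative subprocess module directly,
# without relying on the patched stdlib module
from gevent import subprocess as gsubprocess

def run_external_script(cmd):
    # communicate() here cooperates with the gevent hub and never calls select.poll()
    proc = gsubprocess.Popen(cmd, stdout=gsubprocess.PIPE, stderr=gsubprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out, err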
Related
I'm trying to use celery for background processing in my Django application. The Django version is 1.4.8 and the latest suitable celery version is 3.1.25.
I use Redis (3.1.0) as broker and backend, with json as the serializer.
When I start the worker with
celery -A celery_app worker -l info I get AttributeError: 'unicode' object has no attribute 'iteritems'.
My settings.py file:
BROKER_URL = 'redis://localhost'
CELERY_RESULT_BACKEND = 'redis://localhost/'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True
celery_app.py:
import os
import sys
from django.conf import settings
from celery import Celery
project_root = os.path.dirname(__file__)
sys.path.insert(0, os.path.join(project_root, '../env'))
sys.path.insert(0, os.path.join(project_root, '../'))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('project.settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS, force=True)
tasks.py:
@celery_app.task
def sample_task(x):
    return 'Test response'
and that's how I run this task:
sample_task.delay({'key': 'test'})
And I get the following error:
File "/Users/user/project/venv/lib/python2.7/site-packages/redis/_compat.py", line 94, in iteritems
return x.iteritems()
AttributeError: 'unicode' object has no attribute 'iteritems'
full traceback:
[2019-01-31 16:43:08,909: ERROR/MainProcess] Unrecoverable error: AttributeError("'unicode' object has no attribute 'iteritems'",)
Traceback (most recent call last):
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/bootsteps.py", line 374, in start
return self.obj.start()
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 280, in start
blueprint.start(self)
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 884, in start
c.loop(*c.loop_args())
File "/Users/user/project/venv/lib/python2.7/site-packages/celery/worker/loops.py", line 76, in asynloop
next(loop)
File "/Users/user/project/venv/lib/python2.7/site-packages/kombu/async/hub.py", line 340, in create_loop
cb(*cbargs)
File "/Users/user/project/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 1019, in on_readable
self._callbacks[queue](message)
File "/Users/user/project/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 534, in _callback
self.qos.append(message, message.delivery_tag)
File "/Users/user/project/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 146, in append
pipe.zadd(self.unacked_index_key, delivery_tag, time()) \
File "/Users/user/project/venv/lib/python2.7/site-packages/redis/client.py", line 2320, in zadd
for pair in iteritems(mapping):
File "/Users/user/project/venv/lib/python2.7/site-packages/redis/_compat.py", line 94, in iteritems
return x.iteritems()
AttributeError: 'unicode' object has no attribute 'iteritems'
I tried to find the issue on the internet and tried passing different params to the task. I don't know how to debug the celery process and could not find the solution by myself. Please help me.
It seems that this Celery version doesn't support redis-py 3 (the Python Redis client). Try installing redis 2.10.6 instead.
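The traceback shows kombu's Redis transport calling pipe.zadd() with the old positional arguments, while redis-py 3 expects a mapping there. A minimal fix along those lines, assuming it is the Python client that needs pinning and not the Redis server:
pip install "redis==2.10.6"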
Every time I run the code I see Serving Flask-SocketIO app "app.py".
I don't even have SocketIO on my system.
import os, passlib, requests, time, json
from flask import Flask, session, render_template, request, redirect, url_for, jsonify
from flask_session import Session
import pandas as pd
from flask_uploads import UploadSet, configure_uploads, IMAGES, patch_request_class
from werkzeug import secure_filename
from datetime import date, datetime
from flask_sqlalchemy import SQLAlchemy
from passlib.hash import sha256_crypt
from flask_login import LoginManager, UserMixin, current_user, login_user, login_required, logout_user
app = Flask(__name__)
At the end of my code I have this:
if __name__ == "__main__":
    app.run(debug=True)
I run the app with the command flask run.
I used to ignore this message since everything else was working, but then I tried to change the call to app.run(debug=True, host='0.0.0.0').
Also, debug=True didn't seem to do anything anymore; I had to turn it on from the command line with set Flask_Debug=1.
Then I got an error stating ValueError: signal only works in main thread, and the app didn't run at all.
The exact error:
* Serving Flask-SocketIO app "app.py"
* Forcing debug mode on
* Restarting with stat
* Debugger is active!
* Debugger PIN: 885-769-473
Exception in thread Thread-1:
Traceback (most recent call last):
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\threading.py
", line 917, in _bootstrap_inner
self.run()
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\threading.py
", line 865, in run
self._target(*self._args, **self._kwargs)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\flask_socketio\cli.py", line 59, in run_server
return run_command()
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\click\core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\click\core.py", line 717, in main
rv = self.invoke(ctx)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\click\core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\click\core.py", line 555, in invoke
return callback(*args, **kwargs)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\click\decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\click\core.py", line 555, in invoke
return callback(*args, **kwargs)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\flask\cli.py", line 771, in run_command
threaded=with_threads, ssl_context=cert)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\werkzeug\serving.py", line 812, in run_simple
reloader_type)
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\site-package
s\werkzeug\_reloader.py", line 267, in run_with_reloader
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
File "c:\users\mena\appdata\local\programs\python\python37-32\lib\signal.py",
line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
To get back to work I had to turn debug off again with set Flask_Debug=0.
What's wrong with my setup?
Please check whether any other thread is running. Also, try starting your app with python app.py from the terminal instead of flask run; the flask run command may be invoking other services (here, the Flask-SocketIO CLI visible in the traceback) that your application doesn't need.
I'm assuming you are working on a Linux system, but the same applies on Windows: in a terminal (or Windows cmd), in the project directory where app.py is located, just enter python app.py (python or python3, depending on which version you use) and hit Enter.
Try this on the command line:
python app.py
or
python3 app.py
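If you run the file directly with python app.py, the debug flag and host from the question go into the app.run() call at the bottom of app.py, roughly like this (the values are the ones already mentioned above, not new settings):

if __name__ == "__main__":
    # running the file directly bypasses the flask / flask-socketio CLI entirely
    app.run(debug=True, host="0.0.0.0")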
I'm trying to write a celery application that passes numpy arrays (or any arbitrary objects) to the workers. As far as I can tell, this requires serialization to occur via pickle (NB: I'm aware of the security implications but this isn't a concern in this case).
However, even after trying every possible way I could find to allow pickle as a serializer, I keep getting the following kombu exception:
kombu.exceptions.ContentDisallowed: Refusing to deserialize untrusted
content of type pickle (application/x-python-serialize)
My current files are:
# tasks.py
from celery import Celery
app = Celery(
    'tasks',
    broker='redis://localhost',
    accept_content=['pickle'],
    task_serializer='pickle'
)

@app.task
def adding(x, y):
    return x + y

if __name__ == '__main__':
    import numpy as np
    adding.apply_async((np.array([1]), np.array([1])), serializer='pickle')
In addition I have a config file:
# celeryconfig.py
print('configuring...')
accept_content = ['pickle', 'application/x-python-serialize']
task_serializer = 'pickle'
result_serializer = 'pickle'
from kombu import serialization
serialization.register_pickle()
serialization.enable_insecure_serializers()
However, if I run the worker (celery -A tasks worker --loglevel=info) and then execute the code that makes an async call (python tasks.py), I get the following traceback. Am I missing something?
[2018-06-16 11:46:23,617: CRITICAL/MainProcess] Unrecoverable error: ContentDisallowed('Refusing to deserialize untrusted content of type pickle (application/x-python-serialize)',)
Traceback (most recent call last):
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/worker/worker.py", line 205, in start
self.blueprint.start(self)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 322, in start
blueprint.start(self)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/worker/consufrom celery import Celery
mer/consumer.py", line 598, in start
c.loop(*c.loop_args())
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/worker/loops.py", line 91, in asynloop
next(loop)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/asynchronous/hub.py", line 354, in create_loop
cb(*cbargs)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/transport/redis.py", line 1040, in on_readable
self.cycle.on_readable(fileno)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/transport/redis.py", line 337, in on_readable
chan.handlers[type]()
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/transport/redis.py", line 724, in _brpop_read
self.connection._deliver(loads(bytes_to_str(item)), dest)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 983, in _deliver
callback(message)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 633, in _callback
return callback(message)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/messaging.py", line 624, in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 572, in on_task_received
callbacks,
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/celery/worker/strategy.py", line 136, in task_message_handler
if body is None and 'args' not in message.payload:
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/message.py", line 207, in payload
return self._decoded_cache if self._decoded_cache else self.decode()
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/message.py", line 192, in decode
self._decoded_cache = self._decode()
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/message.py", line 197, in _decode
self.content_encoding, accept=self.accept)
File "/opt/anaconda/envs/Python3/lib/python3.6/site-packages/kombu/serialization.py", line 253, in loads
raise self._for_untrusted_content(content_type, 'untrusted')
kombu.exceptions.ContentDisallowed: Refusing to deserialize untrusted content of type pickle (application/x-python-serialize)
For anyone coming to this question:
The answer was to use the app.config_from_object method:
import celeryconfig
app.config_from_object(celeryconfig)
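A minimal sketch of how the pieces then fit together, assuming the celeryconfig.py from the question sits next to tasks.py and the worker is restarted afterwards so it picks up the pickle settings:

# tasks.py (sketch)
from celery import Celery
import celeryconfig

app = Celery('tasks', broker='redis://localhost')
app.config_from_object(celeryconfig)  # applies accept_content / task_serializer / result_serializer

@app.task
def adding(x, y):
    return x + y

if __name__ == '__main__':
    import numpy as np
    # the worker now accepts the pickled payload instead of refusing it
    result = adding.apply_async((np.array([1]), np.array([1])), serializer='pickle')
    print(result.id)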
This is some strange regression that I can only reproduce on the more powerful production machine we have.
def test_foo(self):
    res = self._run_job( ....)
    self.assertTrue("Hello Input!" in res.json()["stdout"], res.text)
.........
def _run_job(self, cbid, auth, d):
    .........
    while True:
        res = requests.get(URL+"/status/"+status_id, auth=auth) <--- hangs here
        if res.json()["status"] != "Running":
            break
        else:
            time.sleep(2)
    ..........
I have to break the process and this is the traceback:
Traceback (most recent call last):
File "test_full.py", line 231, in <module>
unittest.main()
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/main.py", line 98, in __init__
self.runTests()
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/main.py", line 232, in runTests
self.result = testRunner.run(self.test)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/runner.py", line 162, in run
test(result)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/suite.py", line 64, in __call__
return self.run(*args, **kwds)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/suite.py", line 84, in run
self._wrapped_run(result)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/suite.py", line 114, in _wrapped_run
test._wrapped_run(result, debug)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/suite.py", line 116, in _wrapped_run
test(result)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/case.py", line 398, in __call__
return self.run(*args, **kwds)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/unittest2/case.py", line 340, in run
testMethod()
File "test_full.py", line 59, in test_session
"cmd": "python helloworld.py"
File "test_full.py", line 129, in _run_job
time.sleep(2)
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/gevent/hub.py", line 79, in sleep
switch_result = get_hub().switch()
File "/opt/graphyte/vens/gcs/local/lib/python2.7/site-packages/gevent/hub.py", line 164, in switch
return greenlet.switch(self)
KeyboardInterrupt
Exception KeyError: KeyError(155453036,) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
Why is gevent involved? This is a functional test; it only makes HTTP requests through the requests library, so maybe the switch refers to requests.
But this being a simple loop, how could it fail?
Are you monkey-patching with gevent?
It could be switching on the network request and never getting back for some reason. I'd say stop monkey-patching for now, and bring gevent in only where you need it.
It could be that, now that requests is asynchronous, it returns immediately, then sleeps (again asynchronously), then requests again, rinse / repeat...
Why is gevent involved?
The gevent library monkey-patches some standard modules to make them cooperative. Replacing time.sleep by gevent.sleep is one of the changes.
http://www.gevent.org/gevent.monkey.html#gevent.monkey.patch_time
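A small sketch of the selective-patching suggestion from the first answer (patch_socket and patch_ssl are existing gevent.monkey helpers; whether they are sufficient depends on what the rest of the suite needs):

from gevent import monkey

# patch only the sockets that requests uses, instead of monkey.patch_all()
monkey.patch_socket()
monkey.patch_ssl()  # needed when the tests hit https URLs

# time is deliberately left unpatched, so time.sleep(2) in the polling loop
# stays the ordinary blocking stdlib call instead of gevent.sleep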
I'm having trouble getting celery working with django. I want to use celery to scrape a website and update some django models every 20 minutes.
I created a task file in my app directory that has an update class:
class Update(PeriodicTask):
    run_every = datetime.timedelta(minutes=20)

    def run(self, **kwargs):
        # update models
The class correctly updates my models if I run it from the command line:
if __name__ == '__main__':
    Update().run()
My celery config in settings.py looks like this:
CELERY_RESULT_BACKEND = "database"
BROKER_HOST = 'localhost'
BROKER_PORT = 5672
BROKER_USER = 'Broker'
BROKER_PASSWORD = '*password*'
BROKER_VHOST = 'broker_vhost'
But when I run manage.py celeryd -v 2 I get connection errors:
[2010-12-29 09:28:15,150: ERROR/MainProcess] CarrotListener: Connection Error: [Errno 111] Connection refused. Trying again in 10 seconds...
What am I missing?
Update:
I found django-kombu, which looked pretty good because it uses my existing database. I've installed django-kombu and kombu, but now I get the following error when running manage.py celeryd -v 2.
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_manager(settings)
File "<webapp_path>/lib/python2.6/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "<webapp_path>/lib/python2.6/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "<webapp_path>/lib/python2.6/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "<webapp_path>/lib/python2.6/django/core/management/base.py", line 220, in execute
output = self.handle(*args, **options)
File "<webapp_path>/lib/python2.6/django_celery-2.1.4-py2.6.egg/djcelery/management/commands/celeryd.py", line 20, in handle
worker.run(*args, **options)
File "<webapp_path>/lib/python2.6/celery-2.1.4-py2.6.egg/celery/bin/celeryd.py", line 83, in run
from celery.apps.worker import Worker
File "<webapp_path>/lib/python2.6/celery-2.1.4-py2.6.egg/celery/apps/worker.py", line 15, in <module>
from celery.task import discard_all
File "<webapp_path>/lib/python2.6/celery-2.1.4-py2.6.egg/celery/task/__init__.py", line 7, in <module>
from celery.execute import apply_async
File "<webapp_path>/lib/python2.6/celery-2.1.4-py2.6.egg/celery/execute/__init__.py", line 7, in <module>
from celery.result import AsyncResult, EagerResult
File "<webapp_path>/lib/python2.6/celery-2.1.4-py2.6.egg/celery/result.py", line 9, in <module>
from celery.backends import default_backend
File "<webapp_path>/lib/python2.6/celery-2.1.4-py2.6.egg/celery/backends/__init__.py", line 51, in <module>
default_backend = DefaultBackend()
TypeError: __init__() takes exactly 2 arguments (1 given)
Doesn't look like you have a broker installed/running (RabbitMQ?)
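For the django-kombu part of the update, a rough sketch of the settings that transport expects (setting names are taken from the django-kombu documentation of that era, so treat this as an assumption to verify; run python manage.py syncdb afterwards so its tables are created):

# settings.py (sketch for the django-kombu database transport)
INSTALLED_APPS = (
    # ... existing apps ...
    "djkombu",  # provides the queue/message tables used as the broker
)
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
CELERY_RESULT_BACKEND = "database"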
I had the same issue, and the problem was that I had the import path wrong.
You probably import the task decorator as
from celery import task
while you should use
from celery.task import task
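Applied to the periodic task from the original question, a minimal sketch with the celery 2.x-era import path would be (run_every is copied from the question; the body is a placeholder):

from datetime import timedelta
from celery.task import PeriodicTask

class Update(PeriodicTask):
    run_every = timedelta(minutes=20)

    def run(self, **kwargs):
        # scrape the website and update the Django models here
        pass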