I have a Django project on an Ubuntu EC2 node, which I have been using to set up an asynchronous task queue with Celery.
I am following "How to list the queued items in celery?" along with the docs to experiment with Celery at the command line.
I've been able to get a basic task working at the command line, using:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO
However, if I run other celery commands, like the one below, I get the following:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery inspect ping
Traceback (most recent call last):
File "/home/ubuntu/.virtualenvs/env1/bin/celery", line 11, in <module>
sys.exit(main())
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/__main__.py", line 30, in main
main()
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 769, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 307, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 761, in handle_argv
return self.execute(command, argv)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 693, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 311, in run_from_argv
sys.argv if argv is None else argv, command)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 373, in handle_argv
return self(*args, **options)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 270, in __call__
ret = self.run(*args, **kwargs)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 324, in run
return self.do_call_method(args, **kwargs)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 346, in do_call_method
callback=self.say_remote_command_reply)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 385, in call
return getattr(i, method)(*args)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/app/control.py", line 100, in ping
return self._request('ping')
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/app/control.py", line 71, in _request
timeout=self.timeout, reply=True,
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/app/control.py", line 307, in broadcast
limit, callback, channel=channel,
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/pidbox.py", line 283, in _broadcast
chan = channel or self.connection.default_channel
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/transport/pyamqp.py", line 112, in establish_connection
conn = self.Connection(**opts)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [Errno 111] Connection refused
The installed Python packages:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ pip freeze
amqp==1.4.6
anyjson==0.3.3
billiard==3.3.0.19
celery==3.1.17
Django==1.7.7
django-redis-cache==0.13.0
kombu==3.0.24
pytz==2015.2
redis==2.10.3
requests==2.6.0
uWSGI==2.0.10
/projects/tp/tp/celery.py
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tp.settings')
django.setup()
app = Celery('hello_django')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Also, in redis.conf:
# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
unixsocket /var/run/redis/redis.sock
unixsocketperm 777
tp/settings.py:
# CELERY SETTINGS
BROKER_URL = 'redis://localhost:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': '/var/run/redis/redis.sock',
    },
}
edit 2:
ubuntu@ip-172-31-22-65:~$ redis-cli ping
PONG
ubuntu@ip-172-31-22-65:~$ service redis-server status
redis-server is not running
edit 3:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ redis-cli ping
PONG
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ sudo service redis-server start
Starting redis-server: failed
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ service redis-server status
redis-server is not running
What am I doing wrong?
Try this: add it to project/__init__.py and it should work. This ensures the app is always imported when Django starts, so shared_task will use it:
from __future__ import absolute_import
from .celery import app as celery_app
__all__ = ('celery_app',)
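With the app exported like this, tasks anywhere in your installed apps can be declared with shared_task. A minimal sketch (the module and function names are illustrative, not from the question):
# hypothetical example: tasks.py inside one of your INSTALLED_APPS
from __future__ import absolute_import

from celery import shared_task

@shared_task
def add(x, y):
    # shared_task binds to the app imported in project/__init__.py above
    return x + y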
I think you are using RabbitMQ as the queue, so check:
sudo service rabbitmq-server status
If it is stopped, start it:
sudo service rabbitmq-server start
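Note that the question's settings point BROKER_URL at Redis, yet the traceback goes through kombu/transport/pyamqp.py. Most likely this is because celery inspect ping was run without --app, so Celery fell back to its default AMQP (RabbitMQ) broker on localhost. Passing the app explicitly, as in the working worker command, should make inspect use the configured broker:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app inspect ping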
I'm invoking nameko shell inside the Docker container of the example service, but I receive the error below. I have set up two containers: a rabbitmq container and a service container. I'm invoking the nameko shell from a bash session inside the service container. The containers start up correctly and the service container connects successfully, but I can't use the shell.
Error
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/amqp/transport.py", line 138, in _connect
host, port, family, socket.SOCK_STREAM, SOL_TCP)
File "/usr/local/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -9] Address family for hostname not supported
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/nameko", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/site-packages/nameko/cli/main.py", line 112, in main
args.main(args)
File "/usr/local/lib/python3.6/site-packages/nameko/cli/commands.py", line 143, in main
main(args)
File "/usr/local/lib/python3.6/site-packages/nameko/cli/shell.py", line 98, in main
ctx['n'] = make_nameko_helper(config)
File "/usr/local/lib/python3.6/site-packages/nameko/cli/shell.py", line 73, in make_nameko_helper
module.rpc = proxy.start()
File "/usr/local/lib/python3.6/site-packages/nameko/standalone/rpc.py", line 228, in start
self._reply_listener.setup()
File "/usr/local/lib/python3.6/site-packages/nameko/rpc.py", line 260, in setup
self.queue_consumer.register_provider(self)
File "/usr/local/lib/python3.6/site-packages/nameko/standalone/rpc.py", line 123, in register_provider
self._setup_consumer()
File "/usr/local/lib/python3.6/site-packages/nameko/standalone/rpc.py", line 102, in _setup_consumer
channel = self.connection.channel()
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 289, in channel
chan = self.transport.create_channel(self.connection)
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 867, in connection
max_retries=1, reraise_as_library_errors=False
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection
callback, timeout=timeout
File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory
self._connection = self._establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.6/site-packages/amqp/connection.py", line 314, in connect
self.transport.connect()
File "/usr/local/lib/python3.6/site-packages/amqp/transport.py", line 78, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.6/site-packages/amqp/transport.py", line 149, in _connect
"failed to resolve broker hostname"))
File "/usr/local/lib/python3.6/site-packages/amqp/transport.py", line 162, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
Dockerfile
FROM python:3-onbuild
CMD ["nameko", "run", "--config", "conf.yml", "helloworld"]
Config file
AMQP_URI: 'pyamqp://guest:guest@rabbitmq'
docker-compose file
version: '2'
services:
  echo:
    build: ./echo
    restart: always
    volumes:
      - .:/echo/code
    depends_on:
      - rabbitmq
  rabbitmq:
    image: "rabbitmq"
    ports:
      - "15673:15672"
I found out after a while that it was my own stupid mistake. I forgot to add the config file in my nameko shell command. You have to specify the message broker when executing nameko shell. In my case I needed to run nameko shell --config config.yml. That enabled me to connect and test my nameko service.
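With the file names from the question, that would be (assuming conf.yml is the config mounted into the service container):
nameko shell --config conf.yml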
I'm trying to create a simple Flask app:
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    app.run()
but when I enable debug mode:
FLASK_APP = run.py
FLASK_ENV = development
FLASK_DEBUG = 1
I got the following error:
ValueError: signal only works in main thread
Here is the full stack trace:
FLASK_APP = run.py
FLASK_ENV = development
FLASK_DEBUG = 1
In folder c:/MyProjectPath/api
c:\MyProjectPath\api\venv\Scripts\python.exe -m flask run
* Serving Flask-SocketIO app "run.py"
* Forcing debug mode on
* Restarting with stat
* Debugger is active!
* Debugger PIN: 283-122-745
Exception in thread Thread-1:
Traceback (most recent call last):
File "c:\appdata\local\programs\python\python37\Lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "c:\appdata\local\programs\python\python37\Lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "c:\MyProjectPath\api\venv\lib\site-packages\flask_socketio\cli.py", line 59, in run_server
return run_command()
File "c:\MyProjectPath\api\venv\lib\site-packages\click\core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "c:\MyProjectPath\api\venv\lib\site-packages\click\core.py", line 717, in main
rv = self.invoke(ctx)
File "c:\MyProjectPath\api\venv\lib\site-packages\click\core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\MyProjectPath\api\venv\lib\site-packages\click\core.py", line 555, in invoke
return callback(*args, **kwargs)
File "c:\MyProjectPath\api\venv\lib\site-packages\click\decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "c:\MyProjectPath\api\venv\lib\site-packages\click\core.py", line 555, in invoke
return callback(*args, **kwargs)
File "c:\MyProjectPath\api\venv\lib\site-packages\flask\cli.py", line 771, in run_command
threaded=with_threads, ssl_context=cert)
File "c:\MyProjectPath\api\venv\lib\site-packages\werkzeug\serving.py", line 812, in run_simple
reloader_type)
File "c:\MyProjectPath\api\venv\lib\site-packages\werkzeug\_reloader.py", line 267, in run_with_reloader
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
File "c:\appdata\local\programs\python\python37\Lib\signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
The problem you are facing has to do with a bug in the Flask-SocketIO package, which replaces the flask run command. Because of this, Flask-SocketIO is always used, even if you don't import it. There are several solutions:
Uninstall Flask-SocketIO
Do not use flask run but run the main file of your program directly (see the sketch after the bug reference below)
Disable debugging
Disable auto loading if debugging required flask run --no-reload
Reference to the Flask-SocketIO bug: issue 817
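For the second option, the question's app already guards app.run(), so it can be started directly with python run.py. A minimal sketch, with debug enabled in code instead of via FLASK_DEBUG:
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # app.run() starts the Werkzeug server in the main thread,
    # so its signal-based reloader works
    app.run(debug=True)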
I solved the problem thanks to @AkshayKumar007's answer on GitHub. That was the most convenient solution for me.
Hey guys, I was also facing the same problem. So to summarize, if
you're using socket-io, don't do flask run. First, add
if __name__ == "__main__":
    socketio.run(app)
At the end of your application. To run it just do
python3 __init__.py
Hope it helped.
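Putting that together, a minimal runnable sketch (assuming Flask-SocketIO is installed; the names are illustrative):
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    # socketio.run() wraps the development server so it cooperates with
    # Flask-SocketIO, including debug mode, without going through flask run
    socketio.run(app, debug=True)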
I am running this Python code, where I am trying to run a SocketIO Flask application and pass SSL certificate files:
import logging
import os

import eventlet
import eventlet.wsgi  # ensure the wsgi submodule is loaded
from flask import Flask, render_template, request, session, Markup, current_app, jsonify
from flask_socketio import emit, SocketIO
from flask_babel import gettext

logger = logging.getLogger(__name__)
app = Flask(__name__)
app.config['SECRET_KEY'] = '123'
app.config['FILEDIR'] = 'static/_files/'
socketio = SocketIO(app)
if __name__ == '__main__':
    try:
        app_host = os.environ.get('APP_HOST')
        app_port = os.environ.get('APP_PORT')
        eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen((app_host, int(app_port))),
                                               certfile='selfsigned.crt',
                                               keyfile='selfsigned.key',
                                               server_side=True), app)
    except Exception as e:
        logger.error(e)
When I run this code it throws following SSL error:
(21755) wsgi starting up on https://12.34.56.78:5000
(21755) accepted ('12.34.56.79', 50021)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 458, in fire_timers
timer()
File "/usr/local/lib/python3.6/site-packages/eventlet/hubs/timer.py", line 58, in __call__
cb(*args, **kw)
File "/usr/local/lib/python3.6/site-packages/eventlet/greenthread.py", line 218, in main
result = function(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/eventlet/wsgi.py", line 781, in process_request
proto.__init__(conn_state, self)
File "/usr/local/lib/python3.6/site-packages/eventlet/wsgi.py", line 335, in __init__
self.handle()
File "/usr/local/lib/python3.6/site-packages/eventlet/wsgi.py", line 368, in handle
self.handle_one_request()
File "/usr/local/lib/python3.6/site-packages/eventlet/wsgi.py", line 397, in handle_one_request
self.raw_requestline = self._read_request_line()
File "/usr/local/lib/python3.6/site-packages/eventlet/wsgi.py", line 380, in _read_request_line
return self.rfile.readline(self.server.url_length_limit)
File "/usr/local/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "/usr/local/lib/python3.6/site-packages/eventlet/green/ssl.py", line 204, in recv_into
return self._base_recv(nbytes, flags, into=True, buffer_=buffer)
File "/usr/local/lib/python3.6/site-packages/eventlet/green/ssl.py", line 225, in _base_recv
read = self.read(nbytes, buffer_)
File "/usr/local/lib/python3.6/site-packages/eventlet/green/ssl.py", line 139, in read
super(GreenSSLSocket, self).read, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/eventlet/green/ssl.py", line 113, in _call_trampolining
return func(*a, **kw)
File "/usr/local/lib/python3.6/ssl.py", line 871, in read
return self._sslobj.read(len, buffer)
File "/usr/local/lib/python3.6/ssl.py", line 631, in read
v = self._sslobj.read(len, buffer)
ssl.SSLError: [SSL: HTTP_REQUEST] http request (_ssl.c:2217)
What I want is for this app to run over an https connection, but this error is preventing it. Below are my Python and package version details:
python 3.6.3
eventlet==0.22.1
Flask==0.12.2
Flask-SocketIO==2.9.3
It seems that the request made to the server used http instead of https. You have to make sure the client makes its request with the same https protocol the server is using. For example, in your client code, make the request to https://<server-ip> instead of http://<server-ip>.
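For example, a hypothetical Python client using the requests package (the URL is illustrative; verify=False is used only because the question uses a self-signed certificate, never in production):
import requests

# https:// is required here: a plain http request against the SSL-wrapped
# socket is what produces the [SSL: HTTP_REQUEST] error in the question
response = requests.get("https://12.34.56.78:5000/", verify=False)
print(response.status_code)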
I am trying to understand how to run Django tests in parallel with in-memory SQLite.
I have a Django app with this structure:
gbook
  order
    ...
    tests
      __init__.py
      test_a1.py
      test_b1.py
      utils.py
test_a1.py and test_b1.py contain the same code:
import time

from order import models
from .utils import BackendTestCase


class ATestCase(BackendTestCase):
    def test_a(self):
        time.sleep(1)
        a = models.City.objects.count()
        self.assertEqual(a, a)


class BTestCase(BackendTestCase):
    def test_b(self):
        time.sleep(1)
        a = models.City.objects.count()
        self.assertEqual(a, a)
utils.py is:
from django.test import TestCase, Client
from order import models
from django.conf import settings
from order.utils import to_hash


class BackendTestCase(TestCase):
    fixtures = ['City.json', 'Agency.json']

    def setUp(self):
        self.client = Client()
        self.lang_codes = (i[0] for i in settings.LANGUAGES)
        ...
settings_test.py:
from .settings import *
DEBUG = False
TEMPLATE_DEBUG = False
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
PASSWORD_HASHERS = ['django.contrib.auth.hashers.MD5PasswordHasher',] # faster
DATABASES['default'] = {
    'ENGINE': 'django.db.backends.sqlite3',
}
When I run the tests in a single process, all goes well (about 4 seconds):
python.exe manage.py test order --settings=gbook.settings_test
Then I try to run the tests in parallel:
python.exe manage.py test order --settings=gbook.settings_test --parallel=2
I get this trace (console):
Creating test database for alias 'default'...
Cloning test database for alias 'default'...
Cloning test database for alias 'default'...
System check identified no issues (0 silenced).
Process SpawnPoolWorker-2:
Process SpawnPoolWorker-1:
Traceback (most recent call last):
Traceback (most recent call last):
File "C:\python\Python36-32\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\python\Python36-32\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\python\Python36-32\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\python\Python36-32\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\python\Python36-32\lib\multiprocessing\pool.py", line 108, in worker
task = get()
File "C:\python\Python36-32\lib\multiprocessing\pool.py", line 108, in worker
task = get()
File "C:\python\Python36-32\lib\multiprocessing\queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "C:\python\Python36-32\lib\multiprocessing\queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "C:\kvk\develop\Python\gbook\order\tests\test_a1.py", line 2, in <module>
from order import models
File "C:\kvk\develop\Python\gbook\order\tests\test_a1.py", line 2, in <module>
from order import models
File "C:\kvk\develop\Python\gbook\order\models.py", line 79, in <module>
class Agency(models.Model):
File "C:\kvk\develop\Python\gbook\order\models.py", line 79, in <module>
class Agency(models.Model):
File "C:\python\venv\gbook\lib\site-packages\django\db\models\base.py", line 110, in __new__
app_config = apps.get_containing_app_config(module)
File "C:\python\venv\gbook\lib\site-packages\django\db\models\base.py", line 110, in __new__
app_config = apps.get_containing_app_config(module)
File "C:\python\venv\gbook\lib\site-packages\django\apps\registry.py", line 247, in get_containing_app_config
self.check_apps_ready()
File "C:\python\venv\gbook\lib\site-packages\django\apps\registry.py", line 247, in get_containing_app_config
self.check_apps_ready()
File "C:\python\venv\gbook\lib\site-packages\django\apps\registry.py", line 125, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
File "C:\python\venv\gbook\lib\site-packages\django\apps\registry.py", line 125, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
From PyCharm, the trace is different:
...
Traceback (most recent call last):
File "C:\python\Python36-32\lib\unittest\suite.py", line 163, in _handleClassSetUp
setUpClass()
File "C:\python\venv\gbook\lib\site-packages\django\test\testcases.py", line 1036, in setUpClass
'database': db_name,
File "C:\python\venv\gbook\lib\site-packages\django\core\management\__init__.py", line 131, in call_command
return command.execute(*args, **defaults)
File "C:\python\venv\gbook\lib\site-packages\django\core\management\base.py", line 330, in execute
output = self.handle(*args, **options)
File "C:\python\venv\gbook\lib\site-packages\modeltranslation\management\commands\loaddata.py", line 61, in handle
return super(Command, self).handle(*fixture_labels, **options)
File "C:\python\venv\gbook\lib\site-packages\django\core\management\commands\loaddata.py", line 69, in handle
self.loaddata(fixture_labels)
File "C:\python\venv\gbook\lib\site-packages\django\core\management\commands\loaddata.py", line 109, in loaddata
self.load_label(fixture_label)
File "C:\python\venv\gbook\lib\site-packages\django\core\management\commands\loaddata.py", line 175, in load_label
obj.save(using=self.using)
File "C:\python\venv\gbook\lib\site-packages\django\core\serializers\base.py", line 205, in save
models.Model.save_base(self.object, using=using, raw=True, **kwargs)
File "C:\python\venv\gbook\lib\site-packages\django\db\models\base.py", line 838, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "C:\python\venv\gbook\lib\site-packages\django\db\models\base.py", line 905, in _save_table
forced_update)
File "C:\python\venv\gbook\lib\site-packages\django\db\models\base.py", line 955, in _do_update
return filtered._update(values) > 0
File "C:\python\venv\gbook\lib\site-packages\django\db\models\query.py", line 664, in _update
return query.get_compiler(self.db).execute_sql(CURSOR)
File "C:\python\venv\gbook\lib\site-packages\django\db\models\sql\compiler.py", line 1204, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "C:\python\venv\gbook\lib\site-packages\django\db\models\sql\compiler.py", line 899, in execute_sql
raise original_exception
File "C:\python\venv\gbook\lib\site-packages\django\db\models\sql\compiler.py", line 889, in execute_sql
cursor.execute(sql, params)
File "C:\python\venv\gbook\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\python\venv\gbook\lib\site-packages\django\db\utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\python\venv\gbook\lib\site-packages\django\utils\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "C:\python\venv\gbook\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\python\venv\gbook\lib\site-packages\django\db\backends\sqlite3\base.py", line 328, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: Problem installing fixture 'C:\kvk\develop\Python\gbook\order\fixtures\AirportInfo.json': Could not load order.AirportInfo(pk=2411): no such table: GB_AIRPORT_INFO
It seems like migrations do not work in parallel, but why?
The docs say of --parallel: "Runs tests in separate parallel processes. Each process gets its own database." And I should not need to change my code to use it.
Please help me understand what I am doing wrong.
multiprocessing.cpu_count() = 4
Django version 1.11.10
Python 3.6.5
Same issue as above with macOS and Python 3.8+. You have to explicitly call import multiprocessing; multiprocessing.set_start_method('fork') at the top of your settings.py file. But be sure you understand the side effects before you do!
I ran into a similar issue trying to use the --parallel feature on Windows.
Django's documentation states
This feature isn’t available on Windows. It doesn’t work with the Oracle database backend either.
Running the same command on Linux completed with no issues.
Parallel running is still disabled on Windows as of today. You can track the ticket for this feature here: https://code.djangoproject.com/ticket/31169.
And here's the code block that disables this option on Windows:
def default_test_processes():
    """Default number of test processes when using the --parallel option."""
    # The current implementation of the parallel test runner requires
    # multiprocessing to start subprocesses with fork().
    if multiprocessing.get_start_method() != 'fork':
        return 1
    try:
        return int(os.environ['DJANGO_TEST_PROCESSES'])
    except KeyError:
        return multiprocessing.cpu_count()
Source: https://github.com/django/django/blob/59b4e99dd00b9c36d56055b889f96885995e4240/django/test/runner.py#L286-L295
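As the snippet shows, on platforms where the start method is fork, the default number of processes can also come from the DJANGO_TEST_PROCESSES environment variable rather than an explicit flag value, e.g. (illustrative, using the question's command):
DJANGO_TEST_PROCESSES=4 python manage.py test order --settings=gbook.settings_test --parallel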
In reply to @Menth, this is how I enable it for testing only:
# near the top of settings.py
import logging
import sys

if "test" in sys.argv[1:]:
    import multiprocessing
    logging.info("Using multiproc for testing.")
    multiprocessing.set_start_method("fork")
I have a RabbitMQ message broker and a remote Celery worker. It works fine, but about every five minutes I get this error:
[2014-01-06 14:02:27,247: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 270, in start
blueprint.start(self)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 786, in start
c.loop(*c.loop_args())
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/loops.py", line 72, in asynloop
next(loop)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/async/hub.py", line 333, in create_loop
cb(*cbargs)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/transport/base.py", line 156, in on_readable
reader(loop)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/transport/base.py", line 141, in _read
drain_events(timeout=0)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/amqp/connection.py", line 282, in drain_events
chanmap, None, timeout=timeout,
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/amqp/connection.py", line 345, in _wait_multiple
channel, method_sig, args, content = read_timeout(timeout)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/amqp/connection.py", line 316, in read_timeout
return self.method_reader.read_method()
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/amqp/method_framing.py", line 195, in read_method
raise m
IOError: Socket closed
[2014-01-06 14:02:27,308: ERROR/MainProcess] Unrecoverable error: ValueError('I/O operation on closed epoll fd',)
Traceback (most recent call last):
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/bootsteps.py", line 373, in start
return self.obj.start()
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 270, in start
blueprint.start(self)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 468, in start
c.connection = c.connect()
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 369, in connect
conn.transport.register_with_event_loop(conn.connection, self.hub)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 124, in register_with_event_loop
loop.add_reader(connection.sock, self.on_readable, connection, loop)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/async/hub.py", line 214, in add_reader
return self.add(fds, callback, READ | ERR, args)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/async/hub.py", line 165, in add
self.poller.register(fd, flags)
File "/usr/local/ABCD/venv/local/lib/python2.7/site-packages/kombu/utils/eventio.py", line 78, in register
self._epoll.register(fd, events)
ValueError: I/O operation on closed epoll fd
This is the init script I use to start a Celery daemon:
# description "Celery worker using sync broker"
console log
start on runlevel [2345]
stop on runlevel [!2345]
setuid yoyo_login
setgid yoyo_login
script
chdir /usr/local/ABCD/abcdegg
exec /usr/local/ABCD/venv/bin/celery worker -n ABCD_sync.%h -A proj.sync_celery -Q sync_queue -l info --autoscale=10,3 --autoreload --without-gossip --without-mingle --without-heartbeat
end script
respawn
Any idea why this error keeps happening every few minutes?
It seems the worker is not the issue here; rather, RabbitMQ is closing the connection that the worker consumes from. Check the settings of RabbitMQ and the queue itself. Perhaps there is a proxy in the middle?
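One way to investigate (a suggestion beyond the original answer) is to watch the broker's view of the connections while the worker runs:
sudo rabbitmqctl list_connections name peer_host state timeout
If a proxy or load balancer sits between the worker and RabbitMQ, an idle timeout of a few minutes on that hop would match the observed disconnect interval.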