I have a Flask server running on top of uWSGI, with the following configuration:
[uwsgi]
http-socket = :9000
plugin = python
wsgi-file = /.../whatever.py
enable-threads = true
The Flask server has a background thread that makes periodic calls to another server, using the following command:
r = requests.get(...)
I've added logging before and after this command, and it appears that the call never returns; the thread simply hangs there.
Any idea why the background thread is hanging? Note that I've already added enable-threads = true to the configuration.
Updates
I've added a timeout parameter to requests.get(). Now the behaviour is inconsistent: the background thread works on one server but fails on another.
Killing all the uWSGI instances and restarting them with sudo service uwsgi restart solved the problem.
It seems that sudo service uwsgi stop does not actually stop all running uWSGI instances.
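A minimal sketch of that pattern (the URL, port, and interval here are placeholders, not the poster's values): always pass timeout= so requests.get() cannot block the thread forever, and catch the request exceptions so one failed poll does not kill the loop.

```python
import threading

import requests  # third-party: pip install requests

POLL_URL = "http://127.0.0.1:9001/health"  # placeholder target
POLL_INTERVAL = 30  # seconds between polls

stop_event = threading.Event()

def poll_forever():
    """Periodically call the other server, never blocking indefinitely."""
    while not stop_event.is_set():
        try:
            # Without a timeout, requests.get() can hang forever if the
            # remote end accepts the connection but never responds.
            r = requests.get(POLL_URL, timeout=5)
            r.raise_for_status()
        except requests.RequestException as exc:
            print("poll failed: %s" % exc)
        # Event.wait() doubles as an interruptible sleep.
        stop_event.wait(POLL_INTERVAL)

thread = threading.Thread(target=poll_forever, daemon=True)
thread.start()
```

Setting stop_event and joining the thread gives a clean shutdown, which also matters when uWSGI reloads workers.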
Related
In development, flask-socketio (4.1.0) with uWSGI works nicely with just one worker and standard initialization.
Now I'm preparing for production and want to make it work with multiple workers.
I've done the following:
Added redis message_queue in init_app:
socketio = SocketIO()
socketio.init_app(app, async_mode='gevent_uwsgi', message_queue=app.config['SOCKETIO_MESSAGE_QUEUE'])
(Sidenote: we are using redis in the app itself as well)
Gevent monkey patching at the top of the file that we run with uwsgi:
from gevent import monkey
monkey.patch_all()
run uwsgi with:
uwsgi --http 0.0.0.0:63000 --gevent 1000 --http-websockets --master --wsgi-file rest.py --callable application --py-autoreload 1 --gevent-monkey-patch --workers 4 --threads 1
This doesn't seem to work. The connection rapidly alternates between connecting and 400 Bad Request responses. I suspect these correspond to the 'Invalid session ...' errors I see when I enable SocketIO logging.
Initially it was not using redis at all:
redis-cli > PUBSUB CHANNELS *
returned an empty result even with workers=1.
It seemed the following (taken from another SO answer) fixed that:
# https://stackoverflow.com/a/19117266/492148
import gevent
import redis.connection
redis.connection.socket = gevent.socket
After doing so I got a "flask-socketio" pubsub channel with updating data.
But after returning to multiple workers, the issue returned. Given that changing the redis socket did move things in the right direction, I suspect the monkey patching isn't working properly yet; however, the code I used matches every example I can find, and it sits at the very top of the file loaded by uwsgi.
You can run as many workers as you like, but only if each worker runs as a standalone single-worker uwsgi process. Once all those workers are running, each on its own port, you can put nginx in front to load balance using sticky sessions. And of course you also need the message queue for the workers to use when coordinating broadcasts.
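A sketch of that layout, with hypothetical ports: each uwsgi instance is launched separately with --workers 1, and nginx's ip_hash provides the sticky sessions Socket.IO needs.

```nginx
# Each backend is a standalone single-worker uwsgi process, e.g.:
#   uwsgi --http 127.0.0.1:63001 --gevent 1000 --http-websockets \
#         --master --wsgi-file rest.py --callable application --workers 1

upstream socketio_nodes {
    ip_hash;                      # sticky sessions: same client, same worker
    server 127.0.0.1:63001;
    server 127.0.0.1:63002;
    server 127.0.0.1:63003;
}

server {
    listen 80;

    location / {
        proxy_pass http://socketio_nodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # allow websocket upgrade
        proxy_set_header Connection "Upgrade";
    }
}
```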
Eventually I found https://github.com/miguelgrinberg/Flask-SocketIO/issues/535
So it seems you can't have multiple workers with uwsgi either, as Flask-SocketIO needs sticky sessions. The documentation mentions this for gunicorn, but I did not interpret that to extend to uwsgi.
I have a Flask application running under uWSGI. This application runs some threads in the background when a cron command starts. Is there a way to update my template files without restarting the uWSGI service?
Currently I wait for my threads to stop and then reload the uWSGI service.
Enabling TEMPLATES_AUTO_RELOAD works nicely:
app = Flask(__name__)
app.config['TEMPLATES_AUTO_RELOAD'] = True
Whether to check for modifications of the template source and reload
it automatically. By default the value is None which means that Flask
checks original file only in debug mode.
Source: http://flask.pocoo.org/docs/0.12/config/
I have a problem running uWSGI.
I run an application (Pyramid with zerorpc and gevent) under uWSGI, and some requests fail.
Python writes this error:
Assertion failed: ok (bundled/zeromq/src/mailbox.cpp:79)
Aborted
uWSGI worker 1 screams: UAAAAAAH my master disconnected: i will kill myself !!!
Why there might be such a problem?
uwsgi config:
[uwsgi]
socket = /tmp/sock.sock
chmod-socket = 666
master = true
processes = 1
vacuum = true
I run it like this:
uwsgi --ini-paste development.ini
The whole zeromq magic is managed by a background thread. A property of threads is that they "disappear" after fork(), so zeromq will not work in your uWSGI worker. Just add
lazy-apps = true
to your uWSGI options to load zeromq (read: your app) after each fork().
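Applied to the config above, that is a single extra line:

```ini
[uwsgi]
socket = /tmp/sock.sock
chmod-socket = 666
master = true
processes = 1
vacuum = true
lazy-apps = true  ; load the app (and zeromq's background thread) after fork()
```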
I found this zero-dependency Python websocket server on SO: https://gist.github.com/jkp/3136208
I am using gunicorn for my Flask app and I wanted to run this websocket server under gunicorn as well. The last few lines of the code run the server with:
if __name__ == "__main__":
    server = SocketServer.TCPServer(
        ("localhost", 9999), WebSocketsHandler)
    server.serve_forever()
I cannot figure out how to get this websocketserver.py running in gunicorn, since one would presumably need gunicorn to run serve_forever() as well as create the SocketServer.TCPServer(...).
Is this possible?
Gunicorn expects a WSGI application (PEP 333), not just a function. Your app has to accept an environ variable and a start_response callback and return an iterator of data (roughly speaking). All the machinery encapsulated by SocketServer.StreamRequestHandler is on gunicorn's side. I imagine it would be a lot of work to modify this gist into a WSGI application (but that'll be fun!).
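To make that concrete, here is a minimal self-contained WSGI application of the kind gunicorn expects (a hypothetical hello app, not the gist rewritten):

```python
def application(environ, start_response):
    """A minimal WSGI callable: gunicorn invokes this once per request."""
    # environ is a dict describing the request (path, method, headers, ...)
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from %s" % path).encode("utf-8")
    # start_response must be called with the status and headers
    # before the body is returned.
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    # The return value is an iterable of bytes.
    return [body]
```

Saved as app.py, this could be served with gunicorn app:application.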
OR, maybe this library will get the job done for you: https://github.com/CMGS/gunicorn-websocket
If you use the Flask-Sockets extension, you get a websocket implementation for gunicorn directly in the extension, which makes it possible to start with the following command line:
gunicorn -k flask_sockets.worker app:app
Though I don't know if that's what you want to do.
I'm using Python's concurrent.futures module (module version 2.1.3, Python version 2.7.3). I have nginx running with 4 worker processes, and uWSGI running with 4 processes (on Ubuntu Precise) as an upstart daemon, with the following uwsgi config (note that enable-threads is true, so the GIL is accessible, and lazy is true):
virtualenv=[ path to venv ]
chdir=[ path to python project ]
enable-threads=true
lazy=true
buffer-size=32768
post-buffering=true
processes=4
master=true
module=[ my app ].wsgi
callable=wsgi_app
logto=/var/log/uwsgi.log
pidfile=[ replaced ]
plugins=python27
socket=[ replaced, but works fine ]
The entire app works fine, but it seems that some context is missing in the futures pool: when I call somefunc() without a future, all is well, but when I call somefunc() through a future, the HTTP request (I'm using Flask) hangs for quite some time before failing.
The only entries in the log file relate to HTTP requests and general WSGI startup, like:
WSGI application 0 (mountpoint='') ready on interpreter 0x11820a0 pid: 26980 (default app)
How can I get some visibility into the futures execution, or figure out what context might not be available to the futures pool?
Does that make sense?
Thanks in advance.
If you are using ProcessPoolExecutor instead of threads, be sure to add close-on-exec to your uWSGI options; otherwise the connection socket to the client/webserver will be inherited after fork().
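As for getting visibility into the futures execution, one generic approach (a sketch, not the poster's setup) is to attach a done-callback that logs every outcome, so exceptions raised inside the pool stop disappearing silently:

```python
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("futures")

executor = ThreadPoolExecutor(max_workers=4)

def log_outcome(future):
    """Done-callback: logs the result or the exception of a finished future."""
    exc = future.exception()  # non-blocking here: the future is already done
    if exc is not None:
        log.error("future failed: %r", exc)
    else:
        log.info("future finished: %r", future.result())

def somefunc(x):
    return x * 2  # stand-in for the real work

future = executor.submit(somefunc, 21)
future.add_done_callback(log_outcome)
```

The same pattern works with ProcessPoolExecutor; only the executor class changes.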