I found this zero-dependency Python WebSocket server via SO: https://gist.github.com/jkp/3136208
I am using Gunicorn for my Flask app and I want to run this WebSocket server under Gunicorn as well. The last few lines of the code start the server with:
if __name__ == "__main__":
    server = SocketServer.TCPServer(
        ("localhost", 9999), WebSocketsHandler)
    server.serve_forever()
I cannot figure out how to get this websocketserver.py running under Gunicorn, because one would think you would want Gunicorn to call serve_forever() as well as construct the SocketServer.TCPServer(...).
Is this possible?
Gunicorn expects a WSGI application (PEP 333), not just a function. Your app has to accept an environ dict and a start_response callback and return an iterable of data (roughly speaking). All the machinery encapsulated by SocketServer.StreamRequestHandler is on Gunicorn's side. I imagine it is a lot of work to modify this gist into a WSGI application (but that'll be fun!).
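For reference, the "WSGI application" shape Gunicorn wants is roughly this (a minimal sketch, not a port of the gist; the module name in the last comment is just a placeholder):

# a bare-bones WSGI application (PEP 333/3333 shape), only here to show the callable Gunicorn loads
def application(environ, start_response):
    body = b"Hello from a plain WSGI app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# run with something like: gunicorn yourmodule:application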
OR, maybe this library will get the job done for you: https://github.com/CMGS/gunicorn-websocket
If you use the Flask-Sockets extension, you get a WebSocket implementation for Gunicorn directly in the extension, which makes it possible to start with the following command line:
gunicorn -k flask_sockets.worker app:app
Though I don't know if that's what you want to do.
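For what it's worth, a minimal Flask-Sockets app (echoing the example from the extension's README; file and route names are illustrative) looks roughly like this:

# echo_app.py: minimal Flask-Sockets sketch; file and route names are illustrative
from flask import Flask
from flask_sockets import Sockets

app = Flask(__name__)
sockets = Sockets(app)

@sockets.route('/echo')
def echo_socket(ws):
    # ws is the websocket object supplied by the flask_sockets worker
    while not ws.closed:
        message = ws.receive()
        if message is not None:
            ws.send(message)

# start with: gunicorn -k flask_sockets.worker echo_app:app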
Hope you're well!
I have a problem with the Psycopg module, which is used to interact with PostgreSQL databases.
When I start my HTTP server (I use FastAPI and Uvicorn) normally and send a request to it, I get this error:
error connecting in 'pool-1': Psycopg cannot use the 'ProactorEventLoop' to run in async mode. Please use a compatible event loop, for instance by setting 'asyncio.set_event_loop_policy(WindowsSelectorEventLoopPolicy())'
I have already done what the error message suggests, that is, set the event loop policy to WindowsSelectorEventLoopPolicy:
import asyncio
from asyncio import WindowsSelectorEventLoopPolicy
asyncio.set_event_loop_policy(WindowsSelectorEventLoopPolicy())
I put these lines in my main file (the central file of my app), but still nothing: I get the same error when I send a request to my server.
But what is strange is that when I start my server with the --reload option (which reloads files automatically on modification and, according to the Uvicorn/FastAPI documentation, is not suited for production), I get no error when I send my requests and everything works correctly.
Can you tell me what is the cause of the problem and how to solve it?
P.S.: Here is how I start my HTTP server from Windows PowerShell.
Here is how I start it normally (which doesn't work):
uvicorn src.main:app --port 2314
Here is how I start it with the --reload option (which works perfectly):
uvicorn src.main:app --port 2314 --reload
Uvicorn sets the WindowsSelectorEventLoopPolicy by default only when the reload option is used on Windows; otherwise it doesn't. That's why you are seeing this mismatched behavior.
Setting a different policy should have worked; maybe the call is misplaced.
The below works:
import asyncio

import uvicorn

asyncio.set_event_loop_policy(asyncio.DefaultEventLoopPolicy())

async def app(scope, receive, send):
    print(asyncio.get_event_loop_policy())

if __name__ == "__main__":
    uvicorn.run("main:app", port=8010)
If we remove the line on which we set the policy, and we have uvloop installed (uvicorn automatically selects uvloop if installed), we can see that the uvloop policy is printed.
In case it's not misplaced, I recommend opening a discussion on the Uvicorn repository.
Disclaimer: I'm a maintainer of Uvicorn.
Thank you! After running several tests and rereading the Uvicorn documentation, I realized it was partly my own fault.
When I launched the Uvicorn HTTP server (without the --reload option) directly from a terminal (in my case PowerShell), I got the error:
error connecting in 'pool-1': Psycopg cannot use the 'ProactorEventLoop' to run in async mode. Please use a compatible event loop, for instance by setting 'asyncio.set_event_loop_policy(WindowsSelectorEventLoopPolicy())'
That's because starting the server this way never applied the policy I had set in my main.py file; Uvicorn simply started the server without caring about any of it, hence the error asking me to set the event loop policy to WindowsSelectorEventLoopPolicy (which I had done): the line defining my event loop policy never took effect.
So now, instead of launching my Uvicorn server directly in the terminal like this:
uvicorn src.main:app --port 2314
I execute my Python code like this:
python -m src.main
and I had to modify my code in main.py like this:
import asyncio
from asyncio import WindowsSelectorEventLoopPolicy
import uvicorn
from fastapi import FastAPI
...
asyncio.set_event_loop_policy(WindowsSelectorEventLoopPolicy())
app = FastAPI()
...
if __name__ == '__main__':
    uvicorn.run('src.main:app', host="127.0.0.1", port=2314)
The reason it worked without any problem when I ran my HTTP server in the terminal with the --reload option, like this:
uvicorn src.main:app --port 2314 --reload
is that, as @marcelo-trylesinski said, the --reload option automatically sets the event loop policy to WindowsSelectorEventLoopPolicy. I wish this were mentioned in the documentation.
Thanks again to @marcelo-trylesinski for his help.
I've deployed a Flask-SocketIO web server, but after installing zerorpc, which pulls in gevent, I'm facing a lot of trouble.
At first my code looked like this:
socketio.start_background_task(poll_events)
socketio.run(app, host="0.0.0.0", keyfile='key.pem', certfile='cert.pem')
I'm starting a background task that constantly reads from a queue and sends messages through Socket.IO. Now that gevent is installed, Flask-SocketIO tries to use it (which I'm actually fine with, since it turns my server into a production server rather than a development one), but then socketio.start_background_task blocks. So I read that
from gevent import monkey; monkey.patch_all()
is required.
So now my code looks like this:
socketio.start_background_task(poll_events)
WSGIServer(('0.0.0.0', 5000), app, keyfile='key.pem', certfile='cert.pem').serve_forever()
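(For reference, the ordering that seems to be recommended, with monkey.patch_all() running before any other imports, would look roughly like this; module and object names are placeholders for my own code:)

# serve_gevent.py: sketch of the recommended ordering; names are placeholders
from gevent import monkey
monkey.patch_all()  # patch before anything else is imported

from gevent.pywsgi import WSGIServer  # imported after patching on purpose
from web_app import app, socketio, poll_events  # placeholder module/object names

if __name__ == "__main__":
    socketio.start_background_task(poll_events)  # background queue reader
    WSGIServer(("0.0.0.0", 5000), app,
               keyfile="key.pem", certfile="cert.pem").serve_forever()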
For some reason, when debugging with PyCharm I received a lot of weird greenlet exceptions, and I also think some Socket.IO messages were being dropped, so I decided to switch to eventlet. Again, monkey patching is required, so my code looks like this:
socketio.start_background_task(poll_events)
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen(("0.0.0.0", 5000)), keyfile='key.pem', certfile='cert.pem'), app)
Because of the monkey patching, zerorpc throws an exception:
"gevent.exceptions.LoopExit: This operation would block forever"
What is the correct way to deploy a production server with flask + socketio + zerorpc?
I've resolved the issue: when debugging I choose "threading" as the async_mode
socketio = SocketIO(app, async_mode="threading")
and when deploying with gunicorn I use gevent
CMD ["gunicorn", "-w", "1", "-k", "gevent","--reload", "web_app:app"]
For some reason gevent doesn't work without gunicorn, and eventlet wouldn't work with zerorpc.
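In case it helps someone, one way to keep both setups in a single codebase is to pick async_mode from an environment variable, so local debugging uses threading and the Gunicorn deployment uses gevent. A rough sketch (the variable name SOCKETIO_ASYNC_MODE is made up):

import os

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)

# SOCKETIO_ASYNC_MODE is a made-up variable name; default to "threading" for local debugging
socketio = SocketIO(app, async_mode=os.environ.get("SOCKETIO_ASYNC_MODE", "threading"))

In the Docker image you would then set SOCKETIO_ASYNC_MODE=gevent and keep the gunicorn -w 1 -k gevent command.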
In development, Flask-SocketIO (4.1.0) with uWSGI works nicely with just one worker and standard initialization.
Now I'm preparing for production and want to make it work with multiple workers.
I've done the following:
Added redis message_queue in init_app:
socketio = SocketIO()
socketio.init_app(app,async_mode='gevent_uwsgi', message_queue=app.config['SOCKETIO_MESSAGE_QUEUE'])
(Sidenote: we are using redis in the app itself as well)
gevent monkey patching at top of the file that we run with uwsgi
from gevent import monkey
monkey.patch_all()
run uwsgi with:
uwsgi --http 0.0.0.0:63000 --gevent 1000 --http-websockets --master --wsgi-file rest.py --callable application --py-autoreload 1 --gevent-monkey-patch --workers 4 --threads 1
This doesn't seem to work. The connection rapidly alternates between connecting and getting 400 Bad Request responses. I suspect these correspond to the 'Invalid session ...' errors I see when I enable SocketIO logging.
Initially it was not using Redis at all:
redis-cli > PUBSUB CHANNELS *
returned an empty result even with workers=1.
It seemed the following (taken from another SO answer) fixed that:
# https://stackoverflow.com/a/19117266/492148
import gevent
import redis.connection
redis.connection.socket = gevent.socket
After doing so I got a "flask-socketio" pub/sub channel with updating data, but after returning to multiple workers the issue came back. Given that changing the Redis socket did seem to move things in the right direction, I feel the monkey patching isn't working properly yet, but the code I used matches every example I can find and sits at the very top of the file loaded by uWSGI.
You can run as many workers as you like, but only if you run each worker as a standalone single-worker uWSGI process. Once you have all those workers running, each on its own port, you can put nginx in front to load balance using sticky sessions. And of course you also need the message queue for the workers to use when coordinating broadcasts.
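For the broadcast part, Flask-SocketIO lets any process (a worker or an external script) emit through the shared queue; a rough sketch, where the Redis URL is an assumption:

from flask_socketio import SocketIO

# a write-only SocketIO instance bound to the same Redis queue the workers use
external_sio = SocketIO(message_queue="redis://localhost:6379/0")
external_sio.emit("status_update", {"ok": True}, namespace="/")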
Eventually found https://github.com/miguelgrinberg/Flask-SocketIO/issues/535
So it seems you can't have multiple workers with uWSGI either, as it needs sticky sessions. The documentation mentions that for Gunicorn, but I did not interpret it as extending to uWSGI.
I'm trying to deploy a flask app on heroku. I've gotten to the point where the app builds and deploys, but when I try to go to the URL, the app times out with the following error.
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
I think the problem is with my Procfile. It has one line:
web: python add_entry3.py
Other people have Procfiles that look like this:
web: gunicorn app:app
This is just a toy app and I don't care about performance so I don't think I need to use gunicorn for the web server. Should I be putting a colon and command after my app's file name (add_entry3.py)?
Most likely your Flask app isn't answering on the port and interface that Heroku expects. By default, Flask only listens on 127.0.0.1, and I think on port 5000. Heroku passes your app a PORT environment variable, and you'd need to tell Flask to listen on all interfaces.
But there are reasons other than performance to avoid Flask's default debug server for production code. It has memory leaks, there are security implications, and really ... just don't do it. Add gunicorn to your requirements.txt and use that.
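With gunicorn in your requirements, the Procfile would be something like this (assuming the Flask object inside add_entry3.py is named app):

web: gunicorn add_entry3:app

Recent Gunicorn versions default their bind address to Heroku's PORT environment variable when it is set, and you can also pass --bind 0.0.0.0:$PORT explicitly if you prefer.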
But if you must use the Flask test/debug server, change your app.run() call to something like this:
import os  # put this import at the top of the file

app.run(host='0.0.0.0', port=int(os.environ.get("PORT", 5000)))
I'm running a simple Bottle application with Gunicorn as the web server, and the application is working fine. My code:
from bottle import route, run, template

@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

run(server='gunicorn', workers=3)
The Problem
Now I would like to create my own Gunicorn config file and use it with Bottle. I want to add a lot of extra functionality to the Gunicorn workers (SSL, for example), and using a config file is a great way to do this.
I've tried this:
bottle.run(server='gunicorn', config="settings.py.ini")
AND
bottle.run(server='gunicorn', -c="settings.py.ini")
I know that on the CLI the settings file can be set as an extra option, like so:
-c CONFIG, --config CONFIG
gunicorn --config="settings.py.ini"
Does anyone know how to achieve the same thing when using the Bottle Gunicorn adapter?
Solved it by taking a different approach.
I'm using the code from this question: Bottle with Gunicorn
This makes it possible to run Gunicorn from the CLI and load the Bottle app in Gunicorn. Now I can use the CLI to run Gunicorn with a config file, and it works with Bottle. Bottle is now basically just a module that Gunicorn loads.
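For anyone landing here later, the pattern from that question is roughly: expose the Bottle application object at module level and let Gunicorn load it with its own config file, instead of calling bottle.run(). A sketch, with file and config names as placeholders:

# myapp.py: expose the Bottle default app as a plain WSGI object (names are placeholders)
import bottle
from bottle import route, template

@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

app = bottle.default_app()

Then start it from the CLI with a Gunicorn config file, for example:

gunicorn --config gunicorn_conf.py myapp:app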