I have been attempting to push my Flask app running SocketIO to Heroku, but to no avail. I have narrowed it down to the Procfile. I keep getting 503 server errors because nothing can connect to my app. It works just fine when I test it locally.
I have tried a couple of versions of the Procfile:
web: gunicorn -b 0.0.0.0:$PORT app:userchat_manager
and
web: python userchat_manager.py
where the userchat_manager file holds the SocketIO.run() call that runs the app. What would be the best way to fix this?
EDIT: I changed the Procfile to
web: gunicorn -b 0.0.0.0:$PORT app:app
and it loads. However, whenever I try to send a message, the message doesn't send and I get a 400 status code.
See the Deployment section of the documentation. The gunicorn web server is only supported when used alongside eventlet or gevent, and in both cases you have to use a single worker process.
If you want to drop gunicorn and run the native web server instead, you should code your userchat_manager.py script so that it reads the port the server should listen on from the PORT environment variable exposed by Heroku. If you go this route, I still think you should look into using eventlet or gevent: without an asynchronous framework the performance is pretty bad (no WebSocket support), and the number of clients that can be connected at the same time is very limited (just one client per worker).
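For the native-server route, reading the port might look like this (a minimal sketch; the `userchat_manager` and `socketio` names are assumed from the question):

```python
import os

def get_port(default=5000):
    # Heroku tells the dyno which port to bind to via the PORT env variable
    return int(os.environ.get("PORT", default))

# In userchat_manager.py you would then do something like:
#
#     from app import app, socketio
#     socketio.run(app, host="0.0.0.0", port=get_port())
```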
Try this:
web: gunicorn --worker-class eventlet -w 1 your_module:app
You don't need a port to connect the socket; just use your Heroku app URL as the socket connection, without :PORT.
Related
A simple Flask app accepts requests and then makes calls to HTTPS endpoints. Using gunicorn with multiple worker processes leads to SSL failures.
Using flask run works perfectly, albeit slowly.
Using gunicorn --preload --workers 1 also works perfectly, albeit slowly.
Changing to gunicorn --preload --workers 10 very frequently fails with [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC], which leads me to think there is some per-connection state being messed up. But gunicorn is supposed to fork before it begins serving requests.
Ideas?
I was using --preload to avoid having each worker retrieve the initial OAuth context used in some of the HTTPS web API calls. The rule of thumb is: when fork() is involved (as it is in gunicorn), you really need to understand what happens to SSL state. With --preload, that connection state was created in the master before the fork, so the workers ended up sharing it, and a TLS connection is not safe to share across processes.
The solution was to disable preloading and do the OAuth setup individually in each worker.
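A sketch of that per-worker setup using gunicorn's post_fork server hook in gunicorn.conf.py; the oauth_login() helper here is a hypothetical stand-in for the real token fetch:

```python
# gunicorn.conf.py

def oauth_login():
    # hypothetical stand-in: in the real app this would open an HTTPS
    # session and fetch an OAuth token for this worker only
    return {"token": "worker-local-token"}

def post_fork(server, worker):
    # gunicorn calls this hook in each worker just after forking, so any
    # TLS/socket state created here is never shared between processes
    worker.oauth = oauth_login()
```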
I am using Docker for this Python chat app project.
I originally had python manage.py runserver 0.0.0.0:8000 as my command in docker-compose.
I found that I should switch to gunicorn if I want to deploy my app on the web (e.g. on Heroku). The tutorial I found says to simply change the command in docker-compose to gunicorn myproject.wsgi -b 0.0.0.0:8000. I did that, and all the WebSocket connections broke. Sends fail because the WebSocket is still in the CONNECTING state, and then after a while the handshake fails with a 404 status code. All the setup was the same as before, except that one line. What else do I need to change to make WebSockets work with gunicorn? Thanks
EDIT: after some digging on the internet, it seems gunicorn isn't meant to serve WebSockets (the WSGI/ASGI difference, I suppose?). If anyone could tell me what I could use instead of gunicorn as the web server, or whether there is any way to run gunicorn with my Django Channels setup still working, it would be extremely appreciated. Thanks!!
WebSockets need ASGI, so you should use an asynchronous server such as Daphne or Uvicorn; the Django documentation has examples of how to deploy with both of them.
If you want to use uvicorn directly you could do something like:
uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000
You can also run uvicorn through gunicorn using the worker class:
gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
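If you are on Django Channels, the application object that uvicorn or daphne loads is typically a ProtocolTypeRouter. A sketch of myproject/asgi.py as a configuration module, assuming your WebSocket URL patterns live in a hypothetical myproject.routing module:

```python
# myproject/asgi.py (sketch; module names are assumptions)
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter

import myproject.routing  # defines websocket_urlpatterns

application = ProtocolTypeRouter({
    # ordinary HTTP requests go to the regular Django ASGI app
    "http": get_asgi_application(),
    # WebSocket connections go to your Channels consumers
    "websocket": AuthMiddlewareStack(
        URLRouter(myproject.routing.websocket_urlpatterns)
    ),
})
```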
I've got a web app developed in Flask. The setup is simple. The app runs on Gunicorn. All requests are proxied through nginx. The Flask app itself makes HTTP requests to an external API. The HTTP requests from the Flask app to the external API are initiated by AJAX calls from the JavaScript code in the frontend. The external API returns data in JSON format to the Flask app, which then passes it back to the frontend.
The problem is that when I run this app in development mode with the option threaded=True, I can see that the JSON data gets returned to the server asynchronously, and the result shows up on the frontend page very quickly.
However, when I try to run the app in production mode with nginx and gunicorn, I see that the JSON data gets returned sequentially, quite slowly, one item at a time. It seems that for some reason the HTTP requests to the external API get blocked.
I use supervisor on Ubuntu Server 16.04. This is how I start gunicorn through supervisor:
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
It seems that gunicorn does not handle the requests asynchronously, although it should.
As an experiment, I ran the Flask app using its built-in WSGI server (NOT gunicorn) in development mode, with debug=True and threaded=True. All requests were still proxied through nginx. The JSON data returned much more quickly, i.e. asynchronously (it seems the calls did not block).
I read gunicorn's documentation. It says that if I need to make calls to external APIs, I should use async workers. I use them, but it doesn't work.
Caching has been ruled out: assume I don't use any cache. I cleared it all when I checked the server setups.
What am I missing? How can I make gunicorn run as expected?
Thanks.
I actually solved this problem quite quickly and forgot to post the answer right away. The reason gunicorn did not process the requests asynchronously as I expected was very simple and stupid. Since I was managing gunicorn through supervisor, after I changed the config to:
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
I forgot to run:
sudo supervisorctl reread
sudo supervisorctl update
It's simple, but not obvious. My mistake was expecting the config to update automatically after restarting my app on gunicorn with this command:
sudo supervisorctl restart my_app
Yes, it restarts the app, but it does not reload the gunicorn config.
I have a WSGI application (it's a Flask app, but that should be irrelevant, I think) running under a Gunicorn server at port 9077. The app has a /status endpoint, which is supposed to report 'OK' if the app is running. If it fails to report OK within a reasonable time, the whole container gets killed (by Kubernetes).
The problem is this: when the app is under very heavy load (which does happen occasionally), the /status endpoint can take a while to respond and the container sometimes gets killed prematurely. Is there a way to configure Gunicorn to always serve the /status endpoint in a separate thread? Perhaps even on a different port? I would appreciate any hints or ideas for dealing with this situation.
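One workaround, sketched below with only the standard library (this is not a Gunicorn feature): serve the health check from a tiny HTTP server on a separate port, started in a background thread at worker boot. Note the thread still shares the worker's CPU, so under true CPU saturation a separate sidecar process is more robust:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = b"OK"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep health checks out of the access log

def start_health_server(host="0.0.0.0", port=9078):
    # port=0 picks a free port; the chosen one is in server.server_port
    server = HTTPServer((host, port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

You would call start_health_server() once per worker (e.g. from a gunicorn post_fork hook) and point the Kubernetes probe at the separate port.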
I've never worked with Gunicorn, and I'm not sure whether it supports this feature.
But with uWSGI, when I know the app is going to be under heavy load,
I run uwsgi with --processes (it can also run in multithreaded mode, or both).
uWSGI just spins up multiple instances of the Flask app and acts as a load balancer; there is no need for different ports, uWSGI takes care of everything.
You are no longer bound by the GIL, and your app uses all the resources available on the machine.
documentation about uWSGI concurrency
a quick tutorial on how to set up a Flask app with uWSGI and nginx (you can skip the nginx part)
Here is an example of the config file I use.
[uwsgi]
module = WSGI:app
master = true
processes = 16
die-on-term = true
socket = 0.0.0.0:8808
protocol = http
uwsgi --daemonize /path/to/uwsgi.log --ini my_uwsgi_conf.ini
(note that --daemonize takes the path of a log file)
I can easily achieve 1000 calls/sec when it's running that way.
hope that helps.
P.S. Another solution: spin up more containers running your app and put them behind nginx to load-balance.
I'm trying to run my Flask app with gunicorn on my Raspberry Pi. I've set up my router to forward port 5000. This works well when I run my Flask app via python manage.py runserver: I can use a browser on any device, type http://**.**.***.***:5000/, and it loads my Flask application. However, when I try to run the app via gunicorn, I get a connection error page. I run gunicorn exactly as the Flask documentation says to. If I check gunicorn's logs, I can see the HTML being rendered. Here's the kicker: when I run the app with gunicorn locally (gunicorn -w 2 -b localhost:5000 my_app:app), it works just fine. I have Optimum Online; my router settings are as follows...
protocol -> all
port -> 5000
forward port to -> same as incoming port
host -> raspberrypi
locate device by -> ipaddress
Like I said, these settings work just fine from my Pi when I use Python's built-in WSGI server. Gunicorn works just fine when I run it locally, and I can see my app when I type localhost:5000 in the browser; it's only when I set it up on my Pi and try to access the page with the external IP that it fails. If I don't use gunicorn, the external IP works just fine. I can't figure it out. Any ideas?
You need to have Gunicorn listen on 0.0.0.0 (all network interfaces). This then means it will be listening on an externally accessible IP address.
There is more information on the difference between localhost and 0.0.0.0 in this post on ServerFault.
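With the invocation from the question (the my_app:app module name is copied from there), that means changing only the bind address:

```shell
# listen on all network interfaces instead of only the loopback interface
gunicorn -w 2 -b 0.0.0.0:5000 my_app:app
```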