So I am using Docker for this Python chat app project.
I originally had python manage.py runserver 0.0.0.0:8000 as my command in docker-compose.
I found that I should switch to gunicorn if I want to deploy my app on the web (e.g. on Heroku). The tutorial I found says to simply change the command in docker-compose to gunicorn myproject.wsgi -b 0.0.0.0:8000. I did that, and all the WebSocket connections broke. Sends fail because the WebSocket is still in the CONNECTING state, and after a while the handshake fails with a 404 status code. All the setup was the same as before, except that one line. Just wondering what else I need to change to make WebSockets work with gunicorn? Thanks
EDIT: after some digging on the internet, it seems that gunicorn isn't meant to serve WebSockets (the WSGI vs. ASGI difference, I suppose?). If anyone could suggest something I could use instead of gunicorn as the web server, it would be extremely appreciated. Or is there any way to run gunicorn with my Django Channels app still working? Thanks!!
When using ASGI (i.e. for WebSockets), you should use an asynchronous server, like Daphne or Uvicorn. The Django documentation has examples of how to deploy with both of them.
If you want to use Uvicorn directly, you could run something like:
uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000
You can also run Uvicorn through gunicorn using its worker class:
gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
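Applied to the docker-compose setup from the question, the command line might look like this (the service name web and the build context are assumptions):

```yaml
services:
  web:
    build: .
    # Replace the old runserver/WSGI command with the ASGI worker
    command: gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
    ports:
      - "8000:8000"
```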
Related
I don't quite understand how this works, and I've been searching everywhere but haven't quite found the answer.
So when I deployed a Django app on Heroku the other day, I was using Daphne with this in the Procfile:
daphne app.asgi:application --port $PORT --bind 0.0.0.0 -v2
The app works just fine, and that makes sense (I heard that Heroku dynamically assigns the port). But how does the following command know which port to bind? Isn't it always 8000 by default?
gunicorn app.asgi
Sorry for the silly question. I'm a newbie at devops stuff.
No, the port is whatever Heroku wants it to be. Since they're running many, many sites, they need to dynamically choose a port to serve each one on. They provide that port in the PORT environment variable, and when gunicorn is started without an explicit --bind, it falls back to binding to the port in that variable.
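If you prefer to be explicit rather than rely on the PORT default, you can spell out the bind in the Procfile yourself (a sketch, keeping the app module from the question):

```
web: gunicorn app.asgi --bind 0.0.0.0:$PORT
```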
I have been attempting to push my Flask app running Flask-SocketIO to Heroku, but to no avail. I have narrowed it down to the Procfile. I am constantly getting 503 server errors because clients can't connect to my program. It works just fine when I test it locally.
I have had a couple versions of the Procfile, which are
web: gunicorn -b 0.0.0.0:$PORT app:userchat_manager
and
web: python userchat_manager.py
where the userchat_manager file holds the socketio.run() call that runs the app. What would be the best way to fix this?
EDIT: I changed the Procfile to
web: gunicorn -b 0.0.0.0:$PORT app:app
and it loads. However, whenever I try to send a message, the message doesn't go through and I get a 400 code.
See the Deployment section of the documentation. The gunicorn web server is only supported when used alongside eventlet or gevent, and in both cases you have to use a single worker process.
If you want to drop gunicorn and run the native web server instead, you should code your userchat_manager.py script so that it reads the port the server should listen on from the PORT environment variable exposed by Heroku. If you go this route, I still think you should look into using eventlet or gevent: without an asynchronous framework the performance is pretty bad (no WebSocket support), and the number of clients that can be connected at the same time is very limited (just one client per worker).
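A stdlib-only sketch of that PORT lookup (the socketio.run call is left as a comment, since it depends on your app objects; the names are hypothetical):

```python
import os

def get_port(default=5000):
    # Heroku exposes the port to bind in the PORT environment variable;
    # fall back to a local default when it isn't set.
    return int(os.environ.get("PORT", default))

# At the bottom of userchat_manager.py you would then do something like
# (hypothetical names, assuming Flask-SocketIO):
#   socketio.run(app, host="0.0.0.0", port=get_port())
```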
Try this:
web: gunicorn --worker-class eventlet -w 1 your_module:app
You don't need a port to connect the socket; just use your Heroku app URL as the socket connection, without :PORT.
I've got a web app developed in Flask. The setup is simple. The app is running on gunicorn, and all requests are proxied through nginx. The Flask app itself makes HTTP requests to an external API; these requests are initiated by AJAX calls from the JavaScript code in the frontend. The external API returns data in JSON format to the Flask app, which passes it back to the frontend.
The problem is that when I run this app in development mode with the option multithreaded = True, I can see that the JSON data gets returned asynchronously to the server, and the result appears on the frontend page very quickly.
However, when I try to run the app in production mode with nginx and gunicorn, I see that the JSON data gets returned sequentially, quite slowly, one by one. It seems that for some reason the HTTP requests to the external API get blocked.
I use supervisor on Ubuntu Server 16.04. This is how I start gunicorn through supervisor:
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
It seems that gunicorn does not handle the requests asynchronously, although it should.
As an experiment, I ran the Flask app using its built-in WSGI server (NOT gunicorn) in development mode, with debug=True and multithreaded=True. All requests were still proxied through nginx. The JSON data returned much quicker, i.e. asynchronously (it seems the calls did not block).
I read gunicorn's documentation. It says that if I need to make calls to an external API, I should use async workers. I use them, but it doesn't work.
All the caching stuff was taken into account; you can assume I don't use any cache. I cleared it all when I checked the server setups.
What am I missing? How can I make gunicorn run as expected?
Thanks.
I actually solved this problem quite quickly but forgot to post the answer right away. The reason the gunicorn server did not process the requests asynchronously as I expected was very simple and stupid. Since I was managing gunicorn through supervisor, after I had changed the config to:
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
I forgot to run:
sudo supervisorctl reread
sudo supervisorctl update
It's simple, but not obvious. My mistake was expecting the config to update automatically after restarting my app on gunicorn with this command:
sudo supervisorctl restart my_app
Yes, it restarts the app, but it does not reload gunicorn's config.
I'm trying to run my Flask app with gunicorn on my Raspberry Pi. I've set up my router to port-forward to localhost:5000. This works well when I run my Flask app via python manage.py runserver: I can use a browser on any device, type http://**.**.***.***:5000/, and it will load my Flask application. However, when I try to run the app via gunicorn, I get an error-connecting page. I run gunicorn exactly the way the Flask documentation says to, and if I check gunicorn's logs I can see the HTML being rendered. Here's the kicker: when I run the app with gunicorn locally (gunicorn -w 2 -b localhost:5000 my_app:app), it works just fine. I have Optimum Online; my router settings are as follows...
protocol -> all
port -> 5000
forward port to -> same as incoming port
host -> raspberrypi
locate device by -> ipaddress
Like I said, these settings work just fine from my Pi when I use Python's built-in WSGI server. Gunicorn also works just fine when I run it locally, and I can see my app when I type localhost:5000 in the browser. It's only when I set it up on my Pi and try to access the page with the external IP that it fails; if I don't use gunicorn, the external IP works just fine. I can't figure it out. Any ideas?
You need to have Gunicorn listen on 0.0.0.0 (all network interfaces). This then means it will be listening on an externally accessible IP address.
There is more information on the difference between localhost and 0.0.0.0 in this post on ServerFault.
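Concretely, taking the command from the question, that would be (a sketch; the worker count and port are just carried over from your local invocation):

```shell
gunicorn -w 2 -b 0.0.0.0:5000 my_app:app
```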
So, I have looked around Stack Overflow and other sites but haven't been able to solve this problem, hence this question!
I have recently started learning Django and am now trying to run it on EC2.
I have an EC2 instance of this format: ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com, on which I have a Django app running. I changed the security group of this instance to allow HTTP connections on port 80.
I did try to run the Django app the following ways: python manage.py runserver 0.0.0.0:8000 and python manage.py runserver ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com:8000, and neither seems to help!
To make sure that there is nothing faulty on Django's side, I opened another terminal window, ssh'ed into the instance, and did a curl GET request to localhost:8000/admin, which went through successfully.
Where am I going wrong? Will appreciate any help!
You are running the app on port 8000, but that port isn't open on the instance (you only opened port 80).
So either close port 80 and open port 8000 from the security group, or run your app on port 80.
Running any application on a port lower than 1024 requires root privileges, so if you try to do python manage.py runserver 0.0.0.0:80 as a normal user, you'll get an error.
Instead of doing sudo python manage.py runserver 0.0.0.0:80, you have a few options:
Run a pre-configured AMI image for Django (like this one from Bitnami).
Configure a front-end server to listen on port 80 and proxy requests to your Django application. The common stack here is nginx + gunicorn + supervisor, and this blog post explains how to set that up (along with a virtual environment, which is always a good habit to get into).
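A minimal sketch of such an nginx front end, assuming gunicorn is listening on 127.0.0.1:8000 (both the server name and the upstream port are assumptions):

```nginx
server {
    listen 80;
    server_name ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com;

    location / {
        # Pass requests through to the gunicorn process
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```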
Make sure to include your IPv4 public IP address in the ALLOWED_HOSTS setting in your Django project's settings.py...
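For example, in settings.py (the values below are placeholders carried over from the question, not real addresses):

```python
# settings.py -- substitute your instance's actual public IP and DNS name
ALLOWED_HOSTS = [
    "xx.xxx.xx.xxx",
    "ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com",
]
```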