I don't quite understand how this works, and I've been searching everywhere but didn't quite find the answer.
So when I deployed a Django app on Heroku the other day, I was using Daphne with this in the Procfile:
daphne app.asgi:application --port $PORT --bind 0.0.0.0 -v2
The app works just fine and it makes sense (I heard that Heroku dynamically assigns the port). But how does the following command know which port to bind to? Isn't it always 8000 by default?
gunicorn app.asgi
Sorry for the silly question. I'm a newbie in devops stuff
No, the port is whatever Heroku wants it to be. Since they're running many, many sites, they dynamically choose a port to serve each one on and expose it in the PORT environment variable. Daphne gets it because the Procfile passes $PORT explicitly; gunicorn gets it because, when no --bind is given, its default bind is 0.0.0.0:$PORT whenever the PORT environment variable is set (and 127.0.0.1:8000 otherwise).
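For example, on Heroku these two Procfile lines end up bound to the same dynamically assigned port; this is only a sketch reusing the question's module path, and the second line just spells out what gunicorn already does when PORT is set:
web: gunicorn app.asgi
web: gunicorn app.asgi --bind 0.0.0.0:$PORT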
I uploaded a Flask project I built to a Windows server. I can run the project over localhost on the machine I connect to remotely, but I was asked to make the project accessible from any computer using the remote machine's IP address and port. What should I do?
You need to tell Flask to run on all interfaces, either with:
flask run -h 0.0.0.0
Or if you're launching via app.run, provide the host argument:
if __name__ == '__main__':
    app.run(host='0.0.0.0')
Of course if your machine has several interfaces, you could provide the IP of the specific interface instead of 0.0.0.0.
Bear in mind that the dev server is not meant for production. The above is fine if you want to access your dev server remotely, but you'll probably want to run with something like gunicorn eventually, in which case provide the IP:port combo with the --bind flag:
gunicorn --bind 0.0.0.0:5000 app:app
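If it helps, here is a minimal runnable sketch of the app.run approach; the route and the HOST/PORT environment-variable fallbacks are placeholders, not part of the original question:

import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from the remote server'

if __name__ == '__main__':
    # 0.0.0.0 listens on all interfaces so other machines can reach the dev server
    app.run(host=os.environ.get('HOST', '0.0.0.0'),
            port=int(os.environ.get('PORT', '5000')))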
So I am using Docker for this Python chat app project.
I originally had python manage.py runserver 0.0.0.0:8000 as my command in docker-compose.
I found that I should switch to gunicorn if I want to deploy my app on the web (like on Heroku). The tutorial I found says to simply change the command in docker-compose to gunicorn myproject.wsgi -b 0.0.0.0:8000. I did that, and all the WebSocket connections broke. Sends fail because the WebSocket is still in the CONNECTING state, and after a while the handshake fails with a status code of 404. All the setup was the same as before, except that one line. Just wondering what else I need to change to make WebSockets work with gunicorn? Thanks
EDIT: after some digging on the internet, it seems that gunicorn wasn't meant to be run with WebSockets (the WSGI vs. ASGI difference, I suppose?). If anyone could tell me what I could use instead of gunicorn as the web server it would be extremely appreciated, or if there's any way I can run gunicorn with my Django Channels still working? Thanks!!
Since WebSockets need ASGI, you should use an asynchronous server such as Daphne or Uvicorn; the Django documentation has examples of how to deploy with both of them.
If you want to use Uvicorn directly, you could do something like:
uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000
You can also run uvicorn through gunicorn using the worker class:
gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
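In docker-compose terms that means replacing the command line; roughly something like this, assuming your ASGI module is myproject.asgi (a sketch, not your exact setup):
command: daphne -b 0.0.0.0 -p 8000 myproject.asgi:application
or, keeping gunicorn as the process manager:
command: gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000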
I have a production-ready application that I've installed on a VM with CentOS. All dependencies and all other settings are up and running, and all that's left for me is to properly configure the gunicorn server and run it with a start.sh script to begin routing web traffic to the app.
However, I'm not sure how I can have gunicorn handle the SSL layer itself. I'd prefer to have gunicorn handle SSL, rather than the load balancers, to keep deployments simple and streamlined.
I've got a my_site.ca-bundle file from an SSL validator.
My bash script looks something like this, based on the documentation here and referenced in this Stack Overflow question:
#!/bin/bash
exec gunicorn -w3 --certfile=my_site.crt --keyfile=my_site.key myapp.wsgi:application
However, how do I use the ca-bundle file given these settings referenced in the documentation? I don't actually have my_site.csr and my_site.key, since I think both the private and public keys are inside the ca-bundle file.
Sorry for the super-noob question, this is my first time setting up SSL by hand rather than through load balancers. Is there a separate gunicorn setting for just a ca-bundle file, like AWS has?
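For what it's worth, gunicorn's documented SSL settings include a --ca-certs option alongside --certfile and --keyfile, so a hedged sketch of the start script might look like the following. The file names are placeholders, and this assumes the bundle is the CA chain rather than a key; it does not answer where the private key comes from:

#!/bin/bash
# sketch only: which file goes where depends on what the CA actually shipped
exec gunicorn -w 3 \
    --certfile=my_site.crt \
    --keyfile=my_site.key \
    --ca-certs=my_site.ca-bundle \
    myapp.wsgi:application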
I have been attempting to push my Flask app running SocketIO to Heroku, but to no avail. I have narrowed it down to the Procfile. I am constantly getting 503 server errors because my program isn't accepting connections. I tested it locally and it works just fine.
I have had a couple of versions of the Procfile, which are
web: gunicorn -b 0.0.0.0:$PORT app:userchat_manager
and
web: python userchat_manager.py
where the userchat_manager file holds the SocketIO.run() call to run the app. What would be the best way to fix this?
EDIT: I changed the Procfile to
web: gunicorn -b 0.0.0.0:$PORT app:app
and it loads. However, whenever I try to send a message, the message doesn't send and I get a 400 status code.
See the Deployment section of the documentation. The gunicorn web server is only supported when used alongside eventlet or gevent, and in both cases you have to use a single worker process.
If you want to drop gunicorn and run the native web server instead, you should code your userchat_manager.py script so that it reads the port the server should listen on from the PORT environment variable exposed by Heroku. If you go this route, I still think you should look into using eventlet or gevent; without an asynchronous framework the performance is pretty bad (no WebSocket support), and the number of clients that can be connected at the same time is very limited (just one client per worker).
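A minimal sketch of that approach (the app and setup here are placeholders, not your actual userchat_manager.py):

import os
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    # Heroku tells the dyno which port to listen on via the PORT environment variable
    port = int(os.environ.get('PORT', '5000'))
    socketio.run(app, host='0.0.0.0', port=port)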
Try this:
web: gunicorn --worker-class eventlet -w 1 your_module:app
You don't need a port to connect the socket; just use your Heroku app URL as the socket connection, without :PORT.
So, I have looked around Stack Overflow and other sites, but haven't been able to solve this problem, hence posting this question!
I have recently started learning Django... and am now trying to run it on EC2.
I have an EC2 instance of this format: ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com, on which I have a Django app running. I changed the security group of this instance to allow HTTP connections on port 80.
I did try to run the Django app the following ways: python manage.py runserver 0.0.0.0:8000 and python manage.py runserver ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com:8000, and that doesn't seem to be helping either!
To make sure that there is nothing faulty on Django's side, I opened another terminal window, ssh'ed into the instance, and made a curl GET request to localhost:8000/admin, which went through successfully.
Where am I going wrong? Will appreciate any help!
You are running the app on port 8000, but that port isn't open on the instance (you only opened port 80).
So either close port 80 and open port 8000 from the security group, or run your app on port 80.
Running any application on a port that is less than 1024 requires root privileges; so if you try to do python manage.py runserver 0.0.0.0:80 as a normal user, you'll get an error.
Instead of doing sudo python manage.py runserver 0.0.0.0:80, you have a few options:
Run a pre-configured AMI image for django (like this one from bitnami).
Configure a front-end server to listen on port 80, and then proxy requests to your Django application. The common stack here is nginx + gunicorn + supervisor, and this blog post explains how to set that up (along with a virtual environment, which is always a good habit to get into); see the sketch after this list for the gunicorn side.
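With the nginx + gunicorn stack, the usual pattern is to keep gunicorn on an unprivileged local port (or a unix socket) and have nginx listen on port 80 and proxy to it. A rough sketch of the gunicorn side, with myproject.wsgi as a placeholder module path:

gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3

nginx then forwards incoming port-80 traffic to 127.0.0.1:8000.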
Make sure to include your IPv4 public IP address in the ALLOWED_HOSTS setting in your Django project's settings.py...
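For example, a sketch using the placeholder address from the question:

ALLOWED_HOSTS = ['xx.xxx.xx.xxx', 'ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com']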