I have a Flask application I want to run with the Gunicorn server instead of Werkzeug (even in development). But because the app is created by a create_app factory function, Gunicorn can't be started from the command line with my_module:my_app. In addition, I have a manage.py script written with the Flask-Script extension to run the server and perform some other operations.
I've tried subclassing gunicorn.app.wsgiapp.WSGIApplication in the style of the solution proposed in "How to use Flask-Script and Gunicorn", but the app_uri attribute is not found on my app object.
Does anyone have an idea how to do this?
You're missing the same obvious thing I once did. ;)
gunicorn 'my_module:create_app()'
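For reference, here is a minimal sketch of the kind of factory that command expects; the module name and the route are illustrative, not from the question:

```python
# my_module.py -- a minimal application factory (illustrative)
from flask import Flask


def create_app():
    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    return app
```

With this layout, gunicorn 'my_module:create_app()' calls the factory at startup and serves the app it returns.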
I solved this problem using the recipe in the Custom Application section of the Gunicorn documentation. The basic idea is that you subclass gunicorn.app.base.BaseApplication and override load_config and load.
I have a very simple API (2 routes) that only serves GET requests and doesn't need any authentication or anything for now.
I want to know the best and most appropriate way to deploy my API for production. I am unable to use Docker and would like to do it the server way.
So I have a few questions:
The FastAPI documentation says you can run uvicorn main:app --host 0.0.0.0 --port 80, but is that the correct way for production? Do I just enter that command, and will the API automatically start listening on the server's IP address? Also, is this method efficient, and will it be able to handle all the requests? Or what would I change to make it faster?
When should I use a process manager?
When should I use multiple workers? And what benefits do they provide?
When should I use Gunicorn, as mentioned here? https://www.uvicorn.org/deployment/#gunicorn
I am just a little confused about how to deploy this, because one article says to do one thing and another says something else.
If for whatever reason you don't want to use Docker CE, the best way is to create a systemd service unit for your application, so that every time it goes down systemd will try to restart it, and then serve it with a WSGI server such as Gunicorn.
This link can help with systemd services too:
https://blog.miguelgrinberg.com/post/running-a-flask-application-as-a-service-with-systemd
P.S. Note that the way you serve with Gunicorn isn't really tied to Docker or a systemd service; for both approaches you need to configure Gunicorn.
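As a sketch, such a unit file might look like the following; the paths, user, and module name are assumptions you would adapt to your setup:

```ini
# /etc/systemd/system/myapi.service -- illustrative unit file
[Unit]
Description=Gunicorn instance serving the FastAPI app
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/myapi
ExecStart=/srv/myapi/venv/bin/gunicorn main:app \
    --workers 4 \
    --worker-class uvicorn.workers.UvicornWorker \
    --bind 127.0.0.1:8000
Restart=always

[Install]
WantedBy=multi-user.target
```

After saving it, run systemctl daemon-reload and then systemctl enable --now myapi.service to start it and have it come back on reboot.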
To answer your question:
How can I deploy FastAPI manually on an Ubuntu server?
You can check out this video tutorial on how to
Deploy FastAPI on Ubuntu
The deployment has the following architecture, all within a single Ubuntu VM, as shown in the architectural diagram.
Within the Ubuntu VM, there are two systemd services, caddy.service and gunicorn.service, up and running. The gunicorn.service runs the FastAPI application using the uvicorn.workers.UvicornWorker worker class, and the caddy.service exposes it behind a reverse proxy. In addition, the FastAPI app communicates with a PostgreSQL database server asynchronously with the help of the databases package, which provides simple asyncio support for PostgreSQL.
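As a sketch, the caddy.service side of that architecture boils down to a Caddyfile like the following (the domain and port are illustrative):

```
# Caddyfile -- reverse proxy from Caddy to Gunicorn (illustrative values)
api.example.com {
    reverse_proxy 127.0.0.1:8000
}
```

Caddy then handles TLS and forwards requests to the Gunicorn socket, which never needs to be exposed publicly.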
I have an existing application written with Python 3's built-in threading-based HTTP server. Is it possible to use Gunicorn on top of that?
I know that for a Flask application, we create a WSGI file or pass an application instance to the gunicorn command.
Is there any similar way?
I presently use system-wide mutexes to handle multiprocessing in my Flask application.
Due to the GIL, and ultimately the fact that multiprocessing will already provide me with concurrency, I'd rather not have to worry about multithreading in my application as well.
Can I get the Flask development server to run single threaded?
As an aside, if I deploy using Gunicorn, can this do the same (i.e. running multiple processes, all of which are single threaded)?
You can run your application with Gunicorn using the --workers and --threads parameters:
gunicorn --workers=5 --threads=1 main:app
This means all five workers will run single-threaded.
After looking at the source code, I see that Flask has a --without-threads option, which was added as a result of this bug report.
. . .
flask run --without-threads . . .
As far as I can tell, the Flask documentation hasn't been updated to reflect this fix, so the best documentation is in the bug report itself. You can query this property at run time via flask.request.is_multithread.
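As a quick sketch, you can expose that property from a route (the route name here is illustrative) to see how the current server was started:

```python
# Illustrative Flask app exposing request.is_multithread, which reports
# whether the WSGI server handling the current request is multithreaded.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/threading-info")
def threading_info():
    return jsonify(multithreaded=request.is_multithread)
```

Hitting /threading-info under flask run with and without --without-threads will show the flag flipping.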
The Flask development server started via app.run() is single-threaded by default (the flask run CLI enables threading unless you pass --without-threads), and yes, you can get the same behaviour from Gunicorn with the --workers and --threads flags:
gunicorn --workers=8 --threads=1 main:app
I'm trying to understand Pyramid's behaviour regarding the [server:main] configuration and Gunicorn.
If I use pserve, it uses the [server:main] configuration for both Waitress and Gunicorn. For example:
# development.ini
[server:main]
use = egg:waitress#main
listen = *:6543
So now, $ pserve development.ini launches the project with Waitress, which is expected. But if I use the $ gunicorn command (with either gunicorn or waitress in the ini file) it works as well, which I did not expect.
My questions are:
Why does this configuration work if I run the command $ gunicorn --paste development.ini?
What happens under the hood? Is Waitress involved at all? (Judging by the processes on my machine, I'd say it isn't.)
There are two independent pieces of configuration required to start serving requests for any WSGI app.
1) Which WSGI app to use.
2) Which WSGI server to use.
These pieces are handled separately and can be configured in different ways depending on your setup. The ini file format is defined by the PasteDeploy library and gives a consumer of the format a way to determine both the app config and the server config.
However, when you run gunicorn --paste foo.ini, you're already telling Gunicorn that you want the Gunicorn server (not Waitress), so it ignores the server section and uses the file only to load the app. (Gunicorn has other ways to load the app as well, but I'll ignore that complexity for now since that part is working for you.) Any server config for Gunicorn needs to be supplied separately; it does not read the [server:main] section when you run gunicorn from the CLI.
Alternatively, you can start your app with pserve, which does use the server section to determine which server to run, but in your current setup that would launch Waitress instead of Gunicorn.
So, after lots of reading and testing, I have to conclude that:
defining [server:main] is mandatory for a Pyramid application
even if you run the application with gunicorn, you still have to define [server:main]
gunicorn will ignore the use attribute, but Pyramid will check that the egg exists
gunicorn will use the rest of the settings (if any), but they take lower priority than command-line arguments or the config file
The reason behind this behaviour is still confusing to me, but at least I can work with it. Any other hints would be much appreciated.
I have a Django application that I serve using Gunicorn, following the method prescribed on the Gunicorn site: embedding Gunicorn into my Django application.
I'm trying to set up a proxy in my application so that when you go to "http://mysite.com/proxy/", it proxies you to "http://mysite.com:8100".
I know I can do that with Apache and other web servers. For several reasons I would prefer to do it directly with Gunicorn/Django; one of these reasons is keeping everything in the same place.
My question is: what is the best way to do that? Also, is it a terrible idea altogether?
Thanks.
You could deploy a proxy application into your Gunicorn installation, such as WSGIProxy.
I've written dj-revproxy for easy integration of a proxy in Django. Bonus point: it uses restkit, which uses the Gunicorn HTTP engine. (I'm one of the Gunicorn authors.) More info here:
https://github.com/benoitc/dj-revproxy