Can we use Python's ThreadingHTTPServer with Gunicorn?

I have an existing application written with Python 3's default threading-based HTTP server (ThreadingHTTPServer). Is it possible to use Gunicorn on top of that?
I know that in a Flask application, we create a wsgi file or hand an instance of the application to the gunicorn command (sketched below).
Is there any similar way here?
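For reference, the Flask pattern the question alludes to looks roughly like this (module and variable names are illustrative, not from the original post):

# wsgi.py -- a minimal Flask entry point; Gunicorn is then pointed at "wsgi:app"
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

Started with gunicorn wsgi:app. Gunicorn is a WSGI server, so an app built directly on ThreadingHTTPServer and BaseHTTPRequestHandler would need its handlers exposed through a WSGI callable like app above before Gunicorn could serve it.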

Related

How can I deploy FastAPI manually on an Ubuntu server?

I have a very simple API (2 routes) which just has GET requests and doesn't need any authentication or anything for now.
I want to know the best and most appropriate way to deploy my API for production. I am unable to use Docker and would like to do it the server way.
So I have a few questions:
The FastAPI documentation says you can run uvicorn main:app --host 0.0.0.0 --port 80, but is that the correct way for production? Do I just enter that command, and will the API automatically start listening on the server's IP address? Also, is this method efficient, and will it be able to handle all the requests? Or what would I change to make it faster?
When should I use a process manager?
When should I use multiple workers? And what benefits do they provide?
When should I use Gunicorn as mentioned here? https://www.uvicorn.org/deployment/#gunicorn
I am just a little confused about how to deploy this, because one article says one thing and another says something else.
If for whatever reason you don't want to use Docker CE, the best way is to create a systemd service unit for your application, so that every time it goes down systemd will try to restart it, and then run it with a server like uWSGI or Gunicorn.
This link can help with systemd services too:
https://blog.miguelgrinberg.com/post/running-a-flask-application-as-a-service-with-systemd
P.S. Note that the way you serve with Gunicorn isn't really related to Docker or the systemd service; for both approaches you need to configure Gunicorn.
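For illustration, a minimal sketch of such a unit, assuming the app lives in /srv/myapi with a virtualenv at /srv/myapi/venv and a FastAPI app object in main.py (all paths and names are made up for the example):

# /etc/systemd/system/myapi.service
[Unit]
Description=Gunicorn instance serving myapi
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/myapi
ExecStart=/srv/myapi/venv/bin/gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 127.0.0.1:8000
Restart=always

[Install]
WantedBy=multi-user.target

After saving it, systemctl daemon-reload followed by systemctl enable --now myapi.service starts the service and keeps restarting it on failure.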
To answer your question:
How can I deploy FastAPI manually on an Ubuntu server?
You can check out this video tutorial on how to deploy FastAPI on Ubuntu.
The deployment has the following architecture within a single Ubuntu VM:
[Architecture diagram: single-VM FastAPI deployment with Caddy and Gunicorn]
Within the Ubuntu VM, two systemd services are up and running: caddy.service and gunicorn.service. The gunicorn.service runs the FastAPI application (using the uvicorn.workers.UvicornWorker worker class), and the caddy.service exposes it as a reverse proxy. In addition, the FastAPI application communicates with a PostgreSQL database server asynchronously, with the help of the databases package, which provides simple asyncio support for PostgreSQL.
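As a sketch of the reverse-proxy side, the Caddyfile for such a setup could look like this (the domain and upstream port are assumptions for the example):

# Caddyfile -- terminate TLS and forward traffic to the Gunicorn process
example.com {
    reverse_proxy 127.0.0.1:8000
}

Caddy then handles certificates automatically and proxies requests to the application kept alive by gunicorn.service.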

APScheduler issues with uWSGI

I have a Python Flask app with APScheduler that checks whether there is a change in an online spreadsheet (Smartsheet). I am caching the contents of the spreadsheet in a dictionary (in memory).
Everything works fine with the built-in Flask development web server, as well as when running with the uwsgi command.
However, when I serve the application with nginx + uWSGI, it seems that the scheduler does not work.
My questions are: will uWSGI spawn multiple copies of the Flask app?
And how do I use a uWSGI mule to trigger a function in the main Flask app?
Any pointers are much appreciated.
Thanks and regards,
Hantzley

Pyramid, Gunicorn and Waitress

I'm trying to understand the behaviour of Pyramid regarding the [server:main] configuration and gunicorn.
If I use pserve, it'll use the configuration from [server:main], for both waitress and gunicorn. For example:
# development.ini
[server:main]
use = egg:waitress#main
listen = *:6543
So now, $ pserve development.ini will launch the project with waitress, which is expected. But if I use the command $ gunicorn (with either gunicorn or waitress in the ini file) it works as well, which I did not expect.
My questions are:
why does this configuration work if I run the command $ gunicorn --paste development.ini?
what happens under the hood? Is waitress doing the work? (I would say it's not, judging by the processes on my machine)
There are two independent pieces of configuration required to start serving requests for any WSGI app:
1) which WSGI app to use, and
2) which WSGI server to use.
These pieces are handled separately and can be set up in different ways. The ini file format is defined by the PasteDeploy library and provides a way for a consumer of the format to determine both the app config and the server config.
However, when using gunicorn --paste foo.ini you're already telling gunicorn that you want to use the gunicorn server (not waitress), so it ignores the server section and focuses only on loading the app. Gunicorn actually has other ways to load the app as well, but I'll ignore that complexity for now since that part is working for you.
Any server config for gunicorn needs to be done separately; gunicorn does not read the [server:main] section when you run it from the CLI (see the sketch below). Alternatively, you can start your app using pserve, which does use the server section to determine which server to use, but in your current setup that would run waitress instead of gunicorn.
So, after lots of reading and testing, I have to conclude that:
defining [server:main] is mandatory for a Pyramid application
if you are running the application with gunicorn, you still have to define [server:main]
gunicorn will ignore the use attribute, but Pyramid will check that the egg exists
gunicorn will use the rest of the settings (if any), but they have lower priority than the command-line arguments or the Gunicorn config file
The reason behind this behaviour is still confusing to me, but at least I can work with it. Any other hints would be much appreciated.

Run Gunicorn from Python with Flask-Script

I have a Flask application I want to run with the Gunicorn server instead of Werkzeug (even in development). But since the app is created by a create_app factory function, Gunicorn can't be started from the command line with my_module:my_app. In addition, I have a manage.py script written with the help of the Flask-Script extension to run the server and perform some other operations.
I've tried to inherit from gunicorn.app.wsgiapp.WSGIApplication in the same style as the solution proposed here: How to use Flask-Script and Gunicorn, but the app_uri attribute is not found on my app object.
Does anyone have an idea of how to do this?
You're missing the same obvious thing I once did. ;)
gunicorn 'my_module:create_app()'
I solved this problem using the recipe in the Custom Application section of the Gunicorn documentation. The basic idea is that you subclass gunicorn.app.base.BaseApplication and override load_config and load, as in the sketch below.
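For reference, a sketch along the lines of that recipe, assuming a create_app factory in my_module (names and option values are illustrative):

# run_gunicorn.py -- programmatic Gunicorn startup via the Custom Application recipe
from gunicorn.app.base import BaseApplication

from my_module import create_app  # assumed application factory


class StandaloneApplication(BaseApplication):
    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super().__init__()

    def load_config(self):
        # copy recognised options into Gunicorn's config object
        for key, value in self.options.items():
            if key in self.cfg.settings and value is not None:
                self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


if __name__ == "__main__":
    options = {"bind": "127.0.0.1:8000", "workers": 2}
    StandaloneApplication(create_app(), options).run()

A Flask-Script manage.py command can then instantiate and run StandaloneApplication instead of starting the Werkzeug server.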

Is there a way to deploy new code with Tornado/Python without restarting the server?

I've recently started to experiment with Python and the Tornado web server/framework for web development. Previously, I used PHP with my own framework on a LAMP stack. With PHP, deploying updated or new code is as easy as uploading it to the server, because of the way mod_php and Apache interact.
When I add new code or update code in Python/Tornado, do I need to restart the Tornado server? I could see this being problematic if you have a number of active users.
(a) Do I have to restart the server, or is there another/better way?
(b) If so, how can I avoid users being disconnected/getting errors/etc. while it's restarting (which could take a few seconds)?
[One possible thought is to use the page flipping paradigm with Nginx pointing to a server, launch the new server instance with updated code, redirect Nginx there and take down the original server...?]
It appears the best method is to use Nginx with multiple Tornado instances, as I alluded to in my original question and as Cole mentions. Nginx can reload its configuration file on the fly, so the process looks like this (a shell sketch follows the list):
Update Python/Tornado web application code
Start a new instance of the application on a different port
Update the configuration file of Nginx to point to the new instance (testing the syntax of the configuration file first)
Reload the Nginx configuration file with a kill -HUP command
Stop the old instance of Python/Tornado web server
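In shell terms, the swap might look something like this (ports, paths, and how the app is launched are all illustrative):

# 1. start a new instance of the updated app on a spare port
python app.py --port=8001 &
# 2. point the nginx upstream at 8001, then check the config syntax
nginx -t
# 3. reload nginx without dropping in-flight connections
kill -HUP $(cat /var/run/nginx.pid)
# 4. stop the old instance once it has drained
kill <old_tornado_pid>

The <old_tornado_pid> placeholder stands for however you track the previous process's PID.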
A couple of useful resources on Nginx regarding hot-swapping the configuration file:
https://calomel.org/nginx.html (in "Explaining the directives in nginx.conf" section)
http://wiki.nginx.org/CommandLine (in "Loading a New Configuration Using Signals" section)
Use HAProxy or Nginx and proxy to multiple Tornado processes, which you can then restart one by one. The Tornado docs cover Nginx, but it doesn't support WebSockets, so if you're using them you'll need HAProxy.
You could use the debug=True switch on the Tornado web application instance:
import tornado.web

# <URL_MAP> stands for your list of (pattern, handler) routes
T_APP = tornado.web.Application(<URL_MAP>, debug=True)
This picks up handler changes as and when they happen.
Is this what you are searching for?
A module to automatically restart the server when a module is modified.
http://www.tornadoweb.org/en/branch2.4/autoreload.html
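In code, enabling it explicitly looks roughly like this (the explicit call is only needed when debug=True isn't already set on the Application):

import tornado.autoreload
import tornado.ioloop

# watch imported modules and restart the process when one of them changes
tornado.autoreload.start()
tornado.ioloop.IOLoop.current().start()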
If you just want to deploy new code with Tornado/Python during development without restarting the server, you can use the realtimefunc decorator in this GitHub repository.
