Run Flask single threaded - python

I presently use system wide mutexes to handle multiprocessing in my Flask application.
Due to the GIL, and ultimately the fact that multiprocessing already provides me with concurrency, I'd rather not have to worry about multithreading in my application as well.
Can I get the Flask development server to run single threaded?
As an aside, if I deploy using Gunicorn, can this do the same (i.e. running multiple processes, all of which are single threaded)?

You can run your application with Gunicorn using the workers and threads parameters:
gunicorn --workers=5 --threads=1 main:app
This starts five worker processes, each running a single thread.
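For reference, main:app above refers to a WSGI callable named app in a module main.py; a minimal sketch of such a module (the names are assumptions) might look like:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'hello'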

After looking at the source code, I see that Flask has the --without-threads parameter, which was added as a result of this bug report.
. . .
flask run --without-threads . . .
As far as I can tell, the Flask documentation hasn't been updated to reflect this fix, so the best documentation is the bug report itself. You can query this property at run time via flask.request.is_multithread.
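As a quick illustration, here is a minimal sketch of checking that property from inside a handler (assuming a Flask/Werkzeug version that still exposes is_multithread; newer Werkzeug versions expose the same flag as request.environ.get('wsgi.multithread')):
from flask import Flask, request

app = Flask(__name__)

@app.route('/threading')
def threading_info():
    # True when the WSGI server handles requests in multiple threads
    return {'is_multithread': request.is_multithread}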

The Flask development server is single-threaded by default, and yes, you can do the same with Gunicorn using the workers and threads flags:
gunicorn --workers=8 --threads=1 main:app

Related

How can I deploy FastAPI manually on a Ubuntu Server?

I have a very simple API (2 routes) which just has GET requests and doesn't need any authentication or anything for now.
I want to know the best and most appropriate way to deploy my API for production. I am unable to use Docker, and would like to do it the server way.
So I have a few questions:
The FastAPI documentation says you can do uvicorn main:app --host 0.0.0.0 --port 80, but I was wondering whether that is the correct way for production? Do I just enter that command, and will the API automatically start listening on the server's IP address? Also, is this method efficient and will it be able to handle all the requests? Or what would I change to make it faster?
When should I use a process manager?
When should I use multiple workers? And what benefits do they provide?
When should I use Gunicorn as mentioned here? https://www.uvicorn.org/deployment/#gunicorn
I am just a little confused about how to deploy this, because one article says to do one thing and another says something else.
If for whatever reason you don't want to use Docker CE, the best way is to create a systemd service unit for your application, so that every time it goes down systemd will try to restart it, and then run it under a server like uWSGI or Gunicorn.
This link can help with systemd services too:
https://blog.miguelgrinberg.com/post/running-a-flask-application-as-a-service-with-systemd
P.S. Note that the way you configure Gunicorn isn't really related to Docker or a systemd service; for both approaches you need to configure Gunicorn.
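For illustration, a minimal systemd unit along those lines might look like the following sketch (the user, paths, worker count, and module name are all assumptions to adapt):
[Unit]
Description=Gunicorn instance serving a FastAPI app
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/myapi
ExecStart=/srv/myapi/venv/bin/gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app
Restart=always

[Install]
WantedBy=multi-user.target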
To answer your question:
How can I deploy FastAPI manually on a Ubuntu Server?
You can check out this video tutorial: Deploy FastAPI on Ubuntu.
The deployment has the following architecture within a single Ubuntu VM (the original answer includes an architectural diagram of this single-VM FastAPI deployment).
Within the Ubuntu VM, two systemd services are up and running: caddy.service and gunicorn.service. The gunicorn.service runs the FastAPI application using the uvicorn.workers.UvicornWorker worker class, and the caddy.service sits in front of it as a reverse proxy. In addition, the FastAPI app communicates with a PostgreSQL database server asynchronously with the help of the databases package, which provides simple asyncio support for PostgreSQL.
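As a rough sketch of what the application side of that setup might look like (the connection URL, table, and module name are assumptions):
# main.py - minimal sketch of FastAPI + the databases package
import databases
from fastapi import FastAPI

database = databases.Database("postgresql://user:password@localhost/mydb")
app = FastAPI()

@app.on_event("startup")
async def startup():
    await database.connect()

@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()

@app.get("/items")
async def read_items():
    # the query runs without blocking the event loop
    rows = await database.fetch_all(query="SELECT id, name FROM items")
    return [{"id": row["id"], "name": row["name"]} for row in rows]
Under this architecture it would then be served with something like gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app, with Caddy proxying to it.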

uWSGI + built Go .so not working

Problem: a .so (shared object) library works well when called from plain Python, but fails in a Python (Django) application running under uWSGI.
More info: I've built a Go module with go build -buildmode=c-shared -o output.so input.go to call it in Python with
from ctypes import cdll
lib = cdll.LoadLibrary('path_to_library/output.so')
When the Django project is served via uWSGI, the request handler that calls the Go library freezes, eventually causing a 504 in Nginx. Once it enters this freeze, uWSGI stays locked and only a restart brings the app back to life. No logs AT ALL! It just freezes.
Everything works correctly when I run it in the Python interpreter on the same machine.
My thoughts: I've tried to debug this and put a lot of log messages in the library, but that doesn't give much info because the library itself is fine (it works in the interpreter). The library does load correctly, as some of the log messages I put in it show. I think it's some sort of uWSGI limitation. I don't think posting my uwsgi.ini file would help much.
Additional info:
Go dependencies:
fmt
github.com/360EntSecGroup-Skylar/excelize
log
encoding/json
OS: CentOS 6.9
Python: Python 3.6.2
uWSGI: 2.0.15
What limitations might uWSGI impose on this kind of shared-object use, and is there a way to overcome them?
Firstly, are you absolutely positive you need to call Go as a library from the uWSGI process?
uWSGI is usually used for interpreted languages such as PHP, Python, Ruby and others. It bootstraps the interpreter and manages the master/worker processes that handle requests. It seems strange to use it with a Go library.
You mentioned having nginx as your webserver, so why not just run your Go program as the HTTP server (which Go does great) and call it from nginx directly using its URL:
location /name/ {
    # assuming the Go HTTP server listens on port 8080
    proxy_pass http://127.0.0.1:8080/go_url/;
}
See nginx docs.
If you really want to use Go as a Python-imported library via a .so module, you have to be aware that Go has its own runtime and thread management, and may not work well with uWSGI, which handles threads/processes in a different way. In this case I'm unable to help you, since I never actually tried this.
If you could clarify your question with what you are actually trying to do, we might be able to answer more helpfully.
Attempts and thoughts
I tried to avoid separating the shared library from my Python code, since that would require supporting at least one more process, and I would have to rewrite some of the library to create a new API.
As @Lu.nemec kindly noted:
Go has its own runtime, thread management and may not work well with uWSGI which handles threads/processes in a different way
Since uWSGI is the problem, I started searching for a solution there. One hope was that installing the GCCGO uWSGI plugin would somehow solve the problem. But it is hard to install on old OSes, since there are no pre-built plugins and a manual build didn't go very well; and in any case it didn't help, nothing changed, it still froze.
And then I thought that I wanted to disable the coroutines and the kind of scheduling that differs from uWSGI's, and one change I was able to make was to set GOMAXPROCS:
GOMAXPROCS sets the maximum number of CPUs that can be executing simultaneously and returns the previous setting. If n < 1, it does not change the current setting. The number of logical CPUs on the local machine can be queried with NumCPU. This call will go away when the scheduler improves.
And it worked like a charm!!!
The solution
import (
    ...
    "runtime"
)
...
//export yourFunc
func yourFunc(yourArgs argType) {
    runtime.GOMAXPROCS(1)
    ...
}
My previous answer works in some cases. HOWEVER, when I tried to run the same project on another server with the same OS, same Python, same uWSGI (version, plugins, config files), same requirements, and same .so file, it froze the same way as I described above.
I personally didn't want to run this as a separate process bound to a socket/port and create an API for communicating with the shared library.
The solution:
All it required was a separate process. Run it with Celery.
Some caveats:
1. You cannot run the task with task.apply(), since it would then run in the main application rather than in Celery:
result = task.apply_async()
while not result.ready():
    time.sleep(5)
2. You need to run Celery with the solo execution pool:
celery -A app worker -P solo
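For illustration, a minimal sketch of the worker-side wrapper might look like this (the broker URL and exported function name are assumptions; set argtypes/restype as your library requires):
# tasks.py - Celery worker that owns the Go shared library
from ctypes import cdll
from celery import Celery

app = Celery('app', broker='amqp://localhost')

# load the Go shared object once per worker process
lib = cdll.LoadLibrary('path_to_library/output.so')

@app.task
def call_go_library():
    # the Go code runs inside the solo-pool worker, not in the web app
    return lib.yourFunc()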

Pyramid gunicorn and waitress

I'm trying to understand the behaviour of Pyramid regarding the [server:main] configuration and gunicorn.
If I use pserve, it'll use the configuration of [main:server], for both waitress and gunicorn. For example:
# development.ini
[server:main]
use = egg:waitress#main
listen = *:6543
So now, $ pserve development.ini will launch the project with waitress, which is expected. But if I use the command $ gunicorn (with gunicorn or waitress in the ini file) it'll work as well, which I did not expect.
My questions are:
why does this configuration work if I run the command $ gunicorn --paste development.ini?
what happens under the hood? Is waitress running? (Judging by the processes on my machine, I would say it's not.)
There are two independent pieces of configuration required to start serving requests for any WSGI app.
1) Which WSGI app to use.
2) Which WSGI server to use.
These pieces are handled separately and can be done in different ways depending on how you set it up. The ini file format is defined by the PasteDeploy library and provides a way for a consumer of the format to determine both the app config and the server config.
However, when using gunicorn --paste foo.ini you're already telling gunicorn you want to use the gunicorn server (not waitress), so it ignores the server section and focuses only on loading the app. Gunicorn actually has other ways to load the app as well, but I'll ignore that complexity for now since that part is working for you. Any server config for gunicorn needs to be done separately... it is not reading the [server:main] section when you run gunicorn from the CLI.
Alternatively you can start your app using pserve, which does use the server section to determine what server to use - but in your current setup that would run waitress instead of gunicorn.
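For instance, to make pserve start gunicorn instead, the [server:main] section could point at gunicorn's PasteDeploy entry point; a sketch (the host, port, and worker count are assumptions):
[server:main]
use = egg:gunicorn#main
host = 127.0.0.1
port = 6543
workers = 4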
So, after lots of reading and testing, I have to conclude that:
using [server:main] is mandatory for a Pyramid application
if you are running the application with gunicorn, you nevertheless have to define this [server:main] section
gunicorn will ignore the use attribute, but Pyramid will check that the egg exists
gunicorn will use the rest of the settings (if any), but they have lower priority than the command-line arguments or the config.py file
The reason behind this behaviour is still confusing to me, but at least I can work with it. Any other hints would be very appreciated.

Celery tasks functions - web server vs remote server

I want to send tasks from a web server (running Django) to a remote machine that hosts a RabbitMQ server and some workers that I implemented with Celery.
If I follow the usual Celery approach, it seems I have to share the code between both machines, which means replicating the workers' logic in the web app code.
So:
Is there a best practice for this? Since the code is redundant, I am thinking about using a git submodule (=> replicated in the web app code repo and in the workers code repo)
Should I better use something else than Celery then?
Am I missing something?
One way to manage this is to store your workers in your Django project. Django and Celery play nicely with each other, allowing you to use parts of your Django project in your Celery app. http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
Deploying this way means that your web application simply never uses the modules involved with your Celery workers, and on your Celery machine your Django views and such are never used. This usually only results in a couple of megs of unused Django application code...
You can use send_task. It takes the same parameters as apply_async, but you only have to give the task name. You can send tasks without loading the task module in Django:
app.send_task('tasks.add', args=[2, 2], kwargs={})
http://celery.readthedocs.org/en/latest/reference/celery.html#celery.Celery.send_task
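For context, a minimal sketch of the client side of this approach (the broker URL is an assumption):
from celery import Celery

# only the broker address is needed on the Django side;
# the task implementation lives solely on the worker machine
app = Celery(broker='amqp://guest@remote-host//')

result = app.send_task('tasks.add', args=[2, 2], kwargs={})
# result.get() would additionally require a result backend to be configured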

Running Python through FastCGI with nginx on Ubuntu

I've already looked at other threads on this, but most don't go into enough setup detail which is where I need help.
I have an Ubuntu based VPS running with nginx, serving PHP sites through php-cgi on port 9000.
I'd like to start doing more with Python, so I've written a deployment script, which I essentially want to use as a post-receive hook on my local GitLab server, as my first Python script. I can run this script successfully by running python script.py on the command line, but in order to use it as a post-receive hook I need to be able to access it via HTTP.
I looked at this guide on the nginx wiki but partway down is says to:
And start the django fastcgi process:
python ./manage.py runfcgi host=127.0.0.1 port=8080
Now, like I said, I am pretty new to Python, and I have never used the Django framework. Can anyone assist with how I am supposed to start the FastCGI server? Do I replace ./manage.py with the name of my script? Any help would be appreciated, as everything I've found online refers to working with Django.
Do I replace ./manage.py with the name of my script?
No. It's highly unlikely your script is a FastCGI server, or that it can accept HTTP requests of any kind since you mention running it over the command line. (From what little I know of FastCGI, an app supporting it has to be able to handle a stream of requests coming in over stdin in a specific format, so there's definitely some plumbing involved.)
I'd say the easiest approach would be to use some web framework just to act as HTTP/FastCGI middleware. For your use a "microframework" like Flask (or even Paste but I found the documentation inscrutable) sounds like it'd work fine. The idea would be to have two interfaces to your main code, one that can handle command line arguments, and one that can handle a HTTP request, ultimately both would just call one function that actually does the work. (If you want to keep the command-line version of the app.)
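A bare-bones sketch of that two-interface idea, assuming your existing logic lives in a hypothetical function called deploy():
# script.py - one function, two entry points (sketch)
from flask import Flask

app = Flask(__name__)

def deploy():
    # your existing deployment logic goes here
    ...

@app.route('/deploy', methods=['POST'])
def deploy_hook():
    # HTTP interface, e.g. for the GitLab post-receive hook
    deploy()
    return 'OK'

if __name__ == '__main__':
    # command-line interface: python script.py
    deploy()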
The Flask documentation also mentions using uWSGI or standalone workers as deployment options. I'm not familiar with the former; the latter I wouldn't recommend for a simple, low-traffic app for the same reasons as the approach in the next paragraph.
Considering you use a VPS, you might even be able to just run the app as a standalone server process using the http.server module, but I'm not sure that's the better choice unless you absolutely want to avoid using any sort of framework. You'd have to make sure the app starts up when the server is rebooted and restarts if it crashes, and it seems easier to just have nginx do the job of the supervisor.
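For concreteness, that standalone approach might look roughly like this sketch (the port is an assumption, and deploy() is the same hypothetical function as above):
from http.server import BaseHTTPRequestHandler, HTTPServer

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        deploy()  # hypothetical: your existing deployment function
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'OK')

HTTPServer(('127.0.0.1', 8080), HookHandler).serve_forever()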
UPDATE: Scratch that, it seems that nginx won't handle supervising a FastCGI worker process for you, which would've been the main advantage of the approach. In light of that it doesn't matter which of the three approaches you use since you'll have to set up a service supervisor one way or the other. I'd say go with uWSGI since flup (which is needed for Flask+FastCGI) seems abandoned since 2011, and the uWSGI protocol is apparently supported in nginx natively. Otherwise you'd need to use a different webserver than nginx, one that will manage a FastCGI worker for you. If this is an option, I'd consider Cherokee, which can be configured using a web GUI.
tl;dr: you need to write a (very simple) webapp. While it is feasible to do this without a web framework of any kind, in my opinion using one is easier, since you get some (nontrivial) plumbing for free and there's a lot of guidance available on how to deploy them.
