uWSGI mules' counterpart in Gunicorn - python

Does Gunicorn have a separate worker type like uWSGI's mules that I can use to offload tasks? I need at least 4 workers dedicated to certain logic. I searched the documentation but couldn't find anything that resembles them.
I love uWSGI, but I found out the hard way that you can bring it down with a single request that has many headers (I can't limit the header size too much because of my program), so I need to migrate to Gunicorn.
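For context, the uWSGI mule pattern in question looks roughly like this (a sketch using the uwsgidecorators API; the task itself is a placeholder, and it only works when run under uWSGI):

    # tasks.py - offload work to a dedicated mule process
    from uwsgidecorators import mulefunc

    @mulefunc  # calls are forwarded to a mule instead of running in the worker
    def offloaded_task(arg):
        print("running in a mule:", arg)

    # Started with four dedicated mules, e.g.:
    #   uwsgi --ini app.ini --mule --mule --mule --mule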

Related

Should I have separate containers for Flask, uWSGI, and nginx?

I intend to use Kubernetes and Ingress for load balancing. I'm trying to learn how to set up Flask, uWSGI and Nginx.
I see this tutorial that has all three installed in the same container, and I'm wondering whether I should use it or not.
https://ianlondon.github.io/blog/deploy-flask-docker-nginx/
I'm guessing the benefit of having them as separate containers and separate pods is that they can then all scale individually?
But also, should Flask and uWSGI even be in separate containers? (Or Flask and Gunicorn, since uWSGI seems to be very similar to Gunicorn.)
Flask is a web framework; any application written with it needs a WSGI server to host it. Although you could use Flask's built-in development server, you shouldn't, as it isn't suitable for production systems. You therefore need a WSGI server such as uWSGI, gunicorn or mod_wsgi (mod_wsgi-express). Since the web application is hosted by the WSGI server, it can only be in the same container; there isn't a separate process for Flask, as the application runs inside the web server's processes.
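To make that relationship concrete, here is a minimal sketch (module and route names are hypothetical) of a Flask app and the commands a WSGI server would use to host it inside its own worker processes:

    # app.py - a minimal Flask application
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from the WSGI app"

    # The WSGI server imports this module and hosts the `app` object
    # in its own workers, e.g.:
    #   gunicorn --workers 4 --bind 0.0.0.0:8000 app:app
    #   uwsgi --http 0.0.0.0:8000 --module app:app --processes 4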
Whether you need a separate web server such as nginx then depends. In the case of mod_wsgi you don't, as it uses the Apache web server and so draws direct benefits from that. mod_wsgi-express also comes already set up with a sensible base configuration, which avoids the need for a separate front-facing web server of the sort people often add with nginx when using uWSGI or gunicorn.
For containerised systems where the platform already provides a routing layer for load balancing, as is the case for ingress in Kubernetes, adding nginx into the mix can just add complexity you don't need and reduce performance. This is because you either have to run nginx in the same container, or create a separate container in the same pod and use a shared emptyDir volume so the two can still communicate over a UNIX socket. If you don't use a UNIX socket and use an INET socket instead, or run nginx in a completely different pod, then it is somewhat pointless: you are introducing an additional hop for traffic, which is more expensive than having them closely bound over a UNIX socket. The uWSGI server doesn't perform as well accepting requests over INET when coupled with nginx, and having nginx in a separate pod, potentially on a different host, can make that worse.
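If you do pair Gunicorn with an nginx sidecar in the same pod, the UNIX socket arrangement amounts to binding Gunicorn to a path on the shared volume. A minimal sketch, assuming an emptyDir mounted at /run/shared (the path is an assumption):

    # gunicorn.conf.py - bind to a UNIX socket on the shared emptyDir volume
    bind = "unix:/run/shared/gunicorn.sock"
    workers = 4

    # The nginx sidecar then proxies to the same path, e.g.:
    #   proxy_pass http://unix:/run/shared/gunicorn.sock;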
Part of the reason for using nginx in front is that its request buffering can protect you from slow clients, as well as other potential issues. When using ingress, though, you already have an haproxy or nginx front-end load balancer that can protect you from that to a degree. So whether there is a point in introducing an additional nginx proxy really depends on what you are doing. It can be simpler to just put gunicorn or uWSGI directly behind the load balancer.
Suggestions are as follows.
Also look at mod_wsgi-express. It was specifically developed with containerised systems in mind to make deployment easier, and can be a better choice than uWSGI and gunicorn.
Test different WSGI servers and configurations with your actual application and real-world traffic profiles, not benchmarks that just overload it. This is important because the dynamics of a Kubernetes-based system, along with how its routing may be implemented, mean it can all behave quite differently from the more traditional systems you may be used to.

Running APScheduler in Gunicorn Without Duplicating It Per Worker

The title basically says it all. I have Gunicorn running my app with 5 workers. I have a data structure that all the workers need access to and that is updated on a schedule by APScheduler. Currently APScheduler runs once per worker, but I want it to run just once, period. Is there a way to do this? I've tried using the --preload option, which lets me load the shared data structure just once, but the workers don't seem to see it when it updates. I'm open to switching to uWSGI if that helps.
I'm not aware of any way to do this with either, at least not without some sort of RPC. That is, run APScheduler in a separate process and then connect to it from each worker. You may want to look up projects like RPyC and Execnet to do that.
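A minimal sketch of that separate-process approach, assuming RPyC for the RPC layer (the service name, port and refresh job are hypothetical stand-ins for your real data structure):

    # scheduler_service.py - run APScheduler once, outside the Gunicorn workers
    import time

    import rpyc
    from rpyc.utils.server import ThreadedServer
    from apscheduler.schedulers.background import BackgroundScheduler

    shared_data = {"value": None}

    def refresh():
        # Stand-in for your real update logic; runs only in this process.
        shared_data["value"] = time.time()

    class DataService(rpyc.Service):
        def exposed_get_data(self):
            return shared_data["value"]

    if __name__ == "__main__":
        scheduler = BackgroundScheduler()
        scheduler.add_job(refresh, "interval", minutes=5)
        scheduler.start()
        ThreadedServer(DataService, port=18861).start()

Each Gunicorn worker would then connect with rpyc.connect("localhost", 18861) and read conn.root.get_data() instead of holding its own copy.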

Django Gunicorn Long Polling

Is using Django with Gunicorn considered a replacement for evented/async servers like Tornado, Node.js, and similar? Additionally, will that be helpful in handling long-polling/Comet services?
Finally, is Gunicorn only replacing the memory-consuming Apache threads (in the case of Apache/mod_wsgi) with lightweight threads, or are there additional benefits?
Gunicorn by default will spawn regular synchronous WSGI processes. You can however tell it to spawn processes that use gevent, eventlet or tornado instead. I am only familiar with gevent which can certainly be used instead of Node.js for long polling.
The memory footprint per process is about the same for mod_wsgi and gunicorn (in my limited experience), but you get more bells-and-whistles with gunicorn. If you change the default worker class to gevent (or eventlet or tornado) you also get a LOT more performance out of each process.
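For example, switching the worker class is a one-line change in a Gunicorn config file (a sketch; the numbers are illustrative, not recommendations):

    # gunicorn.conf.py - use async gevent workers instead of sync ones
    worker_class = "gevent"      # or "eventlet" / "tornado"
    workers = 4
    worker_connections = 1000    # concurrent connections per gevent worker

The command-line equivalent is gunicorn -k gevent -w 4 myproject.wsgi:application (with the module path being your own project's).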

Is it feasible to run multiple processes on a Heroku dyno?

I am aware of the memory limitations of the Heroku platform, and I know that it is far more scalable to separate an app into web and worker dynos. However, I still would like to run asynchronous tasks alongside the web process for testing purposes. Dynos are costly and I would like to prototype on the free instance that Heroku provides.
Are there any issues with spawning a new job as a process or subprocess in the same dyno as a web process?
On the newer Cedar stack, there are no issues with spawning multiple processes. Each dyno is a virtual machine and has no particular limitations except in memory and CPU usage (about 512 MB of memory, I think, and 1 CPU core). Following the newer installation instructions for some stacks such as Python will result in a configuration with multiple (web server) processes out of the box.
Software installed on web dynos may vary depending on what buildpack you are using; if your subprocesses need special software then you may have to either bundle it with your application or (better) roll your own buildpack.
At this point I would normally remind you that running asynchronous tasks on worker dynos instead of web dynos, with a proper task queue system, is strongly encouraged, but it sounds like you know that already. Do keep in mind that accounts with only one web dyno (typically this means, "free" accounts) will have that dyno spun down after an hour or so of not receiving any web requests, and that any background processes running on the dyno at that time will necessarily be killed. Accounts with multiple web dynos are not subject to this restriction.
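As a sketch, spawning such a job from the web process is straightforward (the helper script name is an assumption):

    # somewhere in the web app's startup code
    import subprocess

    def start_background_job():
        # Runs in the same dyno, sharing its memory/CPU allowance, and is
        # killed along with the dyno when it is cycled or idled (see above).
        return subprocess.Popen(["python", "background_job.py"])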

using celery with pyramid and mod_wsgi

I've been able to deploy a test application by using pyramid with pserve and running pceleryd (I just send an email without blocking while it is sent).
But there's one point that I don't understand: I want to run my application with mod_wsgi, and I don't understand whether I can do it without having to run pceleryd from a shell, or whether I can do something in the virtual host configuration instead.
Is it possible? How?
There are technically ways you could use Apache/mod_wsgi to manage a process distinct from that handling web requests, but the pain point is that Celery will want to fork off further worker processes. Forking further processes from a process managed by Apache can cause problems at times and so is not recommended.
You are thus better off starting the Celery process separately. One option is to use supervisord to start it up and manage it, as sketched below.
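A minimal supervisord program section for that could look like the following (all paths, and driving pceleryd from the app's ini file, are assumptions based on the question):

    ; /etc/supervisor/conf.d/celery.conf - keep the Celery process running
    [program:celery]
    command=/path/to/virtualenv/bin/pceleryd /path/to/development.ini
    directory=/path/to/app
    autostart=true
    autorestart=true
    stopwaitsecs=60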
