I want to use ThreadPoolExecutor in a web app (Django).
All the examples I have seen use the thread pool like this:
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as executor:
    # ... code ...
I tried to store the thread pool as a class member and use its map function,
but I got a memory leak; the only way I could use it without leaking was with the with statement.
So I have two questions:
Each time I enter the with ThreadPoolExecutor block, does it create the threads again and then release them? In other words, is this operation expensive?
If I avoid using with, how can I release the memory of the threads?
thanks
Normally, web applications are stateless. That means every object you create should live in a request and die at the end of the request. That includes your ThreadPoolExecutor. Having an executor at the application level may work, but it will be embedded into your web application instead of running as a separate group of processes.
So if you want to take the workers down or restart them, your web app will have to restart as well.
And there will be stability concerns, since there is no main process watching over the workers to detect which one has gone stale; it takes a lot of code to get that kind of supervision right.
Alternatively, if you want a persistent group of processes to listen to a job queue and run your tasks, there are several projects that do that for you. All you need to do is set up a server that takes care of queueing and locking, such as Redis or RabbitMQ, then point your project at that server and start the workers. Some projects even let you use the database as a job queue backend.
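For the request-scoped approach, a minimal sketch might look like the following (the view, the fetch_length helper and the URLs are placeholders for illustration, not code from the question):

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

from django.http import JsonResponse

def fetch_length(url):
    # Blocking I/O: each call runs in one of the pool's worker threads.
    with urlopen(url) as resp:
        return len(resp.read())

def lengths(request):
    urls = ["https://example.com/a", "https://example.com/b"]
    # The pool is created for this request and torn down when the
    # with block exits, so no threads outlive the request.
    with ThreadPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(fetch_length, urls))
    return JsonResponse({"lengths": results})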
Related
I'm trying to build a python webserver using Django and Waitress, but I'd like to know how Waitress handles concurrent requests, and when blocking may occur.
While the Waitress documentation mentions that multiple worker threads are available, it doesn't provide a lot of information on how they are implemented and how the python GIL affects them (emphasis my own):
When a channel determines the client has sent at least one full valid HTTP request, it schedules a "task" with a "thread dispatcher". The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel's output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available.
There doesn't seem to be much information on Stack Overflow either. From the question "Is Gunicorn's gthread async worker analogous to Waitress?":
Waitress has a master async thread that buffers requests, and enqueues each request to one of its sync worker threads when the request I/O is finished.
These statements don't address the GIL (at least from my understanding) and it'd be great if someone could elaborate more on how worker threads work for Waitress. Thanks!
Here's how the event-driven asynchronous servers generally work:
Start a process and listen to incoming requests. Using the operating system's event notification API makes it easy to serve thousands of clients from a single thread/process.
Since there is only one process managing all the connections, you don't want to perform any slow (or blocking) tasks in this process, because that would block the loop for every client.
To perform blocking tasks, the server delegates the tasks to "workers". Workers can be threads (running in the same process) or separate processes (or subprocesses). Now the main process can keep on serving clients while workers perform the blocking tasks.
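As an illustration of the general pattern (a sketch only, not Waitress's actual implementation), an event loop can hand blocking calls off to a small pool of worker threads:

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_task(n):
    time.sleep(1)  # stands in for a slow, blocking operation
    return n * 2

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # The event loop stays free to handle other events while the
        # worker threads grind through the blocking calls.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, blocking_task, i) for i in range(8))
        )
    print(results)

asyncio.run(main())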
How does Waitress handle concurrent tasks?
Pretty much the same way I just described above. And for workers it creates threads, not processes.
how the python GIL affects them
Waitress uses threads for workers. So yes, they are affected by the GIL: they are concurrent, but not truly parallel. "Asynchronous" is the correct term.
Python threads run inside a single process, and because of the GIL only one of them executes Python bytecode at any given moment, so CPU-bound code doesn't run in parallel. A thread holds the GIL for a very small amount of time, executes its code, and then the GIL is acquired by another thread.
But since the GIL is released on network I/O, the parent process will always acquire the GIL whenever there's a network event (such as an incoming request) and this way you can stay assured that the GIL will not affect the network bound operations (like receiving requests or sending response).
On the other hand, Python processes are actually concurrent: they can run in parallel on multiple cores. But Waitress doesn't use processes.
Should you be worried?
If you're just doing small blocking tasks like database read/writes and serving only a few hundred users per second, then using threads isn't really that bad.
For serving a large volume of users or doing long running blocking tasks, you can look into using external task queues like Celery. This will be much better than spawning and managing processes yourself.
Hint: Those were my comments to the accepted answer and the conversation below, moved to a separate answer for space reasons.
Wait... the 5th request will stay in the queue until one of the 4 threads is done with its previous request and has gone back to the pool. One thread will only ever serve one request at a time. "IO bound" tasks only help in that a thread waiting for IO will implicitly (e.g. by calling time.sleep) tell the scheduler (Python's internal one) that it can pass the GIL along to another thread since there's currently nothing to do, so the others get more CPU time for their work. On the thread level this is fully sequential, which is still concurrent and asynchronous on the process level, just not parallel. Just to get the wording straight.
Also, Python threads are "standard" OS threads (like those in C). So they will use all CPU cores and make full use of them. The only thing restricting them is that they need to hold the GIL when calling Python C-API functions, because the whole API in general is not thread-safe. On the other hand, calls to non-Python functions, i.e. functions in C extensions like numpy, but also many database APIs, including anything loaded via ctypes, do not hold the GIL while running. Why should they? They are running external C binaries which know nothing of the Python interpreter in the parent process. Therefore, such tasks will run truly in parallel when called from a WSGI app hosted by Waitress. And if you've got more cores available, turn the thread count up to that amount (the threads=X kwarg on waitress.create_server).
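For example, a sketch of raising the thread count with the simpler waitress.serve() helper, which accepts the same threads argument (the trivial WSGI app is a stand-in for your real application):

from waitress import serve

def app(environ, start_response):
    # A trivial WSGI app used here only as a placeholder.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Raise the worker-thread pool from the default 4 to 8.
serve(app, host="0.0.0.0", port=8080, threads=8)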
I have a python application running inside of a pod in kubernetes which subscribes to a Google Pub/Sub topic and on each message downloads a file from a google bucket.
The issue I have is that I can't process the workload quickly enough using a single threaded Python application. I would normally run a number of pods to handle the workload but the problem is that all the files have to end up on the same filesystem to be processed by another application.
I have tried spawning a new thread for each request but the volume is too great.
What I would like to do is:
1) Have a number of processes that can process new messages
2) Keep the processes alive and use them to respond to new requests coming in.
All the examples for multiprocessing in python are single workload examples, for example providing 10 numbers to a square function, which isn't what I'm trying to achieve.
I've used gunicorn in the past which spawns a number of worker threads for a flask application, what I want is to do something similar without flask.
First, try to separate IO-bound tasks (e.g. requests, reads/writes, etc.) from CPU-bound tasks (parsing JSON/XML, calculations, etc.).
For the IO-bound case, use threading or ThreadPoolExecutor primitives, which automatically reuse worker threads. Keep in mind that writing to disk is a blocking operation!
If you want parallelism for the CPU-bound case, use multiprocessing or ProcessPoolExecutor. To synchronize them you can use a shared object (proxy object), a file, a pipe, Redis, etc.
Shared objects like Managers (Namespaces, dicts, etc.) are preferred if you want to stay in pure Python.
To work with files without blocking, use a dedicated thread or async I/O.
For asyncio, use the aiofile library.
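A rough sketch of that split, adapted to the question (downloads happen in threads, processing happens in a persistent pool of processes that stays alive between messages; the download/parse helpers and the URL are placeholders):

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from urllib.request import urlopen

def download(url):
    # IO-bound: a network read, fine to run in a thread
    with urlopen(url) as resp:
        return resp.read()

def parse(payload):
    # CPU-bound stand-in: run in a separate process to use another core
    return len(payload)

if __name__ == "__main__":
    io_pool = ThreadPoolExecutor(max_workers=8)     # reused for every message
    cpu_pool = ProcessPoolExecutor(max_workers=4)   # long-lived worker processes
    data = io_pool.submit(download, "https://example.com").result()
    print(cpu_pool.submit(parse, data).result())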
I have a Python web application in which the client (Ember.js) communicates with the server via WebSocket (I am using Flask-SocketIO).
Apart from the WebSocket server, the backend does two more things that are worth mentioning:
Doing some image conversion (using graphicsmagick)
OCR incoming images from the client (using tesseract)
When the client submits an image its entity is created in the database and the id is put in an image conversion queue. The worker grabs it and does image conversion. After that the worker puts it in the OCR queue where it will be handled by the OCR queue worker.
So far so good. The WS requests are handled synchronously in separate threads (Flask-SocketIO uses Eventlet for that) and the heavy computational action happens asynchronously (in separate threads as well).
Now the problem: the whole application runs on a Raspberry Pi 3. If I do not make use of its 4 cores, I only have one ARMv8 core clocked at 1.2 GHz. This is very little power for OCR. So I decided to find out how to use multiple cores with Python. Although I read about the problems with the GIL, I found out about multiprocessing, whose documentation says: "The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads." Exactly what I wanted. So I instantly replaced the
from threading import Thread
thread = Thread(target=heavy_computational_worker_thread)
thread.start()
by
from multiprocessing import Process
process = Process(target=heavy_computational_worker_thread)
process.start()
The queue needed to be handled by the multiple cores as well, so I had to change
from queue import Queue
queue = Queue()
to
import multiprocessing
queue = multiprocessing.Queue()
as well. The problem: the queue and Thread libraries are monkey patched by Eventlet. If I stop using the monkey patched versions of Thread and Queue and use the ones from multiprocessing instead, then the request thread started by Eventlet blocks forever when accessing the queue.
Now my question:
Is there any way I can make this application do the OCR and image conversion on a separate core?
I would like to keep using WebSocket and Eventlet if that's possible. The advantage I have is that the only communication interface between the processes would be the queue.
Ideas that I already had:
- Not using a Python implementation of a queue but rather using I/O. For example a dedicated Redis which the different subprocesses would access
- Going a step further: starting every queue worker as a separate Python process (e.g. python3 wsserver | python3 ocrqueue | python3 imgconvqueue). Then I would have to make sure myself that the access on the queue and on the database would be non-blocking
The best thing would be to keep the single process and make it work with multiprocessing, though.
Thank you very much in advance
Eventlet is currently incompatible with the multiprocessing package. There is an open issue for this work: https://github.com/eventlet/eventlet/issues/210.
The alternative that I think will work well in your case is to use Celery to manage your queue. Celery will start a pool of worker processes that wait for tasks provided by the main process via a message queue (RabbitMQ and Redis are both supported).
The Celery workers do not need to use eventlet, only the main server does, so this frees them to do whatever they need to do without the limitations imposed by eventlet.
If you are interested in exploring this approach, I have a complete example that uses it: https://github.com/miguelgrinberg/flack.
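To illustrate the shape of this approach (a sketch only; the task bodies, broker URL and module layout are assumptions, not code taken from flack):

# tasks.py -- start the workers with:  celery -A tasks worker --loglevel=info
from celery import Celery, chain

celery_app = Celery("tasks", broker="redis://localhost:6379/0")

@celery_app.task
def convert_image(image_id):
    # graphicsmagick conversion would happen here, in a worker process
    return image_id

@celery_app.task
def ocr_image(image_id):
    # tesseract OCR would happen here, on a separate CPU core
    return image_id

# In the Flask-SocketIO handler, both steps are queued without blocking
# the eventlet-based server:
#   chain(convert_image.s(image_id), ocr_image.s()).delay()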
In my wsgi.py startup hooks I create a queue object which needs to be passed to the views module.
import queue
import threading

# Create and start thread for euclid.
q = queue.Queue()
euclidThread = threading.Thread(target=startEuclidServer,
                                kwargs={"msgq": q})
euclidThread.setDaemon(True)
euclidThread.start()
The queue is used for communication between my "euclid" thread and django.
My django project contains an app called "monitor" where my views need to be able to access the queue I create on startup.
Previously I did this by starting my thread and creating my queue in ../monitor/urls.py; however, this was problematic as it would only run upon the first HTTP request to that app.
Does anyone know the best way to do this, or should I be doing it in a completely different way? For the sake of simplicity I want to avoid using a dedicated queue such as RabbitMQ/Redis.
The Queue you are using here is designed for communication when all the threads are managed by one master process:
The Queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics. It depends on the availability of thread support in Python; see the threading module.
This is not the case when you are doing web development.
You need to separate your queue process completely from your web process; the way you are doing it now, I cannot even imagine how many issues it will cause in the future.
You need to have three separate processes:
Process that launches your queue.
Process that launches your wsgi process(es), which could be something like "runserver" if you are in development mode; or uwsgi+supervisord+circus or similar.
The worker(s) that will do the job that's posted on the queue.
Don't combine these.
Your views can then access the queue without worrying about thread issues; and your workers can also post updates without any issues.
Read up on Celery, which is the de facto standard way of getting all this done easily in Django.
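As a rough sketch of that separation (the module names, task body and view are assumptions for illustration, not code from the question):

# monitor/tasks.py
from celery import shared_task

@shared_task
def handle_euclid_message(payload):
    # whatever the euclid thread previously did with an item from the queue
    ...

# monitor/views.py
from django.http import HttpResponse
from .tasks import handle_euclid_message

def submit(request):
    # The view only enqueues work; a separate worker process executes it.
    handle_euclid_message.delay(request.GET.get("msg", ""))
    return HttpResponse("queued")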
I have a spider that downloads pages and stores data in a database. I have created a Flask application with an admin panel (using the Flask-Admin extension) that shows the database.
Now I want to add a function to my Flask app to control the spider's state: switch it on/off.
I think it is possible with threads or multiprocessing. Celery is not a good choice because the whole program must use minimal memory.
Which method should I choose to implement this function?
Discounting Celery based on memory usage would probably be a mistake, as Celery has low overhead in both time and space. In fact, using Celery+Flask does not use much more memory than using Flask alone.
In addition Celery comes with several choices you can make that can have an impact
on the amount of memory used. For example, there are 5 different pool implementations that all have different strengths and trade-offs, the pool choices are:
multiprocessing
By default Celery uses multiprocessing, which means that it will spawn child processes
to offload work to. This is the most memory expensive option - simply because
every child process will duplicate the amount of base memory needed.
But Celery also comes with an autoscale feature that will kill off worker
processes when there's little work to do, and spawn new processes when there's more work:
$ celeryd --autoscale=0,10
where 0 is the minimum number of processes, and 10 is the maximum. Here celeryd will
start off with no child processes, and grow based on load up to a maximum of 10 processes. When load decreases, so will the number of worker processes.
eventlet/gevent
When using the eventlet/gevent pools only a single process will be used, and thus it will
use a lot less memory, but with the downside that tasks calling blocking code will
block other tasks from executing. If your tasks are mostly I/O bound you should be ok,
and you can also combine different pools and send problem tasks to a multiprocessing pool instead.
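At the time this was written, the pool implementation was selected on the worker's command line, roughly like this (the exact flags are an assumption about the celeryd CLI of that era):

$ celeryd --pool=eventlet --concurrency=100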
threads
Celery also comes with a pool using threads.
The development version that will become version 2.6 includes a lot of optimizations,
and there is no longer any need for the Flask-Celery extension module. If you are not going
into production in the next days then I would encourage you to try the development version
which must be installed like this:
$ pip install https://github.com/ask/kombu/zipball/master
$ pip install https://github.com/ask/celery/zipball/master
The new API is now also Flask inspired, so you should read the new getting started guide:
http://ask.github.com/celery/getting-started/first-steps-with-celery.html
With all this said, most optimization work has been focused on execution speed so far,
and there are probably many more memory optimizations that can be made. It has not been a request so far, but in the unlikely event that Celery does not match your memory constraints, you can open up an issue at our bug tracker and I'm sure it will get focus, or you can even help us to do so.
You could supervise the process using multiprocessing or subprocess, then just hand the handle around in the session.
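One minimal sketch of that idea using subprocess (the spider.py entry point and the start/stop helpers are assumptions, not code from the question):

import subprocess

spider = None

def start_spider():
    global spider
    if spider is None or spider.poll() is not None:
        # Launch the spider as a child process of the Flask app.
        spider = subprocess.Popen(["python3", "spider.py"])

def stop_spider():
    global spider
    if spider is not None and spider.poll() is None:
        spider.terminate()
        spider.wait()
        spider = None

The Flask views can then call start_spider() / stop_spider() to toggle the spider on and off.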