I have a tornado.websocket.WebSocketHandler which processes data. The idea is to instantiate a limited number of handlers (e.g. so they are bounded by the number of CPU cores). I would like to put the rest of the connections in a queue (as soon as they are opened) so that one of them is activated when another finishes.
I was trying to do that via threading.Semaphore, but it seems that Tornado's socket handlers run in a single thread, so everything just hangs. How can I achieve that?
Tornado has its own asynchronous semaphore class in tornado.locks.Semaphore.
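A minimal sketch (the handler, the limit, and the process() coroutine are illustrative, not from the question):

    # Sketch: cap concurrent processing with tornado.locks.Semaphore.
    # Connections beyond MAX_ACTIVE simply wait at the semaphore until
    # a slot frees up; waiting does not block the IOLoop thread.
    from tornado import locks, websocket

    MAX_ACTIVE = 4  # e.g. the number of CPU cores
    sem = locks.Semaphore(MAX_ACTIVE)

    class DataHandler(websocket.WebSocketHandler):
        async def on_message(self, message):
            async with sem:                           # queued here if all slots are busy
                result = await self.process(message)  # hypothetical coroutine
            self.write_message(result)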
Tornado is designed to make connections very cheap - one connection per core would be an extremely low limit. I suggest not limiting the number of connections per se, but limiting what you do with those connections (and remember the GIL: unless you're calling out to C extensions for your CPU-intensive work, you can't make use of multiple CPU cores from Python anyway). Doing your CPU-intensive work on a bounded ThreadPoolExecutor may be the best way to do what it sounds like you're trying to do.
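A hedged sketch of that suggestion (cpu_heavy() and the pool size are placeholders):

    # Sketch: delegate CPU-heavy work to a bounded ThreadPoolExecutor so
    # the single IOLoop thread stays free to accept new connections.
    from concurrent.futures import ThreadPoolExecutor
    from tornado import web
    from tornado.ioloop import IOLoop

    executor = ThreadPoolExecutor(max_workers=4)  # the bound

    class WorkHandler(web.RequestHandler):
        async def get(self):
            result = await IOLoop.current().run_in_executor(executor, cpu_heavy)
            self.write(result)

When the executor is saturated, further requests are still accepted; their work just queues inside the executor until a worker thread is free.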
I'm trying to build a Python web server using Django and Waitress, but I'd like to know how Waitress handles concurrent requests, and when blocking may occur.
While the Waitress documentation mentions that multiple worker threads are available, it doesn't provide a lot of information on how they are implemented and how the Python GIL affects them (emphasis my own):
When a channel determines the client has sent at least one full valid HTTP request, it schedules a "task" with a "thread dispatcher". The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel's output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available.
There doesn't seem to be much information on Stack Overflow either. From the question "Is Gunicorn's gthread async worker analogous to Waitress?":
Waitress has a master async thread that buffers requests, and enqueues each request to one of its sync worker threads when the request I/O is finished.
These statements don't address the GIL (at least from my understanding) and it'd be great if someone could elaborate more on how worker threads work for Waitress. Thanks!
Here's how event-driven asynchronous servers generally work:
Start a process and listen to incoming requests. Utilizing the event notification API of the operating system makes it very easy to serve thousands of clients from a single thread/process.
Since there's only one process managing all the connections, you don't want to perform any slow (or blocking) tasks in this process, because then it would block the program for every client.
To perform blocking tasks, the server delegates them to "workers". Workers can be threads (running in the same process) or separate processes (or subprocesses). Now the main process can keep on serving clients while workers perform the blocking tasks.
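A bare-bones illustration of that pattern (generic sketch code, not any particular server's implementation; the selectors module picks the best OS event API - epoll, kqueue, etc. - automatically):

    # Sketch: one thread multiplexing many sockets via the OS event API.
    import selectors
    import socket

    sel = selectors.DefaultSelector()
    server = socket.socket()
    server.bind(("localhost", 8080))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ)

    while True:
        for key, _ in sel.select():            # blocks until a socket is ready
            if key.fileobj is server:
                conn, _ = server.accept()      # new client connection
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                data = key.fileobj.recv(4096)  # client sent data
                if data:
                    pass  # hand slow work off to a worker thread/process here
                else:
                    sel.unregister(key.fileobj)
                    key.fileobj.close()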
How does Waitress handle concurrent tasks?
Pretty much the same way I just described above. And for its workers it creates threads, not processes.
how the python GIL affects them
Waitress uses threads for workers. So yes, they are affected by the GIL in that they aren't truly parallel, though they seem to be. "Concurrent" is the correct term.
Threads in Python run inside a single process and, because of the GIL, don't execute Python code in parallel: a thread acquires the GIL for a very small amount of time, executes some code, and then the GIL is acquired by another thread.
But since the GIL is released on network I/O, a waiting thread can reacquire it whenever there's a network event (such as an incoming request), so you can rest assured that the GIL will not noticeably affect network-bound operations (like receiving requests or sending responses).
Python processes, on the other hand, can actually run in parallel on multiple cores. But Waitress doesn't use processes.
Should you be worried?
If you're just doing small blocking tasks like database read/writes and serving only a few hundred users per second, then using threads isn't really that bad.
For serving a large volume of users or doing long running blocking tasks, you can look into using external task queues like Celery. This will be much better than spawning and managing processes yourself.
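For illustration, a minimal hedged Celery sketch (the broker URL and task body are placeholders, not from this answer):

    # tasks.py - a hypothetical Celery task; the work runs in a separate
    # worker process started with: celery -A tasks worker
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def write_report(record_id):
        ...  # long-running blocking work, off the web server's threads

The web handler then calls write_report.delay(record_id), which enqueues the task and returns immediately.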
Note: Those were my comments on the accepted answer and the conversation below it, moved to a separate answer for space reasons.
Wait... The 5th request will stay in the queue until one of the 4 threads is done with its previous request, and has therefore gone back to the pool. One thread will only ever serve one request at a time. "IO bound" tasks only help in that a thread waiting for IO will implicitly (e.g. by calling time.sleep) tell the scheduler (Python's internal one) that it can pass the GIL along to another thread, since there's currently nothing to do, so that the others get more CPU time for their work. On the thread level this is fully sequential, which is still concurrent and asynchronous on the process level, just not parallel. Just to get some wording straight.
Also, Python threads are "standard" OS threads (like those in C), so they will be scheduled across all CPU cores. The only thing restricting them is that they need to hold the GIL when calling Python C-API functions, because the whole API in general is not thread-safe. On the other hand, calls to non-Python functions, i.e. functions in C extensions like numpy for example, but also many database APIs, including anything loaded via ctypes, do not hold the GIL while running. Why should they? They are running external C binaries which don't know anything of the Python interpreter running in the parent process. Therefore, such tasks will run truly in parallel when called from a WSGI app hosted by waitress. And if you've got more cores available, turn the thread number up to that amount (the threads=X kwarg on waitress.create_server).
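For reference, the threads= knob mentioned above looks like this (the WSGI app body is a placeholder):

    # Sketch: raising waitress's worker-thread count to match the core count.
    from waitress import create_server

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello\n"]

    server = create_server(app, host="0.0.0.0", port=8080, threads=8)
    server.run()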
What's a reasonable default for pool_size in a ZODB.DB call in a multi-threaded web application?
Leaving it at the default value of 7 gives me some connection WARNINGs, even when I'm the only one navigating through db-interacting handlers. Is it possible to set a number that's too high? What factors play into deciding what exactly to set it to?
The pool size is only a 'guideline'; the warning is logged when you exceed that size. If you were to use double the number of connections, a CRITICAL log message would be emitted instead. These are there to indicate that you may be using too many connections in your application.
The pool will try to reduce the number of retained connections to the pool size as you close connections.
You need to set it to the maximum number of threads in your application. For Tornado, which I believe uses asynchronous events instead of threading almost exclusively, that might be harder to determine; if there is a maximum number of concurrent connections configurable in Tornado, then the pool size needs to be set to that number.
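As a sketch (the storage path and numbers are illustrative), the pool size is passed straight to ZODB.DB:

    # Sketch: size the connection pool to the application's thread count.
    import ZODB
    import ZODB.FileStorage

    storage = ZODB.FileStorage.FileStorage("data.fs")
    db = ZODB.DB(storage, pool_size=20)  # e.g. 20 worker threads

    conn = db.open()   # opening more than pool_size connections logs a
    try:               # WARNING; more than twice that logs CRITICAL
        root = conn.root()
    finally:
        conn.close()   # closed connections are retained up to pool_size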
I am not sure how the ZODB will perform when your application scales to hundreds or thousands of concurrent connections, though. I've so far only used it with at most 100 or so concurrent connections spread across several processes and even machines (using ZEO or RelStorage to serve the ZODB across those processes).
I'd say that if most of these connections only read, you should be fine; it's writing on the same object concurrently that is ZODB's weak point as far as scalability is concerned.
Any web server might have to handle a lot of requests at the same time. Since the Python interpreter has the GIL constraint, how is concurrency implemented?
Do they use multiple processes and use IPC for state sharing?
You usually have many workers (e.g. with gunicorn), each being dispatched independent requests. Everything else (concurrency-related) is handled by the database, so it is abstracted away from you.
You don't need IPC; you just need a "single source of truth", which will be the RDBMS, a cache server (Redis, Memcached), etc.
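As a concrete (hypothetical) example, a gunicorn config file pins the worker count while shared state lives elsewhere:

    # gunicorn.conf.py - sketch; the values are illustrative.
    # Each worker is a separate process with its own GIL, so requests run
    # in parallel; anything shared goes to the database or cache instead.
    bind = "0.0.0.0:8000"
    workers = 4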
First of all, requests can be handled independently. However, servers want to handle them simultaneously in order to maximize the number of requests that can be served per unit of time.
The implementation of this concept of concurrency depends on the webserver.
Some implementations may have a fixed number of threads or processes for handling requests. If all are in use, additional requests have to wait until being handled.
Another possibility is to spawn a process or thread for each request. Spawning a process per request leads to absurd memory and CPU overhead; spawning lightweight threads is better. Doing so, you can serve hundreds of clients per second. However, threads also bring management overhead, which manifests itself in high memory and CPU consumption.
For serving thousands of clients per second, an event-driven architecture based on asynchronous coroutines is a state-of-the-art solution. It enables the server to serve clients at a high rate without spawning zillions of threads. On the Wikipedia page of the so-called C10k problem you find a list of web servers. Among those, many make use of this architecture.
Coroutines are available for Python, too. Have a look at http://www.gevent.org/. That's why a Python WSGI app based on e.g. uWSGI + gevent can be an extremely performant solution.
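A hedged sketch of the gevent side (the app body is a placeholder):

    # Sketch: gevent's WSGI server handles each request in a cheap
    # greenlet; monkey.patch_all() makes blocking stdlib I/O cooperative.
    from gevent import monkey
    monkey.patch_all()

    from gevent.pywsgi import WSGIServer

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello from a greenlet\n"]

    WSGIServer(("0.0.0.0", 8000), app).serve_forever()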
As normal. Web serving is mostly I/O-bound, and the GIL is released during I/O operations. So either threading is used without any special accommodations, or an event loop (such as Twisted) is used.
I am working on a web backend that frequently grabs realtime market data from the web, and puts the data in a MySQL database.
Currently I have my main thread push tasks into a Queue object. I then have about 20 threads that read from that queue, and if a task is available, they execute it.
Unfortunately, I am running into performance issues, and after doing a lot of research, I can't make up my mind.
As I see it, I have 3 options:
Should I take a distributed task approach with something like Celery?
Should I switch to Jython or IronPython to avoid the GIL issues?
Or should I simply spawn separate processes instead of threads, using the multiprocessing module (formerly the processing package)?
If I go for the latter, how many processes is a good amount? What is a good multi process producer / consumer design?
Thanks!
Maybe you should use an event-driven approach with an event-driven framework like Twisted (Python) or node.js (JavaScript). These frameworks make use of UNIX domain sockets, for example: your consumer listens on some port, and your event-generator object pushes all the info to the consumer, so your consumer doesn't have to check every time whether there's something in the queue.
First, profile your code to determine what is bottlenecking your performance.
If each of your threads is frequently writing to your MySQL database, the problem may be disk I/O, in which case you should consider using an in-memory database and periodically writing it to disk.
If you discover that CPU performance is the limiting factor, then consider using the multiprocessing module instead of the threading module. Use a multiprocessing.Queue object to push your tasks. Also make sure that your tasks are big enough to keep each core busy for a while, so that the granularity of communication doesn't kill performance. If you are currently using threading, then switching to multiprocessing would be the easiest way forward for now.
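A minimal producer/consumer sketch along those lines (handle() is a stand-in for the real fetch-and-write work, and the range() loop stands in for the real task producer):

    # Sketch: one multiprocessing.Queue feeding a fixed pool of worker
    # processes, with one None sentinel per worker for shutdown.
    import multiprocessing as mp

    def handle(task):
        print("handled", task)  # placeholder: fetch data, write to MySQL

    def worker(queue):
        while True:
            task = queue.get()
            if task is None:    # sentinel: shut down
                break
            handle(task)

    if __name__ == "__main__":
        queue = mp.Queue()
        procs = [mp.Process(target=worker, args=(queue,))
                 for _ in range(mp.cpu_count())]
        for p in procs:
            p.start()
        for task in range(100):  # stand-in for the real task producer
            queue.put(task)
        for _ in procs:
            queue.put(None)      # one sentinel per worker
        for p in procs:
            p.join()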
I'm trying to write a scalable custom web server.
Here's what I have so far:
The main loop and request interpreter are in Cython. The main loop accepts connections and assigns the sockets to one of the processes in the pool (it has to be processes; threads won't get any benefit from multi-core hardware because of the GIL).
Each process has a thread pool. The process assigns the socket to a thread.
The thread calls recv (blocking) on the socket and waits for data. When some data shows up, it gets piped into the request interpreter and then sent via WSGI to the application running in that thread.
Now I've heard about epoll and am a little confused. Is there any benefit to using epoll to get socket data and then pass that directly to the processes? Or should I just go the usual route of having each thread wait on recv?
PS: What is epoll actually used for? It seems like multithreading and blocking fd calls would accomplish the same thing.
If you're already using multiple threads, epoll doesn't offer you much additional benefit.
The point of epoll is that a single thread can listen for activity on many file descriptors simultaneously (and respond to events on each as they occur), and thus provide event-driven multitasking without requiring the spawning of additional threads. Threads are relatively cheap (compared to spawning processes), but each one does require some overhead (after all, they each have to maintain a call stack).
If you wanted to, you could rewrite your pool processes to be single-threaded using epoll, which would reduce your overall thread usage count, but of course you'd have to consider whether that's something you care about or not - in general, for low numbers of simultaneous requests on each worker, the overhead of spawning threads wouldn't matter, but if you want each worker to be able to handle 1000s of open connections, that overhead can become significant (and that's where epoll shines).
But...
What you're describing sounds suspiciously like you're basically reinventing the wheel - your:
main loop and request interpreter
pool of processes
sounds almost exactly like:
nginx (or any other load balancer/reverse proxy)
A pre-forking tornado app
Tornado is a single-threaded Python web server module using epoll, and it has pre-forking capability built in (meaning that it spawns multiple copies of itself as separate processes, effectively creating a process pool). Tornado is based on the technology created to power FriendFeed - they needed a way to handle huge numbers of open connections for long-polling clients looking for new real-time updates.
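The pre-forking mentioned above is only a few lines in Tornado (a sketch; the handler is a placeholder):

    # Sketch: Tornado's built-in pre-fork - start(0) forks one process
    # per CPU core, each running its own epoll-based IOLoop.
    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("hello")

    app = tornado.web.Application([(r"/", MainHandler)])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8888)
    server.start(0)  # 0 means: fork one subprocess per core
    tornado.ioloop.IOLoop.current().start()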
If you're doing this as a learning process, then by all means, reinvent away! It's a great way to learn. But if you're actually trying to build an application on top of these kinds of things, I'd highly recommend considering using the existing, stable, communally-developed projects - it'll save you a lot of time, false starts, and potential gotchas.
(P.S. I approve of your avatar. <3)
The epoll function (and the other functions in the same family, poll and select) allow you to write single-threaded networking code that manages multiple network connections. Since there is no threading, there is no need for the synchronization that would be required in a multi-threaded program (which can be difficult to get right).
On the other hand, you'll need to have an explicit state machine for each connection. In a threaded program, this state machine is implicit.
These functions just offer another way to multiplex multiple connections in a process. Sometimes it is easier not to use threads; other times you're already using threads, and thus it is easier just to use blocking sockets (which release the GIL in Python).
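To make that concrete, here is a toy single-threaded epoll loop (Linux-only select.epoll; the conns dict holding per-connection objects is the explicit state mentioned above):

    # Sketch: one thread watching many sockets with epoll; not production
    # code - a real server would also track partial reads/writes per
    # connection in such a state table.
    import select
    import socket

    server = socket.socket()
    server.bind(("0.0.0.0", 9000))
    server.listen()
    server.setblocking(False)

    ep = select.epoll()
    ep.register(server.fileno(), select.EPOLLIN)
    conns = {}  # fd -> socket: the explicit per-connection state

    while True:
        for fd, event in ep.poll():          # blocks until some fd is ready
            if fd == server.fileno():
                conn, _ = server.accept()
                conn.setblocking(False)
                ep.register(conn.fileno(), select.EPOLLIN)
                conns[conn.fileno()] = conn
            elif event & select.EPOLLIN:
                data = conns[fd].recv(4096)
                if not data:                 # client closed the connection
                    ep.unregister(fd)
                    conns.pop(fd).close()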