I have some Python code that uses ctypes.CDLL; according to the docs, calls made this way release the GIL. Even so, I am seeing bottlenecks when profiling that I cannot explain. If I run some trivial code using time.sleep, or even ctypes.windll.kernel32.Sleep, the work scales as expected when the number of threads matches the number of tasks: if the task is to sleep for 1 second, then submitting 1 task on 1 thread or 20 tasks on 20 threads both take ~1 second to complete.
Switching back to my code, it does not scale out like that; instead the total time grows linearly with the number of tasks. Profiling shows the time is spent waiting in acquire() on _thread.lock.
What are some techniques to dig further into where this contention is coming from? Is ThreadPoolExecutor not the right choice here? My understanding was that it implements a basic thread pool and is essentially no different from ThreadPool in multiprocessing.pool.
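One way to dig in is a side-by-side scaling probe: time the same batch of tasks with a known GIL-releasing call (time.sleep) and then with a call into your DLL, under increasing thread counts. This is only a sketch with illustrative names (probe, sleeping_task); swap in your actual ctypes call where indicated.

import time
from concurrent.futures import ThreadPoolExecutor

def sleeping_task():
    time.sleep(1)              # known to release the GIL; replace with your CDLL call

def probe(task, n):
    # run n copies of task on n threads and return total wall time
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n) as pool:
        for future in [pool.submit(task) for _ in range(n)]:
            future.result()
    return time.perf_counter() - start

for n in (1, 4, 20):
    print(f"{n:>2} tasks / {n:>2} threads: {probe(sleeping_task, n):.2f} s")

If the sleep version stays flat but your real call grows linearly, the contention is somewhere in your call path (the GIL being re-acquired, a lock inside the DLL, or Python-level work around the ctypes call), not in ThreadPoolExecutor itself.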
Related
I've never used the async/await syntax, but I often need to make HTTP/S requests and parse the responses while awaiting further responses. To accomplish this I currently use the ThreadPoolExecutor class, which executes the calls asynchronously anyway; effectively I believe I'm achieving the same result I would get by writing more lines of code to use async/await.
Operating under the assumption that my current implementations work asynchronously, I am wondering how the async-await implementation would differ from that of my original one which used Threads and a Queue to manage workers; it also used a Semaphore to limit workers.
That implementation was devised under the following conditions:
There may be any number of requests
The total number of active requests may not exceed 4
Only send the next request when a response is received
The basic flow of the implementation was as follows (a rough sketch in code follows this list):
Generate container of requests
Create a ListeningQueue
For each request create a Thread and pass the URL, ListeningQueue and Semaphore
Each Thread attempts to acquire the Semaphore (limited to 4 Threads)
Main Thread continues in a while loop, checking the ListeningQueue
When a Thread receives a response, place in ListeningQueue and release Semaphore
A waiting Thread acquires Semaphore (process repeats)
Main Thread processes responses until count equals number of requests
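Here is a minimal sketch of that flow, assuming a small list of URLs and using requests.get as a stand-in for the real HTTP call (URLS and fetch are placeholder names, not the original code):

import threading
import queue
import requests

URLS = ["https://example.com/a", "https://example.com/b"]   # placeholder
listening_queue = queue.Queue()
semaphore = threading.Semaphore(4)          # at most 4 active requests

def fetch(url):
    with semaphore:                         # wait for a free slot
        response = requests.get(url)
    listening_queue.put((url, response))    # hand the result to the main thread

threads = [threading.Thread(target=fetch, args=(u,)) for u in URLS]
for t in threads:
    t.start()

for _ in range(len(URLS)):                  # main thread processes responses
    url, response = listening_queue.get()
    print(url, response.status_code)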
Because I need to limit the number of active Threads, I use a Semaphore. If I were to try this using async/await, I would have to devise some logic in the Main Thread or in the async def that prevents a request from being sent once the limit has been reached. Apart from that constraint, I don't see where async/await would be any more useful. Is it that it lowers overhead and the chance of race conditions by eliminating Threads? Is that the main benefit? If so, given that a ThreadPoolExecutor already makes the calls asynchronously, just with a pool of Threads, does that make async/await the better option?
Operating under the assumption that my current implementations work asynchronously, I am wondering how the async-await implementation would differ from that of my original one which used Threads and a Queue to manage workers
It would not be hard to implement very similar logic using asyncio and async/await, which has its own version of a semaphore that is used in much the same way. See answers to this question for examples of limiting the number of parallel requests with a fixed number of tasks or by using a semaphore.
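For instance, a minimal asyncio version of the same 4-at-a-time constraint could look like this (aiohttp usage sketched from its documented API; URLS is a placeholder):

import asyncio
import aiohttp

URLS = ["https://example.com/a", "https://example.com/b"]   # placeholder

async def fetch(session, limit, url):
    async with limit:                   # at most 4 requests in flight
        async with session.get(url) as resp:
            return url, await resp.text()

async def main():
    limit = asyncio.Semaphore(4)        # created inside the running loop
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch(session, limit, u) for u in URLS))
    for url, body in results:
        print(url, len(body))

asyncio.run(main())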
As for advantages of asyncio over equivalent code using threads, there are several:
Everything runs in a single thread regardless of the number of active connections. Your program can scale to a large number of concurrent tasks without swamping the OS with an unreasonable number of threads or the downloads having to wait for a free slot in the thread pool before they even start.
As you pointed out, single-threaded execution is less susceptible to race conditions because the points where a task switch can occur are clearly marked with await, and everything in-between is effectively atomic. The advantage of this is less obvious in small threaded programs where the executor just hands tasks to threads in a fire-and-collect fashion, but as the logic grows more complex and the threads begin to share more state (e.g. due to caching or some synchronization logic), this becomes more pronounced.
async/await allows you to easily create additional independent tasks for things like monitoring, logging and cleanup. When using threads, those do not fit the executor model and require additional threads, always with a design smell that suggests threads are being abused. With asyncio, each task can be written as if it were running in its own thread, and use await to wait for something to happen (and yield control to others) - e.g. a timer-based monitoring task would consist of a loop that awaits asyncio.sleep(), but the logic could be arbitrarily complex. Despite the code looking sequential, each task is lightweight and costs the OS no more than a small allocated object.
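For example, a periodic monitor is just another coroutine running in the same thread (the body here is a trivial placeholder); you would start it with asyncio.create_task(monitor()) alongside the downloads and cancel it when they finish:

import asyncio

async def monitor(interval=1.0):
    # runs concurrently with the download tasks, in the same thread
    while True:
        await asyncio.sleep(interval)
        print("still alive, pending tasks:", len(asyncio.all_tasks()))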
async/await supports reliable cancellation, which threads never did and likely never will. This is often overlooked, but in asyncio it is perfectly possible to cancel a running task, which causes it to wake up from await with an exception that terminates it. Cancellation makes it straightforward to implement timeouts, task groups, and other patterns that are impossible or a huge chore when using threads.
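A timeout, for instance, is little more than a wrapper around an await; a minimal sketch:

import asyncio

async def with_timeout(coro, seconds=10):
    # wait_for cancels the wrapped awaitable if it does not finish in time
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return None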
On the flip side, the disadvantage of async/await is that all your code must be async. Among other things, it means that you cannot use libraries like requests; you have to switch to asyncio-aware alternatives such as aiohttp.
So I have a batch of 1000 tasks that I assign using parmap / Python's multiprocessing module to 8 cores (dual-Xeon machine, 16 physical cores). Currently this runs synchronously.
The issue is that usually one of the cores lags well behind the others and still has several jobs/tasks to complete after all the other cores have finished their work. This may be related to core speed (older computer), but more likely it is due to some of the tasks being more difficult than others - so the one core that gets the slightly more difficult jobs ends up lagging...
I'm a little confused here - is this what async parallelization is for? I've tried using it before, but because this step is part of a very large processing pipeline, it wasn't clear how to create a barrier to force the program to wait until all async processes are done.
Any advice/links to similar questions/answers are appreciated.
[EDIT] To clarify, the processes are ok to run independently, they all save data to disk and do not share variables.
parmap author here
By default, both in multiprocessing and in parmap, tasks are divided into chunks and the chunks are sent to each worker process (see the multiprocessing documentation). The reason behind this is that sending tasks individually to a process would introduce significant computational overhead in many situations. The overhead is reduced if several tasks are sent at once, in chunks.
The number of tasks in each chunk is controlled with chunksize in multiprocessing (and pm_chunksize in parmap). By default, chunksize is computed as "number of tasks"/(4*"pool size"), rounded up (see the multiprocessing source code). So for your case, with a pool of 8 workers: 1000/(4*8) = 31.25 -> 32 tasks per chunk.
If, as in your case, many computationally expensive tasks fall into the same chunk, that chunk will take a long time to finish.
One "cheap and easy" way to workaround this is to pass a smaller chunksize value. Note that using the extreme chunksize=1 may introduce undesired larger cpu overhead.
A proper queuing system, as suggested in other answers, is a better solution in the long term, but may be overkill for a one-time problem.
You really need to look at creating microservices and using a task queue. For instance, you could put a list of jobs in Celery or Redis, and then have the microservices pull jobs from the queue one at a time and process them. Once done, they pull the next item, and so forth. That way your load is distributed based on readiness, not on a preset list.
http://www.celeryproject.org/
https://www.fullstackpython.com/task-queues.html
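A minimal sketch of that idea with Celery (the broker URL and task body are placeholders, not a recommendation for your setup):

# tasks.py - each worker pulls the next job from the broker as soon as it is free
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")   # placeholder broker

@app.task
def process_job(job):
    ...                     # the real work goes here
    return job

Workers are started with "celery -A tasks worker", and jobs are enqueued from your driver script with process_job.delay(job).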
I am trying to get my head around threading vs. CPU usage. There are plenty of discussions about threading vs. multiprocessing (a good overview being this answer) so I decided to test this out by launching a maximum number of threads on my 8 CPU laptop running Windows 10, Python 3.4.
My assumption was that all the threads would be bound to a single CPU.
EDIT: it turns out that this was not a good assumption. I now understand that for multithreaded code, only one piece of Python code can run at a time (no matter where / on which core). This is different for multiprocessing code (where processes are independent and do indeed run independently).
While I read about these differences, it was one answer in particular that actually clarified this point.
I think it also explains the CPU view below: it is an average over many threads spread out across many CPUs, with only one of them running at any given time (which "averages out" to all of them appearing to run all the time).
This is not a duplicate of the linked question (which addresses the opposite problem, i.e. all threads on one core), and I will leave it up in case someone has a similar question one day and is helped by my enlightenment.
The code
import threading
import time

def calc():
    time.sleep(5)
    while True:
        a = 2356^36          # note: ^ is bitwise XOR, not exponentiation, but it still keeps the CPU busy

n = 0
while True:
    try:
        n += 1
        t = threading.Thread(target=calc)
        t.start()
    except RuntimeError:
        print("max threads: {n}".format(n=n))
        break
    else:
        print('.')

time.sleep(100000)
This led to 889 threads being started.
The load on the CPUs was, however, distributed (and surprisingly low for a pure CPU calculation; the laptop is otherwise idle, with essentially no load when not running my script):
Why is it so? Are the threads constantly moved as a pack between CPUs, and is what I see just an average (the reality being that at any given moment all threads are on one CPU)? Or are they indeed distributed?
As of today it is still the case that only one thread holds the GIL at a time, so only one thread is running Python code at any given moment.
The threads are managed at the operating-system level. In older CPython versions the running thread released the GIL every 100 'ticks' (interpreter instructions); since Python 3.2 the GIL is instead released on a time-based switch interval (5 ms by default, adjustable with sys.setswitchinterval()).
Because the threads in this example do continuous calculations, that switch point is reached constantly, leading to an almost immediate release of the GIL and a 'battle' between threads to acquire it.
So my assumption is that your operating system is kept busier than expected by the (too) fast thread switching plus the near-continuous releasing and re-acquiring of the GIL. The OS spends more time on switching than on doing any useful calculation.
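For reference, the switch interval can be inspected and tuned (Python 3.2+):

import sys

print(sys.getswitchinterval())   # 0.005 s by default
sys.setswitchinterval(0.05)      # ask the interpreter to force switches less often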
As you mention yourself, to use more than one core at a time it's better to look at multiprocessing-based tools (the multiprocessing module, joblib/Parallel).
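A minimal multiprocessing version of the same experiment might look like this (the loop bound is arbitrary, just enough to keep each core busy for a while):

import multiprocessing as mp

def calc(_idx):
    a = 0
    for _ in range(10_000_000):   # bounded version of the busy loop above
        a = 2356 ^ 36
    return a

if __name__ == "__main__":
    with mp.Pool(processes=mp.cpu_count()) as pool:
        pool.map(calc, range(mp.cpu_count()))   # one busy task per core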
Interesting read:
http://www.dabeaz.com/python/UnderstandingGIL.pdf
Um. The point of multithreading is to make sure the work gets spread out. A really easy cheat is to use as many threads as you have CPU cores. The point is that they are all independent, so they can actually run at the same time. If they were all on the same core, only one thread at a time could really run; they would pass that core back and forth at the OS level.
Your assumption is wrong and bizarre. What would lead you to think they should all run on the same CPU and consequently go at 1/8th speed? The whole reason to thread them is typically to make the whole batch go faster than a single core alone could.
In fact, what do you think writing parallel code is for, if not to run independently on several cores at the same time? Otherwise it would be pointless and hard to do: why build complex fetching, branching and forking routines just to accomplish things more slowly than one core plugging away at the data?
I am writing Python code using mpi4py, from which I import MPI. I then set up the global communicator MPI.COMM_WORLD and store it in the variable comm.
I am running this code with n > 1 processes, and at some point they all enter a for loop (all ranks have the same number of iterations to go through).
Inside the for loop I have a "comm.reduce(...)" call.
This seems to work for a small number of processes, but as the problem size increases (with 64 processes, say) my program "hangs".
So I am wondering if this has to do with the reduce(...) call. I know that this call needs all processes (that is, say we run 2 processes in total: if one enters the loop but the other doesn't for whatever reason, the program will hang, because the reduce(...) call waits for both).
My question is:
Is the reduce call a "synchronization" point, i.e., does it work like a comm.Barrier() call?
And, if possible, more generally: which calls synchronize the processes (if there are any besides Barrier)?
Yes, the standard MPI reduce call is blocking: no process returns from it until its part of the communication is complete, and the root in particular cannot proceed until it has received a contribution from every rank, so in practice it acts as a synchronization point. Other blocking calls include Allgather, Allreduce, Alltoall, Barrier, Bsend, Gather, Recv, Reduce, Scatter, etc.
Many of these have non-blocking equivalents, which you'll find prefixed with an I (e.g. Isend), but these aren't implemented across the board in mpi4py.
See mpi: blocking vs non-blocking for more info on that.
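For illustration, a minimal reduce in mpi4py looks like this (the values are placeholders; run it under mpiexec):

# reduce_demo.py - run with: mpiexec -n 4 python reduce_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = rank + 1
total = comm.reduce(local, op=MPI.SUM, root=0)   # blocking collective call
if rank == 0:
    print("sum over all ranks:", total)          # 1+2+3+4 = 10 with -n 4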
Not sure about your hang, though. It may be an issue of processor crowding; running a 64-process job on a 4-core desktop can get loud.
I've written a working program in Python that parses a batch of binary files, extracting data into a data structure. Each file takes around a second to parse, which translates to hours for thousands of files. I've successfully implemented a threaded version of the batch parsing method with an adjustable number of threads. I tested the method on 100 files with a varying number of threads, timing each run. Here are the results (0 threads refers to my original, pre-threading code; 1 thread refers to the new version run with a single worker thread spawned).
0 threads: 83.842 seconds
1 threads: 78.777 seconds
2 threads: 105.032 seconds
3 threads: 109.965 seconds
4 threads: 108.956 seconds
5 threads: 109.646 seconds
6 threads: 109.520 seconds
7 threads: 110.457 seconds
8 threads: 111.658 seconds
Though spawning a thread confers a small performance increase over having the main thread do all the work, increasing the number of threads actually decreases performance. I would have expected to see performance increases, at least up to four threads (one for each of my machine's cores). I know threads have associated overhead, but I didn't think this would matter so much with single-digit numbers of threads.
I've heard of the "global interpreter lock", but as I move up to four threads I do see the corresponding number of cores at work: with two threads, two cores show activity during parsing, and so on.
I also tested some different versions of the parsing code to see if my program is IO-bound. It doesn't seem to be: just reading in the file takes a relatively small proportion of the time; processing it is almost all of it. Even if I skip the IO and process an already-read version of a file, adding a second thread damages performance and a third improves it only slightly. I'm just wondering why I can't take advantage of my computer's multiple cores to speed things up. Please post any questions or ways I could clarify.
This is sadly how things are in CPython, mainly due to the Global Interpreter Lock (GIL). Python code that's CPU-bound simply doesn't scale across threads (I/O-bound code, on the other hand, might scale to some extent).
There is a highly informative presentation by David Beazley in which he discusses some of the issues surrounding the GIL. The video can be found here (thanks @Ikke!).
My recommendation would be to use the multiprocessing module instead of multiple threads.
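A hedged sketch of what that could look like for your batch of files (parse_file and the file list are placeholders standing in for your existing code):

import multiprocessing as mp

def parse_file(path):
    ...                      # your existing single-file parser goes here
    return path

if __name__ == "__main__":
    files = ["file_000.bin", "file_001.bin"]     # placeholder list
    with mp.Pool(processes=4) as pool:           # roughly one worker per core
        for path in pool.imap_unordered(parse_file, files):
            print("parsed", path)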
The threading library does not actually run Python code on multiple cores simultaneously. For CPU-bound computation you should use the multiprocessing library instead.