I was wondering whether there is a straightforward way to parallelize the method animation.FuncAnimation.to_html5_video(), which takes a long time to run. While it runs, it only uses one of my CPU cores at a time, and I am guessing the process should be parallelizable. Any ideas to make this work without digging too much?
I need this to plot some time-evolving curves from an ODE system, but rendering the video is by far the slowest step.
PS: When I say parallelize, I mean e.g. using a library like multiprocessing.
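For reference, the fallback I am considering is to render each frame to a PNG in worker processes and stitch them into a video with ffmpeg afterwards. A rough sketch (a sine curve stands in for my ODE curves, and ffmpeg is assumed to be on the PATH):

import multiprocessing as mp
import subprocess
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe inside worker processes
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 10, 200)  # hypothetical time grid from the ODE solver

def render_frame(i):
    fig, ax = plt.subplots()
    ax.plot(t[:i + 1], np.sin(t[:i + 1]))  # stand-in for the real curves
    ax.set_xlim(0, 10)
    ax.set_ylim(-1.1, 1.1)
    fig.savefig(f"frame_{i:04d}.png")
    plt.close(fig)

if __name__ == "__main__":
    with mp.Pool() as pool:
        pool.map(render_frame, range(len(t)))
    # stitch the rendered frames into a video
    subprocess.run(["ffmpeg", "-y", "-framerate", "30",
                    "-i", "frame_%04d.png", "out.mp4"], check=True)

The encoding step stays sequential, but frame rendering is usually the expensive part and this spreads it across all cores.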
Related
I am working on an optimization algorithm which I need to run at least 100 times to see how it performs. The entire script is a plain loop, running the same block of code over and over. The problem is that the whole thing takes up to 10 hours on a small dataset.
Is it possible to run this on a platform so that I can decrease this time? Can I run it faster on the cloud?
I suggest you "divide and conquer" your algorithm.
First, take a small sample of the data so you can explore the code without losing too much time. Once you have a manageable piece of code, you can apply some kind of profiling tool to see where your time is spent.
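For instance, you can first time one pass on the sample; a minimal sketch, where run_algorithm and the sample are made-up placeholders for your optimization loop and dataset:

import time

def run_algorithm(data):
    # hypothetical stand-in for one pass of your optimization loop
    return sum(x * x for x in data)

sample = list(range(1000))  # hypothetical: a small slice of your dataset

start = time.perf_counter()
run_algorithm(sample)
elapsed = time.perf_counter() - start
print(f"one pass on the sample took {elapsed:.3f}s")

Once a single pass on the sample is fast enough to iterate on, run it under a profiler to see which parts dominate.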
I am calling a function and saving the two outputs to variables but this process is taking time because the outputs are generated by solving an ODE.
Is it possible to use multiple cores to run the function faster so the values are saved sooner? If so, could someone provide a simple example?
Thank you
Simply running the same code on multiple cores will not make it run faster. It really depends on the type of tasks you're doing. Here are some questions you need to answer before you can decide whether the code will benefit from parallel processing:
Are the steps in your computation sequence-dependent? In other words, does one part of the code depend on the calculations done in the previous one, or can some of them be calculated in parallel? Look at Amdahl's law to learn how much speedup to expect based on how much of your code you can parallelize.
Does your code involve lots of reading/writing to disk and memory, or is it just lots of computation? If you are doing significant reads and writes to disk, then creating multiple processes to do other work while your threads wait for disk can result in significant speedups. But again, this depends on your answer to the previous point about sequence dependency.
How long does your code currently take to run, and is the overhead of creating multiple processes going to be more than the time it takes to run sequentially? In your question you don't give specific times: if you're talking about speeding up a task that takes a few seconds, the time required to create multiple processes might be significant compared to the total task. But if you're talking about a task that takes minutes, the overhead won't be as significant.
Have you considered whether your code is data-parallel or task-parallel? If so, you can decide whether to parallelize on the CPU or the GPU. For large mathematical operations, look at NumPy for CPU-based and CuPy for GPU-based operations. A sketch of the data-parallel case, applied to the ODE solves in your question, follows this list.
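To make the data-parallel case concrete: a single ODE integration is inherently sequential in time (each step depends on the previous one), so one solve cannot be split across cores. But if you need the same system solved for many initial conditions or parameter values, those solves are independent and parallelize cleanly. A minimal sketch using multiprocessing and scipy.integrate.solve_ivp, where rhs is a made-up placeholder for your system:

import multiprocessing as mp
from scipy.integrate import solve_ivp

def rhs(t, y):
    # hypothetical right-hand side; replace with your ODE system
    return -0.5 * y

def solve_one(y0):
    sol = solve_ivp(rhs, (0.0, 10.0), [y0])
    return sol.t, sol.y  # the "two outputs" saved per solve

if __name__ == "__main__":
    initial_conditions = [1.0, 2.0, 3.0, 4.0]
    with mp.Pool() as pool:
        results = pool.map(solve_one, initial_conditions)
    print(len(results), "solutions computed")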
I am currently working on an ML NLP project and I want to measure the execution time of certain parts, and also potentially predict how long the execution will take. For example, I want to measure the ML training process (including sub-steps like data preprocessing). I have been looking online and have come across different Python modules that can measure the execution time of functions (like time or timeit). However, I still haven't found a concrete solution for predicting the time a function will take to execute. I have thought about running the code several times, saving the (data_size, time) values, and then extrapolating from those for future data. I have also thought about updating this estimate with the time it took to run several subparts of a function (i.e. seeing how much of the process has completed and how long it took, and using that to adjust the time left).
However, I am not sure of any of this and I wanted to see if there were better options out there that I wasn't aware of, so if anyone has a better idea, I'd be thankful if you could share it.
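To illustrate what I mean, here is a sketch of that extrapolation idea, fitting a power law t = c * n**k to (data_size, seconds) pairs in log-log space (the numbers are made up):

import numpy as np

# Hypothetical measurements from earlier runs: (data_size, seconds)
sizes = np.array([1000, 2000, 4000, 8000])
times = np.array([1.2, 2.5, 5.3, 11.0])

# A power law t = c * n**k becomes a straight line in log-log space,
# so an ordinary linear fit recovers k and log(c).
k, log_c = np.polyfit(np.log(sizes), np.log(times), 1)

def predict_seconds(n):
    return np.exp(log_c) * n ** k

print(predict_seconds(16000))  # extrapolated estimate for a larger run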
Have you looked into profiling? It gives a detailed breakdown of function execution times, the number of calls, etc. You will have to execute the script with the profiler enabled, and then you get the detailed breakdown.
https://docs.python.org/3/library/profile.html#module-cProfile
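A minimal sketch of driving cProfile from inside a script; train_model here is just a stand-in for whatever entry point you want to measure:

import cProfile
import pstats

def train_model():
    # hypothetical stand-in for your training pipeline
    return sum(i * i for i in range(10**6))

cProfile.run("train_model()", "train.prof")
stats = pstats.Stats("train.prof")
stats.sort_stats("cumulative").print_stats(10)  # ten heaviest call paths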
If you want real-time progress reports, there are a couple of libraries I've seen. https://pypi.org/project/tqdm/
https://pypi.org/project/progressbar2/
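tqdm in particular only needs to wrap an iterable, and it prints a running estimate of the time remaining, which covers part of your prediction requirement. A minimal sketch, with time.sleep standing in for one training step:

import time
from tqdm import tqdm

for epoch in tqdm(range(100), desc="training"):
    time.sleep(0.05)  # stand-in for one training step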
Hope these help!
I have a project in Python that requires regressing many variables against many others. I am using a Jupyter Notebook for clarity but am also willing to use another IDE if it's easier. My code looks something like:
for a in dependent_variables:
    for b in independent_variables:
        regress(a, b)  # regress a on b
My current dataset isn't huge, so this whole thing takes maybe 30 seconds, but I will soon have a much larger dataset that will significantly increase the time required. I'm curious whether this is a situation suitable for parallelization. Specifically, if I have a dual-threaded eight-core processor (meaning 16 logical CPUs total), is it possible to run simultaneous processes where each process regresses one of the first variables against one of the second variables, allowing me to complete, say, eight of these regressions at a time (if I allocate half of the CPUs to this)? I am not super familiar with parallelization, and most other answers I've found discuss parallelizing a single function call, not running multiple similar functions simultaneously. I appreciate the help!
Nominally, this is:

import itertools
import multiprocessing as mp

def regress_me(pair):
    ind_var, dep_var = pair
    # your regression may be better than mine...
    result = "{} {}".format(ind_var, dep_var)
    return result

if __name__ == "__main__":
    analyse_this = list(itertools.product(independent_variables,
                                          dependent_variables))
    with mp.Pool(8) as pool:
        result = pool.map(regress_me, analyse_this)
A lot depends on what is being passed between parent and child, and whether you are on a forking system like Linux or a spawning system like Windows. If the datasets are being fetched from disk, it may be better to do the read inside the worker regress_me instead of passing the data from the parent. You can read up on that in the standard Python multiprocessing documentation.
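For example, a sketch of the read-in-the-worker variant; the CSV filename, the use of pandas, and the np.polyfit regression are all assumptions for illustration:

import numpy as np
import pandas as pd  # assumption: the data lives in a CSV

def regress_me(pair):
    ind_var, dep_var = pair
    # Load only the two columns this worker needs, so the parent never
    # has to pickle the whole dataset into each child process.
    df = pd.read_csv("data.csv", usecols=[ind_var, dep_var])
    slope, intercept = np.polyfit(df[ind_var], df[dep_var], 1)
    return ind_var, dep_var, slope, intercept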
So I have this really simple code:
import numpy as np
import scipy as sp
import scipy.linalg  # needed so sp.linalg is available

mat = np.identity(4)

for i in range(100000):
    np.linalg.inv(mat)

for i in range(100000):
    sp.linalg.inv(mat)
Now, the first bit, where the inverse is done through numpy, for some reason launches 3 additional threads (so 4 total, including the main one), and collectively they consume roughly 3 of my available cores, causing the fans on my computer to go wild.
The second bit, where I use SciPy, has no noticeable impact on CPU use and there's only one thread, the main thread. It runs about 20% slower than the numpy loop.
Anyone have any idea what's going on? Is numpy doing threading in the background? Why is it so inefficient?
I faced the same issue; the fix was to set export OPENBLAS_NUM_THREADS=1 and then run the Python script in the same terminal.
My issue was that a simple code block consisting of np.linalg.inv() was consuming more than 50% of CPU. After setting the OPENBLAS_NUM_THREADS variable, CPU usage dropped to around 3% and the total execution time also went down. I read somewhere that this is an issue with the OpenBLAS library (which is used by numpy.linalg.inv). Hope this helps!
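If you'd rather not rely on the shell, the same fix works from inside the script, as long as the variable is set before numpy is first imported (OpenBLAS sizes its thread pool when it loads):

import os
os.environ["OPENBLAS_NUM_THREADS"] = "1"  # must run before importing numpy

import numpy as np

mat = np.identity(4)
for i in range(100000):
    np.linalg.inv(mat)  # now stays on a single thread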