I've looked around the forum and can't find anything that helps. I'm trying to parallelize some processing but can't get it to work.
from multiprocessing import Pool
import numpy as np

x = np.array([1, 2, 3, 4, 5])

def SATSolver(args):
    # many calculations happen here (omitted in the post);
    # a dummy stand-in so the example runs:
    result = args.sum()
    return result

def main(arg):
    new_args = np.append(x, arg)
    return SATSolver(new_args)

y = np.array([8, 9, 10, 11])

if __name__ == '__main__':
    pool = Pool()
    results = pool.map(main, y)
    print(results)
print(results)
The SATSolver function is where the bulk of the work happens. Basically, I have an array x and a second array y. I want to append each value in y to x individually, and then run each new set through my SATSolver function. I would like to use the multiprocessing module so this can run in parallel.
Whenever I try and run this, I don't get an error, but a new interactive window pops up and says "Could not load the file from filepath\-c. Do you want to create a new file?"
Everything works perfectly when it's run without multiprocessing.
Any ideas on how to make this work?
Thanks!
I'm trying to understand why the following simple example doesn't successfully complete execution and seems to get stuck on the first line of really_simple_func (on Ubuntu machines, but not Windows). The code is:
import torch as t
import numpy as np
import multiprocessing as mp          # I've tried both multiprocessing
# import torch.multiprocessing as mp  # and torch.multiprocessing

def really_simple_func():
    temp_val_2 = t.tensor(np.zeros(425447)[0:400000])  # this is the line that blocks
    return 4.3

if __name__ == "__main__":
    print("Run brief starting")
    some_zeros = np.zeros(425447)
    temp_val = t.tensor(some_zeros[0:400000])  # DELETE THIS LINE TO MAKE IT WORK
    pool = mp.Pool(processes=1)
    job = pool.apply_async(really_simple_func)
    print("just before job.get()")
    result = job.get()
    print("Run brief completed. Reward = {}".format(result))
I have torch 1.11.0 and numpy 1.22.3 installed, and have tried both the CPU and GPU builds of torch. When I run this code on two different Ubuntu machines, I get the following output:
Run brief starting
just before job.get()
However, the code never successfully completes (doesn't print the "Run brief completed" line). (It does complete on a third Windows box).
On the Ubuntu machines, if I delete the line with the comment "#DELETE THIS LINE TO MAKE IT WORK" the execution DOES complete, printing the final line as expected. Similarly, if I leave the line defining temp_val in but delete the line with the comment "This is the line that blocks" it will also complete. Moreover, if I reduce the size of the temp_val tensor (say from 400000 to 4000) it will also complete successfully. Finally, it is worth noting that while I can reproduce this behaviour on two different Ubuntu machines, this code does actually complete on my Windows machine - though, as far as I can tell, the versions of key packages, such as torch, are the same.
I don't understand this behaviour. I suspect it is something to do with the way torch allocates memory or stores information. I've tried calling del temp_val to free up memory, but that doesn't seem to fix things. It seems to me that the call to t.tensor inside really_simple_func is prevented from completing whenever the main process has already called t.tensor to create a sufficiently large tensor.
I don't understand why this is happening, or even if that is the correct explanation. In any case, what would be best practice if I do need to do some tensor processing within apply_async as well as in the main thread? More generally, what is Torch waiting on when I make a call to t.tensor?
(Obviously, this is just the simplest version of the real code I'm trying to get to work that reproduced this issue. I realise that calling mp.Pool with only one process doesn't really make sense...nor, indeed, does using apply_async to call a function that returns a constant!)
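For reference, here is what forcing the "spawn" start method would look like in this repro. Spawn starts workers as fresh interpreters rather than fork()ed copies of the parent, and it is commonly suggested for fork-related hangs with libraries that keep internal thread state; I haven't verified that this is the right fix here, so treat it as a sketch:

import torch as t
import numpy as np
import multiprocessing as mp

def really_simple_func():
    temp_val_2 = t.tensor(np.zeros(425447)[0:400000])
    return 4.3

if __name__ == "__main__":
    # "spawn" makes workers start fresh instead of inheriting the parent's
    # fork()ed state (e.g. any thread pools torch may have created)
    mp.set_start_method("spawn")
    some_zeros = np.zeros(425447)
    temp_val = t.tensor(some_zeros[0:400000])
    pool = mp.Pool(processes=1)
    job = pool.apply_async(really_simple_func)
    print("Reward = {}".format(job.get()))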
Unfortunately, I cannot provide any answers to your questions.
I can, however, share experiences with seemingly the same issue. I use a Linux machine with torch 1.8.1 and numpy 1.19.2.
When I run the following code on my machine:
with Pool(max_pool) as p:
    pool_outputs = list(
        tqdm(
            p.imap(lambda f: get_model_results_per_query_file(get_preds, tokenizer, f), query_files),
            total=len(query_files)
        )
    )
The function get_model_results_per_query_file contains operations similar to the following:
feats = features.unsqueeze(0).repeat(batch_size, 1, 1).to(device)
(features is a torch tensor)
The first round of jobs fails immediately, and new ones are started in their place (which, for some reason, do not fail). The whole process never completes, though, since the main process still seems to be waiting on those first, failed jobs.
If I remove the lines in my code involving the repeat function, no jobs fail.
I managed to solve my issue, and preserve the same results, by adapting a solution similar to yours:
feats = torch.as_tensor(np.tile(features, (batch_size, 1, 1))).to(device)
I believe as_tensor works in a similar fashion to from_numpy in this case.
I only managed to find this solution thanks to your post and your proposed workaround, so thank you!
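(A quick illustrative check of that belief, not taken from the original posts: for a numpy array of a compatible dtype, both from_numpy and as_tensor avoid a copy and share memory with the source array.)

import numpy as np
import torch

a = np.zeros(3)
t1 = torch.from_numpy(a)  # shares memory with a
t2 = torch.as_tensor(a)   # no copy needed here, so this also shares memory
a[0] = 7.0
print(t1[0].item(), t2[0].item())  # both print 7.0: neither tensor copied the data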
After some further exploration, here is a brief answer to my own question.
While I still don't fully understand the blocking behaviour (and would welcome any further explanation), I have just seen that the way I'm generating torch tensors from a numpy array is not correct.
In particular, instead of using torch.tensor(temp_val) where temp_val is a numpy array, I should be using torch.from_numpy(temp_val). Doing this fixes the problem.
Alternatively, I can convert temp_val into a list and then create the tensor via torch.tensor(temp_val_as_list) - which also avoids the issue.
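In case it helps anyone, here is the repro from the question with just that change applied (everything else is unchanged):

import torch as t
import numpy as np
import multiprocessing as mp

def really_simple_func():
    # from_numpy instead of t.tensor: wraps the array without copying
    temp_val_2 = t.from_numpy(np.zeros(425447)[0:400000])
    return 4.3

if __name__ == "__main__":
    print("Run brief starting")
    some_zeros = np.zeros(425447)
    temp_val = t.from_numpy(some_zeros[0:400000])
    pool = mp.Pool(processes=1)
    job = pool.apply_async(really_simple_func)
    result = job.get()
    print("Run brief completed. Reward = {}".format(result))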
I am a beginner programmer with Python, and don't have much experience in general; currently I'm trying to parallelize a heavily CPU-bound process in my code. I'm using Anaconda to create environments and Visual Studio Code to debug.
A summary of the code is as follows:
from tkinter import filedialog
import concurrent.futures
import myfuncs as mf

file_path = filedialog.askopenfilename(title='Ask for a file containing data')
# import data from file_path
a = input('Ask the user for input')
Next, calculations are made from these inputs, and I reach a stage where I need to iterate over a list of lists. Each inner list may contain up to two values, and calls are made to a separate file.
For example the inputs are :
sub_data1 = [test1]
sub_data2 = [test1, test2]
dataset = [sub_data1, sub_data2]
This is the stage where I use a concurrent.futures.ProcessPoolExecutor() instance and its .map() method:
with concurrent.futures.ProcessPoolExecutor() as executor:
    sm_res = executor.map(mf.process_distr, dataset)
Inside myfuncs.py, the mf.process_distr() function works like this:
def process_distr(tests):
    sm_reg = []
    for i in range(len(tests)):
        if i == 0:
            # do stuff
            sm_reg.append(result1)
        else:
            # do stuff
            sm_reg.append(result2)
    return sm_reg
The problem is that when I try to execute this code, main.py seems to start running multiple times: it asks for user input and the file dialog pops up repeatedly (as many times as there are cores).
How can I resolve this matter?
Edit: After reading more into it, encapsulating the whole main.py code with:
if __name__ == '__main__':
did the trick. Thank you to anyone who gave time to help with my rookie problem.
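For other beginners landing here, a sketch of the resulting structure (using the placeholder names from my summary above):

from tkinter import filedialog
import concurrent.futures
import myfuncs as mf

def main():
    file_path = filedialog.askopenfilename(title='Ask for a file containing data')
    # import data from file_path and build the dataset, e.g.:
    dataset = [['test1'], ['test1', 'test2']]
    a = input('Ask the user for input')
    with concurrent.futures.ProcessPoolExecutor() as executor:
        sm_res = executor.map(mf.process_distr, dataset)
    return list(sm_res)

if __name__ == '__main__':
    # worker processes re-import this module, but the guard keeps them
    # from re-running the file dialog and input() prompts
    results = main()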
I am trying to parallelise a series of computations that use bigfloat. However, I get the error
Error sending result: '[BigFloat.exact('1.0000000', precision=20)]'. Reason: 'TypeError('self._value cannot be converted to a Python object for pickling')'
A MWE to reproduce the error is:
from bigfloat import BigFloat, precision
from multiprocessing import Pool

def f(x, a, b, N):
    with precision(20):
        X = BigFloat(x)
        for i in range(N):
            X = a*X*X - b
        return X

if __name__ == '__main__':
    pool = Pool(processes=2)
    out1, out2 = pool.starmap(f, [(1, 2, 1, 3), (2, 2, 2, 2)])
(the function itself is not important at all). If I do not use bigfloat, then everything is fine. So, it is definitely some sort of interaction between multiprocessing and bigfloat.
So, I imagine that multiprocessing is having trouble pickling the BigFloat object, and I do not seem to be able to "extract" only the value held by the BigFloat. How can I resolve this issue?
Apparently bigfloat doesn't support pickling; I get the same error when doing pickle.dumps(BigFloat(1)).
https://github.com/mdickinson/bigfloat/issues/106 notes this as a feature that still needs to be implemented.
As a workaround, why not just convert to strings when transferring between processes? I.e., change f to return str(X), and then have the other routines parse the strings back into BigFloats as needed.
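For example, a minimal sketch of that workaround applied to your MWE:

from bigfloat import BigFloat, precision
from multiprocessing import Pool

def f(x, a, b, N):
    with precision(20):
        X = BigFloat(x)
        for i in range(N):
            X = a*X*X - b
        return str(X)  # strings pickle fine, unlike BigFloat

if __name__ == '__main__':
    pool = Pool(processes=2)
    s1, s2 = pool.starmap(f, [(1, 2, 1, 3), (2, 2, 2, 2)])
    with precision(20):
        out1, out2 = BigFloat(s1), BigFloat(s2)  # parse back in the parent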
Otherwise, you could write some code to support pickling and submit it to the project.
I have tried many things and searched for quite some time. I'll try to summarize my problem.
I have a file named script.py.
Inside this script.py I have something like this:
import math
import numpy as np
from numpy import matrix

# Inserting variables:
A = float(input("insert position 1: "))
K = float(input("insert position 2: "))

# Doing some math:
a1 = A*K
a2 = A/K

# Defining a function:
def solve(var1, var2):
    # This function uses numpy and math and handles matrices.
    # I am not including it, to save space and to keep my problem clear.
    ...

# Calling the function:
solve(a1, a2)
print(solve)
# The values of a1 and a2 are the ones I calculated previously
Then I press "Run Module" to run script.py, and it shows:
>> insert position 1:
>> insert position 2:
I insert the values and then it shows:
<function solve at 0x000000000A0C1378>
What can I do to make the python shell display the result directly?
Currently, in order to get the result, I need to type the following into the python shell:
>> solve(a1,a2)
I hope I was able to make my problem clear and simple. Thanks.
You are printing the function object itself, not the result of calling it. To print the result, either save the function's return value to a variable and then print that, or print the call directly.
1st Method:
ans = solve(a1,a2)
print(ans)
2nd Method:
print(solve(a1,a2))
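(Both methods assume solve actually returns its result. If the omitted body only computes things internally, add a return statement, e.g.:)

def solve(var1, var2):
    result = var1 + var2  # stand-in for the real matrix math
    return result         # without this, print(solve(a1, a2)) prints None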
The following (simplified) code applies an interpolation function using the multiprocessing module:
from multiprocessing import Pool
import numpy as np
from scipy.interpolate import LinearNDInterpolator
...

if __name__ == "__main__":
    p = Pool(4)
    lndi = LinearNDInterpolator(points, valuesA)
    valuesB = list(np.split(valuesA, 4))
    ret = p.map(lndi.__call__, valuesB)
When I run the .py, Python freezes; if the last line is run separately, everything works fine and I get the speed-up I hoped for.
Does anyone know how to fix the code so that it works automatically?
Thanks in advance.
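For reference, here is a fully self-contained version of the snippet, with made-up dummy data standing in for the elided parts (I've split an explicit array of query points, rather than the data values, to make the roles of the arrays clear):

from multiprocessing import Pool
import numpy as np
from scipy.interpolate import LinearNDInterpolator

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.random((200, 2))   # dummy scattered sample locations
    values = rng.random(200)        # dummy data values at those locations
    queries = rng.random((80, 2))   # dummy locations to interpolate at

    lndi = LinearNDInterpolator(points, values)
    chunks = list(np.split(queries, 4))  # one chunk of query points per worker
    p = Pool(4)
    ret = p.map(lndi.__call__, chunks)   # evaluate the chunks in parallel
    print(np.concatenate(ret).shape)     # (80,)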
Edit: a GitHub issue was opened -> https://github.com/spyder-ide/spyder/issues/3367