Global variable not updated in Python while using multiprocessing

I have the following simple code:
from multiprocessing import Pool

x = []

def func(a):
    print(x, a)

def main():
    a = [1, 2, 3, 4, 5]
    pool = Pool(1)
    global x
    x = [1, 2, 3, 4]
    ans = pool.map(func, a)
    print(x)
It gives me the result
[] 1
[] 2
[] 3
[] 4
[] 5
[1, 2, 3, 4]
I expected the output to reflect the change in the global variable x.
It seems that the change to the global variable x is not visible to the worker when the pool call runs. What is the cause of this?

So I have done what GuangshengZuo suggested, and sadly the result was not what I wanted. After looking deeper into it, I realized the problem was not with the script, but rather with the OS.
On Windows there is no os.fork(), so the change to the global variable is not copied into the worker processes. On a Unix machine, the script works fine.

I think it is because this is multiprocessing, not multithreading. The main process and the new process do not share the same global variables. The new process gets a copy of the main process's state from when x was still []; after the pool is created, the main process changes x's value, but that change does not propagate to the new process's copy of x.
If you change the code to this:
from multiprocessing import Pool

x = []

def func(a):
    print(x, a)

def main():
    a = [1, 2, 3, 4, 5]
    global x
    x = [1, 2, 3, 4]
    pool = Pool(1)
    ans = pool.map(func, a)
    print(x)
then the output will be what you want.
Notice the position of pool = Pool(1): the global is now modified before the pool (and its worker process) is created.

Two separate processes will not share the same global variables. A multiprocessing pool abstracts away the fact that you are using two separate processes, which makes this hard to recognise.
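If the workers need to see a value produced in the parent, it is usually cleaner to hand it to them explicitly than to rely on a shared global. The sketch below is my own illustration, not from the answers above; func, init_worker and use_global are made-up names. It shows two common ways: binding the value into the mapped function with functools.partial, or copying it into each worker's global via the Pool initializer. Either way works on both fork (Unix) and spawn (Windows) start methods.

from functools import partial
from multiprocessing import Pool

def func(shared_x, a):
    # shared_x arrives as an ordinary argument instead of a global
    print(shared_x, a)

def init_worker(value):
    # alternative: copy the value into each worker's own global x
    global x
    x = value

def use_global(a):
    print(x, a)

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5]
    value = [1, 2, 3, 4]

    # Option 1: bind the value into the function the pool calls
    with Pool(1) as pool:
        pool.map(partial(func, value), data)

    # Option 2: hand the value to every worker once, at pool start-up
    with Pool(1, initializer=init_worker, initargs=(value,)) as pool:
        pool.map(use_global, data)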

Related

Python multiprocessing is not giving expected results

I am new to multiprocessing with Python. I was following a course and I find that the code is not working as they say in the tutorials. For example, this code:
import multiprocessing

# empty list with global scope
result = []

def square_list(mylist):
    """
    function to square a given list
    """
    global result
    # append squares of mylist to global list result
    for num in mylist:
        result.append(num * num)
    # print global list result
    print("Result(in process p1): {}".format(result))

if __name__ == "__main__":
    # input list
    mylist = [1, 2, 3, 4]

    # creating new process
    p1 = multiprocessing.Process(target=square_list, args=(mylist,))

    # starting process
    p1.start()
    # wait until process is finished
    p1.join()

    # print global result list
    print("Result(in main program): {}".format(result))
should print this result as they say in the tutorial:
Result(in process p1): [1, 4, 9, 16]
Result(in main program): []
but when I run it, it only prints:
Result(in main program): []
I think the process did not even start.
I am using Python 3.7.9 from Anaconda.
How do I fix this?
Do not use global variables that you access from several processes at the same time. Global variables are a bad idea most of the time and should be used very carefully.
The easiest way is to use p.map (you don't have to start/join the processes yourself):
with Pool(5) as p:
    # map() calls the function once per element of mylist, so here
    # square_list should square a single number and return it
    result = p.map(square_list, mylist)
If you do not want to use p.map, you can also use q.put() to return the value from the worker and q.get() to fetch it in the main process.
You can also find examples of getting results out of multiprocessed functions here:
https://docs.python.org/3/library/multiprocessing.html
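As a rough sketch of the q.put()/q.get() approach mentioned above (my own example, reusing the square_list idea from the question): the worker computes its result locally and sends it back through a multiprocessing.Queue, and the main process reads it.

import multiprocessing

def square_list(mylist, q):
    # compute locally, then send the result back through the queue
    q.put([num * num for num in mylist])

if __name__ == "__main__":
    mylist = [1, 2, 3, 4]
    q = multiprocessing.Queue()

    p1 = multiprocessing.Process(target=square_list, args=(mylist, q))
    p1.start()

    result = q.get()   # blocks until the worker has put its result
    p1.join()

    print("Result(in main program): {}".format(result))  # [1, 4, 9, 16]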

Use a global variable to keep track of progression of a multiprocessing program

I have a program that I run with multiprocessing. I would like to have a progress display using print.
This is what I came up with:
import multiprocessing as mp
import os

global counter
global size

def f(x):
    global counter
    global size
    print("{} / {}".format(counter, size))
    counter += 1
    return x**2

size = 4
counter = 1
result = list()
for x in [1, 2, 3, 4]:
    result.append(f(x))
This one works. However, if you replace the bottom part with:
with mp.Pool(processes=2) as p:
    p.starmap(f, [1, 2, 3, 4])
It doesn't. I don't understand why; can anyone help me get this up and running? Thanks :)
N.B: This is of course a dummy example.
EDIT:
Ok, a new issue appears with your solution. I'll make an example:
fix1 = 1
fix2 = 2
dynamic = [1, 2, 3, 4, 5]

def f(x, y, z):
    return x**2 + y + z

size = len(dynamic)
counter = 1
with mp.Pool(processes=2) as p:
    for output in p.starmap(f, [(x, fix1, fix2) for x in dynamic]):
        print("{} / {}".format(counter, size))
        counter += 1
This one works, but it does all the prints at the end.
with mp.Pool(processes=2) as p:
    for output in p.imap_unordered(f, [(x, fix1, fix2) for x in dynamic]):
        print("{} / {}".format(counter, size))
        counter += 1
This one doesn't work and says that f() is missing 2 required positional arguments: fix1 and fix2.
Any idea why I get this behavior?
N.B.: I'm running on Windows.
On a forking system like Linux, subprocesses share a copy-on-write view of the parent's memory space. If one side updates memory, it gets its own private copy of the changed pages. On other systems, a new process is created and a fresh Python interpreter is executed. In either case, neither side sees the changes the other makes. That means everyone is updating its own private copy of counter and never sees the additions made by the others.
To complicate things further, stdout is not synchronized, so if the workers print, you're likely to get garbled messages.
An alternative is to count the results as they come back to the parent. The parent tracks the count and is the only one printing. If you don't care about the order of the returned data, then imap_unordered will work well for you.
import multiprocessing as mp

def f(x):
    return x**2

data = [1, 2, 3, 4]
result = []
with mp.Pool(processes=2) as p:
    for val in p.imap_unordered(f, data):
        result.append(val)
        print("progress", len(result)/len(data))

Dictionary multiprocessing

I want to parallelize the processing of a dictionary using the multiprocessing library.
My problem can be reduced to this code:
from multiprocessing import Manager, Pool

def modify_dictionary(dictionary):
    if (3,3) not in dictionary:
        dictionary[(3,3)] = 0.
    for i in range(100):
        dictionary[(3,3)] = dictionary[(3,3)] + 1
    return 0

if __name__ == "__main__":
    manager = Manager()
    dictionary = manager.dict(lock=True)
    jobargs = [(dictionary) for i in range(5)]
    p = Pool(5)
    t = p.map(modify_dictionary, jobargs)
    p.close()
    p.join()
    print dictionary[(3,3)]
I create a pool of 5 workers, and each worker should increment dictionary[(3,3)] 100 times. So, if the locking works correctly, I expect dictionary[(3,3)] to be 500 at the end of the script.
However, something in my code must be wrong, because this is not what I get: the locking does not seem to be "activated" and dictionary[(3,3)] always has a value < 500 at the end of the script.
Could you help me?
The problem is with this line:
dictionary[(3,3)] = dictionary[(3,3)]+1
Three things happen on that line:
Read the value of the dictionary key (3,3)
Increment the value by 1
Write the value back again
But the increment part is happening outside of any locking.
The whole sequence must be atomic, and must be synchronized across all processes. Otherwise the processes will interleave giving you a lower than expected total.
Holding a lock whilst incrementing the value ensures that you get the total of 500 you expect:
from multiprocessing import Manager, Pool, Lock

lock = Lock()

def modify_array(dictionary):
    if (3,3) not in dictionary:
        dictionary[(3,3)] = 0.
    for i in range(100):
        with lock:
            dictionary[(3,3)] = dictionary[(3,3)] + 1
    return 0

if __name__ == "__main__":
    manager = Manager()
    dictionary = manager.dict(lock=True)
    jobargs = [(dictionary) for i in range(5)]
    p = Pool(5)
    t = p.map(modify_array, jobargs)
    p.close()
    p.join()
    print dictionary[(3,3)]
I've managed many times to find the correct solution to a programming difficulty here, so I would like to contribute a little bit. The code above still has the problem of not updating the dictionary reliably. To get the right result you have to pass the lock and the correct jobargs to f; otherwise each process can end up working with its own copy of the lock. The code I found to work fine:
from multiprocessing import Process, Manager, Pool, Lock
from functools import partial

def f(dictionary, l, k):
    with l:
        for i in range(100):
            dictionary[3] += 1

if __name__ == "__main__":
    manager = Manager()
    dictionary = manager.dict()
    lock = manager.Lock()
    dictionary[3] = 0
    jobargs = list(range(5))
    pool = Pool()
    func = partial(f, dictionary, lock)
    t = pool.map(func, jobargs)
    pool.close()
    pool.join()
    print(dictionary)
In the code above, the lock is held for the entire iteration. In general, you should hold a lock for the shortest time that still does the job. The following version is much more efficient: the lock is acquired only to make the increment atomic.
def f(dictionary, l, k):
    for i in range(100):
        with l:
            dictionary[3] += 1
Note that dictionary[3] += 1 is not atomic, so it must be locked.

Python Multiprocessing with a single function

I have a simulation that is currently running, but the ETA is about 40 hours -- I'm trying to speed it up with multiprocessing.
It essentially iterates over 3 values of one variable (L) and over 99 values of a second variable (a). Using these values, it runs a complex simulation and returns 9 different standard deviations. Thus (even though I haven't coded it that way yet) it is essentially a function that takes two values as inputs (L, a) and returns 9 values.
Here is the essence of the code I have:
STD_1 = []
STD_2 = []
# etc.

for L in range(0, 6, 2):
    for a in range(1, 100):
        ### simulation code ###
        STD_1.append(value_1)
        STD_2.append(value_2)
        # etc.
Here is what I can modify it to:
master_list = []

def simulate(a, L):
    ### simulation code ###
    return (a, L, STD_1, STD_2)  # etc.

for L in range(0, 6, 2):
    for a in range(1, 100):
        master_list.append(simulate(a, L))
Since each of the simulations is independent, it seems like an ideal place to implement some sort of multithreading/processing.
How exactly would I go about coding this?
EDIT: Also, will everything be returned to the master list in order, or could it possibly be out of order if multiple processes are working?
EDIT 2: This is my code -- but it doesn't run correctly. It asks if I want to kill the program right after I run it.
import multiprocessing

data = []
for L in range(0, 6, 2):
    for a in range(1, 100):
        data.append((L, a))
print(data)

def simulation(arg):
    # unpack the tuple
    a = arg[1]
    L = arg[0]
    STD_1 = a**2
    STD_2 = a**3
    STD_3 = a**4
    # simulation code #
    return (STD_1, STD_2, STD_3)

print("1")
p = multiprocessing.Pool()
print("2")
results = p.map(simulation, data)
EDIT 3: Also, what are the limitations of multiprocessing? I've heard that it doesn't work on OS X. Is this correct?
Wrap the data for each iteration up into a tuple.
Make a list data of those tuples
Write a function f to process one tuple and return one result
Create p = multiprocessing.Pool() object.
Call results = p.map(f, data)
This will run instances of f in separate processes, with as many running in parallel as your machine has cores.
Edit1: Example:
from multiprocessing import Pool

data = [('bla', 1, 3, 7), ('spam', 12, 4, 8), ('eggs', 17, 1, 3)]

def f(t):
    name, a, b, c = t
    return (name, a + b + c)

p = Pool()
results = p.map(f, data)
print(results)
Edit2:
Multiprocessing should work fine on UNIX-like platforms such as OS X. Only platforms that lack os.fork (mainly MS Windows) need special attention, but even there it still works. See the multiprocessing documentation.
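One likely cause of the behaviour in EDIT 2 (my guess; the answer above does not say this explicitly): on platforms that spawn new interpreters instead of forking, the module is re-imported in every worker, so any Pool creation at module level must sit behind an if __name__ == "__main__": guard. A minimal sketch of the EDIT 2 code restructured that way:

import multiprocessing

def simulation(arg):
    # unpack the (L, a) tuple
    L, a = arg
    STD_1 = a**2
    STD_2 = a**3
    STD_3 = a**4
    # real simulation code would go here
    return (L, a, STD_1, STD_2, STD_3)

if __name__ == "__main__":
    data = [(L, a) for L in range(0, 6, 2) for a in range(1, 100)]

    with multiprocessing.Pool() as p:
        # map() preserves the order of `data` in `results`
        results = p.map(simulation, data)

    print(results[:3])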
Here is one way to run it in parallel threads:
import threading

L_a = []
for L in range(0, 6, 2):
    for a in range(1, 100):
        L_a.append((L, a))
        # Add the rest of your objects here

def RunParallelThreads():
    # Create an index list
    indexes = range(0, len(L_a))
    # Create the output list
    output = [None for i in indexes]
    # Create all the parallel threads
    threads = [threading.Thread(target=simulate, args=(output, i)) for i in indexes]
    # Start all the parallel threads
    for thread in threads: thread.start()
    # Wait for all the parallel threads to complete
    for thread in threads: thread.join()
    # Return the output list
    return output

def simulate(list, index):
    (L, a) = L_a[index]
    list[index] = (a, L)  # Add the rest of your objects here

master_list = RunParallelThreads()
Use Pool().imap_unordered if ordering is not important. It will return results in a non-blocking fashion.

Make the random module thread-safe in Python

I have an application that requires the same results given the same random seed. But I find that random.randint is not thread-safe. I have tried a mutex but it does not work. Here is my experiment code (long but simple):
import threading
import random

def child(n, a):
    g_mutex = threading.Lock()
    g_mutex.acquire()
    random.seed(n)
    for i in xrange(100):
        a.append(random.randint(0, 1000))
    g_mutex.release()

def main():
    a = []
    b = []
    c1 = threading.Thread(target=child, args=(10, a))
    c2 = threading.Thread(target=child, args=(20, b))
    c1.start()
    c2.start()
    c1.join()
    c2.join()

    c = []
    d = []
    c1 = threading.Thread(target=child, args=(10, c))
    c2 = threading.Thread(target=child, args=(20, d))
    c1.start()
    c1.join()
    c2.start()
    c2.join()

    print a == c, b == d

if __name__ == "__main__":
    main()
I want the code to print True, True, but it stands a chance of giving False, False. How can I make randint thread-safe?
You can create separate instances of random.Random for each thread
>>> import random
>>> local_random = random.Random()
>>> local_random.seed(1234)
>>> local_random.randint(1,1000)
967
From the documentation for random:
The functions supplied by this module are actually bound methods of a hidden instance of the random.Random class. You can instantiate your own instances of Random to get generators that don’t share state. This is especially useful for multi-threaded programs, creating a different instance of Random for each thread, and using the jumpahead() method to make it likely that the generated sequences seen by each thread don’t overlap.
The documentation doesn't show the constructor signature explicitly, but it does show class random.SystemRandom([seed]), and random.Random([seed]) accepts a seed in the same way.
Example:
local_random = random.Random(n)
for i in xrange(100):
    a.append(local_random.randint(0, 1000))
Others have pointed out the proper way to use random in a thread-safe way. But I feel it's important to point out that the code you wrote would not be thread-safe for anything.
def child(n, a):
    g_mutex = threading.Lock()
    g_mutex.acquire()
    random.seed(n)
    for i in xrange(100):
        a.append(random.randint(0, 1000))
    g_mutex.release()
Each thread is running this method independently. That means that each thread is making its own lock instance, acquiring it, doing work, and then releasing it. Unless every thread is attempting to acquire the same lock, there is nothing to ensure non-parallel execution. You need to assign a single value to g_mutex once, outside of your thread function.
Edit:
I just want to add that simply switching to a global lock is not guaranteed to do exactly what you said. The lock will ensure that only one thread is generating numbers at a time, but it does not guarantee which thread will start first.
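For reference, here is a minimal sketch (written for Python 3, my own example) of the shared-lock version described above: a single module-level lock that every thread acquires. It serialises seeding and number generation but, as noted in the edit, it does not control which thread gets the lock first.

import random
import threading

g_mutex = threading.Lock()   # one lock shared by every thread

def child(n, a):
    # the whole seed-and-generate sequence runs under the shared lock,
    # so only one thread touches the module-level random state at a time
    with g_mutex:
        random.seed(n)
        for _ in range(100):
            a.append(random.randint(0, 1000))

if __name__ == "__main__":
    a, b = [], []
    t1 = threading.Thread(target=child, args=(10, a))
    t2 = threading.Thread(target=child, args=(20, b))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print(a[:5], b[:5])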
