Is modifying a class variable in Python threadsafe?

I was reading this question (which you do not have to read, because I will copy what is there; I just wanted to show you my inspiration)...
So, if I have a class that counts how many instances were created:
class Foo(object):
    instance_count = 0
    def __init__(self):
        Foo.instance_count += 1
My question is, if I create Foo objects in multiple threads, is instance_count going to be correct? Are class variables safe to modify from multiple threads?

It's not threadsafe even on CPython. Try this to see for yourself:
import threading

class Foo(object):
    instance_count = 0

def inc_by(n):
    for i in xrange(n):
        Foo.instance_count += 1

threads = [threading.Thread(target=inc_by, args=(100000,)) for thread_nr in xrange(100)]
for thread in threads: thread.start()
for thread in threads: thread.join()
print(Foo.instance_count)  # Expected 10M for threadsafe ops, I get around 5M
The reason is that while INPLACE_ADD is atomic under the GIL, the attribute is still loaded and stored separately (see dis.dis(Foo.__init__)). Use a lock to serialize access to the class variable:
Foo.lock = threading.Lock()

def interlocked_inc(n):
    for i in xrange(n):
        with Foo.lock:
            Foo.instance_count += 1

threads = [threading.Thread(target=interlocked_inc, args=(100000,)) for thread_nr in xrange(100)]
for thread in threads: thread.start()
for thread in threads: thread.join()
print(Foo.instance_count)
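To see why the lock is needed, you can disassemble the increment yourself; a quick sketch (the exact opcode names vary between CPython versions, so treat them as illustrative):
import dis

class Foo(object):
    instance_count = 0
    def __init__(self):
        Foo.instance_count += 1

dis.dis(Foo.__init__)
# Look for the LOAD_ATTR ... (INPLACE_ADD / BINARY_OP) ... STORE_ATTR
# sequence: a thread switch between the load and the store is what
# makes increments disappear.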

No, it is not thread-safe. I faced a similar problem a few days ago, and chose to implement the lock with a decorator. The benefit is that it keeps the code readable:
import threading

def threadsafe_function(fn):
    """Decorator making sure that the decorated function is thread safe."""
    lock = threading.Lock()
    def new(*args, **kwargs):
        lock.acquire()
        try:
            r = fn(*args, **kwargs)
        except Exception as e:
            raise e
        finally:
            lock.release()
        return r
    return new

class X:
    var = 0

    @threadsafe_function
    def inc_var(self):
        X.var += 1
        return X.var
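A quick way to exercise the decorator above (the thread-spawning code is mine, assuming threadsafe_function and X are defined as in the answer):
import threading

def hammer(n):
    # call the locked method n times from this thread
    for _ in range(n):
        X().inc_var()

threads = [threading.Thread(target=hammer, args=(100000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(X.var)  # 1000000 with the decorator applied; typically less without it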

Following on from luc's answer, here's a simplified decorator using a with context manager, plus a little __main__ code to spin up the test. Try it with and without the @synchronized decorator to see the difference.
import concurrent.futures
import functools
import logging
import threading

def synchronized(function):
    lock = threading.Lock()

    @functools.wraps(function)
    def wrapper(self, *args, **kwargs):
        with lock:
            return function(self, *args, **kwargs)
    return wrapper

class Foo:
    counter = 0

    @synchronized
    def increase(self):
        Foo.counter += 1

if __name__ == "__main__":
    foo = Foo()
    print(f"Start value is {foo.counter}")
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        for index in range(200000):
            executor.submit(foo.increase)
    print(f"End value is {foo.counter}")
Without @synchronized
End value is 198124
End value is 196827
End value is 197968
With @synchronized
End value is 200000
End value is 200000
End value is 200000

Is modifying a class variable in python threadsafe?
It depends on the operation.
While the Python GIL (Global Interpreter Lock) only lets one thread run at a time, so each individual bytecode operation is effectively atomic, some operations are not atomic: they are implemented with more than one operation. For example, given that L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, and i, j are ints:
i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1
See What kinds of global value mutation are thread-safe?
Your example falls under the non-safe operations, as += is shorthand for i = i + 1.
Other posters have shown how to make the operation thread-safe. An alternative thread-safe way to implement your operation, without using a thread locking mechanism, would be to reference a different variable that is only set via an atomic operation. For example:
max_reached = False

# in one thread
count = 0
maximum = 100

count += 1
if count >= maximum:
    max_reached = True

# in another thread
while not max_reached:
    time.sleep(1)
# do something
This would be thread safe, as long as only one thread increments the count.
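A runnable version of the flag idea above (the names and thread setup are mine, not from the original post): one writer thread owns count and sets the flag once, while the reader thread only ever reads the boolean:
import threading
import time

max_reached = False

def writer(maximum=100):
    global max_reached
    count = 0
    while count < maximum:
        count += 1          # only this thread ever touches count
    max_reached = True      # a single store of a bool, set exactly once

def reader():
    while not max_reached:  # only reads the flag, never mutates it
        time.sleep(0.01)
    print("maximum reached, doing the follow-up work")

t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader)
t1.start(); t2.start()
t1.join(); t2.join()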

I would say it is thread-safe, at least on the CPython implementation. The GIL makes all your "threads" run one at a time, so they will not be able to mess with your instance count.

Related

How do reads and writes work with a manager in Python?

Sorry if this is a stupid question, but I'm having trouble understanding how managers work in python.
Let's say I have a manager that contains a dictionary to be shared across all processes. I want to have just one process writing to the dictionary at a time, while many others read from the dictionary.
Can this happen concurrently, with no synchronization primitives or will something break if read/writes happen at the same time?
What if I want to have multiple processes writing to the dictionary at once - is that allowed or will it break (I know it could cause race conditions, but could it error out)?
Additionally, does a manager process each read and write transaction in a queue like fashion, one at a time, or does it do them all at once?
https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes
It depends on how you write to the dictionary, i.e. whether the operation is atomic or not:
my_dict[some_key] = 9 # this is atomic
my_dict[some_key] += 1 # this is not atomic
So creating a new key or updating an existing key, as in the first line of code above, is an atomic operation. But the second line of code is really multiple operations, equivalent to:
temp = my_dict[some_key]
temp = temp + 1
my_dict[some_key] = temp
So if two processes were executing my_dict[some_key] += 1 in parallel, they could both read the same value via temp = my_dict[some_key], increment temp to the same new value, and the net effect would be that the dictionary value gets incremented only once. This can be demonstrated as follows:
from multiprocessing import Pool, Manager, Lock

def init_pool(the_lock):
    global lock
    lock = the_lock

def worker1(d):
    for _ in range(1000):
        with lock:
            d['x'] += 1

def worker2(d):
    for _ in range(1000):
        d['y'] += 1

if __name__ == '__main__':
    lock = Lock()
    with Manager() as manager, \
            Pool(4, initializer=init_pool, initargs=(lock,)) as pool:
        d = manager.dict()
        d['x'] = 0
        d['y'] = 0
        # worker1 will serialize with a lock
        pool.apply_async(worker1, args=(d,))
        pool.apply_async(worker1, args=(d,))
        # worker2 will not serialize with a lock:
        pool.apply_async(worker2, args=(d,))
        pool.apply_async(worker2, args=(d,))
        # wait for the 4 tasks to complete:
        pool.close()
        pool.join()
        print(d)
Prints:
{'x': 2000, 'y': 1162}
Update
As far as serialization goes:
The BaseManager creates a server using, by default, a socket on Linux and a named pipe on Windows. So essentially every method you execute against a managed dictionary, for example, is pretty much a remote method call implemented with message passing. This also means that the server could be running on a different computer altogether. But these method calls are not serialized; the object methods themselves must be thread-safe, because each method call is run in a new thread.
The following is an example of creating our own managed type and having the server listen for requests, possibly from a different computer (although in this example the client runs on the same computer). The client calls increment on the managed object 1000 times across two threads, but the method implementation is not done under a lock, so the resulting value of self.x when we are all done is not 1000. Also, when we retrieve the value of x twice concurrently via method get_x, we see that both invocations start more or less at the same time:
from multiprocessing.managers import BaseManager
from multiprocessing.pool import ThreadPool
from threading import Event, Thread, get_ident
import time

class MathManager(BaseManager):
    pass

class MathClass:
    def __init__(self, x=0):
        self.x = x

    def increment(self, y):
        temp = self.x
        time.sleep(.01)
        self.x = temp + 1

    def get_x(self):
        print(f'get_x started by thread {get_ident()}', time.time())
        time.sleep(2)
        return self.x

    def set_x(self, value):
        self.x = value

def server(event1, event2):
    MathManager.register('Math', MathClass)
    manager = MathManager(address=('localhost', 5000), authkey=b'abracadabra')
    manager.start()
    event1.set()  # show we are started
    print('Math server running; waiting for shutdown...')
    event2.wait()  # wait for shutdown
    print("Math server shutting down.")
    manager.shutdown()

def client():
    MathManager.register('Math')
    manager = MathManager(address=('localhost', 5000), authkey=b'abracadabra')
    manager.connect()
    math = manager.Math()
    pool = ThreadPool(2)
    pool.map(math.increment, [1] * 1000)
    results = [pool.apply_async(math.get_x) for _ in range(2)]
    for result in results:
        print(result.get())

def main():
    event1 = Event()
    event2 = Event()
    t = Thread(target=server, args=(event1, event2))
    t.start()
    event1.wait()  # server started
    client()  # now we can run the client
    event2.set()
    t.join()

# Required for Windows:
if __name__ == '__main__':
    main()
Prints:
Math server running; waiting for shutdown...
get_x started by thread 43052 1629375415.2502146
get_x started by thread 71260 1629375415.2502146
502
502
Math server shutting down.
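Since each call against the managed object runs in its own server-side thread, one way to fix the lost updates is to make the managed class itself thread-safe. A minimal sketch of that idea (my own variation on MathClass above, not code from the original post), guarding the read-modify-write with an instance-level lock:
import threading
import time

class SafeMathClass:
    def __init__(self, x=0):
        self.x = x
        self._lock = threading.Lock()  # one lock per managed instance

    def increment(self, y):
        # hold the lock across the whole read-modify-write so concurrent
        # server-side threads can no longer interleave between the steps
        # (the sleep and the +1 mirror the original example)
        with self._lock:
            temp = self.x
            time.sleep(.01)
            self.x = temp + 1
Registering SafeMathClass with the manager instead of MathClass should then let the two client threads produce 1000 as expected.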

Apply a method to a list of objects in parallel using multi-processing

I have created a class with a number of methods. One of the methods is very time consuming, my_process, and I'd like to do that method in parallel. I came across Python Multiprocessing - apply class method to a list of objects but I'm not sure how to apply it to my problem, and what effect it will have on the other methods of my class.
class MyClass():
    def __init__(self, input):
        self.input = input
        self.result = int

    def my_process(self, multiply_by, add_to):
        self.result = self.input * multiply_by
        self._my_sub_process(add_to)
        return self.result

    def _my_sub_process(self, add_to):
        self.result += add_to

list_of_numbers = range(0, 5)
list_of_objects = [MyClass(i) for i in list_of_numbers]
list_of_results = [obj.my_process(100, 1) for obj in list_of_objects]  # multi-process this for-loop

print list_of_numbers
print list_of_results
[0, 1, 2, 3, 4]
[1, 101, 201, 301, 401]
I'm going to go against the grain here, and suggest sticking to the simplest thing that could possibly work ;-) That is, Pool.map()-like functions are ideal for this, but are restricted to passing a single argument. Rather than make heroic efforts to worm around that, simply write a helper function that only needs a single argument: a tuple. Then it's all easy and clear.
Here's a complete program taking that approach, which prints what you want under Python 2, and regardless of OS:
class MyClass():
    def __init__(self, input):
        self.input = input
        self.result = int

    def my_process(self, multiply_by, add_to):
        self.result = self.input * multiply_by
        self._my_sub_process(add_to)
        return self.result

    def _my_sub_process(self, add_to):
        self.result += add_to

import multiprocessing as mp
NUM_CORE = 4  # set to the number of cores you want to use

def worker(arg):
    obj, m, a = arg
    return obj.my_process(m, a)

if __name__ == "__main__":
    list_of_numbers = range(0, 5)
    list_of_objects = [MyClass(i) for i in list_of_numbers]

    pool = mp.Pool(NUM_CORE)
    list_of_results = pool.map(worker, ((obj, 100, 1) for obj in list_of_objects))
    pool.close()
    pool.join()

    print list_of_numbers
    print list_of_results
A bit of magic
I should note there are many advantages to taking the very simple approach I suggest. Beyond that it "just works" on Pythons 2 and 3, requires no changes to your classes, and is easy to understand, it also plays nice with all of the Pool methods.
However, if you have multiple methods you want to run in parallel, it can get a bit annoying to write a tiny worker function for each. So here's a tiny bit of "magic" to worm around that. Change worker() like so:
def worker(arg):
    obj, methname = arg[:2]
    return getattr(obj, methname)(*arg[2:])
Now a single worker function suffices for any number of methods, with any number of arguments. In your specific case, just change one line to match:
list_of_results = pool.map(worker, ((obj, "my_process", 100, 1) for obj in list_of_objects))
More-or-less obvious generalizations can also cater to methods with keyword arguments. But, in real life, I usually stick to the original suggestion. At some point catering to generalizations does more harm than good. Then again, I like obvious things ;-)
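For what it's worth, on Python 3 the single-argument restriction can also be sidestepped with Pool.starmap, which unpacks each argument tuple itself. A minimal sketch, assuming the MyClass definition above is available at module level:
import multiprocessing as mp

def worker(obj, multiply_by, add_to):
    # starmap unpacks (obj, 100, 1) into the three parameters for us
    return obj.my_process(multiply_by, add_to)

if __name__ == "__main__":
    list_of_objects = [MyClass(i) for i in range(5)]
    with mp.Pool(4) as pool:
        results = pool.starmap(worker, [(obj, 100, 1) for obj in list_of_objects])
    print(results)  # [1, 101, 201, 301, 401]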
If your class is not "huge", I think a process-oriented approach is better.
Pool from multiprocessing is suggested.
This is the tutorial -> https://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers
Then separate add_to from my_process, since they are quick and you can wait until the end of the last process.
from multiprocessing import Pool

def my_process(input, multiply_by):
    return xxxx  # placeholder for the real work

def add_to(result, a_list):
    xxx  # placeholder

p = Pool(5)
res = []
for i in range(10):
    res.append(p.apply_async(my_process, (i, 5)))
p.close()
p.join()  # wait for the end of the last process
for i in range(10):
    print res[i].get()
Generally the easiest way to run the same calculation in parallel is the map method of a multiprocessing.Pool (or the as_completed function from concurrent.futures in Python 3).
However, the map method applies a function that only takes one argument to an iterable of data, using multiple processes.
So this function cannot be an ordinary method, because that requires at least two arguments; it must also include self! It could be a staticmethod, however - see the sketch below. See also this answer for a more in-depth explanation.
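A small sketch of the staticmethod idea (my own example, not from the linked answer): since a staticmethod has no implicit self, Pool.map can call it with a single tuple that carries the object and its arguments:
import multiprocessing as mp

class MyClass:
    def __init__(self, input):
        self.input = input
        self.result = None

    def my_process(self, multiply_by, add_to):
        self.result = self.input * multiply_by + add_to
        return self.result

    @staticmethod
    def process_packed(args):
        # unpack the single map() argument into object + parameters
        obj, multiply_by, add_to = args
        return obj.my_process(multiply_by, add_to)

if __name__ == "__main__":
    objects = [MyClass(i) for i in range(5)]
    with mp.Pool(4) as pool:
        results = pool.map(MyClass.process_packed, [(o, 100, 1) for o in objects])
    print(results)  # [1, 101, 201, 301, 401]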
Based on the answer of Python Multiprocessing - apply class method to a list of objects and your code:
add the MyClass object into a simulation object:
import multiprocessing
import os
import sys

class simulation(multiprocessing.Process):
    def __init__(self, id, worker, *args, **kwargs):
        # must call this before anything else
        multiprocessing.Process.__init__(self)
        self.id = id
        self.worker = worker
        self.args = args
        self.kwargs = kwargs
        sys.stdout.write('[%d] created\n' % (self.id))
run what you want in the run function:
    def run(self):
        sys.stdout.write('[%d] running ... process id: %s\n' % (self.id, os.getpid()))
        self.worker.my_process(*self.args, **self.kwargs)
        sys.stdout.write('[%d] completed\n' % (self.id))
Try this:
list_of_numbers = range(0, 5)
list_of_objects = [MyClass(i) for i in list_of_numbers]
list_of_sim = [simulation(id=k, worker=obj, multiply_by=100*k, add_to=10*k)
               for k, obj in enumerate(list_of_objects)]
for sim in list_of_sim:
    sim.start()
If you don't absolutely need to stick with the multiprocessing module, this can easily be achieved using the concurrent.futures library.
Here's the example code:
from concurrent.futures import ThreadPoolExecutor, wait

MAX_WORKERS = 20

class MyClass():
    def __init__(self, input):
        self.input = input
        self.result = int

    def my_process(self, multiply_by, add_to):
        self.result = self.input * multiply_by
        self._my_sub_process(add_to)
        return self.result

    def _my_sub_process(self, add_to):
        self.result += add_to

def on_finish(future):
    result = future.result()  # do stuff with your result

list_of_numbers = range(0, 5)
list_of_objects = [MyClass(i) for i in list_of_numbers]

with ThreadPoolExecutor(MAX_WORKERS) as executor:
    for obj in list_of_objects:
        executor.submit(obj.my_process, 100, 1).add_done_callback(on_finish)
Here the executor returns a future for every task it submits. Keep in mind that if you use add_done_callback(), the callback is invoked as soon as the task finishes, which can tie up the thread handling it; if you really want to collect the results yourself, you should wait on the future objects separately. Here's the code snippet for that:
futures = []
with ThreadPoolExecutor(MAX_WORKERS) as executor:
    for obj in list_of_objects:
        futures.append(executor.submit(obj.my_process, 100, 1))

done, not_done = wait(futures)
for future in done:
    # work with your result here
    if future.exception() is None:
        print(future.result())
    else:
        print(future.exception())
hope this helps.
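One more hedged note: because of the GIL, the ThreadPoolExecutor above mainly helps when my_process is I/O-bound. For CPU-bound work, a ProcessPoolExecutor is the variant I would sketch instead (assuming MyClass is defined at module level so it can be pickled; changes made to self.result in the workers do not propagate back, only the return values do):
from concurrent.futures import ProcessPoolExecutor

if __name__ == "__main__":
    list_of_objects = [MyClass(i) for i in range(5)]
    with ProcessPoolExecutor(max_workers=4) as executor:
        # each bound-method call runs in a separate worker process
        futures = [executor.submit(obj.my_process, 100, 1) for obj in list_of_objects]
        results = [f.result() for f in futures]
    print(results)  # [1, 101, 201, 301, 401]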

While testing multiprocessing and threading with Python, I met an odd situation

I am using a process pool (with 3 processes). In every process, I have created some threads using a thread class to speed up handling something.
At first, everything was OK. But when I wanted to change a variable from a thread, I met an odd situation.
For testing, or to find out what happens, I set a global variable COUNT. Honestly, I don't know whether this is safe or not. I just want to see whether, by using multiprocessing and threading, I can change COUNT or not.
#!/usr/bin/env python
# encoding: utf-8

import os
import threading
from Queue import Queue, Empty
from multiprocessing import Process, Pool

# global variables
max_threads = 11
Stock_queue = Queue()
COUNT = 0

class WorkManager:
    def __init__(self, work_queue_size=1, thread_pool_size=1):
        self.work_queue = Queue()
        self.thread_pool = []  # initiate, no threads yet
        self.work_queue_size = work_queue_size
        self.thread_pool_size = thread_pool_size
        self.__init_work_queue()
        self.__init_thread_pool()

    def __init_work_queue(self):
        for i in xrange(self.work_queue_size):
            self.work_queue.put((func_test, Stock_queue.get()))

    def __init_thread_pool(self):
        for i in xrange(self.thread_pool_size):
            self.thread_pool.append(WorkThread(self.work_queue))

    def finish_all_threads(self):
        for i in xrange(self.thread_pool_size):
            if self.thread_pool[i].is_alive():
                self.thread_pool[i].join()

class WorkThread(threading.Thread):
    def __init__(self, work_queue):
        threading.Thread.__init__(self)
        self.work_queue = work_queue
        self.start()

    def run(self):
        while self.work_queue.qsize() > 0:
            try:
                func, args = self.work_queue.get(block=False)
                func(args)
            except Empty:
                print 'queue is empty....'

def handle(process_name):
    print process_name, 'is running...'
    work_manager = WorkManager(Stock_queue.qsize()/3, max_threads)
    work_manager.finish_all_threads()

def func_test(num):
    # use a global variable to test what happens
    global COUNT
    COUNT += num

def prepare():
    # prepare the test queue, store 50 numbers in Stock_queue
    for i in xrange(50):
        Stock_queue.put(i)

def main():
    prepare()
    pools = Pool()
    # start 3 processes
    for i in xrange(3):
        pools.apply_async(handle, args=('process_'+str(i),))
    pools.close()
    pools.join()
    global COUNT
    print 'COUNT: ', COUNT

if __name__ == '__main__':
    os.system('printf "\033c"')
    main()
Now, finally, the result of COUNT is just 0. I am unable to understand what's happening here.
You print the COUNT variable in the parent process. Variables don't sync across processes because processes don't share memory; that means the variable stays 0 in the parent process and is increased only in the subprocesses.
In the case of threading, threads do share memory, which means they share the variable COUNT, so within each subprocess COUNT should end up greater than 0. But again, those threads live inside the subprocesses, and when they change the variable, it does not update in the other processes or in the parent.
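If the goal is a single counter that all processes can see, one option (a minimal sketch of my own, not from the question) is multiprocessing.Value, which lives in shared memory and carries its own lock:
from multiprocessing import Process, Value

def worker(counter, n):
    for _ in range(n):
        # get_lock() serialises the non-atomic read-modify-write
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = Value('i', 0)  # shared 32-bit int, starts at 0
    procs = [Process(target=worker, args=(counter, 1000)) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 3000
Each increment is done under counter.get_lock() because += on the Value is itself a non-atomic read-modify-write.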

Python, counter atomic increment

How can I translate the following code from Java to Python?
AtomicInteger cont = new AtomicInteger(0);
int value = cont.getAndIncrement();
Most likely with a threading.Lock around any usage of that value. There's no atomic modification in Python unless you use PyPy (if you do, have a look at __pypy__.thread.atomic in the STM version).
itertools.count returns an iterator which will perform the equivalent of getAndIncrement() on each iteration.
Example:
import itertools
cont = itertools.count()
value = next(cont)
This will perform the same function, although it's not lockless as the name 'AtomicInteger' would imply.
Note that other methods are also not strictly lockless -- they rely on the GIL and are not portable between Python interpreters.
import threading

class AtomicInteger():
    def __init__(self, value=0):
        self._value = int(value)
        self._lock = threading.Lock()

    def inc(self, d=1):
        with self._lock:
            self._value += int(d)
            return self._value

    def dec(self, d=1):
        return self.inc(-d)

    @property
    def value(self):
        with self._lock:
            return self._value

    @value.setter
    def value(self, v):
        with self._lock:
            self._value = int(v)
            return self._value
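A quick usage sketch for the class above (the thread-spawning code is mine): hammer the counter from several threads and check that no increments are lost:
import threading

counter = AtomicInteger()

def bump(n):
    for _ in range(n):
        counter.inc()

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 80000 every run, since every access holds the lock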
Using the atomics library, the same code would be written in Python as:
import atomics
a = atomics.atomic(width=4, atype=atomics.INT)
value = a.fetch_inc()
This method is strictly lock-free.
Note: I am the author of this library
8 years and still no full example code for the threading.Lock option without using any external library... Here it comes:
import threading

i = 0
lock = threading.Lock()

# Worker thread for increasing the count
class CounterThread(threading.Thread):
    def __init__(self):
        super(CounterThread, self).__init__()

    def run(self):
        global i
        lock.acquire()
        i = i + 1
        lock.release()

threads = []
for a in range(0, 10000):
    th = CounterThread()
    th.start()
    threads.append(th)

for thread in threads:
    thread.join()

print(i)
Python atomic for shared data types.
https://sharedatomic.top
The module can be used for atomic operations under multiple-process and multiple-thread conditions. High-concurrency, high-performance Python!
Atomic API example with multiprocessing and multiple threads.
You need the following steps to utilize the module:
Create the function used by the child processes, referring to UIntAPIs, IntAPIs, BytearrayAPIs, StringAPIs, SetAPIs, ListAPIs; in each process, you can create multiple threads.
from threading import Thread

def process_run(a):
    def subthread_run(a):
        a.array_sub_and_fetch(b'\x0F')

    threadlist = []
    for t in range(5000):
        threadlist.append(Thread(target=subthread_run, args=(a,)))
    for t in range(5000):
        threadlist[t].start()
    for t in range(5000):
        threadlist[t].join()
Create the shared bytearray:
a = atomic_bytearray(b'ab', length=7, paddingdirection='r', paddingbytes=b'012', mode='m')
Start processes/threads to utilize the shared bytearray:
from multiprocessing import Process

processlist = []
for p in range(2):
    processlist.append(Process(target=process_run, args=(a,)))
for p in range(2):
    processlist[p].start()
for p in range(2):
    processlist[p].join()

assert a.value == int.to_bytes(27411031864108609, length=8, byteorder='big')

Equivalent of setInterval in python

I have recently posted a question about how to postpone execution of a function in Python (kind of equivalent to Javascript setTimeout) and it turns out to be a simple task using threading.Timer (well, simple as long as the function does not share state with other code, but that would create problems in any event-driven environment).
Now I am trying to do better and emulate setInterval. For those who are not familiar with Javascript, setInterval allows you to repeat a call to a function every x seconds, without blocking the execution of other code. I have created this example decorator:
import time, threading

def setInterval(interval, times=-1):
    # This will be the actual decorator,
    # with fixed interval and times parameter
    def outer_wrap(function):
        # This will be the function to be
        # called
        def wrap(*args, **kwargs):
            # This is another function to be executed
            # in a different thread to simulate setInterval
            def inner_wrap():
                i = 0
                while i != times:
                    time.sleep(interval)
                    function(*args, **kwargs)
                    i += 1
            threading.Timer(0, inner_wrap).start()
        return wrap
    return outer_wrap
to be used as follows
@setInterval(1, 3)
def foo(a):
    print(a)

foo('bar')
# Will print 'bar' 3 times with 1 second delays
and it seems to me it is working fine. My problem is that
it seems overly complicated, and I fear I may have missed a simpler/better mechanism
the decorator can be called without the second parameter, in which case it will go on forever. When I say forever, I mean forever - even calling sys.exit() from the main thread will not stop it, nor will hitting Ctrl+C. The only way to stop it is to kill the Python process from the outside. I would like to be able to send a signal from the main thread that would stop the callback. But I am a beginner with threads - how can I communicate between them?
EDIT In case anyone wonders, this is the final version of the decorator, thanks to the help of jd
import threading

def setInterval(interval, times=-1):
    # This will be the actual decorator,
    # with fixed interval and times parameter
    def outer_wrap(function):
        # This will be the function to be
        # called
        def wrap(*args, **kwargs):
            stop = threading.Event()

            # This is another function to be executed
            # in a different thread to simulate setInterval
            def inner_wrap():
                i = 0
                while i != times and not stop.isSet():
                    stop.wait(interval)
                    function(*args, **kwargs)
                    i += 1

            t = threading.Timer(0, inner_wrap)
            t.daemon = True
            t.start()
            return stop
        return wrap
    return outer_wrap
It can be used with a fixed amount of repetitions as above
@setInterval(1, 3)
def foo(a):
    print(a)

foo('bar')
# Will print 'bar' 3 times with 1 second delays
or can be left to run until it receives a stop signal
import time

@setInterval(1)
def foo(a):
    print(a)

stopper = foo('bar')
time.sleep(5)
stopper.set()
# It will stop here, after printing 'bar' 5 times.
Your solution looks fine to me.
There are several ways to communicate with threads. To order a thread to stop, you can use threading.Event(), which has a wait() method that you can use instead of time.sleep().
stop_event = threading.Event()
...
stop_event.wait(1.)
if stop_event.isSet():
    return
...
For your thread to exit when the program is terminated, set its daemon attribute to True before calling start(). This applies to Timer() objects as well because they subclass threading.Thread. See http://docs.python.org/library/threading.html#threading.Thread.daemon
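Putting both ideas together, here is a minimal sketch of my own (not the final decorator from the question): a daemon worker thread that repeats until the Event is set, using wait() as the sleep:
import threading

def start_interval(function, interval, *args, **kwargs):
    stop_event = threading.Event()

    def loop():
        # wait() doubles as the sleep and returns True once set() is called
        while not stop_event.wait(interval):
            function(*args, **kwargs)

    t = threading.Thread(target=loop)
    t.daemon = True   # won't keep the interpreter alive on exit
    t.start()
    return stop_event

stopper = start_interval(print, 1.0, "tick")
# ... later ...
stopper.set()   # stops the repetition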
Maybe this is the easiest setInterval equivalent in Python:
import threading

def set_interval(func, sec):
    def func_wrapper():
        set_interval(func, sec)
        func()
    t = threading.Timer(sec, func_wrapper)
    t.start()
    return t
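One caveat worth hedging: the Timer returned above is only the first timer, so cancel() on it works only before the first call fires; every later call is scheduled on a new Timer you no longer hold. A small variant of my own that stays stoppable at any point by sharing an Event:
import threading

def set_interval_stoppable(func, sec):
    stopped = threading.Event()

    def func_wrapper():
        if not stopped.is_set():
            schedule_next()   # keep the chain going
            func()

    def schedule_next():
        t = threading.Timer(sec, func_wrapper)
        t.daemon = True       # do not keep the interpreter alive
        t.start()

    schedule_next()
    return stopped            # call .set() on this to stop the repetition

stopper = set_interval_stoppable(lambda: print("tick"), 1)
# ... later: stopper.set()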
Maybe a bit simpler is to use recursive calls to Timer:
from threading import Timer
import atexit

class Repeat(object):
    count = 0

    @staticmethod
    def repeat(rep, delay, func):
        "repeat func rep times with a delay given in seconds"
        if Repeat.count < rep:
            # call func, you might want to add args here
            func()
            Repeat.count += 1
            # set up a timer which calls repeat recursively
            # again, if you need args for func, you have to add them here
            timer = Timer(delay, Repeat.repeat, (rep, delay, func))
            # register timer.cancel to stop the timer when you exit the interpreter
            atexit.register(timer.cancel)
            timer.start()

def foo():
    print "bar"

Repeat.repeat(3, 2, foo)
atexit allows the repetition to be stopped with Ctrl-C.
This class implements an interval:
import threading

class ali:
    def __init__(self):
        self.sure = True

    def aliv(self, func, san):
        print "ali naber"
        self.setInterVal(func, san)

    def setInterVal(self, func, san):
        # runs at the desired interval of seconds or minutes
        def func_Calistir():
            func(func, san)  # the function to be called
        self.t = threading.Timer(san, func_Calistir)
        self.t.start()
        return self.t

a = ali()
a.setInterVal(a.aliv, 5)
