Understanding multi-threading and locks in Python (concept and example)

I did some research on multi-threading for a programming project that will use it (first-timer here...). I would appreciate it if you deemed my statements below correct or, rather, commented on the ones that are wrong or need correction.
1. A lock is an object that can be passed to functions, methods, etc. by reference. A function (in this example) can then use that lock object reference to operate safely on data (here, a variable). It does this by acquiring the lock, modifying the variable and then releasing the lock.
2. A thread can be created to target a function, which may obtain a reference to a lock (to then achieve what is stated above).
3. A lock does not protect a specific variable, object, etc.
4. A lock does not protect or do anything unless it is acquired (and released).
5. Thus, it is the responsibility of the programmer to use the lock in order to achieve the desired protection.
6. If a lock is acquired inside a function executed by thread A, this has no immediate influence on any other running thread B. Not even if the functions targeted by threads A and B hold a reference to the same lock object.
7. Only if the function targeted by thread B wants to acquire the same lock (i.e. via the same referenced lock object) that was already acquired by the function targeted by thread A does the lock influence both threads: thread B will pause further execution until the function targeted by thread A releases the lock again.
8. Thus, a locked lock only ever pauses the execution of a thread if that thread's targeted function wants (and waits) to acquire the very same lock itself. So by acquiring the lock, thread A can only prevent thread B from acquiring the same lock, nothing more, nothing less.
If I want to use a lock to prevent race conditions when setting a variable, I (as the programmer) need to:
9. pass a lock to all functions targeted by threads that will want to set the variable, and
10. acquire the lock in every function, every time, before I set the variable (and release it afterwards). (*)
11. If I create even only one thread targeting a function without providing it a reference to the lock object and let it set the variable, or
12. if I set the variable via a thread whose targeted function has the lock object but doesn't acquire it prior to the operation, I will have failed to implement thread-safe setting of the variable.
(*) The lock should be held for as long as the variable must not be accessed by other threads. Right now, I like to compare that to a database transaction... I lock the database (~ acquire the lock) until my set of instructions is completed, then I commit (~ release the lock).
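To illustrate the transaction analogy, here is a minimal sketch (the names and the deposit operation are my own): holding the lock across the whole read-modify-write makes the two statements behave like one committed transaction.

import threading

balance = 0
balance_lock = threading.Lock()

def deposit(amount: int) -> None:
    global balance
    with balance_lock:               # ~ begin transaction
        current = balance            # read...
        balance = current + amount   # ...and write as one atomic unit
    # lock released on block exit    # ~ commit

deposit(100)  # the two steps inside cannot interleave with another deposit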
Example: If I wanted to create a class whose member _value should be set in a thread-safe fashion, I would implement one of these two versions:
import threading

class Version1:
    def __init__(self):
        self._value: int = 0
        self._lock: threading.Lock = threading.Lock()

    def getValue(self) -> int:
        """Getting won't be protected in this example."""
        return self._value

    def setValue(self, val: int) -> None:
        """This will be made thread-safe by member lock."""
        with self._lock:
            self._value = val

v1 = Version1()
t1_1 = threading.Thread(target=v1.setValue, args=(1,))  # args must be a tuple
t1_2 = threading.Thread(target=v1.setValue, args=(2,))
t1_1.start()  # note: start() returns None, so don't assign its result
t1_2.start()
class Version2:
    def __init__(self):
        self._value: int = 0

    def getValue(self) -> int:
        """Getting won't be protected in this example."""
        return self._value

    def setValue(self, val: int, lock: threading.Lock) -> None:
        """This will be made thread-safe by injected lock."""
        with lock:  # fixed: Version2 has no self._lock; use the injected lock
            self._value = val

v2 = Version2()
l = threading.Lock()
t2_1 = threading.Thread(target=v2.setValue, args=(1, l))
t2_2 = threading.Thread(target=v2.setValue, args=(2, l))
t2_1.start()
t2_2.start()
13. In Version1, I, as the class provider, can guarantee that setting _value is always thread-safe...
14. ...because in Version2, the user of my class might pass two different lock objects to the two spawned threads and thus render the lock protection useless.
15. If I want to give the user of my class the freedom to include the setting of _value in a larger collection of steps that should be executed in a thread-safe manner, I could inject a Lock reference into Version1's __init__ function and assign it to the _lock member. Thus, the thread-safe operation of the class would be guaranteed while still allowing the user of the class to use "her own" lock for that purpose.
A score from 0-15 will now rate how well I have (mis)understood locks... :-D
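To make statement 15 concrete, here is a minimal sketch of Version1 with an injectable lock (the constructor parameter is my own addition). Note that an ordinary threading.Lock is not reentrant, so a caller already holding the lock must not call setValue; using an RLock avoids that particular trap:

import threading

class Version1:
    def __init__(self, lock=None):
        # use the caller's lock if one is injected, otherwise create our own
        self._lock = lock if lock is not None else threading.RLock()
        self._value: int = 0

    def setValue(self, val: int) -> None:
        with self._lock:
            self._value = val

shared = threading.RLock()
v = Version1(shared)
with shared:        # caller-level critical section spanning several steps
    v.setValue(3)   # safe here only because RLock allows re-acquisition
    # ... more steps that must be atomic together with the assignment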

1. It's also quite common to use global variables for locks. It depends on what the lock is protecting.
2. True, although somewhat meaningless. Any function can use a lock, not just a function that's the target of a thread.
3. If you mean there's no direct link between a lock and the data it protects, that's true. But you can define a data structure that contains both a value that needs protecting and a reference to its lock.
4. True. Although, as I say in 3, you can define a data structure that packages the data and the lock. You could make this a class and have the class methods automatically acquire the lock as needed.
5. Correct. But see 4 for how you can automate this.
6. Correct.
7. Correct.
8. Correct.
9. Correct if it's not a global lock.
10. Partially correct. You should also often acquire the lock if you're merely reading the variable. If reading the object is not atomic (e.g. it's a list and you're reading multiple elements, or you read the same scalar variable multiple times and expect it to be stable), you need to prevent another thread from modifying it while you're reading. (See the sketch after this list.)
11. Correct.
12. Correct.
13. Correct. This is an example of what I described above in 3 and 4.
14. Correct. Which is why the design in 13 is often better.
15. This is tricky, because the granularity of the locking needs to reflect all the objects that need to be protected. Your class only protects the assignment of that one variable -- it will release the lock before all the other steps associated with the caller-provided lock have been completed.
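To illustrate 3, 4 and 10 together, here is a minimal sketch (the class and method names are mine) of a data structure that packages a value with its lock and also guards reads:

import threading

class GuardedValue:
    """Packages a value together with the lock that protects it."""
    def __init__(self, initial=0):
        self._value = initial
        self._lock = threading.Lock()

    def set(self, val):
        with self._lock:
            self._value = val

    def get(self):
        # guard the read too: prevents observing a value mid-update
        with self._lock:
            return self._value

    def add(self, delta):
        # a read-modify-write must happen under one lock acquisition
        with self._lock:
            self._value += delta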

Related

Can threading.Event be used to protect variable access in Python?

I have a relatively simple scenario in some Python code where I have two threads, one of which sets a value and the other of which waits for it to be set. My instinct was to reach for threading.Condition to implement this, but I got to wondering whether I could simply use threading.Event instead.
So, I have something like this:
value = None
readyToRead = threading.Event()

def set():
    # executes in thread 1
    global value
    value = computeValue()
    readyToRead.set()

def get():
    # executes in thread 2
    readyToRead.wait()
    useValue(value)
I suppose I am uneasy because access to value is not actually mutex-protected, and I think that in some languages, at least, it might not be safe simply to rely on the ordering implied by the statements in the code.
Is this a valid use of Event in Python?
Yes, this is a valid use case for Event.
Access to value is thread-safe here: set() is only called on the event after value has been written, and wait() only returns after set() has been called, so the reader is guaranteed to see the write.
If you increase the number of threads, you have to wait in all of them; if that is the case, you can also use a semaphore.
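A runnable version of the pattern, with computeValue/useValue replaced by stand-ins of my own:

import threading

value = None
readyToRead = threading.Event()

def setter():
    global value
    value = 42           # stand-in for computeValue()
    readyToRead.set()    # signal: the value is now ready

def getter():
    readyToRead.wait()   # blocks until setter() has called set()
    print(value)         # stand-in for useValue(value); always sees 42

t2 = threading.Thread(target=getter)
t1 = threading.Thread(target=setter)
t2.start()
t1.start()
t1.join()
t2.join()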

Unable to modify global variable in Python [duplicate]

I am using the Pool class from Python's multiprocessing library to write a program that will run on an HPC cluster.
Here is an abstraction of what I am trying to do:
def myFunction(x):
    # myObject is a global variable in this case
    return myFunction2(x, myObject)

def myFunction2(x, myObject):
    myObject.modify()  # here I am calling some method that changes myObject
    return myObject.f(x)

poolVar = Pool()
argsArray = [ARGS ARRAY GOES HERE]
output = poolVar.map(myFunction, argsArray)
The function f(x) is contained in a *.so file, i.e., it is calling a C function.
The problem I am having is that the value of the output variable is different each time I run my program (even though the function myObject.f() is a deterministic function). (If I only have one process then the output variable is the same each time I run the program.)
I have tried creating the object rather than storing it as a global variable:
def myFunction(x):
    myObject = createObject()
    return myFunction2(x, myObject)
However, in my program the object creation is expensive, so it is much easier to create myObject once and then modify it on each call to myFunction2(). Thus, I would like not to have to create the object each time.
Do you have any tips? I am very new to parallel programming so I could be going about this all wrong. I decided to use the Pool class since I wanted to start with something simple. But I am willing to try a better way of doing it.
I am using the Pool class from python's multiprocessing library to do
some shared memory processing on an HPC cluster.
Processes are not threads! You cannot simply replace Thread with Process and expect all to work the same. Processes do not share memory, which means that the global variables are copied, hence their value in the original process doesn't change.
If you want to use shared memory between processes then you must use multiprocessing's data types, such as Value and Array, or use a Manager to create shared lists etc.
In particular, you might be interested in the Manager.register method, which allows the Manager to create shared custom objects (although they must be picklable).
However, I'm not sure whether this will improve performance, since any communication between processes requires pickling, and pickling usually takes more time than simply instantiating the object.
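For instance, a minimal sketch of sharing a counter through multiprocessing.Value (the worker function and names are mine; a Value carries its own lock for the read-modify-write):

from multiprocessing import Pool, Value

def init(shared):
    global counter
    counter = shared  # make the shared Value visible in each worker

def worker(x):
    with counter.get_lock():  # guard the read-modify-write
        counter.value += 1
    return x * x

if __name__ == '__main__':
    counter = Value('i', 0)  # 'i' = C int, initial value 0
    with Pool(4, initializer=init, initargs=(counter,)) as pool:
        print(pool.map(worker, range(10)))
    print(counter.value)  # 10: all workers updated the same shared int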
Note that you can do some initialization of the worker processes by passing the initializer and initargs arguments when creating the Pool.
For example, in its simplest form, to create a global variable in the worker process:
def initializer():
    global data
    data = createObject()

Used as:

pool = Pool(4, initializer, ())
Then the worker functions can use the data global variable without worries.
Style note: Never use the name of a built-in for your variables/modules. In your case object is a built-in. Otherwise you'll end up with unexpected errors which may be obscure and hard to track down.
The global keyword works within a single module only. Another way is to set the value dynamically in the pool process initializer; somefile.py can just be an empty file:

import importlib

def pool_process_init():
    m = importlib.import_module("somefile")  # module name, without the .py suffix
    m.my_global_var = "some value"

pool = Pool(4, initializer=pool_process_init)

How to use the var in a task:

def my_coroutine():
    m = importlib.import_module("somefile")
    print(m.my_global_var)

Python destructor basing on try/finally + yield?

I've been testing a dirty hack inspired by http://docs.python.org/2/library/contextlib.html.
The main idea is to bring the try/finally idea to the class level and get a reliable and simple class destructor.
class Foo():
    def __init__(self):
        self.__res_mgr__ = self.__acquire_resources__()
        self.__res_mgr__.next()

    def __acquire_resources__(self):
        try:
            # Acquire some resources here
            print "Initialize"
            self.f = 1
            yield
        finally:
            # Release the resources here
            print "Releasing Resources"
            self.f = 0

f = Foo()
print "testing resources"
print f.f
But it always gives me:
Initialize
testing resources
1
and never "Releasing Resources". I'm basing my hope on:
As of Python version 2.5, the yield statement is now allowed in the
try clause of a try ... finally construct. If the generator is not
resumed before it is finalized (by reaching a zero reference count or
by being garbage collected), the generator-iterator’s close() method
will be called, allowing any pending finally clauses to execute. (source)
But it seems that when the class member is garbage collected together with the class instance, their ref counts don't decrease, so as a result the generator's close(), and thus the finally clause, are never called. As for the second part of the quote,
"or by being garbage collected"
I just don't know why it's not true. Any chance to make this utopia work? :)
BTW, this works at module level:
def f():
    try:
        print "ack"
        yield
    finally:
        print "release"

a = f()
a.next()
print "testing"
Output will be as I expect:
ack
testing
release
NOTE: In my task I'm not able to use a with statement, because I'm releasing the resource inside the thread's end_callback (which will be outside of any with block). So I wanted to get a reliable destructor for cases where the callback won't be called for some reason.
The problem you are having is caused by a reference cycle and an implicit __del__ defined on your generator (it's so implicit, CPython doesn't actually show __del__ when you introspect, because only the C level tp_del exists, no Python-visible __del__ is created). Basically, when a generator has a yield inside:
A try block, or equivalently
A with block
it has an implicit __del__-like implementation. On Python 3.3 and earlier, if a reference cycle contains an object whose class implements __del__ (technically, has tp_del in CPython), unless the cycle is manually broken, the cyclic garbage collector cannot clean it up, and just sticks it in gc.garbage (import gc to gain access), because it doesn't know which objects (if any) must be collected first to clean up "nicely".
Because your class's __acquire_resources__(self) contains a reference to the instance's self, you form a reference cycle:
self -> self.__res_mgr__ (generator object) -> generator frame (whose locals include) -> self
Because of this reference cycle, and the fact that the generator has a try/finally in it (creating tp_del equivalent to __del__), the cycle is uncollectable, and your finally block never gets executed unless you manually advance self.__res_mgr__ (which defeats the whole purpose).
Your experiment happens to display this problem automatically because the reference cycle is implicit/automatic, but any accidental reference cycle where an object in the cycle has a class with __del__ will trigger the same problem, so even if you just did:
class Foo():
    def __init__(self):
        # Acquire some resources here
        print "Initialize"
        self.f = 1

    def __del__(self):
        # Release the resources here
        print "Releasing Resources"
        self.f = 0
if the "resources" involved could conceivably lead to a reference cycle with an instance of Foo, you'd have the same problem.
The solution here is one or both of:
1. Make your class a context manager, so users provide the information necessary for deterministic finalization (by using with blocks), and also provide an explicit cleanup method (e.g. close) for when with blocks aren't feasible (e.g. the resource is part of another object's state and is cleaned up through that object's own resource management). This is also the only way to provide deterministic cleanup on most non-CPython interpreters, where reference-counting semantics have never been used (so all finalizers are called non-deterministically, if at all). A sketch follows the list.
2. Move to Python 3.4 or higher, where PEP 442 resolves the issue of uncollectable cyclic garbage (it's technically still possible to produce such cycles on CPython, but only via third-party extensions that continue to use tp_del instead of updating to the tp_finalize slot that allows cyclic garbage to be cleaned properly). It's still non-deterministic cleanup (if a reference cycle exists, you're waiting for the cyclic gc to run, sometime), but it's possible, whereas pre-3.4, cyclic garbage of this sort could not be cleaned up at all.
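A minimal sketch of option 1 (the resource details are placeholders of my own; single-argument print() works in both Python 2 and 3 here):

class Foo(object):
    def __init__(self):
        print("Initialize")   # acquire resources here
        self.f = 1

    def close(self):
        if self.f:            # idempotent: safe to call more than once
            print("Releasing Resources")
            self.f = 0

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()          # deterministic cleanup, even on exceptions

with Foo() as foo:
    print("testing resources")
# resources released here; in an end_callback, call foo.close() explicitly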


Is this Python producer-consumer lockless approach thread-safe?

I recently wrote a program that used a simple producer/consumer pattern. It initially had a bug related to improper use of threading.Lock, which I eventually fixed. But it made me wonder whether it's possible to implement the producer/consumer pattern in a lockless manner.
Requirements in my case were simple:
One producer thread.
One consumer thread.
Queue has place for only one item.
Producer can produce next item before the current one is consumed. The current item is therefore lost, but that's OK for me.
Consumer can consume current item before the next one is produced. The current item is therefore consumed twice (or more), but that's OK for me.
So I wrote this:
QUEUE_ITEM = None

# this is executed in one threading.Thread object
def producer():
    global QUEUE_ITEM
    while True:
        i = produce_item()
        QUEUE_ITEM = i

# this is executed in another threading.Thread object
def consumer():
    global QUEUE_ITEM
    while True:
        i = QUEUE_ITEM
        consume_item(i)
My question is: Is this code thread-safe?
Immediate comment: this code isn't really lockless - I use CPython and it has the GIL.
I tested the code a little and it seems to work. It translates to a few LOAD and STORE ops, which are atomic because of the GIL. But I also know that the del x operation isn't atomic when x implements the __del__ method. So if my item has a __del__ method and some nasty scheduling happens, things may break. Or not?
Another question is: what kind of restrictions (for example on the produced items' type) do I have to impose to make the above code work fine?
My questions are only about the theoretical possibility of exploiting CPython's and the GIL's quirks to come up with a lockless (i.e. no locks like threading.Lock explicitly in the code) solution.
Trickery will bite you. Just use Queue to communicate between threads.
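For reference, a sketch of the Queue-based version (maxsize=1 mirrors the one-slot requirement, though note that put/get block instead of overwriting or re-reading, which changes the stated semantics; the produce/consume stand-ins are mine):

import queue
import threading
import time

q = queue.Queue(maxsize=1)

def produce_item():
    time.sleep(0.1)      # stand-in for real work
    return time.time()

def consume_item(i):
    print(i)             # stand-in for real work

def producer():
    while True:
        q.put(produce_item())    # blocks while the single slot is full

def consumer():
    while True:
        consume_item(q.get())    # blocks while the slot is empty

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(1)  # let the pair run briefly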
Yes, this will work in the way that you described:
That the producer may produce a skippable element.
That the consumer may consume the same element.
But I also know that the del x operation isn't atomic when x implements the __del__ method. So if my item has a __del__ method and some nasty scheduling happens, things may break.
I don't see a del here. If a del happens in consume_item, then the __del__ may occur in the producer thread. I don't think this would be a "problem".
Don't bother using this, though. You will end up burning CPU on pointless polling cycles, and it is not any faster than using a queue with locks, since Python already has a global lock.
This is not really thread-safe, because the producer could overwrite QUEUE_ITEM before the consumer has consumed it, and the consumer could consume QUEUE_ITEM twice. As you mentioned, you're OK with that, but most people aren't.
Someone with more knowledge of CPython internals will have to answer your more theoretical questions.
I think it's possible that a thread is interrupted while producing/consuming, especially if the items are big objects.
Edit: this is just a wild guess. I'm no expert.
Also, the threads may produce/consume any number of items before the other one starts running.
You can use a list as the queue as long as you stick to append/pop since both are atomic.
QUEUE = []

# this is executed in one threading.Thread object
def producer():
    global QUEUE
    while True:
        i = produce_item()
        QUEUE.append(i)

# this is executed in another threading.Thread object
def consumer():
    global QUEUE
    while True:
        try:
            i = QUEUE.pop(0)
        except IndexError:
            # queue is empty
            continue
        consume_item(i)
In a class scope like the one below, you can even clear the queue.
class Atomic(object):
    def __init__(self):
        self.queue = []

    # this is executed in one threading.Thread object
    def producer(self):
        while True:
            i = produce_item()
            self.queue.append(i)

    # this is executed in another threading.Thread object
    def consumer(self):
        while True:
            try:
                i = self.queue.pop(0)
            except IndexError:
                # queue is empty
                continue
            consume_item(i)

    # There's the possibility the producer is still working on its current item.
    def clear_queue(self):
        self.queue = []
You'll have to find out which list operations are atomic by looking at the bytecode generated.
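For example, with the standard dis module (a sketch of mine; the point is that the append happens inside a single bytecode-level call, and the C-level list.append runs atomically under the GIL):

import dis

def enqueue(queue, item):
    queue.append(item)

# disassemble to see that the append is performed by one CALL instruction
dis.dis(enqueue)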
The __del__ could be a problem, as you said. It could be avoided if only there were a way to prevent the garbage collector from invoking the __del__ method on the old object before we finish assigning the new one to QUEUE_ITEM. We would need something like:
increase the reference counter on the old object
assign the new one to QUEUE_ITEM
decrease the reference counter on the old object
I'm afraid I don't know if that is possible, though.
