Threadsafe lock on loop - python

Can someone please tell me if the following is threadsafe or not, and if it isn't, what must I do to make it so?
Note: this is only a small sample; I'm not sure if it runs.
import threading
import time

TIMER = True
time_lock = threading.Lock()

def timing():
    while TIMER:
        # some logic will be here, for now just print the time
        print(time.time())

timer = threading.Thread(target=timing)
timer2 = threading.Thread(target=timing)
timer.start()
timer2.start()

while True:
    time_lock.acquire()
    if doSomeStuff():   # placeholder for real work
        TIMER = True
    if otherThings():   # placeholder for an exit condition
        break
    time_lock.acquire()
    TIMER = False
    time_lock.release()

time_lock.acquire()
TIMER = False
time_lock.release()

It depends a bit on which implementation you are using, and what you are doing.
First, in the de facto standard implementation of python ("cpython"), only one thread is allowed to run at a time since some internals of the python interpreter aren't thread-safe. This is controlled by the Global Interpreter Lock (aka "GIL"). So in cpython, you don't really need extra locks at all; the GIL makes sure that only one thread at a time is running python code and possibly changing variables. This is a feature of the implementation, not of the language.
Second, if only one thread writes to a simple variable and others only read it you don't need a lock either, for obvious reasons. It is however up to you as the programmer to make sure that this is the case, and it is easy to make mistakes with that.
Even assigning to a simple variable might not need a lock. In Python, variables are more like labels used to refer to an object rather than boxes you can put something in. So simple assignments are atomic (in the sense that they cannot be interrupted halfway), as you can see when you look at the generated Python bytecode:
In [1]: import dis

In [2]: x = 7

In [3]: def setx():
   ...:     global x
   ...:     x = 12
   ...:

In [4]: dis.dis(setx)
  3           0 LOAD_CONST               1 (12)
              3 STORE_GLOBAL             0 (x)
              6 LOAD_CONST               0 (None)
              9 RETURN_VALUE
The only code that changes x is the single STORE_GLOBAL opcode. So the variable is either changed or it isn't; there is no inconsistent state in between.
But if you, e.g., want to test a variable for a certain value and perform an action only while that test still holds true, you do need a lock, because another thread could have changed the variable just after you tested it.
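For example, a minimal sketch of such a test-and-act section (the counter variable and the threshold here are made up for illustration):

import threading

counter = 0
counter_lock = threading.Lock()

def add_one():
    global counter
    with counter_lock:
        counter += 1

def reset_if_large():
    global counter
    # Without the lock, another thread could change `counter` between the
    # test and the assignment; holding the lock makes test-and-set one unit.
    with counter_lock:
        if counter >= 100:
            counter = 0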
Things like appending to a list or swapping variables are not atomic. But again in cpython these would be protected by the GIL. In other implementations you'd need a lock around such operations to protect against possible inconsistencies.

If I understand you correctly, you want to signal your thread when to stop. Since this is a situation where only one thread writes to a shared variable and only once, you do not need a lock.
Locks are necessary when you are doing concurrent modification of a shared datastructure that cannot be read atomically.
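A minimal sketch of that pattern (worker and stop are illustrative names; a threading.Event would do the same job):

import threading
import time

stop = False  # written by the main thread only, read by the worker

def worker():
    while not stop:          # a plain read of a bool needs no lock here
        print("working...")
        time.sleep(1)

t = threading.Thread(target=worker)
t.start()
time.sleep(3)
stop = True                  # signal the worker; no lock required
t.join()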

Related

Why doesn't the function run?

OK, so I have multiple problems with the code below:
When the key chosen in the Combo Box is held down, it keeps printing "You Pressed It"; is there a way to avoid this?
When I press the set hotkey, the label changes but the while loop in process() doesn't react; it's supposed to do a series of tasks, but I simplified it to a print for this question.
run = False

def press():
    global run
    while True:
        if keyboard.read_key(hotCombo.get()):
            print("You Pressed It")
            run = not run
            keyboard.wait(hotCombo.get())
        if run == True:
            status["text"] = "Working"
        else:
            status["text"] = "Not Working"

def process():
    while run == True:
        print("runnning")
Been tinkering with it and found more problems
I ended up with this, but while it's printing "run" I can't seem to stop it:
def process():
    global run
    while True:
        if keyboard.read_key(hotCombo.get()):
            print("kijanbdsjokn")
            run = not run
            keyboard.wait(hotCombo.get())
        if run == True:
            status["text"] = "Working"
        else:
            status["text"] = "Not Working"
        while run == True:
            print("run")
            time.sleep(1)
Can I ask why I can't just integrate tkinter into a working Python script using threading?
A Python script is generally linear. You do things in sequence and then you exit.
In a tkinter program, your code consists of three things.
Code to set up the window and widgets.
Initialization of global variables (doesn't really matter if you hide them in a class instance; they're still globals).
Most of it will be functions/methods that are called as callbacks from tkinter when it is in the mainloop.
So in a tkinter program, most of your code is a guest in the mainloop, where it is executed in small pieces in reaction to events. This is a completely different kind of program. It was called event-driven or message-based programming long before that became cool in web servers and frameworks.
So, can you integrate a script in a tkinter program? Yes, it is possible.
There are basically three ways you can do it:
Split the code up into small pieces that can be called via after timeouts; a small sketch of this approach follows after this list. This involves the most reorganization of your code. To keep the GUI responsive, event handlers (like timeouts) should not take too long; 50 ms seems to be a reasonable upper limit.
Run it in a different thread. We will cover that in more detail below.
Run it in a different process. Broadly similar to running in a thread (the API's of threading.Thread and multiprocessing.Process are almost the same by design). The largest difference is that communication between processes has to be done explicitly via e.g. Queue or Pipe.
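As a rough sketch of the first approach (the counter and label here are made-up stand-ins for real work), the script is chopped into small callbacks that reschedule themselves with after:

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="0")
label.pack()

count = 0

def do_a_little_work():
    global count
    count += 1                      # one small piece of the original script
    label["text"] = str(count)
    if count < 100:                 # reschedule until the work is done
        root.after(50, do_a_little_work)

root.after(50, do_a_little_work)
root.mainloop()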
There are some things that you have to take into account when using extra threads, especially in a tkinter program.
1) Python version
You need to use Python 3. This will not work well in Python 2 for reasons that are beyond the scope of this answer. Better preemption of threads in Python 3 is a big part of it.
2) Multithreaded tkinter build
The tkinter (or rather the underlying tcl interpreter) needs to be built with threading enabled. I gather that the official python.org builds for ms-windows are, but apart from that YMMV. On some UNIX-like systems such as Linux or *BSD, the packages/ports systems give you a choice in this.
3) Make your code into a function
You need to wrap up the core of your original script in a function so you can start it in a thread.
4) Make the function thread-friendly
You probably want to be able to interrupt that thread if it takes too long. So you have to adapt it to check regularly if it should continue. Checking if a global named run is True is one method. Note that the threading API does not allow you to just terminate a thread.
5) The normal perils of multithreading
You have to be careful with modifying widgets or globals from both threads at the same time.
At the time of writing, the Python GIL helps you here. Since it assures that only one thread at a time is executing Python bytecode, any change that can be done in a single bytecode is multithreading safe as a side effect.
For example, look at the modification of a global in the modify function:
In [1]: import dis

In [2]: data = []

In [3]: def modify():
   ...:     global data
   ...:     newdata = [1, 2, 3]
   ...:     data = newdata
   ...:

In [4]: dis.dis(modify)
  3           0 BUILD_LIST               0
              2 LOAD_CONST               1 ((1, 2, 3))
              4 LIST_EXTEND              1
              6 STORE_FAST               0 (newdata)

  4           8 LOAD_FAST                0 (newdata)
             10 STORE_GLOBAL             0 (data)
             12 LOAD_CONST               0 (None)
             14 RETURN_VALUE
See how the new list is built separately, and only when it is complete is it assigned to the global. (This was not by accident.)
It takes only a single bytecode instruction (STORE_GLOBAL) to set a global variable to a newly created list. So at no point can the value of data be ambiguous.
But a lot of things take more than one bytecode. So there is a chance that one thread is preempted in favor of the other while it is modifying a variable or widget. How big that chance is depends on how often these situations happen and how long they take.
Currently the interpreter asks a running thread to release the GIL every few milliseconds (5 ms by default; see sys.getswitchinterval()).
So a change that takes longer than that is likely to be preempted partway through, as is any task that drops the GIL for I/O.
So if you see weird things happening, make sure to use Locks to regulate access to shared resources.
It helps if e.g. a widget or variable is only modified from one thread, and only read from all the other threads.
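One common way to follow that advice (a sketch, assuming the worker only produces data and never touches widgets) is to let the worker thread put its results on a queue.Queue and have the tkinter thread pick them up in an after callback:

import queue
import threading
import time
import tkinter as tk

root = tk.Tk()
status = tk.Label(root, text="waiting")
status.pack()

results = queue.Queue()

def worker():
    for i in range(10):
        time.sleep(1)                     # stand-in for real work
        results.put(f"step {i} done")     # the worker never touches widgets

def poll_queue():
    try:
        while True:
            # only the GUI thread ever writes to the widget
            status["text"] = results.get_nowait()
    except queue.Empty:
        pass
    root.after(100, poll_queue)

threading.Thread(target=worker, daemon=True).start()
root.after(100, poll_queue)
root.mainloop()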
One way to handle your key is to turn it into a two-phase loop:
def press():
    global run
    while True:
        while not keyboard.read_key(hotCombo.get()):
            time.sleep(0.2)
        run = True
        status["text"] = "Working"
        while keyboard.read_key(hotCombo.get()):
            print("running")
            time.sleep(0.2)
        run = False
        status["text"] = "Not Working"

Make certain operators have no effect over a duration of time? (Python)

I am working with multi-threading. There is a thread that periodically adds to a certain variable, say every 5 seconds. I want to make it so that, given a call to function f(), additions to that variable will have no effect.
I can't find how to do this anywhere and I'm not sure where to start.
Is there any syntax that makes certain operations have no effect on specific variables for a certain duration?
Or is there any starting point someone could give me and I can figure out the rest?
Thanks.
I would suggest creating an addToVariable or setVariable method to take care of the actual adding to the variable in question. This way you can just set a flag, and if the flag is set, addToVariable returns immediately instead of actually performing the addition.
If not, you should look into operator overloading. Basically what you want is to make your own number class and write a new add function which contains some sort of flag that stops further additions from having any effect.
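A rough sketch of the first suggestion (the class and method names here are made up for illustration):

import threading

class Accumulator:
    def __init__(self):
        self.value = 0
        self.frozen = False
        self._lock = threading.Lock()

    def add(self, amount):
        with self._lock:
            if self.frozen:      # additions are silently ignored while frozen
                return
            self.value += amount

    def freeze(self):
        self.frozen = True

    def unfreeze(self):
        self.frozen = False

f() would then call freeze() on entry and unfreeze() once additions should take effect again.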
You could protect the increment with a lock and hold the lock for the duration of the f() call:
import threading
import time

lock = threading.Lock()
timer = time.monotonic  # any monotonically increasing clock works here

# ...

def f():
    with lock:
        ...  # the rest of the function

# incrementing thread
while True:
    time.sleep(5 - timer() % 5)  # wake up on 5-second boundaries
    with lock:
        x += 1                   # x is the shared variable from the question
The incrementing thread is blocked while f() is executed. See also: How to run a function periodically in Python.
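For the scheduling part, one simple pattern (just a sketch, not the only way) is a threading.Timer that reschedules itself:

import threading

def periodically(interval, func):
    """Call func every `interval` seconds using a self-rescheduling Timer."""
    def wrapper():
        periodically(interval, func)   # schedule the next run first
        func()
    t = threading.Timer(interval, wrapper)
    t.daemon = True                    # don't keep the process alive just for the timer
    t.start()

periodically(5, lambda: print("tick"))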

How to safely pass a variable's value between threads in Python

I read in the Python documentation that Queue.Queue() is a safe way of passing variables between different threads. I didn't really know that there was a safety issue with multithreading. For my application, I need to develop multiple objects with variables that can be accessed from multiple different threads. Right now I just have the threads accessing the object variables directly. I won't show my code here because there's way too much of it, but here is an example to demonstrate what I'm doing.
from threading import Thread
import time
import random

class switch:
    def __init__(self, id):
        self.id = id
        self.is_on = False

    def toggle(self):
        self.is_on = not self.is_on

switches = []
for i in range(5):
    switches.append(switch(i))

def record_switch():
    switch_record = {}
    while True:
        time.sleep(10)
        current = {}
        current['time'] = time.ctime()   # human-readable timestamp
        for i in switches:
            current[i.id] = i.is_on
        switch_record.update(current)

def toggle_switch():
    while True:
        time.sleep(random.random() * 100)
        for i in switches:
            i.toggle()

toggle = Thread(target=toggle_switch)   # pass the function itself, don't call it
record = Thread(target=record_switch)
toggle.start()
record.start()
So as I understand, the queue object can be used only to put and get values, which clearly won't work for me. Is what I have here "safe"? If not, how can I program this so that I can safely access a variable from multiple different threads?
Whenever you have threads modifying a value other threads can see, then you are going to have safety issues. The worry is that a thread will try to modify a value when another thread is in the middle of modifying it, which has risky and undefined behavior. So no, your switch-toggling code is not safe.
The important thing to know is that changing the value of a variable is not guaranteed to be atomic. If an action is atomic, it means that action will always happen in one uninterrupted step. (This differs very slightly from the database definition.) Changing a variable value, especially a list value, can often times take multiple steps on the processor level. When you are working with threads, all of those steps are not guaranteed to happen all at once, before another thread starts working. It's entirely possible that thread A will be halfway through changing variable x when thread B suddenly takes over. Then if thread B tries to read variable x, it's not going to find a correct value. Even worse, if thread B tries to modify variable x while thread A is halfway through doing the same thing, bad things can happen. Whenever you have a variable whose value can change somehow, all accesses to it need to be made thread-safe.
If you're modifying variables instead of passing messages, you should be using a Lock object.
In your case, you'd have a global Lock object at the top:
from threading import Lock
switch_lock = Lock()
Then you would surround the critical piece of code with the acquire and release functions.
for i in switches:
    switch_lock.acquire()
    current[i.id] = i.is_on
    switch_lock.release()

for i in switches:
    switch_lock.acquire()
    i.toggle()
    switch_lock.release()
Only one thread may ever acquire a lock at a time (this kind of lock, anyway). When any of the other threads try, they'll be blocked and wait for the lock to become free again. So by putting locks around critical sections of code, you make it impossible for more than one thread to look at, or modify, a given switch at any time. You can put this around any bit of code you want to be kept exclusive to one thread at a time.
EDIT: as martineau pointed out, locks are integrated well with the with statement, if you're using a version of Python that has it. This has the added benefit of automatically unlocking if an exception happens. So instead of the above acquire and release system, you can just do this:
for i in switches:
    with switch_lock:
        i.toggle()

Using a global dictionary with threads in Python

Is accessing/changing dictionary values thread-safe?
I have a global dictionary foo and multiple threads with ids id1, id2, ... , idn. Is it OK to access and change foo's values without allocating a lock for it if it's known that each thread will only work with its id-related value, say thread with id1 will only work with foo[id1]?
Assuming CPython: Yes and no. It is actually safe to fetch/store values from a shared dictionary in the sense that multiple concurrent read/write requests won't corrupt the dictionary. This is due to the global interpreter lock ("GIL") maintained by the implementation. That is:
Thread A running:
a = global_dict["foo"]
Thread B running:
global_dict["bar"] = "hello"
Thread C running:
global_dict["baz"] = "world"
won't corrupt the dictionary, even if all three access attempts happen at the "same" time. The interpreter will serialize them in some undefined way.
However, the results of the following sequence is undefined:
Thread A:
if "foo" not in global_dict:
global_dict["foo"] = 1
Thread B:
global_dict["foo"] = 2
as the test/set in thread A is not atomic ("time-of-check/time-of-use" race condition). So, it is generally best, if you lock things:
from threading import RLock

lock = RLock()

def thread_A():
    with lock:
        if "foo" not in global_dict:
            global_dict["foo"] = 1

def thread_B():
    with lock:
        global_dict["foo"] = 2
The best, safest, portable way to have each thread work with independent data is:
import threading
tloc = threading.local()
Now each thread works with a totally independent tloc object even though it's a global name. The thread can get and set attributes on tloc, use tloc.__dict__ if it specifically needs a dictionary, etc.
Thread-local storage for a thread goes away at end of thread; to have threads record their final results, have them put their results, before they terminate, into a common instance of Queue.Queue (which is intrinsically thread-safe). Similarly, initial values for data a thread is to work on could be arguments passed when the thread is started, or be taken from a Queue.
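A minimal sketch of that combination (the worker function and its result format are made up for illustration; this uses the Python 3 module name queue):

import threading
import queue

tloc = threading.local()
results = queue.Queue()     # intrinsically thread-safe

def worker(thread_id):
    tloc.count = 0                        # private to this thread, despite the global name
    for _ in range(1000):
        tloc.count += 1
    results.put((thread_id, tloc.count))  # hand the final value back safely

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not results.empty():
    print(results.get())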
Other half-baked approaches, such as hoping that operations that look atomic are indeed atomic, may happen to work for specific cases in a given version and release of Python, but could easily get broken by upgrades or ports. There's no real reason to risk such issues when a proper, clean, safe architecture is so easy to arrange, portable, handy, and fast.
Since I needed something similar, I landed here. I sum up your answers in this short snippet:
#!/usr/bin/env python3
import threading

class ThreadSafeDict(dict):
    def __init__(self, *p_arg, **n_arg):
        dict.__init__(self, *p_arg, **n_arg)
        self._lock = threading.Lock()

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, type, value, traceback):
        self._lock.release()

if __name__ == '__main__':
    u = ThreadSafeDict()
    with u as m:
        m[1] = 'foo'
    print(u)
as such, you can use the with construct to hold the lock while fiddling in your dict()
The GIL takes care of that, if you happen to be using CPython.
global interpreter lock
The lock used by Python threads to assure that only one thread executes in the CPython virtual machine at a time. This simplifies the CPython implementation by assuring that no two processes can access the same memory at the same time. Locking the entire interpreter makes it easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines. Efforts have been made in the past to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity), but so far none have been successful because performance suffered in the common single-processor case.
See are-locks-unnecessary-in-multi-threaded-python-code-because-of-the-gil.
How does it work?
>>> import dis
>>> demo = {}
>>> def set_dict():
...     demo['name'] = 'Jatin Kumar'
...
>>> dis.dis(set_dict)
  2           0 LOAD_CONST               1 ('Jatin Kumar')
              3 LOAD_GLOBAL              0 (demo)
              6 LOAD_CONST               2 ('name')
              9 STORE_SUBSCR
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
Each of the above instructions is executed with the GIL held, and the single STORE_SUBSCR instruction adds/updates the key/value pair in the dictionary. So you can see that a dictionary update is atomic and hence thread-safe.

Is the += operator thread-safe in Python?

I want to create a non-thread-safe chunk of code for experimentation, and these are the functions that 2 threads are going to call.
c = 0

def increment():
    c += 1

def decrement():
    c -= 1
Is this code thread safe?
If not, can you help me understand why it is not thread-safe, and what kinds of statements usually lead to non-thread-safe operations?
If it is thread-safe, how can I make it explicitly non-thread-safe?
No, this code is absolutely, demonstrably not threadsafe.
import threading

i = 0

def test():
    global i
    for x in range(100000):
        i += 1

threads = [threading.Thread(target=test) for t in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert i == 1000000, i
fails consistently.
i += 1 resolves to four opcodes: load i, load 1, add the two, and store it back to i. The Python interpreter switches active threads (by releasing the GIL from one thread so another thread can have it) every 100 opcodes. (Both of these are implementation details.) The race condition occurs when the 100-opcode preemption happens between loading and storing, allowing another thread to start incrementing the counter. When it gets back to the suspended thread, it continues with the old value of "i" and undoes the increments run by other threads in the meantime.
Making it threadsafe is straightforward; add a lock:
#!/usr/bin/python
import threading

i = 0
i_lock = threading.Lock()

def test():
    global i
    i_lock.acquire()
    try:
        for x in range(100000):
            i += 1
    finally:
        i_lock.release()

threads = [threading.Thread(target=test) for t in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert i == 1000000, i
(note: you would need global c in each function to make your code work.)
Is this code thread safe?
No. Only a single bytecode instruction is ‘atomic’ in CPython, and a += may not result in a single opcode, even when the values involved are simple integers:
>>> c = 0
>>> def inc():
...     global c
...     c += 1
...
>>> import dis
>>> dis.dis(inc)
  3           0 LOAD_GLOBAL              0 (c)
              3 LOAD_CONST               1 (1)
              6 INPLACE_ADD
              7 STORE_GLOBAL             0 (c)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
So one thread could get to index 6 with c and 1 loaded, give up the GIL and let another thread in, which executes an inc and sleeps, returning the GIL to the first thread, which now has the wrong value.
In any case, what's atomic is an implementation detail which you shouldn't rely on. Bytecodes may change in future versions of CPython, and the results will be totally different in other implementations of Python that do not rely on a GIL. If you need thread safety, you need a locking mechanism.
To be safe, I recommend using a lock:
import threading

class ThreadSafeCounter():
    def __init__(self):
        self.lock = threading.Lock()
        self.counter = 0

    def increment(self):
        with self.lock:
            self.counter += 1

    def decrement(self):
        with self.lock:
            self.counter -= 1
The synchronized decorator can also help to keep the code easy to read.
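There is no synchronized decorator in the standard library; a minimal home-grown version (an illustrative sketch, not a canonical implementation) could look like this:

import threading
from functools import wraps

def synchronized(lock):
    """Decorator that runs the wrapped function while holding `lock`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                return func(*args, **kwargs)
        return wrapper
    return decorator

c = 0
c_lock = threading.Lock()

@synchronized(c_lock)
def increment():
    global c
    c += 1

@synchronized(c_lock)
def decrement():
    global c
    c -= 1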
It's easy to prove that your code is not thread-safe. You can increase the likelihood of seeing the race condition by using a sleep in the critical parts (this simply simulates a slow CPU). However, if you run the code for long enough, you should see the race condition eventually regardless.
from time import sleep

c = 0

def increment():
    global c
    c_ = c
    sleep(0.1)
    c = c_ + 1

def decrement():
    global c
    c_ = c
    sleep(0.1)
    c = c_ - 1
Short answer: no.
Long answer: generally not.
While CPython's GIL makes single opcodes thread-safe, this is not general behaviour. You may not assume that even a simple operation like an addition is an atomic instruction. The addition may only be half done when another thread runs.
And as soon as your functions access a variable in more than one opcode, your thread safety is gone. You can achieve thread safety if you wrap your function bodies in locks, but be aware that locks may be computationally costly and may lead to deadlocks.
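To illustrate the deadlock risk (a contrived sketch; the one-second timeouts are only there so the demo terminates instead of hanging forever), two threads that take the same two locks in opposite order can block each other:

import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:
        time.sleep(0.1)              # give worker_2 time to grab lock_b
        if not lock_b.acquire(timeout=1):
            print("worker_1 gave up waiting for lock_b (deadlock)")
        else:
            lock_b.release()

def worker_2():
    with lock_b:
        time.sleep(0.1)              # give worker_1 time to grab lock_a
        if not lock_a.acquire(timeout=1):
            print("worker_2 gave up waiting for lock_a (deadlock)")
        else:
            lock_a.release()

t1 = threading.Thread(target=worker_1)
t2 = threading.Thread(target=worker_2)
t1.start()
t2.start()
t1.join()
t2.join()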
If you actually want to make your code not thread-safe, and have a good chance of "bad" stuff actually happening without trying ten thousand times (or that one time when you really don't want "bad" stuff to happen), you can 'jitter' your code with explicit sleeps:
from time import sleep

def increment():
    global c
    x = c
    sleep(0.1)   # give another thread a chance to run in the middle
    c = x + 1
Single opcodes are thread-safe because of the GIL but nothing else:
import threading
import time

class something(object):
    def __init__(self, c):
        self.c = c

    def inc(self):
        new = self.c + 1
        # if the thread is interrupted by another inc() call here, its result is wrong
        time.sleep(0.001)  # the sleep makes the OS schedule another thread
        self.c = new

x = something(0)
for _ in range(10000):
    threading.Thread(target=x.inc).start()
print(x.c)  # ~900 here, instead of 10000
Every resource shared by multiple threads must have a lock.
Are you sure that the functions increment and decrement execute without any error?
I think it should raise an UnboundLocalError because you have to explicitly tell Python that you want to use the global variable named 'c'.
So change increment (and likewise decrement) to the following:
def increment():
    global c
    c += 1
I think that, as-is, your code is not thread-safe. This article about thread synchronisation mechanisms in Python may be helpful.
