How to stop a thread in Python

I am using a thread in my Python card application.
Whenever I press the refresh button, I call a function in a new thread. Inside that main function, another child function gets called.
What I want is that whenever the child function ends, the thread is killed or stopped, without closing the application or pressing Ctrl+C.
I started the thread like this:
def on_refresh_mouseClick(self, event):
    thread.start_new_thread(self.readalways, ())
In the readalways function I am using a while loop; inside that loop, whenever the condition is satisfied, it calls the continuesread() function. Check it:
def readalways(self):
    while 1:
        cardid = self.parent.uhf.readTagId()
        print "the tag id is", cardid
        self.status = self.parent.db.checktagid(cardid)
        if len(self.status) != 0:
            break
    print "the value is", self.status[0]['id']
    self.a = self.status[0]['id']
    self.continuesread()
def continuesread(self):
    .......
    .......
After the continuesread function finishes, the values held in the thread should be cleared, because if I click the refresh button again, a new thread starts but some of the values come from the old thread.
So I want to kill the old thread once it has completed the continuesread function.

Please note that different threads of the same process share their memory: when you access self.status, you (probably) manipulate an object shared within the whole process. Thus, even if your threads do die when finishing continuesread (which they probably do), the manipulated object's state will still remain the same.
You could either
hold the status in a local variable instead of an attribute of self,
initialize those attributes when entering readalways,
or save this state in the thread-local storage of a threading.local object, which is not shared (see the documentation).
The first one seems to be the best as far as I can see.
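As a minimal sketch of the thread-local option (the Worker class and values here are illustrative, not from the card application), threading.local gives each thread its own private copy of any attribute stored on it:

```python
import threading

results = {}

class Worker:
    def __init__(self):
        # Each thread sees its own independent attributes on self.local.
        self.local = threading.local()

    def run(self, value):
        self.local.status = value            # private to this thread
        results[value] = self.local.status   # no cross-thread leakage

w = Worker()
threads = [threading.Thread(target=w.run, args=(n,)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results.values()))  # [0, 1, 2]
```

Each thread wrote to and read back only its own status, even though all of them shared the same Worker instance.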

Related

Python: trigger a graphical activity in main_thread upon threaded timer expiry

I am developing a simulation of railway signalling and have created a graphical visualisation of the output. There are many instances of a "signal" object. Upon certain conditions, a delay timer is started in a new thread inside a signal object instance so that the signal shows a certain colour for a period of time, then reverts to another colour.
It doesn't seem possible to carry out graphics actions in any thread other than the main thread, so I have used an instance-level flag to indicate when the timer has expired, and a monitoring loop in the main thread checks these flags often and carries out the graphical changes when the flag value of a signal object indicates "expired".
I don't like this much and I'd rather execute the graphical changes from the same function as the timer, but of course that's not in the main thread so it doesn't work (e.g. https://stackoverflow.com/questions/14694408/runtimeerror-main-thread-is-not-in-main-loop).
I'm aware of the following: (1) threading.main_thread() delivers the main thread, (2) that it is possible to override the run method of a thread object in a subclass, and (3) is_alive() "returns True just before the run() method starts until just after the run() method terminates". If it is possible to see the main thread from inside another thread, and detect when the main thread is not busy, and to override its run method temporarily with something I want it to execute, then I can get my graphical changes done when the timer expires without needing the flags and the wasteful separate checking loop.
These snippets I hope show my current solution.
Timer initiation inside the signal object (self.control_time delivers the timer value argument):
self.aspect_locked = 2 # Aspect locked until timer expires.
self.tick = th.Timer(self.control_time, self.aspect_unlock)
# When the timer runs down, the aspect lock is released,
# but GUI actions cannot be executed from inside a thread
# other than the main thread, so the fact can only be
# flagged by the timer thread and must be responded to
# elsewhere.
self.tick.start()
What happens inside the signal object when the timer runs out:
def aspect_unlock(self):
    self.aspect_locked = 3
What happens in the main thread to monitor for expired timers:
if self.dw and self.timeout:
    for sg in self.sgs:
        if sg.sigtype != 'D':
            sg.ac_route_timeout()
    # Wait a bit.
    self.timeout = False
    tick = Timer(1, self.release_timer)
    tick.start()

def release_timer(self):
    self.timeout = True
The function inside the signal object that carries out the graphical action.
def ac_route_timeout(self):
    if self.aspect_locked == 3:
        # Aspect has been locked for a timer, but is now unlocked.
        if self.dw:
            self.sigdw.routeLine.setFill('green')
What I want to do is carry out what's inside ac_route_timeout() directly from within aspect_unlock(), and dispense with the flags and the monitoring loops.
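For what it's worth, the usual workaround is very close to the flag scheme described above: the timer thread never touches widgets, it just posts a job onto a queue.Queue, and the main thread drains that queue (in a real GUI, from a root.after polling callback). A stripped-down sketch without any tkinter code, with illustrative names:

```python
import queue
import threading

gui_jobs = queue.Queue()

def aspect_unlock(signal_name):
    # Runs in the Timer thread: no widget calls here, just enqueue the fact.
    gui_jobs.put(signal_name)

# Stands in for th.Timer(self.control_time, self.aspect_unlock).
timer = threading.Timer(0.05, aspect_unlock, args=("S1",))
timer.start()

# In the GUI this polling would live in a root.after(100, poll) loop on
# the main thread; here we simply block until the job arrives.
expired = gui_jobs.get(timeout=2)
print(expired)  # the main thread can now recolour signal S1 itself
```

Compared with per-signal flags, the queue means the main thread only wakes up with work to do and never has to scan every signal object.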

Global variable doesn't update in loop in external process

I have been trying to pass data into a loop in another process by creating an empty object in which I put the data in. When I change the content of my object the process doesn't seem to update, and keeps returning the same value.
Here is some code I have tried:
from multiprocessing import Process
from time import sleep

class Carrier():  # Empty object to exchange data
    pass

def test():
    global carry
    while True:
        print(carry.content)
        sleep(0.5)

carry = Carrier()
carry.content = "test"  # Giving the value I want to share
p = Process(target=test, args=())
p.start()
while True:
    carry.content = input()  # Change the value from the object
I have also tried deleting the object from memory each time and redefining it in the next loop, but that doesn't seem to have any effect; it keeps the initial "test" value, the one present when the process was started.
I assume you are running this program on Windows, where child processes are spawned and therefore do not share memory with their parent process, so you can't pass data between processes like this.
On Linux the child process is forked from the parent, so it inherits a copy of the parent's data as it was at fork time; but even then the two copies are independent afterwards, which is why the child keeps printing the initial "test" value.
To share data between parent and child processes irrespective of the underlying OS, you can see this link - How to share data between Python processes?

Python thread run() blocking

I was attempting to create a thread class that could be terminated by an exception (since I am trying to have the thread wait on an event) when I created the following:
import sys
import threading

class testThread(threading.Thread):
    def __init__(self):
        super(testThread, self).__init__()
        self.daemon = True
    def run(self):
        try:
            print('Running')
            while 1:
                pass
        except:
            print('Being forced to exit')

test1 = testThread()
test2 = testThread()
print(test1.daemon)
test1.run()
test2.run()
sys.exit()
However, running the program will only print out one Running message, until the other is terminated. Why is that?
The problem is that you're calling the run method.
This is just a plain old method that you implement, which does whatever you put in its body. In this case, the body is an infinite loop, so calling run just loops forever.
The way to start a thread is the start method. This method is part of the Thread class, and what it does is:
Start the thread’s activity.
It must be called at most once per thread object. It arranges for the object’s run() method to be invoked in a separate thread of control.
So, if you call this, it will start a new thread, make that new thread run your run() method, and return immediately, so the main thread can keep doing other stuff.1 That's what you want here.
1. As pointed out by Jean-François Fabre, you're still not going to get any real parallelism here. Busy loops are never a great idea in multithreaded code, and if you're running this in CPython or PyPy, almost all of that busy looping is executing Python bytecode while holding the GIL, and only one thread can hold the GIL at a time. So, from a coarse view, things look concurrent—three threads are running, and all making progress. But if you zoom in, there's almost no overlap where two threads progress at once, usually not even enough to make up for the small scheduler overhead.
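A minimal corrected version of the example, with the busy loop replaced by a short sleep so the program actually finishes:

```python
import threading
import time

class TestThread(threading.Thread):
    def run(self):
        print('Running')
        time.sleep(0.1)

test1 = TestThread()
test2 = TestThread()
test1.start()   # returns immediately; run() executes in a new thread
test2.start()   # so this prints a second 'Running' right away
test1.join()
test2.join()
print('done')
```

Both Running messages appear almost at once, because start() hands run() off to a new thread instead of executing it in the caller.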

How to safely pass a variable's value between threads in Python

I read in the Python documentation that Queue.Queue() is a safe way of passing variables between different threads. I didn't really know that there was a safety issue with multithreading. For my application, I need to develop multiple objects with variables that can be accessed from multiple different threads. Right now I just have the threads accessing the object variables directly. I won't show my code here because there's way too much of it, but here is an example to demonstrate what I'm doing.
from threading import Thread
import time
import random

class switch:
    def __init__(self, id):
        self.id = id
        self.is_on = False
    def toggle(self):
        self.is_on = not self.is_on

switches = []
for i in range(5):
    switches.append(switch(i))

def record_switch():
    switch_record = {}
    while True:
        time.sleep(10)
        current = {}
        current['time'] = time.ctime(time.time())
        for i in switches:
            current[i.id] = i.is_on
        switch_record.update(current)

def toggle_switch():
    while True:
        time.sleep(random.random() * 100)
        for i in switches:
            i.toggle()

toggle = Thread(target=toggle_switch, args=())
record = Thread(target=record_switch, args=())
toggle.start()
record.start()
So as I understand, the queue object can be used only to put and get values, which clearly won't work for me. Is what I have here "safe"? If not, how can I program this so that I can safely access a variable from multiple different threads?
Whenever you have threads modifying a value other threads can see, then you are going to have safety issues. The worry is that a thread will try to modify a value when another thread is in the middle of modifying it, which has risky and undefined behavior. So no, your switch-toggling code is not safe.
The important thing to know is that changing the value of a variable is not guaranteed to be atomic. If an action is atomic, it means that action will always happen in one uninterrupted step. (This differs very slightly from the database definition.) Changing a variable's value, especially a list value, can often take multiple steps at the processor level. When you are working with threads, all of those steps are not guaranteed to happen all at once, before another thread starts working. It's entirely possible that thread A will be halfway through changing variable x when thread B suddenly takes over. Then if thread B tries to read variable x, it's not going to find a correct value. Even worse, if thread B tries to modify variable x while thread A is halfway through doing the same thing, bad things can happen. Whenever you have a variable whose value can change somehow, all accesses to it need to be made thread-safe.
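A standard illustration of why this matters (not from the question's code): four threads doing a read-modify-write on a shared counter. With a Lock around the update the result is always exact; remove the lock and updates can be lost, because counter += 1 is really several bytecode steps:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # serialises the read-modify-write
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 on every run
```

Without the lock, two threads can both read the same old value of counter and write back the same incremented value, silently dropping one update.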
If you're modifying variables instead of passing messages, you should be using a Lock object.
In your case, you'd have a global Lock object at the top:
from threading import Lock
switch_lock = Lock()
Then you would surround the critical piece of code with the acquire and release functions.
for i in switches:
    switch_lock.acquire()
    current[i.id] = i.is_on
    switch_lock.release()

for i in switches:
    switch_lock.acquire()
    i.toggle()
    switch_lock.release()
Only one thread may ever acquire a lock at a time (this kind of lock, anyway). When any of the other threads try, they'll be blocked and wait for the lock to become free again. So by putting locks around critical sections of code, you make it impossible for more than one thread to look at, or modify, a given switch at any time. You can put this around any bit of code you want to be kept exclusive to one thread at a time.
EDIT: as martineau pointed out, locks are integrated well with the with statement, if you're using a version of Python that has it. This has the added benefit of automatically unlocking if an exception happens. So instead of the above acquire and release system, you can just do this:
for i in switches:
    with switch_lock:
        i.toggle()

New thread life-cycle within Tkinter object

I've got an issue working with the threading class within a Tkinter GUI. On initiating the Tkinter GUI, I create new Thread and Queue objects, set the thread as a daemon, and start it. In the Tkinter GUI, I have a button that calls an internal method. This method then calls put on the Queue object and is posted below. The Thread object performs all the necessary actions that I expect.
def my_method_threaded(self, my_name):
    try:
        self.queue.put(("test", dict(name=my_name)))
        self.label_str_var.set('')
        self.queue.join()
    except:
        self.error_out(msg=traceback.format_exc())
However, I am encountering an issue AFTER it has finished. If I call self.queue.join(), then the set call is never executed and the app freezes after the thread has completed its task. If I comment out the join() call, the set call IS executed, but the button only works the first time; after that it does nothing. (I am tracking what the run() method is doing using a logger; it is only ever called the first time.)
I am assuming there is an issue with calling join() and the Tkinter loop, which is why the first issue occurs. Can anyone shed any light on the second issue? If you need more code, then let me know.
Edit: A second issue I've just noticed is that the while True loop executes my action twice even though I have called self.queue.task_done(). Code for the run method is below:
def run(self):
    args = self._queue.get()
    my_name = args[1]["name"]
    while True:
        if my_name == "Barry":
            # calls a static method elsewhere
            self.queue.task_done()
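For comparison, the usual shape of a queue-consuming run() calls get() once per loop iteration and pairs every get() with exactly one task_done(), so that queue.join() can return. A generic sketch with illustrative names, not the original class:

```python
import queue
import threading

def worker(q, handled):
    while True:
        item = q.get()          # one get() per loop iteration
        if item is None:        # sentinel: time to stop
            q.task_done()
            break
        handled.append(item)
        q.task_done()           # exactly one task_done() per get()

q = queue.Queue()
handled = []
t = threading.Thread(target=worker, args=(q, handled), daemon=True)
t.start()
q.put(("test", {"name": "Barry"}))
q.put(None)
q.join()                        # returns once every put() was task_done()'d
print(handled)
```

Calling task_done() more times than get() (as the while True loop above does) raises ValueError: task_done() called too many times, and calling get() only once means later button presses are never consumed.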
