I am developing a simulation of railway signalling and have created a graphical visualisation of the output. There are many instances of a "signal" object. Upon certain conditions, a delay timer is started in a new thread inside a signal object instance so that the signal shows a certain colour for a period of time, then reverts to another colour.
It doesn't seem possible to carry out graphics actions in any thread other than the main thread, so I have used an instance-level flag to indicate when the timer has expired; a monitoring loop in the main thread checks these flags frequently and carries out the graphical changes when a signal object's flag indicates "expired".
I don't like this much and I'd rather execute the graphical changes from the same function as the timer, but of course that's not in the main thread so it doesn't work (e.g. https://stackoverflow.com/questions/14694408/runtimeerror-main-thread-is-not-in-main-loop).
I'm aware of the following: (1) threading.main_thread() returns the main thread, (2) it is possible to override the run method of a Thread subclass, and (3) is_alive() "returns True just before the run() method starts until just after the run() method terminates". If I could see the main thread from inside another thread, detect when it is not busy, and temporarily override its run method with something I want it to execute, then I could get my graphical changes done when the timer expires without the flags and the wasteful separate checking loop.
I hope these snippets show my current solution.
Timer initiation inside the signal object (self.control_time delivers the timer value argument):
self.aspect_locked = 2 # Aspect locked until timer expires.
self.tick = th.Timer(self.control_time, self.aspect_unlock)
# When the timer runs down, the aspect lock is released,
# but GUI actions cannot be executed from inside a thread
# other than the main thread, so the fact can only be
# flagged by the timer thread and must be responded to
# elsewhere.
self.tick.start()
What happens inside the signal object when the timer runs out:
def aspect_unlock(self):
    self.aspect_locked = 3
What happens in the main thread to monitor for expired timers:
if self.dw and self.timeout:
    for sg in self.sgs:
        if sg.sigtype != 'D':
            sg.ac_route_timeout()
    # Wait a bit.
    self.timeout = False
    tick = Timer(1, self.release_timer)
    tick.start()

def release_timer(self):
    self.timeout = True
The function inside the signal object that carries out the graphical action:
def ac_route_timeout(self):
    if self.aspect_locked == 3:
        # Aspect was locked for a timer, but is now unlocked.
        if self.dw:
            self.sigdw.routeLine.setFill('green')
What I want to do is carry out what's inside ac_route_timeout() directly from within aspect_unlock(), and dispense with the flags and the monitoring loops.
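Rather than trying to hijack the main thread's run method, the usual pattern for this is a thread-safe queue: the timer thread posts a callable describing the graphical change, and the main thread drains the queue in its existing loop. Below is a minimal sketch of that idea; gui_queue, performed and the lambda are illustrative stand-ins for the real GUI call, not names from the code above:

```python
import queue
import threading

gui_queue = queue.Queue()  # worker threads put callables here
performed = []             # stands in for actual GUI state changes

def aspect_unlock():
    # Runs in the timer thread: instead of touching the GUI,
    # enqueue the graphical action for the main thread to perform.
    gui_queue.put(lambda: performed.append("route set to green"))

def drain_gui_queue():
    # Called periodically from the main thread's existing loop.
    while True:
        try:
            action = gui_queue.get_nowait()
        except queue.Empty:
            break
        action()  # safe: we are on the main thread

t = threading.Timer(0.05, aspect_unlock)
t.start()
t.join()  # a real GUI loop would poll instead of joining
drain_gui_queue()
print(performed)  # → ['route set to green']
```

This removes the per-signal flags: the queue itself is the only shared state, and Queue handles the locking internally.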
Related
Suppose I have something like this :
import threading
import time
_FINISH = False

def hang():
    while True:
        if _FINISH:
            break
        print('hanging..')
        time.sleep(10)

def main():
    global _FINISH
    t = threading.Thread(target=hang)
    t.daemon = True
    t.start()
    time.sleep(10)

if __name__ == '__main__':
    main()
If my thread is a daemon, do I need a global _FINISH to control the loop's exit clause? I tried it and I don't seem to need it: when the program exits (in this case after the sleep), it terminates, which closes the thread too.
But I've seen that code too - is it just bad practice? Can I get away with no global flag for controlling the loop?
According to [Python 3.Docs]: threading - Thread Objects (emphasis is mine):
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property or the daemon constructor argument.
Note: Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
Per above, technically, you don't need the _FINISH logic, as the thread will end when the main one does. But, according to your code, no one (main thread) signals that the thread should end (something like _FINISH = True), so the logic in the thread is useless (therefore it can be removed).
Also, per the above recommendation, you should implement a synchronization mechanism between your threads and avoid making them daemons (in most cases).
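A minimal sketch of that recommendation applied to the question's code (intervals shortened so it finishes quickly; otherwise the names mirror the question):

```python
import threading
import time

finish = threading.Event()
ticks = []  # records each loop iteration instead of printing

def hang():
    while not finish.is_set():
        ticks.append("hanging..")
        finish.wait(0.05)  # like time.sleep, but wakes early when the event is set

t = threading.Thread(target=hang)  # non-daemon: we stop it explicitly
t.start()
time.sleep(0.12)
finish.set()  # the main thread signals the worker to stop
t.join()      # and waits for it to exit cleanly
print(len(ticks) >= 1)  # → True
```

Using Event.wait() instead of time.sleep() means the worker reacts to the shutdown signal immediately rather than finishing its current sleep.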
I was attempting to create a thread class that could be terminated by an exception (since I am trying to have the thread wait on an event) when I created the following:
import sys
import threading

class testThread(threading.Thread):
    def __init__(self):
        super(testThread, self).__init__()
        self.daemon = True

    def run(self):
        try:
            print('Running')
            while 1:
                pass
        except:
            print('Being forced to exit')

test1 = testThread()
test2 = testThread()
print(test1.daemon)
test1.run()
test2.run()
sys.exit()
However, running the program will only print out one Running message, until the other is terminated. Why is that?
The problem is that you're calling the run method.
This is just a plain old method that you implement, which does whatever you put in its body. In this case, the body is an infinite loop, so calling run just loops forever.
The way to start a thread is the start method. This method is part of the Thread class, and what it does is:
Start the thread’s activity.
It must be called at most once per thread object. It arranges for the object’s run() method to be invoked in a separate thread of control.
So, if you call this, it will start a new thread, make that new thread run your run() method, and return immediately, so the main thread can keep doing other stuff.¹ That's what you want here.
¹ As pointed out by Jean-François Fabre, you're still not going to get any real parallelism here. Busy loops are never a great idea in multithreaded code, and if you're running this in CPython or PyPy, almost all of that busy looping is executing Python bytecode while holding the GIL, and only one thread can hold the GIL at a time. So, from a coarse view, things look concurrent: three threads are running, and all making progress. But if you zoom in, there's almost no overlap where two threads progress at once, usually not even enough to make up for the small scheduler overhead.
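A short demonstration of the difference; TestThread and ran_on are illustrative names:

```python
import threading

class TestThread(threading.Thread):
    def run(self):
        # record which thread actually executed run()
        self.ran_on = threading.current_thread().name

t1 = TestThread()
t1.start()   # run() executes in the new thread
t1.join()

t2 = TestThread()
t2.run()     # plain method call: run() executes in the main thread

print(t1.ran_on == t1.name)  # → True
print(t2.ran_on)             # → MainThread
```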
I'm very new to python development, I need to call a function every x seconds.
So I'm trying to use a timer for that, something like:
def start_working_interval():
    def timer_tick():
        do_some_work()  # needs to be called on the main thread
        timer = threading.Timer(10.0, timer_tick)
        timer.start()

    timer = threading.Timer(10.0, timer_tick)
    timer.start()
The do_some_work() method needs to be called on the main thread, and I think using the timer causes it to execute on a different thread.
So my question is: how can I call this method on the main thread?
I'm not sure what you're trying to achieve, but I played with your code and did this:
import threading
import datetime

def do_some_work():
    print(datetime.datetime.now())

def start_working_interval():
    def timer_tick():
        do_some_work()
        timer = threading.Timer(10.0, timer_tick)
        timer.start()

    timer_tick()

start_working_interval()
So basically what I did was to set the Timer inside timer_tick() so that it calls itself after 10 seconds, and so on, but I removed the second timer.
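The self-rescheduling idea above can be wrapped in a small class with a stop() method, so the chain of timers can be cancelled; note the callback still runs on a timer thread, not the main thread. The class and its names are illustrative (interval shortened for demonstration):

```python
import threading
import time

class RepeatingTimer:
    """Calls func every `interval` seconds until stop() is called."""
    def __init__(self, interval, func):
        self.interval = interval
        self.func = func
        self.timer = None
        self.stopped = False

    def _tick(self):
        if self.stopped:
            return
        self.func()
        # reschedule ourselves; each tick runs on a fresh timer thread
        self.timer = threading.Timer(self.interval, self._tick)
        self.timer.start()

    def start(self):
        self._tick()

    def stop(self):
        self.stopped = True
        if self.timer is not None:
            self.timer.cancel()

calls = []
r = RepeatingTimer(0.03, lambda: calls.append(time.time()))
r.start()
time.sleep(0.1)
r.stop()
print(len(calls) >= 2)  # → True
```

Without stop(), the chain of timers would reschedule forever and keep the process alive.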
I needed to do this too, here's what I did:
import time
import threading

MAXBLOCKINGPERIOD = 5  # maximum time a new task will wait before its presence in the queue gets noticed

class repeater:
    repeatergroup = []  # our only static data member: the current list of repeaters that need to be serviced

    def __init__(self, callback, interval):
        self.callback = callback
        self.interval = abs(interval)  # because negative makes no sense; an assert would probably be better
        self.reset()
        self.processing = False

    def reset(self):
        self.nextevent = time.time() + self.interval

    def whennext(self):
        return self.nextevent - time.time()  # time until next event

    def service(self):
        if time.time() >= self.nextevent:
            if self.processing:  # or however you want to be re-entrant safe or thread safe
                return 0
            self.processing = True
            self.callback(self)  # just stuff all your args into the class and pull them back out?
            # use this calculation if you don't want slew
            self.nextevent += self.interval
            # use this calculation if you do want slew/don't want backlog
            # self.reset()
            # or put it just before the callback
            self.processing = False
            return 1
        return 0

    # this is the transition code between class and classgroup
    # I had these three as a property getter and setter but it was behaving badly/oddly
    def isenabled(self):
        return self in self.repeatergroup

    def start(self):
        if self not in self.repeatergroup:
            self.repeatergroup.append(self)
            # another logical place to call reset if you don't want backlog:
            # self.reset()

    def stop(self):
        if self in self.repeatergroup:
            self.repeatergroup.remove(self)

    # group calls; in c++ I'd make these static
    def serviceall(self):  # the VB hacker in me wants to name this doevents(), the c hacker in me wants to name it probe()
        ret = 0
        for r in self.repeatergroup:
            ret += r.service()
        return ret

    def minwhennext(self, maxwait):  # this should probably be hidden
        ret = maxwait
        for r in self.repeatergroup:
            ret = min(ret, r.whennext())
        return ret

    def sleep(self, seconds):
        if threading.current_thread() is not threading.main_thread():
            # if we're not on the main thread, don't process handlers, just sleep
            time.sleep(seconds)
            return
        endtime = time.time() + seconds  # record when the caller wants control back
        while time.time() <= endtime:  # spin until then
            while self.serviceall() > 0:  # service each member of the group until none needs service
                if time.time() >= endtime:
                    return  # break out of the service loop if the caller needs control back already
            # done with servicing for a while; yield control to the os until we have
            # another repeater to service or it's time to return control to the caller
            minsleeptime = min(endtime - time.time(), MAXBLOCKINGPERIOD)
            # the smaller of the caller's requested blocking time and our sanity number
            # (1 min might be fine for some systems, 5 seconds is good for others,
            # 0.25 to 0.03 might be better if there could be video refresh code waiting,
            # 0.15-0.3 seems a common range for software debouncing of hardware buttons)
            minsleeptime = self.minwhennext(minsleeptime)
            time.sleep(max(0, minsleeptime))
###################################################################
# and now some demo code:

def handler1(repeater):
    print("latency is currently {0:0.7}".format(time.time() - repeater.nextevent))
    repeater.count += repeater.interval
    print("Seconds: {0}".format(repeater.count))

def handler2(repeater):  # or self if you prefer
    print("Timed message is: {0}".format(repeater.message))
    if repeater.other.isenabled():
        repeater.other.stop()
    else:
        repeater.other.start()
    repeater.interval += 1

def demo_main():
    counter = repeater(handler1, 1)
    counter.count = 0                 # I'm still new enough to python
    counter.start()
    greeter = repeater(handler2, 2)
    greeter.message = "Hello world."  # that this feels like cheating
    greeter.other = counter           # but it simplifies everything.
    greeter.start()
    print("Currently {0} repeaters in service group.".format(len(repeater.repeatergroup)))
    print("About to yield control for a while")
    greeter.sleep(10)
    print("Got control back, going to do some processing")
    time.sleep(5)
    print("About to yield control for a while")
    counter.sleep(20)  # you can use any repeater to access sleep() but
                       # it will only service those currently enabled.
    # notice how it gets behind but tries to catch up; we could add repeater.reset()
    # at the beginning of a handler to make it ignore missed events, or at the
    # end to let the timing slide, depending on what kind of processing we're doing
    # and what sort of sensitivity there is to time.

# now just replace all your main thread's calls to time.sleep() with calls to mycounter.sleep()
# now just add a repeater.sleep(.01) or a while repeater.serviceall(): pass to any loop that will take too long

demo_main()
There's a couple of odd things left to consider:
Would it be better to separate handlers that you'd prefer to run on the main thread from handlers you don't care about? I later went on to add a threadingstyle property which, depending on its value, would run on the main thread only, on either the main thread or a shared/group thread, or standalone on its own thread. That way longer or more time-sensitive tasks could run without slowing the other threads down as much, or closer to their scheduled time.
I wonder whether, depending on the implementation details of threading, my "if not main thread: time.sleep(seconds); return" effectively makes it sufficiently more likely to be the main thread's turn that I shouldn't worry about the difference.
(It seems like adding our MAXBLOCKINGPERIOD as the 3rd arg to the sched library could fix its notorious issue of not servicing newly entered events while an older, further-in-the-future event already sits at the front of the queue.)
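One reading of that last point, sketched on the assumption that the extra argument meant the scheduler's delayfunc: cap how long the scheduler will sleep at once, so run() re-checks its queue often enough to notice events entered after it went to sleep on a far-future one:

```python
import sched
import threading
import time

MAXBLOCKINGPERIOD = 0.02  # illustrative cap

def capped_sleep(seconds):
    # never sleep longer than the cap, so run() re-checks the queue often
    time.sleep(min(seconds, MAXBLOCKINGPERIOD))

s = sched.scheduler(time.time, capped_sleep)
fired = []

s.enter(0.5, 1, lambda: fired.append("far"))
# while run() is dozing toward the far event, slip in a near-term one
threading.Timer(0.05, lambda: s.enter(0, 1, lambda: fired.append("near"))).start()

s.run()
print(fired)  # → ['near', 'far']
```

With the default delayfunc (time.sleep), run() would sleep the full 0.5 s before noticing the near event, so it would fire late.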
I am using Python with Raspbian on the Raspberry Pi. I have a peripheral attached that causes my interrupt handler function to run. Sometimes the interrupt gets fired when the response to the first interrupt has not yet completed, so I added a variable that is set when the interrupt function is entered and reset when it exits; if, upon entering the function, it finds that the lock is set, it immediately exits.
Is there a more standard way of dealing with this kind of thing?
def IrqHandler(self, channel):
    if self.lockout:
        return
    self.lockout = True
    # do stuff
    self.lockout = False
You have a race condition: if IrqHandler is called twice sufficiently close together, both calls can see self.lockout as False and both proceed to set it to True, etc.
The threading module has a Lock object. Usually (by default) acquire() blocks the thread until the lock is released, which means all the interrupts would be queued up and each would take a turn running the handler.
You can also call acquire(False), which returns False immediately if the lock is already held instead of blocking. This is close to your use here:
from threading import Lock

def __init__(self):
    self.irq_lock = Lock()

def IrqHandler(self, channel):
    if not self.irq_lock.acquire(False):  # non-blocking acquire
        return
    # do stuff
    self.irq_lock.release()
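A runnable sketch of that non-blocking pattern, module-level rather than inside a class (irq_lock, handled and the simulated second entry are illustrative):

```python
import threading

irq_lock = threading.Lock()
handled = []

def irq_handler(channel):
    if not irq_lock.acquire(False):  # non-blocking: False means someone holds it
        return                       # a handler run is already in progress
    try:
        handled.append(channel)      # "do stuff"
    finally:
        irq_lock.release()

irq_handler(1)      # acquires, runs, releases
irq_lock.acquire()  # simulate a handler already in progress
irq_handler(2)      # bails out immediately instead of blocking
irq_lock.release()
print(handled)  # → [1]
```

The try/finally guarantees the lock is released even if "do stuff" raises, which the plain flag version cannot guarantee.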
You can tie that in with the Borg pattern; that way you can have several interrupt instances sharing one state.
There is another pattern called the singleton; here is a discussion of the two:
Why is the Borg pattern better than the Singleton pattern in Python
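For reference, a minimal sketch of the Borg pattern mentioned above: every instance shares a single __dict__, so all interrupt-handler instances see the same state:

```python
class Borg:
    _shared_state = {}  # one dict shared by every instance

    def __init__(self):
        self.__dict__ = self._shared_state  # all attributes live in the shared dict

a = Borg()
b = Borg()
a.lockout = True   # set on one instance...
print(b.lockout)   # → True   ...visible on another
print(a is b)      # → False  (distinct objects, unlike a singleton)
```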
I am using a thread in my Python card application.
Whenever I press the refresh button, I call a function in a new thread; inside that main function there is another, child function.
What I want is for the thread to be killed or stopped whenever the child function ends, without closing the application or pressing Ctrl+C.
I started the thread like this:
def on_refresh_mouseClick(self, event):
    thread.start_new_thread(self.readalways, ())
in the "readalways" function i am using while loop, in that while loop whenever the condition satisfies it will call continuousread() function. check it:
def readalways(self):
    while 1:
        cardid = self.parent.uhf.readTagId()
        print("the tag id is", cardid)
        self.status = self.parent.db.checktagid(cardid)
        if len(self.status) != 0:
            break
    print("the value is", self.status[0]['id'])
    self.a = self.status[0]['id']
    self.continuesread()

def continuesread(self):
    .......
    .......
After this continuesread function, the values in the thread should be cleared, because if I click the refresh button again a new thread starts but some of the values come from the old thread.
So I want to kill the old thread when it completes the continuesread function.
Please note that different threads of the same process share their memory; e.g. when you access self.status, you (probably) manipulate an object shared within the whole process. Thus, even if your threads are killed when finishing continuesread (which they probably are), the manipulated object's state will still remain the same.
You could either
hold the status in a local variable instead of an attribute of self,
initialize those attributes when entering readalways,
or save this state in the thread-local storage of a thread object, which is not shared (see the documentation).
The first one seems to be the best as far as I can see.
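For completeness, a sketch of the thread-local option using threading.local (names illustrative): each thread gets its own copy of the attribute, so nothing can leak from an old thread into a new one:

```python
import threading

tls = threading.local()  # attributes set on this object are per-thread
seen = {}

def worker(name):
    tls.status = name        # private to this thread
    seen[name] = tls.status  # record what this thread observed

t1 = threading.Thread(target=worker, args=("first",))
t1.start(); t1.join()
t2 = threading.Thread(target=worker, args=("second",))
t2.start(); t2.join()

print(seen)                    # → {'first': 'first', 'second': 'second'}
print(hasattr(tls, "status"))  # → False: the main thread never set its own copy
```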