A non-blocking way to sleep/wait in Twisted?

Is there a non-blocking way to wait or come back to this function within Twisted?
I have a loop ticker that is just set to tick on a set interval, and that is all working GREAT. However, when stopping it I want to make sure that it isn't currently in a tick doing any work. If it is, I just want Twisted to come back to it and kill it a moment later.
def stop(self):
    import time
    while self.in_tick:
        time.sleep(.001)  # Blocking
    self.active = False
    self.reset()
    self.timer.stop()
Sometimes the above function gets called while another thread is running a tick operation, and I want to finish the tick and then come back and stop this.
I DO NOT want to block the loop in any way during this operation. How could I do so?

It looks a bit strange to do a time.sleep() instead of just waiting for an event to signal and firing a Deferred to do what you want. With threads, you might wake up this thread and check self.in_tick == False, and before you reach self.active = False the other thread starts a new tick, so this has a potential race condition and may not work as you expect anyway. Hopefully you have some other thread synchronization somewhere to make it work.
You can try to split your code into two functions and have one that reschedules itself until the tick is done.
from twisted.internet import reactor

def stop(self):
    # avoid reentrancy
    if not self._stop_call_running:
        self._stop()

def _stop(self):
    if self.in_tick:
        # reschedule ourselves
        self._stop_call_running = reactor.callLater(0.001, self._stop)
        return
    self.active = False
    self.reset()
    self.timer.stop()
    self._stop_call_running = None
You might also look at twisted.internet.task.LoopingCall.
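For reference, a minimal sketch of the LoopingCall approach (do_tick is a hypothetical stand-in for your per-tick work). Because both the ticks and the stop run in the reactor thread, stop() can never land in the middle of a tick:
from twisted.internet import reactor, task

def do_tick():
    # hypothetical per-tick work; runs in the reactor thread
    print("tick")

loop = task.LoopingCall(do_tick)
loop.start(1.0)                    # tick every second, starting now
reactor.callLater(10, loop.stop)   # stop cleanly after 10 seconds
reactor.run()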


How to start a thread again with an Event object in Python?

I want to make a thread and control it with an event object. More specifically, I want the thread to run whenever the event object is set and to wait by itself otherwise, repeatedly.
The below shows a sketchy logic I thought of.
import threading
import time

e = threading.Event()

def start_operation():
    e.wait()
    while e.is_set():
        print('STARTING TASK')
        e.clear()

t1 = threading.Thread(target=start_operation)
t1.start()
e.set()  # first set
e.set()  # second set
I expected t1 to run once the first set has been commanded and to stop itself (due to e.clear() inside it), and then to run again after the second set has been commanded. So, according to what I expected, it should print out 'STARTING TASK' two times. But it shows it only once, and I don't understand why. How am I supposed to change the code to make it run the while loop again whenever the event object is set?
The first problem is that once you exit a while loop, you've exited it. Changing the predicate back won't change anything. Forget about events for a second and just look at this code:
i = 0
while i == 0:
    i = 1
It obviously doesn't matter if you set i = 0 again later, right? You've already left the while loop, and the whole function. And your code is doing exactly the same thing.
You can fix that problem by just adding another while loop around the whole thing:
def start_operation():
    while True:
        e.wait()
        while e.is_set():
            print('STARTING TASK')
            e.clear()
However, that still isn't going to work—except maybe occasionally, by accident.
Event.set doesn't block; it just sets the event immediately, even if it's already set. So, the most likely flow of control here is:
1. The background thread hits e.wait() and blocks.
2. The main thread hits e.set() and sets the event.
3. The main thread hits e.set() and sets the event again, with no effect.
4. The background thread wakes up, does the loop once, and calls e.clear() at the end.
5. The background thread waits forever on e.wait().
(The fact that there's no way to avoid missed signals with events is effectively the reason conditions were invented, and why anything newer than Win32 and Python doesn't bother with events… but a condition isn't sufficient here either.)
If you want the main thread to block until the event is clear, and only then set it again, you can't do that. You need something extra, like a second event, which the main thread can wait on and the background thread can set.
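If you do go the two-event route, a rough sketch might look like the following (all names are made up for illustration; task_ready is set by the main thread, task_done by the worker):
import threading

task_ready = threading.Event()   # main -> worker: there is work to do
task_done = threading.Event()    # worker -> main: that request was handled

def start_operation():
    while True:
        task_ready.wait()
        task_ready.clear()
        print('STARTING TASK')
        task_done.set()

t1 = threading.Thread(target=start_operation, daemon=True)
t1.start()

for _ in range(2):
    task_done.clear()
    task_ready.set()    # request one run
    task_done.wait()    # block until the worker has actually handled it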
But if you want to keep track of multiple set calls, without missing any, you need to use a different sync mechanism. A queue.Queue may be overkill here, but it's dead simple to do in Python, so let's just use that. Of course you don't actually have any values to put on the queue, but that's OK; you can just stick a dummy value there:
import queue
import threading

q = queue.Queue()

def start_operation():
    while True:
        _ = q.get()
        print('STARTING TASK')

t1 = threading.Thread(target=start_operation)
t1.start()
q.put(None)
q.put(None)
And if you later want to add a way to shut down the background thread, just change it to stick meaningful values on the queue:
import queue
import threading

q = queue.Queue()

def start_operation():
    while True:
        if q.get():
            return
        print('STARTING TASK')

t1 = threading.Thread(target=start_operation)
t1.start()
q.put(False)
q.put(False)
q.put(True)

Code does not keep going after starting a thread

I wrote this code to lock the mouse in the middle of the screen:
def lockmouse():
    print "here"
    while True:
        win32api.SetCursorPos((GetSystemMetrics(0)/2, GetSystemMetrics(1)/2))
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, GetSystemMetrics(0)/2, GetSystemMetrics(1)/2, 0, 0)
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, GetSystemMetrics(0)/2, GetSystemMetrics(1)/2, 0, 0)

t = threading.Thread(target=lockmouse())
command = "lockmouse"
if "lockmouse" in command:
    if t.is_alive == False:
        t.start()
        time.sleep(3)
        t._Thread_stop()
and it's not continuing after t.start(). I've been trying different methods to stop the thread, but it doesn't even make it past that line. Does anyone know what the problem is?
It might be the fact that your function isn't indented properly. It should be:
def foo():
    return 'bar'
Also, you seem to only be starting a single thread. What's the point?
EDIT:
I just realised that your function has an infinite loop, and also that you're calling it instead of passing it: target=lockmouse() runs lockmouse immediately on the main thread, so the program never even gets past constructing the Thread, let alone t.start(), because that while loop never finishes. It should be target=lockmouse (no parentheses), and the loop needs some way to end. You either need to restructure your program somehow, or if you want to keep it how it is, see this answer for how to avoid waiting for a thread:
How to avoid waiting for a thread to finish execution - Python
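For illustration only, here is one way the snippet above could be restructured: pass the function object instead of calling it, and use an Event so the loop can be asked to stop rather than relying on the private _Thread_stop. (The click-simulation calls are dropped to keep the sketch short.)
import threading
import time
import win32api
from win32api import GetSystemMetrics

stop_event = threading.Event()

def lockmouse():
    while not stop_event.is_set():
        # keep snapping the cursor back to the centre of the screen
        win32api.SetCursorPos((GetSystemMetrics(0) // 2, GetSystemMetrics(1) // 2))
        time.sleep(0.01)

t = threading.Thread(target=lockmouse)   # note: no parentheses
t.start()
time.sleep(3)
stop_event.set()   # ask the loop to finish
t.join()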

python, calling method on main thread from timer callback

I'm very new to Python development. I need to call a function every x seconds.
So I'm trying to use a timer for that, something like:
def start_working_interval():
    def timer_tick():
        do_some_work()  # needs to be called on the main thread
        timer = threading.Timer(10.0, timer_tick)
        timer.start()

    timer = threading.Timer(10.0, timer_tick)
    timer.start()
The do_some_work() method needs to be called on the main thread, and I think using the timer causes it to execute on a different thread.
So my question is: how can I call this method on the main thread?
I'm not sure what you're trying to achieve, but I played with your code and did this:
import threading
import datetime

def do_some_work():
    print datetime.datetime.now()

def start_working_interval():
    def timer_tick():
        do_some_work()
        timer = threading.Timer(10.0, timer_tick)
        timer.start()

    timer_tick()

start_working_interval()
So basically what I did was set the Timer inside timer_tick() so it will call itself after 10 seconds, and so on, but I removed the second timer.
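Note that after the first call, timer_tick keeps running on the Timer's worker threads, not the main thread. If the work genuinely has to happen on the main thread, one generic pattern (just a sketch, not part of the answer above) is to have the timer only post a request onto a queue and let the main thread drain it:
import threading
import queue
import datetime

work_queue = queue.Queue()

def do_some_work():
    print(datetime.datetime.now())

def timer_tick():
    work_queue.put(do_some_work)             # just post the request
    threading.Timer(10.0, timer_tick).start()

timer_tick()

# main thread: actually runs the work
while True:
    job = work_queue.get()                   # blocks until the timer posts something
    job()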
I needed to do this too, here's what I did:
import threading
import time

MAXBLOCKINGSECONDS = 5  # maximum time that a new task will have to wait before its presence in the queue gets noticed

class repeater:
    repeatergroup = []  # our only static data member; it holds the current list of the repeaters that need to be serviced

    def __init__(self, callback, interval):
        self.callback = callback
        self.interval = abs(interval)  # because negative makes no sense; probably an assert would be better
        self.reset()
        self.processing = False

    def reset(self):
        self.nextevent = time.time() + self.interval

    def whennext(self):
        return self.nextevent - time.time()  # time until next event

    def service(self):
        if time.time() >= self.nextevent:
            if self.processing:  # or however you want to be re-entrant safe or thread safe
                return 0
            self.processing = True
            self.callback(self)  # just stuff all your args into the class and pull them back out?
            # use this calculation if you don't want slew
            self.nextevent += self.interval
            # reuse this calculation if you do want slew/don't want backlog
            # self.reset()
            # or put it just before the callback
            self.processing = False
            return 1
        return 0

    # this is the transition code between class and class group
    # I had these three as a property getter and setter but it was behaving badly/oddly
    def isenabled(self):
        return self in self.repeatergroup

    def start(self):
        if not (self in self.repeatergroup):
            self.repeatergroup.append(self)
        # another logical place to call reset if you don't want backlog:
        # self.reset()

    def stop(self):
        if self in self.repeatergroup:
            self.repeatergroup.remove(self)

    # group calls; in C++ I'd make these static
    def serviceall(self):  # the VB hacker in me wants to name this doevents(), the C hacker in me wants to name this probe
        ret = 0
        for r in self.repeatergroup:
            ret += r.service()
        return ret

    def minwhennext(self, max):  # this should probably be hidden
        ret = max
        for r in self.repeatergroup:
            ret = min(ret, r.whennext())
        return ret

    def sleep(self, seconds):
        if not isinstance(threading.current_thread(), threading._MainThread):  # if we're not on the main thread, don't process handlers, just sleep
            time.sleep(seconds)
            return
        endtime = time.time() + seconds  # record when the caller wants control back
        while time.time() <= endtime:  # spin until then
            while self.serviceall() > 0:  # service each member of the group until none need service
                if time.time() >= endtime:
                    return  # break out of the service loop if the caller needs control back already
            # done with servicing for a while; yield control to the OS until we have
            # another repeater to service or it's time to return control to the caller
            minsleeptime = min(endtime - time.time(), MAXBLOCKINGSECONDS)  # the smaller of the caller's requested blocking time and our sanity number (1 minute might be fine for some systems, 5 seconds is good for others, 0.25 to 0.03 might be better if there could be video refresh code waiting; 0.15-0.3 seems a common range for software debouncing of hardware buttons)
            minsleeptime = self.minwhennext(minsleeptime)
            time.sleep(max(0, minsleeptime))
###################################################################
# and now some demo code:

def handler1(repeater):
    print("latency is currently {0:0.7}".format(time.time() - repeater.nextevent))
    repeater.count += repeater.interval
    print("Seconds: {0}".format(repeater.count))

def handler2(repeater):  # or self if you prefer
    print("Timed message is: {0}".format(repeater.message))
    if repeater.other.isenabled():
        repeater.other.stop()
    else:
        repeater.other.start()
    repeater.interval += 1

def demo_main():
    counter = repeater(handler1, 1)
    counter.count = 0                  # I'm still new enough to python
    counter.start()
    greeter = repeater(handler2, 2)
    greeter.message = "Hello world."   # that this feels like cheating
    greeter.other = counter            # but it simplifies everything.
    greeter.start()
    print("Currently {0} repeaters in service group.".format(len(repeater.repeatergroup)))
    print("About to yield control for a while")
    greeter.sleep(10)
    print("Got control back, going to do some processing")
    time.sleep(5)
    print("About to yield control for a while")
    counter.sleep(20)  # you can use any repeater to access sleep(), but
    # it will only service those currently enabled.
    # notice how it gets behind but tries to catch up; we could add repeater.reset()
    # at the beginning of a handler to make it ignore missed events, or at the
    # end to let the timing slide, depending on what kind of processing we're doing
    # and what sort of sensitivity there is to time.

# now just replace all your main thread's calls to time.sleep() with calls to mycounter.sleep()
# now just add a repeater.sleep(.01) or a "while repeater.serviceall(): pass" to any loop that will take too long
demo_main()
There are a couple of odd things left to consider:
Would it be better to separate handlers that you'd prefer to run on the main thread from handlers that you don't care about? I later went on to add a threadingstyle property which, depending on its value, would run the handler on the main thread only, on either the main thread or a shared/group thread, or standalone on its own thread. That way longer or more time-sensitive tasks could run without slowing the other threads down as much, or could run closer to their scheduled time.
I wonder whether, depending on the implementation details of threading, my 'if not main thread: time.sleep(seconds); return' effectively makes it sufficiently more likely to be the main thread's turn that I shouldn't worry about the difference.
(It seems like adding our MAXBLOCKINGSECONDS as the third argument to the sched library could fix its notorious issue of not servicing new events once older, further-in-the-future events have already hit the front of the queue.)

Python: How to manage and kill worker threads that are either stuck or waiting on a timeout...?

This has been discussed many, many times, but I still don't have a good grasp on how to best accomplish this.
Suppose I have two threads: a main app thread and a worker thread. The main app thread (say it's a wxWidgets GUI thread, or a thread that is looping and accepting user input at the console) could have a reason to stop the worker thread - the user's closing the application, a stop button was clicked, some error occurred in the main thread, whatever.
Commonly suggested is to set up a flag that the thread checks frequently to determine whether to exit. I have two problems with the suggested ways to approach this, however:
First, writing constant checks of a flag into my code makes my code really ugly, and it's very, very prone to problems due to the huge amount of code duplication. Take this example:
def WorkerThread():
    while (True):
        doOp1()  # assume this takes say 100ms.
        if (exitThread == True):
            safelyEnd()
            return
        doOp2()  # this one also takes some time, say 200ms
        if (exitThread == True):
            safelyEnd()
            return
        if (somethingIsTrue == True):
            doSomethingImportant()
            if (exitThread == True): return
            doSomethingElse()
            if (exitThread == True): return
        doOp3()  # this blocks for an indeterminate amount of time - say, it's waiting on a network response
        if (exitThread == True):
            safelyEnd()
            return
        doOp4()  # this is doing some math
        if (exitThread == True):
            safelyEnd()
            return
        doOp5()  # this calls a buggy library that might block forever. We need a way to detect this and kill this thread if it's stuck for long enough...
        saveSomethingToDisk()  # might block while the disk spins up, or while a network share is accessed...whatever
        if (exitThread == True):
            safelyEnd()
            return

def safelyEnd():
    cleanupAnyUnfinishedBusiness()  # do whatever is needed to get things to a workable state even if something was interrupted
    writeWhatWeHaveToDisk()  # it's OK to wait for this since it's so important
If I add more code or change code, I have to make sure I'm adding those check blocks all over the place. If my worker thread is a very lengthy thread, I could easily have tens or even hundreds of those checks. Very cumbersome.
Think of the other problems. If doOp4() does accidentally deadlock, my app will spin forever and never exit. Not a good user experience!
Using daemon threads isn't really a good option either because it denies me the opportunity to execute the safelyEnd() code. This code might be important - flushing disk buffers, writing log data for debugging purposes, etc.
Second, my code might call functions that block where I don't have the opportunity to check frequently. Let's say this function exists but it's in code that I don't have access to - say part of a library:
def doOp4():
    time.sleep(60)  # imagine that this is a network call that waits 60 seconds for a reply before returning
If that timeout is 60 seconds, even if my main thread gives the signal for the thread to end, it still might sit there for 60 seconds, when it would be perfectly reasonable for it to just stop waiting for a network response and exit. If that code is part of a library I didn't write, however, I have no control over how that works.
Even if I did write the code for a network check, I'd basically have to refactor it so that rather than waiting 60 seconds, it loops 60 times and waits 1 second before checking the exit thread! Again, very messy!
The upshot of all of this, is it feels like a good way to be able to implement this easily would be to somehow cause an exception on a specific thread. If I could do that, I could wrap the entire worker thread's code in a try block, and put the safelyEnd() code in the exception handler, or even a finally block.
Is there a way to either accomplish this, or refactor this code with a different technique that will make things work? The thing is, ideally, when the user requests a quit, we want to make them wait the minimum possible amount. It seems that there has to be a simple way to accomplish this, as this is a very common thing in apps!
Most of the thread communication objects don't allow for this type of setup. They might allow for a cleaner way to have an exit flag, but it still doesn't eliminate the need to constantly check that exit flag, and it still won't deal with the thread blocking because of an external call or because it's simply in a busy loop.
The biggest thing for me is really that if I have a long worker thread procedure I have to litter it with hundreds of checks of the flag. This just seems way too messy and doesn't feel like it's very good coding practice. There has to be a better way...
Any advice would be greatly appreciated.
First, you can make this a lot less verbose and repetitive by using an exception, without needing the ability to raise exceptions into the thread from outside, or any other new tricks or language features:
def WorkerThread():
    class ExitThreadError(Exception):
        pass

    def CheckEnd():
        if exitThread:
            raise ExitThreadError()

    try:
        while True:
            doOp1()  # assume this takes say 100ms.
            CheckEnd()
            doOp2()  # this one also takes some time, say 200ms
            CheckEnd()
            # etc.
    except ExitThreadError:
        safelyEnd()
Note that you really ought to be guarding exitThread with a Lock or Condition—which is another good reason to wrap up the check, so you only need to fix that in one place.
Anyway, I've taken out some excessive parentheses, == True checks, etc. that added nothing to the code; hopefully you can still see how it's equivalent to the original.
You can take this even farther by restructuring your function into a simple state machine; then you don't even need an exception. I'll show a ridiculously trivial example, where every state always implicitly transitions to the next state no matter what. For this case, the refactor is obviously reasonable; whether it's reasonable for your real code, only you can really tell.
def WorkerThread():
    states = (doOp1, doOp2, doOp3, doOp4, doOp5)
    current = 0
    while not exitThread:
        states[current]()
        current = (current + 1) % len(states)  # wrap around to keep looping
    safelyEnd()
Neither of these does anything to help you interrupt in the middle of one of your steps.
If you have some function that takes 60 seconds and there's not a damn thing you can do about it, then there's no way to cancel your thread during those 60 seconds and there's not a damn thing you can do about it. That's just the way it is.
But usually, things that take 60 seconds are really doing something like blocking on a select, and there is something you can do about that—create a pipe, stick its read end in the select, and write on the other end to wake up the thread.
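A minimal sketch of that pipe trick, assuming the worker is blocking in select on some socket sock (the names here are placeholders):
import os
import select

wakeup_r, wakeup_w = os.pipe()   # read end goes into the select; write end wakes it up

def worker_wait(sock):
    rlist, _, _ = select.select([sock, wakeup_r], [], [])
    if wakeup_r in rlist:
        os.read(wakeup_r, 1)     # drain the wake-up byte
        return None              # we were asked to stop
    return sock.recv(4096)       # normal path: data arrived

def wake_worker():
    os.write(wakeup_w, b'x')     # called from the main thread to interrupt the select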
Or, if you're feeling hacky, just closing/deleting/etc. a file or other object that the function is waiting on/processing/otherwise using will often guarantee that it fails quickly with an exception. Of course sometimes it guarantees a segfault, or corrupted data, or a 50% chance of exiting and a 50% chance of hanging forever, or… So, even if you can't control that doOp4 function, you'd better be able to analyze its source and/or whitebox test it.
If worst comes to worst, then yes, you do have to either change that one 60-second timeout into 60 1-second timeouts. But usually it won't come to that.
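If it does come to that, a sketch of the chunked version (sock, the timeouts, and exit_event are placeholders; the point is simply that the flag gets checked between short waits instead of after one long one):
import socket
import threading

exit_event = threading.Event()

def recv_with_cancel(sock, total_timeout=60.0, chunk=1.0):
    # swap one 60-second wait for many 1-second waits, checking the flag in between
    sock.settimeout(chunk)
    waited = 0.0
    while waited < total_timeout and not exit_event.is_set():
        try:
            return sock.recv(4096)
        except socket.timeout:
            waited += chunk
    return None   # cancelled or timed out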
Finally, if you really do need to be able to kill a thread, don't use a thread, use a child process. Those are killable.
Just make sure that your process is always in a state where it's safe to kill it—or, if you only care about Unix, use a USR signal and mask it out when the process isn't in a safe-to-kill state.
But if it's not safe to kill your process in the middle of that 60-second doOp4 call, this isn't really going to help you, because you still won't be able to kill it during those 60 seconds.
In some cases, you can have the child process arrange for the parent to clean up for it if it gets killed unexpectedly, or even arrange for it to be cleaned up on the next run (e.g., think of a typical database journal).
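For completeness, a bare-bones sketch of the child-process approach (worker_main stands in for the real doOp1()..doOp5() sequence):
import multiprocessing
import time

def worker_main():
    while True:
        time.sleep(1)   # stand-in for the real work

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker_main)
    p.start()
    time.sleep(5)
    p.terminate()   # hard kill: no cleanup code runs inside the child
    p.join()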
But ultimately, what you're asking for is a contradiction: you want to hard-kill a thread without giving it a chance to finish what it's doing, but you want to guarantee that it finishes what it's doing, and you don't want to rewrite the code to make that possible. So, you need to rethink your design so that it requires something that isn't impossible.
If you do not mind your code running about ten times slower, you can use the Thread2 class implemented below. An example follows that shows how calling the new stop method should kill the thread on the next bytecode instruction. Implementing a cleanup system is left as an exercise for the reader.
import threading
import sys

class StopThread(StopIteration):
    pass

threading.SystemExit = SystemExit, StopThread

class Thread2(threading.Thread):

    def stop(self):
        self.__stop = True

    def _bootstrap(self):
        if threading._trace_hook is not None:
            raise ValueError('Cannot run thread with tracing!')
        self.__stop = False
        sys.settrace(self.__trace)
        super()._bootstrap()

    def __trace(self, frame, event, arg):
        if self.__stop:
            raise StopThread()
        return self.__trace

class Thread3(threading.Thread):

    def _bootstrap(self, stop_thread=False):
        def stop():
            nonlocal stop_thread
            stop_thread = True
        self.stop = stop

        def tracer(*_):
            if stop_thread:
                raise StopThread()
            return tracer
        sys.settrace(tracer)
        super()._bootstrap()

################################################################################

import time

def main():
    test = Thread2(target=printer)
    test.start()
    time.sleep(1)
    test.stop()
    test.join()

def printer():
    while True:
        print(time.time() % 1)
        time.sleep(0.1)

if __name__ == '__main__':
    main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.

Terminate thread immediately when some variable set to true

How do I terminate the thread when some variable is set to true?
import threading

class test(threading.Thread):
    def __init__(self):
        self.stopThread = False
        self.start()

    def start(self):
        while not self.stopThread:
            callOtherFunction()  # which takes almost 30 sec to execute

    def stop(self):
        self.stopThread = True
Now the problem is that if the start function is called and the while loop has started, it will only check the stop condition on the next iteration, once it has completed its internal work. So if the call to callOtherFunction has already been made, it still waits up to 30 seconds to exit. I want to terminate the thread immediately when the variable is set. Is that possible?
This question was answered here:
Is there any way to kill a Thread in Python?
The bottom line is that, if you can help it, you should avoid killing a thread this way. However, if you must, there are some tricks you can try.
Also, to be thread-safe, you should make self.stopThread a threading.Event() and use set() and clear() to signal when the thread should be stopped.
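A small sketch of that Event version, reusing the question's names (note the loop goes in run(); overriding start() as the question does means no new thread is ever created, and the loop runs on the caller's thread). It still only checks between calls, so it cannot cut a 30-second callOtherFunction short:
import threading

class test(threading.Thread):
    def __init__(self):
        super().__init__()
        self.stop_event = threading.Event()

    def run(self):
        while not self.stop_event.is_set():
            callOtherFunction()   # still runs to completion each time

    def stop(self):
        self.stop_event.set()

t = test()
t.start()   # threading.Thread.start() runs run() on a new thread
# ... later, from another thread:
t.stop()
t.join()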
