Handle a blocking function call in Python

I'm working with the GNU Radio framework, handling flowgraphs I generate to send and receive signals. The flowgraphs initialize and start, but they don't return control flow to my application:
import time

while time.time() < endtime:
    # invoke GRC flowgraph for 1st sequence
    if not seq1_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq1_sent = True
        if time.time() < endtime:
            break
    # invoke GRC flowgraph for 2nd sequence
    if not seq2_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq2_sent = True
        if time.time() < endtime:
            break
The problem is that only the first if statement invokes the flowgraph (which interacts with the hardware), and I'm stuck there. I could use a thread, but I'm inexperienced with timing out threads in Python. I doubt it's even possible, because killing threads doesn't seem to be part of the APIs. This script only has to work on Linux...
How do you properly handle blocking functions in Python, without killing the whole program?
Another more concrete example for this problem is:
import signal, os

def handler(signum, frame):
    # print 'Signal handler called with signal', signum
    # raise IOError("Couldn't open device!")
    import time
    print "wait"
    time.sleep(3)

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(3)
    # This open() may hang indefinitely
    fd = os.open('/dev/ttys0', os.O_RDWR)
    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
How do I still get the final print "hallo" to execute? ;)
Thanks,
Marius

First of all, the use of signals should be avoided at all costs:
1) It may lead to a deadlock. SIGALRM may reach the process BEFORE the blocking syscall (imagine super-high load in the system!), and then the syscall will never be interrupted. Deadlock.
2) Playing with signals may have some nasty non-local consequences. For example, syscalls in other threads may be interrupted, which is usually not what you want. Normally syscalls are restarted when a (non-deadly) signal is received. When you set up a signal handler, this behavior is automatically turned off for the whole process (or thread group, so to say). Check 'man siginterrupt' on that.
Believe me, I have met both problems before and they are not fun at all.
In some cases the blocking can be avoided explicitly, and I strongly recommend using select() and friends (check the select module in Python) to handle blocking writes and reads. This will not solve a blocking open() call, though.
For that I've tested this solution and it works well for named pipes. It opens in a non-blocking way, then turns non-blocking mode off and uses select() to eventually time out if nothing is available.
import sys, os, select, fcntl

f = os.open(sys.argv[1], os.O_RDONLY | os.O_NONBLOCK)

flags = fcntl.fcntl(f, fcntl.F_GETFL, 0)
fcntl.fcntl(f, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)

r, w, e = select.select([f], [], [], 2.0)
if r == [f]:
    print 'ready'
    print os.read(f, 100)
else:
    print 'unready'
os.close(f)
Test this with:
mkfifo /tmp/fifo
python <code_above.py> /tmp/fifo (1st terminal)
echo abcd > /tmp/fifo (2nd terminal)
With some additional effort the select() call can be used as the main loop of the whole program, aggregating all events - you can use libev or libevent, or some Python wrappers around them.
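For illustration, a minimal sketch of such a select()-based main loop (the two pipes and all names here are invented for the demo):

import os, select

# Two pipes stand in for whatever descriptors your program watches.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, 'hello from pipe 1')

watched = {r1: 'pipe 1', r2: 'pipe 2'}
while watched:
    rlist, _, _ = select.select(list(watched), [], [], 5.0)
    if not rlist:
        print 'idle for 5 seconds, shutting down'
        break
    for fd in rlist:
        data = os.read(fd, 4096)
        if data:
            print '%s: %r' % (watched[fd], data)
        else:
            del watched[fd]  # EOF: stop watching this descriptor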
When you can't explicitly force non-blocking behavior - say, you're just using an external library - it's going to be much harder. Threads may do the job, but that is obviously not a state-of-the-art solution and is usually just wrong.
I'm afraid that in general you can't solve this in a robust way - it really depends on WHAT blocks.

IIUC, each top_block has a stop method, so you actually can run the top_block in a thread and issue a stop if the timeout has arrived. It would be better if the top_block's wait() also had a timeout, but alas, it doesn't.
In the main thread you then need to wait for two cases: (a) the top_block completes, and (b) the timeout expires. Busy-waits are evil :-), so you should use the thread's join-with-timeout to wait for the thread. If the thread is still alive after the join, you need to stop the top_block.

You can set a signal alarm that will interrupt your call with a timeout:
http://docs.python.org/library/signal.html
signal.alarm(1) # 1 second
my_blocking_call()
signal.alarm(0)
You can also set a signal handler if you want to make sure it won't destroy your application:
def my_handler(signum, frame):
    pass

signal.signal(signal.SIGALRM, my_handler)
EDIT:
What's wrong with this piece of code? It should not abort your application:
import signal, time

def handler(signum, frame):
    print "Timed-out"

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(3)
    # This sleep() would block for 5 seconds, but the alarm interrupts it
    time.sleep(5)
    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
print "hallo"
The thing is:
The default handler for SIGALRM aborts the application; if you set your own handler, it no longer stops the application.
Receiving a signal usually interrupts system calls (and then unblocks your application).

The easy part of your question relates to the signal handling. From the perspective of the Python runtime, a signal received while the interpreter is making a system call is presented to your Python code as an OSError exception whose errno attribute corresponds to errno.EINTR.
So this probably works roughly as you intended:
#!/usr/bin/env python
import signal, os, errno, time

def handler(signum, frame):
    # print 'Signal handler called with signal', signum
    # raise IOError("Couldn't open device!")
    print "timed out"
    time.sleep(3)

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    try:
        signal.alarm(3)
        # This open() may hang indefinitely
        fd = os.open('/dev/ttys0', os.O_RDWR)
    except OSError, e:
        if e.errno != errno.EINTR:
            raise e
    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
Note I've moved the import of time out of the function definition, as it seems poor form to hide imports that way. It's not at all clear to me why you're sleeping in your signal handler and, in fact, it seems like a rather bad idea.
The key point I'm trying to make is that any (non-ignored) signal will interrupt your main line of Python code execution. Your handler will be invoked with arguments indicating which signal number triggered the execution (allowing for one Python function to be used for handling many different signals) and a frame object (which could be used for debugging or instrumentation of some sort).
Because the main flow through the code is interrupted, it's necessary for you to wrap that code in some exception handling in order to regain control after such events have occurred. (Incidentally, if you're writing code in C you'd have the same concern; you have to be prepared for any of your library functions with underlying system calls to return errors with EINTR in the system errno, handling that by looping back to retry or branching to some alternative in your main line, such as proceeding to some other file, or without any file/input, etc.)
As others have indicated in their responses to your question, basing your approach on SIGALRM is likely to be fraught with portability and reliability issues. Worse, some of these issues may be race conditions that you'll never encounter in your testing environment and that may only occur under conditions that are extremely hard to reproduce. The ugly details tend to be in cases of re-entrancy: what happens if signals are dispatched during execution of your signal handler?
I've used SIGALRM in some scripts and it hasn't been an issue for me under Linux. The code I was working on was suitable to the task. It might be adequate for your needs.
Your primary question is difficult to answer without knowing more about how this GNU Radio code behaves, what sorts of objects you instantiate from it, and what sorts of objects they return.
Glancing at the docs to which you've linked, I see that they don't seem to offer any sort of "timeout" argument or setting that could be used to limit blocking behavior directly. In the table under "Controlling Flow Graphs" I see that they specifically say that .run() can execute indefinitely or until SIGINT is received. I also note that .start() can start threads in your application and, it seems, returns control to your Python code while those are running. (That seems to depend on the nature of your flow graphs, which I don't understand sufficiently.)
It sounds like you could create your flow graphs, .start() them, and then (after some time processing or sleeping in your main line of Python code) call the .lock() method on your controlling object (tb?). This, I'm guessing, puts the Python representation of the state (the Python object) into a quiescent mode to allow you to query the state or, as they say, reconfigure your flow graph. If you call .run(), it will call .wait() after it calls .start(), and .wait() will apparently run until either all blocks "indicate they are done" or you call the object's .stop() method.
So it sounds like you want to use .start() and neither .run() nor .wait(); then call .stop() after doing any other processing (including time.sleep()).
Perhaps something as simple as:
tb = send_seq_2.top_block()
tb.start()
time.sleep(endtime - time.time())
tb.stop()
seq1_sent = True
tb = send_seq_2.top_block()
tb.start()
seq2_sent = True
.. though I'm suspicious of my time.sleep() there. Perhaps you want to do something else where you query the tb object's state (perhaps sleeping for smaller intervals, calling its .lock() method, accessing attributes that I know nothing about, and then calling its .unlock() before sleeping again).
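For instance, a hedged sketch of that smaller-interval polling variant, reusing the question's names and the documented .start()/.stop() calls:

tb = send_seq_2.top_block()
tb.start()
while time.time() < endtime:
    time.sleep(0.1)  # poll in small intervals instead of one long sleep
tb.stop()
seq1_sent = True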

if not seq1_sent:
    tb = send_seq_2.top_block()
    tb.Run(True)
    seq1_sent = True
    if time.time() < endtime:
        break
If 'time.time() < endtime' is true, then you will break out of the loop and the seq2_sent code will never be reached; maybe you meant 'time.time() > endtime' in that test?
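In other words, the corrected test would presumably look like this (only the comparison changes):

if not seq1_sent:
    tb = send_seq_2.top_block()
    tb.Run(True)
    seq1_sent = True
    if time.time() > endtime:  # break only once the deadline has passed
        break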

You could try using deferred execution... the Twisted framework uses Deferreds a lot:
http://www6.uniovi.es/python/pycon/papers/deferex/
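A hedged sketch of the Deferred idea (assuming a reasonably modern Twisted; Deferred.addTimeout arrived in Twisted 16.5, and blocking_call plus the 5-second timeout are invented for the example):

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def blocking_call():
    import time
    time.sleep(10)          # stands in for the real blocking work
    return 'done'

def on_result(result):
    print 'result:', result

def on_timeout(failure):
    print 'timed out:', failure.value

d = deferToThread(blocking_call)  # run the blocking call in a thread pool
d.addTimeout(5, reactor)          # errback with a TimeoutError after 5 s
d.addCallbacks(on_result, on_timeout)
d.addBoth(lambda _: reactor.stop())
reactor.run()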

You mention killing threads in Python - this is partially possible, although you can kill/interrupt another thread only while Python code is running, not while it's in C code, so this may not help you the way you want.
see this answer to another question:
python: how to send packets in multi thread and then the thread kill itself
or google for killable python threads for more details like this:
http://code.activestate.com/recipes/496960-thread2-killable-threads/
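For reference, a hedged sketch of what recipes like those do under the hood: they use ctypes to raise an exception in the target thread via CPython's PyThreadState_SetAsyncExc. Note the exception is only delivered while the thread executes Python bytecode, never while it is blocked inside C code, which is exactly the limitation mentioned above.

import ctypes
import threading
import time

def async_raise(tid, exctype):
    # Ask CPython to raise exctype in the thread with identifier tid.
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(tid), ctypes.py_object(exctype))
    if res > 1:
        # More than one thread affected: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
        raise SystemError('PyThreadState_SetAsyncExc failed')

def worker():
    try:
        while True:
            pass  # pure-Python loop, so the exception can be delivered
    except KeyboardInterrupt:
        print 'worker interrupted'

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
async_raise(t.ident, KeyboardInterrupt)
t.join()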

If you want to set a timeout on a blocking function, threading.Thread has the method join(timeout), which blocks until the timeout expires.
Basically, something like this should do what you want:
import threading

# Note: the thread target must actually run the flowgraph, not just construct it
my_thread = threading.Thread(target=lambda: send_seq_2.top_block().run())
my_thread.start()
my_thread.join(TIMEOUT)


Set function timeout without having to use contextlib [duplicate]

I looked online and found some SO discussions and ActiveState recipes for running code with a timeout. It looks like there are some common approaches:
Use a thread that runs the code, and join it with the timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (the recipes use the private _Thread__stop function), so it is bad practice.
Use signal.SIGALRM - but this approach does not work on Windows!
Use a subprocess with a timeout - but this is too heavy. What if I want to start an interruptible task often? I don't want to fire up a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but an actual way to solve the actual problem: I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.
If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.
If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

pool = multiprocessing.Pool(1)
print 'starting....'
res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
for timeout in range(1, 10):
    try:
        print '{0}: {1}'.format(timeout, res.get(timeout))
    except multiprocessing.TimeoutError:
        print '{0}: timed out'.format(timeout)
print 'end'
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
I found this in the eventlet library:
http://eventlet.net/doc/modules/timeout.html
from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ...  # execution here is limited by timeout
finally:
    timeout.cancel()
For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient or not depends on how co-operating or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
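A minimal sketch of that trace-function idea (run_with_timeout and CodeTimeout are invented names for the example; the hook only fires on Python-level events, so it cannot interrupt C extensions or blocking I/O):

import sys
import time

class CodeTimeout(Exception):
    pass

def run_with_timeout(func, seconds):
    deadline = time.time() + seconds

    def tracer(frame, event, arg):
        # Called on Python-level events; abort once the deadline passes.
        if time.time() > deadline:
            raise CodeTimeout('timed out after %s seconds' % seconds)
        return tracer

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)

def busy():
    while True:
        pass  # co-operating, pure-Python work

try:
    run_with_timeout(busy, 1.0)
except CodeTimeout, e:
    print e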
Another way is to use faulthandler:
import time
import faulthandler

faulthandler.enable()
try:
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()
N.B.: The faulthandler module is part of the stdlib as of Python 3.3, where these functions are named dump_traceback_later / cancel_dump_traceback_later, as used above.
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter if it's a thread or a subprocess. A command pattern with undo would be useful here.
So it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit while a daemon thread is still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, thus taking any daemon threads with it, is perfectly acceptable in this context.
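A small sketch of that daemon-plus-join-with-timeout pattern (background_work and the 5-second budget are invented for the example):

import threading
import time

def background_work():
    while True:
        time.sleep(0.5)  # stands in for work that is safe to abandon

t = threading.Thread(target=background_work)
t.daemon = True           # main may exit without joining this thread
t.start()
t.join(5.0)               # give the work a five-second budget
# When main ends here, the daemon thread is taken down with the process.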
I've solved it this way - for me it worked great (on Windows, and not heavy at all). I hope it's useful for someone.
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        # Watchdog thread that flips `working` to False after the timeout
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.setDaemon(True)
        timeout_work.start()
        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost unused
            if not self.working:  # the watchdog set the state to False: time to stop
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # Watchdog function: sleep for the specified time; on waking up,
        # if the long function is still working, set the lock-protected
        # `working` variable to False.
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # lock-protected state change
        while True:
            self.lock_state.acquire()
            try:
                self.working = state
                break
            finally:
                self.lock_state.release()

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that just sleeps in parallel with the "long work"; when it wakes up (after the timeout) it changes the lock-protected state variable, and the long function checks that variable during its work.
I'm pretty new to Python programming, so if this solution has fundamental errors (resource, timing, or deadlock problems), please respond.
Solving it with the 'with' construct, merging the solution from Timeout function if it takes too long to finish with this thread, which works better:
import threading, time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:

    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        try:
            self.thread.start()
            self.thread.join(self.seconds)
        except Exception, te:
            raise te

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            return self.handle_timeout()

def function():
    while True:
        print "keep printing ..."
        time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT, e:
    print " attention !! exceeded timeout, giving up ... %s " % e

How to close all threads with endless loops? (with _thread! nothing else!)

import _thread
import time

def test1():
    while True:
        time.sleep(1)
        print('TEST1')

def test2():
    while True:
        time.sleep(3)
        print('TEST2')

try:
    _thread.start_new_thread(test1, ())
    _thread.start_new_thread(test2, ())
except:
    print("ERROR")
How can I stop the two threads, for example in case of a KeyboardInterrupt?
Because with "except KeyboardInterrupt" the threads are still running :/
Important:
The question is about closing threads only with the module _thread!
Is it possible?
There's no way to directly interact with another thread, except for the main thread. While some platforms do offer thread cancel or kill semantics, Python doesn't expose them, and for good reason.1
So, the usual solution is to use some kind of signal to tell everyone to exit. One possibility is a done flag with a Lock around it:
done = False
donelock = _thread.allocate_lock()

def test1():
    while True:
        try:
            donelock.acquire()
            if done:
                return
        finally:
            donelock.release()
        time.sleep(1)
        print('TEST1')

_thread.start_new_thread(test1, ())
time.sleep(3)
try:
    donelock.acquire()
    done = True
finally:
    donelock.release()
Of course the same thing is a lot cleaner if you use threading (or a different higher-level API like Qt's threads). Plus, you can use a Condition or Event to make the background threads exit as soon as possible, instead of only after their next sleep finishes.
done = threading.Event()

def test1():
    while True:
        if done.wait(1):
            return
        print('TEST1')

t1 = threading.Thread(target=test1)
t1.start()
time.sleep(3)
done.set()
The _thread module doesn't have an Event or Condition, of course, but you can always build one yourself - or just borrow from the threading source.
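For instance, a hedged sketch of a minimal Event built only on _thread primitives: a lock that is held while the event is unset (the timed acquire needs Python 3.2+, and set() assumes a single setter thread):

import _thread

class SimpleEvent:
    def __init__(self):
        self._lock = _thread.allocate_lock()
        self._lock.acquire()            # held means "not set yet"

    def set(self):                      # assumes a single setter thread
        if self._lock.locked():
            self._lock.release()

    def wait(self, timeout=-1):
        # Returns True if the event was set within the timeout.
        if self._lock.acquire(True, timeout):
            self._lock.release()        # let other waiters through too
            return True
        return False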
Or, if you wanted the threads to be killed asynchronously (which obviously isn't safe if they're, e.g., writing files, but if they're just doing computation or downloads or the like that you don't care about if you're canceling, that's fine), threading makes it even easier:
t1 = threading.Thread(target=test1, daemon=True)
As a side note, the behavior you're seeing isn't actually reliable across platforms:
Background threads created with _thread may keep running, or shut down semi-cleanly, or terminate hard. So, when you use _thread in a portable application, you have to write code that can handle any of the three.
KeyboardInterrupt may be delivered to an arbitrary thread rather than the main thread. If it is, it will usually kill that thread, unless you've set up a handler. So, if you're using _thread, you usually want to handle KeyboardInterrupt and call _thread.interrupt_main().
Also, I don't think your except: is doing what you think it is. That try only covers the start_new_thread calls. If the threads start successfully, the main thread exits the try block and reaches the end of the program. If a KeyboardInterrupt or other exception is raised after that, the except: isn't going to be triggered. (Also, using a bare except: and not even logging which exception got handled is a really bad idea if you want to be able to understand what your code is doing.) Presumably, on your platform, background threads continue running, and the main thread blocks on them (probably at the OS level, not the Python level, so there's no code you can write that gets involved there).
If you want your main thread to keep running so it can handle a KeyboardInterrupt and do something with it (but see the caveats above!), you have to give it code to keep running:
try:
    while True:
        time.sleep(1<<31)
except KeyboardInterrupt:
    pass  # background-thread-killing code goes here
1. TerminateThread on Windows makes it impossible to do all the cleanup Python needs to do. pthread_cancel on POSIX systems like Linux and macOS makes it possible, but very difficult. And the semantics are different enough between the two that trying to write a cross-platform wrapper would be a nightmare. Not to mention that Python supports systems (mostly older Unixes) that don't have the full pthread API, or even have a completely different threading API.

signal.alarm not triggering exception on time

I've slightly modified the signal example from the official docs (bottom of page).
I'm calling sleep for 10 seconds, but I would like an alarm to be raised after 1 second. When I run the following snippet, it takes way more than 1 second to trigger the exception (I think it runs the full 10 seconds).
import signal, os

def handler(signum, frame):
    print 'Interrupted', signum
    raise IOError("Should happen after 1 second")

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)
os.system('sleep 10')
signal.alarm(0)
How can I be sure to terminate a function after a timeout in a single-threaded application?
From the docs:
A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the virtual machine to execute the corresponding Python signal handler at a later point (for example, at the next bytecode instruction).
Therefore, a signal such as the one generated by signal.alarm() can't terminate a function after a timeout in some cases. Either the function should cooperate by allowing other Python code to run (e.g., by calling PyErr_CheckSignals() periodically in C code), or you should use a separate process to terminate the function in time.
Your case can be fixed if you use subprocess.check_call('sleep 10'.split()) instead of os.system('sleep 10').
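A hedged sketch of that fix, reusing the question's snippet: subprocess.check_call() waits in a way that lets the Python-level signal handler run, so the IOError fires after about 1 second instead of the full 10.

import signal, subprocess

def handler(signum, frame):
    raise IOError("Interrupted after 1 second")

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)
try:
    subprocess.check_call(['sleep', '10'])
finally:
    signal.alarm(0)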

Programmatically exiting python script while multithreading

I have some code which runs routinely, and every now and then (like once a month) the program seems to hang somewhere, and I'm not sure where.
I thought I would implement [what has turned out to be not quite] a "quick fix" of checking how long the program has been running. I decided to use multithreading to call the function, and then, while it is running, check the time.
For example:
import datetime
import threading

def myfunc():
    # Code goes here
    pass

t = threading.Thread(target=myfunc)
t.start()
d1 = datetime.datetime.utcnow()
while threading.active_count() > 1:
    if (datetime.datetime.utcnow() - d1).total_seconds() > 60:
        print 'Exiting!'
        raise SystemExit(0)
However, this does not close the other thread (myfunc).
What is the best way to go about killing the other thread?
The docs could be clearer about this. Raising SystemExit tells the interpreter to quit, but "normal" exit processing is still done. Part of normal exit processing is .join()-ing all active non-daemon threads. But your rogue thread never ends, so exit processing waits forever to join it.
As #roippi said, you can do
t.daemon = True
before starting it. Normal exit processing does not wait for daemon threads; your OS should then kill them when the main process exits.
Another alternative:
import os
os._exit(13) # whatever exit code you want goes there
That stops the interpreter "immediately", and skips all normal exit processing.
Pick your poison ;-)
There is no way to kill a thread. You must kill the target from within the target. The best way is with a hook and a queue. It goes something like this:
import datetime
import threading
from Queue import Queue, Empty

# Add a kill_hook arg to your function; kill_hook
# is a queue used to pass messages to the main thread.
def myfunc(kill_hook=None):
    # Code goes here

    # Put this somewhere which is periodically checked;
    # an ideal place to check the hook is when logging.
    try:
        if kill_hook.get_nowait():  # or use kill_hook.get(True, 5) to wait longer
            print 'Exiting!'
            raise SystemExit(0)
    except Empty:
        pass

q = Queue()  # the queue used to pass the kill call
t = threading.Thread(target=myfunc, args=(q,))
t.start()
d1 = datetime.datetime.utcnow()
while threading.active_count() > 1:
    if (datetime.datetime.utcnow() - d1).total_seconds() > 60:
        # if your kill criteria are met, put something in the queue
        q.put(1)
I originally found this answer somewhere online, possibly this. Hope this helps!
Another solution would be to use a separate instance of Python to monitor the other Python process, killing it at the system level with psutil.
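A hedged sketch of that watchdog idea with psutil (note psutil operates on processes, not individual threads; target_pid is assumed to be known):

import psutil

proc = psutil.Process(target_pid)   # handle to the monitored interpreter
proc.terminate()                    # polite SIGTERM first
try:
    proc.wait(timeout=3)
except psutil.TimeoutExpired:
    proc.kill()                     # escalate to SIGKILL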
Wow, I like the daemon and stealth os._exit solutions too!

Anything similar to a microcontroller interrupt handler?

Is there some method where one could use a try statement to catch an error caused by a raise statement, execute code to handle the flag (e.g. update some variables), and then return to the line where the code had been operating when the flag was raised?
I am thinking specifically of an interrupt handler for a microcontroller (which does what I've just described).
I am writing some code that has a thread checking a file to see if it updates, and I want it to interrupt the main program so it is aware of the update, deals with it appropriately, and returns to the line it was running when interrupted.
Ideally, the main program would recognize the flag from the thread regardless of where it is in its execution. A try statement would do this, but how could I return to the line where the flag was raised?
Thanks!
Paul
EDIT:
My attempt at an ISR after the comments, although it looks like a pretty straightforward example of using locks. A small test routine at the bottom demonstrates the code.
import os
import threading
import time

def isr(path, interrupt):
    prev_mod = os.stat(path).st_mtime
    while True:
        new_mod = os.stat(path).st_mtime
        if new_mod != prev_mod:
            print "Updates! Waiting to begin"
            # Prevent entering the critical code while updating,
            # and updating while the critical code is running.
            with interrupt:
                print "Starting updates"
                prev_mod = new_mod
                print "Finished updating"
        else:
            print "No updates"
            time.sleep(1)

def func2(interrupt):
    while True:
        with interrupt:  # Prevent updates while running critical code
            # Execute critical code
            print "Running Crit Code"
            time.sleep(5)
            print "Finished Crit Code"
        # Do other things

interrupt = threading.Lock()
path = "testfil.txt"
t1 = threading.Thread(target=isr, args=(path, interrupt))
t2 = threading.Thread(target=func2, args=(interrupt,))
t1.start()
t2.start()

# Create an "update" to the file
time.sleep(12)
chngfile = open("testfil.txt", "w")
chngfile.write("changing the file")
chngfile.close()
time.sleep(10)
One standard OS way to handle interrupts is to enqueue the interrupt so another kernel thread can process it.
This partially applies in Python.
I am writing some code that has a thread checking a file to see if it updates and I want it to interrupt the main program so it is aware of the update, deals with it appropriately, and returns to the line it was running when interrupted.
You have multiple threads. You don't need to "interrupt" the main program. Simply "deal with it appropriately" in a separate thread. The main thread will find the updates when the other thread has "dealt with it appropriately".
This is why we have locks. To be sure that shared state is updated correctly.
You interrupt a thread by locking a resource the thread needs. You make a thread interruptible by acquiring locks on resources.
In Python we call that pattern "function calls". You cannot do this with exceptions; exceptions only unroll the stack, and always to the first enclosing except clause.
Microcontrollers have interrupts to support asynchronous events; but the same mechanism is also used in software interrupts for system calls, because an interrupt can be configured to have a different set of protection bits; the system call can be allowed to do more than the user program calling it. Python doesn't have any kind of protection levels like this, and so software interrupts are not of much use here.
As for handling asynchronous events, you can do that in python, using the signal module, but you may want to step lightly if you are also using threads.
