Killing cv2 read process after N time - python

I'm desperate.
My code reads every nframe-th frame from videos; sometimes the code just stops for no reason, with no error.
So I decided to somehow raise an error myself.
The thing is, the code does raise an error, but then it ignores it for some reason and just carries on as normal.
I've provided a code block below on which exactly the same method works.
The handler:
def handler(signum, frame):
    print("error")               ## This is printed
    raise Exception('time out')  ## I guess this is what gets raised
The part of the code I want to wrap:
for i in range(0, int(frame_count), nframe):  # basically loads every nframe-th frame from the video
    try:
        video.set(1, i)
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(1)  # At this point the handler does raise the error, but it does not kill this try block.
        _n, frame = video.read()  # This line sometimes hangs for an infinite amount of time; it is the call I want to wrap.
    except Exception as e:
        print('test')  # Code does not get here, yet the handler does raise an exception
        raise e
# Here I need to return False or raise an error, but the code just never gets here.
An example where exactly the same method will work:
import signal
import time

def handler(signum, frame):
    raise Exception('time out')

def function():
    try:
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(5)   # 5 seconds till raise
        time.sleep(10)    # does not get here; an Exception is raised after 5 seconds
    except Exception as e:
        raise e           # This will indeed work

My guess is that the read() call is blocked somewhere inside C code. The signal handler runs and sets a pending exception inside the interpreter, but that exception isn't actually raised in your code until the interpreter regains control from the C call. This is a limitation documented in the signal module:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
One possible workaround is to read the frames in a separate process using the multiprocessing module and hand them back to the main process through a multiprocessing.Queue (whose get() accepts a timeout). There will, however, be extra overhead in sending the frames between processes.
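A minimal sketch of that idea might look like this (untested; the file name, the nframe step of 10, the queue size, and the 5-second timeout are all placeholder assumptions, not values from the question):
import multiprocessing as mp
from queue import Empty

import cv2

def read_frames(path, nframe, q):
    # Worker process: push every nframe-th frame onto the queue.
    video = cv2.VideoCapture(path)
    frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    for i in range(0, frame_count, nframe):
        video.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = video.read()
        q.put((i, frame if ok else None))
    q.put(None)  # sentinel: no more frames

if __name__ == '__main__':
    q = mp.Queue(maxsize=4)
    worker = mp.Process(target=read_frames, args=('video.mp4', 10, q))
    worker.start()
    while True:
        try:
            item = q.get(timeout=5)   # give up if read() hangs in the worker
        except Empty:
            worker.terminate()        # kill the stuck reader
            break
        if item is None:              # worker finished normally
            break
        i, frame = item
        # ... process the frame here ...
    worker.join()
If read() hangs inside the worker, the main process only waits out the queue timeout and can then terminate the worker instead of hanging forever.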
Another approach might be to try and avoid the root of the problem. OpenCV has different video backends (V4L, GStreamer, ffmpeg, ...); one of them might work where another doesn't. Using the second argument to the VideoCapture constructor, you can indicate a preference for which backend to use:
cv.VideoCapture(..., cv.CAP_FFMPEG)
See the documentation for the full list of backends. Depending on your platform and OpenCV build, not all of them will be available.
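As a quick sanity check, recent OpenCV builds can also report which backend a capture actually ended up using; a small untested sketch ('video.mp4' is a placeholder):
import cv2 as cv

cap = cv.VideoCapture('video.mp4', cv.CAP_FFMPEG)
if not cap.isOpened():
    raise RuntimeError('could not open the video with the FFmpeg backend')
print(cap.getBackendName())  # e.g. 'FFMPEG', if this build supports it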

Related

Python 2.7 - weird behaviour on finally clause

I've run into a very strange problem that is making me wonder whether I understand exception handling at all.
I have some code (posted at the end) that looks more or less like this:
try:
    doSomething()
finally:
    print 'bye'
The code in the finally clause is not being executed when I exit my program via ctrl+c.
To make matters worse, now consider the following:
try:
    doSomething()
except:  # this could be replaced by except Exception, it doesn't matter
    print 'something'
finally:
    print 'bye'
Now the code in the except clause is not executed... but the code in the finally clause is!
I realize this has to be the fault of the code executed by doSomething(). But my question is, how is it even possible? I thought we could be 100% confident that finally clauses always got executed.
Here goes the real code. It's running on a Raspberry Pi 3. It's an adaptation of the code found here.
import RPi.GPIO as GPIO, time

GPIO.setmode(GPIO.BCM)

# Define function to measure charge time
def RCtime(PiPin):
    # Discharge capacitor
    GPIO.setup(PiPin, GPIO.OUT)
    GPIO.output(PiPin, GPIO.LOW)
    time.sleep(.1)
    time1 = time.time()
    GPIO.setup(PiPin, GPIO.IN)
    if (GPIO.input(PiPin) == GPIO.LOW):
        GPIO.wait_for_edge(PiPin, GPIO.RISING, timeout=1000)
    time_elap = time.time() - time1
    return time_elap * 1e3

# Main program loop
try:
    while True:
        print RCtime(4)  # Measure timing using GPIO4
except Exception:
    print '---------got ya-----------------'
finally:
    print '---Finally---'
    GPIO.cleanup()  # this ensures a clean exit
To be more specific, the behaviour depicted appears when the program is waiting at the GPIO.wait_for_edge(PiPin, GPIO.RISING, timeout=1000) line.
If your code does not handle the KeyboardInterrupt exception, the system has to do it, which means it kills the program. If you do have an exception handler, your code takes over, leaves the loop, and then executes the finally block.
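Concretely, for the code in the question, that suggestion amounts to adding an explicit KeyboardInterrupt handler; an untested sketch (and note the follow-up below suggests the underlying RPi.GPIO bug may still interfere):
try:
    while True:
        print RCtime(4)
except KeyboardInterrupt:
    # our code handles Ctrl+C instead of the interpreter killing the program
    print '---got ctrl+c---'
except Exception:
    print '---------got ya-----------------'
finally:
    print '---Finally---'
    GPIO.cleanup()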
Well, this seems to be a bug related to the way RPi.GPIO handles signals (such as KeyboardInterrupt). This module invokes some functions written in C. The C code sort of catches the KeyboardInterrupt and throws an exception, but it doesn't do it properly: instead of only one exception, two exceptions get "stacked". Because of that, the first thing that runs in my code after an exception gets terminated by the second exception. If I don't include an except block, the finally block starts executing after the first exception and is terminated by the second one. If I do include an except block, it tries to run after the first exception, fails because of the second exception, and then execution goes to the finally block.
I only found one forum where people deal with a similar problem.
After my previous experiment I was curious how this works with input() [https://svn.python.org/projects/python/trunk/Parser/myreadline.c]. I replaced the sem.acquire() with raw_input() and ran the same tests. Now the inner exception really is taken, so it works like the OP expected. The exception, however, is KeyboardInterrupt, not the special exception from the IPC module.
So I looked in the source code to see how they did it:
The code is in Parser/myreadline.c.
The input code there calls PyErr_CheckSignals() and PyOS_InterruptOccurred() for proper handling of the interrupt. So it seems the OP should do something similar. Only, to deliver the custom error, you will have to do some other stuff. I don't know what, but maybe calling PyErr_SetString is sufficient, as it might overwrite the KeyboardInterrupt stuff.

Masking exceptions in Python?

It is typical to use the with statement to open a file so that the file handle cannot be leaked:
with open("myfile") as f:
    …
But what if the exception occurs somewhere within the open call? The open function is very likely not an atomic instruction in the Python interpreter, so it's entirely possible that an asynchronous exception such as KeyboardInterrupt would be thrown* at some moment before the open call has finished, but after the system call has already completed.
The conventional way to handle this (with, for example, POSIX signals) is to use a masking mechanism: while masked, the delivery of exceptions is suspended until they are later unmasked. This allows operations such as open to be implemented in an atomic way. Does such a primitive exist in Python?
[*] One might say it doesn't matter for KeyboardInterrupt since the program is about to die anyway, but that is not true of all programs. It's conceivable that a program might choose to catch KeyboardInterrupt at the top level and continue execution, in which case leaked file handles can add up over time.
I do not think it's possible to mask exceptions; you can mask signals but not exceptions. In your case, KeyboardInterrupt is the exception that is raised when signal.SIGINT is delivered (which is Ctrl+C).
It is not possible to mask exceptions because, well, it does not make sense, right? Say you are doing open('file', 'r') but the file does not exist; this causes open to throw an IOError exception. We should not be able to mask these kinds of exceptions: it makes no sense to mask it, because open could never complete in that case.
exceptions – anomalous or exceptional conditions requiring special processing
For the KeyboardInterrupt exception it's different because, like I said, it's actually a signal that causes the KeyboardInterrupt exception to be raised.
You can only mask signals on Unix, starting from Python 3.3, using the function signal.pthread_sigmask [Reference].
For that you will have to move the context expression to a different block, so that you can mask the signal, run the context expression to get the context manager, and then unmask the signal. Sample code would look like this (please note I have not personally tested it):
import signal

signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGINT])
with <context expression> as variable:  # in your case, open('filename', 'r')
    signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGINT])
    ...
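One caveat with that sketch: if the context expression itself raises, SIGINT stays blocked. A slightly safer variant (equally untested) keeps the unmask in a finally clause:
import signal

signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGINT])
try:
    f = open('filename', 'r')   # the part that must not be interrupted
finally:
    signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGINT])

with f:
    ...  # Ctrl+C is delivered normally again from here on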
Some clarification: it seems that asynchronous exceptions are not commonly used in Python. The standard library only documents KeyboardInterrupt AFAIK. Other libraries can implement their own via signal handlers, but I don't think (or hope?) this is a common practice, as asynchronous exceptions are notoriously tricky to work with.
Here is a naive solution that won't work:
try:
    handle = acquire_resource(…)
except BaseException as e:
    handle.release()
    raise e
else:
    with handle:
        …
The exception-handling part is still vulnerable to exceptions: a KeyboardInterrupt could occur a second time after the exception is caught but before release is complete.
There is also a "gap" between the end of the try statement and the beginning of the with statement where it is vulnerable to exceptions.
I don't think there's any way to make it work this way.
Thinking from a different perspective, it seems that the only way asynchronous exceptions can arise is from signals. If this is true, one could mask them as @AnandSKumar suggested. However, masking is not portable, as it requires pthreads.
Nonetheless, we can fake masking with a little trickery:
import signal

def masking_handler(queue, prev_handler):
    def handler(*args):
        queue.append(lambda: prev_handler[0](*args))
    return handler

mask_stack = []

def mask():
    queue = []
    prev_handler = []
    new_handler = masking_handler(queue, prev_handler)
    # swap out the signal handler with our own
    prev_handler.append(signal.signal(signal.SIGINT, new_handler))
    mask_stack.append((prev_handler[0], queue))

def unmask():
    prev_handler, queue = mask_stack.pop()
    # restore the original signal handler
    signal.signal(signal.SIGINT, prev_handler)
    # the remaining part may never run if a signal arrives right now,
    # but that's fine
    for event in queue:
        event()

mask()
with acquire_resource(…) as handle:
    unmask()
    …
This will work if SIGINT is the only source that we care about. Unfortunately it breaks down for multiple signals, not just because we don't know which ones are being handled, but also because we can't swap out multiple signals atomically!

Is there a way for workers in multiprocessing.Pool's apply_async to catch errors and continue?

When using multiprocessing.Pool's apply_async(), what happens when the code in a worker breaks? I think this mostly means exceptions, but there may be other things that make the worker functions fail.
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
for f in files:
    pool.apply_async(workerfunct, args=args, callback=callbackfunct)
As I understand it right now, the process/worker fails (all other processes continue) and anything past a thrown error is not executed, EVEN if I catch the error with try/except.
As an example, usually I'd catch exceptions, put in a default value and/or print an error message, and the code continues. If my callback function involves writing to a file, that's done with the default values.
This answerer wrote a little about it:
I suspect the reason you're not seeing anything happen with your example code is because all of your worker function calls are failing. If a worker function fails, callback will never be executed. The failure won't be reported at all unless you try to fetch the result from the AsyncResult object returned by the call to apply_async. However, since you're not saving any of those objects, you'll never know the failures occurred. If I were you, I'd try using pool.apply while you're testing so that you see errors as soon as they occur.
If you're using Python 3.2+, you can use the error_callback keyword argument to handle exceptions raised in workers.
pool.apply_async(workerfunct, args=args, callback=callbackfunct, error_callback=handle_error)
handle_error will be called with the exception object as an argument.
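For example, a minimal error_callback (handle_error, the file names, and the pool size here are just placeholders) that logs the failure and lets the pool keep going might look like this:
import multiprocessing as mp

def handle_error(exc):
    # Runs in the main process with the exception the worker raised.
    print('worker failed: %r' % exc)

def workerfunct(path):
    raise ValueError('cannot process %s' % path)

if __name__ == '__main__':
    pool = mp.Pool(2)
    for f in ['a.txt', 'b.txt']:
        pool.apply_async(workerfunct, args=(f,), error_callback=handle_error)
    pool.close()
    pool.join()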
If you're not, you have to wrap all your worker functions in a try/except to ensure your callback is executed. (I think you got the impression that this wouldn't work from my answer in that other question, but that's not the case. Sorry!):
def workerfunct(*args):
    try:
        ...  # Stuff
    except Exception as e:
        return e  # Do something here, maybe return e?

pool.apply_async(workerfunct, args=args, callback=callbackfunct)
You could also use a wrapper function if you can't/don't want to change the function you actually want to call:
def wrapper(func, *args):
    try:
        return func(*args)
    except Exception as e:
        return e

pool.apply_async(wrapper, args=(workerfunct, *args), callback=callbackfunct)
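Note that with the wrapper approach the callback receives whatever the wrapper returned, so it should check whether it got a real result or an exception object; roughly (a small sketch, not from the original answer):
def callbackfunct(result):
    if isinstance(result, Exception):
        print('worker failed: %r' % result)  # the wrapper returned the exception
        return
    # ... handle a normal result ...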

Python3: purge exception chain?

I'm trying to raise an exception within an except: block but the interpreter tries to be helpful and prints stack traces 'by force'. Is it possible to avoid this?
A little bit of background information:
I'm toying with urwid, a TUI library for Python. The user interface is started by calling urwid.MainLoop.run() and ended by raising urwid.ExitMainLoop(). So far this works fine, but what happens when another exception is raised? E.g. when I catch KeyboardInterrupt (which the urwid MainLoop does not), I do some cleanup and want to end the user interface by raising the appropriate exception. But this results in a screen full of stack traces.
A little research showed that Python 3 remembers chained exceptions and that one can explicitly raise an exception with a 'cause': raise B() from A(). I learned a few ways to change or append data on the raised exceptions, but I found no way to disable this feature. I'd like to avoid the printing of stack traces and lines like "The above exception was the direct cause of...", and just raise the interface-ending exception within an except: block like I would outside of one.
Is this possible or am I doing something fundamentally wrong?
Edit:
Here's an example resembling my current architecture, resulting in the same problem:
#!/usr/bin/env python3
import time

class Exit_Main_Loop(Exception):
    pass

# UI main loop
def main_loop():
    try:
        while True:
            time.sleep(0.1)
    except Exit_Main_Loop as e:
        print('Exit_Main_Loop')
        # do some UI-related clean up

# my main script
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
Unfortunately I can't change main_loop to except KeyboardInterrupt as well. Is there a pattern to solve this?
I still don't quite understand your explanation, but from the code:
try:
main_loop()
except KeyboardInterrupt as e:
print('KeyboardInterrupt')
# do some clean up
raise Exit_Main_Loop() # signal the UI to terminate
There is no way that main_loop could ever see the Exit_Main_Loop() exception. By the time you get to the KeyboardInterrupt handler, main_loop is guaranteed to have already finished (in this case, because of an unhandled KeyboardInterrupt), so its exception handler is no longer active.
So, what happens is that you raise a new exception that nobody catches. And when an exception gets to the top of your code without being handled, Python handles it automatically by printing a traceback and quitting.
If you want to convert one type of exception into another so main_loop can handle it, you have to do that somewhere inside the try block.
You say:
Unfortunately I can't change main_loop to except KeyboardInterrupt as well.
If that's true, there's no real answer to your problem… but I'm not sure there's a problem in the first place, other than the one you created. Just remove the Exit_Main_Loop() from your code, and isn't it already doing what you wanted? If you're just trying to prevent Python from printing a traceback and exiting, this will take care of it for you.
If there really is a problem—e.g., the main_loop code has some cleanup code that you need to get executed no matter what, and it's not getting executed because it doesn't handle KeyboardInterrupt—there are two ways you could work around this.
First, as the signal docs explain:
The signal.signal() function allows to define custom handlers to be executed when a signal is received. A small number of default handlers are installed: … SIGINT is translated into a KeyboardInterrupt exception.
So, all you have to do is replace the default handler with a different one:
import signal

def handle_sigint(signum, frame):
    raise ExitMainLoop()

signal.signal(signal.SIGINT, handle_sigint)
Just do this before you start main_loop, and you should be fine. Keep in mind that there are some limitations with threaded programs, and with Windows, but if none of those limitations apply, you're golden; a ctrl-C will trigger an ExitMainLoop exception instead of a KeyboardInterrupt, so the main loop will handle it. (You may want to also add an except ExitMainLoop: block in your wrapper code, in case there's an exception outside of main_loop. However, you could easily write a contextmanager that sets and restores the signal around the call to main_loop, so there isn't any outside code that could possibly raise it.)
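Such a context manager might look roughly like this (an untested sketch; ExitMainLoop follows the spelling in the snippet above, which corresponds to the question's Exit_Main_Loop, and main_loop is the function from the question):
import signal
from contextlib import contextmanager

@contextmanager
def sigint_raises(exc_type):
    def handler(signum, frame):
        raise exc_type()
    old = signal.signal(signal.SIGINT, handler)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, old)  # restore the previous handler

with sigint_raises(ExitMainLoop):
    main_loop()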
Alternatively, even if you can't edit the main_loop source code, you can always monkeypatch it at runtime. Without knowing what the code looks like, it's impossible to explain exactly how to do this, but there's almost always a way to do it.

Overriding basic signals (SIGINT, SIGQUIT, SIGKILL??) in Python

I'm writing a program that adds normal UNIX accounts (i.e. modifying /etc/passwd, /etc/group, and /etc/shadow) according to our corp's policy. It also does some slightly fancy stuff like sending an email to the user.
I've got all the code working, but there are three pieces of code that are very critical, which update the three files above. The code is already fairly robust because it locks those files (e.g. /etc/passwd.lock), writes to a temporary file (e.g. /etc/passwd.tmp), and then overwrites the original file with the temporary one. I'm fairly pleased that it won't interfere with other running copies of my program or the system useradd, usermod, passwd, etc. programs.
The thing that I'm most worried about is a stray ctrl+c, ctrl+d, or kill command in the middle of these sections. This has led me to the signal module, which seems to do precisely what I want: ignore certain signals during the "critical" region.
I'm using an older version of Python, which doesn't have signal.SIG_IGN, so I have an awesome "pass" function:
def passer(*a):
    pass
The problem that I'm seeing is that signal handlers don't work the way that I expect.
Given the following test code:
def passer(a=None, b=None):
    pass

def signalhander(enable):
    signallist = (signal.SIGINT, signal.SIGQUIT, signal.SIGABRT, signal.SIGPIPE,
                  signal.SIGALRM, signal.SIGTERM, signal.SIGKILL)
    if enable:
        for i in signallist:
            signal.signal(i, passer)
    else:
        for i in signallist:
            signal.signal(i, abort)
    return

def abort(a=None, b=None):
    sys.exit('\nAccount was not created.\n')
    return

signalhander(True)
print('Enabled')
time.sleep(10)  # ^C during this sleep
The problem with this code is that a ^C (SIGINT) during the time.sleep(10) call causes that function to stop, and then, my signal handler takes over as desired. However, that doesn't solve my "critical" region problem above because I can't tolerate whatever statement encounters the signal to fail.
I need some sort of signal handler that will just completely ignore SIGINT and SIGQUIT.
The Fedora/RH command "yum" is written in Python and does basically exactly what I want. If you press ^C while it's installing anything, it prints a message like "Press ^C within two seconds to force kill." Otherwise, the ^C is ignored. I don't really care about the two-second warning since my program completes in a fraction of a second.
Could someone help me implement a signal handler for CPython 2.3 that doesn't cause the current statement/function to cancel before the signal is ignored?
As always, thanks in advance.
Edit: After S.Lott's answer, I've decided to abandon the signal module.
I'm just going to go back to try: except: blocks. Looking at my code, there are two things that happen for each critical region that cannot be aborted: overwriting the file with file.tmp, and removing the lock once finished (otherwise other tools will be unable to modify the file until the lock is manually removed). I've put each of those in its own function inside a try: block, and the except: simply calls the function again. That way the function will just re-call itself in the event of KeyboardInterrupt or EOFError, until the critical code is completed.
I don't think that I can get into too much trouble since I'm only catching user-provided exit commands, and even then, only for two to three lines of code. Theoretically, if those exceptions could be raised fast enough, I suppose I could get the "maximum recursion depth exceeded" error, but that seems unlikely.
Any other concerns?
Pseudo-code:
import os, shutil

def criticalRemoveLock(file):
    try:
        if os.path.isfile(file):
            os.remove(file)
        else:
            return True
    except (KeyboardInterrupt, EOFError):
        return criticalRemoveLock(file)

def criticalOverwrite(tmp, file):
    try:
        if os.path.isfile(tmp):
            shutil.copy2(tmp, file)
            os.remove(tmp)
        else:
            return True
    except (KeyboardInterrupt, EOFError):
        return criticalOverwrite(tmp, file)
There is no real way to make your script truly safe. Of course you can ignore signals and catch a KeyboardInterrupt using try: except:, but it is up to your application to be idempotent against such interrupts, and it must be able to resume operations after dealing with an interrupt at some kind of savepoint.
The only thing that you can really do is to work on temporary files (not the original files) and move them into the final destination once the work is done. Such a move is supposed to be "atomic" from the filesystem perspective. Otherwise, in case of an interrupt, restart your processing from the start with clean data.
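A sketch of that temp-file-then-move idea on modern Python (the helper name atomic_write is mine, and it assumes the temporary file and the target live on the same filesystem so the rename is atomic):
import os
import tempfile

def atomic_write(path, data):
    # Write to a temporary file in the same directory, then rename it over
    # the target; readers only ever see the old or the new complete file.
    dirname = os.path.dirname(path) or '.'
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
        os.rename(tmp, path)
    except BaseException:
        os.remove(tmp)  # clean up the temporary file if anything went wrong
        raise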
