I have a multiprocess program - 4 processes - and inside each of these I spawn multiple threads, so it is multithreaded as well. I do not have much experience with tracing and debugging such programs; I am getting into it now. I have this code block in my TCP server:
I have exceptions like this:
class Error(Exception):
    """Basic exception"""

class NoDataReceived(Error):
    """Raised when there is no data"""
Now the code
try:
    ...
    if True:
        raise ValueError('No data received')        # <--- this will print
        # raise NoDataReceived('There is no data')  # <--- when I use this it won't print
except ValueError as err:        # built-in
    print('err')
except NoDataReceived as err:    # custom
    print('err')                 # Does NOT print..?
Now the thing is, when I raise ValueError, it prints out; when I raise the custom NoDataReceived, it does not print at all.
I should repeat that this is a multithreaded and multiprocess program (threads within processes); if I run this piece of code in a simple single-threaded script, it works as it should. I do not like this behaviour. Before I build anything on it I need to understand it better. I know I could use sys_traceback, but I really want to make this work as it should, simply.
Do you have an explanation for me my friends?
[Python 3.8]
Thank you for your time, Q.
Related
I have a function which catches KeyboardInterrupt for additional functionality, and I have to write tests for it in pytest, but I don't know how to create a test case that covers the KeyboardInterrupt catch block.
The code is similar to this:
# main_game.py
import json

class Cache:
    other_details = dict()

def save_progress_in_file(progress):
    content = {'progress_percent': progress, **Cache.other_details}
    with open('progress.json', 'w') as file:
        json.dump(content, file)

def loadingBar():
    progress = 0
    while True:
        try:
            ...  # other stuff
            progress = get_progress_percent()
            print('\rLoading' + '.' * (progress // 10) + '\r', end='')
        except KeyboardInterrupt:
            save_progress_in_file(progress)
How do I write tests in pytest in another test file (say test_main_game.py) to cover the KeyboardInterrupt part, so that the coverage CLI tool shows 100% code coverage in its report?
Exceptions like keyboard interrupt signals, out-of-memory failures and such are generally non-deterministic, so you have no guarantee of if or when they will be raised during the normal flow of execution. They originate at the OS level rather than in the interpreter itself (unlike a ValueError, for instance). Given that, there is no reliable way to make those conditions arise on cue and line up with the execution of a unit test.
What you can do is simulate the interrupt somewhere in your try block, raising an exception that redirects execution to the code inside the except block. To do this, some code in the # other stuff section or the get_progress_percent() function should raise KeyboardInterrupt when running under the unit test.
Since it is unknown what is happening in # other stuff, I'll stick with get_progress_percent().
For this, loadingBar() needs a small refactoring so that it accepts a delegate for the get_progress_percent() function, like so:
def loadingBar(progress_loader=get_progress_percent):
    progress = 0
    while True:
        try:
            # other stuff
            progress = progress_loader()
            # print to stdout, etc...
        except KeyboardInterrupt:
            save_progress_in_file(progress)
            return  # stop the loop once progress has been saved
Now if loadingBar() is called without arguments, progress_loader assumes its default value, your original get_progress_percent() function. This is the actual call you make in your program.
To test the alternative flow inside the except block, you can create an additional unit test that calls loadingBar() with a function which raises KeyboardInterrupt.
Your unit test case might look like this:
import unittest

class LoadingBarTestCase(unittest.TestCase):

    def testLoadingBar(self):
        """Test the code in the try block"""
        loadingBar()
        # assertions for other stuff
        # also assert that save_progress_in_file() doesn't get called

    def testLoadingBarInterrupted(self):
        """Test the code in the except block"""
        # mock function to raise the interrupt
        def interrupted_progress_loader():
            raise KeyboardInterrupt()
        # call loadingBar, passing it the mock delegate
        loadingBar(interrupted_progress_loader)
        # assert that save_progress_in_file() got called by the exception handler

if __name__ == '__main__':
    unittest.main()
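Since the question asks about pytest specifically, the second test could also be written as a plain pytest function, using the monkeypatch fixture to stub out save_progress_in_file (this sketch assumes the refactored loadingBar above, which returns after saving):

# test_main_game.py
import main_game

def test_loading_bar_interrupted(monkeypatch):
    saved = []
    # record the call instead of writing progress.json
    monkeypatch.setattr(main_game, 'save_progress_in_file',
                        lambda progress: saved.append(progress))

    def interrupted_progress_loader():
        raise KeyboardInterrupt()

    main_game.loadingBar(interrupted_progress_loader)
    assert saved == [0]  # the handler ran with the last known progress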
So, to wrap it up: some particular edge cases require your code to be adjusted to make it a bit more unit-test friendly, which might not be a bad thing at all.
I hope this helps 😃
I'm desperate.
My code reads every nframe-th frame from videos; sometimes the code just stops for no reason, with no error.
So I decided to somehow raise an error.
The thing is, the code does raise an error, but it seems to ignore it and just carries on as normal.
I've also provided a code block below in which exactly the same method does work.
handler:
def handler(signum, frame):
    print("error")               ## This is printed
    raise Exception('time out')  ## I guess this is getting raised
The code part I want to wrap:
for i in range(0, int(frame_count), nframe):  # basically loads every nframe-th frame from the video
    try:
        frame = video.set(1, i)
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(1)  # At this point, the 'handler' did raise the error, but it did not kill this 'try' block.
        _n, frame = video.read()  # This line sometimes blocks for an infinite amount of time, and I want to wrap it
    except Exception as e:
        print('test')  # Code does not get here, yet the 'handler' does raise an exception
        raise e
# Here I need to return False, or raise an error, but the code just does not get here.
An example where exactly the same method will work:
import signal
import time

def handler(signum, frame):
    raise Exception('time out')

def function():
    try:
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(5)   # 5 seconds till raise
        time.sleep(10)    # does not get here, an Exception is raised after 5 seconds
    except Exception as e:
        raise e  # This will indeed work
My guess is that the read() call is blocked somewhere inside C code. The signal handler runs, puts an exception into the Python interpreter somewhere, but the exception isn't handled until the Python interpreter regains control. This is a limitation documented in the signal module:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
One possible workaround is to read frames on a separate process using the multiprocessing module, and return them to the main process using a multiprocessing.Queue (from which you can get with a timeout). However, there will be extra overhead in sending the frames between processes.
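For illustration, a rough sketch of that idea (the file name, queue size, and timeout are placeholders, and the every-nframe-th-frame logic from the question is left out for brevity):

import multiprocessing as mp
import queue  # only needed for the Empty exception
import cv2

def read_frames(path, frames_q):
    # runs in a child process, so a hang in read() cannot freeze the main process
    video = cv2.VideoCapture(path)
    while True:
        ok, frame = video.read()
        if not ok:
            break
        frames_q.put(frame)
    frames_q.put(None)  # sentinel: end of video

if __name__ == '__main__':
    frames_q = mp.Queue(maxsize=8)
    reader = mp.Process(target=read_frames, args=('video.mp4', frames_q), daemon=True)
    reader.start()
    while True:
        try:
            frame = frames_q.get(timeout=5)  # give up if no frame arrives in time
        except queue.Empty:
            print('reader appears to be stuck, giving up')
            reader.terminate()
            break
        if frame is None:
            break
        # ... process frame ...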
Another approach might be to try and avoid the root of the problem. OpenCV has different video backends (V4L, GStreamer, ffmpeg, ...); one of them might work where another doesn't. Using the second argument to the VideoCapture constructor, you can indicate a preference for which backend to use:
cv.VideoCapture(..., cv.CAP_FFMPEG)
See the documentation for the full list of backends. Depending on your platform and OpenCV build, not all of them will be available.
I'm trying to raise an exception within an except: block but the interpreter tries to be helpful and prints stack traces 'by force'. Is it possible to avoid this?
A little bit of background information:
I'm toying with urwid, a TUI library for python. The user interface is started by calling urwid.MainLoop.run() and ended by raising urwid.ExitMainLoop(). So far this works fine but what happens when another exception is raised? E.g. when I'm catching KeyboardInterrupt (the urwid MainLoop does not), I do some cleanup and want to end the user interface - by raising the appropriate exception. But this results in a screen full of stack traces.
A little research showed that Python 3 remembers chained exceptions and that one can explicitly raise an exception with a 'cause': raise B() from A(). I learned a few ways to change or append data on the raised exceptions, but I found no way to 'disable' this feature. I'd like to avoid the printing of stack traces and lines like "The above exception was the direct cause of ..." and just raise the interface-ending exception within an except: block as I would outside of one.
Is this possible or am I doing something fundamentally wrong?
Edit:
Here's an example resembling my current architecture, resulting in the same problem:
#!/usr/bin/env python3
import time

class Exit_Main_Loop(Exception):
    pass

# UI main loop
def main_loop():
    try:
        while True:
            time.sleep(0.1)
    except Exit_Main_Loop as e:
        print('Exit_Main_Loop')
        # do some UI-related clean up

# my main script
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
Unfortunately I can't change main_loop to except KeyboardInterrupt as well. Is there a pattern to solve this?
I still don't quite understand your explanation, but from the code:
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
There is no way that main_loop could ever see the Exit_Main_Loop() exception. By the time you get to the KeyboardInterrupt handler, main_loop is guaranteed to have already finished (in this case, because of an unhandled KeyboardInterrupt), so its exception handler is no longer active.
So, what happens is that you raise a new exception that nobody catches. And when an exception gets to the top of your code without being handled, Python handles it automatically by printing a traceback and quitting.
If you want to convert one type of exception into another so main_loop can handle it, you have to do that somewhere inside the try block.
You say:
Unfortunately I can't change main_loop to except KeyboardInterrupt as well.
If that's true, there's no real answer to your problem… but I'm not sure there's a problem in the first place, other than the one you created. Just remove the Exit_Main_Loop() from your code, and isn't it already doing what you wanted? If you're just trying to prevent Python from printing a traceback and exiting, this will take care of it for you.
If there really is a problem—e.g., the main_loop code has some cleanup code that you need to get executed no matter what, and it's not getting executed because it doesn't handle KeyboardInterrupt—there are two ways you could work around this.
First, as the signal docs explain:
The signal.signal() function allows to define custom handlers to be executed when a signal is received. A small number of default handlers are installed: … SIGINT is translated into a KeyboardInterrupt exception.
So, all you have to do is replace the default handler with a different one:
def handle_sigint(signum, frame):
    raise ExitMainLoop()

signal.signal(signal.SIGINT, handle_sigint)
Just do this before you start main_loop, and you should be fine. Keep in mind that there are some limitations with threaded programs, and with Windows, but if none of those limitations apply, you're golden; a ctrl-C will trigger an ExitMainLoop exception instead of a KeyboardInterrupt, so the main loop will handle it. (You may want to also add an except ExitMainLoop: block in your wrapper code, in case there's an exception outside of main_loop. However, you could easily write a contextmanager that sets and restores the signal around the call to main_loop, so there isn't any outside code that could possibly raise it.)
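Such a context manager could be as simple as this sketch (ExitMainLoop and main_loop are the names from the snippets above):

import signal
from contextlib import contextmanager

@contextmanager
def sigint_raises(exc_type):
    # while the with-block is active, ctrl-C raises exc_type instead of KeyboardInterrupt
    def handle_sigint(signum, frame):
        raise exc_type()
    previous = signal.signal(signal.SIGINT, handle_sigint)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, previous)  # restore the old handler

try:
    with sigint_raises(ExitMainLoop):
        main_loop()
except ExitMainLoop:
    pass  # last-resort handler, in case the exception escapes main_loop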
Alternatively, even if you can't edit the main_loop source code, you can always monkeypatch it at runtime. Without knowing what the code looks like, it's impossible to explain exactly how to do this, but there's almost always a way to do it.
I know the code below ignores a certain exception, but how do I make the code go back to where it got the exception and keep executing? Say an Exception is raised in do_something1 - how do I make the code ignore it, finish do_something1, and go on to do_something2? My code just goes to the finally block after processing pass in the except block. Please advise, thanks.
try:
    do_something1
    do_something2
    do_something3
    do_something4
except Exception:
    pass
finally:
    clean_up
EDIT:
Thanks for the reply. Now I know the correct way to do it. But here's another question: can I just ignore a specific exception (say, if I know the error number)? Is the code below possible?
try:
    do_something1
except Exception.strerror == 10001:
    pass

try:
    do_something2
except Exception.strerror == 10002:
    pass
finally:
    clean_up

do_something3
do_something4
There's no direct way for the code to go back inside the try/except block. If, however, you're trying to execute these different independent actions and keep executing when one fails (without copy/pasting the try/except block), you're going to have to write something like this:
actions = (
    do_something1, do_something2,  # ...
)

for action in actions:
    try:
        action()
    except Exception as error:
        pass
Update: the way to ignore specific exceptions is to catch the type of exception that you want, test it to see if you want to ignore it, and re-raise it if you don't.
try:
    do_something1
except TheExceptionTypeThatICanHandleError as e:
    if e.strerror != 10001:
        raise
finally:
    clean_up
Note also that each try statement needs its own finally clause if you want it to have one. It won't 'attach itself' to the previous try statement. A raise statement with nothing else is the correct way to re-raise the last exception. Don't let anybody tell you otherwise.
What you want are continuations, which Python doesn't natively provide. Beyond that, the answer to your question depends on exactly what you want to do. If you want do_something1 to continue regardless of exceptions, it has to catch the exceptions and ignore them itself.
If you just want do_something2 to happen regardless of whether do_something1 completes, you need a separate try statement for each one.
try:
    do_something1()
except:
    pass

try:
    do_something2()
except:
    pass
etc. If you can provide a more detailed example of what it is that you want to do, then there is a good chance that myself or someone smarter than myself can either help you or (more likely) talk you out of it and suggest a more reasonable alternative.
This is pretty much missing the point of exceptions.
If the first statement has thrown an exception, the system is in an indeterminate state and you have to treat the following statement as unsafe to run.
If you know which statements might fail, and how they might fail, then you can use exception handling to specifically clean up the problems which might occur with a particular block of statements before moving on to the next section.
So, the only real answer is to handle exceptions around each set of statements that you want to treat as atomic.
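As a sketch of that idea, each atomic group of statements gets its own try block, and only the failures you actually know how to recover from are caught (all of the names here are placeholders):

try:
    do_something1()
except IOError:
    recover_from_step1_failure()  # the one failure mode we understand here

try:
    do_something2()
    do_something3()  # 2 and 3 only make sense together, so they share a block
except ValueError:
    recover_from_bad_value()

clean_up()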
You could have all of the do_something's in a list and iterate through them like this, so it's not so wordy. You can use lambda functions where the worker functions require arguments:
work = [lambda: dosomething1(args), dosomething2, lambda: dosomething3(*kw, **kwargs)]

for each in work:
    try:
        each()
    except:
        pass

cleanup()
Exceptions are raised when a task cannot be completed in the manner intended by the code. Exceptions should be handled, not ignored. The whole idea of an exception is that the program cannot continue in the normal execution flow without abnormal results.
What if you write code to open a file and read it? What if this file does not exist?
It is much better to raise an exception. You cannot read a file that does not exist. What you can do is handle the exception and let the user know that no such file exists. What advantage would there be in continuing to read a file that could not be opened at all?
In fact, the answers above provided by Aaron work on the principle of handling your exceptions.
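A tiny illustration of that point (the file name is just an example):

try:
    with open('settings.txt') as f:
        data = f.read()
except IOError:
    # nothing sensible to "continue" with here: tell the user and fall back
    print('settings.txt could not be opened, using defaults')
    data = ''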
I posted this recently as an answer to another question. Here you have a function that returns a function that ignores ("traps") specified exceptions when calling any function. Then you invoke the desired function indirectly through the "trap."
def maketrap(*exceptions):
    def trap(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions:
            return None
    return trap
# create a trap that ignores all exceptions
trapall = maketrap(Exception)
# create a trap that ignores two exceptions
trapkeyattrerr = maketrap(KeyError, AttributeError)
# Now call some functions, ignoring specific exceptions
trapall(dosomething1, arg1, arg2)
trapkeyattrerr(dosomething2, arg1, arg2, arg3)
In general I'm with those who say that ignoring exceptions is a bad idea, but if you do it, you should be as specific as possible as to which exceptions you think your code can tolerate.
Python 3.4 added contextlib.suppress(), a context manager that takes any number of exception types and suppresses them within the context:
import contextlib
import pathlib

with contextlib.suppress(IOError):
    print('inside')
    print(pathlib.Path('myfile').read_text())  # Boom
    print('inside end')
print('outside')
Note that, just as with regular try/except, an exception within the context causes the rest of the context to be skipped. So, if an exception happens in the line commented with Boom, the output will be:
inside
outside
I am in the process of writing a small(er) Python script to automate a semi-frequent, long, and error-prone task. This script is responsible for making various system calls - either through os.system or through os.(mkdir|chdir|etc).
Here is an example of my code right now:
import os
import sys

class AClass:
    def __init__(self, foo, bar, verbose=False, silent=False):
        # Sets up variables needed for each instance, etc
        self.redirect = ''
        if silent:
            self.redirect = '> /dev/null 2>&1'
        self.verbose = verbose
        self.silent = silent

    def a_method(self):
        """ Responsible for running 4-6 things via system calls as described """
        if self.verbose and not self.silent:
            print "Creating a directory"
        try:
            os.mkdir('foobar')
        except OSError, e:
            raise OSError, "Problem creating directory %s: %s" % (e.filename, e.strerror)

        if self.verbose and not self.silent:
            print "Listing a directory"
        if os.system('ls foobar %s' % self.redirect) != 0:
            raise OSError, "Could not list directory foobar"

    def b_method(self):
        """ Looks very similar to a_method() """

    def run(self):
        """ Stitches everything together """
        try:
            self.a_method()
        except OSError, e:
            print "a_method(): %s" % e.strerror
            sys.exit(-1)
        try:
            self.b_method()
        except OSError, e:
            print "b_method(): %s" % e.strerror
            sys.exit(-1)
Obviously, writing all the if self.verbose and not self.silent checks is messy, and the try/except or if around each call is ugly to look at.
I would have liked to use Python's logging module and simply have one logging level (verbose) configurable via the command line, so that I could simply call logger.debug('My message'), but I am using Python 2.2 and I do not have access to the logging module.
Summary/Base Questions
I am using Python 2.2 and I cannot change this. I am running on an ESX 3.0.2 server and I cannot touch it in any other way for the time being.
What is the best way to handle error checking and verbose output without tying this logic to your class (which should only do One Thing)?
How can I reduce the clutter with something more simple or elegant to look at?
Thanks!
writing all the if verbose and not silent is messy
So, instead, assign sys.stdout to an instance of a dummy class whose write is a no-op whenever you need to be unverbose or silent, then just use print without needing guards. (Do remember to restore sys.stdout to the real thing for prints that aren't so conditioned -- this is easier if you encapsulate the switching in a couple of functions.)
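For instance, a sketch in the Python 2 style of the question (verbose and silent are the flags from the original code, assumed to be in scope):

import sys

class _NullWriter:
    # a write() that swallows everything, so bare prints become no-ops
    def write(self, text):
        pass

_real_stdout = sys.stdout

def be_quiet():
    sys.stdout = _NullWriter()

def be_loud():
    sys.stdout = _real_stdout

# decide once at start-up, then print without any guards
if silent or not verbose:
    be_quiet()
print "Creating a directory"  # swallowed when quiet
be_loud()                     # restore before output that must always appear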
For error checks, all the blocks like:
try:
    a_method()
except OSError, e:
    print "a_method(): %s" % e.strerror
    sys.exit(-1)
can and should be like
docall(a_method)
for what I hope is a pretty obvious def docall(acallable):.
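One possible shape for that helper, as a sketch in the Python 2 style of the question (the extra label argument is only there to keep the method name in the message):

import sys

def docall(acallable, label):
    # run one step; on the errors we care about, report and bail out,
    # just as the original per-method try/except blocks did
    try:
        acallable()
    except OSError, e:
        print "%s: %s" % (label, e.strerror)
        sys.exit(-1)

# run() then collapses to something like:
#     docall(self.a_method, "a_method()")
#     docall(self.b_method, "b_method()")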
Similarly, other try/except cases, and ones where the new exception is raised conditionally, can become calls to functions with appropriate arguments (including callables, i.e. higher-order functions). I'll be glad to add detailed code if you clarify which parts of this are hard or unclear to you!
Python 2.2, while now very old, was a great language in its way, and it's just as feasible to use it neatly, as you wish, as it would be for other great old languages, like, say, MacLisp;-).
How to clean up your verbose output
Move the verbose/quiet logic into a single function, and then call that function for all of your output. If you make it something nice and short it keeps your mainline code quite tidy.
def P(s):
    if verbose:
        print s
I have a package that does this in our internal code; it has the following methods:
P -- normal print: P('this prints regardless, --quiet does not shut it up')
V -- verbose print: V('this only prints if --verbose')
D -- debug print: D('this only prints if --debug')
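A rough sketch of such helpers, in the Python 2 style of the question (the module-level flags would be set from the command line):

# output helpers: one function per verbosity level
quiet = 0
verbose = 0
debug = 0

def P(s):
    # normal print: only --quiet shuts it up
    if not quiet:
        print s

def V(s):
    # verbose print: only shown with --verbose
    if verbose:
        print s

def D(s):
    # debug print: only shown with --debug
    if debug:
        print s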