Handling exceptions with the else clause - Python

The Python Tutorial states:
The try ... except statement has an optional else clause, which,
when present, must follow all except clauses. It is useful for code
that must be executed if the try clause does not raise an exception.
For example:
for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except IOError:
        print 'cannot open', arg
    else:
        print arg, 'has', len(f.readlines()), 'lines'
        f.close()
The use of the else clause is better than adding additional code
to the try clause because it avoids accidentally catching an exception
that wasn’t raised by the code being protected by the try ... except
statement.
Question 1> After reading the above document, I still don't understand why we cannot simply move the code from the else clause into the try clause.
Question 2> How can the try clause accidentally catch an exception, since all the catching is done in the except clause?

You could put the else code in the try suite, but then you'd catch any exceptions that might be raised there. If you didn't intend that to happen, it would be "accidental," hence the wording of the document you linked to.
Best practice is to put as little code as possible in a try block, so that when an error occurs, you know what operation caused it and can handle it appropriately. If you have five lines of code in a try block and only expect one of them to ever raise an exception, your exception-handling code will be ill-prepared when an exception occurs in a line you didn't expect it to. Better in that case to let the exception be raised than handle it the wrong way.
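As a rough sketch of that advice (hypothetical file name, Python 3 syntax), keep only the call you expect to fail inside try and put the follow-up work in else:

try:
    f = open('data.txt')  # the only line we expect to raise OSError
except OSError as exc:
    print('cannot open data.txt:', exc)
else:
    # Runs only if open() succeeded; an exception here is NOT swallowed
    # by the except clause above, so it surfaces with a full traceback.
    print('data.txt has', len(f.readlines()), 'lines')
    f.close()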

If you move the code from the else into the try then that becomes part of the "critical path" which can raise an exception. If f.readlines() raises some sort of exception (perhaps an I/O error while reading the file because of a bad sector on the disk) then that error will be conflated with the one error that you currently catch. (Technically the "cannot open" error message would be wrong at that point ... because opening a file can succeed when reading it later fails; in fact opening it must succeed before you can even get an I/O error while processing it).
Normally you'd use a pattern more like:
foo = None
try:
    # some code to access/load/initialize foo from an external source
except ...:
    # handle various types of file open/read, database access, etc. errors
else:
    foo = something
... so that your subsequent code can simply check whether foo is None and either use it or work around its unavailability in whatever way you see fit.
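A concrete sketch of that pattern (the config file name and the fallback value are hypothetical):

import json

config = None
try:
    f = open('config.json')  # hypothetical config file
except OSError as exc:
    print('could not load config:', exc)
else:
    with f:
        config = json.load(f)

# Subsequent code simply checks whether the load succeeded.
if config is None:
    config = {'retries': 3}  # fall back to defaults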

The answer to both questions is similar.
If you move code into the try clause, then you can no longer be sure where the exception is coming from. If another line of code produces an unexpected IOError, you could end up searching for a problem where there is none.
So, to better dissect your code, you want to keep the lines in the try block as simple as possible and make the catch as specific as possible.

1) Of course you could just move the code from the else clause into the try clause, or even outside the try block entirely, but keeping it in else gives you extra flexibility and keeps the code modular. It also lets you be more specific about which errors are caught: you could list a whole load of different exceptions that are likely to happen, each handled with different statements. The code in the else clause still only runs if no exception was raised, after the final line of the try block has executed, e.g. printing a success message.
Also, setting up a try block adds a small amount of interpreter overhead in older CPython versions (CPython 3.11 made exception handling essentially zero-cost unless an exception is actually raised), so anything outside the try clause is not protected in the same manner and may run marginally more efficiently.
2) Error catching is quite specific. For example, the except clause in the example above will only run if an IOError is raised by the f = open(arg, 'r') line. If you want to catch any form of exception, use except Exception:.
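For instance, a sketch that lists several likely exceptions separately (the exception classes are standard Python 3; the messages are illustrative):

import sys

for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except FileNotFoundError:
        print(arg, 'does not exist')
    except PermissionError:
        print('no permission to read', arg)
    except OSError as exc:  # any other I/O-related failure
        print('cannot open', arg, '-', exc)
    else:
        print(arg, 'has', len(f.readlines()), 'lines')
        f.close()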


In Python, is there a way to enter a try/except block if a condition is met, otherwise execute the code in the try block plainly?

I would like to execute a loop that attempts to run a block of code, and skips past any iterations that happen to fail. However I would also like to be able to pass in a debug flag, so that if one of the iterations fails the program crashes and the user can see the backtrace to help themselves see where it failed. Essentially, I want to do this:
debug = False  # or True

for i in range(10):
    if not debug:
        try:
            # many lines of code
        except:
            print(f'Case {i} failed')
    else:
        # many lines of code
However, I don't want to duplicate the #many lines of code. I could wrap them in a helper function and do precisely the structure I wrote above, but I'm going to have to pass around a bunch of variables that will add some complexity (aka unit tests that I don't want to write). Is there another way to do this?
Best solution shy of factoring out many lines of code to its own function (which is often a good idea, but not always) would be to unconditionally use the try/except, but have the except conditionally re-raise the exception:
debug = False  # or True

for i in range(10):
    try:
        # many lines of code
    except:
        if debug:
            raise  # Bare raise reraises the exception as if it was never caught
        print(f'Case {i} failed')
A caught exception reraised with a bare raise leaves no user-visible traces to distinguish it from an uncaught exception; the traceback is unchanged, with no indication of it passing through the except block, so the net effect is identical to your original code, aside from debug being evaluated later (not at all if no exception occurs) and many lines of code appearing only once.
Side-note: Bare except blocks are a bad idea. If nothing else, limit it to except Exception: so you're not catching and suppressing stuff like KeyboardInterrupt or SystemExit when not in debug mode.
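Applying that side-note to the loop above might look like this (do_case is a hypothetical stand-in for the many lines of code):

debug = False  # or True

for i in range(10):
    try:
        do_case(i)  # hypothetical stand-in for the many lines of code
    except Exception:  # KeyboardInterrupt and SystemExit still propagate
        if debug:
            raise
        print(f'Case {i} failed')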
The best way is to use a function; one extra line does not matter too much.
Otherwise, if there is no need to test debug, there is no need to repeat the code in an else block: if the statements in the try block don't raise an error, they simply execute and the result is produced normally.
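For completeness, a sketch of the helper-function approach mentioned above (run_case is a hypothetical name; pass in whatever variables the body needs):

def run_case(i):
    # the "many lines of code" live here
    ...

debug = False  # or True

for i in range(10):
    try:
        run_case(i)
    except Exception:
        if debug:
            raise
        print(f'Case {i} failed')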

Get Exit Status of a Nested Function?

I have a function called within a different function. In the nested function, various errors (e.g. improper arguments, missing parameters, etc.) should result in exit status 1. Something like:
if not os.path.isdir(filepath):
    print('Error: could not find source directory...')
    sys.exit(1)
Is this the correct way to use exit statuses within python? Should I have, instead,
return sys.exit(1)
??? Importantly, how would I reference the exit status of this nested function in the other function once the nested function had finished?
sys.exit() raises a SystemExit exception. Normally, you should not use it unless you really mean to exit your program.
You could catch this exception:
try:
    function_that_calls_sys_exit()
except SystemExit as exc:
    print(exc.code)
The .code attribute of the SystemExit exception is set to the proposed exit code.
However, you should really use a more specific exception, or create a custom exception for the job. A ValueError might be appropriate here, for example:
if not os.path.isdir(filepath):
    raise ValueError('Error: could not find source directory {!r}'.format(filepath))
then catch that exception:
try:
    function_that_may_raise_valueerror()
except ValueError as exc:
    print("Oops, something went wrong: {}".format(exc))
By using sys.exit, you typically signal that you want the entire program to end. If you want to handle the error in a calling function, you should probably have the inner function raise a more specific exception instead. (You could catch the SystemExit exception that sys.exit() raises, but that would be a rather awkward way to pass error information out.)
I guess that the right thing to do is this:
if not os.path.isdir(filepath):
    raise ValueError('the given filepath is not a directory')
However, the code as it stands could still be improved. One point is that a path to a file should never be a directory, so a non-directory is not really an exceptional state; maybe what you want is to just name the variable path, without adding unintended connotations.
Further, and this has actual functional implications, you are still not guaranteed to be able to access the directory even if isdir() returns true! The reason is that something could have swapped it out from under your feet (typically a malicious attacker), or, more simply, you may not have the rights to access it. If you care, you should just open the directory and handle the resulting errors instead of trying to determine in advance whether something will fail in the future. This is in general a better approach, since the "normal" code doesn't get cluttered with such checks and you don't pay any (albeit small) performance penalty except when an error actually occurs.
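A sketch of that approach (just attempt the operation and handle the specific failures; the path and messages are illustrative):

import os

def count_entries(path):
    # No isdir() pre-check; just attempt the operation.
    return len(os.listdir(path))

try:
    n = count_entries('some/dir')  # hypothetical path
except NotADirectoryError:
    print('not a directory')
except FileNotFoundError:
    print('no such directory')
except PermissionError:
    print('no permission to read it')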

Bad idea to catch all exceptions in Python

Why is it a bad idea to catch all exceptions in Python ?
I understand that catching all exceptions using the except: clause will even catch the 'special' python exceptions: SystemExit, KeyboardInterrupt, and GeneratorExit. So why not just use a except Exception: clause to catch all exceptions?
Because it's terribly nonspecific and it doesn't enable you to do anything interesting with the exception. Moreover, if you're catching every exception there could be loads of exceptions that are happening that you don't even know are happening (which could cause your application to fail without you really knowing why). You should be able to predict (either through reading documentation or experimentation) specifically which exceptions you need to handle and how to handle them, but if you're blindly suppressing all of them from the beginning you'll never know.
So, by popular request, here's an example. A programmer is writing Python code and she gets an IOError. Instead of investigating further, she decides to catch all exceptions:
def foo():
    try:
        f = open("file.txt")
        lines = f.readlines()
        return lines[0]
    except:
        return None
She doesn't realize the flaw in her approach: what if the file exists and is accessible, but it is empty? Then this code will raise an IndexError (since the list lines is empty). So she'll spend hours wondering why she's getting None back from this function when the file exists and isn't locked, without realizing something that would be obvious if she had been more specific in catching errors: she's accessing data that might not exist.
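A sketch of a more specific version (returning None only for the failure she actually expected is an illustrative choice; the empty-file case now surfaces instead of being hidden):

def foo():
    try:
        f = open("file.txt")
    except OSError:  # only the failure we actually expect
        return None
    with f:
        lines = f.readlines()
        return lines[0]  # an empty file now raises IndexError visibly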
Because you probably want to handle each exception differently. It's not the same thing to have a KeyboardInterrupt as to have an encoding problem, or an OS error... You can catch specific exceptions one after the other.
try:
    XXX
except TYPE:
    YYY
except TYPE:
    ZZZ
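A concrete sketch of that shape, using the kinds of errors mentioned above (the file name is hypothetical):

try:
    with open('notes.txt', encoding='utf-8') as f:
        text = f.read()
except KeyboardInterrupt:
    print('interrupted by the user')
except UnicodeDecodeError:
    print('the file is not valid UTF-8')
except OSError as exc:
    print('could not read the file:', exc)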

Python Ignore Exception and Go Back to Where I Was

I know how to use the code below to ignore a certain exception, but how do I let the code go back to where it got the exception and keep executing? Say the exception Exception is raised in do_something1; how do I make the code ignore it, finish do_something1, and process do_something2? My code just goes to the finally block after the pass in the except block is executed. Please advise, thanks.
try:
    do_something1
    do_something2
    do_something3
    do_something4
except Exception:
    pass
finally:
    clean_up
EDIT:
Thanks for the reply. Now I know what the correct way to do it is. But here's another question: can I just ignore a specific exception (say, if I know the error number)? Is the code below possible?
try:
    do_something1
except Exception.strerror == 10001:
    pass

try:
    do_something2
except Exception.strerror == 10002:
    pass
finally:
    clean_up

do_something3
do_something4
There's no direct way for the code to go back inside the try-except block. If, however, you're looking at trying to execute these different independent actions and keep executing when one fails (without copy/pasting the try/except block), you're going to have to write something like this:
actions = (
    do_something1, do_something2,  # ...
)

for action in actions:
    try:
        action()
    except Exception as error:
        pass
Update: the way to ignore specific exceptions is to catch the type of exception that you want, test it to see if you want to ignore it, and re-raise it if you don't.
try:
    do_something1
except TheExceptionTypeThatICanHandleError as e:
    if e.strerror != 10001:
        raise
finally:
    clean_up
Note also that each try statement needs its own finally clause if you want it to have one. It won't 'attach itself' to the previous try statement. A raise statement with nothing else is the correct way to re-raise the last exception. Don't let anybody tell you otherwise.
What you want are continuations, which Python doesn't natively provide. Beyond that, the answer to your question depends on exactly what you want to do. If you want do_something1 to continue regardless of exceptions within it, then it has to catch the exceptions and ignore them itself.
If you just want do_something2 to happen regardless of whether do_something1 completes, you need a separate try statement for each one.
try:
    do_something1()
except:
    pass

try:
    do_something2()
except:
    pass
etc. If you can provide a more detailed example of what it is that you want to do, then there is a good chance that myself or someone smarter than myself can either help you or (more likely) talk you out of it and suggest a more reasonable alternative.
This is pretty much missing the point of exceptions.
If the first statement has thrown an exception, the system is in an indeterminate state and you have to treat the following statement as unsafe to run.
If you know which statements might fail, and how they might fail, then you can use exception handling to specifically clean up the problems which might occur with a particular block of statements before moving on to the next section.
So, the only real answer is to handle exceptions around each set of statements that you want to treat as atomic.
You could have all of the do_something's in a list and iterate through them like this, so it's not so wordy. You can use lambda functions if you need to pass arguments to the worker functions:
work = [lambda: dosomething1(args), dosomething2, lambda: dosomething3(*kw, **kwargs)]

for each in work:
    try:
        each()
    except:
        pass

cleanup()
Exceptions are usually raised when a task cannot be completed in the manner the code intended. The whole idea of an exception is that the program cannot continue in its normal execution flow without producing abnormal results. Exceptions should be handled, not ignored.
What if you write code to open and read a file, and the file does not exist?
It is much better to raise an exception. You cannot read a file that does not exist. What you can do is handle the exception and let the user know that no such file exists. What advantage would there be in continuing to read a file that could not be opened at all?
In fact, the above answer provided by Aaron works on the principle of handling your exceptions.
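A minimal sketch of that advice (the file name is hypothetical):

try:
    with open('report.txt') as f:
        contents = f.read()
except FileNotFoundError:
    print('No such file: report.txt')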
I posted this recently as an answer to another question. Here you have a function that returns a function that ignores ("traps") specified exceptions when calling any function. Then you invoke the desired function indirectly through the "trap."
def maketrap(*exceptions):
    def trap(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions:
            return None
    return trap

# create a trap that ignores all exceptions
trapall = maketrap(Exception)

# create a trap that ignores two exceptions
trapkeyattrerr = maketrap(KeyError, AttributeError)

# Now call some functions, ignoring specific exceptions
trapall(dosomething1, arg1, arg2)
trapkeyattrerr(dosomething2, arg1, arg2, arg3)
In general I'm with those who say that ignoring exceptions is a bad idea, but if you do it, you should be as specific as possible as to which exceptions you think your code can tolerate.
Python 3.4 added contextlib.suppress(), a context manager that takes one or more exception types and suppresses them within the context:
import contextlib
import pathlib

with contextlib.suppress(IOError):
    print('inside')
    print(pathlib.Path('myfile').read_text())  # Boom
    print('inside end')
print('outside')
Note that, just as with regular try/except, an exception within the context causes the rest of the context to be skipped. So, if an exception happens in the line commented with Boom, the output will be:
inside
outside
