Listing exceptions programmatically in Python

Is there any way to programmatically determine which exceptions an object or method might raise?
Like dir(obj) lists available methods, I'm looking for the equivalent dir_exceptions(obj).
As far as I know, the only way to achieve this would be to parse the source.

I don't think this is possible. An exception is a runtime phenomenon, and you'll only know what is possible (or what actually happens) while the code is running. Why would you want to do this, though?

It looks like you'll have to trust the code's developers on this one: if they did a good job, the method/class documentation should list all the exceptions that could be raised.

No, there is not a practical way to do this.
Most Python developers derive their exceptions from Exception, so if you're not sure, just catch Exception.
try:
    some_secret_code()
except Exception:
    print('oops, something happened')
If you're thinking that you can import a module and poke around looking for things derived from Exception, that won't quite work either. What about the Python nut who does this:
exec "raise SystemExit()"
I'm not sure that there is a non-practical way to accomplish this.

I don't think this is possible either, but if you trust that the programmers have named their exceptions with "Exception" or "Error" in the name, then you could do a dir() on the class and search for attributes whose names end with "Exception" or "Error". Aside from that (which is pretty hacky in itself), I don't see a straightforward/native/idiomatic way to do this.
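For completeness, here is a rough static sketch of the "parse the source" idea using the ast module (Python 3). It only sees raise statements written directly in the object's own source, so it misses exceptions raised by callees, bare re-raises, and anything built dynamically; dir_exceptions is just a made-up name:
import ast
import inspect
import textwrap

def dir_exceptions(obj):
    """Best-effort: names used in 'raise' statements in obj's own source."""
    source = textwrap.dedent(inspect.getsource(obj))
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Raise) and node.exc is not None:
            exc = node.exc
            if isinstance(exc, ast.Call):         # raise SomeError("msg")
                exc = exc.func
            if isinstance(exc, ast.Name):         # raise SomeError
                names.add(exc.id)
            elif isinstance(exc, ast.Attribute):  # raise somemodule.SomeError
                names.add(exc.attr)
    return sorted(names)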

Related

What is the most Pythonic way to raise an exception related to an unexpected file type?

Quick question. I am trying to write reusable code, and haven't found too many other instances of this coming up. Say a script is looking for an XML file: I could just raise a generic RuntimeError, but I am not sure whether that would be informative for others using my code, or even the kind of thing they would be likely to catch in an except clause. Any thoughts?
You can make your own exception by extending the base Exception class.
See: https://docs.python.org/2/tutorial/errors.html#user-defined-exceptions
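A minimal sketch of what that can look like; the exception name and the check are made up here purely for illustration:
class UnexpectedFileTypeError(Exception):
    """Raised when an input file is not of the expected type."""

def load_settings(path):
    if not path.endswith('.xml'):
        raise UnexpectedFileTypeError('expected an XML file, got: %s' % path)
    # ... parse the XML here ...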

In python how does the caller of something know if that something would throw an exception or not?

In the Java world, we know that the exceptions are classified into checked vs runtime and whenever something throws a checked exception, the caller of that something will be forced to handle that exception, one way or another. Thus the caller would be well aware of the fact that there is an exception and be prepared/coded to handle that.
But coming to Python, given there is no concept of checked exceptions (I hope that is correct), how does the caller of something know if that something would throw an exception or not? Given this "lack of knowledge that an exception could be thrown", how does the caller ever know that it could have handled an exception until it is too late?
There are no checked exceptions in Python.
Read the module docs.
Read the source.
Discover during testing.
Catch a wide range of exception types if necessary (see below).
For example, if you need to be safe:
try:
    ...
except Exception:
    ...
Avoid using a bare except clause, as it will even catch things like a KeyboardInterrupt.
In my six years with Python, I haven't come across anything similar to Java's throws keyword.
how does the caller of something know if that something would throw an exception or not?
By reading the documentation for that something.
Design Principle of Python: it's easier to ask forgiveness than permission
EAFP
Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.
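For example, here is the same lookup written both ways (record is just a placeholder dict):
record = {'name': 'Ada'}

# LBYL: check before acting (racy if the state can change in between)
if 'name' in record:
    name = record['name']
else:
    name = None

# EAFP: just act, and handle the failure if it happens
try:
    name = record['name']
except KeyError:
    name = None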
Basics of Unix Philosophy: Rule of Repair
Repair what you can — but when you must fail, fail noisily and as soon as possible.
The essence of both is to use error handling that allows you to find your bugs quickly and wind up with a much more robust program over the long run.
The practical lesson is to learn what errors you should look for as you develop, and only attempt to catch those in your modules, and only use generic Exception handling as a wrapper.
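A minimal sketch of that lesson, with made-up names (process_record, the 'count' field) purely for illustration:
import logging

def process_record(record):
    try:
        # catch only the errors we actually expect at this level
        return int(record['count'])
    except (KeyError, ValueError) as e:
        # fail noisily and early, with context attached
        raise ValueError('bad record: %r' % (record,)) from e

def main(records):
    for record in records:
        try:
            process_record(record)
        except Exception:
            # generic Exception handling only as the outer wrapper:
            # log the traceback and carry on with the next record
            logging.exception('unexpected failure for record %r', record)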

Catching Python runtime errors only

I find myself handling exceptions without specifying an exception type when I call code that interacts with system libraries like shutil, http, etc., all of which can throw if the system is in an unexpected state (e.g. a file is locked, the network is unavailable, and so on):
try:
    ...  # call something
except:
    print("OK, so it went wrong.")
This bothers me because it also catches SyntaxError and other exceptions rooted in programmer error, and I've seen recommendations to avoid such open-ended exception handlers.
Is there a convention that all runtime errors derive from some common exception base class that I can use here? Or anything that doesn't involve syntax errors, module import failures, etc.? Even KeyError I would consider a bug, because I tend to use dict.get() if I'm not 100% sure the key will be there.
I'd hate to have to list every single conceivable exception type, especially since I'm calling a lot of supporting code I have no control over.
UPDATE: OK, the answers made me realize I'm asking the wrong question -- what I'm really wondering is if there's a Python convention or explicit recommendation for library writers to use specific base classes for their exceptions, so as to separate them from the more mundane SyntaxError & friends.
Because if there's a convention for library writers, I, as a library consumer, can make general assumptions about what might be thrown, even if specific cases may vary. Not sure if that makes more sense?
UPDATE AGAIN: Sven's answer finally led me to understand that instead of giving up and catching everything at the top level, I can handle and refine exceptions at the lower levels, so the top level only needs to worry about the specific exception type from the level below.
Thanks!
Always make the try block as small as possible.
Only catch the exceptions you want to handle. Look in the documentation of the functions you are dealing with.
This ensures that you think about what exceptions may occur, and you think about what to do if they occur. If something happens you never thought about, chances are your exception handling code won't be able to correctly deal with that case anyway, so it would be better the exception gets propagated.
You said you'd "hate to have to list every single conceivable exception type", but usually it's not that bad. Opening a file? Catch IOError. Dealing with some library code? They often have their own exception hierarchies with a specific top-level exception -- just catch this one if you want to catch any of the library-specific exceptions. Be as specific as possible, otherwise errors will pass unnoticed sooner or later.
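For instance, assuming the third-party requests library, its documented base exception covers connection failures, timeouts, invalid URLs, HTTP error statuses and so on:
import requests

try:
    response = requests.get('https://example.com/data', timeout=5)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print('request failed:', e)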
As for convention about user-defined exceptions in Python: They usually should be derived from Exception. This is also what most user-defined exceptions in the standard library derive from, so the least you should do is use
except Exception:
instead of a bare except clause, which also catches KeyboardInterrupt and SystemExit. As you have noted yourself, this would still catch a lot of exceptions you don't want to catch.
Check out this list of built-in exceptions in the Python docs. It sounds like what you mostly want is to catch StandardError (note that StandardError only exists in Python 2; it was removed in Python 3), although this also includes KeyError. There are also other base classes, such as ArithmeticError and EnvironmentError, that you may find useful sometimes.
I find third-party libraries do derive their custom exceptions from Exception as documented here. Programmers also tend to raise standard Python exceptions such as TypeError, ValueError, etc. in appropriate situations. Unfortunately, this makes it difficult to consistently catch library errors separately from other errors derived from Exception. I wish Python defined e.g. a UserException base class. Some libraries do declare a base class for their exceptions, but you'd have to know what these are, import the module, and catch them explicitly.
Of course, if you want to catch everything except KeyError and, say, IndexError, along with the script-stopper exceptions, you could do this:
try:
    doitnow()
except (StopIteration, GeneratorExit, KeyboardInterrupt, SystemExit):
    raise  # these stop the script
except (KeyError, IndexError):
    raise  # we don't want to handle these ones
except Exception as e:
    handleError(e)
Admittedly, this becomes a hassle to write each time.
How about RuntimeError: http://docs.python.org/library/exceptions.html#exceptions.RuntimeError
If that isn't what you want (and it may well not be), look at the list of exceptions on that page. If you're confused by how the hierarchy fits together, I suggest you spend ten minutes examining the __bases__ attribute of the exceptions you're interested in, to see what base classes they share. (Note that __bases__ isn't closed over the whole hierarchy - you may need to examine superclass bases also.)
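For example, in a Python 3 interpreter (__mro__ walks the whole chain, whereas __bases__ only goes one level up):
>>> KeyError.__bases__
(<class 'LookupError'>,)
>>> KeyError.__mro__
(<class 'KeyError'>, <class 'LookupError'>, <class 'Exception'>, <class 'BaseException'>, <class 'object'>)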

try except and programming etiquette

I'm making a GUI and I find myself using a lot of try/except statements. My question is: should I redesign my program to use fewer try/except statements, or is try/except good practice in Python programs? I like them because they're informative and, for me, make debugging easier. Just wondering what real developers think about it.
Thanks
One of Python's idioms is: It's easier to ask for forgiveness than for permission. (Python Glossary, have a look at EAFP).
So it's perfectly acceptable to structure program flow with exception handling (and reasonably fast too, compared to other languages). It fits Python's dynamic nature nicely imho.
One large consideration when deciding whether to catch an exception is what legitimate errors you could be hiding.
For example, consider this code:
try:
    name = person['name']
except KeyError:
    name = '<none provided>'
This is reasonable if person is known to be a dict… But if person can possibly be something more complex, for example:
class Person(object):
    def __getitem__(self, key):
        return do_something(key)
You run the risk of accidentally catching an exception which was the result of a legitimate bug (for example, a bug in do_something).
And I feel the need to mention: you should never, ever (except under a couple of very specific circumstances) use a "naked" except:.
My personal preference is to avoid catching exceptions whenever possible (for example, by using name = person.get('name', '<none provided>')), both because I find it cleaner and because I dislike the look of try/except blocks.
It's hard to give a general answer on whether you should use less exception handling... you can definitely do too much and too little. It's almost certainly wrong to be catching every possible exception and also almost certainly wrong to be doing no exception handling.
Here are some things to think about:
It's usually a good idea to catch the exception if you can programmatically do something about the error condition. E.g. your code is trying to make a web request and if it fails, you want to retry. In that situation you want to catch the exception and then do the retry (see the retry sketch at the end of this answer).
Think carefully about where to catch an exception. In some low-level function, can you reasonably do something about the error? E.g. let's say you have a function that writes out a file and it fails with a permissions error. Probably not much you can do about it there but maybe at a higher level you can catch the exception and display a message to the user instructing them to try to save the file somewhere else.
It almost never makes sense to catch "fatal" types of errors e.g. out of memory, stack overflow etc. At least not low down in your code - it might make sense to have a top-level handler that tries to gracefully exit.
Don't "swallow" exceptions that really should bubble up i.e. don't have an except clause that doesn't re-raise the exception if your calling function should really see it. This can hide serious bugs.
For more, do a Google search for "exception handling guidelines". Many of the results you see will be for other languages/environments, but the concepts apply just as well.
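Here is a hedged sketch of the retry case mentioned above; fetch_url, MAX_RETRIES and the use of urllib are all just illustrative choices:
import time
import urllib.error
import urllib.request

MAX_RETRIES = 3

def fetch_url(url):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except urllib.error.URLError:
            if attempt == MAX_RETRIES:
                raise                    # out of retries: let the caller see it
            time.sleep(2 ** attempt)     # simple exponential backoff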

How to handle "duck typing" in Python?

I usually want to keep my code as generic as possible. I'm currently writing a simple library and being able to use different types with my library feels extra important this time.
One way to go is to force people to subclass an "interface" class. To me, this feels more like Java than Python and using issubclass in each method doesn't sound very tempting either.
My preferred way is to use the object in good faith, but this will raise some AttributeErrors. I could wrap each dangerous call in a try/except block. This, too, seems kind of cumbersome:
def foo(obj):
    ...
    # it should be able to sleep
    try:
        obj.sleep()
    except AttributeError:
        # handle error
        ...
    # it should be able to wag its tail
    try:
        obj.wag_tail()
    except AttributeError:
        # handle this error as well
        ...
Should I just skip the error handling and expect people to only use objects with the required methods? If I do something stupid like [x**2 for x in 1234] I actually get a TypeError and not an AttributeError (ints are not iterable), so there must be some type checking going on somewhere -- what if I want to do the same?
This question will be kind of open ended, but what is the best way to handle the above problem in a clean way? Are there any established best practices? How is the iterable "type checking" above, for example, implemented?
Edit
While AttributeErrors are fine, the TypeErrors raised by native functions usually give more information about how to solve the errors. Take this for example:
>>> ['a', 1].sort()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < str()
I'd like my library to be as helpful as possible.
I'm not a Python pro, but I believe that unless you can try an alternative for when the parameter doesn't implement a given method, you shouldn't prevent exceptions from being thrown. Let the caller handle these exceptions; otherwise, you would be hiding problems from the developers.
As I have read in Clean Code, if you want to search for an item in a collection, don't test your parameter with issubclass (checking for a list), but prefer to call getattr(l, "__contains__"). This gives someone who is using your code a chance to pass a parameter that isn't a list but which has a __contains__ method defined, and things should work equally well.
So, I think that you should code in an abstract, generic way, imposing as few restrictions as you can. For that, you'll have to make the fewest assumptions possible. However, when you face something that you can't handle, raise an exception and let the programmers know what mistake they made!
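A small sketch of checking for behaviour instead of type (find_item is a made-up name; whether to check at all is the real design choice):
def find_item(container, item):
    # note: 'in' can also work via __iter__/__getitem__, so this check is
    # stricter than the operator itself -- it is only an illustration
    if not hasattr(container, '__contains__'):
        raise TypeError('expected a container supporting "in", got %s'
                        % type(container).__name__)
    return item in container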
If your code requires a particular interface, and the user passes an object without that interface, then nine times out of ten, it's inappropriate to catch the exception. Most of the time, an AttributeError is not only reasonable but expected when it comes to interface mismatches.
Occasionally, it may be appropriate to catch an AttributeError for one of two reasons. Either you want some aspect of the interface to be optional, or you want to throw a more specific exception, perhaps a package-specific exception subclass. Certainly you should never prevent an exception from being thrown if you haven't honestly handled the error and any aftermath.
So it seems to me that the answer to this question must be problem- and domain-specific. It's fundamentally a question of whether using a Cow object instead of a Duck object ought to work. If so, and you handle any necessary interface fudging, then that's fine. On the other hand, there's no reason to explicitly check whether someone has passed you a Frog object, unless that would cause a disastrous failure (i.e. something much worse than a stack trace).
That said, it's always a good idea to document your interface -- that's what docstrings (among other things) are for. When you think about it, it's much more efficient to throw a general error for most cases and tell users the right way to do it in the docstring, than to try to foresee every possible error a user might make and create a custom error message.
A final caveat -- it's possible that you're thinking about UI here -- I think that's another story. It's good to check the input that an end user gives you to make sure it isn't malicious or horribly malformed, and provide useful feedback instead of a stack trace. But for libraries or things like that, you really have to trust the programmer using your code to use it intelligently and respectfully, and to understand the errors that Python generates.
If you just want the unimplemented methods to do nothing, you can try something like this, rather than the multi-line try/except construction:
getattr(obj, "sleep", lambda: None)()
However, this isn't necessarily obvious as a function call, so maybe:
hasattr(obj, "sleep") and obj.sleep()
or if you want to be a little more sure before calling something that it can in fact be called:
hasattr(obj, "sleep") and callable(obj.sleep) and obj.sleep()
This "look-before-you-leap" pattern is generally not the preferred way to do it in Python, but it is perfectly readable and fits on a single line.
Another option of course is to abstract the try/except into a separate function.
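For instance, a tiny helper along those lines; call_if_able is a made-up name, and silently returning None for missing methods is the "optional interface" behaviour being discussed:
def call_if_able(obj, method_name, *args, **kwargs):
    """Call obj.method_name(...) if it exists and is callable, else return None."""
    method = getattr(obj, method_name, None)
    if callable(method):
        return method(*args, **kwargs)
    return None

# usage: call_if_able(dog, 'sleep'); call_if_able(dog, 'wag_tail')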
Good question, and quite open-ended. I believe typical Python style is not to check, either with isinstance or by catching individual exceptions. Certainly, using isinstance is quite bad style, as it defeats the whole point of duck typing (though using isinstance on primitives can be OK -- be sure to check for both int and long for integer inputs, and for basestring for strings, the common base class of str and unicode). If you do check, you should raise a TypeError.
Not checking is generally OK, as it typically raises either a TypeError or an AttributeError anyway, which is what you want. (Though it can delay those errors, making client code harder to debug.)
The reason you see TypeErrors is that the built-in machinery raises them, effectively because it does an isinstance-style check itself: the for loop, for instance, is hard-coded to raise a TypeError if something is not iterable.
First of all, the code in your question is not ideal:
try:
    obj.wag_tail()
except AttributeError:
    ...
You don't know whether the AttributeError is from the lookup of wag_tail or whether it happened inside the function. What you are trying to do is:
try:
    f = getattr(obj, 'wag_tail')
except AttributeError:
    ...
else:
    f()  # only call it if the lookup succeeded
Edit: kindall rightly points out that if you are going to check this, you should also check that f is callable.
In general, this is not Pythonic. Just make the call and let the exception propagate; the stack trace is informative enough to fix the problem. I think you should ask yourself whether your rethrown exceptions are useful enough to justify all of this error-checking code.
The case of sorting a list is a great example.
List sorting is very common,
passing unorderable types happens for a significant proportion of those, and
throwing AttributeError in that case is very confusing.
If those three criteria apply to your problem (especially the third), I agree with building pretty exception rethrower.
You have to balance with the fact that throwing these pretty errors is going to make your code harder to read, which statistically means more bugs in your code. It's a question of balancing the pros and the cons.
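A hedged sketch of such a rethrower for the sorting example; sort_records is a made-up name and the wording of the message is just one option:
def sort_records(records):
    try:
        return sorted(records)
    except TypeError as e:
        # re-raise with a friendlier, domain-specific message, keeping the cause
        raise TypeError('records must be mutually orderable '
                        '(do not mix, e.g., str and int): %s' % e) from e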
If you ever need to check for behaviours (like behaving as a real number, or supporting __contains__), don't forget to use the Python abstract base classes found in collections, io, and numbers.
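A minimal sketch using those ABCs (Python 3 locations shown; mean is a made-up helper):
import collections.abc
import numbers

def mean(values):
    if not isinstance(values, collections.abc.Iterable):
        raise TypeError('values must be iterable')
    values = list(values)
    if not values:
        raise ValueError('values must not be empty')
    if not all(isinstance(v, numbers.Real) for v in values):
        raise TypeError('all values must be real numbers')
    return sum(values) / len(values)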
