Avoid exceptions? - python

This particular example relates to Django in Python, but should apply to any language supporting exceptions:
try:
    object = ModelClass.objects.get(search=value)
except DoesNotExist:
    pass

if object:
    # do stuff
The Django model class provides a simple method, get, which allows me to search for one and only one object in the database; if it finds more or fewer, it raises an exception. It can find zero or more with the alternative filter method, which returns a list:
objects = ModelClass.objects.filter(search=value)
if len(objects) == 1:
    object = objects[0]
    # do stuff
Am I overly averse to exceptions? To me the exception seems a little wasteful; at a guess, a quarter to a half of the time will be 'exceptional'. I'd much prefer a function that returns None on failure. Would I be better off using Django's filter method and processing the list myself?

Believe it or not, this actually is an issue that is a bit different in each language. In Python, exceptions are regularly thrown by the language itself for events that aren't exceptional. Thus I think that the "you should only throw exceptions under exceptional circumstances" rule doesn't quite apply. I think the results you'll get on this forum will be slanted towards that point of view, though, considering the high number of .NET programmers (see this question for more info on that).
At a very minimum, I'd better not catch anyone who sticks to that rule ever using a generator or a for loop in Python (both of which involve throwing exceptions for non-exceptional circumstances).

There's a big schism in programming languages around the use of exceptions.
The majority view is that exceptions should be exceptional. In most languages with exceptions, transfer of control by exception is considerably more expensive than by procedure return, for example.
There is a strong minority view that exceptions are just another control-flow construct, and they should be cheap. The Standard ML of New Jersey and Objective Caml compilers subscribe to that view. If you have cheap exceptions you can code some fancy backtracking algorithms in ways that are more difficult to code cleanly using other mechanisms.
I've seen this debate repeated many times for new language designs, and almost always the winner decides that exceptions should be expensive and rare. When you care about performance, you'd be wise to program with this in mind.

The clue is in the name - exceptions should be exceptional.
If you always expect the item will exist then use get, but if you expect it not to exist a reasonable proportion of the time (i.e. it not existing is an expected result rather than an exceptional result) then I'd suggest using filter.
So, seeing as you indicated that between 1 in 2 and 1 in 4 are expected not to exist, I'd definitely write a wrapper around filter, as that's clearly not an exceptional case.
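A minimal sketch of such a wrapper, mirroring the filter-based version in the question; the helper name get_or_none is made up for illustration, and ModelClass and value are the placeholders from the question:

def get_or_none(model_class, **kwargs):
    """Return the single matching object, or None if there is no single match."""
    results = model_class.objects.filter(**kwargs)
    if len(results) == 1:
        return results[0]
    return None

obj = get_or_none(ModelClass, search=value)
if obj is not None:
    # do stuff
    ...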

I agree with the other answer, but I wanted to add that exception passing like this is going to give you a very noticeable performance hit. It's highly recommended that you check whether the result exists (if that's what filter does) instead of relying on exceptions.
Edit:
In response to a request for numbers on this, I ran this simple test...
import time

# (Python 2) time a million calls, swallowing any exception that gets raised
def timethis(func, list, num):
    st = time.time()
    for i in xrange(0, 1000000):
        try:
            func(list, num)
        except:
            pass
    et = time.time()
    print "Took %gs" % (et - st)

# explicit bounds check that returns None instead of raising IndexError
def check(list, num):
    if num < len(list):
        return list[num]
    else:
        return None

a = [1]
timethis(check, a, 1)                # bounds check: no exception is ever raised
timethis(lambda x, y: x[y], a, 1)    # index out of range: IndexError on every call
And the output was..
Took 0.772558s
Took 3.4512s
HTH.

The answer will depend on the intent of the code. (I'm not sure what your code sample was meant to do; the pass in the exceptional case is confusing: if the lookup raises, object is never assigned, so the if object: check that follows will fail with a NameError rather than leave you an object variable to work with.)
Whether to use exceptions or a method that treats the case as non-exceptional is a matter of taste in many cases. Certainly, if the real code in the except clause is as complicated as the filter method you'd have to use to avoid the exception, then use the filter method. Simpler code is better code.

Aversion to exceptions is a matter of opinion. However, if there's reason to believe that a function or method is going to be called many times or called rapidly, exceptions will cause a significant slowdown. I learned this from my previous question, where I had been relying on a thrown exception to return a default value rather than doing parameter checking to return that default.
Of course, exceptions can still exist for any reason, and you shouldn't be afraid to use or throw one if necessary - especially ones that could potentially break the normal flow of the calling function.

I disagree with the above comments that an exception is inefficient in this instance, especially since it's being used in an I/O bound operation.
Here's a more realistic example using Django with an in-memory sqlite database. Each of 100 different queries was run, and the times were averaged over 100 runs. Although I doubt it would matter, I also changed the order of execution.
With ObjectDoesNotExist... 0.102783939838
Without exception ........ 0.105322141647
With ObjectDoesNotExist... 0.102762134075
Without exception ........ 0.101523952484
With ObjectDoesNotExist... 0.100004930496
Without exception ........ 0.107946784496
You can instrument this in your own Django environment, but I doubt if your time is well spent avoiding this exception.
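For context, a rough sketch of the two styles such a benchmark compares; ModelClass and search are the placeholders from the original question, not part of the benchmark code:

from django.core.exceptions import ObjectDoesNotExist

# style 1: rely on the exception
try:
    obj = ModelClass.objects.get(search=value)
except ObjectDoesNotExist:
    obj = None

# style 2: avoid the exception by filtering first
results = ModelClass.objects.filter(search=value)
obj = results[0] if results else None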

Related

Is exception handling always expensive?

I've been told time and again that exception handling for operations like determining type is bad form, since exceptions are always computationally expensive. Nevertheless, I've seen posts (especially Python-related ones, such as the top reply to this one) that advise using exception handling for exactly that purpose.
I was wondering, then, if throwing and catching exceptions is to be avoided universally, because it is always computationally expensive, or whether some languages, such as Python, handle exceptions better and it is permissible to use exception handling more liberally.
You cannot give general advice such as "exceptions are expensive and therefore they should be avoided" for all programming languages.
As you suspected, in Python, exceptions are used more liberally than in other languages such as C++. Instead of raw performance, Python puts the emphasis on code readability. There is an idiom, "It's easier to ask for forgiveness than for permission", meaning: it's easier to just attempt what you want to achieve and catch an exception than to check for compatibility first.
Forgiveness:
try:
    do_something_with(dict["key"])
except (KeyError, TypeError):
    # Oh well, there is no "key" in dict, or it has the wrong type
    pass
Permission:
if hasattr(dict, "__getitem__") and "key" in dict:
    do_something_with(dict["key"])
else:
    # Oh well
    pass
Actually, in Python, iteration with for loops is implemented with exceptions under the hood: The iterable raises a StopIteration exception when the end is reached. So even if you try to avoid exceptions, you will use them anyway all the time.
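A for loop is roughly equivalent to the following hand-written loop over the iterator protocol, a minimal sketch:

items = [1, 2, 3]
it = iter(items)
while True:
    try:
        x = next(it)
    except StopIteration:
        break        # the "exceptional" path, taken on every normal loop exit
    print(x)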
I think a lot of it comes down to specific use cases.
In the example you posted, the poster explicitly refers to the "duck-typing" aspect of Python. Essentially, you use the exceptions generated to determine whether a variable has a particular capability or set of capabilities instead of checking manually (since Python allows a lot of dynamic operations, a class might provide "split" through __getattr__, which makes it impossible to check with a standard if statement, so you try to use split, and if that fails you go to plan B).
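A minimal sketch of that pattern; the function name and the fallback behaviour are made up purely for illustration:

def get_words(value):
    try:
        return value.split()      # works for str and anything sufficiently string-like
    except AttributeError:
        return list(value)        # plan B: treat it as a plain sequence of items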
In a lot of Python applications, also, we tend not to worry a lot about some of the performance details that might matter in other applications, so any overhead from exceptions is "trivial."
In coding my module tco, I encountered this question. In version 1.0.1alpha, I implemented three versions of the same class. The module is intended for computational purposes, so I think I can give some answer to your question.
Computing quick operations with the class that works without exceptions was twice as fast as with the two classes that work with exceptions. But you have to know that such a test may be meaningless: once you compute anything interesting between the exceptions, the difference becomes very tiny. Nobody will seriously care about the difference in time between an empty loop and an empty system raising and catching exceptions!
For that reason, I decided to remove the first system when releasing the 1.1 version of my module. Though it was a little slower, I found the system relying on exceptions more robust, and I focused on it.

Using exception handler to extend the functionality of default methods (Python taken as example)

So, I'm new to programming and my question is:
Is it considered a bad practice to use an exception handler to override error-message-behaviour of default methods of a programming language with custom functionality? I mean, is it ethically correct to use something like this (Python):
def index(p, val):
    try:
        return p.index(val)
    except ValueError:
        return -1
Maybe I wasn't precise enough. What I meant is: is it a normal or not-recommended practice to consider thrown exceptions (well, I guess it's not applicable everywhere) as legit and valid case-statements?
Like, the idea of the example given above is not to make a custom error message, but to suppress possible errors as they happen, without warning either users or other program modules that something is going wrong.
I think that doing something like this is OK as long as you use function names which make it clear that the user isn't using a built-in. If the user thinks they're using a builtin and all of a sudden index returns -1, imagine the bugs that could happen ... They do:
a[index(a,'foo')]
and all of a sudden they get the last element in the list (which isn't foo).
As a very important rule, though: only handle exceptions that you know what to do with. Your example above does this nicely. Kudos.
This is perfectly fine, but it depends on what kind of condition you are checking. It is the developer's responsibility to check for these conditions. Some exceptions are fatal for the program and some are not; it all depends on the context of the method.
With a language like Python, I would argue it is much better to give a custom error message for the function than the generic ValueError exception.
However, for your own applications, having this functionality inside your methods can make code easier to read and maintain.
For other languages, the same is true, but you should try to make sure that you don't mimic another function with different behaviour whilst hiding its exceptions.
If you know exactly where your errors will occur and what causes them, then there is nothing wrong with this kind of handling, because you are just taking appropriate action for something going wrong that you know can happen.
For example, if you are trying to divide two numbers and you know the denominator may be 0, in which case you can't divide, then you can catch that case and use a custom message to describe the problem.
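A minimal sketch of that idea; the function name and the exact message are made up for illustration:

def safe_divide(numerator, denominator):
    try:
        return numerator / denominator
    except ZeroDivisionError:
        # re-raise with a custom, more descriptive message
        raise ValueError("cannot divide %r by zero" % numerator)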

try except and programming etiquette

I'm making a GUI and I'm finding myself using a lot of try/except statements. My question is: should I be redesigning my program to use fewer try/except statements, or is try/except a good practice to be using in Python programs? I like them because they're informative and make debugging, for me, easier. Just wondering what real developers think about it.
Thanks
One of Python's idioms is: It's easier to ask for forgiveness than for permission. (Python Glossary, have a look at EAFP).
So it's perfectly acceptable to structure program flow with exception handling (and reasonably fast too, compared to other languages). It fits Python's dynamic nature nicely imho.
One large consideration when deciding whether to catch an exception is what legitimate errors you could be hiding.
For example, consider this code:
try:
    name = person['name']
except KeyError:
    name = '<none provided>'
This is reasonable if person is known to be a dict… But if person can possibly be something more complex, for example:
class Person(object):
    def __getitem__(self, key):
        return do_something(key)
You run the risk of accidentally catching an exception which was the result of a legitimate bug (for example, a bug in do_something).
And I feel the need to mention: you should never, ever (except under a couple of very specific circumstances) use a "naked" except:.
My personal preference is to avoid catching exceptions when ever possible (for example, using name = person.get('name', '<none provided>')), both because I find it cleaner and I dislike the look of try/catch blocks.
It's hard to give a general answer on whether you should use less exception handling... you can definitely do too much and too little. It's almost certainly wrong to be catching every possible exception and also almost certainly wrong to be doing no exception handling.
Here are some things to think about:
It's usually a good idea to catch the exception if you can programmatically do something about the error condition. E.g. your code is trying to make a web request and if it fails, you want to retry. In that situation you want to catch the exception and then do the retry.
Think carefully about where to catch an exception. In some low-level function, can you reasonably do something about the error? E.g. let's say you have a function that writes out a file and it fails with a permissions error. Probably not much you can do about it there but maybe at a higher level you can catch the exception and display a message to the user instructing them to try to save the file somewhere else.
It almost never makes sense to catch "fatal" types of errors e.g. out of memory, stack overflow etc. At least not low down in your code - it might make sense to have a top-level handler that tries to gracefully exit.
Don't "swallow" exceptions that really should bubble up i.e. don't have an except clause that doesn't re-raise the exception if your calling function should really see it. This can hide serious bugs.
For more, do a Google search for "exception handling guidelines". Many of the results you see will be for other languages/environments, but the concepts apply just as well.
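As an illustration of the first point above (catching an exception you can actually do something about, such as retrying a failed web request), here is a minimal Python 3 sketch; the function name and the retry policy are made up for illustration:

import time
import urllib.request
from urllib.error import URLError

def fetch_with_retry(url, attempts=3, delay=1.0):
    """Fetch a URL, retrying a couple of times on transient network errors."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as response:
                return response.read()
        except URLError:
            if attempt == attempts - 1:
                raise              # out of retries: let the caller see the error
            time.sleep(delay)      # back off briefly, then try again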

How to handle "duck typing" in Python?

I usually want to keep my code as generic as possible. I'm currently writing a simple library and being able to use different types with my library feels extra important this time.
One way to go is to force people to subclass an "interface" class. To me, this feels more like Java than Python and using issubclass in each method doesn't sound very tempting either.
My preferred way is to use the object in good faith, but this will raise some AttributeErrors. I could wrap each dangerous call in a try/except block. This, too, seems kind of cumbersome:
def foo(obj):
    ...
    # it should be able to sleep
    try:
        obj.sleep()
    except AttributeError:
        # handle error
        ...
    # it should be able to wag its tail
    try:
        obj.wag_tail()
    except AttributeError:
        # handle this error as well
        ...
Should I just skip the error handling and expect people to only use objects with the required methods? If I do something stupid like [x**2 for x in 1234] I actually get a TypeError and not an AttributeError (ints are not iterable), so there must be some type checking going on somewhere -- what if I want to do the same?
This question will be kind of open ended, but what is the best way to handle the above problem in a clean way? Are there any established best practices? How is the iterable "type checking" above, for example, implemented?
Edit
While AttributeErrors are fine, the TypeErrors raised by native functions usually give more information about how to solve the errors. Take this for example:
>>> ['a', 1].sort()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < str()
I'd like my library to be as helpful as possible.
I'm not a Python pro, but I believe that unless you can try an alternative for when the parameter doesn't implement a given method, you shouldn't prevent exceptions from being thrown. Let the caller handle these exceptions. Otherwise, you would be hiding problems from the developers.
As I have read in Clean Code, if you want to search for an item in a collection, don't test your parameters with issubclass (of a list) but prefer to call getattr(l, "__contains__"). This will give someone who is using your code a chance to pass a parameter that isn't a list but which has a __contains__ method defined and things should work equally well.
So, I think that you should code in an abstract, generic way, imposing as few restrictions as you can. For that, you'll have to make the fewest assumptions possible. However, when you face something that you can't handle, raise an exception and let the programmer know what mistake he made!
If your code requires a particular interface, and the user passes an object without that interface, then nine times out of ten, it's inappropriate to catch the exception. Most of the time, an AttributeError is not only reasonable but expected when it comes to interface mismatches.
Occasionally, it may be appropriate to catch an AttributeError for one of two reasons. Either you want some aspect of the interface to be optional, or you want to throw a more specific exception, perhaps a package-specific exception subclass. Certainly you should never prevent an exception from being thrown if you haven't honestly handled the error and any aftermath.
So it seems to me that the answer to this question must be problem- and domain- specific. It's fundamentally a question of whether using a Cow object instead of a Duck object ought to work. If so, and you handle any necessary interface fudging, then that's fine. On the other hand, there's no reason to explicitly check whether someone has passed you a Frog object, unless that will cause a disastrous failure (i.e. something much worse than a stack trace).
That said, it's always a good idea to document your interface -- that's what docstrings (among other things) are for. When you think about it, it's much more efficient to throw a general error for most cases and tell users the right way to do it in the docstring, than to try to foresee every possible error a user might make and create a custom error message.
A final caveat -- it's possible that you're thinking about UI here -- I think that's another story. It's good to check the input that an end user gives you to make sure it isn't malicious or horribly malformed, and provide useful feedback instead of a stack trace. But for libraries or things like that, you really have to trust the programmer using your code to use it intelligently and respectfully, and to understand the errors that Python generates.
If you just want the unimplemented methods to do nothing, you can try something like this, rather than the multi-line try/except construction:
getattr(obj, "sleep", lambda: None)()
However, this isn't necessarily obvious as a function call, so maybe:
hasattr(obj, "sleep") and obj.sleep()
or if you want to be a little more sure before calling something that it can in fact be called:
hasattr(obj, "sleep") and callable(obj.sleep) and obj.sleep()
This "look-before-you-leap" pattern is generally not the preferred way to do it in Python, but it is perfectly readable and fits on a single line.
Another option of course is to abstract the try/except into a separate function.
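A sketch of that last option; the helper name call_if_able is made up for illustration:

def call_if_able(obj, method_name, *args, **kwargs):
    """Call obj.method_name(...) if it exists and is callable, otherwise return None."""
    method = getattr(obj, method_name, None)
    if callable(method):
        return method(*args, **kwargs)
    return None

# inside foo(obj) from the question:
#     call_if_able(obj, "sleep")
#     call_if_able(obj, "wag_tail")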
Good question, and quite open-ended. I believe typical Python style is not to check, either with isinstance or by catching individual exceptions. Certainly, using isinstance is quite bad style, as it defeats the whole point of duck typing (though using isinstance on primitives can be OK -- be sure to check for both int and long for integer inputs, and check for basestring for strings, the base class of str and unicode). If you do check, you should raise a TypeError.
Not checking is generally OK, as it typically raises either a TypeError or an AttributeError anyway, which is what you want. (Though it can delay those errors, making client code hard to debug.)
The reason you see TypeErrors is that primitive code raises it, effectively because it does an isinstance. The for loop is hard-coded to raise a TypeError if something is not iterable.
First of all, the code in your question is not ideal:
try:
    obj.wag_tail()
except AttributeError:
    ...
You don't know whether the AttributeError is from the lookup of wag_tail or whether it happened inside the function. What you are trying to do is:
try:
    f = getattr(obj, 'wag_tail')
except AttributeError:
    # handle the missing method
    ...
else:
    f()
Edit: kindall rightly points out that if you are going to check this, you should also check that f is callable.
In general, this is not Pythonic. Just call and let the exception filter down, and the stack trace is informative enough to fix the problem. I think you should ask yourself whether your rethrown exceptions are useful enough to justify all of this error-checking code.
The case of sorting a list is a great example.
List sorting is very common,
passing unorderable types happens for a significant proportion of those, and
throwing AttributeError in that case is very confusing.
If those three criteria apply to your problem (especially the third), I agree with building a pretty exception rethrower.
You have to balance this against the fact that throwing these pretty errors will make your code harder to read, which statistically means more bugs in your code. It's a question of weighing the pros and the cons.
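A minimal sketch of such a rethrower, reusing obj and wag_tail from the question; the helper name and the message are made up for illustration:

def require_method(obj, method_name):
    """Turn a missing attribute into a more descriptive TypeError."""
    try:
        return getattr(obj, method_name)
    except AttributeError:
        raise TypeError("%r does not support %s(); pass an object with a %s() method"
                        % (obj, method_name, method_name))

# e.g. inside foo(obj) from the question:
#     require_method(obj, "wag_tail")()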
If you ever need to check for behaviours (like __real__ and __contains__), don't forget to use the Python abstract base classes found in collections, io, and numbers.
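For instance, a minimal sketch assuming Python 3, where the container ABCs live in collections.abc:

from collections.abc import Container
from numbers import Real

print(isinstance([1, 2, 3], Container))   # True: lists define __contains__
print(isinstance(3.5, Real))              # True: floats are real numbers
print(isinstance(3 + 4j, Real))           # False: complex numbers are not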

Is it better to use exceptions in a "validation" class or return status codes?

Suppose I'm creating a class to validate a number, like "Social Security" in US (just as an example of a country-based id). There are some rules to validate this number that comes from an input in a html form in a website.
I'm thinking about creating a simple class in Python with a public validate method. This validate method simply returns True or False. It will call other small private methods (for example, a different rule for the first 'x' digits), each one returning True or False as well.
Since this is really simple, I'm thinking of using boolean status codes only (whether it's valid or not; I don't need meaningful messages about what is wrong).
I've been reading some articles about using exceptions, and I would like to know your opinion on my situation: would using exceptions be a good idea?
This is a very old question but since the only answer - IMO - is not applicable to Python, here comes my take on it.
Exceptions in Python are something many programmers new to the language have difficulty dealing with. Compared to other languages, Python differs significantly in how exceptions are used: in fact, Python routinely uses exceptions for flow control.
The canonical example is the for loop: you will certainly agree that there is nothing "uniquely bizarre" about a loop exhausting its iterations (indeed, that's what all loops do, unless broken)... yet rather than checking in advance whether there are still values to process, Python keeps trying to read values from the iterable and, failing that, raises a StopIteration exception, which in turn is caught by the for statement and makes the code exit the loop.
Furthermore, it is idiomatic in Python to go by the EAFP (it's Easier to Ask for Forgiveness than Permission = try-except) rather than LBYL (Look Before You Leap = if not A, B or C then).
In this regard, csj's answer is correct for C or Java but is irrelevant for Python (whose exceptions are seldom "exceptional" in nature).
Another factor to consider, though, is the scenario in which user data is invalid but you fail to act on the validation function's outcome:
with a return value, failing to process the False result means your invalid data is sent down the pipeline;
conversely, if you raise an exception, failing to catch it results in the exception propagating up your stack, eventually halting your code.
While the second option might seem scary at first, it is still the right road to take: if data is invalid, there is no sense in passing it further down the line... it will most probably introduce difficult-to-track bugs later on in the flow, and you will also have missed the chance to fix a bug in your code (failing to act on invalid data).
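A minimal sketch of the raise-an-exception style; the class name, the simplistic nine-digit rule, and the messages are all made up for illustration:

class ValidationError(Exception):
    """Raised when a value fails validation."""

def validate_ssn(value):
    digits = value.replace("-", "")
    if not digits.isdigit() or len(digits) != 9:
        raise ValidationError("not a valid social security number: %r" % value)
    return digits

clean = validate_ssn("123-45-6789")   # ok
clean = validate_ssn("123-45")        # raises ValidationError: the program halts
                                      # loudly instead of silently passing bad data on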
Again, using exceptions is the Pythonic way to go (though this does not apply to most other languages), as also stated in this other answer and in the Zen of Python:
Errors should never pass silently.
Unless explicitly silenced.
HTH!
If an input is either valid or not, then just return the boolean. There's nothing exceptional about a validation test encountering an invalid value.
