I know 'tryelse' is not a real thing, but my question is about the best way to write out this logic. Given the following example, foo and bar are functions that could break.
I want aa to be foo(), but if that breaks, I want it to become bar(), but if that one breaks too, then set aa to 0 as a default.
try:
    aa = foo()
elsetry:
    aa = bar()
except e:
    aa = 0
Restating my question, what is the best real way to write out this logic in python?
If you have either a long chain of these, or a dynamic list of alternatives, you probably want to use a for loop:
for func in funcs:
    try:
        aa = func()
        break
    except:
        pass
else:
    aa = 0
Note that either way, you probably don't really want a bare except here. Usually you're only expecting some specific class of errors (e.g., ValueError or LookupError), and anything else shouldn't mean "silently try the next one" but "show the programmer that something unexpected went wrong". But you know how to fix that.
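Concretely, the narrowed version of the loop might look like this (the sample callables here are just stand-ins):

```python
# Alternatives to try, in order; the first two fail in expected ways.
funcs = [
    lambda: int("oops"),    # raises ValueError
    lambda: {}["missing"],  # raises KeyError (a LookupError)
    lambda: 42,             # succeeds
]

for func in funcs:
    try:
        aa = func()
        break
    except (ValueError, LookupError):  # only the errors we expect
        pass
else:
    aa = 0  # every alternative failed

print(aa)  # 42
```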
You could of course wrap that up in a function if you need to use it repeatedly:
def try_funcs(*funcs, defval=0, ok_exceptions=Exception):
    for func in funcs:
        try:
            return func()
        except ok_exceptions:  # ok_exceptions can also be a tuple of exception classes
            pass
    return defval
Of course, as mentioned in the comments, this does require that everything you want to try be a function of no arguments. What if you want to try spam(42), then eggs(beans) if that fails? Or something that isn't even a function call but some other expression, like foo[bar]?
That's the same as the general case that occurs all over Python: you use partial to bind in the arguments for the first case, or write a wrapper function with lambda for the second:
from functools import partial

result = try_funcs(partial(spam, 42), partial(eggs, beans), lambda: foo[bar])
However, if you just have a static two or three alternatives to try, Simon Visser's simple nested answer is much clearer.
If you're asking why the language doesn't have your tryelse… well, I know it's come up on the python-ideas/-dev lists and elsewhere a few times, and been discussed, so you might want to search for more detailed/authoritative answers. But I think it comes down to this: While it's theoretically possible to come up with cases that might look cleaner with a tryelse, every example anyone's ever given is either fine as-is, or should obviously be refactored, so there's no compelling argument to change the grammar, reserve a new keyword, etc.
The nested approach is still best:
try:
    aa = foo()
except Exception:
    try:
        aa = bar()
    except Exception:
        aa = 0
Despite trying to do it with less nesting, the above expresses what you wish to do and it's clear to the reader. If you try to nest more it becomes awkward to write and that's time to rethink your approach. But nesting two try/excepts is fine.
You can also write:
try:
    aa = foo()
except Exception:
    aa = None
if aa is None:
    try:
        aa = bar()
    except Exception:
        aa = 0
but somehow that doesn't look right (to me at least). It would also be incorrect in case foo() can return None as a valid value for aa.
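One way to avoid that ambiguity, if you do want the flag-style version, is a private sentinel object that no real return value can collide with. A sketch, with stand-in foo and bar:

```python
_MISSING = object()  # unique sentinel; no function can return this exact object

def foo():
    raise RuntimeError("foo failed")  # stand-in for a failing foo()

def bar():
    return None  # here None is a perfectly valid result

try:
    aa = foo()
except Exception:
    aa = _MISSING

if aa is _MISSING:
    try:
        aa = bar()
    except Exception:
        aa = 0

# aa is now None, correctly taken from bar() rather than mistaken for a failure
```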
I think this is the best way to do that:
try:
    aa = foo()
except Exception:
    try:
        aa = bar()
    except Exception:
        aa = 0
This is analogous to what you do when you don't have an elif statement, like in JavaScript:

if if_condition:
    code
else:
    if elif_condition:
        code
    else:
        code
Related
I often have functions that return multiple outputs which are structured like so:
def f(vars):
    ...
    if something_unexpected():
        return None, None
    ...
    # normal return
    return value1, value2
In this case, there might be an infrequent problem that something_unexpected detects (say, an empty dataframe when the routine expects at least one row of data), and so I want to return a value that tells the caller to ignore the output and skip over it. If this were a single-return function then returning None once would seem fine, but when I'm returning multiple values it seems sloppy to return multiple copies of None just so the caller has the right number of values to unpack.
What are some better ways of coding up this construct? Is simply having the caller use a try-except block and the function raising an exception the way to go, or is there another example of good practice to use here?
Edit: Of course I could return the pair of outputs into a single variable, but then I'd have to call the function like
results = f(inputs)
if results is None:
    continue
varname1, varname2 = results[0], results[1]
rather than the more clean-seeming
varname1, varname2 = f(inputs)
if varname1 is None:
    continue
Depends on where you want to handle this behavior, but exceptions are a pretty standard way to do this. Without exceptions, you could still return None, None:
a, b = f(inputs)
if None in (a, b):
    print("Got something bad!")
    continue
Though, I think it might be better to raise in your function and catch it instead:
def f():
    if unexpected:
        raise ValueError("Got empty values")
    else:
        return val1, val2

try:
    a, b = f()
except ValueError:
    print("bad behavior in f, skipping")
    continue
The best practice is to raise an exception:
if something_unexpected():
    raise ValueError("Something unexpected happened")
REFERENCES:
Explicit is better than implicit.
Errors should never pass silently.
Unless explicitly silenced.
PEP 20 -- The Zen of Python
This is a little difficult to explain, so let's hope I'm expressing the problem coherently:
Say I have this list:
my_list = ["a string", 45, 0.5]
The critical point to understand in order to see where the question comes from is that my_list is generated by another function; I don't know ahead of time anything about my_list, specifically its length and the datatype of any of its members.
Next, say that every time my_list is generated, there is a number of predetermined operations I want to perform on it. For example, I want to:
my_text = my_list[1]+"hello"
some_var = my_list[10]
mini_list = my_list[0].split('s')[1]
my_sum = my_list[7]+2
etc. The important point here is that it's a large number of operations.
Obviously, some of these operations would succeed with any given my_list and some would fail and, importantly, those which fail will do so with an unpredictable Error type; but I need to run all of them on every generation of my_list.
One obvious solution would be to use try/except on each of these operations:
try:
    my_text = my_list[1]+"hello"
except:
    my_text = "None"

try:
    some_var = my_list[10]
except:
    some_var = "couldn't do it"
etc.
But with a large number of operations, this gets very cumbersome. I looked into the various questions about multiple try/excepts, but unless I'm missing something, they don't address this.
Based on someone's suggestion (sorry, lost the link), I tried to create a function with a built-in try/except, create another list of these operations, and send each operation to the function. Something along the lines of
def careful(op):
    try:
        return op
    except:
        return "None"
And use it with, for example, the first operation:
my_text = careful(my_list[1]+"hello")
The problem is python seems to evaluate the careful() argument before it's sent out to the function and the error is generated before it can be caught...
So I guess I'm looking for a form of a ternary operator that can do something like:
my_text = my_list[1]+"hello" if (this doesn't cause any type of error) else "None"
But, if one exist, I couldn't find it...
Any ideas would be welcome and sorry for the long post.
Maybe something like this?
def careful(op, default):
    # op must be a zero-argument callable, so that evaluation
    # is deferred until inside the try
    ret = default
    try:
        ret = op()
    except Exception:
        pass
    return ret
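The key point is that op must be a callable, so evaluation is deferred until inside the try. A self-contained sketch of how such a helper gets used, with the question's sample list:

```python
def careful(op, default):
    # op is a zero-argument callable; it only runs here, inside
    # the try, so any error it raises can be caught.
    try:
        return op()
    except Exception:
        return default

my_list = ["a string", 45, 0.5]

my_text = careful(lambda: my_list[1] + "hello", "None")    # TypeError: 45 + str
some_var = careful(lambda: my_list[10], "couldn't do it")  # IndexError
```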
If you must do this, consider keeping a collection of the operations as strings and calling exec on them in a loop
actions = [
    'my_text = my_list[1]+"hello"',
    'some_var = my_list[10]',
    'mini_list = my_list[0].split("s")[1]',
    'my_sum = my_list[7]+2',
]
If you make this collection a dict, you may also assign a default
Note that if an action default (or part of an action string) is meant to be a string, it must be quoted twice. Consider using triple-quoted strings for this if you already have complex escaping, such as a raw string or a string representing a regular expression:
{
    "foo = bar": r"""r'[\w]+baz.*'"""
}
complete example:
>>> actions_defaults = {
... 'my_text = my_list[1]+"hello"': '"None"',
... 'some_var = my_list[10]': '"couldn\'t do it"',
... 'mini_list = my_list[0].split("s")[1]': '"None"',
... 'my_sum = my_list[7]+2': '"None"',
... }
>>>
>>> for action, default in actions_defaults.items():
... try:
... exec(action)
... except Exception: # consider logging error
... exec("{} = {}".format(action.split("=")[0], default))
...
>>> my_text
'None'
>>> some_var
"couldn't do it"
Other notes
this is pretty evil
declaring your variables up front with their default values is probably better/clearer (then it is sufficient to just pass in the except block, since only the assignment failed)
you may run into weird scoping and need to access some vars via locals()
This sounds like an XY Problem
If you can make changes to the source logic, returning a dict may be a much better solution. Then you can determine if a key exists before doing some action, and potentially also look up the action which should be taken if the key exists in another dict.
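For instance, if the generating function could return a dict instead of a list, the fallbacks become plain .get() calls; the function name and keys below are purely illustrative:

```python
def generate():
    # Hypothetical replacement for the function that produced my_list.
    return {"name": "a string", "count": 45, "ratio": 0.5}

data = generate()
my_text = data.get("name", "") + "hello"        # key present
some_var = data.get("tenth", "couldn't do it")  # key absent -> default
```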
This question already has answers here:
Callable as the default argument to dict.get without it being called if the key exists
(6 answers)
Closed 6 years ago.
It seems like the fallback is called even if the key is present in the dictionary. Is this intended behaviour? How can I work around it?
>>> i = [1,2,3,4]
>>> c = {}
>>> c[0]= 0
>>> c.get(0, i.pop())
0
>>> c.get(0, i.pop())
0
>>> c.get(0, i.pop())
0
>>> c.get(0, i.pop())
0
>>> c.get(0, i.pop())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: pop from empty list
When doing c.get(0, i.pop()), the i.pop() part gets evaluated before its result is passed to c.get(...). That's why the error appears once the list i is empty due to the previous .pop() calls on it.
To get around this, you should either check that the list is not empty before trying to pop an element from it, or just try it and catch the possible exception:
if not i:
    # do not call i.pop(); handle the empty-list case in some way
    ...
else:
    default_val = i.pop()
or
try:
    default_val = c.get(0, i.pop())
except IndexError:
    # gracefully handle the case in some way, e.g. by exiting
    ...
The first approach is called LBYL ("look before you leap"), while the second is referred to as EAFP ("easier to ask for forgiveness than permission"). The latter is usually preferred in Python and considered more Pythonic, because the code does not get cluttered with a lot of safeguarding checks, although the LBYL approach has its merits, too, and can be just as readable (use-case dependent).
This is the expected results because you're directly invoking i.pop() which gets called before c.get().
The default argument to dict.get does indeed get evaluated before the dictionary checks if the key is present or not. In fact, it's evaluated before the get method is even called! Your get calls are equivalent to this:
default = i.pop() # this happens unconditionally
c.get(0, default)
default = i.pop() # this one too
c.get(0, default)
#...
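If the goal is simply to avoid evaluating an expensive default on a hit, one common workaround (and the gist of the linked duplicate) is a small helper that takes a callable and only invokes it on a miss. A sketch:

```python
def get_or_call(d, key, func):
    # Return d[key] if present; only call func() on a miss.
    try:
        return d[key]
    except KeyError:
        return func()

i = [1, 2, 3, 4]
c = {0: 0}

value = get_or_call(c, 0, i.pop)  # hit: returns 0, i.pop() never runs
# i still has all four elements afterwards
```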
If you want to specify a callable that will be used only to fill in missing dictionary values, you might want to use a collections.defaultdict. It takes a callable which is used exactly that way:
c = defaultdict(i.pop) # note, no () after pop
c[0] = 0
c[0] # use regular indexing syntax, won't pop anything
Note that unlike a get call, the value returned by the callable will actually be stored in the dictionary afterwards, which might be undesirable.
There is no real way to work around this except using if...else!
In your case, this code would work:
c[0] if 0 in c else i.pop()
This is intended behavior because i.pop() is an expression that is evaluated before c.get(...) is. Imagine what would happen if that weren't the case. You might have something like this:
def myfunction(number):
    print("Starting work")
    # Do long, complicated setup
    # Do long, complicated thing with number

myfunction(int('kkk'))
When would you expect int('kkk') to be evaluated? If it were deferred until myfunction() actually used the parameter, the ValueError would only surface after the long, complicated setup. And if you were to write x = int('kkk'), when would you expect the ValueError? The right side is evaluated first, so the ValueError occurs immediately and x never gets defined.
There are a couple possible workarounds:
c.get(0) or i.pop()
That will probably work in most cases, but won't work if c.get(0) might return a falsy value that is not None. A safer way is a little longer:
try:
    result = c[0]
except KeyError:
    result = i.pop()
Of course, we like EAFP (Easier to Ask Forgiveness than Permission), but you could ask permission:
c[0] if 0 in c else i.pop()
(Credits to #soon)
Both arguments are evaluated before the get function is called. Thus the size of the list is decreased by 1 on every call, even if the key is present.
Try something like

if 0 in c:
    print(c[0])
else:
    print(i.pop())
I want to summarize the following code. What it should do is check if the variable in the calculation is assigned. If not, then the result will be zero. Because I have hundreds of calculations like these I don't want to repeat the try-except for every calculation.
How could I do that?
a = 1
b = 2
d = 3
f = 2
try:
    ab = a + b
except:
    ab = 0

try:
    ac = a - c
except:
    ac = 0

try:
    bg = b / g
except:
    bg = 0
Write a function to do it, using a lambda (a one-line function) to defer the evaluation of the variables in case one of them doesn't exist:
def call_with_default(func, default):
    try:
        return func()
    except NameError:  # for names that don't exist
        return default

ab = call_with_default(lambda: a+b, 0)
# etc.
You might benefit by using some sort of data structure (e.g. list or dictionary) to contain your values rather than storing them in individual variables; it's possible you could then use loops to do all these calculations instead of writing them all individually.
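For example (the names here are illustrative), the values could live in a dict and the formulas in a table of callables, so a single loop applies the same fallback to every calculation:

```python
values = {"a": 1, "b": 2, "d": 3, "f": 2}  # note: no "c" or "g" defined

calculations = {
    "ab": lambda v: v["a"] + v["b"],
    "ac": lambda v: v["a"] - v["c"],  # "c" missing -> KeyError
    "bg": lambda v: v["b"] / v["g"],  # "g" missing -> KeyError
}

results = {}
for name, calc in calculations.items():
    try:
        results[name] = calc(values)
    except KeyError:
        results[name] = 0

print(results)  # {'ab': 3, 'ac': 0, 'bg': 0}
```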
If you have a bunch of variables that might not even be defined, you probably don't really have a bunch of variables.
For example, if you're trying to build an interactive interpreter, where the user can create new variables, don't try to save each user variable as a global variable of the same name (if for no other reason than safety—what happens if the user tries to create a variable named main and erases your main function?). Store a dictionary of user variables.
Once you do that, the solutions suggested by Alexey and kindall will work:
def add_default(first, second, default):
    try:
        return variables[first] + variables[second]
    except KeyError:
        return default

variables['ab'] = add_default('a', 'b', 0)
If you really do need to mix in your code and user code at the same level, you can do it, by using globals() itself as your dictionary:
def add_default(first, second, default):
    try:
        return globals()[first] + globals()[second]
    except KeyError:
        return default

ab = add_default('a', 'b', 0)
However, using globals this way is almost always a sign that you've made a major design error earlier, and the right thing to do is back up until you find that error…
Meanwhile, from a comment:
I create a list of all my variables and loop through them if they have a value assigned or not. In case they have not I will set them to float('nan').
There's no way to create a list of variables (except, of course, by referencing them by name off globals()). You can create a list of values, but that won't do you any good, because there are no values for the undefined variables.
This is yet another sign that what you probably want here is not a bunch of separate variables, but a dictionary.
In particular, you probably want a defaultdict:
variables = collections.defaultdict(lambda: float('nan'))
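With that in place, looking up an unassigned name yields nan instead of raising, so the arithmetic can proceed; a sketch:

```python
import collections
import math

variables = collections.defaultdict(lambda: float('nan'))
variables['a'] = 1
variables['b'] = 2

ab = variables['a'] + variables['b']  # 3
ac = variables['a'] - variables['c']  # 'c' was never set -> nan propagates
# math.isnan(ac) is True
```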
For a more generic case you may use lambdas (though not too graceful solution):
def lambda_default(func, default, *args):
    try:
        return func(*args)
    except Exception:
        return default

abc = lambda_default(lambda x, y, z: x + y * z, 0, a, b, c)
In case you have some commonly used functions, you may wrap them into one more def, of course:
import operator

def add_default(first, second, default):
    return lambda_default(operator.add, default, first, second)

ab = add_default(a, b, 0)
I am still new to Python and have been reviewing the following code not written by me.
Could someone please explain how the first instance of the variable "clean" is able to be called in the check_arguments function? It seems to me as though it is calling an as yet undefined variable. The code works, but shouldn't that call to "clean" produce an error?
To be clear the bit I am referring to is this.
def check_arguments(ages):
    clean, ages_list = parse_ages_argument(ages)
The full code is as follows...
def check_arguments(ages):
    clean, ages_list = parse_ages_argument(ages)
    if clean != True:
        print('invalid ages: %s' % ages)
    return ages_list

def parse_ages_argument(ages):
    clean = True
    ages_list = []
    ages_string_list = ages.split(',')
    for age_string in ages_string_list:
        if age_string.isdigit() != True:
            clean = False
            break
    for age_string in ages_string_list:
        try:
            ages_list.append(int(age_string))
        except ValueError:
            clean = False
            break
    ages_list.sort(reverse=True)
    return clean, ages_list

ages_list = check_arguments('1,2,3')
print(ages_list)
Python doesn't have a comma operator. What you are seeing is sequence unpacking.
>>> a, b = 1, 2
>>> print(a, b)
1 2
how the first instance of the variable "clean" is able to be be called in the check_arguments function?
This is a nonsensical thing to ask in the first place, since variables aren't called; functions are. Further, "instance" normally means "a value that is of some class type", not "occurrence of the thing in question in the code listing".
That said: the line of code in question does not use an undefined variable clean. It defines the variable clean (and ages_list at the same time). parse_ages_argument returns two values (as you can see by examining its return statement). The two returned values are assigned to the two variables, respectively.
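A minimal illustration of the same pattern (a toy function, not from the question's code):

```python
def min_max(nums):
    # The return statement packs two values into one tuple.
    return min(nums), max(nums)

pair = min_max([3, 1, 2])
# pair == (1, 3): a single tuple object

lo, hi = min_max([3, 1, 2])
# sequence unpacking binds both names at once: lo == 1, hi == 3
```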