Python Lambda function optional variable [duplicate] - python

For example, I have a basic method that will return a list of permutations.
import itertools
def perms(elements, set_length=elements):
    data = []
    for x in range(elements):
        data.append(x + 1)
    return list(itertools.permutations(data, set_length))
Now I understand that in its current state this code won't run because the second elements isn't defined, but is there an elegant way to accomplish what I'm trying to do here? If that's still not clear, I want to make the default setLength value equal to the first argument passed in. Thanks.

No, function keyword parameter defaults are determined when the function is defined, not when the function is executed.
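As a quick illustration of that rule (a minimal sketch; the names here are made up for the example), the default expression is evaluated exactly once, at the moment def runs:
n = 3

def make_range(stop=n):       # the default is bound to 3 right now, at definition time
    return list(range(stop))

n = 10                        # rebinding n later has no effect on the stored default
print(make_range())           # [0, 1, 2]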
Set the default to None and detect that:
def perms(elements, setLength=None):
    if setLength is None:
        setLength = elements
If you need to be able to specify None as an argument, use a different sentinel value:
_sentinel = object()

def perms(elements, setLength=_sentinel):
    if setLength is _sentinel:
        setLength = elements
Now callers can set setLength to None and it won't be seen as the default.
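Applied to the question's function, the fix might look like this (a runnable sketch of the same pattern, using the setLength spelling from the answers):
import itertools

def perms(elements, setLength=None):
    # default the permutation length to the number of elements
    if setLength is None:
        setLength = elements
    data = [x + 1 for x in range(elements)]
    return list(itertools.permutations(data, setLength))

print(perms(3))     # all 6 orderings of [1, 2, 3]
print(perms(3, 2))  # all 2-element permutations of [1, 2, 3]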

Because of the way Python handles bindings and default parameters...
The standard way is:
def perms(elements, setLength=None):
    if setLength is None:
        setLength = elements
And another option is:
def perms(elements, **kwargs):
    setLength = kwargs.pop('setLength', elements)
Although this requires you to explicitly use perms(elements, setLength='something else') if you don't want a default...
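For completeness, here is a sketch of that **kwargs variant applied to the question's function (my own adaptation, so treat the details as illustrative):
import itertools

def perms(elements, **kwargs):
    # pop the optional length, falling back to the first argument
    setLength = kwargs.pop('setLength', elements)
    if kwargs:
        raise TypeError('unexpected keyword arguments: ' + ', '.join(kwargs))
    data = [x + 1 for x in range(elements)]
    return list(itertools.permutations(data, setLength))

print(perms(3, setLength=2))  # callers must name the argument explicitly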

You should do something like:
def perms(elements, setLength=None):
    if setLength is None:
        setLength = elements

Answer 1:
The solution from above looks like this:
def cast_to_string_concat(a, b, c=None):
    c = a if c is None else c
    return str(a) + str(b) + str(c)
While this approach will solve a myriad of potential problems (and maybe yours!), I wanted to write a function where a possible input for variable "c" is indeed the singleton None, so I had to do more digging.
To explain that further, calling the function with the following variables:
A='A'
B='B'
my_var = None
Yields:
cast_to_string_concat(A, B, my_var)
>>> 'ABA'
Whereas the user might expect that since they called the function with three variables, then it should print the three variables, like this:
cast_to_string_concat(A, B, my_var)
>>> 'ABNone' # simulated and expected outcome
So this implementation ignores the third variable even when it was passed, which means the function no longer has any way to determine whether or not variable "c" was supplied.
So, for my use case, a default value of None would not quite do the trick.
For the answers that suggest this solution, read these:
Is there a way to set a default parameter equal to another parameter value?
Python shortcut for variable default value to be another variable value if it is None
Function argument's default value equal to another argument
What is the pythonic way to avoid default parameters that are empty lists?
But, if that doesn't work for you, then maybe keep reading!
A comment in the first link above mentions using a _sentinel defined by object().
So this solution drops None as the default and replaces it with an object() instance, used as an implied-private sentinel.
Answer 2:
_sentinel = object()

def cast_to_string_concat(a, b, c=_sentinel):
    c = a if c == _sentinel else c
    return str(a) + str(b) + str(c)
A='A'
B='B'
C='C'
cast_to_string_concat(A, B, C)
>>> 'ABC'
cast_to_string_concat(A,B)
>>> 'ABA'
So this is pretty awesome! It correctly handles the above edge case! See for yourself:
A='A'
B='B'
C = None
cast_to_string_concat(A, B, C)
>>> 'ABNone'
So, we're done, right? Is there any plausible way that this might not work? Hmm... probably not! But I did say this was a three-part answer, so onward! ;)
For the sake of completeness, let's imagine our program operates in a space where every possible scenario is indeed possible. (This may not be a warranted assumption, but I imagine that one could derive the value of _sentinel with enough information about the computer's architecture and the implementation of the chosen object. So, if you are willing, let us assume that is indeed possible, and let's imagine we decide to test that hypothesis, referencing _sentinel as defined above.)
_sentinel = object()

def cast_to_string_concat(a, b, c=_sentinel):
    c = a if c == _sentinel else c
    return str(a) + str(b) + str(c)
A='A'
B='B'
S = _sentinel
cast_to_string_concat(A, B, S)
>>> 'ABA'
Wait a minute! I entered three arguments, so I should see the string concatenation of the three of them together!
*cue entering the land of unforeseen consequences*
I mean, not actually. A response of: "That's negligible edge case territory!!" or its ilk is perfectly warranted.
And that sentiment is right! For this case (and probably most cases) this is really not worth worrying about!
But if it is worth worrying about, or if you just want the mathematical satisfaction of eliminating all edge cases you're aware of ... onward!
Exercise left to reader:
Deviating from this technique, you can use c=object() directly as the default; however, in honesty, I haven't gotten that approach to work for me. My investigation shows c == object() is False, and str(c) == str(object()) is also False, because each call to object() creates a distinct instance. That's why I'm using the implementation from Martijn Pieters.
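(For anyone wondering why the bare c=object() default misbehaves, a quick check - my own illustration - shows that every call to object() produces a distinct instance, so a fresh object() created inside the function can never match the one stored as the default:)
a = object()
b = object()
print(a == b)            # False - distinct instances never compare equal
print(a is b)            # False
print(str(a) == str(b))  # False - each repr embeds that instance's address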
Okay, after that long exercise, we're back!
Recall the goal: write a function that could potentially take n inputs, and only when one variable is not provided do you copy in the value of another variable at position i.
Instead of defining the variable by default, what if we change the approach to allow an arbitrary number of variables?
So if you're looking for a solution that does not compromise on potential inputs, where a valid input could be either None, object(), or _sentinel ... then (and only then), at this point, I'm thinking my solution will be helpful. The inspiration for the technique came from the second part of Jon Clements' answer.
Answer 3:
My solution to this problem is to change the naming of this function and wrap it with a function of the previous naming convention, but instead of named parameters we use *args. You then define the original function within the local scope (with the new name), and only allow the few possibilities you desire.
In steps:
Rename function to something similar
Remove the default setup for your optional parameter
Begin to create a new function just above and tab the original function in.
def cast_to_string_append(*args):
Determine the arity of your function (I found that word in my search... it is the number of parameters passed into a given function)
Utilize a conditional inside that determines whether a valid number of arguments was entered, and adjust accordingly!
def cast_to_string_append(*args):
    def string_append(a, b, c):
        # this is the original function; it is only called within the wrapper
        return str(a) + str(b) + str(c)

    if len(args) == 2:
        # if two arguments, then set the third to be the first
        return string_append(*args, args[0])
    elif len(args) == 3:
        # if three arguments, then call the function as written
        return string_append(*args)
    else:
        raise Exception(f'Function: cast_to_string_append() accepts two or three arguments, and you entered {len(args)}.')
# instantiation
A='A'
B='B'
C='C'
D='D'
_sentinel = object()
S = _sentinel
N = None
""" Answer 3 Testing """
# two variables
cast_to_string_append(A,B)
>>> 'ABA'
# three variables
cast_to_string_append(A,B,C)
>>> 'ABC'
# three variables, one is _sentinel
cast_to_string_append(A,B,S)
>>>'AB<object object at 0x10c56f560>'
# three variables, one is None
cast_to_string_append(A,B,N)
>>>'ABNone'
# one variable
cast_to_string_append(A)
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 1.
# four variables
cast_to_string_append(A,B,C,D)
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 4.
# ten variables
cast_to_string_append(0,1,2,3,4,5,6,7,8,9)
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 10.
# no variables
cast_to_string_append()
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 0.
""" End Answer 3 Testing """
So, in summary:
Answer 1 - the simplest answer, and works for most cases.
def cast_to_string_concat(a, b, c=None):
    c = a if c is None else c
    return str(a) + str(b) + str(c)
Answer 2 - use this if None does not actually signify an empty parameter, by switching to an object() sentinel via _sentinel.
_sentinel = object()

def cast_to_string_concat(a, b, c=_sentinel):
    c = a if c == _sentinel else c
    return str(a) + str(b) + str(c)
Answer 3 seeks out a general solution utilizing a wrapper function with arbitrary arity using *args, and handles the acceptable cases inside:
def cast_to_string_append(*args):
    def string_append(a, b, c):
        # this is the original function; it is only called within the wrapper
        return str(a) + str(b) + str(c)

    if len(args) == 2:
        # if two arguments, then set the third to be the first
        return string_append(*args, args[0])
    elif len(args) == 3:
        # if three arguments, then call the function as written
        return string_append(*args)
    else:
        raise Exception(f'Function: cast_to_string_append() accepts two or three arguments, and you entered {len(args)}.')
Use what works for you! But for me, I'll be using Option 3 ;)

Related

strange returning value in a python function

def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair

def car(f):
    def left(a, b):
        return a
    return f(left)

def cdr(f):
    def right(a, b):
        return b
    return f(right)
Found this Python code on git.
I just want to know what f(a, b) in the cons definition is, and how does it work?
(Not a function, I guess?)
cons is a function that takes two arguments and returns a function that takes another function, which will consume those two arguments.
For example, consider the following function:
def add(a, b):
    return a + b
This is just a function that adds the two inputs, so, for instance, add(2, 5) == 7
As this function takes two arguments, we can use cons to call this function:
func_caller = cons(2, 5) # cons receives two arguments and returns a function, which we call func_caller
result = func_caller(add) # func_caller receives a function, that will process these two arguments
print(result) # result is the actual result of doing add(2, 5), i.e. 7
This technique is useful for wrapping functions and executing stuff, before and after calling the appropriate functions.
For example, we can modify our cons function to actually print the values before and after calling add:
def add(a, b):
    print('Adding {} and {}'.format(a, b))
    return a + b

def cons(a, b):
    print('Received arguments {} and {}'.format(a, b))
    def pair(f):
        print('Calling {} with {} and {}'.format(f, a, b))
        result = f(a, b)
        print('Got {}'.format(result))
        return result
    return pair
With this update, we get the following outputs:
func_caller = cons(2, 5)
# prints "Received arguments 2 and 5" from inside cons
result = func_caller(add)
# prints "Calling add with 2 and 5" from inside pair
# prints "Adding 2 and 5" from inside add
# prints "Got 7" from inside pair
This isn't going to make any sense to you until you know what cons, car, and cdr mean.
In Lisp, lists are stored as a very simple form of linked list. A list is either nil (like None) for an empty list, or it's a pair of a value and another list. The cons function takes a value and a list and returns you another list just by making a pair:
def cons(head, rest):
    return (head, rest)
And the car and cdr functions (they stand for "Contents of Address|Data Register", because those are the assembly language instructions used to implement them on a particular 1950s computer, but that isn't very helpful) return the first or second value from a pair:
def car(lst):
    return lst[0]

def cdr(lst):
    return lst[1]
So, you can make a list:
lst = cons(1, cons(2, cons(3, None)))
… and you can get the second value from it:
print(car(cdr(lst)))
… and you can even write functions to get the nth value:
def nth(lst, n):
    if n == 0:
        return car(lst)
    return nth(cdr(lst), n-1)
… or print out the whole list:
def printlist(lst):
    if lst:
        print(car(lst), end=' ')
        printlist(cdr(lst))
If you understand how these work, the next step is to try them on those weird definitions you found.
They still do the same thing. So, the question is: How? And the bigger question is: What's the point?
Well, there's no practical point to using these weird functions; the real point is to show you that everything in computer science can be written with just functions, no built-in data structures like tuples (or even integers; that just takes a different trick).
The key is higher-order functions: functions that take functions as values and/or return other functions. You actually use these all the time: map, sort with a key, decorators, partial… they’re only confusing when they’re really simple:
def car(f):
    def left(a, b):
        return a
    return f(left)
This takes a function, and calls it on a function that returns the first of its two arguments.
And cdr is similar.
It's hard to see how you'd use either of these, until you see cons:
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair
This takes two things and returns a function that takes another function and applies it to those two things.
So, what do we get from cons(3, None)? We get a function that takes a function, and applies it to the arguments 3 and None:
def pair3(f):
    return f(3, None)
And if we call cons(2, cons(3, None))?
def pair23(f):
    return f(2, pair3)
And what happens if you call car on that function? Trace through it:
def left(a, b):
    return a
return pair23(left)
That pair23(left) does this:
return left(2, pair3)
And left is dead simple:
return 2
So, we got the first element of (2, cons(3, None)).
What if you call cdr?
def right(a, b):
    return b
return pair23(right)
That pair23(right) does this:
return right(2, pair3)
… and right is dead simple, so it just returns pair3.
You can work out that if we call car(cdr(pair23)), we're going to get the 3 out of it.
And now you can write lst = cons(1, cons(2, cons(3, None))), write the recursive nth and printlist functions above, and trace through how they work on lst.
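If you want to run that end to end, here is the whole thing in one self-contained sketch (just the definitions from this answer pasted together):
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair

def car(f):
    def left(a, b):
        return a
    return f(left)

def cdr(f):
    def right(a, b):
        return b
    return f(right)

def nth(lst, n):
    if n == 0:
        return car(lst)
    return nth(cdr(lst), n - 1)

def printlist(lst):
    if lst:
        print(car(lst), end=' ')
        printlist(cdr(lst))

lst = cons(1, cons(2, cons(3, None)))
print(nth(lst, 1))  # 2
printlist(lst)      # 1 2 3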
I mentioned above that you can even get rid of integers. How do you do that? Read about Church numerals. You define zero and successor functions. Then you can define one as successor(zero) and two as successor(one). You can even recursively define add so that add(x, zero) is x but add(x, successor(y)) is successor(add(x, y)), and go on to define mul, etc.
You also need a special function you can use as a value for nil.
Anyway, once you've done that, using all of the other definitions above, you can do lst = cons(zero, cons(one, cons(two, cons(three, nil)))), and nth(lst, two) will give you back two. (Of course writing printlist will be a bit trickier…)
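If you are curious what that looks like concretely, here is one minimal Church-numeral sketch (my own illustration of the idea, using the direct definition of add rather than the recursive one described above):
# A Church numeral n is a function that applies f to x exactly n times.
def zero(f):
    return lambda x: x

def successor(n):
    return lambda f: lambda x: f(n(f)(x))

def add(m, n):
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # only for inspection: apply the numeral to ordinary Python values
    return n(lambda k: k + 1)(0)

one = successor(zero)
two = successor(one)
print(to_int(two))            # 2
print(to_int(add(two, two)))  # 4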
Obviously, this is all going to be a lot slower than just using tuples and integers and so on. But theoretically, it’s interesting.
Consider this: we could write a tiny dialect of Python that has only three kinds of statements—def, return, and expression statements—and only three kinds of expressions—literals, identifiers, and function calls—and it could do everything normal Python does. (In fact, you could get rid of statements altogether just by having a function-defining expression, which Python already has.) That tiny language would be a pain to use, but it would a lot easier to write a program to reason about programs in that tiny language. And we even know how to translate code using tuples, loops, etc. into code in this tiny subset language, which means we can write a program that reasons about that real Python code.
In fact, with a couple more tricks (curried functions and/or static function types, and lazy evaluation), the compiler/interpreter could do that kind of reasoning on the fly and optimize our code for us. It's easy to tell programmatically that car(cdr(cons(2, cons(3, None)))) is going to return 3 without having to actually evaluate most of those function calls, so we can just skip evaluating them and substitute 3 for the whole expression.
Of course this breaks down if any function can have side effects. You obviously can’t just substitute None for print(3) and get the same results. So instead, you need some clever trick where IO is handled by some magic object that evaluates functions to figure out what it should read and write, and then the whole rest of the program, the part that users write, becomes pure and can be optimized however you want. With a couple more abstractions, we can even make IO something that doesn’t have to be magical to do that.
And then you can build a standard library that gives you back all those things we gave up, written in terms of defining and calling functions, so it’s actually usable—but under the covers it’s all just reducing pure function calls, which is simple enough for a computer to optimize. And then you’ve basically written Haskell.

how to assign a new value to a variable inside a function? [duplicate]

Sorry if this is a dumb question, but I've looked for a while and not really found the answer.
If I'm writing a python function, for example:
def function(in1, in2):
    in1 = in1 + 1
    in2 = in2 + 1
How do I make these changes stick?
I know why they don't; this has been addressed in many answers, but I couldn't find an answer to the question of how to actually make them do so. Without returning values or making some sort of class, is there really no way for a function to operate on its arguments in a global sense?
I also want these variables to not be global themselves, as in I want to be able to do this:
a=1
b=2
c=3
d=4
function(a,b)
function(c,d)
Is this just wishful thinking?
It can be done, but I'm warning you - it won't be pretty! What you can do is capture the caller frame in your function, then pick up the call line, parse it and extract the arguments passed, then compare them with your function signature and create an argument map, then call your function, and once your function finishes, compare the changes in the local stack and update the caller frame with the mapped changes. If you want to see how silly it can get, here's a demonstration:
# HERE BE DRAGONS
# No, really, here be dragons, this is strictly for demonstration purposes!!!
# Whenever you use this in code a sweet little pixie is brutally killed!
import ast
import inspect
import sys
def here_be_dragons(funct):  # create a decorator so we can, hm, enhance 'any' function
    def wrapper(*args, **kwargs):
        caller = inspect.getouterframes(inspect.currentframe())[1]  # pick up the caller
        parsed = ast.parse(caller[4][0], mode="single")  # parse the calling line
        arg_map = {}  # a map for our tracked args to establish global <=> local link
        for node in ast.walk(parsed):  # traverse the parsed code...
            # and look for a call to our wrapped function
            if isinstance(node, ast.Call) and node.func.id == funct.__name__:
                # loop through all positional arguments of the wrapped function
                for pos, var in enumerate(funct.__code__.co_varnames):
                    try:  # and try to find them in the captured call
                        if isinstance(node.args[pos], ast.Name):  # named argument!
                            arg_map[var] = node.args[pos].id  # add to our map
                    except IndexError:
                        break  # no more passed arguments
                break  # no need for further walking through the ast tree

        def trace(frame, evt, arg):  # a function to capture the wrapped locals
            if evt == "return":  # we're only interested in our function return
                for arg in arg_map:  # time to update our caller frame
                    caller[0].f_locals[arg_map[arg]] = frame.f_locals.get(arg, None)

        profile = sys.getprofile()  # in case something else is doing profiling
        sys.setprofile(trace)  # turn on profiling of the wrapped function
        try:
            return funct(*args, **kwargs)
        finally:
            sys.setprofile(profile)  # reset our profiling
    return wrapper
And now you can easily decorate your function to enable it to perform this ungodly travesty:
# Zap, there goes a pixie... Poor, poor, pixie. It will be missed.
@here_be_dragons
def your_function(in1, in2):
    in1 = in1 + 1
    in2 = in2 + 1
And now, demonstration:
a = 1
b = 2
c = 3
d = 4
# Now is the time to play and sing along: Queen - A Kind Of Magic...
your_function(a, b) # bam, two pixies down... don't you have mercy?
your_function(c, d) # now you're turning into a serial pixie killer...
print(a, b, c, d) # Woooo! You made it! At the expense of only three pixie lives. Savage!
# prints: 2 3 4 5
This, obviously, works only for non-nested functions with positional arguments, and only if you pass simple local arguments; feel free to go down the rabbit hole of handling keyword arguments, different stacks, returned/wrapped/chained calls, and other shenanigans if that's what you fancy.
Or, you know, you can use structures invented for this, like globals, classes, or even enclosed mutable objects. And stop murdering pixies.
If you are looking to modify the value of the variables you could have your code be
def func(a, b):
    int1 = a + 2
    int2 = b + 3
    return int1, int2

a = 2
b = 3
a, b = func(a, b)
This allows you to actually change the values of the a and b variables with the function.
you can do:
def function(in1, in2):
    return in1 + 1, in2 + 1

a, b = function(a, b)
c, d = function(c, d)
Python functions are closed -> when function(a, b) is called, a and b get bound to local (to the function) references/pointers in1 and in2, which are not accessible outside of the function. To provide references to those new values without using globals, you will need to pass them back through return.
When you pass an array or non-primitive object into a function, you can modify the object's attributes and have those modifications be visible to other references for that object outside, because the object itself contains the pointers to those values, making them visible to anything else holding a pointer to that object.
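A short sketch of that last point (my own illustration, not from the answer above): mutating an object the caller also references is visible outside, while rebinding the parameter name is not:
def bump(values):
    values.append(len(values) + 1)  # mutates the shared list - visible to the caller
    values = [99]                   # rebinds the local name only - invisible outside

nums = [1, 2]
bump(nums)
print(nums)  # [1, 2, 3]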

Function composition, tuples and unpacking

(disclaimed: not a Python kid, so please be gentle)
I am trying to compose functions using the following:
import functools

def compose(*functions):
    return functools.reduce(lambda acc, f: lambda x: acc(f(x)), functions, lambda x: x)
which works as expected for scalar functions. I'd like to work with functions returning tuples and others taking multiple arguments, eg.
def dummy(name):
    return (name, len(name), name.upper())

def transform(name, size, upper):
    return (upper, -size, name)
# What I want to achieve using composition,
# ie. f = compose(transform, dummy)
transform(*dummy('Australia'))
=> ('AUSTRALIA', -9, 'Australia')
Since dummy returns a tuple and transform takes three arguments, I need to unpack the value.
How can I achieve this using my compose function above? If I try like this, I get:
f = compose(transform, dummy)
f('Australia')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in <lambda>
File "<stdin>", line 2, in <lambda>
TypeError: transform() takes exactly 3 arguments (1 given)
Is there a way to change compose such that it will unpack where needed?
This one works for your example but it won't handle just any arbitrary function - it only works with positional arguments, and (of course) the signature of each function must match the return value of the previous one (with respect to application order).
def compose(*functions):
    return functools.reduce(
        lambda f, g: lambda *args: f(*g(*args)),
        functions,
        lambda *args: args
    )
Note that using reduce here, while certainly idiomatic in functional programming, is rather unpythonic. The "obvious" pythonic implementation would use iteration instead:
def itercompose(*functions):
    def composed(*args):
        for func in reversed(functions):
            args = func(*args)
        return args
    return composed
Edit:
You ask "Is there a way to make a compose function which will work in both cases" - "both cases" here meaning whether the functions return an iterable or not (what you call "scalar functions", a concept that has no meaning in Python).
Using the iteration-based implementation, you could just test if the return value is iterable and wrap it in a tuple ie:
from collections.abc import Iterable

def itercompose(*functions):
    def composed(*args):
        for func in reversed(functions):
            if not isinstance(args, Iterable):
                args = (args,)
            args = func(*args)
        return args
    return composed
but this is not guaranteed to work as expected - actually it is even guaranteed to NOT work as expected for most use cases. There are a lot of builtin iterable types in Python (and even more user-defined ones), and just knowing an object is iterable doesn't say much about its semantics.
For example, a dict or a str is iterable but in this case should obviously be considered a "scalar". A list is iterable too, and how it should be interpreted in this case is actually just undecidable without knowing exactly what it contains and what the "next" function in composition order expects - in some cases you will want to treat it as a single argument, in other cases as a list of args.
IOW only the caller of the compose() function can really tell how each function result should be considered - actually you might even have cases where you want a tuple to be considered as a "scalar" value by the next function. So to make a long story short: no, there's no one-size-fits-all generic solution in Python. The best I could think of requires a combination of result inspection and manual wrapping of composed functions so the result is properly interpreted by the "composed" function but at this point manually composing the functions will be both way simpler and much more robust.
FWIW remember that Python is first and mostly a dynamically typed object oriented language so while it does have a decent support for functional programming idioms it's obviously not the best tool for real functional programming.
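To make the str pitfall above concrete, here is a small sketch (my own example) of what the isinstance check in itercompose does when a composed function returns a plain string:
from collections.abc import Iterable

def shout(s):
    return s.upper()   # returns a str - iterable, but semantically a scalar

def exclaim(s):
    return s + '!'

result = shout('australia')           # 'AUSTRALIA'
if not isinstance(result, Iterable):  # the guard from itercompose above
    result = (result,)
# result is left as the bare string, so the next call would splat it into characters:
# exclaim(*result) -> TypeError: exclaim() takes 1 positional argument but 9 were given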
You might consider inserting a "function" (really, a class constructor) in your compose chain to signal the unpacking of the prior/inner function's results. You would then adjust your composer function to check for that class to determine if the prior result should be unpacked. (You actually end up doing the reverse: tuple-wrap all function results except those signaled to be unpacked -- and then have the composer unpack everything.) It adds overhead, it's not at all Pythonic, it's written in a terse lambda style, but it does accomplish the goal of being able to properly signal in a function chain when the composer should unpack a result. Consider the following generic code, which you can then adapt to your specific composition chain:
from functools import reduce
from operator import add
class upk:  # class constructor signals composer to unpack prior result
    def __init__(s, r): s.r = r  # hold function's return for wrapper function

idt = lambda x: x  # identity
wrp = lambda x: x.r if isinstance(x, upk) else (x,)  # wrap all but unpackables
com = lambda *fs: (  # unpackable compose, unpacking whenever upk is encountered
    reduce(lambda a, f: lambda *x: a(*wrp(f(*x))), fs, idt))

foo = com(add, upk, divmod)  # upk signals divmod's results should be unpacked
print(foo(6, 4))
This circumvents the problem, as called out by prior answers/comments, of requiring your composer to guess which types of iterables should be unpacked. Of course, the cost is that you must explicitly insert upk into the callable chain whenever unpacking is required. In that sense, it is by no means "automatic", but it is still a fairly simple/terse way of achieving the intended result while avoiding unintended wraps/unwraps in many corner cases.
The compose function in the answer contributed by Bruno did the job for functions with multiple arguments, but unfortunately it no longer worked for scalar ones.
Using the fact that Python 'unpacks' tuples into positional arguments, this is how I solved it:
import functools
def compose(*functions):
    def pack(x):
        return x if type(x) is tuple else (x,)
    return functools.reduce(
        lambda acc, f: lambda *y: f(*pack(acc(*pack(y)))), reversed(functions), lambda *x: x)
which now works just as expected, eg.
#########################
# scalar-valued functions
#########################
def a(x): return x + 1
def b(x): return -x
# explicit
> a(b(b(a(15))))
# => 17
# compose
> compose(a, b, b, a)(15)
=> 17
########################
# tuple-valued functions
########################
def dummy(x):
    return (x.upper(), len(x), x)

def trans(a, b, c):
    return (b, c, a)
# explicit
> trans(*dummy('Australia'))
# => ('AUSTRALIA', 9, 'Australia')
# compose
> compose(trans, dummy)('Australia')
# => ('AUSTRALIA', 9, 'Australia')
And this also works with multiple arguments:
def add(x, y): return x + y
# explicit
> b(a(add(5, 3)))
=> -9
# compose
> compose(b, a, add)(5, 3)
=> -9

Assign results of function call in one line in python

How can I assign the results of a function call to multiple variables when the results are stored by name (not index-able), in Python?
For example (tested in Python 3),
import random
# foo, as defined somewhere else where we can't or don't want to change it
def foo():
    t = random.randint(1, 100)
    # put in a dummy class instead of just "return t, t+1"
    # because otherwise we could subscript or just A, B = foo()
    class Cat(object):
        x = t
        y = t + 1
    return Cat()
# METHOD 1
# clearly wrong; A should be 1 more than B; they point to fields of different objects
A,B = foo().x, foo().y
print(A,B)
# METHOD 2
# correct, but requires two lines and an implicit variable
t = foo()
A,B = t.x, t.y
del t # don't really want t lying around
print(A,B)
# METHOD 3
# correct and one line, but an obfuscated mess
A,B = [ (t.x,t.y) for t in (foo(),) ][0]
print(A,B)
print(t) # this will raise an exception, but unless you know your python cold it might not be obvious before running
# METHOD 4
# Conforms to the suggestions in the links below without modifying the initial function foo or class Cat.
# But while all subsequent calls are pretty, but we have to use an otherwise meaningless shell function
def get_foo():
    t = foo()
    return t.x, t.y
A,B = get_foo()
What we don't want to do
If the results were indexable (Cat extended tuple/list, we had used a namedtuple, etc.), we could simply write A,B = foo() as indicated in the comment above the Cat class. That's what's recommended here, for example.
Let's assume we have a good reason not to allow that. Maybe we like the clarity of assigning from the variable names (if they're more meaningful than x and y) or maybe the object is not primarily a container. Maybe the fields are properties, so access actually involves a method call. We don't have to assume any of those to answer this question though; the Cat class can be taken at face value.
This question already deals with how to design functions/classes in the best way possible; if the function's expected return values are already well defined and do not involve tuple-like access, what is the best way to accept multiple values when returning?
I would strongly recommend either using multiple statements, or just keeping the result object without unpacking its attributes. That said, you can use operator.attrgetter for this:
from operator import attrgetter
a, b, c = attrgetter('a', 'b', 'c')(foo())
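Applied to the question's foo() / Cat (a small usage sketch, with the attribute names adjusted to the x and y fields defined above):
from operator import attrgetter

A, B = attrgetter('x', 'y')(foo())
print(A, B)  # B is exactly A + 1, since both come from the same Cat instance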

Python idiom for applying a function only when value is not None

A function is receiving a number of values that are all strings but need to be parsed in various ways, e.g.
vote_count = int(input_1)
score = float(input_2)
person = Person(input_3)
This is all fine except the inputs can also be None and in this case, instead of parsing the values I would like to end up with None assigned to the left hand side. This can be done with
vote_count = int(input_1) if input_1 is not None else None
...
but this seems much less readable especially with many repeated lines like this one. I'm considering defining a function that simplifies this, something like
def whendefined(func, value):
    return func(value) if value is not None else None
which could be used like
vote_count = whendefined(int, input_1)
...
My question is, is there a common idiom for this? Possibly using built-in Python functions? Even if not, is there a commonly used name for a function like this?
In other languages there's Option typing, which is a bit different (it solves the problem with a type system), but has the same motivation (what to do about nulls).
In Python there's more of a focus on runtime detection of this kind of thing, so you can wrap the function with a None-detecting guard (rather than the data, which is what Option typing does).
You could write a decorator that only executes a function if the argument is not None:
def option(function):
    def wrapper(*args, **kwargs):
        if len(args) > 0 and args[0] is not None:
            return function(*args, **kwargs)
    return wrapper
You should probably adapt that third line to be more suitable to the kind of data you're working with.
In use:
@option
def optionprint(inp):
    return inp + "!!"
>>> optionprint(None)
# Nothing
>>> optionprint("hello")
'hello!!'
and with a return value
@option
def optioninc(input):
    return input + 1
>>> optioninc(None)
>>> # Nothing
>>> optioninc(100)
101
or wrap a type-constructing function
>>> int_or_none = option(int)
>>> int_or_none(None)
# Nothing
>>> int_or_none(12)
12
If you can safely treat falsy values (such as 0 and the empty string) as None, you can use boolean and:
vote_count = input_1 and int(input_1)
Since it looks like you're taking strings for input, this might work; you can't turn an empty string to an int or float (or person) anyway. It's not overly readable for some, though the idiom is commonly used in Lua.
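To make the difference concrete, here is a small sketch (my own example) comparing the truthiness shortcut with the explicit is not None check:
def parse_or_none(value):
    # explicit None check: an empty string still raises, as it arguably should
    return int(value) if value is not None else None

input_1 = ''
print(repr(input_1 and int(input_1)))  # '' - the falsy string is passed through unparsed
print(parse_or_none('42'))             # 42
print(parse_or_none(None))             # None
# parse_or_none('') would raise ValueError instead of silently yielding ''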
