What is considered better programming practice when dealing with multiple objects at a time (but with the option to process just one object)?
A: LOOP INSIDE FUNCTION
The function can be called with one object or with a list of objects, and it iterates inside the function:
class Object:
    var_a = ""
    var_b = ""

    def __init__(self, a, b):
        self.var_a = a
        self.var_b = b

def func(obj_list):
    # Accept either a single object or a list of objects.
    if not isinstance(obj_list, list):
        obj_list = [obj_list]
    for obj in obj_list:
        # do whatever with an object
        print(obj.var_a, obj.var_b)
obj_list = [Object("a1", "a2"), Object("b1", "b2")]
obj_alone = Object("c1", "c2")
func(obj_list)
func(obj_alone)
B: LOOP OUTSIDE FUNCTION
The function deals with one object only; when it has to deal with multiple objects, it must be called multiple times.
class Object:
    var_a = ""
    var_b = ""

    def __init__(self, a, b):
        self.var_a = a
        self.var_b = b

def func(obj):
    # do whatever with an object
    print(obj.var_a, obj.var_b)
obj_list = [Object("a1", "a2"), Object("b1", "b2")]
obj_alone = Object("c1", "c2")
for obj in obj_list:
    func(obj)
func(obj_alone)
I personally like the first one (A) more, because to me it makes for cleaner code at the call site, but maybe it's not the right approach. Is one method generally better than the other? And if not, what are the pros and cons of each?
A function should have a defined input and output and follow the single responsibility principle. You need to be able to clearly define your function in terms of "I put foo in, I get bar back". The more qualifiers you need to make in this statement to properly describe your function probably means your function is doing too much. "I put foo in and get bar back, unless I put baz in then I also get bar back, unless I put a foo-baz in then it'll error".
In this particular case, you can pass an object or a list of objects. Try to generalise that to a value or a list of values. What if you want to pass a list as a value? Now your function behaviour is ambiguous. You want the single list object to be your value, but the function treats it as multiple arguments instead.
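To make the ambiguity concrete, here is a minimal sketch (simplified from the question's func to print plain values rather than objects):
def func(obj_list):
    if not isinstance(obj_list, list):
        obj_list = [obj_list]
    for obj in obj_list:
        print(obj)

func([1, 2, 3])  # Did the caller mean one value [1, 2, 3], or three values?
                 # The function silently assumes three and prints 1, 2, 3.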
Therefore, it's trivial to adapt a function which takes one argument to work on multiple values in practice. There's no reason to complicate the function's design by making it adaptable to multiple arguments. Write the function as simple and clearly as possible, and if you need it to work through a list of things then you can loop it through that list of things outside the function.
This might become clearer if you try to give an actual useful name to your function which describes what it does. Do you need to use plural or singular terms? foo_the_bar(bar) does something different from foo_the_bars(bars).
Move loops outside functions (when possible)
Generally speaking, keep loops that do nothing but iterate over the parameter outside of functions. This gives the caller maximum control and assumes the least about how the client will use the function.
The rule of thumb is to use the most minimal parameter complexity that the function needs to do its job.
For example, let's say you have a function that processes one item. You've anticipated that a client might conceivably want to process multiple items, so you changed the parameter to an iterable, baked a loop into the function, and are now returning a list. Why not? It could save the client from writing an ugly loop in the caller, you figure, and the basic functionality is still available -- and then some!
But this turns out to be a serious constraint. Now the caller needs to pack that single item into a list just to use the function (and unpack the result, since the function returns a list of results to match its list of arguments). This is confusing and potentially expensive on heap memory:
>>> def square(it): return [x ** 2 for x in it]
...
>>> square(range(6)) # you're thinking ...
[0, 1, 4, 9, 16, 25]
>>> result, = square([3]) # ... but the client just wants to square 1 number
>>> result
9
Here's a much better design for this particular function, intuitive and flexible:
>>> def square(x): return x ** 2
...
>>> square(3)
9
>>> [square(x) for x in range(6)]
[0, 1, 4, 9, 16, 25]
>>> list(map(square, range(6)))
[0, 1, 4, 9, 16, 25]
>>> (square(x) for x in range(6))
<generator object <genexpr> at 0x00000166D122CBA0>
>>> all(square(x) % 2 for x in range(6))
False
This brings me to a second problem with the functions in your code: they have a side-effect, print. I realize these functions are just for demonstration, but designing functions like this makes the example somewhat contrived. Functions typically return values rather than simply produce side-effects, and the parameters and return values are often related, as in the above example -- changing the parameter type bound us to a different return type.
When does it make sense to use an iterable argument? A good example is sort -- the smallest unit of operation for a sorting function is an iterable, so the problem of packing and unpacking in the square example above is a non-issue.
Following this logic a step further, would it make sense for a sort function to accept a list (or variable arguments) of lists? No -- if the caller wants to sort multiple lists, they should loop over them explicitly and call sort on each one, as in the second square example.
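For example, sorting several lists stays explicit and unambiguous when the caller owns the loop:
lists = [[3, 1, 2], [9, 7, 8]]
for lst in lists:
    lst.sort()    # one list per call, mirroring the single-item square(x)
print(lists)      # [[1, 2, 3], [7, 8, 9]]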
Consider variable arguments
A nice feature that bridges the gap between iterables and single arguments is support for variable arguments, which many languages offer. This sometimes gives you the best of both worlds, and some functions go so far as to accept either args or an iterable:
>>> max([1, 3, 2])
3
>>> max(1, 3, 2)
3
One reason max is nice as a variable-argument function is that it's a reduction function, so you'll always get a single value as output. If it were a mapping or filtering function, the output would always be a list (or generator), so the input should be as well.
To take another example, a sort routine wouldn't make much sense with varargs because it's a classically in-place algorithm that works on lists, so you'd need to unpack the list into the arguments with the * operator pretty much every time you invoke the function -- not cool.
There's no real need for a call like sort(1, 3, 4, 2) as there is with max, where the parameters are just as likely to be loose variables as they are a packed iterable. Varargs are usually used when you have a small number of arguments, or the thing you're unpacking is a small pair or tuple-type element, as often the case with zip.
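As a rough illustration of a function that accepts either loose arguments or a single iterable, in the spirit of max (my_max is a made-up name, and the real max handles more cases):
def my_max(*args):
    # A single argument is treated as an iterable, as max() does;
    # otherwise the arguments themselves are the values.
    values = args[0] if len(args) == 1 else args
    result = None
    for v in values:
        if result is None or v > result:
            result = v
    return result

print(my_max(1, 3, 2))    # 3
print(my_max([1, 3, 2]))  # 3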
There's definitely a "feel" to when to offer parameters as varargs, an iterable, or a single value (i.e. let the caller handle looping), but as long as you follow the rule of avoiding iterables unless they're essential to the function, it's hard to go wrong.
As a final tip, try to write your functions with similar contracts to the library functions in your language or the tools you use frequently. These are pretty much always designed well; mimic good design.
If you implement B then you will make it harder for yourself to achieve A.
If you implement A then it isn't too difficult to achieve B. You also have many tools already available for applying the function to a list of arguments (the loop method you described, something like map, or even a multiprocessing approach if needed).
Therefore I would choose to implement A, and if it makes things neater or easier in a given case you can think about also implementing B (in terms of A) so that you have both.
Related
I can find lots of material showing me what a lambda function is, how the syntax works, and so on. But other than the "coolness factor" (I can make a function in the middle of a call to another function, neat!) I haven't seen anything overwhelmingly compelling to say why I really need/want to use them.
It seems to be more of a stylistic or structural choice in most examples I've seen, and it kind of breaks the "there should be only one obvious way to do it" rule in Python. How does it make my programs more correct, more reliable, faster, or easier to understand? (Most coding standards I've seen tend to tell you to avoid overly complex statements on a single line; if breaking it up makes it easier to read, do so.)
Here's a good example:
def key(x):
    return x[1]
a = [(1, 2), (3, 1), (5, 10), (11, -3)]
a.sort(key=key)
versus
a = [(1, 2), (3, 1), (5, 10), (11, -3)]
a.sort(key=lambda x: x[1])
From another angle: Lambda expressions are also known as "anonymous functions", and are very useful in certain programming paradigms, particularly functional programming, which lambda calculus provided the inspiration for.
http://en.wikipedia.org/wiki/Lambda_calculus
The syntax is more concise in certain situations, mostly when dealing with map et al.
map(lambda x: x * 2, [1,2,3,4])
seems better to me than:
def double(x):
    return x * 2
map(double, [1,2,3,4])
I think the lambda is a better choice in this situation because the def double seems almost disconnected from the map that is using it. Plus, I guess it has the added benefit that the function gets thrown away when you are done.
There is one downside to lambda which limits its usefulness in Python, in my opinion: a lambda body can contain only a single expression (i.e., you can't have multiple lines or statements). Multi-line lambdas just can't work in a language whose blocks are delimited by whitespace.
Plus, whenever I use lambda I feel awesome.
For me it's a matter of the expressiveness of the code. When writing code that people will have to support, that code should tell a story in as concise and easy to understand manner as possible. Sometimes the lambda expression is more complicated, other times it more directly tells what that line or block of code is doing. Use judgment when writing.
Think of it like structuring a sentence. What are the important parts (nouns and verbs vs. objects and methods, etc.) and how should they be ordered for that line or block of code to convey what it's doing intuitively.
Lambda functions are most useful in things like callback functions, or places where you need a throwaway function. JAB's example is perfect: it would be better accompanied by the keyword argument key, but it still provides useful information.
When
def key(x):
    return x[1]
appears 300 lines away from
[(1,2), (3,1), (5,10), (11,-3)].sort(key=key)
what does key do? There's really no indication. You might have some sort of guess, especially if you're familiar with the function, but usually it requires going back to look. OTOH,
[(1,2), (3,1), (5,10), (11,-3)].sort(key=lambda x: x[1])
tells you a lot more.
Sort takes a function as an argument
That function takes 1 parameter (and "returns" a result)
I'm trying to sort this list by the 2nd value of each of the elements of the list
(If the list were a variable, so you couldn't see the values:) this logic expects each element of the list to contain at least 2 items.
There's probably some more information, but already that's a tremendous amount that you get just by using an anonymous lambda function instead of a named function.
Plus it doesn't pollute your namespace ;)
Yes, you're right — it is a structural choice. It probably does not make your programs more correct by just using lambda expressions. Nor does it make them more reliable, and this has nothing to do with speed.
It is only about flexibility and the power of expression. Like list comprehension. You can do most of that defining named functions (possibly polluting namespace, but that's again purely stylistic issue).
It can aid readability in that you do not have to define a separate named function that someone else will have to find, read, and understand just to learn that all it does is call a method blah() on its argument.
It may be much more interesting when you use it to write functions that create and return other functions, where what exactly those functions do, depends on their arguments. This may be a very concise and readable way of parameterizing your code behaviour. You can just express more interesting ideas.
But that is still a structural choice. You can do that otherwise. But the same goes for object oriented programming ;)
Ignore for a moment the detail that it's specifically anonymous functions we're talking about. Functions, including anonymous ones, are assignable quantities (almost, but not really, values) in Python. An expression like
map(lambda y: y * -1, range(0, 10))
explicitly mentions four anonymous quantities: -1, 0, 10 and the result of the lambda operator, plus the implied result of the map call. It's possible to create values of anonymous types in some languages. So ignore the superficial difference between functions and numbers. The question of when to use an anonymous function as opposed to a named one is similar to the question of when to put a naked number literal in the code and when to declare a TIMES_I_WISHED_I_HAD_A_PONY or BUFFER_SIZE beforehand. There are times when it's appropriate to use a (numeric, string or function) literal, and there are times when it's more appropriate to name such a thing and refer to it through its name.
See, e.g., Allen Holub's provocative, thought-provoking (or anger-provoking) book on Design Patterns in Java; he uses anonymous classes quite a bit.
Lambda, while useful in certain situations, has a large potential for abuse. Lambdas almost always make code more difficult to read. And while it might feel satisfying to fit all your code onto a single line, it will suck for the next person who has to read your code.
Direct from PEP8
"One of Guido's key insights is that code is read much more often than it is written."
It is definitely true that abusing lambda functions often leads to bad and hard-to-read code. On the other hand, when used accurately, it does the opposite. There are already great answers in this thread, but one example I have come across is:
def power(n):
    return lambda x: x**n
square = power(2)
cubic = power(3)
quadruple = power(4)
print(square(10)) # 100
print(cubic(10)) # 1000
print(quadruple(10)) # 10000
This simplified case could be rewritten in many other ways without the use of lambda. Still, one can infer how lambda functions can increase readability and code reuse in perhaps more complex cases and functions with this example.
Lambdas are anonymous functions (functions with no name) that can be assigned to a variable or passed as an argument to another function. The usefulness of lambda is realized when you need a small piece of function that will run once in a while, or just once. Instead of writing the function in global scope or including it as part of your main program, you can toss a few lines of code into a variable or another function when needed. Also, when you pass a function as an argument during a call, you can change that argument (the anonymous function), making the behaviour of the receiving function dynamic. If the anonymous function uses variables outside its own scope, it is called a closure. This is useful in callback functions.
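A minimal sketch of such a closure used as a callback (make_handler and on_click are made-up names, purely for illustration):
def make_handler(name):
    # 'name' is defined outside the lambda's own scope, so the
    # returned anonymous function is a closure over it.
    return lambda event: print(name, "received", event)

on_click = make_handler("button1")  # hypothetical callback slot
on_click("click")                   # prints: button1 received click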
One use of lambda functions which I have learned, and for which there is no other good alternative (or at least none that looks better to me), is as a default action in a function parameter:
parameter=lambda x: x
This default returns the value unchanged, but the caller can optionally supply a function to perform a transformation or an action (such as printing the answer, not only returning it).
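A small sketch of that pattern (the function name process is hypothetical):
def process(value, action=lambda x: x):
    # By default the value passes through unchanged; callers can
    # supply a transformation (or a printing action) instead.
    return action(value)

print(process(3))                   # 3
print(process(3, lambda x: x * 2))  # 6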
Also, lambda is often useful as a sort key:
key=lambda x: x[field]
The effect is to sort by the field-th (zero-based, remember) element of each item in the sequence. For reversing you do not need lambda, as it is clearer to use
reverse=True
Often it is almost as easy to define a real named function and use that instead of a lambda. People who have studied a lot of Lisp or other functional programming also have a natural tendency to use lambdas, since in Lisp function definitions are modelled on the lambda calculus.
Lambdas are objects, not methods, and they cannot be invoked in the same way that methods are. For example, in Ruby:
succ = ->(x){ x + 1 }
succ now holds a Proc object, which we can use like any other:
succ.call(2)
gives us 3.
I want to point out one situation other than list processing where a lambda function seems the best choice:
from tkinter import *
from tkinter import ttk
def callback(arg):
    print(arg)
root = Tk()
ttk.Button(root, text = 'Button1', command = lambda: callback('Button 1 clicked')).pack()
root.mainloop()
If we drop the lambda here, the callback executes once, immediately, when the button is created, and the button's command receives the callback's return value (None) instead of a function:
ttk.Button(root, text = 'Button1', command = callback('Button1 clicked')).pack()
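If you prefer to avoid the lambda, functools.partial defers the call in the same way (this reuses root, ttk, and callback from the snippet above):
from functools import partial

# Equivalent to the lambda version: the call to callback is
# deferred until the button is actually clicked.
ttk.Button(root, text='Button1',
           command=partial(callback, 'Button 1 clicked')).pack()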
Another point is that Python does not have switch statements (structural pattern matching only arrived in Python 3.10). Combining lambdas with dicts can be an effective alternative, e.g.:
switch = {
    '1': lambda x: x + 1,
    '2': lambda x: x + 2,
    '3': lambda x: x + 3,
}
x = 10
choice = '2'
new_x = switch[choice](x)  # 12
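dict.get also gives you a default branch, like a switch statement's default case (extending the snippet above; the fallback shown is just the identity function):
print(switch.get(choice, lambda v: v)(x))  # 12 -- known key
print(switch.get('9', lambda v: v)(x))     # 10 -- unknown key falls back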
In some cases it is much clearer to express something simple as a lambda. Consider regular sorting vs. reverse sorting, for example:
some_list = [2, 1, 3]
print(sorted(some_list))
print(sorted(some_list, key=lambda x: -x))
For the latter case, writing a separate full-fledged function just to return -x would create more misunderstanding than a lambda.
Lambdas allow you to create functions on the fly. Most of the examples I've seen don't do much more than create a function with parameters passed at the time of creation rather than execution. Or they simplify the code by not requiring a formal declaration of the function ahead of use.
A more interesting use would be to dynamically construct a python function to evaluate a mathematical expression that isn't known until run time (user input). Once created, that function can be called repeatedly with different arguments to evaluate the expression (say you wanted to plot it). That may even be a poor example given eval(). This type of use is where the "real" power is - in dynamically creating more complex code, rather than the simple examples you often see which are not much more than nice (source) code size reductions.
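A rough sketch of that idea (treat it purely as an illustration: eval on untrusted input is dangerous):
expr = "x**2 + 1"              # imagine this string arrived at run time
f = eval("lambda x: " + expr)  # build the function once

# The function can now be called repeatedly, e.g. to tabulate points:
print([f(x) for x in range(5)])  # [1, 2, 5, 10, 17]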
Master lambda and you master shortcuts in Python. Here is why:
data = [(lambda x: x.text)(x.extract()) for x in soup.findAll('p')]
Here we can see four parts of the list comprehension:
1. (lambda x: x.text) -- what I finally want
2. x.extract() -- performs some operation on x; here it pops the element from soup
3. x -- the loop variable, whose extracted value is passed to the lambda's input
4. soup.findAll('p') -- some arbitrary list
I had found no other way to use two statements in a lambda, but with this kind of pipelining we can exploit the full potential of lambda.
Edit: as pointed out in the comments by juanpa, it's completely fine to use x.extract().text, but the point was to explain the lambda "pipe", i.e. passing the output of one expression as the input of a lambda, as in (lambda y: g(y))(f(x)).
Related: "Least Astonishment" and the Mutable Default Argument
I had a very difficult time understanding the root cause of a problem in an algorithm. Then, by simplifying the functions step by step, I found out that the evaluation of default arguments in Python doesn't behave as I expected.
The code is as follows:
class Node(object):
    def __init__(self, children=[]):
        self.children = children
The problem is that every instance of the Node class shares the same children attribute if the attribute is not given explicitly, as in:
>>> n0 = Node()
>>> n1 = Node()
>>> id(n1.children)
25000176
>>> id(n0.children)
25000176
I don't understand the logic behind this design decision. Why did the Python designers decide that default arguments are to be evaluated at definition time? This seems very counter-intuitive to me.
The alternative would be quite heavyweight -- storing "default argument values" in the function object as "thunks" of code to be executed over and over again every time the function is called without a specified value for that argument -- and would make it much harder to get early binding (binding at def time), which is often what you want. For example, in Python as it exists:
def ack(m, n, _memo={}):
    key = m, n
    if key not in _memo:
        if m == 0:
            v = n + 1
        elif n == 0:
            v = ack(m - 1, 1)
        else:
            v = ack(m - 1, ack(m, n - 1))
        _memo[key] = v
    return _memo[key]
...writing a memoized function like the above is quite an elementary task. Similarly:
for i in range(len(buttons)):
    buttons[i].onclick(lambda i=i: say('button %s', i))
...the simple i=i, relying on the early-binding (definition time) of default arg values, is a trivially simple way to get early binding. So, the current rule is simple, straightforward, and lets you do all you want in a way that's extremely easy to explain and understand: if you want late binding of an expression's value, evaluate that expression in the function body; if you want early binding, evaluate it as the default value of an arg.
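To see why that default matters, here is the gotcha it avoids, reduced to plain lambdas in a loop (a minimal sketch):
callbacks = [lambda: i for i in range(3)]
print([cb() for cb in callbacks])  # [2, 2, 2] -- all late-bound to the final i

callbacks = [lambda i=i: i for i in range(3)]
print([cb() for cb in callbacks])  # [0, 1, 2] -- early-bound via the default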
The alternative, forcing late binding for both situations, would not offer this flexibility, and would force you to go through hoops (such as wrapping your function into a closure factory) every time you needed early binding, as in the above examples -- yet more heavyweight boilerplate forced on the programmer by this hypothetical design decision (beyond the "invisible" ones of generating and repeatedly evaluating thunks all over the place).
In other words, "There should be one, and preferably only one, obvious way to do it [1]": when you want late binding, there's already a perfectly obvious way to achieve it (since all of the function's code is only executed at call time, obviously everything evaluated there is late-bound); having default-arg evaluation produce early binding gives you an obvious way to achieve early binding as well (a plus!-) rather than giving TWO obvious ways to get late binding and no obvious way to get early binding (a minus!-).
[1]: "Although that way may not be obvious at first unless you're Dutch."
The issue is this.
It's too expensive to evaluate a function as an initializer every time the function is called.
0 is a simple literal. Evaluate it once, use it forever.
int is a function (like list) that would have to be evaluated each time it's required as an initializer.
The construct [] is a literal, like 0, that means "this exact object".
The problem is that some people hope it means list, as in "evaluate this function for me, please, to get the object that is the initializer".
It would be a crushing burden to add the necessary if statement to do this evaluation all the time. It's better to take all arguments as literals and not do any additional function evaluation as part of trying to do a function evaluation.
Also, more fundamentally, it's technically impossible to implement argument defaults as function evaluations.
Consider, for a moment, the recursive horror of this kind of circularity. Let's say that instead of default values being literals, we allow them to be functions which are evaluated each time a parameter's default value is required.
[This would parallel the way collections.defaultdict works.]
def aFunc(a=another_func):
    return a * 2

def another_func(b=aFunc):
    return b * 3
What is the value of another_func()? To get the default for b, it must evaluate aFunc, which requires an eval of another_func. Oops.
Of course in your situation it is difficult to understand. But you must see that evaluating default args every time would lay a heavy runtime burden on the system.
Also you should know that, in the case of container types, this problem may occur -- but you can circumvent it by making the thing explicit:
def __init__(self, children=None):
    if children is None:
        children = []
    self.children = children
The workaround for this, discussed here (and very solid), is:
class Node(object):
    def __init__(self, children=None):
        self.children = [] if children is None else children
As for why: look for an answer from von Löwis, but it's likely because the function definition makes a code object due to the architecture of Python, and there might not be a facility for working with reference types like this in default arguments.
I thought this was counterintuitive too, until I learned how Python implements default arguments.
A function's an object. At load time, Python creates the function object, evaluates the defaults in the def statement, puts them into a tuple, and adds that tuple as an attribute of the function named func_defaults (renamed __defaults__ in Python 3). Then, when a function is called, if the call doesn't provide a value, Python grabs the default value out of func_defaults.
For instance:
>>> class C():
...     pass
...
>>> def f(x=C()):
...     pass
...
>>> f.func_defaults
(<__main__.C instance at 0x0298D4B8>,)
So all calls to f that don't provide an argument will use the same instance of C, because that's the default value.
As far as why Python does it this way: well, that tuple could contain functions that would get called every time a default argument value was needed. Apart from the immediately obvious problem of performance, you start getting into a universe of special cases, like storing literal values instead of functions for non-mutable types to avoid unnecessary function calls. And of course there are performance implications galore.
The actual behavior is really simple. And there's a trivial workaround, in the case where you want a default value to be produced by a function call at runtime:
def f(x=None):
    if x is None:
        x = g()
This comes from Python's emphasis on simplicity of syntax and execution. A def statement occurs at a certain point during execution. When the Python interpreter reaches that point, it evaluates the code in that line, and then creates a code object from the body of the function, which will be run later, when you call the function.
It's a simple split between function declaration and function body. The declaration is executed when it is reached in the code; the body is executed at call time. Note that the declaration is executed every time it is reached, so you can create multiple functions by looping.
funcs = []
for x in range(5):
    def foo(x=x, lst=[]):
        lst.append(x)
        return lst
    funcs.append(foo)

for func in funcs:
    print("1:", func())
    print("2:", func())
Five separate functions have been created, with a separate list created each time the function declaration was executed. On each pass through funcs, the same function is executed twice, using the same default list each time. This gives the results:
1: [0]
2: [0, 0]
1: [1]
2: [1, 1]
1: [2]
2: [2, 2]
1: [3]
2: [3, 3]
1: [4]
2: [4, 4]
Others have given you the workaround of using param=None and assigning a list in the body if the value is None, which is fully idiomatic Python. It's a little ugly, but the simplicity is powerful, and the workaround is not too painful.
Edited to add: For more discussion on this, see effbot's article here: http://effbot.org/zone/default-values.htm, and the language reference, here: http://docs.python.org/reference/compound_stmts.html#function
I'll provide a dissenting opinion, by addressing the main arguments in the other posts.
Evaluating default arguments when the function is executed would be bad for performance.
I find this hard to believe. If default argument assignments like foo='some_string' really add an unacceptable amount of overhead, I'm sure it would be possible to identify assignments to immutable literals and precompute them.
If you want a default assignment with a mutable object like foo = [], just use foo = None, followed by foo = foo or [] in the function body.
While this may be unproblematic in individual instances, as a design pattern it's not very elegant. It adds boilerplate code and obscures default argument values. Patterns like foo = foo or ... don't work if foo can be an object like a numpy array with undefined truth value. And in situations where None is a meaningful argument value that may be passed intentionally, it can't be used as a sentinel and this workaround becomes really ugly.
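When None is itself a meaningful value, the usual escape hatch is a private sentinel object; a sketch (lookup and _MISSING are illustrative names):
_MISSING = object()  # unique sentinel no caller can pass by accident

def lookup(mapping, key, default=_MISSING):
    # None is a legitimate default here, so it cannot double as the
    # "not given" marker; the private sentinel can.
    if key in mapping:
        return mapping[key]
    if default is _MISSING:
        raise KeyError(key)
    return default

print(lookup({'a': 1}, 'a'))        # 1
print(lookup({'a': 1}, 'b', None))  # None, passed intentionally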
The current behaviour is useful for mutable default objects that should be shared across function calls.
I would be happy to see evidence to the contrary, but in my experience this use case is much less frequent than mutable objects that should be created anew every time the function is called. To me it also seems like a more advanced use case, whereas accidental default assignments with empty containers are a common gotcha for new Python programmers. Therefore, the principle of least astonishment suggests default argument values should be evaluated when the function is executed.
In addition, it seems to me that there exists an easy workaround for mutable objects that should be shared across function calls: initialise them outside the function.
So I would argue that this was a bad design decision. My guess is that it was chosen because its implementation is actually simpler and because it has a valid (albeit limited) use case. Unfortunately, I don't think this will ever change, since the core Python developers want to avoid a repeat of the amount of backwards incompatibility that Python 3 introduced.
Python function definitions are just code, like all the other code; they're not "magical" in the way that some languages are. For example, in Java you could refer "now" to something defined "later":
public static void foo() { bar(); }
public static void main(String[] args) { foo(); }
public static void bar() {}
but in Python
def foo(): bar()
foo() # boom! "bar" has no binding yet
def bar(): pass
foo() # ok
So, the default argument is evaluated at the moment that line of code is evaluated!
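You can observe this directly; a tiny demonstration using time.time() as the default:
import time

def stamp(t=time.time()):  # time.time() runs once, when def executes
    return t

print(stamp() == stamp())  # True: both calls share the definition-time value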
Because if they had, then someone would post a question asking why it wasn't the other way around :-p
Suppose now that they had. How would you implement the current behaviour if needed? It's easy to create new objects inside a function, but you cannot "uncreate" them (you can delete them, but it's not the same).
I've defined a class that takes a list of the form [fname1,[parameters1],fname2,[parameters2],...] as parameter for the creation of an instance.
The idea is for the instance to execute all functions in the list at once, passing them their respective parameters - which works just fine as is, but the implementation I came up with is incredibly ugly.
It looks something like this:
# (The input list is split up and transformed into two lists -
# one containing the function names as strings, the other one containing tuples)
# (It then runs a for-loop containing the following statement)
exec '%s%s' % (fname[i], repr(parameter_tuple[i]))
Which outputs and runs 'fname(parameters,more_parameters,and,so,on)', just as it should do.
I don't know why, but ever since I coded this, I keep getting the idea that I deserve a really good beating for it... although it works, I just know there has to be a less ugly implementation.
Anyone willing to help me see it? Or maybe to beat me up a bit? ;-)
The simplest answer here, if you can do it, is to simply pass the functions rather than the function names, and then do a simple:
for function, params in zip(functions, parameters):
    function(*params)
Note my use of zip() to loop over two lists at once. This is the far more Pythonic option than looping over indices.
Or, alternatively, use itertools.starmap().
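Since starmap applies a single function to each argument tuple, calling a different function each time takes a small adapter; a sketch:
from itertools import starmap

functions = [len, sorted]
parameters = [["hello"], [[3, 1, 2]]]

# starmap unpacks each (function, args) pair into the adapter below.
results = list(starmap(lambda f, args: f(*args), zip(functions, parameters)))
print(results)  # [5, [1, 2, 3]]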
If you truly cannot pass the functions directly, it becomes more difficult. For one, any solution relying on the names of variables will be inherently fragile, so the best option would be to create an explicit dictionary from function names to functions, then look up the function.
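A sketch of that name-to-function dictionary, using the question's [fname, [params], ...] layout (start and stop are made-up functions):
def start(service):
    print("starting", service)

def stop(service):
    print("stopping", service)

dispatch = {"start": start, "stop": stop}  # explicit name -> function map

spec = ["start", ["web"], "stop", ["db"]]
for name, args in zip(spec[::2], spec[1::2]):
    dispatch[name](*args)  # look the function up by name, then call it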
Otherwise, I would suggest using the inspect module to find the functions you want. E.g:
import inspect
import sys
def some_function():
    ...
print(dict(inspect.getmembers(sys.modules[__name__], inspect.isfunction)))
Produces:
{'some_function': <function some_function at 0x7faec13005a0>}
Note, however, this is far less efficient and more roundabout. The best answer remains passing the functions themselves.
Here is a function that takes a list of functions (not function names) and a list of argument lists (or tuples):
def callmany(funs, funargs):
    return [f(*args) for (f, args) in zip(funs, funargs)]
It returns a list of results where the first element in the result is the result of calling the first function in funs with the first argument list in funargs. Like this:
callmany([len, range], [["hello"], [1, 10]])
>> [5, range(1, 10)]
I was thinking about this recently since Python 3 is changing print from a statement to a function.
However, Ruby and CoffeeScript take the opposite approach, since you often leave out parentheses from functions, thereby blurring the distinction between keywords/statements and functions. (A function call without parentheses looks a lot like a keyword.)
Generally, what's the difference between a keyword and a function? It seems to me that some keywords are really just functions. For example, return 3 could equally be thought of as return(3) where the return function is implemented natively in the language. Or in JavaScript, typeof is a keyword, but it seems very much like a function, and can be called with parentheses.
Thoughts?
A function is executed within a stack frame, whereas a keyword statement isn't necessarily. A good example is the return statement: If it were a function and would execute in its own stack, there would be no way it could control the execution flow in the way it does.
The line between keywords and functions is blurry. Whether or not parentheses are necessary is completely dependent upon the design of the language syntax.
Consider an integer declaration, for instance:
int my_integer = 4;
vs
my_integer = int(4)
Both of these examples are logically equivalent, but vary by the language syntax.
Programming languages use keywords to reserve their finite number of basic functions. When you write a function, you are extending a language.
Keywords are lower-level building blocks than functions, and can do things that functions can't.
You cite return in your question, which is a good example: In all the languages you mention, there's no way to use a function to provide the same behavior as return x.
In Python, parentheses are used for function calls, for creating tuples, or just for defining precedence.
a = (1)    # same as a = 1
a = (1,)   # tuple with one element
print a    # prints the value of a
print(a)   # same thing, as (a) == a

def foo(x):
    return x + 1

foo(10)    # function call, one argument
foo(10,)   # function call, also one argument
foo 10     # not allowed!
foo(10)*2  # 11 times 2 = 22

def foo2(y):
    return (y*2)*2  # not a function call; same thing as y*4
Also, keywords can't be assigned as values.
def foo(x):
    return x**2

foo = 1234   # foo rebound to a new value
return = 10  # invalid!
PS: Another use for parentheses is generator expressions. They are just like list comprehensions, but they aren't evaluated at creation time.
(x**2 for x in range(10))
sum(x+1 for x in [1, 2, 3])  # the function call's parentheses are 'shared' with the generator
In Python, is it considered better style to:
explicitly define useful functions in terms of more general, possibly internal use, functions; or,
use partial function application to explicitly describe function currying?
I will explain my question by way of a contrived example.
Suppose one writes a function, _sort_by_score, that takes two arguments: a scoring function and a list of items. It returns a copy of the original list, sorted by scores based on each item's position within the original list. Two example scoring functions are also provided.
def _sort_by_score(scoring, items_list):
    unsorted_scored_list = [(scoring(len(items_list), item_position), item)
                            for item_position, item in enumerate(items_list)]
    sorted_list = [item for score, item in sorted(unsorted_scored_list)]
    return sorted_list

def _identity_scoring(items_list_size, item_position):
    return item_position

def _reversed_scoring(items_list_size, item_position):
    return items_list_size - item_position
The function _sort_by_score is never called directly; instead, it is called by other single-argument functions that pass a scoring function and their lone argument (a list of items) to _sort_by_score and return the result.
# Explicit function definition style
def identity_ordering(items_list):
    return _sort_by_score(_identity_scoring, items_list)

def reversed_ordering(items_list):
    return _sort_by_score(_reversed_scoring, items_list)
Obviously, this intent is better expressed in terms of function currying.
# Curried function definition style
import functools
identity_ordering = functools.partial(_sort_by_score, _identity_scoring)
reversed_ordering = functools.partial(_sort_by_score, _reversed_scoring)
Usage (in either case):
>>> foo = [1, 2, 3, 4, 5]
>>> identity_ordering(foo)
[1, 2, 3, 4, 5]
>>> reversed_ordering(foo)
[5, 4, 3, 2, 1]
Apparent advantages of the explicit function definition style:
useful functions may be defined before the more general functions are, without raising NameErrors;
helper functions (e.g., scoring functions) could be defined within the function definition body;
possibly easier to debug;
code looks nice by virtue of "explicit is better than implicit."
Apparent advantages of curried function definition style:
expresses intent of functional programming idiomatically;
code looks nice by virtue of succinctness.
For defining "useful" functions, which of the two styles is preferred? Are there other styles that are more idiomatic/Pythonic/etc.?
If you want to have the curried functions as part of a public interface, use explicit function definitions. This has the following additional advantages:
It is easier to assign a docstring to an explicit function definition. For partial() functions, you would have to assign to the __doc__ attribute, which is somewhat ugly.
Real function definitions are easier to skim when browsing the module source.
I would use functools.partial() in a similar way to lambda expressions, i.e. for locally needed throw-away functions.
In your particular example, I'd probably use neither, drop the leading underscores and call
sort_by_score(identity_scoring, foo)
which seems the most explicit to me.
As a slight tangent, it's generally desirable to let the sorted builtin do as much of the decorate-sort-undecorate work as is practical. For example:
def _sort_by_score(scoring, items_list):
    num_items = len(items_list)
    def score(entry):
        return scoring(num_items, entry[0])
    return [item for position, item in sorted(enumerate(items_list), key=score)]
(Only posted as an answer because blocks of code don't work as comments. See Sven's response for an answer to the actual question asked)
Edit by someone else: The Python sort function iterates through the list and generates the list of keys first. The key() function is called only once for each list item, in the order of the input list. Thus, you can also use the following implementation:
import itertools

def _sort_by_score(scoring, items_list):
    num_items = len(items_list)
    index = itertools.count()
    def score(entry):
        return scoring(num_items, next(index))
    return sorted(items_list, key=score)
(Only posted as a revision because blocks of code don't work as comments.)