Can one use closures to simplify functions in Python? - python

Imagine if you want to make a closure function that decides some option for what its inner function does. In this example, we have an inner function that decides whether a number is even, but the generator decides whether the number zero is even, as if there were a debate.
def generate_is_even(reject_zero):
    def is_even(x):
        return (x % 2 == 0 and x != 0) if reject_zero else x % 2 == 0
    return is_even
If is_even(x) is run millions of times, it appears reject_zero will still be checked on every single call of is_even(x)! My actual code has many similar 'options' used to create a function that is run millions of times, and it would be inconvenient to write a separate function for every combination of options. Is there a way to prevent this inefficiency, or does some implementation of Python optimize this away?

You seem to be looking for something like macros in C. Unfortunately, Python is not compiled (at least not the same way as C, for the purists), and I don't see a direct solution for your need.
Still, you could set all your parameters at the beginning of runtime, and select the functions at that moment according to the values of the parameters. For instance, your function generator would be something like:
def generate_is_even(reject_zero):
    def is_even_true(x):
        return (x % 2 == 0 and x != 0)
    def is_even_false(x):
        return x % 2 == 0
    return (is_even_true if reject_zero else is_even_false)

def setup(reject_zero, arg2, arg3):
    is_even = generate_is_even(reject_zero)
The drawback of this is having to write a generator for each function that handles such a parameter. In the case you present, this is not a big problem, because there are only two versions of the function, neither of which is very long.
You need to ask yourself when it is worthwhile to do so. In your situation, there is only one boolean comparison, which is not really resource-consuming, but there may be situations where generating the functions ahead of time becomes worthwhile.
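To get a feel for when it pays off, here is a hypothetical micro-benchmark (the names generate_checked and generate_selected are mine, and exact numbers will vary by machine) comparing a per-call check against selecting the function up front:

```python
import timeit

def generate_checked(reject_zero):
    # the branch on reject_zero is evaluated on every call
    def is_even(x):
        return (x % 2 == 0 and x != 0) if reject_zero else x % 2 == 0
    return is_even

def generate_selected(reject_zero):
    # the branch is evaluated once, when the function is generated
    def is_even_true(x):
        return x % 2 == 0 and x != 0
    def is_even_false(x):
        return x % 2 == 0
    return is_even_true if reject_zero else is_even_false

checked = generate_checked(True)
selected = generate_selected(True)
# selected is usually slightly faster, since it skips the per-call branch
print(timeit.timeit(lambda: checked(10), number=100000))
print(timeit.timeit(lambda: selected(10), number=100000))
```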

Consider caching all your options in a dict; the generated function then only iterates over the enabled checks:
def generate_is_even(**kwargs):
    options = {'reject_zero': lambda x: x != 0}
    enabled = [options[o] for o in options if o in kwargs and kwargs[o]]
    def is_even(x):
        return all([fn(x) for fn in enabled]) and x % 2 == 0
    return is_even
Then you could use:
is_even_nozero = generate_is_even(reject_zero=True)
is_even_nozero(0) # gives False
is_even = generate_is_even()
is_even(0) # gives True
If you need to add options, add them to the options dict; you can then pass new_option=True to the generate_is_even function to enable them.
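For example, adding a hypothetical positive_only option (not part of the original answer) only takes one more dict entry:

```python
def generate_is_even(**kwargs):
    options = {
        'reject_zero': lambda x: x != 0,
        'positive_only': lambda x: x > 0,  # hypothetical new option
    }
    enabled = [options[o] for o in options if kwargs.get(o)]
    def is_even(x):
        return all(fn(x) for fn in enabled) and x % 2 == 0
    return is_even

is_even_pos = generate_is_even(positive_only=True)
print(is_even_pos(-2))  # False
print(is_even_pos(2))   # True
```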

Related

python lambda evaluate expression

I am trying out lambda in python and came across this question:
def foo(y):
    return lambda x: x(x(y))

def bar(x):
    return lambda y: x(y)
print((bar)(bar)(foo)(2)(lambda x:x+1))
Can someone explain/break down how this code works? I am having problems trying to figure out what x and y are.
Lambda functions are just functions. They're almost syntactic sugar, as you can think of this structure:
anony_mouse = lambda x: x # don't actually assign lambdas
as equivalent to this structure:
def anony_mouse(x):
    return x
(Almost, as there is no other way of getting a function without assigning it to some variable, and the syntax prevents you from doing some things with them, such as using multiple lines.)
Thus let's write out the top example using standard function notation:
def foo(y):
    # note that y exists here
    def baz(x):
        return x(x(y))
    return baz
So we have a factory function, which generates a function which... expects to be called with a function (x), and returns x(x(arg_to_factory_function)). Consider:
>>> def add_six(x):
...     return x + 6
>>> bazzer = foo(3)
>>> bazzer(add_six)  # add_six(add_six(3)) = 6 + (6 + 3) = 15
15
I could go on, but does that make it clearer?
Incidentally that code is horrible, and almost makes me agree with Guido that lambdas are bad.
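For reference, given the original definitions of foo and bar, the full expression from the question reduces step by step like this (the comments show each reduction):

```python
def foo(y):
    return lambda x: x(x(y))

def bar(x):
    return lambda y: x(y)

# bar(bar)              -> lambda y: bar(y)   (bar wrapped in itself: still behaves like bar)
# bar(bar)(foo)         -> bar(foo) -> lambda y: foo(y)
# bar(bar)(foo)(2)      -> foo(2)   -> lambda x: x(x(2))
# ...(lambda x: x + 1)  -> (2 + 1) + 1 = 4
print(bar(bar)(foo)(2)(lambda x: x + 1))  # 4
```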
The 1st ‘(bar)’ is equal to just ‘bar’, so it is an ordinary function call. The 2nd is the argument to that call, i.e. bar(bar): substitute ‘bar’ for ‘x’ there and you get the result of bar(bar). The ‘(foo)’ is the argument passed to the result of bar(bar), which is a lambda function with one argument: substitute ‘foo’ into it, get the result, and so on until you reach the end of the expression.
I'll slightly modify your original functions to make clearer what's going on (in particular, which parameter is callable):
# given a function, returns a function evaluating it at value p
def eval(func):  # your bar
    return lambda p: func(p)

# given a value p, performs a double composition of the function at this value (2-step iteration)
def iter_2(p):  # your foo
    return lambda func: func(func(p))

increment = lambda x: x + 1  # binding to a variable only for readability
This example is quite hard to understand because one of the functions, eval, does nothing special, and composing it with itself is equivalent to the identity, which can be quite confusing.
(foo)(2)(lambda x: x + 1):
x = 2
iter_2(x)(increment)  # increments by 2 because iter_2 calls increment twice
# 4
Idempotency (composition with itself returns the identity function):
increment(3) == eval(increment)(3)
# True
# idempotency - second composition is equivalent to the identity
eval(increment)(3) == eval(eval)(increment)(3)
# True
eval(increment)(3) == eval(eval)(eval)(increment)(3)
# True
# ... and so on
Finally, as a consequence of idempotency, bar does nothing but add confusion:
eval(eval)(iter_2)(x)(increment) == iter_2(x)(increment)
# True
Remark:
in (bar)(bar)(foo)(2)(lambda x:x+1) you can omit the brackets around the 1st term, just bar(bar)(foo)(2)(lambda x:x+1)
Digression: [since your example is quite scary]
Lambda functions are also known as anonymous functions. Why? Simply because they don't need to be declared. They are designed to be single-purpose, so you should "never" assign them to a variable. They arise, for example, in the context of functional programming, where the basic ingredients are... functions! They are used to modify the behavior of other functions (for example by decoration!). Your example is just a standalone syntactical one... essentially a nonsense example which hides the true "power" of lambda functions. There is also a branch of mathematics based on them, called lambda calculus.
Here is a totally different example of an application of lambda functions, useful for decoration (but that is another story):
def action(func1):
    return lambda func2: lambda p: func2(p, func1())

def save(path, content):
    print(f'content saved to "{path}"')

def content():
    return 'content'  # i.e. from a file, url, ...

# call
action(content)(save)('./path')
# with every parameter passed by keyword it would be
action(func1=content)(func2=save)(p='./path')
Output
content saved to "./path"

strange returning value in a python function

def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair

def car(f):
    def left(a, b):
        return a
    return f(left)

def cdr(f):
    def right(a, b):
        return b
    return f(right)
Found this python code on git.
Just want to know what f(a, b) in the cons definition is, and how it works.
(Not a function I guess)
cons is a function that takes two arguments and returns a function that takes another function, which will consume these two arguments.
For example, consider the following function:
def add(a, b):
    return a + b
This is just a function that adds the two inputs, so, for instance, add(2, 5) == 7
As this function takes two arguments, we can use cons to call this function:
func_caller = cons(2, 5) # cons receives two arguments and returns a function, which we call func_caller
result = func_caller(add) # func_caller receives a function, that will process these two arguments
print(result) # result is the actual result of doing add(2, 5), i.e. 7
This technique is useful for wrapping functions and executing stuff, before and after calling the appropriate functions.
For example, we can modify our cons function to actually print the values before and after calling add:
def add(a, b):
    print('Adding {} and {}'.format(a, b))
    return a + b

def cons(a, b):
    print('Received arguments {} and {}'.format(a, b))
    def pair(f):
        print('Calling {} with {} and {}'.format(f, a, b))
        result = f(a, b)
        print('Got {}'.format(result))
        return result
    return pair
With this update, we get the following outputs:
func_caller = cons(2, 5)
# prints "Received arguments 2 and 5" from inside cons
result = func_caller(add)
# prints "Calling add with 2 and 5" from inside pair
# prints "Adding 2 and 5" from inside add
# prints "Got 7" from inside pair
This isn't going to make any sense to you until you know what cons, car, and cdr mean.
In Lisp, lists are stored as a very simple form of linked list. A list is either nil (like None) for an empty list, or it's a pair of a value and another list. The cons function takes a value and a list and returns you another list just by making a pair:
def cons(head, rest):
    return (head, rest)
And the car and cdr functions (they stand for "Contents of Address|Data Register", because those are the assembly language instructions used to implement them on a particular 1950s computer, but that isn't very helpful) return the first or second value from a pair:
def car(lst):
    return lst[0]

def cdr(lst):
    return lst[1]
So, you can make a list:
lst = cons(1, cons(2, cons(3, None)))
… and you can get the second value from it:
print(car(cdr(lst)))
… and you can even write functions to get the nth value:
def nth(lst, n):
    if n == 0:
        return car(lst)
    return nth(cdr(lst), n-1)
… or print out the whole list:
def printlist(lst):
    if lst:
        print(car(lst), end=' ')
        printlist(cdr(lst))
If you understand how these work, the next step is to try them on those weird definitions you found.
They still do the same thing. So, the question is: How? And the bigger question is: What's the point?
Well, there's no practical point to using these weird functions; the real point is to show you that everything in computer science can be written with just functions, no built-in data structures like tuples (or even integers; that just takes a different trick).
The key is higher-order functions: functions that take functions as values and/or return other functions. You actually use these all the time: map, sort with a key, decorators, partial… they’re only confusing when they’re really simple:
def car(f):
    def left(a, b):
        return a
    return f(left)
This takes a function, and calls it on a function that returns the first of its two arguments.
And cdr is similar.
It's hard to see how you'd use either of these, until you see cons:
def cons(a, b):
    def pair(f):
        return f(a, b)
    return pair
This takes two things and returns a function that takes another function and applies it to those two things.
So, what do we get from cons(3, None)? We get a function that takes a function, and applies it to the arguments 3 and None:
def pair3(f):
    return f(3, None)
And if we call cons(2, cons(3, None))?
def pair23(f):
    return f(2, pair3)
And what happens if you call car on that function? Trace through it:
def left(a, b):
    return a
return pair23(left)
That pair23(left) does this:
return left(2, pair3)
And left is dead simple:
return 2
So, we got the first element of (2, cons(3, None)).
What if you call cdr?
def right(a, b):
    return b
return pair23(right)
That pair23(right) does this:
return right(2, pair3)
… and right is dead simple, so it just returns pair3.
You can work out that if we call car(cdr(pair23)), we're going to get the 3 out of it.
And now you can write lst = cons(1, cons(2, cons(3, None))), write the recursive nth and printlist functions above, and trace through how they work on lst.
I mentioned above that you can even get rid of integers. How do you do that? Read about Church numerals. You define zero and successor functions. Then you can define one as successor(zero) and two as successor(one). You can even recursively define add so that add(x, zero) is x but add(x, successor(y)) is successor(add(x, y)), and go on to define mul, etc.
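A minimal sketch of Church numerals in Python, following the encoding just described (the names zero, successor, add, and to_int are mine, not from the answer): a numeral n is a function that applies another function n times.

```python
zero = lambda f: lambda x: x                          # apply f zero times
successor = lambda n: lambda f: lambda x: f(n(f)(x))  # one extra application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

one = successor(zero)
two = successor(one)

# convert back to an ordinary int by counting applications of f
to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(two))            # 2
print(to_int(add(one)(two)))  # 3
```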
You also need a special function you can use as a value for nil.
Anyway, once you've done that, using all of the other definitions above, you can do lst = cons(zero, cons(one, cons(two, cons(three, nil)))), and nth(lst, two) will give you back two. (Of course writing printlist will be a bit trickier…)
Obviously, this is all going to be a lot slower than just using tuples and integers and so on. But theoretically, it’s interesting.
Consider this: we could write a tiny dialect of Python that has only three kinds of statements—def, return, and expression statements—and only three kinds of expressions—literals, identifiers, and function calls—and it could do everything normal Python does. (In fact, you could get rid of statements altogether just by having a function-defining expression, which Python already has.) That tiny language would be a pain to use, but it would be a lot easier to write a program to reason about programs in that tiny language. And we even know how to translate code using tuples, loops, etc. into code in this tiny subset language, which means we can write a program that reasons about that real Python code.
In fact, with a couple more tricks (curried functions and/or static function types, and lazy evaluation), the compiler/interpreter could do that kind of reasoning on the fly and optimize our code for us. It’s easy to tell programmatically that car(cdr(cons(2, cons(3, None)))) is going to return 3 without having to actually evaluate most of those function calls, so we can just skip evaluating them and substitute 3 for the whole expression.
Of course this breaks down if any function can have side effects. You obviously can’t just substitute None for print(3) and get the same results. So instead, you need some clever trick where IO is handled by some magic object that evaluates functions to figure out what it should read and write, and then the whole rest of the program, the part that users write, becomes pure and can be optimized however you want. With a couple more abstractions, we can even make IO something that doesn’t have to be magical to do that.
And then you can build a standard library that gives you back all those things we gave up, written in terms of defining and calling functions, so it’s actually usable—but under the covers it’s all just reducing pure function calls, which is simple enough for a computer to optimize. And then you’ve basically written Haskell.

How to simplify several functions with the same purpose

Let's say I have several functions that basically do the same thing but on a set of different variables. I think about something like that:
def changex():
    if x:
        # do procedure1 with x as variable
    else:
        # do procedure2 with x as variable

def changey():
    if y:
        # do procedure1 with y as variable
    else:
        # do procedure2 with y as variable

def changez():
    ...
How can I simplify that set of functions, such that I only have to write the function once but it does the job for all variables? I already looked into decorators which I think can do the job, but I can't make out how to make it work for my purpose.
Functions can accept variables as parameters.
Something like this:
def change(p):
    if p:
        # do procedure1 with p as variable
    else:
        # do procedure2 with p as variable
Then changex() becomes change(x); changey() becomes change(y) etc.
Decorators are way too complex for what you are trying to do. Functions can take arguments, so you can pass in the variable you want to do stuff with:
def change(var):
    if var:
        procedure1(var)
    else:
        procedure2(var)
Notice that I used if/else instead of checking both conditions with an if. I also recommend not testing explicitly against True/False. If you must do so, however, it is considered better practice to use is instead of == since True and False are singleton objects in Python: if x is True is better than if x == True.
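A concrete, runnable version of that idea, with procedure1 and procedure2 as stand-in implementations (they are placeholders, not from the question):

```python
def procedure1(var):
    return 'truthy branch handled {!r}'.format(var)

def procedure2(var):
    return 'falsy branch handled {!r}'.format(var)

def change(var):
    # one function replaces changex/changey/changez
    if var:
        return procedure1(var)
    else:
        return procedure2(var)

print(change(5))  # truthy branch handled 5
print(change(0))  # falsy branch handled 0
```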

Pythonic way to efficiently handle variable number of return args

So I have a function that can either work quietly or verbosely. In quiet mode it produces an output. In verbose mode it also saves intermediate calculations to a list, though doing so takes extra computation in itself.
Before you ask, yes, this is an identified bottleneck for optimization, and the verbose output is rarely needed so that's fine.
So the question is, what's the most pythonic way to efficiently handle a function which may or may not return a second value? I suspect a pythonic way would be named tuples or dictionary output, e.g.
def f(x, verbose=False):
    result = 0
    verbosity = []
    for _ in x:
        foo = # something quick to calculate
        result += foo
        if verbose:
            verbosity += # something slow to calculate based on foo
    return {"result": result, "verbosity": verbosity}
But that requires constructing a dict when it's not needed.
Some alternatives are:
# "verbose" changes the shape of the return value, yuck!
return (result, verbosity) if verbose else result
or using a mutable argument
def f(x, verbosity=None):
    if verbosity:
        assert verbosity == [[]]
    result = 0
    for _ in x:
        foo = # something quick to calculate
        result += foo
        if verbosity:
            # hard coded value, yuck
            verbosity[0] += # something slow to calculate based on foo
    return result

# for verbose results call as
verbosity = [[]]
f(x, verbosity)
Any better ideas?
Don't return verbosity. Make it an optional function argument, passed in by the caller, mutated in the function if not empty.
The non-pythonic part of some answers is the need to test the structure of the return value. Passing mutable arguments for optional processing avoids this ugliness.
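A sketch of that pattern (function and variable names are illustrative): the caller optionally passes in a list, and the function appends the expensive extras only when one was supplied:

```python
def f(xs, verbosity=None):
    result = 0
    for foo in xs:
        result += foo  # the cheap calculation, always done
        if verbosity is not None:
            verbosity.append(foo * foo)  # stand-in for the slow extra work
    return result

print(f([1, 2, 3]))       # 6, no extra work done
log = []
print(f([1, 2, 3], log))  # 6
print(log)                # [1, 4, 9]
```

The return type stays the same in both modes, so callers never have to inspect the structure of the result.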
I like the first option, but instead of passing a verbose parameter in the function call, return a tuple of a quick result and a lazily-evaluated function:
import time

def getResult(x):
    quickResult = x * 2
    def verboseResult():
        time.sleep(5)
        return quickResult * 2
    return (quickResult, verboseResult)

# Returns immediately
(quickResult, verboseResult) = getResult(2)
print(quickResult)  # Prints immediately
print(verboseResult())  # Prints after running the long-running function

Python: do r = random_stuff() while not meets_condition(r)

I often have to randomly generate stuff with certain constraints. In many cases, it's quicker to ignore the constraints in generation, check if they are met afterwards and redo the process otherwise. Lacking a do keyword, I usually write
r = random_stuff()
while not meets_condition(r):
    r = random_stuff()
That's a bit ugly, as I have the same line of code twice. What I'd really like to have is a construct like
r = random_stuff() until meets_condition(r)
similar to the ternary operator introduced in 2.5:
a = b if condition else c
Just that here the condition is evaluated before the left-hand side of the statement is executed. Does anybody have a suggestion for a design pattern (it should work in Python 2.7) that remedies the while construct's intrinsic unpythonic ugliness?
while True:
    r = random_stuff()
    if meets_condition(r):
        break
or
condition = True
while condition:
    r = random_stuff()
    condition = not meets_condition(r)
Your idea is not bad, but rather than a new keyword until, it would look like
a = (<expression> while <condition>)
extending the idea of generator expressions.
As these don't exist, that won't help you.
But what you can use is the iter function with a sentinel.
dummy_sentinel = object()
for r in iter(random_stuff, dummy_sentinel):
    if meets_condition(r): break
If you can be sure that random_stuff() only returns values of a certain kind, such as numbers or strings, you can use some other value as the sentinel. In particular, if None can never occur, use it, in order to get a never-ending generator.
for r in iter(random_stuff, None):
    if meets_condition(r): break
Then random_stuff() gets called until it meets the condition.
Even better might be
r = next(r for r in iter(random_stuff, None) if meets_condition(r))
which gives you the first matching one.
Maybe syntactic sugar is what the Dr. ordered? You could do something like this.
I was too lazy to handle kw args in find_condition. You get what you pay for :D.
def find_condition(cond_mk, cond_ck, *args):
    """
    .. function:: find_condition(cond_mk, cond_ck) -> cond

    Create conditions by calling cond_mk until one is found that passes
    the condition check, cond_ck. Once cond_ck returns True, iteration
    over cond_mk stops and the last value processed is returned.

    ** WARNING **
    This function could loop infinitely.

    ``cond_mk`` - callable that creates a condition. Its return value is
    passed to cond_ck for verification.
    ``cond_ck`` - callable that checks the return value of cond_mk.
    ``args`` - any arguments to pass to cond_mk should be supplied here.
    """
    v = cond_mk(*args)
    while not cond_ck(v):
        v = cond_mk(*args)
    return v
# Test it out..
import random
random.seed()
print find_condition(random.randint, lambda x: x > 95, 1, 100)
while not meets_condition(random_stuff()): pass
If you actually need the random_stuff() then it can be stored elsewhere as a side effect (e.g. make random_stuff the __call__ method of a class).
Okay, inspired by @jaime I wrote the following decorator:
def retry(condition):
    def deco_retry(f):
        def f_retry(*args, **kwargs):
            success = False
            while not success:
                result = f(*args, **kwargs)
                success = condition(result)
            return result
        return f_retry
    return deco_retry
Now, the following works:
def condition(value):
    return value < .5

@retry(condition)
def random_stuff():
    return random.random()

print random_stuff()
Also, inline:
@retry(lambda x: x < .5)
def random_stuff():
    return random.random()

print random_stuff()
However, random_stuff() is now bound to the condition with which it was decorated, and can only be used with that condition. Also, it doesn't work for instance methods (as in @retry(self.condition)). Any ideas to circumvent that?
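One possible workaround, sketched as my own variant rather than something from the thread: instead of baking the condition in at decoration time, have the wrapper accept it as the first call-time argument, so the same decorated function can be retried against different conditions (including bound methods like self.condition):

```python
import random

def retry(f):
    # condition is supplied at call time rather than decoration time
    def f_retry(condition, *args, **kwargs):
        result = f(*args, **kwargs)
        while not condition(result):
            result = f(*args, **kwargs)
        return result
    return f_retry

@retry
def random_stuff():
    return random.random()

print(random_stuff(lambda x: x < .5))  # always below 0.5
print(random_stuff(lambda x: x > .9))  # same function, different condition
```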
