Can you explain closures (as they relate to Python)?

I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.

Closure on closures:
Objects are data with methods attached; closures are functions with data attached.
def make_counter():
    i = 0
    def counter():  # counter() is a closure
        nonlocal i
        i += 1
        return i
    return counter

c1 = make_counter()
c2 = make_counter()

print(c1(), c1(), c2(), c2())
# -> 1 2 1 2

It's simple: A function that references variables from a containing scope, potentially after flow-of-control has left that scope. That last bit is very useful:
>>> def makeConstantAdder(x):
...     constant = x
...     def adder(y):
...         return y + constant
...     return adder
...
>>> f = makeConstantAdder(12)
>>> f(3)
15
>>> g = makeConstantAdder(4)
>>> g(3)
7
Note that 12 and 4 have "disappeared" inside f and g, respectively; this feature is what makes f and g proper closures.

To be honest, I understand closures perfectly well except I've never been clear about what exactly is the thing which is the "closure" and what's so "closure" about it. I recommend you give up looking for any logic behind the choice of term.
Anyway, here's my explanation:
def foo():
    x = 3
    def bar():
        print x
    x = 5
    return bar

bar = foo()
bar()  # prints 5
A key idea here is that the function object returned from foo retains a hook to the local var 'x' even though 'x' has gone out of scope and should be defunct. This hook is to the var itself, not just the value that var had at the time, so when bar is called, it prints 5, not 3.
Also be clear that Python 2.x has limited closure support: there's no way to modify 'x' inside 'bar', because writing 'x = bla' would declare a local 'x' in bar rather than assign to the 'x' of foo. This is a side effect of the fact that in Python assignment doubles as declaration. To get around this, Python 3.0 introduced the nonlocal keyword:
def foo():
    x = 3
    def bar():
        print(x)
    def ack():
        nonlocal x
        x = 7
    x = 5
    return (bar, ack)

bar, ack = foo()
ack()  # modifies x of the call to foo
bar()  # prints 7

I like this rough, succinct definition:
A function that can refer to environments that are no longer active.
I'd add
A closure allows you to bind variables into a function without passing them as parameters.
Decorators which accept parameters are a common use for closures. Closures are a common implementation mechanism for that sort of "function factory". I frequently choose to use closures in the Strategy Pattern when the strategy is modified by data at run-time.
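For instance, a decorator that accepts parameters is really two nested closures. Here is a minimal sketch of that pattern (the repeat/greet names are illustrative, not from the answer above):

import functools

def repeat(times):                       # the factory closes over 'times'
    def decorator(func):                 # the decorator closes over 'times' and 'func'
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(3)
def greet(name):
    print('hello', name)

greet('world')  # prints 'hello world' three times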
In a language that allows anonymous block definitions -- e.g., Ruby, C# -- closures can be used to implement (what amount to) novel control structures. The lack of anonymous blocks is among the limitations of closures in Python.

I've never heard of transactions being used in the same context as explaining what a closure is and there really aren't any transaction semantics here.
It's called a closure because it "closes over" the outside variable (constant)--i.e., it's not just a function but an enclosure of the environment where the function was created.
In the following example, calling the closure g after changing x will also change the value of x within g, since g closes over x:
x = 0

def f():
    def g():
        return x * 2
    return g

closure = f()
print(closure())  # 0

x = 2
print(closure())  # 4

# A closure is a function object that remembers values in enclosing scopes
# even when those scopes are no longer active.

# Defining a closure

# This is an outer function.
def outer_function(message):
    # This is an inner nested function.
    def inner_function():
        print(message)
    return inner_function

# Now let's call the outer function and bind the returned value to the name 'temp'.
temp = outer_function("Hello")

# On calling temp, 'message' is still remembered, although we have
# finished executing outer_function().
temp()

# The technique by which the inner function remembers values ('message')
# from enclosing scopes, even after those scopes are gone, is called a closure.
# Output: Hello
Criteria to be met by closures:
We must have a nested function.
The nested function must refer to a value defined in the enclosing function.
The enclosing function must return the nested function.
# Example 2
def make_multiplier_of(n):   # Outer function
    def multiplier(x):       # Inner nested function
        return x * n
    return multiplier

# Multiplier of 3
times3 = make_multiplier_of(3)

# Multiplier of 5
times5 = make_multiplier_of(5)

print(times5(3))  # 15
print(times3(2))  # 6

Here's a typical use case for closures - callbacks for GUI elements (this would be an alternative to subclassing the button class). For example, you can construct a function that will be called in response to a button press, and "close" over the relevant variables in the parent scope that are necessary for processing the click. This way you can wire up pretty complicated interfaces from the same initialization function, building all the dependencies into the closure.
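As a rough sketch of that pattern with tkinter (the widget layout and names here are illustrative assumptions, not part of the original answer):

import tkinter as tk

def build_counter_button(root, label_text):
    count = 0
    label = tk.Label(root, text=label_text + ': 0')

    def on_click():
        # The callback closes over count, label and label_text,
        # so no subclassing or global state is needed.
        nonlocal count
        count += 1
        label.config(text=label_text + ': ' + str(count))

    tk.Button(root, text='Click me', command=on_click).pack()
    label.pack()

root = tk.Tk()
build_counter_button(root, 'Clicks')
root.mainloop()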

In Python, a closure is an instance of a function that has variables bound to it immutably.
In fact, the data model explains this in its description of functions' __closure__ attribute:
None or a tuple of cells that contain bindings for the function’s free variables. Read-only
To demonstrate this:
def enclosure(foo):
    def closure(bar):
        print(foo, bar)
    return closure

closure_instance = enclosure('foo')
Clearly, we know that we now have a function pointed at from the variable name closure_instance. Ostensibly, if we call it with an object, bar, it should print the string, 'foo' and whatever the string representation of bar is.
In fact, the string 'foo' is bound to the instance of the function, and we can directly read it here, by accessing the cell_contents attribute of the first (and only) cell in the tuple of the __closure__ attribute:
>>> closure_instance.__closure__[0].cell_contents
'foo'
As an aside, cell objects are described in the C API documentation:
"Cell" objects are used to implement variables referenced by multiple
scopes
And we can demonstrate our closure's usage, noting that 'foo' is stuck in the function and doesn't change:
>>> closure_instance('bar')
foo bar
>>> closure_instance('baz')
foo baz
>>> closure_instance('quux')
foo quux
And nothing can change it:
>>> closure_instance.__closure__ = None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: readonly attribute
Partial Functions
The example given uses the closure as a partial function, but if partial application is our only goal, the same thing can be accomplished with functools.partial:
>>> from __future__ import print_function  # use this if you're in Python 2
>>> import functools
>>> partial_function = functools.partial(print, 'foo')
>>> partial_function('bar')
foo bar
>>> partial_function('baz')
foo baz
>>> partial_function('quux')
foo quux
There are more complicated closures as well that would not fit the partial function example, and I'll demonstrate them further as time allows.

Here is an example of Python 3 closures:
def closure(x):
    def counter():
        nonlocal x
        x += 1
        return x
    return counter

counter1 = closure(100)
counter2 = closure(200)

print("i from closure 1 " + str(counter1()))
print("i from closure 1 " + str(counter1()))
print("i from closure 2 " + str(counter2()))
print("i from closure 1 " + str(counter1()))
print("i from closure 1 " + str(counter1()))
print("i from closure 1 " + str(counter1()))
print("i from closure 2 " + str(counter2()))
# result
i from closure 1 101
i from closure 1 102
i from closure 2 201
i from closure 1 103
i from closure 1 104
i from closure 1 105
i from closure 2 202

We have all used decorators in Python. They are a nice example of what closure functions in Python are.
class Test():
    def decorator(func):
        def wrapper(*args):
            b = args[1] + 5
            return func(b)
        return wrapper

    @decorator
    def foo(val):
        print val + 2

obj = Test()
obj.foo(5)
Here the final value is 12.
The wrapper function is able to access the func object because wrapper is a "lexical closure": it can access the variables of its enclosing scope.
That is why it is able to access the func object.

I would like to share my example and an explanation about closures. I made a python example, and two figures to demonstrate stack states.
def maker(a, b, n):
    margin_top = 2
    padding = 4
    def message(msg):
        print('\n' * margin_top, a * n,
              ' ' * padding, msg, ' ' * padding, b * n)
    return message

f = maker('*', '#', 5)
g = maker('', '♥', 3)
...
f('hello')
g('good bye!')
The output of this code would be as follows:
***** hello #####
 good bye! ♥♥♥
Here are two figures showing the stacks and the closure attached to the function object:
[Figure 1: when the function is returned from maker]
[Figure 2: when the function is called later]
When the function is called later through a parameter or a nonlocal name, its code needs the local variable bindings margin_top and padding as well as a, b and n. For the code to keep working, the stack frame of the maker call, which is long gone, must remain accessible; it is backed up in the closure that we can find attached to the message function object.
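To peek at those backed-up bindings, one can inspect the __closure__ attribute of the returned function (a small sketch based on the maker example above, not part of the original explanation):

f = maker('*', '#', 5)

# __closure__ holds one cell per free variable of 'message';
# __code__.co_freevars lists the corresponding names in the same order.
for name, cell in zip(f.__code__.co_freevars, f.__closure__):
    print(name, '=', repr(cell.cell_contents))
# prints the saved bindings for a, b, n, margin_top and padding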

For me, "closures" are functions which are capable to remember the environment they were created. This functionality, allows you to use variables or methods within the closure wich, in other way,you wouldn't be able to use either because they don't exist anymore or they are out of reach due to scope. Let's look at this code in ruby:
def makefunction(x)
  def multiply(a, b)
    puts a * b
  end
  return lambda { |n| multiply(n, x) }  # => returning a closure
end

func = makefunction(2)  # => we capture the closure
func.call(6)            # => result equals "12"
It works even when both the multiply method and the x variable no longer exist, all because of the closure's capability to remember.

The best explanation I ever saw of a closure was to explain the mechanism. It went something like this:
Imagine your program stack as a degenerate tree where each node has only one child and the single leaf node is the context of your currently executing procedure.
Now relax the constraint that each node can have only one child.
If you do this, you can have a construct ('yield') that can return from a procedure without discarding the local context (i.e. it doesn't pop it off the stack when you return). The next time the procedure is invoked, the invocation picks up the old stack (tree) frame and continues executing where it left off.
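Python's generators are a concrete illustration of that relaxed-stack idea (a small sketch, not part of the original explanation): yield returns to the caller without discarding the local frame, and the next call resumes where it left off.

def countdown(n):
    while n > 0:
        yield n   # return to the caller, but keep this frame alive
        n -= 1    # resume here on the next call

gen = countdown(3)
print(next(gen))  # 3
print(next(gen))  # 2
print(next(gen))  # 1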

Related

Python - modifying variable passed to function

If I have a function f(x) and a variable var of any type and scope, is there any way to modify var inside a call to f(var)?
Meaning that function f does some magic to get a reference to the original (passed) var (like a reference/pointer in C++) and modifies that original var, for any type of var (even int/float/str/bytes).
Wrapping var into a dict or list or any other class is not allowed, because it is known that dicts/lists are passed by reference.
Returning a new value from the function in order to re-assign the variable is not allowed either.
Modifying the scope of the original variable (e.g. making it global) is not allowed either.
In fact, any change to the function caller's code is not allowed.
This variable can be of any imaginable type and scope; no assumptions should be made about them.
So the following code should work (it can be placed either inside another wrapping function or globally; it should not matter):
def test():
    def magic_inplace_add(x, val):
        ...  # implementation goes here

    var = 111
    magic_inplace_add(var, 222)  # modifies int var by adding 222
    print(var)  # prints 333

test()
If needed to solve the task, this function can do any complex manipulations, such as using the inspect module.
Basically, I need to somehow break the convention of simple types being passed by value, even if I have to achieve this in a non-simple/cryptic way.
I'm pretty sure this task can be solved with tools from standard reverse-engineering modules like inspect / ast / dis.
Just to clarify my need for this task: right now I'm not trying to code nicely in tidy Python style; you can imagine that this task might in future be a kind of interview question for companies specializing in reverse engineering / antiviruses / code security / compiler implementation. Tasks like this are also interesting for exploring the hidden possibilities of languages like Python.
For module-scope variables:
import inspect
import re

def magic_inplace_add(var, val):
    '''
    Illustration of updating the reference to var when the function is
    called at module level (i.e. not nested).
    '''
    # Use inspect to get the function call syntax.
    previous_frame = inspect.currentframe().f_back
    (filename, line_number,
     function_name, lines, index) = inspect.getframeinfo(previous_frame)
    # lines contains the calling syntax,
    # e.g. ['magic_inplace_add(x, 222)\n']
    args = re.findall(r"\w+", lines[0])  # tokens of the call, e.g. ['magic_inplace_add', 'x', '222']
    # Update the variable in the context of the previous frame,
    # i.e. args[1] == 'x' in the example.
    if args[1] in previous_frame.f_globals:
        # Update a module-level (global) variable.
        previous_frame.f_globals.update({args[1]: var + val})
    elif args[1] in previous_frame.f_locals:
        # For a nested function the variable would be in f_locals,
        # but this doesn't work in Python 3: f_locals is a copy of the actual
        # locals of the calling frame.
        # See Martijn Pieters' answer at
        # https://stackoverflow.com/questions/36678241/how-can-i-force-update-the-python-locals-dictionary-of-a-different-stack-frame
        previous_frame.f_locals.update({args[1]: var + val})  # this is what would be used for nested functions
Example
x = 111 # x module scope
y = 222 # y module scope
z = 333 # z module scope
magic_inplace_add(x, 222)
magic_inplace_add(y, 222)
magic_inplace_add(z, 222)
print(x) # Output: 333
print(y) # Output: 444
print(z) # Output: 555
As @rcvaram suggested, use a global variable, like this:
def magic_inplace_add(x, val):
    global var
    var = 333

var = 111
magic_inplace_add(var, 222)  # modifies int var by adding 222
print(var)  # prints 333

how to assign a new value to a variable inside a function? [duplicate]

Sorry if this is a dumb question, but I've looked for a while and not really found the answer.
If I'm writing a python function, for example:
def function(in1, in2):
    in1 = in1 + 1
    in2 = in2 + 1
How do I make these changes stick?
I know why they don't (this has been addressed in many answers), but I couldn't find an answer to the question of how to actually make them do so. Without returning values or making some sort of class, is there really no way for a function to operate on its arguments in a global sense?
I also want these variables to not be global themselves, as in I want to be able to do this:
a=1
b=2
c=3
d=4
function(a,b)
function(c,d)
Is this just wishful thinking?
It can be done, but I'm warning you: it won't be pretty! What you can do is capture the caller frame in your function, pick up the call line, parse it and extract the arguments passed, compare them with your function signature to create an argument map, then call your function, and once it finishes compare the changes in the local stack and update the caller frame with the mapped changes. If you want to see how silly it can get, here's a demonstration:
# HERE BE DRAGONS
# No, really, here be dragons, this is strictly for demonstration purposes!!!
# Whenever you use this in code a sweet little pixie is brutally killed!
import ast
import inspect
import sys

def here_be_dragons(funct):  # create a decorator so we can, hm, enhance 'any' function
    def wrapper(*args, **kwargs):
        caller = inspect.getouterframes(inspect.currentframe())[1]  # pick up the caller
        parsed = ast.parse(caller[4][0], mode="single")  # parse the calling line
        arg_map = {}  # a map for our tracked args to establish global <=> local link
        for node in ast.walk(parsed):  # traverse the parsed code...
            # and look for a call to our wrapped function
            if isinstance(node, ast.Call) and node.func.id == funct.__name__:
                # loop through all positional arguments of the wrapped function
                for pos, var in enumerate(funct.func_code.co_varnames):
                    try:  # and try to find them in the captured call
                        if isinstance(node.args[pos], ast.Name):  # named argument!
                            arg_map[var] = node.args[pos].id  # add to our map
                    except IndexError:
                        break  # no more passed arguments
                break  # no need for further walking through the ast tree

        def trace(frame, evt, arg):  # a function to capture the wrapped locals
            if evt == "return":  # we're only interested in our function return
                for arg in arg_map:  # time to update our caller frame
                    caller[0].f_locals[arg_map[arg]] = frame.f_locals.get(arg, None)

        profile = sys.getprofile()  # in case something else is doing profiling
        sys.setprofile(trace)  # turn on profiling of the wrapped function
        try:
            return funct(*args, **kwargs)
        finally:
            sys.setprofile(profile)  # reset our profiling
    return wrapper
And now you can easily decorate your function to enable it to perform this ungodly travesty:
# Zap, there goes a pixie... Poor, poor, pixie. It will be missed.
@here_be_dragons
def your_function(in1, in2):
    in1 = in1 + 1
    in2 = in2 + 1
And now, demonstration:
a = 1
b = 2
c = 3
d = 4
# Now is the time to play and sing along: Queen - A Kind Of Magic...
your_function(a, b) # bam, two pixies down... don't you have mercy?
your_function(c, d) # now you're turning into a serial pixie killer...
print(a, b, c, d) # Woooo! You made it! At the expense of only three pixie lives. Savage!
# prints: (2, 3, 4, 5)
This, obviously, works only for non-nested functions with positional arguments, and only if you pass simple local arguments; feel free to go down the rabbit hole of handling keyword arguments, different stacks, returned/wrapped/chained calls, and other shenanigans if that's what you fancy.
Or, you know, you can use structures invented for this, like globals, classes, or even enclosed mutable objects. And stop murdering pixies.
If you are looking to modify the values of the variables, you could write your code as:
def func(a, b):
    int1 = a + 2
    int2 = b + 3
    return int1, int2

a = 2
b = 3
a, b = func(a, b)
This allows you to actually change the values of the a and b variables with the function.
you can do:
def function(in1, in2):
    return in1 + 1, in2 + 1
a, b = function(a,b)
c, d = function(c,d)
Python functions have their own scope: when function(a, b) is called, a and b get bound to names in1 and in2 that are local to the function and are not accessible outside of it. To expose those new values without using globals, you need to pass them back through return.
When you pass a list or other non-primitive object into a function, you can modify the object's attributes (or contents) and have those modifications be visible to other references to that object outside the function, because the object itself contains the references to those values, making them visible to anything else holding a reference to that object.
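A minimal sketch of that point (the names are illustrative):

def add_one_in_place(values):
    # The caller's name and 'values' refer to the same list object,
    # so mutating it here is visible outside the function.
    for i in range(len(values)):
        values[i] += 1

nums = [1, 2, 3]
add_one_in_place(nums)
print(nums)  # [2, 3, 4]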

How to make two functions share the same non global variable (Python)

Is there a way to make function B able to access a non-global variable that was declared only in function A, without return statements from function A?
As asked, the question:
Define two functions:
p: prints the value of a variable
q: increments the variable
such that
The initial value of the variable is 0. You can't define the variable in the global environment.
The variable is not located in the global environment, and the only way to change it is by invoking q().
The global environment should know only p() and q().
Tips: 1) In Python, a function can return more than one value. 2) A function can be assigned to a variable.
# Example:
>>> p()
0
>>> q()
>>> q()
>>> p()
2
The question says the global environment should know only p and q.
So, taking that literally, it could be done inline using a single function scope:
>>> p, q = (lambda x=[0]: (lambda: print(x[0]), lambda: x.__setitem__(0, x[0] + 1)))()
>>> p()
0
>>> q()
>>> q()
>>> p()
2
Using the tips provided as clues, it could be done something like this:
def make_p_and_q():
    context = {'local_var': 0}
    def p():
        print('{}'.format(context['local_var']))
    def q():
        context['local_var'] += 1
    return p, q

p, q = make_p_and_q()
p()  # --> 0
q()
q()
p()  # --> 2
The collection of things that functions can access is generally called its scope. One interpretation of your question is whether B can access a "local variable" of A; that is, one that is defined normally as
def A():
x = 1
The answer here is "not easily": Python lets you do a lot, but local variables are one of the things that are not meant to be accessed inside a function.
I suspect what your teacher is getting at is that A can modify things outside of its scope, in order to send information out without sending it through the return value. (Whether this is good coding practise is another matter.) For example, functions are themselves Python objects, and you can assign arbitrary properties to Python objects, so you can actually store values on the function object and read them from outside it.
def a():
    a.key = "value"

a()
print a.key
Introspection and hacking with function objects
In fact, you can sort of get at the constant values defined in A by looking at the compiled Python object generated when you define a function. For example, in the example above, "value" is a constant, and constants are stored on the code object:
In [9]: a.func_code.co_consts
Out[9]: (None, 'value')
This is probably not what you meant.
Firstly, it's bad practise to do so. Such variables make debugging difficult and are easy to lose track of, especially in complex code.
Having said that, you can accomplish what you want by declaring a variable as global:
def funcA():
    global foo
    foo = 3

def funcB():
    print foo  # output is 3
That's one weird homework assignment; especially the tips make me suspect that you've misunderstood or left out something.
Anyway, here's a simpler solution than the accepted answer: Since calls to q increment the value of the variable, it must be a persistent ("static") variable of some sort. Store it somewhere other than the global namespace, and tell p about it. The obvious place to store it is as an attribute of q:
def q():
    q.x += 1
q.x = 0  # Initialize

def p():
    print(q.x)

python executing nested functions

I have this piece of code, and I am confused about its execution:
def makeInc(x):
    def inc(y):
        return y + x
    return inc

inc5 = makeInc(5)
# print inc5 ---> <function inc at 0xb721bdbc>
inc5(12)
# print inc5(12) gives 17
So, I am wondering why it makes the sum, x+y, only on the second call of the function inc5().
I know what a closure in python really means, but I am still confused.
I have found that piece of code in this web page:
http://ynniv.com/blog/2007/08/closures-in-python.html
Here are comments on what is happening.
makeInc is a function which returns another function. Thanks to closures, the returned function knows the value x had at the moment the function was created and returned.
Calling inc5 = makeInc(5) first sets x inside makeInc to 5, and then creates the function inc. This inc function knows that x equals 5 and will remember it as long as it lives; this is the concept of closures (pack into your backpack all the variables from the environment that you need at the moment of birth).
Calling inc5(12) executes inc(12). Inside this call to inc, the argument y, which is 12, is added to the value of x, which inc5 remembers as 5, so the sum is 17.
N.B.: Closures are used naturally in languages like JavaScript. In Python, closures are available too, but they are not used as often, and some developers even consider them evil; your question shows very clearly that it is not trivial to understand what is happening in the code, which goes against "readability counts" from the Zen of Python. You can also find some surprises related to the scope and visibility of variables. If you do not get exactly what has happened in the code above, do not worry: Python offers many other nice, challenging constructs which you will use much more often (list comprehensions being one example).
The other answers explain it well, but here's a line-by-line breakdown of what happens in your code:
def makeInc(x):  # define a function makeInc that takes a variable x
    # when makeInc is executed, define a function "inc" that takes a variable y
    def inc(y):
        # when inc is executed, return the sum (where x is closed over from the outer function)
        # note this is not executed when makeInc is called
        return y + x
    # don't call inc yet, just return a reference to the function itself - note there
    # are no parentheses after inc
    return inc

inc5 = makeInc(5)  # now execute makeInc, which returns a reference to the inc function
                   # with the variable x captured by the closure

inc5(12)  # now call the result of the above, which is the only place that we call inc
          # note the parentheses that designate the function call
# print inc5(12) gives 17
Central to understanding this is the concept of functions as objects. In Python you can do this:
>>> def test(x):
...     return x + 6
...
>>> test(1)  # call test with the argument x = 1
7
>>> a = test  # assign the function "test" to a new variable "a" - this is not calling test
>>> a
<function test at 0x101cfbb90>  # printing a shows the original function name "test"
>>> a(1)  # now call that function again with the same value for x = 1
7
When makeInc is called, it returns a reference to the inner function inc, which has a different function signature (taking an argument called y). The fact that they both take a single argument is not relevant.

How to access a variable which is declared in some other function?

def main():
    x = 2

def cool():
    y = 4
    nonlocal x
    print(x)
It is showing an error: "nonlocal x" is invalid syntax. And if I don't declare it as nonlocal, it says x is undefined. So, how do I access a variable which is in some other function? How do I access the x variable which is defined in main()?
You can't.
You shouldn't. Doing so would just make it hard to read code because code from anywhere could read and modify your variables. By keeping access restricted to the one function it is easier to follow.
You do not. The variables local to a function only exist while that function is running; so once main has returned, its x does not exist. This also ties in to the fact that a separate call to the function gets a separate variable.
What you describe is a bit like reading the value of a static variable in C. The difference with static variables is that they're independent of the call; they still exist, and that makes most functions that use them non-reentrant. Sometimes this is emulated in Python by adding a default argument with a mutable value, with the same downsides.
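A minimal sketch of that mutable-default-argument emulation (the counter name is illustrative, not from the answer above):

def counter(_state=[0]):   # the default list is created once and shared across calls
    _state[0] += 1         # so it behaves like a C static variable
    return _state[0]

print(counter())  # 1
print(counter())  # 2
print(counter())  # 3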
In CPython, you actually can find out what the local variables of a pure Python function are by inspecting its code object, but their values will only exist in the call itself, normally on a call stack.
def func():
    x = 2

import dis
print func.__code__.co_varnames
dis.disassemble(func.__code__)
yields:
('x',)
2 0 LOAD_CONST 1 (2)
3 STORE_FAST 0 (x)
6 LOAD_CONST 0 (None)
9 RETURN_VALUE
So x is actually local variable 0.
I would suggest looking up a debugger for details on call stack inspection.
Once you start needing attributes on a function, turn it into a functor:
class MainFactory(object):
    def __call__(self):
        self.x = 4

main = MainFactory()  # Create the function-like object main
main()                # call it
print main.x          # inspect internal attributes
A functor in python is simply a normal object whose class implements the special __call__ method. It can have an __init__ method as well, in which you may pre-set the values of attributes if you wish. Then you can create different "flavours" of your functor by supplying different arguments when instantiating:
class MainFactory(object):
    def __init__(self, parameter=1):
        self.parameter = parameter

    def __call__(self):
        self.x = 4 * self.parameter
Now main = MainFactory(2); main() will have main.x set to 8.
Obviously, you can keep unimportant variables of the function inaccessible by simply using local variables instead of attributes of self:
def __call__(self):
    # i and p are not accessible from outside; self.x is
    for i in range(10):
        p = i ** self.parameter
        self.x += p
x = 0

def main():
    global x
    x = 2  # accesses the global x

main()
print x  # prints 2
