Remembered values, and scope gone from memory, in a Python Closure - python

Below is a simple piece of code I found in this tutorial.
Here's a nice definition of Closure I found here: "a function object that remembers values in enclosing scopes regardless of whether those scopes are still present in memory."
I gather that rotate() below is a closure. Please help me understand what values it is remembering even after their scope is gone from memory (and why does their scope leave memory)?
def make_rotater(seq):
    def rotate():
        val = seq.pop(0)
        seq.append(val)
        return val
    return rotate

r = make_rotater([1,2,3])
r()
# 1
r()
# 2
(Update) Part 2: Why does the (closure-less) code below not work?
def make_rotater(seq):
    val = seq.pop(0)
    seq.append(val)
    return val

r = make_rotater([1,2,3])
r()
# File "<stdin>", line 1, in <module>
# TypeError: 'int' object is not callable

It remembers local values from make_rotater, so if you do:
def make_rotater():
    seq = [1,2,3]
    def rotate():
        val = seq.pop(0)
        seq.append(val)
        return val
    return rotate
The seq list is referenced by rotate, so it will remain in memory for when you call rotate, even though it was defined in make_rotater (which has already finished and been cleaned from memory).
When you call make_rotater, it creates a new seq and defines the rotate function that references seq, so once you leave make_rotater its memory isn't needed except for seq (because rotate still uses it). When you no longer reference rotate, seq will also be cleaned up.
Part 2:
Your function now doesn't return another function; it returns the number directly, so you don't need to call r().
You can use it like this:
seq = [1,2,3]

def make_rotater(seq):
    val = seq.pop(0)
    seq.append(val)
    return val

r = make_rotater(seq)
print r # prints 1, note there is no r() just r
r = make_rotater(seq)
print r # prints 2
r = make_rotater(seq)
print r # prints 3

That definition is sort of right, sort of wrong. It depends on what you mean by a scope being "still present in memory". I'd say a better definition would be "a function object that remembers variables in enclosing scopes regardless of whether those scopes are still present on the call stack."
When you call make_rotater:
def make_rotater(seq):
    def rotate():
        val = seq.pop(0)
        seq.append(val)
        return val
    return rotate
The rotate closure keeps the seq variable alive even after execution leaves the scope of make_rotater. Ordinarily, when execution leaves a function, its local variables would cease to exist.
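You can see this by inspecting the function's __closure__ attribute (a minimal sketch; the cell holds the very list object that was passed to make_rotater):

r = make_rotater([1, 2, 3])
# rotate's closure cell keeps the list alive even though make_rotater has returned
print(r.__closure__[0].cell_contents)  # [1, 2, 3]
r(); r()
print(r.__closure__[0].cell_contents)  # [3, 1, 2] after two rotations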

Related

Decorator fundamentals (Closure) , how/why are function local arguments reusable without passing them in again? [duplicate]

I have seen and used nested functions in Python, and they match the definition of a closure. So why are they called "nested functions" instead of "closures"?
Are nested functions not closures because they are not used by the external world?
UPDATE: I was reading about closures and it got me thinking about this concept with respect to Python. I searched and found the article mentioned by someone in a comment below, but I couldn't completely understand the explanation in that article, so that is why I am asking this question.
A closure occurs when a function has access to a local variable from an enclosing scope that has finished its execution.
def make_printer(msg):
    def printer():
        print(msg)
    return printer

printer = make_printer('Foo!')
printer()
When make_printer is called, a new frame is put on the stack with the compiled code for the printer function as a constant and the value of msg as a local. It then creates and returns the function. Because the function printer references the msg variable, it is kept alive after the make_printer function has returned.
So, if your nested functions don't
1. access variables that are local to enclosing scopes, and
2. do so when they are executed outside of that scope,
then they are not closures.
Here's an example of a nested function which is not a closure.
def make_printer(msg):
    def printer(msg=msg):
        print(msg)
    return printer

printer = make_printer("Foo!")
printer() # Output: Foo!
Here, we are binding the value to the default value of a parameter. This occurs when the function printer is created and so no reference to the value of msg external to printer needs to be maintained after make_printer returns. msg is just a normal local variable of the function printer in this context.
The question has already been answered by aaronasterling
However, someone might be interested in how the variables are stored under the hood.
Before coming to the snippet:
Closures are functions that inherit variables from their enclosing environment. When you pass a function callback as an argument to another function that will do I/O, this callback function will be invoked later, and this function will — almost magically — remember the context in which it was declared, along with all the variables available in that context.
If a function does not use free variables, it doesn't form a closure.
If there is another inner level which uses free variables, all previous levels save the lexical environment (example at the end).
The function attribute func_closure in Python 2 (renamed __closure__ in Python 3) saves the free variables.
Every function in Python has this closure attribute, but if there are no free variables, it is empty (None).
Example: the closure attribute exists, but it has no content because there is no free variable.
>>> def foo():
...     def fii():
...         pass
...     return fii
...
>>> f = foo()
>>> f.func_closure
>>> 'func_closure' in dir(f)
True
>>>
NB: A free variable is a must to create a closure.
I will explain using the same snippet as above:
>>> def make_printer(msg):
...     def printer():
...         print msg
...     return printer
...
>>> printer = make_printer('Foo!')
>>> printer() # Output: Foo!
And all Python functions have a closure attribute so let's examine the enclosing variables associated with a closure function.
Here is the attribute func_closure for the function printer
>>> 'func_closure' in dir(printer)
True
>>> printer.func_closure
(<cell at 0x108154c90: str object at 0x108151de0>,)
>>>
The closure attribute returns a tuple of cell objects which contain details of the variables defined in the enclosing scope. func_closure is either None or a tuple of cells that contain bindings for the function's free variables, and it is read-only.
>>> dir(printer.func_closure[0])
['__class__', '__cmp__', '__delattr__', '__doc__', '__format__', '__getattribute__',
'__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__',
'__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'cell_contents']
>>>
In the above output you can see cell_contents; let's see what it stores:
>>> printer.func_closure[0].cell_contents
'Foo!'
>>> type(printer.func_closure[0].cell_contents)
<type 'str'>
>>>
So, when we call the function printer(), it accesses the value stored inside cell_contents. This is how we got the output 'Foo!'.
Again I will explain using the above snippet with some changes:
>>> def make_printer(msg):
...     def printer():
...         pass
...     return printer
...
>>> printer = make_printer('Foo!')
>>> printer.func_closure
>>>
In the above snippet, I didn't print msg inside the printer function, so it doesn't create any free variable. As there is no free variable, there is no content inside the closure. That's exactly what we see above.
Now I will explain a different snippet to clear up everything about free variables and closures:
>>> def outer(x):
...     def intermediate(y):
...         free = 'free'
...         def inner(z):
...             return '%s %s %s %s' % (x, y, free, z)
...         return inner
...     return intermediate
...
>>> outer('I')('am')('variable')
'I am free variable'
>>>
>>> inter = outer('I')
>>> inter.func_closure
(<cell at 0x10c989130: str object at 0x10c831b98>,)
>>> inter.func_closure[0].cell_contents
'I'
>>> inn = inter('am')
So we see that the func_closure property is a tuple of closure cells; we can refer to them and their contents explicitly -- a cell has the property cell_contents:
>>> inn.func_closure
(<cell at 0x10c9807c0: str object at 0x10c9b0990>,
<cell at 0x10c980f68: str object at 0x10c9eaf30>,
<cell at 0x10c989130: str object at 0x10c831b98>)
>>> for i in inn.func_closure:
... print i.cell_contents
...
free
am
I
>>>
Here, when we call inn, it refers to all the saved free variables, so we get 'I am free variable':
>>> inn('variable')
'I am free variable'
>>>
Python has weak support for closures. To see what I mean, take the following example of a counter using a closure in JavaScript:
function initCounter(){
    var x = 0;
    function counter () {
        x += 1;
        console.log(x);
    };
    return counter;
}

count = initCounter();
count(); //Prints 1
count(); //Prints 2
count(); //Prints 3
Closures are quite elegant since they give functions written like this the ability to have "internal memory". As of Python 2.7 this is not possible. If you try
def initCounter():
    x = 0;
    def counter ():
        x += 1 ##Error, x not defined
        print x
    return counter

count = initCounter();
count(); ##Error
count();
count();
You'll get an error saying that x is not defined. But how can that be, if it has been shown by others that you can print it? This is because of how Python manages a function's variable scope. While the inner function can read the outer function's variables, it cannot write to them.
This is a shame really. But with just read-only closure you can at least implement the function decorator pattern for which Python offers syntactic sugar.
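For instance, here is a minimal sketch of a decorator that only reads its free variable (the wrapped function fn), so no write access to the enclosing scope is needed; the names shout and greet are purely illustrative:

def shout(fn):                  # fn is a free variable of wrapper, read-only
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        return result.upper()   # only fn is read from the enclosing scope
    return wrapper

@shout
def greet(name):
    return 'hello, %s' % name

print(greet('world'))  # HELLO, WORLD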
Update
As has been pointed out, there are ways to deal with Python's scope limitations, and I'll describe some.
1. Use the global keyword (in general not recommended).
2. In Python 3.x, use the nonlocal keyword (suggested by @unutbu and @leewz).
3. Define a simple modifiable class Object
class Object(object):
    pass
and create an Object scope within initCounter to store the variables
def initCounter ():
    scope = Object()
    scope.x = 0
    def counter():
        scope.x += 1
        print scope.x
    return counter
Since scope is really just a reference, actions taken with its fields do not really modify scope itself, so no error arises.
4. An alternative way, as @unutbu pointed out, would be to define each variable as a list (x = [0]) and modify its first element (x[0] += 1). Again no error arises because x itself is not rebound (see the sketch after this list).
5. As suggested by @raxacoricofallapatorius, you could make x an attribute of counter:
def initCounter ():
    def counter():
        counter.x += 1
        print counter.x
    counter.x = 0
    return counter
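Here is a minimal sketch of option 4, wrapping the count in a one-element list so the inner function only mutates the list instead of rebinding a name:

def initCounter():
    x = [0]                # mutable container holding the count
    def counter():
        x[0] += 1          # mutates the list in place; x itself is never rebound
        print(x[0])
    return counter

count = initCounter()
count()  # 1
count()  # 2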
Python 2 didn't have closures - it had workarounds that resembled closures.
There are plenty of examples in answers already given - copying in variables to the inner function, modifying an object on the inner function, etc.
In Python 3, support is more explicit - and succinct:
def closure():
    count = 0
    def inner():
        nonlocal count
        count += 1
        print(count)
    return inner
Usage:
start = closure()
another = closure() # another instance, with a different stack
start() # prints 1
start() # prints 2
another() # prints 1
start() # prints 3
The nonlocal keyword binds the inner function to the outer variable explicitly mentioned, in effect enclosing it. Hence more explicitly a 'closure'.
I had a situation where I needed a separate but persistent namespace. I used a class (I don't otherwise). Segregated but persistent names are closures.
>>> class f2:
...     def __init__(self):
...         self.a = 0
...     def __call__(self, arg):
...         self.a += arg
...         return(self.a)
...
>>> f=f2()
>>> f(2)
2
>>> f(2)
4
>>> f(4)
8
>>> f(8)
16
# **OR**
>>> f=f2() # **re-initialize**
>>> f(f(f(f(2)))) # **nested**
16
# handy in list comprehensions to accumulate values
>>> [f(i) for f in [f2()] for i in [2,2,4,8]][-1]
16
def nested1(num1):
    print "nested1 has", num1
    def nested2(num2):
        print "nested2 has", num2, "and it can reach to", num1
        return num1 + num2  # num1 referenced for reading here
    return nested2
Gives:
In [17]: my_func=nested1(8)
nested1 has 8
In [21]: my_func(5)
nested2 has 5 and it can reach to 8
Out[21]: 13
This is an example of what a closure is and how it can be used.
People are confused about what a closure is. The closure is not the inner function; "closure" means the act of closing. The inner function closes over a nonlocal variable, which is called a free variable.
def counter_in(initial_value=0):
    # initial_value is the free variable
    def inc(increment=1):
        nonlocal initial_value
        initial_value += increment
        print(initial_value)
    return inc
When you call counter_in(), it returns the inc function, which has a free variable initial_value. So we have created a CLOSURE. People call inc a closure function, and I think this is what confuses people into thinking "OK, inner functions are closures." In reality inc is not the closure; it is part of the closure, but to make life easy people call it the closure function.
myClosingOverFunc=counter_in(2)
This returns the inc function, which closes over the free variable initial_value. When you invoke myClosingOverFunc:
myClosingOverFunc()
it will print 3 (the stored 2 plus the default increment of 1).
When Python sees that a closure exists, it creates a new object called a CELL. The cell is associated with the free variable, initial_value in this case, and it points to another object which stores the value of initial_value.
In our example, initial_value in both the outer function and the inner function points to this cell object, and the cell object points to the value of initial_value.
variable initial_value =====>> CELL ==========>> value of initial_value
So when counter_in returns and its scope is gone, it does not matter, because the variable initial_value directly references the CELL object, and the cell indirectly references the value of initial_value. That is why, even though the scope of the outer function is gone, the inner function still has access to the free variable.
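A small sketch of inspecting that cell (using the counter_in example above):

inc = counter_in(2)
print(inc.__code__.co_freevars)           # ('initial_value',)
print(inc.__closure__[0].cell_contents)   # 2
inc()                                     # prints 3
print(inc.__closure__[0].cell_contents)   # 3 -- the cell now refers to the new value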
Let's say I want to write a function which takes another function as an argument and counts how many times that function is called.
def counter(fn):
    # since cnt is a free variable, Python will create a cell and this cell will point to the value of cnt
    # every time cnt changes, the cell will point to the new value
    cnt = 0
    def inner(*args, **kwargs):
        # we cannot modify cnt without nonlocal
        nonlocal cnt
        cnt += 1
        print(f'{fn.__name__} has been called {cnt} times')
        # we are calling fn indirectly via the closure inner
        return fn(*args, **kwargs)
    return inner
In this example cnt is our free variable, and inner plus cnt create a CLOSURE. When Python sees this it creates a CELL object; cnt always references that cell object directly, and the cell references the object in memory which stores the value of cnt. Initially cnt = 0.
cnt ======>>>> CELL =============> 0
When you invoke the inner function by calling counter(myFunc)(), cnt is increased by 1, so our referencing scheme changes as follows:
cnt ======>>>> CELL =============> 1 #first counter(myFunc)()
cnt ======>>>> CELL =============> 2 #second counter(myFunc)()
cnt ======>>>> CELL =============> 3 #third counter(myFunc)()
This is only one instance of a closure. You can create multiple closure instances by passing another function:
counter(differentFunc)()
This creates a different CELL object from the one above; we have just created another closure instance.
cnt ======>> difCELL ========> 1 #first counter(differentFunc)()
cnt ======>> difCELL ========> 2 #second counter(differentFunc)()
cnt ======>> difCELL ========> 3 #third counter(differentFunc)()
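A usage sketch of the counter above (myFunc here is just a hypothetical example function):

def myFunc(a, b):
    return a + b

counted = counter(myFunc)   # one closure instance with its own cnt cell
counted(1, 2)               # prints: myFunc has been called 1 times
counted(3, 4)               # prints: myFunc has been called 2 times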
I'd like to offer another simple comparison between Python and JS, in case it helps make things clearer.
JS:
function make () {
    var cl = 1;
    function gett () {
        console.log(cl);
    }
    function sett (val) {
        cl = val;
    }
    return [gett, sett]
}
and executing:
a = make(); g = a[0]; s = a[1];
s(2); g(); // 2
s(3); g(); // 3
Python:
def make ():
    cl = 1
    def gett ():
        print(cl)
    def sett (val):
        cl = val
    return gett, sett
and executing:
g, s = make()
g() #1
s(2); g() #1
s(3); g() #1
Reason: As many others said above, in python, if there is an assignment in the inner scope to a variable with the same name, a new reference in the inner scope is created. Not so with JS, unless you explicitly declare one with the var keyword.
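A sketch of how the Python version can be made to behave like the JS one, using the nonlocal keyword (Python 3):

def make ():
    cl = 1
    def gett ():
        print(cl)
    def sett (val):
        nonlocal cl   # rebind the enclosing cl instead of creating a new local
        cl = val
    return gett, sett

g, s = make()
g()        # 1
s(2); g()  # 2
s(3); g()  # 3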
For the readers of Structure and Interpretation of Computer Programs (SICP): there are 2 unrelated meanings of closure (CS VS Math), see Wikipedia for the latter/less common one:
Sussman and Abelson also use the term closure in the 1980s with a second, unrelated meaning: the property of an operator that adds data to a data structure to also be able to add nested data structures. This usage of the term comes from the mathematics usage rather than the prior usage in computer science. The authors consider this overlap in terminology to be "unfortunate."
https://en.wikipedia.org/wiki/Closure_(computer_programming)#History_and_etymology
The second (mathematical) meaning is also used in SICP in Python, see for example the discussion of tuples
Our ability to use tuples as the elements of other tuples provides a new means of combination in our programming language. We call the ability for tuples to nest in this way a closure property of the tuple data type. In general, a method for combining data values satisfies the closure property if the result of combination can itself be combined using the same method.
2.3 Sequences | SICP in Python
Here is a way to identify whether a function is a closure or not via code objects.
As already mentioned in other answers, not every nested function is a closure. Given a composite function (which represents the overall action), its intermediate stages can each be either a closure or a plain nested function.
A closure is a kind of function which is "parametrized" by its (non-empty) enclosing scope, the space of free variables. Notice that a composite function may be made of both types.
Python's internal code object type represents the compiled function body. Its attributes co_cellvars and co_freevars can be used to "look around" the closure/scope of a function.
As mentioned in the doc
co_freevars: tuple of names of free variables (referenced via a function’s closure)
co_cellvars: tuple of names of cell variables (referenced by containing scopes).
By performing successive calls, each step returns a partial function with its own __closure__ (hence cell_contents) and the lists of free variables from its closure and in its scope.
Let's introduce some support functions:
# the "lookarounds"
def free_vars_from_closure_of(f):
print(f.__name__, 'free vars from its closure', f.__code__.co_cellvars)
def free_vars_in_scopes_of(f):
print(f.__name__, 'free vars in its scope ', f.__code__.co_freevars)
# read cells values
def cell_content(f):
if f.__closure__ is not None:
if len(f.__closure__) == 1: # otherwise problem with join
c = f.__closure__[0].cell_contents
else:
c = ','.join(str(c.cell_contents) for c in f.__closure__)
else:
c = None
print(f'cells of {f.__name__}: {c}')
Here is an example from another answer, rewritten in a more systematic way:
def f1(x1):
    def f2(x2):
        a = 'free'  # <- better choose different identifier to avoid confusion
        def f3(x3):
            return '%s %s %s %s' % (x1, x2, a, x3)
        return f3
    return f2
# partial functions
p1 = f1('I')
p2 = p1('am')

# lookaround
for p in (f1, p1, p2):
    free_vars_in_scopes_of(p)
    free_vars_from_closure_of(p)
    cell_content(p)
Output
f1 free vars in its scope () # <- because it's the most outer function
f1 free vars from its closure ('x1',)
cells of f1: None
f2 free vars in its scope ('x1',)
f2 free vars from its closure ('a', 'x2')
cells of f2: I
f3 free vars in its scope ('a', 'x1', 'x2')
f3 free vars from its closure () # <- because it's the most inner function
cells of f3: free, I, am
The lambda counterpart:
def g1(x1):
    return lambda x2, a='free': lambda x3: '%s %s %s %s' % (x1, x2, a, x3)
From the point of view of free variables/scoping, the two are equivalent. The only minor differences are the values of some attributes of the code object:
co_varnames, co_consts, co_code, co_lnotab, co_stacksize... and naturally the __name__ attribute.
A mixed example, with closures and non-closures at once:
# example: counter
def h1():               # <- not a closure
    c = 0
    def h2(c=c):        # <- not a closure
        def h3(x):      # <- closure
            def h4():   # <- closure
                nonlocal c
                c += 1
                print(c)
            return h4
        return h3
    return h2
# partial functions
p1 = h1()
p2 = p1()
p3 = p2('X')
p1() # do nothing
p2('X') # do nothing
p2('X') # do nothing
p3() # +=1
p3() # +=1
p3() # +=1
# lookaround
for p in (h1, p1, p2, p3):
    free_vars_in_scopes_of(p)
    #free_vars_from_closure_of(p)
    cell_content(p)
Output
1 X
2 X
3 X
h1 free vars in its scope ()
cells of h1: None
h2 free vars in its scope ()
cells of h2: None
h3 free vars in its scope ('c',)
cells of h3: 3
h4 free vars in its scope ('c', 'x')
cells of h4: 3,X
h1 and h2 are both not closures, since they have no cells and no free variables in their scope.
h3 and h4 are closures and share (in this case) the same cell and free variable for c. h4 has a further free variable x with its own cell.
Final considerations:
The __closure__ attribute and __code__.co_freevars can be used to check the values and names (identifiers) of the free variables.
There is a loose anti-analogy between nonlocal and __code__.co_cellvars: nonlocal acts towards the outer function, while __code__.co_cellvars points towards the inner function.

Python: the function locals() was changed by just calling it

Please compare the two pieces of code below; the only difference I can find is whether locals() is printed or not, yet one of them fails.
Please help me, thanks.
import numpy as np

class Solution:
    def solve(self, f, a, b, n):
        x = np.linspace(a, b, n)
        # print(locals())
        loc = locals()
        fstr = '''
def fun(x):
    return %s
''' % f
        exec(fstr)
        # print(locals())
        fun = loc['fun']
        y = fun(x)
        print(x, y, sep='\n')

a = Solution()
a.solve('x+1', -5, 5, 5)
In this code, I didn't print locals(), and it works.
If I only print locals() and put a '#' in front of "fun = loc['fun']" and "y = fun(x)", there is a key named 'fun' in the output of locals().
import numpy as np

class Solution:
    def solve(self, f, a, b, n):
        x = np.linspace(a, b, n)
        # print(locals())
        loc = locals()
        fstr = '''
def fun(x):
    return %s
''' % f
        exec(fstr)
        print(locals())
        fun = loc['fun']
        y = fun(x)
        print(x, y, sep='\n')

a = Solution()
a.solve('x+1', -5, 5, 5)
But in this code, I can't find the key named 'fun' in the output of locals():
Traceback (most recent call last):
File "tmp.py", line 20, in <module>
a.solve('x+1',-5,5,5)
File "tmp.py", line 15, in solve
fun = loc['fun']
KeyError: 'fun'
All of this seems to say that "fun = loc['fun']" and "y = fun(x)" determine the output of locals(), but I think it should be impossible in Python for later code to change what earlier code does.
Yeah, that happens with locals(). locals() is confusing and not well documented.
Calling locals() repeatedly in the same stack frame returns the same dict every time, and every call to locals() updates that dict with the current values of local (or closure) variables. The dict is attached to the stack frame as its f_locals attribute, and accessing that attribute will also update the dict.
To use locals() safely without the values changing unpredictably, you should copy the returned dict:
current_locals = locals().copy()
Otherwise, even running your code in a debugger could change its behavior, since debuggers typically access f_locals to inspect local variables.
Also, trying to exec code that assigns any variables in a local scope is officially unsupported and behaves weirdly, and def counts as an assignment. You shouldn't use exec for this anyway.
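A minimal sketch of the difference (the shared-dict behavior described above applies to CPython before 3.13, where PEP 667 changed the semantics of locals()):

def demo():
    a = 1
    snapshot = locals().copy()  # {'a': 1} -- independent copy, safe to keep
    live = locals()             # the frame's shared dict (pre-3.13 CPython)
    b = 2
    locals()                    # refreshes the shared dict with current values
    print(sorted(snapshot))     # ['a']
    print('b' in snapshot, 'b' in live)  # False True on CPython < 3.13

demo()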

Function that returns an accumulator in Python

I am reading Hackers and Painters and am confused by a problem mentioned by the author to illustrate the power of different programming languages.
The problem is:
We want to write a function that generates accumulators—a function that takes a number n, and returns a function that takes another number i and returns n incremented by i. (That’s incremented by, not plus. An accumulator has to accumulate.)
The author mentions several solutions with different programming languages. For example, Common Lisp:
(defun foo (n)
  (lambda (i) (incf n i)))
and JavaScript:
function foo(n) { return function (i) { return n += i } }
However, when it comes to Python, the following code does not work:
def foo(n):
    s = n
    def bar(i):
        s += i
        return s
    return bar

f = foo(0)
f(1) # UnboundLocalError: local variable 's' referenced before assignment
A simple modification will make it work:
def foo(n):
    s = [n]
    def bar(i):
        s[0] += i
        return s[0]
    return bar
I am new to Python. Why does the first solution not work while the second one does? The author mentions lexical variables, but I still don't get it.
s += i is just sugar for s = s + i.*
This means you assign a new value to the variable s (instead of mutating it in place). When you assign to a variable, Python assumes it is local to the function. However, before assigning it needs to evaluate s + i, but s is local and still unassigned -> Error.
In the second case s[0] += i you never assign to s directly, but only ever access an item from s. So Python can clearly see that it is not a local variable and goes looking for it in the outer scope.
Finally, a nicer alternative (in Python 3) is to explicitly tell it that s is not a local variable:
def foo(n):
    s = n
    def bar(i):
        nonlocal s
        s += i
        return s
    return bar
(There is actually no need for s - you could simply use n instead inside bar.)
*The situation is slightly more complex, but the important issue is that computation and assignment are performed in two separate steps.
An infinite generator is one implementation. You can call __next__ on a generator instance to extract successive results iteratively.
def incrementer(n, i):
    while True:
        n += i
        yield n

g = incrementer(2, 5)
print(g.__next__()) # 7
print(g.__next__()) # 12
print(g.__next__()) # 17
If you need a flexible incrementer, one possibility is an object-oriented approach:
class Inc(object):
    def __init__(self, n=0):
        self.n = n
    def incrementer(self, i):
        self.n += i
        return self.n

g = Inc(2)
g.incrementer(5) # 7
g.incrementer(3) # 10
g.incrementer(7) # 17
In Python, arguments are passed as references to objects, but assigning to a name inside a function only rebinds that local name; it does not affect the caller's variable. Mutating a mutable object such as a list, however, is visible through every reference to it, including the one outside the function.
That is the reason the second option works (it mutates the list in place) while the first option doesn't (it tries to rebind s, which makes s local and triggers the error).
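A small sketch of rebinding versus mutation (illustrative names):

def rebind(v):
    v = v + 1        # rebinds the local name only

def mutate(lst):
    lst[0] += 1      # mutates the shared list object in place

n = 0
rebind(n)
print(n)      # 0 -- the caller's variable is unchanged

nums = [0]
mutate(nums)
print(nums)   # [1] -- the change is visible outside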


python TypeError: 'int' object is not callable

I have homework where we need to make something like an iterator. The function works fine, but the teacher said he runs the function as t = Make_iterator(...) and then calls t(), like below. What am I doing wrong? Thanks!
global x
x = -1

def Make_iterator(fn):
    global x
    x += 1
    return fn(x)

fn = lambda y: y*2
t = Make_iterator(fn)
print(t())
I think you want a closure, which is a function defined within the local namespace of another function, so that it can access the outer function's variables:
def make_iterator(func):
    x = -1
    def helper():
        nonlocal x
        x += 1
        return func(x)
    return helper
The nonlocal statement allows the inner function to modify the variable declared in the outer function (otherwise you'd either get an error, or you'd bind your own local variable without changing the outer one). It was only added in Python 3, so if you're still using Python 2, you'll need to wrap the x value in a mutable data structure, like a list.
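A quick usage sketch of this closure version:

t = make_iterator(lambda y: y * 2)
print(t())  # 0
print(t())  # 2
print(t())  # 4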
Another approach to the same idea is to write class, rather than a function. An instance of a class can be callable (just like a function) if the class defines a __call__ method:
class MyIterator(object):
    def __init__(self, func):
        self.index = -1
        self.func = func
    def __call__(self):
        self.index += 1
        return self.func(self.index)
This can be useful if the state you need to keep track of is more complicated (or should change in more complicated ways) than the simple integer index used in this example. It also works in Python 2 without annoying workarounds.
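And the matching usage sketch for the class version:

t = MyIterator(lambda y: y * 2)
print(t())  # 0
print(t())  # 2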
I think he wants your Make_iterator function to return a function that acts as an iterator. So you could wrap the contents of your current Make_iterator function within an inner function f and return that:
def Make_iterator(fn):
    def f():
        global x
        x += 1
        return fn(x)
    return f
Now if you do t = Make_iterator(fn), every time you call t() it will return the next value of the iterator, in your case 0, 2, 4, 6, 8, etc...
