Unexpected behavior while importing Python modules with class definitions

Consider the code in the file my_module.py:
class A(object):
    def __init__(self, x=er()):
        self.x = x
Now, when I import this module
import my_module
I get an error:
NameError: name 'er' is not defined
While I understand that my_module does not define er, I am never creating an instance of class A. It is therefore puzzling that Python tries to execute the __init__ method when I simply import the module. The body of __init__ is not actually executed on import, though, as the example below shows:
class A(object):
    def __init__(self, x=5):
        self.x = x
        print('I am here')
Now, when I import the module, nothing is printed, which is the expected behavior.
I am puzzled why the function er is called in the first example when I do not instantiate an object of class A. Any pointers to the documentation that explains this?

Because in Python, default argument values are evaluated at definition time. See, for example, this question, or this notorious question.
This is documented here:
The default values are evaluated at the point of function definition
in the defining scope, so that
i = 5

def f(arg=i):
    print(arg)

i = 6
f()
will print 5.
Important warning: The default value is evaluated only once. This
makes a difference when the default is a mutable object such as a
list, dictionary, or instances of most classes. For example, the
following function accumulates the arguments passed to it on
subsequent calls:
def f(a, L=[]):
    L.append(a)
    return L

print(f(1))
print(f(2))
print(f(3))
This will print
[1]
[1, 2]
[1, 2, 3]
If you don’t want the default to be shared between subsequent calls,
you can write the function like this instead:
def f(a, L=None):
    if L is None:
        L = []
    L.append(a)
    return L
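Applied to the original my_module.py, the same sentinel pattern avoids calling er at import time. A minimal sketch, where er here is a hypothetical stand-in for whatever the real function is supposed to return:
def er():
    return 42  # hypothetical stand-in for the real er

class A(object):
    def __init__(self, x=None):
        if x is None:
            x = er()  # evaluated only when an instance is actually created
        self.x = x
With this version, import my_module no longer evaluates er(); it is only looked up and called when A() is instantiated.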

Related

python class attributes as standard input in a class method

I don't seem to be able to do this, but it would make sense that you could.
So maybe I just made a mistake.
class Foobar:
    def __init__(self):
        self.myatr = 0
    def add(self, someinput=self.myatr):  # <-- someinput=self.myatr???
        return someinput += 1
but you get the error
NameError: name 'self' is not defined
But it would be logical if it worked this way:
f = Foobar()
f.add() # returns 1
f.add(1) # returns 2
Instance methods are plain functions stored as class attributes; they are defined when the class is defined, before any instance exists. Likewise, the default value is set once, at definition time, not on demand when the method is called without an explicit argument.
As such, you need a sentinel (typically None) which signals that no argument was passed.
def add(self, someinput=None):
    if someinput is None:
        someinput = self.myatr
    return someinput + 1
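Putting that into the full class from the question, a minimal sketch (the starting value of myatr comes from the question's __init__):
class Foobar:
    def __init__(self):
        self.myatr = 0

    def add(self, someinput=None):
        if someinput is None:
            someinput = self.myatr  # fall back to the instance attribute
        return someinput + 1

f = Foobar()
print(f.add())   # 1
print(f.add(1))  # 2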
Default arguments are evaluated at function definition. Moreover, the names of the arguments defined earlier (like self in your function) aren't available during function definition. So when you refer to self.myatr, there's no self yet.
For example, consider this function:
>>> def test(thing=print('hello')):
...     ...
...
hello
>>>
The expression print('hello') was evaluated right when the function was defined, and it won't be re-evaluated when you call test.
Also, return someinput += 1 is an error too because assignment is not an expression.
Furthermore, integers are immutable, so rebinding the parameter inside a function does not affect the caller. If you do this:
def test(x):
    x += 1
    return x

a = 6
test(a)
a will still be equal to six, since x += 1 inside test only rebinds the local name x.
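For contrast, a small sketch (not part of the original answer) showing why the shared-default-list behaviour above is different: mutating a mutable argument in place is visible to the caller.
def extend(items):
    items.append(1)  # mutates the caller's list in place
    return items

data = []
extend(data)
print(data)  # [1] -- unlike the integer example, the caller sees the change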

Behavioural difference between decorated function and method in Python

I use the following workaround for "Pythonic static variables":
def static_vars(**kwargs):
    """decorator for functions that sets static variables"""
    def decorate(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return decorate

@static_vars(var=1)
def global_foo():
    _ = global_foo
    print _.var
    _.var += 1

global_foo()  # >>> 1
global_foo()  # >>> 2
It works just as supposed to. But when I move such a decorated function inside a class I get a strange change:
class A(object):
    @static_vars(var=1)
    def foo(self):
        bound = self.foo
        unbound = A.foo
        print bound.var  # OK, prints 1 at first call
        bound.var += 1   # AttributeError: 'instancemethod' object has no attribute 'var'

    def check(self):
        bound = self.foo
        unbound = A.foo
        print 'var' in dir(bound)
        print 'var' in dir(unbound)
        print bound.var is unbound.var  # it doesn't make much sense but anyway
a = A()
a.check() # >>> True
# >>> True
# >>> True
a.foo() # ERROR
I cannot see what causes such behaviour. It seems to me that it has something to do with the Python descriptor protocol and all that bound vs. unbound method stuff. Somehow the foo.var attribute is readable but not writable.
Any help is appreciated.
P.S. I understand that static function variables are essentially class variables and this decorator is unnecessary in the second case but the question is more for understanding the Python under the hood than to get any working solution.
a.foo doesn't return the actual function you defined; it returns a bound method that wraps the function and has self bound to a.
https://docs.python.org/3/howto/descriptor.html#functions-and-methods
That guide is a little out of date, though, since in Python 3 accessing a method through the class just returns the plain function.
So, to access the attributes on the function, you need to go through A.foo (or a.foo.__func__) instead of a.foo. Going through A.foo directly will only work in Python 3; in Python 2, I think A.foo.__func__ will work.
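Putting that together, a minimal sketch of a fixed foo, assuming Python 3 (where accessing the method through the class yields the plain function) and the static_vars decorator from above:
def static_vars(**kwargs):
    def decorate(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return decorate

class A(object):
    @static_vars(var=1)
    def foo(self):
        fn = type(self).foo  # in Python 3 this is the plain function
        # equivalently: fn = self.foo.__func__
        print(fn.var)
        fn.var += 1          # writing through the function object works

a = A()
a.foo()  # prints 1
a.foo()  # prints 2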

python TypeError: 'int' object is not callable

I have homework where we need to make something like an iterator. The function works fine, but the teacher said he will run it as t = Make_iterator() and then call t(), like this. What am I doing wrong? Thanks!
global x
x = -1

def Make_iterator(fn):
    global x
    x += 1
    return fn(x)

fn = lambda y: y*2
t = Make_iterator(fn)
print(t())
I think you want a closure, which is a function defined within the local namespace of another function, so that it can access the outer function's variables:
def make_iterator(func):
    x = -1
    def helper():
        nonlocal x
        x += 1
        return func(x)
    return helper
The nonlocal statement allows the inner function to modify the variable declared in the outer function (otherwise you'd either get an error, or you'd bind your own local variable without changing the outer one). It was only added in Python 3, so if you're still using Python 2, you'll need to wrap the x value in a mutable data structure, like a list.
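For reference, a minimal sketch of that Python 2-compatible workaround (not spelled out in the original answer): keep the counter in a one-element list so the inner function can mutate it without nonlocal.
def make_iterator(func):
    state = [-1]  # mutable container instead of a nonlocal variable
    def helper():
        state[0] += 1
        return func(state[0])
    return helper

t = make_iterator(lambda y: y * 2)
print(t())  # 0
print(t())  # 2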
Another approach to the same idea is to write a class rather than a function. An instance of a class can be callable (just like a function) if the class defines a __call__ method:
class MyIterator(object):
    def __init__(self, func):
        self.index = -1
        self.func = func

    def __call__(self):
        self.index += 1
        return self.func(self.index)
This can be useful if the state you need to keep track of is more complicated (or should change in more complicated ways) than the simple integer index used in this example. It also works in Python 2 without annoying workarounds.
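A short usage sketch of that callable class, using the same doubling function as the question:
fn = lambda y: y * 2
t = MyIterator(fn)
print(t())  # 0
print(t())  # 2
print(t())  # 4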
I think he wants your Make_iterator function to return a function that acts as an iterator. So you could wrap the contents of your current Make_iterator function within an inner function f and return that:
def Make_iterator(fn):
    def f():
        global x
        x += 1
        return fn(x)
    return f
Now if you do t = Make_iterator(fn), every time you call t() it will return the next value of the iterator, in your case 0, 2, 4, 6, 8, etc...

Generate functions without closures in python

Right now I'm using closures to generate functions, as in this simplified example:
def constant_function(constant):
    def dummyfunction(t):
        return constant
    return dummyfunction
These generated functions are then passed to the __init__ method of a custom class which stores them as instance attributes. The disadvantage is that this makes the class instances unpicklable. So I'm wondering if there is a way to create function generators without closures.
You could use a callable class:
class ConstantFunction(object):
    def __init__(self, constant):
        self.constant = constant
    def __call__(self, t):
        return self.constant

def constant_function(constant):
    return ConstantFunction(constant)
The closure state of your function is then transferred to an instance attribute instead.
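Since the motivation was pickling, a quick sketch showing that such an instance pickles with the standard library (assuming ConstantFunction is defined at module level, as above):
import pickle

f = constant_function(42)
print(f(0))             # 42
data = pickle.dumps(f)  # works; the nested dummyfunction from the question would fail here
g = pickle.loads(data)
print(g(123))           # 42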
Not that I'd recommend this for general use… but there's an alternate approach of compiling and exec'ing the code. It generates a function without a closure.
>>> def doit(constant):
...     constant = "def constant(t):\n return %s" % constant
...     return compile(constant, '<string>', 'exec')
...
>>> exec doit(1)
>>> constant(4)
1
>>> constant
Note that to do this inside an enclosing function or class (i.e. not in the global namespace) you have to also pass in the appropriate namespace to exec. See: https://docs.python.org/2/reference/simple_stmts.html#the-exec-statement
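As a hedged sketch of that namespace point, in Python 3 syntax (where exec is a function), you can pass an explicit dict so the generated function lands there instead of in the caller's globals:
def doit(constant):
    src = "def constant(t):\n    return %s" % constant
    return compile(src, '<string>', 'exec')

namespace = {}
exec(doit(1), namespace)         # the generated function is created inside this dict
print(namespace['constant'](4))  # prints 1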
There's also the double lambda approach, which is not really a closure, well, sort of…
>>> f = lambda x: lambda y:x
>>> g = f(1)
>>> g(4)
1
>>> import dill
>>> _g = dill.dumps(g)
>>> g_ = dill.loads(_g)
>>> g_(5)
1
You seemed worried about the ability to pickle closure-like objects, so you can see even the double lambdas are pickleable if you use dill. The same for class instances.

Default value of a member function in Python

class foo():
    def __init__(self, a):
        self.A = a
    def foo2(self, aa=self.A):
        return "la"
    @classmethod
    def test(cls):
        d = foo()
        d.foo2()
The member function foo2 cannot find self.A or A. Is it because A is not set globally inside the class?
Keyword arguments such as aa cannot take their default value from self, because default values are evaluated when the method is defined, not when it is called. Typically one achieves what you're trying by setting the default of aa to None:
class foo():
    def __init__(self, a):
        self.A = a

    def foo2(self, aa=None):
        if aa is None:
            aa = self.A
        return 'la'
Note also that since keyword argument defaults are evaluated at definition, not execution, all invocations of foo2 share their default argument even if called from different instances of foo. This often trips up new Python programmers when working with methods such as:
def footoo(a=list()):
    a.append(1)
    return a
All calls to footoo will get the same list object; not a new one at each call. So calling footoo repeatedly will result in the following:
>>> footoo()
[1]
>>> footoo()
[1, 1]
>>> footoo()
[1, 1, 1]
You are right, you can't do that.
You also have to be very careful with mutable defaults, because Python will use the same list over and over (in this case, it will keep appending to the same list):
def foo2(self, aa=[]):
    aa.append('foo')
    return "la"
Instead, a very common approach is to assign None as the default value, and then have an if-statement to set it inside the function:
def foo2(self, aa=None):
    if not aa:
        aa = self.A
    return "la"
The error occurs when the default value is evaluated. This happens when the class is being defined (not when the method is called). At that point in time, self has no meaning.
One way to fix this is like so:
...
    def foo2(self, aa=None):
        if aa is None:
            aa = self.A
        return "la"
...
