Generate functions without closures in Python

Right now I'm using closures to generate functions, as in this simplified example:
def constant_function(constant):
    def dummyfunction(t):
        return constant
    return dummyfunction
These generated functions are then passed to the __init__ method of a custom class, which stores them as instance attributes. The disadvantage is that this makes the class instances unpickleable. So I'm wondering if there is a way to create function generators that avoids closures.

You could use a callable class:
class ConstantFunction(object):
    def __init__(self, constant):
        self.constant = constant
    def __call__(self, t):
        return self.constant

def constant_function(constant):
    return ConstantFunction(constant)
The closure state of your function is then transferred to an instance attribute instead.
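For example (a quick sketch using the definitions above), the instances behave like the generated functions and, unlike closures, they pickle cleanly as long as ConstantFunction lives at module level:
import pickle

f = constant_function(5)
print(f(123))                      # 5, whatever the argument is

g = pickle.loads(pickle.dumps(f))  # works, unlike a pickled closure
print(g(0))                        # 5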

Not that I'd recommend this for general use, but there's an alternative approach: compiling and exec'ing the code. It generates a function without a closure.
>>> def doit(constant):
...     constant = "def constant(t):\n return %s" % constant
...     return compile(constant, '<string>', 'exec')
...
>>> exec doit(1)
>>> constant(4)
1
>>> constant
Note that to do this inside an enclosing function or class (i.e. not in the global namespace) you also have to pass the appropriate namespace to exec. See: https://docs.python.org/2/reference/simple_stmts.html#the-exec-statement
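For example, a rough sketch (the make_constant name is made up) that execs the generated source into an explicit namespace dict and pulls the function back out:
def make_constant(value):
    src = "def constant(t):\n    return %r" % (value,)
    namespace = {}
    exec(src, namespace)          # the dict acts as the globals for the compiled code
    return namespace['constant']

f = make_constant(1)
f(4)  # 1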
There's also the double lambda approach, which is not really a closure, well, sort of…
>>> f = lambda x: lambda y:x
>>> g = f(1)
>>> g(4)
1
>>> import dill
>>> _g = dill.dumps(g)
>>> g_ = dill.loads(_g)
>>> g_(5)
1
You seemed worried about the ability to pickle closure-like objects; as you can see, even the double lambdas are picklable if you use dill. The same goes for class instances.

Related

Behavioural difference between decorated function and method in Python

I use the following workaround for "Pythonic static variables":
def static_vars(**kwargs):
    """decorator for functions that sets static variables"""
    def decorate(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return decorate
@static_vars(var=1)
def global_foo():
    _ = global_foo
    print _.var
    _.var += 1

global_foo()  # >>> 1
global_foo()  # >>> 2
It works just as it's supposed to. But when I move such a decorated function inside a class, I see a strange change:
class A(object):
    @static_vars(var=1)
    def foo(self):
        bound = self.foo
        unbound = A.foo
        print bound.var  # OK, prints 1 at the first call
        bound.var += 1   # AttributeError: 'instancemethod' object has no attribute 'var'
    def check(self):
        bound = self.foo
        unbound = A.foo
        print 'var' in dir(bound)
        print 'var' in dir(unbound)
        print bound.var is unbound.var  # it doesn't make much sense but anyway

a = A()
a.check()  # >>> True
           # >>> True
           # >>> True
a.foo()    # ERROR
I cannot see what causes this behaviour. It seems to me that it has something to do with the Python descriptor protocol, all that bound vs. unbound method stuff. Somehow the foo.var attribute is readable but not writable.
Any help is appreciated.
P.S. I understand that static function variables are essentially class variables and this decorator is unnecessary in the second case but the question is more for understanding the Python under the hood than to get any working solution.
a.foo doesn't return the actual function you defined; it returns a bound method of it, which wraps up the function and has self assigned to a.
https://docs.python.org/3/howto/descriptor.html#functions-and-methods
That guide is a little out of date, though, since in Python 3 unbound methods are gone and accessing foo on the class just returns the plain function.
So, to set attributes on the function, you need to go through A.foo (or a.foo.__func__) instead of a.foo. Using A.foo directly only works in Python 3; in Python 2, A.foo is still an unbound method, so A.foo.__func__ (or a.foo.__func__) is what works.
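A minimal sketch of that fix applied to the question's example (assuming the static_vars decorator from above; Python 2 print syntax to match the question):
class A(object):
    @static_vars(var=1)
    def foo(self):
        func = self.foo.__func__   # the plain function underneath the bound method
        print func.var
        func.var += 1              # the attribute lives on the function object

a = A()
a.foo()  # 1
a.foo()  # 2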

Which information is returned when I type the name of a function in the Python interpreter?

I am trying to learn more about the attributes of Python objects. To that end, I would like to change the output I see in the Python interpreter when I enter the name of an object (in my particular case a function) in Python. How can I do that?
To elaborate, I defined a function returning a polynomial as follows:
def poly(coefs):
    @rename(coefs)
    def function(x):
        y, i = 0, 0
        while i < len(coefs):
            y += coefs[i]*x**i
            i += 1
        return y
    return function
The decorator @rename(coefs) makes sure that when I type poly(coefs).__name__ in the interpreter, a string describing the associated polynomial is returned. Now suppose I define, for example, a_polynomial = poly((1,2,3)) and type a_polynomial in the interpreter; the following is returned:
<function poly.<locals>.function>
From which information does Python build this reply? Instead of such a reply, I want to receive:
<function 3*x**2 + 2*x + 1 at 0x7f05d5be7b28>.
The polynomial function is already returned when I type a_polynomial.__name__. The last element is the location where the function is stored.
In old Python versions changing the __name__ attribute is enough; however, recent Python versions introduced qualified names, and the value returned by __repr__ is based on the __qualname__ attribute.
In this case the decorator would look like this:
def rename(coefs):
    # just a simple sketch for the polynomial representation.
    coefs_and_exps = reversed(list(enumerate(coefs)))
    str_repr = ' + '.join('{} * x**{}'.format(coef, i) for i, coef in coefs_and_exps)
    def decorator(func):
        func.__name__ = func.__qualname__ = str_repr
        return func
    return decorator
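Applied to a simplified version of the question's poly (just a sketch; the loop body is condensed to a sum), the repr then picks up the new __qualname__:
def poly(coefs):
    @rename(coefs)
    def function(x):
        return sum(coef * x**i for i, coef in enumerate(coefs))
    return function

a_polynomial = poly((1, 2, 3))
a_polynomial  # <function 3 * x**2 + 2 * x**1 + 1 * x**0 at 0x...>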
The best way to achieve what you want is to use a class instead of a function that returns a function:
class Poly:
    def __init__(self, coefs):
        self.coefs = coefs
    def __str__(self):
        coefs_and_pows = reversed(list(enumerate(self.coefs)))
        return ' + '.join('{}*x**{}'.format(coef, i) for i, coef in coefs_and_pows)
    def __repr__(self):
        return '<function {} at 0x{:x}>'.format(self, id(self))
    def __call__(self, x):
        return sum(coef * x**i for i, coef in enumerate(self.coefs))
And you can use the class in the same manner as you used your poly function:
In [23]: p = Poly((1, 2, 3))
In [24]: p
Out[24]: <function 3*x**2 + 2*x**1 + 1*x**0 at 0x7f87c0478588>
In [25]: p(0)
Out[25]: 1
In [26]: p(3)
Out[26]: 34
In Python 3.3, PEP 3155 was implemented. This gives classes and functions a new __qualname__ attribute, and makes their __str__ and __repr__ use this instead of __name__. What you are seeing is the value of this __qualname__ attribute. If you make your rename decorator set __qualname__ as well, you'll get the result you want.

Assign function arguments to `self`

I've noticed that a common pattern I use is to assign SomeClass.__init__() arguments to self attributes of the same name. Example:
class SomeClass():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
In fact it must be a common task for others as well, as PyDev has a shortcut for this: if you place the cursor on the parameter list and press Ctrl+1, you're given the option Assign parameters to attributes, which will create that boilerplate code for you.
Is there a different, short and elegant way to perform this assignment?
You could do this, which has the virtue of simplicity:
>>> class C(object):
...     def __init__(self, **kwargs):
...         self.__dict__ = dict(kwargs)
This leaves it up to whatever code creates an instance of C to decide what the instance's attributes will be after construction, e.g.:
>>> c = C(a='a', b='b', c='c')
>>> c.a, c.b, c.c
('a', 'b', 'c')
If you want all C objects to have a, b, and c attributes, this approach won't be useful.
(BTW, this pattern comes from Guido his own bad self, as a general solution to the problem of defining enums in Python. Create a class like the above called Enum, and then you can write code like Colors = Enum(Red=0, Green=1, Blue=2), and henceforth use Colors.Red, Colors.Green, and Colors.Blue.)
It's a worthwhile exercise to figure out what kinds of problems you could have if you set self.__dict__ to kwargs instead of dict(kwargs).
I sympathize with your sense that boilerplate code is a bad thing. But in this case, I'm not sure there even could be a better alternative. Let's consider the possibilities.
If you're talking about just a few variables, then a series of self.x = x lines is easy to read. In fact, I think its explicitness makes that approach preferable from a readability standpoint. And while it might be a slight pain to type, that alone isn't quite enough to justify a new language construct that might obscure what's really going on. Certainly using vars(self).update() shenanigans would be more confusing than it's worth in this case.
On the other hand, if you're passing nine, ten, or more parameters to __init__, you probably need to refactor anyway. So this concern really only applies to cases that involve passing, say, 5-8 parameters. Now I can see how eight lines of self.x = x would be annoying both to type and to read; but I'm not sure that the 5-8 parameter case is common enough or troublesome enough to justify using a different method. So I think that, while the concern you're raising is a good one in principle, in practice, there are other limiting issues that make it irrelevant.
To make this point more concrete, let's consider a function that takes an object, a dict, and a list of names, and assigns values from the dict to names from the list. This ensures that you're still being explicit about which variables are being assigned to self. (I would never suggest a solution to this problem that didn't call for an explicit enumeration of the variables to be assigned; that would be a rare-earth bug magnet):
>>> def assign_attributes(obj, localdict, names):
...     for name in names:
...         setattr(obj, name, localdict[name])
...
>>> class SomeClass():
...     def __init__(self, a, b, c):
...         assign_attributes(self, vars(), ['a', 'b', 'c'])
Now, while not horribly unattractive, this is still harder to figure out than a straightforward series of self.x = x lines. And it's also longer and more trouble to type than one, two, and maybe even three or four such lines, depending on circumstances. So you only start to see a payoff around the five-parameter case. But that's also exactly the point where you begin to approach the limit of human short-term memory capacity (7 +/- 2 "chunks"). So in this case, your code is already a bit challenging to read, and this would only make it more challenging.
Mod for @pcperini's answer:
>>> class SomeClass():
...     def __init__(self, a, b=1, c=2):
...         for name, value in vars().items():
...             if name != 'self':
...                 setattr(self, name, value)
>>> s = SomeClass(7,8)
>>> print s.a,s.b,s.c
7 8 2
Your specific case could also be handled with a namedtuple:
>>> from collections import namedtuple
>>> SomeClass = namedtuple("SomeClass", "a b c")
>>> sc = SomeClass(1, "x", 200)
>>> print sc
SomeClass(a=1, b='x', c=200)
>>> print sc.a, sc.b, sc.c
1 x 200
Decorator magic!!
>>> class SomeClass():
...     @ArgsToSelf
...     def __init__(a, b=1, c=2, d=4, e=5):
...         pass
...
>>> s = SomeClass(6, b=7, d=8)
>>> print s.a, s.b, s.c, s.d, s.e
6 7 2 8 5
with the decorator defined as:
>>> import inspect
>>> def ArgsToSelf(f):
...     def act(self, *args, **kwargs):
...         arg_names, _, _, defaults = inspect.getargspec(f)
...         defaults = list(defaults)
...         for arg in args:
...             setattr(self, arg_names.pop(0), arg)
...         for arg_name, arg in kwargs.iteritems():
...             setattr(self, arg_name, arg)
...             defaults.pop(arg_names.index(arg_name))
...             arg_names.remove(arg_name)
...         for arg_name, arg in zip(arg_names, defaults):
...             setattr(self, arg_name, arg)
...         return f(*args, **kwargs)
...     return act
Of course you could define this decorator once and use it throughout your project. Also, this decorator works on any method, not only __init__.
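For what it's worth, here is a rough Python 3 sketch of the same idea (not the answer's code) using inspect.signature and a conventional __init__(self, ...) signature; args_to_self is a made-up name:
import functools
import inspect

def args_to_self(f):
    sig = inspect.signature(f)
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        bound = sig.bind(self, *args, **kwargs)
        bound.apply_defaults()                    # fill in the unsupplied defaults
        for name, value in bound.arguments.items():
            if name != 'self':
                setattr(self, name, value)
        return f(self, *args, **kwargs)
    return wrapper

class SomeClass:
    @args_to_self
    def __init__(self, a, b=1, c=2, d=4, e=5):
        pass

s = SomeClass(6, b=7, d=8)
print(s.a, s.b, s.c, s.d, s.e)  # 6 7 2 8 5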
You can do it via setattr(), like:
[setattr(self, key, value) for key, value in kwargs.items()]
It's not very beautiful, but it can save some space :)
So, you'll get:
kwargs = {'d': 1, 'e': 2, 'z': 3}

class P():
    def __init__(self, **kwargs):
        [setattr(self, key, value) for key, value in kwargs.items()]
x = P(**kwargs)
dir(x)
['__doc__', '__init__', '__module__', 'd', 'e', 'z']
For that simple use case I must say I like doing it explicitly (using Ctrl+1 from PyDev), but sometimes I also end up using a bunch implementation, with a class where the accepted attributes are created from attributes pre-declared in the class, so that I know what's expected. I like it more than a namedtuple as I find it more readable, and it won't confuse static code analysis or code completion.
I've put on a recipe for it at: http://code.activestate.com/recipes/577999-bunch-class-created-from-attributes-in-class/
The basic idea is that you declare your class as a subclass of Bunch and it'll create those attributes in the instance (either from default or from values passed in the constructor):
class Point(Bunch):
    x = 0
    y = 0

p0 = Point()
assert p0.x == 0
assert p0.y == 0

p1 = Point(x=10, y=20)
assert p1.x == 10
assert p1.y == 20
Alex Martelli also provided a bunch implementation: http://code.activestate.com/recipes/52308-the-simple-but-handy-collector-of-a-bunch-of-named/ with the idea of updating the instance from the arguments, but that will confuse static code analysis (and IMO can make things harder to follow), so I'd only use that approach for an instance that's created locally and thrown away inside that same scope without passing it anywhere else.
I solved it for myself using locals() and __dict__:
>>> class Test:
...     def __init__(self, a, b, c):
...         l = locals()
...         for key in l:
...             self.__dict__[key] = l[key]
...
>>> t = Test(1, 2, 3)
>>> t.a
1
>>>
Disclaimer
Do not use this: I was simply trying to create the answer closest to the OP's initial intentions. As pointed out in the comments, it relies on entirely undefined behaviour and on explicitly prohibited modifications of the symbol table.
It does work though, and has been tested under extremely basic circumstances.
Solution
class SomeClass():
    def __init__(self, a, b, c):
        vars(self).update(dict((k, v) for k, v in vars().iteritems() if k != 'self'))
sc = SomeClass(1, 2, 3)
# sc.a == 1
# sc.b == 2
# sc.c == 3
Using the vars() built-in function, this snippet iterates over all of the variables available in the __init__ method (which, at this point, should be just self, a, b, and c) and sets attributes of the same names on self, obviously ignoring the argument reference to self (because self.self seemed like a poor decision).
One of the problems with @user3638162's answer is that locals() contains the 'self' variable. Hence, you end up with an extra self.self. If one doesn't mind the extra self, that solution can simply be
class X:
    def __init__(self, a, b, c):
        self.__dict__.update(locals())
x = X(1, 2, 3)
print(x.a, x.__dict__)
The self can be removed after construction by del self.__dict__['self']
Alternatively, one can remove the self during construction by filtering locals() with a generator expression:
class X:
    def __init__(self, a, b, c):
        self.__dict__.update(l for l in locals().items() if l[0] != 'self')
x = X(1, 2, 3)
print(x.a, x.__dict__)

Python Get function parent attribute

I have a function in Python that returns an inner function:
def parent_func(func):
    def decorator(a, b):
        return a + b
    return decorator
To simplify, let's consider this code:
def in_func(a, b):
    return a * b

child = parent_func(in_func)
Does someone know a way to get the "func" attribute of parent_func from child?
The func argument only exists in the scope of the parent_func() function.
If you really need that value, you can expose it:
def parent_func(func):
    def decorator(a, b):
        return a + b
    decorator.original_function = func
    return decorator
Next question is, why would you want to do that?
What is the actual design problem behind this issue?
You can store it as an attribute on decorator before returning it.
>>> def parent_func(func):
...     def decorator(a, b):
...         return a + b
...     decorator.func = func
...     return decorator
...
>>> @parent_func
... def product(a, b):
... return a * b
...
>>> product.func
<function product at 0x000000000274BD48>
>>> product(1, 1)
2
You are slightly misusing decorators here. What is the point of writing a decorator which completely ignores the original function it is given?
Oh, I've also used the @foo decorator syntax, because it's cleaner. It's equivalent to what you have written, though.
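On Python 3, if the inner function is meant to actually call the original (as with an ordinary decorator), functools.wraps already records the original for you as __wrapped__, so you don't need a custom attribute at all. A minimal sketch (the +1 is just an arbitrary example of using the wrapped function):
import functools

def parent_func(func):
    @functools.wraps(func)        # copies metadata and sets decorator.__wrapped__ = func
    def decorator(a, b):
        return func(a, b) + 1     # actually use the wrapped function
    return decorator

@parent_func
def product(a, b):
    return a * b

print(product.__wrapped__)  # the original, undecorated product function
print(product(2, 3))        # 7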

Is it possible to change an instance's method implementation without changing all other instances of the same class? [duplicate]

This question already has answers here:
Override a method at instance level
(11 answers)
Closed 3 years ago.
I do not know Python very well (never used it before :D), but I can't seem to find anything online. Maybe I just didn't google the right question, but here I go:
I want to change an instance's implementation of a specific method. When I googled for it, I found that you could do it, but that it changes the implementation for all other instances of the same class, for example:
def showyImp(self):
    print self.y

class Foo:
    def __init__(self):
        self.x = "x = 25"
        self.y = "y = 4"
    def showx(self):
        print self.x
    def showy(self):
        print "y = woohoo"

class Bar:
    def __init__(self):
        Foo.showy = showyImp
        self.foo = Foo()
    def show(self):
        self.foo.showx()
        self.foo.showy()

if __name__ == '__main__':
    b = Bar()
    b.show()
    f = Foo()
    f.showx()
    f.showy()
This does not work as expected, because the output is the following:
x = 25
y = 4
x = 25
y = 4
And I want it to be:
x = 25
y = 4
x = 25
y = woohoo
I tried to change Bar's init method with this:
def __init__(self):
    self.foo = Foo()
    self.foo.showy = showyImp
But I get the following error message:
showyImp() takes exactly 1 argument (0 given)
So yeah... I tried using setattr(), but it seems like it's the same as self.foo.showy = showyImp.
Any clue? :)
Since Python 2.6, you should use the types module's MethodType class:
from types import MethodType

class A(object):
    def m(self):
        print 'aaa'

a = A()

def new_m(self):
    print 'bbb'

a.m = MethodType(new_m, a)
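A quick check (continuing the snippet above) that only that one instance is affected:
a.m()    # prints 'bbb' -- only this instance was changed
A().m()  # prints 'aaa' -- other instances keep the class's m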
As another answer pointed out, however, this will not work for 'magic' methods of new-style classes, such as __str__().
This answer is outdated; the answer below works with modern Python
Everything you wanted to know about Python Attributes and Methods.
Yes, this is an indirect answer, but it demonstrates a number of techniques and explains some of the more intricate details and "magic".
For a "more direct" answer, consider python's new module. In particular, look at the instancemethod function which allows "binding" a method to an instance -- in this case, that would allow you to use "self" in the method.
import new

class Z(object):
    pass

z = Z()

def method(self):
    return self

z.q = new.instancemethod(method, z, None)
z is z.q()  # True
If you ever need to do it for a special method (which, for a new-style class -- which is what you should always be using and the only kind in Python 3 -- is looked up on the class, not the instance), you can just make a per-instance class, e.g....:
self.foo = Foo()
meths = {'__str__': lambda self: 'peekaboo!'}
self.foo.__class__ = type('yFoo', (Foo,), meths)
Edit: I've been asked to clarify the advantages of this approach wrt new.instancemethod...:
>>> class X(object):
...     def __str__(self): return 'baah'
...
>>> x=X()
>>> y=X()
>>> print x, y
baah baah
>>> x.__str__ = new.instancemethod(lambda self: 'boo!', x)
>>> print x, y
baah baah
As you can see, the new.instancemethod is totally useless in this case. OTOH...:
>>> x.__class__=type('X',(X,),{'__str__':lambda self:'boo!'})
>>> print x, y
boo! baah
...assigning a new class works great for this case and every other. BTW, as I hope is clear, once you've done this to a given instance you can then later add more methods and other class attributes to its x.__class__ and intrinsically affect only that one instance!
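For instance (continuing the session above; the shout name is just made up):
>>> x.__class__.shout = lambda self: 'only x got this'
>>> x.shout()
'only x got this'
>>> y.shout()
Traceback (most recent call last):
  ...
AttributeError: 'X' object has no attribute 'shout'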
If you're binding to the instance, you shouldn't include the self argument:
>>> class Foo(object):
... pass
...
>>> def donothing():
... pass
...
>>> f = Foo()
>>> f.x = donothing
>>> f.x()
>>>
You do need the self argument if you're binding to a class though:
>>> def class_donothing(self):
... pass
...
>>> Foo.y = class_donothing
>>> f.y()
>>>
Your example is kind of twisted and complex, and I don't quite see what it has to do with your question. Feel free to clarify if you like.
However, it's pretty easy to do what you're looking to do, assuming I'm reading your question right.
class Foo(object):
    def bar(self):
        print('bar')

def baz():
    print('baz')
In an interpreter ...
>>> f = Foo()
>>> f.bar()
bar
>>> f.bar = baz
>>> f.bar()
baz
>>> g = Foo()
>>> g.bar()
bar
>>> f.bar()
baz
Do Not Do This.
Changing one instance's methods is just wrong.
Here are the rules of OO Design.
Avoid Magic.
If you can't use inheritance, use delegation.
That means that every time you think you need something magic, you should have been writing a "wrapper" or Facade around the object to add the features you want.
Just write a wrapper.
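Something along these lines (a hypothetical sketch for the question's Foo, not code from the answer): the wrapper replaces showy for the one object it wraps and delegates everything else to it.
class ShowyFoo(object):
    """Hypothetical wrapper around one Foo: replaces showy, delegates the rest."""
    def __init__(self, foo):
        self._foo = foo
    def showy(self):
        print(self._foo.y)               # the behaviour showyImp provided
    def __getattr__(self, name):
        return getattr(self._foo, name)  # everything else goes to the wrapped instance

wrapped = ShowyFoo(Foo())
wrapped.showx()  # x = 25
wrapped.showy()  # y = 4
Foo().showy()    # y = woohoo -- other Foo instances are untouched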
