I've never fully understood exception handling in Python (or any language to be honest). I was experimenting with custom exceptions, and found the following behaviour.
class MyError(Exception):
    def __init__(self, anything):
        pass
me = MyError("iiiiii")
print(me)
Output:
iiiiii
I assume that print() calls Exception.__str__().
How does the base class Exception know to print iiiiii? The string "iiiiii" was passed to the constructor of MyError via the argument anything, but anything isn't stored anywhere in MyError at all!
Furthermore, the constructor of MyError does not call its superclass's (Exception's) constructor. So, how did print(me) print iiiiii?
In Python 3, the BaseException class has a __new__ method that stores the arguments in self.args:
>>> me.args
('iiiiii',)
You didn't override the __new__ method, only __init__. You'd need to override both to completely prevent self.args from being set, as both implementations happily set that attribute:
>>> class MyError(Exception):
...     def __new__(cls, *args, **kw):
...         return super().__new__(cls)  # ignoring args and kwargs!
...     def __init__(self, *args, **kw):
...         super().__init__()  # again ignoring args and kwargs
...
>>> me = MyError("iiiiii")
>>> me
MyError()
>>> print(me)

>>> me.args
()
In Python 2, exceptions do not implement __new__ and your sample would not print anything. See issue #1692335 as to why the __new__ method was added; basically to avoid issues like yours where the __init__ method does not also call super().__init__().
Note that __init__ is not a constructor; the instance is already constructed by that time, by __new__. __init__ is merely the initialiser.
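To make that distinction concrete, here is a minimal sketch (the class name is mine, assuming Python 3) showing the two phases: __new__ builds the instance, and __init__ merely initialises the instance it receives:
class Demo:
    def __new__(cls, *args):
        inst = super().__new__(cls)  # the instance is constructed here
        print('__new__ constructed a', type(inst).__name__)
        return inst

    def __init__(self, *args):
        # self is the already-constructed instance returned by __new__
        print('__init__ initialising a', type(self).__name__)

Demo('x')
# __new__ constructed a Demo
# __init__ initialising a Demo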
Related
I am trying to patch the __new__ method of a class, and it is not working as I expect.
from contextlib import contextmanager

class A:
    def __init__(self, arg):
        print('A init', arg)

@contextmanager
def patch_a():
    new = A.__new__

    def fake_new(cls, *args, **kwargs):
        print('call fake_new')
        # here I get the error:
        # TypeError: object.__new__() takes exactly one argument (the type to instantiate)
        return new(cls, *args, **kwargs)

    A.__new__ = fake_new
    try:
        yield
    finally:
        A.__new__ = new

if __name__ == '__main__':
    A('foo')
    with patch_a():
        A('bar')
    A('baz')
I expect the following output:
A init foo
call fake_new
A init bar
A init baz
But after call fake_new I get an error (see comment in the code).
To me it seems like I just decorate the __new__ method and propagate all args unchanged.
It doesn't work, and the reason is obscure to me.
Also, if I write return new(cls), the call A('bar') works fine. But then A('baz') breaks.
Can someone explain what is going on?
Python version is 3.8
You've run into a complicated part of Python object instantiation - in which the language opted for a design that would allow one to create a custom __init__ method with parameters, without having to touch __new__.
However, in the base of the class hierarchy, object, both __new__ and __init__ take a single parameter each.
IIRC, it goes this way: if your class has a custom __init__, you did not touch __new__, and there are extra parameters in the class instantiation that would be passed to both __init__ and __new__, then those parameters are stripped from the call to __new__, so you don't have to customize it just to swallow the parameters you consume in __init__. The converse is also true: if your class has a custom __new__ with extra parameters and no custom __init__, these are not passed on to object.__init__.
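That rule is easy to check with a minimal sketch (the class names are mine, assuming CPython 3); both calls succeed even though object's counterpart method would reject the extra argument:
class OnlyInit:
    def __init__(self, arg):  # custom __init__, untouched __new__
        self.arg = arg

class OnlyNew:
    def __new__(cls, arg):    # custom __new__, untouched __init__
        return super().__new__(cls)

OnlyInit('ok')  # works: the extra arg is stripped from the call to object.__new__
OnlyNew('ok')   # works: the extra arg is not passed on to object.__init__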
With your design, Python sees a custom __new__ and passes it the same extra arguments that are passed to __init__; by using *args, **kw you forward those to object.__new__, which accepts a single parameter, and you get the error you presented.
The fix is not to pass those extra parameters to the original __new__ method, unless they are needed there, so you have to make the same check Python's type does when instantiating an object.
And an interesting surprise to top it off: while making the example work, I found out that even if A.__new__ is deleted when restoring the patch, the class is still considered "touched" by CPython's type instantiation, and the arguments are passed through.
In order to get your code working I needed to leave a permanent stub A.__new__ that forwards only the cls argument:
from contextlib import contextmanager

class A:
    def __init__(self, arg):
        print('A init', arg)

@contextmanager
def patch_a():
    new = A.__new__

    def fake_new(cls, *args, **kwargs):
        print('call fake_new')
        if new is object.__new__:
            # object.__new__ accepts only the class
            return new(cls)
        return new(cls, *args, **kwargs)

    A.__new__ = fake_new
    try:
        yield
    finally:
        del A.__new__
        if new is not object.__new__:
            A.__new__ = new
        else:
            # permanent stub: forward only the cls argument
            A.__new__ = lambda cls, *args, **kw: object.__new__(cls)
        print(A.__new__)

if __name__ == '__main__':
    A('foo')
    with patch_a():
        A('bar')
    A('baz')
(I tried inspecting the original __new__ signature instead of the new is object.__new__ comparison - to no avail: object.__new__ signature is *args, **kwargs - possibly made so that it will never fail on static checking)
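A quick check of that claim (assuming CPython):
import inspect

print(inspect.signature(object.__new__))  # prints: (*args, **kwargs)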
I'm trying to create a @synchronized wrapper that creates one Lock per object and makes method calls thread-safe. I can only do this if I can access method.im_self of the method in the wrapped method.
import inspect

class B:
    def f(self): pass

assert inspect.ismethod( B.f )    # OK
assert inspect.ismethod( B().f )  # OK

print B.f    # <unbound method B.f>
print B().f  # <bound method B.f of <__main__.B instance at 0x7fa2055e67e8>>

def synchronized(func):
    # func is not bound or unbound!
    print func  # <function f at 0x7fa20561b9b0> !!!!
    assert inspect.ismethod(func)  # FAIL
    # ... allocate one lock per C instance
    return func

class C:
    @synchronized
    def f(self): pass
(1) What's confusing is that the func parameter passed to my decorator changes type before it gets passed into the wrapper-generator. This seems rude and unnecessary. Why does this happen?
(2) Is there some decorator magic by which I can make method calls to an object mutex-ed (i.e. one lock per object, not per class).
UPDATE: There are many examples of @synchronized(lock) wrappers. However, really what I want is @synchronized(self). I can solve it like this:
def synchronizedMethod(func):
    def _synchronized(*args, **kw):
        self = args[0]
        lock = oneLockPerObject(self)
        with lock:
            return func(*args, **kw)
    return _synchronized
However, because it's much more efficient, I'd prefer:
def synchronizedMethod(func):
    lock = oneLockPerObject(func.im_self)
    def _synchronized(*args, **kw):
        with lock:
            return func(*args, **kw)
    return _synchronized
Is this possible?
Go read:
https://github.com/GrahamDumpleton/wrapt/tree/develop/blog
and in particular:
https://github.com/GrahamDumpleton/wrapt/blob/develop/blog/07-the-missing-synchronized-decorator.md
https://github.com/GrahamDumpleton/wrapt/blob/develop/blog/08-the-synchronized-decorator-as-context-manager.md
The wrapt module then contains the @synchronized decorator described there.
https://pypi.python.org/pypi/wrapt
The full implementation is flexible enough to do:
@synchronized # lock bound to function1
def function1():
    pass

@synchronized # lock bound to function2
def function2():
    pass

@synchronized # lock bound to Class
class Class(object):

    @synchronized # lock bound to instance of Class
    def function_im(self):
        pass

    @synchronized # lock bound to Class
    @classmethod
    def function_cm(cls):
        pass

    @synchronized # lock bound to function_sm
    @staticmethod
    def function_sm():
        pass
Along with context manager like usage as well:
class Object(object):

    @synchronized
    def function_im_1(self):
        pass

    def function_im_2(self):
        with synchronized(self):
            pass
Further information and examples can also be found in:
http://wrapt.readthedocs.org/en/latest/examples.html
There is also a conference talk you can watch on how this is implemented at:
https://www.youtube.com/watch?v=EB6AH-85zfY&t=1s
You can't get self at decoration time because the decorator is applied at function definition time. No self exists yet; in fact, the class doesn't exist yet.
If you're willing to store your lock on the instance (which is arguably where a per-instance value should go) then this might do ya:
def synchronized_method(func):
    def _synchronized(self, *args, **kw):
        if not hasattr(self, "_lock"):
            self._lock = oneLockPerObject(self)
        with self._lock:
            return func(self, *args, **kw)
    return _synchronized
You could also generate the lock in your __init__() method on a base class of some sort, and store it on the instance in the same way. That simplifies your decorator because you don't have to check for the existence of the self._lock attribute.
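A sketch of that base-class variant, substituting a plain threading.RLock for the hypothetical oneLockPerObject helper (class names are mine):
import threading

class Synchronized(object):
    def __init__(self):
        self._lock = threading.RLock()  # one lock per instance

def synchronized_method(func):
    def _synchronized(self, *args, **kw):
        with self._lock:  # the base class guarantees the lock exists
            return func(self, *args, **kw)
    return _synchronized

class C(Synchronized):
    def __init__(self):
        super(C, self).__init__()  # creates self._lock

    @synchronized_method
    def f(self):
        pass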
(1) What's confusing is that the func parameter passed to my decorator changes type before it gets passed into the wrapper-generator. This seems rude and unnecessary. Why does this happen?
It doesn't! Rather, function objects (and other descriptors) produce their __get__'s results when that method of theirs is called -- and that result is the method object!
But what lives in the class's __dict__ is always the descriptor -- specifically, the function object! Check it out...:
>>> class X(object):
...     def x(self): pass
...
>>> X.__dict__['x']
<function x at 0x10fe04e60>
>>> type(X.__dict__['x'])
<type 'function'>
See? No method objects around anywhere at all!-)
Therefore, no im_self around either, at decoration time -- and you'll need to go with your introspection-based alternative idea.
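You can watch the descriptor protocol manufacture the method object directly, continuing the session above (the memory address is illustrative):
>>> X.__dict__['x'].__get__(X(), X)
<bound method X.x of <__main__.X object at 0x10fe0a090>>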
I'm trying to use a class decorator to achieve a singleton pattern as below:
python3.6+
def single_class(cls):
    cls._instance = None
    origin_new = cls.__new__

    # @staticmethod
    # why is the staticmethod decorator not needed here?
    def new_(cls, *args, **kwargs):
        if cls._instance:
            return cls._instance
        cls._instance = cv = origin_new(cls)
        return cv

    cls.__new__ = new_
    return cls
@single_class
class A():
    ...

a = A()
b = A()
print(a is b)  # True
The singleton pattern seems to be working well, but I'm wondering why @staticmethod is not needed above the function new_ in my code, as I know that cls.__new__ is a static method.
class object:
    """ The most base type """
    ...

    @staticmethod # known case of __new__
    def __new__(cls, *more): # known special case of object.__new__
        """ Create and return a new object. See help(type) for accurate signature. """
        pass
    ...
Update test with python2.7+
The @staticmethod seems to be needed in py2 and not needed in py3
def single_class(cls):
    cls._instance = None
    origin_new = cls.__new__

    # @staticmethod
    # without @staticmethod there will be a TypeError,
    # and it works fine once @staticmethod is added
    def new_(cls, *args, **kwargs):
        if cls._instance:
            return cls._instance
        cls._instance = cv = origin_new(cls)
        return cv

    cls.__new__ = new_
    return cls

@single_class
class A(object):
    pass

a = A()
b = A()
print(a is b)
# TypeError: unbound method new_() must be called with A instance as the first argument (got type instance instead)
__new__ explicitly takes the class (not an instance) as its first argument. As mentioned in other answers, __new__ is a special case, and a possible reason for it to be a staticmethod is to allow the creation of instances of other classes using __new__:
super(CurrentClass, cls).__new__(otherCls, *args, **kwargs)
The reason why your code works without the @staticmethod decorator in Python 3 but doesn't work in Python 2 is the difference in how Python 2 and Python 3 handle access to a class's methods.
There are no unbound methods in Python 3. When you access a method on the class in Python 3 you get a plain function, whereas in Python 2 you get an unbound method. You can see this if you do:
# Python 2
>>> A.__new__
<unbound method A.new_>
# Python 3
>>> A.__new__
<function __main__.single_class.<locals>.new_(cls, *args, **kwargs)>
In Python 2, the instantiation A() amounts to calling A.__new__(A), but since __new__ is an unbound method you can't call it with the class itself. You need a class instance, but to get one you need to create your class first (catch-22), and that's why staticmethod is needed. The error message says the same thing: unbound method new_() must be called with A instance as first argument.
Whereas in Python 3, __new__ is treated as a plain function, so you can call it with the class A itself: A.__new__(A) works.
From the docs:
Called to create a new instance of class cls. __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument.
(My emphasis.)
You might also want to have a look at this SO answer.
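The special-casing is visible in the class dict: when __new__ is defined in a class body, type stores it wrapped in a staticmethod. A quick check (assuming Python 3):
>>> class C:
...     def __new__(cls):
...         return super().__new__(cls)
...
>>> type(C.__dict__['__new__'])
<class 'staticmethod'>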
It's not needed for this special method because that's the official spec (docs.python.org/3/reference/datamodel.html#object.__new__). Quite simply.
EDIT:
The #staticmethod seems to be needed in py2
It's not:
bruno#bruno:~$ python2
Python 2.7.17 (default, Nov 7 2019, 10:07:09)
[GCC 7.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
pythonrc start
pythonrc done
>>> class Foo(object):
...     def __new__(cls, *args, **kw):
...         print("hello %s" % cls)
...         return object.__new__(cls, *args, **kw)
...
>>> f = Foo()
hello <class '__main__.Foo'>
but your example is quite a corner case, since you're rebinding this method after the class has been created, and then it does indeed stop working in py2:
>>> class Bar(object):
...     pass
...
>>> def new(cls, *args, **kw):
...     print("yadda %s" % cls)
...     return object.__new__(cls, *args, **kw)
...
>>> Bar.__new__ = new
>>> Bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method new() must be called with Bar instance as first argument (got type instance instead)
I assume that in py2, __new__ (if present) is special-cased by the metaclass constructor (type.__new__ / type.__init__) which wraps it in a staticmethod:
>>> Foo.__dict__["__new__"]
<staticmethod object at 0x7fe11e15af50>
>>> Bar.__dict__["__new__"]
<function new at 0x7fe11e12b950>
There have been a couple of changes in the object model between py2 and py3 which probably explain the different behaviour here; one might be able to find the exact info somewhere in the release notes.
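For comparison, the same late rebinding works in Python 3, where there are no unbound methods and a plain function assigned to __new__ is used as-is (a minimal sketch; the object address is illustrative):
>>> class Bar(object):
...     pass
...
>>> def new(cls, *args, **kw):
...     print("yadda %s" % cls)
...     return object.__new__(cls)
...
>>> Bar.__new__ = new
>>> Bar()
yadda <class '__main__.Bar'>
<__main__.Bar object at 0x7f2b1c2d3e80>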
Recently, I faced a problem which was similar to this question:
Accessing the class that owns a decorated method from the decorator
My rep was not high enough to comment there, so I am starting a new question to address some improvements to the answer to that problem.
This is what I needed:
def original_decorator(func):
    # need to access class here
    # e.g. to append the func itself to class variable "a", to register func
    # or, say, append func's default arg values to class variable "a"
    return func

class A(object):
    a = []

    @classmethod
    @original_decorator
    def some_method(self, a=5):
        ''' hello'''
        print "Calling some_method"

    @original_decorator
    def some_method_2(self):
        ''' hello again'''
        print "Calling some_method_2"
The solution would need to work both with class methods and instance methods; the method returned from the decorator should work and behave just the same way as if it were undecorated, i.e. the method signature should be preserved.
The accepted answer for that question returned a Class from the decorator and the metaclass identified that specific Class, and did the "class-accessing" operations.
The answer did mention itself as a rough solution, but clearly it had a few caveats:
The decorator returned a class, and it was not callable. Obviously, it can be made callable easily, but the returned value is still a class: it just behaves the same way when called, but its properties and behaviors would be different. Essentially, it would not work the same way as the undecorated method.
It forced the decorator to return a custom-type class, and all the "class-accessing" code was put inside the metaclass directly. That is simply not nice; writing the decorator should not require touching the metaclass directly.
I have tried to come up with a better solution, documented in the answer.
Here is the solution.
It uses a decorator (which would work on "class-accessing" decorators) and a metaclass, which would fulfill all my requirements and address the problems of that answer. Probably the best advantage is that the "class-accessing" decorators can just access the class, without even touching the metaclass.
# Using metaclass and decorator to allow class access during class creation time
# No method defined within the class should have "__process_meta" as an arg
# Potential problems: Using closures, function.func_globals is read-only

from functools import partial
import inspect

class meta(type):
    def __new__(cls, name, base, clsdict):
        temp_cls = type.__new__(cls, name, base, clsdict)
        methods = inspect.getmembers(temp_cls, inspect.ismethod)
        for (method_name, method_obj) in methods:
            tmp_spec = inspect.getargspec(method_obj)
            if "__process_meta" in tmp_spec.args:
                what_to_do, main_func = tmp_spec.defaults[:-1]
                f = method_obj.im_func
                f.func_code, f.func_defaults, f.func_dict, f.func_doc, f.func_name = main_func.func_code, main_func.func_defaults, main_func.func_dict, main_func.func_doc, main_func.func_name
                mod_func = what_to_do(temp_cls, f)
                f.func_code, f.func_defaults, f.func_dict, f.func_doc, f.func_name = mod_func.func_code, mod_func.func_defaults, mod_func.func_dict, mod_func.func_doc, mod_func.func_name
        return temp_cls

def do_it(what_to_do, main_func=None):
    if main_func is None:
        return partial(do_it, what_to_do)
    def whatever(what_to_do=what_to_do, main_func=main_func, __process_meta=True):
        pass
    return whatever

def original_classmethod_decorator(cls, func):
    # cls => class of the method
    # appends default arg values to class variable "a"
    func_defaults = inspect.getargspec(func).defaults
    cls.a.append(func_defaults)
    func.__doc__ = "This is a class method"
    print "Calling original classmethod decorator"
    return func

def original_method_decorator(cls, func):
    func_defaults = inspect.getargspec(func).defaults
    cls.a.append(func_defaults)
    func.__doc__ = "This is an instance method"  # Can change func properties
    print "Calling original method decorator"
    return func

class A(object):
    __metaclass__ = meta
    a = []

    @classmethod
    @do_it(original_classmethod_decorator)
    def some_method(cls, x=1):
        ''' hello'''
        print "Calling original class method"

    @do_it(original_method_decorator)
    def some_method_2(self, y=2):
        ''' hello again'''
        print "Calling original method"

# signature preserved
print(inspect.getargspec(A.some_method))
print(inspect.getargspec(A.some_method_2))
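For reference, on Python 2.7 the two getargspec calls at the end should report the preserved signatures, roughly:
ArgSpec(args=['cls', 'x'], varargs=None, keywords=None, defaults=(1,))
ArgSpec(args=['self', 'y'], varargs=None, keywords=None, defaults=(2,))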
Open to suggestions on whether this approach has any caveats.
Can anyone explain why the following code doesn't work? I'm trying to make a class decorator that provides new __repr__ and __init__ methods, and if I decorate a class with it, only the repr method seems to get defined. I managed to fix the original problem by making the decorator modify the original class destructively instead of creating a new class (e.g. it defines the new methods and then just uses cl.__init__ = __init__ to overwrite them). Now I'm just curious why the subclassing-based attempt didn't work.
import functools

def higherorderclass(cl):
    @functools.wraps(cl)
    class wrapped(cl):
        def __init__(self, *args, **kwds):
            print 'in wrapped init'
            super(wrapped, self).__init__(*args, **kwds)

        def __repr__(self):
            return 'in wrapped repr'

    return wrapped
The first problem is that you're using old-style classes. (That is, classes that don't inherit from object, another built-in type, or another new-style class.) Special method lookup works differently in old-style classes. Really, you don't want to learn how it works; just use new-style classes instead.
But then you run into the next problem: functools.wraps doesn't work on classes in the first place. With new-style classes, you will get some kind of AttributeError; with old-style classes, things just silently fail in various ways. And you can't just use update_wrapper explicitly either. The problem is that you're trying to replace attributes of the class that aren't writeable, and there's no (direct) way around that.
If you use new-style classes, and don't try to wrap them, everything works fine.
Remove the @functools.wraps() decorator; it only applies to function decorators. With a new-style class your decorator fails with:
>>> @higherorderclass
... class Foo(object):
...     def __init__(self):
...         print 'in foo init'
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 3, in higherorderclass
  File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.py", line 33, in update_wrapper
    setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: attribute '__doc__' of 'type' objects is not writable
Without the @functools.wraps() line your decorator works just fine:
>>> def higherorderclass(cl):
...     class wrapped(cl):
...         def __init__(self, *args, **kwds):
...             print 'in wrapped init'
...             super(wrapped, self).__init__(*args, **kwds)
...         def __repr__(self):
...             return 'in wrapped repr'
...     return wrapped
...
>>> @higherorderclass
... class Foo(object):
...     def __init__(self):
...         print 'in foo init'
...
>>> Foo()
in wrapped init
in foo init
in wrapped repr